Multi-view depth perception

Changing the way machines understand the world around them.

Cameras “see” like humans
The human visual system has evolved to perceive a vast range of environments, from the tip of one's nose to, under the right conditions, candlelight nearly 3 kilometers away. Through stereopsis, the images from both eyes are combined into a coherent 3D representation of the world, a critical step in perception tasks like driving.

Like the human eye, a camera can capture significant scene information from photons alone. With two or more cameras pointed in the same direction, each camera captures the scene from a slightly different perspective. As with stereopsis in humans, machines can use parallax, the differences between these perspectives, to compute depth within the overlapping field of view of the camera images.
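The parallax-to-depth computation described above reduces to a single triangulation formula. As a minimal sketch, assuming rectified cameras and hypothetical numbers (not any particular hardware):

```python
# A minimal sketch of stereo triangulation, assuming rectified cameras and
# illustrative numbers (focal length, baseline, and disparity are hypothetical).
# From similar triangles: Z = f * B / d, where
#   f = focal length in pixels
#   B = baseline, the distance between the two cameras, in meters
#   d = disparity, the pixel shift of the same scene point between the images

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulate the depth (in meters) of a point from its disparity."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point is at infinity or outside the overlap")
    return focal_px * baseline_m / disparity_px

# Example: a 1000 px focal length, 12 cm baseline, and 10 px of disparity
# correspond to a depth of 1000 * 0.12 / 10 = 12 meters.
print(depth_from_disparity(10.0, 1000.0, 0.12))
```

Note the inverse relationship: disparity shrinks as distance grows, which is why stereo depth error increases at long range.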

Accurate, dense depth significantly improves subsequent processing such as object detection, tracking, classification, and segmentation (hallmarks of perception, whether human or machine), leading to superior scene understanding.
[Image: how our eyes interpret disparity. Source: Scientific Reports]
Beyond human vision

The human visual system provides an incredible, general-purpose solution to sensing and perception. But that does not mean it cannot be improved upon.

Humans have two eyes, on a single plane, with a relatively small distance separating each eye. Unlike humans, a camera array is not limited to only two apertures, a specific focal length or resolution, or even a single, horizontal baseline. By using more cameras, with wider and multiple baselines, a multi-view array can provide better scene depth than the unaided human eye. Cameras can also be easily modified for a given scenario, providing optimal depth accuracy and precision for a respective use case or application.
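The advantage of wider baselines can be quantified: differentiating the triangulation relation Z = f·B/d yields a depth error that shrinks linearly with baseline. A rough sketch with assumed numbers (focal length, baselines, and matching error are illustrative, not measured values):

```python
# Illustrative sketch of why wider baselines help, using assumed numbers
# (focal length, baselines, and matching error are hypothetical).
# Differentiating Z = f * B / d gives the standard stereo error model:
#   dZ ~= Z**2 * dd / (f * B)
# so at a fixed range, doubling the baseline halves the depth error.

def depth_error_m(range_m: float, focal_px: float, baseline_m: float,
                  disparity_err_px: float = 0.25) -> float:
    """Approximate depth uncertainty at a given range for sub-pixel matching error."""
    return range_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Compare a human-eye-like 6.5 cm baseline against wider camera-array baselines.
for baseline in (0.065, 0.12, 0.5, 1.0):
    err = depth_error_m(range_m=50.0, focal_px=2000.0, baseline_m=baseline)
    print(f"baseline {baseline:5.3f} m -> ~{err:5.2f} m depth error at 50 m")
```

The 6.5 cm entry approximates the human interpupillary distance; the wider baselines illustrate why a camera array can out-range unaided stereopsis.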

Cameras that see what is invisible to the human eye, such as Long-wave Infrared (LWIR) or Short-wave Infrared (SWIR), can also provide better-than-human capabilities. As with visible spectrum cameras, their non-visible counterparts can be used in a camera array with similar depth perception benefits.

There is an existing and broad ecosystem of camera component suppliers and integrators that are relentlessly improving performance, reliability, and cost across multiple industries. Their pace of innovation is another source of continuous improvements for multi-camera systems.

  • Allows for more than two apertures
  • Multiple, wider, and multi-axis baselines
  • Optimize for specific use cases or applications
  • Relentless industry improvements in camera hardware
  • Extensible to non-visible spectrum cameras
Multi-view perception is essential for solving scene understanding at scale
Perception is not the only hurdle to wide-scale deployment of autonomous vehicles, but it is a major factor. A greater understanding of the environment enables an end-to-end system to make better decisions faster and more consistently.
Active sensing modalities pose challenges
While the cost of active sensing modalities, such as lidar, has come down, it remains prohibitively high for many applications. Additionally, increased power consumption, range limitations, sparse returns, long-term operational reliability, and interference make them challenging to incorporate widely into next-generation ADAS and L3+ autonomy solutions. By re-envisioning how we use existing, commercially available passive optics, we can lessen or even eliminate the dependency on active sensing technologies.
Scenes must be measured for true understanding
Machine learning is an extremely powerful set of tools, and well-trained models are key to the advancement of autonomous systems. However, systems that rely solely on inferencing retain an element of non-determinism. For safety-critical and demanding applications, this uncertainty may be unacceptable. As a key enabler of improved perception, depth needs to be measured, accurate, and stable over time.
[Image: what lidar can detect around a car]

Lidar

A popular depth-sensing technology in many markets, lidar comes with several shortcomings, many of which are exacerbated at long range:
  • Limited resolution
  • Active scanning
  • Interference
  • Eye & equipment safety
  • Cost
  • Manufacturability
[Image: what radar can detect around a car]

Radar

Often touted as an all-weather solution, with conditions such as temperature and humidity having no effect, radar nonetheless has limited capabilities:
  • Lower horizontal & vertical resolution
  • Requires sensor fusion for object recognition & classification
  • Struggles with small or static object detection
  • Struggles with multiple object detection
  • Interference
  • Active scanning
[Image: what a monocular camera with inferencing can detect while driving]

Monocular camera with inferencing

Monocular vision systems often utilize AI inferencing to segment, detect, and track objects, as well as to estimate scene depth. But they are only as good as the data used to train the AI models, and they are often used in conjunction with other sensor modalities or HD maps to improve accuracy, precision, and reliability. Despite such efforts, "long tail" edge and corner cases remain unresolved for many commercial applications relying on monocular perception.


  • Limited depth precision & accuracy
  • High and unpredictable failure probability
A breakthrough in precise and accurate depth perception

Light’s multi-camera depth perception platform improves upon existing stereo vision systems by using additional cameras, novel calibration, and unique signal processing to provide unprecedented depth quality across the camera field of view. With temporally consistent, full field-of-view depth that is intrinsically unified with the reference camera’s image at a pixel level, perception engineers are unshackled from the existing constraints of depth range, frequency, and even errors attributed to sensor fusion.

[Radar chart: Light’s camera depth perception platform rated on precision, range, object details, accuracy, cost, and power efficiency]

  • Advanced signal processing for reliable, physics-based depth
  • Native fusion of image and depth across the entire camera field of view
  • Unparalleled accuracy and precision throughout the operating range
  • Enables improved detection, tracking, and velocity for near, mid, and long-range objects
  • High-quality, robust multi-view calibration
  • Able to detect horizontal and vertical edges
  • Low-power, self-contained processing by way of custom Light silicon
  • Leverages existing, high-volume, automotive-grade camera components
Any application that needs to understand a scene benefits from improved perception

Cutting-edge solutions that interpret the world around them increasingly rely on technologies like machine learning. Such scene understanding, and the inferencing involved, can be vastly improved by having accurate, robust depth at high operating frequency.
Heavy Equipment & Industrial

Mines, farms, and warehouses have a multitude of challenging edge cases. Measured depth is key to improving system efficacy in such dynamic environments.

ADAS and Self-driving

For the next generation of ADAS or self-driving cars and trucks, true scene understanding is crucial to efficiency and safety.

Smart Cities, Security, & Infrastructure

Urban environments are as complex as they come. Perception is fundamental to maintaining situational awareness, and object depth and velocity are key elements in such operational domains.

Robotics, drones, survey, mapping, …

Any machine that needs to understand its operating environment will benefit from improved perception.

Shipped real-world products and applications

Manufacturing and delivery of real hardware and software products, including custom silicon.

Years of multi-view experience

Building highly capable systems from industry-standard components.

Wide-ranging patented approaches

A very strong existing portfolio, with many new patents on the way.
Partner Program

Our Partner Program provides a limited number of participants early access to Light’s groundbreaking perception platform.

Limited spots open now!

Contact Us

If you are an automaker, supplier, or full-stack developer looking to enable the next generation of automobiles with greater safety and capability, we'd like to hear from you.

*At this time we are not offering participation to academic institutions.