How does Clarity™ change what your car can see?

By Light Staff
April 29, 2021

Imagine that the vehicle you are driving can see everything you see.

Imagine that the same vehicle, using a single sensing solution, could spot everything from a small leaf just in front of the vehicle to people, cars, and any other structure even hundreds of meters away. All at the same time. Multiple times a second. 

With that information your vehicle could comprehensively perceive what was happening all around it and do more than just slam on the brakes when something goes wrong. What if it could actually sense enough of the world every fraction of a second, both near and far, to plan around danger in the first place? That is the future Light imagines, and it is why Light built Clarity™.

Carnegie Mellon University's entry at the 2004 DARPA Grand Challenge

Bridging the gap from promising tech to reality

The 2004 DARPA Grand Challenge kicked off a flood of innovation in self-driving. It was a key moment that started many on a long path toward systems that let vehicles drive with little, and eventually no, human assistance.

Clarity bridges a major gap between currently available technologies and the goal of full self-driving. Clarity is camera-based, multi-view depth perception. Instead of relying on active scanning to create a representation of the world, or relying solely on machine learning to estimate distances, it sees the way humans see: using two or more cameras to create a highly detailed 3D model of the world.
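Clarity's own algorithms are proprietary, but the principle behind any two-camera depth system can be sketched with the textbook stereo relationship: an object's apparent horizontal shift between the two views (its disparity) maps directly to its distance. The focal length and camera separation below are hypothetical values, not Clarity specifications.

```python
# Illustrative only -- not Light's implementation. Shows the basic stereo
# triangulation rule that multi-camera depth perception builds on:
#   depth = focal_length * baseline / disparity

def depth_from_disparity(disparity_px, focal_length_px=1400.0, baseline_m=0.3):
    """Triangulated depth in meters from pixel disparity.

    focal_length_px and baseline_m are assumed example camera parameters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# A nearby object shifts many pixels between the two views; a distant one
# shifts only a fraction of a pixel.
near = depth_from_disparity(210.0)   # large shift -> 2 m
far = depth_from_disparity(0.42)     # sub-pixel shift -> ~1000 m
```

Note how quickly disparity shrinks with distance: resolving objects hundreds of meters away requires measuring sub-pixel shifts, which is why calibration quality matters so much in this approach.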

Existing sensing solutions are not enough for tomorrow's automotive requirements 

Clarity not only enables a new level of scene understanding that is necessary for vehicles to drive themselves, but it meaningfully advances current driver-assistance features to better protect today’s drivers and passengers. And it does so while also meeting automaker requirements of performance, cost, and safety for mass-market use. Clarity can see every 3D structure on the road from 10 centimeters to 1000 meters, no matter what it is. This makes driving safer and enables the next generation of transportation, with or without a driver.

The sensors used in current driver-assistance systems suffer from numerous shortcomings that limit a vehicle's understanding of its surroundings. Lidar sensors require active scanning, provide limited resolution, and still require sensor fusion with cameras. Radar sensors also require active scanning, struggle to detect small or static objects, and likewise offer limited resolution. Monocular cameras that rely on inference alone, or on motion-based techniques, have relatively low depth accuracy. With a useful range often under 250 meters, and sometimes far less, these existing approaches limit the safety that ADAS can provide, let alone self-driving systems.

Measured depth is the key to true scene understanding

Clarity provides measured depth and color information across the entire field of view for reliable, physics-based scene understanding without relying on sensor fusion.
Measured depth provides unparalleled accuracy and precision both near and far, allowing systems to know exactly where objects are, their size and scale, and even how quickly they are moving and in what direction. This enables improved detection, identification, and tracking of near-, mid-, and long-range objects, as well as estimation of object behavior, all of which are requirements for safer driving.
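To see why measured depth makes motion estimation straightforward, consider a sketch (not Light's implementation): once an object's 3D position is measured in successive frames, its speed and direction fall out of simple differencing, with no learned inference required. The frame rate below assumes the 30 Hz figure cited for Clarity; the positions are invented for illustration.

```python
# Illustrative sketch, not Light's implementation: velocity from two
# measured 3D positions taken dt seconds apart.
import math

def velocity(pos_prev, pos_curr, dt):
    """Per-axis velocity (m/s) between two measured 3D positions."""
    return tuple((c - p) / dt for p, c in zip(pos_prev, pos_curr))

# A vehicle measured 60 m ahead, then 59 m ahead one frame (1/30 s) later:
v = velocity((0.0, 0.0, 60.0), (0.0, 0.0, 59.0), dt=1 / 30)
speed = math.hypot(*v)   # ~30 m/s closing speed (~108 km/h)
```

Because both positions are measured rather than inferred, the resulting velocity inherits the sensor's accuracy directly, which is what makes reliable tracking and behavior prediction possible.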

Thanks to Clarity, vehicles can sense the 3D structure for everything in their field of view allowing them to better predict behavior and navigate their world safely.

Clarity is needed for the next generation of vehicle capability and safety.

The Clarity platform’s camera calibration algorithms, paired with Light’s dedicated Silicon IP, provide unprecedented scene understanding and depth quality in unforgiving, real-world conditions. Clarity’s novel calibration provides the flexibility to leverage proven, low-cost, high-volume automotive-grade cameras while still benefiting from constant industry improvements in camera components.

Light’s unique approach to calibration enables two or more cameras working in tandem to gather information about the operating environment from slightly different angles. Light’s dedicated Silicon IP processes an enormous amount of scene detail, quickly providing depth and color information to downstream systems for reliable decision making. Clarity creates a 3D map of the world in front of, next to, or behind the vehicle up to 30 times a second which enables vehicles to not just react, but to make safe proactive decisions. 

That level of understanding is exactly what is needed to make next generation vehicles safer and more capable, and what is needed to push us to having cars that truly drive themselves.

If you'd like to learn more about the Clarity platform, watch Light's webinar, "Light's Clarity™ Platform: A Breakthrough in Depth Perception for ADAS and Autonomy" (11.4.2020, 1:00pm ET).

Contact Us

If you are an automaker, supplier, or full-stack developer looking to enable the next generation of automobiles with greater safety and capability, we'd like to hear from you.

*At this time we are not offering participation to academic institutions.