
CLIENT: PATHPARTNER TECHNOLOGY

Oct. 22, 2021: IT Briefcase

Image Quality Challenges for Automotive Vision Applications

Introduction

With the increasing integration of digital cameras into automotive applications, their image quality is well studied and understood from a human point of view. Because these cameras now play a major role in user safety, image quality becomes ever more important. Although image quality is largely a subjective matter for many observers and consumers, significant work has gone into objectively measuring image quality for human-perception use cases. Consumer cameras such as mobile phone cameras, digital cameras, and action cameras are great examples of tuning image quality to please human viewers.

However, cameras are no longer limited to human visual consumption; they are also used in many analytics applications where computers consume the camera video and derive analytics that eventually drive decisions. Examples include video surveillance, automotive, and machine vision: industries where cameras are used for anomaly detection, understanding the surroundings, or spotting defects in machinery using advanced image processing, computer vision, or even machine learning approaches.

Automotive imaging is a special case in which different cameras serve different perception purposes, spanning both human vision and machine vision. Figure 1 shows different use cases of cameras in automotive applications. Automotive imaging systems need to work under challenging environmental conditions such as very high dynamic range scenes, low light/night vision, harsh weather, and fast motion. The image quality these cameras produce is crucial to the machine vision algorithms running on automotive embedded hardware that assist drivers, such as lane departure warning, vulnerable road user detection, and traffic light detection. For these systems, traditional image quality KPIs used for human vision may not be directly applicable; instead, improving the accuracy of computer vision algorithms becomes the key priority.

Figure 1: Camera use cases in Automotive ADAS

Image Signal Processing (ISP) Pipeline

ISPs are typically hardware-based image processing blocks that convert the Bayer raw data streamed out of an image sensor into good-looking or usable images/videos. The conversion process is complex and usually involves discrete image processing operations in a specific order, depending on the type of ISP. One such example architecture is shown in Figure 2. The hardware blocks expose configurable parameters, which make it possible to adapt their behavior to various environmental conditions and to the intended camera use cases.

Figure 2: Camera Pipeline
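The chain of discrete operations described above can be sketched in a few lines of Python. This is an illustrative toy, not a real ISP: the stage order is one plausible choice, demosaicing is omitted (the input is already an RGB triple), and the black-level offset, white-balance gains, and gamma value are invented placeholders rather than calibrated values.

```python
def black_level(rgb, offset=64):
    """Subtract the sensor's black-level pedestal (offset is a placeholder)."""
    return [max(c - offset, 0) for c in rgb]

def white_balance(rgb, gains=(1.8, 1.0, 1.5)):
    """Apply per-channel gains (gains here are invented for the sketch)."""
    return [min(int(c * g), 255) for c, g in zip(rgb, gains)]

def gamma(rgb, g=1 / 2.2, max_val=255):
    """Encode linear values with a display gamma curve."""
    return [round(max_val * (c / max_val) ** g) for c in rgb]

def isp(raw_pixel):
    """Run one pixel through the simplified stage chain, in order."""
    out = black_level(raw_pixel)
    out = white_balance(out)
    return gamma(out)

print(isp([120, 130, 125]))
```

In real hardware each of these stages is a fixed-function block, and the arguments shown here correspond to the configurable registers that tuning adjusts.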

Automotive ISP requirements: The ISPs used in ADAS/AV applications are much more complex than traditional consumer ISPs. They have to serve both viewing applications, such as surround view and camera monitor systems/e-mirrors, and computer vision applications, such as traffic light detection and vulnerable road user detection, which understand the surroundings in order to make or influence better decisions.

For viewing applications, the ISP has to produce images that are natural, pleasing, and intuitive to the driver, close to human viewing capability or sometimes beyond it. For computer vision applications, the images produced should enable the underlying vision algorithms to reliably detect, understand, and react to the surrounding environment.

The ISPs also have to support the various HDR schemes and formats offered by automotive image sensors. Wide-angle optics are common in applications like surround view, where a large FOV is of primary importance so that the views from all surrounding cameras overlap for seamless stitching, eventually giving the driver a bird's-eye view of the surroundings. Surround view applications also place higher data-rate and low-latency demands on the ISP. In addition, distortion correction functionality is required inside the ISP to undistort the wide-FOV video from all cameras in real time.
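The distortion correction mentioned above is commonly based on a polynomial radial model; a minimal sketch of one direction of that mapping follows. The coefficients k1 and k2 are illustrative values invented for the example; in practice they come from lens calibration, an ISP typically bakes the mapping into a remap table, and the inverse direction is usually solved iteratively.

```python
def remap_point(xd, yd, k1, k2, cx, cy):
    """Scale a pixel's offset from the optical center (cx, cy) by the
    radial polynomial (1 + k1*r^2 + k2*r^4), the standard form of the
    Brown-Conrady radial distortion terms."""
    x, y = xd - cx, yd - cy
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale

# A pixel at the optical center is unaffected; one 100 px to the right
# is pulled slightly inward by a (hypothetical) barrel coefficient:
print(remap_point(960, 540, -2e-7, 0, 960, 540))
print(remap_point(1060, 540, -2e-7, 0, 960, 540))
```

A hardware ISP evaluates such a mapping once per output pixel (via a lookup table plus interpolation), which is why real-time undistortion for several wide-FOV cameras is a significant throughput requirement.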

IQ for Automotive Vision

Image quality refers to the perception of how good a picture looks and is, in essence, largely a subjective matter. IQ is determined by camera performance under the shooting conditions at hand: scene content, illumination, optics, sensor parameters, the ISP, the display, etc. Measuring or assessing image quality is an important step for any camera system to justify its intended use case.

Automotive vision systems are safety critical. Imperfections in visual quality can impact the performance of vision-based ADAS. Image quality degradation typically originates in the optics, sensor, or processing pipeline, resulting in loss of resolution, increased noise, lower dynamic range, etc. Understanding and correcting the influence of such degradations on the quality of automotive systems is an important step toward developing robust driver assistance systems. In the case of automotive vision systems, IQ becomes a responsibility rather than a choice!

However, meeting the target IQ, or even evaluating IQ for automotive cameras, is not a straightforward task. The environment in which automotive cameras have to function poses challenges through scene-dependent factors like dynamic lighting and harsh weather (snow, rain, dust), in addition to scene-independent factors like image capture, compression, and transmission. Also, as discussed in the earlier section, automotive cameras serve two use cases: human perception, where videos are displayed to the driver, and ADAS, where machine vision algorithms consume the videos to detect or recognize objects of interest in the scene. The notion of good image quality for one use case does not necessarily carry over to the other. For example, image sensors with non-standard Bayer patterns such as RCCC/RCCB/RGGB/RGB-IR are becoming more and more common for vision algorithms, and traditional image metrics have to be tweaked to yield a meaningful interpretation when measuring their IQ.

Another challenge encountered in automotive vision systems is flicker, which arises from the LED light sources typically used in headlamps, traffic lights, and signs. These use pulse width modulation (PWM) to turn the LED on and off at a defined frequency for a defined fraction of each cycle. The human visual system does not perceive this, but a camera exhibits a "flicker" artifact when the image sensor integrates light energy over a short exposure time: LED pulses can be missed entirely by the camera, which is especially problematic for vision algorithms that rely on the image data to detect or recognize, for example, an LED light or sign.

It is important to define and align on the KPIs, and the associated pass criteria, for either scenario with the target use case in mind. This task is rarely straightforward for machine vision given the variation among algorithms, e.g., classical computer-vision-based versus deep-learning-based approaches. The KPI is not necessarily tied to captured image quality; it can instead be tied to the performance of the machine vision algorithm in terms of accuracy or functionality, which is easier to measure and define precisely than in the human vision case, with its inconvenient sense of aesthetics.
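A concrete example of such an algorithm-level KPI is detection accuracy expressed as precision, recall, and F1 over a labeled test set. The sketch below shows the standard formulas; the detector counts in the usage example are hypothetical numbers, not results from any real system.

```python
def detection_kpis(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts of a detector on a labeled test set."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical traffic-light detector results on a test set:
print(detection_kpis(tp=90, fp=10, fn=30))
```

Unlike a subjective viewing score, these numbers can be measured repeatably against the same test set before and after an IQ change, which is what makes them usable as pass criteria.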

IQ Standardization

Automotive vision systems have moved from comfort functions to safety functions that are critical to saving lives. It is very important to have agreement on the characterization of image quality across components like windshields, lenses, sensors, the ISP, etc. While the fundamental concepts of image quality are well defined and established in non-automotive camera domains through standards such as CPIQ and EMVA (which correlate with the human vision system), to date there has not been a consistent approach in the automotive industry to measuring image quality across the various components of the system.

Existing standards such as CPIQ and EMVA do not adequately address image quality for the varied and distinct landscape of automotive imaging conditions. Therefore, the IEEE P2020 working group has set the goal of shaping relevant metrics and key performance indicators (KPIs) for automotive image quality, enabling customers and suppliers to efficiently define, measure, and communicate the image quality of their imaging systems.

Several subgroups are working on defining the test processes and validating image quality, covering, among other topics, LED flicker standards, image quality for viewing, image quality for computer vision, and image quality safety. For more details about IEEE P2020, please visit the link to download the white paper, which provides an overview of the requirements of a test system for automotive cameras and a gap analysis against existing standards. One of the new approaches introduced is "Contrast Detection Probability (CDP)" for evaluating the effectiveness of computer vision algorithms. CDP indicates whether an automotive vision system can detect a specific contrast under test or not.
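The general idea behind CDP can be sketched as follows: measure the contrast between two patches over many samples, then report the fraction of measurements that land within a tolerance band around the target contrast. This is only an illustration of the concept; the exact definitions, contrast measures, and tolerance choices belong to the IEEE P2020 standard, and the sample intensities and tolerance below are invented.

```python
def michelson(i1, i2):
    """Michelson contrast between two patch intensities."""
    return abs(i1 - i2) / (i1 + i2)

def cdp(samples, target_contrast, tol=0.5):
    """Fraction of per-sample contrast measurements falling within
    a relative tolerance band around the target contrast (a sketch
    of the CDP idea, not the normative P2020 definition)."""
    lo = target_contrast * (1 - tol)
    hi = target_contrast * (1 + tol)
    hits = sum(1 for i1, i2 in samples if lo <= michelson(i1, i2) <= hi)
    return hits / len(samples)

# Hypothetical patch-pair intensities from three captures of the same target:
samples = [(200, 100), (180, 120), (150, 148)]
print(cdp(samples, target_contrast=1 / 3))
```

A low CDP at a given contrast level suggests that noise or dynamic-range loss has made that contrast unreliable for a downstream detector, regardless of how the image looks to a human.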

IQ tuning strategy

Image quality tuning is an iterative process aimed at the target KPIs defined for the intended camera use case. The process involves calibrating and tuning hundreds or sometimes thousands of sensor and ISP parameters to achieve the desired image quality, since the default values are rarely suited to the selected lens and sensor. Tuning is initially carried out in controlled lab conditions with calibrated charts, controlled lighting, etc. This is followed by subjective or fine tuning depending on the actual use case of the camera.

Most ADAS SoCs contain a single ISP that has to serve both human vision and machine vision purposes for some automotive applications. As the requirements differ between the two, a single ISP parameter configuration derived for human vision does not necessarily optimize computer vision performance. The camera firmware therefore plays a key role in dynamically reconfiguring the ISP at run time, loading separately optimized ISP parameter sets.
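One way such firmware can be structured is to hold one tuned parameter set per consumer and program the matching set into the ISP before each frame (or frame interleave). The sketch below is purely illustrative: the parameter names and values are invented, and a real driver would write hardware registers rather than return a dictionary.

```python
# Hypothetical per-use-case tuning sets; names and values are invented.
# A machine-vision set might, e.g., keep a linear response and disable
# sharpening that could distort features, while the human-vision set
# favors a pleasing, display-ready image.
ISP_PARAMS = {
    "human_vision":   {"gamma": 2.2, "denoise_strength": 8, "sharpening": 5},
    "machine_vision": {"gamma": 1.0, "denoise_strength": 2, "sharpening": 0},
}

def configure_isp(use_case):
    """Select the tuning set the firmware would program into the ISP
    before processing the next frame for the given consumer."""
    return ISP_PARAMS[use_case]

print(configure_isp("machine_vision"))
```

The run-time cost of the switch (register writes, possible frame delay) is part of why single-ISP SoCs must budget for per-frame reconfiguration when both consumers are active.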
