How do you make a car drive autonomously? Let it watch and learn from real drivers. NVIDIA's open computing platform DRIVE PX 2 makes this possible. What makes the hardware platform special is its use of graphics processing units (GPUs), which speed up the training and execution of neural networks and deep learning applications compared to the use of central processing units (CPUs) alone. After only a short training period, the neural network-based algorithms recognize other road users and traffic signs in real time and support orientation even in complex traffic situations. The simulation software CarMaker can now be coupled with the NVIDIA hardware platform and used to train neural networks in virtual test driving. In addition, this coupling allows deep learning sensor fusion algorithms for advanced driver assistance systems and automated driving functions to be developed and tested virtually.
NVIDIA graphics processors are powerful enough to interpret sensor data in real time. Data from up to twelve cameras as well as lidar, radar and ultrasonic sensors can be analyzed, enabling a complete 360-degree representation of the vehicle's environment in real time. For the recognition and classification of objects, deep neural networks (DNNs) deliver highly accurate results from the fused sensor data, allowing the autonomous car to drive precisely and follow a safe route adapted to the current conditions.
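To make the idea of sensor fusion concrete, the following is a deliberately simplified sketch: detections from a camera and a radar are matched by position, and matched pairs are merged so that the radar contributes its more accurate range measurement while the camera contributes the object class. All names, thresholds and the fusion rule are illustrative assumptions, not part of any NVIDIA or IPG Automotive API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # longitudinal distance to the object (m)
    y: float           # lateral offset (m)
    confidence: float  # detector confidence in [0, 1]
    label: str         # object class, e.g. "car" or "pedestrian"

def fuse(camera: list, radar: list, max_dist: float = 2.0) -> list:
    """Greedy nearest-neighbour fusion of camera and radar detections."""
    fused = []
    unmatched_radar = list(radar)
    for cam in camera:
        # Find the closest remaining radar detection within the gating distance.
        best = min(unmatched_radar,
                   key=lambda r: (r.x - cam.x) ** 2 + (r.y - cam.y) ** 2,
                   default=None)
        if best and (best.x - cam.x) ** 2 + (best.y - cam.y) ** 2 <= max_dist ** 2:
            unmatched_radar.remove(best)
            # Radar gives better range; camera gives the classification.
            # Confidences are combined as independent evidence.
            fused.append(Detection(best.x, cam.y,
                                   1 - (1 - cam.confidence) * (1 - best.confidence),
                                   cam.label))
        else:
            fused.append(cam)          # camera-only detection
    fused.extend(unmatched_radar)      # radar-only detections
    return fused
```

A production system would track objects over time and weight each sensor by its measurement uncertainty; the sketch only shows why fusing two modalities yields more confident, better-localized objects than either sensor alone.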
Coupling the simulation solutions of the CarMaker product family with the NVIDIA hardware platforms gives the development departments of OEMs and suppliers the opportunity to test advanced driver assistance systems and automated driving functions early on in virtual test driving. Until now, neural networks were trained on videos of real-world driving with corresponding data on road markings, road users, buildings, parked cars, etc. under various visibility and weather conditions such as cloudiness, fog, snow and rain, by day or night. Using CarMaker, reproducible data for the most diverse scenarios can now be generated virtually and used to train neural networks. “Some of our customers already use the NVIDIA board and are now able to test deep learning sensor fusion algorithms in virtual test driving as well, thereby saving time and costs,” explained Björn Fath, Business Development Manager Real-Time Simulation Systems at IPG Automotive.
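The advantage of simulation here is that scenario variants can be enumerated systematically rather than waiting for them to occur on real roads. As a minimal illustration, the sketch below sweeps a few scenario parameters the way a simulation tool such as CarMaker might be parameterized; the parameter names and values are assumptions chosen for the example, not actual CarMaker settings.

```python
import itertools

# Illustrative scenario dimensions mirroring the conditions named in the
# article (weather, time of day) plus a traffic-density axis. These are
# example values, not CarMaker parameters.
WEATHER = ["clear", "fog", "rain", "snow"]
TIME_OF_DAY = ["day", "night"]
TRAFFIC = ["light", "dense"]

def scenario_matrix():
    """Yield one parameter set per combination, giving a reproducible
    sweep over all scenario variants."""
    for weather, tod, traffic in itertools.product(WEATHER, TIME_OF_DAY, TRAFFIC):
        yield {"weather": weather, "time_of_day": tod, "traffic": traffic}

scenarios = list(scenario_matrix())
# 4 weather x 2 time-of-day x 2 traffic = 16 reproducible variants,
# each of which could drive one simulated run that produces labeled
# training frames for the network.
```

Because every variant is defined by an explicit parameter set, any run can be reproduced exactly, which is what makes simulated data attractive for training and regression-testing neural networks.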
In combination with the Video Interface Box by IPG Automotive, closed-loop tests of camera control units become possible through the direct injection of image data. When advanced driver assistance systems are tested on so-called hardware-in-the-loop (HIL) test benches, a lack of synchronization between image generation on the monitor and image capture by the camera can result in “torn” images. In addition, digital flat screens are too dim and low in contrast to supply usable input data for lighting assistance functions. These problems can be avoided by feeding images directly into a standard camera control unit via the Video Interface Box. Images from the simulation environment can also be fed directly into the NVIDIA DRIVE PX 2 via GMSL (Gigabit Multimedia Serial Link). In a closed-loop HIL setup, camera-based functions can thus be tested in interaction with other control units and simulated environment sensors, bringing the advantages of virtual test driving to sensor data fusion as well.
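The closed loop described above can be reduced to three steps per simulation cycle: the simulation renders a frame, the frame is injected into the camera control unit, and the control unit's actuation command feeds back into the vehicle model. The sketch below mocks all three stages in a few lines; every function is hypothetical stand-in code, and in the real setup the injection step would go through the Video Interface Box or GMSL rather than a function call.

```python
def render_frame(lateral_offset: float) -> float:
    """Stand-in for the simulated camera image: the 'image' is reduced
    to the lane offset the control unit would extract from it."""
    return lateral_offset

def camera_ecu(perceived_offset: float) -> float:
    """Mock lane-keeping control unit: proportional steering response."""
    return -0.5 * perceived_offset

def step_vehicle(offset: float, steering: float) -> float:
    """Toy vehicle model: steering moves the car back toward the lane center."""
    return offset + steering

offset = 1.0  # start one meter off the lane center
for _ in range(10):
    frame = render_frame(offset)       # simulation renders the camera view
    steering = camera_ecu(frame)       # injected frame drives the ECU
    offset = step_vehicle(offset, steering)  # actuation closes the loop
# offset shrinks toward zero as the loop converges
```

The point of the closed loop is exactly this feedback: the control unit's outputs change the simulated world, which changes the next injected image, so the function under test is exercised under its own influence rather than against a fixed video recording.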
The use of NVIDIA's high-performance hardware technology in combination with CarMaker provides an excellent basis for meeting the challenging demands that autonomous driving imposes on development. “An early test of these new functions in a virtual environment is key to achieving a high level of product quality and safety,” Björn Fath summarized.