Today’s driving simulators have one big problem: they don’t look realistic enough, especially background objects like trees and road signs. But researchers have developed a new way to create photorealistic images for simulators, paving the way for better testing of driverless cars.

Conventional computer graphics use detailed models, meshes and textures to render 2D images from 3D scenes, a labor-intensive process that often produces images that fall short of realism, especially in the background. But using a machine learning framework called a generative adversarial network (GAN), the researchers were able to train their program to randomly generate lifelike environments by improving the program’s visual fidelity – the level of representation computer graphics share with reality.
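To make the adversarial idea concrete, here is a minimal GAN training loop in PyTorch. It is a sketch only: the tiny networks and the random “real” batch are placeholders for the large image-synthesis models and real street-scene photographs a study like this would actually train on.

```python
# Minimal GAN sketch (PyTorch). Illustrative only: the study's synthesizer
# is far larger and trains on real street-scene imagery, not random tensors.
import torch
import torch.nn as nn

latent_dim, img_dim, batch = 64, 3 * 32 * 32, 16  # tiny placeholder sizes

generator = nn.Sequential(               # noise -> flattened fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(           # flattened image -> real/fake logit
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(batch, img_dim) * 2 - 1   # stand-in for real photos
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator: push real images toward label 1, generated toward 0.
    loss_d = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real.
    loss_g = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The two losses pull against each other: the discriminator learns to separate real images from generated ones, and the generator learns to produce images the discriminator can no longer reject, which is what gradually pushes the synthesized imagery toward photorealism.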

This is especially important for testing how people react when they are riding in driverless vehicles or, alternatively, sharing the road with them.

“When driving simulations look like computer games, most people don’t take them seriously,” said Ekrem Yurtsever, the study’s lead author and a research associate in electrical and computer engineering at The Ohio State University. “That’s why we want to make our simulations look as similar to the real world as possible.”

The study was published in the journal IEEE Transactions on Intelligent Transportation Systems.

The researchers started with CARLA, an open-source driving simulator, as their base. They then used a GAN-based image synthesizer to render background elements such as buildings, vegetation and even the sky, and combined them with conventionally rendered objects.

Yurtsever said that driving simulations will continue to need conventional, graphics-intensive techniques to display key objects of interest, such as nearby cars. But, using artificial intelligence, the GAN can be trained to generate realistic backgrounds and foregrounds using real-world data.
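At its core, that split can be pictured as a masked composite: keep the conventionally rendered pixels for key objects and fill everything else with GAN output. The NumPy sketch below is our illustration of the idea under assumed image shapes and a hypothetical foreground mask, not the paper’s actual pipeline.

```python
# Masked compositing sketch (NumPy): keep conventionally rendered foreground
# pixels, fill the rest with GAN-synthesized background. Illustrative only;
# the arrays and mask here are placeholders, not the paper's pipeline.
import numpy as np

H, W = 256, 512
rendered = np.random.rand(H, W, 3)      # stand-in for the simulator's render
synthesized = np.random.rand(H, W, 3)   # stand-in for the GAN's background

# Per-pixel foreground mask, e.g. derived from the simulator's semantic
# labels (1 = key object such as a nearby car, 0 = background).
fg_mask = np.zeros((H, W, 1))
fg_mask[100:180, 200:320] = 1.0         # hypothetical car region

composite = fg_mask * rendered + (1.0 - fg_mask) * synthesized
```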

One of the challenges the researchers faced was teaching their program to recognize patterns in its environments, a skill needed to detect and create objects such as vehicles, trees and shadows, and to distinguish these objects from one another.

“The beauty of it is that these patterns and textures in our model are not designed by engineers,” Yurtsever said. “We have a feature recognition template, but the neural network learns it on its own.”
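As a rough stand-in for that learned pattern recognition, even an off-the-shelf semantic segmentation network can pick out cars and other objects pixel by pixel. The snippet below uses torchvision’s pretrained DeepLabV3 purely for illustration; it is not the network used in the study.

```python
# Off-the-shelf semantic segmentation (torchvision DeepLabV3, Pascal VOC
# classes). Illustration of learned object recognition, not the study's model.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

image = torch.rand(1, 3, 520, 520)   # stand-in for a normalized street photo
with torch.no_grad():
    logits = model(image)["out"]     # (1, num_classes, H, W)
labels = logits.argmax(dim=1)        # per-pixel class index

car_mask = (labels == 7)             # VOC class 7 = "car"
```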

Their findings showed that blending foreground objects separately from the background scenery improved the photorealism of the entire image.

However, instead of modifying an entire simulation at once, the process had to be done frame by frame. But since we don’t live in a frame-by-frame world, the next step of the project will be to improve the program’s temporal consistency, so that each frame is consistent with the ones before and after it and users have a smooth, visually appealing experience, Yurtsever said.
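The frame-by-frame limitation is easy to demonstrate: if each frame is translated independently, nothing ties one frame to the next, so even a static scene changes between frames. The sketch below uses a hypothetical gan_enhance stand-in to show where that flicker comes from.

```python
# Per-frame translation sketch: each frame is enhanced independently, so
# no state links one frame to the next -- the source of temporal flicker.
import numpy as np

def gan_enhance(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the GAN pass; adds random texture
    variation to mimic per-frame synthesis differences."""
    return np.clip(frame + 0.05 * np.random.randn(*frame.shape), 0.0, 1.0)

raw = np.full((3, 64, 64, 3), 0.5)   # a static scene, 3 consecutive frames

enhanced = np.stack([gan_enhance(f) for f in raw])

# Frame-to-frame difference is nonzero even though the scene never moved.
print(np.abs(enhanced[1] - enhanced[0]).mean())
```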

Developing photorealistic technologies could also help scientists study the intricacies of driver distraction and help improve experiments with real drivers, Yurtsever said. And with access to larger datasets of roadside scenes, more immersive driving simulations could change the way humans and AI begin to share the road.

“Our research is an extremely important step in conceptualizing and testing new ideas,” Yurtsever said. “We can never replace real-world testing, but if we can make the simulations a little better, we can get better insight into how we can improve autonomous driving systems and how we interact with them.”

Reference: Yurtsever E, Yang D, Koc IM, Redmill KA. Photorealism in driving simulations: Blending generative adversarial image synthesis with rendering. IEEE Trans. Intell. Transp. Syst. 2022:1-10. doi: 10.1109/TITS.2022.3193347

This article is reprinted from the following materials. Note: Material may have been edited for length and content. For more information, please contact the source cited.
