ELSA Use Case Update: Autonomous Driving – Robust Perception


In the “ELSA Use Case Updates” series, we share insights into the progress of research within the ELSA Use Cases. We speak with ELSA’s Use Case owners, leading researchers, project managers, and engineers.

As the use cases are central to our methodology, we aim to shed light on the research conducted within ELSA, the unique connections our network partners form, and how industries benefit from the results. A research and an industry partner jointly lead each ELSA Use Case.

In the first part of this series, we present our use case “Autonomous Driving – Robust Perception,” which is led by valeo.ai and the Max Planck Society in Germany. Valeo.ai is the AI research lab of the French automotive supplier Valeo. The Max Planck Society is a world-leading research organization in science and technology. For this text, we spoke to Tuan-Hung Vu, Senior Research Scientist at valeo.ai.

Presenting the Use Case
Autonomous Driving – Robust Perception

Each use case focuses on addressing one or more of the grand challenges ELSA has defined for its research:

  • Robustness Guarantees and Certification
  • Private and robust collaborative learning at scale
  • Human-in-the-loop decision making: integrated governance to ensure meaningful oversight 

The use case “Autonomous Driving – Robust Perception” addresses “Robustness Guarantees and Certification”.

To work towards solving this challenge, the use case “Autonomous Driving – Robust Perception” focuses on developing a testbed for robust perception. Testbeds are controlled environments that encompass all necessary elements for testing software.

In this case, the testbed aims to support researchers and developers in assessing and statistically proving the robustness of driving perception models.

The Industry Side: Valeo.ai on data collection, the current state of Autonomous Driving, collaboration, and safety

“What we do will make Autonomous Driving safer and more explainable,” says Senior Research Scientist Tuan-Hung Vu from valeo.ai. “My goal is to add transparency to systems’ decision-making processes. This is crucial, especially in Autonomous Driving, where the ultimate stage is to give a car full control.” 

The interaction processes of Autonomous Driving are extremely complex: the car not only needs to identify the world correctly, but must also understand and interpret the interactions between objects and eventually act within this world. A world which, in this case, is full of factors that are extremely difficult to predict. This takes machine interaction to a whole new level.

Collecting data

The environment in which cars operate is complex and multifaceted. Unknown objects such as caravans, street decorations, or reflections on the road are just a few examples of the situations systems need to be trained on. Collecting all this sample data can be cumbersome and complex.

To create a high-value, comprehensive testbed, valeo.ai has utilized data from various sources. In addition to gathering data with their own driving simulator, they have also incorporated data collected on streets and openly accessible data from other sources, such as SegmentMeIfYouCan (SMIYC).

Collaboration with the Max Planck Society

Industry and academia join forces within the ELSA Use Cases. In the case of the Autonomous Driving Use Case, valeo.ai and the Max Planck Society jointly shaped its foundation and defined its benchmarks; a series of workshops followed to create awareness for the use case (see below). Forming a research connection is one of the primary reasons for the partners to work together.

Challenges and Workshops

To initiate the project and raise awareness on an international level, the team organized its first workshop at the International Conference on Computer Vision (ICCV) 2023 in Paris. A year later, a second workshop at the European Conference on Computer Vision (ECCV) 2024 followed. A third workshop is currently being planned.

The BRAVO challenge and the BRAVO dataset 2024

“We proposed the BRAVO challenge for benchmarking semantic segmentation models on urban scenes affected by various forms of natural degradation and realistic synthetic corruptions. For this purpose, we combined three existing datasets (ACDC, SegmentMeIfYouCan, and Out-of-context Cityscapes) with new synthetic data generated using publicly available toolboxes and proprietary generative tools developed by valeo.ai”, explains Tuan-Hung Vu.
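To give a flavour of what “synthetic corruptions” means in practice, the sketch below applies two simple perturbations (sensor noise and low-light darkening) to an RGB frame with plain NumPy. It is a minimal illustration only and not valeo.ai’s actual generation pipeline, which relies on the toolboxes and proprietary generative tools mentioned above.

```python
# Minimal sketch (assumption: NOT valeo.ai's proprietary tooling) of how
# simple synthetic corruptions can be applied to an RGB frame to probe the
# robustness of a segmentation model.
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 20.0) -> np.ndarray:
    """Simulate sensor noise by adding zero-mean Gaussian noise."""
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def darken(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """Simulate low-light conditions with a simple gamma curve."""
    scaled = (img.astype(np.float32) / 255.0) ** gamma
    return (scaled * 255.0).astype(np.uint8)

# Toy usage on a random array; in practice this would be a Cityscapes-style frame.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
corrupted = darken(add_gaussian_noise(image))
print(corrupted.shape, corrupted.dtype)
```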

The researchers’ work aims to serve the public and remain accessible for further use. Hence, the BRAVO benchmark and its results are openly documented: Open results on arXiv.

Tuan-Hung Vu presenting the progress of the use case at the ELSA General Assembly 2025.

Using the Use Case Results

“Our testbed offers data for software testing in all stages of Autonomous Driving: from assistance features all the way to full autonomy”, says Tuan-Hung. So far, the Use Case has completed:

  • The BRAVO dataset
  • The evaluation toolkit
  • The benchmarking code, deployed on the ELSA benchmarks platform

“ELSA is a public project, so our results are available for everybody: companies, researchers, developers, or anybody curious. We will also offer a beginner prototype, which can be used as is or developed further”, Tuan-Hung elaborates.

For those who want to use the testbed, Vu has simple instructions: “The BRAVO dataset is publicly released, enabling participants to benchmark their systems using all images provided. Participants then upload their results to the ELSA benchmarks platform, which compares these results with private ground-truth data to calculate the final BRAVO scores. Methods are ranked based on these scores. All submitted results are stored securely on the ELSA platform.”
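As a rough illustration of the server-side comparison Vu describes, the sketch below computes a standard mean intersection-over-union (mIoU) between a submitted label map and a ground-truth label map. This is a hypothetical example only; the official BRAVO evaluation on the ELSA platform may use different or additional metrics.

```python
# Hypothetical sketch: comparing a submitted segmentation label map with
# private ground truth via mean IoU, a common semantic-segmentation score.
# The real BRAVO scoring code is not reproduced here.
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over the classes present in either map."""
    ious = []
    for c in range(num_classes):
        pred_c, gt_c = pred == c, gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:            # class absent from both maps: skip it
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

# Toy usage: a 4x4 label map with 3 classes and one mislabelled pixel.
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 2, 2],
               [2, 2, 2, 2]])
pred = gt.copy()
pred[0, 2] = 0
print(f"mean IoU: {mean_iou(pred, gt, num_classes=3):.3f}")
```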

Outlook: BRAVO Challenge 2025

As the use case’s work nears completion, valeo.ai is looking to expand within ELSA: “We are about ninety percent done. So for 2025, we decided to expand the task and add another challenge.”

The BRAVO 2025 challenge went online on May 1st, 2025. It is hosted in conjunction with the 4th Workshop on Uncertainty Quantification for Computer Vision at the CVF Computer Vision and Pattern Recognition Conference (CVPR) 2025.

The challenge accepts submissions until mid-June and uses the BRAVO challenge repository.

Wrap-Up

The Use Case “Autonomous Driving – Robust Perception” is focused on enhancing the development of real-life applications for autonomous driving and serves as a prime example of how AI research aims to create a better, safer future.

We thank you for the interview, Tuan-Hung Vu!