In conjunction with the Workshop on Uncertainty Quantification for Computer Vision, the ELSA Use Case “Autonomous Driving – Robust Perception” is organising a challenge on the robustness of autonomous driving in the open world.
The 2025 BRAVO Challenge aims to benchmark segmentation models on urban scenes undergoing diverse forms of natural degradation and realistic-looking synthetic corruptions.
[New] In the 2025 edition, we extend the challenge with a new track on synthetic-domain training, while continuing the two real-domain training tracks from the BRAVO Challenge 2024.
Top teams will be invited to present their solutions in a dedicated session at the UNCV workshop.
For more information, please check the BRAVO Challenge Repository and the Challenge Task Website at ELSA.
Important Dates
All times are 23:59 CEST.
- BRAVO Challenge 2025 launch. Submission server is open: 01/05/2025
- 1st submission deadline for CVPR 2025 edition: 06/06/2025
- BRAVO Challenge session at UNCV: 11/06/2025
- 2nd submission deadline to conclude BRAVO 2025 at ICCV 2025: TBD
- Whitepaper contribution deadline: TBD
General rules
- The task is semantic segmentation with pixel-wise evaluation performed on the 19 semantic classes of Cityscapes.
- Models in each track must be trained using only the datasets allowed for that track.
- Employing generative models for data augmentation is strictly forbidden.
- All results must be reproducible. Participants must submit a white paper containing comprehensive technical details alongside their results. Participants must make models and inference code accessible.
- Evaluation will consider the 19 classes of Cityscapes (see below).
- Teams must register a single account for submitting to the evaluation server. An organization (e.g. a University) may have several teams with independent accounts only if the teams are not cooperating on the challenge.
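Since evaluation is pixel-wise over the 19 Cityscapes classes, the scoring logic can be sketched as a per-pixel confusion matrix and mean IoU. This is a hypothetical illustration, not the official BRAVO evaluation code; it assumes labels follow the standard Cityscapes "trainId" convention, with 255 as the ignore label.

```python
import numpy as np

NUM_CLASSES = 19    # Cityscapes evaluation classes
IGNORE_LABEL = 255  # pixels excluded from scoring

def confusion_matrix(pred, gt, num_classes=NUM_CLASSES):
    """Accumulate a num_classes x num_classes confusion matrix
    (rows: ground truth, columns: prediction), skipping ignored pixels."""
    mask = gt != IGNORE_LABEL
    idx = num_classes * gt[mask].astype(np.int64) + pred[mask]
    return np.bincount(idx, minlength=num_classes**2).reshape(num_classes, num_classes)

def mean_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN); mIoU averages over
    classes that appear in the ground truth or predictions."""
    tp = np.diag(conf)
    denom = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)
```

Confusion matrices computed per image can simply be summed before the final mIoU is taken, which is how most segmentation benchmarks aggregate over a dataset.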
The BRAVO Benchmark Dataset
We created the benchmark dataset with real, captured images and realistic-looking synthetic augmentations, repurposing existing datasets and combining them with newly generated data. The benchmark dataset comprises images from ACDC, SegmentMeIfYouCan, Out-of-context Cityscapes, and new synthetic data.
Get the full benchmark dataset at the following link: full BRAVO Dataset download link.
The dataset includes the following subsets (with individual download links):
bravo-ACDC: real scenes captured in adverse weather conditions, i.e., fog, night, rain, and snow. (download link or directly from ACDC website)
Challenge Tracks
The challenge comprises three tracks:
Track 1 – Single-domain training
In this track, you must train your models exclusively on the Cityscapes dataset. This track evaluates the robustness of models trained with limited supervision and geographical diversity when facing unexpected corruptions observed in real-world scenarios.
Track 2 – Multi-domain training
In this track, you must train your models on a mix of datasets, strictly limited to the list provided below, comprising both natural and synthetic domains. This track assesses how relaxing the constraints on training data affects robustness.
Allowed training datasets for Track 2:
- Cityscapes
- BDD100k
- Mapillary Vistas
- India Driving Dataset
- WildDash 2
- GTA5 Dataset (synthetic)
- SHIFT Dataset (synthetic)
[New] Track 3 – Synthetic-domain training
In this track, you must train your models exclusively on the synthetic datasets listed below. This track evaluates the robustness of models trained solely on synthetic data when facing corruptions observed in real-world scenarios.
Allowed training datasets for this track:
- GTA5 Dataset (synthetic)
- SYNTHIA Dataset (synthetic)
- UrbanSyn Dataset (synthetic)
- SHIFT Dataset (synthetic)
We look forward to your contributions!

