Efficient Self-Ensemble for Semantic Segmentation

Walid Bousselham, Guillaume Thibault, Lucas Pagano, Archana Machireddy, Joe Gray, Young Hwan Chang, Xubo Song

Research output: Contribution to conference, Paper, peer-review


Abstract

An ensemble of predictions is known to perform better than the individual predictions taken separately. However, for tasks that require heavy computational resources, e.g. semantic segmentation, creating an ensemble of learners that must be trained separately is hardly tractable. In this work, we propose to leverage the performance boost offered by ensemble methods to enhance semantic segmentation, while avoiding the traditional heavy training cost of ensembles. Our self-ensemble approach takes advantage of the multi-scale feature set produced by feature pyramid network methods to feed independent decoders, thus creating an ensemble within a single model. As in a standard ensemble, the final prediction is the aggregation of the predictions made by each learner. In contrast to previous works, our model can be trained end-to-end, alleviating the traditional cumbersome multi-stage training of ensembles. At the time of publication, our self-ensemble approach outperforms the previous state of the art on the Pascal Context and COCO-Stuff-10K semantic segmentation benchmarks and is competitive on ADE20K and Cityscapes. Code is publicly available at https://github.com/WalBouss/SenFormer.
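
The abstract describes an architecture in which each scale of a feature pyramid network feeds its own decoder and the per-decoder predictions are aggregated into the final segmentation. The following is a minimal PyTorch-style sketch of that idea only; the class name `SelfEnsembleSegmenter`, the plain convolutional decoders, and the simple averaging are illustrative assumptions, not the authors' SenFormer implementation (see the repository above for the actual code).

```python
# Minimal sketch of the self-ensemble idea (assumptions: a generic FPN yields
# multi-scale feature maps; each scale feeds its own lightweight decoder; the
# per-scale logits are upsampled and averaged). This is NOT the official
# SenFormer code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfEnsembleSegmenter(nn.Module):  # hypothetical name
    def __init__(self, in_channels=(256, 256, 256, 256), num_classes=150):
        super().__init__()
        # One independent decoder ("learner") per FPN level.
        self.decoders = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, 256, kernel_size=3, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, num_classes, kernel_size=1),
            )
            for c in in_channels
        )

    def forward(self, fpn_features):
        # fpn_features: list of tensors [B, C_i, H_i, W_i], finest scale first.
        target_size = fpn_features[0].shape[-2:]
        logits = [
            F.interpolate(dec(feat), size=target_size,
                          mode="bilinear", align_corners=False)
            for dec, feat in zip(self.decoders, fpn_features)
        ]
        # Ensemble aggregation: average the predictions of all learners.
        return torch.stack(logits, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Dummy multi-scale features standing in for an FPN output.
    feats = [torch.randn(1, 256, 128 // 2**i, 128 // 2**i) for i in range(4)]
    model = SelfEnsembleSegmenter()
    out = model(feats)  # [1, num_classes, 128, 128]
    print(out.shape)
```

Because all learners live inside one model and share the backbone, a single loss on the aggregated (and, if desired, per-learner) logits lets the whole ensemble be trained end-to-end, which is the property the abstract contrasts with the multi-stage training of traditional ensembles.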

Original language: English (US)
State: Published - 2022
Event: 33rd British Machine Vision Conference Proceedings, BMVC 2022 - London, United Kingdom
Duration: Nov 21 2022 - Nov 24 2022

Conference

Conference: 33rd British Machine Vision Conference Proceedings, BMVC 2022
Country/Territory: United Kingdom
City: London
Period: 11/21/22 - 11/24/22

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
