Tuan-Hung Vu is a research scientist at valeo.ai (Valeo, 2018–now) and a member of the Astra-vision computer vision group (Inria Paris, 2022–now). He received his PhD from École Normale Supérieure under the supervision of Ivan Laptev. Tuan-Hung obtained an engineering degree from Télécom Paris and, in parallel, a “Master 2” degree in Mathematics, Machine Learning and Computer Vision (MVA) from École Normale Supérieure Paris-Saclay in 2014. His research interests include deep learning, scene understanding, domain adaptation, and data augmentation.
- 06/2023: Submission is open for the BRAVO workshop. Deadline: 19/07/2023.
- 04/2023: “BRAVO: Robustness and Reliability of Autonomous Vehicles in the Open-world” is accepted as an ICCV’23 workshop.
- 11/2022: Top reviewer, NeurIPS’22.
- 09/2022: The code of DenseMTL is released.
- 08/2022: One paper accepted to WACV.
- 06/2022: Our recent work on “Cross-task Attention Mechanism for Dense Multi-task Learning” is online. Code is released.
- 05/2022: We’re organizing a workshop on Weakly Supervised Computer Vision at the Deep Learning Indaba 2022.
- 04/2022: Our two papers “CSG0: Continual Urban Scene Generation with Zero Forgetting” and “Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation” are accepted to the CVPR’22 CLVision Workshop.
- 01/2022: Our paper “Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation” is accepted to T-PAMI.
- 11/2021: The preprint of “CSG0: Continual Urban Scene Generation with Zero Forgetting” is online.
- 08/2021: Our paper on boundless unsupervised domain adaptation is accepted to CVIU.
- 07/2021: One paper on multi-target domain adaptation is accepted to ICCV’21.
- 07/2021: The Semantic Palette code is released.
- 06/2021: The preprint and video of Semantic Palette are available.
- 05/2021: Our work on confidence estimation and its use in improving domain adaptation is accepted to T-PAMI. An updated paper is coming soon.
- 05/2021: Outstanding reviewer, CVPR’21.
- 03/2021: Outstanding reviewer, ICLR’21.
- 02/2021: Our paper Semantic Palette on layout-conditioned scene generation is accepted to CVPR’21. Preprint is coming soon.
- 01/2021: The xMUDA/XMoSSDA preprint of “Cross-modal Learning for Domain Adaptation in 3D Semantic Segmentation” is online.
- 01/2021: The ConDA preprint of “Confidence Estimation via Auxiliary Models” is online.
- 06/2020: The xMUDA code is released.
- 05/2020: Our paper ESL, on entropy-guided pseudo-labels for UDA, is accepted to the CVPR 2020 Workshop on Scalability in Autonomous Driving.
- 04/2020: The BUDA preprint on boundless unsupervised domain adaptation is online.
- 02/2020: Our paper xMUDA on cross 2D-3D unsupervised domain adaptation is accepted to CVPR’20.
- 12/2019: The DADA code is released.
- 12/2019: The Zero-shot semantic segmentation code is released.
- 12/2019: The xMUDA preprint on cross 2D-3D unsupervised domain adaptation is online, with a demo.
- 09/2019: Our paper on zero-shot semantic segmentation is accepted to NeurIPS’19.
- 08/2019: Invited talk at BNP Paribas AI Summer School.
- Our paper DADA is accepted to ICCV’19.
- The ADVENT code is released.
- 06/2019: The ZS3 preprint on zero-shot semantic segmentation is online.
- 06/2019: Keynote talk at ULAD, the 1st Workshop on Unsupervised Learning for Automated Driving, at IV 2019.
- 04/2019: The DADA preprint on unsupervised domain adaptation is online.
- 04/2019: Our paper ADVENT is accepted to CVPR’19 as an oral presentation.
- 12/2018: The Tube-CNN preprint on object detection in videos is online.
- 11/2018: The ADVENT preprint on unsupervised domain adaptation is online.
- 09/2018: I successfully defended my PhD!
- 05/2018: Our MemNet paper on object detection in videos is accepted to WACV’19.
- 03/2018: I will join valeo.ai as a research scientist.
- 06/2017: Internship at NEC Labs, Cupertino in summer 2017.
- 06/2016: Code for the ICCV’15 paper was released.
- 11/2015: A paper on human head detection in movies is accepted to ICCV’15.
- 07/2014: The dataset for the ECCV’14 paper is released.
- 07/2014: A paper on action prediction is accepted to ECCV’14.