The primary objective of our research team is to develop intelligent agents that can effectively collaborate with humans in dynamic environments.
To realize this ambition, we focus on three core research directions.
(1) Human-centered Visual Content Understanding and Reasoning: This area seeks to enable machines to actively perceive, analyze,
and interpret human states, behaviors, and underlying motivations in dynamic scenarios.
(2) Omni-modal Scene Perception and Navigation: This emphasizes harnessing diverse sensor modalities to comprehend and navigate complex scenes.
(3) Machine Behavior Planning and Decision-making: This direction is centered on equipping intelligent agents with the ability to make real-time
decisions based on their understanding of the surrounding environment.
News! We have open positions for Ph.D., M.Phil., Research Assistant, and Visiting Student roles, and we welcome self-motivated talents.
If you are interested in 3D Scene Understanding, Human-centric Visual Perception and Generation, Embodied Vision, Diffusion Models, Multi-modal Learning, or Reinforcement Learning,
please drop me an email via firstname.lastname@example.org or email@example.com.
For more details about recruitment and the undergraduate research programme, please see here.
2023-10-20: We present HumanTOMATO, a novel whole-body motion generation framework.
2023-10-15: We present UniPose to detect keypoints of any articulated object for fine-grained vision understanding.
2023-09-22: Two papers are accepted by NeurIPS2023. Congrats to all!
2023-09-13: The first large-scale, real-world 3D pose estimation dataset, FreeMan, is released!
2023-07-26: One paper is accepted by ACM MM2023. Congrats to Siyue, Bingliang and Fengyu!
2023-07-14: Two papers are accepted by ICCV2023. Congrats to Jie, Chaoqun and Yiran!
2023-05-25: One paper is early accepted by MICCAI2023. Congrats to all!
2023-03-15: One paper is accepted by Pattern Recognition. Congrats to Qi Liu!
2023-03-02: Two papers are accepted by MIDL2023 and one is rated as the oral presentation. Congrats to Jie Yang and Ye Zhu!
2023-02-28: One paper is accepted by CVPR2023. Congrats to Jie Yang!
2023-02-27: One paper is accepted by T-NNLS. Congrats to Xiaozhe!
2023-01-21: One paper is accepted by ICLR2023. Congrats to Jie Yang!
2022-12-02: One paper is accepted by T-MM. Congrats to Ziyi!
2022-09-17: Two papers are accepted by NeurIPS2022. Congrats to all!
2022-07-05: Two papers are accepted by ECCV2022. Congrats to all!
2022-05-05: One paper is early accepted by MICCAI2022. Congrats to Weijie!
2022-05-01: We are hosting the MICCAI AMOS Segmentation Challenge 2022 in conjunction with MICCAI 2022.
2021-11-07: One paper is accepted by T-IP. Congrats to Yuying!
2021-10-15: I was selected to receive a NeurIPS 2021 Outstanding Reviewer Award.
2021-07-23: Two papers are accepted by ICCV2021. Congrats to all!
2021-06-12: Two papers are accepted by MICCAI2021. Congrats to all!
2021-05-28: One paper is accepted by T-MM. Congrats to Zhaoyi!
2021-05-05: A long version of polar representation for object detection is accepted by T-PAMI. Congrats to all!
2021-04-29: One paper is accepted by IJCAI2021. Congrats to Weibing and Yanxu!
2021-03-01: One paper is accepted by CVPR2021. Congrats to Yuying!
2021-02-18: I moved to CUHKSZ as a Research Assistant Professor.
2020-12: One paper is accepted by AAAI2021.
2020-08: We won the First Prize in 2020 AIM Challenge on Learned Image Signal Processing Pipeline, Track 2.
2020-07: Two papers are accepted by ECCV2020 and MICCAI2020, respectively.
2020-02: Two papers are accepted by CVPR2020.
2019-08: A long version of SwitchNorm is accepted by T-PAMI. Two papers are accepted by ICCV2019.
2019-05: I am organizing the second Workshop on Fashion and Art.
Address: Room 517, Daoyuan Building, The Chinese University of Hong Kong, Shenzhen
E-mail: firstname.lastname@example.org or email@example.com