Benjamin Busam's expertise in computer vision, specifically in geometric deep learning and its application to 3D reconstruction and perception, aligns with the core topics of the workshop. His experience will enrich discussions on the geometric aspects of pose estimation and the development of one-shot methods.
He Wang specializes in 3D computer vision, robotics, and machine learning with a focus on object manipulation and interaction in dynamic environments. His work includes developing algorithms for generalizable transparent object reconstruction and 6-DoF grasp detection, making him an ideal speaker to discuss innovations in category-level pose estimation where adaptability and generalizability are crucial.
Hyung Jin Chang is known for his work in machine learning and computer vision with applications to dynamic scene understanding and motion analysis. His research contributes directly to enhancing algorithms for category-level pose estimation by focusing on adaptability and accuracy in changing environments. He is also the author of the state-of-the-art category-level pose estimation method HS-Pose.
Muhammad Zubair Irshad specializes in robotics and 3D perception, particularly in 3D robotics perception using inductive priors. His work on omni-scene reconstruction offers attendees insights into understanding object poses in complex scenes from single RGB images.
Taeyeop Lee's work focuses on understanding 3D worlds through geometry and semantics, specifically in computer vision and robotics, which is crucial for optimizing pose estimation for interactive and responsive robotics applications. He is also the author of several advanced pose estimation algorithms, such as TTA-COPE.
Xiaolong Wang is recognized for his contributions to computer vision, machine learning, and robotics, with specific expertise in learning visual representations that connect image understanding to 3D structure and robotics. His research on sim-to-real generalizable feature learning and 6D pose estimation aligns well with the workshop's theme of enhancing category-level pose estimation in the wild.
Yan Xu's research sits at the intersection of computer vision, robotics, and embodied AI, focusing on actionable representation learning from natural data, which is pivotal for autonomous systems such as those used in category-level pose estimation. His work on object pose estimation and object localization is highly relevant to developing accurate and scalable pose estimation algorithms.