Welcome to the homepage of Yihong WU (Chinese version)
Chinese Academy of Sciences
P.O. Box 2728, Beijing, 100080
Email: yhwu at nlpr.ia.ac.cn
Research Demos (Pose tracking):
1. Vision-based mobile augmented reality
2. SLAM for a general moving object
3. SLAM for scenes
4. SLAM relocalization
5. SLAM with IMU
Research Interests:
1. Camera calibration and pose determination, image matching, three-dimensional reconstruction, vision geometry, SLAM, vision on mobile devices.
2. Invariants and applications in computer vision and pattern recognition.
3. Polynomial elimination and applications in computer vision.
Associate Editor of Pattern Recognition
Area Chair of ICPR 2018
Editorial Board Member of ACTA AUTOMATICA SINICA
Editorial Board Member of Journal of CAD & CG
Editorial Board Member of Journal of Frontiers of Computer Science and Technology
Editorial Board Member of the Open Artificial Intelligence/Computer Science Journal
PC Member of VISUAL 2007
PC Member of ICCV 2007
PC Member of PCM 2007
PC Member of WCICA 2008
PC Member of CVPR 2008
Session Chair of ACCV 2007
Session Chair of RIUPEEEC 2005
PC Member of KES 2009
Reviewer of ICCV 2009/ACCV 2009
Research Projects as PI
Single view based metrology (863)
A study on the theory and
algorithm of camera self-calibration (NSFC)
Geometric invariant computation
and camera pose determination from n perspective points (NSFC)
Image based 3-dimensional reconstruction
Camera parameter computation
from video sequence (Key NSFC)
Omnidirectional camera calibration
Study on image invariants and applications under multiple camera models (NSFC)
Image-based modeling for complex and large scale environment (NLPR)
Visual SLAM (Nokia RC in Finland)
Non-planar object tracking (Samsung)
Camera pose tracking (Samsung)
VR pose tracking (Huawei)
The First Workshop on Community Based 3D Content and Its Applications in Mobile Internet Environments, in conjunction with ACCV 2009
The Second Workshop on Community Based 3D Content and Its Applications, in conjunction with ICME 2012
Virtual reality (VR), augmented reality (AR), robotics, and autonomous driving have recently attracted much attention from both the academic and the industrial communities. Three-dimensional (3D) computer vision plays an important role in all of these fields. Autonomous localization and navigation are necessary for a moving robot, and cameras are the most flexible and lowest-cost sensors for building maps and localizing within them. To augment reality in images, camera pose determination, i.e. localization, is needed. To view a virtual environment, the corresponding viewing pose must be computed. Furthermore, cameras are ubiquitous: people carry camera-equipped mobile phones every day, and some AR applications already run on them. Therefore, 3D computer vision has great and widespread applications.
This tutorial will survey developments in 3D computer vision over the past two years. Important works since 2017 will be introduced in image matching, camera localization (including camera pose determination and simultaneous localization and mapping, SLAM), and 3D reconstruction.
The contents have five parts:
1. Fundamentals of 3D vision
Some fundamental knowledge of 3D vision is introduced, together with some events related to 3D vision over the past two years.
2. Developments in image matching
Some important works on image feature detectors and descriptors since 2017 are introduced, as well as some important works on image matching and two datasets since 2017. Among these works, the share of deep learning based methods is growing.
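As a minimal illustration of the matching step (not any particular method covered in the tutorial), descriptor matching is often done by brute-force nearest-neighbour search with Lowe's ratio test; the sketch below assumes descriptors are given as float arrays, and the `ratio` threshold of 0.8 is a conventional choice, not a value from this page:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) float arrays of feature descriptors.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]              # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:           # keep only unambiguous matches
            matches.append((i, int(j1)))
    return matches
```

Learning-based matchers replace the hand-crafted descriptors and/or this greedy assignment, but the ratio test remains a common baseline for filtering ambiguous correspondences.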
3. Developments in camera localization
A complete classification of image-based camera localization, organized as a tree structure, is given, and the important directions are pointed out. Developments since 2017 in both known and unknown environments are introduced. Localization in known environments is covered by PnP (perspective-n-point) works; localization in unknown environments by SLAM works, which include general geometric SLAM, learning SLAM, semantic SLAM, and marker SLAM. Besides SLAM with traditional cameras, there are also SLAM works with event cameras and RGB-D cameras.
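A classical linear baseline behind pose determination from known 3D points is the Direct Linear Transform (DLT); the sketch below is an illustration of that baseline, not a method proposed on this page, and the camera matrix it recovers is defined only up to scale from n >= 6 noise-free correspondences:

```python
import numpy as np

def pnp_dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D
    correspondences via the Direct Linear Transform (DLT).

    points_3d: (n, 3), points_2d: (n, 2). Returns P up to scale.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def reproject(P, points_3d):
    """Project 3D points with P and dehomogenise to pixel coordinates."""
    Xh = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

Practical PnP solvers (e.g. minimal P3P or EPnP variants discussed in the localization literature) exploit the known intrinsics and handle noise and outliers; the DLT only illustrates the underlying linear geometry.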
4. Developments in 3D reconstruction
This part will introduce structure-from-motion (SFM) based 3D reconstruction, learning-based 3D reconstruction, and RGB-D 3D reconstruction (RGB-D SLAM) since 2017.
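At the core of SFM-based reconstruction is triangulation: recovering a 3D point from its projections in two calibrated views. A minimal linear (DLT) two-view triangulation can be sketched as follows, assuming the two projection matrices are already known from the earlier pose-estimation stage:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations.
    Returns the 3D point as a length-3 array.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x cross (P X) = 0.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenise
```

Full SFM pipelines alternate such triangulation with pose estimation and refine everything jointly by bundle adjustment; learning-based methods instead regress depth or geometry directly.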
5. Trends of 3D vision
I will share my views on the trends of 3D vision.