Visual Representation and Sensing from Mobile Cameras
Thanks to advances in miniaturization and mass production, cameras are now ubiquitously embedded in mobile devices, including smartphones, autonomous vehicles, robots, and body-worn devices. These mobile cameras are inexpensive and can gather large amounts of visual data about the surrounding environment in real time. Moreover, they are often paired with other sensing modalities, such as inertial and depth measurements, and can even be moved actively. Professor Do’s research project will develop efficient visual representation and sensing schemes for mobile cameras, drawing on ideas from geometric vision, plenoptic functions, information theory, sensor fusion, and bio-inspired algorithms. He will pursue a holistic approach to mobile vision that combines multi-modal sensing, geometric reconstruction, and semantic recognition for visual perception of dynamic environments. This approach will lead to a number of novel methods for extracting visual information from mobile devices, including pose and location estimation, 3D environment mapping, object localization and recognition, and motion detection and recognition.
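To give a flavor of the multi-modal sensor fusion mentioned above, the sketch below shows a minimal complementary filter that blends a fast but drifting gyroscope rate with a slower, drift-free camera-based orientation fix. This is only an illustrative assumption about one possible fusion scheme, not a method from the project itself; the function name, the blend weight, and all numeric values are hypothetical.

```python
# Minimal complementary-filter sketch (illustrative, not from the project):
# fuse a gyroscope rate (fast, but biased and drifting) with a camera-based
# orientation estimate (slow, but drift-free).

def fuse_orientation(angle, gyro_rate, camera_angle, dt, alpha=0.98):
    """Blend dead-reckoned gyro orientation with a camera orientation fix.

    alpha close to 1 trusts the gyro on short timescales while the camera
    term slowly pulls accumulated drift back toward zero.
    """
    predicted = angle + gyro_rate * dt               # integrate gyro rate
    return alpha * predicted + (1 - alpha) * camera_angle

# Toy simulation: the device rotates at 1.0 rad/s; the gyro reading carries
# a constant bias of 0.05 rad/s, and the camera reports the true angle.
angle = 0.0
true_angle = 0.0
dt = 0.01
for step in range(100):
    true_angle += 1.0 * dt
    gyro_rate = 1.0 + 0.05                           # biased gyro reading
    angle = fuse_orientation(angle, gyro_rate, true_angle, dt)

# Pure gyro integration would have drifted by 0.05 rad after 1 s;
# the fused estimate stays substantially closer to the true angle.
```

The design point is that each modality compensates for the other's weakness: the gyro supplies high-rate motion, while the occasional camera fix bounds the drift.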