Computer Vision and Computational Optical Imaging

Computer vision is one of the core research fields of the Visual Computing section at DTU Compute. We aim to develop fundamental methods for fast and accurate detection and measurement of the real world. Our focus spans object geometry, optical properties, lighting environments, and sub-resolution micro-geometry. We want to be able to record the full digital twin of a natural scene by taking into account the interactions between light and material. Our highlights include 3D scanning, acquisition of surface BRDFs, and seeing transparency.

3D scanning

Low-cost sensors such as the Microsoft Kinect and time-of-flight cameras have made 3D sensors ubiquitous and have enabled a vast number of new applications and methods. However, such low-cost sensors are generally limited in accuracy and precision, making them unsuitable for tasks such as precise tracking and pose estimation. With recent improvements in projector technology, increased processing power, and new methods with central contributions from our research group, it is now possible to perform fast and highly accurate structured light scans. This offers new opportunities for studying dynamic scenes, quality control, human-computer interaction, and more.
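To illustrate the principle behind one common structured light technique, phase-shifting profilometry (a textbook sketch, not our group's specific pipeline; the function name and interface are illustrative), the wrapped phase of a projected sinusoid can be recovered per pixel from N shifted captures:

```python
import numpy as np

def decode_phase(images):
    """Wrapped phase from N phase-shifted sinusoidal patterns.

    images: (N, H, W) array of captures of the scene under patterns
    I_n = A + B*cos(phase + 2*pi*n/N), n = 0..N-1, with N >= 3.
    Returns the wrapped phase in (-pi, pi] for every pixel; the phase
    encodes projector column and enables triangulation after unwrapping.
    """
    shifts = 2 * np.pi * np.arange(len(images)) / len(images)
    num = np.tensordot(np.sin(shifts), images, axes=1)  # ~ -B*sin(phase)*N/2
    den = np.tensordot(np.cos(shifts), images, axes=1)  # ~  B*cos(phase)*N/2
    return np.arctan2(-num, den)
```

Because the ambient term A and modulation B cancel in the ratio, the decoding is robust to surface albedo variation, which is one reason phase shifting is popular for accurate scanning.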

Acquisition of surface BRDFs

Accurate models of real-world 3D scenes, complete with geometry, surface textures, and surface reflectance models, find application in fields such as design, additive manufacturing, and virtual reality, amongst others. This research area aims to acquire surface reflectance models. The radiometric behaviour of an object plays a crucial role in 3D scanning. This behaviour has often been ignored, which allows for acceptable reconstructions of geometry but poor recovery of the surface reflectance.
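As a toy illustration of what acquiring a reflectance model involves (not our acquisition method; the model, function names, and fitting setup are assumptions for the sketch), a simple Lambertian-plus-Blinn-Phong BRDF can be fitted to observed reflectance samples by linear least squares when the specular exponent is fixed:

```python
import numpy as np

def blinn_phong(wi, wo, n, kd, ks, shininess):
    """Toy BRDF: Lambertian diffuse plus a Blinn-Phong specular lobe.

    wi, wo, n: unit light, view, and normal vectors of shape (..., 3).
    kd, ks: diffuse and specular weights; shininess: specular exponent.
    """
    h = wi + wo                                    # half-vector (unnormalized)
    h /= np.linalg.norm(h, axis=-1, keepdims=True)
    spec = np.clip((h * n).sum(-1), 0.0, 1.0) ** shininess
    return kd / np.pi + ks * spec

def fit_brdf(wi, wo, n, observed, shininess):
    """Recover kd, ks from reflectance samples by linear least squares."""
    basis = np.stack([np.full(len(observed), 1.0 / np.pi),
                      blinn_phong(wi, wo, n, 0.0, 1.0, shininess)], axis=-1)
    (kd, ks), *_ = np.linalg.lstsq(basis, observed, rcond=None)
    return kd, ks
```

Since the model is linear in kd and ks, each fixed exponent yields a closed-form fit; real acquisition pipelines handle far richer models and noisy, sparsely sampled measurements.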

Seeing transparency

The appearance of a transparent object is determined by a combination of refraction and reflection, as governed by a complex function of its shape as well as the surrounding environment. Prior works on 3D reconstruction have largely ignored transparent objects due to this challenge, yet they occur frequently in real-world scenes. We show, however, that it is possible to estimate depths and normals for transparent objects using a single image acquired under a distant but otherwise arbitrary environment map. In particular, we have used deep convolutional neural networks (CNNs) and analysis by photorealistic rendering for this task.
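The refraction that shapes a transparent object's appearance, and that a photorealistic renderer must simulate, follows Snell's law. A minimal sketch of the standard refraction formula (illustrative physics only, not our reconstruction method; the function name is an assumption):

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta is the ratio of refractive indices n1/n2 (incident over
    transmitted medium). Returns the refracted unit direction, or
    None on total internal reflection.
    """
    cos_i = -np.dot(d, n)                       # cosine of incidence angle
    k = 1.0 - eta**2 * (1.0 - cos_i**2)         # squared cosine of refraction
    if k < 0.0:
        return None                             # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n
```

A renderer applies this at every interface the ray crosses, which is why a transparent object's image entangles its shape with the surrounding environment.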

We also collaborate with and are part of the DTU 3D Imaging Center (3DIM), our X-ray µCT laboratory.

If you are interested in an MSc, BSc, or other student project in this area, you are welcome to see our Cyber-Physical 3D Ecosystem (Eco3D).