[Colloquium] Dual Photography
September 8, 2006
Watch Colloquium:
AVI (182 MB)
QuickTime part 1 (124 MB), QuickTime part 2 (91 MB)
MPEG-1 (190 MB)
- Date: Friday, September 8, 2006
- Time: 1:00 pm to 2:15 pm
- Place: ME 218
Pradeep Sen
Electrical and Computer Engineering, UNM
Abstract

The pursuit of interactive, photorealistic graphics will present many interesting challenges in the coming years for both geometry-based and image-based rendering. In this talk, I will begin by presenting some of the results from my dissertation work on silhouette maps and show how they improve the appearance of textures rendered by real-time, geometry-based systems. The focus of this talk, however, will be on image-based rendering, where I address the specific problem of scene relighting. I present a novel approach to this problem called “dual photography,” which exploits Helmholtz reciprocity to interchange the lights and cameras in a scene. With a video projector providing structured illumination, reciprocity permits us to generate pictures from the viewpoint of the projector, even though no camera was present at that location. The technique is completely image-based, requiring no knowledge of scene geometry or surface properties, and by its nature it automatically includes all light transport paths, including shadows, inter-reflections, and caustics.
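At its core, the interchange of lights and cameras is a linear-algebra identity: if the scene's light transport is a matrix T that maps a projector pattern p to a camera image c (c = T p), Helmholtz reciprocity implies that the transpose Tᵀ maps a virtual camera-side illumination to the image seen from the projector's viewpoint. The following minimal NumPy sketch illustrates that duality; the toy transport matrix, resolutions, and patterns are invented for illustration and are not from the talk.

```python
import numpy as np

# Toy light transport matrix T: entry T[i, j] is the amount of light
# that projector pixel j contributes to camera pixel i. A random
# sparse matrix stands in for a measured one here.
n_cam, n_proj = 64, 48
rng = np.random.default_rng(0)
T = rng.random((n_cam, n_proj)) * (rng.random((n_cam, n_proj)) < 0.1)

# Primal photograph: camera image c under projector pattern p.
p = np.zeros(n_proj)
p[10] = 1.0                  # light a single projector pixel
c = T @ p                    # c = T p

# Dual photograph: interchanging lights and cameras corresponds to
# transposing T. The image "seen" from the projector's viewpoint
# under a virtual light placed at the camera:
c_virt = np.ones(n_cam)      # uniform virtual illumination
p_dual = T.T @ c_virt        # p' = T^T c'
```

In practice T would be measured by projecting structured patterns and recording the camera's responses; once T is known, any virtual illumination can be applied in the dual view with a single matrix-vector product.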
In its simplest form, the technique can be used to take photographs without a camera; I will show images we captured using only a projector and a photo-resistor. If the photo-resistor is replaced by a camera, we can produce a 4D dataset that allows relighting with 2D incident illumination. Using an array of cameras, we can produce a 6D slice of the 8D reflectance field that allows relighting with arbitrary light fields. Since an array of cameras can operate in parallel without interference, whereas an array of light sources cannot, dual photography is fundamentally a more efficient way to capture such a 6D dataset than a system based on multiple projectors and a single camera. Therefore, while current techniques must acquire the data necessary for relighting in time linear in the number of illumination samples, dual photography lets us capture the same data in constant time. As a demonstration, I will show various scenes that we have captured and relit using this technique.
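The efficiency claim can be made concrete with a rough capture count (a back-of-the-envelope accounting of ours, not figures from the talk). Let $N_\ell$ be the number of incident-illumination samples desired for relighting and $N_p$ the number of projector patterns needed to measure the transport:

\[
\text{captures}_{\text{primal}} = O(N_\ell), \qquad \text{captures}_{\text{dual}} = O(N_p).
\]

In the primal arrangement (many light sources, one camera), the light samples would interfere if lit together, so they must be turned on one at a time and acquisition grows linearly in $N_\ell$. In the dual arrangement, every camera in the array records each projector pattern simultaneously, so adding illumination samples (cameras) adds no captures: acquisition is constant in $N_\ell$.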
Bio

Pradeep Sen is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of New Mexico. He received his B.S. in Computer and Electrical Engineering from Purdue University in 1996 and his M.S. in Electrical Engineering from Stanford University in 1998, working in the area of electron-beam lithography. After two years at a profitable startup company that he co-founded, he joined the Stanford Graphics Lab in the fall of 2000 and received his Ph.D. in Electrical Engineering in June 2006. His work in graphics includes discontinuous texture representations, real-time shading, and dual photography, and it has been featured in Slashdot, New Scientist, CGWorld, CPU, and Technology Research News Magazine. His interests include real-time graphics and graphics hardware, global illumination algorithms, computational photography, and display technology.