Just when you thought the present was beginning to look like the future you once imagined, a research team at Queen’s University has created a human-scale 3D hologram pod that allows people in different locations to videoconference as if they were standing in front of each other.
Called TeleHuman, the technology is the creation of professor Roel Vertegaal, director of the Human Media Lab, and his graduate team at Queen’s University in Kingston, Ontario.
Much like the Star Trek holodeck, the system lets participants walk around the 3D hologram of the remote person they’re talking to and view them from all sides. More importantly, it captures 3D visual cues that 2D video misses, such as head orientation, gaze and overall body posture.
To create the hologram, the system uses six Microsoft Kinect 3D video cameras perched atop a life-sized cylindrical pod (a 1.8-metre-tall translucent acrylic cylinder). The cameras capture participants as they converse, gesture and move in relation to the 3D hologram-like images of each other. An additional four depth-sensitive Kinects are stationed around the cylinder to capture side and back video.
Once captured, the depth information in the 3D video is used to assign a 3D vertex to every pixel of the user, creating a 3D mesh model. The 2D RGB video is then “overlaid” on this mesh as a texture. The resulting 3D video is transmitted to the other user’s “pod,” where a 3D projector at the base and a convex mirror at the top display the holographic image, seemingly within the cylindrical tube.
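To make that pipeline concrete, here is a minimal sketch in Python of how depth pixels might be back-projected into 3D vertices and paired with texture coordinates into the RGB frame. The pinhole intrinsics, the millimetre-to-metre scaling and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative pinhole intrinsics for a Kinect-style depth camera;
# the actual calibration values are not given in the article.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point (assumed)

def depth_to_vertices(depth, rgb):
    """Back-project each user pixel's depth into a 3D vertex and
    pair it with a texture coordinate into the 2D RGB frame."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) / 1000.0   # depth in mm -> metres (assumed)
    valid = z > 0                           # pixels belonging to the user
    x = (us - CX) * z / FX                  # standard pinhole back-projection
    y = (vs - CY) * z / FY
    vertices = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    # Normalised (u, v) coordinates into the RGB image, so the colour
    # video can be draped over the mesh as a texture.
    uv = np.stack([us[valid] / (w - 1), vs[valid] / (h - 1)], axis=-1)
    return vertices, uv
```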
To maintain the illusion of telepresence, the Kinect cameras not only capture video and depth data but also track the viewer’s position relative to the display pod. This allows the system to display a “motion parallax corrected perspective of the 3D video model of the other user,” according to Dr. Vertegaal’s soon-to-be-published paper on the system.
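The paper’s rendering details aren’t spelled out here, but motion parallax correction generally amounts to re-rendering the 3D model each frame from a virtual camera placed at the tracked viewer’s head. A hedged sketch: the look-at construction is standard graphics math, while the pod-centre origin and example coordinates are assumptions.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a view matrix for a virtual camera at `eye` looking at `target`."""
    f = target - eye
    f = f / np.linalg.norm(f)                       # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)  # right
    u = np.cross(s, f)                              # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye               # world -> camera translation
    return view

# Each frame: the Kinects report the viewer's head position relative to the
# pod, and the remote user's 3D video model is re-rendered from that
# viewpoint, so the perspective shifts as the viewer walks around.
viewer_head = np.array([0.4, 1.6, 1.2])   # metres from pod centre (example)
pod_centre = np.array([0.0, 1.0, 0.0])    # assumed origin at the cylinder
view_matrix = look_at(viewer_head, pod_centre)
```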
In addition to TeleHuman, the Queen’s researchers have also developed BodiPod, an interactive 3D anatomy model of the human body. The model can be explored in 360 degrees through gesture and speech interactions.
When people approach the Pod, they can wave in thin air to peel off layers of tissue. In X-ray mode, as users get closer to the Pod they can see deeper into the anatomy, revealing the model’s muscles, organs and bone structure. Voice commands such as “show brain” or “show heart” will automatically zoom into a 3D model of a brain or heart.
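As a rough illustration of how such proximity-driven peeling might work, the sketch below maps a tracked user distance to an anatomy layer, revealing deeper layers as the user approaches. The layer names, thresholds and interaction range are assumptions for illustration, not values from the researchers.

```python
# Ordered from outermost to deepest layer (names are illustrative).
LAYERS = ["skin", "muscles", "organs", "skeleton"]
NEAR, FAR = 0.5, 2.5   # metres; assumed interaction range around the pod

def layer_for_distance(distance_m: float) -> str:
    """Closer distances reveal deeper anatomical layers (X-ray mode)."""
    t = min(max((FAR - distance_m) / (FAR - NEAR), 0.0), 1.0)
    index = min(int(t * len(LAYERS)), len(LAYERS) - 1)
    return LAYERS[index]

for d in (2.6, 1.8, 1.1, 0.5):
    print(f"{d} m -> {layer_for_distance(d)}")
# 2.6 m -> skin, 1.8 m -> muscles, 1.1 m -> organs, 0.5 m -> skeleton
```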
Dr. Vertegaal will unveil TeleHuman and BodiPod at CHI 2012, the premier international conference on human-computer interaction, in Austin, Texas, May 5-10.