SIGGRAPH has always served as a unique forum for an array of cool and innovative technologies and computer graphics approaches. Last week’s SIGGRAPH 2015 conference continued the trend.
This marked the 42nd conference and exhibition; SIGGRAPH reported that almost 15,000 attendees, partners and media from 70+ nations descended upon the Los Angeles Convention Center.
Kristy Pron, SIGGRAPH’s Emerging Technologies Program Chair, said at this year’s conference, “we wanted to find technologies that can be applied to daily life, whether it will be tomorrow or in a few years. We also wanted to uncover practical emerging technology apps from various industries such as automotive.”
So here are a few examples that are still in the nascent stage but could have real-world applications soon:
SemanticPaint – A collaborative effort by Microsoft, the University of Oxford and Stanford University. The SIGGRAPH demo unveiled what the research team describes as a “new interactive and online approach to 3D scene understanding.” The system lets users scan their environment and simultaneously segment the scene interactively by “reaching out and touching any desired object or surface,” with continuous live feedback. The researchers added that errors in the segmentation and/or learning can be corrected immediately, a capability they say batch, offline methods lack. They believe SemanticPaint will usher in new applications in augmented reality, interior design and human/robot navigation.
“It provides the ability to capture substantially labelled 3D datasets for training large-scale visual recognition systems,” the researchers noted.
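The core idea – a user touch supplies a labelled training example, and the segmentation updates on the spot so corrections take effect immediately – can be illustrated with a toy sketch. The class name, the colour features and the nearest-neighbour rule below are all illustrative assumptions, not the actual SemanticPaint method, which operates on dense 3D reconstructions:

```python
# Toy sketch of interactive, online scene labelling in the spirit of
# SemanticPaint (names and features are illustrative, not the paper's).

class OnlineSegmenter:
    """Nearest-neighbour label propagation that learns from each touch;
    later touches (corrections) influence labelling immediately,
    unlike a batch pipeline retrained offline."""

    def __init__(self):
        self.examples = []  # (feature, label) pairs gathered from touches

    def touch(self, feature, label):
        # Each touch supplies a labelled training example on the spot.
        self.examples.append((feature, label))

    def classify(self, feature):
        # Label an unseen voxel by its nearest labelled example.
        if not self.examples:
            return None
        _, label = min(
            self.examples,
            key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], feature)),
        )
        return label

seg = OnlineSegmenter()
seg.touch((0.9, 0.1, 0.1), "chair")   # user touches a red chair
seg.touch((0.1, 0.1, 0.9), "floor")   # then the blue floor
print(seg.classify((0.8, 0.2, 0.2)))  # a reddish voxel → "chair"
```

The point of the sketch is the interaction loop, not the classifier: because training examples arrive one touch at a time, a mislabelled region can be fixed by simply touching it again with the right label.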
Cypress, CA-based Christie Digital Systems USA, a visual and audio technology company, demonstrated its latest digital ‘sandbox.’ The company auto-calibrated projection-mapped displays on a variety of surface types, cutting calibration time to less than 30 seconds. Attendees saw a 3D-printed apartment building projection-mapped in real time. The process – which uses cameras, projectors and 3D geometry “to augment any real object’s surface with imagery defined by a virtual model” – was also recently patented.
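The geometric step at the heart of projection mapping – deciding which projector pixel should light up each point of the physical object – can be sketched with a simple pinhole model. The focal length and principal point below are made-up numbers for illustration; Christie’s calibrated system handles distortion, multiple projectors and camera feedback:

```python
# Minimal sketch of the geometry behind projection mapping (all numbers
# illustrative): project a vertex of the virtual model through a pinhole
# projector model to find the projector pixel that should light it up.

def project(point3d, focal=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point (metres, in the projector's
    coordinate frame) to projector pixel coordinates."""
    x, y, z = point3d
    return (focal * x / z + cx, focal * y / z + cy)

# A corner of the 3D-printed building model, 2 m in front of the projector.
u, v = project((0.5, 0.25, 2.0))
print(round(u), round(v))  # → 840 460
```

Auto-calibration amounts to recovering those projector parameters automatically, typically by projecting known patterns and observing them with a camera, which is what lets the setup finish in under 30 seconds rather than being tuned by hand.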
Mid-Air Touch Display – A team of researchers from Keio University and the University of Tokyo in Japan demonstrated a system that allows bare-handed visuo-tactile interaction with 3D objects floating in mid-air. The researchers generated ultrasound fields that produce rich tactile textures, so users could see the floating virtual objects with the naked eye and touch them with their hands.
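The standard trick behind such ultrasound fields is a phased array: each transducer is driven with a phase offset chosen so that all emissions arrive at a chosen point in phase, concentrating acoustic pressure there. The sketch below shows that phase calculation with illustrative parameters (a 40 kHz four-element line array); the actual Keio/Tokyo hardware and control scheme are not described in this article:

```python
import math

# Sketch of how an ultrasound phased array focuses pressure at a point
# in mid-air (illustrative parameters; real arrays differ).

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # 40 kHz, a common ultrasonic transducer frequency

def phase_delays(transducers, focus):
    """Phase offset (radians) per transducer so all emissions arrive at
    `focus` in phase, producing a focal point of high acoustic pressure."""
    dists = [math.dist(t, focus) for t in transducers]
    farthest = max(dists)
    return [2 * math.pi * FREQ * (farthest - d) / SPEED_OF_SOUND
            for d in dists]

# Four transducers in a line (metres), focusing 20 cm above the centre.
array = [(x, 0.0, 0.0) for x in (-0.03, -0.01, 0.01, 0.03)]
delays = phase_delays(array, (0.0, 0.0, 0.2))
```

Moving the focal point over the surface of a virtual object, in sync with the displayed graphics, is what gives the impression of touching a shape that is not physically there.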
These and other projects are pushing technology boundaries; going forward, it’ll be fascinating to see how many of these will ultimately impact the way we live and work.