Feature Story
CLIENT: STANTUM
July 14, 2011: Electronic Design
Those of you who've read my previous columns may have spotted a penchant for proselytizing the combination of multi-touch and a stylus. The motivation behind this crusade is to pave the way for a new, natural user-interface paradigm that is both bi-manual and multi-modal. Bi-manual means that the interface leverages our natural ability to use both hands to carry out complex tasks.
If we are born with two hands, there must be a good reason. Multi-modal interfaces let us choose the input technique that seems most appropriate for a given task or context: a fingertip to flip the pages of an e-book, say, or a stylus to draw on or annotate it. Though YouTube already hosts dozens of popular videos showcasing the art of finger-painting on an iPad, we can safely assert that, in everyday life, Homo sapiens won't be moving back anytime soon to a practice that Neanderthals gave up about 45,000 years ago.
Rhoda Alexander, director of monitor research for IHS iSuppli Market Research, puts it this way: "In the mad vendor rush to cash in on the iPad phenomenon with similarly configured tablet devices, hardware designers are overlooking an opportunity to leapfrog over what Apple is currently offering by moving the mobile platform beyond a simple consumption tablet to include true creation offerings. Today's mobile media tablets offer a fresh doorway into an increasingly rich content environment, including Web sites, e-books, movies, gaming, and a wealth of new applications."
Ambidexterity and multi-modality are the twin pillars that will move "the mobile platform beyond a simple consumption tablet" and make touch-enabled devices more creative and productive to use. One field of application in particular shows a soaring need for both: augmented textbooks.
Unlike their printed predecessors, electronic textbooks can enrich traditional educational materials with embedded annotation tools, advanced bookmarks and search features, didactic videos and animations, interactive assessments, and exercises that can be completed directly in the book, then stored instantly in the cloud.
You might think that such applications have no chance of replacing printed textbooks in the near future. In fact, this revolution is already under way in various countries around the globe. In South Korea, the government recently confirmed its plan to replace all printed textbooks nationwide with electronic counterparts by 2015. That move will benefit from now-available multi-touch technologies that pair precision with stylus input to enable handwriting recognition, making them especially suitable for products sold into Asian markets.
Getting back to ambidexterity and multi-modality: a little more than a year ago, at the SID Display Week exhibition, Stantum introduced a new generation of IVSM (Interpolated Voltage Sensing Matrix) touch panels capable of handling full multi-touch finger input and high-resolution stylus input simultaneously (10 simultaneous touches, in fact).
Unlike existing solutions, this touch panel needed no dedicated electronic stylus and could track fingers and a stylus at the same time. By contrast, alternative solutions that combine a capacitive sensor with an electromagnetic stylus must alternate between the two input techniques.
The only limitation of that system was that, at the time, it was not capable of accurately discriminating between the different types of contacts (stylus, finger, palm, etc.). But "smart" contact discrimination is the key to delivering ambidexterity and multi-modality, so flash forward to SID Display Week 2011 in May.
There, the first generation of touch panels was introduced that reports, at each acquisition frame, not only all the contact points but also the type of each contact, be it a finger, the palm of your hand, or a stylus. For each contact detected, the controller also reports its weight as well as its aspect ratio (width and height). And since the size of a finger contact is proportional to the pressure the finger exerts on the screen, a three-level (low, medium, hard) pressure value can be extrapolated. This gives user experience (UX) designers even more room for creativity.
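To make the idea concrete, here is a minimal sketch in C of what such a per-frame contact report might look like and how a coarse pressure level could be extrapolated from contact size. The type, field names, and area thresholds are illustrative assumptions, not Stantum's actual controller interface.

/* Hypothetical per-contact report, modeled on the capabilities described
 * above; names and thresholds are placeholders, not a real Stantum API. */
typedef enum { CONTACT_FINGER, CONTACT_PALM, CONTACT_STYLUS } contact_type_t;
typedef enum { PRESSURE_LOW, PRESSURE_MEDIUM, PRESSURE_HARD } pressure_level_t;

typedef struct {
    contact_type_t type;  /* contact type, reported each acquisition frame */
    int x, y;             /* position in sensor coordinates */
    int width, height;    /* contact footprint, i.e., its aspect ratio */
    int weight;           /* contact weight reported by the controller */
} contact_t;

/* A finger's contact area grows with applied pressure, so a coarse
 * three-level pressure value can be extrapolated from footprint size.
 * The area thresholds below are arbitrary. */
pressure_level_t extrapolate_pressure(const contact_t *c)
{
    int area = c->width * c->height;
    if (area < 80)
        return PRESSURE_LOW;
    if (area < 200)
        return PRESSURE_MEDIUM;
    return PRESSURE_HARD;
}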
"Designers are understandably wary of pursuing stylus solutions, given the abundance of poorly executed solutions users have endured in the past. But improvements in touch technology, more powerful controllers, and increasingly sophisticated mobile operating systems make for a whole new design environment," Alexander says. "The technology is there for the designer willing to take a chance on raising the game."
Past solutions were "poorly executed" mostly because they lacked trustworthy palm rejection, which contact discrimination now makes possible. Reliable palm rejection lets handwriting input on a display feel natural and transparent, so a user can at last write comfortably even with a palm resting on the screen.
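To show why contact discrimination makes palm rejection straightforward, here is a short sketch that filters one acquisition frame, reusing the hypothetical contact_t type above. The choice to also ignore finger contacts while a stylus is down is one plausible design decision, not a documented behavior.

#include <stddef.h>

/* Minimal palm-rejection pass over one acquisition frame. Palm contacts
 * are always dropped; while a stylus is present, stray finger contacts
 * are ignored too, so a hand resting on the display generates no input. */
size_t reject_palms(const contact_t *in, size_t n, contact_t *out)
{
    int stylus_down = 0;
    for (size_t i = 0; i < n; i++)
        if (in[i].type == CONTACT_STYLUS)
            stylus_down = 1;

    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        if (in[i].type == CONTACT_PALM)
            continue;                /* never forward palm contacts */
        if (stylus_down && in[i].type == CONTACT_FINGER)
            continue;                /* ignore fingers while writing */
        out[kept++] = in[i];
    }
    return kept;
}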
Enriched with these new capabilities, all complementing one another, the coming generation of multi-touch systems helps realize the new, natural user-interface paradigm described above, the one the industry has long been hoping for.