Neal's Notes

An Overview of Key Technologies Rolled Out Since 1946

As IEEE celebrates its 75th anniversary this year, it got me wondering – what key technologies (including hardware and software products/services/processes) have made a significant impact in our lives?  I reached out to several colleagues worldwide to glean their formidable insights. 

Here’s a brief intro for each:

Chris Longstaff, VP-Product Management, Mindtech.  UK-based Mindtech develops solutions around AI and visual processing. The company focuses on the provision of tools and datasets for training AI systems.

Charles Macfarlane, CBO, Codeplay. The UK-based software company has a long history of developing compilers and tools for different hardware architectures.

Bob O’Brien, Co-Founder/Principal Analyst/CFO, Display Supply Chain Consultants (DSCC). The company delivers insights through consulting, syndicated reports and events and has offices in the US, Europe, Japan, Korea and China.

Jon Peddie, CEO, Jon Peddie Research (JPR). Dr. Peddie is an IEEE senior/lifetime member who heads up Tiburon, CA-based JPR.  JPR is a technically oriented multimedia and graphics research and consulting firm. JPR’s Market Watch is a quarterly report focused on PC graphics controllers’ market activity for notebook and desktop computing.

Sri Peruvemba, CMO, Marketer International. Based in Silicon Valley, Peruvemba has been an influential advocate in the advancement of electronic hardware technologies and is an acknowledged expert on sensors, electronic displays, haptics, touch screens, and related technologies. He advises tech firms throughout the US, Canada, and Europe.

Karu Sankaralingam, Founder/CEO/CTO, SimpleMachines (SMI). Dr. Sankaralingam started as a professor of computer science at UW-Madison in 2007. He has 17 patents and has published 91 papers. Founded in 2017, SMI is an AI-focused semiconductor company. 

Ken Werner, Principal, Nutmeg Consultants.  Werner, based in Norwalk, CT, is a leading authority in the global electronic display industry and is especially active with companies evaluating display technologies for new products; entering or repositioning themselves in the industry; or requiring display technology validation or strategic information on display technologies.

And here’s what they had to say:

Chris Longstaff:

More than any other technology, including 3D and holographic display technologies, artificial intelligence has without a doubt given us more false peaks of hope and deep troughs of despair over the last 75 years. More recently, AI technology has been in clear ascendancy and has delivered many real-world results that were unthinkable years earlier without these advanced machine learning algorithms, driven by deep neural networks – from speech understanding and natural translation of text through to image understanding and motion analysis. These outstanding results have been driven by a perfect trifecta of factors: compute, algorithms, and data.

Enhanced compute capability for training networks has been driven by dedicated silicon specifically targeting training with solutions from both the established players and new entrants alike. This in turn has led to the ability to train and iterate networks and algorithms at a higher speed than ever before.  Other advances have led to the availability of high performance, low power edge compute, bringing the implementation of the trained networks to the end user.

The final piece of the puzzle has been the availability of data. Massive amounts of real-world data have been gathered and labelled for training these AI systems, though access to such data remains limited to a privileged few. Significant recent developments by companies have also advanced the use of synthetic data, allowing for an almost unlimited supply of high-quality, privacy-free data to be used in training networks, helping in part to address the 800 lb. gorilla of ethics in AI.

Charles Macfarlane:

With every new market segment, hardware is invented and software starts in low-level and proprietary ways. Gaming required bespoke implementations and evolved to embrace industry-agreed standards, with games written in a way that runs on multiple consoles. PCs, mobile phones, tablets and to some extent TVs similarly started with complex proprietary software running on increasingly complex and performant processors, but eventually embraced a standard programming platform. The benefits include modern programming techniques for the latest processors, accelerators, memory and interconnects, speed of development and reuse, availability of engineers, and long-term maintenance.

Three markets now well along this path to maturity are automotive, AI and high-performance computing (HPC).

Firstly, automotive has traditionally been slow to embrace the latest technology, ensuring safety is held to the highest standards. Automakers are now embracing the latest computer vision (CV) techniques, along with machine learning (ML), and benefiting from the latest processor solutions to make cars safer through features like driver monitoring and collision avoidance. Automotive is feeling the pain of traditional software techniques and is on the cusp of taking advantage of open standards to achieve its demanding roadmap of safety and automated features.

AI has spawned a massive amount of processor innovation, with architectures tuned to speed up critical functions such as matrix multiply and convolutions. These accelerators are part of a bigger system but present software developers with the challenge of integrating their features. Most vendors today create their own bespoke software stack, leaving their customers and end-user developers with an integration headache.
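
To make that concrete, the hot loop these accelerators are built around is little more than a repeated multiply-accumulate. Below is a deliberately naive, generic sketch in C++ (not any vendor's kernel) of the dense matrix multiply that dedicated AI silicon parallelizes in hardware:

```cpp
#include <vector>
#include <cstddef>

// Naive dense matrix multiply: C = A * B, with A (n x k), B (k x m), C (n x m).
// AI accelerators devote most of their silicon to running exactly this kind of
// multiply-accumulate loop massively in parallel, often at reduced precision.
void matmul(const std::vector<float>& A, const std::vector<float>& B,
            std::vector<float>& C, std::size_t n, std::size_t k, std::size_t m) {
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < m; ++j) {
            float acc = 0.0f;
            for (std::size_t p = 0; p < k; ++p) {
                acc += A[i * k + p] * B[p * m + j];  // multiply-accumulate
            }
            C[i * m + j] = acc;
        }
    }
}
```

The integration headache comes from mapping loops like this one onto each vendor's bespoke stack rather than onto a common programming model.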

HPC has become the new race to the moon. Big computers have been around for years, used for applications such as science and code breaking. Many of the earliest mainstream computers came from IBM and Intel (much more history is available at the Computer History Museum in Mountain View). The Top 500 supercomputers today are mostly Intel CPU systems, and most interesting is the inclusion of accelerators, achieved today with graphics processing units (GPUs). Yet again, programmability was a huge barrier for developers, and Nvidia succeeded in creating its CUDA platform, which is now extensively adopted by programmers.

Today SYCL is becoming the open standard platform to carry developers through the next decades of development, embracing the rapid introduction of processor systems and allowing AI to be deployed in our lives.
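
For a feel of what that open-standard, single-source model looks like in practice, here is a minimal SYCL sketch (a simple vector addition, assuming a SYCL 2020 toolchain such as DPC++ is available; details vary by implementation):

```cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <iostream>

int main() {
    const size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // picks a default device: GPU, CPU, or other accelerator

    {   // Buffers hand the host data to the SYCL runtime for the scope below.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);
            // One work-item per element; the same source runs on any supported device.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }   // buffers go out of scope: results are copied back into c

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
}
```

The point of the standard is that this same source can target CPUs, GPUs and other accelerators without a vendor-specific rewrite.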

Bob O’Brien:

Doubtless many non-display technologies like the transistor and optical communications deserve to be on IEEE’s list, but others can cover those better than I can.

The technology that has dominated the display industry for most of IEEE’s lifetime has been the color cathode ray tube (CRT).  Readers new to displays may be surprised to learn that the peak year for unit sales of CRT TVs was as recent as 2004.  While monochrome CRT had been developed prior to 1946, and the first years of television featured programs in black and white, the development of color by RCA scientists in the 1950s led to the first global display industry.

I started my career as an engineer in CRTs, and I recall a colleague commenting that if the technology did not already exist, you could not convince him that it would work. In a color CRT, three electron beams are projected at high voltage through a cone-shaped volume held under vacuum. Color separation is accomplished by a shadow mask: most of the electrons hit the mask, but those that pass through strike the screen and illuminate red, green and blue phosphors. The beams are scanned across the screen by an electromagnetic deflection coil.

Big and bulky they were, but CRTs were a successful product for decades.  Screens got larger and eventually got flatter, but both of these dimensions added weight because of the requirement to operate in a vacuum, culminating in Sony’s 40” Wega TV which weighed more than 300 lbs. (136kg).  When the personal computer industry emerged, CRTs were enlisted to serve as computer monitors. Over the history of the industry, more than three billion color CRT TVs were sold, and more than one billion color CRT monitors.

Color CRT enabled a media revolution in TV and a productivity revolution in personal computing, but flat panel displays were required for the next step to mobile computing and communications. Most of the people reading this article will do so on a liquid crystal display (LCD). LCDs were also first invented by RCA scientists but remained niche products until they were integrated into notebook PCs in the 1990s. Lightweight, compact, and less power-hungry than CRTs, LCDs were essential to mobile computing, and businesses (and a few wealthy consumers) were willing to pay a tremendous premium for the technology. In the early 1990s, a 10.4” VGA panel cost about $2,500 for a laptop PC costing up to $5,000.

Those high prices allowed a virtuous cycle to take hold, where investments in new capacity led to economies of scale and learning effects, driving costs and prices down, which led to growing markets and new applications. The cost of LCDs has been reduced by 99% in three decades, and the volume of LCDs has increased from millions per year to billions.  More LCDs are sold each year than in the entire history of CRT TV.

The color CRT brought entertainment and productivity to the developed world, but the LCD has enabled devices accessible to nearly every person on Earth.  Our world of ubiquitous displays would be hardly recognizable to the engineers at IEEE’s founding in 1946, and these two technologies have done the most to get us here.

Displays have helped to diminish the impact of the pandemic by enabling WFH and other social distancing practices, but while cases are falling it remains important to remember the three W’s.  Stay safe.

Jon Peddie:

Most people attribute the beginning of 3D in computers to Ivan Sutherland’s Sketchpad project at MIT in 1963, but over a decade of computer graphics work had gone on before Sutherland ever got to the MIT campus.

First CG computer 1951

The Whirlwind computer was developed at MIT in 1951 and was the first computer that operated in real time, used video displays for output, and the first that was not simply an electronic replacement of older mechanical systems.1

Beyond these early efforts, it is generally accepted that the first workstation was developed in 1972 at Xerox PARC (Palo Alto Research Center), when the company launched Project “Alto“ to build a personal computer to be used for research.

Tablet designed 1972

The Alto, as it came to be known, was sort of a test bed for the ideas Alan Kay had for his (now) famous Dynabook tablet design. Kay saw that the technologies needed to develop a tablet computer could not be realized until closer to the end of the millennium, so he saw the Alto as a vision, or a rallying call, for others who might later evolve a fully fledged Dynabook. The Alto was started in late 1972, as much as a reaction to POLOS as anything else. The team felt time-sharing had had its day and agreed with Kay; they wanted to see computing power delivered into the hands of the individual.

Game console introduced 1972

Magnavox’s Odyssey, introduced in 1972, is generally considered to be the first commercially available home video game console; it set the stage for Atari and others.

The PC 1975

Although it is argued that the first personal computer was the Datapoint 2200, introduced in 1970, it wasn’t used by consumers. The first computer consumers could play with was the Mark-8 microcomputer, designed by Jonathan Titus around the Intel 8008 processor. Titus’ original name for the computer was “PE-8”, in honor of Popular Electronics magazine. Shortly after that, MITS completed its first prototype Altair 8800 microcomputer.

3D graphics in games 1983

I, Robot, released by Atari in 1983, is considered the first 3D polygonal game produced and sold commercially. The genre coalesced in 1992 with Wolfenstein 3D, originally released for DOS on May 5, 1992 and inspired by the 1980s Muse Software 2D video games Castle Wolfenstein and Beyond Castle Wolfenstein. The game is widely credited by critics and game journalists with popularizing the genre on the PC and establishing the basic run-and-shoot archetype upon which subsequent first-person shooters were based.

The GPU 1999

The graphics processing unit (GPU) of today is quite different from the first graphics controllers mentioned above. The GPU was, and is, the culmination of accumulated functions and large-scale semiconductor integration. By 1985 the graphics controllers that were the GPU’s predecessors had become heterogeneous in their functions, and as early as 1991 we began to see integrated graphics processors. 3Dlabs (in the UK) developed its Glint Gamma processor, the first programmable transform and lighting (T&L) engine, as part of its Glint workstation graphics chips and introduced the term GPU in 1997. Then in 1999, Nvidia developed an integrated T&L engine for its consumer graphics chip, the GeForce 256. ATI quickly followed with its Radeon graphics chip. But Nvidia popularized the term GPU and has forever since been associated with it and credited with inventing the GPU.

Tessellation, AI, and ray tracing 2001 – 2018

Specialized computer graphics operations that had been run in software on the CPU moved to dedicated hardware accelerators within the GPU and became dramatically faster in the process. It was a combination of Moore’s law and putting the function where the action was.

AMD introduced hardware tessellation; Nvidia introduced hardware AI and accelerated ray tracing. Nvidia astonished the world by introducing the largest GPU ever made, the Ampere, with a mind-boggling 38 billion transistors.

Mesh shading 2020

GPU development never slowed down, and in 2016 AMD introduced primitive shaders in its Graphics Core Next (GCN) Vega GPU. That led to the mesh shader developed by Nvidia in its Turing GPU in 2018, and that led to Microsoft’s DirectX 12 Ultimate enabling it all.

Meanwhile, Epic put out a demo of mesh shading on the new PlayStation 5 that showed an astonishing level of detail – billions of subpixels – in real time.

The visuals are stunning and made with what Epic is calling Nanite virtualized micropolygon geometry. This new level of detail (LOD) geometry, says the company, will free artists to create as much polygon detail as the eye can see—that may be an understatement.

Performance 2000 – 2026

Computer performance can be measured in several ways, as can computer graphics performance. No one way is the best or the most correct; it’s really a matter of what is important to the user.

GigaFLOPS is as good a measurement as any, and one that can cut across different platforms for comparison. The real point of this chart is the roll-off of performance gains over time. As many have said, Moore’s law is slowing down. That’s true, and what it reveals is that we will see new, clever, innovative ways to squeeze more performance out of our nanometer-sized transistors: architectural tricks with caches, multiple chips, memory, and most of all software.
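
For readers unfamiliar with the metric, theoretical peak GigaFLOPS is simply execution units × clock speed × floating-point operations per unit per cycle; measured application throughput lands well below that. The figures in this sketch are hypothetical, chosen only to show the arithmetic:

```cpp
#include <iostream>

int main() {
    // Hypothetical GPU: 2048 shader cores, 1.5 GHz clock,
    // 2 floating-point operations per core per cycle (one fused multiply-add).
    const double cores = 2048;
    const double clock_ghz = 1.5;
    const double flops_per_cycle = 2;

    const double peak_gflops = cores * clock_ghz * flops_per_cycle;
    std::cout << "Theoretical peak: " << peak_gflops << " GFLOPS\n";  // 6144 GFLOPS
    // Real workloads achieve only a fraction of this peak, which is why
    // architecture and software matter more as transistor scaling slows.
}
```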

Sri Peruvemba:

Reading on paper is a joy but the content doesn’t change; content on an electronic display changes but it’s not as fun to read. In a quest to create a medium that looks and feels like paper but could bring the world’s knowledge to a ‘single sheet’, scientists at MIT created Electrophoretic Displays, popularly known as Electronic Paper or ePaper. 

That was in 1997; today, ePaper is the reason you purchase an Amazon Kindle or electronic shelf labels. ePaper not only looks like paper, it isn’t distracting like traditional displays, and it consumes hardly any power.

In the future we will carry a rolled up or folded piece of ePaper that will be our map, our phone, our laptop/tablet, our book……. No, it won’t replace toilet paper…. Don’t go there.

Karu Sankaralingam:

Processor architecture: 

One of the pillars of how computers work is the von Neumann computing model, which in simple terms is to fetch one instruction, execute it, then fetch the next, and so on. While extremely powerful and programmable, this model ends up being extremely power-hungry. With the recent explosive need for computational capability for AI, this model has become too cumbersome. Dataflow computing, including various hybrids of dataflow and von Neumann computing, eliminates this overhead by processing information as dictated by where the information should flow – hence the name dataflow computing. First invented academically in the late 70s, explicit dataflow machines are being pioneered by startups and are powering a new generation of chips for AI and machine learning.
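
As a toy illustration of the difference (my own sketch, not any particular chip's execution model): in a dataflow machine an operation fires as soon as all of its input values have arrived, rather than waiting its turn behind a program counter. The loop below mimics that firing rule for the expression (a + b) * (c + d):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// A node fires when both of its operands are present -- the dataflow firing rule.
struct Node {
    std::string lhs, rhs;                      // names of input values
    std::function<double(double, double)> op;  // what to compute
    std::string out;                           // name of the produced value
};

int main() {
    // Dataflow graph for (a + b) * (c + d).
    std::vector<Node> graph = {
        {"a", "b", [](double x, double y) { return x + y; }, "t1"},
        {"c", "d", [](double x, double y) { return x + y; }, "t2"},
        {"t1", "t2", [](double x, double y) { return x * y; }, "result"},
    };

    std::map<std::string, double> values = {{"a", 1}, {"b", 2}, {"c", 3}, {"d", 4}};

    // Keep firing any node whose operands are available until nothing changes;
    // there is no program counter deciding what runs next.
    bool fired = true;
    while (fired) {
        fired = false;
        for (const Node& n : graph) {
            if (values.count(n.out)) continue;                 // already fired
            if (values.count(n.lhs) && values.count(n.rhs)) {  // operands ready
                values[n.out] = n.op(values[n.lhs], values[n.rhs]);
                fired = true;
            }
        }
    }

    std::cout << "result = " << values["result"] << "\n";  // (1+2)*(3+4) = 21
}
```

In hardware, the two additions here could fire simultaneously without any instruction fetch, which is where the power savings come from.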

Compilers: 

Related to dataflow and how information is processed, one of the things that made the von Neumann computing model so successful was the advent of modern compilers that could easily transform high-level languages into low-level machine language. Such a transformation becomes challenging for dataflow machines, since the compiler’s role includes the placement of data, the placement of compute operations based on the semantics of the programmer, the routing of values from data source to data destination, and possibly the timing of when this communication should occur.

Various mathematical optimization techniques have been used to develop compilers for specific dataflow machines. Recent breakthroughs have combined theoretical advances in numerical optimization, fast numerical solvers, and formulations that cast the compiler problem as a numerical optimization problem. Such compilers, built on open-source frameworks like Julia, have allowed dataflow machines to have generic compilers like von Neumann compilers, with compilation times that are also fast. Some of these have fueled the commercial adoption of dataflow.
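
To see why placement alone is already an optimization problem, consider a crude, hypothetical sketch: assign each operation of a small dataflow graph to one of two tiles, each with limited capacity, so that as few values as possible have to be routed between tiles. Production compilers cast much richer versions of this (placement, routing and timing together) as numerical optimization problems; brute force is enough here to show the shape of it:

```cpp
#include <iostream>
#include <utility>
#include <vector>

int main() {
    // Five operations and the producer -> consumer edges between them.
    const int numOps = 5;
    const std::vector<std::pair<int, int>> edges = {{0, 2}, {1, 2}, {2, 4}, {3, 4}};
    const int tileCapacity = 3;  // each tile has room for at most 3 operations

    // Try every assignment (bit i = which tile op i lands on) and count how many
    // values would have to be routed across the tile boundary.
    int bestCost = -1, bestAssign = 0;
    for (int assign = 0; assign < (1 << numOps); ++assign) {
        int onTile1 = 0;
        for (int i = 0; i < numOps; ++i) onTile1 += (assign >> i) & 1;
        if (onTile1 > tileCapacity || numOps - onTile1 > tileCapacity) continue;

        int cost = 0;
        for (const auto& e : edges)
            cost += ((assign >> e.first) & 1) != ((assign >> e.second) & 1);
        if (bestCost < 0 || cost < bestCost) { bestCost = cost; bestAssign = assign; }
    }

    std::cout << "best placement needs " << bestCost
              << " cross-tile route(s), assignment bits " << bestAssign << "\n";
}
```

Exhaustive search obviously does not scale, which is why casting placement and routing as numerical optimization, with fast solvers, matters for real dataflow compilers.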

Ken Werner:

In 1985 Toshiba introduced the T1100, the world’s first mass-market laptop computer, which, significantly, was compatible with the IBM desktop PC.   At a price of US $1,899 ($4,514 in 2019 dollars), Toshiba was delighted to sell 10,000 of them in the first year.   

Cramming all of the electronic functionality of an IBM PC into a laptop package was challenging, but there was one component without which the T1100 could not have been made: a monochrome twisted-nematic (TN) LCD capable of displaying 80×25 alphanumeric characters and CGA (640×200) graphics on a screen measuring nine by four inches. The display had a just-adequate contrast ratio of 4:1 and a vertical viewing angle of +40 degrees / -15 degrees.

The display could be tilted to bring the viewing angle within the readable zone. The image quality of the display was poor, but it made the T1100 a functional product, and it allowed the computer to run for eight hours on a battery charge. In early 1987, Toshiba upgraded the display to a supertwisted nematic (STN) unit in the T1100 PLUS.

The display made the T1100 possible. The T1100 and its successors provided the time and the revenue stream that permitted the liquid-crystal display to be developed into the vastly better device it eventually became.

1. Peddie, Jon, “Developing the Computer,” in The History of Visual Magic in Computers, Springer Nature Switzerland AG, 2013, pp. 148–158.



Smart Home Technologies Playing an Even More Pivotal Role

With hundreds of millions of Americans now sheltering in place because of the coronavirus, smart home security and automation is fast becoming an integral component of domestic life.

The numbers substantiate this.  Zion Market Research says the global smart home tech market will be almost $54 billion by 2022. Swedish market research firm Berg Insight estimates that about 63 million American homes will qualify as ‘smart’ by 2022 as well. 

Patrick Lucas Austin, writing in Time, says by 2030 we’ll have total Internet of Things (IoT) immersion (and as an aside, ABI Research predicts consumers will spend $123 billion on IoT products by next year):

“The smartest homes will be able to truly learn about their owners or occupants, eventually anticipating their needs.  Developments in robotics will give us machines that offer a helping hand with cleaning, cooking and more.  New sensors will keep tabs on our well-being.  Central to all of this will be the data that smart homes collect, analyze and act upon, helping to turn the house of the future from a mere collection of gadgets and accessories into truly smart homes.”

Daniel Cooley, chief strategy officer at Silicon Labs, says smart-home technology eventually “will be like plumbing – you’ll rely on it.”

And Michael Gardner, who founded Luxus Design Build, a construction firm, adds that many homes are already being built ‘smart’ from the get-go – “it’s such an integral part of the home that we’re designing it from the beginning, where beforehand technology was always an afterthought.”

Many companies are rolling out an array of innovative products and services.  Computer graphic company Nvidia is developing a smart robotic arm that slices/dices vegetables and assists in cleaning up after meals. Japanese bathroom fixture company Toto is testing urine-sampling toilets. 

Some additional examples:

  • Smart thermostat systems from Honeywell and Nest are already using machine learning to adapt their behavior to a home’s occupants, all based on first observing, then replicating their habits. 
  • Independent review site Safety.com reports that Blue by ADT has introduced smart security cameras that offer facial recognition, custom motion zones and smartphone control. The cameras have 24 free hours of cloud storage to guard data from hackers.
  • Kwikset, Lockly and August are releasing smart locks this year. Safety.com says the “Kwikset Halo Touch uses your finger to open the door and connects to your favorite voice assistants to control your locks hands-free. Other locks use digital key fobs, codes or your smartphone.”

The smart home tech market is poised to grow substantially as people become more accustomed to having integrated devices in their homes and feel more secure that their devices won’t be hacked and their data compromised.



Coping with the World’s E-Waste

Understandably the world’s attention is currently focused on the coronavirus.  If you’re adhering to your country’s guidelines/protocols, you’re sheltering in place, keeping your distance when outside; in short, being smart and playing a part in helping to defeat this global scourge.

At home, you’re probably using an array of gadgets – smartphone, tablet, desktop computer, laptop, etc. 

Odds are, some of these items may be getting a bit long in the tooth.  But when it comes to parting with any of them, where do you think they go once tossed?

We now annually churn out more than 50 million tons of electronic waste, and according to a United Nations estimate, less than a quarter of all U.S. electronic waste is recycled – the rest ends up in landfills or is incinerated. 

To put that figure in perspective, the World Economic Forum (WEF) says 50 million tons is equivalent in weight to 4,500 Eiffel Towers – enough to cover an area as big as Manhattan – and the WEF warns the total could increase to 120 million tons by 2050.

And with 5G – and its much faster speeds – looming on the horizon, experts say the mountain of obsolete gadgets will become an avalanche.

Fresno, CA-based recycler ERI, for instance, processes more than six million lb. of discarded electronics each month – but it’s just a drop in the bucket.

“I don’t think people understand the magnitude of the transition,” said ERI co-founder and executive chairman John Shegerian.  “This is bigger than the change of black-and-white to color, bigger than analog to digital, by many multitudes.”

Kyle Wiens, who founded do-it-yourself repair guide company iFixit, added that “our products today don’t last as long as they used to, and it’s a strategy by manufacturers to force us into shorter and shorter upgrade cycles.”

So are we destined to be buried by our gadgets?

One solution touted for a few years by e-waste experts is a ‘circular economy’ paradigm, in which raw materials are recycled, reused and refurbished to create a more sustainable future. The WEF also calls this ‘dematerializing the electronics industry.’ In fact, at the now-postponed 2020 Olympic Games in Tokyo, athletes were slated to get gold, silver and bronze medals made from recycled e-waste.

At the 2019 WEF annual meeting, one of the program articles stated that the rise of a ‘device-as-a-service’ business model could be one avenue.

“This is an extension of current leasing models, in which consumers can access the latest technology without high up-front costs,” said the WEF. “With new ownership models, the manufacturer has the incentive to ensure that all resources are used optimally over a device’s lifecycle.”

The WEF added that Internet of Things (IoT) and cloud computing advances will help speed up dematerialization – “better product tracking and take-back schemes, which consumers trust, also constitute an important first step to circular global value chains.”

But the potential advances come with a serious caveat – academics, business and labor leaders, lawmakers, investors, and entrepreneurs all need to work together in order to make this ‘circular economy’ work – “innovative business and reverse supply chain models, circular designs, safety for e-waste collectors and ways of formalizing and empowering informal e-waste workers are all part of the picture.”

E-waste does pose as serious an existential threat to the environment as other types of pollution like plastic. But if we develop effective e-waste policies and improve e-waste reporting, we can make a difference.



Will AI and Big Data Help Tamp Down Coronavirus?

Most of us by now have had our respective Inboxes inundated with endless jokes about toilet paper, disinfectant wipes, Purell, and more. During this challenging time of social distancing, sheltering in place, wearing a mask, and more, it’s important to maintain some semblance of normality and also a sense of humor.

But on a more serious note, countless companies and educational institutions are in R&D warp drive to quell the worldwide COVID-19 pandemic. And they’re utilizing sophisticated AI and Big Data tools. 

A snapshot of some of these efforts:

BBC reports that Facebook is currently working with researchers at Taiwan’s National Tsing Hua University and Harvard University’s School of Public Health, sharing anonymized data – high-resolution population density maps and people’s movements – to assist in forecasting the spread of the coronavirus.

And during an interview with British startup Exscientia, BBC asked the company’s CEO, Andrew Hopkins, how AI may be effective in combatting the virus. Hopkins said it may be helpful in rapidly developing antibodies and vaccines, scanning through existing drugs to see if any can be repurposed, and designing a drug to fight both current and future coronavirus outbreaks.

“The fastest this could be done is 18-24 months away because of the manufacturing scale-up and all the safety testing that needs to be done,” noted Hopkins.

And Sabine Hauert, a professor at Bristol University, told BBC that AI may make daily life more bearable during the pandemic.

“It can also be used to put people out of harm’s way, for example, using robots to clean hospitals, or telepresence systems for remote meetings, consultations or simply to connect with loved ones,” said Hauert.

Julie Shah, an MIT AI researcher and roboticist, said at her university, existing mobile technologies are being used to develop privacy-preserving contact tracing.

“When someone tests positive for COVID-19, health care providers could download the names of those who were in close proximity to the infected individual during the relevant time frame without accessing their comings and goings.  With that anchoring information, computer scientists could then integrate data from a broad swath of sources – possibly including the amount of virus in wastewater – to forecast precise community-level infection risks,” said Shah.

That data, added Shah, would allow for more dynamic risk assessments.

“It would allow us to decide not whether schools and workplaces should be open, but which ones should be open, and for how long. A high viral-risk day for a specific locale could be the epidemic equivalent of a storm warning.”

Ken Kaplan, writing in The Forecast by Nutanix (an enterprise software company), indicated that AI and Big Data advancements are enabling even small organizations to examine huge amounts of data and offer problem solving recommendations.

“Big Data and AI technologies helped the small software-as-a-service company Bluedot see the first warning signs more than a week before COVID-19 was officially identified by the World Health Organization,” said Kaplan.

David Yakobovitch, principal data scientist with Galvanize, a technology ecosystem for learners, entrepreneurs, startups, and established companies, wrote in Medium that the University of Southampton is using AI technology to model data from search engines to map the outbreak. 

“AI technology is assisting researchers to understand movement patterns of the coronavirus from Wuhan to other parts of China and the rest of the world,” said Yakobovitch.  “These machine learning and AI technologies have assisted researchers to predict the virus, its structure and its spreading methods. Consequently, this will help health professionals understand the solutions needed to combat further spread of the virus.”

Time, of course, is of the essence.  Shah warned it’s paramount to not only “soften the blow of curtailed timelines and busted budgets but fundamentally redesign the way essential services are delivered and preserve the functions of society.  We have the people.  We have the data.  We have the computational force. We need to deploy them now.”



Biochips Are Here to Stay, but Caveat Emptor – Know the Security Risks and Rewards

If you own a dog or cat and love them like family, odds are your vet may have already recommended injecting a biochip transponder into Fido or Fluffy. The device basically serves as an RFID tag: if your pet runs away and is later found, the chip can be scanned to identify them and help you recover your dog or cat.

Now the ante is being raised as microchip implants are starting to be used in humans more frequently not only as a health tool, but in the business environment as well.  

The NYU Tandon School of Engineering, which hosted the nation’s first ever conference on biochip security in September 2018, says that biochips — devices combining biochemistry and electrical/computer processing to run chemical reactions — could revolutionize remote sensors, environmental sampling procedures and medical tests. But these devices may be vulnerable and rife with opportunities to “fake results, or hack, steal, corrupt and counterfeit components, more so because many of these systems are connected to other devices via the web.”

“Attackers can come from anywhere,” notes Ramesh Karri, professor of electrical/computer engineering at NYU and one of the conference’s organizers.  “The chemistry, the sample, the biology, the protocol, the surface, the network hardware and software. And if the chip is implanted in a patient or part of a wearable system, the stakes are life and death.”

The market potential, nonetheless, is enormous. In November 2017, a report by India-based MarketsandMarkets Research estimated that by next year, the global biochip market will reach almost $18 billion.

A Gothenburg, Sweden company, Biohax, has already chipped more than 4,000 people in Sweden and throughout Europe.  According to the company’s founder, Jowan Osterlund, applications range from making purchases to opening locks to passing through security barriers – anything already being done with chips on plastic cards. 

“Tech will move into the body – I’m sure of that,” says Osterlund.

In fact, Fortune reports that Sweden’s national rail network is now biochip-capable; if you have a Biohax chip implanted, for instance, you just hold out your hand and the conductor swipes it (the ticket’s embedded on the biochip). Fortune says that most of the gyms run by Nordic Wellness in Sweden are also biochip-capable – gym members and staffers open secure turnstiles and lockers with their hands and can see their exercise profiles on monitors.

And as reported in The Atlantic, there are now chips capable of tracking a wearer’s live vital signs; coming soon – people will be able to store their medical information on encrypted RFID chips and expect to see GPS-enabled chips that families can use to track relatives suffering from dementia.

“There’s an interest but also a controversy with actual GPS tracking,” notes Luis Martinez, a preventative-medicine specialist in San Juan, Puerto Rico. “A lot of parents will feel safe if they can track real-time where their children are. But other populations are being looked at for different reasons: law enforcement, or you could use a GPS chip to identify registered sex offenders.  I think it’ll be a case-by-case basis where different countries or different societies will decide.”

Dr. Stephen Bryen, a technology policy expert and strategist who founded the Defense Technology Security Administration and served as Deputy Undersecretary of Defense for Trade Security Policy, wrote in his Bryen’s Blog that as biochips evolve, they’ll probably have additional sensors integrated into them and more data that may include medical information, digital photos or other biometrics.

“This makes them even more privacy invasive and jacks up the risk commensurately,” says Bryen.   

And he cautions that stealing an implanted chip may also be possible.

“Consider the theft of kidneys and other body parts,” he notes. “The fact that there have been convictions says it’s a real problem. Compared to organ theft, the theft of an implanted RFID is rapid and easy.”

Despite the digital convenience, acceptance of biochips is still a long way off. Five states – California, Missouri, North Dakota, Oklahoma and Wisconsin – have passed laws prohibiting mandatory implantation of biochips. Richard Oglesby, president of AZ Payments Group, a Mesa, AZ consulting firm, adds that “implanting chips is invasive, unnecessary, and not particularly useful. There are wearable solutions that can accomplish the same things.”

So, will biochips herald the end of our personal privacy?

Haley Weiss, writing in The Atlantic, offered this caveat:

“Sooner or later the laws will change, and the frightening will become familiar…implantable RFID will bring us the next iteration of the yin-and-yang symptoms of technology we’ve seen time and time again. We will likely be healthier, safer, more informed, and more connected, and we will continue to disagree over whether it matters if our privacy and autonomy were the corresponding costs.”



Biometrics Making Inroads with Financial Institutions

While numerous financial institutions worldwide have embraced biometric technology, widespread acceptance and adoption by customers is still in its early stages.

In short, the technology aims to make authentication easier by using various biological data and behavioral characteristics. Some of these include voice authentication, and facial or eye-scanning authentication. With voice authentication, for instance, a person’s voice has unique identity markers that help banks recognize repeat calls from known fraudsters. In fact, Fortune said the banking industry has placed more than 60,000 voices on a blacklist – “a clear example that biometrics are money in the bank for financial firms.”

And the technology itself is becoming big business. Biometrics Research Group said biometric security revenues topped almost $2 billion last year, up from $900 million in 2012. And researchers think biometric technologies may be able to reduce operational risks for financial institutions by at least 20% between now and 2026.

Alistair Newton, an analyst with market research firm Gartner, said his company conducted a digital banking survey last year; many respondents were unaware their bank offered Apple’s Touch ID logins, an authentication service.

“There were also many consumers who were happy to do the extra step and type in a user name and password because it felt more secure to them. So even with the base stuff, like Touch ID, it’s certainly gaining momentum but still has a long way to go,” said Newton.

A few examples of how some financial institutions are using biometrics:

HSBC recently announced a roll out to its 15 million United Kingdom customers. Account holders can access online banking using their voice or fingerprint.

According to American Banker, Citi unveiled a voice authentication service in the U.S. allowing credit card customers to use their voice to verify themselves when they call the bank. Customers who want to use the service initially provide a brief voice sample; once this is done (Citi says it takes less than 60 seconds to set up), when customers subsequently call, their voices are matched to the pre-recorded data. Citi is also considering implementing iris-scanning ATMs.

Barclays teamed up with Hitachi and is offering ‘finger vein’ technology – customers place a finger inside a desktop scanner which checks the individual vein patterns. The technology is currently being used at ATMs in Japan and Poland.

Lastly, customers of Wells Fargo Bank’s Commercial Electronic Office, a mobile banking app, can select from either voice and facial data collection/recognition, or a photo of the whites of their eyeballs – the red vein eye patterns identify individuals and then allow access into the app.

Gartner’s Newton offered up a few caveats – voice authentication will likely achieve widespread acceptance but other technologies like a facial-scanning ATM may be a tougher sell.

“Is that really easier than just putting a card in there? To change customer behavior and habits in financial services, the new solution has to be so much better or easier than the existing services,” noted Newton.

But these hurdles will eventually be surmounted, and over the next few years biometrics will catch on across the financial services world as customers see the benefits of not having to use passwords and find that the technology provides a faster and more secure way to do transactions.

 



How the Internet of Food Is Driving Food Marketing

The buzzword, the Internet of Food, is nowhere near as ubiquitous and overused as the Internet of Things. In short, it connotes the complete food tech ecosystem – food delivery and distribution services, supply chain analytics, and more – and how all of these elements are quickly becoming intertwined.

Mandy Saven, who heads up food, beverage and hospitality at research and advisory firm Stylus, told FoodNavigator last year that technology and digital media are greatly influencing the appearance, experience and taste of food.

And marketers are taking notice. A number of supermarkets, for instance, are promoting what Saven calls ‘wonky’ vegetables, or ‘cosmetically challenged produce.’

“Maybe this will make consumers more comfortable; there’s a sense of gratification that they are using foods closer to how they are in nature,” said Saven.

Meanwhile, tech and food companies are embracing high-tech marketing innovations to move food products and simultaneously do a bit of brand promoting.

Some examples:

Patron used virtual reality (VR) headsets to market its tequila. The promo was called ‘The Art of Patron Virtual Reality Experience.’ GoPro cameras were used to give consumers a bird’s eye view of the Patron distillery in Jalisco, Mexico. Since consumers are demanding to know more about what’s in their food, where it came from and how it was produced, using VR helps to bring the experience directly to them – without having to get up off the couch.

“Increasingly, consumers want to know the origin and backstories of the products they consume,” said Lee Applbaum, Patron’s global CMO. “For us, VR was the ideal way to bring people inside our doors at scale.”

The Wall Street Journal reported that companies large and small are rushing to meet this demand. Kellogg Co. and General Mills advertise online the names and profiles of farmers who grow oats and wheat for their cereals.

At Sam’s Club, codes are now available on produce packages so shoppers armed with a smartphone can scan and glean who grew the food, where it’s from, and how it was grown.

And Campbell’s has a new website, whatsinmyfood.com, which the Journal says is being used “to cultivate a home grown image by detailing, for instance, that SpaghettiOs canned pasta is made with tomatoes mainly from California family farms and cheese that is mostly from Wisconsin.”

On the tech side, food marketers are paying close attention to what food tech companies are rolling out, everything from farming analytics to web platforms connecting consumers with farmers they purchase food from. And these food and beverage startups, according to Dow Jones VentureSource, have raised over $2 billion in less than two years.

One company, for instance, is Tastemade, which describes itself as ‘a video network for the mobile generation.’ The company claims to reach over 88 million people each month.  ‘Tastemakers’ report on an extensive array of unique topics from vegan cooking to Southern BBQ; the site offers up a welter of market intel on food trends.

Another is ipiit. The company’s Food Ambassador app has a database of more than 280,000 food products. Once you set up your preferences (organic, gluten-free, etc.), the app then helps you avoid certain ingredients. You scan the barcode of a given product and check what you want to steer clear of; the app also provides alternative products that fit your parameters.


Going forward, the smartest thing in your Internet-enabled fridge may not be the unit, but the food itself. And all of this will have a profound impact on food marketing. Bon appetit!


Big Data Shaping the Way Marketers Do Business

Big Data. Big Data. Big Data. Those two words have been hyped to death in countless articles (mea culpa – this one too), books, podcasts, videos, and more.

But as a marketer, you can’t ignore it as Big Data has matured enough to become more about math than magic – it’s now revolutionizing marketing. Talk to six market research firms and you’ll get six different forecasts but here’s one example of Big Data’s impact in the years to come – CSC predicts a whopping 4,300% increase in annual data generation by 2020. Gartner predicts that CMOs will soon be spending more on IT than CIOs; and VentureBeat said marketing technology companies have reeled in to date almost $50 billion in investment.

Last spring Forbes Insights surveyed IT execs and senior data honchos about what their firms were doing with Big Data. The results were striking – 21% indicated Big Data was a significant business priority; 38% put it in the top five; almost 80% of the respondents stated revenues had increased 1-3% due to measures implemented by Big Data findings.

And while large enterprises have the resources to weave Big Data into their sales/marketing plans, Mick Hollison, writing in Inc., thinks that start-ups may benefit the most “because they are being born in the era of Big Data and are building capabilities to leverage it into their business models.”

Hollison said as an example, Uber is leveraging geo-spatial data and location intelligence to help expand global operations.

Marketers across every type of business are employing Big Data.

A few examples:

Software Advice, a CRM reviews/analysis firm, uses Big Data to match sales reps to customers. Craig Borowski, a market research associate, said Big Data analytics are being used to match the most qualified sales reps to target specific demographic segments to improve both close rates and customer satisfaction.

Deidre Woollard, head of communications at Partners Trust Real Estate Brokerage & Acquisitions, adds that Big Data is allowing her company to view more refined search data on what people are looking for – and why.

“If we know that people who are doing a search are only looking at two or three photos, those first few photos should be the best ones,” she says. “If we know more people search for homes that are between $1.5-$2.9 million than search for homes that are $3 million and up, it might be a better strategy to price a home that was somewhere in between $2.9 million and $3 million, but under $3 million, to grab that search traffic.”

Lastly, Jackson Riso, who heads up marketing at TickPick, a no-fee secondary ticket marketplace, said his firm uses Big Data to better understand which landing pages shouldn’t be optimized, despite their low conversion rates.

Riso said Big Data enables TickPick to better understand how/when customers convert; they also know that certain high-traffic landing pages have high bounce rates.

“With Big Data we see that those visitors who bounce are coming back via other channels and are becoming some of our most profitable customers,” says Riso. “In other words, the one counterintuitive way marketers can use Big Data is to understand where they should optimize their bounce rates and where they shouldn’t.”

Big Data can divulge trends and patterns that can significantly impact a company’s business strategies – including marketing. Ion Interactive CTO Scott Brinker summed up how Big Data will drive marketing over the next few years:

“Big Data makes it cheaper and easier to test concepts, but marketing is still about coming up with the big idea. Algorithms are great at optimization, but terrible at imagination.”



E-Commerce Environment Still Facing Supply Chain Challenges

No doubt about it, e-commerce continues to grow and while it represents a burgeoning share of total retail sales, there are still significant hurdles to overcome.

“We’re in the midst of a profound structural shift from physical to digital retail,” noted Jeff Jordan of venture capital firm Andreessen Horowitz.

eMarketer reported, for instance, that e-commerce growth by quarter was about five times that of store locations in 2013 and 2014.

Yet there are headwinds.

Market research firm Market Track said companies that want to succeed in e-commerce must operate successfully amid these risk areas, which could undermine their efforts to snare and retain customers:

• Volatility – Prices changing with increasing frequency and unpredictability;
• Non-compliance – Pricing and promoting brands and products outside established guidelines;
• Illegal/illicit activity – Counterfeiting and unauthorized resale;
• Size/scope – More retailers, resellers and products available online than ever before.

JDA Software Group also conducted a survey of more than a thousand U.S.-based online shoppers last year. Of the approximately 35% who bought online and elected to pick up their purchases at a store, about 50% experienced problems in initially getting their purchases. Wayne Usie, a JDA senior VP, said this suggests retailers might find it challenging to expand their e-commerce sales while keeping profit margins intact.

“Retailers might experience service failures because picking products from storage and packing them into unique combinations for customers is easier in a streamlined warehouse than in a chaotic store, where other customers need to be helped and orders can be misplaced,” said Usie.

Meanwhile, e-commerce retailers are collecting more data than ever, but using it effectively is still a challenge. A study by IHL Group for Dynamic Action, which provides retailers with various software-as-a-service (SaaS) solutions, showed that retailers worldwide lost over half a trillion dollars last year due to out-of-stock inventory, an increase of 40% from 2012.

Lindsay Conwell, VP of Solution Engineering for Dynamic Action, said all this big data and the systems built around it can make the e-commerce environment even more challenging for retailers.

“There’s uncertainty as different systems seem to show a slightly different story; retailers are also struggling to get the right data to the right people and places,” said Conwell. “There’s a disconnect in most retailers between marketing and the supply chain. Often, marketing has planned a promotion for a certain date but if the product doesn’t reach the dock in China in time, there is no visibility back to marketing.”

Challenges aside, expect e-commerce retailers to eventually resolve many of these issues and continue to expand their online offerings and services. TradeReady.ca (published by the Forum for International Trade Training, a non-profit organization) reported that next year countries like India, Indonesia and Mexico are expected to experience the highest growth rates in e-commerce sales, primarily because of consumers’ expanding online access via smartphones.

Noted TradeReady.ca, “the race to be successful global retailers will be won by those able to adapt, remain agile, and put the latest and greatest technology to work for them, all while giving their customers exactly what they want.”



Internet of Things Upending Real Estate Industry

No matter if you’re a realtor or property manager, technology has redefined the real estate business over the past few years. And now the Internet of Things (IoT) is giving the industry a much-needed makeover – and all for the better.

One example – modelling. 3D printer technology can replicate models from computer-aided design (CAD), photo montages and other established design tools. Urban Land, published by the Urban Land Institute, says by using architects’ and engineers’ original data, “real estate concepts and images can be visualized three-dimensionally for agents’ use in supporting customers’ reviews and decision making.”

And software applications, noted Urban Land, can facilitate making decorating and furniture decisions while construction is underway and during the entire occupancy life cycle:

“Over time, such apps could displace interior designers who will recast themselves as design coaches and logistics managers. Clients will browse the Internet and select their furniture, fixtures and equipment online, relying on the coach’s direction and follow-through to procure, supply and place the goods.”

Beacon technology, powered by Bluetooth, is now helping agents market homes. As reported by Meg White in REALTORMag, the official magazine of the National Association of Realtors (NAR), one example is an app created by Jeffersonville, IN-based Realty Beacon, LLC. The company partnered with Daniel Island Real Estate in Charleston, SC to produce a branded version of the app for a high-end housing development.

“Because the community doesn’t allow ‘For Sale’ signs, the beacons are usually mounted on a home’s front porch. The lack of For Sale signs can present a challenge to buyers but the beacons present an interesting opportunity to circumvent that. It gives buyers a way to explore the island on their own,” said White.

And using data from an array of Internet-connected devices is helping realtors better market their services and products. Internet-enabled lockboxes, for instance, can help manage showing schedules and provide valuable info to both sellers and listing agents about the duration of tours.

“As you have more connected devices, you build a diary for the home,” said Todd Carpenter, NAR’s managing director of data analytics. “One thing that could become really effective is being able to say ‘my house is more energy efficient, and I can prove it.’ ”

On the commercial real estate side, the IoT is already yielding benefits – wireless sensors allow property owners to offer tenants enhanced security without a recurring monthly fee – the sensors are relatively inexpensive and can be installed in individual rental units. An intelligent building, for example, might also have elevators that recognize your speech when you tell them what floor to go to – and these are features that can contribute to greater occupant satisfaction and longer occupant retention.

New York City-based Hipercept, which provides enterprise information management solutions for commercial real estate companies, said IoT applications can provide property and asset managers with the ability to measure data points while using them to improve tenant experience, energy usage, data flow, and more.

“This type of mass connectivity can also present a differentiation factor for brokers when showing space to prospective tenants,” noted a Hipercept article on IoT in real estate. “In the realm of investors, using the Internet of Things provides immense opportunity for better decision making and allocation of funds, based on the data provided on a certain property, set of properties or parcel of land.”

Sandy Apgar, an international authority on housing, real estate and infrastructure, and a former Assistant Secretary of the Army for Installations and Environment during the Clinton Administration, believes we may eventually see the real estate equivalent of the Bloomberg workstation terminal, miniaturized for handheld devices:

“Much as mixed-used developments have dismantled traditional barriers among residential, office, retail, and entertainment uses, the Internet of Things helps consumers to integrate technology with their work-life choices.”


