An Overview of Key Technologies Rolled Out Since 1946
As IEEE celebrates its 75th anniversary this year, I got to wondering: what key technologies (including hardware and software products, services, and processes) have made a significant impact on our lives? I reached out to several colleagues worldwide to glean their formidable insights.
Here’s a brief intro for each:
Chris Longstaff, VP-Product Management, Mindtech. UK-based Mindtech develops solutions around AI and visual processing. The company focuses on the provision of tools and datasets for training AI systems.
Charles Macfarlane, CBO, Codeplay. The UK-based software company has a long history of developing compilers and tools for different hardware architectures.
Bob O’Brien, Co-Founder/Principal Analyst/CFO, Display Supply Chain Consultants (DSCC). The company delivers insights through consulting, syndicated reports and events and has offices in the US, Europe, Japan, Korea and China.
Jon Peddie, CEO, Jon Peddie Research (JPR). Dr. Peddie is an IEEE senior/lifetime member who heads up Tiburon, CA-based JPR. JPR is a technically oriented multimedia and graphics research and consulting firm. JPR’s Market Watch is a quarterly report focused on PC graphics controllers’ market activity for notebook and desktop computing.
Sri Peruvemba, CMO, Marketer International. Based in Silicon Valley, Peruvemba has been an influential advocate in the advancement of electronic hardware technologies and is an acknowledged expert on sensors, electronic displays, haptics, touch screens, and related technologies. He advises tech firms throughout the US, Canada, and Europe.
Karu Sankaralingam, Founder/CEO/CTO, SimpleMachines (SMI). Dr. Sankaralingam started as a professor of computer science at UW-Madison in 2007. He has 17 patents and has published 91 papers. Founded in 2017, SMI is an AI-focused semiconductor company.
Ken Werner, Principal, Nutmeg Consultants. Werner, based in Norwalk, CT, is a leading authority in the global electronic display industry and is especially active with companies evaluating display technologies for new products; entering or repositioning themselves in the industry; or requiring display technology validation or strategic information on display technologies.
And here’s what they had to say:
Chris Longstaff:
More than any other technology, including 3D and holographic display technologies, artificial intelligence has without a doubt given us more false peaks of hope and deep troughs of despair over the last 75 years. More recently, AI technology has been in clear ascendancy and has delivered many real-world results that were unthinkable only years earlier, enabled by advanced machine learning algorithms driven by deep neural networks: from speech understanding and natural translation of texts through to image understanding and motion analysis. These outstanding results have been driven by a perfect trifecta of factors: compute, algorithms, and data.
Enhanced compute capability for training networks has been driven by dedicated silicon specifically targeting training, with solutions from established players and new entrants alike. This in turn has led to the ability to train and iterate networks and algorithms at a higher speed than ever before. Other advances have led to the availability of high-performance, low-power edge compute, bringing the implementation of the trained networks to the end user.
The final piece of the puzzle has been the availability of data. Massive amounts of real-world data have been gathered and labelled for training these AI systems, though access to such data remains limited to a privileged few. Significant recent developments by companies have also advanced the use of synthetic data, allowing an almost unlimited supply of high-quality, privacy-free data to be used in training networks and helping, in part, to address the 800 lb. gorilla of ethics in AI.
Charles Macfarlane:
With every new market segment, hardware is invented and software starts out in low-level and proprietary ways. Gaming required bespoke implementations and evolved to embrace industry-agreed standards, with games written in a way that lets them run on multiple consoles. Similarly, PCs, mobile phones, tablets and, to some extent, TVs started with complex proprietary software running on increasingly complex and performant processors, but eventually embraced a standard programming platform. The benefits include modern programming techniques for the latest complex processors, accelerators, memory and interconnects; speed of development and reuse; availability of engineers; and long-term maintenance.
Three markets now working their way along this path to maturity are automotive, AI and high-performance computing (HPC).
Firstly, automotive has traditionally been slow to embrace the latest technology, ensuring safety remains at the highest standard. Automakers are now embracing the latest computer vision (CV) techniques, together with machine learning (ML), and benefiting from the latest processor solutions to make cars safer through features like driver monitoring and collision avoidance. Automotive is feeling the pain of traditional software techniques and is on the cusp of taking advantage of open standards to achieve its demanding roadmap of safety and automated features.
AI has spawned a massive amount of processor innovation, with architectures tuned to speed up critical functions such as matrix multiply and convolutions. These accelerators are part of a bigger system, but they leave software developers with the challenge of integrating their features. Most vendors today create their own bespoke software stacks, leaving their customers and end-user developers with an integration headache.
HPC has become the new race to the moon. Big computers have been around for years, used for applications such as science and code breaking. Many of the earliest mainstream computers were achieved thanks to IBM and Intel (much more history is available at the Computer History Museum in Mountain View). The TOP500 supercomputers today are mostly Intel CPU systems, and most interesting is the inclusion of accelerators, today achieved with graphics processing units (GPUs). Yet again, programmability was a huge barrier for developers, with Nvidia succeeding by creating its CUDA platform, which is now extensively adopted by programmers.
Today SYCL is becoming the open-standard platform to carry developers through the next decades of development, embracing the rapid introduction of new processor systems and allowing AI to be deployed in our lives.
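To make the programmability point concrete, here is a minimal sketch of what SYCL code looks like: a simple vector-add kernel written once in standard C++ that a SYCL 2020 implementation can target at a CPU, GPU, or other accelerator. It is an illustrative example, not taken from any particular vendor's stack; the same pattern scales up to the matrix-multiply and convolution kernels that AI accelerators are tuned for.

```cpp
// Minimal SYCL 2020 sketch: one source, many possible accelerators.
// Requires a SYCL 2020 implementation (e.g. a DPC++/ComputeCpp-style compiler).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    sycl::queue q;  // selects a default device (CPU, GPU, or other accelerator)

    {
        // Buffers manage host<->device data movement automatically.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // The kernel: one work-item per element.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope here, so results are copied back to the host

    std::cout << "c[0] = " << c[0] << '\n';  // expect 3
    return 0;
}
```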
Bob O’Brien:
Doubtless many non-display technologies like the transistor and optical communications deserve to be on IEEE’s list, but others can cover those better than I can.
The technology that has dominated the display industry for most of IEEE’s lifetime has been the color cathode ray tube (CRT). Readers new to displays may be surprised to learn that the peak year for unit sales of CRT TVs was as recent as 2004. While monochrome CRT had been developed prior to 1946, and the first years of television featured programs in black and white, the development of color by RCA scientists in the 1950s led to the first global display industry.
I started my career as an engineer in CRTs, and I recall a colleague commenting that if the technology did not already exist, you could not convince him that it would work. In a color CRT, three electron beams are projected at high voltage through a cone-shaped volume held under vacuum. Color separation is accomplished by a shadow mask; most of the electrons hit the mask, but those that reach the screen illuminate red, green and blue phosphors. The beams are scanned across the screen by an electromagnetic deflection coil.
Big and bulky they were, but CRTs were a successful product for decades. Screens got larger and eventually got flatter, but both of these changes added weight because of the requirement to operate in a vacuum, culminating in Sony’s 40” Wega TV, which weighed more than 300 lbs. (136 kg). When the personal computer industry emerged, CRTs were enlisted to serve as computer monitors. Over the history of the industry, more than three billion color CRT TVs were sold, and more than one billion color CRT monitors.
Color CRT enabled a media revolution in TV and a productivity revolution in personal computing, but flat panel displays were required for the next step to mobile computing and communications. Most of the people reading this article will do so on a liquid crystal display (LCD). LCDs were also first invented by RCA scientists but remained niche products until they were integrated into notebook PCs in the 1990s. Light weight, compact, and less power hungry than CRTs, LCDs were essential to mobile computing, and businesses (and a few wealthy consumers) were willing to pay a tremendous premium for the technology. In the early 1990s, a 10.4” VGA panel cost about $2500 for a laptop PC costing up to $5000.
Those high prices allowed a virtuous cycle to take hold, where investments in new capacity led to economies of scale and learning effects, driving costs and prices down, which led to growing markets and new applications. The cost of LCDs has been reduced by 99% in three decades, and the volume of LCDs has increased from millions per year to billions. More LCDs are sold each year than in the entire history of CRT TV.
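As a rough sanity check of what a 99% reduction over three decades implies per year, here is a small worked calculation; the $2,500 starting price comes from the panel cited above, and the flat 30-year horizon is an assumption used purely for illustration.

```cpp
// Rough arithmetic: a 99% price reduction over ~30 years implies
// roughly a 14% average annual price decline.
#include <cmath>
#include <cstdio>

int main() {
    const double start_price = 2500.0;  // early-1990s 10.4" VGA panel, from the text
    const double reduction   = 0.99;    // "reduced by 99%"
    const int    years       = 30;      // "three decades" (approximate)

    double end_price   = start_price * (1.0 - reduction);
    double annual_rate = 1.0 - std::pow(end_price / start_price, 1.0 / years);

    std::printf("End price: ~$%.0f\n", end_price);                          // ~$25
    std::printf("Average annual decline: ~%.1f%%\n", annual_rate * 100.0);  // ~14.2%
    return 0;
}
```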
The color CRT brought entertainment and productivity to the developed world, but the LCD has enabled devices accessible to nearly every person on Earth. Our world of ubiquitous displays would be hardly recognizable to the engineers at IEEE’s founding in 1946, and these two technologies have done the most to get us here.
Displays have helped to diminish the impact of the pandemic by enabling WFH and other social distancing practices, but while cases are falling it remains important to remember the three W’s. Stay safe.
Jon Peddie:
Most people attribute the beginning of 3D in computers to Ivan Sutherland’s Sketchpad project at MIT in 1963, but over a decade of computer graphics work had gone on before Sutherland ever got to the MIT campus.
First CG computer 1951
The Whirlwind computer was developed at MIT in 1951 and was the first computer that operated in real time and used video displays for output, and the first that was not simply an electronic replacement of older mechanical systems.[1]
Other than these early examples, it is generally accepted that the first workstation was developed in 1972 at Xerox PARC (Palo Alto Research Center), when the company launched Project “Alto“ to build a personal computer to be used for research.
Tablet designed 1972
The Alto, as it came to be known, was something of a test bed for the ideas Alan Kay had for his (now) famous Dynabook tablet design. Kay saw that the technologies needed to develop a tablet computer could not be realized until closer to the end of the millennium, so he saw the Alto as a vision, or a rallying call, for others who might later evolve a fully fledged Dynabook. The Alto was started in late 1972, as much as a reaction to POLOS as anything else. The team felt time-sharing had had its day and agreed with Kay; they wanted to see computing power delivered into the hands of the individual.
Game console introduced 1972
Magnavox’s Odyssey, introduced in 1972, is generally considered to be the first commercially available home video game console; it set the stage for Atari and others.
The PC 1975
Although it is argued that the first personal computer was the Datapoint 2200, introduced in 1970, it wasn’t used by consumers. The first computer consumers could play with was the Mark-8 microcomputer, designed by Jonathan Titus using the Intel 8008 processor. Titus’ original name for the computer was “PE-8”, in honor of Popular Electronics magazine. Shortly after that, MITS completed its first prototype Altair 8800 microcomputer.
3D graphics in games 1983
I, Robot, released by Atari in 1983, is considered the first 3D polygonal game produced and sold commercially. The first-person shooter (FPS) genre coalesced in 1992 with Wolfenstein 3D, which has been credited with creating the genre proper and the basic run-and-shoot archetype upon which subsequent titles were based. Originally released on May 5, 1992 for DOS, the game was inspired by the 1980s Muse Software 2D video games Castle Wolfenstein and Beyond Castle Wolfenstein, and it is widely regarded by critics and game journalists as having helped popularize the genre on the PC.
The GPU 1999
The graphics processing unit (GPU) of today is quite different from the early graphics controllers that preceded it. The GPU was, and is, the culmination of accumulated functions and large-scale semiconductor integration. By 1985, graphics controllers, the GPU’s predecessors, had become heterogeneous in their functions, and as early as 1991 we began to see integrated graphics processors. 3Dlabs (in the UK) developed its Glint Gamma processor, the first programmable transform and lighting (T&L) engine, as part of its Glint workstation graphics chips, and introduced the term GPU in 1997. Then in 1999, Nvidia developed an integrated T&L engine for its consumer graphics chip, the GeForce 256. ATI quickly followed with its Radeon graphics chip. But Nvidia popularized the term GPU and has ever since been associated with it and credited with inventing the GPU.
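For readers who have not met the term, “transform and lighting” refers to per-vertex math that earlier systems ran on the CPU. The sketch below shows the heart of it, a 4x4 matrix transform plus a simple Lambertian diffuse term, written generically for illustration rather than as any particular chip's pipeline; the identity matrix and sample vertex values are arbitrary.

```cpp
// A generic sketch of per-vertex transform and lighting (T&L):
// multiply each vertex by a 4x4 matrix, then compute a simple
// Lambertian (N . L) diffuse intensity. Hardware T&L engines moved
// exactly this kind of per-vertex work off the CPU.
#include <algorithm>
#include <cstdio>

// Multiply a 4x4 matrix by a 4-component vertex (column vector).
void transform(const float m[4][4], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    }
}

// Lambertian diffuse term: max(N . L, 0) for unit-length vectors.
float lambert(const float n[3], const float l[3]) {
    float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return std::max(0.0f, d);  // surfaces facing away from the light get none
}

int main() {
    // Identity "model-view-projection" matrix and one vertex, for illustration.
    float mvp[4][4] = {{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}};
    float vertex[4] = {1.0f, 2.0f, 3.0f, 1.0f};
    float normal[3] = {0.0f, 0.0f, 1.0f};
    float light[3]  = {0.0f, 0.0f, 1.0f};  // light shining straight at the surface

    float out[4];
    transform(mvp, vertex, out);
    float diffuse = lambert(normal, light);

    std::printf("Transformed vertex: (%g, %g, %g, %g), diffuse = %g\n",
                out[0], out[1], out[2], out[3], diffuse);
    return 0;
}
```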
Tessellation, AI, and ray tracing 2001 – 2018
Specialized computer graphics operations that had been run in software on the CPU moved to dedicated hardware accelerators within the GPU and got dramatically faster in the process. It was a combination of Moore’s law and putting the function where the action was.
AMD introduced hardware tessellation, and Nvidia introduced hardware AI and accelerated ray tracing. Nvidia astonished the world by introducing the largest GPU ever made, the Ampere, with a mind-boggling 38 billion transistors.
Mesh shading 2020
GPU development never slowed down, and in 2016 AMD introduced primitive shaders in its Graphics Core Next (GCN) Vega GPU. That led to the mesh shader developed by Nvidia in its Turing GPU in 2018, which in turn led to Microsoft’s DirectX 12 Ultimate enabling it all.
Meanwhile, Epic put out a demo of mesh shading on the new PlayStation 5 that showed an astonishing number of subpixel-scale polygons, billions of them, rendered in real time.
The visuals are stunning and made with what Epic is calling Nanite virtualized micropolygon geometry. This new level of detail (LOD) geometry, says the company, will free artists to create as much polygon detail as the eye can see—that may be an understatement.
Performance 2000 – 2026
Computer performance can be measured in several ways, as can computer graphics performance. No one way is the best or the most correct; it’s really a matter of what is important to the user.
GigaFLOPS is as good a measurement as any, and one that can cut across different platforms for comparison. The real point of this chart is the roll-off of performance gains over time. As many have said, Moore’s law is slowing down. That’s true, and what it reveals is that we will see new, clever, innovative ways to squeeze more performance out of our nanometer-sized transistors: architectural tricks with caches, multi-chip designs, memory and, most of all, software.
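For readers who want to see how such a figure is typically built up, here is a back-of-the-envelope sketch of a peak-throughput estimate. The unit count, clock speed and FLOPs-per-clock values are made-up placeholders for illustration, not figures taken from the chart or from any real chip.

```cpp
// Back-of-the-envelope peak-throughput estimate:
//   peak FLOPS = execution units * clock (Hz) * FLOPs issued per unit per clock
// The inputs below are illustrative placeholders, not real chip specs.
#include <cstdio>

int main() {
    const double units           = 2048;    // shader cores / execution units (hypothetical)
    const double clock_hz        = 1.5e9;   // 1.5 GHz (hypothetical)
    const double flops_per_clock = 2;       // e.g. one fused multiply-add counts as 2 FLOPs

    double peak_flops  = units * clock_hz * flops_per_clock;
    double peak_gflops = peak_flops / 1e9;
    double peak_tflops = peak_flops / 1e12;

    std::printf("Peak: %.0f GFLOPS (%.2f TFLOPS)\n", peak_gflops, peak_tflops);
    return 0;
}
```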
Sri Peruvemba:
Reading on paper is a joy, but the content doesn’t change; content on an electronic display changes, but it’s not as much fun to read. In a quest to create a medium that looks and feels like paper but can bring the world’s knowledge to a ‘single sheet’, scientists at MIT created electrophoretic displays, popularly known as Electronic Paper or ePaper.
That was in 1997. Today, ePaper is the reason you purchase an Amazon Kindle or electronic shelf labels. ePaper not only looks like paper, it’s also not distracting like traditional displays, and it consumes hardly any power.
In the future we will carry a rolled up or folded piece of ePaper that will be our map, our phone, our laptop/tablet, our book……. No, it won’t replace toilet paper…. Don’t go there.
Karu Sankaralingam:
Processor architecture:
One of the pillars of how computers work is the von Neumann computing model, which in simple terms is to fetch one instruction, execute it, then fetch the next, and so on. While extremely powerful and programmable, this model ends up being extremely power-hungry. With the recent explosive need for computational capability for AI, this model has become too cumbersome. Dataflow computing, including various hybrids of dataflow and von Neumann computing, eliminates this overhead by processing information as dictated by where the information should flow, hence the name dataflow computing. First invented academically in the late 1970s, explicit dataflow machines are being pioneered by startups and are powering a new generation of chips for AI and machine learning.
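A toy illustration of the difference, under the simplifying assumption that operand readiness alone decides execution order: the sketch below walks a tiny dataflow graph and fires each node as soon as its inputs have arrived, rather than stepping through a fixed fetch-and-execute instruction sequence. It is a conceptual sketch only, not how any particular dataflow chip is actually programmed.

```cpp
// Toy dataflow evaluation of (a + b) * (c - d).
// Instead of a fixed fetch/execute instruction sequence, each node
// "fires" as soon as both of its input values have arrived.
#include <cstdio>
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Node {
    std::string name;
    char op;                       // '+', '-', '*'
    std::string in0, in1;          // names of the values this node consumes
    std::optional<double> result;  // filled in when the node fires
};

int main() {
    // Initially available values (the "tokens" entering the graph).
    std::map<std::string, double> values = {
        {"a", 3.0}, {"b", 4.0}, {"c", 10.0}, {"d", 6.0}};

    std::vector<Node> graph = {
        {"sum",  '+', "a",   "b",    std::nullopt},
        {"diff", '-', "c",   "d",    std::nullopt},
        {"prod", '*', "sum", "diff", std::nullopt}};

    // Keep sweeping the graph; fire any node whose inputs are ready.
    bool fired_something = true;
    while (fired_something) {
        fired_something = false;
        for (Node& n : graph) {
            if (n.result) continue;                      // already fired
            auto i0 = values.find(n.in0);
            auto i1 = values.find(n.in1);
            if (i0 == values.end() || i1 == values.end()) continue;  // not ready yet
            double x = i0->second, y = i1->second, r = 0.0;
            switch (n.op) {
                case '+': r = x + y; break;
                case '-': r = x - y; break;
                case '*': r = x * y; break;
            }
            n.result = r;
            values[n.name] = r;                          // result becomes a new token
            std::printf("fired %-4s -> %g\n", n.name.c_str(), r);
            fired_something = true;
        }
    }
    std::printf("final result: %g\n", values["prod"]);   // (3+4)*(10-6) = 28
    return 0;
}
```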
Compilers:
Related to dataflow and how information is processed, one of the things that made the von Neumann computing model so successful was the advent of modern compilers that could easily transform high-level languages into low-level machine language. Such a transformation becomes challenging for dataflow machines, since the compiler’s role includes the placement of data, the placement of compute operations based on the semantics of the program, the routing of values from data source to data destination, and possibly the timing of when this communication should occur.
Various mathematical optimization techniques have been used to develop compilers for specific dataflow machines. Recent breakthroughs have converged theoretical advances in numerical optimization, fast numerical solvers, and formulations that cast the compilation problem as a numerical optimization problem. Such compilers, built on open-source frameworks like Julia, have allowed dataflow machines to have generic compilers much as von Neumann machines do, and their compilation times are also fast. Some of these have fueled the commercial adoption of dataflow.
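To give a flavor of what treating placement and routing as an optimization problem means, the sketch below brute-forces an assignment of four operations to four tiles on a 2x2 grid, minimizing the total Manhattan distance that values travel between communicating operations. Real dataflow compilers solve vastly larger instances with numerical optimization rather than enumeration; every name and number here is invented purely for illustration.

```cpp
// Toy version of the placement problem a dataflow compiler solves:
// assign 4 operations to 4 hardware tiles on a 2x2 grid so that the
// total Manhattan distance travelled by values between communicating
// operations is minimized. Here we simply try every permutation.
#include <algorithm>
#include <array>
#include <cstdio>
#include <cstdlib>
#include <utility>
#include <vector>

int main() {
    const std::array<const char*, 4> ops = {"load", "mul", "add", "store"};

    // Edges of the dataflow graph: value produced by ops[first] is consumed by ops[second].
    const std::vector<std::pair<int, int>> edges = {{0, 1}, {0, 2}, {1, 3}, {2, 3}};

    // Tile coordinates on a 2x2 grid: tile t sits at (t % 2, t / 2).
    auto dist = [](int t1, int t2) {
        return std::abs(t1 % 2 - t2 % 2) + std::abs(t1 / 2 - t2 / 2);
    };

    std::array<int, 4> perm = {0, 1, 2, 3};  // perm[op] = tile assigned to op
    std::array<int, 4> best = perm;
    int best_cost = 1 << 30;

    do {
        int cost = 0;
        for (auto [src, dst] : edges) cost += dist(perm[src], perm[dst]);
        if (cost < best_cost) { best_cost = cost; best = perm; }
    } while (std::next_permutation(perm.begin(), perm.end()));

    std::printf("best total routing distance: %d\n", best_cost);
    for (int op = 0; op < 4; ++op)
        std::printf("  %-5s -> tile %d (x=%d, y=%d)\n",
                    ops[op], best[op], best[op] % 2, best[op] / 2);
    return 0;
}
```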
Ken Werner:
In 1985 Toshiba introduced the T1100, the world’s first mass-market laptop computer, which, significantly, was compatible with the IBM desktop PC. At a price of US $1,899 ($4,514 in 2019 dollars), the T1100 sold 10,000 units in its first year, which delighted Toshiba.
Cramming all of the electronic functionality of an IBM PC into a laptop package was challenging, but there was one component without which the T1100 could not have been made: a monochrome twisted-nematic (TN) LCD capable of displaying 80×25 alphanumeric characters and CGA (640×200) graphics on a screen measuring nine by four inches. The display had a just-adequate contrast ratio of 4:1 and a vertical viewing angle of +40 degrees / -15 degrees.
The display could be tilted to bring the viewing angle within the readable zone. The image quality of the display was poor, but it made the T1100 a functional product, and it allowed the computer to run for eight hours on a battery charge. In early 1987, Toshiba upgraded the display to a supertwisted-nematic (STN) unit in the T1100 PLUS.
The display made the T1100 possible, and the T1100 and its successors provided the time and the revenue stream that permitted the liquid-crystal display to be developed into the vastly better device it eventually became.
[1] Peddie, Jon, “Developing the Computer,” in The History of Visual Magic in Computers, Springer Nature Switzerland AG, 2013, pp. 148–158.