CLIENT: IMAGINATION TECHNOLOGIES
December 2014: Embedded Computing Design
Since its initial release in 1991, Linux has grown to become one of the most popular operating systems (OSs) in the world. Initially developed as a free OS for Intel x86 PCs, it has since expanded to encompass the majority of computer hardware platforms. Linux is also one of the most popular OSs used with the MIPS architecture: over the past two decades, a large ecosystem of Linux software has been developed for both 32-bit and 64-bit MIPS processors, ranging from home Internet routers to Internet backbone equipment.
Today, the Linux kernel runs on a wide array of computer architectures and in many types of devices, from mobile phones to supercomputers. The versatility of the OS enables it to be deployed everywhere from embedded single-core chips to server systems-on-chip (SoCs) with several processor cores to base stations containing hundreds of processor cores. This scalability is one of the reasons Linux has been so successful and why it has fostered extensive app development; user apps written for Linux can easily be ported to faster and more powerful processors.
To exploit this portability and maximize scalability, it is critical to reuse existing software, including Linux itself, for faster time to market and greater reliability. This is especially true as embedded applications migrate from 32-bit to 64-bit processors: the ability to reuse tried and tested 32-bit software in a 64-bit environment makes it possible to develop robust products quickly and easily.
One lesson we can learn from the PC industry's transition from 32-bit to 64-bit computing is that 32-bit software will continue to be around long after the hardware and OS move to 64 bits. Our industry has spent the last 30 years writing and optimizing software for the 32-bit space, and it is crucial that we reuse as much of it as possible rather than discard it in the transition to a 64-bit world.
Fortunately, as technologies that are new to embedded designs, such as hardware virtualization and hardware multithreading, become mainstream, it will be easier to increase performance and reliability simultaneously. Moving to multicore and multithreading makes more CPU cycles available to users. On top of that, virtualization allows apps to run without modification, making it feasible to dedicate a thread or a core to a specific app. Once 64-bit Linux is running on an SoC, virtualization enables users to set up a virtual machine (VM) that runs existing single-processor Linux and user applications unmodified. This allows existing software to run at full performance; combined with multicore, simple task partitioning increases performance further.
The future of processing is multicore. For example, some application processors for mobile phones incorporate up to eight 64-bit cores; whether or not everyone agrees that we truly need that many cores, it has become a baseline benchmark. This is partly because Android, which is built on Linux, can scale from one to eight cores. However, until the majority of 32-bit applications are ported to 64-bit processors, the latest octa-core phones will still be running the same 32-bit applications for the foreseeable future.
Meanwhile, networking has always been a large consumer of multicore and multi-threaded processors, because networking software is inherently multi-threaded. It has three main tasks: transmitting, receiving, and processing data packets. As line rates and processing needs increase, so does the number of parallel tasks. It is not uncommon to see core counts in the hundreds or even thousands of MIPS processor cores in high-end networking applications, from Internet backbone switches to mobile base stations. Imagination, for example, recently announced its MIPS I6400 processor core, which can scale from one to more than 1,500 virtual cores. This unprecedented level of scalability was built in because core counts are expected to keep increasing over the next few years as gigabit Internet becomes more readily available to consumers.
The Imagination Technologies MIPS I6400 is capable of scaling to more than 1,500 virtual cores.
Residential applications such as home DSL gateways and wireless routers have typically used single-core processors, which are quickly becoming a performance bottleneck. Most of these devices run Linux with certified voice and DSL codecs written in software. For next-generation routers and gateways, a simple migration to a new 64-bit multicore processor can more than double the processor cycles available to codecs. By reusing the same 32-bit software, updated SoCs can be brought to market more quickly without extensive recertification. As the software continues to evolve, different programming techniques can be applied to take advantage of multi-threading or multicore scaling.
There are currently two flavors of Linux: uniprocessor (UP) Linux and symmetric multiprocessor (SMP) Linux. SMP Linux is designed for multicore processors and is the foundation for Linux scalability. While SMP Linux carries a small performance overhead, adopting it lays the groundwork for future products, and it is therefore worth adopting today even for single-processor systems. This gives device manufacturers the time they need to transition to multi-threaded software in order to achieve higher performance and add new features.
The recently established prpl Foundation will also help facilitate this migration. prpl is an open-source, community-driven, collaborative, non-profit foundation targeting and supporting the MIPS architecture (and open to others), with a focus on enabling next-generation datacenter-to-device portable software and virtualized architectures. Existing single-core Linux users can get an updated SMP Linux code base from prpl and its community to help with the move. Once the Linux OS is ported, each task or application can be bound to a specific processor using the taskset command. The inherent overhead of running Linux and multiple applications on a single processor (i.e., the context switch penalty) is then distributed over multiple processors, making more CPU cycles available for applications than a single processor could offer. To illustrate the concept, Figure 2 shows a possible migration path from a single processor core to multiple processor cores, and its associated benefits.
Migrating from a single to multiple processor cores significantly reduces operating system (OS) overhead, allowing applications to run more efficiently.
Multicore technology also opens up new methods for power management. It's more economical, from a power standpoint, to distribute a task over two slower processors than to run one processor at a high operating frequency. Multithreading goes one step further to maximize available processing power.
This is an exciting time to be in technology. We're rapidly moving from a single-core to a multicore processor world. There are many ways to accomplish this migration, and new technologies like virtualization make it relatively simple by creating virtual worlds where existing software, including Linux, can run unmodified on new multicore chips.
The software that has powered us through single processor generations will power us through the next generation of hardware devices containing an ever-increasing number of processors. With the help of community-driven organizations like the prpl Foundation, new generations of software will take full advantage of multicore and multithreaded hardware and deliver more integrated solutions.