Feature Story



CLIENT: PRPL FOUNDATION

Jan. 28, 2016: IoT Design

How Virtualization, Modern Silicon, and Open Source Are Conspiring to Secure the Internet of Things

IoT Design, Jan. 28, 2016 - In this Q&A, Cesare Garlati, Chief Security Strategist at the prpl Foundation and former Vice President of Mobile Security at Trend Micro, discusses his organization's standardization push around roots of trust, virtualization, open source, and interoperability to secure the Internet of Things (IoT).

The prpl Foundation is known for open source tools and frameworks like OpenWrt and QEMU, but has recently ventured into the security domain with a new Security prpl Engineering Group (PEG) and the "Security Guidance for Critical Areas of Embedded Computing" document, not to mention wooing you away from your role at security giant Trend Micro. What can you tell us about the drivers behind these moves?

GARLATI: One way to look at it is a supply-and-demand schema (Figure 1). On the demand side, according to Gartner, the security market was worth $77 billion in 2015 and it's going to grow much faster. One strong demand-side driver is the need for stronger security, because industry is not doing a very good job of it - and when I say industry I mean from silicon to software to services - and all of the spending is not resulting in better information security.

[Figure 1 | Drivers from both the supply and demand sides of the market are pushing industry towards standardization around new security frameworks.]

Another very important aspect in terms of demand is the specific hardware layer. The current consensus among security professionals is that security is really about risk management, and there's no single solution that can guarantee 100 percent coverage of anything. Currently, the most responsible and professional approach is through multiple tiers that reduce risk. This is an evolution in the security industry, because it wasn't this way just a few years ago. A few years ago you would have vendors selling solutions that would "stop" attacks, but industry has realized this is simply not possible. A more credible approach is to say, "I can offer you the best you can do based on your specific criteria." So from a practitioner or developer perspective, the more layers you put in place, the better off you are. What has been missing, though, is the hardware layer. We have solutions at the networking layer, we have solutions at the web server layer, we have solutions at the application layer, we have solutions in authentication, you name it. All of these tiers are there, but what is still missing is the hardware, and there's demand for this hardware-level security.

Now what's intriguing is a multi-tenant use case. Based on my conversations with actual OEMs, carriers, and so forth, the model where one single vendor deploys a box and then controls, administrates, and is ultimately responsible for it is a model of the past. The model that is evolving and is already part of the design and architectural choices that vendors are making today is a model that I would make a rough parallel with the model of an app store. So, how do you secure something where you have multiple entities that are peers in this [new] box? The old model of Trusted Execution Environments was good in the past, but if you are one of those applications that runs in the secure world, you have to trust all of the other secure tenants.

On the supply side, there is what might be called the march of silicon: silicon is so powerful these days that vendors can embed more and more of the features that have been realized in software layers in the past. And, obviously, anything you move down the stack becomes much more resilient, because it becomes much more difficult to tamper with, to change, and so forth.

How specifically is modern silicon going to help improve current security paradigms?

GARLATI: Something that's very important is that all modern processor cores support some kind of hardware-assisted virtualization, and security by separation - a very well understood concept in the security world - is coming to lower-level hardware. This idea of looking at things separately is well described in the "Security Guidance for Critical Areas of Embedded Computing" document: although the individual pieces might not be more secure on their own, as a system one bad apple doesn't compromise the whole - or, in the jargon of the security community, it helps prevent lateral movement. If you put virtualization and security by separation in place, it becomes much more difficult for an attacker to penetrate one weak entry point in the system and then move laterally to some more critical aspect.

An example is the Jeep hack, where they got into the entertainment system and were able to re-flash a component that controls the CAN bus, and from there control the brakes and the steering wheel. In a virtualized situation, this would simply not be possible - the hardware is the same, but from a technology perspective it appears to applications as completely isolated systems.
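
To make the separation concrete, here is a minimal sketch in plain C - hypothetical structures and names, not an actual prpl or hypervisor API - of the kind of per-guest memory map a hypervisor enforces through hardware-assisted second-stage translation. In this toy layout the infotainment guest simply has no mapping for the region backing the CAN controller, so compromising it does not open a path to the vehicle bus.

```c
/*
 * Minimal sketch of hypervisor-enforced separation (hypothetical names,
 * not a real hypervisor API). Each guest only "sees" the physical regions
 * the hypervisor has mapped for it; anything else faults instead of
 * reaching the device.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *guest;     /* e.g. "infotainment", "vehicle-control"     */
    uint64_t    base;      /* start of a physical region this guest owns */
    uint64_t    size;      /* length of that region                      */
} region_map;

/* Illustrative layout: the CAN controller registers live at 0x40000000
 * and are mapped only into the vehicle-control guest.                   */
static const region_map maps[] = {
    { "infotainment",    0x80000000ULL, 0x10000000ULL }, /* its own RAM  */
    { "vehicle-control", 0x90000000ULL, 0x08000000ULL }, /* its own RAM  */
    { "vehicle-control", 0x40000000ULL, 0x00001000ULL }, /* CAN MMIO     */
};

/* The check that second-stage translation performs in hardware,
 * modeled in software for illustration only.                            */
static bool guest_can_access(const char *guest, uint64_t addr)
{
    for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++) {
        if (strcmp(maps[i].guest, guest) == 0 &&
            addr >= maps[i].base && addr < maps[i].base + maps[i].size)
            return true;
    }
    return false;  /* no mapping: the access faults, lateral movement stops */
}

int main(void)
{
    /* A compromised infotainment guest tries to reach the CAN controller. */
    printf("infotainment    -> CAN MMIO: %s\n",
           guest_can_access("infotainment", 0x40000000ULL) ? "allowed" : "blocked");
    printf("vehicle-control -> CAN MMIO: %s\n",
           guest_can_access("vehicle-control", 0x40000000ULL) ? "allowed" : "blocked");
    return 0;
}
```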

Hardware-assisted virtualization makes all of this possible because there's no performance impact as there was in the past. At the same time, it also makes software execution on multicore processors more efficient. If you have multiple guests on a multicore processor and can start mapping guests to defined cores, that's when you start extracting all the power from these multicore platforms. A byproduct of security in this case is performance, which is exactly the opposite of tradition, because traditionally when you secured something you lowered performance. But if you put security by separation in place, you're actually much better off using processor virtualization.
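
As a rough illustration of that guest-to-core mapping - a hypothetical configuration structure, not a real hypervisor format - a static placement might look like the following, with each guest pinned to its own cores so the hypervisor can run them in parallel rather than time-slicing one core.

```c
/* Sketch of a static guest-to-core assignment (hypothetical format).
 * Pinning each guest to its own cores lets the hypervisor schedule them
 * in parallel instead of time-slicing a single core.                    */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct guest_placement {
    const char *guest;
    uint32_t    core_mask;   /* bit N set = guest may run on core N */
    uint8_t     vcpus;       /* virtual CPUs exposed to the guest   */
};

static const struct guest_placement placements[] = {
    { "rich-os",    0x3, 2 },  /* Linux guest on cores 0-1          */
    { "rtos",       0x4, 1 },  /* real-time guest alone on core 2   */
    { "crypto-svc", 0x8, 1 },  /* security service alone on core 3  */
};

int main(void)
{
    for (size_t i = 0; i < sizeof(placements) / sizeof(placements[0]); i++)
        printf("%-10s vcpus=%u core_mask=0x%x\n", placements[i].guest,
               placements[i].vcpus, placements[i].core_mask);
    return 0;
}
```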

But the [virtualized] architecture does not assume multicore. It does not assume anything. Multicore is a better architecture to get value out of, but it is not a requirement. The hypervisor, or virtualization manager, is itself a very low-level, thin layer of software - you could call it a microkernel - that provides the security APIs, services, or agent, depending on the situation. If you have a Linux system, you speak in terms of services, or perhaps inter-process communication, and you never reach the operating system; if it's a very small device, it's probably some sort of API or library. The hypervisor provides all the security features. The moment the hypervisor is trusted - you trust its code and you're sure the code that is booting has been signed - the hypervisor itself becomes the virtual root of trust for everything else, because you don't need multiple keys or different Trusted Execution Environments or whatever else you would put in place to secure a virtualized setup like this. Once the hypervisor is trusted, the only thing each individual entity needs to trust is the hypervisor itself.
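
The chain of trust he describes can be sketched as follows. This is a toy model: the digest function is a trivial checksum standing in for a real cryptographic hash and signature check, and all names are hypothetical, but the order of operations - hardware verifies the hypervisor, then the trusted hypervisor verifies each guest before booting it - is the point.

```c
/* Toy chain of trust: a hardware root verifies the hypervisor image,
 * then the hypervisor verifies each guest image before booting it.
 * The "measure" here is a trivial checksum standing in for a real
 * cryptographic hash and signature; names are hypothetical.           */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct image {
    const char    *name;
    const uint8_t *code;
    size_t         len;
    uint32_t       expected;   /* digest recorded at signing time */
};

static uint32_t measure(const uint8_t *code, size_t len)
{
    uint32_t d = 0;
    for (size_t i = 0; i < len; i++)
        d = d * 31 + code[i];          /* stand-in for SHA-256 + signature */
    return d;
}

static bool verify_and_boot(const struct image *img)
{
    bool ok = measure(img->code, img->len) == img->expected;
    printf("%-12s %s\n", img->name, ok ? "verified, booting" : "REJECTED");
    return ok;
}

int main(void)
{
    static const uint8_t hyp_code[] = { 0x01, 0x02, 0x03 };
    uint8_t guest_code[]            = { 0x10, 0x20, 0x30 };

    struct image hypervisor = { "hypervisor",  hyp_code,   sizeof hyp_code,
                                measure(hyp_code, sizeof hyp_code) };
    struct image guest      = { "linux-guest", guest_code, sizeof guest_code,
                                measure(guest_code, sizeof guest_code) };

    /* 1. Hardware root of trust (boot ROM) verifies the hypervisor.     */
    if (!verify_and_boot(&hypervisor))
        return 1;

    /* 2. The trusted hypervisor becomes the virtual root of trust and   */
    /*    verifies each guest before it runs.                            */
    verify_and_boot(&guest);

    /* 3. A tampered guest image is rejected instead of booted.          */
    guest_code[0] ^= 0xFF;
    verify_and_boot(&guest);

    return 0;
}
```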

You can have a hardware root of trust to validate the hypervisor, and then the hypervisor becomes the root of trust - or you can have many different variations. There might be a trusted element, or multiple trusted elements, or a one-time-programmable element; it can be built into the SoC itself or into the system at the board level, and so forth. There are many variations, and it depends on the situation, but it's very flexible. The key concept is that the hypervisor becomes the root of trust and provides all the security services through virtualization, meaning that these services can be driven by policies that are specific to guests.
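
As a sketch of what "policies that are specific to guests" could mean in practice - hypothetical policy fields, not a prpl specification - the hypervisor might consult a per-guest table before honoring any request for a security service, denying by default anything the policy does not grant.

```c
/* Sketch of per-guest security policy enforced by the hypervisor
 * (hypothetical fields and names). Each guest only gets the services
 * its policy grants, independent of what other guests are allowed.    */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct guest_policy {
    const char *guest;
    bool        may_use_keystore;   /* access to sealed key storage      */
    bool        may_use_ipc;        /* inter-guest communication allowed */
    bool        may_update_fw;      /* may request firmware updates      */
};

static const struct guest_policy policies[] = {
    { "payment-app", true,  true,  false },
    { "media-app",   false, true,  false },
    { "maintenance", false, false, true  },
};

static bool allowed(const char *guest, const char *service)
{
    for (size_t i = 0; i < sizeof(policies) / sizeof(policies[0]); i++) {
        if (strcmp(policies[i].guest, guest) != 0)
            continue;
        if (strcmp(service, "keystore") == 0)  return policies[i].may_use_keystore;
        if (strcmp(service, "ipc") == 0)       return policies[i].may_use_ipc;
        if (strcmp(service, "fw-update") == 0) return policies[i].may_update_fw;
    }
    return false;   /* unknown guest or service: deny by default */
}

int main(void)
{
    printf("media-app keystore:   %s\n", allowed("media-app", "keystore") ? "grant" : "deny");
    printf("payment-app keystore: %s\n", allowed("payment-app", "keystore") ? "grant" : "deny");
    return 0;
}
```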

In fact, the more complicated the stack or the richer the operating system, the easier it is for the developer to achieve all of this, because these architectural concepts have been in the Linux stack for a long time - but always implemented in software. Now is the time for hardware to meet the software. So from a developer perspective, this really just looks like a driver, and the driver provides all the services, including inter-process communication, key management, and everything else. But again, since it's virtualized, it can provide different services for different guests. It's a whole new world.
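
From the guest developer's side, the interface can stay very small. The sketch below shows what such a driver-style API might look like - the function names are invented for illustration and are not the actual prpl API - with stub implementations so it runs standalone; in a real build those stubs would issue hypercalls or write to a virtual device, and the application would never know which mechanism is underneath.

```c
/* Hypothetical guest-facing security driver API (illustrative names only,
 * not the actual prpl interface). The guest calls these functions; behind
 * them, the hypervisor services each request according to the guest's policy. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* --- interface a guest application would program against ---------------- */
int sec_get_key(const char *key_id, uint8_t *out, size_t out_len);
int sec_ipc_send(const char *dest_guest, const void *msg, size_t len);

/* --- stub implementations so the sketch runs; a real build would issue --- */
/* --- hypercalls or writes to a virtual device here ----------------------- */
int sec_get_key(const char *key_id, uint8_t *out, size_t out_len)
{
    printf("[stub] fetching key '%s' from hypervisor-backed keystore\n", key_id);
    memset(out, 0xAB, out_len);        /* placeholder key material */
    return 0;
}

int sec_ipc_send(const char *dest_guest, const void *msg, size_t len)
{
    printf("[stub] sending %zu bytes to guest '%s' via hypervisor IPC\n",
           len, dest_guest);
    (void)msg;
    return 0;
}

/* --- what application code looks like ------------------------------------ */
int main(void)
{
    uint8_t key[16];
    sec_get_key("tls-device-key", key, sizeof key);
    sec_ipc_send("crypto-svc", "hello", 5);
    return 0;
}
```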

Where does prpl figure into this whole equation, and how are you working to move this virtualized security architecture forward?

GARLATI: It's all about standards, interoperable protocols and APIs, and making sure this is not a vendor-specific solution, as has happened in the past. Vendor lock-in is not what large players are looking for. The supply side is telling us that proprietary systems, especially in a security context, don't really cut it. It needs to be open source: you need to be able to look at the code, you need to be able to change the code, and you need to be able to make sure no nation-state actors have tampered with the code or driven requirements that make security components weaker. There are a whole host of concerns when you look at the global market on the supply side that lead to this sort of open-source security framework (Figure 2).

[Figure 2 | Despite increased security spending, the number and severity of cyberattacks has increased recently. In response, the prpl Foundation has proposed a multi-tenant, multi-tiered security architecture based on roots of trust, virtualization, open source, and interoperability.]

The Security PEG is actively working on this right now, and we have the APIs. The problem is that we have too many APIs. The exercise we are going through as we speak is to try to rationalize these APIs and provide two things. One is the API definition, regardless of implementation. That's what's expected of prpl. These are not standards yet as we are considering various certification options.

The second thing is reference implementations of these various APIs. In the first case these will be documents, and in the second case these will be actual pieces of software. A key aspect is that both will be provided as open source with the most permissive licensing possible, which means everyone is free to do anything they want with it, even build commercial implementations. That's really the true nature of prpl. It's not just open source, because open source comes in many different licensing schemes, with the General Public License (GPL) among the least permissive and the Berkeley Software Distribution (BSD) license considered one of the most permissive. prpl goes even one step further than BSD, which is what commercial entities are really asking for, and what really makes sense for business.

There's room in this model for an open-source community, but there's also room for commercial providers to develop commercial-grade implementations of these APIs. So the definition will be common and standardized, and an open-source community version will be available to everyone, but it won't be as production ready as some major players require. For that there are vendors - three entities within prpl already, with many more coming soon - that will provide the commercial-grade implementations required.

As I said, all of this is real, but we have too many variations. We need to come to an agreement. That's what an industry consortium is really about. It's not about developing things, it's about getting all the key players together to come up with what's considered the best on both the supply and demand sides. When it comes to security, it has to be cross-vendor and it has to be platform agnostic. Heterogeneous environments are the reality these days, and vendor lock-in is something no one wants, especially on the security side.
