“The fly has the CPU of a toaster. Nonetheless, it can do quite a lot.” – Bruno Maisonnier, Founder and CEO, AnotherBrain.
A fly has one of the fastest visual responses, up to five times faster than our own eyes. This feat of visual magic is achieved with specialized photoreceptor cells, not a big brain. That phenomenon of nature, specialization, is also driving significant advances in electronics to achieve the necessary processor price, memory capacity and sensor performance. Specialization is the way forward to improve the gaming experience, pioneer automotive safety or release next-gen 5G apps.
Why? Because Moore’s law is slowing down: the doubling of transistors every two years is no longer assured.
Application developers want to focus on content: writing software that solves a problem. They want to innovate with new ideas and out-of-the-box thinking, as well as with new prototypes and a deeper understanding of how to create the most compelling customer experience. And more and more, understanding the performance of the underlying hardware matters, whether that hardware is Microsoft's mixed-reality HoloLens, a wearable sensor or industrial equipment on a factory floor.
General-purpose CPUs are good at everything but slow to a crawl on tasks like computer vision. At the other extreme, hard-wired code performs extremely well, right up until the application can no longer meet the customer's heightened expectations of speed and precision. Across this spectrum, there is a tremendous amount of processor innovation in price, power and performance characteristics.
Over the last decade, a cottage industry of specialized hardware has sprung up thanks to the development of multicore processors, application-specific integrated circuits (ASICs) and systems-on-a-chip (SoCs). Companies such as Nvidia, Intel, Qualcomm, Xilinx, Microsoft and Google are innovating on processors and software abstractions. Together with bets by startups like Graphcore, Wave Computing, Mythic, Cerebras Systems and SambaNova Systems, the result is thousands of specialized chips and sensors.
As David Patterson said in a recent talk on embedded computer vision, it truly is a "golden age" of compute.[1]
This golden age is possible because of three forces:
- Decoupling hardware and software development cycles. The success of hyperscale data centers run by the likes of Amazon Web Services, Google and Microsoft Azure has been underpinned by the separation of software and hardware, be that routers, switches or servers. The economics and improved server utilization of shared compute clusters cannot be ignored. According to analysts, 41% of new server shipments will be virtualized in 2020, up from 33% in 2015.[2]
App developers are very familiar with cloud-native infrastructure. And over the last several years, telecommunications providers have decided to architect their 5G infrastructure so that it is virtualized and "software-defined." Even the radio functions have been moved out of the cell towers and onto x86 servers, with more complex Layer 1 functions such as resource allocation offloaded onto FPGA-based network interface cards. That decoupling, from the device through to the network, matters to applications as they run more of their software at the edge of the network, whether inside a car, smart city infrastructure or an AR headset. App developers will need a flexible contract between the underlying hardware and the 5G network to meet requirements for low latency, high bandwidth, power and space.
- Hardware accelerators that are optimized for artificial intelligence workloads. Machine learning (ML) has become one of the most important workloads for application developers, whether in medical imaging, automotive simulation, 3D reconstruction or any other domain. The idea behind hardware accelerators is to apply specialized techniques to optimize for high performance, low cost and low power. They are designed for a purpose: convolutional neural networks for computer vision, light detection and ranging (LiDAR) scanners (now available in the new iPad Pro![3]) or simultaneous localization and mapping (SLAM) algorithms that recover scene structure while in motion. For example, the tried and true graphics processing unit (GPU) delivers higher efficiency on deep learning tasks because of its parallelism and memory bandwidth (see the TensorFlow sketch after this list). And field-programmable gate arrays (FPGAs) have proven advantages in energy efficiency and cost when handling ML-based tasks compared to CPUs and GPUs.
- Higher-level software abstractions to build and deploy. Every application has a development pipeline of activities that become more complex with hardware dependencies. The first step is to choose the right hardware processor, be that a CPU, GPU, APU or deep-learning accelerator (DLA). Low-level APIs such as CUDA, HLS compiler-based FPGA programming and OpenCL give direct access to the underlying hardware accelerators, making them work out of the box. Next, developers choose programming frameworks and languages such as C/C++, TensorFlow or Python, depending on the functional requirements. Finally, the resulting code is deployed somewhere, on a device, a server or in the cloud, where infrastructure as code has raised developer productivity through containerization technology and DevOps tool chains. There are even higher-level abstractions for developers, such as Kubeflow for structured pipelines (a minimal pipeline sketch follows this list) or Helm for packaging applications for deployment on Kubernetes. To make the task of deployment even more involved, applications are now being broken up into components that are part of a larger system and located in different places around the planet: on a factory shop floor, at a cloud provider or in a private data center.
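To make the accelerator point concrete, here is a minimal sketch, assuming TensorFlow 2.x is installed: it detects whether a GPU is visible and places a small convolution on it, falling back to the CPU when no accelerator is present.

```python
# A minimal sketch, assuming TensorFlow 2.x: pick an accelerator if one is
# visible, then run a toy convolution on it.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
device = "/GPU:0" if gpus else "/CPU:0"
print(f"Running on {device} ({len(gpus)} GPU(s) visible)")

with tf.device(device):
    # A toy batch of 8 "images" (224x224, 3 channels) through one conv layer.
    # On a GPU, the convolution exploits massive parallelism and memory bandwidth.
    images = tf.random.normal([8, 224, 224, 3])
    conv = tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation="relu")
    features = conv(images)
    print(features.shape)  # (8, 222, 222, 32)
```

The same code runs unchanged on a laptop CPU or a GPU server, which is exactly the kind of flexible contract between software and hardware described above.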
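For the higher-level abstractions, below is a minimal sketch of a structured pipeline, assuming the Kubeflow Pipelines v1 SDK (kfp); the container images, scripts and registry names are hypothetical placeholders rather than real artifacts.

```python
# A minimal two-step Kubeflow pipeline sketch (kfp v1 SDK assumed).
# Image names, scripts and the registry are placeholders for illustration.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="accelerated-ml-pipeline",
    description="Train a model, then package it for deployment at the edge.",
)
def accelerated_pipeline(model_name: str = "demo-model"):
    # Step 1: training inside a (hypothetical) GPU-enabled container image.
    train = dsl.ContainerOp(
        name="train",
        image="registry.example.com/train:latest",
        command=["python", "train.py"],
        arguments=["--model-name", model_name],
    )

    # Step 2: package the trained model; runs only after training completes.
    package = dsl.ContainerOp(
        name="package",
        image="registry.example.com/package:latest",
        command=["python", "package.py"],
        arguments=["--model-name", model_name],
    )
    package.after(train)


if __name__ == "__main__":
    # Compile to a workflow YAML that a Kubeflow cluster can schedule.
    kfp.compiler.Compiler().compile(accelerated_pipeline, "pipeline.yaml")
```

The compiled YAML is what actually gets scheduled on the cluster, and that is precisely the sort of infrastructure detail developers would rather not hand-craft.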
However, while the trends detailed above are good for application developers, there are still a lot of “infrastructure” details that must be dealt with.
“Everything should be made as simple as possible, but not simpler.” – Albert Einstein
Developers just want to build their applications and have them run on a GPU, FPGA or another device. They want final deployment to the customer to be as painless as possible. They don't want to configure access to the GPU or FPGA, and they want to know they can reach resources over the network, whether through PCI passthrough or an external router. Ideally, all of that infrastructure would simply be taken out of the way.
That is largely what happens on cloud platforms such as Amazon Web Services, Microsoft Azure and Google App Engine. Increasingly, developers have come to depend on container technology to scale code and make it portable across operating systems and hardware platforms. With additional work, containers can also help abstract away the complexity of specialized hardware and of the distributed computing environments that 5G networks will bring.
Developers need to design for hardware, which means they can't completely ignore the details. However, the right tooling can make things simpler for application developers as they gain access to more specialized hardware. Such tooling should:
- Hide the complexity of installing and configuring containers and Kubernetes.
- Streamline the packaging and distribution of hardware drivers for different processors.
- Take care of low-level hardware API access (configuration and runtime) from the container environment.
- Share hardware processors by intelligently scheduling containers to optimize the use of GPUs and FPGAs (see the Kubernetes sketch after this list).
- Position applications and workloads based on location, latency, congestion and quality of service.
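The last two bullets are where container orchestration does the heavy lifting. Below is a minimal sketch, assuming the official kubernetes Python client and a cluster where the NVIDIA device plugin exposes the nvidia.com/gpu resource; the container image is a placeholder.

```python
# A minimal sketch: ask the Kubernetes scheduler for one GPU for an
# inference container. Assumes the official `kubernetes` Python client and
# the NVIDIA device plugin; the image name is a placeholder.
from kubernetes import client, config


def gpu_inference_pod(name: str = "vision-inference") -> client.V1Pod:
    """Build a pod spec that requests one GPU from the scheduler."""
    container = client.V1Container(
        name="inference",
        image="registry.example.com/vision-inference:latest",
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}  # scheduler places the pod on a GPU node
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )


if __name__ == "__main__":
    config.load_kube_config()  # use the local kubeconfig
    api = client.CoreV1Api()
    api.create_namespaced_pod(namespace="default", body=gpu_inference_pod())
```

The developer declares what the workload needs; the platform decides which node, which GPU and when.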
The shift to software and computing in the car
If we look at the automotive sector, which is undergoing a similar journey in compute, we see the car itself being transformed into a drivable computer, where compute is the new horsepower. The vehicle of the future will not only drive itself but will also offer in-cabin experiences radically different from today's.
Automotive OEMs and Tier 1 suppliers know their destiny is to shift their core competency from a mechanical-centric business model to a software-centric business model. The automobile has become a larger part of the technology ecosystem with rideshare services and smart city infrastructure. Of course, because of COVID-19, OEMs are under tremendous pressure to figure out what consumers want and how to get it delivered profitably.
Self-driving cars will have to wait a bit: industry forecasters predict that higher levels of autonomy (i.e., SAE Levels 4 and 5) won't reach the mass market until 2030.[4] Audi, for example, recently decided to turn off the traffic jam pilot on its A8 models because the regulatory framework was not in place.[5]
With that said, global demand for advanced driver assistance systems (ADAS) is expected to grow to $142 billion by 2027.[6] Much of the focus is expected to center on safety features mandated by governments, such as distance warning, drowsiness monitoring, automatic emergency braking and adaptive front lighting. It is important to note that the ongoing transition to a simpler electrical/electronic architecture will be balanced against controlling the risk levels of new software, hardware and sourcing strategies.
I expect the dynamics in specialized hardware to continue to play out in the automotive sector, where the developer community expects to release software and compelling new customer experiences at a higher velocity. The same three forces are at work:
- Decoupling hardware and software development cycles. The breadth of automotive computing platforms is truly breathtaking. Today, there can be over 100 electronic control units (ECUs) distributed within a vehicle. Each ECU is classified by its function (or domain), such as chassis, powertrain, body, ADAS and infotainment, and each domain has its own independent network. Need a new function? Add a new ECU.
The introduction of domain controllers that are networked with each other, together with the consolidation of functions, results in lower costs and a simpler electrical/electronic architecture. In addition, a class of newer processors can handle more complex functions and receive software updates over the air. The evolution mirrors what is happening in private clouds and data centers, where software is hosted on virtualized servers and located where compute is needed: closer to the point of consumption. In particular, electric vehicle models are pushing further into a software-defined architecture with consolidated domain controllers, an Ethernet backbone and high-performance "automotive driving platforms." By decoupling the hardware and software development cycles, OEMs can release software updates more frequently, without having to wait for a new generation of hardware.
- Hardware accelerators optimized for artificial intelligence workloads. The hardware in a vehicle ranges from microprocessors for telematics and systems-on-a-chip with graphics processors for infotainment all the way to integrated autonomous driving platforms. For example, today's Level 2-plus cars are equipped with all sorts of multi-spectral sensors (cameras, radar, LiDAR) to make driving safer. Then there are the driver and occupant monitoring systems required by safety rating programs such as Euro NCAP, which track driver and occupant behavior to minimize driver disengagement. These systems need high-end AI, using either high-resolution cameras or long-wave IR sensors, that can infer eye or facial movements (a minimal driver-monitoring sketch follows this list). Even more advanced automated driving platforms demand 360-degree perception, trajectory planning and motion control algorithms. Depending on the purpose, the AI hardware is a multicore or manycore processor, a GPU, an FPGA or a dedicated accelerator engineered by a vendor such as NXP, Nvidia, Renesas, Intel or Qualcomm.
- Higher-level software abstractions to build and deploy. The functions currently distributed across discrete ECUs will be consolidated onto a smaller number of processors to reduce cost and complexity. Knowing this, OEMs are now looking to differentiate the customer experience with novel applications, such as driver assistance, vehicle-to-infrastructure communications and even UVC sanitation for COVID-19. The software content of the car will grow, regardless of whether it runs on programmable logic devices such as FPGAs or on microprocessors. The Automotive Open System Architecture (AUTOSAR) and its Adaptive AUTOSAR variant are open standards that allow OEMs and their suppliers to establish a common software architecture that reduces development time and improves software quality. Applications such as ADAS and infotainment run on top of abstraction layers and runtime environments that allow software components to be updated over the vehicle's lifecycle. These applications are portable across many vendors' ECUs and adhere to the safety and risk classification requirements defined in ISO 26262. Alongside AUTOSAR, there are other layers of abstraction that developers must deal with, such as vision processing libraries like OpenCV, deep-learning frameworks such as TensorFlow and Keras, as well as containers. We are also seeing software-enabled features, from Cadillac and Tesla for example, offered as subscriptions, one-time purchases or upgrades paid for at resale.
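To make the driver-monitoring workload above concrete, here is a minimal sketch using OpenCV's Python bindings and its bundled Haar cascades; a production system would run a trained deep-learning model on an automotive SoC, but the structure of the inference loop is similar.

```python
# A minimal driver-monitoring sketch, assuming OpenCV (cv2) and its bundled
# Haar cascade models; a webcam stands in for the in-cabin camera.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")


def eyes_visible(frame) -> bool:
    """Return True if at least one face with two visible eyes is detected."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) >= 2:
            return True
    return False


cap = cv2.VideoCapture(0)  # in-cabin camera (a webcam stands in here)
ok, frame = cap.read()
if ok:
    print("Driver attentive" if eyes_visible(frame) else "Check driver attention")
cap.release()
```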
The Altran Research and Innovation Group—now part of the Capgemini Engineering and R&D business unit—is passionate about developing and promoting open-source projects to tackle the challenges described in this article. In 2018, we announced our intention to "open source" our edge computing platform with Deutsche Telekom.[7] That early work has evolved into Altran's Project Adrenaline,[8] which includes open-source software to ease the installation and use of hardware accelerators "as a service."
And so…the journey to the future of (open) networks and compute continues.
Reference Links
[1] Patterson, David, "A New Golden Age for Computer Architecture," Oct. 29, 2019, Association for Computing Machinery
[2] Suri, Jack, “Change the Economics of Server Virtualization,” Jan. 20, 2020, Communal News
[3] Press Release, “Apple unveils new iPad Pro with breakthrough LiDAR Scanner and brings trackpad support to iPadOS,” Mar. 18, 2020, Apple
[4] Vousden, Michael, “Level 5 fully self-driving cars not due anytime soon,” Jul. 15, 2020, Just Auto
[5] David, Chris, "Audi abandons self-driving plans for current flagship," Apr. 28, 2020, SlashGear
[6] "Breakdown Data by Manufacturers, Segments and Regions 2027," Sep. 28, 2020, MarketWatch
[7] Press Release, “Deutsche Telekom and Aricent Create Open Source Edge Software Framework,” Sep. 20, 2018, Altran
[8] Mishra, Shamik, "Compute Acceleration on Network Edge: Project Adrenaline," Jan. 20, 2020, Altran website