The proliferation of the Internet of Things (IoT) has created an environment in which millions of devices communicate with each other, collecting and generating data that offers new and transformative opportunities for existing business verticals to reach new heights.
This shift includes evolving connected concepts such as the truly connected car, the smart home, next-generation retail and digital industries, augmented with artificial intelligence (AI) and machine learning (ML).
With the massive amounts of data collected from discrete connected objects and heterogeneous computing platforms, it becomes critical to analyze that data to extract actionable insights and enable immediate action. Processing all data in the cloud incurs considerable overhead in bandwidth and latency between the source device and the cloud, and also raises additional security implications.
Edge/fog computing solves this problem by bringing the intelligence of the cloud to the edge of the network. With edge compute solutions, IoT devices run analytics in close proximity to the data source; depending on the role a device plays in the edge architecture, it is known as an edge gateway, edge device, or edge sensor/actuator.
The terms edge and fog computing are used interchangeably depending on the use case and the infrastructure built to serve it. Fog and edge computing systems share several key similarities: both shift data processing towards the source of data generation to decrease bandwidth usage and latency and to improve security. The key difference is where the processing takes place. Fog computing processes data at the local area network level of the architecture, and adds a centralized fog infrastructure that incorporates the network needed to get processed data to its destination.
What is Edge Intelligence?
Machine learning, a subset of AI, uses statistical methods to enable machines to improve at two main tasks: 'training' and 'inferencing.' Typically, training is done in the cloud and requires massive computing power and a large repository of data. Once the model is trained and deployed, the inferencing module compares incoming device data against the trained model to make intelligent decisions. The inferencing module sends data back to the training module only when there is a significant difference during inference, so that the new data can further improve the model.
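The split described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: `run_model` and the confidence threshold are hypothetical stand-ins for an on-device inference runtime.

```python
def run_model(sample):
    """Hypothetical trained model: returns (prediction, confidence).
    A real deployment would invoke an on-device inference runtime here."""
    return ("normal", 0.97) if sample < 50 else ("anomaly", 0.55)

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off for "significant difference"

def infer(sample, upload_queue):
    prediction, confidence = run_model(sample)
    # Only samples the model is unsure about are queued for the
    # cloud-side training module, so the model can be refined.
    if confidence < CONFIDENCE_THRESHOLD:
        upload_queue.append(sample)
    return prediction
```

A confident prediction is handled entirely at the edge; an uncertain one is both acted on locally and forwarded for retraining.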
With the advancement of computing resources in the latest hardware, the core idea of edge computing is to move inferencing to the network edge, without the need to communicate with the cloud or worry about intermittent cloud connectivity. This approach benefits offline use cases, such as audio and video applications that require real-time processing where devices can't reliably connect to the network. In a connected car, for example, analytics can be performed offline (referred to as onboard analytics) and decisions made immediately without connecting to the cloud. Prescriptive analytics, which combines the intelligence of both descriptive and predictive analytics, can then recommend driver behavior best suited to the situation based on past experience.
An edge device includes the core intelligence for computing on the data received from sensors, plus a filter module that sends only selected data to the cloud, either directly or through an edge gateway, to enhance the existing model in the cloud.
This method of processing selective data at the edge, filtering out unwanted data and sending only the required data to the cloud, helps refine the model in the cloud. It further reduces latency, improves response time, optimizes performance, enables quick action with the model available at the edge, supports offline scenarios, complies with privacy policies and regulations, and reduces data transfer cost.
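One simple form of such edge-side filtering is a deadband filter: a reading is uploaded only when it differs from the last uploaded value by more than a threshold. The sketch below is illustrative; the threshold and data shape are assumptions, not part of any specific product.

```python
DEADBAND = 2.0  # minimum change (e.g. degrees Celsius) worth uploading

def filter_readings(readings):
    """Return only the readings that would be forwarded to the cloud."""
    to_cloud = []
    last_sent = None
    for value in readings:
        # Forward the first reading, then only significant changes.
        if last_sent is None or abs(value - last_sent) > DEADBAND:
            to_cloud.append(value)
            last_sent = value
    return to_cloud
```

For a slowly drifting temperature stream, this drops most samples while preserving every meaningful change, which is exactly the bandwidth and cost saving described above.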
Edge gateways typically run full-fledged operating systems on an uninterrupted power supply, without the memory, CPU or storage constraints of edge devices, which are often battery operated. Gateways offer additional services and storage capacity. In an enterprise use case, for example, a gateway can act as a centralized hub, much like the cloud, storing large chunks of data and running processing algorithms similar to those in the cloud, easing and reducing cloud traffic.
In more complex scenarios, micro data centers can be deployed locally to analyze data from multiple edge nodes. These data centers act as an edge cloud within the local area network, run the same software as the edge gateway, and are considered a form of fog computing.
An edge device/gateway also provides the following:
- Protocol translation: If an edge device doesn't support MQTT, AMQP, or HTTP, it uses a gateway device to send data to the IoT hub.
- Data aggregation and normalization: Collects data from heterogeneous sources, eliminates invalid data and selects relevant data, normalizes data structures, adds metadata, anonymizes the data, filters and aggregates to reduce the amount of data, and encrypts sensitive data
- Edge analytics: Performs local analytics (rules-centric or device-centric) at the edge to decrease latency in the decision-making process on connected devices
- Device management: Supports authentication, device provisioning, configuration, monitoring, and maintenance of the device firmware and software that provide functional capabilities
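The aggregation and normalization duties listed above can be sketched as a small gateway-side pipeline. All field names, the valid temperature range, and the one-way-hash anonymization scheme here are illustrative assumptions.

```python
import hashlib

def normalize(raw_readings, gateway_id):
    """Normalize heterogeneous sensor payloads into a single schema."""
    records = []
    for r in raw_readings:
        # Heterogeneous sources: some report "temp", others "temperature".
        value = r.get("temp", r.get("temperature"))
        # Eliminate invalid data (outside a plausible sensor range).
        if value is None or not (-40 <= value <= 125):
            continue
        records.append({
            "metric": "temperature_c",          # normalized structure
            "value": float(value),
            # Anonymize: replace the raw device id with a one-way hash.
            "device": hashlib.sha256(r["device_id"].encode()).hexdigest()[:12],
            "gateway": gateway_id,              # added metadata
        })
    return records
```

The output is a uniform, anonymized record stream that is smaller and safer to forward upstream than the raw payloads.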
Edge devices are often referred to as the "light edge," and edge gateways as the "heavy edge." However, with the advent of neural hardware acceleration, the light edge can also exhibit the capabilities of the heavy edge. The Qualcomm Snapdragon Neural Processing Engine (NPE) hardware and Qualcomm Vision Intelligence Platform allow powerful AI algorithms to run on the light edge, transforming it into an intelligent edge.
Why make the shift from the cloud to the edge?
With the advent of IoT, 5G and other wireless advancements, more devices are being connected, making it inevitable to shift the most decisive part of cloud intelligence left to IoT edge devices. This enables onboard/on-device processing and analytics, triggers real-time insights, enables proactive use cases, and empowers industries to make quicker, smarter decisions while reducing costs.
With strong knowledge and deep expertise in machine learning and artificial intelligence systems, Altran can help IoT solution/service providers build end-to-end ecosystems covering the full functional solution, from chip-to-cloud, deployment and product support services.
For more information on making the shift to cloud and edge computing, get in touch with an Altran expert today.