by Ergin Tuganay, Partner, Head of Industry 4.0 at Nortal, November 21, 2019
Edge computing is shifting from just being appealing to becoming an irreplaceable part of IoT solutions, bringing data platform capabilities to another level. Early adopters are ahead of the game with the advantages of offline capability, low latency and cost savings.
Over the past several years, industrial organizations have been progressively incorporating cloud into their operations to glean insights from large volumes of data. This has become the hygiene factor in achieving key business outcomes such as reduced unplanned downtime, higher production efficiency, improved quality and lower energy consumption.
At the same time, the role of edge computing to date has mostly been to ingest, store, filter and send data to cloud systems. We are at a point, however, where edge-computing systems are packing more compute, storage and analytic power to consume and act on data closer to its source. Edge computing now addresses the growing need to support applications with hard requirements for offline capability, low latency and near-real-time analysis.
Hence, edge-computing capacity is skyrocketing and becoming increasingly valuable for industrial organizations.
The cloud and edge layers both play essential roles in industrial operations, fulfilling separate but complementary needs. It is the synergy of the two solutions that maximizes the benefits of both. While cloud resources create logically centralized computing and data repositories, edge works with more distributed networks, keeping the heaviest traffic and data processing close to the sources of data.
Practically endless storage capacity and computing resources in the cloud are used to design, build, train and package applications and AI/ML models. Once packaged, a model is deployed to the edge — typically as a collection of APIs — and executed against the local data stream without a need to continuously send the data to the cloud. This ensures faster results with more efficient use of resources. Meanwhile, in the background, a larger batch of data is periodically ingested to the cloud to provide data analysts, engineers and scientists new material for further analyses, creating and re-training models — but this can take place often without any real-time requirements.
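This cloud-to-edge workflow can be illustrated with a minimal sketch. All names here are hypothetical, and the "model" is a stand-in threshold rule rather than a real packaged AI/ML artifact: each reading is scored locally for an immediate decision, while raw data is queued and uploaded to the cloud only in periodic batches.

```python
# Hypothetical sketch of edge-side inference with periodic batch
# ingestion to the cloud. The names (EdgeNode, local_model) and the
# threshold value are illustrative assumptions, not a real platform API.

def local_model(reading: float) -> bool:
    """Stand-in for a trained, packaged model: flag high vibration."""
    return reading > 0.8  # assumed anomaly threshold

class EdgeNode:
    def __init__(self, batch_size: int = 4):
        self.batch = []          # raw readings awaiting bulk upload
        self.batch_size = batch_size
        self.uploaded = []       # stands in for the cloud data store

    def ingest(self, reading: float) -> bool:
        """Score locally for low latency; queue raw data for later upload."""
        alert = local_model(reading)   # immediate, offline-capable decision
        self.batch.append(reading)
        if len(self.batch) >= self.batch_size:
            self._upload_batch()       # periodic bulk transfer, not per-reading
        return alert

    def _upload_batch(self) -> None:
        self.uploaded.extend(self.batch)
        self.batch.clear()

node = EdgeNode()
alerts = [node.ingest(r) for r in [0.2, 0.9, 0.5, 0.3, 0.7]]
```

The key point the sketch makes is the split of responsibilities: the decision (`alert`) never waits on the network, while the cloud still receives the full data stream, just on a slower, batched cadence suitable for re-training.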
In other words, the local computing infrastructure eliminates the time and resources needed to ingest data to the cloud, act on it there and move the results back to devices. It acts as a local agent, offering an automated decision-making engine and real-time responsiveness.
Today, when referring to edge computing, professionals typically refer to components of modern IoT and cloud technology platforms such as Azure IoT Edge, AWS IoT Greengrass or Google Cloud IoT. However, according to Gartner, by 2024, 50% of MES (Manufacturing Execution System) solutions will also include industrial IoT (IIoT) platforms synchronized with microservices-based apps.
This indicates that components of traditional automation-level solutions (often also referred to as L2 and L3 in the ISA-95 standard) such as MES, SCADA (Supervisory Control and Data Acquisition) and process historians (time-series telemetry collection and storage) will likely also be called "edge," and that the two worlds of Industrial IoT and traditional MOM (Manufacturing Operations Management) are likely to converge in the future.
The cloud plays a crucial role where significant computing power is required to manage and process vast data volumes from machines effectively. For industrial organizations, however, edge computing will become a critical solution when applications need to be available for users in all scenarios, even without access to cloud resources.
Hence, edge computing plays a crucial role in use cases that call for essential onsite action. These use cases, just a few of the many made possible by edge computing, significantly increase the quality and efficiency of industrial processes.
Yet, in all IoT cases, some level of edge capabilities is mandatory. At a minimum, simple data buffering at the edge needs to take place to cope with temporary connectivity issues.
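That minimum buffering requirement can be sketched as follows. This is an illustrative assumption of how such a buffer might look, not any particular platform's implementation: readings are held in a bounded local queue and forwarded only while the connection is up, so a temporary outage loses no data.

```python
from collections import deque

# Minimal, hypothetical sketch of edge-side data buffering for
# temporary connectivity loss. All names are illustrative assumptions.

class BufferedSender:
    def __init__(self, maxlen: int = 1000):
        self.buffer = deque(maxlen=maxlen)  # bounded, to protect edge storage
        self.sent = []                      # stands in for the cloud endpoint
        self.online = True

    def push(self, reading) -> None:
        """Queue a reading; deliver immediately if the link is up."""
        self.buffer.append(reading)
        if self.online:
            self.flush()

    def flush(self) -> None:
        """Drain the backlog upstream, oldest readings first."""
        while self.buffer:
            self.sent.append(self.buffer.popleft())

sender = BufferedSender()
sender.push(1)           # online: delivered immediately
sender.online = False
sender.push(2)           # offline: held in the local buffer
sender.push(3)
sender.online = True
sender.flush()           # connectivity restored: backlog drains in order
```

The bounded `deque` reflects a real edge constraint: local storage is finite, so during a prolonged outage the oldest readings are eventually dropped rather than exhausting the device.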
Gartner lists edge computing as one of the top 10 strategic technology trends for 2020, predicting that by 2023 there could be more than 20 times as many smart devices at the edge of the network as in conventional IT roles.
Exploiting the opportunities of the edge requires an approach grounded in a thorough understanding of the industrial environment, including a data strategy and roadmap that starts with analyzing the business case, carefully selecting data that is accessible with reasonable investment, preparing that data, and training and deploying the first models. Even though getting all of the data in order is a laborious process, a steady, stepwise approach is the way to go. The patience is worth it, since the potential business value of edge computing goes beyond traditional gains, and the exploration of new, beneficial use cases promises to be an ever-evolving journey.