Financial Times pundits have called it the future of computing. Meanwhile, the major cloud services have been racing to jump into the game — Microsoft released its edge platform, Azure IoT Edge, in June, days before Google rolled out Cloud IoT Edge. Around the same time, Amazon amped up the edge capabilities of AWS Greengrass.
The edge concept has obvious appeal. Instead of carrying out IoT data collection and analysis in the cloud, those functions are performed at the outer boundaries of the network, close to the equipment. In other words, the cloud supports the heavy number crunching required to come up with a sophisticated predictive model, then the resulting algorithm is “pushed out to the edge” to handle the day-to-day action. This cloud-independent IoT scenario is especially useful when connectivity, latency or security is an issue.
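The train-in-the-cloud, run-at-the-edge pattern can be made concrete with a minimal sketch. Everything here is invented for illustration: a "cloud" process fits a trivial linear trend to sensor history, ships only the fitted parameters as JSON, and an "edge" process scores new readings locally with no cloud connection. Real deployments would use a proper training stack and model format, but the division of labor is the same.

```python
import json

# "Cloud" side: fit a simple linear model y = a*t + b to sensor history
# via least squares. The (hours, bearing-temperature) data is made up.
history = [(t, 20.0 + 0.5 * t) for t in range(10)]

n = len(history)
sx = sum(t for t, _ in history)
sy = sum(y for _, y in history)
sxx = sum(t * t for t, _ in history)
sxy = sum(t * y for t, y in history)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# "Pushed out to the edge": only the fitted parameters travel, not the data.
payload = json.dumps({"slope": a, "intercept": b})

# Edge side: load the model and score new readings with no cloud link.
model = json.loads(payload)

def predict(hours):
    return model["slope"] * hours + model["intercept"]

print(round(predict(12), 1))  # extrapolated temperature at hour 12 -> 26.0
```

Because only a small parameter payload crosses the network, the edge keeps working through connectivity outages, which is exactly the latency and reliability appeal described above.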
Some applications — autonomous cars are the classic example — would be impossible without edge computing. It’s also a logical way to bring IoT capabilities to areas like mining and utilities. So when is edge going to transform manufacturing?
Don’t hold your breath
“This change is really evolutionary, not revolutionary,” says Ergin Tuganay, a partner and Head of Industry 4.0 at Nortal. To see the manufacturers’ point of view, he says, you have to start with the understanding that, in their sector, the “edge” part of the equation isn’t novel at all.
“Huge manufacturing processes have always been intelligently controlled by IT systems sitting on the edge, at the factory level,” he says. “What’s new is that there’s a new breed of algorithms available, thanks to Big Data cloud computing power.”
The real potential gain lies in replacing the algorithms currently used at the edge, which are typically created by process engineers from individual systems’ data, with far superior versions developed in the cloud through machine learning on data drawn from many different systems. These excel at, for example, fine-tuning processes for better efficiency or predicting when a component will fail.
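The contrast between the two kinds of algorithm can be sketched in a few lines. The scenario and all numbers are hypothetical: a process engineer has set a fixed vibration limit from one machine's specs, while a "learned" alternative simply picks the cutoff that best separates actual failures in labeled readings pooled from several systems. Real predictive-maintenance models are far richer than a single threshold, but the source of their advantage, training on pooled cross-system data, is the same.

```python
# Hand-written rule a process engineer might set from one machine's specs:
ENGINEER_LIMIT = 80.0  # vibration level above which failure is "expected"

# Labeled readings pooled from several systems: (vibration, failed_within_week).
# All values are invented for illustration.
readings = [(55, 0), (60, 0), (62, 0), (68, 1), (70, 1), (75, 1), (82, 1), (90, 1)]

def accuracy(limit):
    # Fraction of readings where "vibration > limit" matches the failure label.
    return sum((v > limit) == bool(y) for v, y in readings) / len(readings)

# "Learning": choose the cutoff that best separates failures in the pooled data.
learned_limit = max(sorted(v for v, _ in readings), key=accuracy)

print(accuracy(ENGINEER_LIMIT), accuracy(learned_limit))  # 0.625 1.0
```

On this toy data, the spec-based limit misses the early failures at 68–75, while the data-driven cutoff catches them all, which is the kind of gain the cloud-trained models are after.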
But taking advantage of the big brain in the cloud requires overcoming one gigantic hurdle: disconnected and messy data. Before there’s any hope of applying AI and producing a supercharged algorithm, the data must be prepared — ingested from its sources, cleaned, feature-engineered and normalized by data engineers. Without this pre-processing phase, the outcome of the machine learning will likely be poor.
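A toy version of that pre-processing shows why it is laborious. The two "silos" below are entirely hypothetical: one vendor's system logs temperatures in Celsius with occasional gaps, another logs Fahrenheit as strings with its own field names. Before any model can be trained, both feeds have to be reconciled into one clean schema — multiplied, in a real plant, across hundreds of lines and decades-old systems.

```python
# Two "silos" reporting the same quantity in incompatible shapes.
# All records and field names are invented for illustration.
silo_a = [{"line": 1, "temp_c": 21.5}, {"line": 1, "temp_c": None}]
silo_b = [{"LineID": "2", "TempF": "72.5"}, {"LineID": "2", "TempF": "n/a"}]

def ingest():
    # Normalize both feeds to one schema: {"line": int, "temp_c": float}.
    rows = []
    for r in silo_a:
        if r["temp_c"] is not None:           # cleaning: drop missing values
            rows.append({"line": r["line"], "temp_c": r["temp_c"]})
    for r in silo_b:
        try:
            f = float(r["TempF"])
        except ValueError:                    # cleaning: drop unparseable values
            continue
        rows.append({"line": int(r["LineID"]), "temp_c": (f - 32) * 5 / 9})
    return rows

clean = ingest()
print(clean)  # two usable rows, both in Celsius under one schema
```

Each extra vendor format means another hand-written mapping like the second loop, which is why the ingestion phase dominates the timeline before any machine learning can start.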
In nearly all manufacturing companies, data are nonuniform and highly compartmentalized, what IoT insiders like to call “siloed.” A typical manufacturing plant may have hundreds of production lines supplied by various equipment vendors, each using its own system for data collection and reporting. Some of them date back decades and lack decent data interface capabilities.
“There’s a huge leap getting to the point where you have the data ingested and prepared, and then it takes time to understand what information the data carries,” Tuganay says. “It’s a time-consuming journey to get it done properly and it requires competencies in several different domains, e.g., integration technologies, data engineering, data analysis and machine learning algorithms — and let’s not forget business domain knowledge and storytelling for delivering the results to all stakeholders.”
Mainly because of this, even digitally advanced manufacturers have only reached the point where they can effectively use data for reporting and business intelligence, according to Tuganay. To date, predictive analytics built on modern machine learning approaches is nearly non-existent in traditional manufacturing.
Compounding the problem, as Tuganay points out, is that it’s often impossible to see precisely what business benefits a company will get until after it has gone through the laborious process of getting its data house in order. Many owners and managers are understandably reluctant to take the plunge without a clear payoff in sight.
“The potential business value is huge. Nobody can deny that,” he says, “But the biggest fear is that data silos are really exotic. It can become fairly expensive to even get to the phase or maturity level where all data is collected in a central repository.
“That’s why when Nortal does this, we try to make it easy and cost-effective for the clients,” Tuganay explains. Instead of a big bang, Nortal designs a data strategy and roadmap: analyzing the business case, carefully selecting data accessible with reasonable investments, preparing the data, and training and deploying the first models.
“As AI and machine learning is an iterative process, we revisit the models and help improve them in the future with newly added data sources,” Tuganay says. “This way there’s the possibility to pick those low-hanging fruits already from the beginning.”
It’s somewhat telling, Tuganay notes, that the first movers towards edge computing are shaping up to be large, asset-heavy industries such as oil and gas, pulp and paper, metals, utilities and marine, where there are the clearest potential gains as well as capital to invest. Even here, practical application of edge computing is still in its infancy.
The rest of the medium to large manufacturers are proceeding toward edge-enabled IoT, albeit at a far slower pace. They’re at the stage where they’ve seriously started to consider collecting their data in a cloud instead of increasing use of their existing on-premises databases, Tuganay says. He predicts that, in five years, practically all will have their data consolidated, or at least be working on doing so. Only after that happens will edge computing finally be primed to deliver the wonders that today’s enthusiasts are promising.