by Priit Liivak

The pains and gains of the hyperautomation trend

Hyperautomation is a concept Gartner has identified as a significant technology trend for two years in a row, and the software development industry is clearly pushing it forward. The road to successful hyperautomation, however, holds both obstacles and opportunities. Here is our experience.

The concept of hyperautomation is simple: it pushes organizations to increasingly streamline processes using automation and modern technology. Our industry has evolved significantly over the past decade, introducing practices and techniques that further automate and improve our daily operations to increase the speed and quality of the solutions we build. Our customers' businesses have developed likewise, and emerging technologies have brought forth opportunities never considered possible in the past. Working toward hyperautomation takes broad knowledge of both the technology and the business, and it allows for efficient and effective value creation.

Although the elements of hyperautomation are simple, its infinite loop of improvements is often difficult to execute. I would like to share my thoughts on some of the hidden opportunities and obstacles of hyperautomation. I will cover the very basic requirements that enable change in any organization and some of the technical approaches to overcoming the challenges of automating decision processes.

Motivation for change

Moving toward hyperautomation requires a series of iterative automation improvement loops. Each iteration introduces changes to the software, processes, or ways of working, and these changes need to be backed by transparent motivation to succeed.

As with software architecture, every component within a process needs a proper justification that correlates with the context of the time it was put in place. In other words, we should assume that every step of the process exists because, when the process was designed, it served a valuable purpose. But as the context changes over time, it is essential to re-evaluate the methods as well. Such a redesign often entails significant effort and may require changing the mindset of stakeholders or even changing the supporting legislation. If the motivation for automation is to improve the effectiveness and efficiency of business operations, this effort is unavoidable. If, on the other hand, the push for automation is merely to reduce time spent on manual data entry, it might serve short-term goals but lead to more significant problems, because the quality and necessity of the collected data are never verified. Both approaches are viable routes toward hyperautomation, but simply digitizing existing services leads to a significantly longer journey.

An honest, unbiased evaluation of the current situation and its history is required to analyze the necessity of a process component and its alternatives. As with personal growth, the change of an organizational process can only start from within and cannot be forced from the outside. I believe this analogy must be understood and accepted before any redesign, at any level, can begin. And even with the right starting point, several hidden biases can influence the transition toward more seamless processes. System justification theory proposes that people have underlying needs that are satisfied by defending and justifying the status quo, even when the system disadvantages some of them. This system justification bias is an additional opposing force that prevents change from beginning at the root level. It can lead to situations where current manual processes are simply translated into automation without considering the possibility of designing a better process within the new automated context.

My experience has taught me that all parties easily underestimate the time required to gain sufficient contextual understanding. Customers lean toward lower estimates because of their own expertise in their domain – they no longer perceive complexity in areas that have become obvious to them over time. Consultants, on the other hand, can easily fall for a variation of the Dunning–Kruger effect that leads them to overestimate their ability to solve problems in the customer's domain, even though they lack the customer's years of experience in that area. There is no clear recipe for overcoming these obstacles and risks, but making everyone aware of them certainly helps.

Transparency over control

Whenever a significant share of a process is automated, it creates the illusion of reduced control. If a previously manual decision is now made through automation, the process owner seemingly loses control over the process execution. The need to build trust in the automation correlates with the complexity of the decisions it is designed to carry out. Regardless of the underlying technical implementation, trust needs to be established in the automated decision process, not in the technology itself. The underlying implementation of the automated decision algorithm can be a machine learning model or a more transparently defined but complex decision tree. The development team needs to trust that technology; the customers need to trust the decision process it automates.

I have seen transparency act as a significant influence in building trust. We can create tests that verify the algorithm for simple tasks, and we can run simple queries that check data integrity against existing business rules. This allows us to provide proof that the automation works properly. A step further is an approach that executes both the manual process and the automation in parallel.

This is extremely helpful for building trust in complex systems. The method simply leaves the manual process in place but also runs the automated decision algorithm in the background, persisting its result in the end. The precision of such automation can then be evaluated after a number of process executions have been completed. The downside of this approach is that there is no apparent change in functionality for the users, which may make the development cost harder to justify. A step closer to the users is to let them make the decision and then show them what the automated decision would have been. This way, the user still makes their own decision and can verify the automation by checking whether the algorithm would have acted differently. There are many possibilities for merging such recommendation systems into an existing process, and the change management for such a merge is just as important an influence on the eventual success as the change itself.
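As a minimal sketch of this parallel, shadow-mode approach – all names, the decision function, and the in-memory store are illustrative assumptions, not a prescribed design – the automated decision can simply be recorded next to the manual one and compared later:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Callable


@dataclass
class DecisionRecord:
    """One process execution with both decisions, kept for later comparison."""
    case_id: str
    manual_decision: Any
    automated_decision: Any
    recorded_at: datetime


def run_in_shadow_mode(case_id: str,
                       case_data: dict,
                       manual_decision: Any,
                       automated_rule: Callable[[dict], Any],
                       store: list[DecisionRecord]) -> Any:
    """Persist the manual decision as before, but also run the automated
    decision algorithm in the background and keep its result for evaluation."""
    automated_decision = automated_rule(case_data)
    store.append(DecisionRecord(
        case_id=case_id,
        manual_decision=manual_decision,
        automated_decision=automated_decision,
        recorded_at=datetime.now(timezone.utc),
    ))
    # The manual decision remains the authoritative one for now.
    return manual_decision


def agreement_rate(store: list[DecisionRecord]) -> float:
    """After enough executions, report how often the automation agreed with the
    human decision - one simple, transparent input for building trust."""
    if not store:
        return 0.0
    matches = sum(1 for r in store if r.manual_decision == r.automated_decision)
    return matches / len(store)
```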

Automation engineering

From a technical viewpoint, automation engineering is much more than just coding a pipeline from commit to deployment. Automating effectively requires a wide range of competencies: strategic thinking, negotiation, operations, monitoring, security, change management, development, and much more. When the direct benefits of automation do not clearly outweigh the costs, the more indirect benefits need to be exposed, and automation engineers need the capacity and willingness to create a cost–benefit analysis. Many beneficial side effects stem from a strategy that enables teams to contribute to decision-making with their in-depth technical knowledge and access to raw data. But strategy alone is not sufficient. The participatory mindset of the automation teams needs to be met with proactive involvement and support from the management level. Similar dynamics influence the results in the relationship between development partner and customer: the development partner must have the willingness and capacity to understand the context beyond the contract, and the customer should have sufficient trust to include the partner in relevant decisions. And since a team is the sum of its members, the participatory mindset of a team is directly influenced by its individuals.

In addition to a desire to make the best possible decisions, the team needs to understand technical software engineering principles and patterns. I will try to cover some of these.

Dependency inversion

Coupling directionality is, quite simply, the direction of the dependencies between components in a non-trivial system. We understand its impact in software design and architecture, and we should also consider its effect when implementing automation. For example, there are a few different approaches to automating the deployment process. The most common practice is still to build deployment logic as pipelines that push the artifacts, and perhaps even the configuration, out to the target environments. Depending on specific design decisions, we can observe a few different dependencies in this pattern.

First, we can examine the coupling between the deployment process itself and a CI/CD environment. Having a lot of functionality in a pipeline definition ties you more tightly to the chosen CI/CD environment. An alternative is to implement the deployment logic as part of the build scripts or as separate bash scripts. This way, the CI/CD platform becomes a coordinator that executes predefined activities with the proper parameters. This simple design change can make a huge difference when changing the CI/CD platform. It also allows development teams to use the same automation for their daily building, testing, and deploying activities – keeping them more informed about the details of these scripts.

We can also look at the coupling between the pipeline and the target environments. Ask yourself: does adding another training environment require modifying the pipeline and duplicating some code? If the answer is yes, then your pipeline design contains a dependency that may hinder you in the future.
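As an illustration of this decoupling – a minimal sketch only, where the environment names, chart path, and Helm values are assumptions rather than a recommended setup – the deployment logic can live in a small script that the pipeline (or a developer) merely invokes with parameters:

```python
#!/usr/bin/env python3
"""deploy.py - the deployment logic lives here, not in the pipeline definition."""
import argparse
import subprocess
import sys

# Adding a new environment means adding one entry here; the pipeline stays unchanged.
ENVIRONMENTS = {
    "dev":     {"namespace": "myapp-dev",     "replicas": 1},
    "staging": {"namespace": "myapp-staging", "replicas": 2},
    "prod":    {"namespace": "myapp-prod",    "replicas": 4},
}


def deploy(env: str, version: str) -> int:
    """Deploy the given artifact version to one environment via Helm (illustrative)."""
    params = ENVIRONMENTS[env]
    cmd = [
        "helm", "upgrade", "--install", "myapp", "./chart",
        "--namespace", params["namespace"],
        "--set", f"image.tag={version}",
        "--set", f"replicaCount={params['replicas']}",
    ]
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy a released version to an environment")
    parser.add_argument("--env", choices=ENVIRONMENTS.keys(), required=True)
    parser.add_argument("--version", required=True)
    args = parser.parse_args()
    sys.exit(deploy(args.env, args.version))
```

A pipeline step then reduces to something like `python deploy.py --env staging --version 1.4.2` (with whatever version variable your CI platform provides), and a developer can run exactly the same command locally.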

Most decent-sized projects I have been involved with have eventually reached a point where we discuss on-demand environments, which require standardized environment definitions and parameterized deployment scripts. By understanding these dependencies and their impact, an automation engineer can make better-informed design decisions that consider the strategy and future of the given project.

Another approach is to invert everything and have the pipeline know nothing of the deployment. As the last step, the pipeline could publish the artifact, and from there on, environments take over. First, an environment detects a newly published artifact, downloads it, and then deploys it. Obviously, it requires an additional actor in the environment that observes the versions, identifies the change, and executes the actual deployment.
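The sketch below illustrates that observer in its simplest possible form – a polling loop with a hypothetical artifact registry endpoint and the parameterized deployment script from the earlier sketch. It is not a real deployment operator, only an illustration of inverting the dependency so that the environment pulls its own updates:

```python
import json
import subprocess
import time
import urllib.request

# Hypothetical endpoint returning the latest published version, e.g. {"version": "1.4.3"}.
ARTIFACT_API = "https://registry.example.com/api/myapp/latest"
POLL_INTERVAL_SECONDS = 60


def latest_published_version() -> str:
    """Ask the artifact repository which version was published most recently."""
    with urllib.request.urlopen(ARTIFACT_API) as response:
        return json.load(response)["version"]


def deploy(version: str) -> None:
    """Reuse the same parameterized deployment script the developers and pipelines use."""
    subprocess.run(
        ["python", "deploy.py", "--env", "staging", "--version", version],
        check=True,
    )


def watch() -> None:
    """Runs inside the target environment: notices new artifacts and updates itself."""
    deployed = None
    while True:
        published = latest_published_version()
        if published != deployed:
            print(f"New artifact {published} detected, deploying...")
            deploy(published)
            deployed = published
        time.sleep(POLL_INTERVAL_SECONDS)


if __name__ == "__main__":
    watch()
```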

At Nortal, we have had good experiences with ArgoCD in the role of such an observer. Once a version has been deployed to a given environment, another pipeline can be triggered that runs additional tests on the system and eventually promotes the artifact to be deployed to yet another environment. In this configuration, the additional actor is coupled to the artifact repository and the environments. If such an actor can be deployed into the same environment as the system itself, we can even remove the coupling to the environment. As a result, we have created environments that observe changes in published artifacts and trigger their own updates.

Configuration promotion

I am a firm believer in “everything as code,” and although infrastructure as code is gaining ground, there are still many manually managed configuration transfers that should not be acceptable. One example is the logging configuration – not your Logback configuration XML, but the log aggregator configuration. Whether you use Splunk, Graylog, ELK, or anything else, you will create a significant amount of configuration describing stream splitting, dashboards, reusable queries, and alerts. Often most of this configuration is set up in the production environment, while the lower environments lack the same level of monitoring.

We should treat our log aggregator and monitoring the same way we define our infrastructure – we should automate it. If the tool you have chosen does not offer an API to export and import the configuration, maybe you should reconsider your choice and look at more modern tools. If an export option exists, we can store all our dashboards, alerts, and other configuration in the code repository. Changes made in a specific environment can then trigger a pipeline that updates the configuration descriptors in the code repository and promotes the configuration to the higher environments where applicable.
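A minimal sketch of such a promotion step might look like the following. The export and import endpoints are placeholders, not the API of Splunk, Graylog, or any other real tool; the point is only the flow: export from the environment where the configuration was edited, persist it in the repository for review, and import it into the next environment.

```python
"""promote_monitoring_config.py - keep aggregator dashboards and alerts as code."""
import json
import pathlib
import urllib.request

SOURCE = "https://monitoring.test.example.com/api"   # environment where config was edited
TARGET = "https://monitoring.prod.example.com/api"   # environment to promote into
REPO_DIR = pathlib.Path("monitoring-config")         # directory checked into the code repository


def export_config(base_url: str) -> dict:
    """Export dashboards, alerts, and queries from one aggregator instance."""
    with urllib.request.urlopen(f"{base_url}/export") as response:
        return json.load(response)


def save_to_repo(config: dict) -> None:
    """Write the configuration as files; committing them is what gets reviewed and linted."""
    REPO_DIR.mkdir(exist_ok=True)
    for kind in ("dashboards", "alerts", "queries"):
        path = REPO_DIR / f"{kind}.json"
        path.write_text(json.dumps(config.get(kind, []), indent=2, sort_keys=True))


def import_config(base_url: str, config: dict) -> None:
    """Push the same configuration into the target environment."""
    request = urllib.request.Request(
        f"{base_url}/import",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    config = export_config(SOURCE)
    save_to_repo(config)              # a pipeline commits and reviews this diff
    import_config(TARGET, config)
```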

It may well be that we expect more detailed monitoring in our test environments than in production, but every alert that exists in production should be applied in the test environments as well. If a strict automated promotion process is in place, it encourages a thorough understanding of the required scope of every added alert or metric, and configuring it accordingly. Persisting such configuration as code also enables quality control methods such as code reviews and automated linting. There are many challenges to overcome to implement such a configuration promotion process:
• The export/import mechanics
• Configuration adjustments for specific environments
• Augmenting configuration with credentials and other parameters

But when building something sustainable, the effort pays off.

Toward hyperautomation

The best results often stem from the overlap of different disciplines, and it is no different for automation. Striving toward hyperautomation requires the willingness and courage to make significant changes based on the current context; it requires excellent business understanding on all levels; and it requires skilled engineers with a variety of competencies. It is difficult to keep the infinite loop of improvements going if even one of these elements is lacking. The connecting fiber across all these disciplines is communication – it takes time and patience from the engineers to explain the value of configuration promotion, a “feature” that seems to add no value for the end users. Similar effort should be planned on the business side to communicate the value of change requests and new services. The move toward hyperautomation becomes tangible only when business features, new process automation ideas, and technical improvements can be compared using similar criteria by all stakeholders. For us, constant automation improvement is always a high priority, and we are always on the lookout for like-minded colleagues.
