Article

by Nortal

Managing Serverless Workloads with Knative

Containers are where the momentum in the software market is. In fact, a recent IHS Markit report projects that revenue for the container market will surpass $1.6 billion by 2023.

Where does Kubernetes come into play here? While Kubernetes might add a layer of complexity in terms of manual configuration for software dependencies, networking, and debugging, it was created to address container management and orchestration. Kubernetes use skyrocketed because, within every cluster, you get the consistency of knowing that every application will behave the same way, regardless of where it is distributed.

So, what is Kubernetes? Kubernetes is an open-source platform for managing containerized workloads and services, enabling automation and declarative configuration. The challenge is that Kubernetes can make developers’ lives more difficult and less convenient. Developers have to prioritize fixing bugs, refactoring software, and developing new applications, so many don’t have the time to master Kubernetes.

Also, since Kubernetes functions more like an operating system than an application framework, it demands a different skill set than the one inherent to a development role. Concepts developers may need to learn before actively working with Kubernetes include:

  • Virtualization
  • Containers
  • Pods
  • Services
  • Replication controllers
  • Linux
  • Command line interface (CLI)

What is the new solution? Enter Knative. Eyal Manor, VP of Engineering at Google Cloud Platform, announced Knative’s debut in July 2018 as part of Google’s campaign to bring “the best of serverless” to developers. Other companies that have contributed to Knative include IBM, SAP, Pivotal (acquired by VMware), and Red Hat (now owned by IBM).

As a set of extensions to Kubernetes, Knative lets you autoscale, automate container builds, create fast eventing capabilities, and access serverless benefits on the Kubernetes platform. If you need to manage the life cycles of hundreds of containers, that automation is a significant efficiency gain. Plus, Knative is a portable orchestration platform.

Understanding serverless

Many get confused by the term “serverless,” thinking it means that functions can operate without a server; this isn’t the case. Serverless still requires servers, but the name stuck because application developers never see those servers, nor do they have to define their attributes. As a cloud-native, function-as-a-service technique, serverless was initially introduced through Amazon Web Services Lambda in 2014. With Lambda, developers could create tasks that executed in containers, scaled on demand, and terminated when the function completed, making serverless more cost-effective for developing and maintaining applications.

Now that serverless is becoming mainstream, developers find that infrastructure for containers and virtual machines is less complex than it used to be. Using functional programming tools and languages, developers can create applications that still support traditional programming models. Serverless performs as a functional microservices architecture, and developers pay only for the computing resources an application actually uses; in other words, it offers utility-like pricing. Use cases include stateless applications, query response, voice recognition, and face recognition.

Serverless simplification through Knative

Serverless doesn’t fully address the complexity issue, especially when dealing with various cloud vendor offerings. With Knative, you get three important modules: Eventing, Serving, and Build. With these modules, you can create an application-agnostic, serverless platform. Not to mention, you can extend the serverless capabilities to monitoring, observability, tracing, microservices, and more.

Since Kubernetes is container-based, it performs intensive work, such as providing rolling updates, automated deployments, and application management. If a process crashes inside the container, Kubernetes will detect it and restart the associated services. The service mesh contributes to routing with traffic control, service discovery, and even per-request retries.

With Knative, developers can create containerized applications without in-depth knowledge of the underlying Kubernetes cluster. Knative also gives developers control over their deployment while decreasing the complexity. How do they work together? In short, Knative is installed into a Kubernetes cluster and acts as a layer on top of it.
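As a rough sketch of that installation (the release version below is illustrative, not a recommendation; check the Knative releases page for current manifests), installing Knative Serving amounts to applying a few YAML manifests to an existing cluster:

```shell
# Apply the Knative Serving CRDs and core components
# (knative-v1.14.0 is an example version).
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.14.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.14.0/serving-core.yaml

# Knative Serving also needs a networking layer; Kourier is one lightweight option.
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.14.0/kourier.yaml
```

Once these components are running, the cluster accepts Knative resources such as `Service` alongside ordinary Kubernetes objects.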

Kubernetes and Knative are unlike any other open-source container orchestration tool in the industry. These tools together offer the ability to manage a cluster of containers as a single system and provide the platform for serverless workloads.

The challenges of Knative

One of the most prominent obstacles around serverless and containerized workloads is visibility. There are many third-party tools available to help solve this problem. Most focus on throughput/traffic monitoring or cluster (pod/node) monitoring. These tools range in price and support from open-source options all the way to fully managed and hosted solutions. Cost, level of support, ability of the team, and required visibility should all be considered when making this decision.

The main caveat to using Knative is first running Kubernetes. Your organization will need to have a comfortable understanding of how to actively work with and manage Kubernetes. If your organization is already familiar with Kubernetes, then the complexity of adding Knative is as simple as YAML and a few commands.

Since Knative is fairly new, best practices are still being developed and tested. Even so, with serverless workloads you want strong testing and gating, especially integration testing; you need confidence that what ships actually behaves as intended once deployed.

Why should I use Knative?

In three words: Ease of use. Teams need to write fewer manifest files, reducing the lines of code that need to be supported. Right now, many companies push the creation of manifest files onto their operations team. This is not a best practice, as it separates knowledge of the application configuration from the deployment. With Knative, the operations team can focus on cluster management while developers write a simple, often three-line deployment.
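As a sketch of what such a simple deployment looks like (the service name and sample image below are placeholders, not from the article), a complete Knative Service manifest can be this short:

```yaml
# Minimal Knative Service; "hello" and the image are illustrative placeholders.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

Applying this single file with `kubectl apply -f service.yaml` gives the application a deployment, autoscaling, and an HTTP route without any further manifests.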

Knative is really an out-of-the-box solution. Where Kubernetes empowers the management of containers, Knative provides the ability to build and deploy a serverless application. One huge benefit is its ability to do traffic routing, which allows simple blue/green or canary deployments from a CI/CD tool into Kubernetes.
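As an illustrative sketch of that traffic routing (revision names, image, and percentages here are hypothetical), Knative’s `traffic` block splits requests between revisions, which is all a basic canary rollout needs:

```yaml
# Canary rollout sketch: 90% of traffic to the old revision, 10% to the new one.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      name: hello-v2        # name the new revision so it can be targeted below
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # placeholder image
  traffic:
    - revisionName: hello-v1
      percent: 90
    - revisionName: hello-v2
      percent: 10
```

Shifting the percentages toward the new revision (or swapping them outright for blue/green) is a YAML change a CI/CD tool can apply automatically.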

Knative also takes workflow and process burdens off developers. Previously, you either wrote custom code to monitor container builds or fell back on manual troubleshooting, and you were tied to a cloud-based serverless platform that specifically offered Kubernetes integration. With Knative, you don’t have to worry about custom coding, manual management, or vendor lock-in from your cloud service provider.

Currently, every client Nortal works with is already using Kubernetes. Knative is the next iteration, the next step forward from Kubernetes, if you will. In fact, over the next 3-5 years, we anticipate high levels of managed Knative service adoption like Cloud Run on GCP or TriggerMesh KLR on AWS.

Knative is the future of Kubernetes deployment. Why? It takes on the management side, with best practices already built in, and it reduces risk. The reduction in the amount of code developers have to write is paramount; any engineering team would appreciate it. Additionally, when there is less code in your deployment pipeline, you don’t have to rely completely on a tool to do it for you. Anything that makes developers’ lives easier will attract widespread adoption.

Knative frees developers from building and maintaining their own extensions for Kubernetes, which can be quite tedious. As far as we’re concerned, the time to begin exploring Knative is now. Contact Nortal to get started on deploying Knative today.
