Get Started with Serverless on Red Hat OpenShift

masamh
Jan 31, 2021

If you have worked with containers before, you are probably familiar with how useful they are for isolating your applications and controlling the workload in your microservices architecture. Serverless, on the other hand, is useful for running your code only when you need it, which saves on the cost of running your backend applications.
Of course, there are still challenges when working with each of these technologies alone, but what if we combine them? That way, we get the best of both worlds!

This was the topic Saif Rehman and I discussed on Thursday during our workshop, “Introduction to Serverless Applications on Red Hat OpenShift using Knative”.

We started the workshop with a review of serverless and containers, which led us to the Knative project and OpenShift Serverless. Keep in mind that the OpenShift Serverless Operator is based on the Knative project, which allows us to run and manage serverless workloads on Kubernetes.

Why Should We Use Containers and Serverless Together?

Containers and serverless each have their own advantages and disadvantages when used alone. Here are some reasons to use them together:

  • Avoid vendor lock-in: one of the disadvantages of serverless is vendor lock-in, meaning you become highly dependent on a single vendor for a product or service. When you use serverless with containers, you can pick your technologies freely without being constrained by the options a cloud provider offers.
  • Save resources: in a Kubernetes cluster, pods that run all the time can consume more resources than expected, and a quota on your project may prevent you from scaling to the number of replicas you need. Running your containerized applications as serverless lets them scale while consuming only the resources they actually need.
  • Containers support more languages: with serverless, the choice of language depends heavily on what the cloud provider offers. With containers, there are no such limitations, so you can build serverless applications in any programming language you like.

What Is the Knative Project?

Knative is an open-source project that allows us to run serverless workloads on Kubernetes. It has two main components:

  • Knative Serving: allows you to auto-scale your applications, including scaling to zero, based on HTTP load. The deployment model of a Knative Serving Service consists of three components: configuration, revision, and route (see the deployment sketch after this list).
  • Knative Eventing: lets you subscribe to event sources, and includes brokers and triggers that filter events based on their attributes.
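To make the Serving deployment model concrete, here is a minimal sketch (not part of the workshop code) that creates a Knative Service using the official Kubernetes Python client. The service name, image, and namespace are placeholder assumptions, and Knative Serving (or the OpenShift Serverless Operator) must already be installed on the cluster:

```python
# Minimal sketch: deploy a Knative Service with the Kubernetes Python client.
# The name, image, and namespace below are placeholders, not the workshop's.
from kubernetes import client, config

config.load_kube_config()  # uses your current oc/kubectl context

# A Knative Service is a single custom resource; from it, Knative derives
# the configuration, revisions, and route mentioned above.
service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "sentiment-backend", "namespace": "default"},
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"image": "quay.io/example/sentiment-backend:latest"}
                ]
            }
        }
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=service,
)
```

From this single Service object, Knative generates a configuration, stamps out an immutable revision for each change, and wires up a route that splits traffic between revisions; the autoscaler then scales each revision's pods up and down (to zero, when idle) based on HTTP load.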

Serverless Demo

In the code lab, we demonstrated Knative Serving through a demo that Saif and I built. The demo consists of three components: a frontend built with Angular and served by Nginx, a backend built with Flask, and a Cloudant database. The architecture diagram below shows how these components fit together.

Sentiment Analysis Serverless — Architecture Diagram

The frontend application consists of a simple form where the user enters a sentence, which is then sent to the backend through an API call. The backend is the serverless application: it spins up new replicas whenever it is called, writes information about the sentence (its text and sentiment value) to the database, and retrieves the same details back. When the application isn't being called, all pods are terminated and scaled down to zero.
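For illustration, here is a minimal sketch of what such a backend might look like. This is not the workshop's exact code: the `/analyze` route, the TextBlob sentiment scorer, and the environment variable names are all assumptions.

```python
# Minimal sketch of a Flask backend that scores a sentence's sentiment and
# stores it in Cloudant. Route name, TextBlob usage, and env vars are assumed.
import os

from cloudant.client import Cloudant
from flask import Flask, jsonify, request
from textblob import TextBlob

app = Flask(__name__)

# Connect to IBM Cloudant with IAM credentials supplied via the environment.
db_client = Cloudant.iam(
    os.environ["CLOUDANT_USERNAME"],
    os.environ["CLOUDANT_APIKEY"],
    connect=True,
)
db = db_client.create_database("sentiments", throw_on_exists=False)


@app.route("/analyze", methods=["POST"])
def analyze():
    text = request.get_json()["text"]
    # Score the sentence: polarity ranges from -1 (negative) to 1 (positive).
    sentiment = TextBlob(text).sentiment.polarity
    doc = db.create_document({"text": text, "sentiment": sentiment})
    # Read the same details back, as the demo does, and return them.
    return jsonify({"id": doc["_id"], "text": text, "sentiment": sentiment})


if __name__ == "__main__":
    # Knative routes traffic to the port the container listens on.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Because the container only does work while a request is in flight, Knative can safely scale it down to zero between calls and spin replicas back up on the next request.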

This is a simple but effective use case that demonstrates some common serverless tasks: handling HTTP API calls, writing to a database, and reading from it.

You can view the project and the steps to build it on GitHub, and watch the replay of the workshop on the IBM Developer Crowdcast channel.

You can view more resources on Serverless on Red Hat OpenShift.
