
What Does Cloud-Native Even Mean?

Somehow, clouds seem to be stuck in a perpetual terminological limbo. Back when cloud computing started, we had IaaS, PaaS, and SaaS, and there were a bunch of users scratching their heads over just what all those “aaS-es” meant. The good news is that you don’t hear much about them now anyway. Instead, we have “containers,” “microservices,” “serverless,” and (the favorite of the moment) “cloud-native.” Maybe we should define these new concepts before they fall out of favor in turn.
 
“Containers” are not limited to the cloud; they’re a form of virtualization. Virtualization traditionally meant creating virtual machines on servers, but that was a fairly high-overhead process. Containers are a feature of many operating systems, a way of partitioning resources that’s a little more rigorous in separating things than simple multi-tasking but not as rigorous as virtual machines.
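 
If you want to see that partitioning in action, here’s a minimal sketch, assuming a Linux host using cgroup v2 (the file paths are an assumption and vary by distribution and cgroup version), that reads the resource limits the operating system is enforcing on whatever container it runs inside:

```python
# A peek at OS-level resource partitioning from inside a container.
# Assumes Linux with cgroup v2 mounted at /sys/fs/cgroup; older cgroup v1
# hosts lay these files out differently.
from pathlib import Path

def read_limit(name: str) -> str:
    """Read a cgroup v2 control file, if the kernel exposes it here."""
    path = Path("/sys/fs/cgroup") / name
    return path.read_text().strip() if path.exists() else "unknown"

print("memory limit:", read_limit("memory.max"))  # bytes, or "max" if unlimited
print("cpu limit:   ", read_limit("cpu.max"))     # "<quota> <period>" in microseconds
```

No hypervisor is involved; the kernel itself enforces those numbers, which is a big part of why containers carry so much less overhead than virtual machines.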
 
What makes containers and clouds go together nicely is that a containerized application is highly portable. Containers work in part because they enforce a specific model of application connection and deployment, and that means that while you don’t need the cloud to use them, the cloud can benefit a lot from container use. Finally, containers use fewer resources, so they can lower cloud costs.
 
Kubernetes is a product of the container revolution, a software tool originally developed by Google and used to deploy and manage containerized applications. Arguably, Kubernetes is what’s made containers so successful, because it solves the problem of deploying applications that consist of a bunch of interrelated pieces, all of which must somehow roll out and run in harmony. Kubernetes can support cloud deployments, data center deployments, and hybrids of both, so it fits current cloud-based application thinking.
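 
To make that concrete, here’s a rough sketch using the official Kubernetes Python client (the kubernetes package; a working kubeconfig file is assumed) that asks a cluster what it’s running, wherever that cluster happens to live:

```python
# List the Deployments a Kubernetes cluster is running, using the
# official Python client. Assumes credentials in ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()   # same call for cloud, data center, or hybrid clusters
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
          f"{ready}/{dep.spec.replicas} replicas ready")
```

The same few lines work whether the cluster runs in a public cloud, a data center, or a hybrid of both, which is exactly the portability point.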
 
What about those pesky microservices? This is one of those concepts that’s popularized in part by the fact that everyone has their own definition, which means almost anything can qualify as a microservice. The general definition is that a microservice is an atom of business logic, so in most cases, it represents the smallest logical piece of an application. However, Google and others use the term to mean an atom of functionality, something that’s often smaller than a basic unit of business logic.
 
Whichever definition you use, microservices are presumed to be deployed as separate components, linked in some way to the rest of the application (and to each other) with some sort of network mesh. In fact, there are a number of products, called “service meshes,” designed to provide this kind of linkage. What makes it complicated is that microservices are presumed to be scalable and resilient, meaning you can spin up another one if the first one breaks, or scale them up or down in number to accommodate load changes.
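 
As a sketch of what one of those separately deployed pieces might look like, here’s a toy microservice, using only Python’s standard library, that owns a single (made-up) atom of business logic. Because it keeps no state between requests, a mesh or load balancer can run as many copies of it as the load demands:

```python
# A toy stateless microservice: one unit of business logic behind HTTP.
# The price-quoting logic and port number are illustrative, not taken
# from any real system.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def quote_price(item: str, qty: int) -> dict:
    """The single atom of business logic this service owns."""
    unit_prices = {"widget": 2.50, "gadget": 7.00}   # stand-in price table
    return {"item": item, "qty": qty, "total": unit_prices.get(item, 0.0) * qty}

class QuoteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps(quote_price(body["item"], body["qty"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    HTTPServer(("", 8080), QuoteHandler).serve_forever()
```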
 
Which brings us to “serverless.” Scaling and resilience pose a challenge to applications and components that store information within themselves. There’s a special class of components that never stores information internally, which means everything such a component does depends only on what’s sent to it as work. Output, as math wizards would say, is a function of input, so of course, this kind of component is often called a “function.” It’s also sometimes called a “lambda,” and if you look that up, you’ll find that a lambda is a nameless function. Okay, that would be refreshing given our terminology overload, but let’s move on.
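 
The distinction is easier to see in code than in prose. Here’s a minimal sketch, with made-up names, contrasting a component that hoards state with a pure “function” that doesn’t:

```python
# A stateful component versus a pure "function." Losing the stateful one
# loses its data; the pure one can be replaced or duplicated freely.

class StatefulTally:
    def __init__(self):
        self.total = 0                    # state lives inside the component

    def add(self, amount: int) -> int:
        self.total += amount              # kill this instance, lose the total
        return self.total

def pure_tally(total_so_far: int, amount: int) -> int:
    # All state arrives as input, so any copy, anywhere, gives the same answer.
    return total_so_far + amount
```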
 
The reason “functions” and “serverless” are linked is that any copy of a function, running anywhere, will produce the same output given the same input, so why bother worrying about where it’s running? You define a function to be associated with a unit of work to be done, and when that unit appears somewhere, you spin up your function and run it. No persistent server relationship is required.
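 
In practice, that looks something like this sketch in the shape of an AWS Lambda Python handler (the event/context signature is AWS’s convention; the event fields here are invented for illustration):

```python
# A serverless-style handler: the platform spins this up wherever capacity
# exists, hands it a unit of work as `event`, and tears it down afterward.
def handler(event, context):
    # Everything needed arrives in the event; nothing is remembered between calls.
    total = sum(line["price"] * line["qty"] for line in event["lines"])
    return {"order_id": event["order_id"], "total": total}
```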
 
Serverless works best when you have a boatload of resources to assign to run those (nameless) functions, which is why most of it is tied to public cloud services. The concept is most economical for users when there are things that need to be done only occasionally. Dedicating a virtual machine or container to an application component that waits around for hours for something to do doesn’t improve compute economics. For persistent work, you use persistent stuff, like containers, to process it.
 
And that brings us to the fuzziest of all our modern cloud-related terms, “cloud-native.” At a high level, it’s a simple concept: cloud-native is an application model designed to run only in the cloud. It never ran in the data center; it was designed around the features of the cloud. The problem is that this raises the question of what those features are.
 
Well, the cloud is resilient and can scale and replace things on demand, so that means that cloud-native applications are built from a bunch of serverless functions, right? No, because cloud-native applications can also use containers to hold software components that are needed regularly, so you can’t keep finding and loading them on demand. So maybe cloud-native is built on microservices, connected in a big mesh? No, because that structure would generate enormous latency from all the network connections.
 
What is cloud-native then? It’s an application model that optimally utilizes the cloud, whatever “optimally” means for a given application. Maybe it’s containers, microservices, functions, or all of the above.
 
Need more than that? Ask a software vendor, and they’ll tell you not to worry. Whatever “cloud-native” is, they have it.