
EPISODE 10 | Kubernetes Vs. Serverless – When To Use Which?

May 21, 2021 | 27 min 25 sec

Podcast Host – Madhura Gaikwad, Excellarate

Podcast Guest – Vinayak Joglekar, CTO at Excellarate | Dipesh Rane, Solutions Architect at Excellarate

Brief Summary

Choosing between a self-managed vs. a fully-managed infrastructure can be challenging for tech leaders, especially given the vast array of information and advice available to them.

In this episode, Vinayak Joglekar, CTO at Excellarate, is joined by Dipesh Rane, Solutions Architect at Excellarate, to understand the nitty-gritty of the common debate: Kubernetes vs. Serverless.

To help you make the decision, Vinayak and Dipesh cover points such as:

  • Cost benefits of Kubernetes Vs. AWS Lambda
  • What infrastructure makes more sense while launching a new application?
  • Serverless cloud vendor lock-in – should you be worried?
  • Scaling applications in serverless vs. self-managed infrastructure

Transcript

Madhura Gaikwad (00:08):

Hello, and welcome to the ZipRadio podcast, powered by Synerzip. I’m your host, Madhura. The topic for today’s episode is “Kubernetes versus Serverless: when to use which?” I’m joined today by one of our regulars, Vinayak Joglekar, who is the CTO at the Synerzip Prime group. Joining us along with Vinayak is Dipesh Rane, Solutions Architect at the Synerzip Prime group. Dipesh brings vast experience of working in the technology industry across various domains. Today, Dipesh and Vinayak will discuss the advantages, feasibility, and architectural impact of self-managed infrastructures like Kubernetes and fully managed serverless infrastructures like AWS Lambda. So welcome onboard, Dipesh and Vinayak.

Vinayak Joglekar (00:54):

Thanks. Thanks for having me here, and thanks, Madhura, for the introductions. So, Dipesh, I’m glad you are here today, and welcome to the episode. Yeah, so, wonderful. We have been talking about the different ways in which one can deploy containers on AWS and other cloud infrastructures, such as Google Cloud, Azure, or Alibaba. And, you know, there is a variety of options available. It started where you could just put your container in a managed environment such as ECS; that was a few years back. And then we had kops, the open-source Kubernetes operations tool with a very good command-line interface, where you could put operators and do many of the things you would like to do to orchestrate using Kubernetes, having your own EC2 instances and just spinning up your own Kubernetes cluster.

Vinayak Joglekar (02:04):

So that was the other thing we used to do. And then EKS started becoming more popular, around 2018, and of late I think EKS is a good option for somebody who would not like to take on the headache of managing his or her own cluster. Right. And in between, all along, we had this development of serverless architecture using Lambdas, where you didn’t need to worry about any of these things: not even your containers, or what was in your pod, nothing. You just put the code in a Lambda, and that was the other option. So this is the variety of options that a user or a developer is faced with, and you need to make choices. So, you know, what is your opinion, and how does one go about it? This is a spectrum of shared responsibility.

Vinayak Joglekar (03:10):

At one end you have, you know, something like serverless Lambdas, where the entire responsibility of picking up your code, putting it in whatever environment and orchestration, whatever clusters, and making it fully available is handled for you. I mean, you don’t have to worry about availability zones and all that with Lambda; that’s one end. At the other end, you have your own cluster, where you are managing it and you have to worry about everything, including how much you are paying for it, what the availability is, and all that. So how does one go about making choices across this whole spectrum? What is the right way of deploying my application? In today’s day and age, it could be a microservices-based containerized application.

Dipesh Rane (04:02):

Right? So, you’re right. Many times people get into this dilemma of what type of deployment model would suit them, whether they should go with Lambda or with a container-based approach. And the reason people get into Lambda is the lucrative benefits that Lambda offers. It totally depends on the nature of your application and how exactly it is going to behave in production. That is where this choice comes in, because we talk about function-as-a-service or Lambda as a deployment model, but to be honest, I feel that rather than a deployment model, it’s an architectural pattern. The way you are going to write your application needs to follow this event-driven architecture, because it has to respond to events.

Dipesh Rane (04:54):

So, all the code you were writing earlier, which you were thinking of as part of one big application, you now have to think of as a collection of smaller units called “functions”. And that is where a lot of developers initially face challenges, like what the application would look like. Instead of one big, huge building block, they now have to think in terms of smaller function units which are going to interact with each other. So just to give you an example on the cost side (and I did this exercise with the simple AWS calculator beforehand): think of something like this. The Lambda function you have written is going to receive 20 million hits in a month. If you do simple math with what the AWS calculator offers, you’ll find the total cost of executing this Lambda function comes to around 80 to 90 dollars per month.
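The arithmetic behind that $80–90 figure can be sketched roughly as follows. The per-request and per-GB-second rates are the commonly published Lambda rates, but the memory size and average duration per invocation are assumptions (they weren’t stated in the conversation), and the free tier is ignored:

```python
# Rough sketch of AWS Lambda monthly cost for ~20 million requests.
# Rates below are the commonly published us-east-1 Lambda prices;
# memory_gb and avg_duration_s are assumed, illustrative values.

REQUEST_RATE = 0.20 / 1_000_000   # dollars per request
GB_SECOND_RATE = 0.0000166667     # dollars per GB-second of compute

def lambda_monthly_cost(requests, memory_gb, avg_duration_s):
    request_cost = requests * REQUEST_RATE
    compute_cost = requests * avg_duration_s * memory_gb * GB_SECOND_RATE
    return request_cost + compute_cost

# 20M requests/month, assuming 1 GB memory and ~250 ms per invocation
cost = lambda_monthly_cost(20_000_000, memory_gb=1.0, avg_duration_s=0.25)
print(f"~${cost:.2f}/month")  # lands in the $80-90 range quoted above
```

Changing the assumed memory or duration moves the number, which is exactly why Dipesh recommends plugging your own values into the AWS calculator.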

Dipesh Rane (05:55):

And if you implement the same logic over a Kubernetes cluster, the Kubernetes cluster for EKS on AWS comes with a fixed master price, which is around $73 per month. So you can see that almost the same amount Lambda would consume for execution is consumed by the master itself, which is not going to do any business-logic execution. And for executing your business logic, you’ll also require some worker nodes. So assume we go with two t3.large worker nodes (T3 being the latest family offered by AWS) to take that load. If those nodes suffice for the 20 million requests, you’ll find that it comes to around $183 for a month. Now, this is fine when you have a 20-million-request workload, if that workload is predictable. But imagine the workload increases to, let’s say, 90 million hits or requests per month; then pricing becomes tricky.

Dipesh Rane (06:59):

In that case, the instances you spin up may be able to take all that load, or you can spin up additional worker nodes in Kubernetes during peak hours, and still the price would be much less than what you would pay on Lambda. So Lambda is pretty good for workloads where you are not sure of the pattern, how many requests will be coming, because the best pricing model they offer is to pay only for what you are requesting, pay only per request, and idle time is not counted. Whereas with Kubernetes, maybe you are getting traffic, but half the time your app is idle, not doing anything, and still you have to pay for the resources you are utilizing. So if the load is predictable and pretty high, I would always recommend doing your pricing mathematics for Lambda against your Kubernetes cluster.

Dipesh Rane (07:55):

And recently, in December 2020, AWS announced that your Kubernetes cluster worker nodes can also be spot instances. Earlier, that facility wasn’t there: on peak load you could spin up an additional node inside your Kubernetes cluster, but the pricing was a concern because it would cost you the same as on-demand instances. Since December 2020, you can also spin up spot instances for Kubernetes, which helps keep the Kubernetes cluster cost as low as possible. And also, if you go with a model where you can do an upfront reservation of your instances, that cost can be controlled. So Lambda has a lot of advantages: you don’t have to worry about availability, you don’t have to worry about scalability, and your applications, or the functions, will always be available. But Lambda becomes tricky when the load increases, and then you have a pricing model which is quite hard and tricky to understand. So this example I’ve given, I have compared myself; anyone can go ahead and put in values for their application and figure out what exactly would provide them a cost benefit.

Vinayak Joglekar (09:09):

Yeah, so this is very helpful, Dipesh, and thanks for going into all the details of costing. So, 20 million to 90 million is something where we are already talking of high volumes, more than the majority of customers see. And particularly when you start developing a new application, it doesn’t start with such loads, right? You can start with something and then move on to something else. So, I have a few questions here. One question is that the Lambda architecture itself, as you rightly mentioned, forces you to think in a certain way. It’s event-driven, because every Lambda, as you rightly said, is triggered by an event, and it forces you to do a kind of event sourcing by default; there’s no choice. Also, Lambdas by their very nature have to be short; you can’t have a long-running Lambda, its life has to be short. So that is another constraint that Lambda puts on the developer. And the third constraint is that it has to be more or less stateless. You don’t have access to local storage or tons of RAM there. Right?
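The three constraints Vinayak lists (event-driven, short-lived, stateless) show up directly in the shape of a Lambda handler. Here is a minimal sketch; the `orderId` field is a hypothetical example payload key, not anything from the episode:

```python
import json

# Minimal sketch of an AWS Lambda handler illustrating the constraints
# discussed above: it is invoked once per event, must finish quickly
# (Lambda enforces a hard timeout), and keeps no state between
# invocations -- anything persistent must go to an external store
# such as S3 or DynamoDB.
def handler(event, context):
    # "orderId" is a hypothetical key in the triggering event.
    order_id = event.get("orderId")
    if order_id is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing orderId"})}
    # Do a small, bounded unit of work; no local state survives this call.
    return {"statusCode": 200,
            "body": json.dumps({"processed": order_id})}
```

A long-running job or session-based workflow doesn’t fit this shape, which is exactly the rearchitecting cost Dipesh describes next for brownfield applications.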

Dipesh Rane (10:27):

Yeah, and that is what happens, Vinayak. If you try to refactor a lot of brownfield applications and put them in Lambda, you have to undergo major rework, or maybe rearchitecting, because old brownfield applications were all built around state management. If you remember our old days, everything worked around sessions and so on. So there has to be some significant effort to rearchitect your application if you want to move it to Lambda.

Vinayak Joglekar (10:58):

Right. Yeah. So, you know, that is why it is a blessing in disguise for an application which is not brownfield, which is a greenfield application. I think it would be right to say that, because in a greenfield application you also have no prediction of what the load is going to be. And maybe there is no need to have even a single EC2 instance running when you’re not even sure whether a user is going to use what you’re building. So in that case, it makes a lot of sense for greenfield projects, where the developer is going to think of the architecture from scratch, and Lambda does force that discipline on the architect to think in a certain way. Right? So now, this is something of a theoretical question, and maybe we have never come across such a situation before, but take a brownfield application, typically a monolith converted to a microservices kind of situation. Compare that to a Lambda application built from the ground up, where somebody has built a Lambda architecture with a lot of these serverless functions talking to managed services.

Vinayak Joglekar (12:20):

So, you have a lot of these managed services available, whether it is RDS or Atlas or DynamoDB; you don’t have to worry about how you are getting data, because that can in fact be outsourced to a managed service. So, comparing these two as paradigms in terms of the total cost of ownership: not just looking at the cost of development or the cost of deployment, but the total cost of ownership, which also includes the cost of maintenance, bug fixing, and the amount you would end up paying your developers over the lifecycle. In this overall picture, what would you advise? What do you think should be the direction we should take, initially or eventually?

Dipesh Rane (13:15):

Yeah. So, as you rightly said, for a project which is just starting up, Lambda can be a wise choice, because they don’t have to invest anything in creating infrastructure. From day one itself they can be operational; the developer can focus on writing the code and getting the value out of it. And then there are the services it interacts with. For example, if it is using AWS services itself (and Lambda is also an AWS service), most of the time the traffic between services within the same region is free, so there also you get some cost saving. The only danger in this is that as and when you expand the scope of your product, you tend to end up getting married to the vendor, because if you have built anything in Lambda and your functionality is growing day by day, you have to use a lot of AWS services to make it work.

Dipesh Rane (14:10):

For example, AWS API Gateway would come into the picture; for monitoring and all, you have to depend on CloudWatch; events would be coming from AWS event sources most of the time; and so on. So in the end you may end up with a tight lock-in with the vendor. And then if you want to switch to some other cloud provider to gain a cost benefit, or whatever other benefits, it would be a difficult switch. You may not face that when you are dealing with a Kubernetes application, because then everything is centered around your containers, and they can run anywhere they find a container-friendly environment. So that is the only danger. Cost-wise, you will definitely get an advantage initially, as well as for a project where you don’t know how much load it is going to experience in production. For clients who are running an MVP, Lambda becomes the obvious choice, but for a client who may want to take advantage of running infrastructure on multiple clouds, it would be a bad choice, because then you are tightly coupled, or vendor-locked, with one cloud provider.

Vinayak Joglekar (15:24):

Let’s talk about not being vendor-locked and managing your own Kubernetes cluster. You need to worry about what the cluster would look like, how many nodes you’ll need, what capacity of nodes you’ll need, where you would be putting them, in which availability zones. And in case there is scaling, you need to worry about how you do load balancing and auto-scaling. There is some availability of that within Kubernetes and containerization, but you also need to worry about having your own observability and your own instrumentation, whether you are using, you know, Loki or Prometheus or whatever. And then you need to worry about how you are going to deploy, whether it’s Jenkins running there, which is going to trigger your builds. That’s a whole lot of work, right? And then you end up making mistakes there; it’s not like you do it all perfectly. How is that as compared to something where you’re not really required to think of any of what I just described?

Dipesh Rane (16:51):

Right? So, then there are two types of people, Vinayak. I would say one type loves abstraction; they don’t care what is lying behind the scenes. And one set of people need full control. So there could be applications which are very particular about the underlying operating system they’re using. There could be some flags or parameters you might be setting on your VM, or in a Docker container, to make the application work as you expect. If I am using Lambda with the JVM, then I have to go with whatever JVM version they support, and if I need any modification, if I need to add some JVM parameters, I can’t control that on Lambda. So that is where this distinction comes in, in what you should opt for. If you need full control, then obviously you have to bear the pain that comes with that full control. With that power comes the responsibility.

Vinayak Joglekar (17:45):

But the moment you want something to be done automatically, you have to give up that control. Perfect. I mean, there are many examples of that.

Dipesh Rane (17:56):

I mean, some companies, Vinayak, continuously do pen tests of their infrastructure, and with Lambda you lose that ability, because you are totally dependent on AWS. They also audit what libraries we are using: are they vulnerable? There are frequent vulnerability scans. There are some clients with us who do follow this practice; they engage third-party auditors who inspect our infrastructure. They look at all the libraries, or whatever API calls we are making; a third-party audit happens, and every time they find any vulnerabilities, they let us know, like, “this is the issue we are seeing.” In the case of Lambda, you won’t have that; you just trust AWS and focus on your business logic. And that’s what I said: for clients who need control over their infrastructure, who need to see what is coming in and out and what is happening, Lambda is not the choice.

Vinayak Joglekar (18:49):

Dipesh, you also mentioned that with Lambda you get locked into a vendor. Now, if you look at Kubernetes as a way of deploying your own clusters and all that, how many people are really doing their own Kubernetes? They may be doing Kubernetes, Docker, and everything we just described, but how many of them are doing it all by themselves using kops, as against using EKS? So isn’t there a sort of affinity of users wanting to use EKS, because several of the headaches we just described are taken care of by AWS? But then, in turn, you are getting locked into AWS. So the advantage of not being locked in is actually theoretical, right? Practically, most of the people who are using Kubernetes are using EKS, because that has several advantages; many of the decisions you would otherwise make are made on your behalf, for example what the size of your worker nodes should be. You don’t worry about that. How many developers have the know-how to even know what compute and storage such-and-such an application would require? So I think, practically, a lot of people are already locked in using EKS. So is not being locked in a real advantage?

Dipesh Rane (20:33):

Again, that depends on your business case. Some people do multi-cloud deployments, so they might seek a solution which is easy to port to multiple clouds. But when moving from one cloud to another, no matter how many services you try to keep away from the native cloud platform, there will be some rework you have to do, because you will be using some add-on services like SQS and so on. So if you’re switching clouds, either you go with a hybrid model, or you use all the services entirely from the other cloud you are opting for. So it depends on how much minimum effort you would require to switch clouds. In that case, I find containers a bit more friendly, you can say, because the deployments, or maybe the Dockerfiles, don’t have to change; those remain the same. It’s just that when deploying the application into the new managed cluster, the commands might change here and there, so you might have to tweak your CI/CD pipelines a bit to do the deployments. I mean, this is what I think.

Vinayak Joglekar (21:41):

Yeah. Perfect. I mean, there is no debate on whether we use containerization or not, because containerization is the primary unit by which you can work in a cloud-agnostic kind of environment, where you can easily go from one cloud to another, because everyone supports Docker and containerization. So there’s no doubt about that. But what we are talking about is management and orchestration. Now, when it comes to management and orchestration, you end up, for some reason or another, utilizing some of the goodies offered by the cloud vendor, in this case AWS, and then you kind of get tied in. So, you know, in my past experience I have used AWS, and I have also spun up my own cluster and managed everything myself, and there is a significant difference: the managed route is easier and, you can say, a more worry-free way of managing. So, do we have something similar that works across clouds? I have heard that there are push-button deployments available where you can shrink-wrap a Kubernetes application using Helm charts and then seamlessly move from cloud to cloud. Is that theory or practice?

Dipesh Rane (23:08):

I haven’t seen or tested that myself, so maybe I won’t comment on it. But there are some cloud providers, like Azure, making an effort to make even Kubernetes serverless. Right now they have taken the headache of managing the master nodes from us; they’ll also take the headache of managing the worker nodes from us. That way, you just build your container and say, “I want to run this deployment,” and the scaling of the nodes, bringing them up and down, will be managed by the cloud provider. So Azure’s serverless Kubernetes offering can provide that. I’m not sure if Amazon has something similar.

Vinayak Joglekar (23:50):

That is interesting. But, you know, who decides what node my workload requires? I mean the sizing of the node, that is one thing. And also, if I’m going to have a pod which has a bunch of containers, whether there are replica sets, and how many replica sets I deploy across how many availability zones: who takes those decisions there?

Dipesh Rane (24:17):

Yeah. So that would be totally abstracted, like how Lambda is abstracted from us. When we say run a function, we don’t know where it is running; we just provide our parameters: I want this much compute capacity, and I want this much memory capacity. The same thing would happen here. It would just read our requirements from our deployment file: this is the container requirement, this is what I want for this container, this pod. And then how, in the background, the node it runs on gets scheduled would be totally abstracted from us.

Vinayak Joglekar (24:48):

Yeah. But then it’s the same thing, right? For example, let me argue that many of the auto-scaling features that are available are based on what the load on compute is, what the load on your memory is, and scaling happens when they see that a server or node is loaded. But what if I want to do auto-scaling based on certain other parameters, like the length of a particular queue, a message queue, for example? I’m just making this up: I want to predict based on load, and at 12 o’clock or so I want to scale up, a temporal scaling based on some application feature, which obviously the servers don’t know; neither Azure nor AWS would know it. For such a thing, you are losing control, right? So I think that would be an interesting conversation to have on service mesh or some such architecture, but I think we need to reserve that for another episode. I think we are at time, and we have actually accelerated a little bit. So, is there anything that I should have asked and didn’t?
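Scaling on an application-level signal like queue length usually means computing a desired replica count yourself, for example via Kubernetes custom metrics or an event-driven autoscaler such as KEDA. The core arithmetic is simple; the function name and threshold values below are illustrative, not from any particular tool:

```python
# Illustrative replica calculation for queue-length-based autoscaling,
# the kind of application-aware signal discussed above. In practice this
# logic would live in a custom-metrics adapter or an autoscaler like
# KEDA; the parameter values here are made-up examples.
import math

def desired_replicas(queue_length, msgs_per_replica,
                     min_replicas=1, max_replicas=10):
    """Scale so each replica handles ~msgs_per_replica queued messages."""
    wanted = math.ceil(queue_length / msgs_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

print(desired_replicas(0, 100))     # idle queue -> stays at the minimum, 1
print(desired_replicas(450, 100))   # 450 queued messages -> 5 replicas
print(desired_replicas(5000, 100))  # capped at max_replicas, 10
```

The point of the exchange stands: a fully abstracted serverless platform scales on signals it can see (CPU, memory, request rate), while a self-managed cluster lets you plug in signals like this one that only your application knows about.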

Dipesh Rane (26:12):

We haven’t directly talked about the benefits and drawbacks of Lambda as such, but I guess we covered that while we were discussing; we just didn’t bring it up as a question. So I don’t think anything is missing. Otherwise it would have been an obvious conversation, right? “Dipesh, what are the advantages?” It wasn’t like that; we were just discussing in general, and I think we covered both the advantages and the disadvantages.

Vinayak Joglekar (26:38):

Wonderful. Yeah. So, thanks a lot, Dipesh, for sharing all this information. I hope this is very useful for our listeners. Thank you very much for coming here today and sharing your ideas.

Dipesh Rane (26:54):

My pleasure. Thanks.

Madhura Gaikwad (26:56):

Thanks, Vinayak. And thanks, Dipesh for taking the time to join us today. I’m sure this episode will help our listeners in making the right decision while architecting their technology stack. And thank you everyone for joining us today. If you are looking to accelerate your product roadmap, visit our website, www.synerzip.com for more information, stay tuned to future zip radio episodes for more insights on technology and agile trends. Thank you.
