The Serverless Show, UC-Berkeley on The Rise of Serverless Computing

In part 1 of our latest Serverless Show, Hillel and Ran Ribenzaft, CTO of Epsagon, discussed how serverless observability breeds confidence and complexity, and more.

In part 2, Hillel and Ran discuss a paper from UC-Berkeley. Hillel kicked off the discussion, “Some of the folks at UC-Berkeley put out a paper on the rise of serverless computing, and it was picked up by the news mainly because it came about 10 years, more or less to the day, after a famous UC-Berkeley paper on the rise of cloud computing. That really was an important piece of research, and it articulated really well for the world where cloud computing was headed and what it meant. I think one of the key points made about that first paper is that not only has it been cited, I think, 17,000 times in other papers, but it was cited 1,000 times in the past year, so it’s still quite relevant to how people define and understand the cloud.

“Some of the same people, even, are coming back from the same university and saying, ‘There’s this new thing called serverless computing, and it’s an important trend in how cloud computing is being done and how software is being built.’ I think that was really interesting and important, so it’s definitely a paper I think people should take a few minutes to read. HPCwire did a nice job of summarizing it. There are two key points I wanted to chat with you about. First, the article does quite a bit of work on where serverless makes sense today, where it doesn’t, and where that’s headed. I think the overall direction of the piece was that there are some use cases today that are challenging for serverless, but those are going to change relatively quickly, and this paradigm of not managing infrastructure, resources, or scaling is going to be pretty ubiquitous.

“I think they make the point that serverful applications are mostly going to be for things that support a serverless environment. Now I’m curious, because you guys see a lot of people at the cutting edge of what they want to do. What are the use cases today where you see serverless being used, and maybe could you point out a use case or two where you’d say, ‘Right now, that’s not a good place to be heading into serverless just yet.’”

Serverless Will Adapt to The Scenarios We’re Doing

Ran replied, “That’s actually a good question, and it’s really hard to answer. I usually say that almost every scenario can be fitted to serverless. I think the point of their post is that serverless will adapt to the scenarios and the use cases that we’re doing, and not vice versa; I mean, it’s not that we have to adapt ourselves to serverless. AWS and the other cloud vendors will build serverless in a way that lets developers build their use cases and scenarios. I think that serverless definitely fits everything around automation, backends, ETL pipelines, any processing pipeline, and even some IoT, which is also a sort of processing pipeline but might be a bit different in the overall architecture.

Where Serverless Doesn’t Fit

“I gave a talk a couple of months ago about the cost of serverless. I think there are two use cases where serverless may not fit. The first one is performance, but I’m not talking about, ‘Yeah, I want this request to be handled in less than 100 milliseconds, so I want to avoid cold starts, or I might choose a server because it’s always on.’ That’s not the case. I’m talking about scenarios where performance on the order of a millisecond can impact your business. Let’s say algo-trading, or life-and-death scenarios where you need to respond in less than three milliseconds. We’ve got an example of that ourselves, where we need to respond in less than five milliseconds because otherwise our customers will be left waiting. That’s the only such scenario. Other than that, we’ve got 600 functions in production, divided into I think 40 services, and it’s all okay.

“The second one I call ‘cost of scale.’ Serverless is definitely cheaper when the scale is low, but sometimes you designed it for hundreds of millions of invocations or events and now you’re receiving 100 billion invocations, orders of magnitude more than that. The cost just might not scale, because you don’t pay only for the Lambda function. You pay for the Kinesis stream and the SNS and the RDS and the other services that we call serverless. It might not fit if the cost increases by orders of magnitude as well.”
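To make the “cost of scale” point concrete, here is a minimal back-of-envelope sketch in Python. The per-request and per-GB-second prices are illustrative placeholders rather than a quote of current AWS pricing, and the workload numbers (duration, memory, invocation counts) are assumptions; the point is only that the Lambda line item grows linearly with invocations, and the surrounding services bill on top of it.

```python
# Back-of-envelope sketch of the "cost of scale" effect.
# Prices and workload numbers are illustrative assumptions, not real pricing.

def lambda_cost(invocations, avg_duration_ms, memory_gb,
                price_per_million_requests=0.20,
                price_per_gb_second=0.0000167):
    """Estimate only the Lambda portion of the bill for a given workload."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    return request_cost + gb_seconds * price_per_gb_second

# The same function at two very different scales.
for invocations in (300_000_000, 100_000_000_000):
    cost = lambda_cost(invocations, avg_duration_ms=120, memory_gb=0.5)
    print(f"{invocations:>15,} invocations -> roughly ${cost:,.0f} for Lambda alone")

# Kinesis shards, SNS deliveries, RDS instances, and the rest of the
# "serverless" services are billed on top of this, so the total grows even faster.
```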

Design the Architecture to Beat Serverless

Ran continued, “I want to stop everyone who says, ‘But for me, serverless costs a lot.’ Really think about whether you’ve maxed out the way you design the architecture for serverless. Sometimes I see lift-and-shifts where they place something in a Lambda function, but it doesn’t fit and it costs a lot, because it wasn’t designed to run on a serverless architecture. Before you say, ‘It costs too much, let’s move back to servers,’ think about whether you designed a correct, event-driven serverless architecture. Those are the only two cases I can think of where serverless doesn’t fit. Other than that, I think I’ve seen almost every use case already.”
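As a rough illustration of the difference between a lift-and-shift and an event-driven design, here is a minimal sketch in Python. The queue trigger, function name, and business logic are hypothetical; the idea is that each handler does one small, stateless piece of work in response to an event, instead of one oversized function hosting a ported long-running process.

```python
# Minimal sketch of an event-driven, Lambda-style handler (names are hypothetical).
import json

def handle_order_event(event, context):
    """Entry point triggered by a queue (for example SQS); each record is one small unit of work."""
    for record in event.get("Records", []):
        order = json.loads(record["body"])
        process_order(order)

def process_order(order):
    # In a fuller design, each step (validate, enrich, persist, notify) could sit
    # behind its own event source and scale independently, rather than running
    # inside one large lifted-and-shifted process.
    print(f"processing order {order.get('id')}")
```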

Hillel replied, “Sure. To the point you just made, and to some of the cases where people think serverless doesn’t apply: very often, even if the compute part is not optimal, you may reach a different conclusion when you evaluate the entire solution, particularly the operations piece. For example, heavy machine learning workloads sometimes don’t map well to the compute environment or the amount of memory you need, so you might have to run twice as many Lambda functions as you would on an equivalent amount of dedicated compute. But when you compare that to the complexity of standing up and operating your own cluster of machines, and figuring out how to scale it, it’s often still cheaper, simpler, and easier for you to do.”

Hillel is Not Going Crazy (…Well, At Least Not About This)

Hillel continued, “Another point made in that article was a point we make a lot about security, and I was happy to see somebody else make it, because sometimes I think I’m the only one talking and maybe I’m going crazy. They made the point that serverless applications stand to be more secure than serverful ones, partly for the reasons people often talk about, like handing responsibility for certain things over to the cloud provider, but also for some of the reasons we talk about, such as being able to apply very fine-grained policy. What resonated a lot with me was that there’s at least the potential, it’s not automatic, but there’s the potential when you move over to a serverless application for security to improve, not because you set out to improve security, but because designing an application for serverless gives you lots of small functions rather than a small number of large containers. You now have opportunities for granularity in your security that you didn’t have before. That connected well with me.”
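To illustrate what that per-function granularity can look like in practice, here is a hedged sketch of a least-privilege IAM policy for a single small function, written as a Python dict. The role name, table ARN, and the function’s job are made-up examples; the point is that each small function can be scoped to exactly the actions and resources its one task needs, which is much harder to do for one large container.

```python
# Sketch of a per-function, least-privilege policy document.
# The table ARN, role name, and function responsibility are illustrative assumptions.
import json

READ_ORDERS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the two read actions this hypothetical function actually needs.
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            # Scoped to a single table rather than "Resource": "*".
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
        }
    ],
}

if __name__ == "__main__":
    # Attaching it with boto3 would look roughly like the commented lines below;
    # they are left commented so the sketch runs without AWS credentials.
    # import boto3
    # boto3.client("iam").put_role_policy(
    #     RoleName="read-orders-function-role",
    #     PolicyName="read-orders-least-privilege",
    #     PolicyDocument=json.dumps(READ_ORDERS_POLICY),
    # )
    print(json.dumps(READ_ORDERS_POLICY, indent=2))
```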
