Fastest Runtime For AWS Lambda Functions

2 minutes read

POSTED Mar, 2022 in Serverless

Written by Emin Bilgic, Software Engineer at Thundra

Summary

AWS Lambda is a compute service that lets you run code without managing any infrastructure, and it natively supports the Java, Go, NodeJS, .Net, Python, and Ruby runtimes. In this article, we will compare the performance of the same hello world Lambda function written in the Java, Go, NodeJS, .Net, and Python runtimes.
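As a concrete reference, the kind of handler being compared might look like this in Python (the function and field names here are illustrative, not the exact code from the repository; each runtime implements the same behavior in its own language):

```python
import json

def handler(event, context):
    # Minimal "hello world" handler: does no real work, so timings
    # measure runtime startup and invocation overhead, not business logic.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello World"}),
    }
```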

Structure of the Template

We are using simple hello world functions, deployed with AWS SAM templates, to test the Lambdas' invocation times. For the first comparison, we use the latest runtime versions that AWS SAM provides. You can find the complete deployment package in this GitHub repository.
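For readers unfamiliar with SAM, a template along these lines declares one function per runtime; the resource names, code paths, and runtime versions below are an illustrative sketch, not the exact contents of the repository:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Hello world functions for comparing Lambda runtimes

Resources:
  HelloPythonFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-python/
      Handler: app.handler
      Runtime: python3.9
      MemorySize: 128

  HelloNodeFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello-node/
      Handler: app.handler
      Runtime: nodejs14.x
      MemorySize: 128
```

Keeping the memory size identical across functions matters here, since Lambda allocates CPU proportionally to memory and a mismatch would skew the comparison.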

Comparing Lambdas by Runtime

To compare these functions dynamically, the newly introduced custom dashboard widget in Thundra APM is a good fit. Instrumented functions already exist in the GitHub repository; you can simply clone the repo and replace the Thundra API key in the SAM templates with your own. For more information, check the APM docs here, or sign up for Thundra here. Let's examine the differences between runtimes by building widgets in a custom dashboard in Thundra APM. In this scenario, we deployed only the stack containing the latest runtime versions, so filtering the functions by runtime and average duration is enough to create the "Max durations" widget. All of the functions' average invocation times are shown below:

Compiled languages' cold start durations are significantly higher than the others', but their max durations get close to the others' after warming up. Java has the highest cold start duration, at approximately 5,000 milliseconds. In contrast, the interpreted languages show almost no performance difference between their cold start and later invocations. We can also clearly see that .Net has much better cold start performance than Java, even though both are compiled languages. Let's look from a different perspective by excluding Java and taking a closer look at the other runtimes.

Go has an approximately 400 ms cold start duration, but invocation time drops afterwards, and invocations run almost instantly once the function is warm. We see impressive results from the Python and NodeJS runtimes: they seem almost unaffected by cold starts, and their invocations run near-instantly every time.
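If you want to verify cold start durations yourself without a dashboard, you can parse the REPORT line that Lambda writes to CloudWatch Logs for every invocation; the Init Duration field appears only on cold starts. A minimal sketch (the sample log lines below are fabricated for illustration):

```python
import re

# Lambda writes a REPORT line to CloudWatch Logs for every invocation.
# "Init Duration" is present only when the invocation was a cold start.
REPORT_RE = re.compile(
    r"Duration:\s*(?P<duration>[\d.]+)\s*ms"            # first Duration field
    r".*?(?:Init Duration:\s*(?P<init>[\d.]+)\s*ms)?$"  # optional cold start init
)

def parse_report(line):
    """Return (duration_ms, init_duration_ms or None) from a REPORT line."""
    m = REPORT_RE.search(line)
    if not m:
        return None
    init = m.group("init")
    return float(m.group("duration")), (float(init) if init else None)

# Example REPORT lines (request IDs shortened for readability):
cold = ("REPORT RequestId: abc Duration: 12.34 ms Billed Duration: 13 ms "
        "Memory Size: 128 MB Max Memory Used: 40 MB Init Duration: 402.11 ms")
warm = ("REPORT RequestId: def Duration: 1.02 ms Billed Duration: 2 ms "
        "Memory Size: 128 MB Max Memory Used: 40 MB")
```

Running `parse_report` over a function's log stream and counting lines with a non-None init duration gives you the cold start rate and its cost per runtime.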

Package size has a proportional effect on cold start duration within the same runtime, and this generally holds across different runtimes as well, with Java and Go as the exceptions. Notably, Go has the third-fastest cold start duration even though its package is relatively large, at 4.7 MB.

Conclusion

One of the biggest differences between compiled and interpreted languages is that interpreted languages spend less time preparing to run the source code, but their overall execution is slower. In this context, we can clearly see why languages like NodeJS and Python handle cold starts so well. In my experiments, Java and .Net performed worse than the other runtimes when comparing invocation durations. I think this is because these Lambdas don't do enough work to close the gap between interpreter and compiler: if we ran a heavier, more complex workload instead of returning a "Hello World" message, the compiled languages would perform better after warming up. Here are some other graphs created with Thundra APM dashboard queries:

Bonus Section: Comparison Between Versions

I used several versions of each runtime in my experiments. AWS Lambda supports three Java versions (java8, java8.al2, java11), three NodeJS versions (Node10, Node12, Node14), and four Python versions (Python3.6, Python3.7, Python3.8, Python3.9). After comparing the latest supported version of each runtime, I compared the versions of each runtime among themselves. When comparing versions of the same runtime, we most probably won't see any difference in the Lambda functions' behavior, but it doesn't hurt to check.

There is no invocation duration difference between the Python versions, but there are memory usage differences. Python 3.8 has better memory usage than the other three Python versions:

You can see the invocation duration difference between Java versions below.

You can see the invocation duration difference between NodeJS versions below.


*** This is a Security Bloggers Network syndicated blog from Thundra blog authored by Emin Bilgic. Read the original post at: https://blog.thundra.io/fastest-runtime-for-aws-lambda-functions