A Performance Perspective for Graviton Based Lambda Functions

2-minute read


POSTED Sep 2021

Written by Oguzhan Ozdemir

Solutions Engineer @Thundra


Introduction

AWS Lambda is constantly growing, and workloads eventually start to push its limits. AWS has been working on new hardware architectures to improve price performance. AWS Graviton2 is a custom-silicon, 64-bit Arm processor that is coming to AWS Lambda, giving serverless customers the same leverage that other AWS services built on Graviton2 already enjoy. Lambda functions powered by Graviton2 offer up to 34% better price performance than x86-based Lambda functions.

At Thundra, we didn’t want to miss the opportunity to become a launch partner for this improvement, so we decided to test it out ourselves. In this article, we run simple benchmarks comparing AWS Graviton2 (arm64) against the x86_64 architecture.

If you want to learn more about the AWS Graviton2 launch for AWS Lambda, make sure to visit the announcement page.

Benchmarks

Our test scenario uses the Python script below, which calculates the nth Fibonacci number to exhaust the CPU. It’s a bit of a cliché, but it’s simple. To keep the test simple and consistent, we calculate the 35th number as the sweet spot for our testing.

```python
import os
import multiprocessing


def handler(event, context):
    nth = 35

    # Spawn PROCESS_COUNT worker processes (default: 1), each of which
    # computes the nth Fibonacci number to keep one CPU core busy.
    processes = []
    for i in range(int(os.getenv('PROCESS_COUNT', '1'))):
        print(f'starting process #{i}.')
        process = multiprocessing.Process(target=calculate_fib, args=(nth,))
        processes.append(process)
        process.start()

    # Wait for every worker to finish before returning.
    for i, process in enumerate(processes):
        print(f'joining process #{i}.')
        process.join()
        print(f'finished process #{i}.')

    return {"statusCode": 200}


def calculate_fib(nth):
    # Deliberately naive recursion: O(2^n) work, which is the point
    # of the benchmark.
    if nth <= 2:
        return 1
    return calculate_fib(nth - 2) + calculate_fib(nth - 1)
```

We ran this script for every combination of (process count, RAM, architecture) drawn from the following values:

  • Process Count: 1, 2, 4, 8, 16
  • RAM: 256, 512, 1024, 2048, 4096, 8192, 10240 MB
  • Architecture: x86_64, arm64 (AWS Graviton2)
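
Enumerating that matrix is straightforward. A minimal sketch of how the combinations can be generated (the variable names here are ours, not from our benchmark harness):

```python
from itertools import product

# The benchmark matrix: one Lambda configuration per combination.
PROCESS_COUNTS = [1, 2, 4, 8, 16]
MEMORY_SIZES_MB = [256, 512, 1024, 2048, 4096, 8192, 10240]
ARCHITECTURES = ['x86_64', 'arm64']

combinations = list(product(PROCESS_COUNTS, MEMORY_SIZES_MB, ARCHITECTURES))
print(f'{len(combinations)} configurations to benchmark')
```

Five process counts times seven memory sizes times two architectures gives 70 configurations in total.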

After running all the combinations, we noticed a couple of trends. The most important is probably billed duration: there is a significant difference between the two architectures. Even allowing for some margin of error and other factors at play, Graviton2’s results are impressive.

As one might guess, duration decreases as RAM increases. For single-process invocations, you don’t gain much from the extra CPU beyond 2048 MB of RAM. For 16-process invocations, the margin keeps growing, as the data below shows.
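
The single-process plateau is consistent with how Lambda allocates CPU: vCPU share scales linearly with configured memory, reaching one full vCPU at roughly 1,769 MB according to AWS documentation. A rough sketch of that relationship (the 1,769 MB figure is the documented ratio; treat the helper as illustrative):

```python
def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share of a Lambda function.

    Lambda allocates CPU power proportionally to memory; one full vCPU
    corresponds to roughly 1,769 MB (figure from AWS documentation).
    """
    return memory_mb / 1769.0


for mem in [256, 512, 1024, 2048, 4096, 8192, 10240]:
    print(f'{mem:>5} MB -> ~{approx_vcpus(mem):.2f} vCPUs')
```

A single CPU-bound process can saturate only one vCPU, so once the function crosses one full vCPU (around the 2048 MB setting), adding memory stops helping it; with 16 processes, the extra vCPUs at higher memory settings keep paying off.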

The raw data for billed duration gives you the bigger picture.

x86_64 (billed duration in ms; rows are process counts)

| Processes | 256 MB | 512 MB | 1024 MB | 2048 MB | 4096 MB | 8192 MB | 10240 MB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 18952 | 9571 | 4752 | 2709 | 2746 | 2828 | 2731 |
| 2 | 38428 | 19242 | 9687 | 4803 | 2742 | 2737 | 2790 |
| 4 | 76149 | 37965 | 19007 | 9498 | 4904 | 2777 | 2804 |
| 8 | 155968 | 76986 | 38184 | 18914 | 9578 | 4891 | 4359 |
| 16 | 305743 | 156496 | 76358 | 38699 | 19274 | 9703 | 7723 |

arm64 (billed duration in ms; rows are process counts)

| Processes | 256 MB | 512 MB | 1024 MB | 2048 MB | 4096 MB | 8192 MB | 10240 MB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 15570 | 7830 | 4105 | 2481 | 2478 | 2513 | 2494 |
| 2 | 30856 | 15482 | 8259 | 4062 | 2482 | 2482 | 2572 |
| 4 | 61394 | 30588 | 15378 | 7899 | 4270 | 2491 | 2498 |
| 8 | 121505 | 61284 | 30533 | 15398 | 7942 | 4430 | 3697 |
| 16 | 244346 | 121374 | 61116 | 30660 | 15409 | 8124 | 6541 |

In the end, we can clearly see the advantage of AWS Graviton2 for CPU-intensive jobs. Overall, running Lambda functions on arm64 with the right configuration yields considerable cost savings.
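
The savings compound because arm64 is both faster and billed at a lower rate. A back-of-the-envelope comparison using the 16-process, 2048 MB row from the tables above (the per-GB-second prices below are the us-east-1 rates around the Graviton2 launch — treat them as illustrative and check current AWS pricing):

```python
# Approximate Lambda per-GB-second prices (us-east-1, circa the
# Graviton2 launch) -- illustrative assumptions, not current pricing.
PRICE_PER_GB_S = {'x86_64': 0.0000166667, 'arm64': 0.0000133334}


def invocation_cost(arch: str, memory_mb: int, billed_ms: int) -> float:
    """Cost of one invocation: memory (GB) x billed duration (s) x rate."""
    return (memory_mb / 1024) * (billed_ms / 1000) * PRICE_PER_GB_S[arch]


# 16 processes at 2048 MB, billed durations taken from the tables above.
x86 = invocation_cost('x86_64', 2048, 38699)
arm = invocation_cost('arm64', 2048, 30660)
print(f'x86_64: ${x86:.6f}  arm64: ${arm:.6f}  saving: {1 - arm / x86:.0%}')
```

For this configuration, the shorter billed duration and the cheaper arm64 rate together cut the per-invocation cost by roughly a third.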

Outro

In conclusion, we can see the benefit of AWS Graviton2 in both price and performance. Real-world scenarios are, of course, far more complicated, with much more to worry about, but these results give a rough idea of how much improvement to expect. Getting started with AWS Graviton is easy if you follow this GitHub repository. To debug your applications quickly, go to https://start.thundra.io/ and sign up for a free account.

*** This is a Security Bloggers Network syndicated blog from Thundra blog authored by Oguzhan Ozdemir. Read the original post at: https://blog.thundra.io/a-performance-perspective-for-graviton-based-lambda-functions