Kevin W. McConnell

Reasons to write Lambda functions in Go

For the last couple of years, Go has been my go-to language for writing Lambda functions.

It might not be the most widely used option on Lambda, but it’s a great choice for many workloads.

If you’re not already writing Lambdas in Go, I would encourage you to give it a try. Here are a few good reasons to do so.

1. Speed

When compared to more dynamic languages, Go code tends to be fast to run. And compiled Go binaries are fast to start up, which helps lessen the impact of cold starts.

Fast-running code is always nice to have. But it’s particularly helpful in Lambda, where you are billed for every millisecond that your functions run.

Many of the Go Lambdas I run in production execute in single-digit milliseconds when warm, and somewhere in the tens of milliseconds when starting cold, which is quite a bit faster than I tend to see when using other languages.

2. Memory efficiency

Similarly, Go tends to be quite memory-efficient (again, when compared to many dynamic languages). This gives you more freedom when configuring the memory allocation of your functions, which can be another way to lower your bill.

In practice I usually find that the reason to allocate additional memory is not for the memory itself, but for the additional CPU allocation that comes with it. Lambda functions get proportionally more CPU and network bandwidth as their configured memory is increased.

In effect, the memory setting is more like an overall “power” setting.

However, some functions can perform just as well at low memory settings. For example, functions that spend most of their time waiting on external API responses will not be able to make much use of any additional CPU allocation. So it’s nice to have the freedom to configure the memory allocation to whatever strikes the best balance of cost and performance for each function, without being forced into a higher allocation simply because the runtime needs it.

3. Reasonably compact binaries

During a cold start of your function, the Lambda environment has to get a copy of your code onto a suitable instance to run it. Having a small function payload goes some way towards helping this stage execute quickly.

Although you aren’t billed for this part of the start time, it is still adding to the latency of your functions. So the faster you can make it, the better — especially for cases like API handlers where that latency can be visible to users.

Since Go compiles to static binaries, the overall payload for a Go-based Lambda tends to be fairly small. This is particularly noticeable for functions that require a lot of additional libraries, which in some languages can add up very quickly.

4. Easy concurrency

This point is probably the one I find most significant from day to day.

With goroutines, Go makes it easy to perform several operations at the same time. Go will also spread those operations over available cores, by default. This can be a great performance win when you need it.

For example, say you have an operation to perform on several items in a list. You might write a loop like this:

for _, item := range allItems {
	err := doSomethingComplicated(item)
	if err != nil {
		return err
	}
}
return nil

To make this parallel, we could instead do something like:

var g errgroup.Group
for _, item := range allItems {
	item := item
	g.Go(func() error {
		return doSomethingComplicated(item)
	})
}
return g.Wait()

This version isn’t much longer or more complicated than the first¹, but it now lets us use all the available CPU cores to get the work done more quickly. Given that Lambdas can be allocated up to 6 vCPUs, this can make a big difference in throughput for CPU-heavy workloads.

I’ve found that the more I look for opportunities to speed up my Lambdas by running some work in parallel, the more I notice common patterns that I can reuse to speed up other functions as well. I like saving money, so this sort of thing makes me happy.

However I also like writing simple code, and not spending much time fretting over the details, which is why I find Go’s approach to concurrency to be so helpful.

5. Great SDK support

The AWS SDK for Go is great. It performs well, and tends to make good default choices (for example, reusing connections when making repeated API calls). It’s also straightforward to mock when writing unit tests.

When combined with the X-Ray SDK for Go you can collect detailed traces of your functions’ behaviour, often spanning multiple services.

For example, say you have a Lambda API handler that posts a message to an SQS queue, which is then processed asynchronously by a second Lambda function that performs some operations in DynamoDB. By using the X-Ray SDK you can capture traces which cover the whole operation from end to end. This can be very useful when working on optimisations or troubleshooting a problem.

Bonus: The @aws-cdk/aws-lambda-go module

One last tip: if you’re building your applications with the CDK, you can make use of its built-in support for Go-based Lambda functions. It will take care of compiling your Go code (either directly, or via Docker) when deploying your stacks.

It’s currently marked “experimental” but I expect it will move to stable in a future release.

I hope some of these points will encourage you to try out Go, if you’re not using it already!

  1. It would be simpler if we didn’t need that odd-looking item := item line. But since item is being mutated on each iteration of the loop, we need some way to make sure that each goroutine runs on the item we intended it to. Assigning to a new variable inside the loop like this looks weird, but it is at least idiomatic enough that it’s easy to tune it out.

Posted July 15, 2021.