Serverless computing has changed how teams build apps that scale effortlessly. But here’s the catch: popular runtimes like Node.js and Python often suffer from slow cold starts, high memory usage, and uneven performance. That’s where Rust shines. Built for speed and reliability without the bulk, it’s quickly becoming the secret weapon for serverless setups.

In this walkthrough, we’ll build and deploy serverless functions using Rust on AWS Lambda.

Why Rust for AWS Lambda?

Blazing-Fast Cold Starts

AWS Lambda cold starts — the delay when a function initializes — are a critical performance bottleneck. Unlike interpreted languages (e.g., Python), Rust compiles to machine-native binaries, eliminating interpreter startup overhead. Combined with Rust’s lack of a garbage collector (GC), this can result in cold starts as low as 50–75 ms, even for complex functions.

Memory Safety Without Compromise

Rust’s ownership model guarantees memory safety at compile time, preventing common vulnerabilities like buffer overflows. This is critical for serverless, where functions often process untrusted input (e.g., data from an API Gateway).
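For a concrete, dependency-free illustration (the `extract_name` helper below is hypothetical, not part of any Lambda API), Rust's Option-returning accessors such as `str::get` and `str::find` make out-of-bounds reads unrepresentable; malformed or missing input falls through to `None` rather than a crash or an exploitable overflow:

```rust
// Sketch: pulling a field out of untrusted input without any chance of
// a buffer overflow. All slicing goes through bounds-checked `get`/`find`
// calls that return Option, so malformed input simply yields None.
// `extract_name` is a hypothetical helper, not a Lambda API.
fn extract_name(payload: &str) -> Option<&str> {
    let idx = payload.find("\"name\"")?;        // locate the key
    let rest = payload.get(idx..)?;             // bounds-checked slice
    let colon = rest.find(':')?;
    let after = rest.get(colon + 1..)?.trim_start();
    let after = after.strip_prefix('"')?;       // value must be quoted
    let end = after.find('"')?;
    after.get(..end)                            // bounds-checked again
}

fn main() {
    assert_eq!(extract_name(r#"{"name": "Alice"}"#), Some("Alice"));
    // A missing field is just None: no panic, no out-of-bounds read.
    assert_eq!(extract_name("{}"), None);
    println!("ok");
}
```

In production you would parse with serde_json instead, but the same property holds there: bad input becomes a `Result::Err` or `None`, never undefined behavior.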

Tiny Binaries, Lower Costs

Rust binaries are often just 5–10 MB when optimized, compared to 50–100 MB for equivalent Node.js or Python deployments. Smaller binaries mean:

  • Faster deployment times
  • Reduced memory usage (leading to lower AWS Lambda costs)
  • Compatibility with restrictive environments like AWS Lambda@Edge

Async-First Concurrency

Rust’s async/await syntax, paired with runtimes like Tokio, enables non-blocking I/O operations. This is ideal for serverless functions handling concurrent API requests or database queries.

Setting Up Rust for AWS Lambda

Install the Rust Toolchain

Start by installing Rust and the AWS Lambda-specific tools:

# Install Rust + Cargo
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Add the musl target for statically linked Lambda binaries
rustup target add x86_64-unknown-linux-musl

# Optional: the nightly toolchain is only needed for unstable optimizations
# rustup toolchain install nightly

# Install cargo-lambda
cargo install cargo-lambda

Create a New Lambda Project

Use cargo-lambda to scaffold a new function:

cargo lambda new my-lambda-function

Your generated Cargo.toml should include dependencies like these (serde_json is listed here as well, since the handler below uses it):

[dependencies]
lambda_runtime = "0.8"
tokio = { version = "1.0", features = ["macros"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Write a Basic Handler

Replace src/main.rs with a Lambda function that processes JSON input:

use lambda_runtime::{service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda_runtime::run(service_fn(handler)).await
}

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let name = event.payload["name"].as_str().unwrap_or("World");
    Ok(json!({ "message": format!("Hello, {}!", name) }))
}

Key Components

  • #[tokio::main] – starts the Tokio async runtime
  • service_fn – wraps the handler for AWS Lambda compatibility (it replaced the older handler_fn in recent lambda_runtime versions)
  • LambdaEvent – pairs the JSON payload with the invocation context
  • serde_json – parses and serializes JSON payloads

Code Snippets With Expected Outputs

Below, you’ll see additional code examples illustrating structured logging, error handling, and Terraform deployment, each paired with expected inputs and outputs.

Basic Lambda Handler (Extended Example)

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let name = event.payload["name"].as_str().unwrap_or("World");
    Ok(json!({ "message": format!("Hello, {}!", name) }))
}

Input

{ "name": "Alice" }

Output

{ "message": "Hello, Alice!" }

Input (No Name)

{}

Output

{ "message": "Hello, World!" }

Structured Logging With Tracing

use tracing::{info, Level};
use tracing_subscriber::FmtSubscriber;

fn main() {
    let subscriber = FmtSubscriber::builder()
        .with_max_level(Level::INFO)
        .finish();
    tracing::subscriber::set_global_default(subscriber).unwrap();

    info!("Lambda initialized");
    // ...
}

CloudWatch Log Output

2023-10-05T12:34:56Z INFO my_lambda_function Lambda initialized

Logs appear in AWS CloudWatch, queryable via CloudWatch Insights for deeper analysis.
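For example, a CloudWatch Logs Insights query along these lines (the @timestamp and @message fields are standard Insights fields) surfaces the initialization events:

```
fields @timestamp, @message
| filter @message like /Lambda initialized/
| sort @timestamp desc
| limit 20
```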

Error Handling With thiserror


#[derive(thiserror::Error, Debug)]
enum LambdaError {
    #[error("Missing field: {0}")]
    MissingField(String),
    #[error(transparent)]
    SerdeJson(#[from] serde_json::Error),
}

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    let name = event.payload["name"]
        .as_str()
        .ok_or(LambdaError::MissingField("name".into()))?;
    Ok(json!({ "message": format!("Hello, {}!", name) }))
}

Input (Missing Name)

{}

Output (Error)

{
  "errorMessage": "Missing field: name",
  "errorType": "LambdaError"
}

AWS Deployment

Define the function and its execution role with Terraform, shown here in Terraform’s JSON syntax (note that in a real .tf.json file, assume_role_policy must be a JSON-encoded string; it is expanded below for readability):

{
  "resource": {
    "aws_lambda_function": {
      "rust_lambda": {
        "function_name": "rust-serverless",
        "runtime": "provided.al2",
        "handler": "bootstrap",
        "filename": "target/lambda/my-lambda-function/bootstrap.zip",
        "role": "${aws_iam_role.lambda_exec.arn}",
        "memory_size": 128,
        "timeout": 10
      }
    },
    "aws_iam_role": {
      "lambda_exec": {
        "name": "rust-lambda-role",
        "assume_role_policy": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Action": "sts:AssumeRole",
              "Effect": "Allow",
              "Principal": {
                "Service": "lambda.amazonaws.com"
              }
            }
          ]
        }
      }
    }
  }
}

Output After terraform apply

aws_iam_role.lambda_exec: Creating...
aws_iam_role.lambda_exec: Creation complete after 2s
aws_lambda_function.rust_lambda: Creating...
aws_lambda_function.rust_lambda: Creation complete after 5s
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Outputs

lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:rust-serverless"

Optimizing Rust for AWS Lambda

Reduce Binary Size

AWS Lambda bills for allocated memory and execution time, and smaller binaries deploy faster and cold-start quicker:

# Compile with musl for static linking
cargo lambda build --release --target x86_64-unknown-linux-musl

# Strip debug symbols (saves ~30% size) 
strip target/x86_64-unknown-linux-musl/release/bootstrap

Pro tip: Use cargo udeps to audit unused dependencies.
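Beyond stripping, Cargo’s release profile has well-known size levers; one optional configuration (trade-offs noted inline) is:

```toml
[profile.release]
opt-level = "z"     # optimize for size rather than speed
lto = true          # link-time optimization across crates
codegen-units = 1   # fewer codegen units: better optimization, slower builds
strip = "symbols"   # strip symbols at link time (Rust 1.59+), replaces manual strip
panic = "abort"     # drop unwinding machinery
```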

Cold Start Mitigation

  • Precompiled binaries. The x86_64-unknown-linux-musl target ensures compatibility with AWS Lambda’s Amazon Linux 2 environment.
  • Provisioned concurrency. Pre-initialize Lambda instances via the AWS Console, Terraform, or CloudFormation to reduce cold starts for high-traffic functions.
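Sticking with the Terraform JSON style used above, provisioned concurrency can be sketched roughly like this (the resource requires a published function version, e.g. publish = true on the function; resource names match the earlier example):

```json
{
  "resource": {
    "aws_lambda_provisioned_concurrency_config": {
      "rust_lambda": {
        "function_name": "${aws_lambda_function.rust_lambda.function_name}",
        "qualifier": "${aws_lambda_function.rust_lambda.version}",
        "provisioned_concurrent_executions": 2
      }
    }
  }
}
```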

Async Best Practices

Rust’s async runtime (Tokio) helps you run multiple I/O-bound tasks concurrently.

async fn fetch_s3_object(bucket: &str, key: &str) -> Result<Vec<u8>, Error> {
    let client = aws_sdk_s3::Client::new(&aws_config::load_from_env().await);
    let resp = client.get_object().bucket(bucket).key(key).send().await?;
    let data = resp.body.collect().await?;
    Ok(data.into_bytes().to_vec())
}

Use concurrency to fetch data from multiple sources without blocking the main thread.

Observability and Debugging

  • Structured logging. Already shown above with the tracing crate.
  • Error handling. thiserror for typed errors that help you quickly pinpoint issues in logs or metrics.
  • AWS X-Ray. Consider X-Ray for advanced tracing if you need deeper visibility into call chains, especially across microservices.
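If you adopt X-Ray, active tracing is enabled on the function itself; in the Terraform JSON style used earlier, that is a tracing_config argument on the aws_lambda_function resource (a fragment, not a complete resource):

```json
"tracing_config": { "mode": "Active" }
```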

Advanced Optimization Example

Fetching S3 Data Concurrently

async fn fetch_s3_object(bucket: &str, key: &str) -> Result<Vec<u8>, Error> {
    let client = aws_sdk_s3::Client::new(&aws_config::load_from_env().await);
    let resp = client.get_object().bucket(bucket).key(key).send().await?;
    let data = resp.body.collect().await?;
    Ok(data.into_bytes().to_vec())
}

Input

{ "bucket": "my-bucket", "key": "data.json" }

Output


{
  "content": "",
  "metadata": { "last_modified": "2023-10-05T12:34:56Z" }
}

You can initiate multiple fetch_s3_object calls concurrently using tokio::join!, slashing overall execution time.

Final Deployment Workflow

Build

cargo lambda build --release --target x86_64-unknown-linux-musl
strip target/x86_64-unknown-linux-musl/release/bootstrap

Deploy

terraform apply -auto-approve

If you’re using aws_lambda_function_url, you can expose the function publicly via HTTPS once the apply step completes.
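A minimal aws_lambda_function_url resource, in the same Terraform JSON style (resource names are illustrative; authorization_type NONE makes the URL public), looks like:

```json
{
  "resource": {
    "aws_lambda_function_url": {
      "rust_lambda_url": {
        "function_name": "${aws_lambda_function.rust_lambda.function_name}",
        "authorization_type": "NONE"
      }
    }
  }
}
```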

Invoke

aws lambda invoke \
  --function-name rust-serverless \
  --cli-binary-format raw-in-base64-out \
  --payload '{"name":"Alice"}' output.json

(The --cli-binary-format flag is required by AWS CLI v2 to pass a raw JSON payload.)

Response (output.json)

{ "message": "Hello, Alice!" }

Conclusion

Rust’s combination of speed, safety, and efficiency makes it ideal for serverless computing. By leveraging tools like cargo-lambda, tokio, and Terraform, you can deploy production-ready functions that outperform traditional runtimes in cold starts, memory usage, and overall cost.

Next Steps

  • Explore AWS Lambda Extensions for secrets management and advanced logging.
  • Integrate with AWS SQS or EventBridge for event-driven architectures.
  • Benchmark your own functions using AWS X-Ray to visualize call traces.

By adopting Rust for serverless, you’re not just optimizing performance — you’re future-proofing your architecture for the next wave of modern, scalable applications.

Further Reading

  • AWS Lambda Rust Runtime GitHub
  • A Guide to AWS Software Development
  • Rust Documentation

Opinions expressed by DZone contributors are their own.