aws · serverless · fintech · architecture

Serverless at Scale: Running a Rewards Platform Across 10 Countries

Digvijay Solanki·November 20, 2025

When you're building a consumer-facing rewards platform that needs to operate across 10 countries, serve 5 million users, and support market-specific configurations like Arabic right-to-left layouts, the deployment and operational complexity can easily spiral out of control.

On Abbott Care & Share Rewards, we chose a serverless architecture using AWS SAM, and it was the right call. Here's why — and what it actually looked like in practice.

The Problem with Traditional Microservices at This Scale

Before committing to serverless, we evaluated a traditional microservices approach. The model wasn't wrong, but the operational overhead was significant:

  • Each service needed its own deployment pipeline, container orchestration, and scaling configuration
  • Per-country configuration meant either many environment-specific containers or complex runtime config management
  • Spinning up a new market (e.g., Saudi Arabia) required coordinating deployments across multiple services

We estimated onboarding a new country would take 3–4 weeks of DevOps work. That wasn't acceptable.

Why AWS SAM Changed the Equation

AWS Serverless Application Model (SAM) is an extension of CloudFormation designed specifically for serverless workloads. It gave us:

  • Infrastructure as code for Lambda functions, API Gateway routes, DynamoDB tables, and IAM roles in a single, versioned template
  • Local development and testing via sam local — no AWS account needed for unit and integration tests
  • Repeatable environment creation — deploying the Saudi Arabia environment was a single SAM deploy with country-specific parameters

The same template that ran in production ran in staging and in each developer's local environment. Configuration drift became nearly impossible.

The Data Layer: DynamoDB

We chose DynamoDB for its managed scaling and global table support. Key design decisions:

  • Single-table design for core entities (users, rewards, transactions) to minimise hot partitions and keep read patterns fast
  • Country-prefixed partition keys to namespace data per market with full isolation
  • DynamoDB Streams for event-driven processing — reward redemptions triggered downstream workflows (notifications, analytics, fraud checks) via Lambda without direct coupling
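The key scheme and stream-driven fan-out above can be sketched roughly as follows. The table layout, attribute names, and event fields here are illustrative assumptions, not the production schema:

```python
def partition_key(country: str, entity: str, entity_id: str) -> str:
    """Country-prefixed partition key, e.g. 'SA#USER#42'.

    The country prefix namespaces every item to one market, so per-market
    isolation falls out of the key design rather than separate tables.
    """
    return f"{country.upper()}#{entity.upper()}#{entity_id}"


def sort_key(record_type: str, record_id: str) -> str:
    """Sort key namespacing an entity's sub-records, e.g. 'TXN#abc123'."""
    return f"{record_type.upper()}#{record_id}"


def redemptions_from_stream(event: dict) -> list[dict]:
    """Filter a DynamoDB Streams event down to newly inserted redemptions.

    Stream records carry items in DynamoDB's attribute-value format
    ({'S': ...}, {'N': ...}); only the fields used here are unwrapped.
    A Lambda handler would pass each result to notifications, analytics,
    or fraud checks without the writer knowing about those consumers.
    """
    redemptions = []
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue
        image = record["dynamodb"].get("NewImage", {})
        if image.get("type", {}).get("S") != "REDEMPTION":
            continue
        redemptions.append({
            "pk": image["pk"]["S"],
            "points": int(image["points"]["N"]),
        })
    return redemptions
```

Because the key builders and the stream filter are pure functions, they can be unit-tested locally with sample events, which pairs well with the `sam local` workflow described above.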

One thing I'd do differently: a few access patterns pushed against DynamoDB's query model. In hindsight, designing a global secondary index up front would have saved us several retroactive table migrations.
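Designing that index up front might have looked something like the fragment below, expressed as a boto3-style `create_table` definition. The index and attribute names (`gsi1`, `gsi1pk`, `gsi1sk`) are hypothetical placeholders, not the production schema:

```python
# Hypothetical single-table definition with a GSI declared from day one,
# avoiding a retroactive migration when new access patterns appear.
table_definition = {
    "TableName": "rewards-platform",
    "AttributeDefinitions": [
        {"AttributeName": "pk", "AttributeType": "S"},
        {"AttributeName": "sk", "AttributeType": "S"},
        {"AttributeName": "gsi1pk", "AttributeType": "S"},
        {"AttributeName": "gsi1sk", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "pk", "KeyType": "HASH"},
        {"AttributeName": "sk", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [
        {
            # An overloaded index whose keys are written per item type,
            # covering queries the base pk/sk layout cannot serve directly.
            "IndexName": "gsi1",
            "KeySchema": [
                {"AttributeName": "gsi1pk", "KeyType": "HASH"},
                {"AttributeName": "gsi1sk", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

Since unused GSI key attributes cost nothing on items that don't populate them, declaring a spare overloaded index early is cheap insurance against exactly this kind of migration.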

CDN Distribution: CloudFront + S3

Static assets — app bundles, localised content, images — were served from S3 buckets behind a CloudFront distribution. This gave us:

  • Sub-100ms asset delivery globally via edge caching
  • Per-country content variants (Arabic translations, region-specific promotional content) served via CloudFront's origin path routing
  • Significant reduction in Lambda invocations for static content
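The per-country variant routing can be reduced to a small mapping from market to origin path. The countries, locales, and path layout below are assumptions for illustration, not the actual bucket structure:

```python
# Hypothetical market-to-origin-path mapping; CloudFront prepends the
# origin path to every request it forwards to the S3 origin, so each
# market's assets live under its own prefix in one bucket.
ORIGIN_PATHS = {
    "sa": "/content/sa-ar",  # Saudi Arabia, Arabic (RTL bundles)
    "ae": "/content/ae-ar",
    "in": "/content/in-en",
}


def origin_path(country_code: str, default: str = "/content/global-en") -> str:
    """Resolve the S3 origin path CloudFront should serve for a market."""
    return ORIGIN_PATHS.get(country_code.lower(), default)
```

In practice this table would be derived from the same per-country parameters fed to the SAM deploy, so the CDN layout and the backend configuration stay in lockstep.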

The Result: 40% Less Deployment Effort

Compared to our microservices estimate, SAM-based deployments were dramatically simpler:

  • New market onboarding dropped from ~3 weeks to under a week
  • The entire infrastructure was version-controlled alongside application code
  • Rollbacks were a single SAM deploy to a previous template version

With CloudWatch dashboards and Lambda-native metrics, observability was built in rather than bolted on.

Closing Thoughts

Serverless isn't the right answer for every workload. CPU-intensive tasks, long-running processes, and workloads with very predictable traffic profiles may be better served by containers or EC2. But for event-driven, bursty, globally distributed applications like a rewards platform, it's a natural fit.

The 40% reduction in deployment effort wasn't magic — it was the compounding effect of infrastructure-as-code, managed scaling, and a deployment model that treated each environment as a parameterised instance of the same template.

Reach out if you're evaluating serverless architecture for your platform — happy to work through the trade-offs with you.