An intro to Amazon Fargate: what it is, why it's awesome (and not), and when to use it

If you've ever tried running and managing containers in production, you know it's no simple task. Ensuring your containers have the right resources, configuring the networking between them, keeping everything secure, and scaling to handle traffic spikes adds a lot of complexity. Amazon Fargate aims to solve many of these challenges by allowing you to run containers in a serverless way, without having to manage the underlying infrastructure. Let's take a closer look at what Fargate is, the problems it solves, how it works under the hood, and when you should consider using it.

The challenges of running containers at scale

Containers have exploded in popularity over the past few years as a way to bundle and deploy applications consistently across environments. Tools like Docker make it easy to package your application and its dependencies into a container image that can run reliably anywhere. However, going from a single container running on your laptop to a production-grade setup introduces a number of hurdles:

Managing container infrastructure is complex. As soon as you go beyond a handful of containers, you need a way to orchestrate and manage them. Tools like Amazon ECS and Kubernetes handle tasks like deploying containers across a cluster, scaling them up or down, performing health checks and more. But configuring these orchestrators and managing the underlying server infrastructure is a big job.

Scaling is difficult. Handling spikes in traffic requires careful planning and configuration to ensure you have the right mix of resources available. You need to overprovision capacity, set up auto-scaling policies, and make sure your application can handle failure of individual nodes. It often requires manual intervention and a lot of trial and error to get right.

Security is a concern. When you're running many containers on shared infrastructure, you have to worry about isolating them from each other. A vulnerability in one container can potentially compromise the entire host. Applying security updates also becomes more difficult since you have to coordinate updates across all running containers.

Amazon Fargate aims to solve these problems by abstracting away the underlying server infrastructure and allowing you to run each container or task in its own isolated environment.

How Fargate makes it easier to run containers

With Fargate, you no longer have to provision or manage any servers. You simply specify the resources you need for each container or task, and Fargate handles the rest. Here's how it helps:

No more managing infrastructure. Fargate eliminates the need to choose server types, decide when to scale your clusters, or optimize cluster packing. You just define your container, specify the CPU and memory requirements, and Fargate handles provisioning the right infrastructure to run it.

Containers run in isolation. Each container runs in its own dedicated kernel runtime environment, isolated from other containers and customers. You don't have to worry about rogue containers affecting the host or other containers. This greatly simplifies security and makes it easier to meet strict compliance requirements.

Easy scaling and high availability. Fargate integrates with other AWS services like ELB and Auto Scaling to provide a highly available and scalable architecture out of the box. As demand increases, new containers are automatically launched to handle the load with no infrastructure management required.

Only pay for the resources you use. With Fargate, you pay only for the amount of vCPU and memory resources your containerized applications consume – there's no over-provisioning and no paying for idle resources. This makes it easy to optimize costs, especially for periodic or bursty workloads.

So in essence, Fargate allows you to specify your container workload requirements and then dynamically matches those requirements to the right compute resources. You get the benefits of containers – fast startup times, efficient resource utilization, consistent runtimes – without any of the management overhead.

A technical look at how Fargate works

Under the hood, Fargate uses many of the same building blocks as Amazon ECS. When you run a task or service on Fargate, your containers are deployed onto dedicated VMs that are isolated from other customers. The big difference is that Fargate manages these VMs for you and abstracts them away.

With a standard ECS setup, you create a cluster and specify the instance type and number of EC2 instances to run in that cluster. Your containers then get deployed onto those EC2 instances based on the task definition and service configuration. You're responsible for scaling the cluster up and down and ensuring you have the right mix of instances.

With Fargate, you just specify the task definition for each container, including the Docker image and CPU/memory requirements. Fargate handles provisioning the right compute resources to run those tasks and manages the underlying infrastructure for you. You can still integrate with ELB for load balancing, CloudWatch for logging and monitoring, IAM for access control, etc.
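As a concrete sketch, here's roughly what such a task definition looks like, expressed as the Python dict you would pass to ECS's `register_task_definition` API via boto3 (the family name, image URI, account ID, and port are hypothetical placeholders, not values from the article):

```python
# Illustrative Fargate task definition. All names and the image URI below are
# made-up placeholders -- only the field structure follows the ECS schema.
task_definition = {
    "family": "web-api",                     # logical name for this task
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2
    "networkMode": "awsvpc",                 # each Fargate task gets its own ENI
    "cpu": "512",                            # CPU units: 1024 = 1 vCPU, so 0.5 vCPU
    "memory": "1024",                        # memory in MB, so 1 GB
    "containerDefinitions": [
        {
            "name": "web-api",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-api:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,  # if this container stops, the task stops
        }
    ],
}

# With boto3 and AWS credentials configured, registration would look like:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**task_definition)
```

Note the two Fargate-specific bits: `requiresCompatibilities` must include `"FARGATE"`, and `networkMode` must be `"awsvpc"` since every Fargate task receives its own elastic network interface.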

Fargate architecture diagram

Fargate uses a fairly sophisticated scheduler to bin pack containers and achieve high utilization. When you launch a task, the Fargate scheduler provisions a VM to run that task in isolation. That VM is dedicated to your account but may end up running multiple tasks from your account over time as the scheduler optimizes placement.
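Fargate's actual scheduler isn't public, but a toy first-fit bin-packing loop gives a feel for the problem it solves: place tasks onto VMs, opening a new VM only when no existing one has room. The VM size and placement rule here are made up for illustration:

```python
# Toy first-fit-decreasing bin packing. Illustrative only -- this is NOT
# Fargate's real algorithm, just the general shape of the placement problem.
def place_tasks(tasks, vm_cpu=4.0, vm_mem=16.0):
    """tasks: list of (cpu_vcpus, mem_gb) tuples.
    Returns a list of VMs, each a list of the tasks placed on it."""
    vms = []  # each entry: [free_cpu, free_mem, placed_tasks]
    for cpu, mem in sorted(tasks, reverse=True):  # biggest tasks first pack tighter
        for vm in vms:
            if vm[0] >= cpu and vm[1] >= mem:  # first VM with room wins
                vm[0] -= cpu
                vm[1] -= mem
                vm[2].append((cpu, mem))
                break
        else:
            # no existing VM fits: provision a fresh one for this task
            vms.append([vm_cpu - cpu, vm_mem - mem, [(cpu, mem)]])
    return [vm[2] for vm in vms]

# Six half-vCPU / 1 GB tasks fit on a single 4 vCPU / 16 GB VM:
print(len(place_tasks([(0.5, 1.0)] * 6)))  # -> 1
```

The better the packing, the higher the utilization – which is why a managed scheduler doing this across your whole task fleet can beat hand-tuned cluster sizing.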

One key concept is that of a task. In ECS, a task is composed of one or more containers that run together. With Fargate, tasks are the unit of deployment. You can run multiple tasks for the same container image, each with its own vCPU and memory footprint. This makes it easy to scale horizontally since you can just add more tasks as needed.

Comparing Fargate to the alternatives

So how does Fargate stack up to other ways of running containers in the cloud? Let's look at a few common options:

Running containers on EC2 yourself. With this approach, you set up and manage your own ECS cluster running on EC2. You have full control over the instance size and configuration, but you're also responsible for scaling and ensuring high availability of the cluster. This gives you the most flexibility but also the most management overhead.

Using Kubernetes. Kubernetes is an open-source orchestration platform for containers. Compared to ECS, it has a larger ecosystem and more flexibility, but also a steeper learning curve. AWS offers managed Kubernetes with EKS, but you still have to provision and manage worker nodes yourself. At the time of writing, Fargate support for EKS is in the works but not yet available.

Serverless with Lambda. If your application is a good fit for the serverless model, you can use AWS Lambda to run code without provisioning any infrastructure. Lambdas have some limitations though – they can only run for a maximum of 15 minutes and are limited in memory, storage, and package size. They're great for small event-driven tasks but not a good fit for long-running processes.

In general, Fargate sits in a nice middle ground between running everything yourself and a pure serverless model. It abstracts away the server management but still gives you a lot of control over how your containers run.

Where Fargate shines

So what are the key benefits of using Fargate? Let's break it down:

Simplified ops. For me, the biggest draw of Fargate is not having to manage servers or worry about infrastructure. You can just focus on defining your containers and let Fargate handle the rest. This frees up a lot of time to focus on other priorities.

Easy scaling. With Fargate, you can quickly scale up to handle spikes in traffic or scale down to zero when there's no work to be done. You don't have to overprovision resources just in case or fiddle with auto scaling policies. Just define a service and set the desired number of tasks.
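The scaling decision itself is easy to picture. Here's a hedged sketch of the idea – pick a desired task count so per-task load stays near a target. The 100-requests-per-task target and the bounds are made-up numbers, and real ECS service auto scaling drives this from CloudWatch metrics rather than a function like this:

```python
import math

# Back-of-the-envelope target tracking: how many tasks do we want for a given
# load? Target and bounds are illustrative, not ECS defaults.
def desired_task_count(requests_per_sec, target_per_task=100,
                       min_tasks=0, max_tasks=50):
    if requests_per_sec <= 0:
        return min_tasks  # idle: scale down to the floor (possibly zero)
    wanted = math.ceil(requests_per_sec / target_per_task)
    return max(min_tasks, min(max_tasks, wanted))  # clamp to configured bounds

print(desired_task_count(0))    # -> 0  (no work, no tasks, no cost)
print(desired_task_count(450))  # -> 5  (450 rps at ~100 rps per task)
```

Because billing is per running task, the `0` case is where Fargate's pay-per-use model pays off.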

Granular resource allocation. Fargate allows you to specify the exact CPU and memory requirements for each task. No more guessing how many containers you can squeeze onto a larger host instance. This can lead to big cost savings, especially if you have many smaller services that would otherwise take up a large chunk of a dedicated host.
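One caveat to that granularity: Fargate only accepts particular CPU/memory pairings per task. A small validator sketch, using the combinations documented at the time of writing (verify against the current ECS documentation before relying on these values):

```python
# Valid Fargate task sizes at the time of writing: CPU in units (1024 = 1 vCPU),
# memory in MB. Check the current ECS docs -- AWS has expanded this table before.
VALID_COMBOS = {
    256:  [512, 1024, 2048],                # 0.25 vCPU: 0.5, 1, or 2 GB
    512:  [1024 * i for i in range(1, 5)],  # 0.5 vCPU: 1-4 GB
    1024: [1024 * i for i in range(2, 9)],  # 1 vCPU: 2-8 GB
    2048: [1024 * i for i in range(4, 17)], # 2 vCPU: 4-16 GB
    4096: [1024 * i for i in range(8, 31)], # 4 vCPU: 8-30 GB
}

def is_valid_task_size(cpu_units, memory_mb):
    """True if ECS would accept this CPU/memory pairing for a Fargate task."""
    return memory_mb in VALID_COMBOS.get(cpu_units, [])

print(is_valid_task_size(512, 1024))  # -> True
print(is_valid_task_size(512, 8192))  # -> False (too much memory for 0.5 vCPU)
```

So "exact" allocation really means picking the smallest valid combination that fits your workload.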

Improved security and compliance. Containers running on Fargate are isolated at the VM level. They run in their own dedicated kernel runtime environment, which reduces the attack surface. You can use your existing IAM policies to govern access and easily meet regulatory requirements like PCI or HIPAA.

One customer example comes from Vanguard, one of the world's largest investment companies. They used Fargate to deploy an internal application with over 100 microservices in a regulated environment. Using Fargate allowed them to achieve faster deployment cycles while meeting security and compliance needs. They provisioned a new region with over 120 tasks in just 6 weeks.

The tradeoffs and limitations

Of course, no technology is without tradeoffs. Here are a few potential drawbacks to consider with Fargate:

It can be expensive. The elephant in the room with Fargate is that it costs significantly more than the equivalent EC2 capacity. At the rates current when this was written, Fargate pricing runs about 4x the equivalent EC2 on-demand price. Some quick math: an m5.large EC2 instance costs $0.096/hour. The same resources on Fargate (2 vCPU and 8 GB of memory) would cost $0.40/hour – about a 4x markup. Fargate pricing has dropped over time, so rerun this math against current rates.

Now, you can save a lot of money with Fargate for workloads that scale down to zero. If you run a task for only 1 hour per day, Fargate will be much cheaper than paying for a dedicated host 24/7. The cost equation really comes down to utilization – if you can keep dedicated instances busy around the clock, Fargate's premium probably won't pay for itself.
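To make the comparison concrete, here is that napkin math as code, using the example rates quoted above (not current AWS pricing – substitute fresh numbers from the pricing pages before making a decision):

```python
# Napkin math: an always-on EC2 host vs. Fargate billed only while tasks run.
# Rates are the article's examples for a 2 vCPU / 8 GB footprint, NOT current
# AWS pricing -- treat them as placeholders.
EC2_PER_HOUR = 0.096     # m5.large on-demand
FARGATE_PER_HOUR = 0.40  # same resources on Fargate

def daily_cost(busy_hours_per_day):
    """Return (ec2_cost, fargate_cost) in dollars per day."""
    ec2 = EC2_PER_HOUR * 24                          # host runs 24/7 regardless
    fargate = FARGATE_PER_HOUR * busy_hours_per_day  # pay only while tasks run
    return ec2, fargate

ec2, fargate = daily_cost(1)  # task busy just 1 hour/day: Fargate wins easily
print(f"EC2: ${ec2:.2f}/day, Fargate: ${fargate:.2f}/day")

# Breakeven at these rates: Fargate is cheaper below
# 24 * 0.096 / 0.40 ≈ 5.8 busy hours per day.
```

That ~6-hours-a-day breakeven is the whole cost story in one number: below it, pay-per-use wins; above it, a dedicated host wins.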

Loss of control. The flip side of not having to manage infrastructure is that you give up some control. With Fargate, you can't SSH into the underlying hosts or customize the runtime environment the way you can with EC2. For most applications this isn't an issue, but it could be a blocker for some.

Limited configuration options. Fargate currently supports only a limited set of platform and task configurations. It doesn't support every option you get with ECS on EC2, such as host networking mode, privileged containers, or most Docker logging drivers. Again, these limitations won't matter for many use cases, but they're something to be aware of.

Potential for performance impact. Because Fargate ultimately runs on shared, multi-tenant hardware, there's potential for "noisy neighbor" issues where a resource-hungry workload impacts others. In my experience this isn't a big problem since tasks are isolated at the VM level, but I wouldn't run my most latency-sensitive workloads on Fargate.

So when should you use Fargate?

Given the benefits and limitations, here are some questions to ask when deciding whether to use Fargate:

  1. How important is simplicity vs control? If your top priority is getting something deployed quickly and you're willing to trade some flexibility, Fargate is a great choice. But if you have special requirements or need more control over the environment, you may want to use ECS on EC2 instead.

  2. What are the cost implications? Do some napkin math based on your expected utilization to compare the costs of Fargate vs EC2. If you expect steady-state usage, EC2 will likely be cheaper. But for variable workloads, Fargate can provide significant savings.

  3. Can your application be easily containerized? Fargate is ideal for applications that follow the 12-factor principles and can be split into discrete services. Legacy monoliths can be more challenging and may not be a good fit for containerization in general.

  4. How sensitive are you to vendor lock-in? Fargate only works in the AWS ecosystem, so you'll need to be comfortable being locked into AWS services. If you're looking for something more portable, running your own Kubernetes cluster might be preferable.

In general, I'd recommend Fargate for:

  • Small to medium-sized applications that can be easily containerized
  • Workloads with unpredictable or variable resource needs
  • Teams that want to get up and running quickly without managing infrastructure
  • Applications that need to dynamically scale in response to load

On the flip side, Fargate may not be the best choice for:

  • Cost-sensitive workloads that will run 24/7
  • Huge monolithic applications that are difficult to containerize
  • Specialized applications that require custom Linux kernels or low-level access
  • Teams that want to remain cloud-agnostic and avoid lock-in

The bottom line on Fargate

There's no question that Fargate is a powerful tool for running containers in the cloud. It takes a lot of the hassle out of managing container infrastructure and allows developers to focus on building applications instead of becoming part-time sysadmins. The serverless model is great for variable or unpredictable workloads and can provide significant cost savings in those scenarios.

But Fargate isn't a slam dunk for every use case. It has some important limitations and can be much more expensive than managing your own infrastructure if you have stable, predictable usage. As with any technology choice, it's important to understand the tradeoffs and pick the right tool for the job.

Personally, I'm a big fan of Fargate and have seen it work well for many projects. Being able to define a few task definitions and let Fargate handle the rest is a huge time saver. I'd encourage any team running containers on AWS to at least give it a look. But go in with eyes wide open about the costs and limitations. When used for the right workloads, Fargate can be a powerful addition to your architecture.