Amazon ECS (Docker): binding container to specific IP address

Here’s a logical way to do it. It sounds complicated, but you can implement it in a matter of minutes, and it works. I’m implementing it as we speak.

You create a task definition for each container, a service for each task definition, and a target group for each service. Then you create just one Elastic Load Balancer.
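Roughly, the wiring looks like this in boto3 (the cluster name, VPC ID, and image below are placeholders; adjust them to your setup):

import boto3

ecs = boto3.client("ecs")
elbv2 = boto3.client("elbv2")

# One target group per service (name and VPC ID are placeholders)
tg = elbv2.create_target_group(
    Name="app-1-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# One task definition per container
ecs.register_task_definition(
    family="app-1",
    containerDefinitions=[{
        "name": "app-1",
        "image": "my-registry/app-1:latest",
        "memory": 256,
        # hostPort 0 lets ECS pick a dynamic host port for the ALB
        "portMappings": [{"containerPort": 80, "hostPort": 0}],
    }],
)

# One service per task definition, registered with its target group
ecs.create_service(
    cluster="my-cluster",
    serviceName="app-1",
    taskDefinition="app-1",
    desiredCount=1,
    loadBalancers=[{
        "targetGroupArn": tg_arn,
        "containerName": "app-1",
        "containerPort": 80,
    }],
)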

Application Load Balancers can route requests based on the request path. Using listener rules tied to the target groups, you can route requests for elb-domain.com/1 to container 1, elb-domain.com/2 to container 2, and so on.
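The path-based routing is just one listener rule per target group. Continuing the sketch above (the listener ARN is a placeholder):

# Route elb-domain.com/1* to container 1's target group
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
    Priority=1,
    Conditions=[{"Field": "path-pattern", "Values": ["/1*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)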

Now you are only one step away. Create a reverse proxy server.

In our case we’re using nginx, so you can create an nginx server with as many IPs as you’d like and use nginx’s reverse-proxying capability to route each IP to a path on your ELB, which in turn routes it to the correct container(s). Here’s an example if you’re using domains:

server {
    server_name domain1.com;
    listen 80;
    # "vhost" is a custom log_format; drop it to use the default format
    access_log /var/log/nginx/access.log vhost;
    location / {
        # The trailing slash makes nginx forward /foo as /1/foo
        # rather than /1foo
        proxy_pass http://elb-domain.com/1/;
    }
}

Of course, if you’re actually routing by IP, you can omit the server_name line and just listen on the corresponding address.
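For example, assuming 10.0.0.1 is one of the addresses bound to the proxy host (a placeholder address):

server {
    listen 10.0.0.1:80;  # bind only to this address; no server_name needed
    location / {
        proxy_pass http://elb-domain.com/1/;
    }
}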

This is arguably better than assigning a static IP per container, because it lets you run a cluster of Docker hosts with requests balanced across that cluster for each of your “IPs”. Recreating a machine doesn’t affect the static IP, and little configuration has to be redone.

Although this doesn’t fully answer your question because it won’t allow you to use FTP and SSH, I’d argue that you should never use Docker for that; use cloud servers instead. If you’re using Docker, then instead of updating the server over FTP or SSH, you should update the container itself. For HTTP and HTTPS, however, this method works perfectly.
