AWS Steppingstones #1 - ECS Learning Series Part 01
Uncovering the fundamentals that will set you up to become an expert in Amazon Elastic Container Service (ECS) and deploy efficient, scalable microservices
Hello Reader,
I am sure you’re doing great today, and thanks for taking the time to read this post. Any comments or feedback from you are always welcome; they help me improve and make the content better next time.
AWS Tip of the Week - “Back to Basics”
'Back to Basics' is a video series that explains and examines basic cloud architecture patterns and best practices. Each episode is hosted by an AWS Solutions Architect (SA) and focuses on a specific architectural building block, independent of any specific cloud solution.
The videos cover different domains, and you can learn more about the topics you’re interested in and build on that knowledge with further reading.
You can access the series via the link below. Happy watching!
https://aws.amazon.com/architecture/back-to-basics/
Why this post?
With this post, I am starting a learning series on specific Amazon Web Services (AWS) services, and I have selected Amazon Elastic Container Service (ECS) to begin with. In this post, I will cover the fundamental concepts that you should know before learning about ECS.
Amazon Elastic Container Service (ECS) is a fully managed container orchestration service that helps you to more efficiently deploy, manage, and scale containerized applications. It deeply integrates with the AWS environment to provide an easy-to-use solution for running container workloads in the cloud and on premises with advanced security features using Amazon ECS Anywhere.
I feel there are a couple of specific advantages to learning something new through a learning series:
1. The best part is the continuity in your learning: you’ll be able to progress from basic to expert-level concepts without skipping anything along the way.
2. It helps you retain focus on the same topic or service; watching videos on unrelated topics, for example, tends to divert your attention.
Let’s jump straight into the topic of interest. This post will cover the fundamentals in three steps. 😊
Step 01 – Virtualization and Containerization
We’re in the age of Gen AI, AI/ML, Cloud, and so many other things, but not very long ago we were deploying one application per physical server; when we needed to deploy more apps, companies bought more physical servers and deployed onto them. This was a huge waste of physical resources and company capital. It also delayed the rollout of applications while physical servers were procured, racked, patched into the network, and had an operating system installed.
Then the concept of virtualization came to the rescue. Technologies like VMware opened the door to running multiple applications on a single physical server, letting us deploy apps very quickly to virtual machines on existing servers. Still, the technology world never sleeps, and better innovation was on the way.
In the early 2010s, Docker gave the world the gift of easy-to-use containers. At a high level, containers are another form of virtualization that allows us to run even more apps on fewer servers and deploy them even faster.
The main difference is that server virtualization slices a physical server into multiple virtual machines (VMs), each of which runs its own OS and hosts its own apps. Container virtualization, on the other hand, slices a single operating system into isolated virtual operating-system environments called containers.
Some of the common differences are listed below in tabular format.
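Aspect | Virtual Machines | Containers
Isolation | Hardware-level isolation via a hypervisor; each VM runs its own OS kernel | Process-level isolation; containers share the host OS kernel
Size | Typically gigabytes, since each VM carries a full guest OS | Typically megabytes, containing only the app and its dependencies
Startup time | Minutes | Seconds or less
Resource overhead | Higher, because every VM runs its own operating system | Lower, because there is no guest OS per container
Density | Fewer workloads per physical server | Many more workloads per physical server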
Step 02 – Monolith vs Microservices
A monolithic application is a single, unified unit where all components (UI, business logic, data access, etc.) are tightly coupled and run as a single service. The entire application is deployed as one package, so even small changes require redeploying the whole system. Scaling is often vertical (adding more resources to a single server) rather than horizontal (adding more servers). All components are interdependent, making the system more challenging to maintain and scale as it grows. This model typically uses a single technology stack for the entire application.
Microservices break down an application into smaller, independent services that each handle a specific business function and communicate with each other through APIs. Each microservice can be deployed, updated, and scaled independently without affecting the rest of the system. Scaling is usually horizontal, with specific services being scaled independently based on demand. Services are loosely coupled, making the system more flexible, easier to maintain, and scalable. Each microservice can use a different technology stack, allowing teams to choose the best tools for specific tasks.
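To make the independent-scaling idea concrete, here is a minimal sketch using plain Docker commands (the image names order-service and payment-service are hypothetical, and the “docker run” command itself is covered in Step 03 below). An order service under heavy load can be run as multiple containers while the payment service keeps a single instance:

docker run -d --name orders-1 -p 8081:8080 order-service
docker run -d --name orders-2 -p 8082:8080 order-service
docker run -d --name payments-1 -p 8090:8080 payment-service

Scaling the order service up or down touches nothing else, whereas a monolith would have to be redeployed or replicated as a whole.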
Microservice applications are better suited to being deployed as containers, since the individual services can be scaled separately. Now that we know the differences between VMs and containers, and what advantages containers bring, we are good to proceed to the last step.
Step 03 – How to containerize your application?
Given the widespread usage of Docker across the world, we will use it to understand how to containerize your application in a few simple steps. We will see these steps in real action in the next post with a sample Java application.
1. Install Docker Desktop - Docker Desktop is a one-click-install application for your Mac, Linux, or Windows environment that lets you build, share, and run containerized applications and microservices. It provides a straightforward GUI (Graphical User Interface) that lets you manage your containers, applications, and images directly from your machine.
You can install the tool for your operating system using the help links given below, followed by a quick way to verify the installation.
Linux - https://docs.docker.com/desktop/install/linux-install/
Mac - https://docs.docker.com/desktop/install/mac-install/
Windows - https://docs.docker.com/desktop/install/windows-install/
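Once the installation is complete, you can verify that Docker is working by running the following commands in a terminal:

docker --version
docker run hello-world

The first command prints the installed Docker version, and the second pulls and runs the small hello-world test image, which confirms that the Docker engine can pull images and start containers.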
2. Once you have the tool installed, you can create the Dockerfile (without any file extension) in the same directory where your application code resides.
A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. For a Java application, a minimal sample Dockerfile might look like the one below.
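This sketch assumes the build produces a JAR at target/app.jar and that the application listens on port 3000; adjust the base image, paths, and port to match your project.

# Start from a lightweight Java runtime base image
FROM eclipse-temurin:17-jre
# Set the working directory inside the image
WORKDIR /app
# Copy the built application JAR into the image (path assumes a Maven-style build)
COPY target/app.jar app.jar
# Document the port the application listens on (assumed to be 3000 here)
EXPOSE 3000
# Run the application when a container starts
ENTRYPOINT ["java", "-jar", "app.jar"]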
3. Use the “docker build” command with the Dockerfile to build the image of your application. An image is a read-only template with instructions for creating a Docker container. A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI.
Sample Command:
docker build -t your-app-name .
This command tags the image with the name “your-app-name”, and the dot (.) at the end refers to the current directory, where the Dockerfile is located.
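If the build succeeds, the new image will show up in your local image list:

docker images

You should see an image named “your-app-name” with the “latest” tag (the default tag when none is specified).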
4. Now you can run the image as a container with the “docker run” command.
Sample Command:
docker run -d -p 3000:3000 your-app-name
The “-d” option runs the container in detached (background) mode, and the “-p” option maps port 3000 on your host machine to port 3000 in the container, which lets you connect to the container from the host machine.
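You can confirm the container is up and reachable by listing the running containers and sending a test request (assuming your application serves HTTP on port 3000):

docker ps
curl http://localhost:3000

“docker ps” shows the running container along with its port mapping, and the curl request should return a response from your application.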
5. This step is applicable when you want to push the image to a Docker registry. A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker looks for images on Docker Hub by default. You can even run your own private registry.
Sample Command:
docker push your-dockerhub-username/your-app-name
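Before this push will succeed, you typically need to log in to Docker Hub and tag the locally built image with your Docker Hub username, since the image was built simply as “your-app-name”:

docker login
docker tag your-app-name your-dockerhub-username/your-app-name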
Now that we know the fundamentals, we’re good to learn the ECS concepts and build applications to deploy on ECS. 😊
What’s next?
In the next post, you’ll learn more about the ECS concepts and how to deploy your sample Java application on ECS and access it from the internet. We’ll also look at some of the best practices for deploying applications using the ECS service.
“Please install the Docker Desktop tool and complete the setup on your workstation; that will help you try out what you’ve read.”
Further Reading and References
https://www.redhat.com/en/topics/virtualization
https://www.redhat.com/en/topics/cloud-native-apps/what-is-containerization