Public Cloud

How to Create a Secured Website on Amazon EC2

Damian Igbe, Phd
Oct. 4, 2022, 12:12 a.m.


Introduction to the project: Deploy a secured website on Amazon EC2

In the tech space, something is always trending or 'hot'. I first noticed this with OpenStack. I was fully immersed in the OpenStack community: I attended OpenStack summits, I trained many people on it for about 3 years full-time, and I traveled to many places in the world while working for Mirantis. I enjoyed the community immensely.

Next in the timeline was Docker. I remember being at the OpenStack summit in Hong Kong when I first heard the word Docker. Before you knew it, everybody in every corner of the tech space was talking about Docker and containers.

Then came Kubernetes, and it spread like wildfire. Even if you were hiding in a cave, you would still have heard the word Kubernetes. Having seen the challenges of getting OpenStack to work in a production environment, I found Kubernetes a relief (at least in my own opinion). I enjoy working with Kubernetes. I was one of those who took the CKA certification exam when it was in the beta stage, and I still work with Kubernetes on a daily basis. Like OpenStack, I train people on Kubernetes.

But do you know what underlies all the mentioned technologies? You guessed right: Linux. And I have been using Linux since 1997 (yes, I am old :)).

However, just as Linux powers all the above-mentioned technologies, Kubernetes is now called 'the OS of the cloud'. What this tells me is that almost all future workloads in the cloud will run on Kubernetes, and that Linux and Kubernetes will be the most enduring technologies into the future. Which brings me to the point I want to make.

When people want to switch into an IT career or advance their tech roles, they are often more interested in titles: DevOps, SRE, Solutions Architect, Cloud Engineer, Cybersecurity, etc. There is nothing wrong with titles, but I am of the opinion that they should first ground themselves in the fundamentals: Linux, Kubernetes, at least one containerization technology like Docker/containerd/CRI-O, and computer networking essentials. Step by step, after the fundamentals are out of the way, the next stage is to ground themselves in the specifics of their titles, like DevOps or SRE. I believe that anyone can rise to any height in tech if they have the right understanding and build the necessary skills from the ground up. Anything other than that is like building your house on sand!

In the next couple of weeks, join me as we build a simple project that ties all the mentioned technologies together. Together, we will build a simple secured (HTTPS) website on the AWS cloud using EC2, then Docker, then Kubernetes. We will use the same application on all the platforms to help you connect the missing links. Along the way, we will learn the technology essentials: Linux, Docker, Kubernetes, cloud, and everything in between. Tomorrow, I will share the project topology, and then we will get our hands dirty.

 

Stay tuned and let me know if you are super excited about this!

 

Day 2:

The Topology Diagram

OK, now that we have covered the background of this project, let's get started. Today, we will start with the architectural diagram. Below is the simple, single-tier topology that we will use to run our secured website. We will not use any database for this first phase of the project.

 

 

AWS Services

Here is a summary of the AWS services that we will use:

  • AWS EC2
  • Virtual Private Cloud (VPC), using only public subnets
  • AWS Application Load Balancer (ALB)
  • AWS Auto Scaling service
  • Internet Gateway
  • Route 53 (optional)

Requirements:

  1. You will need an AWS account if you don't have one. You can create a free-tier account.
  2. You will need to register a domain to follow through to the end. Here, we will use cloudtechexperts.com.

Implementation:

We will implement this project in phases. Remember, we are taking baby steps and building gradually, so that even a baby can follow along. This is simplified on purpose; it assumes nothing. We will use the AWS cloud, but you can use any cloud of your choice.

Phase 1:

  1. Create a VPC with 2 public subnets
  2. Create an EC2 instance. To create an EC2, we will need to create security groups and Key Pairs
  3. Connect to EC2
  4. Install Apache
  5. Configure our website
  6. Configure a DNS A record to point cloudtechexperts.com at our website
  7. Secure the website with an SSL certificate.

Note: you will need a registered domain to follow this step. You can register the domain outside AWS or inside AWS; it doesn't really matter.

Traffic flow for phase 1:

This is the flow of traffic (from left to right) without the load balancer:

  1. A user enters the URL of the website in their browser.
  2. The request is resolved by DNS/Route 53.
  3. The request hits the IP address of the EC2 instance, which serves the website to the user.

Phase 2: 

Once we get this phase out of the way, we will enhance our implementation with an ALB/ELB.

Below are the steps:

  1. Create an AMI of the EC2 instance
  2. Put an ELB in front of the website (EC2 instance)
  3. Configure the Auto Scaling (AS) service for dynamic scaling
  4. Test and conclude the EC2 deployment

Traffic flow for phase 2:

This is the flow of traffic (from left to right) with the load balancer:

  1. A user enters the URL of the ELB in their browser.
  2. The request is resolved by DNS/Route 53.
  3. The request hits the load balancer, which directs it to any available instance, and that instance serves the website to the user.

Phase 3:

Implement the same solution using Docker containers

  1. Install Docker
  2. Create a container image of the website
  3. Upload the image to a Docker registry
  4. Deploy the website as a Docker container

Phase 4:

Implement the same solution using Kubernetes.

  1. Create a Kubernetes cluster
  2. Create a deployment object and create the application
  3. Put a service/load balancer in front of the application
  4. Scale it
  5. Enjoy :)

That’s it for today, folks. To follow along tomorrow, ensure that you have an AWS account ready. A registered domain is also recommended, but not mandatory. See you then.

 

Day 3

Create the VPC/Network for the Secured Website

Now that we have the topology diagram, let us get started building our secured website. A natural place to start is to construct the computer network where the EC2 instance will be connected. Networking in AWS is covered under VPC (Virtual Private Cloud).

Before you can work with VPCs, you need to understand the fundamentals of networking: terms such as network, subnet, VLAN, subnetting, routers, and Internet gateways. But I must confess that this can be a huge topic, so we will not get into the weeds. For now, let us just work with the knowledge that you cannot create an EC2 instance in AWS unless you have a fully functional VPC. A fully functional VPC includes 4 main components:

  1. The VPC - This is the network with a CIDR block such as 192.168.0.0/16. A VPC spans an entire AWS Region.
  2. The Subnets - These are the VLANs or subnets obtained by subnetting 192.168.0.0/16. You create a subnet in each Availability Zone (AZ).
  3. The Router/Route Table - This controls how packets are routed within the VPC and outside the VPC. You can do amazing routing here.
  4. The Internet Gateway - This is the connecting link between the Internet and your VPC. A subnet connected to the Internet Gateway is considered a public subnet; otherwise it is a private subnet.

Take note that when you create an AWS account, AWS automatically creates a default VPC in every region, with a public subnet in each AZ, for you. This enables you to get started without understanding very much about how computer networking works. Earlier in the introduction, I mentioned that we will not use a database for this phase of the project, so we don't really need a private subnet, and the default VPC that AWS provides would be sufficient. However, we will still build a VPC so you know how to do that.

To see how those components connect together, I always use this diagram from AWS's documentation:

 

To understand this flow, note that to reach the EC2 instances at the bottom from your computer or laptop, the traffic flows as follows:

From your laptop --> Internet Gateway --> Router/Route Table --> NACL --> Subnet --> Security Group --> EC2 instance.

With the diagram understood, let us go ahead and build our VPC. The steps will follow in the video.

Here is the video.
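If you prefer the command line to the console, here is a minimal sketch of the same build using the AWS CLI. Every ID (vpc-xxxx, subnet-xxxx, igw-xxxx, rtb-xxxx), the CIDR blocks, and the AZ names are placeholders to replace with values from your own command outputs:

    # Create the VPC (CIDR is illustrative)
    aws ec2 create-vpc --cidr-block 192.168.0.0/16

    # Create two public subnets, one per AZ (use the VPC ID from the output above)
    aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 192.168.1.0/24 --availability-zone us-east-1a
    aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 192.168.2.0/24 --availability-zone us-east-1b

    # Create an Internet Gateway and attach it to the VPC
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx

    # Route all outbound traffic through the Internet Gateway and
    # associate the route table with the subnets to make them public
    aws ec2 create-route-table --vpc-id vpc-xxxx
    aws ec2 create-route --route-table-id rtb-xxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
    aws ec2 associate-route-table --route-table-id rtb-xxxx --subnet-id subnet-xxxx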

 

Day 4:

Create the EC2 Instance

Remember that yesterday we created a VPC and 2 subnets, along with all the required network objects, for our secured website on EC2. Today, we are going to create the EC2 instance. Before we do that, we need to create 2 dependencies that the EC2 instance needs: key pairs and security groups.

1. Key pairs 

We need a key pair to log in to the EC2 instance. Key pairs are more secure than usernames/passwords. As the name indicates, this is a pair of keys: the public key and the private key. The public key is stored on AWS and will be inserted into the EC2 instance. The private key is downloaded to the computer/laptop that you used to create the key. Take care to keep the key in a secure location, because it is like your password: each time you attach the key pair to an EC2 instance, you will need the private key to log in. The key you download can be either .PEM or .PPK. If you are using PuTTY (an SSH client) to connect to the instance, you need the .PPK; almost everything else requires the .PEM key. It's usually best to use PuTTYgen (a key conversion tool that gets installed along with PuTTY) to convert the key from one format to the other, so you have both the PEM and the PPK.
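For reference, here is a minimal AWS CLI sketch of creating a key pair; the key name is a placeholder:

    # Create a key pair and save the private key locally (name is illustrative)
    aws ec2 create-key-pair --key-name project-key \
        --query 'KeyMaterial' --output text > project-key.pem

    # Lock down permissions, as SSH requires
    chmod 400 project-key.pem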

2. Security groups 

Security groups are virtual firewalls that you use to control (deny or allow) the flow of traffic into your EC2 instances. By default, SSH is allowed if you create a Linux EC2 instance, while RDP is allowed if you create a Windows instance. Anything else is implicitly denied, and you need to allow a protocol before you can use it. If you install the Apache web server, for example, you will need to open port 80 or 443 to allow access to the web server running on the EC2 instance. Security groups are attached to a VPC, so take note of that when creating your own security groups. Each time you create an EC2 instance, you have the option to attach a security group that AWS creates on the fly or to attach your own preconfigured security group. It's best to have a preconfigured security group before creating the EC2 instance so that all the ports you want are properly configured. This is good planning.
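Here is a hedged AWS CLI sketch of such a preconfigured security group for our web server; the group name, VPC ID, and group ID are placeholders:

    # Create a security group in our VPC
    aws ec2 create-security-group --group-name web-sg \
        --description "Web server security group" --vpc-id vpc-xxxx

    # Allow SSH, HTTP, and HTTPS from anywhere (tighten the SSH rule in production)
    aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 22  --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 80  --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 443 --cidr 0.0.0.0/0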

Other Things to Understand Before Creating the EC2 Instance

Creating an EC2 instance from the AWS Management Console is wizard-driven and quite easy, but let's understand some of the things that we will use in creating the EC2 instance.

a. Amazon Machine Image (AMI):

Primarily, an AMI is a prebaked operating system. Hence, AMIs are classified based on the operating system that they contain. The most popular AMIs are Linux and Windows. Linux has several distributions, so we have several choices, such as Amazon Linux, Ubuntu, Red Hat, SUSE, etc.

b. Instance Type:

This is the spec of your EC2 instance; it dictates how small or large your server's resources will be. The more the resources, the more you pay for the instance. AWS offers a free-tier instance type called t2.micro, so this is what we will use most of the time to avoid being charged. If you have a free-tier account, almost every other instance type will not be free, so take note of that. A t2.micro is sufficient for the web server that we are going to deploy here.

c. EBS volume:

We will need to attach a volume to the EC2 instance. Linux instances usually require 8GB of EBS volume, while Windows typically requires 30GB. The good thing is that you get 30GB of EBS volume free as part of the free-tier account, though that is not a lot of room, considering that just one Windows instance will consume the entire free allowance.

With that intro out of the way, let's go ahead and create the EC2 instance. The video is attached below.
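For those who prefer the CLI, a minimal sketch of the launch might look like this; every ID below is a placeholder, and you should look up a current Amazon Linux AMI ID for your region:

    # Launch a t2.micro from an Amazon Linux AMI into one of our public subnets
    aws ec2 run-instances \
        --image-id ami-xxxxxxxx \
        --instance-type t2.micro \
        --key-name project-key \
        --security-group-ids sg-xxxx \
        --subnet-id subnet-xxxx \
        --associate-public-ip-address \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=web-server}]'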

 

 

Day 5:

Connect to the EC2 Instance 

In the Unix/Linux world, the concept of DOTADIW ('Do one thing and do it well') is well-known. Hence, the Linux world has lots of tools and utilities for different tasks. Tools and utilities are wonderful, right? Try pulling nails out of wood with your bare fingers, then try using the right tool, like pincer pliers, and you will understand the importance of tools and utilities in the tech field in general.
 
Today, we will use one of the most useful utilities of the cloud era: SSH. Since our servers sit in faraway data centers commonly referred to as the cloud, we need a tool to connect to those remote servers. The best-known tool for this is SSH (Secure Shell).
 
SSH itself is a client/server protocol, meaning that the server has to be up and running before the client can be useful. The SSH client you use is mostly determined by your operating system. Linux ships with the ssh client utility, while on Windows, PuTTY is one of the best-known SSH clients.
 
Today, we will discuss how to connect to our project's remote server with the ssh client from a Linux shell and with PuTTY from a Windows laptop.
 
So, now that we have our EC2 instance created, we need to connect to it. We can either connect from the AWS Management Console or from our laptop. How we connect from the laptop depends on the OS we are using: connecting from a Windows laptop is different from connecting from a Linux or Mac laptop. The AMI (Windows or Linux) of the EC2 instance also determines the right utility to use for the connection. Note that connecting from Linux and from Mac are usually similar.

Let's summarize the procedures:

 

 

Here is a summary of what to take care of:

  1. When connecting from a Windows laptop to a Windows server/instance, you use the RDP protocol. An RDP client is usually installed on every Windows laptop. The key pair needed must have a .PEM extension.
  2. When connecting from a Windows laptop to a Linux server/instance, you use an SSH client like PuTTY, which you download and install from the internet. The key pair needed must have a .PPK extension. If needed, you can use PuTTYgen to convert the key from .PEM to .PPK.
  3. When connecting from a Linux/Mac laptop to a Windows server/instance, you use the RDP protocol. An RDP client is not installed on every Mac/Linux machine, so you need to install an RDP client of your choice. The key pair needed must have a .PEM extension.
  4. When connecting from a Mac/Linux laptop to a Linux server/instance, you use the SSH client. An SSH client is usually installed on every Mac/Linux laptop. The key pair needed must have a .PEM extension. When connecting to a Linux server, you must change the permissions of the key to 600 or 400 with the command

         chmod 600 key.pem    (replace key.pem with the name of your key)
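Putting it together for the Mac/Linux-to-Linux case, a complete session might look like this; the key name and IP address are placeholders, and the default login user depends on the AMI (ec2-user on Amazon Linux, ubuntu on Ubuntu):

    chmod 400 project-key.pem
    ssh -i project-key.pem ec2-user@203.0.113.10   # use your instance's public IP or DNS name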

So below is the video.

 

Day 6:

Install the Apache Web Server  

One of the open-source projects that has stood the test of time is the Apache web server. At one point, 73% of all websites were deployed on Apache. Apache is even part of the best-known web stack, LAMP (Linux, Apache, MySQL, PHP). Though Nginx has surpassed Apache as the leading web server, Apache is not going anywhere soon.

Continuing with our project implementation, here we will deploy the Apache web server to host our website. Learning is the most important thing here.

Now that we have connected to the EC2 instance, we can install the Apache web server to serve our website. To host a website, you first need to install a web server. The most popular web servers on the Linux platform are Apache and Nginx. We will use the Apache web server here. The good thing is that they are both open source, so we don't have to pay anything to use them.

The Apache web server package is called httpd on Amazon Linux, though it's called apache2 on Ubuntu servers. Installing the web server requires knowledge of the Linux package manager: Red Hat-based distros use yum, while Ubuntu-based distros use apt-get.

To successfully manage a web server, you also need to know how to manage a service on a Linux platform. For this, we will use either the service command or the systemctl command; systemctl is the modern way of managing services on Linux.

Note that the Apache web server serves web pages from a specific directory; on Amazon Linux, web pages are served from /var/www/html/. We will have to ensure that our website is located in this directory. There is a lot more to the Apache web server, but we will not go into the details here.
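On our Amazon Linux instance, the installation boils down to a few commands; the Ubuntu equivalent is shown as a comment:

    # Install and start Apache on Amazon Linux (yum-based)
    sudo yum install -y httpd
    sudo systemctl enable --now httpd   # start now and on every boot
    sudo systemctl status httpd         # verify it is running

    # On Ubuntu (apt-based), the equivalent would be:
    # sudo apt-get update && sudo apt-get install -y apache2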

Here is the video:

 

Day 7:

Install and Configure our Website 

Everyone should have a website, even if it is just to showcase your skill set for your dream job. Websites have become such a part of our daily lives that, in my opinion, everybody should understand how websites are built and the different architectures behind them. That is the aim of this series of blogs. You can host your website with a domain registrar, but learning to do this yourself is a very useful skill.

In this series of blogs, the implementation stage is here. All is now set to deploy our website. The website we are going to deploy was developed using HTML, CSS, and JavaScript. This is just the front end, as it contains no database backend. Websites usually need a database tier for data storage, but here we are taking it step by step to learn how to deploy a secured website on EC2. A website like this could also be deployed with a framework like WordPress, which is fairly easy to set up and makes the ongoing creation of pages simpler. However, WordPress has its own database requirement, and you need to install a database before you can install WordPress.

A typical progression for this website would be to integrate a programming language like Python (or a Python web framework like Flask or Django) to build the business logic, and then integrate a database backend like MySQL. Note that this is what web developers do, and developing a website like the one deployed here is beyond the scope of this project. In fact, the website deployed here is only for demonstration purposes, and it is publicly available.

Deploying this website just requires knowing where to put the web pages. Since the code is already available in a git repository, all we have to do is log in to the EC2 instance and clone the repository from GitHub to the right location to be served by the Apache web server. Apache serves web pages from /var/www/html by default; of course, Apache has more advanced configurations like virtual hosts, which, again, are not the main objective of this project.
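For reference, the deployment can be as short as this; the repository URL is a placeholder for your own website repo, and the sketch assumes the document root is still empty:

    # Clone the website into Apache's document root
    sudo yum install -y git
    cd /var/www/html
    sudo git clone https://github.com/<your-account>/<your-website-repo>.git .

    # Quick check from the instance itself
    curl -I http://localhost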

Here is the video that describes how to deploy the website on the Apache web server.

 

 

 

Day 8:

Point the DNS Domain Name to the Website

Our website has been deployed, and it's up and running, but we still have a few things to do to make it user-friendly and secure. First, we will configure our domain to point to this website. If you have not registered a domain, this is the right time to do that. You can register a domain in AWS using Route 53, or you can register it with a domain registrar such as GoDaddy.com. With about $10 you should be able to register a .com domain name, but again, this is not mandatory.

We will use a domain called cloudtechexperts.com. To point the domain to the IP address of our EC2 instance, we will work from the domain registrar. Had I registered the domain in AWS, I would do this in AWS, but this domain is with another registrar. Once I log in to the registrar's dashboard, I will configure a DNS 'A record' for the domain, pointing it at the static IP address of the EC2 instance running our website.
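Once the A record is saved, it can take a little while to propagate. A quick way to verify the record from any terminal (the IP shown is illustrative):

    dig +short cloudtechexperts.com
    # 203.0.113.10   <-- should match the EC2 instance's public/Elastic IP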

 Ok, with that introduction, let's head over to my domain registrar to do the DNS configuration.

The video is below:

 

Day 9:

Secure the website with SSL certificate

You cannot run an e-commerce website without proper security best practices in place. At a minimum, you need to serve the website over HTTPS so that traffic to and from your website is encrypted. This helps ensure that credit card information and other user data are protected from hackers. The way this works is as follows:

  • The user's browser attempts to connect to a website via a web server secured with SSL.
  • The browser requests that the web server identify itself.
  • The web server sends the browser a copy of its SSL certificate.
  • The browser checks whether it trusts the SSL certificate. Trust mostly depends on who issued the certificate. If it trusts the certificate, it sends a message to the web server.
  • The web server sends back a digitally signed acknowledgement to start an SSL-encrypted session.
  • Encrypted data is shared between the browser and the web server.

The point here is that communication between the website and the user's browser is secured, as traffic is encrypted. There are different ways to get an SSL certificate to configure on your server; some are free and some are paid. On AWS, we can use the AWS Certificate Manager service to obtain a free certificate if we put an Elastic Load Balancer (ELB) in front of our EC2 instances. For this stage of the project, since we are not using an ELB, we will use a well-known, publicly available, and free certificate authority called Let's Encrypt. Installing a Let's Encrypt certificate is quite easy, so let's go to our EC2 instance and follow the steps required to do that.
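As a rough sketch of those steps, assuming certbot with its Apache plugin (package names vary by distro; on Amazon Linux 2 they come from the EPEL repository):

    # Install certbot and its Apache plugin
    sudo amazon-linux-extras install epel -y
    sudo yum install -y certbot python2-certbot-apache

    # Request a certificate and let certbot configure Apache for HTTPS
    sudo certbot --apache -d cloudtechexperts.com

    # Verify that automatic renewal works
    sudo certbot renew --dry-run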

 

Project Phase 2: Configure Elastic Load Balancer (ELB)/Autoscaling

Now that we are done with phase 1 of the project, we can move on to phase 2. What we did so far in phase 1 got our website up and running, but if we evaluate it against the cloud's best practices, it will fail the evaluation. The most important cloud parameters are as follows; these are also referred to as the Well-Architected Framework pillars.

  • Security - ensure that our website is secure
  • Cost optimization - ensure that our design is not too expensive
  • Operational excellence - ensure that our website is easy to manage; this is better if we have some automation in place
  • Reliability - ensure that our website is reliable and has high uptime
  • Performance efficiency - ensure that our website provides a good customer experience

At a minimum, configuring our website with a load balancer and auto scaling groups addresses the security, reliability, and performance considerations. However, since we are adding services that increase the overall cost of the project, we will not meet the cost optimization pillar; that is OK, because if things go well, the website should be making good money. Sometimes it's difficult to meet all five pillars, and you have to select the most important ones to meet.

Here is how the load balancer and autoscaler help with the cloud's best practices:

  • The load balancer helps with security, as we put all the EC2 instances behind it; moreover, we can terminate SSL on the ELB.
  • The autoscaler (AS) helps with reliability, since we can add more instances across 2 AZs when required and reduce the number when they are no longer needed.
  • Together, the load balancer and autoscaler help with performance, as we can scale out the workload when traffic grows and scale the EC2 instances back in when traffic drops.

The load balancer is often used with the autoscaler because the autoscaler performs horizontal scaling, where instances are added when needed and removed when not needed. This is in contrast to vertical scaling, where a single server serves the workload and capacity can only be increased through an upgrade (i.e., increasing the CPU count or the size of the RAM).

Here is the traffic flow:

  1. A user enters the URL of the LB in their browser.
  2. The request is resolved by DNS/Route 53.
  3. The request hits the load balancer, which directs it to any available instance, and that instance serves the website to the user.

The important question to ask is: how do we ensure that the workloads served by all the EC2 instances the LB forwards traffic to are identical? The answer is that we can create an AMI of our secured website and then use that as the image for additional instances when needed.

 

Day 10:

Create an AMI of the Project's EC2 Instance

In this stage of the project, we will create an AMI (Amazon Machine Image). The AMI contains the base OS plus some additional software; in this case, the additional software is the Apache web server as well as the secured website that we just created.

In this video, you will learn how to create an AMI.
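If you prefer the CLI, a minimal sketch looks like this; the instance ID and image name are placeholders:

    # Create an AMI from the running instance
    aws ec2 create-image --instance-id i-xxxxxxxx \
        --name "secured-website-v1" \
        --description "Apache + website + SSL, baked for autoscaling"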

 

Day 11:

Connect a Load Balancer to the Secured Website

In the previous video, we created an AMI of the secured website. Now let's create a load balancer and connect it to our EC2 instance.

As explained above, the load balancer can forward traffic to different EC2 instances as required. Here we will set up the load balancer in preparation for the autoscaler, which will help scale the EC2 instances.

Let us go ahead and learn how to create a load balancer.

 

In creating the load balancer, we have the option to use either the Application Load Balancer (ALB, Layer 7) or the Network Load Balancer (NLB, Layer 4). For HTTP and HTTPS, the ALB is the best option, so we used the Application Load Balancer.
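For reference, here is a minimal AWS CLI sketch of an ALB with a target group and listener; all IDs, names, and ARNs are placeholders:

    # Create the ALB across our two public subnets
    aws elbv2 create-load-balancer --name website-alb \
        --subnets subnet-aaaa subnet-bbbb --security-groups sg-xxxx

    # Create a target group in the VPC, then a listener that forwards to it
    aws elbv2 create-target-group --name website-tg \
        --protocol HTTP --port 80 --vpc-id vpc-xxxx
    aws elbv2 create-listener --load-balancer-arn <alb-arn> \
        --protocol HTTP --port 80 \
        --default-actions Type=forward,TargetGroupArn=<tg-arn>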

Once we configure the load balancer, the next step is to point our domain cloudtechexperts.com to it. The easiest and most straightforward option is to go to our domain registrar and configure a CNAME or ALIAS DNS record for the domain. The instructions for this configuration vary from one registrar to another, so you need to check your domain registrar's documentation.

Another option would be to reassign our Elastic IP address from the secured website's EC2 instance to the ALB, but this is not possible, since AWS does not let you assign an Elastic IP to an ALB.

Within AWS, we can use Route 53, but we would need to transfer our domain to AWS if it is hosted outside of AWS.

Here is a summary of the options we can use to point our domain to the ALB:

  1. Go to your domain registrar and point a CNAME/ALIAS record at the load balancer's DNS name.
  2. Use the Route 53 service and configure an Alias record for our domain; note that the domain must be hosted in AWS.

 

Day 12:

Connect an Auto Scaling Group to the Load Balancer

The final part of our configuration is to connect the Auto Scaling group to the Elastic Load Balancer.

As stated earlier, the load balancer is often used with the autoscaler: as the autoscaler performs horizontal scaling, adding instances when needed and removing them when not, the load balancer forwards traffic to whatever instances are available, be it one EC2 instance or 1,000. This is in contrast to vertical scaling, where a single server serves the workload and capacity can only be increased through an upgrade (i.e., increasing the CPU count or the size of the RAM).

To create the Auto Scaling group, we first have to create a launch configuration (or launch template). This is the template that contains all the information required to create a new EC2 instance: the AMI, key pair, security groups, instance type, and volume. Whenever the Auto Scaling group needs to scale out, this template is used to create a new EC2 instance. This is where we will use the AMI that we created in one of the earlier videos.
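For reference, a minimal AWS CLI sketch of a launch template plus Auto Scaling group might look like this; all IDs, names, and ARNs are placeholders:

    # Create a launch template from our baked AMI
    aws ec2 create-launch-template --launch-template-name website-lt \
        --launch-template-data '{"ImageId":"ami-xxxxxxxx","InstanceType":"t2.micro","KeyName":"project-key","SecurityGroupIds":["sg-xxxx"]}'

    # Create the Auto Scaling group across both AZs and attach the ALB's target group
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name website-asg \
        --launch-template LaunchTemplateName=website-lt \
        --min-size 1 --max-size 4 --desired-capacity 2 \
        --vpc-zone-identifier "subnet-aaaa,subnet-bbbb" \
        --target-group-arns <tg-arn>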

 

Here is the video:

 

 

Day 13:

Phase 3: Run the Website Inside a Docker Container

Now that we have completed phases 1 and 2 of this project, we will deploy the website in a Docker container.

We will accomplish several steps to complete the task. These steps were laid out in the planning phase above, but are repeated here:

  1. Install Docker
  2. Create a container image of the website
  3. Upload the image to a Docker registry
  4. Deploy the website as a Docker container

In video 1 below, I introduce the concepts, and in video 2, I deploy the website inside a Docker container. Note that the application I use here is different from the one in the previous videos, but the concept remains the same.

 

Video 1

 

In video 2 below, I install the website.
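For reference, here is a minimal command-line sketch of the four steps; the Docker Hub account name is a placeholder, and the Dockerfile assumes the official httpd image, whose document root is /usr/local/apache2/htdocs:

    # 1. Install Docker (on Amazon Linux 2)
    sudo yum install -y docker
    sudo systemctl enable --now docker

    # 2. Create a container image of the website
    cat > Dockerfile <<'EOF'
    FROM httpd:2.4
    COPY ./ /usr/local/apache2/htdocs/
    EOF
    docker build -t <your-dockerhub-user>/secured-website:v1 .

    # 3. Upload the image to a Docker registry
    docker push <your-dockerhub-user>/secured-website:v1

    # 4. Deploy the website as a Docker container
    docker run -d --name website -p 80:80 <your-dockerhub-user>/secured-website:v1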

 

 

 

Phase 4: Run the Website on a Kubernetes Cluster

In the final phase of this project, we will deploy the website on a Kubernetes cluster.

We will accomplish several steps to complete the task. These steps were laid out in the planning phase above, but are repeated here:

  1. Create a deployment object
  2. Deploy the website application
  3. Put a service/load balancer in front of the application
  4. Scale the website

Here is the video.
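For reference, the imperative kubectl equivalents of these steps look roughly like this; the image name is a placeholder, and the LoadBalancer service type assumes a cloud-hosted cluster:

    # Create a deployment from the image we pushed earlier
    kubectl create deployment website --image=<your-dockerhub-user>/secured-website:v1

    # Expose it through a cloud load balancer, then scale it out
    kubectl expose deployment website --type=LoadBalancer --port=80
    kubectl scale deployment website --replicas=3

    # Watch for the service's external IP to appear
    kubectl get service website --watch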
