Amazon AWS Certified Advanced Networking Specialty – Advanced Route53 Configurations Part 4

January 16, 2023

10. Overview of Route53 Routing Policies

Hey everyone and welcome back. In today’s video we’ll be discussing the routing policies in Route 53. There are various routing policies supported in Route 53, and these policies are a big part of what makes Route 53 such a great place to host your domains; this is a very powerful feature. As of the time we are recording this video, the available routing policies are: simple, weighted, latency, failover, geolocation and multivalue answer. Let me quickly show you how this looks. Within Route 53, if you look into the routing policies, these are the ones available. AWS might add new ones over time, but as of now, these are the ones the exam is likely to focus on.

Now let’s discuss some of the routing policies at a high level. In simple routing there is a plain one-to-one mapping between the domain and the host. Let’s say you have the domain kplabs.in and it has a record of 128.199.241.125; that is a one-to-one mapping between your domain and the host. This is the simple routing policy. The second one is failover routing. Failover routing depends on health checks, which are a feature of Route 53 responsible for monitoring the health of an endpoint. So let’s say this is your server and a health check is monitoring it. If the server goes down, Route 53 will redirect requests to the backup website.
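To make simple routing concrete, here is a minimal boto3 sketch that upserts the one-to-one A record described above; the hosted zone ID is a placeholder and not from the original demo:

```python
import boto3

route53 = boto3.client("route53")

# Simple routing: a plain one-to-one A record mapping the domain to a host.
HOSTED_ZONE_ID = "Z0EXAMPLE12345"  # placeholder zone ID

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Simple routing: one record, one endpoint",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "kplabs.in",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "128.199.241.125"}],
                },
            }
        ],
    },
)
```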

So here you give the addresses of two different websites, as IP addresses or something like a CNAME or an Alias. If Route 53 detects that one of them has gone down, all DNS responses will point to the alternative website that you put in the failover routing. The third one is weighted routing. Weighted routing allows us to route traffic to multiple resources in proportions that we specify. So here you see there are two servers, and we can specify that 50% of the traffic goes to server one and 50% of the traffic goes to server two; a sketch of this setup follows below.
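Here is what that 50/50 weighted setup might look like with boto3; the zone ID, set identifiers and server IPs are placeholders for illustration:

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0EXAMPLE12345"  # placeholder zone ID

# Weighted routing: two records for the same name, each with a SetIdentifier
# and a Weight. Traffic splits in proportion to the weights (here 50/50).
changes = []
for identifier, ip, weight in [
    ("server-1", "198.51.100.10", 50),  # placeholder IPs
    ("server-2", "198.51.100.20", 50),
]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "kplabs.in",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Weighted 50/50 split", "Changes": changes},
)
```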

The next one is latency-based routing, where the response depends on the latency between the user and each region. Let’s take an example: say someone is visiting from Sydney, and the latency from Sydney to the Singapore region is much lower than the latency from Sydney to the US region. Route 53 will then return the IP address of the server with the lowest latency. The next one is geolocation routing. Geolocation routing allows us to serve different resources to different users based on the country or continent they are visiting from. So let’s say you have two records for kplabs.in: one with an IP address starting with 128, and another with an IP address starting with 52. If a user is visiting from Asia, he’ll get the 128 address; if he’s visiting from Europe, he’ll get the 52 address. That is what geolocation routing is all about.
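A minimal sketch of that Asia/Europe geolocation setup, again with a placeholder zone ID and a made-up Europe-facing address standing in for the “52” record:

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0EXAMPLE12345"  # placeholder zone ID

# Geolocation routing: Asia resolves to one IP, Europe to another. In practice
# you would also add a default record (GeoLocation={"CountryCode": "*"}) so
# users from unmatched locations still get an answer.
records = [
    ({"ContinentCode": "AS"}, "asia", "128.199.241.125"),
    ({"ContinentCode": "EU"}, "europe", "52.28.0.10"),  # placeholder IP
]
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "kplabs.in",
            "Type": "A",
            "SetIdentifier": set_id,
            "GeoLocation": geo,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }
    for geo, set_id, ip in records
]
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Geolocation: Asia vs Europe", "Changes": changes},
)
```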

11. Overview of Disaster Recovery Models

Hey everyone, and welcome back to the Knowledge Portal video series. In today’s lecture we will primarily be speaking about disaster recovery techniques. What this signifies is: if a disaster occurs, what are the ways in which we can recover our infrastructure within a specific amount of time? When it comes to disaster recovery, two very important concepts are the RTO (recovery time objective) and the RPO (recovery point objective). There are various disaster recovery designs that a solutions architect can implement, and the design chosen depends directly on how quickly we want to recover from a disaster. Let’s assume we have a website in a single Availability Zone. If that Availability Zone goes down and the website is a part-time website that is not that important, then we don’t really have to worry about designing a multi-AZ architecture; that would just lead to more cost.

However, if we want even a single Availability Zone failure not to affect our website, then the disaster recovery design would be very different. When we talk about design, there are four broad strategies on which we can base our disaster recovery architecture. The first is a simple backup-and-restore strategy. The second is pilot light. The third is warm standby. The fourth is multi-site. One important thing to remember is that whichever technique we choose comes with its own implications for how fast we can recover, for performance, and for cost as well as complexity. So let’s go ahead and understand each of them. The first is backup and restore. Backup and restore is a very simple, cost-effective method which requires us to constantly take backups of our data and store them in services like S3, to be restored when a disaster strikes.

Now, this is a very simple technique, and I still remember a lot of my friends who have their own blogs. They are personal blogs, and they cannot really afford a multi-AZ architecture because that would lead to more complexity and more cost. So what they do is go with simple backup and restore: every day they take a database dump and store it in S3, and if the database gets corrupted or something goes down, they pull the dump from S3 and recover the blog. For on-premises servers which hold a huge amount of data, typically tens of terabytes, they can make use of technologies like Direct Connect or Import/Export to back their data up to AWS. This is an important point, because many organizations have a huge amount of data on premises and cannot really back it up over the internet; if you don’t have a very good internet connection, backing up terabytes of data is a huge pain.
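As a concrete illustration of the backup-and-restore idea, here is a minimal sketch of a nightly dump-to-S3 script; the bucket name, database name, and use of mysqldump are all assumptions for illustration:

```python
import subprocess
from datetime import datetime, timezone

import boto3

# Backup and restore sketch: dump the blog database every night and push the
# dump to S3, then pull it back down when a disaster strikes.
s3 = boto3.client("s3")
BUCKET = "kplabs-db-backups"  # placeholder bucket


def backup_database() -> str:
    """Dump the database to a local file and upload it to S3."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    dump_file = f"/tmp/blog-{stamp}.sql"
    subprocess.run(
        ["mysqldump", f"--result-file={dump_file}", "blog_db"], check=True
    )
    key = f"dumps/blog-{stamp}.sql"
    s3.upload_file(dump_file, BUCKET, key)
    return key


def restore_database(key: str) -> None:
    """When disaster strikes, pull the dump back from S3 to restore it."""
    s3.download_file(BUCKET, key, "/tmp/restore.sql")
    # ...then load /tmp/restore.sql into a fresh database instance.
```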

Now, in order to back up such huge data, there are various ways in which you can do it. One is Direct Connect, which is like a direct leased connection to AWS, and the second is Import/Export, which you can use to ship the data directly. Don’t worry, we’ll be speaking about each of them in great detail in the relevant upcoming sections. So that is the first approach. The second is the pilot light. In the pilot light approach we keep a minimal version of the servers in the backup region, either in a stopped state or in the form of AMIs. So let’s assume this is the primary region, where your web server, app server and DB server are running. As part of the pilot light, you have a similar setup in the backup region, but the servers are in a stopped state: you can see the web server is stopped and the app server is stopped.

However, the database is continuously mirroring; this is one important thing to remember. So whenever a disaster strikes, you can start these servers and your website will be up and running. That is one approach. The second approach is to keep AMIs of all of these in the backup region, so whenever the primary region goes down, you can launch instances from the AMIs in the second region and the website will be up and running. That is the pilot light. As you can see, the pilot light is not a very fast solution for getting the website back up, but it does provide good disaster recovery because copies of all the servers exist in a different region. The third is warm standby, where the servers are actually running. The difference between pilot light and warm standby is that in warm standby the servers are constantly running, but as a minimal version; when a disaster happens, the servers are scaled up for production.
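To illustrate the recovery step of the pilot light, here is a minimal boto3 sketch that starts the stopped servers in the backup region; the region and instance IDs are assumptions:

```python
import boto3

# Pilot light recovery sketch: the web and app servers sit stopped in the
# backup region, so recovery means starting them.
ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # assumed DR region

# Placeholder instance IDs of the stopped web and app servers.
PILOT_LIGHT_INSTANCES = ["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"]

ec2.start_instances(InstanceIds=PILOT_LIGHT_INSTANCES)

# Wait until the servers are actually running before pointing DNS at them.
ec2.get_waiter("instance_running").wait(InstanceIds=PILOT_LIGHT_INSTANCES)
```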

So let’s assume production is a 4 GB RAM server; in the standby region this might be a 1 GB RAM server under an Elastic Load Balancer. If disaster strikes, we can quickly increase the size of our servers and our application will be up and running; a sketch of that scale-up step appears after this paragraph. One important difference between warm standby and pilot light is that in pilot light it is not necessary to have servers in a stopped state at all: it might also be that you just have the AMI of the web server and the AMI of the app server, and whenever a disaster strikes you launch the servers from the AMIs. In warm standby, however, you must have servers in a running condition. You cannot have just an AMI and no servers running; you should have servers running, but at a minimal size. That is warm standby. The last is multi-site, where you have a complete one-to-one mirror of your production environment: if production is a 4 GB RAM server, the backup server should also have 4 GB RAM. It is an exact replica of the production environment.
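As an illustration of the warm standby scale-up, here is a hedged boto3 sketch that resizes a minimal server to production size; the instance ID, region and instance types are assumptions, and note that changing the type requires a stop/start cycle:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # assumed DR region
INSTANCE_ID = "i-0ccccccccccccccccc"  # placeholder warm-standby server

# Warm standby sketch: the server is already running at a minimal size
# (say t3.micro); on disaster we resize it to the assumed production size.
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    InstanceType={"Value": "m5.large"},  # assumed production size
)

ec2.start_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
```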

Now, as far as cost is concerned, multi-site will cost you the most, but it will also allow you to recover from a disaster in the least amount of time. So these are some of the ways in which you can design a disaster recovery solution. Remember that each technique comes with its own cost and its own architectural complexity. Whichever technique you choose, make sure that you also test it. It should not happen that you have multi-site, but whenever you switch to the backup servers during a disaster, those servers are not running or are having issues. So you need to do a lot of testing. I still remember that in one of the organizations I worked with, we tested every two weeks.

So what we did was switch from one region to another every two weeks and check whether everything was working perfectly: the entire production traffic was migrated from the primary region to the disaster recovery region, and we verified that everything worked. This is a nice way to make sure that when an actual disaster happens, we have a perfectly working production environment. Again, there are various AWS services which we can use for disaster recovery, like S3, Glacier, Import/Export, Storage Gateway, Direct Connect, VM Import/Export, Route 53 and many others. Throughout this course we will be looking at all of these services in the context of disaster recovery, and also in terms of how exactly we can use them for our production environment.

12. Multi-Site Failover with Route53

Hey everyone and welcome back. In the earlier lectures we were looking into the various disaster recovery models that are possible while designing DR for an organization. Today we will have an overview, with a demo, of a multi-region active setup with failover capabilities. So let’s go ahead and understand what I mean by that. Let’s assume that you have one server in the Oregon region and a backup server in the Singapore or Mumbai region, and that you are following the multi-site disaster recovery model. Both of the servers, and both of the architectures, will look very similar, including the instance types, and both of the servers will be in a running condition. Now, we have a Route 53 record set, and since Oregon is our primary region, all the traffic coming to your website will go to the servers in the primary region.

Now let’s assume that for some reason a disaster has occurred and the server within your primary region has gone down. Ordinarily, what would happen is that in the middle of the night you would get an alarm that the server is down, and then you would have to go to your DNS provider and manually change the A records of your hostname to point to the backup region. Route 53, however, has great features related to failover and health checks. What Route 53 can do is verify whether the server is up and running or not. If the server does not reply, Route 53 assumes that the server is down, stops sending traffic to it, and automatically sends the traffic to the backup region in the multi-site configuration.
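As a sketch of the health check piece, here is how such a check might be created with boto3; the endpoint IP is a placeholder, and the 30-second interval matches the one used later in the demo:

```python
import uuid

import boto3

route53 = boto3.client("route53")

# Health check sketch: Route 53 probes the primary endpoint from multiple
# checker locations and marks it unhealthy after repeated failures.
response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # must be unique per request
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",  # placeholder primary server IP
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/",
        "RequestInterval": 30,  # seconds between checks
        "FailureThreshold": 3,  # consecutive failures before "unhealthy"
    },
)
health_check_id = response["HealthCheck"]["Id"]
```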

So definitely, the first time we have to tell Route 53 which is the primary region and which is the secondary region; after that, Route 53 handles everything by itself. So let’s go ahead and look at how exactly that would really look. What I have is one EC2 instance running in the Oregon region; this is going to be our primary EC2 instance, the one which is running over here. And we have one more EC2 instance running in the Mumbai region. In case the primary region goes down, or the EC2 instance in the primary region stops responding, Route 53 will automatically switch all the traffic to the secondary site. So let’s try this out. This is our domain, the multi-site one, this one.

If I say the domain out loud, I’m sure my neighbors will start to laugh, which is why I’ll just skip that. So I’ll copy it and paste it into the browser. And now you see you have got a basic “Welcome to nginx on Amazon Linux AMI” page. This is the index.html page which is loading from the primary EC2 instance. So we have a primary, and we have a secondary based in Mumbai; I just put “Mumbai” on its page. Perfect. So let’s do one thing: let’s log into the primary and stop it. In fact, instead of logging in, let me directly go ahead and stop the instance itself. So I’ll stop the EC2 instance. This is basically a disaster in production.

Anything that impacts the continuity of the business, whether a networking issue, an EC2 instance issue, an entire region going down, or your application going down, is a disaster. So we have stopped the instance; this can stand in for the scenario of an Availability Zone going down, the host on which the EC2 instance is running going down, or the entire Oregon region going down. Now what will happen is that Route 53 will verify whether the EC2 instance is responding or not, and if it is not responding, it will automatically route the traffic to the EC2 instance in the Mumbai region.
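Here is a hedged boto3 sketch of the failover record pair that drives this behavior; the zone ID, record name, health check ID and both IPs are placeholders rather than the demo’s actual values:

```python
import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z0EXAMPLE12345"  # placeholder zone ID
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"  # placeholder

# Failover routing: the PRIMARY record answers while its health check passes;
# Route 53 switches answers to SECONDARY when the check goes unhealthy.
changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "multisite.kplabs.in",  # assumed demo record name
            "Type": "A",
            "SetIdentifier": "primary-oregon",
            "Failover": "PRIMARY",
            "HealthCheckId": HEALTH_CHECK_ID,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],  # Oregon, placeholder
        },
    },
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "multisite.kplabs.in",
            "Type": "A",
            "SetIdentifier": "secondary-mumbai",
            "Failover": "SECONDARY",
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.20"}],  # Mumbai, placeholder
        },
    },
]
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Comment": "Multi-site failover", "Changes": changes},
)
```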

So let’s quickly verify. Within the Route 53 console, let me click on refresh; it takes a certain amount of time. I have configured the interval as 30 seconds, but you can certainly reduce it further. Every 30 seconds Route 53 sends checks from a lot of locations; if it gets a reply, it assumes that the EC2 instance is healthy, and if it does not, it assumes that the EC2 instance has stopped working. So let’s just wait a few seconds for the health check to update. Perfect. The status of the health check is now unhealthy, so Route 53 should automatically route all new traffic for the domain to the secondary server. Let’s try this out. And now you see the message has changed: it now says “this is a multi site architecture”. This is the failover page, served from the secondary region. So this is how exactly you can design a multi-site architecture. You don’t really need to stick to AWS; it can be between on-premises and AWS as well, or between on-premises and other cloud providers.
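If you prefer to check the health status from code rather than the console, a small sketch like this (with a placeholder health check ID) shows the latest result from each Route 53 checker location:

```python
import boto3

route53 = boto3.client("route53")
HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"  # placeholder

# Each observation is the latest probe result from one checker location.
status = route53.get_health_check_status(HealthCheckId=HEALTH_CHECK_ID)
for obs in status["HealthCheckObservations"]:
    print(obs["Region"], obs["StatusReport"]["Status"])
```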
