Amazon AWS Certified Advanced Networking Specialty – AWS Private Link Architecture

January 16, 2023

1. VPC Endpoints

Hey everyone and welcome back to the Knowledge Portal video series. Today we'll be speaking about a relatively new feature that got introduced, which is VPC Endpoints. So let's go ahead and understand VPC Endpoints with a simple use case. In this use case, we have an EC2-to-S3 scenario. You have an EC2 instance running and you have an S3 bucket. Now, if you want to send traffic from EC2 to S3, say to upload a lot of log files or some backups, the traffic normally goes via the internet. This is a problem, because if you have large files, like big MySQL backups, all of those backup files will have to travel over the internet. That works, but a lot of customers were requesting that if you have an EC2 instance in, let's assume, the Mumbai region, and your S3 bucket is also in the Mumbai region, then ideally, instead of going through the internet, the connectivity should happen over a private link.

Because if both resources are within the same region, it is ideal to have private link connectivity, as it brings a lot of benefits. The first benefit is security, because the traffic no longer has to go through the internet. The second is performance: if you send traffic across the internet, the performance will always be lower, whereas AWS's internal private connectivity runs over its own high-speed network.

So the time required to send data to the S3 bucket gets a huge boost compared to sending the traffic across the internet. To solve this, AWS decided to introduce VPC Endpoints. If the EC2 instance and the S3 bucket are within the same region, you can now send data between them through the AWS internal private network. That internal network is very fast, and it allows us to bypass the internet. It also saves cost, because you are now charged the internal data transfer rates rather than internet transfer rates. And it allows you to move huge amounts of data extremely fast between the EC2 instance and the S3 service. So this was one such use case.

S3 was the initial service supported when AWS launched VPC Endpoints, but they have since been adding support for many more services, because a lot of users rely on this specific functionality. So let's do one thing: let's try this out before we go deeper into the theory. I have two instances here. One is KPLabs-2C; by the name you can tell it is in availability zone 2c. And there is an instance called Enhanced Networking, which is in availability zone 2b.

So let's do one thing. Let me open up the VPC console, select the appropriate VPC, and go to the subnets. There are three subnets here. When you look at the subnet associated with availability zone 2b, where our Enhanced Networking instance is launched, its route table has only one route, the local route. There is no route for a NAT gateway or an internet gateway, so this EC2 instance will have no connectivity to the internet. However, for the EC2 instance in 2c, if I click here, the route table does have internet connectivity because there is an IGW attached.

So from this we know we can connect to the EC2 instance in 2c, but we will not be able to connect directly to Enhanced Networking, because its route table has neither an IGW nor a NAT gateway entry. So let me connect to the KPLabs-2C instance, which we'll use as a jump host to reach the Enhanced Networking instance. Both of them are within the same VPC, so they can communicate with each other. Since I want to reach the Enhanced Networking instance, I'll use the public instance as a proxy. So let's connect to the public instance.

So I am connected here. Perfect. I have a key called mykey.pem which I can use to connect to the EC2 instance that has only the local route, so I'll SSH to its private IP. Perfect, I'm connected to that EC2 instance. Now, if I try to ping google.com, you see I can't reach anywhere. This is because there is no internet connectivity of any sort from this EC2 instance. The same goes for S3: if I run aws s3 ls, it returns no output, because this instance is isolated within the private network. Now, let's go ahead and create a VPC Endpoint.

This is where things become more interesting. I'll go to Endpoints and click on Create Endpoint. You'll see there are a lot of services listed. Endpoints come in two types: Gateway and Interface. We'll cover each when the relevant section comes. For now, remember that the gateway type supports two services: DynamoDB and S3.

We'll start with S3 for the time being. Once you select S3 as the gateway service, select the right VPC, which is kplabs-new in my case. It will then show you the route tables associated with that VPC. So let's do one thing: I'll open up the VPC console, because we need to select the route table associated with the Enhanced Networking instance. Let me go to the subnets; the subnet in 2b has its own associated route table, and I'll note its ID. So I'll select that route table. There is also a policy field, which we'll ignore for now, otherwise it will become confusing.

And let's click on Close. So you have one VPC endpoint which got created. This VPC endpoint will also have to be added to the route table, so let's wait a moment and verify whether the endpoint status is Available. It seems to be. Now, if you go to the Route Tables tab within the VPC endpoint, it says it is associated, but it has not yet modified the route table entries; if you check the route table, it still has just the one local entry. We have to add one more. So I go to Endpoints, click on Route Tables, then Manage Route Tables, select the route table, and click Modify Route Table. Perfect, that modified the route table for us, and now you see there is one new route entry which got added.
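The console steps above (create the endpoint, then associate the route table) collapse into a single CLI call. The sketch below only prints the command rather than executing it, and the VPC ID and route table ID are placeholder values, not the ones from the demo.

```shell
#!/bin/sh
# Placeholder identifiers -- substitute your own VPC and route table.
VPC_ID="vpc-0123456789abcdef0"
RTB_ID="rtb-0123456789abcdef0"
REGION="us-west-2"

# Gateway endpoints are wired in via the route table, so the route
# table IDs are passed at creation time.
CMD="aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Gateway \
  --vpc-id $VPC_ID \
  --service-name com.amazonaws.$REGION.s3 \
  --route-table-ids $RTB_ID \
  --region $REGION"

# Printed as a dry run so the command can be reviewed before running
# it against a real account.
echo "$CMD"
```

Note that the service name encodes the region, which is why a gateway endpoint only ever reaches same-region S3.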

This new route entry's destination is the Amazon S3 prefix list, com.amazonaws.us-west-2.s3. So now let's try it out. Even with no internet connectivity, because the EC2 instance and the S3 bucket are within the same region, I can run the aws s3 ls command with the us-west-2 region, and now you see I am actually able to list the S3 buckets. So let me do aws s3 ls s3://kplabs-billing. One important thing to remember here is that if you just run the command without a region, it will not work; you have to explicitly specify the region, which is us-west-2. I had not set a default region in the AWS credentials file, so the region must be passed explicitly. And now you see I am able to see all the contents of the S3 bucket. So even though there is no internet connectivity of any kind (if I ping google.com, nothing), S3 still works perfectly. I can push data to S3, pull data from S3, and so on. This is a major boost for anyone who needed a private link to an S3 bucket, and uploads and downloads will be much faster, because this time the traffic uses the AWS private network instead of the internet.
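Since the demo failed without an explicit --region flag, it is worth showing how to set a default region once so every call picks it up. The sketch below writes a throwaway config file and points the CLI at it via the AWS_CONFIG_FILE environment variable (a standard AWS CLI mechanism); normally you would just put the same two lines in ~/.aws/config.

```shell
#!/bin/sh
# Write the default region to a temporary config file; in practice the
# same two lines would live in ~/.aws/config.
export AWS_CONFIG_FILE="$(mktemp)"
cat > "$AWS_CONFIG_FILE" <<'EOF'
[default]
region = us-west-2
EOF

# Every aws CLI call in this shell now defaults to us-west-2, so
# `aws s3 ls` works without an explicit --region flag.
grep "region" "$AWS_CONFIG_FILE"
```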

Now, there are a few important pointers related to VPC Endpoints that I want to cover before we conclude this lecture. The first thing to remember is that earlier, for an EC2 instance to access public resources like S3, the traffic had to pass via an internet gateway or a NAT gateway at minimum.

To simplify this, AWS introduced a feature called VPC Endpoints, which provide a highly secure and highly reliable connection with direct connectivity to resources within the same region. Thus, an EC2 instance inside a private VPC can now connect to such services without any need for a NAT gateway or even an internet gateway. And AWS keeps launching connectivity for more resources: earlier only S3 was supported, now DynamoDB is too, and more and more services will be supported by VPC Endpoints. So this is a great feature, and many organizations are now moving to VPC Endpoints because backup and restore of data is much faster.

2. VPC Endpoints – Architectural Perspective

Hey everyone, and welcome back to the KPLabs course. In the earlier lecture we discussed a high-level overview of VPC endpoints and how they actually work. In today's lecture, continuing the same topic, we'll look at VPC endpoints from an architectural point of view, with a simple use case of EC2-to-DynamoDB communication. This first diagram is the "before" scenario, meaning before VPC endpoints were introduced. You have the EC2 instance here, and you have DynamoDB, and both belong to the same region. Before VPC endpoints, if the EC2 instance wanted to communicate with DynamoDB, the traffic would flow to the router, from the router to the internet gateway, from the internet gateway across the internet, and only then reach DynamoDB. So even though both entities are within the same AWS region, the traffic was still traversing the internet.

This led to a lot of latency issues and even security-related challenges, and it is the reason many customers gave feedback asking AWS to allow communication between two services in the same AWS region over an AWS private link. The second diagram shows the "after" scenario, after VPC endpoints were introduced. You have the EC2 instance and DynamoDB in the same region. When the EC2 instance sends traffic to DynamoDB, it reaches the router; the router checks whether the destination is DynamoDB within the same region, and if yes, it sends the traffic to the VPC endpoint, and from the VPC endpoint it travels to DynamoDB. With this approach the traffic never goes through the internet gateway; it stays within the AWS private network. This is a good feature: it reduces the overall latency and it improves the security posture.

Now, before we conclude this lecture, there is one interesting thing I want to show you, because at some point this part might confuse you. So let me show you one thing. This is an EC2 instance which is connected to an S3 gateway VPC endpoint.

It does not have any internet connectivity: if I ping google.com, you see I can't reach anywhere. So let's run aws s3 ls and look at the output. Here I get a lot of S3 buckets that belong to my account. Now, a question might arise, and one of my colleagues recently asked it: he had implemented VPC endpoints, run the same command, and the buckets listed included ones from different regions. Let me show you. If I open the S3 console, I have a lot of S3 buckets: some belong to Singapore, some to Ohio, some to Oregon as well. Currently my EC2 instance is in the North Virginia region. So one question that might arise is: since this EC2 instance is connected to the VPC endpoint, shouldn't aws s3 ls only show the buckets of the us-east-1 region, which is North Virginia? However, look at the bucket test-kplabs, which belongs to the Oregon region: it appears in the listing as well. So this can confuse a lot of people.

So, one important thing to remember is that even though you have a VPC endpoint enabled, when you do an S3 listing it will show you the S3 buckets of all regions. However, whenever you try to establish a connection to an S3 bucket belonging to a different region, it will not work. So let me run aws s3 ls s3://output-kplabs. My VPC endpoint and my EC2 instance belong to the North Virginia region, and if I press Enter here, it gives me the listing of output-kplabs.

Now, why? Let's quickly verify: output-kplabs belongs to the North Virginia region, so the connection works. Now let's try a different bucket, test-kplabs, which belongs to the Oregon region. So: aws s3 ls s3://test-kplabs. And now you see it gives me no output. So even though the listing showed me the Oregon bucket, when I try to establish connectivity to it, I cannot. Connectivity to a bucket in the same region works; for a different region it does not. Only the listing works across regions.

3. Gateway VPC Endpoints – Access Control

Hey everyone and welcome back to the KPLabs course. In today's lecture we will be discussing access control for gateway endpoints. Once you create a gateway-type endpoint, at some point you might want to restrict access based on certain conditions, and this is possible with the access policies that gateway endpoints provide. We can set up access control on a gateway VPC endpoint with a JSON policy document, very similar to the IAM policies that we write: you create a policy, and based on that policy you can restrict access. So let's look at what this might look like. I have a gateway endpoint already created for S3. Within this endpoint, if you go to the Policy tab, you see that by default it has an Allow All policy. This basically means that any EC2 instance, provided its IAM role is properly configured, has no restrictions as far as connections through the gateway endpoint are concerned. So let's quickly verify.

I have a bucket called output-kplabs in North Virginia, the same region as my EC2 instance. So let's quickly verify: I'll run aws s3 ls s3://output-kplabs, and I'll also provide the region. Currently my EC2 instance has an IAM role of Allow All, so it has no restrictions on viewing any S3 bucket. If I press Enter, you see I am able to list the contents of the bucket. So the first layer of access control is the IAM role, and you can additionally set up access control on the VPC endpoint itself. Let's see what that looks like. Within the endpoint, if I click on Edit Policy, you can write your own custom policy. I have a simple custom policy written here: this is the full Allow policy we had, and this is the custom policy scoped by bucket name. It will restrict access based on the bucket name, and only one bucket is allowed here, which is output-kplabs.

That means the gateway VPC endpoint will only allow these two operations, GetObject and PutObject, and only on that one bucket. Any operation sent through the VPC endpoint to a bucket other than output-kplabs will be denied. Before we apply this, let's find one more bucket in the same North Virginia region as our EC2 instance, which is kplabs-failover. Let's quickly verify that we are able to view the contents of kplabs-failover before we apply the policy. Perfect, I am able to see the contents of kplabs-failover as well. Now let's copy the bucket ARN and paste it into the resource section of the policy so it can be used. Let's quickly verify: it has GetObject, it has PutObject, and it only allows the bucket output-kplabs. So let's go ahead and click Save. From now on, this VPC endpoint will only allow the GetObject and PutObject operations, on a single bucket only; it will not allow any other operation to be performed via the VPC endpoint.
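The custom policy applied above looks roughly like the document below. The exact statement from the demo is not shown on screen, so this is a reconstruction: endpoint policies use a Principal of *, and the bucket ARN matches the output-kplabs name from the demo. The last line just validates the JSON locally before you paste it into the endpoint's policy editor.

```shell
#!/bin/sh
# Reconstructed endpoint policy: allow only GetObject and PutObject,
# and only on objects in the output-kplabs bucket.
cat > endpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::output-kplabs/*"
    }
  ]
}
EOF

# Sanity-check the JSON before pasting it into the console.
python3 -m json.tool endpoint-policy.json > /dev/null && echo "policy is valid JSON"
```

Remember this policy sits in front of IAM: a request must pass both the instance's IAM role and the endpoint policy.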

So now let's quickly verify with the last command, and you see it shows Access Denied. At times this can be a real challenge during troubleshooting, because when you get this error you will typically look at the IAM role; if the IAM role has administrator access and you are still getting permission denied, and you are not aware of VPC endpoint policies, you might spend a few hours before finding the actual cause. Now let's try output-kplabs, and here too it shows permission denied. The reason is that only two actions are allowed, GetObject and PutObject; listing is not among them.

So what we'll do is edit the policy and add one more action, for listing. I'll put s3:List* so that it covers the list operations, and click Save. Perfect. Now let's run the listing on output-kplabs: it seems to be working. Just to verify, we'll again run a listing on the kplabs-failover bucket, and you see it still shows permission denied. So this is what VPC endpoint policies are all about, and this is how access restrictions on a gateway VPC endpoint can be done.
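With the extra list action added, the policy would look something like the sketch below. One subtlety worth flagging (my addition, not shown in the demo): list operations such as s3:ListBucket apply to the bucket ARN itself, not to the /* object ARN, so the resource list needs both forms for the listing to succeed.

```shell
#!/bin/sh
# Updated reconstruction: s3:List* added, and the bare bucket ARN
# included alongside the object ARN because ListBucket targets the
# bucket itself rather than individual objects.
cat > endpoint-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:List*"],
      "Resource": [
        "arn:aws:s3:::output-kplabs",
        "arn:aws:s3:::output-kplabs/*"
      ]
    }
  ]
}
EOF

python3 -m json.tool endpoint-policy.json > /dev/null && echo "updated policy is valid JSON"
```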

4. Understanding Interface VPC Endpoints

Hey everyone and welcome back to the KPLabs course. In today's lecture we will be discussing interface VPC endpoints. Let me show you what I mean before we proceed. Whenever you go to Create Endpoint, there are two endpoint types here: Gateway and Interface. In today's lecture we'll speak about the interface type. In earlier sessions we already discussed in great detail what gateway endpoints are all about, so let's go ahead and discuss interface VPC endpoints. First, a quick revision of gateway VPC endpoints: in the gateway approach, the VPC endpoint is actually created outside the VPC, where you do not really have much control.

It is created on the Amazon side, and what you have to do is modify the route table so that traffic flows to the VPC endpoint via a route entry. The same is depicted in this diagram: you have the EC2 instance, and you can see the VPC endpoint is not inside the VPC, which is why you don't have granular control. And because gateway endpoints work through the route table, it is not possible to use them via VPNs or Direct Connect connections. Let's assume you have a site-to-site tunnel between AWS and your data center, and you want to establish a private link from the data center through a gateway-based VPC endpoint.

This is not possible, because those route entries are available only to instances inside the VPC; you cannot extend your network when it comes to gateway VPC endpoints. We'll understand this better when we discuss interface endpoints. Also, access control was restricted to an IAM-like JSON document: we created an access policy on the gateway endpoint, and that was how access control worked, which was not very granular either. To solve these disadvantages, Amazon decided to launch next-generation endpoints, and these are called interface endpoints.

You can think of interface endpoints as version two, and they have a lot of advantages. One of the great advantages is that interface VPC endpoints are created within the VPC. As you can see here, the interface endpoint is not created outside like the gateway type; it is created inside the VPC, within a subnet that you define. They have an elastic network interface (ENI), which means they have a private IP associated, and access control is done through security groups instead of access policies. And since they have this ENI with a private IP, even servers within your data center, if there is a tunnel established, can directly make calls to this ENI, and the traffic is served via the private link. So let's look at how that might work. I'm connected to my EC2 instance in the private subnet, which does not have any internet connectivity; let's quickly verify, and you see internet connectivity is not there. Now, if I run aws ec2 describe-instances and press Enter, you see I am able to run this command successfully.

Now, why is that? Let me go back to Endpoints. I have two endpoints available here: one is the gateway endpoint type for S3, and the second is the interface endpoint type for EC2. Note that the gateway type is only for S3 and DynamoDB; for the newer services AWS launches endpoint support for, most are based on the interface endpoint type. So for EC2 I have created an interface endpoint, and as you can see, it is associated with a subnet and has private IP addresses. It also has a security group, instead of the access control policy we used to work with on the gateway endpoint type. Let's quickly verify: if I go to EC2, I'll show you exactly where the ENI for the interface endpoint is created. If you go to Network Interfaces, you will see this is the VPC endpoint's ENI, and the private IP address attached to this ENI matches the one shown on the endpoint.
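For contrast with the gateway flow shown earlier, an interface endpoint is created against subnets and security groups instead of route tables. As before, the sketch only prints the command, and all the IDs below are placeholders rather than the demo's real ones.

```shell
#!/bin/sh
# Placeholder identifiers -- substitute your own VPC, subnet, and
# security group.
VPC_ID="vpc-0123456789abcdef0"
SUBNET_ID="subnet-0123456789abcdef0"
SG_ID="sg-0123456789abcdef0"

# Interface endpoints get an ENI in the subnet you pass, and access
# control is done with the security group rather than a route table.
CMD="aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id $VPC_ID \
  --service-name com.amazonaws.us-east-1.ec2 \
  --subnet-ids $SUBNET_ID \
  --security-group-ids $SG_ID"

# Printed as a dry run for review.
echo "$CMD"
```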

So now what happens is that from all the EC2 instances, whenever someone runs aws ec2 describe-instances or a similar API call, that call will automatically be sent to this private IP address. Let's try that out and see whether it actually holds true. We'll run a tcpdump on destination port 443, since we have already seen that whenever you make an API call, whether to the S3 service or the EC2 service, it goes to an AWS endpoint over HTTPS.
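The capture from the demo can be reproduced with the two commands below, run in separate terminals. They are printed rather than executed here because tcpdump needs root and a live interface; the ENI IP is a placeholder for the private IP of your endpoint's network interface.

```shell
#!/bin/sh
# Placeholder: the private IP of the interface endpoint's ENI.
ENI_IP="10.0.2.100"

# Terminal 1: watch outbound HTTPS traffic to the endpoint ENI.
CAPTURE="sudo tcpdump -n dst host $ENI_IP and dst port 443"
# Terminal 2: trigger an EC2 API call; it should show up in the capture.
CALL="aws ec2 describe-instances --region us-east-1"

printf '%s\n' "$CAPTURE" "$CALL"
```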

So now let's start the tcpdump. When I run aws ec2 describe-instances, here you see the source is the IP address of my EC2 server, and the call is being sent to the private IP of the ENI, which is precisely the one belonging to the endpoint we created. So all the AWS EC2 API calls are sent to the interface endpoint. This is what the interface endpoint is all about, and I hope this high-level overview helped you understand the difference between a gateway endpoint and an interface endpoint. That's it for this lecture. If you have any doubts at any point, feel free to connect via Twitter, Facebook or LinkedIn, or mail the instructor. Thanks for watching.

