Amazon AWS Certified Advanced Networking Specialty – AWS PrivateLink Architecture Part 2
5. Implementing Interface Endpoints
Hey everyone and welcome back to the KPLabs course. In today's lecture we'll look at interface endpoints from a practical point of view. I have two EC2 instances here. The first one is KP Lab One, and it has a public IP associated with it so that I can log in. The second is the interface-endpoint EC2 instance, which does not have a public IP, so I will not be able to log in to it directly. That is why I'll first log in to KP Lab One and from there I'll log in to the interface-endpoint EC2 instance, because that instance does not have any internet connectivity.
However, connectivity between these two instances is present because they are in the same VPC. Perfect. Before we begin, I'll just show you the route table associated with the interface-endpoint instance. Its subnet ID ends with 83c, so let's quickly filter by VPC, and if we go into the subnets, there is the 83c subnet. If you look at its route table, it does not have an internet gateway or a NAT gateway, so there is no way this instance can communicate with the internet.
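Just as an illustration (the subnet ID below is a placeholder), you could confirm the same thing from the CLI with something like this:

# Show the route table associated with the instance's subnet; there should be
# only the local VPC route, no 0.0.0.0/0 via an internet gateway or NAT gateway
aws ec2 describe-route-tables \
    --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0 \
    --query 'RouteTables[].Routes'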
I'm already logged in to the instance, so if you try to reach google.com, it will not work. Now, many organizations do not want any internet connectivity at all for such instances. The problem is that if they remove the internet gateway or the NAT gateway, even the AWS API calls stop working, and that is a big pain, because you often need those API calls to be available. As a result, organizations are forced to keep some kind of internet or NAT gateway around. To solve this, AWS launched VPC endpoint services. So let's go ahead and deploy our interface endpoint.
Go to Endpoints and this time click on Create Endpoint. Again, there are gateway and interface endpoints; we'll select the interface endpoint associated with EC2. Now, the subnet where we want the interface endpoint to be launched should be the 83c one associated with our EC2 instance. Basically, if you include more subnets, it will create an elastic network interface in each of those subnets.
For the time being, I will just select the one subnet in us-east-1c. Next is the security group: since this creates an ENI, there is also a security group you can attach. I'll just leave it as the default for now and click on Create Endpoint. The endpoint status is now pending; it takes a little time for the endpoint to come up, so let's wait a minute or two for the status to change. Perfect, our interface endpoint is up and the status is available. Now, as we've already discussed, interface endpoints create an elastic network interface.
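For reference, a rough CLI equivalent of what we just did in the console (all IDs are placeholders) would look like this:

# Create an interface endpoint for the EC2 API in one subnet,
# with a security group attached to the resulting ENI
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.ec2 \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0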
Just to quickly verify that the ENI is created, let's go to the EC2 console, open Network Interfaces, and in the first column you'll see an elastic network interface created for the VPC endpoint; the description says VPC Endpoint. Now, since this is an ENI-based approach, you can directly associate a security group with it. Currently it is using the default security group. Let's select the ENI, and within the ENI you should see the associated security group. I'll click on it and verify that inbound connectivity is allowed. As far as the connectivity aspect is concerned, things seem to be working perfectly. Great, so now we have everything set.
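If you prefer to check the same thing from the CLI, a quick sketch (the description filter value is approximate) would be:

# List ENIs created for VPC endpoints; the description typically starts with "VPC Endpoint Interface"
aws ec2 describe-network-interfaces \
    --filters Name=description,Values="VPC Endpoint Interface*" \
    --query 'NetworkInterfaces[].[NetworkInterfaceId,PrivateIpAddress,Groups]'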
And surprise, we don't have to modify the route table at all. The question is why? If you look here, the service name for the EC2 API calls is com.amazonaws.us-east-1.ec2. What Amazon does is change the DNS resolution for this service to the IP address of the elastic network interface. Let me show you. The IP address associated with the elastic network interface is 172.31.27.63. The service name is also the DNS name of the service, and within the VPC Amazon will now resolve it to the elastic network interface IP. So let's do an nslookup, and you will see that this is the IP address which is automatically returned for the endpoint. Now whenever you run aws ec2 describe-instances, all the API calls will go to the endpoint we just created.
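As a quick illustration of that DNS behaviour, assuming private DNS is enabled for the endpoint (the resolved address is whatever private IP your ENI was given; 172.31.27.63 is just the one from this demo):

# From an instance inside the VPC, the EC2 regional API name
# now resolves to the endpoint ENI's private IP
nslookup ec2.us-east-1.amazonaws.com
#   Name:    ec2.us-east-1.amazonaws.com
#   Address: 172.31.27.63   <-- private IP of the interface endpoint ENI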
This is a good approach because you don't have to worry about managing the route table. Let's quickly verify. I'll run tcpdump filtering on TCP destination port 443. Great. And when you run aws ec2 describe-instances, it works perfectly, and if you look at the capture, the calls are going to the IP 172.31.27.63, which happens to be the interface endpoint's IP. So interface endpoints not only keep things simple because we don't have to modify the route table; they are also simpler because instead of dealing with JSON-based access control lists, we are dealing with security groups.
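A rough sketch of that verification, run in two terminals on the instance:

# Terminal 1: capture outbound HTTPS traffic
sudo tcpdump -n 'tcp dst port 443'

# Terminal 2: make an EC2 API call; the capture should show the
# traffic going to the endpoint ENI's private IP (172.31.27.63 in this demo)
aws ec2 describe-instances --region us-east-1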
One more thing: since this interface has a private IP, if you have a Direct Connect connection or a virtual private network, even your on-premises servers will be able to make calls to this specific IP and get results. So that is it about the deployment of interface endpoints. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
6. Understanding VPC Endpoint Services
Hey everyone and welcome back to the KPLabs course. In today's lecture we'll be discussing VPC endpoint services. This is one of the interesting topics; in fact, it is one of my favorite features of VPC endpoints, and as a solutions architect, learning this particular topic is going to be essential. So let's look at what VPC endpoint services are all about. We have already discussed AWS PrivateLink. PrivateLink is quite good; however, AWS offers only a limited set of services over it: gateway endpoints cover S3 and DynamoDB, while interface endpoints cover services like EC2, ELB, and Kinesis. So this is quite good.
But the thing is, it is still limited to AWS services only. In most enterprise and corporate organizations there are various third-party tools, such as Splunk or Datadog, to which you upload data. So let's take Datadog as an example. I'm sure you know Datadog; if you do not, I'll just show you what it is all about. Datadog, generally speaking, gives you various metrics.
You have the Datadog agent on your server; the agent collects various metrics and sends them to the central server. However, the problem is that even if the Datadog servers are in the Singapore region and your infrastructure is also within the Singapore region, all the metrics that you send will still travel over the internet. That is the problem. What customers wanted is a private link between a customer and a specific service provider, and this is what VPC endpoint services are all about. Let's understand this with a use case. There are many service providers, like Datadog and New Relic, to which we need to upload various server metrics. So you have your VPC here; let's assume it is in the Singapore region, and you have one more VPC in the Singapore region belonging to the service provider, which can be Datadog, New Relic, et cetera. However, the metrics, the logs, whatever you upload, will still pass through the internet.
This is where customers wanted an improvement: since both VPCs are within the same region, it would be great if the data traffic did not cross the internet and instead passed through the AWS PrivateLink feature. Now, you might ask why you can't just use VPC peering. VPC peering is good, but these service providers have thousands of customers, and managing a VPC peering for such a big client list is a real pain. Along with that, one of the problems with VPC peering is addressing: the network address range of the service provider must not conflict with the range of the consumer. So if the service provider is running in the range of 10.0.0.0/16 and the consumer is running on the same range, the VPC peering cannot be established; even a site-to-site tunnel would not work.
Those were some of the challenges, and the solution AWS came up with is VPC endpoint services. What happens here is that on the right-hand side you have the service provider. The service provider puts all of its servers behind a network load balancer, and this network load balancer is connected to a VPC endpoint service. On the client side, the client creates a new VPC endpoint, and that VPC endpoint is connected to the endpoint service fronting the network load balancer. These two are within the same region. So now the VPC endpoint is connected to the network load balancer of the service provider, and all the EC2 instances within the client VPC that have to upload their logs and metrics can send them to this VPC endpoint interface. This is a private IP, and it will in turn forward the traffic to the network load balancer, which will in turn send it to the EC2 instances of the service provider.
So if you have some kind of agent, maybe New Relic or Datadog, you can configure the agent to send data to the VPC endpoint, and the endpoint service will automatically forward it to the network load balancer. In this kind of scenario the internet is skipped entirely; that is the first big advantage. The second big advantage is that you are using AWS PrivateLink, which is much faster, and the third big advantage is that you save a lot of cost. Again, this is all theoretical, so let me give you a demo of how this would really look. First let's look at the right-hand side, which is the service provider.
The service provider has to have a load balancer of the network type. I'll show you: I have created a network load balancer, and within it I have a target group of two instances. So these are the two instances. It is very similar to the diagram: I have a network load balancer and multiple instances connected to it. Don't worry if you are not yet familiar with network load balancers; they are covered in the relevant section, where we will discuss them in great detail, so just follow along for the time being. This is the architecture as of now. If I open up the network load balancer, you can see it seems to be working properly. Now you have the network load balancer and you have the instances, and you have to create a VPC endpoint service out of them. How will you do that? I'll show you. We'll just do a high-level demo here, and in the relevant section we'll actually do the entire practical. I have already registered my network load balancer with the endpoint services, so that part is done. Now, on the consumer side, the consumer has to create a VPC endpoint, and that VPC endpoint will be connected to the endpoint service created by the service provider. So here I have created the service.
Now if I go to Endpoints, let's assume this is the consumer side. Whenever you click on Create Endpoint there are three ways you can do it: one is the AWS services, so these are all the AWS services; the second is to find a service by name, and this is what we actually use. Let me show you. Each service which is registered has its own service name. I'll copy it, and this service name is what you give to the consumer. The consumer then has to create an endpoint with that service name. If you click on Verify, you see the service name has been found, and if you now create this endpoint, it will be configured and connected to this particular service. Perfect. Once it is connected to this particular service, within the endpoint connections the service provider will have to manually accept it. So when a client connects to this load balancer service, you have to accept that specific connection (we'll look into this once we do the practical), and once you accept it, the endpoint will be created.
So I have one endpoint here which is connected to the endpoint service where this network load balancer is registered; this endpoint is already connected. Now I'll show you exactly how it works. This particular endpoint has an IP address associated with it, and each endpoint gets its own, different IP. So as a consumer, when I do a curl on the IP address of the endpoint, the VPC endpoint will forward the traffic to the network load balancer.
The network load balancer will in turn forward the traffic to the back-end EC2 instances, and on those back-end instances I have NGINX running. So when I do a curl on the VPC endpoint, it should return the NGINX page. You see, this is the default NGINX page it has given me, served by the HTTP server running there, and this is what VPC endpoint services are all about. This feature is now actually being utilized by a lot of services like Dynatrace. If you are using Dynatrace, you don't have to upload your metrics over the internet: you get the service name from Dynatrace, you create a VPC endpoint with that service name, Dynatrace accepts the connection, and then you are able to forward the traffic to their network load balancers. You also have Cisco Stealthwatch, which works on a similar model. In the future there will be many more service providers offering VPC endpoint services that let you upload your metrics and logs over the endpoint service.
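As a rough illustration of that consumer-side test (the private IP below is just a placeholder for whatever address your endpoint's ENI was given):

# On an EC2 instance in the consumer VPC; 10.0.1.25 is a placeholder
# for the private IP of the interface endpoint's ENI
curl http://10.0.1.25/
# Expected: the default NGINX welcome page served by the provider's
# back-end instances, reached via the NLB over PrivateLink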
7. Implementing end to end VPC Endpoint service
Hey everyone and welcome back to the KPLabs course. In today's lecture we'll look into the implementation part of VPC endpoint services. In the architecture diagram we saw that there are two phases: one from the service provider side and the second from the consumer side. We'll look into both of them. The first thing we'll do is configure the service provider side. On the service provider side, as the architecture diagram shows, you need a network load balancer, and there should be instances connected to that network load balancer. So let's do these two things first before we move ahead. I have two instances, KP Lab One and KP Lab Two, and both of these instances have a simple NGINX web server running. If I open the first instance's IP I see the web server, and if I open the second IP I also see the web server. It's simple: yum install nginx and service nginx start, just two commands, and this should be up for you. Now you have the two instances ready.
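As a rough sketch, assuming an Amazon Linux style AMI (newer Amazon Linux 2 AMIs may need amazon-linux-extras instead of plain yum), the web server setup is roughly:

# Install and start NGINX on each back-end instance
sudo yum install -y nginx          # on Amazon Linux 2: sudo amazon-linux-extras install -y nginx1
sudo service nginx start
curl -s http://localhost/ | head   # quick local check that the default page is served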
You have the instances ready, so now it's time to create the network load balancer and connect it to the instances. Don't worry if you do not know much about network load balancers; we'll discuss them in great detail in the relevant sections. Go to Create Load Balancer; it will be of the network type. I'll name it kplabs-endpoints. Select the availability zone; I'll just select 1a for the time being. You do have the option to select an elastic IP, which we'll discuss again in the network load balancer lecture. For the target group, let's create a new one; I'll name it endpoint-service, and the target type will be instance. I'll go to Register Targets, select both of the running instances, and click on Add to Registered. I'll click Next, review, and Create. Perfect. Let me just filter it out; this is our network load balancer being provisioned, so we now have the network load balancer configured.
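For reference, a roughly equivalent AWS CLI sketch (the subnet, VPC, and instance IDs, as well as the ARNs, are placeholders) would look like this:

# Create the network load balancer
aws elbv2 create-load-balancer --name kplabs-endpoints --type network --subnets subnet-11111111

# Create a TCP target group for the back-end instances and register them
aws elbv2 create-target-group --name endpoint-service --protocol TCP --port 80 --target-type instance --vpc-id vpc-11111111
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0aaaaaaa Id=i-0bbbbbbb

# Forward TCP 80 on the NLB to the target group
aws elbv2 create-listener --load-balancer-arn <nlb-arn> --protocol TCP --port 80 --default-actions Type=forward,TargetGroupArn=<target-group-arn>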
We have the EC2 instances attached to the network load balancer. Now what we have to do is register this entire setup as a VPC endpoint service, so we'll have to create a new endpoint service. To do that, I'll go to the VPC console and open Endpoint Services; I already have one service which is up. Within Endpoint Services I'll click on Create Endpoint Service, so this is a new endpoint service.
Within this, it will give you the list of network-type load balancers, and you select the network load balancer you have created, so the VPC endpoint service will be connected to that network load balancer. And if you see here, there is the option Require acceptance for endpoint, and I'll keep acceptance required. What this basically means is that once I create the endpoint service and give the service name to the consumer, the consumer will connect their VPC endpoint using that service name; once they connect, the service provider has to accept the connection, and only then does the connectivity get established.
I'll click on Create Service. Perfect. This is the endpoint service that got configured; its ID ends with 5c4, so at least we'll remember it. Now, within Endpoint Connections, you'll see there are no endpoint connections, which means none of the clients have connected to the service I have created. Perfect. The next step is what you would do from the client side. This would normally be a separate client AWS account, but since most of us have a single AWS account, we'll do the same thing from a single account.
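As a rough CLI equivalent of the service provider side (the NLB ARN is a placeholder), it would look something like this:

# Register the network load balancer as a VPC endpoint service,
# with manual acceptance of incoming endpoint connections
aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns <nlb-arn> \
    --acceptance-required

# The returned ServiceName (com.amazonaws.vpce.us-east-1.vpce-svc-xxxx)
# is what you hand over to the consumer
aws ec2 describe-vpc-endpoint-service-configurations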
So now we'll go to Endpoints; ideally the client would do this from their own AWS account. Just remember that now we are doing the left-hand-side flow; the right-hand-side flow is done. We'll go to Endpoints, click on Create Endpoint, and choose Find service by name. Once we created the network load balancer and the EC2 instances, we created a VPC endpoint service out of that block, and this endpoint service has a name. You copy that name, paste it on the endpoint side, and click on Verify; you see the service has been found. Once you do this, you click on Create Endpoint. Again, this will have a security group associated with it; I'll just select a launch-wizard group which has everything allowed, and I'll click on Create Endpoint. So the endpoint has been created; however, you see this particular endpoint says Pending acceptance, so I'll just filter on it so that we do not get confused. What happened is that I created this VPC endpoint and fed in the service name of the service provider, so the connection has been partially established, but the service provider has to accept the connection before the network connectivity can be created. So I'll go to Endpoint Services, open Endpoint Connections, and you see I have received a new connection request. This is the new connection.
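Before we accept it, here is roughly the consumer-side CLI equivalent of what we just did (all IDs and the service name are placeholders):

# Create an interface endpoint pointing at the provider's endpoint service
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-22222222 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
    --subnet-ids subnet-22222222 \
    --security-group-ids sg-22222222
# The endpoint initially stays in the pendingAcceptance state until the provider accepts it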
I'll click on Actions and select Accept endpoint connection request. This takes a little while, sometimes two to three minutes, for the state to change; it is still pending, so let's just wait for a moment. Perfect. Now the status has changed from pending to available, so from the service provider side the connection has been accepted, and the connectivity from the VPC endpoint to the VPC endpoint service has been created. If I go to the VPC endpoint console, this is the endpoint console which you will see on the client side. Let's go to the subnets, and this is the IP address associated with the endpoint. This IP address that you see associated with the VPC endpoint is the one we will use. Any request that we make to this endpoint will automatically go to the network load balancer on the service provider side, then to the EC2 instances running there, and you'll get the relevant output.
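On the provider side, a rough CLI equivalent of this acceptance step (IDs are placeholders) would be:

# Accept the pending connection from the consumer's endpoint
aws ec2 accept-vpc-endpoint-connections \
    --service-id vpce-svc-0123456789abcdef0 \
    --vpc-endpoint-ids vpce-0fedcba9876543210

# Confirm the connection is now in the 'available' state
aws ec2 describe-vpc-endpoint-connections \
    --filters Name=service-id,Values=vpce-svc-0123456789abcdef0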
So let's try it out. We know there is an NGINX server running behind the load balancer, so if you do a curl on the IP address of the VPC endpoint, you should actually get the output: this is the NGINX web server page that you saw in the browser. And this is how you create the end-to-end connectivity. As we already discussed, there are a lot of service providers, and in the future there will be more offering their service through VPC endpoint services. All you have to do is create the VPC endpoint, connect it using the service provider's service name, and then you will be able to send all of your logs to the VPC endpoint's private IP address. This is how you establish the end-to-end connectivity. I'm sure this is a very interesting feature, and I'm confident it is something you will be implementing at some stage; or, if you are watching this video in mid-2018, it is something you can propose to your organization. I have actually proposed it to many of the organizations I consult with, and they are all in the process of implementing endpoint services. That is it about this lecture; I hope this has been informative for you.