AZ-304 Microsoft Azure Architect Design – Design a Networking Strategy Part 2

January 17, 2023

5. Azure Private DNS

Now the private DNS zone, on the other hand, is actually pretty interesting. It is a DNS service that is only accessible to your Azure resources. So instead of a domain name that you register with a registrar and that anyone in the world can resolve, what we're doing with private DNS is effectively giving names to private IP addresses. Let's say you've got two back-end servers: one is a database server, the other is an application server. Neither of them is available on the public Internet.

You do not want traffic traveling from the public Internet to your database server, but you also don't necessarily want to hard-code the database server's IP address. You could move the server at any time, or upgrade it at some point in the future, and that private IP address could go away. So instead you create a private DNS zone and give your database server a label; let's call it database.local. Then you can use that label from any of your virtual machines and never have to hard-code the private IP address anywhere within your network.
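To make that concrete, here is a minimal sketch (not from the lesson; the record name and port are assumptions) of what the application side looks like once the label exists: it resolves the name at connect time, so the private IP can change without touching any code. It would need to run from a VM on a VNet linked to the zone.

```python
# Minimal sketch: resolve the private DNS label instead of hard-coding the IP.
# "database.local" and port 5432 are assumed names, not taken from the lesson.
import socket

DB_HOST = "database.local"  # record in the private DNS zone
DB_PORT = 5432              # e.g. a PostgreSQL back end

# Azure's VNet-provided DNS answers this query with the current private IP.
private_ip = socket.getaddrinfo(DB_HOST, DB_PORT)[0][4][0]
print(f"{DB_HOST} currently resolves to {private_ip}")

# The application then connects by name, never by hard-coded address.
with socket.create_connection((DB_HOST, DB_PORT), timeout=5) as conn:
    print("connected to", conn.getpeername())
```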

It's showing me here that by calling something .local there are some historical multicast DNS uses it might interfere with, but this is pretty much the standard. So you could have development.local, production.local, staging.local. We can put this in Central US and create a private DNS zone; call it myserver.local. Then you can have database.myserver.local, production.myserver.local, and so on: one DNS zone that is set up to handle all your internal private DNS needs. That's actually pretty cool. What you end up doing is linking virtual networks to your private DNS zone, and the virtual network takes care of resolving the name to a private IP address. Again, it's not publicly accessible. This does not leave Azure, it does not leave your subscription; it's private, just for your resources, and I think that's pretty cool.
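For reference, the same zone, record, and virtual network link can be created outside the portal. This is a hedged sketch using the azure-mgmt-privatedns Python SDK; the resource group, VNet, and IP address are placeholders I've assumed, and parameter names may vary slightly between SDK versions, so treat it as a sketch rather than a drop-in script.

```python
# Sketch: create a private DNS zone, an A record, and a VNet link.
# Assumes azure-identity and azure-mgmt-privatedns are installed and that
# "rg-network" / "vnet-spoke1" already exist (placeholder names).
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

subscription_id = "<subscription-id>"
client = PrivateDnsManagementClient(DefaultAzureCredential(), subscription_id)

# 1. The zone itself; private DNS zones always use the "Global" location.
client.private_zones.begin_create_or_update(
    resource_group_name="rg-network",
    private_zone_name="myserver.local",
    parameters={"location": "Global"},
).result()

# 2. An A record so database.myserver.local resolves to the private IP.
client.record_sets.create_or_update(
    resource_group_name="rg-network",
    private_zone_name="myserver.local",
    record_type="A",
    relative_record_set_name="database",
    parameters={"ttl": 300, "a_records": [{"ipv4_address": "10.1.0.4"}]},
)

# 3. Link the spoke virtual network so VMs on it can resolve the zone.
vnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke1"
)
client.virtual_network_links.begin_create_or_update(
    resource_group_name="rg-network",
    private_zone_name="myserver.local",
    virtual_network_link_name="spoke1-link",
    parameters={
        "location": "Global",
        "virtual_network": {"id": vnet_id},
        "registration_enabled": False,
    },
).result()
```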

6. Private Endpoints

So in this video we're going to be talking about private endpoints, and that's going to lead us to Private Link as well. Now, for many services in Azure, when you go to create the service and you go to the networking tab, you have the choice between public endpoints and private endpoints. So far we've typically chosen public endpoints. What that results in, in the case of a storage account, is a public REST API endpoint that anyone can reach. The service itself is running publicly. People can reach it, but they don't have the access keys, so they won't be able to successfully get inside your storage account.

So it's like the door is there, but the door is locked. Now, the other thing you can do, of course, is tie your public endpoint to a virtual network; in this case, let's tie it to one of the spokes. What you end up doing is allowing the traffic to travel over your network, and then you can protect that traffic. You're basically putting yourself in a position where you can have firewalls and network security groups, the usual virtual networking protections. It's still a public endpoint, but you can have the firewall or the route table and things like that standing in between. The final selection is the private endpoint, which is what I wanted to talk to you about in this video.
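Before we get to private endpoints, here is roughly what that "tie it to a virtual network" option amounts to in code: a hedged sketch using the azure-mgmt-storage SDK, with assumed resource names, that locks the public endpoint down to one subnet. The subnet itself would also need the Microsoft.Storage service endpoint enabled.

```python
# Sketch: restrict a storage account's public endpoint to one VNet subnet,
# roughly what the "selected networks" choice in the portal does.
# Resource names are placeholders; the subnet needs the Microsoft.Storage
# service endpoint enabled for the rule to take effect.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

sub_id = "<subscription-id>"
storage = StorageManagementClient(DefaultAzureCredential(), sub_id)

subnet_id = (
    f"/subscriptions/{sub_id}/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke1/subnets/default"
)

storage.storage_accounts.update(
    "rg-storage",
    "mystorageacct",
    {
        "network_rule_set": {
            "default_action": "Deny",  # block all other public traffic
            "virtual_network_rules": [
                {"virtual_network_resource_id": subnet_id}
            ],
        }
    },
)
```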

Now, the private endpoint basically means there's no door. There is no way for anyone outside of Azure to connect to your storage account; not even you could connect to it except through this private endpoint, unless you have another way of connecting into Azure, for instance a VPN or something. The private endpoint is basically just a network device, much like a network interface card, that lets Microsoft's backbone network connect to the storage instead of the public network. So in order to do this we have to create what's called a private endpoint. As we saw, we create a storage account in the normal way, we choose private endpoint as the connectivity method, and we get down here to where it says private endpoint and we can say add. Now, the private endpoint is its own thing; like I said, it's best thought of as a network device or a network interface card. I'm going to place this private endpoint in my test resource group here and give it a name.

Now, the target sub-resource is the Blob service. I could instead say I want to access a table, a queue, the static website, et cetera, but let's make the Blob service the thing it's trying to access. The other question is what we want to attach this to. We're not going to attach it to the hub virtual network; we want to attach it to one of the spokes.

So, attaching it to the spoke: it's basically going to attach to the only subnet that belongs to that spoke VNet. Notice the warning here that says if you have an NSG enabled for the subnet, it will be disabled for private endpoints on this subnet only. So as soon as you attach a private endpoint, your network security group, which is perhaps protecting your network's incoming and outgoing traffic, doesn't apply; effectively, the NSG does not apply to this private endpoint. But remember, you're opening a door onto Microsoft's internal network, not onto the public network. This also ties into the private DNS zone: we're going to need to create a private DNS zone or already have one, so I'm going to say yes, allow it to create the private DNS zone. It's going to attempt to create the zone using this name, which is the standard Microsoft-provided private DNS zone for Blob storage, so that's okay. All right, I'm going to say okay. Now notice that I'm allowing Microsoft network routing because we're using the private endpoint.
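For reference, here is a hedged sketch, with assumed names, of roughly what the portal is assembling at this point, using the azure-mgmt-network SDK: a private endpoint in the spoke subnet whose connection targets the storage account's Blob sub-resource.

```python
# Sketch: create a private endpoint for a storage account's Blob service.
# Resource names and region are placeholders. The portal's "integrate with
# private DNS zone" step is a separate private DNS zone group operation,
# omitted here.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub_id = "<subscription-id>"
network = NetworkManagementClient(DefaultAzureCredential(), sub_id)

subnet_id = (
    f"/subscriptions/{sub_id}/resourceGroups/rg-network"
    "/providers/Microsoft.Network/virtualNetworks/vnet-spoke1/subnets/default"
)
storage_id = (
    f"/subscriptions/{sub_id}/resourceGroups/rg-storage"
    "/providers/Microsoft.Storage/storageAccounts/mystorageacct"
)

network.private_endpoints.begin_create_or_update(
    "rg-network",
    "pe-storage-blob",
    {
        "location": "centralus",
        "subnet": {"id": subnet_id},
        "private_link_service_connections": [
            {
                "name": "blob-connection",
                "private_link_service_id": storage_id,
                # "table", "queue", "file", "web", etc. are other sub-resources
                "group_ids": ["blob"],
            }
        ],
    },
).result()
```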

Microsoft network routing is the preferred way here. So I'm going to skip right to the end, and if I click create, I'm basically creating a storage account that can only be accessed from one very specific network. We linked it to one of the spokes of our hub-and-spoke demo, and it's not going to be accessible any other way; there won't even be a public URL for reaching the storage account. Now, storage accounts are not the only service that supports this. I've been showing you a storage account as a demo, but many different services support it. Let me pull in the Microsoft documentation here, and we can see that anything behind a load balancer qualifies.

So that could be virtual machines that have public connectivity turned off, exposed through what's called an Azure Private Link service, which we'll look at in a second. I just demonstrated Azure Blob storage; there's also Queue storage, SQL Database, Synapse Analytics, Cosmos DB, and other database services such as MySQL and PostgreSQL. You can put your key vault behind such a private link; Kubernetes, of course; Container Registry, if you don't want your images to be accessible or even discoverable on the Internet; Service Bus; Relay; even Web Apps. For years people have been asking for a way to have web apps that are not publicly accessible, and this private endpoint is one way.

Notice, though, that you have to be on a Premium V2 plan, effectively a premium plan for web apps, in order to have this. Machine Learning, Automation: tons and tons of Azure public services are now generally available behind this type of private link, so that only applications that connect through the private link service can get access to them, and they're not publicly available.

7. Private Link Service

With the storage account, it might actually look like a normal storage account; nothing screams at you that it's private-endpoint only, and in fact if you go into the properties of the storage account it even has the normal URL set up for contacting it. But if we go to the resource group and open the endpoint, we can see, first of all, that a network interface card has been added to our resources: that's the NIC part of the endpoint. Then there's the endpoint itself. The endpoint is attached to the subnet and has that NIC. If we go under DNS configuration, we can see that it's basically pointing this endpoint, which is the private test blob, to a private IP address. So this should not be accessible from outside; this is what the IP needs to be. To be configured correctly, the following records are required in your private DNS setup, and we do have our private DNS set up, so we can see that the record for our private test storage account is in fact pointing to that IP address.
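Because the private DNS zone handles that name resolution, client code doesn't change at all. Here is a hedged sketch, run from a VM on the linked spoke network, with placeholder account and container names: first confirm the account's ordinary hostname resolves to the endpoint's private IP, then use the normal storage SDK.

```python
# Sketch, run from a VM on the linked VNet. Account name is a placeholder;
# requires azure-identity and azure-storage-blob.
import socket

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

host = "mystorageacct.blob.core.windows.net"
account_url = f"https://{host}"

# Inside the VNet, the private DNS zone answers with the endpoint's private IP
# (something like 10.1.0.5), even though the hostname is the normal one.
print(host, "->", socket.getaddrinfo(host, 443)[0][4][0])

# Client code is unchanged: same URL, same SDK, traffic stays on the private endpoint.
blob_service = BlobServiceClient(account_url=account_url,
                                 credential=DefaultAzureCredential())
for container in blob_service.list_containers():
    print(container.name)
```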

So when we deployed it, it actually did get created properly. Again, this is private DNS, so in order for any of our virtual machines to use the storage, they have to resolve that private link name. We did attach this to the virtual network, so the spoke one VNet, if we created resources on that network, would have access to this storage account privately. On a related note, we've been talking about private endpoints and we set one up for our storage account, but if we wanted to set up a private endpoint for our own virtual machines, there is a way to do that too. We could have a private endpoint for our back end living inside our front end network, even with deny rules on the NSG: deny outbound and deny inbound.

So theoretically there's no traffic allowed between these particular virtual networks, but because you set up this private endpoint, you again get a private connection: you have the endpoint on one side and the Private Link service acting as the server on the other, and it becomes like a proxy, if you will. Azure Private Link actually has a dashboard, so I'm going to minimize this, search for Private Link in the marketplace, and go inside it. We're taken to the Private Link Center, where we can see a diagram very similar to what we just described: some sort of front end, some sort of back end, and a private link that manages the connection.

In fact, if we go into the private endpoints section of the Private Link Center, we can see the endpoint that we created for the storage account. We don't have virtual machines here, but if we created a load balancer on one of those networks, we could create a private link that allowed the connection between the two. At the very least we can see the active connections, the pending connections, and their status; we can approve and deny connections, check the connection state, et cetera. It becomes a centralized way to look at how your private connections are talking to each other.

8. Overview of Azure Load Balancing Services

So let's talk about how traffic is distributed in terms of load balancing services within Azure. There are four main load balancing services that we'll talk about. There is the standard load balancer, which, as the name implies, is the one you would go to when you need a load balancer, and the Application Gateway, which has additional functionality and more capability; those two work at the regional level. At the global level you have the Front Door service and Traffic Manager, and those two have completely different capabilities when it comes to distributing traffic globally. Let's start with the load balancer. The load balancer service is the most basic of them: it's what's called a layer four load balancer, and it works at the network level. It doesn't understand HTTP or HTTPS at all, doesn't understand domain names, doesn't understand paths in the URL. A layer four load balancer works on IP address, port, and protocol, protocol being TCP or UDP. Those are the things the load balancer service can handle. Now, Microsoft provides a basic load balancer for free, but it's not very featureful and they don't even recommend you use the basic load balancer in a production setting. For that they have what's called the standard load balancer.
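Before getting into the SKUs, here is a toy sketch (not Azure's actual algorithm) of the layer four idea: the only inputs such a rule has are the five-tuple of source IP, source port, destination IP, destination port, and protocol, which can be hashed to pick a backend. The URL, host header, and path simply never enter into it.

```python
# Toy illustration only: a layer-4 balancer sees the connection five-tuple,
# never the URL or path. Backend addresses are assumed placeholder values.
import hashlib

backends = ["10.0.1.4", "10.0.1.5", "10.0.1.6"]  # assumed backend pool

def pick_backend(src_ip: str, src_port: int,
                 dst_ip: str, dst_port: int, proto: str) -> str:
    five_tuple = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int(hashlib.sha256(five_tuple).hexdigest(), 16)
    return backends[digest % len(backends)]

# The same flow always lands on the same backend; a new connection (different
# source port) may land somewhere else.
print(pick_backend("203.0.113.7", 50123, "20.50.60.70", 443, "TCP"))
print(pick_backend("203.0.113.7", 50124, "20.50.60.70", 443, "TCP"))
```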

The standard load balancer has an SLA, whereas the basic load balancer does not. The standard load balancer also costs money, and you pay per rule: if you have one load balancing rule it's around 2.5 cents per hour, and with two, three, or four rules you're basically paying per rule. Creating the load balancer is pretty straightforward: you choose the resource group and subscription, give it a name as usual, and pick a region. Now, load balancers can be either public or internal. Internal load balancers effectively operate on private IP addresses and cannot be reached from outside, whereas a public load balancer has a public IP address that the public uses to access the services hidden behind it.

Here's where you choose standard versus basic. Notice that the standard SKU supports fancy things such as Availability Zones; if I switch over to the basic SKU, the Availability Zone option basically disappears. You also see here the option to run the load balancer regionally, which of course means in a region, or as a global load balancer, which is still deployed to a region. Now I'll pull in this page, and you can see that global versus regional is a real distinction. When we talk about Front Door and Traffic Manager in a second, those operate at a global level: traffic from around the world can come into the service and then get distributed to the region closest to the user.

You'll notice that Application Gateway is listed as a regional-only load balancer, so typically you might have a Front Door service that points to an application gateway, with the web servers behind the application gateway. Somewhat surprisingly, Azure Load Balancer is also listed as a global load balancer. And you can see here that it's a layer four load balancer, so it's not designed specifically for HTTP-type traffic; that's why it's listed as not great for HTTP, because it doesn't handle HTTP any differently than it handles non-HTTP traffic. So yes, in the load balancer specifically you do get the choice of whether to run at the regional level or the global level, and this is also where you can deploy it with no zone, make it zone redundant, or pin it to specific zones.

Now, if we go to the Application Gateway that's right there next to it: the application gateway is also considered an enterprise load balancer, and this one you are going to pay for. You have the choice of what are called SKUs, so you're basically paying for either the standard application gateway or the WAF variant.

WAF stands for Web Application Firewall, and a web application firewall has the ability to filter out malicious traffic coming in. So if someone tries a cross-site scripting attack, SQL injection, or any of the other standard Internet hacking methods against websites, the web application firewall should be able to handle it; there's a list of the known, industry-standard attack patterns. This is not as fancy or sophisticated as an actual firewall or some of the advanced threat protection and other firewall devices, but it does basically make sure people can't take advantage of your website if it's not perfectly configured, for instance.
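As a toy illustration of that idea only (the real WAF uses managed rule sets, not a couple of hand-written patterns), signature-based filtering looks roughly like this:

```python
# Toy illustration of signature-based request filtering; nothing like the
# real WAF rule sets. Requests whose parts match a known attack pattern are
# rejected before they ever reach the web servers behind the gateway.
import re

ATTACK_SIGNATURES = [
    re.compile(r"<\s*script", re.IGNORECASE),                 # reflected XSS attempt
    re.compile(r"('|%27)\s*or\s+1\s*=\s*1", re.IGNORECASE),   # classic SQL injection probe
]

def waf_allows(path: str, query: str, body: str) -> bool:
    for part in (path, query, body):
        if any(sig.search(part) for sig in ATTACK_SIGNATURES):
            return False  # block the request
    return True  # pass it through to the backend pool

print(waf_allows("/products", "id=42", ""))                      # True
print(waf_allows("/search", "q=<script>alert(1)</script>", ""))  # False
```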

So again, with Application Gateway, you give it a name and choose a region. You can see that the application gateway supports scaling, which the load balancer does not; you might want to leave scaling off unless you really know you need it, but you can basically have it grow the number of application gateway instances based on traffic. It also supports the HTTP/2 protocol, though that's disabled by default. The settings are all very similar until you get to the routing rules, because this is a layer seven load balancer: you can configure rules based on domain name matching or parts of the path, so that images get sent to one server, videos get sent to another, and the rest of the traffic goes to a third server. So you can do load balancing based on the path of the URL. Now, the last two that we'll talk about are Traffic Manager and Front Door. I've always found Traffic Manager to be really cool because it's really a hack of the global domain name system. The idea is that you've got a user somewhere halfway around the world from you, let's say in Australia in my case. When that user looks up your domain, www.example.com, the domain name system can direct them to a different IP address than it would a North American user. So you can set up your applications around the world in three, four, or five regions, and everyone going to the same domain gets directed to different servers.
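You can see that DNS trick for yourself with an ordinary lookup. In this hedged sketch, www.example.com just stands in for a Traffic Manager-fronted domain; the point is that the same query gets a different answer depending on where it's asked from.

```python
# Illustration of the DNS-level redirection Traffic Manager relies on: the
# client only ever does an ordinary lookup, and the answer depends on where
# the query comes from. The domain here is just a placeholder.
import socket

name = "www.example.com"  # stand-in for a Traffic Manager-fronted domain

# A user in Australia and a user in North America issue the exact same query
# but receive the address of the deployment closest to them.
hostname, aliases, addresses = socket.gethostbyname_ex(name)
print(f"{name} resolved via {hostname!r} to {addresses}")
```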

This DNS-based approach, I believe, is how Google works, or Facebook: it's the same domain name no matter what country you're in, but the servers are geographically dispersed, and not everyone who goes to Facebook.com is being sent to the United States to have that traffic served. A lot of the big brands, even Microsoft.com, use a very similar setup. Front Door is also relatively new, and it's basically an application gateway that runs at the global level. It also supports Web Application Firewall, so there's a security element; it also supports a CDN, so there's a caching element. Basically it's another high-availability service that operates at a global level, and from there you can distribute the traffic to whichever region you want.

So again, you can think of Front Door as a global service that directs users to the right region for them, depending on what they're trying to do. Front Door really can sort of do it all, and obviously there's a price to that. Compared to it, the load balancer is very simple; the application gateway is a bit more complicated but still straightforward; Traffic Manager is a neat hack of the DNS system; and Front Door is a hodgepodge of all of these things operating at a global level. That's all in terms of load balancing services in Azure.
