AZ-304 Microsoft Azure Architect Design – Design Deployments
1. Design Infrastructure (25-30%)
The final section of the exam, and it's a big one, is Design Infrastructure, worth 25% to 30% of your exam score. Inside this section, we're going to talk about the biggies of infrastructure as a service: compute services and networking services. We'll talk about web apps, Logic Apps, Functions, Service Fabric, all of those platform as a service type solutions. And finally we'll talk about migration, which is getting data and applications off of one platform and, presumably, into Microsoft Azure. This includes storage migration, Data Box, and how to migrate databases and virtual machines. Again, this is a big one, but you've been following along so far, so you'll be fine, don't worry. Thanks a lot for being here, and let's get into infrastructure.
2. Compute Deployments
So we're now into the fifth topic of this course, five out of six. It has to do with design for deployment, migration, and API integration, and in this section we're going to talk about deployments. The type of deployments we're talking about right now are deployments of your code into virtual machines. So if you're working within a development environment where you've got an application, let's say it's Visual Studio and .NET, and you need that application deployed into one or more VMs, what are your options? First off, I should mention that you can just hit Publish within Visual Studio. There is a way to connect Visual Studio to Azure, hit Publish, and it essentially creates a package that goes from your developer workstation into an Azure VM.
Now, this is not recommended for production. I know a lot of companies have very small teams, or a single solo person who runs the whole company, and that person would obviously have access to production. But even then, you might want to protect yourself from mistakes by having a proper deployment process where you can test the code, in a testing environment first, and then take that code and deploy it. So going right from Visual Studio to a VM is more appropriate for a dev environment or for some type of test. There are also some professional tools; two that come to mind are Chef and Puppet. Azure does have a Chef integration, so you can have a Chef server and a cookbook.
All the Chef metaphors are cooking related, which is great. You have your Chef cookbooks, and basically you can design a deployment that takes your code from GitHub and gets it into Azure. Your administrator can just fire off some Chef commands and get that deployed. The alternative to Chef is called Puppet, another great name for a product. The server that runs Puppet is called a Puppet master, of course. You can take your code, again from GitHub or from a Blob storage account, whatever it may be, and the Puppet master directs the Puppet agents, which are installed on your virtual machines, to pull the code and do the deployment. So Puppet also has a deployment method, but Microsoft has its own.
About a year ago, Microsoft announced its DevOps suite, with Azure Pipelines as part of it. Azure Pipelines is meant to be a way for you to run what is basically a script within your own environment to do the building, the testing, the approvals, and all the other things that need to be done to get your code deployed. And that's completely customized: for you, it might go to one VM, then post something here, then call a webhook, things like that you can see on the screen. Here we go: the code gets pushed into Git. This could be a continuous integration type of thing where, once the commit happens, the build kicks off, some continuous integration and testing runs, maybe there are approvals that have to happen, people have to sign off on it, release management, and it finally ends up in a virtual machine. Now, if you need to do this to multiple virtual machines, you can put all your virtual machines together into deployment groups. So if you've got four production web servers, you don't want to have to deploy to each one individually, and the risk is that they fall out of sync with each other while you're doing the deployment.
So put your servers into a deployment group in Azure Pipelines, and then you can use the deployment group as a target. Now, individual VMs have a VM extension called the custom script extension. This allows you to run PowerShell commands. So absolutely, you can develop your own PowerShell scripts, even outside of Azure Pipelines, that instruct virtual machines to act. The custom script extension also works as a task within Azure Pipelines. And of course, you can use ARM templates.
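As a rough sketch of what that kind of pipeline looks like in YAML form (the solution name, pool image, and stage names here are hypothetical, not from the course), a commit-triggered build-and-test stage might be defined like this:

```yaml
# Hypothetical Azure Pipelines sketch: a commit to main triggers
# a build, then automated tests, before any release stage runs.
trigger:
  - main

pool:
  vmImage: 'windows-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: dotnet build MyApp.sln --configuration Release
            displayName: 'Build the solution'
          - script: dotnet test MyApp.sln --configuration Release
            displayName: 'Run automated tests'
```

From there, later stages could target a deployment group of VMs, add approval gates, or call a webhook, as described above.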
So if you do need to deploy some VMs, you can set up an ARM template to do that, and part of it could be a custom script extension that executes a PowerShell script, downloads the code from GitHub, et cetera. Now, I'm just touching on the top apps here; there are lots of third-party apps within the marketplace. Automating the build process has been a very popular, growing practice in a lot of environments, with Octopus Deploy and other tools like it. So there are other options as well that can get stuff into virtual machines, not just what I've listed on screen.
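To give a sense of what that looks like, here is a hedged ARM template fragment for a custom script extension resource; the VM name, script URL, and file name are placeholders for illustration, not from the course:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "myVm/deployCode",
  "apiVersion": "2021-03-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://example.com/deploy.ps1" ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File deploy.ps1"
    }
  }
}
```

The extension downloads the script from the given URI onto the VM and runs the command, which is how an ARM deployment can pull code from GitHub or elsewhere as part of provisioning.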
3. Container Deployments
So continuing on with deployments, let's talk about container deployments in particular. Now, containers are a relatively new technology within the cloud computing space, and the thing that is really great about containers is how easy they are to deploy. Containers were actually invented to solve some of the deployment problems people have: piecemeal deployments, having to follow 200 steps, deployments taking 12 hours. Containers solve a lot of those problems, and so there are a lot of services and applications around container deployments. One of the central Azure services is called Azure Container Registry, or ACR. ACR is a repository for container images. The way GitHub is a repository for your code, ACR is a repository for your built container images. So once you have taken your code and built it into a container image, it goes into ACR.
Now, once it sits in ACR, it can be pulled out of ACR and deployed into various environments, and one of the technologies around that is called Helm. Last year at Microsoft Ignite, I heard a lot about Helm; people were very excited about it. The way Helm works is that it's basically a package manager, similar to APT or to NuGet, where you're able to pull packages, in this case out of ACR. So you check your code into GitHub, Helm takes over with a build step, the code gets pulled into a Docker image, and that Docker image goes into an ACR repository. At that point, the build is done. That code will never be modified; the binary will be identical no matter where it goes, and that's what makes containers so great. At that point, you can choose where to release it. You can deploy that code into a development environment, do some testing on it, play around, then get it into a staging environment.
Other people can do some testing, and finally everyone signs off and it can go into production. The same ACR image gets deployed in all locations. So you can see Helm is the orchestration: it's the one doing the work, whereas ACR is the place where the images live. Now, similar to this, we just talked about Helm being an orchestrator; Azure Pipelines is also an orchestrator. Microsoft, in the DevOps space, comes out with Pipelines, and that allows you to build some pretty sophisticated automations. You can have step one, do this; step two, do that.
Step three, notify this person. Step four, wait for approval. Step five, and so on: you build your scripts step by step through what it takes to get a deployment done. Here's a very simplistic one, where the code going into Git kicks off a continuous integration step, the build happens, automated tests are run, and then the code is ready to be deployed. You can choose to deploy it to dev, to staging, or to production, and Azure DevOps, and Pipelines in particular, can push that right into a virtual machine, or right into a container in this case.
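As a rough illustration of a container pipeline like the one described (the service connection, resource group, cluster, and chart path names below are made up for the example), the build-and-push step into ACR followed by a Helm release might look like:

```yaml
# Hypothetical sketch: build the image, push it to ACR,
# then release that same image with Helm.
steps:
  - task: Docker@2
    displayName: 'Build and push the image to ACR'
    inputs:
      command: buildAndPush
      containerRegistry: 'myAcrServiceConnection'   # assumed service connection
      repository: 'myapp'
      tags: '$(Build.BuildId)'
  - task: HelmDeploy@0
    displayName: 'Release the image with Helm'
    inputs:
      connectionType: 'Azure Resource Manager'
      azureSubscription: 'mySubscriptionConnection'  # assumed
      azureResourceGroup: 'my-rg'
      kubernetesCluster: 'my-aks'
      command: upgrade
      chartType: FilePath
      chartPath: 'charts/myapp'
      releaseName: 'myapp'
```

The same pushed image, addressed by its build tag, can then be released unchanged to dev, staging, and production.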
Now, we can also use deployment groups when it comes to containers as well. Obviously, like I said, containers are built for deployments, and so there are a lot of options around that. The ones I've talked about are the most common, and you can obviously work from the command line too: you can run your Docker commands, or your AKS and Kubernetes controls, within PowerShell as well. So you can do a lot of stuff manually, or use these automated deployments.
4. Database Deployments
So now let's talk about a type of deployment that has traditionally been difficult throughout my entire career. The issue of how to get databases out of a development environment, into staging, and into production without making mistakes has been a constant source of challenge. Now, there is a type of technology called DACPAC. A DACPAC represents the schema of your database. You use a tool such as SQL Server Data Tools (SSDT), which is a Microsoft design tool for databases, and basically by doing a build within SSDT, it'll create a DACPAC file.
And you can use Azure Pipelines to push your DACPAC, which is your schema, from dev into staging and into production. Now, of course, the way that I used to do it was we'd have to write SQL scripts to perform the tasks that we wanted to do. So if we wanted to create a new table, then you would have a CREATE TABLE statement in a SQL file, and that was part of the deployment.
And continuing on with that, if the developers have to create SQL scripts, then those too can be pushed through Azure Pipelines to get into a database and do a database deployment by SQL script. There is an Azure SQL Database deployment task, so to get your DACPAC or your SQL scripts into an Azure SQL database, there's a task in Pipelines for that. It's basically the same diagram as with virtual machines; it ends up in Azure. There's also a task for the MySQL database as well. I did not see tasks for the other types of databases, PostgreSQL or MariaDB, but there are tasks for those two.
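To give a sense of what that task looks like in a pipeline (the server, database, and service connection names below are placeholders, not from the course), a DACPAC deployment step might be written as:

```yaml
# Hypothetical sketch: push a DACPAC built by SSDT into Azure SQL Database.
steps:
  - task: SqlAzureDacpacDeployment@1
    displayName: 'Deploy schema via DACPAC'
    inputs:
      azureSubscription: 'myServiceConnection'   # assumed service connection
      ServerName: 'myserver.database.windows.net'
      DatabaseName: 'mydatabase'
      SqlUsername: '$(sqlAdminUser)'
      SqlPassword: '$(sqlAdminPassword)'
      DeployType: 'DacpacTask'
      DacpacFile: '$(Pipeline.Workspace)/drop/MyDatabase.dacpac'
```

The same task also supports a SQL-script deploy type, which covers the hand-written CREATE TABLE script approach described above.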
5. Storage Deployments
Now, another traditional challenge for people doing deployments is the deployment of files. What if what you're deploying isn't a packaged application, and you're not zipping files up and following an application type of deployment? How do you just get files into a file folder? Well, Azure Pipelines does have a file copy task. So if you want to create an Azure Pipelines script that says, copy the images from here over to there, there is a way to do deployments of that. And good old AzCopy isn't going anywhere. That's a command line tool that you can download and run in your local environment or on a server, and it allows you to manipulate files between storage containers.
But you can also upload files from your server into Azure, or download files. It does this in an asynchronous way, so it's not going to impact your application while it's running. Using AzCopy to get your files into the cloud is another option. A great thing about AzCopy is that you can build it into a script: if you've got a PowerShell script or some other scripting language you're using for your deployments, then the AzCopy command line fits right in. There are other pipeline tasks besides the file copy. Now, the file copy task gets files into an Azure storage account.
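As a hedged sketch of the pipeline file copy approach (the storage account, container, and service connection names are hypothetical), copying images into blob storage might look like:

```yaml
# Hypothetical sketch: copy build output files into a blob container.
steps:
  - task: AzureFileCopy@4
    displayName: 'Copy images to blob storage'
    inputs:
      SourcePath: '$(Build.ArtifactStagingDirectory)/images'
      azureSubscription: 'myServiceConnection'   # assumed service connection
      Destination: 'AzureBlob'
      storage: 'mystorageaccount'
      ContainerName: 'images'
```

The same task can also target Azure VMs instead of blobs, depending on the Destination setting.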
But what if you need to get files into a VM, onto a C drive or D drive inside a virtual machine? Well, there's a Windows Machine File Copy pipeline task as well. And it's not just Windows; Linux is also supported, because there's an SSH Azure Pipelines task if you want to deploy into a Linux environment. And speaking of SSH, you can also copy files over SSH from the command line. So the same way you'd use AzCopy to get files into storage, you can use SSH to copy files from your local machine into a Linux VM, or any Linux environment that has SSH access. Lots of ways to get files into Azure as part of your automated deployments, or even manual ones.
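For the VM case, a hedged sketch of the Windows machine file copy task (machine name, credentials, and target path are placeholders) might look like:

```yaml
# Hypothetical sketch: copy files onto a Windows VM's local drive.
steps:
  - task: WindowsMachineFileCopy@2
    displayName: 'Copy files into the VM'
    inputs:
      SourcePath: '$(Build.ArtifactStagingDirectory)/site'
      MachineNames: 'myvm.example.com'        # assumed target machine
      AdminUserName: '$(vmAdminUser)'
      AdminPassword: '$(vmAdminPassword)'
      TargetPath: 'C:\inetpub\wwwroot'
```

For Linux targets, the SSH and copy-files-over-SSH tasks fill the same role.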
6. Web App Deployments
Now, no discussion of deployments would be complete without talking about web apps. Before there were containers, Web Apps, as platform as a service applications, were also designed to give you services such as deployment. So with web apps, you can hit the Visual Studio Publish button and get your application deployed into a brand new web app or into an existing web app. It also supports deployment slots. It uses Web Deploy, I believe, to get from your machine into the web app, and so anything else that supports Web Deploy will work as well.
Now, a very platform-native way of doing this is called Kudu. Kudu is also a way of accepting files and getting them deployed into a web app. So if you're using Eclipse or another editor besides Visual Studio, and it has a Kudu integration, then you can get your files from that development environment into Azure over Kudu. Then there's GitHub. Azure has always had a GitHub integration, I guess, but now that Microsoft has bought GitHub, things have started to accelerate and become more tightly integrated. There are even GitHub Actions now, which can push from GitHub into Azure. Web apps integrate with GitHub: you can set up the deployment right within the Azure Portal and pull your code from GitHub.
Web apps also support a number of file services: Microsoft OneDrive, or the Dropbox file service. So if you can get your files into a Dropbox folder, it'll detect that the Dropbox folder has files and will do the deployment. You can have a shared deployment folder that gets picked up automatically. Of course, web apps give you an FTP user ID and an FTP URL, so you can go and upload the files the old fashioned way using the file transfer protocol. And, as has been a constant trend in this deployments discussion, of course Azure Pipelines has web app tasks as well, if you're building a pipeline.
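As a hedged sketch of such a web app task (the app name, package path, and service connection are hypothetical), a pipeline deployment step might look like:

```yaml
# Hypothetical sketch: deploy a zipped build package to an App Service web app.
steps:
  - task: AzureWebApp@1
    displayName: 'Deploy package to the web app'
    inputs:
      azureSubscription: 'myServiceConnection'   # assumed service connection
      appType: 'webApp'
      appName: 'my-web-app'
      package: '$(Pipeline.Workspace)/drop/*.zip'
```

Combined with deployment slots, a step like this can release to a staging slot first and swap into production after testing.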
7. Service Fabric Deployments
So we're going to wrap up our discussion of deployments by talking about Service Fabric deployments. Now, Service Fabric is a technology that Microsoft came out with for the cloud to make it easier to develop, test, and deploy your code, by breaking up your monolithic applications into much smaller parts. It is much easier to make small changes to small pieces of code and be more sure that you're not negatively affecting some other random piece. The testing is easier and deployments are easier, and I guess the theory is that you can then do more frequent deployments: you can make tiny code changes, do a quick test, push that out to production, and keep that cycle going.
So it's a quicker velocity in terms of development and deployment. As such, deployment within Service Fabric follows a lot of the same path as App Service, so what we talked about in the App Service section probably still applies to Service Fabric to some degree. We can see on screen there's a Service Fabric cluster on the left with five nodes, and you've got a web service and a worker service running on the nodes. They're distributed differently: the worker service is on every node, but the web service is not. And you can see on the right that there's a developer who has, obviously, packaged up their code and sent it into Service Fabric.
So how do they do that? Well, Visual Studio, no surprise, supports the Publish command right into Service Fabric. This is done with a publish profile: you choose your Service Fabric endpoint as your connection, and it will use, in this case, an X.509 certificate credential to push it. So it's very easy to publish directly into Service Fabric that way. You can also use PowerShell scripts or CLI scripts.
Here's a Microsoft webpage that shows you how: they have the endpoint, they have the package, and then they use a number of PowerShell commands to copy the package, register the package, and then get the Service Fabric application running from it. Azure Pipelines is Microsoft's preferred way in terms of a DevOps stack, and so there is a Service Fabric application task in Pipelines. Here's an example of Azure Pipelines within the DevOps tool, and you can choose the Azure Service Fabric task.
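As a hedged sketch of that pipeline task (the package path, cluster connection name, and publish profile location are placeholders, not from the course), a Service Fabric deployment step might look like:

```yaml
# Hypothetical sketch: deploy a packaged application to a Service Fabric cluster.
steps:
  - task: ServiceFabricDeploy@1
    displayName: 'Deploy to the Service Fabric cluster'
    inputs:
      applicationPackagePath: '$(Build.ArtifactStagingDirectory)/applicationpackage'
      serviceConnectionName: 'myClusterConnection'   # assumed cluster connection
      publishProfilePath: '$(Build.SourcesDirectory)/MyApp/PublishProfiles/Cloud.xml'
```

The cluster connection would carry the endpoint and certificate details, mirroring the X.509-secured Visual Studio publish described above.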