Amazon AWS Certified Machine Learning Specialty – ML Implementation and Operations Part 4

9. Lab: Tuning, Deploying, and Predicting with TensorFlow on SageMaker – Part 1

So in this exercise, we're going to illustrate using SageMaker to run your own custom model: train it, deploy it, and make predictions with it, all within the SageMaker framework. What we're going to do is take the convolutional neural network that we developed using TensorFlow in our previous exercise and slightly adapt it so that it will work within our SageMaker environment instead. So let's get started. Sign into your AWS Management Console with your own account.

And if you do want to follow along, you can expect this to incur a few dollars in AWS charges. If you're not comfortable with that, just watch and don't follow along yourself. And remember, if you forget to shut things down at the end, your bill could end up being a lot bigger than that, so you have been warned. But if you do want to follow along, go ahead and type in SageMaker here and select Amazon SageMaker to get to the SageMaker console. We'll start by spinning up a notebook instance.

So the UI here may change, but just look for a friendly button that says Create notebook instance. Or, worst case, you can go over to the Notebook instances selection here in the main menu. We need to give this notebook a name; call it whatever you want. Let's call it, I don't know, Keras test, because we are testing the Keras framework for a CNN within SageMaker here. We'll use the default notebook instance type of an ml.t2.medium; that's more than enough. And I don't think we really need elastic inference at this point, but we will use it later on in this exercise. Let's go through the other options here, because these details can be important for the exam.

As a reminder, elastic inference is a way of accelerating SageMaker inference, and you can choose to add on specific accelerator types, called ml.eia1.medium, ml.eia1.large, or ml.eia1.xlarge. Those sit in front of SageMaker and accelerate any inference requests that you make to it. From a security standpoint, we're just going to go ahead and allow SageMaker to create a new IAM role with default permissions, so we'll say create a new role here. We need to specify what we want to do about S3 bucket security: we can specify specific S3 buckets that we want this notebook to have access to if we want. So if we had an existing data repository or data lake that we wanted to access, you might want to enter that in here. In this example, though, we're just going to upload data into an S3 bucket ourselves after importing it from the MNIST dataset.

So we're just going to go ahead and say None here, but any S3 bucket with "sagemaker" in the name, or any object with "sagemaker" in the name, will still be allowed. And that's what we're going to use to actually have access to S3 data within our example here. If you wanted to be a little more permissive, you could say any S3 bucket within your account has permission here. And remember, it is still restricted to your account, so it's not as scary as it sounds. But the topic of S3 security with SageMaker is a complex one, so it's worth knowing what these options are and keeping them in the back of your head during the exam.

Let's hit Create role to go ahead and create that role that we want. I will also allow root access to this notebook; if you disable that, it does limit the things you can do within SageMaker. And again, we have pretty tight security already just getting into our account, so I'm not going to worry about this too much. I have no need to encrypt the notebook data itself; this is not sensitive or personally identifiable information that we're dealing with here, it's just handwriting recognition. But you do have options here as well: you could use KMS to manage the encryption key that's used to encrypt the data within the notebook if you wanted to. There are some other optional sections here that we're going to ignore. We talked before about using a VPC, and this is where you would set up a private VPC if you wanted to.

We are not going to use one for this example, to keep things simple, because as we talked about earlier, using a VPC with SageMaker gets a little bit complicated, but it is a topic you need to understand. You can also automatically have your notebook start within a Git repository if you want. And there are also tags you can assign to the notebook to allow you to manage multiple notebooks more efficiently. We're going to leave all that blank for our example here and just hit Create notebook instance to spin that up. So now we just have to sit here and wait for that status to go from Pending to something that indicates that we can actually use it.

So through the magic of video editing, we'll come back when that's ready. Okay, after a few minutes our status has switched to InService. It's a nice friendly green, indicating that we can use this notebook now. So all we have to do now is hit Open Jupyter to open a Jupyter notebook. There's no special security or tunnels that we need to set up; it just works, which is kind of nice. And here we are. So let's go ahead and upload the notebooks that we want to use for this example.

So within our course materials, you should see some files like these. We're going to be using the mnist-train-cnn.py script; that's the same exact model that we used in the previous example of training a handwriting recognition model with a CNN using the Keras framework. And we also have the notebook itself, which is the keras-mnist-sagemaker.ipynb notebook. So let's go ahead and upload both of these into the SageMaker notebook environment. To do that, just hit the Upload button here and we'll upload each of those files in turn. They were in my AWS machine learning folder here, where I put all the course materials. So we'll first upload the mnist-train-cnn.py script and hit Upload again. We will also upload our notebook, keras-mnist-sagemaker.ipynb, and hit Upload again, and we should be seeing this. So now we can click on the keras-mnist-sagemaker.ipynb notebook here and it should just come up.

10. Lab: Tuning, Deploying, and Predicting with TensorFlow on SageMaker – Part 2

And here we have it. So we have a notebook that's going to walk us through actually deploying and training and making predictions with our Keras-based CNN, and not using just a built-in algorithm within SageMaker, but one that we developed from scratch using TensorFlow. So that's kind of cool; a more advanced example here. We've gone through what this does before in the previous exercise, but to recap: we're taking a dataset of handwriting recognition samples where people have tried to write down the numbers zero through nine. The job of our convolutional neural network is to take those raw images and predict what number the person was trying to draw in each of those images. That's called the MNIST dataset; you see it all the time in the context of learning neural networks. It's a very common example.

This is actually based on a similar exercise that's available from AWS itself. They were using a variation called the Fashion-MNIST dataset, which is also kind of interesting to look at if you're tired of looking at numbers; it's an alternative dataset that works exactly the same way. We've made some twists here, though, to use our exact same CNN from the previous exercise instead of the one that they used in their example. We'll start off by just setting up SageMaker here. To do that, we need to import the SageMaker package for Python, and we'll get a session and an execution role from it and call them sess and role. Now, before we dive into the code here, I want to reiterate that you will never be asked to interpret or write or understand code as part of the certification exam.
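
Concretely, that setup cell looks something like the minimal sketch below (variable names follow the transcript; this assumes the SageMaker Python SDK that ships on the notebook instance):

```python
import sagemaker
from sagemaker import get_execution_role

# A SageMaker session wraps the S3 and SageMaker API calls we make from this notebook
sess = sagemaker.Session()

# The IAM role attached to this notebook instance; training jobs and endpoints run under it
role = get_execution_role()
```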

So don't get hung up on the syntax or how Python works in this exercise; it is absolutely not important for actually passing the exam. If you want to learn Python, that's a different course, okay? All that's important to you is how these pieces fit together, how you would make a system out of them, and what the capabilities and options are as you build this system. The code itself: not important. Anyway, let's go ahead and run this. I'll click in this box and hit Shift+Enter. It works just like any other Jupyter notebook, except it's running in the cloud on SageMaker, which is kind of cool. That will take a few moments to load up all the libraries that it needs. And there we have it. Now, the next thing we need to do is take the MNIST dataset and store it to disk so we can then turn around and upload it into S3. Remember, SageMaker depends on getting all of its input data and training data from S3 somewhere. So the first thing we have to do, as usual, is to get and massage our training data into the format and location that our algorithm expects.

So that's what this block of code is doing here. It's just taking the MNIST dataset that's built into Keras itself, loading it up, and extracting the training and test sets for both the labels and the actual images themselves. It's going to store that within this notebook instance in a data subdirectory, and furthermore into a training and a validation subdirectory, where we'll store our training data and our validation data, again with the actual images, which are called x, and the labels, which are called y. So let's go ahead and kick that off: Shift+Enter. It downloaded the data and is presumably now storing it locally into that data subdirectory. Looks like it's done. So now we need to turn around and upload that data into S3. What we're going to do here is use a prefix, a little bucket within our bucket if you will, a directory that's going to be called keras-mnist, and we're going to upload the resulting training and validation .npz data that we just produced in the previous block. So let's go ahead and do that. And that was pretty quick.
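
As a rough sketch, those two cells might look like the following (the .npz layout and the image/label array names are assumptions based on the AWS sample this lab is adapted from; sess comes from the setup cell above):

```python
import os
import numpy as np
from tensorflow.keras.datasets import mnist

# Pull MNIST down via Keras and stage it on the notebook instance's local disk
(x_train, y_train), (x_val, y_val) = mnist.load_data()

os.makedirs('data/training', exist_ok=True)
os.makedirs('data/validation', exist_ok=True)
np.savez('data/training/training.npz', image=x_train, label=y_train)
np.savez('data/validation/validation.npz', image=x_val, label=y_val)

# Upload both files under a keras-mnist prefix; the notebook role can write to
# the default sagemaker-* bucket, which is why this works without extra setup
prefix = 'keras-mnist'
training_input_path = sess.upload_data('data/training/training.npz',
                                       key_prefix=prefix + '/training')
validation_input_path = sess.upload_data('data/validation/validation.npz',
                                         key_prefix=prefix + '/validation')
print(training_input_path, validation_input_path)
```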

You can see that the sess.upload_data function automatically uploaded that into an S3 bucket whose name starts with sagemaker-. And as you recall, when we were setting up this notebook, we said that we would allow our notebook to have access to any S3 bucket that begins with "sagemaker", and that's why this worked. All right, next we want to make sure that we can actually train successfully. To do that, we're going to test out our training script locally on the notebook instance itself. That's going to allow us to work out any bugs or issues before we invest in spinning up expensive GPU instances to train this at scale. First, we'll run pygmentize on mnist-train-cnn.py to display the script that we uploaded to this notebook directory earlier. This is, again, the same exact convolutional neural network using the same exact Keras code that we wrote in the previous exercise.

Now, we have added a few things to this to make it compatible with SageMaker, and specifically with SageMaker's automatic model tuning capabilities. To do that, you can see that we've wrapped it in a little if __name__ == '__main__' block; that's just to protect it from running when it's imported by other scripts, a little Python trick that you see sometimes. The main thing we've added here is this argument parser stuff. We've added the capability to pass a bunch of parameters into this algorithm, such as what you want the learning rate to be, how many epochs you want to run, what the batch size should be, how many GPUs it should try to use, and so on and so forth, plus where the training data is, where the validation data is, and where the model gets stored to. These all get extracted into variables that we use within the script.

So instead of hard-coding things like how many epochs and what the batch size is, we're going to take parameters for those. The first three things, the epochs, learning rate, and batch size, are our hyperparameters, if you will. And by default, we're going to run with ten epochs, a learning rate of 0.01, and a batch size of 32, which matches the settings in our previous exercise. The rest of the code is exactly the same, so we just load up our training data and our test data there.
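
The parameter-parsing section of the script looks roughly like this sketch (the argument names and SM_* environment variables follow SageMaker's script-mode conventions; the exact names in the course script may differ slightly):

```python
import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters that SageMaker passes in as command-line arguments
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--learning-rate', type=float, default=0.01)
    parser.add_argument('--batch-size', type=int, default=32)

    # Values the SageMaker training container exposes through environment variables
    parser.add_argument('--gpu-count', type=int,
                        default=int(os.environ.get('SM_NUM_GPUS', 0)))
    parser.add_argument('--model-dir', type=str,
                        default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--training', type=str,
                        default=os.environ.get('SM_CHANNEL_TRAINING'))
    parser.add_argument('--validation', type=str,
                        default=os.environ.get('SM_CHANNEL_VALIDATION'))

    args, _ = parser.parse_known_args()
```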

We massage it into the format that our neural network wants by scaling the image data down into the range of zero to one and converting the categories, the labels of zero through nine, into one-hot encoded format. We then create the convolutional neural network itself. This is the same exact architecture that we used in the previous exercise, so there shouldn't be anything new here. In addition to the guts of the CNN, the convolutional and pooling layers and the flatten layer at the top, we also have those two dropout layers that we added for regularization purposes, to avoid overfitting. Same exact thing as before. We also have a little bit of an extra line here for dealing with multi-GPU models, which is an option you can use in SageMaker. And we're going to use the Adam optimizer again. Note that we're using those parameters again here instead of hard-coded values, so the learning rate value is passed in as a parameter to this script.
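
For reference, the model-building portion is along these lines (a sketch only: the exact filter counts are assumptions based on the standard Keras MNIST CNN, args comes from the parsing sketch above, and multi_gpu_model is the helper that era of tf.keras provided):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import multi_gpu_model

# Conv/pool layers, two dropout layers for regularization, then a softmax over ten digits
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax'),
])

# Spread the model across GPUs if the training instance has more than one
if args.gpu_count > 1:
    model = multi_gpu_model(model, gpus=args.gpu_count)

# The learning rate comes from the parsed hyperparameters, not a hard-coded value
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(lr=args.learning_rate),
              metrics=['accuracy'])
```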

We can change the learning rate if we want to. We can also change the batch size and the number of epochs if we so desire. Everything else is the same. The only other thing that we've added is saving the model itself out to disk, so that we can turn around and take this trained model and deploy it to SageMaker. So now that we've seen that and we're confident that we think it works, let's go ahead and run it locally. That's what this next block will do. First, we'll create an estimator from TensorFlow that's built on the Python script we just looked at. What's going on here is we're taking that Python script that embodies our TensorFlow model, and we're going to train it locally. We're just going to do a single epoch in this example, because we're running on an instance that's not really made for running neural networks, right? So it could be kind of slow. All we want to do at this point is make sure that our model actually runs.
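
Here is a sketch of what that local-mode smoke test might look like, assuming the v1-era TensorFlow estimator in script mode (the framework version shown is an assumption):

```python
from sagemaker.tensorflow import TensorFlow

# 'local' runs the training container right on this notebook instance,
# so we can shake out script bugs before paying for a GPU instance
local_estimator = TensorFlow(entry_point='mnist-train-cnn.py',
                             role=role,
                             train_instance_count=1,
                             train_instance_type='local',
                             framework_version='1.12',
                             py_version='py3',
                             script_mode=True,
                             hyperparameters={'epochs': 1})

# file:// inputs keep the data local for this quick sanity check
local_estimator.fit({'training': 'file://data/training',
                     'validation': 'file://data/validation'})
```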

We want to make sure we don't have any weird syntax errors and that it's working as expected, and we only need one epoch to do that, so let's go ahead and kick that off. To actually start the training, we call fit on the resulting estimator with the input paths for the training and validation data. And now it is actually training. It'll take a few minutes for that to complete, so again, we'll just come back when that's done. Once I'm convinced that it's actually started, we should see some sort of output shortly. All right, it's starting to spin up now, so we'll come back when that's done. Okay, it finished that single epoch. Let's just scroll through and see what happened here. You can see that it kicked it off with the hyperparameter of a single epoch; that's the only thing we provided, and everything else is default. It printed out the actual model itself there; that's coming from the script that we wrote. Looks right to me.

And it kicked off training, and it looks like it did finish that single epoch successfully. On that one epoch, we ended up with a validation accuracy of 98.4%; not bad for one epoch. It finished successfully, so everything looks correct there. Now, if there were any errors with the script at this point, you would see the Python errors show up in this output, and you could then go back, correct them in the mnist-train-cnn.py script, re-upload it, and try again. I did have to do that a few times when I was first developing this notebook; this is where that debugging process happens. And again, you want to be doing that within the notebook instance itself if you can, to save some money and some trouble. But now that we're sure it works, we can move on to actually training this on a dedicated GPU instance, do it with the full ten epochs, and see how long that takes. So again, we're creating an estimator here; it looks just like the previous code. The difference is that we're specifying an instance type of ml.p3.2xlarge to run it on, we're going to run with ten epochs, and we're specifying the batch size and learning rate explicitly in this case. They're the defaults anyway, so it doesn't really matter. But we are going to do the full ten epochs here, and using a p3.2xlarge instance, this is where things start to cost money, guys, but it's not a lot.
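
That full training block looks something like this sketch (same assumptions as the local-mode sketch; the S3 input variables come from the earlier upload step):

```python
from sagemaker.tensorflow import TensorFlow

# Same script, now on a dedicated GPU training instance with the full run
estimator = TensorFlow(entry_point='mnist-train-cnn.py',
                       role=role,
                       train_instance_count=1,
                       train_instance_type='ml.p3.2xlarge',
                       framework_version='1.12',
                       py_version='py3',
                       script_mode=True,
                       hyperparameters={'epochs': 10,
                                        'learning-rate': 0.01,
                                        'batch-size': 32})

# These S3 paths were returned by sess.upload_data() earlier
estimator.fit({'training': training_input_path,
               'validation': validation_input_path})
```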

This runs pretty quickly, so we're only talking a few pennies here to actually run it. Let's go ahead and kick that off with Shift+Enter, and we will run fit to start the training job itself. Now, at this point, if you're following along, you may have received an error message about a resource limit exceeded exception. It may be that with some newer accounts you're not allowed to spin up GPU instances at all, because there's a limit on your account; on some of my own accounts, I've seen this happen. If you are seeing a resource limit exception at this point, there's not a whole lot you can do about it, I'm afraid. It says you can contact AWS Support to get that limit raised, but if you do that and you don't have a support plan, they might just tell you to go away.

So if you run into that, it's okay; don't panic. Just watch the rest of this video and learn by watching me instead of doing. Remember that if you do need to stop at any point, go back and shut down your notebook instance: just click on the Keras test notebook there, select it, and click on Stop when you're done. That way you won't be incurring any ongoing costs for that notebook. But let's go back here and continue on, assuming it did work for you. Right now it is trying to launch the p3 instance that we requested. Well, we only requested one of them, but it's going out there and getting it, and it will take some time for it to provision that server, set everything up, and start the training job. So, again, we'll come back when that's done.

11. Lab: Tuning, Deploying, and Predicting with TensorFlow on SageMaker – Part 3

And by the way, in addition to watching the progress here in the notebook, you can also go back to the SageMaker console, click on Training jobs, and you'll see those spinning up right now; you can monitor their progress there as well. You can see a bunch of training jobs that I ran yesterday while I was setting things up for this, and the current one that I just kicked off is in progress.

So you could wait for it to finish and show Completed status there, or you can continue to watch it in the notebook here instead; whatever works for you. Okay, that block completed. Even though it has scary-looking red text in the output, that doesn't actually indicate a problem; it's just informational stuff, and you can see what actually happened if you look through it. If we scroll down to the bottom, we should see the results of that training. Again, it printed out the actual architecture of our network, and we can see it going through the ten epochs that we requested. It got up to an accuracy of about 99.2% at the end, which is about what we got before.

So it works. We have successfully trained our convolutional neural network on a p3 instance using SageMaker. That's cool. We have saved that model again, and now we can deploy it and use it to make inferences at scale; that's the next step. Now, to save money, we're just going to deploy this to a c5 instance. Using a GPU instance is also an option, but remember, all we're doing is making predictions here, so we just have to run inputs through a pretrained neural network and look at the output. We don't really need heavy-duty GPUs to do that, necessarily, so we'll stick with an ml.c5.large for this one. Note that, just for the sake of illustration, we are going to be using elastic inference here: we're adding an ml.eia1.medium elastic inference accelerator that will sit in front of the SageMaker deployment and accelerate it for us. The endpoint name will be constructed as keras-tf-mnist plus a date stamp. It's not important what it is, just as long as we remember it so that we can use that endpoint name to make predictions after it's been deployed.
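
Roughly, the deployment cell looks like this sketch (the endpoint-name format is an assumption; the instance and accelerator types follow the transcript):

```python
import time

# Something like keras-tf-mnist-2023-01-25-12-00-00; the exact format isn't important,
# we just need to remember it so we can find the endpoint later
endpoint_name = 'keras-tf-mnist-' + time.strftime('%Y-%m-%d-%H-%M-%S', time.gmtime())

# CPU instance for inference, fronted by an Elastic Inference accelerator
tf_predictor = estimator.deploy(initial_instance_count=1,
                                instance_type='ml.c5.large',
                                accelerator_type='ml.eia1.medium',
                                endpoint_name=endpoint_name)
```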

Let's go ahead and click on that block and run it. It will have to go out and provision those resources and set them up, so it'll take a little bit of time. The output here is a little bit weird: you would expect a message when it's done saying "I'm ready", but all you get is this increasing row of dashes, and when it's done, it will just stop running and there'll be an exclamation mark at the end indicating that it finished. So we're just going to wait for that to happen, and again, we'll come back when it's done. Okay, it took several minutes, but we actually did get that deployment out there. So at this point we have a model deployed to an endpoint named keras-tf-mnist-something, sitting on an ml.c5.large instance with an elastic inference accelerator on it as well. It's just sitting there waiting for us to throw images at it to classify. So let's recap what we've done here, because it's actually pretty cool. We've taken a trained convolutional neural network for the MNIST dataset that can do handwriting recognition.

We've trained it using SageMaker on a p3 instance and saved it to a container that's now been deployed to an endpoint as a model, and it's sitting there waiting to make predictions with that trained model. Let's do that next and see if it works. Clicking this next block here: all it's going to do is select five random images and ask the endpoint to classify them for us, and we'll see what comes back. Again, the code here is not that important; you're not going to be asked to look at code or understand code on the exam itself, so don't get hung up on the syntax of what's going on from a Python standpoint. What's important is that we're calling tf_predictor.predict with that input data. The input is the reshaped images, in the format that the neural network expects, and we get back the predictions and reformat them into something we can display in a pretty manner as the net result here.
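
Here is a hedged sketch of that prediction cell, reusing the validation arrays loaded earlier; the response format shown assumes the TensorFlow Serving predictor, which returns a dict with a 'predictions' key, and the real notebook's display code is fancier:

```python
import numpy as np

# Pick five random validation images and reshape/scale them the way the CNN expects
indices = np.random.choice(len(x_val), 5, replace=False)
images = x_val[indices].reshape(5, 28, 28, 1) / 255.0

# Call the live endpoint and take the most likely class for each image
response = tf_predictor.predict(images)
predicted = np.argmax(response['predictions'], axis=1)
print('Predicted:', predicted, 'Actual:', y_val[indices])
```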

So go ahead and Shift+Enter there and kick it off. We are submitting five test images to our endpoint and waiting for it to come back with a prediction. As you can see, it's pretty quick. You wouldn't want to call it this way if you had really low latency requirements, but it came back in a reasonable time. And look, it actually worked; that's really cool. For these five random images, which are supposed to represent a 6, 7, 8, 1, and a 6, the predictions came back as exactly that: 6, 7, 8, 1, and 6. So just given those input images, our trained neural network was successfully able to say: that's a six, that's a seven, that's an eight, that's a one, that's a six, without being given those answers ahead of time. That's really awesome.

Yeah, that's cool. So let's bask in our victory here for a moment. But after you're done basking, we do need to clean up our mess, because we're incurring charges every moment that that deployment endpoint is up. Feel free to run this several times; you get a different set of images each time. Let's do it one more time, just for fun: 2, 6, 3, 6, 4, and again it got them all right, so pretty cool. When you're done playing, though, make sure you clean up. That's what the delete_endpoint function on the session does: it shuts down that endpoint instance and ensures that we will no longer be charged for it, assuming we're done with it. So don't forget to run that, guys, or you'll have a nasty surprise on your AWS bill. We've now walked through the entire process of training and deploying a model using SageMaker and TensorFlow, packaging it up as our own custom algorithm. Let's also take this opportunity to play around with automatic model tuning.
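
Before we do, here's the cleanup call in question as a minimal sketch, assuming the sess and endpoint_name variables from the earlier cells:

```python
# Tear down the endpoint as soon as you're done predicting;
# it bills for every hour it sits there, whether or not you use it
sess.delete_endpoint(endpoint_name)
```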

So we talked a lot about hyperparameters and using SageMaker to find the best hyperparameters, using its own algorithms to hone in on the right values as quickly as possible. Let's show that in action, shall we? Once again, we're going to set up an estimator, and again we're going to use an ml.p3.2xlarge here. And again, if you have a resource limit on your account, this isn't going to work, but for me it does, so let's watch and see what happens. In this code, we import from the SageMaker tuner package the pieces that we need to do hyperparameter tuning, and we set up the following hyperparameter ranges that we want to explore. We're saying we want to test the range of epochs between 5 and 20, we want to explore learning rates between 0.0001 and 0.1, and we want to explore batch sizes between 32 and 1024. I want to find out what the most optimal values for these hyperparameters are, what combination of them will yield the most accuracy. So we're looking at val_acc as the objective name that we're trying to optimize for.

That's the accuracy on the validation set, and we're going to try to maximize it. And, yeah, we just call HyperparameterTuner using the estimator that we defined and the parameters that we set above. Now, there are a couple of best practices I want to call out that are being illustrated here. One is that we're specifying the logarithmic scaling type on the learning rate. You can see that the range of this value goes from 0.0001 to 0.1, which kind of implies that we're looking at logarithmic increments there. We don't necessarily want to try every value like 0.0002, 0.0003, and so on, all the way up to 0.1; that would take forever, right? Instead, we want to approach it as a logarithmic scale, so we'll try something like 0.0001, 0.001, 0.01, 0.1, and some values in between, but we want to be a bit more aggressive in how we explore that parameter space. So with parameters like that, you want to specify a logarithmic scale to make things converge more quickly. For things like the epochs, where we're just going between 5 and 20, we're not dealing with orders of magnitude of difference, so logarithmic wouldn't make sense for that particular parameter range.

Same thing for batch size; we're not talking orders of magnitude there either, so we can continue to explore that in a linear fashion. The other thing is that we're specifying that we can have up to ten jobs in total, but only two can run in parallel at one time. And again, this is an important consideration when you're doing hyperparameter tuning, because every stage of the tuning depends on the results of the previous stage, and if you run too many in parallel, they can't learn from each other, because they're all running at the same time. So what we're saying here is that we want at most ten jobs to explore this space, and we want to run two at a time. What's going to happen is we'll kick off two hyperparameter tuning jobs to explore our two initial sets of parameters, learn from those results and kick off two more, then learn from those results and kick off two more, until we've reached a maximum of ten jobs.
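
Put together, the tuning setup is along these lines (a sketch; the metric regex is an assumption based on what Keras prints to the training logs, and the estimator and input paths come from the earlier cells):

```python
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, ContinuousParameter

# Ranges to explore; the learning rate spans orders of magnitude, so it gets
# logarithmic scaling, while epochs and batch size stay on a linear scale
hyperparameter_ranges = {
    'epochs':        IntegerParameter(5, 20),
    'learning-rate': ContinuousParameter(0.0001, 0.1, scaling_type='Logarithmic'),
    'batch-size':    IntegerParameter(32, 1024),
}

# Maximize validation accuracy, parsed out of the training job's log output
objective_metric_name = 'val_acc'
metric_definitions = [{'Name': 'val_acc', 'Regex': 'val_acc: ([0-9\\.]+)'}]

tuner = HyperparameterTuner(estimator,
                            objective_metric_name,
                            hyperparameter_ranges,
                            metric_definitions,
                            objective_type='Maximize',
                            max_jobs=10,          # at most ten training jobs in total
                            max_parallel_jobs=2)  # two at a time, so later jobs can learn from earlier ones

# Kick off the tuning job; it runs asynchronously, so watch it in the SageMaker console
tuner.fit({'training': training_input_path,
           'validation': validation_input_path})
```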

Obviously, you could go even higher if you wanted even better-tuned results, but these p3 instances aren't cheap, so we want to cap it at some level to make sure we don't end up paying a huge amount of money for these hyperparameter tuning jobs. Let's go ahead and run that to define our hyperparameter tuner, and then by calling fit, we kick off the tuning job itself. Again, this does cost money, guys; for me, it's only a couple of US dollars to run, which isn't a whole lot, but if you're squeamish about spending money, do think twice about running it. Let's go ahead and kick that off now. Interestingly, it looks like it finished immediately, right? Like, wow, that was really fast. No, it's not that fast; not at all. What you need to do is go back to the SageMaker console, and if you go to Hyperparameter tuning jobs, you can see the one that I ran yesterday, and the one that I just started is kicking off right now. You can see that so far it's completed zero jobs out of two total; those are the first two running in parallel. After those two run, it will say two out of four, then four out of six, until it finally gets to ten out of ten. Doing that will take about 26 minutes, based on how long it took yesterday. Now, I don't want to sit around for half an hour waiting for that to finish.

So I'll just walk you through the rest here. If you are following along yourself, though, hit pause at this point and come back in half an hour; it should be done. When it is done, you can then deploy the model that was the best fit. By just calling tuner.deploy, it will automatically take the final parameters that it converged on and turn around and deploy the resulting model to your endpoint. So really, really easy, right? Once I've called tuner.fit, deploying the best set of hyperparameters is just a matter of calling tuner.deploy, and it looks just like the other deploy call: I'm deploying to a single ml.c5.large instance with an elastic inference accelerator on it, with a given endpoint name, and it just does it.
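
For reference, that deploy step is roughly the following (a sketch; same instance and accelerator choices as the earlier endpoint):

```python
# Deploy the best model found by the tuning job to its own endpoint
best_predictor = tuner.deploy(initial_instance_count=1,
                              instance_type='ml.c5.large',
                              accelerator_type='ml.eia1.medium')
```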

So if you look at what happened when I ran this yesterday, you can scroll through and see what those parameters turned out to be. If we get down to the bottom, you can see it ran that training script several times, and in the end we ended up with a validation accuracy of 99.22%. If we scroll up, we should see the actual parameters it used for that run; it looks like it settled on a batch size of 576, with 19 epochs and a learning rate of about 0.0087. Again, we could have gone further and allowed it to do more training jobs to try to converge on an even better set of parameters, but the final result is pretty comparable to what we got when we started, so I don't think we could do a whole lot better; these are a reasonable set of parameters. Now that we've deployed that, we can issue another prediction just to make sure it still works. If you were to run this again, it does in fact work; you can see my output there from earlier: 1, 2, 6, 9, 5. It still works, so we still have a good model. When you're done, again, make sure that you clean up that endpoint: just click in that cell and hit Shift+Enter to get rid of the endpoint and make sure you don't incur any more costs. Also remember to shut down your notebook instance now that we're done.

To do that, go back to SageMaker. If you're following along, then by now that hyperparameter tuning job has completed and you've been playing with it; I cheated by moving ahead. I'm going to go back to my notebook instances here, click on the Keras test notebook, and click on Stop, and that will ensure that I'm no longer charged for that notebook instance either. So at this point we've cleaned up our mess. We've seen SageMaker in action doing hyperparameter tuning, deploying that model, and making predictions on the deployed model successfully. So there it is: SageMaker in action, using a more complicated example. Pretty cool stuff. And you can see the power of SageMaker and how easy it makes it to train and deploy these models at scale.
