Amazon AWS Certified Machine Learning Specialty – Modeling Part 15
41. The Best of the Rest: Other High-Level AWS Machine Learning Services
Let’s quickly go through some other machine learning services that probably won’t be covered a whole lot on the exam, but are still worth knowing about just in case. First of all, we have Amazon Personalize, and this is a new service that probably isn’t on the exam yet, but it might be on future revisions of it. Basically, it’s exposing Amazon’s recommender system as a web service. And this is something I personally worked on at Amazon a long, long time ago. It got shelved at the time, and they finally resuscitated it and released it, it seems. I guess they figured the time was right. But yeah, basically it’s a collaborative filtering engine.
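Just to make that concrete, here is a minimal, hypothetical sketch of querying Personalize for recommendations with boto3, assuming a solution has already been trained and deployed behind a campaign; the campaign ARN and user ID below are placeholders, not anything from the course.

import boto3

# Personalize is queried at runtime through the personalize-runtime client.
personalize_rt = boto3.client('personalize-runtime')

# Placeholder campaign ARN and user ID; a real campaign must already be deployed.
response = personalize_rt.get_recommendations(
    campaignArn='arn:aws:personalize:us-east-1:123456789012:campaign/my-campaign',
    userId='user-42',
    numResults=10
)

# Each recommended item comes back with an item ID.
for item in response['itemList']:
    print(item['itemId'])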
So you feed in data about which users bought or looked at which items, and it gives you back recommendations for each user of other things they might be interested in. That’s all Amazon Personalize is. They also have a service called Amazon Textract, which, again, sounds like what it does. It’s basically OCR, optical character recognition. So you can feed in images of scanned pages or what have you, and it can deal with things like forms and fields and tables that are in that image as well. So it’s not just straight-up OCR; it can actually extract structured information from a page of text as well. For example, you could imagine a system that scans in the pages of a book and turns them into text that could be spoken back through Amazon Polly, and presto, you’ve built your own automated Audible, if you will.
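As a rough illustration of that "build your own Audible" idea, here is a hedged boto3 sketch: Textract pulls the text out of a scanned page sitting in S3, and Polly speaks it back as MP3. The bucket name, object key, and voice are placeholders.

import boto3

textract = boto3.client('textract')
polly = boto3.client('polly')

# Run OCR on a scanned page stored in S3 (bucket and key are placeholders).
ocr = textract.detect_document_text(
    Document={'S3Object': {'Bucket': 'my-book-scans', 'Name': 'page-001.png'}}
)

# Stitch the detected lines of text back together into one string.
page_text = ' '.join(
    block['Text'] for block in ocr['Blocks'] if block['BlockType'] == 'LINE'
)

# Have Polly read the page aloud and save the audio (long pages would need chunking).
speech = polly.synthesize_speech(Text=page_text, OutputFormat='mp3', VoiceId='Joanna')
with open('page-001.mp3', 'wb') as f:
    f.write(speech['AudioStream'].read())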
So that’s Textract. There are also a couple of cool little things that they offer. One is AWS DeepRacer. This is more of an educational thing, basically. It is an actual 1/18 scale race car that is powered by AI. So they actually have races and competitions out there to see whose DeepRacer can outrun the other ones or navigate a maze better. It uses reinforcement learning, so it’s a good real-world example of reinforcement learning. We talked earlier, with reinforcement learning in SageMaker, about the example of a Pac-Man agent being trained using reinforcement learning. You can imagine bringing this into the real world by actually putting a 1/18 scale race car in an actual maze and using reinforcement learning to navigate that maze.
So it’s just kind of a fun thing. DeepLens is also more of an educational thing. It’s a deep-learning-enabled video camera, but you could actually use it for a real application, too. It has integration with Rekognition, SageMaker, Polly, TensorFlow, MXNet, and Caffe. So you could use, for example, AWS IoT Greengrass to deploy a pre-trained model, maybe compiled using SageMaker Neo, to a DeepLens device, and actually do deep learning on the edge, doing object recognition and things like that within the DeepLens camera itself. So it’s a very quick way to get up and running with image recognition and other computer vision applications with a camera that’s tightly integrated with AWS.
42. Putting them All Together
So again, the exam tends to focus on building larger systems. And since these are very high-level services that we’re talking about, most of the questions you can expect are going to be in the context of how you put these things together in a meaningful way. So use your imagination. You’re not going to encounter these specific examples on the exam, but start to wrap your head around how they might fit together in these different ways. So we talked about building our own Alexa, for example. We might use Amazon Transcribe to convert speech to text, feed that text into Amazon Lex to actually do the chatbot part of it, and then take the output of that chatbot from Lex and speak it using Amazon Polly.
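Here is a hedged sketch of the Lex-to-Polly half of that pipeline. Transcribe is an asynchronous batch/streaming service, so assume the speech has already been converted to text; the bot name, alias, session ID, and voice are placeholders.

import boto3

lex = boto3.client('lex-runtime')
polly = boto3.client('polly')

# Assume Amazon Transcribe has already turned the user's speech into this text.
user_text = 'What is the weather like today?'

# Send the text to a (hypothetical) Lex bot and get its reply.
lex_reply = lex.post_text(
    botName='MyAssistantBot',
    botAlias='prod',
    userId='session-123',
    inputText=user_text
)

# Speak the bot's reply back with Polly.
audio = polly.synthesize_speech(
    Text=lex_reply['message'], OutputFormat='mp3', VoiceId='Matthew'
)
with open('reply.mp3', 'wb') as f:
    f.write(audio['AudioStream'].read())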
I think we mentioned making a universal translator before as well. You can imagine using Transcribe again to transcribe the speech that it hears into text. Then we can use Amazon Translate to translate that text into another language, and then turn around and use Polly to speak that translated text back. Kind of cool stuff. That could actually work; as we saw, Translate can work on streaming data, so you could actually build a Star Trek-style universal translator out of that. That’s kind of fun stuff.
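The translation step in the middle of that pipeline is a single call to Amazon Translate. A minimal sketch is below, again assuming the speech has already been transcribed to text and picking English-to-Spanish arbitrarily.

import boto3

translate = boto3.client('translate')

# Text we assume came out of Amazon Transcribe.
heard_text = 'Where is the nearest train station?'

result = translate.translate_text(
    Text=heard_text,
    SourceLanguageCode='en',
    TargetLanguageCode='es'
)

# This translated text would then be handed to Polly to speak aloud.
print(result['TranslatedText'])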
We could also get creepy and build a Jeff Bezos detector. Jeff Bezos is a celebrity that’s recognized by Amazon Rekognition, so it’s good that he’s eating his own dog food there. You can imagine building a system on DeepLens where you just set up a DeepLens camera on a corner somewhere in Seattle, hook it up to Amazon Rekognition’s celebrity detection, and maybe fire off some sort of an alarm when it sees Jeff Bezos. You could also imagine building a system to keep track of whether people on the phone seem happy or not. For example, you could use Transcribe again to transcribe speech into text and then maybe use Amazon Comprehend to do sentiment analysis on that text.
So you can imagine a system where it’s automatically telling you in real time whether the person on the phone is happy or not. Maybe that would be useful in the context of customer service or some sort of a call center, right? So there are interesting ways of putting these services together to build interesting applications and interesting systems. And you just need to understand what each service does to understand how they might fit together in systems like this.
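The sentiment piece of that call-center idea boils down to one Comprehend call per chunk of transcribed speech. Here is a minimal sketch, assuming the transcript text is already in hand; the example sentence is made up.

import boto3

comprehend = boto3.client('comprehend')

# A snippet of transcribed caller speech (assumed to come from Amazon Transcribe).
caller_text = 'I have been on hold for an hour and nobody can fix my problem.'

result = comprehend.detect_sentiment(Text=caller_text, LanguageCode='en')

# Sentiment is POSITIVE, NEGATIVE, NEUTRAL, or MIXED, with confidence scores.
print(result['Sentiment'], result['SentimentScore'])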
43. Lab: Tuning a Convolutional Neural Network on EC2, Part 1
The AWS Machine Learning exam tries to be aimed at people who actually have real industry experience in machine learning. So it not only tests your book knowledge of what the different algorithms do, but it also tests your ability to tune those algorithms, identify problems in their training, and spot cases where a model might be overfitting or shooting past local minima, things like that. And often the only way to learn that is by just doing it. So let’s do some of it. In this exercise, we’re going to actually set up a convolutional neural network, a deep learning system, if you will, and we will tune it, take a look at its results, and figure out whether it’s overfitting or not. We will do this on EC2 because this is an AWS course. But again, a lot of the exam is not AWS-specific, and this topic in particular is applicable outside of AWS as well. But we still need a machine to work on. So let’s go ahead and go to EC2. So in your AWS Management Console, let’s type in EC2. And we want to find an AMI that lets us start using TensorFlow out of the box. I’m a TensorFlow fan; that’s what I use for deep learning, so I’m going to stick with that here. Click on Launch Instance and let’s search for deep learning.
And I like Ubuntu; that’s what I’m used to. We’ll click on the AWS Marketplace results, and we’ll see that there is a Deep Learning AMI for Ubuntu available. There’s a wide range of charges depending on what sort of instance you use, but the bottom end is like two cents an hour. Reasonable enough. Let’s go ahead and select that. It’s not free, guys, but you can choose your instance type here. This is deep learning, and deep learning works best on a GPU, so ideally you want to use a P-something-or-other instance. Let’s scroll down to those. Looks like a p3.2xlarge is about $3 an hour. That’s going to be your default choice for this sort of a thing. But this isn’t going to be a huge neural network, and I’m not going to be training it with a large amount of data, so a p2.xlarge should meet my needs. And that’s only $0.90 an hour. So hey, I can afford that. Let’s remember that. Now, we have to keep in mind, too, that not every AWS account has access to GPU instances. If you do have a newer account, you might not be allowed to spin up a p2.xlarge, and you’ll find out the hard way later on if you’re following along. If that’s the case, it will give you a big error message saying you are not allowed to do this. I don’t know what their logic is for allowing some accounts to use them and not others, but make sure you have a backup plan of a non-GPU instance as well, if you need to. A c3.large should be just fine, and that’s also about $0.10 an hour. So whatever is available in your region and allowed in your account, go for it. As for me, I’m going to shoot for a p2.xlarge.
So let’s hit Continue. Just make sure you know what you’re paying for here and what it costs. I mean, you definitely wouldn’t want to go for a p3dn.24xlarge and spend $30 an hour for no reason. Remember, EC2 here is billed by the hour; even if you only use it for ten minutes, you’re going to be billed for that hourly rate at a minimum. With that out of the way, let’s actually choose our instance type. As you recall, I chose a p2.xlarge because I know my account will allow GPU instances. Let’s go ahead and select that here, then Review and Launch. All right, just a reminder that it is not free, but it’s cheap; I’m okay with that. And hit Launch again. We need to set up a key pair for it. I’m going to use one of my existing ones here. If you’ve used EC2 before, then you probably have one sitting around as well. If you do need to create one, however, you can select Create a new key pair here, and it will allow you to create a new one. You will download what’s called a PEM file that contains your private key. And if you’re on Windows using PuTTY, you will need to convert that resulting PEM file to a PPK file using PuTTYgen. You’ll find that in your Start menu under PuTTY; PuTTYgen is what we use to convert the PEM file to a PPK if you need it. I’ve already done that, though, and if you’ve used EC2 before, you’ve probably already done that as well.
So let’s go ahead and check the acknowledgment box just to confirm that I do have access to that key pair file, so I know I can actually log into the resulting EC2 instance, and hit Launch Instances. And now we just wait for it to spin up. All right, they are now launching. Let’s click on the resulting instance here and monitor its progress. It’s initializing right now. It’s spinning up pretty quickly, so that’s cool. One thing that we need to do, though, is make sure that we can actually connect to it. So while we’re waiting for that to finish initializing, let’s go down to its security group here. Click on the security groups, go to Inbound, and we’ll edit that. And let’s just lock down SSH to our own IP address and save; it’s a little bit more secure that way, so everyone and his brother can’t just log into our instance there. Let’s go back to the EC2 dashboard, or to the instances rather, and we can see that it’s still initializing. Let’s go ahead and just come back to that when it’s done, through the magic of video editing. I’ll be back when that’s ready. Okay, our instance is up and running now, and now we just need to connect to it. Let’s click on that and click the Connect button for some guidance.
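By the way, everything we just clicked through in the console, picking the AMI and instance type, choosing a key pair, locking SSH down to our IP, can also be scripted. Here is a rough boto3 sketch; the AMI ID, key pair name, and security group ID are placeholders, and the Deep Learning AMI ID varies by region.

import boto3

ec2 = boto3.client('ec2')

# Placeholder AMI ID for the Deep Learning AMI (Ubuntu); look up the real one in your region.
instances = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',
    InstanceType='p2.xlarge',         # fall back to a non-GPU type if your account disallows GPUs
    KeyName='my-key-pair',            # placeholder key pair name
    SecurityGroupIds=['sg-0123456789abcdef0'],  # a group with SSH locked to your own IP
    MinCount=1,
    MaxCount=1
)

print(instances['Instances'][0]['InstanceId'])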
So we have our public DNS name here. We don’t just want to run SSH and connect to it from a command line, though. We also want to connect to it as a Jupyter notebook, and that means being able to tunnel through with SSH to open up an actual web-based notebook environment on that host. So let’s take care of that. To do that, let’s first of all copy that public DNS name out here. Now, I’m on Windows, so I’m going to be using PuTTY to connect here. We’ll open up PuTTY and paste in our hostname. We’ll go to Connection > SSH, open that up, select Auth, and then select our private key. For me, that’s my sundog EC2 key; for you, it will be something else. And we need to set up a tunnel, like we said. This tunnel is going to go from source port 8888 to a destination of localhost:8888. We’ll hit Add, then Open, and click Yes to accept that new host key. And the login ID is ubuntu, like that. All right, we’re into our host here. And now we just need to start up the Jupyter notebook server so we can actually start playing around with it. Just type in jupyter (with a “y”) notebook, and allow that a moment to spin up. And we need to copy this token, this URL here that it gave us. So I’m just going to highlight that here in my console.
That will automatically copy it to my clipboard; go back to my browser, paste that in. And we are now communicating with our EC2 host through a tunnel to actually get a Jupyter notebook in our web browser. That’s pretty cool. So we’re securely connected there. It’s only open to our IP address, so everything’s nice and safe and secure. So let’s go ahead and import our own little notebook here to play around with. Click the Upload button here and navigate to your course materials. From there, I want you to click the Keras CNN tuning notebook (the .ipynb file), hit the Upload button, and select it from the list here, and it should open up that notebook. All right, we’re in business. So we’re going to run a pretty classic example here called the MNIST handwriting dataset.
It’s a very widely used example for learning deep learning, if you will. It contains handwriting samples of people writing the numbers zero through nine, and we will try to construct a deep neural network to classify these images of hand-drawn digits into the numbers that they represent. The first thing we need to do is import TensorFlow itself. And again, don’t get caught up on the actual code here; you will not be expected to understand Python code, read it, or even recognize it on the actual exam.
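Still, just to give a feel for what the notebook is doing at this step, here is a minimal sketch of loading and preparing MNIST with the Keras API that ships with TensorFlow; the exact code in the course notebook may differ somewhat.

import tensorflow as tf

# Load the MNIST handwriting samples: 60,000 training and 10,000 test images, 28x28 grayscale.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Scale pixel values to 0-1 and add a channel dimension for the convolutional layers.
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# One-hot encode the labels 0-9 into 10-element vectors.
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)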
What’s important is that you understand what you can do here, how you can interpret the results of this training, and what actions you might take in response to what you see. Now, before we start running this code, we need to make sure that we’re running in the right kernel. Right now, it’s just treating this as default Python 2 code. That’s not what this is at all; it’s actually Python 3 code that depends on the TensorFlow environment. So let’s go to the Kernel menu up here and say Change kernel, and we’re going to select conda_tensorflow_p36. That means we’re going to be using Python 3.6 with TensorFlow. The kernel is starting.
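Once the kernel is up, the notebook builds and trains a small convolutional network along these lines. This is a hedged sketch of a typical MNIST CNN, not necessarily the exact architecture in the course notebook, and it reuses the x_train/y_train data prepared above.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),                      # dropout to help prevent overfitting
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax')     # one output per digit 0-9
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train, keeping the test set as validation data so we can watch for overfitting
# (training accuracy climbing while validation accuracy stalls or drops).
history = model.fit(x_train, y_train, batch_size=32, epochs=10,
                    validation_data=(x_test, y_test))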