100% Real Microsoft Certified: Azure AI Engineer Associate Certification Exam Questions & Answers, Accurate & Verified By IT Experts
Instant Download, Free Fast Updates, 99.6% Pass Rate.
Download Free Microsoft Certified: Azure AI Engineer Associate Practice Test Questions VCE Files
| Exam | Title | Files |
|---|---|---|
| AI-102 | Designing and Implementing a Microsoft Azure AI Solution | 6 |
Microsoft Certified: Azure AI Engineer Associate Certification Exam Dumps & Practice Test Questions
Prepare with top-notch Microsoft Certified: Azure AI Engineer Associate certification practice test questions and answers, VCE exam dumps, study guide, and video training course from ExamCollection. All Microsoft Certified: Azure AI Engineer Associate certification exam dumps and practice test questions and answers are uploaded by users who have passed the exam themselves and are formatted into VCE files.
Now we're still on the topic of the Computer Vision API, and in this video we're going to talk about detecting brands and logos in images. If we look at the official Microsoft docs for this, we can see that the brand call returns both the name of the brand and its location, given as X and Y coordinates with width and height values. Switching over to GitHub and going into DetectBrands.py, we can see that, again, we set up a Computer Vision client and point to a new image that contains a brand. In this case we'll use the Microsoft logo on a hoodie, and that image is hosted on GitHub as well. We specifically ask for brands as the visual feature on the remote image, so we call the generic analyze_image method with the URL but request only the brands feature. Once again, the result contains not only the brands but also the confidence score and the location. We can do this locally or remotely; for a local file we use the in-stream version of the method. So let's switch over to PyCharm and have a look at how this works. Once again we make sure that our Cognitive Services endpoint and key are in the code. We execute the code, and eventually it finds the Microsoft logo; in fact, it finds two Microsoft logos in this image. For the first one, I went into Photoshop and outlined it, and we can see that it identified the logo symbol on its own. The second one is the complete Microsoft word mark, including the logo.
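As a minimal sketch of the flow described above, in Python; the endpoint, key, and image URL here are placeholders rather than the exact values from the course's DetectBrands.py:

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

# Placeholder endpoint and key -- use your own Cognitive Services values.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Hypothetical URL of an image containing a brand or logo.
image_url = "https://example.com/microsoft-hoodie.jpg"

# Ask only for the 'brands' visual feature on the remote image.
analysis = client.analyze_image(image_url, visual_features=[VisualFeatureTypes.brands])

# Each detected brand comes back with a name, a confidence score, and a bounding box.
for brand in analysis.brands:
    r = brand.rectangle  # x, y, width, height
    print(f"{brand.name} (confidence {brand.confidence:.2f}) at x={r.x}, y={r.y}, w={r.w}, h={r.h}")
```

For a local file, the equivalent call is analyze_image_in_stream with an open file object instead of a URL.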
Now we're going to talk about content moderation. Obviously, if the Computer Vision service can analyse images, extract tags and descriptions, and identify faces, then one of the natural extensions is to have it analyse image content that may be uploaded to your website, looking for adult or otherwise objectionable material. If we go into the official documentation, we can see there's a "detect adult content" page. The three categories of content it looks for are, first, what are called adult images: explicit images that often show nudity or sexual acts. Second, racy images, which could be lingerie images, or images that are not necessarily sexual but perhaps suggestive. And finally, the third type of content it can detect is what's called gory, which covers violence and blood. The way this is returned to you is as three boolean properties, adult content, racy content, and gory content, each true or false, along with a score on a scale of zero to one. Obviously, some things can be slightly racy but not completely, et cetera, so there's a score alongside each flag. Going into the GitHub for this course, the AI-102 files, there is a moderate-content Python file. Everything is set up identically in terms of the Computer Vision client. What you provide, of course, is a URL to analyse, and you request the adult feature. Where before we were only asking for the brands feature, now we're asking for the adult feature, which you pass into the client's analyze_image method, and it returns those results: whether there is adult content, racy content, or gory content, and the specific score for each. For this one, I'm going to leave it to you: if you want to download this code, set it up, and enter your own image URL, you can certainly see whether the Computer Vision client detects it as an adult image or not. Pretty straightforward, and as you can see, very much in line with the other examples we've seen so far. Again, just as with brands, you're basically calling analyze_image with a particular visual feature, and so far brands and adult are two of the features we can ask for.
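A minimal sketch of the moderation call, again with placeholder endpoint, key, and image URL (the course's moderate-content file may differ in detail):

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("https://<your-resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<your-key>"))

# Hypothetical image URL to moderate.
image_url = "https://example.com/upload-to-check.jpg"

# Request only the 'adult' visual feature.
analysis = client.analyze_image(image_url, visual_features=[VisualFeatureTypes.adult])

# Three boolean flags, each with a 0-to-1 score alongside it.
adult = analysis.adult
print("Adult:", adult.is_adult_content, f"(score {adult.adult_score:.2f})")
print("Racy: ", adult.is_racy_content, f"(score {adult.racy_score:.2f})")
print("Gory: ", adult.is_gory_content, f"(score {adult.gore_score:.2f})")
```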
Alright, so the last lesson in this section has to do with generating thumbnails based on an image. Now you might wonder, why do we need an AI service to generate thumbnails? Well, the cool thing about this AI service is that it tries to generate the thumbnail from the relevant portion of the picture. So if you have a large image with a lot of empty space and there's a face or a landmark in it, you can use the Generate Thumbnail API, and it will create a thumbnail with the dimensions you specify, focused on the area of interest. If we switch over to GitHub and look at the code in GenerateThumbnail.py, we can see that we do the local version first, then scroll down to the remote one, where we pass in a remote image URL. The real work is done by the generate_thumbnail method. For this method, you have to pass in a number of things: the width and height of the thumbnail, the URL of the image, and lastly whether you want to use what they call smart cropping, which puts the area of interest at the centre of the thumbnail. If you execute this code, it returns binary image data, which you then have to save somewhere to be able to use; in this case, it's saved to a local file. So, yeah, it's pretty straightforward: it generates a thumbnail from an image you provide, at a size you provide, focused on the part of the image you're interested in. So let's have a look at this in action in PyCharm. We bring the code into PyCharm, make sure again that we have our endpoint and API key in there, and run the code. There's not much to see in the standard output; it just tells us that it finished, so we actually have to go to our local folder and look at the image that got created. First, I look at the locally generated thumbnail: a 100 x 100 image that took the local, or in this case remote, image and cropped it to that size. Now, this particular image doesn't have a single focal point; there are five faces, so it wasn't able to focus on one in particular. But if we had an image with a lot of empty space and one clear focal point, we would have seen it crop intelligently around that point.
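A minimal sketch of the remote thumbnail call, assuming a placeholder endpoint, key, and image URL; the course's GenerateThumbnail.py may structure this slightly differently:

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("https://<your-resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<your-key>"))

# Hypothetical remote image to thumbnail.
image_url = "https://example.com/group-photo.jpg"

# Request a 100x100 thumbnail with smart cropping around the area of interest.
thumbnail = client.generate_thumbnail(100, 100, image_url, smart_cropping=True)

# The call returns binary image data in chunks, which we save to a local file.
with open("thumbnail_remote.png", "wb") as f:
    for chunk in thumbnail:
        f.write(chunk)
```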
So in this demo, we're going to go back into Visual Studio and create a C# application that uses the Computer Vision API and the image analysis feature. I'm in .NET here, and I'm going to create a new console app called ComputerVisionDemo, and we'll do it in .NET Core 3.1, which is cross-platform, of course. We're seeing the familiar Hello World sample app, and what we have to do is set this up to use the Computer Vision service. I'm going to go into the NuGet package manager and search for "computer vision". I'm looking for the Microsoft.Azure.CognitiveServices.Vision.ComputerVision package, and we can attach it to the project. I'll just take the latest version as usual, install the dependencies, and accept the licenses. That's pretty much all there is to it. Now we want to use the Computer Vision namespaces, right? So we're going to add using statements for Microsoft.Azure.CognitiveServices.Vision.ComputerVision and its Models namespace, so we can get some of those data types into our application. Just like with previous demos, we have to get our subscription key as well as the endpoint from our Cognitive Services account in Azure, so I'm going to copy the endpoint and key from the service's Keys and Endpoint section. The first thing we're going to do inside the Main method is create what is called a ComputerVisionClient; essentially, we'll call a method that handles authentication for this. I'm going to have to create a method called Authenticate for that, which I'll do in a second. The next thing we want to do is pass an image into an AnalyzeImage function, which we haven't written yet, and retrieve the results. That is pretty much the purpose of our application: create a client, then call out to a method. So let's do the easy bit, which is setting up the client. The ComputerVisionClient, as you can see, is a class. We create the client variable with a new ComputerVisionClient, pass the subscription key and the endpoint into the constructor, and return that object. Creating a client is simple enough that we could have done it inline, but here it's a standalone method for the same purpose. The AnalyzeImage method is where the real magic happens. What we're going to see in this case is that it's a bit more complicated than just calling a single method with a URL: we want to set it up so that we get back the analysis we're looking for, the features of the image we're curious about. I've picked an image here that is available under Creative Commons, which means it's free to use, and I'm going to pass this image into my programme to see what the computer can make of the backs of these two heads, the couch, the television, and the whole scene. So this is the somewhat complicated image that we're going to pass in. I created a constant here with the URL of the image, pointing to my own Azure Storage blob copy of this Creative Commons image. Now we can start to work on the AnalyzeImageUrl method. In order to make this an asynchronous task, we're going to have to add some more elements to the using section.
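For comparison with the C# Authenticate method described here, a minimal Python version of the same helper might look like this (the endpoint and key are placeholders):

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

def authenticate(endpoint: str, key: str) -> ComputerVisionClient:
    """Create a Computer Vision client from a Cognitive Services endpoint and key."""
    return ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Placeholder values -- copy these from your resource's Keys and Endpoint blade.
client = authenticate("https://<your-resource>.cognitiveservices.azure.com/", "<your-key>")
```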
We're going to deal with some collections in a second, and we might as well get the Newtonsoft.Json using statement in there as well while we're down here. Alright, so we have our AnalyzeImageUrl method started. In this particular case, we're not going to call the analyze method without passing in some parameters, and with Computer Vision the parameters we're interested in are called visual features. We can basically request that Azure analyse this image for specific traits. So I'm going to create here (I messed up, let me clean this up a little) a list of visual features. You can see the type is called VisualFeatureTypes, and this is just a list of those values: we're asking for categories, description, faces, image type, tags, the adult flag, the dominant colours, any brands that are recognised, and also the objects, so that's quite a lot. When we get back from the call to AnalyzeImage, we're going to look at each of these separately and analyse the results for each. With that set up, we can now call AnalyzeImageAsync on the Computer Vision client, passing the visual features as a parameter, and this returns one big result covering all of these dimensions, which we can then start to pull apart. Before we continue, I'm just going to run the program and examine the results variable that gets set at this point, so that we can actually see what's being returned. So I hit F5 here and let the programme run; fairly quickly it sets up the Computer Vision client, calls AnalyzeImageAsync, and it's done. Let's add a quick watch on the result, and we can see that the adult, brands, categories, color, description, and faces properties are populated; these are the items we requested as features. The adult and gory content flags come back as false, and it didn't recognise any brands in that image at all. You can see there's a dominant colour, black; it's not a black and white image, though, et cetera. So we can start to pull some of these things out in our code, so I'm going to stop this. Let's take a look at the caption the Computer Vision service came up with. Basically, I'm going to look at results.Description.Captions and loop through all of the captions that came back, writing them out to the console; even the captions have a confidence score. The other thing we were looking for was categories, and each of the categories we saw in the results variable stands alone, not under the description, so results.Categories is another collection we can loop through. The same with tags: we can look at any tags that come out of this in a loop, and there's a confidence score that comes back with each. Next up, we'll look at the objects that are recognised, and these actually come back with a bounding rectangle showing where the object is in the image. Now, we know there aren't any visible faces, but let's see if Microsoft was able to locate any. We already looked at the adult flags, so we know those come back as false, and we didn't see any brands, celebrities, or landmarks.
I'll paste the code in here, but it's not really relevant to this photo; we've already seen that Computer Vision identified black as the dominant colour. And finally, what would it say about whether the image is a photograph, clip art, a line drawing, et cetera? So those are the analysis features, if we go back to the top, that we asked it to recognise. Let's run this again; this time it's going to write the output to the console. Now that it's analysed the image, I can flip back to the console, and we can see the caption "a group of people playing a video game", with a confidence score of 0.47. That's pretty good for a computer recognising the right scene. The category it picked comes back with a very low confidence score, something like an "outdoor" category at around 0.07, and it's not really outdoor anyway. You can see that it's identifying tags such as clothing, people, computers, monitors, indoor, display devices, and furniture; that's pretty good recognition there. And to close out this analysis, we can see it's recognising five people and a television as objects, no faces, no adult content or gory content, and no brands, celebrities, or landmarks. It is not a black and white photo, although it does have black foreground and background colours, and it is not clip art or a line drawing. So the Computer Vision service did the best it could to analyse this photo, and I purposely chose a fairly complicated photo with no clear faces and nothing obviously going on that would make it easy to detect. But it did a pretty good job. So that's how you can use the Computer Vision service in your code: you can be very specific, pulling out only the individual facets you're interested in, or you can pull out dozens of facets, as we did here.
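The demo itself is written in C#; for consistency with the earlier Python lessons, here is a rough Python sketch of the same multi-feature analysis, with a placeholder endpoint, key, and image URL and my own output formatting:

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("https://<your-resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<your-key>"))

# Hypothetical URL of the Creative Commons image stored in a blob.
image_url = "https://example.blob.core.windows.net/images/living-room.jpg"

# Ask for many features at once, mirroring the C# VisualFeatureTypes list.
features = [
    VisualFeatureTypes.categories, VisualFeatureTypes.description,
    VisualFeatureTypes.faces, VisualFeatureTypes.image_type,
    VisualFeatureTypes.tags, VisualFeatureTypes.adult,
    VisualFeatureTypes.color, VisualFeatureTypes.brands,
    VisualFeatureTypes.objects,
]
results = client.analyze_image(image_url, visual_features=features)

# Captions and categories each carry a confidence/score value.
for caption in results.description.captions:
    print(f"Caption: '{caption.text}' (confidence {caption.confidence:.2f})")
for category in results.categories:
    print(f"Category: {category.name} (score {category.score:.2f})")
for tag in results.tags:
    print(f"Tag: {tag.name} (confidence {tag.confidence:.2f})")

# Detected objects come back with a bounding rectangle.
for obj in results.objects:
    r = obj.rectangle
    print(f"Object: {obj.object_property} at x={r.x}, y={r.y}, w={r.w}, h={r.h}")

print("Faces found:", len(results.faces))
print("Adult:", results.adult.is_adult_content, "Racy:", results.adult.is_racy_content)
print("Dominant colors:", results.color.dominant_colors)
print("Clip art type:", results.image_type.clip_art_type,
      "Line drawing type:", results.image_type.line_drawing_type)
```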
ExamCollection provides complete prep materials in VCE file format, including Microsoft Certified: Azure AI Engineer Associate certification exam dumps, practice test questions and answers, a video training course, and a study guide, which help exam candidates pass the exam quickly. Microsoft Certified: Azure AI Engineer Associate certification exam dumps and practice test questions with accurate answers are verified by industry experts, updated fast, and taken from the latest pool of questions.
Microsoft Certified: Azure AI Engineer Associate Video Courses