CompTIA CYSA+ CS0-002 – Vulnerability Scanning Part 1
1. Identifying Vulnerabilities (OBJ 1.3)
Identifying vulnerabilities. In this lesson, we’re going to talk about the importance of identifying vulnerabilities, and the way we do this is through a vulnerability assessment. Now, it is really important to identify vulnerabilities so that you can then mitigate those vulnerabilities. Remember, every vulnerability in your system represents a risk, and you need to understand what those risks are so you can accept them, mitigate them, transfer them, or avoid them. Doing that is going to be important to the security of our network, and we do that through vulnerability assessments. Now, a vulnerability assessment is an evaluation of a system’s security and its ability to meet compliance requirements based on the configuration state of the system, as represented by the information collected from that system.
Now, that’s a long way of saying we have a piece of software that’s going to go out there, scan our network, learn about our network, and then report back to us the status of that network, and then we’re going to make decisions based on that. Now, when we go and conduct a vulnerability assessment, there are really three main steps. The first one is to collect a predetermined set of target attributes. These will be things like specific parameters, such as the rules for a firewall or the security policy for a Windows server, or whatever it is that you want to check. Then we go into our second step, which is analyzing the differences between the current and the baseline configurations. So if I set up this firewall and I only had two ports open, and I check it today and there are 20 ports open, there’s a big change there.
We need to figure out what those 18 different ports are and why they were all opened. That’s the idea of analyzing the differences here. And then the third thing we want to look at is reporting the results. Now that we’ve gone and collected the information and analyzed the information, we are going to report on that information. And this is really the simple three-step process we’re going to use. Now, vulnerability assessments are typically going to be accomplished, though, using automated tools. The reason for this is that it would take a long time for me to go and look at every single computer on my network. The last network I worked on had over 1 million endpoints. There is no way that I’d have enough time in the day to go and look at every single one of those workstations.
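To make that second step a little more concrete, here is a minimal Python sketch of that baseline-versus-current comparison for the firewall example. The port numbers and the idea of storing the baseline as a simple set are hypothetical, just to illustrate how an assessment flags the difference between the approved configuration and what was actually found.

    # Minimal sketch: compare a firewall's current open ports against its baseline.
    # The port values here are hypothetical examples, not from any real scan.
    baseline_ports = {80, 443}                      # what the approved configuration allows
    current_ports = {80, 443, 21, 23, 3389, 8080}   # what today's scan reported

    unexpected = current_ports - baseline_ports     # ports opened since the baseline
    missing = baseline_ports - current_ports        # approved ports that are now closed

    print(f"Unexpected open ports: {sorted(unexpected)}")
    print(f"Expected ports no longer open: {sorted(missing)}")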
So instead, we broke up the network into smaller networks, and each of those smaller networks underneath the larger network would have people assigned to scan and report on those areas. And by doing that, we could roll up that information. So at the highest level, we could look over the entire network of 1 million machines and know exactly which machines were patched and which ones weren’t, and which ones met our configurations and which ones didn’t. So we do a lot of that using these automated tools, but automated tools alone are not enough. We need an analyst to help us go through that information as well. The reason for that is we are going to have so many different things to look at, and we have to figure out what the priority is. Now, as we start prioritizing, we take into account a lot of different factors.
We’re going to look at asset criticality as one of those factors. For instance, is the workstation sitting on your desk at work as important as the one sitting on the CEO’s desk, or the one sitting on the accountant’s desk, or the file server down in the server room? To a vulnerability scanning machine, all of these assets just look like another computer. But you as an analyst need to know which computers are more or less important and which ones process more or less sensitive data, and being able to put all that information in will help you prioritize. Now, when everything else is equal and you’re looking at a thousand computers that are all used by assistants across the organization, how do you prioritize those? Well, based on the threats and the vulnerabilities found on those systems. Certain vulnerabilities are going to be more critical than others, and therefore we’ll prioritize those systems higher.
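As a rough illustration of that idea, here is a minimal Python sketch that ranks findings by combining an asset-criticality weight with a vulnerability severity score. The asset names, weights, and severity numbers are made-up examples; real scanners and risk programs use their own scoring schemes, such as CVSS, along with your own asset inventory.

    # Minimal sketch: prioritize findings by asset criticality and vulnerability severity.
    # All names, weights, and scores below are hypothetical examples.
    asset_criticality = {"ceo-laptop": 5, "file-server": 4, "assistant-pc": 2}

    findings = [
        {"asset": "assistant-pc", "vuln": "outdated browser", "severity": 9.8},
        {"asset": "file-server", "vuln": "SMB signing disabled", "severity": 5.3},
        {"asset": "ceo-laptop", "vuln": "missing OS patch", "severity": 7.5},
    ]

    # Simple priority score: severity weighted by how critical the asset is.
    for f in findings:
        f["priority"] = f["severity"] * asset_criticality.get(f["asset"], 1)

    for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
        print(f'{f["priority"]:5.1f}  {f["asset"]:13}  {f["vuln"]}')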
2. Scanning Workflow (OBJ 1.3)
Scanning workflow. In this lesson, we’re going to talk about the basic assessment scan workflow. Now, before you start doing your assessment scans, there are lots of different questions that you have to answer, and this will make sure you’re ready to conduct that vulnerability scan. These are things like: who is going to conduct the scan? Because you need to know which person is going to do it. Is it going to be a system administrator, an internal employee, or an external employee? Who’s going to do it? Then you need to answer: when will the scan be performed? Because you can’t just conduct the scan anytime you want. When you do a scan, it puts additional load on the network and on the resources that you’re targeting as part of your scan. And so if you do this at prime time in the middle of the day, you can actually crash your own network by doing it.
So you have to make sure you figure out when the right time is to perform that scan. Also, which systems are going to be scanned? Are you going to scan everything on the network, and are you going to do them all equally, or are you going to divide this up into smaller pieces? This is an important question for you to answer. Another question you have to ask yourself is: how will scanning impact the systems? As I just said, when you scan these systems, some of them are going to have additional load placed on them by these scans. If you’re doing a very in-depth scan, you can put additional load on those systems that eats up memory or processor resources, and that can crash those systems. So you need to be careful when doing scanning as well.
Another question we want to ask ourselves is: does the system need to be isolated during the scanning? Now, why is this important to ask? Well, again, because you have to know how the system is going to respond when you do these scans. When I ask whether I should isolate the system, the question here is really: can I scan it while it’s in production, or do I need to take it out of production so I can scan it and then put it back into the real-world environment? This is an important thing to consider because those actions are going to have real-world consequences. And finally, who can assist you with scanning? Now, I don’t mean who’s going to be your assistant here, but if you start scanning a system and you start seeing things that look unusual, who are you going to ask for help? If I’m scanning the web server, I probably want to talk to the web developer.
If I’m scanning a database, I might want to ask the database administrator. This is the idea of who can assist you if you have problems during scanning. Now, all of this then allows us to start creating what will be our workflow. As we start thinking about our workflow and we have the answers from these questions, we can develop our own processes. Now, for example, here is a simple seven-step process that we’re going to walk through in this lesson. You don’t have to follow this step by step, though. This does not mean that this is the only way to do scanning. This is just an example for you to consider. First, we want to install the software and patches to establish a baseline system. So I have a brand new server. I installed the operating system, I installed the antivirus, I configured it, and I installed all the software patches and updates.
And now that system is what I think a good system should look like. At this point, I should move into my second step, which is to perform an initial scan of that target system. So I have this brand new system with everything I just installed and patched. I’m going to run a scan, and that creates my baseline for me. Everything else will be compared against this baseline moving forward. Now, the third step I’m going to do is analyze the assessment reports based on that baseline I just created. So I installed all the software, I scanned that target, and now I’m going to go through the report and see what the findings were. Was everything installed properly? Did I miss something? Are there still vulnerabilities that I’m not aware of? If so, we’ll go into step four, where we’re going to perform corrective actions based on the reported findings.
So let’s say I scan this new Windows system that I just installed, and I find out that I’m running a vulnerable version of IIS. Well, I need to go to Microsoft Update and get the updates to patch that system, because I want to make sure I don’t have vulnerabilities in my system. And then I move into number five: I’m going to perform another vulnerability scanning assessment. So I’ve now installed this new system, I’ve scanned this new system, I’ve analyzed the report, I’ve patched things, and now I’m going to scan again. Why? Because now I want to make sure the fixes I put in actually took effect and are actually solving the problem. Now, after I do this, I’m going to go into step six: I’m going to document any findings and create reports for my relevant stakeholders.
So now that I have this baseline, I’ve patched it, I’ve fixed it, I’ve scanned it, I’ve fixed it, and I’ve done that a couple of times, I can now create a report and say: stakeholders, here are the remaining vulnerabilities. This is the risk that you’re going to be accepting if we put this device on the network. And by doing that, I’m giving my stakeholders a chance to accept that risk with knowledge of what that risk actually is. I’m not just saying I’m putting the server on; I’m saying I’m putting the server on, and here are the five or ten vulnerabilities we can’t fix yet, and here are the mitigations we’ve put in place to minimize the risk from them. And then that gets us to number seven, which is conducting ongoing scanning to ensure continual remediation.
Now, this is a really important point, because when you do a scan and you take a vulnerability scan across your network or across the target, that is a point-in-time assessment. Now, what I mean is, if I do that today, and today is Thursday as I’m recording this, and then next Wednesday we get hacked because there’s some zero-day that came out and we didn’t know about it, well, if we only scanned on Thursday, then somewhere between today and next Wednesday the new vulnerability came out and somebody could exploit it. Again, they can do that because it’s a point-in-time assessment. We always need to make sure we’re continually checking our systems. Now, does that mean that we’re going to scan our systems every day? Maybe not. It would be really uncommon for you to scan your systems every day, I’ll tell you that.
But we are going to cover more about scanning frequency later. The idea here is that this is not a one-and-done. I don’t do it once when I install the system and never touch it again. I need to conduct ongoing scanning, whether that’s weekly, monthly, quarterly, or yearly, whatever that frequency is based on your risk profile. You are going to conduct ongoing scanning and patching, which brings us to the mantra for all of this. Whenever you are stuck on the exam and you start talking about vulnerability assessments, I want you to remember these three words: scan, patch, scan. Now, what does that mean? Well, that’s really the workflow I just described. We have a system, we scanned it, we found some vulnerabilities, we patched them, and then we scanned it again to make sure that the patches we put in place actually worked.
And we’re going to continue to do that over and over and over again. If you do weekly scans in your organization, you should also be doing weekly patches. If you’re doing monthly patches but weekly scans, then guess what? You’re going to find the same vulnerabilities four times in a row until somebody gets around to patching them. That’s not going to be very effective for you. So instead, you want to make sure your scanning cycle and your patching cycle line up. You want to scan something, find all the problems, patch all the problems, and then scan again to make sure those have all been fixed. And then we’ll scan again next week or next month or next quarter or whatever your frequency is.
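If it helps to see that scan, patch, scan cycle laid out, here is a minimal Python sketch of the workflow as a loop. The scan, patch, and report functions are hypothetical stubs standing in for whatever scanner and patch-management tooling you actually use; the point is only the ordering: scan, remediate, rescan to verify, report, and repeat on a schedule.

    # Minimal sketch of the scan-patch-scan workflow. The functions below are
    # hypothetical stubs standing in for your real scanner and patching tools.
    def run_scan(target):
        # Pretend scan result: a list of vulnerability identifiers found on the target.
        return ["CVE-EXAMPLE-0001", "CVE-EXAMPLE-0002"]

    def apply_patches(target, findings):
        # Pretend remediation: in reality this would patch or mitigate each finding.
        print(f"Patching {len(findings)} findings on {target}")

    def report(target, findings):
        print(f"{target}: {len(findings)} vulnerabilities remain for stakeholder review")

    def scan_patch_scan(target):
        findings = run_scan(target)       # scan: establish what is vulnerable right now
        apply_patches(target, findings)   # patch: correct what the scan found
        remaining = run_scan(target)      # scan again: verify the fixes actually worked
        report(target, remaining)         # document what is left for risk acceptance

    # Ongoing scanning would call this on a schedule (weekly, monthly, quarterly, etc.).
    scan_patch_scan("new-windows-server")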
3. Scope Considerations (OBJ 1.3)
Scope considerations. At this point, we understand the basics of scanning, but now we need to understand some of the things that can affect the way we scan. Before we do that, let me give you a quick formal definition of a vulnerability scanner, because we’ve been talking about that in the last couple of lessons, but we haven’t really properly defined it. When we talk about a vulnerability scanner, this is a hardware appliance or a software application that is configured with a list of known weaknesses and exploits and can then scan for their presence in a host operating system or within a particular application. You can also scan network appliances such as firewalls, routers, and switches. All of these can be scanned with a vulnerability scanner.
Now, web application vulnerability scanners like Nikto will analyze applications for SQL injection and cross-site scripting, and may even analyze the source code and database security to detect insecure programming practices. Now, in this section of the course and the next section of the course, as we’re talking about vulnerability scanners, I’m not really focused on web application vulnerability scanners; I’m talking more about what we call an infrastructure vulnerability scanner. These are things that are going to be testing things like your clients, your workstations, your desktops, and your servers, not web applications. Some of these will have the ability to do some web application scanning, but we’ll focus on web application vulnerability scanners in a different lesson, in a different section.
So for now, whenever I say vulnerability scanners, I just want you to think about a basic vulnerability scanner. Now, these basic vulnerability scanners, known as infrastructure scanners, are going to be able to scan our network. These scanners, such as this one here, which is Nessus, will compile a report for you and classify each identified vulnerability with an impact warning. Each of these scanners has its own database of vulnerabilities, like I said, and all of these can then look for those signatures to find out what things are out there on your network. So if I’m scanning your Macintosh machine and I find that you have a Mozilla Foundation Unsupported Application Detection finding, this is a critical finding. You can see it here in red on the screen.
There’s one count of this, and if I click into that, it will actually tell me what causes this, who knows how to exploit it, and what I can do to fix it. And that’s the benefit of having these vulnerability scanners: they find those vulnerabilities out there and then tell you what you need to do to fix them. Now, these infrastructure scanners can also perform mapping and enumeration in the form of a host discovery scan. We talked about host discovery scans when we talked about things like Nmap, where we can go across the network and find out what ports are open and what computers are on our network. Well, these infrastructure scanners do this as well. It’s called a host discovery scan, and it is the smallest type of scan that we’re going to do with a vulnerability scanner.
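As a quick point of reference, here is a minimal sketch of kicking off that kind of host discovery from Python by calling Nmap’s ping-scan mode. The 192.168.1.0/24 subnet is just an example range, and this assumes the nmap binary is installed on the machine running the script.

    # Minimal sketch: a host discovery (ping) scan using Nmap from Python.
    # Assumes the nmap binary is installed; 192.168.1.0/24 is an example subnet.
    import subprocess

    result = subprocess.run(
        ["nmap", "-sn", "192.168.1.0/24"],   # -sn = host discovery only, no port scan
        capture_output=True,
        text=True,
    )

    # Print just the lines that report a discovered host.
    for line in result.stdout.splitlines():
        if line.startswith("Nmap scan report for"):
            print(line)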
The benefit of doing that is it first takes a look at your entire network, and then you can scope down into what you want to do in-depth scanning on. That’s the real benefit here. For example, here on the screen, I’ve conducted a discovery scan using the Greenbone Community Edition of the OpenVAS vulnerability manager. This uses the OpenVAS scanning engine, and it performed an uncredentialed discovery scan that couldn’t even identify the OS type of each of the hosts. But I did identify that there were nine different hosts on the network, and they’re all in the same layer of the network, which is why you just see those nine dots together in the middle of my screen. Now, as we learn more about vulnerability scanners, you’ll learn how to configure them better, to be able to get exactly the information you want out of them.
But this is just to start getting you introduced to what these tools look like. Now, when we start talking about things like scope, I want to make sure you understand what scope is. When we talk about scope in a network, we’re talking about some portion of the network, right? So if I said 192.168.1.0/24, you, from your Network+ studies, should know, oh, that is a scope of 256 network IP addresses. But when we talk about scope inside of a vulnerability scanner, we’re really talking about the range of hosts or subnets that’s going to be included within a single job. Now, this can be done using an IP address, or an IP address with CIDR notation, something like /24, to be able to say I want that entire subnet. Or you can have multiple different subnets or multiple different IPs.
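If you want to sanity-check that kind of scope definition yourself, here is a minimal Python sketch using the standard library’s ipaddress module; the 192.168.1.0/24 subnet is just the same example range.

    # Minimal sketch: expanding a CIDR scope with Python's standard ipaddress module.
    # The subnet below is just an example range.
    import ipaddress

    scope = ipaddress.ip_network("192.168.1.0/24")
    print(scope.num_addresses)          # 256 addresses in a /24
    print(list(scope.hosts())[:3])      # first few usable host addresses in the scope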
A lot of these tools will let you import a list of IP addresses as a CSV file. And so I can have the 50 IPs that I really care about, and they can be across all sorts of different subnets. That’s okay too. When we’re talking about scope here, we’re just talking about this grouping of computers or hosts that we want to look at using our tool. Now, it’s important for us to adjust the scope to make our scanning more efficient, and there are lots of ways to do this. For example, one of the things we can do is schedule scans on different portions of the scope for different times of the day. One of the reasons we want to do this is that if we scan everything all at once, we can overload our network. We’re causing network traffic to be used, and we’re causing CPU and memory to be used on the hosts that we’re scanning and that are responding to all of our queries.
So we don’t want to scan everything all at the same time. By breaking things into scopes, we can scan different environments at different times of day. For example, I might say I’m going to scan the accounting department overnight, because nobody works after 5:00 in accounting, so that’s not going to affect their daily business. Or I may want to defer scanning my web servers in the middle of the Christmas holiday, because I’m an e-commerce store and I’m doing a lot of business at that time, and so I’ll wait until after the holiday to do that. There might be some risk that I’m assuming there, but it is one of the ways you can schedule your scans: figuring out which portions of the network should be scanned at which time. Some of them will be done at different times of the day, or even on different days, based on how you need to break up your network.
The second thing we want to think about is how you’re going to configure your scope based on a particular compliance objective. For example, if you’re trying to be compliant with PCI DSS, which has to do with payment card data, do you need to scan the video editor’s computer to make sure it’s in compliance with PCI DSS? Well, no, because they’re probably not dealing with any kind of credit card data. But you may need to do that for bookkeeping or accounting, or the web server, or the e-commerce server, or the e-commerce database, because all of those things may touch that credit card data, in which case they would have to be part of that scope and that compliance scan. So we can have a particular scan set for just PCI DSS assets, and those are the ten or fifteen computers across our organization that touch credit card data.
The third thing we want to consider is how we can rescan scopes containing critical assets more often. Now, what I mean by this is I might have a scope of my web servers that are sitting in the DMZ. Because they’re in the DMZ, they’re more vulnerable to attack than something that’s inside my internal network. So I may want to scan those every week, where I only do my internal network once a month. Again, these are just numbers I’m making up, and you as an organization get to decide what the right frequency is for you. But if you have something that’s more critical, you probably want to scan it more often. Now, the other thing we have to look at when we start looking at these scopes and scanning is how we are going to scan them. Are we going to do this as an internal scan or an external scan?
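To pull those three scope considerations together, here is a minimal Python sketch that describes scan scopes as data, with a scan window, a compliance tag, and a rescan frequency for each. The scope names, subnets, and intervals are all hypothetical examples; a real scanner such as Nessus or OpenVAS would hold this kind of information in its own job configuration.

    # Minimal sketch: describing scan scopes with a window, compliance tag, and frequency.
    # All names, subnets, and intervals are hypothetical examples.
    scopes = [
        {"name": "accounting", "subnet": "10.0.10.0/24", "window": "overnight",
         "compliance": ["PCI DSS"], "rescan_days": 30},
        {"name": "dmz-web-servers", "subnet": "203.0.113.0/28", "window": "weekend",
         "compliance": [], "rescan_days": 7},
        {"name": "internal-workstations", "subnet": "10.0.20.0/23", "window": "overnight",
         "compliance": [], "rescan_days": 30},
    ]

    # Which scopes belong in a PCI DSS compliance scan?
    pci_scopes = [s["name"] for s in scopes if "PCI DSS" in s["compliance"]]
    print("PCI DSS scan scopes:", pci_scopes)

    # Which scopes need rescanning most frequently (the more critical ones)?
    for s in sorted(scopes, key=lambda s: s["rescan_days"]):
        print(f'{s["name"]:22} every {s["rescan_days"]:2} days during the {s["window"]} window')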
Now, if I’m dealing with internal scanning, this is where a vulnerability scan is being conducted on your local network from within your local network. The real benefit here is you’re going to be able to scan things without having to go through the firewall, because you’re inside the local network already. Now, if you’re doing things externally using an external scan, this is a vulnerability scan that’s being conducted against your network from outside of your local area network; therefore, it’s going to be coming in through the firewall. Now, when you do internal scanning, you can actually do this with credentials, in what’s called a credentialed scan, and this will give you additional details on the vulnerabilities that exist. Because essentially, you’re already behind the firewall, you’re already talking directly to those clients, and now you’re logging into them using an administrator password.
That’s going to give you a lot more detail than if you were an attacker coming in from the outside. So why wouldn’t we use an internal scan for everything? Well, because an internal scan is going to give you a laundry list of vulnerabilities, but not all of them are true vulnerabilities that could be exploited by an attacker. So instead, if you want to get an attacker’s perspective, you really have to do external scanning. By performing this, you’re going to be able to get the attacker’s perspective because you’re coming in through the firewall. Now, you’re going to see a lot fewer vulnerabilities this way, because a lot of things are going to be blocked by the firewall. But again, it’s good to have both perspectives, internal scans and external scans, and you’re going to have to figure out, as you start doing this in the real world, which scopes to use and from which perspective you want to scan.