CompTIA Linux+ XK0-005 – Unit 06 – System Configuration Part 4
29. Swap Space Options
All right. When you originally configured this Linux box and partitioned your drive, you were told that at a minimum you had to make a root partition and a swap partition. Those were the minimum two. You could make more partitions if you wanted to, but then we have to start asking some questions: what's the purpose of the partitions? Obviously, root and swap serve different functions. Do they have to be on the same drive? Sure, they can be. Would that improve performance? No. The reason we put the swap in its own partition to begin with is that by giving it a whole partition, we're creating an area that is not going to be fragmented.
If I had the swap file on the same partition as my root filesystem, there's a likelihood that over time my swap file would become fragmented, which means even slower responses on that drive. So by putting it in its own partition, I've eliminated the problem of fragmentation. That's one of the biggest reasons why we put it there. But if you want better performance, meaning the root partition is not being accessed at the same time as the swap, meaning the read/write head isn't trying to do both, then we would tell you to use a separate drive. Now, if you have more than one drive, that does improve performance, because you have two read/write heads moving. But if they're both trying to hit their maximum transfer rate, you've got a problem, because they're often connected to the same controller card, which means you're limited by that controller's transfer rate.
So if you want to improve the transfer rate even further, not only should swap be on a separate drive, but consider a separate hardware controller. That's where talking about these different components was our foundation for making sure that what I just said makes sense. Remember I said that on many systems you have two ATA or IDE controllers, and each can have a cable with two drives on it. Well, one of the ideas of having separate controllers was so that multiple drives on the same cable aren't competing for the same transfer rate. So that's another choice you have, and one you should have made before you even installed the actual Linux operating system.
30. Creating Swap Space
Now, if you need to make new swap space, whether it's brand new, the first time you've made it, or you're going to create a new file and add it as swap space, you have to be the root user. You then have to create the space. Now, you could do it by creating a new partition, which is great because, again, we're dealing with fragmentation. Or you could create a new file and make that file whatever arbitrary size you want it to be. But once you make either the partition or the file, you then have to designate it as swap space with the command mkswap.
It stands for make swap. Now, the other thing you can do when you create this swap space is assign a priority. Priority just means: use this particular swap file first, and when it's full, use this other one. You can actually set priorities, which is especially useful if one of your swap files could be in a fragmented area. Now, once you've done that, you're going to have to restart the system or use the command swapon so that it's ready to be used.
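Putting those steps together, a minimal sketch might look like this (the file path, size, and priority value here are just examples, not values from the course):

```shell
# Create a 10 MiB file filled with zeros so the space is reserved up front
dd if=/dev/zero of=/extraswap bs=1024 count=10240
chmod 600 /extraswap      # swap files should be readable by root only
mkswap /extraswap         # mark the file as swap space
swapon -p 5 /extraswap    # enable it now, with priority 5 (higher = used first)
swapon -s                 # verify: lists active swap areas and their priorities
```

The `-p` option on swapon is how the priority mentioned above gets set; without it, the kernel assigns a low default priority.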
31. Filling New Swap Space
Now, remember that when you have swap space, you have to reserve the disk space. To reserve the disk space, you have some issues of what to do if it's a file. So let's think about that. If I make a partition, that's pretty easy. I just mark the whole partition as my swap space, and everything goes in there. But if I created a file, not a partition, but a file, and I said, here's the file, and I didn't put any limits on it, well, how big is that? An empty file is nothing; it's tiny. And suddenly I'm saying, that's your swap space. So then it's like, okay, well, let's grow the file. And then the file shrinks, and the file gets bigger, and it shrinks, and it becomes fragmented, and it becomes difficult to use because it's no longer contiguous.
So one of the things you do when you create that file is actually fill it up. You fill it up with garbage, nonsense data, so that it is the size you want it to be. Now, there's a great command you can use, and I like this command for many other reasons. It's called dd. The disk duplicator is what I like to call it. The disk duplicator's job is to take a copy of information from what we call the input file and put it out as the output file. It basically makes that copy. Or you could use it to produce plain old filler.
Just fill the thing up with whatever garbage you want. Now, we use the dd command a lot in the forensic examination world because it's such a great tool for taking an exact image of a drive, I mean, a sector-by-sector copy, so that we can use that copy for our evidentiary examinations rather than the original evidence. That's a very important concept in keeping your evidence clean and free of any corruption. So it's a great tool. You'll like it. It's very easy: it's dd, the input file, the output file, and that's the stuff you're copying.
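As a sketch, those two uses of dd look like this (the device and file names are examples only, not paths from the course):

```shell
# Use 1: fill a file with zeros, e.g. to reserve space for a swap file
dd if=/dev/zero of=/tmp/extraswap bs=1024 count=10240   # 10 MiB of zero bytes

# Use 2: sector-by-sector image of an entire drive for forensic work
# (commented out: reads the whole device, needs root, and takes a long time)
# dd if=/dev/sda of=/mnt/evidence/sda.img bs=4M conv=noerror,sync
```

The if= and of= arguments are the input file and output file the narration describes; with no count limit, dd copies until it runs out of input.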
32. Managing Swap
Now, if you don't like the command line, as I've hinted before, there's often a GUI. So you can also manage the swap space with a GUI swap-management program. Again, the idea is that you can create new locations and look at how swap is being utilized. Most of the things we just talked about doing on the command line, I can do as well with my GUI: manage, create, look at how it's being used. All of those features are available in a GUI, because some of you prefer to use that, and that's fine. The nice thing about GUIs is that they show you all the options. You don't have to memorize a command, and you can be perhaps a bit more specific, or at least make sure you don't miss anything when you create this swap space.
33. Demo – Managing Swap Space
All right, we are going to take a look at creating a new swap file and managing your swap space. And before we do that, we're going to take a quick look over here under our Applications, System Tools, and our System Monitor, just so you can get a little picture here about memory and swap space. You can see the readings over here off to the side, and look at that: I don't really need more swap space, because I'm using all of about 0% of the existing swap space. But anyway, I'm using 92 megs of my 250 megs of memory. So I didn't actually give this operating system a whole lot to work with, but we're going to go ahead and see what we can do to improve that and give it even more. And I'll start by opening up this terminal window. I've got to make sure I am root, which I am.
If I wasn't, you'd type su and put in the password. We're going to switch to a mounted hard drive. It's been formatted and it's just an empty drive. If I do ls -lF, lost+found is about the only thing we can visibly see, none of the hidden stuff. But we have room there. We actually have quite a bit of room: four gigs of space. It's a huge hard drive, and it's a virtual drive, obviously, with VMware here. But we're going to use touch to create a file. And the file we are going to create is called extraswap. That creates it, and you can see with ls -lF that it is a file. Right there, you can see it. And since it's a file, we're going to use that file for swap space, so that our system, if it needs even more room, will have it available.
Now, the problem with that is a couple of things. Number one, the file right now is, look at this, zero bytes in size. And the operating system is only going to let you use whatever size of file you've created, so a zero-byte file does nothing for me. We're going to use the dd command while we're here to change that. We want to fill it up with something just so that it's set up. Now, we're going to use /dev/zero as the input file. The output file is going to be extraswap; that's where we're putting everything. And /dev/zero, by the way, will sit there and fill this thing up with a whole bunch of zeros. That's what it was designed to do. The next thing is the block size, which will be 1024.
And the count of blocks we're going to put in is 10240, which should make it ten megabytes in size. So we hit Enter and see that I forgot to put in an equal sign. Let's back it up and try again. There we go, ten megs copied into the file. Let's do the ls -lF. You can see that it is a much bigger file than it was before, that it is now ten megs in size, so we filled it up with zeros. In fact, if I tried to cat it, which I won't, because I don't want to spit out that big a file to this little screen, zeros are all you would see. So if you feel up to it, I'll let you do that. But let me clear the screen and go on to the next part, which is making the swap file. So it's the mkswap command and the full path, even though I'm actually already there in drive2p1.
Just look at my prompt there, and I can see that extraswap. So we're giving the exact path to mkswap. And there you go: setting up swap space, version 1. It tells me the size; we didn't have a label on there. And we can do ls, in this case -lFh, where the h gives us human-readable sizes. All right, so we have that ten-meg file, and it's reported to us as being ten megs, and nothing else about it has really changed, other than once it actually is used as a swap file, all those zeros we put in are going to be replaced by whatever else gets put in there. Okay, so now one of the other commands we can use is this one called swapon.
Let's look at the help for swapon. And it actually doesn't give us a lot of help; this would probably be one of those commands you'd open up the manual page for. But what we want to do is view the swap configuration. So we'll do a swapon -s, and this shows me the existing configuration: the size, and its priority, which is minus one. That means it's really not going to be used much. It is a partition type, and it even has the name sda5. All right, so actually that is the existing swap configuration, sorry about that. That's my existing size. Now I want to enable the extra one.
So I was going to say that seems like a bit more than what I wanted, so let's type in swapon extraswap, and boy, it gets tough to say swap over and over again. All right, so now we've got that swap on. Now we do the swapon -s again. There, that's better. That's what I was looking for before. I had a quick little brain cramp there when I did the first one, forgetting that we hadn't actually turned on the extra swap; we had just made it available for our use. All right, so that is our way of creating that swap file. When I come back over here to the System Monitor, I can see an extra ten megs showing up in my swap size that we didn't have before. And that's just showing us, again, that I've added to the size of our swap system.
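For reference, the sequence from this demo condenses to something like the following (the mount point name is an assumption based on the prompt shown in the demo; yours will differ):

```shell
cd /mnt/drive2p1                # mount point of the empty drive (example path)
touch extraswap                 # create the (zero-byte) file
dd if=/dev/zero of=extraswap bs=1024 count=10240   # fill it: 10 MiB of zeros
ls -lFh                         # confirm the file now shows as 10M
mkswap extraswap                # write the swap signature to the file
swapon /mnt/drive2p1/extraswap  # enable it, giving the full path
swapon -s                       # show all active swap areas and their sizes
```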
34. Disk Arrays
Now, something else we have to deal with when we look at hard drives is: how do we expand a volume to be bigger than a hard drive physically is? So let's take an example from the old days, five years ago, when I had a 100-gig hard drive and I was ecstatic. Then suddenly I had a database, and this database needed to be 400 gigs in size. Well, I can't fit it on a 100-gig drive, but if I had four or five drives, I could try to fit it on those. The problem is each drive was trying to be its own autonomous volume or partition. So one of the things we did is create an array of disks and ask that array of disks to act in unison as a single drive. Often we use the term volume when the storage covers more than one physical drive, but the array makes itself appear to be a single drive.
Now, in creating these different types of arrays, we had the ability to add some fault tolerance. So should any one of those physical drives actually die, we had the ability to instantly recover our data. We didn't lose anything, and we could keep running while we were making the repairs. That helped add to what we call our business continuity. This function of creating these different types of arrays was called RAID. It stood for one of two different things: redundant array of independent disks, or redundant array of inexpensive disks. I've heard both, and I don't know which one to tell you. I prefer independent disks, because that's what it is: each disk is independent physically, but as an array, it acts as a single drive, and it allows me to expand the total size of a file that I can store.
Today, it's not as big a deal when it comes to making an array big enough for big files, because you can buy such large drive capacities. However, it is still very important for redundancy, so that if one physical drive fails, the server is not down. The data access is still there, maybe a little slower to respond, but we're keeping what we call those five nines of uptime, 99.999% uptime. And we have recovery right at our fingertips to get back up to normal speed and normal operations. It's a good idea. Now, there are different types of RAID, and I want to make sure that you understand some of the more common types you might find or choose to create.
35. RAID Level 0
Now, one of the first types of RAID is called level zero. Now, having just told you about redundancy, business continuity, and five nines of uptime, RAID 0 gives you none of that, but it is still one you need to know. It's called disk striping. Basically, this was a performance benefit and an increase of storage. So what does that mean? Well, remember I said that one of the benefits of having two drives is that I could extend them, virtually, as a single drive. We called it a volume, so I could have more storage for large files. The other benefit of striping is that as I'm saving data, it's going to fill both drives at the same time.
By filling them both up at the same time, what I'm trying to do is increase my read and write performance. How? Well, if I say save the file, RAID says: okay, this drive will save half the file, and this other drive will save the other half. Together, they're both working on saving the file. That's two read/write heads doing the job of saving my file. So that's the idea of why striping would be good. To have a stripe, you've got to have more than one drive, right? So most systems will support from two to 32 drives in a stripe.
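On a modern Linux box, software striping is typically built with mdadm; a sketch, assuming two spare disks named /dev/sdb and /dev/sdc (hypothetical device names, and these commands are destructive and need root), might look like:

```shell
# Stripe two whole disks into one RAID 0 volume (DESTROYS any existing data)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0      # put a filesystem on the new striped volume
mount /dev/md0 /data    # usable capacity = the sum of both disks
cat /proc/mdstat        # check the array's status
```

Remember the trade-off from above: the stripe is faster and bigger, but losing either member disk loses the whole volume.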
36. RAID Level 1
Now, if you really want the redundancy, the ability to recover from a hard drive failure, you're going to start with one of the options called RAID 1. Now, RAID level 1 has two names you'll hear: duplexing or mirroring. So let me take mirroring first. The idea is that when I save a file to one of the drives, an exact copy of it is saved to the other drive in exactly the same way. So basically, both drives are identical. That means if I store a file in a certain physical location on one drive, I store it in the same physical location on the other drive, so that my file system always references the same location; the drives are a mirror of each other. The difference between mirroring and duplexing is this: if I have both drives connected to a single controller, I still have a single point of failure. If the controller fails, both drives are unavailable.
That was the mirror. If each drive is connected to its own controller, it's still a mirror, but because of the connection to separate controllers, I've eliminated yet another point of failure. We call that duplexing. It's the same goal, though: whatever I save on one drive is saved on the other. I'm just trying to keep my uptime. Now, if one of the two drives fails, and by the way, hard drives do fail, they often have a spin life of between three and five years, so I expect that over time things are going to die. If one fails, the other one can continue to work and function normally until you replace the failed one, mirror the data back across, and life is good again. It's all about eliminating downtime and the loss of information.
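A mirror can be sketched with mdadm the same way as a stripe, just with a different level (again, /dev/sdb, /dev/sdc, and /dev/sdd are hypothetical device names, and these commands are destructive and need root):

```shell
# Mirror two whole disks as RAID 1 (DESTROYS any existing data)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md1      # filesystem goes on the mirror, not the members
mount /dev/md1 /data    # usable capacity = one disk's worth

# If a member dies, add a replacement and the mirror resyncs onto it:
# mdadm /dev/md1 --add /dev/sdd
cat /proc/mdstat        # watch rebuild progress here
```

This is the software-RAID version of mirroring; duplexing as described above is a hardware layout choice (each member on its own controller) rather than a separate mdadm option.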