CompTIA PenTest+ PT0-002 – Section 13: Cloud Attacks Part 2


126. Misconfiguration Assets (OBJ 3.4)

In this lesson, we’re going to discuss misconfigured cloud assets, the vulnerabilities they contain, and how an attacker might exploit some of those vulnerabilities. Misconfigured cloud assets are any kind of account, storage, container or other cloud-based resource that is vulnerable to attack because of its current configuration. This includes assets with incorrectly configured federations, identity and access management permissions, object storage and containerization technologies. First, let’s take a look at cloud federations and how incorrect configurations of these resources may be exploited. A cloud federation is the combination of infrastructure, platform services, and software to create data and applications that are hosted by the cloud.

With more traditional enterprise networks, an attacker needs to find a way into a secure area of your corporate network in order to launch their attacks. But due to the always-on, always-connected nature of cloud resources, an attacker can simply use their internet connection to reach out and touch your cloud-based resources. Additionally, because cloud infrastructure is mostly virtualized or abstracted from the hardware, it can have more vulnerabilities than a traditional enterprise network, since new servers and resources can be set up and configured outside of a normally approved corporate process or baseline.

To combat this virtual sprawl of hardware and services across the cloud, organizations need to identify where the responsibilities lie in terms of the approval of new services and servers, as well as who is going to be responsible for the vulnerability management and patch management of those new services and servers. A major area of vulnerability inside of a cloud federation involves a misconfiguration of identity and access management, or IAM, permissions. Identity and access management is used to define how users and devices are represented in the organization and their associated permissions to different resources within that organization’s cloud federation. There are many different identity types that can be configured in an identity and access management system, including personnel, endpoints, servers, software, and roles.

As you can see, these are not always directly tied to a human user; sometimes they involve hardware or software, and those things also need to be given certain rights and permissions. The personnel type is commonly used in IAM to define identities for an organization’s employees. Because the personnel type in IAM is associated with a human end user, it is one of the most commonly targeted types by attackers. Attacks against the personnel type in IAM are usually based on social engineering exploits, such as phishing, spear phishing, on-path attacks, rogue access points, and others. To help protect the personnel type of resources, an organization should ensure they’re providing good end user security training, since this type of training has one of the highest returns on investment in all of cybersecurity and can really reduce the likelihood that your personnel are going to fall victim to a phishing or other social engineering campaign. The endpoint type is used for resources and devices that are used by personnel to gain legitimate access to the network.

These days, endpoints are really varied, from a standard desktop or laptop computer, to a smartphone, tablet, or smartwatch, to a virtual desktop infrastructure, or VDI, solution. To help protect the endpoint type of resources, an organization should use a centralized endpoint management solution to assign profiles to each endpoint based on its usage and role, and then validate those endpoints when they connect to the network to ensure they’re meeting the minimum security baseline requirements. This can be accomplished using a NAC, or network access control, type of solution. The server type is going to be used for mission-critical systems that provide a service to other users and endpoints. In terms of IAM, the server type is used to prove the identity of the server and establish trust when it connects and communicates with other servers and devices. To secure these servers, an organization should use the appropriate encryption schemes, digital certificates, and configuration hardening. The software type is used by IAM to uniquely identify a piece of software and its provenance prior to its installation. Most commonly, this is done using digital certificates to digitally sign the software’s code and provide assurance that the code has not changed since it was written and released by the developer. While self-signed certificates can be utilized, a public key infrastructure should be used to provide higher levels of authentication and authority.
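To make that provenance idea a little more concrete, here is a minimal sketch in Python that verifies a downloaded artifact against a digest the publisher released alongside it; the file name and expected digest are hypothetical placeholders, and real code signing would rely on digital certificates rather than a bare hash.

import hashlib

installer_path = "installer.pkg"  # hypothetical downloaded artifact
published_digest = "<vendor-published SHA-256 hex digest>"  # hypothetical value

# Hash the file in chunks so large downloads don't need to fit in memory.
sha256 = hashlib.sha256()
with open(installer_path, "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):
        sha256.update(chunk)

if sha256.hexdigest() == published_digest:
    print("Digest matches: the file is unchanged since release.")
else:
    print("Digest mismatch: do not install this file.")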

The role type is used by IAM to support the identities of various assets and associate permissions and rights with the different roles or functions of those resources. For example, roles can be associated with personnel, servers, and endpoints to provide security permissions based on the person’s, endpoint’s, or server’s role inside of that cloud federation. If a user is an administrator, they’re going to receive one set of permissions based on that role, whereas if that user is only a regular user, they’re going to be blocked from accessing certain features or services. An identity and access management system is going to be used to assign, manage, and verify the different permissions for each of the five types we just mentioned: personnel, endpoints, servers, software, and roles. The IAM system is built from different components, such as directory services, repositories, access management systems, and auditing and reporting systems. As an attacker, you’re often going to try to find misconfigurations in the IAM system so that you can gain additional permissions, either horizontally or vertically, to use against the resources in the organization’s cloud federation. Remember, the IAM system is going to be used to conduct auditing of account activity, evaluate identity-based threats and vulnerabilities, maintain compliance with regulations, create and deprovision accounts, and manage accounts for different roles. When you’re searching for vulnerabilities in an IAM system, you should primarily seek out privileged accounts and shared accounts. A privileged account is one that allows the user to perform additional tasks, like installing software, upgrading the operating system, modifying configurations, and deleting software or files. In many organizations, users have improper credential management for their administrative accounts because they don’t log into them every day.
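As a rough illustration of hunting for privileged accounts, here is a minimal sketch using the AWS SDK for Python (boto3), run with credentials already obtained during an engagement; it flags any IAM user with the AdministratorAccess managed policy attached directly. Group- and role-based grants would need additional calls, so treat this as a starting point rather than a complete audit.

import boto3

iam = boto3.client("iam")

# Walk every IAM user and list the managed policies attached directly to it.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        attached = iam.list_attached_user_policies(UserName=user["UserName"])
        policy_names = [p["PolicyName"] for p in attached["AttachedPolicies"]]
        if "AdministratorAccess" in policy_names:
            print(f"Privileged account found: {user['UserName']}")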

In my personal experience, I’ve seen a lot of administrative accounts where the passwords have not been changed for over two or three years, and this makes them much more vulnerable to a password attack than if the password was changed every 60 to 90 days. In contrast to this, I also see many organizations where the administrators use their administrative account as their daily work account. This is considered a really bad practice and one that an attacker can exploit, because the attacker can target that system administrator with a spear phishing email. When that administrator opens the email or clicks on the link inside of it, the attacker can now run malicious code using the administrative permissions of that user instead of the normal lower permissions of a regular user account.
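If you want to spot those years-old administrative passwords programmatically, one hedged approach is to pull the AWS IAM credential report through boto3 and inspect its password_last_changed column, as sketched below; the 90-day threshold is just an example value to tune against the client’s policy.

import csv
import io
import time
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")

# The credential report is generated asynchronously; poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

content = iam.get_credential_report()["Content"].decode("utf-8")
for row in csv.DictReader(io.StringIO(content)):
    changed = row["password_last_changed"]
    if changed in ("N/A", "not_supported", "no_information"):
        continue  # this entry has no console password to age-check
    age = datetime.now(timezone.utc) - datetime.fromisoformat(changed)
    if age.days > 90:  # example threshold
        print(f"{row['user']}: password last changed {age.days} days ago")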

Another vulnerability you’re going to come across in the field involves the use of shared accounts. A shared account is any account where the password or authentication credentials for that account are shared between more than one person. This is typically found in smaller organizations, such as small and medium-sized businesses, either because of equipment limitations or simply because it’s easier for their administrators to do it this way. For example, there are some routers and switches designed for the small and medium-sized business environment that only have a single administrative account enabled, and all the administrators simply log in as admin with a shared password to make any changes to that router or switch. Similarly, many administrators tend to get lazy when it comes to creating new passwords and accounts for every device, so they’ll make their lives easier by simply creating a shared account on each device using the same username and password. This is also a very insecure practice, because as an attacker, if I can get one set of credentials, I now have access to all of the devices, since they’re all using the same username and password. Next, we need to discuss object storage and its associated vulnerabilities. In most cloud-based storage solutions, data is going to be stored in containers that are referred to as either buckets or blobs. Bucket is the term used by Amazon Web Services, while blob is the term used by Microsoft Azure. Each container is created in a specific region and availability zone within that cloud service provider, and all the objects are then placed into these buckets or blobs. An object is roughly equivalent to a file in a traditional file system, whereas a container is more like a folder. To control permissions and access to each object and container, object ACLs, container policies, and access management authorizations in the IAM system are going to be created. Due to these various levels of permissions and access management, configuring cloud storage permissions can be a bit tricky and complex compared to a normal file system that you’re going to use inside of Windows or macOS. This complexity can lead to different misconfigurations, which can be exploited by attackers. For example, it’s common to find incorrect permissions associated with cloud-based storage solutions, such as a storage container that has been set to public read-write, which can make any data uploaded to the container freely accessible.
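Here is a minimal sketch of how a tester with valid credentials might check an S3 bucket’s ACL for those public grants using boto3; the bucket name is a hypothetical placeholder.

import boto3

s3 = boto3.client("s3")
bucket = "example-target-bucket"  # hypothetical bucket name

# Grants to these well-known group URIs expose the bucket to everyone on
# the internet or to any authenticated AWS user, respectively.
public_groups = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

acl = s3.get_bucket_acl(Bucket=bucket)
for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    if grantee.get("Type") == "Group" and grantee.get("URI") in public_groups:
        print(f"{bucket}: {grant['Permission']} granted to {grantee['URI']}")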

A container that’s publicly readable and writable can allow sensitive information to fall into the wrong hands, or the container can be used for hosting malicious or illegal files without the organization’s knowledge. Additionally, these containers are often used to host objects that represent static files, such as videos, audio files, images, and webpages. To provide faster data access to the end user, most cloud services and organizations configure their networks to use a content distribution network, or CDN, to cache the objects at edge locations located all over the globe. When configured this way, the CDN edge nodes need to read the data from a trusted origin node, where the data owner initially uploads their content. When a site is built this way, such as my own site, diontraining.com, the organization needs to properly configure the cross-origin resource sharing policy, known as CORS, to allow objects to be read from multiple domain names and displayed properly in the end user’s browser. By establishing a CORS policy, the organization is telling the web browser that it can accept content from these various domains and edge nodes as trusted, safe content. But if the CORS policy is not properly configured, then an attacker could exploit this weakness to conduct a cross-site scripting attack.

For example, an organization might improperly set its CORS policy by using a wildcard (*) under the allowed origins, which means content from any site can be accessed and treated as trusted, and an attacker can then exploit this weakness. In fact, misconfiguration of the CORS policy is listed as part of the OWASP Top 10 under Broken Access Control.
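One quick way to probe for this from the outside is to send a request with an attacker-controlled Origin header and see whether it is reflected back; this sketch uses Python’s requests library, and both URLs are hypothetical placeholders.

import requests

target = "https://app.example.com/api/data"  # hypothetical endpoint
evil_origin = "https://attacker.example"     # an origin we control

resp = requests.get(target, headers={"Origin": evil_origin}, timeout=10)
allowed = resp.headers.get("Access-Control-Allow-Origin", "")

# A wildcard, or our arbitrary origin echoed back, signals a permissive
# CORS policy that is worth investigating further.
if allowed in ("*", evil_origin):
    print(f"Permissive CORS policy detected: {allowed}")
else:
    print(f"CORS policy looks restricted: {allowed or '(no header)'}")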

The final thing we need to cover is containerization technologies. When servers are deployed to the cloud, they can be deployed as either virtualized servers or containerized services. Virtualized servers are deployed as virtual machines that are managed by a hypervisor. Each virtualized server has its own operating system and runs on top of the virtualized hardware that’s emulating real hardware in use by the hypervisor. Containerization technology, though, works a little bit differently, and containers are a more efficient way of providing services than a virtualized server. A container is an image that contains everything needed to run a single application or microservice. Most people believe that containerization is fairly secure because it enforces resource segmentation and separation at the operating system level. But remember that containers are not perfect, and they can also have vulnerabilities that can be exploited by an attacker. This includes embedded malware in the container image, missing critical security updates, outdated software, configuration defects, or cleartext passwords hard-coded into the container’s image. As cybersecurity professionals, we should always validate any containers before we deploy them into a production network. As attackers, though, we need to scan those containers to identify any misconfigurations or missing security patches so we can exploit them and gain access to the network at large.
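As one hedged example of that validation and scanning step, the sketch below shells out to the open-source Trivy scanner, assuming it is installed on the tester’s machine, and counts the vulnerabilities it reports for a given image; the image name is a placeholder.

import json
import subprocess

image = "registry.example.com/app:latest"  # hypothetical container image

# Ask Trivy for machine-readable results for this image.
result = subprocess.run(
    ["trivy", "image", "--format", "json", image],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
for target in report.get("Results", []):
    vulns = target.get("Vulnerabilities") or []
    print(f"{target.get('Target')}: {len(vulns)} known vulnerabilities")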

127. Metadata Service Attacks (OBJ 3.4)

In this lesson, we’re going to discuss how a penetration tester can attack the metadata service. The metadata service is used to provide data about an organization’s instances so they can configure or manage their running instances in the cloud. The instance metadata is then divided into categories, such as host name, events, and security groups. When we talk about metadata, remember that it’s data that provides information about other data. For example, if I had metadata about your last phone call that you made on your smartphone, I would not know what you said on that call, but I would know the phone number you called, how long you talked for, whether you called them or they called you, and things like that. This is data about the phone call itself, but not the contents of that actual phone call.

Well, the metadata service is used in AWS to provide data about the instances that are being run in an organization’s AWS cloud account. While this type of data about data may seem harmless, there have been some big breaches that are tied back to attacks against the metadata service as the initial attack vector. For example, back in July of 2019, Capital One announced that they had a weakness in the Amazon Web Services Instance Metadata Service that was the source of the data breach on their cloud architecture. This wasn’t the first time that the Instance Metadata Service was found to be lacking in security either, because security researchers had been presenting reports and presentations on this weakness since as early as 2014, when researcher Andrés Riancho gave a talk entitled Pivoting in Amazon Clouds. So, how can the metadata service be exploited? Well, it all starts with an SSRF, or a Server-Side Request Forgery. Remember, a server-side request forgery is a type of attack that takes advantage of the trust relationship between the server and the other resources that it can access. A server-side request forgery vulnerability occurs whenever a web application is fetching a remote resource without first validating the user-supplied URL.

This allows an attacker to coerce the application into sending a crafted request to an unexpected destination, even if it’s protected by a firewall, VPN, or other type of network access control list. These server-side request forgeries are used to exploit vulnerable applications, communicate with the metadata service, extract credentials, and pivot into an organization’s cloud account. To conduct this attack, a penetration tester first needs to identify an application or cloud-based service that is vulnerable to a server-side request forgery. For example, let’s consider a sample application that uses the following C# code:

public async Task<IActionResult> Get(string target)
{
    var client = new HttpClient();
    var result = await client.GetAsync(target);
    var json = await result.Content.ReadAsStringAsync();
    return Json(JsonConvert.DeserializeObject<GetResult>(json));
}

This code allows an API endpoint to accept an untrusted target parameter from the request. Then, it performs a GET request to that target and returns the response back to the client, essentially acting as a proxy service. It accepts a request from a user or a service on the front end.

And then it’s going to turn around and proxy that over to a backend service to request the information from an API located on that same domain. For example, I might have something like https://diontraining.com/proxy?target=http://diontraining.com/api/scientists, and this is going to return a list of some of the great scientists stored in my database to the front-end application in JSON format. So I might get something back like this:

[
  { "id": 1234, "firstName": "Charles", "lastName": "Darwin" },
  { "id": 5678, "firstName": "Albert", "lastName": "Einstein" },
  { "id": 4321, "firstName": "Dmitri", "lastName": "Mendeleev" }
]

You get the idea here.

Now again, this looks fairly harmless, except that security researchers and penetration testers quickly figured out that you could change the target being specified and attack the metadata service instead of having the connection proxied to that backend database. For example, if you used https://diontraining.com/proxy?target=http://169.254.169.254/latest/meta-data/iam/security-credentials/Admin-WAF-Role/, you’re actually going to cause a server-side request forgery to the instance metadata service and receive a JSON response from that new endpoint, in this case 169.254.169.254. The response comes back looking something like this:

{
  "Code": "Success",
  "LastUpdated": "...",
  "Type": "AWS-HMAC",
  "AccessKeyId": "...",
  "SecretAccessKey": "...",
  "Token": "...",
  "Expiration": "..."
}

This JSON response provides us with the temporary credentials for the service account that’s assigned to that particular resource in the instance metadata service. These keys can then be used to communicate directly with the organization’s cloud account. Now, there is a lot more to this type of attack, but for the exam, you just need to remember that the metadata service attack is a form of server-side request forgery that focuses on querying metadata about the instances and using it to gain access to the instances or containers and their associated credentials and keys.
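To make that final pivot step concrete, here is a minimal sketch, assuming you have already extracted temporary credentials as shown above, that loads them into boto3 and confirms they are valid by asking AWS who you now are; the credential values are placeholders.

import boto3

# Placeholders for the temporary credentials pulled from the
# instance metadata service via the SSRF.
creds = {
    "aws_access_key_id": "<AccessKeyId>",
    "aws_secret_access_key": "<SecretAccessKey>",
    "aws_session_token": "<Token>",
}

# STS get-caller-identity is a low-noise call that proves the stolen
# credentials work and reveals the account and role we have assumed.
sts = boto3.client("sts", **creds)
identity = sts.get_caller_identity()
print(identity["Account"], identity["Arn"])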

As the different vulnerabilities and exploits are rapidly changing, especially in the world of cloud computing, you need to continue to stay up to date with the latest exploits and techniques as you enter the world of penetration testing. For the exam, though, they are not going to ask you about specifics like Puma Scan’s SEC0129 server-side request forgery rule that I referenced here. But they are going to ask you about the concepts being exploited in the metadata service attack, as I briefly described in this lesson.

128. Software Development Kit (SDK) (OBJ 3.4)

In this lesson, we’re going to explore how software development kits are used in cloud computing. A software development kit, or SDK, is a package of tools that are dedicated to a specific programming language or platform. SDKs are commonly used by developers when creating applications because they contain a collection of elements that are needed to perform certain functions. SDKs are really helpful because the programmers don’t have to recreate all those pre-built functions. Software development kits, though, can contain vulnerabilities if the original authors who built those pre-built functions didn’t do a great job. In this case, when you use an SDK and its associated functions, you’re accepting those vulnerabilities into your code and into your final applications as well.

In cloud computing, developers often rely on SDKs to connect to different cloud resources. For example, the Amazon Web Services Software Development Kit can be used to simplify your coding by providing you with JavaScript objects to access different AWS services. This allows the developers to access AWS directly from their JavaScript code, which can be run from their web browser. SDKs also come in many different languages, and they support many different cloud providers. AWS provides the AWS Cloud Development Kit, which can be used in TypeScript, Python, and Java to define cloud infrastructures for use during orchestration.
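As a quick illustration of how much an SDK abstracts away, here is a minimal sketch using the AWS SDK for Python, boto3, to list an account’s S3 buckets in a few lines; without the SDK, you would have to construct and cryptographically sign each HTTPS request to the service yourself.

import boto3

# One client object wraps credential signing, retries, and the service's
# underlying REST API behind simple method calls.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])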

Amazon also provides SDKs for C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby to interact with different AWS services. Similarly, Azure supports .NET, Java, JavaScript, TypeScript, Python, Go, C++, C, Android, and iOS in its SDKs. By using SDKs, programmers have direct access to a collection of libraries that are built to make it easier to use these different cloud services from their language of choice. These SDK libraries are designed to be consistent, approachable, diagnosable, dependable, and idiomatic. So, how can you use SDKs to your advantage as an attacker? Well, the biggest way is to keep yourself up to date with the latest vulnerabilities that are being discovered and released in these different SDKs.

Remember, when a programmer imports a library or function from an SDK into their application or code, that application or code now requires the library or function to run. If that library or function is found to be vulnerable, the developer has to go back and update their code, test it, and redeploy it. This takes time, money, and energy, so you’re always going to find that there’s a delay between when a new vulnerability is released and when the organization’s developers actually update their vulnerable code. This becomes a prime opportunity for a penetration tester or an attacker to take advantage of a known vulnerability or weakness that hasn’t been fixed or patched in the final code.
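To see how a tester or developer might spot that window of exposure in a Python project, here is a minimal sketch that shells out to the open-source pip-audit tool, assuming it is installed, to list known-vulnerable dependencies pinned in a requirements file; the file path is a placeholder.

import json
import subprocess

requirements = "requirements.txt"  # hypothetical dependency manifest

# pip-audit checks the pinned packages against public vulnerability
# databases; it exits nonzero when findings exist, so don't use check=True.
result = subprocess.run(
    ["pip-audit", "-r", requirements, "-f", "json"],
    capture_output=True, text=True,
)

for dep in json.loads(result.stdout).get("dependencies", []):
    for vuln in dep.get("vulns", []):
        print(f"{dep['name']} {dep['version']}: {vuln['id']}")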

