CompTIA Pentest+ PT0-002 – Section 11: Application Vulnerabilities Part 3


105. Improper Headers (OBJ 3.3)

In this lesson, we’re going to discuss vulnerabilities associated with improper headers in your web applications. Now, the OWASP Secure Headers Project describes the different HTTP response headers that your application can use to increase its security. HTTP response headers are used to control how a web server operates in order to increase its security during those operations. If you leave the response headers at their default settings, you’re going to have a very insecure web application that can easily be exploited. Therefore, it’s important to verify you have the proper header settings enabled in order to protect against cross-site request forgery, cross-site scripting, downgrade attacks, cookie hijacking, user impersonation, and clickjacking. There are 10 different HTTP response headers you should consider in your configurations: HSTS, HPKP, X-Frame-Options, X-XSS-Protection, X-Content-Type-Options, Content-Security-Policy, X-Permitted-Cross-Domain-Policies, Referrer-Policy, Expect-CT, and Feature-Policy. First, we have HSTS, which is the HTTP Strict Transport Security web security policy, and this is going to be used to protect your website against protocol downgrade attacks and cookie hijacking.

By using HSTS, a web server can declare that web browsers and other complying user agents should only interact with it over secure HTTPS connections and never through the insecure HTTP protocol. Second, we have the Public Key Pinning extension for HTTP, also known as HPKP. Now, HTTP Public Key Pinning is a security mechanism that allows HTTPS websites to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates. The HTTPS web server serves up a list of public key hashes, and on subsequent connections, clients can expect that server to use one or more of those public keys in its certificate chain. Deploying HPKP safely requires operational and organizational maturity, because there is a risk that hosts can make themselves unavailable by pinning to a set of public key hashes that becomes invalid. Using HPKP can greatly reduce the risk of an on-path or man-in-the-middle attack, so it’s very useful in giving you a secure environment, and it can help with other false authentication problems, too.
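As a quick illustration, here is a minimal sketch of enabling HSTS in a hypothetical Flask application; the app itself and the two-year max-age value are assumptions for this example, not requirements from the lesson:

```python
# Minimal sketch: add an HSTS header to every response of a hypothetical
# Flask app. The two-year max-age and includeSubDomains flag are common
# recommendations, not mandates.
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_hsts(response):
    # Tell complying browsers to reach this site over HTTPS only.
    response.headers["Strict-Transport-Security"] = (
        "max-age=63072000; includeSubDomains"
    )
    return response
```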

Third, we have the X-Frame-Options header. The X-Frame-Options header is used to prevent clickjacking by declaring a policy that communications are only allowed from a host to a client browser directly, without allowing the page to be displayed inside a frame on another webpage. Fourth, we have the X-XSS-Protection header. This header is used to enable the cross-site scripting filter in the web browser, and it provides a method to sanitize the page and report violations if a cross-site scripting attack is detected by the browser. Fifth, we have the X-Content-Type-Options header. The X-Content-Type-Options header is used to prevent the browser from interpreting files as something other than what they’re declared as by the Content-Type in the HTTP header. Essentially, this header dictates the content type that the page should be interpreted as, and it prevents a threat actor from tricking the browser into processing the wrong type of file or content.
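These three headers are usually set to short, fixed values. The snippet below shows commonly recommended values as an illustrative sketch; they are assumptions for this example, so tune them to your own application:

```python
# Commonly recommended values for the three browser-protection headers
# discussed above (illustrative assumptions, not the only valid settings).
browser_protection_headers = {
    # Refuse to let any other page display this page inside a frame
    # (anti-clickjacking).
    "X-Frame-Options": "DENY",
    # Turn on the browser's XSS filter and block the page when an attack
    # is detected.
    "X-XSS-Protection": "1; mode=block",
    # Forbid MIME-type sniffing; trust only the declared Content-Type.
    "X-Content-Type-Options": "nosniff",
}
```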

Sixth, we have the Content-Security-Policy header, or CSP. Now, the Content-Security-Policy header requires careful tuning and precise definition of its policy. If you enable CSP, it has a significant impact on the way browsers render pages, such as disabling inline JavaScript by default, so JavaScript has to be explicitly allowed in the policy if you’re going to run it. The Content-Security-Policy header is able to prevent a wide range of attacks, though, including cross-site scripting and other cross-site injections. Seventh, we have the X-Permitted-Cross-Domain-Policies header. The X-Permitted-Cross-Domain-Policies header sends a cross-domain policy file as an XML document to the web client and specifies whether the browser has permission to handle data across multiple domains. When a client requests content hosted on a particular source domain, and that content makes requests directed toward a domain other than its own, the remote domain needs a hosted cross-domain policy file that grants access to the source domain and allows the client to continue that transaction. Normally, a meta-policy is declared in the master policy file, but those who can’t write to the root directory can also declare a meta-policy using this X-Permitted-Cross-Domain-Policies HTTP response header instead. Eighth, we have the Referrer-Policy header. The Referrer-Policy header governs which referrer information should be sent in the Referer header included with a request. For example, the Referrer-Policy could contain the value same-origin, which tells the browser that requests to the same origin are authorized to carry referrer information but cross-origin requests are not.
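Here is a sketch of deliberately strict starting values for these three headers; the specific policy strings are assumptions for the example and must be tuned to the scripts, origins, and clients your application actually uses:

```python
# Deliberately strict starting values (illustrative assumptions only).
policy_headers = {
    # Allow resources only from our own origin; inline JavaScript stays
    # disabled unless explicitly allowed here.
    "Content-Security-Policy": "default-src 'self'; script-src 'self'",
    # Refuse to serve any cross-domain policy file to requesting clients.
    "X-Permitted-Cross-Domain-Policies": "none",
    # Send referrer information only on same-origin requests.
    "Referrer-Policy": "same-origin",
}
```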

Ninth, we have the Expect-CT header. Now, the Expect-CT header is used by a server to indicate that browsers should evaluate connections to the host emitting the header for Certificate Transparency compliance. Certificate Transparency is an open framework for monitoring digital certificates. Domain owners may find it useful to monitor certificate issuance for their domain and use that to detect mis-issued certificates. Prior to Certificate Transparency, there was no efficient way to get a comprehensive list of all the certificates issued for your domain, so enabling CT is quite useful. Tenth, we have the Feature-Policy header.

The Feature-Policy header allows developers to selectively enable and disable the use of various browser features and APIs. For example, if you want to control access to the accelerometer or gyroscope in a mobile device using its web browser, you can do that using the Feature-Policy header. One useful feature to enable for a secure web application might be the geolocation feature using the Feature-Policy header. This would allow you to ensure the user is located within a specific geographical area as part of your authentication process, for example. So in summary, if you’re working as a developer for web applications, it’s important for you to review the different HTTP headers and response options that are available to you to secure your web applications and minimize their vulnerabilities.
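As a practical wrap-up, a quick way to see which of these ten headers a site actually returns is a short audit script. This is a minimal sketch assuming the third-party requests package and a placeholder target URL, and it only reports whether each header is present, not whether its value is sensible:

```python
# Minimal sketch: report which of the ten security headers from this
# lesson a target returns. The URL is a placeholder; swap in a host you
# are authorized to test.
import requests

SECURITY_HEADERS = [
    "Strict-Transport-Security", "Public-Key-Pins", "X-Frame-Options",
    "X-XSS-Protection", "X-Content-Type-Options", "Content-Security-Policy",
    "X-Permitted-Cross-Domain-Policies", "Referrer-Policy", "Expect-CT",
    "Feature-Policy",
]

response = requests.get("https://example.com", timeout=10)
for header in SECURITY_HEADERS:
    status = "present" if header in response.headers else "MISSING"
    print(f"{header}: {status}")
```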

106. Code Signing (OBJ 3.3)

Digital signatures have expanded beyond just email. Code signing is a method of using a digital signature to ensure the source integrity of programming code. Code signing is the process of digitally signing executables and scripts to confirm the software was written by the author and to guarantee the code has not been altered or corrupted since it was digitally signed. The process employs cryptographic hashes to validate the authenticity and integrity of the code. Code signing works just like digitally signing an email: the code developer’s private key is used to encrypt the hash digest of the finished executable file or script as a means of providing non-repudiation for that code, to prove it was actually released by the developer and has not changed since the developer signed it. Now, code signing relies upon the digital signature of a compiled program file or the source code that’s being distributed.
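To make the sign-then-verify flow concrete, here is a simplified sketch using the third-party cryptography package. The bare RSA key stands in for the developer’s key; real code signing uses certificates obtained by registering with a platform vendor and platform-specific tool chains:

```python
# Simplified sketch of code signing: sign the hash digest of some code
# with a private key, then verify it with the matching public key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the developer's private key (real signing uses a key tied
# to a certificate, not a freshly generated one like this).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

code = b"print('hello from a signed script')"  # the finished file's bytes

# Sign: the hash digest of the code is computed and encrypted with the
# developer's private key in one call.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# Verify: raises InvalidSignature if even one byte changed after signing.
private_key.public_key().verify(
    signature, code, padding.PKCS1v15(), hashes.SHA256()
)
print("signature valid: code unchanged since signing")
```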

For example, if we create a mobile application to offer through an app store, our installer file must be code signed. Every developer has to register with Apple or Google to receive a private key. Just as in the email example, the application file is hashed, and that hash is encrypted using the developer’s private key. By code signing the file, the software is said to have come from a trusted source. That being said, just because a piece of code is signed doesn’t guarantee the code is of high quality.

Code signing is just a validation that the original developer was ready to distribute the code as it is and that the code has not been changed since it was signed. For example, if the developer’s workstation was hacked into and an attacker inserted code into the source code repository prior to signing, the release could contain malicious code or a backdoor and still be code signed. Just look at the 2020 SolarWinds attack, where an APT was able to insert malicious code into the SolarWinds source code. Then, when the company compiled the code, signed it, and distributed it to its customers, they all became infected. Code signing is considered a good control to implement, but it is not a foolproof solution, as code signing only provides as much security as the organization that actually signed the code.

107. Vulnerable Components (OBJ 3.3)

In this lesson, we’re going to discuss the different web application components that could be vulnerable to exploitation. This includes client-side processing versus server-side processing, JavaScript Object Notation (JSON), Representational State Transfer (REST), the Simple Object Access Protocol (SOAP), browser extensions, HTML5, Asynchronous JavaScript and XML (AJAX), machine code, and bytecode. First, we have client-side processing and server-side processing. When your team is developing a web application, one of the first decisions you need to make is whether the code is going to run on the user’s computer, which is known as client-side processing, or on the server, known as server-side processing.

Now, as web designers, we tend to prefer coding things for client-side processing, because it puts the load on the end user’s machine instead of on our server, and this makes the program more responsive and less costly for us to run. But this is not always the most secure method. This is a risk decision that has to be weighed when you’re designing a new web application, because server-side processing is considered to be much more secure and trustworthy than client-side processing for most use cases. In most modern web applications, you’re going to see the server host the functional logic and data, and these can be accessed by your client’s browser. That browser then handles the final display of the data, but all the actual code being executed is running back on the server.

For example, if you’re accessing your bank account, most of the code is going to run on the server side, and only the finished results are sent to the client’s browser for display on your device. Now, on the other hand, if you’re more concerned with the speed and efficiency of code execution and you don’t care as much about security, you may use client-side processing to execute that code. For example, let’s say I’m making a web-based game; the performance of the web app is probably more important to me than security, so I might shift to a client-side processing model instead. Just remember, whenever you use client-side processing, the end user could modify the application’s logic using tools like Burp Suite or BeEF to bypass security controls and load malicious objects into the project. Again, like I said earlier, this is a case where you need to balance the risk versus the reward when you choose client-side versus server-side processing.
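To illustrate why the server is the trustworthy place for security logic, here is a minimal sketch of a hypothetical Flask endpoint (the route and the limit are assumptions for the example): even if an attacker uses Burp Suite to strip out every client-side JavaScript check, this validation still runs on hardware they don’t control.

```python
# Minimal sketch: server-side re-validation of a client-supplied value.
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/transfer", methods=["POST"])
def transfer():
    amount = request.form.get("amount", type=int)
    # Never trust the client: re-check every input on the server, even if
    # the browser already validated it.
    if amount is None or not (0 < amount <= 10_000):
        abort(400)
    return f"transferring {amount}"
```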

Second, we have JSON and REST, which are the JavaScript Object Notation and the Representational State Transfer. Now, Representational State Transfer, or REST, is a client-server model for interacting with content on remote systems over HTTP. A RESTful web service chooses the format that’s going to be used during the HTTP exchange, and REST supports XML, JavaScript, and JSON formats. By far, though, the most commonly used format for data transfer is JSON. JSON is a simple text-based message format that’s used with RESTful web services. JSON was developed for JavaScript and is quite popular in most web applications. Just like all data exchange interfaces, JSON is subject to injection attacks, much like an XML injection or a SQL injection. To prevent an injection attack from occurring, you should always code your JSON APIs to inspect and sanitize all inputs and outputs being provided to your application’s code. You’re going to find that REST and JSON are preferred over SOAP and XML because less data is passed over the network, and this makes them really good for mobile devices. But if your web application needs more security and transactional services, then you may want to use SOAP and XML instead of REST with JSON.
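Below is a small sketch of the inspect-and-sanitize idea for a JSON API. The field name and allow-list pattern are assumptions for the example; an allow-list per field is just one simple way to blunt injection payloads carried inside a JSON body:

```python
# Minimal sketch: validate and sanitize a JSON request body before any of
# its values reach a database query or other sensitive sink.
import json
import re

# Allow-list: usernames may only contain these characters (an assumption
# for this example; define rules that match your own data).
ALLOWED_USERNAME = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

def parse_search_request(raw_body: str) -> dict:
    data = json.loads(raw_body)  # raises ValueError on malformed JSON
    username = data.get("username", "")
    if not isinstance(username, str) or not ALLOWED_USERNAME.match(username):
        raise ValueError("rejected: username fails the allow-list check")
    return {"username": username}

print(parse_search_request('{"username": "alice"}'))
```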

This brings us to our third topic, which is SOAP, or the Simple Object Access Protocol. SOAP is used for exchanging structured information between your different web services. SOAP consists of a processing model, an extensibility model, a binding framework, and a defined message structure. SOAP passes more data and is more verbose than REST, and its header is passed unencrypted. For this reason, more and more developers are migrating to REST instead of SOAP, because REST can be secured with an encrypted header. Now, you should be aware that SOAP APIs are often exploited to perform SQL injections, content discovery, authentication bypass, and other types of attacks. The number one method to protect your SOAP APIs from exploitation is to conduct inspection and sanitization of the inputs and outputs for that application, just like you do for JSON REST applications.
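For comparison with the compact JSON example above, here is a bare-bones sketch of a SOAP exchange. The endpoint, operation name, and namespace are hypothetical, but the verbose XML envelope structure is what the lesson describes:

```python
# Bare-bones sketch of posting a SOAP envelope (hypothetical endpoint and
# operation; note how much structure wraps one small parameter).
import requests

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <GetAccountBalance xmlns="https://example.com/banking">
      <AccountId>12345</AccountId>
    </GetAccountBalance>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/soap",  # placeholder endpoint
    data=envelope,
    headers={"Content-Type": "application/soap+xml"},
    timeout=10,
)
print(response.status_code)
```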

Fourth, we have browser extensions. Now, browser extensions are small programs that provide expanded functionality or features to a web browser that are not included in the default configuration. Originally, all webpages were quite static and unchanging. Over time, though, programmers wanted webpages to become more interactive, so they created other technologies, including Adobe Flash and Microsoft ActiveX, as plugins that added the desired interactivity. Now, Adobe Flash is an older plugin that’s reached its end of life and is no longer supported by Adobe. Still, there are some websites out there that have not updated their code yet, and they still try to load up Flash to run their sites. This makes those websites and your clients very vulnerable, because Flash has known vulnerabilities and Adobe is no longer providing security patches for it. For this reason, if you find Adobe Flash installed on any of your network clients, you should immediately remove it. ActiveX is another plugin that used to be very popular but these days is considered deprecated and a legacy plugin. ActiveX is a client-side technology that uses object-oriented programming, the Component Object Model, or COM, and the Distributed Component Object Model, or DCOM.

Now, COM allows the software to communicate, while DCOM allows the software to be distributed across a large number of machines. ActiveX controls, as these downloaded programs are called, are installed into the operating system, and they run with system- or administrative-level permissions. For this reason, ActiveX has long been considered an insecure technology and one that should be avoided in your network at all costs. In general, most modern web browsers have moved away from plugins and instead rely on extensions. Extensions are used to provide expanded support, and they can be used to alter how a web browser interprets and loads a webpage, as well as provide additional interaction with third-party services. These extensions are essentially mini programs, so you should be careful and only install extensions from trusted vendors to avoid putting your web browser at risk. JavaScript is another technology that was developed to add interactivity to webpages, because the original HTML standards were not designed to support it.

Now, JavaScript commands consist of event handlers that are embedded directly into the HTML code of a webpage. JavaScript has also significantly expanded in recent years and has become essentially a full-featured programming language at this point. Fifth, we have HTML5. HTML5, or Hypertext Markup Language version 5, is the latest version of HTML, which is a markup language, and it has added tremendous support for things like multimedia, effectively negating the need for Flash or ActiveX. Most web developers have fully embraced HTML5 now, and web vulnerability scanners have added the necessary signatures to help discover security vulnerabilities in sites that are built with HTML5. Now, because HTML5 was designed as a powerful web application programming language that enables feature-rich applications, it does have several areas that can be exploited by an attacker. These areas include cross-domain messaging, cross-origin resource sharing, WebSockets, server-sent events, local and offline web storage, client-side databases, geolocation requests, web workers, tabnabbing, and sandboxed frames, which are common areas exploited in poorly coded HTML5 websites. To avoid exploitation, your developers should always follow secure coding best practices whenever they’re developing any web application, especially those using HTML5 or other programming languages.

Sixth, we have AJAX, which is the Asynchronous JavaScript and XML set of technologies. Now, AJAX is a grouping of related technologies that are used on the client side to create asynchronous web applications. AJAX has a built-in security feature known as the same-origin policy, which requires that all of the technologies being used come from the same web domain if they’re going to run on the same webpage. The AJAX engine is an intermediary that loads up when a user first requests a webpage, and it allows user interaction without constant communication with the web server, which reduces the amount of required server-side processing. By default, webpages are considered stateless, which means they don’t remember any user actions, but this is not very useful for most web applications, because we need state management to be able to know what a user is doing. In the early days of the internet, state management was conducted by using cookies stored on a client’s machine, but due to security concerns, much of state management is now done through server-side storage and databases.

AJAX is really useful in maintaining these sessions and conducting state management for us, and it’s considered to be much more secure than other methods because the interactions between the client and the server are obscured by the server-side processing of those scripts. Now, there are some common vulnerabilities that are exploited in AJAX applications. These include cross-site scripting techniques that target the plain JavaScript user commands AJAX transmits to the server, identifying function names, database table names, user IDs, and other sensitive program details, and then trying to overwrite them using malicious scripts.
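One standard defense against having those AJAX responses turned into cross-site scripting payloads is output encoding. Below is a small sketch using the markupsafe package (it ships alongside Flask) to HTML-escape a user-controlled value before a response reflects it back:

```python
# Minimal sketch: HTML-escape user-controlled data before reflecting it
# back in a response, so injected markup renders as inert text.
from markupsafe import escape

user_supplied = '<script>alert("xss")</script>'
safe_fragment = escape(user_supplied)
print(safe_fragment)  # the <, >, and " characters come back HTML-encoded
```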

Seventh, we have machine code and bytecode to discuss. Now, machine code is the set of basic instructions, written in a machine language, that can be directly executed by the CPU or processor. In general, when a developer writes a program in source code, it has to be compiled and converted down into machine code that the computer can actually understand and then execute. Now, the problem with machine code is that it is specific to a type of processor and can only be run on the specific processor for which it was compiled. In general, most Windows machines run on either an x86-based processor, which is an older 32-bit type of processor, or the newer x64-based processors, which are 64-bit processors. From 2007 to 2020, Mac computers also used Intel-based 64-bit processors, but that changed in late 2020 with the introduction of Apple Silicon processors, which are a form of ARM-based processors. Similarly, mobile devices also use ARM-based processors, and if machine code was compiled to run on a Windows desktop, for example, it will not run on an ARM-based processor. Now, to overcome this issue, Oracle’s Java programming language uses the concept of bytecode to make its Java applets cross-platform, so they can run on any system. Bytecode is essentially an intermediate form of code produced by the compiler that can then be executed by a virtual machine, which translates the bytecode into the final machine code for that processor. On any given system, a Java Virtual Machine, known as a JVM, performs this translation from bytecode into machine code that can run on that system’s specific processor.
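Java isn’t the only language that works this way; Python also compiles source into an intermediate bytecode that its own virtual machine executes, so the standard library’s dis module offers a convenient, runnable way to see the concept:

```python
# Minimal sketch: disassemble a tiny function into the interpreter's
# bytecode. These instructions are executed by Python's virtual machine,
# not directly by the CPU (exact opcode names vary by Python version).
import dis

def add(a, b):
    return a + b

dis.dis(add)  # e.g., LOAD_FAST instructions followed by an add opcode
```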

Now, Java applets are another form of downloadable component that can be retrieved from the server and run in a Java Virtual Machine on the user’s computer. Java, in the past, has been known for having malicious applets, and many became wary of using Java because, in its early days, it was used for malicious software that got run on users’ machines. But Java security has improved tremendously over the years, including with the introduction of the Java security model. This makes Java a great resource for programmers who need to create cross-platform applications that can run on any operating system, because most operating systems do support Java by default, using a Java Virtual Machine to translate the bytecode into machine code on any system or processor.
