MCPA MuleSoft Certified Platform Architect Level 1 – Non-Functional Requirements of APIs Part 3


13. Review Solution from Previous Assignment

Hi. In this lecture, let us revisit our design and see how it looks after the previous assignment, where we applied various policies across the layers on the various APIs. What you are looking at is the revisited design, revised by applying the API policy analysis. If you remember, at the beginning of this section we had a design that was the output of the assignment from the previous section, and we took that output and discussed it: the design was fine functionally, but it still had NFRs to be applied because it was synchronous in nature, and new kinds of Experience API consumers could appear as the load increases. We discussed all of those points.

So now, after learning the details of NFRs and API Manager and applying the policies, let us see how this design changes. I hope that in your assignment, whatever analysis you have done, you came up with a list of policies somewhat close to what we have here. Not everyone will come up with exactly the same policies, because everyone thinks differently. Some of you may have come up with a subset of these API policies, and some of you may have come up with even more policies to make it even more robust. As long as your list is somewhat close to this one, it is all good.

So let us now go layer by layer. In the Experience layer, the first API is the Create Sales Order Experience API exposed as a SOAP-based web service. Because it is SOAP based, the request content will come in XML format, so it is advised to apply the XML threat protection policy on that Experience API. The Create Sales Order REST service, on the other hand, is used by the mobile and bot based API consumers, which generally interact in JSON format, so let us apply JSON threat protection there to check the JSON-based data structures.

Then the mobile and bot based consumers need one more extra API policy, which is OAuth based, because they are device-held or automated ways of interacting with the APIs. Instead of manual intervention, they need to be authenticated with OAuth tokens; they cannot log in with credentials every time because they are mostly packaged or integrated apps, so they have to make use of token-based authorization.

So we have added OAuth 2.0 access token enforcement. These particular API policies are specific to individual APIs. But across the Experience API layer, irrespective of which API consumer it is, we can apply rate limiting that is SLA based, because every API consumer, like we discussed, has to select an SLA tier, be it Basic, Gold or Platinum. For that tier, a particular rate limit will be applied. So we have to apply SLA-based rate limiting across all the Experience layer APIs.
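As a side note on how this shows up for consumers (we will come back to this in the RAML lecture), here is a minimal, illustrative RAML trait for an SLA-based rate limiting policy; documenting it as a 429 response is my assumption of a reasonable convention, not a mandated format:

```yaml
#%RAML 1.0 Trait
# sla-rate-limited.raml -- illustrative trait reflecting an SLA-based rate limiting policy
responses:
  429:
    description: |
      The rate limit for the consumer's selected SLA tier (for example Basic,
      Gold or Platinum) has been exceeded; retry once the current window resets.
```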

Another policy common to the Experience layer APIs is IP whitelisting. This is because the API consumers or end users have to be whitelisted; otherwise there is no way we can fully secure the applications. There can be many unknown scenarios with bad IPs or bad actors outside trying to hack in, so we have to perform IP whitelisting. But one thing I should correct here: IP whitelisting cannot be applied across all Experience APIs. It can only be applied to APIs meant for known API consumers. For example, we may be exposing an API to particular LOBs (lines of business) or to external parties; if they are not LOBs within your organization, they are external. As long as you know to whom you are giving access, it could be an external organization or a third-party partner.

If you are exposing your APIs to those kinds of consumers, yes, you have to whitelist their IPs. Bots can also be whitelisted by range: if they are hosted agents, there will definitely be an IP range, geographic or regional, so we can still whitelist them. But mobile devices especially cannot be whitelisted. We cannot apply IP whitelisting because the consumer could be an Android app or an iOS app, anyone can have a phone in their hand, and there can be any number of iPhones or Android phones, so we do not know the IP addresses of all of them.

So IP whitelisting cannot be applied to a mobile-based consumer. I wanted to highlight this because there is often a question about whether IP whitelisting is applicable for mobile-based consumers, and the answer is no, because we will not know their IP addresses. Just remember that for mobile-based consumers, IP whitelisting is not applicable. Now, one more thing we have implemented is the asynchronous behavior between the Experience and Process layers.

This is not an API policy as such; it is a revisiting of the design to ensure the end-to-end communication is not completely synchronous. The Experience layer just submits the request to the Process (PRC) layer, and the Process API immediately gives back a unique reference number as the order number after performing the validation.
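To make that concrete, here is a minimal, hypothetical RAML sketch of how the Process API's submission resource could behave; the resource name, field names and the use of a 202 Accepted response are illustrative assumptions, not taken from the course design:

```yaml
#%RAML 1.0
title: Create Sales Order PAPI (illustrative fragment)
/salesOrders:
  post:
    description: |
      Validates the customer, ship-location and item IDs synchronously,
      then queues the actual sales order creation to run asynchronously.
    responses:
      202:
        body:
          application/json:
            example: |
              {
                "orderReference": "SO-000123",
                "status": "ACCEPTED"
              }
```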

The back-end processing of the actual sales order then happens behind the scenes using an async mechanism. Now let us move on to the next layer, the Process API layer. In the Process API layer, the API policy enforced commonly across all the APIs is IP whitelisting; this definitely has to be done. So which IPs are we going to whitelist here? There are no end consumers, right? The IPs we need to whitelist are the IP range of the Experience API layer.

Because every Process API should be called only from an Experience API, or from another Process API; that is the rule. A Process API can be invoked either from the Experience API layer or from some other Process API within the Process API layer. There is no other way a request can come in, since API consumers cannot call a Process API directly. So the only paths are the EXP layer calling the PRC layer, or one PRC API calling another PRC API.

So we should whitelist the IP ranges of the entire Experience API layer and the Process API layer. How do you get those? Generally, if you have a VPC set up with subnets of IP addresses, you will know which subnet or IP range belongs to your Experience layer APIs and which belongs to your Process layer APIs. Even if all three layers sit in the same subnet, you just give that range so that only requests coming from that particular range are allowed.
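As an illustration, the allowlist for the Process layer could be expressed as CIDR ranges along these lines; the configuration keys and subnet values below are purely hypothetical placeholders, not the exact policy schema in API Manager:

```yaml
# Illustrative IP allowlist values for a Process API
# (keys and CIDR ranges are assumptions for the example only)
ipExpression: "#[attributes.headers['x-forwarded-for']]"   # where the caller's IP is read from
allowedIps:
  - 10.0.1.0/24   # Experience API layer subnet
  - 10.0.2.0/24   # Process API layer subnet (PRC-to-PRC calls)
```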

The second API policy common across all Process API layer APIs is client ID enforcement, because, again, the Process APIs are not called directly by end consumers or end users. There is no need to enforce OAuth-based authentication at all; simply enforcing client ID credentials is enough to know where the request is coming from, whether it is the Experience layer, a particular LOB, or a particular vertical in the organization. So these two, IP whitelisting and client ID enforcement, are common across all Process APIs. But there is one policy specific to the Validate External IDs API that we applied, which is HTTP caching. This is because, like we discussed earlier in this section, although we made the submission of the order asynchronous, the validation of the customer ID, shipment location ID and item IDs still has to take place.

That part stays synchronous; we cannot make it async because it has to happen in real time. So what else can we do? Because the data is static, loaded one time into the back-end system, and will not change, we can make use of caching in this scenario. If we enable HTTP caching on this API and build the cache key with a DataWeave expression that makes it unique, taking the customer ID, the ship-location ID and the item IDs as the key, then for that combination the response will be cached. Any subsequent request that comes with the same combination will retrieve the response from the cache itself, without having to go to the System APIs and invoke all three individual APIs again. This is how we gain performance drastically.
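As a rough illustration of that key, the caching configuration could look something like the following; the field names are placeholders, and the DataWeave expression assumes the IDs arrive as query parameters, which may differ in your actual API:

```yaml
# Illustrative HTTP caching settings for the Validate External IDs Process API
# (keys below are example placeholders, not the exact policy schema)
cacheKey: >-
  #[(attributes.queryParams.customerId default '') ++ '|' ++
    (attributes.queryParams.shipLocationId default '') ++ '|' ++
    (attributes.queryParams.itemIds default '')]
ttlSeconds: 86400     # static master data, so a long time-to-live is acceptable
maxEntries: 10000
```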

All right, now moving to the System API layer. The commonly applied API policies in the System API layer are spike control and SLA-based rate limiting. Why do these appear again here, when we already had SLA-tier-based rate limiting at the Experience API layer, and why spike control here but not in the Process layer? Because the Process layer is an orchestration layer. Technically, if you want to apply these policies there, you can, but it does not make sense to apply spike control on an orchestration layer, because there could be heavy orchestrations.

Generally, policies like rate limiting or spike control fit where there is atomic API behavior: the Experience API layer, where we know exactly which API consumer is calling, or the System API layer, where we know which system, or which fixed functionality to be precise, we are hitting. These are modular pieces, so we can very well control such small modular pieces with rate limiting and spike control. The Process layer, however, could involve a very large orchestration, and in general we cannot tell a big orchestration to spike control or throttle itself. That is why we apply these policies at the System API layer. What we are rate limiting here is the number of calls that go and hit the back-end systems.
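To make the numbers concrete, the spike control values for one of these System APIs could be along these lines; the parameter names are illustrative placeholders rather than the exact policy fields in API Manager:

```yaml
# Illustrative spike control values for a System API protecting a legacy back end
# (parameter names are placeholders, not the exact policy schema)
maxRequests: 100          # allow at most 100 requests...
timePeriodMillis: 1000    # ...per second to reach the back-end system
delayTimeMillis: 500      # excess requests are queued and retried after this delay
delayAttempts: 2          # number of retries before rejecting the request
queuingLimit: 20          # how many excess requests may wait in the queue
```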

Instead of letting the requests coming from the upstream layers, such as the Process APIs or the Experience APIs, bombard the back end, we can throttle or rate limit them, saying, for example, let us send only 100 at a time. We will not bombard the back-end systems, because they could be old or legacy systems. Spike control also balances the incoming requests and the outgoing responses to make sure there is no heavy load on the back-end system. Then the policy applied to specific APIs is again HTTP caching, on the Validate Customer ID System API, the Validate Item System API and the Validate Ship Location System API. These three are individual APIs, and the Validate External IDs Process API is a composed one, so we cache the individual ones as well: the first hit coming from the Process layer will cache the responses in the System layer for each individual API.

And once they are composed, the composed response is anyway cached in the Process API layer as well. So this is how we enhance our solution design to make it performant and meet the NFRs for the Create Sales Order API. If you have any queries, please do post them in the Q&A forum so that others can also gain insights from the solution ideas you may have. Happy learning.

14. Reflection of API Policies in RAML

Hi. In this lecture let us discuss the principle of how API policies should reflect their requirements in the RAML definitions of an API. If you noticed, many of the API policies that we have seen change the way the HTTP request or response must be formed by API clients when hitting the APIs. For example, take the client ID enforcement policy: it enforces that the client credentials must compulsorily come in the request from the API client, and they can come in the HTTP Authorization header, as query parameters, or as part of other HTTP headers.

We saw this, right? So how will the end users know? When they look at the API definition, they usually go to Exchange, on the public portal, and that is where they read the spec. The functional spec gives them a good idea; for example, if they look at Create Sales Order, they will see the functional spec and compose the request accordingly. But how will they know they also have to pass these HTTP headers? In our case, I prematurely included the traits in the early demonstrations, so as part of the traits we had already defined the client ID and client secret. For a better course journey it would have fit better had I not given the traits during the early demonstrations and applied them only now, but at the end of the day you are smart people; you will understand.
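For reference, a trait reflecting the client ID enforcement policy could look roughly like this; the header names follow the common client_id/client_secret convention, but treat the exact names, file name and 401 response as assumptions to align with your own policy configuration:

```yaml
#%RAML 1.0 Trait
# client-id-required.raml -- illustrative trait documenting the client ID enforcement policy
headers:
  client_id:
    type: string
    required: true
    description: The client ID issued when the consumer requested access in Exchange.
  client_secret:
    type: string
    required: true
    description: The client secret paired with the client ID.
responses:
  401:
    description: Missing or invalid client credentials.
```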

So if there are no traits, the end users will not know when or why they have to pass these kinds of conditions, because they are not reflected in the design. The correct way is to make sure that whatever policies we are going to apply, such requirements are reflected in the RAML; you have to go and update the RAML. Now the question would be: how can we know the NFRs or the policies we are going to enforce at the very early stage of the project? Because the Design Center work, the designing of the RAML, happens early in the phase, right? It was in the early phases of this course as well, and at that time we did not even discuss NFRs; we said let us first concentrate on the functional part and then come to the NFRs. Well, the Design Center part of the design phase is not a one-time, done-and-dusted thing. It can be revised; there can be many iterations. So yes, you can initially go with the functional design, finish the structure, schemas, requests, responses, examples, et cetera, and implement the API.

Then, during the NFR or API policy enforcement phase, we can very well go back to Design Center, or to the RAML specification, and update the RAML to reflect the traits. The reason I am addressing this in a small lecture like this is to make it a practice never to miss the NFR and API policy requirements in your RAML. Always go and update it. Teams usually tend to forget this because they are in the middle or at the end of a project phase and think everything is set, so why touch the RAML now; or they may not even remember the RAML because everything still works as usual. But that brings down the value of your API from the consumer's perspective, who will think it is not well documented.

To avoid that bad remark, please go and reflect it in the API specification, and traits are the best way to do that. Some requirements may not fit into traits; for example, OAuth 2.0 enforcement, Basic Authentication, or the HTTPS protocol can be taken care of via the protocols or the security schemes in the RAML. But wherever things like query parameters or HTTP headers have to be supplied, those can be expressed with traits; they are the perfect mechanism for expressing the changes to an API specification introduced by the application of an API policy.
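For example, the OAuth 2.0 enforcement policy could be reflected through a security scheme roughly like the one below; the URIs and grant type are placeholders for whatever your identity provider actually uses:

```yaml
#%RAML 1.0
title: Create Sales Order EAPI (illustrative fragment)
securitySchemes:
  oauth_2_0:
    type: OAuth 2.0
    describedBy:
      headers:
        Authorization:
          type: string
          description: Bearer access token issued by the identity provider.
    settings:
      authorizationUri: https://idp.example.com/oauth/authorize   # placeholder
      accessTokenUri: https://idp.example.com/oauth/token         # placeholder
      authorizationGrants: [ client_credentials ]
/salesOrders:
  post:
    securedBy: [ oauth_2_0 ]
```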

So who are the right people, or the right team, to do this? The C4E team. The C4E should ideally, in my opinion, own the whole definition of these reusable RAML fragments: each policy may come out as a particular trait, and all these traits should be individually created as API fragments, the RAML fragments we saw in a previous lecture. The C4E should create them and publish them to Exchange as reusable fragments, so that designers can leverage those fragments and pick the right one based on the policies they are applying.
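To show how a designer would then consume such fragments, here is a minimal sketch; the Exchange asset paths, organization ID and trait file names are hypothetical placeholders:

```yaml
#%RAML 1.0
title: Create Sales Order PAPI (illustrative fragment)
traits:
  # Fragments published to Exchange by the C4E; the paths below are placeholders
  client-id-required: !include exchange_modules/acme-org/client-id-required/1.0.0/client-id-required.raml
  sla-rate-limited: !include exchange_modules/acme-org/sla-rate-limited/1.0.0/sla-rate-limited.raml
/salesOrders:
  post:
    # Applying the traits documents the headers and error responses the policies introduce
    is: [ client-id-required, sla-rate-limited ]
```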

If we map each policy to one reusable RAML fragment, then for whatever policies the teams are enforcing on their APIs, they can map them easily: if I applied this policy, I go and pick this fragment and add it as a trait. So try to keep your traits mapped close to one-to-one with your API policies, publish them to Exchange and reuse them as traits. This is a practice you need to follow. Please remember it, as it is the best way to keep your RAML definitions and documentation up to date. Happy learning.

15. Anypoint Security Edge

Hi. This is the last lecture in this particular section. In this lecture we will discuss Anypoint Security Edge. This is a separate component for customer-hosted deployment environments; it comes in addition to API Manager for customer-hosted deployments, in order to enforce policies at the edge gateway level. The component provides the functionality of an edge gateway and helps enforce edge policies. If you are unaware of what an edge gateway or edge policies are, here is a general definition: an edge gateway sits at the DMZ level to enforce networking-related controls such as VPNs, prevent bad things from happening, and apply restrictions at the very first level of defense. Edge policies, in general IT terminology, are the policies applied at that first level of defense, at the DMZ level. At the technology architecture level, Anypoint Security Edge and its edge policies are completely independent of Anypoint API Manager.

And the API policies and edge policies do not get mixed up either: Anypoint Security Edge is not some kind of replacement for Anypoint API Manager in customer-hosted runtime plane environments. No; in customer-hosted runtime environments you have both Security Edge and API Manager, because a customer-hosted environment is completely owned by the customer, who has to manage the DMZ, and this extra component helps enforce edge policies at the edge gateway. On CloudHub, by contrast, MuleSoft already takes care of the first level of defense, so we are provided only with Anypoint API Manager for the second level of defense, the API policies. Now let us look at the differences between API Manager and Anypoint Security Edge. Anypoint Security Edge is typically deployed into a DMZ on customer-hosted runtime planes, whereas API Manager applies to both CloudHub runtimes and customer-hosted runtimes. And Security Edge handles all requests coming into the DMZ.

These edge policies, and the Security Edge component itself, kick in at the DMZ level; this is the first line of defense. API Manager, on the other hand, is for APIs only; it is the second line of defense. Once a request has come in, how we restrict that API is the API Manager concern. One more thing about Security Edge: edge policies can be applied to a whole set of APIs, which is not the case with API Manager. With API Manager, as we did in the previous lectures and demonstrations, we have to apply the policies on each and every individual API instance.

We open the API instance, go to the Policies tab on the left, and apply the policies we want on that API instance. Whereas, because Security Edge is an edge gateway concept, we do not go API by API to set policies; the edge policies apply to a whole set of APIs in general, irrespective of which request is coming through the DMZ. Another difference is that Security Edge supports policies on both Mule runtime hosted applications and non-Mule applications sitting behind a Mule proxy, which is similar to API Manager; API Manager also supports policies on both Mule runtimes and Mule proxy runtimes. In this respect, both offer the same support.

Now let us move to the diagram, where you can see what the deployment model looks like for Anypoint Security Edge. This slide depicts a typical, not the exact or the only, deployment model of Anypoint Security Edge. In it, the API clients access an API via a cluster of Anypoint Security Edge nodes which act as edge gateways and enforce edge policies for an entire group of APIs. That edge gateway cluster is the place where edge policies are enforced, just as the Mule runtimes and Mule runtime proxies are the place where API Manager policies are enforced. And as you can see in the diagram, this is independent of the Anypoint API Manager defined API policies, which are enforced within the Mule runtimes.

So this clearly depicts the differences we discussed previously. Now, the last thing before we conclude this lecture is to see what kinds of edge policies the Anypoint Security Edge component supports. Many of the policies you are going to see look familiar compared to the API policies we saw in previous lectures, so let us go through them one by one. One kind of edge policy relates to security, such as SSL termination, adding or dropping TLS, and mutual TLS authentication. The second relates to content attack prevention.

When we say content attack prevention, the terminology is different, but it is somewhat similar to our threat protection policies: enforcing message size limits, as we did with XML and JSON threat protection, or preventing SQL injections, XPath injections and other language-based attacks. The third kind limits requests based on HTTP request properties, for example allowing only a certain HTTP version or certain HTTP methods, or filtering based on the URL path or query parameters. You must be familiar with these; many other API gateways and web application firewalls support similar features.

These edge policies also support IP whitelisting and blacklisting, a very common feature, and some more web application firewall style features, such as limiting MIME types, or rejecting requests containing specific words or matching specific regular expressions, et cetera. There are quality-of-service level edge policies as well, to control rate limiting and throttling. And of course denial-of-service attacks can also be mitigated by the edge policies, because being an edge gateway it has to support policies that detect denial-of-service and distributed denial-of-service attacks.

So even that is supported in Anypoint Security Edge. This is the end of the lecture, and it also concludes the whole section. I know this section was a bit intensive and very lengthy, because we learned a lot of concepts, and they play a major role in your project as well. It is not only the section that is lengthy; the corresponding phase is too.

In your real project, if you map this phase one to one, it is a lengthy one as well. This is where a lot of back-and-forth happens with your IT managers, customers or your API consumers with respect to performance, API behavior, et cetera. That is why we spent more time on this section. Even the next section will be a bit lengthy, and we will slowly learn more and more things in the coming sections. All right, happy learning.
