Cisco CCNA 200-301 – Network Automation and Programmability Part 7

12. Software Defined Architecture – SD-Access

In this lecture you’ll learn about SD-Access, which is part of Cisco’s Digital Network Architecture. Before you can understand why we have SD-Access and the benefits that it brings, you need to understand the traditional way of securing access to the network. The traditional way to control access to, and traffic flows within, a network is with fixed VLANs, IP addresses, and access control lists. Users are expected to always connect to the same physical port, where they’re assigned an access VLAN and IP subnet. So if a user is in Department A, they’re expected to plug into a switch port which has been configured with the Department A access VLAN.

They then get assigned an IP address based on that. If a user is in Department B, they’re always expected to plug into a switch port which has been configured with the Department B access VLAN, and they then get the Department B subnet based on that. We then have access control lists that control the traffic flows between the different IP subnets. Now, that configuration can get complex, and because each device is configured individually, it’s more work to configure and more likely to have errors. It also does not support user mobility: because users are always expected to plug into a physical switch port for their particular group, if they plug in somewhere else the solution is really not going to work.

So looking at SD-Access now, it’s a newer method of network access control which solves those limitations of the traditional implementation. Traffic flow security is based on user identity rather than the physical port that the user is plugged into, and users can log in from, and move to, any physical location in the network. Two components are required for SD-Access. The first is ISE, the Identity Services Engine, which is Cisco’s AAA server and authenticates the user.

The security policy, which controls permitted and denied communication between groups, is configured on DNA Center. So ISE and DNA Center are integrated and work together to provide this solution. SD-Access uses an underlay and an overlay network. The underlay network is the underlying physical network for the solution. It provides the underlying physical connections which the overlay network is built on top of. The overlay network is a logical topology used to virtually connect devices.

It is built over the top of the physical underlay. The combination of underlay and overlay forms the SD-Access network fabric. So looking at this diagram, you can see the physical topology here; this is the physical underlay. Now, with this diagram I’ve just connected the switches like this, and that’s got no particular meaning. For the actual way that you’re going to connect your devices, just do that according to standard best practice. I’ve just drawn it like this because it’s an easy way to show an underlying network.

The way that the topology is laid out is not specific to SD-Access; just do it the way that you would do normally. Okay, so we’ve got our underlying physical network with those physical connections between the devices, and then we have our overlay network. The actual connectivity between devices uses a tunnel built over the underlying network. So we’ve got a virtual tunnel that’s part of our overlay, which provides the virtual connectivity.
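
To make the tunnel idea more concrete, here is a minimal sketch of what a VXLAN-encapsulated packet looks like, built with the Scapy library. The addresses and VNI are made-up example values; the point is just that the original host frame travels inside a UDP packet exchanged between the two tunnel endpoints across the underlay.

```python
# Minimal sketch of VXLAN encapsulation using Scapy (example values only).
# The inner frame is the host's original traffic; the outer headers are the
# tunnel between the two edge nodes, routed across the physical underlay.
from scapy.layers.l2 import Ether
from scapy.layers.inet import IP, UDP
from scapy.layers.vxlan import VXLAN

inner = (Ether(src="00:11:22:33:44:01", dst="00:11:22:33:44:02") /
         IP(src="192.168.1.1", dst="192.168.1.2"))      # host-to-host traffic

outer = (IP(src="10.10.10.1", dst="10.10.10.2") /       # tunnel endpoints (edge nodes)
         UDP(dport=4789) /                              # VXLAN's well-known UDP port
         VXLAN(vni=5000) /                              # VNI identifies the virtual network
         inner)

outer.show()   # prints the layered packet: outer IP/UDP/VXLAN wrapping the inner Ether/IP
```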

When SD-Access is deployed into an existing brownfield network, any configuration can be used for the underlying physical network. So if you’ve got an existing site and you want to implement SD-Access there, you don’t need to rip up the configuration you have already and start again. Whatever configuration you’ve got on the devices will still work with SD-Access. Links between the devices can be Layer 2 or Layer 3, and any routing protocol can be used.

So your existing setup will work just fine. DNA Center can also be used to automatically provision the underlay network in new greenfield sites. So if you do have a brand new network that is just being physically deployed, you can use DNA Center to do the initial setup for you to support SD-Access. In that case, Layer 3 links will be used between the devices, and IS-IS is used as the routing protocol. The reason that IS-IS is used is that it’s very extensible; it’s easy to add additional functionality to it.

So that is why Cisco chose it as the routing protocol. Now, don’t worry: for the CCNA exam, you don’t need to know anything about IS-IS. You’ll learn about that if you go on to the CCNP. In our overlay network, LISP is used for the control plane, and VXLAN is used for the data plane.

Cisco TrustSec (CTS) is used for the policy. So we’ve got those three different technologies there, and each of them has been optimized, with extra features added, for SD-Access. So let’s have a look in some more detail at LISP, VXLAN and TrustSec. Starting with LISP: LISP has actually been around for quite a long time, and there’s a good chance you haven’t heard of it before because it wasn’t really implemented that widely.

The original idea behind LISP was to support mobility. So if you had users that were moving physical location, it meant that they could take their IP address with them. Because we want that mobility with SD-Access, Cisco, rather than coming up with a brand new protocol, used LISP and added some extra bells and whistles to it to make it suitable for SD-Access. So let’s have a look at the way that LISP works.

The switches here in yellow make up our physical underlay. We’ve got an edge node switch with IP address 10.10.10.1 over here on the left, and some edge node switches over on the right, 10.10.10.2 and 10.10.10.3. One of our switches is going to be designated as a control plane node; actually, you’re going to have more than one control plane node for redundancy. So then what happens is we’ve got a host here, 192.168.1.2, that gets connected to the network.

Our edge node switch sees that, and it sends an update message to the control plane node saying that 192.168.1.2 can be reached through me. The next thing that happens is our host over here on the left, 192.168.1.1, sends a packet with a destination address of 192.168.1.2, the host over on the right. That packet will hit its nearest edge node switch, and that switch will ask the control plane node, how do I get to 192.168.1.2? Well, the control plane node knows, because 10.10.10.2 told it earlier. So the control plane node will reply back saying you can get to it via 10.10.10.2.

That edge node will then build a VXLAN tunnel across to the other edge node, and the traffic will be sent through that tunnel. So you can see we’ve got our underlying physical network, the underlay, and then we’ve got our VXLAN tunnels for the data plane in the overlay network. Okay, so that is a simplified view of how LISP works to build the control plane and the connectivity between our devices. Next, let’s have a look at how mobility works.

So let’s say that 192.168.1.2, that user, moves and is now connected to the switch down here. This would most likely happen over wireless; they’ve moved to a different location in the network. What happens then is that the new edge node switch will send a message to the control plane node saying that 192.168.1.2 is now available through me at 10.10.10.3. The control plane node will then update its database with the new information and inform the other edge node over here on the left, and the VXLAN tunnel will now be built directly between the correct edge nodes. Okay, and finally, looking at the policy, Cisco TrustSec is used for this.
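
The registration, lookup, and move steps just described can be modelled with a few lines of Python. This is purely an illustration of the idea; the class and method names are invented for this sketch, and real LISP messages are far richer.

```python
# Simplified model of the LISP-style control plane behaviour described above
# (illustrative only; class and method names are invented for this example).

class ControlPlaneNode:
    def __init__(self):
        # Maps a host (endpoint) address to the edge node it is reachable through.
        self.mapping_db = {}

    def register(self, host_ip, edge_node_ip):
        """Edge node reports: 'host_ip can be reached through me'."""
        self.mapping_db[host_ip] = edge_node_ip

    def resolve(self, host_ip):
        """Edge node asks: 'how do I get to host_ip?'"""
        return self.mapping_db.get(host_ip)


cp = ControlPlaneNode()

# 192.168.1.2 connects behind edge node 10.10.10.2, which registers it.
cp.register("192.168.1.2", "10.10.10.2")

# The edge node on the left asks where 192.168.1.2 is, then builds a VXLAN
# tunnel to the returned edge node and forwards traffic through it.
print(cp.resolve("192.168.1.2"))   # -> 10.10.10.2

# Mobility: the host moves and is now behind 10.10.10.3, which re-registers it.
cp.register("192.168.1.2", "10.10.10.3")
print(cp.resolve("192.168.1.2"))   # -> 10.10.10.3
```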

So the users are authenticated, meaning they put in their username and password and that is authenticated by ISE, the Identity Services Engine. The security policy is configured on DNA Center, so ISE and DNA Center work together for this. Users are allocated an SGT, a scalable group tag. In some documentation you’ll also see this being referred to as a security group tag; it’s the same thing. Cisco TrustSec secures traffic flows based on the security policy and the SGTs. So, for example, a user in Department A can get to their servers, but they can’t get to other departments. Now, there’s a difference with SD-Access compared to the way that older, traditional TrustSec worked. TrustSec has also been available for quite a few years.

TrustSec is a great idea, but an issue that stopped it from getting widespread adoption was that all the devices in the traffic path had to support TrustSec, and originally it was only supported on newer model switches. When you use SD-Access, that limitation is taken away. Because SD-Access uses those virtual tunnels, the traffic can actually go through any device. It does not have to be an officially supported Cisco TrustSec device; it can even be switches from other vendors. Okay, so that was everything I needed to tell you about SD-Access. See you in the next lecture for SD-WAN.
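
To picture what identity-based policy looks like compared with IP-based ACLs, here’s a small illustrative sketch. The group names, tag numbers and policy entries are invented for the example; the real policy matrix is configured on DNA Center and enforced by the fabric.

```python
# Illustrative model of group-based policy: permit/deny decisions keyed by
# source and destination SGT rather than by IP address or switch port.
# Group names and tag numbers are made up for this example.

SGT = {"DeptA_Users": 10, "DeptB_Users": 20, "DeptA_Servers": 100}

# (source SGT, destination SGT) -> action; anything not listed is denied.
POLICY = {
    (SGT["DeptA_Users"], SGT["DeptA_Servers"]): "permit",
    (SGT["DeptB_Users"], SGT["DeptA_Servers"]): "deny",
}

def check(src_sgt: int, dst_sgt: int) -> str:
    """Return the action for a flow between two groups (default deny)."""
    return POLICY.get((src_sgt, dst_sgt), "deny")

# A Department A user can reach the Department A servers; Department B cannot.
print(check(SGT["DeptA_Users"], SGT["DeptA_Servers"]))   # permit
print(check(SGT["DeptB_Users"], SGT["DeptA_Servers"]))   # deny
```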

13. Software Defined Architecture – SD-WAN

In this lecture, you’ll learn about SD-WAN, which is part of Cisco’s DNA, the Digital Network Architecture. Let’s look at the traditional way to do our WAN deployments first. Traditionally, each of the WAN edge routers is configured individually, one at a time, and this leads to the configuration not being standardized across the organization. The focus is on basic link connectivity, not the required performance for applications. And because we’re using particular hardware and configuration tied to a particular service provider in each location, it’s typically difficult if we want to migrate to another WAN service. So next, let’s look at how SD-WAN improves upon that. Cisco acquired the company Viptela in 2017 to enhance Cisco’s existing SD-WAN solution, which was previously called IWAN, for Intelligent WAN. SD-WAN provides automated setup of WAN connectivity between sites, and the monitoring and failover are automated as well. Because SD-WAN is part of Cisco’s Digital Network Architecture, the whole point of that is to give centralized control of our operations and for everything to be automated, rather than the old way of configuring our devices one by one.

And with SD-WAN, as well as the setup, the monitoring, and the failover all being automated, the traffic flow control is application aware. So if a particular site has got multiple WAN connections, say one going over the Internet and another one going over MPLS, then, based on your different applications’ needs, SD-WAN can automatically send particular application traffic over the most suitable WAN connection. So, the benefits that we get with SD-WAN: we get that automated, standardized setup of connectivity between all our WAN sites. It’s transport independent, so it doesn’t matter what kind of WAN links you’ve got in each different site, whether that’s Internet or MPLS or whatever, any kind of connection, SD-WAN will work with that. It gives simplified, centralized, integrated operations, which gives you more flexibility. Because it’s transport independent, it’s easy to migrate your WAN services. You get the required, predictable performance for your important applications because it is application aware. It integrates with the latest cloud and network technologies. So, the routers that SD-WAN can control: you can have those on premises in your branch, and you can also have cloud-based routers controlled by SD-WAN as well.

So if you’re using a popular public cloud provider like AWS or Microsoft Azure, you can have virtual routers in the cloud being controlled by SD-WAN. Because you’ve got all this flexibility and it makes your operations easier, that gives you a lower cost solution. So let’s look at how the solution actually works now. There are four main components. Down at the bottom, in the data plane, we’ve got our edge routers. That can be a vEdge router, originally from Viptela, or it can be various different models of Cisco routers as well, and these can be physical or virtual routers. Then we need to have our different components which control the solution, and these all run as separate virtual machines. For the control plane, we have got the vSmart controller.

Then for management, we have got the vManage NMS, and for orchestration, we have got the vBond orchestrator. And the solution can scale: if you’ve got a larger environment, you can just add more routers in the data plane, more vSmart controllers in the control plane, more vManage NMS systems, and more vBond orchestrators. So let’s look at each of those in a bit more detail. Your vEdge routers run the data plane; they are in charge of forwarding the actual packets. They can be physical or virtual routers, and they form an IPsec encrypted data plane between each other. So each time a new WAN site comes online, it will form VPN tunnels to your other WAN sites, and a site can have two vEdge routers for redundancy. Next up, in the control plane, we’ve got our vSmart controllers. They are the centralized brain of this solution. They run as virtual machines, and they distribute policy and forwarding information to the vEdge routers. That information is sent inside TLS tunnels. This is where you’re running the control plane.

So this is where it’s going to build all the routes between the different routers, and it tells the routers how to forward traffic between each other. Each vEdge router connects to two vSmart controllers for redundancy. Next, the management plane is the vManage NMS. It enables centralized configuration and simplified changes, and it also has real-time alerting. Again, it runs as a virtual machine, and these can be clustered for redundancy. So when you’re interacting with SD-WAN, you’re going to log into the GUI on vManage; that’s where you configure everything from.
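
Since day-to-day interaction goes through vManage, it also exposes a REST API that scripts can drive instead of the GUI. Below is a minimal, hedged sketch using Python’s requests library; the hostname and credentials are placeholders, and the exact endpoints and returned field names should be verified against your vManage version (newer releases also require an XSRF token for changes).

```python
# Sketch of querying vManage's REST API with the requests library.
# Hostname and credentials are placeholders; verify endpoint details
# against your vManage version before relying on this.
import requests

VMANAGE = "https://vmanage.example.com"          # placeholder address
session = requests.Session()
session.verify = False                           # lab only: skip TLS verification

# Authenticate: vManage uses a form login that returns a session cookie.
session.post(f"{VMANAGE}/j_security_check",
             data={"j_username": "admin", "j_password": "password"})

# List the devices (vEdge routers and controllers) that vManage knows about.
resp = session.get(f"{VMANAGE}/dataservice/device")
for device in resp.json().get("data", []):
    print(device.get("host-name"), device.get("device-type"), device.get("system-ip"))
```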

Finally, we have the vBond orchestrator, which authenticates all the vSmart controllers, vManage NMS systems and vEdge routers that join the SD-WAN network. It enables the vEdge routers to discover each other, vManage and vSmart. It has a public IP address and is deployed in the DMZ. Now, you might be wondering, because I know I was when I first saw this architecture: I understand why we have vManage, which is where we manage everything and which gives us our admin GUI, and I understand we’ve got the controller there as well. But why do we also need an orchestrator?

Why can’t the control plane or the management plane do that for us already? Well, the reason is that they will typically be deployed in your data center. When a vEdge router first comes online, it needs a way to connect into the solution to download its configuration; it needs to discover all of the other devices. And with your other devices being in the data center, that’s not going to work, because the data center is not going to allow incoming connections. So this is why we have the orchestrator: it pulls everything together and provides connectivity between everything. It’s in the DMZ with a public IP address, so when a vEdge router first comes online, it’s able to connect to the vBond orchestrator and it finds all the other components from there. The vBond orchestrator also runs as a virtual machine, and it can also run on a router in smaller deployments. Multiple vBond orchestrators can be deployed, with round-robin DNS for redundancy. ZTP is the Zero Touch Provisioning service. This is a cloud-based shared service hosted by Cisco, and it’s used on first boot of the vEdge router only. It directs the vEdge router to vBond, which orchestrates it joining the network. So when you get your router, if it’s a physical router, you take it out of the box and plug it in. It’s going to come online and connect to the cloud-based service at Cisco, which is going to tell it how to get to its vBond orchestrator.

The vBond orchestrator then tells it how to get to the other components. It then downloads its configuration, sets up the tunnels, and you’re good to go. vBond, vSmart and vManage can be deployed on premises, or they can be hosted in the cloud, either with Cisco or with one of Cisco’s partners. Most deployments are in the cloud because it’s an easier solution for the customer. Let’s look at building the data plane next. The vSmart controller directs the vEdge routers to build, by default, a full mesh of IPsec VPN tunnels between each other. So by default it will be a full mesh; if you want to, you can configure this to be hub and spoke or any other topology that you want. vSmart then propagates policy and routing information to the vEdge routers, and that is done through OMP, the Overlay Management Protocol. So you can see in my diagram here, I’ve got my vSmart controller.

Now, you would actually have two of these for redundancy, but I’ve just put one in to make the diagram easier to look at. And you can see we’ve got our vEdge routers here, which are in our different WAN sites. In my example, they’re each connected with an Internet WAN connection and an MPLS WAN as well. So what happens is the vEdge routers come online. They’re then told by the ZTP service how to reach the vBond orchestrator, which tells them how to reach the vSmart controller. Then they will build VPN tunnels to each other with the information from the vSmart controller. The vSmart controller will also tell them what end host IP addresses are available in each site, and that builds the routing tables on the routers.
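
Just to picture the scale of that default full mesh, here’s a tiny sketch that lists one tunnel between every pair of sites over each available transport. The site names and transports are made-up example values.

```python
# Tiny illustration of the default full-mesh data plane: one tunnel between
# every pair of sites, over each available transport (example values only).
from itertools import combinations

sites = ["site1", "site2", "site3", "site4"]
transports = ["mpls", "internet"]

tunnels = [(a, b, t) for a, b in combinations(sites, 2) for t in transports]

for a, b, t in tunnels:
    print(f"{a} <-> {b} over {t}")

print(len(tunnels), "tunnels")   # 6 site pairs x 2 transports = 12 tunnels
```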

Next, we have Bidirectional Forwarding Detection (BFD) packets sent over each of those VPN tunnels, which are used to detect if a tunnel goes down. So each router has got a tunnel going to every other router by default, over all the different WAN links as well. BFD packets are sent regularly over all those links, in both directions, to check them. That way the routers can detect if a link goes down, and it will be taken out of service until it comes back up again. The BFD packets also provide latency, jitter and loss statistics, which we can use to direct packets for different applications over the most suitable connection. If multiple tunnels are available, for example we’ve got both MPLS and the Internet, then traffic can be load balanced over those different tunnels. For your load balancing algorithms, you can use active-active, where you send traffic equally over both, or you can do weighted active-active. So if, for example, you want to send more traffic over your MPLS connection because it’s higher quality than the Internet, then you can do that with weighted active-active.
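
As a rough picture of what weighted active-active means in practice, here’s a simplified sketch that spreads flows across two tunnels according to configured weights. This is my own illustration, not the actual forwarding code, and the tunnel names and weights are example values.

```python
# Simplified picture of weighted active-active load balancing: flows are
# spread across available tunnels in proportion to configured weights.
import random
from collections import Counter

tunnels = ["mpls", "internet"]
weights = [0.7, 0.3]          # send more traffic over the higher-quality MPLS link

def pick_tunnel():
    return random.choices(tunnels, weights=weights, k=1)[0]

# Simulate 10,000 flows and show the approximate split between the links.
print(Counter(pick_tunnel() for _ in range(10_000)))
```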

You can also do application pinning and active/standby. So you could send, say, email and web traffic over one connection, and voice and video traffic over a different connection. It also supports application-aware routing; let’s look and see how that works. As I said earlier, BFD monitors the latency, jitter and loss across the different VPN tunnels. You can set minimum requirements for an application with an SLA (service level agreement) class. SD-WAN ensures the application is sent over a link which meets its SLA requirements. So if, for example, you are sending voice and video traffic over your WAN links, you can set the required latency, jitter and loss to make sure that your calls are of good enough quality. SD-WAN monitors the quality statistics over your links in real time, and it will make sure that voice and video goes over the most suitable link. By default, traffic will fall back to another link if no suitable link is available. So if you’ve got, say, two links available and neither of them meets the required statistics, it doesn’t mean the traffic is going to get dropped; it will still go over the best link at that time, as the sketch below illustrates.
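
To tie the BFD measurements and SLA classes together, here’s a simplified sketch of the selection logic just described: prefer a tunnel whose measured statistics meet the application’s SLA class, and fall back to the best available link when none does. All the numbers and names are made-up example values, not Cisco’s implementation.

```python
# Simplified illustration of application-aware link selection: pick a tunnel
# whose measured latency/jitter/loss meet the application's SLA class, and
# fall back to the best available tunnel if none of them qualify.

# Per-tunnel statistics as they might be learned from BFD probes (ms / ms / %).
tunnel_stats = {
    "mpls":     {"latency": 40, "jitter": 4,  "loss": 0.1},
    "internet": {"latency": 90, "jitter": 25, "loss": 1.5},
}

# SLA class for voice/video traffic: maximum tolerated latency, jitter and loss.
voice_sla = {"latency": 150, "jitter": 30, "loss": 1.0}

def meets_sla(stats, sla):
    return all(stats[m] <= sla[m] for m in ("latency", "jitter", "loss"))

def pick_link(stats_by_tunnel, sla):
    compliant = [t for t, s in stats_by_tunnel.items() if meets_sla(s, sla)]
    if compliant:
        # Prefer the compliant tunnel with the lowest latency.
        return min(compliant, key=lambda t: stats_by_tunnel[t]["latency"])
    # No tunnel meets the SLA: don't drop traffic, use the best link available.
    return min(stats_by_tunnel, key=lambda t: stats_by_tunnel[t]["latency"])

print(pick_link(tunnel_stats, voice_sla))   # -> 'mpls' with these example numbers
```

Okay, that was everything I needed to tell you about SD-WAN. See you in the next lecture.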
