Google Associate Cloud Engineer – Operations in Google Cloud Platform – GCP Part 1


1. Step 01 – Getting Started with Google Cloud Monitoring

Welcome back. In this section, let's look at how you can perform operations in the cloud. Developing applications is important, but maintaining them in production is just as important. That's where monitoring, logging, tracing, and debugging become really important. Let's look at some of the important services that help with operations. In this specific section, let's get started with Cloud Monitoring. To operate cloud applications effectively, you should know: Is my application healthy? Are users experiencing any issues? Does my database have enough space? Are my servers running at optimum capacity? This is the kind of data provided by Cloud Monitoring. Cloud Monitoring is a set of tools to monitor your infrastructure.

It measures key aspects of your services. For example, for a virtual machine it can monitor a number of CPU, memory, and disk related metrics, and it also provides metrics around network traffic. In addition to collecting metrics, you can create visualizations: graphs and dashboards built around your metrics. A very important part of monitoring is being able to send alerts when some of the metrics are not as expected. So in Cloud Monitoring you can configure alerts: when the metrics are not healthy, you can send an alert to the right person. This is done by defining alerting policies. You specify under what condition a specific alert has to be raised, and you specify the channel to which the notification has to be sent. Along with the alerting policy, you can also attach documentation that describes the specific condition and maybe explains what can be done to resolve the issue.
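
To make this concrete, here is a rough sketch of defining such an alerting policy through the Cloud Monitoring API using the google-cloud-monitoring Python client. The project ID, display names, metric filter, and threshold are example values I'm assuming for illustration, not something prescribed by the course.

```python
from google.cloud import monitoring_v3

project_id = "my-project-id"  # assumed placeholder
client = monitoring_v3.AlertPolicyServiceClient()

# Condition: CPU utilization of any Compute Engine instance above 80% for 5 minutes
condition = monitoring_v3.AlertPolicy.Condition(
    display_name="CPU utilization above 80%",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'metric.type = "compute.googleapis.com/instance/cpu/utilization" '
            'AND resource.type = "gce_instance"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=0.8,
        duration={"seconds": 300},
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="High CPU on Compute Engine",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
    # Documentation attached to the policy, shown along with the alert
    documentation=monitoring_v3.AlertPolicy.Documentation(
        content="CPU has been above 80% for 5 minutes. Consider resizing the VM.",
        mime_type="text/markdown",
    ),
    # notification_channels=["projects/my-project-id/notificationChannels/123"],  # assumed channel
)

created = client.create_alert_policy(
    name=f"projects/{project_id}", alert_policy=policy
)
print("Created policy:", created.name)
```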

Now, in order for Cloud Monitoring to be able to gather your metrics, you need something called a workspace. Let's now shift our attention to the workspace. You can use Cloud Monitoring to monitor one or more GCP projects and one or more AWS accounts. That's important to remember: you can use Cloud Monitoring not only to monitor your GCP projects, but also to monitor your AWS accounts. How do you group all the information from multiple GCP projects or AWS accounts? You do that by creating a workspace. Workspaces are needed to organize monitoring information; a single workspace can hold monitoring information coming in from multiple projects. How do you set up a workspace? Step one is to create a workspace in a specific project.

This project is called the host project. Once you have a host project with a workspace created in it, you can add other GCP projects, or other AWS accounts, to the workspace. Let's now look at monitoring for virtual machines created using Compute Engine. For virtual machines, the default metrics monitored include CPU utilization, some disk traffic metrics, network traffic, and uptime information (is the VM up and running?). If you want more metrics, you can install the Cloud Monitoring Agent. This gives you additional disk, CPU, network, and process metrics. The Cloud Monitoring Agent is based on the collectd daemon: it gathers disk, CPU, network, and process metrics from the virtual machine and sends them out to Cloud Monitoring. In this step, we got started with Cloud Monitoring. I'll see you in the next step.
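
As a quick aside before we move on: if you want to pull these VM metrics programmatically rather than viewing them in the console, the Cloud Monitoring API exposes them as time series. A minimal sketch with the google-cloud-monitoring Python client, assuming a placeholder project ID:

```python
import time

from google.cloud import monitoring_v3

project_id = "my-project-id"  # assumed placeholder
client = monitoring_v3.MetricServiceClient()

# Look at the last 20 minutes of data
seconds = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": seconds},
        "start_time": {"seconds": seconds - 20 * 60},
    }
)

# CPU utilization for all Compute Engine instances in the project
results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    instance = series.resource.labels.get("instance_id", "unknown")
    latest = series.points[0].value.double_value if series.points else None
    print(f"Instance {instance}: latest CPU utilization = {latest}")
```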

2. Step 02 – Getting Started with Google Cloud Logging

Welcome back. Next up, let's start playing with Cloud Logging. In the cloud, you have a number of applications and a number of services. There are logs coming from applications, logs coming from the Google Cloud services, and there might be a lot of logs coming from the different actions users are performing in the cloud. In Google Cloud, all of these logs are centralized and captured in Cloud Logging. It is a real-time log management and analysis tool that allows you to store, search, analyze, and alert on massive volumes of data. It is an exabyte-scale, fully managed service: you don't need to worry about server provisioning or patching or anything of that kind.

All that you need to do is send the logs to Cloud Logging, and you can use Cloud Logging to do whatever you'd want to do with them. You can ingest log data from any source: an application running on a virtual machine, a GCP service itself, or audit logs. Any kind of logs can be ingested into Cloud Logging. Some of the key features include the Logs Explorer, where you can search, sort, and analyze the logs using queries, and the Logs Dashboard, which gives you visualizations around the logs. You can also generate metrics from the logs: you decide which metrics are important for you, capture them from the log entries using queries and matching strings, and build dashboards around those metrics as well.
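
The same kind of queries you run in the Logs Explorer can also be issued through the client libraries. Here is a small sketch with the google-cloud-logging Python client; the filter shown (Compute Engine entries at severity ERROR or above) is just an example I'm assuming:

```python
from google.cloud import logging

client = logging.Client()  # uses the current project by default

# Equivalent of a Logs Explorer query: GCE entries with severity ERROR or above
log_filter = 'resource.type="gce_instance" AND severity>=ERROR'

for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=10
):
    print(entry.timestamp, entry.severity, entry.payload)
```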

Cloud Logging also provides a Logs Router: you can look at the different log entries and route them to different destinations. Let's now look at how you can actually collect logs. Most GCP managed services automatically send their logs to Cloud Logging; GKE, App Engine, and Cloud Run all do this out of the box. If you want to ingest logs from Compute Engine virtual machines, you need to install the Logging Agent.

The Logging Agent is based on fluentd: it captures the logs from the VM and sends them out to Cloud Logging. The recommendation is to run the Logging Agent on all virtual machine instances. If you want to ingest logs from on-premises machines, the recommended option is to use the BindPlane tool from Blue Medora; you can use it to send logs from on-premises systems to Cloud Logging. The other option is to call the Cloud Logging API directly from your on-premises machine and send the logs that way, but the recommendation is to use the BindPlane tool. In this quick step, we got a 10,000-foot overview of Cloud Logging and how you can collect logs. I'll see you in the next step.
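
As a quick aside, here is roughly what the "call the Cloud Logging API directly" option looks like with the google-cloud-logging Python client, for example from an on-premises machine. The log name and payloads are made-up examples, and authentication through Application Default Credentials (such as a service account key) is assumed:

```python
from google.cloud import logging

# Authenticates with Application Default Credentials, e.g. a service account key
client = logging.Client()

logger = client.logger("on-prem-app-log")  # assumed log name

# Write a simple text entry and a structured (JSON) entry
logger.log_text("Backup job finished", severity="INFO")
logger.log_struct(
    {"job": "nightly-backup", "status": "SUCCESS", "duration_seconds": 742},
    severity="INFO",
)
```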

3. Step 03 – Exploring Google Cloud Logging – Audit Logs

Welcome back. In this step, let's look at audit and security logs. Whenever you talk about the cloud and security in the cloud, auditing and monitoring are very, very important. Let's look at the different logs related to this. Access Transparency logs capture the actions performed by the GCP team on your content. This is not supported by all services, but wherever it is supported, you'll be able to capture the actions performed by the GCP team on your content. Remember that Access Transparency logs are only available to organizations with Gold support level or above, that is, the top two support levels.

The next set of logs are Cloud Audit Logs. These answer who did what, when, and where. There are four types of Cloud Audit Logs: Admin Activity logs, Data Access logs, System Event logs, and Policy Denied logs. Each audit log entry answers a few questions. Which service? Look at protoPayload.serviceName; in this example it is appengine.googleapis.com. Which operation? Look at protoPayload.methodName; here the operation is SetIamPolicy. Which resource is audited? Look at resource.type; here the resource type is gae_app, a Google App Engine application. And who is making the call?

That is in authenticationInfo.principalEmail, where you can see the email address of the member who is making this specific call. This is how your audit logs are organized: you can get information about which service, which operation, which resource was audited, and who made the call.
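
Because audit log entries are ordinary log entries, you can query them on exactly these fields. Here is a hedged sketch using the google-cloud-logging Python client to list recent Admin Activity entries for SetIamPolicy calls; the project ID is a placeholder, and the exact shape of the returned payload can vary:

```python
from google.cloud import logging

project_id = "my-project-id"  # assumed placeholder
client = logging.Client(project=project_id)

# Admin Activity audit log entries where the operation was SetIamPolicy
audit_filter = (
    f'logName="projects/{project_id}/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName="SetIamPolicy"'
)

for entry in client.list_entries(
    filter_=audit_filter, order_by=logging.DESCENDING, max_results=5
):
    payload = entry.payload  # the protoPayload, typically a dict-like structure
    print(
        entry.timestamp,
        payload.get("serviceName"),
        payload.get("methodName"),
        payload.get("authenticationInfo", {}).get("principalEmail"),
    )
```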

Now let's compare the four types of Cloud Audit Logs: Admin Activity logs, Data Access logs, System Event logs, and Policy Denied logs. Admin Activity logs contain entries for API calls or other actions that modify the configuration of resources, so this covers any modifications made to your resources. Data Access logs, on the other hand, are about reading the configuration or data of a resource. System Event logs are for Google Cloud administrative actions, and Policy Denied logs are written when a user or a service account is denied access to perform a certain operation. Admin Activity logs, System Event logs, and Policy Denied logs are enabled by default. If you want Data Access logs, you need to enable them for the specific service; for example, if you want to see Cloud Storage access logs, you need to enable them for the Cloud Storage bucket. Let's consider a few examples for these logs. When it comes to virtual machines, Admin Activity logs contain details about VM creation, any patching done on a VM, or any change in IAM permissions related to a VM.

If you list resources, say you list the VMs, the images, or the instances, and Data Access logs are enabled, then the fact that somebody is listing resources is written to the Data Access logs. System Event logs are generated for events like host maintenance or instance preemption: if you have created a preemptible instance and it is preempted, you can see that in the System Event logs, and if an instance is automatically restarted, you can see that there as well. If there are any security policy violations, those entries will be visible in the Policy Denied logs. Let's take the example of Cloud Storage. If you modify a bucket or an object, it is logged in the Admin Activity logs.

You can also enable Data Access logs for a bucket; if you do, entries are written there when you read or modify a bucket or an object. Now, not everybody will be able to see these logs. They can contain sensitive information, and therefore you need specific roles to access them. With the Project Viewer role, you can see Admin Activity logs, System Event logs, and Policy Denied logs.

The same is the case if you have the Logging Logs Viewer role: with Logs Viewer or Project Viewer, you'll be able to access Admin Activity logs, System Event logs, and Policy Denied logs. However, if you want to access Data Access logs, you need more permissions: you need to be the owner of the project (Project Owner) or have the Private Logs Viewer role. In this step, we looked at some of the important facts related to auditing and security related logs in Google Cloud. I'll see you in the next step.

4. Step 04 – Exploring Google Cloud Logging – Routing Logs and Exports

Welcome back. There are huge volumes of logs stored in Cloud Logging: on-premises logs, Google Cloud logs, and third-party cloud logs. All of them can be sent to the Cloud Logging API, and from there they are routed into Cloud Logging. Because of that, it is important to control and route the logs to the appropriate destinations, or sinks. How do you manage your logs? Logs from various sources reach the Logs Router, and the Logs Router checks each entry against a number of configured rules: what to ingest, what to discard, and where to route the specific log entry. There are two types of log buckets: _Required and _Default.

The _Required bucket holds Admin Activity, System Event, and Access Transparency logs, and these are retained for 400 days by default. There is zero charge for the information present in this log bucket. You cannot delete this bucket, it will always be present, and you cannot change its retention period either; it will always be 400 days. The other bucket is _Default, which holds all the other logs. These are retained for 30 days by default, and here you are billed based on the amount of logs present in the bucket.

You cannot delete the _Default bucket either, but you can disable ingestion into it: in the Logs Router, disable the _Default log sink and logs will no longer be ingested into this bucket. You can also edit its retention settings. The default is 30 days, but you can configure anything between one day and ten years (3,650 days). In addition to routing, there are also multiple places you can export your logs to: Cloud Storage, BigQuery, and Pub/Sub. Ideally, you should store the logs in Cloud Logging only for a limited period.

For long-term retention, let's say you have compliance or auditing needs, your logs can be exported to a Cloud Storage bucket. If you export to a bucket named bucket, a folder structure such as bucket/syslog/<date> is created. You can also export your logs to a BigQuery dataset, which enables you to query the logs: tables are created with names like syslog_<date>, and inside each table there are columns for the timestamp and the log entry. You can also export your logs to a Cloud Pub/Sub topic; the messages placed on the topic contain base64-encoded log entries.

If you want to send the logs to another logging system like Splunk, you can configure a subscriber on this topic and have it forward the logs to that system. To export the logs, what you need to do is create sinks. You use the Logs Router to create sinks to these destinations, and in the Logs Router rules you can configure include or exclude filters to limit the logs that are exported.
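
Here is a rough sketch of creating such a sink with the google-cloud-logging Python client, using a BigQuery dataset as the destination and an include filter for Compute Engine logs. The project ID, dataset name, and sink name are placeholders I'm assuming for illustration:

```python
from google.cloud import logging

project_id = "my-project-id"    # assumed placeholder
dataset_id = "vm_logs_dataset"  # assumed pre-created BigQuery dataset

client = logging.Client(project=project_id)

# Include filter: only export Compute Engine instance logs
sink = client.sink(
    "vm-logs-to-bigquery",  # assumed sink name
    filter_='resource.type="gce_instance"',
    destination=f"bigquery.googleapis.com/projects/{project_id}/datasets/{dataset_id}",
)

if not sink.exists():
    sink.create()
    print(f"Created sink {sink.name}")
# Note: the sink's writer identity still needs BigQuery Data Editor on the dataset.
```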

Let's look at a few use cases for Cloud Logging exports. Use case one: you want to troubleshoot using VM logs. You install the Cloud Logging Agent on all the VMs, send the logs to Cloud Logging, and search for them in Cloud Logging. Use case two: export VM logs to BigQuery for querying with SQL-like queries. You install the Cloud Logging Agent on all the VMs and send the logs to Cloud Logging, create a BigQuery dataset for storing the logs, and create an export sink in Cloud Logging with the BigQuery dataset as the sink destination. Once all the logs are in BigQuery, you can query them using SQL-like queries. Use case three: you want to retain audit logs for external auditors at minimum cost. You create an export sink in Cloud Logging with a Cloud Storage bucket as the sink destination. Once the logs are in Cloud Storage, you can give the auditors the Storage Object Viewer role so that they can view the information present in the bucket. You can also use Google Data Studio for visualization. In this step, we looked at how you can control the ingestion of logs and the export of logs from Cloud Logging. I'll see you in the next step.

5. Step 04a – Creating a Cloud Storage Bucket and Cloud Function

Welcome back. In this step, let's get started with playing with Cloud Logging; in a later step, we'll also look at Cloud Monitoring. These are operations services, and to explore them better, we'll create a bucket and a Cloud Function to process data from that bucket. Let's get started. In the console I'll search for Storage, because we want to create a Cloud Storage bucket to store a few files. We already created a bucket with files earlier; let's not worry about that one. Instead, we'll create a new bucket: I'll say Create bucket and call it my bucket tied with cloud function.

I'll make the name unique by adding in28minutes; you can add whatever is needed to make the bucket name unique. Then I'll say Create; I don't really want to configure anything else. Once the bucket is ready, what do I want to do? I want to tie it to a Cloud Function, so that whenever an object is uploaded to this bucket, the Cloud Function is invoked. So I'll go back to the bucket details, select my bucket, which is my bucket tied with CF in28minutes, click the menu on the right-hand side, and choose Process with Cloud Functions. So whenever an object is uploaded to the storage bucket, I want to call a Cloud Function and do some processing.
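
By the way, if you'd rather script the bucket creation we just did in the console, it looks roughly like this with the google-cloud-storage Python client; the bucket name is a placeholder and has to be globally unique:

```python
from google.cloud import storage

client = storage.Client()

# Bucket names are global, so add your own suffix to make the name unique
bucket_name = "my-bucket-tied-with-cf-in28minutes"  # assumed placeholder
bucket = client.create_bucket(bucket_name)
print(f"Created bucket {bucket.name}")
```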

The processing we'll be doing is very, very limited: we'll simply log that something has been uploaded. But this pattern can be used for a number of use cases. For example, if users are uploading images to your bucket, you can have a function that automatically creates a thumbnail for each image. Okay, let's choose the defaults. The default name based on the bucket, my bucket tied with CF in28minutes, looks good, and I'll choose the default region. The trigger is where we choose the actions on which we want to invoke the Cloud Function, so we want to trigger it on Cloud Storage. As for the event type, I don't want archive;

I want to trigger on finalize/create, that is, when an object is created. That's when this trigger should be invoked. The bucket is configured automatically, and I'll say Save. I don't really want to worry about anything else, so I'll go Next. Over here, you'll be able to see the function. If you see a message that you have to enable Cloud Build, go ahead and enable it. The default function that is shown has a console log statement, and that's exactly what I was looking for. So that's cool; let's go ahead and say Deploy. The deployment of the Cloud Function will take a little while. In subsequent steps, we want to put a few objects into the bucket and see whether the Cloud Function logs that particular statement, and after that, we'll also look at how you can see these logs in Cloud Logging. I'll see you in the next step.
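
For reference, the default function generated in the console is a Node.js snippet that simply logs the event. A minimal Python equivalent for a first-generation, Cloud Storage-triggered background function would look something like this (the function name is arbitrary):

```python
def handle_gcs_event(event, context):
    """Background Cloud Function triggered by a Cloud Storage 'finalize' event.

    Args:
        event (dict): metadata of the uploaded object (bucket, name, size, ...).
        context: event metadata such as event ID, timestamp, and event type.
    """
    # This print statement ends up in Cloud Logging, which is what we will verify next
    print(f"Processing new object: gs://{event['bucket']}/{event['name']}")
```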
