As a cloud engineer, you are responsible for managing a Kubernetes cluster on the Oracle Cloud Infrastructure (OCI) platform for your organization. You are looking for ways to ensure reliable operations of Kubernetes at scale while minimizing the operational overhead of managing the worker node infrastructure. Which cluster option is the best fit for your requirement?
The best fit is to use OCI OKE virtual nodes to eliminate worker node infrastructure management. OKE is a fully managed service for running and managing Kubernetes clusters on OCI. A Kubernetes cluster consists of a control plane and a set of worker nodes that run your containerized applications. OKE provides a highly available, secure control plane managed by Oracle, while you are normally responsible for managing the worker node infrastructure. However, OKE also supports virtual nodes: serverless compute resources that OCI automatically provisions and scales based on your application workload demands. Virtual nodes eliminate worker node management tasks such as security patching, upgrades, and scaling, and offer a serverless Kubernetes experience in which you focus on developing and deploying your applications rather than on the underlying infrastructure. Verified Reference: [Container Engine for Kubernetes - Oracle Cloud Infrastructure Developer Tools], [Virtual Nodes - Oracle Cloud Infrastructure Container Engine for Kubernetes]
A fresher who joined a company made a mistake while adding variables to the build_spec.yaml file. As a consequence, build pipelines started failing. What is the root cause of this error committed by the fresher? (Choose the best answer.)
The root cause for the error committed by the fresher is that the expected input/exported variable of a build is not persistent throughout multiple pipelines. This means that the value set for a variable in one pipeline is not carried over to subsequent pipelines, leading to failures in the build pipelines.
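As a hedged illustration, a minimal build_spec.yaml might look like the sketch below; the variable name, step command, and timeout are assumptions, not the company's actual file. The key point is that a variable must be declared under exportedVariables and set within a build step to be visible to later stages of the same run; it does not persist across separate pipelines.

```yaml
# Hypothetical minimal build_spec.yaml (names and values are illustrative)
version: 0.1
component: build
timeoutInSeconds: 600
env:
  # Variables listed here are exported to later stages of the SAME
  # build pipeline run only; they are not persisted across pipelines.
  exportedVariables:
    - BUILD_TAG
steps:
  - type: Command
    name: "Build and tag"
    command: |
      export BUILD_TAG=release-$(date +%Y%m%d)
      echo "Building with tag ${BUILD_TAG}"
```

A downstream pipeline that expects BUILD_TAG to still be set would fail, which matches the behavior described above.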
You host a microservices-based application on the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). Due to increased popularity of your application, you need to provision more resources to meet the growing demand. Which three statements are true for the given scenario?
The statements that are true for scaling an OKE cluster to meet growing demand are:
Enable pod autoscaling by deploying the Kubernetes Metrics Server to collect resource metrics from each worker node in the cluster. Pod autoscaling adjusts the number of pods in a deployment or replica set based on the CPU or memory utilization of those pods. The Horizontal Pod Autoscaler consumes the metrics gathered by the Metrics Server, an add-on component that you can install on your OKE cluster, and scales the pods up or down accordingly.
Enable cluster autoscaling by deploying the Kubernetes Cluster Autoscaler to automatically resize a cluster's node pools based on application workload demands. Cluster autoscaling adjusts the number of nodes in a node pool based on the resource requests and limits of the pods running (or pending) on those nodes. The Cluster Autoscaler is an add-on component that you can install on your OKE cluster; it monitors pod requests and limits and scales the node pools up or down accordingly.
Scale a node pool up and down to change the number of worker nodes in the node pool, and the availability domains and subnets in which to place them. A node pool is a group of worker nodes within an OKE cluster that share the same configuration, such as shape, image, subnet, etc. You can use OCI Console, CLI, or API to scale a node pool up and down by adding or removing worker nodes from it. You can also change the availability domains and subnets for your node pool to distribute your nodes across different fault domains. Scaling a node pool allows you to adjust your cluster capacity according to your application workload demands. Verified Reference: [Scaling Clusters - Oracle Cloud Infrastructure Container Engine for Kubernetes], [Scaling Node Pools - Oracle Cloud Infrastructure Container Engine for Kubernetes]
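The pod-autoscaling option above can be sketched with a standard HorizontalPodAutoscaler manifest; the deployment name and the utilization thresholds here are assumptions for illustration:

```yaml
# Hypothetical HPA for a deployment named "web-app" (illustrative values)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Apply it with kubectl apply -f; the HPA only works if resource metrics are available in the cluster (for example, via the Metrics Server mentioned above).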
You are part of the DevOps team and troubleshooting an issue related to a newly deployed web application. The clients for the web application have reported failures with creating records into the application over an HTTPS connection. The current logs collected by the Oracle Cloud Infrastructure (OCI) Logging service are not providing much information related to the issue. You have been asked to enable specific logs applicable to services along with configuring an alarm to monitor any new failures. Which two steps can you perform to meet this requirement?
The steps that you can perform to enable specific logs applicable to services along with configuring an alarm to monitor any new failures are:
Install the OCI compute agent software on client systems, enable Custom log and create an agent configuration selecting log path. The OCI compute agent is a software component that runs on your compute instances and collects logs from various sources, such as files, syslog, Windows Event Log, etc. You can use the OCI compute agent to enable Custom log, which is a type of log that allows you to define your own log source and format. You can also create an agent configuration that specifies the log path, log group, and log name for your Custom log.
Create custom filters with required data fields (for example: source, time, statusCode, message) to filter log messages, configure Service Connector with Monitoring for creating an Alarm. A custom filter is a query that allows you to filter and analyze your log messages based on various data fields, such as source, time, level, message, etc. You can use custom filters to search for specific patterns or conditions in your logs, such as failures or errors. You can also configure a Service Connector with Monitoring, which is a component that allows you to transfer data from one OCI service to another. You can use a Service Connector with Monitoring to send your filtered log messages to the OCI Monitoring service, which is a service that allows you to create metrics and alarms based on your logs. You can then create an Alarm, which is a rule that triggers an action when a metric meets a specified threshold. Verified Reference: [Compute Agent - Oracle Cloud Infrastructure Logging], [Custom Logs - Oracle Cloud Infrastructure Logging], [Custom Filters - Oracle Cloud Infrastructure Logging], [Service Connectors - Oracle Cloud Infrastructure Logging], [Monitoring - Oracle Cloud Infrastructure Logging], [Alarms - Oracle Cloud Infrastructure Logging]
As a small company that wants to adopt a DevOps framework and a consumption-based pricing model, which Oracle Cloud Infrastructure service can be used as a target deployment environment, providing features like automated rollouts and rollbacks, self-healing of failed containers, and configuration management, without the overhead of managing security patches and scaling?
The service that fits these requirements is OCI Container Engine for Kubernetes (OKE) with virtual nodes. OKE is a fully managed service for running containerized applications on OCI using Kubernetes, an open-source system for automating the deployment, scaling, and management of containers. OKE provides automated rollouts and rollbacks, self-healing of failed containers, configuration management, service discovery, and load balancing. With virtual nodes (serverless compute resources that OCI automatically provisions and scales to match workload demand), you avoid managing worker node infrastructure such as security patches, updates, and scaling, and you get a consumption-based pricing model: you pay only for the resources your containers consume while running. Verified Reference: [Container Engine for Kubernetes - Oracle Cloud Infrastructure Developer Tools], [Virtual Nodes - Oracle Cloud Infrastructure Container Engine for Kubernetes]
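Features such as automated rollouts/rollbacks and self-healing come from standard Kubernetes objects that OKE runs on your behalf. A minimal sketch, in which the application name, image, and probe endpoint are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate         # automated, incremental rollout
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example-registry/demo-app:1.0   # assumed image reference
          livenessProbe:        # self-healing: failed containers are restarted
            httpGet:
              path: /healthz    # assumed health endpoint
              port: 8080
```

A failed rollout can be reverted with kubectl rollout undo deployment/demo-app, and containers that fail the liveness probe are restarted automatically.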