2024. 4. 11. 13:34ㆍDev/EKS
A company's security team needs to be able to detect whenever production containers attempt to communicate with known IP addresses associated with cryptocurrency-related activity. Automated vulnerability scanning of container images is performed in the CI/CD pipeline before deployment into managed node groups in EKS.
Which solution should the security team leverage to meet their requirement?
- Vulnerability scanning is already performed on the container images in the CI/CD pipeline so no other solution is required.
- Enable EKS Runtime Monitoring with GuardDuty.
Comments: This scenario requires a runtime monitoring solution to detect malicious activity while the containers are running, which GuardDuty's EKS Runtime Monitoring provides.
- Configure the deployments to run on AWS Fargate instead since access to the underlying host is restricted.
- Enable EKS control plane logging to send the Kubernetes API server logs to CloudWatch Logs and query for events using CloudWatch Logs Insights.
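As a rough sketch, EKS Runtime Monitoring is enabled as a feature of an existing GuardDuty detector via the AWS CLI; the detector ID below is a placeholder:

```shell
# Look up the detector ID for this account/region.
aws guardduty list-detectors

# Enable the EKS Runtime Monitoring feature on that detector.
# <detector-id> is a placeholder for your actual detector ID.
aws guardduty update-detector \
  --detector-id <detector-id> \
  --features '[{"Name":"EKS_RUNTIME_MONITORING","Status":"ENABLED"}]'
```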
You are asked to provide a high-level summary of the Kubernetes cluster architecture to your team.
Which of the following statements BEST describes the key components?
- The main components of a Kubernetes cluster are pods, services, replica sets and namespaces that all run on top of the nodes.
- A Kubernetes cluster has control plane nodes with components like the API server, scheduler, and controllers. It also has worker nodes to run applications, and a distributed data store like etcd.
- A Kubernetes cluster consists of a set of worker nodes and a master node that manages them. Worker nodes run pod containers while the master handles scheduling.
Comments: This describes the basic master-agent node architecture but does not cover all the components of a Kubernetes cluster.
- The cluster contains a Kubernetes API server, etcd, controller manager, scheduler and DNS running on control plane. Worker nodes run kubelet, kube-proxy and container runtimes.
You are a DevOps engineer. Your team is responsible for managing a Kubernetes cluster that hosts critical microservices. You are planning to upgrade the Kubernetes cluster to a new version.
Which of the following Kubernetes components is responsible for ensuring that the cluster upgrade is performed safely and reliably?
- Kubernetes scheduler
- Kubernetes controller manager
- Kubernetes kubelet
- Kubernetes API server
Comments: The Kubernetes API server is responsible for providing a unified interface to the Kubernetes cluster. It is not involved in upgrading the cluster.
You have been tasked with setting up a new Kubernetes-based application infrastructure in AWS using EKS. To ensure that the EKS control plane has the necessary permissions to manage AWS resources on your behalf, you need to create an IAM role and associate it with the EKS cluster.
Which of the following options can configure the IAM permissions for the EKS control plane?
- Obtain the most privileged IAM role, AdministratorAccess, and associate it with the EKS cluster, ensuring no permission issues whatsoever.
- Create an IAM role with the policy AmazonEKSWorkerNodePolicy and trust relationship for the EKS service, then specify this role during EKS cluster creation.
- Create an IAM user with necessary permissions and associate its access and secret keys with the EKS control plane.
- Create an IAM role with the policy AmazonEKSClusterPolicy and trust relationship for the EKS service, then specify this role during EKS cluster creation.
Comments: This is correct. AmazonEKSClusterPolicy is the required policy, and the role's trust relationship must allow the EKS service to assume it.
Your company is in the process of developing an EKS cluster following AWS best practices guidelines. They are considering hybrid compute options for the cluster node design. In the hybrid architecture, they plan to use On-Demand and Spot instances on x86 (Intel and AMD) and arm64 (Graviton) platforms, based on traffic and resource requirements.
Which option do you think your company should select for lower operational management and better cost efficiency?
- Create a Managed Node Group and a corresponding Auto Scaling launch configuration for each size, type, architecture, and On-Demand or Spot option. Use a Kubernetes node selector to launch the respective type of EC2 node in the EKS cluster.
- Create two Managed Node Groups, one each for Spot and On-Demand. Each Managed Node Group will include the required instance families/sizes and CPU architectures (x86 and arm64). Add the required EC2 sizes and families to the Auto Scaling launch configuration to launch the respective EC2 instances at runtime using a node selector.
- Create a groupless Karpenter CRD with one provisioner file that includes the required hybrid options, and let Karpenter pick the most cost-efficient option at runtime when launching a new node in the cluster.
Comments: Karpenter lets you define all your hybrid compute options in one provisioner file, and you can configure it to launch new nodes with the given EC2 instance sizes/types, architectures, etc. This means less management, since Managed Node Groups are no longer needed, and Karpenter makes sure it picks just the right size of instance for a given number of pods to be launched.
- Create multiple Self-Managed Node Groups with respective Auto Scaling launch configurations for On-Demand, Spot, x86, and arm64. Use them with a node selector for a given set of service deployments.
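For illustration, a single provisioner covering these hybrid options might look like the sketch below. This uses Karpenter's v1alpha5 Provisioner API; newer Karpenter releases replace it with a NodePool resource, so check the version you run:

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    # Allow both purchase options; Karpenter picks the cheaper
    # option (typically Spot) when both are permitted.
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
    # Allow both x86 and Graviton (arm64) nodes.
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64", "arm64"]
  limits:
    resources:
      cpu: "1000"          # cap on total provisioned CPU
  ttlSecondsAfterEmpty: 30  # scale empty nodes back in quickly
```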
You cannot access the EKS cluster resources like pods and nodes with your IAM user. You've verified and concluded that you have the necessary IAM permissions.
Which of the following is the key step in troubleshooting why the IAM user is not able to access the cluster resources?
- Provide Administrator access to the IAM user.
- Renew the credentials of the IAM user to restore access.
- Verify and add the IAM entity in the aws-auth config map.
Comments: This is the correct answer, as this creates an identity mapping from the IAM user to an RBAC group, which grants the IAM user access.
- Attach AmazonEKSClusterPolicy IAM policy to the IAM user.
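As an illustration, the identity mapping lives in the aws-auth ConfigMap in the kube-system namespace; the account ID, user name, and group below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    # Map an IAM user to a Kubernetes RBAC group.
    - userarn: arn:aws:iam::111122223333:user/example-user   # placeholder ARN
      username: example-user
      groups:
        - system:masters   # grants cluster-admin; use a narrower group in practice
```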
You are a DevOps engineer responsible for managing the deployment of a containerized application on Amazon Elastic Kubernetes Service (EKS) using GitOps. You have set up a Git repository to store the Kubernetes manifests and automation scripts. Your team is working on continuous integration and continuous delivery (CI/CD) for the application.
Which of the following is the key benefit of using GitOps with Amazon EKS automation?
- Centralized version control and change tracking
Comments: GitOps is a methodology that emphasizes managing infrastructure and application configurations as code within a Git repository. This approach ensures that all changes to your Kubernetes manifests and automation scripts are tracked, versioned, and auditable. It allows for a centralized source of truth for your infrastructure and application configurations.
- Faster container image building
- Reduced server maintenance overhead
- Enhanced container security
As a tech lead, you are tasked with selecting the right Kubernetes solution for your company's needs.
Which of the following statements accurately describes the business value and features of EKS?
- EKS is exclusively designed for large enterprises and may not be suitable for smaller businesses due to its complexity.
- With EKS, you get full control over the underlying Kubernetes infrastructure, allowing you to customize it as needed.
- EKS offers a highly cost-effective solution compared to other cloud providers, helping to reduce infrastructure expenses significantly.
- EKS simplifies Kubernetes management by handling the control plane and providing automated updates, allowing your team to focus on application development.
Comments: EKS indeed simplifies Kubernetes management by handling the control plane and providing automated updates. This allows your team to concentrate on developing and running applications within the Kubernetes cluster, making it a valuable feature.
A new startup company recently launched an E-commerce site hosted on an Amazon EKS cluster that has multiple microservices. Their CEO asked the operations team to build a solution to capture application logs across the cluster, so that they can identify which microservices can be improved.
What action should the operations team take in order to capture application logs generated across the cluster?
- Configure cluster-wide log collector agent like FluentBit to capture application logs and send them to a centralized logging destination like CloudWatch or Elasticsearch and build a dashboard.
Comments: Cluster-wide log collector systems like Fluentd or FluentBit can tail log files on the nodes and ship logs for retention.
- Setup CloudWatch Agent to capture application logs and store in Amazon Managed Service for Prometheus and visualize using Amazon Managed Grafana.
- Turn on Kubernetes native solution to collect application logs and send them to a centralized logging destination like CloudWatch or Elasticsearch and build a dashboard.
- Setup AWS Distro for OpenTelemetry (ADOT) collector to capture application logs and store in Amazon Managed Service for Prometheus and visualize using Amazon Managed Grafana.
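As a rough sketch of the idea, a Fluent Bit output section shipping container logs to CloudWatch Logs might look like this; the region and log group name are placeholders, and the cloudwatch_logs plugin ships with the AWS-for-Fluent-Bit image:

```
[OUTPUT]
    # Ship all matched logs to CloudWatch Logs.
    # region and log_group_name below are placeholders.
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    /eks/app-logs
    log_stream_prefix from-fluent-bit-
    auto_create_group true
```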
You are deploying a new application on Amazon EKS and need to determine the best way to expose it to external traffic.
What are the main options for exposing an application to external traffic in EKS?
- External access is not possible with EKS. The only option is to connect from within the VPC.
- The options are ClusterIP, NodePort, LoadBalancer Service, and Ingress.
Comments: ClusterIP, NodePort, LoadBalancer Service, and Ingress are the main service types in EKS for exposing an application to external traffic. Each has its own use cases and tradeoffs.
- Using a Network Load Balancer is the most secure option since it only exposes public endpoints.
- Setting up a NAT Gateway is required to allow inbound connections to EKS pods.
An App team is planning on deploying a new application on EKS that is managed by the platform engineering unit. During requirements gathering, the App team mentioned the application is for parsing and cleaning streaming data for machine learning models and has interruption handling built into the code.
What is the best compute plan for this workload to save on cost?
- Spot Instances
Comments: Spot Instances offer savings of up to 90% off On-Demand prices and are excellent for fault-tolerant workloads.
- Compute saving plans
- On-demand Instances
- ECS Instance saving plans
A developer wants to run a web application based on a popular framework written in Python. The developer cannot use a container image from a public registry since additional libraries are needed for the web application. They decide to create a Dockerfile and add the specific statements that install those libraries.
When the Dockerfile is ready, what should be the beginning of the command to be executed to create the final image?
- docker pull
- docker tag
- docker commit
- docker build
Comments: docker build is used when you need to build an image from a Dockerfile.
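For example, from the directory containing the Dockerfile (the image name and tag are arbitrary placeholders):

```shell
# Build an image from the Dockerfile in the current directory (.)
# and tag it so it can be referenced later.
docker build -t my-web-app:1.0 .

# Verify the image was created.
docker images my-web-app
```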
You manage an EKS cluster with one Auto Scaling group using an instance type covered by an EC2 Instance Savings Plan and another Auto Scaling group using On-Demand instance types.
In order to optimize costs, which feature of Cluster Autoscaler can favor the Auto Scaling group covered by the Instance Savings Plan so that it is used first in a scale-out event?
- Weighted Provisioners
- Priority Expanders
- Spot Instances
- Node Termination Handler
Comments: A feature to manage the graceful termination of Spot Instances.
Your organization needs to set up a continuous delivery pipeline. Its architecture includes containerized microservices that need to be updated and rolled back quickly. They want to expose front-end services on the public Internet. The Application team's requirements are listed below:
- Services are deployed redundantly across multiple availability zones in US east region
- Reserve a single front-end IP for their fleet of services
- Deployment artifacts are immutable.
Which set of managed services should they use? (Select THREE)
- Amazon RDS (Relational Database Service)
- AWS ELB - Elastic Load Balancers
Comments: Elastic Load Balancing (ELB) automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs). Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on Amazon Elastic Compute Cloud (Amazon EC2) instance worker nodes. You can load balance network traffic to a Network Load Balancer (instance or IP targets) or a Classic Load Balancer (instance target only). Network load balancers examine IP addresses and other network information to redirect traffic optimally. They track the source of the application traffic and can assign a static IP address to several servers, using static and dynamic load balancing algorithms to balance server load.
- Amazon CloudWatch
- Amazon S3
Comments: Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. S3 is not a prerequisite here.
- Amazon EKS (Elastic Kubernetes Service)
Comments: Amazon EKS runs the Kubernetes control plane across three Availability Zones (AZs) in order to ensure high availability, and it automatically detects and replaces unhealthy masters. The recommended way to run highly available k8s clusters is using Amazon EKS with worker nodes spread across three or more AZs to make your cluster highly available in multiple AZs so your applications will still be available in case of zone failures. You can take advantage of all the performance, scale, reliability, and availability of the AWS platform, as well as integrations with networking and security services, such as Application Load Balancers for load distribution, IAM for role-based access control, and VPC for pod networking.
- Amazon ECR - Elastic Container Registry
Your organization runs an Amazon EKS cluster running multiple microservices. The DevOps team is planning to use Helm charts to combine all the Kubernetes YAML manifests into a single package that can be applied to their EKS cluster to simplify the deployment of containerized applications.
Which concepts used in Helm does your team need to be aware of? (Select THREE)
- SSM document
- Chart
Comments: A package that contains all the necessary pre-configured resources to deploy an application to a Kubernetes cluster. This includes YAML configuration files for deployments, services, secrets, and config maps that define the desired state of your application.
- CloudFormation Templates
- Release
Comments: An instance of a chart that can be deployed to the cluster with Helm.
- Repository
Comments: A collection of Helm charts that can be shared or made available to others.
- AWS Artifact
You are responsible for managing a highly available and scalable containerized application on Amazon EKS for a retail company. The application is deployed across multiple Availability Zones for fault tolerance.
When setting up an Ingress controller for your containerized application on Amazon EKS, which of the following statements is true?
- Ingress controllers are automatically provisioned by AWS EKS, and no additional configuration is required.
- Ingress controllers are only necessary when deploying applications in a single Availability Zone.
- An Ingress controller is primarily responsible for routing internal pod-to-pod traffic within your EKS cluster.
- Ingress controllers are used to define routing rules for external traffic coming to your containerized application in EKS cluster.
Comments: Ingress controllers act as a layer 7 (application layer) load balancer and allow you to define how incoming external HTTP and HTTPS traffic should be routed to different services and pods within your EKS cluster.
You are the cluster administrator for your organization's EKS cluster. You have been informed that your Organization purchased an EC2 instance savings plan for m6g.2xlarge and c6g.2xlarge instances.
How will you influence Karpenter to prefer these instance types first during a scale-out event?
- Limit Karpenter provisioners to only use m6g.2xlarge and c6g.2xlarge
Comments: INCORRECT - This answer is incomplete. This will influence scheduling pods on savings plan instances, but if the instances are not available, the scheduling will fail.
- Use Karpenter's priority expanders to prefer savings plans instance types first
- Use Weighted provisioners to prefer savings plans instance types first
- Create managed node groups consisting of m6g.2xlarge and c6g.2xlarge instances to ensure the EC2 Instance Savings Plans are utilized. Karpenter will use this existing capacity to schedule pods.
- Karpenter cannot prefer specific instances during scheduling
Your team has deployed a web server pod into your Amazon EKS environment as a part of your sandbox account. You need to know this pod's IP address and verify if it is successfully running.
Which command can you use to query the API server to get the status of this webserver?
- etcdctl get pods -o wide
- systemctl get pods -o wide
- apt-get pods -o wide
- kubectl get pods -o wide
Comments: To interact with the API server and retrieve the details of the workload, we need to make use of kubectl command.
As a DevOps engineer, you need to upgrade your company's EKS cluster to the latest Kubernetes version.
Which of the following BEST describes the process for updating an EKS cluster?
- To update the EKS cluster, you must configure Auto Scaling groups to replace cluster nodes one-by-one with new instances.
- You can update the Kubernetes version of your EKS cluster using the AWS console or CLI. AWS will handle updating the control plane automatically behind the scenes.
Comments: EKS supports seamless upgrades of the Kubernetes version. You simply update the cluster using AWS tools and EKS will handle updating the control plane components to the new version automatically.
- You can update the EKS cluster directly using kubectl set image since it provides full access to the underlying infrastructure.
- The EKS cluster control plane is managed by AWS, so you need to create a new EKS cluster with the desired Kubernetes version and migrate your workloads.
You are a consultant for a medium-sized software development company. The company is considering adopting containerization as part of its software deployment strategy. The management wants to ensure that the adoption of containers aligns with their business objectives.
Which of the following BEST describes the business value of adopting containers for software deployment?
- Containers are primarily suitable for large enterprises and do not offer any benefits for small to medium-sized businesses.
- Containers simplify application deployment and scaling, leading to improved efficiency, faster development cycles, and reduced infrastructure costs.
Comments: Containers simplify application deployment and scaling, improving efficiency, accelerating development cycles, and reducing infrastructure costs by running multiple containers on a single machine. These benefits align with the business objectives of most companies.
- Containers provide a secure and isolated environment for software, which ensures 100% protection against all types of cyber threats.
- Containers are primarily used for running legacy software, making it easier to maintain outdated systems.
An engineering team deployed new microservices in their EKS cluster and noticed that some application requests were failing briefly during the startup process. They believe this could be because pods were accepting traffic before fully initializing.
What can they do to mitigate this issue and prevent pods from receiving requests before they are ready?
- Enable Liveness Probe health check.
- Enable Readiness Probe with default initialDelaySeconds.
- Enable Readiness Probe with initialDelaySeconds set to X seconds based on each application's startup time.
Comments: This would be the ideal solution. It is recommended to set initialDelaySeconds based on each application's startup tasks and the time it takes for the application to be ready before it accepts traffic.
- Enable Readiness Probe with initialDelaySeconds as 5 seconds for both applications.
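A minimal sketch of such a probe; the image, path, port, and the 20-second delay are placeholder values to be tuned per application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: web
      image: registry.example.com/web-app:1.0   # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /healthz    # placeholder health endpoint
          port: 8080
        initialDelaySeconds: 20   # tune to this application's startup time
        periodSeconds: 5
```

Until the probe succeeds, the pod stays out of its Service's endpoints, so no traffic is routed to it during startup.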
A new startup has deployed multiple mission-critical applications on Amazon EKS. To ensure compliance and enhanced debugging capabilities, they want to maintain an audit trail of all administrative actions taken on their EKS cluster.
Which action should their operations team take to capture and review these activities on their EKS cluster?
- Enable AWS CloudTrail, ship the logs to a cost-effective S3 bucket, and filter for EKS-related events.
- Enable Amazon EKS control plane logs, direct them to CloudWatch Logs, and establish metric filters and alarms for notable activities.
Comments: Amazon EKS control plane logs detail the cluster's operational activities, including logs from the API server, controller manager, and scheduler. These offer insights essential for monitoring and auditing operations.
- Since the control plane logs are enabled by default, the operations team can simply view the respective CloudWatch Logs, and establish metric filters and alarms for notable activities.
- Activate CloudWatch Container Insights and filter logs for administrative activities.
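As a sketch, control plane logging can be enabled per log type with the AWS CLI; the cluster name is a placeholder:

```shell
# Enable API server, audit, and authenticator logs for the cluster;
# they are delivered to CloudWatch Logs under /aws/eks/<cluster-name>/cluster.
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'
```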
You are a DevOps engineer responsible for managing containerized applications in a Kubernetes cluster. You're tasked with ensuring that your team understands the key considerations when deploying pods in Kubernetes to maintain efficient and reliable operations.
When deploying a single pod in Kubernetes, which of the following considerations should be taken into account?
- The maximum number of pods that can run on a node.
- The geographical location of the Kubernetes cluster.
- The number of CPU cores and RAM allocated to the pod.
Comments: When deploying a pod in Kubernetes, one of the primary considerations is the allocation of CPU cores and RAM to ensure that the pod has sufficient resources to run efficiently.
- The Kubernetes version is used for cluster management.
What command option, when used with kubectl, will list all resources running in the "workshop" namespace?
- kubectl get all
- kubectl get all --all-namespaces
- kubectl get pods
- kubectl get all -n workshop
Comments: -n or --namespace is used to see resources running in a namespace
You need to deploy an application on Amazon EKS to be consumed by another application within the same Amazon VPC. Your application only requires one pod but may need to scale in the future.
How can you expose the application in a cost-efficient and scalable way?
- Run the application on a single Pod and expose it using a Service with type LoadBalancer.
- Run the application using a Deployment with one replica, then expose the Deployment using a Service with type NodePort. In the future, change the service to use type LoadBalancer if needed.
- Run the application using a Deployment with one replica, then expose the Deployment using a Service with type LoadBalancer.
Comments: The Deployment will make it easy for the application to scale in the future, and a Service with type `LoadBalancer` will provide a stable endpoint for the consumer regardless of the number of application replicas.
- Run the application on a single Pod and expose it using a Service with type NodePort.
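A minimal sketch of that setup; names, image, and ports are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1            # scale later by raising this number
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer     # stable endpoint regardless of replica count
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Since the consumer runs inside the same VPC, the Service could additionally be annotated to provision an internal load balancer rather than an internet-facing one.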
You own a set of unique microservices deployed in EKS. Some of these microservices are exposed externally using the service type `LoadBalancer` for clients outside the EKS cluster. Your management asked you to find ways to reduce the cost of running these applications considering the recent cloud bill.
Which of the following options would reduce the cost to run your applications without compromising their external-facing access and with MINIMAL operational overhead?
- Expose all external facing services using an Ingress resource.
- Annotate services with type LoadBalancer with `kubernetes.io/ingress.class: nlb` to put them behind a single Application Load Balancer.
Comments: The annotation kubernetes.io/ingress.class is only applicable for Ingress resource configuration. The AWS Load Balancer Controller add-on uses this annotation to determine if the Ingress needs a Network Load Balancer or an Application Load Balancer. This annotation does not have any meaning if used for a service configuration.
- Make all the services of type ClusterIP to eliminate the cost of load balancers.
- Make the external facing services of type NodePort and configure the DNS records of the services to point to one of the nodes' IP address. The external clients can use the domain name with the assigned port to access the service.
Your organization has deployed multiple microservices on an Amazon EKS cluster. You need to expose these services publicly via HTTP protocol.
How would you achieve this in a cost-effective way with minimum operational overhead?
- Create NodePort for each service and expose them publicly.
- Create ExternalName to expose the service externally.
- Create the AWS Load Balancer Controller and an Ingress resource mapping each of the services to be exposed via path-based routing.
Comments: This would be the better option, since you only need to create the Load Balancer Controller once and expose all the services via a single Ingress using path-based routing.
- Create Load Balancer for each of the service and expose them publicly.
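For illustration, one Ingress can fan out to several services by path; the service names, paths, and ports below are placeholders, and the alb class and annotations assume the AWS Load Balancer Controller is installed in the cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /orders          # placeholder path
            pathType: Prefix
            backend:
              service:
                name: orders       # placeholder service
                port:
                  number: 80
          - path: /payments        # placeholder path
            pathType: Prefix
            backend:
              service:
                name: payments     # placeholder service
                port:
                  number: 80
```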
You are responsible for managing a Kubernetes application deployment using Helm. Your application consists of multiple microservices, each with its unique configuration requirements.
Which Helm resource should you use to define common configuration elements once and reuse them efficiently across multiple microservices?
- Helm Values
Comments: Correct. Helm Values allow you to define configuration settings and parameters that can be reused across different microservices within a Helm chart. They enable you to centralize and reuse common configurations efficiently.
- Helm Release
- Helm Templates
- Helm Dependencies
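As an illustration, common settings can be defined once in values.yaml; all names below are placeholders:

```yaml
# values.yaml - defaults shared by every microservice in the chart
image:
  repository: registry.example.com/my-service   # placeholder registry
  tag: "1.0.0"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

A template then references these values, e.g. `image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"`, and individual microservices override only what differs via `--set` flags or an extra values file.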
The security team in an enterprise is looking to enforce a strong "defense in depth" security strategy, particularly for storing critical and sensitive data. In order to conform to the security standards, individual application teams hosting their workloads on EKS have been asked to ensure use of unique Data Encryption keys (DEK) for encryption and decryption of secrets.
You are working as the platform administrator responsible for building and maintaining the EKS cluster. What native mechanism in EKS would you use to help the application teams meet the security standards and protect their secrets in etcd?
- The EKS Control plane and etcd is fully managed. Security is built-in; no additional steps are required for encryption.
- Kubernetes secrets are encrypted by default; no additional encryption is needed.
- Enable the Amazon EBS CSI Driver add-on. Security is built-in for CSI drivers, and it ensures secrets are encrypted at the disk level.
- Enable AWS KMS Envelope Encryption.
Comments: Envelope encryption allows the encryption of a key with another key. With this feature enabled, the encryption provider automatically encrypts the secrets during the creation and it is done using a Kubernetes-generated Data Encryption Key (DEK) - these keys are unique for individual secrets and are also encrypted using a master key or Customer Master Key (CMK) in KMS which can be automatically rotated on a recurring schedule.
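As a sketch, envelope encryption is associated with an existing cluster by pointing EKS at a KMS key; the cluster name, region, account ID, and key ID are placeholders:

```shell
# Associate a KMS key with the cluster so Kubernetes secrets in etcd
# are envelope-encrypted (per-secret DEKs encrypted by this KMS key).
aws eks associate-encryption-config \
  --cluster-name my-cluster \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"}}]'
```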
Your organization is concerned about unusual network traffic involving two pods in its EKS cluster. The DevOps team needs to create a Network Policy to block ingress and egress connections on Pods A and B only, both running in the same namespace. A deny-all ingress and egress Network Policy is applied to the pods' namespace, but testing shows that ALL pods in the namespace have been blocked.
What should the DevOps team do to fix the Network Policy so that it blocks communication for Pods A and B only?
- Add labels to both Pods A and B, edit the Network Policy's field named podSelector to match the newly created labels.
Comments: Not setting a correct podSelector field on your Network Policy results in it matching every pod in the namespace rather than the desired pods, which is why all namespace traffic was blocked. Adding labels to the Pods and referencing those labels in the podSelector field ensures the deny-all policy matches only Pods A and B, so traffic to only those pods is blocked.
- Restart Pods A and B for the Network Policy to take effect.
- Remove policyType Egress from the Network Policy, so that it blocks all ingress and egress traffic for Pods A and B.
- Remove policyType Ingress from the Network Policy, so that it blocks all ingress and egress traffic for Pods A and B.
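A sketch of the corrected policy; the namespace and label are placeholders. An empty rule set combined with both policyTypes denies all traffic for the matched pods only:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-pods-a-b
  namespace: team-apps             # placeholder namespace
spec:
  podSelector:
    matchLabels:
      network-isolation: blocked   # label added to Pods A and B only
  policyTypes:
    - Ingress
    - Egress
  # No ingress/egress rules listed: all traffic to and from matched pods is denied.
```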
Your organization runs an Amazon EKS cluster running multiple microservices. While troubleshooting, the Infrastructure team found that the application pods could not communicate with other pods. They are currently using the recommended version of the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes.
Which of the following methods can help identify the root cause and resolve the issue? (Select THREE)
- Verify that you are using node groups.
- Utilize CloudTrail logs to identify the root cause.
- Check security groups allow pods to communicate with each other.
Comments: If you use security groups for pods or CNI custom networking, then you can allocate any security groups to pods. In this scenario, confirm that the security groups allow communication between pods correctly.
- Utilize Container Insights to determine the root cause.
Comments: The CloudWatch Container Insights dashboard gives you access to the following Network Rx/Tx information: the number of bytes per second received over the network by the pod, and the number of bytes per second transmitted over the network by the pod.
- Verify that pods are using the correct pod/service DNS names to communicate with each other.
Comments: You must expose your pods through a service first. If you don't, then your pods can't get DNS names and can be reached only by their specific IP addresses.
- Redeploy pods that are in STOPPED state.
You have developed a micro-services based web application that is being deployed to your Amazon EKS cluster. The application has multiple services that need to be exposed externally. You have decided that exposing the application endpoints using a Kubernetes Ingress controller and AWS ALB would meet all requirements. A central administration team deployed the EKS cluster you are leveraging and is using the default configuration, with no additional components installed.
You have created a Kubernetes Ingress manifest with the proper configuration options for the ALB and your applications, but upon applying the manifest, nothing happened. No load balancer was created and your application is not accessible externally.
What is a potential reason that your ingress resource is not being satisfied?
- The ALB must be created prior to the Ingress resource being applied.
- An ingress resource should not be used for this use case. A service of type LoadBalancer should be created instead.
- The AWS Load Balancer Controller was not installed on the EKS cluster.
Comments: You must have an Ingress controller to satisfy an Ingress; creating an Ingress resource alone has no effect. EKS clusters do not come with the AWS Load Balancer Controller installed by default.
- An NLB must be used with an Ingress resource, not an ALB.
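For context, the AWS Load Balancer Controller is typically installed separately, for example via its Helm chart; the cluster name below is a placeholder, and the IAM role/service account prerequisites are assumed to be already in place:

```shell
# Sketch: installing the AWS Load Balancer Controller with Helm.
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```

Once the controller is running, it watches for Ingress resources with the matching ingress class and provisions an ALB for them.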
Your organization is considering migrating its application deployment process to containerization technology. They want to assess the potential benefits and drawbacks of this transition. As the IT consultant, you are asked to evaluate the situation and provide recommendations.
Which statement BEST represents a strong reason for you to adopt containerization technology?
- Containers can guarantee complete hardware-level isolation.
- Containers can span multiple hosts without complex configuration.
- Containers are a solution for persistently storing data that needs long-term retention.
- Containerization simplifies software updates and rollbacks, reducing the risk of system failures during deployment.
Comments: This statement is a strong reason for adopting containerization technology. Containerization allows for encapsulating applications and their dependencies in isolated containers. This isolation makes managing software updates and rollbacks easier, reducing the risk of system failures. When a containerized application encounters issues during an update, you can simply roll back to the previous container, ensuring stability and minimizing downtime.
You are a DevOps engineer at a large financial services company. Your team is responsible for managing the company's Kubernetes cluster, which hosts hundreds of microservices that serve millions of customers per day. You need to expose a group of microservices to the outside world, but you want to do so in a way that is highly available, secure, and scalable.
Which kubernetes resource would BEST meet your requirements?
- ClusterIP
- A combination of an Ingress controller and a load balancer.
Comments: The Ingress controller uses path-based routing rules to route traffic to the different microservices, while the load balancer distributes incoming traffic across multiple nodes and Availability Zones, ensuring high availability and scalability.
- NodePort
- Load Balancer
You have built an Amazon EKS cluster residing in an Amazon VPC with a CIDR of 192.168.0.0/16. You have decided to provision your cluster's managed node groups across two Availability Zones in subnets with the CIDRs of 192.168.32.0/19 and 192.168.64.0/19. The Amazon VPC-CNI plugin is using its default configuration. An API application has been deployed to the cluster and is running across multiple Pods to support availability requirements. The API application is now required to make outbound calls to another application running within the same Amazon VPC as your Amazon EKS cluster. Due to the unique nature of this external application, you must allow a list of IP ranges from which calls will be made.
Which IP address is an example from which the application running on your Amazon EKS cluster could make outbound calls?
- 172.16.25.5
- 10.52.36.11
- 192.168.67.9
Comments: This IP address falls within 192.168.64.0/19, one of the subnet ranges in which the EKS managed node groups reside.
- 127.0.0.1
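To sanity-check the answer, the /19 membership test can be reproduced with a small bash sketch (pure arithmetic, no AWS calls):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

ip=$(ip_to_int 192.168.67.9)
net=$(ip_to_int 192.168.64.0)
mask=$(( (0xFFFFFFFF << (32 - 19)) & 0xFFFFFFFF ))   # /19 netmask

# Keep only the top 19 bits of the address and compare to the network.
if [ $(( ip & mask )) -eq "$net" ]; then
  echo "192.168.67.9 is inside 192.168.64.0/19"
fi
```

The third octet 67 (binary 01000011) masked with 224 (the /19 boundary) yields 64, so the address lands in 192.168.64.0/19, the node group subnet, and the check prints the confirmation line.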
You are architecting a complex microservices-based application for deployment in a Kubernetes cluster. Each microservice runs in a separate pod and requires direct communication using the DNS name of the target pod, following the format `<service-name>.<namespace>.svc.cluster.local`. This is essential for maintaining data consistency and minimizing latency.
Which of the following communication mechanisms would BEST facilitate this requirement?
- Network Policies
- NodePort
- Service Discovery
Comments: Service discovery in Kubernetes is a mechanism that enables pods to discover and communicate with other pods or services within the cluster using DNS names.
- ClusterIP
Your company is running a web application on Kubernetes. The application needs to maintain three replicas at all times for high availability. You want to ensure that if a pod goes down, a new pod is automatically created to replace it. Which Kubernetes resource would you use to manage the replicas?
- Job
- ReplicaSet
Comments: A ReplicaSet maintains a stable set of replica Pods running at any given time. It will automatically restart and reschedule Pods if they fail.
- DaemonSet
- Service
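A minimal ReplicaSet manifest for the scenario might look like the following (name, labels, and image are placeholders):

```shell
# Sketch: a ReplicaSet that keeps three web replicas alive.
cat <<'EOF' > web-replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
spec:
  replicas: 3              # always keep three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
EOF
# kubectl apply -f web-replicaset.yaml
```

In practice a Deployment, which manages ReplicaSets for you, is the usual choice, since it adds rolling updates and rollbacks on top of the same replica-keeping behavior.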
The development team recently launched a new application and wants a simple way to see CPU, memory, and swap usage for its running Docker containers.
What command would you run from the Docker host to return this information?
- docker stats
Comments: Displays a live stream of resource usage statistics for one or more running containers.
- docker top
- docker logs
- docker port
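For reference, two common invocations (these require a running Docker daemon, so they are shown as a sketch):

```shell
# One-shot snapshot instead of a live stream:
docker stats --no-stream

# Custom columns; MemUsage shows usage against the limit, CPUPerc the CPU share:
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```

By contrast, `docker top` lists the processes inside a single container, which is why it does not satisfy the resource-usage requirement here.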
An engineer would like to deploy an application to a shared EKS cluster and ensure that application resources are isolated. The engineer wants to be able to give access to these resources to specific team members but not let them have access to other resources in the cluster.
What is the best option for the engineer's requirement?
- Deploy resources in the kube-system namespace.
- Create a new namespace for this application and deploy the resources in it
Comments: Correct. A dedicated namespace isolates the application's resources, and RBAC can then scope team members' access to just that namespace.
- Deploy the resources in the default namespace.
- Deploy the resources on a separate worker node.
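As a sketch (the namespace and group names are assumptions), the isolation plus scoped team access could look like this:

```shell
cat <<'EOF' > team-app-access.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-app           # hypothetical namespace for the application
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-app-edit
  namespace: team-app      # the binding is scoped to this namespace only
subjects:
  - kind: Group
    name: team-app-members # hypothetical group of team members
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit               # built-in role, granted only within team-app
  apiGroup: rbac.authorization.k8s.io
EOF
# kubectl apply -f team-app-access.yaml
```

Because the RoleBinding lives in `team-app`, the team can manage resources there but has no access to other namespaces in the shared cluster.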
Your organization runs web applications that use EKS to manage several workloads. One workload requires consistent hostnames even after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
- Persistent Volumes
- StatefulSets
Comments: StatefulSets ensure that each pod in the set has a unique and stable hostname based on a predictable naming convention. When scaling the workload, new pods are created with a new hostname based on the same naming convention, and existing pods maintain their existing hostname.
- ReplicaSets
- Role-based access control
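A sketch of the naming behavior (the names, serviceName, and image are placeholders): a StatefulSet named `web` with three replicas yields Pods named `web-0`, `web-1`, and `web-2`, and those names survive rescheduling and relaunches.

```shell
cat <<'EOF' > web-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless  # headless Service that gives each Pod a stable DNS entry
  replicas: 3                # Pods are named web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # placeholder image
EOF
# kubectl apply -f web-statefulset.yaml
```

Scaling up appends web-3, web-4, and so on, while existing Pods keep their hostnames, which is exactly the consistency the workload requires.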
A DevOps engineer created an EKS cluster using a deployment IAM role. When the engineer tries to connect to the cluster with a personal IAM role through kubectl, they get "An error occurred (InvalidClientTokenId) when calling the AssumeRole operation: The security token included in the request is invalid". They repeatedly ran the eks update-kubeconfig command, which completed without errors.
What actions will help to resolve this error most efficiently?
- Add in Kubernetes RBAC for the user so that they have permission to run kubectl commands.
- Create a support ticket to AWS and get them to provide a valid token id.
- Assume the deployment IAM role used to create the cluster and add the personal role to the aws-auth ConfigMap.
Comments: When an EKS cluster is created, the role assumed to create it is automatically granted admin access to the cluster. For another role or user to run kubectl commands, it has to be added to the aws-auth ConfigMap in the kube-system namespace.
- Delete the cluster and manually create it on the console with the DevOps engineer's IAM credentials.
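The mapping lives in the aws-auth ConfigMap in kube-system. A sketch of the entry to add (the account ID, role ARN, and username are placeholders; in practice you would edit the existing ConfigMap rather than replace it):

```shell
cat <<'EOF' > aws-auth-patch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/personal-role  # placeholder ARN
      username: devops-engineer
      groups:
        - system:masters   # broad access; narrow this in production
EOF
# Applied while assuming the role that created the cluster, e.g.:
# kubectl edit -n kube-system configmap/aws-auth
```

After the personal role is mapped, re-running `aws eks update-kubeconfig` with that role lets kubectl authenticate successfully.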
A Telecommunication company has a regulatory requirement to migrate all their applications to IPv6. You have been tasked to migrate all their Kubernetes clusters hosted on EKS. The security team has a requirement to prevent all traffic from the internet from connecting directly to all EC2 instances.
What steps do you need to take to migrate all the clusters? (Select TWO)
- Re-create all clusters and Enable IPv6 on all new clusters, Migrate all your applications using a blue-green deployment
Comments: IPv6 must be configured at cluster creation time, so the clusters need to be re-created.
- Create an Egress-only internet gateway in all VPCs and configure routing table to use it
- Re-configure all EKS clusters and Enable IPv6, re-deploy all your applications so new pods can start using IPv6
- Create a NAT Gateway in all VPCs and configure routing tables to use it
Comments: While a NAT gateway could work, an egress-only internet gateway is the recommended way for IPv6 pods to reach the internet while blocking inbound connections from it.
- Configure AWS VPC CNI to enable Dual-stack for pods
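For example, with eksctl the IPv6 choice is made in the cluster config at creation time (the cluster name and region below are placeholders):

```shell
cat <<'EOF' > ipv6-cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ipv6-cluster        # placeholder cluster name
  region: us-east-1         # placeholder region
kubernetesNetworkConfig:
  ipFamily: IPv6            # must be set at creation; it cannot be changed later
addons:
  - name: vpc-cni           # the VPC CNI handles pod IPv6 addressing
EOF
# eksctl create cluster -f ipv6-cluster.yaml
```

The new cluster then becomes the "green" environment into which the applications are migrated, while egress-only internet gateways in each VPC provide outbound-only internet access for the IPv6 pods.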
You plan to run an application in a Docker container. Your application requires a specific amount of CPU and memory to run without resource contention.
Which `docker` CLI command allows you to specify resource constraints such as CPU and memory limits?
- docker run
Comments: The docker run command is used to create and start a new container. You can specify resource constraints using flags such as --cpus for CPU limits and --memory for memory limits when using the docker run command.
- docker start
- docker exec
- docker create
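For example (the limit values and image are illustrative; running these requires a Docker daemon, so this is only a sketch):

```shell
# Cap the container at 1.5 CPUs and 512 MiB of memory at creation time:
docker run --cpus="1.5" --memory="512m" nginx:1.25

# docker create accepts the same flags but only creates the container
# without starting it; docker start and docker exec cannot set limits.
```

This is why `docker run` is the answer here: the resource-constraint flags belong to container creation, not to starting or entering an existing container.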
As a software architect, your team needs to deploy various microservices to your company's EKS cluster.
Which of the following BEST describes how Helm can help with this goal?
- Helm allows you to define Kubernetes manifests as templates, checking them into Git. You can then use a Git push to trigger an automated deployment.
Comments: While Helm does use templates, it does not directly integrate with Git for deployments; a GitOps tool like Flux or Argo CD would enable this automated workflow.
- Helm is a container registry that provides secure storage for Docker images needed by your microservices.
- With Helm charts, you can define, install, and upgrade even the most complex microservices deployments on Kubernetes.
- Helm is a monitoring tool that lets you visualize metrics and logs for microservices running on EKS.
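The chart lifecycle the correct option refers to maps to a handful of CLI verbs (the chart path and release name are placeholders; these assume access to a cluster):

```shell
# Install a release from a chart, then upgrade and roll back as needed:
helm install my-api ./charts/my-api   # first deployment
helm upgrade my-api ./charts/my-api   # roll out a new chart version
helm rollback my-api 1                # return to revision 1
helm list                             # show deployed releases
```

Each release is versioned, which is what makes upgrading and rolling back complex microservice deployments manageable.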
You are a developer working on microservices hosted on an EKS cluster. Your organization uses Kubernetes namespaces for multi-tenancy. The security team wants tenants to implement network segmentation between namespaces for defense-in-depth.
What Kubernetes mechanism can you use to restrict network traffic between Pods in your namespace and Pods in other namespaces in the same EKS cluster?
- Cluster Security Group
- Network Policy
Comments: Within a Kubernetes cluster, all Pod to Pod communication is allowed by default. While this flexibility may help promote experimentation, it is not considered secure. Kubernetes network policies give you a mechanism to restrict network traffic between Pods (often referred to as East/West traffic) as well as between Pods and external services.
- EKS is a managed AWS service, security is built in and therefore Pod to Pod communication is restricted by default. Explicit configuration to restrict network traffic is not required.
- Security Group for pods
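A sketch of a policy that admits ingress only from Pods in the same namespace, cutting off cross-namespace traffic (the policy name is an assumption):

```shell
cat <<'EOF' > same-namespace-only.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
spec:
  podSelector: {}          # applies to every Pod in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow traffic only from Pods in the same namespace
EOF
# kubectl apply -f same-namespace-only.yaml -n <tenant-namespace>
```

Because the single `from` rule uses a bare podSelector with no namespaceSelector, it matches only Pods in the policy's own namespace; traffic from other namespaces no longer matches any allow rule and is dropped.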
You have a new team that is not familiar with Kubernetes and containers. With their limited knowledge, they need help understanding the differences between a container and a Pod in Kubernetes and seek your advice.
Which statements BEST help your team understand the concepts? (Select TWO)
- A Pod may have one or more containers running at the same time.
Comments: A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service. For example, one container serves data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
- To scale out an application during a load condition, you add multiple copies of your containers inside a Pod.
- A Pod may only have at most one container in it.
Comments: Incorrect. A Pod can encapsulate multiple co-located, tightly coupled containers that share resources, so it is not limited to a single container.
- A container in Kubernetes is known as a Pod.
- A Pod is a virtual execution environment in which a container runs.
You are refactoring a monolithic e-commerce app into microservices in EKS. Each microservice runs in its own namespace. The monolith used a LoadBalancer Service for Internet traffic.
What is the most cost-efficient way to expose the new microservices to the Internet with minimal overhead?
- Deploy AWS Load Balancer Controller, then deploy multiple Ingress resources that expose each microservice running in different namespaces. Use different path prefixes for each microservice and configure all Ingress resources to use the same IngressGroup.
Comments: Ingress is a namespaced resource and can only refer to a Service backend within the same namespace; this solution uses the IngressGroup feature to share one ALB across all the Ingress rules.
- Deploy NGINX Ingress Controller, then deploy one NGINX Ingress resource that exposes all microservices in a single Namespace. Use different path prefixes for each microservice.
- Deploy AWS Load Balancer Controller, then deploy one Ingress resource that exposes all microservices in a single Namespace. Use different path prefixes for each microservice.
- Use a Service of type LoadBalancer to expose each microservice.
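A sketch of one of the per-namespace Ingress resources; the `alb.ingress.kubernetes.io/group.name` annotation is what makes all of them share a single ALB. The service name, namespace, group name, and path are placeholders:

```shell
cat <<'EOF' > orders-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
  namespace: orders                             # one Ingress per microservice namespace
  annotations:
    alb.ingress.kubernetes.io/group.name: shop  # same group => one shared ALB
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /orders                       # path prefix for this microservice
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
EOF
# kubectl apply -f orders-ingress.yaml
```

Repeating this pattern in each namespace with a distinct path prefix keeps costs down: one ALB serves every microservice instead of one load balancer per Service.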
Your microservice application is failing to communicate with other pods. To investigate, you run the following DNS query from within an application pod and receive the error below:
server can't find nginx.default.svc.cluster.local: NXDOMAIN
Which of the following is the likely cause of the error?
- Pods are on different nodes and cannot communicate.
- The pod isn't exposed through a service.
Comments: Pods only receive cluster DNS names when they are exposed through a Service; without one, the lookup returns NXDOMAIN. One way to expose the pods is by creating a Service.
- Subnets have run out of available IP addresses.
- Pod requires a public IP assigned to be reachable.
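A sketch of the fix and re-test (the deployment is assumed to be named nginx; the commands need a cluster, so they are illustrative):

```shell
# Create a ClusterIP Service in front of the deployment's Pods:
kubectl expose deployment nginx --port=80 --target-port=80

# Repeat the lookup from inside a Pod; the name should now resolve:
# nslookup nginx.default.svc.cluster.local
```

Once the Service exists, the cluster DNS publishes the `nginx.default.svc.cluster.local` record and the NXDOMAIN error disappears.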
You are responsible for managing the company's Kubernetes cluster, which hosts a handful of microservices. You are planning to add new nodes to the cluster to meet increasing demand and are considering two types of nodes: worker nodes and control plane nodes.
What are the key differences between worker nodes and the control plane?
- Worker nodes are more expensive than the control plane.
- Control plane is more scalable than worker nodes.
- Worker nodes are responsible for running containerized applications, while the control plane is responsible for managing the cluster.
Comments: Worker nodes are responsible for running containerized applications, such as pods. The control plane is responsible for managing the cluster, such as scheduling pods to worker nodes and monitoring the health of the cluster.
- Worker nodes can be deployed on-premises or in the cloud, while master nodes must be deployed on-premises.
The development team at Company A regularly pushes new images directly to ECR for deployment. Given Company A's recent security concerns, specifically around common vulnerabilities and exposures, the DevOps team is tasked with creating a plan to scan these images as soon as they are pushed to ECR AND make reports on vulnerability and exposure findings available on SecurityHub for the security team.
How can the DevOps team address ALL the requirements of this task?
- Scan the image using Hadolint, save the reports on a txt file, send to the Security team when requested.
- Enable Amazon Inspector, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
- Enable ECS, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
Comments: ECS is another container orchestrator from AWS; it does not address the requirements posed by the question.
- Scan the image using Hadolint, when a new image is pushed and automatically scanned, the reports will be available on SecurityHub.
| Question category | Total questions | Total score |
|---|---|---|
| Introduction to Containers | 5 | 80 |
| Why run your Kubernetes workloads on Amazon's Elastic Kubernetes Service (EKS) | 1 | 100 |
| Introduction to Kubernetes Core Concepts | 12 | 95.83 |
| The EKS Cluster | 5 | 60 |
| Deploying Microservices to EKS | 10 | 76.6 |
| EKS Security | 5 | 80 |
| EKS Networking | 5 | 90 |
| Basic Observability for EKS | 2 | 100 |
| Autoscaling and cost optimization | 4 | 50 |
| GitOps for EKS Automation - Exploring the Ecosystem | 1 | 100 |