A company uses AWS and has a VPC that contains critical compute infrastructure with
predictable traffic patterns. The company has configured VPC flow logs that are published
to a log group in Amazon CloudWatch Logs.
The company's DevOps team needs to configure a monitoring solution for the VPC flow
logs to identify anomalies in network traffic to the VPC over time. If the monitoring solution
detects an anomaly, the company needs the ability to initiate a response to the anomaly.
How should the DevOps team configure the monitoring solution to meet these
requirements?
A. Create an Amazon Kinesis data stream. Subscribe the log group to the data stream. Configure Amazon Kinesis Data Analytics to detect log anomalies in the data stream. Create an AWS Lambda function to use as the output of the data stream. Configure the Lambda function to write to the default Amazon EventBridge event bus in the event of an anomaly finding.
B. Create an Amazon Kinesis Data Firehose delivery stream that delivers events to an Amazon S3 bucket. Subscribe the log group to the delivery stream. Configure Amazon Lookout for Metrics to monitor the data in the S3 bucket for anomalies. Create an AWS Lambda function to run in response to Lookout for Metrics anomaly findings. Configure the Lambda function to publish to the default Amazon EventBridge event bus.
C. Create an AWS Lambda function to detect anomalies. Configure the Lambda function to publish an event to the default Amazon EventBridge event bus if the Lambda function detects an anomaly. Subscribe the Lambda function to the log group.
D. Create an Amazon Kinesis data stream. Subscribe the log group to the data stream. Create an AWS Lambda function to detect log anomalies. Configure the Lambda function to write to the default Amazon EventBridge event bus if the Lambda function detects an anomaly. Set the Lambda function as the processor for the data stream.
Explanation: To meet the requirements, the DevOps team needs to configure a monitoring solution for the VPC flow logs that can detect anomalies in network traffic over time and initiate a response to the anomaly. The DevOps team can use Amazon Kinesis Data Streams to ingest and process streaming data from CloudWatch Logs. The DevOps team can subscribe the log group to a Kinesis data stream, which will deliver log events from CloudWatch Logs to Kinesis Data Streams in near real-time. The DevOps team can then create an AWS Lambda function to detect log anomalies using machine learning or statistical methods. The Lambda function can be set as a processor for the data stream, which means that it will process each record from the stream before sending it to downstream applications or destinations. The Lambda function can also write to the default Amazon EventBridge event bus if it detects an anomaly, which will allow other AWS services or custom applications to respond to the anomaly event.
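Below is a minimal, hedged sketch (Python with boto3) of the Lambda processor described in this explanation, assuming the log group is subscribed to the Kinesis data stream. The event source name, detail type, and detection logic are illustrative placeholders, not part of the question.

```python
import base64
import gzip
import json

import boto3

events = boto3.client("events")


def looks_anomalous(log_events):
    # Placeholder detection logic; a real implementation would apply
    # statistical or machine learning scoring to the flow log records.
    return len(log_events) > 1000


def handler(event, context):
    for record in event["Records"]:
        # CloudWatch Logs delivers subscription data base64-encoded and gzipped.
        payload = gzip.decompress(base64.b64decode(record["kinesis"]["data"]))
        data = json.loads(payload)
        if data.get("messageType") != "DATA_MESSAGE":
            continue
        if looks_anomalous(data["logEvents"]):
            # Publish to the default EventBridge event bus so that rules can
            # initiate an automated response to the anomaly.
            events.put_events(
                Entries=[{
                    "Source": "custom.vpc-flow-anomaly",     # assumed source name
                    "DetailType": "VpcFlowAnomalyDetected",  # assumed detail type
                    "Detail": json.dumps({"logGroup": data["logGroup"]}),
                }]
            )
```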
A company has a legacy application. A DevOps engineer needs to automate the process of
building the deployable artifact for the legacy application. The solution must store the
deployable artifact in an existing Amazon S3 bucket for future deployments to reference.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Create a custom Docker image that contains all the dependencies for the legacy application. Store the custom Docker image in a new Amazon Elastic Container Registry (Amazon ECR) repository. Configure a new AWS CodeBuild project to use the custom Docker image to build the deployable artifact and to save the artifact to the S3 bucket.
B. Launch a new Amazon EC2 instance. Install all the dependencies for the legacy application on the EC2 instance. Use the EC2 instance to build the deployable artifact and to save the artifact to the S3 bucket.
C. Create a custom EC2 Image Builder image. Install all the dependencies for the legacy application on the image. Launch a new Amazon EC2 instance from the image. Use the new EC2 instance to build the deployable artifact and to save the artifact to the S3 bucket.
D. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with an AWS Fargate profile that runs in multiple Availability Zones. Create a custom Docker image that contains all the dependencies for the legacy application. Store the custom Docker image in a new Amazon Elastic Container Registry (Amazon ECR) repository. Use the custom Docker image inside the EKS cluster to build the deployable artifact and to save the artifact to the S3 bucket.
Explanation: This approach is the most operationally efficient because it leverages the benefits of containerization, such as isolation and reproducibility, as well as AWS managed services. AWS CodeBuild is a fully managed build service that can compile your source code, run tests, and produce deployable software packages. By using a custom Docker image that includes all dependencies, you can ensure that the environment in which your code is built is consistent. Using Amazon ECR to store Docker images lets you easily deploy the images to any environment. Also, you can directly upload the build artifacts to Amazon S3 from AWS CodeBuild, which is beneficial for version control and archival purposes.
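As a hedged illustration of option A, the following boto3 sketch creates a CodeBuild project that builds inside a custom ECR image and writes its output artifact to the existing S3 bucket. The project name, repository URI, bucket name, and role ARN are assumptions.

```python
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="legacy-app-build",  # assumed project name
    source={
        "type": "CODECOMMIT",  # any supported source type would also work
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/legacy-app",
    },
    artifacts={
        "type": "S3",
        "location": "existing-artifact-bucket",  # the existing S3 bucket
        "packaging": "ZIP",
    },
    environment={
        "type": "LINUX_CONTAINER",
        "computeType": "BUILD_GENERAL1_SMALL",
        # Custom image that contains the legacy application's dependencies.
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/legacy-build:latest",
        "imagePullCredentialsType": "SERVICE_ROLE",
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-legacy-app",  # placeholder
)
```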
A company has an AWS CodePipeline pipeline that is configured with an Amazon S3
bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the
same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS
CloudFormation deploy action.
The CodeBuild project uses the aws cloudformation package AWS CLI command to build
an artifact that contains the Lambda function code’s .zip file and the CloudFormation
template. The CloudFormation deploy action references the CloudFormation template from
the output artifact of the CodeBuild project’s build action.
The company wants to also deploy the Lambda application to the us-east-1 Region by
using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild
project to use the aws cloudformation package command to produce an additional output
artifact for us-east-1.
Which combination of additional steps should the DevOps engineer take to meet these
requirements? (Choose two.)
A. Modify the CloudFormation template to include a parameter for the Lambda function code’s .zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.
B. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
C. Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.
D. Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.
E. Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
Explanation: A. The CloudFormation template should be modified to include a parameter that indicates the location of the .zip file that contains the Lambda function's code. This allows the CloudFormation deploy action to use the correct artifact depending on the Region. This is critical because Lambda functions need to reference their code artifacts from the same Region in which they are deployed. B. You would also need to create a new CloudFormation deploy action for the us-east-1 Region within the pipeline. This action should be configured to use the CloudFormation template from the artifact that was specifically created for us-east-1.
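The following sketch (a Python dict, printed as JSON) shows roughly what the configuration of the new us-east-1 deploy action could look like, passing the us-east-1 artifact location to the template through ParameterOverrides. The artifact, stack, and parameter names are assumptions for illustration only.

```python
import json

deploy_action_us_east_1 = {
    "name": "Deploy-us-east-1",
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "CloudFormation",
        "version": "1",
    },
    "region": "us-east-1",
    "inputArtifacts": [{"name": "BuildOutputUsEast1"}],  # assumed artifact name
    "configuration": {
        "ActionMode": "CREATE_UPDATE",
        "StackName": "lambda-app-us-east-1",              # assumed stack name
        "TemplatePath": "BuildOutputUsEast1::template.yml",
        "Capabilities": "CAPABILITY_IAM",
        # The template parameters below are assumed to exist after the
        # modification described in option A.
        "ParameterOverrides": json.dumps({
            "CodeBucket": {"Fn::GetArtifactAtt": ["BuildOutputUsEast1", "BucketName"]},
            "CodeKey": {"Fn::GetArtifactAtt": ["BuildOutputUsEast1", "ObjectKey"]},
        }),
    },
}

print(json.dumps(deploy_action_us_east_1, indent=2))
```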
A company's application uses a fleet of Amazon EC2 On-Demand Instances to analyze
and process data. The EC2 instances are in an Auto Scaling group. The Auto Scaling
group is a target group for an Application Load Balancer (ALB). The application analyzes
critical data that cannot tolerate interruption. The application also analyzes noncritical data
that can withstand interruption.
The critical data analysis requires quick scalability in response to real-time application
demand. The noncritical data analysis is memory intensive. A DevOps engineer
must implement a solution that reduces scale-out latency for the critical data. The solution
also must process the noncritical data.
Which combination of steps will meet these requirements? (Select TWO.)
A. For the critical data, modify the existing Auto Scaling group. Create a warm pool instance in the stopped state. Define the warm pool size. Create a new version of the launch template that has detailed monitoring enabled. Use Spot Instances.
B. For the critical data, modify the existing Auto Scaling group. Create a warm pool instance in the stopped state. Define the warm pool size. Create a new version of the launch template that has detailed monitoring enabled. Use On-Demand Instances.
C. For the critical data, modify the existing Auto Scaling group. Create a lifecycle hook to ensure that bootstrap scripts are completed successfully. Ensure that the application on the instances is ready to accept traffic before the instances are registered. Create a new version of the launch template that has detailed monitoring enabled.
D. For the noncritical data, create a second Auto Scaling group that uses a launch template. Configure the launch template to install the unified Amazon CloudWatch agent and to configure the CloudWatch agent with a custom memory utilization metric. Use Spot Instances. Add the new Auto Scaling group as the target group for the ALB. Modify the application to use two target groups for critical data and noncritical data.
E. For the noncritical data, create a second Auto Scaling group. Choose the predefined memory utilization metric type for the target tracking scaling policy. Use Spot Instances. Add the new Auto Scaling group as the target group for the ALB. Modify the application to use two target groups for critical data and noncritical data.
Explanation:
For the critical data, using a warm pool can reduce the scale-out latency by
having pre-initialized EC2 instances ready to serve the application traffic. Using
On-Demand Instances can ensure that the instances are always available and not
interrupted by Spot interruptions.
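A minimal boto3 sketch of adding such a warm pool to the existing Auto Scaling group follows; the group name and pool sizes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_warm_pool(
    AutoScalingGroupName="critical-data-asg",  # assumed group name
    PoolState="Stopped",   # pre-initialized instances wait in the stopped state
    MinSize=2,             # assumed warm pool size
    MaxGroupPreparedCapacity=10,
)
```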
For the noncritical data, using a second Auto Scaling group with Spot Instances
can reduce the cost and leverage the unused capacity of EC2. Using a launch
template with the CloudWatch agent can enable the collection of memory
utilization metrics, which can be used to scale the group based on the memory
demand. Adding the second group as a target group for the ALB and modifying the
application to use two target groups can enable routing the traffic based on the
data type.
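As a hedged sketch, a target tracking policy on the second Auto Scaling group could use the memory metric that the unified CloudWatch agent publishes by default; the group name, namespace, and target value below are assumptions.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="noncritical-data-asg",  # assumed group name
    PolicyName="memory-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "mem_used_percent",  # default CloudWatch agent memory metric
            "Namespace": "CWAgent",
            "Dimensions": [
                {"Name": "AutoScalingGroupName", "Value": "noncritical-data-asg"},
            ],
            "Statistic": "Average",
        },
        "TargetValue": 60.0,  # assumed memory utilization target (%)
    },
)
```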
A DevOps engineer used an AWS CloudFormation custom resource to set up AD
Connector. The AWS Lambda function ran and created AD Connector, but
CloudFormation is not transitioning from CREATE_IN_PROGRESS to CREATE_COMPLETE.
Which action should the engineer take to resolve this issue?
A. Ensure the Lambda function code has exited successfully.
B. Ensure the Lambda function code returns a response to the pre-signed URL.
C. Ensure the Lambda function IAM role has cloudformation:UpdateStack permissions for the stack ARN.
D. Ensure the Lambda function IAM role has ds:ConnectDirectory permissions for the AWS account.
A company is using an Amazon Aurora cluster as the data store for its application. The
Aurora cluster is configured with a single DB instance. The application performs read and
write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming
maintenance window. The cluster must remain available with the least possible interruption
during the maintenance window.
What should a DevOps engineer do to meet these requirements?
A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.
D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
Explanation: To meet the requirements, the DevOps engineer should do the following:
Turn on the Multi-AZ option on the Aurora cluster.
Update the application to use the Aurora cluster endpoint for write operations.
Update the Aurora cluster's reader endpoint for reads.
Turning on the Multi-AZ option will create a replica of the database in a different Availability
Zone. This will ensure that the database remains available even if one of the Availability
Zones is unavailable.
Updating the application to use the Aurora cluster endpoint for write operations ensures
that writes always reach the current primary DB instance. If the primary restarts or fails
over during the maintenance window, the cluster endpoint automatically redirects to the
newly promoted primary, so write interruption is minimal.
Updating the application to use the Aurora cluster's reader endpoint for reads allows the
application to read from the replica, which keeps reads available and maintains
performance during the maintenance window.
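For illustration, enabling the Multi-AZ option on an Aurora cluster amounts to adding a reader DB instance in another Availability Zone; a hedged boto3 sketch with placeholder identifiers follows.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db-reader-1",    # assumed identifier
    DBClusterIdentifier="app-aurora-cluster",  # the existing cluster
    DBInstanceClass="db.r6g.large",            # assumed instance class
    Engine="aurora-mysql",                     # assumed engine
    AvailabilityZone="us-east-1b",             # a different AZ from the writer
)
```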
A company has an application that runs on Amazon EC2 instances that are in an Auto
Scaling group. When the application starts up, the application needs to process data from
an Amazon S3 bucket before the application can start to serve requests.
The size of the data that is stored in the S3 bucket is growing. When the Auto Scaling
group adds new instances, the application now takes several minutes to download and
process the data before the application can serve requests. The company must reduce the
time that elapses before new EC2 instances are ready to serve requests.
Which solution is the MOST cost-effective way to reduce the application startup time?
A. Configure a warm pool for the Auto Scaling group with warmed EC2 instances in the Stopped state. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.
B. Increase the maximum instance count of the Auto Scaling group. Configure an
autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group.
Modify the application to complete the lifecycle hook when the application is ready to serve
requests.
C. Configure a warm pool for the Auto Scaling group with warmed EC2 instances in the Running state. Configure an autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group. Modify the application to complete the lifecycle hook when the application is ready to serve requests.
D. Increase the maximum instance count of the Auto Scaling group. Configure an
autoscaling:EC2_INSTANCE_LAUNCHING lifecycle hook on the Auto Scaling group.
Modify the application to complete the lifecycle hook and to place the new instance in the
Standby state when the application is ready to serve requests.
Explanation: Option A is the most cost-effective solution. By configuring a warm pool of EC2 instances in the Stopped state, the company can reduce the time it takes for new instances to be ready to serve requests. When the Auto Scaling group launches a new instance, it can attach the stopped EC2 instance from the warm pool. The instance can then be started up immediately, rather than having to wait for the data to be downloaded and processed. This reduces the overall startup time for the application.
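A hedged sketch of how the application could complete the lifecycle hook once it is ready to serve requests is shown below; the hook and group names are placeholders, and instance metadata access is simplified.

```python
import urllib.request

import boto3

autoscaling = boto3.client("autoscaling")

# The instance can look up its own ID from the instance metadata service
# (IMDSv1 shown for brevity; IMDSv2 would require a session token first).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

autoscaling.complete_lifecycle_action(
    LifecycleHookName="app-ready-hook",  # assumed hook name
    AutoScalingGroupName="app-asg",      # assumed group name
    LifecycleActionResult="CONTINUE",
    InstanceId=instance_id,
)
```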
A company builds a container image in an AWS CodeBuild project by running Docker
commands. After the container image is built, the CodeBuild project uploads the container
image to an Amazon S3 bucket. The CodeBuild project has an IAM service role that has
permissions to access the S3 bucket.
A DevOps engineer needs to replace the S3 bucket with an Amazon Elastic Container
Registry (Amazon ECR) repository to store the container images. The DevOps engineer
creates a private ECR image repository in the same AWS Region as the CodeBuild
project. The DevOps engineer adjusts the IAM service role with the permissions that are
necessary to work with the new ECR repository. The DevOps engineer also places new
repository information into the docker build command and the docker push command that
are used in the buildspec.yml file.
When the CodeBuild project runs a build job, the job fails when the job tries to access the
ECR repository.
Which solution will resolve the issue of failed access to the ECR repository?
A. Update the buildspec.yml file to log in to the ECR repository by using the aws ecr get-login-password AWS CLI command to obtain an authentication token. Update the docker login command to use the authentication token to access the ECR repository.
B. Add an environment variable of type SECRETS_MANAGER to the CodeBuild project. In
the environment variable, include the ARN of the CodeBuild project's IAM service role.
Update the buildspec.yml file to use the new environment variable to log in with the docker
login command to access the ECR repository.
C. Update the ECR repository to be a public image repository. Add an ECR repository policy that allows the IAM service role to have access.
D. Update the buildspec.yml file to use the AWS CLI to assume the IAM service role for ECR operations. Add an ECR repository policy that allows the IAM service role to have access.
Explanation: (A) When Docker communicates with an Amazon Elastic Container Registry
(ECR) repository, it requires authentication. You can authenticate your Docker client to the
Amazon ECR registry with the help of the AWS CLI (Command Line Interface). Specifically,
you can use the "aws ecr get-login-password" command to get an authorization token and
then use Docker's "docker login" command with that token to authenticate to the registry.
You would need to perform these steps in your buildspec.yml file before attempting to push
or pull images from/to the ECR repository.
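A small, hedged sketch of that login step follows, expressed as a Python script that runs the same CLI commands a buildspec.yml phase would run; the Region and registry URI are placeholders, and the AWS CLI and Docker are assumed to be available in the build image.

```python
import subprocess

region = "us-east-1"                                       # assumed Region
registry = "123456789012.dkr.ecr.us-east-1.amazonaws.com"  # assumed registry URI

# aws ecr get-login-password returns a temporary authorization token.
token = subprocess.run(
    ["aws", "ecr", "get-login-password", "--region", region],
    check=True, capture_output=True, text=True,
).stdout

# Pipe the token into docker login so the later docker push can reach ECR.
subprocess.run(
    ["docker", "login", "--username", "AWS", "--password-stdin", registry],
    input=token, check=True, text=True,
)
```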
A company recently migrated its application to an Amazon Elastic Kubernetes Service
(Amazon EKS) cluster that uses Amazon EC2 instances. The company configured the
application to automatically scale based on CPU utilization.
The application produces memory errors when it experiences heavy loads. The application
also does not scale out enough to handle the increased load. The company needs to collect and analyze memory metrics for the application over time.
Which combination of steps will meet these requirements? (Select THREE.)
A. Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the cluster uses.
B. Attach the CloudWatchAgentServerPolicy managed IAM policy to a service account role for the cluster.
C. Collect performance metrics by deploying the unified Amazon CloudWatch agent to the existing EC2 instances in the cluster. Add the agent to the AMI for any new EC2 instances that are added to the cluster.
D. Collect performance logs by deploying the AWS Distro for OpenTelemetry collector as a DaemonSet.
E. Analyze the pod_memory_utilization Amazon CloudWatch metric in the ContainerInsights namespace by using the Service dimension.
F. Analyze the node_memory_utilization Amazon CloudWatch metric in the ContainerInsights namespace by using the ClusterName dimension.
A company is developing a new application. The application uses AWS Lambda functions
for its compute tier. The company must use a canary deployment for any changes to the
Lambda functions. Automated rollback must occur if any failures are reported.
The company’s DevOps team needs to create the infrastructure as code (IaC) and the
CI/CD pipeline for this solution.
Which combination of steps will meet these requirements? (Choose three.)
A. Create an AWS CloudFormation template for the application. Define each Lambda function in the template by using the AWS::Lambda::Function resource type. In the template, include a version for the Lambda function by using the AWS::Lambda::Version resource type. Declare the CodeSha256 property. Configure an AWS::Lambda::Alias resource that references the latest version of the Lambda function.
B. Create an AWS Serverless Application Model (AWS SAM) template for the application. Define each Lambda function in the template by using the AWS::Serverless::Function resource type. For each function, include configurations for the AutoPublishAlias property and the DeploymentPreference property. Configure the deployment configuration type to LambdaCanary10Percent10Minutes.
C. Create an AWS CodeCommit repository. Create an AWS CodePipeline pipeline. Use
the CodeCommit repository in a new source stage that starts the pipeline. Create an AWS
CodeBuild project to deploy the AWS Serverless Application Model (AWS SAM) template.
Upload the template and source code to the CodeCommit repository. In the CodeCommit
repository, create a buildspec.yml file that includes the commands to build and deploy the
SAM application.
D. Create an AWS CodeCommit repository. Create an AWS CodePipeline pipeline. Use the CodeCommit repository in a new source stage that starts the pipeline. Create an AWS CodeDeploy deployment group that is configured for canary deployments with a DeploymentPreference type of Canary10Percent10Minutes. Upload the AWS CloudFormation template and source code to the CodeCommit repository. In the CodeCommit repository, create an appspec.yml file that includes the commands to deploy the CloudFormation template.
E. Create an Amazon CloudWatch composite alarm for all the Lambda functions. Configure an evaluation period and dimensions for Lambda. Configure the alarm to enter the ALARM state if any errors are detected or if there is insufficient data.
F. Create an Amazon CloudWatch alarm for each Lambda function. Configure the alarms to enter the ALARM state if any errors are detected. Configure an evaluation period, dimensions for each Lambda function and version, and the namespace as AWS/Lambda on the Errors metric.
Explanation: The requirement is to create the infrastructure as code (IaC) and the CI/CD
pipeline for the Lambda application that uses canary deployment and automated rollback.
To do this, the DevOps team needs to use the following steps:
Create an AWS Serverless Application Model (AWS SAM) template for the
application. AWS SAM is a framework that simplifies the development and
deployment of serverless applications on AWS. AWS SAM allows customers to
define Lambda functions and other resources in a template by using a simplified
syntax. For each Lambda function, the DevOps team can include configurations
for the AutoPublishAlias property and the DeploymentPreference property. The
AutoPublishAlias property specifies the name of the alias that points to the latest
version of the function. The DeploymentPreference property specifies how
CodeDeploy deploys new versions of the function. By configuring the deployment
configuration type to LambdaCanary10Percent10Minutes, the DevOps team can
enable canary deployment with 10% of traffic shifted to the new version every 10
minutes.
Create an AWS CodeCommit repository. Create an AWS CodePipeline pipeline.
Use the CodeCommit repository in a new source stage that starts the pipeline.
Create an AWS CodeBuild project to deploy the AWS SAM template. CodeCommit
is a fully managed source control service that hosts Git repositories. CodePipeline
is a fully managed continuous delivery service that automates the release process
of software applications. CodeBuild is a fully managed continuous integration
service that compiles source code and runs tests. By using these services, the
DevOps team can create a CI/CD pipeline for the Lambda application. The pipeline should use the CodeCommit repository as the source stage, where the
DevOps team can upload the SAM template and source code. The pipeline should
also use a CodeBuild project as the build stage, where the SAM template can be
built and deployed.
Create an Amazon CloudWatch alarm for each Lambda function. Configure the
alarms to enter the ALARM state if any errors are detected. Configure an
evaluation period, dimensions for each Lambda function and version, and the
namespace as AWS/Lambda on the Errors metric. CloudWatch is a service that
monitors and collects metrics from AWS resources and applications. CloudWatch
alarms are actions that are triggered when a metric crosses a specified threshold.
By creating CloudWatch alarms for each Lambda function, the DevOps team can
monitor the health and performance of each function version during deployment.
By configuring the alarms to enter the ALARM state if any errors are detected, the
DevOps team can enable automated rollback if any failures are reported.
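For illustration, one of the per-function alarms described above could be created as in the following hedged boto3 sketch; the function name, alias, and threshold are placeholder assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-fn-canary-errors",  # assumed alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[
        {"Name": "FunctionName", "Value": "orders-fn"},   # assumed function name
        {"Name": "Resource", "Value": "orders-fn:live"},  # assumed alias
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```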
A DevOps engineer manages a large commercial website that runs on Amazon EC2. The
website uses Amazon Kinesis Data Streams to collect and process web logs. The DevOps
engineer manages the Kinesis consumer application, which also runs on Amazon EC2.
Sudden increases of data cause the Kinesis consumer application to fall behind, and the
Kinesis data streams drop records before the records can be processed. The DevOps
engineer must implement a solution to improve stream handling.
Which solution meets these requirements with the MOST operational efficiency?
A. Modify the Kinesis consumer application to store the logs durably in Amazon S3. Use Amazon EMR to process the data directly on Amazon S3 to derive customer insights. Store the results in Amazon S3.
B. Horizontally scale the Kinesis consumer application by adding more EC2 instances based on the Amazon CloudWatch GetRecords.IteratorAgeMilliseconds metric. Increase the retention period of the Kinesis data streams.
C. Convert the Kinesis consumer application to run as an AWS Lambda function. Configure the Kinesis data streams as the event source for the Lambda function to process the data streams.
D. Increase the number of shards in the Kinesis data streams to increase the overall throughput so that the consumer application processes the data faster.
A company sells products through an ecommerce web application. The company wants a
dashboard that shows a pie chart of product transaction details. The company wants to
integrate the dashboard with the company's existing Amazon CloudWatch dashboards.
Which solution will meet these requirements with the MOST operational efficiency?
A. Update the ecommerce application to emit a JSON object to a CloudWatch log group for each processed transaction. Use CloudWatch Logs Insights to query the log group and to visualize the results in a pie chart format. Attach the results to the desired CloudWatch dashboard.
B. Update the ecommerce application to emit a JSON object to an Amazon S3 bucket for each processed transaction. Use Amazon Athena to query the S3 bucket and to visualize the results in a pie chart format. Export the results from Athena. Attach the results to the desired CloudWatch dashboard.
C. Update the ecommerce application to use AWS X-Ray for instrumentation. Create a new X-Ray subsegment. Add an annotation for each processed transaction. Use X-Ray traces to query the data and to visualize the results in a pie chart format. Attach the results to the desired CloudWatch dashboard.
D. Update the ecommerce application to emit a JSON object to a CloudWatch log group for each processed transaction. Create an AWS Lambda function to aggregate and write the results to Amazon DynamoDB. Create a Lambda subscription filter for the log group. Attach the results to the desired CloudWatch dashboard.