A DevOps team manages an API running on-premises that serves as a backend for an
Amazon API Gateway endpoint. Customers have been complaining about high response
latencies, which the development team has verified using the API Gateway latency metrics
in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data
without introducing additional latency.
Which actions should be taken to accomplish this? (Choose two.)
A. Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
B. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those segments to X-Ray during each request.
C. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray daemon to upload segments to X-Ray.
D. Modify the on-premises application to send log information back to API Gateway with each request.
E. Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to CloudWatch metrics.
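The key to option C is that the X-Ray daemon decouples segment delivery from the request path: the application writes a small UDP datagram to the local daemon, which batches and uploads segments asynchronously, adding no request latency. A minimal sketch of the datagram format the daemon accepts on UDP port 2000 (the segment field values here are illustrative):

```python
import json
import uuid

def make_xray_datagram(name, trace_id, start_time, end_time):
    """Build the UDP datagram the X-Ray daemon accepts on 127.0.0.1:2000:
    a JSON header line, a newline, then the segment document."""
    segment = {
        "name": name,
        "id": uuid.uuid4().hex[:16],   # 16-hex-digit segment id
        "trace_id": trace_id,
        "start_time": start_time,
        "end_time": end_time,
    }
    header = {"format": "json", "version": 1}
    return (json.dumps(header) + "\n" + json.dumps(segment)).encode("utf-8")

# The application would fire-and-forget this datagram, e.g.:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(datagram, ("127.0.0.1", 2000))
```

In practice the AWS X-Ray SDK produces these documents for you; the sketch only shows why handing segments to the daemon (option C) is cheaper per request than calling the X-Ray API synchronously (option B).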
A company is divided into teams. Each team has an AWS account, and all the accounts are in an organization in AWS Organizations. Each team must retain full administrative rights to its AWS account. Each team also must be allowed to access only AWS services that the company approves for use. AWS services must gain approval through a request and approval process.
How should a DevOps engineer configure the accounts to meet these requirements?
A. Use AWS CloudFormation StackSets to provision IAM policies in each account to deny access to restricted AWS services. In each account configure AWS Config rules that ensure that the policies are attached to IAM principals in the account.
B. Use AWS Control Tower to provision the accounts into OUs within the organization. Configure AWS Control Tower to enable AWS IAM Identity Center (AWS Single Sign-On). Configure IAM Identity Center to provide administrative access. Include deny policies on user roles for restricted AWS services.
C. Place all the accounts under a new top-level OU within the organization. Create an SCP that denies access to restricted AWS services. Attach the SCP to the OU.
D. Create an SCP that allows access to only approved AWS services. Attach the SCP to the root OU of the organization. Remove the FullAWSAccess SCP from the root OU of the organization.
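Option D works because an SCP acts as a permissions boundary for every principal in the account, including its administrators, without revoking the teams' in-account admin rights; once the default FullAWSAccess SCP is removed, anything not explicitly allowed is denied. A sketch of such an allow-list SCP (the approved service list is an illustrative placeholder, not a recommendation):

```python
import json

# Hypothetical list of services the approval process has cleared.
APPROVED_SERVICES = ["ec2:*", "s3:*", "rds:*", "cloudwatch:*", "logs:*"]

def build_allowlist_scp(approved_actions):
    """Return an SCP that permits only the approved actions; with
    FullAWSAccess detached, everything else is implicitly denied."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": approved_actions, "Resource": "*"}
        ],
    }

scp_json = json.dumps(build_allowlist_scp(APPROVED_SERVICES), indent=2)
```

When a new service is approved, it is appended to the allow list and the SCP is updated, which matches the request-and-approval workflow in the question.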
A company wants to use AWS development tools to replace its current bash deployment
scripts. The company currently deploys a LAMP application to a group of Amazon EC2
instances behind an Application Load Balancer (ALB). During the deployments, the
company unit tests the committed application, stops and starts services, unregisters and
re-registers instances with the load balancer, and updates file permissions. The company
wants to maintain the same deployment functionality through the shift to using AWS
services.
Which solution will meet these requirements?
A. Use AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy's appspec.yml file to restart services, and deregister and register instances with the ALB. Use the appspec.yml file to update file permissions without a custom script.
B. Use AWS CodePipeline to move the application from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy's deployment group to test the application, unregister and re-register instances with the ALB, and restart services. Use the appspec.yml file to update file permissions without a custom script.
C. Use AWS CodePipeline to move the application source code from the AWS CodeCommit repository to AWS CodeDeploy. Use CodeDeploy to test the application. Use CodeDeploy's appspec.yml file to restart services and update permissions without a custom script. Use AWS CodeBuild to unregister and re-register instances with the ALB.
D. Use AWS CodePipeline to trigger AWS CodeBuild to test the application. Use bash scripts invoked by AWS CodeDeploy's appspec.yml file to restart services. Unregister and re-register the instances in the AWS CodeDeploy deployment group with the ALB. Update the appspec.yml file to update file permissions without a custom script.
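Option D maps each bash step onto a CodeDeploy mechanism: CodeBuild runs the unit tests, lifecycle hooks in appspec.yml invoke the service scripts, CodeDeploy itself deregisters and re-registers instances when the deployment group is associated with the ALB target group, and the `permissions` section replaces a chmod script. A sketch of such an appspec.yml (script paths and file locations are hypothetical):

```yaml
version: 0.0
os: linux
files:
  - source: /app
    destination: /var/www/html
permissions:
  - object: /var/www/html
    owner: www-data
    mode: 644
    type:
      - file
hooks:
  ApplicationStop:
    - location: scripts/stop_services.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_services.sh
      timeout: 60
```

The BlockTraffic and AllowTraffic steps that handle load balancer registration run automatically during the deployment and need no scripts in the hooks section.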
A company is hosting a web application in an AWS Region. For disaster recovery
purposes, a second region is being used as a standby. Disaster recovery requirements
state that session data must be replicated between regions in near-real time and 1% of
requests should route to the secondary region to continuously verify system functionality.
Additionally, if there is a disruption in service in the main region, traffic should be
automatically routed to the secondary region, and the secondary region must be able to
scale up to handle all traffic.
How should a DevOps engineer meet these requirements?
A. In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.
B. In both regions, launch the application in Auto Scaling groups and use DynamoDB for session data. Use a Route 53 failover routing policy with health checks to distribute the traffic across the regions.
C. In both regions, deploy the application in AWS Lambda, exposed by Amazon API Gateway, and use Amazon RDS for PostgreSQL with cross-region replication for session data. Deploy the web application with client-side logic to call the API Gateway directly.
D. In both regions, launch the application in Auto Scaling groups and use DynamoDB global tables for session data. Enable an Amazon CloudFront weighted distribution across regions. Point the Amazon Route 53 DNS record at the CloudFront distribution.
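Option A satisfies every requirement: DynamoDB global tables give near-real-time cross-Region session replication, and a Route 53 weighted policy with health checks both sends the continuous 1% of verification traffic to the standby and shifts all traffic there when the primary's health check fails. A sketch of the weighted record set pair, expressed as the `ChangeResourceRecordSets` change batch it would produce (domain names and health check IDs are placeholders):

```python
def build_weighted_records(domain, primary_value, secondary_value,
                           primary_hc_id, secondary_hc_id):
    """Weighted routing: 99% of traffic to the primary Region, 1% to the
    standby, each record tied to a health check so an outage in one
    Region drops its record out of rotation automatically."""
    def record(set_id, weight, value, health_check_id):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": value}],
                "HealthCheckId": health_check_id,
            },
        }
    return {"Changes": [
        record("primary", 99, primary_value, primary_hc_id),
        record("secondary", 1, secondary_value, secondary_hc_id),
    ]}
```

The batch would be passed to the Route 53 `ChangeResourceRecordSets` API; the 99/1 weight split is what implements the "1% of requests" verification requirement.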
A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ
DB cluster as the database. A cross-Region read replica has been created for disaster
recovery purposes. A DevOps engineer wants to automate the promotion of the replica so
it becomes the primary database instance in the event of a failure.
Which solution will accomplish this?
A. Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to invoke an AWS Lambda function that will promote the replica instance as the primary.
B. Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
C. Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to invoke this Lambda function after the failure event occurs.
D. Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.
Explanation: EventBridge is needed to detect the database failure. Lambda is needed to promote the replica because it is in another Region (otherwise, promotion would have to be done manually). Storing and updating the endpoint in Parameter Store is what lets the application discover the newly promoted instance.
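A minimal sketch of the Lambda function in option D, with the RDS and SSM clients injected so the flow is visible (the cluster identifier and parameter name are hypothetical; a real function would create boto3 clients and wait for the promotion to complete before reading the endpoint):

```python
def promote_and_update_endpoint(event, rds, ssm,
                                replica_id="example-replica-cluster",
                                param_name="/app/db/endpoint"):
    """Invoked by EventBridge on a failure event: promote the
    cross-Region Aurora replica, then publish the new writer endpoint
    to Parameter Store, where the application re-reads it whenever a
    database connection fails."""
    rds.promote_read_replica_db_cluster(DBClusterIdentifier=replica_id)
    cluster = rds.describe_db_clusters(
        DBClusterIdentifier=replica_id)["DBClusters"][0]
    ssm.put_parameter(Name=param_name, Value=cluster["Endpoint"],
                      Type="String", Overwrite=True)
    return cluster["Endpoint"]
```

The application side is the final piece: on a connection error it calls `ssm.get_parameter(Name="/app/db/endpoint")` and reconnects to whatever endpoint is stored there.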
A company uses Amazon RDS for all databases in its AWS accounts. The company uses AWS Control Tower to build a landing zone that has an audit and logging account. All databases must be encrypted at rest for compliance reasons. The company's security engineer needs to receive notification about any noncompliant databases that are in the company's accounts.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use AWS Control Tower to activate the optional detective control (guardrail) to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the company's audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
B. Use AWS CloudFormation StackSets to deploy AWS Lambda functions to every account. Write the Lambda function code to determine whether the RDS storage is encrypted in the account the function is deployed to. Send the findings as an Amazon CloudWatch metric to the management account. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create a CloudWatch alarm that notifies the SNS topic when metric thresholds are met. Subscribe the security engineer's email address to the SNS topic.
C. Create a custom AWS Config rule in every account to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
D. Launch an Amazon EC2 instance. Run an hourly cron job by using the AWS CLI to determine whether the RDS storage is encrypted in each AWS account. Store the results in an RDS database. Notify the security engineer by sending email messages from the EC2 instance when noncompliance is detected.
Explanation:
1. Activate the AWS Control Tower detective control (guardrail) that checks whether RDS storage is encrypted.
2. Create an SNS topic for notifications in the audit account.
3. Create an EventBridge rule that filters the noncompliant events and targets the SNS topic.
4. Subscribe the security engineer's email address to the SNS topic.
By using AWS Control Tower to activate a detective guardrail and setting up SNS notifications for noncompliant events, the company can efficiently monitor and ensure that all RDS databases are encrypted at rest.
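Control Tower detective guardrails are implemented as managed AWS Config rules, so the noncompliant events surface as Config compliance-change events. A sketch of the EventBridge event pattern the rule in step 3 might use (the rule-name prefix is an assumption about how the managed guardrail rules are named in a given landing zone):

```python
def compliance_change_pattern(rule_name_prefix="AWSControlTower"):
    """EventBridge pattern matching AWS Config compliance-change events
    and keeping only NON_COMPLIANT evaluation results, so the SNS topic
    is notified only when a database violates the guardrail."""
    return {
        "source": ["aws.config"],
        "detail-type": ["Config Rules Compliance Change"],
        "detail": {
            "configRuleName": [{"prefix": rule_name_prefix}],
            "newEvaluationResult": {
                "complianceType": ["NON_COMPLIANT"],
            },
        },
    }
```

The rule's target would be the SNS topic in the audit account, completing steps 2 through 4 with no custom code to maintain.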
A company runs its container workloads in AWS App Runner. A DevOps engineer
manages the company's container repository in Amazon Elastic Container Registry
(Amazon ECR).
The DevOps engineer must implement a solution that continuously monitors the container repository. The solution must create a new container image when it detects an operating system vulnerability or language package vulnerability.
Which solution will meet these requirements?
A. Use EC2 Image Builder to create a container image pipeline. Use Amazon ECR as the target repository. Turn on enhanced scanning on the ECR repository. Create an Amazon EventBridge rule to capture an Inspector2 finding event. Use the event to invoke the image pipeline. Re-upload the container to the repository.
B. Use EC2 Image Builder to create a container image pipeline. Use Amazon ECR as the target repository. Enable Amazon GuardDuty Malware Protection on the container workload. Create an Amazon EventBridge rule to capture a GuardDuty finding event. Use the event to invoke the image pipeline.
C. Create an AWS CodeBuild project to create a container image. Use Amazon ECR as the target repository. Turn on basic scanning on the repository. Create an Amazon EventBridge rule to capture an ECR image action event. Use the event to invoke the CodeBuild project. Re-upload the container to the repository.
D. Create an AWS CodeBuild project to create a container image. Use Amazon ECR as the target repository. Configure AWS Systems Manager Compliance to scan all managed nodes. Create an Amazon EventBridge rule to capture a configuration compliance state change event. Use the event to invoke the CodeBuild project.
Explanation:
The solution that meets the requirements is to use EC2 Image Builder to create a container
image pipeline, use Amazon ECR as the target repository, turn on enhanced scanning on
the ECR repository, create an Amazon EventBridge rule to capture an Inspector2 finding
event, and use the event to invoke the image pipeline. Re-upload the container to the
repository.
This solution will continuously monitor the container repository for vulnerabilities using
enhanced scanning, which is a feature of Amazon ECR that provides detailed information
and guidance on how to fix security issues found in your container images. Enhanced
scanning uses Inspector2, a security assessment service that integrates with Amazon ECR
and generates findings for any vulnerabilities detected in your images. You can use
Amazon EventBridge to create a rule that triggers an action when an Inspector2 finding
event occurs. The action can be to invoke an EC2 Image Builder pipeline, which is a
service that automates the creation of container images. The pipeline can use the latest
patches and updates to build a new container image and upload it to the same ECR
repository, replacing the vulnerable image.
The other options are not correct because they do not meet all the requirements or use
services that are not relevant for the scenario.
Option B is not correct because it uses Amazon GuardDuty Malware Protection, which is a
feature of GuardDuty that detects malicious activity and unauthorized behavior on your
AWS accounts and resources. GuardDuty does not scan container images for
vulnerabilities, nor does it integrate with Amazon ECR or EC2 Image Builder.
Option C is not correct because it uses basic scanning on the ECR repository, which only
provides a summary of the vulnerabilities found in your container images. Basic scanning
does not use Inspector2 or generate findings that can be captured by Amazon
EventBridge. Moreover, basic scanning does not provide guidance on how to fix the
vulnerabilities.
Option D is not correct because it uses AWS Systems Manager Compliance, which is a
feature of Systems Manager that helps you monitor and manage the compliance status of
your AWS resources based on AWS Config rules and AWS Security Hub standards.
Systems Manager Compliance does not scan container images for vulnerabilities, nor does
it integrate with Amazon ECR or EC2 Image Builder.
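The EventBridge piece of option A filters for Amazon Inspector (Inspector2) finding events, which enhanced scanning emits for each vulnerability it detects. A sketch of such an event pattern (the severity filter is an illustrative assumption; a team might rebuild on any finding):

```python
def inspector_finding_pattern(severities=("HIGH", "CRITICAL")):
    """EventBridge pattern for Inspector2 findings produced by ECR
    enhanced scanning, narrowed to the listed severities. The rule's
    target would be the EC2 Image Builder pipeline that rebuilds and
    re-uploads the image."""
    return {
        "source": ["aws.inspector2"],
        "detail-type": ["Inspector2 Finding"],
        "detail": {"severity": list(severities)},
    }
```

Because the rebuilt image pulls in the latest base-image and package updates, each finding-triggered pipeline run replaces the vulnerable image in the same ECR repository.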
A development team is using AWS CodeCommit to version control application code and
AWS CodePipeline to orchestrate software deployments. The team has decided to use a
remote main branch as the trigger for the pipeline to integrate code changes. A developer
has pushed code changes to the CodeCommit repository, but noticed that the pipeline had
no reaction, even after 10 minutes.
Which of the following actions should be taken to troubleshoot this issue?
A. Check that an Amazon EventBridge rule has been created for the main branch to trigger the pipeline.
B. Check that the CodePipeline service role has permission to access the CodeCommit repository.
C. Check that the developer’s IAM role has permission to push to the CodeCommit repository.
D. Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.
Explanation: When you create a pipeline through the CodePipeline console wizard, it creates an Amazon EventBridge rule for the given branch and repository, similar to this:
{
  "source": [
    "aws.codecommit"
  ],
  "detail-type": [
    "CodeCommit Repository State Change"
  ],
  "resources": [
    "arn:aws:codecommit:us-east-1:xxxxx:repo-name"
  ],
  "detail": {
    "event": [
      "referenceCreated",
      "referenceUpdated"
    ],
    "referenceType": [
      "branch"
    ],
    "referenceName": [
      "main"
    ]
  }
}
To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses in a public subnet. A user data script obtains the application artifacts and installs them on the instances upon launch. A change to the security classification of the application now requires the instances to run with no access to the internet. While the instances launch successfully and show as healthy, the application does not seem to be installed.
Which of the following should successfully install the application while complying with the
new rule?
A. Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
B. Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.
C. Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
D. Create a security group for the application instances and allow only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.
Explanation: EC2 instances running in private subnets of a VPC can have controlled access to S3 buckets, objects, and API functions that are in the same Region as the VPC. You can use an S3 bucket policy to indicate which VPCs and VPC endpoints have access to your S3 buckets.
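A sketch of such a bucket policy, built as the dict it would serialize from (bucket name and endpoint ID are placeholders): the deny statement makes object reads fail unless the request arrives through the designated gateway VPC endpoint, so the artifacts are reachable only from inside the VPC.

```python
def build_vpce_bucket_policy(bucket, vpce_id):
    """S3 bucket policy restricting GetObject to one VPC endpoint."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowReadsViaVpcEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": vpce_id}
            },
        }],
    }
```

The instances themselves still need an IAM instance profile granting `s3:GetObject` on the bucket, as option C states; the bucket policy and the instance role work together.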
A company is using AWS CodePipeline to deploy an application. According to a new
guideline, a member of the company's security team must sign off on any application
changes before the changes are deployed into production. The approval must be recorded
and retained.
Which combination of actions will meet these requirements? (Select TWO.)
A. Configure CodePipeline to write actions to Amazon CloudWatch Logs.
B. Configure CodePipeline to write actions to an Amazon S3 bucket at the end of each pipeline stage.
C. Create an AWS CloudTrail trail to deliver logs to Amazon S3.
D. Create a CodePipeline custom action to invoke an AWS Lambda function for approval. Create a policy that gives the security team access to manage CodePipeline custom actions.
E. Create a CodePipeline manual approval action before the deployment step. Create a policy that grants the security team access to approve manual approval stages.
Explanation: To meet the new guideline for application deployment, the company can use a combination of AWS CodePipeline and AWS CloudTrail. A manual approval action in CodePipeline allows the security team to review and approve changes before they are deployed. This action can be configured to pause the pipeline until approval is granted, ensuring that no changes move to production without the necessary sign-off. Additionally, by creating an AWS CloudTrail trail, all actions taken within CodePipeline, including approvals, are recorded and delivered to an Amazon S3 bucket. This provides an audit trail that can be retained for compliance and review purposes.
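A sketch of the manual approval action declaration that would sit before the deployment stage's deploy action (the action name, SNS topic ARN, and message are placeholders); the security team's `codepipeline:PutApprovalResult` call on this action is what CloudTrail records for retention:

```python
def manual_approval_action(topic_arn, custom_message):
    """CodePipeline manual-approval action declaration: the pipeline
    pauses here and notifies the SNS topic until a reviewer with
    PutApprovalResult permission approves or rejects the change."""
    return {
        "Name": "SecurityApproval",
        "ActionTypeId": {
            "Category": "Approval",
            "Owner": "AWS",
            "Provider": "Manual",
            "Version": "1",
        },
        "Configuration": {
            "NotificationArn": topic_arn,
            "CustomData": custom_message,
        },
        "RunOrder": 1,
    }
```

The IAM policy for the security team would then allow `codepipeline:GetPipeline*` and `codepipeline:PutApprovalResult` scoped to this pipeline's approval action.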
A company has chosen AWS to host a new application. The company needs to implement
a multi-account strategy. A DevOps engineer creates a new AWS account and an
organization in AWS Organizations. The DevOps engineer also creates the OU structure
for the organization and sets up a landing zone by using AWS Control Tower.
The DevOps engineer must implement a solution that automatically deploys resources for
new accounts that users create through AWS Control Tower Account Factory. When a user
creates a new account, the solution must apply AWS CloudFormation templates and SCPs
that are customized for the OU or the account to automatically deploy all the resources that
are attached to the account. All the OUs are enrolled in AWS Control Tower.
Which solution will meet these requirements in the MOST automated way?
A. Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.
B. Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy stack instances to the required accounts. Deploy a CloudFormation stack set to the organization’s management account to deploy SCPs.
C. Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.
D. Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.
Explanation: The CfCT solution is designed for the exact purpose stated in the question. It
extends the capabilities of AWS Control Tower by providing you with a way to automate
resource provisioning and apply custom configurations across all AWS accounts created in
the Control Tower environment. This enables the company to implement additional account
customizations when new accounts are provisioned via the Control Tower Account Factory.
The CloudFormation templates and SCPs can be added to a CodeCommit repository and
will be automatically deployed to new accounts when they are created. This provides a
highly automated solution that does not require manual intervention to deploy resources
and SCPs to new accounts.
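In CfCT, the customizations live in a manifest in the source repository that maps templates and SCP documents to deployment targets. A sketch of what such a manifest might look like (file paths, OU names, and the home Region are placeholders for this scenario):

```yaml
# manifest.yaml (CfCT v2 schema)
region: us-east-1
version: 2021-03-15
resources:
  - name: baseline-resources
    resource_file: templates/baseline.yaml
    deploy_method: stack_set          # CloudFormation stack set
    deployment_targets:
      organizational_units:
        - Workloads
  - name: restrict-services-scp
    resource_file: policies/restrict-services.json
    deploy_method: scp                # service control policy
    deployment_targets:
      organizational_units:
        - Workloads
```

When Account Factory creates an account in a targeted OU, the CfCT pipeline applies these resources to it automatically, with no manual step.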
A DevOps engineer is building an application that uses an AWS Lambda function to query
an Amazon Aurora MySQL DB cluster. The Lambda function performs only read queries.
Amazon EventBridge events invoke the Lambda function.
As more events invoke the Lambda function each second, the database's latency
increases and the database's throughput decreases. The DevOps engineer needs to
improve the performance of the application.
Which combination of steps will meet these requirements? (Select THREE.)
A. Use Amazon RDS Proxy to create a proxy. Connect the proxy to the Aurora cluster reader endpoint. Set a maximum connections percentage on the proxy.
B. Implement database connection pooling inside the Lambda code. Set a maximum number of connections on the database connection pool.
C. Implement the database connection opening outside the Lambda event handler code.
D. Implement the database connection opening and closing inside the Lambda event handler code.
E. Connect to the proxy endpoint from the Lambda function.
F. Connect to the Aurora cluster endpoint from the Lambda function.
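The correct combination is A, C, and E: RDS Proxy pools and shares connections in front of the reader endpoint, and the Lambda function both connects to the proxy endpoint and opens its connection outside the event handler so warm invocations reuse it. A sketch of the connection-reuse pattern (the proxy endpoint name is hypothetical, and `connect` stands in for a real driver call such as `pymysql.connect(host=PROXY_ENDPOINT, ...)`):

```python
# Hypothetical RDS Proxy endpoint fronting the Aurora reader endpoint.
PROXY_ENDPOINT = "example-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

_connection = None  # module scope: survives across warm invocations

def get_connection(connect):
    """Create the connection once per Lambda execution environment and
    cache it, instead of opening and closing one per event (option D,
    which is what degrades throughput under load)."""
    global _connection
    if _connection is None:
        _connection = connect()
    return _connection

def handler(event, context, connect=lambda: object()):
    # Inside the handler the connection is fetched, not opened.
    conn = get_connection(connect)
    return {"reused": conn is get_connection(connect)}
```

With the proxy enforcing a maximum connections percentage (option A), bursts of EventBridge invocations queue for pooled connections instead of exhausting the database.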