A company provides an application to customers. The application has an Amazon API
Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda
function loads a large amount of data from an Amazon DynamoDB table. The data load
process results in long cold-start times of 8-10 seconds. The DynamoDB table has
DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to
requests. The application receives thousands of requests throughout the day. In the middle
of the day, the application experiences 10 times more requests than at any other time of
the day. Near the end of the day, the application's request volume decreases to 10% of its
normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the
day.
Which solution will meet these requirements?
A. Configure provisioned concurrency on the Lambda function with a concurrency value of
1. Delete the DAX cluster for the DynamoDB table.
B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
Explanation: The following are the steps that the DevOps engineer should take to reduce
the latency of the Lambda function at all times of the day:
Configure provisioned concurrency on the Lambda function.
Configure AWS Application Auto Scaling on the Lambda function with provisioned
concurrency values set to a minimum of 1 and a maximum of 100.
The provisioned concurrency setting ensures that a minimum number of initialized Lambda execution environments is always available to handle requests, which eliminates cold starts for those environments. The Application Auto Scaling setting automatically scales the provisioned concurrency up or down based on the demand for the application.
This solution will ensure that the Lambda function is able to handle the increased load
during the middle of the day, while also keeping the cold-start latency low.
The following are the reasons why the other options are not correct:
Option A is incorrect because a fixed provisioned concurrency value of 1 cannot absorb the tenfold midday spike in requests, and deleting the DAX cluster would make the data load during initialization even slower.
Option B is incorrect because a reserved concurrency value of 0 prevents the Lambda function from being invoked at all.
Option D is incorrect because reserved concurrency is a Lambda setting, not something that Application Auto Scaling can configure on an API Gateway API, and it does nothing to reduce cold-start latency.
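A minimal sketch of the option C configuration in Python with boto3, assuming the function is published behind an alias named live; the function name, alias, and target utilization value are hypothetical:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the alias's provisioned concurrency as a scalable target
# (minimum 1, maximum 100), matching option C.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:customer-api:live",            # hypothetical function:alias
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Target-tracking policy that adds or removes provisioned concurrency as
# utilization rises and falls throughout the day.
autoscaling.put_scaling_policy(
    PolicyName="provisioned-concurrency-tracking",
    ServiceNamespace="lambda",
    ResourceId="function:customer-api:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,                              # hypothetical utilization target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)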
A company uses an AWS CodeCommit repository to store its source code and
corresponding unit tests. The company has configured an AWS CodePipeline pipeline that
includes an AWS CodeBuild project that runs when code is merged to the main branch of
the repository.
The company wants the CodeBuild project to run the unit tests. If the unit tests pass, the
CodeBuild project must tag the most recent commit.
How should the company configure the CodeBuild project to meet these requirements?
A. Configure the CodeBuild project to use native Git to clone the CodeCommit repository.
Configure the project to run the unit tests. Configure the project to use native Git to create a
tag and to push the Git tag to the repository if the code passes the unit tests.
B. Configure the CodeBuild project to use native Git to clone the CodeCommit repository.
Configure the project to run the unit tests. Configure the project to use AWS CLI
commands to create a new repository tag in the repository if the code passes the unit tests.
C. Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new Git tag in the repository if the code passes the unit tests.
D. Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new repository tag in the repository if the code passes the unit tests.
Explanation:
Step 1: Using Native Git in CodeBuild. To meet the requirement of running unit tests and tagging the most recent commit if the tests pass, the CodeBuild project should be configured to use native Git to clone the CodeCommit repository. Native Git support provides full functionality for managing the repository, including the ability to create and push tags.
Step 2: Tagging the Most Recent Commit. Once the unit tests pass, the CodeBuild project can use native Git to create a tag for the most recent commit and push that tag to the repository. This ensures that the tagged commit is linked to the test results.
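A minimal sketch of the post-test tagging step described above, written in Python around native Git commands. It assumes the CodeBuild project cloned the repository with full Git metadata, that the build role can push to CodeCommit (for example, through the git-remote-codecommit helper), and that the tag naming scheme is hypothetical:

import datetime
import subprocess

def run(*args: str) -> str:
    # Run a Git command in the CodeBuild source directory and return its output.
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout.strip()

# CodeBuild reaches this step only after the unit tests have already passed.
commit = run("git", "rev-parse", "HEAD")  # the most recent commit on the merged branch
tag = "tests-passed-" + datetime.datetime.utcnow().strftime("%Y%m%d%H%M%S")

run("git", "tag", "-a", tag, commit, "-m", "Unit tests passed")
run("git", "push", "origin", tag)  # push the new tag back to the CodeCommit repository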
A company is migrating from its on-premises data center to AWS. The company currently
uses a custom on-premises CI/CD pipeline solution to build and package software.
The company wants its software packages and dependent public repositories to be
available in AWS CodeArtifact to facilitate the creation of application-specific pipelines.
Which combination of steps should the company take to update the CI/CD pipeline solution
and to configure CodeArtifact with the LEAST operational overhead? (Select TWO.)
A. Update the CI/CD pipeline to create a VM image that contains newly packaged software. Use AWS Import/Export to make the VM image available as an Amazon EC2 AMI. Launch the AMI with an attached IAM instance profile that allows CodeArtifact actions. Use AWS CLI commands to publish the packages to a CodeArtifact repository.
B. Create an AWS Identity and Access Management Roles Anywhere trust anchor. Create an IAM role that allows CodeArtifact actions and that has a trust relationship on the trust anchor. Update the on-premises CI/CD pipeline to assume the new IAM role and to publish the packages to CodeArtifact.
C. Create a new Amazon S3 bucket. Generate a presigned URL that allows the PutObject
request. Update the on-premises CI/CD pipeline to use the
presigned URL to publish the packages from the on-premises location to the S3 bucket.
Create an AWS Lambda function that runs when packages are created in the bucket through a put command. Configure the Lambda function to publish the packages to CodeArtifact.
D. For each public repository, create a CodeArtifact repository that is configured with an external connection. Configure the dependent repositories as upstream public repositories.
E. Create a CodeArtifact repository that is configured with a set of external connections to the public repositories. Configure the external connections to be downstream of the repository.
Explanation:
Create an AWS Identity and Access Management Roles Anywhere trust anchor. Create an IAM role that allows CodeArtifact actions and that has a trust relationship with the trust anchor. Update the on-premises CI/CD pipeline to assume the new IAM role and to publish the packages to CodeArtifact. Roles Anywhere allows on-premises servers to assume IAM roles, which makes it easier to integrate on-premises environments with AWS services.
Create a new Amazon S3 bucket. Generate a presigned URL that allows the PutObject request. Update the on-premises CI/CD pipeline to use the presigned URL to publish the packages from the on-premises location to the S3 bucket. Create an AWS Lambda function that runs when packages are created in the bucket through a put command. Configure the Lambda function to publish the packages to CodeArtifact. Using an S3 bucket as an intermediary, you can easily upload packages from on-premises systems.
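A minimal sketch, in Python with boto3, of the final publish step the pipeline would run once it holds AWS credentials (for example, temporary credentials obtained through the IAM Roles Anywhere credential helper configured as a credential_process). It assumes the CodeArtifact generic package format and its PublishPackageVersion operation; the domain, repository, and package names are hypothetical:

import hashlib

import boto3

# boto3 picks up the credentials automatically from the standard credential
# chain; no keys are hard-coded here.
codeartifact = boto3.client("codeartifact", region_name="us-east-1")

artifact_path = "dist/myapp-1.0.0.tar.gz"  # hypothetical build artifact
with open(artifact_path, "rb") as artifact:
    data = artifact.read()

codeartifact.publish_package_version(
    domain="example-domain",
    repository="example-repo",
    format="generic",                       # generic packages require a namespace
    namespace="myapp",
    package="myapp",
    packageVersion="1.0.0",
    assetName="myapp-1.0.0.tar.gz",
    assetSHA256=hashlib.sha256(data).hexdigest(),  # integrity check required by the API
    assetContent=data,
)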
A company hosts its staging website using an Amazon EC2 instance backed with Amazon
EBS storage. The company wants to recover quickly with minimal data losses in the event
of network connectivity issues or power failures on the EC2 instance.
Which solution will meet these requirements?
A. Add the instance to an EC2 Auto Scaling group with the minimum, maximum, and desired capacity set to 1.
B. Add the instance to an EC2 Auto Scaling group with a lifecycle hook to detach the EBS volume when the EC2 instance shuts down or terminates.
C. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric and select the EC2 action to recover the instance.
D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_Instance metric and select the EC2 action to reboot the instance.
A DevOps team has created a custom Lambda rule in AWS Config. The rule monitors Amazon Elastic Container Registry (Amazon ECR) policy statements for ecr:* actions. When a noncompliant repository is detected, Amazon EventBridge uses Amazon Simple Notification Service (Amazon SNS) to route the notification to a security team.
When the custom AWS Config rule is evaluated, the AWS Lambda function fails to run.
Which solution will resolve the issue?
A. Modify the Lambda function's resource policy to grant AWS Config permission to invoke the function.
B. Modify the SNS topic policy to include configuration changes for EventBridge to publish to the SNS topic.
C. Modify the Lambda function's execution role to include configuration changes for custom AWS Config rules.
D. Modify all the ECR repository policies to grant AWS Config access to the necessary ECR API actions.
Explanation: Step 1: Understanding Lambda Permissions and AWS Config. The custom AWS Config rule evaluates resources and invokes an AWS Lambda function when a compliance check is triggered. For AWS Config to invoke the Lambda function, it requires permission to do so.
Issue: The Lambda function fails to execute because AWS Config doesn't have permission to invoke it.
Action: Modify the resource-based policy of the Lambda function to grant AWS Config permission to invoke the Lambda function.
Why: Without this permission, AWS Config cannot trigger the Lambda function, which is why the evaluation fails.
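A minimal sketch, in Python with boto3, of the resource-based policy change; the function name, statement ID, and account ID are hypothetical placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Grant the AWS Config service principal permission to invoke the custom-rule function.
lambda_client.add_permission(
    FunctionName="ecr-policy-compliance-check",   # hypothetical function name
    StatementId="AllowConfigInvocation",
    Action="lambda:InvokeFunction",
    Principal="config.amazonaws.com",
    SourceAccount="111122223333",                 # restricts invocation to rules in this account
)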
A company recently deployed its web application on AWS. The company is preparing for a large-scale sales event and must ensure that the web application can scale to meet the demand.
The application's frontend infrastructure includes an Amazon CloudFront distribution that has an Amazon S3 bucket as an origin. The backend infrastructure includes an Amazon API Gateway API, several AWS Lambda functions, and an Amazon Aurora DB cluster.
The company's DevOps engineer conducts a load test and identifies that the Lambda functions can fulfill the peak number of requests. However, the DevOps engineer notices request latency during the initial burst of requests. Most of the requests to the Lambda functions produce queries to the database. A large portion of the invocation time is used to establish database connections.
Which combination of steps will provide the application with the required scalability? (Select
TWO.)
A. Configure a higher reserved concurrency for the Lambda functions.
B. Configure a higher provisioned concurrency for the Lambda functions.
C. Convert the DB cluster to an Aurora global database. Add additional Aurora Replicas in AWS Regions based on the locations of the company's customers.
D. Refactor the Lambda functions. Move the code blocks that initialize database connections into the function handlers.
E. Use Amazon RDS Proxy to create a proxy for the Aurora database. Update the Lambda functions to use the proxy endpoints for database connections.
Explanation: The correct answers are B and E. Configuring a higher provisioned concurrency for the Lambda functions ensures that initialized execution environments are ready to respond to the initial burst of requests without cold-start latency. Using Amazon RDS Proxy to create a proxy for the Aurora database lets the Lambda functions reuse pooled database connections instead of paying the overhead of establishing new ones on each invocation. RDS Proxy also improves the scalability and availability of the database by managing the connection pool and handling failovers. Option A is incorrect because reserved concurrency only limits the number of concurrent executions for a function; it does not pre-initialize execution environments. Option C is incorrect because converting the DB cluster to an Aurora global database does not address the connection-establishment latency and adds cost and complexity. Option D is incorrect because moving the connection-initialization code into the function handlers would create a new database connection on every invocation instead of reusing connections across invocations, making latency worse.
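A minimal sketch of options B and E together, assuming a MySQL-compatible Aurora cluster behind an RDS Proxy endpoint, the pymysql driver packaged with the function, and hypothetical environment variable names, function name, and alias:

import os

import boto3
import pymysql

# Created once per execution environment (outside the handler), so warm and
# provisioned-concurrency environments reuse the connection through the RDS Proxy
# endpoint instead of reconnecting on every invocation.
connection = pymysql.connect(
    host=os.environ["DB_PROXY_ENDPOINT"],  # the RDS Proxy endpoint, not the cluster endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    connect_timeout=5,
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM orders")  # hypothetical query
        (count,) = cursor.fetchone()
    return {"orderCount": count}

# Provisioned concurrency is applied to a published version or alias, for example:
# boto3.client("lambda").put_provisioned_concurrency_config(
#     FunctionName="orders-api", Qualifier="live", ProvisionedConcurrentExecutions=50)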
A development team uses AWS CodeCommit, AWS CodePipeline, and AWS CodeBuild to
develop and deploy an application. Changes to the code are submitted by pull requests.
The development team reviews and merges the pull requests, and then the pipeline builds
and tests the application.
Over time, the number of pull requests has increased. The pipeline is frequently blocked
because of failing tests. To prevent this blockage, the development team wants to run the
unit and integration tests on each pull request before it is merged.
Which solution will meet these requirements?
A. Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit approval rule template. Configure the template to require the successful invocation of the CodeBuild project. Attach the approval rule to the project's CodeCommit repository.
B. Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Create a CodeBuild project to run the unit and integration tests. Configure the CodeBuild project as a target of the EventBridge rule that includes a custom event payload with the CodeCommit repository and branch information from the event.
C. Create an Amazon EventBridge rule to match pullRequestCreated events from CodeCommit. Modify the existing CodePipeline pipeline to not run the deploy steps if the build is started from a pull request. Configure the EventBridge rule to run the pipeline with a custom payload that contains the CodeCommit repository and branch information from the event.
D. Create a CodeBuild project to run the unit and integration tests. Create a CodeCommit notification rule that matches when a pull request is created or updated. Configure the notification rule to invoke the CodeBuild project.
A DevOps engineer is building a multistage pipeline with AWS CodePipeline to build, verify,
stage, test, and deploy an application. A manual approval stage is required between the
test stage and the deploy stage. The development team uses a custom chat tool with
webhook support that requires near-real-time notifications.
How should the DevOps engineer configure status updates for pipeline activity and
approval requests to post to the chat tool?
A. Create an Amazon CloudWatch Logs subscription that filters on CodePipeline Pipeline Execution State Change. Publish subscription events to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the chat webhook URL to the SNS topic, and complete the subscription validation.
B. Create an AWS Lambda function that is invoked by AWS CloudTrail events. When a CodePipeline Pipeline Execution State Change event is detected, send the event details to the chat webhook URL.
C. Create an Amazon EventBridge rule that filters on CodePipeline Pipeline Execution State Change. Publish the events to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function that sends event details to the chat webhook URL. Subscribe the function to the SNS topic.
D. Modify the pipeline code to send the event details to the chat webhook URL at the end of each stage. Parameterize the URL so that each pipeline can send to a different URL based on the pipeline environment.
A company has many applications. Different teams in the company developed the
applications by using multiple languages and frameworks. The applications run on
premises and on different servers with different operating systems. Each team has its own
release protocol and process. The company wants to reduce the complexity of the release
and maintenance of these applications.
The company is migrating its technology stacks, including these applications, to AWS. The
company wants centralized control of source code, a consistent and automatic delivery
pipeline, and as few maintenance tasks as possible on the underlying infrastructure.
What should a DevOps engineer do to meet these requirements?
A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
C. Create one AWS CodeCommit repository for each of the applications. Use AWS
CodeBuild to build the applications one at a time and to create one AMI for each server.
Use AWS CloudFormation StackSets to automatically provision and decommission
Amazon EC2 fleets by using these AMIs.
D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.
Explanation: Option D is correct because of the requirement for "as few maintenance tasks as possible on the underlying infrastructure." AWS Fargate removes management of the underlying servers entirely, which one centralized application server or self-managed EC2 fleets cannot match.
An IT team has built an AWS CloudFormation template so others in the company can
quickly and reliably deploy and terminate an application. The template creates an Amazon
EC2 instance with a user data script to install the application and an Amazon S3 bucket
that the application uses to serve static webpages while it is running.
All resources should be removed when the CloudFormation stack is deleted. However, the
team observes that CloudFormation reports an error during stack deletion, and the S3
bucket created by the stack is not deleted.
How can the team resolve the error in the MOST efficient manner to ensure that all
resources are deleted without errors?
A. Add a DeletionPolicy attribute to the S3 bucket resource, with the value Delete, forcing the bucket to be removed when the stack is deleted.
B. Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete.
C. Identify the resource that was not deleted. Manually empty the S3 bucket and then delete it.
D. Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.
A company runs a workload on Amazon EC2 instances. The company needs a control that
requires the use of Instance Metadata Service Version 2 (IMDSv2) on all EC2 instances in
the AWS account. If an EC2 instance does not prevent the use of Instance Metadata
Service Version 1 (IMDSv1), the EC2 instance must be terminated.
Which solution will meet these requirements?
A. Set up AWS Config in the account. Use a managed rule to check EC2 instances.
Configure the rule to remediate the findings by using AWS Systems Manager Automation
to terminate the instance.
B. Create a permissions boundary that prevents the ec2:RunInstances action if the ec2:MetadataHttpTokens condition key is not set to a value of required. Attach the permissions boundary to the IAM role that was used to launch the instance.
C. Set up Amazon Inspector in the account. Configure Amazon Inspector to activate deep inspection for EC2 instances. Create an Amazon EventBridge rule for an Inspector2 finding. Set an AWS Lambda function as the target to terminate the instance.
D. Create an Amazon EventBridge rule for the EC2 instance launch successful event. Send the event to an AWS Lambda function to inspect the EC2 metadata and to terminate the instance.
Explanation: To implement a control that requires the use of IMDSv2 on all EC2 instances in the account, the DevOps engineer can use a permissions boundary. A permissions boundary is a policy that defines the maximum permissions that an IAM entity can have. The DevOps engineer can create a permissions boundary that prevents the ec2:RunInstances action if the ec2:MetadataHttpTokens condition key is not set to a value of required. This condition key enforces the use of IMDSv2 on EC2 instances. The DevOps engineer can attach the permissions boundary to the IAM role that was used to launch the instance. This way, any attempt to launch an EC2 instance without requiring IMDSv2 will be denied by the permissions boundary.
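A minimal sketch of the permissions boundary described above, in Python with boto3. The policy name, role name, and the choice of an explicit Deny statement are illustrative assumptions; the boundary blocks ec2:RunInstances whenever the launch request does not require IMDSv2 session tokens:

import json

import boto3

iam = boto3.client("iam")

# The boundary leaves the role's normal permissions in place but denies
# ec2:RunInstances unless the launch requires IMDSv2 (HttpTokens = required).
boundary_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotEquals": {"ec2:MetadataHttpTokens": "required"}},
        },
    ],
}

policy = iam.create_policy(
    PolicyName="RequireImdsv2Boundary",           # hypothetical policy name
    PolicyDocument=json.dumps(boundary_document),
)

# Attach the boundary to the role used to launch instances (hypothetical role name).
iam.put_role_permissions_boundary(
    RoleName="ec2-launch-role",
    PermissionsBoundary=policy["Policy"]["Arn"],
)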
A company is developing an application that will generate log events. The log events consist of five distinct metrics every one tenth of a second and produce a large amount of data. The company needs to configure the application to write the logs to Amazon Timestream. The company will configure a daily query against the Timestream table.
Which combination of steps will meet these requirements with the FASTEST query
performance? (Select THREE.)
A. Use batch writes to write multiple log events in a single write operation.
B. Write each log event as a single write operation.
C. Treat each log as a single-measure record.
D. Treat each log as a multi-measure record.
E. Configure the memory store retention period to be longer than the magnetic store retention period.
F. Configure the memory store retention period to be shorter than the magnetic store retention period.