DVA-C02 Practice Test Questions

368 Questions


A company is planning to securely manage one-time fixed license keys in AWS. The company's development team needs to access the license keys in automation scripts that run in Amazon EC2 instances and in AWS CloudFormation stacks. Which solution will meet these requirements MOST cost-effectively?


A. Amazon S3 with encrypted files prefixed with “config”


B. AWS Secrets Manager secrets with a tag that is named SecretString


C. AWS Systems Manager Parameter Store SecureString parameters


D. CloudFormation NoEcho parameters





C.
  AWS Systems Manager Parameter Store SecureString parameters

Explanation: AWS Systems Manager Parameter Store is a service that provides secure, hierarchical storage for configuration data and secrets. Parameter Store supports SecureString parameters, which are encrypted using AWS Key Management Service (AWS KMS) keys. SecureString parameters can be used to store license keys in AWS and retrieve them securely from automation scripts that run in EC2 instances or CloudFormation stacks. Parameter Store is a cost-effective solution because it does not charge for storing parameters or API calls.
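For illustration, here is a minimal sketch of how an automation script on EC2 might read such a parameter with boto3; the parameter name /licenses/product-key is a made-up example:

import boto3

ssm = boto3.client("ssm")

# Retrieve the license key and decrypt it with the KMS key that protects the parameter.
response = ssm.get_parameter(Name="/licenses/product-key", WithDecryption=True)
license_key = response["Parameter"]["Value"]

CloudFormation templates can read the same value through an SSM dynamic reference in the resource properties that support it, so the key never has to appear in the template itself.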

A developer is building an application to process a stream of customer orders. The application sends processed orders to an Amazon Aurora MySQL database. The application needs to process the orders in batches. The developer needs to configure a workflow that ensures each record is processed before the application sends each order to the database. Which solution will meet these requirements?


A. Use Amazon Kinesis Data Streams to stream the orders. Use an AWS Lambda function to process the orders. Configure an event source mapping for the Lambda function, and set the MaximumBatchingWindowInSeconds setting to 300.


B. Use Amazon SQS to stream the orders. Use an AWS Lambda function to process the orders. Configure an event source mapping for the Lambda function, and set the MaximumBatchingWindowInSeconds setting to 0.


C. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the orders. Use an Amazon EC2 instance to process the orders. Configure an event source mapping for the EC2 instance, and increase the payload size limit to 36 MB.


D. Use Amazon DynamoDB Streams to stream the orders. Use an Amazon ECS cluster on AWS Fargate to process the orders. Configure an event source mapping for the cluster, and set the BatchSize setting to 1.





A.
  Use Amazon Kinesis Data Streams to stream the orders. Use an AWS Lambda function to process the orders. Configure an event source mapping for the Lambda function, and set the MaximumBatchingWindowInSeconds setting to 300.

Explanation:
Step 1: Understanding the Problem
Processing in batches: The application must process records in groups.
Sequential processing: Each record in the batch must be processed before the order is written to Aurora.
Solution goals: Use services that support ordered, batched processing and integrate with Aurora.
Step 2: Solution Analysis
Option A: Kinesis Data Streams preserves record order within a shard, and a Lambda event source mapping with a batching window collects records into batches before invoking the function. This meets the requirements.
Option B: Amazon SQS standard queues do not guarantee ordering, and a MaximumBatchingWindowInSeconds of 0 means Lambda does not wait to accumulate records into batches.
Option C: Event source mappings are a Lambda feature and cannot be configured for an EC2 instance, and there is no payload size limit that can be raised to 36 MB.
Option D: DynamoDB Streams capture changes to a DynamoDB table rather than an incoming order stream, Amazon ECS does not support event source mappings, and a BatchSize of 1 defeats batch processing.
Step 3: Implementation Steps for Option A
Set up a Kinesis data stream and send the orders to it.
Configure Lambda with an event source mapping, for example:
{
"EventSourceArn": "arn:aws:kinesis:region:account-id:stream/stream-name",
"BatchSize": 100,
"MaximumBatchingWindowInSeconds": 300
}
Write the processed data to Aurora from the Lambda function, as in the sketch below.
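A minimal sketch of such a handler, assuming the pymysql client is packaged with the function and that a processed_orders table and an orderId field exist (all names are placeholders):

import base64
import json
import pymysql  # assumed to be bundled with the function; any MySQL client would work

# Placeholder connection details; in practice they would come from Secrets Manager
# or Parameter Store rather than being hardcoded.
connection = pymysql.connect(host="aurora-cluster-endpoint", user="app",
                             password="example", database="orders")

def handler(event, context):
    # Each invocation receives up to BatchSize records collected during the batching window.
    with connection.cursor() as cursor:
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Process each order before it is written to Aurora.
            cursor.execute(
                "INSERT INTO processed_orders (order_id, status) VALUES (%s, %s)",
                (payload["orderId"], "PROCESSED"),
            )
    connection.commit()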

A developer is designing a fault-tolerant environment where client sessions will be saved. How can the developer ensure that no sessions are lost if an Amazon EC2 instance fails?


A. Use sticky sessions with an Elastic Load Balancer target group.


B. Use Amazon SQS to save session data.


C. Use Amazon DynamoDB to perform scalable session handling.


D. Use Elastic Load Balancer connection draining to stop sending requests to failing instances.





C.
  Use Amazon DynamoDB to perform scalable session handling.
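Explanation: Storing session state in Amazon DynamoDB keeps it off the individual EC2 instances, so sessions survive an instance failure and the session store scales with traffic. A minimal sketch of session handling with boto3, assuming a table named sessions with a session_id partition key and TTL enabled on an expires_at attribute:

import time
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("sessions")  # placeholder table name

def save_session(session_id, data, ttl_seconds=3600):
    # Persist the session outside the instance; TTL on "expires_at" removes stale sessions.
    table.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,
    })

def load_session(session_id):
    # Any healthy instance can read the session, so an instance failure loses nothing.
    return table.get_item(Key={"session_id": session_id}).get("Item")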

A company is planning to use AWS CodeDeploy to deploy an application to Amazon Elastic Container Service (Amazon ECS). During the deployment of a new version of the application, the company initially must expose only 10% of live traffic to the new version of the deployed application. Then, after 15 minutes elapse, the company must route all the remaining live traffic to the new version of the deployed application. Which CodeDeploy predefined configuration will meet these requirements?


A. CodeDeployDefault.ECSCanary10Percent15Minutes


B. CodeDeployDefault.LambdaCanary10Percent5Minutes


C. CodeDeployDefault.LambdaCanary10Percent15Minutes


D. CodeDeployDefault.ECSLinear10PercentEvery1Minutes





A.
  CodeDeployDefault.ECSCanary10Percent15Minutes

Explanation:
CodeDeploy Predefined Configurations: CodeDeploy offers built-in deployment configurations for common scenarios.
Canary Deployment: Canary deployments gradually shift traffic to a new version, ideal for controlled rollouts like this requirement.
CodeDeployDefault.ECSCanary10Percent15Minutes: This configuration matches the company's requirements, shifting 10% of traffic initially and then completing the rollout after 15 minutes.
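For reference, a hedged sketch of starting such a deployment with boto3; the application name, deployment group, and AppSpec content are placeholders:

import boto3

codedeploy = boto3.client("codedeploy")

response = codedeploy.create_deployment(
    applicationName="ecs-shop-app",
    deploymentGroupName="ecs-shop-dg",
    deploymentConfigName="CodeDeployDefault.ECSCanary10Percent15Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": "<AppSpec for the new ECS task definition>"},
    },
)
print(response["deploymentId"])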

A company has an existing application that has hardcoded database credentials. A developer needs to modify the existing application. The application is deployed in two AWS Regions with an active-passive failover configuration to meet the company's disaster recovery strategy.
The developer needs a solution to store the credentials outside the code. The solution must comply with the company's disaster recovery strategy.
Which solution will meet these requirements in the MOST secure way?


A. Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.


B. Store credentials in AWS Systems Manager Parameter Store in the primary Region. Enable parameter replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.


C. Store credentials in a config file. Upload the config file to an S3 bucket in the primary Region. Enable Cross-Region Replication (CRR) to an S3 bucket in the secondary Region. Update the application to access the config file from the S3 bucket based on the Region.


D. Store credentials in a config file. Upload the config file to an Amazon Elastic File System (Amazon EFS) file system. Update the application to use the Amazon EFS file system Regional endpoints to access the config file in the primary and secondary Regions.





A.
  Store the credentials in AWS Secrets Manager in the primary Region. Enable secret replication to the secondary Region. Update the application to use the Amazon Resource Name (ARN) based on the Region.

Explanation: AWS Secrets Manager is a service that allows you to store and manage secrets, such as database credentials, API keys, and passwords, in a secure and centralized way. It also provides features such as automatic secret rotation, auditing, and monitoring. By using AWS Secrets Manager, you can avoid hardcoding credentials in your code, which is a bad security practice and makes it difficult to update them. You can also replicate your secrets to another Region, which is useful for disaster recovery purposes. To access your secrets from your application, you can use the ARN of the secret, which is a unique identifier that includes the Region name. This way, your application can use the appropriate secret based on the Region where it is deployed.
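A minimal sketch of Region-aware secret retrieval with boto3; the ARNs, account ID, and secret name are placeholders:

import os
import boto3

# After replication, the secret exists in both Regions under Region-specific ARNs.
SECRET_ARNS = {
    "us-east-1": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-credentials-AbCdEf",
    "us-west-2": "arn:aws:secretsmanager:us-west-2:123456789012:secret:db-credentials-AbCdEf",
}

region = os.environ.get("AWS_REGION", "us-east-1")  # assumes the Region is available in the environment
client = boto3.client("secretsmanager", region_name=region)

# Retrieve the replicated secret from the local Region.
secret = client.get_secret_value(SecretId=SECRET_ARNS[region])
credentials = secret["SecretString"]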

A developer is using AWS CodeDeploy to automate a company's application deployments to Amazon EC2. Which application specification file properties are required to ensure the software deployments do not fail? (Select TWO.)


A. The file must be a JSON-formatted file named appspec.json.


B. The file must be a YAML-formatted file named appspec.yml.


C. The file must be stored in AWS CodeBuild and referenced from the application's source code.


D. The file must be placed in the root of the directory structure of the application's source code.


E. The file must be stored in Amazon S3 and referenced from the application's source code.





B.
  The file must be a YAML-formatted file named appspec.yml.

D.
  The file must be placed in the root of the directory structure of the application's source code.

Explanation:
To ensure successful deployments with AWS CodeDeploy to Amazon EC2, the application specification (AppSpec) file must meet specific requirements:
File format requirement (Option B): For EC2/On-Premises deployments, the AppSpec file must be a YAML-formatted file named appspec.yml.
File placement requirement (Option D): The file must be placed in the root of the directory structure of the application's source code so that the CodeDeploy agent can locate it during deployment.
Incorrect options: A JSON-formatted appspec.json (Option A) is used for Lambda and ECS deployments, not EC2. The AppSpec file ships inside the application revision bundle itself, so it is not stored in AWS CodeBuild (Option C) or stored separately in Amazon S3 and referenced from the source code (Option E).

A development team wants to build a continuous integration/continuous delivery (CI/CD) pipeline. The team is using AWS CodePipeline to automate the code build and deployment. The team wants to store the program code to prepare for the CI/CD pipeline. Which AWS service should the team use to store the program code?


A. AWS CodeDeploy


B. AWS CodeArtifact


C. AWS CodeCommit


D. Amazon CodeGuru





C.
  AWS CodeCommit

Explanation: AWS CodeCommit is a service that provides fully managed source control for hosting secure and scalable private Git repositories. The development team can use CodeCommit to store the program code and prepare for the CI/CD pipeline. CodeCommit integrates with other AWS services such as CodePipeline, CodeBuild, and CodeDeploy to automate the code build and deployment process.
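As a sketch, the team could create such a repository with boto3; the repository name and description are placeholders:

import boto3

codecommit = boto3.client("codecommit")

repo = codecommit.create_repository(
    repositoryName="order-service",
    repositoryDescription="Source code for the CI/CD pipeline",
)
# The clone URL becomes the source location for the CodePipeline source stage.
print(repo["repositoryMetadata"]["cloneUrlHttp"])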

An online sales company is developing a serverless application that runs on AWS. The application uses an AWS Lambda function that calculates order success rates and stores the data in an Amazon DynamoDB table. A developer wants an efficient way to invoke the Lambda function every 15 minutes. Which solution will meet this requirement with the LEAST development effort?


A. Create an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes. Add the Lambda function as the target of the EventBridge rule.


B. Create an AWS Systems Manager document that has a script that will invoke the Lambda function on Amazon EC2. Use a Systems Manager Run Command task to run the shell script every 15 minutes.


C. Create an AWS Step Functions state machine. Configure the state machine to invoke the Lambda function execution role at a specified interval by using a Wait state. Set the interval to 15 minutes.


D. Provision a small Amazon EC2 instance. Set up a cron job that invokes the Lambda function every 15 minutes.





A.
  Create an Amazon EventBridge rule that has a rate expression that will run the rule every 15 minutes. Add the Lambda function as the target of the EventBridge rule.

Explanation: The best solution for this requirement is option A. Creating an Amazon EventBridge rule with a rate expression that runs every 15 minutes and adding the Lambda function as the target is the most efficient way to invoke the function periodically. This solution requires no additional infrastructure and minimal development effort, and it leverages the built-in scheduling capabilities of EventBridge.
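A hedged sketch of wiring this up with boto3; the rule name, function name, and ARNs are placeholders:

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Create a scheduled rule that fires every 15 minutes.
rule_arn = events.put_rule(
    Name="order-success-rate-every-15-min",
    ScheduleExpression="rate(15 minutes)",
    State="ENABLED",
)["RuleArn"]

# Point the rule at the Lambda function.
events.put_targets(
    Rule="order-success-rate-every-15-min",
    Targets=[{"Id": "calc-success-rate",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:calc-success-rate"}],
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="calc-success-rate",
    StatementId="allow-eventbridge-schedule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)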

A developer wants to use an AWS AppSync API to invoke AWS Lambda functions to return data. Some of the Lambda functions perform long-running processes. The AWS AppSync API needs to return responses immediately. Which solution will meet these requirements with the LEAST operational overhead?


A. Configure the Lambda functions to be AWS AppSync data sources. Use Event mode for asynchronous Lambda invocation.


B. Increase the timeout setting for the Lambda functions to accommodate longer processing times.


C. Set up an Amazon SQS queue. Configure AWS AppSync to send messages to the SQS queue. Configure a Lambda function event source mapping to poll the queue.


D. Enable caching, and increase the duration of the AWS AppSync cache TTL.





A.
  Configure the Lambda functions to be AWS AppSync data sources. Use Event mode for asynchronous Lambda invocation.

Explanation:
Requirement summary:
The AWS AppSync API needs to invoke Lambda functions.
Some Lambda functions are long-running.
AppSync should return immediately, with minimal operational overhead.
Option A: AppSync with Lambda as a data source, using Event mode
Correct: AWS AppSync supports asynchronous (Event) invocation of Lambda data sources. In Event mode, AppSync invokes the Lambda function asynchronously and returns a response immediately instead of waiting for the function to finish (a configuration sketch follows below).
Option B: Increase the Lambda timeout
Incorrect: This keeps AppSync waiting. Even with an increased timeout, synchronous invocations would still block AppSync responses.
Option C: SQS queue plus a polling Lambda function
Possible, but too complex for this use case. It requires additional infrastructure (a queue, an event source mapping, and custom logic) and carries higher operational overhead than the built-in AppSync Event mode.
Option D: Enable caching in AppSync
Irrelevant: the AppSync cache optimizes repeated read queries; it does not help with asynchronous workflows.
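A sketch of attaching a resolver that uses Event mode, assuming a VTL request mapping template where invocationType is set to Event; the API ID, field, and data source names are placeholders:

import boto3

appsync = boto3.client("appsync")

# With invocationType "Event", AppSync invokes the Lambda data source asynchronously
# and returns immediately instead of waiting for the long-running function.
request_template = """
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "invocationType": "Event",
  "payload": { "arguments": $util.toJson($context.arguments) }
}
"""

appsync.create_resolver(
    apiId="example-api-id",
    typeName="Mutation",
    fieldName="startLongRunningJob",
    dataSourceName="LongRunningLambda",
    requestMappingTemplate=request_template,
    responseMappingTemplate="$util.toJson($context.result)",
)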

A developer needs to modify an application architecture to meet new functional requirements. Application data is stored in Amazon DynamoDB and processed for analysis in a nightly batch. The system analysts do not want to wait until the next day to view the processed data and have asked to have it available in near-real time. Which application architecture pattern would enable the data to be processed as it is received?


A. Event driven


B. Client-server driven


C. Fan-out driven


D. Schedule driven





A.
  Event driven
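Explanation: An event-driven architecture processes each record as it arrives instead of waiting for a scheduled batch, which gives the analysts near-real-time results. For example, a Lambda function attached to the table's DynamoDB stream could process items as they are written; a minimal handler sketch, with the downstream analytics step left as a placeholder:

import json

def handler(event, context):
    # Stream records arrive shortly after the corresponding items are written to the table.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            # Placeholder downstream step: forward the item to the analytics pipeline
            # (for example, Kinesis Data Firehose or a reporting API).
            print("processing item:", json.dumps(new_image))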

A developer received the following error message during an AWS CloudFormation deployment: Which action should the developer take to resolve this error?


A. Contact AWS Support to report an issue with the Auto Scaling Groups (ASG) service.


B. Add a DependsOn attribute to the ASGInstanceRole12345678 resource in the CloudFormation template. Then delete the stack.


C. Modify the CloudFormation template to retain the ASGInstanceRole12345678 resource. Then manually delete the resource after deployment.


D. Add a force parameter when calling CloudFormation with the role-arn of ASGInstanceRole12345678.





C.
  Modify the CloudFormation template to retain the ASGInstanceRole12345678 resource. Then manually delete the resource after deployment.

A developer is working on an ecommerce application that stores data in an Amazon RDS for MySQL cluster. The developer needs to implement a caching layer for the application to retrieve information about the most viewed products. Which solution will meet these requirements?


A. Edit the RDS for MySQL cluster by adding a cache node. Configure the cache endpoint instead of the cluster endpoint in the application.


B. Create an Amazon ElastiCache (Redis OSS) cluster. Update the application code to use the ElastiCache (Redis OSS) cluster endpoint.


C. Create an Amazon DynamoDB Accelerator (DAX) cluster in front of the RDS for MySQL cluster. Configure the application to connect to the DAX endpoint instead of the RDS endpoint.


D. Configure the RDS for MySQL cluster to add a standby instance in a different Availability Zone. Configure the application to read the data from the standby instance.





B.
  Create an Amazon ElastiCache (Redis OSS) cluster. Update the application code to use the ElastiCache (Redis OSS) cluster endpoint.

Explanation:
Requirement summary:
An ecommerce application uses Amazon RDS for MySQL.
It needs a caching layer for the most viewed products.
Evaluate the options:
A. Add a cache node to RDS: RDS for MySQL has no such feature; caching must be implemented outside RDS.
B. ElastiCache (Redis OSS): Purpose-built for caching frequently accessed data, reduces read pressure on RDS, provides fast in-memory access (microseconds), and integrates cleanly into the application logic (see the sketch below).
C. DynamoDB Accelerator (DAX): DAX accelerates DynamoDB, not RDS.
D. RDS standby instance: Reading from the standby is not allowed; the standby exists for failover only, not for load balancing.
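A sketch of the cache-aside pattern the application code could use, assuming the redis and pymysql client libraries and placeholder endpoints, table, and credentials:

import json
import redis    # assumed Redis client library
import pymysql  # assumed MySQL client library

# Placeholder endpoints and credentials; in practice these come from configuration.
cache = redis.Redis(host="my-elasticache-endpoint", port=6379)
db = pymysql.connect(host="my-rds-endpoint", user="app", password="example",
                     database="shop")

def get_most_viewed_products(limit=10):
    # Cache-aside: serve from Redis when possible, fall back to MySQL on a miss.
    key = f"most-viewed:{limit}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)

    with db.cursor() as cursor:
        cursor.execute(
            "SELECT product_id, views FROM products ORDER BY views DESC LIMIT %s",
            (limit,),
        )
        rows = cursor.fetchall()

    cache.set(key, json.dumps(rows), ex=60)  # a short TTL keeps the list fresh
    return rows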

