DVA-C02 Practice Test Questions

368 Questions


A developer needs to export the contents of several Amazon DynamoDB tables into Amazon S3 buckets to comply with company data regulations. The developer uses the AWS CLI to run commands to export from each table to the proper S3 bucket. The developer sets up AWS credentials correctly and grants resources appropriate permissions. However, the exports of some tables fail. What should the developer do to resolve this issue?


A. Ensure that point-in-time recovery is enabled on the DynamoDB tables.


B. Ensure that the target S3 bucket is in the same AWS Region as the DynamoDB table.


C. Ensure that DynamoDB streaming is enabled for the tables.


D. Ensure that DynamoDB Accelerator (DAX) is enabled.





A.
  Ensure that point-in-time recovery is enabled on the DynamoDB tables.

Explanation:
1. Understanding the Use Case:
The developer needs to export DynamoDB table data into Amazon S3 buckets using the AWS CLI, and some exports are failing. Proper credentials and permissions have already been configured.
2. Key Conditions to Check:
Point-in-Time Recovery (PITR): The DynamoDB export-to-S3 feature is built on PITR, so PITR must be enabled on the source table. If PITR is disabled on a table, the export request for that table fails, which explains why only some of the exports succeed.
Region Consistency: The destination S3 bucket does not have to be in the same AWS Region (or even the same account) as the table, so a Region mismatch is not the cause of the failures.
DynamoDB Streams: Streams capture real-time data modifications but are unrelated to the bulk export feature.
DAX (DynamoDB Accelerator): DAX is a caching service that speeds up read operations for DynamoDB but does not affect the export functionality.
3. Explanation of the Options:
Option A: "Ensure that point-in-time recovery is enabled on the DynamoDB tables." This is the correct answer. Exports to S3 require PITR to be enabled on every table being exported; the tables without PITR enabled are the ones whose exports fail.
Option B: "Ensure that the target S3 bucket is in the same AWS Region as the DynamoDB table." The export feature supports destination buckets in a different Region or account, so this is not a requirement and does not explain the failures.
Option C: "Ensure that DynamoDB streaming is enabled for the tables." Streams are useful for capturing real-time changes in DynamoDB tables but are unrelated to the export functionality. This option does not resolve the issue.
Option D: "Ensure that DynamoDB Accelerator (DAX) is enabled." DAX accelerates read operations but does not influence the export functionality. This option is irrelevant to the issue.

A developer is creating an AWS Lambda function that will connect to an Amazon RDS for MySQL instance. The developer wants to store the database credentials. The database credentials need to be encrypted and the database password needs to be automatically rotated.
Which solution will meet these requirements?


A. Store the database credentials as environment variables for the Lambda function. Set the environment variables to rotate automatically.


B. Store the database credentials in AWS Secrets Manager. Set up managed rotation on the database credentials.


C. Store the database credentials in AWS Systems Manager Parameter Store as secure string parameters. Set up managed rotation on the parameters.


D. Store the database credentials in the X-Amz-Security-Token parameter. Set up managed rotation on the parameter.





B.
  Store the database credentials in AWS Secrets Manager. Set up managed rotation on the database credentials.
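
As a minimal sketch, assuming a secret named rds/mysql-app (a placeholder) whose JSON contains username and password keys (the keys that managed rotation for Amazon RDS maintains), the Lambda function could read the current credentials like this:

```python
import json

import boto3

secrets_client = boto3.client("secretsmanager")


def get_db_credentials(secret_id: str = "rds/mysql-app") -> dict:  # placeholder secret name
    """Fetch the current secret version; Secrets Manager handles rotation."""
    response = secrets_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])


creds = get_db_credentials()
# Pass creds["username"] and creds["password"] to the MySQL client. Because managed
# rotation updates the secret in place, the function picks up the new password on the
# next read without any code change.
```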

A company uses an AWS Lambda function to transfer files from an Amazon S3 bucket to the company's SFTP server. The Lambda function connects to the SFTP server by using credentials such as username and password. The company uses Lambda environment variables to store these credentials.
A developer needs to implement encrypted username and password credentials.
Which solution will meet these requirements?


A. Remove the user credentials from the Lambda environment. Implement IAM database authentication.


B. Move the user credentials from Lambda environment variables to AWS Systems Manager Parameter Store.


C. Move the user credentials from Lambda environment variables to AWS Key Management Service (AWS KMS).


D. Move the user credentials from the Lambda environment to an encrypted .txt file. Store the file in an S3 bucket.





B.
  Move the user credentials from Lambda environment variables to AWS Systems Manager Parameter Store.
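
A minimal sketch of how the Lambda function might read the credentials as SecureString parameters (the parameter names below are placeholders for this example):

```python
import boto3

ssm_client = boto3.client("ssm")


def get_sftp_credentials() -> tuple:
    """Read SecureString parameters; values are decrypted with the associated KMS key."""
    username = ssm_client.get_parameter(
        Name="/sftp/username", WithDecryption=True  # placeholder parameter name
    )["Parameter"]["Value"]
    password = ssm_client.get_parameter(
        Name="/sftp/password", WithDecryption=True  # placeholder parameter name
    )["Parameter"]["Value"]
    return username, password
```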

A developer is building an application that stores objects in an Amazon S3 bucket. The bucket does not have versioning enabled. The objects are accessed rarely after 1 week. However, the objects must be immediately available at all times. The developer wants to optimize storage costs for the S3 bucket. Which solution will meet this requirement?


A. Create an S3 Lifecycle rule to expire objects after 7 days.


B. Create an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.


C. Create an S3 Lifecycle rule to transition objects to S3 Glacier Flexible Retrieval after 7 days.


D. Create an S3 Lifecycle rule to delete objects that have delete markers.





B.
  Create an S3 Lifecycle rule to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.

Explanation:
1. Understanding the Use Case:
The goal is to store objects in an S3 bucket while optimizing storage costs. The key conditions are:
Objects are accessed infrequently after 1 week.
Objects must remain immediately accessible at all times.
2. AWS S3 Storage Classes Overview:
Amazon S3 offers various storage classes, each optimized for specific use cases:
S3 Standard:Best for frequently accessed data with low latency and high throughput needs.
S3 Standard-Infrequent Access (S3 Standard-IA): Optimized for infrequently accessed data that still needs the same low-latency, immediate access as S3 Standard. It provides lower storage costs but incurs per-GB retrieval charges.
S3 Glacier Flexible Retrieval (formerly S3 Glacier):Designed for archival data with retrieval latency ranging from minutes to hours. This does not meet the requirement for "immediate access."
S3 Glacier Deep Archive:Lowest-cost storage, suitable for rarely accessed data with retrieval times of hours.
3. Explanation of the Options:
Option A:"Create an S3 Lifecycle rule to expire objects after 7 days."Expiring objects after 7 days deletes them permanently, which does not fulfill the requirement of retaining the objects for later infrequent access.
Option B:"Create an S3 Lifecycle rule to transition objects to S3 Standard- Infrequent Access (S3 Standard-IA) after 7 days."This is the correct solution. S3 Standard-IA is ideal for objects accessed infrequently but still need to be available immediately. Transitioning objects to this storage class reduces storage costs while maintaining availability and low latency.
Option C:"Create an S3 Lifecycle rule to transition objects to S3 Glacier Flexible Retrieval after 7 days."S3 Glacier Flexible Retrieval is a low-cost archival solution.
However, it does not provideimmediate accessas retrieval requires minutes to hours. This option does not meet the requirement.
Option D:"Create an S3 Lifecycle rule to delete objects that have delete markers."This option is irrelevant to the given use case, as it addresses versioning cleanup, which is not enabled in the described S3 bucket.
4. Implementation Steps for Option B:
To transition objects to S3 Standard-IA after 7 days:
Open the S3 console and select the target bucket.
On the bucket's Management tab, choose Create lifecycle rule.
Give the rule a name and define its scope (all objects in the bucket or a prefix/tag filter).
Add a transition action that moves objects to S3 Standard-IA after 7 days.
Review the configuration and save the rule.
5. Cost Optimization Benefits:
Transitioning to S3 Standard-IA results in cost savings as it offers:
Lower storage costs compared to S3 Standard.
Immediate access to objects when required. However, remember that there is a retrieval cost associated with S3 Standard-IA, so it is best suited for data with low retrieval frequency.

A company hosts a batch processing application on AWS Elastic Beanstalk with instances that run the most recent version of Amazon Linux. The application sorts and processes large datasets. In recent weeks, the application's performance has decreased significantly during a peak period for traffic. A developer suspects that the application issues are related to the memory usage. The developer checks the Elastic Beanstalk console and notices that memory usage is not being tracked. How should the developer gather more information about the application performance issues?


A. Configure the Amazon CloudWatch agent to push logs to Amazon CloudWatch Logs by using port 443.


B. Configure the Elastic Beanstalk .ebextensions directory to track the memory usage of the instances.


C. Configure the Amazon CloudWatch agent to track the memory usage of the instances.


D. Configure an Amazon CloudWatch dashboard to track the memory usage of the instances.





C.
  Configure the Amazon CloudWatch agent to track the memory usage of the instances.

Explanation:
To monitor memory usage in Amazon Elastic Beanstalk environments, it's important to understand that the default Elastic Beanstalk monitoring in Amazon CloudWatch does not track memory usage, because memory metrics are not collected by default. Instead, the Amazon CloudWatch agent must be configured to collect memory usage metrics.

  • Why Option C is Correct: The CloudWatch agent runs on the environment's instances and publishes operating-system metrics, including memory usage, that the default Elastic Beanstalk and CloudWatch metrics do not include.
  • How to Implement This Solution: Install and configure the CloudWatch agent on the instances (for example, through an .ebextensions configuration file) so that it collects and publishes memory metrics; a sketch for verifying the published metrics follows this list.
  • Why Other Options are Incorrect: Option A only pushes logs to CloudWatch Logs and does not produce memory metrics; option B on its own does not collect memory data unless it is used to deploy the CloudWatch agent; option D can only display metrics that are already being collected.
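
As a quick verification that the agent is publishing memory data, the metrics in the agent's namespace can be listed; the CWAgent namespace and mem_used_percent metric name below are the agent's Linux defaults, so treat them as assumptions if the agent configuration overrides them:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# List memory metrics published by the CloudWatch agent to confirm collection is
# working and to see which dimensions (for example, InstanceId) are attached.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(Namespace="CWAgent", MetricName="mem_used_percent"):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])
```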

A company is planning to deploy an application on AWS behind an Elastic Load Balancing (ELB) load balancer. The application uses an HTTP/HTTPS listener and must access the client IP addresses. Which load-balancing solution meets these requirements?


A. Use an Application Load Balancer and the X-Forwarded-For headers.


B. Use a Network Load Balancer (NLB). Enable proxy protocol support on the NLB and the target application.


C. Use an Application Load Balancer. Register the targets by the instance ID.


D. Use a Network Load Balancer and the X-Forwarded-For headers.





A.
  Use an Application Load Balancer and the X-Forwarded-For headers.

Explanation: A Network Load Balancer provides TCP, UDP, and TLS listeners, not HTTP/HTTPS listeners, so option B does not match this scenario. An Application Load Balancer supports HTTP/HTTPS listeners and passes the original client IP address to targets in the X-Forwarded-For header, which the application can read.
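
A minimal sketch of reading the client IP behind an ALB, assuming the application receives the request headers as a plain dict (the function shape here is illustrative):

```python
from typing import Optional


def client_ip_from_headers(headers: dict) -> Optional[str]:
    """Return the original client IP from a request forwarded by an ALB.

    The ALB appends the connecting client's IP address to X-Forwarded-For;
    the left-most entry is the original client when no other proxy sits in front.
    """
    normalized = {k.lower(): v for k, v in headers.items()}  # header names are case-insensitive
    xff = normalized.get("x-forwarded-for", "")
    return xff.split(",")[0].strip() or None


# Example with a hypothetical header set:
print(client_ip_from_headers({"X-Forwarded-For": "203.0.113.7, 10.0.0.12"}))  # -> 203.0.113.7
```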

A developer is testing a new file storage application that uses an Amazon CloudFront distribution to serve content from an Amazon S3 bucket. The distribution accesses the S3 bucket by using an origin access identity (OAI). The S3 bucket's permissions explicitly deny access to all other users.
The application prompts users to authenticate on a login page and then uses signed cookies to allow users to access their personal storage directories. The developer has configured the distribution to use its default cache behavior with restricted viewer access and has set the origin to point to the S3 bucket. However, when the developer tries to navigate to the login page, the developer receives a 403 Forbidden error.
The developer needs to implement a solution to allow unauthenticated access to the login page. The solution also must keep all private content secure.
Which solution will meet these requirements?


A. Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to the path of the login page, and make viewer access unrestricted. Keep the default cache behavior's settings unchanged.


B. Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to *, and make viewer access restricted. Change the default cache behavior's path pattern to the path of the login page, and make viewer access unrestricted.


C. Add a second origin as a failover origin to the default cache behavior. Point the failover origin to the S3 bucket. Set the path pattern for the primary origin to *, and make viewer access restricted. Set the path pattern for the failover origin to the path of the login page, and make viewer access unrestricted.


D. Add a bucket policy to the S3 bucket to allow read access. Set the resource on the policy to the Amazon Resource Name (ARN) of the login page object in the S3 bucket. Add a CloudFront function to the default cache behavior to redirect unauthorized requests to the login page's S3 URL.





A.
  Add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to the path of the login page, and make viewer access unrestricted. Keep the default cache behavior's settings unchanged.

Explanation: The solution that will meet the requirements is to add a second cache behavior to the distribution with the same origin as the default cache behavior. Set the path pattern for the second cache behavior to the path of the login page, and make viewer access unrestricted. Keep the default cache behavior’s settings unchanged. This way, the login page can be accessed without authentication, while all other content remains secure and requires signed cookies. The other options either do not allow unauthenticated access to the login page, or expose private content to unauthorized users.
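
For illustration only, a trimmed Python fragment of how the two cache behaviors might look inside a CloudFront DistributionConfig; this is not a complete, deployable configuration, and the origin ID, key group ID, cache policy ID, and login path are placeholders:

```python
cache_behaviors_fragment = {
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-private-content",        # placeholder origin ID
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "example-cache-policy-id",    # placeholder cache policy ID
        # Restricted viewer access: signed cookies are required for all other paths.
        "TrustedKeyGroups": {"Enabled": True, "Quantity": 1, "Items": ["example-key-group-id"]},
    },
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [
            {
                "PathPattern": "/login.html",          # placeholder path of the login page
                "TargetOriginId": "s3-private-content",
                "ViewerProtocolPolicy": "redirect-to-https",
                "CachePolicyId": "example-cache-policy-id",
                # Unrestricted viewer access for this path only.
                "TrustedKeyGroups": {"Enabled": False, "Quantity": 0},
            }
        ],
    },
}
```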

A company has an application that runs across multiple AWS Regions. The application is experiencing performance issues at irregular intervals. A developer must use AWS X-Ray to implement distributed tracing for the application to troubleshoot the root cause of the performance issues. What should the developer do to meet this requirement?


A. Use the X-Ray console to add annotations for AWS services and user-defined services


B. Use Region annotation that X-Ray adds automatically for AWS services. Add Region annotation for user-defined services.


C. Use the X-Ray daemon to add annotations for AWS services and user-defined services


D. Use Region annotation that X-Ray adds automatically for user-defined services. Configure X-Ray to add Region annotation for AWS services.





B.
  Use Region annotation that X-Ray adds automatically for AWS services. Add Region annotation for user-defined services.

Explanation:

  • Distributed Tracing with X-Ray: X-Ray helps visualize request paths and identify bottlenecks in applications distributed across Regions.
  • Region Annotations (Automatic for AWS Services): X-Ray automatically adds a Region annotation to segments representing calls to AWS services. This aids in tracing cross-Region traffic.
  • Region Annotations (Manual for User-Defined): For segments representing calls to user-defined services in different Regions, the developer needs to add the Region annotation manually to enable comprehensive tracing.
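
A minimal sketch using the AWS X-Ray SDK for Python inside an already-instrumented handler; the subsegment name, annotation key, and Region value are the developer's own choices, not values that X-Ray defines:

```python
from aws_xray_sdk.core import xray_recorder

# Wrap the call to a user-defined downstream service in a subsegment and annotate
# it with the Region so traces can later be filtered or grouped by Region.
subsegment = xray_recorder.begin_subsegment("user-profile-service")  # placeholder name
try:
    subsegment.put_annotation("region", "eu-west-1")  # placeholder annotation key/value
    # ... call the downstream service here ...
finally:
    xray_recorder.end_subsegment()
```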

An Amazon Kinesis Data Firehose delivery stream is receiving customer data that contains personally identifiable information. A developer needs to remove pattern-based customer identifiers from the data and store the modified data in an Amazon S3 bucket. What should the developer do to meet these requirements?


A. Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.


B. Launch an Amazon EC2 instance. Set the EC2 instance as the destination of the delivery stream. Run an application on the EC2 instance to remove the customer identifiers. Store the transformed data in an Amazon S3 bucket.


C. Create an Amazon OpenSearch Service instance. Set the OpenSearch Service instance as the destination of the delivery stream. Use search and replace to remove the customer identifiers. Export the data to an Amazon S3 bucket.


D. Create an AWS Step Functions workflow to remove the customer identifiers. As the last step in the workflow, store the transformed data in an Amazon S3 bucket. Set the workflow as the destination of the delivery stream.





A.
  Implement Kinesis Data Firehose data transformation as an AWS Lambda function. Configure the function to remove the customer identifiers. Set an Amazon S3 bucket as the destination of the delivery stream.

Explanation: Amazon Kinesis Data Firehose is a service that delivers real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, and Amazon Kinesis Data Analytics. The developer can implement Kinesis Data Firehose data transformation as an AWS Lambda function. The function can remove pattern-based customer identifiers from the data and return the modified data to Kinesis Data Firehose. The developer can set an Amazon S3 bucket as the destination of the delivery stream.
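
A minimal sketch of the transformation Lambda function; the regex below is a hypothetical SSN-like pattern standing in for whatever pattern-based identifiers the data actually contains:

```python
import base64
import re

# Placeholder pattern for the customer identifiers to remove.
IDENTIFIER_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        scrubbed = IDENTIFIER_PATTERN.sub("[REDACTED]", payload)
        output.append(
            {
                "recordId": record["recordId"],
                "result": "Ok",  # tells Firehose the record was transformed successfully
                "data": base64.b64encode(scrubbed.encode("utf-8")).decode("utf-8"),
            }
        )
    return {"records": output}
```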

A developer is designing an AWS Lambda function that creates temporary files that are less than 10 MB during invocation. The temporary files will be accessed and modified multiple times during invocation. The developer has no need to save or retrieve these files in the future. Where should the temporary files be stored?


A. the /tmp directory


B. Amazon Elastic File System (Amazon EFS)


C. Amazon Elastic Block Store (Amazon EBS)


D. Amazon S3





A.
  the /tmp directory

Explanation: AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda provides a local file system that can be used to store temporary files during invocation. The local file system is mounted under the /tmp directory and provides 512 MB of ephemeral storage by default (configurable up to 10,240 MB). The temporary files are accessible only by the Lambda execution environment that created them and are discarded when that environment is recycled. The developer can store temporary files that are less than 10 MB in the /tmp directory and access and modify them multiple times during invocation.
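
A minimal sketch of using /tmp inside the handler; the file name and event shape are hypothetical:

```python
import os

SCRATCH_PATH = "/tmp/working-set.csv"  # placeholder file under Lambda's ephemeral storage


def lambda_handler(event, context):
    # Create, append to, and re-read the temporary file as many times as needed
    # during this invocation; nothing is persisted beyond the execution environment.
    with open(SCRATCH_PATH, "w") as f:
        f.write("id,value\n")

    with open(SCRATCH_PATH, "a") as f:
        for item in event.get("items", []):  # hypothetical event shape
            f.write(f"{item['id']},{item['value']}\n")

    return {"tempFileBytes": os.path.getsize(SCRATCH_PATH)}
```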

A developer is creating an application that must be able to generate API responses without backend integrations. Multiple internal teams need to work with the API while the application is still in development. Which solution will meet these requirements with the LEAST operational overhead?


A. Create an Amazon API Gateway REST API. Set up a proxy resource that has the HTTP proxy integration type.


B. Create an Amazon API Gateway HTTP API. Provision a VPC link, and set up a private integration on the API to connect to a VPC.


C. Create an Amazon API Gateway HTTP API. Enable mock integration on the method of the API resource.


D. Create an Amazon API Gateway REST API. Enable mock integration on the method of the API resource.





D.
  Create an Amazon API Gateway REST API. Enable mock integration on the method of the API resource.

Explanation:
1. Understanding the Use Case:
The API needs to:
Generate responses without backend integrations: This indicates the use of mock responses for testing.
Be used by multiple internal teams during development.
Minimize operational overhead.
2. Key Features of Amazon API Gateway:
REST APIs: A fully managed API Gateway option that supports advanced capabilities such as mock integrations and request/response transformation.
HTTP APIs: A lightweight option for building APIs quickly. It supports fewer features but has lower operational complexity and cost.
Mock Integration: Allows API Gateway to return predefined responses without requiring a backend integration.
3. Explanation of the Options:
Option A:"Create an Amazon API Gateway REST API. Set up a proxy resource that has the HTTP proxy integration type."A proxy integration requires a backend service for handling requests. This does not meet the requirement of "no backend integrations."
Option B:"Create an Amazon API Gateway HTTP API. Provision a VPC link, and set up a private integration on the API to connect to a VPC."This requires setting up a VPC and provisioning resources, which increases operational overhead and is unnecessary for this use case.
Option C:"Create an Amazon API Gateway HTTP API. Enable mock integration on the method of the APIresource."While HTTP APIs can enable mock integrations, they have limited support for advanced features compared to REST APIs, such as detailed request/response customization. REST APIs are better suited for development environments requiring mock responses.
Option D:"Create an Amazon API Gateway REST API. Enable mock integration on the method of the API resource."This is the correct answer. REST APIs with mock integration allow defining pre-configured responses directly within API Gateway, making them ideal for scenarios where backend services are unavailable. It provides flexibility for testing while minimizing operational overhead.
4. Implementation Steps:
To enable mock integration with REST API:
Create a REST API in API Gateway.
Define the API resource and the method (for example, GET) on that resource.
Set up a mock integration as the method's integration type.
Configure the integration response with the payload that API Gateway should return.
Deploy the API to a stage so that the internal teams can call it. (A boto3 sketch of these steps follows this list.)
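
A minimal boto3 sketch of these steps; the API name, resource path, stage name, and mock response body are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# 1. Create the REST API and find its root resource.
api_id = apigw.create_rest_api(name="dev-mock-api")["id"]  # placeholder API name
root_id = apigw.get_resources(restApiId=api_id)["items"][0]["id"]

# 2. Define a resource and a method on it.
resource_id = apigw.create_resource(
    restApiId=api_id, parentId=root_id, pathPart="orders"  # placeholder path
)["id"]
apigw.put_method(restApiId=api_id, resourceId=resource_id, httpMethod="GET",
                 authorizationType="NONE")

# 3. Set up the mock integration (no backend is called).
apigw.put_integration(restApiId=api_id, resourceId=resource_id, httpMethod="GET",
                      type="MOCK",
                      requestTemplates={"application/json": '{"statusCode": 200}'})

# 4. Configure the response that the mock returns to callers.
apigw.put_method_response(restApiId=api_id, resourceId=resource_id, httpMethod="GET",
                          statusCode="200")
apigw.put_integration_response(restApiId=api_id, resourceId=resource_id, httpMethod="GET",
                               statusCode="200",
                               responseTemplates={"application/json": '{"orders": []}'})  # placeholder body

# 5. Deploy the API to a stage so the internal teams can call it.
apigw.create_deployment(restApiId=api_id, stageName="dev")  # placeholder stage name
```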
5. Why REST API Over HTTP API?
REST APIs support mock integrations along with detailed request/response transformations, which makes them well suited to development and testing scenarios.
While HTTP APIs offer lower cost and simplicity, they do not support mock integrations.

A company's application has an AWS Lambda function that processes messages from IoT devices. The company wants to monitor the Lambda function to ensure that the Lambda function is meeting its required service level agreement (SLA).
A developer must implement a solution to determine the application's throughput in near real time. The throughput must be based on the number of messages that the Lambda function receives and processes in a given time period. The Lambda function performs initialization and post-processing steps that must not factor into the throughput measurement.
What should the developer do to meet these requirements?


A. Use the Lambda function's ConcurrentExecutions metric in Amazon CloudWatch to measure the throughput.


B. Modify the application to log the calculated throughput to Amazon CloudWatch Logs. Use Amazon EventBridge to invoke a separate Lambda function to process the logs on a schedule.


C. Modify the application to publish custom Amazon CloudWatch metrics when the Lambda function receives and processes each message. Use the metrics to calculate the throughput.


D. Use the Lambda function's Invocations metric and Duration metric to calculate the throughput in Amazon CloudWatch.





C.
  Modify the application to publish custom Amazon CloudWatch metrics when the Lambda function receives and processes each message. Use the metrics to calculate the throughput.
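
A minimal sketch of publishing the custom metric from the message-processing part of the handler; the namespace, metric name, and dimension values are the developer's own choices, not AWS-defined:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")


def record_processed_messages(count: int) -> None:
    """Publish only the messages actually processed, excluding initialization and post-processing."""
    cloudwatch.put_metric_data(
        Namespace="IoTApp",  # placeholder namespace
        MetricData=[
            {
                "MetricName": "MessagesProcessed",  # placeholder metric name
                "Dimensions": [{"Name": "Function", "Value": "device-message-processor"}],  # placeholder
                "Value": count,
                "Unit": "Count",
            }
        ],
    )


# Called inside the handler right after a batch of messages has been processed:
# record_processed_messages(len(processed_messages))
# Near-real-time throughput is then SUM(MessagesProcessed) over a 1-minute period in CloudWatch.
```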

