DVA-C02 Practice Test Questions

368 Questions


A developer is creating an AWS Lambda function that consumes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The developer notices that the Lambda function processes some messages multiple times. How should the developer resolve this issue MOST cost-effectively?


A. Change the Amazon SQS standard queue to an Amazon SQS FIFO queue by using the Amazon SQS message deduplication ID.


B. Set up a dead-letter queue.


C. Set the maximum concurrency limit of the AWS Lambda function to 1


D. Change the message processing to use Amazon Kinesis Data Streams instead of Amazon SQS.





A.
  Change the Amazon SQS standard queue to an Amazon SQS FIFO queue by using the Amazon SQS message deduplication ID.

Explanation: Amazon Simple Queue Service (Amazon SQS) is a fully managed queue service that lets you decouple and scale applications. Amazon SQS offers two queue types: Standard and FIFO (First-In-First-Out). A FIFO queue uses the message deduplication ID to treat messages with the same value as duplicates, delivering only one copy within the deduplication interval. Changing the Amazon SQS standard queue to an Amazon SQS FIFO queue and using the message deduplication ID therefore resolves the issue of the Lambda function processing some messages multiple times, so option A is correct.
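With a FIFO queue, the deduplication ID can be supplied explicitly on each send, or derived automatically when content-based deduplication is enabled (SQS uses a SHA-256 hash of the message body in that case). A minimal sketch of deriving an explicit deduplication ID the same way; the queue URL and payload fields are hypothetical:

```python
import hashlib
import json

def dedup_id_for(message_body: str) -> str:
    """Mirror SQS content-based deduplication: SHA-256 hex digest of the body."""
    return hashlib.sha256(message_body.encode("utf-8")).hexdigest()

body = json.dumps({"orderId": "1234", "action": "ship"})  # hypothetical payload

# Two sends of the same body within the 5-minute deduplication interval
# carry the same ID, so SQS accepts only one copy.
params = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/111111111111/orders.fifo",  # hypothetical
    "MessageBody": body,
    "MessageGroupId": "orders",
    "MessageDeduplicationId": dedup_id_for(body),
}
# boto3.client("sqs").send_message(**params)  # requires AWS credentials
```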

A developer needs to deploy an application running on AWS Fargate using Amazon ECS. The application has environment variables that must be passed to a container for the application to initialize. How should the environment variables be passed to the container?


A. Define an array that includes the environment variables under the environment parameter within the service definition.


B. Define an array that includes the environment variables under the environment parameter within the task definition.


C. Define an array that includes the environment variables under the entryPoint parameter within the task definition.


D. Define an array that includes the environment variables under the entryPoint parameter within the service definition.





B.
  Define an array that includes the environment variables under the environment parameter within the task definition.

Explanation: This solution passes the environment variables to the container when it is launched by AWS Fargate on Amazon ECS. The task definition is a text file that describes one or more containers that form an application. It contains parameters for configuring the containers, such as CPU and memory requirements, network mode, and environment variables. The environment parameter is an array of key-value pairs that specify environment variables to pass to a container. Defining the environment variables under the entryPoint parameter within the task definition would not pass them to the container; it would use them as command-line arguments that override the container's default entry point. Defining the environment variables under the environment or entryPoint parameter within the service definition would not pass them to the container either; it would cause an error because these parameters are not valid in a service definition.
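The relevant fragment of such a task definition, sketched as the Python dict you might pass to boto3's register_task_definition (the family, container name, image, and variable values are hypothetical):

```python
# Fragment of an ECS task definition for Fargate; the "environment" array
# inside the container definition carries the variables to the container.
task_definition = {
    "family": "photo-app",  # hypothetical family name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "app",
            "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # hypothetical
            "environment": [  # key-value pairs passed to the container at launch
                {"name": "APP_ENV", "value": "production"},
                {"name": "TABLE_NAME", "value": "orders"},
            ],
        }
    ],
}
# boto3.client("ecs").register_task_definition(**task_definition)  # requires AWS credentials
```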

A developer needs to use Amazon DynamoDB to store customer orders. The developer's company requires all customer data to be encrypted at rest with a key that the company generates. What should the developer do to meet these requirements?


A. Create the DynamoDB table with encryption set to None. Code the application to use the key to decrypt the data when the application reads from the table. Code the application to use the key to encrypt the data when the application writes to the table.


B. Store the key by using AWS KMS. Choose an AWS KMS customer managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.


C. Store the key by using AWS KMS. Create the DynamoDB table with default encryption. Include the kms:Encrypt parameter with the Amazon Resource Name (ARN) of the AWS KMS key when using the DynamoDB SDK.


D. Store the key by using AWS KMS. Choose an AWS KMS AWS managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.





B.
  Store the key by using AWS KMS. Choose an AWS KMS customer managed key during creation of the DynamoDB table. Provide the Amazon Resource Name (ARN) of the AWS KMS key.

A developer is designing a serverless application with two AWS Lambda functions to process photos. One Lambda function stores objects in an Amazon S3 bucket and stores the associated metadata in an Amazon DynamoDB table. The other Lambda function fetches the objects from the S3 bucket by using the metadata from the DynamoDB table. Both Lambda functions use the same Python library to perform complex computations and are approaching the quota for the maximum size of zipped deployment packages. What should the developer do to reduce the size of the Lambda deployment packages with the LEAST operational overhead?


A. Package each Python library in its own .zip file archive. Deploy each Lambda function with its own copy of the library.


B. Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.


C. Combine the two Lambda functions into one Lambda function. Deploy the Lambda function as a single .zip file archive.


D. Download the Python library to an S3 bucket. Program the Lambda functions to reference the object URLs.





B.
  Create a Lambda layer with the required Python library. Use the Lambda layer in both Lambda functions.

Explanation: AWS Lambda is a service that lets developers run code without provisioning or managing servers. Lambda layers are a distribution mechanism for libraries, custom runtimes, and other dependencies. The developer can create a Lambda layer with the required Python library and use the layer in both Lambda functions. This will reduce the size of the Lambda deployment packages and avoid reaching the quota for the maximum size of zipped deployment packages. The developer can also benefit from using layers to manage dependencies separately from function code.

A developer deployed an application to an Amazon EC2 instance. The application needs to know the public IPv4 address of the instance. How can the application find this information?


A. Query the instance metadata from http://169.254.169.254/latest/meta-data/.


B. Query the instance user data from http://169.254.169.254/latest/user-data/.


C. Query the Amazon Machine Image (AMI) information from http://169.254.169.254/latest/meta-data/ami/.


D. Check the hosts file of the operating system





A.
  Query the instance metadata from http://169.254.169.254/latest/meta-data/.

Explanation:
Instance Metadata Service: EC2 instances have access to an internal metadata service at http://169.254.169.254. It provides instance-specific information such as the instance ID, security groups, and public IPv4 address (available at the meta-data/public-ipv4 path).
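A sketch of querying the endpoint with IMDSv2, which requires fetching a session token first and presenting it on the metadata request; the lookup itself only succeeds when run on an EC2 instance:

```python
import urllib.request

METADATA_BASE = "http://169.254.169.254/latest"

def metadata_url(path: str) -> str:
    """Build a metadata URL, e.g. 'public-ipv4' -> .../latest/meta-data/public-ipv4."""
    return f"{METADATA_BASE}/meta-data/{path}"

def get_public_ipv4() -> str:
    """Fetch the instance's public IPv4 address via IMDSv2 (token-based)."""
    # Step 1: request a short-lived session token with a PUT call.
    token_req = urllib.request.Request(
        f"{METADATA_BASE}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    # Step 2: present the token when reading the metadata path.
    req = urllib.request.Request(
        metadata_url("public-ipv4"),
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

# get_public_ipv4() works only from inside an EC2 instance.
```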

A company has implemented a pipeline in AWS CodePipeline. The company is using a single AWS account and does not use AWS Organizations. The company needs to test its AWS CloudFormation templates in its primary AWS Region and a disaster recovery Region. Which solution will meet these requirements with the MOST operational efficiency?


A. In the CodePipeline pipeline, implement an AWS CodeDeploy action for each Region to deploy and test the CloudFormation templates. Update CodePipeline and AWS CodeBuild with appropriate permissions.


B. Configure CodePipeline to deploy and test the CloudFormation templates. Use CloudFormation StackSets to start deployment across both Regions.


C. Configure CodePipeline to invoke AWS CodeBuild to deploy and test the CloudFormation templates in each Region. Update CodeBuild and CloudFormation with appropriate permissions.


D. Use the Snyk action in CodePipeline to deploy and test the CloudFormation templates in each Region.





B.
  Configure CodePipeline to deploy and test the CloudFormation templates. Use CloudFormation StackSets to start deployment across both Regions.

A company uses Amazon API Gateway to expose a set of APIs to customers. The APIs have caching enabled in API Gateway. Customers need a way to invalidate the cache for each API when they test the API. What should a developer do to give customers the ability to invalidate the API cache?


A. Ask the customers to use AWS credentials to call the InvalidateCache API operation.


B. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to send a request that contains the HTTP header when they make an API call.


C. Ask the customers to use the AWS SDK API Gateway class to invoke the InvalidateCache API operation.


D. Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to add the INVALIDATE_CACHE querystring parameter when they make an API call.





D.
  Attach an InvalidateCache policy to the IAM execution role that the customers use to invoke the API. Ask the customers to add the INVALIDATE_CACHE querystring parameter when they make an API call.

A developer at a company needs to create a small application that makes the same API call once each day at a designated time. The company does not have infrastructure in the AWS Cloud yet, but the company wants to implement this functionality on AWS. Which solution meets these requirements in the MOST operationally efficient manner?


A. Use a Kubernetes cron job that runs on Amazon Elastic Kubernetes Service (Amazon EKS).


B. Use an Amazon Linux crontab scheduled job that runs on Amazon EC2


C. Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.


D. Use an AWS Batch job that is submitted to an AWS Batch job queue.





C.
  Use an AWS Lambda function that is invoked by an Amazon EventBridge scheduled event.

Explanation: This solution meets the requirements in the most operationally efficient manner because it does not require any infrastructure provisioning or management. The developer can create a Lambda function that makes the API call and configure an EventBridge rule that triggers the function once a day at a designated time. This is a serverless solution that scales automatically and only charges for the execution time of the function.
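The daily trigger is just a schedule expression on an EventBridge rule. A sketch of building the rule parameters for a run at 09:00 UTC each day (the rule name is hypothetical):

```python
def daily_cron(hour_utc: int, minute: int = 0) -> str:
    """EventBridge cron expression for once a day at the given UTC time."""
    return f"cron({minute} {hour_utc} * * ? *)"

rule_params = {
    "Name": "daily-api-call",             # hypothetical rule name
    "ScheduleExpression": daily_cron(9),  # fires at 09:00 UTC every day
    "State": "ENABLED",
}
# boto3.client("events").put_rule(**rule_params), followed by put_targets()
# pointing at the Lambda function's ARN, wires the schedule to the function.
```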

A developer is building an application that uses an Amazon RDS for PostgreSQL database. To meet security requirements, the developer needs to ensure that data is encrypted at rest. The developer must be able to rotate the encryption keys on demand.


A. Use an AWS KMS managed encryption key to encrypt the database.


B. Create a symmetric customer managed AWS KMS key. Use the key to encrypt the database.


C. Create a 256-bit AES-GCM encryption key. Store the key in AWS Secrets Manager, and enable managed rotation. Use the key to encrypt the database.


D. Create a 256-bit AES-GCM encryption key. Store the key in AWS Secrets Manager. Configure an AWS Lambda function to perform key rotation. Use the key to encrypt the database.





B.
  Create a symmetric customer managed AWS KMS key. Use the key to encrypt the database.

Explanation:
Why Option B is Correct: A customer-managed AWS Key Management Service (KMS) key allows for encryption at rest and provides the ability to rotate the key on demand. This ensures compliance with security requirements for key management and database encryption.
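A sketch of the boto3 calls involved, expressed mainly as request parameters since invoking them needs AWS credentials; the key alias is hypothetical:

```python
# Parameters for creating a symmetric customer managed KMS key.
create_key_params = {
    "Description": "CMK for RDS for PostgreSQL encryption at rest",
    "KeySpec": "SYMMETRIC_DEFAULT",  # symmetric key, as the answer requires
    "KeyUsage": "ENCRYPT_DECRYPT",
}
# kms = boto3.client("kms")
# key_id = kms.create_key(**create_key_params)["KeyMetadata"]["KeyId"]
# kms.create_alias(AliasName="alias/rds-postgres", TargetKeyId=key_id)  # hypothetical alias
#
# The key is supplied as KmsKeyId when creating the encrypted DB instance:
# rds.create_db_instance(..., StorageEncrypted=True, KmsKeyId=key_id)
#
# On-demand rotation is a single call on the customer managed key
# (the RotateKeyOnDemand API):
# kms.rotate_key_on_demand(KeyId=key_id)
```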

A company needs to set up secure database credentials for all its AWS Cloud resources. The company's resources include Amazon RDS DB instances Amazon DocumentDB clusters and Amazon Aurora DB instances. The company's security policy mandates that database credentials be encrypted at rest and rotated at a regular interval. Which solution will meet these requirements MOST securely?


A. Set up IAM database authentication for token-based access. Generate user tokens to provide centralized access to RDS DB instances. Amazon DocumentDB clusters and Aurora DB instances.


B. Create parameters for the database credentials in AWS Systems Manager Parameter Store. Set the parameter type to SecureString. Set up automatic rotation on the parameters.


C. Store the database access credentials as an encrypted Amazon S3 object in an S3 bucket Block all public access on the S3 bucket. Use S3 server-side encryption to set up automatic rotation on the encryption key.


D. Create an AWS Lambda function by using the SecretsManagerRotationTemplate template in the AWS Secrets Manager console. Create secrets for the database credentials in Secrets Manager. Set up secrets rotation on a schedule.





D.
  Create an AWS Lambda function by using the SecretsManagerRotationTemplate template in the AWS Secrets Manager console. Create secrets for the database credentials in Secrets Manager. Set up secrets rotation on a schedule.

Explanation: This solution meets the requirements by using AWS Secrets Manager, a service that protects secrets such as database credentials by encrypting them with AWS Key Management Service (AWS KMS) and enabling automatic rotation. The developer can create an AWS Lambda function by using the SecretsManagerRotationTemplate template in the AWS Secrets Manager console, which provides sample code for rotating secrets for RDS DB instances, Amazon DocumentDB clusters, and Amazon Aurora DB instances. The developer can then create secrets for the database credentials in Secrets Manager, which encrypts them at rest and provides secure access to them, and set up secrets rotation on a schedule, which changes the database credentials periodically according to a specified interval or event. Option A is not optimal because IAM database authentication provides token-based access, which may not be compatible with all database engines and requires additional configuration and management of IAM roles or users. Option B is not optimal because AWS Systems Manager Parameter Store does not support automatic rotation of secrets. Option C is not optimal because storing the credentials as an encrypted Amazon S3 object introduces additional cost and complexity for accessing and securing the data.
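A sketch of the Secrets Manager calls, expressed as request parameters (the secret name, host, and Lambda ARN are hypothetical; the rotation function is the one built from the SecretsManagerRotationTemplate):

```python
import json

# Create the secret holding the database credentials (encrypted at rest with KMS).
create_secret_params = {
    "Name": "prod/orders-db",  # hypothetical secret name
    "SecretString": json.dumps({
        "engine": "postgres",
        "host": "orders.cluster-ro.us-east-1.rds.amazonaws.com",  # hypothetical host
        "username": "app",
        "password": "initial-password",  # replaced automatically once rotation runs
    }),
}

# Attach the rotation Lambda function and a rotation schedule.
rotate_secret_params = {
    "SecretId": "prod/orders-db",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:111111111111:function:SecretsManagerRotation",  # hypothetical
    "RotationRules": {"AutomaticallyAfterDays": 30},  # rotate every 30 days
}
# sm = boto3.client("secretsmanager")
# sm.create_secret(**create_secret_params)   # requires AWS credentials
# sm.rotate_secret(**rotate_secret_params)
```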

A company is using the AWS Serverless Application Model (AWS SAM) to develop a social media application. A developer needs a quick way to test AWS Lambda functions locally by using test event payloads. The developer needs the structure of these test event payloads to match the actual events that AWS services create. Which solution will meet these requirements?


A. Create shareable test Lambda events. Use these test Lambda events for local testing.


B. Store manually created test event payloads locally. Use the sam local invoke command with the file path to the payloads.


C. Store manually created test event payloads in an Amazon S3 bucket. Use the sam local invoke command with the S3 path to the payloads.


D. Use the sam local generate-event command to create test payloads for local testing.





D.
  Use the sam local generate-event command to create test payloads for local testing.

Explanation: The AWS Serverless Application Model (AWS SAM) includes features for local testing and debugging of AWS Lambda functions. The most efficient way to generate test payloads that match actual AWS event structures is the sam local generate-event command.
sam local generate-event: This command creates preconfigured test event payloads for various AWS services (for example, Amazon S3, API Gateway, and Amazon SNS). The generated events accurately reflect the format that the service would use in a live environment, removing the manual work of creating these events from scratch.
Operational overhead: This approach reduces overhead because the developer does not need to manually create or maintain test events, and it ensures that the payload structure is correct and up to date with the formats AWS services emit.
Reference: AWS SAM CLI documentation.
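For example, `sam local generate-event s3 put --bucket photos --key cat.jpg` emits an S3 put notification that can be piped to `sam local invoke`. An abbreviated sketch of that payload's shape (the bucket and key are hypothetical), read the same way a Lambda handler would in production:

```python
# Abbreviated shape of the event produced by:
#   sam local generate-event s3 put --bucket photos --key cat.jpg
s3_put_event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "photos"},  # hypothetical bucket
                "object": {"key": "cat.jpg"},  # hypothetical key
            },
        }
    ]
}

# A handler reads the bucket and key exactly as it does for real S3 events.
record = s3_put_event["Records"][0]
bucket = record["s3"]["bucket"]["name"]
key = record["s3"]["object"]["key"]
```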

A company has an AWS Step Functions state machine named myStateMachine. The company configured a service role for Step Functions. The developer must ensure that only the myStateMachine state machine can assume the service role. Which statement should the developer add to the trust policy to meet this requirement?


A. "Condition": { "ArnLike": { "aws:SourceArn":"urn:aws:states:ap-south- 1:111111111111:stateMachine:myStateMachine" } }


B. "Condition": { "ArnLike": { "aws:SourceArn":"arn:aws:states:ap-south- 1:*:stateMachine:myStateMachine" } }


C. "Condition": { "StringEquals": { "aws:SourceAccount": "111111111111" } }


D. "Condition": { "StringNotEquals": { "aws:SourceArn":"arn:aws:states:ap-south- 1:111111111111:stateMachine:myStateMachine" } }





A.
  "Condition": { "ArnLike": { "aws:SourceArn":"urn:aws:states:ap-south- 1:111111111111:stateMachine:myStateMachine" } }

