A social media company is designing a platform that allows users to upload data, which is stored in Amazon S3. Users can upload data encrypted with a public key. The company wants to ensure that only the company can decrypt the uploaded content using an asymmetric encryption key. The data must always be encrypted in transit and at rest. Which solution will meet these requirements?
A. Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.
B. Use server-side encryption with customer-provided encryption keys (SSE-C) to encrypt the data.
C. Use client-side encryption with a data key to encrypt the data.
D. Use client-side encryption with a customer-managed encryption key to encrypt the data.
Explanation:
Step 1: Problem Understanding
Asymmetric Encryption Requirement: Users encrypt data with a public key, and only the company can decrypt it using a private key.
Data Encryption at Rest and In Transit: The data must be encrypted during upload (in transit) and when stored in Amazon S3 (at rest).
Step 2: Solution Analysis
Option A: Server-side encryption with Amazon S3 managed keys (SSE-S3). SSE-S3 uses symmetric keys that AWS owns and manages, so it does not meet the asymmetric public/private key requirement.
Option B: Server-side encryption with customer-provided keys (SSE-C). SSE-C requires a symmetric key to be sent with each request, so it also does not meet the requirement.
Option C: Client-side encryption with a data key. A data key is symmetric, so anyone who obtains the key could decrypt the content.
Option D: Client-side encryption with a customer-managed encryption key. Correct: users encrypt on the client with the public key, and only the company, which holds the private key, can decrypt the content.
Step 3: Implementation Steps for Option D
Generate Key Pair: Create an asymmetric key pair (for example, RSA). Distribute the public key to users and keep the private key with the company.
Encrypt Data on Client Side: Users encrypt the content with the public key before uploading.
S3 Upload: Upload the ciphertext over HTTPS so the data is also encrypted in transit; the stored object remains encrypted at rest.
Decrypt Data on the Server: The company retrieves the object and decrypts it with the private key (see the sketch below).
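A minimal Python sketch of this flow, using the third-party cryptography package and boto3. The bucket name and object key are illustrative assumptions, and a production design would typically wrap a symmetric data key with the RSA key pair (envelope encryption), since RSA can only encrypt small payloads directly.

```python
import boto3
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Company side: generate the key pair once; distribute only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Client side: encrypt with the public key before upload.
ciphertext = public_key.encrypt(b"user-uploaded content", oaep)

# Upload over HTTPS, so the data is also encrypted in transit.
s3 = boto3.client("s3")
s3.put_object(Bucket="example-uploads-bucket", Key="uploads/user1.bin", Body=ciphertext)

# Company side: only the private key holder can decrypt.
obj = s3.get_object(Bucket="example-uploads-bucket", Key="uploads/user1.bin")
plaintext = private_key.decrypt(obj["Body"].read(), oaep)
```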
AWS Developer References:
Amazon S3 Encryption Options
Asymmetric Key Cryptography in AWS
An application that runs on AWS receives messages from an Amazon Simple Queue
Service (Amazon SQS) queue and processes the messages in batches. The
application sends the data to another SQS queue to be consumed by another legacy
application. The legacy system can take up to 5 minutes to process some transaction data.
A developer wants to ensure that there are no out-of-order updates in the legacy system.
The developer cannot alter the behavior of the legacy system.
Which solution will meet these requirements?
A. Use an SQS FIFO queue. Configure the visibility timeout value.
B. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the DelaySeconds values.
C. Use an SQS standard queue with a SendMessageBatchRequestEntry data type. Configure the visibility timeout value.
D. Use an SQS FIFO queue. Configure the DelaySeconds value.
Explanation:
An SQS FIFO queue preserves the order in which messages are sent and received, which prevents out-of-order updates without changing the legacy system. Because the legacy application can take up to 5 minutes to process a message, the visibility timeout must be set above that processing time; otherwise a message still being processed becomes visible again and can be redelivered out of order. DelaySeconds only postpones when a message first becomes visible and does not affect ordering, so option A is the correct choice.
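For reference, ordering in a FIFO queue is scoped by MessageGroupId. A minimal boto3 sketch, with a hypothetical queue URL and message values:

```python
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/legacy-input.fifo"

# Raise the visibility timeout above the legacy system's worst-case
# processing time (5 minutes) to avoid redelivery mid-processing.
sqs.set_queue_attributes(QueueUrl=QUEUE_URL, Attributes={"VisibilityTimeout": "360"})

# FIFO queues deliver messages in strict order within a message group.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"transactionId": "tx-1001", "amount": 42}',
    MessageGroupId="transactions",     # ordering scope
    MessageDeduplicationId="tx-1001",  # or enable content-based deduplication
)
```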
A developer uses AWS CloudFormation to deploy an Amazon API Gateway API and an AWS Step Functions state machine. The state machine must reference the API Gateway API after the CloudFormation template is deployed. The developer needs a solution that uses the state machine to reference the API Gateway endpoint. Which solution will meet these requirements MOST cost-effectively?
A. Configure the CloudFormation template to reference the API endpoint in the DefinitionSubstitutions property for the AWS::StepFunctions::StateMachine resource.
B. Configure the CloudFormation template to store the API endpoint in an environment variable for the AWS::StepFunctions::StateMachine resource. Configure the state machine to reference the environment variable.
C. Configure the CloudFormation template to store the API endpoint in a standard AWS::SecretsManager::Secret resource. Configure the state machine to reference the resource.
D. Configure the CloudFormation template to store the API endpoint in a standard AWS::AppConfig::ConfigurationProfile resource. Configure the state machine to reference the resource.
Explanation:
The DefinitionSubstitutions property of the AWS::StepFunctions::StateMachine resource replaces placeholders in the state machine definition with values, such as the API Gateway endpoint, at deployment time. It requires no additional resources or services, which makes option A the most cost-effective choice. State machines do not have environment variables, and storing the endpoint in Secrets Manager or AppConfig would add cost and complexity.
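A sketch of how DefinitionSubstitutions works, written as a Python dict that prints a CloudFormation resource fragment. The resource names, role, and state machine definition are illustrative assumptions:

```python
import json

# ${apiEndpoint} in DefinitionString is replaced at deployment time by the
# value supplied in DefinitionSubstitutions (names here are hypothetical).
state_machine = {
    "Type": "AWS::StepFunctions::StateMachine",
    "Properties": {
        "RoleArn": {"Fn::GetAtt": ["StateMachineRole", "Arn"]},
        "DefinitionString": json.dumps({
            "StartAt": "CallApi",
            "States": {
                "CallApi": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::apigateway:invoke",
                    "Parameters": {"ApiEndpoint": "${apiEndpoint}", "Method": "GET"},
                    "End": True,
                }
            },
        }),
        "DefinitionSubstitutions": {
            "apiEndpoint": {
                "Fn::Sub": "${RestApi}.execute-api.${AWS::Region}.amazonaws.com"
            }
        },
    },
}
print(json.dumps(state_machine, indent=2))
```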
A company created an application to consume and process data. The application uses Amazon SQS and AWS Lambda functions. The application is currently working as expected, but it occasionally receives several messages that it cannot process properly. The company needs to clear these messages to prevent the queue from becoming blocked. A developer must implement a solution that makes queue processing always operational. The solution must give the company the ability to defer the messages with errors and save these messages for further analysis. What is the MOST operationally efficient solution that meets these requirements?
A. Configure Amazon CloudWatch Logs to save the error messages to a separate log stream.
B. Create a new SQS queue. Set the new queue as a dead-letter queue for the application queue. Configure the Maximum Receives setting.
C. Change the SQS queue to a FIFO queue. Configure the message retention period to 0 seconds.
D. Configure an Amazon CloudWatch alarm for Lambda function errors. Publish messages to an Amazon SNS topic to notify administrator users.
Explanation:
Using a dead-letter queue (DLQ) with Amazon SQS is the most operationally efficient
solution for handling unprocessable messages.
Amazon SQS Dead-Letter Queue: A dead-letter queue is a separate queue that receives messages after the application queue has tried to process them more times than the configured Maximum Receives (maxReceiveCount) value.
Why DLQ is the Best Option: Failing messages are moved aside automatically, so the application queue is never blocked, and the messages are preserved in the DLQ for later analysis.
Why Not Other Options: Option A only logs errors and does not remove the blocking messages. Option C, with a 0-second retention period, would discard messages instead of saving them. Option D notifies administrators but still requires manual work to clear the queue.
Steps to Implement:
Create a new SQS queue to serve as the DLQ.
Attach the DLQ to the primary queue and configure the Maximum Receives setting, as sketched below.
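A minimal boto3 sketch of those two steps; the queue names are hypothetical, and maxReceiveCount corresponds to the Maximum Receives setting:

```python
import boto3
import json

sqs = boto3.client("sqs")

# Step 1: create the dead-letter queue.
dlq_url = sqs.create_queue(QueueName="app-queue-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Step 2: attach the DLQ to the application queue. After 5 failed receives,
# SQS moves the message to the DLQ instead of blocking the main queue.
sqs.create_queue(
    QueueName="app-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```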
References:
Using Amazon SQS Dead-Letter Queues
Best Practices for Using Amazon SQS with AWS Lambda
A company has built an AWS Lambda function to convert large image files into output files that can be used in a third-party viewer application. The company recently added a new module to the function to improve the output of the generated files. However, the new module has increased the bundle size and has increased the time that is needed to deploy changes to the function code.
How can a developer increase the speed of the Lambda function deployment?
A. Use AWS CodeDeploy to deploy the function code
B. Use Lambda layers to package and load dependencies.
C. Increase the memory size of the function.
D. Use Amazon S3 to host the function dependencies
Explanation:
Lambda layers let the developer package the large dependencies separately from the function code. The layer is published once and referenced by the function, so each code change redeploys only the small function bundle, which speeds up deployments.
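A boto3 sketch of publishing the dependencies as a layer and attaching it to the function; the names and S3 location are hypothetical:

```python
import boto3

lam = boto3.client("lambda")

# Publish the large dependency bundle once as a layer version.
layer = lam.publish_layer_version(
    LayerName="image-module-deps",
    Content={"S3Bucket": "deploy-artifacts-bucket", "S3Key": "layers/deps.zip"},
    CompatibleRuntimes=["python3.12"],
)

# Attach the layer; future deployments upload only the small function bundle.
lam.update_function_configuration(
    FunctionName="image-converter",
    Layers=[layer["LayerVersionArn"]],
)
```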
A company has an application that is deployed on AWS Elastic Beanstalk. The application
generates user-specific PDFs and stores the PDFs in an Amazon S3 bucket. The
application then uses Amazon Simple Email Service (Amazon SES) to send the PDFs by
email to subscribers.
Users no longer access the PDFs 90 days after the PDFs are generated. The S3 bucket is
not versioned and contains many obsolete PDFs.
A developer must reduce the number of files in the S3 bucket by removing PDFs that are
older than 90 days.
Which solution will meet this requirement with the LEAST development effort?
A. Update the application code. In the code, add a rule to scan all the objects in the S3 bucket every day and to delete objects after 90 days.
B. Create an AWS Lambda function. Program the Lambda function to scan all the objects in the S3 bucket every day and to delete objects after 90 days.
C. Create an S3 Lifecycle rule for the S3 bucket to expire objects after 90 days.
D. Partition the S3 objects with a
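For context on option C, a lifecycle expiration rule is a one-time bucket configuration with no application code to write or run. A boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Delete objects automatically 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="pdf-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-pdfs-after-90-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Expiration": {"Days": 90},
            }
        ]
    },
)
```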
A developer is implementing a serverless application by using AWS CloudFormation to provision Amazon S3 web hosting, Amazon API Gateway, and AWS Lambda functions.
The Lambda function source code is zipped and uploaded to an S3 bucket. The S3 object
key of the zipped source code is specified in the Lambda resource in the CloudFormation
template.
The developer notices that there are no changes in the Lambda function every time the
CloudFormation stack is updated.
How can the developer resolve this issue?
A. Create a new Lambda function alias before updating the CloudFormation stack.
B. Change the S3 object key or the S3 version in the CloudFormation template before updating the CloudFormation stack.
C. Upload the zipped source code to another S3 bucket before updating the CloudFormation stack.
D. Associate a code signing configuration with the Lambda function before updating the CloudFormation stack.
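CloudFormation only redeploys function code when one of the AWS::Lambda::Function Code properties changes, which is what option B does. A template fragment sketched as a Python dict, with hypothetical bucket, key, and version values:

```python
import json

# Changing S3Key (or S3ObjectVersion) signals to CloudFormation that the
# code changed, which forces the function to update on the next stack update.
function_resource = {
    "Type": "AWS::Lambda::Function",
    "Properties": {
        "Handler": "app.handler",
        "Runtime": "python3.12",
        "Role": {"Fn::GetAtt": ["FunctionRole", "Arn"]},
        "Code": {
            "S3Bucket": "deploy-artifacts-bucket",
            "S3Key": "lambda/source-v2.zip",          # new key per release, or
            "S3ObjectVersion": "3HL4kqtJlcpXroDTDmJ",  # new version on a versioned bucket
        },
    },
}
print(json.dumps(function_resource, indent=2))
```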
A company is creating an AWS Step Functions state machine to run a set of tests for an application. The tests need to run when a specific AWS CloudFormation stack is deployed. Which combination of steps will meet these requirements? (Select TWO.)
A. Create an AWS Lambda function to invoke the state machine.
B. Create an Amazon EventBridge rule on the default bus that matches on a detail type of CloudFormation stack status change, a status of UPDATE_IN_PROGRESS, and the stack ID of the CloudFormation stack.
C. Create a pipe in Amazon EventBridge Pipes that has a source of the default event bus. Set the Lambda function as a target. Filter on a detail type of CloudFormation stack status change, a status of UPDATE_IN_PROGRESS, and the stack ID of the CloudFormation stack.
D. Create a pipe in Amazon EventBridge Pipes that has a source of the EventBridge rule. Set the state machine as a target.
E. Add the state machine as a target of the EventBridge rule.
Explanation: Requirement Summary:
Trigger an AWS Step Functions state machine (test execution)
Only when a specific AWS CloudFormation stack is deployed
Option A: Create a Lambda function to invoke the state machine
Valid approach: Lambda can be used as an intermediary trigger for Step Functions using the SDK (e.g., the StartExecution API).
Offers flexibility (custom filtering, additional logic).
Option B: Create EventBridge rule filtering on UPDATE_IN_PROGRESS
Incorrect: UPDATE_IN_PROGRESS triggers before the stack is fully deployed.
You need to trigger after deployment, such as UPDATE_COMPLETE or CREATE_COMPLETE.
Option C: EventBridge Pipes with Lambda target filtering on UPDATE_IN_PROGRESS
Incorrect for the same reason as B (wrong timing).
Also, EventBridge Pipes are not necessary here if you're using rules directly.
Option D: Pipe with EventBridge rule as source and Step Functions as target
Invalid setup: EventBridge Pipes use event sources, not rules, as input.
This configuration is unsupported.
Option E: Add the state machine as a target of the EventBridge rule
Direct and low-overhead approach.
EventBridge natively supports Step Functions as a target.
You can trigger the state machine without a Lambda if the filter matches (e.g., ResourceStatus = CREATE_COMPLETE, with the correct StackId), as sketched below.
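A boto3 sketch of the rule and target described above, filtering on a completed deployment of one specific stack; all ARNs and names are hypothetical:

```python
import boto3
import json

events = boto3.client("events")

# Rule on the default bus that matches one specific stack reaching a
# completed status (per the explanation above, *_COMPLETE, not *_IN_PROGRESS).
events.put_rule(
    Name="run-tests-on-stack-deploy",
    EventPattern=json.dumps({
        "source": ["aws.cloudformation"],
        "detail-type": ["CloudFormation Stack Status Change"],
        "detail": {
            "stack-id": ["arn:aws:cloudformation:us-east-1:123456789012:stack/app-stack/1a2b3c4d"],
            "status-details": {"status": ["CREATE_COMPLETE", "UPDATE_COMPLETE"]},
        },
    }),
)

# Option E: the state machine is a native EventBridge target; no Lambda needed.
events.put_targets(
    Rule="run-tests-on-stack-deploy",
    Targets=[{
        "Id": "test-state-machine",
        "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:app-tests",
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-invoke-stepfunctions",
    }],
)
```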
A developer is using AWS Amplify Hosting to build and deploy an application. The developer is receiving an increased number of bug reports from users. The developer wants to add end-to-end testing to the application to eliminate as many bugs as possible before the bugs reach production. Which solution should the developer implement to meet these requirements?
A. Run the amplify add test command in the Amplify CLI.
B. Create unit tests in the application. Deploy the unit tests by using the amplify push command in the Amplify CLI.
C. Add a test phase to the amplify.yml build settings for the application.
D. Add a test phase to the aws-exports.js file for the application.
Explanation: The solution that will meet the requirements is to add a test phase to the amplify.yml build settings for the application. This way, the developer can run end-to-end tests on every code commit and catch any bugs before deploying to production. The other options either do not support end-to-end testing, or do not run tests automatically.
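A sketch of what the test phase in amplify.yml can look like, captured here in a Python constant; the commands are illustrative assumptions that follow the Cypress-based end-to-end testing pattern from the Amplify documentation:

```python
# Contents of amplify.yml (test phase); Amplify Hosting runs these commands
# during the Test stage of each build and fails the deployment if tests fail.
AMPLIFY_YML = """
test:
  phases:
    preTest:
      commands:
        - npm ci
        - npm install -g pm2
        - pm2 start npm -- start   # serve the app for the e2e run
    test:
      commands:
        - npx cypress run          # end-to-end tests
    postTest:
      commands:
        - pm2 kill
"""
```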
A developer is using an AWS CloudFormation template to create a pipeline in AWS
CodePipeline. The template creates an Amazon S3 bucket that the pipeline references in a
source stage. The template also creates an AWS CodeBuild project for a build stage. The
pipeline sends notifications to an Amazon SNS topic. Logs for the CodeBuild project are
stored in Amazon CloudWatch Logs.
The company needs to ensure that the pipeline's artifacts are encrypted with an existing
customer-managed AWS KMS key. The developer has granted the pipeline permissions to
use the KMS key.
Which additional step will meet these requirements?
A. Create an Amazon S3 gateway endpoint that the pipeline can access.
B. In the CloudFormation template, use the KMS key to encrypt the logs in CloudWatch Logs.
C. Apply an S3 bucket policy that ensures the pipeline sends only encrypted objects to the S3 bucket.
D. Configure the notification topic to use the existing KMS key to enable encryption with the existing KMS key.
Explanation: Why Option C is Correct: Ensuring that pipeline artifacts are encrypted with a customer-managed AWS KMS key involves configuring the S3 bucket policy to require encryption. This policy ensures all objects uploaded to the bucket are encrypted with the specified KMS key.
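A sketch of such a bucket policy applied with boto3; the bucket name and KMS key ARN are placeholders:

```python
import boto3
import json

BUCKET = "pipeline-artifacts-bucket"  # hypothetical
KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Deny any PutObject that is not encrypted with the customer-managed key.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireCustomerManagedKmsKey",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": KEY_ARN
                }
            },
        }
    ],
}
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```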
A company has an application that is hosted on Amazon EC2 instances. The application stores objects in an Amazon S3 bucket and allows users to download objects from the S3 bucket. A developer turns on S3 Block Public Access for the S3 bucket. After this change, users report errors when they attempt to download objects. The developer needs to implement a solution so that only users who are signed in to the application can access objects in the S3 bucket. Which combination of steps will meet these requirements in the MOST secure way? (Select TWO.)
A. Create an EC2 instance profile and role with an appropriate policy. Associate the role with the EC2 instances.
B. Create an IAM user with an appropriate policy. Store the access key ID and secret access key on the EC2 instances.
C. Modify the application to use the S3 GeneratePresignedUrl API call.
D. Modify the application to use the S3 GetObject API call and to return the object handle to the user.
E. Modify the application to delegate requests to the S3 bucket.
Explanation:
The most secure combination is options A and C. An instance profile provides the application with temporary credentials, so no long-term access keys are stored on the instances (which rules out option B). With those credentials, the application can generate presigned URLs that give signed-in users time-limited access to specific objects while the bucket itself remains private.
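A sketch of option C with credentials supplied by the instance profile from option A; the bucket and key names are hypothetical:

```python
import boto3

# On EC2, boto3 automatically picks up the instance profile credentials,
# so no access keys are stored on the instance (why option B is avoided).
s3 = boto3.client("s3")

# Return this URL only to an authenticated user; it expires in 5 minutes.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "app-downloads-bucket", "Key": "reports/user1.pdf"},
    ExpiresIn=300,
)
```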
A developer is creating an AWS Lambda function. The Lambda function needs an external library to connect to a third-party solution. The external library is a collection of files with a total size of 100 MB. The developer needs to make the external library available to the Lambda execution environment and reduce the Lambda package size. Which solution will meet these requirements with the LEAST operational overhead?
A. Create a Lambda layer to store the external library. Configure the Lambda function to use the layer.
B. Create an Amazon S3 bucket. Upload the external library into the S3 bucket. Mount the S3 bucket folder in the Lambda function. Import the library by using the proper folder in the mount point.
C. Load the external library into the Lambda function's /tmp directory during deployment of the Lambda package. Import the library from the /tmp directory.
D. Create an Amazon Elastic File System (Amazon EFS) volume. Upload the external library to the EFS volume. Mount the EFS volume in the Lambda function. Import the library by using the proper folder in the mount point.