Topic 1: Exam Pool A
A company is running several workloads in a single AWS account. A new company policy
states that engineers can provision only approved resources and that engineers must use
AWS CloudFormation to provision these resources. A solutions architect needs to create a
solution to enforce the new restriction on the IAM role that the engineers use for access.
What should the solutions architect do to create the solution?
A. Upload AWS CloudFormation templates that contain approved resources to an Amazon S3 bucket. Update the IAM policy for the engineers' IAM role to only allow access to Amazon S3 and AWS CloudFormation. Use AWS CloudFormation templates to provision resources.
B. Update the IAM policy for the engineers' IAM role with permissions to only allow provisioning of approved resources and AWS CloudFormation. Use AWS CloudFormation templates to create stacks with approved resources.
C. Update the IAM policy for the engineers' IAM role with permissions to only allow AWS CloudFormation actions. Create a new IAM policy with permission to provision approved resources, and assign the policy to a new IAM service role. Assign the IAM service role to AWS CloudFormation during stack creation.
D. Provision resources in AWS CloudFormation stacks. Update the IAM policy for the engineers' IAM role to only allow access to their own AWS CloudFormation stack.
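Option C describes the CloudFormation service role pattern: engineers get only CloudFormation permissions, and the stack itself runs with a role that holds the provisioning permissions. A minimal boto3 sketch of creating a stack with such a role (the stack name, template URL, and role ARN are placeholders, not values from the question):

```python
import boto3

cfn = boto3.client("cloudformation")

# The engineer's IAM role only needs cloudformation:* permissions; the
# resources in the template are provisioned using the service role
# passed in RoleARN, which holds the approved-resource permissions.
cfn.create_stack(
    StackName="engineer-stack",  # placeholder name
    TemplateURL="https://example-bucket.s3.amazonaws.com/approved-template.yaml",
    RoleARN="arn:aws:iam::123456789012:role/cfn-approved-resources-role",
)
```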
A retail company is operating its ecommerce application on AWS. The application runs on
Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses an
Amazon RDS DB instance as the database backend. Amazon CloudFront is configured
with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used
to host all public zones.
After an update of the application, the ALB occasionally returns a 502 status code (Bad
Gateway) error. The root cause is malformed HTTP headers that are returned to the ALB.
The webpage returns successfully when a solutions architect reloads the webpage
immediately after the error occurs.
While the company is working on the problem, the solutions architect needs to provide a
custom error page instead of the standard ALB error page to visitors.
Which combination of steps will meet this requirement with the LEAST amount of
operational overhead? (Choose two.)
A. Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
B. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Target.FailedHealthChecks is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
C. Modify the existing Amazon Route 53 records by adding health checks. Configure a fallback target if the health check fails. Modify DNS records to point to a publicly accessible webpage.
D. Create an Amazon CloudWatch alarm to invoke an AWS Lambda function if the ALB health check response Elb.InternalError is greater than 0. Configure the Lambda function to modify the forwarding rule at the ALB to point to a publicly accessible web server.
E. Add a custom error response by configuring a CloudFront custom error page. Modify DNS records to point to a publicly accessible web page.
Explanation: "Save your custom error pages in a location that is accessible to CloudFront. We recommend that you store them in an Amazon S3 bucket, and that you don’t store them in the same place as the rest of your website or application’s content. If you store the custom error pages on the same origin as your website or application, and the origin starts to return 5xx errors, CloudFront can’t get the custom error pages because the origin server is unavailable."
A company has hundreds of AWS accounts. The company recently implemented a
centralized internal process for purchasing new Reserved Instances and modifying existing
Reserved Instances. This process requires all business units that want to purchase or
modify Reserved Instances to submit requests to a dedicated team for procurement.
Previously, business units directly purchased or modified Reserved Instances in their own
respective AWS accounts autonomously.
A solutions architect needs to enforce the new process in the most secure way possible.
Which combination of steps should the solutions architect take to meet these
requirements? (Choose two.)
A. Ensure that all AWS accounts are part of an organization in AWS Organizations with all features enabled.
B. Use AWS Config to report on the attachment of an IAM policy that denies access to the ec2:PurchaseReservedInstancesOffering action and the ec2:ModifyReservedInstances action.
C. In each AWS account, create an IAM policy that denies the ec2:PurchaseReservedInstancesOffering action and the ec2:ModifyReservedInstances action.
D. Create an SCP that denies the ec2:PurchaseReservedInstancesOffering action and the ec2:ModifyReservedInstances action. Attach the SCP to each OU of the organization.
E. Ensure that all AWS accounts are part of an organization in AWS Organizations that uses the consolidated billing feature.
Explanation: All features – The default feature set that is available to AWS Organizations. It includes all the functionality of consolidated billing, plus advanced features that give you more control over accounts in your organization. For example, when all features are enabled the management account of the organization has full control over what member accounts can do. The management account can apply SCPs to restrict the services and actions that users (including the root user) and roles in an account can access.
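The SCP in option D could be created and attached roughly as in this minimal boto3 sketch, run from the organization's management account (the policy name and OU ID are placeholders):

```python
import boto3

org = boto3.client("organizations")

# Deny direct Reserved Instance purchases/modifications org-wide.
scp_document = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Action": [
      "ec2:PurchaseReservedInstancesOffering",
      "ec2:ModifyReservedInstances"
    ],
    "Resource": "*"
  }]
}"""

policy = org.create_policy(
    Name="DenyReservedInstanceChanges",  # placeholder name
    Description="Only the procurement team may manage Reserved Instances",
    Type="SERVICE_CONTROL_POLICY",
    Content=scp_document,
)

# Attach the SCP to an OU (repeat for each OU; the ID is a placeholder).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```

Unlike an IAM policy in each account (option C), an SCP cannot be removed by account administrators, which is why it is the more secure enforcement mechanism.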
A company is running applications on AWS in a multi-account environment. The company's
sales team and marketing team use separate AWS accounts in AWS Organizations.
The sales team stores petabytes of data in an Amazon S3 bucket. The marketing team
uses Amazon QuickSight for data visualizations. The marketing team needs access to data
that the sales team stores in the S3 bucket. The company has encrypted the S3 bucket
with an AWS Key Management Service (AWS KMS) key. The marketing team has already
created the IAM service role for QuickSight to provide QuickSight access in the marketing
AWS account. The company needs a solution that will provide secure access to the data in
the S3 bucket across AWS accounts.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new S3 bucket in the marketing account. Create an S3 replication rule in the sales account to copy the objects to the new S3 bucket in the marketing account. Update the QuickSight permissions in the marketing account to grant access to the new S3 bucket.
B. Create an SCP to grant access to the S3 bucket to the marketing account. Use AWS Resource Access Manager (AWS RAM) to share the KMS key from the sales account with the marketing account. Update the QuickSight permissions in the marketing account to grant access to the S3 bucket.
C. Update the S3 bucket policy in the marketing account to grant access to the QuickSight role. Create a KMS grant for the encryption key that is used in the S3 bucket. Grant decrypt access to the QuickSight role. Update the QuickSight permissions in the marketing account to grant access to the S3 bucket.
D. Create an IAM role in the sales account and grant access to the S3 bucket. From the marketing account, assume the IAM role in the sales account to access the S3 bucket. Update the QuickSight role to create a trust relationship with the new IAM role in the sales account.
Explanation: Create an IAM role in the sales account and grant access to the S3 bucket.
From the marketing account, assume the IAM role in the sales account to access the S3
bucket. Update the QuickSight role to create a trust relationship with the new IAM role in
the sales account.
This approach is the most secure way to grant cross-account access to the data in the S3
bucket while minimizing operational overhead. By creating an IAM role in the sales
account, the marketing team can assume the role in their own account, and have access to
the S3 bucket. Updating the QuickSight role to create a trust relationship with the new IAM role in the sales account will allow the marketing team to access the data in the S3 bucket and use it for data visualization in QuickSight.
AWS Resource Access Manager (AWS RAM) also allows sharing of resources between
accounts, but it would require additional management and configuration to set up the
sharing, which would increase operational overhead.
Using S3 replication would also copy the data to the marketing account, but it would not provide the marketing team with access to the original data, and it would add the operational overhead of managing the replication process.
IAM roles and policies, KMS grants and trust relationships are a powerful combination for
managing cross-account access in a secure and efficient manner.
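As a minimal boto3 sketch of the role-assumption flow the explanation describes (the role ARN, session name, and bucket name are placeholders):

```python
import boto3

sts = boto3.client("sts")

# From the marketing account, assume the role the sales account exposes.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111111111111:role/sales-s3-read-role",  # placeholder
    RoleSessionName="quicksight-data-access",
)["Credentials"]

# Use the temporary credentials to read from the sales team's bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
objects = s3.list_objects_v2(Bucket="sales-data-bucket")  # placeholder bucket
```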
A company wants to deploy an AWS WAF solution to manage AWS WAF rules across
multiple AWS accounts. The accounts are managed under different OUs in AWS
Organizations.
Administrators must be able to add or remove accounts or OUs from managed AWS WAF rule sets as needed. Administrators also must have the ability to automatically update and remediate noncompliant AWS WAF rules in all accounts.
Which solution meets these requirements with the LEAST amount of operational
overhead?
A. Use AWS Firewall Manager to manage AWS WAF rules across accounts in the organization. Use an AWS Systems Manager Parameter Store parameter to store account numbers and OUs to manage. Update the parameter as needed to add or remove accounts or OUs. Use an Amazon EventBridge (Amazon CloudWatch Events) rule to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account.
B. Deploy an organization-wide AWS Config rule that requires all resources in the selected OUs to associate the AWS WAF rules. Deploy automated remediation actions by using AWS Lambda to fix noncompliant resources. Deploy AWS WAF rules by using an AWS CloudFormation stack set to target the same OUs where the AWS Config rule is applied.
C. Create AWS WAF rules in the management account of the organization. Use AWS Lambda environment variables to store account numbers and OUs to manage. Update environment variables as needed to add or remove accounts or OUs. Create cross-account IAM roles in member accounts. Assume the roles by using AWS Security Token Service (AWS STS) in the Lambda function to create and update AWS WAF rules in the member accounts.
D. Use AWS Control Tower to manage AWS WAF rules across accounts in the organization. Use AWS Key Management Service (AWS KMS) to store account numbers and OUs to manage. Update AWS KMS as needed to add or remove accounts or OUs. Create IAM users in member accounts. Allow AWS Control Tower in the management account to use the access key and secret access key to create and update AWS WAF rules in the member accounts.
Explanation: In this solution, AWS Firewall Manager is used to manage AWS WAF rules across accounts in the organization. An AWS Systems Manager Parameter Store parameter is used to store account numbers and OUs to manage. This parameter can be updated as needed to add or remove accounts or OUs. An Amazon EventBridge rule is used to identify any changes to the parameter and to invoke an AWS Lambda function to update the security policy in the Firewall Manager administrative account. This solution allows for easy management of AWS WAF rules across multiple accounts with minimal operational overhead.
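A sketch of the Lambda function the explanation describes, assuming the scope is stored as JSON in the Parameter Store parameter; the parameter name, policy ID, and JSON shape are all assumptions for illustration:

```python
import json
import boto3

ssm = boto3.client("ssm")
fms = boto3.client("fms")  # must run in the Firewall Manager administrator account

def handler(event, context):
    # Assumed parameter value, e.g.:
    # {"accounts": ["111111111111"], "org_units": ["ou-abcd-11111111"]}
    scope = json.loads(
        ssm.get_parameter(Name="/waf/managed-scope")["Parameter"]["Value"]
    )

    # Fetch the existing Firewall Manager WAF policy (placeholder ID),
    # re-scope it to the updated accounts/OUs, and write it back.
    policy = fms.get_policy(PolicyId="policy-id-placeholder")["Policy"]
    policy["IncludeMap"] = {
        "ACCOUNT": scope["accounts"],
        "ORG_UNIT": scope["org_units"],
    }
    fms.put_policy(Policy=policy)
```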
A retail company is hosting an ecommerce website on AWS across multiple AWS Regions.
The company wants the website to be operational at all times for online purchases. The website stores data in an Amazon RDS for MySQL DB instance.
Which solution will provide the HIGHEST availability for the database?
A. Configure automated backups on Amazon RDS. In the case of disruption, promote an automated backup to be a standalone DB instance. Direct database traffic to the promoted DB instance. Create a replacement read replica that has the promoted DB instance as its source.
B. Configure global tables and read replicas on Amazon RDS. Activate the cross-Region scope. In the case of disruption, use AWS Lambda to copy the read replicas from one Region to another Region.
C. Configure global tables and automated backups on Amazon RDS. In the case of disruption, use AWS Lambda to copy the read replicas from one Region to another Region.
D. Configure read replicas on Amazon RDS. In the case of disruption, promote a cross-Region read replica to be a standalone DB instance. Direct database traffic to the promoted DB instance. Create a replacement read replica that has the promoted DB instance as its source.
Explanation: This solution will provide the highest availability for the database, as the read replicas will allow the database to be available in multiple Regions, thus reducing the chances of disruption. Additionally, the promotion of the cross-Region read replica to become a standalone DB instance will ensure that the database is still available even if one of the Regions experiences disruptions.
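The promotion step in option D is a single API call; a minimal boto3 sketch (the Region and DB instance identifier are placeholders):

```python
import boto3

# Run in the Region that hosts the cross-Region read replica.
rds = boto3.client("rds", region_name="us-west-2")  # placeholder Region

# Promotion detaches the replica from its source and makes it a
# standalone, writable DB instance.
rds.promote_read_replica(DBInstanceIdentifier="mysql-replica-usw2")  # placeholder
```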
A company has an on-premises monitoring solution using a PostgreSQL database for
persistence of events. The database is unable to scale due to heavy ingestion and it
frequently runs out of storage.
The company wants to create a hybrid solution and has already set up a VPN connection
between its network and AWS. The solution should include the following attributes:
• Managed AWS services to minimize operational complexity.
• A buffer that automatically scales to match the throughput of data and requires no ongoing administration.
• A visualization tool to create dashboards to observe events in near-real time.
• Support for semi-structured JSON data and dynamic schemas.
Which combination of components will enable the company to create a monitoring solution that will satisfy these requirements? (Select TWO.)
A. Use Amazon Kinesis Data Firehose to buffer events. Create an AWS Lambda function to process and transform events.
B. Create an Amazon Kinesis data stream to buffer events. Create an AWS Lambda function to process and transform events.
C. Configure an Amazon Aurora PostgreSQL DB cluster to receive events. Use Amazon QuickSight to read from the database and create near-real-time visualizations and dashboards.
D. Configure Amazon Elasticsearch Service (Amazon ES) to receive events. Use the Kibana endpoint deployed with Amazon ES to create near-real-time visualizations and dashboards.
E. Configure an Amazon Neptune DB instance to receive events. Use Amazon QuickSight to read from the database and create near-real-time visualizations and dashboards.
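Option A's Firehose buffer can be fed with a single API call per event batch; a minimal boto3 sketch (the stream name and event payload are placeholders, and the delivery stream would be configured with the Amazon ES domain as its destination):

```python
import json
import boto3

firehose = boto3.client("firehose")

# A sample semi-structured JSON event (placeholder content).
event = {"source": "monitor-01", "level": "ERROR", "detail": {"code": 42}}

# Firehose buffers and scales automatically; no shard management needed.
firehose.put_record(
    DeliveryStreamName="events-to-es",  # placeholder stream name
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```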
A company runs a Java application that has complex dependencies on VMs that are in the
company's data center. The application is stable, but the company wants to modernize the
technology stack. The company wants to migrate the application to AWS and minimize the
administrative overhead to maintain the servers.
Which solution will meet these requirements with the LEAST code changes?
A. Migrate the application to Amazon Elastic Container Service (Amazon ECS) on AWS Fargate by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Grant the ECS task execution role permission to access the ECR image repository. Configure Amazon ECS to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
B. Migrate the application code to a container that runs in AWS Lambda. Build an Amazon API Gateway REST API with Lambda integration. Use API Gateway to interact with the application.
C. Migrate the application to Amazon Elastic Kubernetes Service (Amazon EKS) on EKS managed node groups by using AWS App2Container. Store container images in Amazon Elastic Container Registry (Amazon ECR). Give the EKS nodes permission to access the ECR image repository. Use Amazon API Gateway to interact with the application.
D. Migrate the application code to a container that runs in AWS Lambda. Configure Lambda to use an Application Load Balancer (ALB). Use the ALB to interact with the application.
Explanation: According to the AWS documentation, AWS App2Container (A2C) is a
command line tool for migrating and modernizing Java and .NET web applications into
container format. AWS A2C analyzes and builds an inventory of applications running in
bare metal, virtual machines, Amazon Elastic Compute Cloud (EC2) instances, or in the
cloud. You can use AWS A2C to generate container images for your applications and
deploy them on Amazon ECS or Amazon EKS.
Option A meets the requirements of the scenario because it allows you to migrate your
existing Java application to AWS and minimize the administrative overhead to maintain the
servers. You can use AWS A2C to analyze your application dependencies, extract
application artifacts, and generate a Dockerfile. You can then store your container images
in Amazon ECR, which is a fully managed container registry service. You can use AWS
Fargate as the launch type for your Amazon ECS cluster, which is a serverless compute
engine that eliminates the need to provision and manage servers for your containers. You
can grant the ECS task execution role permission to access the ECR image repository,
which allows your tasks to pull images from ECR. You can configure Amazon ECS to use
an ALB, which is a load balancer that distributes traffic across multiple targets in multiple
Availability Zones using HTTP or HTTPS protocols. You can use the ALB to interact with
your application.
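As a sketch of the final state option A describes, the following boto3 call wires an ECS service on Fargate (running the task definition built from the App2Container-generated image in ECR) to an ALB target group. Every name, ARN, subnet, security group, and port below is a placeholder:

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="java-app-cluster",       # placeholder cluster name
    serviceName="java-app",
    taskDefinition="java-app:1",      # image produced by App2Container, stored in ECR
    desiredCount=2,
    launchType="FARGATE",             # no servers to provision or manage
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc", "subnet-0def"],  # placeholders
            "securityGroups": ["sg-0123"],              # placeholder
        }
    },
    # Register tasks with the ALB target group so the ALB fronts the app.
    loadBalancers=[
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/java-app/abc",
            "containerName": "java-app",
            "containerPort": 8080,    # placeholder port
        }
    ],
)
```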
A company is developing a new serverless API by using Amazon API Gateway and AWS
Lambda. The company integrated the Lambda functions with API Gateway to use several
shared libraries and custom classes.
A solutions architect needs to simplify the deployment of the solution and optimize for code
reuse.
Which solution will meet these requirements?
A. Deploy the shared libraries and custom classes into a Docker image. Store the image in an S3 bucket. Create a Lambda layer that uses the Docker image as the source. Deploy the API's Lambda functions as Zip packages. Configure the packages to use the Lambda layer.
B. Deploy the shared libraries and custom classes to a Docker image. Upload the image to Amazon Elastic Container Registry (Amazon ECR). Create a Lambda layer that uses the Docker image as the source. Deploy the API's Lambda functions as Zip packages. Configure the packages to use the Lambda layer.
C. Deploy the shared libraries and custom classes to a Docker container in Amazon Elastic Container Service (Amazon ECS) by using the AWS Fargate launch type. Deploy the API's Lambda functions as Zip packages. Configure the packages to use the deployed container as a Lambda layer.
D. Deploy the shared libraries, custom classes, and code for the API's Lambda functions to a Docker image. Upload the image to Amazon Elastic Container Registry (Amazon ECR). Configure the API's Lambda functions to use the Docker image as the deployment package.
Explanation: Deploying the shared libraries and custom classes to a Docker image and
uploading the image to Amazon Elastic Container Registry (Amazon ECR) and creating a
Lambda layer that uses the Docker image as the source. Then, deploying the API's
Lambda functions as Zip packages and configuring the packages to use the Lambda layer
would meet the requirements for simplifying the deployment and optimizing for code reuse.
A Lambda layer is a distribution mechanism for libraries, custom runtimes, and other
function dependencies. It allows you to manage your in-development function code
separately from your dependencies; this way you can easily update your dependencies
without having to update your entire function code.
By deploying the shared libraries and custom classes to a Docker image and uploading the
image to Amazon Elastic Container Registry (ECR), it makes it easy to manage and
version the dependencies. This way, the company can use the same version of the
dependencies across different Lambda functions.
By creating a Lambda layer that uses the Docker image as the source, the company can
configure the API's Lambda functions to use the layer, reducing the need to include the
dependencies in each function package, and making it easy to update the dependencies
across all functions at once.
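Setting aside the image-packaging detail, the layer mechanics the explanation relies on look like this minimal boto3 sketch, which publishes a standard zip-based layer and attaches it to a function; the bucket, key, layer name, runtime, and function name are all placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Publish the shared libraries and custom classes as a layer version
# (a zip of the dependencies staged in S3; placeholder bucket/key).
layer = lam.publish_layer_version(
    LayerName="shared-libs",
    Content={"S3Bucket": "artifact-bucket", "S3Key": "layers/shared-libs.zip"},
    CompatibleRuntimes=["python3.12"],  # placeholder runtime
)

# Point each API Lambda function at the layer so the dependencies are
# no longer bundled into every function's deployment package.
lam.update_function_configuration(
    FunctionName="orders-api-handler",  # placeholder function name
    Layers=[layer["LayerVersionArn"]],
)
```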
A solutions architect is investigating an issue in which a company cannot establish new
sessions in Amazon WorkSpaces. An initial analysis indicates that the issue involves user
profiles. The Amazon WorkSpaces environment is configured to use Amazon FSx for
Windows File Server as the profile share storage. The FSx for Windows File Server file
system is configured with 10 TB of storage.
The solutions architect discovers that the file system has reached its maximum capacity.
The solutions architect must ensure that users can regain access. The solution also must
prevent the problem from occurring again.
Which solution will meet these requirements?
A. Remove old user profiles to create space. Migrate the user profiles to an Amazon FSx for Lustre file system.
B. Increase capacity by using the update-file-system command. Implement an Amazon CloudWatch metric that monitors free space. Use Amazon EventBridge to invoke an AWS Lambda function to increase capacity as required.
C. Monitor the file system by using the FreeStorageCapacity metric in Amazon CloudWatch. Use AWS Step Functions to increase the capacity as required.
D. Remove old user profiles to create space. Create an additional FSx for Windows File Server file system. Update the user profile redirection for 50% of the users to use the new file system.
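The capacity increase in option B maps to the FSx UpdateFileSystem API, which the alarm-triggered Lambda function could call; a minimal boto3 sketch (the file system ID and target size are placeholders):

```python
import boto3

fsx = boto3.client("fsx")

# Grow the FSx for Windows File Server file system. FSx only allows
# increases, and the new value must be at least 10% larger than the
# current capacity (here 10 TiB -> 12 TiB, in GiB).
fsx.update_file_system(
    FileSystemId="fs-0123456789abcdef0",  # placeholder ID
    StorageCapacity=12288,
)
```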
A company plans to refactor a monolithic application into a modern application designed to be deployed on AWS. The CI/CD pipeline needs to be upgraded to support the modern design for the application with the following requirements:
• It should allow changes to be released several times every hour.
• It should be able to roll back the changes as quickly as possible.
Which design will meet these requirements?
A. Deploy a CI/CD pipeline that incorporates AMIs to contain the application and their configurations. Deploy the application by replacing Amazon EC2 instances.
B. Specify AWS Elastic Beanstalk to stage in a secondary environment as the deployment target for the CI/CD pipeline of the application. To deploy, swap the staging and production environment URLs.
C. Use AWS Systems Manager to re-provision the infrastructure for each deployment. Update the Amazon EC2 user data to pull the latest code artifact from Amazon S3 and use Amazon Route 53 weighted routing to point to the new environment.
D. Roll out the application updates as part of an Auto Scaling event using prebuilt AMIs. Use new versions of the AMIs to add instances, and phase out all instances that use the previous AMI version with the configured termination policy during a deployment event.
Explanation: Swapping the staging and production environment URLs (a blue/green deployment in Elastic Beanstalk) is the fastest way to release changes several times every hour, and rolling back is just another URL swap.
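The URL swap in option B is a single API call; a minimal boto3 sketch (the environment names are placeholders):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Blue/green release: deploy to staging, then swap CNAMEs. A rollback
# is simply another swap in the opposite direction.
eb.swap_environment_cnames(
    SourceEnvironmentName="app-staging",       # placeholder
    DestinationEnvironmentName="app-production",  # placeholder
)
```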
A software company hosts an application on AWS with resources in multiple AWS
accounts and Regions. The application runs on a group of Amazon EC2 instances in an
application VPC located in the us-east-1 Region with an IPv4 CIDR block of 10.10.0.0/16.
In a different AWS account, a shared services VPC is located in the us-east-2 Region with
an IPv4 CIDR block of 10.10.10.0/24. When a cloud engineer uses AWS CloudFormation
to attempt to peer the application
VPC with the shared services VPC, an error message indicates a peering failure.
Which factors could cause this error? (Choose two.)
A. The IPv4 CIDR ranges of the two VPCs overlap
B. The VPCs are not in the same Region
C. One or both accounts do not have access to an Internet gateway
D. One of the VPCs was not shared through AWS Resource Access Manager
E. The IAM role in the peer accepter account does not have the correct permissions
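For context, a cross-account, cross-Region peering request looks like this minimal boto3 sketch (the VPC IDs and account ID are placeholders); note that 10.10.10.0/24 falls entirely inside 10.10.0.0/16, so the CIDR ranges overlap and the peering connection cannot become active:

```python
import boto3

# Requester side, in us-east-1 where the application VPC lives.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Inter-Region peering across accounts is supported; the accepter
# account must still accept the request for it to become active.
ec2.create_vpc_peering_connection(
    VpcId="vpc-0app",            # placeholder application VPC
    PeerVpcId="vpc-0shared",     # placeholder shared services VPC
    PeerOwnerId="222222222222",  # placeholder accepter account ID
    PeerRegion="us-east-2",
)
```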