Topic 1: Exam Pool A
A company runs a Python script on an Amazon EC2 instance to process data. The script
runs every 10 minutes. The script ingests files from an Amazon S3 bucket and processes
the files. On average, the script takes approximately 5 minutes to process each file. The
script will not reprocess a file that the script has already processed.
The company reviewed Amazon CloudWatch metrics and noticed that the EC2 instance is
idle for approximately 40% of the time because of the file processing speed. The company
wants to make the workload highly available and scalable. The company also wants to
reduce long-term management overhead.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the data processing script to an AWS Lambda function. Use an S3 event notification to invoke the Lambda function to process the objects when the company uploads the objects.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure Amazon S3
to send event notifications to the SQS queue. Create an EC2 Auto Scaling group with a
minimum size of one instance. Update the data processing script to poll the SQS queue.
Process the S3 objects that the SQS message identifies.
C. Migrate the data processing script to a container image. Run the data processing container on an EC2 instance. Configure the container to poll the S3 bucket for new objects and to process the resulting objects.
D. Migrate the data processing script to a container image that runs on Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Create an AWS Lambda function that calls the Fargate RunTask API operation when the container processes the file. Use an S3 event notification to invoke the Lambda function.
Explanation: Option A is correct. Migrating the data processing script to an AWS Lambda function and using an S3 event notification to invoke the function when the company uploads objects makes the workload highly available and scalable, eliminates the idle EC2 time, and reduces long-term management overhead, making it the most cost-effective option.
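As a rough sketch of the pattern in option A (the handler wiring and process_file logic below are hypothetical, not from the question), a Lambda function invoked by an S3 event notification might look like this:

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # Each record in an S3 event notification identifies one uploaded object.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            process_file(body)

    def process_file(data):
        # Placeholder for the company's existing processing logic.
        pass

Because each file takes roughly 5 minutes to process, the work fits comfortably within Lambda's 15-minute maximum timeout, and the company pays only for time spent actually processing files instead of for an instance that is idle 40% of the time.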
An AWS partner company is building a service in AWS Organizations using its organization
named org1. This service requires the partner company to have access to AWS resources in
a customer account, which is in a separate organization named org2. The company must
establish least privilege security access using an API or command line tool to the customer
account.
What is the MOST secure way to allow org1 to access resources in org2?
A. The customer should provide the partner company with their AWS account access keys to log in and perform the required tasks.
B. The customer should create an IAM user and assign the required permissions to the IAM user. The customer should then provide the credentials to the partner company to log in and perform the required tasks.
C. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role's Amazon Resource Name (ARN) when requesting access to perform the required tasks.
D. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role's Amazon Resource Name (ARN), including the external ID in the IAM role's trust policy, when requesting access to perform the required tasks.
Explanation: This is the most secure way to allow org1 to access resources in org2 because it allows for least privilege security access. The customer should create an IAM role and assign the required permissions to the IAM role. The partner company should then use the IAM role’s Amazon Resource Name (ARN) and include the external ID in the IAM role’s trust policy when requesting access to perform the required tasks. This ensures that the partner company can only access the resources that it needs and only from the specific customer account.
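As an illustrative sketch only (the account ID, external ID, and role name below are placeholders, not values from the question), the customer could create the trust relationship from option D like this:

    import json
    import boto3

    # Trust policy that lets the partner's account assume the role only when
    # the partner supplies the agreed-upon external ID (placeholder values).
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # partner account
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
        }],
    }

    iam = boto3.client("iam")
    iam.create_role(
        RoleName="partner-access-role",  # hypothetical name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

The partner company then calls sts:AssumeRole with the role's ARN and the same ExternalId value to obtain temporary credentials scoped to whatever permissions the customer attached to the role.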
An enterprise company wants to allow its developers to purchase third-party software
through AWS Marketplace. The company uses an AWS Organizations account structure
with full features enabled, and has a shared services account in each organizational unit
(OU) that will be used by procurement managers. The procurement team's policy indicates
that developers should be able to obtain third-party software from an approved list only and
use Private Marketplace in AWS Marketplace to achieve this requirement. The
procurement team wants administration of Private Marketplace to be restricted to a role
named procurement-manager-role, which could be assumed by procurement managers.
Other IAM users, groups, roles, and account administrators in the company should be
denied Private Marketplace administrative access.
What is the MOST efficient way to design an architecture to meet these requirements?
A. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the PowerUserAccess managed policy to the role. Apply an inline policy to all IAM users and roles in every AWS account to deny permissions on the AWSPrivateMarketplaceAdminFullAccess managed policy.
B. Create an IAM role named procurement-manager-role in all AWS accounts in the organization. Add the AdministratorAccess managed policy to the role. Define a permissions boundary with the AWSPrivateMarketplaceAdminFullAccess managed policy and attach it to all the developer roles.
C. Create an IAM role named procurement-manager-role in all the shared services accounts in the organization. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permissions to create an IAM role named procurement-manager-role to everyone in the organization.
D. Create an IAM role named procurement-manager-role in all AWS accounts that will be used by developers. Add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an SCP in Organizations to deny permissions to administer Private Marketplace to everyone except the role named procurement-manager-role. Apply the SCP to all the shared services accounts in the organization.
Explanation: Option C is correct. Procurement managers assume the procurement-manager-role in the shared services accounts, which has the AWSPrivateMarketplaceAdminFullAccess managed policy attached, and can then administer Private Marketplace. One organization root-level SCP denies permission to administer Private Marketplace to everyone except the role named procurement-manager-role, and a second SCP denies everyone in the organization permission to create an IAM role with that name, ensuring that only the procurement team can assume the role and manage Private Marketplace. This approach provides a centralized way to manage and restrict access to Private Marketplace while maintaining a high level of security.
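A minimal sketch of the first root-level SCP from option C, assuming the Private Marketplace administrative actions can be matched with an aws-marketplace wildcard (the action pattern and policy name below are illustrative assumptions, not taken from the question):

    import json
    import boto3

    # Deny Private Marketplace administration to every principal except the
    # procurement-manager-role (the action wildcard is an assumption).
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyPrivateMarketplaceAdmin",
            "Effect": "Deny",
            "Action": "aws-marketplace:*PrivateMarketplace*",
            "Resource": "*",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/procurement-manager-role"
                }
            },
        }],
    }

    org = boto3.client("organizations")
    org.create_policy(
        Name="DenyPrivateMarketplaceAdmin",
        Description="Only procurement-manager-role may administer Private Marketplace",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

The second SCP from option C would follow the same shape, denying iam:CreateRole on the resource arn:aws:iam::*:role/procurement-manager-role so that no one else can create a role with that name.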
A company is storing data in several Amazon DynamoDB tables. A solutions architect must
use a serverless architecture to make the data accessible publicly through a simple API
over HTTPS. The solution must scale automatically in response to demand.
Which solutions meet these requirements? (Choose two.)
A. Create an Amazon API Gateway REST API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.
B. Create an Amazon API Gateway HTTP API. Configure this API with direct integrations to DynamoDB by using API Gateway’s AWS integration type.
C. Create an Amazon API Gateway HTTP API. Configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables.
D. Create an accelerator in AWS Global Accelerator. Configure this accelerator with AWS Lambda@Edge function integrations that return data from the DynamoDB tables.
E. Create a Network Load Balancer. Configure listener rules to forward requests to the appropriate AWS Lambda functions.
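Option C describes Lambda functions that return data from the DynamoDB tables behind an HTTP API. As a minimal, hypothetical sketch (the table name is a placeholder), a proxy-integration handler could look like this:

    import json
    import boto3

    table = boto3.resource("dynamodb").Table("items")  # hypothetical table name

    def handler(event, context):
        # HTTP API proxy integration: return a status code and a JSON body.
        response = table.scan(Limit=25)  # simple read; a real API would Query by key
        return {
            "statusCode": 200,
            "body": json.dumps(response["Items"], default=str),
        }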
A company has a web application that allows users to upload short videos. The videos are
stored on Amazon EBS volumes and analyzed by custom recognition software for
categorization.
The website contains static content that has variable traffic with peaks in certain months.
The architecture consists of Amazon EC2 instances running in an Auto Scaling group for
the web application and EC2 instances running in an Auto Scaling group to process an
Amazon SQS queue. The company wants to re-architect the application to reduce operational overhead using AWS managed services where possible and remove
dependencies on third-party software.
Which solution meets these requirements?
A. Use Amazon ECS containers for the web application and Spot Instances for the Auto Scaling group that processes the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
B. Store the uploaded videos in Amazon EFS and mount the file system to the EC2 instances for the web application. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
C. Host the web application in Amazon S3. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos.
D. Use AWS Elastic Beanstalk to launch EC2 instances in an Auto Scaling group for the web application and launch a worker environment to process the SQS queue. Replace the custom software with Amazon Rekognition to categorize the videos.
Explanation: Option C is correct because hosting the web application in Amazon S3, storing the uploaded videos in Amazon S3, and using S3 event notifications to publish events to the SQS queue reduces the operational overhead of managing EC2 instances and EBS volumes. Amazon S3 can serve static content such as HTML, CSS, JavaScript, and media files directly from S3 buckets, and S3 event notifications can publish events when new objects are created or existing objects are updated or deleted. An AWS Lambda function can then process the SQS queue and call the Amazon Rekognition API to categorize the videos. This solution eliminates the need for custom recognition software and third-party dependencies.
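A minimal sketch of the Lambda function from option C, assuming S3 event notifications are delivered to the SQS queue and the queue triggers Lambda (the bucket and queue wiring is omitted). Rekognition's video APIs are asynchronous, so the function only starts the job:

    import json
    import boto3

    rekognition = boto3.client("rekognition")

    def handler(event, context):
        # With an SQS trigger, each record body carries an S3 event notification.
        for record in event["Records"]:
            s3_event = json.loads(record["body"])
            for s3_record in s3_event["Records"]:
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]
                # Start asynchronous label detection on the uploaded video.
                job = rekognition.start_label_detection(
                    Video={"S3Object": {"Bucket": bucket, "Name": key}}
                )
                print("Started Rekognition job", job["JobId"])

The categorization results are fetched later with get_label_detection, keyed by the returned JobId.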
A solutions architect must analyze a company's Amazon EC2 instances and Amazon
Elastic Block Store (Amazon EBS) volumes to determine whether the company is using
resources efficiently. The company is running several large, high-memory EC2 instances to host database clusters that are deployed in active/passive configurations. The utilization of
these EC2 instances varies by the applications that use the databases, and the company
has not identified a pattern.
The solutions architect must analyze the environment and take action based on the
findings.
Which solution meets these requirements MOST cost-effectively?
A. Create a dashboard by using AWS Systems Manager OpsCenter. Configure visualizations for Amazon CloudWatch metrics that are associated with the EC2 instances and their EBS volumes. Review the dashboard periodically and identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.
B. Turn on Amazon CloudWatch detailed monitoring for the EC2 instances and their EBS volumes. Create and review a dashboard that is based on the metrics. Identify usage patterns. Right-size the EC2 instances based on the peaks in the metrics.
C. Install the Amazon CloudWatch agent on each of the EC2 instances. Turn on AWS Compute Optimizer, and let it run for at least 12 hours. Review the recommendations from Compute Optimizer, and right-size the EC2 instances as directed.
D. Sign up for the AWS Enterprise Support plan. Turn on AWS Trusted Advisor. Wait 12 hours. Review the recommendations from Trusted Advisor, and right-size the EC2 instances as directed.
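Option C relies on recommendations from AWS Compute Optimizer. As a hedged sketch of pulling those findings programmatically with boto3 (no question-specific values are assumed):

    import boto3

    co = boto3.client("compute-optimizer")

    # List right-sizing findings for the account's EC2 instances.
    resp = co.get_ec2_instance_recommendations()
    for rec in resp["instanceRecommendations"]:
        print(rec["instanceArn"], rec["finding"])  # e.g. an over-provisioned finding
        for option in rec["recommendationOptions"]:
            print("  candidate instance type:", option["instanceType"])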
A company is running a critical application that uses an Amazon RDS for MySQL database
to store data. The RDS DB instance is deployed in Multi-AZ mode.
A recent RDS database failover test caused a 40-second outage to the application. A
solutions architect needs to design a solution to reduce the outage time to less than 20
seconds.
Which combination of steps should the solutions architect take to meet these
requirements? (Select THREE.)
A. Use Amazon ElastiCache for Memcached in front of the database.
B. Use Amazon ElastiCache for Redis in front of the database.
C. Use RDS Proxy in front of the database.
D. Migrate the database to Amazon Aurora MySQL.
E. Create an Amazon Aurora Replica.
F. Create an RDS for MySQL read replica.
Explanation: Options C, D, and E are correct because they address the requirement of reducing the failover time to less than 20 seconds. Migrating to Amazon Aurora MySQL and creating an Aurora Replica can bring failover under 20 seconds: Aurora has a built-in, fault-tolerant storage system that can automatically detect and repair failures, and an Aurora Replica can take over as the primary DB instance when a failure occurs. (Aurora Global Database can extend this further with read-only replicas across multiple AWS Regions.) Using RDS Proxy in front of the database also reduces the outage the application observes, because the proxy routes connections to the healthy DB instance and balances load across instances.
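As a sketch of how the application would use RDS Proxy once it is in place (the endpoint, credentials, and the pymysql driver choice are all illustrative assumptions), the application simply connects to the proxy endpoint instead of the DB instance endpoint:

    import pymysql  # third-party MySQL driver, chosen here only for illustration

    # Connect to the RDS Proxy endpoint instead of the DB instance endpoint.
    # The proxy keeps a warm connection pool and redirects it to the healthy
    # instance during failover, shortening the outage the application sees.
    conn = pymysql.connect(
        host="app-proxy.proxy-abcdefghijkl.us-east-1.rds.amazonaws.com",  # placeholder
        user="app_user",          # placeholder
        password="app_password",  # placeholder; prefer Secrets Manager or IAM auth
        database="orders",        # placeholder
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())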
A solutions architect needs to advise a company on how to migrate its on-premises data
processing application to the AWS Cloud. Currently, users upload input files through a web
portal. The web server then stores the uploaded files on NAS and messages the
processing server over a message queue. Each media file can take up to 1 hour to
process. The company has determined that the number of media files awaiting processing
is significantly higher during business hours, with the number of files rapidly declining after
business hours.
What is the MOST cost-effective migration recommendation?
A. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in an Amazon S3 bucket.
B. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, create a new Amazon EC2 instance to pull requests from the queue and process the files. Store the processed files in Amazon EFS. Shut down the EC2 instance after the task is complete.
C. Create a queue using Amazon MQ. Configure the existing web server to publish to the new queue. When there are messages in the queue, invoke an AWS Lambda function to pull requests from the queue and process the files. Store the processed files in Amazon EFS.
D. Create a queue using Amazon SQS. Configure the existing web server to publish to the new queue. Use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files. Scale the EC2 instances based on the SQS queue length. Store the processed files in an Amazon S3 bucket.
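Options A and D both drive processing from an SQS queue. As a minimal sketch of the EC2 worker loop that option D describes (the queue URL, bucket name, and processing stub are placeholders):

    import boto3

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/media-jobs"  # placeholder

    def process_media(message_body):
        # Placeholder for the existing processing logic (up to 1 hour per file).
        return b"processed output"

    def worker_loop():
        while True:
            # Long polling (20 s) reduces empty receives and therefore cost.
            resp = sqs.receive_message(
                QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
            )
            for msg in resp.get("Messages", []):
                output = process_media(msg["Body"])
                s3.put_object(
                    Bucket="processed-media",  # placeholder bucket
                    Key=f"results/{msg['MessageId']}",
                    Body=output,
                )
                # Delete only after successful processing.
                sqs.delete_message(
                    QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
                )

Note that the up-to-1-hour processing time exceeds Lambda's 15-minute maximum timeout, which is why the Lambda-based options cannot run the processing step directly.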
A company has migrated an application from on premises to AWS. The application
frontend is a static website that runs on two Amazon EC2 instances behind an Application
Load Balancer (ALB). The application backend is a Python application that runs on three
EC2 instances behind another ALB. The EC2 instances are large, general purpose On-
Demand Instances that were sized to meet the on-premises specifications for peak usage
of the application.
The application averages hundreds of thousands of requests each month. However, the
application is used mainly during lunchtime and receives minimal traffic during the rest of
the day.
A solutions architect needs to optimize the infrastructure cost of the application without
negatively affecting the application availability.
Which combination of steps will meet these requirements? (Choose two.)
A. Change all the EC2 instances to compute optimized instances that have the same number of cores as the existing EC2 instances.
B. Move the application frontend to a static website that is hosted on Amazon S3.
C. Deploy the application frontend by using AWS Elastic Beanstalk. Use the same instance type for the nodes.
D. Change all the backend EC2 instances to Spot Instances.
E. Deploy the backend Python application to general purpose burstable EC2 instances that have the same number of cores as the existing EC2 instances.
Explanation: Moving the application frontend to a static website that is hosted on Amazon
S3 saves cost, because serving static content from S3 is cheaper than running EC2
instances behind an ALB. Using Spot Instances for the backend EC2 instances also saves
cost, because Spot Instances are significantly cheaper than On-Demand Instances. This
suits the application's traffic pattern, which peaks at lunchtime and is minimal during the
rest of the day, so using Spot Instances will not negatively affect the application's availability.
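As a small sketch of the hosting change in option B (the bucket name and document keys below are placeholders), static website hosting can be enabled on the frontend bucket like this:

    import boto3

    s3 = boto3.client("s3")

    # Serve the static frontend directly from S3 instead of EC2 behind an ALB.
    s3.put_bucket_website(
        Bucket="example-frontend-bucket",  # placeholder
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )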
A company uses Amazon S3 to store files and images in a variety of storage classes. The
company's S3 costs have increased substantially during the past year.
A solutions architect needs to review data trends for the past 12 months and identify the
appropriate storage class for the objects.
Which solution will meet these requirements?
A. Download AWS Cost and Usage Reports for the last 12 months of S3 usage. Review AWS Trusted Advisor recommendations for cost savings.
B. Use S3 storage class analysis. Import data trends into an Amazon QuickSight dashboard to analyze storage trends.
C. Use Amazon S3 Storage Lens. Upgrade the default dashboard to include advanced metrics for storage trends.
D. Use Access Analyzer for S3. Download the Access Analyzer for S3 report for the last 12 months. Import the CSV file to an Amazon QuickSight dashboard.
A company is using an on-premises Active Directory service for user authentication. The
company wants to use the same authentication service to sign in to the company's AWS
accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already
exists between the on-premises environment and all the company's AWS accounts.
The company's security policy requires conditional access to the accounts based on user
groups and roles. User identities must be managed in a single location.
Which solution will meet these requirements?
A. Configure AWS Single Sign-On (AWS SSO) to connect to Active Directory by using SAML 2.0. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using attribute-based access control (ABAC).
B. Configure AWS Single Sign-On (AWS SSO) by using AWS SSO as an identity source. Enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. Grant access to the AWS accounts by using AWS SSO permission sets.
C. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use a SAML 2.0 identity provider. Provision IAM users that are mapped to the federated users. Grant access that corresponds to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM users.
D. In one of the company's AWS accounts, configure AWS Identity and Access Management (IAM) to use an OpenID Connect (OIDC) identity provider. Provision IAM roles that grant access to the AWS account for the federated users that correspond to appropriate groups in Active Directory. Grant access to the required AWS accounts by using cross-account IAM roles.
A company is refactoring its on-premises order-processing platform in the AWS Cloud. The
platform includes a web front end that is hosted on a fleet of VMs, RabbitMQ to connect the
front end to the backend, and a Kubernetes cluster to run a containerized backend system
to process the orders. The company does not want to make any major changes to the
application.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
B. Create a custom AWS Lambda runtime to mimic the web server environment. Create an Amazon API Gateway API to replace the front-end web servers. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
C. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Install Kubernetes on a fleet of different EC2 instances to host the order-processing backend.
D. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up an Amazon Simple Queue Service (Amazon SQS) queue to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.