Topic 1: Exam Pool A
A company wants to migrate its workloads from on premises to AWS. The workloads run
on Linux and Windows. The company has a large on-premises infrastructure that consists
of physical machines and VMs that host numerous applications.
The company must capture details about the system configuration, system performance,
running processes, and network connections of its on-premises workloads. The company
also must divide the on-premises applications into groups for AWS migrations. The
company needs recommendations for Amazon EC2 instance types so that the company
can run its workloads on AWS in the most cost-effective manner.
Which combination of steps should a solutions architect take to meet these requirements?
(Select THREE.)
A. Assess the existing applications by installing AWS Application Discovery Agent on the physical machines and VMs.
B. Assess the existing applications by installing AWS Systems Manager Agent on the physical machines and VMs.
C. Group servers into applications for migration by using AWS Systems Manager Application Manager.
D. Group servers into applications for migration by using AWS Migration Hub.
E. Generate recommended instance types and associated costs by using AWS Migration Hub.
F. Import data about server sizes into AWS Trusted Advisor. Follow the recommendations for cost optimization.
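For reference, once the Application Discovery Agent (option A) is reporting, the collected server inventory can be read back through the Application Discovery Service API. A minimal boto3 sketch; the Region and the printed attribute keys are illustrative assumptions:

```python
import boto3

# The Application Discovery Service API is only available in select Regions;
# us-west-2 here is an illustrative assumption.
discovery = boto3.client("discovery", region_name="us-west-2")

# List servers that the Application Discovery Agent has reported on.
servers = discovery.list_configurations(configurationType="SERVER")

for item in servers["configurations"]:
    # Each item maps attribute names to values; these keys are assumptions
    # about the returned attribute names.
    print(item.get("server.hostName"), item.get("server.osName"))
```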
A company runs an IoT platform on AWS. IoT sensors in various locations send data to the
company's Node.js API servers on Amazon EC2 instances running behind an Application
Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB
General Purpose SSD volume.
The number of sensors the company has deployed in the field has increased over time and
is expected to grow significantly. The API servers are consistently overloaded, and RDS
metrics show high write latency.
Which of the following steps together will resolve the issues permanently and enable
growth as new sensors are provisioned, while keeping this platform cost-efficient? (Select
TWO.)
A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS
B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas
C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data
D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load
E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance
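As an illustration of the ingestion pattern in option C, sensor payloads can be written to a Kinesis data stream for downstream processing by Lambda. A minimal producer sketch; the stream name, sensor ID, and payload shape are assumptions:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_reading(sensor_id: str, reading: dict) -> None:
    """Write one sensor reading to a Kinesis data stream.

    The stream name 'sensor-ingest' is a placeholder; partitioning by
    sensor ID keeps each sensor's records ordered within a shard.
    """
    kinesis.put_record(
        StreamName="sensor-ingest",
        Data=json.dumps({"sensor_id": sensor_id, **reading}).encode("utf-8"),
        PartitionKey=sensor_id,
    )

# Example usage with a fabricated reading.
publish_reading("sensor-0042", {"temperature_c": 21.7, "ts": "2024-01-01T00:00:00Z"})
```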
A large company is running a popular web application. The application runs on several
Amazon EC2 Linux instances in an Auto Scaling group in a private subnet. An Application
Load Balancer is targeting the instances in the Auto Scaling group in the private subnet.
AWS Systems Manager Session Manager is configured, and AWS Systems Manager
Agent is running on all the EC2 instances.
The company recently released a new version of the application. Some EC2 instances are
now being marked as unhealthy and are being terminated. As a result, the application is
running at reduced capacity. A solutions architect tries to determine the root cause by
analyzing Amazon CloudWatch logs that are collected from the application, but the logs are
inconclusive.
How should the solutions architect gain access to an EC2 instance to troubleshoot the
issue?
A. Suspend the Auto Scaling group's HealthCheck scaling process. Use Session Manager to log in to an instance that is marked as unhealthy.
B. Enable EC2 instance termination protection. Use Session Manager to log in to an instance that is marked as unhealthy.
C. Set the termination policy to OldestInstance on the Auto Scaling group. Use Session Manager to log in to an instance that is marked as unhealthy.
D. Suspend the Auto Scaling group's Terminate process. Use Session Manager to log in to an instance that is marked as unhealthy.
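To illustrate the process-suspension approach in options A and D, the Auto Scaling API can suspend individual scaling processes so an unhealthy instance stays available for a Session Manager login. A sketch; the group name is a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Suspend only the Terminate process so unhealthy instances stay alive for
# troubleshooting; health checks keep running. 'web-asg' is a placeholder.
autoscaling.suspend_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Terminate"],
)

# ... troubleshoot via Session Manager, then resume normal behavior:
autoscaling.resume_processes(
    AutoScalingGroupName="web-asg",
    ScalingProcesses=["Terminate"],
)
```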
A company that has multiple AWS accounts is using AWS Organizations. The company’s
AWS accounts host VPCs, Amazon EC2 instances, and containers.
The company’s compliance team has deployed a security tool in each VPC where the
company has deployments. The security tools run on EC2 instances and send information
to the AWS account that is dedicated for the compliance team. The company has tagged
all the compliance-related resources with a key of “costCenter” and a value of
“compliance”.
The company wants to identify the cost of the security tools that are running on the EC2
instances so that the company can charge the compliance team’s AWS account. The cost
calculation must be as accurate as possible.
What should a solutions architect do to meet these requirements?
A. In the management account of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter tagged resources.
B. In the member accounts of the organization, activate the costCenter user-defined tag. Configure monthly AWS Cost and Usage Reports to save to an Amazon S3 bucket in the management account. Schedule a monthly AWS Lambda function to retrieve the reports and calculate the total cost for the costCenter tagged resources.
C. In the member accounts of the organization, activate the costCenter user-defined tag. From the management account, schedule a monthly AWS Cost and Usage Report. Use the tag breakdown in the report to calculate the total cost for the costCenter tagged resources.
D. Create a custom report in the organization view in AWS Trusted Advisor. Configure the report to generate a monthly billing summary for the costCenter tagged resources in the compliance team’s AWS account.
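To illustrate the tag-based cost breakdown that the options rely on, the costCenter tag can be activated as a cost allocation tag and then filtered in Cost Explorer. A sketch; the date range is a placeholder, and both calls are assumed to run from the management account:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer; call from the management account

# Activate the user-defined costCenter tag as a cost allocation tag.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": "costCenter", "Status": "Active"}]
)

# Once the tag has been active for a billing period, total the tagged spend.
# The date range below is a placeholder.
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "costCenter", "Values": ["compliance"]}},
)
print(result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])
```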
An adventure company has launched a new feature on its mobile app. Users can use the
feature to upload their hiking and rafting photos and videos anytime. The photos and
videos are stored in Amazon S3 Standard storage in an S3 bucket and are served through
Amazon CloudFront.
The company needs to optimize the cost of the storage. A solutions architect discovers that
most of the uploaded photos and videos are accessed infrequently after 30 days. However,
some of the uploaded photos and videos are accessed frequently after 30 days. The
solutions architect needs to implement a solution that maintains millisecond retrieval
availability of the photos and videos at the lowest possible cost.
Which solution will meet these requirements?
A. Configure S3 Intelligent-Tiering on the S3 bucket.
B. Configure an S3 Lifecycle policy to transition image objects and video objects from S3 Standard to S3 Glacier Deep Archive after 30 days.
C. Replace Amazon S3 with an Amazon Elastic File System (Amazon EFS) file system that is mounted on Amazon EC2 instances.
D. Add a Cache-Control: max-age header to the S3 image objects and S3 video objects. Set the header to 30 days.
Explanation: Amazon S3 Intelligent-Tiering is a storage class that automatically moves objects between two access tiers based on changing access patterns. Objects that are accessed frequently are stored in the frequent access tier, and objects that are accessed infrequently are stored in the infrequent access tier. This allows for cost optimization without manual intervention, which makes it an ideal solution for the scenario described: objects that become infrequently accessed after 30 days move automatically to the lower-cost tier while still maintaining millisecond retrieval availability.
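A lifecycle rule can move new and existing objects into S3 Intelligent-Tiering, and uploads can also target the storage class directly. A sketch; the bucket name and object key are placeholders:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "media-uploads-example"  # placeholder bucket name

# Transition all objects to Intelligent-Tiering as soon as they are created
# (Days=0); S3 then shifts them between access tiers automatically.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)

# New uploads can also target the class directly:
s3.put_object(
    Bucket=BUCKET,
    Key="photos/hike-001.jpg",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```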
A company manages multiple AWS accounts by using AWS Organizations. Under the root
OU, the company has two OUs: Research and DataOps.
Because of regulatory requirements, all resources that the company deploys in the
organization must reside in the ap-northeast-1 Region. Additionally, EC2 instances that the
company deploys in the DataOps OU must use a predefined list of instance types.
A solutions architect must implement a solution that applies these restrictions. The solution
must maximize operational efficiency and must minimize ongoing maintenance.
Which combination of steps will meet these requirements? (Select TWO.)
A. Create an IAM role in one account under the DataOps OU. Use the ec2:InstanceType condition key in an inline policy on the role to restrict access to specific instance types.
B. Create an IAM user in all accounts under the root OU. Use the aws:RequestedRegion condition key in an inline policy on each user to restrict access to all AWS Regions except ap-northeast-1.
C. Create an SCP. Use the aws:RequestedRegion condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU.
D. Create an SCP. Use the ec2:Region condition key to restrict access to all AWS Regions except ap-northeast-1. Apply the SCP to the root OU, the DataOps OU, and the Research OU.
E. Create an SCP. Use the ec2:InstanceType condition key to restrict access to specific instance types. Apply the SCP to the DataOps OU.
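A sketch of how the SCPs described in options C and E could be created and attached with the Organizations API; the policy documents, example instance types, and target IDs are assumptions, and a real Region-restriction SCP would normally exempt global services:

```python
import json
import boto3

org = boto3.client("organizations")  # run from the management account

# Deny all actions outside ap-northeast-1 (global-service exemptions omitted
# for brevity).
region_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": "ap-northeast-1"}
            },
        }
    ],
}

# Deny EC2 launches that are not on the approved instance-type list.
instance_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:InstanceType": ["t3.micro", "t3.small"]  # example list
                }
            },
        }
    ],
}

# Placeholder target IDs: the root for the Region SCP, the DataOps OU for
# the instance-type SCP.
for name, doc, target in [
    ("deny-outside-ap-northeast-1", region_scp, "r-examplerootid"),
    ("dataops-instance-types", instance_scp, "ou-exampledataops"),
]:
    policy = org.create_policy(
        Name=name,
        Description=name,
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(doc),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=target
    )
```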
A company is planning to store a large number of archived documents and make the
documents available to employees through the corporate intranet. Employees will access
the system by connecting through a client VPN service that is attached to a VPC. The data
must not be accessible to the public.
The documents that the company is storing are copies of data that is held on physical
media elsewhere. The number of requests will be low. Availability and speed of retrieval
are not concerns of the company.
Which solution will meet these requirements at the LOWEST cost?
A. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
B. Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic File System (Amazon EFS) file system to store the archived data in the EFS One Zone-Infrequent Access (EFS One Zone-IA) storage class. Configure the instance security groups to allow access only from private networks.
C. Launch an Amazon EC2 instance that runs a web server. Attach an Amazon Elastic Block Store (Amazon EBS) volume to store the archived data. Use the Cold HDD (sc1) volume type. Configure the instance security groups to allow access only from private networks.
D. Create an Amazon S3 bucket. Configure the S3 bucket to use the S3 Glacier Deep Archive storage class as default. Configure the S3 bucket for website hosting. Create an S3 interface endpoint. Configure the S3 bucket to allow access only through that endpoint.
Explanation: The S3 Glacier Deep Archive storage class is the lowest-cost storage class offered by Amazon S3. It is designed for archival data that is accessed infrequently and for which a retrieval time of several hours is acceptable. An S3 interface endpoint for the VPC ensures that the bucket can be reached only from resources within the VPC, which meets the requirement that the data not be publicly accessible. The S3 bucket can also be configured for website hosting, which allows employees to access the documents through the corporate intranet. Using an EC2 instance with a file system or block storage would be more expensive and unnecessary because the number of requests will be low and availability and speed of retrieval are not concerns. Additionally, an Amazon S3 bucket provides durability, scalability, and availability of the data.
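S3 has no bucket-level default storage class, so in practice option D means uploading objects as Deep Archive (or transitioning them via lifecycle) and locking the bucket to the interface endpoint. A sketch; the bucket name, key, and endpoint ID are placeholders:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "archive-docs-example"  # placeholder bucket name

# Upload archives directly into the Deep Archive storage class.
s3.put_object(
    Bucket=BUCKET,
    Key="archives/doc-0001.pdf",
    Body=b"...",
    StorageClass="DEEP_ARCHIVE",
)

# Deny any request that does not arrive through the interface endpoint.
# The endpoint ID is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```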
A company wants to change its internal cloud billing strategy for each of its business units.
Currently, the cloud governance team shares reports for overall cloud spending with the
head of each business unit. The company uses AWS Organizations to manage the
separate AWS accounts for each business unit. The existing tagging standard in
Organizations includes the application, environment, and owner. The cloud governance
team wants a centralized solution so each business unit receives monthly reports on its
cloud spending. The solution should also send notifications for any cloud spending that
exceeds a set threshold.
Which solution is the MOST cost-effective way to meet these requirements?
A. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in each account to create monthly reports for each business unit.
B. Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
C. Configure AWS Budgets in each account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use the AWS Billing and Cost Management dashboard in each account to create monthly reports for each business unit.
D. Enable AWS Cost and Usage Reports in the organization's master account and configure reports grouped by application, environment, and owner. Create an AWS Lambda function that processes AWS Cost and Usage Reports, sends budget alerts, and sends monthly reports to each business unit's email list.
Explanation: Configure AWS Budgets in the organization's master account and configure budget alerts that are grouped by application, environment, and owner. Add each business unit to an Amazon SNS topic for each alert. Use Cost Explorer in the organization's master account to create monthly reports for each business unit.
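As a sketch of option B, a per-business-unit cost budget with an SNS alert can be created from the master account. The account ID, budget name, amount, and topic ARN are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# Create one cost budget per business unit in the management account.
# All identifiers below are placeholders.
budgets.create_budget(
    AccountId="111111111111",
    Budget={
        "BudgetName": "bu-analytics-monthly",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            # Alert when actual spend passes 80% of the budget.
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {
                    "SubscriptionType": "SNS",
                    "Address": "arn:aws:sns:us-east-1:111111111111:bu-analytics-alerts",
                }
            ],
        }
    ],
)
```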
A company is running an application in the AWS Cloud. The application runs on containers in an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS tasks use the Fargate launch type. The application's data is relational and is stored in Amazon Aurora MySQL. To meet regulatory requirements, the application must be able to recover to a separate AWS Region in the event of an application failure. In case of a failure, no data can be lost. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Provision an Aurora Replica in a different Region.
B. Set up AWS DataSync for continuous replication of the data to a different Region.
C. Set up AWS Database Migration Service (AWS DMS) to perform a continuous replication of the data to a different Region.
D. Use Amazon Data Lifecycle Manager (Amazon DLM) to schedule a snapshot every 5 minutes.
Explanation: Provisioning an Aurora Replica in a different Region meets the requirement that the application be able to recover to a separate AWS Region in the event of an application failure without losing data, and it does so with the least operational overhead.
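As a sketch of option A, a cross-Region Aurora replica is created by calling CreateDBCluster in the target Region with ReplicationSourceIdentifier pointing at the source cluster, then adding an instance. The Regions, identifiers, and instance class below are placeholders:

```python
import boto3

# Create the replica cluster in the recovery Region.
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-replica",
    Engine="aurora-mysql",
    ReplicationSourceIdentifier=(
        "arn:aws:rds:us-east-1:111111111111:cluster:app-aurora"
    ),
)

# The replica cluster needs at least one instance to serve reads and to be
# promotable during a failover.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-replica-1",
    DBClusterIdentifier="app-aurora-replica",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```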
A company hosts a Git repository in an on-premises data center. The company uses
webhooks to invoke functionality that runs in the AWS Cloud. The company hosts the
webhook logic on a set of Amazon EC2 instances in an Auto Scaling group that the
company set as a target for an Application Load Balancer (ALB). The Git server calls the
ALB for the configured webhooks. The company wants to move the solution to a serverless architecture.
Which solution will meet these requirements with the LEAST operational overhead?
A. For each webhook, create and configure an AWS Lambda function URL. Update the Git servers to call the individual Lambda function URLs.
B. Create an Amazon API Gateway HTTP API. Implement each webhook logic in a separate AWS Lambda function. Update the Git servers to call the API Gateway endpoint.
C. Deploy the webhook logic to AWS App Runner. Create an ALB, and set App Runner as the target. Update the Git servers to call the ALB endpoint.
D. Containerize the webhook logic. Create an Amazon Elastic Container Service (Amazon ECS) cluster, and run the webhook logic in AWS Fargate. Create an Amazon API Gateway REST API, and set Fargate as the target. Update the Git servers to call the API Gateway endpoint.
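A minimal Lambda handler for the pattern in option B, receiving a webhook through an API Gateway HTTP API; the 'ref' and 'commits' payload fields are assumptions about the Git server's format:

```python
import json

def lambda_handler(event, context):
    """Handle a Git webhook delivered through an API Gateway HTTP API.

    HTTP APIs pass the request body as a string in event['body'];
    the payload fields below are assumptions about the webhook format.
    """
    payload = json.loads(event.get("body") or "{}")
    ref = payload.get("ref", "unknown")
    commits = payload.get("commits", [])

    # Real webhook logic (build triggers, notifications, etc.) would go here.
    print(f"Received push to {ref} with {len(commits)} commit(s)")

    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```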
A startup company hosts a fleet of Amazon EC2 instances in private subnets using the
latest Amazon Linux 2 AMI. The company's engineers rely heavily on SSH access to the
instances for troubleshooting.
The company's existing architecture includes the following:
• A VPC with private and public subnets, and a NAT gateway
• Site-to-Site VPN for connectivity with the on-premises environment
• EC2 security groups with direct SSH access from the on-premises environment
The company needs to increase security controls around SSH access and provide auditing
of commands executed by the engineers.
Which strategy should a solutions architect use?
A. Install and configure EC2 Instance Connect on the fleet of EC2 instances. Remove all security group rules attached to EC2 instances that allow inbound TCP on port 22. Advise the engineers to remotely access the instances by using the EC2 Instance Connect CLI.
B. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineer's devices. Install the Amazon CloudWatch agent on all EC2 instances and send operating system audit logs to CloudWatch Logs.
C. Update the EC2 security groups to only allow inbound TCP on port 22 to the IP addresses of the engineer's devices. Enable AWS Config for EC2 security group resource changes. Enable AWS Firewall Manager and apply a security group policy that automatically remediates changes to rules.
D. Create an IAM role with the AmazonSSMManagedInstanceCore managed policy attached. Attach the IAM role to all the EC2 instances. Remove all security group rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin for their devices and remotely access the instances by using the start-session API call from Systems Manager.
Explanation: The Session Manager plugin lets client machines connect to Session Manager through the AWS CLI instead of going through the Amazon EC2 or AWS Systems Manager console.
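A sketch of the setup in option D: create an instance role with the AmazonSSMManagedInstanceCore managed policy, wrap it in an instance profile, and connect by instance ID. The role and profile names are placeholders, and interactive sessions also require the Session Manager plugin on the client:

```python
import json
import boto3

iam = boto3.client("iam")

# Instance role that lets SSM Agent register with Systems Manager.
# 'ssm-instance-role' and 'ssm-instance-profile' are placeholder names.
trust = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName="ssm-instance-role", AssumeRolePolicyDocument=json.dumps(trust)
)
iam.attach_role_policy(
    RoleName="ssm-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)
iam.create_instance_profile(InstanceProfileName="ssm-instance-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="ssm-instance-profile", RoleName="ssm-instance-role"
)

# With the profile attached to the instances, engineers connect without
# port 22, e.g. via the AWS CLI:
#   aws ssm start-session --target i-0123456789abcdef0
```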
A company is planning to migrate its business-critical applications from an on-premises
data center to AWS. The company has an on-premises installation of a Microsoft SQL
Server Always On cluster. The company wants to migrate to an AWS managed database
service. A solutions architect must design a heterogeneous database migration on AWS.
Which solution will meet these requirements?
A. Migrate the SQL Server databases to Amazon RDS for MySQL by using backup and restore utilities.
B. Use an AWS Snowball Edge Storage Optimized device to transfer data to Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.
C. Use the AWS Schema Conversion Tool to translate the database schema to Amazon RDS for MySQL. Then use AWS Database Migration Service (AWS DMS) to migrate the data from the on-premises databases to Amazon RDS.
D. Use AWS DataSync to migrate data over the network between on-premises storage and Amazon S3. Set up Amazon RDS for MySQL. Use S3 integration with SQL Server features, such as BULK INSERT.
Explanation: AWS Schema Conversion Tool (SCT) can automatically convert the database schema from Microsoft SQL Server to Amazon RDS for MySQL. This allows for a smooth transition of the database schema without any manual intervention. AWS DMS can then be used to migrate the data from the on-premises databases to the newly created Amazon RDS for MySQL instance. This service can perform a one-time migration of the data or can set up ongoing replication of data changes to keep the on-premises and AWS databases in sync.
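Once the schema is converted with SCT, the ongoing replication described above maps to a DMS task of type full-load-and-cdc. A sketch; all ARNs and the select-everything table mapping are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Start a full load plus ongoing change data capture (CDC) from the
# SQL Server source to the RDS for MySQL target.
dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-mysql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(
        {
            "rules": [
                {
                    "rule-type": "selection",
                    "rule-id": "1",
                    "rule-name": "include-all",
                    "object-locator": {"schema-name": "%", "table-name": "%"},
                    "rule-action": "include",
                }
            ]
        }
    ),
)
```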