SAA-C03 Practice Test Questions

964 Questions


Topic 1: Exam Pool A

A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time.
Which solution meets these requirements MOST cost-effectively?


A. Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.


B. Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB.


C. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS store.


D. Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EFS store.





A.
  Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
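The event-driven flow in this answer can be sketched as a Lambda handler like the one below. The `convert_pdf_to_jpg` stub and the `converted/` output prefix are illustrative assumptions, not details from the question; a real deployment would plug in a PDF rendering library and the boto3 S3 calls shown in the comments.

```python
import urllib.parse

def convert_pdf_to_jpg(pdf_bytes):
    # Placeholder for a real converter (e.g. a PDF rendering library);
    # a real implementation returns JPEG bytes.
    raise NotImplementedError

def derive_output_key(pdf_key):
    """Map an input .pdf object key to its .jpg output key."""
    base = pdf_key[:-len(".pdf")] if pdf_key.lower().endswith(".pdf") else pdf_key
    return f"converted/{base}.jpg"

def handler(event, context):
    """Invoked by the S3 PUT event notification for each uploaded .pdf."""
    converted = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]  # source and destination bucket
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        out_key = derive_output_key(key)
        # In a real deployment: download the object from `bucket`, convert it,
        # and upload the result under out_key, e.g.:
        #   s3.download_file(bucket, key, "/tmp/in.pdf")
        #   s3.upload_file("/tmp/out.jpg", bucket, out_key)
        converted.append(out_key)
    return {"converted": converted}
```

Deriving the output key from the input key keeps the original and converted objects side by side in the same bucket, which is all the answer requires.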

A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux instances that run with elastic IP addresses to accept traffic from trusted IP sources on the internet. The SFTP service is backed by shared storage that is attached to the instances. User accounts are created and managed as Linux users in the SFTP servers. The company wants a serverless option that provides high IOPS performance and highly configurable security. The company also wants to maintain control over user permissions. Which solution will meet these requirements?


A. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.


B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP addresses and a VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses. Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.


C. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.


D. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a VPC endpoint that has internal access in a private subnet. Attach a security group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.





C.
  Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.

Explanation: AWS Transfer Family is a secure transfer service that enables you to transfer files into and out of AWS storage services using SFTP, FTPS, FTP, and AS2 protocols. You can use AWS Transfer Family to create an SFTP-enabled server with a public endpoint that allows only trusted IP addresses. You can also attach an Amazon S3 bucket with default encryption enabled to the SFTP service endpoint, which will provide high IOPS performance and highly configurable security for your data at rest. You can also maintain control over user permissions by granting users access to the SFTP service using IAM roles or service-managed identities.
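As a rough sketch of how this answer might be provisioned, the parameters below mirror boto3's `transfer.create_server` request for an S3-backed, SFTP-only, service-managed server. The call itself is left commented out so the sketch runs without AWS credentials, and nothing here is prescribed by the question beyond the protocol, endpoint, and storage choices.

```python
def build_transfer_server_params():
    """Request body sketch for boto3's transfer.create_server (answer C)."""
    return {
        "Protocols": ["SFTP"],            # SFTP-only server
        "Domain": "S3",                   # back the server with Amazon S3
        "EndpointType": "PUBLIC",         # public endpoint, per answer C
        "IdentityProviderType": "SERVICE_MANAGED",  # users managed in Transfer Family
    }

params = build_transfer_server_params()
# import boto3
# server = boto3.client("transfer").create_server(**params)
```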

A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table. Which solution will meet these requirements with the LEAST amount of operational overhead?


A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.


B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.


C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached.


D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.





A.
  Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.

Explanation: This solution allows the company to improve the response time of the web application and decrease latency when retrieving product details from the Amazon DynamoDB table. By setting up a DynamoDB Accelerator (DAX) cluster, the company can use a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement. By routing all read requests through DAX, the company can reduce the number of read operations on the DynamoDB table and improve the user experience.
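DAX itself is fully managed, so no caching code is required. Purely as an illustration of the read-through pattern DAX implements, here is a toy in-memory version; the `ReadThroughCache` class and the stub "table" are invented for this sketch and are not part of any AWS API.

```python
import time

class ReadThroughCache:
    """Toy sketch of the read-through caching that DAX provides for
    DynamoDB GetItem calls: serve repeated reads from memory and fall
    back to the table only on a miss or after the TTL expires."""

    def __init__(self, fetch_from_table, ttl_seconds=300):
        self._fetch = fetch_from_table   # e.g. a DynamoDB get_item wrapper
        self._ttl = ttl_seconds
        self._items = {}                 # key -> (expires_at, item)

    def get_item(self, key):
        entry = self._items.get(key)
        if entry and entry[0] > time.time():
            return entry[1]              # cache hit: no table read
        item = self._fetch(key)          # cache miss: one table read
        self._items[key] = (time.time() + self._ttl, item)
        return item

# Stub "table" that counts reads, standing in for DynamoDB.
reads = {"count": 0}
def fetch(key):
    reads["count"] += 1
    return {"id": key, "name": f"product-{key}"}

cache = ReadThroughCache(fetch)
first = cache.get_item("42")
second = cache.get_item("42")            # served from cache
```

Two reads of the same key hit the backing "table" only once, which is how DAX cuts both DynamoDB read load and response latency.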

A company has more than 5 TB of file data on Windows file servers that run on premises. Users and applications interact with the data each day. The company is moving its Windows workloads to AWS. As the company continues this process, the company requires access to AWS and on-premises file storage with minimum latency. The company needs a solution that minimizes operational overhead and requires no significant changes to the existing file access patterns. The company uses an AWS Site-to-Site VPN connection for connectivity to AWS.
What should a solutions architect do to meet these requirements?


A. Deploy and configure Amazon FSx for Windows File Server on AWS. Move the on-premises file data to FSx for Windows File Server. Reconfigure the workloads to use FSx for Windows File Server on AWS.


B. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to the S3 File Gateway. Reconfigure the on-premises workloads and the cloud workloads to use the S3 File Gateway.


C. Deploy and configure an Amazon S3 File Gateway on premises. Move the on-premises file data to Amazon S3. Reconfigure the workloads to use either Amazon S3 directly or the S3 File Gateway, depending on each workload's location.


D. Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.





D.
  Deploy and configure Amazon FSx for Windows File Server on AWS. Deploy and configure an Amazon FSx File Gateway on premises. Move the on-premises file data to the FSx File Gateway. Configure the cloud workloads to use FSx for Windows File Server on AWS. Configure the on-premises workloads to use the FSx File Gateway.

Explanation: To meet the requirements of the company to have access to both AWS and on-premises file storage with minimum latency, a hybrid cloud architecture can be used. One solution is to deploy and configure Amazon FSx for Windows File Server on AWS, which provides fully managed Windows file servers. The on-premises file data can be moved to the FSx File Gateway, which can act as a bridge between on-premises and AWS file storage. The cloud workloads can be configured to use FSx for Windows File Server on AWS, while the on-premises workloads can be configured to use the FSx File Gateway. This solution minimizes operational overhead and requires no significant changes to the existing file access patterns. The connectivity between on-premises and AWS can be established using an AWS Site-to-Site VPN connection.
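A minimal sketch of the FSx for Windows File Server side of this design, expressed as an `fsx.create_file_system` request body for boto3. The subnet IDs, capacity, and throughput figures are placeholder assumptions sized loosely against the 5 TB dataset, not values from the question; the call is commented out so the sketch runs locally.

```python
def build_fsx_params(subnet_ids, preferred_subnet):
    """Request body sketch for boto3's fsx.create_file_system (capacity in GiB)."""
    return {
        "FileSystemType": "WINDOWS",
        "StorageCapacity": 6144,              # 6 TiB, headroom over the 5 TB dataset
        "SubnetIds": subnet_ids,
        "WindowsConfiguration": {
            "DeploymentType": "MULTI_AZ_1",   # highly available across two AZs
            "PreferredSubnetId": preferred_subnet,
            "ThroughputCapacity": 32,         # MB/s; tune to the workload
        },
    }

params = build_fsx_params(["subnet-aaa", "subnet-bbb"], "subnet-aaa")  # placeholder IDs
# import boto3
# fs = boto3.client("fsx").create_file_system(**params)
```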

A company manages its own Amazon EC2 instances that run MySQL databases. The company is manually managing replication and scaling as demand increases or decreases. The company needs a new solution that simplifies the process of adding or removing compute capacity to or from its database tier as needed. The solution also must offer improved performance, scaling, and durability with minimal effort from operations. Which solution meets these requirements?


A. Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.


B. Migrate the databases to Amazon Aurora Serverless for Aurora PostgreSQL.


C. Combine the databases into one larger MySQL database. Run the larger database on larger EC2 instances.


D. Create an EC2 Auto Scaling group for the database tier. Migrate the existing databases to the new environment.





A.
  Migrate the databases to Amazon Aurora Serverless for Aurora MySQL.
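As a hedged sketch of what this migration target could look like, the parameters below follow boto3's `rds.create_db_cluster` request for an Aurora MySQL cluster in serverless engine mode, which scales capacity without manual replica management. The identifier, credentials, and capacity bounds are placeholders, and the call is commented out.

```python
def build_aurora_serverless_params():
    """Request body sketch for boto3's rds.create_db_cluster in serverless mode."""
    return {
        "DBClusterIdentifier": "app-db",      # placeholder name
        "Engine": "aurora-mysql",             # matches the existing MySQL databases
        "EngineMode": "serverless",
        "ScalingConfiguration": {
            "MinCapacity": 1,                 # Aurora capacity units (ACUs)
            "MaxCapacity": 16,
            "AutoPause": True,                # pause when idle to save cost
        },
        "MasterUsername": "admin",
        "MasterUserPassword": "REPLACE_ME",   # placeholder; use a secret in practice
    }

params = build_aurora_serverless_params()
# import boto3
# boto3.client("rds").create_db_cluster(**params)
```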

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.
Which solution will meet these requirements?


A. Configure an S3 interface endpoint.


B. Configure an S3 gateway endpoint.


C. Create an S3 bucket in a private subnet.


D. Create an S3 bucket in the same Region as the EC2 instance.





B.
  Configure an S3 gateway endpoint.
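A gateway endpoint is created with `ec2.create_vpc_endpoint` and works by adding S3 routes to the VPC's route tables, so S3 API traffic never leaves the AWS network. The sketch below builds that request body; the VPC ID, route table ID, and Region are placeholders, and the call is commented out.

```python
def build_s3_gateway_endpoint_params(vpc_id, route_table_ids, region="us-east-1"):
    """Request body sketch for boto3's ec2.create_vpc_endpoint (answer B)."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,     # S3 routes are added to these tables
    }

params = build_s3_gateway_endpoint_params("vpc-123", ["rtb-456"])  # placeholder IDs
# import boto3
# boto3.client("ec2").create_vpc_endpoint(**params)
```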

A company recently migrated its web application to AWS by rehosting the application on Amazon EC2 instances in a single AWS Region. The company wants to redesign its application architecture to be highly available and fault tolerant. Traffic must reach all running EC2 instances randomly. Which combination of steps should the company take to meet these requirements? (Choose two.)


A. Create an Amazon Route 53 failover routing policy.


B. Create an Amazon Route 53 weighted routing policy.


C. Create an Amazon Route 53 multivalue answer routing policy.


D. Launch three EC2 instances: two instances in one Availability Zone and one instance in another Availability Zone.


E. Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.





C.
  Create an Amazon Route 53 multivalue answer routing policy.

E.
  Launch four EC2 instances: two instances in one Availability Zone and two instances in another Availability Zone.
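The multivalue answer policy in answer C is one record set per instance, each with a unique `SetIdentifier` and `MultiValueAnswer` set to true, submitted through `route53.change_resource_record_sets`. The sketch below builds that ChangeBatch; the record name, IPs, and hosted zone ID are placeholders.

```python
def build_multivalue_changes(record_name, instance_ips):
    """ChangeBatch sketch: one multivalue answer A record per EC2 instance,
    so Route 53 returns up to eight healthy answers in random order."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "A",
                    "SetIdentifier": f"instance-{i}",  # must be unique per record
                    "MultiValueAnswer": True,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }
            for i, ip in enumerate(instance_ips)
        ]
    }

# Four instances, two per Availability Zone, per answer E.
batch = build_multivalue_changes(
    "www.example.com",
    ["203.0.113.10", "203.0.113.11", "198.51.100.10", "198.51.100.11"],
)
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z...", ChangeBatch=batch)
```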

A company is designing an application. The application uses an AWS Lambda function to receive information through Amazon API Gateway and to store the information in an Amazon Aurora PostgreSQL database.
During the proof-of-concept stage, the company has to increase the Lambda quotas significantly to handle the high volumes of data that the company needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.
Which solution will meet these requirements?


A. Refactor the Lambda function code to Apache Tomcat code that runs on Amazon EC2 instances. Connect the database by using native Java Database Connectivity (JDBC) drivers.


B. Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.


C. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using Amazon Simple Notification Service (Amazon SNS).


D. Set up two Lambda functions. Configure one function to receive the information. Configure the other function to load the information into the database. Integrate the Lambda functions by using an Amazon Simple Queue Service (Amazon SQS) queue.





B.
  Change the platform from Aurora to Amazon DynamoDB. Provision a DynamoDB Accelerator (DAX) cluster. Use the DAX client SDK to point the existing DynamoDB API calls at the DAX cluster.

A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem. Which solution addresses this performance issue?


A. Change the storage type to Provisioned IOPS SSD


B. Change the DB instance to a memory optimized instance class


C. Change the DB instance to a burstable performance instance class


D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.





A.
  Change the storage type to Provisioned IOPS SSD
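The storage change in answer A is a `rds.modify_db_instance` call switching the volume to Provisioned IOPS SSD. The sketch below builds that request body; the instance identifier and the IOPS figure are illustrative assumptions, not values from the question, and the call is commented out.

```python
def build_storage_change_params(instance_id):
    """Request body sketch for boto3's rds.modify_db_instance:
    move the 2 TB volume from General Purpose SSD to Provisioned IOPS SSD."""
    return {
        "DBInstanceIdentifier": instance_id,
        "StorageType": "io1",          # Provisioned IOPS SSD
        "Iops": 12000,                 # illustrative; size to the write workload
        "ApplyImmediately": True,      # apply now rather than in the next window
    }

params = build_storage_change_params("items-db")   # placeholder identifier
# import boto3
# boto3.client("rds").modify_db_instance(**params)
```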

A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers. Which combination of steps will meet these requirements with the MOST operational efficiency? (Select THREE.)


A. Register the required domain with a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.


B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.


C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.


D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.


E. Create multiple API endpoints for each customer in API Gateway.


F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).





A.
  Register the required domain with a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint.

D.
  Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.

F.
  Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).

Explanation: To provide individual and secure URLs for all customers using an API Gateway REST API, combine the following steps:

A. Register the required domain with a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and a record in the zone that points to the API Gateway endpoint. This step allows you to use a custom domain name for your API instead of the default one generated by API Gateway. A wildcard custom domain name means that any subdomain under your domain (such as customer1.example.com or customer2.example.com) can reach your API. Register your domain name with a registrar (such as Route 53 or a third-party registrar), create a hosted zone in Route 53 for the domain, and create an alias record in the hosted zone that points to the API Gateway endpoint.

D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region. This step secures your API with HTTPS using a certificate issued by ACM. A wildcard certificate matches any subdomain under your domain name (such as *.example.com). Request or import a certificate in ACM that matches your custom domain name, verify that you own the domain, and request the certificate in the same Region as your API.

F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM). This step associates your custom domain name with your API and uses the ACM certificate to enable HTTPS. Create a custom domain name in API Gateway for the REST API and specify the certificate ARN from ACM. Then create a base path mapping that maps a path from your custom domain name to your API stage.
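Step F above can be sketched as an `apigateway.create_domain_name` request that pairs the wildcard ACM certificate with a regional custom domain. The domain name and certificate ARN are placeholders, and the call is commented out so the sketch runs locally.

```python
def build_custom_domain_params(domain, certificate_arn):
    """Request body sketch for boto3's apigateway.create_domain_name."""
    return {
        "domainName": domain,                       # e.g. the wildcard "*.example.com"
        "regionalCertificateArn": certificate_arn,  # wildcard cert issued by ACM
        "endpointConfiguration": {"types": ["REGIONAL"]},
        "securityPolicy": "TLS_1_2",                # enforce modern TLS
    }

params = build_custom_domain_params(
    "*.example.com",
    "arn:aws:acm:us-east-1:111122223333:certificate/abc",  # placeholder ARN
)
# import boto3
# boto3.client("apigateway").create_domain_name(**params)
```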

A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?


A. Configure AWS Config with a rule to report the incomplete multipart upload object count.


B. Create a service control policy (SCP) to report the incomplete multipart upload object count.


C. Configure S3 Storage Lens to report the incomplete multipart upload object count.


D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.





C.
  Configure S3 Storage Lens to report the incomplete multipart upload object count.

Explanation: S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.

A company needs to configure a real-time data ingestion architecture for its application. The company needs an API, a process that transforms data as the data is streamed, and a storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?


A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3.


C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.


D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.





C.
  Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3.
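The transformation step in this answer is a Lambda function invoked by Kinesis Data Firehose, which receives base64-encoded records and must return each one with a `result` status before delivery to Amazon S3. Below is a minimal, runnable handler; the uppercase transform of a `message` field is a stand-in assumption for the real business logic.

```python
import base64
import json

def handler(event, context):
    """Kinesis Data Firehose transformation Lambda: decode each record,
    reshape it, and hand it back for delivery to Amazon S3."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        payload["message"] = payload["message"].upper()   # example transform
        output.append({
            "recordId": record["recordId"],               # must echo the input ID
            "result": "Ok",                               # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```

Firehose matches each returned record to its input by `recordId`, so the handler must return exactly one entry per input record.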

