A company is load testing its three-tier production web application deployed with an AWS CloudFormation
template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS
Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes
made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)
A. Review the stack drift before modifying the template.
B. Create and review a change set before applying it.
C. Export the database resources as stack outputs.
D. Define the database resources in a nested stack.
E. Set a stack policy for the database resources.
Answer (A and D): Review the stack drift before modifying the template, and define the database resources in a nested stack.
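To illustrate the drift-review step from the answer, here is a minimal boto3 sketch (the stack name load-test-stack is a hypothetical placeholder) that starts drift detection and lists any RDS resources whose live configuration no longer matches the template:

import time
import boto3

cfn = boto3.client("cloudformation")

# Start drift detection on the stack and wait for it to finish.
detection_id = cfn.detect_stack_drift(StackName="load-test-stack")["StackDriftDetectionId"]
while cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id)["DetectionStatus"] == "DETECTION_IN_PROGRESS":
    time.sleep(5)

# Report RDS resources whose live configuration differs from the template.
drifts = cfn.describe_stack_resource_drifts(
    StackName="load-test-stack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"])
for drift in drifts["StackResourceDrifts"]:
    if drift["ResourceType"].startswith("AWS::RDS::"):
        print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])

Running this before and after the Application team's update gives the Database Specialist a record that the RDS resources were untouched.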
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database.
The application needs to be deployed to production and other non-production environments. A Database
Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS
CloudFormation templates used for automated deployment. The CloudFormation templates are version
controlled in the company’s code repository. The company also needs to meet its compliance requirements by
routinely rotating its database master password for production.
What is the most secure solution to store the master password?
A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
D. Use the ssm dynamic reference to retrieve the master password stored in AWS Systems Manager Parameter Store and enable automatic rotation.
Answer (C): Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
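As a sketch of how option C looks in practice, the fragment below embeds a secretsmanager dynamic reference in an Aurora cluster definition; the secret name prod/aurora/master is an assumption, and the secret itself would be created and rotated in AWS Secrets Manager:

import json

# Hypothetical secret name managed and rotated in AWS Secrets Manager.
SECRET_ID = "prod/aurora/master"

# CloudFormation resource fragment: credentials are resolved at deploy time,
# so no password ever lands in the version-controlled template.
db_cluster = {
    "Type": "AWS::RDS::DBCluster",
    "Properties": {
        "Engine": "aurora-mysql",
        "MasterUsername": f"{{{{resolve:secretsmanager:{SECRET_ID}:SecretString:username}}}}",
        "MasterUserPassword": f"{{{{resolve:secretsmanager:{SECRET_ID}:SecretString:password}}}}",
    },
}
print(json.dumps({"Resources": {"AuroraCluster": db_cluster}}, indent=2))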
A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB
cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account.
Testing results in minimal changes to the test data. The Development team wants each environment refreshed
nightly so each test database contains fresh production data every day.
Which migration approach will be the fastest and most cost-effective to implement?
A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.
C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.
D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
Answer (A): Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
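A minimal boto3 sketch of the nightly refresh, assuming a production cluster named prod-aurora-cluster and a DB subnet group in VPC_TEST; each clone is copy-on-write, so it is created quickly and only stores pages that the tests change:

import boto3

rds = boto3.client("rds")

SOURCE_CLUSTER = "prod-aurora-cluster"        # hypothetical production cluster
TEST_SUBNET_GROUP = "vpc-test-subnet-group"   # hypothetical subnet group in VPC_TEST

# Nightly job: delete yesterday's clones first (omitted), then re-create 12 clones.
for i in range(1, 13):
    clone_id = f"test-clone-{i:02d}"
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier=clone_id,
        SourceDBClusterIdentifier=SOURCE_CLUSTER,
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
        DBSubnetGroupName=TEST_SUBNET_GROUP,
    )
    # Each cloned cluster still needs at least one instance to accept connections.
    rds.create_db_instance(
        DBInstanceIdentifier=f"{clone_id}-instance-1",
        DBClusterIdentifier=clone_id,
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
    )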
A company is running its line of business application on AWS, which uses Amazon RDS for MySQL as the
persistent data store. The company wants to minimize downtime when it migrates the database to Amazon
Aurora.
Which migration method should a Database Specialist use?
A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
Answer (A): Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
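A hedged boto3 sketch of the snapshot-migration path in the answer; the identifiers lob-mysql-prod and lob-aurora-cluster are assumptions:

import boto3

rds = boto3.client("rds")

# Snapshot the existing RDS for MySQL instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="lob-mysql-prod",
    DBSnapshotIdentifier="lob-mysql-prod-migration",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="lob-mysql-prod-migration")

# Restore the MySQL snapshot directly into a new Aurora MySQL cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="lob-aurora-cluster",
    SnapshotIdentifier="lob-mysql-prod-migration",
    Engine="aurora-mysql",
)
# A cluster instance still has to be added with create_db_instance before the
# application is repointed at the Aurora cluster endpoint.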
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To
meet the company’s disaster recovery policies, the database backup needs to be copied into another Region.
The company requires the solution to be cost-effective and operationally efficient.
What should a Database Specialist do to copy the database backup into a different Region?
A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region.
B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region.
C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region.
D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica.
Answer (D): Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica.
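A short boto3 sketch of the chosen approach, run against the disaster recovery Region; the Region names, account ID, and instance identifiers are assumptions:

import boto3

rds_dr = boto3.client("rds", region_name="us-east-1")

# Hypothetical ARN of the source Multi-AZ instance in the primary Region.
SOURCE_ARN = "arn:aws:rds:us-west-2:111122223333:db:prod-db"

# Create the cross-Region read replica in the DR Region.
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-db-dr-replica",
    SourceDBInstanceIdentifier=SOURCE_ARN,
)

# Turn on automated backups for the replica so snapshots exist in the DR Region.
rds_dr.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-db-dr-replica")
rds_dr.modify_db_instance(
    DBInstanceIdentifier="prod-db-dr-replica",
    BackupRetentionPeriod=1,
    ApplyImmediately=True,
)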
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The
Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?
A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
B. Use reader endpoints for both the read-only workload applications.
C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
D. Use custom endpoints for the two read-only applications.
Answer (B): Use reader endpoints for both the read-only workload applications.
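To show where the reader endpoint comes from, here is a small boto3 sketch (the cluster identifier is a hypothetical placeholder) that looks it up so it can be placed in each application's connection string; the reader endpoint load-balances connections across the cluster's Aurora Replicas:

import boto3

rds = boto3.client("rds")

cluster = rds.describe_db_clusters(
    DBClusterIdentifier="reporting-aurora-cluster")["DBClusters"][0]

print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])
# Each read-only application uses the reader endpoint host in its connection string.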
A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in
its AWS account. The template configures provisioned throughput capacity using hard-coded values. The
company wants to change the template so that the tables it creates in the future have independently
configurable read and write capacity units assigned.
Which solution will enable this change?
A. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.
B. Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
C. Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
D. Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
Answer (B): Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
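A sketch of the resulting template structure, expressed here as a Python dictionary; the table name OrdersTable and key attribute pk are assumptions, while rcuCount and wcuCount match the parameters named in the answer:

import json

template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "OrdersTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "ProvisionedThroughput": {
                    # Ref pulls whatever values are supplied at stack creation time.
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}
print(json.dumps(template, indent=2))

Each future stack can then pass different rcuCount and wcuCount values without editing the template.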
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic
patterns throughout the year are usually stable; however, a large event is planned. The company knows that
traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published
during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?
A. Ensure the table is always provisioned to meet peak needs.
B. Allow burst capacity to handle the additional load.
C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic.
D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event.
Answer (B): Allow burst capacity to handle the additional load.
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores
in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate
the deployment of the database with identical configurations in additional Regions, as needed. The solution
should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?
A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
B. Create an AWS CloudFormation template and deploy the template to all the Regions.
C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
Answer (B): Create an AWS CloudFormation template and deploy the template to all the Regions.
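A minimal boto3 sketch of deploying one template to every Region; the Region list, stack name, and template file name are assumptions:

import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-1"]

with open("high-scores-table.yaml") as f:
    template_body = f.read()

# Deploy the same table definition, with identical configuration, to each Region.
for region in REGIONS:
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName="high-scores-dynamodb",
        TemplateBody=template_body,
    )

Later configuration changes are rolled out by running the same loop with update_stack.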
A company with branch offices in Portland, New York, and Singapore has a three-tier web application that
leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2
Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2
Regions.
This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics.
There are complaints that the dashboard performs more slowly in the Singapore location than it does in
Portland or New York. A solution is needed to provide consistent performance for all users in each location.
Which set of actions will meet these requirements?
A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
Answer (A): Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
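A boto3 sketch of the chosen snapshot copy-and-restore flow, executed from the ap-southeast-1 Region; the snapshot ARN, account ID, and instance identifiers are assumptions:

import boto3

rds_sg = boto3.client("rds", region_name="ap-southeast-1")

# Hypothetical ARN of a snapshot taken from the us-west-2 primary.
SNAPSHOT_ARN = "arn:aws:rds:us-west-2:111122223333:snapshot:sales-db-snap"

# Copy the snapshot into the Singapore Region.
rds_sg.copy_db_snapshot(
    SourceDBSnapshotIdentifier=SNAPSHOT_ARN,
    TargetDBSnapshotIdentifier="sales-db-snap-sg",
    SourceRegion="us-west-2",
)
rds_sg.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="sales-db-snap-sg")

# Restore a local instance for the Singapore dashboard to query.
rds_sg.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="sales-db-sg",
    DBSnapshotIdentifier="sales-db-snap-sg",
)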
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP)
transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the
OLTP transactions run all the time. The company has benchmarked its workload and determined that a
six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of
nodes in the cluster to support the workload at different times. The workload has not changed since the
previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?
A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.
C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
Answer (D): Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
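A sketch of the automatic scaling setup using Application Auto Scaling; the cluster identifier, capacity bounds, and CPU target are assumptions chosen to bracket the benchmarked six-node peak (one writer plus up to five readers):

import boto3

autoscaling = boto3.client("application-autoscaling")

CLUSTER_RESOURCE = "cluster:prod-aurora-cluster"   # hypothetical cluster identifier

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=CLUSTER_RESOURCE,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Target-tracking policy: readers are added when reporting pushes CPU up and
# removed automatically once the reporting window ends.
autoscaling.put_scaling_policy(
    PolicyName="aurora-reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId=CLUSTER_RESOURCE,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)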
A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse
solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the
on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move
must take place during a 2-week period when source systems are shut down for maintenance. The data should
stay encrypted at rest and in transit.
Which approach has the least risk and the highest likelihood of a successful data transfer?
A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp multipart upload command to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.
Answer (C): Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
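A quick back-of-the-envelope check shows why an offline transfer carries less risk than the network options: even at full line rate, 100 TB does not fit in the 2-week maintenance window.

# Transfer time for 100 TB over a fully saturated 500 Mbps link.
data_bits = 100 * 10**12 * 8     # 100 TB in bits
link_bps = 500 * 10**6           # 500 Mbps in bits per second

seconds = data_bits / link_bps
days = seconds / 86_400
print(f"{days:.1f} days")        # about 18.5 days, longer than the 2-week window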