PROFESSIONAL-CLOUD-ARCHITECT Practice Test Questions

251 Questions


Topic 5, Misc Questions

Your company's test suite is a custom C++ application that runs tests throughout each day on Linux virtual
machines. The full test suite takes several hours to complete, running on a limited number of on-premises
servers reserved for testing. Your company wants to move the testing infrastructure to the cloud, to reduce the
amount of time it takes to fully test a change to the system, while changing the tests as little as possible. Which cloud infrastructure should you recommend?


A.

Google Compute Engine unmanaged instance groups and Network Load Balancer


B.

Google Compute Engine managed instance groups with auto-scaling


C.

Google Cloud Dataproc to run Apache Hadoop jobs to process each test


D.

Google App Engine with Google Stackdriver for logging





B.
  

Google Compute Engine managed instance groups with auto-scaling



https://cloud.google.com/compute/docs/instance-groups/
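
For reference, a minimal sketch of creating an autoscaled managed instance group for the test runners; the template name, zone, image, and autoscaling thresholds below are illustrative assumptions, not values from the question:

# Instance template the group will stamp out (hypothetical name and image)
gcloud compute instance-templates create test-runner-template \
    --machine-type=n1-standard-4 \
    --image-family=debian-11 --image-project=debian-cloud

# Managed instance group that starts with one VM
gcloud compute instance-groups managed create test-runner-mig \
    --template=test-runner-template --size=1 --zone=us-central1-a

# Autoscaling lets the group grow while the suite runs and shrink afterwards
gcloud compute instance-groups managed set-autoscaling test-runner-mig \
    --zone=us-central1-a --max-num-replicas=20 --target-cpu-utilization=0.75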

You want to enable your running Google Container Engine cluster to scale as demand for your application changes. What should you do?



A.

Option A


B.

Option B


C.

Option C


D.

Option D





C.
  

Option C



https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
To enable autoscaling for an existing node pool, run the following command:
gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling \
    --min-nodes 1 --max-nodes 10 --zone [COMPUTE_ZONE] --node-pool default-pool
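
Autoscaling can also be enabled when the cluster is first created; the node counts below are just an example:

gcloud container clusters create [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
    --num-nodes 3 --enable-autoscaling --min-nodes 1 --max-nodes 10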

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes
from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where
should you store the data?


A.

Google BigQuery


B.

Google Cloud SQL


C.

Google Cloud Bigtable


D.

Google Cloud Storage





C.
  

Google Cloud Bigtable
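
As an illustration (the instance, table, and column-family names are assumptions), the ingest target could be provisioned with gcloud and the cbt tool; a row key such as sensor ID plus timestamp keeps each sensor's readings contiguous:

# Bigtable instance for the sensor stream (node count is a placeholder)
gcloud bigtable instances create sensor-telemetry \
    --display-name="Sensor telemetry" \
    --cluster-config=id=sensor-cluster,zone=us-central1-b,nodes=3

# Table and column family for the readings (row key: <sensor-id>#<timestamp>)
cbt -instance=sensor-telemetry createtable readings
cbt -instance=sensor-telemetry createfamily readings data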



The database administration team has asked you to help them improve the performance of their new database
server running on Google Compute Engine. The database is for importing and normalizing their performance
statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk. What should they change to get better performance from this system?


A.

Increase the virtual machine's memory to 64 GB.


B.

Create a new virtual machine running PostgreSQL.


C.

Dynamically resize the SSD persistent disk to 500 GB.


D.

Migrate their performance metrics warehouse to BigQuery.


E.

Modify all of their batch jobs to use bulk inserts into the database.





C.
  

Dynamically resize the SSD persistent disk to 500 GB.
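
Persistent disk throughput and IOPS scale with provisioned size, so growing the disk also raises its performance ceiling. A minimal sketch, assuming the disk name, zone, and device path shown (adjust to the actual VM):

# Resize the persistent disk online
gcloud compute disks resize [DISK_NAME] --size=500GB --zone=[ZONE]

# On the VM, grow the ext4 filesystem to use the new space (device path is an assumption)
sudo resize2fs /dev/sdb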



Your organization requires that metrics from all applications be retained for 5 years for future analysis in
possible legal proceedings. Which approach should you use?


A.

Grant the security team access to the logs in each Project.


B.

Configure Stackdriver Monitoring for all Projects, and export to BigQuery.


C.

Configure Stackdriver Monitoring for all Projects with the default retention policies.


D.

Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.





D.
  

Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.



https://cloud.google.com/monitoring/api/v3/metrics
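
One way to meet the 5-year requirement on the Cloud Storage side is a bucket with a retention policy; the bucket name and location below are placeholders:

# Bucket for exported metrics, with a 5-year retention policy
gsutil mb -c coldline -l us-central1 gs://metrics-archive-example
gsutil retention set 5y gs://metrics-archive-example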

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?


A.

Web applications deployed using App Engine standard environment


B.

RabbitMQ deployed using an unmanaged instance group


C.

Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode


D.

Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types





A.
  

Web applications deployed using App Engine standard environment



For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you
configure the solution to scale for this growth without making major application changes and still maximize
the ROI?


A.

Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud
Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.


B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with
Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.


C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ
to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk
storage.


D.

Implement managed instance groups for the Tomcat and Nginx. Migrate MySQL to Cloud SQL,
RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.





C.
  

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ
to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk
storage.
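
A rough sketch of provisioning the managed targets named in the answer; instance names, tiers, regions, and sizes are illustrative assumptions:

# Cloud SQL to replace the self-managed MySQL server
gcloud sql instances create dress4win-mysql \
    --database-version=MYSQL_5_7 --tier=db-n1-standard-4 --region=us-central1

# Cloud Pub/Sub topic to replace the RabbitMQ queue
gcloud pubsub topics create work-queue

# Cloud Dataproc cluster to replace the self-managed Hadoop cluster
gcloud dataproc clusters create analytics-cluster --region=us-central1 --num-workers=4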



For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?


A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL
server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.


B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher.
Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.


C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL
server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.


D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy
Jenkins to Compute Engine using Cloud Launcher





C.
  

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL
server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.
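
For the web layer, an App Engine migration is deployed with a single command once an app.yaml exists; the project ID is a placeholder:

gcloud app deploy app.yaml --project=[PROJECT_ID]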



For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP
solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?


A.

Replace the existing data warehouse with BigQuery. Use table partitioning.


B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.


C.

Replace the existing data warehouse with BigQuery. Use federated data sources.


D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional
Compute Engine pre-emptible instance with 32 CPUs.





C.
  

Replace the existing data warehouse with BigQuery. Use federated data sources.
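
Federated (external) data sources let BigQuery query files that stay in Cloud Storage. A minimal sketch with the bq tool; the bucket, dataset, and table names are assumptions:

# Build an external table definition over CSV files in Cloud Storage
bq mkdef --autodetect --source_format=CSV "gs://terramearth-telemetry/*.csv" > table_def.json

# Register the external table and query it in place
bq mk --external_table_definition=table_def.json warehouse.telemetry_raw
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM warehouse.telemetry_raw'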



For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost. Which two actions should you take?


A.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Coldline”, and
Action: “Delete”.


B.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Coldline”, and Action: “Set to
Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Coldline”, and
Action: “Set to Nearline”.


C.

Create a Cloud Storage lifecycle rule with Age: “90”, Storage Class: “Standard”, and Action: “Set to
Nearline”, and create a second GCS life-cycle rule with Age: “91”, Storage Class: “Nearline”, and
Action: “Set to Coldline”.


D.

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and
Action: “Delete”.





D.
  

Create a Cloud Storage lifecycle rule with Age: “30”, Storage Class: “Standard”, and Action: “Set to
Coldline”, and create a second GCS life-cycle rule with Age: “365”, Storage Class: “Nearline”, and
Action: “Delete”.
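
Lifecycle rules of this shape are applied to a bucket as a JSON configuration; the example below mirrors the 30-day and 365-day ages in the answer (bucket name is a placeholder):

# lifecycle.json
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 30}},
    {"action": {"type": "Delete"}, "condition": {"age": 365}}
  ]
}

# Apply the configuration
gsutil lifecycle set lifecycle.json gs://terramearth-data-example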



For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation,
TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?


A.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age
condition of 36 months.


B.

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36
months.


C.

Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with
an Age condition of 36 months.


D.

Create a BigQuery time-partitioned table for the European data, and set the partition period to 36
months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age
condition of 36 months.





B.
  

Create a BigQuery table for the European data, and set the table retention period to 36 months. For
Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36
months.
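
For the BigQuery half of this answer, a table's retention can be expressed as an expiration in seconds; 36 months is roughly 1095 days, i.e. 94,608,000 seconds (dataset and table names are placeholders):

bq update --expiration 94608000 european_data.customer_telemetry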



For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical
architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?


A.

Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.


B.

Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.


C.

Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.


D.

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery
for historical data queries.





D.
  

Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery
for historical data queries.
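
A rough sketch of provisioning the three stores; instance names, configs, and node counts are illustrative assumptions:

# Cloud Bigtable for the time-series game telemetry
gcloud bigtable instances create game-telemetry \
    --display-name="Game telemetry" \
    --cluster-config=id=telemetry-c1,zone=us-central1-b,nodes=3

# Cloud Spanner for transactional data
gcloud spanner instances create game-transactions \
    --config=regional-us-central1 --description="Transactional data" --nodes=1

# BigQuery dataset for historical analytics
bq mk --dataset game_analytics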



