Professional-Cloud-Developer Practice Test Questions

253 Questions


Topic 2: Misc. Questions

You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use the simplest method to accomplish this. What should you do?


A. Change your application's logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging.


B. Update your application to output logs in CSV format, and add the necessary metadata to the CSV.


C. Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log.


D. Update your application to output logs in JSON format, and add the necessary metadata to the JSON.





D.
  Update your application to output logs in JSON format, and add the necessary metadata to the JSON.

On GKE, the logging agent automatically parses JSON lines written to standard output into structured log entries, so adding metadata fields to the JSON is the simplest way to get per-request metadata into Cloud Logging without changing libraries or installing agents.
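The approach can be sketched in a few lines; this is a minimal illustration (the helper name and metadata fields such as requestPath are invented for the example), assuming the GKE logging agent picks up stdout:

```python
import json
import sys

def log(severity, message, **metadata):
    """Emit one structured log line to stdout. Cloud Logging maps
    recognized JSON fields such as severity and message onto the
    LogEntry; extra keys become structured payload fields."""
    line = json.dumps({"severity": severity, "message": message, **metadata})
    print(line, file=sys.stdout)
    return line

line = log("INFO", "request handled", requestPath="/users", latencyMs=42)
```

Because the output is one JSON object per line, no agent configuration or library change is required.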

You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service. What should you do?


A. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service.


B. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service.


C. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.


D. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[API_NAME]/[API_VERSION]/.





C.
  Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the url https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/.

Clients in the same VPC can resolve the instance's internal DNS name directly, so no external IP address or load balancer is needed.

You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources following Google-recommended best practices. What should you do?


A. 1) Download the service account’s key file in JSON format, and store it locally on your laptop.
2) Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file.


B. 1) Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-name@project.iam.gserviceaccount.com.
2) Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command.


C. 1) Run the following command from a command line: gcloud auth application-default login.
2) In the browser window that opens, authenticate using your personal credentials.


D. 1) Store the service account's key file in JSON format in Hashicorp Vault.
2) Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token.





B.
  1) Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-name@project.iam.gserviceaccount.com.
2) Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command.

Whenever possible, avoid storing service account keys on a file system.
https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
Because your Cloud Identity already holds the roles/iam.serviceAccountTokenCreator role, the Google-recommended approach is service account impersonation: your personal credentials are exchanged for short-lived access tokens for the service account, so no long-lived key file is ever created or stored on the laptop. Terraform's Google provider accepts such a token through the GOOGLE_OAUTH_ACCESS_TOKEN environment variable.
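For reference, Terraform's Google provider also supports impersonation directly in the provider block via its impersonate_service_account argument; a minimal sketch, where the project, region, and service account name are placeholders:

```hcl
provider "google" {
  project = "my-project"   # placeholder project ID
  region  = "us-central1"

  # The provider obtains short-lived tokens for this service account
  # using your own Application Default Credentials; no key file is
  # ever downloaded or stored on the laptop.
  impersonate_service_account = "terraform-deployer@my-project.iam.gserviceaccount.com"
}
```

Either approach (environment variable or provider argument) keeps authentication key-free.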

You have a mixture of packaged and internally developed applications hosted on a Compute Engine instance that is running Linux. These applications write log records as text in local files. You want the logs to be written to Cloud Logging. What should you do?


A. Pipe the content of the files to the Linux Syslog daemon.


B. Install a Google version of fluentd on the Compute Engine instance


C. Install a Google version of collectd on the Compute Engine instance.


D. Using cron, schedule a job to copy the log files to Cloud Storage once a day.





B.
  Install a Google version of fluentd on the Compute Engine instance
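For context, the Logging agent (google-fluentd) tails local files through fluentd source stanzas; a minimal sketch, where the path, pos_file, and tag are placeholders for your applications' log locations:

```
<source>
  @type tail
  path /var/log/myapp/*.log                       # placeholder: application log files
  pos_file /var/lib/google-fluentd/pos/myapp.pos  # tracks the last-read position
  read_from_head true
  tag myapp                                       # log name as it appears in Cloud Logging
  <parse>
    @type none                                    # forward each line as unstructured text
  </parse>
</source>
```

Each tailed line is shipped to Cloud Logging without any change to the packaged or internally developed applications.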

Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as securely as possible. What should you do?


A. Use HTTP signed URLs to securely provide access to the required resources.


B. Use the instance’s service account Application Default Credentials to authenticate to the required resources.


C. Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host instance before starting the application.


D. Commit the credential JSON file into your application’s source repository, and have your CI/CD process package it with the software that is deployed to the instance.





B.
  Use the instance’s service account Application Default Credentials to authenticate to the required resources.

Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you do?


A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.


B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service.


C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.


D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.





C.
  Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service.
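A minimal sketch of the two manifests (names, ports, and thresholds are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP                # reachable only from inside the cluster
  selector:
    app: my-service
  ports:
  - port: 80
    targetPort: 8080
```

ClusterIP keeps the service internal to the cluster, while the Horizontal Pod Autoscaler adds replicas as load changes; a Vertical Pod Autoscaler resizes pods (often via restarts), which is slower to react.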

You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine into a single table, removing duplicate rows from the result set. What should you do?


A. Use the JOIN operator in SQL to combine the tables.


B. Use nested WITH statements to combine the tables


C. Use the UNION operator in SQL to combine the tables.


D. Use the UNION ALL operator in SQL to combine the tables.





C.
  Use the UNION operator in SQL to combine the tables.
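The difference between UNION and UNION ALL is easy to demonstrate; a minimal sketch using SQLite's ANSI-SQL support (the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (name TEXT)")
cur.execute("CREATE TABLE t2 (name TEXT)")
cur.executemany("INSERT INTO t1 VALUES (?)", [("alice",), ("bob",)])
cur.executemany("INSERT INTO t2 VALUES (?)", [("bob",), ("carol",)])

# UNION removes duplicate rows from the combined result set.
union = cur.execute("SELECT name FROM t1 UNION SELECT name FROM t2").fetchall()
print(len(union))      # 3 -- the duplicate "bob" appears once

# UNION ALL keeps every row, duplicates included.
union_all = cur.execute("SELECT name FROM t1 UNION ALL SELECT name FROM t2").fetchall()
print(len(union_all))  # 4
```

UNION ALL is faster because it skips the deduplication step, but here the requirement is to remove duplicate rows, so UNION is the right choice.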

Your development team is using Cloud Build to promote a Node.js application built on App Engine from your staging environment to production. The application relies on several directories of photos stored in a Cloud Storage bucket named webphotos-staging in the staging environment. After the promotion, these photos must be available in a Cloud Storage bucket named webphotos-prod in the production environment. You want to automate the process where possible. What should you do?


A. Manually copy the photos to webphotos-prod.


B. Add a startup script in the application's app.yaml file to move the photos from webphotos-staging to webphotos-prod.


C. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', '-r', 'gs://webphotos-staging', 'gs://webphotos-prod']
  waitFor: ['-']


D. Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:
- name: gcr.io/cloud-builders/gcloud
  args: ['cp', '-A', 'gs://webphotos-staging', 'gs://webphotos-prod']
  waitFor: ['-']





C.
  Add a build step in the cloudbuild.yaml file before the promotion step with the arguments:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', '-r', 'gs://webphotos-staging', 'gs://webphotos-prod']
  waitFor: ['-']
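In context, the step might sit in a cloudbuild.yaml like the sketch below; the promotion step shown here is hypothetical, since the real step depends on how your pipeline deploys the App Engine application:

```yaml
steps:
# Copy the staging photos before the promotion step runs.
# waitFor: ['-'] starts this step immediately, in parallel
# with any earlier steps.
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', '-r', 'gs://webphotos-staging', 'gs://webphotos-prod']
  waitFor: ['-']
# Promotion step (hypothetical placeholder).
- name: gcr.io/cloud-builders/gcloud
  args: ['app', 'deploy', '--project', 'my-prod-project']
```

The gsutil builder is the supported way to manipulate Cloud Storage objects from Cloud Build; the gcloud builder in option D has no cp command for buckets.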


Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency. How should you perform reads from Cloud Spanner for this application?


A. Perform Read-Only transactions


B. Perform stale reads using single-read methods


C. Perform strong reads using single-read methods.


D. Perform stale reads using read-write transactions.





B.
  Perform stale reads using single-read methods.

Stale reads served by single-read methods do not take locks and can be answered by the closest replica, which minimizes latency when the application can tolerate slightly out-of-date data. Read-write transactions always perform strong reads, so they cannot be stale.

You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do?


A. Define a new Ingress resource with a host rule matching the new domain


B. Modify the existing Ingress resource with a host rule matching the new domain


C. Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP


D. Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value





B.
  Modify the existing Ingress resource with a host rule matching the new domain
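A sketch of the modified Ingress, where the domains and Service names are placeholders; both host rules share the one GKE Ingress, and therefore the one HTTP(S) load balancer IP:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress
spec:
  rules:
  - host: example.com            # existing site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website
            port:
              number: 80
  - host: example.eu             # new European site, same load balancer IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-eu
            port:
              number: 80
```

Creating a second Ingress or a LoadBalancer Service (options A, C, D) would provision a separate load balancer with its own IP address.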

You want to notify on-call engineers about a service degradation in production while minimizing development time. What should you do?


A. Use Cloud Function to monitor resources and raise alerts


B. Use Cloud Pub/Sub to monitor resources and raise alerts


C. Use Stackdriver Error Reporting to capture errors and raise alerts.


D. Use Stackdriver Monitoring to monitor resources and raise alerts.





D.
  Use Stackdriver Monitoring to monitor resources and raise alerts.

Stackdriver (now Cloud) Monitoring provides built-in alerting policies and notification channels, so on-call engineers can be paged without writing custom code, which minimizes development time.
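For reference, Cloud Monitoring alert policies can be defined declaratively; a sketch of a policy file (the display names, metric, and threshold are placeholders) that could be created with gcloud alpha monitoring policies create --policy-from-file:

```yaml
displayName: "High CPU on production instances"
combiner: OR
conditions:
- displayName: "CPU utilization above 90% for 5 minutes"
  conditionThreshold:
    filter: 'metric.type="compute.googleapis.com/instance/cpu/utilization" AND resource.type="gce_instance"'
    comparison: COMPARISON_GT
    thresholdValue: 0.9
    duration: 300s
```

Attaching a notification channel (email, SMS, PagerDuty, and so on) to the policy is what actually pages the on-call engineer.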

You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.)


A. Set a retention policy on the bucket with a period of 7 years.


B. Use IAM Conditions to provide access to objects 7 years after the object creation date.


C. Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation.


D. Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.


E. Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.





A.
  Set a retention policy on the bucket with a period of 7 years.

D.
  Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years.

The Bucket Lock feature allows you to configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained, and also lets you lock the policy, permanently preventing it from being reduced or removed.
https://cloud.google.com/storage/docs/bucket-lock
https://cloud.google.com/storage/docs/storage-classes#archive
Archive storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. Unlike the "coldest" storage services offered by other Cloud providers, your data is available within milliseconds, not hours or days.
Archive storage is the best choice for data that you plan to access less than once a year.
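A sketch of the lifecycle configuration for option D, usable with gsutil lifecycle set (the 3-year age is expressed in days):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
        "condition": {"age": 1095, "matchesStorageClass": ["STANDARD"]}
      }
    ]
  }
}
```

This built-in lifecycle management makes the Cloud Function and Cloud Scheduler approach in option E unnecessary.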

