Topic 1, Mountkirk Games Case Study
Company Overview
Mountkirk Games makes online, session-based, multiplayer games for the most popular mobile platforms.
Company Background
Mountkirk Games builds all of their games with some server-side integration and has historically used cloud
providers to lease physical servers. A few of their games were more popular than expected, and they had
problems scaling their application servers, MySQL databases, and analytics tools.
Mountkirk's current model is to write game statistics to files and send them through an ETL tool that loads
them into a centralized MySQL database for reporting.
Solution Concept
Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the
game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics,
take advantage of its autoscaling server environment, and integrate with a managed NoSQL database.
Technical Requirements
Requirements for Game Backend Platform
1. Dynamically scale up or down based on game activity.
2. Connect to a managed NoSQL database service.
3. Run a customized Linux distro.
Requirements for Game Analytics Platform
1. Dynamically scale up or down based on game activity.
2. Process incoming data on the fly directly from the game servers.
3. Process data that arrives late because of slow mobile networks.
4. Allow SQL queries to access at least 10 TB of historical data.
5. Process files that are regularly uploaded by users' mobile devices.
6. Use only fully managed services.
CEO Statement
Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption
and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate
the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so
we can adapt the game to target users.
CTO Statement
Our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an
environment that provides autoscaling, low latency load balancing, and frees us up from managing physical
servers.
CFO Statement
We are not capturing enough user demographic data, usage metrics, and other KPIs. As a result, we do not
engage the right users. We are not confident that our marketing is targeting the right users, and we are not
selling enough premium Blast-Ups inside the games, which dramatically impacts our revenue.
For this question, refer to the Mountkirk Games case study.
Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?
A.
Verify that the database is online.
B.
Verify that the project quota hasn't been exceeded.
C.
Verify that the new feature code did not introduce any performance bugs.
D.
Verify that the load-testing team is not running their tool against production.
Verify that the database is online.
A 503 is a Service Unavailable error; the first thing to verify is that the database behind the game servers is online.
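As a quick illustration, a minimal connectivity probe can confirm whether the database endpoint is even reachable before digging into quotas or code changes. This is only a sketch; the host and port below are hypothetical placeholders.

```python
import socket

# Hypothetical database endpoint; substitute your instance's address.
DB_HOST = "10.0.0.5"
DB_PORT = 3306  # default MySQL port

def db_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the database endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "online" if db_reachable(DB_HOST, DB_PORT) else "unreachable"
    print(f"Database endpoint is {status}")
```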
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must
meet their technical requirements. Which combination of Google technologies will meet all of their
requirements?
A.
Container Engine, Cloud Pub/Sub, and Cloud SQL
B.
Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
C.
Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow
D.
Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow
E.
Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery
Real-time processing requires a streaming/messaging service, hence Cloud Pub/Sub. Cloud Dataflow processes the incoming data on the fly and handles late arrivals, Cloud Storage receives the files uploaded from mobile devices, and BigQuery provides SQL queries over 10 TB+ of historical data. All four are fully managed services.
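As a sketch of how these pieces fit together, the following Apache Beam (Cloud Dataflow) pipeline reads game events from Pub/Sub, windows them with an allowance for late-arriving data, and writes to BigQuery. The project, topic, and table names are assumptions, and the destination table (a single raw STRING column) is presumed to already exist.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window
from apache_beam.transforms.trigger import (AccumulationMode, AfterCount,
                                            AfterWatermark)

# Hypothetical resource names; substitute your own project, topic, and table.
TOPIC = "projects/my-project/topics/game-events"
TABLE = "my-project:analytics.game_events"

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadEvents" >> beam.io.ReadFromPubSub(topic=TOPIC)
     | "Decode" >> beam.Map(lambda b: {"raw": b.decode("utf-8")})
     # 60-second fixed windows; allowed_lateness keeps a late firing open
     # for events delayed by slow mobile networks (requirement 3).
     | "Window" >> beam.WindowInto(
           window.FixedWindows(60),
           trigger=AfterWatermark(late=AfterCount(1)),
           accumulation_mode=AccumulationMode.DISCARDING,
           allowed_lateness=600)
     | "WriteToBQ" >> beam.io.WriteToBigQuery(
           TABLE,
           create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER))
```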
For this question, refer to the Mountkirk Games case study.
Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application
environments. Developers and testers can access each other's environments and resources, but they cannot
access staging or production resources. The staging environment needs access to some services from
production.
What should you do to isolate development environments from staging and production?
A.
Create a project for development and test and another for staging and production.
B.
Create a network for development and test and another for staging and production.
C.
Create one subnetwork for development and another for staging and production.
D.
Create one project for development, a second for staging and a third for production.
Create a project for development and test and another for staging and production.
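A minimal sketch of creating the two projects with the Cloud Resource Manager API. The project IDs are hypothetical, and Application Default Credentials with project-creation permission are assumed.

```python
from googleapiclient import discovery

# Hypothetical project IDs; adjust to your naming convention.
PROJECTS = ["mountkirk-dev-test", "mountkirk-staging-prod"]

crm = discovery.build("cloudresourcemanager", "v1")

for project_id in PROJECTS:
    # projects().create() returns a long-running operation.
    op = crm.projects().create(
        body={"projectId": project_id, "name": project_id}).execute()
    print(f"Creating {project_id}: operation {op.get('name')}")
```

Separate projects give each environment its own IAM policies, quotas, and billing, so developers and testers can share their project freely without any path to staging or production resources.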
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small
services that they want to be able to update and roll back quickly. Mountkirk Games has the following
requirements:
• Services are deployed redundantly across multiple regions in the US and Europe.
• Only frontend services are exposed on the public internet.
• They can provide a single frontend IP for their fleet of services.
• Deployment artifacts are immutable.
Which set of products should they use?
A.
Google Cloud Storage, Google Cloud Dataflow, Google Compute Engine
B.
Google Cloud Storage, Google App Engine, Google Network Load Balancer
C.
Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer
D.
Google Cloud Functions, Google Cloud Pub/Sub, Google Cloud Deployment Manager
Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer
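As an illustration of quick updates and rollbacks with immutable artifacts, this sketch uses the Kubernetes Python client to repoint a Container Engine deployment at a new image tag; rolling back is the same call with the previous tag. The deployment, namespace, container, and image names are assumptions.

```python
from kubernetes import client, config

# Hypothetical deployment and image names.
DEPLOYMENT = "frontend"
NAMESPACE = "default"
NEW_IMAGE = "gcr.io/my-project/frontend:v2"  # immutable, tagged artifact

config.load_kube_config()  # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Rolling update: point the deployment at a new immutable image tag.
# Rolling back is the same call with the previous tag.
# Assumes the container inside the pod is also named "frontend".
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": DEPLOYMENT, "image": NEW_IMAGE}]}}}}
apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
print(f"{DEPLOYMENT} now rolling out {NEW_IMAGE}")
```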
For this question, refer to the Mountkirk Games case study.
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a
thorough testing process for new versions of the backend before they are released to the public. You want the
testing environment to scale in an economical way. How should you design the process?
A.
Create a scalable environment in GCP for simulating production load.
B.
Use the existing infrastructure to test the GCP-based backend at scale.
C.
Build stress tests into each component of your application using resources internal to GCP to simulate
load.
D.
Create a set of static environments in GCP to test different levels of load — for example, high, medium,
and low.
Use the existing infrastructure to test the GCP-based backend at scale.
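For illustration, a minimal load-generation sketch that drives concurrent traffic against a test backend and counts server errors. The endpoint URL is a placeholder; real production-load simulation would scale the request count and concurrency well beyond this.

```python
import concurrent.futures
import urllib.error
import urllib.request

# Hypothetical test-environment endpoint.
URL = "https://backend-test.example.com/healthz"
CONCURRENCY = 50
REQUESTS = 1000

def hit(_: int) -> int:
    """Issue one request and return the HTTP status code."""
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 4xx/5xx responses raise HTTPError

with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    codes = list(pool.map(hit, range(REQUESTS)))

errors = sum(1 for c in codes if c >= 500)
print(f"{REQUESTS} requests, {errors} server errors")
```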
For this question, refer to the Mountkirk Games case study.
Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from
their existing backends on the other platforms?
A.
Tests should scale well beyond the prior approaches.
B.
Unit tests are no longer required, only end-to-end tests.
C.
Tests should be applied after the release is in the production environment.
D.
Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.
Tests should be applied after the release is in the production environment.
For this question, refer to the TerramEarth case study.
TerramEarth's CTO wants to use the raw data from connected vehicles to help identify approximately when a
vehicle in the field will have a catastrophic failure. You want to allow analysts to centrally query the
vehicle data. Which architecture should you recommend?
A.
Option A
B.
Option B
C.
Option C
D.
Option D
Option A
https://cloud.google.com/solutions/iot/
https://cloud.google.com/solutions/designing-connected-vehicle-platform
https://cloud.google.com/solutions/designing-connected-vehicle-platform#data_ingestion
http://www.eweek.com/big-data-and-analytics/google-touts-value-of-cloud-iot-core-for-analyzing-connected-car-data
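Following the ingestion pattern in the referenced connected-vehicle solution, here is a minimal sketch of a vehicle publishing a telemetry event to Cloud Pub/Sub for downstream, centrally queryable analysis. The project, topic, and field names are assumptions.

```python
import json
from google.cloud import pubsub_v1

# Hypothetical project and topic names.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "vehicle-telemetry")

event = {"vehicle_id": "TE-1042", "odometer_miles": 98750,
         "oil_pressure_psi": 38}

# publish() returns a future; result() blocks until the server acks.
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print(f"Published message {future.result()}")
```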
For this question, refer to the TerramEarth case study.
Operational parameters such as oil pressure are adjustable on each of TerramEarth's vehicles to increase their
efficiency, depending on their environmental conditions. Your primary goal is to increase the operating
efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?
A.
Have your engineers inspect the data for patterns, and then create an algorithm with rules that make
operational adjustments automatically.
B.
Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
C.
Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud
Messaging (GCM) to make operational adjustments automatically.
D.
Capture all operating data, train machine learning models that identify ideal operations, and host in
Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
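As a sketch of the "run locally" half of this answer, a centrally trained model can be exported and evaluated on the vehicle itself, for example with TensorFlow Lite, so that unconnected vehicles still benefit. The model file name and feature vector below are hypothetical.

```python
import numpy as np
import tensorflow as tf

# Hypothetical model file, trained centrally and shipped to the vehicle.
interpreter = tf.lite.Interpreter(model_path="ideal_ops_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Hypothetical feature vector: current operating parameters
# (oil pressure, engine temperature, RPM).
features = np.array([[38.0, 92.5, 1450.0]], dtype=np.float32)
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()

# The model output drives the local adjustment; no connectivity needed.
adjustment = interpreter.get_tensor(out["index"])
print("Suggested setpoint:", adjustment)
```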
For this question, refer to the TerramEarth case study.
You analyzed TerramEarth's business requirement to reduce downtime, and found that they can achieve a
majority of the time savings by reducing customers' wait time for parts. You decided to focus on reducing the
3-week aggregate reporting time. Which modifications to the company's processes should you recommend?
A.
Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning
analysis of metrics.
B.
Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine
learning analysis of metrics.
C.
Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop
machine learning analysis of metrics.
D.
Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer
local inventory by a fixed factor.
Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine
learning analysis of metrics.
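To see why the CSV-to-binary migration shrinks transfer sizes (and hence reporting latency), here is a small sketch comparing one telemetry record in both encodings. The record layout is a made-up example.

```python
import struct

# Hypothetical telemetry record: vehicle id, odometer miles, oil pressure.
record = (1042, 98750.0, 38.0)

csv_row = f"{record[0]},{record[1]},{record[2]}\n".encode("ascii")

# '<Iff' = little-endian unsigned int + two 32-bit floats: 12 bytes, fixed width.
binary_row = struct.pack("<Iff", *record)

print(f"CSV: {len(csv_row)} bytes, binary: {len(binary_row)} bytes")
```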
For this question, refer to the TerramEarth case study.
TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry
data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked
you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles.
You want to run this job on all the data. What is the most cost-effective way to run this job?
A.
Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
B.
Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
C.
Launch a cluster in each region to preprocess and compress the raw data, then move the data into a
multi-regional bucket and use a Dataproc cluster to finish the job.
D.
Launch a cluster in each region to preprocess and compress the raw data, then move the data into a
regional bucket and use a Cloud Dataproc cluster …
Launch a cluster in each region to preprocess and compress the raw data, then move the data into a
regional bucket and use a Cloud Dataproc cluster …
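A rough sketch of the preprocess-and-compress step followed by the cross-region move, using gzip and the google-cloud-storage client. Bucket and object names are hypothetical, and the real preprocessing would run on the per-region clusters; the point is that only the much smaller compressed data crosses regions.

```python
import gzip
import shutil
from google.cloud import storage

# Hypothetical file and bucket names.
SOURCE_FILE = "telemetry-raw.csv"
DEST_BUCKET = "terramearth-consolidated-us"

# Compress locally (stand-in for the per-region cluster's output).
with open(SOURCE_FILE, "rb") as src, \
        gzip.open(SOURCE_FILE + ".gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Move the compressed file into the consolidation bucket across regions.
client = storage.Client()
bucket = client.bucket(DEST_BUCKET)
bucket.blob("preprocessed/telemetry-raw.csv.gz").upload_from_filename(
    SOURCE_FILE + ".gz")
print("Compressed telemetry uploaded for the final Dataproc job")
```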
For this question, refer to the TerramEarth case study.
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties
to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization
against this data. What should you do?
A.
Build or leverage an OAuth-compatible access control system.
B.
Build SAML 2.0 SSO compatibility into your authentication system.
C.
Restrict data access based on the source IP address of the partner systems.
D.
Create secondary credentials for each dealer that can be given to the trusted third party.
Build or leverage an OAuth-compatible access control system.
https://cloud.google.com/appengine/docs/flexible/go/authorizing-apps
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#delegate_application_authorization_
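As one possible shape for the OAuth-compatible check, here is a sketch that validates a delegated bearer token against Google's tokeninfo endpoint and inspects its granted scopes. The token value and the vehicle-data scope name are hypothetical; a production API would typically use a full authorization-server integration rather than per-request introspection.

```python
import requests

TOKENINFO = "https://oauth2.googleapis.com/tokeninfo"

def token_is_valid(access_token: str, required_scope: str) -> bool:
    """Check a delegated OAuth 2.0 access token and its granted scopes."""
    resp = requests.get(TOKENINFO, params={"access_token": access_token})
    if resp.status_code != 200:
        return False  # expired, revoked, or malformed token
    granted = resp.json().get("scope", "").split()
    return required_scope in granted

# Hypothetical scope the vehicle-data API would define for dealership tools.
if token_is_valid("ya29.example-token",
                  "https://api.terramearth.example/auth/vehicle.read"):
    print("Request authorized")
```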
For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data
to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the
file when connections fail, which happens often. You want to improve the reliability of the solution and
minimize data transfer time on the cellular connections. What should you do?
A.
Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run
the ETL process using data in the bucket.
B.
Use multiple Google Container Engine clusters running FTP servers located in different regions. Save
the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.
C.
Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu,
and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
D.
Directly transfer the files to different Google Cloud Regional Storage bucket locations in us, eu, and
asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional
bucket.
Directly transfer the files to different Google Cloud Regional Storage bucket locations in us, eu, and
asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional
bucket.
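A minimal sketch of the direct HTTPS transfer with the google-cloud-storage client, whose chunked, resumable uploads avoid restarting from the beginning when a cellular connection drops. Bucket and object names are hypothetical.

```python
from google.cloud import storage

# Hypothetical bucket; the device would pick the nearest region (us/eu/asia).
BUCKET = "terramearth-ingest-us"

client = storage.Client()
blob = client.bucket(BUCKET).blob("raw/vehicle-TE-1042/2017-06-01.bin")

# A small chunk size bounds how much is re-sent after a dropped connection;
# the client library performs a resumable upload over HTTPS under the hood.
blob.chunk_size = 1 * 1024 * 1024  # 1 MiB (must be a multiple of 256 KiB)
blob.upload_from_filename("2017-06-01.bin")
print("Telemetry uploaded; ETL can read it from the regional bucket")
```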