A retail company with thousands of stores has an API to receive data about purchases and insert it into a single database. Each individual store sends a batch of purchase data to the API about every 30 minutes. The API implementation uses a database bulk insert command to submit all the purchase data to a database, using a custom JDBC driver provided by a data analytics solution provider. The API implementation is deployed to a single CloudHub worker. The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker, and then the data is sent to an analytics engine using a proprietary protocol. This process usually takes less than a few minutes. Sometimes a request fails; in this case, the logs show a message from the JDBC driver indicating an out-of-file-space condition. When the request is resubmitted, it is successful. What is the best way to try to resolve this throughput issue?
A. Use a CloudHub autoscaling policy to add CloudHub workers
B. Use a CloudHub autoscaling policy to increase the size of the CloudHub worker
C. Increase the size of the CloudHub worker(s)
D. Increase the number of CloudHub workers
Explanation
Correct Answer: Increase the size of the CloudHub worker(s)
*****************************************
The key details we can take from the given scenario are:
The API implementation uses a database bulk insert command to submit all the purchase data to a database
The JDBC driver processes the data into a set of several temporary disk files on the CloudHub worker
Sometimes a request fails, and the logs show a message indicating an out-of-file-space condition
Based on these details:
Both autoscaling options do NOT help, because autoscaling rules cannot be set based on error messages. Autoscaling rules are triggered by CPU/memory usage, not by a given error or disk-space issue.
Increasing the number of CloudHub workers also does NOT help, because the failure is not caused by performance aspects such as CPU or memory; it is caused by disk space.
Moreover, the API performs a bulk insert of each received batch, which means a given batch is handled by ONE worker at a time. The disk-space issue must therefore be tackled on a per-worker basis: with multiple workers, a batch could still fail on whichever worker runs out of disk space, as the sketch below illustrates.
Therefore, the right way to resolve this issue is to increase the vCore size of the worker(s), so that workers with more disk space are provisioned.
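As a minimal illustration of why disk space is a per-worker resource, the following standalone Java sketch checks the usable space where a JDBC driver would typically stage its temporary files. The 512 MB threshold and the use of java.io.tmpdir are illustrative assumptions, not details from the scenario:

import java.io.File;

public class TempDiskCheck {
    public static void main(String[] args) {
        // The JDBC driver stages batch data as temporary files on the
        // worker's local disk, so free space is a per-worker resource.
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        long usableMb = tmpDir.getUsableSpace() / (1024 * 1024);
        System.out.println("Usable temp space: " + usableMb + " MB");

        // Illustrative threshold only: if the incoming batch needs more
        // space than this worker has, adding MORE workers does not help,
        // because each worker stages its own batch on its own disk.
        long requiredMb = 512; // hypothetical batch footprint
        if (usableMb < requiredMb) {
            System.err.println("Insufficient disk space for batch staging on this worker");
        }
    }
}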
What is the most performant out-of-the-box solution in Anypoint Platform to track transaction state in an asynchronously executing long-running process implemented as a Mule application deployed to multiple CloudHub workers?
A. Redis distributed cache
B. java.util.WeakHashMap
C. Persistent Object Store
D. File-based storage
Explanation:
Correct Answer: Persistent Object Store
*****************************************
A Redis distributed cache is performant but NOT an out-of-the-box solution in Anypoint Platform.
File-based storage is neither performant nor an out-of-the-box solution in Anypoint Platform.
java.util.WeakHashMap needs a completely custom cache implementation written from scratch in Java code and is limited to the JVM in which it runs. This means the cached state is not worker-aware when running on multiple workers; the cache is local to each worker. So it is neither out-of-the-box nor worker-aware across multiple workers on CloudHub. Worse, its entries can be silently garbage-collected, as the sketch below demonstrates.
https://www.baeldung.com/java-weakhashmap
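The following standard-library Java sketch demonstrates that garbage-collection behavior. Note that System.gc() is only a hint, so the entry may or may not be reclaimed on a given run:

import java.util.Map;
import java.util.WeakHashMap;

public class WeakCacheDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "in-flight transaction state");
        System.out.println("Before GC: " + cache.size()); // 1

        key = null;   // drop the only strong reference to the key
        System.gc();  // GC is only a hint; the entry MAY be reclaimed
        Thread.sleep(100);

        // Typically prints 0: the state silently vanished. Even if it
        // survived, it would exist only in this one JVM, invisible to
        // the other CloudHub workers.
        System.out.println("After GC: " + cache.size());
    }
}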
Persistent Object Store is an out-of-the-box solution provided by Anypoint Platform that is performant as well as worker-aware across multiple workers running on CloudHub.
https://docs.mulesoft.com/object-store/
So, Persistent Object Store is the right answer.
An organization is implementing a Quote of the Day API that caches today's quote. What scenario can use the CloudHub Object Store via the Object Store connector to persist the cache's state?
A. When there are three CloudHub deployments of the API implementation to three separate CloudHub regions that must share the cache state
B. When there are two CloudHub deployments of the API implementation by two Anypoint Platform business groups to the same CloudHub region that must share the cache state
C. When there is one deployment of the API implementation to CloudHub and another deployment to a customer-hosted Mule runtime that must share the cache state
D. When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state
Explanation
Correct Answer: When there is one CloudHub deployment of the API implementation to three CloudHub workers that must share the cache state.
*****************************************
Key details in the scenario:
Use the CloudHub Object Store via the Object Store connector
Considering the above details:
CloudHub Object Stores have a one-to-one relationship with CloudHub Mule applications.
An application's CloudHub Object Store CANNOT be shared among multiple Mule applications running in different regions, business groups, or customer-hosted Mule runtimes by using the Object Store connector.
If cross-application access is genuinely required, Anypoint Platform does support it by allowing access to another application's CloudHub Object Store through the Object Store REST API, but NOT through the Object Store connector.
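For completeness, cross-application access through the Object Store v2 REST API would look roughly like the Java sketch below. The host, path, IDs, and token are placeholders to show the shape of the call, not the exact documented endpoint; consult the Object Store v2 REST API documentation for the real regional host and path:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ObjectStoreRestRead {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint shape and token -- placeholders only.
        String url = "https://object-store-REGION.anypoint.mulesoft.com"
                + "/api/v1/organizations/ORG_ID/environments/ENV_ID"
                + "/stores/STORE_ID/keys/quote-of-the-day";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer ACCESS_TOKEN") // placeholder token
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}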
So, the only scenario where the CloudHub Object Store can be used via the Object Store connector to persist the cache's state is when there is one CloudHub deployment of the API implementation to multiple CloudHub workers that must share the cache state.
A code-centric API documentation environment should allow API consumers to investigate and execute API client source code that demonstrates invoking one or more APIs as part of representative scenarios. What is the most effective way to provide this type of code-centric API documentation environment using Anypoint Platform?
A. Enable mocking services for each of the relevant APIs and expose them via their Anypoint Exchange entry
B. Ensure the APIs are well documented through their Anypoint Exchange entries and API Consoles and share these pages with all API consumers
C. Create API Notebooks and include them in the relevant Anypoint Exchange entries
D. Make relevant APIs discoverable via an Anypoint Exchange entry
Explanation
Correct Answer: Create API Notebooks and include them in the relevant Anypoint Exchange entries
*****************************************
API Notebooks are the Anypoint Platform feature that enables a code-centric API documentation environment in which API consumers can investigate and execute client source code.
Reference: https://docs.mulesoft.com/exchange/to-use-api-notebook
In an organization, the InfoSec team is investigating Anypoint Platform related data traffic. From where does most of the data available to Anypoint Platform for monitoring and alerting originate?
A. From the Mule runtime or the API implementation, depending on the deployment model
B. From various components of Anypoint Platform, such as the Shared Load Balancer, VPC, and Mule runtimes
C. From the Mule runtime or the API Manager, depending on the type of data
D. From the Mule runtime irrespective of the deployment model
Explanation
Correct Answer: From the Mule runtime irrespective of the deployment model
*****************************************
Monitoring and alerting metrics always originate from Mule runtimes, irrespective of the deployment model.
It may seem that some metrics (Runtime Manager) originate from the Mule runtime while others (API invocations / API analytics) originate from API Manager. However, this is NOT true: API Manager is just a management tool for API instances, and all policies applied to APIs are ultimately executed on Mule runtimes (either embedded or as an API proxy).
Similarly, all API implementations also run on Mule runtimes.
So, most of the data required for monitoring and alerting originates from Mule runtimes, irrespective of whether the deployment model is MuleSoft-hosted, customer-hosted, or hybrid.
A Mule application exposes an HTTPS endpoint and is deployed to three CloudHub workers that do not use static IP addresses. The Mule application expects a high volume of client requests in short time periods. What is the most cost-effective infrastructure component that should be used to serve the high volume of client requests?
A. A customer-hosted load balancer
B. The CloudHub shared load balancer
C. An API proxy
D. Runtime Manager autoscaling
Explanation
Correct Answer: The CloudHub shared load balancer
*****************************************
The scenario in this question can be broken down as follows:
There are 3 CloudHub workers (already a good number of workers to handle a high volume of requests).
The workers are not using static IP addresses (so customer-hosted load-balancing solutions, which require static IPs, are ruled out).
We are looking for the most cost-effective component to load balance client requests among the workers.
Based on these details:
Runtime Manager autoscaling is NOT cost-effective, as it incurs extra cost. Moreover, there are already 3 workers running, which is a good number.
A customer-hosted load balancer is also NOT the most cost-effective option (it requires a custom load balancer to maintain, plus licensing), and at the same time the Mule application's lack of static IP addresses rules out custom load balancing.
An API proxy is irrelevant here, as it plays no role in handling high volumes or load balancing.
So, the only option that fits the scenario and is the most cost-effective is the CloudHub shared load balancer, which fronts every CloudHub application by default, as the sketch below shows.
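Every CloudHub application is reachable through the shared load balancer at its default cloudhub.io address with no additional configuration. A minimal Java client sketch follows; the application name, region, and path are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SharedLbClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical app name/region: the shared load balancer terminates
        // HTTPS and distributes requests across all three workers.
        String url = "https://purchases-api.us-e2.cloudhub.io/api/status";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}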
What best explains the use of auto-discovery in API implementations?
A. It makes API Manager aware of API implementations and hence enables it to enforce policies
B. It enables Anypoint Studio to discover API definitions configured in Anypoint Platform
C. It enables Anypoint Exchange to discover assets and makes them available for reuse
D. It enables Anypoint Analytics to gain insight into the usage of APIs
Explanation
Correct Answer: It makes API Manager aware of API implementations and hence enables it to enforce policies.
*****************************************
>> API Autodiscovery is a mechanism that manages an API from API Manager by pairing the deployed application to an API created on the platform.
>> API management includes tracking, enforcing policies if you apply any, and reporting API analytics.
>> Critical to the Autodiscovery process is identifying the API by providing the API name and version.
References:
https://docs.mulesoft.com/api-manager/2.x/api-auto-discovery-new-concept
https://docs.mulesoft.com/api-manager/1.x/api-auto-discovery
When must an API implementation be deployed to an Anypoint VPC?
A. When the API implementation must invoke publicly exposed services that are deployed outside of CloudHub in a customer-managed AWS instance
B. When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access
C. When the API implementation must be deployed to a production AWS VPC using the Mule Maven plugin
D. When the API Implementation must write to a persistent Object Store
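Explanation
Correct Answer: When the API implementation must be accessible within a subnet of a restricted customer-hosted network that does not allow public access
*****************************************
>> An Anypoint VPC is needed when CloudHub workers must participate in private networking, for example to reach, or be reachable from, a restricted customer-hosted subnet that does not allow public access (typically connected via VPN or VPC peering).
>> Invoking publicly exposed services, deploying with the Mule Maven plugin, and writing to a persistent Object Store all work without an Anypoint VPC.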
When designing an upstream API and its implementation, the development team has been advised to NOT set timeouts when invoking a downstream API, because that downstream API has no SLA that can be relied upon. This is the only downstream API dependency of that upstream API. Assume the downstream API runs uninterrupted without crashing. What is the impact of this advice?
A. An SLA for the upstream API CANNOT be provided
B. The invocation of the downstream API will run to completion without timing out
C. Each modern API must be easy to consume, so it should avoid complex authentication mechanisms such as SAML or JWT
D. A load-dependent timeout of less than 1000 ms will be applied by the Mule runtime in which the downstream API implementation executes
Explanation
Correct Answer: An SLA for the upstream API CANNOT be provided.
*****************************************
>> First things first: the default HTTP response timeout for the HTTP connector is 10000 ms (10 seconds), not some load-dependent value under 1000 ms.
>> The Mule runtime does NOT apply any such "load-dependent" timeouts; there is no such behavior in Mule.
>> Because the HTTP connector's default 10000 ms timeout still applies, we CANNOT guarantee that the invocation of the downstream API will always run to completion without timing out, given its unreliable response times. If the response time exceeds 10 seconds, the request may time out.
The main impact of this advice is that a proper SLA for the upstream API CANNOT be provided.
Reference: https://docs.mulesoft.com/http-connector/1.5/http-documentation#parameters-3
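To make the point concrete, here is a plain-Java sketch showing the explicit timeouts a caller needs before any SLA can be promised. The URL and durations are illustrative, and this uses the JDK's java.net.http client rather than the Mule HTTP connector:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class DownstreamCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2)) // illustrative value
                .build();

        // Without .timeout(...), this request could wait indefinitely on a
        // slow downstream response, making any upstream SLA impossible.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://downstream.example.com/api/data")) // placeholder URL
                .timeout(Duration.ofSeconds(5)) // bounds the total response wait
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}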
An organization wants MuleSoft-hosted runtime plane features (such as HTTP load balancing, zero downtime, and horizontal and vertical scaling) in its Azure environment. What runtime plane minimizes the organization's effort to achieve these features?
A. Anypoint Runtime Fabric
B. Anypoint Platform for Pivotal Cloud Foundry
C. CloudHub
D. A hybrid combination of customer-hosted and MuleSoft-hosted Mule runtimes
Explanation
Correct Answer: Anypoint Runtime Fabric
*****************************************
>> When a customer already has an Azure environment, it is not an ideal approach to go with a hybrid model with some Mule runtimes hosted on Azure and some hosted by MuleSoft; this adds effort without benefit.
>> CloudHub is a MuleSoft-hosted runtime plane and runs on AWS; it cannot be customized to point to a customer's Azure environment.
>> Anypoint Platform for Pivotal Cloud Foundry is specifically for infrastructure provided by Pivotal Cloud Foundry.
>> Anypoint Runtime Fabric is the right answer, as it is a container service that automates the deployment and orchestration of Mule applications and API gateways. Runtime Fabric runs within customer-managed infrastructure on AWS, Azure, virtual machines (VMs), and bare-metal servers.
Some of the capabilities of Anypoint Runtime Fabric include:
- Isolation between applications by running a separate Mule runtime per application.
- Ability to run multiple versions of Mule runtime on the same set of resources.
- Scaling applications across multiple replicas.
- Automated application fail-over.
- Application management with Anypoint Runtime Manager.
Reference: https://docs.mulesoft.com/runtime-fabric/1.7/
Which of the following sequences is correct?
A. API Client implements logic to call an API >> API Consumer requests access to API >> API Implementation routes the request to >> API
B. API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
C. API Consumer implements logic to call an API >> API Client requests access to API >> API Implementation routes the request to >> API
D. API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation
Explanation
Correct Answer: API Consumer requests access to API >> API Client implements logic to call an API >> API routes the request to >> API Implementation
*****************************************
>> An API consumer does not implement any logic to invoke APIs; it is just a role. So the option stating "API Consumer implements logic to call an API" is INVALID.
>> An API implementation does not route any requests; it is the final piece of logic where the functionality of target systems is exposed. Requests must be routed to the API implementation by some other entity. So the options stating "API Implementation routes the request to >> API" are INVALID.
>> In one option the statements are correct but the sequence is wrong: "API Client implements logic to call an API >> API Consumer requests access to API >> API routes the request to >> API Implementation". Here the individual statements are VALID but the sequence is WRONG.
>> The right option and sequence is the one where the API consumer first requests access to the API on Anypoint Exchange and obtains client credentials. The API client then implements the logic to call the API using those client credentials, and the requests are routed to the API implementation via the API, which is managed by API Manager.
Version 3.0.1 of a REST API implementation represents time values in PST time using ISO 8601 hh:mm:ss format. The API implementation needs to be changed to instead represent time values in CEST time using ISO 8601 hh:mm:ss format. When following the semver.org semantic versioning specification, what version should be assigned to the updated API implementation?
A. 3.0.2
B. 4.0.0
C. 3.1.0
D. 3.0.1
Explanation
Correct Answer: 4.0.0
*****************************************
As per semver.org semantic versioning specification:
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes.
- MINOR version when you add functionality in a backwards compatible manner.
- PATCH version when you make backwards compatible bug fixes.
As per the scenario given in the question, the API implementation is completely changing its behavior. Although the time format is still maintained as hh:mm:ss and there is no schema change with respect to format, the API will start behaving differently after this change, because the returned time values will be completely different.
Example: before the change, a time is sent as 09:00:00, representing Pacific Time. After the change, the same instant is sent as 18:00:00, since Central European Summer Time (UTC+2) is 9 hours ahead of the Pacific offset in effect while CEST applies (Pacific Daylight Time, UTC-7).
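A small java.time sketch (the date and zone IDs are illustrative assumptions) shows how the same instant renders differently while keeping the hh:mm:ss format:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class TimeRepresentationChange {
    public static void main(String[] args) {
        DateTimeFormatter hms = DateTimeFormatter.ofPattern("HH:mm:ss");

        // The same instant, rendered as v3.0.1 and the updated API would represent it.
        ZonedDateTime pacific = LocalDateTime.of(2024, 7, 1, 9, 0, 0)
                .atZone(ZoneId.of("America/Los_Angeles")); // PDT (UTC-7) on this date

        ZonedDateTime cest = pacific.withZoneSameInstant(ZoneId.of("Europe/Paris")); // CEST (UTC+2)

        System.out.println(pacific.format(hms)); // 09:00:00  (old representation)
        System.out.println(cest.format(hms));    // 18:00:00  (new representation)
        // Identical schema (hh:mm:ss), but every value a client parses now
        // means something different: a breaking, MAJOR-version change.
    }
}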
>> This may lead to uncertain behavior in API clients, depending on how they handle the times in the API response. All API clients need to be informed that the API's behavior is changing and that times will be returned in CEST. So this is considered a MAJOR change, and the version of the API for this change should be 4.0.0.