A segment fails to refresh with the error "Segment references too many data lake objects (DLOs)". Which two troubleshooting tips should help remedy this issue? Choose 2 answers
A. Split the segment into smaller segments.
B. Use calculated insights in order to reduce the complexity of the segmentation query.
C. Refine segmentation criteria to limit up to five custom data model objects (DMOs).
D. Space out the segment schedules to reduce DLO load.
Explanation:
The error “Segment references too many data lake objects (DLOs)” occurs when a segment query exceeds the limit of 50 DLOs that can be referenced in a single query. This can happen when the segment has too many filters, nested segments, or exclusion criteria that involve different DLOs. To remedy this issue, the consultant can try the following troubleshooting tips:
Split the segment into smaller segments. The consultant can divide the segment into multiple segments that have fewer filters, nested segments, or exclusion criteria. This can reduce the number of DLOs that are referenced in each segment query and avoid the error. The consultant can then use the smaller segments as nested segments in a larger segment, or activate them separately.
Use calculated insights in order to reduce the complexity of the segmentation query. The consultant can create calculated insights that are derived from existing data using formulas. Calculated insights can simplify the segmentation query by replacing multiple filters or nested segments with a single attribute. For example, instead of using multiple filters to segment individuals based on their purchase history, the consultant can create a calculated insight that calculates the lifetime value of each individual and use that as a filter.
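Conceptually, a calculated insight replaces many raw-data filters with one precomputed attribute. A minimal Python sketch of the lifetime-value example above (the purchase records and the 150.0 threshold are hypothetical, purely to illustrate the aggregation):

```python
from collections import defaultdict

# Hypothetical purchase records: (individual_id, order_amount)
purchases = [
    ("ind-1", 120.0), ("ind-1", 80.0),
    ("ind-2", 35.0),
    ("ind-3", 300.0), ("ind-3", 45.0),
]

# The "calculated insight": one lifetime-value metric per individual,
# keyed by the Individual Id dimension.
ltv = defaultdict(float)
for individual_id, amount in purchases:
    ltv[individual_id] += amount

# Segmentation now needs only one filter on the precomputed metric
# instead of several filters over raw purchase rows.
high_value = {i for i, v in ltv.items() if v >= 150.0}
print(sorted(high_value))  # ['ind-1', 'ind-3']
```

The segment query touches a single derived attribute rather than every underlying purchase object, which is exactly how a calculated insight reduces the number of DLOs a query references.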
The other options are not valid troubleshooting tips for this issue. Refining segmentation criteria to limit up to five custom data model objects (DMOs) does not help, as the relevant limit is on the number of DLOs the query references, not on the number of custom DMOs. Spacing out the segment schedules to reduce DLO load does not help either, as the error relates to segment query complexity, not to DLO load.
References:
Troubleshoot Segment Errors
Create a Calculated Insight
Create a Segment in Data Cloud
The Salesforce CRM Connector is configured and the Case object data stream is set up. Subsequently, a new custom field named Business Priority is created on the Case object in Salesforce CRM. However, the new field is not available when trying to add it to the data stream.
Which statement addresses the cause of this issue?
A. The Salesforce Integration User is missing Read permissions on the newly created field.
B. The Salesforce Data Loader application should be used to perform a bulk upload from a desktop.
C. Custom fields on the Case object are not supported for ingesting into Data Cloud.
D. After 24 hours when the data stream refreshes it will automatically include any new fields that were added to the Salesforce CRM.
Explanation:
The Salesforce CRM Connector uses the Salesforce Integration User to access the data from the Salesforce CRM org. The Integration User must have the Read permission on the fields that are included in the data stream. If the Integration User does not have the Read permission on the newly created field, the field will not be available for selection in the data stream configuration. To resolve this issue, the administrator should assign the Read permission on the new field to the Integration User profile or permission set. References: Create a Salesforce CRM Data Stream, Edit a Data Stream, Salesforce Data Cloud Full Refresh for CRM, SFMC, or Ingestion API Data Streams
A customer requests that their personal data be deleted. Which action should the consultant take to accommodate this request in Data Cloud?
A. Use a streaming API call to delete the customer's information.
B. Use Profile Explorer to delete the customer data from Data Cloud.
C. Use Consent API to request deletion of the customer's information.
D. Use the Data Rights Subject Request tool to request deletion of the customer's information.
Explanation:
The Data Rights Subject Request tool is a feature that allows Data Cloud users to manage customer requests for data access, deletion, or portability. The tool provides a user interface and an API to create, track, and fulfill data rights requests. The tool also generates a report that contains the customer’s personal data and the actions taken to comply with the request. The consultant should use this tool to accommodate the customer’s request for data deletion in Data Cloud. References: Data Rights Subject Request Tool, Create a Data Rights Subject Request.
Which consideration related to the way Data Cloud ingests CRM data is true?
A. CRM data cannot be manually refreshed and must wait for the next scheduled synchronization.
B. The CRM Connector's synchronization times can be customized to up to 15-minute intervals.
C. Formula fields are refreshed at regular sync intervals and are updated at the next full refresh.
D. The CRM Connector allows standard fields to stream into Data Cloud in real time.
A customer notices that their consolidation rate is low across their account unification. They have mapped Account to the Individual and Contact Point Email DMOs. What should they do to increase their consolidation rate?
A. Change reconciliation rules to Most Occurring.
B. Disable the individual identity ruleset.
C. Increase the number of matching rules.
D. Update their account address details in the data source.
Explanation:
Consolidation Rate: The consolidation rate in Salesforce Data Cloud refers to the effectiveness of unifying records into a single profile. A low consolidation rate indicates that many records are not being successfully unified.
Matching Rules: Matching rules are critical in the identity resolution process. They define the criteria for identifying and merging duplicate records.
Solution:
Increase Matching Rules: Adding more matching rules improves the system's ability to identify duplicate records. This includes matching on additional fields or using more sophisticated matching algorithms.
Steps:
Access the Identity Resolution settings in Data Cloud.
Review the current matching rules.
Add new rules that consider more fields such as phone number, address, or other unique identifiers.
Benefits:
Improved Unification: Higher accuracy in matching and merging records, leading to a higher consolidation rate.
Comprehensive Profiles: Enhanced customer profiles with consolidated data from multiple sources.
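The consolidation rate can be thought of as the share of source profiles that were merged away during unification (roughly 1 minus unified profiles divided by source profiles). A small sketch with hypothetical counts, showing why adding matching rules raises the number:

```python
def consolidation_rate(source_profiles: int, unified_profiles: int) -> float:
    """Fraction of source profiles merged into unified profiles.

    0.0 means no records matched at all (every source profile became
    its own unified profile); stronger matching rules raise this value.
    """
    return 1 - unified_profiles / source_profiles

# Hypothetical: 10,000 source records resolve to 8,500 unified profiles.
weak_rules = consolidation_rate(10_000, 8_500)

# After adding e.g. a phone or address matching rule, more records merge:
more_rules = consolidation_rate(10_000, 6_000)

print(f"{weak_rules:.0%} -> {more_rules:.0%}")  # 15% -> 40%
```

The exact formula Data Cloud displays may differ in detail, but the direction is the point: fewer distinct unified profiles for the same source records means a higher consolidation rate.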
Which two requirements must be met for a calculated insight to appear in the segmentation canvas? (Choose 2 answers)
A. The metrics of the calculated insights must only contain numeric values.
B. The primary key of the segmented table must be a metric in the calculated insight.
C. The calculated insight must contain a dimension including the Individual or Unified Individual Id.
D. The primary key of the segmented table must be a dimension in the calculated insight.
Explanation:
A calculated insight is a custom metric or measure that is derived from one or more data model objects or data lake objects in Data Cloud. A calculated insight can be used in segmentation to filter or group the data based on the calculated value. However, not all calculated insights can appear in the segmentation canvas. There are two requirements that must be met for a calculated insight to appear in the segmentation canvas:
The calculated insight must contain a dimension including the Individual or Unified Individual Id. A dimension is a field that can be used to categorize or group the data, such as name, gender, or location. The Individual or Unified Individual Id is a unique identifier for each individual profile in Data Cloud. The calculated insight must include this dimension to link the calculated value to the individual profile and to enable segmentation based on the individual profile attributes.
The primary key of the segmented table must be a dimension in the calculated insight. The primary key is a field that uniquely identifies each record in a table. The segmented table is the table that contains the data being segmented, such as the Customer or Order table. The calculated insight must include the primary key of the segmented table as a dimension to ensure that the calculated value is associated with the correct record in the segmented table and to avoid duplication or inconsistency in the segmentation results.
Northern Trail Outfitters wants to be able to calculate each customer's lifetime value (LTV) but also create breakdowns of the revenue sourced by website, mobile app, and retail channels. How should this use case be addressed in Data Cloud?
A. Nested segments
B. Flow orchestration
C. Streaming data transformations
D. Metrics on metrics
Explanation:
Metrics on metrics allows a calculated insight to be built on top of another calculated insight. Northern Trail Outfitters can first create a calculated insight that computes each customer's lifetime value (LTV), and then build further metrics on that insight to break the revenue down by website, mobile app, and retail channels. Streaming data transformations, by contrast, only transform and enrich incoming data during ingestion and do not provide this kind of layered aggregation.
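Whichever feature implements it, the use case boils down to two layers of aggregation over the same transactions: per-channel revenue per customer, and a total LTV on top of that. A minimal sketch with hypothetical data:

```python
from collections import defaultdict

# Hypothetical transactions: (customer_id, channel, amount)
transactions = [
    ("c1", "website", 100.0), ("c1", "retail", 50.0),
    ("c2", "mobile app", 75.0), ("c2", "website", 25.0),
]

# First layer: revenue broken down per customer per channel.
by_channel = defaultdict(float)
for cust, channel, amount in transactions:
    by_channel[(cust, channel)] += amount

# Second layer, built on the first: total LTV per customer,
# summed across the channel breakdown.
ltv = defaultdict(float)
for (cust, _channel), revenue in by_channel.items():
    ltv[cust] += revenue

print(ltv["c1"], ltv["c2"])  # 150.0 100.0
```

The second loop consumes the output of the first rather than the raw rows, which is the essence of layering one metric on another.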
Northern Trail Outfitters wants to use some of its Marketing Cloud data in Data Cloud. Which engagement channel data will require custom integration?
A. SMS
B. Email
C. CloudPage
D. Mobile push
Explanation:
CloudPage is a web page that can be personalized and hosted by Marketing Cloud. It is not one of the standard engagement channels that Data Cloud supports out of the box. To use CloudPage data in Data Cloud, a custom integration is required. The other engagement channels (SMS, email, and mobile push) are supported by Data Cloud and can be integrated using the Marketing Cloud Connector or the Marketing Cloud API.
Which configuration supports separate Amazon S3 buckets for data ingestion and activation?
A. Dedicated S3 data sources in Data Cloud setup
B. Multiple S3 connectors in Data Cloud setup
C. Dedicated S3 data sources in activation setup
D. Separate user credentials for data stream and activation target
Explanation:
To support separate Amazon S3 buckets for data ingestion and activation, you need to configure dedicated S3 data sources in Data Cloud setup. Data sources are used to identify the origin and type of the data that you ingest into Data Cloud. You can create a different data source for each S3 bucket that you want to use for ingestion or activation, specifying the bucket name, region, and access credentials. This way, you can separate and organize your data by criteria such as brand, region, product, or business unit. The other options are incorrect because they do not support separate S3 buckets for data ingestion and activation. Multiple S3 connectors are not a valid configuration in Data Cloud setup, as there is only one S3 connector available. Dedicated S3 data sources in activation setup are not a valid configuration either, as activation setup uses activation targets, not data sources. Separate user credentials for the data stream and activation target are not sufficient on their own, as you also need to specify the bucket name and region for each data source.
Cumulus Financial created a segment called High Investment Balance Customers. This is a foundational segment that includes several segmentation criteria the marketing team should consistently use. Which feature should the consultant suggest the marketing team use to ensure this consistency when creating future, more refined segments?
A. Create new segments using nested segments.
B. Create a High Investment Balance calculated insight.
C. Package High Investment Balance Customers in a data kit.
D. Create new segments by cloning High Investment Balance Customers.
Explanation:
Nested segments are segments that include or exclude one or more existing segments. They allow the marketing team to reuse filters and maintain consistency in their data by using an existing segment to build a new one. For example, the marketing team can create a nested segment that includes High Investment Balance Customers and excludes customers who have opted out of email marketing. This way, they can leverage the foundational segment and apply additional criteria without duplicating the rules. The other options are not the best features to ensure consistency because:
B. A calculated insight is a data object that performs calculations on data lake objects or CRM data and returns a result. It is not a segment and cannot be used for activation or personalization.
C. A data kit is a bundle of packageable metadata that can be exported and imported across Data Cloud orgs. It is not a feature for creating segments, but rather for sharing components.
D. Cloning a segment creates a copy of the segment with the same rules and filters. It does not allow the marketing team to add or remove criteria from the original segment, and it may create confusion and redundancy.
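Conceptually, a nested segment applies set operations to existing segment memberships instead of re-declaring their filter criteria. A simplified sketch (the memberships are hypothetical):

```python
# Foundational segment, maintained in one place by the marketing team.
high_investment_balance = {"c1", "c2", "c3", "c4"}

# Another existing segment, used here as an exclusion.
email_opt_out = {"c2", "c4"}

# Nested segment: include the foundational segment, exclude opt-outs.
# The foundational criteria are reused rather than duplicated, so any
# change to them automatically flows into this refined segment.
refined = high_investment_balance - email_opt_out

print(sorted(refined))  # ['c1', 'c3']
```

A clone, by contrast, would copy the foundational rules at a point in time; later changes to the original would not propagate, which is why nesting is the consistent choice.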
Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone, and work phone as three separate fields for its customers on the Contact record. The company plans to use Data Cloud and ingest the Contact object via the CRM Connector. What is the most efficient approach that a consultant should take when ingesting this data to ensure all the different phone numbers are properly mapped and available for use in activation?
A. Ingest the Contact object and map the Work Phone, Mobile Phone, and Home Phone to the Contact Point Phone data map object from the Contact data stream.
B. Ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object.
C. Ingest the Contact object and then create a calculated insight to normalize the phone numbers, and then map to the Contact Point Phone data map object.
D. Ingest the Contact object and create formula fields in the Contact data stream on the phone numbers, and then map to the Contact Point Phone data map object.
Explanation:
The most efficient approach is B: Ingest the Contact object and use streaming transforms to normalize phone numbers into a separate Phone DLO, which stores each phone number type (work, home, mobile) in three rows. This data is then mapped to the Contact Point Phone object, ensuring all phone numbers are available for activation (e.g., SMS, calls). Streaming transforms allow real-time normalization (removing spaces, dashes, adding country codes) during ingestion without extra processing or storage.
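The transform described above can be pictured as an unpivot plus cleanup: one Contact row with three phone fields becomes up to three Contact Point Phone rows. A simplified Python sketch (the output row shape and the digits-only normalization are illustrative assumptions, not the connector's actual schema):

```python
import re

def to_phone_rows(contact: dict) -> list[dict]:
    """Unpivot a Contact record's three phone fields into one row per
    number, stripping punctuation the way a streaming transform might."""
    rows = []
    for field, phone_type in [("HomePhone", "Home"),
                              ("MobilePhone", "Mobile"),
                              ("Phone", "Work")]:
        raw = contact.get(field)
        if raw:
            digits = re.sub(r"\D", "", raw)  # keep digits only
            rows.append({"ContactId": contact["Id"],
                         "PhoneType": phone_type,
                         "PhoneNumber": digits})
    return rows

contact = {"Id": "003XX", "HomePhone": "(415) 555-0100",
           "MobilePhone": "415-555-0101", "Phone": None}
rows = to_phone_rows(contact)
print(rows)  # two rows: Home and Mobile; the empty Work phone is skipped
```

Because the transform runs during ingestion, the three-rows-per-contact shape already exists in the Phone DLO when mapping to Contact Point Phone, with no downstream calculated insight or formula fields needed.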
A customer needs to integrate in real time with Salesforce CRM. Which feature accomplishes this requirement?
A. Streaming transforms
B. Data model triggers
C. Sales and Service bundle
D. Data actions and Lightning web components