A software development team requires valid data for internal tests. Company regulations, however do not allow the use of this data in cleartext. Which of the following solutions best meet these requirements?
A. Configuring data hashing
B. Deploying tokenization
C. Replacing data with null record
D. Implementing data obfuscation
Explanation:
Why D is Correct:
Data obfuscation (or data masking) is a technique specifically designed for this purpose. It creates a functional substitute for real data that is structurally similar but contains inauthentic information. This allows developers and testers to work with realistic-looking data sets without exposing any real sensitive information. The data remains "valid" for testing application logic, workflows, and database schemas because it preserves the format, type, and length of the original data, while the actual content is scrambled or replaced. This directly meets the requirement of not having real cleartext data in a non-production environment.
Why A is Incorrect:
Data hashing is a one-way cryptographic function. It is excellent for verifying data integrity (e.g., checking passwords) but is useless for providing valid test data. Hashed data loses all its original format and meaning. A developer cannot run meaningful tests on a database where every field is a hash value, as the application logic would fail. For example, a hashed first name field no longer contains letters and cannot be used to test a "search by name" feature.
Why B is Incorrect:
Tokenization is the process of replacing sensitive data with a non-sensitive equivalent, called a token, which has no extrinsic or exploitable meaning. The token is a random value that can be mapped back to the original data in a secure vault. While excellent for protecting data in production (e.g., credit card numbers), tokens are not "valid data" for testing. Like hashed data, a token is just a random string and does not preserve the format or logic of the original data, making it unsuitable for application testing.
Why C is Incorrect:
Replacing data with null records destroys the utility of the data set. A database full of null values is not "valid data for internal tests." It would be impossible to test most application features, as there would be no data to display, sort, filter, or manipulate. This solution fails the primary requirement of providing usable test data.
Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography and Domain 4.0: Governance, Risk, and Compliance. It addresses data protection methods and their appropriate use cases, specifically focusing on securing non-production environments to comply with data protection policies while maintaining development agility. Data obfuscation is a standard practice for creating safe, useful test environments.
A security configure is building a solution to disable weak CBC configuration for remote access connections lo Linux systems. Which of the following should the security engineer modify?
A. The /etc/openssl.conf file, updating the virtual site parameter
B. The /etc/nsswith.conf file, updating the name server
C. The /etc/hosts file, updating the IP parameter
D. The /etc/etc/sshd, configure file updating the ciphers
Explanation:
Why D is Correct:
The question specifies the goal is to "disable weak CBC configuration for remote access connections to Linux systems." The most common method for remote access to Linux systems is SSH (Secure Shell).
The configuration file for the SSH daemon (the service that accepts incoming SSH connections) is typically /etc/ssh/sshd_config.
Within this file, the Ciphers directive is used to specify which encryption algorithms (ciphers) the server will accept for a connection.
Cipher Block Chaining (CBC) mode ciphers (e.g., aes128-cbc, aes256-cbc) are considered weak and vulnerable to attacks like "SSH CBC information disclosure." To disable them, the security engineer would modify the sshd_config file to explicitly list only strong ciphers (e.g., Counter Mode ciphers like aes128-ctr, aes256-ctr, or modern algorithms like chacha20-poly1305@openssh.com), thereby removing any CBC-based options.
Why A is Incorrect:
The /etc/openssl.conf file (or similar OpenSSL configuration files) is used to configure the OpenSSL library itself, which provides cryptographic functions for many applications. However, it does not directly control the specific cipher suites offered by the SSH daemon. Modifying this would have a broad, system-wide impact and is not the precise tool for configuring SSH-specific access.
Why B is Incorrect:
The /etc/nsswitch.conf file (Name Service Switch configuration) controls how the system resolves various types of information like hostnames, users, and groups (e.g., using /etc/hosts, DNS, or LDAP). It has absolutely nothing to do with configuring encryption algorithms or remote access protocols.
Why C is Incorrect:
The /etc/hosts file is a simple static table for mapping hostnames to IP addresses. It is used for local name resolution and is unrelated to the encryption protocols or cipher suites used for network connections.
Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the practical knowledge of hardening specific services (SSH) by modifying their configuration files to use only strong cryptographic settings, which is a core responsibility of a security engineer.
Which of the following AI concerns is most adequately addressed by input sanitation?
A. Model inversion
B. Prompt Injection
C. Data poisoning
D. Non-explainable model
Explanation:
Why B is Correct:
Prompt injection is a vulnerability specific to AI systems that use text-based prompts, particularly Large Language Models (LLMs). It occurs when an attacker crafts a malicious input (a "prompt") that tricks the model into ignoring its original instructions, bypassing safety filters, or revealing sensitive information. Input sanitation is a primary defense against this attack. It involves rigorously validating, filtering, and escaping all user-provided input before it is passed to the AI model. This helps to neutralize or render ineffective any malicious instructions embedded within the user's input, thereby preventing the model from being hijacked.
Why A is Incorrect:
Model inversion is an attack where an adversary uses the model's outputs (e.g., API responses) to reverse-engineer and infer sensitive details about the training data. This is addressed by controls on the output side (e.g., differential privacy, output filtering, limiting API response details) and model design, not by sanitizing the input prompts.
Why C is Incorrect:
Data poisoning is an attack on the training phase of an AI model. An attacker injects malicious or corrupted data into the training set to compromise the model's performance, integrity, or behavior after deployment. Defending against this requires securing the data collection and curation pipeline, using robust training techniques, and validating training data—measures that are completely separate from sanitizing runtime user input.
Why D is Incorrect:
A non-explainable model (often called a "black box" model) is a characteristic of certain complex AI algorithms where it is difficult for humans to understand why a specific decision was made. This is an inherent challenge of the model's architecture (e.g., deep neural networks) and is addressed by the field of Explainable AI (XAI), which involves using different models, tools, and techniques to interpret them. Input sanitation has no bearing on making a model's decisions more explainable.
Reference:
This question falls under the intersection of Domain 1.0: Security Architecture and emerging technologies. It tests the understanding of specific threats to AI systems and the appropriate security controls to mitigate them. Input validation/sanitation is a classic application security control that finds a new critical application in protecting AI systems from prompt injection attacks.
Which of the following best explains the business requirement a healthcare provider fulfills by encrypting patient data at rest?
A. Securing data transfer between hospitals
B. Providing for non-repudiation data
C. Reducing liability from identity theft
D. Protecting privacy while supporting portability.
Explanation:
Why D is Correct:
This option most accurately and completely captures the core business and regulatory requirements for a healthcare provider.
Protecting Privacy:
This is the primary driver. Regulations like HIPAA (Health Insurance Portability and Accountability Act) in the United States mandate the protection of patient Protected Health Information (PHI). Encryption of data at rest is a key safeguard to ensure confidentiality and privacy, preventing unauthorized access if devices are lost, stolen, or improperly accessed. It directly addresses the fundamental ethical and legal duty to keep patient information private.
Supporting Portability:
This is a critical business enabler. Healthcare data needs to be portable—it must be stored on laptops, mobile devices, USB drives, and in cloud data centers to facilitate modern healthcare delivery, backups, and research. Encryption is the technology that makes this portability secure. It allows data to be moved and stored flexibly without incurring the high risk of a data breach. The "portability" in HIPAA's name hints at this need for data movement in a secure manner.
Why A is Incorrect:
Encrypting data at rest protects data while it is stored on a device (e.g., a database, hard drive). Securing data transfer between hospitals is the role of encrypting data in transit (e.g., using TLS for network transmission). This is an important requirement, but it is not the one fulfilled by encryption at rest.
Why B is Incorrect:
Non-repudiation provides proof of the origin of data and prevents a sender from denying having sent it. This is a security service achieved through digital signatures and cryptographic hashing, not through encryption at rest. Encryption ensures confidentiality, not non-repudiation.
Why C is Incorrect:
While reducing liability from identity theft is a positive outcome of encrypting data, it is not the best explanation of the direct business requirement. The requirement is driven by proactive compliance with privacy laws (like HIPAA) and the duty of care to protect patients. Reducing liability is a beneficial consequence of meeting that primary requirement, not the requirement itself. Option D is a more precise and comprehensive description of the core business and regulatory need.
Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests the ability to map a technical control (encryption at rest) back to the fundamental business and legal requirements that mandate its use, particularly in a heavily regulated industry like healthcare. Understanding the "why" behind a control is crucial for a CASP+.
The material finding from a recent compliance audit indicate a company has an issue with excessive permissions. The findings show that employees changing roles or departments results in privilege creep. Which of the following solutions are the best ways to mitigate this issue? (Select two).
Setting different access controls defined by business area
A. Implementing a role-based access policy
B. Designing a least-needed privilege policy
C. Establishing a mandatory vacation policy
D. Performing periodic access reviews
E. Requiring periodic job rotation
Explanation:
The core problem identified is privilege creep due to employees changing roles. This means users accumulate permissions over time because old access rights are not removed when they are no longer needed for their new position. The solutions must directly address this accumulation and ensure permissions align with current job functions.
Why A is Correct (Implementing a role-based access policy):
Role-Based Access Control (RBAC) is a fundamental solution to this exact problem. Instead of assigning permissions directly to users, permissions are assigned to roles (e.g., "Accountant," "Marketing Manager"). Users are then assigned to these roles. When an employee changes departments, their old role is simply removed, and their new role is assigned. This automatically revokes the old permissions and grants the new, appropriate ones, effectively preventing privilege creep by design.
Why D is Correct (Performing periodic access reviews):
Even with RBAC in place, processes can break down. Periodic user access reviews (also known as recertification) are a critical administrative control to catch and correct privilege creep. In these reviews, managers or system owners periodically attest to whether their employees' current access levels are still appropriate for their job functions. This process proactively identifies and removes excessive permissions that may have been missed during a role transition.
Why the Other Options Are Incorrect:
B. Designing a least-needed privilege policy:
While the principle of least privilege is the ultimate goal, this option describes a concept or principle, not an actionable solution to the problem of privilege creep. Implementing RBAC (Option A) is how you operationalize and enforce a least privilege policy. Therefore, A is a more direct and specific solution.
C. Establishing a mandatory vacation policy:
This is a detective control primarily used to uncover fraud (e.g., requiring an employee to take vacation forces someone else to perform their duties, potentially revealing fraudulent activity). It does not directly address the procedural issue of permissions not being removed during role changes.
E. Requiring periodic job rotation:
Job rotation is a security practice used to reduce the risk of fraud and collusion and to cross-train employees. It would actually exacerbate the problem of privilege creep, as more employees changing roles would lead to even more accumulated permissions if a proper process (like RBAC and access reviews) is not in place to manage the transitions.
Reference:
This question falls under Domain 4.0: Governance, Risk, and Compliance. It tests knowledge of identity and access management (IAM) best practices, specifically the controls used to implement and maintain the principle of least privilege and prevent authorization vulnerabilities like privilege creep. RBAC and access recertification are cornerstone practices for any mature IAM program.
Third parties notified a company's security team about vulnerabilities in the company's application. The security team determined these vulnerabilities were previously disclosed in third-party libraries. Which of the following solutions best addresses the reported vulnerabilities?
A. Using laC to include the newest dependencies
B. Creating a bug bounty program
C. Implementing a continuous security assessment program
D. Integrating a SASI tool as part of the pipeline
Explanation:
Why A is Correct:
The root cause of the vulnerabilities is that the application uses third-party libraries with known, publicly disclosed vulnerabilities. The most direct and effective solution is to update these dependencies to their latest, patched versions. Infrastructure as Code (IaC) is the best practice for automating and managing this process.
IaC tools (like Terraform, Ansible, or cloud-specific templates) allow developers to define the application's infrastructure and dependencies in code files.
These definitions can specify the exact versions of libraries to be used. To remediate, a team can update the version number in the IaC script and redeploy. This ensures consistency, repeatability, and speed in pushing the patched libraries across all environments (dev, test, prod).
This approach directly fixes the reported problem by replacing the vulnerable component with a secure one.
Why B is Incorrect:
A bug bounty program is a crowdsourced initiative to incentivize external security researchers to find and report unknown vulnerabilities. The vulnerabilities in this scenario are already known and were reported by third parties. A bug bounty might help find future unknown issues, but it does nothing to fix the current, known problem with the libraries.
Why C is Incorrect:
Implementing a continuous security assessment program (which might include SAST, DAST, etc.) is a broad and valuable practice for finding vulnerabilities. However, like a bug bounty, it is a detective control. It would help identify that the vulnerable libraries are present, but the team already knows this because they've been notified. The requirement is to address or fix the vulnerability, not just to find it again. The fix is to update the library.
Why D is Incorrect:
Integrating a SAST (Static Application Security Testing) tool into the pipeline is also a detective control. It scans source code for patterns that indicate vulnerabilities. While it could potentially detect the use of a vulnerable library if its rules are tuned for that, its primary function is to find flaws in custom code. More importantly, it identifies problems but does not remediate them. The remediation is still the action of updating the dependency, which is best managed through IaC.
In summary:
While options B, C, and D are all valuable parts of a mature application security program, they are focused on finding vulnerabilities. The problem stated is that vulnerabilities have already been found. The necessary action is to patch them. Using IaC to automate dependency management and deployment is the most effective way to execute that patch quickly and consistently.
Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It addresses vulnerability management and the practical application of DevOps practices (like IaC) to ensure secure and consistent configurations across environments.
Asecuntv administrator is performing a gap assessment against a specific OS benchmark
The benchmark requires the following configurations be applied to endpomts:
• Full disk encryption
* Host-based firewall
• Time synchronization
* Password policies
• Application allow listing
* Zero Trust application access
Which of the following solutions best addresses the requirements? (Select two).
A. CASB
B. SBoM
C. SCAP
D. SASE
E. HIDS
Explanation:
The question requires selecting solutions that best help an administrator apply and enforce a specific set of OS security configurations (like disk encryption, firewall settings, etc.) across endpoints. The goal is to close the gap between the current state and the desired benchmark.
Why C is Correct (SCAP):
The Security Content Automation Protocol (SCAP) is a suite of standards specifically designed for this exact task. It allows for:
Automated Compliance Checking:
SCAP-compliant tools can automatically scan an endpoint (using benchmarks like CIS or DISA STIGs) and check its configuration against hundreds of required settings (firewall rules, password policies, time sync, etc.).
Remediation:
Many SCAP tools can not only identify misconfigurations but also automatically remediate them to bring the system into compliance.
Standardized Benchmarks:
The requirements listed (firewall, time sync, password policies) are classic configuration items that are defined in SCAP benchmarks. SCAP is the industry standard for automating technical compliance and hardening.
Why D is Correct (SASE):
Secure Access Service Edge (SASE) is a cloud architecture that converges networking and security functions. It directly addresses two requirements from the list:
Zero Trust application access:
This is a core principle of SASE. It ensures users and devices are authenticated and authorized before granting access to applications, regardless of their location, which fulfills the "Zero Trust application access" requirement.
Host-based firewall (extension):
While SASE provides a cloud-delivered firewall, it can also help enforce security policies that complement or supersede the need for a host-based firewall by applying consistent security at the network edge.
SASE provides a framework to enforce these policies consistently across all endpoints.
Why the Other Options Are Incorrect:
A. CASB (Cloud Access Security Broker):
A CASB is primarily focused on securing access to cloud applications (SaaS) and enforcing security policies between users and the cloud. It does not manage OS-level configurations on endpoints like disk encryption, host firewalls, or time synchronization.
B. SBoM (Software Bill of Materials):
An SBoM is an inventory of components in a software product. It is used for vulnerability management in the software supply chain (e.g., finding vulnerable libraries). It is completely unrelated to configuring operating system settings on an endpoint.
E. HIDS (Host-Based Intrusion Detection System):
A HIDS monitors a host for signs of malicious activity and policy violations. It is a detective control. While it might alert on a misconfiguration, it is not the tool used to apply the required configurations from a benchmark. SCAP is the tool for applying the configuration; a HIDS might monitor for changes to that configuration afterward.
Reference:
This question falls under Domain 2.0: Security Operations and Domain 1.0: Security Architecture. It tests the knowledge of specific security technologies and their appropriate application for system hardening, compliance automation (SCAP), and modern secure access principles (SASE).
A company wants to install a three-tier approach to separate the web. database, and application servers A security administrator must harden the environment which of the following is the best solution?
A. Deploying a VPN to prevent remote locations from accessing server VLANs
B. Configuring a SASb solution to restrict users to server communication
C. Implementing microsegmentation on the server VLANs
D. installing a firewall and making it the network core
Explanation:
Why C is Correct:
The core requirement is to harden a three-tier architecture (web, app, database servers). The fundamental security principle for this architecture is to enforce strict communication paths:
Web servers should only talk to application servers.
Application servers should only talk to database servers.
Direct communication from web servers to database servers, or from external sources to app/database servers, should be blocked.
Microsegmentation is the ideal solution for this. It involves creating fine-grained, granular security policies (often at the workload or individual server level) to control east-west traffic (traffic between servers within the data center). This allows the administrator to create exact rules that only permit the necessary communication between the specific tiers and block everything else, drastically reducing the attack surface.
Why A is Incorrect:
A VPN secures communication to the network from remote users or sites. It is designed for securing north-south traffic (traffic entering or leaving the data center). It does nothing to control the east-west traffic between the server tiers, which is the primary concern in hardening this architecture.
Why B is Incorrect:
A SASE (Secure Access Service Edge) solution is also primarily focused on north-south traffic. It provides secure, identity-driven access for users to applications and services, regardless of their location. It is not the right tool for controlling traffic between servers inside the data center.
Why D is Incorrect:
While installing a firewall is a good general practice, simply making it the "network core" is a vague and outdated concept. A traditional core firewall is often not granular enough to effectively segment traffic between tiers at a micro level. Modern data centers require more agile and granular controls that can be applied directly to the workloads, which is what microsegmentation provides (often using host-based firewalls or software-defined networking security policies).
Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of data center security design, specifically the best practices for securing a multi-tier application architecture by controlling east-west traffic through advanced segmentation techniques like microsegmentation.
A company wants to implement hardware security key authentication for accessing sensitive information systems The goal is to prevent unauthorized users from gaining access with a stolen password Which of the following models should the company implement to b«st solve this issue?
A. Rule based
B. Time-based
C. Role based
D. Context-based
Explanation:
Why B is Correct:
The question describes the implementation of hardware security keys (e.g., YubiKey, Google Titan) to prevent access with a stolen password. This is a classic description of multi-factor authentication (MFA) where the hardware key provides the "something you have" factor.
The most common protocol used by these hardware keys for generating the one-time passcode is the Time-based One-Time Password (TOTP) algorithm. This algorithm generates a code that is synchronized with the authentication server and changes every 30-60 seconds. Even if a password is stolen, an attacker cannot access the system without physically possessing the hardware key that generates the current, valid code. Therefore, the company is implementing a time-based authentication model.
Why A is Incorrect:
Rule-based access control involves making access decisions based on a set of predefined rules or filters (e.g., "Allow access if the request comes from the HR network segment"). It is a type of access control model, not an authentication factor model. It does not describe how the one-time code from a hardware key is generated.
Why C is Incorrect:
Role-based access control (RBAC) is an authorization model where access permissions are assigned to roles, and users are assigned to those roles. It governs what a user can do after they are authenticated. The question is specifically about the authentication process (proving identity), not authorization (assigning permissions).
Why D is Incorrect:
Context-based authentication is a more advanced form of MFA that considers additional contextual factors (e.g., geographic location, time of day, network reputation, device posture) when making an authentication decision. While a hardware key could be part of a context-based system, the core functionality described—using a hardware token to generate a one-time code—is fundamentally time-based. Context-based would be a broader, more adaptive model that might use time-based codes as one input.
Reference:
This question falls under Domain 3.0: Security Engineering and Cryptography. It tests the understanding of authentication protocols and factors, specifically the operation of hardware security tokens and the underlying time-based model that makes them secure.
A systems administrator wants to use existing resources to automate reporting from disparate security appliances that do not currently communicate. Which of the following is the best way to meet this objective?
A. Configuring an API Integration to aggregate the different data sets
B. Combining back-end application storage into a single, relational database
C. Purchasing and deploying commercial off the shelf aggregation software
D. Migrating application usage logs to on-premises storage
Explanation:
Why A is Correct:
The core requirements are to automate reporting from disparate security appliances that do not currently communicate, using existing resources.
APIs (Application Programming Interfaces) are the standard method for enabling different software systems to communicate and share data. Most modern security appliances (firewalls, IDS/IPS, EDR, etc.) have APIs designed specifically for this purpose—to extract logs, alerts, and configuration data.
Automation:
By writing scripts (e.g., in Python) that call these APIs, the systems administrator can automatically pull data from each disparate appliance on a scheduled basis without manual intervention.
Aggregation:
The data collected from these various APIs can then be parsed, normalized, and aggregated into a single format for reporting (e.g., fed into a dashboard, a SIEM, or a custom database). This approach directly leverages existing appliance capabilities (their APIs) and can often be implemented with existing scripting skills and resources.
Why B is Incorrect:
Combining back-end application storage into a single relational database is often not feasible. The appliances likely use different, proprietary storage formats and databases. Directly combining these back-ends would require deep access to each system, risk corruption, and is not a standard or supported method for integration. APIs are the intended, supported way to access this data.
Why C is Incorrect:
Purchasing commercial off-the-shelf (COTS) aggregation software (like a SIEM or a dedicated log management tool) is a very common and effective solution. However, the question specifies the administrator wants to use existing resources. Purchasing new software contradicts this requirement, as it involves acquiring new resources (budget, software, and potentially hardware).
Why D is Incorrect:
Migrating logs to on-premises storage is a data consolidation step, but it does not solve the communication or automation problem. You would still have logs in different formats from different systems sitting in the same storage location. Without a way to parse, normalize, and aggregate them (a function an API integration or a SIEM performs), you cannot automate reporting from them. This is just moving the data, not making it usable for automated reporting.
Reference:
This question falls under Domain 2.0: Security Operations. It tests the practical knowledge of how to integrate security tools and automate processes, a key skill for security analysts and engineers. Using APIs is the modern, scalable, and resource-efficient method for achieving this integration.
A cloud engineer needs to identify appropriate solutions to:
• Provide secure access to internal and external cloud resources.
• Eliminate split-tunnel traffic flows.
• Enable identity and access management capabilities.
Which of the following solutions arc the most appropriate? (Select two).
A. Federation
B. Microsegmentation
C. CASB
D. PAM
E. SD-WAN
F. SASE
Explanation:
Let's break down the requirements and see which solutions best address them:
Provide secure access to internal and external cloud resources:
This requires a solution that can securely connect users to applications, whether they are in a corporate data center, a public cloud (IaaS/PaaS), or a SaaS application (like Office 365).
Eliminate split-tunnel traffic flows:
Split tunneling allows some user traffic to go directly to the internet while other traffic goes through the corporate network. To eliminate this, all user traffic must be routed through a central security checkpoint for inspection and enforcement.
Enable identity and access management capabilities:
The solution must integrate strongly with identity systems to enforce access policies based on user identity, group, and other context.
Why F is Correct (SASE):
Secure Access Service Edge (SASE) is the overarching architecture that perfectly meets all three requirements.
It provides secure, identity-driven access to all resources (internal and cloud-based) from anywhere.
A core principle of SASE is to funnel all user traffic through a cloud-based security stack (SWG, CASB, ZTNA, FWaaS), which eliminates split tunneling by ensuring all traffic is inspected.
It has identity and access management as a foundational component, using user identity as the key for applying security policies.
Why A is Correct (Federation):
Federation (e.g., using SAML, OIDC) is a critical identity capability that integrates with a SASE solution to fulfill the IAM requirement.
It allows users to authenticate once with a central identity provider (like Azure AD) and gain seamless access to multiple cloud services and applications without needing separate passwords.
This provides the strong identity and access management foundation that a SASE platform uses to make access decisions. SASE relies on federated identity to know who the user is before applying policy.
Why the Other Options Are Incorrect:
B. Microsegmentation:
This is for controlling east-west traffic between workloads within a data center or cloud network. It does not address secure user access to resources or internet-bound traffic flows.
C. CASB (Cloud Access Security Broker):
A CASB is a component that can be part of a SASE solution. It secures access to SaaS applications and provides data security for cloud services. However, by itself, it does not eliminate split tunneling for all internet traffic or provide secure access to internal resources—it's focused on cloud services. SASE is the broader architecture that incorporates CASB functionality.
D. PAM (Privileged Access Management):
PAM is used to secure, manage, and monitor access for privileged accounts (e.g., administrators). It is a critical security solution but is focused on a specific set of users and systems, not the general workforce's secure access to all cloud resources.
E. SD-WAN (Software-Defined Wide Area Network):
SD-WAN is a technology for intelligently routing traffic between branch offices and data centers. It optimizes network performance but is not a security solution. In fact, traditional SD-WAN can create split tunnels. SASE often incorporates SD-WAN capabilities but adds the crucial security and identity layer.
Reference:
This question falls under Domain 1.0: Security Architecture. It tests the understanding of modern secure access architectures, specifically how SASE converges networking and security functions with identity to address the challenges of cloud-centric and remote work environments. Federation is the key identity component that enables this.
A company detects suspicious activity associated with external connections Security detection tools are unable to categorize this activity. Which of the following is the best solution to help the company overcome this challenge?
A. Implement an Interactive honeypot
B. Map network traffic to known loCs.
C. Monitor the dark web
D. implement UEBA
Explanation:
Why D is Correct:
The core challenge is that "security detection tools are unable to categorize" the "suspicious activity." This indicates that the activity does not match any known signatures, patterns, or Indicators of Compromise (IoCs). This is a classic scenario for User and Entity Behavior Analytics (UEBA).
UEBA uses machine learning and advanced analytics to establish a baseline of normal behavior for users, hosts, and network entities.
It then detects anomalies that deviate from this baseline, without relying on known threat signatures.
This makes it exceptionally effective at identifying novel attacks, insider threats, and suspicious activity that evades traditional, signature-based detection tools. It can categorize unknown activity based on its anomalous nature.
Why A is Incorrect:
An interactive honeypot is a decoy system designed to attract and engage attackers to study their techniques. While it can provide valuable intelligence on new attack methods, it is a proactive research tool, not a direct solution for detecting and categorizing ongoing, suspicious activity on the production network. The suspicious activity is already happening; a honeypot wouldn't help analyze it.
Why B is Incorrect:
Mapping network traffic to known IoCs is the function of traditional signature-based tools like IDS/IPS and many SIEM rules. The problem states that these tools have already failed to categorize the activity, meaning it does not match any known IoCs. Therefore, this approach will not help overcome the challenge.
Why C is Incorrect:
Monitoring the dark web is a strategic intelligence-gathering activity. It is used to find stolen credentials, learn about upcoming attacks, or discover if company data is for sale. It is not a tactical solution for analyzing and categorizing specific, ongoing suspicious network activity within the company's environment.
Reference:
This question falls under Domain 2.0: Security Operations. It tests the knowledge of advanced security analytics tools and their appropriate application. UEBA is specifically designed to address the limitation of traditional tools by using behavioral analysis to detect unknown threats and anomalous activity.
Page 2 out of 9 Pages |
Previous |