SSCP Practice Test Questions

1048 Questions


Topic 3: Analysis and Monitoring

What is the primary goal of setting up a honeypot?


A.

To lure hackers into attacking unused systems


B.

To entrap and track down possible hackers


C.

 To set up a sacrificial lamb on the network


D.

To know when certain types of attacks are in progress and to learn about attack
techniques so the network can be fortified.





D.
  

To know when certain types of attacks are in progress and to learn about attack
techniques so the network can be fortified.



The primary purpose of a honeypot is to study the attack methods of an
attacker for the purposes of understanding their methods and improving defenses.
"To lure hackers into attacking unused systems" is incorrect. Honeypots can serve as
decoys but their primary purpose is to study the behaviors of attackers.
"To entrap and track down possible hackers" is incorrect. There are a host of legal issues
around enticement vs entrapment but a good general rule is that entrapment is generally
prohibited and evidence gathered in a scenario that could be considered as "entrapping" an
attacker would not be admissible in a court of law.
"To set up a sacrificial lamb on the network" is incorrect. While a honeypot is a sort of
sacrificial lamb and may attract attacks that might have been directed against production
systems, its real purpose is to study the methods of attackers with the goals of better
understanding and improving network defenses.
References
AIO3, p. 213
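The detection-and-learning role described above can be illustrated with a minimal low-interaction honeypot sketch in Python. This is an illustration of the concept only, not a production tool: the port number is an arbitrary choice, the decoy answers nothing, and a real deployment would require network isolation and legal review.

```python
import socket
import threading
import datetime

def run_honeypot(host="127.0.0.1", port=2222, max_conns=1, log=None, ready=None):
    """Minimal low-interaction honeypot: listen on an otherwise-unused port,
    record who connects and what they send, so defenders can learn about
    attack techniques in progress and fortify the real network."""
    log = log if log is not None else []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    if ready is not None:
        ready.set()  # tell callers the decoy is accepting connections
    for _ in range(max_conns):
        conn, addr = srv.accept()
        conn.settimeout(1.0)
        try:
            data = conn.recv(1024)  # capture the attacker's first probe
        except socket.timeout:
            data = b""
        # Each record (timestamp, source address, payload) is raw material
        # for studying the attacker's methods.
        log.append((datetime.datetime.now(datetime.timezone.utc).isoformat(),
                    addr[0], data))
        conn.close()
    srv.close()
    return log
```

Everything that hits the decoy is suspicious by definition, since no legitimate user has a reason to connect to it; that is what makes the captured traffic useful for study.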

The session layer provides a logical persistent connection between peer hosts. Which of
the following is one of the modes used in the session layer to establish this connection?


A.

 Full duplex


B.

Synchronous


C.

Asynchronous


D.

Half simplex





A.
  

 Full duplex



Layer 5 of the OSI model is the Session Layer. This layer provides a logical
persistent connection between peer hosts. A session is analogous to a conversation that is
necessary for applications to exchange information.
The session layer is responsible for establishing, managing, and closing end-to-end
connections, called sessions, between applications located at different network endpoints.
Dialogue control management provided by the session layer includes full-duplex, half-duplex, and simplex communications. Session layer management also helps to ensure that
multiple streams of data stay synchronized with each other, as in the case of multimedia
applications like video conferencing, and assists with the prevention of application related
data errors.
The session layer is responsible for creating, maintaining, and tearing down the session.
Three modes are offered:
(Full) Duplex: Both hosts can exchange information simultaneously, independent of each
other.
Half Duplex: Hosts can exchange information, but only one host at a time.
Simplex: Only one host can send information to its peer. Information travels in one direction
only.
Another aspect of performance that is worthy of some attention is the mode of operation of
the network or connection. Obviously, whenever we connect together device A and device
B, there must be some way for A to send to B and B to send to A. Many people don’t
realize, however, that networking technologies can differ in terms of how these two
directions of communication are handled. Depending on how the network is set up, and the
characteristics of the technologies used, performance may be improved through the
selection of performance-enhancing modes.
Basic Communication Modes of Operation
Let's begin with a look at the three basic modes of operation that can exist for any network
connection, communications channel, or interface.
Simplex Operation
In simplex operation, a network cable or communications channel can only send
information in one direction; it's a “one-way street”. This may seem counter-intuitive: what's
the point of communications that only travel in one direction? In fact, there are at least two
different places where simplex operation is encountered in modern networking.
The first is when two distinct channels are used for communication: one transmits from A to
B and the other from B to A. This is surprisingly common, even though not always obvious.
For example, most if not all fiber optic communication is simplex, using one strand to send
data in each direction. But this may not be obvious if the pair of fiber strands are combined
into one cable.
Simplex operation is also used in special types of technologies, especially ones that are asymmetric. For example, one type of satellite Internet access sends data over the satellite
only for downloads, while a regular dial-up modem is used for upload to the service
provider. In this case, both the satellite link and the dial-up connection are operating in a
simplex mode.
Half-Duplex Operation
Technologies that employ half-duplex operation are capable of sending information in both
directions between two nodes, but only one direction or the other can be utilized at a time.
This is a fairly common mode of operation when there is only a single network medium
(cable, radio frequency and so forth) between devices.
While this term is often used to describe the behavior of a pair of devices, it can more
generally refer to any number of connected devices that take turns transmitting. For
example, in conventional Ethernet networks, any device can transmit, but only one may do
so at a time. For this reason, regular (unswitched) Ethernet networks are often said to be
“half-duplex”, even though it may seem strange to describe a LAN that way.
Full-Duplex Operation
In full-duplex operation, a connection between two devices is capable of sending data in
both directions simultaneously. Full-duplex channels can be constructed either as a pair of
simplex links (as described above) or using one channel designed to permit bidirectional
simultaneous transmissions. A full-duplex link can only connect two devices, so many such
links are required if multiple devices are to be connected together.
Note that the term “full-duplex” is somewhat redundant; “duplex” would suffice, but
everyone still says “full-duplex” (likely, to differentiate this mode from half-duplex).
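The full-duplex mode described above can be demonstrated with a short Python sketch: `socket.socketpair()` provides a single bidirectional channel, and two threads show both peers transmitting at the same time, independent of each other.

```python
import socket
import threading

def full_duplex_exchange(msg_a=b"from A", msg_b=b"from B"):
    """Demonstrate full-duplex operation: two peers exchange data
    simultaneously over one bidirectional channel (a socket pair)."""
    a, b = socket.socketpair()  # one channel, both directions at once
    received = {}

    def peer(name, sock, outgoing):
        sock.sendall(outgoing)          # transmit...
        received[name] = sock.recv(64)  # ...while the other peer transmits too

    ta = threading.Thread(target=peer, args=("A", a, msg_a))
    tb = threading.Thread(target=peer, args=("B", b, msg_b))
    ta.start(); tb.start()
    ta.join(); tb.join()
    a.close(); b.close()
    return received  # what each peer received from the other
```

Half-duplex would correspond to forcing the two `peer` calls to run one after the other; simplex would drop one direction entirely.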
For a listing of protocols associated with Layer 5 of the OSI model, see below:
ADSP - AppleTalk Data Stream Protocol
ASP - AppleTalk Session Protocol
H.245 - Call Control Protocol for Multimedia Communication
ISO-SP - OSI session-layer protocol (X.225, ISO 8327)
iSNS - Internet Storage Name Service
The following are incorrect answers:
Synchronous and Asynchronous are not session layer modes.
Half simplex does not exist. By definition, simplex means that information travels one way
only, so half-simplex is an oxymoron.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 5603-5636). Auerbach Publications. Kindle
Edition.
and
http://www.tcpipguide.com/free/t_SimplexFullDuplexandHalfDuplexOperation.htm
and
http://www.wisegeek.com/what-is-a-session-layer.htm

Several analysis methods can be employed by an IDS, each with its own strengths and
weaknesses, and their applicability to any given situation should be carefully considered.
There are two basic IDS analysis methods. Which of these methods is more
prone to false positives?


A.

Pattern Matching (also called signature analysis)


B.

Anomaly Detection


C.

Host-based intrusion detection


D.

Network-based intrusion detection





B.
  

Anomaly Detection



Several analysis methods can be employed by an IDS, each with its own
strengths and weaknesses, and their applicability to any given situation should be carefully
considered.
There are two basic IDS analysis methods:
1. Pattern Matching (also called signature analysis), and
2. Anomaly detection
PATTERN MATCHING
Some of the first IDS products used signature analysis as their detection method and simply looked for known characteristics of an attack (such as specific packet sequences or
text in the data stream) to produce an alert if that pattern was detected. If a new or different
attack vector is used, it will not match a known signature and, thus, slip past the IDS.
ANOMALY DETECTION
Alternately, anomaly detection uses behavioral characteristics of a system’s operation or
network traffic to draw conclusions on whether the traffic represents a risk to the network or
host. Anomalies may include but are not limited to:
Multiple failed log-on attempts
Users logging in at strange hours
Unexplained changes to system clocks
Unusual error messages
Unexplained system shutdowns or restarts
Attempts to access restricted files
An anomaly-based IDS tends to produce more data because anything outside of the
expected behavior is reported. Thus, they tend to report more false positives as expected
behavior patterns change. An advantage to anomaly-based IDS is that, because they are
based on behavior identification and not specific patterns of traffic, they are often able to
detect new attacks that may be overlooked by a signature-based system. Often information
from an anomaly-based IDS may be used to create a pattern for a signature-based IDS.
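As a toy illustration of the contrast, the following Python sketch implements both methods. The signatures, the failed-login metric, and the baseline value are all hypothetical examples, not taken from any real IDS product.

```python
# Hypothetical attack signatures, for illustration only.
SIGNATURES = [b"/etc/passwd", b"' OR 1=1"]

def signature_match(packet: bytes) -> bool:
    """Pattern matching: alert only when traffic contains a known signature.
    A new attack vector that matches no signature slips past undetected."""
    return any(sig in packet for sig in SIGNATURES)

def anomaly_detect(failed_logins: int, baseline: int = 3) -> bool:
    """Anomaly detection: alert on anything outside expected behavior,
    e.g. more failed log-on attempts than the learned baseline. Because
    everything outside the baseline is reported, legitimate changes in
    behavior show up as false positives."""
    return failed_logins > baseline
```

The asymmetry is visible directly in the code: `signature_match` can only miss (false negatives on novel attacks), while `anomaly_detect` fires on any deviation, which is why anomaly detection is the method more prone to false positives.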
Host Based Intrusion Detection (HIDS)
HIDS is the implementation of IDS capabilities at the host level. Its most significant
difference from NIDS is that related processes are limited to the boundaries of a single-host
system. However, this presents advantages in effectively detecting objectionable activities
because the IDS process is running directly on the host system, not just observing it from
the network. This offers unfettered access to system logs, processes, system information,
and device information, and virtually eliminates limits associated with encryption. The level
of integration represented by HIDS increases the level of visibility and control at the
disposal of the HIDS application.
Network Based Intrusion Detection (NIDS)
NIDS are usually incorporated into the network in a passive architecture, taking advantage
of promiscuous mode access to the network. This means that it has visibility into every
packet traversing the network segment. This allows the system to inspect packets and
monitor sessions without impacting the network or the systems and applications utilizing
the network. Below are other ways that intrusion detection can be performed:
Stateful Matching Intrusion Detection
Stateful matching takes pattern matching to the next level. It scans for attack signatures in
the context of a stream of traffic or overall system behavior rather than the individual
packets or discrete system activities. For example, an attacker may use a tool that sends a
volley of valid packets to a targeted system. Because all the packets are valid, pattern
matching is nearly useless. However, the fact that a large volume of the packets was seen
may, itself, represent a known or potential attack pattern. To evade detection, then, the
attacker may send the packets from multiple locations with long wait periods between each
transmission to either confuse the signature detection system or exhaust its session timing
window. If the IDS service is tuned to record and analyze traffic over a long period of time it
may detect such an attack. Because stateful matching also uses signatures, it too must be
updated regularly and, thus, has some of the same limitations as pattern matching.
Statistical Anomaly-Based Intrusion Detection
The statistical anomaly-based IDS analyzes event data by comparing it to typical, known,
or predicted traffic profiles in an effort to find potential security breaches. It attempts to
identify suspicious behavior by analyzing event data and identifying patterns of entries that
deviate from a predicted norm. This type of detection method can be very effective and, at
a very high level, begins to take on characteristics seen in IPS by establishing an expected
baseline of behavior and acting on divergence from that baseline. However, there are some
potential issues that may surface with a statistical IDS. Tuning the IDS can be challenging
and, if not performed regularly, the system will be prone to false positives. Also, the
definition of normal traffic can be open to interpretation and does not preclude an attacker
from using normal activities to penetrate systems. Additionally, in a large, complex,
dynamic corporate environment, it can be difficult, if not impossible, to clearly define
“normal” traffic. The value of statistical analysis is that the system has the potential to
detect previously unknown attacks. This is a huge departure from the limitation of matching
previously known signatures. Therefore, when combined with signature matching
technology, the statistical anomaly-based IDS can be very effective.
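The baseline-and-deviation idea can be sketched in a few lines of Python, assuming the monitored metric is a simple event count and using a standard-deviation threshold (the threshold of 3.0 is an illustrative tuning choice, and tuning it poorly is exactly what drives the false positives discussed above).

```python
import statistics

def is_statistical_anomaly(baseline_events, observed, threshold=3.0):
    """Flag an observation that deviates from the predicted norm by more
    than `threshold` standard deviations of the learned baseline."""
    mean = statistics.mean(baseline_events)
    stdev = statistics.stdev(baseline_events)
    if stdev == 0:
        # A perfectly flat baseline: any change at all is a deviation.
        return observed != mean
    return abs(observed - mean) / stdev > threshold
```

Note that nothing here requires a known attack signature, which is why this approach can catch previously unknown attacks; the trade-off is that "normal" must first be definable and stable.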
Protocol Anomaly-Based Intrusion Detection
A protocol anomaly-based IDS identifies any unacceptable deviation from expected
behavior based on known network protocols. For example, if the IDS is monitoring an
HTTP session and the traffic contains attributes that deviate from established HTTP
session protocol standards, the IDS may view that as a malicious attempt to manipulate the
protocol, penetrate a firewall, or exploit a vulnerability. The value of this method is directly
related to the use of well-known or well-defined protocols within an environment. If an
organization primarily uses well-known protocols (such as HTTP, FTP, or telnet) this can be an effective method of performing intrusion detection. In the face of custom or
nonstandard protocols, however, the system will have more difficulty or be completely
unable to determine the proper packet format. Interestingly, this type of method is prone to
the same challenges faced by signature-based IDSs. For example, specific protocol
analysis modules may have to be added or customized to deal with unique or new
protocols or unusual use of standard protocols. Nevertheless, having an IDS that is
intimately aware of valid protocol use can be very powerful when an organization employs
standard implementations of common protocols.
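A rough sketch of protocol anomaly detection, assuming the IDS checks only the HTTP/1.x request-line grammar (METHOD, request target, version). Real implementations validate far more of the protocol state machine; this fragment just shows the principle of flagging deviation from a well-defined standard.

```python
import re

# HTTP/1.x request line: METHOD SP request-target SP HTTP/major.minor CRLF
REQUEST_LINE = re.compile(
    rb"^(GET|HEAD|POST|PUT|DELETE|OPTIONS|TRACE|CONNECT) \S+ HTTP/\d\.\d\r\n")

def protocol_anomaly(first_line: bytes) -> bool:
    """Protocol anomaly detection: flag traffic on an HTTP port whose
    first line deviates from the expected request-line grammar."""
    return REQUEST_LINE.match(first_line) is None
```

Because the check is anchored to the published protocol rather than to attack signatures, it works without signature updates for standard traffic, but it is blind to custom or nonstandard protocols, as the text above notes.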
Traffic Anomaly-Based Intrusion Detection
A traffic anomaly-based IDS identifies any unacceptable deviation from expected
behavior based on actual traffic structure. When a session is established between systems,
there is typically an expected pattern and behavior to the traffic transmitted in that session.
That traffic can be compared to expected traffic conduct based on the understandings of
traditional system interaction for that type of connection. Like the other types of
anomaly-based IDS, traffic anomaly-based IDS relies on the ability to establish “normal” patterns of
traffic and expected modes of behavior in systems, networks, and applications. In a highly
dynamic environment it may be difficult, if not impossible, to clearly define these
parameters.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 3664-3686). Auerbach Publications. Kindle
Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 3711-3734). Auerbach Publications. Kindle
Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 3694-3711). Auerbach Publications. Kindle
Edition.

Controls provide accountability for individuals who are accessing sensitive information. This
accountability is accomplished


A.

through access control mechanisms that require identification and authentication and
through the audit function.


B.

through logical or technical controls involving the restriction of access to systems and
the protection of information.


C.

through logical or technical controls but not involving the restriction of access to systems
and the protection of information.


D.

through access control mechanisms that do not require identification and authentication
and do not operate through the audit function.





A.
  

through access control mechanisms that require identification and authentication and
through the audit function.



Controls provide accountability for individuals who are accessing sensitive
information. This accountability is accomplished through access control mechanisms that
require identification and authentication and through the audit function. These controls
must be in accordance with and accurately represent the organization's security policy.
Assurance procedures ensure that the control mechanisms correctly implement the
security policy for the entire life cycle of an information system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
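The mechanism can be sketched as follows in Python. The user store and audit-record fields are hypothetical, but the shape is the point: every access attempt is attributed to an identified, authenticated individual, and the audit function records it whether access was allowed or denied.

```python
import datetime

# Hypothetical credential store and audit trail, for illustration only.
USERS = {"alice": "s3cret"}
AUDIT_TRAIL = []

def access_resource(user, password, resource):
    """Accountability = identification/authentication + the audit function:
    the attempt is tied to an identified individual and always logged."""
    authenticated = USERS.get(user) == password
    AUDIT_TRAIL.append({
        "when": datetime.datetime.now().isoformat(),
        "who": user,               # identification
        "what": resource,
        "allowed": authenticated,  # outcome of authentication
    })
    return authenticated
```

Without the identification step the audit entry could not be attributed to anyone, and without the audit entry the authenticated action would leave no trace; accountability requires both.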

How often should a Business Continuity Plan be reviewed?


A.

At least once a month


B.

At least every six months


C.

 At least once a year


D.

At least Quarterly





C.
  

 At least once a year



As stated in SP 800-34 Rev. 1:
To be effective, the plan must be maintained in a ready state that accurately reflects
system requirements, procedures, organizational structure, and policies. During the
Operation/Maintenance phase of the SDLC, information systems undergo frequent
changes because of shifting business needs, technology upgrades, or new internal or
external policies.
As a general rule, the plan should be reviewed for accuracy and completeness at an
organization-defined frequency (at least once a year for the purpose of the exam) or
whenever significant changes occur to any element of the plan. Certain elements, such as
contact lists, will require more frequent reviews.
Remember, there could be two good answers as specified above. Either once a year or
whenever significant changes occur to the plan. You will of course get only one of the two
presented within your exam.
Reference(s) used for this question:
NIST SP 800-34 Revision 1

Which of the following tools is less likely to be used by a hacker?


A.

l0phtcrack


B.

Tripwire


C.

OphCrack


D.

John the Ripper





B.
  

Tripwire



Tripwire is an integrity checking product, triggering alarms when important
files (e.g. system or configuration files) are modified. This is a tool that is not likely to be used by hackers, other than for studying its workings in
order to circumvent it.
Other programs are password-cracking programs and are likely to be used by security
administrators as well as by hackers. More info regarding Tripwire available on the
Tripwire, Inc. Web Site.
NOTE:
The biggest competitor to the commercial version of Tripwire is the freeware version of
Tripwire. You can get the Open Source version of Tripwire at the following URL:
http://sourceforge.net/projects/tripwire/
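Tripwire's core idea, hashing monitored files against a trusted baseline and alarming on any change, can be sketched in a few lines of Python. This is an illustration of the integrity-checking technique only, not Tripwire's actual implementation (which also signs its database, tracks permissions and ownership, and more).

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest for each monitored file: the trusted baseline,
    taken while the system is known-good."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def check(baseline):
    """Re-hash each monitored file and return the ones whose contents no
    longer match the baseline (i.e. files that have been modified)."""
    return [p for p, digest in baseline.items()
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != digest]
```

This also makes clear why the tool interests defenders rather than attackers: it detects modification after the fact instead of helping to cause it.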

In order to enable users to perform tasks and duties without having to go through extra
steps it is important that the security controls and mechanisms that are in place have a
degree of?


A.

Complexity


B.

Non-transparency


C.

Transparency


D.

Simplicity





C.
  

Transparency



The security controls and mechanisms that are in place must have a degree
of transparency.
This enables the user to perform tasks and duties without having to go through extra steps
because of the presence of the security controls. Transparency also does not let the user
know too much about the controls, which helps prevent him from figuring out how to
circumvent them. If the controls are too obvious, an attacker can figure out how to
compromise them more easily.
Security (more specifically, the implementation of most security controls) has long been a
sore point with users who are subject to security controls. Historically, security controls
have been very intrusive to users, forcing them to interrupt their work flow and remember
arcane codes or processes (like long passwords or access codes), and have generally
been seen as an obstacle to getting work done. In recent years, much work has been done
to remove the stigma of security controls as a detractor that adds nothing but time and
money to the work process. When developing access control, the system must be as
transparent as possible to the end user. The users should be required to interact with the
system as little as possible, and the process around using the control should be engineered
so as to involve little effort on the part of the user.
For example, requiring a user to swipe an access card through a reader is an effective way
to ensure a person is authorized to enter a room. However, implementing a technology
(such as RFID) that will automatically scan the badge as the user approaches the door is
more transparent to the user and will do less to impede the movement of personnel in a
busy area.
In another example, asking a user to understand what applications and data sets will be
required when requesting a system ID and then specifically requesting access to those
resources may allow for a great deal of granularity when provisioning access, but it can
hardly be seen as transparent. A more transparent process would be for the access
provisioning system to have a role-based structure, where the user would simply specify
the role he or she has in the organization and the system would know the specific
resources that user needs to access based on that role. This requires less work and
interaction on the part of the user and will lead to more accurate and secure access control
decisions because access will be based on predefined need, not user preference.
When developing and implementing an access control system special care should be taken
to ensure that the control is as transparent to the end user as possible and interrupts his
work flow as little as possible.
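The role-based provisioning idea can be sketched as a simple lookup. The roles and resources below are hypothetical; the point is that the user supplies only a role, and the system derives the specific resources from predefined need.

```python
# Hypothetical role-to-resource mapping, for illustration only.
ROLE_RESOURCES = {
    "payroll_clerk": {"payroll_app", "hr_records"},
    "developer": {"source_repo", "build_server"},
}

def provision_access(role):
    """Role-based provisioning: access is derived from the stated role,
    not from resources the user names individually. This keeps the
    control transparent to the end user and bases access on predefined
    need rather than user preference."""
    return ROLE_RESOURCES.get(role, set())
```

The user never has to know which applications and data sets back the role, which is exactly the transparency the passage above describes.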
The following answers were incorrect:
All of the other distractors were incorrect.
Reference(s) used for this question:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 6th edition. Operations
Security, Page 1239-1240
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations
25278-25281). McGraw-Hill. Kindle Edition.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition :
Access Control ((ISC)2 Press) (Kindle Locations 713-729). Auerbach Publications. Kindle
Edition.

Which of the following questions are least likely to help in assessing controls covering audit
trails?


A.

Does the audit trail provide a trace of user actions?


B.

Are incidents monitored and tracked until resolved?


C.

 Is access to online logs strictly controlled?


D.

Is there separation of duties between security personnel who administer the access
control function and those who administer the audit trail?





B.
  

Are incidents monitored and tracked until resolved?



Audit trails maintain a record of system activity by system or application
processes and by user activity. In conjunction with appropriate tools and procedures, audit
trails can provide individual accountability, a means to reconstruct events, detect intrusions,
and identify problems. Audit trail controls are considered technical controls. Monitoring and
tracking of incidents is more an operational control related to incident response capability.
Reference(s) used for this question:
SWANSON, Marianne, NIST Special Publication 800-26, Security Self-Assessment Guide
for Information Technology Systems, November 2001 (Pages A-50 to A-51).
NOTE: NIST SP 800-26 has been superseded by: FIPS 200, SP 800-53, SP 800-53A
You can find the new replacement at: http://csrc.nist.gov/publications/PubsSPs.html
However, if you really wish to see the old standard, it is listed as an archived document at:
http://csrc.nist.gov/publications/PubsSPArch.html

Which of the following are the two MOST common implementations of Intrusion Detection
Systems?


A.

Server-based and Host-based.


B.

Network-based and Guest-based.


C.

Network-based and Client-based.


D.

Network-based and Host-based.





D.
  

Network-based and Host-based.



The two most common implementations of Intrusion Detection are Network-based
and Host-based.
IDS can be implemented as a network device, such as a router, switch, firewall, or
dedicated device monitoring traffic, typically referred to as network IDS (NIDS).
The IDS technology can also be incorporated into a host system (HIDS) to monitor a
single system for undesirable activities.
A network intrusion detection system (NIDS) is a network device .... that monitors traffic
traversing the network segment for which it is integrated." Remember that NIDS are usually
passive in nature.
HIDS is the implementation of IDS capabilities at the host level. Its most significant
difference from NIDS is that related processes are limited to the boundaries of a single-host
system. However, this presents advantages in effectively detecting objectionable activities
because the IDS process is running directly on the host system, not just observing it from
the network.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 3649-3652). Auerbach Publications. Kindle
Edition.

Which conceptual approach to intrusion detection system is the most common?


A.

 Behavior-based intrusion detection


B.

Knowledge-based intrusion detection


C.

Statistical anomaly-based intrusion detection


D.

Host-based intrusion detection





B.
  

Knowledge-based intrusion detection



There are two conceptual approaches to intrusion detection. Knowledge-based
intrusion detection uses a database of known vulnerabilities to look for current
attempts to exploit them on a system and trigger an alarm if an attempt is found. The other
approach, not as common, is called behaviour-based or statistical analysis-based. A
host-based intrusion detection system is a common implementation of intrusion detection, not a
conceptual approach.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3:
Telecommunications and Network Security (page 63).
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne,
2002, chapter 4: Access Control (pages 193-194).

Which of the following is a disadvantage of a statistical anomaly-based intrusion detection
system?


A.

it may truly detect a non-attack event that had caused a momentary anomaly in the
system.


B.

 it may falsely detect a non-attack event that had caused a momentary anomaly in the
system.


C.

 it may correctly detect a non-attack event that had caused a momentary anomaly in the
system.


D.

it may loosely detect a non-attack event that had caused a momentary anomaly in the
system.





B.
  

 it may falsely detect a non-attack event that had caused a momentary anomaly in the
system.



Some disadvantages of a statistical anomaly-based ID are that it will not
detect an attack that does not significantly change the system operating characteristics, or
it may falsely detect a non-attack event that had caused a momentary anomaly in the
system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.

Which of the following would NOT violate the Due Diligence concept?


A.

Security policy being outdated


B.

Data owners not laying out the foundation of data protection


C.

Network administrator not taking mandatory two-week vacation as planned


D.

Latest security patches for servers being installed as per the Patch Management
process





D.
  

Latest security patches for servers being installed as per the Patch Management
process



To be effective a patch management program must be in place (due
diligence) and detailed procedures would specify how and when the patches are applied
properly (Due Care). Remember, the question asked for NOT a violation of Due Diligence,
in this case, applying patches demonstrates due care and the patch management process
in place demonstrates due diligence.
Due diligence is the act of investigating and understanding the risks the company faces. A
company practices due diligence by developing and implementing security policies,
procedures, and standards. Detecting risks would be based on standards such as ISO 27001, Best Practices,
and other published standards such as NIST standards for example.
Due Diligence is understanding the current threats and risks. Due diligence is practiced by
activities that make sure that the protection mechanisms are continually maintained and
operational where risks are constantly being evaluated and reviewed. The security policy
being outdated would be an example of violating the due diligence concept.
Due Care is implementing countermeasures to provide protection from those threats. Due
care is taking the necessary steps to help protect the company and its resources from
possible risks that have been identified.
foundation of data protection (doing something about it) and ensure that the directives are
being enforced (actually being done and kept at an acceptable level), this would violate the
due care concept.
If a company does not practice due care and due diligence pertaining to the security of its
assets, it can be legally charged with negligence and held accountable for any ramifications
of that negligence. Liability is usually established based on Due Diligence and Due Care or
the lack of either.
A good way to remember this is using the first letter of both words within Due Diligence
(DD) and Due Care (DC).
Due Diligence = Due Detect
Steps you take to identify risks based on best practices and standards.
Due Care = Due Correct.
Action you take to bring the risk level down to an acceptable level and maintaining that
level over time.
The following answers were wrong:
Security policy being outdated:
While having and enforcing a security policy is the right thing to do (due care), if it is
outdated, you are not doing it the right way (due diligence). This questions violates due
diligence and not due care.
Data owners not laying out the foundation for data protection:
Data owners are not recognizing the "right thing" to do. They don't have a security policy.
Network administrator not taking mandatory two week vacation:
The two week vacation is the "right thing" to do, but not taking the vacation violates due
diligence (not doing the right thing the right way).
Reference(s) used for this question:
Shon Harris, CISSP All In One, Version 5, Chapter 3, pg 110

