Topic 1: Access Control
Which TCSEC level is labeled Controlled Access Protection?
A.
C1
B.
C2
C.
C3
D.
B1
C2
C2 is labeled Controlled Access Protection.
The TCSEC defines four divisions: D, C, B and A where division A has the highest security.
Each division represents a significant difference in the trust an individual or organization
can place on the evaluated system. Additionally divisions C, B and A are broken into a
series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3 and A1.
Each division and class expands or modifies as indicated the requirements of the
immediately prior division or class.
D — Minimal protection
Reserved for those systems that have been evaluated but that fail to meet the
requirements for a higher division
C — Discretionary protection
C1 — Discretionary Security Protection
Identification and authentication
Separation of users and data
Discretionary Access Control (DAC) capable of enforcing access limitations on an
individual basis
Required System Documentation and user manuals
C2 — Controlled Access Protection
More finely grained DAC
Individual accountability through login procedures
Audit trails
Object reuse
Resource isolation
B — Mandatory protection
B1 — Labeled Security Protection
Informal statement of the security policy model
Data sensitivity labels
Mandatory Access Control (MAC) over selected subjects and objects
Label exportation capabilities
All discovered flaws must be removed or otherwise mitigated
Design specifications and verification
B2 — Structured Protection
Security policy model clearly defined and formally documented
DAC and MAC enforcement extended to all subjects and objects
Covert storage channels are analyzed for occurrence and bandwidth
Carefully structured into protection-critical and non-protection-critical elements
Design and implementation enable more comprehensive testing and review
Authentication mechanisms are strengthened
Trusted facility management is provided with administrator and operator segregation
Strict configuration management controls are imposed
B3 — Security Domains
Satisfies reference monitor requirements
Structured to exclude code not essential to security policy enforcement
Significant system engineering directed toward minimizing complexity
Security administrator role defined
Audit security-relevant events
Automated imminent intrusion detection, notification, and response
Trusted system recovery procedures
Covert timing channels are analyzed for occurrence and bandwidth
An example of such a system is the XTS-300, a precursor to the XTS-400.
A — Verified protection
A1 — Verified Design
Functionally identical to B3
Formal design and verification techniques including a formal top-level specification
Formal management and distribution procedures
An example of such a system is Honeywell's Secure Communications Processor SCOMP,
a precursor to the XTS-400
Beyond A1
System Architecture demonstrates that the requirements of self-protection and
completeness for reference monitors have been implemented in the Trusted Computing
Base (TCB).
Security Testing automatically generates test cases from the formal top-level specification or
formal lower-level specifications.
Formal Specification and Verification is where the TCB is verified down to the source code
level, using formal verification methods where feasible.
Trusted Design Environment is where the TCB is designed in a trusted facility with only trusted (cleared) personnel.
The following are incorrect answers:
C1 is Discretionary Security Protection.
C3 does not exist; it is only a distractor.
B1 is called Labeled Security Protection.
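The class hierarchy above can be condensed into a small lookup, a quick way to drill the class names. The ordering encoding below is an illustrative sketch, not part of the standard itself:

```python
# TCSEC classes in ascending order of assurance (D lowest, A1 highest).
TCSEC_CLASSES = ["D", "C1", "C2", "B1", "B2", "B3", "A1"]

# Label given to each class by the Orange Book.
TCSEC_NAMES = {
    "D": "Minimal Protection",
    "C1": "Discretionary Security Protection",
    "C2": "Controlled Access Protection",
    "B1": "Labeled Security Protection",
    "B2": "Structured Protection",
    "B3": "Security Domains",
    "A1": "Verified Design",
}

def meets(cls_a: str, cls_b: str) -> bool:
    """True if class cls_a provides at least the assurance of cls_b
    (each class incorporates the requirements of the classes below it)."""
    return TCSEC_CLASSES.index(cls_a) >= TCSEC_CLASSES.index(cls_b)

print(TCSEC_NAMES["C2"])    # Controlled Access Protection
print(meets("B2", "C2"))    # True: B2 extends C2's requirements
```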
Reference(s) used for this question:
HARE, Chris, Security Management Practices CISSP Open Study Guide, version 1.0, April
1999.
and AIOv4 Security Architecture and Design (pages 357 - 361)
AIOv5 Security Architecture and Design (pages 358 - 362)
Technical controls such as encryption and access control can be built into the operating
system, be software applications, or can be supplemental hardware/software units. Such
controls, also known as logical controls, represent which pairing?
A.
Preventive/Administrative Pairing
B.
Preventive/Technical Pairing
C.
Preventive/Physical Pairing
D.
Detective/Technical Pairing
Preventive/Technical Pairing
Preventive/Technical controls are also known as logical controls and can be
built into the operating system, be software applications, or can be supplemental
hardware/software units.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 34.
In Rule-Based Access Control (RuBAC), access is determined by rules. Such rules would fit
within what category of access control?
A.
Discretionary Access Control (DAC)
B.
Mandatory Access control (MAC)
C.
Non-Discretionary Access Control (NDAC)
D.
Lattice-based Access control
Non-Discretionary Access Control (NDAC)
Rule-based access control is a type of non-discretionary access control
because access is determined by rules and the subject does not decide what those
rules will be; the rules are uniformly applied to ALL of the users or subjects.
In general, all access control policies other than DAC are grouped in the category of nondiscretionary
access control (NDAC). As the name implies, policies in this category have
rules that are not established at the discretion of the user. Non-discretionary policies
establish controls that cannot be changed by users, but only through administrative action.
Both Role Based Access Control (RBAC) and Rule Based Access Control (RuBAC) fall
within Non Discretionary Access Control (NDAC). If it is not DAC or MAC then it is most likely NDAC.
IT IS NOT ALWAYS BLACK OR WHITE
The different access control models are not totally exclusive of each other. MAC makes
use of rules in its implementation. However, with MAC you have requirements above and
beyond simple access rules: the subject must get formal approval from management, the
subject must have the proper security clearance, and objects must have labels/sensitivity
levels attached to them. If all of this is in place, then you have MAC.
BELOW YOU HAVE A DESCRIPTION OF THE DIFFERENT CATEGORIES:
MAC = Mandatory Access Control
Under a mandatory access control environment, the system or security administrator will
define what permissions subjects have on objects. The administrator does not dictate
users' access but simply configures the proper level of access as dictated by the Data
Owner.
The MAC system will look at the security clearance of the subject and compare it with the
object's sensitivity level or classification level. This is what is called the dominance
relationship.
The subject must DOMINATE the object's sensitivity level, which means that the subject
must have a security clearance equal to or higher than that of the object he is attempting to access.
MAC also introduces the concept of labels. Every object will have a label attached to it
indicating the classification of the object, as well as categories that are used to impose the
need-to-know (NTK) principle. Even though a user has a security clearance of Secret, it does
not mean he would be able to access just any Secret document within the system. He would
be allowed to access only Secret documents for which he has a need to know, formal
approval, and where he belongs to one of the categories attached to the object.
If there is no clearance and no labels, then it is NOT Mandatory Access Control.
Many of the other models can mimic MAC, but none of them have labels and a dominance
relationship, so they are NOT in the MAC category.
NISTIR-7316 says:
Usually a labeling mechanism and a set of interfaces are used to determine access based
on the MAC policy; for example, a user who is running a process at the Secret classification should not be allowed to read a file with a label of Top Secret. This is known
as the “simple security rule,” or “no read up.” Conversely, a user who is running a process
with a label of Secret should not be allowed to write to a file with a label of Confidential.
This rule is called the “*-property” (pronounced “star property”) or “no write down.” The *-
property is required to maintain system security in an automated environment. A variation
on this rule called the “strict *-property” requires that information can be written at, but not
above, the subject’s clearance level. Multilevel security models such as the Bell-La Padula
Confidentiality and Biba Integrity models are used to formally specify this kind of MAC
policy.
DAC = Discretionary Access Control
DAC is also known as: Identity Based access control system.
The owner of an object is defined as the person who created the object. As such, the owner
has the discretion to grant access to other users on the network. Access will be granted
based solely on the identity of those users.
Such a system is good for a low level of security. One of the major problems is the fact that a
user who has access to someone else's file can further share the file with other users
without the knowledge or permission of the owner of the file. Very quickly this could
become the wild wild west, as there is no control on the dissemination of the information.
RBAC = Role Based Access Control
RBAC is a form of non-discretionary access control.
Role Based access control usually maps directly with the different types of jobs performed
by employees within a company.
For example, there might be 5 security administrators within your company. Instead of
creating each of their profiles one by one, you would simply create a role and assign the
administrators to the role. Once an administrator has been assigned to a role, he will
IMPLICITLY inherit the permissions of that role.
RBAC is a great tool for environments where there is a large rotation of employees on a
daily basis, such as a very large help desk.
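The implicit-inheritance idea can be sketched in a few lines. The role and permission names below are, of course, invented for illustration:

```python
# Permissions are attached to the role once; every member inherits them implicitly.
ROLE_PERMS = {"security_admin": {"reset_password", "review_audit_log"}}
role_members = {"security_admin": set()}

def assign(user: str, role: str) -> None:
    """Add a user to a role; no per-user permission entries are created."""
    role_members[role].add(user)

def permissions(user: str) -> set:
    """Union of the permissions of every role the user holds."""
    perms = set()
    for role, members in role_members.items():
        if user in members:
            perms |= ROLE_PERMS[role]
    return perms

assign("dana", "security_admin")    # one-step onboarding
print(sorted(permissions("dana")))  # ['reset_password', 'review_audit_log']
```

Removing a user from the role revokes everything at once, which is why RBAC suits environments with high staff turnover.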
RuBAC = Rule Based Access Control
RuBAC is a form of non-discretionary access control.
A good example of a rule-based access control device would be a firewall. A single set of
rules is imposed on all users attempting to connect through the firewall.
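A rule-based check can be sketched as a first-match rule list applied identically to every connection, with no regard for who the subject is. The network prefixes, ports, and actions here are invented for illustration:

```python
# Each rule is (source-address prefix, destination port, action).
# The same ordered list is evaluated for EVERY subject — no per-user discretion.
RULES = [
    ("10.0.0.", 22, "deny"),    # no SSH from the lab segment
    ("10.0.0.", 443, "allow"),  # HTTPS from the lab segment is fine
    ("", 80, "allow"),          # HTTP allowed from anywhere ("" matches all)
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """First-match evaluation; default-deny if no rule matches."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action
    return "deny"

print(evaluate("10.0.0.5", 22))     # deny
print(evaluate("10.0.0.5", 443))    # allow
print(evaluate("192.168.1.9", 80))  # allow
```

Note there are no labels and no clearances anywhere in the decision, which is exactly why this is rule-based (non-discretionary) control and not MAC.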
NOTE FROM CLEMENT:
A lot of people tend to confuse MAC and Rule-Based Access Control.
Mandatory Access Control must make use of LABELS. If there are only rules and no labels, it
cannot be Mandatory Access Control. This is why it is called Non-Discretionary Access
Control (NDAC).
There are even books out there that are WRONG on this subject. Books are sometimes
opinionated and not strictly based on facts.
In MAC subjects must have clearance to access sensitive objects. Objects have labels that
contain the classification to indicate the sensitivity of the object and the label also has
categories to enforce the need to know.
Today the best example of rule based access control would be a firewall. All rules are
imposed globally to any user attempting to connect through the device. This is NOT the
case with MAC.
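The dominance relationship and need-to-know categories that distinguish MAC can be sketched as a small check. The clearance levels and compartment names below are illustrative, not drawn from any particular system:

```python
# Ordered sensitivity levels. A clearance "dominates" a classification when its
# level is equal or higher AND the subject holds every category (compartment)
# attached to the object — the need-to-know test.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(clearance, subject_cats, classification, object_cats):
    return (LEVELS[clearance] >= LEVELS[classification]
            and set(object_cats) <= set(subject_cats))

# Simple security rule ("no read up"): read only if the subject dominates.
def can_read(clearance, subject_cats, label, object_cats):
    return dominates(clearance, subject_cats, label, object_cats)

# *-property ("no write down"): write only at or above the subject's level.
def can_write(clearance, label):
    return LEVELS[label] >= LEVELS[clearance]

# A Secret-cleared user in the (hypothetical) CRYPTO compartment:
print(can_read("Secret", {"CRYPTO"}, "Secret", {"CRYPTO"}))   # True
print(can_read("Secret", {"CRYPTO"}, "Secret", {"NUCLEAR"}))  # False: fails need-to-know
print(can_read("Secret", {"CRYPTO"}, "Top Secret", set()))    # False: no read up
print(can_write("Secret", "Confidential"))                    # False: no write down
```

The second call is the key point from the text: a Secret clearance alone is not enough; the subject must also hold the object's categories.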
I strongly recommend you read carefully the following document:
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
It is one of the best access control study documents to prepare for the exam. Usually I tell
people not to worry about the hundreds of NIST documents and other references. This
document is an exception. Take some time to read it.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten
Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
and
NISTIR-7316 at http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
and
Conrad, Eric; Misenar, Seth; Feldman, Joshua (2012-09-01). CISSP Study Guide (Kindle
Locations 651-652). Elsevier Science (reference). Kindle Edition.
Access control is the collection of mechanisms that permits managers of a system to
exercise a directing or restraining influence over the behavior, use, and content of a
system. It does not permit management to:
A.
specify what users can do
B.
specify which resources they can access
C.
Specify how to restrain hackers
D.
specify what operations they can perform on a system.
Specify how to restrain hackers
Access control is the collection of mechanisms that permits managers of a
system to exercise a directing or restraining influence over the behavior, use, and content
of a system. It permits management to specify what users can do, which resources they
can access, and what operations they can perform on a system. Specifying HOW to
restrain hackers is not directly linked to access control.
Source: DUPUIS, Clement, Access Control Systems and Methodology, Version 1, May
2002, CISSP Open Study Group Study Guide for Domain 1, Page 12.
What is considered the most important type of error to avoid for a biometric access control
system?
A.
Type I Error
B.
Type II Error
C.
Combined Error Rate
D.
Crossover Error Rate
Type II Error
When a biometric system is used for access control, the most important error
is the false accept or false acceptance rate, or Type II error, where the system would
accept an impostor.
A Type I error is known as the false reject or false rejection rate and is not as important in
the security context as a Type II error. A Type I error occurs when a valid company employee is
rejected by the system and cannot get access even though he is a valid user.
The Crossover Error Rate (CER) is the point at which the false rejection rate equals the
false acceptance rate if you were to graph Type I and Type II errors. The lower
the CER, the better the device.
The Combined Error Rate is a distracter and does not exist.
Source: TIPTON, Harold F. & KRAUSE, Micki, Information Security Management
Handbook, 4th edition (volume 1), 2000, CRC Press, Chapter 1, Biometric Identification
(page 10).
Almost all types of detection permit a system's sensitivity to be increased or decreased
during an inspection process. If the system's sensitivity is increased, such as in a biometric
authentication system, the system becomes increasingly selective and has the possibility of
generating:
A.
Lower False Rejection Rate (FRR)
B.
Higher False Rejection Rate (FRR)
C.
Higher False Acceptance Rate (FAR)
D.
It will not affect either FAR or FRR
Higher False Rejection Rate (FRR)
Almost all types of detection permit a system's sensitivity to be increased or
decreased during an inspection process. If the system's sensitivity is increased, such as in
a biometric authentication system, the system becomes increasingly selective and has a
higher False Rejection Rate (FRR).
Conversely, if the sensitivity is decreased, the False Acceptance Rate (FAR) will increase.
Thus, to have a valid measure of the system performance, the Cross Over Error (CER) rate
is used. The Crossover Error Rate (CER) is the point at which the false rejection rates and
the false acceptance rates are equal. The lower the value of the CER, the more accurate
the system.
There are three categories of biometric accuracy measurement (all represented as
percentages):
False Reject Rate (a Type I Error): When authorized users are falsely rejected as
unidentified or unverified.
False Accept Rate (a Type II Error): When unauthorized persons or imposters are falsely
accepted as authentic.
Crossover Error Rate (CER): The point at which the false rejection rates and the false
acceptance rates are equal. The smaller the value of the CER, the more accurate the
system.
NOTE:
Within the ISC2 book they make use of the terms Accept or Acceptance and also Reject or
Rejection when referring to the types of errors within biometrics. Below we use
Acceptance and Rejection throughout the text for consistency. However, on the real exam
you could see either of the terms.
Performance of biometrics
Different metrics can be used to rate the performance of a biometric factor, solution, or
application. The most common performance metrics are the False Acceptance Rate (FAR)
and the False Rejection Rate (FRR).
When using a biometric application for the first time, the user needs to enroll in the system.
The system requests fingerprints, a voice recording, or another biometric factor from the
operator; this input is registered in the database as a template which is linked internally to a
user ID. The next time the user wants to authenticate or identify himself, the
biometric input provided by the user is compared to the template(s) in the database by a
matching algorithm, which responds with acceptance (match) or rejection (no match).
FAR and FRR
The FAR or False Acceptance Rate is the probability that the system incorrectly authorizes
a non-authorized person, due to incorrectly matching the biometric input with a valid
template. The FAR is normally expressed as a percentage; following the FAR definition, this
is the percentage of invalid inputs which are incorrectly accepted.
The FRR or False Rejection Rate is the probability that the system incorrectly rejects
access to an authorized person, due to failing to match the biometric input provided by the
user with a stored template. The FRR is normally expressed as a percentage; following the
FRR definition, this is the percentage of valid inputs which are incorrectly rejected.
FAR and FRR are very much dependent on the biometric factor that is used and on the
technical implementation of the biometric solution. Furthermore, the FRR is strongly
person-dependent; a personal FRR can be determined for each individual.
Take this into account when determining the FRR of a biometric solution; one person is
insufficient to establish an overall FRR for a solution. The FRR might also increase due to
environmental conditions or incorrect use, for example when using dirty fingers on a
fingerprint reader. Mostly the FRR lowers as a user gains more experience in how to
use the biometric device or software.
FAR and FRR are key metrics for biometric solutions; some biometric devices or software
even allow them to be tuned so that the system matches or rejects more readily. Both FRR
and FAR are important, but for most applications one of them is considered most important.
Two examples to illustrate this:
When biometrics are used for logical or physical access control, the objective of the
application is to disallow access to unauthorized individuals under all circumstances. It is clear that a very low FAR is needed for such an application, even if it comes at the price of
a higher FRR.
When surveillance cameras are used to screen a crowd of people for missing children, the
objective of the application is to identify any missing children that come up on the screen.
When the identification of those children is automated using face recognition software,
the software has to be set up with a low FRR. As such, a higher number of matches will be
false positives, but these can be reviewed quickly by surveillance personnel.
False Acceptance Rate is also called False Match Rate, and False Rejection Rate is
sometimes referred to as False Non-Match Rate.
[Figure omitted: graphical representation of FAR and FRR errors on a graph, indicating the
CER.]
CER
The Crossover Error Rate or CER is illustrated on the graph above. It is the rate at which
both FAR and FRR are equal.
The matching algorithm in biometric software or a device uses a (configurable) threshold
which determines how close to a template the input must be for it to be considered a
match. This threshold value is in some cases referred to as sensitivity; it is marked on the X
axis of the plot. When you reduce this threshold, there will be more false acceptance errors
(higher FAR) and fewer false rejection errors (lower FRR); a higher threshold will lead to a
lower FAR and higher FRR.
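The trade-off just described can be illustrated numerically: sweep a match-score threshold over imposter and genuine score samples, compute FAR and FRR at each setting, and locate the point where the two rates meet (the CER). The score values below are invented purely for illustration:

```python
# Hypothetical matcher scores (higher = closer to the stored template).
genuine  = [0.91, 0.85, 0.78, 0.88, 0.60, 0.95, 0.82, 0.70]  # authorized users
imposter = [0.20, 0.35, 0.55, 0.40, 0.65, 0.30, 0.50, 0.45]  # unauthorized users

def far(threshold):
    """False Acceptance Rate: fraction of imposter scores at/above threshold."""
    return sum(s >= threshold for s in imposter) / len(imposter)

def frr(threshold):
    """False Rejection Rate: fraction of genuine scores below threshold."""
    return sum(s < threshold for s in genuine) / len(genuine)

# Sweep thresholds; the CER sits where |FAR - FRR| is smallest.
thresholds = [t / 100 for t in range(0, 101)]
cer_t = min(thresholds, key=lambda t: abs(far(t) - frr(t)))
print(f"threshold={cer_t:.2f}  FAR={far(cer_t):.3f}  FRR={frr(cer_t):.3f}")

# Raising the threshold (more selective) raises FRR and lowers FAR.
assert frr(0.90) > frr(0.50) and far(0.90) < far(0.50)
```

This mirrors the exam point: increasing sensitivity drives FRR up and FAR down, and the CER summarizes overall accuracy in a single number.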
Speed
Most manufacturers of biometric devices and software can give clear numbers on the time
it takes to enroll as well as on the time for an individual to be authenticated or identified
using their application. If speed is important, take the time to consider this: 5 seconds might
seem a short time on paper or when testing a device, but if hundreds of people will use the
device multiple times a day, the cumulative loss of time might be significant.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third
Edition ((ISC)2 Press) (Kindle Locations 2723-2731). Auerbach Publications. Kindle
Edition.
and
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten
Domains of Computer Security, 2001, John Wiley & Sons, Page 37.
and
http://www.biometric-solutions.com/index.php?story=performance_biometrics
What does the Clark-Wilson security model focus on?
A.
Confidentiality
B.
Integrity
C.
Accountability
D.
Availability
Integrity
The Clark-Wilson model addresses integrity. It incorporates mechanisms to
enforce internal and external consistency, a separation of duty, and a mandatory integrity
policy.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security
Architectures and Models (page 205).
The type of discretionary access control (DAC) that is based on an individual's identity is
also called:
A.
Identity-based Access control
B.
Rule-based Access control
C.
Non-Discretionary Access Control
D.
Lattice-based Access control
Identity-based Access control
An identity-based access control is a type of Discretionary Access Control
(DAC) that is based on an individual's identity.
DAC is good for a low-security environment. The owner of the file decides who has
access to the file.
If a user creates a file, he is the owner of that file. An identifier for this user is placed in the
file header and/or in an access control matrix within the operating system.
Ownership might also be granted to a specific individual. For example, a manager for a
certain department might be made the owner of the files and resources within her
department. A system that uses discretionary access control (DAC) enables the owner of
the resource to specify which subjects can access specific resources.
This model is called discretionary because the control of access is based on the discretion
of the owner. Many times department managers, or business unit managers, are the
owners of the data within their specific department. Being the owner, they can specify who
should have access and who should not.
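The owner-centric behavior can be sketched as a tiny access control matrix keyed on identity. The file and user names are invented for illustration:

```python
# A DAC access control matrix: each object records its owner and a per-user ACL.
files = {
    "budget.xlsx": {"owner": "alice", "acl": {"alice": {"read", "write"}}},
}

def grant(fname, requester, user, right):
    """Only the owner may extend the ACL — access is at the owner's discretion."""
    f = files[fname]
    if requester != f["owner"]:
        raise PermissionError("only the owner may grant access")
    f["acl"].setdefault(user, set()).add(right)

def can(user, fname, right):
    """Access is decided solely on the identity of the requesting user."""
    return right in files[fname]["acl"].get(user, set())

print(can("bob", "budget.xlsx", "read"))   # False
grant("budget.xlsx", "alice", "bob", "read")
print(can("bob", "budget.xlsx", "read"))   # True
```

The weakness the text describes shows up naturally: once bob can read the file's contents, nothing in the model stops him from copying the data onward without alice's knowledge.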
Reference(s) used for this question:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 220). McGraw-
Hill . Kindle Edition.
Which of the following models does NOT include data integrity or conflict of interest?
A.
Biba
B.
Clark-Wilson
C.
Bell-LaPadula
D.
Brewer-Nash
Bell-LaPadula
Bell LaPadula model (Bell 1975): The granularity of objects and subjects is
not predefined, but the model prescribes simple access rights. Based on simple access
restrictions the Bell LaPadula model enforces a discretionary access control policy
enhanced with mandatory rules. Applications with rigid confidentiality requirements and
without strong integrity requirements may properly be modeled.
These simple rights combined with the mandatory rules of the policy considerably restrict
the spectrum of applications which can be appropriately modeled.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Also check:
Proceedings of the IFIP TC11 12th International Conference on Information Security,
Samos (Greece), May 1996, On Security Models.
Which of the following can be defined as a framework that supports multiple, optional
authentication mechanisms for PPP, including cleartext passwords, challenge-response,
and arbitrary dialog sequences?
A.
Extensible Authentication Protocol
B.
Challenge Handshake Authentication Protocol
C.
Remote Authentication Dial-In User Service
D.
Multilevel Authentication Protocol
Extensible Authentication Protocol
RFC 2828 (Internet Security Glossary) defines the Extensible Authentication
Protocol as a framework that supports multiple, optional authentication mechanisms for
PPP, including cleartext passwords, challenge-response, and arbitrary dialog sequences. It
is intended for use primarily by a host or router that connects to a PPP network server via
switched circuits or dial-up lines. The Remote Authentication Dial-In User Service
(RADIUS) is defined as an Internet protocol for carrying dial-in user's authentication
information and configuration information between a shared, centralized authentication
server and a network access server that needs to authenticate the users of its network
access ports. The other option is a distracter.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
In which of the following security models is the subject's clearance compared to the object's
classification such that specific rules can be applied to control how the subject-to-object
interactions take place?
A.
Bell-LaPadula model
B.
Biba model
C.
Access Matrix model
D.
Take-Grant model
Bell-LaPadula model
The Bell-LaPadula model is also called a multilevel security system because
users with different clearances use the system and the system processes data with
different classifications. It was developed by the US Military in the 1970s.
A security model maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques necessary to enforce the security policy.
A security model is usually represented in mathematics and analytical ideas, which are
mapped to system specifications and then developed by programmers through
programming code. So we have a policy that encompasses security goals, such as “each
subject must be authenticated and authorized before accessing an object.” The security
model takes this requirement and provides the necessary mathematical formulas,
relationships, and logic structure to be followed to accomplish this goal.
A system that employs the Bell-LaPadula model is called a multilevel security system
because users with different clearances use the system, and the system processes data at
different classification levels. The level at which information is classified determines the
handling procedures that should be used. The Bell-LaPadula model is a state machine
model that enforces the confidentiality aspects of access control. A matrix and security
levels are used to determine if subjects can access different objects. The subject’s
clearance is compared to the object’s classification and then specific rules are applied to
control how subject-to-object interactions can take place.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 369). McGraw-
Hill. Kindle Edition.
Which of the following can best eliminate dial-up access through a Remote Access Server
as a hacking vector?
A.
Using a TACACS+ server.
B.
Installing the Remote Access Server outside the firewall and forcing legitimate users to
authenticate to the firewall.
C.
Setting modem ring count to at least 5.
D.
Only attaching modems to non-networked hosts.
Installing the Remote Access Server outside the firewall and forcing legitimate users to
authenticate to the firewall.
Containing the dial-up problem is conceptually easy: by installing the Remote
Access Server outside the firewall and forcing legitimate users to authenticate to the
firewall, any access to internal resources through the RAS can be filtered as would any
other connection coming from the Internet.
The use of a TACACS+ server by itself cannot eliminate hacking.
Setting a modem ring count to at least 5 may help in defeating war-dialing hackers who look
for modems by dialing long series of numbers.
Attaching modems only to non-networked hosts is not practical and would not prevent
these hosts from being hacked.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000,
Chapter 2: Hackers.