Topic 2: Security Operations Administration
Which of the following is based on the premise that the quality of a software product is a
direct function of the quality of its associated software development and maintenance
processes?
A.
The Software Capability Maturity Model (CMM)
B.
The Spiral Model
C.
The Waterfall Model
D.
Expert Systems Model
The Software Capability Maturity Model (CMM)
The Capability Maturity Model (CMM) is a service mark owned by Carnegie
Mellon University (CMU) and refers to a development model elicited from actual data. The
data was collected from organizations that contracted with the U.S. Department of
Defense, who funded the research, and became the foundation from which CMU created
the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing
system.
The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level
evolutionary path of increasingly organized and systematically more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a
research and development center sponsored by the U.S. Department of Defense (DoD).
SEI was founded in 1984 to address software engineering issues and, in a broad sense, to
advance software engineering methodologies. More specifically, SEI was established to
optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the
software industry as a whole, SEI advocates industry-wide adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO). The ISO 9000 standards specify an
effective quality system for manufacturing and service industries; ISO 9001 deals
specifically with software development and maintenance. The main difference between the
two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable
quality level for software processes, while the CMM establishes a framework for continuous
process improvement and is more explicit than the ISO standard in defining the means to
be employed to that end.
CMM's Five Maturity Levels of Software Processes
At the initial level, processes are disorganized, even chaotic. Success is likely to depend on
individual efforts, and is not considered to be repeatable, because processes would not be
sufficiently defined and documented to allow them to be replicated.
At the repeatable level, basic project management techniques are established, and
successes can be repeated, because the requisite processes have been established,
defined, and documented.
At the defined level, an organization has developed its own standard software process
through greater attention to documentation, standardization, and integration.
At the managed level, an organization monitors and controls its own processes through
data collection and analysis.
At the optimizing level, processes are constantly being improved through monitoring
feedback from current processes and introducing innovative processes to better serve the
organization's particular needs.
When applied to an existing organization's software development processes, the CMM
provides an effective approach to improving them. Eventually it became clear that the model
could be applied to other processes as well, giving rise to a more general concept that is
applied to business processes and to developing people.
CMM is superseded by CMMI
The CMM model proved useful to many organizations, but its application in software
development has sometimes been problematic. Applying multiple models that are not
integrated within and across an organization could be costly in terms of training, appraisals,
and improvement activities. The Capability Maturity Model Integration (CMMI) project was
formed to sort out the problem of using multiple CMMs.
For software development processes, the CMM has been superseded by Capability Maturity Model Integration (CMMI), though the CMM continues to be a general theoretical
process capability model used in the public domain.
CMM is adapted to processes other than software development
The CMM was originally intended as a tool to evaluate the ability of government contractors
to perform a contracted software project. Though it comes from the area of software
development, it can be, has been, and continues to be widely applied as a general model
of the maturity of processes (e.g., IT Service Management processes) in IS/IT (and other)
organizations.
Source:
http://searchsoftwarequality.techtarget.com/sDefinition/0,,sid92_gci930057,00.html
and
http://en.wikipedia.org/wiki/Capability_Maturity_Model
Which of the following describes a technique in which a number of processor units are
employed in a single computer system to increase the performance of the system in its
application environment above the performance of a single processor of the same kind?
A.
Multitasking
B.
Multiprogramming
C.
Pipelining
D.
Multiprocessing
Multiprocessing
Multiprocessing is an organizational technique in which a number of
processor units are employed in a single computer system to increase the performance of
the system in its application environment above the performance of a single processor of
the same kind. In order to cooperate on a single application or class of applications, the
processors share a common resource. Usually this resource is primary memory, and the
multiprocessor is called a primary memory multiprocessor. A system in which each
processor has a private (local) main memory and shares secondary (global) memory with
the others is a secondary memory multiprocessor, sometimes called a multicomputer
system because of the looser coupling between processors. The more common multiprocessor systems incorporate only processors of the same type and performance
and thus are called homogeneous multiprocessors; however, heterogeneous
multiprocessors are also employed. A special case is the attached processor, in which a
second processor module is attached to a first processor in a closely coupled fashion so
that the first can perform input/output and operating system functions, enabling the
attached processor to concentrate on the application workload.
The following were incorrect answers:
Multiprogramming: The interleaved execution of two or more programs by a computer, in
which the central processing unit executes a few instructions from each program in
succession.
Multitasking: The concurrent operation by one central processing unit of two or more
processes.
Pipelining: A procedure for processing instructions in a computer program more rapidly, in
which each instruction is divided into numerous small stages, and a population of
instructions are in various stages at any given time. One instruction does not have to wait
for the previous one to complete all of the stages before it enters the pipeline. It is
similar to an assembly line in the real world.
References:
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
http://www.answers.com/multiprocessing?cat=technology
http://www.answers.com/multitasking?cat=biz-fin
http://www.answers.com/pipelining?cat=technology
Why does compiled code pose more of a security risk than interpreted code?
A.
Because malicious code can be embedded in compiled code and be difficult to detect.
B.
If the executed compiled code fails, there is a chance it will fail insecurely.
C.
Because compilers are not reliable.
D.
There is no risk difference between interpreted code and compiled code.
Because malicious code can be embedded in compiled code and be difficult to detect.
From a security standpoint, a compiled program is less desirable than an
interpreted one because malicious code can be
resident somewhere in the compiled code, and it is difficult to detect in a very large
program.
Who is responsible for implementing user clearances in computer-based information
systems at the B3 level of the TCSEC rating ?
A.
Security administrators
B.
Operators
C.
Data owners
D.
Data custodians
Security administrators
Security administrator functions include user-oriented activities such as
setting user clearances, setting initial passwords, and setting other security characteristics
for new users or changing security profiles for existing users. Data owners have the ultimate
responsibility for protecting data, and thus for determining proper user access rights to data.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following is given the responsibility of the maintenance and protection of the
data?
A.
Data owner
B.
Data custodian
C.
User
D.
Security administrator
Data custodian
The data custodian is usually responsible for maintaining and protecting the data.
The following answers are incorrect:
Data owner is usually a member of management, in charge of a specific business unit, and
is ultimately responsible for the protection and use of the information.
User is any individual who routinely uses the data for work-related tasks.
Security administrator's tasks include creating new system user accounts and implementing
new security software.
References: Shon Harris, AIO v3, Chapter 3: Security Management Practices, Pages
99-103.
Which of the following is not a component of the Operations Security "triples"?
A.
Asset
B.
Threat
C.
Vulnerability
D.
Risk
Risk
The Operations Security domain is concerned with triples: threats,
vulnerabilities, and assets.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 216.
Buffer overflow and boundary condition errors are subsets of which of the following?
A.
Race condition errors.
B.
Access validation errors.
C.
Exceptional condition handling errors.
D.
Input validation errors.
Input validation errors.
In an input validation error, the input received by a system is not properly
checked, resulting in a vulnerability that can be exploited by sending a certain input
sequence. There are two important types of input validation errors: buffer overflows (input
received is longer than expected input length) and boundary condition error (where an input
received causes the system to exceed an assumed boundary). A race condition occurs
when there is a delay between the time when a system checks to see if an operation is
allowed by the security model and the time when the system actually performs the
operation. In an access validation error, the system is vulnerable because the access
control mechanism is faulty. In an exceptional condition handling error, the system
somehow becomes vulnerable due to an exceptional condition that has arisen.
Source: DUPUIS, Clement, Access Control Systems and Methodology CISSP Open Study
Guide, version 1.0, march 2002 (page 105).
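A minimal sketch of the two input validation error types described above, written as the checks whose absence creates them (the names, length limit, and numeric range are all assumed for illustration):

```python
MAX_LEN = 64        # assumed maximum expected input length
LOW, HIGH = 0, 100  # assumed valid range for a numeric field

def validate(raw: str) -> int:
    # Length check: rejects input longer than expected, the condition
    # a buffer overflow exploits when the check is missing.
    if len(raw) > MAX_LEN:
        raise ValueError("input exceeds expected length")
    value = int(raw)
    # Boundary check: rejects values that would push the system past
    # an assumed boundary (a boundary condition error otherwise).
    if not LOW <= value <= HIGH:
        raise ValueError("value outside permitted boundary")
    return value
```

Input that fails either check is rejected before it reaches the rest of the system, which is the essence of input validation as a control.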
Related to information security, availability is the opposite of which of the following?
A.
delegation
B.
distribution
C.
documentation
D.
destruction
destruction
Availability is the opposite of "destruction."
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the
Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
What prevents a process from accessing another process' data?
A.
Memory segmentation
B.
Process isolation
C.
The reference monitor
D.
Data hiding
Process isolation
Process isolation is where each process has its own distinct address space
for its application code and data. In this way, it is possible to prevent each process from
accessing another process' data. This prevents data leakage, or modification to the data
while it is in memory. Memory segmentation is a virtual memory management mechanism.
The reference monitor is an abstract machine that mediates all accesses to objects by
subjects. Data hiding, also known as information hiding, is a mechanism that ensures
information available at one processing level is not available at another level.
Source: HARE, Chris, Security Architecture and Models, Area 6 CISSP Open Study Guide,
January 2002.
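The effect of distinct address spaces can be demonstrated with a small Python sketch (the data and process layout are illustrative assumptions, not part of the question):

```python
from multiprocessing import Process

data = ["parent value"]  # lives in the parent's address space

def tamper():
    # Runs in a child process with its own address space, so this
    # mutation touches only the child's private copy of the list.
    data[0] = "modified by child"

if __name__ == "__main__":
    child = Process(target=tamper)
    child.start()
    child.join()
    # Process isolation kept the parent's memory intact.
    print(data[0])  # parent value
```

Had the two processes shared an address space (as threads do), the child's write would have been visible to the parent; process isolation is what prevents that.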
Which of the following is not a preventive control?
A.
Deny programmer access to production data.
B.
Require change requests to include information about dates, descriptions, cost analysis
and anticipated effects.
C.
Run a source comparison program between control and current source periodically.
D.
Establish procedures for emergency changes.
Run a source comparison program between control and current source periodically.
Running the source comparison program between control and current source
periodically allows detection, not prevention, of unauthorized changes in the production
environment. The other options are preventive controls.
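The detective control in question, periodically comparing the controlled source against what is running in production, can be sketched with Python's difflib (the two code snapshots below are hypothetical):

```python
import difflib

# Hypothetical snapshots: the approved (control) source and the
# source currently running in production.
control = ["def pay(amount):", "    return amount"]
current = ["def pay(amount):", "    return amount * 1.1"]

diff = list(difflib.unified_diff(control, current,
                                fromfile="control", tofile="current",
                                lineterm=""))

# A non-empty diff flags an unauthorized change only after the fact,
# which is why this is a detective rather than a preventive control.
change_detected = bool(diff)
```

The preventive controls among the other options stop the change from happening at all; this check merely surfaces it once it has.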
Source: Information Systems Audit and Control Association, Certified Information Systems
Auditor 2002 review manual, chapter 6: Business Application System Development,
Acquisition, Implementation and Maintenance (page 309).
Which of the following is commonly used for retrofitting multilevel security to a database
management system?
A.
trusted front-end.
B.
trusted back-end.
C.
controller.
D.
kernel
trusted front-end.
If you are "retrofitting" that means you are adding to an existing database
management system (DBMS). You could go back and redesign the entire DBMS but the
cost of that could be expensive and there is no telling what the effect will be on existing
applications, but that is redesigning and the question states retrofitting. The most cost
effective way with the least effect on existing applications while adding a layer of security
on top is through a trusted front-end.
Clark-Wilson is a synonym of that model as well. It was used to add more granular control
or control to database that did not provide appropriate controls or no controls at all. It is one
of the most popular model today. Any dynamic website with a back-end database is an
example of this today.
Such a model would also introduce separation of duties by allowing the subject only
specific rights on the objects they need to access.
The following answers are incorrect:
trusted back-end. Incorrect because the trusted back-end would be the database
management system (DBMS) itself; since the question says "retrofitting", this answer is
eliminated.
controller. Incorrect because this is a distractor and has nothing to do with retrofitting.
kernel. Incorrect because this is a distractor and has nothing to do with retrofitting. A
security kernel provides protection to devices and processes but would be inefficient
at protecting rows or columns in a table.
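A toy sketch of front-end mediation (the table, subjects, and policy are all invented for illustration; a real trusted front-end would sit between users and the DBMS as a separate trusted component):

```python
# A back-end "database" that has no access controls of its own.
DATABASE = {"salaries": {"alice": 90000, "bob": 85000}}

# Assumed policy: subject -> set of (table, row) pairs it may read.
ACL = {
    "alice": {("salaries", "alice")},
    "hr": {("salaries", "alice"), ("salaries", "bob")},
}

def trusted_read(subject: str, table: str, row: str) -> int:
    # The front end mediates every access before the back end is
    # touched, retrofitting granular control onto the DBMS.
    if (table, row) not in ACL.get(subject, set()):
        raise PermissionError(f"{subject} may not read {table}/{row}")
    return DATABASE[table][row]
```

Because every read passes through `trusted_read`, row-level policy is enforced without modifying the underlying store, which is exactly the retrofit the question describes.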
What is the most secure way to dispose of information on a CD-ROM?
A.
Sanitizing
B.
Physical damage
C.
Degaussing
D.
Physical destruction
Physical destruction
First, note that the question is specifically about a CD-ROM. The information stored on a
CD-ROM is not in electromagnetic format, so a degausser would be ineffective.
You cannot sanitize a CD-ROM, though you might be able to sanitize a CD-RW. A CD-ROM
is a write-once device and cannot be overwritten like a hard disk or other magnetic device.
Physical damage would not be enough, as information could still be extracted in a lab from
the undamaged portion of the media, or even from the pieces after the physical damage has
been done.
Physical destruction, using a shredder, a microwave oven, or melting, would be very
effective and is the best choice for non-magnetic media such as a CD-ROM.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.