
Type I and Type II Security Engineering Errors
By Roy D. Follendore III
Copyright (c) 2002 RDFollendoreIII
Engineers who attempt to justify security through statistics are prone to the errors induced by formulating those statistics. Because of its rational origins, no statistical conclusion can exist as an entirely objective result of logic. Security involves the evaluation of questions of complexity associated with hypothetical systems. Beliefs, attitudes, interactions, and intentions all play a role in the concept of security analysis. It is impossible to carelessly substitute an extensional phrasing of one policy question equivalently for another. If a useful statistical result is expected to be probable, then it can easily be appreciated that within such a hypothesis it is possible to make an error. The test itself can too easily become a self-fulfilling prophecy.
As in all statistical hypotheses, an assumption is made, an experiment is designed and executed to test it, and calculations are performed to determine whether the results are probable given the assumptions. If not, the assumption is thrown out and an alternative assumption is created. To quote John Allen Paulos, "Proving a statistic is more a matter of disconfirming propositions rather than confirming them."
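The testing loop described above can be sketched in a few lines. This is a minimal, illustrative simulation, not anything from the original essay: the assumed failure rate, trial counts, and function names are all hypothetical. The "assumption" is that a system fails 10% of the time; a simulated experiment is run against a different true rate, and the assumption is rejected when the observed result would be too improbable under it.

```python
import random

random.seed(42)

ASSUMED_FAILURE_RATE = 0.10   # the hypothesis under test (illustrative)
TRIALS = 1000                 # size of one simulated experiment
ALPHA = 0.05                  # rejection threshold

def run_experiment(true_rate: float) -> int:
    """Count failures in TRIALS attempts at the given true rate."""
    return sum(random.random() < true_rate for _ in range(TRIALS))

def p_value(observed: int, n: int, rate: float, sims: int = 5000) -> float:
    """Monte Carlo estimate of P(a result at least this extreme | hypothesis)."""
    expected = n * rate
    extreme = 0
    for _ in range(sims):
        sim = sum(random.random() < rate for _ in range(n))
        if abs(sim - expected) >= abs(observed - expected):
            extreme += 1
    return extreme / sims

# Reality differs from the assumption: the true failure rate is 15%.
observed = run_experiment(true_rate=0.15)
p = p_value(observed, TRIALS, ASSUMED_FAILURE_RATE)
if p < ALPHA:
    print(f"reject the assumption (p={p:.4f})")   # disconfirmation, per Paulos
else:
    print(f"cannot reject the assumption (p={p:.4f})")
```

Note that the procedure can only disconfirm the assumption; a large p-value never proves the assumption true, which is exactly the asymmetry the Paulos quote points at.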
Two types of errors exist within the procedure of attempting to find a statistical proof. A Type I error occurs when a true hypothesis is rejected. A Type II error occurs when a false hypothesis is accepted. The distinction between these two types of security errors is important. A high-level example can be found in the issue of choosing the right algorithm. One Government security expert tries to avoid Type I policy errors, in which the good guys do not get to receive high quality cryptographic results. Another Government security expert tries to avoid Type II policy errors, in which potential criminals get to use high quality cryptographic results. A statistical compromise may not exist, because it is possible for unacceptable rates of both Type I and Type II errors to coexist simultaneously.
Other examples can be found within the process of cryptographic design. In general, the more cryptographic processing that can be performed, the stronger the result and the fewer the errors, but the longer it takes and the slower the processing. One engineer tries to avoid a Type I design error, in which very high speed processes do not get the throughput needed to pass through a cryptographic system. Another engineer attempts to avoid a Type II design error, in which very high rates of cryptographic throughput fail to produce an adequate level of protection. The lesson learned from understanding this is that the statistical specification of a cryptographic design cannot exist independently of its rational application.
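The processing-versus-speed trade-off above can be demonstrated with a standard key-derivation function, where the iteration count is a direct dial between cost to an attacker and cost to the legitimate user. This is a sketch only; the password, salt, and iteration counts are illustrative, not recommendations.

```python
import hashlib
import time

password = b"correct horse battery staple"   # illustrative secret
salt = b"example-salt"                       # illustrative salt

# More iterations = more cryptographic processing = slower for everyone,
# attacker and legitimate user alike.
for iterations in (1_000, 100_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7} iterations: {elapsed * 1000:.1f} ms")
```

Choosing the iteration count is exactly the engineers' dilemma: too low and protection is inadequate (a Type II design error), too high and legitimate throughput suffers (a Type I design error). The right setting depends on the application, not on the statistic alone.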
Should a manager wait to design a system that is acceptable to a security hypothesis, or simply accept existing problems and solutions that are unacceptable? Once a system exists, should managers fix systems while they are secure, or wait until the systems become insecure? These are the problems underlying the issues surrounding the security engineering process.
Copyright (c) 2001-2007 RDFollendoreIII All Rights Reserved
