SECURE PROGRAM

Consider what we mean when we say that a program is "secure." We know that security implies some degree of trust that the program enforces expected confidentiality, integrity, and availability. From the point of view of a program or a programmer, how can we look at a software component or code fragment and assess its security? This question is, of course, like the problem of assessing software quality in general. One way to assess security or quality is to ask people to name the characteristics of software that contribute to its overall security. However, we are likely to get different answers from different people. This difference occurs because the importance of the characteristics depends on who is analyzing the software. For example, one person may decide that code is secure because it takes too long to break through its security controls. And someone else may decide code is secure if it has run for a period with no apparent failures. But a third person may decide that any potential fault in meeting security requirements makes code insecure.
Early work in computer security was based on the paradigm of "penetrate and patch," in which analysts searched for and repaired faults. Often, a top-quality "tiger team" would be convened to test a system's security by attempting to cause it to fail. The test was considered a "proof" of security; if the system withstood the attacks, it was considered secure. Unfortunately, far too often the proof became a counterexample, in which not just one but several serious security problems were uncovered. The problem discovery in turn led to a rapid effort to "patch" the system to repair or restore its security. However, the patch efforts were largely useless, often making the system less secure rather than more secure because they frequently introduced new faults. There are at least four reasons why patching failed:
1. The pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context. In particular, the analysts paid attention to the immediate cause of the failure and not to the underlying design or requirements faults.
2. The fault often had nonobvious side effects in places other than the immediate area of the fault.
3. Fixing one problem often caused a failure somewhere else, or the patch addressed the problem in only one place but not in other related places (illustrated in the code sketch after this list).
4. The fault could not be fixed properly because system functionality or performance would suffer as a result.
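To make reason 3 concrete, the following hypothetical C sketch (the function names, buffer sizes, and input string are invented for illustration) shows how a narrow patch can repair the reported overflow in one routine while an identical unchecked copy survives in a related code path, leaving the system just as exploitable as before.

    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16

    /* Patched after the penetration test: the copy is now bounded. */
    static void set_display_name(char *dst, const char *src) {
        strncpy(dst, src, NAME_LEN - 1);
        dst[NAME_LEN - 1] = '\0';          /* the reported overflow is repaired here */
    }

    /* Related code path the patch never touched: the same unchecked copy remains. */
    static void set_login_name(char *dst, const char *src) {
        strcpy(dst, src);                  /* still copies with no length check */
    }

    int main(void) {
        char display[NAME_LEN], login[NAME_LEN];
        const char *input = "a_user_supplied_string_much_longer_than_sixteen_bytes";

        set_display_name(display, input);  /* safe after the patch */
        set_login_name(login, input);      /* overflows login[]: the flaw persists */
        printf("%s\n", display);
        return 0;
    }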
The inadequacies of penetrate-and-patch led researchers to seek a better way to be confident that code meets its security requirements. One way to do that is to compare the requirements with the behavior. That is, to understand program security, we can examine programs to see whether they behave as their designers intended or users expected. We call such unexpected behavior a program security flaw: inappropriate program behavior caused by a program vulnerability.
Program security flaws can derive from any kind of software fault. That is, they cover everything from a misunderstanding of program requirements to a one-character error in coding or even typing. The flaws can result from problems in a single code component or from the failure of several programs or program pieces to interact compatibly through a shared interface. The security flaws can reflect code that was intentionally designed or coded to be malicious or code that was simply developed in a sloppy or misguided way. Thus, it makes sense to divide program flaws into two separate logical categories: inadvertent human errors versus malicious, intentionally induced flaws.
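The "one-character error" mentioned above is easy to picture with a small hypothetical C fragment (the names and sizes are invented for illustration): writing "<=" where "<" was intended turns an ordinary copy loop into an out-of-bounds write that a carefully sized input can exploit.

    #include <stddef.h>

    #define BUF_LEN 8

    /* Copies n bytes of src into dst. The single wrong character ("<=" where
       "<" was intended) makes the loop run one iteration too many. */
    void copy_field(char *dst, const char *src, size_t n) {
        for (size_t i = 0; i <= n; i++) {  /* flaw: should be i < n */
            dst[i] = src[i];
        }
    }

    int main(void) {
        char buf[BUF_LEN];
        char input[BUF_LEN + 1] = "AAAAAAAA";  /* eight bytes of data plus '\0' */
        copy_field(buf, input, BUF_LEN);       /* writes buf[8]: one byte out of bounds */
        return 0;
    }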

Types of Flaws
To aid our understanding of the problems and their prevention or correction, we can define categories that distinguish one kind of problem from another. For example, Landwehr et al. present a taxonomy of program flaws, dividing them first into intentional and inadvertent flaws. They further divide intentional flaws into malicious and nonmalicious ones.
In the taxonomy, the inadvertent flaws fall into six categories:

  1. validation error (incomplete or inconsistent): permission checks
  2. domain error: controlled access to data
  3. serialization and aliasing: program flow order
  4. inadequate identification and authentication: basis for authorization
  5. boundary condition violation: failure on first or last case
  6. other exploitable logic errors
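To make two of these categories concrete, the hedged C sketch below (the file paths, permission model, and function names are invented for illustration) shows an incomplete validation check and a serialization flaw of the classic time-of-check-to-time-of-use kind.

    #include <fcntl.h>
    #include <unistd.h>

    /* Validation error (incomplete check): the permission test covers only the
       write path; a read request is opened with no check at all. */
    int open_record(const char *path, int want_write, int user_is_owner) {
        if (want_write && !user_is_owner)
            return -1;                      /* write access is validated ... */
        return open(path, want_write ? O_RDWR : O_RDONLY);  /* ... read access is not */
    }

    /* Serialization flaw (time-of-check to time-of-use): the check and the use
       are separate steps, so the file can be replaced by a link to a protected
       target in the gap between them. */
    int open_checked(const char *path) {
        if (access(path, R_OK) != 0)        /* check ... */
            return -1;
        return open(path, O_RDONLY);        /* ... use: the race window is the flaw */
    }

A sounder design would validate every access path, not just one, and would check the file it has already opened (for example, by examining the open descriptor with fstat) so that no gap remains between the check and the use.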

References

  1. Charles P. Pfleeger and Shari Lawrence Pfleeger, Security in Computing, PHI.
  2. Lecture notes, Veer Surendra Sai University of Technology (VSSUT).
