There's general agreement that contract programming is a good thing, and in fact most of us have been doing some form of contract programming for years, since we've been using assertions to enforce assumptions about the design of our code. What contract programming delivers is a methodology for the unequivocal identification of violations of a software's design, by using constructs built into it by its author(s), who are the only ones qualified to make such a determination. And, to a significant degree, it provides in code what previously was commonly expressed only through documentation. This yields significant rewards in terms of code quality. I'll discuss practical examples of this in parts 2 and 4. For now, let's consider the theoretical perspective, looking at postconditions.
A thorough postcondition is an equivalent description of the algorithm used in a function, but written in an orthogonal manner. For example, if you have a sort function, a check on its postcondition would verify that the input array is indeed sorted according to the function's intent. Now, let's assume that there's a 90% chance that the sort algorithm is implemented correctly, and a 90% chance that the postcondition was written correctly. If the specifications are orthogonal, the probability of both failing on the same data is 10% times 10%, or 1%. So by writing twice as much code, we may achieve 10 times the reliability. That's the real magic of contract programming. It's the same idea behind, in aircraft design, having independent boxes controlling a critical task. The controllers have different electronics, different CPUs, different algorithms, and the software is written by different teams. They vote on the answer, and the system automatically shuts down if they disagree. This is how very high reliability is achieved even though each controller itself is nowhere near highly reliable.
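As a minimal C++ sketch of this (the function name is illustrative), the postcondition is expressed in terms of the result's properties, not in terms of how the algorithm arrived at them:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical sort routine. The enforcement at the end checks the
// *outcome* (the elements are in ascending order) rather than
// re-running the algorithm that produced it.
void sort_ascending(std::vector<int>& v)
{
    std::sort(v.begin(), v.end());

    // Postcondition: an orthogonal description of what "sorted" means.
    assert(std::is_sorted(v.begin(), v.end()));
}
```

Because the sort and the check are implemented independently, a bug in one is very unlikely to be masked by a matching bug in the other.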
The discussion thus far pretty much covers basic contract programming concepts, and is amply sufficient to serve as a base for the discussions I want to present in this article. It is, however, just scratching the surface of contract programming. For example, class invariants are inherited, as are pre- and postconditions. This is one aspect of contract programming that is particularly difficult to emulate in C++. (Note: the Digital Mars C/C++ compiler has had support for contract programming as a language extension since 2000.) Further, the complexity of callouts (a public method of class B calling a public method of class A as part of the modification of internal state, which in turn invokes another class B method, which erroneously fires the invariant) is not considered further at this time.
If you want further convincing of the goodness of Contract Programming, there are several important works, including [1, 7, 10]. For now, we're going to take it as read that it's accepted. Where the real controversy resides, however, is in what one can and should do in response to contract violations. That is the main subject of this article, and will occupy much of parts 2-4.
Ok, ok. What this preposterous initialism (only Mongolian throat singers could make it into an acronym!) actually represents is Absence Of Evidence Of Error Is Not Evidence Of Absence Of Error. In propositional logic terms, error detection/code verification is an implication, not a bi-implication. In other words, if an error is detected in your code, your code can be said to be erroneous; but if no error is detected in your code, it cannot be inferred that your code is correct. We'll see the ramifications of this throughout the article. I should point out that this is a must-buy-in principle: if you don't accept it, you'll be wasting your time in reading on.
From a practical perspective, some operating systems can detect some memory corruptions, such as accessing an unmapped page or attempting to write to a read-only page. But in general there's no overarching help from hardware, or anything else, to be had for compiled languages. And even in such cases as there are, a programmer may choose to rely on the operating system catching an access violation to effect expected program behaviour; this is the way that some operating systems provide dynamic stack allocation (see Chapter 32 of Imperfect C++).
There are two main reasons why this must be so. First, the Heisenbergian: How can we measure something if we�re modifying it? If contract-programming constructs were to be part of the normal function logic, then they�d need to be subject to quality assurance, requiring contract enforcements on the contract enforcements. And so on, and so on, ad absurdum. Second, the practical: We need to be able to remove some/all contract enforcements for reasons of efficiency, and/or to change the checking/reporting mechanisms and responses, depending on application context. (Note: the removal of a particular enforcement does not affect the contracted behaviour it was policing. The removal of all enforcements does not mean that the associated contracts have been removed, nor does the absence of the enforcements imply that adherence to the contracts' provisions is no longer mandatory.)
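As a sketch of the second point, an enforcement can be made removable by build configuration without in any way touching the contract it polices. The macro and configuration-flag names here are hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical configuration flag: defining ACMELIB_NO_ENFORCEMENT
// elides the *enforcement*; the contract itself (s must not be null)
// remains binding on all callers in all builds.
#if defined(ACMELIB_NO_ENFORCEMENT)
# define ACMELIB_PRECONDITION(expr)   ((void)0)
#else
# define ACMELIB_PRECONDITION(expr)   assert(expr)
#endif

size_t string_length(char const* s)
{
    ACMELIB_PRECONDITION(NULL != s);  // enforcement of the contract

    size_t n = 0;
    for (; '\0' != *s; ++s)
    {
        ++n;
    }
    return n;
}
```

Whether the enforcement is compiled in or not, passing a null pointer to `string_length()` remains a contract violation.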
This'll be bread and butter to anyone who's used asserts, since you will know that the cardinal rule is not to effect changes in the clauses of assertions, because doing so can lead to the worst of all possible debugging scenarios: code that works correctly in debug mode and not in release mode. As soon as the line blurs between normal code and contract enforcement code, things are going to get squiffy, and your clients are going to be displeased.
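A hypothetical example of the cardinal rule:

```cpp
#include <cassert>
#include <vector>

// Never effect state changes inside an assertion: when NDEBUG is
// defined, the whole assert expression is elided from the build.
void pop_last(std::vector<int>& v)
{
    // WRONG: in a release build this line would vanish entirely, and
    // the element would never be removed:
    //
    //   assert((v.pop_back(), true));
    //
    // RIGHT: enforcement and logic kept strictly apart:
    assert(!v.empty());  // contract enforcement only; no side effects
    v.pop_back();        // normal program logic, present in all builds
}
```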
Unfortunately, this tends to foster a blurring of the intended purposes of contracts and exceptions in the minds of engineers. Historically, this has tended to be much less the case in C and C++, because most C/C++ developers have used assertions for contract enforcement. Assertions in C and C++ tend to be included in debug builds only, and elided in release builds for performance reasons. However, as the programming community's appreciation for contract programming grows, there is increasing interest in using contract enforcement in release builds. Indeed, as we'll see in part 4 of this article, there is a very good argument for leaving contracts in released software, since the alternative may be worse, sometimes catastrophically so. Thus, there exists a very real danger that the same misapprehension may enter the C++ community psyche, so it's worth discussing the issues here.
A thrown exception typically represents an exceptional condition that a valid program may encounter and, perhaps, recover from, such as inability to open a socket, or access a file, or connect to a database. A process in which such an exception is thrown remains a valid process, irrespective of whether it might continue to execute and attempt to reopen/access/connect, or whether it emits an error message and shuts down gracefully. Further, exceptions may also be used as a part of the processing logic for a given algorithm/component/API, although this is less often the case, and tends to be frowned on in the main [7, 13, 14].
There are other kinds of exceptions, from which a process cannot generally recover, but which do not represent an invalid state. We may call these Practically Irrecoverable Exceptional Conditions. The most obvious example is an out-of-memory condition. In receipt of such an exception, a process can often do little else but close down, though it is still in a valid state. It's exactly analogous to not being able to open a file, but for the fact that the reporting and response mechanisms are likely to want to use memory; in such cases you should consider using memory parachutes, which can work well under some circumstances.
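A memory parachute might be sketched as follows; the reserve size and the names are illustrative, and in a real program the handler would be installed via std::set_new_handler() early in main():

```cpp
#include <cstdio>
#include <cstdlib>
#include <new>

// A reserve block allocated at startup and released only when memory
// is exhausted, so that the shutdown/reporting path has some memory
// to work with.
static char* s_parachute = new char[64 * 1024];

void out_of_memory_handler()
{
    delete[] s_parachute;     // cut the parachute loose ...
    s_parachute = NULL;

    std::fputs("out of memory: shutting down\n", stderr);
    std::exit(EXIT_FAILURE);  // ... report, and close down gracefully
}

// Typically installed once, early in main():
//
//   std::set_new_handler(out_of_memory_handler);
```

The process that takes this path is still in a valid state; the parachute merely ensures it has the resources to say so before it closes down.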
Another practically irrecoverable exceptional condition is failure to allocate Thread-Specific Storage (TSS) [7, 16, 17] resources. To use TSS one needs to allocate slots, the keys for which are well-known values, shared between all threads, that act as indexes into tables of thread-specific data. One gets at one's TSS data by specifying the key, and the TSS library works out the slot for the calling thread, and gets/sets the value for that slot for you. TSS underpins multi-threaded libraries (a per-thread GetLastError() is one of the simpler uses), and running out of TSS keys is a catastrophic event. If you run out before you've built the runtime structures for your C runtime library, there's really no hope of doing anything useful, and precious little you can say about it.
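Using POSIX threads as an example (the wrapper names are illustrative), TSS slot allocation and its failure mode look like this:

```cpp
#include <cstddef>
#include <pthread.h>
#include <stdint.h>

// A TSS key is a process-wide, well-known value; each thread has its
// own slot indexed by it.
static pthread_key_t s_key;

bool init_tss()
{
    // pthread_key_create() returns 0 on success. A non-zero result
    // (e.g. EAGAIN when the per-process key limit is exhausted) is
    // precisely the practically irrecoverable condition described
    // above: no key, no thread-specific data.
    return 0 == pthread_key_create(&s_key, NULL);
}

void set_value(int value)
{
    pthread_setspecific(s_key, (void*)(intptr_t)value);
}

int get_value()
{
    return (int)(intptr_t)pthread_getspecific(s_key);
}
```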
In contrast to the role of exceptions in indicating exceptional runtime conditions, or in forming part of program logic, a contract violation represents the detection of a program state that is explicitly counter to its design. As we'll see in The Principle Of Irrecoverability (in part 2), upon a contract violation the program is, literally, in an invalid state, and, in principle, there is no further valid action that the program may perform. Though nothing deleterious whatsoever might happen, the program has theoretically limitless potential for doing harm. In practice this tends to be limited to the current process, or the current operating environment, but could in theory be as bad as sending a proclamation of eternal war to Alpha Centauri!
Let's say we have a program that gets a date from its input, perhaps typed in by a user. At the point where the input is accepted into the program as a date, it should be validated as being an acceptable date, i.e. dates like 37-July-2004 should be rejected, with a suitable message and an invitation to retry presented to the user. Once a date has been validated and accepted into the logic of the program, however, at some level the internal architecture of the program will always assume valid dates. (To not do this, but instead to codify invalid-date branches at every level of the application, would be both very inefficient and lead to extremely complex code.)
At the point at which that assumption is made, a function precondition should reflect it, and it would be appropriate to place a contract enforcement (e.g. an assert(theDate.is_valid());) to guard it. This gives the application programmer confidence that at that point it really is a valid date, that nothing bad slipped by the user input validator, and that nothing else corrupted it in the meantime. A message to the user to retry would make no sense at all at this point. An invalid date at that point does not represent bad input; it represents a bug in the program. Thus contracts are designed to find bugs in the program, not bad data being input to the program.
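The distinction can be sketched as follows; the Date type and its is_valid() member are assumptions for the example:

```cpp
#include <cassert>

// Assumed date type (leap years elided for brevity).
struct Date
{
    int day;
    int month;
    int year;

    bool is_valid() const
    {
        static int const days[] = { 31, 28, 31, 30, 31, 30,
                                    31, 31, 30, 31, 30, 31 };

        return month >= 1 && month <= 12
            && day >= 1 && day <= days[month - 1];
    }
};

// At the input boundary: an invalid date is bad *input*, not a bug,
// so the caller reports it and invites the user to retry.
bool accept_date(Date const& d)
{
    return d.is_valid();
}

// Deep inside the program: by design only valid dates reach here, so
// validity is enforced as a precondition, not re-validated as input.
int month_index(Date const& d)
{
    assert(d.is_valid());  // contract: an invalid date here is a bug

    return d.month - 1;
}
```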