The Artima Developer Community

The C++ Source
Contract Programming 101
The Nuclear Reactor and the Deep Space Probe, Part I
by Matthew Wilson
January 1, 2006

Summary
Contract Programming is something that's been around for a long time but is getting far more air play, and rigorous examination, in recent times. Despite general agreement regarding the attraction of software contracts to programmers (and users!), there remains equivocation on what to do when contracts are broken. This is Part One of a series that takes a slightly philosophical look at this important issue, considers the tradeoffs between information and safety in reacting to contract violations, looks at practical measures for shutting down errant processes, and introduces a new technique for the implementation of unrecoverable exceptions in C++.

The article is in four parts, of which this is the first. The contents of the parts are as follows:

Part 1

The first part contains a (re)fresher course on contract programming, pointing out the difference between functional and operational contracts, and detailing the three strict monkeys of contract programming for functional contracts: pre-conditions, post-conditions and class invariants. It also examines the issue of observation—who defines and detects (in)correctness—and highlights The Principle of Removability. Finally, it examines the often-misunderstood relationship between exceptional conditions, invalid data and contract violations, highlighting the fact that exceptions are a tool for use in the implementation of correctly handling programs as well as a mechanism for the reporting of contract violations.

Part 2

The second part takes a more detailed look at the separate phases of contract enforcement: Detection, Reporting and Response. It then proceeds to introduce the defining instrument of this article: The Principle of Irrecoverability. The remainder of this part is concerned with the refutation of objections to the principle, including, importantly, the fallacy that precondition violations could be exempt from irrecoverability.

Part 3

The third part takes one last swipe at objections to the Principle of Irrecoverability, in the case of whether the failure of plug-in components may avoid irrecoverability. The remainder of this part takes a practical turn, looking at the practical exceptions to the principle, and examining techniques for maximising the likelihood of graceful shutdown when violations are detected.

Part 4

The final part continues the practical bent of its predecessor. First, it introduces an unrecoverable exception class for C++, which does exactly what it says on the tin: once thrown, it can be caught to facilitate graceful process shutdown, but its effect cannot be quenched—shutdown is inevitable. The article series is then brought to a close by an examination of a "methodology" for using irrecoverability, known as Informed Zero Tolerance, with some success stories from its application.

Introduction

Contract programming got its first, or at least most thorough and widely recognized, treatment by Bertrand Meyer in his groundbreaking book Object-Oriented Software Construction [1], where it was known as Design By Contract, and it is a core element of the Eiffel language [2]. (Note: the term Design By Contract was trademarked in 2003 by Dr. Meyer, so all the little free software pixies are dropping the term like a hot coal. The latest favoured term is Contract Programming, as suggested by Walter Bright in 2004 and used in the recent proposal by Thorsten Ottosen and Lawrence Crowl to the C++ standards body [3].)

The use of the contract metaphor in software engineering is a growing, but not entirely well understood, phenomenon. A software contract itself is, as in life, merely the agreement (explicit or otherwise) between the parties involved. It "defines a set of expectations between the two parties, that vary in strength, latitude and negotiability, and specified penalties [for contract violation]" [4]. The software contract metaphor encompasses not only functional behaviour—types, interfaces, parameters and return values, and so on - but also operational behaviour—complexity, speed, use of resources, and so on.

The issue of what action is to be taken in response to contract violation is a separate matter, just as in life. In this four-part article I'm focusing on the use of programmatic constructs—enforcements—that police the functional contracts codified in software: known as Contract Enforcement. Other aspects of the software contract metaphor are outside the scope of this discussion.

Contract Programming 101

Contract programming is all about finding bugs in your software. Sounds amazing? Well, let's put it another way. It's about finding design flaws. Now it sounds even more amazing! How is a compiler—a nice piece of kit to be sure, but still a very dumb thing compared to a human being (marketing dept. notwithstanding)—supposed to be able to understand your design, and to do so better than you can? After all, it's likely that no other software engineer, not even the gurus, will understand your design even as well as you, never mind better. So, of course, the compiler cannot. You have to lay a trail for it.

In the same way that human language contains redundancies and error-checking mechanisms, so must our code. You tell the compiler what your design is as you go, and each time it picks up a crumb, it verifies the design.

Essentially, contract programming is about specifying the design, in terms of the behaviour, of your components (functions and classes), and asserting truths about the design of your code in the form of runtime tests placed within it. These assertions of truth will be tested as the thread of execution passes through the parts of your components, and will "fire" if they don't hold. (Note: not all parts of contracts are amenable to codification in current languages, and there is some debate as to whether they may ever be [4]. This does not detract from the worth of contract programming, but it does define limits to its active realisation in code. In this article, I will be focusing on the practical benefits of codifying contract programming constructs.)

The behaviour is specified in terms of function/method preconditions, function/method postconditions, and class invariants. (There are some subtle variations on this, such as process invariants, but they all share the same basic concepts with these three elements.) Preconditions state what conditions must be true in order for the function/method to perform according to its design. Satisfying preconditions is the responsibility of the caller. Postconditions say what conditions will exist after the function/method has performed according to its design. Satisfying postconditions is the responsibility of the callee. Class invariants state what conditions hold true for the class to be in a state in which it can perform according to its design; an invariant is a "consistency property that every instance of the class must satisfy whenever it's observable from the outside" [4]. Class invariants should be verified after construction, before destruction, and before and after the call of every public member function.

Let's begin with a look at a simple function, strcpy(), which is implemented along the lines of:

char *strcpy(char *dest, char const *src)
{
  char *const r = dest;
  for(;; ++dest, ++src)
  {
    if('\0' == (*dest = *src))
    {
       break;
    }
  }
  return r;
}
What are its pre-/post-conditions? Let N be the number of characters pointed to by src preceding the null-terminating character '\0' (0).

Some preconditions are:

- src points to a readable, null-terminated sequence of N non-null characters followed by '\0'
- dest points to writeable memory of at least N + 1 bytes
- the source and destination ranges do not overlap

Some postconditions are:

- the N + 1 characters pointed to by src, including the null terminator, have been copied to the memory pointed to by dest
- the return value is dest

Preconditions

We might see the precondition validated along the following lines:
char *strcpy(char *dest, char const *src)
{
  char * r;

  /* Precondition checks */
  assert(IsValidReadableString(src));                     /* Is src valid?  */
  assert(IsValidWriteableMemory(dest, 1 + strlen(src)));  /* Is dest valid? */

  for(r = dest;; ++dest, ++src)
  {
    if('\0' == (*dest = *src))
    {
       break;
    }
  }
  return r;
}
where IsValidReadableString() and IsValidWriteableMemory() are notional functions that test the validity of the given memory (in the general case, such tests cannot be fully implemented in portable C/C++). Note that in practice the precondition tests are not actually carried out before the function; rather, they are inside the function implementation, but before any of the operations the function carries out.

When a contract enforcement test, such as the assert()s in the example, fails, it is said to fire, and the code is said to have violated its contract, and to be in a violated state or an invalid state. By definition, the firing of a contract violation within a component is a message from that code's author(s) that states precisely and absolutely that the component has violated its design, and no future expectations about its behaviour can be made, and no guarantees given. As Christopher Diggins points out [5], "design flaws are transitive. There is no known method in software engineering able to predict that a detected design flaw in a particular area has not corrupted the design of the rest of the software, including the other assertions, potentially causing them to falsely accept incorrect contracts". I will examine this issue with some rigour in part 2, with respect to precondition violations and their potential recoverability.

Any methodology that makes use of software contracts and enforces their conditions via runtime tests needs to address three important elements of the enforcement: Detection, Reporting and Response. When using assert() these three are effectively carried out in one place: Detection comprises a conditional test on the given expression; Reporting involves a call to fprintf() or equivalent to display representative information, usually comprising file name + line number, the failed expression, and possibly some additional qualifying information (see Chapter 1 of Imperfect C++ [6]); Response is termination via a call to exit() or abort(). It's important to realise that, in the examples in this part, use of assert() is an implementation detail, just one means of effecting the enforcement, and quite peripheral to the contract itself. Contracts may be enforced in other ways, e.g. via exceptions, which we'll look at in Part 4.
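The three phases can also be separated explicitly in a hand-rolled enforcement construct. The following is a minimal sketch of that separation, not a definitive mechanism; the names ACME_ENFORCE and acme_contract_violation are invented for illustration:

```cpp
#include <cstdio>
#include <cstdlib>

/* Reporting + Response, factored into one function so they can be
   changed (e.g. log to a file, throw an exception) without touching
   the detection sites. */
void acme_contract_violation(char const* expr, char const* file, int line)
{
  /* Reporting: emit the failed expression and its location */
  std::fprintf(stderr, "Contract violation: \"%s\", at %s(%d)\n", expr, file, line);
  /* Response: terminate the process */
  std::abort();
}

/* Detection: evaluate the expression; divert to report + respond on failure */
#define ACME_ENFORCE(expr) \
  ((expr) ? static_cast<void>(0) \
          : acme_contract_violation(#expr, __FILE__, __LINE__))

int divide(int num, int den)
{
  ACME_ENFORCE(0 != den); /* precondition: denominator must be non-zero */
  return num / den;
}
```

Because the detection macro is a single expression, it can be compiled out wholesale (like assert() under NDEBUG) without disturbing surrounding code.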

Postconditions

As I discuss in Chapter 1 of Imperfect C++ [6], implementing postconditions in C++ is a non-trivial undertaking in general, and relies on some hoop-jumping. In this case, it would look like the following:
#if defined(ACMELIB_TEST_POSTCONDITIONS)
static char *strcpy_impl(char *dest, char const *src);

char *strcpy(char *dest, char const *src)
{
  char       *const d = dest; 
  char const *const s = src;
  char              *r;

  /* Precondition checks */
  assert(IsValidReadableString(src));
  assert(IsValidWriteableMemory(dest, 1 + strlen(src)));

  /* Call 'actual' function */
  r = strcpy_impl(dest, src);

  /* Postcondition checks. */
  assert(0 == strcmp(d, s)); /* Are all contents the same?             */
  assert(r == d);            /* Has it returned the right destination? */

  return r;
}

static char *strcpy_impl(char *dest, char const *src)
#else /* ? ACMELIB_TEST_POSTCONDITIONS */
char *strcpy(char *dest, char const *src)
#endif /* ACMELIB_TEST_POSTCONDITIONS */
{
  . . . // Same impl as shown previously for strcpy()
}
The reason for the separation into inner and outer functions is that the tests need to be outside the (inner) function context, so that the author of the tests can be confident that he/she is seeing the true post-condition. This is especially important in C++, where the destructors of local scope objects might affect the post-conditions after their ostensibly "final" test.

In practice, one tends to leave postcondition testing in the too-hard basket, save for exceptional cases where the benefits outweigh the hassles. (One such hassle in this case would be ensuring that strlen() doesn't call strcpy(), otherwise we may have a little stack problem.) Note that, in principle, there is almost no difference between checking the post-condition of a function in a wrapper function, as shown above, and checking it in the function's actual client code. It's just that the former is done (once) by the library writer, who's the one who should do it, whereas the latter is done by the library user, who may be ignorant of the full behavioural spectrum and/or out of date with respect to changes in the valid behaviour since they wrote their tests.

Class Invariants

That just leaves invariant testing. Just as we've seen with postconditions, we can't easily effect verifications of invariants before and after method calls, but we can easily arrange for them to be done at the beginning and end of method calls. Let's look at a simple type, a Dollar class:
class Dollar
{
public:
  explicit Dollar(int dollars, int cents)
    : m_dollars(dollars)
    , m_cents(cents)
  {}

public:
  Dollar &add(Dollar const &rhs);
  Dollar &add(int dollars, int cents);

public:
  int getDollars() const; // returns # of dollars
  int getCents() const;   // returns # of cents
  int getAsCents() const; // returns total amount in cents

private:
  int m_dollars;
  int m_cents;
};
Given this very simple class, what can we say and do about its invariant? Well, since one can, in principle, have or owe any amount of money, we can say that the valid range of dollars is anything that can be stored in an int. (If you've more than $2B, you might opt for a long long.) However, a dollar, whether Australian, Canadian, or of any other country that has them, only ever has 100 cents. Thus we can say that the invariant for our Dollar class is that cents must be in the range 0 to 99 inclusive. Hence we might write our invariant in the private member function is_valid():
class Dollar
{
  . . .
private:
  bool is_valid() const
  {
    if( m_cents < 0 ||
        m_cents > 99)
    {
      return false;
    }

    return true;
  }
  . . .
};
Note that this assumes that the cents field is always positive, and that negative amounts are represented in the sign of m_dollars only, e.g. $-24.99 would be represented as m_dollars = -24, m_cents = 99. If we chose to represent negativity in the total amount in both members, our invariant would need to reflect that also. Were we to do that, we'd also be able to state more in our invariant about the relationship between negative values in the member variables:
bool Dollar::is_valid() const
{
  if(::abs(m_cents) > 99)
  {
    return false;
  }
  if((m_cents < 0) != (m_dollars < 0))
  {
    return false;
  }

  return true;
}
Let's look at how we hook in the invariant:
Dollar & Dollar::add(int dollars, int cents)
{
  assert(is_valid()); // Verify invariant on method entry

  // . . . code to add the two amounts . . .

  assert(is_valid()); // Verify invariant on method exit

  return *this;
}
Note the strategy shown here of asserting on the calls to the invariant function, rather than having the invariant function itself fire the assertions. With complex classes it is also common to see some reporting occur within the invariant function, while the assertion is applied to the return value. For further discussions on this subject see Chapter 1 of Imperfect C++ [7].

The is_valid() method and its tests define and enforce the criteria for the Dollar's representational contract: it's a representation invariant. Simply: if is_valid() returns false, then there's either a design error in Dollar, or it has been corrupted (either by an undetected pre-condition violation, or by some other part of the processing trampling on its memory). An alternative view of specifying invariants is the public invariant. An example for Dollar would be:

"For any Dollar instance d, either the expression d.getDollars() * 100 + d.getCents() == d.getAsCents() && d.getCents() < 100 holds true if d.getDollars() returns a non-negative value, otherwise the expression d.getDollars() * 100 - d.getCents() == d.getAsCents() && d.getCents() < 100 holds true."

Such public invariants do not lend themselves to association with the class implementation (i.e. as methods) as readily as representational invariants, because it's customary for public methods to check invariants. If the invariant is expressed in terms of public methods, this would lead to (possibly complex) additional logic to avoid recursive calls. For that reason they're not considered further in this article.
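One way to sketch the distinction is to express the public invariant as a free function that uses only Dollar's public interface, which sidesteps the recursion problem entirely. The Dollar implementation below fills in the accessor bodies that the article elides, so it is an assumption-laden illustration rather than the article's class:

```cpp
// Minimal stand-in for the article's Dollar class: sign carried by
// m_dollars only for positive amounts is ignored here; instead getAsCents()
// mirrors the public invariant's two cases.
class Dollar
{
public:
  explicit Dollar(int dollars, int cents)
    : m_dollars(dollars)
    , m_cents(cents)
  {}

  int getDollars() const { return m_dollars; }
  int getCents() const   { return m_cents; }
  int getAsCents() const
  {
    return (m_dollars >= 0) ? m_dollars * 100 + m_cents
                            : m_dollars * 100 - m_cents;
  }

private:
  int m_dollars;
  int m_cents;
};

// Public invariant, checked entirely through the public interface,
// from outside the class -- no recursion into invariant-checking methods
bool holds_public_invariant(Dollar const& d)
{
  if(d.getCents() < 0 || d.getCents() > 99)
  {
    return false;
  }
  return (d.getDollars() >= 0)
    ? d.getDollars() * 100 + d.getCents() == d.getAsCents()
    : d.getDollars() * 100 - d.getCents() == d.getAsCents();
}
```

A test harness, rather than the class itself, would be the natural home for such a check.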

There's general agreement that contract programming is a good thing, and in fact most of us have been doing some form of contract programming for years, as we've been using assertions to enforce assumptions about the design of our code. What contract programming delivers is a methodology for the unequivocal identification of violation of software design, by using constructs built into it by its author(s), who are the only ones qualified to make such a determination. And, to a significant degree, it provides in code what previously was commonly only expressed through documentation. This gives significant rewards in terms of code quality. I'll discuss practical examples of this in parts 2 and 4. For now, let's consider the theoretical perspective, looking at postconditions.

A thorough postcondition is an equivalent description of the algorithm used in a function, but written in an orthogonal manner. For example, if you have a sort function, a check on its postcondition would verify that the input array is indeed sorted according to the function's intent [8]. Now, let's assume that there's a 90% chance that the sort algorithm is implemented correctly, and a 90% chance that the postcondition was written correctly. If the specifications are orthogonal, the probability of both failing on the same data is 10% times 10%, or 1%. So by writing twice as much code, we may achieve 10 times the reliability. That's the real magic of contract programming. It's the same idea behind, in aircraft design, having independent boxes controlling a critical task. The controllers have different electronics, different CPUs, different algorithms, and the software is written by different teams. They vote on the answer, and the system automatically shuts down if they disagree. This is how very high reliability is achieved even though each controller itself is nowhere near highly reliable.
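A minimal sketch of the sort example (names invented for illustration): the postcondition check states the "sorted" property independently of how the sorting is done, so a bug in the algorithm and a bug in the check are unlikely to coincide:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Orthogonal postcondition: verifies the "sorted" property directly,
// without re-deriving the sorting algorithm
bool is_nondecreasing(std::vector<int> const& v)
{
  for(std::size_t i = 1; i < v.size(); ++i)
  {
    if(v[i - 1] > v[i])
    {
      return false;
    }
  }
  return true;
}

void sort_checked(std::vector<int>& v)
{
  std::sort(v.begin(), v.end());
  assert(is_nondecreasing(v)); /* postcondition, stated orthogonally */
}
```

(A truly thorough postcondition would also verify that the output is a permutation of the input; the sketch checks only the ordering property for brevity.)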

The discussion thus far pretty much covers basic contract programming concepts, and is amply sufficient to serve as a base for the discussions I want to present in this article. It is, however, just scratching the surface of contract programming. For example, class invariants get inherited, as do preconditions and postconditions. This is one aspect of contract programming that is particularly difficult to emulate in C++. (Note: the Digital Mars C/C++ compiler has had support for contract programming as a language extension since 2000 [9].) Further, the complexity of callouts—a public method of class B calling a public method of class A as part of the modification of internal state, which in turn invokes another class B method which erroneously fires the invariant—is not considered further at this time.

If you want further convincing of the goodness of Contract Programming, there are several important works, including [1, 7, 10]. For now, we're going to take it as read that it's accepted. Where the real controversy resides, however, is in what one can and should do in response to contract violations. That is the main subject of this article, and will occupy much of parts 2-4.

AOEOEINEOAOE

Sing it with me (to the tune of The Police's "De Do Do Do"): "ay-oh-oh-ee, en-ee-oh-oh". :-)

Ok, ok. What this preposterous initialism—only Mongolian throat singers could make it into an acronym!—actually represents is Absence Of Evidence Of Error Is Not Evidence Of Absence Of Error. In propositional logic terms, error detection/code verification is an implication, not a bi-implication. In other words, if an error is detected in your code, your code can be said to be erroneous, but if no error is detected in your code, it cannot be inferred that your code is correct. We'll see the ramifications of this throughout the article. I should point out that this is a must-buy-in principle—if you don't accept it you'll be wasting your time in reading on.

Is Big Brother Watching? (or Who Decides What Constitutes a Contract Violation?)

Computers don't have higher order reasoning, they just execute commands we give to them in the language they understand. Hence, the question of who decides what constitutes a violation for a given piece of code is exceedingly simple: solely the author(s) of that code. No one else has designed that code; therefore no one else is in a position to make statements about its design and to reify those statements in code in the form of contract enforcements. Not the users of the libraries, not the users of any programs, not you or I, not even the designer of the programming language in which the code is written.

From a practical perspective, some operating systems can determine some memory corruptions, such as accessing an unmapped page, or attempting to write to a read-only page, or what not. But in general there's no overarching help from hardware or anything else to be had for compiled languages. And even in such cases as there are, a programmer may choose to rely on an operating system catching an access violation to effect expected program behaviour—this is the way that some operating systems provide dynamic stack allocation (see Chapter 32 of Imperfect C++ [7]).

The Principle of Removability

Another aspect I need to touch on before we get into the nitty-gritty is the Principle of Removability [11], which states "a contract enforcement should be removable from correct software without changing the (well-functioning) behaviour".

There are two main reasons why this must be so. First, the Heisenbergian: How can we measure something if we're modifying it? If contract-programming constructs were to be part of the normal function logic, then they'd need to be subject to quality assurance, requiring contract enforcements on the contract enforcements. And so on, and so on, ad absurdum. Second, the practical: We need to be able to remove some/all contract enforcements for reasons of efficiency, and/or to change the checking/reporting mechanisms and responses, depending on application context. (Note: the removal of a particular enforcement does not affect the contracted behaviour it was policing. The removal of all enforcements does not mean that the associated contracts have been removed, nor does the absence of the enforcements imply that adherence to the contracts' provisions is no longer mandatory.)

This'll be bread and butter to anyone who's used asserts, since you will know that the cardinal rule is not to effect changes in the clauses of assertions, because doing so can lead to the worst of all possible debugging scenarios: where the code works correctly in debug mode and not in release mode. As soon as the line blurs between normal code and contract enforcement code, things are going to get squiffy, and your clients are going to be displeased.
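The cardinal rule can be illustrated with a short sketch (the function and scenario are invented for illustration): any side effect inside the assertion clause vanishes when assertions are compiled out, so debug and release builds behave differently.

```cpp
#include <cassert>
#include <cstdio>

// The enforcement clause must be free of side effects, so that removing it
// (e.g. compiling with NDEBUG) cannot change the program's behaviour.
void close_log(std::FILE* log)
{
  /* BAD: with NDEBUG defined the whole expression -- including the
     fclose() call -- disappears, leaking the stream in release builds only:

       assert(0 == std::fclose(log));
  */

  /* GOOD: the operation is ordinary program logic; only the
     side-effect-free check on its saved result is removable */
  int rc = std::fclose(log);
  assert(0 == rc);
  (void)rc; /* suppress unused-variable warning when NDEBUG is defined */
}
```

The "bad" form is exactly the debug-works/release-fails trap described above: the contract enforcement has leaked into the normal function logic.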

Contract Violations are Not Exceptional Conditions

Although C/C++ do not have direct contract support, some programming languages do, including notably Eiffel [2], which was invented by Bertrand Meyer, and D [12], which was invented by Walter Bright. In those languages, and in others, such as Java and .NET, where contract-programming techniques outside the language are used, contract violations are expressed as thrown exceptions.

Unfortunately, this tends to foster a blurring of the intended purposes of contracts and exceptions in the minds of engineers. Historically, this has tended to be much less the case in C and C++, because most C/C++ developers have used assertions for contract enforcement. Assertions in C and C++ tend to be included in debug builds only, and elided in release builds for performance reasons. However, as the programming community's appreciation for contract programming grows, there is increasing interest in using contract enforcement in release builds. Indeed, as we'll see in part 4 of this article, there is a very good argument for leaving contracts in released software, since the alternative may be worse, sometimes catastrophically so. Thus, there exists a very real danger that the same misapprehension may enter the C++ community psyche, so it's worth discussing the issues here.

A thrown exception typically represents an exceptional condition that a valid program may encounter and, perhaps, recover from, such as inability to open a socket, or access a file, or connect to a database. A process in which such an exception is thrown remains a valid process, irrespective of whether it might continue to execute and attempt to reopen/access/connect, or whether it emits an error message and shuts down gracefully. Further, exceptions may also be used as a part of the processing logic for a given algorithm/component/API, although this is less often the case, and tends to be frowned on in the main [7, 13, 14].

There are other kinds of exceptions, from which a process cannot generally recover, but which do not represent an invalid state. We may call these Practically Irrecoverable Exceptional Conditions. The most obvious example is an out-of-memory condition. In receipt of such an exception, a process can often do little else but close down, though it is still in a valid state. It's exactly analogous to not being able to open a file, but for the fact that the reporting and response mechanisms are likely to want to use memory; in such cases you should consider using memory parachutes [15], which can work well under some circumstances. Another practically irrecoverable exceptional condition is failure to allocate Thread-Specific Storage [7, 16, 17] (TSS) resources. To use TSS one needs to allocate slots, the keys for which are well-known values, shared between all threads, that act as indexes into tables of thread-specific data. One gets at one's TSS data by specifying the key, and the TSS library works out the slot for the calling thread, and gets/sets the value for that slot for you. TSS underpins multi-threaded libraries—e.g. a per-thread errno / GetLastError() is one of the simpler uses—and running out of TSS keys is a catastrophic event. If you run out before you've built the runtime structures for your C runtime library, there's really no hope of doing anything useful, and precious little you can say about it.
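A memory parachute can be sketched roughly as follows. This is an illustrative outline under my own naming, not the technique of [15] verbatim: a reserve block is allocated at startup and released when an out-of-memory condition arises, so that the reporting and shutdown code still has memory to work with.

```cpp
#include <cstddef>
#include <new>

// Reserve block, preallocated at startup (names are illustrative)
static char* parachute = 0;

void open_parachute(std::size_t size)
{
  parachute = new char[size];
}

void release_parachute()
{
  delete [] parachute;
  parachute = 0;
}

// Returns true on success; on out-of-memory, releases the reserve so that
// error reporting and graceful shutdown can still allocate
bool do_work()
{
  try
  {
    char* big = new char[64]; /* stand-in for the real allocation */
    delete [] big;
    return true;
  }
  catch(std::bad_alloc&)
  {
    release_parachute();
    /* ... report the condition, flush logs, shut down gracefully ... */
    return false;
  }
}
```

The process remains in a valid state throughout; the parachute merely buys the shutdown path the memory it needs.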

In contrast to the activity of exceptions in indicating exceptional runtime conditions or forming part of program logic, a contract violation represents the detection of a program state that is explicitly counter to its design. As we'll see in The Principle Of Irrecoverability (in part 2), upon a contract violation the program is, literally, in an invalid state, and, in principle, there is no further valid action that the program may perform. Though nothing deleterious whatsoever might happen, the program has theoretically limitless potential for doing harm. In practice this tends to be limited to the current process, or the current operating environment, but could in theory be as bad as sending a proclamation of eternal war to Alpha Centauri!

Contract Violations are Not Invalid Input Data

The purpose of contracts is to find bugs in a program. They are not there to look for bugs outside of the program, i.e. in another program or in the input to that program. (The test in the contracts should be removable with no effect on the logical behaviour of that program. If there is a change, then the enforcements were misused or the program is buggy; See Principle of Removability.)

Let's say we have a program that gets a date from its input, perhaps typed in by a user. At the point where the input is accepted into the program as a date, it should be validated as being an acceptable date, i.e. dates like 37-July-2004 should be rejected with a suitable message and retry presented to the user. Once a date has been validated and accepted into the logic of the program, however, at some level the internal architecture of the program will always assume valid dates. (To not do this, but instead to codify invalid-date branches at every level of the application, would be both very inefficient and lead to extremely complex code.)

At the point at which that assumption is made, a function precondition should reflect that assumption, and it would be appropriate to place a contract enforcement (e.g. an assert(IsDateValid(theDate)); or assert(theDate.is_valid());) to guard that assumption. This gives confidence to the application programmer that at that point it really is a valid date, that nothing bad slipped by the user input validator, and that nothing else corrupted it in the meantime. A message to retry to the user at this point would make no sense at all. An invalid date at that point does not represent bad input; it represents a bug in the program. Thus contracts are designed to find bugs in the program, not bad data being input to the program.
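This boundary can be sketched in C++ (the Date type and helper names here are invented for illustration): validation at the input boundary rejects bad data and lets the user retry, while interior functions treat validity as a precondition and assert it.

```cpp
#include <cassert>

struct Date
{
  int day;
  int month;
  int year;
};

// Deliberately simple validity check (ignores month lengths and leap years)
bool is_valid_date(Date const& d)
{
  return d.day >= 1 && d.day <= 31
      && d.month >= 1 && d.month <= 12;
}

// Boundary: invalid input is rejected, not asserted on --
// the caller prompts the user to retry
bool accept_input(Date const& d, Date& out)
{
  if(!is_valid_date(d))
  {
    return false;
  }
  out = d;
  return true;
}

// Interior: a valid date is part of this function's contract;
// a firing here indicates a bug, not bad input
int day_of_year_ordinal(Date const& d)
{
  assert(is_valid_date(d)); /* precondition */
  return (d.month - 1) * 31 + d.day; /* crude ordinal, for illustration */
}
```

Note that 37-July-2004 is stopped at accept_input(); if it ever reached day_of_year_ordinal(), that would be a program bug, which is exactly what the assertion is there to catch.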

Unfortunately, there tend to be ambiguities in the use of exceptions in some languages/libraries, which can lead to confusion over what falls within the purview of contract programming. In Java, exceptions will be thrown upon array overflow errors. This exception is part of the specification of Java. So, the following code is legitimate Java:

try
{
  int[] array = new int[10];
  int   i;

  for(i = 0; ; ++i)
  {
    array[i] = 0; // eventually indexes past the end, throwing the exception
  }
}
catch(ArrayIndexOutOfBoundsException e)
{}
Thus we see the overflow exception being used as part of the normal control flow logic. Some Java programmers use this practice, meaning that array overflows in Java are not useable for contract programming purposes, because that would violate the Principle of Removability.

The same situation exists with regards to the at() member of C++'s std::basic_string, and appears to cause quite a degree of misunderstanding, generally in confusion between whether operator []() and at() are equivalent. Let's have a look at at()'s signature:

const_reference at(size_type index) const;
reference       at(size_type index);
The C++ standard [18] states the following:
Requires: index <= size()
Throws: out_of_range, if index >= size()
Returns: operator [](index)

From a contract programming perspective, this is a little misleading, since the Requires part inclines one to think that the contract stipulates that index <= size(). This is not so. Indeed the precondition for at() is empty, i.e. it admits the set of all possible values for index [19]:

at()'s Precondition: (empty)
The postcondition is where the interest resides, since it states:
at()'s Postcondition: returns reference to the index'th element if index < size(), otherwise throws out_of_range.
In other words, if index is within range it returns a reference to the corresponding element, otherwise it throws an exception. All that's none too surprising. Now consider how this differs from the operator []() method(s):
     const_reference operator [](size_type index) const;
     reference       operator [](size_type index);
The standard [18] states:
Returns: If index < size() returns data()[index]. Otherwise, if index == size(), the const version returns charT(). Otherwise, the behaviour is undefined.
This is a thoroughly different kettle of fish. If we request a valid index we get back a reference to the corresponding element, just as with at(). (Note that the const version defines the valid range to be [0, size() + 1), whereas in the non-const case it is [0, size()). Go figure! [20]) However, if we do not get the index right, the behaviour is undefined.
Termin(ologic)al interlude

Here's a simple rule for anyone that's confused about what the language dictates: The C++ standard [18] does not address the issue of contract enforcement at all, although it does do an acceptable job of the description of the contracted behaviour (ambiguous language such as basic_string's at()'s "Requires" notwithstanding).

Therefore, you cannot talk of any part of the standard library as enforcing a contract, since the standard only ever refers to undefined behaviour. Some implementations, e.g. Metrowerks', use assertions, and therefore may be said to enforce contracts, but the standard merely leaves such things up to the implementor. If you're writing STL extension libraries—e.g. Boost [21], STLSoft [22]—you can interpret each "undefined behaviour" as a potential place for contract enforcement.
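By way of illustration, a hypothetical checked wrapper (checked_string is my own invention, not any vendor's implementation) might turn operator []()'s "undefined behaviour" into contract enforcement via an assertion, while at() keeps its exception-throwing as part of the contract:

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical wrapper, sketching how an implementation might enforce
// operator []()'s contract with an assertion in debug builds.
class checked_string
{
public:
  explicit checked_string(std::string const &s) : m_s(s) {}

  char operator [](std::size_t index) const
  {
    assert(index < m_s.size()); // contract enforcement: fires on violation
    return m_s[index];
  }

  char at(std::size_t index) const
  {
    if(index >= m_s.size()) // part of the contract, not a violation
    {
      throw std::out_of_range("checked_string::at");
    }
    return m_s[index];
  }

private:
  std::string m_s;
};
```

The assertion compiles away in release builds, which is exactly the latitude the standard's "undefined behaviour" wording grants the implementor.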


There are clear differences between the contract for at() and operator [](). The contract for the mutable (non-const) version of operator []() is as follows:
operator []()'s Precondition: index < size()
operator []()'s Postcondition: returns a reference to the index'th element.
Other than confused users and an over-baked standard string class [23], what are the ramifications of these differences? Simply, it means that different element access paradigms are supported. The normal manipulation of arrays in C and C++, via knowing the range, is supported by operator []():
#include <iostream>
#include <string>

int main()
{
  std::string   s("Something or other");
  for(std::string::size_type i = 0; i < s.size(); ++i)
  {
    std::cout << i << ": " << s[i] << std::endl;
  }
  return 0;
}
And the catch-out-of-bounds method, as shown in the Java example earlier, is supported by at():
#include <iostream>
#include <stdexcept>
#include <string>

int main()
{
  std::string   s("Something or other");
  try
  {
    for(std::string::size_type i = 0; ; ++i)
    {
      std::cout << i << ": " << s.at(i) << std::endl;
    }
  }
  catch(std::out_of_range &)
  {} // Do nothing
  return 0;
}
It is important to realise that these both represent entirely valid programs, in which the client code respects the contracts of the respective std::basic_string methods used. To reiterate, specifying an out-of-bounds index for at() is not a contract violation, whereas it most certainly is for operator [](). This delineation between exception-throwing and undefined behaviour (i.e. contract violations) exists equally outside the standard. Consider the STL mapping for the Open-RJ library [24]. The record class provides an operator [](char_type const *fieldName), which throws a std::out_of_range if fieldName does not correspond to an existing field for that record within the database. Now it's certainly not the case that asking for a field (by name) that does not exist is invalid code. It affords a simple and elegant style in client code:
openrj::stl::file_database db("pets.orj", openrj::ELIDE_BLANK_RECORDS);

try
{
  for(openrj::stl::database::iterator b = db.begin(); . . . ) // enumerating db's records
  {
    openrj::stl::record  r(*b);

    std::cout << "Pet: name: " << r["Name"]
              << "; species: " << r["Species"]
              << "; weight: "  << r["Weight"]
              << std::endl;
  }
}
catch(std::out_of_range &)
{
  std::cout << "One of the records did not have the Name, Species and Weight fields"
            << std::endl;
}
The record class also provides an operator [](size_type index) method, for which an out-of-bounds index represents a contract violation. Thus, the following code is a badly formed program:
    . . .
    openrj::cpp::Record  record(*b);
    for(size_t i = 0; ; ++i)
    {
      std::cout << record[i].name << ": " << record[i].value << std::endl;
    }
    . . .
Whereas the former is perfectly valid code, and is a reasonable tool for checking the validity of Pets databases—using Record::operator []()'s thrown exception in the case that a record does not contain a field of the given name—the latter is ill-formed, and is going to cause you grief (i.e. a crash).

And if you're still sceptical whether exceptions may be part of a function's contract, consider the case of the operator new() function. If throwing an instance of bad_alloc (or something derived from bad_alloc) were not within its contract, it would mean that memory exhaustion—a runtime condition largely outside the control of the program designer—would be a contract violation, that is to say an unequivocal statement of design contradiction! Now that'd make writing good software something of a challenge ...

In Part 2 ...

I will dive deeper into the ramifications of failure, and present The Principle of Irrecoverability along with some case studies that help address objections that have been raised to it. I will finish by examining The Fallacy of the Recoverable Precondition Violation, which will lead nicely into Part 3.

Acknowledgements

Thanks to Bjorn Karlsson, Christopher Diggins, Chuck Allison, John Torjo, Thorsten Ottosen and Walter Bright for a lengthy and bracing discussion of this and other contract programming issues throughout October 2004. There were tears, fears, jeers and a few leers—sadly no beers—but I think we're all the better for it. A special thought of sympathy for Walter, whose views most closely approximate my own—what a crazy cookie!—and several of whose gnomic observations have been incorporated into the text in several places.

Thanks also to the members of the D newsgroup (news://news.digitalmars.com/d) for a similarly stimulating discussion in April 2005, particularly Ben Hinkle, Derek Parnell, George Wrede, and Regan Heath. You made me work very hard to fill in the gaps in the Principle of Irrecoverability that had previously only been held together by instinct and crossed fingers. Special thanks to Sean Kelly for stimulating the thought process that led to The Fallacy of the Recoverable Precondition Violation (part 2).

Thanks also to the following reviewers: Andrew Hunt, Bjorn Karlsson, Christopher Diggins, Kevlin Henney, Nevin Liber, Sean Kelly, Thorsten Ottosen, Walter Bright. Special thanks to Chris, whose dryness and rigour in review has proven such a valuable complement to my intuition and verbosity, and to Kevlin, whose eloquent criticism would gently give pause to the most doubtless evangelist. And I'd also like to thank my editor, Chuck Allison, for actions above and beyond the call of duty in helping me prepare this leviathan meal into digestible servings.

Despite all help received, any errors, bad jokes and poor judgements are my own.

Thank you for reading,
Matthew Wilson

References

  1. Object-Oriented Software Construction, Bertrand Meyer, Prentice Hall, 1997
  2. The Eiffel Programming Language (http://www.eiffel.com)
  3. Proposal to add Contract Programming to C++, Lawrence Crowl and Thorsten Ottosen, WG21/N1773 and J16/05-0033, 4 March 2005 (http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1773.html)
  4. Email correspondence with Kevlin Henney, June 2005
  5. Design by Contract: A Conversation with Bertrand Meyer, Part II, Bill Venners, Artima, 8 December 2003 (http://www.artima.com/intv/contracts.html)
  6. Email correspondence with Christopher Diggins, May 2005
  7. Imperfect C++, Matthew Wilson, Addison-Wesley, 2004 (http://imperfectcplusplus.com/)
  8. Sorted, Kevlin Henney, Application Development Advisor, July-August 2003.
  9. Digital Mars C/C++ compiler Contract Programming support (http://www.digitalmars.com/ctg/contract.html).
  10. The Pragmatic Programmer, Andrew Hunt and David Thomas, Addison-Wesley, 2000
  11. Email conversation with Christopher Diggins, October 2004
  12. D is a new systems programming language, created by Walter Bright (of Digital Mars; http://www.digitalmars.com/), which merges many of the best features of C, C++ and other advanced languages, including support for contract programming.
  13. The Practice of Programming, Kernighan and Pike, Addison-Wesley, 1999
  14. The Art of UNIX Programming, Eric Raymond, Addison-Wesley, 2003
  15. Jumping from the top of the parachutes, Matthew Wilson, Weblog 18 April 2005 (http://www.artima.com/weblogs/viewpost.jsp?thread=104862)
  16. Programming With POSIX Threads, David R. Butenhof, Addison-Wesley, 1997
  17. Advanced Windows, Jeffrey Richter, Microsoft Press, 1997
  18. The C++ Standard, ISO/IEC 14882:1998
  19. Thorsten Ottosen suggested, in an email conversation in May 2005, an alternative representation for a precondition that allows all possible values as true, rather than the empty condition. This would nicely balance the theoretical condition false for a function that has no satisfiable precondition. We've all come across a couple of those in our travels ... ;)
  20. Obviously the reason is that returning a non-mutable (const) reference to the null-terminator is harmless, whereas returning a mutable (non-const) reference is anything but. Whether this inconsistency is worth the modest increase in non-mutable flexibility is a debate outside the scope of this article.
  21. Boost is an open-source organisation whose focus is the development of libraries that integrate with the C++ Standard Library, and is located at http://boost.org/. It has thousands of members, including many of the top names in the C++ software community.
  22. STLSoft is an open-source organisation whose focus is the development of robust, lightweight, simple-to-use, cross-platform STL-compatible software, and is located at http://stlsoft.org/. It has fewer members than Boost.
  23. Stringing Things Along, Kevlin Henney, Application Development Advisor, Volume 6 Number 6, July/August 2002.
  24. Open-RJ is an open-source, platform-independent structured file reader library for the Record JAR format. It's available from http://openrj.org/.
© 2004-2006 Matthew Wilson.

About the Author

Matthew is a development consultant for Synesis Software, and creator of the STLSoft libraries. He is author of Imperfect C++ (Addison-Wesley, 2004), and is currently working on volume 1 of his modicum opus, Extended STL, to be published in 2006. Matthew can be contacted via http://imperfectcplusplus.com/.

