The Demand for Software Quality
Summary: Bertrand Meyer talks with Bill Venners about the increasing importance of software quality, the commercial forces on quality, and the challenges of complexity.
Bill
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
The Demand for Software Quality
October 26, 2003 9:15 PM      
In this interview, Eiffel creator Bertrand Meyer states that "As the use of computers pervades more and more of what society does, the effects of non-quality software just becomes unacceptable."

Read this Artima.com interview with Eiffel creator Bertrand Meyer:

http://www.artima.com/intv/serious.html

What do you think of Bertrand Meyer's comments?
jdnicolet
Posts: 6 / Nickname: jdnicolet / Registered: October 27, 2003 7:58 PM
Re: The Demand for Software Quality
October 28, 2003 1:22 AM      
I recognize Bertrand Meyer's writing style well. I agree with most of his arguments, although he is sometimes a bit too academic. For example, the question of systematic attribute hiding is controversial; Bjarne Stroustrup would disagree on this point.

The most annoying point is the continuous advertising for Eiffel. Looking at its limited adoption, one may at least conclude that most programmers on this planet don't fully agree with Meyer's arguments!

Moreover, there is a fundamental philosophical issue behind a language like Eiffel: the idea that the restrictive paradigm it imposes is good enough for nearly every task. Eiffel is presented as the ACME programming language, and this is certainly not true. I personally prefer a language that gives me more freedom, at the price of deeper architectural thinking about the choices I make.

Let's take an example as illustration. Consider two well-known operating systems: UNIX on one side, and OS/2 (well, not so popular anymore...) on the other. The underlying philosophy of UNIX is that nearly all aspects of the system are accessible, provided you have the appropriate access rights. All system parameters are contained in simple text files you can view and modify at will. But be warned: you can also very easily crash the system if you don't fully master what you're doing. Still, you can do it if you want. No door is closed; there are just warning signs on them, and you're free to open them if that is the right or only thing for you to do.

OS/2, by contrast, was designed with the opposite philosophy. All vital information was concealed in binary files you couldn't even inspect without specialized tools that did not exist at the beginning. The net result was that more than once, after some major malfunction, you were forced to reinstall the whole system from scratch. With UNIX, on the other hand, you can even intervene in the boot process, alter a system table on the fly, and continue booting from another partition elsewhere on the disk. That is quite a dangerous thing to do, but you can if you need to and know what you're doing.

The conclusion is: choose your camp, comrade. Either you have at your disposal a tool that imposes constraints on you but offers some simplicity (like Eiffel or Java), or you have a more complicated tool that offers more possibilities, at the price of a greater need for thinking and for mastering risks (like C++). I personally prefer the latter.
Marco
Posts: 1 / Nickname: marcoabis / Registered: June 25, 2003 0:11 AM
Re: The Demand for Software Quality
October 28, 2003 2:39 AM      
Design by Contract by Example was written by Richard Mitchell and Jim McKim, not by Bertrand :-)
Joost de
Posts: 15 / Nickname: yoozd / Registered: May 15, 2003 4:13 AM
the nineties are over...
October 28, 2003 3:49 AM      
Bertrand Meyer's point about attribute hiding is, IMO, very '90s in that it pertains to the original point of OO: information hiding at the object level. The object's interface is cast in iron, and within it you're free to change things at will.
Nowadays, though, there is a change in perspective, IMO, that sees the need for information hiding more at the level of the component or API: my publicized API is relatively fixed (or closed, in the terminology of OOSC), and any classes not visible through it can be changed at the discretion of the development team. The point is that changing non-publicized classes is relatively low-cost with modern refactoring tools, and inherent to the realities of software development.
The other side of fixedness in Meyer's open/closed principle is openness to reuse of classes in unforeseen ways. In my experience, reuse of publicized classes in unforeseen ways by extending them seldom happens. It is mostly designated interfaces or (abstract) classes that get implemented or extended. The code being reused functions as a framework for the new code, but not in an unforeseen way.
For publicized classes I'd side with Mr. Meyer and say that client code shouldn't, for example, be aware of my change of mind that a given attribute will be calculated instead of just stored.
Ideally a language would make the distinction between Mr. Stroustrup's "add accessors only when needed" and Mr. Meyer's "protect the client code" irrelevant: I believe that in Ruby an attribute is private unless declared accessible for reading and/or writing. By default no accessors are needed, but when they are needed they can be added without the client code noticing.
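
To make that concrete in C++ (a language that comes up repeatedly in this thread), here is a minimal sketch with invented names: the attribute sits behind an accessor, so switching it from a stored field to a computed value later requires no change in client code.

#include <cstddef>
#include <vector>

// Illustrative only: a hypothetical Account class. The balance is
// exposed through an accessor rather than a public data member.
class Account {
public:
    void add(double amount) { transactions_.push_back(amount); }

    // Today the balance is computed on demand; yesterday it might have
    // been a stored field kept up to date by add(). Client code calling
    // account.balance() cannot tell the difference.
    double balance() const {
        double total = 0.0;
        for (std::size_t i = 0; i < transactions_.size(); ++i)
            total += transactions_[i];
        return total;
    }

private:
    std::vector<double> transactions_;
};

That freedom to change the representation behind balance() is exactly what the uniform-access argument for publicized classes is after.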

greetings from the Netherlands,
Joost de Vries

ps: that OS/2-vs-UNIX is a nice analogy, Jean-Daniel!

pps the term publicized I got from Martin Fowler.
Alexander
Posts: 4 / Nickname: ajeru / Registered: March 27, 2003 10:58 PM
Re: The Demand for Software Quality
October 28, 2003 4:50 AM      
I think there is a lot of truth in what Paul Graham says about popular languages, which is in stark contrast to Bertrand Meyer's view on languages suitable for complex systems:

"[...]if you want to make a language that is used for big systems, you have to make it good for writing throwaway programs, because that's where big systems come from."
http://www.paulgraham.com/popular.html

I'm not sure if this is always the case, but certainly more often than we expect.
Joshua
Posts: 2 / Nickname: jesmith / Registered: June 30, 2003 10:39 PM
Re: The Demand for Software Quality
October 28, 2003 9:37 AM      
It would be quite something to hear Paul Graham and Bertrand Meyer debate the power vs protection philosophies of language design...
Bill
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Re: The Demand for Software Quality
October 28, 2003 10:39 AM      
> Design by Contract by Example was written by
> Richard Mitchell and Jim McKim, not by Bertrand :-)

Oops. Thanks for letting me know. I dropped the link; I'll add it back in a later installment, when Bertrand actually goes into Design by Contract in depth (with the proper author attribution).
Bill
Posts: 409 / Nickname: bv / Registered: January 17, 2002 4:28 PM
Re: the nineties are over...
October 28, 2003 10:49 AM      
> pps the term publicized I got from Martin Fowler.

I think Martin uses the term Published, not publicized:

http://www.artima.com/intv/principles3.html
Geoff
Posts: 6 / Nickname: geoffs / Registered: April 24, 2003 6:15 AM
Re: The Demand for Software Quality
October 28, 2003 11:32 AM      
> The conclusion is: choose your camp, comrade. Either you
> have at your disposal a tool that imposes constraints on
> you but offers some simplicity (like Eiffel or Java), or
> you have a more complicated tool that offers more
> possibilities, at the price of a greater need for thinking
> and for mastering risks (like C++). I personally prefer
> the latter.

I guess that misses Meyer's point rather thoroughly.

For example, from p. 3:
I think we build in software some of the most complex
artifacts that have ever been envisioned by humankind,
and in some cases they just overwhelm us. The only way
we can build really big and satisfactory systems is to
put a hold on complexity, to maintain a grasp on
complexity.

Basically, Meyer is claiming that the choice isn't there anymore. If we choose a tool that "offers more possibilities, at the price of a greater need for thinking and for mastering risks," then - despite our best intentions, skill, and "mastery" - we will produce systems with defects.

The point behind information hiding, DBC, component design, "published/publicized" interfaces, dependency inversion, dependency management, and other techniques is to allow the power of "divide and conquer" to be brought to bear on a large problem. If we design systems where every class/component depends on the *implementation* of an unbounded number of other classes or components, then no matter how much we divide, we can never conquer. For example, allow one circular dependency into a C++ system and watch the build time go through the roof for almost any small change.

Once we start designing our systems with an eye to controlling interactions and dependencies, we can start to reduce the complexity of each individual "component" to the point that it can be understood (and tested) in isolation. Then, the complexity of the larger system is limited to the intrinsic complexity of the interactions between components.

If the number of components is "m" and the number of interactions is "n", then the complexity of the fully-coupled system is m*n; the complexity of the fully decoupled system is m+n. For "m" and "n" that are greater than 2, the difference is dramatic; when "n" and "m" both get into the thousands, the difference is between manageable and "overwhelming" (with both around a thousand, that is on the order of a million versus a couple of thousand).

In those situations, the choice of a "tool that imposes [some] constraints" isn't a deficiency, it's an aid. Those "constraints" are exactly the language features that the developer uses to express the independence and inter-dependence of the components in the system.
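
As one small, hypothetical illustration of what such a "constraint" buys you, here is a C++ sketch (all names invented for the example) in which client code depends only on an abstract interface; the concrete implementation can change or be swapped without the client's translation unit even being recompiled.

#include <cstdio>

// logger.h -- the stable, published interface.
class Logger {
public:
    virtual ~Logger() {}
    virtual void write(const char* message) = 0;
};

// client.cpp -- depends only on the abstract Logger above; it never
// sees the implementation's header, so implementation changes do not
// ripple into a rebuild of this code.
void run_job(Logger& log) {
    log.write("job started");
    // ... the actual work would go here ...
    log.write("job finished");
}

// file_logger.cpp -- one concrete implementation, free to change.
class FileLogger : public Logger {
public:
    explicit FileLogger(const char* path) : file_(std::fopen(path, "a")) {}
    ~FileLogger() { if (file_) std::fclose(file_); }
    void write(const char* message) { if (file_) std::fprintf(file_, "%s\n", message); }
private:
    std::FILE* file_;
};

Let one #include of a concrete class sneak in where the interface would do, and both the decoupling and the build times start to erode.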

Cheers,

Geoff S.
Alexander
Posts: 4 / Nickname: ajeru / Registered: March 27, 2003 10:58 PM
Re: The Demand for Software Quality
October 28, 2003 1:50 PM      
> Once we start designing our systems with an eye to
> controlling interactions and dependencies, we can start to
> reduce the complexity of each individual "component" to
> the point that it can be understood (and tested) in
> isolation. Then, the complexity of the larger system is
> limited to the intrinsic complexity of the interactions
> between components.
>
> If the number of components is "m" and the number of
> interactions is "n", then the complexity of the
> fully-coupled system is m*n; the complexity of the fully
> decoupled system is m+n. For "m" and "n" that are greater
> than 2, the difference is dramatic; when "n" and "m" both
> get into the thousands, the difference is between
> manageable and "overwhelming".
>
> In those situations, the choice of a "tool that imposes
> [some] constraints" isn't a deficiency, it's an aid. Those
> "constraints" are exactly the language features that the
> developer uses to express the independence and
> inter-dependence of the components in the system.

I agree with you in principle, but I have a problem with the kinds of solutions that are assumed to be helpful in managing dependencies. There seems to be a general assumption that the more strictly you define interfaces and contracts between components, the better you can handle dependencies.

Very often, no distinction is made between different kinds of dependencies. For example, it is a completely different ball game when a dependency connects two components that are both controlled by the same central authority than when there is no central control. You get two completely different change-management use cases.

A tool that imposes one very strict way of managing dependencies may solve one class of dependency problems while creating lots of new problems for other change and dependency patterns. And that's why I prefer tools that let good software engineers pick and choose what suits the situation best.

In general, the assumption that you either publicize a piece of code or you don't isn't realistic. There is very often a gradual expansion of publicity, and you don't want all interfaces to use the same dependency-management mechanisms at all times. I don't want to be forced into a style of coding that assumes I'm creating the final version of an interface that a large global committee is about to cast in stone for the next five years, when in fact I'm just prototyping the innards of some future in-house application.
Joost de
Posts: 15 / Nickname: yoozd / Registered: May 15, 2003 4:13 AM
Re: the nineties are over...
October 28, 2003 1:57 PM      
Ah yes, that's right; Fowler speaks English not Dutchglish. 'Publiceren'
Geoff
Posts: 6 / Nickname: geoffs / Registered: April 24, 2003 6:15 AM
Re: The Demand for Software Quality
October 29, 2003 6:10 AM      
> > Once we start designing our systems with an eye to
> > controlling interactions and dependencies, we can start to
> > reduce the complexity of each individual "component" to
> > the point that it can be understood [...]
> > [...] a "tool that imposes
> > [some] constraints" isn't a deficiency, it's an aid.
> > Those
> > "constraints" are exactly the language features that the
> > developer uses to express the independence and
> > inter-dependence of the components in the system.
>
> I agree with you in principle but I have a problem with
> the kind of solutions that are assumed to be helpful in
> managing dependencies. There seems to be a general
> assumption that the stricter you define interfaces and
> contracts between components, the better you can handle
> dependencies.
>
> Very often, no distinction is made between different kinds
> of dependencies. [...]
> A tool that imposes one very strict way of managing
> dependencies may solve one class of dependency problems
> while creating lots of new problems for other change and
> dependency patterns. And that's why I prefer tools that let
> good software engineers pick and choose what suits the
> situation best. [...]
> I don't want to be forced into a
> style of coding that assumes I'm creating the final
> version of an interface that a large global committee is
> about to cast in stone for the next 5 years when in fact
> I'm just prototyping the innards of some future in house
> application.

I couldn't agree more. There are (at least) two quite separate regimes of dependency/interaction control. Each requires a different dependency/interaction management style, and each may be best implemented with different tools (aka languages). BTW, I think it's the fallacy of "one size fits all" (Java *or* C++ *or* Perl *or* Eiffel ...) that gets us into many of these discussions.

At one extreme is the "heavyweight" published interface (and I use "published" in the sense that Martin Fowler does in http://martinfowler.com/ieeeSoftware/published.pdf); perhaps a better term is "inter-component interface". In this situation you need to bring to bear the full weight of DBC, information hiding, etc., because this interface needs to stand the test of time (and varying implementations over time). Here you want tools that force you to protect yourself from inadvertent (or malicious) abuse of the intended interface/contract that you have published. This is a two-way street, BTW: the component's writer needs to be prevented from inadvertently changing the promised behavior (i.e., the interface/contract), *and* the component's clients need to be protected both from accessing unpublished features of the component and from getting different behavior from two implementations of the same interface/contract.
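
For readers who don't know Eiffel, here is a rough and much weaker C++ approximation of that two-way contract, using plain assertions and invented names; real Design by Contract also gives you inherited contracts and class invariants, which this sketch does not attempt.

#include <cassert>
#include <cstddef>
#include <vector>

// Published interface: pop_largest() requires a non-empty container
// (the caller's obligation) and promises to remove exactly one element
// (the implementer's obligation).
class Sampler {
public:
    void add(int v) { values_.push_back(v); }
    bool empty() const { return values_.empty(); }

    int pop_largest() {
        assert(!empty());                        // precondition
        std::size_t old_size = values_.size();

        std::size_t best = 0;
        for (std::size_t i = 1; i < values_.size(); ++i)
            if (values_[i] > values_[best]) best = i;
        int result = values_[best];
        values_.erase(values_.begin() + static_cast<std::ptrdiff_t>(best));

        assert(values_.size() == old_size - 1);  // postcondition
        return result;
    }

private:
    std::vector<int> values_;
};

The checks document and enforce the promise in both directions, for the component's writer and for every client.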

The regime at the other end of the spectrum is very "lightweight". In this case the context of the dependencies and interactions is very local (small), so the complexity of even an n*m case isn't too big. The most extreme example is the relationship between private attributes and the code within a single class. A slightly larger example might be a package of tightly coupled classes fronted by a single published Facade (GOF) class. In this situation the implementer should be allowed free rein to manage his/her own internal dependencies. In fact, working behind a tightly controlled published interface/contract opens up the possibility of vastly relaxed intra-component dependency/interaction management (bring on the global variables!). A perfect example is the case where performance optimizations force tightly coupled code (perhaps even tightly coupled to a specific piece of hardware). Protected by a strong published interface/contract, such an implementation can be safely interchanged with other implementations, and all of them can be used by a myriad of clients ("m" clients + "n" component implementations).
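
A hypothetical C++ sketch of that lightweight regime (all names invented): a single published facade fronts a pair of tightly coupled internal classes, which are then free to reach into each other or be reshuffled without any client noticing.

#include <string>

// Unpublished internals: tightly coupled on purpose, and free to change.
namespace detail {

struct Parser  { std::string tokens; };
struct Planner { int steps; };

// The internals poke at each other's data directly; since clients never
// see these types, the coupling stays a local, manageable problem.
inline void plan(const Parser& parser, Planner& planner) {
    planner.steps = static_cast<int>(parser.tokens.size());
}

} // namespace detail

// The published facade: the only type clients are meant to depend on.
class QueryEngine {
public:
    int run(const std::string& query) {
        detail::Parser parser;
        parser.tokens = query;
        detail::Planner planner;
        detail::plan(parser, planner);
        return planner.steps;   // stand-in for a real query result
    }
};

Swap the internals for a hardware-specific, global-variable-ridden implementation tomorrow; as long as QueryEngine::run() keeps its contract, clients are unaffected.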

Cheers,

Geoff S.
Jason
Posts: 1 / Nickname: jasonw / Registered: October 28, 2003 6:33 PM
Re: The Demand for Software Quality
October 30, 2003 10:32 AM      
I've gone around the loop on the accessor-versus-structure trade-offs in my mind many times. I like that Ruby makes accessors for you, and that you're free to change the implementation.

But I'm also realizing that Bjarne's perspective has value in the performance contexts C++ targets. For example, a linked-list container class template might store and maintain a size variable, or it might compute the size each time it is requested. Both approaches make sense in different contexts (does your inner loop modify the list?), and how the value is accessed tells you whether it is stored or computed. This could easily be the difference between code that works and code that breaks, or between linear and quadratic performance.
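
A minimal sketch of that trade-off (hand-rolled toy list, invented names): if the size is computed by walking the list, asking for it once per element turns a linear pass into quadratic work, so whether it is stored or computed really is part of the interface's meaning.

#include <cstddef>

struct Node { int value; Node* next; };

// Toy singly linked list (destructor omitted for brevity).
class IntList {
public:
    IntList() : head_(0), stored_size_(0) {}

    void push_front(int v) {
        Node* n = new Node;
        n->value = v;
        n->next = head_;
        head_ = n;
        ++stored_size_;
    }

    // Computed: O(n) per call -- nothing to maintain, costly to query.
    std::size_t computed_size() const {
        std::size_t n = 0;
        for (Node* p = head_; p != 0; p = p->next) ++n;
        return n;
    }

    // Stored: O(1) per call -- every mutating operation must keep it right.
    std::size_t stored_size() const { return stored_size_; }

    Node* head() const { return head_; }

private:
    Node* head_;
    std::size_t stored_size_;
};

// Asking for the size on every iteration is O(n^2) with computed_size()
// but O(n) with stored_size().
inline void report_progress(const IntList& list) {
    std::size_t done = 0;
    for (Node* p = list.head(); p != 0; p = p->next) {
        ++done;
        std::size_t total = list.computed_size();  // full walk, every time
        (void)total;  // e.g. print "done / total" here
    }
}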

I also think that Bjarne's and Meyer's perspectives are operating at two different scales.

Bjarne is talking about a concrete class, which, in his view, should be fairly small and straightforward. The question for him is: what's the range of values this class member takes? Is it the same as a standard type, or do we need to enforce invariants that limit it?
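
To illustrate that criterion with an invented example: a member whose legal range is the full range of its type can simply be public, while a member with an invariant earns a private representation and a checked mutator.

#include <cassert>

// No invariant beyond "any int is fine": public data, no accessors needed.
struct Point {
    int x;
    int y;
};

// Invariant: 1 <= value <= 12. The member is hidden so that every way
// of changing it goes through the check.
class Month {
public:
    explicit Month(int m) : value_(m) { assert(m >= 1 && m <= 12); }
    int value() const { return value_; }
    void set(int m) { assert(m >= 1 && m <= 12); value_ = m; }
private:
    int value_;
};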

Meyer is talking about components, which are often implemented with a single wrapper class but really shouldn't be. A component should be a bigger thing than a single class, as it represents more than just "a value". A component (to me) is something that maintains a set of responsibilities.

I don't think the two views are in opposition to each other. I think most C++ programmers would use Bjarne's approach on the small scale, and then something like COM on the scale Meyer is talking about. One C++ professional I respect even advocates using a COM-like approach on systems that are published as a unit, not as separate components, because it decouples the programmers on the project from each other.

So the question that remains in my mind is: what does Eiffel offer that is better than what we can do in a more popular language?
Brian
Posts: 3 / Nickname: skybrian / Registered: September 29, 2003 4:48 PM
RSS feed working?
October 30, 2003 11:42 PM      
FYI, this article hasn't appeared on the RSS feed yet.
Geoff
Posts: 6 / Nickname: geoffs / Registered: April 24, 2003 6:15 AM
Re: The Demand for Software Quality
October 31, 2003 8:20 AM      
> I also think that Bjarne's and Meyer's perspectives are
> operating at two different scales. [...]
> Bjarne is talking about a concrete class, which, in his
> view, should be fairly small and straightforward.
> Meyer is talking about components, [...] A
> component (to me) is something that maintains a set of
> responsibilities.
> I don't think the two views are in opposition to
> each other. I think most C++ programmers would use
> Bjarne's approach on the small scale, and then
> something like COM on the scale Meyer is talking about.

Exactly! Well put (better than I could have).

> One C++ professional I respect even advocates using a
> COM-like approach on systems that are published as a unit,
> not as separate components, because it decouples the
> programmers on the project from each other.

I think that's a very good idea in some circumstances. I'm mostly in the emergent-design/agile camp right now, so I'd probably adopt that kind of technique only when it becomes clear that intra-project (or intra-component) "divide-and-conquer" subdivision is helpful.

FWIW, on the pseudo-agile project I'm currently involved with, we regularly revisit the decision to split our project into components (so far we haven't adopted a split, but I'm confident that at some point we will).

> So the question that remains in my mind is: what does
> Eiffel offer that is better than what we can do in a more
> popular language?

I'm personally neutral on Eiffel as a production programming language (I don't even really know the syntax of the language very well). I'm more interested in the concepts that Meyer brings up, and using them as "best practices" in more mainstream programming environments, with an eye toward perhaps including some of the ideas as extensions.

Cheers,

Geoff S.