For social programming to work, specifications must in theory be strictly adhered to by both providers and consumers. But some systems seem to work in practice with loose adherence to specifications, systems in which sloppy providers and forgiving consumers are the norm. How important is strict adherence to contracts?
Today I had a 1:00 PM appointment with Elliotte Rusty Harold, author of many books on Java and XML and the force behind the websites Cafe au Lait and Cafe con Leche. Rusty told me to meet him at the bookstore at the Software Developer (SD) conference. I showed up at the San Jose Convention Center shortly before 1:00, inwardly admiring myself for being so prompt and dependable. Unfortunately, SD was being held at the Santa Clara Convention Center. Luckily, Rusty was still waiting for me at 1:15 when I finally arrived at the correct convention center.
After lunch, Rusty and I sat down for a 90 minute interview, which I'll publish on Artima.com within the next few months. Rusty recently released XOM, which is to a great extent JDOM refactored. In the interview, Rusty and I talked about API design in general, as well as particulars of XML processing APIs, especially JDOM and XOM.
The very first time I interviewed James Gosling back in 1999, I asked him how the rise of the network would change software development:
James Gosling: I think the biggest difference is that you can't just sit alone in a room and build stuff, because the things you're building interact with everything out there. You can't just sit alone and do whatever you want.
Bill Venners: And why is that?
James Gosling: Because you're trying to interact with other things, you have to know what the other things do. If there are multiple people doing similar kinds of things, they have to have some kind of an agreement on how these things should work. If you're designing an electrical power delivery mechanism, for example, you have to design a wall socket. And everybody has to use the same wall socket; otherwise those toasters won't be able to plug in to you.
It becomes an environment where people have to be much more socially involved. It really is a community thing.
For years I have assumed that "social programming" of the kind envisioned by James Gosling would require strict adherence to contracts defined in thorough specifications. Without strict adherence to contracts clearly delineated in specifications, how could a distributed system being put together by lots of different parties ever work together?
I imagined such systems would work like this: People define specifications of APIs, data structures (such as XML schemas), and protocols. People use those specifications to guide them in 1) creating software that provides the specified services or data structures, and 2) creating software that uses or consumes the APIs or data structures. People bring providers and consumers together and everything works, because everyone adhered strictly to the agreed upon specifications.
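In code terms, that model might be sketched like this, with a Java interface standing in for the published specification, one party implementing it, and another consuming it. All the names here are invented for illustration:

```java
public class SocialProgramming {
    // The "specification": an agreed-upon contract both sides code against.
    interface QuoteService {
        String quoteOfTheDay();
    }

    // A provider: software that implements the specified service.
    static class SimpleQuotes implements QuoteService {
        public String quoteOfTheDay() {
            return "Less is more.";
        }
    }

    // A consumer: software written only against the specification,
    // never against any particular provider.
    static void display(QuoteService service) {
        System.out.println("Today: " + service.quoteOfTheDay());
    }

    public static void main(String[] args) {
        // The parties are brought together, and everything works,
        // because both adhered to the agreed-upon contract.
        display(new SimpleQuotes());
    }
}
```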
Attempting to Corner Rusty
One thing I had heard XML enthusiasts, including Rusty Harold, highlight as an advantage of XML is that because XML elements are individually tagged, invalid documents can often still be processed. This attitude has long grated against my strict-contracts personality. I figured this interview would be a good opportunity to try and convince Rusty and, via the interview, other XML enthusiasts that invalid documents should be rejected, not processed.
But Rusty threw me off balance with his first example: RSS. He suggested that a huge percentage of the documents that purport to adhere to one of the four RSS standards actually don't. This threw me because I am planning to very soon write an application that consumes RSS feeds. I realized that if 99% of the RSS feeds on the Internet turn out in practice to be valid, then it makes sense for my application to reject invalid RSS documents. But if only 50% of the RSS feeds are valid, then I should surely try and handle the invalid documents as best I can. But then, anyone writing applications that consume RSS feeds would make the same decision. For the most part, RSS consumers would all deal with invalid RSS documents. And if that's the case, what is the incentive for RSS producers to create valid documents?
A good example of that kind of sloppy services/forgiving clients system is websites and browsers, in which the point of agreement is the HTML specification. In practice, a great percentage of web pages on the Internet have HTML errors in them. Because of that, browsers try hard to figure out what the intent of the HTML author was. The browsers are forgiving of those errors. As a result, websites don't really need to create valid HTML documents. And they don't. In fact, I probably don't myself. The way I verify the web pages here at Artima.com is not by running them through an HTML verifier, but by looking at them in various browsers on various platforms.
A good example from the other end of the strict adherence spectrum is the Java class file. The JVM specification includes a chapter that formally and thoroughly specifies the format of the Java class file. It also says that JVMs must reject malformed (invalid) class files. JVMs are not forgiving of errors in class files. As a result, the producers of Java class files are very careful to only produce valid well-formed Java class files.
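That strictness is easy to observe from Java itself: defineClass() must throw if the bytes it is handed are not a valid class file. A minimal sketch, with invented class names and a deliberately garbage byte array:

```java
public class StrictClassFile {
    // Expose the protected defineClass() so we can feed the JVM raw bytes.
    static class ByteLoader extends ClassLoader {
        Class<?> load(byte[] bytes) {
            return defineClass(null, bytes, 0, bytes.length);
        }
    }

    public static void main(String[] args) {
        // Starts like a class file (0xCAFE...) but is truncated garbage.
        byte[] junk = {(byte) 0xCA, (byte) 0xFE, 0x00, 0x01};
        try {
            new ByteLoader().load(junk);
            System.out.println("accepted");
        } catch (ClassFormatError e) {
            // The JVM spec requires rejection of malformed class files;
            // there is no "best effort" interpretation of these bytes.
            System.out.println("rejected: " + e.getClass().getSimpleName());
        }
    }
}
```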
My intuition tells me that for social programming to work, specifications must be strictly adhered to by both providers and consumers, like the Java class file. But then I see some systems that seem to work in practice with loose adherence to specifications, systems in which sloppy providers and forgiving consumers are the norm. Rusty and I explored this topic at the end of our interview, but we didn't really nail it down. Here are some questions I'd enjoy hearing your opinion on:
1. What is it that makes some systems, like HTML and maybe RSS, become sloppy provider / forgiving consumer kinds of systems? What allows other systems, such as the Java class file format, to maintain strict adherence to the specification by all parties?
2. How important is strict adherence to specifications in social programming systems? In what kinds of situations is strict adherence important? When is strict adherence not so important?
3. To what extent is the level of adherence to specifications a manifestation of the "culture" of each particular social programming system?
That's quite funny: your article started with an illustration of a "sloppy" specification: you ended up at the wrong convention center. Since the consequences of agreeing to an under-specified condition (i.e., to meet at the SD bookstore) were small, all turned out fine. If Rusty had arranged to meet 50 people at the bookstore instead of just you, I'm sure he would have sent them a map (a stricter spec).
That illustrates my point: If the consequences of misunderstanding are small (or relatively so), sloppiness is OK. If the consequences are big, you want to be as strict as possible.
I disagree with the notion that a reason XML is attractive is because it's forgiving. Validating XML parsers must ensure that a document follows rather strict specifications. With XML Schema, those specifications can be very strict, indeed. A parser should, and must, reject every document not conformant to that specification.
Of course, in very simple situations, you may not want to validate a document. For an RSS feed, the worst that can happen is that some messages won't show up on a Web page. By contrast, I recently wrote a system that parses and analyzes consumer credit reports in XML format. If you miss whether a consumer filed for 0 or 1, or 2 bankruptcies, that's a big deal with big consequences. So the providers of those documents very strictly outline the format of their reports, in specs that are several hundred pages long.
I also disagree with the sloppy HTML argument. The reason you'd want to test your HTML pages by looking at them in various Web browsers is not to ensure your pages' adherence to strict HTML, but to ensure that your pages look good in different browsers. Since the actual display of HTML is not specified by the HTML specs, browsers display even valid HTML differently. Apart from that, it is still a good idea to use an HTML-checker.
And there are Web pages so seriously broken that many browsers miss important details. Browsers can make amends for bad HTML only at the peril of missing some important detail of a page. If that missing detail happens to be a button with the text, "Click here to claim your $50 mil," does that browser then serve or disserve the user? If a Java VM sort of glossed over some exception handling, or perhaps a couple of methods, due to an imprecise bytecode or class file format, a running program might reach the conclusion to invoke the selfDestruct() method instead of launch(), if that program ran inside a missile's on-board computer.
So, my take is that when the consequences of ambiguity, and forgiveness, are manageable, it's OK to be sloppy. You want to invest in exactitude when the price of ambiguity would outweigh that investment.
I believe the sloppiness sometimes creeps in for purely non-technical reasons. Technology sometimes moves too fast, making it difficult to nail down proper standards. Companies are in a hurry to come out with products, so they simply accept the latest (but sometimes sloppy) specifications and start work on them. Also, at times a client has huge investments in a system that is sloppy. As a technology service provider, there may be no option other than to work with the client's sloppy system. Creating a strict system requires a lot of analysis and, as a result, greater costs. Of course the benefit in the longer run will justify the additional initial costs, but it is often difficult to convince clients who want something done within a meagre budget. However, I'm sure it is possible to have stricter systems. Perhaps it is also a function of the developer's discipline and foresight. Maybe we lack the rigorous standards that other engineering disciplines have.
> I disagree with the notion that a reason XML is attractive is because
> it's forgiving. Validating XML parsers must ensure that a document
> follows rather strict specifications. With XML Schema, those
> specifications can be very strict, indeed. A parser should, and must,
> reject every document not conformant to that specification.
>
> Of course, in very simple situations, you may not want to validate a
> document. For an RSS feed, the worst that can happen is that some
> messages won't show up on a Web page. By contrast, I recently wrote a
> system that parses and analyzes consumer credit reports in XML format.
> If you miss whether a consumer filed for 0 or 1, or 2 bankruptcies,
> that's a big deal with big consequences. So the providers of those
> documents very strictly outline the format of their reports, in specs
> that are several hundred pages long.

Yes, the argument is that even if there's a schema, you don't use a validating parser that rejects invalid documents. You just parse the document into a tree. If the document is well-formed XML, it gets accepted. Then your code processes the document as best it can, potentially ignoring unrecognized elements, forgiving out-of-order elements, and so on. The consumer of RSS feeds that I soon plan to write will likely operate that way, despite RSS being very strictly defined in specifications.
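A sketch of that forgiving style, using Java's bundled DOM parser: accept anything well-formed, pull out the elements you recognize, and ignore the rest. The feed string here is invented and deliberately invalid RSS:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;

public class ForgivingFeed {
    // Take what we understand (the titles); skip everything else silently.
    static List<String> titles(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList nodes = doc.getElementsByTagName("title");
        List<String> result = new ArrayList<String>();
        for (int i = 0; i < nodes.getLength(); i++) {
            result.add(nodes.item(i).getTextContent());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Well-formed but invalid RSS: a made-up <color> element, and a
        // <link> appearing before <title>. A validating parser would
        // reject this; a tree parser accepts it, and we shrug and extract.
        String feed =
            "<rss version='0.92'><channel>" +
            "<item><color>mauve</color><title>First post</title></item>" +
            "<item><link>http://example.com/</link><title>Second post</title></item>" +
            "</channel></rss>";
        for (String title : titles(feed)) {
            System.out.println(title);
        }
    }
}
```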
> I also disagree with the sloppy HTML argument. The reason you'd want
> to test your HTML pages by looking at them in various Web browsers is
> not to ensure your pages' adherence to strict HTML, but to ensure that
> your pages look good in different browsers. Since the actual display
> of HTML is not specified by the HTML specs, browsers display even
> valid HTML differently. Apart from that, it is still a good idea to
> use an HTML-checker.

My point is that because browsers are forgiving, I have little incentive to produce perfect HTML. So long as it looks OK in the browser, I'm OK. I do find some HTML errors by looking at the web page in a browser, though. If I forget a closing </i>, for example, the rest of the page is in italics. That indicates an HTML problem as well as a page appearance problem.
About two years ago I did incorporate an HTML lint of sorts into my build, but I ended up never using it. What I really need is some kind of validator that kills the build when the HTML is bad, and to run it all the time as part of the build.
I believe a lot of it has to do with the consequences of failure or of not following the contract. Let's take a simple analogy like obeying the law, since in a sense obeying the construct of a social programming contract is the same kind of thing.
As far as obeying the law goes, I would guess that 85% of the people right now on an interstate anywhere in this country are speeding. This is because of two reasons.
1) Everybody else is, so in order to maintain some semblance of order and consistency, you are forced to stay with the pack. Only radical deviation from the norm (going something like 145 mph) constitutes dangerous behavior.
2) The penalty for speeding is something between a slap on the wrist and a fine. It's not too bad unless you are speeding to such proportions that you are radically deviating from the norm and it can be categorized as something like reckless endangerment.
A web page to me is similar to this. In most cases, the cost of not obeying the contract is minimal, so it can be loosely interpreted, and as long as you are doing the same things everybody else is, then you're not hurting anybody. The worst thing that happens if a simple HTML document doesn't show is you can't see it. Nothing life threatening happens.
Now let's go to the other end of the obeying the law spectrum and hazard a guess that only a very small percentage of the citizens out there right now are mass murderers. Being a mass murderer, while also breaking the law, differs from speeding in that
1) Mass murdering is not harmless.
2) The penalty for mass murder is much stiffer than for speeding.
In the same way, as Frank pointed out, if something like an execution environment decides to execute the wrong code, then you could have a catastrophe on your hands.
Social anything deals with a whole realm of constructs beyond the simple model presented by an API. Does the same intuition that tells you specifications must be followed to the letter for social programming to work also tell you that traffic patterns on your favorite interstate would become more efficient if every person not adhering to the standard (speeding) were pulled over and fined? Or, to more closely mirror the "if you don't follow the contract you get rejected" mode, should every car be wired to explode once it exceeds the speed limit? You didn't follow the social contract. REJECTED!! Granted, in some cases this is warranted, but in most it would be excessive.
I think having a very large group of people follow a rigidly defined set of programming constructs would be great, but in most cases, like in HTML, if people went through the effort of doing so because they had to, a lot less information would be out there on the web because people either got frustrated with the details or the whole process just got a lot slower.
Where you can afford to, you need to be flexible. Being flexible constitutes what most people who write software for a living would call sloppy and forgiving. I tend to use stronger terms ;-) At the same time, most of the people that are putting together one shot items or just tinker around with this stuff on the side don't really have the need or desire to follow every last edict about every last construct out there. It just has to be good enough. Like the HTML you put together for these pages. It probably isn't all valid, but it all displays, and that's good enough.
Well, this certainly went on for a lot longer than I intended. I can't even tell if I made my point. To put it in one sentence or less: rigid is nice, but it'll never happen in the real world.
I think the main reason why some technologies tend to drift into forgiving sloppy usage is that they are badly designed. And by badly designed I mean:
* It takes immense effort to understand and implement the specification correctly. What "immense effort" is, however, depends of course on the user group that the spec targets. This can best be prevented by the "simple things should be easy" principle. It should make a difference whether a specification targets a few large companies (as in the case of the JVM class file format) or masses of poorly trained web designers.
* You have to specify things that you are not interested in and that are just not semantically necessary to express what you want. For example, J2EE deployment descriptors force me to write <security-constraint> before <env-entry>. Why is that? The deployment descriptors are declarative. There is no natural order so why does the specification enforce an order? I'm sure there are servlet engines out there that ignore wrongly ordered deployment descriptor elements, and rightly so.
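To make the ordering complaint concrete, here is a hypothetical web.xml fragment (the entry names and values are invented). Under the servlet 2.3 DTD, the children of <web-app> must appear in a fixed order, so this is valid, while swapping the two top-level elements, which changes nothing semantically, makes the document invalid:

```xml
<!-- Hypothetical fragment for illustration only. -->
<web-app>
  <!-- Valid: security-constraint precedes env-entry, as the DTD demands. -->
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>admin</web-resource-name>
      <url-pattern>/admin/*</url-pattern>
    </web-resource-collection>
  </security-constraint>
  <env-entry>
    <env-entry-name>maxRetries</env-entry-name>
    <env-entry-value>3</env-entry-value>
    <env-entry-type>java.lang.Integer</env-entry-type>
  </env-entry>
  <!-- Listing env-entry first would be rejected by a strict parser,
       even though the declarations would mean exactly the same thing. -->
</web-app>
```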
> I think having a very large group of people follow a rigidly defined
> set of programming constructs would be great, but in most cases, like
> in HTML, if people went through the effort of doing so because they
> had to, a lot less information would be out there on the web because
> people either got frustrated with the details or the whole process
> just got a lot slower.

But there's a tradeoff in being forgiving. Sure, you enable people to be sloppy. Allowing people to be sloppy saves them time, and that is arguably a benefit. The cost is that by being forgiving, you are actually moving the specification. In effect, the HTML spec is no longer a written-down document; it is "whatever works in IE on Windows, plus perhaps a few other browsers on a few other platforms." But that isn't a well-defined specification. Things work differently on different versions of IE, on different platforms, and among other browsers.
So in practice, the more forgiving the HTML community, the more the de-facto specification becomes fuzzy. That makes it harder to write software that consumes HTML. Because people are writing mediocre HTML, if you're consuming it, you have to deal with HTML errors. You have to guess intent. And you may guess differently from the next programmer of an HTML consumer. Which means that some HTML documents won't work in some HTML consumers.
So one cost is that writing an HTML consumer is more difficult. The other, which is the one I'm concerned about most, is that some HTML won't work in some HTML consumers. Some percentage of the system won't work. This actually affects and annoys the end user. I fairly regularly encounter web pages that don't work, and it is frustrating. To me, the promise of a culture of strict compliance is that everything works as expected with everything. Every Java program runs in every JVM. For the most part, that's true.
> Where you can afford to, you need to be flexible. Being flexible
> constitutes what most people who write software for a living would
> call sloppy and forgiving. I tend to use stronger terms ;-) At the
> same time, most of the people that are putting together one shot items
> or just tinker around with this stuff on the side don't really have
> the need or desire to follow every last edict about every last
> construct out there. It just has to be good enough. Like the HTML you
> put together for these pages. It probably isn't all valid, but it all
> displays, and that's good enough.
My point is that "it all displays" is true only to an extent in a sloppy but forgiving system. Some web pages don't display in some browsers. Some DVDs don't play in some DVD players. Some RSS feeds won't be parsed correctly by some news aggregators. If every place HTML landed required valid HTML, then web site designers like myself would have plenty of incentive to use whatever tools necessary to make sure we only produce valid HTML. My intuition is this: a culture of strictness helps reduce the percentage of mismatches between producers and consumers of standardized information artifacts.
I do see your point about being "good enough." The number of pages that don't work is small enough to make the web worthwhile for most people. Perhaps social software systems find the right level of strictness naturally, but I'm wondering if there are things that the specification definers can do to help promote strictness.
This article claims that not only are many RSS feeds not valid, meaning they are well-formed XML but don't conform to their advertised schema, but that 10% of RSS feeds on the internet aren't even well-formed XML. Sheesh.
In this article, the author talks about the social stuff I was asking Rusty about. They mention that there are competitive pressures on consumers of RSS feeds, just as there were competitive pressures on HTML browser vendors, to handle badly formed content. Once content consumers are forgiving, content providers have no incentive to provide well-formed or valid content.
One thing I know Sun did to help keep that from happening in the Java world was to require strict conformance in the trademark licensing. You can't call it Java unless it generates valid class files, among other things. But even if one company owned the name "RSS" as a trademark, the same approach wouldn't work. Am I going to get each hand-edited RSS file approved and blessed by that company before releasing it? No. I just won't call it RSS.
The success of HTML might lead one to believe that it is successful because of its forgiving clients. In fact, it is successful in spite of them. Being open and quasi-human-readable helped it quite a bit. The strengths of TCP/IP and HTTP helped, too. http://webservices.xml.com/pub/a/ws/2002/02/06/rest.html
A strict system that is designed without evolution in mind is harder to evolve than a forgiving one. A strict system that is designed with evolution in mind is easier to evolve than either.
> because XML elements are individually tagged, invalid documents can
> often still be processed. This attitude has long grated against my
> strict-contracts personality. I figured this interview would be a good
> opportunity to try and convince Rusty and, via the interview, other
> XML enthusiasts that invalid documents should be rejected, not
> processed.
The "strict contracts" approach makes an awful lot of sense for deep infrastructure. You don't want to process invalid class files that could contain a virus, or pass on forged IP packets that could be part of a DDOS attack. But it doesn't work so well for application-level code or data, not for technical reasons but for mainly "social" reasons.
The classic example of a "waterfall model" software project is too well-known to belabor: by the time a formal "contract" can be specified and delivered, the chances are extremely high that it no longer describes the customer's real needs (assuming it ever did). Formal specs, be they for software or for data, are hard to produce, and can't easily adapt to changes in technology, business, and the social environment in which both operate.
So, "XML schemas as contracts to be enforced" are great if they are issued by some recognized authority, if they reflect the business needs of the parties, and if they "work" technologically. It turns out that the intersection of these conditions is extremely rare. RSS is indeed a good illustration. First, all the "authorities" issuing the various flavors of RSS are self-appointed. RSS is only a particularly, uhh, 'colorful' example (if you read the things the "authorities" say about one another!) -- virtually all XML specs are issued by self-appointed "authorities." Even the W3C has no actual standing as a "standards" organization. It gets its credibility from people believing it has credibility.
Second, the actual needs of the producers and consumers of many specs, especially RSS, are rapidly evolving. "Liberal" systems such as RSS and HTML can accommodate change more easily than those that enforce contracts devised in the early days of the Web.
Finally, the standard has to work technologically. The W3C XML Schema Definition Language illustrates the challenges here. It tries to cover a wide variety of needs, but ends up being so complex that essentially no vendor implements all of it in a way that fully interoperates with other implementations. That will presumably be fixed sooner or later (by some combination of fixes to the spec and to the implementations), but in the meantime, if you reject an XML document/message as invalid, you run a significant risk that your validation software is in error. (Or perhaps you are in some limbo where the experts disagree whether a particular document is a valid instance of a schema or not... this happens with disturbing regularity when one "pushes the envelope" with XML specs.) Conversely, for all their complexity, XML schema languages are not capable of expressing many useful constraints. There is no way, for example, to "validate" that the value in an XML element is a prime number, or that it is a valid part number for which you have inventory, without additional non-XML processing.
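A sketch of that "additional non-XML processing": the schema accepts the document, then ordinary code enforces the rule the schema language cannot express. The class, the prime-number rule, and the value below are invented for illustration:

```java
public class BeyondSchema {
    // The constraint no XML schema language can state: "must be prime."
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Stands in for a number pulled from an already schema-validated
        // document: a perfectly valid integer, but 91 = 7 x 13.
        int value = 91;
        if (!isPrime(value)) {
            System.out.println("schema-valid, rejected by application rule: " + value);
        }
    }
}
```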
And in any event, rejecting a legitimate business order or invoice because of an obscure formatting error is not likely to be conducive to continuing relationships at the business level! That's the ultimate reason why "sloppy and forgiving" tends to dominate -- people pay us nerds to make life easier for them, they don't want to change the way they do things to accommodate "brittle" technologies .... and XML schema validation is a brittle solution to a technological problem, not a robust solution to a human problem.
> And in any event, rejecting a legitimate business order or invoice
> because of an obscure formatting error is not likely to be conducive
> to continuing relationships at the business level!

Interpreting an obscure formatting error in a legitimate business order in the wrong way, such that fewer widgets are ordered, or are delivered a week late, is also not conducive to business relationships. When you get a malformed or invalid document, you have to guess what the intent was. The way you guess is not defined in a specification, so different parties will guess differently. It's like the election folks in Florida staring at dangling chad and trying to figure out what the voter's intent was. You don't get a predictable system.
> That's the ultimate reason why "sloppy and forgiving" tends to
> dominate -- people pay us nerds to make life easier for them, they
> don't want to change the way they do things to accommodate "brittle"
> technologies .... and XML schema validation is a brittle solution to a
> technological problem, not a robust solution to a human problem.
I agree that people pay nerds to make life easier for them, but part of making life easier for normal people is giving them reliable, predictable systems. Normal people aren't creating malformed, invalid RSS documents. The software they use is creating the malformed RSS. And that software is written by nerds, who for many reasons are producing a lot of invalid RSS. This means that it is easier for nerds to write software that creates RSS feeds, harder for nerds to write software that consumes RSS feeds, and somewhat unpredictable whether a normal non-nerd person's software-generated RSS feed is going to be consumed correctly in any particular RSS client. That's another kind of brittle. Some malformed documents will be rejected by some strict clients and misinterpreted by some forgiving clients, even if they work correctly in most forgiving clients.
My intuition is that the only way to get RSS producers to consistently generate correct RSS is for all the consumers to be strict. I myself hand write the software that produces RSS feeds at this site. Because I'm anal-retentive about contracts, I always check them in an RSS validator. I use this one:
Tonight when I registered my personal weblog on javablogs.com, I discovered that my RSS feed program had a bug in it. The link element for the entire feed was accidentally hard-coded to the Artima homepage, not the individual weblog page. Oddly enough, the RDF about attribute was correct. That means that the about attribute and the link attribute were different. That should make the thing invalid in theory. The validator I use has complained about that kind of mismatch before, but it didn't in this case. It was an easy bug to fix, but for a week my RSS feed was sloppy. In the next few weeks I plan to write some software that consumes RSS feeds, and I am going to be forgiving, because to be competitive I have to be forgiving.
Once a social programming system becomes sloppy and forgiving, writers of consumers or clients face a lot of pressure to write forgiving software. It becomes a self-sustaining system. You can never go back to being strict. What that gives end users is a system that works some of the time, is predictable to some extent, is reliable to some extent.
Isn't all this a question of productivity and efficiency? Depending on the specific environment where a standard is used, there might be an optimal level of tolerance towards sloppy usage. For example, when I write a letter and I know the correct name, post code, and house number but mistype the flat number, I would expect the postman to still deliver my mail, provided there is only one person of that name in that house. It'd be extremely inefficient if all such mail were returned to the sender. Even if I didn't know the number and just left it out, because it would cost me too much time to find out. Or imagine you write a letter to someone in another country and had to learn the postal rules of that country first. These rules are different for each country, and the standards are many pages long. It'd be extremely inefficient if everyone who sends a letter had to learn these standards.
It costs time and money to be correct, and it costs time and money to cope with sloppiness. The question is simply: where is the point at which sloppiness starts to cost more money than correctness?
But I think what's important is that this problem really starts where the standards and specifications are defined. There is a level of detail and strictness that can impede necessary change and provoke sloppiness. There is always this tradeoff between decidability and unambiguity on the one hand and flexibility on the other.
If you don't agree with Jon Postel's rule of thumb ("Be liberal in what you accept, and conservative in what you send") as the key to real-world networked systems, my contribution will be of no interest. The only question in every protocol is who's the sender and who's the receiver, especially in sophisticated protocols, where one physical person running computers can be receiver, then sender.
Real-world, general-purpose networked systems are meaningless if they can't aggregate numerous actors, including the dumbest and most disrespectful ones. The problem is that most networked systems don't provide any way for an intermediary to handle its peers' incorrectness (Google being the exception in this world of dumb intermediaries, since it allows mechanical format transformation and translation for its users' convenience, opening a lot of non-technical issues).
Most specifications reduce operation to a client/server approach, reducing any complex system to a series of client/server interactions. This makes no sense, since it cannot handle the many cases where a service has to be delivered while "the client" and "the server" aren't alive at the very same moment. This is why real-world systems use service providers, acting as intermediaries in a real service, as e-mail does, for example.
Reducing the intermediary's role to that of a relay is seductive for specification-minded people, but it blatantly violates Postel's principle.
So in the end, the absence of a standard implementation of a compliance test leads to competition and eventually the emergence of de-facto standard implementations that are extremely liberal. The vicious cycle continues with producers that deviate from the original spec. The moral of the story is, before releasing a specification, make sure you have a solid compliance suite, otherwise expect the specification to be subverted.