Frank Thoughts
The Vanishing Middle
by Frank Sommers
March 21, 2005
Summary
Choice leads to quality, but higher standards of quality reduce choice over time. The result is a vanishing middle: software increasingly consolidates into a few large projects, on one hand, and a handful of small, niche projects, on the other. Smaller projects can turn to transparency of their development processes as a way to gain traction.

While incremental improvement is a best practice in almost every industry, software stands out in one respect: Users help not only with feature improvements, but also with quality control.

Agile development methods all center on developing small pieces of well-tested functionality, incorporating that functionality into a release, and then obtaining user feedback on the new release. With each cycle the software improves, since user feedback can reveal problems as well as areas for improvement. The more frequent the cycles, the faster software quality improves.

Software vendors who understand the importance of time-to-market seldom hesitate to use paying customers as an external QA department. Anyone who has used some version of Windows or DOS knows that well. Indeed, many an ISV has failed because it favored a higher-quality initial release over capturing a market opportunity. Being first and just good enough proved more advantageous than being best.

That practice is not limited to commercial software. User-provided bug reports are what make open-source software's quality so high over time, since finding a bug and fixing it can be done by the same person, often at the same time. Smart feedback mechanisms are now standard parts of many open-source and commercial software packages.

The Quality Wedge

This process has worked so well that it now threatens its own existence. Looming shifts in user attitudes are one indication of the impending change, and at the heart of those shifts is choice.

When I started my software company a few years ago, I, too, counted on receiving user feedback from early releases. We didn't charge for those beta releases, so I assumed that users would be happy to download and try new builds, providing us valuable feedback for future releases and, eventually, for the for-pay release. I also hoped that having many users would improve the quality of the product.

I was in for a surprise, though. Instead of providing feedback on a free product, users expected the software to work perfectly from the start. When they encountered bugs or incomplete features, they complained instead of collaborating to improve the product. The feedback I did receive was that users were willing to pay for something that worked exactly as they expected from day one, rather than try to evaluate a free or beta product. That proved an expensive experiment for our company, because we had to provide technical support for a free product - a condition users required before they would even try the software.

One reason for that reception was choice: Users already had experience with products in that domain that had gone through the initial collective debugging experience. Our users were not forced to try a new product, even if ours offered advantages over competitors' offerings. The leverage afforded by choice raised the bar for what users were willing to put up with, even in a trial. Instead of providing feedback, many would-be early adopters wanted a product that had all the bells and whistles, and that just worked as expected on the first try.

If you think mine was an isolated experience, consider Linux. In the early 90s, I would gladly spend many a weekend installing Linux, re-compiling the kernel, and hunting down the specific device drivers that would make my X Window System fly. Today, I can choose from a dozen distros, each of such high quality that a newcomer distro would have to meet very high user expectations from its initial release. No one, I submit, will spend weeks re-configuring their computer because a new distro couldn't figure out what hardware devices were available. Users now have choice.

Similarly, few users would put up with data loss due to a DBMS crash - there are a dozen high-quality, free DBMS projects that do the job so well that a newcomer to the field would have to meet a high quality bar to cultivate a large user base.

That trend suggests that established projects - and products - in a given problem domain will only gain more momentum, and, concomitantly, that newcomers will have a harder time getting a foot in the door. Moreover, because of the collective debugging effect, consolidation in software should produce sharper and more draconian results than in other industries. Large projects will receive a disproportionate amount of quality-control feedback, allowing their owners to improve quality at a rapid rate. Smaller projects will always exist to satisfy niche areas, but the middle ground may be driven out by that quality wedge.

That has already happened in the Linux distro market, where five or six large distros command a huge mind share and a handful of smaller ones hold tiny user bases. The result of those large user bases is that the big distros are of excellent quality, leaving little incentive for users to try anything else. The same is true of the Java Web server market, where Tomcat dominates the field, and of the Java persistence-layer, JVM, and IDE markets. The key projects receive a disproportionately large percentage of user feedback, which, in turn, makes them even better.

Does that mean that it is hopeless to start a new project or product in a domain with established players? Does it imply that new projects will forever be doomed to a marginal, niche existence? Is there really no room for a middle ground?

A Transparent Solution

The traditional solution to that problem is innovation: A smaller project can gain traction, provided it offers a unique, much-demanded feature not available anywhere else. Even then, larger projects will likely swallow those smaller ones. Large software companies routinely make a sport of "innovating" by buying up small competitors with fresh ideas. On the open-source scene, observe how the Apache project has incorporated dozens of smaller open-source projects. For all practical purposes, Apache has become the Wal-Mart of the open-source Web server universe.

A more practical solution may take a cue from other fields. In manufacturing, for instance, a company looking for a supplier cannot afford to simply order a few thousand samples and see how the order works out, and it can seldom afford to send an inspector to a vendor's site to examine a large sample before committing to a purchase. Rather, a manufacturer often ensures that its suppliers follow best practices in making their wares, on the expectation that best practices likely lead to high-quality results. The ISO 9000 standards and Six Sigma are examples of process frameworks that aim to ensure consistently high-quality output.

Likewise, when making a doctor's appointment, we must gain, in advance of our visit, some reasonable confidence that the doctor's efforts will produce good results and do no harm. Again, we often rely on evidence that the doctor follows accepted best practice: Doctors often display diplomas or certificates in their offices as evidence of their familiarity with best practices in their areas.

The common element in these examples is transparency of process: A manufacturer or a doctor is not only willing to let potential customers watch over their shoulder, but actively declares the kind of processes used in their practice. Indeed, we would frown upon visiting a physician who called his office primarily a business, and not a practice. Transparency of process, especially adherence to best practices, is what makes it possible for a smaller manufacturer, an unknown off-shore supplier, or a family physician to get established.

A similar solution may be possible in software. If a software project - commercial or free - publishes the processes it follows in developing its products, educated users may be more inclined to try those products. By openly publishing the steps taken to ensure quality, as well as up-to-date key quality metrics, software vendors can provide buyers and users of their wares with a level of comfort.
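
To make the idea concrete, here is a minimal sketch of what publishing up-to-date quality metrics might look like in practice: a release step that writes a plain-text quality report to ship alongside each build. The class name, metric names, and figures below are hypothetical placeholders, and the report format is my own assumption rather than any established standard.

import java.io.FileWriter;
import java.io.IOException;
import java.time.LocalDate;

// Hypothetical release step that publishes key quality metrics
// as a plain-text report shipped alongside each build.
public class QualityReport {
    public static void main(String[] args) throws IOException {
        // In a real build these figures would come from the test
        // runner and the bug tracker; they are hard-coded here
        // purely for illustration.
        int testsRun = 1482;
        int testsPassed = 1479;
        int openDefects = 23;
        double passRate = 100.0 * testsPassed / testsRun;

        // Write the report next to the release artifacts.
        try (FileWriter out = new FileWriter("QUALITY.txt")) {
            out.write("Quality report, " + LocalDate.now() + "\n");
            out.write("Tests run:    " + testsRun + "\n");
            out.write(String.format("Tests passed: %d (%.1f%%)%n", testsPassed, passRate));
            out.write("Open defects: " + openDefects + "\n");
            out.write("Process: every change code-reviewed; full test suite run before each release\n");
        }
    }
}

Published next to a download link, a report like this tells prospective users where the product stands on the quality scale before they commit to trying it.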

While I still believe that feedback from users is one of the most effective ways to improve software quality, users may be more willing to go along on the collaborative debugging and testing ride if they know where the product stands on the quality scale at any given moment, and what processes are in place to gauge and improve quality. In a highly competitive environment where users have ample choice, transparency of process may become a competitive advantage for smaller projects aiming to grow.


About the Blogger

Frank Sommers is a Senior Editor with Artima Developer. Prior to joining Artima, Frank wrote the Jiniology and Web services columns for JavaWorld. Frank also serves as chief editor of the Web zine ClusterComputing.org, the IEEE Technical Committee on Scalable Computing's newsletter. Prior to that, he edited the Newsletter of the IEEE Task Force on Cluster Computing. Frank is also founder and president of Autospaces, a company dedicated to bringing service-oriented computing to the automotive software market.

Prior to Autospaces, Frank was vice president of technology and chief software architect at a Los Angeles system integration firm. In that capacity, he designed and developed that company's two main products: A financial underwriting system and an insurance claims management expert system. Before assuming that position, he was a research fellow at the Center for Multiethnic and Transnational Studies at the University of Southern California, where he participated in a geographic information systems (GIS) project mapping the ethnic populations of the world and the diverse demography of southern California. Frank's interests include parallel and distributed computing, data management, programming languages, cluster and grid computing, and the theoretical foundations of computation. He is a member of the ACM and IEEE, and the American Musicological Society.

This weblog entry is Copyright © 2005 Frank Sommers. All rights reserved.
