The Artima Developer Community
Weblogs Forum
The Vanishing Middle

6 replies on 1 page. Most recent reply: Mar 30, 2005 8:49 AM by Celia Redmore

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

The Vanishing Middle (View in Weblogs)
Posted: Mar 21, 2005 2:34 PM
Summary
Choice leads to quality, but higher standards of quality reduce choice over time. That leads to the vanishing middle: software projects increasingly consolidate into a few large projects, on one hand, and a few small, niche projects, on the other. Smaller projects can turn to transparency of their development processes as a way to gain traction.
While incremental improvement is best practice in almost every industry, software stands out in one respect: Users not only help with feature improvements, but also with quality control.

Agile development methods all center around developing small pieces of well-tested functionality, incorporating that functionality into a release, and then obtaining user feedback on that new release. With each cycle, the software improves, since user feedback can reveal problems as well as areas for improvements. The more frequent the cycles, the quicker software quality improves.

Software vendors who understand the importance of time-to-market seldom hesitate to use paying customers as an external QA department. Anyone who has used some version of Windows or DOS knows that well. Indeed, many an ISV has fallen by favoring a higher-quality initial release over capturing market opportunities. Being first and just good enough proved more advantageous than being best.

That practice is not limited to commercial software. User-provided bug reports are what make open-source software achieve such high quality over time, since finding a bug and fixing it can be done by the same person, often at the same time. Smart feedback mechanisms are now standard in many open-source and commercial software packages.

The Quality Wedge

This process has worked so well that it now threatens its own existence. Looming changes in user attitudes are one indication of the impending change, and at the heart of those changes is choice.

When I started my software company a few years ago, I, too, counted on receiving user feedback from early releases. We didn't charge for these beta releases, so I assumed that users would be happy to download and try new builds, providing us valuable feedback for future releases, and for the eventual for-pay release. I also hoped that having many users would improve the quality of the product.

I was in for a surprise, though. Instead of providing feedback on a free product, users expected the software to work perfectly from the start. When they encountered bugs or incomplete features, they complained instead of collaborating to improve the product. The feedback I did receive at the time was that users were willing to pay for something that worked exactly as they expected from day one, rather than evaluate a free or beta product. This proved an expensive experiment for our company, because we had to provide technical support for a free product, a precondition for users to even try the software.

One reason for that reception was choice: Users already had experience with products in that problem domain that had gone through the initial collective debugging experience. Our users were not forced to try a new product, even if our product offered advantages over competitors' offerings. The leverage afforded by choice raised the quality bar for what users were willing to put up with, even for a trial. Instead of providing feedback, many would-be early adopters wanted a product that had all the bells and whistles, and that just worked as expected on the first try.

If you think mine was an isolated experience, consider Linux. In the early 90s, I would gladly spend many a weekend installing Linux, re-compiling the kernel, and hunting down the specific device drivers that would make my X Window System fly. Today, I can choose from a dozen distros, each of such high quality that a newcomer distro would have to meet very high user expectations from its initial release. No one, I submit, will spend weeks re-configuring their computer because a new distro couldn't figure out what hardware devices were available. Users now have choice.

Similarly, few users would put up with data loss due to the crash of their DBMS: there are a dozen high-quality, free DBMS projects that do the job so well that a newcomer to the field would have to meet a high quality bar to cultivate a large user base.

That trend suggests that established projects - and products - in a given problem domain will only gain more momentum, and, concomitantly, that newcomers will have a harder time getting a foot in the door. Moreover, because of the collective debugging effect, consolidation in software should produce sharper and more Draconian results than in other industries. Large projects will receive a disproportionate amount of quality-control feedback, allowing their owners to improve quality at a rapid rate. Smaller projects will always exist to satisfy niche areas, but the middle ground may be driven out by that quality wedge.

That has already happened in the Linux distro market where there are five or six large distros with a huge mind share, and a handful of smaller ones with a tiny user base. The result of that large user base is that the large distros are of excellent quality, leaving little incentive for users to try anything else. The same is true for the Java Web server market, where Tomcat all but dominates the field, as is true of the Java persistence layer market, the JVM market, or the IDE market. The key projects receive a disproportionately large percentage of user feedback, which, in turn, will make them even better.

Does that mean that it is hopeless to start a new project or product in a domain where there are already established players? Does it imply that new projects will eternally be doomed to marginal niche existence? Is there really no room for a middle ground?

A Transparent Solution

The traditional solution to that problem is innovation: A smaller project can gain traction, provided that it offers a unique and much demanded feature not available anywhere else. Even then, larger projects will likely swallow those smaller ones. Large software companies routinely make a sport of "innovating" by buying up small competitors with fresh ideas. On the open-source scene, observe how the Apache project has incorporated dozens of smaller open-source projects. For all practical purposes, Apache has become a Wal-Mart of the open-source Web server universe.

A more practical solution may take a cue from other fields. In industry, for instance, when looking for a supplier, a company cannot afford to just order a few thousand samples and see how that order works out. And it can seldom afford to send someone to a vendor's site to inspect a large sample before committing to a purchase. Rather, a manufacturer often ensures that its suppliers follow best practices in making their wares, in the expectation that best practices likely lead to high-quality results. The ISO 9000 standards, or Six Sigma, are examples of processes that aim to ensure consistently high-quality output.

Likewise, when making a doctor's appointment, we must gain some reasonable confidence, in advance of our visit, that the doctor's efforts will produce good results and not harm. Again, we often rely on some evidence that the doctor follows accepted best practice: Doctors often display diplomas or certificates in their offices as evidence of their familiarity with best practices in their areas.

The common element in these examples is transparency of process: A manufacturer or a doctor is not only willing to let potential customers watch over their shoulder, but actively declares the kind of processes they use in their practice. Indeed, we would frown upon visiting a physician who called his office primarily a business, and not a practice. Transparency of process, especially adherence to best practices, is what makes it possible for a smaller manufacturer, an unknown off-shore supplier, or a family physician to get established.

A similar solution may be possible in software. If a software project - commercial or free - publishes the processes it follows in developing its products, educated users may be more inclined to try those products. By openly publishing the steps taken to ensure quality, as well as up-to-date key quality metrics, software vendors can provide buyers and users of their wares with a level of comfort.
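As a sketch of what such openly published quality metrics might look like in practice, consider a small script that turns raw build numbers into a machine-readable dashboard a project could post alongside each release. All names, numbers, and thresholds here are hypothetical illustrations, not anything described in the article:

```python
import json

# Hypothetical raw numbers a build pipeline might collect for one release.
metrics = {
    "tests_run": 1482,
    "tests_failed": 3,
    "line_coverage_pct": 87.4,
    "open_defects": 12,
}

def quality_summary(m):
    """Derive a small, publishable quality dashboard from raw build metrics."""
    pass_rate = 100.0 * (m["tests_run"] - m["tests_failed"]) / m["tests_run"]
    return {
        "pass_rate_pct": round(pass_rate, 2),
        "line_coverage_pct": m["line_coverage_pct"],
        "open_defects": m["open_defects"],
        # Publishing an explicit release bar lets users judge for themselves
        # where the product sits on the quality scale at any given moment.
        "meets_release_bar": pass_rate >= 99.0 and m["line_coverage_pct"] >= 80.0,
    }

print(json.dumps(quality_summary(metrics), indent=2))
```

The specific metrics and the 99%/80% bar are arbitrary; the point is only that the process and its current results are declared openly, rather than kept internal.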

While I still believe that feedback from users is one of the most effective ways to improve software quality, users may be more ready to go along on the collaborative debugging and testing ride if they know where the product is on the quality scale at any given moment, and what processes are in place to gauge and improve quality. In the highly competitive, abundant environment where users have ample choice, transparency of process may become a competitive advantage for smaller projects aiming to grow.


Vincent O'Sullivan

Posts: 724
Nickname: vincent
Registered: Nov, 2002

Re: The Vanishing Middle Posted: Mar 23, 2005 1:55 AM
"We didn't charge for these beta releases, so I assumed that users would be happy to download and try new builds, providing us valuable feedback for future releases, and for the eventually for-pay release."

Let me get this right: The business model was that you released buggy software. Users debugged it for you. You then charged them for the fixed product.

"I was in for a surprise, though."

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

Re: The Vanishing Middle Posted: Mar 23, 2005 10:12 AM
No, the product had incomplete features, but it was not otherwise buggy.

Users wanted to pay for something that had all the features of established products vs. trying a product that worked but without many features. That's because users' time, in that case, was more valuable than trying something new.

I realized that in that situation it would have been better to not release the product even in beta until much later. That goes against the XP/agile intuition to release something early and often. Here, the lesson I learned was to NOT release something early.

In this situation, releasing something early resulted in technical support issues, because users were unable or unwilling to understand that they were dealing with a beta. They had no concept of beta, or in-progress software, unlike many of us developers.

> "We didn't charge for these beta releases, so I assumed
> that users would be happy to download and try new builds,
> providing us valuable feedback for future releases, and
> for the eventually for-pay release."
>
> Let me get this right: The business model was that you
> released buggy software. Users debugged it for you. You
> then charged them for the fixed product.
>
> "I was in for a surprise, though."

David Vydra

Posts: 60
Nickname: dvydra
Registered: Feb, 2004

Re: The Vanishing Middle Posted: Mar 23, 2005 8:04 PM
Frank, I think Agile principles still apply, but in the case of a product entering the market it may be that one has to start with a "killer" feature. Let's say I have technology to make a VoIP phone call sound 50% better than any other IM client out there, and I ship a bare-bones implementation without text chat, redial, etc. Will I be able to acquire market share? I think so. I have just improved the most important feature of the product.

David Vydra

Posts: 60
Nickname: dvydra
Registered: Feb, 2004

Re: The Vanishing Middle Posted: Mar 23, 2005 10:09 PM
I think transparency will become the norm for how companies that have good quality practices market themselves. For example, here in the Bay Area, you can take a tour of the NUMMI car plant ( http://www.nummi.com/tours.html ) and you can see Agitar's Quality Dashboards at http://agitar.com/openquality/openquality.shtml

Frank Sommers

Posts: 2642
Nickname: fsommers
Registered: Jan, 2002

Re: The Vanishing Middle Posted: Mar 24, 2005 3:56 PM
> Frank, I think Agile principles still apply, but in the
> case of a product entering the market it maybe that one
> has to start with a "killer" feature. Lets say I have
> technology to make a VOIP phone call sound 50% better than
> any other IM out there and I ship a bare-bones
> implementation without text chat and redial, etc. Will I
> be able to acquire market share? I think so. I have just
> improved the most important feature of the product.

I agree that finding the killer product angle is key.

But it might be that if the product is not easy enough to use (to set up and configure, for instance), or is handicapped in some other way, even that 50% improvement in the main product feature won't be appreciated.

My observation is that potential users are so busy, and have so many choices, that the pool of those willing to try out early releases is diminishing. So shipping early and often may not be a good idea even in that case. Rather, shipping when the product is ready for prime time is perhaps the right approach.

In addition, user expectations are very important in introducing a new product or project. We need to find good ways of communicating where a given product is on the reliability or maturity scale. The Agitar dashboard is a great example that, hopefully, will become widely imitated.

Celia Redmore

Posts: 21
Nickname: redmore
Registered: Jun, 2003

Re: The Vanishing Middle Posted: Mar 30, 2005 8:49 AM
As Frank Sommers found out the hard way, it’s up to the vendor to make sure that the beta tester is suitable. Anyone who seriously believes the comment below is not a suitable candidate. Beta testers can be more trouble than they’re worth.

>> The business model was that you released buggy software. Users debugged it for you. You then charged them for the fixed product.

On the other hand, beta testing done properly is expensive for the tester. Just because the vendor doesn’t charge for the beta copy doesn’t mean that the tester doesn’t need to expend personnel and other resources to load the software on to a test machine (please, not production), monitor it, carefully note problems and other issues, report them back to the vendor, and then apply patches promptly as they’re sent. Repeat until done.

My experience as a beta tester is that it costs more to beta test - even if you eventually get the production version for free - than it would to buy the product after someone else has tested it. The only advantage to beta testing is that you have input into the features and functions that you wouldn't otherwise have. In that way, you get semi-custom software cheap. Unfortunately, even that may be too expensive for today's lean-and-mean (emphasize mean) shops.

Anyone who expects their beta copy to run as it eventually will in production hasn't bought into this concept of mutually beneficial feedback and should be eliminated as a beta tester.

Copyright © 1996-2019 Artima, Inc. All Rights Reserved. - Privacy Policy - Terms of Use