
Artima Developer Spotlight Forum
Doron Reuveni on uTest's Marketplace of Bugs

Frank Sommers


Posted: Oct 8, 2008 7:53 PM

The range of tools for automated testing—whether unit, integration, acceptance, or load testing—has greatly expanded over the past several years. At the same time, automated tests cannot completely replace humans in uncovering certain types of software bugs.

In this interview with Artima, uTest co-founder Doron Reuveni explains the benefits of having human testers be part of an application's testing cycle. Reuveni's company provides access to an on-demand pool of over ten thousand testers, who bring to the testing process a wide variety of user backgrounds, locales, and client environments:

uTest is a cloud-based marketplace for software testing. We provide developers and development organizations with a full range of testing services via a global community of professional testers.

Although developer testing, such as automated unit, integration, and acceptance testing, is crucial to delivering high-quality applications, our customers often find that software errors first manifest themselves when actual customers start using their applications. Of course, that's the worst possible way to uncover software defects: you test your product precisely so that end users don't have to discover your bugs.

Some organizations maintain an in-house team of testers who, when an application is ready for testing, exercise various functions and, in effect, simulate interaction with the application from a user’s point of view.

We offer a similar benefit, but without having to maintain an in-house staff for that purpose, and without the need for a long-term commitment: When your application is ready, you can have our globally distributed team of testers exercise its functionality and report any bugs they encounter. You pay only for the bugs thus uncovered, or for the test cases that were created in the process of testing the application. So it's a pay-for-performance model.

The requirements that applications must be developed quickly, that they must operate in a variety of deployment environments, and that they must serve a possibly diverse set of users all conspire to compromise application quality. Hiring staff and building an infrastructure to test your application under those circumstances is difficult, especially if you're an emerging small company, or even a mid-size business. Building out the human-resources side of such a testing infrastructure may not be cost-effective. We saw a great need for this sort of testing, especially when the services we provide are delivered on a pay-as-you-go basis.

Currently, we have more than 10,000 testers from around 140 countries. The testers range from novices with about a year of experience to experts with over ten years of testing and QA experience. Over ten percent of our community belongs in that latter category.

They provide testing of applications across multiple platforms, locations, locales, and environments, and their testing is performed in real time. When you're ready to test your application, you can go to our Web site, create an account, define what you want tested and how you want that done, and specify the profile of the tester within our community that you would like to test your application. Testers from our community are invited to the testing project soon after you submit such a request. They associate themselves with your project, and start reporting bugs and feedback in real time.

For companies following an agile methodology, that sort of real-time, immediate feedback loop provides great benefits: For example, you can do a weekly sprint, have your application tested over the weekend, and have all the bugs and feedback prioritized by Monday. Then you can decide what to focus on and what to fix during a stabilization week. You can also do a final testing round just before releasing your application or feature to users.

That way, agile developers can obtain two types of test coverage for their applications: automated testing, where you create test scenarios and test scripts and run them through some sort of automated tool, and functional and exploratory testing, which often involves humans doing part of the testing as well.

Because our testing community consists of members who are skilled in testing, they can also help you write test scenarios based on their experience using your application. Those scripts and test cases can then be incorporated into your automated test execution process.
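As a rough illustration of that hand-off (not something discussed in the interview), a scenario written up by an exploratory tester might be captured as an automated regression test along these lines; the Cart class and the scenario itself are hypothetical stand-ins for your own code:

```python
import unittest

# Hypothetical illustration: a scenario reported by an exploratory tester
# ("adding the same item twice should increase quantity, not duplicate the
# line") captured as an automated regression test. Cart is a stand-in for
# the application code under test.
class Cart:
    def __init__(self):
        self.lines = {}

    def add(self, sku, qty=1):
        self.lines[sku] = self.lines.get(sku, 0) + qty

class CartScenarioTest(unittest.TestCase):
    def test_adding_same_item_twice_merges_lines(self):
        cart = Cart()
        cart.add("SKU-42")
        cart.add("SKU-42")
        self.assertEqual(cart.lines["SKU-42"], 2)
        self.assertEqual(len(cart.lines), 1)

if __name__ == "__main__":
    unittest.main()
```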

Some of our customers provide very specific instructions on how the application should behave, and what points in the application to test. Our testers would then map their testing activities against those specified requirements.

Other customers, by contrast, use our services for a completely exploratory type of testing: Instead of following a strict testing script, our testers in that case do completely exploratory testing of a Web application, for instance, much as end users would. Once they find something that doesn't behave appropriately, or is not acceptable, they flag it.

Another type of service we provide is real-time load and stress testing. You can load and stress-test your application with automated tools, but on top of that, you can have real testers testing the application concurrently, while the application is under load. There are bugs and defects in an application that manifest only when the application is under load, and humans are often very good at pinpointing those issues. Load on an application heavily impacts usability, and testers can report those usability issues.
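As a hedged sketch of what the automated side of such a session could look like (the URL, worker count, and duration below are placeholder values, not uTest parameters), a small script can keep an application under synthetic load while human testers work through it interactively:

```python
import threading
import time
import urllib.request

# Hypothetical sketch: generate background load on an application so that
# human testers can probe it while it is under stress. All values are
# illustrative placeholders.
TARGET_URL = "http://localhost:8080/health"
WORKERS = 20
DURATION_SECONDS = 60

def hammer(stop_at):
    while time.time() < stop_at:
        try:
            urllib.request.urlopen(TARGET_URL, timeout=5).read()
        except Exception as exc:
            # Failures under load are exactly what the human testers cross-check.
            print(f"request failed: {exc}")

if __name__ == "__main__":
    stop_at = time.time() + DURATION_SECONDS
    threads = [threading.Thread(target=hammer, args=(stop_at,)) for _ in range(WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```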

Our service is a marketplace in the sense that when a customer creates a new testing job, we match the task with testers having the required skills. Once you create a testing job request in our system, within twenty-four to forty-eight hours you get anywhere from thirty to fifty testers testing the application.

Any time a customer approves a bug, the customer pays for that bug, and our tester gets compensated for discovering it. So it's really a marketplace of bugs. At the end of the day, though, it's really a marketplace of quality: You get high-quality bug reports, and those, in turn, allow you to improve your application's quality even further.

Our uTest platform provides a complete bug and issue tracking system. When our testers discover an issue, they report it on our platform. We also now provide integration between requirements management systems and the uTest platform.

In addition, we recently integrated our platform with existing bug tracking systems, such as JIRA, Bugzilla, and Trac. That integration is now available, and it means that feedback from our testers will flow into your in-house bug tracking solution, if you use one that we support. You can then assign an issue to a developer, and later bring it back to the issue tracking system for verification.
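The interview doesn't go into how that integration works internally; as a rough sketch, filing a tester-reported bug in JIRA through its standard REST API might look like the following, where the base URL, credentials, and project key are placeholders rather than anything from uTest:

```python
import requests

# Hypothetical sketch of pushing a tester-reported bug into JIRA via its
# standard REST API. The base URL, credentials, and project key are
# placeholders; this is not uTest's actual integration code.
JIRA_BASE = "https://jira.example.com"
AUTH = ("integration-user", "secret")

def file_bug(summary, description, project_key="APP"):
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "APP-123"

if __name__ == "__main__":
    key = file_bug(
        "Checkout fails on Safari when the cart has more than 10 items",
        "Reported by an exploratory tester; reproduction steps attached in the test report.",
    )
    print(f"Created {key}")
```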

What do you think of the relationship between automated and human testing?
