The Artima Developer Community

Weblogs Forum
A Set of Unit Testing Rules

50 replies on 4 pages. Most recent reply: Jan 21, 2011 2:19 AM by Steve Merrick

Curt Sampson

Who cares if it's a "unit test"? Posted: Sep 12, 2005 2:42 AM
I certainly don't. I have several different groups of tests, some of which run faster than others, and that's what I work with. For example, at my day job, some of my current options are:

1. Database reload and database functional tests (2 seconds)
2. Run all ruby unit tests, without a database reload (2 seconds)
3. Run functional test of a program written in ruby, without database reload (1-2 seconds)
4. Database reload, run ruby unit tests, another reload, run all functional tests of programs written in ruby (7-10 seconds)
5. No database reload, start web server, run test cases from one class in the web test case suite (5 seconds, for the one I just tried)
6. Database reload, start web server, run all web test cases (30 seconds or so)
7. Everything (45-60 seconds)

Now it's a small system, but I've also done a lot of optimization of how I test over the years. (Switching from Java and HttpUnit to Ruby and test/httpweb for the web tests made them about an order of magnitude faster, as well as easier to write.)

I move stuff around between different kinds of testing as well, based on speed and convenience. (Some of my servlets are tested with unit tests in order to reduce the number of (slower) web tests, for example.)

The comment about "if it accesses a database, it's not a unit test" really irked me. Databases have code in them too, and that code should be tested just as you test any other code. I regularly migrate code back and forth between, say, Ruby and SQL stored procedures, based on what's easy, what kind of security I need, and so on. In fact, I'll go so far as to say that your database schema *is* code, and should be treated as a first-class citizen along with all of the rest of your code. (BTW, there's a bit of my unit test framework for PostgreSQL available in the pgtools project on SourceForge. Bug me and I'll update it.)
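To make that concrete, here is a rough Java/JUnit sketch of what testing database code can look like. Nothing here is taken from pgtools; the function name, connection URL and expected value are all invented for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import junit.framework.TestCase;

    // Hypothetical test of a PostgreSQL function normalize_price();
    // a schema reload would have installed the function beforehand.
    public class NormalizePriceTest extends TestCase {
        public void testNegativePricesAreClampedToZero() throws Exception {
            Class.forName("org.postgresql.Driver");  // register the JDBC driver
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "test", "test");
            try {
                PreparedStatement stmt =
                        conn.prepareStatement("SELECT normalize_price(?)");
                stmt.setInt(1, -5);
                ResultSet rs = stmt.executeQuery();
                assertTrue(rs.next());
                assertEquals(0, rs.getInt(1));
            } finally {
                conn.close();
            }
        }
    }

Whether you file that under "unit test" or "database functional test" matters less than the fact that the stored function gets the same treatment as the rest of the code.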

Sven Gorts

Re: A Set of Unit Testing Rules Posted: Sep 12, 2005 6:53 AM
Hi Michael,

While I would call such tests unit tests, the reliance on critical dependencies from within the test code is often problematic. Besides being slow, critical dependencies also have another nasty property: we can't ensure the availability of the resource being depended on.

For example: when running the tests against a test server, the server needs to be up and running. If the server is shut down, or perhaps even taken away, we are no longer able to run our tests.

Other typical examples of critical dependencies involve calling native code from Java or directly executing shell commands. In such cases the problem can become even worse than 'being slow', because a dependency on native or environment-specific code may prevent the tests from being run in a plain development environment.

Now, while critical dependencies are definitely something one would try to avoid, developers often need to accept living with them (at least for some time) as they work their way through bringing their codebase under test.

Kind regards,
Sven
http://www.refactoring.be

Michael Feathers

Re: A Set of Unit Testing Rules Posted: Sep 12, 2005 7:07 AM
> Isn't this just a case of 'how do we name things'?
>
> Assuming that we adopt your definition of a Unit Test ...
>
> Given a specific test, do we spend time to categorize it
> as a 'Unit Test' or some other type of test? What is the
> point of making such a distinction?
>
> If a unit of code, for example a function, is given one or
> more tests that attempt to prove that its logic is
> working, why is it important that we define these tests as
> 'Unit' or not? For example, a function that reads an
> entire text file into a RAM buffer may have one or more
> tests written to show that that is exactly what it
> achieves when run. Why split hairs about calling them Unit
> Tests or not? They are useful and important tests, whether
> or not they fit your Unit test definition.
>
> How would my coding or testing behaviour change by calling
> some tests "Unit Tests" and other tests something else?
>
> These are not rhetorical questions as I honestly don't
> know the answers and would like to improve my knowledge and
> practice of programming.


I'm glad you brought it up. It is a matter of how we name things, but also of what we motivate ourselves to do. When I first ran into this problem of database/IO-bound tests, I sat with a team and they told me that they had a couple hundred UTs. I looked at them and said "well, these over here look unit-ish, but those over there, we shouldn't count those." I formulated a few rules with that first team and they agreed to try using them. They did end up writing tests that touched the database, but only a few of them, and as an internal metric they decided to pay attention only to the number of UTs that satisfied the "rules." By keeping those rules in mind and acting on them, they ended up with far more decoupled software.

One thing that is interesting about this thread is that many are irked by the def of UTs that I laid out. The thing is, I'm not trying to change the def in the industry. But locally, with some teams, it's a great way to focus. If anyone doesn't want to do that, or to call them that, that's fine with me. It's just naming, after all. I view rules as a local matter. Teams can create whatever rules they want or name things whatever way they wish to, but don't underestimate the power of using rules like that to raise the bar. The ones I outlined may be good for your team or not. Drop some or add some, but regardless, raise the bar.

Daniel Jimenez

"Define" Unit Test Posted: Sep 12, 2005 7:12 AM
Recently a client asked for a rundown of TDD and unit testing. This client was calling anything that tested any code a "unit test", leading to a bit of confusion.

To be fair, the history of the term unit test does imply that definition. Recall the waterfall model: code is designed, then implemented, then "unit tested". In that model, pretty much any developed code is a "unit", thus unit testing is testing any code.

TDD shifts this definition subtly, but importantly. The definition I proposed for the term unit test was the tightest definition: a unit is the smallest bit of code that can be tested, thus a test is only a unit test if it tests the smallest bit of code that can be tested (usually a class via its public methods; this was a Java project). I then proceeded to categorize other tests (integration, acceptance, performance, etc) to emphasize that 21st-century development efforts require many layers of tests.

Certainly the original definition of unit test is not incorrect. However, more modern views of development get better traction out of a more specific definition, one version of which - and a very concise one at that - is Michael's list of criteria.

Curt Sampson

Re: A Set of Unit Testing Rules Posted: Sep 12, 2005 7:33 AM
> One thing that is interesting about this thread is that
> many are irked by the def of UTs that I laid out. The
> thing is, I'm not trying to change the def in the
> industry. But locally, with some teams, it's a great way
> to focus.

I guess my issue with that is that it's just one way to focus, and it may be the wrong way to focus. In particular, I've worked with people who make comments like this:

Besides being slow, critical dependencies also have another nasty property: we can't ensure the availability of the resource being depended on.

Sure, one approach is not to do those tests. But another is to figure out how you *can* ensure the availability of that resource. Often it requires getting pretty creative. A few years ago it even required moving an entire project from Oracle to PostgreSQL, so that everybody could have their own database server, with as many schemas as they wanted. That took a lot of work and negotiation, but it was well worthwhile in the end, because not only did the database stop being a "critical resource" that was blocking testing, it was in fact brought into the agile world, letting us make database changes almost as easily as other code changes. I've come up with other creative solutions for things such as e-mail sending and receiving and external credit-card transaction servers.

Michael Feathers

Re: A Set of Unit Testing Rules Posted: Sep 12, 2005 7:40 AM
> > One thing that is interesting about this thread is that
> > many are irked by the def of UTs that I laid out. The
> > thing is, I'm not trying to change the def in the
> > industry. But locally, with some teams, it's a great
> > way
> > to focus.
>
> I guess my issue with that is that it's just one way to
> focus, and it may be the wrong way to focus. In
> particular, I've worked with people who make comments like
> this:
>
> Besides being slow, critical dependencies also have
> another nasty property: We can't ensure the availability
> of the resource being depended on.

>
> Sure, one approach is not to do those tests. But another
> is to figure out how you *can* ensure the availability of
> that resource. Often it requires getting pretty creative.

That's the thing. Those are other tests. I wouldn't recommend not doing them. The thing is, though, I often run into teams that do the hard work you describe but end up not having any of the more independent tests; things bog down, and so on.

Mike

Re: A Set of Unit Testing Rules Posted: Sep 12, 2005 1:20 PM
Hi!

Nice ideas, but I disagree that a test using config files is not a unit test. For example, I use the Spring bean factory to deliver complex beans into my test case.

mbartyzel

Vincent O'Sullivan

Re: "Define" Unit Test Posted: Sep 12, 2005 10:10 PM
> TDD shifts this definition subtly, but importantly. The
> definition I proposed for the term unit test was the
> tightest definition: a unit is the smallest
> bit of code that can be tested, thus a test is only a unit
> test if it tests the smallest bit of code that can be
> tested (usually a class via its public methods; this was a
> Java project).

It's just nit-picking, but I'm not sure the use of "smallest" here is quite correct. Most TDD tests test individual methods (or functions), not classes, so the smallest unit of code being tested is the method.

It then gets rather vague: since most methods require more than one test for complete coverage, it becomes apparent that any given test may only be partially testing a particular method. Therefore the granularity is smaller still, except that an external test cannot see anything smaller than the method signature and therefore cannot 'know' to what extent it fully or partially tests the method in question.

Kelly R. Denehy

Mod parent up +5 Insightful Posted: Sep 13, 2005 11:48 AM
Oops, wrong website. :)

It's amazing how little mention of mock objects there is in this discussion. People seem to think that the only reason to avoid hitting a real database is the performance of the test suite. Somebody mentioned availability of the database (or other resource) as another issue, which is an excellent point. But one of the best reasons to use mocks instead of the real thing is that it's much easier to simulate certain runtime behavior (e.g., exceptions) using mocks, and to ensure that your method under test handles it correctly.

If you really think your existing unit tests are doing a good job while hitting a real database, run them through a code coverage tool like Clover, Cobertura, or EMMA. You'll probably see that little if any of your exception handling code is ever tested.
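As a minimal sketch of that idea (every class name here is invented, and the "mock" is just a hand-rolled anonymous class that forces the failure path):

    import java.sql.SQLException;
    import junit.framework.TestCase;

    // Invented names, purely for illustration.
    interface OrderDao {
        String loadCustomerName(int orderId) throws SQLException;
    }

    // Production code under test: it must cope with a failing database.
    class OrderService {
        private final OrderDao dao;
        OrderService(OrderDao dao) { this.dao = dao; }
        String describeOrder(int id) {
            try {
                return "Order for " + dao.loadCustomerName(id);
            } catch (SQLException e) {
                return "Order unavailable";
            }
        }
    }

    public class OrderServiceTest extends TestCase {
        public void testDaoFailureIsHandled() {
            // Mock that simulates a database failure on every call.
            OrderDao failing = new OrderDao() {
                public String loadCustomerName(int id) throws SQLException {
                    throw new SQLException("simulated connection failure");
                }
            };
            assertEquals("Order unavailable",
                         new OrderService(failing).describeOrder(42));
        }
    }

Run that through the coverage tool and the catch block shows up as covered, which is exactly the branch that a test against a healthy real database would almost never reach.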

Bill Poitras

Re: A Set of Unit Testing Rules Posted: Sep 13, 2005 12:03 PM
> It's difficult to write tests for an application which
> uses a database as its persistent store if you have a rule
> that you can't access the database.

It's not that difficult if you use the DAO design pattern: you define your database access as an interface, and when writing unit tests you use mock objects.

Unit tests are meant to test that the object at hand is doing what it's supposed to.

If integration with your database needs testing (and that's certainly reasonable in a database-intensive application), you write integration tests.
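A bare-bones sketch of that arrangement (the interface and class names are invented for the example):

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    // The DAO is just an interface; production code only knows about this.
    interface UserDao {
        void save(String username);
    }

    // The class under test.
    class RegistrationService {
        private final UserDao dao;
        RegistrationService(UserDao dao) { this.dao = dao; }
        void register(String username) {
            if (username == null || username.length() == 0) {
                throw new IllegalArgumentException("empty username");
            }
            dao.save(username.toLowerCase());
        }
    }

    public class RegistrationServiceTest extends TestCase {
        // Recording mock: no JDBC, no database, it just remembers what it was asked.
        static class RecordingUserDao implements UserDao {
            List saved = new ArrayList();
            public void save(String username) { saved.add(username); }
        }

        public void testRegisterLowercasesAndSaves() {
            RecordingUserDao dao = new RecordingUserDao();
            new RegistrationService(dao).register("Alice");
            assertEquals(1, dao.saved.size());
            assertEquals("alice", dao.saved.get(0));
        }
    }

The real JDBC-backed UserDao still deserves tests of its own, but those are the integration tests, and they can live in a separate, slower suite.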

Bill Poitras

Re: A Set of Unit Testing Rules Posted: Sep 13, 2005 12:11 PM
> Nice ideas, but I disagree that a test using config
> files is not a unit test. For example, I use the Spring
> bean factory to deliver complex beans into my test case.

Although Spring does provide abstract JUnit classes for instantiating Spring objects from a Spring context, what you are describing is not considered a unit test. If you take a look at the unit tests in the Spring samples and in the library itself, they almost exclusively instantiate objects using "new" and wire up collaborators that are mock objects.

This post is really about trying to create true unit tests first. Then, if more complex integration tests are needed, those are separate. Michael is trying to foster the idea that unit tests are for testing the proper behavior of a single object, without worrying about whether the database, thread library, transaction manager, other objects you write, etc. actually work, because they should. And in the case of other objects you write, you'll write unit tests for those.

Steve Garcia

Re: A Set of Unit Testing Rules Posted: Sep 13, 2005 2:47 PM
>> Abstraction is great, we use interfaces all the time. But at some point, you just have to implement (and test) your RegistryStorage and FileStorage classes (two implementations of the IStorage interface). To test these, you'll *have* to touch the filesystem and the registry (which is actually a db). <<

Yes, I agree with this notion. I believe that every line of production code must be backed by a test, whether that is a unit test, an acceptance test, end-to-end test, system test, or whatever you call it.

However, there can conceivably be one class that writes to the file system. That class can be used over and over within the production code, and there should be a test around it. But for the 50 classes that depend on it, it can be mocked out with an in-memory version of that class.
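Sketched out, reusing the IStorage and FileStorage names from the quote (the in-memory class is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    // The one class that really touches the file system gets its own
    // (slower) test; the ~50 dependent classes only see this interface.
    interface IStorage {
        void put(String key, String value);
        String get(String key);
    }

    // Real implementation, tested separately against an actual directory:
    // class FileStorage implements IStorage { ... }

    // In-memory stand-in used by the unit tests of everything else.
    class InMemoryStorage implements IStorage {
        private final Map data = new HashMap();
        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return (String) data.get(key); }
    }

That keeps the file-system dependency in exactly one place, which is also the only place that needs the slower kind of test.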

Matt Gerrans

Re: A Set of Unit Testing Rules Posted: Sep 13, 2005 6:07 PM
Are you saying you write wrapper classes around every external resource, like the file system, network, databases, etc.?

James Sadler

Re: A Set of Unit Testing Rules Posted: Sep 14, 2005 12:47 AM
> > A test is not a unit test if:
> > - It talks to the database
> > - It communicates across the network
> > - It touches the file system
> > - It can't run at the same time as any of your other
> > unit tests
> > - You have to do special things to your environment
> > (such as editing config files) to run it.
>
> Sometimes, your unit test must do some of these things:
> 1. You test a class whose function is to interact with a
> db, write/read a file, or write/read the Windows
> Registry.

You don't need to write a unit test that accesses the database for every piece of code that accesses the database. Here's what you do: encapsulate all access to your database through an implementation of an interface called MyDatabaseService (or whatever). This class itself can either be:

- tested using the unit test tool of your choice
- or simply tested by inspection (it's a really thin layer, and hey, you are pretty sure that Oracle tested their JDBC adapters eh?!)

From then on, all code can be tested against a mock version of MyDatabaseService. You can use inversion of control at run time (see Spring) to enable this sort of thing easily.

What you should ask yourself is this: what am I testing? Am I testing MY piece of code, or am I testing the database (or some other external dependency)?

Where I work, we use this mocking technique to return fake values etc, and it is very effective. The tests are testing OUR code, and we have 2000 JUnit tests that run in about 35 seconds. Only about 10 tests go out to the network. We have very very very few bugs in our system, so it definitely works.

> 2. You develop a client; to make sure it works, it
> must interact with a server. At my work, for
> example, our client must exchange timestamped, signed, XML
> messages with a server. The test will not be complete
> without server interaction. Actually, you simply
> cannot test this without a server.

Same issues here: any external dependency can be hidden behind an interface that can be mocked out at test time. The mock will have no logic: it will simply return the data required for the code under test to work. At run time, you replace the mock with the real implementation that actually goes to the server.

When mocks are done well, they have zero logic. I cannot stress that enough: usually the body of a method in the mock is empty, or simply returns a default value.
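For instance, a zero-logic mock of the MyDatabaseService idea above could be as simple as this (the method names are invented):

    import java.util.Collections;
    import java.util.List;

    interface MyDatabaseService {
        List findCustomers(String region);
        void saveCustomer(Object customer);
    }

    // No branching, no computation: the test installs whatever canned
    // data the code under test needs, and that is all the mock does.
    class MockDatabaseService implements MyDatabaseService {
        List customersToReturn = Collections.EMPTY_LIST;
        public List findCustomers(String region) { return customersToReturn; }
        public void saveCustomer(Object customer) { /* intentionally empty */ }
    }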

Of course you will still have integration tests, and they will use the REAL implementations of your 'external system' interfaces. You can use JUnit for these if you like, but strictly speaking, they are not unit tests. Where I work, none of our integration tests have ever failed, not because we are the best developers the world has ever seen, but because the 'external service' implementations are so thin. And I don't want to test that JDBC works, TCP/IP works, Swing works, etc.: I want to test my code.

Unit tests should also be self-contained, have no static state, and be runnable one at a time, in any order you please (TestNG relaxes this a bit).

- ) self contained; they are testing my code, and are defined entirely in code. I don't want my tests to fail because someone deleted some records from the database.

- ) no static state; for the guarantee that there are no 'magic' dependencies between tests. Don't want to get into a situation where one test will only run if test X is run first. This means static variables in my code are readonly, and static initializers are avoided if possible.

- ) no external dependencies; if tests fail, it is always because I screwed up the code and for no other reason

>
> You want to say these are not unit tests? I'll argue that
> the tests can't be any simpler.

Hopefully I have illustrated that this is not the case.

James.

James Tikalsky

Re: A Set of Unit Testing Rules Posted: Sep 14, 2005 5:49 PM
If I recall correctly, the term "unit test" was borrowed from outside of computer programming, and referred to the testing of a single part or piece, removed from its whole. For example, instead of testing a whole engine at once, a single piston can be removed from the engine and placed in a testing device. This allows the testers to simulate things that would either rarely or never happen to the piston in its lifetime, or that would take years of real-world use to determine, say, how the piston wears over one year of use.

The key thing is that the part has been isolated. If there is a failure, there is no doubt about what, exactly, failed, or what the part was doing when it failed.

Your five rules can essentially be rolled into one: Isolate and test only one part.

But in software, we have a wrinkle to contend with: what, exactly, is "one part"? Some say it's a single method; however, I've seen methods that were pages long. This brings us to the concept of Test-Driven Design: refactor that big method into many small methods that each do only one thing, and they become easier to test.
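As a trivial illustration of that kind of refactoring (the domain here is made up):

    // Before: one long method parses, validates and totals an order line
    // all at once, so a test can only observe the combined result.
    //     double process(String line) { /* fifty lines */ }

    // After: each step is a small method with an obvious test of its own.
    class OrderLine {
        static String[] parse(String line) {
            return line.split(",");
        }
        static int validateQuantity(String raw) {
            int qty = Integer.parseInt(raw.trim());
            if (qty <= 0) throw new IllegalArgumentException("quantity must be positive");
            return qty;
        }
        static double total(int quantity, double unitPrice) {
            return quantity * unitPrice;
        }
    }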
