Dust off an old computer and put it to work.
I think that when people look back at the way we program today, one of the things they'll be amazed by is that we have so much computing power and do so little with it.
Seriously, how many computers do you have? If you're like me, you have at least a couple of stray ones sitting around the office, and they aren't doing anything. What a waste. And it's only going to get worse as multicore processors become more prevalent.
When I first heard of continuous integration, way back in the day, I misunderstood it. I thought that some computer someplace was going to build my project continuously -- that is, regardless of whether I or anyone else had checked in. Silly? Perhaps. But wasteful? I don't think so; the computer is running anyway. It might as well work.
A while back, I visited a C++ team that had found a wonderful use for a spare computer. They set it up to build continuously. Then they did something fascinating. They created a script that went into their code and commented out a single #include. Then it built their project. If the build and the tests passed, the script deleted the include. If it didn't, the script uncommented the include and went on to the next one. Yes, it took quite a while to march through the code, but with that build running continuously in the background, they were able to eliminate a large number of superfluous dependencies. Their build got better.
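The team's script isn't shown, but the idea is simple enough to sketch. Here's a minimal, hypothetical Python version: for each #include line, try removing it and see whether the build and tests still pass. The `build_passes` callable stands in for whatever real build-and-test command you'd run.

```python
def prune_includes(lines, build_passes):
    """Try removing each #include line in turn; keep a removal only
    if the build (and tests) still pass afterward.  `build_passes`
    is a callable taking the candidate source lines and returning
    True when the full build-and-test cycle succeeds."""
    kept = list(lines)
    i = 0
    while i < len(kept):
        if kept[i].lstrip().startswith("#include"):
            candidate = kept[:i] + kept[i + 1:]
            if build_passes(candidate):
                kept = candidate
                continue  # same index now holds the next line
        i += 1
    return kept

# Toy example: pretend only <cstdio> is actually needed.
source = [
    '#include <vector>',
    '#include <cstdio>',
    'int main() { puts("hi"); }',
]

def fake_build(lines):
    # Stand-in for the real build; a real script would shell out
    # to make/ctest here and take much longer per iteration.
    return '#include <cstdio>' in "\n".join(lines)

print(prune_includes(source, fake_build))
```

In practice each probe costs a full rebuild, which is exactly why this belongs on a spare machine grinding away in the background rather than on anyone's critical path.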
That script they wrote is an example of something I call a regenerative build tool. There are others. Jester is a regenerative build tool which uses mutation testing to find problems. It changes your code in subtle ways and runs your tests to see whether they still pass. If they do, either you don't have the tests you need, or you have code that really doesn't contribute to your results at all -- zombie code. Guantanamo is another regenerative build tool, a rather severe one. It deletes all code that isn't covered by a set of tests.
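To make the mutation-testing idea concrete, here's a small sketch (not Jester itself, just an illustration in Python): apply each candidate mutation to the source, run the tests against the mutant, and report any mutant that "survives" -- i.e., the tests still pass, pointing at untested or dead code.

```python
def mutate_and_test(source, mutations, run_tests):
    """For each (original, replacement) pair, produce a mutant of the
    source and run the tests against it.  A mutant that survives
    (tests still pass) exposes untested or non-contributing code."""
    survivors = []
    for old, new in mutations:
        if old not in source:
            continue
        mutant = source.replace(old, new, 1)
        if run_tests(mutant):  # tests passed -> mutant survived
            survivors.append((old, new))
    return survivors

# Toy subject: a function with a branch the test suite never exercises.
code = "def clamp(x):\n    if x > 10:\n        return 10\n    return x\n"

def run_tests(src):
    ns = {}
    exec(src, ns)
    # Our only "test" never probes values above 10 ...
    return ns["clamp"](5) == 5

print(mutate_and_test(code, [(" > ", " < "), ("return 10", "return 0")], run_tests))
```

The first mutation (flipping the comparison) gets killed by the test, but the second survives because nothing ever exercises the x > 10 branch -- exactly the kind of gap a regenerative tool can surface while the spare box churns overnight.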
I'm sure there are plenty of regenerative build tools that we haven't thought of yet. And, that's a shame. Our spare computing resources can offer us a great deal. We just have to find creative ways to use them.
This stuff sounds really, really cool. It's so far removed from our current build 'process' that it is sad. Still, it is good to know there are some people out there doing some really interesting and useful things.
You can always go rogue. Setting up a continuous integration server isn't all that painful. In addition, if you build it and they see it, they will scoff at first and then they will say "can we have that."
CruiseControl isn't the most user-friendly CI server out there, but it's not terrible either. Hudson (which is free from Sun) is damn easy to set up and get going.
Here's a link to a bunch of alternatives to CruiseControl, ranging from simple open source tools to complex monsters involving distributed builds, etc.: http://confluence.public.thoughtworks.org/display/CC/Understanding+the+alternatives+to+CruiseControl -- hope it's of use. -- Peter
I'm working just to try and get an average ordinary build environment set up. I'll take any resources you all are willing to give me. We had one at my last place of employment and I didn't realize how much I missed it. I also thought things here at my current place were better than they are. Oh well, that's called 'an opportunity', right?
You're also wasting a lot of energy by using lots of physical computers, especially when reactivating old, inefficient ones. Where I work, we've started using virtual machines for stuff like that. You only need one (powerful) server stashed away somewhere; create VMs for the nightly build, CI, dangerous tests, etc., and the host takes care of resource allocation all by itself.
This scheme can be extended to continuous optimization of program code, for both statically and dynamically typed languages. With a huge amount of data and tests you get good approximations of real situations and program behaviour. I'm pretty excited about these empiricist practices.