The Friendly Coder

On software development and technology

Scaling Automated Testing – The Platform Paradox

So you’ve built your awesome product, which is designed to target several platforms. This may be as simple as supporting several versions of Windows (XP, Vista and 7, say) or as complex as supporting Linux, Windows, Mac and maybe even an Android flavor, in both 32- and 64-bit environments. Being a good developer, you have been diligent about writing your unit tests and following good coding practices, including clean version control patterns. Now you are tasked with setting up automation for this product – what do you do? You will soon discover that scaling your automated testing is not as simple as you first thought.

Option 1: Keep It Simple
You could choose one particular platform as your primary target and set up your automated builds on that platform. At the very least, a successful build on that platform gives you good confidence that your code functions correctly in that environment. But are you willing to gamble that your app will work just as well on all those other platforms? If so, you are a much braver soul than I. Typically, Murphy’s Law prevails in this situation: anyone who has taken a 32-bit product code base and ported it to a 64-bit platform can vouch that unexpected platform-dependent bugs tend to creep in.
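As a concrete (if contrived) illustration of the kind of bug that slips through single-platform automation, here is a minimal Python sketch of a test that quietly assumes a native C long is 8 bytes. The test itself is hypothetical, but the size difference it trips over is real: 64-bit Linux uses 8-byte longs, while 64-bit Windows and all 32-bit platforms use 4.

```python
import platform
import struct


def test_native_long_is_8_bytes():
    """A hidden platform assumption: a native C 'long' is 8 bytes.

    This holds on 64-bit Linux (LP64) but not on 64-bit Windows (LLP64),
    where a long is still 4 bytes, nor on any 32-bit platform. A build
    farm that only ever runs on 64-bit Linux would never catch the failure.
    """
    assert struct.calcsize("l") == 8, (
        "unexpected long size on %s (%s)"
        % (platform.system(), platform.architecture()[0])
    )
```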

Option 2: Avoidance
Some people would argue that this problem is easily avoidable if you leverage a cross-platform runtime such as a Java runtime or .NET / Mono. Such frameworks do help isolate the code from the environment, but they are not a perfect solution. Runtimes for different platforms are often equivalent but not identical. Also, different companies and teams are often involved in building some of the variants, adding even more potential behavioural differences between runtimes.
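To be fair to the runtime vendors, many of those differences are deliberate: the runtime exposes whatever the host OS provides. A rough sketch, using Python purely as a stand-in for whatever managed runtime you actually use, shows a few innocuous-looking defaults that are really supplied by the platform underneath:

```python
import locale
import os
import sys

# Each value below looks like a constant of the runtime, but every one of
# them is actually supplied by the underlying platform, so "the same"
# runtime can behave differently from one OS to the next.
print("path separator:    ", repr(os.sep))                   # '\\' on Windows, '/' elsewhere
print("line separator:    ", repr(os.linesep))               # '\r\n' on Windows, '\n' elsewhere
print("preferred encoding:", locale.getpreferredencoding())  # e.g. 'cp1252' vs 'UTF-8'
print("platform string:   ", sys.platform)                   # 'win32', 'linux', 'darwin', ...
```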

These arguments aside, there are often other factors preventing such a migration. Perhaps some pivotal dependencies of your product can’t be migrated, or you have a legacy code base that is not cost effective to port. Suffice it to say, this tends not to be a silver-bullet solution.

Option 3: Test Everything
“Computing resources are a dime a dozen these days. Just buy a stack of PCs or lease some time in the cloud and test all combinations,” some people might say. I know this sounds perfectly reasonable in theory, but in practice it’s not so easy.

First are the practical requirements for such a solution. Let’s say you work for a company that produces three applications. Let’s assume each of those products must work in both 32- and 64-bit environments across four different flavors of Windows (Server 2008, XP, Vista and 7). Further, as we all know, no product is perfect, so you will likely reach a stage where you have at least two versions of each product under active development at the same time: updates and fixes for the last release, plus work on your next release (in reality, some products end up having to support several versions simultaneously).

So, my math may be a little rusty, but if I’m right that works out to buying at least 8 new machines (one per OS and architecture combination) and building and testing 6 product configurations per box (and I haven’t even mentioned debug vs. release builds yet!). Things get even more fun if the products are sufficiently large and complex that they can’t all be effectively built on the same machine (i.e., the system resources cannot complete all builds and tests simultaneously in a timely fashion). Sure, VMs and clouds may reduce the physical hardware requirements, but at a cost in performance. And even with VMs you still have the licensing costs of all the production software needed to run those boxes (i.e., Windows, Visual Studio, build agents, etc.).
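For anyone who wants to check my math, here is a small sketch that enumerates that hypothetical matrix (the product names are invented; the platform list is the one assumed above):

```python
from itertools import product

products      = ["AppA", "AppB", "AppC"]          # hypothetical product names
versions      = ["last-release", "next-release"]  # two active branches per product
os_flavors    = ["Server 2008", "XP", "Vista", "7"]
architectures = ["32-bit", "64-bit"]

# One machine (physical or virtual) per OS/architecture combination...
machines = list(product(os_flavors, architectures))
print("machines needed:", len(machines))                        # 4 * 2 = 8

# ...each of which builds and tests every product in every active version.
configs_per_machine = list(product(products, versions))
print("configurations per machine:", len(configs_per_machine))  # 3 * 2 = 6

# Total build/test jobs across the farm, before debug vs. release doubles it:
print("total jobs:", len(machines) * len(configs_per_machine))  # 48
```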

The second problem is time constraints. For continuous integration to be effective, all builds for each product should complete in as short a time as possible. VMs and clouds slow things down. Waiting for multiple builds to complete in series slows things down. Running builds in parallel is sometimes non-trivial depending on the application architecture, and it adds to the complexity of your continuous integration (distributed builds are more complex to set up and maintain than stand-alone linear builds).
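At its core, the parallel part is just fanning out a set of commands and waiting for them all to finish, as in the sketch below (the build commands and msbuild arguments are only illustrative); the real difficulty is dispatching those jobs to separate agents and keeping the results coordinated:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Illustrative build commands, one per configuration. In a real farm these
# would be dispatched to separate build agents rather than one machine.
BUILD_COMMANDS = [
    ["msbuild", "AppA.sln", "/p:Platform=x86", "/p:Configuration=Release"],
    ["msbuild", "AppA.sln", "/p:Platform=x64", "/p:Configuration=Release"],
]


def run_build(cmd):
    """Run one build and report whether it succeeded."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return cmd, result.returncode == 0


with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_build, BUILD_COMMANDS))

for cmd, ok in results:
    print(" ".join(cmd), "->", "OK" if ok else "FAILED")
```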

The third problem is the increased complexity of configuration management. Making sure that all builds and tests for a particular application complete successfully, so that it can be promoted to a releasable state (either to the customer or for an agile “demo”), becomes difficult. If, for example, the output of your continuous integration is a redistributable package such as an application installer or a Java WAR file, then you want to ensure that every parallel build and test configuration required to validate the contents of that package has completed successfully. This becomes difficult with distributed parallel builds.
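The gating logic itself is simple to express; the sketch below (with made-up configuration names, not tied to any particular CI tool) only promotes a package when every required configuration has reported success. The hard part is reliably collecting those statuses from a fleet of distributed builds:

```python
# Configurations whose builds and tests must all pass before the installer
# (or WAR file) produced from this revision may be promoted.
REQUIRED_CONFIGS = {
    "win7-32bit-release",
    "win7-64bit-release",
    "server2008-64bit-release",
}


def can_promote(statuses):
    """statuses maps configuration name -> 'success', 'failure', etc."""
    return all(statuses.get(cfg) == "success" for cfg in REQUIRED_CONFIGS)


# One configuration never reported in, so the package is held back.
print(can_promote({
    "win7-32bit-release": "success",
    "win7-64bit-release": "success",
}))  # False
```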

Summary
Often developers underestimate the complexities inherent in scaling these agile processes, and managers tend to underestimate the costs of introducing new dependencies into a product line. In the end, as build masters, we have to try to find a reasonable balance that increases confidence in the products we’re building while minimizing the overhead needed to deliver on those expectations. There is no single right answer to these problems; it is more of an art than a science, based on experience.
