Currently the company I work for gives each developer their own development virtual machine. On this machine (Windows 7) they install the entire product stack (minus the database); in production this stack is normally spread across multiple machines running different operating systems (though we are moving towards Windows Server 2008 and 2008 R2).
So when a developer starts a new project, they are likely to be updating only a small piece of their stack, and the rest of it can become out of date with the latest production code. The isolation from other developers means some issues won’t be found until the code reaches shared test environments or production.
I’m suggesting a move from functional testing on these isolated machines to plugging into a shared environment. The goal is to move towards a deployment that is closer to production in both mechanism and server type.
Developers would still make code changes on their Windows 7 VM and run unit/component tests locally, but for functional testing they would leverage a shared environment.
Does anyone else use a shared development environment like this? Are there strong reasons against this sort of sandbox environment? The biggest drawback is the shift from checking in code only after local functional testing to checking in after static testing alone.
I’m hoping an intelligent git branching strategy can take care of this for us.
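To make the idea concrete, here is a minimal sketch of one branching strategy that might fit (all branch, file, and commit names are purely illustrative): feature work stays on a branch until local unit/component tests pass, then merges into an "integration" branch that feeds the shared environment. It builds a throwaway repo so the commands can be run end to end:

```shell
#!/bin/sh
# Sketch of a possible branching flow (names are illustrative): feature
# work stays on a branch until local unit/component tests pass, then
# merges into an "integration" branch that feeds the shared environment.
# A throwaway repo stands in for the real one.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "dev@example.com" && git config user.name "Dev"
git commit -q --allow-empty -m "initial"
git branch integration                      # integration tracks the shared env

git checkout -q -b feature/update-billing integration
echo "change" > billing.txt                 # ...edit, run local unit tests...
git add billing.txt
git commit -q -m "Update billing service"

git checkout -q integration                 # merge only once local tests pass
git merge -q --no-ff feature/update-billing -m "Merge feature/update-billing"
git branch --show-current                   # prints: integration
```

The `--no-ff` merge keeps each feature visible as a unit in the integration history, which makes it easier to back one out if it breaks the shared environment.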
I’m not sure that enforcing a shared development environment is a good idea, but I would like to suggest that you are focusing on the wrong tool.
It seems to me that what you really need is a well-set-up dev environment with nightly builds/continuous integration and automated testing.
It wouldn’t matter whether I’m coding with Vim on Ubuntu or with Eclipse on Windows if, when I make a mistake, the build/unit-test system sends me an angry email saying I broke something. By the same token, such a system would serve the goal of better-quality software far better than dictating a development environment.
It seems to me that whatever effort you would put towards a shared dev environment would be better spent setting up what I have described here.
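As a rough illustration of how little that core loop involves (the `build_and_test` command and the mail step below are stand-ins, not a real setup), the skeleton of such a nightly job is just:

```shell
#!/bin/sh
# Skeleton of a nightly-build job. `build_and_test` is a stand-in for
# the real build plus unit-test run (e.g. msbuild and a test runner on
# a Windows stack); swap it for whatever the product actually uses.
build_and_test() { true; }               # stand-in: always "passes" here

if build_and_test > build.log 2>&1; then
    echo "build OK"
else
    # the "angry email": notify the author of the most recent commit
    committer=$(git log -1 --format='%ae')
    mail -s "Nightly build broken" "$committer" < build.log
fi
```

A CI server (Jenkins, for instance) is essentially this loop plus scheduling, history, and reporting.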
Whether or not development should take place in a shared environment is a subtle question. My preference is not to, simply because separate environments scale better. But I am also an advocate for making it as easy as possible to update each environment, and I like to see that happen as often as possible. If everyone checks in early, checks in often, and syncs up regularly, you effectively have a shared environment, but without the resource contention or as big a window for oopses. If people aren’t doing that, then a shared environment may help.
If devs are failing to keep their environment up to date, and then relying on deprecated behavior, that may be a symptom of a deeper problem: namely, that devs are breaking each other’s code. In that case a shared development environment might solve the immediate problem, but could bring the underlying problem to the surface. Be aware of that, and be aware that people may blame the shared environment because it is an easier target.
But QA should definitely take place in a shared environment, and it sounds like you’re doing that. To catch more of the integration issues, I would highly recommend setting up Jenkins unit-test reporting (https://wiki.jenkins-ci.org/display/JENKINS/Unit+Test) to make sure that unit tests actually run in a timely fashion, and in a way that can track problems down to the commit that is at fault. Making sure that you have enough unit-test coverage to actually see problems is a second issue, but one that is highly worthwhile to tackle.
I’ve seen this happen… there were benefits, but some major drawbacks made it an unrealistic long-term solution when I saw it:
- Coordination between testers
  - “Hi everyone, I need to do an IIS reset…” and the whole team waits.
- Regression-testing branches (i.e. testing against a previously released/supported build)
  - Unless you can host multiple versions of the software on the same hardware, you need a new environment.
- Testing fixes
  - A dev makes a fix and gives it to test… how do they deploy it? Redeploy the shared environment? Wait until tomorrow to test it? What if it’s a high-priority fix? It’s a simpler and safer scenario for QA to just deploy to a private environment, and if that is the most common scenario, it’s trivial to perform, especially with automated deployment.
- Performance testing
  - You need consistent and specific hardware with consistent and specific loads (i.e. no other testers on it, only simulated ones) to get repeatable and stable performance results.
That said, it was a great way to hammer one specific build, with all the testers manually testing functionality; it got the team through a couple of fire drills. Long term, though? It’s really messy.