Intuitively it seems appropriate that the development environment (and all test environments) be as close to the production build as possible. Are there any documented arguments in support of this intuition?
6
It’s one of the founding principles of continuous delivery that your integration tests and manual tests need to run in a “production-like” environment in order to have any assurance of a stable release. The more production-like your testing and staging environments are, the more confident you can be, up to and including the point of daytime fire-and-forget releases.
That being said, your development environment does not need to be the same as production, and it definitely should not have production data – privacy leaks, ad-hoc updates, all sorts of problems there. Integration happens after your code leaves the development environment (specifically, in your CI environment, assuming you have one), and most teams don’t run integration tests locally. Mirroring production in dev therefore won’t help much, since your code and unit tests are generally going to abstract away any environmental dependencies anyway (assuming you’ve designed them correctly).
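For instance, here is a minimal Python sketch of what “abstracting away environmental dependencies” can look like – the `UserStore` interface and its in-memory fake are hypothetical names, not from any particular library:

```python
from abc import ABC, abstractmethod

# Hypothetical port/adapter: the service depends on an abstract store,
# not on a concrete production database.
class UserStore(ABC):
    @abstractmethod
    def get_email(self, user_id: int) -> str: ...

class InMemoryUserStore(UserStore):
    """Fake used in unit tests; no database or production-like setup needed."""
    def __init__(self, emails: dict[int, str]):  # requires Python 3.9+
        self._emails = emails

    def get_email(self, user_id: int) -> str:
        return self._emails[user_id]

def notify(store: UserStore, user_id: int) -> str:
    # Business logic stays testable no matter how dev differs from prod.
    return f"Sending notification to {store.get_email(user_id)}"

# A unit test like this runs identically on any developer machine:
assert "a@example.com" in notify(InMemoryUserStore({1: "a@example.com"}), 1)
```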
It is, however, useful to use the same deployment scripts for both local/dev and test/staging/prod, because doing so adds another layer of testing to the deployment itself and helps you refine your process. But the environments themselves don’t need to be identical. It’s not really cost-effective to buy an Oracle license for every single dev box, for example, so don’t count on perfect consistency.
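As one hedged illustration of that idea, a single deployment entry point can be parameterized per environment, so that every dev deploy also exercises the same deployment logic as production (the environment names and settings below are made up):

```python
import argparse

# Hypothetical per-environment settings: only the values differ,
# the deployment steps stay identical across all environments.
ENVIRONMENTS = {
    "dev":     {"host": "localhost",        "replicas": 1},
    "staging": {"host": "staging.internal", "replicas": 2},
    "prod":    {"host": "prod.internal",    "replicas": 6},
}

def deploy(env: str) -> None:
    cfg = ENVIRONMENTS[env]
    # Every run, in any environment, exercises the same code path --
    # so routine dev deploys double as tests of the deployment itself.
    print(f"Deploying to {cfg['host']} with {cfg['replicas']} replica(s)")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("env", choices=ENVIRONMENTS)
    deploy(parser.parse_args().env)
```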
2
I don’t expect this answer to be highly upvoted or very popular, as I know people love to develop on their platform of choice, but here goes…
I once witnessed a developer spend 12 hours attempting to replicate the installation of a single dependency on his MacBook Pro. If you are doing enterprise development, you will have many dependencies, and cross-OS dependencies are often repackaged by someone other than the original author and released with a slightly modified code base.
I would be very wary of any developer attempting to develop long-term in an environment that is not production-like. Not only is the time he spends managing his own environment largely wasted when automated processes already exist to set up the production environment; he also risks unknowingly adding features and dependencies that may not work at all in production.
In my organization we go so far as to have bulk unit tests that tell us every time a Python package changes version. Sometimes it happens unintentionally, and when it does we want to know. A good example of this is PyMssql and this bug -> http://code.google.com/p/pymssql/issues/detail?id=98
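A minimal sketch of what such a version-pinning test might look like (the packages and pinned versions below are illustrative, not our actual pins; `importlib.metadata` has been in the standard library since Python 3.8):

```python
import unittest
from importlib.metadata import version  # stdlib since Python 3.8

# Illustrative pins -- in practice these would mirror the locked
# dependency versions of the production environment.
EXPECTED_VERSIONS = {
    "requests": "2.31.0",
    "pymssql": "2.2.11",
}

class TestDependencyVersions(unittest.TestCase):
    def test_installed_versions_match_pins(self):
        for package, expected in EXPECTED_VERSIONS.items():
            with self.subTest(package=package):
                # Fails loudly the moment an installed version drifts.
                self.assertEqual(version(package), expected)

if __name__ == "__main__":
    unittest.main()
```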
Lastly, I can’t understand teams claiming to be “agile” that require everyone to continually commit their code – “If it’s not in version control it doesn’t exist!” – and then turn around and take a completely lax attitude toward the environments, modules, and versions being developed against. How about “if it’s not developed within production environments it doesn’t exist”?
9
More precisely, the Unit Test Environment should be as similar to production as possible.
In many cases the development environment is not even on the same platform the application will run on. (Think of Android, where development is done on a PC in Eclipse but the code runs on your phone.)
The main reason for not doing so in real life boils down to the cost of hardware and software licenses. It’s hard to justify a multi-site, high-availability cluster of multi-processor machines just for development.