Are there any industry standards or best practices for managing a rapidly changing code base?
The applications I develop always have a custom aspect to them, so there is always some client-specific code written for every project.
The method we are using now is as follows:
Several ‘CORE’ projects exist in isolation from one another. Each core project is a class library capable of running on its own, or with dependencies on other core modules.
All of these core modules are documented (with Sandcastle) and packaged as NuGet packages, which are served via a private NuGet feed.
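For context, consuming a private feed like this usually only needs an entry in `nuget.config`; the feed URL below is a placeholder, not a real address:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Placeholder URL: substitute your private feed's actual address -->
    <add key="CompanyCore" value="https://nuget.example.com/nuget" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```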
When a customer purchases the software we create a customer-specific project which implements the bespoke code they require and pulls in the NuGet packages from the core libraries as needed. The build of this project is what is delivered to the customer, and each customer project has its own Git repository.
This has worked fine for a year or so, but I feel that as we add features to the core libraries the number of NuGet packages is growing quite quickly.
Is there a better method of organising code, whilst making sure there are no separate versions of the same code lying around?
I know this Stack Overflow post states that:
Google manages to keep the source code of all its projects, over 2000, in a single code trunk containing hundreds of millions of code lines, with more than 5,000 developers accessing the same repository.
But this seems like it would produce problems for us, due to having bespoke deployments.
Any ideas or comments on what I am doing wrong?
Edit
The problems I foresee with the Google approach are mainly to do with release management and builds. We use continuous integration, so every commit triggers a build and runs the tests automatically. If we use one all-encompassing repo, I am not sure what effect this will have on the build and test servers. Will they have to build every customer's project?
Or would there be a branch per customer? In which case is this any better than a repo per customer?
This is more a code management issue. At the heart of it, you need to make a distinction between common code and bespoke modifications. Once you can draw that line, the SCM question falls into place easily.
For example, you could set your application up to be driven heavily by configuration. For each customer feature, you add a config option and implement the feature in the common codebase. A customer then purchases the single common application plus a specially created config file (or DB entries, or whatever).
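As a sketch of that config-driven approach, a single common build could read per-customer switches from its App.config; the keys below are invented for illustration, not from the original post:

```xml
<!-- Customer-specific App.config shipped alongside the unchanged common binaries.
     All feature keys here are hypothetical examples. -->
<configuration>
  <appSettings>
    <add key="Features.AdvancedReporting" value="true" />
    <add key="Features.LdapLogin" value="false" />
    <add key="Branding.CompanyName" value="Acme Ltd" />
  </appSettings>
</configuration>
```

The common code then branches on these settings at runtime, so every customer runs the same binaries and only the config file differs per deployment.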
Alternatively, you have a common codebase that you extend with inherited classes for each customer feature. The SCM would be used to check out the common, unchanging codebase and then bring in the new files to add to the build for that customer.
Or you can try branching off a central trunk and editing the code as needed for each customer, with any fixes to the common trunk being merged into each and every customer branch. This last one is easiest for the devs, but a real pain for the build manager.
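The trunk-plus-customer-branches workflow (and the merge overhead it implies) can be sketched with plain Git in a throwaway repository; the repo, file, and branch names here are all made up:

```shell
# Demo of branch-per-customer in a temporary repo (all names illustrative).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo "core v1" > core.txt             # common trunk code
git add core.txt
git commit -qm "core: initial"
trunk=$(git symbolic-ref --short HEAD)

git checkout -qb customer-acme        # bespoke branch for one customer
echo "acme bespoke" > acme.txt
git add acme.txt
git commit -qm "acme: bespoke feature"

git checkout -q "$trunk"              # a fix lands on the common trunk...
echo "core v1.1 (fix)" > core.txt
git commit -qam "core: bugfix"

git checkout -q customer-acme         # ...and must be merged into EVERY customer branch
git merge -q "$trunk" -m "merge trunk fix"
cat core.txt                          # the customer branch now carries the trunk fix
```

The final merge step is the pain point: with N customers, every trunk fix means N merges, which is why this option burdens the build manager most.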
I’d say you’re probably doing things right with a large set of common libraries, but maybe your common code is just too fine-grained. Try combining them into fewer libraries; it doesn’t matter if a customer’s package contains extra code they never use. You might even get away with a single NuGet package that pulls in one library holding all the common code, for example.
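Short of physically merging the libraries, one low-effort way to present a single package is a metapackage: a `.nuspec` with no content of its own whose only job is to depend on the existing core packages. The package IDs below are hypothetical:

```xml
<?xml version="1.0"?>
<!-- Hypothetical umbrella metapackage: installing it pulls in every core library -->
<package>
  <metadata>
    <id>Company.Core.All</id>
    <version>1.0.0</version>
    <authors>YourCompany</authors>
    <description>Umbrella package that references every core library.</description>
    <dependencies>
      <dependency id="Company.Core.Data" version="1.0.0" />
      <dependency id="Company.Core.Messaging" version="1.0.0" />
      <dependency id="Company.Core.Ui" version="1.0.0" />
    </dependencies>
  </metadata>
</package>
```

Customer projects then reference only `Company.Core.All`, and the growing list of fine-grained packages stays an internal detail.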