I work for a large financial company that manages money for thousands of clients. The company has very complex business processes and operates in a highly dynamic environment.
It is a fairly old company, and over the years it has implemented many heterogeneous systems to automate its processes.
There are rumors in the company that there may be an effort to move to “a system” (re-written from scratch) that consolidates the many disparate systems in place now.
Debate has arisen that questions the value of such a move:
One side claims a single system is easier and cheaper to maintain due to greatly reduced redundancy, i.e., properly factored business logic, a unified data model, reusable UI components, etc.
The other side claims that the company, its clients, and its products (financial instruments) are too complex and too dynamic to be pigeonholed by one system. A single system risks breakage in one component when another is modified. There is no way and no reason to consolidate business logic because it is too client- and product-specific. Time to implement changes will increase because multiple higher-level components will be affected by changes to lower-level components.
Question:
Should the company seek to invest time and resources in creating a unified data model, factoring business logic, and creating reusable code at all layers? Or should they “copy and modify” in order to speed development time, reduce the chance of regression, and reduce complexity? Is there a false dilemma presented here (i.e., will there be a net change one way or the other, and does one answer necessarily preclude the other)? To be clear, this is really a matter of long-term goals and priorities, not a specific product design. The company will never eliminate a certain amount of redundancy, nor would it be desirable to do so.
Your comparison seems to be wrong. You are talking about:
- one big system handling everything in the business
- multiple systems, each handling a separate part of the business
Now the issue with this is very simple: that one big system won’t be ONE. It will very likely be a group of libraries, modules, etc. which, combined, may look like a single application, but the internals won’t be ONE big thing.
Now to answer:
> A single system risks breakage in one component when another is modified
If you have two systems communicating with each other, you have the same risk. In a single application it’s actually easier (through interfaces and integration tests) to make sure this does not happen.
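As a minimal sketch of what I mean (the names PricingService, FlatFeePricingService, and PortfolioValuer are hypothetical, not from the question): one module owns an interface, another module compiles against it, and an integration test wires the real implementations together, so an incompatible change fails the build or the test run instead of production.

```java
// A minimal sketch, not from the question. Requires JUnit 5 on the test classpath.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Contract owned by the pricing module. The portfolio module compiles against
// this interface, so an incompatible change breaks the build, not production.
interface PricingService {
    double priceOf(String instrumentId);
}

// Stand-in implementation for the sketch.
class FlatFeePricingService implements PricingService {
    @Override
    public double priceOf(String instrumentId) {
        return 100.0;
    }
}

// Consumer living in another module of the same application.
class PortfolioValuer {
    private final PricingService pricing;

    PortfolioValuer(PricingService pricing) {
        this.pricing = pricing;
    }

    double value(String... instrumentIds) {
        double total = 0.0;
        for (String id : instrumentIds) {
            total += pricing.priceOf(id);
        }
        return total;
    }
}

// Integration test wiring the real implementations together: if the pricing
// module changes behavior, this fails in CI rather than in production.
class PortfolioValuerIT {
    @Test
    void valuesPortfolioAgainstRealPricing() {
        PortfolioValuer valuer = new PortfolioValuer(new FlatFeePricingService());
        assertEquals(200.0, valuer.value("BOND-1", "EQTY-2"));
    }
}
```

The point is that inside one codebase the compiler and the test suite enforce the contract between components; across two separate systems you typically only find out at runtime.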
> Should the company seek to invest time and resources in creating a unified data model, factoring business logic, and creating reusable code at all layers?
Sounds impossible. Maybe when the world stood still it might be possible to wrap it all in code. In real: That won’t succeed. You will have some parts more nice than others. Which goes to your next question:
> Or should they “copy and modify” in order to speed development time, reduce the chance of regression, and reduce complexity?
This is a business decision you should make case by case. If you can make a million by shipping a copy-paste solution next week, it might be worth it. You can always refactor it later.
Generally: you are asking about a choice that is not the real choice. What I would expect, reading this story, is to approach the project from the start with the business goals in mind. Based on those, see what results the current tools produce and which data they share. Then compare the options to integrate them or to replace them. Replacing everything is likely very expensive, so a hybrid solution may be better: keep the complex but good tools, integrate the rest, and then link the existing complex tools to the general platform.
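To make the “keep and link” part concrete, here is a minimal sketch of that linking via an adapter. RiskCheck, LegacyRiskEngine, and LegacyRiskAdapter are invented names for illustration; the assumption is that the legacy tool exposes its own API that must be translated into the platform’s contract.

```java
// A minimal sketch of the hybrid "keep and link" idea; all names are hypothetical.

// Platform-wide contract that new components code against.
interface RiskCheck {
    boolean withinLimits(String clientId, double exposure);
}

// The existing complex-but-good tool, left untouched. Its hypothetical API
// returns a numeric risk score rather than a pass/fail answer.
class LegacyRiskEngine {
    int computeRiskScore(String account, double amount) {
        return amount > 1_000_000.0 ? 90 : 10; // stand-in logic
    }
}

// Adapter that links the legacy tool to the general platform by translating
// its score into the platform's contract.
class LegacyRiskAdapter implements RiskCheck {
    private final LegacyRiskEngine engine = new LegacyRiskEngine();

    @Override
    public boolean withinLimits(String clientId, double exposure) {
        return engine.computeRiskScore(clientId, exposure) < 50;
    }
}

class HybridDemo {
    public static void main(String[] args) {
        RiskCheck risk = new LegacyRiskAdapter();
        System.out.println(risk.withinLimits("CLIENT-42", 5_000.0)); // prints: true
    }
}
```

The platform only ever sees RiskCheck, so the legacy engine can later be replaced behind the adapter without touching anything else.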
I’m not saying anything about rewriting from scratch; there is plenty of research available on that.