There comes a point where you make design choices and have to debate them with management. In my case I have to defend my positions and design choices before senior management, and it is frustrating that management only strives for performance, while I think stability is a must and performance can be achieved later.
For example, we are facing a design choice about building a recovery mechanism to compensate for the lack of transactionality in certain processes; that is, we need to guarantee that those processes either complete fully or roll back the changes they made to the database. The current code makes this difficult because we are using stored procedures that manage their own transactions. If a process calls 3 or 4 stored procedures, there are 3 or 4 separate transactions, and if we want a recovery process we have to undo those changes (yes, they are already committed at that point, which means we need to issue more transactions against the database to leave it in a consistent state, or at least somehow “ignore” those records).
Of course, I wanted to remove the transactions from the stored procedures and commit the transaction in the code after the process ends, or roll back there if the process throws exceptions.
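To make this concrete, here is a rough sketch (Java/JDBC; the procedure names and the single processId parameter are made up) of what I mean by controlling the transaction from the code. It assumes the procedures no longer commit internally:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class ProcessRunner {

    public void runProcess(Connection con, int processId) throws SQLException {
        boolean previousAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);          // the calling code owns the transaction now
        try {
            call(con, "{call step_one(?)}", processId);
            call(con, "{call step_two(?)}", processId);
            call(con, "{call step_three(?)}", processId);
            con.commit();                  // everything succeeded: commit once
        } catch (SQLException e) {
            con.rollback();                // any failure: nothing is left half-done
            throw e;
        } finally {
            con.setAutoCommit(previousAutoCommit);
        }
    }

    private void call(Connection con, String sql, int processId) throws SQLException {
        try (CallableStatement stmt = con.prepareCall(sql)) {
            stmt.setInt(1, processId);
            stmt.execute();
        }
    }
}
```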
The thing is that management thinks this approach will make the process slow and will also have a large impact on our code. I think that is correct, but I also think that writing the rollback process ourselves is plainly reinventing the wheel, error prone, and IMHO it will take too long to stabilize.
So, given the example above, what would be the most beneficial approach in cases like this? I want a win-win situation, but I think it is simply impossible to agree on this, because every time I try to talk about it I get responses like “there should be another way”, “you should not tell me there is no way around it”, “this is not feasible”, “the performance will degrade”, etc., and I think I will end up building this faux recovery process just to comply with management.
OTOH, I could be wrong, and perhaps I should just do what I am told without complaining.
I guess all the good advice here on PSE to prefer maintainable code over fast code won’t convince your management as long as both of you have only opinions and no facts. So here is my advice on how you might act in your specific situation.
Having stored procedures make their own commits, with no way to control the transactions from outside, makes it really hard to reuse those procedures in a combined process (that is true for any kind of code updating a database, stored procedure or not). I see this as a serious design error, typically made by beginners (though I have seen this kind of thing far too often from more or less “experienced” devs).
The thing is that management thinks this approach […] will also have a large impact on our code.
Of course it will impact your code (it will improve its design). But IMHO, in most cases you can refactor the existing code in a way where the risk of breaking things is not too big. For each of those stored procedures, add a parameter (for example, bool autoCommit) which lets you optionally switch the commit off. Leave the commit enabled by default (that typically has only a small impact on the existing code), and make sure you have not broken anything so far.
Then make a test version of your code (or at least of a representative example) which uses the new feature to control the transaction from outside. This gives you an opportunity to measure the performance impact – and to prove or disprove management's objections. It also allows a direct comparison of the old and the new code, showing how much better the new solution is.
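A rough sketch of what such a test version might look like (Java/JDBC; the procedure name some_step, its parameter order and the three-step process are invented for illustration, and it assumes each procedure now accepts a trailing autoCommit flag):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class TransactionComparison {

    // Old path: every procedure commits on its own (autoCommit = true).
    static long runOldWay(Connection con, int id) throws SQLException {
        long start = System.nanoTime();
        callStep(con, id, true);
        callStep(con, id, true);
        callStep(con, id, true);
        return System.nanoTime() - start;
    }

    // New path: internal commits switched off, one transaction controlled from outside.
    static long runNewWay(Connection con, int id) throws SQLException {
        long start = System.nanoTime();
        con.setAutoCommit(false);
        try {
            callStep(con, id, false);
            callStep(con, id, false);
            callStep(con, id, false);
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.setAutoCommit(true);
        }
        return System.nanoTime() - start;
    }

    private static void callStep(Connection con, int id, boolean autoCommit) throws SQLException {
        try (CallableStatement stmt = con.prepareCall("{call some_step(?, ?)}")) {
            stmt.setInt(1, id);
            stmt.setBoolean(2, autoCommit);   // the new, optional switch
            stmt.execute();
        }
    }
}
```

Run both paths against a realistic data set and let the numbers do the arguing for you.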
Later, you may think about refactoring the changed procedures to a state where the autoCommit parameter is no longer needed.
Code should be maintainable, valid, and efficient, in that order of priority.
There is no point in having high-performance code that doesn't do what it is supposed to do (there goes robustness), and whenever you hit a bug or a performance issue, it is maintainability that will make the difference. I have no experience with databases, but I would be surprised if this principle didn't apply to them too.
Now the difficult part is getting management to accept that. And by the way, is performance actually a problem yet?
What you are looking for are two things: correctness and efficiency. It just so happens that sufficient simplicity will give you evident correctness and make efficiency easy to implement.
However, your system is long past that point. Unless there is some drastically different approach to the problem that will greatly simplify things, achieving both correctness and efficiency will be hard. If you want both to emerge from the same code, you will probably fail.
The only approach here is to use tests. You will have a separate body of simple code that is not concerned with efficiency at all, only with correctness. Once those tests are in place, you can implement the actual solution and wrestle with it until it is fast enough while all the tests still pass.
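A hedged sketch of what I mean (JUnit 5; simpleTransfer and fastTransfer are placeholders for your real code): a deliberately naive reference implementation that only cares about correctness, and a test that pins the optimized code to it:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class TransferTest {

    // Deliberately naive, obviously-correct version: the oracle.
    int simpleTransfer(int balance, int amount) {
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        return balance - amount;
    }

    // Stand-in for the optimized production code under test.
    int fastTransfer(int balance, int amount) {
        if (amount > balance) throw new IllegalArgumentException("insufficient funds");
        return balance - amount;
    }

    @Test
    void fastVersionMatchesSimpleVersion() {
        for (int balance = 0; balance < 100; balance++) {
            for (int amount = 0; amount <= balance; amount++) {
                assertEquals(simpleTransfer(balance, amount),
                             fastTransfer(balance, amount));
            }
        }
    }
}
```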
Well-performing code that is unstable is no good.
So the first and foremost goal should be to achieve stability in the code, and the way to do that is TESTS: build yourself a great test suite.
Once you have a great test suite backing you, do load/stress testing, and if you find performance lacking, then and only then should you try to optimize for performance.
And if you do something wrong while optimizing, your tests will warn you well before production.
Steps:
- Tests, tests and tests
- Flexible, Readable, Testable code
- Load testing
- Performance optimization, if load/stress testing shows it is actually needed
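To make the load/stress-testing step concrete, a minimal sketch (the thread count, iteration count and runBusinessProcess are arbitrary placeholders) that fires the process from many threads and reports average and worst latency:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        final int calls = 1000;
        ExecutorService pool = Executors.newFixedThreadPool(20);
        List<Future<Long>> timings = new ArrayList<>();

        for (int i = 0; i < calls; i++) {
            timings.add(pool.submit(() -> {
                long start = System.nanoTime();
                runBusinessProcess();            // placeholder for the real process
                return System.nanoTime() - start;
            }));
        }

        long worstNanos = 0;
        long totalNanos = 0;
        for (Future<Long> f : timings) {
            long t = f.get();
            worstNanos = Math.max(worstNanos, t);
            totalNanos += t;
        }
        pool.shutdown();
        System.out.printf("avg %.1f ms, worst %.1f ms%n",
                totalNanos / (double) calls / 1_000_000, worstNanos / 1_000_000.0);
    }

    private static void runBusinessProcess() {
        // call the real process here
    }
}
```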