In this question I asked: Is the cost of designing more levels of abstraction to allow loose coupling worth the benefits, or not?
People said that it’s often worth the cost, but that you should gauge whether your application will actually use that underlying design later on, or whether building it would just be a waste of time.
My new question is:
Should one create the infrastructure that allows flexibility and maintainability in advance – making development ‘cleaner’ and more ‘organized’, since everything is then built on existing infrastructure and abstraction levels?
With this approach, you design the underlying infrastructure without knowing much about how things will actually be implemented later. You might therefore write code that never gets used and waste your time – but development will be cleaner and more organized.
Or should one create infrastructure to increase flexibility and make things more loosely coupled during the implementation of something – thus wasting less time on things that might never actually be used?
This approach keeps you from writing code that won’t be used, but it forces you to retrofit abstraction levels and designs onto existing code – modifying code that already works, instead of writing the underlying designs first and then the code that uses them.
Example for approach 1: “Okay, now we start making this relatively large part of the application. This part will generally have to do with reacting to a couple of GUIs. Let’s make an underlying level of abstraction for this part of the program, to later use when we implement it.”
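To make that concrete, here’s a minimal Python sketch of approach 1 (all names are hypothetical, invented purely for illustration): the abstraction layer is written first, and the code that actually reacts to the GUIs is implemented against it later.

```python
from abc import ABC, abstractmethod

# Approach 1: the abstraction is designed up front, before any of the
# concrete GUI-reacting code exists.
class GuiEventHandler(ABC):
    """Base abstraction for anything that reacts to a GUI."""

    @abstractmethod
    def handle(self, event: str) -> None:
        """React to a single event coming from a GUI."""

# Later, when that "relatively large part of the application" is
# actually implemented, it is built on top of the pre-made layer:
class SettingsPanelHandler(GuiEventHandler):
    def handle(self, event: str) -> None:
        print(f"settings panel reacting to {event!r}")

SettingsPanelHandler().handle("apply-clicked")
```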
Example for approach 2: “Okay, I’m in the middle of implementing this part of the program. While working on it, I can see that this part had better be loosely coupled. Let’s make another level of abstraction to allow that, and modify the existing code to use it.”
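And a matching sketch of approach 2 (again, every name here is hypothetical): the code starts out concrete and coupled, and the abstraction is extracted only once the need shows up – which means modifying the existing code.

```python
from abc import ABC, abstractmethod

# Before the refactoring, the screen was hard-wired to one data
# source (tight coupling), roughly:
#
#     class ReportScreen:
#         def refresh(self):
#             rows = MySqlDatabase().query("SELECT ...")
#             ...
#
# Mid-implementation the coupling becomes a problem, so a new level
# of abstraction is introduced and the existing code is modified:
class DataSource(ABC):
    @abstractmethod
    def fetch_rows(self) -> list:
        """Return the rows the screen should display."""

class ReportScreen:
    def __init__(self, source: DataSource) -> None:
        self.source = source  # any DataSource implementation works now

    def refresh(self) -> None:
        for row in self.source.fetch_rows():
            print("rendering", row)

# The loose coupling pays off immediately, e.g. in tests:
class FakeSource(DataSource):
    def fetch_rows(self) -> list:
        return ["row 1", "row 2"]

ReportScreen(FakeSource()).refresh()
```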
Which approach makes more sense, or is more common?
Frankly, both.
It’s nigh impossible to add abstraction layers after all of the code is written and tied together in dozens of places. Sadly, it’s also nigh impossible to get all of the abstraction layers just right without actually building the thing and seeing where it goes.
So in practice, you end up doing both. You add some abstraction at the beginning, trying to walk the line between flexibility and over-engineering. You create flexibility where it’s very likely to be necessary, or where being wrong in your guess would be catastrophic. Long story short, you mitigate risk.
And then you add or remove abstraction over time. Refactoring is key to making code “fit”. Sometimes you find that things are way too complex; sometimes you find that they’re too tightly coupled. Clean it up as you go.
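The “too complex” case deserves a quick sketch of its own (a contrived, hypothetical example): a layer with exactly one implementation, used in exactly one place, can usually be collapsed back into plain code during one of those cleanups.

```python
# Before cleanup: speculative layers that never earned their keep.
#
#     strategy = GreetingStrategyFactory().create(Locale.DEFAULT)
#     GreeterService(strategy).greet("world")
#
# After cleanup: the one behavior that actually existed remains.
def greet(name: str) -> None:
    print(f"Hello, {name}!")

greet("world")
```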
This feel for how much is “enough”, and for where the abstraction should live, is the key skill that develops over time as you make designs and see how they succeed (or, more often, fail).
I’d advise against approach 1. Writing code is a process of discovery – just like no plan survives contact with the enemy, every design will have flaws that are only caught once you start implementing it. Besides, you can abstract anything in an arbitrary number of directions and levels. If you design for too many “what-ifs”, your architecture won’t be aligned with your needs; if taken far enough, you end up with ridiculous things like Greenspun’s Tenth Rule or the Inner-Platform Effect (think about it – if everything is customizable, what you’ll have left is a programming language.)
On top of that, you’re chasing a moving target (changing requirements). Even if you miraculously get the design correct now, there’s no guarantee it’ll still be relevant later, as you said.
In short, it’s a slippery slope. It doesn’t deliver immediate value, and there’s a good chance it’ll steer you in the wrong direction.
The Python community has a wonderful piece of lore: “Guido’s Time Machine”. That is, many marvel at Guido van Rossum’s foresight in setting up syntax and mechanisms in Python that simply proved to work cleanly and intuitively over time.
Not everyone has a time machine, though, and anticipating requirements can be tricky.
In agile, the rule is “program close to the requirements”. In TDD it’s “the simplest solution that works.”
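As a toy illustration of “the simplest solution that works” (my own sketch, not from either answer): the test pins down the one requirement that exists today, and the implementation does no more than that.

```python
import unittest

def total(subtotal: int) -> int:
    # Flat $5 shipping, hard-coded: the simplest thing that passes.
    # No ShippingPolicy interface, no regional-rate abstraction --
    # those wait until a real requirement asks for them.
    return subtotal + 5

class TestPricing(unittest.TestCase):
    def test_flat_shipping_fee(self):
        self.assertEqual(total(100), 105)

if __name__ == "__main__":
    unittest.main()
```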
However, there’s a balance between brute-force implementation and abstracting to anticipate future requirements.
IMHO it’s important to a) keep an eye out for nice abstractions that make for clean, flexible, extensible code, and b) watch out for turning your project into a swamp of over-generalized infrastructure that doesn’t actually do anything.
A lot of the time, I’ll work on the broad-minded stuff in my spare time, in side projects. But I still wince in meetings when an engineer, at any level of experience, gets excited about some mechanism that’s going to Solve the World’s Problems.
At the end of the day, make sure you’re putting functionality in front of customers. But keep your powers of abstraction sharp with targeted application.