There has been a trend in the Ruby/Rails community to create lots of small, single-purpose objects (SRP anyone?), each living in its own file. These are often extracted from large, bloated model files.
This can be a good thing, but I think it often just trades one problem (bloated models with hundreds of lines of code) for another, worse one (hundreds of tiny object files scattered across subdirectories several layers deep).
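To make the kind of extraction I mean concrete, here is a minimal sketch; the names (Order, PriceCalculator, the pricing logic itself) are hypothetical, not from any particular codebase:

```ruby
# Before: a bloated ActiveRecord model that computes its own pricing.
# class Order < ApplicationRecord
#   def total_price
#     line_items.sum { |item| item.unit_price * item.quantity } * (1 + tax_rate)
#   end
# end

# After: the logic extracted into its own small object, typically placed
# somewhere like app/services/price_calculator.rb.
class PriceCalculator
  def initialize(order)
    @order = order
  end

  def total
    subtotal * (1 + @order.tax_rate)
  end

  private

  # Assumes the order exposes line_items with unit_price and quantity.
  def subtotal
    @order.line_items.sum { |item| item.unit_price * item.quantity }
  end
end
```

Each such extraction is small and tidy on its own; the question is what dozens of them do to the reader.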
Are there any known effects on the cognitive load difference between lots of small files and fewer large ones?
I don’t know the answer, but it’s a question I’ve often asked myself while designing the structure of my systems. Lately I’ve thought about it this way: if the SRP is all about “a class having only one reason to change”, then what does that tell us about change? The answer is: nothing. The phrase only tells us about classes, not about change itself.
I’ve tried to formulate a corollary about change, but haven’t yet come up with anything particularly satisfying. There’s something I like about this answer (https://softwareengineering.stackexchange.com/a/213586/87944) to another question: it draws a distinction between the ease of adding new types versus adding new operations, and that got me thinking.
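To illustrate that types-vs-operations distinction, here is a toy sketch (the Shape classes are made up for the example):

```ruby
# In an object-oriented decomposition, adding a new *type* is cheap:
# one new class implements the existing operations, and nothing else changes.
class Circle
  def initialize(radius)
    @radius = radius
  end

  def area
    Math::PI * @radius**2
  end
end

class Square
  def initialize(side)
    @side = side
  end

  def area
    @side**2
  end
end

# Adding a new *operation* (say, perimeter) is the expensive axis:
# every existing class has to be reopened and changed.
```

Which kind of change your system tends to see determines how painful a given decomposition will be.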
I think in the end it comes down to the cohesion of your system. A well-designed system with high cohesion means that change is localised in a predictable way across classes, regardless of how many there are.
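As a hypothetical illustration of what “localised in a predictable way” means: if all knowledge about one concern lives behind one cohesive object, a change to that concern touches exactly one file, no matter how many small files the system has. The TaxPolicy object and its rates below are invented for the example:

```ruby
# All tax knowledge sits behind one cohesive object, so a change in
# tax law is confined to this file.
class TaxPolicy
  RATES = { "DE" => 0.19, "FR" => 0.20 }.freeze
  DEFAULT_RATE = 0.0

  def rate_for(country_code)
    RATES.fetch(country_code, DEFAULT_RATE)
  end

  def tax_on(amount, country_code)
    amount * rate_for(country_code)
  end
end

# Callers depend only on the abstraction, never on the rate table:
policy = TaxPolicy.new
puts policy.tax_on(100.0, "DE") # => 19.0
```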
So to sum up: the cognitive load only becomes an issue when the time comes to change the classes, and if the abstractions (types) in your system have high cohesion, you should be able to reason about the system without undue cognitive load. If you find that you can’t reason about the code even though it seems to follow the SRP, you probably have another S.O.L.I.D. violation, most likely the D (Dependency Inversion) or the L (Liskov Substitution).
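For example, here is a minimal sketch of the kind of Liskov violation I mean (the Document classes are hypothetical): each class is tiny and single-purpose, yet you can no longer reason locally about any code written against the base type.

```ruby
class Document
  def publish
    # ...publish the document...
  end
end

# A subclass that narrows behaviour: callers now have to know which
# concrete class they hold, which is exactly the non-local reasoning
# that drives cognitive load up.
class DraftDocument < Document
  def publish
    raise NotImplementedError, "drafts cannot be published"
  end
end

# Code written against Document breaks for some subtypes:
[Document.new, DraftDocument.new].each(&:publish) # raises for the draft
```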