I have code that looks like this:
tr.t.findIndexSmoothed(arg0.getX(), arg0.getY());
“tr” and “t” are objects. Is it bad practice to reach all the way down the object hierarchy to call methods? The only reason I can think of is that it breaks encapsulation, and if that’s the case, can anybody tell me why that poses a problem? Also, does this code structure inhibit performance in any way?
3
The Law of Demeter suggests that you shouldn’t call methods on objects two layers down like that.
The idea is that, in your case, you should have a method findIndexSmoothed on tr which in turn calls findIndexSmoothed on t.
This makes your code adaptable. Callers of tr no longer depend on t at all, so the nature of tr.findIndexSmoothed can change without necessarily changing the nature of t.findIndexSmoothed, which may be called by other code you don’t intend to change.
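Here is a minimal sketch of that delegation. The class names Tracker and Table and the method signature are hypothetical, since the question doesn’t show them:

```java
// Hypothetical stand-ins for the classes of tr and t.
class Table {
    int findIndexSmoothed(int x, int y) {
        // ... the real lookup logic would live here ...
        return -1;
    }
}

class Tracker {
    private final Table t = new Table();

    // Delegating wrapper: callers depend only on Tracker's interface,
    // so Tracker can change how (or whether) it uses Table without
    // breaking them.
    int findIndexSmoothed(int x, int y) {
        return t.findIndexSmoothed(x, y);
    }
}
```

The call site then becomes tr.findIndexSmoothed(arg0.getX(), arg0.getY()), and t no longer needs to be exposed at all.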
However, it is commonly noted that it’s pretty rare to really need that adaptability, and you’ll have put yourself in a position where maintenance of t.findIndexSmoothed now requires maintenance of tr.findIndexSmoothed as well. So, in some cases, you will increase your workload.
So, the answer to your question, as with so many style questions in programming, is: use your common sense. Keep the Law of Demeter in mind, but never treat it as an inflexible truth.
12
I think there is nothing wrong with it as long as it makes sense from a design point of view. Just ask yourself these questions:
- Is it sensible for tr to contain and expose t?
- Is findIndexSmoothed() the responsibility of t?
If you answer yes to both questions, then it is a reasonable way to go. The only thing I would do differently is to make t private and access it via a getter.
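For example (again with hypothetical class names, since the question doesn’t show them):

```java
// Hypothetical stand-ins for the classes in the question.
class Table {
    int findIndexSmoothed(int x, int y) {
        return -1; // placeholder for the real logic
    }
}

class Tracker {
    private final Table t; // private, no longer reachable as tr.t

    Tracker(Table t) {
        this.t = t;
    }

    // Accessor: exposes t without exposing the field itself, which
    // leaves room to add validation, lazy initialization, etc. later.
    Table getT() {
        return t;
    }
}

// The call from the question then becomes:
// tr.getT().findIndexSmoothed(arg0.getX(), arg0.getY());
```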
Also, I think it actually fits well with the single responsibility principle. What if t had 5 different methods? Would you delegate all of them from tr? How would you name them? findIndexSmoothedInTr()?
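To illustrate the problem (all method names here are hypothetical):

```java
class Table {
    int findIndexSmoothed(int x, int y) { return -1; }
    int findIndexExact(int x, int y)    { return -1; }
    void clearIndices()                 { }
}

// Pure delegation forces Tracker to mirror every Table method,
// and each new Table method means another forwarding stub here:
class Tracker {
    private final Table t = new Table();

    int findIndexSmoothed(int x, int y) { return t.findIndexSmoothed(x, y); }
    int findIndexExact(int x, int y)    { return t.findIndexExact(x, y); }
    void clearIndices()                 { t.clearIndices(); }
}
```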