I’ve been told that code reuse and abstraction in OOP is far more difficult to achieve than it is in FP, and that all the claims that have been made about Object Orientedness (for lack of a better term) being great at reusing code have been flat-out lies.
So I was wondering if anyone here could tell me why that is, and perhaps show me some code to back up these claims. I’m not saying I don’t believe you functional programmers; it’s just that I’ve been “indoctrinated” to think object-orientedly, and thus can’t (yet) think functionally enough to see it myself.
To quote Jimmy Hoffa (from an answer to one of my previous questions):
The cake is a lie, code reuse in OO is far more difficult than in FP. For all that OO has claimed code reuse over the years, I have seen it follow through a minimum of times. (feel free to just say I must be doing it wrong, I’m comfortable with how well I write OO code having had to design and maintain OO systems for years, I know the quality of my own results)
That quote is the basis of my question: I want to see if there’s anything to the claim or not.
A common argument against object-oriented reuse is that, for the majority of objects, you can’t use just them. You need them plus everything they depend on, which tends to be some framework to support their needs, a slew of interfaces you need to supply or implement, and possibly a pile of configuration to tie it all together.
Frankly, functional programming of any complexity suffers from the same sort of dependency chaining, but the impact is lessened, since almost all of the dependencies tend to boil down to list manipulation fairly quickly.
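To illustrate that last point (a minimal C# sketch of my own, not from the answer above): a helper written against nothing but `IEnumerable<T>` carries no framework or configuration baggage, so it can be reused with any source of values.

```csharp
using System.Collections.Generic;
using System.Linq;

public static class SequenceUtils
{
    // The only dependency here is the sequence abstraction itself, so this
    // extension method works with arrays, lists, LINQ queries, and so on.
    public static IEnumerable<T> TakeEvery<T>(this IEnumerable<T> source, int n)
    {
        return source.Where((item, index) => index % n == 0);
    }
}
```

For example, `new[] { 1, 2, 3, 4, 5, 6 }.TakeEvery(2)` yields 1, 3, 5.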
…all the claims that have been made about Object Orientedness (for lack of a better term) being great at reusing code have been flat out lies
Perhaps the best reason why people think OOP produces code that is hard to re-use is that they aren’t even using OO methodologies! Just because you use an OO language does not mean you are applying proper OO techniques.
I think Telastyn makes a good point about dependencies, so I would like to propose a solution to that argument.
Code re-use is in fact a difficult thing to achieve in OOP. It requires careful attention to the overall architecture and layering of the application. If you lack experience, you may put little or no time into thinking about these things and just start coding away, or you may do this simply because it’s the easiest way. Then you end up with tightly coupled classes that are not modular and cannot be ripped out and re-used.
Aiming for loosely coupled, small, modular classes is one of the best ways to achieve code re-use. Dependency Injection is a great tool for doing this (I am referring to the pattern, not any particular DI/IoC framework).
Here is an example to demonstrate this:
Hard-Coded
class WebScraper
{
    private readonly Database db = new Database();

    public string Scrape(string url)
    {
        string content = new WebClient().DownloadString(url);
        db.Save(content); // persistence is hard-wired to the database
        return content;
    }
}
Dependency Injection
class WebScraperPro
{
    private readonly IStorage storage;

    public WebScraperPro(IStorage storage)
    {
        this.storage = storage;
    }

    public string Scrape(string url)
    {
        string content = new WebClient().DownloadString(url);
        storage.Save(content); // persistence is injected, not hard-wired
        return content;
    }
}
In the first class I have hard-coded my database code directly into the class. This makes it hard to re-use: what if I don’t have a network connection to reach the database server, or what if I don’t want to store the content in a database at all? If I wanted to just scrape the website, I would have to write a new class, even though all the functionality would be the same except for the persistence.
On the other hand, when we use Dependency Injection, we decouple the database from the business logic. I can pass in a TextStorage object that just writes the content to a text file if I don’t have network access, or I can pass in a NoStorage object if I don’t want to store it anywhere. This makes the class far easier to re-use. Also notice that we have now separated our Logic layer from our Data layer. This is the same reason why we do not want to bake UI code into the Logic layer.
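To make that concrete, here is one possible sketch of the IStorage interface and the two swappable implementations. The answer only names TextStorage and NoStorage, so their exact shapes below are my assumption:

```csharp
using System.IO;

public interface IStorage
{
    void Save(string content);
}

// Writes the content to a local text file; no database or network needed.
public class TextStorage : IStorage
{
    private readonly string path;

    public TextStorage(string path)
    {
        this.path = path;
    }

    public void Save(string content)
    {
        File.WriteAllText(path, content);
    }
}

// Discards the content entirely; useful when you only care about the
// string that Scrape() returns.
public class NoStorage : IStorage
{
    public void Save(string content)
    {
        // intentionally left empty
    }
}
```

Swapping behaviour then becomes `new WebScraperPro(new TextStorage("scrape.txt"))` versus `new WebScraperPro(new NoStorage())`; the scraper class itself never changes.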
This is just a contrived example off the top of my head (you could move the persistence to a higher level), but it should help you grasp the concept of removing hard-coded dependencies.
When people use static classes everywhere, they do not even realize that they are binding their code to those dependencies. When you instantiate concrete classes inside other classes, you are doing the same thing. Programming to an interface and removing hard-coded dependencies is a big step towards writing code that is re-usable. I also find that writing to an interface makes me think harder about the responsibility of the class, and I end up writing smaller classes that follow the single responsibility principle. This makes everything more modular and more flexible.
I cannot comment on the other end of the spectrum, whether FP is better for re-use or not, because I do not have experience with FP. But I can say that it is possible to write highly re-usable code in OOP if it is done properly.