In other words, is there a Python design-related reason for it to be so? Functions like map, filter, reduce, etc. are just plain functions. Is it just a poor design choice (if it is a good one, please explain)?
For example, in Scala you can chain collection methods like smth.map(func1).reduce(func2). It seems much more convenient to me.
Which class would you put these methods on?
In Python, I can use map on lists, tuples, dictionaries, files, strings, sets, arrays, etc. There is no collection base class on which to put a map, reduce, filter, etc.
Now, Python could have had a collection class that all these different things inherited from. But that would really go against the “spirit” of Python. In Python everything is duck typed. Things work by virtue of the fact that you have the right methods. You can use map, reduce, and filter on anything that defines __iter__. Having to subclass a collection class would go against that.
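For example, here is a minimal sketch (the Countdown class is a made-up example, not from the original answer) of a user-defined class that only defines __iter__, yet works with map, filter, and reduce:

from functools import reduce  # reduce lives in functools in Python 3

class Countdown:
    # Iterable purely because it defines __iter__; it inherits from no collection base class
    def __init__(self, start):
        self.start = start

    def __iter__(self):
        return iter(range(self.start, 0, -1))  # start, start-1, ..., 1

print(list(map(lambda y: y * 10, Countdown(3))))         # [30, 20, 10]
print(list(filter(lambda y: y % 2 == 0, Countdown(3))))  # [2]
print(reduce(lambda a, b: a + b, Countdown(3)))          # 6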
As it is, map/reduce/filter aren’t really considered the Pythonic solutions.
Instead of map(lambda y: y + 1, x)
use [y + 1 for y in x]
Instead of filter(lambda y: y % 2 == 0, x)
use [y for y in x if y % 2 == 0]
Instead of reduce(lambda a, b: a + b, x)
use
total = 0
for y in x:
    total += y
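As a quick, self-contained check (the list x here is just illustrative data), the comprehension and loop forms give the same results as the functional calls:

from functools import reduce

x = [1, 2, 3, 4]

# map vs. list comprehension
assert list(map(lambda y: y + 1, x)) == [y + 1 for y in x]

# filter vs. comprehension with a condition
assert list(filter(lambda y: y % 2 == 0, x)) == [y for y in x if y % 2 == 0]

# reduce vs. an explicit accumulation loop
total = 0
for y in x:
    total += y
assert reduce(lambda a, b: a + b, x) == total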
I think this is just a historical artifact. These functions were introduced quite a while ago, when fluent interfaces were not all the rage. Since then everyone has got used to them. (So yes, you can write it off as “bad design”.)
Could these functions be retrofitted onto lists? Possibly, but it was not done, and should not have been.
First, “There should be one-- and preferably only one --obvious way to do it.”
Second, and most importantly, map, reduce and many other functions work not just with lists but with anything that’s iterable. They work on tuples, sets, dicts, and, most importantly, any user-defined classes that implement iteration (via __iter__) or generators (via yield).
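To make that concrete, here is a minimal sketch (the evens generator is made up for illustration) showing map and reduce consuming a plain generator function:

from functools import reduce

def evens(limit):
    # A generator: iterable purely by virtue of using yield
    n = 0
    while n < limit:
        yield n
        n += 2

print(list(map(lambda y: y * y, evens(10))))  # [0, 4, 16, 36, 64]
print(reduce(lambda a, b: a + b, evens(10)))  # 20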
So, every iterable or generator would somehow have to receive its own implementations of map, reduce, and probably a bunch of other functions that accept an iterable / generator and return something compatible. This would also require that any existing class that happens to be iterable not expose names like map and reduce. Code-breaking changes are very much frowned upon by Python maintainers and community alike.
Instead, you can wrap things in your own class that offers the additional interface, and enjoy either the fluent style you mention, a pipe style, or anything else; a sketch of such a wrapper follows below.
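As a rough sketch (the Pipeline class is made up for illustration, not a standard-library feature), such a wrapper might look like this:

from functools import reduce

class Pipeline:
    # A thin wrapper that adds a fluent, chainable interface to any iterable
    def __init__(self, iterable):
        self._iterable = iterable

    def map(self, func):
        return Pipeline(map(func, self._iterable))

    def filter(self, func):
        return Pipeline(filter(func, self._iterable))

    def reduce(self, func):
        # reduce terminates the chain and returns a plain value
        return reduce(func, self._iterable)

    def to_list(self):
        return list(self._iterable)

# The chained style from the question, on top of any ordinary Python iterable
result = (Pipeline(range(10))
          .filter(lambda y: y % 2 == 0)
          .map(lambda y: y + 1)
          .reduce(lambda a, b: a + b))
print(result)  # 1 + 3 + 5 + 7 + 9 = 25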
Some considerations regarding the second part of the question:
Is it just a poor design choice (if it is a good one, please explain)?
For example, in Scala you can chain collection methods like
smth.map(func1).reduce(func2). It seems much more convenient to me.
I do not think that it is poor design: Python supports both procedural and object-oriented styles, and one of the two styles had to be picked for higher-order functions on collections. Ruby and Scala favour the object-oriented style and therefore chose higher-order methods.
Regarding the convenience of method chaining, you have a correspondingly compact syntax in languages supporting currying and function composition, e.g. in Haskell you can write
(map func1 . filter func2) smthg
Probably you can do something similar in Python (see e.g. PyMonad), even though it would not use the standard functions.
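As a rough illustration (the compose helper below is hand-rolled for this answer, not part of the standard library or PyMonad), a similar composition can be written in plain Python:

from functools import partial, reduce

def compose(*funcs):
    # Right-to-left function composition, mirroring Haskell's (.)
    return reduce(lambda f, g: lambda arg: f(g(arg)), funcs)

def func1(y):
    return y + 1

def func2(y):
    return y % 2 == 0

# Equivalent in spirit to (map func1 . filter func2) smthg
pipeline = compose(partial(map, func1), partial(filter, func2))
print(list(pipeline(range(10))))  # [1, 3, 5, 7, 9]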