I have a question for NumPy experts. Consider a NumPy scalar, c = np.arange(3.0).sum(). If I multiply it by an instance of a custom sequence class such as
import numpy as np

class S:
    def __init__(self, lst):
        self.lst = lst
    def __len__(self):
        return len(self.lst)
    def __getitem__(self, s):
        return self.lst[s]

c = np.arange(3.0).sum()
s = S([1, 2, 3])
print(c * s)
it works and I get array([3., 6., 9.]).
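I suspect this works because NumPy can build an array from any object that exposes the sequence protocol (__len__ and __getitem__). At least, converting an S instance directly seems consistent with that (my own quick check, not something I found in the NumPy docs):

import numpy as np

# Reusing the S class defined above: np.asarray appears to walk
# __len__/__getitem__ to build an ordinary integer array.
s = S([1, 2, 3])
print(repr(np.asarray(s)))  # array([1, 2, 3])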
However, this does not work with a list. For instance, if I make S inherit from list and try the same thing,
import numpy as np

class S(list):
    def __init__(self, lst):
        self.lst = lst
    def __len__(self):
        return len(self.lst)
    def __getitem__(self, s):
        return self.lst[s]

c = np.arange(3.0).sum()
s = S([1, 2, 3])
print(c * s)
and I get "TypeError: can't multiply sequence by non-int of type 'numpy.float64'".
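The subclassing itself does not seem to be the issue; a plain list gives me the same error, so NumPy scalars apparently refuse to broadcast over lists in general:

import numpy as np

c = np.arange(3.0).sum()
# A plain list fails exactly like the list subclass above:
print(c * [1, 2, 3])
# TypeError: can't multiply sequence by non-int of type 'numpy.float64'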
So how does NumPy distinguish between the two cases?
I am asking because I want to prevent this implicit conversion for my S class without inheriting from list.
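The closest thing I have found is the __array_ufunc__ = None opt-out from NEP 13. A sketch like the one below seems to make the multiplication raise a TypeError instead of silently converting, though I am not sure this is guaranteed to cover NumPy scalars (as opposed to arrays) in every version:

import numpy as np

class S:
    # NEP 13 opt-out: with __array_ufunc__ set to None, NumPy operands
    # should return NotImplemented rather than coercing S to an array.
    # (Assumption: current NumPy applies this to scalars like np.float64,
    # not only to ndarrays.)
    __array_ufunc__ = None

    def __init__(self, lst):
        self.lst = lst
    def __len__(self):
        return len(self.lst)
    def __getitem__(self, s):
        return self.lst[s]

c = np.arange(3.0).sum()
s = S([1, 2, 3])
# Expected: TypeError (unsupported operand types), since S defines no __rmul__.
print(c * s)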