I am trying to grok when I should expect to need `map_batches()` and `map_elements()` in polars. In my example below I am able to obtain the same results (`z_1`:`z_3`) regardless of which method I use.

What qualities of the UDF, `my_sum()`, would need to change in order for differences to emerge between my three methods of computing `z`? I do understand that `map_batches()` computes in parallel, but I am unclear about how I can know what qualities of my UDF will require vs. forbid its use.
pip install numpy==2.1.0
pip install polars==1.6.0
import numpy as np
import polars as pl
from polars import col, lit

TRIALS = 1
SIMS = 10

np.random.seed(42)
df = pl.DataFrame({
    'a': np.random.binomial(TRIALS, .45, SIMS),
    'b': np.random.binomial(TRIALS, .5, SIMS),
    'c': np.random.binomial(TRIALS, .55, SIMS)
})
df.head()

def my_sum(x, y):
    z = x + y
    return z
df2 = df.with_columns(
    z_1 = my_sum(col("a"), col("b")),
    z_2 = pl.struct("a", "b")
        .map_elements(lambda x: my_sum(x["a"], x["b"]), return_dtype=pl.Int64),
    z_3 = pl.struct("a", "b")
        .map_batches(lambda x: my_sum(x.struct["a"], x.struct["b"]))  # note use of "x.struct[]" with map_batches
)
df2.head()
Case By Case
`z_1 = my_sum(col("a"), col("b"))`
The reason this works is that `col("a")` and `col("b")` are each a `pl.Expr`, which overloads `__add__`, so when you do `col("a") + col("b")` you get the expression `col("a").add(col("b"))`. This will be the most performant of the three but, beyond basic arithmetic, this pattern isn't very extensible.
`pl.struct("a", "b").map_elements(lambda x: my_sum(x["a"], x["b"]), return_dtype=pl.Int64)`
This is essentially going to create an empty list and then run a `for` loop over all the rows of your df. For each row it turns the 'a' and 'b' columns into a `dict` of `{'a': value, 'b': other_value}`, your lambda function adds the two elements together, and the result is appended to the list. When it's done with that, it converts the list into a Series, which you see in your output df.
`pl.struct("a", "b").map_batches(lambda x: my_sum(x.struct["a"], x.struct["b"]))`
This first combines your 'a' and 'b' columns into a single struct column where each is a field. It then passes that one column to your function as a Series. In this case `x.struct["a"]` is its own Series, as is `x.struct["b"]`, and since `pl.Series` overloads `+`, you're dispatching the addition to polars, which does vectorized addition. To put that a different way, your function runs only once for the whole Series, and then you see those results in your output.
One misconception: `map_batches` doesn't run in parallel. It is still subject to the Python GIL. Compared to the `map_elements` treatment it can *feel* like being parallel, since it benefits from vectorization whereas `map_elements` does each operation one at a time. But if you have several `map_batches` operations then, unlike pure polars expressions, polars can't use all your computer's cores in parallel.
Which to Favor
You should always favor pure polars expressions. If there's something you need to do for which no expression exists, for example a numpy or scipy function, you'll want `map_batches`. Only if you're using a function that expects single scalar values one at a time, and outputs scalars one at a time, should you ever use `map_elements`.