I am looking for numerically stable ways to evaluate the functions below. Since my application concerns the t distribution, I am using the t distribution here as an example.
Log CDF
# Naive Python implementation of the function I need
import scipy.stats
import numpy as np

def t_log_cdf(x, df):
    p = scipy.stats.t.cdf(x, df=df)
    return np.log(p)
In SciPy, there is a logcdf, but it is not sufficiently numerically stable. Compare the behavior of the scipy function:
>>> import scipy
>>> import numpy as np
>>> scipy.stats.t.logcdf(np.linspace(0, 60, 5), df=15)
array([-6.93147181e-01, -9.70638014e-11, -4.21884749e-15, 0.00000000e+00,
0.00000000e+00])
to the equivalent in R:
# R code
> pt(seq(0, 60, length.out = 5), df = 15, log.p = TRUE)
[1] -6.931472e-01 -9.706380e-11 -4.157404e-15 -1.012752e-17 -1.384523e-19
TensorFlow Probability offers an implementation, but its results suffer from the same stability problem as SciPy’s.
I could not find an implementation in PyTorch (which does not offer the t distribution’s CDF at all).
Quantile function based on log probability:
# Naive Python implementation of the function I need
import scipy.stats
import numpy as np

def t_log_ppf(log_p, df):
    p = np.exp(log_p)
    return scipy.stats.t.ppf(p, df=df)
I could not find quantile functions that operate on log probabilities in SciPy, PyTorch, or TensorFlow Probability.
The problem with log(t.cdf(x, df)) is that the numerically computed CDF saturates at 1.0 for values of x that are only moderately large. Beyond that, all precision is lost and you will get 0.0. Fortunately, there is a trick you can use to avoid that problem. The sf (survival function, aka complement of the CDF) method computes 1 - cdf(x, df), so log(t.cdf(x, df)) = log(1 - t.sf(x, df)) = log1p(-t.sf(x, df)), where log1p(z) is the function that computes log(1 + z). The survival function maintains pretty good precision for large values of x, so you can get reasonably high precision for the log of the CDF by using this formulation for positive x. Use the regular formulation, log(t.cdf(x, df)), for negative x.
For the log PPF function, do the math to invert both formulations: for log_p < log(0.5), simply pass exp(log_p) to ppf(); otherwise the complementary probability is 1 - p = -expm1(log_p), which can be passed to isf() (the inverse survival function) without loss of precision.
Here’s some code that implements this idea:
import numpy as np
from scipy import stats

def t_log_cdf_scalar(x, df):
    if x < 0:
        return np.log(stats.t.cdf(x, df))
    else:
        # 1 - sf(x) = cdf(x); log1p keeps precision where cdf(x) is near 1.
        return np.log1p(-stats.t.sf(x, df))

def t_log_ppf_scalar(log_p, df):
    # log(0.5) = -0.6931471805599453
    if log_p < -0.6931471805599453:
        # p < 0.5: exp(log_p) loses no precision here.
        return stats.t.ppf(np.exp(log_p), df)
    else:
        # p >= 0.5: -expm1(log_p) = 1 - p, computed without cancellation.
        return stats.t.isf(-np.expm1(log_p), df)

# "Vectorize" the scalar functions. This is for convenience,
# not performance.
t_log_cdf = np.vectorize(t_log_cdf_scalar, otypes=[float])
t_log_ppf = np.vectorize(t_log_ppf_scalar, otypes=[float])
In use:
In [6]: df = 15
In [7]: x = np.linspace(-60, 60, 9)
In [8]: x
Out[8]: array([-60., -45., -30., -15., 0., 15., 30., 45., 60.])
In [9]: logp = t_log_cdf(x, df)
In [10]: logp
Out[10]:
array([-4.34237610e+01, -3.91312749e+01, -3.31138857e+01, -2.30556527e+01,
-6.93147181e-01, -9.70637965e-11, -4.15740360e-15, -1.01275228e-17,
-1.38452315e-19])
In [11]: x2 = t_log_ppf(logp, df)
In [12]: x2
Out[12]:
array([-6.00000000e+01, -4.50000000e+01, -3.00000000e+01, -1.50000000e+01,
-6.74946635e-17, 1.50000000e+01, 3.00000000e+01, 4.50000000e+01,
6.00000000e+01])
(In case you are wondering: yes, the SciPy implementation should do this, or something like it.)
You can use the mpmath library and write your own Student’s t cumulative distribution function.
import mpmath

mpmath.mp.dps = 50  # decimal digits of precision

def student_cdf(x, nu=5):
    """
    x  - value in (-inf, inf)
    nu - degrees of freedom
    """
    res = 0.5 + x * mpmath.gamma((nu + 1) / 2) * mpmath.hyp2f1(1/2, (nu + 1) / 2, 3/2, -x**2 / nu) / (mpmath.sqrt(mpmath.pi * nu) * mpmath.gamma(nu / 2))
    return res

x = 80
df = 20
val = student_cdf(x, df)
val_log = mpmath.log(val)
print(val)      # 0.99999999999999999999999999240488812543236046339214
print(val_log)  # -7.5951118745676395366078569448673277378485550084107e-27
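If a plain Python float is needed in the end, a small convenience wrapper (my own addition, not part of the formula above) can take the logarithm while still at 50-digit precision and only then convert down:
def t_log_cdf_mp(x, nu=5):
    # Take the log at high precision first, then round to a regular float.
    return float(mpmath.log(student_cdf(x, nu)))

print(t_log_cdf_mp(80, 20))  # ~ -7.595e-27, matching val_log above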
This answer only addresses the log-CDF part of the question.
The problem with logcdf() in scipy.stats.t is that it does not bring its own implementation. If you look at the t distribution’s actual SciPy source code, you will not find a _logcdf() method there – unlike with many other distributions – so what happens is that _logcdf() is resolved on a base class (in this case, its grandparent class rv_generic¹), which provides a fall-back that calculates the (regular) CDF and takes the logarithm of the result (see the corresponding source code, lines 1037–1039) – which results in the numerical problems that you observed.
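To see the fall-back in action, you can compare the built-in logcdf() to taking log() of the CDF by hand; both should lose all precision at exactly the same point, since the former effectively computes the latter:
import numpy as np
import scipy.stats

t = np.linspace(0, 60, 5)
print(scipy.stats.t.logcdf(t, df=15))       # generic fall-back
print(np.log(scipy.stats.t.cdf(t, df=15)))  # same saturated values by hand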
As proposed in Warren Weckesser’s solution, we could resort to using the survival function for larger values. Alternatively, we can build our own log-CDF: following the definition of the CDF (as given e.g. in the Wikipedia article on Student’s t), we have for t > 0
F(t) = 1 - 0.5 · betainc(nu/2, 0.5, x(t)) with x(t) = nu / (t² + nu),
where F(t) is the CDF and betainc() is the regularized incomplete beta function.
To get the log-CDF, we can rewrite this as
log(F(t)) = log1p(-0.5 · betainc(nu/2, 0.5, nu / (t² + nu))).
While this may look like a small change, note that we can now make use of log1p() rather than log() on the right-hand side, which gains us a significant numerical benefit (compare e.g. this question or, if you are curious, compare the result to writing log(1 - 0.5…) in the code below instead).
For t ≤ 0, as mentioned in Warren Weckesser’s comment², we can use
log(F(t)) = log(0.5 · betainc(nu/2, 0.5, nu / (t² + nu)));
alternatively, we can stick with logcdf() for negative values, as numerical stability is not an issue in this case.
NumPy/SciPy solution
So, all in all, we have
import scipy.special
import scipy.stats
import numpy as np

t = np.linspace(0, 60, 5)
nu = 15

# Given log-CDF from SciPy
print(scipy.stats.t.logcdf(t, df=nu))
# >>> [-6.93147181e-01 -9.70638014e-11 -4.21884749e-15 0.00000000e+00 0.00000000e+00]

# Proposed solution
_logcdf_t_gt_0 = lambda t, nu: np.log1p(-.5 * scipy.special.betainc(nu / 2, .5, nu / (t ** 2 + nu)))
_logcdf_t_leq_0 = lambda t, nu: np.log(.5 * scipy.special.betainc(nu / 2, .5, nu / (t ** 2 + nu)))
logcdf = lambda t, nu: np.where(t > 0, _logcdf_t_gt_0(t, nu), _logcdf_t_leq_0(t, nu))
print(logcdf(t, nu))
# >>> [-6.93147181e-01 -9.70637965e-11 -4.15740360e-15 -1.01275228e-17 -1.38452315e-19]

# Result with R from the question
# -6.931472e-01 -9.706380e-11 -4.157404e-15 -1.012752e-17 -1.384523e-19

# Sanity-check values ≤ 0:
assert np.allclose(logcdf(-t, nu), scipy.stats.t.logcdf(-t, df=nu))
The solution now matches the R result in all available digits.
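As an aside, the same two branches can also be inverted to obtain the quantile function on log probabilities that the question asks for, since SciPy exposes the inverse of betainc() as scipy.special.betaincinv. The following is only a sketch of that inversion (my own extension of the above, not extensively tested):
def log_ppf(log_p, nu):
    log_p = np.asarray(log_p, dtype=float)
    # Invert log1p(-0.5 * betainc(...)) for p >= 0.5 (t >= 0):
    # betainc(nu/2, 0.5, y) = 2 * (1 - p) = -2 * expm1(log_p).
    # The clips only guard the branch that np.where discards.
    i_pos = scipy.special.betaincinv(nu / 2, .5, np.clip(-2 * np.expm1(log_p), 0., 1.))
    # Invert log(0.5 * betainc(...)) for p < 0.5 (t <= 0):
    # betainc(nu/2, 0.5, y) = 2 * p = 2 * exp(log_p).
    i_neg = scipy.special.betaincinv(nu / 2, .5, np.clip(2 * np.exp(log_p), 0., 1.))
    y = np.where(log_p >= np.log(.5), i_pos, i_neg)
    t_abs = np.sqrt(nu / y - nu)  # invert y = nu / (t ** 2 + nu)
    return np.where(log_p >= np.log(.5), t_abs, -t_abs)

# Round trip: should recover t = [0, 15, 30, 45, 60] from logcdf() above.
print(log_ppf(logcdf(t, nu), nu))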
Situation in PyTorch
Unfortunately, PyTorch currently does not provide betainc() (see the corresponding issue on GitHub), so a direct adaptation of the code above is not possible. Likewise, PyTorch does not seem to provide the survival functions of its distributions, so using the survival function instead won’t work, either.
That said, there are two pull requests to implement betainc() in PyTorch (an obsolete one and an open one), which you might want to have a look at if you are up to creating your own implementation (most of their code is written in C++ rather than Python though, as far as I can tell).
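In the meantime, if you only need the stable values (and gradients) inside a PyTorch graph, one possible stopgap – purely my own sketch, assuming the NumPy/SciPy logcdf() from above is in scope – is to bridge out to SciPy with a custom autograd function, using d/dt log F(t) = exp(logpdf(t) - logcdf(t)) for the backward pass:
import torch

class TLogCdf(torch.autograd.Function):
    """Stable t log-CDF evaluated via SciPy; differentiable w.r.t. x."""

    @staticmethod
    def forward(ctx, x, nu):
        out = logcdf(x.detach().cpu().numpy(), nu)  # the NumPy/SciPy version above
        ctx.save_for_backward(x)
        ctx.nu, ctx.out = nu, out
        return x.new_tensor(out)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx log F(x) = f(x) / F(x) = exp(logpdf(x) - logcdf(x));
        # the difference of logs stays well-behaved far out in the tails.
        grad = np.exp(scipy.stats.t.logpdf(x.detach().cpu().numpy(), df=ctx.nu) - ctx.out)
        return grad_output * x.new_tensor(grad), None

x_torch = torch.tensor([30., 60.], requires_grad=True)
TLogCdf.apply(x_torch, 15).sum().backward()  # gradients flow despite the SciPy round trip
This obviously keeps the computation on the CPU and outside of TorchScript, so it is a workaround rather than a real port.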
¹) The t distribution’s class t_gen inherits from rv_continuous, which inherits from rv_generic.
²) It is also (somewhat less obviously) mentioned in the Wikipedia article as: “Other values would be obtained by symmetry.”