I am using fmin_slsqp to find the weights that minimize mean squared error. The weights need to be positive. For each pair of X and y, it takes ~10 seconds. (Each X is (10, 1000) and y is (10,)). I have 8000 pairs that need to be calculated :(
Is there any error in the code, or is it just that my data takes too long to converge? Is there any way to make this process more efficient, e.g. a way to calculate all 8000 pairs together?
import numpy as np
from functools import partial
from scipy.optimize import fmin_slsqp

def loss(W, X, y):
    return np.mean((y - X.dot(W))**2)

def get_result(X, y):
    w_start = [1 / X.shape[1]] * X.shape[1]
    weights = fmin_slsqp(partial(loss, X=X, y=y),
                         np.array(w_start),
                         bounds=[(0.0, np.inf)] * X.shape[1],
                         disp=False)
    return weights
This can be solved in one shot as a sparse block-diagonal problem, which takes a couple of seconds to solve. It might be sped up if you can offer some reasonable estimate of W to the x0 parameter of lsqr.
import numpy as np
import scipy.sparse
import scipy.sparse.linalg


def solve(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Stack the m independent least-squares problems into one big
    # block-diagonal system and solve it with a sparse solver.
    m, n, p = X.shape
    indptr = np.arange(m + 1, dtype=np.int32)
    indices = indptr[:-1]
    block_diag = scipy.sparse.bsr_array((X, indices, indptr))
    W = scipy.sparse.linalg.lsqr(A=block_diag, b=y.ravel())[0]
    return W.reshape(m, p)


def demo() -> None:
    m = 8000
    n = 10
    p = 1000
    rand = np.random.default_rng(seed=0)
    X = rand.uniform(size=(m, n, p), low=-1, high=1)
    hidden_W = rand.uniform(size=(m, p), low=-1, high=1)
    y = X @ hidden_W[..., np.newaxis]

    W = solve(X, y)
    error = y - X @ W[..., np.newaxis]
    print(f'{error.min()} <= error <= {error.max()}')


if __name__ == '__main__':
    demo()
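Regarding the x0 warm start mentioned above: if a previous estimate of W happens to be available (an assumption about your setup, not something stated in the question), a minimal sketch of passing it to lsqr could look like this. solve_warm and W_prev are hypothetical names introduced here for illustration.

def solve_warm(X: np.ndarray, y: np.ndarray, W_prev: np.ndarray) -> np.ndarray:
    # Same block-diagonal setup as solve(), but seeds lsqr with an initial
    # guess via its x0 parameter. W_prev is a hypothetical (m, p) array from
    # an earlier fit; a guess close to the solution can reduce iterations.
    m, n, p = X.shape
    indptr = np.arange(m + 1, dtype=np.int32)
    block_diag = scipy.sparse.bsr_array((X, indptr[:-1], indptr))
    W = scipy.sparse.linalg.lsqr(A=block_diag, b=y.ravel(),
                                 x0=W_prev.ravel())[0]
    return W.reshape(m, p)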
Note that this doesn’t enforce non-negativity for the values of W. If that’s crucial, then more time will need to be spent, either running a non-vectorised outer loop over something like scipy.optimize.nnls, or taking the solution from above and performing a conditional polishing step for the values that are out of bounds.
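As a rough illustration of that polishing idea (a sketch only, not a true NNLS): zero out the negative entries of each offending row and re-fit the remaining coefficients with a small dense least-squares solve. polish_nonnegative is a hypothetical helper introduced here; only rows that actually violate the bound need the extra work.

def polish_nonnegative(X: np.ndarray, y: np.ndarray, W: np.ndarray) -> np.ndarray:
    # Hedged sketch of a polishing pass: for each row of W with negative
    # entries, clamp them to zero and re-fit the remaining (positive) columns
    # with np.linalg.lstsq. This approximates, but does not guarantee, the
    # constrained optimum.
    W = W.copy()
    y2 = y.reshape(y.shape[0], -1)                  # accept y of shape (m, n) or (m, n, 1)
    for i in np.flatnonzero((W < 0).any(axis=1)):   # only rows that violate the bound
        free = W[i] > 0                             # keep the strictly positive columns
        W[i] = 0.0
        if free.any():
            coef, *_ = np.linalg.lstsq(X[i][:, free], y2[i], rcond=None)
            W[i, free] = np.clip(coef, 0.0, None)   # clamp any new negatives
    return W

In the demo above this could be called as W = polish_nonnegative(X, y, solve(X, y)); whether a single pass is good enough depends on how far the unconstrained solution strays below zero.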
As stated in the comments, you can use scipy.optimize.nnls to solve this problem; however, it is not vectorized, so you have to loop over the first array dimension.
If you’re willing to use other packages, you can use numba and nnls_numba (disclaimer: I’m the author) to compile these loops and run them in parallel to solve the problem faster.
In SciPy versions 0.7–1.11, nnls is a wrapper around a Fortran subroutine; in more recent versions of SciPy it uses a different algorithm implemented in Python. In nnls_numba you have access to numba-compatible implementations of both of these functions: nnls_old and nnls_new respectively. Here’s how to use these functions to solve your problem:
import numba as nb
import numpy as np
import nnls_numba
from scipy import optimize

rng = np.random.default_rng(69)
K, M, N = 8000, 10, 1000
A = rng.random((K, M, N))
x = rng.random((K, N))
x[:, ::3] *= -1
b = np.einsum('ijk,ik->ij', A, x) + rng.standard_normal((K, M))


def many_nnls_scipy(A, b):
    # Plain Python loop over the K problems using scipy.optimize.nnls.
    output = np.empty((A.shape[0], A.shape[2]))
    for i in range(A.shape[0]):
        output[i] = optimize.nnls(A[i], b[i])[0]
    return output


@nb.njit
def many_nnls_new(A, b, maxiter=-1):
    assert A.shape[:-1] == b.shape
    assert A.ndim == 3
    output = np.empty((A.shape[0], A.shape[2]))
    for i in range(A.shape[0]):
        output[i] = nnls_numba.nnls_new(A[i], b[i], None, 1e-9)[0]
    return output


def many_nnls_old(A, b, maxiter=-1):
    assert A.shape[:-1] == b.shape
    assert A.ndim == 3
    output = np.empty((A.shape[0], A.shape[2]))
    for i in nb.prange(A.shape[0]):
        output[i] = nnls_numba.nnls_old(A[i], b[i], maxiter)[0]
    return output


many_nnls_old_serial = nb.njit(many_nnls_old, parallel=False)
many_nnls_old_parallel = nb.njit(many_nnls_old, parallel=True)
Note, you can just use @njit on many_nnls_scipy (no import required) and it will use either nnls_new or nnls_old behind the scenes, depending on which SciPy version you have installed.
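As a minimal sketch of that note (an assumption to verify locally: it relies on nnls_numba making the scipy.optimize.nnls call compilable inside @njit, as the note describes):

# Compile the SciPy-based loop directly with numba; nnls_new or nnls_old is
# used behind the scenes per the note above.
many_nnls_scipy_jit = nb.njit(many_nnls_scipy)
result_jit = many_nnls_scipy_jit(A, b)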
Test and time:
result_scipy = many_nnls_scipy(A, b)
result_new = many_nnls_new(A, b)
result_old_serial = many_nnls_old_serial(A, b)
result_old_parallel = many_nnls_old_parallel(A, b)
assert np.allclose(result_new, result_scipy)
assert np.allclose(result_new, result_old_serial)
assert np.allclose(result_new, result_old_parallel)
%timeit -n 1 -r 1 many_nnls_scipy(A, b)
%timeit -n 1 -r 1 many_nnls_new(A, b)
%timeit -n 1 -r 1 many_nnls_old_serial(A, b)
%timeit -n 1 -r 1 many_nnls_old_parallel(A, b)
Results:
16.9 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
13.8 s ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
960 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
138 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)
There’s also the nnls_old_ function, with which you can allocate the work arrays needed for the Fortran subroutine yourself, but that is giving an error when I try it with your array dimensions.
Note, I’m not sure that nnls_new will work on Windows.