I’m running into problems while simulating an algorithm. I’m trying to minimize a function that should in theory be convex (it comes from -2·log p(X; Y), i.e. a Maximum Likelihood Estimation problem), where the variable Y is an m×m symmetric positive definite matrix.
Using scipy.optimize, I can successfully optimize the function in the one- and two-dimensional cases. In three dimensions, however, the optimization only succeeds on small data sets; larger sets fail. I then switched to cvxpy, but even for two-dimensional systems it errors out with “You are trying to minimize a function that is concave.”
Could anyone suggest alternative methods or approaches for this simulation?
This is the general structure of the objective function (I have omitted some details of the calculations):

```python
def compute_Y(Y, ...):
    term1 = T * (np.log(np.linalg.det(Y)) + 1e-8)
    term = np.zeros((m, m))
    for t in range(T):
        term += term_data[t]  # theoretically, 'term' is a symmetric positive definite matrix
    term2 = np.trace(np.linalg.inv(Y) @ term)
    return term1 + term2
```
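For context, here is a minimal, self-contained sketch of how I call the scipy version. The toy `term_data` and the Cholesky parametrization (optimizing over a lower-triangular factor L with Y = L·Lᵀ so the iterate stays SPD) are just illustrative assumptions, not my real data:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data standing in for the real term_data: each slice is x_t x_t^T,
# so their sum is symmetric positive definite (almost surely, for T >= m).
rng = np.random.default_rng(0)
m, T = 3, 50
samples = rng.standard_normal((T, m))
term_data = np.array([np.outer(x, x) for x in samples])

def objective(l_flat):
    # Rebuild Y = L L^T from the packed lower-triangular entries, which
    # keeps every iterate symmetric positive (semi)definite.
    L = np.zeros((m, m))
    L[np.tril_indices(m)] = l_flat
    Y = L @ L.T + 1e-8 * np.eye(m)       # small jitter for numerical safety
    _, logdet = np.linalg.slogdet(Y)     # safer than log(det(Y)) for larger m
    term = term_data.sum(axis=0)
    return T * logdet + np.trace(np.linalg.solve(Y, term))

x0 = np.eye(m)[np.tril_indices(m)]       # start from Y = I
res = minimize(objective, x0, method="L-BFGS-B")

L = np.zeros((m, m))
L[np.tril_indices(m)] = res.x
Y_hat = L @ L.T                          # recovered SPD estimate
```

For this objective the analytic minimizer is the scaled scatter matrix (sum of the `term_data` slices divided by T), which is a useful sanity check on the numerical result.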
Or, in cvxpy:

```python
def compute_Y(Y, ...):
    term1 = T * cp.log_det(Y)
    term = np.zeros((m, m))
    for t in range(T):
        term += term_data[t]  # assuming term_data is structured appropriately
    M = cp.Variable((m, m))
    constraints = [Y @ M == term]  # note: Y and M are both variables here, so this product is not affine
    term2 = cp.trace(M)
    return term1 + term2, constraints
    # or:
    # term_L = cholesky(term, lower=True)
    # term2 = cp.matrix_frac(term_L, Y)
    # return term1 + term2
```