I’m trying to optimize a function based on discrete data sets I got in the lab.
`Spectrum_1` and `Spectrum_2` are experimental data sets of length `N`. The two arrays contain one value for each wavelength in the `lambdas` array, so:

```
len(Spectrum_1) == len(Spectrum_2) == len(lambdas)
```
Each of `Spectrum_1` and `Spectrum_2` has the symbolic form

```
Spectrum = G(T, lambda) * E(lambda)
```

`E(lambda)` is unknown, but it is known not to vary with `T`. `G(T, lambda)` is a known function of `T` and `lambda` (continuous and well defined); to be more specific, it is the Planck blackbody radiation equation as a function of wavelength.
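For reference, this is roughly what I mean by `G` (a minimal sketch of Planck’s law for spectral radiance; I’m assuming SI units, with `lam` in meters and `T` in kelvin):

```python
import numpy as np

def G(T, lam):
    """Planck spectral radiance B(lam, T); SI units assumed throughout."""
    h = 6.62607015e-34  # Planck constant, J*s
    c = 2.99792458e8    # speed of light, m/s
    k = 1.380649e-23    # Boltzmann constant, J/K
    return (2.0 * h * c**2) / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1.0))
```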
`E_1` and `E_2` should therefore be equal:

```
E_1 = Spectrum_1 / np.array([G(T_1, lam) for lam in lambdas])
E_2 = Spectrum_2 / np.array([G(T_2, lam) for lam in lambdas])
```
`T_1` and `T_2` are two unknown scalars, but I know they both lie in the 400 to 1000 range.
Knowing that, I need to minimize:

```
np.sum((E_1 - E_2)**2)
```
Or at least that’s what I think I should minimize. Ideally `E_1[i] - E_2[i] == 0` for every `i`, but that won’t be the case, given that the experimental data in `Spectrum_1` and `Spectrum_2` contain noise and distortions due to atmospheric transmission.
I’m not very familiar with optimizing over multiple unknown variables (`T_1` and `T_2`) in Python. I suppose I could brute-force millions of combinations of `T_1` and `T_2`, but I’d rather do it properly. I’d appreciate any help.
I hear `scipy.optimize` could do it for me, but many of its methods ask for a Jacobian and a Hessian, and I’m unsure how to proceed given that I have experimental data (`Spectrum_1` and `Spectrum_2`) and am not dealing with continuous/smooth functions.
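This is roughly what I have in mind so far (a minimal sketch, assuming `G`, `Spectrum_1`, `Spectrum_2`, and `lambdas` are defined as above and that `G` accepts an array of wavelengths; the starting guess of 700 is arbitrary, just a point inside the bounds):

```python
import numpy as np
from scipy.optimize import minimize

def objective(params):
    """Sum of squared differences between the two recovered E(lambda) curves."""
    T_1, T_2 = params
    E_1 = Spectrum_1 / G(T_1, lambdas)  # elementwise; use a comprehension if G is scalar-only
    E_2 = Spectrum_2 / G(T_2, lambdas)
    return np.sum((E_1 - E_2)**2)

# L-BFGS-B handles box bounds and, when no Jacobian is supplied,
# approximates the gradient by finite differences, so no analytic
# Jacobian or Hessian is needed.
result = minimize(
    objective,
    x0=[700.0, 700.0],
    bounds=[(400.0, 1000.0), (400.0, 1000.0)],
    method="L-BFGS-B",
)
T_1_fit, T_2_fit = result.x
```

Would something like this be the right approach? I understand that if the noise makes finite-difference gradients unreliable, `method="Nelder-Mead"` is a derivative-free alternative that also accepts bounds in recent SciPy versions.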