I am trying to implement the Karhunen-Loève expansion (KLE) of a 1D Gaussian random field from scratch, using numpy, scipy, and matplotlib.pyplot:
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
The domain and discretization grid are set like this:
D_a = 0
D_b = 1
nx = 10
x = np.linspace(D_a, D_b, nx).reshape(-1,1)
For this purpose I have written several functions. The covariance function (an exponential kernel) is defined as follows:
from scipy.spatial.distance import pdist, squareform

def covariance_function(data, r, s):
    """
    data - locations x_i, i = 1...N
    r - range (correlation length)
    s - covariance factor (marginal standard deviation)
    """
    # pairwise Euclidean distance matrix between the locations
    _dist = squareform(pdist(data, 'euclidean'))
    covariance_matrix = s**2 * np.exp(-_dist / r)
    return covariance_matrix
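For example, on the grid above (the values r=0.5 and s=1.0 are arbitrary choices for illustration), the result should be a symmetric positive definite matrix:

C = covariance_function(x, r=0.5, s=1.0)
print(C.shape)                              # (10, 10)
print(np.allclose(C, C.T))                  # True: symmetric
print(np.all(np.linalg.eigvalsh(C) > 0))    # True: positive definite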
The function for the (linear) mean value vector:

def expectation_function(data, slope, intercept):
    # linear trend: m(x) = slope * x + intercept
    m = data @ slope + intercept
    return m
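A note on shapes: since x has shape (10, 1), I pass slope as a length-1 array so that the mean comes out as a 1-D vector (a plain scalar would not work with @):

slope = np.array([1.0])   # example value; shape (1,), so x @ slope has shape (10,)
intercept = 0.0
m = expectation_function(x, slope, intercept)
print(m.shape)            # (10,)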
The function that samples the 1D GRF directly, via a Cholesky factor of the covariance matrix:
def GaussRF(data, r, s, slope, intercept, seed):
    covariance_matrix = covariance_function(data, r, s)
    mean_value_vec = expectation_function(data, slope, intercept)
    L = sp.linalg.cholesky(covariance_matrix, lower=True)
    # m + L @ Z ~ N(m, C) for Z ~ N(0, I),
    # where L is N x N and L @ L.T equals covariance_matrix
    m = mean_value_vec + L @ sp.stats.norm(0, 1).rvs(size=len(mean_value_vec), random_state=seed)
    return m
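One realization can be drawn and plotted like this (parameter values are again arbitrary examples):

g = GaussRF(x, r=0.5, s=1.0, slope=np.array([1.0]), intercept=0.0, seed=42)
plt.plot(x.ravel(), g)
plt.xlabel('x')
plt.ylabel('g(x)')
plt.show()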
Finally, the KLE function, which makes use of the functions above:
def Karhunen_Loeve_Expansion(data, k, r, s, slope, intercept, seed):
    G = covariance_function(data, r, s)
    mean_value_vec = expectation_function(data, slope, intercept)
    # eigendecomposition of the covariance matrix
    lambdas, v = np.linalg.eigh(G)
    # alternative: SVD
    # u, lambdas, v = np.linalg.svd(G, full_matrices=True)
    # v = v.T
    # truncated sum of the expansion terms
    k_h_exp = v[:, :k] @ np.diag(np.sqrt(lambdas[:k])) @ sp.stats.norm(0, 1).rvs(size=k, random_state=seed)
    g = k_h_exp + mean_value_vec
    return g, v, lambdas
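In formula terms, this is meant to implement the standard truncated KLE, with the discrete eigenpairs of G standing in for the continuous ones:

$$ g(x_j) \approx m(x_j) + \sum_{i=1}^{k} \sqrt{\lambda_i}\,\phi_i(x_j)\,\xi_i, \qquad \xi_i \sim \mathcal{N}(0,1) \text{ i.i.d.}, $$

where $(\lambda_i, \phi_i)$ are the eigenvalue/eigenvector pairs of the covariance matrix $G$.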
The sum of a sufficient number of products of eigenvalues, eigenvectors, and the same random realizations of the standard normal distribution should lead to a good approximation. However, it does not converge in my implementation, and I wonder if anybody could explain to me what I am doing wrong.
Initially I thought the results did not converge because of wrong indexing of the eigenvectors, i.e. I tried taking k column vectors and then k row vectors, but that did not help.
Then I tried different decomposition methods. However, because the covariance matrix is symmetric positive definite, the SVD and the eigendecomposition should give the same spectrum, as the quick check below confirms.
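(Note that np.linalg.eigh returns eigenvalues in ascending order, while np.linalg.svd returns singular values in descending order; the same example parameters as above are used here.)

G = covariance_function(x, r=0.5, s=1.0)
w = np.linalg.eigvalsh(G)                   # ascending
sv = np.linalg.svd(G, compute_uv=False)     # descending
print(np.allclose(w[::-1], sv))             # True for a symmetric PSD matrix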
The expected result is that the approximated version of the GRF converges fully to the original one when all eigenvalues are used.
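For reference, a minimal driver to compare the truncated expansion against the direct Cholesky sample (all parameter values are arbitrary examples):

r, s = 0.5, 1.0
slope, intercept = np.array([1.0]), 0.0
seed = 42

g_full = GaussRF(x, r, s, slope, intercept, seed)
plt.plot(x.ravel(), g_full, 'k--', label='direct (Cholesky)')
for k in (2, 5, nx):
    g_k, v, lambdas = Karhunen_Loeve_Expansion(x, k, r, s, slope, intercept, seed)
    plt.plot(x.ravel(), g_k, label=f'KLE, k={k}')
plt.legend()
plt.show()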