- I have a data frame of 2942 rows and 52 columns. One of the columns is the target variable and the rest are features. I'm trying to calculate Pearson's correlation between each feature column and the target variable column (one feature at a time).
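Roughly, the per-feature loop I have in mind looks like this (just a sketch; 'yy' stands in for the actual target column name):
from scipy.stats import pearsonr
target = df['yy']
for col in df.columns.drop('yy'):  # every feature column except the target
    r, p = pearsonr(df[col], target)
    print(col, r, p)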
- For some feature columns, there are NaN values. I ran the following code to impute them:
#imputation of the nan values using SimpleImputer
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imp.fit(df[['xx']])  # fit the imputer on the feature column
imputed = imputer.transform(df[['xx']])  # impute; the result is a NumPy array
df_pd = pd.DataFrame(imputed, columns=['xx'])  # convert the NumPy array back to a pandas DataFrame
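(As far as I know, imp.fit_transform(df[['xx']]) would combine the fit and transform steps into one call; I kept them separate above.)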
- I ran the code below to check that the NaN values have been removed. (output was 0)
df_pd['xx'].isna().sum()
- Converted the 'xx' column of df_pd to a list and added it as a column to the original df
list_ = df_pd['xx'].tolist()
df['xx'] = list_  # replace the original feature column with the imputed values
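(I believe df['xx'] = df_pd['xx'].values would do the same thing without the intermediate list, but the list version is what I actually ran.)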
- Also checked for infs
c = np.isinf(df['xx']).values.sum()
print(c) #output was 0 indicating no infs
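(I think np.isfinite(df['xx']).values.all() would check for both NaNs and infs in one go.)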
- Then I ran the code below to check the correlation between the df['xx'] column and the target variable column (df['yy']):
from scipy.stats import pearsonr
full_correlation = pearsonr(df['xx'],df['yy'])
print('Correlation coefficient:',full_correlation[0])
print('P-value:',full_correlation[1])
However, I’m getting the following error:
ValueError Traceback (most recent call last)
Cell In[343], line 2
1 from scipy.stats import pearsonr
----> 2 full_correlation = pearsonr(df_with_features_unwanted_cols_removed['time_diff_between_req_start_ans_start_no_nan'],df_with_features_unwanted_cols_removed['claude_rating'])
3 print('Correlation coefficient:',full_correlation[0])
4 print('P-value:',full_correlation[1])
File /opt/conda/envs/ml_v2/lib/python3.10/site-packages/scipy/stats/_stats_py.py:4797, in pearsonr(x, y, alternative, method)
4793 # Unlike np.linalg.norm or the expression sqrt((xm*xm).sum()),
4794 # scipy.linalg.norm(xm) does not overflow if xm is, for example,
4795 # [-5e210, 5e210, 3e200, -3e200]
4796 normxm = linalg.norm(xm)
-> 4797 normym = linalg.norm(ym)
4799 threshold = 1e-13
4800 if normxm < threshold*abs(xmean) or normym < threshold*abs(ymean):
4801 # If all the values in x (likewise y) are very close to the mean,
4802 # the loss of precision that occurs in the subtraction xm = x - xmean
4803 # might result in large errors in r.
File /opt/conda/envs/ml_v2/lib/python3.10/site-packages/scipy/linalg/_misc.py:146, in norm(a, ord, axis, keepdims, check_finite)
144 # Differs from numpy only in non-finite handling and the use of blas.
145 if check_finite:
--> 146 a = np.asarray_chkfinite(a)
147 else:
148 a = np.asarray(a)
File /opt/conda/envs/ml_v2/lib/python3.10/site-packages/numpy/lib/function_base.py:630, in asarray_chkfinite(a, dtype, order)
628 a = asarray(a, dtype=dtype, order=order)
629 if a.dtype.char in typecodes['AllFloat'] and not np.isfinite(a).all():
--> 630 raise ValueError(
631 "array must not contain infs or NaNs")
632 return a
ValueError: array must not contain infs or NaNs
Can anyone please help me understand why I’m getting this error? It’d be very helpful. Thanks
I ran the code below to check that there are no NaNs or infs in the data:
df_pd['xx'].isna().sum()
c = np.isinf(df['xx']).values.sum()
print(c) #output was 0 indicating no infs
I expected the correlation computation to run successfully.
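For reference, here is a combined check I could run over both columns (a sketch; 'xx' and 'yy' stand in for the actual column names, and it assumes both columns are numeric):
import numpy as np
for col in ['xx', 'yy']:
    vals = df[col].to_numpy(dtype=float)
    print(col, 'NaNs:', np.isnan(vals).sum(), 'infs:', np.isinf(vals).sum())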