Suppose we have a dataframe with several columns, where missing values (NaN) are encoded as empty strings. Given a set of columns, I want to create an ID number such that:
- Rule 1: all rows share the same ID if every column in the set holds at most one distinct non-NaN value across those rows
- Rule 2: all rows get different IDs if any column in the set holds >= 2 distinct non-NaN values across those rows
Expected output (for the example dataframes below; ID based on col1–col4):
   col0   col1  col2  col3  col4  ID
0  hello  1     NaN   NaN   1     0
1  tree   2     test  NaN   2     1
2  world  2     NaN   3     2     1   # same ID as row 1: col1 and col4 are identical, col2 and col3 each contain at most one non-NaN value
3  apple  3     NaN   NaN   2     2
   col0   col1  col2  col3  col4  ID
0  hello  1     NaN   NaN   1     0
1  tree   2     test  NaN   2     1   # rows 1, 2 and 4 all get different IDs because col3 contains two different non-NaN values (3 and 4)
2  world  2     NaN   3     2     2
3  apple  3     NaN   NaN   2     3
4  dog    2     test  4     2     4
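In other words, a candidate group of rows may share an ID only if the following predicate holds. A minimal sketch (the helper name `can_share_id` is invented here for illustration):

```python
import pandas as pd
import numpy as np

def can_share_id(rows, cols):
    # Rule 1 allows one common ID only when every column of the set holds
    # at most one distinct non-NaN value across the candidate rows;
    # otherwise rule 2 forces a distinct ID per row.
    return all(rows[c].nunique(dropna=True) <= 1 for c in cols)

# rows 1 and 2 of the first example: col2 and col3 each hold one non-NaN value
pair = pd.DataFrame({'col1': [2, 2], 'col2': ['test', np.nan],
                     'col3': [np.nan, '3'], 'col4': [2, 2]})
print(can_share_id(pair, ['col1', 'col2', 'col3', 'col4']))  # True

# rows 1, 2 and 4 of the second example: col3 holds two non-NaN values
trio = pd.DataFrame({'col1': [2, 2, 2], 'col2': ['test', np.nan, 'test'],
                     'col3': [np.nan, '3', '4'], 'col4': [2, 2, 2]})
print(can_share_id(trio, ['col1', 'col2', 'col3', 'col4']))  # False
```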
How can an ID as shown in the expected output be assigned to the rows? Can this be achieved via some df.groupby()
operation, or is it necessary to iteratively check each row's column values against previously stored value tuples?
Below is an attempt to create the ID column via df.groupby().ngroup()
. However, grouping by only the NaN-free columns works only for the first dataframe (it never applies rule 2), while grouping by all columns works only for the second dataframe (it never applies rule 1). The ID numbers are not sorted, but that is not a big issue here.
import pandas as pd
import numpy as np

# example dataframes with empty strings encoding missing values
df_test1 = pd.DataFrame({'col0': ['hello', 'tree', 'world', 'apple'],
                         'col1': [1, 2, 2, 3],
                         'col2': ['', 'test', '', ''],
                         'col3': ['', '', '3', ''],
                         'col4': [1, 2, 2, 2]})
df_test2 = pd.DataFrame({'col0': ['hello', 'tree', 'world', 'apple', 'dog'],
                         'col1': [1, 2, 2, 3, 2],
                         'col2': ['', 'test', '', '', 'test'],
                         'col3': ['', '', '3', '', '4'],
                         'col4': [1, 2, 2, 2, 2]})
df_test1 = df_test1.replace('', np.nan)
df_test2 = df_test2.replace('', np.nan)

def id_ngroup(df_in, groupby_cols):
    # create an ID by grouping on the given columns (NaNs form their own groups)
    df = df_in.copy()
    df['ID'] = df.groupby(groupby_cols, dropna=False).ngroup()
    return df

df_test1a = id_ngroup(df_test1, groupby_cols=['col1', 'col2', 'col3', 'col4'])
df_test2a = id_ngroup(df_test2, groupby_cols=['col1', 'col2', 'col3', 'col4'])
print("IDs grouping by all columns:")
print("Test1a:\n", df_test1a)
print("Test2a:\n", df_test2a)

df_test1b = id_ngroup(df_test1, groupby_cols=['col1', 'col4'])
df_test2b = id_ngroup(df_test2, groupby_cols=['col1', 'col4'])
print("IDs grouping by the NaN-free columns only:")
print("Test1b:\n", df_test1b)
print("Test2b:\n", df_test2b)
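For reference, one approach that reproduces the expected grouping on both examples is to group by the NaN-free columns of the set and then apply the two rules per group. A sketch, not a definitive implementation: the name `assign_ids` is invented, and it assumes at least one column of the set contains no NaN at all.

```python
import numpy as np
import pandas as pd

def assign_ids(df, cols):
    # Columns of the set without any NaN act as the primary grouping key;
    # this sketch assumes at least one such column exists.
    full = [c for c in cols if df[c].notna().all()]
    sparse = [c for c in cols if c not in full]
    ids = pd.Series(0, index=df.index, dtype=int)
    next_id = 0
    for _, grp in df.groupby(full, dropna=False, sort=False):
        if all(grp[c].nunique(dropna=True) <= 1 for c in sparse):
            # rule 1: at most one non-NaN value per column -> shared ID
            ids[grp.index] = next_id
            next_id += 1
        else:
            # rule 2: some column has >= 2 non-NaN values -> one ID per row
            ids[grp.index] = np.arange(next_id, next_id + len(grp))
            next_id += len(grp)
    return ids

# demo on a tiny hypothetical frame: 'k' is NaN-free, 'v' is sparse
demo = pd.DataFrame({'k': [1, 2, 2], 'v': [np.nan, 'x', np.nan]})
print(assign_ids(demo, ['k', 'v']).tolist())  # [0, 1, 1]
```

On df_test1 this yields IDs [0, 1, 1, 2]; on df_test2 all five rows receive distinct IDs, matching the grouping pattern of the expected output (the numbering itself may differ, which the question accepts).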