I’m working on a machine learning project to predict when to activate suppliers in an industrial purchasing process. I have two main datasets:
- `doc3`: Contains daily information about purchase orders.
- `expedite2`: Contains records of when suppliers were activated.
My goal is to create a model that can predict, on any given day, whether a supplier needs to be activated based on historical patterns.
Here’s a simplified version of my current data preparation function:
```python
import pandas as pd

def optimized_join_and_prepare_data(doc3, expedite2):
    # Parse the date columns
    doc3['PROCESS_DATE'] = pd.to_datetime(doc3['PROCESS_DATE'])
    expedite2['LOG_DATE'] = pd.to_datetime(expedite2['LOG_DATE'])

    # Composite key identifying each purchase-order item in both tables
    doc3['item_key'] = doc3['DOC_NUM'].astype(str) + '_' + doc3['DOC_ITEM'].astype(str) + '_' + doc3['SAP_CODE']
    expedite2['item_key'] = expedite2['DOC_NUM'].astype(str) + '_' + expedite2['DOC_ITEM'].astype(str) + '_' + expedite2['SAP_WEB_CODE']

    # merge_asof requires both frames to be sorted on the time keys
    doc3 = doc3.sort_values('PROCESS_DATE')
    expedite2 = expedite2.sort_values('LOG_DATE')

    # Attach the activation record whose LOG_DATE is nearest to each PROCESS_DATE,
    # matching on item_key and allowing a gap of up to 2 days in either direction
    merged = pd.merge_asof(doc3, expedite2[['item_key', 'LOG_DATE', 'DATA01']],
                           left_on='PROCESS_DATE',
                           right_on='LOG_DATE',
                           by='item_key',
                           direction='nearest',
                           tolerance=pd.Timedelta('2d'))

    # Unmatched rows keep NaT in LOG_DATE and are labelled as no activation
    merged['LOG_DATE'] = merged['LOG_DATE'].fillna(pd.NaT)
    merged['DATA01'] = merged['DATA01'].fillna('NO_ACTIVATION')
    merged = merged.dropna(subset=['PROCESS_DATE', 'item_key'])

    return merged

final_data = optimized_join_and_prepare_data(doc3, expedite2)
```
My main concerns and questions are:
- Is there a risk of data leakage in this approach, especially with the use of `merge_asof` and the 2-day tolerance?
- If data leakage is present, how can I modify this function to ensure I'm only using past information for each prediction day while maintaining computational efficiency? (The backward-only variant I'm considering is sketched just below.)
- Are there any other time series-specific considerations I should keep in mind when preparing this data for a predictive model?
I’m dealing with a large dataset, so efficiency is a concern, but I want to ensure the integrity of my time series analysis. Any insights or suggestions for improvement would be greatly appreciated!
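For context, here is the backward-only variant I've been considering as a leakage-safe alternative. It's only a sketch: it assumes `doc3` and `expedite2` have already been keyed and sorted exactly as in the function above, and it simply switches `direction` to `'backward'` so a row can only pick up activations logged on or before its `PROCESS_DATE`.

```python
import pandas as pd

# Sketch of a leakage-safe merge (assumes doc3/expedite2 already have item_key
# and are sorted as in optimized_join_and_prepare_data above).
merged = pd.merge_asof(doc3, expedite2[['item_key', 'LOG_DATE', 'DATA01']],
                       left_on='PROCESS_DATE',
                       right_on='LOG_DATE',
                       by='item_key',
                       direction='backward',          # only match LOG_DATE <= PROCESS_DATE
                       tolerance=pd.Timedelta('2d'))  # still limited to the last 2 days
merged['DATA01'] = merged['DATA01'].fillna('NO_ACTIVATION')
```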
I tried several approaches to prepare my time series data:
- Initially, I used a simple merge on the item key and date, but this resulted in many missing values and didn't capture the temporal nature of the data.
- Then, I implemented the `merge_asof` function as shown in my code. I expected this to align the activation data (from `expedite2`) with the nearest processing date (from `doc3`), within a 2-day window. This worked well in terms of efficiency and seemed to capture recent activations.
- I also added historical features like previous activations and days since last activation using the `shift` function within each item group (roughly what I did is sketched just after this list).
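Roughly, the historical features look like this. It's a simplified sketch: `was_activated` is a hypothetical 0/1 flag I derive from `DATA01`, and it assumes `merged` is the output of the function above.

```python
# Simplified sketch of my shift-based historical features.
# Assumes `merged` comes from optimized_join_and_prepare_data above;
# `was_activated` is a hypothetical flag derived from DATA01.
merged = merged.sort_values(['item_key', 'PROCESS_DATE'])
merged['was_activated'] = (merged['DATA01'] != 'NO_ACTIVATION').astype(int)

grp = merged.groupby('item_key')

# Whether the item was activated on the previous processing day
merged['prev_activation'] = grp['was_activated'].shift(1).fillna(0)

# Days since the most recent activation strictly before the current row
act_date = merged['PROCESS_DATE'].where(merged['was_activated'] == 1)
last_act = act_date.groupby(merged['item_key']).ffill()    # last activation on or before each row
prev_act = last_act.groupby(merged['item_key']).shift(1)   # drop the current row's own activation
merged['days_since_last_activation'] = (merged['PROCESS_DATE'] - prev_act).dt.days
```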
What I expected:
- A dataset where each row in `doc3` is enriched with the most recent activation data.
- Historical features that capture past activation patterns for each item.
- An efficient process that can handle large datasets.
What actually happened:
- The function runs efficiently on my large dataset.
- I get a merged dataset with activation information and some historical features.
- However, I'm unsure if I've completely eliminated data leakage, especially with the 2-day tolerance in `merge_asof`.
- I'm concerned that the `shift` function for creating historical features might not be capturing long-term patterns effectively (a rolling-window variant I've been toying with is sketched just after this list).
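To show what I mean by longer-term patterns, here is the rolling-window variant I've been toying with. It's only a sketch, building on the hypothetical `was_activated` flag from the earlier snippet: it counts activations over the previous 30 observations per item (roughly 30 days, since `doc3` is daily), shifted by one row so the current day never feeds its own feature.

```python
# Sketch of a longer-horizon feature: activations over the previous 30
# observations per item, excluding the current row. Assumes `merged` is sorted
# by ['item_key', 'PROCESS_DATE'] and has the hypothetical `was_activated` flag.
merged['activations_last_30'] = (
    merged.groupby('item_key')['was_activated']
          .transform(lambda s: s.shift(1).rolling(window=30, min_periods=1).sum())
          .fillna(0)
)
```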
I’m looking for validation of my approach or suggestions for improvement, especially regarding data leakage prevention and more sophisticated time series feature engineering.