I’m working with a dataset where all values lie between 0 and 0.1, and the smallest non-zero value is
3.19526695102823×10^−8. I plan to use machine learning models to predict these values.
I’m considering scaling the data with min-max scaling or a log transformation to improve model performance. However, I’m concerned about precision: floating-point representation is limited, especially at float32 precision, and I don’t know whether values this small survive scaling and training intact.
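To make the concern concrete, here is a small sketch of what I mean (the array values are made up to match the stated range, with the reported smallest non-zero value included). It applies both candidate transforms and checks how much relative error a float32 round trip introduces for the smallest value:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data in the stated range [0, 0.1];
# the smallest non-zero value from the dataset is included.
values = np.array([0.0, 3.19526695102823e-8, 1e-4, 0.05, 0.1]).reshape(-1, 1)

# Min-max scaling maps [0, 0.1] linearly onto [0, 1].
scaled = MinMaxScaler().fit_transform(values)

# Log transform: log1p is safe for exact zeros; for strictly positive
# data, a plain log would spread the tiny values out more.
log_scaled = np.log1p(values)

# float32 carries ~7 significant decimal digits (relative error
# at most about 6e-8), so 3.2e-8 is representable -- it is the
# number of significant digits, not the magnitude, that is limited.
as_f32 = values.astype(np.float32).astype(np.float64)
rel_err = abs(as_f32[1, 0] - values[1, 0]) / values[1, 0]
print(rel_err)
```

My understanding is that 3.2e-8 is far above the smallest positive float32 (~1.2e-38), so the value itself is representable; what worries me is whether loss functions like MSE effectively ignore errors at this scale.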
Questions:
- Can LSTM models effectively predict these small values after scaling, given the potential float32 precision limitations?
- Can tree-based models handle such small scaled values well?
- Are there best practices or alternative approaches (e.g., predicting a transformed target) that ensure models can accurately predict values in this small range?