I have been learning and experimenting with some machine learning classifiers in a Jupyter notebook.
First off, the data size is not big; it is a fairly simple toy example.
However, when I run the training code, the Jupyter kernel crashes with the following error:
“The Kernel crashed while executing code in the current cell or a previous cell.”
I suspected it might be an issue with running inside Jupyter, so I copied the exact same code into a standalone .py script, and I get “zsh: segmentation fault” at the same point, where training is initiated.
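For reference, here is a stripped-down sketch of the kind of code that triggers the crash; the classifier and synthetic data below are illustrative stand-ins, not my exact setup:

```python
# Minimal stand-in for the toy training task (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Small synthetic dataset, roughly the scale of my toy example.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)  # <- the process segfaults here on my personal machine
print("trained OK")
```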
I initially thought I had implemented something incorrectly, but the same copied code runs perfectly fine on my work laptop (similar specs, just less cluttered because I only use it for work), training in under 10 seconds…
I am wondering whether this is a memory issue or some setting specific to my personal machine. Has anyone run into something similar, or have ideas for resolving this?
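To rule out an environment difference between the two laptops, my next step is to dump and diff the interpreter and package versions on each, along these lines (the package names are just examples of what the task imports):

```python
# Print interpreter and package versions so the two machines can be diffed.
# The package list is illustrative; substitute whatever the script actually imports.
import sys

print("python:", sys.version)
for pkg in ("numpy", "scipy", "sklearn"):
    mod = __import__(pkg)
    print(pkg, getattr(mod, "__version__", "unknown"))
```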
Things I have tried so far:

(i) reproducing this on another machine, where I could not reproduce it;
(ii) re-installing the Python / ML packages used in this task, which still gives the same error;
(iii) reducing the model's parameters and the training data to a bare minimum to see if it executes, which still gives the same error.
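One more thing I plan to try: running the standalone script with Python's built-in faulthandler enabled, so the segfault dumps a Python-level traceback that should point at the failing native call (train.py below is a placeholder name for my script):

```python
# Put this at the very top of the script, or run it as:
#   python -X faulthandler train.py
# On SIGSEGV, faulthandler prints the Python traceback of the crashing
# thread, which usually shows which library call triggers the native crash.
import faulthandler
faulthandler.enable()

# ... rest of the training script unchanged ...
```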