How can I efficiently split a large .txt file into training and test sets in Python?
I have a very large .txt file (several gigabytes) that I need to split into training and test sets for a machine learning project. Reading the entire file into memory and then splitting it is not feasible due to memory constraints, so I am looking for a way to split the file efficiently while keeping memory usage low.
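Here is a minimal sketch of the streaming, per-line random-assignment approach I'm considering. It assumes each line of the file is an independent example; the file paths and the 20% test fraction are placeholders:

```python
import random

# Placeholder paths and split ratio -- adjust for your data.
INPUT_PATH = "data.txt"
TRAIN_PATH = "train.txt"
TEST_PATH = "test.txt"
TEST_FRACTION = 0.2

random.seed(42)  # make the split reproducible

with open(INPUT_PATH, "r", encoding="utf-8") as src, \
     open(TRAIN_PATH, "w", encoding="utf-8") as train, \
     open(TEST_PATH, "w", encoding="utf-8") as test:
    for line in src:
        # Stream one line at a time; only the current line is held in memory.
        if random.random() < TEST_FRACTION:
            test.write(line)
        else:
            train.write(line)
```

One thing I'm unsure about is that this only gives an approximate 80/20 split rather than an exact count of lines per set. Is there a better or more idiomatic way to do this for multi-gigabyte files?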