I realize that how to read lines from a file in an interval [start, stop] is a common question; however, many of the standard answers don’t work well for my data set.
Specifically, I have data files with 500K lines and 100K columns. Each block of 50 rows is a separate data set which I need to read as a block, analyze, and then move on to the next block. Using readlines() to create a data object which I can sample in increments of 50 will not work, because the data objects take up too much memory.
I thought that something like the following would work. For the example below, I created a test file with 150 lines (3 replicates of 50); my_function() is just a placeholder for the processing of each line.
infile = open("test_file", "r")
outfile = open("out_test_file", "w")

for rep in range(0, 3):
    to_sample = list(range(rep * 50, rep * 50 + 50))
    i = 0
    for line in infile:
        if i in to_sample:
            something_useful = my_function(line)
        i = i + 1
        outfile.write(str(something_useful))

outfile.close()
The script gets me through the first iteration of 50, but then cannot proceed, presumably because the for line in infile loop doesn’t start at the beginning of the file during the next iteration of rep, since it has already read the last line of the infile.
As I stated, if the data files were of manageable size I could just use readlines() and then sample the matrix in the desired intervals using the loop over rep and line number, but this isn’t feasible for this data set. What would be an efficient alternative?
The inner loop reads the entire file. When you repeat the outer loop, there’s nothing left in the file to read.
Use a range loop for the inner loop, and call readline().
for _ in range(3):
    for i in range(50):
        line = infile.readline()
        something_useful = my_function(line)
        outfile.write(something_useful)
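If the block size is fixed at 50, the same forward-only read can also be written with itertools.islice, which consumes the next 50 lines from the open file object on each pass. This is only a sketch of that alternative, not part of the answer above; my_function is the placeholder from the question and is assumed to return something that can be converted to a string for writing.

from itertools import islice

with open("test_file", "r") as infile, open("out_test_file", "w") as outfile:
    for rep in range(3):
        # islice never rewinds the file; each call simply consumes
        # the next 50 lines from wherever the previous block stopped
        block = list(islice(infile, 50))
        if not block:
            break  # fewer than 3 full blocks left in the file
        for line in block:
            something_useful = my_function(line)
            outfile.write(str(something_useful))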
If you want to process only certain line numbers, you could do this:
target_lines = range(5, 100)  # or whatever lines you want to process

line_number = 0
for line in file:
    line_number += 1
    if line_number in target_lines:
        pass  # process this line
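The manual counter can also be dropped in favour of enumerate, which yields the line number alongside each line. A minimal sketch of that variant; process_line is a hypothetical stand-in for whatever you do with each matching line, and the file name is reused from the question:

target_lines = set(range(5, 100))  # a set keeps the membership test fast for long line lists

with open("test_file", "r") as infile:
    # enumerate(..., start=1) keeps the counter in step with the
    # 1-based line_number used above, without manual bookkeeping
    for line_number, line in enumerate(infile, start=1):
        if line_number in target_lines:
            process_line(line)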