I'm trying to parse a large .tsv file with 500k rows into a single .txt file for each row. My script iterates up to id 11533, then stops and prints the following error:
```
File "goldfish.py", line 18, in <module>
    filename, text = prev_row
ValueError: too many values to unpack
```
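For context, this error is raised whenever a sequence holds more elements than there are names on the left-hand side of the assignment. A minimal reproduction (the row contents here are made up):

```python
# A row with a stray tab has three fields, so unpacking it
# into exactly two names raises ValueError.
row = ['11534', 'some text', 'extra field']
try:
    filename, text = row
except ValueError:
    print('unpacking failed: row has', len(row), 'fields')
```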
My script looks like this:
```python
import csv
import sys

csv.field_size_limit(sys.maxsize)

with open('id_descr.tsv', 'rb') as f:
    reader = csv.reader(f, delimiter='\t')
    fieldnames = next(reader)
    prev_row = next(reader)
    for row in reader:
        if not row:
            continue
        if len(row) == 1 or not row[0].isdigit():
            # continuation line: append it to the previous row's text
            prev_row[-1] += ' '.join(row)
        else:
            filename, text = prev_row
            filename = filename + ".txt"
            with open(filename, 'wb') as output:
                output.write(text)
                output.write('\n')
            prev_row = row
```
This sample .tsv file contains the last row that was parsed (id=11533) and the following row, which isn't parsed (that's the point where the script stops): https://www.dropbox.com/s/8mizthp8n0kduax/sample.tsv?dl=0
So my question is: is there a way to ignore this kind of error, or how do I have to change the script to avoid it?
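One approach I've considered, assuming the offending rows simply contain extra tab characters inside the text column: unpack only the first field as the id and rejoin the remaining fields, e.g. with a helper like the following (`split_id_row` is just a name for illustration, not part of my script):

```python
def split_id_row(row):
    # Take the first field as the filename/id and rejoin any
    # remaining fields with tabs, so a row with stray tabs no
    # longer raises "too many values to unpack".
    filename = row[0]
    text = '\t'.join(row[1:])
    return filename, text

# usage in the loop above would be:
# filename, text = split_id_row(prev_row)
```

Would that be a reasonable way to handle it?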