I currently have 14 CSV files, each containing a single column of data for one day (14 because it goes back 2 weeks).
What I want to do is produce one CSV file containing the data from all 14 CSVs.
For example, if each CSV contains the following:
1
2
3
4
I want the result to be a CSV file with
1,1,1,1,1,1,1,1,1,1,1,1,1,1,
2,2,2,2,2,2,2,2,2,2,2,2,2,2,
3,3,3,3,3,3,3,3,3,3,3,3,3,3,
4,4,4,4,4,4,4,4,4,4,4,4,4,4,
(the actual CSVs have 288 rows)
I am currently using some code I found in another question. It works for 2 or 3 CSVs, but when I added more it doesn't get past the first 3, and by now the code looks very messy.
Apologies for the amount of code, but this is what I have so far.
import csv

def csvappend():
    with open('C:\dev\OTQtxt\\result1.csv', 'rb') as csv1:
        with open('C:\dev\OTQtxt\\result2.csv', 'rb') as csv2:
            with open('C:\dev\OTQtxt\\result3.csv', 'rb') as csv3:
                with open('C:\dev\OTQtxt\\result4.csv', 'rb') as csv4:
                    with open('C:\dev\OTQtxt\\result5.csv', 'rb') as csv5:
                        with open('C:\dev\OTQtxt\\result6.csv', 'rb') as csv6:
                            with open('C:\dev\OTQtxt\\result7.csv', 'rb') as csv7:
                                with open('C:\dev\OTQtxt\\result8.csv', 'rb') as csv8:
                                    with open('C:\dev\OTQtxt\\result9.csv', 'rb') as csv9:
                                        with open('C:\dev\OTQtxt\\result10.csv', 'rb') as csv10:
                                            with open('C:\dev\OTQtxt\\result11.csv', 'rb') as csv11:
                                                with open('C:\dev\OTQtxt\\result12.csv', 'rb') as csv12:
                                                    with open('C:\dev\OTQtxt\\result13.csv', 'rb') as csv13:
                                                        with open('C:\dev\OTQtxt\\result14.csv', 'rb') as csv14:
                                                            reader1 = csv.reader(csv1, delimiter=',')
                                                            reader2 = csv.reader(csv2, delimiter=',')
                                                            reader3 = csv.reader(csv3, delimiter=',')
                                                            reader4 = csv.reader(csv4, delimiter=',')
                                                            reader5 = csv.reader(csv5, delimiter=',')
                                                            reader6 = csv.reader(csv6, delimiter=',')
                                                            reader7 = csv.reader(csv7, delimiter=',')
                                                            reader8 = csv.reader(csv8, delimiter=',')
                                                            reader9 = csv.reader(csv9, delimiter=',')
                                                            reader10 = csv.reader(csv10, delimiter=',')
                                                            reader11 = csv.reader(csv11, delimiter=',')
                                                            reader12 = csv.reader(csv12, delimiter=',')
                                                            reader13 = csv.reader(csv13, delimiter=',')
                                                            reader14 = csv.reader(csv14, delimiter=',')
                                                            all = []
                                                            for row1, row2, row3, row4, row5, row6, row7, row8, row9, \
                                                                row10, row11, row12, row13, row14 in zip(reader1,
                                                                                                         reader2, reader3,
                                                                                                         reader4, reader5,
                                                                                                         reader7, reader8,
                                                                                                         reader9, reader10,
                                                                                                         reader11, reader12,
                                                                                                         reader13, reader14):
                                                                row14.append(row1[0])
                                                                row14.append(row2[0])
                                                                row14.append(row3[0])
                                                                row14.append(row4[0])
                                                                row14.append(row5[0])
                                                                row14.append(row6[0])
                                                                row14.append(row7[0])
                                                                row14.append(row8[0])
                                                                row14.append(row9[0])
                                                                row14.append(row10[0])
                                                                row14.append(row11[0])
                                                                row14.append(row12[0])
                                                                row14.append(row13[0])
                                                                all.append(row14)
                                                            with open('C:\dev\OTQtxt\TODAY.csv', 'wb') as output:
                                                                writer = csv.writer(output, delimiter=',')
                                                                writer.writerows(all)
I think some of my indentation got messed up when copying, but you should get the idea. And I don't expect anyone to read through all of it; it's extremely repetitive.
I have seen some similar/related questions recommending unix tools. In case anyone suggests those, I should mention that this will be running on Windows.
If anyone has any ideas on how I can clean this up and actually get it working properly, I'd really appreciate it!
Answer 0 (score: 2):
Create the files:
xxxx@xxxx:/tmp/files$ for i in {1..15}; do echo -e "1\n2\n3\n4" > "my_csv_$i.csv"; done
xxxx@xxxx:/tmp/files$ more my_csv_1.csv
1
2
3
4
xxxx@xxxx:/tmp/files$ ls
my_csv_10.csv my_csv_11.csv my_csv_12.csv my_csv_13.csv my_csv_14.csv my_csv_15.csv my_csv_1.csv my_csv_2.csv my_csv_3.csv my_csv_4.csv my_csv_5.csv my_csv_6.csv my_csv_7.csv my_csv_8.csv my_csv_9.csv
Using itertools.izip_longest:
import os
from itertools import izip_longest

with open('result.csv', 'w') as f_obj:
    rows = []
    files = os.listdir('.')
    for f in files:
        # read every file in the current directory into memory, one list of lines per file
        rows.append(open(f).readlines())
    iter = izip_longest(*rows)
    for row in iter:
        # skip the None padding that izip_longest adds for shorter files
        f_obj.write(','.join([field.strip() for field in row if field is not None]) + '\n')
Output:
xxxxx@xxxx:/tmp/files$ more result.csv
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
2,2,2,2,2,2,2,2,2,2,2,2,2,2,2
3,3,3,3,3,3,3,3,3,3,3,3,3,3,3
4,4,4,4,4,4,4,4,4,4,4,4,4,4,4
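As a side note, izip_longest pads the shorter inputs with None instead of stopping at the shortest one the way zip() does, which is what the "field is not None" check above handles when the daily files differ in length. A tiny illustration:

from itertools import izip_longest

# zip() would stop at the shortest input; izip_longest pads the shorter one with None.
print list(izip_longest(['1', '2', '3'], ['1', '2']))
# [('1', '1'), ('2', '2'), ('3', None)]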
This isn't the optimal solution, since you will be holding all the data in memory, but it should show you how to do it. By the way, if all of your data is numeric, I would stay in numpy and use a multidimensional array.
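Here is a minimal sketch of that numpy route, assuming each file really is a single numeric (integer, as in the example) column of equal length; the result1.csv ... result14.csv names and the TODAY.csv output name are taken from the question and may need adjusting:

import numpy as np

# Hypothetical names based on the question; adjust the folder/pattern to your setup.
filenames = ['result%d.csv' % i for i in range(1, 15)]

# Load each one-column file as a 1-D array, stack the arrays side by side as columns,
# and write the combined 288x14 array out as comma-separated values.
columns = [np.loadtxt(name, dtype=int) for name in filenames]
combined = np.column_stack(columns)
np.savetxt('TODAY.csv', combined, fmt='%d', delimiter=',')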
Answer 1 (score: 0):
You can use this; you can also build the list of file names in a loop:
import numpy as np

filenames = ['file1', 'file2', 'file3']  # all the files to be read in
data = []  # saves data from the files
for filename in filenames:
    data.append(open(filename, 'r').readlines())  # append a list of all numbers in the current file

data = np.matrix(data).T  # transpose the list of lists using numpy
data_string = '\n'.join([','.join([k.strip() for k in j]) for j in data.tolist()])  # create a string by separating inner elements by ',' and outer lists by '\n'
with open('newfile', 'w') as fp:
    fp.write(data_string)
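For the asker's layout, the filenames list could also be generated rather than typed out by hand; a sketch, assuming the result1.csv ... result14.csv names and folder from the question:

filenames = ['C:\\dev\\OTQtxt\\result%d.csv' % i for i in range(1, 15)]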
Answer 2 (score: 0):
Just tested this:
import csv
import glob

files = glob.glob1("C:\\dev\\OTQtxt", "*csv")
rows = []

with open('C:\\dev\\OTQtxt\\one.csv', 'a') as oneFile:
    for file in files:
        rows.append(open("C:\\dev\\OTQtxt\\" + file, 'r').read().splitlines())
    for row in rows:
        writer = csv.writer(oneFile)
        writer.writerow(''.join(row))
This will produce a file one.csv in that directory containing all of the merged *csv files.