Python.exe hangs when running a script that uses pandas and list processing

Asked: 2017-04-21 11:04:54

Tags: python csv pandas

I developed a script that processes a CSV file and generates another file with the results. The script runs successfully with limited test data, but when I execute it against the real data file, which has 25 million rows and 15 columns, it hangs and then closes abruptly. Please see the attached error screenshot.

So, is there a maximum limit on how much data pandas can read from a CSV file, or on how many records can be stored in a list?

Please share your ideas on optimizing the script below.

[Error Screen Shot]

Here is the script:

import csv
import operator
import pandas as pd
import time

print time.strftime('Script Start Time : ' + "%Y-%m-%d %H:%M:%S")
sourceFile = raw_input('Enter file name along with path : ')
searchParam1 = raw_input('Enter first column name containing MSISDN : ').lower()
searchParam2 = raw_input('Enter second column name containing DATE-TIME : ').lower()
searchParam3 = raw_input('Enter file separator (,/#/|/:/;) : ')

df = pd.read_csv(sourceFile, sep=searchParam3)
df.columns = df.columns.str.lower()
df = df.rename(columns={searchParam1 : 'msisdn', searchParam2 : 'datetime'})

destFileWritter = csv.writer(open(sourceFile + ' - ProcessedFile.csv','wb'))
destFileWritter.writerow(df.keys().tolist())
sortedcsvList = df.sort_values(['msisdn','datetime']).values.tolist()

rows = [row for row in sortedcsvList]
col_1 = [row[df.columns.get_loc('msisdn')] for row in rows]
col_2 = [row[df.columns.get_loc('datetime')] for row in rows]

for i in range(0,len(col_1)-1):
    if col_1[i] == col_1[i+1]:
        #print('Inside If...')
        continue
    else:
        for row in rows:
            if col_1[i] in row:
                if col_2[i] in row:
                    #print('Inside else...')
                    destFileWritter.writerow(row)
destFileWritter.writerow(rows[len(rows)-1])
print('Processing Completed, Kindly Check Response File On Same Location.')
print time.strftime('Script End Time : ' + "%Y-%m-%d %H:%M:%S")
raw_input('Press Enter to Exit...')
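
For reference, after sorting by msisdn and datetime, the loop above effectively keeps only the last row of each msisdn group; the inner scan over all rows for every group boundary is what makes it O(n²). A minimal vectorized sketch of the same intent, reusing the variables from the script above and assuming repeated msisdn/datetime pairs should collapse to a single row (the inner scan can emit them more than once), could look like:

import pandas as pd

df = pd.read_csv(sourceFile, sep=searchParam3)
df.columns = df.columns.str.lower()
df = df.rename(columns={searchParam1: 'msisdn', searchParam2: 'datetime'})

# sort, then keep only the last row per msisdn: same effect as the nested scan
result = df.sort_values(['msisdn', 'datetime']).drop_duplicates('msisdn', keep='last')
result.to_csv(sourceFile + ' - ProcessedFile.csv', index=False)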

Updated script:

import csv
import operator
import pandas as pd
import time
import sys

print time.strftime('Script Start Time : ' + "%Y-%m-%d %H:%M:%S")
sourceFile = raw_input('Enter file name along with path : ')
searchParam1 = raw_input('Enter first column name containing MSISDN : ').lower()
searchParam2 = raw_input('Enter second column name containing DATE-TIME : ').lower()
searchParam3 = raw_input('Enter file separator (,/#/|/:/;) : ')

def csvSortingFunc(sourceFile, searchParam1, searchParam2, searchParam3):
    CHUNKSIZE = 10000
    resultList = []  # collects the header plus surviving rows from every chunk
    for chunk in pd.read_csv(sourceFile, chunksize=CHUNKSIZE, sep=searchParam3):
        df = chunk
        df.columns = df.columns.str.lower()
        df = df.rename(columns={searchParam1 : 'msisdn', searchParam2 : 'datetime'})
        if not resultList:
            resultList.append(df.keys().tolist())  # header row, appended once
        sortedcsvList = df.sort_values(['msisdn','datetime']).values.tolist()
        rows = [row for row in sortedcsvList]
        col_1 = [row[df.columns.get_loc('msisdn')] for row in rows]
        col_2 = [row[df.columns.get_loc('datetime')] for row in rows]
        for i in range(0,len(col_1)-1):
            if col_1[i] == col_1[i+1]:
                #print('Inside If...')
                continue
            else:
                for row in rows:
                    if col_1[i] in row:
                        if col_2[i] in row:
                            #print('Inside else...')
                            #destFileWritter.writerow(row)
                            resultList.append(row)
        #destFileWritter.writerow(rows[len(rows)-1])
    resultList.append(rows[len(rows)-1])
    writedf = pd.DataFrame(resultList)
    writedf.to_csv(sourceFile + ' - ProcessedFile.csv', header=False, index=False)
    #print('Processing Completed, Kindly Check Response File On Same Location.')


csvSortingFunc(sourceFile, searchParam1, searchParam2, searchParam3)
print('Processing Completed, Kindly Check Response File On Same Location.')
print time.strftime('Script End Time : ' + "%Y-%m-%d %H:%M:%S")
raw_input('Press Enter to Exit...')

1 Answer:

Answer 0 (score: 1)

If you can easily aggregate the results, you should consider using the chunksize parameter of pd.read_csv. It lets you read the .csv file in chunks of, say, 100,000 records at a time.

chunk_size = 10000
for chunk in pd.read_csv(filename, chunksize=chunk_size):
    df = chunk
    # your code

After that, you should append each chunk's result to the previous ones. Hope this helps; I have used this approach when processing files with several million rows.
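
To make the append-each-chunk's-result step concrete, here is a minimal sketch under the question's dedup intent (keep the last row per msisdn), reusing the question's variable names. Because one msisdn's rows can straddle a chunk boundary, a second global pass is still needed after concatenating the per-chunk results:

import pandas as pd

CHUNKSIZE = 100000
pieces = []
for chunk in pd.read_csv(sourceFile, sep=searchParam3, chunksize=CHUNKSIZE):
    chunk.columns = chunk.columns.str.lower()
    chunk = chunk.rename(columns={searchParam1: 'msisdn', searchParam2: 'datetime'})
    # reduce each chunk first so only the surviving rows are kept in memory
    pieces.append(chunk.sort_values(['msisdn', 'datetime'])
                       .drop_duplicates('msisdn', keep='last'))

# groups split across chunk boundaries are deduplicated in this final pass
result = (pd.concat(pieces, ignore_index=True)
          .sort_values(['msisdn', 'datetime'])
          .drop_duplicates('msisdn', keep='last'))
result.to_csv(sourceFile + ' - ProcessedFile.csv', index=False)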

Follow-up:

    i = 0
    for chunk in pd.read_csv(sourceFile, chunksize=10):
        print('chunk_no', i)
        i+=1

Could you run just these few lines? Does it print out any numbers?