How to use read_csv or lambda

Asked: 2019-01-31 07:00:50

Tags: python pandas dataframe lambda

I read the same text file twice with read_csv. The first pass collects a list of keys whose 'Col6' matches a specific string (MSG); this gives me a dataframe containing only the entries where 'Col6' matches. Then I read the same file a second time (again with read_csv) and, whenever key1 == key2, I also print some columns based on 'Col1'.

I basically have two questions: 1. Can I combine the two searches (read_csv calls) into one? 2. Even if I keep the two read_csv calls separate, how do I read multiple files? Currently I read only one file (firstFile.txt), but I would like to replace the filename with '*.txt' so the operation runs on every .txt file in the directory.

The data file looks like this. I want to print all rows with Col1=12345, because Col6 has the value 'This is a test':

Col1  Col2    Col3    Col4    Col5    Col6
-       -       -       -       -       -
54321 544     657     888     4476    -
12345 345     456     789     1011    'This is a test'
54321 644     857     788     736     -
54321 744     687     898     7436    -
12345 365     856     789     1020    -
12345 385     956     689     1043    -
12345 385     556     889     1055    -
65432 444     676     876     4554    -
-     -       -       -       -       -
54321 544     657     888     776     -
12345 345     456     789     1011    -
54321 587     677     856     7076    -
12345 345     456     789     1011    -
65432 444     676     876     455     -
12345 345     456     789     1011    -
65432 447     776     576     4055    -
-     -       -       -       -       -   
65432 434     376     576     4155    -

The script I am using is:

import csv
import pandas as pd
import os
import glob

DL_fields1 = ['Col1', 'Col2']
DL_fields2 = ['Col1', 'Col2','Col3', 'Col4', 'Col5', 'Col6']

MSG = 'This is a test'

iter_csv = pd.read_csv('firstFile.txt', chunksize=1000, usecols=DL_fields2, skiprows=1)
df = pd.concat([chunk[chunk['Col6'] == MSG] for chunk in iter_csv])

for i, row in df.iterrows():
    key1 = df.loc[i, 'Col1']
    j=0
    for line in pd.read_csv('firstFile.txt', chunksize=1, usecols=DL_fields2, skiprows=1, na_values={'a':'Int64'}):
        key2 = line.loc[j,'Col1']
        j = j + 1
        if (key2 == '-'):
            continue
        elif (int(key1) == int(key2)):
            print (line)
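For comparison, the two-pass loop above can be collapsed into a single read by collecting the keys flagged in Col6 and filtering with isin. A minimal sketch, assuming a toy frame in place of the parsed file (the column values are taken from the sample data; the names keys and matches are illustrative):

```python
import pandas as pd

MSG = "This is a test"

# Toy frame standing in for the parsed file (layout is an assumption)
df = pd.DataFrame({
    "Col1": ["54321", "12345", "54321", "12345", "65432"],
    "Col2": [544, 345, 644, 365, 444],
    "Col6": ["-", MSG, "-", "-", "-"],
})

# Pass 1 and pass 2 in one go: take the keys flagged by MSG,
# then keep every row whose Col1 appears among those keys
keys = df.loc[df["Col6"] == MSG, "Col1"]
matches = df[df["Col1"].isin(keys)]
print(matches["Col1"].tolist())  # both rows carrying key '12345'
```

This avoids re-reading the file once per matching key, which is what the nested loop above does.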

1 Answer:

Answer 0 (score: 2)

As far as I understand, you don't need to read the CSV file twice. Essentially, you want all the rows where MSG occurs in Col6. You can achieve this in just a few lines -

MSG = 'This is a test'
df = pd.read_csv('firstFile.txt', usecols=DL_fields2, skiprows=1)
# this gives you all the rows where MSG occurs in Col6
df_msg = df.loc[df['Col6'] == MSG, :]
# this gives you all the rows where Col1 is 12345 (Col1 is read as
# strings because of the '-' separator rows)
df_12345 = df.loc[df['Col1'] == '12345', :]

You can create multiple subsets of the data this way.


To answer the second part of your question, you can loop over all the text files like this -

import glob
txt_files = glob.glob("test/*.txt")
for file in txt_files:
    some_df = pd.read_csv(file)

Edit: here is how you loop over the files and find all the keys with Col1=12345 and Col6=MSG -
import glob
from functools import reduce

results_list = []
MSG = 'This is a test'

txt_files = glob.glob("test/*.txt")
for file in txt_files:
    some_df = pd.read_csv(file, usecols=DL_fields2, skiprows=1)
    # rows where MSG occurs in Col6
    df = some_df.loc[some_df['Col6'] == MSG, :]
    # results_list is a list of all such dataframes
    results_list.append(df.loc[df['Col1'] == '12345', :])

# All results in one big dataframe
result_df = reduce(lambda x,y: pd.concat([x,y]), results_list)
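If the key should come from the MSG rows rather than being hardcoded as 12345, the per-file filter generalizes with isin. A self-contained sketch of that variant; the two sample files, their comma-separated layout, and the names frames and result_df are assumptions for illustration:

```python
import glob
import os
import tempfile
import pandas as pd

MSG = "This is a test"

# Two small comma-separated sample files in a temp directory
# (stand-ins for the real *.txt files; the layout is an assumption)
tmpdir = tempfile.mkdtemp()
sample = "Col1,Col2,Col6\n54321,544,-\n12345,345,{}\n12345,365,-\n".format(MSG)
for name in ("a.txt", "b.txt"):
    with open(os.path.join(tmpdir, name), "w") as fh:
        fh.write(sample)

frames = []
for path in glob.glob(os.path.join(tmpdir, "*.txt")):
    some_df = pd.read_csv(path, dtype=str)
    keys = some_df.loc[some_df["Col6"] == MSG, "Col1"]   # keys flagged by MSG
    frames.append(some_df[some_df["Col1"].isin(keys)])   # all rows sharing a key

result_df = pd.concat(frames, ignore_index=True)
print(len(result_df))
```

pd.concat accepts the whole list at once, so the reduce/lambda at the end of the answer is equivalent to pd.concat(results_list).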