A better way to iterate API requests over each row of a CSV file?

Asked: 2018-10-29 21:57:03

Tags: python json api csv

I'm asking whether there is a better Python approach than mine, one that gives a better processing time. I am iterating a REST API request for each row of a CSV file and exporting the results to a new CSV file. Running it on 10 rows took about 11 seconds, so at roughly 1.1 seconds per row the 50,000 rows I actually need to process would take about 15 hours (55,000 seconds ≈ 917 minutes).

Is there any way to reduce the processing time (any improvement to the code)? Thanks!

Note: this API determines whether a person's address is up to date, given inputs such as the person's address, first name, and last name.

The Python code:

import requests
import json
import pandas as pd
import numpy as np
import csv

# Input CSV
df = pd.read_csv(r"C:\users\testu\documents\travis_50000.csv", delimiter=',', na_values="nan")

# Split the owner name into last- and first-name columns
splitted = df['prop_yr_owner_name'].str.split()
df['last_name'] = splitted.str[0]
df['first_name'] = splitted.str[1]

print(df["first_name"].iloc[0])



# Output CSV
with open(r"C:\users\testu\documents\travis_output.csv", 'w',  newline='') as fp:
    # Writing Header
    fieldnames = ["AddressExtras","AddressLine1","AddressLine2","BaseMelissaAddressKey","City","CityAbbreviation","MelissaAddressKey","MoveEffectiveDate","MoveTypeCode","PostalCode","State","StateName","NameFirst", "NameFull", "NameLast", "NameMiddle", "NamePrefix", "NameSuffix"]
    writer = csv.DictWriter(fp, fieldnames=fieldnames)
    writer.writeheader()

# Iterating requests for each row
for row in df.itertuples():
    url = 'https://smartmover.melissadata.net/v3/WEB/SmartMover/doSmartMover'
    payload = {
        't': '1353',
        'id': '4t8hsfh8fj3jf',
        'jobid': '1',
        'act': 'NCOA, CCOA',
        'cols': ('TotalRecords,AddressExtras,AddressLine1,AddressLine2,'
                 'BaseMelissaAddressKey,City,CityAbbreviation,MelissaAddressKey,'
                 'MoveEffectiveDate,MoveTypeCode,PostalCode,RecordID,Results,'
                 'State,StateName,NameFirst,NameFull,NameLast,NameMiddle,'
                 'NamePrefix,NameSuffix'),
        'opt': 'ProcessingType: Standard',
        'List': 'test',
        'first': row.first_name,
        'last': row.last_name,
        'a1': row.prop_year_addr_line1,
        'a2': row.prop_year_addr_line2,
        'city': row.prop_addr_city,
        'state': row.prop_addr_state,
        'postal': row.prop_addr_zip,
        'ctry': 'USA',
    }

    response = requests.get(
        url, params=payload,
        headers={'Content-Type': 'application/json'}
    )

    r = response.json()
    print(r)

    output_1 = r['Records'][0]['AddressExtras']
    output_2 = r['Records'][0]['AddressLine1']
    output_3 = r['Records'][0]['AddressLine2']
    output_4 = r['Records'][0]['BaseMelissaAddressKey']
    output_5 = r['Records'][0]['City']
    output_6 = r['Records'][0]['CityAbbreviation']
    output_7 = r['Records'][0]['MelissaAddressKey']
    output_8 = r['Records'][0]['MoveEffectiveDate']
    output_9 = r['Records'][0]['MoveTypeCode']
    output_10 = r['Records'][0]['PostalCode']
    output_11 = r['Records'][0]['State']
    output_12 = r['Records'][0]['StateName']
    output_13 = r['Records'][0]['NameFirst']
    output_14 = r['Records'][0]['NameFull']
    output_15 = r['Records'][0]['NameLast']
    output_16 = r['Records'][0]['NameMiddle']
    output_17 = r['Records'][0]['NamePrefix']
    output_18 = r['Records'][0]['NameSuffix']

    output_list = [output_1, output_2, output_3, output_4, output_5, output_6, output_7, output_8, output_9, output_10, output_11, output_12, output_13, output_14, output_15, output_16, output_17, output_18 ]
    print(output_list)

    with open(r"C:\users\testu\documents\travis_output.csv", 'a', newline='') as fp:
        csv.writer(fp).writerow(output_list)

Sample JSON API result for one row:


{'CASSReportLink': 'https://smartmover.melissadata.net/v3/Reports/CASSReport.aspx?tkenrpt=YvBDs39g52jKhLJyl5RgHKpuj5HwDMe1pE2lcQrczqRiG3/3y5yMlixj5S7lIvLJpDyAOkD8fE8vDCg56s3UogNuAkdTbS2aqoYF5FvyovUjnXzoQaHaL8TaQbwyCQ2RB7tIlszGy5+LqFnI7Xdr6sjYX93FDkSGei6Omck5OF4=',
 'NCOAReportLink': 'https://smartmover.melissadata.net/v3/Reports/NCOAReport.aspx?tkenrpt=8anQa424W7NYg8ueROFirapuj5HwDMe1pE2lcQrczqRiG3/3y5yMlixj5S7lIvLJpDyAOkD8fE8vDCg56s3UogNuAkdTbS2aqoYF5FvyovUjnXzoQaHaL8TaQbwyCQ2RB7tIlszGy5+LqFnI7Xdr6sjYX93FDkSGei6Omck5OF4=',
 'Records': [{'AddressExtras': '',
              'AddressKey': '78704,78704',
              'AddressLine1': ',,,STEC-100',
              'AddressLine2': '1009 W MONROE ST,1600 S 5TH ST,1008 W MILTON ST,3939 BEE CAVES RD',
              'AddressTypeCode': '',
              'BaseMelissaAddressKey': '',
              'CarrierRoute': '',
              'City': 'Austin,Austin,Austin,Austin',
              'CityAbbreviation': 'Austin,Austin,Austin,Austin',
              'CompanyName': '',
              'CountryCode': 'US',
              'CountryName': 'United States',
              'DeliveryIndicator': '',
              'DeliveryPointCheckDigit': '',
              'DeliveryPointCode': '',
              'MelissaAddressKey': '',
              'MoveEffectiveDate': '',
              'MoveTypeCode': '',
              'PostalCode': '78704,78704,78704,78746',
              'RecordID': '1',
              'Results': 'AE07',
              'State': '',
              'StateName': 'TX,TX,TX,TX',
              'Urbanization': ''}],
 'TotalRecords': '1',
 'TransmissionReference': '1353',
 'TransmissionResults': '',
 'Version': '4.0.4.48'}

2 Answers:

Answer 0 (score: 2)

In addition to @Almasyx's answer about opening the file only once, and @Random Davis's comment about parallelizing, you can also remove the print statements for a substantial speedup. Another small improvement is to store r['Records'][0] in a variable and use that in the subsequent lines; otherwise you index into the list inside the dictionary over and over.
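
A minimal sketch of those last two suggestions, reusing the names from the question's code (r and the fieldnames list, whose entries match the eighteen extracted keys in order):

record = r['Records'][0]  # index into the response once, not eighteen times
output_list = [record[field] for field in fieldnames]
# ...and drop print(r) / print(output_list) from the loop body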

Also, depending on the size of the REST API response objects, you could store them all in a list, and only at the very end, while writing the CSV file, process them one by one.
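
For instance, a sketch of that idea, again reusing fieldnames from the question (extrasaction='ignore' makes csv.DictWriter skip response keys that are not in the header):

records = []
for row in df.itertuples():
    # ... build payload and send the request as before ...
    records.append(response.json()['Records'][0])

with open(r"C:\users\testu\documents\travis_output.csv", 'w', newline='') as fp:
    writer = csv.DictWriter(fp, fieldnames=fieldnames, extrasaction='ignore')
    writer.writeheader()
    writer.writerows(records)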

Answer 1 (score: 0)

There are two main performance pitfalls:

  1. Making a separate request for every row.
  2. Opening the file every time to append data.

On the first point

This is a guess, but you are probably firing off a very large number of HTTP requests. One way to improve that part is to batch them into larger requests (ideally a single one); that way you avoid much of the overhead of setting up a connection between your PC and the server for every row. I don't know whether that URL allows batched requests, but if you plan to process 50,000 rows it is worth looking into (I assume you intend to launch all the requests in that loop); see the sketch below.
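
Purely as an illustration of the batching idea: the chunking logic below is real, but the batch payload shape is hypothetical; check the API documentation for whether doSmartMover accepts multiple records per call and what the request body must look like.

CHUNK = 100  # hypothetical per-request record limit; check the API docs

def batch_payload(rows):
    # Hypothetical batch format -- the per-record keys mirror the
    # question's single-record payload, but the real API may differ.
    return {
        't': '1353',
        'id': '4t8hsfh8fj3jf',
        'Records': [{'first': r.first_name, 'last': r.last_name,
                     'a1': r.prop_year_addr_line1, 'a2': r.prop_year_addr_line2,
                     'city': r.prop_addr_city, 'state': r.prop_addr_state,
                     'postal': r.prop_addr_zip, 'ctry': 'USA'}
                    for r in rows],
    }

rows = list(df.itertuples())
for i in range(0, len(rows), CHUNK):
    response = requests.post(url, json=batch_payload(rows[i:i + CHUNK]))
    # each response would then cover a whole chunk of records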

On the second point

You can try the following:

with open(r"C:\users\testu\documents\travis_output.csv", 'a', newline='') as fp:            
    csv_writer = csv.writer(fp)
    # Iterating requests for each row
    for row in df.itertuples():
        # Request info and manipulate its response
        # ... code ...
        # Finally, append the data to file
        csv_writer.writerow(output_list)

The main reason for the second tip is that opening a file is a time-consuming operation, so you should try to open it once and write to it many times.

Note that I haven't run this code, since I don't have a sample of the data; these are just hints at common ways of improving performance.