I need to write a Python script that reads a CSV file with the columns (person_id, name, flag). The file has 3000 rows. For each person_id, I need to make a GET request to a URL, passing the person_id, like this:

http://api.myendpoint.intranet/get-data/1234

That URL returns some information about the person_id, like the example below. I need to collect all the rent objects and save them to a CSV.

This is what I have tried so far, but it is not working:

import pandas as pd
import requests

ids = pd.read_csv(f"{path}/data.csv", delimiter=';')
person_rents = df = pd.DataFrame([], columns=list('person_id','carId','price','rentStatus'))

for id in ids:
    response = request.get(f'endpoint/{id["person_id"]}')
    json = response.json()
    person_rents.append([person_id, rent['carId'], rent['price'], rent['rentStatus']])

My output needs to look like this:
person_id;name;flag;cardId;price;rentStatus
1000;Joseph;1;6638;1000;active
1000;Joseph;1;5566;2000;active
Response example:
{
"active": false,
"ctodx": false,
"rents": [{
"carId": 6638,
"price": 1000,
"rentStatus": "active"
}, {
"carId": 5566,
"price": 2000,
"rentStatus": "active"
}
],
"responseCode": "OK",
"status": [{
"request": 345,
"requestStatus": "F"
}, {
"requestId": 678,
"requestStatus": "P"
}
],
"transaction": false
}
Then, for each carId, I need to call another endpoint (http://api.myendpoint.intranet/get-mileage/{carId}) to get the mileage. The return of each call looks like this:
{"mileage":1000.0000}
{"mileage":550.0000}
The final output must be:
person_id;name;flag;cardId;price;rentStatus;mileage
1000;Joseph;1;6638;1000;active;1000.0000
1000;Joseph;1;5566;2000;active;550.0000
Can someone help me with this script? It can be done with pandas or any other Python 3 library.
Answer 0 (score: 2)
- Create the dataframe df with pd.read_csv.
- It is assumed that all values in 'person_id' are unique.
- Use .apply on 'person_id' to call prepare_data.
- prepare_data expects 'person_id' to be a str or an int, as indicated by the type annotation Union[int, str].
- Call the API, which returns a dict to the prepare_data function.
- Convert the 'rents' key of the dict into a dataframe with pd.json_normalize.
- Use .apply on 'carId' to call the API and extract 'mileage', which is added as a column to the dataframe data.
- Add 'person_id' as a column to data, which is used to merge data with df.
- Convert s, a pd.Series of dataframes, into a single dataframe with pd.concat, then merge df and s on person_id.
- Save the result to a csv in the desired format with pd.to_csv.
- All API calls happen in the call_api function.
- As long as call_api returns a dict like the responses shown in the question, the rest of the code will work to produce the desired output; the example dict from the question can be pasted as text into the code for testing.

import pandas as pd
import requests
import json
from typing import Union
def call_api(url: str) -> dict:
    r = requests.get(url)
    return r.json()


def prepare_data(uid: Union[int, str]) -> pd.DataFrame:
    d_url = f'http://api.myendpoint.intranet/get-data/{uid}'
    m_url = 'http://api.myendpoint.intranet/get-mileage/'

    # get the rent data from the api call
    rents = call_api(d_url)['rents']

    # normalize rents into a dataframe
    data = pd.json_normalize(rents)

    # get the mileage data from the api call and add it to data as a column
    data['mileage'] = data.carId.apply(lambda cid: call_api(f'{m_url}{cid}')['mileage'])

    # add person_id as a column to data, which will be used to merge data to df
    data['person_id'] = uid

    return data
# read data from file
df = pd.read_csv('file.csv', sep=';')
# call prepare_data
s = df.person_id.apply(prepare_data)
# s is a Series of DataFrames, which can be combined with pd.concat
s = pd.concat([v for v in s])
# join df with s, on person_id
df = df.merge(s, on='person_id')
# save to csv
df.to_csv('output.csv', sep=';', index=False)
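Since the endpoints are on an intranet, here is a minimal sketch of how the code above could be exercised offline by stubbing call_api with the sample payloads from the question. The stub below is hypothetical test scaffolding, not part of the real API:

# Hypothetical stub for offline testing: returns the sample payloads from the
# question instead of calling the real intranet endpoints.
SAMPLE_DATA = {
    "active": False,
    "rents": [
        {"carId": 6638, "price": 1000, "rentStatus": "active"},
        {"carId": 5566, "price": 2000, "rentStatus": "active"},
    ],
    "responseCode": "OK",
}
SAMPLE_MILEAGE = {6638: {"mileage": 1000.0}, 5566: {"mileage": 550.0}}

def call_api(url: str) -> dict:
    # crude routing on the URL: assumes the URL shapes used in prepare_data
    if 'get-mileage' in url:
        car_id = int(url.rstrip('/').split('/')[-1])
        return SAMPLE_MILEAGE[car_id]
    return SAMPLE_DATA

With this stub in place of the real call_api, running the rest of the script against a small file.csv should reproduce the final CSV layout shown in the question.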
Answer 1 (score: 1)
There are many different ways to achieve this. One of them is to continue along the lines of what you already started, as mentioned in the comments. A very simple solution, without any error handling, could look like this:
from types import SimpleNamespace
import pandas as pd
import requests
import json
path = '/some/path/'
df = pd.read_csv(f'{path}/data.csv', delimiter=';')
rows_list = []
for _, row in df.iterrows():
    # fetch the rent data for this person
    rentCall = f'http://api.myendpoint.intranet/get-data/{row.person_id}'
    print(rentCall)
    response = requests.get(rentCall)
    # object_hook turns each JSON object into a SimpleNamespace for attribute access
    r = json.loads(response.text, object_hook=lambda d: SimpleNamespace(**d))
    for rent in r.rents:
        # fetch the mileage for this rent's car
        mileageCall = f'http://api.myendpoint.intranet/get-mileage/{rent.carId}'
        print(mileageCall)
        response2 = requests.get(mileageCall)
        m = json.loads(response2.text, object_hook=lambda d: SimpleNamespace(**d))
        # use the rent's own status so the output matches the requested format
        rows_list.append((row['person_id'], row['name'], row['flag'], rent.carId, rent.price, rent.rentStatus, m.mileage))

df = pd.DataFrame(rows_list, columns=('person_id', 'name', 'flag', 'carId', 'price', 'rentStatus', 'mileage'))
print(df.to_csv(index=False, sep=';'))
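As a side note on the object_hook=lambda d: SimpleNamespace(**d) trick used above: it converts every decoded JSON object into a namespace whose fields can be read with attribute access. A tiny illustrative sketch, using the mileage payload from the question:

from types import SimpleNamespace
import json

payload = '{"mileage": 1000.0000}'
m = json.loads(payload, object_hook=lambda d: SimpleNamespace(**d))
print(m.mileage)  # 1000.0 -- attribute access instead of m["mileage"]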
Answer 2 (score: 0)
You mention that you have 3000 rows, which means you will have to make a lot of API calls. Depending on the connection, each of these calls can take a while, so doing them sequentially may be too slow: most of the time your program will just be waiting for a response from the server without doing anything else. We can improve performance by using multiprocessing.
I use all the code from Trenton's answer above, but replace the following sequential call:
# call prepare_data
s = df.person_id.apply(prepare_data)
with a parallel alternative:
from multiprocessing import Pool

n_processes = 20  # Experiment with this to see what works well
with Pool(n_processes) as p:
    s = p.map(prepare_data, df.person_id)
Alternatively, a thread pool may be faster, but you will have to test that yourself by replacing the import with:

from multiprocessing.pool import ThreadPool as Pool
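Putting the two pieces together, a minimal sketch of the thread-pool variant (assuming prepare_data and df from the first answer are already defined) could look like this:

from multiprocessing.pool import ThreadPool as Pool
import pandas as pd

n_processes = 20  # experiment with this to see what works well
with Pool(n_processes) as p:
    # each worker thread fetches and prepares the data for one person_id
    frames = p.map(prepare_data, df.person_id)

# combine the per-person dataframes, merge with the input, and save as before
s = pd.concat(frames)
df = df.merge(s, on='person_id')
df.to_csv('output.csv', sep=';', index=False)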