I have a Python script that pulls balance sheet reports through API GET requests in a loop over multiple locations. I added an else branch that prints the IDs of any locations that return no JSON data.
Sometimes the loop runs all the way through and retrieves the final report, but most of the time the code raises the following error and stops:
Traceback (most recent call last):
File "<ipython-input-2-85715734b89c>", line 1, in <module>
runfile('C:/Users/PVarimalla/.spyder-py3/temp.py', wdir='C:/Users/PVarimalla/.spyder-py3')
File "C:\Users\PVarimalla\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\PVarimalla\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/PVarimalla/.spyder-py3/temp.py", line 107, in <module>
dict1 = json.loads(json_data)
File "C:\Users\PVarimalla\Anaconda3\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\PVarimalla\Anaconda3\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\PVarimalla\Anaconda3\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Expecting value
For example, a successful run: out of 50 locations, 15 location IDs fail to return JSON data and are printed, and the balance sheets of all the other franchisees are appended to the dataframe.
A failed run: each time I run the script, it prints 5, 6, or 3 location IDs that failed to return JSON data and then stops with the error above.
I don't understand why the script sometimes runs perfectly and at other times (most of the time) misbehaves. Is it my internet connection, or a problem with Spyder / Python 3.7?
I don't think there is a bug anywhere in my script, but I'm not sure why I keep hitting this error. Please help.
Here is the code:
import requests
import json
#import DataFrame
import pandas as pd
#from pandas.io.json import json_normalize
#import json_normalize
access_token = 'XXXXXXXXX'
url = 'https://api.XXXX.com/v1/setup'
url_company = "https://api.*****.com/v1/Reporting/ProfitAndLoss?CompanyId=1068071&RelativeDateRange=LastMonth&DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None"
url_locations_trend = "https://api.*****.com/v1/location/search?CompanyId=1068071"
url_locations_mu = "https://api.*****.com/v1/location/search?CompanyId=2825826"
url_locations_3yrs = "https://api.qvinci.com/v1/location/search?CompanyId=1328328"
ult_result = requests.get(url_locations_trend,
                          headers={'X-apiToken': '{}'.format(access_token)})
#decoded_result= result.read().decode("UTF-8")
json_data_trend = ult_result.text
dict_trend = json.loads(json_data_trend)
locations_trend = {}
#Name
locations_trend["Name"] = []
for i in dict_trend["Items"]:
    locations_trend["Name"].append(i["Name"])
#ID
locations_trend["ID"] = []
for i in dict_trend["Items"]:
    locations_trend["ID"].append(i["Id"])
#creates dataframe for locations under trend transformations
df_trend = pd.DataFrame(locations_trend)
#making a call to get locations data for under 3 yrs
ul3_result = requests.get(url_locations_3yrs,
                          headers={'X-apiToken': '{}'.format(access_token)})
#decoded_result= result.read().decode("UTF-8")
json_data_3yrs = ul3_result.text
dict_3yrs = json.loads(json_data_3yrs)
locations_3yrs = {}
#Name
locations_3yrs["Name"] = []
for i in dict_3yrs["Items"]:
    locations_3yrs["Name"].append(i["Name"])
#ID
locations_3yrs["ID"] = []
for i in dict_3yrs["Items"]:
    locations_3yrs["ID"].append(i["Id"])
#creates dataframe for locations under 3 yrs
df_3yrs = pd.DataFrame(locations_3yrs)
#making a call to get locations data for the multi-unit company
ulm_result = requests.get(url_locations_mu,
                          headers={'X-apiToken': '{}'.format(access_token)})
#decoded_result= result.read().decode("UTF-8")
json_data_mu = ulm_result.text
dict_mu = json.loads(json_data_mu)
locations_mu = {}
#Name
locations_mu["Name"] = []
for i in dict_mu["Items"]:
    locations_mu["Name"].append(i["Name"])
#ID
locations_mu["ID"] = []
for i in dict_mu["Items"]:
    locations_mu["ID"].append(i["Id"])
#creates dataframe for multi-unit locations
df_mu = pd.DataFrame(locations_mu)
locations_df = pd.concat([df_mu, df_3yrs, df_trend])
df_final = pd.DataFrame()
count = 0
for i in locations_df["ID"]:
    if count < 3:
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=2825826&Locations=" + i
    elif 2 < count < 12:
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=1328328&Locations=" + i
    else:
        url_bs = "https://api.******.com/v1/Reporting/BalanceSheet?DateFrequency=Monthly&UseAccountMapping=true&VerticalAnalysisType=None&IncludeComputedColumns=true&RelativeDateRange=LastTwoCYTD&UseCustomDateRange=false&CompanyId=1068071&Locations=" + i
    result = requests.get(url_bs,
                          headers={'X-apiToken': '{}'.format(access_token)})
    #decoded_result= result.read().decode("UTF-8")
    json_data = result.text
    if(json_data != ""):
        final = {}
        dict1 = json.loads(json_data)
        final["Months"] = dict1["ReportModel"]["ColumnNames"]
        final["Location"] = [dict1["SelectedOptions"]["Locations"][0]]*len(final["Months"])
        #note: this name shadows the built-in set()
        set = {"Total 10000 Cash","Total 12000 Inventory Asset","Total Other Current Assets","Total Fixed Assets","Total ASSETS",
               "Total Accounts Payable","Total Credit Cards","24004 Customer Deposits","Total Liabilities","Total Equity","Total Long Term Liabilities"}
        def search(dict2):
            if len(dict2["Children"]) == 0:
                return
            for i in dict2["Children"]:
                if(i["Name"] in set):
                    final[i["Name"]] = []
                    for j in i["Values"]:
                        final[i["Name"]].append(j["Value"])
                search(i)
            if ("Total " + dict2["Name"]) in set:
                final["Total " + dict2["Name"]] = []
                for j in dict2["TotalRow"]["Values"]:
                    final["Total " + dict2["Name"]].append(j["Value"])
            return
        for total in dict1["ReportModel"]["TopMostRows"]:
            search(total)
        df_final = pd.concat([df_final, pd.DataFrame(final)], sort=False)
    else:
        print(i)
    count = count + 1
#exporting dataframe to csv
#df_final.to_csv(, sep='\t', encoding='utf-8')
df_final.to_csv('file1.csv')
Thanks.
Answer 0 (score: 1)
You should post the code and the whole exception to get a more precise answer, but it looks to me like the API is sometimes not returning JSON (for example, you may be making many requests in a short time, so the API returns a 404).
Log or record the API response before trying to decode it, to verify.
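A minimal sketch of that idea (the helper name and the 200-character truncation are my own choices, not part of the answer):

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def decode_or_log(raw_text, location_id):
    # Try to decode the response body; on failure, log what the API
    # actually sent back (often an empty body or an HTML error page)
    # instead of letting json.loads() raise and stop the whole loop.
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        log.warning("Location %s returned non-JSON: %r",
                    location_id, raw_text[:200])
        return None
```

Inside the loop from the question, this would replace the bare dict1 = json.loads(json_data) call, e.g. data = decode_or_log(result.text, i), skipping the location when it returns None.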
Edit:
Based on your feedback, adding a delay between iterations should solve your problem. You can use time.sleep(0.5) inside the for loop (remember to add import time).
You should also consider using try/except in your code, so you can handle exceptions more gracefully.
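Putting the two suggestions together (a pause between requests and a try/except around the call), one possible sketch; the function name, retry count, and delay are arbitrary choices for illustration:

```python
import time

import requests

def fetch_report(url, token, retries=3, delay=0.5):
    # Pause between attempts and retry when the API returns a non-200
    # status or a body that is not valid JSON (resp.json() raises a
    # ValueError subclass in that case).
    for attempt in range(retries):
        try:
            resp = requests.get(url, headers={'X-apiToken': token})
            resp.raise_for_status()
            return resp.json()
        except (requests.RequestException, ValueError):
            time.sleep(delay)
    return None  # caller can print/record the failing location ID
```

With this helper, the question's loop could call fetch_report(url_bs, access_token) and print the location ID i whenever it returns None, instead of crashing mid-run.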