I'm trying to build a sentiment analysis model over a CSV file using the Text Analytics API on Azure.
Here is the code I'm using:
for j in range(0,num_of_batches): # this loop will add num_of_batches strings to input_texts
    input_texts.set_value(j,"") # initialize input_texts string j
    for i in range(j*l//num_of_batches,(j+1)*l//num_of_batches): #loop through a window of rows from the dataset
        comment = str(mydata["tweet"][i]) #grab the comment from the current row
        comment = comment.replace("\"", "'") #remove backslashes (why? I don't remember. #honestblogger)
        #add the current comment to the end of the string we're building in input_texts string j
        input_texts.set_value(j, input_texts[j] + '{"language":"' + "pt"',"id":"' + str(i) + '","text":"'+ comment + '"},')
    #after we've looped through this window of the input dataset to build this series, add the request head and tail
    input_texts.set_value(j, '{"documents":[' + input_texts[j] + ']}')

headers = {'Content-Type':'application/json', 'Ocp-Apim-Subscription-Key':account_key}
Sentiment = pd.Series()
batch_sentiment_url = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
Everything works fine up to this point, but I get an error in the last part, when I try to fetch the results from the API:
for j in range(0,num_of_batches):
    # Detect sentiment for each batch.
    req = urllib2.Request(batch_sentiment_url, input_texts[j], headers)
    response = urllib2.urlopen(req)
    result = response.read()
    obj = json.loads(result.decode('utf-8'))
    #loop through each result string, extracting the sentiment associated with each id
    for sentiment_analysis in obj['documents']:
        Sentiment.set_value(sentiment_analysis['id'], sentiment_analysis['score'])

#tack our new sentiment series onto our original dataframe
mydata.insert(len(mydata.columns),'Sentiment',Sentiment.values)
The error:
HTTPError: HTTP Error 400: Bad Request
Answer 0 (score: 1)
You are getting a 400 error because your JSON is malformed (the quotes around 'pt' are mismatched). I don't think you are doing yourself any favors by using the pandas module to manage your outgoing requests, or by trying to hand-craft the JSON: in particular, you are vulnerable to mismatched quotes and unescaped characters.
Here is how you might do it instead:
input_texts = []
for j in range(0,num_of_batches): # this loop will add num_of_batches strings to input_texts
    documents = []
    for i in range(j*l//num_of_batches,(j+1)*l//num_of_batches): #loop through a window of rows from the dataset
        documents.append({
            'language': 'pt',
            'id': str(i),
            'text': str(mydata["tweet"][i])})
    input_texts.append({'documents': documents})

...

req = urllib2.Request(batch_sentiment_url, json.dumps(input_texts[j]), headers)
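A quick sketch of why letting `json.dumps` serialize the payload sidesteps the quoting problem (the tweet text here is made up for illustration):

```python
import json

# A hypothetical tweet containing characters that break hand-concatenated JSON
comment = 'He said "otimo!" and left \\o/'

# json.dumps escapes the embedded quotes and backslash for us
doc = {'language': 'pt', 'id': '0', 'text': comment}
payload = json.dumps({'documents': [doc]})

# Round-trip check: the payload parses back to the original text unchanged
assert json.loads(payload)['documents'][0]['text'] == comment
print(payload)
```

Hand-concatenating the same string would emit the raw quotes into the JSON and produce exactly the kind of 400 Bad Request seen in the question.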
Answer 1 (score: 0)
Always verify the API call with curl first, then wire it into your code. This curl line works for me:
curl -k -X POST -H "Ocp-Apim-Subscription-Key: <your ocp-apim-subscription-key>" -H "Content-Type: application/json" --data "{ 'documents': [ { 'id': '12345', 'text': 'now is the time for all good men to come to the aid of their party.' } ] }" "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"
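The same call translated back into the question's Python, as a sketch: the subscription key is left as a placeholder, so the actual request lines are commented out rather than executed.

```python
import json

# Same single-document body as the curl call above
body = {'documents': [{'id': '12345',
                       'text': 'now is the time for all good men to come to the aid of their party.'}]}
headers = {'Ocp-Apim-Subscription-Key': '<your ocp-apim-subscription-key>',  # placeholder, not a real key
           'Content-Type': 'application/json'}
data = json.dumps(body)

# With a real key, sending it would mirror the question's urllib2 usage:
# req = urllib2.Request(batch_sentiment_url, data, headers)
# response = urllib2.urlopen(req)
print(data)
```

If the curl call succeeds but the Python version still returns 400, the difference is almost always in the serialized body, so comparing `data` against curl's `--data` argument is a quick way to find it.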