Web scraping of the Amazon website is giving an HTTP error

Asked: 2019-03-13 10:09:36

Tags: python-3.x web-scraping beautifulsoup nlp

I am using Python 3.7.1, and with it I want to scrape the iPhone user reviews (customer reviews) from the Amazon page at the link below.


Link (to be scraped): https://www.amazon.in/Apple-iPhone-Silver-64GB-Storage/dp/B0711T2L8K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1548335262&sr=1-1&keywords=iphone+X

When I run the code below, it gives me the following error:

Code:

# -*- coding: utf-8 -*-

#import the library used to query a website
import urllib.request         
from bs4 import BeautifulSoup  

#specify the url
scrap_link = "https://www.amazon.in/Apple-iPhone-Silver-64GB-Storage/dp/B0711T2L8K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1548335262&sr=1-1&keywords=iphone+X"
wiki = "https://en.wikipedia.org/wiki/List_of_state_and_union_territory_capitals_in_India"

#Query the website and return the html to the variable 'page'
page = urllib.request.urlopen(scrap_link) 
#page = urllib.request.urlopen(wiki) 
print(page)

#Parse the html in the 'page' variable, and store it in Beautiful Soup format
soup = BeautifulSoup(page)

print(soup.prettify())

Error:

  File "C:\Users\bsrivastava\AppData\Local\Continuum\anaconda3\lib\urllib\request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)

HTTPError: Service Unavailable

Note: when I try scraping the wiki link instead (as shown commented out in the code), it works fine.

So why am I getting this error with the Amazon link, and how can I overcome it?


Also, once I get this customer-review data, I need to store it in the structured format shown below (a sketch of one such record follows the list). How should I do that? (I am completely new to NLP, so I need some guidance.)

 Structure:
a. Reviewer’s Name 
b. Date of review 
c. Color 
d. Size 
e. Verified Purchase (True or False) 
f. Rating 
g. Review Title 
h. Review Description
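
For context, one review in this structure could be modeled as a single record; a minimal sketch using a Python dataclass, where the field names simply mirror the list above and the sample values are invented:

from dataclasses import dataclass

@dataclass
class Review:
    reviewer_name: str
    date_of_review: str
    color: str
    size: str
    verified_purchase: bool
    rating: str
    review_title: str
    review_description: str

# Example instance with made-up values, only to illustrate the target shape.
example = Review(
    reviewer_name="Example Reviewer",
    date_of_review="1 March 2019",
    color="Silver",
    size="64GB",
    verified_purchase=True,
    rating="4.0 out of 5 stars",
    review_title="Good phone",
    review_description="Works as expected.",
)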

1 Answer:

Answer 0 (score: 1):

NLP? Are you sure? Extracting these fields is ordinary web scraping, not NLP.

import requests
from bs4 import BeautifulSoup


scrap_link = "https://www.amazon.in/Apple-iPhone-Silver-64GB-Storage/dp/B0711T2L8K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1548335262&sr=1-1&keywords=iphone+X"

# Amazon often answers "Service Unavailable" when the request carries no
# browser-like User-Agent, so send one explicitly.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

req = requests.get(scrap_link, headers=headers)
req.raise_for_status()
soup = BeautifulSoup(req.content, 'html.parser')

# Each customer review block on the product page is a div with these classes.
container = soup.find_all('div', attrs={'class': 'a-section review aok-relative'})

data = []
for x in container:
    name_tag = x.find('span', attrs={'class': 'a-profile-name'})
    if name_tag:  # skip blocks where the reviewer name is missing
        data.append({'ReviewersName': name_tag.text})
print(data)
# later save the dictionary to csv
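
The same approach can be extended toward the structure asked for in the question and written to CSV. The sketch below is only a starting point: the data-hook selectors (review-date, review-star-rating, review-title, review-body, avp-badge, format-strip) reflect how Amazon's review markup has commonly looked and may need adjusting if the page structure has changed; colour and size usually arrive combined in one "format strip" line.

import csv
import requests
from bs4 import BeautifulSoup

scrap_link = "https://www.amazon.in/Apple-iPhone-Silver-64GB-Storage/dp/B0711T2L8K/ref=sr_1_1?s=electronics&ie=UTF8&qid=1548335262&sr=1-1&keywords=iphone+X"
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}

soup = BeautifulSoup(requests.get(scrap_link, headers=headers).content, 'html.parser')

def text_or_blank(tag):
    # Return the tag's stripped text, or "" when the element is missing.
    return tag.get_text(strip=True) if tag else ""

rows = []
for x in soup.find_all('div', attrs={'class': 'a-section review aok-relative'}):
    rows.append({
        'ReviewersName': text_or_blank(x.find('span', attrs={'class': 'a-profile-name'})),
        'DateOfReview': text_or_blank(x.find('span', attrs={'data-hook': 'review-date'})),
        # Colour and size usually appear together in one "format strip" line.
        'ColourSize': text_or_blank(x.find(attrs={'data-hook': 'format-strip'})),
        'VerifiedPurchase': x.find('span', attrs={'data-hook': 'avp-badge'}) is not None,
        'Rating': text_or_blank(x.find(attrs={'data-hook': 'review-star-rating'})),
        'ReviewTitle': text_or_blank(x.find(attrs={'data-hook': 'review-title'})),
        'ReviewDescription': text_or_blank(x.find('span', attrs={'data-hook': 'review-body'})),
    })

if rows:
    with open('iphone_reviews.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

Note that this only covers the reviews shown on the product page itself; collecting all reviews would mean paging through the dedicated product-reviews URL.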