Crawling Amazon

Date: 2016-12-18 01:00:36

Tags: python requests beautifulsoup

I am trying to build a Python web scraper, but for some reason, when I try to scrape a site such as Amazon, the only thing my program prints is "None":

import requests
from bs4 import BeautifulSoup

def spider(max_pages):
    page = 1
    while page <= max_pages:
        url = 'https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Apython&page=' + str(page) + '&keywords=python&ie=UTF8&qid=1482022018&spIA=B01M63XMN1,B00WFP9S2E'
        source = requests.get(url)
        plain_text = source.text
        obj = BeautifulSoup(plain_text, "html5lib")

        for link in obj.find_all('a'):
            href = link.get(url)
            print(href)
        page += 1

spider(1)

1 Answer:

Answer 0 (score: -1)

Without a User-Agent:

requests.exceptions.HTTPError: 503 Server Error: Service Unavailable for url: https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Apython&page=1%20%27&keywords=python&ie=UTF8&qid=1482022018&spIA=B01M63XMN1,B00WFP9S2E%27

With a User-Agent:

headers = {'User-Agent':'Mozilla/5.0'}
r = requests.get('https://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Apython&page=1%20%27&keywords=python&ie=UTF8&qid=1482022018&spIA=B01M63XMN1,B00WFP9S2E%27', headers=headers)

it works fine.
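A side note on the traceback above: `requests.get` never raises for an HTTP error status by itself, so the 503 only surfaces as an `HTTPError` if `raise_for_status()` is called on the response. A minimal sketch of that behavior, using a bare `Response` object instead of the network:

```python
import requests

# Build a bare Response with a 503 status code (no network needed).
resp = requests.models.Response()
resp.status_code = 503

try:
    # requests.get alone would not raise here; this explicit check does.
    resp.raise_for_status()
except requests.exceptions.HTTPError as exc:
    print(exc)  # 503 Server Error: ...
```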

You can read the page "How to prevent getting blacklisted while scraping" to understand why you should send a User-Agent.
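Even with the User-Agent fix, the question's loop would still print None: `href = link.get(url)` asks each `<a>` tag for an attribute literally named after the URL string, which no tag has, so every lookup returns None. Asking for the `'href'` attribute is the likely intent. A minimal self-contained sketch of the difference, using a made-up HTML snippet and the stdlib `html.parser` backend (no network):

```python
from bs4 import BeautifulSoup

html = '<a href="/gp/product/123">Python book</a>'
soup = BeautifulSoup(html, "html.parser")
link = soup.find('a')

# The question's code looked up an attribute named after the URL itself;
# no such attribute exists on the tag, so the result is None.
print(link.get('https://www.amazon.com/example'))  # -> None

# Looking up the 'href' attribute returns the link target.
print(link.get('href'))  # -> /gp/product/123
```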