Blocked when scraping goodreads.com

Date: 2018-10-13 14:49:15

Tags: python web-scraping proxy http-headers robots.txt

I am trying to scrape a large sample (over 100k) of the books available at "https://www.goodreads.com/book/show/", but I keep getting blocked. So far I have tried implementing the following solutions in my code:

  • Checking robots.txt to find out which pages/elements are off-limits (a minimal sketch of this check follows this list)

  • Specifying one or more randomly changing headers

  • Using multiple working proxies to avoid getting blocked

  • Setting a delay of up to 20 seconds between scraping iterations when using 10 concurrent threads
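For reference, the robots.txt check from the first bullet can be done with the standard library's urllib.robotparser; this is a minimal sketch, where the "*" user agent token and the sample URL are illustrative:

import urllib.robotparser

# Parse goodreads.com's robots.txt once, then query it per URL
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.goodreads.com/robots.txt")
rp.read()

# True if the given user agent token may fetch the URL
print(rp.can_fetch("*", "https://www.goodreads.com/book/show/1"))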

Here is a simplified version of the code, which gets blocked while trying to scrape only the book title and author, without using multiple concurrent threads:

import requests
from lxml import html
import random
import time

proxies_list = ["http://89.71.193.86:8080", "http://178.77.206.21:59298", "http://79.106.37.70:48550",
                "http://41.190.128.82:47131", "http://159.224.109.140:38543", "http://94.28.90.214:37641",
                "http://46.10.241.140:53281", "http://82.147.120.30:56281", "http://41.215.32.86:55561"]
proxies = {"http": random.choice(proxies_list)}

# real header
# headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'}

# multiple headers
headers_list = ['Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.62 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36',
                'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.38 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.103 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1623.0 Safari/537.36',
                'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36']
headers = {"user-agent": random.choice(headers_list)}

first_url = 1
last_url = 10000     # Last book is 8,630,000
sleep_time = 20

for book_reference_number in range(first_url, last_url):
    try:
        goodreads_html = requests.get("https://www.goodreads.com/book/show/" + str(book_reference_number), timeout=5, headers=headers, proxies=proxies)
        doc = html.fromstring(goodreads_html.text)
        book_title = doc.xpath('//div[@id="topcol"]//h1[@id="bookTitle"]')[0].text.strip(", \t\n\r")
        try:
            author_name = doc.xpath('//div[@id="topcol"]//a[@class="authorName"]//span')[0].text.strip(", \t\n\r")
        except:
            author_name = ""
        time.sleep(sleep_time)
        print(str(book_reference_number), book_title, author_name)
    except:
        print(str(book_reference_number) + " cannot be scraped.")
        pass
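Note that proxies and headers above are chosen once, before the loop, so every request reuses the same proxy/user-agent pair. A sketch of what per-request rotation with a simple retry could look like (the fetch helper, retry count, and backoff are illustrative, not part of the original code):

import time
import random
import requests

def fetch(url, proxies_list, headers_list, retries=3):
    """Try a fresh random proxy and user agent on each attempt."""
    for attempt in range(retries):
        proxies = {"http": random.choice(proxies_list)}
        headers = {"user-agent": random.choice(headers_list)}
        try:
            response = requests.get(url, timeout=5, headers=headers, proxies=proxies)
            if response.status_code == 200:
                return response
        except requests.RequestException:
            pass
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    return None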

1 Answer:

Answer 0 (score: 0)

If you really want to scrape a large database, then I would suggest Selenium; the chance of getting blocked is low and it is stable. time.sleep() is not required (it adds a delay, but you can include it to make things more stable). Check the code below...

import time
from bs4 import BeautifulSoup
from selenium import webdriver
# chromedriver must be on your PATH (e.g., copied into the Python folder)
driver = webdriver.Chrome()
# driver.set_window_position(-2000, 0)  # moves the window off-screen
first_url = 1
last_url = 10000     # Last book is 8,630,000

for book_reference_number in range(first_url, last_url):
    driver.get("https://www.goodreads.com/book/show/"+str(book_reference_number))
    #time.sleep(2)#optional
    soup = BeautifulSoup(driver.page_source, 'lxml')
    try:
        book_title = soup.select('.gr-h1.gr-h1--serif')[0].text.strip()
    except:
        book_title = ''
    try:
        author_name = soup.select('.authorName')[0].text.strip()
    except:
        author_name = ''

    print('NO.', book_reference_number, 'TITLE: ', book_title, 'AUTHOR: ', author_name)
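
If the visible browser window is a problem, Chrome can also be run headless; a minimal sketch (the exact keyword argument differs between Selenium versions, so treat this as an assumption to verify against your installed version):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")  # run Chrome without a visible window
options.add_argument("--window-size=1920,1080")  # a fixed size makes rendering predictable
driver = webdriver.Chrome(options=options)  # older Selenium releases use chrome_options=options

The rest of the loop works unchanged with this driver.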