Python, Mechanize - request disallowed by robots.txt even after set_handle_robots and add_headers

Date: 2013-08-07 07:11:38

Tags: python mechanize robots.txt

I have created a web scraper that collects every link down to the first-level pages and, from those, grabs every link with its text as well as the image links and their alt text. Here is the full code:

import urllib
import re
import time
from threading import Thread
import MySQLdb
import mechanize
import readability
from bs4 import BeautifulSoup
from readability.readability import Document
import urlparse

url = ["http://sparkbrowser.com"]

i=0

while i<len(url):

    counterArray = [0]

    levelLinks = []
    linkText = ["homepage"]

    def scraper(root,steps):
        urls = [root]
        visited = [root]
        counter = 0
        while counter < steps:
            step_url = scrapeStep(urls)
            urls = []
            for u in step_url:
                if u not in visited:
                    urls.append(u)
                    visited.append(u)
                    counterArray.append(counter +1)
            counter +=1
        levelLinks.append(visited)
        return visited

    def scrapeStep(root):
        result_urls = []
        br = mechanize.Browser()
        br.set_handle_robots(False)
        br.set_handle_equiv(False)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]

        for url in root:
            try:
                br.open(url)

                for link in br.links():
                    newurl = urlparse.urljoin(link.base_url, link.url)
                    result_urls.append(newurl)
                    #levelLinks.append(newurl)
            except:
                print "error"
        return result_urls


    scraperOut = scraper(url[i],1)

    for sl,ca in zip(scraperOut,counterArray):
        print "\n\n",sl," Level - ",ca,"\n"

        #Mechanize
        br = mechanize.Browser()
        page = br.open(sl)
        br.set_handle_robots(False)
        br.set_handle_equiv(False)
        br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
        #BeautifulSoup
        htmlcontent = page.read()
        soup = BeautifulSoup(htmlcontent)


        for linkins in br.links(text_regex=re.compile('^((?!IMG).)*$')):
            newesturl = urlparse.urljoin(linkins.base_url, linkins.url)
            linkTxt = linkins.text
            print newesturl,linkTxt

        for linkwimg in soup.find_all('a', attrs={'href': re.compile("^http://")}):
            imgSource = linkwimg.find('img')
            if linkwimg.find('img',alt=True):
                imgLink = linkwimg['href']
                #imageLinks.append(imgLink)
                imgAlt = linkwimg.img['alt']
                #imageAlt.append(imgAlt)
                print imgLink,imgAlt
            elif linkwimg.find('img',alt=False):
                imgLink = linkwimg['href']
                #imageLinks.append(imgLink)
                imgAlt = ['No Alt']
                #imageAlt.append(imgAlt)
                print imgLink,imgAlt

    i+=1

Everything works fine until my crawler reaches one of the facebook links, which it can't read, and instead gives me the error

httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

for line 68: page = br.open(sl)

I don't know why, because as you can see I have set mechanize's set_handle_robots and add_headers options.

I don't know why this happens, but I've noticed that I get that error only for facebook links, in this case facebook.com/sparkbrowser, and for google.

Any help or advice is welcome.

Cheers

1 answer:

Answer 0 (score: 1):

OK, the same problem came up in this question:

Why is mechanize throwing a HTTP 403 error?

Sending all the request headers that a normal browser sends, and accepting/sending back the cookies the server sets, should solve the problem.
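As a minimal sketch of that idea (not the answerer's exact code), assuming Python 2 and mechanize as in the question: attach a cookie jar and a fuller set of browser-like headers before calling br.open. The URL and header values below are only example placeholders copied from a typical desktop browser:

    import mechanize
    import cookielib

    br = mechanize.Browser()

    # Keep and resend any cookies the server sets (e.g. Facebook's session cookies).
    cj = cookielib.LWPCookieJar()
    br.set_cookiejar(cj)

    # Behave like a regular browser.
    br.set_handle_equiv(True)
    br.set_handle_redirect(True)
    br.set_handle_referer(True)
    br.set_handle_robots(False)

    # Send the headers a normal browser would send; the exact values are only examples.
    br.addheaders = [
        ('User-agent', 'Mozilla/5.0 (X11; Linux i686) Gecko/20100101 Firefox/23.0'),
        ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
        ('Accept-Language', 'en-US,en;q=0.5'),
        ('Connection', 'keep-alive'),
    ]

    response = br.open("http://facebook.com/sparkbrowser")
    print response.read()

The important part is that the cookie jar and headers are configured before the first br.open call, unlike the second Browser instance in the question's code, where br.open(sl) runs before the options are set.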