HTTP Error 999: Request denied

Asked: 2015-05-17 15:34:48

Tags: python web-scraping beautifulsoup linkedin mechanize

I am trying to scrape some pages from LinkedIn with BeautifulSoup, and I keep getting the error "HTTP Error 999: Request denied". Is there a way to avoid this error? If you look at my code, I have already tried both Mechanize and urllib2, and both give me the same error.

from __future__ import unicode_literals
from bs4 import BeautifulSoup
import urllib
import urllib2
import urlparse
import csv
import os
import re
import requests
import pandas as pd
import pdb
import codecs
import webbrowser
import mechanize

fout5 = codecs.open('data.csv','r', encoding='utf-8', errors='replace')

for y in range(2,10,1):


    url = "https://www.linkedin.com/job/analytics-%2b-data-jobs-united-kingdom/?sort=relevance&page_num=1"

    params = {'page_num':y}

    url_parts = list(urlparse.urlparse(url))
    query = dict(urlparse.parse_qsl(url_parts[4]))
    query.update(params)

    url_parts[4] = urllib.urlencode(query)
    y = urlparse.urlunparse(url_parts)
    #print y



    # urllib2 gives the same 999 error:
    #f = urllib2.urlopen(y)

    op = mechanize.Browser()     # use mechanize's browser
    op.set_handle_robots(False)  # ignore robots.txt
    j = op.open(y)               # fails here with "HTTP Error 999: Request denied"
    #print op.title()

    # parse the response body, not the URL string
    soup1 = BeautifulSoup(j.read())
    print soup1

2 Answers:

Answer 0: (score: 3)

You should use the LinkedIn REST API directly, or go through python-linkedin. That gives you direct access to the data, instead of trying to scrape a JavaScript-heavy site.
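As a rough illustration, a minimal sketch of the python-linkedin OAuth flow and a profile call is shown below, following the library's documented usage. The API key, secret, return URL and the authorization code are placeholders you would obtain from your own LinkedIn developer application.

from linkedin import linkedin  # pip install python-linkedin

# Placeholder credentials from your LinkedIn developer application.
API_KEY = 'your-api-key'
API_SECRET = 'your-api-secret'
RETURN_URL = 'http://localhost:8000'

# Step 1: build the OAuth authorization URL and open it in a browser.
authentication = linkedin.LinkedInAuthentication(
    API_KEY, API_SECRET, RETURN_URL,
    permissions=[linkedin.PERMISSIONS.BASIC_PROFILE])
print authentication.authorization_url

# Step 2: after LinkedIn redirects back, paste the "code" query parameter
# here and exchange it for an access token.
authentication.authorization_code = 'code-from-redirect'
authentication.get_access_token()

# Step 3: call the REST API through the authenticated application object.
application = linkedin.LinkedInApplication(authentication)
print application.get_profile()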

Answer 1: (score: 3)

Try setting a User-Agent header. Add this line after op.set_handle_robots(False):

op.addheaders = [('User-Agent', "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36")]
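For comparison, the same idea with the requests library (which the question already imports) would look something like the sketch below. The User-Agent string is just an example, and LinkedIn may still refuse automated requests even with a browser-like header.

import requests

headers = {
    'User-Agent': ('Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                   '(KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36'),
}

# Fetch one results page with a browser-like User-Agent header.
response = requests.get(
    'https://www.linkedin.com/job/analytics-%2b-data-jobs-united-kingdom/',
    params={'sort': 'relevance', 'page_num': 2},
    headers=headers)
print response.status_code  # 999 means LinkedIn still rejected the request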

Edit: If you want to scrape a website, first check whether it offers an API, or whether there is a library that wraps that API.