Scraper in Python gives "Access Denied"

Asked: 2017-02-01 14:18:06

Tags: python beautifulsoup python-requests

I'm trying to write a scraper in Python to get some information from a page, such as the offer titles shown on this page:
https://www.justdial.com/Panipat/Saree-Retailers/nct-10420585

Here is the code I'm using:

import bs4
import requests

def extract_source(url):
    source = requests.get(url).text
    return source

def extract_data(source):
    soup = bs4.BeautifulSoup(source, 'html.parser')
    names = soup.findAll('title')
    for i in names:
        print(i)

extract_data(extract_source('https://www.justdial.com/Panipat/Saree-Retailers/nct-10420585'))

But when I run this code, I get this response instead of the page:

<title>Access Denied</title>

What can I do to fix this?

3 Answers:

Answer 0 (score: 4)

As mentioned in the comments, you need to specify an accepted User-Agent and pass it in the headers:

def extract_source(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
    source = requests.get(url, headers=headers).text
    return source

Answer 1 (score: 0)

def extract_source(url):
    headers = {"User-Agent": "Mozilla/5.0"}
    source = requests.get(url, headers=headers).text
    return source

Output:

<title>Saree Retailers in Panipat - Best Deals online - Justdial</title>

Add a User-Agent to your request; some sites won't respond to requests that lack one.

Answer 2 (score: 0)

Try this:

import bs4
import requests

def extract_source(url):
    agent = {"User-Agent": "Mozilla/5.0"}
    source = requests.get(url, headers=agent).text
    return source

def extract_data(source):
    soup = bs4.BeautifulSoup(source, 'lxml')
    names = soup.findAll('title')
    for i in names:
        print(i)

extract_data(extract_source('https://www.justdial.com/Panipat/Saree-Retailers/nct-10420585'))

I added 'lxml', which may avoid parsing errors.