Access Denied when scraping

Time: 2017-07-01 21:45:14

Tags: python web-scraping beautifulsoup

I want to create a script that goes to https://www.size.co.uk/featured/footwear/ and scrapes the content, but somehow I get Access Denied when I run it. Here is the code:

from urllib import urlopen  # Python 2; on Python 3 this is urllib.request.urlopen
from bs4 import BeautifulSoup as BS

url = urlopen('https://www.size.co.uk/')
print BS(url, 'lxml')

Output:

<html><head>
<title>Access Denied</title>
</head><body>
<h1>Access Denied</h1>

You don't have permission to access "http://www.size.co.uk/" on this server.
<p>
Reference #18.6202655f.1498945327.11002828
</p></body>
</html>

The code works fine when I try it on other sites, and when I use Selenium nothing happens at all, but I still want to know how to get past this error without using Selenium. And when I used Selenium on a different site, http://www.footpatrol.co.uk/shop, I got the same Access Denied error. Here is the code for footpatrol:

from selenium import webdriver
from bs4 import BeautifulSoup as BS

# Raw string so the backslashes in the Windows path are not treated as escapes
driver = webdriver.PhantomJS(r'C:\Users\V\Desktop\PY\web_scrape\phantomjs.exe')
driver.get('http://www.footpatrol.com')
pageSource = driver.page_source
soup = BS(pageSource, 'lxml')
print soup

The output is:

<html><head>
<title>Access Denied</title>
</head><body>
<h1>Access Denied</h1>

You don't have permission to access "http://www.footpatrol.co.uk/" on this server.
<p>
Reference #18.6202655f.1498945644.110590db
</p></body></html>

2 Answers:

Answer 0 (score: 2)

import requests
from bs4 import BeautifulSoup as BS

# Send a browser-like User-Agent header so the server does not reject the request
url = 'https://www.size.co.uk/'
agent = {"User-Agent": 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}
page = requests.get(url, headers=agent)
print(BS(page.content, 'lxml'))
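
This works because the site appears to reject requests whose User-Agent identifies them as a script; presenting a browser-like header is enough. The same idea also works without the requests library. A minimal standard-library sketch, assuming Python 3 (on Python 2 the equivalents are urllib2.Request and urllib2.urlopen):

from urllib.request import Request, urlopen
from bs4 import BeautifulSoup as BS

# Attach a browser-like User-Agent to the request before opening it
req = Request('https://www.size.co.uk/',
              headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'})
print(BS(urlopen(req), 'lxml'))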

Answer 1 (score: 0)

Try:

import requests

url = 'https://www.size.co.uk/'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) Gecko/20100101 Firefox/50.0'}
source = requests.get(url, headers=headers).text
print(source)
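
Neither answer covers the Selenium/PhantomJS part of the question. PhantomJS's default User-Agent contains the string "PhantomJS", which the same server-side filter would likely reject. A sketch of overriding it through desired capabilities, assuming the Selenium 2/3 PhantomJS driver and the executable path from the question:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from bs4 import BeautifulSoup as BS

# GhostDriver exposes PhantomJS page settings as capability keys;
# this one replaces the default User-Agent with a browser-like one
caps = dict(DesiredCapabilities.PHANTOMJS)
caps['phantomjs.page.settings.userAgent'] = ('Mozilla/5.0 (Windows NT 6.1; WOW64; rv:50.0) '
                                             'Gecko/20100101 Firefox/50.0')

driver = webdriver.PhantomJS(r'C:\Users\V\Desktop\PY\web_scrape\phantomjs.exe',
                             desired_capabilities=caps)
driver.get('http://www.footpatrol.co.uk/shop')
print(BS(driver.page_source, 'lxml'))
driver.quit()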