Website scraper won't scrape one of my links

Asked: 2016-03-31 21:28:32

Tags: python web-scraping

I can scrape one website without any trouble, but the other one gives me an error. I'm not sure whether that site has some kind of blocking in place.

from bs4 import BeautifulSoup
import urllib2
import re
from urlparse import urljoin

user_input = raw_input("Search for Team = ")


resp = urllib2.urlopen("http://idimsports.eu/football.html") ###working
soup = BeautifulSoup(resp, from_encoding=resp.info().getparam('charset'))

base_url = "http://idimsports.eu"
links = soup.find_all('a', href=re.compile(''+user_input))
if len(links) == 0:
    print "No Streams Available"
else:
    for link in links: 
        print urljoin(base_url, link['href'])

resp = urllib2.urlopen("http://cricfree.tv/football-live-stream") ###not working
soup = BeautifulSoup(resp, from_encoding=resp.info().getparam('charset'))

links = soup.find_all('a', href=re.compile(''+user_input))
if len(links) == 0:
    print "No Streams Available"
else:
    for link in links: 
        print urljoin(base_url, link['href'])
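
For reference, wrapping the failing call in a try/except shows which HTTP status the second site returns; this diagnostic block is an assumption about how to inspect the error, not part of the original script:

try:
    resp = urllib2.urlopen("http://cricfree.tv/football-live-stream")
except urllib2.HTTPError as e:
    # A 403 here would suggest the server is rejecting the default
    # Python-urllib user agent, rather than the URL being wrong
    print "HTTP error:", e.code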

1 Answer:

Answer 0 (score: 0)

Set the User-Agent header on your request. Some servers reject clients that identify themselves with urllib2's default user agent (Python-urllib/2.x), which is likely why the second site fails while the first one works:

headers = {'User-Agent': 'Mozilla/5.0'}
# The second argument is the request body (None means a plain GET)
req = urllib2.Request("http://cricfree.tv/football-live-stream", None, headers)
resp = urllib2.urlopen(req)

Also, in your second loop you reuse base_url, which you probably don't want to do, since the links on the cricfree.tv page should resolve against that site's host rather than idimsports.eu.
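
Putting both fixes together, here is a minimal sketch of the second block (the cricfree.tv base URL is an assumption about what the relative links should resolve against):

base_url = "http://cricfree.tv"  # assumed: resolve this page's links against its own host
headers = {'User-Agent': 'Mozilla/5.0'}
req = urllib2.Request("http://cricfree.tv/football-live-stream", None, headers)
resp = urllib2.urlopen(req)
soup = BeautifulSoup(resp, from_encoding=resp.info().getparam('charset'))

links = soup.find_all('a', href=re.compile(user_input))
if len(links) == 0:
    print "No Streams Available"
else:
    for link in links:
        print urljoin(base_url, link['href'])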