Unable to retrieve a list of URLs

Time: 2016-01-23 04:48:44

Tags: python web-scraping mechanize

I am trying to use the script below. Why doesn't it retrieve the list of URLs for this website? It works for other websites.

Initially I thought the problem was that robots.txt disallowed it, but no error was returned when I ran it.

import urllib
from bs4 import BeautifulSoup
import urlparse
import mechanize

url = "https://www.danmurphys.com.au"

br = mechanize.Browser()
br.set_handle_robots(False)

urls = [url]      # queue of pages still to crawl
visited = [url]   # pages already seen

while len(urls) > 0:
    try:
        br.open(urls[0])
        urls.pop(0)
        for link in br.links():
            # link.url is usually just the relative page name, so join it
            # with the page it came from to get an absolute URL
            new_url = urlparse.urljoin(link.base_url, link.url)
            b1 = urlparse.urlparse(new_url).hostname
            b2 = urlparse.urlparse(new_url).path
            new_url = "http://" + b1 + b2

            # only follow links on the same host that have not been seen yet
            if new_url not in visited and urlparse.urlparse(url).hostname in new_url:
                visited.append(new_url)
                urls.append(new_url)
                print new_url
    except:
        print "error"
        urls.pop(0)
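
For reference, here is a small, illustrative sketch (not part of the original question) of what the urljoin/hostname step in the loop above does; the example path is hypothetical.

import urlparse

# illustrative values only; this path is made up
base = "https://www.danmurphys.com.au/dm/home"
href = "/list/beer"

# urljoin resolves the relative href against the page it was found on
new_url = urlparse.urljoin(base, href)
print new_url
# https://www.danmurphys.com.au/list/beer

# the crawler keeps only links whose hostname matches the starting site
print urlparse.urlparse(new_url).hostname
# www.danmurphys.com.au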

1 Answer:

Answer 0 (score: 0)

You will need to use something else to scrape that URL, such as scrapy with scrapyJS, or Phantom JS, because the Mechanize library does not work with JavaScript. If you fetch the page with Mechanize and print the response:

r = br.open(urls[0])
html = r.read()
print html

you will see this output:

<noscript>Please enable JavaScript to view the page content.</noscript>
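
Since Mechanize only sees the raw, un-rendered HTML, a JavaScript-capable tool is needed to get the real page content. Below is a minimal sketch (not from the original answer) using Selenium driving PhantomJS, one of the headless-browser options mentioned above; it assumes PhantomJS is installed and on the PATH.

# minimal sketch, assuming PhantomJS is installed and available on the PATH
from selenium import webdriver
from bs4 import BeautifulSoup

url = "https://www.danmurphys.com.au"

driver = webdriver.PhantomJS()   # headless browser that executes JavaScript
driver.get(url)                  # loads and renders the page
html = driver.page_source        # rendered HTML, not just the <noscript> stub
driver.quit()

# extract the links from the rendered HTML
soup = BeautifulSoup(html, "html.parser")
for a in soup.find_all("a", href=True):
    print a["href"]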