Simple Python web crawler bug (crawls in an infinite loop)

Date: 2017-09-24 15:59:49

Tags: python beautifulsoup urllib2

I wrote a simple crawler in Python. It seems to run fine and finds new links, but it keeps rediscovering the same links and never downloads the newly found pages. It appears to crawl indefinitely, even after reaching the configured crawling depth limit. I don't get any errors; it just runs forever. Here are the code and a sample run. I'm using Python 2.7 on Windows 7 64-bit.

import sys
import time
from bs4 import *
import urllib2
import re
from urlparse import urljoin

def crawl(url):
    url = url.strip()
    page_file_name = str(hash(url))
    page_file_name = page_file_name + ".html" 
    fh_page = open(page_file_name, "w")
    fh_urls = open("urls.txt", "a")
    fh_urls.write(url + "\n")
    html_page = urllib2.urlopen(url)
    soup = BeautifulSoup(html_page, "html.parser")
    html_text = str(soup)
    fh_page.write(url + "\n")
    fh_page.write(page_file_name + "\n")
    fh_page.write(html_text)
    links = []
    for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
        links.append(link.get('href'))
    rs = []
    for link in links:
        try:
            #r = urllib2.urlparse.urljoin(url, link)
            r = urllib2.urlopen(link)
            r_str = str(r.geturl())
            fh_urls.write(r_str + "\n")
            #a = urllib2.urlopen(r)
            if r.headers['content-type'] == "html" and r.getcode() == 200:
                rs.append(r)
                print "Extracted link:"
                print link
                print "Extracted link final URL:"
                print r
        except urllib2.HTTPError as e:
            print "There is an error crawling links in this page:"
            print "Error Code:"
            print e.code
    return rs
    fh_page.close()
    fh_urls.close()

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print "Usage: python crawl.py <seed_url> <crawling_depth>"
        print "e.g: python crawl.py https://www.yahoo.com/ 5"
        exit()
    url = sys.argv[1]
    depth = sys.argv[2]
    print "Entered URL:"
    print url
    html_page = urllib2.urlopen(url)
    print "Final URL:"
    print html_page.geturl()
    print "*******************"
    url_list = [url, ]
    current_depth = 0
    while current_depth < depth:
        for link in url_list:
            new_links = crawl(link)
            for new_link in new_links:
                if new_link not in url_list:
                    url_list.append(new_link)
            time.sleep(5)
            current_depth += 1
            print current_depth

Here is what I get when I run it:

C:\Users\Hussam-Den\Desktop>python test.py https://www.yahoo.com/ 4
Entered URL:
https://www.yahoo.com/
Final URL:
https://www.yahoo.com/
*******************
1

And here is the output file that stores the crawled URLs:

https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account

Any idea what's wrong?

1 answer:

Answer 0 (score: 1)

  1. You have a bug here: depth = sys.argv[2]. sys.argv holds strings, so depth is a str, not an int. You should write depth = int(sys.argv[2]).
  2. As a consequence of point 1, the condition while current_depth < depth: always evaluates to True, because in Python 2 an int always compares as less than a str.
  3. Try to fix it by converting argv[2] to int, as in the sketch below. Hope it helps.
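
To see why point 2 holds, here is a short Python 2 interpreter session illustrating the comparison quirk (in CPython 2, numbers compare as smaller than values of any other type, so an int is never >= a str):

>>> 0 < "4"
True
>>> 1000000 < "4"
True

A minimal sketch of the fix from point 3, assuming the rest of the script is left unchanged, is to convert the argument once in the main block:

url = sys.argv[1]
depth = int(sys.argv[2])  # parse the depth as an int so current_depth < depth can become False

With depth as an int, the while loop compares two numbers and can actually terminate once current_depth reaches the entered crawling depth.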