Simple web crawler

Date: 2012-12-01 11:15:20

Tags: python-2.7 beautifulsoup

I wrote the following program for a very simple web crawler in Python, but when I run it, it fails with "'NoneType' object is not callable". Can you help me?

import BeautifulSoup
import urllib2
def union(p,q):
    for e in q:
        if e not in p:
            p.append(e)

def crawler(SeedUrl):
    tocrawl=[SeedUrl]
    crawled=[]
    while tocrawl:
        page=tocrawl.pop()
        pagesource=urllib2.urlopen(page)
        s=pagesource.read()
        soup=BeautifulSoup.BeautifulSoup(s)
        links=soup('a')        
        if page not in crawled:
            union(tocrawl,links)
            crawled.append(page)

    return crawled
crawler('http://www.princeton.edu/main/')

1 Answer:

Answer 0 (score: 5):

[UPDATE] The complete project code is available here:

https://bitbucket.org/deshan/simple-web-crawler

[ANSWER]

soup('a') returns complete HTML tags (BeautifulSoup Tag objects), not URL strings, for example:

<a href="http://itunes.apple.com/us/store">Buy Music Now</a>

so passing one of them to urlopen fails with "'NoneType' object is not callable". You need to extract just the URL, i.e. the href attribute:

links=soup.findAll('a',href=True)
for l in links:
    print(l['href'])
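For illustration, here is a minimal sketch (assuming Python 2.7 with the BeautifulSoup 3 module imported as in the question) showing the difference between the Tag object and its href attribute; the sample markup is just the tag from above:

import BeautifulSoup

soup = BeautifulSoup.BeautifulSoup('<a href="http://itunes.apple.com/us/store">Buy Music Now</a>')
tag = soup.find('a')
print tag           # the whole tag: <a href="http://itunes.apple.com/us/store">Buy Music Now</a>
print tag['href']   # just the URL string: http://itunes.apple.com/us/store
# urllib2.urlopen(tag) breaks; urllib2.urlopen(tag['href']) is what the crawler needs.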

You also need to validate the URLs, since relative links and mailto: links cannot be passed to urlopen. See the regex-based isValidUrl check in the full code below.
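A rough illustration of what that validation filters out (using the isValidUrl helper defined in the full code below; the example URLs are hypothetical):

print isValidUrl('http://www.princeton.edu/main/')   # True: absolute http URL
print isValidUrl('/main/academics/')                 # False: relative link, urlopen cannot fetch it
print isValidUrl('mailto:someone@example.com')       # False: not an http/https/ftp URL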

Again, I suggest you use Python sets instead of lists: adding a URL is easy, and duplicate URLs are dropped automatically (a small sketch follows).
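A quick sketch of the idea, with a made-up URL, just to show the deduplication and the cheap membership test:

crawled = set()
crawled.add('http://www.princeton.edu/main/')
crawled.add('http://www.princeton.edu/main/')        # the duplicate is silently ignored
print len(crawled)                                   # 1
print 'http://www.princeton.edu/main/' in crawled    # True, fast membership test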

Try the following code:

import re
import httplib
import urllib2
from urlparse import urlparse
import BeautifulSoup

regex = re.compile(
        r'^(?:http|ftp)s?://' # http:// or https://
        r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' #domain...
        r'localhost|' #localhost...
        r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
        r'(?::\d+)?' # optional port
        r'(?:/?|[/?]\S+)$', re.IGNORECASE)

def isValidUrl(url):
    # True only for absolute http/https/ftp URLs that match the regex above
    return regex.match(url) is not None

def crawler(SeedUrl):
    tocrawl=[SeedUrl]   # URLs still to visit
    crawled=[]          # URLs already visited
    while tocrawl:
        page=tocrawl.pop()
        print 'Crawled:'+page
        pagesource=urllib2.urlopen(page)
        s=pagesource.read()
        soup=BeautifulSoup.BeautifulSoup(s)
        links=soup.findAll('a',href=True)   # only anchor tags that actually have an href
        if page not in crawled:
            for l in links:
                if isValidUrl(l['href']):   # keep only absolute http/https/ftp links
                    tocrawl.append(l['href'])
            crawled.append(page)
    return crawled
crawler('http://www.princeton.edu/main/')
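Following the earlier suggestion about sets, here is a minimal sketch of the same crawler with tocrawl and crawled kept as sets instead of lists. It assumes the same environment (Python 2.7, BeautifulSoup 3) and reuses the regex/isValidUrl helper above; the name setCrawler is only illustrative, not part of the original code:

def setCrawler(seedUrl):
    tocrawl = set([seedUrl])    # URLs still to visit; a set cannot hold duplicates
    crawled = set()             # URLs already visited
    while tocrawl:
        page = tocrawl.pop()
        if page in crawled:     # skip anything we have already fetched
            continue
        print 'Crawled:' + page
        source = urllib2.urlopen(page).read()
        soup = BeautifulSoup.BeautifulSoup(source)
        for link in soup.findAll('a', href=True):
            url = link['href']
            if isValidUrl(url) and url not in crawled:
                tocrawl.add(url)
        crawled.add(page)
    return crawled

setCrawler('http://www.princeton.edu/main/')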