About a Python Webcrawler

Date: 2015-02-05 20:37:05

Tags: python web-crawler

I am using code from "Introduction to Computing Using Python" to build a web crawler. What I would like to do is avoid certain sites, such as Google or Yahoo, because of their size and because they might lead me off to Andromeda.

So I created the self.prohibited part to screen out certain web pages. However, it doesn't work. Do you have any suggestions for fixing it? Many thanks in advance.

from urllib.request import urlopen
from csv import writer
# Collector and frequency() come from the textbook and are assumed available

def analyze(url):
    '''returns the list of http links
    in absolute format in the web page with URL url'''

    print('Visiting:', url)  # for testing

    # obtain links in the web page
    content = urlopen(url).read().decode()
    collector = Collector(url)
    collector.feed(content)
    urls = collector.getLink()

    # compute word frequencies
    content = collector.getData()
    freq = frequency(content)

    # append results to test.csv; since the file is opened in append
    # mode, the header row is repeated on every call
    out = open('test.csv', 'a')
    csv = writer(out)
    csv.writerow(('URL', 'word', 'count'))

    # print the frequency of every text data word in the web page
    print('\n {:50}{:10}{:5}'.format('URL', 'word', 'count'))
    for word in freq:
        row1 = (url, word, freq[word])
        print('\n {:50} {:10} {:5}'.format(url, word, freq[word]))
        csv.writerow(row1)

    print('\n {:50} {:10}'.format('URL', 'link'))
    for link in urls:
        print('\n {:50} {:10}'.format(url, link))
        row2 = (url, link)
        csv.writerow(row2)

    out.close()  # flush buffered rows to disk
    return urls


class Crawler:
    'a web crawler'
    def __init__(self):
        self.visited = set()
        self.prohibited = ['*google.com/*', '*yahoo.com/*']

    def crawl(self, url):
        '''calls analyze() on web page url
        and calls itself on every link to an unvisited web page'''
        links = analyze(url)
        self.visited.add(url)

        for link in links:
            if link not in self.visited and self.prohibited:
                try:
                    self.crawl(link)
                except:  # ignore pages that fail to download or parse
                    pass

1 Answer:

Answer 0 (score: 0)

link not in self.visited and self.prohibited is roughly equivalent to link not in self.visited, because self.prohibited always evaluates to True in this statement (self.prohibited is a non-empty list, and a non-empty list is truthy).
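A quick check illustrates this (a standalone snippet, not from the original post):

prohibited = ['*google.com/*', '*yahoo.com/*']
visited = set()
link = 'http://www.google.com/search'

# a non-empty list is truthy, so the right side of 'and' never blocks anything
print(bool(prohibited))                          # True
print(link not in visited and bool(prohibited))  # True: google gets crawled anyway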

I think you want to replace self.prohibited here with not any(re.match(x, link) for x in self.prohibited). For each prohibited pattern, this code checks whether the link matches it. Note that re.match expects regular expressions, so the wildcard patterns from the question would first have to be rewritten as regexes (a pattern starting with * raises re.error).
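Since the question's patterns are shell-style wildcards rather than regular expressions, here is a minimal sketch of the fix using fnmatch, which understands those wildcards directly (the class is otherwise the same as in the question; with real regexes such as r'.*google\.com/.*', the re.match version above behaves equivalently):

from fnmatch import fnmatch

class Crawler:
    'a web crawler that skips prohibited sites'
    def __init__(self):
        self.visited = set()
        self.prohibited = ['*google.com/*', '*yahoo.com/*']

    def crawl(self, url):
        '''calls analyze() on web page url and calls itself
        on every link to an unvisited, non-prohibited web page'''
        links = analyze(url)
        self.visited.add(url)

        for link in links:
            # skip links already visited and links matching a prohibited pattern
            if link not in self.visited and \
               not any(fnmatch(link, pattern) for pattern in self.prohibited):
                try:
                    self.crawl(link)
                except Exception:
                    pass

fnmatch translates * to the regex .* internally, so '*google.com/*' matches any URL that contains 'google.com/'.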