How to get all available links in a website using Python?

Date: 2016-02-29 13:34:30

Tags: python python-2.7 python-3.x url selenium-webdriver

Is there a way to use Python to get all the links on a website, not just those on a single web page? I tried this code, but it only gives me the links on one page:

import urllib2
import re

#connect to a URL
website = urllib2.urlopen('http://www.example.com/')

#read html code
html = website.read()

#use re.findall to get all the links
links = re.findall('"((?:http|ftp)s?://.*?)"', html) # (?:...) so findall returns whole URLs, not tuples

print links

1 answer:

Answer 0: (score: 0)

Recursively visit the links you collect and scrape those pages:

import urllib2
import re

stack = ['http://www.example.com/']
results = []

while len(stack) > 0:

    url = stack.pop()
    #connect to a URL
    website = urllib2.urlopen(url)

    #read html code
    html = website.read()

    #use re.findall to get all the links
    # you should not only gather links with http/ftps but also relative links
    # you could use Beautiful Soup for that (if you want <a> links); see the sketch below
    # the (?:...) non-capturing group makes findall return whole URLs, not tuples
    links = re.findall('"((?:http|ftp)s?://.*?)"', html)

    results.extend([link for link in links if is_not_relative_link(link)]) #this function has to be written

    for link in links:
        if link_is_valid(link): #this function has to be written; it should also skip already-visited URLs so the loop terminates
            stack.append(link)
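
As the comments note, the regex above misses relative links such as href="/about". Here is a minimal sketch of the Beautiful Soup approach, assuming Python 2 with the bs4 package installed; the same-domain check is only a stand-in for the link_is_valid placeholder, and error handling for broken links is omitted:

import urllib2
from urlparse import urljoin, urlparse
from bs4 import BeautifulSoup

start = 'http://www.example.com/'
domain = urlparse(start).netloc

stack = [start]
visited = set()

while stack:
    url = stack.pop()
    if url in visited:
        continue
    visited.add(url)

    #fetch and parse the current page
    html = urllib2.urlopen(url).read()
    soup = BeautifulSoup(html, 'html.parser')

    for a in soup.find_all('a', href=True):
        # urljoin resolves relative hrefs against the current page URL
        link = urljoin(url, a['href'])
        # crude stand-in for link_is_valid: stay on the same site
        if urlparse(link).netloc == domain:
            stack.append(link)

print visited

When the stack empties, visited holds every URL reached from the start page; replace the netloc comparison with whatever link_is_valid should mean for your site.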