Extracting all links from a web page with Python

Date: 2016-01-05 11:04:57

Tags: python

After taking Udacity's Intro to Computer Science, I'm trying to write a Python script that extracts all the links from a page.

I get the following error:

NameError: name 'page' is not defined

Here is the code:

def get_page(page):
    try:
        import urllib
        # Note: the parameter is named 'page', but 'url' is what gets used
        return urllib.urlopen(url).read()
    except:
        return ''

# These four lines run at module level, where no variable named
# 'page' has been defined yet; this is what raises the NameError.
start_link = page.find('<a href=')
start_quote = page.find('"', start_link)
end_quote = page.find('"', start_quote + 1)
url = page[start_quote + 1:end_quote]

def get_next_target(page):
    start_link = page.find('<a href=')
    if start_link == -1:
        return (None, 0)
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return (url, end_quote)

# 'page' is again referenced at module level here, before it exists.
(url, end_pos) = get_next_target(page)

page = page[end_pos:]

def print_all_links(page):
    while True:
        (url, end_pos) = get_next_target(page)
        if url:
            print(url)
            page = page[end_pos:]
        else:
            break

print_all_links(get_page("http://xkcd.com/"))

3 Answers:

Answer 0 (score: 3)

page is never defined, which is what causes the error.
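
For reference, here is a minimal corrected sketch of the original script, assuming Python 3 (where urllib.urlopen became urllib.request.urlopen): the stray module-level lines are removed, and page is passed through the functions instead.

import urllib.request

def get_page(url):
    # Return the page source, or '' if the URL cannot be fetched
    try:
        return urllib.request.urlopen(url).read().decode('utf-8')
    except Exception:
        return ''

def get_next_target(page):
    # Find the next '<a href=' and return (url, position after it)
    start_link = page.find('<a href=')
    if start_link == -1:
        return (None, 0)
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return (url, end_quote)

def print_all_links(page):
    while True:
        (url, end_pos) = get_next_target(page)
        if url:
            print(url)
            page = page[end_pos:]  # move past the link just found
        else:
            break

print_all_links(get_page("http://xkcd.com/"))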

For web scraping like this, you can simply use BeautifulSoup:

from bs4 import BeautifulSoup, SoupStrainer
import requests

url = "http://stackoverflow.com/"

# Fetch the page; naming a parser explicitly avoids a bs4 warning
page = requests.get(url)
data = page.text
soup = BeautifulSoup(data, 'html.parser')

# Print the href of every <a> tag on the page
for link in soup.find_all('a'):
    print(link.get('href'))
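
Incidentally, the SoupStrainer imported above only matters if you tell BeautifulSoup to parse nothing but the tags you care about, which speeds things up on large pages. A minimal sketch of that variant, using the same URL:

from bs4 import BeautifulSoup, SoupStrainer
import requests

url = "http://stackoverflow.com/"
data = requests.get(url).text

# Build the tree from <a> tags only, skipping the rest of the document
soup = BeautifulSoup(data, 'html.parser', parse_only=SoupStrainer('a'))

for link in soup.find_all('a'):
    print(link.get('href'))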

Answer 1 (score: 1)

I'm a little late to the party, but here is one way to get the links from a given page:

from html.parser import HTMLParser
import urllib.request


class LinkScrape(HTMLParser):

    # Called for every opening tag; print hrefs that contain 'http'
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for attr in attrs:
                if attr[0] == 'href':
                    link = attr[1]
                    if link.find('http') >= 0:
                        print('- ' + link)


if __name__ == '__main__':
    url = input('Enter URL > ')
    request_object = urllib.request.Request(url)
    page_object = urllib.request.urlopen(request_object)
    link_parser = LinkScrape()
    link_parser.feed(page_object.read().decode('utf-8'))
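
One limitation of this snippet is that it drops relative links (href="/about", href="archive/", and so on). The variant sketched below (the AllLinkScrape class is hypothetical) resolves those against the base URL with urllib.parse.urljoin and collects everything in a list instead of printing as it goes:

from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request


class AllLinkScrape(HTMLParser):
    # Hypothetical variant of LinkScrape: keeps every href,
    # resolving relative paths against the page's base URL.
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(urljoin(self.base_url, value))


if __name__ == '__main__':
    url = input('Enter URL > ')
    page = urllib.request.urlopen(url).read().decode('utf-8')
    parser = AllLinkScrape(url)
    parser.feed(page)
    for link in parser.links:
        print('- ' + link)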

Answer 2 (score: 0)

You can find every tag in htmlpage whose href attribute contains http. This can be done with BeautifulSoup's find_all method, passing attrs={'href': re.compile("http")}:

import re
from bs4 import BeautifulSoup

# 'htmlpage' is assumed to hold the page's HTML as a string
soup = BeautifulSoup(htmlpage, 'html.parser')
links = []
for link in soup.find_all(attrs={'href': re.compile("http")}):
    links.append(link.get('href'))

print(links)
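
To make this runnable end to end, htmlpage can be fetched with requests first; the xkcd URL below is just a stand-in for whatever page you want to scrape.

import re
import requests
from bs4 import BeautifulSoup

# Stand-in URL; substitute the page you actually want to scrape
htmlpage = requests.get("http://xkcd.com/").text

soup = BeautifulSoup(htmlpage, 'html.parser')
links = [link.get('href')
         for link in soup.find_all(attrs={'href': re.compile("http")})]
print(links)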