Extracting links from a page in Python 3

Time: 2018-04-06 16:01:00

Tags: python python-3.x html-parsing urllib

I want to extract all the links from a page. Here is my code, but it does nothing: when I print the fetched page it prints fine, but the parser does nothing!

from html.parser import HTMLParser
import urllib
import urllib.request


class myParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if (tag == "a"):
            for a in attrs:
                if (a[0] == "href"):
                    link = a[1]
                    if (link.find('http') >= 1):
                        print(link)
                        newParser = myParser()
                        newParser.feed(link)

url = "http://www.asriran.com"
req = urllib.request.Request(url)
response = urllib.request.urlopen(req)
handle = response.read()
parser = myParser()
print (handle)
parser.feed(str(handle))

1 Answer:

Answer 0 (score: 2):

Your code doesn't print anything, for the following two reasons:

  • You don't decode the HTTP response, so you are feeding the parser bytes instead of a string.
  • With link.find('http') >= 1, a link starting with http or https will never pass the test, because find returns 0 when the match is at the very start of the string. Use link.find('http') == 0 or, better, link.startswith('http') (see the quick check after this list).
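
A quick sanity check of the str.find semantics (a minimal sketch with a made-up URL):

link = 'http://example.com'
print(link.find('http'))        # 0: the match is at index 0
print(link.find('http') >= 1)   # False, so the original condition never fires
print(link.startswith('http'))  # True: the idiomatic prefix test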

If you want to stick with HTMLParser, you can modify your code as follows:

from html.parser import HTMLParser
import urllib.request


class myParser(HTMLParser):

    links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for attr in attrs:
                if attr[0] == 'href' and str(attr[1]).startswith('http'):
                    print(attr[1])
                    self.links.append(attr[1])


with urllib.request.urlopen("http://www.asriran.com") as response:
    handle = response.read().decode('utf-8')
parser = myParser()
parser.feed(handle)

http_links = myParser.links
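
Note that links here is a class attribute, shared by every myParser instance. That is why it can be read as myParser.links afterwards, but it also means two parsers would append to the same list. A variant with a per-instance list (a sketch, not part of the original answer):

from html.parser import HTMLParser

class MyParser(HTMLParser):
    def __init__(self):
        super().__init__()   # let HTMLParser initialize its own state
        self.links = []      # one list per parser instance

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and str(value).startswith('http'):
                    self.links.append(value)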

Otherwise, I would suggest switching to Beautiful Soup and parsing the response, for example:

from bs4 import BeautifulSoup
import urllib.request

with urllib.request.urlopen("http://www.asriran.com") as response:
    html = response.read().decode('utf-8')

soup = BeautifulSoup(html, 'html.parser')

# href=True skips <a> tags without an href, whose .get('href') would be None
all_links = [a.get('href') for a in soup.find_all('a', href=True)]
http_links = [link for link in all_links if link.startswith('http')]
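
Filtering on startswith('http') also drops relative links such as /page.html. If you want those as well, one option (an addition on my part, not in the original answer) is to resolve each href against the page URL with urllib.parse.urljoin, which leaves absolute URLs untouched:

from urllib.parse import urljoin

base = "http://www.asriran.com"
absolute_links = [urljoin(base, link) for link in all_links]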