I'm looking for a Python script (using 3.4.3) that fetches an HTML page from a URL and lets me search for specific elements through the DOM.
This is what I currently have:
#!/usr/bin/env python
import urllib.request

def getSite(url):
    return urllib.request.urlopen(url)

if __name__ == '__main__':
    content = getSite('http://www.google.com').read()
    print(content)
When I print the content, it prints out the entire HTML page, which is close to what I want... but I'd like to be able to navigate the DOM rather than treat it as one giant string.
I'm still new to Python, but I have experience with several other languages (mainly Java, C#, C++, C, PHP, JS). I've done something similar in Java before and wanted to give it a try in Python.
Any help is appreciated. Cheers!
Answer 0 (score: 7)
There are many different modules you could use, for example lxml or BeautifulSoup.
Here's an lxml example:
import urllib.request
import lxml.html

mysite = urllib.request.urlopen('http://www.google.com').read()
lxml_mysite = lxml.html.fromstring(mysite)
description = lxml_mysite.xpath("//meta[@name='description']")[0] # meta tag description
text = description.get('content') # content attribute of the tag
>>> print(text)
"Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for."
And a BeautifulSoup example:
import urllib.request
from bs4 import BeautifulSoup

mysite = urllib.request.urlopen('http://www.google.com').read()
soup_mysite = BeautifulSoup(mysite)
description = soup_mysite.find("meta", {"name": "description"}) # meta tag description
text = description['content'] # text of content attribute
>>> print(text)
u"Search the world's information, including webpages, images, videos and more. Google has many special features to help you find exactly what you're looking for."
Note how BeautifulSoup returns a unicode string, while lxml does not. Depending on what you need, this can be useful or a nuisance.
Answer 1 (score: 1)
Check out the BeautifulSoup module.
from bs4 import BeautifulSoup
import urllib.request

# urllib.urlopen is Python 2; on 3.4.3 use urllib.request.urlopen
soup = BeautifulSoup(urllib.request.urlopen("http://google.com").read())
for link in soup.find_all('a'):
    print(link.get('href'))
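Since the question is about finding a specific element, here is a minimal sketch of how find and select could be used to target individual nodes. The id and CSS selector below are made up for illustration; substitute whatever matches the page you are scraping:

import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen("http://google.com").read()
soup = BeautifulSoup(html, "html.parser")

# Look up a single element by a (hypothetical) id attribute
element = soup.find(id="searchform")
if element is not None:
    print(element.get_text())

# CSS-selector style lookup; the selector here is only an example
for node in soup.select("div.content a"):
    print(node.get('href'))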