I found a way to download .pdf files via hyperlinks on a web page.
From "How can I grab pdf links from website with Python script" I learned that the method is:
import lxml.html, urllib2, urlparse
base_url = 'http://www.renderx.com/demos/examples.html'
res = urllib2.urlopen(base_url)
tree = lxml.html.fromstring(res.read())
ns = {'re': 'http://exslt.org/regular-expressions'}
for node in tree.xpath('//a[re:test(@href, "\.pdf$", "i")]', namespaces=ns):
    print urlparse.urljoin(base_url, node.attrib['href'])
The question is: how can I find the .pdf files under a specific hyperlink, rather than listing every .pdf on the page?
One approach would be to only print a link when it contains certain words:
if 'CA-Personal.pdf' in node:
But what if the .pdf file names change? Or what if I only want to search the page under the "Applications" hyperlinks? Thanks.
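One way to stay robust when file names change is to filter on the visible link text instead of the href. A minimal, self-contained sketch of that idea using only the standard library (the HTML snippet here is invented for illustration, and ElementTree requires well-formed markup; real pages would need lxml or BeautifulSoup):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urljoin

# Made-up, well-formed snippet standing in for the fetched page.
html = """<html><body>
<a href="docs/CA-Personal.pdf">Applications</a>
<a href="docs/guide.pdf">User Guide</a>
<a href="docs/app-form.pdf">Applications</a>
</body></html>"""

base_url = 'http://www.renderx.com/demos/examples.html'
root = ET.fromstring(html)

# Keep only anchors whose text is "Applications" and whose href ends
# in .pdf -- filtering on the link text, not the file name, so the
# result is stable even when the .pdf names change.
links = [
    urljoin(base_url, a.attrib['href'])
    for a in root.iter('a')
    if a.text == 'Applications'
    and a.attrib.get('href', '').lower().endswith('.pdf')
]
```

The same filter could be folded into the XPath from the question, e.g. by adding a predicate on the anchor's text.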
Answer 0 (score: 1)
from bs4 import BeautifulSoup
import urllib2
domain = 'http://www.renderx.com'
url = 'http://www.renderx.com/demos/examples.html'
page = urllib2.urlopen(url)
soup = BeautifulSoup(page.read())
app = soup.find_all('a', text = "Applications")
for aa in app:
    print domain + aa['href']
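The answer above is Python 2 (urllib2, print statement) and depends on BeautifulSoup. For completeness, a dependency-free Python 3 sketch of the same filtering idea using only the standard library's html.parser; the HTML snippet and hrefs are invented for illustration:

```python
from html.parser import HTMLParser


class AppLinkParser(HTMLParser):
    """Collect href values of <a> tags whose text is 'Applications'."""

    def __init__(self):
        super().__init__()
        self._current_href = None  # href of the <a> we are inside, if any
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self._current_href = dict(attrs).get('href')

    def handle_data(self, data):
        # Only record the link if its visible text is "Applications".
        if self._current_href and data.strip() == 'Applications':
            self.hrefs.append(self._current_href)

    def handle_endtag(self, tag):
        if tag == 'a':
            self._current_href = None


domain = 'http://www.renderx.com'
# Invented markup standing in for the downloaded page.
html = ('<a href="/files/demos/app.pdf">Applications</a>'
        '<a href="/files/other.pdf">Other</a>')

parser = AppLinkParser()
parser.feed(html)
links = [domain + h for h in parser.hrefs]
```

In a real script, the HTML would come from `urllib.request.urlopen(url).read().decode()` instead of an inline string; BeautifulSoup's `find_all('a', string="Applications")` does the same job with less code if the dependency is acceptable.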