I have a browser instance with a page open. I want to download and save all of the links (they are PDFs). Does anyone know how to do this?
Thanks
Answer 0 (score: 3)
import os
import urllib, urllib2, cookielib, re
# http://www.crummy.com/software/BeautifulSoup/ - required
from BeautifulSoup import BeautifulSoup

HOST = 'https://www.adobe.com/'

# Open the page through a cookie-aware opener
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
req = opener.open(HOST + 'pdf')
response = req.read()
soup = BeautifulSoup(response)

# Collect every anchor whose href mentions ".pdf"
pdfs = soup.findAll(name='a', attrs={'href': re.compile(r'\.pdf')})
for pdf in pdfs:
    # Resolve relative hrefs against the host
    if 'https://' not in pdf['href']:
        url = HOST + pdf['href']
    else:
        url = pdf['href']
    try:
        # http://docs.python.org/library/urllib.html#urllib.urlretrieve
        urllib.urlretrieve(url, os.path.basename(url))
    except Exception, e:
        print 'cannot obtain url %s' % (url,)
        print 'from href %s' % (pdf['href'],)
        print e
    else:
        print 'downloaded file'
        print url
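Note that urlretrieve saves to a temporary location if called without a second argument; passing os.path.basename(url) keeps each PDF under its own name in the working directory.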
Answer 1 (score: 1)
Possibly not the answer you're looking for, but I have used the lxml and requests libraries together for automated anchor scraping:
Relevant lxml example: http://lxml.de/lxmlhtml.html#examples (substitute requests for urllib)
requests library homepage: http://docs.python-requests.org/en/latest/index.html
It's not as compact as mechanize, but it offers more control.
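For illustration, a minimal sketch of that lxml + requests approach; the host URL is a placeholder reused from the answer above, and the page is assumed to need no login:

import os
import requests
from lxml import html

HOST = 'https://www.adobe.com/'  # placeholder host, not a real PDF index

# Fetch and parse the page, then turn relative hrefs into absolute URLs
page = requests.get(HOST)
doc = html.fromstring(page.content)
doc.make_links_absolute(HOST)

# Keep only anchors that point at PDFs
urls = [a.get('href') for a in doc.xpath('//a[@href]')
        if a.get('href').lower().endswith('.pdf')]

for url in urls:
    # Save each file under its own name in the working directory
    r = requests.get(url)
    with open(os.path.basename(url), 'wb') as f:
        f.write(r.content)
    print 'downloaded %s' % url

Compared with the BeautifulSoup answer, make_links_absolute resolves relative links for you, so there is no need to concatenate the host by hand.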