So, I have a list of web pages that I want to save as PDFs. They look like http://nptel.ac.in/courses/115103028/module1/lec1/3.html. The list is very long, which is why I am using Python to automate the process. Here is my code:
import pdfkit
import urllib2

page = urllib2.urlopen('http://nptel.ac.in/courses/115103028/module1/lec1/3.html')
page_content = page.read()
with open('page_content.html', 'w') as fid:
    fid.write(page_content)

# Re-open the file to strip the page-navigation numbers at the top and
# bottom of the page (every such line contains ".html").
txt = open("page_content.html").read().split("\n")
txt1 = ""
for i in txt:
    if ".html" not in i:
        txt1 += i + "\n"
with open('page_content.html', "w") as f:
    f.write(txt1)

config = pdfkit.configuration(wkhtmltopdf=r"C:\Program Files (x86)\wkhtmltopdf\bin\wkhtmltopdf.exe")
pdfkit.from_file('page_content.html', 'out.pdf', configuration=config)
But the output PDF I get contains none of the equation images, only text. How can I fix this? Also, I open the file a second time just to remove the page numbers at the top and bottom of the web page; I would appreciate help improving that part as well.
EDIT

Here is the code I am using now:

import os.path, pdfkit, bs4, urllib2, sys

reload(sys)
sys.setdefaultencoding('utf8')

url = 'http://nptel.ac.in/courses/115103028/module1/lec1/3.html'
directory, filename = os.path.split(url)
html_text = urllib2.urlopen(url).read()
html_text = html_text.replace('src="', 'src="' + directory + "/") \
                     .replace('href="', 'href="' + directory + "/")

page = bs4.BeautifulSoup(html_text, "html5lib")
for ul in page.findAll("ul", {"id": "pagin"}):
    ul.extract()  # Deletes the tag and everything inside it
html_text = str(page)

config = pdfkit.configuration(wkhtmltopdf=r"C:\Program Files (x86)\wkhtmltopdf\bin\wkhtmltopdf.exe")
pdfkit.from_string(html_text, "out.pdf", configuration=config)
It still shows those warnings (part of the output is below), and the output PDF has no images:
Loading pages (1/6)
Warning: Failed to load http://nptel.ac.in/courses/115103028/css/style.css (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image041.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image042.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image043.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image045.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image046.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image048.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image049.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image050.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image051.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image052.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image053.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image054.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image055.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image056.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image057.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image064.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image065.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image067.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image068.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image069.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image070.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image071.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image072.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image073.png (ignore)
Warning: Failed to load http://nptel.ac.in/courses/115103028/module1/lec1/images/image074.png (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/1h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/2h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/3h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/4h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/5h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/6h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/7h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/8h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/9h.jpg (ignore)
Warning: Failed to load file:///C:/Users/KOUSHI~1/AppData/images/10h.jpg (ignore)
Counting pages (2/6)
Resolving links (4/6)
Loading headers and footers (5/6)
Printing pages (6/6)
Done
Answer 0 (score: 1)
When I run your code, pdfkit prints a lot of warnings like this one:
Warning: Failed to load file:///C:/Users/.../images/image041.png (ignore)
pdfkit is trying to find the website's images on my machine; since I never downloaded them, it cannot find them. A small workaround for that problem is to convert the relative paths in the HTML source into absolute ones:
import os.path
import urllib2

url = 'http://nptel.ac.in/courses/115103028/module1/lec1/3.html'
directory, filename = os.path.split(url)
html_text = urllib2.urlopen(url).read()
html_text = html_text.replace('src="', 'src="' + directory + "/") \
                     .replace('href="', 'href="' + directory + "/")
Here directory is the directory where the site is found, in this example http://nptel.ac.in/courses/115103028/module1/lec1, so that
<img src="images/image041.png" width="63" height="21">
becomes
<img src="http://nptel.ac.in/courses/115103028/module1/lec1/images/image041.png" width="63" height="21">
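As a side note, plain string replacement also rewrites URLs that are already absolute. A more careful variant (not part of the original answer; a stdlib-only Python 3 sketch, while the question uses Python 2) runs every src/href value through urllib.parse.urljoin, which leaves absolute URLs untouched:

```python
import re
from urllib.parse import urljoin

def absolutize(html_text, base_url):
    """Rewrite every src="..."/href="..." value to an absolute URL.

    urljoin resolves relative paths against base_url and returns
    already-absolute URLs unchanged, which a plain str.replace does not.
    """
    def repl(match):
        attr, value = match.group(1), match.group(2)
        return '%s="%s"' % (attr, urljoin(base_url, value))
    return re.sub(r'(src|href)="([^"]*)"', repl, html_text)

base = 'http://nptel.ac.in/courses/115103028/module1/lec1/3.html'
print(absolutize('<img src="images/image041.png" width="63" height="21">', base))
```

The same idea works in Python 2 with urlparse.urljoin. Parsing the page with an HTML parser instead of a regex would be more robust still, but this keeps the sketch dependency-free.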
Now you can use pdfkit.from_string instead of pdfkit.from_file to create the PDF without storing an intermediate file:
pdfkit.from_string(html_text, "out.pdf", configuration=config)
To remove the links to the other pages (shown as numbers) at the top and bottom of the site, you have many possibilities. My favourite is to use BeautifulSoup to find the ul tags with id="pagin". These tags contain the links to the other pages, and you can simply delete them:

import bs4

page = bs4.BeautifulSoup(html_text)
for ul in page.findAll("ul", {"id": "pagin"}):
    ul.extract()  # Deletes the tag and everything inside it
html_text = unicode(page)
Now html_text no longer contains those unwanted links. To install BeautifulSoup, simply use pip:

python -m pip install bs4

This solution obviously only works if all your sites are structured that way. If they are not, you could also delete all <a> tags to remove these links, but be careful not to delete needed information.
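That fallback can be sketched as follows (not from the original answer; a regex-based Python 3 sketch with a hypothetical strip_links helper — fine for simple pages like these, though a real HTML parser is safer in general):

```python
import re

def strip_links(html_text, keep_text=True):
    """Remove every <a ...>...</a> element from html_text.

    With keep_text=True the link's inner text survives, so only the
    hyperlink itself disappears; with keep_text=False the whole element,
    text included, is dropped.
    """
    if keep_text:
        return re.sub(r'<a\b[^>]*>(.*?)</a>', r'\1', html_text,
                      flags=re.DOTALL | re.IGNORECASE)
    return re.sub(r'<a\b[^>]*>.*?</a>', '', html_text,
                  flags=re.DOTALL | re.IGNORECASE)

print(strip_links('<li><a href="1.html">1</a></li>'))  # -> <li>1</li>
```

Keeping the text (keep_text=True) is the cautious default the answer warns about: it deletes the navigation behaviour without risking the loss of visible content.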