I'm trying to use Puppeteer to download any given web page. I want to download the HTML file, the CSS sources, and all the images to a local folder (so that the page can be read later without an internet connection).

The problem is:
So far I'm not sure of the best way to parse the content of the "html" const (see below). I also haven't figured out how to edit the HTML with JSDOM.
const puppeteer = require('puppeteer');
const jsdom = require('jsdom');
const { JSDOM } = jsdom;

(async () => {
  const url = 'https://stackoverflow.com/questions/54507560/how-to-download-html-css-and-imgs-using-puppeteer';
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'load' });
  const html = await page.content();
  const dom = new JSDOM(html);

  // I'm trying to grab all the img elements so that I can edit their paths
  console.log(dom.window.document.querySelectorAll("img"));
  // ^ this logs the following object in my node shell:
  // NodeList { '0': HTMLImageElement {}, '1': HTMLImageElement {} }
  // I don't know how to process this object to grab each image path, download the file,
  // and then rewrite the path so it is relative to my local folder.

  await browser.close();
})();
Update: I'm now trying to parse the HTML with JSDOM.
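For reference, this is a rough sketch of the direction I have in mind: walk the NodeList that JSDOM returns, save each image to disk, then rewrite its src to a relative path. The assets folder, the download helper and the page.html filename are just placeholder names I picked, nothing final:

const fs = require('fs');
const path = require('path');
const https = require('https');
const puppeteer = require('puppeteer');
const { JSDOM } = require('jsdom');

// Save one remote file to a local path (helper name and destination are arbitrary choices)
function download(fileUrl, destPath) {
  return new Promise((resolve, reject) => {
    const file = fs.createWriteStream(destPath);
    https.get(fileUrl, (res) => {
      res.pipe(file);
      file.on('finish', () => file.close(resolve));
    }).on('error', reject);
  });
}

(async () => {
  const url = 'https://stackoverflow.com/questions/54507560/how-to-download-html-css-and-imgs-using-puppeteer';
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'load' });
  const html = await page.content();
  await browser.close();

  const dom = new JSDOM(html);
  const imgs = dom.window.document.querySelectorAll('img');
  fs.mkdirSync('assets', { recursive: true });

  for (const img of imgs) {
    const src = img.getAttribute('src');
    if (!src || !src.startsWith('https://')) continue; // only handling absolute https sources in this sketch
    const fileName = path.basename(new URL(src).pathname) || 'image';
    await download(src, path.join('assets', fileName));
    // Point the tag at the local copy so the saved page works offline
    img.setAttribute('src', `assets/${fileName}`);
  }

  // Write the rewritten HTML next to the downloaded images
  fs.writeFileSync('page.html', dom.serialize());
})();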
Answer 0 (score: 0)
Changing all <img src> tags in the active page
To change all the img tags in the document, you need to run page.evaluate() in the browser context and use document.querySelectorAll() inside it. Here's a quick working snippet that strips the domain from every image source in the document:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const url = 'https://stackoverflow.com/questions/54507560/how-to-download-html-css-and-imgs-using-puppeteer';
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'load' });

  await page.evaluate(() => {
    var imgs = document.querySelectorAll('img');
    imgs.forEach(function (img) {
      let imageSrc = img.getAttribute("src");
      // Strip the scheme, credentials and domain, leaving a relative path
      img.setAttribute("src", imageSrc.replace(/^(?:https?:\/\/)?(?:[^@\n]+@)?(?:www\.)?([^:\/\n?]+)/img, ""));
      console.log(img.getAttribute("src"));
    });
  });

  await browser.close();
})();
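Note that the snippet above only edits the DOM inside the running browser. If the aim is to end up with the modified markup on disk (not shown above, so treat this as an assumption about your workflow), one option is to read page.content() after the evaluate() call and write it out just before closing the browser:

const fs = require('fs');

// ...inside the same async IIFE, after the page.evaluate() call:
const editedHtml = await page.content();   // serialized DOM with the rewritten src attributes
fs.writeFileSync('page.html', editedHtml); // local copy of the edited page
await browser.close();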
Changing the URL of every image resource
This one is a bit more involved. In short, you need to intercept every request the browser makes and continue() it with a modified URL.

Again, a working snippet, this time replacing the domain of every image resource URL with one of our choosing:
const puppeteer = require('puppeteer');

var newDomain = "https://example.com";

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const url = 'https://stackoverflow.com/questions/54507560/how-to-download-html-css-and-imgs-using-puppeteer';
  const page = await browser.newPage();

  await page.setRequestInterception(true);
  page.on('request', (interceptedRequest) => {
    // Continue unchanged if the request URL is the page URL itself
    if (interceptedRequest.url() == page.url()) {
      interceptedRequest.continue();
      return;
    }

    // Rewrite the request if the resource is an image
    if (interceptedRequest.resourceType() == "image") {
      // Replace the original domain with the new domain we want
      let newUrl = interceptedRequest.url().replace(/^(?:https?:\/\/)?(?:[^@\n]+@)?(?:www\.)?([^:\/\n?]+)/img, newDomain);
      console.log(newUrl);
      interceptedRequest.continue({
        url: newUrl,
      });
      return;
    }

    // Continue any other requests untouched
    interceptedRequest.continue();
  });

  await page.goto(url, { waitUntil: 'load' });
})();
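As a side note, since the original goal was getting the images onto disk, interception isn't strictly required for that part: listening for responses and writing each image body out also works. A minimal sketch of that idea (the assets folder name is my own choice, and networkidle0 is used so the image responses have time to arrive before the browser closes):

const fs = require('fs');
const path = require('path');
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  fs.mkdirSync('assets', { recursive: true });

  // Save every image response body as it comes in
  page.on('response', async (response) => {
    if (response.request().resourceType() !== 'image') return;
    try {
      const buffer = await response.buffer();
      const fileName = path.basename(new URL(response.url()).pathname) || 'image';
      fs.writeFileSync(path.join('assets', fileName), buffer);
    } catch (e) {
      // e.g. redirect responses have no body; ignore them
    }
  });

  await page.goto('https://stackoverflow.com/questions/54507560/how-to-download-html-css-and-imgs-using-puppeteer', { waitUntil: 'networkidle0' });
  await browser.close();
})();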
Answer 1 (score: 0)
Here it is. This is the ready-made solution I found: https://github.com/website-scraper/website-scraper-puppeteer The code below is what the repository shows. It looks really good!
const scrape = require('website-scraper');
const PuppeteerPlugin = require('website-scraper-puppeteer');

scrape({
  urls: ['https://www.instagram.com/gopro/'],
  directory: '/path/to/save',
  plugins: [ new PuppeteerPlugin() ]
});
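One small usage note (based on the website-scraper README, so worth double-checking against the version you install): scrape() returns a promise, so you can wait for it to know when the page and its resources have been written to the target directory:

const scrape = require('website-scraper');
const PuppeteerPlugin = require('website-scraper-puppeteer');

scrape({
  urls: ['https://www.instagram.com/gopro/'],
  directory: '/path/to/save',
  plugins: [ new PuppeteerPlugin() ]
}).then(() => {
  // Everything has been saved at this point
  console.log('Done: page, CSS and images saved to /path/to/save');
}).catch((err) => {
  console.error('Scrape failed:', err);
});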