Scraping URLs from sitemap.xml with Apify Puppeteer and requestQueue

Date: 2019-08-20 18:46:59

Tags: puppeteer apify

Apify can crawl links from a sitemap.xml:

const Apify = require('apify');

Apify.main(async () => {
    const requestList = new Apify.RequestList({
        sources: [{ requestsFromUrl: 'https://edition.cnn.com/sitemaps/cnn/news.xml' }],
    });
    await requestList.initialize();

    const crawler = new Apify.PuppeteerCrawler({
        requestList,
        handlePageFunction: async ({ page, request }) => {
            console.log(`Processing ${request.url}...`);
            await Apify.pushData({
                url: request.url,
                title: await page.title(),
                html: await page.content(),
            });
        },
    });

    await crawler.run();
    console.log('Done.');
});

https://sdk.apify.com/docs/examples/puppeteersitemap#docsNav

However, if I use a requestQueue, I am not sure how to crawl the links from sitemap.xml. For example:

const requestQueue = await Apify.openRequestQueue();
await requestQueue.addRequest({ url: 'https://google.com' });

// This is not working. Apify simply crawls sitemap.xml itself
// and does not add the URLs from sitemap.xml to the requestQueue.
await requestQueue.addRequest({ url: 'https://google.com/sitemap.xml' });

const crawler = new Apify.PuppeteerCrawler({
    requestQueue,

    // This function is called for every page the crawler visits.
    handlePageFunction: async (context) => {
        const { request, page } = context;

        const title = await page.title();
        const pageUrl = request.url;
        console.log(`Title of ${pageUrl}: ${title}`);

        // pseudoUrls is assumed to be defined elsewhere.
        await Apify.utils.enqueueLinks({
            page, selector: 'a', pseudoUrls, requestQueue,
        });
    },
});

await crawler.run();

1 Answer:

Answer 0 (score: 3)

The nice thing about Apify is that you can use both a RequestList and a RequestQueue together. In that case, items are taken from the list into the queue as you scrape (without overloading the queue). By using both, you get the best of both worlds.

const Apify = require('apify');

Apify.main(async () => {
    const requestList = new Apify.RequestList({
        sources: [{ requestsFromUrl: 'https://edition.cnn.com/sitemaps/cnn/news.xml' }],
    });
    await requestList.initialize();

    const requestQueue = await Apify.openRequestQueue();

    const crawler = new Apify.PuppeteerCrawler({
        requestList,
        requestQueue,
        handlePageFunction: async ({ page, request }) => {
            console.log(`Processing ${request.url}...`);

            // This is just an example, define your logic
            await Apify.utils.enqueueLinks({
                page, selector: 'a', pseudoUrls: null, requestQueue,
            });
            await Apify.pushData({
                url: request.url,
                title: await page.title(),
                html: await page.content(),
            });
        },
    });

    await crawler.run();
    console.log('Done.');
});

If you want to use only the queue, you will need to parse the XML yourself. Of course, that is not a big problem. You can parse it easily with Cheerio, either before the crawler runs or by using Apify.CheerioCrawler.
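For illustration, here is a minimal sketch of the parse-it-yourself approach, assuming the same Apify SDK v0.x API as above plus the cheerio and node-fetch packages (the sitemap URL is just the one from the question's first example):

const Apify = require('apify');
const cheerio = require('cheerio');
const fetch = require('node-fetch'); // assumed to be installed

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();

    // Download the sitemap and parse it as XML.
    const response = await fetch('https://edition.cnn.com/sitemaps/cnn/news.xml');
    const xml = await response.text();
    const $ = cheerio.load(xml, { xmlMode: true });

    // Enqueue every <loc> entry found in the sitemap.
    for (const loc of $('loc').toArray()) {
        await requestQueue.addRequest({ url: $(loc).text().trim() });
    }

    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ page, request }) => {
            console.log(`Title of ${request.url}: ${await page.title()}`);
        },
    });

    await crawler.run();
});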

In any case, we recommend using a RequestList for bulk URLs, because it is basically created in memory right away, while the queue is actually a database (or a JSON file locally).
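To make that difference concrete, a hypothetical side-by-side using the same v0.x API (the example.com URLs are placeholders):

// In-memory: the whole list is materialized up front from a plain array.
const requestList = new Apify.RequestList({
    sources: [
        { url: 'https://example.com/page-1' },
        { url: 'https://example.com/page-2' },
    ],
});
await requestList.initialize();

// Persistent: each addRequest call writes to the queue's storage
// (a database on the Apify platform, or a local JSON file).
const requestQueue = await Apify.openRequestQueue();
await requestQueue.addRequest({ url: 'https://example.com/page-3' });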