I am trying to crawl our local Confluence installation with the PuppeteerCrawler. My strategy is to log in first, then extract the session cookies and use them in the headers of the start URL. The code is as follows:

First, I log in "by foot" to extract the relevant credentials:
const Apify = require("apify");

const browser = await Apify.launchPuppeteer({ slowMo: 500 });
const page = await browser.newPage();
await page.goto('https://mycompany/confluence/login.action');
await page.focus('input#os_username');
await page.keyboard.type('myusername');
await page.focus('input#os_password');
await page.keyboard.type('mypasswd');
await page.keyboard.press('Enter');
await page.waitForNavigation();

// Get cookies and close the login session
const cookies = await page.cookies();
await browser.close();

const cookie_jsession = cookies.find(cookie => cookie.name === "JSESSIONID");
const cookie_crowdtoken = cookies.find(cookie => cookie.name === "crowd.token_key");
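The two cookie lookups above can also be folded into one small helper that builds the `Cookie` header value directly. A minimal sketch; the helper name and the sample values are illustrative, only the `{ name, value }` shape matches what Puppeteer's `page.cookies()` returns:

```javascript
// Sketch: serialize selected cookies from page.cookies() into a
// Cookie header value. Cookies that are not present are skipped.
function buildCookieHeader(cookies, names) {
  return names
    .map(name => cookies.find(cookie => cookie.name === name))
    .filter(Boolean) // drop names that were not found
    .map(cookie => `${cookie.name}=${cookie.value}`)
    .join('; ');
}

// Example with stand-in values:
const sample = [
  { name: 'JSESSIONID', value: 'abc123' },
  { name: 'crowd.token_key', value: 'tok456' },
  { name: 'unrelated', value: 'x' },
];
console.log(buildCookieHeader(sample, ['JSESSIONID', 'crowd.token_key']));
// JSESSIONID=abc123; crowd.token_key=tok456
```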
Then I build up the crawler structure with the prepared request headers:
const startURL = {
    url: 'https://mycompany/confluence/index.action',
    method: 'GET',
    headers: {
        Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7',
        Cookie: `${cookie_jsession.name}=${cookie_jsession.value}; ${cookie_crowdtoken.name}=${cookie_crowdtoken.value}`,
    }
};

const requestQueue = await Apify.openRequestQueue();
await requestQueue.addRequest(new Apify.Request(startURL));
const pseudoUrls = [new Apify.PseudoUrl('https://mycompany/confluence/[.*]')];

const crawler = new Apify.PuppeteerCrawler({
    launchPuppeteerOptions: { headless: false, slowMo: 500 },
    requestQueue,
    handlePageFunction: async ({ request, page }) => {
        const title = await page.title();
        console.log(`Title of ${request.url}: ${title}`);
        console.log(await page.content());
        await Apify.utils.enqueueLinks({
            page,
            selector: 'a:not(.like-button)',
            pseudoUrls,
            requestQueue
        });
    },
    maxRequestsPerCrawl: 3,
    maxConcurrency: 10,
});
await crawler.run();
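As background on the `[.*]` pattern: Apify's PseudoUrl treats text inside square brackets as a regular expression and escapes everything else as a literal. The pattern above is therefore roughly equivalent to the hand-built RegExp below. This is an illustrative sketch of the idea, not Apify's actual implementation:

```javascript
// Sketch: approximate how a pseudo-URL pattern maps to a RegExp.
function pseudoUrlToRegExp(purl) {
  // Escape regex metacharacters in the literal parts of the pattern.
  const escaped = purl.replace(/[.*+?^${}()|[\]\\/]/g, '\\$&');
  // Restore the bracketed sections as raw regular expressions.
  const pattern = escaped.replace(/\\\[(.*?)\\\]/g, (match, inner) =>
    inner.replace(/\\(.)/g, '$1'));
  return new RegExp(`^${pattern}$`);
}

const re = pseudoUrlToRegExp('https://mycompany/confluence/[.*]');
console.log(re.test('https://mycompany/confluence/index.action')); // true
```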
The step-by-step login and the cookie extraction seem to be okay (the "curlified" request works fine), but Confluence does not accept the login via Puppeteer/headless Chromium. The headers seem to get lost somehow.

What am I doing wrong?
Answer 0 (score: 2)
Without going into the details of why the headers don't work, I suggest defining a custom gotoFunction in the PuppeteerCrawler options, such as:
{
    // ...
    gotoFunction: async ({ request, page }) => {
        await page.setCookie(...cookies); // From page.cookies() earlier.
        return page.goto(request.url, { timeout: 60000 });
    }
}
This way, you don't need any parsing, and the cookies will automatically be injected into the browser before each page load.
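As a side note (not part of the original answer): when re-injecting cookies this way, it can help to drop entries that have already expired. A minimal sketch, assuming the `expires` field that Puppeteer's `page.cookies()` returns (seconds since epoch, `-1` for session cookies):

```javascript
// Sketch: keep only cookies that are still valid before passing them
// to page.setCookie(). The nowSeconds parameter is injectable for testing.
function freshCookies(cookies, nowSeconds = Date.now() / 1000) {
  return cookies.filter(c => c.expires === -1 || c.expires > nowSeconds);
}
```

Usage would then be `await page.setCookie(...freshCookies(cookies));` inside the gotoFunction.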
The headers problem is a complex one that involves request interception in Puppeteer. Here's the related GitHub issue in the Apify SDK. Unfortunately, overriding the headers through the Request object currently does not work in PuppeteerCrawler, which is why your attempts failed.
Note that modifying the default request headers is not good practice when using a headless browser anyway, because it may lead to blocking on sites that match the received headers against a list of known browser fingerprints.