Crawler4j: error occurred while fetching (robots) url

Date: 2016-04-12 10:08:59

Tags: java web-crawler crawler4j

We are using crawler4j to fetch some notifications from web pages. Following the official documentation, I put together the example below:

ArticleCrawler.java

import java.util.regex.Pattern;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;

public class ArticleCrawler extends WebCrawler
{
    private static final Logger log = LoggerFactory.getLogger(ArticleCrawler.class);

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4"
            + "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    /**
     * This method receives two parameters. The first parameter is the page in
     * which we have discovered this new url and the second parameter is the new
     * url. You should implement this function to specify whether the given url
     * should be crawled or not (based on your crawling logic). In this example,
     * we are instructing the crawler to ignore urls that have css, js, gif, ...
     * extensions and to only accept urls that start with
     * "http://www.ics.uci.edu/". In this case, we didn't need the referringPage
     * parameter to make the decision.
     */
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url)
    {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu/");
    }

    /**
     * This function is called when a page is fetched and ready to be processed
     * by your program.
     */
    @Override
    public void visit(Page page)
    {
        String url = page.getWebURL().getURL();
        log.info("ArticleCrawler: crawlers cover url {}", url);
    }
}

Controller.java

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller
{
    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("http://www.ics.uci.edu/~welling/");
        controller.addSeed("http://www.ics.uci.edu/~lopes/");
        controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(ArticleCrawler.class, numberOfCrawlers);
    }
}

And got this error:

ERROR [RobotstxtServer:128] 2016-04-12 17:38:59,672 - Error occurred while fetching (robots) url: http://www.ics.uci.edu/robots.txt
org.apache.http.client.ClientProtocolException
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
    at edu.uci.ics.crawler4j.fetcher.PageFetcher.fetchPage(PageFetcher.java:237)
    at edu.uci.ics.crawler4j.robotstxt.RobotstxtServer.fetchDirectives(RobotstxtServer.java:100)
    at edu.uci.ics.crawler4j.robotstxt.RobotstxtServer.allows(RobotstxtServer.java:80)
    at edu.uci.ics.crawler4j.crawler.CrawlController.addSeed(CrawlController.java:427)
    at edu.uci.ics.crawler4j.crawler.CrawlController.addSeed(CrawlController.java:381)
    at com.waijule.common.crawler.article.Controller.main(Controller.java:31)
Caused by: org.apache.http.HttpException: Unsupported cookie policy: default
    at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:150)
    at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
    at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
    at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
    ... 8 more
INFO [CrawlController:230] 2016-04-12 17:38:59,699 - Crawler 1 started
INFO [CrawlController:230] 2016-04-12 17:38:59,700 - Crawler 2 started
INFO [CrawlController:230] 2016-04-12 17:38:59,700 - Crawler 3 started
INFO [CrawlController:230] 2016-04-12 17:38:59,701 - Crawler 4 started
INFO [CrawlController:230] 2016-04-12 17:38:59,701 - Crawler 5 started
INFO [CrawlController:230] 2016-04-12 17:38:59,701 - Crawler 6 started
INFO [CrawlController:230] 2016-04-12 17:38:59,701 - Crawler 7 started
WARN [WebCrawler:412] 2016-04-12 17:38:59,864 - Unhandled exception while fetching http://www.ics.uci.edu/~welling/: null
INFO [WebCrawler:357] 2016-04-12 17:38:59,864 - Stacktrace:
org.apache.http.client.ClientProtocolException
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
    at edu.uci.ics.crawler4j.fetcher.PageFetcher.fetchPage(PageFetcher.java:237)
    at edu.uci.ics.crawler4j.crawler.WebCrawler.processPage(WebCrawler.java:323)
    at edu.uci.ics.crawler4j.crawler.WebCrawler.run(WebCrawler.java:274)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.HttpException: Unsupported cookie policy: default
    at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:150)
    at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
    at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
    at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
    at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
    at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
    ... 6 more
WARN [WebCrawler:412] 2016-04-12 17:39:00,071 - Unhandled exception while fetching http://www.ics.uci.edu/~lopes/: null
INFO [WebCrawler:357] 2016-04-12 17:39:00,071 - Stacktrace: (same ClientProtocolException / "Unsupported cookie policy: default" trace as above)
WARN [WebCrawler:412] 2016-04-12 17:39:00,273 - Unhandled exception while fetching http://www.ics.uci.edu/: null
INFO [WebCrawler:357] 2016-04-12 17:39:00,274 - Stacktrace: (same trace as above)

Also, I read the source code, but I still can't work out from its try/catch block why this happens. Here is the link to the source: https://github.com/yasserg/crawler4j/blob/master/src/main/java/edu/uci/ics/crawler4j/robotstxt/RobotstxtServer.java

Thanks.

1 Answer:

Answer 0 (score: 0):

I've solved it. The error is caused by a cookie-spec incompatibility in version 4.2; switching back to 4.1 or below makes it go away, and for now version 4.1 is the better choice. You can find more information in the related pull request.
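For context on why the downgrade helps: the exception comes from Apache HttpClient rejecting the cookie-policy name "default", which crawler4j 4.2 asks its internal HttpClient to use but which older httpclient versions do not register. Below is a minimal, hypothetical sketch (not crawler4j code; the class name and test URL are illustrative only) that reproduces the same failure mode outside crawler4j when an older httpclient is on the classpath:

import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class CookiePolicyRepro
{
    public static void main(String[] args) throws Exception
    {
        // Request the cookie spec by the plain name "default", which is what
        // crawler4j 4.2 effectively configures on its internal HttpClient.
        RequestConfig requestConfig = RequestConfig.custom()
                .setCookieSpec("default")
                .build();

        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(requestConfig)
                .build())
        {
            // On an httpclient version that registers the "default" cookie spec
            // this request succeeds; on an older version it fails at execute()
            // with "org.apache.http.HttpException: Unsupported cookie policy:
            // default" -- the same root cause as the stack traces above.
            client.execute(new HttpGet("http://www.ics.uci.edu/robots.txt")).close();
        }
    }
}

If the project is built with Maven (an assumption), the same reasoning suggests an alternative to downgrading: keep crawler4j 4.2 but force org.apache.httpcomponents:httpclient to a version new enough to know the "default" cookie spec.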