How to fix this?

Time: 2016-09-08 12:15:03

Tags: java web-crawler crawler4j

I am trying the Quickstart from https://github.com/yasserg/crawler4j

I performed the following steps to test the example:

0) Added crawler4j.jar to the Java libraries

1) Created a Java package named mycrawler

2) Pasted the Quickstart code into the class MyCrawler

3) Ran the project with the following code:

package mycrawler;
public class MyCrawler extends WebCrawler {

    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg"
                                                           + "|png|mp3|mp3|zip|gz))$");

    /**
     * This method receives two parameters. The first parameter is the page
     * in which we have discovered this new url and the second parameter is
     * the new url. You should implement this function to specify whether
     * the given url should be crawled or not (based on your crawling logic).
     * In this example, we are instructing the crawler to ignore urls that
     * have css, js, gif, ... extensions and to only accept urls that start
     * with "http://www.ics.uci.edu/". In this case, we didn't need the
     * referringPage parameter to make the decision.
     */
     @Override
     public boolean shouldVisit(Page referringPage, WebURL url) {
         String href = url.getURL().toLowerCase();
         return !FILTERS.matcher(href).matches()
                && href.startsWith("http://www.ics.uci.edu/");
     }

     /**
      * This function is called when a page is fetched and ready
      * to be processed by your program.
      */
     @Override
     public void visit(Page page) {
         String url = page.getWebURL().getURL();
         System.out.println("URL: " + url);

         if (page.getParseData() instanceof HtmlParseData) {
             HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
             String text = htmlParseData.getText();
             String html = htmlParseData.getHtml();
             Set<WebURL> links = htmlParseData.getOutgoingUrls();

             System.out.println("Text length: " + text.length());
             System.out.println("Html length: " + html.length());
             System.out.println("Number of outgoing links: " + links.size());
         }
    }
}

Result:

Error: Class mycrawler.mycrawler was not found in project mycrawler.

Could not find main class.

***How can I fix this? I am new to Java.***

3 Answers:

Answer 0 (score: 1)

Your class extends WebCrawler, but nothing tells Java how to resolve that class.

You need to add an import statement so the compiler can locate it.

Also, if you want to run your class, it needs a public static void main(String[] args) method.
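For reference, here is a minimal sketch of the import statements the Quickstart class relies on. The crawler4j package names assume the 4.x layout and should be verified against the jar you added to the project:

package mycrawler;

import java.util.Set;            // used by visit() for the outgoing-link set
import java.util.regex.Pattern;  // used by the FILTERS constant

// crawler4j classes (package names assume the crawler4j 4.x layout)
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {
    // ... shouldVisit(...) and visit(...) exactly as in the question ...
}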

Answer 1 (score: 1)

You seem to be using NetBeans. I suggest pressing Ctrl-Shift-I to fix all class imports. Once the class has no more errors, it will compile.

Then you need to define an entry point for your program, which in Java is a static main(String[] args) method. The code in that method is executed when you choose to run the file as the main class.
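As an illustration (a minimal sketch, not taken from the Quickstart), an entry point can be as small as this:

package mycrawler;

public class Main {
    // NetBeans executes this method when the class is run as the main class.
    public static void main(String[] args) {
        System.out.println("Entry point reached");
        // A real entry point would configure and start the crawler,
        // as the Controller class in the answer below does.
    }
}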

I recommend that you get a proper introduction to Java first, because you are unlikely to get your task done just by following the Quickstart of the library you want to use.

Answer 2 (score: 1)

I think you forgot to implement the Controller described in the documentation.

You should also implement a controller class, which specifies the seeds of the crawl, the folder in which intermediate crawl data should be stored, and the number of concurrent threads:

package mycrawler;

// crawler4j imports below assume the 4.x package layout
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {
    public static void main(String[] args) throws Exception {
        String crawlStorageFolder = "/data/crawl/root";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Instantiate the controller for this crawl.
         */
        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        /*
         * For each crawl, you need to add some seed urls. These are the first
         * URLs that are fetched and then the crawler starts following links
         * which are found in these pages
         */
        controller.addSeed("http://www.ics.uci.edu/~lopes/");
        controller.addSeed("http://www.ics.uci.edu/~welling/");
        controller.addSeed("http://www.ics.uci.edu/");

        /*
         * Start the crawl. This is a blocking operation, meaning that your code
         * will reach the line after this only when crawling is finished.
         */
        controller.start(MyCrawler.class, numberOfCrawlers);
    }
}
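With MyCrawler and a Controller like this in the same package, setting Controller as the main class of the NetBeans project (under Properties > Run) and running it should clear the "Could not find main class" error, because controller.start(MyCrawler.class, numberOfCrawlers) is then reachable from a main method.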