Unable to crawl Wikipedia pages with Crawler4j

Date: 2015-11-13 20:43:40

Tags: java

After debugging the Basic Crawler example, I can successfully crawl the example seed URLs and write their data to a text file:
controller.addSeed("http://www.ics.uci.edu/"); 
controller.addSeed("http://www.ics.uci.edu/~lopes/"); 
controller.addSeed("http://www.ics.uci.edu/~welling/");

But with those Wikipedia seeds, NetBeans just reports "BUILD SUCCESS" and nothing is crawled or written to the file. I have tried other pages too; some work and some do not. My controller code:

public class BasicCrawlController {

public static CrawlController controller;

public static void main(String[] args) throws Exception {

    //  if (args.length != 2) { 
    //   System.out.println("Needed parameters: "); 
    //   System.out.println("\t rootFolder (it will contain intermediate crawl data)"); 
    //   System.out.println("\t numberOfCralwers (number of concurrent threads)"); 
    //   return; 
    //  } 
            /*
             * crawlStorageFolder is a folder where intermediate crawl data is 
             * stored. 
             */
    //  String crawlStorageFolder = args[0]; 
    String crawlStorageFolder = "C:\\Users\\AD-PC\\Desktop";

    /*
     * numberOfCrawlers shows the number of concurrent threads that should 
     * be initiated for crawling. 
     */
    int numberOfCrawlers = Integer.parseInt("1");

    CrawlConfig config = new CrawlConfig();

    config.setCrawlStorageFolder(crawlStorageFolder);

    /*
     * Be polite: Make sure that we don't send more than 1 request per 
     * second (1000 milliseconds between requests). 
     */
    config.setPolitenessDelay(1000);

    /*
     * You can set the maximum crawl depth here. The default value is -1 for 
     * unlimited depth 
     */
    config.setMaxDepthOfCrawling(4);

    /*
     * You can set the maximum number of pages to crawl. The default value 
     * is -1 for unlimited number of pages 
     */
    config.setMaxPagesToFetch(1000);

    /*
     * Do you need to set a proxy? If so, you can use: 
     * config.setProxyHost("proxyserver.example.com"); 
     * config.setProxyPort(8080); 
     *  
     * If your proxy also needs authentication: 
     * config.setProxyUsername(username); config.setProxyPassword(password); 
     */
    /*
     * This config parameter can be used to set your crawl to be resumable 
     * (meaning that you can resume the crawl from a previously 
     * interrupted/crashed crawl). Note: if you enable resuming feature and 
     * want to start a fresh crawl, you need to delete the contents of 
     * rootFolder manually. 
     */
    config.setResumableCrawling(false);

    /*
     * Instantiate the controller for this crawl. 
     */
    PageFetcher pageFetcher = new PageFetcher(config);
    RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
    RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
    controller = new CrawlController(config, pageFetcher, robotstxtServer);

    /*
     * For each crawl, you need to add some seed urls. These are the first 
     * URLs that are fetched and then the crawler starts following links 
     * which are found in these pages 
     */
    controller.addSeed("http://www.ics.uci.edu/");
    controller.addSeed("http://www.ics.uci.edu/~lopes/");
    controller.addSeed("http://www.ics.uci.edu/~welling/");

    /*
     * Start the crawl. This is a blocking operation, meaning that your code 
     * will reach the line after this only when crawling is finished. 
     */
    controller.start(BasicCrawler.class, numberOfCrawlers);

}

}

BasicCrawler

public class BasicCrawler extends WebCrawler {

private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g" + "|png|tiff?|mid|mp2|mp3|mp4"
        + "|wav|avi|mov|mpeg|ram|m4v|pdf" + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

/**
 * You should implement this function to specify whether the given url
 * should be crawled or not (based on your crawling logic).
 */
@Override
public boolean shouldVisit(WebURL url) {
    String href = url.getURL().toLowerCase();
    return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu");
}

/**
 * This function is called when a page is fetched and ready to be processed
 * by your program.
 */
@Override
public void visit(Page page) {
    int docid = page.getWebURL().getDocid();
    String url = page.getWebURL().getURL();
    String domain = page.getWebURL().getDomain();
    String path = page.getWebURL().getPath();
    String subDomain = page.getWebURL().getSubDomain();
    String parentUrl = page.getWebURL().getParentUrl();
    String anchor = page.getWebURL().getAnchor();

    System.out.println("Docid: " + docid);
    System.out.println("URL: " + url);
    System.out.println("Domain: '" + domain + "'");
    System.out.println("Sub-domain: '" + subDomain + "'");
    System.out.println("Path: '" + path + "'");
    System.out.println("Parent page: " + parentUrl);

    if (page.getParseData() instanceof HtmlParseData) {

        try {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String title = htmlParseData.getTitle();
            String html = htmlParseData.getHtml();
            List<WebURL> links = htmlParseData.getOutgoingUrls();
            System.out.println("Title: " + title);
            System.out.println("Text length: " + text.length());
            System.out.println("Html length: " + html.length());
            System.out.println("Number of outgoing links: " + links.size());
            System.out.println("=============");
            // create a PrintWriter that appends to the output file
            PrintWriter out = new PrintWriter(new FileWriter("D:\\test.txt", true));

            // append this page's record to the file
            out.println(docid + ".");
            out.println("- Title: " + title);
            out.println("- Content: " + text);
            out.println("- Anchor: "+ anchor);

            //close the file (VERY IMPORTANT!)
            out.close();
        } catch (IOException e1) {
            System.out.println("Error during reading/writing");
        }

        if (docid == 300) {
            BasicCrawlController.controller.shutdown();
        }
    }
}

Can someone tell me how to fix this? Is Wikipedia blocking crawler4j?

1 Answer:

Answer 0 (score: 1)

Your problem is here:

@Override
public boolean shouldVisit(WebURL url) {
    String href = url.getURL().toLowerCase();
    return !FILTERS.matcher(href).matches() && href.startsWith("http://www.ics.uci.edu");
}

This method is called for every URL the crawler discovers. In your case, with Wikipedia seeds, it always returns false, because the example's default code assumes that every page worth crawling starts with http://www.ics.uci.edu.
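
A minimal sketch of how shouldVisit could be changed so that Wikipedia pages pass the check (the en.wikipedia.org prefix is an assumption; use whatever domain your seeds actually point to):

@Override
public boolean shouldVisit(WebURL url) {
    String href = url.getURL().toLowerCase();
    // Assumption: the crawl should stay on en.wikipedia.org; adjust the prefix
    // so it matches the seeds you add in the controller.
    return !FILTERS.matcher(href).matches()
            && (href.startsWith("http://en.wikipedia.org")
                || href.startsWith("https://en.wikipedia.org"));
}

The seeds added in the controller have to point at that same domain; otherwise shouldVisit still rejects every discovered URL and the crawl finishes immediately, which is why you only see "BUILD SUCCESS" and no output.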