Crawl all links on a password-protected web page

Date: 2011-07-15 16:28:44

Tags: java web-crawler

I am crawling pages that require username and password authentication. When I pass the username and password in my code, I successfully get a 200 OK response from the server. But after the 200 OK response it stops; it never moves on to crawl the links inside that page after authenticating. The crawler is taken from http://code.google.com/p/crawler4j/. Here is the code where I do the authentication...

public class MyCrawler extends WebCrawler {

    Pattern filters = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g"
            + "|png|tiff?|mid|mp2|mp3|mp4" + "|wav|avi|mov|mpeg|ram|m4v|pdf"
            + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    List<String> exclusions;

    public MyCrawler() {
        exclusions = new ArrayList<String>();
        // Add here all your exclusions
        exclusions.add("http://www.dot.ca.gov/dist11/d11tmc/sdmap/cameras/cameras.html");
    }

    public boolean shouldVisit(WebURL url) {
        String href = url.getURL().toLowerCase();

        DefaultHttpClient client = null;

        try {
            System.out.println("----------------------------------------");
            System.out.println("WEB URL:- " + url);

            client = new DefaultHttpClient();

            client.getCredentialsProvider().setCredentials(
                    new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, AuthScope.ANY_REALM),
                    new UsernamePasswordCredentials("test", "test"));
            client.getParams().setParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS, true);

            for (String exclusion : exclusions) {
                if (href.startsWith(exclusion)) {
                    return false;
                }
            }

            if (href.startsWith("http://") || href.startsWith("https://")) {
                return true;
            }

            HttpGet request = new HttpGet(url.toString());

            System.out.println("----------------------------------------");
            System.out.println("executing request" + request.getRequestLine());
            HttpResponse response = client.execute(request);
            HttpEntity entity = response.getEntity();

            System.out.println(response.getStatusLine());
        } catch (Exception e) {
            e.printStackTrace();
        }

        return false;
    }

    public void visit(Page page) {
        System.out.println("hello");
        int docid = page.getWebURL().getDocid();
        String url = page.getWebURL().getURL();
        System.out.println("Page:- " + url);
        String text = page.getText();
        List<WebURL> links = page.getURLs();
        int parentDocid = page.getWebURL().getParentDocid();

        System.out.println("Docid: " + docid);
        System.out.println("URL: " + url);
        System.out.println("Text length: " + text.length());
        System.out.println("Number of links: " + links.size());
        System.out.println("Docid of parent page: " + parentDocid);
    }
}

And here is my Controller class:

public class Controller {
    public static void main(String[] args) throws Exception {

        CrawlController controller = new CrawlController("/data/crawl/root");

        // And I want to crawl all the links that are inside this password-protected page
        controller.addSeed("http://search.somehost.com/");

        controller.start(MyCrawler.class, 20);
        controller.setPolitenessDelay(200);
        controller.setMaximumCrawlDepth(2);
    }
}

What am I doing wrong...?

1 Answer:

Answer 0 (score: 0)

As described at http://code.google.com/p/crawler4j/, the shouldVisit() function should only return true or false. In your code, however, this function also fetches the content of the page, which is wrong. The current version of crawler4j (3.0) does not support crawling password-protected pages.
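
For reference, here is a minimal sketch of a filter-only shouldVisit(), using only the methods already shown in the question's code and reusing the seed host http://search.somehost.com/ from the Controller class as the crawl boundary. The class name FilterOnlyCrawler is made up for illustration, and the import package names assume crawler4j 3.x and may differ between versions.

import java.util.regex.Pattern;

// Package names assume crawler4j 3.x; adjust to your version if they differ.
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.url.WebURL;

public class FilterOnlyCrawler extends WebCrawler {

    // Same extension filter as in the question; skips binary/media files.
    private static final Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g"
            + "|png|tiff?|mid|mp2|mp3|mp4|wav|avi|mov|mpeg|ram|m4v|pdf"
            + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    @Override
    public boolean shouldVisit(WebURL url) {
        // Only decide whether the URL is worth visiting; do not download it here.
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches()
                && href.startsWith("http://search.somehost.com/");
    }

    @Override
    public void visit(Page page) {
        // crawler4j calls this with the page it has already fetched and parsed.
        System.out.println("Visited: " + page.getWebURL().getURL());
        System.out.println("Outgoing links: " + page.getURLs().size());
    }
}

With the manual HttpClient fetching removed, crawler4j itself downloads each page it visits; but since version 3.0 offers no hook for passing credentials, pages behind authentication will still not be crawled correctly.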