Scraping multiple pages with jsoup

Asked: 2017-12-20 19:05:07

Tags: java web-scraping jsoup

I am trying to scrape the pagination links of a GitHub repository's commit history. I have fetched them one by one, but now I would like to streamline this with a loop. Any idea how I should do it? Here is the code:

 String commitUrl = "http://github.com/apple/turicreate/commits/master";
 Document document2 = Jsoup.connect(commitUrl).get();

 Element pagination = document2.select("div.pagination a").get(0);
 String url1 = pagination.attr("href");
 System.out.println("pagination-link1 = " + url1);

 Document document3 = Jsoup.connect(url1).get();
 Element pagination2 = document3.select("div.pagination a").get(1);
 String url2 = pagination2.attr("href");
 System.out.println("pagination-link2 = " + url2);

 Document document4 = Jsoup.connect(url2).get();
 // first() returns null if nothing matches, so guard before calling text()
 Element check = document4.select("span.disabled").first();
 if (check != null && check.text().equals("Older")) {
     System.out.println("No more pagination links");
 } else {
     Element pagination3 = document4.select("div.pagination a").get(1);
     String url3 = pagination3.attr("href");
     System.out.println("pagination-link3 = " + url3);
 }

1 Answer:

Answer 0 (score: 2):

Try the following:

public static void main(String[] args) throws IOException {
    String url = "http://github.com/apple/turicreate/commits/master";
    // get the first pagination link ("Older") from the starting page
    String link = Jsoup.connect(url).get().select("div.pagination a").get(0).attr("href");
    // an int just to count up links
    int i = 1;
    System.out.println("pagination-link_" + i + "\t" + link);
    // fetch each next page once and keep its anchors, instead of connecting
    // twice per iteration; stop when the pagination div has only one anchor
    // left (i.e. only a "Newer" link remains)
    Elements anchors;
    while ((anchors = Jsoup.connect(link).get().select("div.pagination a")).size() > 1) {
        link = anchors.get(1).attr("href");
        System.out.println("pagination-link_" + (++i) + "\t" + link);
    }
}
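
The loop shape can be checked without any network access by simulating the pages. This is a minimal sketch: the class name `PaginationSketch`, the map `PAGES`, and the helper `followOlderLinks` are illustrative inventions, not part of jsoup; each map value stands in for what `select("div.pagination a")` would return on that page.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class PaginationSketch {
    // Simulated pages: each key is a page, each value the hrefs its
    // pagination div would yield via select("div.pagination a").
    static final Map<String, List<String>> PAGES = Map.of(
            "page1", List.of("page2"),          // first page: only an "Older" link
            "page2", List.of("page1", "page3"), // middle page: "Newer" and "Older"
            "page3", List.of("page2"));         // last page: only a "Newer" link

    // Same loop as the answer: take the first link on the start page,
    // then keep taking the second ("Older") link while a page has two.
    static List<String> followOlderLinks(Map<String, List<String>> pages, String start) {
        List<String> visited = new ArrayList<>();
        String link = pages.get(start).get(0);
        visited.add(link);
        while (pages.get(link).size() > 1) {
            link = pages.get(link).get(1);
            visited.add(link);
        }
        return visited;
    }

    public static void main(String[] args) {
        // visits page2 then page3, then stops on the last page
        System.out.println(followOlderLinks(PAGES, "page1")); // [page2, page3]
    }
}
```

Swapping the map lookups back to `Jsoup.connect(link).get().select("div.pagination a")` recovers the real scraper, so the termination condition can be reasoned about (and unit-tested) separately from the HTTP calls.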