Connecting to a URL with Jsoup, but Jsoup loads a different URL. Why?

Date: 2016-02-19 06:04:23

Tags: java url jsoup

I am trying to use Jsoup to scrape this page:

Document dok = Jsoup.connect("http://bola.kompas.com/ligaindonesia").userAgent("Mozilla/5.0").timeout(0).get();

but I get this error:

java.io.IOException: Too many redirects occurred trying to load URL http://m.kompas.com/bola

And when I use this instead:

Document dok = Jsoup.connect("http://m.kompas.com/bola").userAgent("Mozilla/5.0").timeout(0).get();

the error becomes:

java.io.IOException: Too many redirects occurred trying to load URL http://bola.kompas.com

Here is my complete code:

import java.io.IOException;

import org.jsoup.Connection;
import org.jsoup.HttpStatusException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class MainBackup {
    public static void main(String[] args) throws IOException {
        processCrawling_kompas("http://bola.kompas.com/ligaindonesia");
    }

    public static void processCrawling_kompas(String URL){
        try{
            Connection.Response response = Jsoup.connect(URL).timeout(0).execute();
            int statusCode = response.statusCode();
            if(statusCode == 200){
                Document dok = Jsoup.connect(URL).userAgent("Mozilla/5.0").timeout(0).get();
                System.out.println("opened page: "+ URL);

                Elements nextPages = dok.select("a");
                for(Element nextPage: nextPages){
                    if(nextPage != null){
                        if(nextPage.attr("href").contains("bola.kompas.com")){
                            processCrawling_kompas(nextPage.attr("abs:href"));
                        }
                    }
                }
            }
        }catch (NullPointerException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (HttpStatusException e) {
            e.printStackTrace();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}

What exactly is happening here, and how can I fix it?

Thanks for your help :)

2 Answers:

Answer 0 (score: 4)

Supplying a userAgent is the right idea. If you do the same in the first Jsoup call as well, it will work as expected:

Connection.Response response = Jsoup.connect(URL)
            .userAgent("Mozilla/5.0")
            .timeout(0).execute();

By the way, the response object already contains the full HTML, so you don't need to call connect a second time to get the Document. Try this:

String URL = "http://bola.kompas.com/ligaindonesia";
Connection.Response response = Jsoup.connect(URL)
        .userAgent("Mozilla/5.0")
        .timeout(0).execute();
int statusCode = response.statusCode();
if(statusCode == 200){
    Document dok = Jsoup.parse(response.body(),URL);
    System.out.println("opened page: "+ URL);

    //your stuff

}
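As for why the error mentions a URL you never requested: without a browser-like User-Agent, the desktop site redirects to the mobile site and back again, so Jsoup gives up after too many hops. One way to watch the hops yourself (a minimal sketch, not from the original answers; the loop cap of 5 is arbitrary) is to turn off Jsoup's automatic redirect handling and read the Location header at each step:

```java
import java.io.IOException;

import org.jsoup.Connection;
import org.jsoup.Jsoup;

public class RedirectProbe {

    // 3xx status codes signal a redirect
    static boolean isRedirect(int statusCode) {
        return statusCode >= 300 && statusCode < 400;
    }

    public static void main(String[] args) throws IOException {
        String url = "http://bola.kompas.com/ligaindonesia";
        // Follow each redirect manually so every hop is visible
        for (int hop = 0; hop < 5; hop++) {
            Connection.Response response = Jsoup.connect(url)
                    .followRedirects(false)  // do not follow automatically
                    .ignoreHttpErrors(true)  // don't throw on the 3xx status
                    .execute();
            if (!isRedirect(response.statusCode())) {
                System.out.println("final page: " + url);
                break;
            }
            url = response.header("Location"); // next hop
            System.out.println("redirected to: " + url);
        }
    }
}
```

With no userAgent set, you would see the desktop and mobile URLs alternating until the loop ends.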

Answer 1 (score: 1)

Change the first line of processCrawling_kompas to:

Connection.Response response = Jsoup.connect(URL).userAgent("Mozilla/5.0").timeout(0).execute();

The change is adding the user agent! With this code I was able to get the following output:

opened page: https://login.kompas.com/act.php?do=ForgotPasswd&skin=default&sr=mykompas&done=http....
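One more aside, not part of the original answers: the recursive crawler in the question keeps no record of visited pages, so matching pages that link to each other will be fetched over and over. A minimal sketch with a visited set (same Jsoup setup as the question, class and method names are illustrative):

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class DedupCrawler {
    // Remember every URL we have already opened
    private static final Set<String> visited = new HashSet<>();

    public static void crawl(String url) {
        if (!visited.add(url)) {
            return; // already seen, skip
        }
        try {
            Document dok = Jsoup.connect(url)
                    .userAgent("Mozilla/5.0")
                    .timeout(0)
                    .get();
            System.out.println("opened page: " + url);
            // Selector keeps only links whose href mentions bola.kompas.com
            for (Element link : dok.select("a[href*=bola.kompas.com]")) {
                crawl(link.attr("abs:href"));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        crawl("http://bola.kompas.com/ligaindonesia");
    }
}
```

Set.add returns false when the element is already present, which is what makes the guard at the top of crawl stop the recursion.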