My code is meant to retrieve every Google search result within a particular date range. It worked once; when I reran it later, it failed with the error below.
```
Exception in thread "main" org.jsoup.HttpStatusException: HTTP error fetching URL. Status=403, URL=http://ipv4.google.com/sorry/index?continue=http://www.google.com/search%253Fq%253Dstackoverflow%2526tbm%253Dnws%2526tbs%253Dcdr%2525253A1%2525252Ccd_min%2525253A5%2525252F30%2525252F2016%2525252Ccd_max%2525253A6%2525252F30%2525252F2016%2526start%253D0&q=EgTKLTckGKH5hsQFIhkA8aeDS-3IYZmr41q-m4rIMh7Uw7vC3wdLMgNyY24
	at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:679)
	at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:676)
	at org.jsoup.helper.HttpConnection$Response.execute(HttpConnection.java:628)
	at org.jsoup.helper.HttpConnection.execute(HttpConnection.java:260)
	at org.jsoup.helper.HttpConnection.get(HttpConnection.java:249)
	at javaapplication3.JavaApplication3.main(JavaApplication3.java:36)
```
Related questions: "org.jsoup.HttpStatusException: HTTP error fetching URL. Status=503 (google scholar ban?)" and "Using Jsoup to access HTML but receives error code 503".
My guess is that I crawled too fast and Google temporarily banned me.

So I set this:

`.ignoreHttpErrors(true).followRedirects(true).timeout(100000).ignoreContentType(true).get();`

However, it still doesn't work.
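If the 403/503 really is a temporary rate-limit ban, ignoring the HTTP error won't help; the requests themselves have to slow down. Below is a minimal sketch of an exponential-backoff delay schedule to wait between retries. The base delay, cap, and retry count here are assumptions for illustration, not documented Google limits:

```java
// Sketch: exponential backoff between fetch attempts.
// Delay schedule (2s, 4s, 8s, ...) is an assumed starting point, not a known limit.
public class Backoff {
    static final long BASE_DELAY_MS = 2000L; // assumed base delay
    static final long MAX_DELAY_MS = 60000L; // assumed cap

    // Delay before retry `attempt` (0-based): base * 2^attempt, capped.
    public static long backoffMillis(int attempt) {
        long delay = BASE_DELAY_MS * (1L << attempt);
        return Math.min(delay, MAX_DELAY_MS);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 6; attempt++) {
            // In real use: fetch the page here; on HTTP 403/503,
            // Thread.sleep(backoffMillis(attempt)) and retry.
            System.out.println("attempt " + attempt + ": wait "
                    + backoffMillis(attempt) + " ms");
        }
    }
}
```

In the question's loop this would replace firing all 50 page requests back-to-back: sleep between pages, and back off further whenever a 403/503 comes back.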
Full code:

```java
public static void main(String[] args) throws UnsupportedEncodingException, IOException {
    String google = "http://www.google.com/search?q=";
    String search = "stackoverflow daterange:2016-01-01..2016-12-31 ";
    // + "&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2016%2Ccd_max%3A12%2F31%2F2016"
    // daterange:2457389-2457735
    String charset = "UTF-8";
    String news = "&tbm=nws";
    String string = google + URLEncoder.encode(search, charset) + news;
    // String userAgent = "ExampleBot 1.0 (+http://example.com/bot)"; // Change this to your company's name and bot homepage!
    String userAgent = "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36";
    int pages = 0;
    for (int j = 0; j < 50; j++) {
        // Document document = Jsoup.connect(string + "&start=" + (j + 0) * 10).userAgent(userAgent).get(); // <-- original method
        Document document = Jsoup.connect(string + "&start=" + (j + 0) * 10)
                .userAgent(userAgent)
                .ignoreHttpErrors(true)
                .followRedirects(true)
                .timeout(100000)
                .ignoreContentType(true)
                .get(); // <-- modified method
        Elements links = document.select(".r>a");
        int numberOfResultpages = 1;
        System.out.println("\n");
        System.out.println(j);
        for (int i = 0; i < numberOfResultpages; i++) {
            for (Element link : links) {
                String title = link.text();
                String url = link.absUrl("href"); // Google returns URLs in format "http://www.google.com/url?q=<url>&sa=U&ei=<someKey>".
                url = URLDecoder.decode(url.substring(url.indexOf('=') + 1, url.indexOf('&')), "UTF-8");
                if (!url.startsWith("http")) {
                    continue; // Ads/news/etc.
                }
                System.out.println("Title: " + title);
                System.out.println("URL: " + url);
            }
        }
    }
}
```
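One thing worth noting: the 403 URL in the stack trace shows the `tbs` date-range value double- and triple-encoded (`%2525253A` is `:` encoded three times), which happens when an already-encoded string is passed through `URLEncoder` again. A sketch that builds the date-filter URL by encoding the `tbs` value exactly once; the parameter layout mirrors the URLs in the question, since Google's format itself is undocumented:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class GoogleNewsUrl {
    // Build the news-search URL, passing the raw (unencoded) tbs value
    // through URLEncoder exactly once so ':' -> %3A, ',' -> %2C, '/' -> %2F.
    public static String buildUrl(String query, String cdMin, String cdMax)
            throws UnsupportedEncodingException {
        String charset = "UTF-8";
        String tbs = "cdr:1,cd_min:" + cdMin + ",cd_max:" + cdMax; // raw value
        return "http://www.google.com/search?q=" + URLEncoder.encode(query, charset)
                + "&tbm=nws"
                + "&tbs=" + URLEncoder.encode(tbs, charset);
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        System.out.println(buildUrl("stackoverflow", "5/30/2016", "6/30/2016"));
    }
}
```

This won't lift a rate-limit ban on its own, but it keeps the server from seeing a mangled query each time the request is retried.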