java.io.FileNotFoundException error when reading a web page

Asked: 2014-02-26 11:23:18

Tags: java url filenotfoundexception

I want to read a web page that spans multiple result pages, for example page = 1 to 100:

import org.htmlcleaner.*;
...
String url = "http://www.webpage.com/search?id=10&page=";

for (int j = 1; j <= 100; j++) {
    WebParse thp = new WebParse(new URL(url + j));

Sometimes I get the following error:

java.io.FileNotFoundException: http://www.webpage.com/search?id=10&page=18
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at java.net.URL.openStream(Unknown Source)
    at org.htmlcleaner.Utils.readUrl(Utils.java:63)
    at org.htmlcleaner.HtmlCleaner.clean(HtmlCleaner.java:373)
    at org.htmlcleaner.HtmlCleaner.clean(HtmlCleaner.java:387)
    at <mypackage>.WebParse.<init>(WebParse.java:21)
    at <mypackage>.WebParse.runThis(WebParse.java:54)
    at <mypackage>.WebParse.main(WebParse.java:43)

I think the problem is caused by my network connection, because when I refresh (re-run) it works fine.
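One way to confirm whether the failure is transient (rather than a genuine 404) is to check the HTTP status code before parsing. A minimal sketch, assuming the example URL from the question; `StatusCheck` and `statusOf` are hypothetical names for illustration:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class StatusCheck {
    // Returns the HTTP status code for a URL, or -1 if the
    // connection fails at the network level.
    static int statusOf(String url) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");  // we only need the headers
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int code = conn.getResponseCode();
            conn.disconnect();
            return code;
        } catch (IOException e) {
            return -1;  // DNS failure, timeout, refused connection, ...
        }
    }

    public static void main(String[] args) {
        // A 404 would explain the FileNotFoundException; a 5xx or -1
        // suggests a transient problem that is worth retrying.
        System.out.println(
                statusOf("http://www.webpage.com/search?id=10&page=18"));
    }
}
```

`HttpURLConnection` raises `FileNotFoundException` from `getInputStream()` whenever the server answers with 404 (and some other error codes), so the same exception can mean either "page really missing" or "server hiccup".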

How can I automatically retry when this error occurs?

1 Answer

Answer (score: 1)

Why not add a few retry attempts with a short delay between them?

    for (int j = 1; j <= 100; j++) {
        int maxRetries = 3;
        int attempts = 0;
        boolean success = false;
        while (attempts < maxRetries && !success) {
            attempts++;
            try {
                WebParse thp = new WebParse(new URL(url + j));
                success = true;
            } catch (FileNotFoundException e) { // needs import java.io.FileNotFoundException
                e.printStackTrace();
                try {
                    Thread.sleep(1000); // play nice with the server
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
            }
        }
    }
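If the page can genuinely be missing (a real 404), retrying at a fixed rate will not help and only hammers the server. The retry loop above can be generalized into a reusable helper with exponential backoff that gives up after a fixed number of attempts. A self-contained sketch (the `Retry` class and `withBackoff` helper are hypothetical names, not part of HtmlCleaner; the real `WebParse` construction would go inside the `Callable`):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the task up to maxRetries times, doubling the delay after
    // each failure (exponential backoff). Rethrows the last exception
    // if every attempt fails.
    static <T> T withBackoff(Callable<T> task, int maxRetries,
                             long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(delay);
                    delay *= 2;  // e.g. 1s, 2s, 4s, ...
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky task: fails twice, then succeeds on attempt 3.
        int[] calls = {0};
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new java.io.IOException("flaky");
            return "ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result); // prints "ok after 3 attempts"
    }
}
```

In the question's loop this would wrap the `new WebParse(new URL(url + j))` call, e.g. `withBackoff(() -> new WebParse(new URL(url + j)), 3, 1000)`.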