Java website parser

Asked: 2014-10-29 21:05:43

Tags: java string parsing arraylist

I am trying to parse the following line from a website:

<div class="search-result__price">£2,995</div>

I only want the 2995 part of it, but I am struggling to do so. Here is my code; at the moment it parses every line that contains the £ symbol and displays all the prices from the website. Please help!

import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Scanner;

public class parser {

    private static String string1 = "&pound";
    private String testURL = "http://www.autotrader.co.uk/search/used/cars/bmw/1_series/postcode/tn126bg/radius/1500/onesearchad/used%2Cnearlynew%2Cnew/quicksearch/true/page/2";
    private ArrayList<String> list = new ArrayList<String>();
    private ArrayList<Integer> prices = new ArrayList<Integer>();
    private int averagePrice;
    private int start;
    private int finish;

    public parser() throws IOException {

        URL url = new URL(testURL);
        Scanner scan = new Scanner(url.openStream());
        boolean alreadyHit = false;

        while (scan.hasNext()) {

            String line = scan.nextLine();

            if (line.contains(string1)) {

                list.add(line);

                start = line.indexOf("&pound;");
                line = line.substring(start);
                for (int i = 0; i < line.length(); i++) {

                    if (((line.charAt((i)) == ' ') || ((line.charAt((i)) == '<'))) && (alreadyHit == false)) {
                        finish = i;
                        alreadyHit = true;
                    }
                }
                alreadyHit = false;

                line = line.substring(0, finish);
                line = line.trim();
                line = line.replace("&pound;", "");
                line = line.replace(",", "");

                try {

                    int price = Integer.parseInt(line);
                    prices.add(price);
                } catch (Exception e) {

                }
            }
        }
    }

    public static void main(String args[]) throws IOException {

        parser p = new parser();

        for (Integer x : p.prices) {

            System.out.println(x);
        }
    }
}

1 Answer:

Answer 0 (score: 4):

Rather than walking through the HTML content line by line with a Scanner, or using regular expressions (!), you should probably use something like jsoup:

Document doc = Jsoup
    .connect(testURL)
    .userAgent("Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0")
    .timeout(60000).get();
Elements elems = doc.select("div.search-result__price");
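From there, each selected element's text (e.g. `£2,995`) still needs to be reduced to the number the question asks for. A minimal sketch of that last step, without the jsoup dependency (the `parsePrice` helper and the sample strings standing in for the elements' text are mine, not part of the original answer):

```java
import java.util.ArrayList;
import java.util.List;

public class PriceExtractor {

    // Strip everything except digits ("£2,995" -> "2995"), then parse.
    static int parsePrice(String text) {
        return Integer.parseInt(text.replaceAll("[^0-9]", ""));
    }

    public static void main(String[] args) {
        List<Integer> prices = new ArrayList<>();
        // Stand-in for iterating over the jsoup Elements' text values.
        for (String text : new String[] {"£2,995", "£10,450"}) {
            prices.add(parsePrice(text));
        }
        System.out.println(prices); // [2995, 10450]
    }
}
```

This sidesteps the manual `indexOf`/`substring` bookkeeping in the question entirely: the HTML parser hands you exactly the price text, and one regex-based cleanup makes it parseable.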