[JAVA] Getting HTML links from a web page

Posted: 2016-09-10 16:56:49

Tags: java html

I want to use Java to get the links shown in this image (shown below). The page only has a small number of links. I found the code below on Stack Overflow, but I don't understand how to use it.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class weber {
    public static void main(String[] args) throws Exception {
        String url = "http://www.skyovnis.com/category/ufology/";
        Document doc = Jsoup.connect(url).get();

        /*String question = doc.select("#site-inner").text();
        System.out.println("Question: " + question);*/

        Elements anser = doc.select("#container .entry-title a");
        for (Element anse : anser) {
            System.out.println("Answer: " + anse.text());
        }
    }
}

The code is edited from the original version I found. Please help.

[image]

1 Answer:

Answer 0 (score: 1)

The following code works fine for your URL.

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class LinkPrinter { // wrapper class added so the snippet compiles; the name is arbitrary

    public static void main(String[] args) {

        Document doc;
        try {

            // need http protocol; a userAgent helps avoid HTTP 403 responses
            doc = Jsoup.connect("http://www.skyovnis.com/category/ufology/").userAgent("Mozilla").get();

            // get page title
            String title = doc.title();
            System.out.println("title : " + title);

            // get all links (this is what you want)
            Elements links = doc.select("a[href]");
            for (Element link : links) {

                // get the value from href attribute
                System.out.println("\nlink : " + link.attr("href"));
                System.out.println("text : " + link.text());

            }

        } catch (IOException e) {
            e.printStackTrace();
        }

    }
}

Output

title : Ufology

link : http://www.shop.skyovnis.com/
text : Shop

link : http://www.shop.skyovnis.com/product-category/books/
text : Books
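
The code above prints every link on the page. If you only want the post-title links that the question's selector was targeting, you can run the same loop over a narrower selection. A minimal sketch reusing the question's ".entry-title a" selector (assuming the page markup still matches it):

    // sketch: only the links inside post titles (assumes "entry-title" markup)
    Elements titleLinks = doc.select(".entry-title a");
    for (Element link : titleLinks) {
        System.out.println("link : " + link.attr("href"));
        System.out.println("text : " + link.text());
    }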

The following code filters the links by their text.

for (Element link : links) {

    // find the link whose text contains the given phrase
    if (link.text().contains("Arecibo Message")) {
        System.out.println("here is the element you need");
        System.out.println("\nlink : " + link.attr("href"));
        System.out.println("text : " + link.text());
    }

}
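
Note that attr("href") returns the attribute value exactly as written in the HTML, which may be a relative path. Jsoup can resolve it against the page URL with absUrl; a minimal sketch:

    for (Element link : links) {
        // absUrl("href") resolves relative hrefs against the document's base URL
        System.out.println("absolute link : " + link.absUrl("href"));
    }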

It is recommended to specify a "userAgent" in Jsoup to avoid HTTP 403 error responses.

Document doc = Jsoup.connect("http://anyurl.com").userAgent("Mozilla").get();
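
If the site is slow to respond, you can also set a read timeout on the same connection; a minimal sketch combining it with the userAgent (the 5000 ms value is only an example):

    Document doc = Jsoup.connect("http://anyurl.com")
            .userAgent("Mozilla")
            .timeout(5000) // timeout in milliseconds (example value)
            .get();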

" Onna malli mage yuthukama kala。"

Reference:

https://www.mkyong.com/java/jsoup-html-parser-hello-world-examples/