I am doing a structural analysis of web documents. For that, I need to extract only the structure of a web document (the tags only). I found a Java HTML parser called Jsoup, but I don't know how to use it to extract just the tags.
Example:
<html>
<head>
this is head
</head>
<body>
this is body
</body>
</html>
Output:
html,head,head,body,body,html
Answer 0 (score: 2)
This sounds like a depth-first traversal:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupDepthFirst {

    private static String htmlTags(Document doc) {
        StringBuilder sb = new StringBuilder();
        htmlTags(doc.children(), sb);
        return sb.toString();
    }

    private static void htmlTags(Elements elements, StringBuilder sb) {
        for (Element el : elements) {
            // append the tag name on the way down (opening tag)
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(el.nodeName());

            // recurse into child elements
            htmlTags(el.children(), sb);

            // append the tag name again on the way back up (closing tag)
            sb.append(",").append(el.nodeName());
        }
    }

    public static void main(String... args) {
        String s = "<html><head>this is head </head><body>this is body</body></html>";
        Document doc = Jsoup.parse(s);
        System.out.println(htmlTags(doc));
    }
}
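Run against the sample markup from the question, this prints html,head,head,body,body,html, which matches the expected output.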
Another solution is to use jsoup's NodeVisitor, like this:
SecondSolution ss = new SecondSolution();
doc.traverse(ss);
System.out.println(ss.sb.toString());
The class:
public static class SecondSolution implements NodeVisitor {

    StringBuilder sb = new StringBuilder();

    // called when a node is first visited (opening tag)
    @Override
    public void head(Node node, int depth) {
        if (node instanceof Element && !(node instanceof Document)) {
            if (sb.length() > 0) {
                sb.append(",");
            }
            sb.append(node.nodeName());
        }
    }

    // called after all of a node's children have been visited (closing tag)
    @Override
    public void tail(Node node, int depth) {
        if (node instanceof Element && !(node instanceof Document)) {
            sb.append(",").append(node.nodeName());
        }
    }
}
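For completeness, here is a minimal sketch of how the visitor could be wired up end to end. The wrapper class name TraverseExample is illustrative, and it assumes SecondSolution is visible from it (for example, declared as a nested class in the same file):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class TraverseExample {
    public static void main(String[] args) {
        String s = "<html><head>this is head </head><body>this is body</body></html>";
        Document doc = Jsoup.parse(s);

        // traverse() walks the node tree depth-first, calling head() on the way
        // down and tail() on the way back up for every node
        SecondSolution ss = new SecondSolution();
        doc.traverse(ss);
        System.out.println(ss.sb.toString()); // html,head,head,body,body,html
    }
}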