I am crawling a web page, extracting all of its links after the crawl, and then trying to parse each of those URLs with Apache Tika and BoilerPipe using the code below. For some URLs the parsing works fine, but for others I get the error shown below. It points to HTMLParser.java, line 102, which is this line:
String parsedText = tika.parseToString(htmlStream, md);
I have also included the full HTMLParser code below.
org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.ooxml.OOXMLParser@67c28a6a
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:203)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:197)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:135)
at org.apache.tika.Tika.parseToString(Tika.java:357)
at edu.uci.ics.crawler4j.crawler.HTMLParser.parse(HTMLParser.java:102)
at edu.uci.ics.crawler4j.crawler.WebCrawler.handleHtml(WebCrawler.java:227)
at edu.uci.ics.crawler4j.crawler.WebCrawler.processPage(WebCrawler.java:299)
at edu.uci.ics.crawler4j.crawler.WebCrawler.run(WebCrawler.java:118)
at java.lang.Thread.run(Unknown Source)
Caused by: java.util.zip.ZipException: invalid block type
at java.util.zip.InflaterInputStream.read(Unknown Source)
at java.util.zip.ZipInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(Unknown Source)
at org.apache.poi.openxml4j.util.ZipInputStreamZipEntrySource$FakeZipEntry.<init>(ZipInputStreamZipEntrySource.java:114)
at org.apache.poi.openxml4j.util.ZipInputStreamZipEntrySource.<init>(ZipInputStreamZipEntrySource.java:55)
at org.apache.poi.openxml4j.opc.ZipPackage.<init>(ZipPackage.java:82)
at org.apache.poi.openxml4j.opc.OPCPackage.open(OPCPackage.java:220)
at org.apache.poi.extractor.ExtractorFactory.createExtractor(ExtractorFactory.java:152)
at org.apache.tika.parser.microsoft.ooxml.OOXMLExtractorFactory.parse(OOXMLExtractorFactory.java:65)
at org.apache.tika.parser.microsoft.ooxml.OOXMLParser.parse(OOXMLParser.java:67)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:197)
... 8 more
Here is my HTMLParser.java file:
public void parse(String htmlContent, String contextURL) {
    InputStream htmlStream = null;
    text = null;
    title = null;
    metaData = new HashMap<String, String>();
    urls = new HashSet<String>();
    char[] chars = htmlContent.toCharArray();
    bulletParser.setCallback(textExtractor);
    bulletParser.parse(chars);
    try {
        text = articleExtractor.getText(htmlContent);
    } catch (BoilerpipeProcessingException e) {
        e.printStackTrace();
    }
    if (text == null) {
        text = textExtractor.text.toString().trim();
    }
    title = textExtractor.title.toString().trim();
    try {
        Metadata md = new Metadata();
        String utfHtmlContent = new String(htmlContent.getBytes(), "UTF-8");
        htmlStream = new ByteArrayInputStream(utfHtmlContent.getBytes());
        // The line below is line 102 referenced in the stack trace above
        String parsedText = tika.parseToString(htmlStream, md);
        // very unlikely to happen
        if (text == null) {
            text = parsedText.trim();
        }
        processMetaData(md);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        IOUtils.closeQuietly(htmlStream);
    }
    bulletParser.setCallback(linkExtractor);
    bulletParser.parse(chars);
    Iterator<String> it = linkExtractor.urls.iterator();
    String baseURL = linkExtractor.base();
    if (baseURL != null) {
        contextURL = baseURL;
    }
    int urlCount = 0;
    while (it.hasNext()) {
        String href = it.next();
        href = href.trim();
        if (href.length() == 0) {
            continue;
        }
        String hrefWithoutProtocol = href.toLowerCase();
        if (href.startsWith("http://")) {
            hrefWithoutProtocol = href.substring(7);
        }
        if (hrefWithoutProtocol.indexOf("javascript:") < 0
                && hrefWithoutProtocol.indexOf("@") < 0) {
            URL url = URLCanonicalizer.getCanonicalURL(href, contextURL);
            if (url != null) {
                urls.add(url.toExternalForm());
                urlCount++;
                if (urlCount > MAX_OUT_LINKS) {
                    break;
                }
            }
        }
    }
}
Any suggestions would be appreciated.
Answer 0 (Score: 1)
This sounds like a malformed OOXML document (.docx, .xlsx, etc.). To check whether the latest Tika version still has the problem, you can download the tika-app jar and run it like this:
java -jar tika-app-1.0.jar --text http://url.of.the/troublesome/document.docx
This should print the text contained in the document. If it does not work, please file a bug report with the URL of the troublesome document (and attach the document itself if it is not publicly accessible).
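If you would rather reproduce the same check from Java instead of the command line, the Tika facade can parse a URL directly. This is only a minimal sketch (the class name and the URL are placeholders, not from the question's code) that reports whether Tika can extract text from the document:

import java.net.URL;

import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;

public class TikaUrlCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: replace with the document that fails in your crawler.
        URL docUrl = new URL("http://url.of.the/troublesome/document.docx");
        Tika tika = new Tika();
        try {
            // Roughly equivalent to: java -jar tika-app-1.0.jar --text <url>
            String text = tika.parseToString(docUrl);
            System.out.println(text);
        } catch (TikaException e) {
            // A broken OOXML file will fail here, much like the TIKA-198 error above.
            System.err.println("Tika could not parse the document: " + e.getMessage());
        }
    }
}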
Answer 1 (Score: 1)
I had the same problem, and I found that the documents (docx files) I was trying to parse were not plain documents: they had been created in Microsoft Word with input fields and label text next to those fields.
I removed those files from the folder and posted the rest of the files to the Solr engine for parsing and indexing, and that worked fine.
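If removing the offending files by hand is not practical, the same idea can be automated by pre-screening each file with Tika and only keeping the ones it can parse. This is a minimal sketch under the assumption that the documents are local files; the class name, method name, and directory handling are illustrative only:

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.tika.Tika;
import org.apache.tika.exception.TikaException;

public class ParsableFilter {
    private static final Tika tika = new Tika();

    // Returns only the files that Tika can parse; the rest are logged
    // and excluded before anything is posted to Solr.
    public static List<File> keepParsable(File folder) {
        List<File> good = new ArrayList<File>();
        File[] files = folder.listFiles();
        if (files == null) {
            return good; // not a directory, or not readable
        }
        for (File f : files) {
            try {
                tika.parseToString(f); // throws TikaException for broken documents
                good.add(f);
            } catch (TikaException e) {
                System.err.println("Skipping unparsable file: " + f.getName());
            } catch (IOException e) {
                System.err.println("Could not read file: " + f.getName());
            }
        }
        return good;
    }
}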