I'm trying to build a simple link-checker application. I extract all the href attributes from a web page and write them to a file. I then run what I've extracted through a regular expression to keep only valid URLs, which are written to a second file. Finally I visit those URLs and write any broken links to a third file.
In the abridged code below, assume the hrefs have already been extracted and listed in page_contents.txt. Here are the contents of that text file:
http://computing.dcu.ie/~humphrys/
http://computing.dcu.ie/~humphrys/
http://computing.dcu.ie/~humphrys/blog.html
http://computing.dcu.ie/~humphrys/teaching.html
http://computing.dcu.ie/~humphrys/research.html
http://computing.dcu.ie/~humphrys/contact.html
http://computing.dcu.ie/~humphrys/
http://computing.dcu.ie/~humphrys/ca249/
http://computing.dcu.ie/~humphrys/ca318/
http://computing.dcu.ie/~humphrys/ca425/
http://computing.dcu.ie/~humphrys/ca651/
http://w2mind.computing.dcu.ie/
http://w2mind.org/
index.html
computers.internet.html
#world
#ireland
#uk
#multimedia
#internet
http://www.pressreader.com/
http://www.pressdisplay.com/
http://www.newspaperdirect.com/
http://www.newseum.org/todaysfrontpages/
http://news.google.com/
http://news.google.com/news?ned=uk
http://news.google.com/news?ned=en_ie
http://www.google.com/alerts
http://en.wikinews.org/
http://news.yahoo.com/
http://uk.news.yahoo.com/
http://www.apimages.com/
http://en.wikipedia.org/wiki/Next_Media_Animation
http://www.youtube.com/user/NMAWorldEdition
http://www.youtube.com/user/NMANews
http://www.time.com/
http://www.newsweek.com/
http://www.economist.com/
http://www.salon.com/
http://www.tnr.com/
http://thenewrepublic.com/
http://www.nytimes.com/
http://www.nypost.com/
http://www.washingtonpost.com/
http://www.latimes.com/
http://www.wsj.com/
http://www.jpost.com/
http://www.smh.com.au/
http://www.theonion.com/
http://www.theonion.com/content/video
http://www.youtube.com/user/TheOnion
http://www.theonion.com/content/radionews
http://www.thedailymash.co.uk/
http://themire.net/
http://waterfordwhispersnews.com/
http://www.evilgerald.com/
http://www.langerland.com/
http://www.portadownnews.com/
http://www.portadownnews.com/archive.htm
http://www.irishurls.com/
http://www.irishtimes.com/
http://www.irish-times.com/
http://www.ireland.com/
http://notices.irishtimes.com/
http://www.irishtimes.com/search/
http://www.independent.ie/
http://www.unison.ie/irish_independent/
http://www.independent.ie/search/index.jsp
http://www.announcement.ie/
http://www.iannounce.co.uk/Republic-of-Ireland/52
http://www.sbpost.ie/
http://www.thepost.ie/
http://archives.tcm.ie/businesspost/
http://en.wikipedia.org/wiki/Sunday_Tribune
http://www.irishexaminer.com/
http://www.examiner.ie/
http://www.magill.ie/
http://www.villagemagazine.ie/
http://www.phoenix-magazine.com/
http://www.hotpress.com/
http://www.emigrant.ie/
http://groups.google.com/groups/dir?sel=gtype%3D0%2Cusenet%3Die&
http://www.listenlive.eu/ireland.html
http://www.rte.ie/
http://www.rte.ie/player/
http://www.rte.ie/tv/
http://www.rte.ie/news/
http://www.rte.ie/aertel/170-01.html
http://www.rte.ie/radio/
http://www.rte.ie/radio1/
http://www.rte.ie/smiltest/radio_new.smil
http://www.rte.ie/lyricfm/
http://dynamic.rte.ie/av/live/radio/lyric.smil
http://www.rte.ie/aertel/184-01.html
http://www.tv3.ie/
http://www.tg4.ie/
http://www.tnag.ie/
http://www.rte.ie/aertel/
http://www.rte.ie/aertel/103-01.html
http://www.irishtimes.com/weather/
http://www.rte.ie/weather/
http://dir.yahoo.com/Regional/Countries/United_Kingdom/News_and_Media/
http://www.thetimes.co.uk/
http://www.the-times.co.uk/
http://www.timesonline.co.uk/
http://en.wikipedia.org/wiki/The_Times
http://www.thesundaytimes.co.uk/
http://www.sunday-times.co.uk/
http://archive.timesonline.co.uk/tol/archive/
http://www.thetimes.co.uk/tto/archive/
http://www.newsint-archive.co.uk/
http://www.newstext.com.au/
http://www.telegraph.co.uk/
http://www.independent.co.uk/
http://www.guardian.co.uk/
http://en.wikipedia.org/wiki/The_Guardian
http://www.observer.co.uk/
http://observer.guardian.co.uk/
http://archive.guardian.co.uk/
http://browse.guardian.co.uk/
http://www.guardian.co.uk/Archive/
http://users.guardian.co.uk/help/search/
http://www.spectator.co.uk/
http://www.private-eye.co.uk/
http://www.newstatesman.co.uk/
The program runs without problems on several different pages, but for one particular page I get the following error message:
Exception in thread "main" java.lang.IllegalArgumentException: protocol = http host = null
at sun.net.spi.DefaultProxySelector.select(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.followRedirect(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderFieldKey(Unknown Source)
at test2.main(test2.java:77)
The error occurs at this line of the code:
String name = http.getHeaderFieldKey(i);
Answers to previous questions on this topic suggest that the problem is the program reading the URL's host as null. I don't know why that would be the case (assuming a null host really is the root of the problem?). The URL that seems to trigger it is http://www.newstatesman.co.uk/, which appears well formed and no different from the many other URLs the program handles correctly.
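For what it's worth, the URL seems to parse with a perfectly normal host when checked in isolation (HostCheck is just a throwaway diagnostic class, not part of the program):

```java
import java.net.MalformedURLException;
import java.net.URL;

class HostCheck
{
    // Returns the host component that java.net.URL parses out of the string.
    static String host(String spec) throws MalformedURLException
    {
        return new URL(spec).getHost();
    }

    public static void main(String[] args) throws Exception
    {
        // If parsing were the problem, this would print a null or empty host.
        System.out.println("host = " + host("http://www.newstatesman.co.uk/"));
    }
}
```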
This is more or less my first question, so any constructive comments on the problem or on how I've phrased it are welcome.
import javax.swing.text.html.*;
import javax.swing.text.Element;
import javax.swing.text.ElementIterator;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.BadLocationException;
import java.net.*;
import java.io.*;
import java.util.regex.*;
class test2
{
    public static void main(String args[]) throws Exception
    {
        String fileOut2 = System.getProperty("user.dir") + File.separator + "page_contents.txt";
        String fileURLOut = System.getProperty("user.dir") + File.separator + "urls.txt";
        String brokenLinks = System.getProperty("user.dir") + File.separator + "broken2.html";

        BufferedReader URLIn = new BufferedReader(new FileReader(fileOut2));
        PrintWriter URLOut = new PrintWriter(new FileWriter(fileURLOut));
        PrintWriter brokenOut = new PrintWriter(new FileWriter(brokenLinks));
        try
        {
            String urlPattern = "((https?|ftp|gopher|telnet):((//)|(\\\\))+[\\w\\d:#@%/;$()~_?\\+-=\\\\\\.&]*)";
            String x;
            while ((x = URLIn.readLine()) != null)
            {
                System.out.println("Entered while loop!");
                Pattern p = Pattern.compile(urlPattern, Pattern.CASE_INSENSITIVE);
                Matcher m = p.matcher(x);
                if (m.find())
                {
                    URLOut.println(x.substring(m.start(0), m.end(0)));
                    URL url = new URL(x.substring(m.start(0), m.end(0)));
                    HttpURLConnection http = (HttpURLConnection) url.openConnection();
                    http.setConnectTimeout(5000);
                    for (int i = 0; ; i++)
                    {
                        String name = http.getHeaderFieldKey(i);
                        String value = http.getHeaderField(i);
                        if (name == null && value == null) // end of headers
                        {
                            break;
                        }
                        if (name == null) // first line of headers
                        {
                            if (!value.substring(9, 12).equals("200"))
                            {
                                brokenOut.println("<li><a href=\"" + url + "\">" + url + "</a>" + " " + value.substring(9, 12) + "</li>");
                            }
                        }
                        else
                        {
                            System.out.println(name + "=" + value + "!!!!!!");
                        }
                    }
                }
            }
        } catch (MalformedURLException e)
        {
            System.out.println("Malformed URL!!!!!");
        } catch (IOException e)
        {
            throw new RuntimeException("IO Exception!!!!!", e);
        } finally
        {
            if (URLIn != null)
            {
                URLIn.close();
            }
            if (URLOut != null)
            {
                URLOut.close();
            }
            if (brokenOut != null)
            {
                brokenOut.close();
            }
        }
    }
}
Answer 0 (score: 0)
You want to handle the exception inside the loop; that gives you a chance to recover from a bad URL and carry on with the rest.
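A minimal sketch of that idea (the class name LinkCheck, the helper checkLink, and the -1 sentinel are all made up for illustration): each URL gets its own try/catch, and getResponseCode() replaces the fragile value.substring(9, 12) status-line parsing from the question, so one bad URL no longer aborts the whole run.

```java
import java.net.HttpURLConnection;
import java.net.URL;

class LinkCheck
{
    // Returns the HTTP status code, or -1 if the URL is malformed
    // or the connection fails for any reason.
    static int checkLink(String urlString)
    {
        try
        {
            URL url = new URL(urlString);
            HttpURLConnection http = (HttpURLConnection) url.openConnection();
            http.setConnectTimeout(5000);
            http.setReadTimeout(5000);
            int code = http.getResponseCode(); // parses the status line for us
            http.disconnect();
            return code;
        }
        catch (Exception e) // MalformedURLException, IOException, IllegalArgumentException, ...
        {
            System.out.println("Skipping " + urlString + ": " + e);
            return -1;
        }
    }

    public static void main(String[] args)
    {
        String[] urls = { "http://www.newstatesman.co.uk/", "#world" };
        for (String u : urls)
        {
            int code = checkLink(u);
            if (code != 200)
            {
                System.out.println("Broken or unreachable: " + u + " (" + code + ")");
            }
        }
    }
}
```

Catching Exception (not just IOException) matters here because the failure in your stack trace is an unchecked IllegalArgumentException thrown while following a redirect, which a catch of IOException alone would not stop.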