Java Multi Link Checker Spider - needs improvement

Posted: 2010-11-17 03:37:24

Tags: java performance testing web-crawler

I have the following working code (altered for posting here, so use your brain when copying and pasting). I want to improve it so that it detects all invalid web pages, including domains that are up for sale. At the moment it catches roughly 89% of them. Please point out anything I could improve, whether by pulling in an additional existing library or with some neat little tweak.

    List<Link> all = linkService.getAllLinks();
    // Worker threads append to this list concurrently, so it must be synchronized.
    notValidLinks = Collections.synchronizedList(new LinkedList<Link>());
    final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(39867);
    int poolSize = 90;
    int maxPoolSize = 100;
    long keepAliveTime = 40;
    ThreadPoolExecutor tpe = new ThreadPoolExecutor(poolSize, maxPoolSize,
            keepAliveTime, TimeUnit.SECONDS, queue);

    for (Link link : all) {
        Runnable task = new CheckSite(link);
        tpe.execute(task);
        System.out.println("Task count: " + queue.size());
    }
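One thing the loop above never does is shut the pool down, so the program has no way of knowing when all the checks have finished. A minimal sketch of the follow-up, assuming the same tpe executor (the 30-minute cap is an arbitrary choice):

    // Stop accepting new tasks, then wait for the queued checks to drain
    // before reading notValidLinks.
    tpe.shutdown();
    try {
        if (!tpe.awaitTermination(30, TimeUnit.MINUTES)) {
            tpe.shutdownNow(); // give up on tasks that are still stuck
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }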

class CheckSite implements Runnable {
    Link link;

    CheckSite(Link link) {
        this.link = link;
    }

    public void run() {
        boolean notValid = false;
        try {
            log.info(link.getLink() + " " + link.getId());
            URL u = new URL(link.getLink());
            HttpURLConnection huc = (HttpURLConnection) u.openConnection();
            // Per-connection setting; the static setFollowRedirects(false)
            // used previously silently changes behaviour for every
            // connection in the JVM.
            huc.setInstanceFollowRedirects(false);
            huc.setConnectTimeout(40000);
            // Without a read timeout, a stalled server can hang a worker forever.
            huc.setReadTimeout(40000);
            huc.setRequestMethod("GET");
            huc.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)");

            huc.connect();
            int code = huc.getResponseCode();

            // Anything other than 200 / 301 / 302 is treated as a dead link.
            if (code != HttpURLConnection.HTTP_OK
                    && code != HttpURLConnection.HTTP_MOVED_PERM
                    && code != HttpURLConnection.HTTP_MOVED_TEMP) {
                notValid = true;
                log.info("Invalid code: " + code + " - " + link.getLink());
            }
            if (code == HttpURLConnection.HTTP_MOVED_PERM) {
                log.info(link.getLink() + " Perm move");
            }
            if (code == HttpURLConnection.HTTP_MOVED_TEMP) {
                log.info(link.getLink() + " Temp move");
            }

            if (!notValid) {
                BufferedReader reader = null;
                try {
                    reader = new BufferedReader(new InputStreamReader(huc.getInputStream()));
                    StringBuilder stringBuilder = new StringBuilder();

                    String line;
                    while ((line = reader.readLine()) != null) {
                        stringBuilder.append(line);
                    }

                    // Pages whose visible text contains "Related Searches" are
                    // assumed to be parked / for-sale placeholder pages.
                    notValid = StringUtils.containsIgnoreCase(
                            Jsoup.parse(stringBuilder.toString()).text(), "Related Searches");
                } catch (Exception e) {
                    log.error(e.getMessage());
                } finally {
                    if (reader != null) {
                        try { reader.close(); } catch (IOException ignored) { }
                    }
                }
            }

            huc.disconnect();
        } catch (MalformedURLException me) {
            log.info("Malformed URL: " + link.getLink());
            notValid = true;
        } catch (IOException e) {
            log.info("Refused connection | Does not exist: " + link.getLink());
            notValid = true;
        }
        if (notValid) {
            link.setApproved(false);
            link.setDateApproved(null);
            notValidLinks.add(linkService.save(link));
        }
        log.debug("URL finished!");
    }
}
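Since Jsoup is already on the classpath for parsing, one possible tidy-up is to let it do the fetch, the status check, and the parked-page phrase check in a single pass. This is a sketch under that assumption, not a drop-in replacement; looksInvalid is a hypothetical helper:

    import java.io.IOException;
    import org.jsoup.Connection;
    import org.jsoup.Jsoup;

    static boolean looksInvalid(String url) {
        try {
            Connection.Response res = Jsoup.connect(url)
                    .userAgent("Mozilla/5.0")
                    .timeout(40000)
                    .followRedirects(false)
                    .ignoreHttpErrors(true) // inspect 4xx/5xx codes instead of throwing
                    .execute();
            int code = res.statusCode();
            if (code != 200 && code != 301 && code != 302) {
                return true;
            }
            // Same parked-page phrase heuristic as the original code.
            return res.parse().text().toLowerCase().contains("related searches");
        } catch (IOException e) {
            return true; // refused connections and unresolvable hosts count as invalid
        }
    }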

2 Answers:

Answer 0 (score: 1):


"I want to improve it so that it detects all invalid web pages, including domains that are up for sale"

I suspect the highlighted part (detecting for-sale domains) is unrealistic. How would a spider be able to tell whether a domain name is up for sale?

Follow-up:

@Mat Banik suggested looking for particular phrases, or looking at DNS records, as possible solutions.

  • A heuristic that checks for particular phrases will produce both false positives and false negatives.

  • Checking DNS records would be tricky in Java. You could do a simple DNS lookup on the hostname part of the URL and check the resulting IP address against a list of IPs of known DNS-parking sites (a sketch follows at the end of this answer). But that still does not tell you whether the original hostname is actually for sale. It could be a real website hosted on the same infrastructure... or a parked domain that is not for sale.

But I imagine that if you are prepared to accept some false positives and negatives, trying to filter out for-sale domains is feasible.
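A sketch of the DNS idea, with the caveats above: it assumes a hand-maintained set of known parking IPs (the 192.0.2.x addresses below are documentation placeholders, not real parking servers), and a match is only suggestive.

    import java.net.InetAddress;
    import java.net.MalformedURLException;
    import java.net.URL;
    import java.net.UnknownHostException;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class ParkedDomainHeuristic {

        // Placeholder addresses; fill in IPs actually observed serving parking pages.
        private static final Set<String> PARKING_IPS = new HashSet<String>(
                Arrays.asList("192.0.2.10", "192.0.2.11"));

        // True only if every address the host resolves to is a known parking IP.
        static boolean looksParked(String link) {
            try {
                String host = new URL(link).getHost();
                InetAddress[] addresses = InetAddress.getAllByName(host);
                for (InetAddress a : addresses) {
                    if (!PARKING_IPS.contains(a.getHostAddress())) {
                        return false; // at least one non-parking address
                    }
                }
                return addresses.length > 0;
            } catch (MalformedURLException e) {
                return false;
            } catch (UnknownHostException e) {
                return false; // does not resolve at all; the HTTP check will catch it
            }
        }
    }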

Answer 1 (score: 1):

Take a look at the Bloom filter [wiki]. It will help you look things up quickly and space-efficiently. The catch with a Bloom filter is that it can return false positives, i.e. it may claim that something is present when it is not. But if a Bloom filter says something is not there, it is definitely not there.
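The answer does not say where the filter would slot into the crawler; one plausible use, sketched below with Guava's BloomFilter (an assumption, as Guava is not used elsewhere in the question), is remembering which URLs have already been checked so repeat links can be skipped cheaply:

    import java.nio.charset.StandardCharsets;
    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;

    public class SeenUrls {
        // Sized for ~40,000 links with a 1% false-positive rate.
        private final BloomFilter<CharSequence> seen = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), 40000, 0.01);

        // May return true for a URL that was never added (false positive),
        // but never returns false for one that was.
        boolean maybeSeen(String url) {
            return seen.mightContain(url);
        }

        void markSeen(String url) {
            seen.put(url);
        }
    }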