Detecting honest web crawlers

Date: 2009-02-13 01:55:33

Tags: c# web-crawler bots

I would like to detect (on the server side) which requests are from bots. I don't care about malicious bots at this point, just the ones that are playing nice. I've seen a few approaches that mostly involve matching the user agent string against keywords like 'bot'. But that seems awkward, incomplete, and unmaintainable. So does anyone have a more solid approach? If not, do you have any resources you use to keep up to date on all the friendly user agents?

In case you're curious: I'm not trying to do anything against any search engine policy. We have a section of the site where a user is randomly presented with one of several slightly different versions of a page. However, if a web crawler is detected, we always give it the same version so that the index is consistent.

Also, I'm using Java, but I would imagine the approach would be similar for any server-side technology.

7 Answers:

Answer 0 (score: 83)

You said matching the user agent on 'bot' may be awkward, but we've found it to be a very good match. Our studies have shown that it covers about 98% of the hits you receive, and we haven't come across any false positives with it yet either. If you want to raise that to 99.9%, you can include a few other well-known matches such as 'crawler', 'baiduspider', 'ia_archiver', 'curl', etc. We've tested this on our production systems over millions of hits.

Here are a few c# solutions for you:

1) Simplest

Fastest when processing a miss, i.e. traffic from a non-bot (a normal user). Catches 99%+ of crawlers.

bool iscrawler = Regex.IsMatch(Request.UserAgent, @"bot|crawler|baiduspider|80legs|ia_archiver|voyager|curl|wget|yahoo! slurp|mediapartners-google", RegexOptions.IgnoreCase);

2) Medium

Fastest when processing a hit, i.e. traffic from a bot. Fairly fast for misses too. Catches close to 100% of crawlers. Matches 'bot', 'crawler', 'spider' upfront. You can add any other known crawlers to it.

List<string> Crawlers3 = new List<string>()
{
    "bot","crawler","spider","80legs","baidu","yahoo! slurp","ia_archiver","mediapartners-google",
    "lwp-trivial","nederland.zoek","ahoy","anthill","appie","arale","araneo","ariadne",            
    "atn_worldwide","atomz","bjaaland","ukonline","calif","combine","cosmos","cusco",
    "cyberspyder","digger","grabber","downloadexpress","ecollector","ebiness","esculapio",
    "esther","felix ide","hamahakki","kit-fireball","fouineur","freecrawl","desertrealm",
    "gcreep","golem","griffon","gromit","gulliver","gulper","whowhere","havindex","hotwired",
    "htdig","ingrid","informant","inspectorwww","iron33","teoma","ask jeeves","jeeves",
    "image.kapsi.net","kdd-explorer","label-grabber","larbin","linkidator","linkwalker",
    "lockon","marvin","mattie","mediafox","merzscope","nec-meshexplorer","udmsearch","moget",
    "motor","muncher","muninn","muscatferret","mwdsearch","sharp-info-agent","webmechanic",
    "netscoop","newscan-online","objectssearch","orbsearch","packrat","pageboy","parasite",
    "patric","pegasus","phpdig","piltdownman","pimptrain","plumtreewebaccessor","getterrobo-plus",
    "raven","roadrunner","robbie","robocrawl","robofox","webbandit","scooter","search-au",
    "searchprocess","senrigan","shagseeker","site valet","skymob","slurp","snooper","speedy",
    "curl_image_client","suke","www.sygol.com","tach_bw","templeton","titin","topiclink","udmsearch",
    "urlck","valkyrie libwww-perl","verticrawl","victoria","webscout","voyager","crawlpaper",
    "webcatcher","t-h-u-n-d-e-r-s-t-o-n-e","webmoose","pagesinventory","webquest","webreaper",
    "webwalker","winona","occam","robi","fdse","jobo","rhcs","gazz","dwcp","yeti","fido","wlm",
    "wolp","wwwc","xget","legs","curl","webs","wget","sift","cmc"
};
string ua = Request.UserAgent.ToLower();
bool iscrawler = Crawlers3.Exists(x => ua.Contains(x));

3) Paranoid

Pretty fast, but a little slower than options 1 and 2. It's the most accurate, and it lets you maintain the lists as needed. If you're afraid of future false positives, you can maintain a separate list of names that contain 'bot'. If we get a short match, we log it and check it for a false positive.

// crawlers that have 'bot' in their useragent
List<string> Crawlers1 = new List<string>()
{
    "googlebot","bingbot","yandexbot","ahrefsbot","msnbot","linkedinbot","exabot","compspybot",
    "yesupbot","paperlibot","tweetmemebot","semrushbot","gigabot","voilabot","adsbot-google",
    "botlink","alkalinebot","araybot","undrip bot","borg-bot","boxseabot","yodaobot","admedia bot",
    "ezooms.bot","confuzzledbot","coolbot","internet cruiser robot","yolinkbot","diibot","musobot",
    "dragonbot","elfinbot","wikiobot","twitterbot","contextad bot","hambot","iajabot","news bot",
    "irobot","socialradarbot","ko_yappo_robot","skimbot","psbot","rixbot","seznambot","careerbot",
    "simbot","solbot","mail.ru_bot","spiderbot","blekkobot","bitlybot","techbot","void-bot",
    "vwbot_k","diffbot","friendfeedbot","archive.org_bot","woriobot","crystalsemanticsbot","wepbot",
    "spbot","tweetedtimes bot","mj12bot","who.is bot","psbot","robot","jbot","bbot","bot"
};

// crawlers that don't have 'bot' in their useragent
List<string> Crawlers2 = new List<string>()
{
    "baiduspider","80legs","baidu","yahoo! slurp","ia_archiver","mediapartners-google","lwp-trivial",
    "nederland.zoek","ahoy","anthill","appie","arale","araneo","ariadne","atn_worldwide","atomz",
    "bjaaland","ukonline","bspider","calif","christcrawler","combine","cosmos","cusco","cyberspyder",
    "cydralspider","digger","grabber","downloadexpress","ecollector","ebiness","esculapio","esther",
    "fastcrawler","felix ide","hamahakki","kit-fireball","fouineur","freecrawl","desertrealm",
    "gammaspider","gcreep","golem","griffon","gromit","gulliver","gulper","whowhere","portalbspider",
    "havindex","hotwired","htdig","ingrid","informant","infospiders","inspectorwww","iron33",
    "jcrawler","teoma","ask jeeves","jeeves","image.kapsi.net","kdd-explorer","label-grabber",
    "larbin","linkidator","linkwalker","lockon","logo_gif_crawler","marvin","mattie","mediafox",
    "merzscope","nec-meshexplorer","mindcrawler","udmsearch","moget","motor","muncher","muninn",
    "muscatferret","mwdsearch","sharp-info-agent","webmechanic","netscoop","newscan-online",
    "objectssearch","orbsearch","packrat","pageboy","parasite","patric","pegasus","perlcrawler",
    "phpdig","piltdownman","pimptrain","pjspider","plumtreewebaccessor","getterrobo-plus","raven",
    "roadrunner","robbie","robocrawl","robofox","webbandit","scooter","search-au","searchprocess",
    "senrigan","shagseeker","site valet","skymob","slcrawler","slurp","snooper","speedy",
    "spider_monkey","spiderline","curl_image_client","suke","www.sygol.com","tach_bw","templeton",
    "titin","topiclink","udmsearch","urlck","valkyrie libwww-perl","verticrawl","victoria",
    "webscout","voyager","crawlpaper","wapspider","webcatcher","t-h-u-n-d-e-r-s-t-o-n-e",
    "webmoose","pagesinventory","webquest","webreaper","webspider","webwalker","winona","occam",
    "robi","fdse","jobo","rhcs","gazz","dwcp","yeti","crawler","fido","wlm","wolp","wwwc","xget",
    "legs","curl","webs","wget","sift","cmc"
};

string ua = Request.UserAgent.ToLower();
string match = null;

if (ua.Contains("bot")) match = Crawlers1.FirstOrDefault(x => ua.Contains(x));
else match = Crawlers2.FirstOrDefault(x => ua.Contains(x));

if (match != null && match.Length < 5) Log("Possible new crawler found: ", ua);

bool iscrawler = match != null;

Notes:

  • It's tempting to keep adding names to regex option 1, but it gets slower if you do. If you want a more complete list, LINQ with a lambda is faster.
  • Make sure .ToLower() is outside of your LINQ method - remember that the method is a loop, and you would be re-lowercasing the string on every iteration.
  • Always put the heaviest-hitting bots at the start of the lists, so they match sooner.
  • Put the lists into a static class so that they are not rebuilt on every pageview.
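
Since the question mentions Java, the static-list pattern in the notes above translates directly. A minimal sketch (the class name is made up and the list is deliberately abbreviated - use one of the full lists above in practice):

```java
import java.util.List;
import java.util.Locale;

public class CrawlerCheck {
    // Built once at class-load time, not on every request
    private static final List<String> CRAWLERS = List.of(
        "bot", "crawler", "spider", "baiduspider", "ia_archiver",
        "yahoo! slurp", "mediapartners-google", "curl", "wget");

    public static boolean isCrawler(String userAgent) {
        if (userAgent == null) return true; // a missing user agent is suspicious
        // Lowercase once, outside the loop, as per the notes above
        final String ua = userAgent.toLowerCase(Locale.ROOT);
        return CRAWLERS.stream().anyMatch(ua::contains);
    }
}
```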

Honeypots

The only real alternative to this is to create a 'honeypot' link on your site that only a bot will reach. You then log the user agent strings that hit the honeypot page to a database, and use those logged strings to classify crawlers.

Positives: It will match some unknown crawlers that aren't declaring themselves.

Negatives: Not all crawlers dig deep enough to hit every link on your site, so they may never reach your honeypot.
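
A rough sketch of the logging side of that honeypot, in Java. The trap path and class names are made up for illustration; a real version would hook into your request pipeline and persist the records to a database rather than keeping them in memory:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HoneypotLog {
    // Path of the hidden trap link; only bots following invisible links should request it
    private static final String TRAP_PATH = "/dontfollowme.aspx";

    // user-agent -> number of hits on the trap
    private static final Map<String, Integer> TRAPPED = new ConcurrentHashMap<>();

    /** Call this for every incoming request; it records user agents that hit the trap. */
    public static void recordRequest(String path, String userAgent) {
        if (TRAP_PATH.equals(path)) {
            TRAPPED.merge(userAgent == null ? "(no user agent)" : userAgent, 1, Integer::sum);
        }
    }

    public static boolean seenInTrap(String userAgent) {
        return TRAPPED.containsKey(userAgent);
    }
}
```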

Answer 1 (score: 25)

You can find a very thorough database of known "good" web crawlers in the robotstxt.org Robots Database. Utilizing this data would be far more effective than just matching bot in the user agent.
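
If you export the user-agent strings from such a database into a plain text file, one per line, loading and matching them in Java could look something like this sketch (the file format and class name are assumptions, not part of the database itself):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class RobotsDbMatcher {
    private final List<String> agents;

    // 'file' is assumed to hold one known robot user-agent substring per line,
    // e.g. exported by hand from the robotstxt.org Robots Database
    public RobotsDbMatcher(Path file) throws IOException {
        agents = Files.readAllLines(file).stream()
                .map(s -> s.trim().toLowerCase(Locale.ROOT))
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    public boolean isKnownBot(String userAgent) {
        String ua = userAgent.toLowerCase(Locale.ROOT);
        return agents.stream().anyMatch(ua::contains);
    }
}
```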

Answer 2 (score: 9)

One suggestion is to create an empty anchor on your page that only a bot would follow. Normal users won't see the link, leaving spiders and bots to follow it. For example, an empty anchor tag pointing to a subfolder would record a GET request in your logs...

<a href="dontfollowme.aspx"></a>

Many people use this method while running a honeypot to catch malicious bots that don't honor the robots.txt file. I use the empty anchor method in an ASP.NET honeypot solution I wrote to trap and block those creepy crawlers...

Answer 3 (score: 5)

Any visitor whose entry page is /robots.txt is probably a bot.
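
One way to apply that heuristic in Java is to remember, per session, whether the session's first request was for /robots.txt. This is only a sketch under that assumption - session identification and eviction of old entries are left out:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RobotsTxtFirst {
    // session id -> whether that session's very first request was /robots.txt
    private static final Map<String, Boolean> FIRST_WAS_ROBOTS = new ConcurrentHashMap<>();

    /** Call on every request; only the first request per session is recorded. */
    public static void onRequest(String sessionId, String path) {
        FIRST_WAS_ROBOTS.putIfAbsent(sessionId, "/robots.txt".equals(path));
    }

    public static boolean looksLikeBot(String sessionId) {
        return FIRST_WAS_ROBOTS.getOrDefault(sessionId, false);
    }
}
```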

Answer 4 (score: 4)

Something quick and dirty like this might be a good start:

return if request.user_agent =~ /googlebot|msnbot|baidu|curl|wget|Mediapartners-Google|slurp|ia_archiver|Gigabot|libwww-perl|lwp-trivial/i

Note: this is Rails code, but the regex is generally applicable.
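
For the asker's Java setup, a rough equivalent of that one-liner using java.util.regex might look like this (the class name is illustrative; precompiling the pattern once is the main point):

```java
import java.util.regex.Pattern;

public class QuickBotCheck {
    // Compiled once; same alternation as the Rails one-liner above
    private static final Pattern BOTS = Pattern.compile(
        "googlebot|msnbot|baidu|curl|wget|mediapartners-google|slurp|ia_archiver|gigabot|libwww-perl|lwp-trivial",
        Pattern.CASE_INSENSITIVE);

    public static boolean isBot(String userAgent) {
        return userAgent != null && BOTS.matcher(userAgent).find();
    }
}
```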

Answer 5 (score: 0)

I'm pretty sure a large proportion of bots don't use robots.txt, but that was my first thought.

It seems to me that the best way to detect a bot is by looking at the time between requests: if the time between requests is consistently fast, then it's a bot.
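
A minimal Java sketch of that request-interval idea. The 500 ms threshold is an arbitrary guess, and a real version would look at sustained rates per client rather than a single fast gap:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RequestIntervals {
    // Threshold below which two consecutive requests look faster than human clicking
    private static final long HUMAN_MIN_INTERVAL_MS = 500;

    // client id (e.g. IP or session) -> timestamp of that client's last request
    private static final Map<String, Long> LAST_SEEN = new ConcurrentHashMap<>();

    /** Returns true if this client's requests arrive faster than a human plausibly clicks. */
    public static boolean tooFast(String clientId, long nowMillis) {
        Long prev = LAST_SEEN.put(clientId, nowMillis);
        return prev != null && nowMillis - prev < HUMAN_MIN_INTERVAL_MS;
    }
}
```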

Answer 6 (score: 0)

void CheckBrowserCaps()
{
    String labelText = "";
    System.Web.HttpBrowserCapabilities myBrowserCaps = Request.Browser;
    if (((System.Web.Configuration.HttpCapabilitiesBase)myBrowserCaps).Crawler)
    {
        labelText = "Browser is a search engine.";
    }
    else
    {
        labelText = "Browser is not a search engine.";
    }

    Label1.Text = labelText;
}

HttpCapabilitiesBase.Crawler Property