How do I implement a web crawler that captures ad links?

Asked: 2016-06-04 23:41:22

Tags: python web-crawler

To collect training data, I wrote a crawler that follows the Alexa top 500 websites to a depth of 2 and writes every link it finds to a file. Right now it finds all the links in the HTML and writes them out. The problem is that the crawler misses all of the ad links; some of them sit inside iframes or are referenced from CSS files. How can I change my crawler so that it captures all links, including ads? The relevant code is below.

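The imports aren't shown below; the snippet relies on roughly these Python 2 modules (BeautifulSoup 3, urllib2, the Queue module). The AGENT string here is only a placeholder, not the value from my actual code:

import re
import sys
import urllib2
import urlparse
from cgi import escape
from traceback import format_exc
from Queue import Queue, Empty as QueueEmpty
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3 API, i.e. soup('a')

AGENT = "Mozilla/5.0 (compatible; link-crawler)"  # placeholder user agent; real value not shown here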
class Crawler(object):

def __init__(self, root, depth, locked=True):
    self.root = root
    self.depth = depth
    self.locked = locked
    self.host = urlparse.urlparse(root)[1]
    self.urls = []
    self.links = 0
    self.followed = 0


def crawl(self):
    #print " in crawl"
    page = Fetcher(self.root)
    q = Queue()
    #print "made fetcher"
    try:
        page.fetch()
        if page.urls == []:
            print "Error: could not fetch urls for %s" % (self.root)
            return
            #raise KeyboardInterrupt
        else: 
            target = open("output.txt", 'w')
            for url in page.urls:
                q.put(url)
                target.write((url+'\n').encode('utf-8'))
            followed = [self.root]
            target.close()

    except Exception as e:
        print('Error: could not fetch urls')
        raise KeyboardInterrupt
        '''
    q = Queue()
    target = open("output.txt", 'w')
    for url in page.urls:
        q.put(url) f
        target.write((url+'\n').encode('utf-8'))
    followed = [self.root]
    target.close()
    #print followed
    '''

    n = 0

    while True:
        try:
            url = q.get_nowait()  # non-blocking get; a plain q.get() blocks forever and QueueEmpty is never raised
        except QueueEmpty:
            break

        n += 1

        if url not in followed:
            try:
                host = urlparse.urlparse(url)[1]

                if self.locked and re.match(".*%s" % self.host, host):
                    followed.append(url)
                    #print url
                    self.followed += 1
                    page = Fetcher(url)
                    page.fetch()
                    for i, url in enumerate(page):
                        if url not in self.urls:
                            self.links += 1
                            q.put(url)
                            self.urls.append(url)
                            with open("data.out", 'w') as f:
                               f.write(url)
                    if n > self.depth and self.depth > 0:
                        break
            except Exception as e:
                print "ERROR: Can't process url '%s' (%s)" % (url, e)
                print format_exc()

class Fetcher(object):

def __init__(self, url):
    self.url = url
    self.urls = []

def __getitem__(self, x):
    return self.urls[x]

def _addHeaders(self, request):
    request.add_header("User-Agent", AGENT)

def open(self):
    url = self.url
    try:
        request = urllib2.Request(url)
        handle = urllib2.build_opener()
    except IOError:
        return None
    return (request, handle)

def fetch(self):
    opened = self.open()
    if opened is None:  # self.open() returned None because the request could not be built
        return
    request, handle = opened
    self._addHeaders(request)
    if handle:
        try:
            content = unicode(handle.open(request).read(), "utf-8",
                    errors="replace")
            soup = BeautifulSoup(content)
            # only anchor tags are collected here, which is why ad links
            # living in iframes, scripts or CSS never show up
            tags = soup('a')
        except urllib2.HTTPError as error:
            if error.code == 404:
                print >> sys.stderr, "ERROR: %s -> %s" % (error, error.url)
            else:
                print >> sys.stderr, "ERROR: %s" % error
            tags = []
        except urllib2.URLError as error:
            print >> sys.stderr, "ERROR: %s" % error
            tags = []
        for tag in tags:
            href = tag.get("href")
            if href is not None:
                url = urlparse.urljoin(self.url, escape(href))
                if url not in self:
                    self.urls.append(url)

def getLinks(url):
    page = Fetcher(url)
    page.fetch()
    for i, url in enumerate(page):
        print "%d. %s" % (i, url)

1 Answer:

Answer 0 (score: 0):

A lot of ads are delivered by asynchronous JavaScript that runs on the page. If you only scrape the server's initial output, you will never see those extra links. One approach is to use a headless browser such as PhantomJS to render the HTML to a file and then run your script over that. There are other possibilities as well.
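
A rough sketch of the PhantomJS route, using Selenium's PhantomJS driver (this assumes Selenium is installed and the phantomjs binary is on your PATH; render_page is just an example name):

from selenium import webdriver

def render_page(url, out_path="rendered.html"):
    # load the page in headless PhantomJS so ad scripts get a chance to run
    driver = webdriver.PhantomJS()
    try:
        driver.get(url)
        html = driver.page_source  # DOM after JavaScript has executed
        with open(out_path, "w") as f:
            f.write(html.encode("utf-8"))
    finally:
        driver.quit()
    return out_path

You can then point your existing BeautifulSoup parsing at rendered.html instead of the raw server response.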