Getting Scrapy to download installers from softpedia.com

Date: 2013-11-04 18:52:53

Tags: python scrapy

At the moment I can crawl softpedia.com and get an endless stream of links, including the installer links I am after, e.g. http://hotdownloads.com/trialware/download/Download_a1keylogger.zip?item=33649-3&affiliate=22260.

spider.py is as follows:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class MySpider(CrawlSpider):
    """ Crawl through web sites you specify """
    name = "softpedia"

    # Stay within these domains when crawling
    allowed_domains = ["www.softpedia.com"]

    start_urls = [
    "http://win.softpedia.com/",]

    download_delay = 2

    # Add our callback which will be called for every found link
    rules = [
            Rule(SgmlLinkExtractor(), follow=True)
    ]
items.py, pipelines.py and settings.py are left at their defaults, except for this one line added to settings.py:

FILES_STORE = '/home/test/softpedia/downloads'
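
For the files pipeline (referenced in approach 2 below and in the answer) to pick anything up, FILES_STORE alone is not sufficient; the pipeline itself also has to be enabled. A minimal sketch of the relevant settings.py lines, assuming a pre-1.0 Scrapy where the pipeline lives under scrapy.contrib.pipeline.files (it moved to scrapy.pipelines.files in Scrapy 1.0):

# enable the built-in files pipeline (dict form; very old versions use a plain list instead)
ITEM_PIPELINES = {'scrapy.contrib.pipeline.files.FilesPipeline': 1}

# directory where downloaded files are stored
FILES_STORE = '/home/test/softpedia/downloads'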

Using urllib2 I can tell whether a link points to an installer, in which case the Content-Type contains 'application':

>>> import urllib2
>>> url = 'http://hotdownloads.com/trialware/download/Download_a1keylogger.zip?item=33649-3&affiliate=22260'
>>> response = urllib2.urlopen(url)
>>> content_type = response.info().get('Content-Type')
>>> print content_type
application/zip
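
The same check can be done inside a Scrapy callback without urllib2, since the headers are exposed on the Response object. A minimal sketch, assuming a Python 2 / pre-1.0 Scrapy setup (where header values are plain strings) and assuming that treating any 'application/*' Content-Type as an installer is good enough; that heuristic is mine, not something Scrapy provides:

def is_installer(response):
    # Response.headers behaves like a case-insensitive dict of raw header values
    content_type = response.headers.get('Content-Type', '')
    # e.g. 'application/zip' for the installer link above
    return content_type.startswith('application')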

My question is: how do I collect the installer links I want and download them to my target folder? Thanks in advance!

PS:

I have found two approaches so far, but I cannot get either of them to work properly:

1. https://stackoverflow.com/a/7169241/2092480 — I followed this answer by adding the following code to the spider:

def parse_installer(self, response):
    # extract links
    lx = SgmlLinkExtractor()  
    urls = lx.extract_links(response)
    for url in urls:
        yield Request(url, callback=self.save_installer)

def save_installer(self, response):
    path = self.get_path(response.url)
    with open(path, "wb") as f: # or using wget
        f.write(response.body)

The spider just runs as if this code were not there, and no files get downloaded. Can anyone see where this goes wrong?

2. https://groups.google.com/forum/print/msg/scrapy-users/kzGHFjXywuY/O6PIhoT3thsJ — this approach works on its own when I feed predefined links into ["file_urls"], but how do I set Scrapy up to collect all the installer links into ["file_urls"]? Besides, I would think approach 1 above should be enough for such a simple task.

1 Answer:

Answer 0 (score: 1):

I combined the two approaches: collect the actual/mirror installer download links, then let the files download pipeline do the actual downloading. It does not seem to work when the download URL is dynamic/complex, e.g. http://www.softpedia.com/dyn-postdownload.php?p=00000&t=0&i=1, but it does work for simpler links such as http://www.ietf.org/rfc/rfc2616.txt.

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request
from scrapy.contrib.loader import XPathItemLoader
from scrapy import log
from datetime import datetime 
from scrapy.conf import settings
from myscraper.items import SoftpediaItem

class SoftpediaSpider(CrawlSpider):
name = "sosoftpedia"
allowed_domains = ["www.softpedia.com"]
start_urls = ['http://www.softpedia.com/get/Antivirus/']
rules = Rule(SgmlLinkExtractor(allow=('/get/', ),allow_domains=("www.softpedia.com"), restrict_xpaths=("//td[@class='padding_tlr15px']",)), callback='parse_links', follow=True,),


def parse_start_url(self, response):
    return self.parse_links(response)

def parse_links(self, response):
    print "PRODUCT DOWNLOAD PAGE: "+response.url
    hxs = HtmlXPathSelector(response)
    urls = hxs.select("//a[contains(@itemprop, 'downloadURL')]/@href").extract()
    for url in urls:
        item = SoftpediaItem()
        request =  Request(url=url, callback=self.parse_downloaddetail) 
        request.meta['item'] = item
        yield request

def parse_downloaddetail(self, response):
    item = response.meta['item']
    hxs = HtmlXPathSelector(response)
    item["file_urls"] = hxs.select('//p[@class="fontsize16"]/b/a/@href').extract() #["http://www.ietf.org/rfc/rfc2616.txt"]
    print "ACTUAL DOWNLOAD LINKS "+ hxs.select('//p[@class="fontsize16"]/b/a/@href').extract()[0]
    yield item
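
The spider above imports SoftpediaItem from myscraper.items; for the stock files pipeline to act on the item it needs at least a file_urls field (plus a files field, which the pipeline fills with the download results). A minimal sketch of what that items.py could look like — the class name and module come from the imports above, and the field names are the ones the built-in FilesPipeline expects:

from scrapy.item import Item, Field

class SoftpediaItem(Item):
    file_urls = Field()  # direct download links collected by the spider
    files = Field()      # populated by FilesPipeline once the files are stored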