I wrote a crawler in Python using the Scrapy framework to select some links and meta tags. It then crawls the start URLs and writes the data to a file in JSON-encoded format. The problem is that when the crawler is run two or three times with the same start URLs, the data in the file gets duplicated. To avoid this I used a downloader middleware from the Scrapy snippets site: http://snippets.scrapy.org/snippets/1/
What I did was copy and paste that code into a file in my Scrapy project and enable it by adding the following line to my settings.py file:
SPIDER_MIDDLEWARES = {'a11ypi.removeDuplicates.IgnoreVisitedItems':560}
where "a11ypi.removeDuplicates.IgnoreVisitedItems" is the class path, and finally I went in and modified my items.py file to include the following fields:
visit_id = Field()
visit_status = Field()
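For reference, a minimal sketch of what the full items.py might look like after adding those two fields (the remaining field names are inferred from the spider and pipeline code below):
from scrapy.item import Item, Field

class AYpiItem(Item):
    # Fields used by the spider and pipeline below
    foruri = Field()
    foruri_id = Field()
    thisurl = Field()
    thisid = Field()
    rec = Field()
    # Fields required by the IgnoreVisitedItems snippet
    visit_id = Field()
    visit_status = Field()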
But this does not work: the crawler still produces the same results and appends them to the file when run twice.
I wrote my pipelines.py file as follows:
import json

class AYpiPipeline(object):
    def __init__(self):
        self.file = open("a11ypi_dict.json", "ab+")

    # This method is called to process an item after it has been scraped.
    def process_item(self, item, spider):
        d = {}
        i = 0
        # Iterate over the scraped items and build a dictionary of dictionaries.
        try:
            while i < len(item["foruri"]):
                d.setdefault(item["foruri"][i], {}).setdefault(item["rec"][i], {})[item["foruri_id"][i]] = item['thisurl'] + ":" + item["thisid"][i]
                i += 1
        except IndexError:
            print "Index out of range"
        json.dump(d, self.file)
        return item
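For completeness, the pipeline is enabled in settings.py with something like the following (assuming the class lives in a11ypi/pipelines.py; in this version of Scrapy, ITEM_PIPELINES is a list of class paths):
ITEM_PIPELINES = ['a11ypi.pipelines.AYpiPipeline']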
My spider code is as follows:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from a11ypi.items import AYpiItem

class AYpiSpider(CrawlSpider):
    name = "a11y.in"
    allowed_domains = ["a11y.in"]

    # This is the list of seed URLs to begin crawling with.
    start_urls = ["http://www.a11y.in/a11ypi/idea/fire-hi.html"]

    # This is the callback method, which is used for scraping specific data
    def parse(self, response):
        temp = []
        hxs = HtmlXPathSelector(response)
        item = AYpiItem()
        wholeforuri = hxs.select("//@foruri").extract()  # XPath to extract foruri, which contains both the URL and the id
        for i in wholeforuri:
            temp.append(i.rpartition(":"))

        item["foruri"] = [i[0] for i in temp]  # the URL part of foruri
        item["foruri_id"] = [i.split(":")[-1] for i in wholeforuri]  # the id part of foruri
        item['thisurl'] = response.url
        item["thisid"] = hxs.select("//@foruri/../@id").extract()
        item["rec"] = hxs.select("//@foruri/../@rec").extract()
        return item
Please suggest what to do.
Answer 0 (score: 1)
Try to understand why the snippet is written the way it is:
if isinstance(x, Request):
    if self.FILTER_VISITED in x.meta:
        visit_id = self._visited_id(x)
        if visit_id in visited_ids:
            log.msg("Ignoring already visited: %s" % x.url,
                    level=log.INFO, spider=spider)
            visited = True
Note that on the second line, the middleware actually expects a key called FILTER_VISITED in Request.meta before it will drop the request. The reason is clear: if every URL you had ever visited were skipped unconditionally, you would have no URLs left to traverse at all. So FILTER_VISITED lets you choose which URL patterns you want to skip. If you want links extracted by a particular Rule to be skipped, do:
Rule(SgmlLinkExtractor(allow=('url_regex1', 'url_regex2')), callback='my_callback', process_request=setVisitFilter)

def setVisitFilter(request):
    request.meta['filter_visited'] = True
    return request
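Applied to your spider, that would look roughly like the sketch below (an assumption on my part: the callback is renamed and the link pattern is only a placeholder, since CrawlSpider drives its rules through parse internally, so the rules only take effect if you don't override parse):
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

def setVisitFilter(request):
    # Mark the request so the IgnoreVisitedItems middleware may drop it
    # if its URL has already been visited.
    request.meta['filter_visited'] = True
    return request

class AYpiSpider(CrawlSpider):
    name = "a11y.in"
    allowed_domains = ["a11y.in"]
    start_urls = ["http://www.a11y.in/a11ypi/idea/fire-hi.html"]

    rules = (
        # 'idea' is only a placeholder pattern -- use whatever regex
        # matches the pages you actually want to filter.
        Rule(SgmlLinkExtractor(allow=('idea',)),
             callback='parse_item', process_request=setVisitFilter),
    )

    def parse_item(self, response):
        # ... your existing extraction logic goes here ...
        pass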
P.S. I don't know whether this works for 0.14 and up, because some of the code that stores the spider context in the sqlite db has changed.