I'm a beginner. I found a scraping tool on GitHub that can scrape email addresses from websites.
The spider is used with arguments passed on the command line:
scrapy crawl spider -a domain="example.com" -o emails-found.csv
The spider stores its results in a CSV file. I would like to store the results in a MySQL database instead, so I made some changes in pipelines.py.
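Roughly, what I am trying to do in pipelines.py looks like this (only a sketch to show where I am stuck; pymysql, the connection details, and the "emails" table are placeholders, not my real code):

import pymysql

class MysqlExportPipeline:
    def open_spider(self, spider):
        # placeholder connection details
        self.conn = pymysql.connect(host='localhost', user='user',
                                    password='secret', database='scraper')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # this is the value I cannot get hold of (see below)
        domain = getattr(spider, 'domain', None)
        self.cursor.execute(
            "INSERT INTO emails (domain, email_address) VALUES (%s, %s)",
            (domain, item['email_address'])
        )
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()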
This afternoon I tried very hard to get the value of this "domain" argument. You can see my previous post on the subject here: How to Import a variable of my spider class from my pipelines.py file?
But I did not succeed. The log tells me:
AttributeError: type object 'ThoroughSpider' has no attribute ...
I tried with start_urls, domain, and allowed_domains, but I always get the same log message "... has no attribute ...".
@gangabass suggested a good idea: yield the domain name so that it can be picked up from pipelines.py.
But as I said, I'm a beginner and I don't know how to do that either.
I've already spent the whole afternoon looking for a solution, without success (please don't laugh, it is not easy for me :-)). I'm sure this is a simple thing for an expert.
At this point I don't really care how it is done. I just want to get this domain value in my pipelines.py.
Here is the spider's code:
# implementation of the thorough spider
import re
from urllib.parse import urljoin, urlparse

import scrapy
from scrapy.linkextractors import IGNORED_EXTENSIONS

from scraper.items import EmailAddressItem

# scrapy.linkextractors has a good list of binary extensions, only slight tweaks needed
IGNORED_EXTENSIONS.extend(['ico', 'tgz', 'gz', 'bz2'])


def get_extension_ignore_url_params(url):
    path = urlparse(url).path  # conveniently lops off all params leaving just the path
    extension = re.search(r'\.([a-zA-Z0-9]+$)', path)
    if extension is not None:
        return extension.group(1)
    else:
        return "none"  # don't want to return NoneType, it will break comparisons later


class ThoroughSpider(scrapy.Spider):
    name = "spider"

    def __init__(self, domain=None, subdomain_exclusions=[], crawl_js=False):
        self.allowed_domains = [domain]
        start_url = "http://" + domain

        self.start_urls = [
            start_url
        ]

        self.subdomain_exclusions = subdomain_exclusions
        self.crawl_js = crawl_js
        # boolean command line parameters are not converted from strings automatically
        if str(crawl_js).lower() in ['true', 't', 'yes', 'y', '1']:
            self.crawl_js = True

    def parse(self, response):
        # print("Parsing ", response.url)
        all_urls = set()

        # use xpath selectors to find all the links, this proved to be more effective than using the
        # scrapy provided LinkExtractor during testing
        selector = scrapy.Selector(response)

        # grab all hrefs from the page
        # print(selector.xpath('//a/@href').extract())
        all_urls.update(selector.xpath('//a/@href').extract())

        # also grab all sources, this will yield a bunch of binary files which we will filter out
        # below, but it has the useful property that it will also grab all javascript file links
        # as well, we need to scrape these for urls to uncover js code that yields up urls when
        # executed! An alternative here would be to drive the scraper via selenium to execute the js
        # as we go, but this seems slightly simpler
        all_urls.update(selector.xpath('//@src').extract())

        # custom regex that works on javascript files to extract relative urls hidden in quotes.
        # This is a workaround for sites that need js executed in order to follow links -- aka
        # single-page angularJS type designs that have clickable menu items that are not rendered
        # into <a> elements but rather as clickable span elements - e.g. jana.com
        all_urls.update(selector.re(r'"(\/[-\w\d\/\._#?]+?)"'))

        for found_address in selector.re(r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,6}'):
            item = EmailAddressItem()
            item['email_address'] = found_address
            yield item

        for url in all_urls:
            # ignore commonly ignored binary extensions - might want to put PDFs back in list and
            # parse with a pdf->txt extraction library to strip emails from whitepapers, resumes,
            # etc.
            extension = get_extension_ignore_url_params(url)
            if extension in IGNORED_EXTENSIONS:
                continue

            # convert all relative paths to absolute paths
            if 'http' not in url:
                url = urljoin(response.url, url)

            if extension.lower() != 'js' or self.crawl_js is True:
                yield scrapy.Request(url, callback=self.parse)
Could some kind expert please tell me how to do this?
Answer 0 (score: 1):
You can access the spider argument in your pipeline simply as spider.domain. You have to make the argument available as an attribute by adding self.domain = domain in the spider's __init__.
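A minimal sketch of both sides, based only on the code already shown (the pipeline class name is just an example):

import scrapy

# spider.py -- only the relevant part of __init__
class ThoroughSpider(scrapy.Spider):
    name = "spider"

    def __init__(self, domain=None, subdomain_exclusions=[], crawl_js=False):
        self.domain = domain              # <-- this line makes the argument available
        self.allowed_domains = [domain]
        self.start_urls = ["http://" + domain]
        # ... rest of __init__ unchanged ...

# pipelines.py -- any pipeline method that receives the spider can read the attribute
class DomainAwarePipeline:
    def process_item(self, item, spider):
        print(spider.domain)   # the value passed with -a domain="example.com"
        return item

Note that -a arguments arrive as strings, so spider.domain will be exactly the string given on the command line.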