I am a beginner in Python and am using Scrapy to crawl all links recursively, and I want to map each link to the text found at that link.
To do this I need to define my own Spider class that accepts arguments for the name and the list of websites to be crawled, and I want to build a dictionary mapping each link to the text found on that page, but I lack an understanding of objects in Python classes. In the code below I tried to run Scrapy by creating an object, but it gives me errors.
Please help me create objects of this class (passing arguments with the names of the webpages/websites to be crawled) and build a dictionary of the form {'url': 'all text found in that url'}.
#rinku
import scrapy


# class LinkExtractor():
class MyntraSpider(scrapy.Spider):
    name = "Myntra"
    # allowed_domains = ["myntra.com"]
    # start_urls = [
    #     "http://www.myntra.com/",
    # ]

    # name = "Linker"
    # def __init__(allowed_domains=[], start_urls=[]):
    #     self.allowed_domains = allowed_domains
    #     self.start_urls = start_urls

    def __init__(self, allowed_domains=None, start_urls=None):
        super().__init__()

        # self.name = name

        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls

    def parse(self, response):
        hxs = scrapy.Selector(response)

        # extract all links from page
        all_links = hxs.xpath('*//a/@href').extract()

        # iterate over links
        for link in all_links:
            yield scrapy.http.Request(url=link, callback=print_this_link)

    def print_this_link(self, link):
        print("Link --> {this_link}".format(this_link=link))


m1 = MyntraSpider(["myntra.com"], ["http://www.myntra.com/"])
# m1 = MyntraSpider("Linker", ["myntra.com"], ["http://www.myntra.com/",])
The output I get does not print any links:
(venv) C:\Users\Carthaginian\Desktop\projectLink\crawler>scrapy crawl Myntra
2019-08-14 13:32:51 [scrapy.utils.log] INFO: Scrapy 1.7.3 started (bot: crawler)
2019-08-14 13:32:51 [scrapy.utils.log] INFO: Versions: lxml 4.4.1.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.7.0, Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c 28 May 2019), cryptography 2.7, Platform Windows-10-10.0.17134-SP0
2019-08-14 13:32:51 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'crawler', 'NEWSPIDER_MODULE': 'crawler.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['crawler.spiders']}
2019-08-14 13:32:51 [scrapy.extensions.telnet] INFO: Telnet Password: 3109504fb87f6b47
2019-08-14 13:32:51 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-08-14 13:32:52 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Spider opened
2019-08-14 13:32:52 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-08-14 13:32:52 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Closing spider (finished)
2019-08-14 13:32:52 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.015957,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2019, 8, 14, 8, 2, 52, 585291),
'log_count/INFO': 10,
'start_time': datetime.datetime(2019, 8, 14, 8, 2, 52, 569334)}
2019-08-14 13:32:52 [scrapy.core.engine] INFO: Spider closed (finished)
Answer 0 (score: 2)
To run the spider with arguments, you have to use __init__:
class MyntraSpider(scrapy.Spider):

    def __init__(self, name, allowed_domains=None, start_urls=None):
        super().__init__()

        self.name = name

        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls
When you run it yourself (without scrapy), like this:
m1 = MyntraSpider("Myntra", ["myntra.com"], ["http://www.myntra.com/"])
then Python will execute something like
MyntraSpider.__init__(m1, "Myntra", ["myntra.com"], ["http://www.myntra.com/"])
If you generated a project to run the crawler, then you don't create an instance yourself; you run scrapy, which uses the spider automatically, and you have to pass the data on the command line:

scrapy crawl MyntraSpider -a name=Myntra -a allowed_domains=myntra.com -a start_urls=http://www.myntra.com/

But the values arrive as strings, so you may have to convert them to lists yourself - i.e. using split() in __init__.
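For example, a minimal sketch of such an __init__, assuming each -a value arrives as a single comma-separated string (the comma-separated format is an assumption, not something Scrapy enforces):

import scrapy

class MyntraSpider(scrapy.Spider):
    name = "Myntra"

    def __init__(self, allowed_domains=None, start_urls=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # assumption: split "a.com,b.com" into ["a.com", "b.com"]; a missing argument becomes []
        self.allowed_domains = allowed_domains.split(',') if allowed_domains else []
        self.start_urls = start_urls.split(',') if start_urls else []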
EDIT: working code after using

full_link = response.urljoin(link)

to convert relative URLs into absolute URLs, and after adding self. in callback=self.print_this_link.

There is no need to create hxs = scrapy.Selector(response), because response.xpath works as well.

This is a standalone script that you can run without creating a project. It yields the URL and the page title and saves them in output.csv.
import scrapy


class MySpider(scrapy.Spider):

    name = "MySpider"

    def __init__(self, allowed_domains=None, start_urls=None):
        super().__init__()

        # self.name = name

        if allowed_domains is None:
            self.allowed_domains = []
        else:
            self.allowed_domains = allowed_domains

        if start_urls is None:
            self.start_urls = []
        else:
            self.start_urls = start_urls

    def parse(self, response):
        print('[parse] url:', response.url)

        # extract all links from page
        all_links = response.xpath('*//a/@href').extract()

        # iterate over links
        for link in all_links:
            print('[+] link:', link)
            #yield scrapy.http.Request(url="http://www.myntra.com" + link, callback=self.print_this_link)
            full_link = response.urljoin(link)
            yield scrapy.http.Request(url=full_link, callback=self.print_this_link)

    def print_this_link(self, response):
        print('[print_this_link] url:', response.url)
        title = response.xpath('//title/text()').get()  # get() will replace extract() in the future
        yield {'url': response.url, 'title': title}


# --- run without creating project and save in `output.csv` ---

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',

    # save in file as CSV, JSON or XML
    'FEED_FORMAT': 'csv',     # csv, json, xml
    'FEED_URI': 'output.csv',
})
c.crawl(MySpider)
c.crawl(MySpider, allowed_domains=["myntra.com"], start_urls=["http://www.myntra.com/"])
c.start()
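If the goal is the original {'url': 'all text found in that url'} mapping rather than just the page title, the print_this_link callback in the script above could be adapted along these lines. This is only a sketch; the XPath used to grab "all text" is an assumption and may need tuning for a real site:

    def print_this_link(self, response):
        print('[print_this_link] url:', response.url)
        # assumption: "all text" means every non-empty text node inside <body>, joined into one string
        all_text = ' '.join(t.strip() for t in response.xpath('//body//text()').getall() if t.strip())
        yield {'url': response.url, 'text': all_text}

With that change, each row in output.csv would hold a URL and the text collected from that page.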