I think I have a solution, but of course some websites have a different structure and it doesn't work there. I need to know how to get rid of all the JavaScript, jQuery, and whatever other code may be in a website's source, so that only the plain text remains.
I tried this solution (Scraping text without javascript code using scrapy) in MySpider.py and items.py. I don't know why it doesn't work with remove_tags_with_content, but it doesn't. The files currently look like this:

MySpider.py:
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule
# from scrapy.utils.markup import remove_tags_with_content
from Scrapy_One.items import Items_Main

class MySpider(CrawlSpider):
    name = 'spiderName'
    allowed_domains = ['abb.de']
    start_urls = ['http://www.abb.de/']
    rules = (
        Rule(LinkExtractor(
                allow=('',),
                deny=(r'/(\w|\W)*([Ii]mpressum|[Aa]bout|[Pp]rivacy|[Tt]erms|[Cc]opyright|[Hh]elp|[Hh]ilfe|[Dd]atenschutz|[Rr]echtliche(\w|\W)*[Hh]inweis|[Hh]aftungsausschlu)',),
                unique=True),
             callback='parse_stuff',
             follow=True),
    )

    def parse_stuff(self, response):
        hxs = Selector(response)
        sites = hxs.xpath('//html')
        items_main = []
        for site in sites:
            loader = ItemLoader(item=Items_Main(), response=response)
            loader.add_xpath('fragment', '//body//text()')
            items_main.append(loader.load_item())
        return items_main
items.py:

from scrapy.item import Item, Field
from scrapy.loader.processors import MapCompose, Join, TakeFirst
# from scrapy.utils.markup import remove_tags_with_content
from w3lib.html import replace_escape_chars, remove_tags

class Items_Main(Item):
    fragment = Field(
        input_processor=MapCompose(lambda v: v.strip(), remove_tags, replace_escape_chars),
        output_processor=Join(),
    )
I know this doesn't do what I want (remove all the JavaScript, jQuery, etc. code), but it is the current state I have to work from. So if you have any suggestions on how to get rid of it, I'd like to try them.
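For what it's worth, the commented-out import may hint at one cause: in current Scrapy/w3lib versions, remove_tags_with_content lives in w3lib.html, not scrapy.utils.markup. A minimal sketch of what it does, assuming w3lib is installed (the HTML string is only an illustration):

from w3lib.html import remove_tags, remove_tags_with_content

html = "<body><p>keep me</p><script>var x = 1;</script></body>"
# remove_tags_with_content drops the listed tags *and* everything inside
# them, while plain remove_tags only strips the tag markup itself and
# would leave the script body behind as text.
no_scripts = remove_tags_with_content(html, which_ones=('script',))
print(remove_tags(no_scripts))  # -> 'keep me'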
Answer 0 (score 0):
I think I found an answer (at least one that works for me).
In MySpider.py, I changed the line
loader.add_xpath('fragment', '//body//text()')
to
loader.add_xpath('fragment', '//*[not(self::script)]/text()')
So the complete code of that file is now:
from scrapy.linkextractors import LinkExtractor
from scrapy.loader import ItemLoader
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule
from Scrapy_One.items import Items_Main

class MySpider(CrawlSpider):
    name = 'spiderName'
    allowed_domains = ['example.de']
    start_urls = ['http://www.example.de/']
    rules = (
        Rule(LinkExtractor(
                allow=('',),
                deny=(r'/(\w|\W)*([Ii]mpressum|[Aa]bout|[Pp]rivacy|[Tt]erms|[Cc]opyright|[Hh]elp|[Hh]ilfe|[Dd]atenschutz|[KkCc]onta[kc]t|[Rr]echtliche(\w|\W)*[Hh]inweis|[Hh]aftungsausschlu)',),
                unique=True),
             callback='parse_stuff',
             follow=True),
    )

    def parse_stuff(self, response):
        hxs = Selector(response)
        sites = hxs.xpath('//body')
        items_main = []
        for site in sites:
            loader = ItemLoader(item=Items_Main(), response=response)
            loader.add_xpath('fragment', '//*[not(self::script)]/text()')
            items_main.append(loader.load_item())
        return items_main
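One caveat: //*[not(self::script)]/text() still picks up the contents of <style> blocks, since CSS rules are ordinary text nodes too. A small self-contained check with Scrapy's Selector, extending the expression to exclude <style> as well (the HTML snippet is only an illustration):

from scrapy.selector import Selector

html = """
<html><body>
  <p>Visible text</p>
  <script>var hidden = 1;</script>
  <style>body { color: red; }</style>
</body></html>
"""

sel = Selector(text=html)
# Excluding both script and style elements leaves only human-readable text.
texts = sel.xpath('//*[not(self::script or self::style)]/text()').extract()
print([t.strip() for t in texts if t.strip()])  # -> ['Visible text']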