I created a Scrapy project, and the data I need is being scraped.
The problem is that the scraped data contains a lot of unwanted material, such as JavaScript functions and other HTML markup. How do I get rid of it and keep only the data I want?
My testSpider.py code:
    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import Selector
    from testing.items import testingItem

    class TestSpider(CrawlSpider):
        name = 'testspider'
        session_id = -1
        start_urls = ["https://www.wikipedia.org/"]
        rules = (
            Rule(SgmlLinkExtractor(allow=("",)), callback="parse_items", follow=True),
        )

        def __init__(self, session_id=-1, *args, **kwargs):
            super(TestSpider, self).__init__(*args, **kwargs)
            self.session_id = session_id

        def parse_items(self, response):
            sel = Selector(response)
            items = []
            item = testingItem()
            item["session_id"] = self.session_id
            item["depth"] = response.meta["depth"]
            # item["current_url"] = response.url
            # referring_url = response.request.headers.get('Referer', None)
            # item["referring_url"] = referring_url
            item["title"] = sel.xpath('//title/text()').extract()
            item["content"] = sel.xpath('content/text()').extract()
            items.append(item)
            return items
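Much of the JavaScript noise can also be avoided at extraction time by skipping text inside script and style nodes. As a rough, framework-independent sketch using only the standard library (the TextExtractor and visible_text names and the sample HTML are illustrative, not part of the spider above):

    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collects text nodes, skipping the contents of <script> and <style>."""
        def __init__(self):
            super().__init__()
            self._skip = 0
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip += 1

        def handle_endtag(self, tag):
            if tag in ("script", "style") and self._skip:
                self._skip -= 1

        def handle_data(self, data):
            if not self._skip and data.strip():
                self.chunks.append(data.strip())

    def visible_text(html):
        # Return only the human-visible text of an HTML fragment
        parser = TextExtractor()
        parser.feed(html)
        return " ".join(parser.chunks)

The same filtering can be expressed directly in XPath inside a Scrapy selector, e.g. //body//text()[not(ancestor::script)], if you prefer to keep it in the spider.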
My items.py:
    from scrapy.item import Item, Field

    class testingItem(Item):
        session_id = Field()
        depth = Field()
        current_url = Field()
        referring_url = Field()
        title = Field()
        content = Field()
Answer 0 (score: 0)
If you cannot extract exactly what you need with selectors alone, create a function or class in a separate module that cleans the data for you, and call it from your parse function. For example:
utils.py
    import re

    class Cleaner(object):
        @staticmethod
        def clean_html_tags(data):
            # Strip any remaining HTML tags from each extracted string
            return [re.sub(r'<[^>]+>', '', text) for text in data]

        @staticmethod
        def clean_empty_space(data):
            # Collapse runs of whitespace and drop empty strings
            return [' '.join(text.split()) for text in data if text.strip()]
Then in your parse function you can use something like:
    from spider.utils import Cleaner

    ...

    def parse(self, response):
        ...
        item['something'] = Cleaner.clean_html_tags(
            selector.xpath("//div[@class='myclass']/div/text()").extract())
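A more idiomatic place for this kind of cleanup in Scrapy is an item pipeline, so every scraped item passes through it automatically. A minimal sketch, assuming the title and content fields hold lists of strings as in the question's items.py (the CleaningPipeline name and the regex-based tag stripping are illustrative assumptions):

    import re

    class CleaningPipeline(object):
        # Item fields to clean (names taken from the question's items.py)
        FIELDS = ("title", "content")

        def process_item(self, item, spider):
            for field in self.FIELDS:
                values = item.get(field) or []
                # Strip tags, then collapse whitespace and drop empty strings
                stripped = [re.sub(r"<[^>]+>", "", v) for v in values]
                item[field] = [" ".join(s.split()) for s in stripped if s.strip()]
            return item

Enable the pipeline by registering the class in the ITEM_PIPELINES setting of your project's settings.py.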