Can't get XPath to work with Scrapy and Python

Asked: 2018-10-10 13:17:50

Tags: python mongodb xpath web-scraping scrapy

For the past week I have been working with Python and Scrapy, following this tutorial: https://realpython.com/web-scraping-with-scrapy-and-mongodb/

The goal of the tutorial is to use a Scrapy scraper to collect questions and their URLs from Stack Overflow and store them in a MongoDB database and collection.

I am trying to adapt what is done in the tutorial so that it scrapes multiple items, stores them in multiple collections of the same MongoDB database, and then exports them as CSV. I have figured out most of it: I know how to pipeline to MongoDB, how to store multiple collections, and how to change the collection name based on the name of the item being scraped. What I cannot get working are the spiders themselves, and as far as I can tell the problem is in the XPath expressions that Scrapy uses to find the specified items on the page.

I have no previous experience with Scrapy, and I have spent several days researching how to write the XPath, but I cannot get it to work.

The page I am trying to scrape: https://stackoverflow.com/

The spider for question titles and URLs works as expected:

from scrapy import Spider
from scrapy.selector import Selector

from stack.items import QuestionItem

class QuestionSpider(Spider):
    name = "questions"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="summary"]/h3')

        for question in questions:
            item = QuestionItem()
            item['title'] = question.xpath(
                'a[@class="question-hyperlink"]/text()').extract()[0]
            item['url'] = question.xpath(
                'a[@class="question-hyperlink"]/@href').extract()[0]
            yield item

The spider for the number of answers, votes, and views does not work as expected:

from scrapy import Spider
from scrapy.selector import Selector

from stack.items import PopularityItem

class PopularitySpider(Spider):
    name = "popularity"
    allowed_domains = ["stackoverflow.com"]
    start_urls = [
        "http://stackoverflow.com/questions?pagesize=50&sort=newest",
    ]

    def parse(self, response):
        popularity = Selector(response).xpath('//div[@class="summary"]/h3')

        for poppart in popularity:
            item = PopularityItem()
            item['votes'] = poppart.xpath(
                'div[contains(@class, "votes")]/text()').extract()
            item['answers'] = poppart.xpath(
                'div[contains(@class, "answers")]/text()').extract()
            item['views'] = poppart.xpath(
                'div[contains(@class, "views")]/text()').extract()
            yield item

Finally, there is a third spider with the same problem as the second one.

With the second spider, after launching it with the command below, I get the following output and data stored in my MongoDB database:

scrapy crawl popularity

{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410d"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410e"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9410f"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94110"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94111"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94112"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94113"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94114"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94115"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94116"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94117"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94118"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94119"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411a"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411b"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411c"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411d"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411e"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d9411f"), "votes" : [ ], "answers" : [ ], "views" : [ ] }
{ "_id" : ObjectId("5bbde11cb395bb1dc0d94120"), "votes" : [ ], "answers" : [ ], "views" : [ ] }

As you can see, all the items are empty. The only way I was able to get any output at all was with the XPath:

//div[contains(@class, "views")]/text()

As I understand it, the leading "//" means: all div elements with class="views", anywhere in the document.

This only partially works: I only get output for the views item, and all of that output is stored in a single item row; the next loop iteration then stores all of the output again in the next row. That makes sense, because I am using

//div instead of div

This happens (or so I think) because of the loop: it iterates once per "summary" class element on the page, which is how the scraper knows how many rows to scrape and store. That is done with the XPath and snippet below (already shown above, but repeated here for clarity):

    def parse(self, response):
        popularity = Selector(response).xpath('//div[@class="summary"]/h3')

        for poppart in popularity:

The output given when using

//div

is as follows:

{ "_id" : ObjectId("5bbdf34ab395bb249c3c71c2"), "votes" : [ "\r\n                        ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n           
         ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n           
         ", "\r\n                    ", "\r\n                " ], "answers" : [ ], "views" : [ "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    3 views\r\n", "\r\n    8 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    5 views\r\n", "\r\n    10 views\r\n", "\r\n    5 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    14 views\r\n", "\r\n    2 views\r\n", "\r\n    5 views\r\n", "\r\n    3 views\r\n", "\r\n    5 views\r\n", "\r\n    3 views\r\n", "\r\n    6 views\r\n", "\r\n    7 views\r\n", "\r\n    3 views\r\n", "\r\n    7 views\r\n", "\r\n    5 views\r\n", "\r\n    14 views\r\n", "\r\n    4 views\r\n", "\r\n    12 views\r\n", "\r\n    16 views\r\n", "\r\n    7 views\r\n", "\r\n    7 views\r\n", "\r\n    7 views\r\n", "\r\n    4 views\r\n", "\r\n    4 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    3 views\r\n", "\r\n    3 views\r\n", "\r\n    8 views\r\n", "\r\n    2 views\r\n", "\r\n    10 views\r\n", "\r\n    6 views\r\n", "\r\n    3 views\r\n" ] }
{ "_id" : ObjectId("5bbdf34ab395bb249c3c71c3"), "votes" : [ "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n               
     ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n                    ", "\r\n                    ", "\r\n                ", "\r\n               
     ", "\r\n                    ", "\r\n                " ], "answers" : [ ], "views" : [ "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    3 views\r\n", "\r\n    8 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    5 views\r\n", "\r\n    10 views\r\n", "\r\n    5 views\r\n", "\r\n    2 views\r\n", "\r\n    2 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    14 views\r\n", "\r\n    2 views\r\n", "\r\n    5 views\r\n", "\r\n    3 views\r\n", "\r\n    5 views\r\n", "\r\n    3 views\r\n", "\r\n    6 views\r\n", "\r\n    7 views\r\n", "\r\n    3 views\r\n", "\r\n    7 views\r\n", "\r\n    5 views\r\n", "\r\n    14 views\r\n", "\r\n    4 views\r\n", "\r\n    12 views\r\n", "\r\n    16 views\r\n", "\r\n    7 views\r\n", "\r\n    7 views\r\n", "\r\n    7 views\r\n", "\r\n    4 views\r\n", "\r\n    4 views\r\n", "\r\n    3 views\r\n", "\r\n    2 views\r\n", "\r\n    4 views\r\n", "\r\n    3 views\r\n", "\r\n    3 views\r\n", "\r\n    8 views\r\n", "\r\n    2 views\r\n", "\r\n    10 views\r\n", "\r\n    6 views\r\n", "\r\n    3 views\r\n" ] }

Type "it" for more

I only show two rows here, but it does this for however many rows the for loop specifies.
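The absolute-versus-relative distinction can be reproduced outside Scrapy. Below is a minimal sketch using lxml (the library underneath Scrapy's selectors) against a made-up two-question snippet; the HTML and class names are simplified stand-ins, not the real Stack Overflow markup:

```python
# A leading "//" always searches from the document root, even when the
# expression is evaluated on a sub-element; a leading ".//" searches only
# below the current node.
from lxml import etree

html = """
<div>
  <div class="question-summary"><div class="views">3 views</div></div>
  <div class="question-summary"><div class="views">7 views</div></div>
</div>
"""
root = etree.HTML(html)
summaries = root.xpath('//div[@class="question-summary"]')

for summary in summaries:
    # Absolute: matches every "views" div in the document, on every iteration.
    print(summary.xpath('//div[@class="views"]/text()'))
    # Relative: matches only the "views" div inside this summary.
    print(summary.xpath('.//div[@class="views"]/text()'))
```

The absolute form returns ['3 views', '7 views'] on both iterations, which is exactly the one-row-holds-everything behaviour described above; the relative form returns one value per iteration.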

In summary, I believe I am doing something wrong with my XPath. Any help is appreciated, as I have spent days trying to solve this without success.

I am including my pipelines, settings, and items for completeness.

Settings:

BOT_NAME = 'stack'

SPIDER_MODULES = ['stack.spiders']
NEWSPIDER_MODULE = 'stack.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'stack (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

ITEM_PIPELINES = {'stack.pipelines.MongoDBPipeline': 300}

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "testpop13" 

Items:

import scrapy

from scrapy.item import Item, Field

class QuestionItem(Item):
    title = Field()
    url = Field()

class PopularityItem(Item):
    votes = Field()
    answers = Field()
    views = Field()


class ModifiedItem(Item):
    lastModified = Field()
    modName = Field()

Pipelines:

import pymongo
import logging

class StackPipeline(object):
    def process_item(self, item, spider):
        return item



from scrapy.conf import settings
from scrapy.exceptions import DropItem
from scrapy import log

class MongoDBPipeline(object):


    def __init__(self):
        connection = pymongo.MongoClient(settings['MONGODB_SERVER'],
                                         settings['MONGODB_PORT'])
        self.db = connection[settings['MONGODB_DB']]

    def process_item(self, item, spider):
        collection = self.db[type(item).__name__.lower()]
        logging.info(collection.insert(dict(item)))
        return item
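The collection routing in process_item can be seen in isolation: type(item).__name__.lower() turns the item's class name into the MongoDB collection name. A minimal sketch with stand-in item classes (no MongoDB connection needed):

```python
# Stand-ins for the Scrapy items; plain classes are enough to show the naming.
class QuestionItem(dict):
    pass

class PopularityItem(dict):
    pass

def collection_name(item):
    # The same expression the pipeline uses to pick a collection.
    return type(item).__name__.lower()

print(collection_name(QuestionItem()))    # questionitem
print(collection_name(PopularityItem()))  # popularityitem
```

This is why the questions spider ends up in db.questionitem and the popularity spider in db.popularityitem without any per-spider configuration.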

Finally, here is what correct output from the questions spider looks like:

> db.questionitem.find()
{ "_id" : ObjectId("5bbdfa29b395bb1c74c9721c"), "title" : "Why I can't enforce EditTextPreference to take just numbers?", "url" : "/questions/52741046/why-i-cant-enforce-edittextpreference-to-take-just-numbers" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721d"), "title" : "mysql curdate method query is not giving correct result", "url" : "/questions/52741045/mysql-curdate-method-query-is-not-giving-correct-result" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721e"), "title" : "how to execute FME workbench with parameters in java", "url" : "/questions/52741044/how-to-execute-fme-workbench-with-parameters-in-java" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9721f"), "title" : "create a top 10 list for multiple groups with a ranking in python", "url" : "/questions/52741043/create-a-top-10-list-for-multiple-groups-with-a-ranking-in-python" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97220"), "title" : "Blob binding not working in VS2017 Azure function template", "url" : "/questions/52741041/blob-binding-not-working-in-vs2017-azure-function-template" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97221"), "title" : "How to convert float to vector<unsigned char> in C++?", "url" : "/questions/52741039/how-to-convert-float-to-vectorunsigned-char-in-c" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97222"), "title" : "Nginx serving server and static build", "url" : "/questions/52741038/nginx-serving-server-and-static-build" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97223"), "title" : "Excel Shortout key to format axis bound?", "url" : "/questions/52741031/excel-shortout-key-to-format-axis-bound" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97224"), "title" : "POST successful but the data doesn't appear in the controller", "url" : "/questions/52741029/post-successful-but-the-data-doesnt-appear-in-the-controller" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97225"), "title" : "Node - Nested For loop async behaviour", "url" : "/questions/52741028/node-nested-for-loop-async-behaviour" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97226"), "title" : "KSH Shell script not zipping up files", "url" : "/questions/52741027/ksh-shell-script-not-zipping-up-files" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97227"), "title" : "Property 'replaceReducer' does not exist on type 'Store<State>' After upgrading @ngrx/store", "url" : "/questions/52741023/property-replacereducer-does-not-exist-on-type-storestate-after-upgrading" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97228"), "title" : "passing more than 10 arguments to a shell script within gitlab yaml", "url" : "/questions/52741022/passing-more-than-10-arguments-to-a-shell-script-within-gitlab-yaml" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c97229"), "title" : "Setting an environmental variable in a docker-compose.yml file is the same as setting that variable in a .env file?", "url" : "/questions/52741021/setting-an-environmental-variable-in-a-docker-compose-yml-file-is-the-same-as-se" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722a"), "title" : "Pass list of topics from application yml to KafkaListener", "url" : "/questions/52741016/pass-list-of-topics-from-application-yml-to-kafkalistener" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722b"), "title" : "Copy numbers at the beggining of each line to the end of line", "url" : "/questions/52741015/copy-numbers-at-the-beggining-of-each-line-to-the-end-of-line" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722c"), "title" : "Pretty JSON retrieved from response in GoLang", "url" : "/questions/52741013/pretty-json-retrieved-from-response-in-golang" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722d"), "title" : "Swift: Sorting Core Data child entities based on Date in each parent", "url" : "/questions/52741010/swift-sorting-core-data-child-entities-based-on-date-in-each-parent" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722e"), "title" : "How to create Paypal developer account", "url" : "/questions/52741009/how-to-create-paypal-developer-account" }
{ "_id" : ObjectId("5bbdfa2ab395bb1c74c9722f"), "title" : "output of the program and explain why a and b showing different values", "url" : "/questions/52741008/output-of-the-program-and-explain-why-a-and-b-showing-different-values" }
Type "it" for more

From this output I can save to CSV and everything works fine.
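For reference, once documents like the ones above are read back out of MongoDB, writing them to CSV needs only the standard library. A sketch with two hypothetical rows shaped like the stored questionitem documents (in the real case they would come from a pymongo find()):

```python
import csv
import io

# Hypothetical rows shaped like the stored questionitem documents.
rows = [
    {"title": "How to create Paypal developer account",
     "url": "/questions/52741009/how-to-create-paypal-developer-account"},
    {"title": "Nginx serving server and static build",
     "url": "/questions/52741038/nginx-serving-server-and-static-build"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "url"])
writer.writeheader()   # emits the "title,url" header row
writer.writerows(rows)
print(buf.getvalue())
```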

Apologies for the lengthy post; I wanted it to be as complete as possible. If any other information is needed, please don't hesitate to ask, and I will be keeping a close eye on this question.

Thanks in advance for any help.

1 answer:

Answer 0 (score: 1):

First, you don't need to write

Selector(response).xpath(...)

Instead, you can just write

response.xpath(...)

Second, in PopularitySpider, you need to change the selectors to:

popularity = response.xpath('//div[contains(@class, "question-summary")]')

for poppart in popularity:
    item = PopularityItem()
    # The leading ".//" keeps each expression relative to the current
    # question-summary node instead of searching the whole document.
    item['votes'] = poppart.xpath(
        './/div[contains(@class, "votes")]//span/text()').extract()
    item['answers'] = poppart.xpath(
        './/div[contains(@class, "answered")]//span/text()').extract()
    item['views'] = poppart.xpath(
        './/div[contains(@class, "views")]//span/text()').extract()
    yield item

Hope this solves your problem.

Note: a good way to check whether your selectors work is to use the inspect option in a browser (Chrome, etc.), and an even better option is to test them in the Scrapy shell. You just need to run the following command:

scrapy shell "http://www.example.com"