Selenium & Scrapy: last URL overwrites the others

Date: 2019-09-22 09:42:00

Tags: selenium scrapy web-crawler

I am currently trying to scrape data from three websites (three different URLs), so I use a text file to load the different URLs into start_urls. At the moment the file contains three URLs. However, the script overwrites the data scraped from the other URLs and only keeps the results of the last one.
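The text-file loading step described above can be sketched in isolation. This is a minimal sketch, not the question's exact code: the helper name `load_start_urls` is mine, while the filename `urls.txt` and the one-URL-per-line format match the question.

```python
def load_start_urls(path="urls.txt"):
    # One URL per line; strip whitespace and skip blank lines so a
    # trailing newline in the file does not produce an empty "URL".
    with open(path, "rt") as f:
        return [line.strip() for line in f if line.strip()]
```

With three URLs in the file, this returns a three-element list, so the loading step itself should not be where entries get lost.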

Here is my code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy import Spider
from selenium import webdriver
from scrapy.selector import Selector
from scrapy.http import Request
from time import sleep
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import re
import csv

class AlltipsSpider(Spider):
    name = 'alltips'
    allowed_domains = ['blogabet.com']

    def start_requests(self):

        # Create the driver once; use a raw string so the backslashes in the
        # Windows path are not interpreted as escape sequences.
        self.driver = webdriver.Chrome(r'C:\webdrivers\chromedriver.exe')
        with open("urls.txt", "rt") as f:
            start_urls = [l.strip() for l in f.readlines()]

        for url in start_urls:
            self.driver.get(url)

            self.driver.find_element_by_id('currentTab').click()
            sleep(3)
            self.logger.info('Sleeping for 3 sec.')
            self.driver.find_element_by_xpath('//*[@id="_blog-menu"]/div[2]/div/div[2]/a[3]').click()
            sleep(7)
            self.logger.info('Sleeping for 7 sec.')
            yield Request(self.driver.current_url, callback=self.crawltips)     

    def crawltips(self, response):
        sel = Selector(text=self.driver.page_source)
        allposts = sel.xpath('//*[@class="block media _feedPick feed-pick"]')

        for post in allposts:
            username = post.xpath('.//div[@class="col-sm-7 col-lg-6 no-padding"]/a/@title').extract()
            publish_date = post.xpath('.//*[@class="bet-age text-muted"]/text()').extract()

            yield {
                'Username': username,
                'Publish date': publish_date,
            }

0 Answers
