How to persist the search text after selecting a value in react-select

Time: 2020-07-08 07:25:03

Tags: javascript reactjs ecmascript-6 react-redux

I am using react-select in my project, and I want to keep the search text after selecting a value. Currently, when I search in the dropdown and select a value, the search text is lost. For example, I have a dropdown where I can search the options F-1 to F-10; when I select F-6, the text I typed into the search box disappears. Can you help me keep the search text?

    import React from 'react';
    import Select from 'react-select';

    // Option values were bare identifiers in the original snippet; they
    // must be strings (or otherwise defined) for the code to run.
    const areas = [
      { label: "Newyork", value: "newyork" },
      { label: "F-1", value: "f1" },
      { label: "F-2", value: "f2" },
      { label: "F-3", value: "f3" },
      { label: "F-4", value: "f4" },
      { label: "F-5", value: "f5" },
      { label: "F-6", value: "f6" },
      { label: "F-7", value: "f7" },
      { label: "F-8", value: "f8" },
      { label: "F-9", value: "f9" },
      { label: "F-10", value: "f10" },
    ];

    const App = () => (
      <div className="app">
        <div className="container">
          <Select options={areas} />
        </div>
      </div>
    );

    export default App;

Thanks
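One way to keep the typed text is to control the input yourself. This is a minimal sketch, assuming react-select v2 or later, which exposes the controlled inputValue / onInputChange props; the option values are the placeholder strings from the snippet above.

    import React, { useState } from 'react';
    import Select from 'react-select';

    const areas = [
      { label: "F-1", value: "f1" },
      { label: "F-6", value: "f6" },
      { label: "F-10", value: "f10" },
      // ...remaining options as in the question
    ];

    const App = () => {
      // Hold the search text in state so it survives a selection.
      const [inputValue, setInputValue] = useState('');

      return (
        <div className="app">
          <div className="container">
            <Select
              options={areas}
              inputValue={inputValue}
              onInputChange={(value, { action }) => {
                // react-select also calls onInputChange with an empty string
                // for the 'set-value', 'input-blur' and 'menu-close' actions;
                // accepting only real typing keeps the search text intact.
                if (action === 'input-change') {
                  setInputValue(value);
                }
              }}
            />
          </div>
        </div>
      );
    };

    export default App;

With the resets ignored, the text typed before selecting F-6 stays in the box and keeps filtering the menu; clear inputValue in an onChange handler if it should only persist until the next selection.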

Code

name = "nbtest"

allowed_domains = ['norisbank.de']

start_urls = ['https://www.norisbank.de']



custom_settings={ 'FEED_URI': "norisbank_%(time)s.csv",

                  'FEED_FORMAT': 'csv',

                  'DEPTH_LIMIT': 3}



def parse(self, response):

    page = response.url.split("/")[-2]

    filename = 'nbtest-%s.html' % page

    with open(filename, 'wb') as f:

        f.write(response.body)

    self.log('Saved file %s' % filename)



    #Content Extraction

    print(response.url)

    siteurl = response.xpath("//link[@rel='canonical']/@href")[0].getall()

    dbCategory = response.xpath("//meta[@name='dbCategory']/@content")[0].getall()

    titles = response.css('title::text').extract()

    descriptions = response.xpath("//meta[@name='description']/@content")[0].getall()

    date = response.xpath("//meta[@name='date']/@content")[0].getall()

    version = response.xpath("//meta[@name='v']/@content")[0].getall()

    time = response.xpath("//meta[@name='time']/@content")[0].getall()



    #Sort extracted conten in rows

    for item in zip(siteurl,dbCategory,titles,descriptions,date,version,time):

        scraped_info = {

           'siteurl' : item[0],

            'dbCategory' : item [1],

            'title' : item [2],

            'description' : item [3],

            'date' : item [4],

            'version' : item [5],

            'time' : item [6],

        }



        yield scraped_info



    next_page = response.xpath('//a[contains(@href, "norisbank")]/@href').get()

    if next_page is not None:

        next_page = response.urljoin(next_page)

        yield scrapy.Request(next_page, callback=self.parse)
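A quick way to run this spider outside a generated Scrapy project is scrapy runspider, or from a plain Python script via Scrapy's CrawlerProcess. A minimal sketch, assuming the NbtestSpider class above is defined in the same file:

    # Minimal runner sketch; NbtestSpider is the class name assumed above,
    # since the original snippet omitted the class declaration.
    from scrapy.crawler import CrawlerProcess

    process = CrawlerProcess()
    process.crawl(NbtestSpider)
    process.start()  # blocks until the crawl finishes

The spider's custom_settings (feed export and DEPTH_LIMIT) are applied per spider, so the CSV feed is still written when run this way.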

0 answers:

No answers yet.