XPath results differ between Scrapy and the browser console

Date: 2017-09-12 20:14:59

Tags: selenium xpath scrapy web-crawler

I am using Selenium with PhantomJS to collect professors' contact information from a university web page (for non-malicious purposes).

For testing purposes, let's say kw.txt is a file that contains only a single last name:

max

import scrapy
from selenium import webdriver

from universities.items import UniversitiesItem

class iupui(scrapy.Spider):
    name = 'iupui'
    allowed_domains = ['iupui.com']
    start_urls = ['http://iupuijags.com/staff.aspx']

    def __init__(self):
        self.last_name = ''

    def parse(self, response):
        with open('kw.txt') as file_object:
            last_names = file_object.readlines()

        for ln in last_names:
            #driver = webdriver.PhantomJS("C:\\Users\yashi\AppData\Roaming\Python\Python36\Scripts\phantomjs.exe")
            driver = webdriver.Chrome('C:\\Users\yashi\AppData\Local\Programs\Python\Python36\chromedriver.exe')
            driver.set_window_size(1120, 550)
            driver.get('http://iupuijags.com/staff.aspx')

            kw_search = driver.find_element_by_id('ctl00_cplhMainContent_txtSearch')
            search = driver.find_element_by_id('ctl00_cplhMainContent_btnSearch')

            self.last_name = ln.strip()
            kw_search.send_keys(self.last_name)
            search.click()

            item = UniversitiesItem()
            results = response.xpath('//table[@class="default_dgrd staff_dgrd"]//tr[contains(@class,"default_dgrd_item '
                                    'staff_dgrd_item") or contains(@class, "default_dgrd_alt staff_dgrd_alt")]')
            for result in results:
                full_name = result.xpath('./td[@class="staff_dgrd_fullname"]/a/text()').extract_first()
                print(full_name)
                if self.last_name in full_name.split():
                    item['full_name'] = full_name
                    email = result.xpath('./td[@class="staff_dgrd_staff_email"]/a/href').extract_first()
                    if email is not None:
                        item['email'] = email[7:]
                    else:
                        item['email'] = ''
                    item['phone'] = result.xpath('./td[@class="staff_dgrd_staff_phone"]/text()').extract_first()
                yield item
            driver.close()

However, the run always gives me names interleaved with empty scraped items, like this:

2017-09-12 15:27:13 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
Dr. Roderick Perry
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}
Gail Barksdale
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}
John Rasmussen
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}
Jared Chasey
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}
Denise O'Grady
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}
Ed Holdaway
2017-09-12 15:27:13 [scrapy.core.scraper] DEBUG: Scraped from <200 http://iupuijags.com/staff.aspx>
{}

The number of results is the same on every iteration.

When I run the same XPath in the browser console, this is what it looks like: console result

I really can't figure out what the problem is.

1 Answer:

Answer 0 (score: 1):

There are quite a few problems here.

  • You are not using the response produced by your Selenium code. You navigate the page with the driver, but then run your XPath against the original Scrapy response instead of the page source.

  • Next, you yield the item even when no match is found, hence the blank items.

  • Also, you create the item outside the loop when it should be created inside it.

  • The comparison you are doing is case-sensitive. So you check for max, but the result contains Max, and you miss the match.

  • You are also missing the @ in the email href XPath (/a/href should be /a/@href).
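The case-sensitivity point can be checked in isolation. A minimal sketch of the corrected comparison (the helper name `matches_last_name` is illustrative, not part of the original code):

```python
def matches_last_name(full_name, last_name):
    # Compare case-insensitively, token by token, so "max" matches "Max Smith"
    return last_name.lower() in full_name.lower().split()

print(matches_last_name("Dr. Roderick Perry", "perry"))  # True
print(matches_last_name("Dr. Roderick Perry", "max"))    # False
```

Splitting on whitespace before comparing also avoids false positives on substrings (e.g. "max" inside "Maxwell House" would still match the token "Maxwell" only if you used `in` on the raw string).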

Here is the fixed version:

import scrapy
from selenium import webdriver

from universities.items import UniversitiesItem


class iupui(scrapy.Spider):
    name = 'iupui'
    allowed_domains = ['iupui.com']
    start_urls = ['http://iupuijags.com/staff.aspx']

    # def __init__(self):
    #     self.last_name = ''

    def parse(self, response):
        # with open('kw.txt') as file_object:
        #     last_names = file_object.readlines()
        last_names = ["max"]
        for ln in last_names:
            #driver = webdriver.PhantomJS("C:\\Users\yashi\AppData\Roaming\Python\Python36\Scripts\phantomjs.exe")
            driver = webdriver.Chrome()
            driver.set_window_size(1120, 550)
            driver.get('http://iupuijags.com/staff.aspx')

            kw_search = driver.find_element_by_id('ctl00_cplhMainContent_txtSearch')
            search = driver.find_element_by_id('ctl00_cplhMainContent_btnSearch')

            self.last_name = ln.strip()
            kw_search.send_keys(self.last_name)
            search.click()

            res = response.replace(body=driver.page_source)


            results = res.xpath('//table[@class="default_dgrd staff_dgrd"]//tr[contains(@class,"default_dgrd_item '
                                    'staff_dgrd_item") or contains(@class, "default_dgrd_alt staff_dgrd_alt")]')
            for result in results:
                full_name = result.xpath('./td[@class="staff_dgrd_fullname"]/a/text()').extract_first()
                print(full_name)
                if self.last_name.lower() in full_name.lower().split():
                    item = UniversitiesItem()

                    item['full_name'] = full_name
                    email = result.xpath('./td[@class="staff_dgrd_staff_email"]/a/@href').extract_first()
                    if email is not None:
                        item['email'] = email[7:]
                    else:
                        item['email'] = ''
                    item['phone'] = result.xpath('./td[@class="staff_dgrd_staff_phone"]/text()').extract_first()
                    yield item
            driver.close()
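As a usage note, the `email[7:]` slice simply drops the leading `mailto:` scheme (7 characters) from the extracted href. A slightly safer hypothetical variant strips the prefix only when it is actually present:

```python
def strip_mailto(href):
    # "mailto:" is 7 characters; remove it only if the href really starts with it
    prefix = "mailto:"
    return href[len(prefix):] if href.startswith(prefix) else href

print(strip_mailto("mailto:jdoe@iupui.edu"))  # jdoe@iupui.edu
print(strip_mailto("jdoe@iupui.edu"))         # jdoe@iupui.edu
```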