I have the spider below, which is more or less just supposed to POST to a form. I can't seem to get it to work: when I do it through Scrapy, the response never shows up. Can someone tell me what is going wrong here?
Here is my spider code:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

import scrapy
from scrapy.http import FormRequest
from scrapy.shell import inspect_response


class RajasthanSpider(scrapy.Spider):
    name = "rajasthan"
    allowed_domains = ["rajtax.gov.in"]
    start_urls = (
        'http://www.rajtax.gov.in/',
    )

    def parse(self, response):
        return FormRequest.from_response(
            response,
            formname='rightMenuForm',
            formdata={'dispatch': 'dealerSearch'},
            callback=self.dealer_search_page)

    def dealer_search_page(self, response):
        yield FormRequest.from_response(
            response,
            formname='dealerSearchForm',
            formdata={
                "zone": "select",
                "dealertype": "VAT",
                "dealerSearchBy": "dealername",
                "name": "ana"
            },
            callback=self.process)

    def process(self, response):
        inspect_response(response, self)
When I replace my dealer_search_page() with a Splash version:
def dealer_search_page(self, response):
    yield FormRequest.from_response(
        response,
        formname='dealerSearchForm',
        formdata={
            "zone": "select",
            "dealertype": "VAT",
            "dealerSearchBy": "dealername",
            "name": "ana"
        },
        callback=self.process,
        meta={
            'splash': {
                'endpoint': 'render.html',
                'args': {'wait': 0.5}
            }
        })
I get the following warning:
2016-03-14 15:01:29 [scrapy] WARNING: Currently only GET requests are supported by SplashMiddleware; <POST http://rajtax.gov.in:80/vatweb/dealerSearch.do> will be handled without Splash
The program exits before reaching the inspect_response() call in my process() callback. The warning says that Splash does not support POST requests yet.
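As a side note on exactly what SplashMiddleware is refusing to render here: the body of that POST request, which FormRequest builds from the formdata dict, can be reproduced with the standard library alone. This is a simplified sketch; the real FormRequest.from_response() would also merge in any hidden &lt;input&gt; fields found in the page's form.

```python
from urllib.parse import urlencode

# the same search parameters passed to FormRequest.from_response()
formdata = {
    "zone": "select",
    "dealertype": "VAT",
    "dealerSearchBy": "dealername",
    "name": "ana",
}

# roughly the urlencoded body of the POST that the middleware downgrades;
# hidden form fields from the actual page are omitted in this sketch
body = urlencode(formdata)
print(body)
```

Because this data travels in a POST body rather than in the URL, a GET-only rendering middleware cannot simply re-issue the request.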
Will Splash work for this use case, or should I use Selenium instead?
Answer 0 (score: 1)
You can handle this with selenium. Here is a complete working example, where we submit the form with the same search parameters as in your Scrapy code and print the results to the console:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("http://www.rajtax.gov.in/")

# accept the alert
driver.switch_to.alert.accept()

# open "Search for Dealers"
wait = WebDriverWait(driver, 10)
search_for_dealers = wait.until(EC.visibility_of_element_located((By.PARTIAL_LINK_TEXT, "Search for Dealers")))
search_for_dealers.click()

# set search parameters
dealer_type = Select(driver.find_element_by_name("dealertype"))
dealer_type.select_by_visible_text("VAT")

search_by = Select(driver.find_element_by_name("dealerSearchBy"))
search_by.select_by_visible_text("Dealer Name")

search_criteria = driver.find_element_by_name("name")
search_criteria.send_keys("ana")

# search
driver.find_element_by_css_selector("table.vattabl input.submit").click()

# wait for and print results
table = wait.until(EC.visibility_of_element_located((By.XPATH, "//table[@class='pagebody']/following-sibling::table")))
for row in table.find_elements_by_css_selector("tr")[1:]:  # skipping the header row
    print(row.find_elements_by_tag_name("td")[1].text)
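Independently of selenium, the column-extraction logic in that final loop (take the second &lt;td&gt; of each row, skipping the header) can be illustrated with the standard library's html.parser. The table markup below is a hypothetical stand-in for the real results page; the TINs are taken from the output shown next, and the dealer names are invented for the sketch.

```python
from html.parser import HTMLParser

class TinExtractor(HTMLParser):
    """Collect the text of the second <td> in every <tr>."""

    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cell_index = -1
        self.tins = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.cell_index = -1       # new row: restart the cell counter
        elif tag == "td":
            self.in_td = True
            self.cell_index += 1

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

    def handle_data(self, data):
        # keep only non-empty text from the second cell of each row
        if self.in_td and self.cell_index == 1 and data.strip():
            self.tins.append(data.strip())

# hypothetical fragment mimicking the dealer search results table
html = """
<table class="pagebody"></table>
<table>
  <tr><td>S.No</td><td>TIN</td><td>Dealer</td></tr>
  <tr><td>1</td><td>08502557052</td><td>ANA TRADERS</td></tr>
  <tr><td>2</td><td>08451314461</td><td>ANAND CO</td></tr>
</table>
"""

parser = TinExtractor()
parser.feed(html)
print(parser.tins[1:])  # skip the header cell, like the [1:] slice above
# → ['08502557052', '08451314461']
```

The same "skip the header row, keep one column" pattern is what the selenium loop does against the live DOM.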
It prints the TIN numbers from the search results table:
08502557052
08451314461
...
08734200736
Note that the browser automated by selenium can be headless (PhantomJS) or a regular browser running on a virtual display.
Answer to the original question (before the edit):
What I see on the dealer search page is that the form and its fields are built by a bunch of JavaScript executed in the browser. Scrapy cannot execute JavaScript, so you need to help it with that part. In this case I'm pretty sure Scrapy + Splash would be enough and you would not need to get into browser automation. Here is a working example using Scrapy and Splash: