How to submit a web form using RoboBrowser or Requests

Asked: 2017-04-14 04:28:35

Tags: python python-requests robobrowser

I'm not a Python expert, but I've found it useful in many ways. When it comes to submitting forms and retrieving the output, I've hit beginner trouble with RoboBrowser and Requests on two separate occasions. What I want to do: submit travel dates to westjet.com. Using the web inspector I can see the fields "origin-event" and "destination-event"; in my case I need to submit the airport codes YYZ and POP along with the dates. Can someone help me understand these libraries so I know how to accomplish this, with a proper explanation? Thanks.

import requests


headers = {'User-Agent': 'Safari 9.0'}
print("Please enter your intended departure date")
#departure_date = input()

print("Please enter your intended return date")
#return_date = input()


#
# FLIGHTS ONLY
#

westjetF = "https://www.westjet.com/en-ca/index"
airtransatF = "http://www.airtransat.com/en-CA/home"
sunwingF = "http://www.sunwing.ca"
hotwireF = "https://www.hotwire.com"

#
# VACATIONS ONLY
#
westjetV = "https://www.westjet.com/en-ca/index"
airtransatV = "https://www.transat.com/en-CA/?ici=homepage&icn=moteur_forfait&_ga=1.139381168.282228778.1458947467&search=package&origin=YOW"
airtransatV = "http://www.airtransat.com/en-CA/home"  # note: this overrides the URL above
sunwingV = "http://www.sunwing.ca"




# Payload for westjet.com, dates as yyyy-mm-dd
Flight_data = {
    'origin-event': 'POP',
    'destination-event': 'YYZ',
    'depart': '2017-04-22',
    'return': '2017-04-24',
    # 'numAdults': '1',
    # 'numChildren': '0',
    # 'numInfants': '0',
    # 'promoCode': ''
}


with requests.Session() as s:
    execute = s.post(westjetF, data=Flight_data, headers=headers)

print(execute.text)

status = execute.status_code

if status == 200:
    print("Great, the request was processed")
else:
    print("Sorry, that request wasn't processed")
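Before sending anything, it can help to verify exactly what `requests` will put in the POST body. A minimal offline sketch using a prepared request (the payload keys mirror the ones above; no network call is made):

```python
import requests

flight_data = {
    'origin-event': 'POP',
    'destination-event': 'YYZ',
    'depart': '2017-04-22',
    'return': '2017-04-24',
}

# Build the request without sending it, then inspect the encoded form body
req = requests.Request('POST', 'https://www.westjet.com/en-ca/index', data=flight_data)
prepared = req.prepare()
print(prepared.body)  # url-encoded key=value pairs joined by '&'
```

If the printed body doesn't match what the web inspector shows the browser sending, the server will likely ignore or reject the submission.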

1 Answer:

Answer 0 (score: 0)

OK, here is how to POST with requests:

import requests
from bs4 import BeautifulSoup  ## for html parsing ##

url = 'http://www.westjet.com'  ## the page you want to GET or POST ##  

## Now get the input names ( with web inspector ) ##
## and make a dictionary with the inputs and data you wand to post ##
post_data = { 'name1' : 'value1', 'name2' : 'value2' }  ## add more name/value pairs as needed ##

req = requests.post(url, data=post_data)  ## post your data to the page ##
# req = requests.get(url, params=post_data)  ## use this method if you want to GET the page ##

status = req.status_code  ## this is the HTTP response status, eg 200 ##
html = req.text  ## this is the response content ( html ) ##

## Next, you can process your html with BeautifulSoup ##
## or you can skip this part if you don't care about it ##
soup = BeautifulSoup(html, "html.parser") 
table = soup.find('table')  ## This will find the first table , for example ##
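Once `soup.find('table')` returns a table, its rows can be walked with `find_all`. A small self-contained sketch on inline HTML (the markup here is made up for illustration; a real results page would need the real tags found with the inspector):

```python
from bs4 import BeautifulSoup

html = """
<table>
  <tr><th>Flight</th><th>Price</th></tr>
  <tr><td>WS123</td><td>$450</td></tr>
  <tr><td>WS456</td><td>$512</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find('table')

# Collect each row as a list of cell texts (headers and data cells alike)
rows = []
for tr in table.find_all('tr'):
    cells = [cell.get_text() for cell in tr.find_all(['th', 'td'])]
    rows.append(cells)

print(rows)
```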

Of course, this is a very basic example; you can do a lot more with it. I hope this helps a bit.
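The question also asked about RoboBrowser, which the answer above doesn't cover. A hedged sketch of the same flight search with RoboBrowser (the form `id` and the field names here are assumptions; the real ones must be read from the web inspector):

```python
def search_westjet(origin, destination, depart_date, return_date):
    """Fill and submit WestJet's flight-search form with RoboBrowser (sketch)."""
    # Imported lazily so the sketch reads even without the package installed
    from robobrowser import RoboBrowser  # pip install robobrowser

    browser = RoboBrowser(history=True, user_agent='Safari 9.0')
    browser.open('https://www.westjet.com/en-ca/index')

    # 'flight-search' is a hypothetical form id; find the real one with the inspector
    form = browser.get_form(id='flight-search')
    form['origin-event'].value = origin
    form['destination-event'].value = destination
    form['depart'].value = depart_date
    form['return'].value = return_date

    browser.submit_form(form)
    return str(browser.parsed)  # HTML of the results page
```

One caveat either way: sites like this often build the search form with JavaScript, in which case neither requests nor RoboBrowser will see the form in the raw HTML, and you have to post directly to the endpoint the inspector shows.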