Scraping a website with Python 3.6: I can't get past the login page

Time: 2017-01-24 11:05:52

Tags: python web-scraping

The site's HTML form code:

                <form class="m-t" role="form" method="POST" action="">

                <div class="form-group text-left">
                    <label for="username">Username:</label>
                    <input type="text" class="form-control" id="username" name="username" placeholder="" autocomplete="off" required />
                </div>
                <div class="form-group text-left">
                    <label for="password">Password:</label>
                    <input type="password" class="form-control" id="pass" name="pass" placeholder="" autocomplete="off" required />
                </div>

                <input type="hidden" name="token" value="/bGbw4NKFT+Yk11t1bgXYg48G68oUeXcb9N4rQ6cEzE=">
                <button type="submit" name="submit" class="btn btn-primary block full-width m-b">Login</button>
                </form>

Simple enough so far. I've scraped plenty of sites before.

I've tried: Selenium, mechanize (though that meant falling back to an earlier version of Python), MechanicalSoup, and requests.

I've read multiple posts on SO, plus https://kazuar.github.io/scraping-tutorial/ and http://docs.python-requests.org/en/latest/user/advanced/#session-objects, and many more.

Sample code:

import requests
from lxml import html

# url = login page URL, url3 = page to scrape; username/password defined elsewhere
session_requests = requests.session()

# fetch the login page and pull out the hidden anti-CSRF token
result = session_requests.get(url)
tree = html.fromstring(result.text)
authenticity_token = tree.xpath("//input[@name='token']/@value")[0]

# build the POST body; the token must be echoed back with the credentials
# (field names come from the form: 'username' and 'pass')
payload = {
    'username': username,
    'pass': password,
    'token': authenticity_token,
}

result = session_requests.post(
    url,
    data=payload,
    headers=dict(referer=url),
)
result = session_requests.get(url3)
print(result.text)
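The token-extraction step can be checked offline against the form HTML from the question, with no network involved; the xpath is the same one used above, and the credentials below are placeholders:

```python
from lxml import html

# the form from the question, reused here as a static fixture
form_html = """
<form class="m-t" role="form" method="POST" action="">
    <input type="text" name="username" />
    <input type="password" name="pass" />
    <input type="hidden" name="token" value="/bGbw4NKFT+Yk11t1bgXYg48G68oUeXcb9N4rQ6cEzE=">
    <button type="submit" name="submit">Login</button>
</form>
"""

tree = html.fromstring(form_html)
token = tree.xpath("//input[@name='token']/@value")[0]

# the POST body must echo the token back, and the field names must match
# the form exactly: 'username' and 'pass' (not 'password')
payload = {
    "username": "myuser",      # placeholder credentials
    "pass": "mypassword",
    "token": token,
}
print(token)
```

If the token the server issued at GET time doesn't come back in the POST, many sites will silently re-serve the login page, which matches the symptom described.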

import mechanicalsoup
import requests
from http import cookiejar

# attach an explicit cookie jar to the session so cookies persist across requests
c = cookiejar.CookieJar()
s = requests.Session()
s.cookies = c
browser = mechanicalsoup.Browser(session=s)

login_page = browser.get(url)

# locate the login form and fill in the two fields by name
login_form = login_page.soup.find('form', {'method': 'POST'})
login_form.find('input', {'name': 'username'})['value'] = username
login_form.find('input', {'name': 'pass'})['value'] = password

response = browser.submit(login_form, login_page.url)

Try as I might, I can't get back anything other than the HTML of the login page, and I don't know where to look next to figure out what isn't happening and why.

url = a variable holding the login-page URL; url3 = the page I want to scrape.

Any help would be much appreciated!

2 Answers:

Answer 0 (score: 1)

Have you tried setting headers?

First make the request in a browser and observe which headers it sends, then send those same headers with your request. Headers are an important part of how a server identifies a user or client.
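A minimal sketch of that advice: copy the headers a real browser sends (visible in the browser's developer tools, Network tab) into a dict and attach them to a requests session. The header values below are illustrative, not the ones this particular site requires:

```python
import requests

# illustrative browser-like headers; copy the real values from your own
# browser's Network tab for the site in question
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
}

session = requests.Session()
session.headers.update(headers)  # now sent on every request from this session
```

Updating the session's headers once is usually cleaner than passing `headers=` to every individual call.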

Also try from a different IP; someone may be monitoring the requesting IP.
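If IP-based blocking is the suspicion, requests can route through a proxy via its `proxies` parameter. The proxy address below is a placeholder, not a working proxy:

```python
import requests

# placeholder proxy address; substitute an HTTP/HTTPS proxy you control
proxies = {
    "http": "http://203.0.113.5:8080",
    "https": "http://203.0.113.5:8080",
}

# requests.get(url, proxies=proxies) would then originate from the proxy's IP
print(sorted(proxies))
```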

Try this example. I'm using Selenium with the Chrome driver here. First I get the cookies from Selenium and save them to a file for later use, then I use requests with the saved cookies to access the page that requires login.

from selenium import webdriver
import os
import requests
import demjson

# download chromedriver from the location below, put it somewhere
# accessible, and set the path
# url to download the Chrome driver - https://chromedriver.storage.googleapis.com/index.html?path=2.27/
chrompathforselenium = "/path/chromedriver"

os.environ["webdriver.chrome.driver"] = chrompathforselenium
driver = webdriver.Chrome(executable_path=chrompathforselenium)
driver.set_window_size(1120, 550)

driver.get(url1)

driver.find_element_by_name("username").send_keys(username)
driver.find_element_by_name("pass").send_keys(password)

# you need to find how to access button on the basis of class attribute
# here I am doing on the basis of ID
driver.find_element_by_id("btnid").click()

# set your accessible cookiepath here.
cookiepath = ""

cookies=driver.get_cookies()
getCookies=open(cookiepath, "w+")
getCookies.write(demjson.encode(cookies))
getCookies.close()

readCookie = open(cookiepath, 'r')
cookieString = readCookie.read()
cookie = demjson.decode(cookieString)

headers = {}
# write all the headers
headers.update({"key":"value"})

response = requests.get(url3, headers=headers, cookies=cookie)
# check your response
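The demjson round-trip above can also be done with the standard library's json module, since `driver.get_cookies()` returns a plain list of dicts. Sketched here with a fake cookie list standing in for a live driver:

```python
import json

# stand-in for driver.get_cookies(): Selenium returns a list of dicts
cookies = [{"name": "sessionid", "value": "abc123", "domain": "example.com"}]

# persist to disk for later use
with open("cookies.json", "w") as f:
    json.dump(cookies, f)

# read them back
with open("cookies.json") as f:
    restored = json.load(f)

# requests accepts a plain {name: value} mapping for its cookies= argument
cookie_dict = {c["name"]: c["value"] for c in restored}
print(cookie_dict)  # → {'sessionid': 'abc123'}
```

This avoids the extra demjson dependency entirely.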

Answer 1 (score: 1)

Here's the code that finally worked:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
import demjson
import requests
capabilities = DesiredCapabilities.FIREFOX.copy()
import os
os.chdir('C:\\...') #chdir to the dir with geckodriver.exe in it
driver = webdriver.Firefox(capabilities=capabilities, firefox_binary='C:\\Program Files\\Mozilla Firefox\\firefox.exe')
username = '...'
password = '...'
url = 'https://.../login.php' #login url
url2 = '...' #1st page you want to scrape

driver.get(url)
driver.find_element_by_name("usr").send_keys(username)
driver.find_element_by_name("pwd").send_keys(password)

driver.find_element_by_name("btn_id").click()

s = requests.session()
for cookie in driver.get_cookies():
    c = {cookie['name']: cookie['value']}
    s.cookies.update(c)


response = s.get(url2)
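One way to confirm the session actually carried the login, rather than silently landing back on the login form, is a marker check on the returned HTML. The marker strings here are assumptions; pick something that only appears when logged in, such as a logout link:

```python
def looks_logged_in(page_html: str) -> bool:
    """Heuristic check: a logged-in page usually contains a logout link,
    while a bounced-back login page still contains the hidden token field."""
    lowered = page_html.lower()
    return "logout" in lowered and 'name="token"' not in lowered

# after response = s.get(url2), something like:
# assert looks_logged_in(response.text), "still on the login page"
print(looks_logged_in('<a href="/logout.php">Logout</a>'))  # → True
```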