Login with Python requests and a CSRF token

Time: 2018-10-28 14:15:41

Tags: python python-requests csrf

I'm using Python's requests module to try to log in to a web page. I open a requests.Session(), then get the cookie and the CSRF token, which is contained in a meta tag. I build the payload with my username, password, a hidden input field, and the CSRF token from the meta tag. After that I send a POST request with the login URL, the cookie, the payload, and the headers. But afterwards I cannot access the pages behind the login page. What am I doing wrong?

These are the request headers when I perform the login:

Request Headers:

:authority: www.die-staemme.de
:method: POST
:path: /page/auth
:scheme: https
accept: application/json, text/javascript, */*; q=0.01
accept-encoding: gzip, deflate, br
accept-language: de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7
content-length: 50
content-type: application/x-www-form-urlencoded
cookie: cid=261197879; remember_optout=0; ref=start; 
PHPSESSID=3eb4f503f38bfda1c6f48b8f9036574a
origin: https://www.die-staemme.de
referer: https://www.die-staemme.de/
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36
x-csrf-token: 3c49b84153f91578285e0dc4f22491126c3dfecdabfbf144
x-requested-with: XMLHttpRequest

Here is my code so far:

import requests
from bs4 import BeautifulSoup as bs
import lxml

# Page headers
head = {
    'Content-Type': 'application/x-www-form-urlencoded',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36'
}
# Start Page
url = 'https://www.die-staemme.de/'
# Login URL
login_url = 'https://www.die-staemme.de/page/auth'
# URL behind the login page
url2 = 'https://de159.die-staemme.de/game.php?screen=overview&intro'

# Open up a session
s = requests.session()

# Open the login page
r = s.get(url)

# Get the csrf-token from meta tag
soup = bs(r.text,'lxml')
csrf_token = soup.select_one('meta[name="csrf-token"]')['content']

# Get the page cookie
cookie = r.cookies

# Set CSRF-Token
head['X-CSRF-Token'] = csrf_token
head['X-Requested-With'] = 'XMLHttpRequest'

# Build the login payload
payload = {
    'username': '',  # <-- your username
    'password': '',  # <-- your password
    'remember': '1'
}

# Try to login to the page
r = s.post(login_url, cookies=cookie, data=payload, headers=head)

# Try to get a page behind the login page
r = s.get(url2)

# Check if the login was successful; if so, there has to be an element with the id menu_row2
soup = bs(r.text, 'lxml')
element = soup.select('#menu_row2')
print(element)
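As a side note, the meta-tag extraction step can be checked in isolation without any network access. The sketch below does the same job as the `soup.select_one('meta[name="csrf-token"]')['content']` line, but uses only the standard library's html.parser; the sample HTML and token value are made up for illustration:

```python
from html.parser import HTMLParser

class CsrfMetaParser(HTMLParser):
    """Collects the content attribute of a <meta name="csrf-token"> tag."""
    def __init__(self):
        super().__init__()
        self.csrf_token = None

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        if tag == "meta" and attr_dict.get("name") == "csrf-token":
            self.csrf_token = attr_dict.get("content")

# Hypothetical sample page; the real token comes from the live response.
sample_html = '<html><head><meta name="csrf-token" content="abc123"></head></html>'
parser = CsrfMetaParser()
parser.feed(sample_html)
print(parser.csrf_token)  # abc123
```

If this prints the expected token for a saved copy of the real login page, the parsing step is not the problem and the failure lies elsewhere in the request.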

1 Answer:

Answer 0 (score: 1)

It's worth noting that when you use Python's Requests module, your request will not be exactly identical to a standard user's request. To fully mimic a realistic request, and thus avoid being blocked by any firewall or security measure on the site, you will need to copy all POST parameters, all GET parameters, and all headers.

You can use a tool such as Burp Suite to intercept the login request. Copy the URL it is sent to, copy all POST parameters as well, and finally copy all the headers. You should use requests.Session() to store the cookies. You may also want to make an initial GET request to the home page with that session to pick up the cookies, since it is unrealistic for a user to send a login request without visiting the home page first.

I hope this makes sense. You can pass the header parameters like so:

import requests

headers = {
    'User-Agent': 'My User Agent (copy your real one for a realistic request).'
}

data = {
    'username': 'John',
    'password': 'Doe'
}

s = requests.Session()
s.get("https://mywebsite.com/")
s.post("https://mywebsite.com/", data=data, headers=headers)
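One detail worth stressing: a requests.Session keeps its cookie jar automatically, so any cookie received (or set) on the session is sent with every later request. That means passing `cookies=` manually, as the question's code does, is redundant. A minimal sketch with a made-up cookie name and value:

```python
import requests

s = requests.Session()

# Normally Set-Cookie response headers populate this jar; here we set a
# hypothetical value by hand just to show the jar persists across calls.
s.cookies.set("PHPSESSID", "example-session-id")

# Every subsequent s.get()/s.post() would now send this cookie
# automatically, without a cookies= argument.
print(s.cookies.get("PHPSESSID"))  # example-session-id
```

So in the original code, dropping `cookies=cookie` from the s.post() call changes nothing, because the session already carries the cookies from the earlier s.get(url).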