Unable to scrape a csrf token from a webpage using requests (even though it is available in the page source)

Time: 2021-04-28 12:38:11

Tags: python python-3.x web-scraping python-requests

I'm trying to scrape a csrf token from a website. However, the script I created fails even though the token is available in the page source. Here is the site url.

What I've tried:

import requests
from bs4 import BeautifulSoup

url = 'https://fanniemae.mbs-securities.com/fannie/search?issrSpclSecuType=Super&status=Active'

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
    r = s.get(url)
    soup = BeautifulSoup(r.text,"lxml")
    csrf = soup.select_one("[name='_csrf']").get("content")
    print(csrf)
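
The failure usually means `select_one` returned `None`: when the server responds with HTML that lacks the `_csrf` meta tag, chaining `.get("content")` raises an `AttributeError`. A minimal, self-contained sketch of that failure mode, using made-up HTML snippets rather than the live site:

```python
from bs4 import BeautifulSoup

# Two made-up HTML snippets: one containing the token meta tag, one without,
# mimicking what a server may return depending on the request headers.
html_with_token = '<html><head><meta name="_csrf" content="abc123"></head></html>'
html_without_token = '<html><head><title>blocked</title></head></html>'

def extract_csrf(html):
    # Return the token if the meta tag is present, otherwise None,
    # instead of raising AttributeError on a missing tag.
    tag = BeautifulSoup(html, "html.parser").select_one("[name='_csrf']")
    return tag.get("content") if tag is not None else None

print(extract_csrf(html_with_token))     # → abc123
print(extract_csrf(html_without_token))  # → None
```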

How can I scrape the csrf token from that site using requests?

1 answer:

Answer 0: (score: 0)

The trick here is to include an Accept key and value in the headers to get the desired response. This is how I fetched the tabular content from that site using requests:

import requests
from bs4 import BeautifulSoup

url = 'https://fanniemae.mbs-securities.com/fannie/search?issrSpclSecuType=Super&status=Active'
link = 'https://fanniemae.mbs-securities.com/api/search/fannie'
params = {
    'issrSpclSecuType': 'Super',
    'status': 'Active',
    'page': 1,
    'max_results': 100,
    'sortField': 'cusip',
    'sortAsc': 'true'
}
with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'
    s.headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
    r = s.get(url)
    soup = BeautifulSoup(r.text,"lxml")
    # The _csrf meta tag only appears in the response when the Accept header above is sent
    s.headers['x-csrf-token'] = soup.select_one("[name='_csrf']")["content"]
    s.headers['referer'] = 'https://fanniemae.mbs-securities.com/fannie/search?issrSpclSecuType=Super&status=Active'
    res = s.get(link,params=params)
    for item in res.json():
        print(item['cusip'])
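
Since the API call already takes `page` and `max_results` query parameters, the same authenticated session could be reused to walk through further result pages. A hedged sketch, assuming the endpoint returns an empty JSON list once past the last page (the helper names and the stop condition are assumptions, not verified against the real API):

```python
import requests

link = 'https://fanniemae.mbs-securities.com/api/search/fannie'

def page_params(page, max_results=100):
    # Build the query params from the answer above for a given page number.
    return {
        'issrSpclSecuType': 'Super',
        'status': 'Active',
        'page': page,
        'max_results': max_results,
        'sortField': 'cusip',
        'sortAsc': 'true',
    }

def fetch_all(session, max_pages=50):
    # Walk pages with the already-prepared session (User-Agent, Accept,
    # x-csrf-token, referer headers set) until an empty batch comes back.
    items = []
    for page in range(1, max_pages + 1):
        res = session.get(link, params=page_params(page))
        batch = res.json()
        if not batch:
            break
        items.extend(batch)
    return items
```

`fetch_all(s)` would then replace the single `s.get(link, params=params)` call; the `max_pages` cap is just a safety stop in case the empty-list assumption doesn't hold.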