Can't log in to a website using requests

Asked: 2018-07-05 06:59:49

Tags: python web-scraping beautifulsoup python-requests

I'm trying to log in to the following website, https://archiwum.polityka.pl/sso/loginform, in order to scrape some articles.

Here is my code:

import requests
from bs4 import BeautifulSoup

login_url = 'https://archiwum.polityka.pl/sso/loginform'
base_url = 'http://archiwum.polityka.pl'

payload = {"username" : XXXXX, "password" : XXXXX}
headers = {"User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:61.0) Gecko/20100101 Firefox/61.0"}

with requests.Session() as session:

    # Login...
    request = session.get(login_url, headers=headers)
    post = session.post(login_url, data=payload)

    # Now I want to go to the page with a specific article
    article_url = 'https://archiwum.polityka.pl/art/na-kanapie-siedzi-len,393566.html'
    request_article = session.get(article_url, headers=headers)

    # Scrape its content
    soup = BeautifulSoup(request_article.content, 'html.parser')
    content = soup.find('p', {'class' : 'box_text'}).find_next_sibling().text.strip()

    # And print it.
    print(content)

But my output looks like this:

... [pełna treść dostępna dla abonentów Polityki Cyfrowej]

which in my native language means

... [full content available for subscribers of the Polityka Cyfrowa]

My credentials are correct, because I can access the full content from the browser, but not with requests.

I would appreciate any suggestions on how to do this with requests. Or do I have to use Selenium for this?

1 Answer:

Answer 0: (score: 1)

I can help you with the login part. The rest, I suppose, you can manage yourself. Your payload does not contain all the information necessary to get a valid response. Fill in the username and password fields in the script below and run it. I suppose you will then see the name that appears on the webpage when you are logged in.

import requests
from bs4 import BeautifulSoup

# The login form submits more than just the credentials; the hidden
# login_success / login_error fields need to be sent as well.
payload = {
    'username': 'username here',
    'password': 'your password here',
    'login_success': 'http://archiwum.polityka.pl',
    'login_error': 'http://archiwum.polityka.pl/sso/loginform?return=http%3A%2F%2Farchiwum.polityka.pl'
}

with requests.Session() as session:
    session.headers.update({"User-Agent": "Mozilla/5.0"})
    # Post the credentials to the actual SSO endpoint, not the login form page.
    page = session.post('https://www.polityka.pl/sso/login', data=payload)
    soup = BeautifulSoup(page.text, "lxml")
    # The logged-in page shows your profile name in this element.
    profilename = soup.select_one("#container p span.border").text
    print(profilename)
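
For completeness, here is how the article-scraping step from the question could reuse the authenticated session from this answer. This is a minimal sketch, assuming the login above succeeds and that the question's selector (the p.box_text element and its next sibling) still matches the article body.

import requests
from bs4 import BeautifulSoup

payload = {
    'username': 'username here',
    'password': 'your password here',
    'login_success': 'http://archiwum.polityka.pl',
    'login_error': 'http://archiwum.polityka.pl/sso/loginform?return=http%3A%2F%2Farchiwum.polityka.pl'
}

article_url = 'https://archiwum.polityka.pl/art/na-kanapie-siedzi-len,393566.html'

with requests.Session() as session:
    session.headers.update({"User-Agent": "Mozilla/5.0"})

    # Log in first so the session cookies carry the subscription.
    session.post('https://www.polityka.pl/sso/login', data=payload)

    # Fetch the article with the same (now authenticated) session.
    response = session.get(article_url)
    soup = BeautifulSoup(response.text, 'lxml')

    # Same extraction logic as in the question; if the login worked,
    # this should no longer print only the subscriber notice.
    content = soup.find('p', {'class': 'box_text'}).find_next_sibling().text.strip()
    print(content)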