BeautifulSoup requests.get() redirects away from the specified URL

Posted: 2019-01-04 03:26:42

Tags: python web-scraping beautifulsoup python-requests

I am calling

requests.get('https://www.pastemagazine.com/search?t=tweets+of+the+week&m=Lists')

like this:

import requests
from bs4 import BeautifulSoup
url = 'https://www.pastemagazine.com/search?t=tweets+of+the+week&m=Lists'
thepage = requests.get(url)
urlsoup = BeautifulSoup(thepage.text, "html.parser")
print(urlsoup.find_all("a", attrs={"class": "large-3 medium-3 cell image"})[0])
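As a sanity check, the query string can also be built from explicit parameters and the prepared URL inspected offline, confirming that the URL itself is well-formed before any request is sent (the parameter names below are taken from the question's URL):

```python
import requests

# Build the search URL from explicit query parameters and inspect the
# exact URL requests would send, without touching the network.
params = {"t": "tweets of the week", "m": "Lists"}
req = requests.Request("GET", "https://www.pastemagazine.com/search", params=params)
print(req.prepare().url)
# → https://www.pastemagazine.com/search?t=tweets+of+the+week&m=Lists
```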

But it keeps scraping the homepage ('https://www.pastemagazine.com') rather than the full URL. I can tell because I expect the print statement to print:

<a class="large-3 medium-3 cell image" href="/articles/2018/12/the-funniest-tweets-of-the-week-109.html" aria-label="">
    <picture data-sizes="[&quot;(min-width: 40em)&quot;,&quot;(min-width: 64em)&quot;]" class="lazyload" data-sources="[&quot;https://cdn.pastemagazine.com/www/opt/120/dogcrp-72x72.jpg&quot;,&quot;https://cdn.pastemagazine.com/www/opt/120/dogcrp-151x151.jpg&quot;,&quot;https://cdn.pastemagazine.com/www/opt/120/dogcrp-151x151.jpg&quot;]">
      <img alt="" />
    </picture>
  </a>

But instead it prints:

<a aria-label='Daily Dose: Michael Chapman feat. Bridget St. John, "After All This Time"' class="large-3 medium-3 cell image" href="/articles/2019/01/daily-dose-michael-chapman-feat-bridget-st-john-af.html"> 
    <picture class="lazyload" data-sizes='["(min-width: 40em)","(min-width: 64em)"]' data-sources='["https://cdn.pastemagazine.com/www/opt/300/MichaelChapman2019_ConstanceMensh_Square-72x72.jpg","https://cdn.pastemagazine.com/www/opt/300/MichaelChapman2019_ConstanceMensh_Square-151x151.jpg","https://cdn.pastemagazine.com/www/opt/300/MichaelChapman2019_ConstanceMensh_Square-151x151.jpg"]'>
      <img alt='Daily Dose: Michael Chapman feat. Bridget St. John, "After All This Time"'/>
    </picture>
  </a>

which corresponds to an element on the homepage, not the specific URL with the search terms that I want to scrape. Why is it redirecting to the homepage, and how can I stop it?

2 answers:

Answer 0 (score: 2)

If you are sure the redirect is the problem, you can set allow_redirects to False to prevent it:

r = requests.get(url, allow_redirects=False)
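To see what allow_redirects changes, here is a self-contained sketch that spins up a throwaway local server which redirects /search to /, standing in for the real site (the server and paths are illustrative, not pastemagazine.com's actual behavior). With redirects followed, r.history records the intermediate 302; with allow_redirects=False, the 302 itself comes back and its Location header shows where the site wanted to send you:

```python
import threading
import requests
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal local server that redirects /search to /, mimicking the
# redirect-to-homepage behavior described in the question.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/search"):
            self.send_response(302)
            self.send_header("Location", "/")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"homepage")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

followed = requests.get(base + "/search?q=x")                        # redirect is followed
blocked = requests.get(base + "/search?q=x", allow_redirects=False)  # redirect is returned as-is

print(followed.url)                      # ends at "/", the "homepage"
print(followed.history[0].status_code)   # the intermediate 302 is kept in .history
print(blocked.status_code, blocked.headers["Location"])

server.shutdown()
```

Checking r.history and r.url this way is a quick diagnostic for whether a scrape is silently landing on a different page than the one you asked for.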

Answer 1 (score: 0)

To get the required URLs linked to the tweets, you can try the following script. It turns out that sending headers along with cookies (kept by the session) resolves the redirect issue.

import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

url = "https://www.pastemagazine.com/search?t=tweets+of+the+week&m=Lists"

with requests.Session() as s:
    res = s.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(res.text, "lxml")
    links = {
        urljoin(url, a.get("href"))
        for a in soup.select("ul.articles a[href*='tweets-of-the-week']")
    }
    for link in links:
        print(link)
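The urljoin call above is what turns the site-relative hrefs (like the one in the expected output in the question) into absolute links. A minimal standalone illustration, using the href from the question:

```python
from urllib.parse import urljoin

# An absolute-path href replaces the base URL's path and query entirely.
base = "https://www.pastemagazine.com/search?t=tweets+of+the+week&m=Lists"
href = "/articles/2018/12/the-funniest-tweets-of-the-week-109.html"
print(urljoin(base, href))
# → https://www.pastemagazine.com/articles/2018/12/the-funniest-tweets-of-the-week-109.html
```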

Alternatively, to make it even easier, upgrade the following libraries:

pip3 install lxml --upgrade
pip3 install beautifulsoup4 --upgrade

Then try:

with requests.Session() as s:
    res = s.get(url, headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(res.text, "lxml")
    for item in soup.select("a.noimage[href*='tweets-of-the-week']"):
        print(urljoin(url, item.get("href")))
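The [href*='tweets-of-the-week'] part of that selector matches any anchor whose href contains the substring, which is how unrelated homepage links get filtered out. A sketch against a tiny inline snippet (the markup below is illustrative, not copied from the live page; html.parser is used so no lxml install is needed):

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for the search-results page.
html = """
<ul class="articles">
  <li><a class="noimage" href="/articles/2018/12/the-funniest-tweets-of-the-week-109.html">tweets</a></li>
  <li><a class="noimage" href="/articles/2019/01/daily-dose-michael-chapman.html">other</a></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

# Substring attribute selector: only hrefs containing 'tweets-of-the-week' match.
links = [a["href"] for a in soup.select("a.noimage[href*='tweets-of-the-week']")]
print(links)
# → ['/articles/2018/12/the-funniest-tweets-of-the-week-109.html']
```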