beautifulsoup find_all cannot get div data

Time: 2016-06-28 08:51:52

Tags: python python-3.x beautifulsoup

I am trying to get HTML data from a website, but data_table comes back empty. I tried tracing through the code, and when I try to get the title data it just returns the HTML content instead.

    import requests
    from bs4 import BeautifulSoup
    import html.parser
    from html.parser import HTMLParser
    import time
    from random import randint
    import sys
    from IPython.display import clear_output
    import pymysql

    links = ['https://www.ptt.cc/bbs/Gossiping/index' + str(i+1) + '.html' for i in range(10)]
    data_links = []

    for link in links:
        res = requests.get(link)
        soup = BeautifulSoup(res.text.encode("utf-8"), "html.parser")
        data_table = soup.findAll("div", {"id": "r-ent"})
        print(data_table)

1 Answer:

Answer 0: (score: 1)

When you visit that page in a browser, you have to confirm that you are over 18 before you get the actual content, and that confirmation page is what your request is getting back. You need to send a POST to https://www.ptt.cc/ask/over18 with the data yes=yes and from="/bbs/Gossiping/index{the_number}.html". If you print the returned source, you can see the form:

<form action="/ask/over18" method="post">
    <input type="hidden" name="from" value="/bbs/Gossiping/index1.html">
    <div class="over18-button-container">
        <button class="btn-big" type="submit" name="yes" value="yes">我同意,我已年滿十八歲<br><small>進入</small></button>
    </div>
    <div class="over18-button-container">
        <button class="btn-big" type="submit" name="no" value="no">未滿十八歲或不同意本條款<br><small>離開</small></button>
    </div>
</form>
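A quick way to confirm this is to check the returned source for the over-18 form before parsing; a minimal sketch (it simply looks for the "/ask/over18" form action shown above):

    res = requests.get("https://www.ptt.cc/bbs/Gossiping/index1.html")
    if "/ask/over18" in res.text:
        print("Got the age-confirmation page, not the board index")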

There is also no id r-ent on the page, only divs with that class:

import requests
from bs4 import BeautifulSoup

# Build the real page URLs by substituting the index number.
links = ['https://www.ptt.cc/bbs/Gossiping/index{}.html'.format(i) for i in range(1, 11)]
data_links = []
data = {"yes": "yes"}
head = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"}

for ind, link in enumerate(links, 1):
    with requests.Session() as s:
        # Confirm the over-18 prompt for this page, then fetch it with the same session.
        data["from"] = "/bbs/Gossiping/index{}.html".format(ind)
        s.post("https://www.ptt.cc/ask/over18", data=data, headers=head)
        res = s.get(link, headers=head)
        soup = BeautifulSoup(res.text, "html.parser")
        data_divs = soup.select("div.r-ent")
        print(data_divs)

The code above gives you all the divs with the class r-ent.
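Since the question mentions wanting the title data, here is a minimal sketch of how the r-ent divs could be processed further. It assumes PTT's usual markup, where each r-ent div holds a div.title containing an a tag with the post title and link; adjust the selectors if the markup differs:

    for div in data_divs:
        # Assumed structure: <div class="title"><a href="/bbs/Gossiping/M....html">title</a></div>
        title_link = div.select_one("div.title a")
        if title_link is None:
            continue  # deleted posts have no link inside div.title
        title = title_link.text.strip()
        href = title_link.get("href")
        data_links.append("https://www.ptt.cc" + href)
        print(title, href)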

Posting once with a Session may be enough, since the cookie gets stored, so the code below should also work fine.

links = ['https://www.ptt.cc/bbs/Gossiping/index{}.html'.format(i) for i in range(1, 11)]
data_links = []
data = {"yes": "yes"}
head = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"}

with requests.Session() as s:
    # One POST is enough: the session keeps the over-18 cookie for the later GETs.
    data["from"] = "/bbs/Gossiping/index1.html"
    s.post("https://www.ptt.cc/ask/over18", data=data, headers=head)
    for link in links:
        res = s.get(link, headers=head)
        soup = BeautifulSoup(res.text, "html.parser")
        data_divs = soup.select("div.r-ent")
        print(data_divs)
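The question also imports time and randint, which suggests the asker wants to pause between requests; a small optional sketch of how a random delay could be added to the loop inside the with requests.Session() as s: block above:

    import time
    from random import randint

    for link in links:
        res = s.get(link, headers=head)
        soup = BeautifulSoup(res.text, "html.parser")
        print(soup.select("div.r-ent"))
        time.sleep(randint(1, 3))  # wait 1-3 seconds between page requests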