Python 3.5 BeautifulSoup4: get the text from 'p' inside a div

Time: 2017-05-16 01:11:00

Tags: html python-3.x beautifulsoup python-requests

I'm trying to scrape all of the text from the div with class 'caselawcontent searchable-content'. This code only prints the HTML, not the text from the web page. What am I missing to get the text?

The following link is in the 'filteredcasesdoc.txt' file:
http://caselaw.findlaw.com/mo-court-of-appeals/1021163.html

import requests
from bs4 import BeautifulSoup

with open('filteredcasesdoc.txt', 'r') as openfile1:
    for line in openfile1:
        rulingpage = requests.get(line).text
        soup = BeautifulSoup(rulingpage, 'html.parser')
        doctext = soup.find('div', class_='caselawcontent searchable-content')
        print(doctext)

2 answers:

Answer 0 (score: 4):

from bs4 import BeautifulSoup
import requests

url = 'http://caselaw.findlaw.com/mo-court-of-appeals/1021163.html'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

I added a more reliable .find call (passing the class as a key/value attribute dict):

whole_section = soup.find('div',{'class':'caselawcontent searchable-content'})


the_title = whole_section.center.h2
#e.g. Missouri Court of Appeals, Southern District, Division Two.
second_title = whole_section.center.h3.p
#e.g. STATE of Missouri, Plaintiff-Appellant v....
number_text = whole_section.center.h3.next_sibling.next_sibling
#e.g.
the_date = number_text.next_sibling.next_sibling
#authors
authors = whole_section.center.next_sibling
para = whole_section.findAll('p')[1:]
#Because we don't want the paragraph h3.p.
#We could also do findAll('p', recursive=False), which doesn't pick up children.

Basically, I dissected the whole <center> tag. As for the paragraphs (e.g. the body text, the para variable), you have to loop over them.

print(authors)
# and you can add .text (e.g. print(authors.text)) to get the text without the tag.
# or a simple function that returns only the text
def rettext(something):
    return something.text
#Usage: print(rettext(authors))
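To illustrate the loop over the paragraphs mentioned above, here is a minimal sketch that reuses the para list from the snippet (the empty-text check is an assumption added to skip blank paragraphs):

# Print the text of each body paragraph collected in `para`.
for p in para:
    text = p.get_text(strip=True)
    if text:  # assumption: skip paragraphs that are empty after stripping whitespace
        print(text)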

Answer 1 (score: 2):

Try printing doctext.text. That will strip out all the HTML tags for you.

import requests
from bs4 import BeautifulSoup
cases = []

with open('filteredcasesdoc.txt', 'r') as openfile1: 
    for url in openfile1:
        # GET the HTML page as a string, with HTML tags  
        rulingpage = requests.get(url).text 

        soup = BeautifulSoup(rulingpage, 'html.parser') 
        # find the part of the HTML page we want, as an HTML element
        doctext = soup.find('div', class_='caselawcontent searchable-content')
        print(doctext.text) # now we have just the text, without the HTML tags
        cases.append(doctext.text) # do something useful with this !
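As a variation on the code above, a minimal sketch that strips the trailing newline from each line of the file and uses get_text() with a separator for cleaner output (the url.strip() call and the None check are assumptions, not part of the original answer):

import requests
from bs4 import BeautifulSoup

cases = []
with open('filteredcasesdoc.txt', 'r') as openfile1:
    for line in openfile1:
        url = line.strip()  # drop the trailing newline read from the file
        if not url:
            continue  # skip blank lines
        soup = BeautifulSoup(requests.get(url).text, 'html.parser')
        doctext = soup.find('div', class_='caselawcontent searchable-content')
        if doctext is None:
            continue  # the target div may be missing on some pages
        # separator='\n' keeps paragraphs apart; strip=True trims surrounding whitespace
        cases.append(doctext.get_text(separator='\n', strip=True))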