Beautiful Soup - extracting data that is contained only in td tags (no div, id, class, etc.)

Date: 2020-04-30 06:23:41

Tags: python beautifulsoup

I'm new to Beautiful Soup. I have data like the following, which (in this case) contains three sets of user data.

I want to get all of the following information for each USER_ID and save it to a database:

  • USER_ID
  • Title
  • Content
  • PID (not every user has this row)
  • Date
  • URL
<table align="center" border="0" style="width:550px">
    <tbody>
        <tr>
            <td colspan="2">USER_ID 11111</td>
        </tr>
        <tr>
            <td colspan="2">string_a</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: aaa</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 22222</td>
        </tr>
        <tr>
            <td colspan="2">string_b</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: bbb</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 33333</td>
        </tr>
        <tr>
            <td colspan="2">string_c</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: ccc</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>PID:</strong><strong>ABCDE</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://ccc.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
    </tbody>
</table>

My problem is that all of the data sits in bare td tags, with no div, class, or other distinguishing parent markup, so I can't split it into the three sets of data.

I tried the code below. It finds all of the USER_ID cells, but I don't know how to get the rest of the data for each USER_ID:

from bs4 import BeautifulSoup
import re

soup = BeautifulSoup(content, 'html.parser')
p = soup.find_all('td', text=re.compile("^USER_ID"))
for item in p:
    title = item.find_next_siblings('td')  # <--- returns an empty list
    ...

I am using:
python 3.6
django 2.0.2

3 Answers:

Answer 0 (score: 2)

from bs4 import BeautifulSoup
from more_itertools import split_when

data = """<table align="center" border="0" style="width:550px">
    <tbody>
        <tr>
            <td colspan="2">USER_ID 11111</td>
        </tr>
        <tr>
            <td colspan="2">string_a</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: aaa</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 22222</td>
        </tr>
        <tr>
            <td colspan="2">string_b</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: bbb</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 33333</td>
        </tr>
        <tr>
            <td colspan="2">string_c</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: ccc</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>PID:</strong><strong>ABCDE</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://ccc.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
    </tbody>
</table>"""

soup = BeautifulSoup(data, 'html.parser')

target = soup.find("table", align="center")

# collect the text of every non-blank td (the &nbsp; spacer cells strip to '')
goal = [item.text for item in target.select("td") if item.text.strip() != '']


# split_when (from the third-party more-itertools package) splits the flat list into
# one chunk per user: a new chunk starts whenever the next item begins with "USER"
final = list(split_when(goal, lambda _, y: y.startswith("USER")))

print(final)  # list of lists

for x in final:  # or loop
    print(x)

Output:

[['USER_ID 11111', 'string_a', 'content: aaa', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com'], ['USER_ID 22222', 'string_b', 'content: bbb', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com'], ['USER_ID 33333', 'string_c', 'content: ccc', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'PID:ABCDE', 'URL:https://ccc.com']]

And the loop prints:

['USER_ID 11111', 'string_a', 'content: aaa', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com']
['USER_ID 22222', 'string_b', 'content: bbb', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com']
['USER_ID 33333', 'string_c', 'content: ccc', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'PID:ABCDE', 'URL:https://ccc.com']
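
The question also asks for the individual fields (USER_ID, title, content, the optional PID, date, URL). As a minimal follow-up sketch, which is not part of the original answer, each grouped list could be mapped onto a dict; the helper name parse_group and the dict keys are my own choices:

def parse_group(group):
    # group[0] is "USER_ID nnnnn" and group[1] is the title row (e.g. "string_a");
    # every later entry is a "key:value" string such as "content: aaa" or "URL:https://aaa.com"
    record = {
        'user_id': group[0].replace('USER_ID', '').strip(),
        'title': group[1],
        'pid': None,  # not every user has a PID row
    }
    for entry in group[2:]:
        key, _, value = entry.partition(':')
        record[key.strip().lower()] = value.strip()
    return record

for group in final:
    print(parse_group(group))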

Answer 1 (score: 1)

Try the code below: it uses find_all_next('td') to collect the cells that follow each USER_ID and breaks as soon as the next USER_ID appears, which separates the datasets.

import re
from bs4 import BeautifulSoup

html='''<table align="center" border="0" style="width:550px">
    <tbody>
        <tr>
            <td colspan="2">USER_ID 11111</td>
        </tr>
        <tr>
            <td colspan="2">string_a</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: aaa</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 22222</td>
        </tr>
        <tr>
            <td colspan="2">string_b</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: bbb</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://aaa.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">USER_ID 33333</td>
        </tr>
        <tr>
            <td colspan="2">string_c</td>
        </tr>
        <tr>
            <td colspan="2"><strong>content: ccc</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td>
        </tr>
        <tr>
            <td colspan="2"><strong>PID:</strong><strong>ABCDE</strong></td>
        </tr>
        <tr>
            <td colspan="2"><strong>URL:https://ccc.com</strong></td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
        <tr>
            <td colspan="2">&nbsp;</td>
        </tr>
    </tbody>
</table>'''

soup=BeautifulSoup(html,'html.parser')

final_list=[]
for item in soup.find_all('td',text=re.compile("USER_ID")):
    row_list=[]
    row_list.append(item.text.strip())
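    # walk every td that appears after this USER_ID cell, in document order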
    siblings=item.find_all_next('td')
    for sibling in siblings:
        if "USER_ID" in sibling.text:
            break
        else:
            if sibling.text.strip()!='':
               row_list.append(sibling.text.strip())
    final_list.append(row_list)

print(final_list)

Output:

[['USER_ID 11111', 'string_a', 'content: aaa', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com'], ['USER_ID 22222', 'string_b', 'content: bbb', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com'], ['USER_ID 33333', 'string_c', 'content: ccc', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'PID:ABCDE', 'URL:https://ccc.com']]

If you want to print each list, try this:

soup=BeautifulSoup(html,'html.parser')

for item in soup.find_all('td',text=re.compile("USER_ID")):
    row_list=[]
    row_list.append(item.text.strip())
    siblings=item.find_all_next('td')
    for sibling in siblings:
        if "USER_ID" in sibling.text:
            break
        else:
            if sibling.text.strip()!='':
               row_list.append(sibling.text.strip())
    print(row_list)

Output:

['USER_ID 11111', 'string_a', 'content: aaa', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com']
['USER_ID 22222', 'string_b', 'content: bbb', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'URL:https://aaa.com']
['USER_ID 33333', 'string_c', 'content: ccc', 'date:2020-05-01 00:00:00 To 2020-05-03 23:59:59', 'PID:ABCDE', 'URL:https://ccc.com']
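
The question also mentions saving each record to a database under Django 2.0.2. A possible follow-up, not part of the original answer, is sketched below; the app name myapp, the UserEntry model, and its field names are hypothetical placeholders for whatever model actually exists:

from myapp.models import UserEntry  # hypothetical app and model

for row in final_list:
    # row[0] is "USER_ID nnnnn", row[1] is the title; the remaining entries are "key:value" strings
    fields = dict(entry.split(':', 1) for entry in row[2:])
    UserEntry.objects.create(
        user_id=row[0].replace('USER_ID', '').strip(),
        title=row[1],
        content=fields.get('content', '').strip(),
        pid=fields.get('PID', '').strip(),
        date=fields.get('date', '').strip(),
        url=fields.get('URL', '').strip(),
    )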

Answer 2 (score: -1)

You can simply use soup.select('table tr').

Example:

from bs4 import BeautifulSoup

html = '<table align="center" border="0" style="width:550px"><tbody>' \
       '<tr><td colspan="2">USER_ID 11111</td></tr>' \
        '<tr><td colspan="2">string_a</td></tr>' \
        '<tr><td colspan="2"><strong>content: aaa</strong></td></tr>' \
        '<tr><td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td></tr>' \
        '<tr><td colspan="2"><strong>URL:https://aaa.com</strong></td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr>' \
        '<tr><td colspan="2">USER_ID 22222</td></tr>' \
        '<tr><td colspan="2">string_b</td></tr>' \
        '<tr><td colspan="2"><strong>content: bbb</strong></td></tr>' \
        '<tr><td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td></tr>' \
        '<tr><td colspan="2"><strong>URL:https://aaa.com</strong></td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr>' \
        '<tr><td colspan="2">USER_ID 33333</td></tr>' \
        '<tr><td colspan="2">string_c</td></tr>' \
        '<tr><td colspan="2"><strong>content: ccc</strong></td></tr>' \
        '<tr><td colspan="2"><strong>date:</strong>2020-05-01 00:00:00 To 2020-05-03 23:59:59</td></tr>' \
        '<tr><td colspan="2"><strong>PID:</strong><strong>ABCDE</strong></td></tr>' \
        '<tr><td colspan="2"><strong>URL:https://ccc.com</strong></td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr>' \
        '<tr><td colspan="2">&nbsp;</td></tr></tbody></table>'

soup = BeautifulSoup(html, features="lxml")
elements = soup.select('table tr')
print(elements)

for element in elements:
    print(element.text)

This prints:

USER_ID 11111
string_a
content: aaa
date:2020-05-01 00:00:00 To 2020-05-03 23:59:59
URL:https://aaa.com
 
 
USER_ID 22222
string_b
content: bbb
date:2020-05-01 00:00:00 To 2020-05-03 23:59:59
URL:https://aaa.com
 
 
USER_ID 33333
string_c
content: ccc
date:2020-05-01 00:00:00 To 2020-05-03 23:59:59
PID:ABCDE
URL:https://ccc.com
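
Note that this answer prints one flat stream of rows rather than one group per user. If grouping is still needed, the tr list can be split whenever a row's text starts with USER_ID; the snippet below is my own addition rather than part of the original answer:

groups = []
for element in elements:
    text = element.text.strip()
    if not text:
        continue  # skip the blank &nbsp; spacer rows
    if text.startswith('USER_ID'):
        groups.append([])  # a new user block starts here
    if groups:
        groups[-1].append(text)

for group in groups:
    print(group)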