Extracting text and URLs from a table on a page - Beautiful Soup

Date: 2018-04-20 18:54:48

Tags: python beautifulsoup

I am trying to extract both the text and the URLs from a table on a website, but I only seem to get the text. I suspect this is down to the text.strip in my code, but I am not sure how to strip the HTML tags without also losing the URL links inside them. Here is what I have so far:

import requests
from bs4 import BeautifulSoup

start_number = 0
max_number = 5

urls = []

# Build the list of paginated article URLs
for number in range(start_number, max_number + start_number):
    url = 'http://www.ispo-org.or.id/index.php?option=com_content&view=article&id=79:pengumumanpublik&catid=10&Itemid=233&showall=&limitstart=' + str(number) + '&lang=en'
    urls.append(url)

data = []

for url in urls:
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "html.parser")
    table = soup.find("table")            # first table on the page
    table_body = table.find('tbody')
    rows = table_body.find_all('tr')
    for row in rows:
        cols = row.find_all('td')
        cols = [ele.text.strip() for ele in cols]     # .text keeps only the text, so the <a> hrefs are lost here
        data.append([ele for ele in cols if ele])     # get rid of empty values

1 Answer:

Answer 0 (Score: 1)

Just extract the href attribute from the <a> element. For the purposes of this answer I have simplified the code and do not worry about the subsequent pages.

from collections import namedtuple

import requests
from bs4 import BeautifulSoup

url = 'http://www.ispo-org.or.id/index.php?option=com_content&view=article&id=79:pengumumanpublik&catid=10&Itemid=233&showall=&limitstart=0&lang=en'

data = []
Record = namedtuple('Record', 'id company agency date pdf_link')

r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
rows = soup.select('table > tbody > tr')

for row in rows[1:]:  # omit header row
    cols = row.find_all('td')
    fields = [td.text.strip() for td in cols if td.text.strip()]

    if fields:  # if the row is not empty
        pdf_link = row.find('a')['href']
        record = Record(*fields, pdf_link)
        data.append(record)

>>> data[0].pdf_link
'images/notifikasi/619.%20Pengumuman%20Publik%20PT%20IGP.compressed.pdf'
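
The href in this table is a relative path. If you want a full URL you can download directly, one option is to join it against the site root with urllib.parse.urljoin. Below is a minimal sketch of that idea, reusing the data list from the answer above and assuming http://www.ispo-org.or.id/ as the base URL (the base is my assumption, not something stated in the answer).

from urllib.parse import urljoin

# Assumed base URL for resolving the relative PDF links (not given in the answer)
base_url = 'http://www.ispo-org.or.id/'

# Turn each record's relative pdf_link into an absolute URL
absolute_links = [urljoin(base_url, record.pdf_link) for record in data]

>>> absolute_links[0]
'http://www.ispo-org.or.id/images/notifikasi/619.%20Pengumuman%20Publik%20PT%20IGP.compressed.pdf'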