Scraping an HTML table into a CSV file in a specific format using Python

Time: 2018-05-27 09:10:07

Tags: python csv beautifulsoup lxml

I want to extract the contents of an HTML table from a link in a specific format.

The HTML code on the web page:

<table>
<thead>
<tr>
<th>name</th>
<th>brand</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="http://abcd.com"><span style="color: #000; min-width: 160px;">abcd</span></a></td>
<td><a href="http://abcd.com" target="_blank"><span style="color: #000;">abcd123</span></a></td>
<td><a href="http://abcd.com" target="_blank"><span style="color: #000;">abcd 123 (1g)</span></a><br/></td>
</tr>
<tr>
<td><a href="http://efgh.com" target="_blank"><span style="color: #000; min-width: 160px;">efgh</span></a></td>
<td><a href="http://efgh.com" target="_blank"><span style="color: #000;">efgh456</span></a></td>
<td><a href="http://efgh.com" target="_blank"><span style="color: #000;">efgh 456 (2g)</span></a><br/></td>
</tr>
<tr>
<td><a href="http://ijkl.com" target="_blank"><span style="color: #000; min-width: 160px;">ijkl</span></a></td>
<td><a href="http://ijkl.com" target="_blank"><span style="color: #000;">ijkl789</span></a></td>
<td><a href="http://ijkl.com" target="_blank"><span style="color: #000;">ijkl 789 (3g)</span></a><br/></td>
</tr>
</tbody>
</table>

The desired output format in the CSV file is shown below:

Link,name,brand,description
http://abcd.com,abcd,abcd123,abcd 123 (1g)
http://efgh.com,efgh,efgh456,efgh 456 (2g)
http://ijkl.com,ijkl,ijkl789,ijkl 789 (3g)

Here is my code:

rows = doc.xpath("//table")
for tr in rows:
    tds = tr.xpath("//td")
    for td in tds:
        Link = td.xpath("//td[1]/a/@href")
        name = td.xpath("//td[1]//text()")
        brand = td.xpath("//td[2]//text()")
        description = td.xpath("//td[3]//text()")
results = []
results.append(Link)
results.append(name)
results.append(brand)
results.append(description)
for result in results:
    writer.writerow(result)

Here, I can't figure out how to get the data into the CSV in the specific format shown above.

2 Answers:

Answer 0 (score: 0):

Try the following approach. Each xpath call returns a list, so you can concatenate them to create a row:

import csv
from lxml import html

# html_text holds the HTML source shown in the question.
doc = html.fromstring(html_text)

with open('output.csv', 'w', newline='') as f_output:
    csv_output = csv.writer(f_output)

    for tr in doc.xpath("//table"):
        tds = tr.xpath("//td")
        for td in tds:
            Link = td.xpath("//td[1]/a/@href")
            name = td.xpath("//td[1]//text()")
            brand = td.xpath("//td[2]//text()")
            description = td.xpath("//td[3]//text()")
            csv_output.writerow(Link + name + brand + description)

This gives you a CSV file that looks like this:

http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
http://abcd.com,http://efgh.com,http://ijkl.com,abcd,efgh,ijkl,abcd123,efgh456,ijkl789,abcd 123 (1g),efgh 456 (2g),ijkl 789 (3g)
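
The flattened rows above appear because every //td[...] expression searches the whole document again on each pass of the loop. As a minimal sketch (assuming, as above, that html_text holds the table markup and that output.csv is the target file), the same idea with row-relative XPath expressions yields one CSV row per table row:

import csv
from lxml import html

doc = html.fromstring(html_text)

with open('output.csv', 'w', newline='') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(['Link', 'name', 'brand', 'description'])

    # Iterate over the body rows only, and keep every expression
    # relative to the current row (note the leading "." in ".//td[...]").
    for tr in doc.xpath("//tbody/tr"):
        link = tr.xpath(".//td[1]/a/@href")
        name = tr.xpath(".//td[1]//text()")
        brand = tr.xpath(".//td[2]//text()")
        description = tr.xpath(".//td[3]//text()")
        csv_output.writerow(link + name + brand + description)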

Answer 1 (score: 0):

You can use BeautifulSoup:

from bs4 import BeautifulSoup as soup
import csv

# data holds the HTML source shown in the question.
with open('filename.csv', 'w') as f:
  write = csv.writer(f)
  # Header row: a literal "Link" column followed by the <th> texts.
  header = ['Link']+[i.text for i in soup(data, 'html.parser').find_all('th')]
  # For every row after the header, collect [href, text] for each <td>.
  final_results = [[[b.find('a')['href'], b.text] for b in i.find_all('td')] for i in soup(data, 'html.parser').find_all('tr')][1:]
  # Write the header, then one row per <tr>: the first cell's href plus each cell's text.
  write.writerows([header]+[[b[0][0], *[i[-1] for i in b]] for b in final_results])

Output:

Link,name,brand,description
http://abcd.com,abcd,abcd123,abcd 123 (1g)
http://efgh.com,efgh,efgh456,efgh 456 (2g)
http://ijkl.com,ijkl,ijkl789,ijkl 789 (3g)
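
If the nested comprehensions are hard to follow, the same extraction can be written as an explicit loop. This is only a sketch under the same assumption that data holds the HTML from the question:

from bs4 import BeautifulSoup
import csv

page = BeautifulSoup(data, 'html.parser')

with open('filename.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    # Header: a literal "Link" column plus the <th> texts.
    writer.writerow(['Link'] + [th.text for th in page.find_all('th')])

    # Skip the header row, then build one CSV row per <tr>.
    for tr in page.find_all('tr')[1:]:
        cells = tr.find_all('td')
        link = cells[0].find('a')['href']  # href of the first column's link
        writer.writerow([link] + [td.text for td in cells])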