Trying to write data to a csv file after scraping an HTML table

Asked: 2015-02-17 09:18:52

Tags: python html web-scraping beautifulsoup

from bs4 import BeautifulSoup
import urllib2
from lxml.html import fromstring
import re
import csv

wiki = "http://en.wikipedia.org/wiki/List_of_Test_cricket_records"
header = {'User-Agent': 'Mozilla/5.0'} #Needed to prevent 403 error on Wikipedia
req = urllib2.Request(wiki,headers=header)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)


csv_out = open("mycsv.csv",'wb')
mywriter = csv.writer(csv_out) 

def parse_rows(rows):
    results = []
    for row in rows:
        table_headers = row.find_all('th')
        if table_headers:
            results.append([headers.get_text() for headers in table_headers])

        table_data = row.find_all('td')
        if table_data:
            results.append([data.get_text() for data in table_data])
    return results

# Get table
try:
    table = soup.find_all('table')[1]
except AttributeError as e:
    print 'No tables found, exiting'
    # return 1

# Get rows
try:
    rows = table.find_all('tr')
except AttributeError as e:
    print 'No table rows found, exiting'
    # return 1

table_data = parse_rows(rows)

# Print data and write each row to the csv file
for i in table_data:
    print '\t'.join(i)
    mywriter.writerow(i)

csv_out.close()


UnicodeEncodeError                        Traceback (most recent call last)
<ipython-input> in <module>()

---> 51 mywriter.writerow(d1)

UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 0: ordinal not in range(128)


I do get the data in the IPython notebook, but I can't figure out what goes wrong when writing the csv file.

What could the error be? Please help.

1 Answer:

Answer 0 (score: 1):

This is a known issue with csv writing in Python 2. You can see a solution here. In your case, it all comes down to writing:

mywriter.writerow([s.encode("utf-8") for s in d1])
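For context, here is a minimal sketch of how that fix slots into the write loop from the question, assuming d1 is the per-row list produced by parse_rows (the same value the question's loop calls i):

# Sketch only: write each parsed row, encoding unicode cells to UTF-8 bytes,
# because Python 2's csv module only handles byte strings reliably
for d1 in table_data:
    mywriter.writerow([s.encode("utf-8") for s in d1])
csv_out.close()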

Alternatively, you can use the unicodecsv library to avoid this workaround altogether.
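If you go that route, here is a minimal sketch, assuming unicodecsv is installed (pip install unicodecsv) and that table_data is the list of row lists built in the question:

import unicodecsv

# unicodecsv mirrors the stdlib csv API but encodes unicode values for you,
# so no per-cell .encode("utf-8") is needed
with open("mycsv.csv", "wb") as csv_out:
    writer = unicodecsv.writer(csv_out, encoding="utf-8")
    for row in table_data:
        writer.writerow(row)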