For this project, I am scraping data from a database and trying to export it to a spreadsheet for further analysis.
While my code seems to work well, when it comes to the last bit - exporting to CSV - I have had no luck. This question has been asked a few times, but the answers seem geared to different approaches and I haven't had any luck adapting them.
My code is below:
from bs4 import BeautifulSoup
import requests
import re

url1 = "http://www.elections.ca/WPAPPS/WPR/EN/NC?province=-1&distyear=2013&district=-1&party=-1&pageno="
url2 = "&totalpages=55&totalcount=1368&secondaryaction=prev25"

date1 = []
date2 = []
date3 = []
party = []
riding = []
candidate = []
winning = []
number = []

for i in range(1, 56):
    r = requests.get(url1 + str(i) + url2)
    data = r.text
    cat = BeautifulSoup(data)
    links = []

    for link in cat.find_all('a', href=re.compile('selectedid=')):
        links.append("http://www.elections.ca" + link.get('href'))

    for link in links:
        r = requests.get(link)
        data = r.text
        cat = BeautifulSoup(data)
        date1.append(cat.find_all('span')[2].contents)
        date2.append(cat.find_all('span')[3].contents)
        date3.append(cat.find_all('span')[5].contents)
        party.append(re.sub("[\n\r/]", "", cat.find("legend").contents[2]).strip())
        riding.append(re.sub("[\n\r/]", "", cat.find_all('div', class_="group")[2].contents[2]).strip())
        cs = cat.find_all("table")[0].find_all("td", headers="name/1")
        elected = []

        for c in cs:
            elected.append(c.contents[0].strip())

        number.append(len(elected))
        candidate.append(elected)
        winning.append(cs[0].contents[0].strip())

import csv

file = ""
for i in range(0, len(date1)):
    file = [file, date1[i], date2[i], date3[i], party[i], riding[i], "\n"]

with open('filename.csv', 'rb') as file:
    writer = csv.writer(file)
    for row in file:
        writer.writerow(row)
Really - any tips would be hugely appreciated. Thanks so much.
*Part 2: A further question: I had previously thought I could simplify finding the winning candidate in the table by always just picking the first name that appears, since I assumed the "winner" always comes first. However, that is not the case. Whether or not a candidate was elected is stored as an image in the first column. How can I scrape that and store it in the spreadsheet? It sits inside the <td> headers as:
<img src="/WPAPPS/WPR/Content/Images/selected_box.gif" alt="contestant won this nomination contest">
I had the idea of some sort of boolean sorting measure, but I'm not sure how to implement it. Thanks so much.* Update: this question is now a separate post here.
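A minimal sketch of that boolean idea (the HTML snippet and candidate names below are made up for illustration; it assumes the name cell contains the selected_box.gif image only when the contestant won): check each name cell for the image and record True/False alongside the name:

```python
import re
from bs4 import BeautifulSoup

# Hypothetical fragment mimicking two rows of the results table:
# the winner's cell carries the selected_box.gif image, the other does not.
html = """
<table>
  <tr><td headers="name/1"><img src="/WPAPPS/WPR/Content/Images/selected_box.gif"
      alt="contestant won this nomination contest"> Jane Winner</td></tr>
  <tr><td headers="name/1">John Runnerup</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
results = []
for td in soup.find_all("td", headers="name/1"):
    name = td.get_text().strip()
    # won is True only when the cell contains the selected_box image
    won = td.find("img", src=re.compile("selected_box")) is not None
    results.append((name, won))

print(results)  # [('Jane Winner', True), ('John Runnerup', False)]
```

The `won` flag could then be written into the spreadsheet next to each candidate's name instead of relying on row order.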
1 Answer (score: 1)
The following should correctly export your data to a CSV file:
from bs4 import BeautifulSoup
import requests
import re
import csv

url = "http://www.elections.ca/WPAPPS/WPR/EN/NC?province=-1&distyear=2013&district=-1&party=-1&pageno={}&totalpages=55&totalcount=1368&secondaryaction=prev25"

rows = []

for i in range(1, 56):
    print(i)
    r = requests.get(url.format(i))
    data = r.text
    cat = BeautifulSoup(data, "html.parser")
    links = []

    for link in cat.find_all('a', href=re.compile('selectedid=')):
        links.append("http://www.elections.ca" + link.get('href'))

    for link in links:
        r = requests.get(link)
        data = r.text
        cat = BeautifulSoup(data, "html.parser")
        lspans = cat.find_all('span')
        cs = cat.find_all("table")[0].find_all("td", headers="name/1")
        elected = []

        for c in cs:
            elected.append(c.contents[0].strip())

        rows.append([
            lspans[2].contents[0],
            lspans[3].contents[0],
            lspans[5].contents[0],
            re.sub("[\n\r/]", "", cat.find("legend").contents[2]).strip(),
            re.sub("[\n\r/]", "", cat.find_all('div', class_="group")[2].contents[2]).strip(),
            len(elected),
            cs[0].contents[0].strip()
        ])

# encoding='utf-8' handles accented riding/candidate names; in Python 3,
# writing bytes (e.g. via .encode('latin-1')) would produce b'...' in the CSV.
with open('filename.csv', 'w', newline='', encoding='utf-8') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerows(rows)
This gives you the following kind of output in the CSV file:
"September 17, 2016","September 13, 2016","September 17, 2016",Liberal,Medicine Hat--Cardston--Warner,1,Stanley Sakamoto
"June 25, 2016","May 12, 2016","June 25, 2016",Conservative,Medicine Hat--Cardston--Warner,6,Brian Benoit
"September 28, 2015","September 28, 2015","September 28, 2015",Liberal,Cowichan--Malahat--Langford,1,Luke Krayenhoff
There is no need to build lots of separate lists for each column of data; just build the rows list directly. It can then easily be written to the CSV in one go (or one row at a time as the data is collected).
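The "one row at a time" variant mentioned above can be sketched like this (the header names and sample row are illustrative, not part of the original answer): open the file once before the scraping loops and write each row as soon as it is built:

```python
import csv

# Illustrative sketch: open the CSV once, then write rows as they are produced.
# In the real scraper, the writerow() call would sit inside the inner link loop.
with open('filename.csv', 'w', newline='', encoding='utf-8') as f_output:
    csv_output = csv.writer(f_output)
    csv_output.writerow(["date1", "date2", "date3", "party",
                         "riding", "candidates", "winner"])  # header row

    sample_rows = [
        ["September 17, 2016", "September 13, 2016", "September 17, 2016",
         "Liberal", "Medicine Hat--Cardston--Warner", 1, "Stanley Sakamoto"],
    ]
    for row in sample_rows:
        csv_output.writerow(row)  # flushed incrementally, nothing kept in memory
```

This avoids holding all 1368 contests in memory and means a crash partway through still leaves the rows scraped so far on disk.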