As mentioned in my earlier question, I am using Beautiful Soup with Python to retrieve weather data from a website.
This is what the site looks like:
<channel>
<title>2 Hour Forecast</title>
<source>Meteorological Services Singapore</source>
<description>2 Hour Forecast</description>
<item>
<title>Nowcast Table</title>
<category>Singapore Weather Conditions</category>
<forecastIssue date="18-07-2016" time="03:30 PM"/>
<validTime>3.30 pm to 5.30 pm</validTime>
<weatherForecast>
<area forecast="TL" lat="1.37500000" lon="103.83900000" name="Ang Mo Kio"/>
<area forecast="SH" lat="1.32100000" lon="103.92400000" name="Bedok"/>
<area forecast="TL" lat="1.35077200" lon="103.83900000" name="Bishan"/>
<area forecast="CL" lat="1.30400000" lon="103.70100000" name="Boon Lay"/>
<area forecast="CL" lat="1.35300000" lon="103.75400000" name="Bukit Batok"/>
<area forecast="CL" lat="1.27700000" lon="103.81900000" name="Bukit Merah"/>`
..
..
<area forecast="PC" lat="1.41800000" lon="103.83900000" name="Yishun"/>
</channel>
I managed to retrieve the information I need with this code:
import requests
from bs4 import BeautifulSoup
import urllib3
import csv
import sys
import json

# getting the validTime
area_attrs_li = []
r = requests.get('http://www.nea.gov.sg/api/WebAPI/?dataset=2hr_nowcast&keyref=781CF461BB6606AD907750DFD1D07667C6E7C5141804F45D')
soup = BeautifulSoup(r.content, "xml")
time = soup.find('validTime').string
print "validTime: " + time

# getting the date
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "date: " + element['date']

# getting the time
for currentdate in soup.find_all('item'):
    element = currentdate.find('forecastIssue')
    print "time: " + element['time']

# print area
for area in soup.select('area'):
    area_attrs_li.append(area)
    print area

# print area name
areas = soup.select('area')
for data in areas:
    name = (data.get('name'))
    print name

f = open("C:\\scripts\\testing\\testingnea.csv", 'wt')
try:
    for area in area_attrs_li:
        # print str(area) + "\n"
        writer = csv.writer(f)
        writer.writerow((time, element['date'], element['time'], area, name))
finally:
    f.close()

print open("C:/scripts/testing/testingnea.csv", 'rt').read()
I managed to get the data into CSV format, but when I run this part of the code:
# print area name
areas = soup.select('area')
for data in areas:
    name = (data.get('name'))
    print name
The result shows the same value repeated: obviously my loop is not working, because it prints the last area of the last record over and over again.
EDIT: I tried looping through the area data in the list:
for area in area_attrs_li:
    name = (area.get('name'))
    print name
However, it still does not loop.
I am not sure where my code is going wrong :/
Answer 0 (score: 1)
The problem is in this line: writer.writerow( (time, element['date'], element['time'], area, name))
The value of name never changes.
A way to fix the problem:
try:
    for index, area in enumerate(area_attrs_li):
        # print str(area) + "\n"
        writer = csv.writer(f)
        writer.writerow((time, element['date'], element['time'], area, areas[index].get('name')))
finally:
    f.close()
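For what it's worth, the csv.writer only needs to be created once, and since area_attrs_li and areas hold the same tags you could also pair them with zip instead of keeping an index by hand. This is just a rough sketch that assumes the time, element, f, areas and area_attrs_li variables from the question are already defined:

writer = csv.writer(f)  # create the writer once, outside the loop
try:
    # walk both lists in lockstep so every row gets its own area name
    for area, area_tag in zip(area_attrs_li, areas):
        writer.writerow((time, element['date'], element['time'], area, area_tag.get('name')))
finally:
    f.close()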
Answer 1 (score: 1)
That happens because at the point where you write the row, you are referring to the value left over from the last iteration of the loop. Try this:
writer.writerow( (time, element['date'], element['time'], area, area['name']))
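In context, that line would sit inside the writing loop, something like this sketch (assuming the same time, element, f and area_attrs_li variables from the question):

writer = csv.writer(f)
for area in area_attrs_li:
    # pull the name from the current <area> tag instead of a stale variable
    writer.writerow((time, element['date'], element['time'], area, area['name']))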
Answer 2 (score: 1)
After your loop finishes, you only have a single value left in the name variable. You need a list instead. Try this:
areas = soup.select('area')
name = []
for data in areas:
    name.append(data.get('name'))
print name
l = len(name)
and at the end try:
i = 0
try:
    for area in area_attrs_li:
        writer = csv.writer(f)
        writer.writerow((time, element['date'], element['time'], area, name[i]))
        i = i + 1
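All three answers come down to the same point: the area name has to be read per row, inside the loop that writes the rows. As a minimal end-to-end sketch of just the writing step (not the definitive fix, and assuming soup, time and element have already been built as in the question):

import csv

# 'wb' avoids the blank lines the csv module otherwise produces on Windows under Python 2
with open("C:\\scripts\\testing\\testingnea.csv", 'wb') as f:
    writer = csv.writer(f)
    for area in soup.select('area'):
        # each row takes the name of the <area> tag it belongs to
        writer.writerow((time, element['date'], element['time'], area, area.get('name')))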