I need some help saving the output of a basic web scraper to a CSV file.
Here is the code:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import csv
html_ = urlopen("some_url")
bsObj_ = BeautifulSoup(html_, "html.parser")
nameList_ = bsObj_.findAll("div", {"class": "row proyecto_name_venta"})
for name in nameList_:
    print(name.get_text())
Specifically, I want to save the results of name.get_text() to a CSV file.
Answer 0 (score: 1)
If each element of nameList_ is a row whose columns are separated by ',', try the following:
import csv

with open('out.csv', 'w') as outf:
    writer = csv.writer(outf)
    writer.writerows(name.get_text().split(',') for name in nameList_)
If name.get_text() is just a plain string and you want to write a single-column CSV, you can do this instead:
import csv

with open('out.csv', 'w') as outf:
    writer = csv.writer(outf)
    writer.writerows([name.get_text()] for name in nameList_)
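One detail worth adding (not part of the original answer): on Windows, csv.writer can emit a blank line after every row unless the output file is opened with newline=''. A minimal self-contained sketch, using a hypothetical list of strings in place of the scraped nameList_:

```python
import csv

# Hypothetical stand-in for the strings returned by name.get_text()
names = ["Proyecto Alpha", "Proyecto Beta"]

# newline='' lets csv.writer control line endings itself,
# which avoids spurious blank rows on Windows
with open('out.csv', 'w', newline='') as outf:
    writer = csv.writer(outf)
    writer.writerows([n] for n in names)
```

In the real scraper you would replace the names list with [name.get_text() for name in nameList_].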
Answer 1 (score: 0)
Here is a fairly comprehensive example of what you are asking for:
from urllib.request import urlopen

listOfStocks = ["AAPL", "MSFT", "GOOG", "FB", "AMZN"]
urls = []
for company in listOfStocks:
    urls.append('http://real-chart.finance.yahoo.com/table.csv?s=' + company + '&d=6&e=28&f=2015&g=m&a=11&b=12&c=1980&ignore=.csv')

Output_File = open('C:/Users/rshuell001/Historical_Prices.csv', 'w')
New_Format_Data = ''
for counter in range(0, len(urls)):
    # urlopen() returns bytes in Python 3, so decode to text first
    Original_Data = urlopen(urls[counter]).read().decode('utf-8')
    rows = Original_Data.splitlines(True)
    if counter == 0:
        # Reuse the first file's header row instead of fetching the URL twice
        New_Format_Data = "Company," + rows[0]
    # Skip the header row and prepend the ticker symbol to every data row
    for row in range(1, len(rows)):
        New_Format_Data = New_Format_Data + listOfStocks[counter] + ',' + rows[row]
Output_File.write(New_Format_Data)
Output_File.close()
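The per-row transformation in the answer above (prepending a company symbol to every row of a downloaded CSV) can also be done with the csv module, which handles quoting correctly if a field ever contains a comma. A sketch under the assumption that the downloaded text is already in memory; the sample data here is made up, not real Yahoo output:

```python
import csv
import io

# Made-up sample standing in for one downloaded CSV body
sample = "Date,Close\n2015-06-01,130.5\n2015-07-01,125.1\n"

out = io.StringIO()
writer = csv.writer(out)
reader = csv.reader(io.StringIO(sample))

# Write a new header with the extra "Company" column,
# then prepend the ticker symbol to each data row
header = next(reader)
writer.writerow(["Company"] + header)
for row in reader:
    writer.writerow(["AAPL"] + row)

print(out.getvalue())
```

In the original script, out would be the open output file and sample would be the decoded response body for each URL.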