Unable to write output to csv with bs4 in Python

Time: 2013-11-12 22:28:05

Tags: python csv beautifulsoup

I am trying to write the output of the code below to a CSV file, but the data keeps getting overwritten, so in the end the output file only contains the last record scraped from the website.

from bs4 import BeautifulSoup
import urllib2
import csv
import re
import requests
for i in xrange(3179,7000):
    try:
        page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
    except:
        continue
    else:
        soup = BeautifulSoup(page.read())
        for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
            data = i, re.sub(r'\s+',' ',''.join(eachuniversity.findAll(text=True)).encode('utf-8')),'\n'
    print data  
    myfile = open("ttt.csv", 'wb')
    wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    wr.writerow(data)

I am new to this and can't figure out where I am going wrong.

Update

from bs4 import BeautifulSoup
import urllib2
import csv
import re
import requests
with open("BBB.csv", 'wb') as myfile:
    writer = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    for i in xrange(3179,7000):
        try:
            page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
        except Exception:
            continue
        else:
            soup = BeautifulSoup(page.read())
            for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
                data = [i] + [re.sub('\s+', '', text).encode('utf8') for text in eachuniversity.find_all(text=True) if text.strip()]
                writer.writerow(data)

3 Answers:

Answer 0 (score: 5):

Open the file just once, before the loop, then append data to it inside the loop:

with open("ttt.csv", 'wb') as myfile:
    writer = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    for i in xrange(3179,7000):
        try:
            page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id={}".format(i))
        except urllib2.HTTPError:
            continue
        else:
            soup = BeautifulSoup(page.read(), from_encoding=page.info().getparam('charset'))
            for eachuniversity in soup.findAll('fieldset',{'id':'ctl00_step2'}):
                data = [i] + [re.sub('\s+', ' ', text).strip().encode('utf8') for text in eachuniversity.find_all(text=True) if text.strip()]
                writer.writerow(data)

Every time you open a file in 'w' write mode it is truncated: any previous data is deleted to make room for the new data you are about to write. The trick is to open the file only once, at the start, keep it open, and write everything you need while it stays open.

Here the with statement closes the file for you once the outer for loop has finished.
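
As a small standalone illustration of that truncation (the file name demo.txt is just an example), reopening a file in 'wb' mode wipes whatever was written before:

with open("demo.txt", 'wb') as f:
    f.write("first run\n")
with open("demo.txt", 'wb') as f:      # reopening in 'wb' empties the file first
    f.write("second run\n")
print open("demo.txt").read()          # only "second run" survives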

The from_encoding argument passes whatever character set the server headers declare on to BeautifulSoup, so it does not have to guess; for the given URLs, BeautifulSoup actually guesses wrong if you leave that keyword argument out.
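
Broken into steps, that keyword argument is built like this (a sketch using one of the question's URLs; the intermediate variable names are only for illustration):

import urllib2
from bs4 import BeautifulSoup

page = urllib2.urlopen("http://bvet.bytix.com/plus/trainer/default.aspx?id=3179")
charset = page.info().getparam('charset')                  # charset from the Content-Type header, or None if absent
soup = BeautifulSoup(page.read(), from_encoding=charset)   # BeautifulSoup decodes with it instead of guessing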

I removed the newline you were adding to each row; the csv.writer() class handles line endings for you. I also changed the blanket except: to except urllib2.HTTPError: so that only the exception actually raised here is caught, rather than everything.
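
To see why the trailing '\n' item hurts, here is a small illustration with made-up rows, written to an in-memory buffer so nothing touches disk:

import csv
import StringIO

buf = StringIO.StringIO()
wr = csv.writer(buf, quoting=csv.QUOTE_ALL)
wr.writerow([1, 'abc', '\n'])      # the '\n' becomes its own quoted column containing a newline
wr.writerow([2, 'def'])            # writerow already ends every row with \r\n
print repr(buf.getvalue())         # '"1","abc","\n"\r\n"2","def"\r\n'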

The text is also cleaned up so that each text entry is written to a separate column.

This produces output like:

"3179","Neue Suche starten","MoHaJa - die Schule für Hunde und Halter","Stähli Monika","Meisenweg 1","3303 Jegenstorf","Routenplaner","Ausbildung/Anerkennung: Triple-S Ausbildungszentrum für Mensch und Hund","Sprache: Deutsch","Tel.: +41 31 761 14 33","Handy: +41 79 760 41 69","info@hundeschule-mohaja.ch","www.hundeschule-mohaja.ch"
"3180","Neue Suche starten","Dogs Nature","Fernandez Salome-Nicole","Dorzematte 30","3313 Büren zum Hof","Routenplaner","Ausbildung/Anerkennung: Triple-S Ausbildungszentrum für Mensch und Hund","Sprache: Deutsch","Tel.: 079 658 71 71","info@dogsnature.ch","www.dogsnature.ch"
"3181","Neue Suche starten","Gynny-Dog","Speiser Franziska","Wirtsgartenweg 27","4123 Allschwil","Routenplaner","Ausbildung/Anerkennung: Triple-S Ausbildungszentrum für Mensch und Hund","Sprache: Deutsch","Handy: 076 517 20 94","franziska.speiser@gynny-dog.ch","www.gynny-dog.ch"
"3183","Neue Suche starten","keep-natural","Mory Sandra","Beim Werkhof","4434 Hölstein","Routenplaner","Ausbildung/Anerkennung: Triple-S Ausbildungszentrum für Mensch und Hund","Sprache: Deutsch","Tel.: 079 296 00 65","sandra.mory@keep-natural.ch","www.keep-natural.ch"
"3184","Neue Suche starten","Küng Silvia","Eptingerstrasse 41","4448 Läufelfingen","Routenplaner","Ausbildung/Anerkennung: Triple-S Ausbildungszentrum für Mensch und Hund","Sprache: Deutsch","Tel.: 061 981 38 04","Handy: 079 415 83 57","silvia.kueng@hotmail.com","www.different-dogs.ch"

Answer 1 (score: 3):

It is being overwritten because you do this inside the loop:

 myfile = open("ttt.csv", 'wb')

Move it outside the loop.

Answer 2 (score: 2):

Move this line:

myfile = open("ttt.csv", 'wb')

to before the for loop. You are overwriting the file on every iteration of the loop.
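
A minimal sketch of that reordering (the row contents here are only placeholders for the real scraped data):

import csv

myfile = open("ttt.csv", 'wb')                       # open once, before the loop
wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
for i in xrange(3179, 3182):
    data = (i, "scraped text for id {}".format(i))   # stand-in for the real scraped row
    wr.writerow(data)                                 # each call appends one more row
myfile.close()                                        # ttt.csv now holds one row per id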