Writing to a CSV file outputs each letter in its own cell

Time: 2019-02-22 12:03:58

Tags: python web-scraping export-to-csv

import bs4 as bs
import urllib.request
import csv

source = urllib.request.urlopen('http://www.thebest100lists.com/best100actors/').read()

soup = bs.BeautifulSoup(source, 'lxml')

for paragraph in soup.find_all('ol'):
    celebList = paragraph.text
    print(celebList)

with open('celebList.csv', 'w', newline='') as f:
    writer = csv.writer(f)

    writer.writerow([soup.title.string])
    for i in celebList:
        writer.writerow([i])

I'm playing around with Beautiful Soup 4 to scrape data from a list on a website and output it to a .csv file. I've scraped the data I'm looking for correctly, but when I save and run the program, every letter in each row of the csv file ends up in its own cell. I've tried converting the data to a string, and I've also tried putting (i) in square brackets, but neither helped.

4 Answers:

Answer 0 (score: 0)

You are iterating over the text in celebList, not over a list. Iterating over a string yields one character at a time, which is why each letter lands in its own cell.

You probably want to do something like:

celebList = []
for paragraph in soup.find_all('ol'):
    celebList.append(paragraph.text)
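With celebList built as an actual list, writing each entry as a one-element row gives one name per row. A minimal, self-contained sketch (the sample names here are placeholders standing in for the scraped text):

```python
import csv

# Placeholder sample standing in for the scraped <ol> entries
celebList = ["Robert De Niro", "Al Pacino", "Tom Hanks"]

with open('celebList.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for name in celebList:
        writer.writerow([name])   # one-element list -> one cell per row
```

Note that writerow expects an iterable of cells; passing a bare string makes csv treat each character as a separate cell, which reproduces the bug in the question.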

Answer 1 (score: 0)

You can do this:

celeblistsplit=celebList.split('\n')
celeblistsplit

Then:

f=open('output.csv','w')
for each in celeblistsplit:
    if len(each)>0:
        f.write(each)
        f.write(',')
        f.write('\n')
f.close()

Resulting file:

Robert De Niro,
Al Pacino,
Tom Hanks,
Johnny Depp,
Jack Nicholson,
Marlon Brando,
Meryl Streep,
Leonardo DiCaprio,
...
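Writing commas by hand works for this data, but the csv module handles quoting automatically if a name ever contains a comma. A sketch of the same loop using csv.writer (the input string here is a placeholder for celebList's text):

```python
import csv

# Placeholder for the text scraped from the <ol>, split on newlines
celeblistsplit = "Robert De Niro\n\nAl Pacino\n\nTom Hanks".split('\n')

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for each in celeblistsplit:
        if len(each) > 0:
            writer.writerow([each])  # quoting and line endings handled by csv
```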

Answer 2 (score: 0)

import bs4 as bs
import urllib.request
import csv

source = urllib.request.urlopen('http://www.thebest100lists.com/best100actors/').read()

soup = bs.BeautifulSoup(source, 'lxml')

celebList = []     # an empty list to store the text
for paragraph in soup.find_all('ol'):
    celebList.append(paragraph.text)
    # print(celebList)

# file writing
# print(celebList) # ["\nRobert De Niro\n\nAl Pacino\n\nTom Hanks\n\nJohnny .. ] 
celebList = map(lambda s: s.strip(), celebList)   # strip leading/trailing whitespace from each entry
celebList = list(celebList)


with open('celebList.csv', 'w') as file:
    for text in celebList:
        file.write(text)

Output:

Robert De Niro

Al Pacino

Tom Hanks

Johnny Depp

Jack Nicholson

Marlon Brando

.
.
.

Answer 3 (score: 0)

I think it would be more efficient to use a class selector for the tags and then dump the result to csv with pandas:

from bs4 import BeautifulSoup
import requests
import pandas as pd

url = 'http://www.thebest100lists.com/best100actors/'
res = requests.get(url)
soup = BeautifulSoup(res.content, "lxml")
names = [name.text for name in soup.select('a.class1')]
df = pd.DataFrame(names,columns=['Names'])
df.to_csv(r'C:\Users\User\Desktop\Celebs.csv', sep=',', encoding='utf-8',index = False )