Something is slightly wrong with the text printed to a file using Python

Asked: 2017-07-24 18:27:59

Tags: python web-scraping beautifulsoup

I am writing some text to a CSV file using Python. Below is a screenshot of the data my script currently writes to the file.

You can see that in the "Channel Social Media Links" column all of the links are written correctly into the cells of the following rows, but the first link is not written into the "Channel Social Media Links" column. How can I write it so the first link lands in that column, as in the second screenshot?

Here is my Python script:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

myUrl='https://www.youtube.com/user/HolaSoyGerman/about'


uClient = uReq(myUrl)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")

containers = page_soup.findAll("h1",{"class":"branded-page-header-title"})

filename="Products2.csv"
f = open(filename,"w")

headers = "Channel Name,Channel Description,Channel Social Media Links\n"

f.write(headers)

channel_name = containers[0].a.text 
print("Channel Name :" + channel_name)

# For About Section Info
aboutUrl='https://www.youtube.com/user/HolaSoyGerman/about'


uClient1 = uReq(aboutUrl)
page_html1 = uClient1.read()
uClient1.close()

page_soup1 = soup(page_html1, "html.parser")

description_div = page_soup.findAll("div",{"class":"about-description branded-page-box-padding"})
channel_description = description_div[0].pre.text
print("Channel Description :" + channel_description)
f.write(channel_name+ "," +channel_description)
links = page_soup.findAll("li",{"class":"channel-links-item"})
for link in links:
    social_media = link.a.get("href")
    f.write(","+","+social_media+"\n")
f.close()

1 Answer:

Answer 0 (score: 1):

It would help if you used Python's csv library when writing to the file. It converts a list of items into a correctly comma-separated row for you.
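For instance, csv.writer also quotes any field that itself contains a comma, which a plain f.write() would split across two columns (a minimal standalone sketch; the file name and values are made up):

import csv

# A field containing a comma is quoted automatically by csv.writer,
# so it stays in a single column when the file is read back.
with open("demo.csv", "w", newline='') as f:
    writer = csv.writer(f)
    writer.writerow(["Channel Name", "Channel Description"])
    writer.writerow(["HolaSoyGerman.", "Hola, soy German"])  # second field gets quoted

Applied to your full script: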

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import csv

myUrl = 'https://www.youtube.com/user/HolaSoyGerman/about'

uClient = uReq(myUrl)
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, "html.parser")
containers = page_soup.findAll("h1",{"class":"branded-page-header-title"})
filename = "Products2.csv"

with open(filename, "w", newline='') as f:
    csv_output = csv.writer(f)
    headers = ["Channel Name", "Channel Description", "Channel Social Media Links"]
    csv_output.writerow(headers)

    channel_name = containers[0].a.text 
    print("Channel Name :" + channel_name)

    # For About Section Info
    aboutUrl = 'https://www.youtube.com/user/HolaSoyGerman/about'

    uClient1 = uReq(aboutUrl)
    page_html1 = uClient1.read()
    uClient1.close()

    page_soup1 = soup(page_html1, "html.parser")
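    # Note: aboutUrl is the same page as myUrl, so this second request is
    # redundant and page_soup1 goes unused; the code below keeps using page_soup.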

    description_div = page_soup.findAll("div",{"class":"about-description branded-page-box-padding"})
    channel_description = description_div[0].pre.text
    print("Channel Description :" + channel_description)

    links = [link.a.get('href') for link in page_soup.findAll("li",{"class":"channel-links-item"})]
    csv_output.writerow([channel_name, channel_description, links[0]])

    for link in links[1:]:
        csv_output.writerow(['', '', link])

This will give you a single row for each href, with the link in the last column, for example:

Channel Name,Channel Description,Channel Social Media Links
HolaSoyGerman.,Los Hombres De Verdad Usan Pantuflas De Perrito,http://www.twitter.com/germangarmendia
,,http://instagram.com/germanchelo
,,http://www.youtube.com/juegagerman
,,http://www.youtube.com/juegagerman
,,http://www.twitter.com/germangarmendia
,,http://instagram.com/germanchelo
,,https://plus.google.com/108460714456031131326

Each writerow() call writes a list of values to the file as comma-separated values and automatically adds a newline at the end for you. All that is needed is to build the list of values for each row. First, take the first of your links and make it the last value in the list, after the channel description. Second, write one row for each remaining link, with empty values in the first two columns.
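As an aside, the same pattern can be written without special-casing the first link by letting itertools.zip_longest pad the exhausted columns (a sketch reusing the variable names from the code above):

from itertools import zip_longest

# Yields (channel_name, channel_description, links[0]) first, then
# ('', '', link) for each remaining link once the one-element name and
# description columns run out.
rows = zip_longest([channel_name], [channel_description], links, fillvalue='')
csv_output.writerows(rows)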

To answer your comment, the following should get you started:

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
import csv

def get_data(url, csv_output):

    if not url.endswith('/about'):
        url += '/about'

    print("URL: {}".format(url))
    uClient = uReq(url)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html, "html.parser")
    containers = page_soup.findAll("h1", {"class":"branded-page-header-title"})

    channel_name = containers[0].a.text 
    print("Channel Name :" + channel_name)

    description_div = page_soup.findAll("div", {"class":"about-description branded-page-box-padding"})
    channel_description = description_div[0].pre.text
    print("Channel Description :" + channel_description)

    links = [link.a.get('href') for link in page_soup.findAll("li", {"class":"channel-links-item"})]
    csv_output.writerow([channel_name, channel_description, links[0]])

    for link in links[1:]:
        csv_output.writerow(['', '', link])

    #TODO - get list of links for the related channels
    # (assign related_links here; a possible sketch follows this script)

    return related_links


my_url = 'https://www.youtube.com/user/HolaSoyGerman'
filename = "Products2.csv"

with open(filename, "w", newline='') as f:
    csv_output = csv.writer(f)
    headers = ["Channel Name", "Channel Description", "Channel Social Media Links"]
    csv_output.writerow(headers)

    for _ in range(5):
        next_links = get_data(my_url, csv_output)
        my_url = next_links[0]      # e.g. follow the first of the related links
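To fill in the TODO, the related channels can be scraped the same way as the social media links. The sketch below would replace the TODO line inside get_data; the "branded-page-related-channels-item" class name is an assumption about YouTube's page markup at the time and should be verified against the live page:

    # Assumed markup: each related channel is an <li> with this class,
    # linking to the channel through its first <a> tag.
    related = page_soup.findAll("li", {"class": "branded-page-related-channels-item"})
    related_links = ["https://www.youtube.com" + item.a.get("href")
                     for item in related if item.a]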