How to save the data from each page to a CSV

Time: 2018-12-31 15:34:23

Tags: html for-loop web-scraping beautifulsoup

I'm working on a scraping project, trying to pull information from 13 pages. The pages all share the same structure; only the URL changes.

I'm able to scrape each page with a for loop, and I can see every page's information in the terminal. However, when I save it to a CSV, only the information from the last page (page 13) ends up in the file.

I'm sure I'm missing something but can't seem to figure out what. Thanks!

I'm using Python 3.7 and BeautifulSoup for the scraping.

from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

pages = [str(i) for i in range(1, 14)]

for page in pages:

    my_url ='Myurl/=' + page

    uClient = uReq(my_url)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html, "html.parser")
    containers = page_soup.findAll("table", {"class":"hello"})
    container = containers[0]

    filename = "Full.csv"
    f = open(filename, "w")

    headers= "Aa, Ab, Ac, Ad, Ba, Bb, Bc, Bd\n"
    f.write(headers)

    for container in containers:

        td_tags = container.find_all('td')
        A = td_tags[0]
        B = td_tags[2]

        Aa = A.a.text   
        Ab = A.span.text
        Ac = A.find('span', attrs = {'class' :'boxes'}).text.strip()
        Ad = td_tags[1].text

        Ba = B.a.text   
        Bb = B.span.text
        Bc = B.find('span', attrs = {'class' :'boxes'}).text.strip()
        Bd = td_tags[3].text

        print("Aa:" + Aa)
        print("Ab:" + Ab)
        print("Ac:" + Ac)
        print("Ad:" + Ad)
        print("Ba:" + Ba)
        print("Bb:" + Bb)
        print("Bc:" + Bc)
        print("Bd:" + Bd)


        f.write(Aa + "," + Ab + "," + Ac.replace(",", "|") + "," + Ad + "," + Ba + "," + Bb + "," + Bc.replace(",", "|") + "," + Bd + "\n")

    f.close()

Edit: Also, if anyone has a good idea for how to confirm and record which page each container came from, that would help too. Thanks again!

1 Answer:

Answer 0 (score: 0)

Your code reopens Full.csv with mode "w" on every pass through the page loop, and "w" truncates the file each time, so only the last page's rows survive. Open it in append mode ("a") instead, so each write adds to the file rather than overwriting it:

with open(filename, "a") as myfile:
    myfile.write(Aa + "," + Ab + "," + Ac.replace(",", "|") + "," + Ad + "," + Ba + "," + Bb + "," + Bc.replace(",", "|") + "," + Bd + "\n")
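A fuller fix is to open the file once, before the page loop, write the header once, and let the csv module handle quoting (which removes the need for the replace(",", "|") workaround). The sketch below shows just that pattern, with placeholder values standing in for the scraped fields since the real URL and page structure aren't known here; a leading Page column records which page each row came from, addressing the question's edit:

```python
import csv

pages = [str(i) for i in range(1, 14)]

# Open once with "w" BEFORE the loop: the file is truncated a single time,
# and every page's rows are then written into the same open handle.
with open("Full.csv", "w", newline="") as f:
    writer = csv.writer(f)  # csv.writer quotes fields containing commas
    writer.writerow(["Page", "Aa", "Ab", "Ac", "Ad", "Ba", "Bb", "Bc", "Bd"])

    for page in pages:
        # ... fetch and parse the page here, as in the question ...
        # Placeholder values stand in for Aa..Bd extracted from each container:
        row = ["Aa" + page, "Ab", "Ac", "Ad", "Ba", "Bb", "Bc", "Bd"]
        writer.writerow([page] + row)  # prepend the page number to each row
```

In your real loop, each writer.writerow call would go inside the "for container in containers" loop, one call per container, with the current page variable as the first field.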