Python: Converting web-scraped data from rows to columns inside a nested loop

Asked: 2019-02-04 10:23:06

Tags: python python-3.x dataframe xlsxwriter

I am trying to convert rows into columns for data that is generated inside a nested for loop.

In short, it works like this: Value 1 is in a row, and the data belonging to Value 1 must go into columns; Value 2 is in a row, and the data belonging to Value 2 must go into columns.

Right now every "value" is exported as a row, and then all of the data belonging to that value is also exported as rows, which makes the output unreadable.

The problem is that to get Value1, Value2, etc. I have to go through one for loop, and to get all the data belonging to Value1 I have to go through another for loop (a nested loop).

All the data I am fetching comes from a website (I am scraping it). I have included an imgur link showing how it looks now and how it should look (my progress so far). The first screenshot is the current state, the second is how it should be; I think the pictures explain it better than my own words: https://imgur.com/a/2LRhQrj

I am using pandas and xlsxwriter to save the data to Excel. I managed to export all the data I need to Excel, but I cannot seem to get the data for each value into columns. The first row is the time; that part is as it should be.

        #Initialize things before loop
        df = pd.DataFrame()
        ### Time based on hour 00:00, 01:00 etc...
        df_time = pd.DataFrame(columns=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])

        for listing in soup.find_all('tr'):

            listing.attrs = {}
            #assetTime = listing.find_all("td", {"class": "locked"})
            assetCell = listing.find_all("td", {"class": "assetCell"})
            assetValue = listing.find_all("td", {"class": "assetValue"})


            for data in assetCell:

                array = [data.get_text()]
                df = df.append(pd.DataFrame({
                                        'Fridge name': array,
                                        }))

                for value in assetValue:

                    asset_array = [value.get_text()]
                    df_time = df_time.append(pd.DataFrame({
                                                'Temperature': asset_array
                                                }))
                ### End of assetValue loop
            ### End of assetCell loop

        ### Now we need to save the data to excel
        ### Create a Pandas Excel writer using XlsxWriter as the Engine
        writer = pd.ExcelWriter(filename+'.xlsx', engine='xlsxwriter')

        ### Convert dataframes
        frames = [df, df_time]
        result = pd.concat(frames)

        ### Convert the dataframe to an XlsxWriter Excel object and skip first row for custom header
        result.to_excel(writer, sheet_name='SheetName', startrow=1, header=True)

        ### Get the xlsxwriter workbook and worksheet objects
        workbook = writer.book
        worksheet = writer.sheets['SheetName']

        ### Write the column headers in the first row
        for col_num, value in enumerate(result.columns.values):
            worksheet.write(0, col_num +1, value)

        ### Close Pandas Excel writer and output the Excel file
        writer.save()
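
To make the target layout concrete, here is a tiny mock-up of the output I am after (the fridge names and temperatures below are made up):

    import pandas as pd

    # Desired shape: one row per fridge, one column per hour (00:00, 01:00, ...)
    target = pd.DataFrame(
        [[2.1, 2.3, 2.0],
         [5.4, 5.1, 5.2]],
        index=['Fridge A', 'Fridge B'],
        columns=[0, 1, 2],
    )
    print(target)
    #             0    1    2
    # Fridge A  2.1  2.3  2.0
    # Fridge B  5.4  5.1  5.2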

1 Answer:

Answer 0 (score: 0)

After a lot of testing I went with a different approach. Instead of messing around with pandas row by row, I grabbed the whole scraped table, formatted it with tabulate, and exported the entire table structure to CSV.

from tabulate import tabulate
import csv
import datetime ### Import datetime so the output files can be named by date
import requests
import pandas as pd
from bs4 import BeautifulSoup



if (DAY_INTEGER <= 31) and (DAY_INTEGER > 0):

    while True:
        try:
            ### Validate the user input
            form_data = {'UserName': USERNAME, 'Password': PASSWORD}
            with requests.Session() as sesh:
                sesh.post(login_post_url, data=form_data)
                response = sesh.get(internal_url)
                html = response.text
                break
        except requests.exceptions.ConnectionError:
            print ("Whoops! This is embarrasing :( ")
            print ("Unable to connect to the address. Looks like the website is down.")

    if sesh:

        # BeautifulSoup version
        soup = BeautifulSoup(html, 'lxml')
        table = soup.find_all("table")[3]  # Skip the first three tables as there is nothing useful in them
        df = pd.read_html(str(table))

        # tabulate renders the whole table as a single formatted string
        df2 = tabulate(df[0], headers='keys', tablefmt='psql', showindex=False)

        with open(filename + '.csv', 'w') as myFile:
            myFile.write(df2)

    else:
        print("Oops. Something went wrong :(")
        print("It looks like authentication failed")