How to get a table's columns and also the data behind the link in the first column by clicking that link

Asked: 2019-05-03 10:09:31

Tags: python selenium-webdriver beautifulsoup

I have the following link:

http://www.igrmaharashtra.gov.in/eASR/eASRCommon.aspx?hDistName=Pune

For this page I want to scrape the data into Excel in a proper format. Each SurveyNo link reveals data when clicked, and I want each row's data together with the data obtained by clicking its survey number.

I also want the format I have attached as an image (the output I need in Excel).

import urllib.request
from bs4 import BeautifulSoup
import csv
import os
from selenium import webdriver
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.keys import Keys
import time
url = 'http://www.igrmaharashtra.gov.in/eASR/eASRCommon.aspx?hDistName=Pune'
chrome_path =r'C:/Users/User/AppData/Local/Programs/Python/Python36/Scripts/chromedriver.exe'
driver = webdriver.Chrome(executable_path=chrome_path)
driver.implicitly_wait(10)
driver.get(url)
Select(driver.find_element_by_name('ctl00$ContentPlaceHolder5$ddlTaluka')).select_by_value('5')
Select(driver.find_element_by_name('ctl00$ContentPlaceHolder5$ddlVillage')).select_by_value('1872')
soup=BeautifulSoup(driver.page_source, 'lxml')
table = soup.find("table" , attrs = {'id':'ctl00_ContentPlaceHolder5_grdUrbanSubZoneWiseRate' })
with open('Baner.csv', 'w',encoding='utf-16',newline='') as csvfile:
     f = csv.writer(csvfile, dialect='excel')
     f.writerow(['SurveyNo','Subdivision', 'Open ground', 'Resident house','Offices','Shops','Industrial','Unit (Rs./)'])  # headers
     rows = table.find_all('tr')[1:] 
     data=[]
     for tr in rows:  
         cols = tr.find_all('td')
         for td in cols:
              links = driver.find_elements_by_link_text('SurveyNo')
              l =len(links)
              data12 =[]
              for i in range(l):
                   newlinks = driver.find_elements_by_link_text('SurveyNo')
                   newlinks[i].click()
                   soup = BeautifulSoup(driver.page_source, 'lxml')
                   td1 = soup.find("textarea", attrs={'class': 'textbox'})
                   data12.append(td1.text)
                   data.append(td.text)
                   data.append(data12)
              print(data)

Please see the attached image; I need the scraped data output in that format.

1 Answer:

Answer 0: (score: 0)

You can do the following; just re-arrange the columns at the end and do whatever renaming is required. It assumes SurveyNo is present for all wanted rows. I extract the hrefs from the SurveyNo cells, which are in fact executable strings you can pass to execute_script to display the survey numbers, without having to worry about stale elements etc....

from selenium import webdriver
import pandas as pd

url = 'http://www.igrmaharashtra.gov.in/eASR/eASRCommon.aspx?hDistName=Pune'
d = webdriver.Chrome()
d.get(url)
d.find_element_by_css_selector('[value="5"]').click()    # Taluka option (value 5)
d.find_element_by_css_selector('[value="1872"]').click() # Village option (value 1872)
tableElement = d.find_element_by_id('ctl00_ContentPlaceHolder5_grdUrbanSubZoneWiseRate')
table = pd.read_html(tableElement.get_attribute('outerHTML'))[0]
table.columns = table.iloc[0]
table = table.iloc[1:]
table = table[table.Select == 'SurveyNo'] #assumption SurveyNo exists for all wanted rows
# the hrefs of the SurveyNo cells are executable strings (as described above)
surveyNo_scripts = [item.get_attribute('href') for item in d.find_elements_by_css_selector("#ctl00_ContentPlaceHolder5_grdUrbanSubZoneWiseRate [href*='Select$']")]
i = 0
for script in surveyNo_scripts:
    d.execute_script(script)
    surveys = d.find_element_by_css_selector('textarea').text
    table.iloc[i]['Select'] = surveys
    i+=1   
print(table)
#rename and re-order columns as required
table.to_csv(r"C:\Users\User\Desktop\Data.csv", sep=',', encoding='utf-8-sig',index = False ) 

Output before renaming and re-ordering:


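For the final rename and re-order step, a minimal sketch, assuming the remaining scraped column labels happen to match the headers the asker used (the live page's labels may differ, so treat the list below as illustrative):

# rename the Select column (which now holds the survey numbers) and
# re-order to the asker's desired header layout
table = table.rename(columns={'Select': 'SurveyNo'})
table = table[['SurveyNo', 'Subdivision', 'Open ground', 'Resident house',
               'Offices', 'Shops', 'Industrial', 'Unit (Rs./)']]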
Within a loop you could concat all the dfs and then write out in one go (my preference - shown here), or append later as shown here.
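
A minimal sketch of that concat-then-write-once pattern, assuming a hypothetical scrape_one_village helper that wraps the selection, pd.read_html and SurveyNo steps from the answer (the helper and its dummy return value are placeholders so the pattern runs on its own):

import pandas as pd

def scrape_one_village(taluka_value, village_value):
    # Hypothetical helper: run the selection + pd.read_html + SurveyNo steps
    # from the answer for one Taluka/Village pair and return that DataFrame.
    # A dummy frame is returned here purely to keep the sketch self-contained.
    return pd.DataFrame({'SurveyNo': ['1, 2, 3'], 'Village': [village_value]})

dfs = []
for taluka_value, village_value in [('5', '1872')]:  # extend with more pairs as needed
    dfs.append(scrape_one_village(taluka_value, village_value))

# one concat and one write at the end, instead of appending to the CSV on every pass
result = pd.concat(dfs, ignore_index=True)
result.to_csv(r"C:\Users\User\Desktop\Data.csv", sep=',', encoding='utf-8-sig', index=False)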