My "for" loop scrapes multiple pages (in this case, three pages that I put into a list). But the print/CSV output does not pick up the earlier iterations of the loop (it only gives me the results for the last, third page). I think the term I am looking for here is "array", because I want each page's results appended vertically below one another. I seem to have misunderstood how this line works:
results.append(details)
This is all thanks to QHarr's excellent answer: How Can I Export Scraped Data to Excel Horizontally?
Here is the full working code I used:
import requests, re
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time
examplelist = [['1'], ['2'], ['3']]
pages = [i for sublist in examplelist for i in sublist]
for key in pages:
    driver = webdriver.Chrome(executable_path=r"C:\Users\User\Downloads\chromedriver_win32\chromedriver.exe")
    driver.get('https://www.restaurant.com/listing?&&st=KS&p=KS&p=PA&page=' + str(key) + '&&searchradius=50&loc=10021')
    time.sleep(10)
    WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".restaurants")))
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    restaurants = soup.select('.restaurants')
    results = []
    for restaurant in restaurants:
        details = [re.sub(r'\s{2,}|[,]', '', i) for i in restaurant.select_one('h3 + p').text.strip().split('\n') if i != '']
        details.insert(0, restaurant.select_one('h3 a').text)
        results.append(details)
    #print(results)
    df = pd.DataFrame(results, columns=['Name', 'Address', 'City', 'State', 'Zip', 'Phone', 'AdditionalInfo'])
    df.to_csv(r'C:\Users\User\Documents\Restaurants.csv', sep=',', encoding='utf-8-sig', index=False)
    driver.close()
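For illustration, a minimal, self-contained sketch (hypothetical values, no scraping involved) of the "vertical" stacking described above: each list appended to results becomes one row of the final DataFrame.

import pandas as pd

results = []
for page in ['1', '2', '3']:          # stand-in for the real page loop
    # hypothetical row data in place of the scraped details
    results.append(['Name ' + page, 'Address ' + page, 'Phone ' + page])

df = pd.DataFrame(results, columns=['Name', 'Address', 'Phone'])
print(df)                             # three rows, one per page, stacked vertically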
Thank you
Answer 0 (score: 1)
I think you keep emptying results with results = [] inside the loop, so you lose what you have already put into it. Initialize it outside the loop, like this:
results = []  # initialized once, before the page loop
for key in pages:
    driver = webdriver.Chrome(executable_path=r"C:\Users\User\Downloads\chromedriver_win32\chromedriver.exe")
    driver.get('https://www.restaurant.com/listing?&&st=KS&p=KS&p=PA&page=' + str(key) + '&&searchradius=50&loc=10021')
    time.sleep(10)
    WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".restaurants")))
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    restaurants = soup.select('.restaurants')
    for restaurant in restaurants:
        details = [re.sub(r'\s{2,}|[,]', '', i) for i in restaurant.select_one('h3 + p').text.strip().split('\n') if i != '']
        details.insert(0, restaurant.select_one('h3 a').text)
        results.append(details)
    #print(results)
and remove that initialization from inside the loop.
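In the same spirit, the DataFrame construction and the to_csv call can also move below the loop, so the CSV is written once with the rows from every page. A sketch of that overall structure, reusing the same selectors, URL, columns, and file paths as in the question:

results = []                                   # one shared list for all pages
for key in pages:
    driver = webdriver.Chrome(executable_path=r"C:\Users\User\Downloads\chromedriver_win32\chromedriver.exe")
    driver.get('https://www.restaurant.com/listing?&&st=KS&p=KS&p=PA&page=' + str(key) + '&&searchradius=50&loc=10021')
    time.sleep(10)
    WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, ".restaurants")))
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    driver.close()                             # done with the browser for this page

    for restaurant in soup.select('.restaurants'):
        details = [re.sub(r'\s{2,}|[,]', '', i) for i in restaurant.select_one('h3 + p').text.strip().split('\n') if i != '']
        details.insert(0, restaurant.select_one('h3 a').text)
        results.append(details)                # rows accumulate across pages

# built and written once, after every page has been scraped
df = pd.DataFrame(results, columns=['Name', 'Address', 'City', 'State', 'Zip', 'Phone', 'AdditionalInfo'])
df.to_csv(r'C:\Users\User\Documents\Restaurants.csv', sep=',', encoding='utf-8-sig', index=False)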