Web scraping from a website with multiple pages (Python)

Date: 2018-10-15 16:52:22

Tags: python web-scraping

Using the BeautifulSoup library, I want to collect information from a website such as booking.com. Here is my code:

from bs4 import BeautifulSoup 
from selenium import webdriver
import pandas as pd

url= 'https://www.booking.com/searchresults.en-gb.html?label=gen173nr-1FCAEoggJCAlhYSDNYBGgOiAEBmAEuwgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&lang=en-gb&sid=1dcb4bc0b28bc6f79fb85ac861a6e6ce&sb=1&src=searchresults&src_elem=sb&error_url=https%3A%2F%2Fwww.booking.com%2Fsearchresults.en-gb.html%3Flabel%3Dgen173nr-1FCAEoggJCAlhYSDNYBGgOiAEBmAEuwgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM%3Bsid%3D1dcb4bc0b28bc6f79fb85ac861a6e6ce%3Bclass_interval%3D1%3Bdest_id%3D-390625%3Bdest_type%3Dcity%3Bfrom_sf%3D1%3Bgroup_adults%3D2%3Bgroup_children%3D0%3Blabel_click%3Dundef%3Bno_rooms%3D1%3Boffset%3D0%3Braw_dest_type%3Dcity%3Broom1%3DA%252CA%3Bsb_price_type%3Dtotal%3Bslp_r_match%3D0%3Bsrc%3Dindex%3Bsrc_elem%3Dsb%3Bsrpvid%3D9fe26fd0b0e90202%3Bss%3DMadrid%3Bssb%3Dempty%3Bssne%3DMadrid%3Bssne_untouched%3DMadrid%26%3B&ss=Madrid&ssne=Madrid&ssne_untouched=Madrid&city=-390625&checkin_monthday=15&checkin_month=10&checkin_year=2018&checkout_monthday=19&checkout_month=10&checkout_year=2018&group_adults=2&group_children=0&no_rooms=1&from_sf=1'
driver = webdriver.Chrome(r"C:\Users\yefida\Desktop\Study_folder\Online_Courses\python mega course\Project 6 - Web Scraping\chromedriver.exe")
driver.get(url)
html = driver.page_source

soup = BeautifulSoup(html, 'html.parser')

list_data = []
data = soup.find_all('div',{'class':'sr_item_content sr_item_content_slider_wrapper '})
for item in data:
    temp = {}
    temp['Title'] = item.find('span',{'class':'sr-hotel__name'}).text.replace('\n','')
    temp['Address'] = item.find('div',{'class':'address'}).text.replace('\n','').lstrip(' ').partition(',')[0]
    try:
        temp['Demand'] = item.find('span',{'class':'sr_property_in_demand'}).text.replace('!','')
    except AttributeError:  # the "In high demand" badge is not present on every listing
        temp['Demand'] = None
    temp['Rating Users Words'] = item.find('span',{'class':'review-score-widget__text'}).text.replace('\n','')
    try:
        temp['Number Reviews'] = item.find('span',{'class':'review-score-widget__subtext'}).text.replace('\n','').replace(' reviews','')
    except AttributeError:
        temp['Number Reviews'] = None
    temp['Rating Users Numbers'] = item.find('span',{'class':'review-score-badge'}).text.replace('\n','')
    try:
        temp['Location'] = item.find('div',{'class':'search-secondary-review-score'}).text.replace('\n','').replace('Location','')
    except AttributeError:
        temp['Location'] = None
    try:
        temp['Price'] = item.find('strong',{'class':'price'}).text.replace('\n','').replace('€\xa0','')
    except AttributeError:
        temp['Price'] = None

    list_data.append(temp)

Basically, this gets the information from the first results page of Booking.com. But here is the link to the second page:

"https://www.booking.com/searchresults.en-gb.html?aid=304142&label=gen173nr-1FCAEoggJCAlhYSDNYBGgOiAEBmAEuwgEKd2luZG93cyAxMMgBDNgBAegBAfgBC5ICAXmoAgM&sid=b7189b7c140bf45ceae5bfd81d08947f&checkin_month=10&checkin_monthday=15&checkin_year=2018&checkout_month=10&checkout_monthday=19&checkout_year=2018&city=-390625&class_interval=1&dest_id=-390625&dest_type=city&from_sf=1&group_adults=2&group_children=0&label_click=undef&no_rooms=1&raw_dest_type=city&room1=A%2CA&sb_price_type=total&slp_r_match=0&src=searchresults&src_elem=sb&srpvid=49147618d7ba03be&ss=Madrid&ssb=empty&ssne=Madrid&ssne_untouched=Madrid&rows=50&offset=50"
As you can see, the links for the first and the second page differ considerably (the URL strings are quite different). So how can I handle this? Is there some trick?

1 Answer:

Answer 0 (score: 1)

All you have to do is add an offset parameter to the URL query: the results come in pages of 50 (rows=50), so the second page uses &offset=50, the third &offset=100, and so on.
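
A minimal sketch of that idea, reusing the driver, url and the result-container selector from the question; the page size of 50 and the cap of 5 pages are only illustrative assumptions:

from bs4 import BeautifulSoup

all_items = []
for page in range(5):                       # illustrative cap, not a real limit
    # Booking.com serves 50 results per page, so offsets go 0, 50, 100, ...
    driver.get(url + '&rows=50&offset={}'.format(page * 50))
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    items = soup.find_all('div', {'class': 'sr_item_content sr_item_content_slider_wrapper '})
    if not items:                           # stop once a page comes back empty
        break
    all_items.extend(items)                 # then parse each item as in the question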

Alternatively, you can use bs4 to grab the link to the next page from the page itself, like this:
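
A rough sketch of that approach, assuming the next-page arrow is an <a> tag with class 'paging-next'; that class name and the urljoin step are assumptions about Booking.com's markup at the time, so verify them against the live page:

from urllib.parse import urljoin

soup = BeautifulSoup(driver.page_source, 'html.parser')
# Assumed selector for the "next page" arrow; inspect the page to confirm the class.
next_link = soup.find('a', {'class': 'paging-next'})
if next_link and next_link.get('href'):
    # The href may be relative, so resolve it against the page we are currently on.
    driver.get(urljoin(driver.current_url, next_link['href']))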