Python requests module gets the same results even though the page number is incremented

Time: 2016-11-05 20:48:23

Tags: python ajax python-requests

The only thing that changes in the URL is the page number, which is incremented after each request.

Short of Selenium or a similar tool, I'm not sure what approach to use to step through the pages. My hunch is that there is some header/query combination that fetches the data directly, but I don't know where to look for it.

import requests
from bs4 import BeautifulSoup

url = 'http://therunningbug.co.uk/events/find-races.aspx?EventName=&AddressRegion=&AddressCounty=&Date=&Surface=#Sort=Date&page='

page = 1

while True:
    pageData = BeautifulSoup(requests.get(url + str(page)).content, 'html.parser')

    articles = pageData.find('div', {'class': "items-content"})

    for a in articles.find_all('article'):
        name = a.find('span', {'itemprop': "name"}).text
        d, t = a.find('time').get('datetime').split('T')

        timeData = t[:-3]

        dateData = d.split('-')
        date = (dateData[1] + '/' + dateData[2] + '/' + dateData[0][2:]).strip()
        description = a.find('p', {'itemprop': "description"}).text.strip()
        webLink = 'http://therunningbug.co.uk' + a.find('a', {'itemprop': "url"}).get('href')
        category = a.find('span', {'class': "surface"}).text
        location = a.find('span', {'class': "region"}).text + ', ' + a.find('span', {'class': "county"}).text

        print(name, ' -- name')
        print(date, ', ', timeData, ' -- date, time')
        print(description, ' -- description')
        print(webLink, ' -- website link')
        print(category, ' -- category')
        print(location, ' -- location\n')

    page += 1
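One detail worth noticing in the URL above: everything after the `#` is a URL fragment, and fragments are never sent to the server, so appending the page number after `#Sort=Date&page=` means the server never sees it. A quick diagnostic sketch with the standard library (not part of the original post) shows where the page number actually ends up:

```python
from urllib.parse import urlsplit

# Same shape of URL as in the question, with a page number appended
url = ('http://therunningbug.co.uk/events/find-races.aspx'
       '?EventName=&AddressRegion=&AddressCounty=&Date=&Surface='
       '#Sort=Date&page=2')

parts = urlsplit(url)
print(parts.query)     # the query string the server receives -- no 'page' here
print(parts.fragment)  # 'Sort=Date&page=2' -- kept client-side only
```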

1 Answer:

Answer 0 (score: 0)

The problem may be URL encoding. You can let requests do the encoding for you:

url = 'http://therunningbug.co.uk/events/find-races.aspx'
payload = {'page': page}
pageData = BeautifulSoup(requests.get(url, params = payload).content)

This also works, since there are no complicated characters in the URI that actually need URL encoding:

url = 'http://therunningbug.co.uk/events/find-races.aspx'
pageData = BeautifulSoup(requests.get(url + '?page=' + str(page)).content)

See the requests documentation on URL-encoded parameters: http://docs.python-requests.org/en/master/user/quickstart/
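To see the exact URL that requests builds from `params` without actually sending anything, you can prepare the request using the public `Request.prepare()` API (a small sketch to illustrate the encoding):

```python
from requests import Request

# Build, but do not send, a GET request with a 'page' query parameter
req = Request('GET',
              'http://therunningbug.co.uk/events/find-races.aspx',
              params={'page': 2}).prepare()
print(req.url)  # the fully encoded URL requests would send
```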

Full code:

#!/usr/bin/env python

import requests
from bs4 import BeautifulSoup

page = 1
while True:

    url = 'http://therunningbug.co.uk/events/find-races.aspx'
    payload = {'page': page}
    pageData = BeautifulSoup(requests.get(url, params=payload).content, 'html.parser')

    articles = pageData.find('div', {'class': "items-content"})

    # Stop once a page no longer contains any event listings,
    # instead of looping (and eventually crashing) forever
    if articles is None or not articles.find_all('article'):
        break

    for a in articles.find_all('article'):
        name = a.find('span', {'itemprop': "name"}).text
        d, t = a.find('time').get('datetime').split('T')

        timeData = t[:-3]

        dateData = d.split('-')
        date = (dateData[1] + '/' + dateData[2] + '/' + dateData[0][2:]).strip()
        description = a.find('p', {'itemprop': "description"}).text.strip()
        webLink = 'http://therunningbug.co.uk' + a.find('a', {'itemprop': "url"}).get('href')
        category = a.find('span', {'class': "surface"}).text
        location = a.find('span', {'class': "region"}).text + ', ' + a.find('span', {'class': "county"}).text

        print(name, ' -- name')
        print(date, ', ', timeData, ' -- date, time')
        print(description, ' -- description')
        print(webLink, ' -- website link')
        print(category, ' -- category')
        print(location, ' -- location\n')

    page += 1
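As an aside, the manual slicing and splitting of the `datetime` attribute above can be done with the standard `datetime` module, which also validates the input. A sketch of the same conversion, using a sample timestamp in the format the code expects:

```python
from datetime import datetime

# Sample ISO-style value like the <time datetime="..."> attribute parsed above
raw = '2016-11-05T10:30:00'
d, t = raw.split('T')

# '2016-11-05' -> '11/05/16', matching the MM/DD/YY format built manually above
date = datetime.strptime(d, '%Y-%m-%d').strftime('%m/%d/%y')
timeData = t[:-3]  # drop the seconds, as in the original code
print(date, timeData)  # 11/05/16 10:30
```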