How to scrape all Test match details from cricinfo

Date: 2018-11-28 12:10:25

Tags: python-3.x beautifulsoup

I am trying to scrape the details of all Test matches, but the request fails with HTTP Error 504: Gateway Timeout. This is the code I have used with bs4 to get Test match details from cricinfo.

I need to scrape the details of 2000 Test matches. Here is my code:

import os
import time
import unicodedata
import urllib.request as req
from urllib.parse import urljoin
from bs4 import BeautifulSoup

BASE_URL = 'http://www.espncricinfo.com'

if not os.path.exists('./espncricinfo-fc'):
    os.mkdir('./espncricinfo-fc')

for i in range(0, 2000):

    soupy = BeautifulSoup(req.urlopen('http://search.espncricinfo.com/ci/content/match/search.html?search=test;all=1;page=' + str(i)).read(), 'html.parser')

    time.sleep(1)
    for new_host in soupy.findAll('a', {'class' : 'srchPlyrNmTxt'}):
        try:
            new_host = new_host['href']
        except KeyError:
            continue
        odiurl = urljoin(BASE_URL, new_host)
        new_host = unicodedata.normalize('NFKD', new_host)
        print(new_host)
        html = req.urlopen(odiurl).read()
        if html:
            with open('espncricinfo-fc/{0!s}'.format(new_host.split("/")[4]), "wb") as f:
                f.write(html)
                print(html)
        else:
            print("no html")

2 answers:

Answer 0: (score: 0)

Not sure why, but this seems to work for me.

I made a few changes to the loop over the links. I wasn't sure how you wanted the output written to file, so I left that part alone. But like I said, it seems to work fine for me.

import bs4
import requests  
import os
import time
import urllib.request as req

BASE_URL = 'http://www.espncricinfo.com'

if not os.path.exists('C:/espncricinfo-fc'):
    os.mkdir('C:/espncricinfo-fc')

for i in range(0, 2000):

    url = 'http://search.espncricinfo.com/ci/content/match/search.html?search=test;all=1;page=%s' %i
    html = requests.get(url)

    print('Checking page %s of 2000' % (i + 1))

    soupy = bs4.BeautifulSoup(html.text, 'html.parser')

    time.sleep(1)
    for new_host in soupy.findAll('a', {'class' : 'srchPlyrNmTxt'}):
        try:
            new_host = new_host['href']
        except KeyError:
            continue
        odiurl = BASE_URL + new_host
        new_host = odiurl
        print(new_host)
        html = req.urlopen(odiurl).read()

        if html:
            with open('C:/espncricinfo-fc/{0!s}'.format('_'.join(str.split(new_host, "/")[4:])), "wb") as f:
                f.write(html)
                #print(html)
        else:
            print("no html")

Answer 1: (score: 0)

This usually happens when you issue many requests too quickly: the server may be dropping connections, or a server-side firewall may be blocking you. Try increasing the sleep() interval or adding a random sleep.

import random

.....
for i in range(0, 2000):
    soupy = BeautifulSoup(....)

    time.sleep(random.randint(2,6))
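Beyond a fixed or random sleep, another way to cope with intermittent 504s is to retry the failed request with exponential backoff. Below is a minimal sketch (the helper name `fetch_with_backoff` and its parameters are illustrative, not from the answer); the `opener` argument is injectable so the retry logic can be exercised without hitting the site:

```python
import random
import time
import urllib.request as req
from urllib.error import HTTPError

def fetch_with_backoff(url, max_retries=4, base_delay=2.0, opener=req.urlopen):
    """Fetch url, retrying on HTTP 5xx responses with exponential backoff.

    `opener` defaults to urllib's urlopen but can be swapped out for testing.
    """
    for attempt in range(max_retries):
        try:
            return opener(url).read()
        except HTTPError as e:
            # give up on client errors (4xx) and on the final attempt
            if e.code < 500 or attempt == max_retries - 1:
                raise
            # wait base_delay, 2*base_delay, 4*base_delay, ... plus jitter
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

With `base_delay=2.0` this sleeps roughly 2s, 4s, 8s between retries, which gives a struggling server time to recover instead of hammering it every second.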