Where should the try/except go so the script keeps running?

Date: 2015-07-26 18:46:18

Tags: python

When the error below occurs, I want the script to keep running. I have tried placing a try/except in several spots, but I can't work out where it should go so that, if the code fails, the loop just continues to the next item.


The full traceback:

Traceback (most recent call last):
  File "C:\Users\Laptop\Desktop\score\BBC_Grabber.py", line 93, in <module>
    all_league_results()
  File "C:\Users\Laptop\Desktop\score\BBC_Grabber.py", line 84, in all_league_results
    parse_page(subr.text)
  File "C:\Users\Laptop\Desktop\score\BBC_Grabber.py", line 18, in parse_page
    date = subsoup.find('div', attrs={'id':'article-sidebar'}).findNext('span').text
AttributeError: 'NoneType' object has no attribute 'findNext'
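The error means `subsoup.find(...)` returned `None`: the page has no `div` with `id='article-sidebar'`, and calling `.findNext()` on `None` raises `AttributeError`. A minimal reproduction with hypothetical HTML:

```python
from bs4 import BeautifulSoup

# Hypothetical page that lacks the 'article-sidebar' div the scraper expects
subsoup = BeautifulSoup("<html><body><p>no sidebar here</p></body></html>",
                        "html.parser")

sidebar = subsoup.find('div', attrs={'id': 'article-sidebar'})
print(sidebar)  # None: when the tag is absent, find() returns None

# Guard before chaining, instead of calling .findNext() on None
date = sidebar.findNext('span').text if sidebar is not None else ''
```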

1 answer:

Answer 0: (score: 0)

You have to put the try/except around the function call, not around its definition. This could be a working solution:

# -*- coding: utf-8 -*-
"""
Created on Sun Jul 26 20:58:53 2015

@author: george
"""

import requests
from bs4 import BeautifulSoup 
import csv
import re
import time
import logging


def parse_page(data):
        # Sleep to avoid excessive requests
        time.sleep(1)

        subsoup = BeautifulSoup(data,"html.parser")
        # NOTE: this extra request is never used below and could be removed
        rs = requests.get("http://www.bbc.co.uk/sport/0/football/31776459")
        ssubsoup = BeautifulSoup(rs.content,"html.parser")
        matchoverview = subsoup.find('div', attrs={'id':'match-overview'})
        print '--------------'
        date = subsoup.find('div', attrs={'id':'article-sidebar'}).findNext('span').text
        league = subsoup.find('a', attrs={'class':'secondary-nav__link'}).findNext('span').findNext('span').text

        #HomeTeam info printing
        homeTeam = matchoverview.find('div', attrs={'class':'team-match-details'}).findNext('span').findNext('a').text
        homeScore = matchoverview.find('div', attrs={'class':'team-match-details'}).findNext('span').findNext('span').text
        homeGoalScorers = []
        for goals in matchoverview.find('div', attrs={'class':'team-match-details'}).findNext('p').find_all('span'):
            homeGoalScorers.append(goals.text.replace(u'\u2032', "'"))
        homeGoals = homeGoalScorers
        homeGoals2 = ''.join(homeGoals)
        homeGoals3 = re.sub("[^0-9']","",homeGoals2)
        homeGoals4 = homeGoals3.replace("'","',")
        homeGoals5 = homeGoals4.replace("'","H")
        if homeScore == '0':
                homeGoals5 =''

        #AwayTeam info printing
        awayTeam = matchoverview.find('div', attrs={'id': 'away-team'}).find('div', attrs={'class':'team-match-details'}).findNext('span').findNext('a').text
        awayScore = matchoverview.find('div', attrs={'id': 'away-team'}).find('div', attrs={'class':'team-match-details'}).findNext('span').findNext('span').text
        awayGoalScorers = []
        for goals in matchoverview.find('div', attrs={'id': 'away-team'}).find('div', attrs={'class':'team-match-details'}).findNext('p').find_all('span'):
            awayGoalScorers.append(goals.text.replace(u'\u2032', "'"))
        awayGoals = awayGoalScorers
        awayGoals2 = ''.join(awayGoals)
        awayGoals3 = re.sub("[^0-9']","",awayGoals2)
        awayGoals4 = awayGoals3.replace("'","',")
        awayGoals5 = awayGoals4.replace("'","A")
        if awayScore == '0':
                awayGoals5 =''

        #combine scores
        scores = homeGoals5+awayGoals5

        #Printouts
        print date
        print league
        print '{0} {1} - {2} {3}'.format(homeTeam, homeScore, awayTeam, awayScore)
        print scores
        if len(homeTeam) >1:
                with open('score.txt', 'a') as f:
                        writer = csv.writer(f)
                        writer.writerow([league,date,homeTeam,awayTeam,scores])


def all_league_results():
    r = requests.get("http://www.bbc.co.uk/sport/football/league-two/results")
    soup = BeautifulSoup(r.content,"html.parser")

    # Save Teams

    for link in soup.find_all("a", attrs={'class': 'report'}):
        fullLink = 'http://www.bbc.com' + link['href']
        time.sleep(2)
        subr = requests.get(fullLink)
        logging.basicConfig(filename='LogReport.log',level=logging.DEBUG)
        logging.debug('DEBUG ERROR:')
        logging.info('INFO ERROR:')
        logging.warning('WARNING ERROR:')
        logging.error('ERROR WARNING:')
        logging.critical('CRITICAL ERROR:')

        try:
            parse_page(subr.text)
        except:
            print "Item Missing"
            with open('score.txt', 'a') as f:
                writer = csv.writer(f)
                writer.writerow(["--Item Missing--"])


def specific_game_results(url):
    subr = requests.get(url)
    parse_page(subr.text)

#get specific games results
#specific_game_results('http://www.bbc.co.uk/sport/0/football/31776459')
#get all current league results
all_league_results()

Note that you should catch specific exceptions rather than using a bare except.
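As a sketch (the `safe_parse` wrapper is hypothetical, not part of the original code), the loop body could catch only the parsing error seen in the traceback and let anything unexpected propagate:

```python
import csv
import logging

def safe_parse(parse_page, page_text):
    # Hypothetical wrapper: catch only the error we expect from a page
    # with missing tags (AttributeError), not every exception.
    try:
        parse_page(page_text)
    except AttributeError:
        logging.warning('Expected tag missing, skipping page')
        with open('score.txt', 'a') as f:
            csv.writer(f).writerow(["--Item Missing--"])
```

With this, an unexpected error in the parser (say, a `ZeroDivisionError`) still crashes loudly instead of being silently recorded as a missing item.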
