Errors when scraping with Beautiful Soup and Selenium

Time: 2019-02-27 18:23:09

Tags: python selenium selenium-webdriver web-scraping beautifulsoup

I'm fairly new to Python and web scraping. In this code I'm using both Bs4 and Selenium: Selenium automates clicking the "Show more" button, so that I can scrape all of the results rather than only those shown on the first page. The site I'm trying to scrape is: https://boards.euw.leagueoflegends.com/en/search?query=improve

However, since combining Bs4 with Selenium, the three fields I scrape (username, server and topic) raise the following two errors.

1) For both server and username I get AttributeError: 'NoneType' object has no attribute 'text':

Traceback (most recent call last):
  File "failoriginale.py", line 153, in <module>
    main()
  File "failoriginale.py", line 132, in main
    song_data = get_songs(index_page) # Get songs with metadata
  File "failoriginale.py", line 81, in get_songs
    username = row.find(class_='username').text.strip()
AttributeError: 'NoneType' object has no attribute 'text'

2) For topic I get:

Traceback (most recent call last):
  File "failoriginale.py", line 153, in <module>
    main()
  File "failoriginale.py", line 132, in main
    song_data = get_songs(index_page) # Get songs with metadata
  File "failoriginale.py", line 86, in get_songs
    topic = row.find('div', {'class':'discussion-footer byline opaque'}).find_all('a')[1].text.strip()
IndexError: list index out of range

However, before I combined bs4 with Selenium, these 3 fields worked just like the others, so I think the problem lies elsewhere. I don't understand what is wrong with song_data in the main function. I've looked through other questions on Stack Overflow but couldn't solve the problem. I'm new to scraping and to the bs4 and Selenium libraries, so sorry if this is a silly question.

Here is the code:

import sys
import time
import csv

from bs4 import BeautifulSoup
from selenium import webdriver

browser = webdriver.Firefox(executable_path='./geckodriver')
browser.get('https://boards.euw.leagueoflegends.com/en/search?query=improve&content_type=discussion')
html = browser.page_source # page_source is where selenium stores the html source

def get_songs(url):

    html = browser.page_source
    index_page = BeautifulSoup(html,'lxml') # Parse the page

    items = index_page.find(id='search-results') # Get the results list from the webpage
    if not items: # If the webpage does not contain the list, we should exit
        print('Something went wrong!', file=sys.stderr)
        sys.exit()
    data = list()
 # 'Show more' button: if the page has one, click it repeatedly for ~5 seconds
    if index_page.find('a', {"class": "box show-more",}):
        button = browser.find_element_by_class_name('box.show-more')
        timeout = time.time() + 5
        while True:
            button.click()
            time.sleep(5.25)
            if time.time() > timeout:
                break

    html = browser.page_source
    index_page = BeautifulSoup(html,'lxml')
    items = index_page.find(id='search-results')

    for row in items.find_all(class_='discussion-list-item'):

        username = row.find(class_='username').text.strip()
        question = row.find(class_='title-span').text.strip()
        sentence = row.find('span')['title']
        serverzone = row.find(class_='realm').text.strip()
        #print(serverzone)
        topic = row.find('div', {'class':'discussion-footer byline opaque'}).find_all('a')[1].text.strip()
        #print(topic)
        date=row.find(class_='timeago').get('title')
        #print(date)
        views = row.find(class_='view-counts byline').find('span', {'class' : 'number opaque'}).get('data-short-number')
        comments = row.find(class_='num-comments byline').find('span', {'class' : 'number opaque'}).get('data-short-number')

        # Store the data in a dictionary, and add that to our list
        data.append({
                     'username': username,
                     'topic':topic,
                     'question':question,
                     'sentence':sentence,
                     'server':serverzone,
                     'date':date,
                     'number_of_comments':comments,
                     'number_of_views':views
                    })
    return data
def get_song_info(url):
    browser.get(url)
    html2 = browser.page_source
    song_page = BeautifulSoup(html2, features="lxml")
    interesting_html= song_page.find('div', {'class' : 'list'})
    if not interesting_html: # Check if an article tag was found, not all pages have one
        print('No information available for song at {}'.format(url), file=sys.stderr)
        return {}
    answer = interesting_html.find('span', {'class' : 'high-quality markdown'}).find('p').text.strip() #.find('span', {"class": "high-quality markdown",}).find('p')
    return {'answer': answer} # Return the data of interest



def main():
    index_page = BeautifulSoup(html,'lxml')
    song_data = get_songs(index_page) # Get songs with metadata
    # For each row on the improve page, follow the link and extract the data
    for row in song_data:
        print('Scraping info on {}.'.format(row['link'])) # Might be useful for debugging
        url = row['link'] #defines that the url is the column link in the csv file 
        song_info = get_song_info(url) # Get lyrics and credits for this song, if available
        for key, value in song_info.items():
            row[key] = value # Add the new data to our dictionary
    with open('results.csv', 'w', encoding='utf-8') as f: # Open a csv file for writing
        fieldnames=['link','username','topic','question','sentence','server','date','number_of_comments','number_of_views','answer'] # These are the values we want to store
        writer = csv.DictWriter(f, fieldnames=fieldnames) # Write the collected rows out
        writer.writeheader()
        writer.writerows(song_data)

main()

Thanks for your help!

1 answer:

Answer 0 (score: 1)

I'd be tempted to use requests to retrieve the total results count and the per-batch results count, then loop clicking the button, with a wait condition, until all the results are present. They can then be grabbed in one go. The outline below can be re-worked as required. You could always set an end point of n pages and stop clicking after that, incrementing n inside the loop. You might also add WebDriverWait(d, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.inline-profile .username'))) at the end, before gathering the other items, to allow time after the last click; see the sketch after the code below.

import requests
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

data = requests.get('https://boards.euw.leagueoflegends.com/en/search?query=improve&json_wrap=1').json()
total = data['searchResultsCount']
batch = data['resultsCount']

d = webdriver.Chrome()
d.get('https://boards.euw.leagueoflegends.com/en/search?query=improve')

counter = batch
while counter < total:
    WebDriverWait(d, 20).until(EC.element_to_be_clickable((By.CSS_SELECTOR, '.show-more-label'))).click()
    counter += batch
    #print(counter)

userNames = [item.text for item in d.find_elements_by_css_selector('.inline-profile .username')]
topics = [item.text for item in d.find_elements_by_css_selector('.inline-profile + a')]
servers = [item.text for item in d.find_elements_by_css_selector('.inline-profile .realm')]
results = list(zip(userNames, topics, servers))
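
For example, that wait could sit right before the three list comprehensions, so nothing is collected while the final batch is still rendering. A minimal sketch (the 20-second timeout is an arbitrary choice):

# Give the page time to render after the last click before collecting anything
WebDriverWait(d, 20).until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.inline-profile .username'))
)
# ...then build userNames, topics and servers as above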

Interestingly, it seems to stop updating before the given end count even though the button remains clickable. The same thing happens when clicking manually.
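
As for the original errors: find() returns None when nothing matches, and some rows in the results apparently have no username/realm node or no second footer link, which is exactly what produces the AttributeError and the IndexError. If you would rather keep the BeautifulSoup loop, a minimal sketch of guarding each lookup could look like this (the safe_text helper is my own, for illustration):

def safe_text(node):
    # Return stripped text if the node exists, otherwise None
    return node.text.strip() if node else None

for row in items.find_all(class_='discussion-list-item'):
    username = safe_text(row.find(class_='username'))
    serverzone = safe_text(row.find(class_='realm'))
    footer = row.find('div', {'class': 'discussion-footer byline opaque'})
    links = footer.find_all('a') if footer else []
    topic = links[1].text.strip() if len(links) > 1 else None
    if username is None:
        continue  # Skip rows that are not regular discussion entries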