How do I scrape multiple web pages without overwriting the results?

Date: 2019-05-08 09:01:21

Tags: python web-scraping beautifulsoup xml-parsing html-parsing

I'm new to scraping and trying to scrape multiple web pages from Transfermarkt without overwriting the previous results.

I know this question has been asked before, but I can't work it out for this case.

from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

for url in urls:
    r = requests.get(url,  headers = headers)
    soup = bs(r.content, 'html.parser')


    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)
print(df)

df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv')

It would also be helpful to be able to tell which page each row came from after scraping.

Any help would be much appreciated.

2 Answers:

Answer 0 (score: 1)

Your code above fetches and parses the data for each URL, but does not store it before moving on to the next URL. Since your call to pd.DataFrame() happens outside the loop, you end up building a dataframe from the page data of only the last URL in urls.

You need to create a dataframe outside the for loop, then append each URL's incoming data to that dataframe.

from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools

headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']
urls = ['https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1']

#### Add this before for-loop. ####
# Create empty dataframe with expected column names.
df_full = pd.DataFrame(columns = df_headers)

for url in urls:
    r = requests.get(url,  headers = headers)
    soup = bs(r.content, 'html.parser')


    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]


    #### Add this to for-loop. ####

    # Create a dataframe for page data.
    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    # Add page URL to index of page data.
    df.index = [url] * len(df)

    # Append page data to full data.
    df_full = df_full.append(df)

print(df_full)
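
Note: DataFrame.append has been removed in newer versions of pandas. If you hit that, an equivalent approach is to collect the per-page dataframes in a list and concatenate them once after the loop (a minimal sketch, assuming the same loop body as above builds df for each URL):

frames = []

for url in urls:
    # ... same scraping and per-page DataFrame construction as above ...
    df.index = [url] * len(df)
    frames.append(df)

# Combine all pages in one step.
df_full = pd.concat(frames)
print(df_full)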

Answer 1 (score: 0)

Two possible approaches:

  1. You can add a timestamp to the filename so that a different CSV file is created each time you run the script:

    from datetime import datetime
    
    timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
    df.to_csv(rf'Uljanas-MacBook-Air-2:~ uljanadufour$\{timestamp}  bayern-munich123.csv')
    

    This will give you filenames in the following format:

    "2019-05-08 10.39.05  bayern-munich123.csv"
    

    Using a year-month-day format means your files will automatically sort in chronological order.

  2. Or you can use append mode to add the data to an existing CSV file:

    df.to_csv(r'Uljanas-MacBook-Air-2:~ uljanadufour$\bayern-munich123.csv', mode='a')
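
    Note that in append mode the header row is written again on every run. Here is a minimal sketch that only writes the header when the file does not yet exist (this assumes import os and a plain local filename, used only as an example):

    import os

    path = 'bayern-munich123.csv'
    # Write the header only when the file is being created for the first time.
    df.to_csv(path, mode='a', header=not os.path.exists(path))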
    

Finally, your current code only saves the data from the last URL. If you want to save each URL to a different file, you need to indent the last two lines so they sit inside the loop. You can add a number to the filename to distinguish each URL, e.g. 1 and 2 as below; Python's enumerate() function can be used to give each URL a number:

from datetime import datetime
from bs4 import BeautifulSoup as bs
import requests
import re
import pandas as pd
import itertools


headers = {'User-Agent' : 'Mozilla/5.0'}
df_headers = ['position_number' , 'position_description' , 'name' , 'dob' , 'nationality' , 'height' , 'foot' , 'joined' , 'signed_from' , 'contract_until']

urls = [
    'https://www.transfermarkt.com/fc-bayern-munich-u17/kader/verein/21058/saison_id/2018/plus/1', 
    'https://www.transfermarkt.com/fc-hennef-05-u17/kader/verein/48776/saison_id/2018/plus/1'
]

for index, url in enumerate(urls, start=1):
    r = requests.get(url,  headers=headers)
    soup = bs(r.content, 'html.parser')

    position_number = [item.text for item in soup.select('.items .rn_nummer')]
    position_description = [item.text for item in soup.select('.items td:not([class])')]
    name = [item.text for item in soup.select('.hide-for-small .spielprofil_tooltip')]
    dob = [item.text for item in soup.select('.zentriert:nth-of-type(3):not([id])')]
    nationality = ['/'.join([i['title'] for i in item.select('[title]')]) for item in soup.select('.zentriert:nth-of-type(4):not([id])')]
    height = [item.text for item in soup.select('.zentriert:nth-of-type(5):not([id])')]
    foot = [item.text for item in soup.select('.zentriert:nth-of-type(6):not([id])')]
    joined = [item.text for item in soup.select('.zentriert:nth-of-type(7):not([id])')]
    signed_from = ['/'.join([item.find('img')['title'].lstrip(': '), item.find('img')['alt']]) if item.find('a') else ''
                   for item in soup.select('.zentriert:nth-of-type(8):not([id])')]
    contract_until = [item.text for item in soup.select('.zentriert:nth-of-type(9):not([id])')]

    df = pd.DataFrame(list(zip(position_number, position_description, name, dob, nationality, height, foot, joined, signed_from, contract_until)), columns = df_headers)

    timestamp = datetime.now().strftime("%Y-%m-%d %H.%M.%S")
    df.to_csv(rf'{timestamp}  bayern-munich123_{index}.csv')    

This will give you filenames such as:

"2019-05-08 11.44.38  bayern-munich123_1.csv"