How to retrieve the URLs and data from the URLs within a list of weblinks

Date: 2019-08-15 14:35:56

Tags: python python-3.x web-scraping beautifulsoup

Hello, I'm new to web scraping. I recently retrieved a list of weblinks, and those links contain URLs that hold table data. I intend to scrape that data, but I can't seem to get at the URLs. Any kind of help would be much appreciated.

The list of weblinks is:

https://aviation-safety.net/database/dblist.php?Year=1919

https://aviation-safety.net/database/dblist.php?Year=1920

https://aviation-safety.net/database/dblist.php?Year=1921

https://aviation-safety.net/database/dblist.php?Year=1922

https://aviation-safety.net/database/dblist.php?Year=2019

From the list of links, I intend to:

a. get the URLs within those links, e.g.

https://aviation-safety.net/database/record.php?id=19190802-0

https://aviation-safety.net/database/record.php?id=19190811-0

https://aviation-safety.net/database/record.php?id=19200223-0

b. get the data from the table inside each URL (e.g. incident date, incident time, type, operator, registration, msn, first flight, classification)

    #Get the list of weblinks

    import numpy as np
    import pandas as pd
    from bs4 import BeautifulSoup
    import requests

    headers = {'User-Agent': 'insert user agent'}

    #start of code

    mainurl = "https://aviation-safety.net/database/"
    def getAndParseURL(mainurl):
        result = requests.get(mainurl)
        soup = BeautifulSoup(result.content, 'html.parser')
        datatable = soup.find_all('a', href=True)
        return datatable

    datatable = getAndParseURL(mainurl)

    #go through the content and grab the URLs

    links = []
    for link in datatable:
        if 'Year' in link['href']:
            url = link['href']
            links.append(mainurl + url)

    #preview the links in a dataframe

    df = pd.DataFrame(links, columns=['url'])

    df.head(10)

    #save the links to a csv

    df.to_csv('aviationsafetyyearlinks.csv')


    #from the csv, read each weblink and collect the anchors inside it

    contents = []
    df = pd.read_csv('aviationsafetyyearlinks.csv')

    urls = df['url']
    for url in urls:
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'html.parser')
        #gather every anchor on the year page
        addtable = soup.find_all('a', href=True)
        contents.append(addtable)
I can only get the list of weblinks; I can't get the URLs or the data inside those weblinks. The code just keeps printing arrays. I'm not sure where my code goes wrong; any help is much appreciated, and thanks in advance.

1 answer:

Answer 0 (score: 0)

When requesting the pages, add a User-Agent.

    headers = {'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}
    mainurl = "https://aviation-safety.net/database/dblist.php?Year=1919"
    def getAndParseURL(mainurl):
        result = requests.get(mainurl, headers=headers)
        soup = BeautifulSoup(result.content, 'html.parser')
        #select only the anchors that point at record pages
        datatable = soup.select('a[href*="database/record"]')
        return datatable

    print(getAndParseURL(mainurl))
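
For part (b), pulling the table fields out of each record page, here is a minimal sketch built on the same headers. `get_record_links` and `parse_record` are hypothetical helper names, and the sketch assumes each record page lays its facts out as two-cell table rows (label, value); verify that row structure, and the resulting column names, against the live HTML before relying on them.

    from urllib.parse import urljoin

    import pandas as pd
    import requests
    from bs4 import BeautifulSoup

    headers = {'User-Agent':
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'}

    def get_record_links(year_url):
        #collect the record.php anchors from one year page
        soup = BeautifulSoup(requests.get(year_url, headers=headers).content, 'html.parser')
        #urljoin resolves each href whether it is relative or absolute
        return [urljoin(year_url, a['href'])
                for a in soup.select('a[href*="database/record"]')]

    def parse_record(record_url):
        #read the label/value rows of the record table into a dict;
        #the two-cell row layout is an assumption - adjust the selectors
        #if the live page structures its fields differently
        soup = BeautifulSoup(requests.get(record_url, headers=headers).content, 'html.parser')
        fields = {}
        for row in soup.select('table tr'):
            cells = row.find_all('td')
            if len(cells) == 2:
                label = cells[0].get_text(strip=True).rstrip(':')
                fields[label] = cells[1].get_text(strip=True)
        return fields

    year_url = 'https://aviation-safety.net/database/dblist.php?Year=1919'
    records = [parse_record(link) for link in get_record_links(year_url)]
    df = pd.DataFrame(records)
    print(df.head())

Building the DataFrame from a list of dicts lets pandas align whatever labels the records actually expose; a record that lacks a field simply gets NaN in that column.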