How to scrape embedded links and table information

Date: 2019-06-01 10:49:31

Tags: python web-scraping beautifulsoup python-requests

I am trying to scrape the information on the datasets available on this website.

I want to collect the URLs of the resources and, at a minimum, the titles of the datasets.

Using this resource as an example, I want to capture the URL embedded in "Go to resource" and the title listed in the table.

I have created a basic scraper, but it doesn't seem to work:

import requests
import csv
from bs4 import BeautifulSoup

site = requests.get('https://data.nsw.gov.au/data/dataset');
data_list=[]

if site.status_code is 200:
    content = BeautifulSoup(site.content, 'html.parser')
    internals = content.select('.resource-url-analytics')
    for url in internals:
        title = internals.select=('.resource-url-analytics')[0].get_text()
        link = internals.select=('.resource-url-analytics')[0].get('href')
        new_data = {"title": title, "link": link}
        data_list.append(new_data)
    with open ('selector.csv','w') as file:
            writer = csv.DictWriter(file, fieldnames = ["dataset", "link"], delimiter = ';')
            writer.writeheader()
            for row in data_list:
                writer.writerow(row)

I would like to write the output to a CSV with columns for the URL and the title.

Here is an example of the desired output:

Desired output table

Any help is much appreciated.

2 answers:

Answer 0 (score: 1)

Have a look at the API for the datasets, which will likely be the easiest way to do this.
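For example, here is a minimal sketch of hitting the API directly instead of scraping the HTML. It assumes the portal exposes the standard CKAN package_search action under /data/api/3/action/; the rows and start parameters and the response layout are standard CKAN conventions, not something verified against this particular site:

import requests

# Hypothetical sketch: page through the standard CKAN package_search action.
# Assumes the API root is https://data.nsw.gov.au/data/api/3/action/ (not verified here).
api = 'https://data.nsw.gov.au/data/api/3/action/package_search'
resp = requests.get(api, params={'rows': 10, 'start': 0}).json()

for package in resp['result']['results']:
    title = package['title']
    # each package can carry several resources, each with its own url
    urls = [resource['url'] for resource in package.get('resources', [])]
    print(title, urls)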

In the meantime, here is a way to grab the id-level API links from those pages, store the full package info for every package in one list, data_sets, and keep just the information of interest in another variable (results). Be sure to review the API documentation in case there is a better method, for example, it would be nice if ids could be submitted in batches rather than one by one.

The answer below uses the endpoint, detailed in the documentation, for obtaining a full JSON representation of a dataset, resource or other object.

Taking the current first result on the target landing page:

Vegetation of the Guyra 1:25000 map sheet VIS_ID 240

We want the last child a of the parent h3, whose own parent carries the class .dataset-item. In the selector below, the spaces between selectors are descendant combinators.

.dataset-item h3 a:last-child

You could shorten this to h3 a:last-child for a small efficiency gain.

This relationship reliably selects all the relevant links on the page.
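A quick way to sanity-check that selector against the landing page (a sketch; it assumes the lxml parser is installed, though html.parser works too):

import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get('https://data.nsw.gov.au/data/dataset').content, 'lxml')
# one link per listed dataset on the first results page
links = [a['href'] for a in soup.select('.dataset-item h3 a:last-child')]
print(len(links), links[:3])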

Continuing with this example and visiting the url retrieved for that first listed item, we can find the id for the API endpoint (which retrieves the JSON related to this package) using an attribute = value selector with the *= (contains) operator. We know this particular API endpoint has a common string, so we substring match on the href attribute value:

[href*="/api/3/action/package_show?id="]

The domain can vary and some of the retrieved links are relative, so we have to test whether each link is relative and prepend the appropriate domain if so.
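The code below handles this by rebuilding the scheme and domain with urlparse and checking for a leading slash. An equivalent alternative is urllib.parse.urljoin, which leaves absolute links untouched and resolves relative ones against the page they came from; a small sketch with made-up example links:

from urllib.parse import urljoin

page_url = 'https://data.nsw.gov.au/data/dataset/some-dataset'  # hypothetical dataset page
hrefs = ['/api/3/action/package_show?id=abc',                                  # relative link
         'https://datasets.seed.nsw.gov.au/api/3/action/package_show?id=abc']  # absolute link
for href in hrefs:
    print(urljoin(page_url, href))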

On that first dataset page, the matching html is an anchor whose href contains that API string.


Notes:

  1. data_sets is a list containing all of the package data for each package, so it is extensive. I kept it in case you are interested in looking at what is inside those packages (in addition to reviewing the API documentation).
  2. You can get the total number of pages from the soup object on the page via:

   num_pages = int(soup.select('[href^="/data/dataset?page="]')[-2].text)

You can alter the loop to cover fewer pages.

  3. A Session object is used for the efficiency of re-using the connection. I'm sure there are other improvements to be made. In particular, I would look for any way of reducing the number of requests (which is why I mentioned looking for a batch-id endpoint).
  4. There can be more than one resource URL within a returned package. See the example here. You can edit the code to handle this, for example by writing one row per resource, as in the sketch after the first code block below.

Python:

from bs4 import BeautifulSoup as bs
import requests
import csv
from urllib.parse import urlparse

json_api_links = []
data_sets = []

def get_links(s, url, css_selector):
    r = s.get(url)
    soup = bs(r.content, 'lxml')
    base = '{uri.scheme}://{uri.netloc}'.format(uri=urlparse(url))
    links = [base + item['href'] if item['href'][0] == '/' else item['href'] for item in soup.select(css_selector)]
    return links

results = []
#debug = []
with requests.Session() as s:

    for page in range(1,2):  #you decide how many pages to loop

        links = get_links(s, 'https://data.nsw.gov.au/data/dataset?page={}'.format(page), '.dataset-item h3 a:last-child')

        for link in links:
            data = get_links(s, link, '[href*="/api/3/action/package_show?id="]')
            json_api_links.append(data)
            #debug.append((link, data))
    resources = list(set([item.replace('opendata','') for sublist in json_api_links for item in sublist])) #can just leave as set

    for link in resources:
        try:
            r = s.get(link).json()  #entire package info
            data_sets.append(r)
            title = r['result']['title'] #certain items

            if 'resources' in r['result']:
                urls = ' , '.join([item['url'] for item in r['result']['resources']])
            else:
                urls = 'N/A'
        except:
            title = 'N/A'
            urls = 'N/A'
        results.append((title, urls))

    with open('data.csv','w', newline='') as f:
        w = csv.writer(f)
        w.writerow(['Title','Resource Url'])
        for row in results:
            w.writerow(row)
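As mentioned in note 4, a package can hold several resource URLs, and the code above joins them into a single cell. If you would rather write one CSV row per resource, the per-package step could look like the following sketch, where package stands in for the parsed JSON of one package_show response (the r variable above) with made-up values:

# Sketch: emit one (title, url) row per resource instead of joining the urls.
package = {'result': {'title': 'Example dataset',  # made-up values for illustration
                      'resources': [{'url': 'https://example.org/a.csv'},
                                    {'url': 'https://example.org/b.zip'}]}}

rows = [(package['result']['title'], resource['url'])
        for resource in package['result'].get('resources', [])]
print(rows)  # one (title, url) tuple per resource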

All pages

(very long running, consider using threads / asyncio):

from bs4 import BeautifulSoup as bs
import requests
import csv
from urllib.parse import urlparse

json_api_links = []
data_sets = []

def get_links(s, url, css_selector):
    r = s.get(url)
    soup = bs(r.content, 'lxml')
    base = '{uri.scheme}://{uri.netloc}'.format(uri=urlparse(url))
    links = [base + item['href'] if item['href'][0] == '/' else item['href'] for item in soup.select(css_selector)]
    return links

results = []
#debug = []

with requests.Session() as s:
    r = s.get('https://data.nsw.gov.au/data/dataset')
    soup = bs(r.content, 'lxml')
    num_pages = int(soup.select('[href^="/data/dataset?page="]')[-2].text)
    links = [item['href'] for item in soup.select('.dataset-item h3 a:last-child')]

    for link in links:     
        data = get_links(s, link, '[href*="/api/3/action/package_show?id="]')
        json_api_links.append(data)
        #debug.append((link, data))
    if num_pages > 1:
        for page in range(1, num_pages + 1):  #you decide how many pages to loop

            links = get_links(s, 'https://data.nsw.gov.au/data/dataset?page={}'.format(page), '.dataset-item h3 a:last-child')

            for link in links:
                data = get_links(s, link, '[href*="/api/3/action/package_show?id="]')
                json_api_links.append(data)
                #debug.append((link, data))

        resources = list(set([item.replace('opendata','') for sublist in json_api_links for item in sublist])) #can just leave as set

        for link in resources:
            try:
                r = s.get(link).json()  #entire package info
                data_sets.append(r)
                title = r['result']['title'] #certain items

                if 'resources' in r['result']:
                    urls = ' , '.join([item['url'] for item in r['result']['resources']])
                else:
                    urls = 'N/A'
            except:
                title = 'N/A'
                urls = 'N/A'
            results.append((title, urls))

    with open('data.csv','w', newline='') as f:
        w = csv.writer(f)
        w.writerow(['Title','Resource Url'])
        for row in results:
            w.writerow(row)
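On the threading / asyncio note above: the per-package JSON requests are the slow part and they parallelise well. Here is a sketch using concurrent.futures with plain requests calls; the resources list below is a stand-in for the deduplicated package_show links built in the code above:

from concurrent.futures import ThreadPoolExecutor
import requests

# stand-in for the deduplicated list of package_show links built above
resources = ['https://datasets.seed.nsw.gov.au/api/3/action/package_show?id=example-id']  # hypothetical

def fetch(url):
    # fetch one package's JSON and reduce it to (title, joined resource urls)
    try:
        result = requests.get(url, timeout=30).json()['result']
        urls = ' , '.join(item['url'] for item in result.get('resources', [])) or 'N/A'
        return (result['title'], urls)
    except Exception:
        return ('N/A', 'N/A')

with ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(fetch, resources))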

Answer 1 (score: 0)

For simplicity, use the selenium package:

from selenium import webdriver
import os

# initialise browser
browser = webdriver.Chrome(os.getcwd() + '/chromedriver')
browser.get('https://data.nsw.gov.au/data/dataset')

# find all elements by xpath
get_elements = browser.find_elements_by_xpath('//*[@id="content"]/div/div/section/div/ul/li/div/h3/a[2]')

# collect data
data = []
for item in get_elements:
    data.append((item.text, item.get_attribute('href')))

Output:

('Vegetation of the Guyra 1:25000 map sheet VIS_ID 240', 'https://datasets.seed.nsw.gov.au/dataset/vegetation-of-the-guyra-1-25000-map-sheet-vis_id-2401ee52')
('State Vegetation Type Map: Riverina Region Version v1.2 - VIS_ID 4469', 'https://datasets.seed.nsw.gov.au/dataset/riverina-regional-native-vegetation-map-version-v1-0-vis_id-4449')
('Temperate Highland Peat Swamps on Sandstone (THPSS) spatial distribution maps...', 'https://datasets.seed.nsw.gov.au/dataset/temperate-highland-peat-swamps-on-sandstone-thpss-vegetation-maps-vis-ids-4480-to-4485')
('Environmental Planning Instrument - Flood', 'https://www.planningportal.nsw.gov.au/opendata/dataset/epi-flood')

and so on
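To produce the CSV asked for in the question, the (text, href) pairs collected in data can then be written out with the csv module; a short sketch continuing from the code above (the output file name is arbitrary):

import csv

# write the (title, link) pairs collected by selenium to a CSV file
with open('selenium_output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Title', 'Resource Url'])
    writer.writerows(data)  # data is the list built in the code above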