Python Web Scraping - How do I scrape a site like this?

Asked: 2019-11-21 00:21:06

Tags: python web-scraping beautifulsoup python-requests

OK, so I need to scrape the following webpage: https://www.programmableweb.com/category/all/apis?deadpool=1

It's a list of APIs. There are roughly 22,000 APIs to scrape.


I need to:

1) Get the URL of every API in the table (pages 1-889) and scrape the following info:

  • API name
  • Description
  • Category
  • Submitted

2) Then, I need to scrape a bunch of info from each of those URLs.

3) Export the data to CSV.


The problem is that I'm a bit lost on how to approach this project. As far as I can tell, no AJAX calls are made to populate the table, which means I'll have to parse the HTML directly (right?)
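You can confirm this with a quick check: fetch a listing page with requests alone and see whether the table rows are already in the returned HTML. A minimal sketch, assuming the CSS selector below matches the page's markup:

import requests
from bs4 import BeautifulSoup

# Fetch the first listing page with plain HTTP: no JavaScript is executed here.
res = requests.get('https://www.programmableweb.com/category/all/apis?deadpool=1')
soup = BeautifulSoup(res.text, 'html.parser')

# If rows show up here, the table is rendered server-side and no AJAX is involved.
rows = soup.select('table.views-table tbody tr')
print(f'Found {len(rows)} table rows in the raw HTML')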


In my head, the logic would go something like this (a rough sketch follows the list):

  1. Scrape the table with the requests and BS4 libraries

  2. Then somehow grab the HREF from each row

  3. Visit that HREF, scrape the data, move on to the next

  4. Rinse and repeat for all table rows
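A rough skeleton of that logic (the selectors are assumptions about the page's markup, not tested values):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

BASE = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page={}'

for page in range(0, 889):                        # step 1: every listing page
    soup = BeautifulSoup(requests.get(BASE.format(page)).text, 'html.parser')
    for row in soup.select('table.views-table tbody tr'):
        link = row.find('a')                      # step 2: the HREF in the row
        if link is None:
            continue
        detail_url = urljoin('https://www.programmableweb.com', link['href'])
        detail = BeautifulSoup(requests.get(detail_url).text, 'html.parser')
        # step 3: scrape what you need from `detail`; the loop repeats (step 4)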


Am I on the right track? Is this doable with requests and BS4?

Here are screenshots of what I've been trying to explain.

Any help would be hugely appreciated. This is making my head hurt, haha.

2 Answers:

Answer 0 (score: 0):

If you're going to do scraping, you should read up on web scraping first.

from bs4 import BeautifulSoup
import csv, os, requests
from urllib import parse


def SaveAsCsv(list_of_rows):
    # Append a single row to data.csv.
    try:
        with open('data.csv', mode='a', newline='', encoding='utf-8') as outfile:
            csv.writer(outfile).writerow(list_of_rows)
    except PermissionError:
        print("Please make sure data.csv is closed\n")

# Write the header row only on the first run.
if os.path.isfile('data.csv') and os.access('data.csv', os.R_OK):
    print("File data.csv already exists\n")
else:
    SaveAsCsv(['api_name', 'api_link', 'api_desc', 'api_cat'])

BaseUrl = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page={}'
for i in range(0, 889):  # the pager appears to be zero-based: page=0 is the first page
    print('## Getting Page {} out of 889'.format(i + 1))
    url = BaseUrl.format(i)
    res = requests.get(url)
    soup = BeautifulSoup(res.text, 'html.parser')
    # Each listing page contains a single 4-column table of APIs.
    table_rows = soup.select('div.view-content > table[class="views-table cols-4 table"] > tbody tr')
    for row in table_rows:
        tds = row.select('td')
        api_name = tds[0].text.strip()
        # hrefs in the table are relative, so resolve them against the page URL.
        api_link = parse.urljoin(url, tds[0].find('a').get('href'))
        api_desc = tds[1].text.strip()
        api_cat = tds[2].text.strip() if len(tds) >= 3 else ''
        SaveAsCsv([api_name, api_link, api_desc, api_cat])
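One practical addition, not in the original answer: with 889 pages to fetch, it is worth reusing a single connection and pausing between requests. A sketch:

import time
import requests

BaseUrl = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page={}'
session = requests.Session()   # reuse one TCP connection for all 889 requests
for i in range(0, 889):
    res = session.get(BaseUrl.format(i))
    # ... parse res.text exactly as above ...
    time.sleep(1)              # be polite: pause between requests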

Answer 1 (score: 0):

Here we use requests, BeautifulSoup, and pandas:

import requests
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page='

num = int(input('How many pages to parse?> '))
print('please wait....')
name = []
desc = []
cat = []
sub = []
for i in range(0, num):  # the pager is zero-based, so start at page=0
    r = requests.get(f"{url}{i}")
    soup = BeautifulSoup(r.text, 'html.parser')
    # Each column of the table carries its own views-field class,
    # so the four fields are collected column by column.
    for item1 in soup.findAll('td', attrs={'class': 'views-field views-field-title col-md-3'}):
        name.append(item1.text)
    for item2 in soup.findAll('td', attrs={'class': 'views-field views-field-search-api-excerpt views-field-field-api-description hidden-xs visible-md visible-sm col-md-8'}):
        desc.append(item2.text)
    for item3 in soup.findAll('td', attrs={'class': 'views-field views-field-field-article-primary-category'}):
        cat.append(item3.text)
    for item4 in soup.findAll('td', attrs={'class': 'views-field views-field-created'}):
        sub.append(item4.text)

# Stitch the four column lists back together into rows.
result = list(zip(name, desc, cat, sub))

df = pd.DataFrame(
    result, columns=['API Name', 'Description', 'Category', 'Submitted'])
df.to_csv('output.csv')

print('Task completed; result saved to output.csv.')


Sample output:

(screenshot)

Now for parsing the hrefs:

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Use the same deadpool=1 filter as above so both CSVs cover the same APIs.
url = 'https://www.programmableweb.com/category/all/apis?deadpool=1&page='

num = int(input('How many pages to parse?> '))
print('please wait....')

# First collect the detail-page link from each row's title cell.
links = []
for i in range(0, num):
    r = requests.get(f"{url}{i}")
    soup = BeautifulSoup(r.text, 'html.parser')
    for link in soup.findAll('td', attrs={'class': 'views-field views-field-title col-md-3'}):
        for href in link.findAll('a'):
            result = 'https://www.programmableweb.com' + href.get('href')
            links.append(result)

# Then visit every detail page and grab the text of all spec fields.
spans = []
for link in links:
    r = requests.get(link)
    soup = BeautifulSoup(r.text, 'html.parser')
    span = [span.text for span in soup.select('div.field span')]
    spans.append(span)

df = pd.DataFrame(spans)
df.to_csv('data.csv')
print('Task completed; result saved to data.csv.')
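One drawback of div.field span is that every page comes back as an anonymous positional list, so the DataFrame gets numbered columns. If each spec field on the detail page pairs a label with a value (an assumption about the markup; adjust the selectors if it differs), you could build a dict per page so the columns get names:

import requests
from bs4 import BeautifulSoup
import pandas as pd

def scrape_detail(link):
    # Return a {label: value} dict for one detail page (markup assumed).
    soup = BeautifulSoup(requests.get(link).text, 'html.parser')
    record = {}
    for field in soup.select('div.field'):
        label = field.find('label')
        value = field.find('span')
        if label is not None and value is not None:
            record[label.text.strip()] = value.text.strip()
    return record

# links is the list collected by the loop above
# df = pd.DataFrame([scrape_detail(link) for link in links])
# df.to_csv('data.csv', index=False)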


A sample view:

(screenshot)

If you want to combine the two CSV files, here is the code:

import pandas as pd

a = pd.read_csv("output.csv")
b = pd.read_csv("data.csv")
merged = a.merge(b)
merged.to_csv("final.csv", index=False)
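Note that a.merge(b) with no key joins on whatever columns the two files share, which here is only the unnamed index column that to_csv wrote out. Since both files were written in the same row order, a positional join is more explicit:

import pandas as pd

a = pd.read_csv("output.csv", index_col=0)
b = pd.read_csv("data.csv", index_col=0)

# The two files were written in the same row order, so join side by side.
merged = pd.concat([a, b], axis=1)
merged.to_csv("final.csv", index=False)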
