Requests library

Date: 2017-12-13 17:55:14

Tags: python html web screen-scraping

I am new to Python and its available libraries, and I am trying to write a script to scrape a website. I want to read all the links on a parent page and have my script parse and read the data from all of the parent page's child links.

For some reason, my code is producing this series of errors:

python ./scrape.py
/
Traceback (most recent call last):
  File "./scrape.py", line 27, in <module>
    a = requests.get(url)
  File "/Library/Python/2.7/site-packages/requests/api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 494, in request
    prep = self.prepare_request(req)
  File "/Library/Python/2.7/site-packages/requests/sessions.py", line 437, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/Library/Python/2.7/site-packages/requests/models.py", line 305, in prepare
    self.prepare_url(url, params)
  File "/Library/Python/2.7/site-packages/requests/models.py", line 379, in prepare_url
    raise MissingSchema(error)
requests.exceptions.MissingSchema: Invalid URL '/': No schema supplied. Perhaps you meant http:///?
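
The failure is easy to reproduce in isolation: the script printed the href '/' just before crashing, and passing a scheme-less path like that to requests.get always raises MissingSchema, because requests only accepts absolute URLs. A minimal demonstration:

import requests

try:
    requests.get('/')    # a scheme-less href, like the one scraped above
except requests.exceptions.MissingSchema as err:
    print(err)           # Invalid URL '/': No schema supplied. Perhaps you meant http:///?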

My Python script:

from bs4 import BeautifulSoup
import requests

page = 'https://www.investopedia.com/terms/s/stop-limitorder.asp'

r = requests.get(page)                         # request the HTML document
data = r.text                                  # raw HTML text of the response
soup = BeautifulSoup(data, "html.parser")      # parse the HTML with BeautifulSoup

# loop over all <a> tags that hold URLs and request each linked page
for link in soup.find_all('a'):
    url = link.get('href')
    print(url)

    a = requests.get(url)

1 Answer:

Answer 0 (score: 0)

Some of the links on the page you are scraping are URLs relative to the site (https://www.investopedia.com), and requests cannot fetch a relative URL, which is what triggers the MissingSchema error. You have to turn such URLs into absolute ones by joining them with the site before crawling them.
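
urljoin handles both cases: it resolves a relative path against the base URL and leaves an already-absolute URL untouched. A quick illustration (the paths here are made-up examples):

>>> from urlparse import urljoin   # Python 2; use urllib.parse on Python 3
>>> urljoin('https://www.investopedia.com', '/terms/l/limitorder.asp')
'https://www.investopedia.com/terms/l/limitorder.asp'
>>> urljoin('https://www.investopedia.com', 'https://example.com/page')
'https://example.com/page'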

from urlparse import urlparse, urljoin

# Python 3:
# from urllib.parse import urlparse, urljoin

site = urlparse(page).scheme + "://" + urlparse(page).netloc
for link in soup.find_all('a'):
    url = link.get('href')
    if url is None:                 # some <a> tags carry no href at all
        continue
    if not urlparse(url).scheme:    # relative URL: prepend the site
        url = urljoin(site, url)
    print(url)
    a = requests.get(url)
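
Even with absolute URLs, a page like this typically links to things you may not want to request: mailto: and javascript: hrefs, and the same page more than once. Below is a minimal filtering sketch along the same lines; it reuses soup and page from above, and the helper name crawlable_links is my own, not from the original code:

from urlparse import urlparse, urljoin   # Python 2; urllib.parse on Python 3

def crawlable_links(soup, base):
    """Yield unique absolute http/https URLs from the <a> tags in soup."""
    seen = set()
    for link in soup.find_all('a'):
        href = link.get('href')
        if href is None:
            continue
        url = urljoin(base, href)     # resolves relative hrefs against the page URL
        if urlparse(url).scheme not in ('http', 'https'):
            continue                  # skip mailto:, javascript:, etc.
        if url not in seen:
            seen.add(url)
            yield url

for url in crawlable_links(soup, page):
    a = requests.get(url)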