How do I use requests for web scraping?

Date: 2020-07-09 23:30:35

Tags: python web-scraping beautifulsoup python-requests

This is my first attempt at web scraping, and I'm following a tutorial. The code I have so far is:

from bs4 import BeautifulSoup
import requests

source = requests.get('https://www.usnews.com/best-colleges/rankings/national-universities')

soup = BeautifulSoup(source, 'lxml')

print(soup.prettify())

However, I'm getting this error:


Traceback (most recent call last):
  File "/Users/alanwen/Desktop/webscrape.py", line 4, in <module>
    source = requests.get('https://www.usnews.com/best-colleges/rankings/national-universities')
  File "/Users/alanwen/Library/Python/2.7/lib/python/site-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/Users/alanwen/Library/Python/2.7/lib/python/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/Users/alanwen/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/Users/alanwen/Library/Python/2.7/lib/python/site-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/Users/alanwen/Library/Python/2.7/lib/python/site-packages/requests/adapters.py", line 529, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='www.usnews.com', port=443): Read timed out. (read timeout=None)
[Finished in 25.1s with exit code 1]
[shell_cmd: python -u "/Users/alanwen/Desktop/webscrape.py"]
[dir: /Users/alanwen/Desktop]
[path: /Library/Frameworks/Python.framework/Versions/3.8/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:/opt/X11/bin:~/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands]

3 Answers:

Answer 0 (score: 0)

You have to add .text at the end of the requests line to get the actual source code of the web page. Instead of:

from bs4 import BeautifulSoup
import requests

source = requests.get('https://www.usnews.com/best-colleges/rankings/national-universities')

soup = BeautifulSoup(source, 'lxml')

print(soup.prettify())

use:

from bs4 import BeautifulSoup
import requests

source = requests.get('https://www.usnews.com/best-colleges/rankings/national-universities').text

soup = BeautifulSoup(source, 'lxml')

print(soup.prettify())

If you get the error bs4.FeatureNotFound: Couldn't find a tree builder with the features you requested: lxml. Do you need to install a parser library?, install the lxml module via pip:
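pip install lxml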

Answer 1 (score: 0)

This site is a bad choice for a first web-scraping project, because I don't think it is an ordinary site as far as scraping goes; you will have to use selenium for it, as sketched below.
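A minimal sketch of that approach, assuming selenium and a matching browser driver (e.g. chromedriver) are installed; the code is illustrative, not from the original answer:

from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Chrome()  # assumes chromedriver is on PATH
driver.get('https://www.usnews.com/best-colleges/rankings/national-universities')

# selenium drives a real browser, so the page's JavaScript runs
# before we hand the rendered HTML to BeautifulSoup
soup = BeautifulSoup(driver.page_source, 'lxml')
driver.quit()

print(soup.prettify())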


Answer 2 (score: -1)

This page needs the User-Agent header to recognize the browser. It can even be the incomplete Mozilla/5.0, but requests normally sends python-requests/2.23.0.

Without the correct header this server blocks the connection, and after a while you may get a "timed out" message, because requests gives up waiting for data from the server.
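As a side note (my addition, not part of the original answer): you can pass an explicit timeout to requests.get() so a blocked connection fails quickly and predictably instead of hanging:

import requests

url = 'https://www.usnews.com/best-colleges/rankings/national-universities'
headers = {'User-Agent': 'Mozilla/5.0'}

# raises requests.exceptions.ReadTimeout if the server sends nothing for 10 seconds
r = requests.get(url, headers=headers, timeout=10)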

BTW: BeautifulSoup needs source.text or source.content, not source (which is a requests Response object).


Working code:

import requests
from bs4 import BeautifulSoup

url = 'https://www.usnews.com/best-colleges/rankings/national-universities'

# a browser-like User-Agent so the server doesn't block the request
headers = {'User-Agent': 'Mozilla/5.0'}

r = requests.get(url, headers=headers)

# parse the raw bytes of the response (r.content) with the lxml parser
soup = BeautifulSoup(r.content, 'lxml')

print(soup.prettify())

BTW: using the page https://httpbin.org/ you can check what you are sending to the server:

import requests

r = requests.get('https://httpbin.org/get')
#r = requests.post('https://httpbin.org/post')
#r = requests.get('https://httpbin.org/ip')
#r = requests.get('https://httpbin.org/user-agent')

print( r.text )
#print( r.content )
#print( r.json() )

You can also check (if you used a URL with /get or /post):

print( r.json()['headers']['User-Agent'] ) 

Or you can check it in the request object:

print( r.request.headers['User-Agent'] )

and you will see the User-Agent. The /get endpoint, for example, returns:

{
  "args": {}, 
  "headers": {
    "Accept": "*/*", 
    "Accept-Encoding": "gzip, deflate", 
    "Host": "httpbin.org", 
    "User-Agent": "python-requests/2.23.0", 
    "X-Amzn-Trace-Id": "Root=1-5f07c942-067f5b72784a207b31e76ce4"
  }, 
  "origin": "83.23.22.221", 
  "url": "https://httpbin.org/get"
}

Both of the checks above print the same default value:

python-requests/2.23.0