Why do I get an SSL error when scraping a website?

Time: 2019-03-22 09:17:22

Tags: python ssl certificate

I have the following Python script to scrape the price of a monitor from https://www.notebooksbilliger.de:

from lxml import html
import csv, os, json
import requests
from time import sleep

url = "https://www.notebooksbilliger.de/asus+vz239he"
headers = { 'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'}
page = requests.get(url, headers=headers)
doc = html.fromstring(page.content)
RAW_PRICE = doc.xpath('//div[@id="product_detail_price"]')[0].values()[4]

But I get the following error: urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.notebooksbilliger.de', port=443): Max retries exceeded with url: /asus+vz239he (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1045)')))

Do you know why I am getting this error?

1 Answer:

Answer 0 (score: -1)

Probably not best practice, but it works for me: page = requests.get(url, headers=headers, verify=False)

I added verify=False to the request.
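A note on that workaround: `verify=False` disables certificate validation entirely, so urllib3 will print an `InsecureRequestWarning` on every request. A minimal sketch of how this is usually wired up, assuming the error comes from a proxy or firewall injecting its own certificate into the chain; the CA bundle path in the comment is hypothetical:

```python
import requests
import urllib3

# With verify=False, urllib3 emits an InsecureRequestWarning per request.
# Silence it explicitly so skipping validation is a deliberate choice
# rather than console noise.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# A Session applies the setting to every request made through it.
session = requests.Session()
session.verify = False  # skip certificate validation (insecure)

# Safer alternative: instead of disabling validation, point `verify` at
# the PEM file of the certificate that is actually in the chain
# (e.g. a corporate proxy's CA) -- path below is a placeholder:
# session.verify = "/path/to/proxy-ca.pem"
```

With this in place, `session.get(url, headers=headers)` behaves like the answer's `requests.get(..., verify=False)` but without the repeated warnings. Keep in mind the connection is still encrypted but no longer authenticated, so it is vulnerable to man-in-the-middle interception.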