I am building a program that can fetch information from any website... but the program is not working properly. Can anyone help me fix it?
Example: the site is naukri.com, and we have to collect all of the hyperlinks on the page.
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl
isc = ssl.create_default_context()
isc.check_hostname = False
isc.verify_mode = ssl.CERT_NONE
open = urllib.request.urlopen('https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s', context=isc).read()
soup = BeautifulSoup(open, 'html.parser')
tags = soup('a')
for tag in tags:
    print(tag.get('href', None))
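(For comparison, here is a minimal cleanup of the same urllib attempt: identical logic, but the URL sits on one line and the result is not bound to the name open, so the built-in open() is not shadowed. The variable name html_bytes is my own choice, not from the original post.)

import urllib.request
import ssl
from bs4 import BeautifulSoup

# Same insecure SSL context as in the question above
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

url = 'https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s'

# Fetch the raw page bytes; html_bytes replaces the rebound "open" from the question
html_bytes = urllib.request.urlopen(url, context=ctx).read()

soup = BeautifulSoup(html_bytes, 'html.parser')
for tag in soup('a'):
    print(tag.get('href', None))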
Answer (score: 1)
I would use requests and bs4. I was able to get it working, and I think it gives the expected result. Try this:
import requests
from bs4 import BeautifulSoup
url = ('https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s')
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page, 'html.parser')
links = soup.find_all('a', href=True)
for each in links:
    print(each.get('href'))
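If some of the printed hrefs come back as relative paths, they can be resolved against the page URL with urllib.parse.urljoin before printing. A small sketch under that assumption (the set-based deduplication and sorting are my own additions, not part of the answer above):

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

url = 'https://www.naukri.com/job-listings-Python-Developer-Cloud-Analogy-Softech-Pvt-Ltd-Noida-Sector-63-Noida-1-to-2-years-250718003152src=jobsearchDesk&sid=15325422374871&xp=1&px=1&qp=python%20developer&srcPage=s'
soup = BeautifulSoup(requests.get(url).text, 'html.parser')

# Resolve each href against the page URL and drop duplicates
links = {urljoin(url, a['href']) for a in soup.find_all('a', href=True)}
for link in sorted(links):
    print(link)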