I'm trying to read the content of this link with BeautifulSoup and then get the article dates from span.f:
import requests
import json
from bs4 import BeautifulSoup
from selenium import webdriver

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.86 Safari/537.36'}
link = "https://www.google.com/search?q=replican+party+announced&ie=utf-8&oe=utf-8&client=firefox-b"
browser = webdriver.Firefox()
browser.get(link)
s = requests.get(link)
soup5 = BeautifulSoup(s.content, 'html.parser')
Now I want to get all the article dates shown in <span class="f">Apr 27, 2018 - </span> along with their corresponding link URLs, but this code doesn't fetch anything for me:
for i in soup5.find_all("div", {"class": "g"}):
    print(i.find_all("span", {"class": "f"}))
Answer 0 (score: 2)
Selenium isn't needed for this task. Use BeautifulSoup's .select() method as follows:
import requests
from bs4 import BeautifulSoup

headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.86 Safari/537.36'}
link = "https://www.google.com/search?q=replican+party+announced&ie=utf-8&oe=utf-8&client=firefox-b"
r = requests.get(link, headers=headers, timeout=4)
# only trust the declared encoding when the server sent a charset;
# otherwise let BeautifulSoup detect it from the content
encoding = r.encoding if 'charset' in r.headers.get('content-type', '').lower() else None
soup = BeautifulSoup(r.content, 'html.parser', from_encoding=encoding)
for d in soup.select("div.s > div"):
    # skip results that are missing either the date or the cite
    date = d.select("span.st > span.f")
    url = d.select("div.f > cite")
    if date and url:
        print(date[0].text)
        print(url[0].text)
Output:
2018. 4. 27. -
https://www.cnn.com/2017/11/10/politics/house.../index.html
2018. 3. 19. -
thehill.com/.../379087-former-gop-lawmaker-announces-hes-leav...
2018. 4. 11. -
https://www.nytimes.com/2018/04/11/us/.../paul-ryan-speaker.htm...
2017. 10. 24. -
https://www.theguardian.com/.../jeff-flake-retire-republican-senat...
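Note that the question's snippet defines headers but never passes them to requests.get(); without a browser-like User-Agent, Google often serves different markup in which these classes don't appear, which is likely why the original find_all() loop printed nothing. Also, the <cite> text is a display URL that Google truncates with an ellipsis; if you need the full target URL, the href of the result's title anchor is more reliable. A minimal sketch building on the soup above (the div.g container and the h3 > a title selector are assumptions about Google's markup at the time and may have changed since):

for result in soup.select("div.g"):
    date = result.select_one("span.st > span.f")
    anchor = result.select_one("h3 > a")  # assumed location of the title link
    if date and anchor:
        # href carries the full target URL, unlike the truncated <cite> text
        print(date.text, anchor.get("href"))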
Answer 1 (score: 1)
Since you are already using Selenium, you can drop requests entirely, pass browser.page_source to BeautifulSoup, and then use find_all() to print the dates as follows:
Code block:
from bs4 import BeautifulSoup as soup
from selenium import webdriver

# Selenium drives a real browser, so no custom User-Agent header is needed
link = "https://www.google.com/search?q=replican+party+announced&ie=utf-8&oe=utf-8&client=firefox-b"
browser = webdriver.Firefox(executable_path=r'C:\Utility\BrowserDrivers\geckodriver.exe')
browser.get(link)
soup5 = soup(browser.page_source, 'html.parser')
print("Dates are as follows : ")
for i in soup5.find_all("span", {"class": "f"}):
    print(i.text)
print("Link URLs are as follows : ")
for i in soup5.find_all("cite", {"class": "iUh30"}):
    print(i.text)
Console output:
Dates are as follows :
Mar 19, 2018 -
Apr 27, 2018 -
Feb 1, 2018 -
Apr 17, 2018 -
Jan 9, 2018 -
Link URLs are as follows :
thehill.com/.../379087-former-gop-lawmaker-announces-hes-leaving-gop-tears-into-tr...
https://edition.cnn.com/2017/11/10/politics/house-retirement-tracker/index.html
https://en.wikipedia.org/wiki/Republican_Party_presidential_candidates,_2016
https://www.cbsnews.com/.../joe-scarborough-announces-hes-leaving...
If you want to print the dates and link URLs side by side, you can use:
Code block:
from bs4 import BeautifulSoup as soup
from selenium import webdriver

link = "https://www.google.com/search?q=replican+party+announced&ie=utf-8&oe=utf-8&client=firefox-b"
browser = webdriver.Firefox(executable_path=r'C:\Utility\BrowserDrivers\geckodriver.exe')
browser.get(link)
soup5 = soup(browser.page_source, 'html.parser')
# pair each date with the display URL at the same position
for i, j in zip(soup5.find_all("span", {"class": "f"}), soup5.find_all("cite", {"class": "iUh30"})):
    print(i.text, j.text)
Console output:
Mar 19, 2018 - thehill.com/.../379087-former-gop-lawmaker-announces-hes-leaving-gop-tears-into-tr...
Apr 27, 2018 - https://edition.cnn.com/2017/11/10/politics/house-retirement-tracker/index.html
Feb 1, 2018 - https://en.wikipedia.org/wiki/Republican_Party_presidential_candidates,_2016
Apr 17, 2018 - https://www.cbsnews.com/.../joe-scarborough-announces-hes-leaving...
Jan 9, 2018 - www.travisgop.com/2018_precinct_conventions
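One caveat with zip(): it stops at the shorter of the two lists, so a result that has a date but no cite (or vice versa) silently drops or misaligns entries. A more defensive sketch, assuming you want count mismatches to be visible rather than swallowed:

from itertools import zip_longest

dates = [s.text for s in soup5.find_all("span", {"class": "f"})]
urls = [c.text for c in soup5.find_all("cite", {"class": "iUh30"})]

# zip_longest continues past the shorter list, filling gaps with a marker,
# so a mismatch in counts shows up in the output instead of vanishing
for date, url in zip_longest(dates, urls, fillvalue="<missing>"):
    print(date, url)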