I'm trying to scrape table content from a webpage. The problem is that when I use hard-coded cookies (copied from the browser) in my script's headers, I can see the table content in the console; when I drop the cookies, I get a 200 response but without the desired content. The cookies will most likely have expired by the time I paste the code here.
import requests
from bs4 import BeautifulSoup
link = 'https://www.health.gov.il/Subjects/KidsAndMatures/child_development/Pages/ADHD_experts.aspx'
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36",
    "Cookie": 'ASP.NET_SessionId=hsqyvzg5jgkzfvzadzsyxdwx; p_hosting=!+bizF/4qwD7oEFze0NvCZLoPxuY/qnj9vRDa16ox8qkWDZTqjX1X9ZUoroByq7ynIZpFpUltU2jMCtk=; _ga=GA1.3.2020672306.1604911293; _gid=GA1.3.1145592749.1604911293; _hjTLDTest=1; _hjid=b62d7912-acfd-4ded-8a37-ae8b333fec04; WSS_FullScreenMode=false; _hjIncludedInPageviewSample=1; BotMitigationCookie_14016509088757896949="210109001604917723jho9/3TYoZILQoHOaZvAPwJt1Q8="; _gat_UA-72144815-4=1'
}
r = requests.get(link, headers=headers)
print(r.status_code)
soup = BeautifulSoup(r.text, "lxml")
print(soup.select_one('table:has(> caption.resultsSummaryPhones)'))
How can I get the table content with requests without using hard-coded cookies?
Answer 0 (score: 0)
If you don't want to use hard-coded cookies, you might consider using a selenium webdriver in headless mode instead.
For example:
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Run Chrome without opening a visible browser window
options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)

driver.get('https://www.health.gov.il/Subjects/KidsAndMatures/child_development/Pages/ADHD_experts.aspx')
time.sleep(1)  # give the page a moment to finish rendering

# Grab the table whose caption carries the resultsSummaryPhones class
soup = BeautifulSoup(driver.page_source, "html.parser").select_one('table:has(> caption.resultsSummaryPhones)')

# Collect the non-empty work-phone cells
phone_numbers = [
    n.getText(strip=True) for n
    in soup.find_all("td", {"class": "phoneBookListWorkPhone"})
    if n.getText(strip=True)
]
print(phone_numbers)
driver.quit()
Output:
['02-5630147', '08-9330328', '03-6287200', '08-9703940', '08-8505515', '02-6413026', '04-6727000', '03-6302211', '04-8377717', '02-9939555 02-5887300', '04-9551155', '074-7034622']
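If you still want to make the actual calls with requests, one option is to let the headless browser acquire the cookies first and then copy them into a requests.Session. The following is only a minimal sketch: it assumes the server accepts those cookies when they later arrive from requests (the BotMitigationCookie suggests the site may refresh or re-check them), and it uses an explicit WebDriverWait instead of the fixed sleep above.

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = 'https://www.health.gov.il/Subjects/KidsAndMatures/child_development/Pages/ADHD_experts.aspx'

options = Options()
options.headless = True
driver = webdriver.Chrome(options=options)
driver.get(link)

# Wait until the results table's caption is present instead of sleeping a fixed time
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "caption.resultsSummaryPhones"))
)

# Copy whatever cookies the browser session picked up into a requests.Session
session = requests.Session()
session.headers["User-Agent"] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.183 Safari/537.36"
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"])
driver.quit()

# Assumption: the server honours these cookies outside a real browser
r = session.get(link)
soup = BeautifulSoup(r.text, "lxml")
print(soup.select_one('table:has(> caption.resultsSummaryPhones)'))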