Getting Groupon's access token

Date: 2019-01-06 12:08:07

Tags: python selenium beautifulsoup xmlhttprequest

I want to retrieve the source code of a web page, but I get an "Access Denied" message. I tried to work around this with a user agent, which has solved the problem several times before, but I still get the same error. One solution I have seen is to recover the page's token. How can I retrieve the token from the page or the website?

The site's URL: https://www.groupon.com/browse/boston?category=food-and-drink
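For comparison, the user-agent approach mentioned above can also be tried without Selenium, using only the standard library. The sketch below only builds the request and shows the header it would send; whether Groupon accepts it is a separate question, and the user-agent string is a hypothetical example of the kind `fake_useragent` picks at random:

```python
from urllib.request import Request

url = "https://www.groupon.com/browse/boston?category=food-and-drink"
# Hypothetical desktop Chrome user-agent string
user_agent = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) "
              "Chrome/71.0.3578.98 Safari/537.36")
# Attach the header explicitly instead of using urllib's default
# ("Python-urllib/3.x"), which many sites block outright
req = Request(url, headers={"User-Agent": user_agent})
# urllib normalizes header names, so the key is looked up as "User-agent"
print(req.get_header("User-agent"))
```

Calling `urlopen(req)` would then fetch the page with that header, though a site can still deny access on other signals.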

#import needed object
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup
from fake_useragent import UserAgent

#put the url of the groupon page in the object url
url = "https://www.groupon.com/browse/boston?category=food-and-drink"
#print the url on the screen, return an object of type NoneType
print(url)

#create a UserAgent object
ua = UserAgent()
#put a random UserAgent in the object userAgent
userAgent = ua.random
#print the userAgent on the screen, return an object of type NoneType
print(userAgent)
#initialise the object options with the options of chrome webdriver
#(selenium.webdriver.chrome.options.Options)
options = Options()
#add a argument in the object option, return an object of type NoneType
options.add_argument(f'user-agent={userAgent}')
#define the option of chrome webdriver
options.headless = True
#create a webdriver object, return the object driver of type
#selenium.webdriver.chrome.webdriver.WebDriver
driver = webdriver.Chrome(options=options, executable_path=r'C:\Users\user\AppData\Local\Programs\Python\Python36\Scripts\chromedriver.exe')
#get the url, return an object of type NoneType
driver.get(url)
#create a beautifulsoup object, return an object of type bs4.BeautifulSoup
soup = BeautifulSoup(driver.page_source,features="html.parser")

#select body, return the object codeSource of type list
codeSource = soup.select('body')
#print codeSource on the screen, return an object of type NoneType 
print(codeSource)
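As a side note, `soup.select('body')` returns a list-like `ResultSet`, which is why `print(codeSource)` shows the tags wrapped in brackets. A self-contained illustration on a static HTML string (the markup is made up, standing in for `driver.page_source`):

```python
from bs4 import BeautifulSoup

# A made-up document standing in for driver.page_source
html = "<html><body><p>Access Denied</p></body></html>"
soup = BeautifulSoup(html, features="html.parser")

# select() takes a CSS selector and returns a ResultSet of Tag objects
codeSource = soup.select("body")
print(isinstance(codeSource, list))   # ResultSet subclasses list
print(codeSource[0].get_text())       # text inside the first <body> match
```

To get a single element rather than a list, `soup.select_one("body")` (or `soup.body`) can be used instead.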

Result:

1 Answer:

Answer 0 (score: 0)

You get this problem in headless mode; in normal (headed) mode it works fine. So you can remove `options.headless = True`, and the page source will come through without any problem.