Extracting some basic data with Beautiful Soup

Posted: 2019-07-12 05:47:40

Tags: selenium-webdriver web-scraping beautifulsoup instagram

Recently I started learning web scraping with Python, trying to extract some basic information from Instagram using Beautiful Soup.

I wrote a simple piece of code, shown below:

from bs4 import BeautifulSoup
import selenium.webdriver as webdriver

url = 'http://instagram.com/umnpics/'
driver = webdriver.Firefox()
driver.get(url)

soup = BeautifulSoup(driver.page_source)

for x in soup.findAll('li', {'class':'photo'}):
    print (x)

But when I ran it, the following exception occurred:

Traceback (most recent call last):
  File "C:\Users\Mhdn\AppData\Roaming\Python\Python37\site-packages\selenium\webdriver\common\service.py", line 76, in start
    stdin=PIPE)
  File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Program Files (x86)\Python37-32\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Mhdn\Desktop\test2.py", line 5, in <module>
    driver = webdriver.Firefox()
  File "C:\Users\Mhdn\AppData\Roaming\Python\Python37\site-packages\selenium\webdriver\firefox\webdriver.py", line 164, in __init__
    self.service.start()
  File "C:\Users\Mhdn\AppData\Roaming\Python\Python37\site-packages\selenium\webdriver\common\service.py", line 83, in start
    os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
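The root cause is that Selenium cannot find a `geckodriver` binary anywhere on the system PATH. Before changing any code, you can confirm this from Python with the standard library (a quick diagnostic sketch, not part of Selenium itself):

```python
import shutil

# Selenium resolves the driver the same way: by searching PATH.
# If this prints None, webdriver.Firefox() will raise the
# WebDriverException shown in the traceback above.
print(shutil.which('geckodriver'))
```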

1 answer:

Answer 0 (score: 0):

  • You need to download geckodriver from here to your local system
  • In your code, you need to provide the executable_path for geckodriver

Add the executable_path to your code:

from bs4 import BeautifulSoup
import selenium.webdriver as webdriver

url = 'http://instagram.com/umnpics/'
driver = webdriver.Firefox(executable_path='path/to/geckodriver')  # <-- add the path to your geckodriver

# example: driver = webdriver.Firefox(executable_path='home/downloads/geckodriver')

driver.get(url)

# pass an explicit parser to avoid BeautifulSoup's "no parser specified" warning
soup = BeautifulSoup(driver.page_source, 'html.parser')

for x in soup.find_all('li', {'class': 'photo'}):
    print(x)
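As a side note, calling `BeautifulSoup(driver.page_source)` without naming a parser makes recent bs4 versions emit a warning about guessing the parser. A small driver-free sketch (using made-up HTML shaped like the `<li class="photo">` items the question targets, not real Instagram markup) showing the explicit-parser form:

```python
from bs4 import BeautifulSoup

# Stand-in markup resembling the <li class="photo"> items from the question.
html = '<ul><li class="photo">a</li><li class="photo">b</li><li>c</li></ul>'

soup = BeautifulSoup(html, 'html.parser')  # explicit parser: deterministic, no warning

photos = soup.find_all('li', {'class': 'photo'})
print([li.get_text() for li in photos])  # → ['a', 'b']
```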