I'm trying to start a dryscrape session on my Mac. The code I'm trying to run is:
import dryscrape
session = dryscrape.Session(base_url = 'http://google.com')
But when I run it, I get this permission error:
Traceback (most recent call last):
  File "<ipython-input-37-5e3204f25ebb>", line 3, in <module>
    session = dryscrape.Session(base_url = 'http://google.com')
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/session.py", line 22, in __init__
    self.driver = driver or DefaultDriver()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/dryscrape/driver/webkit.py", line 30, in __init__
    super(Driver, self).__init__(**kw)
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 230, in __init__
    self.conn = connection or ServerConnection()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 507, in __init__
    self._sock = (server or get_default_server()).connect()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 450, in get_default_server
    _default_server = Server()
  File "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server.py", line 416, in __init__
    stderr = subprocess.PIPE)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/Users/MyName/anaconda/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
PermissionError: [Errno 13] Permission denied
I tried running it from the terminal with sudo, but I still get the same error. Thanks for your help! Note: I will upvote all answers and accept the best one.
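For reference, the traceback shows subprocess failing at the moment webkit_server tries to launch its bundled browser binary, so one quick diagnostic is to check whether that binary has lost its execute bit. This is only a sketch; the binary path below is an assumption based on the site-packages paths in the traceback:

# check_exec.py -- diagnostic sketch: is the spawned binary executable?
import os
import stat

# Hypothetical location of the binary that webkit_server spawns; adjust
# this to wherever the binary actually lives on your install.
binary = "/Users/MyName/anaconda/lib/python3.5/site-packages/webkit_server"

if not os.path.exists(binary):
    print("Binary not found -- adjust the path above")
elif not os.access(binary, os.X_OK):
    # Add the owner's execute bit (equivalent to `chmod u+x`)
    os.chmod(binary, os.stat(binary).st_mode | stat.S_IXUSR)
    print("Execute bit was missing; added it")
else:
    print("Binary is executable; the error must come from something else")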
Answer 0 (score: 2)
I got this working:
# scrape.py
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))

# s.body() returns the rendered HTML as a str; under Python 3, printing
# the encoded bytes would wrap the output in b'...', so print it directly
print(s.body())
That should print the HTML.
I run it like this:
python scrape.py > results.html
Then open results.html in a browser to inspect it.
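Equivalently, you can write the HTML out from Python itself and skip the shell redirect. A minimal sketch reusing the same calls; the output filename is arbitrary:

# save_results.py -- same scrape, but write the file from Python
import dryscrape

s = dryscrape.Session()
s.visit("https://www.google.com/search?q={}".format('query'))

# write the rendered page out; s.body() is a str under Python 3
with open("results.html", "w", encoding="utf-8") as f:
    f.write(s.body())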
Answer 1 (score: 1)
Here is a very basic example from the documentation.
import dryscrape
import sys

if 'linux' in sys.platform:
    # start xvfb in case no X is running. Make sure xvfb
    # is installed, otherwise this won't work!
    dryscrape.start_xvfb()

search_term = 'dryscrape'

# set up a web scraping session
sess = dryscrape.Session(base_url='http://google.com')

# we don't need images
sess.set_attribute('auto_load_images', False)

# visit homepage and search for a term
sess.visit('/')
q = sess.at_xpath('//*[@name="q"]')
q.set(search_term)
q.form().submit()

# extract all links
for link in sess.xpath('//a[@href]'):
    print(link['href'])

# save a screenshot of the web page
sess.render('google.png')
print("Screenshot written to 'google.png'")