I know how to load data from an external source into a Scrapy spider when working locally. But I cannot find any information on how to deploy such a file to Scrapinghub and what path to use for it there. Right now I am using the approach from the SH documentation (enter link description here), but I get back a None object.
import json
import pkgutil

import scrapy


class CodeSpider(scrapy.Spider):
    name = "code"
    allowed_domains = ["google.com.au"]

    def start_requests(self):
        f = pkgutil.get_data("project", "res/final.json")
        a = json.loads(f.read())
Thanks. My setup file:
from setuptools import setup, find_packages

setup(
    name = 'project',
    version = '1.0',
    packages = find_packages(),
    package_data = {
        'project': ['res/*.json'],
    },
    entry_points = {'scrapy': ['settings = au_go.settings']},
    zip_safe=False,
)
The error I get:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 127, in _next_request
    request = next(slot.start_requests)
  File "/tmp/unpacked-eggs/__main__.egg/au_go/spiders/code.py", line 16, in start_requests
    a = json.loads(f.read())
AttributeError: 'NoneType' object has no attribute 'read'
Answer 0 (score: 2):
From the traceback you supplied, I assume your project files look like this:
au_go/
    __init__.py
    settings.py
    res/
        final.json
    spiders/
        __init__.py
        code.py
scrapy.cfg
setup.py
With this assumption, the package_data entry in setup.py needs to reference the package, which is named au_go:
from setuptools import setup, find_packages

setup(
    name = 'au_go',
    version = '1.0',
    packages = find_packages(),
    package_data = {
        'au_go': ['res/*.json'],
    },
    entry_points = {'scrapy': ['settings = au_go.settings']},
    zip_safe=False,
)
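If you want to confirm locally that the JSON file actually ends up inside the built egg before deploying, one quick check (a sketch; the egg filename under dist/ is an assumption and depends on your Python version) is to build it with python setup.py bdist_egg and list its contents:

import zipfile

# Adjust the filename to whatever "python setup.py bdist_egg" produced
# under dist/ (e.g. au_go-1.0-py2.7.egg); this exact name is an assumption.
egg = zipfile.ZipFile("dist/au_go-1.0-py2.7.egg")
for name in egg.namelist():
    print(name)  # the listing should include au_go/res/final.json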
Then you can use pkgutil.get_data("au_go", "res/final.json").
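For completeness, a minimal sketch of what the spider could then look like. One caveat worth noting: pkgutil.get_data returns the resource contents as a bytes/str object (or None if the package cannot be located), not a file-like object, so the result should be passed straight to json.loads rather than calling .read() on it. The loop at the end is illustrative only and assumes the JSON file holds a list of URLs:

import json
import pkgutil

import scrapy


class CodeSpider(scrapy.Spider):
    name = "code"
    allowed_domains = ["google.com.au"]

    def start_requests(self):
        # get_data returns the raw contents of res/final.json from the
        # au_go package, or None if the package/resource cannot be found.
        data = pkgutil.get_data("au_go", "res/final.json")
        urls = json.loads(data)
        # Illustrative use of the loaded data; adapt to your JSON layout.
        for url in urls:
            yield scrapy.Request(url, callback=self.parse)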