I am trying to scrape a website. I keep the people already scraped in person_set, and a queue of people to scrape next in parse_queue. At the start of each person's scrape I need to write both data structures to disk, so that if the crawl is interrupted by an exception or a connection error I can resume later.

I have three Python files: the main file, the spider, and the person model. main instantiates the spider; the spider does the parsing and calls the write and read methods when needed. person.py holds the Person class, the model that stores each person's data.

The problem is reading back the data I wrote. I have looked at many questions about this error and it looks like an import problem, but even though I import Person in both main and the spider, I still get the error. It is as if the emergency_read method is not affected by my top-level imports.
main.py

from spiders import Spider
from person import Person
import pickle

def main():
    ....
    spider = Spider("seed_input")
    spider.parse(client)
spiders.py

import os
import pickle
from person import Person

class Spider:
    def __init__(self, filename):
        self.person_set = set()
        self.file_to_seed(filename)
        for seed_url in self.seed_list:
            self.seed_to_parse_queue(seed_url)

    def parse(self, client):
        if os.path.exists('tmp.person_set'):
            print "Program wasnt ended properly, continuing from where it left"
            self.emergency_read()
        ... starts parsing

    def emergency_write(self):
        if os.path.exists('tmp.person_set'):
            self.delete_emergency_files()
        with open('tmp.person_set', 'wb') as f:
            pickle.dump(self.person_set, f)
        with open('tmp.parse_queue', 'wb') as f:
            pickle.dump(self.parse_queue, f)

    def emergency_read(self):
        with open('tmp.person_set', 'rb') as f:
            self.person_set = pickle.load(f)
        with open('tmp.parse_queue', 'rb') as f:
            self.parse_queue = pickle.load(f)
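(Side note on the checkpointing itself, separate from the error below: if the crawler dies in the middle of pickle.dump, the checkpoint file is left corrupt and there is nothing to resume from. A hedged sketch of an atomic variant, assuming Python 3's os.replace; on POSIX under Python 2, os.rename behaves the same way:)

```python
import os
import pickle
import tempfile

def atomic_pickle_dump(obj, path):
    """Dump obj to path so that a crash mid-write never corrupts an
    existing checkpoint: write to a temp file in the same directory,
    then atomically rename it over the target."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            pickle.dump(obj, f)
        os.replace(tmp_path, path)  # atomic on POSIX and Windows (Python 3.3+)
    except BaseException:
        os.remove(tmp_path)  # discard the partial temp file, keep the old checkpoint
        raise

# toy checkpoint standing in for person_set
atomic_pickle_dump({"seen": 3}, "tmp.person_set")
with open("tmp.person_set", "rb") as f:
    print(pickle.load(f))  # {'seen': 3}
```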
person.py

class Person:
    def __init__(self, name):
        self.name = name
        self.friend_set = set()
        self.profile_url = ""
        self.id = 0
        self.color = "Grey"
        self.parent = None
        self.depth = 0

    def add_friend(self, friend):
        self.friend_set.add(friend)

    def __repr__(self):
        return "Person(%s, %s)" % (self.profile_url, self.name)

    def __eq__(self, other):
        if isinstance(other, Person):
            return ((self.profile_url == other.profile_url) and (self.name == other.name))
        else:
            return False

    def __ne__(self, other):
        return not self.__eq__(other)

    def __hash__(self):
        return hash(self.__repr__())
Stack trace:
python main.py
Program wasnt ended properly, continuing from where it left
Traceback (most recent call last):
File "main.py", line 47, in <module>
main()
File "main.py", line 34, in main
spider.parse(client)
File "/home/ynscn/py-workspace/lll/spiders.py", line 39, in parse
self.emergency_read()
File "/home/ynscn/py-workspace/lll/spiders.py", line 262, in emergency_read
self.person_set = pickle.load(f)
File "/usr/lib/python2.7/pickle.py", line 1378, in load
return Unpickler(file).load()
File "/usr/lib/python2.7/pickle.py", line 858, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1198, in load_setitem
dict[key] = value
File "/home/ynscn/py-workspace/lll/person.py", line 30, in __hash__
return hash(self.__repr__())
File "/home/ynscn/py-workspace/lll/person.py", line 18, in __repr__
return "Person(%s, %s)" % (self.profile_url, self.name)
AttributeError: Person instance has no attribute 'profile_url'
Answer 0 (score: 1)
Pickle loads the components of a class instance in an order you do not control. The error happens during loading, before the Person.profile_url attribute has been deserialized. Note that it fails inside load_setitem, which means it is probably in the middle of loading the friend_set attribute, which is a set.

Your custom __repr__() depends on instance attributes, and your custom __hash__() (which pickle needs in order to rebuild the set) depends on __repr__().

My suggestion is to use Python's default __hash__ method. Would that work for you?
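That suggestion can be sketched as follows (a minimal stand-in for the question's classes, with made-up names and URLs; note that in Python 3 a class that defines __eq__ must restore the default hash explicitly via __hash__ = object.__hash__):

```python
import pickle

class Person:
    """Stand-in for the question's model: __hash__ depends on attributes."""
    def __init__(self, name, profile_url):
        self.name = name
        self.profile_url = profile_url
        self.friend_set = set()

    def __repr__(self):
        return "Person(%s, %s)" % (self.profile_url, self.name)

    def __eq__(self, other):
        return (isinstance(other, Person)
                and self.profile_url == other.profile_url
                and self.name == other.name)

    def __hash__(self):
        return hash(repr(self))

class PersonFixed(Person):
    # The suggested fix: fall back to the default identity-based hash,
    # which needs no attributes and is therefore safe mid-unpickling.
    __hash__ = object.__hash__

def round_trip(cls):
    a = cls("Alice", "http://example.com/a")  # made-up test data
    b = cls("Bob", "http://example.com/b")
    a.friend_set.add(b)
    b.friend_set.add(a)  # mutual friends: the cycle that breaks unpickling
    try:
        return len(pickle.loads(pickle.dumps({a, b})))
    except AttributeError as e:
        return "AttributeError: %s" % e

print(round_trip(Person))       # fails: hash() runs before attributes are restored
print(round_trip(PersonFixed))  # 2
```

The trade-off: with the identity hash, two Person objects that compare equal no longer hash alike, so a set can hold duplicates across runs, and you would need to deduplicate by profile_url some other way.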
Answer 1 (score: 0)
If you use dill instead of pickle, your code will probably serialize as-is. dill can pickle class objects, instances, methods, and attributes... and most things in Python. dill can also store dynamically modified state for classes and class instances. I agree that this looks like a load-ordering error, as @nofinator pointed out; however, dill may let you work around it.

If you want to force the load order, which is probably the better option, you can try adding __getstate__ and __setstate__ methods.
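A minimal sketch of the __getstate__/__setstate__ route (the two-person cycle and URLs are invented for illustration; pickling friends as plain profile_url strings is one way to break the Person-to-Person cycle, at the cost of having to relink URLs back to Person objects after loading):

```python
import pickle

class Person:
    """Stand-in for the question's model with explicit pickling control."""
    def __init__(self, name, profile_url):
        self.name = name
        self.profile_url = profile_url
        self.friend_set = set()

    def __repr__(self):
        return "Person(%s, %s)" % (self.profile_url, self.name)

    def __eq__(self, other):
        return (isinstance(other, Person)
                and self.profile_url == other.profile_url
                and self.name == other.name)

    def __hash__(self):
        return hash(repr(self))

    def __getstate__(self):
        # Pickle friends as plain URL strings: strings hash fine during
        # load, so Person.__hash__ is never called on a half-built object.
        state = self.__dict__.copy()
        state["friend_set"] = sorted(f.profile_url for f in self.friend_set)
        return state

    def __setstate__(self, state):
        # Still URLs after this, not Person objects; relink after loading.
        state["friend_set"] = set(state["friend_set"])
        self.__dict__.update(state)

a = Person("Alice", "http://example.com/a")  # made-up test data
b = Person("Bob", "http://example.com/b")
a.friend_set.add(b)
b.friend_set.add(a)

restored = pickle.loads(pickle.dumps({a, b}))
print(sorted(p.name for p in restored))  # ['Alice', 'Bob']
```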