I am creating training data. The idea is to capture images from the PiCamera and store them in a JSON-encoded file that I can later load into my neural network for training.

Since I am running on a Pi board, memory is obviously a constraint, so I can't capture a large number of images and then serialize them all at once. I want to serialize each image as it is captured, in particular so that I don't lose all the data if something fails:
def trainer(LEFT, RIGHT):
    # capture frames from the camera
    with open('robot-train.json', 'w') as train_file:
        writer = csv.writer(open('robot-train.csv', 'w'), delimiter=',', quotechar='|')
        for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
            data = {}
            # grab the raw NumPy array representing the image, then record the timestamp
            data['image'] = frame.array
            data['time'] = time.time()
            data['left'] = LEFT
            data['right'] = RIGHT
            # human readable version
            writer.writerow([data['time'], data['left'], data['right']])
            train_file.write(json.dumps(data, cls=NumpyEncoder))
            # prepare for next image
            rawCapture.truncate(0)
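For context, the json.dumps call relies on a custom NumpyEncoder defined in xiaonet.py (not shown above) to handle the image arrays. The pattern I was aiming for is the usual json.JSONEncoder subclass whose default() turns ndarrays into plain lists; a minimal sketch of that pattern (not my exact code) would be:

import json
import numpy as np

class NumpyEncoder(json.JSONEncoder):
    """Serialize NumPy arrays as nested Python lists."""
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            # an image frame becomes a (large) nested list
            return obj.tolist()
        # defer anything that isn't an ndarray to the base implementation
        return json.JSONEncoder.default(self, obj)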
But I am getting this error:
File "/home/pi/pololu-rpi-slave-arduino-library/pi/xiaonet.py", line 30, in default
return json.JSONEncoder(self, obj)
RuntimeError: maximum recursion depth exceeded
What is the correct way to do this?