When I run the following code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPool2D, Flatten, Dense

model = Sequential()
model.add(Conv2D(64, (3, 3), input_shape=X.shape[1:], data_format="channels_last"))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=1, validation_split=0.3)
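(Side note, not part of the original script: as I understand it, Keras reads its default data_format from the `image_data_format` key in `~/.keras/keras.json`, which layers fall back to when the argument is not passed explicitly. A typical file looks like this, though I have not yet checked what mine contains:)

```json
{
    "floatx": "float32",
    "epsilon": 1e-07,
    "backend": "tensorflow",
    "image_data_format": "channels_last"
}
```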
I get:
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input> in <module>()
      3           batch_size=32,
      4           epochs=1,
----> 5           validation_split=0.3)

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1361         initial_epoch=initial_epoch,
   1362         steps_per_epoch=steps_per_epoch,
-> 1363         validation_steps=validation_steps)
   1364
   1365   def evaluate(self,

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py in fit_loop(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
    262             ins_batch[i] = ins_batch[i].toarray()
    263
--> 264         outs = f(ins_batch)
    265         if not isinstance(outs, list):
    266           outs = [outs]

~\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py in __call__(self, inputs)
   2910             feed_symbols != self._feed_symbols or self.fetches != self._fetches or
   2911             session != self._session):
-> 2912           self._make_callable(feed_arrays, feed_symbols, symbol_vals, session)
   2913
   2914         fetched = self._callable_fn(*array_vals)

~\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py in _make_callable(self, feed_arrays, feed_symbols, symbol_vals, session)
   2855       callable_opts.target.append(self.updates_op.name)
   2856     # Create callable.
-> 2857     callable_fn = session._make_callable_from_options(callable_opts)
   2858     # Cache parameters corresponding to the generated callable, so that
   2859     # we can detect future mismatches and refresh the callable.

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in _make_callable_from_options(self, callable_options)
   1412     """
   1413     self._extend_graph()
-> 1414     return BaseSession._Callable(self, callable_options)
   1415
   1416

~\Anaconda3\lib\site-packages\tensorflow\python\client\session.py in __init__(self, session, callable_options)
   1366           with errors.raise_exception_on_not_ok_status() as status:
   1367             self._handle = tf_session.TF_SessionMakeCallable(
-> 1368                 session._session, options_ptr, status)
   1369         finally:
   1370           tf_session.TF_DeleteBuffer(options_ptr)

~\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
    517             None, None,
    518
--> 519             c_api.TF_GetCode(self.status.status))
    520     # Delete the underlying status object from memory otherwise it stays alive
    521     # as there is a reference to status from this from the traceback due to

InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
	 [[Node: max_pooling2d_2/MaxPool = MaxPool[T=DT_FLOAT, _class=["loc:@training_3/Adam/gradients/max_pooling2d_2/MaxPool_grad/MaxPoolGrad"], data_format="NCHW", ksize=[1, 1, 2, 2], padding="VALID", strides=[1, 1, 2, 2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
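(For anyone reading: NHWC and NCHW in the error refer to the ordering of the tensor axes, i.e. "channels_last" vs "channels_first" in Keras terms. A small NumPy sketch with made-up shapes, just to illustrate what the two layouts mean:)

```python
import numpy as np

# Hypothetical batch: 8 RGB images of 32x32 pixels.
# NHWC ("channels_last"): (batch, height, width, channels)
x_nhwc = np.zeros((8, 32, 32, 3))

# NCHW ("channels_first"): (batch, channels, height, width);
# the same data, with the channel axis moved to position 1.
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))

print(x_nhwc.shape)  # (8, 32, 32, 3)
print(x_nchw.shape)  # (8, 3, 32, 32)
```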
I am also running on a CPU-only machine with 4 GB of RAM, and I only ran a single epoch. Is this related to memory?