I am trying to write my own dynamic_rnn() function. However, when I initialize the model I get the following error:

AttributeError: 'IndexedSlices' object has no attribute 'get_shape'
It has to do with computing the gradients. Here is the full stack trace (it happens even if I remove the gradient clipping; the error just moves to the optimizer):

AttributeError                            Traceback (most recent call last)
<ipython-input-8-c4ed7a228363> in <module>()
----> 1 model = Model(args)
<ipython-input-5-2ab152ab1152> in __init__(self, args, infer)
58
59 tvars = tf.trainable_variables()
---> 60 grads, _ = tf.clip_by_global_norm(tf.gradients(self.cost, tvars),
61 args.grad_clip)
62 optimizer = tf.train.AdamOptimizer(self.lr) # tf.train.GradientDescentOptimizer(self.lr) #
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients.pyc in gradients(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method)
479 # pylint: enable=protected-access
480 else:
--> 481 in_grads = _AsList(grad_fn(op, *out_grads))
482 _VerifyGeneratedGradients(in_grads, op)
483 if gate_gradients and len(
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_grad.pyc in _EnterGrad(op, grad)
184 if op.get_attr("is_constant"):
185 # Add a gradient accumulator for each loop invariant.
--> 186 result = grad_ctxt.AddBackPropAccumulator(grad)
187 else:
188 result = exit(grad)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.pyc in AddBackPropAccumulator(self, value)
1430 """
1431 self.Exit()
-> 1432 shape = value.get_shape()
1433 if not shape.is_fully_defined():
1434 shape = None
AttributeError: 'IndexedSlices' object has no attribute 'get_shape'
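Reading the trace, the failure is in AddBackPropAccumulator calling value.get_shape() on a gradient that is a tf.IndexedSlices, the sparse representation TensorFlow uses for gather-style gradients such as the one produced by tf.nn.embedding_lookup. A minimal standalone sketch of how such an object arises (the names here are made up and independent of my model):

import tensorflow as tf

emb = tf.Variable(tf.random_uniform([100, 8]))   # toy embedding matrix
rows = tf.nn.embedding_lookup(emb, [1, 2, 3])    # sparse read of three rows
loss = tf.reduce_sum(rows)
grad = tf.gradients(loss, [emb])[0]
print(type(grad))  # tensorflow.python.framework.ops.IndexedSlices, not a Tensor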
I have identified the problem as an issue in while_loop. If I add backprop=False as a parameter to the while_loop call, everything works fine; a sketch of that variant is just below, followed by the relevant code. I have tried to copy the code from dynamic_rnn() as closely as possible, though I have simplified it a bit for my particular use case.
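A minimal sketch of that workaround, assuming the same loop variables as the code below; note that the actual keyword argument in TensorFlow's while_loop is back_prop:

final_loop_vars = control_flow_ops.while_loop(
    cond=lambda t, *_: t < time_steps,
    body=_take_step,
    loop_vars=(time, output_ta, state),
    back_prop=False,   # no gradient accumulators are built for the loop
    swap_memory=False)

Of course, with back_prop=False no gradients flow through the loop at all, so this only makes the error disappear rather than fixing it.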
import tensorflow as tf
from tensorflow.python.framework import dtypes
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import tensor_array_ops
from tensorflow.python.ops import variable_scope

# (Excerpted from the model's __init__; scope, embedding, attention and the
# self.* attributes come from the surrounding class.)

# Transpose to time-major layout: (time_steps, batch_size)
inputs = array_ops.transpose(self.input_data, [1, 0])
input_shape = array_ops.shape(inputs)
(time_steps, batch_size) = array_ops.unpack(input_shape, 2)
time = array_ops.constant(0, dtype=dtypes.int32, name="time")
state = self.initial_state

# TensorArrays for the per-step inputs and outputs
base_name = scope
output_ta = tensor_array_ops.TensorArray(
    dtype=dtypes.float32, size=time_steps,
    tensor_array_name=base_name + "output")
input_ta = tensor_array_ops.TensorArray(
    dtype=inputs.dtype, size=time_steps,
    tensor_array_name=base_name + "input")
input_ta = input_ta.unpack(inputs)

# Step function: read one time step, embed it, run the cell, store the output
def _take_step(cur_time, output_ta_t, cur_state):
    inps = input_ta.read(cur_time)
    step_inps = tf.nn.embedding_lookup(embedding, inps)
    step_inps = tf.reshape(step_inps, [-1, self.input_embedding_size])
    output, new_state = self.cell((step_inps, attention), cur_state)
    output_ta_t = output_ta_t.write(cur_time, output)
    variable_scope.get_variable_scope().reuse_variables()
    return (cur_time + 1, output_ta_t, new_state)

# Tensor while_loop
final_loop_vars = control_flow_ops.while_loop(
    cond=lambda t, *_: t < time_steps,
    body=_take_step,
    loop_vars=(time, output_ta, state),
    parallel_iterations=None,
    swap_memory=False)
I am not sure which part of the code is creating the IndexedSlices object, so I am having trouble figuring out where to start debugging.
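In case it is useful, here is a small debugging sketch I would try (assuming the graph above has been built; self.cost is from the model, everything else is standard TensorFlow): ask for the gradient of each trainable variable separately, so the one whose gradient path breaks identifies itself.

tvars = tf.trainable_variables()
for var in tvars:
    try:
        (grad,) = tf.gradients(self.cost, [var])
        kind = type(grad).__name__ if grad is not None else None
        print(var.name, "->", kind)        # IndexedSlices marks a sparse gradient
    except AttributeError as e:
        print(var.name, "-> fails:", e)    # the variable whose gradient path breaks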