I am printing an ArrayList with JSP. Each object in the ArrayList is printed with a loop like this:
<% ArrayList<MessageObject> list = (ArrayList<MessageObject>) request.getAttribute("list"); %>
<% int index = 0; %>
<% for (MessageObject msg : list) {
index++;
if (msg.getState().compareTo("unread") == 0) { %>
<tr data-status="unread" class="unread">
<td>
<a href="javascript:;" class="star">
<i class="glyphicon glyphicon-star"></i>
</a>
</td>
<td>
<div class="media">
<h4 class="title">
User Identifier
</h4>
</div>
</td>
<td id="unread-id">
<div class="media">
<p class="summary"><% out.print(msg.getMessage());%></p>
<input id="index" type="text" value="<%out.print(index);%>">
</div>
To keep the code easier to read, some closing tags and other markup are omitted above.
Basically, it prints my messages from the list, along with each one's index in the ArrayList.
My problem is that I want to save a message's index value when it is clicked.
I have tried:
<script>
$(document).on('click', '#unread-id', function () {
var index = $('#index').val();
$("#setindex").val(index);
});
</script>
So when I click any div containing a message, the script is called, but I always get the same index value, 1. The problem is that the divs all share the same id, so my script always selects the first div with id unread-id, and therefore it always returns 1.
How can I get the index of the clicked div if all my container divs have the same id value?
Answer 0 (score: 2)
Add a class such as row to your <td id="unread-id"> elements and change the script as follows. Your <td id="unread-id"> should look like <td class="row">. Also, do not use an id on the input; change it to a class, e.g. row-input.

JSP changes

<td id="unread-id"> becomes <td class="row">
<input id="index" type="text" value="<%out.print(index);%>"> becomes <input class="row-input" type="text" value="<%out.print(index);%>">

JS

$(document).on('click', '.row', function () {
    var index = $(this).find('.row-input').val();
    $("#setindex").val(index);
});

Note

You are giving every row the same <input id="index" ...>. Ids must be unique, which is why you keep getting the same index.
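The difference between the broken id lookup and the fix above can be sketched without a browser. This is a minimal plain-JavaScript model, not jQuery itself: the rows array and the two lookup functions are hypothetical stand-ins for the DOM, an id selector, and a lookup relative to the clicked element.

```javascript
// Hypothetical stand-in for the DOM: three rows sharing the same id.
const rows = [
  { id: 'unread-id', index: 1 },
  { id: 'unread-id', index: 2 },
  { id: 'unread-id', index: 3 },
];

// Analogous to $('#index').val(): an id lookup returns only the FIRST match,
// no matter which row was actually clicked.
function valueById(id) {
  const first = rows.find(r => r.id === id);
  return first ? first.index : undefined;
}

// Analogous to $(this).find('.row-input').val(): start from the clicked row
// itself, so each row reports its own index.
function valueFromClicked(clickedRow) {
  return clickedRow.index;
}

// "Click" the third row: the id lookup still reports the first row.
console.log(valueById('unread-id'));    // 1
console.log(valueFromClicked(rows[2])); // 3
```

This is why scoping the lookup to the clicked element (via `this`) fixes the problem, while any selector based on the duplicated id cannot.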
Answer 1 (score: 1)
First of all, ids should be unique within your page. You really should fix this (if you need a selector that matches multiple elements, use a class name instead).
That said, your code can work as-is (it may cause problems in some browsers, so I recommend fixing the duplicate ids as soon as possible):
$(function() {
$(document).on('click', '#unread-id', function () {
console.log($(this).val());
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input id="unread-id" value="1" /><br />
<input id="unread-id" value="2" /><br />
<input id="unread-id" value="3" /><br />
<input id="unread-id" value="4" /><br />
Inside the click handler, this is the element that was just clicked. You can use it to get the value you need.
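That binding of `this` can be sketched in plain JavaScript, since jQuery invokes event handlers with `this` bound to the element that received the event. The objects below are hypothetical stand-ins for the four inputs in the snippet above, and `handler.call(el)` mimics how jQuery binds `this` when dispatching a click.

```javascript
// Hypothetical stand-ins for the four inputs with the duplicated id.
const inputs = [{ value: '1' }, { value: '2' }, { value: '3' }, { value: '4' }];

// A handler in the style of the answer: it reads this.value, so each
// element reports its own value even though the ids collide.
const clicked = [];
function handler() {
  clicked.push(this.value);
}

// jQuery calls the handler with `this` set to the clicked element;
// handler.call(el) reproduces that binding for each "click".
inputs.forEach(el => handler.call(el));
console.log(clicked.join(',')); // 1,2,3,4
```

Each simulated click logs the value of the element it ran on, which is exactly why the delegated handler in the snippet prints a different value per input despite the shared id.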