How can multiple threads maintain a reference to the same instance?

Time: 2017-10-24 03:06:24

Tags: java multithreading synchronization this

I expose a class called Scheduler to clients, which schedules alarms at specific intervals. To do this, the class has a method setAlarm that adds the alarm to a priority queue and calls notify() so the timer routine knows it is time to wake up and process whatever is at the top of the queue.

Here is a sketch of the basic implementation -

import java.util.Date;

class Scheduler {

    // Called by the client: wakes the timer thread waiting on this instance.
    synchronized public void setAlarm(Date date) {
        notify();
    }

    // Timer loop: blocks until setAlarm() calls notify().
    synchronized private void alarmTimer() throws InterruptedException {
        while (true) {
            wait();
            System.out.println("Alarm Triggered");
        }
    }
}

Now I need the timer routine to run on another thread, since it keeps waiting until it is notified or until the time is up. But the client should not be aware that there are two threads. As far as the client is concerned, it just calls schedulerInstance.setAlarm(myDate) and everything should be taken care of.

So Scheduler has to create a separate thread for alarmTimer(). How do I call alarmTimer from that new thread while still referring to the correct this object? What is the right way to handle this, where I want multiple threads to keep a reference to the original Scheduler instance so that wait() and notify() work?
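For illustration only (this sketch is not from the original post; the PriorityQueue field, thread name, and daemon flag are assumptions), one common pattern is to have the constructor spawn the timer thread with a Runnable that captures this, so the client thread and the timer thread synchronize on the same Scheduler instance:

import java.util.Date;
import java.util.PriorityQueue;

class Scheduler {

    private final PriorityQueue<Date> alarms = new PriorityQueue<>();

    Scheduler() {
        // The method reference captures `this`, so the timer thread waits on
        // the same instance that clients call setAlarm() on.
        Thread timer = new Thread(this::alarmTimer, "scheduler-timer");
        timer.setDaemon(true);
        timer.start();
    }

    public synchronized void setAlarm(Date date) {
        alarms.add(date);
        notify(); // wake the timer thread waiting on this instance
    }

    private synchronized void alarmTimer() {
        try {
            while (true) {
                wait(); // releases the lock on `this` until setAlarm() notifies
                System.out.println("Alarm Triggered: " + alarms.poll());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The client still sees a single object: new Scheduler() followed by setAlarm(myDate), with the second thread hidden inside the class.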

1 answer:

Answer 0 (score: 0)

Here is one way to do it:

  1. Extend Thread
  2. Synchronize on this rather than at the method level
  3. Start your thread and keep the reference:

import java.util.Date;

class Scheduler extends java.lang.Thread {

    public void setAlarm(Date date) {
        synchronized (this) {
            this.notify();
        }
    }

    private void alarmTimer() {
        this.start();
    }

    @Override
    public void run() {
        synchronized (this) {
            try {
                while (true) {
                    this.wait();
                    System.out.println("Alarm Triggered");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

Every other approach is also conceivable, as long as you keep a reference to the object being waited on (anyObject.wait() / anyObject.notify()) between starting and stopping the alarm.
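As a hedged usage sketch of the answer's extends-Thread approach (the Demo class, sleep duration, and the direct start() call are illustrative assumptions; in the answer's design alarmTimer() would trigger start() internally):

import java.util.Date;

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        Scheduler scheduler = new Scheduler();
        scheduler.start();              // inherited from Thread; run() waits on the scheduler instance
        Thread.sleep(100);              // give the timer thread time to reach wait()
        scheduler.setAlarm(new Date()); // notify() on the same instance wakes the timer thread
    }
}

Note that if setAlarm() runs before the timer thread has reached wait(), the notification is lost; a production version would typically guard wait() with a condition check (for example, whether the alarm queue is non-empty) instead of relying on timing.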