TensorFlow: preserving the optimizer's internal state while shrinking the problem

Date: 2019-05-15 12:13:28

Tags: python tensorflow tensorflow2.0

I am currently performing a large batch of function minimisations with TensorFlow, as described here:

Optimise function for many pseudodata realisations in TensorFlow 2

I do this by fitting a tensor of input parameters to an equally sized tensor of observed vectors, so TensorFlow just sees one big minimisation problem. However, the individual components of the input tensor and the sample tensor are completely independent of each other, so I track their convergence separately.

Next, I want to stop optimising the tensor components that I judge to have converged sufficiently. At the moment I do this by cutting them out of all the problem variables, like so:

self.mus[i] = tf.Variable(m[~iconv],dtype='float32') # for i,m in enumerate(self.mus)...

(where iconv is a mask selecting the converged tensor components)
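For illustration, the masking step can be checked in isolation with plain NumPy (the array names here are made up for the demo):

```python
import numpy as np

# Toy stand-in for one parameter tensor across 6 independent trials
m = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], dtype='float32')

# iconv marks trials deemed converged (to be removed from the problem)
iconv = np.array([False, True, False, True, True, False])

# Keep only the un-converged trials, as in m[~iconv]
remaining = m[~iconv]
print(remaining)  # [1. 3. 6.]
```

The same boolean-mask indexing works on a `tf.Tensor` in TensorFlow 2, which is what the `m[~iconv]` line above relies on.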

Here is a complete example:

import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import datetime

Npars = 10

# Bunch of independent Normal distributions that we want to combine
norm0 = [tfd.Normal(loc = 10, scale = 1) for i in range(Npars)]

# Construct joint distributions
joint0 = tfd.JointDistributionSequential(norm0)

N = int(1e6)
# Generate pseudodata to be fitted
samples0 = joint0.sample(N)

q0 = -2*joint0.log_prob(samples0)
print("q0:", q0)

class Model:
    def __init__(self,samples):
        # Parameters to optimize for each trial
        self.mus = [tf.Variable(10*np.ones(N, dtype='float32'), name='mu{0}'.format(i)) for i in range(Npars)]
        self.samples0 = samples

    def loss(self):
        norm_free = [tfd.Normal(loc = self.mus[i], scale = 1) for i in range(Npars)]
        joint_free = tfd.JointDistributionSequential(norm_free)
        qM = -2*joint_free.log_prob(self.samples0)
        self.qM = qM
        total_loss = tf.math.reduce_sum(qM,axis=0)
        return total_loss

    def remove_converged(self,iconv):
        """Remove converged trials of fit and rebuild input variables
           using only the remaining, un-converged trials.
           iconv - mask selecting the trials to be removed
                   from the problem.
        """
        for i,m in enumerate(self.mus):
            #print(i, m)
            #print(iconv)
            self.mus[i] = tf.Variable(m[~iconv],dtype='float32')

        for i,x in enumerate(self.samples0):
            self.samples0[i] = tf.Variable(x[~iconv],dtype='float32')

        return self.qM[iconv]

m = Model(samples0)
#opt = tf.optimizers.SGD(0.01)
opt = tf.optimizers.Adam(0.01) 
i=0
tol = 0.001
rebuild_N = 1000
prev_loss = None
start = datetime.datetime.now()
N_converged_tot = 0
qM = None
prev_pars = None
while N_converged_tot < N:
     print("Problem size: {0}".format(m.mus[0].shape))
     stop = False
     N_converged = 0
     prev_qM = None
     prev_pars = None
     while not stop:
        with tf.GradientTape() as tape:
            loss = m.loss()
        gradients = tape.gradient(loss, m.mus)
        opt.apply_gradients(zip(gradients, m.mus))
        if prev_qM is None:
            delta_qM = m.qM*0 + 1e99
        else:
            delta_qM = tf.math.abs(m.qM - prev_qM)
        cmask = (delta_qM.numpy()<tol) 
        N_converged = np.sum(cmask)
        # Compute how big the last step was
        if prev_pars is None:
            step_sizes = None
            avg_step_size = None
        else:
            step_sizes = np.abs(np.sqrt(np.sum([(p.numpy() - pprev)**2 for p,pprev in zip(m.mus,prev_pars)],axis=0)))
            avg_step_size = np.mean(step_sizes)
        prev_pars = [p.numpy() for p in m.mus]
        print("i: {0}, max(delta_qM): {1}, tol: {2}, avg_step_size: {3}, N_converged_tot: {4}".format(i,np.max(delta_qM.numpy()),tol,avg_step_size,N_converged_tot + N_converged))
        i+=1
        # Stop if either everything is converged, or more than a certain number of trials have converged (in which case we will stop and remove them)
        stop = (N_converged==len(cmask)) or (N_converged > rebuild_N)
        prev_qM = m.qM

     # Many sub-minimisations are now converged, so rebuild the problem to minimize only the
     # un-converged ones
     # But only if the problem size is not already small
     if N_converged > rebuild_N:
         qpart = m.remove_converged(cmask)
         if qM is None:
             qM = qpart
         else:
             qM = tf.concat([qM,qpart],axis=0)    
     N_converged_tot += N_converged

# Collect last results
qpart = m.remove_converged(cmask)
if qM is None:
    qM = qpart
else:
    qM = tf.concat([qM,qpart],axis=0)    

elapsed = datetime.datetime.now() - start

print("Time elapsed (s), N samples, N pars")
print("({0}, {1}, {2}),".format(elapsed.total_seconds(),N,Npars))

q = q0 - qM

fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(121)
sns.distplot(q, kde=False, ax=ax, norm_hist=True)
qx = np.linspace(np.min(q),np.max(q),1000)
qy = np.exp(tfd.Chi2(df=Npars).log_prob(qx))
sns.lineplot(qx,qy)
ax = fig.add_subplot(122)
ax.set(yscale="log")
sns.distplot(q, kde=False, ax=ax, norm_hist=True)
qx = np.linspace(np.min(q),np.max(q),1000)
qy = np.exp(tfd.Chi2(df=Npars).log_prob(qx))
sns.lineplot(qx,qy)
ax.set_ylim(1./N,1.01*np.max(qy[np.isfinite(qy)]))

plt.show()

Now, my question: when I rebuild the TensorFlow variables, the optimizer's internal state appears to be reset. That makes perfect sense, of course, since the variable IDs and everything else have changed, so the optimizer has no way of knowing that I am still continuing the same optimisation. To show this clearly, here is some output from the example code:

Problem size: (1000000,)
i: 0, max(delta_qM): inf, tol: 0.001, avg_step_size: None, N_converged_tot: 0
i: 1, max(delta_qM): 0.3656349182128906, tol: 0.001, avg_step_size: 0.03143804147839546, N_converged_tot: 0
i: 2, max(delta_qM): 0.36358642578125, tol: 0.001, avg_step_size: 0.031275395303964615, N_converged_tot: 0
i: 3, max(delta_qM): 0.3614768981933594, tol: 0.001, avg_step_size: 0.031083986163139343, N_converged_tot: 0
...
i: 79, max(delta_qM): 0.1918487548828125, tol: 0.001, avg_step_size: 0.014670683071017265, N_converged_tot: 960
i: 80, max(delta_qM): 0.190338134765625, tol: 0.001, avg_step_size: 0.014529905281960964, N_converged_tot: 1037
Problem size: (998963,)
i: 81, max(delta_qM): inf, tol: 0.001, avg_step_size: None, N_converged_tot: 1037
i: 82, max(delta_qM): 0.19749069213867188, tol: 0.001, avg_step_size: 0.03214956447482109, N_converged_tot: 1037
i: 83, max(delta_qM): 0.2643852233886719, tol: 0.001, avg_step_size: 0.03562989458441734, N_converged_tot: 1037
i: 84, max(delta_qM): 0.3062019348144531, tol: 0.001, avg_step_size: 0.036608822643756866, N_converged_tot: 1037
i: 85, max(delta_qM): 0.33426666259765625, tol: 0.001, avg_step_size: 0.039152681827545166, N_converged_tot: 1048

So one sees that the average step size has shrunk to about 0.015 by iteration 80, and then jumps back up to 0.032 at iteration 82, after the variables have been cut down.

How can I avoid this behaviour? Is there some way to preserve the optimizer's state so that this "reset" does not happen?
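To make concrete what "state" means here: Adam's state for a vector of independent parameters is just the per-component first- and second-moment estimates (plus the step counter), so in principle it can be sliced with the same mask as the variables. The snippet below is not TensorFlow; it is a minimal NumPy re-implementation of the Adam update (the names `adam_step`, `m1`, `m2` are invented for this sketch) showing that the moments can be carried over when components are removed:

```python
import numpy as np

def adam_step(theta, grad, m1, m2, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a vector of independent parameters."""
    m1 = b1*m1 + (1 - b1)*grad       # first-moment estimate
    m2 = b2*m2 + (1 - b2)*grad**2    # second-moment estimate
    m1_hat = m1/(1 - b1**t)          # bias correction
    m2_hat = m2/(1 - b2**t)
    theta = theta - lr*m1_hat/(np.sqrt(m2_hat) + eps)
    return theta, m1, m2

# Five independent 1-D quadratic fits: loss_i = (theta_i - target_i)**2
target = np.array([1., 2., 3., 4., 5.])
theta = np.zeros(5)
m1 = np.zeros(5)
m2 = np.zeros(5)

for t in range(1, 101):
    grad = 2*(theta - target)
    theta, m1, m2 = adam_step(theta, grad, m1, m2, t)

# Pretend components 0 and 1 have converged; remove them, but slice the
# optimizer state with the SAME mask so the surviving components keep
# their accumulated moments instead of being "reset".
iconv = np.array([True, True, False, False, False])
theta, m1, m2 = theta[~iconv], m1[~iconv], m2[~iconv]

print(theta.shape, m1.shape, m2.shape)  # (3,) (3,) (3,)
```

In TensorFlow 2 the analogous state lives in the optimizer's slot variables (for Adam, `opt.get_slot(var, 'm')` and `opt.get_slot(var, 'v')`) together with the iteration counter; whether those slots can simply be sliced and reattached to freshly created variables is exactly what is being asked here, and may depend on the optimizer implementation.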
