Running CNN code with PySwarms to optimize CNN parameters

Date: 2019-05-31 06:21:04

Tags: python python-3.x particle-swarm

I am using the PySwarms library, but I cannot work out where the loss function is defined, so that I can have it call some CNN code that trains a network and returns the classifier's accuracy.

My plan is to set up PSO with a domain for the parameters I want to optimize (the learning rate and the dropout rates between the layers of LeNet), train a CNN with each particle's parameters, and return the accuracy so it can be used to decide which particles to keep.

My problem is that I do not know which parts of the provided example code I should replace with code that trains the CNN.

I have been looking at the documentation on the PySwarms site (https://pyswarms.readthedocs.io/en/latest/examples/usecases/train_neural_network.html), but I am still not sure how to modify it to get the result I want.

def forward_prop(self, params):
    """Forward propagation as objective function
    This computes for the forward propagation of the neural network, as
    well as the loss. It receives a set of parameters that must be
    rolled-back into the corresponding weights and biases.
    Inputs
    ------
    params: np.ndarray
        The dimensions should include an unrolled version of the
        weights and biases.
    Returns
    -------
    float
        The computed negative log-likelihood loss given the parameters
    """
    # Neural network architecture
    n_inputs = self.n_inputs
    n_hidden = self.n_hidden
    n_classes = self.n_classes

    # Roll-back the weights and biases
    W1 = params[0:80].reshape((n_inputs, n_hidden))
    b1 = params[80:100].reshape((n_hidden,))
    W2 = params[100:160].reshape((n_hidden, n_classes))
    b2 = params[160:163].reshape((n_classes,))

    # Perform forward propagation
    z1 = self.x_data.dot(W1) + b1  # Pre-activation in Layer 1
    a1 = np.tanh(z1)     # Activation in Layer 1
    z2 = a1.dot(W2) + b2 # Pre-activation in Layer 2
    logits = z2          # Logits for Layer 2

    # Compute for the softmax of the logits
    exp_scores = np.exp(logits)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

    # Compute the negative log-likelihood
    N = self.x_data.shape[0]  # number of samples (150 for the Iris set)
    correct_logprobs = -np.log(probs[range(N), self.y_data])
    loss = np.sum(correct_logprobs) / N

    return loss
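For context, PySwarms calls the objective with the whole swarm at once (an array of shape `(n_particles, dimensions)`) and expects one cost per particle back. A self-contained sketch of how the tutorial's per-particle loss above fits into that contract (standalone functions instead of the tutorial's class, and random stand-in data in place of the Iris set, are my assumptions):

```python
import numpy as np

# Shapes matching the tutorial's hard-coded slices:
# 4 inputs, 20 hidden units, 3 classes -> 4*20 + 20 + 20*3 + 3 = 163 dims.
n_inputs, n_hidden, n_classes = 4, 20, 3
dims = n_inputs * n_hidden + n_hidden + n_hidden * n_classes + n_classes

rng = np.random.default_rng(0)
x_data = rng.normal(size=(150, n_inputs))        # stand-in for Iris features
y_data = rng.integers(0, n_classes, size=150)    # stand-in labels

def forward_prop(params):
    """Per-particle negative log-likelihood, as in the tutorial."""
    W1 = params[0:80].reshape((n_inputs, n_hidden))
    b1 = params[80:100].reshape((n_hidden,))
    W2 = params[100:160].reshape((n_hidden, n_classes))
    b2 = params[160:163].reshape((n_classes,))
    z1 = x_data.dot(W1) + b1
    a1 = np.tanh(z1)
    logits = a1.dot(W2) + b2
    # Subtract the row max before exp for numerical stability.
    exp_scores = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    N = x_data.shape[0]
    return -np.log(probs[np.arange(N), y_data]).sum() / N

def f(x):
    """Swarm-level objective PySwarms actually calls: one cost per particle."""
    return np.array([forward_prop(particle) for particle in x])

swarm = rng.normal(size=(30, dims))  # 30 particles, 163 dimensions each
costs = f(swarm)
print(costs.shape)  # one loss value per particle
```

The tutorial then hands `f` to the optimizer with something like `optimizer = ps.single.GlobalBestPSO(n_particles=30, dimensions=163, options={"c1": 0.5, "c2": 0.3, "w": 0.9})` followed by `optimizer.optimize(f, iters=1000)`; `f` is the piece to swap out for your own training code.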

Right now I have no output, but what I want to get back is an accuracy value, so that it can be used for particle selection in the rest of the PSO.
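The plan described above (a search domain over learning rate and dropout, one CNN training run per particle, accuracy as the fitness) can be sketched as follows. The `train_and_eval` function is hypothetical; its body here is a dummy surrogate so the sketch runs, and you would replace it with your actual LeNet training loop. Since PSO minimizes, the objective returns `1 - accuracy`:

```python
import numpy as np

def train_and_eval(learning_rate, dropout):
    """Hypothetical stand-in: train the LeNet-style CNN with these
    hyperparameters and return validation accuracy in [0, 1].
    The analytic surrogate below only exists so the sketch is runnable;
    replace it with a real training run.
    """
    return 1.0 / (1.0 + abs(np.log10(learning_rate) + 3) + abs(dropout - 0.5))

def objective(x):
    # x has shape (n_particles, 2); columns are [learning_rate, dropout].
    # Return one cost per particle: 1 - accuracy, so lower is better.
    return np.array([1.0 - train_and_eval(lr, dr) for lr, dr in x])

# Search domain: learning rate in [1e-5, 1e-1], dropout in [0.0, 0.9].
bounds = (np.array([1e-5, 0.0]), np.array([1e-1, 0.9]))

# With pyswarms installed, the search itself would look like:
# import pyswarms as ps
# opt = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
#                               options={"c1": 0.5, "c2": 0.3, "w": 0.9},
#                               bounds=bounds)
# best_cost, best_pos = opt.optimize(objective, iters=20)
```

Note that every call to `objective` trains one CNN per particle per iteration, so the particle count and iteration budget should be kept small.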

0 Answers:

There are no answers.