Getting an exception from the generate_np method

Date: 2019-04-17 20:48:45

Tags: cleverhans

I am trying to generate MNIST adversarial images using the generate_np method, and I get the exception below. I am using the default CNN model from cleverhans.

FailedPreconditionError: Error while reading resource variable dense_2/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/dense_2/bias/N10tensorflow3VarE does not exist.      [[node model_2_1/dense_2/BiasAdd/ReadVariableOp (defined at /anaconda3/envs/Python36/lib/python3.6/site-packages/cleverhans/utils_keras.py:228)]]

import tensorflow as tf 
import keras
import numpy as np

from cleverhans.attacks import FastGradientMethod
from cleverhans.compat import flags
from cleverhans.dataset import MNIST
from cleverhans.utils import AccuracyReport
from cleverhans.utils_keras import cnn_model
from cleverhans.utils_keras import KerasModelWrapper

FLAGS = flags.FLAGS
train_start=0
train_end=6000 
test_start=0
test_end=1000 
nb_epochs=3
batch_size=128
learning_rate=.001
testing=False
label_smoothing=0.1
# Get MNIST test data
mnist = MNIST(train_start=train_start, train_end=train_end, test_start=test_start, test_end=test_end)
x_train, y_train = mnist.get_set('train')
x_test, y_test = mnist.get_set('test')

# Obtain Image Parameters
img_rows, img_cols, nchannels = x_train.shape[1:4]
nb_classes = y_train.shape[1]

# Label smoothing
y_train -= label_smoothing * (y_train - 1. / nb_classes)

print(x_train.shape)


# Set TF random seed to improve reproducibility
tf.set_random_seed(1234)
# Force TensorFlow to use single thread to improve reproducibility
config = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)

# Create TF session and set as Keras backend session
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())
#keras.backend.set_image_data_format('channels_last')
keras.backend.set_session(sess)

# Define Keras model
model = cnn_model(img_rows=img_rows, img_cols=img_cols,
                    channels=nchannels, nb_filters=64,
                    nb_classes=nb_classes)
print("Defined Keras model.")

# Initialize the Fast Gradient Sign Method (FGSM) attack object
wrap = KerasModelWrapper(model)
fgsm = FastGradientMethod(wrap, sess=sess)
fgsm_params = {'eps': 0.3,
                 'clip_min': 0.,
                 'clip_max': 1.}
x_val = np.random.rand(100, 2)
x_val = np.array(x_val, dtype=np.float32)
adv_x = fgsm.generate_np(x_train, **fgsm_params)
print(model.input.shape)

1 Answer:

Answer 0 (score: 0)

From a quick look at the code, it seems you have not trained the model before attacking it. If you intend to use a pretrained model, you should initialize the variables and restore the weights before launching the attack. Otherwise, train the model first and then run the attack.
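
For illustration, a minimal sketch of the "train first, then attack" route using the variables already defined in your script; the optimizer, loss, and the weights filename in the commented alternative are assumptions on my part, not part of your code:

# Compile and fit the Keras model in the same session the attack will use,
# so dense_2/bias and the other variables actually get values.
model.compile(optimizer=keras.optimizers.Adam(lr=learning_rate),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=nb_epochs,
          validation_data=(x_test, y_test),
          verbose=2)

# Alternatively, if you already have trained weights on disk
# (the filename here is just a placeholder), restore them instead:
# model.load_weights('mnist_cnn_weights.h5')

# Only once the weights exist should generate_np be called:
adv_x = fgsm.generate_np(x_train, **fgsm_params)

Because keras.backend.set_session(sess) was called earlier, model.fit runs in the same TensorFlow session that cleverhans uses, so the trained weights are visible when the attack graph reads them.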