How do I randomly select an item from a dictionary?

Time: 2018-07-20 20:55:13

Tags: python dictionary keyerror

I'm a Python beginner trying to build a blackjack game, and I keep running into KeyError exceptions related to this code:

def rank(rank):
    rank = {
        '2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, '10': 10,
        'Jack': 10, 'King': 10, 'Queen': 10, 'Ace': 1}
    return random.choice(rank)

The error occurs when I try to call the function like this:

def draw_card(card):
    card_rank=Card.rank(rank)
    card_suit=Card.suit(suit)
    card={card_suit:card_rank}
    return card

I'm trying to use the rank function from my Card class to assign attributes to a new card variable.

1 Answer:

Answer 0 (score: 4):

random.choice cannot take a dict as input (it needs an object that can be indexed with integers).
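For context (a minimal reproduction, not part of the original answer; the shortened rank dict is just for illustration): random.choice picks a random integer index i and evaluates rank[i], and because a dictionary is looked up by its own keys rather than by position, that lookup raises the KeyError the question describes.

import random

rank = {'2': 2, '3': 3, 'Ace': 1}  # shortened version of the question's dict

# random.choice(seq) picks a random integer i in range(len(seq)) and returns seq[i].
# For a dict, seq[i] is a key lookup, so this raises KeyError (e.g. KeyError: 0)
# instead of returning a random value -- the same error the question reports.
try:
    print(random.choice(rank))
except KeyError as exc:
    print('KeyError:', exc)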

So just convert the rank dictionary (well, its values) to a list or tuple and then pick a value, like this (I created a new function, because the rank(rank) bit doesn't make sense; you don't need a parameter to pick a card):

import random

# globally defined (constant); there is no point defining it locally, and you
# may need it in another part of the program
rank = {'2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, '10': 10,
        'Jack': 10, 'King': 10, 'Queen': 10, 'Ace': 1}

def pick():
    return random.choice(list(rank.values()))

In Python 3, list(rank.values()) is required, because dictionary values are no longer of list type. You may want to pre-compute this list to save computation time.
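As a follow-up sketch of that pre-computation idea (RANK_VALUES, SUIT_NAMES and this draw_card are illustrative names and assumptions, not part of the original answer), building the value list once and reusing it when drawing a card could look like this:

import random

rank = {'2': 2, '3': 3, '4': 4, '5': 5, '6': 6, '7': 7, '8': 8, '9': 9, '10': 10,
        'Jack': 10, 'King': 10, 'Queen': 10, 'Ace': 1}

# Pre-computed once at import time, so drawing a card does not rebuild the list every call.
RANK_VALUES = list(rank.values())
SUIT_NAMES = ['Hearts', 'Diamonds', 'Clubs', 'Spades']  # assumed suit set

def draw_card():
    # Returns a {suit: rank_value} dict, mirroring the shape used in the question.
    return {random.choice(SUIT_NAMES): random.choice(RANK_VALUES)}

print(draw_card())  # e.g. {'Spades': 10}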