How to generate an array of byte values within a specific range (Java)?

Time: 2018-01-19 02:15:48

Tags: java arrays random byte

For an assignment, I need to generate an array of 100 randomly generated byte values in the range 0 to 10 (e.g. {10, 2, 3, 2, 7, 5 ...}). I've been able to generate an array of random byte values, but I don't know how to restrict the values to 0-10. This is what I have so far to generate the array:

byte[] array = new byte[100];
Random rand = new Random();
rand.nextBytes(array); // fills the array with values across the full byte range (-128 to 127)

System.out.print(Arrays.toString(array));

1 Answer:

Answer 0 (score: -1)

This can be done by casting the result of Random.nextInt() to byte; nextInt(bound) returns a random int from 0 (inclusive) up to, but not including, the bound argument.

byte[] array = new byte[100];
Random rand = new Random();
for (int i = 0; i < 100; ++i) {
    array[i] = (byte) rand.nextInt(11); // nextInt(11) yields 0..10 inclusive
}
System.out.print(Arrays.toString(array));
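
As a side note, the same cast-based approach generalizes to any min..max range by adding an offset to the bounded nextInt call. Below is a minimal, self-contained sketch; the class name RandomBytesInRange and the min/max values are illustrative assumptions, not part of the original question:

public class RandomBytesInRange {
    public static void main(String[] args) {
        int min = 0;   // illustrative lower bound (inclusive)
        int max = 10;  // illustrative upper bound (inclusive)

        byte[] array = new byte[100];
        java.util.Random rand = new java.util.Random();
        for (int i = 0; i < array.length; i++) {
            // nextInt(max - min + 1) yields 0..(max - min); adding min shifts it into min..max
            array[i] = (byte) (rand.nextInt(max - min + 1) + min);
        }
        System.out.println(java.util.Arrays.toString(array));
    }
}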