I want to run Kotlin code as a script via the Java Scripting API (JSR-223), the same way JavaScript can be run:
import javax.script.*;

public class EvalScript {
    public static void main(String[] args) throws Exception {
        // create a script engine manager
        ScriptEngineManager factory = new ScriptEngineManager();
        // create a JavaScript engine
        ScriptEngine engine = factory.getEngineByName("JavaScript");
        // evaluate JavaScript code from a String
        engine.eval("print('Hello, World')");
    }
}
Or via a similar API.
Answer 0 (score: 6)
Yes, this is possible since Kotlin 1.1: http://kotlinlang.org/docs/reference/whatsnew11.html#javaxscript-support
This configuration adds the Kotlin script engine to my Kotlin 1.2 project:
<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-script-runtime</artifactId>
    <version>${kotlin.version}</version>
</dependency>
<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-script-util</artifactId>
    <version>${kotlin.version}</version>
</dependency>
plus a META-INF/services/javax.script.ScriptEngineFactory file, with content taken from https://github.com/JetBrains/kotlin/blob/master/libraries/examples/kotlin-jsr223-local-example/src/main/resources/META-INF/services/javax.script.ScriptEngineFactory, and the two libraries above: kotlin-script-runtime and kotlin-script-util.
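For reference, the service registration file in the linked example project contains a single line naming the engine factory class (copied from the JetBrains example; the exact class may differ across Kotlin versions, so verify it against the version you use):

```
org.jetbrains.kotlin.script.jsr223.KotlinJsr223JvmLocalScriptEngineFactory
```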
Update:
Since Kotlin 1.2.20, kotlin-script-util no longer has an explicit dependency on kotlin-compiler, so one more module has to be provided explicitly:
<dependency>
    <groupId>org.jetbrains.kotlin</groupId>
    <artifactId>kotlin-compiler-embeddable</artifactId>
    <version>${kotlin.version}</version>
</dependency>
(See https://youtrack.jetbrains.com/issue/KT-17561 and the build file in the example project.)
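With those dependencies and the service file in place, the Kotlin engine can be looked up the same way as the JavaScript one. A minimal sketch, assuming the engine factory from the example project is on the classpath and registers the "kts" file extension (verify the extension/engine name for your Kotlin version):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class EvalKotlinScript {
    public static void main(String[] args) throws Exception {
        ScriptEngineManager manager = new ScriptEngineManager();
        // the Kotlin JSR-223 factory is discovered via META-INF/services
        // and registered under the "kts" file extension
        ScriptEngine engine = manager.getEngineByExtension("kts");
        // eval returns the value of the script's last expression
        Object result = engine.eval("val x = 3\nx + 2");
        System.out.println(result);
    }
}
```

Note that `getEngineByExtension` returns `null` when no matching factory is found, so a missing dependency or service file shows up as a `NullPointerException` on the `eval` call.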
Answer 1 (score: 4)
Kotlin support for the Java Scripting API is planned, but as of version 1.0.3 it is not available yet. In the meantime, you can try an existing open-source implementation.