I have been writing a Keras model that uses a TensorFlow loss function, but I've run into a problem. For this model, the ground-truth values should be a tensor of shape (batch_size), and the model's output has shape (batch_size, num_classes). I have verified that the model's output has shape (?, num_classes) and created a target tensor for the ground truth, but that doesn't seem to fix the problem. Does anyone have an idea how to solve this? Is there something I'm missing? The relevant code is below.
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext

spark = SparkSession\
    .builder\
    .appName("SparkSessionExample")\
    .master("local[4]")\
    .config("spark.sql.warehouse.dir", "target/spark-warehouse")\
    .config("spark.driver.bindAddress", "localhost")\
    .getOrCreate()
# make some test data
columns = ['id', 'dogs', 'cats']
vals = [
    (1, 2, 0),
    (2, 0, 1)
]

# create DataFrame
df = spark.createDataFrame(vals, columns)
df.show()
File "/Users/USERNAME/server/spark-2.4.3-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.6 than that in driver 3.7, PySpark cannot run with different minor versions. Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
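The exception above says the Spark workers and the driver are running different Python minor versions (3.6 vs. 3.7). A common fix, sketched below as an illustration (this snippet is not from the original post), is to point both environment variables at the same interpreter before the SparkSession is created:

```python
import os
import sys

# Make the PySpark workers use the same interpreter as the driver.
# These must be set before SparkSession/SparkContext is created.
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

print(os.environ["PYSPARK_PYTHON"] == os.environ["PYSPARK_DRIVER_PYTHON"])
```

The same settings can also be exported in the shell (or in spark-env.sh) instead of in Python.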
When I inspect the loss function, I see that y_true has shape (?, num_classes), while y_pred has shape (?, ?).
Answer 0: (score: 0)
Well, this is rather embarrassing, but the error was just a typo: it should be "target_tensors", not "target_tensor". If that isn't the cause in your case, another option is simply to reshape the tensor inside the loss function.
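The fallback mentioned above, modifying the tensor inside the loss function, amounts to making the shape of y_true match y_pred: expanding integer labels of shape (batch_size,) into (batch_size, num_classes). Here is a minimal NumPy sketch of that idea (inside a real Keras loss you would use backend ops such as tf.one_hot instead; the helper name below is illustrative, not from the original post):

```python
import numpy as np

def to_one_hot(y_true, num_classes):
    """Convert integer labels of shape (batch_size,) into a one-hot
    array of shape (batch_size, num_classes)."""
    y_true = np.asarray(y_true).reshape(-1)   # ensure shape (batch_size,)
    return np.eye(num_classes)[y_true]        # shape (batch_size, num_classes)

labels = [0, 2, 1]                            # shape (3,)
one_hot = to_one_hot(labels, num_classes=3)
print(one_hot.shape)                          # (3, 3)
```

Alternatively, keeping the labels as integers and switching the loss to sparse_categorical_crossentropy avoids the reshape entirely.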