Python for loop in a function doesn't return values

Asked: 2018-08-06 14:28:12

Tags: python for-loop

I wrote this fairly simple Python function, but for some reason nothing gets returned or printed from it after the for loop finishes. I can call the function just fine, and I put print calls inside the for loop to confirm the values are present and correct. Am I missing something obvious here? The print statement at the bottom prints nothing.

("100", 'AAA')

This is how the function is called (by another function with a nested loop):

After: list_tuple [('80', 'BBB'), ('20', 'CCC'), ('40', 'DDD')]

Output: Best ARIMANone SARIMANone RMSE = inf
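The calling code was not included above, but judging from that output line it is presumably a nested grid search over candidate orders, roughly along these lines (a hypothetical reconstruction for illustration only; the names evaluate_models, arima_orders, and seasonal_orders are not from the original post):

# Hypothetical driver, reconstructed for illustration only.
# It grid-searches candidate orders and keeps the configuration
# with the lowest RMSE.
def evaluate_models(X, arima_orders, seasonal_orders):
    best_rmse, best_arima, best_sarima = float('inf'), None, None
    for arima_order in arima_orders:            # outer loop of the nested search
        for s_arima_order in seasonal_orders:   # inner loop
            try:
                rmse = evaluate_arima_model(X, arima_order, s_arima_order)
                if rmse < best_rmse:            # TypeError if rmse is None or a list
                    best_rmse, best_arima, best_sarima = rmse, arima_order, s_arima_order
            except Exception:
                continue                        # a bare except hides the bad return value
    print('Best ARIMA%s SARIMA%s RMSE=%s' % (best_arima, best_sarima, best_rmse))

In a driver like this, if evaluate_arima_model hands back None (or a list rather than a scalar), the rmse < best_rmse comparison raises a TypeError on Python 3, the except clause swallows it, and the search finishes with its defaults untouched, producing a "Best ARIMANone SARIMANone RMSE = inf" line like the one above. That would also match the list-versus-scalar fix reported in the last answer below.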


3 Answers:

Answer 0 (score: 2)

Your print() statement should indeed print. But since you don't have a return statement, your function doesn't return anything (well, it returns None). If you want the function to return something, add a final line:

return scores
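To see the difference, here is a minimal standalone demonstration (not from the original post) of what a function without a return statement gives you:

def f():
    print('inside f')   # runs and prints as expected

result = f()            # prints "inside f"
print(result)           # prints "None" -- f has no return statement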

Debugging attempt:

Simplified code:

In [1]: def evaluate_arima_model(X, arima_order, s_arima_order):
   ...:     scores = []
   ...:     train_steps = [36, 48, 60, 72, 84]
   ...:     for i in train_steps:
   ...:         rmse = None
   ...:         scores.append(rmse)
   ...:     print(scores)
   ...:     return scores
   ...: 
   ...: 

In [2]: evaluate_arima_model(1,1,1)
[None, None, None, None, None]
Out[2]: [None, None, None, None, None]

I can't see a reason why this wouldn't work.

Answer 1 (score: 0)

You need to write a return statement instead of a print:

import numpy
from math import sqrt
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.statespace.sarimax import SARIMAX

def evaluate_arima_model(X, arima_order, s_arima_order):
    scores = []
    train_steps = [36, 48, 60, 72, 84]
    for i in train_steps:
        Train = X[0:i]          # expanding training window
        Test = X[i:i + 12]      # the following 12 observations as the test set
        model = SARIMAX(Train, order=arima_order, seasonal_order=s_arima_order)
        model_fit = model.fit(trend='nc', disp=0)
        yhat = model_fit.forecast(12)
        # the series is assumed to be log-transformed, hence the exp()
        rmse = sqrt(mean_squared_error(numpy.exp(Test), numpy.exp(yhat)))
        scores.append(rmse)
    return scores
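A hypothetical call, assuming series is a log-transformed 1-D array with at least 96 observations (84 points for the largest training window plus 12 test points; the orders shown are placeholders):

scores = evaluate_arima_model(series, (1, 1, 1), (0, 1, 1, 12))
print(scores)   # one RMSE per entry in train_steps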

Answer 2 (score: -1)

Finally solved this. The problem was returning a list versus the scalar I actually needed, so return scores[0] fixed it.

# (uses the same imports as the previous answer)
def evaluate_arima_model(X, arima_order, s_arima_order):
    scores = []
    train_steps = [36]          # a single training window, so scores has one element
    for i in train_steps:
        Train = X[0:i]
        Test = X[i:i + 12]
        model = SARIMAX(Train, order=arima_order, seasonal_order=s_arima_order)
        model_fit = model.fit(trend='nc', disp=0)
        yhat = model_fit.forecast(12)
        rmse = sqrt(mean_squared_error(numpy.exp(Test), numpy.exp(yhat)))
        scores.append(rmse)
    return scores[0]            # return the scalar RMSE rather than a one-element list
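With train_steps = [36] the list always holds exactly one value, so scores[0] is safe. If the other training windows were restored, one hypothetical alternative would be to aggregate instead of keeping only the first window's score, e.g. by replacing the last line with:

return sum(scores) / len(scores)   # mean RMSE across all training windows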