From user input

Time: 2016-01-28 23:34:39

Tags: python

For this task, I have to take a seven-digit input called the account number and use its last three digits to return the disk storage location. So far my code looks like this:

Account_num = int(input("Enter account number: "))                    
Disk = Account_num[-3:]
if Account_num <= 99999999:
    print("Your disk storage location is:",Disk
          )
else:
    print("Invalid account number entred")

It should also ask the user to enter another account code, and reply with an error message if the disk storage location is full. What it should return is:

"Your disk storage location is" (three digit number)
"Enter another account number: "

But instead it returns:

    Disk = Account_num[-3:]
    TypeError: 'type' object is not subscriptable

I know almost nothing about coding, so any help would be appreciated.

4 answers:

Answer 0 (score: 1)

Account_num is of type int, and only sequences (that is, objects that contain other objects) support the [] slicing notation you are trying to use.
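
For illustration, this is what the difference looks like in the interpreter (a quick demo of my own, assuming Python 3):

>>> 1234567[-3:]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'int' object is not subscriptable
>>> "1234567"[-3:]
'567'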

To get the last three digits of the number, you can use the % operator, which produces the remainder of a division, in this case by 1000:

Disk = Account_num % 1000

For the given Account_num = 9230939, Disk becomes 939.
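
Folding this back into the question's code, a minimal sketch might look like the following (the <= 99999999 bound is carried over from the question as-is):

Account_num = int(input("Enter account number: "))
if Account_num <= 99999999:
    Disk = Account_num % 1000  # remainder of division by 1000 = the last three digits
    print("Your disk storage location is:", Disk)
else:
    print("Invalid account number entered")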

Answer 1 (score: 0)

You can cast the input later instead:

>>> Account_num = input("Enter account number: ")
Enter account number: 12345
>>> Disk = int(Account_num[-3:])
>>> Disk
345

However, if you do this, you will also need to convert Account_num in the comparison, as shown in the sketch below.
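
A complete version of the cast-later approach might look like this (everything beyond the REPL lines above is my own sketch):

Account_num = input("Enter account number: ")  # kept as a string
Disk = int(Account_num[-3:])                   # slicing works on the string
if int(Account_num) <= 99999999:               # cast only for the numeric comparison
    print("Your disk storage location is:", Disk)
else:
    print("Invalid account number entered")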

Answer 2 (score: 0)

You read the input with

Account_num = int(input("Enter account number: ")) 

but then try to handle it as a string. This will work:

Account = input("Enter account number: ") # string                 
Account_num = int(Account) # Now an int     
Disk = int(Account[-3:]) #  Make this an int also for use later
if Account_num <= 99999999:
    print("Your disk storage location is:", Disk)
else:
    print("Invalid account number entred")

Answer 3 (score: 0)

You can't use [-3:] on an int, but you can on a string, so just take the input, convert it to a string to get the last three characters, and then convert that back to an int.

Account_num = input("Enter account number: ")
Disk = int(str(Account_num)[-3:])
if int(Account_num) <= 99999999:  # cast here too, since input() returns a string
    print("Your disk storage location is:", Disk)
else:
    print("Invalid account number entered")