My dictionary looks like this:

s = {'A': [3, 4, 7], 'B': [4, 9, 9], 'C': [3, 4, 5], 'D': [2, 2, 6], 'E': [6, 7, 9], 'F': [2, 4, 5]}
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 171, in main
process()
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 166, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 103, in <lambda>
func = lambda _, it: map(mapper, it)
File "<string>", line 1, in <lambda>
File "/usr/hdp/current/spark2-client/python/pyspark/worker.py", line 70, in <lambda>
return lambda *a: f(*a)
File "<stdin>", line 14, in <lambda>
TypeError: unsupported operand type(s) for /: 'NoneType' and 'NoneType'
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
How do I add up the values (numbers) in each array (they are already integers), so that the total for each array can be written out? After that, how would I sort the dictionary so that the lowest total comes first?
I need it sorted because users will be entering data into these arrays.
Answer 0 (score: -1)
To sum the values, you can loop over the data with .items(), which yields each key together with its associated value:
s = {'A': [3, 4, 7], 'B': [4, 9, 9], 'C': [3, 4, 5], 'D': [2, 2, 6], 'E': [6, 7, 9], 'F': [2, 4, 5]}
new_dict = {}
for a, b in s.items():
    new_dict[a] = sum(b)  # sum() totals the elements of each list
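An equivalent, more compact form of the same summing, if you prefer a dict comprehension:
new_dict = {key: sum(values) for key, values in s.items()}  # same result as the loop above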
Dictionaries themselves are unordered; however, you can store the data sorted by total in the following form:
final_data = sorted(new_dict.items(), key=lambda x: x[1])  # sort pairs by total, lowest first
print(final_data)
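With the sample dictionary above, this prints the pairs with the lowest total first:
[('D', 10), ('F', 11), ('C', 12), ('A', 14), ('B', 22), ('E', 22)]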
Answer 1 (score: -1)
This does what you want, with an explanation below:
#!/usr/bin/env python3.6
def main():
    nums = {'A': [3, 4, 7], 'B': [4, 9, 9], 'C': [3, 4, 5], 'D': [2, 2, 6], 'E': [6, 7, 9], 'F': [2, 4, 5]}
    for key in nums.keys():
        nums[key] = sum(nums[key])  # replace each list with the sum of its elements
    print(nums)

if __name__ == "__main__":
    main()
What we are doing here is looping over the keys in your dict and reassigning each key to the sum of its elements. You can't sort a dictionary itself, so no sorting is involved here. Why do you even need it sorted?
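That said, if the lowest-total-first ordering from the question is needed, here is a minimal sketch, assuming Python 3.7+ (where plain dicts are guaranteed to preserve insertion order): rebuild the dict from its sorted items.
sorted_nums = dict(sorted(nums.items(), key=lambda item: item[1]))  # ascending by total
print(sorted_nums)  # {'D': 10, 'F': 11, 'C': 12, 'A': 14, 'B': 22, 'E': 22}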