Let's consider the following code.

Code:
#!/usr/bin/env python

class Foo():
    def __init__(self, b):
        self.a = 0.0
        self.b = b

    def count_a(self):
        self.a += 0.2

foo = Foo(1)
for i in range(0, 15):
    foo.count_a()
    print "a =", foo.a, "b =", foo.b, '"a == b" ->', foo.a == foo.b
Output:
a = 0.2 b = 1 "a == b" -> False
a = 0.4 b = 1 "a == b" -> False
a = 0.6 b = 1 "a == b" -> False
a = 0.8 b = 1 "a == b" -> False
a = 1.0 b = 1 "a == b" -> True
a = 1.2 b = 1 "a == b" -> False
a = 1.4 b = 1 "a == b" -> False
a = 1.6 b = 1 "a == b" -> False
a = 1.8 b = 1 "a == b" -> False
a = 2.0 b = 1 "a == b" -> False
a = 2.2 b = 1 "a == b" -> False
a = 2.4 b = 1 "a == b" -> False
a = 2.6 b = 1 "a == b" -> False
a = 2.8 b = 1 "a == b" -> False
a = 3.0 b = 1 "a == b" -> False
But if I change foo = Foo(1) to foo = Foo(2), the output becomes:
a = 0.2 b = 2 "a == b" -> False
a = 0.4 b = 2 "a == b" -> False
a = 0.6 b = 2 "a == b" -> False
a = 0.8 b = 2 "a == b" -> False
a = 1.0 b = 2 "a == b" -> False
a = 1.2 b = 2 "a == b" -> False
a = 1.4 b = 2 "a == b" -> False
a = 1.6 b = 2 "a == b" -> False
a = 1.8 b = 2 "a == b" -> False
a = 2.0 b = 2 "a == b" -> False *
a = 2.2 b = 2 "a == b" -> False
a = 2.4 b = 2 "a == b" -> False
a = 2.6 b = 2 "a == b" -> False
a = 2.8 b = 2 "a == b" -> False
a = 3.0 b = 2 "a == b" -> False
You can see that the line a = 2.0 b = 2 "a == b" -> False (marked with *) is very strange. I think I may be misunderstanding some OOP concept in Python. Please explain why this unexpected output happens and how to fix it.
Answer 0 (score: 2)
This has nothing to do with object orientation. It has to do with how computers represent floating-point numbers internally, and with the rounding errors that result: http://floating-point-gui.de/basic/

The Python-specific part here is that the default string representation of a float is rounded to fewer decimal places than the value that is actually stored, for the sake of pretty-printing. That is why the output shows a = 2.0 even though the stored value is not exactly 2.0.
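You can see this directly by reproducing the accumulation outside the class and printing both str() and repr() of the result. This is a quick illustration, not part of the original question; it assumes Python 2 (matching the question's code), where the print statement uses str(), which rounds floats to 12 significant digits, while repr() shows the stored value more precisely:

# Reproduce the accumulation done by count_a(): ten additions of 0.2.
a = 0.0
for i in range(10):
    a += 0.2

print "str :", a           # prints 2.0 - rounded for display
print "repr:", repr(a)     # prints 1.9999999999999998 - the value actually stored
print "a == 2 ->", a == 2  # False, because a is not exactly 2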
For anyone who needs to compare floats properly, respecting their magnitudes, Python introduced a nice mechanism in PEP 485, which adds the math.isclose function to the standard library.
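A minimal sketch of using it (math.isclose requires Python 3.5 or later; its default relative tolerance is 1e-09):

import math

a = 0.0
for i in range(10):
    a += 0.2               # accumulates to 1.9999999999999998, not exactly 2.0

print(a == 2)              # False - exact comparison fails
print(math.isclose(a, 2))  # True - equal within the default relative tolerance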
Answer 1 (score: 0)
In addition to jsbueno's correct explanation, keep in mind that Python generally allows casting "basic types" to themselves, e.g. str("a") == "a".

So if you want a workaround as well as the explanation, just convert the int/float mix to all floats and compare those:
a = 2.0
b = 2
print "a == b", float(a) == float(b)
Output:
a == b True