What is the difference between Python's time.clock() on Mac and Windows?

Date: 2014-08-14 20:44:01

Tags: python macos time

I'm using Python's time module to measure how long a Selenium process takes. My script looks like this...

start_time = time.clock()
...
#ending with
final_time = '{0:.2f}'.format(time.clock()-start_time)

When run on Windows I get something like 55.22, but when run on a Mac it returns something like .14, even though it ran for roughly the same amount of time.

Any idea what is happening differently on the Mac? I'm going to try it on Ubuntu as well to see the difference.

2 Answers:

Answer 0 (score: 5)

According to the documentation, time.clock behaves differently on Unix (including Mac OS X) and on Windows:

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of "processor time", depends on that of the C function of the same name, but in any case, this is the function to use for benchmarking Python or timing algorithms.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

If you want cross-platform consistency, consider time.time.

The difference between processor time and wall-clock time is explained in this article by Doug Hellmann: essentially, the processor clock only advances while your process is actually doing work.
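A quick way to see the distinction is to time a sleep with both kinds of clock. This sketch uses the modern, unambiguous names time.perf_counter (wall clock) and time.process_time (CPU time), which assume Python 3.3+:

```python
import time

# Wall-clock time advances during a sleep; CPU time does not,
# because the process is idle while sleeping.
wall_start = time.perf_counter()
cpu_start = time.process_time()

time.sleep(0.5)  # process does no work here

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

print('wall: {0:.2f}s, cpu: {1:.2f}s'.format(wall_elapsed, cpu_elapsed))
```

The wall-clock reading comes out around half a second, while the CPU reading stays near zero, which is exactly the asymmetry the questioner saw between Windows (wall clock) and Mac (CPU time).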

Answer 1 (score: 2)

The timeit module in the standard library uses timeit.default_timer to measure wall time:

if sys.platform == "win32":
    # On Windows, the best timer is time.clock()
    default_timer = time.clock
else:
    # On most other platforms the best timer is time.time()
    default_timer = time.time

help(timeit) explains:

The difference in default timer function is because on Windows,
clock() has microsecond granularity but time()'s granularity is 1/60th
of a second; on Unix, clock() has 1/100th of a second granularity and
time() is much more precise.  On either platform, the default timer
functions measure wall clock time, not the CPU time.  This means that
other processes running on the same computer may interfere with the
timing.  The best thing to do when accurate timing is necessary is to
repeat the timing a few times and use the best time.  The -r option is
good for this; the default of 3 repetitions is probably enough in most
cases.  On Unix, you can use clock() to measure CPU time.
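The repeat-and-take-the-best advice above maps directly onto timeit.repeat. A minimal sketch (the statement being timed is an arbitrary stand-in):

```python
import timeit

# Run the measurement 3 times, each executing the statement 10000 times,
# and keep the best; min() discards runs slowed by other processes.
timings = timeit.repeat('sum(range(1000))', repeat=3, number=10000)
best = min(timings)
print('best of 3: {0:.4f}s'.format(best))
```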

So, for cross-platform consistency, you can use

import timeit
clock = timeit.default_timer

start_time = clock()
...
final_time = clock()
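Filled in with a placeholder workload (a simple loop standing in for the questioner's Selenium steps), the pattern runs like this:

```python
import timeit

clock = timeit.default_timer  # wall clock on every platform

start_time = clock()
total = sum(i * i for i in range(100000))  # stand-in for the real work
final_time = '{0:.2f}'.format(clock() - start_time)

print(final_time)
```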