Problems using time() and clock() in C

Date: 2016-02-27 19:20:48

Tags: c time

I'm working on a programming assignment and I'm getting strange results. The idea is to count the number of processor ticks and the time taken to run an algorithm.

Normally the code runs so fast that the time taken is 0 seconds, but I noticed that the number of processor ticks was 0 both at the start and at the finish, giving 0 elapsed processor ticks.

I added a delay with usleep so the time taken would be non-zero, but the processor ticks are still zero and the difference between the two timestamps still computes to zero.

I've been hammering at this for several days and can't crack it; any advice is very welcome. My code is below:

/* This program takes an input "n". If n is even it divides n by 2
* If n is odd, it multiples n by 3 and adds 1. Each time through the loop
* it iterates a counter.
* It continues until n is 1
*
* This program will compute the time taken to perform the above algorithm
*/
#include <stdio.h>
#include <time.h>
#include <unistd.h>     /* for usleep() */

void delay(int);

int main(void) {
    int n, i = 0;
    time_t start, finish;
    double duration;    /* difftime() returns a double */
    clock_t startTicks, finishTicks, diffTicks;
    printf("Clocks per sec = %ld\n", (long)CLOCKS_PER_SEC);
    printf("Enter an integer: ");
    scanf("%d", &n);    // read value from keyboard
    time(&start);       // record start time (calendar seconds)
    startTicks = clock();
    printf("Start Clock = %s\n", ctime(&start));
    printf("Start Processor Ticks = %d\n", startTicks);

    while (n != 1) {    // continues until n=1
        i++;    // increment counter
        printf("iterations =%d\t", i);  // display counter iterations
        if (n % 2) {            // if n is odd, n=3n+1
            printf("Input n is odd!\t\t");
            n = (n * 3) + 1;
            printf("Output n = %d\n", n);
            delay(1000000);
        } else {                //if n is even, n=n/2
            printf("Input n is even!\t");
            n = n / 2;
            printf("Output n = %d\n", n);
            delay(1000000);
        }
    }
    printf("n=%d\n", n);
    time(&finish);      // record finish time (calendar seconds)
    finishTicks = clock();
    printf("Stop time = %s\n", ctime(&finish));
    printf("Stop Processor Ticks = %d\n", finishTicks);
    duration = difftime(finish, start); // compute difference in time
    diffTicks = finishTicks - startTicks;
    printf("Time elapsed = %2.4f seconds\n", duration);
    printf("Processor ticks elapsed = %d\n", diffTicks);
    return (n);
}

void delay(int us) {
    usleep(us);
}

Edit: After some further research I found that usleep() doesn't add to the program's run time, so I wrote a delay function in asm. Now I get values for the processor ticks, but I'm still getting zero seconds taken to run the algorithm.

void delay(int us) {
    /* busy-wait: burns CPU cycles, so clock() now advances */
    for (int i = 0; i < us; i++) {
        __asm__("nop");
    }
}
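
A minimal sketch of what seems to be happening (assuming a POSIX system): clock() measures CPU time consumed by the process, and a sleeping process consumes essentially none, while time() tracks wall-clock seconds:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    time_t  wallStart = time(NULL);     /* wall-clock seconds */
    clock_t cpuStart  = clock();        /* CPU time consumed  */

    for (int i = 0; i < 4; i++)
        usleep(500000);                 /* ~2 s asleep: almost no CPU is used */

    time_t  wallEnd = time(NULL);
    clock_t cpuEnd  = clock();

    printf("Wall clock elapsed: %.0f s\n", difftime(wallEnd, wallStart)); /* ~2 */
    printf("CPU time elapsed:   %.6f s\n",
           (double)(cpuEnd - cpuStart) / CLOCKS_PER_SEC);                 /* ~0 */
    return 0;
}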

2 Answers:

Answer 0 (score: 2)

You can calculate the elapsed time in seconds with the following formula.

double timeDiff = (double)(EndTime - StartTime) / CLOCKS_PER_SEC;

Here is some dummy code.

void CalculateTime(clock_t startTime, clock_t endTime)
{
   clock_t diffTime = endTime - startTime;
   printf("Processor time elapsed = %lf\n", (double)diffTime /CLOCKS_PER_SEC);
}
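
A hypothetical call site for the helper above (the summing loop is just a placeholder workload, and it assumes CalculateTime() is defined earlier in the same file):

#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t startTime = clock();

    volatile long sum = 0;                  /* placeholder busy workload */
    for (long k = 0; k < 50000000L; k++)
        sum += k;

    clock_t endTime = clock();
    CalculateTime(startTime, endTime);      /* prints the CPU seconds used */
    return 0;
}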

Hope this helps.

Answer 1 (score: 1)

You are trying to implement the Collatz conjecture (the 3n+1 problem). I don't see how you can hope to get a meaningful execution time when the code contains delays. Another problem is the granularity of the clock() result, as indicated by the value of CLOCKS_PER_SEC.

Trying to use time(), with its 1-second resolution, makes things even harder.
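
As an aside, if finer wall-clock resolution is needed, one alternative (assuming a POSIX system; older glibc may require linking with -lrt) is clock_gettime() with CLOCK_MONOTONIC, as in this minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* ... run the algorithm being measured here ... */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double elapsed = (t1.tv_sec - t0.tv_sec)
                   + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("Elapsed: %.9f seconds\n", elapsed);
    return 0;
}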

Staying with clock(), the way to get a measurable duration is to run the computation for a large number of starting values. The code below prints only 10 of the results, enough to make sure the computation isn't optimized away, but not so many that the printing distorts the timing.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SAMPLES 100000

int main(void) {

    int i, j, n;
    double duration;
    clock_t startTicks = clock();

    for(j=2; j<SAMPLES; j++) {
        n = j;                      // starting number
        i = 0;                      // iterations
        while(n != 1) {
            if (n % 2){             // if n is odd, n=3n+1
                n = n * 3 + 1;
            }
            else {                  // if n is even, n=n/2
                n = n / 2;
            }
            i++;
        }
        if(j % (SAMPLES/10) == 0)   // print 10 results only
            printf ("%d had %d iterations\n", j, i);
    }

    duration = ((double)clock() - startTicks) / CLOCKS_PER_SEC;
    printf("\nDuration: %f seconds\n", duration);
    return 0;
}

Program output:

10000 had 29 iterations
20000 had 30 iterations
30000 had 178 iterations
40000 had 31 iterations
50000 had 127 iterations
60000 had 179 iterations
70000 had 81 iterations
80000 had 32 iterations
90000 had 164 iterations

Duration: 0.090000 seconds