Why would I pass an argument to make -j (rather than leaving it blank)?

Date: 2014-02-17 01:25:15

Tags: build compilation makefile

I've seen plenty of discussion about what makes a good value of X to pass when you run

make -j X

Typically, people argue that X should be a function of the number of cores in the system. In my project, I've found the best performance by omitting X and simply running

make -j

If you don't care about leaving resources free for other processes and just want the fastest possible build, is there any reason to pin X to a specific value?

2 answers:

Answer 0 (score: 3):

Using -j without an argument may well be the best solution for your project. If the number of jobs that can run in parallel is relatively small, it's fine.

However, resources are not infinite. Using -j by itself tells make that it should run all the jobs that can possibly be built, with no regard for system resources. It doesn't consider how many CPUs you have, how much memory you have, how high the load on the system already is, and so on.

So if your build system is non-recursive, and/or contains hundreds or thousands of files that can be built in parallel (that is, files that don't depend on each other), make will try to run all of them at once. Just as your own system slows down, and ultimately takes longer, when you try to do too many things at the same time rather than a few things at a time, running too many jobs will bring your system to its knees.

Try building the Linux kernel with -j, as an example, and see how well that works :-).
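A common compromise (a conventional sketch, not something taken from this answer) is to bound the job count by the CPU count instead of leaving -j open-ended; nproc here is the GNU coreutils utility:

```shell
# Derive the job count from the number of available CPUs rather than
# letting -j spawn unbounded jobs; the commands are printed instead of
# executed so the sketch is self-contained.
jobs=$(nproc)
echo "make -j${jobs}"

# A common variant adds one extra job to hide I/O latency:
echo "make -j$((jobs + 1))"
```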

Answer 1 (score: 2):

Update: make will use the value given with the '-l N' flag; see the load-average entry in its option table:

 { 'l', floating, &max_load_average, 1, 1, 0, &default_load_average,
      &default_load_average, "load-average" },
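As a usage sketch (the thresholds below are arbitrary example values, not recommendations), -l lets -j stay unbounded while make stops starting new jobs once the load average passes the limit; the commands are echoed rather than run so the sketch stands alone:

```shell
# Unlimited job slots, but hold new jobs while the load average is above 4.0:
echo "make -j -l 4.0"

# Or tie the load ceiling to the CPU count (nproc is GNU coreutils):
echo "make -j$(nproc) -l $(nproc)"
```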

It looks like make tries not to consume too many resources; see https://github.com/mirror/make/blob/master/src/job.c:

  /* If we are running at least one job already and the load average
     is too high, make this one wait.  */
  if (!c->remote
      && ((job_slots_used > 0 && load_too_high ())
#ifdef WINDOWS32
          || (process_used_slots () >= MAXIMUM_WAIT_OBJECTS)
#endif
          ))
    {
      /* Put this child on the chain of children waiting for the load average
         to go down.  */
      set_command_state (f, cs_running);
      c->next = waiting_jobs;
      waiting_jobs = c;
      return 0;
    }

The comment on load_too_high():

/* Determine if the load average on the system is too high to start a new job.
   The real system load average is only recomputed once a second.  However, a
   very parallel make can easily start tens or even hundreds of jobs in a
   second, which brings the system to its knees for a while until that first
   batch of jobs clears out.

   To avoid this we use a weighted algorithm to try to account for jobs which
   have been started since the last second, and guess what the load average
   would be now if it were computed.

   This algorithm was provided by Thomas Riedl <thomas.riedl@siemens.com>,
   who writes:

!      calculate something load-oid and add to the observed sys.load,
!      so that latter can catch up:
!      - every job started increases jobctr;
!      - every dying job decreases a positive jobctr;
!      - the jobctr value gets zeroed every change of seconds,
!        after its value*weight_b is stored into the 'backlog' value last_sec
!      - weight_a times the sum of jobctr and last_sec gets
!        added to the observed sys.load.
!
!      The two weights have been tried out on 24 and 48 proc. Sun Solaris-9
!      machines, using a several-thousand-jobs-mix of cpp, cc, cxx and smallish
!      sub-shelled commands (rm, echo, sed...) for tests.
!      lowering the 'direct influence' factor weight_a (e.g. to 0.1)
!      resulted in significant excession of the load limit, raising it
!      (e.g. to 0.5) took bad to small, fast-executing jobs and didn't
!      reach the limit in most test cases.
!
!      lowering the 'history influence' weight_b (e.g. to 0.1) resulted in
!      exceeding the limit for longer-running stuff (compile jobs in
!      the .5 to 1.5 sec. range),raising it (e.g. to 0.5) overrepresented
!      small jobs' effects.

 */

#define LOAD_WEIGHT_A           0.25
#define LOAD_WEIGHT_B           0.25
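The description above boils down to one formula: the guessed load is the observed load plus weight_a times (jobctr + last_sec), where last_sec is weight_b times the previous second's job count. A minimal sketch with made-up sample numbers (the variable names are illustrative, not make's own):

```shell
awk 'BEGIN {
  weight_a = 0.25; weight_b = 0.25   # LOAD_WEIGHT_A / LOAD_WEIGHT_B
  observed  = 1.00   # load average as last reported by the system
  prev_jobs = 12     # jobs started during the previous second
  jobctr    = 8      # jobs started so far in the current second
  last_sec  = weight_b * prev_jobs           # the "backlog" value
  guess     = observed + weight_a * (jobctr + last_sec)
  printf "estimated load: %.2f\n", guess     # 1.00 + 0.25 * (8 + 3) = 3.75
}'
```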

Also, note that the number of jobs on Windows is limited to MAXIMUM_WAIT_OBJECTS, which is 64.