Segmented prime sieve

Asked: 2014-11-03 04:23:58

Tags: c++ algorithm primes sieve-of-eratosthenes

I came across this efficient segmented prime sieve on the internet. Please help me understand how it works, especially the use of the next vector.

How does the specific choice of segment size affect performance?

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

const int L1D_CACHE_SIZE = 32768;
void segmented_sieve(int64_t limit, int segment_size = L1D_CACHE_SIZE)
{
    int sqrt = (int) std::sqrt((double) limit);
    int64_t count = (limit < 2) ? 0 : 1;
    int64_t s = 2;
    int64_t n = 3;

    // vector used for sieving
    std::vector<char> sieve(segment_size);

    // generate small primes <= sqrt
    std::vector<char> is_prime(sqrt + 1, 1);
    for (int i = 2; i * i <= sqrt; i++)
        if (is_prime[i])
            for (int j = i * i; j <= sqrt; j += i)
                is_prime[j] = 0;

    std::vector<int> primes;
    std::vector<int> next;

    for (int64_t low = 0; low <= limit; low += segment_size)
    {
        std::fill(sieve.begin(), sieve.end(), 1);

        // current segment = interval [low, high]
        int64_t high = std::min(low + segment_size - 1, limit);

        // store small primes needed to cross off multiples
        for (; s * s <= high; s++)
        {
            if (is_prime[s])
            {
                primes.push_back((int) s);
                next.push_back((int)(s * s - low));
            }
        }
        // sieve the current segment
        for (std::size_t i = 1; i < primes.size(); i++)
        {
            int j = next[i];
            for (int k = primes[i] * 2; j < segment_size; j += k)
                sieve[j] = 0;
            next[i] = j - segment_size;
        }

        for (; n <= high; n += 2)
            if (sieve[n - low]) // n is a prime
                count++;
    }

    std::cout << count << " primes found." << std::endl;
} 
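
For reference, a minimal driver to run the snippet (not part of the code I found; it assumes the headers listed above). With a limit of 1000000 it should print 78498 primes found.

int main()
{
    // count the primes up to 10^6
    segmented_sieve(1000000);
    return 0;
}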

2 answers:

Answer 0 (score: 1)

Here is a more concise formulation of the same algorithm, which should make the principle clearer (it is part of the full, runnable .cpp for segment size timings @ pastebin). It initialises a packed (odds-only) sieve instead of counting primes, but the principles involved are the same. Download and run the .cpp to see the influence of the segment size. Basically, the optimum should be somewhere near your CPU's L1 cache size: too small, and the overhead of the increased number of rounds starts to dominate; too big, and you get punished by the slower timings of the L2 and L3 caches. See also How does segmentation improve the running time of Sieve of Eratosthenes?

// prime_and_offset_t, initialise_odd_primes_and_offsets_64K() and set_bit()
// are defined in the full .cpp linked above
void initialise_packed_sieve_4G (void *data, unsigned segment_bytes = 1 << 15, unsigned end_bit = 1u << 31)
{
   typedef std::vector<prime_and_offset_t>::iterator prime_iter_t;
   std::vector<prime_and_offset_t> small_factors;

   initialise_odd_primes_and_offsets_64K(small_factors);

   unsigned segment_bits = segment_bytes * CHAR_BIT;
   unsigned partial_bits = end_bit % segment_bits;
   unsigned segments     = end_bit / segment_bits + (partial_bits != 0);

   unsigned char *segment = static_cast<unsigned char *>(data);
   unsigned bytes = segment_bytes;

   for ( ; segments--; segment += segment_bytes)
   {
      if (segments == 0 && partial_bits)
      {
         segment_bits = partial_bits;
         bytes = (partial_bits + CHAR_BIT - 1) / CHAR_BIT;
      }

      std::memset(segment, 0, bytes);

      for (prime_iter_t p = small_factors.begin(); p != small_factors.end(); ++p)
      {
         unsigned n = p->prime;
         unsigned i = p->next_offset;

         for ( ; i < segment_bits; i += n)
         {
            set_bit(segment, i);
         }

         p->next_offset = i - segment_bits;
      }
   }
}

If the offsets were not remembered from segment to segment, they would have to be recomputed every time, which costs at least one division and one multiplication per recomputed index, plus conditionals or some serious bit trickery. When sieving the full 2^32 number range (8192 segments of 32 KB each) that amounts to at least 53,583,872 slow divisions and the same number of somewhat faster multiplications; roughly one second added to the time needed for initialising a full 2^32 sieve (2^31 bits for an odds-only Eratosthenes).

Here is some actual code from an older sieve that uses this "reconstituting" math:

for (index_t k = 1; k <= max_factor_bit; ++k)
{
   if (bitmap_t::traits::bt(bm.bm, k))  continue;

   index_t n = (k << 1) + 1;     // == index_for_value(value_for_index(k) * 2) == n
   index_t i = square(n) >> 1;   // == index_for_value(square(n))

   if (i < offset)
   {
      i += ((offset - i) / n) * n;   // integer division rounds down ...
      if (i < offset)                // ... so we may land one step short of the segment
         i += n;
   }

   for ( ; i <= new_max_bit; i += n)
   {
      bitmap_t::traits::bts(bm.bm, i); 
   }
}

With that, the full sieve takes 5.5 seconds (VC++); the code shown first needs only 4.5 seconds with the same compiler, or 3.5 seconds using gcc 4.8.1 (MinGW64).

Here are the gcc timings:

sieve bits = 2147483648 (equiv. number = 4294967295)

segment size    4096 (2^12) bytes ...   4.091 s   1001.2 M/s
segment size    8192 (2^13) bytes ...   3.723 s   1100.2 M/s
segment size   16384 (2^14) bytes ...   3.534 s   1159.0 M/s
segment size   32768 (2^15) bytes ...   3.418 s   1198.4 M/s
segment size   65536 (2^16) bytes ...   3.894 s   1051.9 M/s
segment size  131072 (2^17) bytes ...   4.265 s    960.4 M/s
segment size  262144 (2^18) bytes ...   4.453 s    919.8 M/s
segment size  524288 (2^19) bytes ...   5.002 s    818.9 M/s
segment size 1048576 (2^20) bytes ...   5.176 s    791.3 M/s
segment size 2097152 (2^21) bytes ...   5.135 s    797.7 M/s
segment size 4194304 (2^22) bytes ...   5.251 s    780.0 M/s
segment size 8388608 (2^23) bytes ...   7.412 s    552.6 M/s

digest { 203280221, 0C903F86, 5B253F12, 774A3204 }

Note: since then, an additional second can be shaved off with a trick called "presieving", i.e. blasting a precomputed pattern into the bitmap instead of zeroing it at the start. That brings the gcc timing for the whole sieve down to 2.1 s, with this hacked copy of the earlier .cpp. This trick works extremely well together with sieving in cache-sized segments.
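
A minimal self-contained sketch of the idea (not the code from the linked .cpp; the names and bit layout here are just for illustration): in an odds-only bitmap the combined multiples-of-3/5/7 pattern repeats every 3*5*7 = 105 bits, i.e. every 105 bytes once byte alignment is taken into account (840 bits), so it can be computed once and copied into each segment in place of the memset, after which the crossing-off loop only needs to start at the prime 11.

#include <climits>
#include <cstdint>
#include <cstring>

const unsigned PRESIEVE_BYTES = 105;   // 840 bits = lcm of the 105-bit pattern and 8-bit bytes

// bit i of the packed odds-only bitmap stands for the odd number 2 * i + 1
void build_presieve_pattern(unsigned char *pattern)
{
   std::memset(pattern, 0, PRESIEVE_BYTES);
   const unsigned small_primes[3] = { 3, 5, 7 };
   for (int k = 0; k < 3; ++k)
   {
      unsigned p = small_primes[k];
      for (unsigned i = p / 2; i < PRESIEVE_BYTES * CHAR_BIT; i += p)   // bit p/2 is p itself
         pattern[i / CHAR_BIT] |= (unsigned char)(1u << (i % CHAR_BIT));
   }
   // this also marks 3, 5 and 7 themselves, so the first few bits have to be
   // patched afterwards (or those three primes simply counted separately)
}

// replaces std::memset(segment, 0, bytes); first_global_byte is the byte index of the
// segment's start within the whole bitmap and determines the phase of the pattern
void presieve_segment(unsigned char *segment, unsigned bytes,
                      uint64_t first_global_byte, const unsigned char *pattern)
{
   unsigned phase = (unsigned)(first_global_byte % PRESIEVE_BYTES);
   for (unsigned j = 0; j < bytes; ++j)
      segment[j] = pattern[(phase + j) % PRESIEVE_BYTES];
}

A real implementation would copy whole pattern repetitions with std::memcpy instead of a byte loop, but the phase bookkeeping is the part that matters when combining presieving with cache-sized segments.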

Answer 1 (score: 0)

I am no expert on this, but my gut feeling tells me the following:

  1. limit the sieve search table

    so that it fits into the CPU's L1 CACHE, to get the full benefit of the performance boost on current hardware architectures

  2. next vector

    if you want to segment the sieve, then you have to remember the last index for each already sieved prime, for example:

    • sieved primes: 2, 3, 5
    • segment size: 8

      |0 1 2 3 4 5 6 7|0 1 2 3 4 5 6 7|0 1 2 3 4 5 6 7|   // segments
     ----------------------------------------------------
     2|    x   x   x  |x   x   x   x  |x   x   x   x  |
     3|      x     x  |  x     x     x|    x     x    |
     5|          x    |    x         x|        x      |
     ----------------------------------------------------
      |                ^               ^               ^
                                     // next value offset for each prime
      

    So when filling the next segment you can continue smoothly where you left off... (see the toy sketch below)
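
To make the carry-over concrete, here is a toy sketch (just an illustration, not the question's exact code; it starts each prime at 2*p instead of p*p to keep the numbers small):

#include <cstdio>

int main()
{
    const int primes[3]    = { 2, 3, 5 };
    const int segment_size = 8;
    int next_offset[3];                       // offset of the next multiple inside the coming segment

    for (int i = 0; i < 3; ++i)
        next_offset[i] = primes[i] * 2;       // first multiple to cross off (the question's code starts at p*p)

    for (int segment = 0; segment < 3; ++segment)
    {
        std::printf("segment %d:\n", segment);
        for (int i = 0; i < 3; ++i)
        {
            int j = next_offset[i];
            for ( ; j < segment_size; j += primes[i])
                std::printf("  cross off %d (prime %d, offset %d)\n",
                            segment * segment_size + j, primes[i], j);
            next_offset[i] = j - segment_size;   // carry the remainder over into the next segment
        }
    }
    return 0;
}

The line next_offset[i] = j - segment_size is exactly what the next vector in the question's code does between segments.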