What I want to do is take a 64-bit unsigned integer made up of bit pairs and create from it a 32-bit integer containing a 0 if both bits in the corresponding pair are 0, and a 1 otherwise. In other words, I want to turn something that looks like:
01 00 10 11
into something that looks like:
1 0 1 1
The two obvious solutions are a brute-force loop, or a lookup table for each byte followed by eight lookups combined into the final result with ORs and shifts, but I'm sure there should be an elegant bit-twiddling way of doing this. I'll be doing this in C++ for 64-bit integers, but if anyone knows an efficient way to do it for shorter integers, I'm sure I can figure out how to scale it up.
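For reference, a minimal sketch of the byte-wise lookup-table baseline described above (the table layout and names here are my own, not from any of the answers): each input byte holds four pairs, so a 256-entry table maps a byte to 4 result bits, and eight lookups are shifted and ORed together.

    #include <cstdint>

    // Hypothetical 256-entry table: maps one input byte (four bit pairs)
    // to its four result bits. Call init_pair_lut() once before use.
    static uint8_t pair_lut[256];

    static void init_pair_lut()
    {
        for (int b = 0; b < 256; ++b) {
            uint8_t r = 0;
            for (int k = 0; k < 4; ++k)
                if (b & (0x3 << (2 * k)))     // is either bit of pair k set?
                    r |= uint8_t(1u << k);
            pair_lut[b] = r;
        }
    }

    uint32_t pairs_to_bits_lut(uint64_t x)
    {
        uint32_t r = 0;
        for (int i = 0; i < 8; ++i)           // eight lookups, 4 result bits each
            r |= uint32_t(pair_lut[(x >> (8 * i)) & 0xFF]) << (4 * i);
        return r;
    }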
Answer 0 (score: 54)
Here's a portable C++ implementation. It seems to work in my brief testing. The deinterleave code is based on this SO question.
uint64_t calc(uint64_t n)
{
// (odd | even)
uint64_t x = (n & 0x5555555555555555ull) | ((n & 0xAAAAAAAAAAAAAAAAull) >> 1);
// deinterleave
x = (x | (x >> 1)) & 0x3333333333333333ull;
x = (x | (x >> 2)) & 0x0F0F0F0F0F0F0F0Full;
x = (x | (x >> 4)) & 0x00FF00FF00FF00FFull;
x = (x | (x >> 8)) & 0x0000FFFF0000FFFFull;
x = (x | (x >> 16)) & 0x00000000FFFFFFFFull;
return x;
}
gcc, clang, and msvc all compile this to roughly 30 instructions.
As per the comments, this can be modified. A possibly (?) improved version is:
uint64_t calc(uint64_t n)
{
// (odd | even)
uint64_t x = (n | (n >> 1)) & 0x5555555555555555ull; // single bits
// ... the rest: deinterleave
x = (x | (x >> 1)) & 0x3333333333333333ull; // bit pairs
x = (x | (x >> 2)) & 0x0F0F0F0F0F0F0F0Full; // nibbles
x = (x | (x >> 4)) & 0x00FF00FF00FF00FFull; // octets
x = (x | (x >> 8)) & 0x0000FFFF0000FFFFull; // halfwords
x = (x | (x >> 16)) & 0x00000000FFFFFFFFull; // words
return x;
}
Answer 1 (score: 43)
The fastest solution for the x86 architecture with the BMI2 instruction set:
#include <stdint.h>
#include <x86intrin.h>
uint32_t calc (uint64_t a)
{
return _pext_u64(a, 0x5555555555555555ull) |
_pext_u64(a, 0xaaaaaaaaaaaaaaaaull);
}
This compiles to a total of 5 instructions.
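A hedged usage note: _pext_u64 needs a BMI2-capable CPU and a matching target flag at compile time (something like -mbmi2 or -march=haswell), otherwise the intrinsic won't build, and on pre-BMI2 chips the instruction faults at run time. A tiny worked example appended to the calc() above (the test value is mine):

    #include <cstdio>

    int main()
    {
        uint64_t in = 0xE1;             // pairs 11 10 00 01 -> expected result 1101b
        std::printf("%x\n", calc(in));  // prints d
    }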
Answer 2 (score: 14)
If you don't have pext and you still want to do better than the trivial way, then this extraction can be expressed as a logarithmic number (if you generalize it in terms of length) of bit moves:
// OR adjacent bits, destroys the odd bits but it doesn't matter
x = (x | (x >> 1)) & rep8(0x55);
// gather the even bits with delta swaps
x = bitmove(x, rep8(0x44), 1); // make pairs
x = bitmove(x, rep8(0x30), 2); // make nibbles
x = bitmove(x, rep4(0x0F00), 4); // make bytes
x = bitmove(x, rep2(0x00FF0000), 8); // make words
res = (uint32_t)(x | (x >> 16)); // final step is simpler
Using:
bitmove(x, mask, step) {
return x | ((x & mask) >> step);
}
repk is just so I can write shorter constants: rep8(0x44) = 0x4444444444444444, etc.
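Putting the pieces together, a minimal self-contained sketch (the helper names are mine). Note that I've written bitmove as a genuine delta swap, matching the "delta swaps" comment above: as far as I can tell, a plain OR-move x | ((x & mask) >> step) leaves a stale copy of the moved bits behind, and the next step would then OR that garbage into the result, whereas a swap moves the zeros out of the way.

    #include <cstdint>

    // repK helpers: rep8 repeats a byte 8 times, rep4 a 16-bit word 4 times,
    // rep2 a 32-bit word twice. rep8(0x44) == 0x4444444444444444.
    constexpr uint64_t rep8(uint64_t b) { return b * 0x0101010101010101ull; }
    constexpr uint64_t rep4(uint64_t w) { return w * 0x0001000100010001ull; }
    constexpr uint64_t rep2(uint64_t d) { return d * 0x0000000100000001ull; }

    // bitmove written as a real delta swap: the bits selected by `mask` trade
    // places with the (zero) bits `step` positions below them, so no stale
    // copy of the moved bits is left behind.
    inline uint64_t bitmove(uint64_t x, uint64_t mask, unsigned step)
    {
        uint64_t t = (x ^ (x << step)) & mask;
        return x ^ t ^ (t >> step);
    }

    uint32_t pairs_to_bits(uint64_t x)
    {
        x = (x | (x >> 1)) & rep8(0x55);      // OR adjacent bits; results sit in the even positions
        x = bitmove(x, rep8(0x44), 1);        // make pairs
        x = bitmove(x, rep8(0x30), 2);        // make nibbles
        x = bitmove(x, rep4(0x0F00), 4);      // make bytes
        x = bitmove(x, rep2(0x00FF0000), 8);  // make words
        return (uint32_t)(x | (x >> 16));     // final step is simpler
    }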
Also, if you do have pext, you can use just one of them, which may be faster and is at least shorter:
_pext_u64(x | (x >> 1), rep8(0x55));
Answer 3 (score: 10)
Okay, let's make this more hacky (probably wrong):
uint64_t x;
uint64_t even_bits = x & 0xAAAAAAAAAAAAAAAAull;
uint64_t odd_bits = x & 0x5555555555555555ull;
Now, my original solution did this:
// wrong
even_bits >> 1;
unsigned int solution = even_bits | odd_bits;
However, as Jack Aidley pointed out, while this lines the bits up, it doesn't remove the gaps in between!
Thankfully, we can use the very helpful _pext instruction from the BMI2 instruction set.
u64 _pext_u64(u64 a, u64 m) - Extract bits from a at the corresponding bit locations specified by mask m to contiguous low bits in dst; the remaining upper bits in dst are set to zero.
solution = _pext_u64(solution, odd_bits);
Alternatively, instead of separating the bits with & and >>, you could just use _pext twice on the original number with the two masks (splitting it into two contiguous 32-bit numbers), and then simply or the results together.
However, if you don't have access to BMI2, I'm pretty sure removing the gaps would still involve a loop; perhaps something a bit simpler than your original idea.
Answer 4 (score: 7)
A slight improvement over the LUT approach (4 lookups instead of 8):
Compute the bitwise OR and clear every other bit. Then intertwine the bits of pairs of bytes to produce four bytes. Finally, reorder the bits of the four bytes (mapped over the quadword) through a 256-entry lookup table.
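A sketch of that idea, reconstructed from the snippet quoted in the vectorized answer further down; the table and function names are mine, and the table has to be initialized once before use:

    #include <cstdint>

    // Hypothetical 256-entry table that undoes the bit interleave inside one
    // byte produced by the q |= q >> 9 step below.
    static uint8_t bit_reorder_lut[256];

    static void init_bit_reorder_lut()
    {
        for (int b = 0; b < 256; ++b) {
            uint8_t out = 0;
            for (int i = 0; i < 4; ++i) {
                out |= uint8_t(((b >> (2 * i + 1)) & 1) << i);     // odd source bits -> result bits 0..3
                out |= uint8_t(((b >> (2 * i)) & 1) << (i + 4));   // even source bits -> result bits 4..7
            }
            bit_reorder_lut[b] = out;
        }
    }

    uint32_t pairs_to_bits_lut4(uint64_t q)
    {
        q = (q | (q << 1)) & 0xAAAAAAAAAAAAAAAAull;  // OR each pair into its high (odd) bit
        q |= q >> 9;                                 // intertwine: the low byte of each 16-bit word now holds all 8 of its pair results
        return  uint32_t(bit_reorder_lut[ q        & 0xFF])        // four lookups, one per 16-bit word
             | (uint32_t(bit_reorder_lut[(q >> 16) & 0xFF]) << 8)
             | (uint32_t(bit_reorder_lut[(q >> 32) & 0xFF]) << 16)
             | (uint32_t(bit_reorder_lut[(q >> 48) & 0xFF]) << 24);
    }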
Answer 5 (score: 6)
The hard part seems to be packing the bits after the oring. The oring is done by:
ored = (x | (x>>1)) & 0x5555555555555555;
(assuming int is large enough so that we don't have to mess with suffixes). Then we can pack the bits in steps, first 2 by 2, then 4 by 4, and so on:
pack2 = ((ored*3) >> 1) & 0x3333333333333333;
pack4 = ((pack2*5) >> 2) & 0x0F0F0F0F0F0F0F0F;
pack8 = ((pack4*17) >> 4) & 0x00FF00FF00FF00FF;
pack16 = ((pack8*257) >> 8) & 0x0000FFFF0000FFFF;
pack32 = ((pack16*65537) >> 16) & 0xFFFFFFFF;
// (or cast to uint32_t instead of the final & 0xFFF...)
What happens in the packing is that the multiplication combines the data with a shifted copy of itself. In your example, the first multiplication would be (I write the zeros that come from the mask in ored as o, while the other 0s come from the original data):
     o1o0o1o1
   x       11
   ----------
     o1o0o1o1
    o1o0o1o1
   ----------
    o11001111
      ^^  ^^
     o10oo11o   <- these are the bits we want to keep.
We could also do this by oring:
ored = (ored | (ored>>1)) & 0x3333333333333333;
ored = (ored | (ored>>2)) & 0x0F0F0F0F0F0F0F0F;
ored = (ored | (ored>>4)) & 0x00FF00FF00FF00FF;
ored = (ored | (ored>>8)) & 0x0000FFFF0000FFFF;
ored = (ored | (ored>>16)) & 0xFFFFFFFF;
// ored = ((uint32_t)ored | (uint32_t)(ored>>16)); // helps some compilers make better code, esp. on x86
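A quick brute-force cross-check sketch of the multiplication-based packing against a naive per-pair loop (the function names, the test values, and the LCG used to generate them are mine; the or-based variant can be checked the same way):

    #include <cassert>
    #include <cstdint>

    // Naive reference: loop over the 32 pairs.
    static uint32_t reference(uint64_t n)
    {
        uint32_t r = 0;
        for (int k = 0; k < 32; ++k)
            if (n & (0x3ull << (2 * k)))
                r |= 1u << k;
        return r;
    }

    static uint32_t pack_by_multiply(uint64_t x)
    {
        uint64_t ored   = (x | (x >> 1)) & 0x5555555555555555ull;
        uint64_t pack2  = ((ored   *     3) >> 1)  & 0x3333333333333333ull;
        uint64_t pack4  = ((pack2  *     5) >> 2)  & 0x0F0F0F0F0F0F0F0Full;
        uint64_t pack8  = ((pack4  *    17) >> 4)  & 0x00FF00FF00FF00FFull;
        uint64_t pack16 = ((pack8  *   257) >> 8)  & 0x0000FFFF0000FFFFull;
        return (uint32_t)((pack16 * 65537) >> 16);
    }

    int main()
    {
        uint64_t x = 0x123456789ABCDEF0ull;  // arbitrary seed
        for (int i = 0; i < 1000000; ++i) {
            assert(pack_by_multiply(x) == reference(x));
            x = x * 6364136223846793005ull + 1442695040888963407ull;  // LCG step
        }
    }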
Answer 6 (score: 1)
I made some vectorized versions (godbolt link still with some big design-notes comments) and did some benchmarks back when this question was new. I was going to spend more time on it, but never got back to it. Posting what I have so I can close these browser tabs. >.< Improvements welcome.
I don't have a Haswell I could test on, so I couldn't benchmark the pextr version. I'm sure it's faster, though, since it's only 4 fast instructions.
*** Sandybridge (i5-2500k, so no hyperthreading)
*** 64bit, gcc 5.2 with -O3 -fno-tree-vectorize results:
TODO: update benchmarks for latest code changes
total cycles, and insn/clock, for the test-loop
This measures only throughput, not latency,
and a bottleneck on one execution port might make a function look worse in a microbench
than it will do when mixed with other code that can keep the other ports busy.
Lower numbers in the first column are better:
these are total cycle counts in Megacycles, and correspond to execution time
but they take frequency scaling / turbo out of the mix.
(We're not cache / memory bound at all, so low core clock = fewer cycles for cache miss doesn't matter).
AVX no AVX
887.519Mc 2.70Ipc 887.758Mc 2.70Ipc use_orbits_shift_right
1140.68Mc 2.45Ipc 1140.47Mc 2.46Ipc use_orbits_mul (old version that right-shifted after each)
718.038Mc 2.79Ipc 716.452Mc 2.79Ipc use_orbits_x86_lea
767.836Mc 2.74Ipc 1027.96Mc 2.53Ipc use_orbits_sse2_shift
619.466Mc 2.90Ipc 816.698Mc 2.69Ipc use_orbits_ssse3_shift
845.988Mc 2.72Ipc 845.537Mc 2.72Ipc use_orbits_ssse3_shift_scalar_mmx (gimped by stupid compiler)
583.239Mc 2.92Ipc 686.792Mc 2.91Ipc use_orbits_ssse3_interleave_scalar
547.386Mc 2.92Ipc 730.259Mc 2.88Ipc use_orbits_ssse3_interleave
The fastest (for throughput in a loop) with AVX is orbits_ssse3_interleave
The fastest (for throughput in a loop) without AVX is orbits_ssse3_interleave_scalar
but obits_x86_lea comes very close.
AVX for non-destructive 3-operand vector insns helps a lot
Maybe a bit less important on IvB and later, where mov-elimination handles mov uops at register-rename time
// Tables generated with the following commands:
// for i in avx.perf{{2..4},{6..10}};do awk '/cycles / {c=$1; gsub(",", "", c); } /insns per cy/ {print c / 1000000 "Mc " $4"Ipc"}' *"$i"*;done | column -c 50 -x
// Include 0 and 1 for hosts with pextr
// 5 is omitted because it's not written
The almost-certainly-best version (with BMI2) is:
#include <stdint.h>
#define LOBITS64 0x5555555555555555ull
#define HIBITS64 0xaaaaaaaaaaaaaaaaull
uint32_t orbits_1pext (uint64_t a) {
// a|a<<1 compiles more efficiently on x86 than a|a>>1, because of LEA for non-destructive left-shift
return _pext_u64( a | a<<1, HIBITS64);
}
This compiles to:
lea rax, [rdi+rdi]
or rdi, rax
movabs rax, -6148914691236517206
pext rax, rdi, rax
ret
So this is only 4 uops, and the critical-path latency is 5c = 3 (pext) + 1 (or) + 1 (lea), on Intel Haswell. Throughput should be one result per cycle (with no loop overhead or loading/storing). The mov imm for the constant can be hoisted out of a loop, though, since it isn't destroyed, so throughput-wise we only need 3 fused-domain uops per result.
A mov r, imm64 isn't ideal. (A 1-uop broadcast-immediate of a 32-bit or 8-bit value into a 64-bit reg would be ideal, but there's no such instruction.) Having the constant in data memory is an option, but inlining it in the instruction stream is nice. A 64b constant takes a lot of uop-cache space, which makes the pext version with two different masks even worse. Generating one mask from the other with a not could help with that, though: movabs / pext / not / pext / or, but that's still 5 insns compared to the 4 enabled by the lea trick.
The best version (with AVX) is:
#include <immintrin.h>
/* Yves Daoust's idea, operating on nibbles instead of bytes:
original:
Q= (Q | (Q << 1)) & 0xAAAAAAAAAAAAL // OR in pairs
Q|= Q >> 9; // Intertwine 4 words into 4 bytes
B0= LUT[B0]; B1= LUT[B2]; B2= LUT[B4]; B3= LUT[B6]; // Rearrange bits in bytes
To operate on nibbles,
Q= (Q | (Q << 1)) & 0xAAAAAAAAAAAAL // OR in pairs, same as before
Q|= Q>>5 // Intertwine 8 nibbles into 8 bytes
// pshufb as a LUT to re-order the bits within each nibble (to undo the interleave)
// right-shift and OR to combine nibbles
// pshufb as a byte-shuffle to put the 4 bytes we want into the low 4
*/
uint32_t orbits_ssse3_interleave(uint64_t scalar_a)
{
// do some of this in GP regs if not doing two 64b elements in parallel.
// esp. beneficial for AMD Bulldozer-family, where integer and vector ops don't share execution ports
// but VEX-encoded SSE saves mov instructions
__m128i a = _mm_cvtsi64_si128(scalar_a);
// element size doesn't matter, any bits shifted out of element boundaries would have been masked off anyway.
__m128i lshift = _mm_slli_epi64(a, 1);
lshift = _mm_or_si128(lshift, a);
lshift = _mm_and_si128(lshift, _mm_set1_epi32(0xaaaaaaaaUL));
// a = bits: h g f e d c b a (same thing in other bytes)
// lshift = hg 0 fe 0 dc 0 ba 0
// lshift = s 0 r 0 q 0 p 0
// lshift = s 0 r 0 q 0 p 0
__m128i rshift = _mm_srli_epi64(lshift, 5); // again, element size doesn't matter, we're keeping only the low nibbles
// rshift = s 0 r 0 q 0 p 0 (the last zero ORs with the top bit of the low nibble in the next byte over)
__m128i nibbles = _mm_or_si128(rshift, lshift);
nibbles = _mm_and_si128(nibbles, _mm_set1_epi8(0x0f) ); // have to zero the high nibbles: the sign bit affects pshufb
// nibbles = 0 0 0 0 q s p r
// pshufb -> 0 0 0 0 s r q p
const __m128i BITORDER_NIBBLE_LUT = _mm_setr_epi8( // setr: first arg goes in the low byte, indexed by 0b0000
0b0000,
0b0100,
0b0001,
0b0101,
0b1000,
0b1100,
0b1001,
0b1101,
0b0010,
0b0110,
0b0011,
0b0111,
0b1010,
0b1110,
0b1011,
0b1111 );
__m128i ord_nibbles = _mm_shuffle_epi8(BITORDER_NIBBLE_LUT, nibbles);
// want 00 00 00 00 AB CD EF GH from:
// ord_nibbles = 0A0B0C0D0E0F0G0H
// 0A0B0C0D0E0F0G0 H(shifted out)
__m128i merged_nibbles = _mm_or_si128(ord_nibbles, _mm_srli_epi64(ord_nibbles, 4));
// merged_nibbles= 0A AB BC CD DE EF FG GH. We want every other byte of this.
// 7 6 5 4 3 2 1 0
// pshufb is the most efficient way. Mask and then packuswb would work, but uses the shuffle port just like pshufb
__m128i ord_bytes = _mm_shuffle_epi8(merged_nibbles, _mm_set_epi8(-1,-1,-1,-1, 14,12,10,8,
-1,-1,-1,-1, 6, 4, 2,0) );
return _mm_cvtsi128_si32(ord_bytes); // movd the low32 of the vector
// _mm_extract_epi32(ord_bytes, 2); // If operating on two inputs in parallel: SSE4.1 PEXTRD the result from the upper half of the reg.
}
The best version without AVX is a slight modification that only works with one input at a time, using SIMD only for the shuffling. In theory, using MMX instead of SSE would make more sense, especially if we care about first-gen Core2, where the 64b pshufb is fast but the 128b pshufb isn't single-cycle. Anyway, compilers did a bad job with MMX intrinsics. Also, EMMS is slow.
// same as orbits_ssse3_interleave, but doing some of the math in integer regs. (non-vectorized)
// esp. beneficial for AMD Bulldozer-family, where integer and vector ops don't share execution ports
// VEX-encoded SSE saves mov instructions, so full vector is preferable if building with VEX-encoding
// Use MMX for Silvermont/Atom/Merom(Core2): pshufb is slow for xmm, but fast for MMX. Only 64b shuffle unit?
uint32_t orbits_ssse3_interleave_scalar(uint64_t scalar_a)
{
uint64_t lshift = (scalar_a | scalar_a << 1);
lshift &= HIBITS64;
uint64_t rshift = lshift >> 5;
// rshift = s 0 r 0 q 0 p 0 (the last zero ORs with the top bit of the low nibble in the next byte over)
uint64_t nibbles_scalar = (rshift | lshift) & 0x0f0f0f0f0f0f0f0fULL;
// have to zero the high nibbles: the sign bit affects pshufb
__m128i nibbles = _mm_cvtsi64_si128(nibbles_scalar);
// nibbles = 0 0 0 0 q s p r
// pshufb -> 0 0 0 0 s r q p
const __m128i BITORDER_NIBBLE_LUT = _mm_setr_epi8( // setr: first arg goes in the low byte, indexed by 0b0000
0b0000,
0b0100,
0b0001,
0b0101,
0b1000,
0b1100,
0b1001,
0b1101,
0b0010,
0b0110,
0b0011,
0b0111,
0b1010,
0b1110,
0b1011,
0b1111 );
__m128i ord_nibbles = _mm_shuffle_epi8(BITORDER_NIBBLE_LUT, nibbles);
// want 00 00 00 00 AB CD EF GH from:
// ord_nibbles = 0A0B0C0D0E0F0G0H
// 0A0B0C0D0E0F0G0 H(shifted out)
__m128i merged_nibbles = _mm_or_si128(ord_nibbles, _mm_srli_epi64(ord_nibbles, 4));
// merged_nibbles= 0A AB BC CD DE EF FG GH. We want every other byte of this.
// 7 6 5 4 3 2 1 0
// pshufb is the most efficient way. Mask and then packuswb would work, but uses the shuffle port just like pshufb
__m128i ord_bytes = _mm_shuffle_epi8(merged_nibbles, _mm_set_epi8(0,0,0,0, 0,0,0,0, 0,0,0,0, 6,4,2,0));
return _mm_cvtsi128_si32(ord_bytes); // movd the low32 of the vector
}
Sorry for the mostly code-dump answer. At this point I didn't feel it was worth spending a huge amount of time discussing things beyond what the comments already do. See http://agner.org/optimize/ for guides to optimizing for specific microarchitectures; other resources are in the x86 wiki.