I have a text file that I have to read and parse. Each line contains a hex value in ASCII. For example:
0100002c
0100002c
80000000
08000000
0a000000
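For context, this is roughly how I read and convert each line (a minimal sketch, not the exact code from my project; strtoul is just one way to do the conversion, and the file name is made up):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: read one ASCII hex word per line and reinterpret it as int32_t. */
int main(void) {
    FILE *fp = fopen("program.txt", "r");   /* illustrative file name */
    if (!fp) return 1;

    char line[64];
    while (fgets(line, sizeof line, fp)) {
        uint32_t u   = (uint32_t)strtoul(line, NULL, 16); /* parse the hex digits */
        int32_t  raw = (int32_t)u;   /* reinterpret as signed 32-bit
                                        (implementation-defined if the value exceeds INT32_MAX) */
        printf("%08" PRIx32 " -> %" PRId32 "\n", u, raw);
    }
    fclose(fp);
    return 0;
}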
After converting each value to a signed 32-bit integer, I have to examine the bits as follows (a shift-and-mask sketch of this decoding follows the list):
bit #31 => should result in decimal 0 or 1
bit #30 to 23 => should result in decimal 0 to 10
bit #22 to 0 => should result in a signed decimal number
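Written with explicit shifts and masks, the decoding I have in mind looks roughly like this (just a sketch; the function name decode is illustrative, and the field widths follow the union shown below, i.e. 1 + 7 + 24 bits):

#include <stdint.h>
#include <stdio.h>

/* Sketch of the intended decoding with explicit shifts and masks
 * (1-bit binop, 7-bit operation, 24-bit signed data). */
static void decode(int32_t raw) {
    uint32_t u     = (uint32_t)raw;
    unsigned binop = (u >> 31) & 0x1u;           /* top bit                         */
    unsigned op    = (u >> 24) & 0x7Fu;          /* next 7 bits                     */
    int32_t  data  = (int32_t)(u & 0xFFFFFFu);   /* low 24 bits, still non-negative */
    if (data & 0x800000)                         /* sign-extend the 24-bit value    */
        data -= 0x1000000;
    printf("BINCODE: %u OPCODE: %u DATA: %d\n", binop, op, data);
}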
I assign the raw int32 to the following struct/union (I only ever set the raw member):
typedef struct DATA {
    union {
        int32_t raw;
        struct {
            int32_t data:24;
            uint8_t operation:7;
            uint8_t binop:1;
        };
    } members;
} data_t;
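Filling it looks roughly like this (a minimal usage sketch; load_word is an illustrative name, not a function from my project):

#include <stdint.h>
#include <stdlib.h>

/* Usage sketch: only the raw member is ever written directly;
 * the bit-field members are only read back. */
static data_t load_word(const char *hexline) {
    data_t d;
    d.members.raw = (int32_t)strtoul(hexline, NULL, 16);
    return d;
}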
Now, the thing is: on a Linux machine, compiling with GCC (I tried 4.8 and 5.4), I get the correct result with the following debug function:
void vm_data_debug(data_t* const inst, int const num) {
    printf("DEBUG DATA #%d => "
           " RAW: %-12d"
           " BINCODE: %-1d"
           "\tOPCODE: %-1d"
           "\tDATA: %-10d", num, inst->members.raw,
           inst->members.binop, inst->members.operation, inst->members.data);
    printf("\tBITS: ");
    vm_data_print_raw_bits(sizeof inst->members.raw, &inst->members.raw);
}
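vm_data_print_raw_bits is not shown above; it only dumps the value MSB-first, roughly like this (a sketch, my actual version may differ in details):

#include <stdio.h>

/* Bit-dump helper (sketch): walks the bytes from highest to lowest address,
 * which yields the MSB-first order seen below on a little-endian x86 machine. */
void vm_data_print_raw_bits(size_t const size, void const *const ptr) {
    unsigned char const *b = (unsigned char const *)ptr;
    for (size_t i = size; i-- > 0; )         /* highest byte first  */
        for (int j = 7; j >= 0; j--)         /* bit 7 down to bit 0 */
            putchar(((b[i] >> j) & 1) ? '1' : '0');
    putchar('\n');
}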
This is the result on the Linux machine for the sample ASCII input at the top of the question, all fine and dandy!
DEBUG DATA #0 => RAW: 16777260 BINCODE: 0 OPCODE: 1 DATA: 44 BITS: 00000001000000000000000000101100
DEBUG DATA #1 => RAW: 16777260 BINCODE: 0 OPCODE: 1 DATA: 44 BITS: 00000001000000000000000000101100
DEBUG DATA #2 => RAW: -2147483648 BINCODE: 1 OPCODE: 0 DATA: 0 BITS: 10000000000000000000000000000000
DEBUG DATA #3 => RAW: 134217728 BINCODE: 0 OPCODE: 8 DATA: 0 BITS: 00001000000000000000000000000000
DEBUG DATA #4 => RAW: 167772160 BINCODE: 0 OPCODE: 10 DATA: 0 BITS: 00001010000000000000000000000000
Now, on a Windows machine, running the exact same code as on the Linux machine, I get a very different result (I compiled with both MinGW and MSVC 2015):
DEBUG DATA #0 => RAW: 16777260 BINCODE: 1 OPCODE: 77 DATA: 44 BITS: 00000001000000000000000000101100
DEBUG DATA #1 => RAW: 16777260 BINCODE: 1 OPCODE: 77 DATA: 44 BITS: 00000001000000000000000000101100
DEBUG DATA #2 => RAW: 2147483647 BINCODE: 1 OPCODE: 77 DATA: -1 BITS: 01111111111111111111111111111111
DEBUG DATA #3 => RAW: 134217728 BINCODE: 1 OPCODE: 77 DATA: 0 BITS: 00001000000000000000000000000000
DEBUG DATA #4 => RAW: 167772160 BINCODE: 1 OPCODE: 77 DATA: 0 BITS: 00001010000000000000000000000000
So the question is: where does this difference come from, and what can I do to get consistent behavior between Windows and Linux?
I have already looked at this question, but it does not solve the problem for me: with the union members all signed or all unsigned, it still does not work on Windows (one of the variants I tried is sketched below).
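For example, one of the variants I tried only changes the signedness of the members and keeps the declared widths (a sketch of the kind of change, not the exact code):

/* Variant with all bit-field members unsigned (sketch);
 * on Windows this still does not behave the same as on Linux. */
typedef struct DATA_U {
    union {
        uint32_t raw;
        struct {
            uint32_t data:24;
            uint8_t  operation:7;
            uint8_t  binop:1;
        };
    } members;
} data_u_t;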