Bit shift causes overflow when it shouldn't

Asked: 2019-09-11 02:52:05

Tags: c linux bit-shift

When I bit-shift the maximum positive two's complement number, shouldn't shifting it right by 31 places effectively make it 0, since it starts with 0111 1111 and so on?

I have tried reducing the shift amount, but I think the computer is just reading it wrong.

#include <stdio.h>

int main(void)
{
  int x = 0x7FFFFFFF;   /* maximum positive 32-bit int */
  int nx = ~x;
  int ob = nx >> 31;    /* expected this to be 0 */
  int ans = ob & nx;

  printf("%d\n", ans);
  return 0;
}

I expected ob to be 0, but it turns out to be the two's complement minimum value. I am using this to implement bang (!) without actually using !.

1 Answer:

Answer 0 (score: 3)

If you were shifting the maximum positive two's complement number, you would indeed end up with zero.

But you are not shifting that number:

int x = 0x7FFFFFFF;
int nx = ~x;          // 0x80000000 (assuming 32-bit int).

You are shifting the most negative number.

And, per the standard, C11 6.5.7 Bitwise shift operators /5 (my emphasis):

The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type, or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1 / 2^E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined.

Your implementation appears to preserve the sign bit (an arithmetic shift), which is why you end up with a non-zero, negative value.


As an aside, if you want the effect of the ! operator, you can use:

output = (input == 0);

See 6.5.3.3 Unary arithmetic operators /5 of the same standard (again, my emphasis), which states the equivalence explicitly:

The result of the logical negation operator ! is 0 if the value of its operand compares unequal to 0, 1 if the value of its operand compares equal to 0. The result has type int. The expression !E is equivalent to (0==E).