RuntimeError: Trying to resize storage that is not resizable at /pytorch/aten/src/TH/generic/THStorage.c:183

Time: 2019-04-18 12:21:15

Tags: numpy pytorch

The code pads each text in a batch before moving it to the GPU.

After training the whole project for a few days, a problem suddenly appeared.

The error is raised at this line:

x = torch.from_numpy(x.astype(int))
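For context on that line: `astype(int)` returns a new NumPy array by default (a copy with its own buffer), so `torch.from_numpy` ends up wrapping a freshly allocated array rather than the original `x`. A minimal check of that copy behavior, using only NumPy:

```python
import numpy as np

# astype() copies by default, leaving the source array untouched
a = np.zeros((2, 3))   # float64 source, like the padded batch array
b = a.astype(int)      # new integer array with an independent buffer

assert b is not a                   # a distinct object
assert not np.shares_memory(a, b)   # backed by a separate buffer
assert b.dtype.kind == 'i'          # integer dtype, as requested
```

This means the `RuntimeError` is about the resulting tensor's storage on the PyTorch side, not about the NumPy source array being reused.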

I have searched for several answers to `RuntimeError: Trying to resize storage that is not resizable at /pytorch/aten/src/TH/generic/THStorage.c:183`, but none of them were triggered by this line:

x = torch.from_numpy(x.astype(int))

It ran for several days and then suddenly failed. Is it because my GPU (a Tesla K80) ran out of memory? What does the error mean? Here is the batching code:

max_seq_len = max(lengths)
batch_size = len(batch)
x = np.zeros((batch_size, max_seq_len))
for i, text_ids in enumerate(texts_ids):
    padded = np.zeros(max_seq_len)       # padded = [0, 0, ..., 0] of length max_seq_len
    padded[:len(text_ids)] = text_ids    # padded = [50, 4, 16, ..., 0, 0, 0]
    x[i, :] = padded                     # x holds one padded text per batch row
x = torch.from_numpy(x.astype(int))
x = move_to_cuda(x)
lengths = move_to_cuda(lengths)
labels = move_to_cuda(labels)
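The padding loop above can be run on its own as a NumPy-only sketch. `move_to_cuda`, `batch`, and `texts_ids` come from the asker's surrounding code and are not defined here; the sample ID lists below are hypothetical stand-ins:

```python
import numpy as np

def pad_batch(texts_ids):
    """Pad variable-length ID lists into one (batch_size, max_seq_len) int array."""
    lengths = [len(t) for t in texts_ids]
    max_seq_len = max(lengths)
    x = np.zeros((len(texts_ids), max_seq_len))
    for i, text_ids in enumerate(texts_ids):
        padded = np.zeros(max_seq_len)       # all-zero row of length max_seq_len
        padded[:len(text_ids)] = text_ids    # copy the IDs, leaving zeros as padding
        x[i, :] = padded
    return x.astype(int)                     # fresh int array, ready for torch.from_numpy

# hypothetical sample batch: two texts of different lengths
x = pad_batch([[50, 4, 16], [7, 8, 9, 10, 11]])
# x.shape == (2, 5); x[0] == [50, 4, 16, 0, 0]
```

Since the padding itself is plain NumPy, a shape or length that varies between batches cannot corrupt `x`; each call allocates a new array, which is why the sudden failure points toward the CUDA transfer or memory state rather than this loop.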

0 Answers:

There are no answers.