I am trying to use the 'visual_92_categories' dataset from mne-python, but when I try to filter and extract epochs I get a memory error! My machine has 7 GB of RAM. I was wondering if anyone can help me. Is there a memory limit in Python or Jupyter notebooks? Thanks
import os.path as op

import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
from pandas import read_csv

data_path = visual_92_categories.data_path()

# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
max_trigger = 92
conds = conds[:max_trigger]

conditions = []
for c in conds.values:
    cond_tags = list(c[:2])
    cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
                  for k, i in enumerate(c[2:], 2)]
    conditions.append('/'.join(map(str, cond_tags)))
print(conditions[24])

event_id = dict(zip(conditions, conds.trigger + 1))

n_runs = 4  # 4 for full data (use fewer to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block) for block in range(n_runs)]
raw = concatenate_raws(raws)

events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]

picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
                    picks=picks, tmin=-.1, tmax=.500, preload=True)
y = epochs.events[:, 2]
X1 = epochs.copy().get_data()
Answer 0 (score: 0)
Running this code takes more than 7 GB of memory for me. Even the X1 array alone is about 4 GB. Its dtype is float64, though, so if you cannot get more memory, try storing it as float32: memory consumption is halved, and in most cases the loss of precision is acceptable.
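For a rough sense of the sizes involved (the epoch, channel, and sample counts below are illustrative assumptions, not values read from the dataset), the dtype arithmetic looks like this:

import numpy as np

# hypothetical sizes: ~2900 epochs, 306 MEG channels, 601 time samples
n_epochs, n_channels, n_times = 2900, 306, 601
size64 = n_epochs * n_channels * n_times * np.dtype('float64').itemsize
size32 = n_epochs * n_channels * n_times * np.dtype('float32').itemsize
print(size64 / 1e9)  # ~4.27 GB at 8 bytes per value
print(size32 / 1e9)  # ~2.13 GB at 4 bytes per value, exactly half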
In addition, you can probably process the data block by block, save each block to disk as a numpy array, and load and concatenate the arrays once you are done:
# leaving the initial part (imports, event_id, fname, etc.) intact
import pickle  # needed to save the per-block data
import numpy as np

for block in range(n_runs):
    raw = mne.io.read_raw_fif(fname % block)
    # raw = concatenate_raws(raws)  # no longer needed: blocks are processed one at a time
    events = mne.find_events(raw, min_duration=.002)
    events = events[events[:, 2] <= max_trigger]
    picks = mne.pick_types(raw.info, meg=True)
    try:
        epochs = mne.Epochs(raw, events=events, event_id=event_id,
                            baseline=None, picks=picks,
                            tmin=-.1, tmax=.500, preload=True)
    except ValueError:  # some blocks contain no valid data; skip them
        continue
    y = epochs.events[:, 2].astype('float32')
    X1 = epochs.copy().get_data().astype('float32')
    pickle.dump(y, open('y_block_{}.pkl'.format(block), 'wb'))  # use convenient names
    pickle.dump(X1, open('x_block_{}.pkl'.format(block), 'wb'))
    # remove objects that are no longer needed from memory
    del y
    del X1
    del raw
    del epochs

X1 = None  # will hold the concatenated x arrays
y = None   # will hold the concatenated y arrays
for block in range(n_runs):
    try:
        if X1 is None:
            X1 = pickle.load(open('x_block_{}.pkl'.format(block), 'rb'))
            y = pickle.load(open('y_block_{}.pkl'.format(block), 'rb'))
        else:
            X1 = np.concatenate((X1, pickle.load(open('x_block_{}.pkl'.format(block), 'rb'))))
            y = np.concatenate((y, pickle.load(open('y_block_{}.pkl'.format(block), 'rb'))))
    except FileNotFoundError:  # skip blocks that were not saved in the previous stage
        pass
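A side note on the serialization choice: since X1 and y are plain numpy arrays, np.save/np.load would work just as well as pickle, and np.load additionally supports memory-mapped reads. Below is a minimal sketch of a leaner concatenation stage under that scheme, assuming every block file was actually written (no missing-block handling here):

import numpy as np

# saving side would become, e.g.: np.save('x_block_{}.npy'.format(block), X1)

# open each block as a memory map, preallocate the output once,
# then copy slice by slice so only one full-size array is ever in RAM
blocks = [np.load('x_block_{}.npy'.format(b), mmap_mode='r')
          for b in range(n_runs)]
n_total = sum(b.shape[0] for b in blocks)
X1 = np.empty((n_total,) + blocks[0].shape[1:], dtype=blocks[0].dtype)
start = 0
for b in blocks:
    X1[start:start + b.shape[0]] = b  # streams from disk into the slice
    start += b.shape[0]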
So, the block-by-block code above does not run out of memory for me (i.e. it stays under 7 GB), but I am not entirely sure that processing all blocks independently is equivalent to what mne does on the concatenated raw. At least, this code produces an array that is missing roughly 0.5% of the samples. Someone with more mne experience can probably address this better than I can.
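If someone wants to pin down where that ~0.5% goes, one cheap sanity check (a sketch, reusing the same fname, n_runs, and max_trigger as above) is to compare per-block event counts with the count obtained from the concatenated raw:

import mne

per_block = 0
for block in range(n_runs):
    raw = mne.io.read_raw_fif(fname % block)
    events = mne.find_events(raw, min_duration=.002)
    per_block += int((events[:, 2] <= max_trigger).sum())
    del raw

print(per_block)  # compare against len(events) from the concatenated version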