I'm stuck on the following:
log_iter = pd.read_hdf(FN, dspath,
                       where=[pd.Term('hashID', '=', idList)],
                       iterator=True,
                       chunksize=3000)
dspath has 35 columns and can be very large, leading to a MemoryError, so I am trying the iterator/chunksize route. But the where= clause fails with:
ValueError: The passed where expression: [hashID=[147685,...,147197]]
contains an invalid variable reference
all of the variable references must be a reference to
an axis (e.g. 'index' or 'columns'), or a data_column
The currently defined references are: ** list of column names **
The problem is that hashID is not in that list of column names. However, if I do
read_hdf(FN, dspath).columns
then hashID is among the columns. Any suggestions? My goal is to read in all rows x 35 columns whose hashID is in idList.
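For reference, the error message means that a where= clause can only filter on an axis or on a column that was declared as a data_column when the table was written. A minimal sketch of writing a queryable table, assuming write access to the file (the file name example.h5, the key 'dspath' and the tiny frame below are placeholders, not taken from the question):

import pandas as pd

df = pd.DataFrame({'hashID': [147685, 147197], 'value': [1.0, 2.0]})
# Declaring hashID as a data_column makes it usable in where= expressions.
df.to_hdf('example.h5', key='dspath', format='table', data_columns=['hashID'])

# Now a where clause on hashID is accepted:
subset = pd.read_hdf('example.h5', 'dspath', where='hashID in [147685, 147197]')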
Update. The following works, showing that hashID does exist as a column once the dataset is read in.
def dsIterator(self, q, idList):
    hID = u'hashID'
    FN = self.db._hdf_FN()
    dspath = self.getdatasetname(q)
    log_iter = pd.read_hdf(FN, dspath,
                           # where=[pd.Term(u'logid_hashID', '=', idList)],
                           iterator=True,
                           chunksize=30000)
    n_all = 0
    retDF = None
    for dfChunk in log_iter:
        goodChunk = dfChunk.loc[dfChunk[hID].isin(idList)]
        if retDF is None:
            retDF = goodChunk
        else:
            retDF = pd.concat([retDF, goodChunk], ignore_index=True)
        n_all += dfChunk[hID].count()
    n_ret = retDF[hID].count()
    return retDF
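For context, the same chunked isin filter pulled out of the class plumbing (fn, dspath and id_list here are placeholders for the values used above) might look like this standalone sketch:

import pandas as pd

def filter_by_hashid(fn, dspath, id_list, chunksize=30000):
    # Stream the HDF table in chunks and keep only rows whose hashID is in id_list.
    id_set = set(id_list)   # set membership is O(1) per lookup
    kept = []
    for chunk in pd.read_hdf(fn, dspath, iterator=True, chunksize=chunksize):
        kept.append(chunk.loc[chunk['hashID'].isin(id_set)])
    return pd.concat(kept, ignore_index=True) if kept else None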
Answer 0 (score: 0)
log_iter = pd.read_hdf(FN, dspath,
                       where=['logid_hashID={:d}'.format(id_) for id_ in idList],
                       iterator=True,
                       chunksize=3000)
Does that work?

If idList is large, this could be a bad idea.
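If the filtering should stay on the HDF side, one possible workaround, assuming the column was stored as a data_column, is to split idList into small batches and concatenate the results; the batch_size below is an arbitrary small number, not a documented limit:

import pandas as pd

def select_in_batches(fn, dspath, id_list, batch_size=25):
    # Issue several short where= queries instead of one huge expression.
    pieces = []
    for i in range(0, len(id_list), batch_size):
        batch = list(id_list[i:i + batch_size])
        pieces.append(pd.read_hdf(fn, dspath,
                                  where='logid_hashID in {!r}'.format(batch)))
    return pd.concat(pieces, ignore_index=True) if pieces else None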