How do I split the elements of a list?

Asked: 2011-07-14 15:44:45

Tags: python list split

I have a list:

my_list = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847']

How can I remove the \t and everything after it to get this result:

['element1', 'element2', 'element3']

7 answers:

Answer 0 (score: 81)

Something like this:

>>> l = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847']
>>> [i.split('\t', 1)[0] for i in l]
['element1', 'element2', 'element3']
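For the same task, str.partition is an equivalent alternative not mentioned in the original answer; it splits on the first occurrence only and never raises, so elements without a tab pass through unchanged:

```python
l = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847']

# partition('\t') returns (before, '\t', after); we keep only the part
# before the first tab.
result = [s.partition('\t')[0] for s in l]
print(result)  # ['element1', 'element2', 'element3']
```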

Answer 1 (score: 29)

my_list = [i.split('\t')[0] for i in my_list]

Answer 2 (score: 7)

Try iterating over each element of the list, splitting it on the tab character, and appending the result to a new list.

newList = []               # collect the cleaned elements
for i in list:             # 'list' here is the input list from the question
    newList.append(i.split('\t')[0])

Answer 3 (score: 4)

Don't use list as a variable name, since it shadows the built-in. You can also try the following code:

clist = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847', 'element5']
clist = [x[:x.index('\t')] if '\t' in x else x for x in clist]

Or edit the list in place:

for i,x in enumerate(clist):
    if '\t' in x:
        clist[i] = x[:x.index('\t')]
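Running the list-comprehension variant above on the sample data confirms that the tab-less 'element5' survives unchanged:

```python
clist = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847', 'element5']

# Elements without a tab fall through to the else branch untouched.
cleaned = [x[:x.index('\t')] if '\t' in x else x for x in clist]
print(cleaned)  # ['element1', 'element2', 'element3', 'element5']
```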

Answer 4 (score: 0)

I had to split a feature-extraction list into two parts, lt and lc:

import random

import nltk

def act_features(atext):
    # One boolean feature per lower-cased token in the text.
    features = {}
    for word in nltk.word_tokenize(atext):
        features['cont({})'.format(word.lower())] = True
    return features

# df4 is the author's DataFrame; .ix has been removed from pandas,
# so the positional lookup uses .iloc instead.
ltexts = df4.iloc[0:, [3, 7]].values.tolist()
random.shuffle(ltexts)

featsets = [(act_features(lt), lc)
            for lc, lt in ltexts]
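A dependency-free sketch of the same feature dict, using str.split in place of nltk.word_tokenize (a simplification for illustration, not the answer's tokenizer):

```python
def act_features(atext):
    # Map each lower-cased token to True, mirroring the answer's feature dict.
    features = {}
    for word in atext.split():  # nltk.word_tokenize in the original
        features['cont({})'.format(word.lower())] = True
    return features

print(act_features("Hello World"))  # {'cont(hello)': True, 'cont(world)': True}
```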

Answer 5 (score: 0)

A solution using map and a lambda expression:

my_list = list(map(lambda x: x.split('\t')[0], my_list))
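A self-contained run with the question's data; the list() call matters on Python 3, where map returns a lazy iterator:

```python
my_list = ['element1\t0238.94', 'element2\t2.3904', 'element3\t0139847']

# map applies the lambda to each element; list() materializes the result.
my_list = list(map(lambda x: x.split('\t')[0], my_list))
print(my_list)  # ['element1', 'element2', 'element3']
```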

Answer 6 (score: -4)

import re

sentences = "The cat ate a big mouse. This was because the mouse was annoying him"

liste = re.findall(r"[\w']+|[.,!?;]", sentences)

# Unique words in order of first appearance.
nodu = []
for x in liste:
    if x not in nodu:
        nodu.append(x)
print(nodu)

# 1-based position of each word in the unique list.
pos = []
for word in liste:
    if word in nodu:
        pos.append(nodu.index(word) + 1)
print(pos)

# 1-based position of each word's first occurrence in the full list.
lpos = []
for word in liste:
    lpos.append(liste.index(word) + 1)

with open("t3.txt", "w") as f:
    f.write(str(nodu))
    f.write("\n")
    f.write(str(pos))

for number in lpos:
    for word in liste:
        number = word
        print(number)
    break
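The deduplicate-and-index idea in this answer can be sketched more compactly; this assumes the goal is to map each word to the 1-based position of its first occurrence (the variable names here are illustrative, not the answer's):

```python
import re

sentence = "The cat ate a big mouse. This was because the mouse was annoying him"
words = re.findall(r"[\w']+|[.,!?;]", sentence)

# Unique words in first-seen order (dict preserves insertion order in Python 3.7+).
unique = list(dict.fromkeys(words))

# 1-based index of each word's first occurrence in the unique list, so
# repeated words like 'mouse' get the same index each time.
positions = [unique.index(w) + 1 for w in words]
print(unique)
print(positions)
```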