Matching words across multiple files

Time: 2013-11-24 00:41:10

Tags: python regex file file-io python-3.x

I have words like the ones below. There are more than 3000 of them, split across 2 files:

File #1:
#fabulous       7.526   2301    2
#excellent      7.247   2612    3
#superb         7.199   1660    2
#perfection     7.099   3004    4
#terrific       6.922   629     1
#magnificent    6.672   490     1

File #2:
) #perfect      6.021   511     2
? #great        5.995   249     1
! #magnificent  5.979   245     1
) #ideal        5.925   232     1
day #great      5.867   219     1
bed #perfect    5.858   217     1
) #heavenly     5.73    191     1
night #perfect  5.671   180     1
night #great    5.654   177     1
. #partytime    5.427   141     1

I also have many sentences like these, more than 3000 lines of them:

superb, All I know is the road for that Lomardi start at TONIGHT!!!! We will set a record for a pre-season MNF I can guarantee it, perfection.

All Blue and White fam, we r meeting at Golden Corral for dinner to night at 6pm....great

For each line I have to do the following tasks:
1) Find whether any of the corpus words match anywhere in the sentence
2) Find whether any of the corpus words match the leading or trailing word of the sentence

I was able to complete part 2) but not part 1). I can do it, but I want to find an efficient way. I have the following code:

import re
import sys

found = True  # flag stored in the result dict

for line in sys.stdin:
    (id, num, senti, words) = re.split(r"\t+", line.strip())
    sentence = re.split(r"\s+", words.strip().lower())

    for line1 in f1:  # f1 is the file containing the corpus of words, like File #1
        (term2, sentimentScore, numPos, numNeg) = re.split(r"\t", line1.strip())
        wordanalysis["trail"] = found if re.match(sentence[-1], term2.lower()) else not found
        wordanalysis["lead"] = found if re.match(sentence[0], term2.lower()) else not found

    for line1 in f2:  # f2 is the file containing the corpus of words, like File #2
        (term2, sentimentScore, numPos, numNeg) = re.split(r"\t", line1.strip())
        wordanalysis["trail_2"] = found if re.match(sentence[-1], term2.lower()) else not found
        wordanalysis["lead_2"] = found if re.match(sentence[0], term2.lower()) else not found

Am I doing this right? Is there a better way?
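Editor's note: one way to avoid re-scanning the corpus files for every sentence is to load the terms into a set once and test membership in O(1). The sketch below assumes File #1-style lines (tab-separated, term in the first field with a leading `#`); File #2's extra context word would need additional handling. The function names `load_terms` and `analyze` are illustrative, not from the original code:

```python
import re

def load_terms(path):
    """Load corpus terms into a set. Assumes the term is the first
    tab-separated field, with its leading '#' stripped."""
    terms = set()
    with open(path) as f:
        for line in f:
            fields = re.split(r"\t+", line.strip())
            if fields and fields[0]:
                terms.add(fields[0].lstrip("#").lower())  # "#fabulous" -> "fabulous"
    return terms

def analyze(sentence, terms):
    """Return membership flags for the whole sentence (task 1)
    and for its leading/trailing words (task 2)."""
    words = [w for w in re.split(r"\W+", sentence.lower()) if w]
    return {
        "anywhere": any(w in terms for w in words),   # task 1: O(1) lookup per word
        "lead": bool(words) and words[0] in terms,    # task 2: first word
        "trail": bool(words) and words[-1] in terms,  # task 2: last word
    }
```

With the corpus in a set, each sentence costs time proportional to its own length rather than to the corpus size, and `re.match` (which treats the word as a regex pattern and can misfire on characters like `?` or `)`) is avoided entirely.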

1 answer:

Answer 0 (score: 0)

This is a classic map-reduce problem. If you want to take efficiency seriously, you should consider something like this: http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/

If you are too lazy or don't have the resources to set up your own Hadoop environment, you can try a ready-made one: http://aws.amazon.com/elasticmapreduce/

Feel free to post your code here once it's done :) It would be nice to see how it translates into a map-reduce algorithm...
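Editor's note: as a rough sketch of what this answer means, a mapper would emit a `(term, position)` key for every corpus match in a sentence, and a reducer would aggregate counts per key. The single-process imitation below is illustrative only (the function names and punctuation stripping are assumptions, not part of any framework):

```python
from collections import defaultdict

def mapper(sentence, terms):
    """Emit (term, position) pairs for every corpus term in the sentence."""
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    for i, w in enumerate(words):
        if w in terms:
            if i == 0:
                yield (w, "lead")            # term is the leading word
            if i == len(words) - 1:
                yield (w, "trail")           # term is the trailing word
            yield (w, "anywhere")            # term appears somewhere

def reducer(pairs):
    """Count occurrences per (term, position) key."""
    counts = defaultdict(int)
    for key in pairs:
        counts[key] += 1
    return dict(counts)
```

In a real Hadoop Streaming job the mapper and reducer would be separate scripts reading stdin and writing tab-separated key/value lines, with the framework doing the shuffle between them; the logic per record stays the same.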