Need to split #tags into text

Time: 2016-07-27 19:34:54

Tags: nlp artificial-intelligence

I need an automated way to split #tags into meaningful words.

Example input:

  • iloveusa
  • mycrushlike
  • mydadhero

Example output:

  • i love usa
  • my crush like
  • my dad hero

Is there any utility or open API I can use to achieve this?

1 answer:

Answer 0: (score: 1)

Check out the Word Segmentation Task from Norvig's work.

from __future__ import division
from collections import Counter
import re, nltk

WORDS = nltk.corpus.brown.words()
COUNTS = Counter(WORDS)

def pdist(counter):
    "Make a probability distribution, given evidence from a Counter."
    N = sum(counter.values())
    return lambda x: counter[x]/N

P = pdist(COUNTS)

def Pwords(words):
    "Probability of words, assuming each word is independent of others."
    return product(P(w) for w in words)

def product(nums):
    "Multiply the numbers together.  (Like `sum`, but with multiplication.)"
    result = 1
    for x in nums:
        result *= x
    return result

def splits(text, start=0, L=20):
    "Return a list of all (first, rest) pairs; start <= len(first) <= L."
    return [(text[:i], text[i:]) 
            for i in range(start, min(len(text), L)+1)]

def segment(text):
    "Return a list of words that is the most probable segmentation of text."
    if not text: 
        return []
    else:
        candidates = ([first] + segment(rest) 
                      for (first, rest) in splits(text, 1))
        return max(candidates, key=Pwords)

print(segment('iloveusa'))     # ['i', 'love', 'us', 'a']
print(segment('mycrushlike'))  # ['my', 'crush', 'like']
print(segment('mydadhero'))    # ['my', 'dad', 'hero']
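Note that as written, `segment` re-solves the same suffixes over and over and is exponential in the input length; Norvig's original adds a memoization decorator. A self-contained sketch of the memoized version using `functools.lru_cache`, with a toy frequency table standing in for the Brown corpus counts (the words and counts in `TOY_COUNTS` are illustrative assumptions, not real corpus statistics):

```python
from collections import Counter
from functools import lru_cache

# Toy word-frequency table standing in for the Brown corpus COUNTS.
TOY_COUNTS = Counter({'i': 50, 'love': 20, 'us': 10, 'usa': 5,
                      'my': 40, 'crush': 3, 'like': 15,
                      'dad': 8, 'hero': 4, 'a': 60})
N = sum(TOY_COUNTS.values())

def P(word):
    "Unigram probability; unseen words get probability 0."
    return TOY_COUNTS[word] / N

def Pwords(words):
    "Probability of a sequence, assuming words are independent."
    result = 1.0
    for w in words:
        result *= P(w)
    return result

@lru_cache(maxsize=None)
def segment(text):
    "Most probable segmentation; memoized so each suffix is solved once."
    if not text:
        return ()
    candidates = ((text[:i],) + segment(text[i:])
                  for i in range(1, len(text) + 1))
    return max(candidates, key=Pwords)

print(segment('iloveusa'))  # ('i', 'love', 'usa') under this toy table
```

Returning tuples instead of lists keeps the results hashable, which is what lets `lru_cache` cache them.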

For a better solution, you can use bigrams/trigrams.
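The bigram idea can be sketched by conditioning each word on its predecessor and backing off to unigram probabilities when a word pair is unseen. The tiny `UNIGRAMS` and `BIGRAMS` tables below are illustrative assumptions, not real corpus counts:

```python
from collections import Counter

# Illustrative toy counts (assumptions, not real corpus statistics).
UNIGRAMS = Counter({'my': 40, 'crush': 3, 'like': 15, 'dad': 8, 'hero': 4})
BIGRAMS = Counter({('my', 'crush'): 2, ('crush', 'like'): 1, ('my', 'dad'): 3})
N = sum(UNIGRAMS.values())

def P1(word):
    "Unigram probability; unseen words get probability 0."
    return UNIGRAMS[word] / N

def cPword(word, prev):
    "P(word | prev): bigram estimate, backing off to the unigram model."
    if (prev, word) in BIGRAMS and UNIGRAMS[prev] > 0:
        return BIGRAMS[(prev, word)] / UNIGRAMS[prev]
    return P1(word)

def Pwords2(words, prev='<S>'):
    "Sequence probability under the bigram model."
    result = 1.0
    for w in words:
        result *= cPword(w, prev)
        prev = w
    return result

def segment2(text, prev='<S>'):
    "Most probable segmentation under the bigram model."
    if not text:
        return []
    candidates = ([text[:i]] + segment2(text[i:], text[:i])
                  for i in range(1, len(text) + 1))
    return max(candidates, key=lambda ws: Pwords2(ws, prev))

print(segment2('mycrushlike'))  # ['my', 'crush', 'like']
```

This is only a sketch: a production version would also memoize `segment2` and smooth the probabilities so unseen words are not assigned zero probability.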

More examples: Word Segmentation Task