What is the recommended solution for text prediction with Python on Google App Engine?

Asked: 2015-08-04 17:42:04

Tags: python algorithm google-app-engine machine-learning

I am building a website with Google App Engine and Python. I want to add a feature where a user starts typing a word and the site suggests the closest matching words/phrases (based on usage). I currently have an algorithm modeled on Peter Norvig's spelling-corrector approach, but I don't think it is a very scalable solution in the long run. I am looking for the recommended way to implement this kind of feature on Google App Engine. Is the Prediction API the way to go, or is writing my own algorithm the better approach? If I write my own, can anyone give me pointers on how to make the solution robust?

Code snippet:

import re, collections
from bp_includes.models import User, SocialUser
from bp_includes.lib.basehandler import BaseHandler
from google.appengine.ext import ndb
import utils.ndb_json as ndb_json

class TextPredictionHandler(BaseHandler):
  alphabet_list = 'abcdefghijklmnopqrstuvwxyz' #lowercase letters used to build candidate edits

  #Creates corpus with frequency/probability distribution
  def trainForText(self,features):
    search_dict = collections.defaultdict(lambda: 1)
    for f in features:
      search_dict[f] += 1
    return search_dict

  #Heart of the code. Generates every string that can be formed by editing the given word by one letter (deletes, transposes, replaces, inserts)
  def edit_dist_one(self,word):
    splits      = [(word[:i],word[i:]) for i in range(len(word) + 1)]
    deletes     = [a + b[1:] for a,b in splits if b]
    transposes  = [a + b[1] + b[0] + b[2:] for a,b in splits if (len(b) > 1)]
    replaces = [a + c + b[1:] for a, b in splits for c in self.alphabet_list if b]
    inserts  = [a + c + b     for a, b in splits for c in self.alphabet_list]
    return set(deletes + transposes + replaces + inserts)

  #Checks for exact matches in Corpus for words 
  def existing_words(self,words,trainSet):
    return set(w for w in words if w in trainSet)

  #Checks for partial matches in Corpus for a word.
  def partial_words(self,word,trainSet):
    regex = re.compile(".*(" + re.escape(word) + ").*")  #escape the input so regex metacharacters in it cannot break the pattern
    return set(str(m.group(0)) for l in trainSet for m in [regex.search(l)] if m)

  def found_words(self,word):
    word = word.lower()
    data = []
    q = models.SampleModel.query()    #This line will not work as-is: the actual model had to be masked out
    #Building the corpus this way is not scalable: every entity is re-read and
    #re-tokenized on each request. The corpus could be precomputed and cached
    #(e.g. in Google Cloud Storage or memcache; see the sketch after this snippet).
    for upost in q.fetch():
      if upost.text != "":
        for token in re.sub(r"[^\w]", " ", upost.text).split():
          data.append(token.lower())
      if upost.definition != "":
        for token in re.sub(r"[^\w]", " ", upost.definition).split():
          data.append(token.lower())
      if upost.TextPhrases:
        for e in upost.TextPhrases:
          for p in e.get().phrases:
            data.append(p.lower())
      if upost.Tags:
        for h in upost.Tags:
          tag_text = h.get().text.replace("#", "")
          if tag_text != "":
            data.append(tag_text.lower())
    trainSet = self.trainForText(data)
    set_of_words = self.existing_words([word],trainSet).union(self.existing_words(self.edit_dist_one(word),trainSet))
    set_of_words = set_of_words.union(self.partial_words(word,trainSet))
    set_of_words = set_of_words.union([word])
    return set_of_words

  def get(self, search_text):
    outputData = self.found_words(search_text)
    data = {"texts":[]}
    for dat in outputData:
      pdata = {}
      pdata["text"] = dat;
      data["texts"].append(pdata)
    self.response.out.write(ndb_json.dumps(data))
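
As a side note on the scalability concern flagged in found_words, a common App Engine pattern is to build the frequency corpus once and cache it, instead of re-querying and re-tokenizing every entity on each request. Below is a minimal sketch using the App Engine memcache API; build_corpus is a hypothetical helper wrapping the datastore loop above, not part of the original code:

from google.appengine.api import memcache

CORPUS_KEY = 'text_prediction_corpus'  #hypothetical cache key

def get_corpus():
  #Return the word -> frequency mapping, rebuilding it only on a cache miss.
  corpus = memcache.get(CORPUS_KEY)
  if corpus is None:
    #build_corpus (hypothetical) runs the datastore query and tokenization
    #from found_words; it must return a plain dict, because a defaultdict
    #built from a lambda cannot be pickled into memcache.
    corpus = dict(build_corpus())
    memcache.set(CORPUS_KEY, corpus, time=3600)  #refresh hourly so new posts appear
  return corpus

Lookups against the cached dict would then use corpus.get(w, 1) in place of the defaultdict's automatic default.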

1 Answer:

Answer 0 (score: 1):

Using the Prediction API is more reliable and scalable than building your own. There is no need to reinvent the wheel.
If you write your own code, it is likely to be a long and complicated process with plenty of obstacles along the way, so unless you are interested in learning and building such a system for its own sake, I suggest you use the existing tools.
Here is Google's own example.
Here is the documentation for the Prediction API.
A Hello World program with the Prediction API.
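
For reference, a prediction call with the google-api-python-client library looks roughly like the sketch below. The project and model IDs are placeholders, and it assumes a classification model has already been trained through the API (as the linked Hello World program does):

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

def predict(input_text):
  #Build a Prediction API client with the application's default credentials.
  credentials = GoogleCredentials.get_application_default()
  service = discovery.build('prediction', 'v1.6', credentials=credentials)
  #Ask the trained model to score the input; both IDs below are placeholders.
  result = service.trainedmodels().predict(
      project='your-project-id',
      id='your-model-id',
      body={'input': {'csvInstance': [input_text]}}).execute()
  return result  #e.g. result['outputLabel'] holds the predicted label

The response also carries per-label scores (outputMulti), which could be used to rank several suggestions instead of returning a single best match.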