TF-IDF algorithm in Gremlin

Date: 2014-05-07 17:38:28

Tags: graph-databases gremlin tf-idf

I'm trying to compute TF-IDF in my Rexster graph database. Here's what I have:

Suppose I have a graph made up of a set of vertices representing terms, T, and a set of vertices representing documents, D.

There are edges, E, between terms in T and documents in D, and each edge has a term frequency, tf.

E.g. (pseudocode):

#x, y, and z are arbitrary IDs.
T(x) - E(y) -> D(z)

E(y).tf = 20

T(x).outE()
  => A set of edges.

T(x).outE().inV()
  => A list of Documents, a subset of D

How would I write a Gremlin script that computes TF-IDF for the following?

  • A: Given a single term t, compute the TF-IDF for each document directly related to t.
  • B: Given a set of terms Ts, compute, for each document in Ts.outE().inV(), the sum of the TF-IDF scores over every applicable term in Ts.

What I have so far:

#I know this does not work
term = g.v(404)
term.outE().inV().as('docs').path().
groupBy{it.last()}{
  it.findAll{it instanceof Edge}.
  collect{it.getProperty('frequency')} #I would actually like to use augmented frequency (aka frequency_of_t_in_document / max_frequency_of_any_t_in_document) 
}.collect{d,tf-> [d, 
  tf * ??log(??g.V.has('isDocument') / docs.count() ?? ) ??
]}

#I feel I am close, but I can't quite make this work.
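
In other words, the quantity I'm after for a term t and document d is the standard tf-idf; in my attempt above, g.V.has('isDocument') stands in for N and docs.count() for df(t):

#N     = total number of documents
#df(t) = number of documents that t occurs in
tfidf(t, d) = tf(t, d) * log(N / df(t))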

1 Answer:

Answer (score: 3):

I probably haven't covered this part yet:

  B: ... with respect to each applicable term in Ts.

...but the rest should work as expected. I wrote a small helper function that handles both a single term and a list of terms:

tfidf = { g, terms, N ->
  def closure = {
    // path() yields [term vertex, "occursIn" edge, document vertex] triples
    def paths = it.outE("occursIn").inV().path().toList()
    // the number of paths is the term's document frequency
    def numPaths = paths.size()
    [it.getProperty("term"), paths.collectEntries({
      def title = it[2].getProperty("title")
      def tf = it[1].getProperty("frequency")
      def idf = Math.log10(N / numPaths)
      [title, tf * idf]
    })]
  }
  // accept either a single term (String) or a collection of terms
  def single = terms instanceof String
  def pipe = single ? g.V("term", terms) : g.V().has("term", T.in, terms)
  def result = pipe.collect(closure).collectEntries()
  single ? result[terms] : result
}

Then I took the example from Wikipedia to test it:

g = new TinkerGraph()

g.createKeyIndex("type", Vertex.class)
g.createKeyIndex("term", Vertex.class)

t1 = g.addVertex(["type":"term","term":"this"])
t2 = g.addVertex(["type":"term","term":"is"])
t3 = g.addVertex(["type":"term","term":"a"])
t4 = g.addVertex(["type":"term","term":"sample"])
t5 = g.addVertex(["type":"term","term":"another"])
t6 = g.addVertex(["type":"term","term":"example"])

d1 = g.addVertex(["type":"document","title":"Document 1"])
d2 = g.addVertex(["type":"document","title":"Document 2"])

t1.addEdge("occursIn", d1, ["frequency":1])
t1.addEdge("occursIn", d2, ["frequency":1])
t2.addEdge("occursIn", d1, ["frequency":1])
t2.addEdge("occursIn", d2, ["frequency":1])
t3.addEdge("occursIn", d1, ["frequency":2])
t4.addEdge("occursIn", d1, ["frequency":1])
t5.addEdge("occursIn", d2, ["frequency":2])
t6.addEdge("occursIn", d2, ["frequency":3])

N = g.V("type","document").count()

tfidf(g, "this", N)
tfidf(g, "example", N)
tfidf(g, ["this", "example"], N)

Output:

gremlin> tfidf(g, "this", N)
==>Document 1=0.0
==>Document 2=0.0
gremlin> tfidf(g, "example", N)
==>Document 2=0.9030899869919435
gremlin> tfidf(g, ["this", "example"], N)
==>this={Document 1=0.0, Document 2=0.0}
==>example={Document 2=0.9030899869919435}
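
Part B (summing the scores per document over all the terms in Ts) is not covered by the helper above, but it could be layered on top of it. A rough, untested sketch; the wrapper and the name tfidfSum are only a suggestion:

tfidfSum = { g, terms, N ->
  // per-term map of [document title: tf-idf score], as returned for a list of terms
  def perTerm = tfidf(g, terms, N)
  def sums = [:].withDefault { 0.0d }
  // add each term's score to its document's running total
  perTerm.each { term, docs ->
    docs.each { title, score -> sums[title] += score }
  }
  sums
}

// e.g. tfidfSum(g, ["this", "example"], N) should give
// Document 1=0.0 and Document 2=0.9030899869919435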

I hope this helps.

Cheers,
Daniel