I'm new to RapidMiner... What I'm trying to do: I have a list of 10 documents, and I run them through a Process Documents operator (subtask) -> Tokenize. The result is a 10 x 800 example set, with 10 rows (one per document) and 800 attributes (one per token).
Now I want to filter those 800 tokens, so I use a second Process Documents operator (subtask) -> Filter Tokens (by Length) on the word list generated by the first Process Documents operator. The result is an 800 x 700 matrix: 800 rows for the 800 tokens from the first operator, and 700 attributes for the reduced token set.
What I want to get is a 10 x 700 example set that I can pass to a K-Means clustering operator. How can I do that?
Thanks
Answer 0 (score: 1)
I'm not sure why you are using two Process Documents operators, since you can put both Tokenize and Filter Tokens (by Length) inside the first one, and that should produce exactly what you need.
Here is a small example.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.3.005">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="5.3.005" expanded="true" name="Process">
    <process expanded="true">
      <operator activated="true" class="text:create_document" compatibility="5.3.000" expanded="true" height="60" name="Create Document" width="90" x="45" y="75">
        <parameter key="text" value="This is a test with a looooooooooong word"/>
      </operator>
      <operator activated="true" class="text:create_document" compatibility="5.3.000" expanded="true" height="60" name="Create Document (2)" width="90" x="45" y="165">
        <parameter key="text" value="Again a text which has anoooooooooooooother long word."/>
      </operator>
      <operator activated="true" class="text:process_documents" compatibility="5.3.000" expanded="true" height="112" name="Process Documents" width="90" x="313" y="75">
        <process expanded="true">
          <operator activated="true" class="text:tokenize" compatibility="5.3.000" expanded="true" height="60" name="Tokenize" width="90" x="45" y="30"/>
          <operator activated="true" class="text:filter_by_length" compatibility="5.3.000" expanded="true" height="60" name="Filter Tokens (by Length)" width="90" x="179" y="30">
            <parameter key="max_chars" value="10"/>
          </operator>
          <connect from_port="document" to_op="Tokenize" to_port="document"/>
          <connect from_op="Tokenize" from_port="document" to_op="Filter Tokens (by Length)" to_port="document"/>
          <connect from_op="Filter Tokens (by Length)" from_port="document" to_port="document 1"/>
          <portSpacing port="source_document" spacing="0"/>
          <portSpacing port="sink_document 1" spacing="0"/>
          <portSpacing port="sink_document 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="k_means" compatibility="5.3.005" expanded="true" height="76" name="Clustering" width="90" x="447" y="75"/>
      <connect from_op="Create Document" from_port="output" to_op="Process Documents" to_port="documents 1"/>
      <connect from_op="Create Document (2)" from_port="output" to_op="Process Documents" to_port="documents 2"/>
      <connect from_op="Process Documents" from_port="example set" to_op="Clustering" to_port="example set"/>
      <connect from_op="Clustering" from_port="cluster model" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process>
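Running this process does the tokenizing and the length filter in a single pass: the example set coming out of Process Documents has one row per document and one attribute per surviving token, which is exactly the shape K-Means expects. With your 10 documents and a filter that leaves 700 tokens, the same wiring should give you the 10 x 700 example set directly, with no second pass over the word list.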
Answer 1 (score: 0)
I tend to agree with the answer already provided; it looks like it solves the problem, but you could also do something like the following.
I did something vaguely similar here
The 700-token limit is hard to control. It seems unlikely to me that a word list sorted by length would have a convenient cutoff at exactly 700.
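If the real goal is to keep the attribute count manageable rather than to hit 700 exactly, one alternative worth looking at is the pruning options on Process Documents, which thin the word list by document frequency instead of word length. A minimal sketch, assuming the 5.3 text extension's prune_method / prune_below_absolute / prune_above_absolute parameters (check the parameter names in your version; the frequency values here are purely illustrative):

<!-- Sketch only: prune tokens by document frequency instead of a character-length cutoff. -->
<operator activated="true" class="text:process_documents" compatibility="5.3.000" expanded="true" name="Process Documents (pruned)">
  <!-- "absolute" pruning drops tokens whose document frequency falls outside a fixed range -->
  <parameter key="prune_method" value="absolute"/>
  <!-- keep only tokens that appear in at least 2 documents... -->
  <parameter key="prune_below_absolute" value="2"/>
  <!-- ...and in at most 9 of the 10 documents -->
  <parameter key="prune_above_absolute" value="9"/>
  <process expanded="true">
    <!-- Tokenize (plus any token filters) goes here, as in the example above -->
  </process>
</operator>

This still won't land on exactly 700 attributes, but document frequency is usually a more meaningful way to control vocabulary size than a hard cutoff on a length-sorted word list.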