I extracted triples with your Open IE 5 and got the following results.

Text input
The error is repeatedly decreased by the algorithmic approach known as the Levenberg-Marquardt backpropagation algorithm. Some ANN models employ supervisory training, while others are referred to as non-supervisory or self-organizing training. However, the vast majority of ANN models use supervisory training. The training phase may consume a lot of time. In supervisory training, the actual output of the artificial neural network is compared with the desired output. The training set consists of presenting input and output data to the network. The network adjusts the weighting coefficients, usually starting from a random set, so that the next iteration will produce a closer match between the desired and actual output of the ANN. The training method tries to minimize the current errors for all processing elements. By continuously modifying
Output
0.89 Context(The training method tries,List([723, 748))):(The training method; tries to minimize; the current errors for all processing elements)
0.95 (the vast majority of ANN models; use; supervisory the supervisory training)
0.88 (others; are referred; as self - organizing training)
0.89 Context(The training method tries,List([717, 742))):(The training method; tries to minimize; the current errors for all processing elements)
0.93 Context(Some ANN models employ The training phase may consume,List([120, 340))):(the error; is decreased; T:repeatedly; T:By the algorithmic approach)
0.94 Context(The training phase may consume,List([310, 340))):(Some ANN models; employ; supervisory training; while others are referred to as self - organizing training)
0.89 Context(The training method tries,List([724, 749))):(The training method; tries to minimize; the current errors for all processing elements)
0.93 Context(The training phase may consume,List([311, 341))):(the vast majority of ANN models; use; supervisory the supervisory training)
0.93 Context(Some ANN models employ The training phase may consume,List([120, 341))):(the error; is decreased; T:repeatedly; T:By the algorithmic approach)
0.94 Context(The training phase may consume,List([311, 341))):(Some ANN models; employ; supervisory training; while others are referred to as none - supervisory training)
0.92 (This global error reduction; is created; T:over time; by continuously modifying the)
Can anyone help me understand this?
Answer (score: 0)
To answer "what is List([723, 748)))?":

I believe this is the position (character span) of the context phrase in the input sentence.
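A minimal sketch of that interpretation: `[start, end)` is read as a half-open character span into the input text, so slicing recovers the context phrase. The sentence and offsets below are illustrative assumptions, since the exact offsets depend on the original English input.

```python
# Hypothetical example: treat List([start, end)) as a half-open character
# span into the input text and recover the context phrase by slicing.
sentence = "The training method tries to minimize the current errors."
span = (0, 25)  # assumed span covering the context phrase

context_phrase = sentence[span[0]:span[1]]
print(context_phrase)  # -> The training method tries
```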
T:over time; the T: marks the role as temporal, i.e. "over time" is a temporal role in SRL (semantic role labeling).
In some cases a tuple has 4 entities, e.g. (the error; is decreased; T:repeatedly; T:By the algorithmic approach): in addition to the usual triple extraction, OpenIE sometimes produces n-ary relation extractions.
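To make the n-ary structure concrete, here is a rough parser for one output line of the simple form shown above (confidence followed by a parenthesized tuple). The exact output grammar is an assumption based on the lines in the question, and lines with a `Context(...)` prefix would need extra handling.

```python
# Sketch: split one Open IE 5 output line into a confidence score and
# its argument list; T:-prefixed arguments are temporal modifiers that
# make the extraction n-ary rather than a plain triple.
line = "0.93 (the error; is decreased; T:repeatedly; T:By the algorithmic approach)"

conf_str, tuple_str = line.split(" ", 1)
confidence = float(conf_str)

# Strip the surrounding parentheses, then split the arguments on "; "
args = tuple_str.strip("()").split("; ")
temporal = [a for a in args if a.startswith("T:")]

print(confidence)     # -> 0.93
print(len(args))      # -> 4 entities, not just a triple
print(temporal)       # -> ['T:repeatedly', 'T:By the algorithmic approach']
```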