Lemmatization with spaCy and PySpark

Date: 2019-11-07 13:26:48

Tags: python user-defined-functions spacy lemmatization pyspark-dataframes

I created a function to lemmatize a PySpark DataFrame column. Here is the code:

from odoo import fields, models


class StockMoveReconcile(models.Model):
    _name = 'stock.move.reconcile'
    _description = 'Stock Move Reconcile'

    move_to_id = fields.Many2one('stock.move', string='Move To')
    move_from_id = fields.Many2one('stock.move', string='Move From')

    def recalculate(self):
        moves = self.browse(self.env.context.get('active_ids'))
        moves_to_recalculate = []
        for move1 in moves:
            # Add the first move in the chain to the list.
            moves_to_recalculate.append(move1.id)
            # The first move has two move_to_ids, so loop over them
            # and add each target's ID to the list.
            for second_tier_move in move1.move_to_ids:
                moves_to_recalculate.append(second_tier_move.move_to_id.id)
                # The second-tier move has one move_to_ids, so loop again
                # and add its ID to the list.
                for third_tier_move in second_tier_move.move_to_ids:
                    moves_to_recalculate.append(third_tier_move.move_to_id.id)
                    # The third-tier move has its own move_to_ids, and so on.

The problem is that when I test the lemmatization function on its own, it finishes in about two minutes, but when I run it inside my main function alongside the other steps (vectorization, stop-word removal, etc.), the main function runs forever; it seems to stall the whole pipeline. I am confident the main function performs its other tasks correctly, since they work perfectly on their own. How can I fix this?

0 Answers:

No answers yet