How to efficiently search for list elements in a string in Python

Posted: 2019-02-01 06:48:38

Tags: python list

I have a list of concepts (concepts) and a list of sentences (sentences) as follows.

concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]


sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']

In short, I want to find the concepts in the sentences. More specifically, given a list in concepts (e.g. ['natural language processing', 'text mining', 'texts', 'nlp']), I want to identify those concepts in the sentences and replace each of them with the list's first element (here, natural language processing).

Example: if we consider the sentence data mining and text mining, the result should be advanced data mining and natural language processing, because the first elements of the concept lists containing data mining and text mining are advanced data mining and natural language processing, respectively.

The result for the dummy data above should be:

['advanced data mining and natural language processing', 'natural language processing is mainly used by discourse analysis community', 'advanced data mining in python is fun', 'discourse analysis advanced data mining analysis involves natural language processing', 'advanced data mining and advanced data mining are both very interesting']

I am currently doing this with regular expressions:

import re

concepts_re = []

for terms in concepts:
    # One alternation pattern per concept group,
    # e.g. "natural language processing|text mining|texts|nlp"
    terms_re = "|".join(re.escape(term) for term in terms)
    concepts_re.append(terms_re)

sentences_mapping = []

for sentence in sentences:
    for terms in concepts:
        if len(terms) > 1:
            for term in terms:
                if term in sentence:
                    # Replace any alias of this concept group with its first element
                    sentence = re.sub(concepts_re[concepts.index(terms)], terms[0], sentence)
    sentences_mapping.append(sentence)

In my real dataset I have about 8 million concepts, so this approach is very inefficient: it takes around 5 minutes to process a single sentence. I would like to know whether there is an efficient way of doing this in Python.

For those who want to measure timings against a long list of concepts, I have attached a longer list here: https://drive.google.com/file/d/1OsggJTDZx67PGH4LupXIkCTObla0gDnX/view?usp=sharing

I am happy to provide more details if needed.

3 Answers:

Answer 0 (score: 19)

The solution provided below has approximately O(n) runtime complexity, where n is the number of tokens in each sentence.

For 5 million sentences and your concepts.txt it performs the required operations in about 30 seconds; see the basic test in section 3.

When it comes to space complexity, you have to keep a nested dictionary structure (let's simplify it like that for now); say it is O(c * u), where u is the number of unique tokens for a given concept length (token-wise) and c is that concept length.

It is hard to pinpoint the exact complexity, but it goes roughly like this (for your example data and the concepts.txt you provided this is quite accurate, and we will get to the details as we go through the implementation).

I assume you can split your concepts and sentences on whitespace; if that is not the case, I would advise you to take a look at spaCy, which provides a smarter way to tokenize your data.
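For illustration, here is a minimal tokenization sketch (my addition, not part of the original answer; it assumes spaCy and its small English model en_core_web_sm are installed):

import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("data mining and text mining")
tokens = [token.text for token in doc]  # ['data', 'mining', 'and', 'text', 'mining']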

1. Introduction

Let's take your example:

concepts = [
    ["natural language processing", "text mining", "texts", "nlp"],
    ["advanced data mining", "data mining", "data"],
    ["discourse analysis", "learning analytics", "mooc"],
]

As you said, each element in concepts has to be mapped to the first one, so, in Pythonish, it would go roughly along these lines:

for concept in concepts:
    concept[1:] = concept[0]
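Note that this is only Pythonish pseudocode (assigning a string to a list slice would actually splat its characters); a literal version of the intended mapping is a flat alias-to-canonical dictionary, a sketch of my own rather than part of this answer's approach:

# Hypothetical flat mapping from every alias to its group's first element
mapping = {alias: concept[0] for concept in concepts for alias in concept[1:]}
# mapping["text mining"] -> "natural language processing"
# mapping["data"]        -> "advanced data mining"

Such a flat dict cannot by itself handle multi-token overlaps such as data vs. data something, which is exactly what the rest of this answer addresses.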

The task would be easy if all concepts had a token length of 1 (which is not the case here) and were unique. Let's focus on the second case and on one particular (slightly modified) example of a concept to see my point:

["advanced data mining", "data something", "data"]

Here, data would be mapped to advanced data mining, but data something, which consists of data, should be mapped before it. If I understand you correctly, you would want the sentence:

"Here is data something and another data"

to be mapped onto:

"Here is advanced data mapping and another advanced data mining"

instead of the naive approach:

"Here is advanced data mapping something and another advanced data mining"

See the second occurrence: there we only mapped data, not data something.

To prioritize data something (and anything else fitting this pattern), I used an array structure filled with dictionaries, where concepts earlier in the array are the ones that are longer token-wise.

Continuing our example, such an array looks like this:

structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]

Notice that if we go through the tokens in this order (e.g. first searching the first dictionary for consecutive tokens and, if no match is found, moving on to the second dictionary, and so on), we will get the longest concepts first.
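To make that ordering concrete, here is a small hand-rolled lookup over the structure above (my sketch, not code from this answer): for data followed by something, the first, deeper dictionary wins, while a bare data falls through to the second one.

structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]

def lookup(tokens, position):
    # Try the dictionaries in order: longer concepts first.
    for depth, dictionary in zip((2, 1), structure):
        node = dictionary
        for token in tokens[position:position + depth]:
            node = node.get(token) if isinstance(node, dict) else None
            if node is None:
                break
        if isinstance(node, str):
            return node, depth  # matched concept and number of tokens consumed
    return tokens[position], 1  # no concept; keep the original token

print(lookup("Here is data something and another data".split(), 2))
# ('advanced data mining', 2)
print(lookup("Here is data something and another data".split(), 6))
# ('advanced data mining', 1)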

2. Code

Okay, I hope you have the basic idea by now (if not, post a comment below and I will try to explain the unclear parts in more detail).

Disclaimer: I am not particularly proud of this code-wise, but it gets the job done, and I suppose it could be worse.

2.1 Hierarchical dictionaries

First, let's get the longest concept token-wise (excluding the first element of each group, since it is our target and we never have to change it):

def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])

Using this information, we can initialize our structure by creating as many dictionaries as there are different concept lengths (in the example above that is 2, so it works for all your data; concepts of any length would do, though):

def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]

Notice that I am adding the length of each concept to the array; IMO that makes traversal easier, though you could go without it after some changes to the implementation. For longest == 2, for example, init_hierarchical_dictionaries returns [(1, {}), (0, {})].

Now, with those helper functions in place, we can create the structure from the list of concepts:

def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept.
            for token in tokens[:-1]:
                current_dictionary[token] = {}
                current_dictionary = current_dictionary[token]

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries

This function creates our hierarchical dictionaries; see the comments in the source code for some explanation. You may want to create a custom class that keeps this thing around; it should be easier to use that way.
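For example, such a wrapper class might look like this (a minimal sketch of that suggestion, not code from the original answer; it relies on create_hierarchical_dictionaries above and the traverse function defined below):

class ConceptMapper:
    """Bundles the hierarchical dictionaries with the traversal entry point."""

    def __init__(self, concepts):
        self.hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)

    def __call__(self, sentence: str) -> str:
        return traverse(sentence, self.hierarchical_dictionaries)

# Usage: mapper = ConceptMapper(concepts); mapper("data mining and text mining")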

The structure this function returns is exactly the same object as the one described in 1. Introduction.

2.2 Traversing the dictionaries

This part is much harder, but this time let's use a top-down approach. We will start easy:

def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)

Given the hierarchical dictionaries, this creates a generator that transforms each sentence according to the concept mapping.

Now the traverse function:

def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its following tokens) do not match any concept; keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)

Once again, if you are not sure what is going on, post a comment.

Pessimistically, this approach performs O(n * c!) checks, where n is the number of tokens in a sentence and c is the token-wise length of the longest concept (note the factorial). This case is extremely unlikely in practice: every token in the sentence would have to fit the longest concept almost perfectly, and all the shorter concepts would have to be prefixes of the longer ones (e.g. super data mining, super data and data).

For any practical problem it is much closer to O(n); as I said before, with the data you provided in the .txt file it is O(3 * n) in the worst case, and usually O(2 * n).

Traversing through each dictionary:

def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept

This constitutes the "meat" of my solution.

3. Results

For brevity, the full source code is provided below (concepts.txt is the one you provided):

import ast
import time
from typing import List


def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])


def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]


def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept.
            for token in tokens[:-1]:
                current_dictionary[token] = {}
                current_dictionary = current_dictionary[token]

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries


def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept


def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its following tokens) do not match any concept; keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)


def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)


def sanity_check():
    concepts = [
        ["natural language processing", "text mining", "texts", "nlp"],
        ["advanced data mining", "data mining", "data"],
        ["discourse analysis", "learning analytics", "mooc"],
    ]
    sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    targets = [
        "advanced data mining and natural language processing",
        "natural language processing is mainly used by discourse analysis community",
        "advanced data mining in python is fun",
        "discourse analysis advanced data mining analysis involves natural language processing",
        "advanced data mining and advanced data mining are both very interesting",
    ]

    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)

    results = list(embed_sentences(sentences, hierarchical_dictionaries))
    if results == targets:
        print("Correct results")
    else:
        print("Incorrect results")


def speed_check():
    with open("./concepts.txt") as f:
        concepts = ast.literal_eval(f.read())

    initial_sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    sentences = initial_sentences.copy()

    for i in range(1_000_000):
        sentences += initial_sentences

    start = time.time()
    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
    middle = time.time()
    letters = []
    for result in embed_sentences(sentences, hierarchical_dictionaries):
        letters.append(result[0].capitalize())
    end = time.time()
    print(f"Time for hierarchical creation {(middle-start) * 1000.0} ms")
    print(f"Time for embedding {(end-middle) * 1000.0} ms")
    print(f"Overall time elapsed {(end-start) * 1000.0} ms")


def main():
    sanity_check()
    speed_check()


if __name__ == "__main__":
    main()

The speed check results:

Time for hierarchical creation 107.71822929382324 ms
Time for embedding 30460.427284240723 ms
Overall time elapsed 30568.145513534546 ms

So for 5 million sentences (the 5 you provided, concatenated 1 million times) and the concepts file you provided (1.1 MB), it takes roughly 30 seconds to perform the concept mapping, which I suppose isn't bad.

In the worst case, the dictionary should take as much memory as the input file (concepts.txt in this case), but it will usually be lower, often much lower, as it depends on the combination of concept lengths and the unique words within them.

Answer 1 (score: 5)

Use a suffix array approach.

Skip this step if your data is already sanitized.

First, sanitize your data by replacing all whitespace characters with a character that you know is not part of any concept or sentence.

Then build a suffix array for each sentence. This takes O(n log n) time per sentence; a few algorithms can do it in O(n) time using suffix trees.

Once the suffix arrays are ready for all sentences, just perform a binary search for each of your concepts.

You can further optimize the search using the LCP array; see Kasai's algorithm.

Using the LCP and suffix arrays, the time complexity of the search can be brought down to O(n).

Edit: this approach is commonly used in sequence alignment on genomes and is quite popular there as well. You should easily find an implementation that suits you.
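To make the approach concrete, here is a minimal sketch (my addition, not from the original answer): a naive suffix array built by sorting suffixes, plus a binary search that checks whether a concept occurs in a sentence. A production version would use an O(n log n) or O(n) construction and the LCP optimization mentioned above.

def suffix_array(text: str):
    # Naive construction: sort suffix start positions lexicographically.
    # Fine for short sentences; real implementations are O(n log n) or O(n).
    return sorted(range(len(text)), key=lambda i: text[i:])

def contains(text: str, sa, pattern: str) -> bool:
    # Binary search for the first suffix that is >= pattern,
    # then check whether that suffix starts with pattern.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:].startswith(pattern)

sentence = "data mining and text mining"
sa = suffix_array(sentence)
print(contains(sentence, sa, "text mining"))  # True
print(contains(sentence, sa, "texts"))        # False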

Answer 2 (score: 2)

import re
concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]
sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']

replacementDict = {concept[0] : concept[1:] for concept in concepts}

finderAndReplacements = [
    (re.compile('(' + '|'.join(replacees) + ')'), replacement)
    for replacement, replacees in replacementDict.items()
]

def sentenceReplaced(findRegEx, replacement, sentence):
    return findRegEx.sub(replacement, sentence, count=0)

def sentencesAllReplaced(sentences, finderAndReplacements=finderAndReplacements):
    for regex, replacement in finderAndReplacements:
        sentences = [sentenceReplaced(regex, replacement, sentence) for sentence in sentences]
    return sentences

print(sentencesAllReplaced(sentences))
  • Setup: I preferred to represent concepts as a dict where the keys are the replacements and the values are the replacees; this is stored in replacementDict.
  • Compile a matching regular expression for each intended group of replacees, and store it together with its intended replacement in the finderAndReplacements list.
  • The sentenceReplaced function returns the input sentence after the substitutions have been performed. (The order of application does not matter here, so parallelization should be possible if we take care to avoid race conditions.)
  • Lastly, we cycle through and find/replace for each sentence. (Massively parallel structures would offer improved performance.)

I would love to see some thorough benchmarking/testing/reporting, because I am sure there are a lot of subtleties depending on the nature of the task inputs (concepts and sentences) and the hardware running it.

sentences是主要输入成分的情况下,与concepts替代相比,我相信编译正则表达式将是有利的。当句子少而概念多时,尤其是如果大多数概念不在任何句子中时,编译这些匹配器将是浪费。而且,如果每次替换都有很多替换,则所使用的编译方法可能会执行不佳甚至出错。 。 。 (有关输入参数的各种假设通常会提供许多折衷的考虑因素。)