Python dictionary eating up huge amounts of RAM

Date: 2017-11-22 04:41:47

Tags: python linux dictionary search-engine

I have built a python dictionary that stores each word as a key and the list of files it occurs in as the value. Below is the code snippet.

import os
import sys
import time

if len(sys.argv) < 2:
    search_query = input("Enter the search query")
else:
    search_query = sys.argv[1]

#path to the directory where files are stored; store the file names in a list named directory_name
directory_name = os.listdir("./test_input")
#create a list list_of_files holding the full path of each file, so that they can be opened later
list_of_files = []
#appending the files to the list_files
for files in directory_name:
    list_of_files.append("./test_input"+"/"+files)
#empty dictionary
search_dictionary = {}

#iterate over the files in the list_of files one by one
for files in list_of_files:
    #open the file 
    open_file = open(files,"r")
    #store the basename of the file in as file_name
    file_name = os.path.basename(files)

    for line in open_file:
        for word in line.split():
            #if the word is not yet in the dictionary, add the word and the file_name to the dictionary
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                #if the word was already recorded for this file, ignore it
                if file_name in search_dictionary[word]:
                    continue
                #if the same word is found in a different file, append that filename
                search_dictionary[word].append(file_name)

def search(search_dictionary, search_query):
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(search_dictionary[search_query])
    else:
        print('not found ' + search_query)

search(search_dictionary, search_query)

input_word = ""
while input_word != 'quit':    
    input_word = input('enter a word to search ')
    start1 = time.time()
    search(search_dictionary,input_word)
    end1 = time.time()
    print(end1 - start1)

But given the number of files in the directory, something like 500 MB of RAM and swap space gets eaten up. How do I manage the memory usage?

1 Answer:

Answer 0 (score: 3):

If you have a large number of files, the problem may be that you are never closing them. A more common pattern is to use the file as a context manager, like this:

with open(files, 'r') as open_file:
    file_name = os.path.basename(files)
    for line in open_file:
        for word in line.split():
            if word not in search_dictionary:
                search_dictionary[word] = [file_name]
            else:
                if file_name in search_dictionary[word]:
                    continue
                search_dictionary[word].append(file_name)

Using this syntax means you don't have to worry about closing your files. If you don't want to do that, you should still call open_file.close() after you have finished iterating. This is the only thing I can see in your code that might cause such high memory usage (although opening some very large files that contain no newlines could also do it).
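For reference, a minimal sketch of the explicit-close alternative, using the same logic as your loop (the try/finally just guarantees the handle is released even if reading fails):

open_file = open(files, "r")
try:
    for line in open_file:
        for word in line.split():
            # setdefault creates an empty list for unseen words and returns it
            if file_name not in search_dictionary.setdefault(word, []):
                search_dictionary[word].append(file_name)
finally:
    # close the file handle explicitly once we are done with it
    open_file.close()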

That helps with the memory usage, but there is also a data type you can use to greatly simplify your code: collections.defaultdict. Your code could be written like this (I have also included a couple of things the os module can help you with):

from collections import defaultdict
import os
import time


directory_name = "./test_input"

list_of_files = []
for files in os.listdir(directory_name):
    list_of_files.append(os.path.join(directory_name, files))
search_dictionary = defaultdict(set)

start = time.time()
for files in list_of_files:
    with open(files) as open_file:
        file_name = os.path.basename(files)
        for line in open_file:
            for word in line.split():
                search_dictionary[word].add(file_name)
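From there, the lookup side of your script stays essentially the same; a small sketch reusing your search function (it assumes search_query is obtained the same way as in your original code), with the only difference being that the printed value is now a set of file names rather than a list:

end = time.time()
print(end - start)  # time taken to build the index

def search(search_dictionary, search_query):
    if search_query in search_dictionary:
        print('found ' + search_query)
        print(search_dictionary[search_query])  # a set of file names containing the word
    else:
        print('not found ' + search_query)

search(search_dictionary, search_query)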