How do I write sentence numbers and sentences (separated by '|') to a CSV file?

Date: 2014-03-05 21:45:44

Tags: python csv file-io

I am trying to read a list of files and extract the file ID and the abstract from each one. Every sentence of each abstract should be written to a CSV file, with the file ID, sentence number, and sentence separated by '|'.
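
For example, with a hypothetical file ID of a1234567, one row of the desired output would look like this:

a1234567|1|This is the first sentence of the abstract.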

I was told to use NLTK's tokenizer. I have installed NLTK, but I don't know how to use it with my code. My Python version is 3.2.2. Here is my code:

import re, os, sys
import csv

# Read in the list of files.
topdir = r'E:\Grad\LIS\LIS590 Text mining\Part1\Part1'  # Raw string so the backslashes in the Windows path are not treated as escapes.
matches = []
for root, dirnames, filenames in os.walk(topdir):
    for filename in filenames:
        if filename.endswith(('.txt', '.pdf')):
            matches.append(os.path.join(root, filename))

# Create lists and fill them with the file IDs and abstracts. Every abstract is one string in the list.
capturedfiles = []
capturedabstracts = []
for filepath in matches[:10]:  # Testing with the first 10 files.
    with open(filepath, 'rt') as myfile:
        mytext = myfile.read()

    # Code to capture the file ID.
    matchFile = re.findall(r'File\s+\:\s+(\w\d{7})', mytext)[0]
    capturedfiles.append(matchFile)

    # Code to capture the abstract.
    matchAbs = re.findall(r'Abstract\s+\:\s+(\w.+)\n', mytext)[0]
    capturedabstracts.append(matchAbs)
    print(capturedabstracts)

with open('Abstract.csv', 'w') as csvfile:
    writer = csv.writer(csvfile)
    for data in capturedabstracts:
        writer.writerow([data])

I am a beginner in Python and may not be able to follow your comments, so it would be great if you could provide comments along with the revised code.

2 Answers:

Answer 0 (score: 1):

As a first attempt, look at a sentence tokenizer to split the text into a list of sentences, and then use writerow to store them in the CSV file:

import csv
import nltk

# text is assumed to hold one abstract as a single string.
with open('Abstract.csv', 'w', newline='') as outfile:
    sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')
    list_of_sentences = sent_detector.tokenize(text.strip())
    writer = csv.DictWriter(outfile, fieldnames=['phrase'], delimiter='|',
                            quotechar=None, quoting=csv.QUOTE_NONE, escapechar='\\')
    for phrase in list_of_sentences:
        phrasedict = {'phrase': phrase}
        writer.writerow(phrasedict)
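
To produce the rows the question asks for (file ID, sentence number, and sentence separated by '|'), a minimal sketch might combine this tokenizer with the capturedfiles and capturedabstracts lists built in the question's code; the exact variable names and output are assumptions, not tested against the original data:

import csv
import nltk

# capturedfiles and capturedabstracts are the parallel lists built in the question's code.
sent_detector = nltk.data.load('tokenizers/punkt/english.pickle')

with open('Abstract.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile, delimiter='|', quoting=csv.QUOTE_NONE, escapechar='\\')
    for file_id, abstract in zip(capturedfiles, capturedabstracts):
        # Number the sentences of each abstract starting at 1.
        for number, sentence in enumerate(sent_detector.tokenize(abstract.strip()), 1):
            writer.writerow([file_id, number, sentence])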

Answer 1 (score: 0):

Try using writerow.

Try something like this:

with open('Abstract.csv', 'w') as csvfile:
    writer = csv.writer(csvfile)
    for data in capturedabstracts:
        writer.writerow([data])
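
Note that this writes each whole abstract as one row using the default comma delimiter; to get one '|'-separated row per sentence, you would still need to split each abstract with a sentence tokenizer and pass delimiter='|' to csv.writer, as sketched under the first answer.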