Iterating over multiple files and appending text from HTML with Beautiful Soup

Posted: 2013-04-26 20:02:00

Tags: python beautifulsoup

I have a directory of downloaded HTML files (46 of them), and I'm trying to iterate over each file, read its contents, strip out the HTML, and append just the text to a text file. However, I'm not sure where I messed up, because nothing gets written to my text file.

import os
import glob
from bs4 import BeautifulSoup
path = "/"
for infile in glob.glob(os.path.join(path, "*.html")):
        markup = (path)
        soup = BeautifulSoup(markup)
        with open("example.txt", "a") as myfile:
                myfile.write(soup)
                f.close()

----- Update -----

I've updated my code, but the text file still isn't being created.

import os
import glob
from bs4 import BeautifulSoup
path = "/"
for infile in glob.glob(os.path.join(path, "*.html")):
    markup = (infile)
    soup = BeautifulSoup(markup)
    with open("example.txt", "a") as myfile:
        myfile.write(soup)
        myfile.close()

----- Update 2 -----

Ah, I found that my directory path was incorrect, so now I have:

import os
import glob
from bs4 import BeautifulSoup

path = "c:\\users\\me\\downloads\\"

for infile in glob.glob(os.path.join(path, "*.html")):
    markup = (infile)
    soup = BeautifulSoup(markup)
    with open("example.txt", "a") as myfile:
        myfile.write(soup)
        myfile.close()

When I run this, I get this error:

Traceback (most recent call last):
  File "C:\Users\Me\Downloads\bsoup.py", line 11, in <module>
    myfile.write(soup)
TypeError: must be str, not BeautifulSoup

I fixed that last error by changing

myfile.write(soup)

to

myfile.write(soup.get_text())
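For reference, the distinction behind that fix: `str(soup)` serializes the parsed tree back to HTML markup, while `soup.get_text()` returns only the text nodes with the tags stripped. A minimal sketch (the sample markup is made up for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>Hello <b>world</b></p>", "html.parser")

# get_text() strips the tags and returns only the text content
print(soup.get_text())  # Hello world

# str(soup) serializes the tree back to HTML markup
print(str(soup))  # <p>Hello <b>world</b></p>
```

`file.write()` accepts only strings, which is why passing the `BeautifulSoup` object itself raised the `TypeError`.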

----- Update 3 -----

Now it's working; here is the working code:

import os
import glob
from bs4 import BeautifulSoup

path = "c:\\users\\me\\downloads\\"

for infile in glob.glob(os.path.join(path, "*.html")):
    markup = (infile)
    soup = BeautifulSoup(open(markup, "r").read())
    with open("example.txt", "a") as myfile:
        myfile.write(soup.get_text())
        myfile.close()
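(Editor's note: the working code above still has two wrinkles: `open(markup, "r").read()` leaves a file handle open, and `myfile.close()` inside the `with` block is redundant since the context manager already closes the file. A tidier sketch of the same loop, wrapped in a hypothetical helper function for reuse, using the stdlib `html.parser`:)

```python
import os
import glob
from bs4 import BeautifulSoup

def append_html_text(src_dir, out_path):
    """Append the text content of every .html file in src_dir to out_path."""
    for infile in glob.glob(os.path.join(src_dir, "*.html")):
        # Context manager closes each HTML file as soon as it is read
        with open(infile, "r") as f:
            soup = BeautifulSoup(f.read(), "html.parser")
        # "a" appends; no explicit close() is needed inside a with block
        with open(out_path, "a") as myfile:
            myfile.write(soup.get_text())

append_html_text("c:\\users\\me\\downloads\\", "example.txt")
```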

2 Answers:

Answer 0 (score: 1)

You're not actually reading the html file; this should work:

soup=BeautifulSoup(open(webpage,'r').read(), 'lxml')
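(Editor's note: a variant of this answer's one-liner that closes the file handle explicitly; `read_html_text` is a hypothetical helper name, and the stdlib `html.parser` is substituted in case `lxml` isn't installed:)

```python
from bs4 import BeautifulSoup

def read_html_text(webpage):
    """Return the plain text of one downloaded HTML file at path 'webpage'."""
    with open(webpage, "r") as f:
        soup = BeautifulSoup(f.read(), "html.parser")
    return soup.get_text()
```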

Answer 1 (score: 0)

If you want to use lxml.html directly, here is a modified version of some code I've used for a project. If you want to grab all the text, just don't filter by tag. There may be a way to do it without iterating, but I don't know of one. It saves the data as unicode, so you have to take that into account when opening the output file.

import os
import glob

import lxml.html

path = '/'

# Whatever tags you want to pull text from.
visible_text_tags = ['p', 'li', 'td', 'h1', 'h2', 'h3', 'h4',
                     'h5', 'h6', 'a', 'div', 'span']

for infile in glob.glob(os.path.join(path, "*.html")):
    doc = lxml.html.parse(infile)

    file_text = []

    for element in doc.iter(): # Iterate once through the entire document

        try:  # Grab tag name and text (+ tail text)   
            tag = element.tag
            text = element.text
            tail = element.tail
        except:
            continue

        words = None # text words split to list
        if tail: # combine text and tail
            text = text + " " + tail if text else tail
        if text: # lowercase and split to list
            words = text.lower().split()

        if tag in visible_text_tags:
            if words:
                file_text.append(' '.join(words))

    with open('example.txt', 'a') as myfile:
        myfile.write(' '.join(file_text).encode('utf8'))
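(Editor's note: on the "without iterating" point above: when no tag filter is needed, lxml can pull all the text under the root in one call via `Element.text_content()`. A sketch, not a drop-in replacement for the tag-filtered version; `page_text` is a hypothetical helper name:)

```python
import lxml.html

def page_text(infile):
    """Return all text in an HTML file, whitespace-normalized."""
    doc = lxml.html.parse(infile)
    # text_content() concatenates every text node under the root element
    return " ".join(doc.getroot().text_content().split())
```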