Collecting formatted content from multiple web pages

Date: 2014-03-31 15:53:58

Tags: python parsing web

I'm working on a research project and need the contents of a show's transcripts. The problem is that the transcripts are formatted for a specific wiki (the Arrested Development wiki), and I need them to be machine-readable.

What is the best way to download all of these transcripts and reformat them? Is Python's HTMLParser my best bet?
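
For reference, the HTMLParser mentioned above is used roughly as follows. A minimal sketch (Python 2; the TextExtractor class name and the sample markup are mine, not taken from the wiki):

from HTMLParser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text between tags and discards the tags themselves."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.chunks = []

    def handle_data(self, data):
        # Called once for every run of plain text between tags.
        self.chunks.append(data)

extractor = TextExtractor()
extractor.feed("<p><b>Michael:</b> It's a trick.</p>")
print "".join(extractor.chunks)  # -> Michael: It's a trick.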

1 Answer:

Answer 0 (score: 2)

I wrote a script in Python that takes the link to a wiki transcript as input and gives you a plain-text version of the transcript in a text file as output. I hope this helps with your project.

import cStringIO
import re

import pycurl

# Prompt for the transcript URL and derive the output filename from the
# last segment of the URL path.
link = raw_input("Link to transcript: ")
filename = link.split("/")[-1] + ".txt"

# Fetch the page's raw HTML into a string buffer.
buf = cStringIO.StringIO()
c = pycurl.Curl()
c.setopt(c.URL, link)
c.setopt(c.WRITEFUNCTION, buf.write)
c.perform()
html = buf.getvalue()
buf.close()

# On this wiki, each utterance is marked up as "<b>Speaker:</b> line".
# Every occurrence of ":</b>" therefore marks the end of a speaker name.
speaker_positions = [m.start() for m in re.finditer(':</b>', html)]

out = open(filename, 'w')

for pos in speaker_positions:
    # Skip matches immediately followed by another tag rather than by
    # dialogue text (":</b>" is 5 characters long).
    if html[pos + 5] == "<":
        continue

    # Walk backwards from the colon to the ">" of the opening <b> tag,
    # collecting the speaker's name in reverse.
    name_chars = []
    searchpos = pos - 1
    while html[searchpos] != ">":
        name_chars.append(html[searchpos])
        searchpos -= 1
    line = "".join(reversed(name_chars)) + ": "

    # Walk forwards from just past "</b>" to the next "<", collecting
    # the spoken line itself.
    searchpos = pos + 5
    while html[searchpos] != "<":
        line += html[searchpos]
        searchpos += 1

    # Strip non-breaking-space entities and write one utterance per line.
    line = line.replace("&#160;", "")
    out.write(line + "\n")

out.close()
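
To run the script you need Python 2 (it uses raw_input and cStringIO) and the pycurl package (e.g. pip install pycurl); paste the transcript URL at the prompt, and the output lands in a file named after the last segment of the URL. If the markup really is as regular as the script assumes (every utterance rendered as "<b>Speaker:</b> line"), the character-by-character walk can also be replaced by a single regular expression. A sketch under that same assumption, not tested against the live wiki:

import re

# html is the page source fetched as in the script above.
# Capture "<b>Name:</b> spoken line" pairs in one pass. Assumes the <b>
# tag carries no attributes; adjust the pattern if the wiki's HTML differs.
pattern = re.compile(r'<b>([^<]+):</b>([^<]+)')
for name, text in pattern.findall(html):
    print name + ":" + text.replace("&#160;", "")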