Output does not display all UTF-8 correctly

Date: 2014-11-08 09:11:18

Tags: python python-3.x utf-8 io

I'm writing a scraper for http://www.delfi.lt (using lxml and py3k on Windows 8); the goal is to output certain information to a .txt file. Since the site is in Lithuanian, ASCII obviously won't work as the encoding, so I tried printing it as UTF-8. However, not all non-ASCII characters make it into the file correctly.

One example of this: the scraper writes DELFI Å½inios > Dienos naujienos > UÅ¾sienyje instead of DELFI Žinios > Dienos naujienos > Užsienyje.

The scraper looks like this:

from lxml import html
import sys

# Takes in command line input, namely the URL of the story and (optionally) the name of the CSV file that will store all of the data
# Outputs a list consisting of two strings, the first will be the URL, and the second will be the name if given, otherwise it'll be an empty string
def accept_user_input():
    if len(sys.argv) < 2 or len(sys.argv) > 3:
        raise type('IncorrectNumberOfArgumentsException', (Exception,), {})('Should have at least one, up till two, arguments.')
    if len(sys.argv) == 2:
        return [sys.argv[1], '']
    else:
        return sys.argv[1:]

def main():
    url, name = accept_user_input()
    page = html.parse(url)

    title = page.find('//h1[@itemprop="headline"]')
    category = page.findall('//span[@itemprop="title"]')

    with open('output.txt', encoding='utf-8', mode='w') as f:
        f.write((title.text) + "\n")
        f.write(' > '.join([x.text for x in category]) + '\n')

if __name__ == "__main__":
    main()

A sample run, python scraper.py http://www.delfi.lt/news/daily/world/ukraina-separatistai-siauteja-o-turcynovas-atnaujina-mobilizacija.d?id=64678799, generates a file named output.txt containing

Ukraina: separatistai siautÄja, O. TurÄynovas atnaujina mobilizacijÄ
DELFI Žinios > Dienos naujienos > Užsienyje

instead of

Ukraina: separatistai siautėja, O. Turčynovas atnaujina mobilizaciją
DELFI Žinios > Dienos naujienos > Užsienyje

How do I get the script to output all of the text correctly?

1 answer:

Answer 0 (score: 3)

Using requests and BeautifulSoup, and letting requests handle the encoding via .content, works for me:

import requests
from bs4 import BeautifulSoup

def main():
    url, name = "http://www.delfi.lt/news/daily/world/ukraina-separatistai-siauteja-o-turcynovas-atnaujina-mobilizacija.d?id=64678799","foo.csv"
    r = requests.get(url)

    page = BeautifulSoup(r.content, "lxml")  # specify a parser explicitly

    title = page.find("h1",{"itemprop":"headline"})
    category = page.find_all("span",{"itemprop":"title"})
    print(title)
    with open('output.txt', encoding='utf-8', mode='w') as f:
        f.write((title.text) + "\n")
        f.write(' > '.join([x.text for x in category]) + '\n')

Output:

Ukraina: separatistai siautėja, O. Turčynovas atnaujina mobilizacijąnaujausi susirėmimų vaizdo įrašai
DELFI Žinios > Dienos naujienos > Užsienyje
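
Why .content helps: the garbled strings in the question are exactly what you get when UTF-8 bytes are decoded as Latin-1 (classic mojibake). A quick stdlib round-trip reproduces it (a sketch; the breadcrumb string is taken from the question):

```python
correct = "DELFI Žinios > Dienos naujienos > Užsienyje"

# Encode to UTF-8, then wrongly decode as Latin-1: this reproduces
# the broken breadcrumb shown in the question.
garbled = correct.encode("utf-8").decode("latin-1")
print(garbled)  # DELFI Å½inios > Dienos naujienos > UÅ¾sienyje

# Reversing the mistake recovers the original text.
assert garbled.encode("latin-1").decode("utf-8") == correct
```

So the original script was not failing on output (the file was opened with encoding='utf-8'); the text was already mis-decoded at parse time.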

Changing the parser encoding also works:

parser = etree.HTMLParser(encoding="utf-8")
page = html.parse(url,parser)
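
To see why the explicit encoding matters, here is a minimal sketch that parses a raw UTF-8 byte string with such a parser (the fragment and itemprop value mimic the question's markup; the fragment itself is made up for illustration):

```python
from lxml import etree, html

# Tell lxml the bytes are UTF-8 instead of letting it guess
# the encoding from the stream.
parser = etree.HTMLParser(encoding="utf-8")

# A tiny UTF-8-encoded fragment standing in for the real page.
fragment = '<h1 itemprop="headline">DELFI Žinios</h1>'.encode("utf-8")

h1 = html.fromstring(fragment, parser=parser)
print(h1.text)  # DELFI Žinios
```

Without the explicit encoding, lxml falls back on its own detection, which is what produced the Latin-1-style mojibake in the question.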

So change your code to the following:

from lxml import html,etree
import sys

# Takes in command line input, namely the URL of the story and (optionally) the name of the CSV file that will store all of the data
# Outputs a list consisting of two strings, the first will be the URL, and the second will be the name if given, otherwise it'll be an empty string
def accept_user_input():
    if len(sys.argv) < 2 or len(sys.argv) > 3:
        raise type('IncorrectNumberOfArgumentsException', (Exception,), {})('Should have at least one, up till two, arguments.')
    if len(sys.argv) == 2:
        return [sys.argv[1], '']
    else:
        return sys.argv[1:]

def main():
    url, name = accept_user_input()
    parser = etree.HTMLParser(encoding="utf-8")
    page = html.parse(url, parser)

    title = page.find('//h1[@itemprop="headline"]')
    category = page.findall('//span[@itemprop="title"]')

    with open('output.txt', encoding='utf-8', mode='w') as f:
        f.write((title.text) + "\n")
        f.write(' > '.join([x.text for x in category]) + '\n')

if __name__ == "__main__":
    main()