Python TextBlob translation issue

Posted: 2019-03-14 17:41:26

Tags: python nltk sentiment-analysis textblob

I'm building a quick sentiment-analysis console application with Python, TextBlob, and NLTK.

Right now I'm feeding it a link to a Spanish-language Wikipedia article, so no translation is needed and I can use NLTK's Spanish stopword list. But what if I want this code to work with links in other languages?

If I add textFinal = textFinal.translate(to="es") below the textFinal = TextBlob(texto) line (code below), I get an error because it can't translate Spanish into Spanish.

Can I prevent that simply with a try/except? And is there a way to translate the text (and switch to a different stopword list) depending on the language of the link fed into the application?

import nltk
nltk.download('stopwords')
nltk.download('punkt')  # word_tokenize needs the punkt tokenizer data
from nltk import word_tokenize
from nltk.corpus import stopwords
import string
from textblob import TextBlob, Word
import urllib.request
from bs4 import BeautifulSoup

response = urllib.request.urlopen('https://es.wikipedia.org/wiki/Valencia')
html = response.read()

soup = BeautifulSoup(html,'html5lib')
text = soup.get_text(strip = True)


tokens = word_tokenize(text)
tokens = [w.lower() for w in tokens]

table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
words = [word for word in stripped if word.isalpha()]

stop_words = set(stopwords.words('spanish'))

words = [w for w in words if not w in stop_words]

with open('palabras.txt', 'w') as f:
    for word in words:
        f.write(" " + word)

with open('palabras.txt', 'r') as myfile:
    texto=myfile.read().replace('\n', '')


textFinal=TextBlob(texto)

print (textFinal.sentiment)

freq = nltk.FreqDist(words)

freq.plot(20, cumulative=False)
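
For reference, the try/except guard mentioned in the question might look roughly like this (a sketch only; it assumes textblob.exceptions.NotTranslated, which TextBlob raises when the translated text comes back unchanged, i.e. it is already in the target language):

from textblob.exceptions import NotTranslated

textFinal = TextBlob(texto)
try:
    textFinal = textFinal.translate(to="es")
except NotTranslated:
    # the text is already Spanish, so keep the original blob
    pass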

1 Answer:

Answer 0 (score: 1):

Take a look at the langdetect package. You can detect the language of the page being fed in and skip the translation when it already matches the target language. Something like the following:

import string
import urllib.request

import nltk
from bs4 import BeautifulSoup
from langdetect import detect
from nltk import word_tokenize
from nltk.corpus import stopwords
from textblob import TextBlob, Word

nltk.download("stopwords")
# nltk.download("punkt")

response = urllib.request.urlopen("https://es.wikipedia.org/wiki/Valencia")
html = response.read()

soup = BeautifulSoup(html, "html5lib")
text = soup.get_text(strip=True)
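# detect the page language up front; langdetect returns ISO 639-1 codes such as "es"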
lang = detect(text)

tokens = word_tokenize(text)
tokens = [w.lower() for w in tokens]

table = str.maketrans("", "", string.punctuation)
stripped = [w.translate(table) for w in tokens]
words = [word for word in stripped if word.isalpha()]

stop_words = set(stopwords.words("spanish"))

words = [w for w in words if w not in stop_words]

with open("palabras.txt", "w", encoding="utf-8") as f:
    for word in words:
        f.write(" " + word)

with open("palabras.txt", "r", encoding="utf-8") as myfile:
    texto = myfile.read().replace("\n", "")


textFinal = TextBlob(texto)

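# only translate when the detected language differs from the target;
# translating es -> es is what raised the error in the question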
translate_to = "es"
if lang != translate_to:
    textFinal = textFinal.translate(to=translate_to)

print(textFinal.sentiment)

freq = nltk.FreqDist(words)

freq.plot(20, cumulative=False)
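
The detected language can also drive the choice of stopword list, which covers the other half of the question. A minimal sketch, assuming a hand-written mapping from langdetect codes to the names NLTK's stopwords corpus uses (the mapping and its fallback are illustrative, not exhaustive):

# map langdetect codes to NLTK stopword corpus names; extend as needed
lang_to_stopwords = {"es": "spanish", "en": "english", "fr": "french", "de": "german"}

# fall back to Spanish when the detected code is not in the mapping
stop_words = set(stopwords.words(lang_to_stopwords.get(lang, "spanish")))

stopwords.fileids() lists every language the corpus ships with, so the mapping can be extended to whatever languages the application needs to handle.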