Good morning!
I am currently using sklearn's TfidfVectorizer with a custom tokenizer. The idea is to create a pickled TfidfVectorizer and load that vectorizer into an AWS Lambda function that transforms text input.
The problem: on my local machine it works fine. I can load the vectorizer from the S3 bucket, deserialize it, create a new vectorizer object, and use it to transform text. On AWS it does not work. It seems it cannot load my custom tokenizer, and I always get an AttributeError.
I have also tried using a lambda function and the dill pickler, but that does not work on AWS either: it cannot find the PorterStemmer module I use in the custom tokenizer.
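For reference, a minimal sketch of what the dill variant could look like, assuming the same training script as below (not my exact code; X again stands for the training corpus):
import dill
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

def tokenizer_porter(text):
    porter = PorterStemmer()
    return [porter.stem(word) for word in text.split()]

tfidf = TfidfVectorizer(ngram_range=(1, 1), stop_words=None, tokenizer=tokenizer_porter)
tfidf.fit(X)

# recurse=True makes dill also trace globals referenced by tokenizer_porter;
# nltk still has to be bundled with the Lambda deployment, otherwise the
# PorterStemmer import fails when the object is loaded.
with open('tfidf_vect_dill.pkl', 'wb') as f:
    dill.dump(tfidf, f, recurse=True)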
The serialized TfidfVectorizer (which I serialized on my local machine):
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.stem.porter import PorterStemmer

def tokenizer_porter(text):
    porter = PorterStemmer()
    return [porter.stem(word) for word in text.split()]

tfidf = TfidfVectorizer(ngram_range=(1, 1), stop_words=None, tokenizer=tokenizer_porter)
tfidf.fit(X)
pickle.dump(tfidf, open(pickle_path + 'tfidf_vect.pkl', 'wb'), protocol=4)
Deserialization (in the AWS Lambda service):
import pickle

import boto3
from nltk.stem.porter import PorterStemmer

def tokenizer_porter(text):
    porter = PorterStemmer()
    return [porter.stem(word) for word in text.split()]

def load_model_from_bucket(key, bucket_name):
    s3 = boto3.resource('s3')
    complete_key = 'serialized_models/' + key
    res = s3.meta.client.get_object(Bucket=bucket_name, Key=complete_key)
    model_str = res['Body'].read()
    model = pickle.loads(model_str)
    return model

tfidf = load_model_from_bucket('tfidf_vect.pkl', bucket_name)
tfidf.transform(text_data)
In AWS CloudWatch I get this traceback:
Can't get attribute 'tokenizer_porter' on <module '__main__' from '/var/runtime/awslambda/bootstrap.py'>: AttributeError
Traceback (most recent call last):
  File "/var/task/handler.py", line 56, in index
    tfidf = load_model_from_bucket('tfidf_vect.pkl', bucket_name)
  File "/var/task/handler.py", line 35, in load_model_from_bucket
    model = pickle.loads(model_str)
AttributeError: Can't get attribute 'tokenizer_porter' on <module '__main__' from '/var/runtime/awslambda/bootstrap.py'>
Do you have any idea what I am doing wrong?
Edit: I chose to do the tfidf vectorization directly in the AWS Lambda script without pickle serialization. This adds some computational overhead, but it does not cause any problems.
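A minimal sketch of that workaround, with the fitting done at module import time so the extra cost is only paid on cold starts. BUCKET, CORPUS_KEY, the corpus format, and the event shape are placeholders, not my actual setup:
import boto3
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

BUCKET = 'my-bucket'             # placeholder
CORPUS_KEY = 'corpus/train.txt'  # placeholder, one document per line

def tokenizer_porter(text):
    porter = PorterStemmer()
    return [porter.stem(word) for word in text.split()]

def load_corpus():
    # Load the raw training texts from S3 instead of a pickled vectorizer.
    s3 = boto3.resource('s3')
    body = s3.meta.client.get_object(Bucket=BUCKET, Key=CORPUS_KEY)['Body'].read()
    return body.decode('utf-8').splitlines()

# Fit once per container, at import time.
tfidf = TfidfVectorizer(ngram_range=(1, 1), stop_words=None, tokenizer=tokenizer_porter)
tfidf.fit(load_corpus())

def index(event, context):
    features = tfidf.transform([event['text']])
    return {'n_features': features.shape[1]}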
Answer 0 (score: 0)
Based on the following two references, I found a solution that works for my Heroku app:
AttributeError when reading a pickle file
Failed to find application object 'server' in 'app'
Basically, for my two pickles (file1.pickle and file2.pickle), I changed the way I read the files and added the following:
import pickle

class MyCustomUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == "__main__":
            module = "app"
        return super().find_class(module, name)

with open('file1.pickle', 'rb') as f:
    unpickler = MyCustomUnpickler(f)
    object1 = unpickler.load()

with open('file2.pickle', 'rb') as f:
    unpickler = MyCustomUnpickler(f)
    object2 = unpickler.load()
and added this right after app = dash.Dash(__name__):
server = app.server
Detailed explanations are in the links above.
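The same idea should carry over to the Lambda setup from the question: remap __main__ to whatever module actually defines tokenizer_porter. A rough sketch, assuming the handler module is handler.py (as in the traceback) and defines tokenizer_porter at top level:
import io
import pickle

import boto3

class LambdaUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # The vectorizer was pickled in a script where tokenizer_porter lived in
        # __main__; on Lambda, __main__ is bootstrap.py, so redirect it to "handler".
        if module == "__main__":
            module = "handler"
        return super().find_class(module, name)

def load_model_from_bucket(key, bucket_name):
    s3 = boto3.resource('s3')
    complete_key = 'serialized_models/' + key
    model_bytes = s3.meta.client.get_object(Bucket=bucket_name, Key=complete_key)['Body'].read()
    return LambdaUnpickler(io.BytesIO(model_bytes)).load()
An equivalent fix is to move tokenizer_porter into its own module that both the training script and the handler import, so the pickle never references __main__ in the first place.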