I keep getting a sporadic error from Cloud Functions for Firebase when converting a relatively small image (2MB). When it succeeds, the function takes about 2000ms or less to finish, and according to the ImageMagick documentation I should not see any problems.
I tried increasing the buffer size for the command, which isn't allowed from within Firebase, and I tried to find alternatives to .spawn() as that could be overloaded with garbage and slow things down. Nothing works.
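For context, the spawn-based conversion being described can be sketched roughly as follows. This is a minimal sketch, not the asker's actual code: the resize operation, file paths, and the choice to cap ImageMagick's memory with its real -limit flags are all assumptions.

```javascript
// Build an argument list for ImageMagick's `convert`, capping IM's
// own memory so it spills to disk instead of exceeding the
// function's memory limit. The 256MiB default is an assumption --
// match it to the memory actually allocated to the function.
function buildConvertArgs(src, dest, memoryLimit = '256MiB') {
  return [
    '-limit', 'memory', memoryLimit, // cap IM's in-memory pixel cache
    '-limit', 'map', memoryLimit,    // cap memory-mapped files too
    src,
    '-resize', '1024x1024>',         // example operation: shrink large images only
    dest,
  ];
}

// Usage with child_process (not executed here):
// const { spawn } = require('child_process');
// spawn('convert', buildConvertArgs('/tmp/in.jpg', '/tmp/out.jpg'));
```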
Answer 0 (score: 32)
I got lost in the UI and couldn't find any option to change the memory, but I finally found it:
Regards, Peter
Answer 1 (score: 22)
[Update] As one commenter suggested, this should no longer be an issue, as Firebase functions now keep their settings on re-deploy. Thanks, Firebase!
It turns out (and this is neither obvious nor documented) that you can increase the memory allocation for your functions in the Google Cloud Functions Console. You can also increase the timeout for long-running functions. That solved the memory-overload problem, and everything is working great now.
Edit: note that Firebase resets these values to their defaults on deploy, so you should remember to log in to the console and update them right away. I am still looking for a way to update these settings via the CLI; I will update this answer when I find it.
Answer 2 (score: 11)
You can set this in your Cloud Functions file for Firebase:
const runtimeOpts = {
  timeoutSeconds: 300,
  memory: '1GB'
}

exports.myStorageFunction = functions
  .runWith(runtimeOpts)
  .storage
  .object()
  .onFinalize((object) => {
    // do some complicated things that take a lot of memory and time
  });
Documentation here: https://firebase.google.com/docs/functions/manage-functions#set_timeout_and_memory_allocation
Don't forget to run firebase deploy from the terminal afterwards.
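Since older CLI versions silently reset these values on deploy, it can help to sanity-check runtimeOpts before deploying. A small sketch: the list of accepted memory strings and the 540-second ceiling are assumptions based on the Cloud Functions limits current at the time, so verify them against the linked docs.

```javascript
// Accepted memory values for runWith (assumption -- check the
// current Firebase documentation for the authoritative list).
const VALID_MEMORY = ['128MB', '256MB', '512MB', '1GB', '2GB'];

// Throw early if the options would be rejected (or silently
// clamped) at deploy time.
function checkRuntimeOpts(opts) {
  if (!VALID_MEMORY.includes(opts.memory)) {
    throw new Error(`Unsupported memory value: ${opts.memory}`);
  }
  if (opts.timeoutSeconds < 1 || opts.timeoutSeconds > 540) {
    throw new Error(`timeoutSeconds must be 1-540, got ${opts.timeoutSeconds}`);
  }
  return opts;
}
```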
Answer 3 (score: 9)
The latest firebase deploy command overrides the memory allocation back to the default 256MB and the timeout to a maximum of 60 seconds.
Alternatively, to specify the desired memory allocation and maximum timeout, I use a gcloud command such as:
gcloud beta functions deploy YourFunctionName --memory=2048MB --timeout=540s
For other options, see:
https://cloud.google.com/sdk/gcloud/reference/beta/functions/deploy
Answer 4 (score: 4)
Answer 5 (score: 3)
Update: it looks like settings are now preserved on re-deploy, so you can safely change the memory allocation in the Cloud console!
Answer 6 (score: 1)
The default ImageMagick resource configuration in Firebase Cloud Functions does not seem to match the memory actually allocated to the function.
Running identify -list resource in the context of a Firebase Cloud Function yields:
File    Area     Memory  Map   Disk       Thread  Throttle  Time
----------------------------------------------------------------
18750   4.295GB  2GiB    4GiB  unlimited  8       0         unlimited
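If you want your function to log what ImageMagick thinks its limits are, you can run identify -list resource and parse the output. A rough sketch, assuming the three-line layout shown above (header row, dashed separator, values row):

```javascript
// Parse the output of `identify -list resource` into a key/value
// map, e.g. { File: '18750', Memory: '2GiB', ... }.
// Assumes the layout shown above: header, separator, one value row.
function parseResourceListing(listing) {
  const lines = listing.trim().split('\n');
  const headers = lines[0].trim().split(/\s+/);
  const values = lines[2].trim().split(/\s+/);
  return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
}
```

You could then compare the reported Memory value against the memory you actually allocated to the function and log a warning when they disagree.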
The default memory allocated to a Firebase Cloud Function is 256MB, while the default ImageMagick instance believes it has 2GB. IM therefore won't allocate buffers on disk and can easily try to over-allocate memory, causing the function to fail with Error: memory limit exceeded. Function killed.
One approach is to increase the function's memory as described above, though there is still a risk that IM will try to over-allocate, depending on your use case and outliers.
Safer is to set a proper memory limit for IM as part of the image-manipulation process (ImageMagick's -limit memory option). You can calculate your approximate memory usage by running your IM logic with -debug Cache; it will show every buffer allocated, its size, and whether it lives in memory or on disk.
If IM hits its memory limit, it will start allocating buffers on disk (memory-mapped first, then regular disk buffers). You will have to weigh I/O performance against memory cost: the price of every additional byte of memory you allocate to the function is multiplied by every 100ms of usage, and that grows quickly.
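A rough sketch of the Cloud Functions billing arithmetic behind that memory/I-O trade-off. The 100ms rounding reflects GCF's billing granularity at the time; treat the numbers as illustrative, not a pricing quote.

```javascript
// Approximate the billable GB-seconds for one invocation:
// duration rounded up to the nearest 100ms, multiplied by the
// memory allocated to the function.
function gbSeconds(memoryMb, durationMs) {
  const billedMs = Math.ceil(durationMs / 100) * 100; // billed per 100ms slice
  return (memoryMb / 1024) * (billedMs / 1000);
}

// Doubling the memory allocation doubles the GB-seconds for the
// same duration -- unless the extra memory also makes the
// function finish faster.
```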
Answer 7 (score: 1)
Answer 8 (score: 0)