Google Cloud SQL PostgreSQL databases have a relatively low connection limit. Depending on the plan, the limit is between 25 and 500, whereas the MySQL limit in Google Cloud SQL ranges from 250 to 4000, reaching 4000 quite quickly.
We currently have many trial instances for different customers, running on Kubernetes and backed by the same Google Cloud SQL Postgres server. Each instance uses a separate set of databases, roles, and connections (one per service). We have already hit the plan's connection limit (50) without coming anywhere near the memory or CPU limits. Connection pooling does not seem to be an option, because the connections use different users. I'm now wondering why the limit is so low, and whether there is a way to raise it without upgrading to a more expensive plan.
Answer 0 (score: 9)
There is a feature request in the Public Issue Tracker to expose and control max_connections in PostgreSQL. This comment (which I reproduce here) explains the reasoning behind the current connection limits:
Per-tier max_connections is now fully rolled out. As shown on
https://cloud.google.com/sql/faq#sizeqps, the limits are now:
Memory size, in GiB | Maximum concurrent connections
--------------------+-------------------------------
0.6 (db-f1-micro) | 25
1.7 (db-g1-small) | 50
3.75 up to 6 | 100
6 up to 7.5 | 150
7.5 up to 15 | 200
15 up to 30 | 250
30 up to 60 | 300
60 up to 120 | 400
120 and above | 500
I understand your frustration about the micro/small instances having fewer than 100
concurrent connections and the lack of control of this flag. We arrived at these values by
taking the available RAM, reducing it by overhead, shared buffers, autovacuum memory and
then dividing the remaining ram by typical per-connection memory and rounding off. This
gives us the number of connections that can be used without risk of hitting out-of-memory
condition
The basic premise of a fully managed service with an attached SLA is that we provide safe
hosting. This is what motivates us using a max_connections that is safe against OOM.
Since you have already ruled out connection pooling, your option is to use an instance with higher memory.
Update:
As mentioned in a comment on the thread above, the maximum connection settings have changed and are now:
In addition, the defaults can now be overridden with flags, up to 260K connections.
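For reference, a flag override of this kind is typically applied with the `gcloud` CLI. This is a sketch under stated assumptions: `my-instance` and the value `300` are placeholders, the allowed range depends on the instance's memory, and changing database flags restarts the instance.

```shell
# Override the max_connections database flag on an existing
# Cloud SQL instance (placeholder name "my-instance").
# Note: patching flags restarts the instance.
gcloud sql instances patch my-instance \
    --database-flags=max_connections=300

# Then verify from a psql session connected to the instance:
#   SHOW max_connections;
```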
Answer 1 (score: 3)