We're building a system where ideally we'd give a database per client in a multi-tenant setup. However, although a single Postgres install supports 4,294,950,911 databases on a server, the suggested max_connections is around 100 or so. That means if we used every active connection up to some recommended amount (using 100 as an example), we can at any one time only access about 0.000002% of the available databases. In no way would I need to utilize 4B databases, but it would certainly be great to utilize more than 100.

In looking up connection info in pg_stat_activity, it seems that each connection includes a database name, so I'm guessing you cannot create a connection pool on the same server across different database names. I was hoping there would be a solution that allowed a connection pool across databases on the same server, but I'm finding that's not the case - please enlighten me if there is something in this realm of solutions.

So my options, I believe, at each end of the spectrum are:

1. Shard tenants across schemas, and use connection pooling on the same database.
2. Multiplex and/or context-switch connections from the API server, using an LRU cache of pg pools that manage connections, closing the ones that the LRU cache disposes.

Assuming I wanted to build 10,000 databases per database server, I'd look to get a solution like #2 working to access the different databases. But that sounds like it could get expensive: how expensive is it really to open/close a connection if it happens quite often? One example of such a cost would be connection/disconnection latency - for every connection that is created, the OS needs to allocate memory to the process that is opening the network socket, and PostgreSQL needs to do its own under-the-hood computations to establish that connection. I'm also guessing there may be a hybrid solution that allows us to use many databases but minimize the connections per server, with a fan-out using logical replication.

And if that's not feasible, is the number of databases allowed in Postgres just a computer-science-theoretical number that we'll only ever use a fraction of a fraction of a percent of? And in reality, would we at best use a number of databases that correlates closely with the number of connections?
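For what it's worth, the LRU-cache-of-pools idea can be sketched in a few lines. This is a minimal sketch, not a vetted implementation: `PoolLRU` and `make_pool` are hypothetical names, and the only assumption about the pool objects is that they expose a `close()` method (a thin wrapper around something like psycopg2's pool classes would satisfy that).

```python
from collections import OrderedDict


class PoolLRU:
    """LRU cache of per-database connection pools (hypothetical sketch).

    make_pool(dbname) must return a pool object with a close() method;
    when the cache exceeds max_pools, the least-recently-used pool is
    evicted and closed, dropping its server connections.
    """

    def __init__(self, make_pool, max_pools=20):
        self.make_pool = make_pool
        self.max_pools = max_pools
        self.pools = OrderedDict()  # dbname -> pool, ordered by recency

    def get(self, dbname):
        pool = self.pools.get(dbname)
        if pool is None:
            pool = self.make_pool(dbname)
            self.pools[dbname] = pool
        self.pools.move_to_end(dbname)  # mark as most recently used
        while len(self.pools) > self.max_pools:
            _, evicted = self.pools.popitem(last=False)  # oldest first
            evicted.close()  # release that database's connections
        return pool
```

The point of the `close()` on eviction is exactly the trade-off asked about above: an evicted tenant pays full reconnection latency the next time it's touched, so `max_pools` becomes the knob between total server connections and reconnect churn.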