Million Tables Test Club
On the InfoSec Exchange forum, a fascinating discussion unfolded around the challenge of creating 1 million tables and measuring TPS, in this case tables per second rather than the usual transactions per second. The original poster (OP) was investigating performance issues in MySQL but decided to run the same test in PostgreSQL. PostgreSQL turned out to be 5 to 50 times faster than MySQL, but the OP hit some snags, with crashes and lengthy recovery times.
Curious about the crashes, I decided to replicate the test. I suspected the issue stemmed from PostgreSQL's default settings, which are known to be conservative and perhaps not suited to such an extreme use case of rapid, large-scale table creation. Given that the 1-billion-table experiment had previously succeeded, I was pretty certain it all boiled down to configuration or the Python script.
The hardware I chose was less powerful than my Android phone.
Skipping any Python scripts, I opted for a direct loop in psql. On my first try, I encountered an "out of shared memory" error, though PostgreSQL didn't crash. It helpfully suggested: "HINT: You might need to increase max_locks_per_transaction."
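The exact loop isn't reproduced here, but one plausible shape of it, a PL/pgSQL DO block run from psql (the table name t_<n> and the single int column are assumptions, not the original test's definitions), looks like this:

-- Create 1 million tables in a single DO block.
-- Note: the whole block runs in one transaction, so PostgreSQL keeps a lock
-- on every newly created table until commit, which is exactly what can
-- exhaust the lock table and trigger the max_locks_per_transaction hint.
DO $$
BEGIN
    FOR i IN 1..1000000 LOOP
        EXECUTE format('CREATE TABLE t_%s (id int)', i);
    END LOOP;
END
$$;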
So, I bumped up max_locks_per_transaction to 8000 and increased shared_buffers to 9GB. I kept fsync off, though I don't think it was relevant to the error.
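For reference, the same changes can be applied with ALTER SYSTEM (editing postgresql.conf by hand works just as well); the values are the ones mentioned above:

ALTER SYSTEM SET max_locks_per_transaction = 8000;  -- default is 64
ALTER SYSTEM SET shared_buffers = '9GB';            -- default is 128MB
ALTER SYSTEM SET fsync = off;                       -- unsafe outside throwaway benchmarks
-- max_locks_per_transaction and shared_buffers only take effect after a restart;
-- fsync can be picked up with a reload (pg_reload_conf() or pg_ctl reload).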
On the second attempt, the process completed in 8.5 minutes without any crashes or errors. This test ran on a 12GB RAM VM with just two CPUs, running Ubuntu 22.04.
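As a quick sanity check after the run, counting the ordinary tables confirms they all made it (the public schema is an assumption here):

-- Should return 1000000 if every CREATE TABLE succeeded.
SELECT count(*) FROM pg_tables WHERE schemaname = 'public';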
This million-table showdown proves that with the right tweaks, PostgreSQL is more than ready to enter the ring.
Next step: join those 1 million tables
https://www.cybertec-postgresql.com/en/next-stop-joining-1-million-tables/
How about 1 Million users?
https://www.cybertec-postgresql.com/en/creating-1-million-users-in-postgresql/
Disclaimer: These stunts are performed by trained professionals; don't try this at home or in production.