Browsing Thousands of Databases Without Losing Your Mind

The multi-tenant PostgreSQL problem

If you run a SaaS platform, there's a good chance your architecture looks something like this: one PostgreSQL cluster, hundreds or thousands of databases, one per customer. Each database has the same schema — the same tables, the same columns — but different data. Different configurations, different user accounts, different ingested records.

This pattern is everywhere. It's the simplest form of multi-tenancy for PostgreSQL. No row-level security to get wrong, no tenant ID columns to forget, no risk of one customer's query accidentally reading another customer's data. Every customer gets their own isolated database. Clean, safe, straightforward.

Until you need to look inside one of them.

The pain of psql at scale

PostgreSQL ships with psql, and for a single database it's excellent. Connect, run queries, check results. But when you're dealing with hundreds of tenant databases, the experience falls apart quickly.

Here's what the workflow actually looks like. A support ticket comes in: "Customer X reports their data isn't syncing." You need to check their database. So you open a terminal:

psql -h prod-cluster.internal -U app_user -d customer_x_db

First you need to remember the database name. Or look it up. Then you need to remember which schema the application tables live in. Is it public? Is it app? You run \dt and get a wall of table names. You run \d sync_jobs to check the table structure. You write a query to check the last sync time. You find what you need, disconnect, and move on.
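Concretely, the per-ticket sequence might look like this once you're connected. The sync_jobs table comes from the example above; the app schema and the finished_at column are assumptions for illustration. The commands are printed rather than executed here, so the sketch runs without a live cluster — paste them into a real psql session to use them:

```shell
#!/usr/bin/env bash
# The manual per-ticket sequence, as you'd type it into psql.
# Schema (app) and column (finished_at) names are illustrative assumptions.
session=$(cat <<'EOF'
\dn                       -- which schema do the app tables live in?
\dt app.*                 -- list the application tables
\d app.sync_jobs          -- check the table structure
SELECT id, status, finished_at
  FROM app.sync_jobs
 ORDER BY finished_at DESC NULLS LAST
 LIMIT 5;                 -- when did the last sync actually finish?
EOF
)
printf '%s\n' "$session"
```

And that's after you've already found the right database name and typed the connection command — per tenant, per ticket.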

Except five minutes later, another ticket comes in for a different customer. So you do the whole thing again. Different database name, same sequence of commands, same mental context-switching.

After the third or fourth time, you start wondering if there's a better way.

Why existing tools don't solve this

There are tools that try to make PostgreSQL more accessible. pgcli adds autocomplete and syntax highlighting to the psql experience — genuinely useful, but fundamentally still a REPL. You're still typing SQL to navigate. You still need to know what you're looking for before you can find it.

GUI tools like pgAdmin, DBeaver, and DataGrip solve the browsing problem well. You can click through databases, expand schemas, see table structures. But they come with trade-offs that matter in production environments: they're heavyweight desktop applications that take time to start and to load metadata for a large cluster, and they can't run over SSH — which is often the only way into the network where the cluster actually lives.

Then there's the scripting approach. Write a shell script that loops through databases, runs a query, and collects results. This works for known, repeatable checks. But it doesn't work for investigation — the kind of exploratory poking around where you don't know exactly what you're looking for until you see it.
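A minimal version of that script might look like the following. The host, user, tenant database names, and the query are all assumptions for illustration, and the psql command is echoed rather than executed so the loop can be exercised without a cluster:

```shell
#!/usr/bin/env bash
# Sketch of the scripted approach: run one known, fixed check against
# every tenant database and collect the results.
# Host, user, database names, and the query are illustrative assumptions.
set -euo pipefail

HOST="prod-cluster.internal"
DB_USER="app_user"
QUERY="SELECT max(finished_at) FROM app.sync_jobs;"

check_all_tenants() {
  local db
  for db in "$@"; do
    # Remove the leading 'echo' to actually run the check.
    echo psql -h "$HOST" -U "$DB_USER" -d "$db" -At -c "$QUERY"
  done
}

check_all_tenants customer_a_db customer_b_db customer_c_db
```

Note what the script encodes: one question, decided in advance. Every new question means editing the query and rerunning the loop — fine for a recurring health check, useless for following a hunch.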

What investigation actually looks like

Let's be honest about what "checking a customer database" really involves. It's rarely a single query. It's a sequence of small decisions:

You connect. You look at the schemas to orient yourself. You browse the tables to find the one that's relevant. You check its columns to understand the structure. You look at a few rows of data to see what's actually in there. You notice something unexpected — a null value where there shouldn't be one, a timestamp that's hours old — and you dig deeper. You check a related table. You run a join. You find the answer.

This is fundamentally a browsing activity, not a querying activity. You're navigating a hierarchy — database, schema, table, data — making decisions at each level about where to go next. The tool should support that workflow, not fight against it.

Learning from Kubernetes

The Kubernetes ecosystem solved an almost identical problem with k9s. Kubernetes clusters can have thousands of resources across dozens of namespaces. Before k9s, the workflow was: kubectl get pods -n some-namespace, find the pod name, kubectl describe pod some-pod -n some-namespace, read the output, kubectl logs some-pod -n some-namespace, and so on. Lots of typing. Lots of copy-pasting resource names. Lots of context-switching between commands.

k9s replaced all of that with a terminal UI. You see your namespaces. You press Enter to drill into one. You see pods. Enter again for details. Esc to go back. Slash to filter. It's fast, it's keyboard-driven, and it works over SSH because it's just a terminal application.

The best interface for navigating a hierarchy is one that lets you see what's at each level, make a choice, and move on — without having to remember or type the name of what you just saw.

PostgreSQL multi-tenant clusters have the same hierarchical structure. Databases at the top, schemas inside them, tables inside those, data at the bottom. The navigation pattern is identical. The tooling gap was obvious.

Browsing instead of querying

This is what we built pgbrowser to solve. Point it at your PostgreSQL cluster and you get an immediate, browsable view of everything inside it.

You see your databases — all 2,000 of them if that's what you have. Press slash to filter. Type the customer name. Three results. Enter on the one you want. Now you see schemas. Enter on the application schema. Now you see tables with estimated row counts. Enter on sync_jobs. Now you're looking at the columns, the indexes, and the first 50 rows of data — without writing a single query.

The whole interaction takes seconds, not minutes. And critically, you never lose context. A breadcrumb trail at the bottom of the screen tells you exactly where you are: Databases > acme_corp > app > Tables. Press Esc to go back a level. Press Esc again to go back further. The navigation is a stack, just like your browser's back button.

When you do need to run a query — because browsing only gets you so far — the query tab is right there. You're already connected to the right database, already in the right context. Type your SQL, hit Ctrl+Enter, see the results.

The multi-tenant workflow

Here's where this really pays off. When you're investigating an issue across multiple customer databases, the workflow becomes:

Check customer A's database. Esc back to the database list. Filter for customer B. Enter. Same schema, same tables, different data. Compare. Esc back out. Filter for customer C. The tool stays connected to the cluster the whole time. You're not reconnecting, not re-authenticating, not retyping connection strings. You're just browsing.

For the common scenarios — checking application config tables, verifying user data was provisioned correctly, confirming data ingestion is working — this turns a five-minute investigation into a thirty-second one. Multiply that by the number of support tickets your team handles in a day and it adds up fast.

The cases that matter most

We've found the tool most useful in a few specific situations:

Incident triage. Something's broken for a customer. You need to see their data right now. Not in thirty seconds when DBeaver finishes loading. Not after you've SSH-tunnelled to the right network. Right now, from the terminal you already have open.

Post-migration verification. You've just run a schema migration across all tenant databases. Did it apply correctly? Browse a few databases, check the table structures, confirm the new columns exist with the right types and defaults. Faster than writing a verification script for something you'll only check once.
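For comparison, the one-off verification script you'd otherwise write might look something like this — the app schema, the retry_count column, and the tenant names are assumptions, and the psql call is echoed so the sketch runs without a cluster:

```shell
#!/usr/bin/env bash
# Post-migration check: confirm a newly added column exists, with the
# right type, in every tenant database.
# Schema, table, column, and database names are illustrative assumptions.
set -euo pipefail

VERIFY_SQL="SELECT column_name, data_type, column_default FROM information_schema.columns WHERE table_schema = 'app' AND table_name = 'sync_jobs' AND column_name = 'retry_count'"

verify_migration() {
  local db
  for db in "$@"; do
    # Remove the leading 'echo' to run for real; empty output for a
    # database would mean the migration did not apply there.
    echo psql -h prod-cluster.internal -U app_user -d "$db" -At -c "$VERIFY_SQL"
  done
}

verify_migration customer_a_db customer_b_db
```

Writing, testing, and then throwing away a script like this for every one-off check is exactly the overhead browsing avoids.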

Customer onboarding debugging. A new customer's database was provisioned but something's not right. Browse in, check the config tables, check whether seed data was inserted, check whether the user accounts were created. All without knowing the exact table names in advance — you can see them and recognise what's relevant.

Data auditing. Periodically spot-checking that customer data looks reasonable. Row counts in the expected range. Timestamps recent enough. No obviously corrupted values. The kind of sanity checking that's too exploratory for a script but too tedious to do by hand in psql.
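The individual queries involved in a spot-check like this are trivial — it's repeating them across many tenants, and eyeballing the results, that's tedious. A sample of the kind of per-database checks meant here, with table and column names as illustrative assumptions (printed rather than executed, so the sketch runs anywhere):

```shell
#!/usr/bin/env bash
# The kind of per-tenant sanity queries a data audit involves.
# Table and column names (app.users, app.records, ingested_at, payload)
# are illustrative assumptions.
checks=$(cat <<'EOF'
-- Row count in the expected range?
SELECT count(*) FROM app.users;
-- Ingestion recent enough?
SELECT max(ingested_at) FROM app.records;
-- Any obviously corrupted values?
SELECT count(*) FROM app.records WHERE payload IS NULL;
EOF
)
printf '%s\n' "$checks"
```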

One binary, no dependencies

pgbrowser is a single binary. There's no Python to install, no Java runtime, no Node.js, no Docker. Download it, point it at a connection string, and go. It runs on macOS, Linux, and Windows. It works over SSH because it's a terminal application — if you can see a command prompt, you can run pgbrowser.

Install it with Homebrew in two commands:

brew tap zagware/tap && brew install pgbrowser

Or download the binary directly from the releases page and drop it on your PATH. Either way, you're browsing databases in under a minute.

The right tool for the job

pgbrowser doesn't replace psql. It doesn't replace your GUI database client. It doesn't replace your monitoring stack or your migration tooling. It fills a specific gap: the gap between "I need to look at something in a database" and "I have the answer." For multi-tenant PostgreSQL environments where that gap is wide — where there are hundreds or thousands of databases and you need to move between them quickly — it closes that gap dramatically.

The best tools are the ones that match the shape of the problem. Multi-tenant PostgreSQL is a hierarchy. Investigating customer issues is a browsing activity. pgbrowser is a hierarchical browser. It fits.

Try pgbrowser

Free to download. Single binary. Point it at any PostgreSQL cluster and start browsing.

Download from GitHub