Building a document OCR system with FastAPI for the first time and torn between plain PostgreSQL (Neon/Railway) vs Supabase. Would love community input.

My situation:
- First FastAPI project (coming from a React/Next.js background)
- Auth: still unsure what to use here
- Use case: OCR system storing document metadata + JSONB OCR results + user corrections
- Storage: AWS S3 (already set up)
- Frontend: React + Vite (already built)
The dilemma: I know Supabase is "just Postgres" under the hood. I'd essentially be paying for/using:
- Managed Postgres with a good dashboard
- Connection pooling
- Storage (S3)
My questions: For heavy JSONB usage (OCR results with varying schemas per document type), is there any advantage to Supabase vs plain Postgres? I know Postgres has great JSONB support natively.
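For context on what "great JSONB support natively" buys you: in Postgres you'd query a JSONB column with operators like `->>` (extract a field) and `@>` (containment), and a GIN index keeps those fast even when every document type has a different shape. The sketch below shows the Postgres SQL in comments, then demonstrates the same "schemaless JSON column, query by key" pattern runnably using the stdlib's `sqlite3` JSON functions; the `documents` table and field names are hypothetical.

```python
import json
import sqlite3

# The equivalent Postgres JSONB queries (hypothetical "documents" table):
#
#   SELECT id, ocr_result ->> 'vendor' AS vendor
#   FROM documents
#   WHERE ocr_result @> '{"doc_type": "invoice"}';
#
#   CREATE INDEX idx_documents_ocr ON documents USING GIN (ocr_result);
#
# To keep this sketch runnable without a Postgres server, the same pattern
# is shown with SQLite's built-in JSON functions.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, ocr_result TEXT)")

# Two documents with different schemas, as OCR results often vary per doc type.
docs = [
    {"doc_type": "invoice", "vendor": "Acme", "total": 123.45},
    {"doc_type": "receipt", "merchant": "Corner Shop", "items": ["milk"]},
]
conn.executemany(
    "INSERT INTO documents (ocr_result) VALUES (?)",
    [(json.dumps(d),) for d in docs],
)

# Pull the vendor out of invoice-type rows only (analogous to ->> in Postgres).
rows = conn.execute(
    """
    SELECT json_extract(ocr_result, '$.vendor')
    FROM documents
    WHERE json_extract(ocr_result, '$.doc_type') = 'invoice'
    """
).fetchall()
print(rows)  # [('Acme',)]
```

Either provider gives you all of this, since both are real Postgres; the question is only who manages it.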
Connection pooling: With FastAPI's async SQLAlchemy + asyncpg, is Supabase's pooler better than what I'd get with Neon or a self-hosted PgBouncer?
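One thing that matters more than which pooler you pick: the pooling *mode*. asyncpg caches prepared statements per connection, which breaks under transaction-mode pooling (PgBouncer or Supavisor alike) unless you disable that cache client-side. A minimal self-hosted PgBouncer sketch, with hypothetical host/database names:

```ini
; pgbouncer.ini — minimal sketch, hypothetical names and paths
[databases]
ocrdb = host=127.0.0.1 port=5432 dbname=ocrdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling reuses connections best for short web requests,
; but asyncpg's prepared-statement cache must then be disabled client-side
pool_mode = transaction
default_pool_size = 20
max_client_conn = 200
```

On the SQLAlchemy side, the usual companion to transaction-mode pooling is passing `connect_args={"statement_cache_size": 0}` to `create_async_engine` so asyncpg stops caching prepared statements across pooled transactions.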
Learning curve: Since this is my first FastAPI project, would starting with plain PostgreSQL + SQLAlchemy teach me "proper" patterns better than using Supabase's client libraries? I want to learn skills that transfer to any Python backend job.
Real talk: for a solo dev learning FastAPI, are the Supabase dashboard/realtime features worth it if I'm already comfortable with SQL? What would you choose in my position: plain Postgres (Neon/Railway) or Supabase?
Between Supabase and plain Postgres, I don't think there's much difference when it comes to JSONB usage, as you can use that column type in both. You could make a case that Supabase makes things a little smoother by having extensions like pg_jsonschema built in for schema validation.

On connection pooling: Supabase has Supavisor, which is known for being pretty scalable and more powerful than PgBouncer in some ways. Here's a link with a bit more on that: https://supabase.com/features/supavisor

If you're already comfortable with SQL, then I assume just using Python would help transfer skills to your job? Using Supabase doesn't negate needing Postgres knowledge; a good understanding of it helps in many situations, such as writing RLS policies or RPC functions.

At the end of the day, like you said, all of these tools are Postgres under the hood and you'll have to use Python no matter what, so I'd just pick whichever looks simpler for you. If you really want to practise more SQL, then plain Postgres would probably help most. I chose Supabase over plain Postgres for ease of use, and I haven't used Neon so can't offer much of an opinion there.
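To make the pg_jsonschema point concrete: validation there is just a CHECK constraint on the JSONB column. A sketch, assuming a hypothetical `documents` table with an `ocr_result` JSONB column, and assuming the extension is available (Supabase ships it pre-installed; on plain Postgres you'd install it yourself):

```sql
-- enable the extension (pre-installed on Supabase, a separate install elsewhere)
CREATE EXTENSION IF NOT EXISTS pg_jsonschema;

-- reject OCR rows whose JSONB payload lacks a doc_type string
ALTER TABLE documents
  ADD CONSTRAINT ocr_result_shape CHECK (
    jsonb_matches_schema(
      '{
        "type": "object",
        "properties": { "doc_type": { "type": "string" } },
        "required": ["doc_type"]
      }',
      ocr_result
    )
  );
```

This lets each document type keep its own shape while still enforcing the few fields every row must have.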