Connect to external LLMs, generate embeddings, and process massive datasets with a SQL-first experience. Your data stays in your VPC.
-- pgInfer: Summarize 1k rows in one query
SELECT ai_live.summarize(content)
FROM production_logs
WHERE created_at > now() - interval '1 hour';
Choose the mode that fits your needs
The Speed King
Instant access to OpenAI and Anthropic via SQL. No Python "glue" code needed. Real-time results.
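A live-mode call might look like the sketch below. The `ai_live.complete` function name and its `model` parameter are illustrative assumptions, not a confirmed API; only `ai_live.summarize` appears in the snippet above.

```sql
-- Hypothetical live-mode call; function name and arguments are illustrative.
SELECT ticket_id,
       ai_live.complete(
         'Classify the sentiment of this support ticket: ' || body,
         model => 'gpt-4o-mini'
       ) AS sentiment
FROM support_tickets
WHERE status = 'open';
```

The point is the shape: one SELECT, no Python client, no ETL hop out of the database.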
The Budget Optimizer
Save 50% on token costs using the remote Batch API. Perfect for massive historical data enrichment.
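Batch mode would follow an enqueue-then-collect pattern rather than returning results inline. The `ai_batch.submit` and `ai_batch.results` names below are assumptions for illustration only:

```sql
-- Hypothetical batch-mode usage; all names are illustrative.
-- Enqueue a batch job over historical rows at reduced token cost:
SELECT ai_batch.submit(
  'summarize',
  (SELECT array_agg(content)
   FROM production_logs
   WHERE created_at < now() - interval '30 days')
) AS job_id;

-- Later, once the remote Batch API completes, collect the output:
SELECT * FROM ai_batch.results(:job_id);
```

Because the Batch API is asynchronous, this trades latency for cost, which is why it suits historical enrichment rather than real-time queries.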
Enterprise-grade security and compliance
With Native mode, inference happens on your hardware. No data ever leaves your VPC.
We never store your data or use your queries for training. We process tokens; you keep the intelligence.
Track usage, costs, and intent using the SQL tools you already own.
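For example, if the extension exposes a usage log (the `ai_usage_log` table and its columns here are assumed for illustration), cost tracking is an ordinary aggregate query:

```sql
-- Assumed usage view; table and column names are illustrative.
SELECT date_trunc('day', called_at) AS day,
       model,
       sum(prompt_tokens + completion_tokens) AS tokens,
       sum(cost_usd) AS spend
FROM ai_usage_log
GROUP BY 1, 2
ORDER BY day DESC;
```

No separate billing dashboard: the same GROUP BY, roles, and monitoring you already run against Postgres apply to AI spend.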
We spent years building fragile AI pipelines that broke every time a schema changed. We got tired of "glue code" and the security risks of moving production data to the cloud. V1 turns PostgreSQL into a native AI engine.
Choose the plan that fits your scale
Perfect for testing and development
For production workloads
For enterprise-scale deployments