Traditional / DevOps Guide

Managed PostgreSQL and MySQL with standard connection strings. Any framework, any language.


Managed databases, simplified

Polykomos provisions real PostgreSQL and MySQL databases — not serverless emulations, not compatibility layers. You get a standard connection string that works with any framework, ORM, or language that speaks either protocol.

No server management, no capacity planning, no patching. Create a database in seconds, connect with the tools you already use, and let Polykomos handle the infrastructure.

  • Instant provisioning — Databases are ready in under a second thanks to a pre-provisioned pool.
  • PostgreSQL and MySQL — Choose the engine that fits your project. Full SQL support, extensions, transactions, triggers — everything you'd expect.
  • Any framework — Laravel, Django, Rails, Express, Spring, Phoenix — if it supports PostgreSQL or MySQL, it works.
  • Standard connection strings — postgresql:// or mysql:// URIs. No proprietary SDKs or custom drivers.

How provisioning works

Behind the scenes, Polykomos maintains a pool of pre-provisioned databases for both PostgreSQL and MySQL. When you create a database — whether from the dashboard or the API — you're assigned one from the pool instantly. No cold starts, no waiting for servers to spin up.

1. You request a database — one API call or one click in the dashboard.
2. Polykomos assigns a pre-provisioned instance from the pool, creates your credentials, and sets permissions.
3. You get a connection string — postgresql:// or mysql:// depending on the engine you chose — and you're live.

The pool replenishes automatically in the background, so there are always databases ready to go.
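Step 3 hands back a JSON payload. A minimal sketch of extracting the connection string from it, assuming the `data.connection_string` response shape used in the CI example later in this guide (the sample values here are placeholders, not real credentials):

```python
import json

def extract_connection_string(response_body: str) -> str:
    """Pull the connection string out of a create-database response.

    Assumes the response shape {"data": {"connection_string": "..."}}
    shown in the CI example in this guide.
    """
    payload = json.loads(response_body)
    return payload["data"]["connection_string"]

# A sample response of the assumed shape:
sample = '{"data": {"connection_string": "postgresql://u:p@host.polykomos.com:5432/db"}}'
print(extract_connection_string(sample))
```

The same parse works for either engine; only the URI scheme (`postgresql://` vs `mysql://`) differs.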


CI/CD pipeline databases

Every test run deserves its own database. No shared state between runs, no flaky tests caused by leftover data, no cleanup scripts. Provision a fresh database at the start of your pipeline and tear it down when you're done.

Example
GitHub Actions integration test workflow
Your CI workflow provisions a Polykomos database, runs migrations against it, executes your test suite, then deletes the database. Each run is completely isolated. No Docker Compose, no test containers, no local database setup.
# In your GitHub Actions workflow:
- name: Create test database
  run: |
    DB_RESPONSE=$(curl -sS -X POST "https://polykomos.com/api/v1/databases?wait=true" \
      -H "Authorization: Bearer ${{ secrets.POLYKOMOS_API_KEY }}" \
      -H "Idempotency-Key: ${{ github.run_id }}" \
      -H "Content-Type: application/json" \
      -d '{"name": "ci-run-${{ github.run_number }}"}')
    echo "DATABASE_URL=$(echo $DB_RESPONSE | jq -r '.data.connection_string')" >> $GITHUB_ENV

- name: Run migrations
  run: npx prisma migrate deploy

- name: Run tests
  run: npm test

Website and app databases

Connect your web application to a Polykomos database using your framework's standard database configuration. No special drivers or adapters — just a connection string for PostgreSQL or MySQL.

# .env — PostgreSQL
DB_CONNECTION=pgsql
DB_HOST=your-host.polykomos.com
DB_PORT=5432
DB_DATABASE=your_database_name
DB_USERNAME=your_username
DB_PASSWORD=your_password

# .env — MySQL
DB_CONNECTION=mysql
DB_HOST=your-host.polykomos.com
DB_PORT=3306
DB_DATABASE=your_database_name
DB_USERNAME=your_username
DB_PASSWORD=your_password
# settings.py — PostgreSQL
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "your-host.polykomos.com",
        "PORT": "5432",
        "NAME": "your_database_name",
        "USER": "your_username",
        "PASSWORD": "your_password",
    }
}

# settings.py — MySQL
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "HOST": "your-host.polykomos.com",
        "PORT": "3306",
        "NAME": "your_database_name",
        "USER": "your_username",
        "PASSWORD": "your_password",
    }
}
# config/database.yml — PostgreSQL
production:
  adapter: postgresql
  host: your-host.polykomos.com
  port: 5432
  database: your_database_name
  username: your_username
  password: your_password

# config/database.yml — MySQL
production:
  adapter: mysql2
  host: your-host.polykomos.com
  port: 3306
  database: your_database_name
  username: your_username
  password: your_password
// PostgreSQL
// .env
DATABASE_URL="postgresql://your_username:your_password@your-host.polykomos.com:5432/your_database_name"

// schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

// MySQL
// .env
DATABASE_URL="mysql://your_username:your_password@your-host.polykomos.com:3306/your_database_name"

// schema.prisma
datasource db {
  provider = "mysql"
  url      = env("DATABASE_URL")
}

Your framework's migrations, seeders, and ORM work exactly as they do with any other PostgreSQL or MySQL host.


Serverless functions

Serverless functions on AWS Lambda, Vercel, or Cloudflare Workers can connect to Polykomos databases using a standard DATABASE_URL environment variable. No VPC configuration, no connection pooler setup.

Example
Vercel serverless function
Set DATABASE_URL in your Vercel project settings, and your serverless functions connect directly. Works with Prisma, Drizzle, Knex, pg, mysql2 — whatever you prefer.
// api/users.ts — Vercel serverless function
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 1, // one connection per function instance
});

export default async function handler(req, res) {
  const { rows } = await pool.query("SELECT id, name, email FROM users LIMIT 50");
  res.json({ users: rows });
}

For high-concurrency serverless workloads, Polykomos handles connection management so individual functions don't exhaust the connection pool.
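On the function side, the usual complement to this is to cache the pool at module level so warm instances reuse their connection instead of reconnecting on every request. A generic sketch of that pattern, not a Polykomos-specific API (`factory` stands in for whatever builds your pool, e.g. a psycopg2 or mysql.connector pool):

```python
_pool = None  # lives for the lifetime of a warm function instance

def get_pool(factory):
    """Return a process-wide connection pool, creating it on first use.

    `factory` is any zero-argument callable that builds the pool, for
    example: lambda: psycopg2.pool.SimpleConnectionPool(
        1, 1, dsn=os.environ["DATABASE_URL"]).
    Module-level caching means repeated invocations on a warm instance
    share one pool rather than opening a new connection each time.
    """
    global _pool
    if _pool is None:
        _pool = factory()
    return _pool
```

Combined with `max: 1` (as in the Vercel example above), each function instance holds exactly one connection.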


Development environments

Give every developer on your team their own database instance. No shared dev databases, no conflicting migrations, no "who dropped the users table?" incidents.

Workflow
One database per developer
Each developer creates their own Polykomos database from the dashboard or CLI. They run migrations against it independently, experiment freely, and reset when needed. When they push to CI, a fresh database is provisioned for testing — completely separate from their dev instance.
# Quick setup for a new team member
$ curl -sS -X POST "https://polykomos.com/api/v1/databases?wait=true" \
    -H "Authorization: Bearer $POLYKOMOS_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"name": "dev-sarah"}'

# Copy the connection string to your .env
# (PostgreSQL or MySQL — depends on what you chose)
DATABASE_URL=postgresql://dev_sarah_user:...@host.polykomos.com:5432/dev_sarah_db

Staging and preview environments

Preview deployments (Vercel, Netlify, Render) are great for reviewing frontend changes, but they usually share a staging database — which means conflicting data and broken previews.

With Polykomos, each preview deployment gets its own database. Provision it in your deploy hook, seed it with test data, and delete it when the PR is merged.

Pattern
Ephemeral databases for preview deploys
Your deploy hook calls the Polykomos API to create a database named after the PR number: preview-pr-142. Migrations run automatically. The preview app connects to its own isolated database. When the PR is closed, a cleanup webhook deletes the database.

This gives reviewers confidence that what they see in the preview is exactly what will happen in production — because the database state is clean and predictable.
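The naming convention above is simple enough to sketch. The helper names below are hypothetical, and the request body shape is the one used by the CI example earlier in this guide:

```python
def preview_db_name(pr_number: int) -> str:
    """Database name for a preview deploy, following the
    preview-pr-<number> convention described above."""
    return f"preview-pr-{pr_number}"

def create_payload(pr_number: int) -> dict:
    """Request body for POST /api/v1/databases, matching the
    {"name": ...} shape shown in the CI example."""
    return {"name": preview_db_name(pr_number)}

print(preview_db_name(142))  # preview-pr-142
```

Your cleanup webhook can derive the same name from the PR number, so create and delete stay in sync without storing extra state.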


Getting started

Four steps to a working database:

1. Sign up for a Polykomos account. The free tier includes one database.
2. Create a database from the dashboard or via the API.
3. Copy the connection string from the database detail page.
4. Connect your app — paste the connection string into your framework's config or environment variables.

Connection examples

Standard connections in every major language. Examples below show both PostgreSQL and MySQL:

# PostgreSQL
$ psql "postgresql://your_user:your_pass@your-host.polykomos.com:5432/your_db"

# MySQL
$ mysql -h your-host.polykomos.com -P 3306 -u your_user -p your_db
# PostgreSQL
import psycopg2

conn = psycopg2.connect("postgresql://your_user:your_pass@your-host.polykomos.com:5432/your_db")
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())  # (1,)
conn.close()

# MySQL
import mysql.connector

conn = mysql.connector.connect(
    host="your-host.polykomos.com",
    port=3306,
    user="your_user",
    password="your_pass",
    database="your_db",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())  # (1,)
conn.close()
// PostgreSQL
import pg from "pg";

const pool = new pg.Pool({
  connectionString: "postgresql://your_user:your_pass@your-host.polykomos.com:5432/your_db",
});
const { rows } = await pool.query("SELECT NOW() as current_time");
console.log(rows[0].current_time);

// MySQL
import mysql from "mysql2/promise";

const conn = await mysql.createConnection("mysql://your_user:your_pass@your-host.polykomos.com:3306/your_db");
const [rows] = await conn.execute("SELECT NOW() as current_time");
console.log(rows[0].current_time);
// PostgreSQL
$dsn = "pgsql:host=your-host.polykomos.com;port=5432;dbname=your_db";
$pdo = new PDO($dsn, "your_user", "your_pass");
$stmt = $pdo->query("SELECT NOW()");
echo $stmt->fetchColumn();

// MySQL
$dsn = "mysql:host=your-host.polykomos.com;port=3306;dbname=your_db";
$pdo = new PDO($dsn, "your_user", "your_pass");
$stmt = $pdo->query("SELECT NOW()");
echo $stmt->fetchColumn();
# PostgreSQL
require "pg"

conn = PG.connect("postgresql://your_user:your_pass@your-host.polykomos.com:5432/your_db")
result = conn.exec("SELECT NOW()")
puts result[0]["now"]
conn.close

# MySQL
require "mysql2"

client = Mysql2::Client.new(
  host: "your-host.polykomos.com",
  port: 3306,
  username: "your_user",
  password: "your_pass",
  database: "your_db"
)
result = client.query("SELECT NOW() as now")
puts result.first["now"]
client.close
// PostgreSQL
import (
	"database/sql"

	_ "github.com/lib/pq"
)

db, err := sql.Open("postgres", "postgresql://your_user:your_pass@your-host.polykomos.com:5432/your_db")
// check err before using db

// MySQL
import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"
)

db, err := sql.Open("mysql", "your_user:your_pass@tcp(your-host.polykomos.com:3306)/your_db")
// check err before using db

Schema blueprints

Start with a proven schema instead of writing migrations from scratch. Blueprints are pre-built database schemas for common application types:

  • SaaS Starter — Users, organizations, memberships, billing records, API keys. Multi-tenant from day one.
  • E-Commerce — Products, categories, orders, line items, inventory, customer profiles.
  • CMS / Blog — Posts, pages, categories, tags, media, comments, authors.
  • Analytics — Events, sessions, page views, user properties, funnels.

Browse available blueprints in the blueprint catalog, or apply them programmatically via the Blueprints API.
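Applying a blueprint programmatically might look like the sketch below. The endpoint path, the `database_id` value, and the blueprint slug are assumptions for illustration only; check the Blueprints API reference for the real request shape.

```python
def apply_blueprint_request(database_id: str, blueprint: str) -> dict:
    """Build a hypothetical 'apply blueprint' API request.

    The URL path and body fields here are illustrative guesses,
    not the documented Blueprints API.
    """
    return {
        "method": "POST",
        "url": f"https://polykomos.com/api/v1/databases/{database_id}/blueprint",
        "json": {"blueprint": blueprint},
    }

# "db_123" and "saas-starter" are placeholder values:
req = apply_blueprint_request("db_123", "saas-starter")
print(req["url"])
```

From there, the request would be sent with your API key in the Authorization header, exactly as in the provisioning examples above.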