we're going to build a server that lets users make "posts"; we'll add persistence, authentication, and authorization to it as we go
first we need to set up our package.json and install some dependencies
```sh
npm init -y
npm i postgres zod express express-session morgan npm-run-all
export PORT=3000
```
```json
{
  "scripts": {
    "dev": "run-p dev:server",
    "dev:server": "node --watch --watch-preserve-output server/index.mjs"
  }
}
```
first import and initialize express
add our logging middleware first
in order to parse the body of an HTTP request we need to add some body-parsing middleware
and also configure the express-session middleware
we can add some routes that update an in-memory list of "posts"
this usually isn't a great idea but we'll fix it later
and finally, store the user name in a session
```js
import express from 'express'

const { PORT } = process.env
const app = express()

app.listen(PORT, () =>
  console.log(`server running at http://localhost:${PORT}`),
)
```
we can test our server by running `npm run dev:server` and sending it some requests, for example with curl:
```sh
# send a post with a name and store the cookie
curl -v -c sess.cookie localhost:$PORT/posts -d 'name=you' -d 'body=hello world!'
# use the cookie to retrieve the name
curl -v -b sess.cookie localhost:$PORT/currentUser
```
how do cookie sessions work?
- we put some randomly generated, very-hard-to-guess gibberish (sometimes called an opaque token) in a [signed] cookie for the user
- we put that token, along with some data (e.g. their user id) and an expiration, into a table
- on every request we look at the token in the cookie [and check its signature], see that it exists in the db and isn't expired, and return the data (e.g. the user id)
- passing around that random, hard-to-guess token makes it very hard to impersonate another user, and storing the data (user_id) and expiration in the db means you can revoke the session whenever you want, since you own the db
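the flow above can be sketched in a few lines of plain Node; this is a minimal in-memory version for illustration (`createSession` and `getSession` are made-up names, and a real deployment would use a database table and a signed cookie rather than a Map):

```javascript
import { randomBytes } from 'node:crypto'

// in-memory stand-in for the sessions table; a real app uses a database
// so sessions survive restarts and can be revoked centrally
const sessions = new Map()
const TWO_WEEKS = 1000 * 60 * 60 * 24 * 14

// step 1: generate a hard-to-guess opaque token and store the user's
// data alongside an expiration
function createSession(userId) {
  const token = randomBytes(32).toString('base64url')
  sessions.set(token, { userId, expires: Date.now() + TWO_WEEKS })
  return token // this is what goes in the (signed) cookie
}

// steps 2-3: on every request, look the token up and check the expiry
function getSession(token) {
  const session = sessions.get(token)
  if (!session) return null
  if (session.expires < Date.now()) {
    sessions.delete(token)
    return null
  }
  return session
}

// revocation is just deleting the row, since you own the store
function destroySession(token) {
  sessions.delete(token)
}
```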
the code above uses in-memory data structures to store posts and sessions. that may work fine during development, but it isn't suitable for a production deployment: there's no persistence and no eviction strategy, so memory usage grows linearly with new sessions and posts until the process exhausts available memory and crashes, with no way to recover the data afterwards
To persist our data we'll need a database, and what better choice is there than Postgres, the world's most advanced open source database?
There are many ways to run Postgres; one of the easiest ways to manage it for development and production is Docker. I've provided a docker-compose.yml file that will run a Postgres 16 server and expose it on port 5432, and a script that will create a database and user for our application.
```sh
npm i -D inquirer @dotenvx/dotenvx
```
```yaml
version: "3.8"
services:
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=${ROOT_DATABASE_USER}
      - POSTGRES_PASSWORD=${ROOT_DATABASE_PASSWORD}
    command: "postgres -c shared_preload_libraries=pg_stat_statements -c pg_stat_statements.track=all -c log_destination=stderr"
    volumes:
      - /var/run/postgresql:/var/run/postgresql
    ports:
      - 5432:5432
```
```json
{
  "scripts": {
    "setup": "run-s db:up db:setup",
    "db:up": "docker-compose up -d db",
    "db:setup": "node -r dotenv/config scripts/db-setup.mjs",
    "dev:server": "node --watch --watch-preserve-output server/index.mjs",
    "predev": "run-s -s db:up",
    "dev": "run-p -s dev:*"
  }
}
```
now that we have a running Postgres cluster, we need to create some tables
Postgres lets you model data as rows and columns, and rows and columns are grouped into tables. tables are grouped into schemas; by default the 'public' schema is used. a database contains many schemas, and the instance of Postgres running on your system that contains all the databases is called a cluster.
```sql
drop schema if exists app_public cascade;
drop schema if exists app_private cascade;

create extension if not exists citext with schema public;

create schema app_public;
create schema app_private;
```
```sql
create table app_private.sessions (
  sid text primary key,
  user_id uuid not null,
  expires timestamptz not null default now() + interval '14 days'
);
```
```sql
create table app_public.users (
  user_id uuid primary key default gen_random_uuid(),
  username citext not null,
  bio text,
  password_hash text not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);

alter table app_private.sessions
  add constraint sessions_user_id_fkey foreign key (user_id) references app_public.users;

create unique index users_username_idx on app_public.users (username);
```
```sql
create table app_public.posts (
  id uuid primary key default gen_random_uuid(),
  name text not null,
  body text not null,
  created_at timestamptz not null default now(),
  updated_at timestamptz not null default now()
);
```
now that we have a posts table we can update our application code and store our posts in it
```js
import express from 'express'
import session from 'express-session'
import morgan from 'morgan'
import postgres from 'postgres'

const { DATABASE_URL, NODE_ENV, PORT } = process.env
const sql = postgres(DATABASE_URL)

const app = express()
app.use(morgan(NODE_ENV === 'production' ? 'combined' : 'dev'))
app.use(express.json({ strict: true }))
app.use(express.urlencoded({ extended: false }))
app.use(
  session({
    rolling: true,
    saveUninitialized: false,
    resave: false,
    secret: process.env.SECRET,
    cookie: {
      maxAge: 1000 * 60 * 60 * 24 * 14,
      httpOnly: true,
      sameSite: 'lax',
      secure: 'auto',
    },
  }),
)

app.get('/posts', async (req, res) => {
  res.json(await sql`select * from app_public.posts order by created_at desc`)
})

app.post('/posts', async (req, res) => {
  const { name, body } = req.body
  req.session.user = { name }
  await sql`insert into app_public.posts (name, body) values (${name}, ${body})`
  res.json(await sql`select * from app_public.posts order by created_at desc`)
})

app.get('/currentUser', (req, res) => {
  res.json(req.session.user)
})

app.listen(PORT, () =>
  console.log(`server running at http://localhost:${PORT}`),
)
```
the Postgres adapter we're using provides a tagged template to run our SQL queries. note that tagged template functions are called without parentheses: the tag function receives the literal string fragments and the interpolated values separately, which lets the adapter send the query text and the user input to the database separately as a parameterized query and avoid SQL injection.
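to make that concrete, here's a toy tag function that mimics the idea; this is not the real adapter's internals, just a sketch of how a tag can turn interpolations into numbered parameters instead of splicing them into the string:

```javascript
// toy "sql" tag: builds parameterized query text and keeps the user
// input separate, which is what defeats SQL injection
function sqlTag(strings, ...values) {
  // strings: the literal fragments; values: the interpolated user input
  const text = strings.reduce(
    (query, fragment, i) => query + (i > 0 ? `$${i}` : '') + fragment,
    '',
  )
  return { text, parameters: values }
}

const name = "Robert'); drop table posts;--"
const query = sqlTag`select * from posts where name = ${name}`

console.log(query.text)       // select * from posts where name = $1
console.log(query.parameters) // the raw value, sent out-of-band as a parameter
```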
persist our sessions in postgres
our store needs a method to set sessions
and a method to retrieve sessions
a method to update the session expiration
and a method to delete sessions
```js
import express from 'express'
import session from 'express-session'
import morgan from 'morgan'
import postgres from 'postgres'

const { DATABASE_URL, NODE_ENV, PORT } = process.env
const sql = postgres(DATABASE_URL)

const app = express()
app.use(morgan(NODE_ENV === 'production' ? 'combined' : 'dev'))
app.use(express.json({ strict: true }))
app.use(express.urlencoded({ extended: false }))

class PgSessionStore extends session.Store {}

app.use(
  session({
    rolling: true,
    saveUninitialized: false,
    resave: false,
    secret: process.env.SECRET,
    cookie: {
      maxAge: 1000 * 60 * 60 * 24 * 14,
      httpOnly: true,
      sameSite: 'lax',
      secure: 'auto',
    },
    store: new PgSessionStore(),
  }),
)

app.get('/posts', async (req, res) => {
  res.json(await sql`select * from app_public.posts order by created_at desc`)
})

app.post('/posts', async (req, res) => {
  const { name, body } = req.body
  req.session.user = { name }
  await sql`insert into app_public.posts (name, body) values (${name}, ${body})`
  res.json(await sql`select * from app_public.posts order by created_at desc`)
})

app.get('/currentUser', (req, res) => {
  res.json(req.session.user)
})

app.listen(PORT, () =>
  console.log(`server running at http://localhost:${PORT}`),
)
```
typically most apps would use an existing library like connect-pg-simple, but as of writing one did not exist for the `postgres` adapter we are using, and I also wanted to explicitly show what a `store` does under the hood
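the four methods express-session expects on a `Store` (set, get, touch, destroy) can be sketched against an in-memory Map so the shape is clear; `MapSessionStore` is an illustrative stand-in, and a real `PgSessionStore` would run the equivalent `sql` queries against `app_private.sessions` in each method:

```javascript
// interface sketch of an express-session store, backed by a Map so it
// runs standalone; every method takes a node-style callback
class MapSessionStore {
  constructor() {
    this.sessions = new Map()
  }

  // set(sid, session, cb): persist the session data under this sid
  set(sid, session, callback) {
    this.sessions.set(sid, {
      data: session,
      expires: Date.now() + session.cookie.maxAge,
    })
    callback(null)
  }

  // get(sid, cb): return the session data, or null if missing/expired
  get(sid, callback) {
    const row = this.sessions.get(sid)
    if (!row || row.expires < Date.now()) return callback(null, null)
    callback(null, row.data)
  }

  // touch(sid, session, cb): push the expiration forward
  // (used on every request when rolling: true)
  touch(sid, session, callback) {
    const row = this.sessions.get(sid)
    if (row) row.expires = Date.now() + session.cookie.maxAge
    callback(null)
  }

  // destroy(sid, cb): delete the session, e.g. on logout
  destroy(sid, callback) {
    this.sessions.delete(sid)
    callback(null)
  }
}
```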
this is suddenly a lot of different code in one file, so let's break it up into some smaller files
```js
import express from 'express'
import session from 'express-session'
import morgan from 'morgan'
import { PgSessionStore } from './PgSessionStore.mjs'
import posts from './posts.mjs'

const { NODE_ENV, PORT } = process.env
const app = express()

app.use(morgan(NODE_ENV === 'production' ? 'combined' : 'dev'))
app.use(express.json({ strict: true }))
app.use(express.urlencoded({ extended: false }))
app.use(
  session({
    rolling: true,
    saveUninitialized: false,
    resave: false,
    secret: process.env.SECRET,
    cookie: {
      maxAge: 1000 * 60 * 60 * 24 * 14,
      httpOnly: true,
      sameSite: 'lax',
      secure: 'auto',
    },
    store: new PgSessionStore(),
  }),
)
app.use(posts)

app.listen(PORT, () =>
  console.log(`server running at http://localhost:${PORT}`),
)
```
```sh
npm i argon2
```
create router and add some simple validations
add the /register route
add some basic field validations
some error handling on conflicting usernames
now we can start on the login route
and again with some basic validations
plz use zod
the last piece is the logout route
don't forget to add the new auth router to the express instance
```js
import { Router } from 'express'
import argon from 'argon2'
import { sql } from './db.mjs'

const argonOpts = {
  type: argon.argon2id,
  hashLength: 40,
}

const validate = {
  username(value) {
    if (typeof value !== 'string') return 'username must be a string'
    // TODO: check for forbidden words
  },
  password(value) {
    if (typeof value !== 'string') return 'password must be a string'
    if (value.length < 8) return 'password must be at least 8 characters'
    // TODO: check against zxcvbn
  },
}

const router = Router()

router.get('/currentUser', (req, res) => {
  res.json({ data: { currentUser: req.session.user || null } })
})

export default router
```
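the /register and /login routes hinge on hashing the password once and verifying attempts against the stored hash. the article uses argon2id for this; the same hash-then-verify pattern can be illustrated with Node's built-in scrypt so the sketch runs without extra dependencies (`hashPassword`/`verifyPassword` are illustrative names, not part of any library):

```javascript
import { randomBytes, scryptSync, timingSafeEqual } from 'node:crypto'

// hash a password with a fresh random salt; the salt is stored next to
// the hash so verification can reproduce the same derivation
function hashPassword(password) {
  const salt = randomBytes(16).toString('hex')
  const hash = scryptSync(password, salt, 40).toString('hex')
  return `${salt}:${hash}`
}

// verify a login attempt against the stored salt:hash string using a
// constant-time comparison
function verifyPassword(stored, attempt) {
  const [salt, hash] = stored.split(':')
  const attemptHash = scryptSync(attempt, salt, 40)
  return timingSafeEqual(Buffer.from(hash, 'hex'), attemptHash)
}

const stored = hashPassword('correct horse battery staple')
console.log(verifyPassword(stored, 'correct horse battery staple')) // true
console.log(verifyPassword(stored, 'wrong password'))               // false
```

with argon2 the shape is the same, except the salt lives inside the encoded hash string and hashing/verifying are async.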
so far a user can register, log in, log out, and make posts, but there are no restrictions on a user's name, so anyone can post as any user.
in order to leverage Postgres Row-Level Security we need a visitor role with limited permissions, and an authenticator role that we can log in as
```js
// @ts-check
import postgres from 'postgres'
import inquirer from 'inquirer'

const {
  DATABASE_OWNER,
  DATABASE_OWNER_PASSWORD,
  DATABASE_NAME,
  ROOT_DATABASE_URL,
  DATABASE_VISITOR,
  DATABASE_AUTHENTICATOR,
  DATABASE_AUTHENTICATOR_PASSWORD,
} = process.env

const RECONNECT_BASE_DELAY = 100
const RECONNECT_MAX_DELAY = 30000

/** @param {number | undefined} ms */
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms))
}

if (!ROOT_DATABASE_URL) {
  console.error('ROOT_DATABASE_URL is not set')
  process.exit(1)
}

const sql = postgres(ROOT_DATABASE_URL)

async function main() {
  let attempts = 0
  // eslint-disable-next-line no-constant-condition
  while (true) {
    try {
      await sql`select true as "Connection test"`
      break
    } catch (e) {
      if (e.code === '28P01') throw e
      attempts++
      if (attempts <= 30) {
        console.log(`Database is not ready yet (attempt ${attempts}): ${e.message}`)
      } else {
        console.log(`Database never came up, aborting :(`)
        process.exit(1)
      }
      const delay = Math.floor(
        Math.min(
          RECONNECT_MAX_DELAY,
          RECONNECT_BASE_DELAY * Math.random() * 2 ** attempts,
        ),
      )
      await sleep(delay)
    }
  }

  console.log(`DROP DATABASE ${DATABASE_NAME}`)
  console.log(`DROP ROLE ${DATABASE_OWNER}`)
  const { confirm } = process.env.NOCONFIRM
    ? { confirm: true }
    : await inquirer.prompt([
        {
          name: 'confirm',
          message: 'press y to continue:',
          type: 'confirm',
          prefix: '',
        },
      ])
  if (!confirm) process.exit()

  try {
    await sql`drop database if exists ${sql.unsafe(DATABASE_NAME)}`
    await sql`drop role if exists ${sql.unsafe(DATABASE_OWNER)}`
    await sql`create database ${sql.unsafe(DATABASE_NAME)}`
    console.log(`CREATE DATABASE ${DATABASE_NAME}`)
    await sql.unsafe(`create role ${DATABASE_OWNER} with login
      password '${DATABASE_OWNER_PASSWORD}' noinherit`)
    console.log(`CREATE ROLE ${DATABASE_OWNER}`)
    await sql.unsafe(`grant all privileges on database ${DATABASE_NAME}
      to ${DATABASE_OWNER}`)
    console.log(`GRANT ${DATABASE_OWNER}`)
    await sql.unsafe(`create role ${DATABASE_AUTHENTICATOR} with login
      password '${DATABASE_AUTHENTICATOR_PASSWORD}' noinherit`)
    console.log(`CREATE ROLE ${DATABASE_AUTHENTICATOR}`)
    await sql.unsafe(`create role ${DATABASE_VISITOR}`)
    console.log(`CREATE ROLE ${DATABASE_VISITOR}`)
    await sql.unsafe(`grant ${DATABASE_VISITOR} to ${DATABASE_AUTHENTICATOR}`)
    console.log(`GRANT ${DATABASE_VISITOR} TO ${DATABASE_AUTHENTICATOR}`)
  } catch (e) {
    console.error(e)
  } finally {
    await sql.end()
  }
}

main()
```
the rest of this article is still missing explanations and context, but the code is complete and should work as expected
check back later for more details!
```sql
drop schema if exists app_public cascade;
drop schema if exists app_private cascade;

create extension if not exists citext with schema public;

revoke all on schema public from public;
alter default privileges revoke all on sequences from public;
alter default privileges revoke all on functions from public;
grant all on schema public to :DATABASE_OWNER;

create schema app_public;
create schema app_private;

grant usage on schema public, app_public to :DATABASE_VISITOR;
alter default privileges in schema public, app_public
  grant usage, select on sequences to :DATABASE_VISITOR;
alter default privileges in schema public, app_public
  grant execute on functions to :DATABASE_VISITOR;

create function app_private.tg__timestamps() returns trigger as $$
begin
  NEW.created_at = (case when TG_OP = 'INSERT' then NOW() else OLD.created_at end);
  NEW.updated_at = (case when TG_OP = 'UPDATE' and OLD.updated_at >= NOW() then OLD.updated_at + interval '1 millisecond' else NOW() end);
  return NEW;
end;
$$ language plpgsql volatile set search_path to pg_catalog, public, pg_temp;
```
PostGraphile is a huge inspiration for this post, and some of the ideas and code are directly lifted from the graphile-starter. Big thanks to Benjie for the incredible work he does on the Graphile suite!