
© Laurenz Albe 2025
In a previous article, I recommended using bigint for sequence-generated primary keys (but I make an exception for lookup tables!). If you didn't heed that warning, you might experience integer overflow. That causes downtime and pain. So I thought it would be a good idea to show you how to monitor for the problem and how to keep the worst from happening.
The problem of integer overflow in sequence-generated keys
There are two ways people typically create sequence-generated primary keys. The “traditional” way is to use serial:
CREATE TABLE tab (
id serial PRIMARY KEY,
...
);
That will actually create a (four-byte) integer column with a DEFAULT value:
\d tab
                            Table "public.tab"
 Column |  Type   | Collation | Nullable |             Default
--------+---------+-----------+----------+---------------------------------
 id     | integer |           | not null | nextval('tab_id_seq'::regclass)
...
The smarter, SQL-standard-compliant way to create an auto-incrementing column is an identity column:
CREATE TABLE tab (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
...
);
An identity column will also create a sequence behind the scenes. No matter which of the two techniques you choose, if you insert a lot of rows, you will eventually get the dreaded error message:
ERROR: nextval: reached maximum value of sequence "tab_id_seq" (2147483647)
Note that you don't need a large table to encounter this problem: if you INSERT and DELETE rows frequently, you will run out of sequence values just the same.
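To see why, remember that nextval() never gives a value back: a sequence number is used up even if the inserting transaction rolls back or the row is later deleted. A minimal demonstration (assuming a bare table with just the serial column):

```sql
CREATE TABLE tab (id serial PRIMARY KEY);

BEGIN;
INSERT INTO tab DEFAULT VALUES;  -- consumes sequence value 1
ROLLBACK;                        -- removes the row, but not the sequence advance

INSERT INTO tab DEFAULT VALUES;
SELECT id FROM tab;              -- the surviving row has id = 2, not 1
```

So a table that never grows beyond a handful of rows can still burn through all 2147483647 values if it sees enough churn.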
The classical solution for integer overflow
The solution to the problem is to convert the column to bigint, which is an eight-byte integer:
ALTER TABLE tab ALTER id TYPE bigint;
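One caveat for serial columns: since PostgreSQL 10, serial creates its sequence AS integer, which is where the 2147483647 in the error message above comes from. After widening the column, the sequence may need widening as well — a sketch, assuming the default sequence name tab_id_seq:

```sql
-- widen the column itself
ALTER TABLE tab ALTER id TYPE bigint;

-- a serial column's sequence was created "AS integer", so widen it too,
-- otherwise nextval() still stops at 2147483647
ALTER SEQUENCE tab_id_seq AS bigint;
```

For an identity column, altering the column's type adjusts the backing sequence automatically, so the separate ALTER SEQUENCE should not be needed there.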
That solution is simple enough, but modifying a table like that requires PostgreSQL to rewrite the entire table. The reason for rewriting the table is that the storage format of bigint is different from that of a four-byte integer. However, such a t
[...]