Rodrigo Rosenfeld Rosas

The day I reached the 1600 columns limit in PostgreSQL

Tue, 26 Sep 2017 11:15:00 +0000

WARNING: skip the TLDR section if you like some drama.

TLDR: PostgreSQL doesn't reclaim the space (or the column slot) when you drop a column. If you use a script that adds temporary columns and run it many times, at some point it will reach the limit of 1600 columns per table.

It was a Friday afternoon (it's always on a Friday, right?) and we were close to starting a long-awaited migration process. After several tests everything seemed to be working just fine, until someone told me they were no longer able to continue testing because the servers wouldn't allow them to port deals anymore. After a quick inspection of the logs I noticed the message saying we had reached the 1600-columns-per-table limit in PostgreSQL.

If you've never been in this situation (and if you haven't read the TLDR) you might be wondering: "how the hell would someone get 1600 columns in a single table?!" Right? I was just as impressed, although I already suspected what could be happening, since I knew the script would create temporary columns to store the previous reference ids when inserting new records, even though they were dropped by the end of the transaction.

If that never happened to you, you might think I was the first to face this issue, but you'd be wrong. A quick web search for the 1600 columns limit will turn up many more cases of people unexpectedly reaching it without actually having that many columns in the table. I wasn't the first and won't be the last to face this issue but, luckily for you who are reading this article, you won't be the next person to reach that limit ;)

Why use a temporary column?

Yes, now I agree it's not a good idea after all, but let me try to explain why I did it in the first place.

In case you're not aware, the "returning" clause of an "insert-into-select-returning" statement can only reference columns of the table being inserted into. But I wanted to keep a mapping between the newly inserted ids and the previous ones coming from the "select" part of the statement. So my first idea was to simply add a temporary "previous_id" column to the table and use it to store the old id so that I could map them.

Let me give a concrete example, with tables and queries, so that it gets clearer for those of you who might be confused by the above explanation. We have documents, each of which can have many references associated with it, and each reference can have multiple citations. The actual model is as complicated as it is irrelevant to the problem, so let me simplify it to make my point.

Suppose we want to duplicate a document and its references and citations. We could have the following tables:

  • doc_refs(id, doc_id, category_id)
  • citations(id, ref_id, citation)
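
For illustration, a minimal DDL for these two tables could look like the sketch below (the column types and the documents/categories parent tables are my assumptions here, not the real schema):

create table doc_refs (
  id serial primary key,
  doc_id integer not null references documents(id),
  category_id integer not null references categories(id) on delete cascade
);

create table citations (
  id serial primary key,
  ref_id integer not null references doc_refs(id) on delete cascade,
  citation text not null
);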

In my first implementation the strategy was to add a temporary previous_id column to doc_refs, and then the script would do something like:

insert into doc_refs(previous_id, doc_id, category_id)
  select id, doc_id, 30 from doc_refs where category_id = 20;

This way it would be possible to know the mapping between the copied and pasted references so that the script could duplicate the citations using that mapping.
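
For completeness, the citations could then be duplicated with something along these lines (a sketch, assuming the temporary previous_id column from above is still present at that point):

-- copy each citation from the old reference to its freshly inserted copy:
insert into citations(ref_id, citation)
  select r.id, c.citation
    from doc_refs r
    join citations c on c.ref_id = r.previous_id
   where r.category_id = 30
     and r.previous_id is not null;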

This script would have to run thousands of times to port all deals so, once I learned about the column limit and how dropping a column doesn't really reclaim its space in PostgreSQL, I needed another strategy to get the mapping without resorting to a temporary column. I'd also have to figure out how to reclaim that space eventually, in case I ever needed to add a permanent column to that table, but I'll discuss that part in another section below.

A better solution to the mapping problem

In case you reached this limit for the same reason as me, I'll tell you how I modified the script to use a temporary mapping table instead of a temporary column. Our tables use a serial column (an integer backed by a sequence) as the primary key. The process is just a little bit more complicated than using the temporary column:

create temp table refs_mapping as
  select id, nextval('doc_refs_id_seq') as new_id from doc_refs where category_id = 20;

With that mapping table in place, it's just a matter of using it in the subsequent inserts to map the old ids to the new ones. Not that hard after all, and the solution is free from the column limit issue :)
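
To make that concrete, the rest of the script could look roughly like this (a sketch assuming the nextval column is aliased as new_id, as above; the real script was a bit more involved):

-- insert the copies, forcing the ids pre-allocated in the mapping table:
insert into doc_refs(id, doc_id, category_id)
  select m.new_id, r.doc_id, 30
    from refs_mapping m
    join doc_refs r on r.id = m.id;

-- duplicate the citations using the same mapping:
insert into citations(ref_id, citation)
  select m.new_id, c.citation
    from refs_mapping m
    join citations c on c.ref_id = m.id;

-- the temp table is dropped at the end of the session anyway,
-- but it doesn't hurt to clean up explicitly:
drop table refs_mapping;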

How to reclaim the space from dropped columns?

Once the script to port deals was fixed and running I decided to take some action to reclaim the space used by the dropped columns so that I could create new columns later in that table if I had to.

After searching the web, some suggested that a full vacuum freeze would take care of rewriting the table, which would then reclaim the space. It didn't work in my tests, which makes sense: the dropped columns remain registered in pg_attribute and keep counting against the 1600 limit even after the table is rewritten. It seems the easiest way would be to create a dump and restore it in a new database, but in our case that would mean some downtime, which I wanted to avoid. Maybe it would be possible to use this strategy with some master-slave replication setup with no downtime, but I decided to try another strategy, which was simpler in our case.
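
If you want to check how many attribute slots a table is actually consuming, dropped columns included, you can count them in pg_attribute with something like the query below (adjust the table name to your case):

select count(*) as total_slots,
       count(*) filter (where attisdropped) as dropped_slots
  from pg_attribute
 where attrelid = 'doc_refs'::regclass
   and attnum > 0; -- skip system columns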

Our clients only need read access to those tables, while the input is done by an internal team, which makes it much easier for us to manage downtime if needed.

So I decided to lock the table for write access while the script recreated it, and then replace the old table with the new one. It took only a handful of seconds to complete the operation (the table had about 3 million records). The script looked something like this:

begin;
-- exclusive mode blocks writes but still allows reads:
lock doc_refs in exclusive mode;
lock citations in exclusive mode;

create table new_refs (
  id integer not null primary key default nextval('doc_refs_id_seq'),
  doc_id integer not null references documents(id),
  category_id integer not null references categories(id) on delete cascade
);
create index on new_refs(doc_id, category_id);
create index on new_refs(category_id);

-- copy the data (select * only returns the live columns, not the dropped ones):
insert into new_refs select * from doc_refs;

-- swap the tables and repoint the foreign key and the sequence:
alter table citations drop constraint fk_citations_reference;
alter table doc_refs rename to old_refs;
alter table new_refs rename to doc_refs;
alter table citations add constraint fk_citations_reference
  foreign key (ref_id) references doc_refs(id) on delete cascade;
alter sequence doc_refs_id_seq owned by doc_refs.id;
commit;

-- clean-up after that:

drop table old_refs;

Fortunately that table was only referenced by one other table, so it wasn't as complicated as it would have been for some of the other tables in our database. With a simple script like that we were able to rewrite the table with no downtime: write access was locked for only about 20 or 30 seconds, while read access wasn't affected at all. I hope this can be a useful trick in case you found this article because you got yourself into a similar situation :)

If you have other suggestions on how to handle the issues mentioned here, I'd love to hear from you. I'm always curious about possible solutions; after all, who knows when I'll next have to think outside the box? ;) Please let me know in the comments below. Thanks :)
