One of the little things that makes working with Postgres easier than with other database products is its error messages. They usually include not only the actual error but also a “Hint” that more often than not actually helps to fix the problem.
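For example, a simple typo in a column name produces output along these lines (the table and column names here are made up for illustration):

```sql
-- Hypothetical table; a typo in the column name triggers a helpful hint
CREATE TABLE users (user_name text);

SELECT usern_ame FROM users;
-- ERROR:  column "usern_ame" does not exist
-- HINT:  Perhaps you meant to reference the column "users.user_name".
```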
Support for JSON is ubiquitous in modern relational databases. Since the introduction of JSON support in the SQL:2016 standard, accessing (nested) JSON values is very similar across database products using SQL/JSON path expressions.
PostgreSQL was the first relational database to support JSON and corresponding JSON functions. With version 12 it also supports SQL/JSON path.
For certain types of conditions, the PostgreSQL specific operators and functions are more powerful than the SQL/JSON path functions.
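To sketch the difference, assume a hypothetical orders table with a jsonb column details; both forms below work in PostgreSQL 12 and later:

```sql
-- Standard SQL/JSON path expression: extract all item prices
SELECT jsonb_path_query(details, '$.items[*].price')
FROM orders;

-- PostgreSQL-specific operators: navigate to one specific value
SELECT details -> 'items' -> 0 ->> 'price'
FROM orders;
```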
One aspect of database sequences that a lot of people stumble over is that they can have gaps, either because a value obtained with nextval wasn’t used or because rows were deleted.
For a surrogate key (“generated key”) this is not a problem, because those values only have to be unique; gaps are meaningless and can safely be ignored.
In some situations gapless numbers might, however, be a (legal) requirement, e.g. for invoice numbers.
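The non-transactional behavior of sequences can be demonstrated with a minimal sketch:

```sql
CREATE SEQUENCE demo_seq;

SELECT nextval('demo_seq');   -- returns 1

BEGIN;
SELECT nextval('demo_seq');   -- returns 2
ROLLBACK;                     -- the sequence does NOT roll back

SELECT nextval('demo_seq');   -- returns 3; the value 2 is gone for good
```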
Sometimes cyclic foreign keys can’t be avoided, but modern SQL features make it quite easy to insert new data without a big hassle.
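One way this can work is a single statement with a data-modifying CTE: because non-deferrable foreign keys are checked at the end of the statement, both rows can be inserted together. A sketch, assuming a hypothetical company/contact pair of tables that reference each other:

```sql
CREATE TABLE company (
  id           integer PRIMARY KEY,
  main_contact integer NOT NULL
);
CREATE TABLE contact (
  id         integer PRIMARY KEY,
  company_id integer NOT NULL REFERENCES company
);
ALTER TABLE company
  ADD FOREIGN KEY (main_contact) REFERENCES contact;

-- Insert a company and its main contact in one statement;
-- the foreign key checks only run once the whole statement is done
WITH new_company AS (
  INSERT INTO company (id, main_contact)
  VALUES (1, 1)
  RETURNING id
)
INSERT INTO contact (id, company_id)
SELECT 1, id
FROM new_company;
```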
In my previous post about unpivot in Postgres I showed how this can be done in a compact manner without using a series of UNION statements.
But Postgres offers an even more compact and dynamic way to do this.
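One such approach (a sketch, using a hypothetical sales table with quarterly columns) converts each row to jsonb and expands the key/value pairs with jsonb_each_text, so the column names never have to be listed:

```sql
CREATE TABLE sales (id int, q1 int, q2 int, q3 int, q4 int);

-- Unpivot all columns except "id": each row becomes a jsonb object,
-- and every key/value pair becomes one output row
SELECT s.id, t.quarter, t.amount::int
FROM sales s
  CROSS JOIN LATERAL jsonb_each_text(to_jsonb(s) - 'id') AS t(quarter, amount);
```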
The SQL:2016 standard introduced JSON and various ways to query JSON values. Postgres has been supporting JSON for a long time but defined its own set of operators and functions to query JSON values.
With PostgreSQL 12, the standard SQL/JSON path query functionality is now also supported. Although the functions accepting JSON path queries do not conform to the SQL standard, the parameters and behavior do.
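For instance, the path expression itself follows the standard syntax even though the function name is Postgres-specific (orders and details are hypothetical names here):

```sql
-- Find orders containing at least one item more expensive than 100;
-- jsonb_path_exists() is the Postgres function, the path is standard SQL/JSON
SELECT *
FROM orders
WHERE jsonb_path_exists(details, '$.items[*] ? (@.price > 100)');
```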
With the increasing popularity of Postgres, I see more and more questions asking for help with migrating code from other database systems, such as Oracle or SQL Server, to Postgres.
What I very often see is that such a migration simply tries to translate the syntax from one system to another. This isn’t limited to migrations to Postgres: migrations from Oracle to SQL Server, or from SQL Server to Oracle, very often fall into the same trap.
Recently I have seen an increase in questions in various forums (including Stack Overflow) where people are using (big) integer values instead of proper timestamp values to represent timestamps (less so for DATE values, though). All modern database systems provide highly efficient data types to store real timestamp values, but I often get questions asking what the actual downside of using a “UNIX epoch” instead of a proper timestamp is.
I have been wondering for a while why there are so many databases where all well-known best practices for good data modelling are thrown overboard and the wrong data types are used.
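A quick sketch of converting between the two representations, which also hints at how much readability is lost with bare epoch values:

```sql
-- Epoch value to real timestamp
SELECT to_timestamp(0);
-- 1970-01-01 00:00:00+00

-- Real timestamp back to an epoch value
SELECT extract(epoch FROM timestamptz '2024-01-01 00:00:00+00');
-- 1704067200
```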
In my post about choosing the language for stored functions I showed what impact the different languages have.
When choosing PL/pgSQL as the language, there is one performance optimization that is often overlooked: the assignment of values to variables. Different approaches perform quite differently.
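A minimal sketch of the two styles in question; both assign the result of an expression to a variable, but the plain assignment avoids routing a trivial expression through a full query:

```sql
DO $$
DECLARE
  v integer;
BEGIN
  -- one approach: a SELECT ... INTO for every assignment
  SELECT 42 INTO v;

  -- the alternative: the plain assignment operator
  v := 42;
END
$$;
```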