On the choice of datatypes
I have been wondering for a while why there are so many databases in which all well-known best practices for good data modelling are thrown overboard and the wrong data types are used.
In my post about choosing the language for stored functions, I showed what impact the different languages have on performance.
When choosing PL/pgSQL as the language, one performance optimization is often overlooked: the assignment of values to variables. The different approaches perform differently.
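To illustrate, here is a minimal sketch (the function name and values are made up) contrasting the two common assignment styles in PL/pgSQL:

```sql
CREATE OR REPLACE FUNCTION assign_demo()
  RETURNS integer
  LANGUAGE plpgsql
AS $$
DECLARE
  l_value integer;
BEGIN
  -- plain assignment: the expression is evaluated directly,
  -- typically the cheapest option
  l_value := 42;

  -- SELECT ... INTO: routes the assignment through the SQL engine,
  -- which is usually slower for simple expressions
  SELECT 42 INTO l_value;

  RETURN l_value;
END;
$$;
```

Both statements leave the same value in `l_value`; the difference is only in how the assignment is executed.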
PostgreSQL has a small but very powerful feature: range types.
At first they seem like a “nice-to-have” feature, but they are very helpful, especially when dealing with date ranges.
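A quick sketch of what date ranges buy you (the dates are arbitrary examples): overlap and containment checks become single operators instead of error-prone pairs of comparisons.

```sql
-- do two date ranges overlap? (&& is the overlap operator)
-- note: ranges are half-open by default, i.e. [start, end)
SELECT daterange('2024-01-01', '2024-02-01')
       && daterange('2024-01-15', '2024-03-01') AS overlaps;

-- does a range contain a specific date? (@> is the containment operator)
SELECT daterange('2024-01-01', '2024-02-01') @> date '2024-01-20' AS contains;
```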
A popular (technical) interview question for programming jobs is the Fizz Buzz problem.
In a nutshell, the problem is to write a program that outputs the numbers from 1 to n, printing “Fizz” for numbers divisible by 3, “Buzz” for numbers divisible by 5, and “Fizz Buzz” for numbers divisible by 15 (i.e. by both).
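One possible SQL solution (not necessarily the one from the post) uses generate_series() and a CASE expression; note that the test for 15 has to come first:

```sql
SELECT i,
       CASE
         WHEN i % 15 = 0 THEN 'Fizz Buzz'
         WHEN i % 3  = 0 THEN 'Fizz'
         WHEN i % 5  = 0 THEN 'Buzz'
         ELSE i::text
       END AS fizz_buzz
FROM generate_series(1, 15) AS t(i);
```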
In my post about dynamic SQL I showed how PostgreSQL’s query_to_xml() could be used to run dynamic SQL and process the result in a regular SQL query.
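As a sketch of the idea (the table and column names are made up), query_to_xml() takes the query as a string, and XMLTABLE can turn the XML result back into rows:

```sql
SELECT x.*
FROM query_to_xml('select id, name from some_table',
                  true,   -- include null columns
                  false,  -- one <table> root, <row> elements inside
                  '')     -- no target namespace
       AS result,
     XMLTABLE('/table/row' PASSING result
              COLUMNS id   integer PATH 'id',
                      name text    PATH 'name') AS x;
```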
If you prefer to deal with JSON rather than XML, it’s quite easy to write a corresponding query_to_jsonb() function.
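One possible implementation sketch (the function name comes from the text above; the body is my assumption, and it performs no validation of the passed query):

```sql
CREATE OR REPLACE FUNCTION query_to_jsonb(p_query text)
  RETURNS jsonb
  LANGUAGE plpgsql
AS $$
DECLARE
  l_result jsonb;
BEGIN
  -- wrap the dynamic query in a derived table and
  -- aggregate each row into a JSON array
  EXECUTE 'select jsonb_agg(to_jsonb(t)) from (' || p_query || ') as t'
     INTO l_result;
  RETURN l_result;
END;
$$;
```

Called as `SELECT query_to_jsonb('select id, name from some_table')`, it returns the whole result set as a single jsonb array.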
From a recent discussion on the Postgres performance mailing list, I learned that the configuration of the SQL client has an influence on the execution plan chosen by Postgres.
Sometimes it’s necessary to run dynamic SQL, but it’s not always possible (or sensible) to write a stored procedure just for that.
Postgres and Oracle both support functions that can run queries using dynamic SQL and return the result of those queries as XML.
One type of problem is inherently complicated in SQL: turning rows into columns. This is very often referred to as a “crosstab” or “pivot”.
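A common way to do this in PostgreSQL is a filtered aggregate per target column. A minimal sketch, assuming a hypothetical sales table with product, quarter, and amount columns:

```sql
SELECT product,
       sum(amount) FILTER (WHERE quarter = 1) AS q1,
       sum(amount) FILTER (WHERE quarter = 2) AS q2,
       sum(amount) FILTER (WHERE quarter = 3) AS q3,
       sum(amount) FILTER (WHERE quarter = 4) AS q4
FROM sales
GROUP BY product;
```

The drawback is visible right away: every output column has to be spelled out in the query.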
Every now and then I see queries that use GROUP BY without any aggregates, which is equivalent to using DISTINCT.
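For example (table and column names are made up), these two queries return the same result:

```sql
-- GROUP BY without any aggregate function ...
SELECT customer_id FROM orders GROUP BY customer_id;

-- ... is just a roundabout way of writing DISTINCT
SELECT DISTINCT customer_id FROM orders;
```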
Sometimes it’s necessary to normalize denormalized tables - the opposite of a “crosstab” or “pivot” operation. Postgres does not support an UNPIVOT operator like Oracle or SQL Server, but simulating it is very simple.
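One way to simulate UNPIVOT is a lateral join against a VALUES list. A sketch, assuming a hypothetical pivoted_sales table with one column per quarter (q1 through q4):

```sql
SELECT p.product, v.quarter, v.amount
FROM pivoted_sales p
  CROSS JOIN LATERAL (
    VALUES (1, p.q1),
           (2, p.q2),
           (3, p.q3),
           (4, p.q4)
  ) AS v(quarter, amount);
```

Each input row is expanded into four output rows, one per quarter column.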