
Size matters
Nowadays it’s not uncommon to deal with machines with hundreds of GB of RAM.
Abundant memory can give PostgreSQL a massive performance boost. However, things work slightly differently than you may expect.
Let’s find out!
Finally I found the time to get a grip on the issues I had with gohugo and get my blog operational again.
I’m taking the occasion to write about a project I care a lot about, as it’s closely tied to my hometown: the upcoming PGDay Napoli.
Once again there has been a pretty long hiatus.
Things happen and, regardless of how bad my childhood was, family comes first.
Anyway, after a rollercoaster ride that started in July 2021, things are getting more stable.
For obvious reasons, FOSDEM this year is an online event. The staff is building an infrastructure from scratch in order to deliver the speakers’ videos in a virtual environment.
The catch is that all the talks must be pre-recorded and uploaded via Pentabarf, the software historically used by FOSDEM to manage talk submissions.
What follows is my experience recording, uploading and submitting the video for my upcoming talk.
When an exception occurs in a PL/pgSQL function, the function stops executing and returns an error. When this happens, all the changes made are rolled back.
It’s always possible to manage the error at the application level; however, there are cases where handling the exception inside the function may be a sensible choice, and PL/pgSQL has a nice way to do that: the EXCEPTION block.
However, handling the exception inside a function is not just a cosmetic matter. The way the exception is handled has implications that may cause issues.
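As a minimal sketch of the idea (the function name and behaviour are made up for illustration), an EXCEPTION block lets the function trap a specific error condition instead of aborting:

```sql
-- Hypothetical example: trap division_by_zero inside the function
-- instead of letting the error propagate to the caller.
CREATE OR REPLACE FUNCTION safe_divide(a numeric, b numeric)
RETURNS numeric
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN a / b;
EXCEPTION
    WHEN division_by_zero THEN
        -- The work done in the BEGIN section is rolled back and
        -- this handler runs instead.
        RAISE NOTICE 'division by zero, returning NULL';
        RETURN NULL;
END;
$$;
```

Note that entering a block with an EXCEPTION clause sets up a subtransaction (an implicit savepoint), which is exactly the kind of non-cosmetic implication mentioned above.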
With PostgreSQL 12, generated columns are now supported natively. Up to PostgreSQL 11 it was possible to emulate generated columns using a trigger.
In this post we’ll see how to configure a generated column via a trigger and natively, then we’ll compare the performance of both strategies.
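To give an idea of the two approaches, here is a minimal sketch (table and function names are made up for illustration):

```sql
-- Native generated column (PostgreSQL 12+): full_name is computed
-- and stored by the server on every INSERT or UPDATE.
CREATE TABLE person (
    first_name text,
    last_name  text,
    full_name  text GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED
);

-- Trigger-based emulation, the only option up to PostgreSQL 11.
CREATE TABLE person_trg (
    first_name text,
    last_name  text,
    full_name  text
);

CREATE FUNCTION person_trg_fn() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.full_name := NEW.first_name || ' ' || NEW.last_name;
    RETURN NEW;
END;
$$;

CREATE TRIGGER set_full_name
    BEFORE INSERT OR UPDATE ON person_trg
    FOR EACH ROW EXECUTE PROCEDURE person_trg_fn();
```

The trigger version fires a PL/pgSQL call for every affected row, which is where the performance difference against the native implementation comes from.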
The transactional model has been part of PostgreSQL since the early versions. Its implementation follows the guidelines of the SQL standard, with some notable exceptions.
When designing an application it’s important to understand how concurrent access to the data happens, in order to avoid unexpected results or even errors.
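As a small illustration (the accounts table is hypothetical), the isolation level can be chosen per transaction, and it determines what concurrent sessions can see:

```sql
-- Under the default READ COMMITTED level, each statement sees data
-- committed before it started. A stricter level can be requested:
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;  -- snapshot taken here
-- updates committed by other sessions after this point stay invisible
COMMIT;
</ gr_sql_end_marker_removed>
```

One of the notable deviations from the standard: PostgreSQL accepts READ UNCOMMITTED but treats it as READ COMMITTED, so dirty reads never happen.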
It has been a long time since I wrote on my blog. The paths that life chooses for you are strange; often what you planned is completely scrambled by something bigger and completely unforeseen.
Back in 2018 I planned to relocate to France, but the move went quite wrong for reasons that only now I can see clearly.
Anyway, after 10 months wandering without a real home, I’m finally settled down somewhere.
This is the perfect occasion to celebrate the second Database Administrator Appreciation Day.
Back in 2018 I launched the event because there was no day dedicated to this obscure yet very important figure within the enterprise.
So don’t forget the 5th of July: it’s the day when you should say thank you to your DBA for all the hard work.
In the previous post we modified the apt role to control the setup in a declarative way. Then we added an ssh role for configuring the three Devuan servers; the role configures the server’s postgres process owner for passwordless ssh connections.
In this tutorial we’ll complete the setup for the postgres user and then configure the database clusters with a new role.