This is the archive of the old blog hosted on Blogger.
The old blog is still available at https://4thdoctordba.blogspot.com
I’ve been busy recently and I failed to post an update on the latest meetup news.
I apologise for that.
We had a very interesting meetup in January.
Alexey Bashtanov explained how grouping works in PostgreSQL and how to improve, or even reimplement in C, the grouping functions.
The pictures from the meeting are on the meetup page.
The presentation’s recording is available there, and the slides are free to download on SlideShare there.
I’ll be at the University of Ferrara on Saturday the 9th of January for a PostgreSQL afternoon.
This is the confirmed schedule.
15:00 - Federico Campoli: PostgreSQL, the big, the fast and the (NOSQL on) Acid
15:40 - Michele Finelli: PostgreSQL’s transactional system
16:20 - Coffee break / general chat
16:40 - Federico Campoli: Streaming replication
17:30 - Federico Campoli: Query tuning in PostgreSQL
18:00 - Michele Finelli: A horror fairy tale: how we lost a database
This second meetup went very well. The audience was interested and we had a fun time thanks to the beer and pizza offered, along with the venue, by our sponsor Brandwatch.
Here are a couple of pictures from the meetup.
The recording worked much better than the previous time; here’s the presentation’s video. We’ll meet again shortly for a nice beer. The next technical talk will probably be in January.
Three days to go for the next Brighton PostgreSQL meetup.
I’ll run a live hangout of the talk.
You can join the event there.
https://plus.google.com/events/cge4691km5qm8euj4erkcp7jecs
The recording will become available on YouTube shortly after the talk ends.
On November 27th at 19:00 GMT I’ll talk at the Brighton PostgreSQL meetup.
This time the group chose streaming replication as the topic.
The talk will cover PostgreSQL’s write-ahead logging and the crash recovery process. The audience will learn how to set up a standby server using streaming replication and how to troubleshoot it.
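For those who can’t make it, here is a minimal sketch of what the setup looks like on a 9.4 era cluster; the hostnames, paths and the replicator role are illustrative assumptions, not the exact steps from the talk.

```bash
# On the primary: create a replication role and allow it in pg_hba.conf.
# postgresql.conf is assumed to have wal_level = hot_standby and
# max_wal_senders > 0 already set.
psql -c "CREATE ROLE replicator WITH LOGIN REPLICATION PASSWORD 'secret'"
echo "host replication replicator 192.168.1.0/24 md5" >> "$PGDATA/pg_hba.conf"
pg_ctl -D "$PGDATA" reload

# On the standby host: clone the primary, streaming the WAL during the copy.
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/9.4/main -X stream -P

# recovery.conf turns the fresh copy into a streaming standby
# (hot_standby = on in postgresql.conf also allows read-only queries).
cat > /var/lib/postgresql/9.4/main/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator password=secret'
EOF
```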
Please RSVP here.
What follows is the synthesis of several years of frustration. Before you start reading, please note that the things written in the dark age section do not apply to high end environments like Oracle. That’s mostly because starting an Oracle project without a DBA on board is an interesting and creative way to go bankrupt in a few months. Obviously things evolve, and maybe in the next decade my Oracle fellows will join me in this miserable situation.
As previously said, the next Brighton PostgreSQL meetup will be on September 25th at 7 pm BST. The topic chosen by the members is query planning and execution in PostgreSQL.
I will give the presentation, exploring the various steps a query passes through from the client to execution. I’ll also explain how to read the execution plan and why the executor sometimes seems to ignore the indices put in place to speed up operations.
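As a taste of the topic, here is a hypothetical example of the planner skipping an index; the table and column names are made up for illustration.

```bash
psql <<'EOF'
CREATE TABLE t_sample (id serial PRIMARY KEY, value integer);
INSERT INTO t_sample (value)
    SELECT (random()*100)::integer FROM generate_series(1, 10000);
CREATE INDEX idx_value ON t_sample (value);
ANALYZE t_sample;
-- The predicate matches almost every row: a sequential scan is
-- cheaper, so the planner ignores idx_value.
EXPLAIN SELECT * FROM t_sample WHERE value >= 0;
-- The predicate matches few rows: the planner uses the index.
EXPLAIN SELECT * FROM t_sample WHERE value = 42;
EOF
```

The same cost logic explains most “ignored index” surprises: the planner estimates how many rows each predicate returns and picks the cheapest access path, which is not always the indexed one.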
On Friday the 14th of August we kicked off the Brighton PostgreSQL Meetup.
We had a nice evening with cool people, all together discussing PostgreSQL and how we can run the meetup effectively.
We decided to have a regular monthly meetup hosted around Brighton, possibly by companies or at any suitable venue.
The next meetup will be on the 25th of September, and this time there will be some PostgreSQL talks. The general interest favours standby servers and streaming replication.
There is just one day left before we start the Brighton PostgreSQL Meetup. I invested some resources in this project and I truly believe it can be a success.
I still can’t believe that in just one month 25 people have already shown interest in being part of the Brighton PostgreSQL Group. And today another nice surprise: I received the new shiny mascot for our group.
He’s Marvin, the sea elephant.
After upgrading some clusters to PostgreSQL 9.4.4 I noticed an increase in the database backup size. Because the databases are quite large, I’m taking advantage of the parallel export introduced with PostgreSQL 9.3.
The parallel dump uses PostgreSQL’s snapshot export with multiple backends. The functionality requires the dump to be in the directory format, where a TOC file is saved alongside the compressed exports, one per table saved by pg_dump.
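As a minimal sketch, assuming a database called db_sample, four parallel jobs and a target directory under /var/backups (all made up for illustration), a parallel dump in directory format looks like this:

```bash
# Parallel dump in directory format: -Fd selects the directory format,
# -j sets the number of concurrent worker backends.
pg_dump -Fd -j 4 -f /var/backups/db_sample db_sample

# The target directory holds a toc.dat file plus one compressed file per
# table; the restore can run in parallel from the same directory.
pg_restore -j 4 -d db_restored /var/backups/db_sample
```

Each worker exports a different table using the same exported snapshot, so the dump stays consistent across the backends.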