PostgreSQL disk space problems

I know that VACUUM can reorganize a table's contents so that it occupies less disk space, but can't PostgreSQL reuse the freed space inside a table automatically?
For example, I ran a test:
I inserted one hundred thousand records into a table and noted the disk space it occupied, say 1000K. Then I deleted 50,000 of the records and inserted another 50,000. The table's disk usage was then not close to the 1000K seen before, but around 1500K. How can this be? Will the 50,000 deleted records occupy space forever, until I get around to running VACUUM?
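(For anyone who wants to reproduce this, here is a minimal sketch of the same test; the table and column names are made up, pg_relation_size requires PostgreSQL 8.1 or later, and autovacuum is assumed not to run between the steps.)

    CREATE TABLE t (id int, payload text);
    INSERT INTO t SELECT g, repeat('x', 100) FROM generate_series(1, 100000) g;
    SELECT pg_relation_size('t');      -- baseline size after 100,000 rows
    DELETE FROM t WHERE id <= 50000;   -- rows become dead; space is not returned
    INSERT INTO t SELECT g, repeat('x', 100) FROM generate_series(1, 50000) g;
    SELECT pg_relation_size('t');      -- about 1.5x the baseline: without a
                                       -- VACUUM in between, the dead rows' space
                                       -- is not reusable, so the new rows are
                                       -- appended at the end of the file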

Started by Quincy at February 04, 2016 - 10:40 PM

Then I suggest you go try Oracle, SQL Server, or Sybase.
Run this same test there and have a look. Don't their files grow? ? ?

Do you really think the file would come out the same size after a delete followed by an insert...

There are logs, rollback segments, ......

Posted by Kris at February 09, 2016 - 11:30 PM

Moderator, please don't get angry. This question genuinely confused me, which is why I came here looking for a solution. I did test Oracle too, and of course I never expected any database to hold unlimited data in fixed-size files; that is the most basic common sense.
While researching the problem I did find the answer: use VACUUM, together with a suitably sized max_fsm_pages. The question now looks rather naive, so I hesitated to post it here.

Posted by Quincy at February 24, 2016 - 12:16 AM

Ha-ha. I've used SQL Server and watched its files grow and grow and grow... I couldn't take it. In the end I reinstalled :)

Posted by Kris at March 02, 2016 - 1:10 AM

Could you describe max_fsm_pages in detail?

Sharing your hands-on experience with this operation will benefit everyone :)

Posted by Kris at March 15, 2016 - 2:08 AM

In response to the moderator's call:
A long, long time ago..... in an article entitled "Tuning PostgreSQL for Performance", there was this passage:

max_fsm_pages:

PostgreSQL records free space in each of its data pages. This information is useful for vacuum to find out how many and which pages to look for when it frees up the space.
If you have a database that does lots of updates and deletes, that is going to generate dead tuples, due to PostgreSQL's MVCC system. The space occupied by dead tuples can be freed with vacuum, unless there is more wasted space than is covered by the Free Space Map, in which case the much less convenient "vacuum full" is required. By expanding the FSM to cover all of those dead tuples, you might never again need to run vacuum full except on holidays.

The best way to set max_fsm_pages is iterative: first, figure out the vacuum (regular) frequency of your database based on write activity; next, run the database under normal production load, and run "vacuum verbose analyze" instead of vacuum, saving the output to a file; finally, calculate the maximum total number of pages reclaimed between vacuums based on the output, and use that.

Remember, this is a database cluster wide setting. So bump it up enough to cover all databases in your database cluster. Also, each FSM page uses 6 bytes of RAM for administrative overhead, so increasing FSM substantially on systems low on RAM may be counter-productive.

As I understand it, this parameter in postgresql.conf (max_fsm_pages) tells PostgreSQL how much memory to set aside for tracking the free space inside its data files. In my simple understanding: when records are deleted from a table, PostgreSQL records the change in the "Free Space Map"; the next time records are inserted, the information in the Free Space Map lets it reuse the disk space released by the earlier deletes. But the Free Space Map lives in memory and its size is limited after all, so under heavy deletion and insertion you must either specify a larger max_fsm_pages or vacuum in time to tidy up the table's fragmentation. Otherwise PostgreSQL can only append newly inserted records to the end of the file, and the files grow and grow. I once had a program stop unexpectedly because the disk was full: it inserted about 5,000,000 records into a table, having first deleted the same number of records, yet in the end it filled the entire hard disk.
I actually think this way of working is one of PostgreSQL's strengths. If memory is large enough, you can specify a very large Free Space Map, which for OLTP applications might substantially improve performance (a guess; I have not tested it). Users can also choose between VACUUM and VACUUM FULL at the proper time; if you are sure a table only ever has records inserted into it (such as an operation log), you never need to VACUUM FULL that table. Isn't that flexible?
However, VACUUM FULL moves a large amount of data and is time-consuming work, during which database performance degrades significantly; perhaps that is the price of the "flexibility". In this regard, Oracle's more complex Block -> Extent -> Segment mechanism may be more effective. It is said that the concept of tablespaces will be introduced into PostgreSQL; that is worth looking forward to!
As for how high to set the Free Space Map, the article above teaches a way to do it. Just understand that this is a "map": if you want to delete 300M of records, the Free Space Map does not need 300M.
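To make the "map" point concrete, here is a hedged back-of-the-envelope sketch (it applies to servers before 8.4, where max_fsm_pages still exists; the numbers are illustrative):

    SHOW max_fsm_pages;   -- the value set in postgresql.conf (restart to change)
    -- Each tracked page costs about 6 bytes of memory, so a generous map is
    -- cheap:  500000 pages * 6 bytes ~= 3 MB of RAM.
    -- Since data pages are 8 KB, those 500000 entries can describe free space
    -- spread across 500000 * 8 KB ~= 4 GB of table files; that is why deleting
    -- 300M of records does not require a 300M Free Space Map.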

Posted by Quincy at March 25, 2016 - 3:01 AM

Tuning PostgreSQL for performance

Shridhar Daithankar, Josh Berkus
July 3, 2003 Copyright 2003 Shridhar Daithankar and Josh Berkus.
Authorized for re-distribution only under the PostgreSQL license (see www.postgresql.org/license).


Table of Contents

1 Introduction
2 Some basic parameters
2.1 Shared buffers
2.2 Sort memory
2.3 Effective Cache Size
2.4 Fsync and the WAL files
3 Some less known parameters
3.1 random_page_cost
3.2 Vacuum_mem
3.3 max_fsm_pages
3.4 max_fsm_relations
3.5 wal_buffers
4 Other tips
4.1 Check your file system
4.2 Try the Auto Vacuum daemon
4.3 Try FreeBSD
5 The CONF Setting Guide

1 Introduction
This is a quick start guide to tuning PostgreSQL's settings for performance. It assumes minimal familiarity with PostgreSQL administration. In particular, one should know:
How to start and stop the postmaster service
How to tune OS parameters
How to test the changes
It also assumes that you have gone through the PostgreSQL administration manual before starting, and have set up your PostgreSQL server with at least the default configuration.

There are two important things for any performance optimization:

Decide what level of performance you want
If you don't know your expected level of performance, you will end up chasing a carrot that is always a couple of meters ahead of you. Performance tuning measures give diminishing returns after a certain threshold; if you don't set this threshold beforehand, you will end up spending a lot of time for minuscule gains.
Know your load
This document focuses entirely on tuning postgresql.conf for your existing setup. This is not the end of performance tuning: after using this document to extract the maximum reasonable performance from your hardware, you should start optimizing your application for efficient data access, which is beyond the scope of this article.
Please also note that the tuning advice given here consists of hints. You should not implement it all blindly: tune one parameter at a time, test its impact, and decide whether or not you need more tuning. Testing and benchmarking are an integral part of database tuning.

Tuning the software settings explored in this article is only about one-third of database performance tuning, but it's a good start since you can experiment with some basic setting changes in an afternoon, whereas some other aspects of tuning can be very time-consuming. The other two-thirds of database application tuning are:

Hardware Selection and Setup
Databases are very bound to your system's I/O (disk) access and memory usage. As such, selection and configuration of disks, RAID arrays, RAM, operating system, and competition for these resources will have a profound effect on how fast your database is. We hope to have a later article covering this topic.
Efficient Application Design
Your application also needs to be designed to access data efficiently, through careful query writing, planned and tested indexing, good connection management, and avoiding performance pitfalls particular to your version of PostgreSQL. Expect another guide someday helping with this, but really it takes several large books and years of experience to get it right ... or just a lot of time on the mailing lists.
2 Some basic parameters
2.1 Shared buffers
Shared buffers defines a block of memory that PostgreSQL will use to hold requests that are awaiting attention from the kernel buffer and CPU. The default value is quite low for any real-world workload and needs to be beefed up. However, unlike with databases such as Oracle, more is not always better: there is a threshold above which increasing this value can hurt performance.
This is the area of memory PostgreSQL actually uses to perform work. It should be large enough to handle the load on the database server; otherwise PostgreSQL will start pushing data out to file and overall performance will suffer. Hence this is the most important setting to tune.

This value should be set based on the dataset size which the database server is expected to handle at peak load, and on your available RAM (keep in mind that RAM used by other applications on the server is not available). We recommend the following rule of thumb for this parameter:

Start at 4MB (512) for a workstation
Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768)
PLEASE NOTE. PostgreSQL counts a lot on the OS to cache data files and hence does not bother with duplicating its file caching effort. The shared buffers parameter assumes that OS is going to cache a lot of files and hence it is generally very low compared with system RAM. Even for a dataset in excess of 20GB, a setting of 128MB may be too much, if you have only 1GB RAM and an aggressive-at-caching OS like Linux.

There is one way to decide what is best for you. Set a high value for this parameter and run the database under typical usage. Watch usage of shared memory using ipcs or similar tools. A recommended figure would be between 1.2 and 2 times peak shared memory usage.
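A hedged sketch of inspecting the current value from psql (in the 7.x era shared_buffers is counted in 8 KB pages; changing it means editing postgresql.conf and restarting the postmaster):

    SHOW shared_buffers;           -- e.g. 4096 pages * 8 KB = 32 MB
    -- in postgresql.conf (restart required):
    --   shared_buffers = 4096     # 32 MB; then watch real shared memory usage
    --                             # with ipcs and aim for 1.2x-2x the observed
    --                             # peak, as suggested above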


2.2 Sort memory
This parameter sets the maximum limit on memory that a database connection can use to perform sorts. If your queries have ORDER BY or GROUP BY clauses that require sorting a large data set, increasing this parameter will help. But beware: this parameter is per sort, per connection, so think twice before setting it too high on any database with many users. A recommended approach is to set this parameter per connection as and when required; that is, low for most simple queries and higher for large, complex queries and data dumps.
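For example, a hedged sketch of the per-connection approach (sort_mem is measured in kilobytes in the 7.x series and was renamed work_mem in 8.0; the table and columns are hypothetical):

    SET sort_mem = 65536;             -- 64 MB, for this session only
    SELECT customer_id, sum(amount)
      FROM orders                     -- hypothetical table
     GROUP BY customer_id
     ORDER BY sum(amount) DESC;       -- the big sort benefits from the extra memory
    RESET sort_mem;                   -- drop back to the server default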

2.3 Effective Cache Size
This parameter allows PostgreSQL to make the best possible use of the RAM available on your server. It tells PostgreSQL the size of the OS data cache, so that PostgreSQL can draw up different execution plans based on that information.
Say there is 1.5GB RAM in your machine, shared buffers are set to 32MB, and effective cache size is set to 800MB. If a query needs a 700MB data set, PostgreSQL will estimate that all the required data should be available in memory and will opt for a more aggressive plan in terms of optimization, involving heavier index usage and merge joins. But if the effective cache is set to only 200MB, the query planner is liable to opt for the more I/O-efficient sequential scan.

When setting this parameter, leave room for other applications running on the server machine. The objective is to set this value at the highest amount of RAM that will be available to PostgreSQL all the time.
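Because this is only a planner hint, it is easy to experiment with per session; a hedged sketch (in 7.x the unit is 8 KB disk pages; the table is hypothetical):

    SET effective_cache_size = 102400;   -- 102400 * 8 KB ~= 800 MB
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    SET effective_cache_size = 25600;    -- ~200 MB
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- compare the plans: with the larger cache estimate the planner leans
    -- toward index scans and merge joins, with the smaller one toward the
    -- more I/O-efficient sequential scan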


2.4 Fsync and the WAL files
This parameter sets whether or not data is written out to disk as soon as it is committed, which is done through Write Ahead Logging (WAL). If you trust your hardware, your power company, and your battery power supply enough, you can set this to No for an immediate boost to data write speed. But be very aware that any unexpected database shutdown will force you to restore the database from your last backup.
If that's not an option for you, you can still have the protection of WAL and better performance. Simply move your WAL files, using either a mount or a symlink to the pg_xlog directory, to a separate disk or array from your main database files. In high-write-activity databases, WAL should have its own disk or array to ensure continuous high-speed access. Very large RAID arrays and SAN/NAS devices frequently handle this for you through their internal management systems.
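A hedged sketch of both points (the shell recipe in the comments is the classic symlink approach; the paths are illustrative):

    SHOW fsync;   -- 'on' is the safe default; turn it off only if you can
                  -- accept restoring from the last backup after a crash
    -- Relocating WAL to its own disk (with the postmaster stopped), at the shell:
    --   mv $PGDATA/pg_xlog /mnt/waldisk/pg_xlog
    --   ln -s /mnt/waldisk/pg_xlog $PGDATA/pg_xlog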

3 Some less known parameters
3.1 random_page_cost
This parameter sets the cost of fetching a random tuple from the database, which influences the planner's choice of index versus table scan. It is set to a high value by default, based on the expectation of slow disk access. If you have reasonably fast disks like SCSI or RAID, you can lower the cost to 2. You need to experiment to find out what works best for your setup, by running a variety of queries and comparing execution times.
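Since the parameter can be changed per session, a hedged way to run that experiment (the table and predicate are hypothetical):

    SET random_page_cost = 4;   -- the shipped default
    EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date > '2003-06-01';
    SET random_page_cost = 2;   -- tell the planner random I/O is cheaper
    EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date > '2003-06-01';
    -- if the second run switches to an index scan and the measured time
    -- improves, the lower setting suits your disks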
3.2 Vacuum_mem
This parameter sets the memory allocated to vacuum. Normally vacuum is a disk-intensive process, but raising this parameter will speed it up by allowing PostgreSQL to copy larger blocks into memory. Just don't set it so high that it takes significant memory away from normal database operation. Values between 16 and 32MB should be good enough for most setups.
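Like sort memory, this can be raised just for a big manual vacuum; a hedged sketch (the unit is kilobytes in 7.x, the parameter became maintenance_work_mem in 8.0, and the table is hypothetical):

    SET vacuum_mem = 32768;   -- 32 MB for this session
    VACUUM ANALYZE orders;    -- the large-block copying happens here
    RESET vacuum_mem;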
3.3 max_fsm_pages
PostgreSQL records free space in each of its data pages. This information is useful for vacuum to find out how many and which pages to look for when it frees up the space.
If you have a database that does lots of updates and deletes, that is going to generate dead tuples, due to PostgreSQL's MVCC system. The space occupied by dead tuples can be freed with vacuum, unless there is more wasted space than is covered by the Free Space Map, in which case the much less convenient "vacuum full" is required. By expanding the FSM to cover all of those dead tuples, you might never again need to run vacuum full except on holidays.

The best way to set max_fsm_pages is iterative: first, figure out the vacuum (regular) frequency of your database based on write activity; next, run the database under normal production load, and run "vacuum verbose analyze" instead of vacuum, saving the output to a file; finally, calculate the maximum total number of pages reclaimed between vacuums based on the output, and use that.

Remember, this is a database cluster wide setting. So bump it up enough to cover all databases in your database cluster. Also, each FSM page uses 6 bytes of RAM for administrative overhead, so increasing FSM substantially on systems low on RAM may be counter-productive.
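A hedged sketch of that iterative procedure (psql prints VACUUM VERBOSE reports as notices on stderr, hence the redirection; the database name is made up):

    -- from the shell, under normal production load:
    --   psql -d mydb -c 'VACUUM VERBOSE ANALYZE;' 2> vacuum.log
    VACUUM VERBOSE ANALYZE;
    -- then total up, across all tables in vacuum.log, the pages that held
    -- removable dead tuples between two vacuum runs, and set max_fsm_pages in
    -- postgresql.conf comfortably above the worst-case total (a cluster-wide
    -- setting; a restart is required)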

3.4 max_fsm_relations
This setting dictates how many relations (tables) will be tracked in the free space map. Again, this is a database cluster-wide setting, so set it accordingly. In version 7.3.3 and later this parameter's default should be correct; in older versions, bump it up to 300-1000.
3.5 wal_buffers
This setting decides the number of buffers WAL (Write Ahead Log) can have. If your database has many write transactions, setting this value a bit higher than the default could result in better usage of disk space. Experiment and decide. A good start would be around 32-64, corresponding to 256-512K of memory.
4 Other tips
4.1 Check your file system
On an OS like Linux, which offers multiple file systems, one should be careful about choosing the right one from a performance point of view. There is no agreement among PostgreSQL users about which one is best.
Contrary to popular belief, today's journaling file systems are not necessarily slower than non-journaling ones. Ext2 can be faster on some setups, but the recovery issues generally make its use prohibitive. Different people have reported widely different experiences with the speed of Ext3, ReiserFS, and XFS; quite possibly this kind of benchmark depends on a combination of file system, disk/array configuration, OS version, and database table size and distribution. As such, you may be better off sticking with the file system best supported by your distribution, such as ReiserFS for SuSE Linux or Ext3 for Red Hat Linux, not forgetting XFS, known for its large-file support. Of course, if you have time to run comprehensive benchmarks, we would be interested in seeing the results!

As an easy performance boost with no downside, make sure the file system on which your database is kept is mounted "noatime", which turns off the access time bookkeeping.

4.2 Try the Auto Vacuum daemon
There is a little-known module in the PostgreSQL contrib directory called pgavd. It works in conjunction with the statistics collector: it periodically connects to a database and checks whether enough operations have been performed since the last check. If so, it vacuums the database.
Essentially, it vacuums the database when the database needs it. It does away with fiddling with cron settings for vacuum frequency, and should result in better database performance by eliminating overdue-vacuum issues.

4.3 Try FreeBSD
Large updates, deletes, and vacuum in PostgreSQL are very disk intensive processes. In particular, since vacuum gobbles up IO bandwidth, the rest of the database activities could be affected adversely when vacuuming very large tables.
OSes from the BSD family, such as FreeBSD, can dynamically alter the I/O priority of a process. So if you lower the priority of a vacuum process, it should not chew up as much bandwidth and the database will be better able to perform normally. Of course, this means that vacuum may take longer, which would be problematic for a "vacuum full."

If you have not yet settled on an OS for your server platform, consider BSD for this reason.


5 The CONF Setting Guide
Available here is an Annotated Guide to the PostgreSQL configuration file settings, in both OpenOffice.org and PDF format. This guide expands on the official documentation and may eventually be incorporated into it.
The first column of the chart is the GUC setting in the postgresql.conf file.
The second is the maximum range of the variable; note that the maximum range is often much larger than the practical range. For example, random_page_cost will accept any number between 0 and several billion, but all practical numbers are between 1 and 5.
The third column contains an enumeration of RAM or disk space used by each unit of the parameter.
The fourth column indicates whether or not the variable may be SET from the psql terminal during an interactive session. Most settings marked "no" may only be changed by restarting PostgreSQL.
The fifth column quotes the official documentation available from the PostgreSQL web site.
The last column is our notes on the setting, how to set it, resources it uses, etc. You'll notice some blank spaces, and should be warned as well that there is still strong disagreement on the value of many settings.
Users of PostgreSQL 7.3 and earlier will notice that the order of the parameters in this guide does not match the order of the parameters in your postgresql.conf file. This is because this document was generated as part of an effort to re-organize the conf parameters and documentation; starting with 7.4, this document, the official documentation, and the postgresql.conf file are all in the same logical order.
As noted in the worksheet, it covers PostgreSQL versions 7.3 and 7.4. If you are using an earlier version, you will not have access to all of these settings, and defaults and effects of some settings will be different.

Posted by Quincy at March 26, 2016 - 3:31 AM

Add a trigger to do vacuum?

Posted by Baird at April 05, 2016 - 4:18 AM

The UGC!


Posted by Caesar at April 09, 2016 - 5:07 AM

PG just takes a lot of space; nothing to be done about it!

Posted by Wendy at April 19, 2016 - 5:25 AM

[quote=yanglii]PG just takes a lot of space; nothing to be done about it![/quote]


Don't talk about what you don't know.

Posted by Cynthia at April 20, 2016 - 5:51 AM

Following this thread with interest.

Posted by Bartholomew at April 21, 2016 - 6:25 AM

Please load the same data into PG and into Oracle, and have a look at how much space each one uses.
I don't understand PG's internals, but I can at least look at the space it takes. If that's how it is, why keep insisting PG is so good? Ha-ha

Posted by Wendy at May 03, 2016 - 6:44 AM

Users choose PostgreSQL or Oracle because they have brains to think with,
because they can determine whether it handles their problems; they can judge
the pros and cons of different databases and know very clearly what they need; not like you,
a big mouth with no mind that can only repeat the same few things.

As for the original poster, whether or not he modifies PostgreSQL, why would he need to
report to you? Do you even understand his contribution and his work? If you cannot read it,
does that mean there is no contribution? Explaining it to you would be casting pearls
before swine; what's the difference?! I guess the owner here, and everyone else,
will just give you a contemptuous disregard.

Fear of new things comes from a lack of vitality. And your fear, I think, comes from
not having a good command of PostgreSQL, nor of Oracle (or anything else); you
are just playing the parrot, so you worry about your job.
Oh, good luck.

Posted by Cynthia at May 08, 2016 - 6:46 AM

I use MSSQL and MySQL. PostgreSQL and MSSQL have many points in common. And PostgreSQL will have partitioning in a RELEASE version; that is very good. I only started learning PgSQL yesterday, so if I'm wrong, please don't throw bricks too hard.

Posted by Amos at December 22, 2016 - 6:25 PM

Fanboy, quite the fanboy. Doesn't dare face his own product's shortcomings.

If PG could do everything, PG would not be PG.

The user's point of view is the most important one. Where PG can't do something, or doesn't do it well, saying so is not picking on PG, yet this fanboy acts as if you were trying to fight him.

You always say "read the documentation, read the documentation". If reading the documentation solved the problem, would I need to come here for an answer? Why not tell me to go read the source code while you're at it? Can you understand the source yourself? How much of the SQL92 standard do you understand?

What a tiresome fanboy.

Posted by Oliver at January 01, 2017 - 9:42 PM

Meaningful and rewarding; thank you.

Posted by Ahern at January 03, 2017 - 7:12 PM

I have been studying the PgSQL documentation recently; it bears directly on the phenomenon discussed in this thread, so I'm posting my study notes.

To keep concurrent reads consistent while data is being modified, when PgSQL performs an UPDATE or DELETE it does not modify or remove the previous versions of the rows.
Multiple versions of the data are retained in the data file until the user runs VACUUM, which finally removes the 'dead row versions' from the table.

VACUUM removes dead row versions and makes their space available for reuse, but it does not release the space back to the operating system.
It releases a small part of the space only under special circumstances (in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained).

VACUUM FULL, by contrast, does shrink the table at the operating-system level. It is implemented by creating a new file for the table and then deleting the original one; that is to say, extra free space must be available to the operating system to perform this operation.
As the documentation puts it, VACUUM FULL actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.
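A hedged sketch of the difference (the table is made up; pg_relation_size needs 8.1 or later, and autovacuum is assumed quiet during the test):

    CREATE TABLE notes_demo AS
        SELECT g AS id, repeat('x', 100) AS payload
          FROM generate_series(1, 100000) g;
    DELETE FROM notes_demo WHERE id % 2 = 0;   -- half the rows become dead
    VACUUM notes_demo;                         -- space becomes reusable...
    SELECT pg_relation_size('notes_demo');     -- ...but the file stays the same size
    VACUUM FULL notes_demo;                    -- rewrites the table file
    SELECT pg_relation_size('notes_demo');     -- roughly halved: no dead space left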

Posted by Cliff at January 03, 2017 - 9:04 PM

So not reclaiming disk space on delete is a disadvantage???
I don't know....
I think it's a deliberate strategy of the database...
rather than handing a large block of disk back, it keeps reusing what was freed a little earlier..

What about solid state disks; any thoughts there??????

Posted by Valentine at January 12, 2017 - 7:34 PM

Well said; that's the answer~

Posted by Yvette at January 12, 2017 - 8:21 PM