There's an aspect of PostgreSQL disk space allocation for large objects that I don't understand.
More specifically, the reported space statistics don't match the allocation I calculate from the tables involved.
PostgreSQL version: 12
I have 5 tables in my database that use LOB fields, implemented as oid columns referencing large objects.
If I execute this query
select pg_database_size ('<db_name>')
the DBMS tells me the database size is 2.8 GB.
If I execute this query
SELECT table_schema || '.' || table_name AS table_full_name,
       pg_size_pretty(pg_total_relation_size('"' || table_schema || '"."' || table_name || '"')) AS size
FROM information_schema.tables
ORDER BY pg_total_relation_size('"' || table_schema || '"."' || table_name || '"') DESC;
the DBMS tells me the large objects table (pg_largeobject) is 2.6 GB.
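For completeness, this is the query I would use to check the large object catalog directly (assuming all large object data lives in pg_largeobject and its index):

-- total on-disk size of pg_largeobject, including its index
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));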
Dry-running vacuumlo, I can find which tables are using large objects; there are 5 of them:
- string_value in al_system_conf
- email_body in als_email_record
- body in als_mail_history
- content in als_resource
- doc_content in pm_document
Considering that large objects are stored in chunks of at most 2048 bytes (one row of pg_largeobject per chunk), I can estimate the space consumed by each table from the number of chunks its large objects occupy.
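As a sketch of how I compute that estimate, this is the query I run for one of the columns listed above (pm_document.doc_content; the other four columns are analogous, and reading pg_largeobject directly normally requires superuser rights):

-- approximate space taken in pg_largeobject by the large objects referenced from pm_document.doc_content
SELECT pg_size_pretty(count(*) * 2048) AS approx_allocated
FROM pg_largeobject lo
JOIN pm_document d ON d.doc_content = lo.loid;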
If I dry-run vacuumlo, it also says there are no orphaned large objects.
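For reference, the dry run I'm referring to is along these lines (-n prevents any removal, -v prints what is found):

vacuumlo -n -v <db_name>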
So, how can I find out how the 2.6 GB are actually allocated?
And how can I free that space?