I'm having a problem and need some advice. I'm normally a developer, but with recent staff changes at my company I'm now the sole IT person, so I'm having to branch into a lot of unknown areas and really need some help.
We are running PostgreSQL 8.3. Autovacuum is trying to vacuum a large object table (pg_catalog.pg_largeobject) to prevent transaction ID wraparound, and I think I understand the basics of what that means. The problem is, this table is 750 GB with 452 million rows. The autovacuum run is writing to disk a lot and eating up disk space (it consumed our last 250 GB of 1 TB yesterday). After an emergency outage, we are back up and running with 1100 GB of space and 100 GB free. However, once PostgreSQL was back up, it kicked off the autovacuum process again. If I kill the backend running it (which I'm sure is not recommended), it just restarts.
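For reference, this is the query I've been using to watch the table's size and its transaction ID age (I pieced it together from the docs, so correct me if it's measuring the wrong thing):

```sql
-- Total on-disk size of pg_largeobject (including indexes/TOAST) and the
-- age of its oldest transaction ID; autovacuum forces an anti-wraparound
-- vacuum once relfrozenxid gets old enough.
SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject')) AS total_size,
       age(relfrozenxid) AS xid_age
FROM pg_class
WHERE oid = 'pg_catalog.pg_largeobject'::regclass;
```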
So here are my questions:
1) For that table, how much space would autovacuum need to finish? How do I determine this?
2) Is there a better way to configure the server to handle this situation so it doesn't require ridiculous amounts of disk space whenever it needs to do this?
3) If not, how would you propose fixing this problem?
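In case it helps with question 2, here are the settings I assume are relevant, pulled from pg_settings (names taken from the 8.3 docs; I haven't touched any of them, so they should all be defaults):

```sql
-- Settings that, as far as I can tell, govern when the anti-wraparound
-- vacuum fires and how aggressively (and memory-hungrily) it runs.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('autovacuum_freeze_max_age',
               'vacuum_freeze_min_age',
               'maintenance_work_mem',
               'vacuum_cost_delay');
```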
I am not a DBA and do not have Linux server administration experience; I'm just a developer being asked to wear many hats. I'm trying to get a DBA consultant in to help resolve the problem, but the company is pushing back. They don't seem to understand the severity of the problem, despite my best efforts.
Suggestions? Comments? Any advice or guidance you can provide would be greatly appreciated. If you need more info, let me know.