In my Oracle 19c database, I would like to create one trigger per table that handles the journalling for insert, update and delete. Every trigger should be identical, except for its name and the table reference. This way, I can add or change columns without having to change the trigger.
The journalling table (suffix _JN) consists of the columns OLD_VALUE and NEW_VALUE, which store the row as JSON in CLOB columns. The journalling metadata, such as sysdate and user, should be added in a single procedure that is called from all the triggers. This way, if I want to change how my data is journalled, I only have to change one procedure.
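For reference, the journal table looks roughly like this (derived from the procedure further down; exact datatypes are my best guess):

create table alhi_test_jn
( operation     varchar2(10)
, date_modified date
, old_value     clob check (old_value is json)
, new_value     clob check (new_value is json)
);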
I got stuck on a few problems:
- :new and :old cannot be referenced inside a procedure that is called from a trigger.
- :new and :old cannot be used in dynamic SQL.
- I can loop through the columns with USER_TAB_COLUMNS, but I haven't found a way to pass the :new value into the JSON without hardcoding the column names.
The JSON should contain the column names together with the inserted column values, e.g. {"ENAME":"KING"}.
Is there someone who can help me out, or point me in the right direction?
Thanks in advance,
Allard
The procedure is as simple as this; it is called from the triggers on all tables:
-- Package specification (needed for the body below to compile)
create or replace package alhi_test_pck is
  procedure log( p_operation varchar2
               , p_old_value clob  -- JSON
               , p_new_value clob  -- JSON
               );
end alhi_test_pck;
/

create or replace package body alhi_test_pck is
  -- All journalling metadata (sysdate, ...) is added in this one place
  procedure log( p_operation varchar2
               , p_old_value clob  -- JSON
               , p_new_value clob  -- JSON
               )
  is
  begin
    insert into alhi_test_jn
      ( operation
      , date_modified
      , old_value
      , new_value
      )
    values
      ( p_operation
      , sysdate
      , p_old_value
      , p_new_value
      );
  end log;
end alhi_test_pck;
/
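For illustration, this is what a per-table trigger looks like today, with every column hardcoded (hypothetical table alhi_emp with columns empno and ename); this per-column code is exactly what I want to avoid having to maintain:

create or replace trigger alhi_emp_jn_trg
after insert or update or delete on alhi_emp
for each row
declare
  l_old clob;
  l_new clob;
begin
  if updating or deleting then
    l_old := json_object( 'EMPNO' value :old.empno
                        , 'ENAME' value :old.ename
                          returning clob );
  end if;
  if inserting or updating then
    l_new := json_object( 'EMPNO' value :new.empno
                        , 'ENAME' value :new.ename
                          returning clob );
  end if;
  alhi_test_pck.log( case when inserting then 'INSERT'
                          when updating  then 'UPDATE'
                          else 'DELETE' end
                   , l_old
                   , l_new );
end;
/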
Are you certain you want to do this yourself? Oracle provides functionality to track changes to tables automatically via Flashback Data Archive / Flashback Time Travel. That's going to be a lot more performant than writing your own triggers.
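For example (the archive name, tablespace, retention and table are illustrative):

-- Sketch: let Oracle track the history instead of hand-written triggers
create flashback archive alhi_fda
  tablespace fda_ts
  retention 5 year;

alter table alhi_emp flashback archive alhi_fda;

-- Historical queries then come for free:
select *
  from alhi_emp
  as of timestamp (systimestamp - interval '1' hour);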
Are you sure about how you want to store the data? Saving historical information as JSON is going to generate a ton of data that is going to be very painful to query. The JSON format will require quite a bit more space than a relational layout, since you're presumably duplicating the column names in every JSON document. And trying to create enough JSON indexes on the journal table to make audit queries run acceptably fast would be a massive challenge.
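To give an idea, even a simple audit question ("when did ENAME change?") already looks like this against the journal table from your question (the JSON paths are illustrative):

select date_modified
     , json_value(old_value, '$.ENAME') as old_ename
     , json_value(new_value, '$.ENAME') as new_ename
  from alhi_test_jn
 where operation = 'UPDATE'
   and json_value(old_value, '$.ENAME')
       <> json_value(new_value, '$.ENAME');

And that is a full scan of the CLOBs unless you add a function-based index or JSON search index for every column you ever want to audit.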
If you do want to roll your own solution and you are committed to storing the data this way, the best option would be to write some code that dynamically generates the trigger definitions. That code would loop through user_tab_columns, and it would have to be re-run every time you add or remove a column on a table, because you realistically can't do that at runtime. If you really wanted, you could have a DDL trigger that submits a job to regenerate the trigger whenever columns are added. Of course, you'd have to embed the logic for how to transform a row of the table into JSON in the code that generates the trigger. So if you wanted to change that logic, you'd change it in one place and then regenerate the triggers on all the tables. A minimal sketch of such a generator is below.
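Something along these lines, assuming the alhi_test_pck.log procedure from your question (the procedure name, the _JN_TRG naming convention and the varchar2 buffers are illustrative, not production-ready):

create or replace procedure generate_jn_trigger( p_table_name varchar2 )
is
  l_old_cols varchar2(32767);
  l_new_cols varchar2(32767);
  l_sep      varchar2(2);
begin
  -- Build one "'COL' value :old.col" / "'COL' value :new.col" entry per column
  for c in ( select column_name
               from user_tab_columns
              where table_name = upper(p_table_name)
              order by column_id )
  loop
    l_old_cols := l_old_cols || l_sep
               || '''' || c.column_name || ''' value :old.' || c.column_name;
    l_new_cols := l_new_cols || l_sep
               || '''' || c.column_name || ''' value :new.' || c.column_name;
    l_sep := ', ';
  end loop;

  -- :old/:new here are trigger source text, not binds of the dynamic DDL
  execute immediate
       'create or replace trigger ' || p_table_name || '_jn_trg '
    || 'after insert or update or delete on ' || p_table_name || ' '
    || 'for each row '
    || 'declare '
    || '  l_old clob; '
    || '  l_new clob; '
    || 'begin '
    || '  if updating or deleting then '
    || '    l_old := json_object(' || l_old_cols || ' returning clob); '
    || '  end if; '
    || '  if inserting or updating then '
    || '    l_new := json_object(' || l_new_cols || ' returning clob); '
    || '  end if; '
    || '  alhi_test_pck.log( case when inserting then ''INSERT'' '
    || '                          when updating  then ''UPDATE'' '
    || '                          else ''DELETE'' end '
    || '                   , l_old, l_new ); '
    || 'end;';
end generate_jn_trigger;
/

You'd call it once per table, and again after every column change:

begin
  generate_jn_trigger('ALHI_EMP');  -- hypothetical table
end;
/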