11. Slony-I Trigger Handling

Slony-I has had two "flavours" of trigger handling:

  • In versions up to 1.2, PostgreSQL had no awareness of replication; as a result, Slony-I needed to "hack" the system catalog in order to deactivate, on subscribers, triggers that ought not to run.

This had a number of somewhat painful side-effects, including:

  • Corruption of the system catalog on subscribers: existing triggers that needed to be hidden were "hacked", via pg_catalog.pg_trigger, to point to the index being used by Slony-I as the table's "primary key".

    The very same thing was true for rules.

    This had the side-effect that pg_dump could not be used to pull proper schemas from subscriber nodes.

  • It introduced the need to take out exclusive locks on all replicated tables when processing Section 17, as the triggers on each replicated table needed to be dropped and re-added during the course of processing.

  • In PostgreSQL version 8.3 and later, triggers and rules can have their behaviour altered via ALTER TABLE, using any of the following trigger- and rule-related options:

  • DISABLE TRIGGER trigger_name

  • ENABLE TRIGGER trigger_name


  • ENABLE REPLICA TRIGGER trigger_name

  • ENABLE ALWAYS TRIGGER trigger_name

  • DISABLE RULE rewrite_rule_name

  • ENABLE RULE rewrite_rule_name

  • ENABLE REPLICA RULE rewrite_rule_name

  • ENABLE ALWAYS RULE rewrite_rule_name

A new GUC variable, session_replication_role, controls whether the session is in origin, replica, or local mode; in combination with the enabling/disabling options above, this determines whether or not a given trigger function actually runs.
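As an illustration of how these pieces fit together (the table name accounts and trigger name audit_trg below are hypothetical):

```sql
-- Hypothetical names: table "accounts", user trigger "audit_trg".
-- Make a trigger fire only when the session is in replica mode:
ALTER TABLE accounts ENABLE REPLICA TRIGGER audit_trg;

-- Make a trigger fire regardless of the session's replication role:
ALTER TABLE accounts ENABLE ALWAYS TRIGGER audit_trg;

-- Switch the current session's role; this is what a slon process does
-- while applying replicated data on a subscriber:
SET session_replication_role TO replica;
```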

We may characterize when triggers fire, under Slony-I replication, based on the following table; the same rules apply to PostgreSQL rules.

Table 1. Trigger Behaviour

Trigger Form           | When Established | Log Trigger            | denyaccess Trigger     | Action - origin | Action - replica | Action - local
-----------------------|------------------|------------------------|------------------------|-----------------|------------------|---------------
DISABLE TRIGGER        | User request     | disabled on subscriber | enabled on subscriber  | does not fire   | does not fire    | does not fire
ENABLE TRIGGER         | Default          | enabled on subscriber  | disabled on subscriber | fires           | does not fire    | fires
ENABLE REPLICA TRIGGER | User request     | inappropriate          | inappropriate          | does not fire   | fires            | does not fire
ENABLE ALWAYS TRIGGER  | User request     | inappropriate          | inappropriate          | fires           | fires            | fires

Correspondingly, there are now several ways in which Slony-I interacts with this. The interesting cases are:

  • Before replication is set up, every database starts out in "origin" status, and, by default, all triggers are of the ENABLE TRIGGER form, so they all run, as is normal in a system uninvolved in replication.

  • When a Slony-I subscription is set up, on the origin node, both the logtrigger and denyaccess triggers are added, the former being enabled, and running, the latter being disabled, so it does not run.

    From a locking perspective, each SLONIK SET ADD TABLE request will need to briefly take out an exclusive lock on each table as it attaches these triggers, which is much the same as has always been the case with Slony-I.

  • On the subscriber, the subscription process will add the same triggers, but with the polarities "reversed", to protect data from accidental corruption on subscribers.

    From a locking perspective, again, there is not much difference from earlier Slony-I behaviour, as the subscription process, due to running TRUNCATE, copying data, and altering table schemas, requires extensive exclusive table locks, and the changes in trigger behaviour do not change those requirements.

    However, note that the ability to enable and disable triggers in a PostgreSQL-supported fashion means that we have had no need to "corrupt" the system catalog, so we have the considerable advantage that pg_dump may be used to draw a completely consistent backup against any node in a Slony-I cluster.
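    In terms of the ALTER TABLE forms above, the trigger polarities on the two kinds of node can be sketched as follows (the cluster name "mycluster" and table "accounts" are hypothetical, and Slony-I manages these settings itself):

```sql
-- Sketch only; do not issue these by hand on a live cluster.
-- On the origin: capture changes, permit application writes.
ALTER TABLE accounts ENABLE TRIGGER _mycluster_logtrigger;
ALTER TABLE accounts DISABLE TRIGGER _mycluster_denyaccess;

-- On a subscriber: do not capture changes, reject ordinary writes.
-- (The slon process escapes denyaccess by running in replica mode.)
ALTER TABLE accounts DISABLE TRIGGER _mycluster_logtrigger;
ALTER TABLE accounts ENABLE TRIGGER _mycluster_denyaccess;
```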

  • If you take a pg_dump of a Slony-I node, and drop out the Slony-I namespace, this now cleanly removes all Slony-I components, leaving the database, including its schema, in a "pristine", consistent fashion, ready for whatever use may be desired.
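    A minimal sketch of that procedure, assuming a database "mydb" and a cluster named "mycluster" (both names hypothetical):

```shell
# Hypothetical names throughout; requires a running PostgreSQL server.
pg_dump -h subscriber-host mydb > mydb.sql           # consistent dump of the node
createdb newdb
psql -d newdb -f mydb.sql
psql -d newdb -c 'DROP SCHEMA "_mycluster" CASCADE;' # remove Slony-I components
```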

  • Section 17 is now performed in quite a different way: rather than altering each replicated table to "take it out of replicated mode", Slony-I instead simply shifts into the local status for the duration of this event.

    On the origin, this deactivates the logtrigger trigger.

    On each subscriber, this deactivates the denyaccess trigger.

    This may be expected to make DDL changes enormously less expensive: rather than needing to take out exclusive locks on all replicated tables (as was mandated by dropping and re-adding the Slony-I-created triggers), the only tables locked are those that the DDL script specifically acts on.
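    The effect can be sketched in SQL terms (the table name "accounts" is hypothetical); only the table being altered is locked:

```sql
-- Sketch of applying DDL while the replication triggers are quiesced.
SET session_replication_role TO local;      -- logtrigger/denyaccess do not fire
ALTER TABLE accounts ADD COLUMN note text;  -- locks only "accounts"
SET session_replication_role TO origin;     -- back to normal operation
```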

  • At the time of invoking SLONIK MOVE SET against the former origin, Slony-I must transform that node into a subscriber, which requires dropping the lockset triggers, disabling the logtrigger triggers, and enabling the denyaccess triggers.

    At about the same time, when processing SLONIK MOVE SET against the new origin, Slony-I must transform that node into an origin, which requires disabling the formerly active denyaccess triggers, and enabling the logtrigger triggers.

    From a locking perspective, this will not behave differently from older versions of Slony-I; to disable and enable the respective triggers requires taking out exclusive locks on all replicated tables.

  • Similarly to SLONIK MOVE SET, SLONIK FAILOVER transforms a subscriber node into an origin, which requires disabling the formerly active denyaccess triggers, and enabling the logtrigger triggers. The locking implications are again, much the same, requiring an exclusive lock on each such table.