Bug 129 - FAILOVER command does not update sl_subscribe
Summary: FAILOVER command does not update sl_subscribe
Status: RESOLVED DUPLICATE of bug 136
Alias: None
Product: Slony-I
Classification: Unclassified
Component: slonik
Version: devel
Hardware: PC Linux
Importance: low critical
Assignee: Slony Bugs List
URL:
Depends on:
Blocks:
 
Reported: 2010-05-25 14:29 UTC by Steve Singer
Modified: 2010-06-23 06:51 UTC
CC List: 1 user

See Also:


Description Steve Singer 2010-05-25 14:29:10 UTC
This is with 2.0.3

On a cluster where nodes 4 and 5 are cascaded from node 3:

1===>3=====>4
     \
      5

The slonik script
---
FAILOVER(id=3, backup node=1);
echo 'the failover command has completed';
---

Slonik exits normally and appears to execute the script without errors (it prints: <stdin>:13: the failover command has completed)

but then


 select * FROM _disorder_replica.sl_subscribe ;
 sub_set | sub_provider | sub_receiver | sub_forward | sub_active 
---------+--------------+--------------+-------------+------------
       1 |            1 |            2 | t           | t
       1 |            1 |            3 | t           | t
       1 |            3 |            4 | t           | t
       1 |            3 |            5 | t           | t



still shows node 3 as the provider for nodes 4 and 5.
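
For reference, after a successful failover of node 3 to backup node 1, one would expect sl_subscribe to show a surviving node as the provider for nodes 4 and 5; something like the following (illustrative only, assuming node 1 picks up the subscriptions):

 sub_set | sub_provider | sub_receiver | sub_forward | sub_active 
---------+--------------+--------------+-------------+------------
       1 |            1 |            2 | t           | t
       1 |            1 |            4 | t           | t
       1 |            1 |            5 | t           | t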
Comment 1 Steve Singer 2010-06-03 14:55:52 UTC
What I think is happening is as follows:

If more than one node is a direct subscriber of the node being failed over, then slonik will find the subscriber with the most recent sync and call failedNode2() on it.

failedNode2() will post an ACCEPT SET event, which will have to propagate for the failover to finish.

If you do a 

failover(.....)

slonik will finish before the ACCEPT SET event is processed.

but if you do a 

failover(....)
wait for event (......)

and there isn't more than one direct subscriber, then failedNode2() doesn't get called and no ACCEPT SET event is posted.  Furthermore, the wait for event() will fail because there is no event to wait for.

I think the solution from a slonik scripting point of view is to do
failover(....)
sync(id=new_origin)
wait for event(....)

the sync will ensure there is always an event to wait for.  This seems a bit convoluted, though, and isn't documented clearly; a sketch of the full pattern follows.
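
For the cluster above, the full pattern might look like the following. This is only a sketch: it assumes node 1 is the backup node (and thus the new origin), and the wait for event parameters are illustrative.
---
failover (id=3, backup node=1);
sync (id=1);
wait for event (origin=1, confirmed=all, wait on=1);
echo 'failover confirmed on all nodes';
---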

I wonder if we can make this simpler somehow
Comment 2 Steve Singer 2010-06-03 18:32:16 UTC
The approach that I described in my last note won't work.

The slonik script doesn't know which node slonik posted the event into; that node is determined by slonik at runtime.  Issuing a sync against the new origin and waiting for it to be confirmed doesn't ensure that the ACCEPT SET posted against some other node gets confirmed on all nodes.

Also, doing a wait for event(confirmed=all) can be problematic here, since the failed node has not yet been dropped (we can't drop it until the failover properly completes); if that node really failed, it won't be able to confirm any events.
Comment 3 Steve Singer 2010-06-04 09:47:53 UTC
There is a second problem in the scenario described above.

Node 3 has two subscribers (4 and 5) for the set.
Since it has more than one direct subscriber failedNode() doesn't switch the subscriptions but expects slonik to do it.

Slonik in slonik_failed_node() determines num_sets by this query:
 select S.set_id, count(S.set_id)
    from sl_set S, sl_subscribe SUB
   where S.set_id = SUB.sub_set
     and S.set_origin = %d
     and SUB.sub_provider = %d
     and SUB.sub_active
   group by S.set_id
Comment 4 Steve Singer 2010-06-04 09:52:07 UTC
The previous comment should have read:



There is a second problem in the scenario described above.

Node 3 has two subscribers (4 and 5) for the set.
Since it has more than one direct subscriber failedNode() doesn't switch the
subscriptions but expects slonik to do it.

Slonik in slonik_failed_node() determines num_sets by this query:
 select S.set_id, count(S.set_id)
    from sl_set S, sl_subscribe SUB
   where S.set_id = SUB.sub_set
     and S.set_origin = 3
     and SUB.sub_provider = 3
     and SUB.sub_active
   group by S.set_id

The problem is that node 3 is NOT the origin for set 1; it is just a forwarding provider, so the query comes back with nothing.  This means that later on in the function we don't loop over any replication sets (so we never call subscribeSet(...) or failedNode2(...)).
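
To make that concrete: for the cluster above, node 1 is the set origin, so sl_set would look something like this (output illustrative) and the set_origin = 3 predicate matches no rows:

 select set_id, set_origin from _disorder_replica.sl_set;
 set_id | set_origin 
--------+------------
      1 |          1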
Comment 5 Steve Singer 2010-06-09 12:53:34 UTC
The issues described here need the following:

1) If the failover target id is NOT a set origin (but just a provider) then the failover command should be rejected with an error (a sketch of such a check follows this list).

2) As part of a failover we want to mark the failed node as inactive in sl_node, and make it so that WAIT FOR confirmed=all doesn't wait on the failed node to confirm events.

3) slonik needs to remember the sequence number returned by failedNode2 so that it is possible to WAIT FOR that event on the backup node, ensuring it is confirmed by all nodes.  Exactly how a slonik script can wait for it still needs to be figured out.  This won't be done until 2.1.
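
A minimal sketch of the check that point 1 implies, run before accepting the command; the exact query slonik would use is an assumption here:

 -- hypothetical check: node 3 must originate at least one set,
 -- otherwise FAILOVER (id=3, ...) should be rejected with an error
 select count(*) from _disorder_replica.sl_set where set_origin = 3;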
Comment 6 Steve Singer 2010-06-10 12:12:45 UTC
It is also worth noting that if you do the failover to node 3, then do a DROP NODE (id=1, event_node=3), the DROP NODE will delete the FAILOVER_SET event from sl_event on node 3, since that event has node 1 as its simulated origin.  This might mean that node 4 will get the subsequent ACCEPT_SET but not the FAILOVER_SET.
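
That is presumably because the DROP NODE cleanup removes events whose origin is the dropped node; in effect something like the following sketch, assuming the deletion keys on ev_origin as described above:

 -- hypothetical: DROP NODE (id=1, ...) sweeps up node 1's events,
 -- including the FAILOVER_SET event that was posted with node 1
 -- as its simulated origin
 delete from _disorder_replica.sl_event where ev_origin = 1;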
Comment 7 Steve Singer 2010-06-23 06:51:57 UTC
I am marking this bug as a duplicate.  The two issues raised here are better described by bugs 130 and 136.

130: Should address the case of sl_subscribe not showing the new information when the FAILOVER finishes (failing node 1 over to node 3); we need waits.

136: Deals with the case of 3===>1, where node 3 is just a provider/forwarder; we don't want to use the FAILOVER command in this case, we just want the SUBSCRIBE to always work.

*** This bug has been marked as a duplicate of bug 136 ***