Testing

Discovery testing is done using discovery-server configuration files. Each test uses a specific XML file that exercises one or several discovery features. To automatically launch the tests using colcon, please check the installation section. To manually launch a test, use the following procedure:

Linux:

[BUILD]/install/discovery-server/bin$ . ../../local_setup.bash
[BUILD]/install/discovery-server/bin$ ./discovery-server-X.Y.Z(d) [SOURCES]/discovery-server/resources/xml/test_XXX.xml

Windows:

[BUILD]\install\discovery-server\bin>..\..\local_setup.bat
[BUILD]\install\discovery-server\bin>discovery-server-X.Y.Z(d) [SOURCES]\discovery-server\resources\xml\test_XXX.xml

To view the full discovery information messages and snapshots in the debug configuration, build with colcon passing the additional CMake flag -DLOG_LEVEL_INFO=ON.

A brief description of each test is given below. Note that a detailed explanation of the XML syntax is given in the configuration files section.

test_1_PDP_UDP.xml

This is the simplest scenario: a single server manages the discovery information of four clients. The server prefix and listening ports are given in the UDP server and UDP client profiles. A snapshot is taken after 3 seconds to assess that all clients are aware of each other's existence.

  
  <servers>
    <server name="server" profile_name="UDP server" />
  </servers>

  <clients>
    <client name="client1" profile_name="UDP client"/>
    <client name="client2" profile_name="UDP client"/>
    <client name="client3" profile_name="UDP client"/>
    <client name="client4" profile_name="UDP client"/>
  </clients>

  <snapshots>
    <snapshot time="3">Check all clients met the server and know each other</snapshot>
  </snapshots>
  

The snapshot information output would be something like:

2019-04-24 12:58:36.936 [DISCOVERY_SERVER Info] Snapshot taken at 2019-04-24 12:58:36 description: Check all clients
met the server and know each other
5 participants report the following discovery info:
Participant 1.f.1.30.ac.12.0.0.2.0.0.0|0.0.1.c1 discovered:
                 Participant client2 1.f.1.30.ac.12.0.0.3.0.0.0|0.0.1.c1
                 Participant client3 1.f.1.30.ac.12.0.0.4.0.0.0|0.0.1.c1
                 Participant client4 1.f.1.30.ac.12.0.0.5.0.0.0|0.0.1.c1
                 Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1

Participant 1.f.1.30.ac.12.0.0.3.0.0.0|0.0.1.c1 discovered:
                 Participant client1 1.f.1.30.ac.12.0.0.2.0.0.0|0.0.1.c1
                 Participant client3 1.f.1.30.ac.12.0.0.4.0.0.0|0.0.1.c1
                 Participant client4 1.f.1.30.ac.12.0.0.5.0.0.0|0.0.1.c1
                 Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1

Participant 1.f.1.30.ac.12.0.0.4.0.0.0|0.0.1.c1 discovered:
                 Participant client1 1.f.1.30.ac.12.0.0.2.0.0.0|0.0.1.c1
                 Participant client2 1.f.1.30.ac.12.0.0.3.0.0.0|0.0.1.c1
                 Participant client4 1.f.1.30.ac.12.0.0.5.0.0.0|0.0.1.c1
                 Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1

Participant 1.f.1.30.ac.12.0.0.5.0.0.0|0.0.1.c1 discovered:
                 Participant client1 1.f.1.30.ac.12.0.0.2.0.0.0|0.0.1.c1
                 Participant client2 1.f.1.30.ac.12.0.0.3.0.0.0|0.0.1.c1
                 Participant client3 1.f.1.30.ac.12.0.0.4.0.0.0|0.0.1.c1
                 Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1

Participant 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1 discovered:
                 Participant client1 1.f.1.30.ac.12.0.0.2.0.0.0|0.0.1.c1
                 Participant client2 1.f.1.30.ac.12.0.0.3.0.0.0|0.0.1.c1
                 Participant client3 1.f.1.30.ac.12.0.0.4.0.0.0|0.0.1.c1
                 Participant client4 1.f.1.30.ac.12.0.0.5.0.0.0|0.0.1.c1

This output is only available with the debug binary. In Release mode, we can instead provide a filename to the snapshots tag; an XML file with the same information will then be generated (note that generating an XML file automatically disables validation, so it cannot be used in singleton tests).
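For instance, a minimal sketch of such a configuration (the file attribute shown here is an assumption about the snapshot syntax, not verified against the tool's schema):

    <!-- assumed syntax: dumps the snapshot data to a file instead of validating it -->
    <snapshots file="./test_1_snapshot.xml">
      <snapshot time="3">Check all clients met the server and know each other</snapshot>
    </snapshots>

The generated file would then look like this: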

    <DS_Snapshot timestamp="11684334716598" someone="true">
        <description>Check all clients met the server and know each other</description>
        <ptdb guid_prefix="1.f.74.42.80.35.0.0.2.0.0.0" guid_entity="0.0.1.c1">
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.3.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client2"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.4.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client3"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.5.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client4"/>
            <ptdi guid_prefix="4d.49.47.55.45.4c.5f.42.41.52.52.4f" guid_entity="0.0.1.c1" server="true" alive="true" name="server"/>
        </ptdb>
        <ptdb guid_prefix="1.f.74.42.80.35.0.0.3.0.0.0" guid_entity="0.0.1.c1">
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.2.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client1"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.4.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client3"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.5.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client4"/>
            <ptdi guid_prefix="4d.49.47.55.45.4c.5f.42.41.52.52.4f" guid_entity="0.0.1.c1" server="true" alive="true" name="server"/>
        </ptdb>
        <ptdb guid_prefix="1.f.74.42.80.35.0.0.4.0.0.0" guid_entity="0.0.1.c1">
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.2.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client1"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.3.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client2"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.5.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client4"/>
            <ptdi guid_prefix="4d.49.47.55.45.4c.5f.42.41.52.52.4f" guid_entity="0.0.1.c1" server="true" alive="true" name="server"/>
        </ptdb>
        <ptdb guid_prefix="1.f.74.42.80.35.0.0.5.0.0.0" guid_entity="0.0.1.c1">
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.2.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client1"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.3.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client2"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.4.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client3"/>
            <ptdi guid_prefix="4d.49.47.55.45.4c.5f.42.41.52.52.4f" guid_entity="0.0.1.c1" server="true" alive="true" name="server"/>
        </ptdb>
        <ptdb guid_prefix="4d.49.47.55.45.4c.5f.42.41.52.52.4f" guid_entity="0.0.1.c1">
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.2.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client1"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.3.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client2"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.4.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client3"/>
            <ptdi guid_prefix="1.f.74.42.80.35.0.0.5.0.0.0" guid_entity="0.0.1.c1" server="false" alive="true" name="client4"/>
        </ptdb>
    </DS_Snapshot>

Here we see how all participants reported the discovery of all the others. Note that, because there is no Fast-RTPS discovery callback from a participant to report its own discovery, participants do not report themselves. This must be taken into account when a snapshot is checked. Note, however, that participants do discover themselves when they create a publisher or subscriber because there are callbacks associated with those cases.

test_2_PDP_TCP.xml

This resembles the previous scenario but uses the TCP transport instead of the default UDP one. A single server manages the discovery information of four clients. The server prefix and listening ports are given in the TCP server and TCP client profiles. A snapshot is taken after 3 seconds to assess that all clients are aware of each other's existence.

Specific transport descriptors must be created for the server and the clients:

      
      <transport_descriptor>
        <transport_id>TCPv4_SERVER</transport_id>
        <type>TCPv4</type>
        <listening_ports>
          <port>02049</port>
        </listening_ports>
        <calculate_crc>false</calculate_crc>
        <check_crc>false</check_crc>
      </transport_descriptor>

      <transport_descriptor>
        <transport_id>TCPv4_CLIENT</transport_id>
        <type>TCPv4</type>
        <calculate_crc>false</calculate_crc>
        <check_crc>false</check_crc>
      </transport_descriptor>
      

Client and server participant profiles must reference these transports and discard the builtin ones.

<participant profile_name="TCP client" >
  <rtps>
        <userTransports>
          <transport_id>TCPv4_CLIENT</transport_id>
        </userTransports>
        <useBuiltinTransports>false</useBuiltinTransports>
        ...
  </rtps>
</participant>

<participant profile_name="TCP server" >
  <rtps>
        ....
        <userTransports>
          <transport_id>TCPv4_SERVER</transport_id>
        </userTransports>
        <useBuiltinTransports>false</useBuiltinTransports>
        ...
  </rtps>
</participant>

test_3_PDP_UDP.xml

Here we test the capability of the discovery mechanism to handle late joiners. A single server is created, which manages the discovery information of four clients with different lifespans. The server prefix and listening ports are given in the UDP server and UDP client profiles. A snapshot is taken whenever a participant is created or removed to assess that all entities are aware of it.

  
  <servers>
    <server name="server" profile_name="UDP server" />
  </servers>

  <clients>
    <client name="client1" profile_name="UDP client" removal_time="8"/>
    <client name="client2" profile_name="UDP client" creation_time="2" removal_time="10" />
    <client name="client3" profile_name="UDP client" creation_time="4" removal_time="12" />
    <client name="client4" profile_name="UDP client" creation_time="6" removal_time="14" />
  </clients>

  <snapshots>
    <snapshot time="1">Check server and client1 know each other</snapshot>
    <snapshot time="3">Check server and client1 acknowledge client2 creation</snapshot>
    <snapshot time="5">Check server, client1 and client2 acknowledge client3 creation</snapshot>
    <snapshot time="7">Check server, client1, client2 and client3 acknowledge client4 creation</snapshot>
    <snapshot time="9">Check client1 removal is acknowledged by all remaining</snapshot>
    <snapshot time="11">Check client2 removal is acknowledged by all remaining</snapshot>
    <snapshot time="13">Check client3 removal is acknowledged by all remaining</snapshot>
    <snapshot time="15">Check client4 removal is acknowledged by all remaining</snapshot>
  </snapshots>
  

test_4_PDP_UDP.xml

Here we test the capability of one server to exchange information with another. Two servers are created, each with different associated clients. We take a snapshot to assess that all clients are aware of the existence of the other server's clients. Note that we don't need to modify the previous tests' profiles, as we can rely on the server and client tag attributes to avoid creating redundant boilerplate profiles:

  • The server prefix attribute supersedes the one specified in the profile and uniquely identifies each server.
  • The server ListeningPorts and ServersList tags allow us to link servers together without creating specific server profiles.
  • The client server attribute links a client with its server without requiring a new profile or a ServersList.
  
  <servers>
    <server name="server1" prefix="4D.49.47.55.45.4c.5f.42.41.52.52.4f" profile_name="UDP server" />
    <server name="server2" prefix="4D.49.47.55.45.4c.7e.42.41.52.52.4f" profile_name="UDP server">
      <ListeningPorts>
        <metatrafficUnicastLocatorList>
          <locator>
            <udpv4>
              <address>127.0.0.1</address>
              <port>31569</port>
            </udpv4>
          </locator>
        </metatrafficUnicastLocatorList>
      </ListeningPorts>
      <ServersList>
        <RServer prefix="4D.49.47.55.45.4c.5f.42.41.52.52.4f" />
      </ServersList>    
    </server>
  </servers>

  <clients>
    <client name="client1" profile_name="UDP client" server="4D.49.47.55.45.4c.5f.42.41.52.52.4f"/>
    <client name="client2" profile_name="UDP client" server="4D.49.47.55.45.4c.7e.42.41.52.52.4f" />
    <client name="client3" profile_name="UDP client" server="4D.49.47.55.45.4c.7e.42.41.52.52.4f" />
    <client name="client4" profile_name="UDP client" server="4D.49.47.55.45.4c.7e.42.41.52.52.4f" />
  </clients>

  <snapshots>
    <snapshot time="3">Check all clients known each other</snapshot>
  </snapshots>
  

test_5_EDP_UDP.xml & test_5_EDP_TCP.xml

These tests introduce dummy publishers and subscribers to assess proper EDP discovery operation over the UDP and TCP transports. A server and two clients are created, and each participant (server included) creates publishers and subscribers with different types and topics. At the end, a snapshot is taken to verify that all publishers and subscribers have been reported by all participants. Note that the publisher and subscriber tags have attributes to supersede the topics specified in profiles.

  
  <servers>
    <server name="server" profile_name="TCP server">
        <subscriber topic="topic 1" />
      <publisher topic="topic 2" />
    </server>
  </servers>

  <clients>
    <client name="client1" profile_name="TCP client" listening_port="56858">
        <subscriber /> <!-- defaults to helloworld type -->
        <subscriber topic="topic 2" />
        <subscriber profile_name="Sub 1" />
        <publisher profile_name="Pub 2" />    
    </client>
    <client name="client2" profile_name="TCP client" listening_port="56859">
        <publisher />  <!-- defaults to helloworld type -->
      <publisher topic="topic 1" />
      <publisher profile_name="Pub 1" />
      <subscriber profile_name="Sub 2" />
    </client>
  </clients>

  <snapshots>
    <snapshot time="3">Check all publishers and subscribers are properly discovered by everybody</snapshot>
  </snapshots>
  

Snapshots with EDP information are far more verbose:

2019-04-24 14:52:44.300 [DISCOVERY_SERVER Info] Snapshot taken at 2019-04-24 14:52:44 description: Check all
publishers and subscribers are properly discovered by everybody
3 participants report the following discovery info:
Participant 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.c1 discovered:
         Participant 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.c1 has:
                1 publishers:
                        Publisher 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                3 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.2.4 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.3.4 TypeName: sample_type_2 TopicName: topic_2
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.4.4 TypeName: sample_type_1 TopicName: topic_1

         Participant client2 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.c1 has:
                3 publishers:
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.3 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.2.3 TypeName: sample_type_1 TopicName: topic_1
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.3.3 TypeName: sample_type_1 TopicName: topic_1
                1 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.3.0.0.0|0.0.4.4 TypeName: sample_type_2 TopicName: topic_2

         Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1 has:
                1 publishers:
                        Publisher 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                1 subscribers:
                        Subscriber 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.2.4 TypeName: sample_type_1 TopicName: topic_1


Participant 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.c1 discovered:
         Participant client1 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.c1 has:
                1 publishers:
                        Publisher 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                3 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.2.4 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.3.4 TypeName: sample_type_2 TopicName: topic_2
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.4.4 TypeName: sample_type_1 TopicName: topic_1

         Participant 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.c1 has:
                3 publishers:
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.3 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.2.3 TypeName: sample_type_1 TopicName: topic_1
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.3.3 TypeName: sample_type_1 TopicName: topic_1
                1 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.3.0.0.0|0.0.4.4 TypeName: sample_type_2 TopicName: topic_2

         Participant server 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1 has:
                1 publishers:
                        Publisher 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                1 subscribers:
                        Subscriber 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.2.4 TypeName: sample_type_1 TopicName: topic_1


Participant 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1 discovered:
         Participant client1 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.c1 has:
                1 publishers:
                        Publisher 1.f.1.30.64.47.0.0.2.0.0.0|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                3 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.2.4 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.3.4 TypeName: sample_type_2 TopicName: topic_2
                        Subscriber 1.f.1.30.64.47.0.0.2.0.0.0|0.0.4.4 TypeName: sample_type_1 TopicName: topic_1

         Participant client2 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.c1 has:
                3 publishers:
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.1.3 TypeName: HelloWorld TopicName: HelloWorldTopic
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.2.3 TypeName: sample_type_1 TopicName: topic_1
                        Publisher 1.f.1.30.64.47.0.0.3.0.0.0|0.0.3.3 TypeName: sample_type_1 TopicName: topic_1
                1 subscribers:
                        Subscriber 1.f.1.30.64.47.0.0.3.0.0.0|0.0.4.4 TypeName: sample_type_2 TopicName: topic_2

         Participant 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.c1 has:
                1 publishers:
                        Publisher 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.1.3 TypeName: sample_type_2 TopicName: topic_2
                1 subscribers:
                        Subscriber 4d.49.47.55.45.4c.5f.42.41.52.52.4f|0.0.2.4 TypeName: sample_type_1 TopicName: topic_1

test_6_EDP_UDP.xml

Here we test how the discovery mechanism handles EDP late joiners. It's the same scenario, with a server and two clients with different lifespans. Each participant (server included) creates publishers and subscribers with different lifespans, types, and topics. Snapshots are taken whenever an endpoint is created or destroyed to assess that every participant shares the same discovery info.

  
  <servers>
    <server name="server" profile_name="UDP server" creation_time="1" >
      <subscriber topic="topic 1" creation_time="3" />
      <publisher topic="topic 2" creation_time="5" />
    </server>
  </servers>

  <clients>
    <client name="client1" profile_name="UDP client" removal_time="16">
      <subscriber creation_time="7" removal_time="14" /> <!-- defaults to helloworld type -->
      <subscriber topic="topic 2" creation_time="7" removal_time="14" />
      <subscriber profile_name="Sub 1" creation_time="7" removal_time="14" />
      <publisher profile_name="Pub 2" creation_time="12" />    
    </client>
  <client name="client2" profile_name="UDP client" creation_time="9">
      <publisher creation_time="10" />  <!-- defaults to helloworld type -->
      <publisher topic="topic 1" creation_time="10" />
      <publisher profile_name="Pub 1" creation_time="10" />
      <subscriber profile_name="Sub 2" creation_time="10" />
    </client>
  </clients>

  <snapshots>
    <snapshot time="2">Check client1 detects the server</snapshot>
    <snapshot time="4">Check client1 detects server's subscriber</snapshot>
    <snapshot time="6">Check client1 detects server's publisher</snapshot>
    <snapshot time="8">Check server detects client1's subscribers</snapshot>
    <snapshot time="11">Check server and client1 detect client2 and its entities</snapshot>
    <snapshot time="13">Check everybody detects client1's new publisher</snapshot>
    <snapshot time="15">Check everybody detects client1's subscribers' removal</snapshot>
    <snapshot time="17" someone="false">Check server and client2 detect client1's removal</snapshot>
  </snapshots>
  

test_7_PDP_UDP.xml

Here we test how discovery handles server shutdown and reboot. This is a clean shutdown made through the Fast RTPS API Domain::removeParticipant. Each time the server dies, it notifies all its clients, which automatically begin pinging the server again in order to reconnect when it is rebooted. Snapshots check that clients are aware of the server's absence after shutdown and of its presence after reboot.

  
  <servers>
    <server name="server" profile_name="UDP server" removal_time="2" />
    <server name="server" profile_name="UDP server" creation_time="4" removal_time="6" />
    <server name="server" profile_name="UDP server" creation_time="8" removal_time="10" />
  </servers>

  <clients>
    <client name="client1" profile_name="UDP client" />
    <client name="client2" profile_name="UDP client" />
    <client name="client3" profile_name="UDP client" />
    <client name="client4" profile_name="UDP client" />
  </clients>

  <snapshots>
    <snapshot time="1">Check server-clients awareness</snapshot>
    <snapshot time="3">Check server demise has been reported to all clients</snapshot>
    <snapshot time="5">Check server-clients awareness has been recovered</snapshot>
    <snapshot time="7">Check server demise has been reported to all clients</snapshot>
    <snapshot time="9">Check server-clients awareness has been recovered</snapshot>
    <snapshot time="11">Check server demise has been reported to all clients</snapshot>
  </snapshots>
  

test_8_lease_client.xml & test_8_lease_server.xml

The standard lease duration mechanism no longer makes sense in the client-server architecture. Clients no longer multicast DATA(p) messages to make all other clients aware of their presence, as in the standard PDP mechanism; thus, these periodic messages can no longer be used to assert participant liveliness. In the client-server architecture:

  • Clients only track their server's liveliness by sending periodic messages to it. If a server is deemed dead by lease duration, its clients must resume pinging it in order to reconnect.
  • Servers track the liveliness of their clients and linked servers by sending periodic messages to them. If a client dies, the server must propagate a DATA(p[UD]) for that client over its PDP network. This way, all of the server's clients share a lease duration capability.
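As a rough illustration of how lease periods could be tightened for such a test, the sketch below shortens the builtin lease settings in a participant profile. The exact element names for durations vary between Fast-RTPS versions, so both the tags and the values here are assumptions rather than the actual test profile:

    <participant profile_name="UDP client short lease">
      <rtps>
        <builtin>
          <!-- assumed tags: how long peers wait before declaring this participant dead -->
          <leaseDuration>
            <sec>6</sec>
          </leaseDuration>
          <!-- assumed tags: how often liveliness announcements are sent -->
          <leaseAnnouncement>
            <sec>2</sec>
          </leaseAnnouncement>
        </builtin>
      </rtps>
    </participant>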

In order to test this, a Python script is used to launch two discovery-server instances:

  • A server with several clients. This instance takes a snapshot at the beginning and another at the end.
  • A client that references the server of the first instance. This process is killed from Python between the snapshots.

The first snapshot must show that all clients (the remote one included) know each other. After killing the second process (and thus its client), the server must remove the client's proxy when its lease duration times out and report this to all the other clients. The second snapshot must show that all participants have removed the remote client from their discovery databases.
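A hedged sketch of what these two configuration files might contain, following the syntax of the earlier tests (the names, prefixes, and times below are illustrative assumptions, not the contents of the actual test files):

    <!-- test_8_lease_server.xml (sketch): a server plus local clients, snapshots at start and end -->
    <servers>
      <server name="server" prefix="4D.49.47.55.45.4c.5f.42.41.52.52.4f" profile_name="UDP server" />
    </servers>

    <clients>
      <client name="client1" profile_name="UDP client" />
      <client name="client2" profile_name="UDP client" />
    </clients>

    <snapshots>
      <snapshot time="3">Check all clients, the remote one included, know each other</snapshot>
      <snapshot time="30">Check the remote client was removed by lease duration timeout</snapshot>
    </snapshots>

    <!-- test_8_lease_client.xml (sketch): a lone client referencing the other instance's server;
         its process is killed from Python between the two snapshots -->
    <clients>
      <client name="remote_client" profile_name="UDP client" server="4D.49.47.55.45.4c.5f.42.41.52.52.4f" />
    </clients>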