[E] Fwd: [opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org


Hsia, Andrew
 

Anil,

I tested the helm chart in a k8s deployment, but only in standalone mode.
I recall Rahul made some modifications to deploy it in cluster mode.

On Wed, Jul 27, 2022 at 5:05 PM Anil Shashikumar Belur <abelur@...> wrote:
Hi Andrew and Rahul:

I remember we discussed these topics in the ODL containers and helm charts meetings.
Do we know whether the expected configuration would work with the ODL-on-K8s cluster setup, or whether it requires configuration changes?

Cheers,
Anil 

---------- Forwarded message ---------
From: Group Notification <noreply@...>
Date: Wed, Jul 27, 2022 at 9:04 PM
Subject: [opendaylight-dev] Message Approval Needed - rohini.ambika@... posted to dev@...
To: <odl-mailman-owner@...>


A message sent to the group https://lists.opendaylight.org/g/dev from rohini.ambika@... needs to be approved because the user is a new, moderated member.


Subject: FW: ODL Clustering issue - High Availability

Hi All,

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, I am posting this email to highlight the issues with ODL clustering use cases that we encountered during performance testing.

Details and configuration are as follows:


* Requirement: ODL clustering for high availability (HA) of data distribution
* Environment configuration:

* 3-node k8s cluster (1 master and 3 worker nodes) with 3 ODL instances, one on each worker node
* CPU: 8 cores
* RAM: 20 GB
* Java heap size: min 512 MB, max 16 GB
* JDK version: 11
* Kubernetes version: 1.19.1
* Docker version: 20.10.7

* ODL features installed to enable clustering:

* odl-netconf-clustered-topology
* odl-restconf-all

* Devices configured: NETCONF devices, all sharing the same schema (tested with 250 devices); a minimal RESTCONF mount sketch is included after this list.
* Use Case:

* Failover / High Availability:

* Expected: If any ODL instance goes down or is restarted due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should re-register the devices and resume operations. Once the affected instance comes back up, it should rejoin the cluster as a member node and register the slave mounts.
* Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount, but fails at a point due to termination of the Akka Cluster Singleton actor. The cluster then goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails. (A sketch for checking the shard/raft state after such a restart is included below.)

* JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
* The Akka configuration of all the nodes is attached. (We increased the gossip-interval to 5s in akka.conf to avoid the Akka ask-timeout (AskTimeoutException) issue when mounting multiple devices at a time.)
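
For reference, below is a minimal sketch of how a NETCONF device can be mounted through RESTCONF in a setup like this. It is a sketch only: the host, port, credentials and node-id are placeholders, and the /rests/data path and netconf-node-topology payload keys follow recent ODL releases (older releases use /restconf/config), so they may need adjusting for the exact version in use.

    # Minimal sketch: mount one NETCONF device via RESTCONF (placeholder endpoint/credentials).
    import requests

    ODL = "http://odl-service:8181"    # placeholder ODL endpoint (e.g. the k8s service)
    AUTH = ("admin", "admin")          # placeholder credentials
    NODE_ID = "device-1"               # placeholder node-id

    payload = {
        "network-topology:node": [{
            "node-id": NODE_ID,
            "netconf-node-topology:host": "10.0.0.5",   # placeholder device address
            "netconf-node-topology:port": 830,
            "netconf-node-topology:username": "admin",
            "netconf-node-topology:password": "admin",
            "netconf-node-topology:tcp-only": False,
            "netconf-node-topology:keepalive-delay": 120,
        }]
    }

    resp = requests.put(
        f"{ODL}/rests/data/network-topology:network-topology/"
        f"topology=topology-netconf/node={NODE_ID}",
        json=payload,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    print("mount request accepted:", resp.status_code)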


Requesting your support to identify whether there is any misconfiguration on our side, or whether there is a known solution for this issue.
Please let us know if any further information is required.
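
To see which member ends up as the Raft leader for a shard after one instance restarts, the shard MBeans can be read over Jolokia. Again a sketch only: it assumes the odl-jolokia feature is installed, and the member names, endpoints and shard name are placeholders that depend on the deployment's akka.conf roles and module-shards.conf.

    # Minimal sketch: read shard status via Jolokia on every member and print the Raft state.
    import requests

    AUTH = ("admin", "admin")          # placeholder credentials
    MEMBERS = {                        # placeholder member name -> endpoint
        "member-1": "http://odl-0.odl:8181",
        "member-2": "http://odl-1.odl:8181",
        "member-3": "http://odl-2.odl:8181",
    }

    for member, base in MEMBERS.items():
        mbean = (
            "org.opendaylight.controller:type=DistributedOperationalDatastore,"
            f"Category=Shards,name={member}-shard-default-operational"
        )
        try:
            reply = requests.get(f"{base}/jolokia/read/{mbean}", auth=AUTH, timeout=10).json()
            value = reply.get("value", {})
            print(member, "RaftState:", value.get("RaftState"), "Leader:", value.get("Leader"))
        except requests.RequestException as exc:
            print(member, "unreachable:", exc)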

Note: We have also tested a single ODL instance, without clustering features enabled, in the K8s cluster. In case of a K8s node failure, the ODL instance is re-scheduled on another available K8s node and operations resume.
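
A simple way to verify that operations resume after an instance is re-scheduled is to poll the operational NETCONF topology until the device reports "connected" again. The sketch below uses placeholder endpoints/credentials and the RFC 8040 style path with the content=nonconfig query parameter used by recent ODL RESTCONF; older releases expose the same data under /restconf/operational.

    # Minimal sketch: poll a device's connection-status until it is "connected" again.
    import time
    import requests

    ODL = "http://odl-service:8181"    # placeholder endpoint
    AUTH = ("admin", "admin")          # placeholder credentials
    NODE_ID = "device-1"               # placeholder node-id

    url = (
        f"{ODL}/rests/data/network-topology:network-topology/"
        f"topology=topology-netconf/node={NODE_ID}?content=nonconfig"
    )

    for _ in range(60):                # poll for up to ~5 minutes
        try:
            node = requests.get(url, auth=AUTH, timeout=10).json()["network-topology:node"][0]
            status = node.get("netconf-node-topology:connection-status")
            print("connection-status:", status)
            if status == "connected":
                break
        except (requests.RequestException, KeyError, ValueError):
            print("node not yet present in the operational datastore")
        time.sleep(5)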


Thanks & Regards,
Rohini

--
Thanks

Andrew