
Re: [opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org

Rahul Sharma <rahul.iitr@...>
 

Hello Rohini,

Thank you for the answers.
  1. On the first point: when you say you tried the official Helm charts, which charts are you referring to? Could you share details of how you deployed them, including the parameters you set in values.yaml?
  2. What was the temporary fix that reduced the occurrence of the issue? Can you point to the check-in, or to the configuration-parameter change? That would help in diagnosing a proper fix.
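For reference, the values.yaml overrides in question might look something like the sketch below; every key name here is hypothetical and depends on the chart actually used:

```yaml
# Hypothetical values.yaml overrides for an ODL Helm deployment.
# Key names are illustrative only -- check the chart's own values.yaml.
replicaCount: 3                      # one ODL instance per K8s node

image:
  repository: example/opendaylight  # placeholder image name
  tag: "phosphorus-sr2"             # placeholder tag

cluster:
  enabled: true                     # render akka.conf for a 3-member cluster

resources:
  limits:
    cpu: "8"
    memory: 20Gi

javaOpts: "-Xms512m -Xmx16g"        # matches the heap settings reported in the thread
```

Deployment would then be along the lines of `helm install odl <chart> -f values.yaml`.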
Regards,
Rahul


On Thu, Jul 28, 2022 at 2:21 AM Rohini Ambika <rohini.ambika@...> wrote:

Hi Anil,

 

Thanks for the response.

 

Please find the details below:

 

1.            Is the Test deployment using our Helm charts (ODL Helm Chart)? – We created our own Helm chart for the ODL deployment. We have also tried the use case with the official Helm chart.

2.            I see that the JIRA ticket ( https://jira.opendaylight.org/browse/CONTROLLER-2035 ) is already marked Resolved. Has somebody fixed it in the latest version? – This was a temporary fix from our end, and the failure rate has dropped because of it; however, we still hit the issue when the master node is restarted multiple times.

 

The ODL version used is Phosphorus SR2.

All the configurations were provided and attached in the initial mail.

 

Thanks & Regards,

Rohini

Cell: +91.9995241298 | VoIP: +91.471.3025332

 

From: Anil Shashikumar Belur <abelur@...>
Sent: Thursday, July 28, 2022 5:05 AM
To: Rahul Sharma <rahul.iitr@...>
Cc: Hsia, Andrew <andrew.hsia@...>; Rohini Ambika <rohini.ambika@...>; Casey Cain <ccain@...>; Luis Gomez <ecelgp@...>; TSC <tsc@...>
Subject: Re: [opendaylight-dev] Message Approval Needed - rohini.ambika@... posted to dev@...

 

[**EXTERNAL EMAIL**]

Hi, 

 

I believe they are using the ODL Helm charts and K8s for the cluster setup; that said, I have requested the version of ODL being used.

Rohini: can you provide more details on the ODL version and configuration that Rahul/Andrew requested?

 

On Thu, Jul 28, 2022 at 8:08 AM Rahul Sharma <rahul.iitr@...> wrote:

Hi Anil,

 

Thank you for bringing this up.

 

A couple of questions:

  1. Is the test deployment using our Helm charts (the ODL Helm chart)?
  2. I see that the JIRA ticket ( https://jira.opendaylight.org/browse/CONTROLLER-2035 ) is already marked Resolved. Has somebody fixed it in the latest version?

 

Thanks,
Rahul

 

On Wed, Jul 27, 2022 at 5:05 PM Anil Shashikumar Belur <abelur@...> wrote:

Hi Andrew and Rahul:

 

I remember we discussed these topics in the ODL containers and Helm charts meetings.

Do we know whether the expected configuration works with ODL on a K8s cluster setup, or whether it requires configuration changes?
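For context, the cluster-specific configuration lives mainly in akka.conf. A minimal sketch of the relevant section, assuming the stock ODL actor-system name and hypothetical headless-service DNS names for the pods:

```hocon
odl-cluster-data {
  akka {
    remote {
      artery {
        enabled = on
        transport = tcp
        # Each pod needs a stable, resolvable name; a K8s headless service provides one.
        canonical.hostname = "odl-0.odl-headless.default.svc.cluster.local"
        canonical.port = 2550
      }
    }
    cluster {
      # Same list on every member; the actor-system name is opendaylight-cluster-data.
      seed-nodes = [
        "akka://opendaylight-cluster-data@odl-0.odl-headless.default.svc.cluster.local:2550",
        "akka://opendaylight-cluster-data@odl-1.odl-headless.default.svc.cluster.local:2550",
        "akka://opendaylight-cluster-data@odl-2.odl-headless.default.svc.cluster.local:2550"
      ]
      roles = ["member-1"]   # member-2 / member-3 on the other pods
    }
  }
}
```

The hostnames above are placeholders; a common failure mode on K8s is seed-node addresses that do not match what each pod resolves for itself.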

 

Cheers,

Anil 

 

---------- Forwarded message ---------
From: Group Notification <noreply@...>
Date: Wed, Jul 27, 2022 at 9:04 PM
Subject: [opendaylight-dev] Message Approval Needed - rohini.ambika@... posted to dev@...
To: <odl-mailman-owner@...>

 

A message was sent to the group https://lists.opendaylight.org/g/dev from rohini.ambika@... that needs to be approved because the user is a new, moderated member.


Subject: FW: ODL Clustering issue - High Availability

Hi All,

As presented and discussed at the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, I am posting this email to highlight the issues with ODL clustering use cases encountered during performance testing.

Details and configurations are as follows:


* Requirement: ODL clustering for high availability (HA) of data distribution
* Env configuration:
  * 3-node K8s cluster (1 master & 3 worker nodes) with 3 ODL instances running, one on each node
  * CPU: 8 cores
  * RAM: 20 GB
  * Java heap size: min 512 MB, max 16 GB
  * JDK version: 11
  * Kubernetes version: 1.19.1
  * Docker version: 20.10.7
* ODL features installed to enable clustering:
  * odl-netconf-clustered-topology
  * odl-restconf-all
* Devices configured: NETCONF devices, all with the same schema (tested with 250 devices)
* Use case:
  * Fail-over/high availability:
    * Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should rejoin the cluster as a member node and register the slave mounts.
    * Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes and a new leader is elected. The new leader then tries to re-register the master mount, but fails at a point due to the termination of the Akka Cluster Singleton actor. The cluster therefore goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and new mounts fails.
* JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
* Akka configuration of all the nodes is attached. (We increased gossip-interval to 5s in akka.conf to avoid Akka AskTimedOut issues when mounting multiple devices at a time.)
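For reference, the gossip-interval setting mentioned above sits under akka.cluster in the same file; a sketch, with the surrounding structure following the stock ODL akka.conf layout:

```hocon
odl-cluster-data {
  akka {
    cluster {
      # Akka's default gossip-interval is 1s; raising it to 5s reduces gossip
      # traffic while many devices are being mounted (the AskTimedOut workaround).
      gossip-interval = 5s
    }
  }
}
```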


Requesting your support in identifying any misconfiguration, or any known solution for this issue.
Please let us know if any further information is required.

Note: We have tested a single ODL instance, without the cluster features enabled, in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.


Thanks & Regards,
Rohini






--

- Rahul Sharma





Re: [opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org

Anil Belur
 

Hi, 

I believe they are using the ODL Helm charts and K8s for the cluster setup; that said, I have requested the version of ODL being used.
Rohini: can you provide more details on the ODL version and configuration that Rahul/Andrew requested?




Re: [opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org

Rahul Sharma <rahul.iitr@...>
 

Hi Anil,

Thank you for bringing this up.

A couple of questions:
  1. Is the test deployment using our Helm charts (the ODL Helm chart)?
  2. I see that the JIRA ticket ( https://jira.opendaylight.org/browse/CONTROLLER-2035 ) is already marked Resolved. Has somebody fixed it in the latest version?

Thanks,
Rahul



--
- Rahul Sharma


Re: [E] Fwd: [opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org

Hsia, Andrew
 

Anil,

I tested the Helm chart in a K8s deployment, but only in standalone mode.
I recall Rahul made some modifications to deploy it in cluster mode.



--
Thanks

Andrew


[opendaylight-dev] Message Approval Needed - rohini.ambika@infosys.com posted to dev@lists.opendaylight.org

Anil Belur
 

Hi Andrew and Rahul:

I remember we discussed these topics in the ODL containers and Helm charts meetings.
Do we know whether the expected configuration works with ODL on a K8s cluster setup, or whether it requires configuration changes?

Cheers,
Anil 

---------- Forwarded message ---------
From: Group Notification <noreply@...>
Date: Wed, Jul 27, 2022 at 9:04 PM
Subject: [opendaylight-dev] Message Approval Needed - rohini.ambika@... posted to dev@...
To: <odl-mailman-owner@...>


A message was sent to the group https://lists.opendaylight.org/g/dev from rohini.ambika@... that needs to be approved because the user is new member moderated.

View this message online

Subject: FW: ODL Clustering issue - High Availability

Hi All,

As presented/discussed in the ODL TSC meeting held on 22nd Friday 10.30 AM IST, posting this email to highlight the issues on ODL clustering use cases encountered during the performance testing.

Details and configurations as follows:


* Requirement: ODL clustering for high availability (HA) of data distribution
* Env configuration:

* 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances, one running on each node
* CPU: 8 cores
* RAM: 20 GB
* Java heap size: min 512 MB, max 16 GB
* JDK version: 11
* Kubernetes version: 1.19.1
* Docker version: 20.10.7

* ODL features installed to enable clustering:

* odl-netconf-clustered-topology
* odl-restconf-all

* Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
* Use Case:

* Fail Over/High Availability:

* Expected: If any ODL instance goes down or is restarted due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected master in the re-election should re-register the devices and resume operations. Once the affected instance comes back up, it should rejoin the cluster as a member node and register the slave mounts.
* Observation: When the ODL instance holding the master mount restarts, an election is held among the remaining nodes in the cluster and a new leader is elected. The new leader then attempts to re-register the master mount but fails partway through because the Akka Cluster Singleton actor has been terminated. The cluster therefore goes idle and fails to assign an owner for the device DOM entity, so configuration of already-mounted devices and of new mounts fails.

* JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
* Akka configuration for all the nodes is attached. (We increased gossip-interval to 5s in akka.conf to avoid Akka AskTimeout failures while mounting multiple devices at a time.)
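For context, the gossip-interval change described above would look roughly like this in akka.conf. This is an illustrative sketch only, not the attached configuration: the member names, hostnames, and surrounding settings are typical ODL cluster defaults rather than the exact values used.

```hocon
odl-cluster-data {
  akka {
    cluster {
      # Typical ODL seed-node list (addresses are illustrative).
      seed-nodes = [
        "akka://opendaylight-cluster-data@member-1:2550",
        "akka://opendaylight-cluster-data@member-2:2550",
        "akka://opendaylight-cluster-data@member-3:2550"
      ]
      roles = ["member-1"]

      # Default is 1s; raised to 5s here to reduce Akka ask-timeout
      # failures when mounting many NETCONF devices at once.
      gossip-interval = 5s
    }
  }
}
```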


Requesting your support to identify whether there is any misconfiguration, or any known solution for this issue.
Please let us know if any further information is required.

Note: We have tested a single ODL instance, without clustering features enabled, in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled onto another available K8s node and operations resume.
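To illustrate the note above: with clustering features disabled, a single-replica Deployment is enough for Kubernetes to reschedule the ODL pod after a node failure. The manifest below is a minimal sketch; the names, labels, and image are illustrative, not the actual manifests used.

```yaml
# Illustrative single-instance ODL deployment: with replicas: 1 and no
# cluster features, Kubernetes recreates the pod on another available
# node if the current node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odl-single            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odl-single
  template:
    metadata:
      labels:
        app: odl-single
    spec:
      containers:
        - name: opendaylight
          image: example/opendaylight:phosphorus-sr2   # illustrative image
          ports:
            - containerPort: 8181    # RESTCONF
```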


Thanks & Regards,
Rohini

A complete copy of this message has been attached for your convenience.

To approve this using email, reply to this message. You do not need to attach the original message, just reply and send.

Reject this message and notify the sender.

Delete this message and do not notify the sender.

NOTE: The pending message will expire after 14 days. If you do not take action within that time, the pending message will be automatically rejected.






---------- Forwarded message ----------
From: rohini.ambika@...
To: "dev@..." <dev@...>
Cc: 
Bcc: 
Date: Wed, 27 Jul 2022 11:03:22 +0000
Subject: FW: ODL Clustering issue - High Availability


TSC Meeting for July 28, 2022 at 9 am Pacific

Guillaume Lambert
 

Hello OpenDaylight Community,

 

The next TSC meeting is July 28, 2022 at 9 am Pacific Time.

As usual, the agenda proposal and the connection details for this meeting are available in the wiki

at the following URL:

https://wiki.opendaylight.org/x/FwGdAQ

If you need to add anything, please let me know or add it there.

The meeting minutes will be at the same location after the meeting is over.

 

Best Regards

Guillaume

 

_________________________________________________________________________________________________________________________

Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci.

This message and its attachments may contain confidential or privileged information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified.
Thank you.


Re: [OpenDaylight Infrastructure] [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Luis Gomez
 

OK, release is done and patches are merged now.

On Jul 14, 2022, at 9:07 AM, Ivan Hrasko <ivan.hrasko@...> wrote:

Documentation chain:



From: integration-dev@... <integration-dev@...> on behalf of Daniel de la Rosa <ddelarosa0707@...>
Sent: Saturday, 9 July 2022 5:21
To: Anil Belur; THOUENON Gilles TGI/OLN
Cc: 'integration-dev@...' (integration-dev@...) (integration-dev@...); Release; TSC; OpenDaylight Infrastructure; Andrew Grimberg; Casey Cain; Rudy Grigar; Luis Gomez; LAMBERT Guillaume TGI/OLN
Subject: Re: [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status
 
Thanks Anil. 

Luis, as @LAMBERT Guillaume TGI/OLN  and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts so please proceed with the distribution for Sulfur SR1

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:
Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to be complete for the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release Distribution once step 1. is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [1.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval




ONE Summit CFP reminder

Casey Cain
 

Hello OpenDaylight Community,

ONE Summit North America, LFN's flagship event, returns in person, November 15-16 in Seattle, WA (followed by the Fall LFN Developer & Testing Forum Nov. 17-18)! Speaking submissions are now being accepted through July 29, 2022.


ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing network decision makers across Access, Edge, and Cloud.

 

CFP closes July 29!  Learn more and submit today!

 

We encourage you & your peers to submit to speak, and we hope to see you in person in November!  

 

More details on ONE Summit:

 

For anyone using networking and automation to transform business, whether it’s deploying a 5G network, building government infrastructure, or innovating at their industry’s network edge, the ONE Summit collaborative environment enables peer interaction and learning focused on open source technologies that are redefining the ecosystem. As the network is key to new opportunities across Telecommunications, Industry 4.0, Public and Government Infrastructure, the new paradigm will be open. Come join this interactive and collaborative event, the ONE place to learn, innovate, and create the networks our organizations require. 

Registration is also open, and LFN members receive a 10% discount with code ONE22LFNMEM. 

 

Please direct any questions to events@...


Best,
Casey Cain
Senior Technical Community Architect
Linux Foundation
_________________
WeChat: okaru6
WhatsApp: +1.503.779.4519


Re: [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Ivan Hrasko
 

Documentation chain:

https://git.opendaylight.org/gerrit/c/docs/+/101847/3




From: integration-dev@... <integration-dev@...> on behalf of Daniel de la Rosa <ddelarosa0707@...>
Sent: Saturday, 9 July 2022 5:21
To: Anil Belur; THOUENON Gilles TGI/OLN
Cc: 'integration-dev@...' (integration-dev@...) (integration-dev@...); Release; TSC; OpenDaylight Infrastructure; Andrew Grimberg; Casey Cain; Rudy Grigar; Luis Gomez; LAMBERT Guillaume TGI/OLN
Subject: Re: [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status
 
Thanks Anil. 

Luis, as @LAMBERT Guillaume TGI/OLN  and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts so please proceed with the distribution for Sulfur SR1

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:

Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to be complete for the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release Distribution once step 1. is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [1.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


Re: [release] 2022.09 Chlorine MRI status

Daniel de la Rosa
 



On Tue, Jul 12, 2022 at 5:55 PM Robert Varga <nite@...> wrote:
On 07/07/2022 09:00, Daniel de la Rosa wrote:
>
>
>
> On Wed, Jul 6, 2022 at 4:35 PM Robert Varga <nite@...
> <mailto:nite@...>> wrote:
>
>     Hello everyone,
>
>     Since we are well in the 2022.09 Simultaneous Release (Chlorine), here
>     is a quick summary of where we are at:
>
>     - MRI projects up to and including AAA have released
>     - MSI projects have preliminary patches staged at
>     https://git.opendaylight.org/gerrit/q/topic:chlorine-mri
>     <https://git.opendaylight.org/gerrit/q/topic:chlorine-mri>

These are complete AFAICT. No major headaches. I expect to write up the
docs for that in August (I am on PTO for the rest of the month).

>     - NETCONF is awaiting a bug scrub and the corresponding release. There
>     are quite a few issues to scrub and we also need some amount of code
>     reorg within the repo, which in itself may entail breaking changes.
>     There are quite a few unreviewed patches pending as well. Given the
>     raging summer in the northern hemisphere, I expect netconf-4.0.0
>     release
>     to happen in about 2-3 weeks' time (i.e. last week of July 2022)

This has been scrubbed; there are 7 outstanding issues as of now. Some
of those may be postponed; I'll know more as I dive into them ~2 weeks
from now.

>     - BGPCEP has a few deliverables yet to be finished and the
>     corresponding
>     0.18.0 release being dependent on NETCONF, my working assumption is
>     having the release available mid-August 2022
>
>     As such, everyone running Java should have Java 17 as their default
>     environment. Not only is it cool as $EXPLETIVE, but it is becoming a
>     requirement very soon. 2022.03 Sulfur is handling it just fine (as far
>     as I know) and you cannot interact with 2022.09 Chlorine without it.
>
>     Daniel: is the Chlorine schedule approved? My (imperfect) tracking
>     says
>     it is yet to be voted on.
>
>
> Well I got your approval
>
> https://lists.opendaylight.org/g/release/message/20312
> <https://lists.opendaylight.org/g/release/message/20312>
>
> but I can put out for vote and hopefully get it approved on next TSC
> meeting

That would be great, I do not believe we have it reflected in the docs
project yet...

It is already in the documentation 


but we can edit it if needed.

 

Regards,
Robert


Re: TSC Meeting for July 7, 2022 at 10 pm Pacific

Robert Varga
 

Hello,

just a heads up, I will be on PTO during this meeting's time slot for the remainder of July. I'll be back for the meeting on the 4th of August.

Bye,
Robert

On 06/07/2022 07:23, Guillaume Lambert via lists.opendaylight.org wrote:
Hello OpenDaylight Community,
The next TSC meeting is July 7, 2022 at 10 pm Pacific Time.
As usual, the agenda proposal and the connection details for this meeting are available in the wiki
at the following URL:
https://wiki.opendaylight.org/x/4QCdAQ
If you need to add anything, please let me know or add it there.
The meeting minutes will be at the same location after the meeting is over.
Best Regards
Guillaume


Re: [release] 2022.09 Chlorine MRI status

Robert Varga
 

On 07/07/2022 09:00, Daniel de la Rosa wrote:
On Wed, Jul 6, 2022 at 4:35 PM Robert Varga <nite@... <mailto:nite@...>> wrote:
Hello everyone,
Since we are well in the 2022.09 Simultaneous Release (Chlorine), here
is a quick summary of where we are at:
- MRI projects up to and including AAA have released
- MSI projects have preliminary patches staged at
https://git.opendaylight.org/gerrit/q/topic:chlorine-mri
<https://git.opendaylight.org/gerrit/q/topic:chlorine-mri>
These are complete AFAICT. No major headaches. I expect to write up the docs for that in August (I am on PTO for the rest of the month).

- NETCONF is awaiting a bug scrub and the corresponding release. There
are quite a few issues to scrub and we also need some amount of code
reorg within the repo, which in itself may entail breaking changes.
There are quite a few unreviewed patches pending as well. Given the
raging summer in the northern hemisphere, I expect netconf-4.0.0
release
to happen in about 2-3 weeks' time (i.e. last week of July 2022)
This has been scrubbed; there are 7 outstanding issues as of now. Some of those may be postponed; I'll know more as I dive into them ~2 weeks from now.

- BGPCEP has a few deliverables yet to be finished and the
corresponding
0.18.0 release being dependent on NETCONF, my working assumption is
having the release available mid-August 2022
As such, everyone running Java should have Java 17 as their default
environment. Not only is it cool as $EXPLETIVE, but it is becoming a
requirement very soon. 2022.03 Sulfur is handling it just fine (as far
as I know) and you cannot interact with 2022.09 Chlorine without it.
Daniel: is the Chlorine schedule approved? My (imperfect) tracking
says
it is yet to be voted on.
Well I got your approval
https://lists.opendaylight.org/g/release/message/20312 <https://lists.opendaylight.org/g/release/message/20312>
but I can put out for vote and hopefully get it approved on next TSC meeting
That would be great, I do not believe we have it reflected in the docs project yet...

Regards,
Robert


June LFN DTF Event Survey

Kenny Paul
 

Dear ODL Community Members,

 

If you attended the June DTF, either in-person or remotely, and have not already provided your survey feedback yet, please take a moment to do so.

https://linuxfoundation.surveymonkey.com/r/LFNDTFJune22

 

 

Thanks!

-kenny


Kenny Paul, Sr. Technical Community Architect

  ONAP Project & LFN Governing Board

  kpaul@...,  +1.510.766.5945, US Pacific time zone.

  Find time on my calendar: https://doodle.com/mm/kennypaul/book-a-time

 


Re: [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Daniel de la Rosa
 

Thanks Anil. 

Luis, as @LAMBERT Guillaume TGI/OLN  and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts so please proceed with the distribution for Sulfur SR1

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:
Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to be complete for the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release Distribution once step 1. is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [1.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


Invitation: ODL Pipelines Meeting @ Tue Jul 12, 2022 22:00 - 23:00 (PDT) (tsc@lists.opendaylight.org)

Casey Cain
 

ODL Pipelines Meeting
Please use this registration link to register for the meeting. Once you've registered you will receive a unique Zoom URL to participate in the meeting. If you'd like to have someone else participate, do not share your Zoom URL, please use the URL in this invite:
https://zoom-lfx.platform.linuxfoundation.org/meeting/98955795719

When

Tuesday Jul 12, 2022 ⋅ 22:00 – 23:00 (Pacific Time - Los Angeles)

Location

https://zoom-lfx.platform.linuxfoundation.org/meeting/98955795719


[opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Anil Belur
 

Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to be complete for the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release Distribution once step 1. is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [1.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


Re: [release] 2022.09 Chlorine MRI status

Daniel de la Rosa
 




On Wed, Jul 6, 2022 at 4:35 PM Robert Varga <nite@...> wrote:
Hello everyone,

Since we are well in the 2022.09 Simultaneous Release (Chlorine), here
is a quick summary of where we are at:

- MRI projects up to and including AAA have released
- MSI projects have preliminary patches staged at
https://git.opendaylight.org/gerrit/q/topic:chlorine-mri
- NETCONF is awaiting a bug scrub and the corresponding release. There
are quite a few issues to scrub and we also need some amount of code
reorg within the repo, which in itself may entail breaking changes.
There are quite a few unreviewed patches pending as well. Given the
raging summer in the northern hemisphere, I expect netconf-4.0.0 release
to happen in about 2-3 weeks' time (i.e. last week of July 2022)
- BGPCEP has a few deliverables yet to be finished and the corresponding
0.18.0 release being dependent on NETCONF, my working assumption is
having the release available mid-August 2022

As such, everyone running Java should have Java 17 as their default
environment. Not only is it cool as $EXPLETIVE, but it is becoming a
requirement very soon. 2022.03 Sulfur is handling it just fine (as far
as I know) and you cannot interact with 2022.09 Chlorine without it.

Daniel: is the Chlorine schedule approved? My (imperfect) tracking says
it is yet to be voted on.

Well I got your approval 


but I can put out for vote and hopefully get it approved on next TSC meeting 



 

Regards,
Robert

P.S.: my default JDK is Java 17 and I am encountering zero issues with
it on either 2022.09 or 2022.03 release streams. Please switch to Java
17 if you can and report any issues you encounter.






[integration-dev][it-infrastructure-alerts][notice] ODL Gerrit maintenance window (17:30 Sun, July 10 2022 - 19:30 Sun, July 10 2022 PT)

Anil Belur
 

What: LF will update the ODL Gerrit system to 3.5.1.

When: 17:30 Sun, July 10 2022 - 19:30 Sun, July 10 2022 PT (10:00 Mon, July 11 - 12:00 Mon, July 11, 2022 AEST)

Why: LF will install system updates and update the Gerrit version to 3.5.1.
 
Impact: Users may not be able to access services (Gerrit, Jenkins, Sonar, Nexus) during this time.

Jenkins will be put in shutdown mode before the window starts and any long-running Jenkins jobs _will_ be canceled if they don't complete before the start of the window.
Notices will be posted to the mailing lists and in the #opendaylight channel on LFN-tech slack at the start and end of the maintenance.

Thanks,
Anil Belur


Re: The road to Java 17

Robert Varga
 

On 25/04/2022 16:42, Robert Varga wrote:
On 25/09/2021 00:00, Robert Varga wrote:
Hello yet again,

with not many replies in this thread, here is an update on where we are.

With all this in picture, I believe the proper course in OpenDaylight is to have:
- Sulfur (22.03) supporting both JDK11 and JDK17 at compile-time, with artifacts compatible with JDK11+
- All of Sulfur being validated with JDK17
Both these items are delivered: all projects participating in Sulfur GA verify each patch with both JDK11 and JDK17.
As it stands 2022.03 Sulfur SR1 works just fine with Java 17. Please share your experience, as I am currently tracking no outstanding issues at this time.

- Chlorine (22.09) to require JDK17+
This is now slated for delivery: odlparent/master and yangtools/master both require JDK17 and are taking advantage of JDK17 features. More projects are slated to follow.
2022.09 Chlorine platform components (e.g. MRI projects up to and including NETCONF) now require Java 17 on their master branch. I have done some amount of exploration in other support projects and it seems there are no blockers to adoption.

As such, I believe(*) we are committed to Java 17 for 2022.09 Chlorine Simultaneous Release.

Regards,
Robert

(*) Please switch to Java 17 now and report any issues you find. At this point we are very much committed to Java 17 and the sooner you test, the better experience of this switch all of us will have.


2022.09 Chlorine MRI status

Robert Varga
 

Hello everyone,

Since we are well in the 2022.09 Simultaneous Release (Chlorine), here is a quick summary of where we are at:

- MRI projects up to and including AAA have released
- MSI projects have preliminary patches staged at https://git.opendaylight.org/gerrit/q/topic:chlorine-mri
- NETCONF is awaiting a bug scrub and the corresponding release. There are quite a few issues to scrub and we also need some amount of code reorg within the repo, which in itself may entail breaking changes. There are quite a few unreviewed patches pending as well. Given the raging summer in the northern hemisphere, I expect the netconf-4.0.0 release to happen in about 2-3 weeks' time (i.e. last week of July 2022)
- BGPCEP has a few deliverables yet to be finished and the corresponding 0.18.0 release being dependent on NETCONF, my working assumption is having the release available mid-August 2022

As such, everyone running Java should have Java 17 as their default environment. Not only is it cool as $EXPLETIVE, but it is becoming a requirement very soon. 2022.03 Sulfur is handling it just fine (as far as I know) and you cannot interact with 2022.09 Chlorine without it.

Daniel: is the Chlorine schedule approved? My (imperfect) tracking says it is yet to be voted on.

Regards,
Robert

P.S.: my default JDK is Java 17 and I am encountering zero issues with it on either 2022.09 or 2022.03 release streams. Please switch to Java 17 if you can and report any issues you encounter.