For the first one: when you say you tried with the official Helm charts, which Helm charts are you referring to? Can you send more details on how you deployed these charts (the parameters in the values.yaml that you used)?
What was the temporary fix that reduced the occurrence of the issue? Can you point to the check-in made or the change in configuration parameters? That would help in diagnosing a proper fix.
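For reference, the kind of values.yaml detail that would help is sketched below; every key name here is hypothetical (we have not seen your chart), so please share whatever the equivalents are in yours:

    # values.yaml - illustrative only; all keys are hypothetical
    replicaCount: 3                          # number of ODL cluster members
    image:
      repository: example.org/opendaylight   # placeholder image
      tag: "phosphorus-sr2"                  # placeholder tag
    cluster:
      enabled: true
      seedNodes:                             # how the Akka seed nodes are resolved
        - odl-0.odl-headless.default.svc.cluster.local
        - odl-1.odl-headless.default.svc.cluster.local
        - odl-2.odl-headless.default.svc.cluster.local
    resources:
      limits:
        cpu: "8"
        memory: 20Gi
    javaOpts: "-Xms512m -Xmx16g"             # matching the reported heap sizes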
On Thu, Jul 28, 2022 at 2:21 AM Rohini Ambika <rohini.ambika@...> wrote:
Hi Anil,
Thanks for the response.
Please find the details below:
1. Is the test deployment using our Helm charts (the ODL Helm chart)? – We have created our own Helm chart for the ODL deployment. We have also tried the use case with the official Helm chart.
2. I see that the JIRA mentioned in the email below ( https://jira.opendaylight.org/browse/CONTROLLER-2035 ) is already marked Resolved. Has somebody fixed it in the latest version? –
This was a temporary fix from our end, and the failure rate has reduced due to the fix; however, we are still facing the issue when we do multiple restarts of the master node.
The ODL version used is Phosphorus SR2.
All the configurations were provided and attached in the initial mail.
Subject: FW: ODL Clustering issue - High Availability
Hi All,
As presented/discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, I am posting this email to highlight the issues with ODL clustering use cases encountered during performance testing.
Details and configuration are as follows:
* Requirement : ODL clustering for high availability (HA) of data distribution
* Env configuration:
* 3-node k8s cluster (1 master & 3 worker nodes), with 3 ODL instances running, one on each worker node
* CPU : 8 cores
* RAM : 20 GB
* Java heap size : min 512 MB, max 16 GB
* JDK version : 11
* Kubernetes version : 1.19.1
* Docker version : 20.10.7
* ODL features installed to enable clustering : odl-netconf-clustered-topology, odl-restconf-all
* Devices configured : NETCONF devices, all having the same schema (tested with 250 devices; a typical mount call is sketched after this message)
* Use case:
* Fail-over/High availability:
* Expected : If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
* Observation : When the ODL instance holding the master mount restarts, an election takes place among the other nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount, but fails at a point due to the termination of the Akka Cluster Singleton actor. The cluster then goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.
The Akka configuration of all the nodes is attached. (We increased the gossip-interval to 5s in the akka.conf file to avoid the Akka AskTimedOut issue when mounting multiple devices at a time; see the sketch below.)
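For readers without the attachment, the relevant fragment of akka.conf has roughly the following shape. This is a minimal sketch assuming the stock ODL file layout; the seed-nodes, roles, and other settings are elided:

    odl-cluster-data {
      akka {
        cluster {
          # default is 1s; raised to 5s in our tests to avoid Akka
          # AskTimedOut failures when mounting many devices at once
          gossip-interval = 5s
          # ... seed-nodes, roles and other settings unchanged ...
        }
      }
    }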
Requesting your support to identify whether there is any misconfiguration, or whether there is a known solution for this issue.
Please let us know if any further information is required. (The shard and entity-ownership state we observe can be inspected as sketched below.)
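One way to observe the "failed to assign owner" state from the outside is to query the datastore MBeans over Jolokia. A sketch, assuming the odl-jolokia feature is installed, default admin:admin credentials, and a member named member-1 (all three are assumptions about the deployment):

    # shard manager state (sync status, member role)
    curl -u admin:admin \
      "http://localhost:8181/jolokia/read/org.opendaylight.controller:type=DistributedOperationalDatastore,Category=ShardManager,name=shard-manager-operational"

    # per-shard state, e.g. leader and RaftState of the default operational shard
    curl -u admin:admin \
      "http://localhost:8181/jolokia/read/org.opendaylight.controller:Category=Shards,name=member-1-shard-default-operational,type=DistributedOperationalDatastore"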
Note : We have also tested a single ODL instance, without the clustering features enabled, in the K8s cluster. In case of a K8s node failure, the ODL instance is re-scheduled on another available K8s node and operations resume.
Thanks & Regards,
Rohini
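For context on the mounts discussed above: with the clustering features installed (feature:install odl-netconf-clustered-topology odl-restconf-all from the Karaf console), a NETCONF device is registered through RESTCONF roughly as follows. The host, port, and credentials here are placeholders, not values from this report:

    # register (mount) a NETCONF device in the clustered topology
    curl -u admin:admin -X PUT \
      -H "Content-Type: application/json" \
      "http://localhost:8181/rests/data/network-topology:network-topology/topology=topology-netconf/node=device-1" \
      -d '{
        "network-topology:node": [{
          "node-id": "device-1",
          "netconf-node-topology:host": "192.0.2.10",
          "netconf-node-topology:port": 830,
          "netconf-node-topology:username": "admin",
          "netconf-node-topology:password": "admin",
          "netconf-node-topology:tcp-only": false
        }]
      }'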
If you need to add anything, please let me know or add it there.
The meeting minutes will be at the same location after the meeting is over.
Best Regards
Guillaume
Now, in the absence of Guillaume, who can help with updating documentation?
On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:
Hello All,
OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.
Pending activities required to be completed for the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release the distribution once step 1 is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update the ODL downloads page [1.].
Thanks to everyone who contributed to the Sulfur SR1 release.
ONE Summit North America, LFN’s flagship event, returns in-person, November 15-16 in Seattle, WA (followed by the Fall LFN Developer & Testing Forum Nov. 17-18)! Speaking submissions are now being accepted through July 29, 2022.
ONE Summit is the one industry event focused on best practices, technical challenges, and business opportunities facing network decision makers across Access, Edge, and Cloud.
We encourage you & your peers to submit to speak, and we hope to see you in person in November!
More details on ONE Summit:
For anyone using networking and automation to transform business, whether that means deploying a 5G network, building government infrastructure, or innovating at their industry’s network edge, the ONE Summit’s collaborative environment enables peer interaction and learning focused on the open source technologies that are redefining the ecosystem. As the network is key to new opportunities across telecommunications, Industry 4.0, and public and government infrastructure, the new paradigm will be open. Come join this interactive and collaborative event, the ONE place to learn, innovate, and create the networks our organizations require.
These are complete AFAICT. No major headaches. I expect to write up the
docs for that in August (I am on PTO for the rest of the month).
> - NETCONF is awaiting a bug scrub and the corresponding release. There
> are quite a few issues to scrub and we also need some amount of code
> reorg within the repo, which in itself may entail breaking changes.
> There are quite a few unreviewed patches pending as well. Given the
> raging summer in the northern hemisphere, I expect the netconf-4.0.0
> release to happen in about 2-3 weeks' time (i.e. the last week of July 2022)
This has been scrubbed; there are 7 outstanding issues as of now. Some
of those may be postponed. I'll know more as I dive into them ~2 weeks
from now.
> - BGPCEP has a few deliverables yet to be finished and, with the
> corresponding 0.18.0 release being dependent on NETCONF, my working
> assumption is that the release will be available mid-August 2022
>
> As such, everyone running Java should have Java 17 as their default
> environment. Not only is it cool as $EXPLETIVE, but it is becoming a
> requirement very soon. 2022.03 Sulfur is handling it just fine (as far
> as I know) and you cannot interact with 2022.09 Chlorine without it.
>
> Daniel: is the Chlorine schedule approved? My (imperfect) tracking
> says it is yet to be voted on.
>
> Well I got your approval
>
> https://lists.opendaylight.org/g/release/message/20312
>
> but I can put it out for a vote and hopefully get it approved at the
> next TSC meeting
That would be great; I do not believe we have it reflected in the docs
project yet...
On 06/07/2022 07:23, Guillaume Lambert via lists.opendaylight.org wrote:
Hello OpenDaylight Community,
The next TSC meeting is July 7, 2022 at 10 pm Pacific Time. As usual, the agenda proposal and the connection details for this meeting are available in the wiki at the following URL:
https://wiki.opendaylight.org/x/4QCdAQ
If you need to add anything, please let me know or add it there.
The meeting minutes will be at the same location after the meeting is over.
Best Regards
Guillaume
Please use this registration link to register for the meeting. Once you've registered, you will receive a unique Zoom URL to participate in the meeting. If you'd like someone else to participate, do not share your Zoom URL; instead, have them use the URL in this invite: https://zoom-lfx.platform.linuxfoundation.org/meeting/98955795719
When
Tuesday Jul 12, 2022 ⋅ 22:00 – 23:00 (Pacific Time - Los Angeles)
On Wed, Jul 6, 2022 at 4:35 PM Robert Varga <nite@...> wrote:
Hello everyone,
Since we are well into the 2022.09 Simultaneous Release (Chlorine), here
is a quick summary of where we are:
- MRI projects up to and including AAA have released
- MSI projects have preliminary patches staged at https://git.opendaylight.org/gerrit/q/topic:chlorine-mri
- NETCONF is awaiting a bug scrub and the corresponding release. There
are quite a few issues to scrub and we also need some amount of code
reorg within the repo, which in itself may entail breaking changes.
There are quite a few unreviewed patches pending as well. Given the
raging summer in the northern hemisphere, I expect the netconf-4.0.0 release
to happen in about 2-3 weeks' time (i.e. the last week of July 2022)
- BGPCEP has a few deliverables yet to be finished and, with the corresponding
0.18.0 release being dependent on NETCONF, my working assumption is
that the release will be available mid-August 2022
As such, everyone running Java should have Java 17 as their default
environment. Not only is it cool as $EXPLETIVE, but it is becoming a
requirement very soon. 2022.03 Sulfur is handling it just fine (as far
as I know) and you cannot interact with 2022.09 Chlorine without it.
Daniel: is the Chlorine schedule approved? My (imperfect) tracking says
it is yet to be voted on.
Regards,
Robert
P.S.: my default JDK is Java 17 and I am encountering zero issues with
it on either 2022.09 or 2022.03 release streams. Please switch to Java
17 if you can and report any issues you encounter.
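For anyone making the switch, a quick sanity check looks something like the shell session below; the JDK path is an example and varies by distribution:

    # confirm the default JDK is 17
    java -version        # should report: openjdk version "17.0.x" ...

    # e.g. on Debian/Ubuntu-style systems (path is an example):
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    export PATH="$JAVA_HOME/bin:$PATH"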
What: LF will update the ODL Gerrit system to 3.5.1.
When: 17:30 Sun, July 10 2022 - 19:30 Sun, July 10 2022 PT (10:00 Mon, July 11 - 12:00 Mon, July 11, 2022 AEST)
Why: LF will install system updates and update the Gerrit version to 3.5.1.
Impact: Users may not be able to access services (Gerrit, Jenkins, Sonar, Nexus) during this time.
Jenkins will be put in shutdown mode before the window starts and any long-running Jenkins jobs _will_ be canceled if they don't complete before the start of the window. Notices will be posted to the mailing lists and in the #opendaylight channel on LFN-tech slack at the start and end of the maintenance.
With not many replies in this thread, here is an update on where we are.
With all this in the picture, I believe the proper course in OpenDaylight is to have:
- Sulfur (22.03) supporting both JDK11 and JDK17 at compile time, with artifacts compatible with JDK11+
- All of Sulfur being validated with JDK17
Both of these items are delivered; all projects participating in the Sulfur GA verify each patch with both JDK11 and JDK17.
As it stands, 2022.03 Sulfur SR1 works just fine with Java 17. Please share your experience, as I am tracking no outstanding issues at this time.
- Chlorine (22.09) to require JDK17+
This is now slated for delivery: odlparent/master and yangtools/master both require JDK17 and are taking advantage of JDK17 features. More projects are slated to follow.
2022.09 Chlorine platform components (e.g. MRI projects up to and including NETCONF) now require Java 17 on their master branch. I have done some amount of exploration in other support projects and it seems there are no blockers to adoption.
As such, I believe(*) we are committed to Java 17 for 2022.09 Chlorine Simultaneous Release.
Regards, Robert
(*) Please switch to Java 17 now and report any issues you find. At this point we are very much committed to Java 17, and the sooner you test, the better this switch will go for all of us.