
Re: [opendaylight-dev][release] Sangwook Ha as committer for releng/builder

Venkatrangan Govindarajan
 

+1


On Fri, Sep 30, 2022, 08:03 Anil Belur <abelur@...> wrote:
Hello all,

Considering Sangwook Ha's contributions [0] to the releng/builder project, I'd like to nominate Sangwook as a committer.

Please vote: -1, 0, +1

- Anil

[0] https://git.opendaylight.org/gerrit/q/project:releng/builder+owner:sangwook.ha%2540verizon.com




[tsc] Proposal to archive inactive projects.

Anil Belur
 

Hello TSC: 

The following projects (with the date of last activity on each repo) have been inactive for the last ~2 release cycles and are good candidates for archival:

- l2switch: Apr, 2021
- dlux: Sept, 2020
- dluxapps: Oct, 2020
- netvirt: June, 2021
- odlguice: Oct, 2020
- odlmicro: Dec, 2020
- odlsaf: Dec, 2020
- odltools: Oct, 2020
- p4plugin: Oct, 2020
- plastic: Oct, 2020
- unimgr: Sept, 2021

Please vote (+1, 0, -1) for each of the projects individually.

Once the voting is complete, I'll initiate the archival process that includes purging any jobs from JJB and marking the Gerrit repo as 'read-only'.  
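
For reference, the 'read-only' step is typically just a small change to each repo's project.config on the refs/meta/config branch; a minimal sketch (the description text is illustrative):

    [project]
        description = Archived: no longer under active development
        state = read-only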

Regards,
Anil Belur


[opendaylight-dev][release] Sangwook Ha as committer for releng/builder

Anil Belur
 

Hello all,

Considering Sangwook Ha's contributions [0] to the releng/builder project, I'd like to nominate Sangwook as a committer.

Please vote: -1, 0, +1

- Anil

[0] https://git.opendaylight.org/gerrit/q/project:releng/builder+owner:sangwook.ha%2540verizon.com


Re: [tsc][release] OpenDaylight - Sulfur SR2 release status

Anil Belur
 

Hi Luis, 

Please open a ticket for this. I've added a workaround by creating a separate release-management job that picks up CentOS 7 for signing the tag.
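
For context, the signing step amounts to creating a GPG-signed git tag on the release commit and pushing it; a rough sketch, with a hypothetical tag name:

    # create and verify a GPG-signed release tag (name and message are illustrative)
    git tag -s release/sulfur-sr2 -m "OpenDaylight Sulfur SR2"
    git tag -v release/sulfur-sr2
    git push origin release/sulfur-sr2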

On Tue, Sep 27, 2022 at 12:54 AM Luis Gomez <ecelgp@...> wrote:
Hi Anil, something failed in the distribution release job; any idea what the cause is?


I used the same parameters as usual.

BR/Luis

On Sep 22, 2022, at 7:52 PM, Anil Shashikumar Belur <abelur@...> wrote:

Hello All,

OpenDaylight Sulfur SR2 version bump is complete and the staging repository has been promoted. The 'stable/sulfur' branch has been unlocked and is ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR2.
2. Release the distribution once step 1 is complete.
3. Release notes - need to re-run the job once the releng/builder CR is merged.
4. Update ODL downloads page.

Thanks to everyone who contributed to the Sulfur SR2 release.

There were a few issues with the version bump and release notes that have been resolved.


Re: [tsc][release] OpenDaylight - Sulfur SR2 release status

Luis Gomez
 

Hi Anil, something failed in the distribution release job; any idea what the cause is?


I used the same parameters as usual.

BR/Luis

On Sep 22, 2022, at 7:52 PM, Anil Shashikumar Belur <abelur@...> wrote:

Hello All,

OpenDaylight Sulfur SR2 version bump is complete and the staging repository has been promoted. The 'stable/sulfur' branch has been unlocked and is ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR2.
2. Release the distribution once step 1 is complete.
3. Release notes - need to re-run the job once the releng/builder CR is merged.
4. Update ODL downloads page.

Thanks to everyone who contributed to the Sulfur SR2 release.

There were a few issues with the version bump and release notes that have been resolved.


[release][TSC] stable/chlorine branch cut, master moved to next (argon)

Anil Belur
 

Hello Everyone,

The new branch "stable/chlorine" has been cut. The version bump for "stable/chlorine" is complete and the master branch has moved to the next release (argon). Both "stable/chlorine" and master (argon) are unlocked and open for development.

Please review the below CR, which updates the Jenkins/CI job configuration with the new "stable/chlorine" branch and moves master to the next release (argon).
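
For a sense of what such a CR contains: the branch wiring for a project's jobs in releng/builder is usually a small YAML change that adds the new stream (the project name and streams below are hypothetical):

    - project:
        name: example-project
        stream:
          - argon:
              branch: master
          - chlorine:
              branch: stable/chlorine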

Note: A reminder that projects should rebase their existing master-branch patches to ensure they build against the post-version-bump code base.
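
For a typical open patch, the rebase is something like:

    # refresh the local view of master and rebase the open change onto it
    git fetch origin
    git rebase origin/master
    # fix any version conflicts, then re-upload to Gerrit
    git push origin HEAD:refs/for/master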

Regards,
Anil Belur


Re: [tsc][release] OpenDaylight - Sulfur SR2 release status

Anil Belur
 



On Fri, Sep 23, 2022 at 12:52 PM Anil Shashikumar Belur <abelur@...> wrote:
Hello All,

OpenDaylight Sulfur SR2 version bump is complete and the staging repository has been promoted. The 'stable/sulfur' branch has been unlocked and is ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR2.
2. Release the distribution once step 1 is complete.
3. Release notes - need to re-run the job once the releng/builder CR is merged.
4. Update ODL downloads page.

Thanks to everyone who contributed to the Sulfur SR2 release.

There were a few issues with the version bump and release notes that have been resolved.

Regards,
Anil Belur


3. Release notes updated: 


[tsc][release] OpenDaylight - Sulfur SR2 release status

Anil Belur
 

Hello All,

OpenDaylight Sulfur SR2 version bump is complete and the staging repository has been promoted. The 'stable/sulfur' branch has been unlocked and is ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR2.
2. Release the distribution once step 1 is complete.
3. Release notes - need to re-run the job once the releng/builder CR is merged.
4. Update ODL downloads page.

Thanks to everyone who contributed to the Sulfur SR2 release.

There were a few issues with the version bump and release notes that have been resolved.
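
For anyone curious what the version bump involves mechanically, it is essentially a tree-wide Maven version change; a minimal sketch with hypothetical version numbers:

    # move every module from the released version to the next -SNAPSHOT
    mvn versions:set -DnewVersion=0.16.3-SNAPSHOT -DgenerateBackupPoms=false
    git commit -am "Bump versions to 0.16.3-SNAPSHOT"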


Re: [OpenDaylight TSC] [integration-dev] [opendaylight-dev] ODL Clustering issue - High Availability

Rohini Ambika
 

Thanks.

 

We have already tested CONTROLLER-2035 with Phosphorus SR2 (we created a patch with the fix), and the issue still persists when we do multiple restarts of the master node (approximately after the 10th restart).

 

Thanks & Regards,

Rohini

Cell: +91.9995241298 | VoIP: +91.471.3025332

 

From: TSC@... <TSC@...> On Behalf Of Daniel de la Rosa
Sent: Thursday, July 28, 2022 9:50 PM
To: Rohini Ambika <rohini.ambika@...>; Venkatrangan Govindarajan <gvrangan@...>
Cc: Ivan Hrasko <ivan.hrasko@...>; integration-dev@...; dev@...; kernel-dev@...; TSC <tsc@...>
Subject: Re: [OpenDaylight TSC] [integration-dev] [opendaylight-dev] ODL Clustering issue - High Availability

 

[**EXTERNAL EMAIL**]

Rohini and all

 

Please use Phosphorus SR3, since CONTROLLER-2035 is fixed in that version. In any case, @Venkatrangan Govindarajan will also get back to you if he finds anything in the logs you provided.

 

thanks

 

On Wed, Jul 27, 2022 at 5:45 AM Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...> wrote:

Hi,

 

ODL version – Phosphorus SR2

 

Thanks & Regards,

Rohini

 

From: dev@... <dev@...> On Behalf Of Ivan Hrasko
Sent: Wednesday, July 27, 2022 5:31 PM
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: Re: [opendaylight-dev] ODL Clustering issue - High Availability

 

[**EXTERNAL EMAIL**]

Hello,

 

What is the ODL version, please?

 

Best,

 

Ivan Hraško

Senior Software Engineer

 

PANTHEON.tech

Mlynské Nivy 56, 821 05 Bratislava

Slovakia

Tel / +421 220 665 111

 

MAIL / ivan.hrasko@...

WEB / https://pantheon.tech

 


From: integration-dev@... <integration-dev@...> on behalf of Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...>
Sent: Wednesday, 27 July 2022 13:19
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: [integration-dev] ODL Clustering issue - High Availability

 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.
  • JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
  • Akka configuration for all the nodes is attached. (We increased the gossip-interval to 5s in akka.conf to avoid the Akka AskTimedOut issue when mounting many devices at a time; see the sketch after this list.)
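
Two concrete fragments behind the setup above, both illustrative rather than taken from the attached files: the clustering features are installed from the Karaf shell, and the gossip interval lives in the cluster section of akka.conf (surrounding odl-cluster-data/akka blocks omitted, member address hypothetical):

    # Karaf shell: enable clustered NETCONF topology and RESTCONF
    feature:install odl-netconf-clustered-topology odl-restconf-all

    # akka.conf (HOCON), inside odl-cluster-data { akka { ... } }
    cluster {
      seed-nodes = ["akka://opendaylight-cluster-data@member-1:2550"]
      roles = ["member-1"]
      gossip-interval = 5s   # raised from the 1s default to avoid AskTimedOut during bulk mounts
    }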

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 




Re: [opendaylight-dev] ODL Clustering issue - High Availability

Daniel de la Rosa
 

Rohini and all

Please use Phosphorus SR3, since CONTROLLER-2035 is fixed in that version. In any case, @Venkatrangan Govindarajan will also get back to you if he finds anything in the logs you provided.

thanks

On Wed, Jul 27, 2022 at 5:45 AM Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...> wrote:

Hi,

 

ODL version – Phosphorus SR2

 

Thanks & Regards,

Rohini

 

From: dev@... <dev@...> On Behalf Of Ivan Hrasko
Sent: Wednesday, July 27, 2022 5:31 PM
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: Re: [opendaylight-dev] ODL Clustering issue - High Availability

 

[**EXTERNAL EMAIL**]

Hello,

 

What is the ODL version, please?

 

Best,

 

Ivan Hraško

Senior Software Engineer

 

PANTHEON.tech

Mlynské Nivy 56, 821 05 Bratislava

Slovakia

Tel / +421 220 665 111

 

MAIL / ivan.hrasko@...

WEB / https://pantheon.tech

 


From: integration-dev@... <integration-dev@...> on behalf of Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...>
Sent: Wednesday, 27 July 2022 13:19
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: [integration-dev] ODL Clustering issue - High Availability

 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.
  • JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
  • Akka configuration for all the nodes is attached. (We increased the gossip-interval to 5s in akka.conf to avoid the Akka AskTimedOut issue when mounting many devices at a time.)

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 





ODL Clustering issue - High Availability

Rohini Ambika <rohini.ambika@...>
 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 


Re: [opendaylight-dev] ODL Clustering issue - High Availability

Rohini Ambika
 

Hi,

 

ODL version – Phosphorus SR2

 

Thanks & Regards,

Rohini

 

From: dev@... <dev@...> On Behalf Of Ivan Hrasko
Sent: Wednesday, July 27, 2022 5:31 PM
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: Re: [opendaylight-dev] ODL Clustering issue - High Availability

 

[**EXTERNAL EMAIL**]

Hello,

 

What is the ODL version, please?

 

Best,

 

Ivan Hraško

Senior Software Engineer

 

PANTHEON.tech

Mlynské Nivy 56, 821 05 Bratislava

Slovakia

Tel / +421 220 665 111

 

MAIL / ivan.hrasko@...

WEB / https://pantheon.tech

 


From: integration-dev@... <integration-dev@...> on behalf of Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...>
Sent: Wednesday, 27 July 2022 13:19
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: [integration-dev] ODL Clustering issue - High Availability

 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.
  • JIRA reference: https://jira.opendaylight.org/browse/CONTROLLER-2035
  • Akka configuration for all the nodes is attached. (We increased the gossip-interval to 5s in akka.conf to avoid the Akka AskTimedOut issue when mounting many devices at a time.)

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 


Re: ODL Clustering issue - High Availability

Ivan Hrasko
 

Hello,


What is the ODL version, please?


Best,


Ivan Hraško

Senior Software Engineer

 

PANTHEON.tech

Mlynské Nivy 56, 821 05 Bratislava

Slovakia

Tel / +421 220 665 111

 

MAIL / ivan.hrasko@...

WEB / https://pantheon.tech




From: integration-dev@... <integration-dev@...> on behalf of Rohini Ambika via lists.opendaylight.org <rohini.ambika=infosys.com@...>
Sent: Wednesday, 27 July 2022 13:19
To: integration-dev@...; dev@...; kernel-dev@...; kernel-dev@...
Subject: [integration-dev] ODL Clustering issue - High Availability
 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 


ODL Clustering issue - High Availability

Rohini Ambika
 

Hi All,

 

As presented and discussed in the ODL TSC meeting held on Friday the 22nd at 10:30 AM IST, we are posting this email to highlight issues with ODL clustering use cases encountered during performance testing.

 

Details and configuration are as follows:

 

  • Requirement: ODL clustering for high availability (HA) of data distribution
  • Env configuration:
    • 3-node k8s cluster (1 master & 3 worker nodes) with 3 ODL instances running on each node
    • CPU: 8 cores
    • RAM: 20 GB
    • Java heap size: min 512 MB, max 16 GB
    • JDK version: 11
    • Kubernetes version: 1.19.1
    • Docker version: 20.10.7
  • ODL features installed to enable clustering:
    • odl-netconf-clustered-topology
    • odl-restconf-all
  • Devices configured: NETCONF devices, all having the same schema (tested with 250 devices)
  • Use case:
    • Failover/high availability:
      • Expected: If any ODL instance goes down or restarts due to a network split or an internal error, the other instances in the cluster should remain available and functional. If the affected instance holds the master mount, the instance elected as the new master should be able to re-register the devices and resume operations. Once the affected instance comes back up, it should be able to rejoin the cluster as a member node and register the slave mounts.
      • Observation: When the ODL instance holding the master mount restarts, an election takes place among the remaining nodes in the cluster and a new leader is elected. The new leader then tries to re-register the master mount but fails at a point due to the termination of the Akka Cluster Singleton Actor. The cluster consequently goes into an idle state and fails to assign an owner for the device DOM entity. In this state, configuration of already-mounted devices and of new mounts fails.

 

 

Requesting your support to identify whether there is any misconfiguration or a known solution for this issue.

Please let us know if any further information is required.

 

Note: We have tested a single ODL instance without cluster features enabled in the K8s cluster. In case of a K8s node failure, the ODL instance is rescheduled on another available K8s node and operations resume.

 

Thanks & Regards,

Rohini

 


Re: [OpenDaylight Infrastructure] [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Luis Gomez
 

OK, release is done and patches are merged now.

On Jul 14, 2022, at 9:07 AM, Ivan Hrasko <ivan.hrasko@...> wrote:

Documentation chain:



From: integration-dev@... <integration-dev@...> on behalf of Daniel de la Rosa <ddelarosa0707@...>
Sent: Saturday, 9 July 2022 5:21
To: Anil Belur; THOUENON Gilles TGI/OLN
Cc: 'integration-dev@...' (integration-dev@...) (integration-dev@...); Release; TSC; OpenDaylight Infrastructure; Andrew Grimberg; Casey Cain; Rudy Grigar; Luis Gomez; LAMBERT Guillaume TGI/OLN
Subject: Re: [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status
 
Thanks, Anil.

Luis, as @LAMBERT Guillaume TGI/OLN and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts, so please proceed with the distribution for Sulfur SR1.

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:
Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release the distribution once step 1 is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [0.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval




Re: [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Ivan Hrasko
 

Documentation chain:

https://git.opendaylight.org/gerrit/c/docs/+/101847/3




From: integration-dev@... <integration-dev@...> on behalf of Daniel de la Rosa <ddelarosa0707@...>
Sent: Saturday, 9 July 2022 5:21
To: Anil Belur; THOUENON Gilles TGI/OLN
Cc: 'integration-dev@...' (integration-dev@...) (integration-dev@...); Release; TSC; OpenDaylight Infrastructure; Andrew Grimberg; Casey Cain; Rudy Grigar; Luis Gomez; LAMBERT Guillaume TGI/OLN
Subject: Re: [integration-dev] [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status
 
Thanks, Anil.

Luis, as @LAMBERT Guillaume TGI/OLN and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts, so please proceed with the distribution for Sulfur SR1.

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:

Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release the distribution once step 1 is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [0.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


Re: [OpenDaylight TSC] [opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Daniel de la Rosa
 

Thanks, Anil.

Luis, as @LAMBERT Guillaume TGI/OLN and @THOUENON Gilles TGI/OLN mentioned, they are done with their artifacts, so please proceed with the distribution for Sulfur SR1.

Now, in the absence of Guillaume, who can help with updating documentation? 

On Thu, Jul 7, 2022 at 3:56 PM Anil Belur <abelur@...> wrote:
Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release the distribution once step 1 is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [0.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


Invitation: ODL Pipelines Meeting @ Tue Jul 12, 2022 22:00 - 23:00 (PDT) (integration-dev@lists.opendaylight.org)

Casey Cain
 

ODL Pipelines Meeting
Please use this registration link to register for the meeting. Once you've registered you will receive a unique Zoom URL to participate in the meeting. If you'd like to have someone else participate, do not share your Zoom URL, please use the URL in this invite:
https://zoom-lfx.platform.linuxfoundation.org/meeting/98955795719

When

Tuesday Jul 12, 2022 ⋅ 22:00 – 23:00 (Pacific Time - Los Angeles)

Location

https://zoom-lfx.platform.linuxfoundation.org/meeting/98955795719



[opendaylight-dev][release] OpenDaylight - Sulfur SR1 release status

Anil Belur
 

Hello All,

OpenDaylight Sulfur SR1 version bump is complete and the staging repository is being promoted. The 'stable/sulfur' branch is unlocked and ready for development.

Pending activities required to complete the release:
1. Self-managed projects to release artifacts for Sulfur SR1.
2. Release the distribution once step 1 is complete.
3. Release notes merge CR: https://git.opendaylight.org/gerrit/c/docs/+/101776
4. Update ODL downloads page [0.].

Thanks to everyone who contributed to the Sulfur SR1 release.

Regards,
Anil Belur

[0.] https://docs.opendaylight.org/en/latest/downloads.html
[1.] https://wiki.opendaylight.org/display/ODL/Sulfur+SR1+Release+Approval


[it-infrastructure-alerts][notice] ODL Gerrit maintenance window (17:30 Sun, July 10 2022 - 19:30 Sun, July 10 2022 PT)

Anil Belur
 

What: LF will update the ODL Gerrit system to 3.5.1.

When: 17:30 Sun, July 10 2022 - 19:30 Sun, July 10 2022 PT (10:00 Mon, July 11 - 12:00 Mon, July 11, 2022 AEST)

Why: LF will install system updates and update the Gerrit version to 3.5.1.
 
Impact: Users may not be able to access services (Gerrit, Jenkins, Sonar, Nexus) during this time.

Jenkins will be put in shutdown mode before the window starts and any long-running Jenkins jobs _will_ be canceled if they don't complete before the start of the window.
Notices will be posted to the mailing lists and in the #opendaylight channel on the LFN-tech Slack at the start and end of the maintenance.

Thanks,
Anil Belur
