
Removing old plugin and ovsdb-sal-compatibility layer from OVSDB project

Anil Vishnoi
 

Hi VTN devs,

We are cleaning up dead code from the OVSDB repo. The OVSDB project is no longer using the bundles mentioned in the subject; they were kept only for the VTN project, which did not have the bandwidth to migrate its dependent code.

The following patch cleans them up from the OVSDB project.

We will merge this patch after one month (19th Sep), so I would request that you migrate the dependent code before then so that we can merge this patch.

Please reach out to me or Sam if you have any questions.


--
Thanks
Anil


Re: Query on East-West traffic

Anil Vishnoi
 

Ravi, if you need more details about it, have a look at Flavio's excellent blog posts.


Thanks
Anil

On Tue, Aug 18, 2015 at 5:45 PM, Sam Hague <shague@...> wrote:
Ravi,

for 2 I think we normally just ping from the DHCP namespace, which is similar to a VRF. The namespace is also tenant specific, so the ping will match the right flows. All traffic coming from a given port is tagged with the segmentation ID/tenant info to identify it.

Sam

On Tue, Aug 18, 2015 at 4:01 AM, <Ravi_Sabapathy@...> wrote:

Hi All,

     

     I have a query on East-West traffic and how it is handled by OVSDB and OpenStack. There are two possible cases in East-West traffic.

 

  Case 1 - Tenants having different network:

 

     Consider the below case,

     Tenant 1 with network 2.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

     Tenant 1 tries to ping tenant 2. In this case Open vSwitch uses the tuple [tunnel_id/vxlan_id, dst_ip] to identify the destination tenant network and switch the packet to it.

 

 

Flow Rules for reaching different tenant (Ref: Flavio’s how-to-odl-with-openstack-part2.html blog):

 

cookie=0x0, duration=9662.085s, table=60, n_packets=122, n_bytes=11222, priority=2048,ip,tun_id=0x3e9,nw_dst=2.0.0.0/24 actions=set_field:fa:16:3e:cb:14:47->eth_src,dec_ttl,set_field:0x3ea->tun_id,goto_table:70

cookie=0x0, duration=9661.045s, table=60, n_packets=4, n_bytes=392, priority=2048,ip,tun_id=0x3ea,nw_dst=1.0.0.0/24 actions=set_field:fa:16:3e:69:5a:42->eth_src,dec_ttl,set_field:0x3e9->tun_id,goto_table:70

              I have verified in my local setup that East-West traffic works fine between tenants with different networks.
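To make the table-60 flows above concrete, here is a minimal Python sketch (purely illustrative, not ODL code) of how matching on the [tun_id, destination network] tuple selects the peer tenant's tunnel ID, modeled directly on the two flow rules quoted above:

```python
import ipaddress

# Illustrative model of the table-60 routing flows above: each rule matches
# (ingress tunnel id, destination prefix) and rewrites the tunnel id to the
# peer tenant's segment, as the set_field:...->tun_id actions do.
ROUTES = [
    {"tun_id": 0x3E9, "nw_dst": ipaddress.ip_network("2.0.0.0/24"), "new_tun_id": 0x3EA},
    {"tun_id": 0x3EA, "nw_dst": ipaddress.ip_network("1.0.0.0/24"), "new_tun_id": 0x3E9},
]

def route(tun_id, dst_ip):
    """Return the rewritten tunnel id for a packet, or None if no rule matches."""
    dst = ipaddress.ip_address(dst_ip)
    for rule in ROUTES:
        if rule["tun_id"] == tun_id and dst in rule["nw_dst"]:
            return rule["new_tun_id"]
    return None

# A packet from tenant 1's segment (tun_id 0x3e9) to 2.0.0.5 is switched
# onto tenant 2's segment (tun_id 0x3ea).
print(hex(route(0x3E9, "2.0.0.5")))  # -> 0x3ea
```

Note that the tunnel ID is part of the lookup key, which is what makes case 2 (overlapping subnets) interesting: without a distinct tun_id on ingress, the nw_dst match alone cannot disambiguate the two tenants.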

 Case 2 – Two or more tenants having same network:

 

     Consider the below case,

     Tenant 1 with network 1.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

             How does Open vSwitch create rules to reach tenant 2 when tenant 1 tries to ping? The ping binary does not seem to provide any option for a tunnel_id/segmentation ID.

 

Legacy behavior:

In a legacy network, we can have the same network in different Virtual Routing and Forwarding (VRF) instances. The ping binary has options to ping a destination IP in a specific VRF.

 

              So, there are two options:

1. Include the VXLAN ID/tunnel ID as part of ping/the application. That way Open vSwitch can form a unique tuple of [tunnel_id/vxlan_id, dst_ip]. Please give your comments on this.

2. Use the floating IP option and assign:

   a. A static floating IP to each of the VMs in the tenant network. In a large-scale deployment we might run out of floating IPs, so this might not be an ideal solution.

   b. A floating IP per compute node, or per tenant network, in the deployment. In this case ODL has to internally maintain which ports to reach for a particular floating IP.
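For option 2b, a hypothetical sketch of the kind of mapping the controller would have to maintain (the names and addresses are illustrative, not an actual ODL data structure):

```python
# Hypothetical floating-IP -> (compute node, tenant port) mapping the
# controller would keep for option 2b. Purely illustrative; the port names
# below are placeholders in the style of the tap ports seen in ovs-vsctl.
floating_ip_map = {
    "203.0.113.10": ("compute-1", "tap6e750af4-e9"),
    "203.0.113.11": ("compute-2", "tap5efa303c-10"),
}

def lookup(fip):
    # Resolve a floating IP to the (node, port) that should receive traffic,
    # or None if the floating IP is unknown.
    return floating_ip_map.get(fip)

print(lookup("203.0.113.10"))  # -> ('compute-1', 'tap6e750af4-e9')
```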

 

               Is the IP-overlap use case possible in the current scenario with ODL + OpenStack?

I believe this is a valid use case from a deployment perspective. Please correct me if I am wrong and share your inputs.

               

 

Regards,

Ravi

 


_______________________________________________
ovsdb-dev mailing list
ovsdb-dev@...
https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev







--
Thanks
Anil


Re: Retrying connections, persistence, controller restart for ovsdb southbound

Anil Vishnoi
 

Agreed, this is a priority; I will look into it.

On Tue, Aug 18, 2015 at 5:03 AM, Edward Warnicke <hagbard@...> wrote:
I don't object to adding these to clustering... but they are actually completely orthogonal to it, and can be fixed immediately, independent of what we do for HA for ovsdb-southbound :)

Ed

On Mon, Aug 17, 2015 at 9:42 AM, Anil Vishnoi <vishnoianil@...> wrote:
We welcome everyone who wants to contribute, so please feel free to pick up the task and add it to the clustering trello card.

Anil

On Mon, Aug 17, 2015 at 7:11 PM, Ryan Goulding <ryandgoulding@...> wrote:
Thanks for the information, Sam.  I look forward to discussing this in tomorrow's meeting.

Regards,

Ryan Goulding

On Mon, Aug 17, 2015 at 9:39 AM, Sam Hague <shague@...> wrote:
Ryan, Daya,

these items would be part of the work Anil and Flavio are driving for clustering, persistence and HA in Beryllium. There are high-level cards on the Beryllium Trello board. They are hoping to present some initial findings in tomorrow's meeting. This will be an evolving design because there are so many different pieces and mechanisms to bring together.

They will gladly take any volunteers to work on these pieces.

Thanks, Sam

On Mon, Aug 17, 2015 at 9:32 AM, Ryan Goulding <ryandgoulding@...> wrote:
Hi Ed,

Should we make a trello card for this?  If no one has started work on this, I would be interested in picking this up.

Thanks,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:53 AM, Edward Warnicke <hagbard@...> wrote:
So we probably also need one for retrying connections, because currently if the ovsdb node is not available or reachable when we configure a connection, we never retry, and if it goes away temporarily, we never retry.

Ed

On Fri, Aug 14, 2015 at 8:49 AM, Ryan Goulding <ryandgoulding@...> wrote:
https://trello.com/c/bOrmGbXQ/46-resync-persisted-config-to-ovsdb-correctly-on-restart-of-controller is a trello card that Eric has taken involving controller restart for OVSDB southbound.  This is just a few of the restart scenarios though, IIRC.

Regards,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:48 AM, Edward Warnicke <hagbard@...> wrote:
Guys,

     Is anyone working on these issues with OVSDB SB?

Ed
















--
Thanks
Anil






--
Thanks
Anil


Re: Pointers on the ovsdb bug - https://bugs.opendaylight.org/show_bug.cgi?id=3989

Anil Vishnoi
 

Hi Raksha,

I added a comment on the Gerrit patch; let's discuss it there, so the discussion is tracked for others' reference.

Thanks
Anil

On Tue, Aug 18, 2015 at 11:58 PM, Madhava Bangera, Raksha <raksha.madhava.bangera@...> wrote:

Hi All,

 

I am working on the bug https://bugs.opendaylight.org/show_bug.cgi?id=3989 and the fix is based on approach 1 suggested in the description. I submitted the patch https://git.opendaylight.org/gerrit/#/c/24739/. This patch prevents a duplicate node (one having the same connection info) from being added to the operational datastore, but the duplicate node is still present in the config store. Per the review comments, the duplicate node should be prevented from being added to the config DS too.

 

I have made the changes shown below to the patch. But with this logic, the duplicate node is not completely removed from the config: the {remote IP, port} tuple gets deleted, but the node ID still floats in the config. Could anyone give me pointers on how I can delete the duplicate node completely from the config? Or is there a way to block the duplicate node's entry into the config somewhere?

 

public void onDataChanged(
        AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
    LOG.trace("onDataChanged: {}", changes);
    for (Entry<InstanceIdentifier<?>, DataObject> created : changes.getCreatedData().entrySet()) {
        // TODO validate we have the correct kind of InstanceIdentifier
        if (created.getValue() instanceof OvsdbNodeAugmentation) {
            OvsdbNodeAugmentation ovsdbNode = (OvsdbNodeAugmentation) created.getValue();
            ConnectionInfo key = ovsdbNode.getConnectionInfo();
            InstanceIdentifier<Node> iid = cm.getInstanceIdentifier(key);
            if (iid != null) {
                // A node with the same connection info already exists;
                // delete the newly created duplicate from the config datastore.
                InstanceIdentifier<Node> dupiid = (InstanceIdentifier<Node>) created.getKey();
                ReadWriteTransaction transaction = db.newReadWriteTransaction();
                transaction.delete(LogicalDatastoreType.CONFIGURATION, dupiid);
                CheckedFuture<Void, TransactionCommitFailedException> future = transaction.submit();
                try {
                    future.checkedGet();
                } catch (TransactionCommitFailedException e) {
                    LOG.warn("Failed to delete {}", dupiid, e);
                }
                return;
            }
        }
    }
    // Connect first if we have to:
    connect(changes);
    // ... rest of the code
}

 

Thanks & Regards,

Raksha

 






--
Thanks
Anil


Re: Anybody knows how to check if neutron is working normally with ODL?

Yang, Yi Y <yi.y.yang@...>
 

Hi, Sam

 

I followed your document and OVA file and reproduced the integration environment; everything went very smoothly, thank you very much.

 

But I need to use the OVSDB and SFC master git trees for development. How can I reproduce the setup repeatedly after I replace the OpenDaylight distribution? I need to run tests repeatedly.

 

From: ovsdb-dev-bounces@... [mailto:ovsdb-dev-bounces@...] On Behalf Of Yang, Yi Y
Sent: Tuesday, August 18, 2015 11:59 AM
To: Sam Hague
Cc: ovsdb-dev@...
Subject: Re: [ovsdb-dev] Anybody knows how to check if neutron is working normally with ODL?

 

Sam, thank you so much, I’ll use your ova image to set up test.

 

From: Sam Hague [mailto:shague@...]
Sent: Monday, August 17, 2015 9:33 PM
To: Yang, Yi Y
Cc: ovsdb-dev@...
Subject: Re: [ovsdb-dev] Anybody knows how to check if neutron is working normally with ODL?

 

Yi,

it is only partially working in your setup. Your ovsdb nodes are connected to ODL and the pipeline flows are created, which is good; you can see that in the final dump-flows in your output. But there should be more flows for the neutron networks you created, so something is failing at that point. Normally this is a config issue and the neutron commands are not even making it to ODL. Looking at the neutron logs helps here.

You can look at different logs to find the issue:

- odl logs, look for exceptions

- neutron logs, grep -ir 'error\|fail\|usage\|not found' <path to neutron logs, devstack logs>

That wiki you followed is older. You might have better luck using our tutorial VMs from the summit: [ova] [slides]. The OVA has all the VMs needed to bring up devstack and show the integration between OpenStack and ODL. The slides show how everything is connected, how to run the neutron commands, and how to verify everything is working.

 

Thanks, Sam

[ova] https://wiki.opendaylight.org/images/HostedFiles/2015Summit/ovsdbtutorial15_2.ova

[slides] https://drive.google.com/open?id=1KIuNDuUJGGEV37Zk9yzx9OSnWExt4iD2Z7afycFLf_I

 

On Mon, Aug 17, 2015 at 4:50 AM, Yang, Yi Y <yi.y.yang@...> wrote:

Hi, All

 

I followed https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight to integrate OpenStack and SFC. I saw that br-int was created by ODL.

 

[root@localhost ~(keystone_admin)]# ovs-vsctl show

795a890a-9a70-4340-98c6-d3f6db82264c

    Manager "tcp:10.240.224.185:6640"

        is_connected: true

    Bridge br-int

        Controller "tcp:10.240.224.185:6653"

            is_connected: true

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

    ovs_version: "2.3.1-git4750c96"

[root@localhost ~(keystone_admin)]#

 

I'm not sure if OpenStack neutron is really connected to ODL.

 

[root@localhost ~(keystone_admin)]# curl -u admin:admin http://10.240.224.185:8181/controller/nb/v2/neutron/networks

{

   "networks" : [ ]

}[root@localhost ~(keystone_admin)]#

 

When I ran neutron commands to create a net, subnet, router, etc., neutron didn't report any errors, but it seems the OpenFlow tables aren't changed after I created the net, subnet and router and started a VM. Does anybody know how to check if neutron is working normally with ODL?

 

[root@localhost ~(keystone_admin)]# neutron router-create router1

Created a new router:

+-----------------------+--------------------------------------+

| Field                 | Value                                |

+-----------------------+--------------------------------------+

| admin_state_up        | True                                 |

| external_gateway_info |                                      |

| id                    | ba8603f7-68ea-48ed-88a2-fdf49cd5d8a8 |

| name                  | router1                              |

| status                | ACTIVE                               |

| tenant_id             | fa82c46cf8ed48d39ca516699a81032d     |

+-----------------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# neutron subnet-create private --name=private_subnet 192.168.1.0/24

Created a new subnet:

+------------------+--------------------------------------------------+

| Field            | Value                                            |

+------------------+--------------------------------------------------+

| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |

| cidr             | 192.168.1.0/24                                   |

| dns_nameservers  |                                                  |

| enable_dhcp      | True                                             |

| gateway_ip       | 192.168.1.1                                      |

| host_routes      |                                                  |

| id               | 53d03079-e689-4292-a10f-317b8cb012f0             |

| ip_version       | 4                                                |

| name             | private_subnet                                   |

| network_id       | 5b1a7624-b8b5-4ce3-b264-1445df528ec6             |

| tenant_id        | fa82c46cf8ed48d39ca516699a81032d                 |

+------------------+--------------------------------------------------+

[root@localhost ~(keystone_admin)]# neutron router-interface-add router1 private_subnet

Added interface 4f123e5f-0285-4b55-b316-59709b19921c to router router1.

[root@localhost ~(keystone_admin)]# glance image-create --name='cirros image' --container-format=bare --disk-format=qcow2 < cirros-0.3.1-x86_64-disk.img

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         | d972013792949d0d3ba628fbe8685bce     |

| container_format | bare                                 |

| created_at       | 2015-08-17T08:46:09                  |

| deleted          | False                                |

| deleted_at       | None                                 |

| disk_format      | qcow2                                |

| id               | 11553256-c2f9-4f80-93b7-1615046dc8c3 |

| is_public        | False                                |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | cirros image                         |

| owner            | fa82c46cf8ed48d39ca516699a81032d     |

| protected        | False                                |

| size             | 13147648                             |

| status           | active                               |

| updated_at       | 2015-08-17T08:46:09                  |

| virtual_size     | None                                 |

+------------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# nova flavor-list

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |

| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |

| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |

| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |

| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@localhost ~(keystone_admin)]# glance image-list

+--------------------------------------+--------------+-------------+------------------+----------+--------+

| ID                                   | Name         | Disk Format | Container Format | Size     | Status |

+--------------------------------------+--------------+-------------+------------------+----------+--------+

| 11553256-c2f9-4f80-93b7-1615046dc8c3 | cirros image | qcow2       | bare             | 13147648 | active |

+--------------------------------------+--------------+-------------+------------------+----------+--------+

[root@localhost ~(keystone_admin)]# neutron net-list

+--------------------------------------+---------+-----------------------------------------------------+

| id                                   | name    | subnets                                             |

+--------------------------------------+---------+-----------------------------------------------------+

| 5b1a7624-b8b5-4ce3-b264-1445df528ec6 | private | 53d03079-e689-4292-a10f-317b8cb012f0 192.168.1.0/24 |

+--------------------------------------+---------+-----------------------------------------------------+

[root@localhost ~(keystone_admin)]# nova boot --flavor m1.small --image 11553256-c2f9-4f80-93b7-1615046dc8c3 --nic net-id=5b1a7624-b8b5-4ce3-b264-1445df528ec6 test1

+--------------------------------------+-----------------------------------------------------+

| Property                             | Value                                               |

+--------------------------------------+-----------------------------------------------------+

| OS-DCF:diskConfig                    | MANUAL                                              |

| OS-EXT-AZ:availability_zone          | nova                                                |

| OS-EXT-SRV-ATTR:host                 | -                                                   |

| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |

| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                   |

| OS-EXT-STS:power_state               | 0                                                   |

| OS-EXT-STS:task_state                | scheduling                                          |

| OS-EXT-STS:vm_state                  | building                                            |

| OS-SRV-USG:launched_at               | -                                                   |

| OS-SRV-USG:terminated_at             | -                                                   |

| accessIPv4                           |                                                     |

| accessIPv6                           |                                                     |

| adminPass                            | 9ZxWJ94RKjyH                                        |

| config_drive                         |                                                     |

| created                              | 2015-08-17T08:51:18Z                                |

| flavor                               | m1.small (2)                                        |

| hostId                               |                                                     |

| id                                   | fa6be9cf-875e-4a58-86d8-58f5f6a2d6bd                |

| image                                | cirros image (11553256-c2f9-4f80-93b7-1615046dc8c3) |

| key_name                             | -                                                   |

| metadata                             | {}                                                  |

| name                                 | test1                                               |

| os-extended-volumes:volumes_attached | []                                                  |

| progress                             | 0                                                   |

| security_groups                      | default                                             |

| status                               | BUILD                                               |

| tenant_id                            | fa82c46cf8ed48d39ca516699a81032d                    |

| updated                              | 2015-08-17T08:51:19Z                                |

| user_id                              | 1058503dc1434ab783fc79bdb9626626                    |

+--------------------------------------+-----------------------------------------------------+

[root@localhost ~(keystone_admin)]# ovs-vsctl show

795a890a-9a70-4340-98c6-d3f6db82264c

    Manager "tcp:10.240.224.185:6640"

        is_connected: true

    Bridge br-int

        Controller "tcp:10.240.224.185:6653"

            is_connected: true

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

        Port "tap6e750af4-e9"

            Interface "tap6e750af4-e9"

        Port "tap5efa303c-10"

            Interface "tap5efa303c-10"

                type: internal

    ovs_version: "2.3.1-git4750c96"

[root@localhost ~(keystone_admin)]# ovs-ofctl --protocol=OpenFlow13 dump-flows br-int

OFPST_FLOW reply (OF1.3) (xid=0x2):

cookie=0x0, duration=11013.425s, table=0, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:20

cookie=0x0, duration=11014.721s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535

cookie=0x0, duration=11013.404s, table=20, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:30

cookie=0x0, duration=11013.328s, table=30, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:40

cookie=0x0, duration=11013.298s, table=40, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:50

cookie=0x0, duration=11013.246s, table=50, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:60

cookie=0x0, duration=11013.216s, table=60, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:70

cookie=0x0, duration=11013.154s, table=70, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:80

cookie=0x0, duration=11013.135s, table=80, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:90

cookie=0x0, duration=11013.114s, table=90, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:100

cookie=0x0, duration=11013.035s, table=100, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:110

cookie=0x0, duration=11013.014s, table=110, n_packets=17, n_bytes=2082, priority=0 actions=drop

[root@localhost ~(keystone_admin)]#



 


Pointers on the ovsdb bug - https://bugs.opendaylight.org/show_bug.cgi?id=3989

Madhava Bangera, Raksha <raksha.madhava.bangera@...>
 

Hi All,

 

I am working on the bug https://bugs.opendaylight.org/show_bug.cgi?id=3989 and the fix is based on approach 1 suggested in the description. I submitted the patch https://git.opendaylight.org/gerrit/#/c/24739/. This patch prevents a duplicate node (one having the same connection info) from being added to the operational datastore, but the duplicate node is still present in the config store. Per the review comments, the duplicate node should be prevented from being added to the config DS too.

 

I have made the changes shown below to the patch. But with this logic, the duplicate node is not completely removed from the config: the {remote IP, port} tuple gets deleted, but the node ID still floats in the config. Could anyone give me pointers on how I can delete the duplicate node completely from the config? Or is there a way to block the duplicate node's entry into the config somewhere?

 

public void onDataChanged(
        AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> changes) {
    LOG.trace("onDataChanged: {}", changes);
    for (Entry<InstanceIdentifier<?>, DataObject> created : changes.getCreatedData().entrySet()) {
        // TODO validate we have the correct kind of InstanceIdentifier
        if (created.getValue() instanceof OvsdbNodeAugmentation) {
            OvsdbNodeAugmentation ovsdbNode = (OvsdbNodeAugmentation) created.getValue();
            ConnectionInfo key = ovsdbNode.getConnectionInfo();
            InstanceIdentifier<Node> iid = cm.getInstanceIdentifier(key);
            if (iid != null) {
                // A node with the same connection info already exists;
                // delete the newly created duplicate from the config datastore.
                InstanceIdentifier<Node> dupiid = (InstanceIdentifier<Node>) created.getKey();
                ReadWriteTransaction transaction = db.newReadWriteTransaction();
                transaction.delete(LogicalDatastoreType.CONFIGURATION, dupiid);
                CheckedFuture<Void, TransactionCommitFailedException> future = transaction.submit();
                try {
                    future.checkedGet();
                } catch (TransactionCommitFailedException e) {
                    LOG.warn("Failed to delete {}", dupiid, e);
                }
                return;
            }
        }
    }
    // Connect first if we have to:
    connect(changes);
    // ... rest of the code
}

 

Thanks & Regards,

Raksha

 


OVSDB Agenda 8/18/15: still currently at 12:00p PST

Sam Hague
 

Topics for this week:

1. Task status updates: please update the Trello cards. [trello]

2. Bug updates: please grab a bug if you want to dig into something and learn the code. [bugs]

3. Modify meeting time from Tuesday 12:00p PST to 10:00a PST. We will move to the new time once the WebEx has been updated. This has not happened yet so today we will still be at 12:00p PST.

4. Clustering, HA

Future topics:

1. Security groups and conntrack
2. neutron plugin evolution



Canceled Event: OpenDaylight - OPNFV community sync meeting @ Thu Aug 20, 2015 10am - 11am (dneary@redhat.com)

dneary@...
 

This event has been canceled and removed from your calendar.

OpenDaylight - OPNFV community sync meeting

OpenDaylight - OPNFV monthly sync call
======================================

This is a monthly call to allow integration issues between OpenDaylight and OpenStack in the context of OPNFV to be shared and resolved, and to encourage better communication between OPNFV and the OpenDaylight project.

Agenda this month:
* OpenDaylight issues in BGS (Fuel and Foreman participants: Please communicate any current issues you have to allow me to add them to the agenda)
* Deploying OpenDaylight in a Docker container (Dan Smith, any issues?)
* OpenDaylight development/roadmap process: How can OPNFV participants define priorities and communicate feature gaps to the OpenDaylight community? https://wiki.opnfv.org/community/opendaylight

Dial-in numbers:

US Toll-Free Dial-In Number: 800 451 8679
US local dial-in number: +1 (212) 729-5016

Global Access Numbers Local:

China Domestic Dial-in # 4006205013
China Domestic Dial-in # 8008190132
Finland Helsinki Dial-in # 0923194436
France Paris Dial-in # 0170377140
Germany Berlin Dial-in # 030300190579
Germany Frankfurt Dial-in # 06922222594
Spain Madrid Dial-in # 914146284
Sweden Stockholm Dial-in # 0850513770
United Kingdom Dial-in # 02035746870
United Kingdom LocalCall Dial-in # 08445790678

Global Access Numbers Toll-Free

Australia Dial-in # 1800337169
France Dial-in # 0805632867
Germany Dial-in # 08006647541
India Dial-in # 180030104350
Japan Dial-in # 0120994948
Japan Dial-in # 00531250120
Netherlands Dial-in # 08000222329
Spain Dial-in # 800300524
Sweden Dial-in # 0200896860
Switzerland Dial-in # 0800650077
United Kingdom Dial-in # 08006948057

When
Thu Aug 20, 2015 10am – 11am Eastern Time
Where
Intercall (numbers below) Bridge: 915 507 3783# (map)
Calendar
dneary@...
Who
Dave Neary - organizer
controller-dev@...
opnfv-tsc@...
sandy.turnbull@...
mageshkumar@...
shanan@...
paparao.palacharla@...
martin.lipka@...
mark.szczesniak@...
groupbasedpolicy-dev@...
laurent.laporte@...
smazziot@...
narayana_perumal@...
vzelcamo@...
rbrar@...
sma@...
sharis@...
louis.fourie@...
zhang.jun3g@...
cficik@...
desilva@...
opnfv-tech-discuss@...
yunchao.hu@...
ovsdb-dev@...
michael.shevenell@...
dfarrell@...
dave.hood@...
canio.cillis@...
fzdarsky@...
george.y.zhao@...
kkoushik@...
dominik.schatzmann@...
mc3124@...
tadi.bhargava@...
glenn.seiler@...
gershon.schatzberg@...
paul-andre.raymond@...
nsowatsk@...
dongkansheng@...
tapio.tallgren@...
ville.pesonen@...
ssaxena@...
azhar.saleem@...
gmainzer@...
carol.sanders@...
dayavanti.gopal.kamath@...
eric.hansander@...
arobinson@...
daniel.smith@...
dwcarder@...
zxing@...
jiangmk@...
wangjinzhu@...
vguntaka@...
john.borz@...
vijamann@...
scott.mansfield@...
iben.rodriguez@...
christopher.price@...
nlemieux@...
bs3131@...
rmoats@...
rapenno@...
psarwal@...
yafit.hadar@...
helen.chen@...
tnadeau@...
chilung@...
marc.rapoport@...
Keith Burns
discuss@...
bh526r@...
thinrichs@...
shague@...
james.luhrsen@...
slowe@...
daya_k@...
michael.a.lynch@...
raymond.nugent@...
ramkri123@...
sfc-dev@...
neutron-dev@...
dkutenic@...
rovarga@...
peter.pozar@...
manohar.sl@...
ganesh@...
jack.pugaczewski@...
dirk.kutscher@...

Invitation from Google Calendar

You are receiving this courtesy email at the account ovsdb-dev@... because you are an attendee of this event.

To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar.

Forwarding this invitation could allow any recipient to modify your RSVP response. Learn More.


question on separate client interfaces for separate ovsdb servers

daya k
 

hi all,
I have a question on whether OVSDB today supports different local client IPs for different ovsdb server connections.

While the local IP and port info are in the ovsdb.yang file, a brief look at the code shows this snippet in OvsdbConnectionManager.java:

private void putConnectionInstance(ConnectionInfo key, OvsdbConnectionInstance instance) {
    ConnectionInfo connectionInfo = SouthboundMapper.suppressLocalIpPort(key);
    clients.put(connectionInfo, instance);
}

Are we always suppressing the local IP and port before storing the connection in the datastore?
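A minimal sketch (illustrative Python, not the actual Java) of what suppressing the local IP/port does to the map key: two connections from the same remote ovsdb server collapse onto one client entry, regardless of which local interface or ephemeral port was used.

```python
# Illustrative model of keying the client map only by the remote endpoint,
# as suppressLocalIpPort does: the local ip/port (often an ephemeral port
# that changes on reconnect) never makes two keys differ.
def suppress_local_ip_port(key):
    remote_ip, remote_port, _local_ip, _local_port = key
    return (remote_ip, remote_port)

clients = {}
clients[suppress_local_ip_port(("10.0.0.5", 6640, "192.168.0.1", 43210))] = "instance-A"
clients[suppress_local_ip_port(("10.0.0.5", 6640, "192.168.0.2", 51515))] = "instance-B"

# Both entries collapsed onto the same key -- the local interface used for
# the connection is not preserved in the key.
print(len(clients))  # -> 1
```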

thanks,
daya


Re: Query on East-West traffic

Sam Hague
 

Ravi,

for 2 I think we normally just ping from the DHCP namespace, which is similar to a VRF. The namespace is also tenant specific, so the ping will match the right flows. All traffic coming from a given port is tagged with the segmentation ID/tenant info to identify it.

Sam

On Tue, Aug 18, 2015 at 4:01 AM, <Ravi_Sabapathy@...> wrote:

Hi All,

     

     I have a query on East-West traffic and how it is handled by OVSDB and OpenStack. There are two possible cases in East-West traffic.

 

  Case 1 - Tenants having different networks:

 

     Consider the below case,

     Tenant 1 with network 2.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

     Tenant 1 tries to ping tenant 2. In this case a tuple of [tunnel_id/vxlan_id, dst_ip] is used by Open vSwitch to identify the destination tenant network and switch the packet to it.

 

 

Flow Rules for reaching different tenant (Ref: Flavio’s how-to-odl-with-openstack-part2.html blog):

 

cookie=0x0, duration=9662.085s, table=60, n_packets=122, n_bytes=11222, priority=2048,ip,tun_id=0x3e9,nw_dst=2.0.0.0/24 actions=set_field:fa:16:3e:cb:14:47->eth_src,dec_ttl,set_field:0x3ea->tun_id,goto_table:70

cookie=0x0, duration=9661.045s, table=60, n_packets=4, n_bytes=392, priority=2048,ip,tun_id=0x3ea,nw_dst=1.0.0.0/24 actions=set_field:fa:16:3e:69:5a:42->eth_src,dec_ttl,set_field:0x3e9->tun_id,goto_table:70

              I have verified in my local setup that East-West traffic works fine for tenants with different networks.
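The two table-60 rules above amount to a lookup on the (tun_id, destination subnet) pair, followed by a source-MAC rewrite, a TTL decrement, and retagging with the destination tenant's tunnel ID. A rough Python model of that match/action (names invented for illustration; this is not ODL code):

```python
import ipaddress

# (tun_id, destination subnet) -> (new source MAC for the routed hop, new tun_id)
ROUTES = {
    (0x3E9, ipaddress.ip_network("2.0.0.0/24")): ("fa:16:3e:cb:14:47", 0x3EA),
    (0x3EA, ipaddress.ip_network("1.0.0.0/24")): ("fa:16:3e:69:5a:42", 0x3E9),
}

def table_60(tun_id, dst_ip, ttl):
    """Model the routed hop: rewrite eth_src, decrement TTL, retag tun_id."""
    for (tid, subnet), (eth_src, new_tid) in ROUTES.items():
        if tid == tun_id and ipaddress.ip_address(dst_ip) in subnet:
            return {"eth_src": eth_src, "ttl": ttl - 1, "tun_id": new_tid}
    return None  # fall through to lower-priority rules

# A packet from tenant 1 (tun_id 0x3e9) to 2.0.0.2 is retagged to 0x3ea:
print(table_60(0x3E9, "2.0.0.2", 64))
# {'eth_src': 'fa:16:3e:cb:14:47', 'ttl': 63, 'tun_id': 1002}
```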

 Case 2 – Two or more tenants having the same network:

 

     Consider the below case,

     Tenant 1 with network 1.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

             How does Open vSwitch create rules to reach tenant 2 when tenant 1 tries to ping? The ping binary does not seem to provide any option to specify a tunnel_id/segmentation ID.

 

Legacy behavior:

In a legacy network, we can have the same subnet in different Virtual Routing and Forwarding (VRF) instances. The ping binary has options to ping a destination IP in a specific VRF.

 

              So, there are two options:

1. Have the VXLAN ID/tunnel ID carried as part of ping/the application. This way Open vSwitch can form a unique tuple of [tunnel_id/vxlan_id, dst_ip]. Please give your comments on this.

2. Use the floating IP option and assign:

   a. A static floating IP to each of the VMs in the tenant network. In a large-scale deployment we might run out of floating IPs, so this might not be an ideal solution.

   b. A floating IP per compute node or per tenant network in the deployment. In this case ODL has to internally maintain which port to reach for a particular floating IP.

 

               Is the IP-overlap use case possible in the current scenario with ODL + OpenStack?

I believe it is a valid use case from a deployment perspective; please correct me if I am wrong and share your inputs.
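To make option 1 concrete: the collision in case 2 is a key-uniqueness problem. With a plain destination-IP lookup the two tenants are indistinguishable, but keying the lookup on (segmentation_id, dst_ip), as the case-1 flows already do, keeps the overlapping networks separate. A hypothetical sketch (not actual ODL behaviour):

```python
# Forwarding table keyed on (segmentation id, destination IP): the same
# 1.0.0.0/24 address space can exist once per tenant without ambiguity.
fib = {
    (0x3E9, "1.0.0.5"): "tenant1-vm-port",
    (0x3EA, "1.0.0.5"): "tenant2-vm-port",
}

def lookup(seg_id, dst_ip):
    return fib.get((seg_id, dst_ip))

# The segmentation id comes from the ingress port's tenant, not from ping
# itself -- which is why plain ping cannot select a tenant across overlap.
print(lookup(0x3E9, "1.0.0.5"))  # tenant1-vm-port
print(lookup(0x3EA, "1.0.0.5"))  # tenant2-vm-port
```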

               

 

Regards,

Ravi

 


_______________________________________________
ovsdb-dev mailing list
ovsdb-dev@...
https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev



Re: Openstack-ODL integration - stacking issues with pbr

Ihar Hrachyshka <ihrachys@...>
 


Update:

The stable/kilo branch for the neutron driver is added to projects.txt,
and the bot proposed the first update patch in the branch that failed
because you were running your Kilo driver against neutron Liberty.
I've modified the patch to pass the Kilo gate and merged it in:

https://review.openstack.org/#/c/213878/

Hopefully, this finally solves your issue with the branch.

For the future, make sure that once neutron creates a stable branch,
so do you, and make sure that you gate it against the stable branch of
neutron repos and not master.

Cheers
Ihar

On 08/14/2015 05:09 PM, Kyle Mestery wrote:
On Fri, Aug 14, 2015 at 9:56 AM, Flavio Fernandes
<ffernand@... <mailto:ffernand@...>> wrote:


On Aug 14, 2015, at 7:00 AM, Ihar Hrachyshka
<ihrachys@... <mailto:ihrachys@...>> wrote:
On 08/14/2015 12:32 PM, Flavio Fernandes wrote:
[cc odl neutron-dev, Ankur, Isaku]


On Aug 14, 2015, at 6:23 AM, Ihar Hrachyshka
<ihrachys@... <mailto:ihrachys@...>
<mailto:ihrachys@...>> wrote:
On 08/14/2015 12:15 PM, Flavio Fernandes wrote:

On Aug 13, 2015, at 9:04 AM, Ihar Hrachyshka
<ihrachys@... <mailto:ihrachys@...>
<mailto:ihrachys@...>
<mailto:ihrachys@...>> wrote:
On 08/13/2015 02:46 PM, Flavio Fernandes wrote:

On Aug 13, 2015, at 8:30 AM, Ihar Hrachyshka
<ihrachys@...
<mailto:ihrachys@...>
<mailto:ihrachys@...>
<mailto:ihrachys@...>
<mailto:ihrachys@...>> wrote:
Looking at the logs in the email thread:

2015-08-05 14:59:53.160 | Download error on
https://pypi.python.org/simple/pbr/: [Errno 110]
Connection timed out -- Some packages may not be
found! 2015-08-05 15:02:00.392 | Download error
on https://pypi.python.org/simple/: [Errno 110]
Connection timed out -- Some packages may not be
found! 2015-08-05 15:02:00.392 | No local
packages or download links found for pbr>=1.3

So why can't your machine download the satisfying
pbr version? It's available on pypi, so assuming
you fix the download error, I think it should
proceed.


That is the issue. This happens because
OFFLINE=True and there is still something in
pbr requirements that is looking for a version
of pbr that is not used in stable/kilo.
I suspect this is happening because
stable/kilo branch in networking-odl was
created ‘late’ and the new version of pbr was
added as part of liberty?!?
So, to easily reproduce this issue: 1) stack
with devstack+networking-odl on stable/kilo; 2)
unstack; 3) change OFFLINE=True; and 4) attempt
to stack again.
Are your repo [test-]requirements.txt synchronized
with neutron's kilo requirements? They should,
otherwise it won't be ever supported.


They do not for requirements.txt:
$ diff requirements.txt.neutron
requirements.txt.networking-odl | grep pbr <
pbr!=0.7,<1.0,>=0.6
pbr<2.0,>=1.3
$ diff test-requirements.txt.neutron
test-requirements.txt.networking-odl | grep pbr $
So that's a problem. You don't even have a common pbr version
that would satisfy both projects.


indeed!



If you look closely, that is what is changed in the
abandoned gerrit [1]. Maybe the right thing to do is
to re-visit that gerrit and make sure the
requirement.txt files are inline?!?
The fix is obvious: make stable/kilo requirements in your
repo synchronized with what is in stable/kilo for neutron.
Same for all other branches.

Openstack requirements proposal bot can help you maintain
the lists synchronized. To make sure your repo gets updates
from the bot, add it in projects.txt in
openstack/requirements repo (in master and in stable/kilo).


It is ironic reading this… and then looking at the ‘owner'
of the commit that caused all this mess [1].
The bot was proposing updates when it was still master. After you branched
out stable/kilo, you should have stuck to the deps that belong
to stable/kilo.

That aside; I have very little experience on that… some
one else’s help to take care of this would be greatly
appreciated.
Anybody up for that task?
OK, so your repo is in projects.txt, but only on master. That's
why you don't receive updates on stable/kilo. I've cherry-picked
the appropriate openstack/requirements patch to enable the bot for
your stable branch too: https://review.openstack.org/#/c/213084/

I see that you created stable/kilo branch based on some incorrect
hash that already included changes from liberty cycle. That's why
you got the bot requirements update there. You should probably
reset your branch to start from a patch that does not include any
Liberty changes.


yes, makes sense. How and where is this hash kept? I think
whatever hash it has now is fine, except for the gerrit I propose
to be reverted [1]. @Kyle: agree?
One other glitch I see in your repository is that you have
stable/juno and stable/icehouse there while they contain neutron. I
guess they were created when you spun your driver out of the
neutron tree. I think they can be safely killed now.

Yes. Can you give me more concrete steps on how to accomplish
that ‘killing'?

I can get rid of those branches in gerrit, let me do that now.
— flavio
[1]: https://review.openstack.org/#/c/197258/


Thanks,
— flavio
[1]: https://review.openstack.org/#/c/197258/


Ihar


Query on East-West traffic

Ravi Shankar S
 

Hi All,

     

     I have a query on East-West traffic and how it is handled by OVSDB and OpenStack. There are two possible cases in East-West traffic.

 

  Case 1 - Tenants having different networks:

 

     Consider the below case,

     Tenant 1 with network 2.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

     Tenant 1 tries to ping tenant 2. In this case a tuple of [tunnel_id/vxlan_id, dst_ip] is used by Open vSwitch to identify the destination tenant network and switch the packet to it.

 

 

Flow Rules for reaching different tenant (Ref: Flavio’s how-to-odl-with-openstack-part2.html blog):

 

cookie=0x0, duration=9662.085s, table=60, n_packets=122, n_bytes=11222, priority=2048,ip,tun_id=0x3e9,nw_dst=2.0.0.0/24 actions=set_field:fa:16:3e:cb:14:47->eth_src,dec_ttl,set_field:0x3ea->tun_id,goto_table:70

cookie=0x0, duration=9661.045s, table=60, n_packets=4, n_bytes=392, priority=2048,ip,tun_id=0x3ea,nw_dst=1.0.0.0/24 actions=set_field:fa:16:3e:69:5a:42->eth_src,dec_ttl,set_field:0x3e9->tun_id,goto_table:70

              I have verified in my local setup that East-West traffic works fine for tenants with different networks.

 Case 2 – Two or more tenants having the same network:

 

     Consider the below case,

     Tenant 1 with network 1.0.0.0/24

     Tenant 2 with network 1.0.0.0/24

 

             How does Open vSwitch create rules to reach tenant 2 when tenant 1 tries to ping? The ping binary does not seem to provide any option to specify a tunnel_id/segmentation ID.

 

Legacy behavior:

In a legacy network, we can have the same subnet in different Virtual Routing and Forwarding (VRF) instances. The ping binary has options to ping a destination IP in a specific VRF.

 

              So, there are two options:

1. Have the VXLAN ID/tunnel ID carried as part of ping/the application. This way Open vSwitch can form a unique tuple of [tunnel_id/vxlan_id, dst_ip]. Please give your comments on this.

2. Use the floating IP option and assign:

   a. A static floating IP to each of the VMs in the tenant network. In a large-scale deployment we might run out of floating IPs, so this might not be an ideal solution.

   b. A floating IP per compute node or per tenant network in the deployment. In this case ODL has to internally maintain which port to reach for a particular floating IP.

 

               Is the IP-overlap use case possible in the current scenario with ODL + OpenStack?

I believe it is a valid use case from a deployment perspective; please correct me if I am wrong and share your inputs.

               

 

Regards,

Ravi

 


Re: ovsdb-dev Digest, Vol 26, Issue 22

Yang, Yi Y <yi.y.yang@...>
 

Here is my L3 agent config. Do you know how to configure it so that L3 is handled by ODL?

 

[root@localhost ~]# cat /etc/neutron/l3_agent.ini

[DEFAULT]

# Show debugging output in log (sets DEBUG log level output)

# debug = False

debug = False

 

# L3 requires that an interface driver be set. Choose the one that best

# matches your plugin.

# interface_driver =

interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver

 

# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)

# that supports L3 agent

# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

 

# Use veth for an OVS interface or not.

# Support kernels with limited namespace support

# (e.g. RHEL 6.5) so long as ovs_use_veth is set to True.

# ovs_use_veth = False

 

# Example of interface_driver option for LinuxBridge

# interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

 

# Allow overlapping IP (Must have kernel build with CONFIG_NET_NS=y and

# iproute2 package that supports namespaces).

# use_namespaces = True

use_namespaces = True

 

# If use_namespaces is set as False then the agent can only configure one router.

 

# This is done by setting the specific router_id.

# router_id =

 

# When external_network_bridge is set, each L3 agent can be associated

# with no more than one external network. This value should be set to the UUID

# of that external network. To allow L3 agent support multiple external

# networks, both the external_network_bridge and gateway_external_network_id

# must be left empty.

# gateway_external_network_id =

 

# Indicates that this L3 agent should also handle routers that do not have

# an external network gateway configured.  This option should be True only

# for a single agent in a Neutron deployment, and may be False for all agents

# if all routers must have an external network gateway

# handle_internal_only_routers = True

handle_internal_only_routers = True

 

# Name of bridge used for external network traffic. This should be set to

# empty value for the linux bridge. when this parameter is set, each L3 agent

# can be associated with no more than one external network.

# external_network_bridge = br-ex

external_network_bridge = br-ex

 

# TCP Port used by Neutron metadata server

# metadata_port = 9697

metadata_port = 9697

 

# Send this many gratuitous ARPs for HA setup. Set it below or equal to 0

# to disable this feature.

# send_arp_for_ha = 0

send_arp_for_ha = 3

 

# seconds between re-sync routers' data if needed

# periodic_interval = 40

periodic_interval = 40

 

# seconds to start to sync routers' data after

# starting agent

# periodic_fuzzy_delay = 5

periodic_fuzzy_delay = 5

 

# enable_metadata_proxy, which is true by default, can be set to False

# if the Nova metadata server is not available

# enable_metadata_proxy = True

enable_metadata_proxy = True

 

# Location of Metadata Proxy UNIX domain socket

# metadata_proxy_socket = $state_path/metadata_proxy

 

# router_delete_namespaces, which is false by default, can be set to True if

# namespaces can be deleted cleanly on the host running the L3 agent.

# Do not enable this until you understand the problem with the Linux iproute

# utility mentioned in https://bugs.launchpad.net/neutron/+bug/1052535 and

# you are sure that your version of iproute does not suffer from the problem.

# If True, namespaces will be deleted when a router is destroyed.

# router_delete_namespaces = False

router_delete_namespaces = False

 

# Timeout for ovs-vsctl commands.

# If the timeout expires, ovs commands will fail with ALARMCLOCK error.

# ovs_vsctl_timeout = 10

[root@localhost ~]#

 

From: ovsdb-dev-bounces@... [mailto:ovsdb-dev-bounces@...] On Behalf Of Hemanth N
Sent: Monday, August 17, 2015 5:19 PM
To: ovsdb-dev@...
Subject: Re: [ovsdb-dev] ovsdb-dev Digest, Vol 26, Issue 22

 

Hi Yi

The flows in br-int are configured by ODL. OpenStack normally uses tags instead of flows to route the packets.

However, please check the service plugin set for L3 services in the Neutron config file, or the l3-agent process on the network node.

L3 can be configured to be handled either by OpenStack or by ODL.

 

// Hemanth
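As a concrete sketch of what Hemanth describes: with networking-odl installed, L3 can be handed to ODL by replacing the default router service plugin in the Neutron config and not running the l3-agent. The exact plugin path varies by networking-odl release, so treat the value below as an example to verify against your install, not a definitive setting:

```
# /etc/neutron/neutron.conf -- hand L3 routing to ODL instead of the l3-agent
# (plugin path is release-dependent; check your networking-odl version)
[DEFAULT]
service_plugins = networking_odl.l3.l3_odl.OpenDaylightL3RouterPlugin
```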

 

On Mon, Aug 17, 2015 at 2:22 PM, <ovsdb-dev-request@...> wrote:

Send ovsdb-dev mailing list submissions to
        ovsdb-dev@...

To subscribe or unsubscribe via the World Wide Web, visit
        https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev
or, via email, send a message with subject or body 'help' to
        ovsdb-dev-request@...

You can reach the person managing the list at
        ovsdb-dev-owner@...

When replying, please edit your Subject line so it is more specific
than "Re: Contents of ovsdb-dev digest..."


Today's Topics:

   1. Anybody knows how to check if neutron is working  normally
      with ODL? (Yang, Yi Y)


----------------------------------------------------------------------

Message: 1
Date: Mon, 17 Aug 2015 08:50:37 +0000
From: "Yang, Yi Y" <yi.y.yang@...>
To: "ovsdb-dev@..."
        <ovsdb-dev@...>
Subject: [ovsdb-dev] Anybody knows how to check if neutron is working
        normally with ODL?
Message-ID:
        <79BBBFE6CB6C9B488C1A45ACD284F51910F5D544@...>

Content-Type: text/plain; charset="us-ascii"

Hi, All

I followed https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight to integrate Openstack and SFC. I saw br-int was created by ODL.

[root@localhost ~(keystone_admin)]# ovs-vsctl show
795a890a-9a70-4340-98c6-d3f6db82264c
    Manager "tcp:10.240.224.185:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:10.240.224.185:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.1-git4750c96"
[root@localhost ~(keystone_admin)]#

I'm not sure if OpenStack neutron is really connected to ODL.

[root@localhost ~(keystone_admin)]# curl -u admin:admin http://10.240.224.185:8181/controller/nb/v2/neutron/networks
{
   "networks" : [ ]
}[root@localhost ~(keystone_admin)]#

When I ran neutron commands to create the net, subnet, router, etc., neutron didn't report any error, but it seems the openflow tables didn't change after I created the net, subnet, and router and started a VM. Does anybody know how to check if neutron is working normally with ODL?

[root@localhost ~(keystone_admin)]# neutron router-create router1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | ba8603f7-68ea-48ed-88a2-fdf49cd5d8a8 |
| name                  | router1                              |
| status                | ACTIVE                               |
| tenant_id             | fa82c46cf8ed48d39ca516699a81032d     |
+-----------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]# neutron subnet-create private --name=private_subnet 192.168.1.0/24
Created a new subnet:
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr             | 192.168.1.0/24                                   |
| dns_nameservers  |                                                  |
| enable_dhcp      | True                                             |
| gateway_ip       | 192.168.1.1                                      |
| host_routes      |                                                  |
| id               | 53d03079-e689-4292-a10f-317b8cb012f0             |
| ip_version       | 4                                                |
| name             | private_subnet                                   |
| network_id       | 5b1a7624-b8b5-4ce3-b264-1445df528ec6             |
| tenant_id        | fa82c46cf8ed48d39ca516699a81032d                 |
+------------------+--------------------------------------------------+
[root@localhost ~(keystone_admin)]# neutron router-interface-add router1 private_subnet
Added interface 4f123e5f-0285-4b55-b316-59709b19921c to router router1.
[root@localhost ~(keystone_admin)]# glance image-create --name='cirros image' --container-format=bare --disk-format=qcow2 < cirros-0.3.1-x86_64-disk.img
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | d972013792949d0d3ba628fbe8685bce     |
| container_format | bare                                 |
| created_at       | 2015-08-17T08:46:09                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 11553256-c2f9-4f80-93b7-1615046dc8c3 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros image                         |
| owner            | fa82c46cf8ed48d39ca516699a81032d     |
| protected        | False                                |
| size             | 13147648                             |
| status           | active                               |
| updated_at       | 2015-08-17T08:46:09                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@localhost ~(keystone_admin)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@localhost ~(keystone_admin)]# glance image-list
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size     | Status |
+--------------------------------------+--------------+-------------+------------------+----------+--------+
| 11553256-c2f9-4f80-93b7-1615046dc8c3 | cirros image | qcow2       | bare             | 13147648 | active |
+--------------------------------------+--------------+-------------+------------------+----------+--------+
[root@localhost ~(keystone_admin)]# neutron net-list
+--------------------------------------+---------+-----------------------------------------------------+
| id                                   | name    | subnets                                             |
+--------------------------------------+---------+-----------------------------------------------------+
| 5b1a7624-b8b5-4ce3-b264-1445df528ec6 | private | 53d03079-e689-4292-a10f-317b8cb012f0 192.168.1.0/24 |
+--------------------------------------+---------+-----------------------------------------------------+
[root@localhost ~(keystone_admin)]# nova boot --flavor m1.small --image 11553256-c2f9-4f80-93b7-1615046dc8c3 --nic net-id=5b1a7624-b8b5-4ce3-b264-1445df528ec6 test1
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 9ZxWJ94RKjyH                                        |
| config_drive                         |                                                     |
| created                              | 2015-08-17T08:51:18Z                                |
| flavor                               | m1.small (2)                                        |
| hostId                               |                                                     |
| id                                   | fa6be9cf-875e-4a58-86d8-58f5f6a2d6bd                |
| image                                | cirros image (11553256-c2f9-4f80-93b7-1615046dc8c3) |
| key_name                             | -                                                   |
| metadata                             | {}                                                  |
| name                                 | test1                                               |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | fa82c46cf8ed48d39ca516699a81032d                    |
| updated                              | 2015-08-17T08:51:19Z                                |
| user_id                              | 1058503dc1434ab783fc79bdb9626626                    |
+--------------------------------------+-----------------------------------------------------+
[root@localhost ~(keystone_admin)]# ovs-vsctl show
795a890a-9a70-4340-98c6-d3f6db82264c
    Manager "tcp:10.240.224.185:6640"
        is_connected: true
    Bridge br-int
        Controller "tcp:10.240.224.185:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap6e750af4-e9"
            Interface "tap6e750af4-e9"
        Port "tap5efa303c-10"
            Interface "tap5efa303c-10"
                type: internal
    ovs_version: "2.3.1-git4750c96"
[root@localhost ~(keystone_admin)]# ovs-ofctl --protocol=OpenFlow13 dump-flows br-int
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=11013.425s, table=0, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:20
cookie=0x0, duration=11014.721s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535
cookie=0x0, duration=11013.404s, table=20, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:30
cookie=0x0, duration=11013.328s, table=30, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:40
cookie=0x0, duration=11013.298s, table=40, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:50
cookie=0x0, duration=11013.246s, table=50, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:60
cookie=0x0, duration=11013.216s, table=60, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:70
cookie=0x0, duration=11013.154s, table=70, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:80
cookie=0x0, duration=11013.135s, table=80, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:90
cookie=0x0, duration=11013.114s, table=90, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:100
cookie=0x0, duration=11013.035s, table=100, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:110
cookie=0x0, duration=11013.014s, table=110, n_packets=17, n_bytes=2082, priority=0 actions=drop
[root@localhost ~(keystone_admin)]#
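The dump above shows only the default pipeline: each table's priority=0 rule jumps to the next table, and table 110 drops. A small Python model of that fall-through (table numbers taken from the dump; the rest is illustrative) shows why, with no higher-priority network flows installed, every packet is dropped:

```python
# Default ODL net-virt pipeline from the dump-flows output above.
PIPELINE = [0, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110]

def run_pipeline(flows):
    """flows: {table: action} for higher-priority rules; default is goto-next."""
    table = PIPELINE[0]
    while True:
        if table in flows:
            return flows[table]  # a real network flow matched
        if table == PIPELINE[-1]:
            return "drop"        # table 110's priority=0 action
        table = PIPELINE[PIPELINE.index(table) + 1]

print(run_pipeline({}))  # drop
```

So seeing only the goto chain plus a final drop means the pipeline is in place but no per-network flows were ever programmed, which matches the symptom reported in this thread.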

------------------------------

_______________________________________________
ovsdb-dev mailing list
ovsdb-dev@...
https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev


End of ovsdb-dev Digest, Vol 26, Issue 22
*****************************************

 


Re: Anybody knows how to check if neutron is working normally with ODL?

Yang, Yi Y <yi.y.yang@...>
 

Sam, thank you so much. I’ll use your ova image to set up my test.

 

From: Sam Hague [mailto:shague@...]
Sent: Monday, August 17, 2015 9:33 PM
To: Yang, Yi Y
Cc: ovsdb-dev@...
Subject: Re: [ovsdb-dev] Anybody knows how to check if neutron is working normally with ODL?

 

Yi,

it is only partially working in your setup. Your ovsdb nodes are connected to ODL and the pipeline flows are created, which is good. You can see that in the final dump-flows in your output. But there should be more flows for the neutron networks you created, so something is failing at that point. Normally this is a config issue and the neutron commands are not even making it to ODL. Looking at the neutron logs helps here.

You can look at different logs to find the issue:

- odl logs, look for exceptions

- neutron logs, grep -ir 'error\|fail\|usage\|not found' <path to neutron logs, devstack logs>

That wiki you followed is older. You might have better luck using our tutorial VMs from the summit: [ova] [slides]. The ova has all the VMs needed to bring up devstack and show the integration between OpenStack and ODL. The slides show how everything is connected, how to run the neutron commands, and how to verify everything is working.

 

Thanks, Sam

[ova] https://wiki.opendaylight.org/images/HostedFiles/2015Summit/ovsdbtutorial15_2.ova

[slides] https://drive.google.com/open?id=1KIuNDuUJGGEV37Zk9yzx9OSnWExt4iD2Z7afycFLf_I

 

On Mon, Aug 17, 2015 at 4:50 AM, Yang, Yi Y <yi.y.yang@...> wrote:

Hi, All

 

I followed https://wiki.opendaylight.org/view/OpenStack_and_OpenDaylight to integrate Openstack and SFC. I saw br-int was created by ODL.

 

[root@localhost ~(keystone_admin)]# ovs-vsctl show

795a890a-9a70-4340-98c6-d3f6db82264c

    Manager "tcp:10.240.224.185:6640"

        is_connected: true

    Bridge br-int

        Controller "tcp:10.240.224.185:6653"

            is_connected: true

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

    ovs_version: "2.3.1-git4750c96"

[root@localhost ~(keystone_admin)]#

 

I’m not sure if OpenStack neutron is really connected to ODL.

 

[root@localhost ~(keystone_admin)]# curl -u admin:admin http://10.240.224.185:8181/controller/nb/v2/neutron/networks

{

   "networks" : [ ]

}[root@localhost ~(keystone_admin)]#

 

When I ran neutron commands to create the net, subnet, router, etc., neutron didn’t report any error, but it seems the openflow tables didn’t change after I created the net, subnet, and router and started a VM. Does anybody know how to check if neutron is working normally with ODL?

 

[root@localhost ~(keystone_admin)]# neutron router-create router1

Created a new router:

+-----------------------+--------------------------------------+

| Field                 | Value                                |

+-----------------------+--------------------------------------+

| admin_state_up        | True                                 |

| external_gateway_info |                                      |

| id                    | ba8603f7-68ea-48ed-88a2-fdf49cd5d8a8 |

| name                  | router1                              |

| status                | ACTIVE                               |

| tenant_id             | fa82c46cf8ed48d39ca516699a81032d     |

+-----------------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# neutron subnet-create private --name=private_subnet 192.168.1.0/24

Created a new subnet:

+------------------+--------------------------------------------------+

| Field            | Value                                            |

+------------------+--------------------------------------------------+

| allocation_pools | {"start": "192.168.1.2", "end": "192.168.1.254"} |

| cidr             | 192.168.1.0/24                                   |

| dns_nameservers  |                                                  |

| enable_dhcp      | True                                             |

| gateway_ip       | 192.168.1.1                                      |

| host_routes      |                                                  |

| id               | 53d03079-e689-4292-a10f-317b8cb012f0             |

| ip_version       | 4                                                |

| name             | private_subnet                                   |

| network_id       | 5b1a7624-b8b5-4ce3-b264-1445df528ec6             |

| tenant_id        | fa82c46cf8ed48d39ca516699a81032d                 |

+------------------+--------------------------------------------------+

[root@localhost ~(keystone_admin)]# neutron router-interface-add router1 private_subnet

Added interface 4f123e5f-0285-4b55-b316-59709b19921c to router router1.

[root@localhost ~(keystone_admin)]# glance image-create --name='cirros image' --container-format=bare --disk-format=qcow2 < cirros-0.3.1-x86_64-disk.img

+------------------+--------------------------------------+

| Property         | Value                                |

+------------------+--------------------------------------+

| checksum         | d972013792949d0d3ba628fbe8685bce     |

| container_format | bare                                 |

| created_at       | 2015-08-17T08:46:09                  |

| deleted          | False                                |

| deleted_at       | None                                 |

| disk_format      | qcow2                                |

| id               | 11553256-c2f9-4f80-93b7-1615046dc8c3 |

| is_public        | False                                |

| min_disk         | 0                                    |

| min_ram          | 0                                    |

| name             | cirros image                         |

| owner            | fa82c46cf8ed48d39ca516699a81032d     |

| protected        | False                                |

| size             | 13147648                             |

| status           | active                               |

| updated_at       | 2015-08-17T08:46:09                  |

| virtual_size     | None                                 |

+------------------+--------------------------------------+

[root@localhost ~(keystone_admin)]# nova flavor-list

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |

| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |

| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |

| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |

| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |

+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

[root@localhost ~(keystone_admin)]# glance image-list

+--------------------------------------+--------------+-------------+------------------+----------+--------+

| ID                                   | Name         | Disk Format | Container Format | Size     | Status |

+--------------------------------------+--------------+-------------+------------------+----------+--------+

| 11553256-c2f9-4f80-93b7-1615046dc8c3 | cirros image | qcow2       | bare             | 13147648 | active |

+--------------------------------------+--------------+-------------+------------------+----------+--------+

[root@localhost ~(keystone_admin)]# neutron net-list

+--------------------------------------+---------+-----------------------------------------------------+

| id                                   | name    | subnets                                             |

+--------------------------------------+---------+-----------------------------------------------------+

| 5b1a7624-b8b5-4ce3-b264-1445df528ec6 | private | 53d03079-e689-4292-a10f-317b8cb012f0 192.168.1.0/24 |

+--------------------------------------+---------+-----------------------------------------------------+

[root@localhost ~(keystone_admin)]# nova boot --flavor m1.small --image 11553256-c2f9-4f80-93b7-1615046dc8c3 --nic net-id=5b1a7624-b8b5-4ce3-b264-1445df528ec6 test1

+--------------------------------------+-----------------------------------------------------+

| Property                             | Value                                               |

+--------------------------------------+-----------------------------------------------------+

| OS-DCF:diskConfig                    | MANUAL                                              |

| OS-EXT-AZ:availability_zone          | nova                                                |

| OS-EXT-SRV-ATTR:host                 | -                                                   |

| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |

| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                   |

| OS-EXT-STS:power_state               | 0                                                   |

| OS-EXT-STS:task_state                | scheduling                                          |

| OS-EXT-STS:vm_state                  | building                                            |

| OS-SRV-USG:launched_at               | -                                                   |

| OS-SRV-USG:terminated_at             | -                                                   |

| accessIPv4                           |                                                     |

| accessIPv6                           |                                                     |

| adminPass                            | 9ZxWJ94RKjyH                                        |

| config_drive                         |                                                     |

| created                              | 2015-08-17T08:51:18Z                                |

| flavor                               | m1.small (2)                                        |

| hostId                               |                                                     |

| id                                   | fa6be9cf-875e-4a58-86d8-58f5f6a2d6bd                |

| image                                | cirros image (11553256-c2f9-4f80-93b7-1615046dc8c3) |

| key_name                             | -                                                   |

| metadata                             | {}                                                  |

| name                                 | test1                                               |

| os-extended-volumes:volumes_attached | []                                                  |

| progress                             | 0                                                   |

| security_groups                      | default                                             |

| status                               | BUILD                                               |

| tenant_id                            | fa82c46cf8ed48d39ca516699a81032d                    |

| updated                              | 2015-08-17T08:51:19Z                                |

| user_id                              | 1058503dc1434ab783fc79bdb9626626                    |

+--------------------------------------+-----------------------------------------------------+

[root@localhost ~(keystone_admin)]# ovs-vsctl show

795a890a-9a70-4340-98c6-d3f6db82264c

    Manager "tcp:10.240.224.185:6640"

        is_connected: true

    Bridge br-int

        Controller "tcp:10.240.224.185:6653"

            is_connected: true

        fail_mode: secure

        Port br-int

            Interface br-int

                type: internal

        Port "tap6e750af4-e9"

            Interface "tap6e750af4-e9"

        Port "tap5efa303c-10"

            Interface "tap5efa303c-10"

                type: internal

    ovs_version: "2.3.1-git4750c96"

[root@localhost ~(keystone_admin)]# ovs-ofctl --protocol=OpenFlow13 dump-flows br-int

OFPST_FLOW reply (OF1.3) (xid=0x2):

cookie=0x0, duration=11013.425s, table=0, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:20

cookie=0x0, duration=11014.721s, table=0, n_packets=0, n_bytes=0, dl_type=0x88cc actions=CONTROLLER:65535

cookie=0x0, duration=11013.404s, table=20, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:30

cookie=0x0, duration=11013.328s, table=30, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:40

cookie=0x0, duration=11013.298s, table=40, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:50

cookie=0x0, duration=11013.246s, table=50, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:60

cookie=0x0, duration=11013.216s, table=60, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:70

cookie=0x0, duration=11013.154s, table=70, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:80

cookie=0x0, duration=11013.135s, table=80, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:90

cookie=0x0, duration=11013.114s, table=90, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:100

cookie=0x0, duration=11013.035s, table=100, n_packets=17, n_bytes=2082, priority=0 actions=goto_table:110

cookie=0x0, duration=11013.014s, table=110, n_packets=17, n_bytes=2082, priority=0 actions=drop

[root@localhost ~(keystone_admin)]#
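One clue in the transcript above: the flow dump contains only the default priority=0 goto_table rules, which suggests net-virt never received any neutron events, and the earlier curl to the neutron northbound returned an empty network list even after nets and subnets were created. A quick way to turn that curl into a check is to parse the JSON and flag an empty cache. This is a hedged sketch, not an official tool; it assumes the Lithium northbound URL shown above (`/controller/nb/v2/neutron/networks`) and its `{"networks": [...]}` reply shape:

```python
import json

def neutron_synced(odl_networks_json):
    """Return True if ODL's neutron northbound cache holds at least one network.

    odl_networks_json: the body returned by
    GET http://<odl>:8181/controller/nb/v2/neutron/networks
    """
    doc = json.loads(odl_networks_json)
    return len(doc.get("networks", [])) > 0

# The empty reply from the transcript above:
empty = '{ "networks" : [ ] }'
print(neutron_synced(empty))  # False -> neutron events never reached ODL
```

If this returns False after `neutron net-create`, the usual suspect is the ML2/ODL plugin configuration on the neutron side rather than OVSDB itself.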


_______________________________________________
ovsdb-dev mailing list
ovsdb-dev@...
https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev

 


Re: Retrying connections, persistence, controller restart for ovsdb southbound

Edward Warnicke <hagbard@...>
 

I don't object to adding these to clustering... but they are actually completely orthogonal to it, and can be fixed immediately, independent of what we do for HA for ovsdb-southbound :)

Ed

On Mon, Aug 17, 2015 at 9:42 AM, Anil Vishnoi <vishnoianil@...> wrote:
We welcome everyone who wants to contribute, so please feel free to pick up the task and add it to the clustering trello card.

Anil

On Mon, Aug 17, 2015 at 7:11 PM, Ryan Goulding <ryandgoulding@...> wrote:
Thanks for the information, Sam.  I look forward to discussing this in tomorrow's meeting.

Regards,

Ryan Goulding

On Mon, Aug 17, 2015 at 9:39 AM, Sam Hague <shague@...> wrote:
Ryan, Daya,

These items are part of the work Anil and Flavio are driving for clustering, persistence, and HA in Beryllium. There are high-level cards on the Be Trello board. They are hoping to present some initial findings in tomorrow's meeting. This will be an evolving design, because there are so many different pieces and mechanisms to bring together.

They will gladly take any volunteers to work on these pieces.

Thanks, Sam

On Mon, Aug 17, 2015 at 9:32 AM, Ryan Goulding <ryandgoulding@...> wrote:
Hi Ed,

Should we make a trello card for this?  If no one has started work on this, I would be interested in picking this up.

Thanks,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:53 AM, Edward Warnicke <hagbard@...> wrote:
So we probably also need a card for retrying connections, because currently if the OVSDB node is not available or reachable when we configure a connection, we never retry; and if it goes away temporarily, we never retry.
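The behavior Ed describes (one failed connect is terminal) is commonly fixed with a bounded exponential backoff around the connect attempt. A minimal sketch of that idea follows; the function names, delays, and attempt counts are illustrative, not the actual ovsdb-southbound code:

```python
import time

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Yield exponentially growing reconnect delays, capped at `cap` seconds."""
    delay = base
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

def connect_with_retry(connect, base=1.0, cap=60.0, attempts=6):
    """Call `connect()` until it succeeds or the attempts are exhausted.

    `connect` is any zero-argument callable returning True on success.
    """
    for delay in backoff_delays(base, cap, attempts):
        if connect():
            return True
        time.sleep(delay)
    return False

print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

The same schedule would also cover the "goes away temporarily" case if the disconnect handler re-enters the retry loop instead of dropping the configured connection.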

Ed

On Fri, Aug 14, 2015 at 8:49 AM, Ryan Goulding <ryandgoulding@...> wrote:
https://trello.com/c/bOrmGbXQ/46-resync-persisted-config-to-ovsdb-correctly-on-restart-of-controller is a trello card that Eric has taken involving controller restart for OVSDB southbound. This covers just a few of the restart scenarios, though, IIRC.

Regards,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:48 AM, Edward Warnicke <hagbard@...> wrote:
Guys,

     Is anyone working on these issues with OVSDB SB?

Ed
















--
Thanks
Anil




Re: OVSDB Agenda 8/11/15

Sam Hague
 

Isaku,

thanks for reminding me... I fired an email off to Phil to move the meeting to 10:00am PST. I will reply back if he is able to change it for tomorrow's meeting.

Thanks, Sam

On Mon, Aug 17, 2015 at 2:00 PM, Isaku Yamahata <yamahata@...> wrote:
Any final decision on meeting time slot?
When will the meeting be held this week (week of Aug 17)?


On Tue, Aug 11, 2015 at 11:17:46AM -0400,
Sam Hague <shague@...> wrote:

> Hi all,
>
> here are the topics for this week. Add to the list if there is anything you
> would like to discuss.
>
> 1. Task status updates: please update the Trello cards
> 2. Bug updates: please grab a bug if you want to dig into something and
> learn the code.
> 3. Modify meeting time from Tuesday 12:00p PST. Some choices:
> - Tuesday 10:30p IST/10:00a PST: this seems to be the best to avoid other
> standing ODL meetings.
> - Before 10:30p IST/10:00a PST
> - After 6:00a IST/5:30p PST
>
> [ODL calendar] shows many meetings at the 9a-10a PST time. MDSAL is Tuesday
> at 9a PST and maybe we could overlap there. Also Wednesday at 10a PST would
> work.
>
> Some future topics I would like to get to:
>
> 1. neutron plugin evolution
> 2. security groups using conntrack
> 3. ha, clustering, persistence
>
> Thanks, Sam
>
> [Trello]- https://trello.com/b/FJAa9wyl/ovsdb-beryllium
>
> [ODL calendar]
> https://www.google.com/calendar/embed?src=aDc5aGltYm9rcThhYXVyOWxlZDhvYzc5MGdAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ



--
Isaku Yamahata <isaku.yamahata@...>


Re: [integration-dev] finally posted: OpenStack with Opendaylight Part 3: L3 North-South

Luis Gomez
 

This is a really cool write-up. I hope I can get the time to try this soon :)

On Aug 17, 2015, at 7:46 AM, Flavio Fernandes <ffernand@...> wrote:

Greetings!

For the folks following the series on ODL OVSDB net-virt, I finally wrapped up the page on how we are handling one-to-one NAT in Lithium:

   http://www.flaviof.com/blog/work/how-to-odl-with-openstack-part3.html

Enjoy,

— flavio

_______________________________________________
integration-dev mailing list
integration-dev@...
https://lists.opendaylight.org/mailman/listinfo/integration-dev


Re: OVSDB Agenda 8/11/15

Isaku Yamahata <yamahata@...>
 

Any final decision on meeting time slot?
When will the meeting be held this week (week of Aug 17)?


On Tue, Aug 11, 2015 at 11:17:46AM -0400,
Sam Hague <shague@...> wrote:

Hi all,

here are the topics for this week. Add to the list if there is anything you
would like to discuss.

1. Task status updates: please update the Trello cards
2. Bug updates: please grab a bug if you want to dig into something and
learn the code.
3. Modify meeting time from Tuesday 12:00p PST. Some choices:
- Tuesday 10:30p IST/10:00a PST: this seems to be the best to avoid other
standing ODL meetings.
- Before 10:30p IST/10:00a PST
- After 6:00a IST/5:30p PST

[ODL calendar] shows many meetings at the 9a-10a PST time. MDSAL is Tuesday
at 9a PST and maybe we could overlap there. Also Wednesday at 10a PST would
work.

Some future topics I would like to get to:

1. neutron plugin evolution
2. security groups using conntrack
3. ha, clustering, persistence

Thanks, Sam

[Trello]- https://trello.com/b/FJAa9wyl/ovsdb-beryllium

[ODL calendar]
https://www.google.com/calendar/embed?src=aDc5aGltYm9rcThhYXVyOWxlZDhvYzc5MGdAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ

--
Isaku Yamahata <isaku.yamahata@...>


Re: Retrying connections, persistence, controller restart for ovsdb southbound

Anil Vishnoi
 

We welcome everyone who wants to contribute, so please feel free to pick up the task and add it to the clustering trello card.

Anil

On Mon, Aug 17, 2015 at 7:11 PM, Ryan Goulding <ryandgoulding@...> wrote:
Thanks for the information, Sam.  I look forward to discussing this in tomorrow's meeting.

Regards,

Ryan Goulding

On Mon, Aug 17, 2015 at 9:39 AM, Sam Hague <shague@...> wrote:
Ryan, Daya,

These items are part of the work Anil and Flavio are driving for clustering, persistence, and HA in Beryllium. There are high-level cards on the Be Trello board. They are hoping to present some initial findings in tomorrow's meeting. This will be an evolving design, because there are so many different pieces and mechanisms to bring together.

They will gladly take any volunteers to work on these pieces.

Thanks, Sam

On Mon, Aug 17, 2015 at 9:32 AM, Ryan Goulding <ryandgoulding@...> wrote:
Hi Ed,

Should we make a trello card for this?  If no one has started work on this, I would be interested in picking this up.

Thanks,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:53 AM, Edward Warnicke <hagbard@...> wrote:
So we probably also need a card for retrying connections, because currently if the OVSDB node is not available or reachable when we configure a connection, we never retry; and if it goes away temporarily, we never retry.

Ed

On Fri, Aug 14, 2015 at 8:49 AM, Ryan Goulding <ryandgoulding@...> wrote:
https://trello.com/c/bOrmGbXQ/46-resync-persisted-config-to-ovsdb-correctly-on-restart-of-controller is a trello card that Eric has taken involving controller restart for OVSDB southbound. This covers just a few of the restart scenarios, though, IIRC.

Regards,

Ryan Goulding

On Fri, Aug 14, 2015 at 11:48 AM, Edward Warnicke <hagbard@...> wrote:
Guys,

     Is anyone working on these issues with OVSDB SB?

Ed
















--
Thanks
Anil


Re: Verification of br-ex traffic - BR 3378

Anil Vishnoi
 

Yes.

On Mon, Aug 17, 2015 at 7:26 PM, <Badrinath_Viswanatha@...> wrote:

Hi,

   As I understand it, the commit for BR 3378 would:

1)      Take away the need to add the gateway MAC in /etc/custom.properties.

2)      Help verify the VM's ability to reach the external network in a multi-node setup.

 

Can someone please confirm?

 

Thanks

Badri

 

BR 3378 -  ovsdb netvirt needs help in getting mac for a given ip in br-ex

 
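For context, the lookup that BR 3378 asks for ("getting mac for a given ip in br-ex") can be approximated on a Linux host by reading the kernel's ARP cache. This is only a hedged sketch of the idea; the real patch may resolve the MAC differently (e.g. by having netvirt send an ARP request), and the sample table entry below is made up:

```python
def mac_for_ip(arp_table_text, ip):
    """Find the MAC for `ip` in /proc/net/arp-style text, or None.

    Columns: IP address, HW type, Flags, HW address, Mask, Device.
    """
    for line in arp_table_text.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 4 and fields[0] == ip:
            return fields[3]
    return None

# Illustrative /proc/net/arp contents (not from a real setup):
sample = (
    "IP address       HW type     Flags       HW address            Mask     Device\n"
    "192.168.1.1      0x1         0x2         fa:16:3e:cb:14:47     *        br-ex\n"
)
print(mac_for_ip(sample, "192.168.1.1"))  # fa:16:3e:cb:14:47
```

With something like this in place, the gateway MAC no longer needs to be hard-coded in /etc/custom.properties.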






--
Thanks
Anil
