Hugo Trippaers <hugo@...>
Hey guys,
Do we have a document describing the call flow between Neutron and ODL? I would like to use that as a basis for putting the same functionality into CloudStack.
I've already got some of the skeleton work done for the plugin, but I need to start filling in the blanks.
The main thing I can't seem to figure out is how Neutron tells ODL which hypervisors are part of a tenant's network, and how Neutron creates the OVS nodes.
Cheers,
Hugo
Hi Hugo,
Since OpenDaylight has at least 3 Neutron-based south-bounds (OVSDB, OpenDove & VTN), we centralized the Neutron NB-API in the controller project, and each of the 3 south-bound plugins provides common services for handling Network, Subnet and Port events. You can see all of these under the controller project:
-> networkconfig.neutron
-> networkconfig.neutron.implementation
-> networkconfig.neutron.northbound
On the OVSDB side, please take a look at:
-> ovsdb.neutron
- NetworkHandler (handles Network creation events)
- PortHandler (handles VM / Port creation events)
Now, which hypervisors are part of a tenant network is something that can be derived from the above 2 events and the centralized cache maintained in the networkconfig.neutron plugin.
BTW, I don't like to maintain caches in OVSDB unless it is strictly necessary (caches going out of sync, and chasing those problems, is a nightmare). So I depend on events and don't mind spending CPU cycles to form the picture every time an event happens. As a result, we don't have a DB that will give you all the hypervisors that are part of a tenant network; all you get is all the Ports that belong to a Network. From the Neutron Port and the OVSDB Port databases, we can derive the exact set of hypervisors / nodes that make up a given tenant network.
Thanks, Madhu
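[Editorially added example] The derivation Madhu describes can be sketched in a few lines. This is illustrative Python; the data shapes are invented stand-ins for the Neutron and OVSDB port records, not real ODL caches:

```python
# Hypothetical sketch: derive the set of hypervisors backing a tenant
# network by joining Neutron port records (network membership) with
# OVSDB-style port records (which node each port lives on).
# All names and data below are invented for illustration.

def hypervisors_for_network(network_id, neutron_ports, ovsdb_ports):
    """Return the set of nodes hosting at least one port of network_id.

    neutron_ports: list of dicts with 'id' and 'network_id'
    ovsdb_ports:   dict mapping Neutron port id -> hosting node name
    """
    port_ids = {p["id"] for p in neutron_ports if p["network_id"] == network_id}
    return {ovsdb_ports[pid] for pid in port_ids if pid in ovsdb_ports}

neutron_ports = [
    {"id": "port-1", "network_id": "net-a"},
    {"id": "port-2", "network_id": "net-a"},
    {"id": "port-3", "network_id": "net-b"},
]
ovsdb_ports = {"port-1": "hv1", "port-2": "hv2", "port-3": "hv1"}

# net-a has ports on hv1 and hv2, so both hypervisors are in its footprint
print(hypervisors_for_network("net-a", neutron_ports, ovsdb_ports))
```

This is the event-driven picture Madhu prefers: nothing here is a persisted cache, the set is recomputed from the two port databases on demand.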
On 11/27/13, 6:08 AM, Hugo Trippaers wrote:
[snip]
_______________________________________________
ovsdb-dev mailing list
ovsdb-dev@...
https://lists.opendaylight.org/mailman/listinfo/ovsdb-dev
Hugo Trippaers <hugo@...>
OK, so it is pretty tightly coupled with the Neutron way of working, right? So we don't have generic APIs yet that can be used by other orchestration platforms? Do we have some documentation online on which network types to use etc., like a poor man's guide to the Neutron southbound?
So I guess I have to get into the Neutron way of working to see if it is applicable to the way I'm doing networking now in CS. :-)
Cheers,
Hugo
On 27 nov. 2013, at 15:17, Madhu Venguopal <mavenugo@...> wrote:
[snip]
Hi Hugo,
Below are a couple of example Neutron API calls (GRE/VXLAN) that might help. I have the OVS logs as well if those would help. We should have a Hangout and see how we can help with the CloudStack integration, buddy.
You have the team behind you, so please don't hesitate; it is the least we can do, as lucky as we are to have you on board.
============== GRE ==================
2013-11-19 03:59:06.625 INFO neutron.tests.unit.ml2.drivers.mechanism_logger [-] update_port_precommit called with port settings {'status': 'DOWN', 'binding:host_id': 'fedora2', 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'device_owner': 'network:dhcp', 'fixed_ips': [{'subnet_id': 'fdb81e43-744c-4645-9b0e-12088e372a89', 'ip_address': '10.0.0.2'}], 'id': '39913bf1-c8e6-4270-a20a-244960b56cd9', 'security_groups': [], 'device_id': 'dhcp59398d03-bd37-55aa-92f1-91d1c4e8624a-899ba619-8322-4667-9283-ae0d59ffe89b', 'name': '', 'admin_state_up': True, 'network_id': '899ba619-8322-4667-9283-ae0d59ffe89b', 'tenant_id': '7827e83d1e3a4c86a11fe51867954941', 'binding:vif_type': 'ovs', 'binding:capabilities': {'port_filter': False}, 'mac_address': 'fa:16:3e:f3:47:b7'}
(original settings {'status': u'DOWN', 'binding:host_id': 'fedora2', 'allowed_address_pairs': [], 'extra_dhcp_opts': [], 'device_owner': 'network:dhcp', 'fixed_ips': [{'subnet_id': 'fdb81e43-744c-4645-9b0e-12088e372a89', 'ip_address': '10.0.0.2'}], 'id': '39913bf1-c8e6-4270-a20a-244960b56cd9', 'security_groups': [], 'device_id': 'dhcp59398d03-bd37-55aa-92f1-91d1c4e8624a-899ba619-8322-4667-9283-ae0d59ffe89b', 'name': '', 'admin_state_up': True, 'network_id': '899ba619-8322-4667-9283-ae0d59ffe89b', 'tenant_id': '7827e83d1e3a4c86a11fe51867954941', 'binding:vif_type': 'ovs', 'binding:capabilities': {'port_filter': False}, 'mac_address': 'fa:16:3e:f3:47:b7'})
on network {'status': 'ACTIVE', 'subnets': ['fdb81e43-744c-4645-9b0e-12088e372a89'], 'name': 'private', 'provider:physical_network': None, 'admin_state_up': True, 'tenant_id': '7827e83d1e3a4c86a11fe51867954941', 'provider:network_type': 'gre', 'router:external': False, 'shared': False, 'id': '899ba619-8322-4667-9283-ae0d59ffe89b', 'provider:segmentation_id': 1L}
============= VXLAN ===============
{'_context_roles': ['admin'], '_context_read_deleted': 'no', '_context_tenant_id': None, 'args': {'segmentation_id': 1001, 'physical_network': None, 'port': {'status': 'ACTIVE', 'binding:host_id': 'ryu1', 'name': '', 'allowed_address_pairs': [], 'admin_state_up': True, 'network_id': 'f291f8a5-41e9-40db-ba84-eeac8285e594', 'tenant_id': '8f9ef230879747d4808ebf376306a46e', 'extra_dhcp_opts': [], 'binding:vif_type': 'ovs', 'device_owner': 'network:dhcp', 'binding:capabilities': {'port_filter': True}, 'mac_address': 'fa:16:3e:a4:cd:57', 'fixed_ips': [{'subnet_id': '120a7dc9-a8f6-48dd-8136-2caa47d20cca', 'ip_address': '10.0.0.2'}], 'id': 'ed8bdd3e-e820-48f5-bfd7-54fa0dac6804', 'security_groups': [], 'device_id': 'dhcpc3a4ecdf-fd9b-5291-9047-1f9bab5081f6-f291f8a5-41e9-40db-ba84-eeac8285e594'}, 'network_type': 'vxlan'}, 'namespace': None, '_unique_id': 'b5fb1a20556a47c6b237c28ab27bfd20', '_context_is_admin': True, 'version': '1.1', '_context_project_id': None, '_context_timestamp': '2013-10-27 07:11:59.302676', '_context_user_id': None, 'method': 'port_update'}
==================================================
Thanks, -Brent
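[Editorially added example] For orientation, here is a small sketch of pulling the tunnel-relevant fields out of a port_update message like the VXLAN one above. The payload is trimmed to the keys shown; field names follow Brent's example:

```python
# Sketch of picking out the fields a south-bound plugin would care about
# from a Neutron port_update RPC message. The dict below is a trimmed
# copy of the VXLAN example in this thread; the 'tunnel' shape at the
# end is invented for illustration.

msg = {
    "method": "port_update",
    "args": {
        "segmentation_id": 1001,
        "network_type": "vxlan",
        "physical_network": None,
        "port": {
            "binding:host_id": "ryu1",
            "network_id": "f291f8a5-41e9-40db-ba84-eeac8285e594",
            "mac_address": "fa:16:3e:a4:cd:57",
        },
    },
}

args = msg["args"]
tunnel = {
    "type": args["network_type"],             # 'vxlan' or 'gre'
    "key": args["segmentation_id"],           # VNI / GRE key
    "host": args["port"]["binding:host_id"],  # hypervisor to terminate the tunnel on
}
print(tunnel)  # prints {'type': 'vxlan', 'key': 1001, 'host': 'ryu1'}
```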
On 11/27/13 9:57 AM, "Hugo Trippaers" <hugo@...> wrote:
[snip]
Hugo Trippaers <hugo@...>
Thanks guys! Really appreciate all the help :-)
So the flow from the CloudStack side is pretty much like this:
1. User creates a network
   Isolated guest network, meaning an internal IP range dedicated to this tenant and a software-based router to get traffic to the outside world (NAT, firewall, etc.)
2. CloudStack network gurus get asked to allocate the network
   Administrative process; make entries in the database
3. One of the gurus decides to handle this call based on set criteria (network isolation method (gre|vxlan|etc.), traffic type (guest, management, storage, etc.))
   This is where the OpenDaylight guru would answer the call when ODL is the network provider
4. User creates a virtual machine in that network
   This triggers the creation of network resources
5. OpenDaylight guru is called to implement the network
   Allocate and configure all "physical" resources for the network
6. OpenDaylight element receives a plug command to plug the VM into the network
   This configures all resources for the new port on the network
7. HypervisorResource configures and creates the VM
   This configures the VM on the hypervisor and creates the vif (on KVM done by libvirt)
8. Happy user goes off and eats pizza
So the guru deals with the creation of the network, the element deals with preparing the ports, and the hypervisor resource deals with the creation of the VIF.
Steps 5, 6 and 7 are where the ODL magic should happen. With my limited understanding of the Neutron interface, I'm thinking about the following:
In step 5, create the network (push the network JSON object to ODL).
In step 6, I know which hypervisor is involved, so tell ODL to create tunnels from this hypervisor to any already existing hypervisors.
In step 7, the vif will be created by libvirt and should be connected using the flows? (Create a port via the ODL Neutron API?)
Hope this makes sense :-)
Cheers,
Hugo
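[Editorially added example] One way to picture steps 5 and 7 of Hugo's plan is as payload builders for the ODL Neutron northbound. The base URL and exact payload shapes here are assumptions modelled on the Neutron examples earlier in the thread, not a verified ODL contract:

```python
# Hedged sketch of the JSON bodies a CloudStack plugin might push to the
# ODL Neutron northbound. The base URL and field set are assumptions
# patterned after the GRE/VXLAN examples in this thread.
import json

ODL_BASE = "http://odl-controller:8080/controller/nb/v2/neutron"  # assumed URL layout

def create_network_request(network_id, tenant_id, net_type, segmentation_id):
    """Step 5: body for a network create (POST {ODL_BASE}/networks, assumed)."""
    return {
        "network": {
            "id": network_id,
            "tenant_id": tenant_id,
            "provider:network_type": net_type,           # e.g. 'gre' or 'vxlan'
            "provider:segmentation_id": segmentation_id,  # tunnel key / VNI
            "admin_state_up": True,
        }
    }

def create_port_request(port_id, network_id, mac, host_id):
    """Step 7: body for a port create (POST {ODL_BASE}/ports, assumed),
    issued once libvirt has created the vif on the hypervisor."""
    return {
        "port": {
            "id": port_id,
            "network_id": network_id,
            "mac_address": mac,
            "binding:host_id": host_id,  # lets ODL pick the right OVS node
            "admin_state_up": True,
        }
    }

body = create_network_request("net-1", "tenant-1", "gre", 1)
print(json.dumps(body, indent=2))
```

Step 6 (tunnel creation toward existing hypervisors) would then be driven by ODL itself from the binding:host_id on each port, rather than by an explicit call from CloudStack.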
On 27 nov. 2013, at 17:08, Brent Salisbury <brent.salisbury@...> wrote:
[snip]
Hi Hugo,
Thanks for the summary. This sounds a lot like today's Neutron integration: Neutron gives us Network, Subnet and Port creates from the northbound, and OVS provides the libvirt-triggered new-VM-port-attached notification via OVSDB.
Our job in ODL is to correlate these northbound and southbound events and program the appropriate entries:
1. Tunnels (GRE, VXLAN, ...)
2. Bridges and patch ports (if missing)
3. Flow programming (MAC/VLAN based, flood, or others)
I think we can generalize these very easily and make the ovsdb.neutron plugin common to both OpenStack and CloudStack (of course we can change the name or move it under a more common bundle).
More on IRC & hangouts ;)
Thanks, -Madhu
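[Editorially added example] The correlation Madhu describes could look roughly like this. The class and action tuples are invented for illustration, not ODL code: cache northbound network creates, and when OVSDB reports a port attach, emit the tunnel and flow programming actions:

```python
# Illustrative sketch of correlating Neutron northbound events with
# OVSDB southbound port-attach notifications. Everything here (class
# name, event shapes, action tuples) is invented for illustration.

class Correlator:
    def __init__(self):
        self.networks = {}  # network_id -> network info from the NB event
        self.pending = []   # programming actions to push to the switches

    def on_nb_network_create(self, net):
        """Northbound: remember the network's tunnel type and key."""
        self.networks[net["id"]] = net

    def on_ovsdb_port_attach(self, node, network_id):
        """Southbound: a vif appeared on `node`; program tunnel + flows."""
        net = self.networks.get(network_id)
        if net is None:
            return  # NB event not seen yet; real code would requeue or re-derive
        self.pending.append(("tunnel", net["type"], net["key"], node))
        self.pending.append(("flows", network_id, node))

c = Correlator()
c.on_nb_network_create({"id": "net-a", "type": "gre", "key": 1})
c.on_ovsdb_port_attach("hv1", "net-a")
print(c.pending)  # prints [('tunnel', 'gre', 1, 'hv1'), ('flows', 'net-a', 'hv1')]
```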
On 11/28/13, 1:53 AM, Hugo Trippaers wrote:
[snip]