# Short description of Repository

Many things might seem trivial or self-evident; others might not be clear enough. Please contact me if you run into problems or need clarification on some point.

### What needs to be changed (partly optional):
- sm/so/service_orchestrator.py -> STG_FILE
- bundle/wsgi/so.py -> os_image, ssh_key
- bundle/data/cluster.yaml -> "default" at master_flavor, slave_flavor; network under hadoop_router; floating_network under hadoop_ip
- etc/sm.cfg -> manifest, design_uri

## Description of the mentioned changes

### service_orchestrator.py
Necessary if you run the Service Manager.
- STG_FILE has to point to the bundle/data/service_manifest.json file.
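
For example, STG_FILE could be set like this (the absolute path is a placeholder for your checkout; the surrounding code in service_orchestrator.py is not shown):

```python
# Hypothetical example; the real layout of service_orchestrator.py may differ.
# STG_FILE must point at the service manifest shipped with the bundle.
STG_FILE = '/path/to/repo/bundle/data/service_manifest.json'
```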

### so.py

This is the actual Service Orchestrator, implementing the required methods for design, deploy, update, delete, etc. Among other things, it sets some configuration specifics in the Heat templates and creates the actual template for the requested cluster size.
- os_image has to point to an existing (Debian based) image on the hosting OpenStack.
- ssh_key has to be an existing SSH key within OpenStack which will be inserted into the generated Master instance. This will be the access point of the created cluster.
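
To illustrate how an orchestrator can grow a base Heat template to the requested cluster size, here is a minimal sketch. It is not the real so.py code: the function name, template shape, and the os_image/ssh_key values are all illustrative assumptions.

```python
# Sketch: expand a base Heat template with one slave server per requested node.
# OS_IMAGE and SSH_KEY stand in for the os_image / ssh_key settings in so.py.
import copy

OS_IMAGE = "debian-8-amd64"   # must exist as an image in the target OpenStack (assumption)
SSH_KEY = "my-keypair"        # must be a registered OpenStack SSH key (assumption)

def build_cluster_template(base, slave_count):
    """Return a copy of `base` with one slave resource per requested node."""
    tmpl = copy.deepcopy(base)
    slave_proto = {
        "type": "OS::Nova::Server",
        "properties": {
            "image": OS_IMAGE,
            "key_name": SSH_KEY,
            "flavor": {"get_param": "slave_flavor"},
        },
    }
    for i in range(slave_count):
        tmpl["resources"]["slave_%d" % i] = copy.deepcopy(slave_proto)
    return tmpl

base = {"heat_template_version": "2013-05-23", "resources": {}}
cluster = build_cluster_template(base, 3)
```

The real orchestrator additionally wires the slaves into the cluster network and injects further configuration, but the size-dependent template generation follows this pattern.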

### cluster.yaml

cluster.yaml is the Heat template for the basic cluster. It sets up the components that are shared by all instances (e.g. network, router, master, ...).
- master_flavor and slave_flavor are the flavors of the generated instances; they determine the virtual resources of the created virtual machines. They may have to be changed for a different OpenStack installation (if the administrator has changed the default flavor names).
- network and floating_network have to be set to the name of the public network of the local OpenStack installation. This can be looked up in the OpenStack Horizon dashboard, for instance under Network Topology.
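
A condensed sketch of the relevant parts of cluster.yaml; the resource and parameter names match the fields listed above, but the property layout and the default values shown here are illustrative, not copied from the file:

```yaml
# Illustrative Heat template fragment; adjust defaults and network names
# to your OpenStack installation.
parameters:
  master_flavor:
    type: string
    default: m1.small      # "default" to change if your flavors are named differently
  slave_flavor:
    type: string
    default: m1.small

resources:
  hadoop_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: public    # name of the public network (see Horizon / Network Topology)
  hadoop_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public
```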

### sm.cfg

This is the configuration file for the Service Manager. It is only needed if the Service Manager is to be run.
- manifest has to point to bundle/data/service_manifest.json.
- design_uri is the URI of the Keystone endpoint of OpenStack.
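
A minimal sm.cfg sketch; only the manifest and design_uri keys come from the description above, while the section name, path, and Keystone URL are placeholders:

```ini
# Hypothetical sm.cfg fragment; section name and values are illustrative.
[service_manager]
manifest = /path/to/repo/bundle/data/service_manifest.json
design_uri = http://127.0.0.1:5000/v2.0
```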

Below is the command introduction to Service Orchestrators, copied from the Sample SO page mentioned in the wiki.

# Testing SO without deploying it using CC

Go to the directory of mcn_cc_sdk and set up a virtualenv (note: this could be done more easily):

    $ virtualenv /tmp/mcn_test_virt
    $ source /tmp/mcn_test_virt/bin/activate

Install SDK and required packages:

    $ pip install pbr six iso8601 babel requests python-heatclient==0.2.9 python-keystoneclient
    $ python setup.py install  # in the mcn_cc_sdk directory.

Run SO:

    $ export OPENSHIFT_PYTHON_DIR=/tmp/mcn_test_virt
    $ export OPENSHIFT_REPO_DIR=<path to sample so>
    $ python ./wsgi/application

Optionally, you can also set DESIGN_URI if your OpenStack installation is not local.
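
For example (the host is a placeholder for your Keystone endpoint):

    $ export DESIGN_URI=http://keystone.example.org:5000/v2.0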

In a new terminal, get a token from Keystone (the token must belong to a user which has the admin role for the tenant):

    $ keystone token-get
    $ export KID='...'
    $ export TENANT='...'

You can now visit the SO interface [here](http://localhost:8051/orchestrator/default).

## Sample requests

Initialize the SO:

    $ curl -v -X PUT http://localhost:8051/orchestrator/default \
          -H 'Content-Type: text/occi' \
          -H 'Category: orchestrator; scheme="http://schemas.mobile-cloud-networking.eu/occi/service#"' \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT

Get state of the SO + service instance:

    $ curl -v -X GET http://localhost:8051/orchestrator/default \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT

Trigger deployment of the service instance:

    $ curl -v -X POST http://localhost:8051/orchestrator/default?action=deploy \
          -H 'Content-Type: text/occi' \
          -H 'Category: deploy; scheme="http://schemas.mobile-cloud-networking.eu/occi/service#"' \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT

Trigger provisioning of the service instance:

    $ curl -v -X POST http://localhost:8051/orchestrator/default?action=provision \
          -H 'Content-Type: text/occi' \
          -H 'Category: provision; scheme="http://schemas.mobile-cloud-networking.eu/occi/service#"' \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT

Trigger update on SO + service instance:

    $ curl -v -X POST http://localhost:8051/orchestrator/default \
          -H 'Content-Type: text/occi' \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT \
          -H 'X-OCCI-Attribute: occi.epc.attr_1="foo"'

Trigger delete of SO + service instance:

    $ curl -v -X DELETE http://localhost:8051/orchestrator/default \
          -H 'X-Auth-Token: '$KID \
          -H 'X-Tenant-Name: '$TENANT