Commit 4cf91e97 authored by Balazs

more extensive documentation

parent 829857f3
Clone the necessary Git repositories:
git clone https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk.git
git clone https://github.com/icclab/hurtle_sm.git
git clone https://gitlab.switch.ch/Verbaryerba/disco.git
The first contains the SDK that hurtle is based on; the second, the service manager that keeps track of all deployed service orchestrator instances; the third, the actual service orchestrator.
Docker needs to be installed if you would like to build the Docker image with the service orchestrator.
For reference, consult https://docs.docker.com/engine/installation/
The Docker image has to be built for the Cloud Controller. The bundle for the Docker image is within the disco Git repository. As the image currently published on Docker Hub (amnion/haas) accepts all system-specific settings as attributes, this step might not be necessary. But for completeness' sake: the Dockerfile with the build instructions lies within the bundle directory. After configuring the bundle/wsgi/so.py file as well as the needed files within the bundle/data subdirectory (especially service_manifest.json), the Docker image can be built and pushed to Docker Hub.
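Assuming the image name amnion/haas from above, the build-and-push step might look like the following sketch (the repository path and tag are placeholders):

```shell
# Build the service orchestrator image from the bundle directory
# (image name and tag are assumptions based on the Docker Hub image above)
cd /path/to/disco/bundle
docker build -t amnion/haas:latest .
docker push amnion/haas:latest
```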
Note: for debugging, see the Debugging section below.
In the /bundle/data/service_manifest.json file, the optional attributes of the service orchestrator have to be declared. All other files are specific to the individual service orchestrator's operation; in this case, these are Heat Orchestration Templates (HOT), bash scripts for setting up the cluster, and the Hadoop configuration files.
# Service Manager
The service manager has to be configured as well, though this is only one config file; in this case it is located at etc/sm.cfg. The main entries that have to be changed are the following: in the section [service_manager], the entry manifest has to point to the service_manifest.json file of the service orchestrator; bundle_location is, in this case (OpenShift v3), the path to the Docker image on Docker Hub; design_uri is the Keystone endpoint where the HOT will be deployed.
If the service shouldn't be registered in Keystone, the entry register_service in the subsection [service_manager_admin] can be set to False, in which case the remaining settings in that section are not important.
Under the section [cloud_controller], the access configuration to the cloud controller API has to be set. Currently, this has to be an OpenShift installation; in this case, it's an OpenShift v3 instance.
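A minimal sketch of the relevant etc/sm.cfg entries, assuming the key names described above; all values are placeholders:

```ini
[service_manager]
manifest = /path/to/disco/bundle/data/service_manifest.json
# OpenShift v3: the path to the Docker image on Docker Hub
bundle_location = amnion/haas
# Keystone endpoint where the HOT will be deployed
design_uri = http://keystone.example.com:5000/v2.0

[service_manager_admin]
# set to False to skip registering the service in Keystone
register_service = False

[cloud_controller]
# access configuration for the cloud controller (OpenShift v3) API goes here
```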
# Bash configuration
Create a virtual environment if you want a contained environment:
virtualenv /tmp/mcn
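Activating the environment and installing the cloned repositories into it might look like this (a sketch; the paths assume the repositories cloned above and that both ship a setup.py):

```shell
# Activate the virtual environment created above
source /tmp/mcn/bin/activate
# Install the SDK and the service manager from the local clones
# (paths are placeholders for wherever you cloned them)
pip install /path/to/mcn_cc_sdk
pip install /path/to/hurtle_sm
```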
For starting the service manager, only the config file has to be given:
service_manager -c /path/to/disco/etc/sm.cfg
# Placing commands for the Service Manager
As the SM blocks the terminal, the commands to the SM have to be executed in a new terminal.
export URL="http://127.0.0.1:8888" # URL & port of executed SM
$ID is the last part of the URL returned above.
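Extracting $ID from the returned URL can be sketched in plain shell; the URL value below is a made-up example:

```shell
# The create call returns a URL whose last path segment is the instance ID
# (the URL below is illustrative, not a real instance)
LOCATION="http://127.0.0.1:8888/haas/5a8e7c2f-0b1d-4c5e"
ID="${LOCATION##*/}"    # strip everything up to and including the last '/'
echo "$ID"
```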
delete service orchestrator instance
curl -v -X DELETE -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/$OTERM/$ID
# Debugging
Debugging can be done in different ways.
The easiest is to run the service orchestrator locally by starting the application Python program in its directory (/bundle/wsgi/application). This way, the logging output appears directly on the terminal.
If a new error occurs after deploying the service orchestrator on the Cloud Controller, the log messages can be seen by connecting with the OpenShift CLI client and entering `oc logs <pod_name>`.
If this doesn't help either, a terminal within the pod can be opened with `oc rsh <pod_name> /bin/sh`, as there is no bash shell in this pod.
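Putting the debugging steps together, a session on the OpenShift side might look like this (the server URL and pod name are placeholders):

```shell
oc login https://openshift.example.com:8443   # log in to the cluster (URL is a placeholder)
oc get pods                                   # find the pod running the service orchestrator
oc logs so-pod-1-abcde                        # inspect its log output (pod name is illustrative)
oc rsh so-pod-1-abcde /bin/sh                 # open a shell inside the pod (no bash available)
```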