# Configuration

The system has to be configured before it can be used. Within the data directory, the file defaultSettings.cfg contains the default settings for the distributed computing cluster; these are used whenever the user does not provide a value.

## Software Volume

To speed up deployment, all software that is to be installed has to be stored on a Cinder volume so that it does not have to be downloaded from a public server each time. To achieve this, create a Cinder volume and save all the necessary software on it, either in compressed or uncompressed form, depending on how you want to deploy the cluster. The directory structure on this volume has to be the following:

![software volume](softwarevolume.png)

The subdirectories always contain the unpacked files. The packed files can be downloaded from the following addresses:

- http://mirror.switch.ch/mirror/apache/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
- http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz (cannot be downloaded directly)
- https://github.com/jcmcken/parallel-ssh (master zip file; the renamed pssh.zip isn't actually used but is unpacked to the pssh subdirectory - have a look at the master_bash.sh file to see the exact structure expected within that subdirectory; the actual pssh/pscp files are under pssh/pssh/bin)

As soon as the cluster has been set up, the volume is unmounted and can be reused for other clusters. (In a later version, a new volume will be created for each individual cluster.)

# Run Service Manager and instantiate Service Orchestrator Instances

Clone the necessary Git repositories:

```
git clone https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk.git
git clone https://github.com/icclab/hurtle_sm.git
git clone https://gitlab.switch.ch/Verbaryerba/disco.git
```

The first repository contains the SDK that hurtle is based on; the second contains the service manager, which keeps track of all deployed service orchestrator instances; the third contains the actual service orchestrator.

Docker needs to be installed if you would like to build the Docker image with the service orchestrator; for reference, consult https://docs.docker.com/engine/installation/

The Docker image has to be built for the Cloud Controller. As the currently published image on Docker Hub (amnion/haas) accepts all system-specific settings as attributes, this step might not be necessary, but for completeness' sake: the Dockerfile with the build instructions lies within the bundle directory. After configuring the bundle/wsgi/so.py file as well as the needed files within the bundle/data subdirectory (especially service_manifest.json), the Docker image can be built and pushed to Docker Hub. Note: for debugging, see the Debugging section below.
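If you do build your own image, the following is a minimal sketch of the build-and-push step; the tag youruser/haas is a placeholder for your own Docker Hub repository, and it is assumed you run the commands from the cloned disco repository:

```
# Build the SO image from the Dockerfile in the bundle directory
# (youruser/haas is a placeholder; use your own Docker Hub repository)
cd disco
docker build -t youruser/haas bundle/

# Log in to Docker Hub and push the image
docker login
docker push youruser/haas
```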
In the bundle/data/service_manifest.json file, the optional attributes of the service orchestrator have to be declared. All other files are specific to the individual service orchestrator's operation; in this case, those are Heat Orchestration Template (HOT) templates, bash scripts for setting up the cluster, and the Hadoop configuration files.

# Service Manager

The service manager has to be configured as well, though this is only one config file, located at etc/sm.cfg. The main entries that have to be changed are the following:

- In the section [service_manager], the entry manifest has to point to the service_manifest.json file of the service orchestrator. bundle_location is, in this case (OpenShift v3), the path to the Docker image on Docker Hub. design_uri is the keystone endpoint where the HOT should be deployed.
- If the service shouldn't be registered in keystone, the entry register_service in the subsection [service_manager_admin] can be set to False; the remaining settings in that subsection are then irrelevant.
- Under the section [cloud_controller], the access configuration for the cloud controller API has to be set. Currently, this has to be an OpenShift installation; in this case, it is an OpenShift v3 instance.

# Bash configuration

Create a virtual environment if you want a contained environment:

```
virtualenv /tmp/mcn
source /tmp/mcn/bin/activate
```

Some dependencies might not be met - I had to install python-monascaclient with pip, for instance. Then install the SDK and the service manager:

```
cd mcn_cc_sdk && python setup.py install && cd ..
cd hurtle_sm && python setup.py install && cd ..
```

For starting the service manager, only the config file has to be given:

```
service_manager -c /path/to/disco/etc/sm.cfg
```

# Placing commands for the Service Manager

As the SM blocks the terminal, the commands to the SM have to be executed in a new terminal:

```
export URL="http://127.0.0.1:8888"  # URL & port of the running SM
export OTERM="haas"                 # name of the service
export KID=xxx                      # keystone token
export TENANT=xxx                   # OpenStack tenant name
```

Check for the service type:

```
curl -v -X GET -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/-/
```

Create a service orchestrator instance - the instance URL is returned in the Location header; in this example, the attribute icclab.haas.slave.number is also set to 2:

```
curl -v -X POST $URL/$OTERM/ -H 'Category: '$OTERM'; scheme="http://schemas.cloudcomplab.ch/occi/sm#"; class="kind";' -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT -H 'X-OCCI-Attribute: icclab.haas.slave.number="2"'
```

Query the created instance; $ID is the last part of the URL returned above:

```
curl -v -X GET -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/$OTERM/$ID
```

Delete the service orchestrator instance:

```
curl -v -X DELETE -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/$OTERM/$ID
```

# Debugging

Debugging can be done in different ways. The easiest is to run the service orchestrator locally by starting the application Python program (/bundle/wsgi/application) in that same directory; this way, the logging output appears directly in the terminal. If a new error occurs after deploying the service orchestrator on the Cloud Controller, the log messages can be viewed by connecting with the OpenShift CLI client and entering `oc logs <pod name>`. If this doesn't help either, a terminal within the pod can be started with `oc rsh <pod name> /bin/sh`, as there is no bash shell in this pod.
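As a minimal sketch of the two debugging paths described above (it is assumed the disco repository was cloned into the current directory and the pod name is looked up first):

```
# Local debugging: run the SO directly so its log output appears in the terminal
cd disco/bundle/wsgi
python application

# Debugging on the Cloud Controller via the OpenShift CLI
oc get pods                  # look up the pod name of the deployed SO
oc logs <pod name>           # show the service orchestrator's log messages
oc rsh <pod name> /bin/sh    # open a shell inside the pod (no bash available)
```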