# Configuration

Before the system can be used, it has to be configured. Within the data directory, the file defaultSettings.cfg contains the default settings for the distributed computing cluster; these are applied whenever the user does not provide a value.
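
As a purely hypothetical excerpt (the real keys and their format are defined in data/defaultSettings.cfg itself), a default value such as the number of Hadoop slaves might look like:

    # hypothetical excerpt -- consult data/defaultSettings.cfg for the real keys
    icclab.haas.slave.number=1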

## Software Volume
To speed up deployment, all software to be installed is stored on a Cinder volume, so it does not have to be downloaded from a public server each time. To achieve this, create a Cinder volume and copy all the necessary software onto it, either in compressed or uncompressed form (depending on how you configure the cluster deployment). The volume has to follow this directory structure:

![software volume](softwarevolume.png)

The subdirectories always contain the unpacked files. The packed files can be downloaded from the following addresses:

http://mirror.switch.ch/mirror/apache/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz

http://download.oracle.com/otn-pub/java/jdk/8u60-b27/jdk-8u60-linux-x64.tar.gz (cannot be downloaded directly)

https://github.com/jcmcken/parallel-ssh (master zip file; the renamed pssh.zip isn't actually used but is unpacked into the pssh subdirectory. Have a look at the master_bash.sh file to see exactly what has to be inside that subdirectory; the actual pssh/pscp binaries are under pssh/pssh/bin.)

As soon as the cluster has been set up, the volume is unmounted and can be reused for other clusters. (In a later version, a new volume will be created for each individual cluster.)
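
A minimal sketch of preparing such a volume with the OpenStack command-line clients, assuming a helper VM to which the volume can be attached (volume name, size and device path are placeholders; the directory layout has to match the structure shown above):

    cinder create --display-name software 10      # create the volume
    nova volume-attach <helper-vm> <volume-id>    # attach it to a helper VM
    # on the helper VM: format, mount and populate the volume
    mkfs.ext4 /dev/vdb
    mount /dev/vdb /mnt
    cd /mnt
    wget http://mirror.switch.ch/mirror/apache/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
    tar xzf hadoop-2.7.1.tar.gz                   # keep packed and/or unpacked form as configured
    umount /mnt                                   # unmount once everything is in place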

# Run Service Manager and instantiate Service Orchestrator Instances

Clone the necessary Git repositories:

    git clone https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk.git
    git clone https://github.com/icclab/hurtle_sm.git
    git clone https://gitlab.switch.ch/Verbaryerba/disco.git

The first repository contains the SDK on which hurtle is based; the second contains the service manager, which keeps track of all deployed service orchestrator instances; the third contains the actual service orchestrator.

Docker needs to be installed if you would like to build the Docker image containing the service orchestrator yourself.

For reference, consult https://docs.docker.com/engine/installation/

The Docker image has to be built for the Cloud Controller. As the image currently published on Docker Hub (amnion/haas) accepts all system-specific settings as attributes, this step might not be necessary. For completeness' sake: the Dockerfile with the build instructions lies within the bundle directory. After configuring the bundle/wsgi/so.py file as well as the needed files within the bundle/data subdirectory (especially service_manifest.json), the Docker image can be built and pushed to Docker Hub.
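
A minimal sketch of that build-and-push step, assuming the image is tagged with your own Docker Hub repository (amnion/haas is reused here only as the example name from above):

    docker build -t amnion/haas bundle/
    docker push amnion/haas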

Note: for debugging, see the Debugging section below.

In the /bundle/data/service_manifest.json file, the optional attributes of the service orchestrator have to be declared. All other files are specific to the individual service orchestrator's operation; in this case, these are Heat Orchestration Templates (HOT), bash scripts for setting up the cluster, and the Hadoop configuration files.
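
As a purely illustrative fragment (the exact manifest schema is defined by hurtle, so the structure shown here is an assumption), declaring the optional attribute used in the curl examples further below could look roughly like this:

    {
        "service_attributes": {
            "icclab.haas.slave.number": "mutable"
        }
    }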

# Service Manager

The service manager has to be configured as well, though this only involves a single config file, located at etc/sm.cfg. The main entries that have to be changed are the following: in the section [service_manager], the entry manifest has to point to the service orchestrator's service_manifest.json file; bundle_location is in this case (OpenShift v3) the path to the Docker image on Docker Hub; design_uri is the keystone endpoint where the HOT should be deployed. (A combined sketch of these entries follows the [cloud_controller] paragraph below.)

If the service shouldn't be registered in keystone, the entry register_service in the section [service_manager_admin] can be set to False, in which case the remaining settings in that section don't matter.

Under the section [cloud_controller], the access configuration for the cloud controller API has to be set. Currently, this has to be an OpenShift installation; in this case, it is an OpenShift v3 instance.
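
A hedged sketch of the relevant etc/sm.cfg entries; all values are placeholders, and the key names under [cloud_controller] are hypothetical (consult the comments in the shipped etc/sm.cfg for the real ones):

    [service_manager]
    manifest = /path/to/disco/bundle/data/service_manifest.json
    # with OpenShift v3, the path to the image on Docker Hub:
    bundle_location = amnion/haas
    # keystone endpoint of the OpenStack installation where the HOT is deployed
    design_uri = http://keystone.example.com:5000/v2.0

    [service_manager_admin]
    # set to False to skip registering the service in keystone
    register_service = False

    [cloud_controller]
    # access configuration for the OpenShift v3 API -- hypothetical key names
    api_url = https://openshift.example.com:8443
    user = <openshift-user>
    password = <openshift-password>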

# Bash configuration

Create a virtual environment if you want a contained environment:

    virtualenv /tmp/mcn
    source /tmp/mcn/bin/activate

Maybe some dependencies won't be met; I had to install python-monascaclient with pip, for instance:
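
    pip install python-monascaclient

Then install the SDK and the service manager from the cloned repositories: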

    cd mcn_cc_sdk && python setup.py install && cd ..
    cd hurtle_sm && python setup.py install && cd ..

To start the service manager, only the config file has to be given:

    service_manager -c /path/to/disco/etc/sm.cfg

# Placing commands for the Service Manager

As the SM blocks the terminal, the commands to the SM have to be executed in a new terminal.

    export URL="http://127.0.0.1:8888" # URL & port of executed SM
    export OTERM="haas" # name of the service
    export KID=xxx # keystone token
    export TENANT=xxx # OpenStack tenant name

Check for the service type:

    curl -v -X GET -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/-/

Create a service orchestrator instance. This returns the instance URL in the Location header; in this example, the attribute icclab.haas.slave.number is also set to 2:

    curl -v -X POST $URL/$OTERM/ -H 'Category: '$OTERM'; scheme="http://schemas.cloudcomplab.ch/occi/sm#"; class="kind";' -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT -H 'X-OCCI-Attribute: icclab.haas.slave.number="2"'

Get the details of the service orchestrator instance. $ID is the last path segment of the URL returned in the Location header above, e.g.:
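
    export ID=xxx # last path segment of the Location header returned above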

    curl -v -X GET -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/$OTERM/$ID

Delete the service orchestrator instance:

    curl -v -X DELETE -H 'Content-type: text/occi' -H 'X-Auth-Token: '$KID -H 'X-Tenant-Name: '$TENANT $URL/$OTERM/$ID

# Debugging

Debugging can be done in different ways.

The easiest way is to run the service orchestrator locally by starting the application Python program (/bundle/wsgi/application) from within its directory. This way, the logging output appears directly on the terminal.
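
For example, assuming the repository root as the working directory:

    cd bundle/wsgi
    python application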

If a new error occurs after deploying the service orchestrator on the Cloud Controller, the log messages can be inspected by connecting with the OpenShift CLI client and entering `oc logs <pod_name>`.
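
For example:

    oc get pods             # find the pod name of the service orchestrator
    oc logs <pod_name>      # print its log output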

If this doesn't help either, a terminal within the pod can be opened with `oc rsh <pod_name> /bin/sh` (there is no bash shell in this pod).