Commit 8ece89e9 authored by Lionel Walter's avatar Lionel Walter Committed by Matthias

Don't provide kubeconfig and kubecontext; use the defaults used by kubectl so that it works both in the cluster and locally.
parent 00b7ce81
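The behaviour described in the commit message can be sketched with the Kubernetes Python client (the `kubernetes` package is already in requirements.txt); the function name `load_kube_config_with_fallback` is illustrative, not part of the repository:

```python
def load_kube_config_with_fallback():
    """Prefer the in-cluster service-account config; fall back to the local
    kubeconfig (the same file kubectl uses) when running outside the cluster."""
    from kubernetes import config  # provided by the `kubernetes` package

    try:
        # Inside a pod this reads the mounted service-account token.
        config.load_incluster_config()
    except config.ConfigException:
        # Locally this reads $KUBECONFIG or ~/.kube/config, like kubectl.
        config.load_kube_config()
```

Because neither a kubeconfig path nor a context is passed explicitly, the same code works in both environments.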
variables: # Globally defined environment variables, available to individual jobs
  DOCKER_TLS_CERTDIR: "" # Required so that the jobs can talk to the Docker registry
stages: # Stages are groups of jobs that can run in parallel
  - test
  - publish
test:
  stage: test
  image: python:3.8
  before_script:
    - pip install pip --upgrade
    - pip install -r requirements.txt
  script:
    - flake8 ./app
.build-image: # The "." prefix marks a job template
  stage: publish
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" "$REGISTRY"
    - docker build --pull -t "$IMAGE_TAG" -f "$DOCKERFILE" .
    - docker push "$IMAGE_TAG"
    - docker logout
build-tagged-image:
  extends: .build-image # Inherits from the job template .build-image
  variables:
    IMAGE_TAG: "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" # $CI_* variables are part of GitLab's set of predefined variables
    REGISTRY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    REGISTRY_USER: "$CI_REGISTRY_USER"
    REGISTRY: "$CI_REGISTRY"
    DOCKERFILE: "Dockerfile"
  only: # The job runs only for the following refs; "tags" means tagged commits
    - tags
build-latest-image:
  extends: .build-image
  variables:
    IMAGE_TAG: "$CI_REGISTRY_IMAGE:latest"
    REGISTRY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    REGISTRY_USER: "$CI_REGISTRY_USER"
    REGISTRY: "$CI_REGISTRY"
    DOCKERFILE: "Dockerfile"
  only:
    - master
build-feature-branch-image:
  extends: .build-image
  variables:
    IMAGE_TAG: "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME"
    REGISTRY_PASSWORD: "$CI_REGISTRY_PASSWORD"
    REGISTRY_USER: "$CI_REGISTRY_USER"
    REGISTRY: "$CI_REGISTRY"
    DOCKERFILE: "Dockerfile"
  except: # The job runs on all branches except the following
    - master
    - tags
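The `extends` keyword merges each job's own keys over the `.build-image` template. Roughly, it behaves like the following deep merge (a simplified sketch of GitLab's merge semantics, not the exact algorithm):

```python
def extends_merge(template, job):
    """Deep-merge a job definition over a job template (simplified)."""
    merged = dict(template)
    for key, value in job.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Nested mappings are merged recursively.
            merged[key] = extends_merge(merged[key], value)
        else:
            # Scalars and lists in the job override the template.
            merged[key] = value
    return merged

template = {"stage": "publish", "image": "docker:stable"}
job = {"variables": {"IMAGE_TAG": "$CI_REGISTRY_IMAGE:latest"}, "only": ["master"]}
result = extends_merge(template, job)
# result keeps the template's stage/image and adds the job's own keys
```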
FROM python:3.8
WORKDIR /app
ADD app /app/
ADD requirements.txt /app/
# Install kubectl so that Helm can talk to the cluster from inside the container
# (WORKDIR creates the directory if it does not exist yet)
WORKDIR /app/kubectl
RUN wget https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/linux/amd64/kubectl && chmod +x ./kubectl
ENV PATH /app/kubectl:$PATH
WORKDIR /app
# Install Helm
RUN wget https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz && tar -xvzf helm-v3.2.1-linux-amd64.tar.gz && mv linux-amd64 helm
ENV PATH /app/helm:$PATH
RUN pip install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["/app/main.py"]
# Import Api App
API to start, stop, and manage import processes for Memobase. It will be used by the Admin Interface (Drupal).
See <https://import.memobase.k8s.unibas.ch/>.
# Deploy in Kubernetes
```
cd kubernetes-manifests
kubectl apply -f .
```
# Start the app
```
python3 -m venv venv
. venv/bin/activate
pip3 install -r requirements.txt
cd app
python3 main.py
```
This will start a development web server hosting your application, which you can see by navigating to http://0.0.0.0:5000.
# Update the charts
Currently the app only uses the Helm charts stored in `/charts`. To update them, run `/pull-charts.sh`. The source is
# How to work with microK8s if needed
- Install microK8s following https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview (better than the microK8s site)
- Don't forget the firewall rules:
```
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
```
- At the end, run `sudo microk8s kubectl config view --raw > $HOME/.kube/config`. This way, the Python client will find the Kubernetes configuration.
FROM python:3.7
RUN mkdir /app
WORKDIR /app
ADD . /app/
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "/app/main.py"]
apiVersion: v2
appVersion: 0.4.1
description: The mapper service job for the import process.
name: mapper-service
type: application
version: 0.1.4
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
  namespace: memobase
data:
  JOB_ID: "{{ .Values.jobId }}"
  APPLICATION_ID: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  INSTITUTION_ID: "{{ .Values.institutionId }}"
  RECORD_SET_ID: "{{ .Values.recordSetId }}"
  TOPIC_IN: "{{ .Values.processId }}-{{ .Values.lastJobId }}-{{ .Values.lastJobName }}"
  TOPIC_OUT: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  TOPIC_PROCESS: "{{ .Values.processId }}-reporting"
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  namespace: memobase
  labels:
    institutionId: "{{ .Values.institutionId }}"
    recordSetId: "{{ .Values.recordSetId }}"
    jobType: "import-job"
spec:
  template:
    spec:
      containers:
        - name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
          image: "{{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}"
          envFrom:
            - configMapRef:
                name: "{{ .Values.kafkaConfigs }}"
            - configMapRef:
                name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
          volumeMounts:
            - name: config-volume
              mountPath: "/configs/mappings/{{ .Values.configFileName }}"
              subPath: "{{ .Values.configFileName }}"
      volumes:
        - name: config-volume
          configMap:
            name: "{{ .Values.configMapName }}"
      restartPolicy: Never
  backoffLimit: 0
#image values
registry: "cr.gitlab.switch.ch"
image: "memoriav/memobase-2020/services/import-process/mapper-service"
tag: "latest"
jobName: mapper-service
processId: p0001
jobId: j0003
lastJobId: j0002
lastJobName: table-data-transform
kafkaConfigs: prod-kafka-bootstrap-servers
institutionId: placeholder
recordSetId: placeholder
# configMapName holds the name of the config with the mappings for the service.
configMapName: placeholder
configFileName: mapping.yml
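With the default values above, the Kafka topic names in the ConfigMap template expand as follows (a quick sanity check of the naming scheme, not repository code):

```python
# Default values from the mapper-service values.yaml above
values = {
    "processId": "p0001",
    "jobId": "j0003",
    "jobName": "mapper-service",
    "lastJobId": "j0002",
    "lastJobName": "table-data-transform",
}

# Same concatenation the Helm template performs for the ConfigMap
topic_in = "{processId}-{lastJobId}-{lastJobName}".format(**values)
topic_out = "{processId}-{jobId}-{jobName}".format(**values)
topic_process = "{processId}-reporting".format(**values)
```

Each job therefore consumes from the previous job's output topic and reports to a shared per-process reporting topic.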
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
apiVersion: v2
appVersion: 0.1.0
description: This helm chart combines all of the steps for a complete table data import in a single chart.
name: table-data-import-process
type: application
version: 0.1.1
#maybe we could use global values for some fields
#https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#global-chart-values
mapper-service:
  # image values
  registry: "cr.gitlab.switch.ch"
  image: "memoriav/memobase-2020/services/import-process/mapper-service"
  tag: "0.4.1"
  jobName: mapper-service
  processId: p0001
  jobId: j0003
  lastJobId: j0002
  lastJobName: table-data-transform
  kafkaConfigs: prod-kafka-bootstrap-servers
  institutionId: placeholder
  recordSetId: placeholder
  # configMapName holds the name of the config with the mappings for the service.
  configMapName: placeholder
  configFileName: mapping.yml
table-data-transform:
  # image values
  registry: "cr.gitlab.switch.ch"
  image: "memoriav/memobase-2020/services/import-process/table-data-transform"
  tag: "1.1.1"
  jobName: table-data-transform
  processId: p0001
  jobId: j0002
  lastJobId: j0001
  lastJobName: text-file-validation
  institutionId: placeholder
  recordSetId: placeholder
  kafkaConfigs: prod-kafka-bootstrap-servers
  sftpConfigs: internal-sftp-config
  # DEFAULT APP CONFIG VALUES
  # All index values begin at ONE, not at ZERO!
  # sheetIndex is the index of the sheet inside an .xlsx, .xls or .ods file. Ignored for .csv and .tsv.
  sheetIndex: "1"
  # headerCount is the number of lines to skip before creating records.
  headerCount: "1"
  # headerLineIndex is the index of the header row to use for property names.
  headerLineIndex: "1"
  # identifierIndex is the index of the column used as a unique identifier for each row.
  identifierIndex: "1"
text-file-validation:
  # image values
  registry: "cr.gitlab.switch.ch"
  image: "memoriav/memobase-2020/services/import-process/text-file-validation"
  tag: "0.2.0"
  jobName: text-file-validation
  processId: p0001
  jobId: j0001
  institutionId: placeholder
  recordSetId: placeholder
  kafkaConfigs: prod-kafka-bootstrap-servers
  sftpConfigs: internal-sftp-config
  ## Needs to be set to the directory on the sftp server.
  ## This is a relative path built like this:
  ## "./{INSTITUTION_ID}/{RECORD_SET_ID}"
  appDirectory: placeholderValue
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
apiVersion: v2
appVersion: 0.2.0
description: A Helm chart for the table-data-transform service.
name: table-data-transform
type: application
version: 0.1.4
No notes
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
  namespace: memobase
data:
  JOB_ID: "{{ .Values.jobId }}"
  APPLICATION_ID: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  SHEET_INDEX: "{{ .Values.sheetIndex }}"
  HEADER_COUNT: "{{ .Values.headerCount }}"
  HEADER_LINE_INDEX: "{{ .Values.headerLineIndex }}"
  IDENTIFIER_INDEX: "{{ .Values.identifierIndex }}"
  TOPIC_IN: "{{ .Values.processId }}-{{ .Values.lastJobId }}-{{ .Values.lastJobName }}"
  TOPIC_OUT: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  TOPIC_PROCESS: "{{ .Values.processId }}-reporting"
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  namespace: memobase
  labels:
    institutionId: "{{ .Values.institutionId }}"
    recordSetId: "{{ .Values.recordSetId }}"
    jobType: "import-job"
spec:
  template:
    spec:
      containers:
        - name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
          image: "{{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}"
          envFrom:
            - secretRef:
                name: "{{ .Values.sftpConfigs }}"
            - configMapRef:
                name: "{{ .Values.kafkaConfigs }}"
            - configMapRef:
                name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
      restartPolicy: Never
  backoffLimit: 0
#image values
registry: "cr.gitlab.switch.ch"
image: "memoriav/memobase-2020/services/import-process/table-data-transform"
tag: "latest"
jobName: table-data-transform
processId: p0001
jobId: j0002
lastJobId: j0001
lastJobName: text-file-validation
institutionId: placeholder
recordSetId: placeholder
kafkaConfigs: prod-kafka-bootstrap-servers
sftpConfigs: internal-sftp-config
# DEFAULT APP CONFIG VALUES
# all index values begin at ONE and not at ZERO!
# sheetIndex is the index of the sheet inside an .xlsx, .xls or .ods file. Ignored for .csv and .tsv.
sheetIndex: "1"
# headerCount is the number of lines to skip before creating records
headerCount: "1"
# headerLineIndex is the index of the header row to use for property names.
headerLineIndex: "1"
# identifierIndex is the index of the column used as a unique identifier for each row.
identifierIndex: "1"
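The 1-based convention above means the service has to subtract one before indexing into the parsed rows. A hypothetical illustration (`rows` and the subtraction are not taken from the service's code):

```python
# 1-based values, as in the chart defaults above
header_line_index = 1   # which row supplies the property names
identifier_index = 1    # which column holds the unique record identifier

rows = [
    ["id", "title"],          # header row
    ["rec-0001", "Film A"],   # first record
]

# Convert to Python's 0-based indexing before use
header = rows[header_line_index - 1]
identifier_column = header[identifier_index - 1]
```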
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
apiVersion: v2
appVersion: 0.2.0
description: A helm chart for the text-file-validation service.
name: text-file-validation
type: application
version: 0.1.3
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
  namespace: memobase
data:
  JOB_ID: "{{ .Values.jobId }}"
  APP_DIRECTORY: "{{ .Values.appDirectory }}"
  CLIENT_ID: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  TOPIC_IN: "{{ .Values.processId }}-{{ .Values.lastJobId }}-{{ .Values.jobName }}"
  TOPIC_PROCESS: "{{ .Values.processId }}-reporting"
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
  namespace: memobase
  labels:
    institutionId: "{{ .Values.institutionId }}"
    recordSetId: "{{ .Values.recordSetId }}"
    jobType: "import-job"
spec:
  template:
    spec:
      containers:
        - name: "{{ .Values.processId }}-{{ .Values.jobId }}-{{ .Values.jobName }}"
          image: "{{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}"
          envFrom:
            - secretRef:
                name: "{{ .Values.sftpConfigs }}"
            - configMapRef:
                name: "{{ .Values.kafkaConfigs }}"
            - configMapRef:
                name: "{{ .Values.processId }}-{{ .Values.jobId }}-app-config"
      restartPolicy: Never
  backoffLimit: 0
#image values
registry: "cr.gitlab.switch.ch"
image: "memoriav/memobase-2020/services/import-process/text-file-validation"
tag: "latest"
jobName: text-file-validation
processId: p0001
jobId: j0001
institutionId: placeholder
recordSetId: placeholder
kafkaConfigs: prod-kafka-bootstrap-servers
sftpConfigs: internal-sftp-config
## Needs to be set to the directory on the sftp server.
## This is a relative path built like this:
## "./{INSTITUTION_ID}/{RECORD_SET_ID}"
appDirectory: placeholderValue
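The comment above describes how `appDirectory` is composed from the institution and record-set identifiers; as a sketch (the variable names and example values are illustrative, not taken from the service):

```python
institution_id = "placeholder-institution"
record_set_id = "placeholder-record-set"

# Relative path on the sftp server, per the pattern in the values file
app_directory = "./{}/{}".format(institution_id, record_set_id)
```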
# Copyright (C) 2019 The Android Open Source Project
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# https://gerrit.googlesource.com/k8s-gerrit/+/refs/heads/master/tests/helpers/helm.py
import json
import subprocess


class Helm:
    def _exec_command(self, cmd, fail_on_err=True):
        base_cmd = [
            "helm",
        ]
        # For debugging: print(" ".join(base_cmd + cmd))
        return subprocess.run(
            base_cmd + cmd,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            check=fail_on_err,
            text=True,
        )
    def install(
        self,
        chart,
        name,
        values_file=None,
        set_values=None,
        namespace=None,
        fail_on_err=True,
        wait=True,
    ):
        """Installs a chart on the cluster

        Arguments:
            chart {str} -- Release name or path of a helm chart
            name {str} -- Name with which the chart will be installed on the cluster

        Keyword Arguments:
            values_file {str} -- Path to a custom values.yaml file (default: {None})
            set_values {dict} -- Dictionary containing key-value pairs used to
                overwrite values in the values.yaml file (default: {None})
            namespace {str} -- Namespace to install the release into (default: {None})
            fail_on_err {bool} -- Whether to fail with an exception if the
                installation fails (default: {True})
            wait {bool} -- Whether to wait for all pods to be ready (default: {True})

        Returns:
            CompletedProcess -- CompletedProcess object returned by subprocess,
                containing details about the result and output of the executed
                command.
        """
        helm_cmd = ["install", name, chart]
        if values_file:
            helm_cmd.extend(("-f", values_file))
        if set_values:
            opt_list = ["%s=%s" % (k, v) for k, v in set_values.items()]
            helm_cmd.extend(("--set", ",".join(opt_list)))
        if namespace:
            helm_cmd.extend(("--namespace", namespace))
        if wait:
            helm_cmd.append("--wait")
        return self._exec_command(helm_cmd, fail_on_err)
    def list(self):
        """Lists the helm releases installed on the cluster.

        Returns:
            list -- List of helm chart releases installed on the cluster.
        """
        helm_cmd = ["list", "--all", "--output", "json"]
        output = self._exec_command(helm_cmd).stdout
        # Helm 3 prints a plain JSON array of releases
        # (Helm 2 wrapped it in a top-level "Releases" key).
        return json.loads(output)

    def upgrade(
        self,
        chart,
        name,
        values_file=None,
        set_values=None,
        reuse_values=True,
        recreate_pods=False,
        fail_on_err=True,
    ):
        """Upgrades a release on the cluster

        Arguments:
            chart {str} -- Release name or path of a helm chart
            name {str} -- Name with which the chart will be installed on the cluster

        Keyword Arguments:
            values_file {str} -- Path to a custom values.yaml file (default: {None})
            set_values {dict} -- Dictionary containing key-value pairs used to
                overwrite values in the values.yaml file (default: {None})
            reuse_values {bool} -- Whether to reuse existing values that are not
                overwritten (default: {True})
            recreate_pods {bool} -- Whether to restart changed pods (default: {False})
            fail_on_err {bool} -- Whether to fail with an exception if the
                upgrade fails (default: {True})

        Returns:
            CompletedProcess -- CompletedProcess object returned by subprocess,
                containing details about the result and output of the executed
                command.
        """
        helm_cmd = ["upgrade", name, chart, "--wait"]
        if values_file:
            helm_cmd.extend(("-f", values_file))
        if reuse_values:
            helm_cmd.append("--reuse-values")
        if recreate_pods:
            helm_cmd.append("--recreate-pods")
        if set_values:
            opt_list = ["%s=%s" % (k, v) for k, v in set_values.items()]
            helm_cmd.extend(("--set", ",".join(opt_list)))
        return self._exec_command(helm_cmd, fail_on_err)

    def uninstall(self, name, namespace=None, fail_on_err=True):
        """Uninstalls a release from the cluster

        Arguments:
            name {str} -- Name of the release to uninstall

        Keyword Arguments:
            namespace {str} -- Namespace to uninstall the release from (default: {None})
            fail_on_err {bool} -- Whether to fail with an exception if the
                uninstall fails (default: {True})

        Returns:
            CompletedProcess -- CompletedProcess object returned by subprocess,
                containing details about the result and output of the executed
                command.
        """
        helm_cmd = ["uninstall", name]
        if namespace:
            helm_cmd.extend(("--namespace", namespace))
        return self._exec_command(helm_cmd, fail_on_err)

    def uninstall_all(self, exceptions=None):
        """Uninstalls all releases on the cluster

        Keyword Arguments:
            exceptions {list} -- List of release names not to delete (default: {None})
        """
        exceptions = exceptions or []  # guard against iterating over None
        for release in self.list():
            # Helm 3's JSON output uses lowercase keys.
            if release["name"] in exceptions:
                continue
            self.uninstall(release["name"])
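The `--set` handling in `install` and `upgrade` flattens the `set_values` dict into Helm's comma-separated `key=value` syntax. The same expression in isolation:

```python
# Example overrides; the keys mirror the chart values used in this repository
set_values = {"processId": "p0001", "institutionId": "inst-1"}

# Identical flattening to the one inside Helm.install/upgrade
opt_list = ["%s=%s" % (k, v) for k, v in set_values.items()]
set_argument = ",".join(opt_list)
# passed to helm as: --set processId=p0001,institutionId=inst-1
```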
Flask
kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: import-api-deployment
  namespace: memobase
spec:
  selector:
    matchLabels:
      app: import-api-app
  replicas: 1
  template:
    metadata:
      labels:
        app: import-api-app
        tier: web
    spec:
      serviceAccountName: import-api-service-account # needed to manage other pods inside the cluster
      containers:
        - name: import-api-container
          image: cr.gitlab.switch.ch/memoriav/memobase-2020/services/import-process/import-api:MEMO-134-start-stop-job