Mirror of https://github.com/samply/bridgehead.git, synced 2025-06-16 20:40:15 +02:00

Compare commits: 40 commits (refactor/i... → ehds2_deve...)
Commit SHA1s: 2c79799815, ca93451357, 892e2c2cf1, 520c560be0, bb81617873, c27536b566, f213e6909a, 6887264a5b, 5b2c3d7725, a8a15aaad8, 2e8b1dc96c, 105495d6cd, 93e73838c6, 9e72b04824, fbe68bc778, 17f372b06c, a250d52998, e309dc495a, 53cdd49fd7, 36a97ecc20, 1b82207934, 4880c5cc9b, b9b44d2530, 0d1f425df0, 7a33b54416, 40e15f4d84, 99956f3477, d73e6ee7a3, b6bfaba855, 5bd9baaff7, a629d87a5f, 4abe193c58, 5e8db39e2a, b28a48da0a, 7c2e9af947, 81e5a6ea3f, a364a4b3f8, 9459e1a979, d5760ed3d0, 8af5cf3f01

2 .gitignore (vendored)
@@ -1,7 +1,7 @@
##Ignore site configuration
.gitmodules
site-config/*
.idea

## Ignore site configuration
*/docker-compose.override.yml

122 README.md
@@ -9,6 +9,7 @@ This repository is the starting point for any information and tools you will nee
- [Software](#software)
- [Network](#network)
2. [Deployment](#deployment)
- [EHDS2/ECDC](#ehds2-ecdc)
- [Site name](#site-name)
- [Projects](#projects)
- [GitLab repository](#gitlab-repository)
@@ -34,7 +35,7 @@ This repository is the starting point for any information and tools you will nee

## Requirements

The data protection officer at your site will probably want to know exactly what our software does with patient data, and you may need to get their approval before you are allowed to install a Bridgehead. To help you with this, we have provided some data protection concepts:
The data protection group at your site will probably want to know exactly what our software does with patient data, and you may need to get their approval before you are allowed to install a Bridgehead. To help you with this, we have provided some data protection concepts:

- [Germany](https://www.bbmri.de/biobanking/it/infrastruktur/datenschutzkonzept/)

@@ -46,8 +47,6 @@ Hardware requirements strongly depend on the specific use-cases of your network
- 32 GB RAM
- 160 GB hard drive, SSD recommended

We recommend using a dedicated VM for the Bridgehead, with no other applications running on it. While the Bridgehead can, in principle, run on a shared VM, you might run into surprising problems such as resource conflicts (e.g., two apps using tcp port 443).
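
If you are unsure whether something on a shared VM is already occupying the HTTPS port, a quick check (illustrative only; any equivalent tool works) is:

```shell
# List listening TCP sockets and look for a process already bound to port 443
ss -tlnp | grep ':443'
```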

### Software

You are strongly recommended to install the Bridgehead under a Linux operating system (but see the section [Non-Linux OS](#non-linux-os)). You will need root (administrator) privileges on this machine in order to perform the deployment. We recommend the newest Ubuntu LTS server release.
@@ -89,6 +88,8 @@ The following URLs need to be accessible (prefix with `https://`):
* gitlab.bbmri-eric.eu
* only for German Biobank Node
* broker.bbmri.de
* only for EHDS2/ECDC
* ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com

> 📝 This URL list is subject to change. Instead of the individual names, we highly recommend whitelisting wildcard domains: *.dkfz.de, github.com, *.docker.com, *.docker.io, *.samply.de, *.bbmri.de.

@@ -96,6 +97,34 @@ The following URLs need to be accessible (prefix with `https://`):

## Deployment

### EHDS2/ECDC

The ECDC Bridgehead allows you to connect your site/node to the [AMR Explorer](http://ehds2-lens.swedencentral.cloudapp.azure.com/), a non-public central web site that allows certified researchers to search for information relating to antibiotic resistance, Europe-wide. You can supply the Bridgehead with data from your site in the form of CSV files, which will then be made available to the Explorer for searching purposes.

You will need to set up some configuration before you can start a Bridgehead. This can be done as follows:

```shell
sudo mkdir -p /etc/bridgehead
sudo cp /srv/docker/bridgehead/bbmri/modules/bbmri.conf /etc/bridgehead
```

Now edit ```/etc/bridgehead/bbmri.conf``` and customize the following variables for your site:

- SITE_NAME
- SITE_ID
- OPERATOR_FIRST_NAME
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL

If you run a proxy at your site, you will also need to give values to the ```HTTP*_PROXY*``` variables.
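
For example (all values below are purely illustrative; the variable names are the ones defined in ```bbmri.conf``` further down, and the proxy URL placeholder follows the comment in that file):

```
SITE_NAME=Belgium
SITE_ID=belgium
OPERATOR_FIRST_NAME=Jane
OPERATOR_LAST_NAME=Doe
OPERATOR_EMAIL=jane.doe@example.org
HTTP_PROXY_URL=http://my-proxy-host:my-proxy-port
HTTPS_PROXY_URL=$HTTP_PROXY_URL
```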

When you first start the Bridgehead, it will clone two extra repositories into /srv/docker, namely, ```focus``` and ```transfair```. It will automatically build local images of these repositories for you. These components have the following functionality that has been customized for ECDC:

- *focus.* This component is responsible for completing the CQL that is used for running queries against the Blaze FHIR store. It uses a set of templates for doing this. Extra templates have been written for the ECDC use case. They can be found in /srv/docker/focus/resources/cql/EHDS2*.
- *transfair.* This is an ETL component. It takes the CSV data that you provide, converts it to FHIR, and loads it to Blaze. This will be run once, if there is data in /srv/docker/ecdc/data. A lock file in the data directory ensures that it does not get run again. Remove this lock file and restart the Bridgehead if you want to load new data (see the sketch after this list).
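
A minimal sketch of that reload step, assuming the lock file sits directly in the data directory (as suggested by the compose file further down, which mounts /srv/docker/ecdc/data into the container as /app/data):

```shell
# Remove the TransFAIR lock file so the next start loads data again
sudo rm /srv/docker/ecdc/data/lock
# Restart the Bridgehead (ECDC/EHDS2 installations use run.sh)
sudo /srv/docker/bridgehead/run.sh
```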

These images will normally be rebuilt every time you restart the Bridgehead. This is a workaround to fix a bug: if you don't rebuild these images for every start, then legacy versions will be used and you will lose the new ECDC functionality. The reason for this is still under investigation.

### Site name

You will need to choose a short name for your site. This is not a URL, just a simple identifying string. For the examples below, we will use "your-site-name", but you should obviously choose something that is meaningful to you and which is unique.
@@ -110,6 +139,8 @@ Site names should adhere to the following conventions:

### GitLab repository

You can skip this section if you are doing an ECDC/EHDS2 installation.

In order to be able to install, you will need to have your own repository in GitLab for your site's configuration settings. This allows automated updates of the Bridgehead software.

To request a new repository, please contact your research network administration or send an email to one of the project specific addresses:
@@ -132,7 +163,24 @@ During the installation, your Bridgehead will download your site's configuration

### Base Installation

First, download your site specific configuration repository:
Clone the bridgehead repository:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```

If this is an ECDC/EHDS2 installation, switch to the ```ehds2``` branch and copy the configuration file to the required location:
```shell
cd /srv/docker/bridgehead
sudo git checkout ehds2
sudo mkdir -p /etc/bridgehead/
sudo cp bbmri/modules/bbmri.conf /etc/bridgehead/
sudo vi /etc/bridgehead/bbmri.conf # Modify to include national node name and admin contact details
```

For an ECDC/EHDS2 installation, you will also need to copy your data, in comma-separated value (CSV) format, to ```/srv/docker/ecdc/data```. Make sure it is readable by all. Only files with the ending ```.csv``` will be read in; all other files will be ignored.
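
For instance (the file name here is hypothetical):

```shell
sudo cp AMR_your-site-name_2024.csv /srv/docker/ecdc/data/
sudo chmod a+r /srv/docker/ecdc/data/AMR_your-site-name_2024.csv
```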

If this is not an ECDC/EHDS2 installation, then download your site specific configuration repository:
```shell
sudo mkdir -p /etc/bridgehead/
sudo git clone <REPO_URL_FROM_EMAIL> /etc/bridgehead/
@@ -151,12 +199,6 @@ Pay special attention to:
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL

Clone the bridgehead repository:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```

Then, run the installation script:

```shell
@@ -175,8 +217,38 @@ sudo ./bridgehead enroll <PROJECT>

... and follow the instructions on the screen. Please send your default Collection ID and the display name of your site together with the certificate request when you enroll. You should then be prompted to do the next step:

Note: if you are doing an ECDC/EHDS2 installation, you will need to perform the Beam certificate signing yourself. Do not send an email to either of the email addresses suggested by the bridgehead enroll procedure. Instead, log on to the VM where Beam is running and perform the following (you will need root permissions):
```shell
cd /srv/docker/beam-broker
sudo mkdir -p csr
sudo vi csr/ecdc-bridgehead-<national node name>.csr # Copy and paste the certificate printed during the enroll
sudo pki-scripts/managepki sign --csr-file csr/ecdc-bridgehead-<national node name>.csr --common-name=ecdc-bridgehead-<national node name>.broker.bbmri.samply.de
```

You can check that the Bridgehead has connected to Beam with the following command:
```shell
pki-scripts/managepki list
```

### Starting and stopping your Bridgehead

For an ECDC/EHDS2 installation, this is done with the help of specialized scripts:

To start:

```shell
sudo /srv/docker/bridgehead/run.sh
```

To stop (you generally won't need to do this):

```shell
sudo /srv/docker/bridgehead/stop.sh
```

For regular installations, read on.

If you followed the above steps, your Bridgehead should already be configured to autostart (via systemd). If you would like to start/stop manually:

To start, run
@@ -202,7 +274,7 @@ sudo systemctl [enable|disable] bridgehead@<PROJECT>.service
After starting the Bridgehead, you can watch the initialization process with the following command:

```shell
/srv/docker/bridgehead/bridgehead logs <project> -f
journalctl -u bridgehead@bbmri -f
```

if this exits with something similar to the following:
@@ -222,9 +294,8 @@ docker ps
There should be 6 - 10 Docker processes. If there are fewer, then you know that something has gone wrong. To see what is going on, run:

```shell
/srv/docker/bridgehead/bridgehead logs <Project> -f
journalctl -u bridgehead@bbmri -f
```
This translates to a journalctl command, so all the regular journalctl flags can be used.

Once the Bridgehead has passed these checks, take a look at the landing page:

@@ -238,7 +309,7 @@ You can either do this in a browser or with curl. If you visit the URL in the br
curl -k https://localhost
```

Should the landing page not show anything, you can inspect the logs of the containers to determine what is going wrong. To do this you can use `./bridgehead docker-logs <Project> -f` to follow the logs of the container. This translates to a docker compose logs command, meaning all the usual docker logs flags work.
If you get errors when you do this, you need to use ```docker logs``` to examine your landing page container in order to determine what is going wrong.
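
A minimal sketch of that check (the container name shown is hypothetical; list the running containers first to find the actual one):

```shell
# Find the landing page container, then follow its logs
docker ps --format '{{.Names}}' | grep -i landing
docker logs -f bridgehead-landing
```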

If you have chosen to take part in our monitoring program (by setting the ```MONITOR_APIKEY``` variable in the configuration), you will be informed by email when problems are detected in your Bridgehead.

@@ -301,19 +372,19 @@ Once you have added your biobank to the Directory you got persistent identifier

The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country specific directory URL, username and password to enable this feature.

You should talk with your local data protection group regarding the information that is published by Directory sync.

Full details can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).

To enable it, you will need to set these variables in the ```bbmri.conf``` file of your GitLab repository. Here is an example config:

```
DS_DIRECTORY_URL=https://directory.bbmri-eric.eu
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=your_directory_password
DS_DIRECTORY_USER_PASS=qwdnqwswdvqHBVGFR9887
DS_TIMER_CRON="0 22 * * *"
```
Please contact your National Node to obtain this information.
You must contact the Directory team for your national node to find the URL, and to register as a user.

Optionally, you **may** change when you want Directory sync to run by specifying a [cron](https://crontab.guru) expression, e.g. `DS_TIMER_CRON="0 22 * * *"` for 10 pm every evening.
Additionally, you should choose when you want Directory sync to run. In the example above, this is set to happen at 10 pm every evening. You can modify this to suit your requirements. The timer specification should follow the [cron](https://crontab.guru) convention.

Once you have edited the GitLab config, the Bridgehead will automatically update its configuration with the new values and sync the data.

@@ -323,6 +394,19 @@ There will be a delay before the effects of Directory sync become visible. First

The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).

For an ECDC/EHDS2 installation, you need to provide your data as tables in CSV (comma-separated value) files and place them in the directory /srv/docker/ecdc/data. You can provide as many data files as you like, and you can add new files incrementally over time.

In order for this new data to be loaded, you will need to execute the ```run.sh``` script with the appropriate arguments:

- To read just the most recently added data files: ```/srv/docker/bridgehead/run.sh --upload```.
- To read in all data from scratch: ```/srv/docker/bridgehead/run.sh --upload-all```.

These two variants give you the choice between uploading data incrementally, which preserves the dates used for statistics, and uploading everything in a single run, which date-stamps all records with the current date.

The Bridgehead can be started without data, but obviously, any searches run from the Explorer will return zero results for your site if you do that. Note that an empty data directory will automatically be created on the first start of the Bridgehead if you don't set one up yourself.

For non-ECDC setups, read on.

You can load data into this store by using its FHIR API:
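
As a minimal sketch (not the documented command), a FHIR transaction Bundle can be POSTed to the Blaze base URL, here assuming the host port mapping ```8081:8080``` from the compose file below and a hypothetical file ```bundle.json```:

```shell
# POST a FHIR transaction Bundle to the local Blaze store (port mapping is an assumption)
curl -X POST -H "Content-Type: application/fhir+json" \
     --data @bundle.json http://localhost:8081/fhir
```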

@@ -1,16 +1,18 @@
version: "3.7"

# This includes only the shared persistence for BBMRI-ERIC and GBN. Federation components are included as modules, see vars.
# This includes only the shared persistence for BBMRI-ERIC and GBN and EHDS2. Federation components are included as modules, see vars.

services:
  blaze:
    image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
    #image: docker.verbis.dkfz.de/cache/samply/blaze:latest
    # Blaze versions 0.26 and 0.27 do not return anything when you run a
    # CQL query, so I am pinning the version at 0.25.
    image: samply/blaze:0.25
    container_name: bridgehead-bbmri-blaze
    environment:
      BASE_URL: "http://bridgehead-bbmri-blaze:8080"
      JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
      DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
      DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
      JAVA_TOOL_OPTIONS: "-Xmx4g"
      LOG_LEVEL: "debug"
      ENFORCE_REFERENTIAL_INTEGRITY: "false"
    volumes:
      - "blaze-data:/app/data"
@@ -21,6 +23,8 @@ services:
      - "traefik.http.services.blaze_ccp.loadbalancer.server.port=8080"
      - "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
      - "traefik.http.routers.blaze_ccp.tls=true"
    ports:
      - "8081:8080"

volumes:
  blaze-data:

81 bbmri/modules/bbmri.conf (Normal file)
@@ -0,0 +1,81 @@
### DO NOT EDIT THIS FILE DIRECTLY.
###
### This file is collaboratively managed by yourself and the CCP-IT team at DKFZ.
### The Bridgehead will pull it from git every night and restart if required.
### To make any changes (or review changes by CCP-IT), please login here:
### [URL_TO_SITE_SPECIFIC_GIT_REPO]
###
### DO NOT EDIT THIS FILE DIRECTLY.

### A note on Secrets:
###
### Variable with a value of <VAULT> will be fetched from a central component
### upon each bridgehead startup.
### Using the proven Vaultwarden password manager puts you in full control of
### who can read the passwords. In particular, as long as you don't declare a
### secret as shared ("SITE+DKFZ"), DKFZ cannot read these strings.
### We recommend putting credentials such as local passwords into the password
### store, not the git repo. Please keep your master password safe (vault.conf).


### Common Configuration of all Components
## This is a descriptive human readable name of your site (e.g. Belgium)
SITE_NAME=<National node>
## This is the id for your site used in machine to machine communication (should be
## lower-case, e.g. belgium)
SITE_ID=<National node>
## This server's hostname, for access from other computers within your institution
## (e.g. mybridgehead.intern.myinstitution.org)
## Optional. If left empty, this is auto-generated via the `hostname` command.
HOST=

## Proxy Configuration
# leave empty if not applicable
# eg.: http://my-proxy-host:my-proxy-port
HTTP_PROXY_URL=
HTTP_PROXY_USERNAME=
HTTP_PROXY_PASSWORD=
HTTPS_PROXY_URL=$HTTP_PROXY_URL
HTTPS_PROXY_USERNAME=$HTTP_PROXY_USERNAME
HTTPS_PROXY_PASSWORD=$HTTP_PROXY_PASSWORD

## Maintenance Configuration
# By default, the bridgehead regularly performs certain housekeeping tasks such as pruning of old docker images to not run out of disk space.
# Set the following to false to opt-out. (Default: true)
#AUTO_HOUSEKEEPING=

### Connector Configuration
## The operator of the specific site.
OPERATOR_FIRST_NAME=
OPERATOR_LAST_NAME=
OPERATOR_EMAIL=
OPERATOR_PHONE=
## SMTP Server
# ex.: mailhost.intern.klinik.de
MAIL_HOST=
MAIL_PORT=
# ex.: no-reply@bridgehead.intern.klinik.de
MAIL_FROM_ADDRESS=
MAIL_FROM_NAME=

### Monitoring
# The apikey used for reporting to the central DKFZ monitoring. Leave empty to opt out.
MONITOR_APIKEY=

### Biobanking (BBMRI) specifics
## We consider BBMRI as BBMRI-ERIC (European) and German Biobank Node (Germany).
## Obviously, all German biobanks are by definition also European. Thus,
## any Bridgehead will by default connect to the BBMRI-ERIC services but not
## the national ones. We aim to proceed similarly for other BBMRI-ERIC National Nodes.
##
## The default values are correct for biobanks outside Germany.
## For a biobank inside Germany, set ENABLE_GBN=true.
# Connect to the European services, e.g. BBMRI-ERIC Sample Locator (Default: true)
ENABLE_ERIC=false
# Connect to the German services, e.g. Biobank Node Sample Locator (Default: false)
# Set this to true in German biobanks!
ENABLE_GBN=false
# Connect to the ECDC services, e.g. ECDC Sample Locator (Default: false)
# Set this to true in ECDC national nodes!
ENABLE_EHDS2=true

@@ -1,16 +1,8 @@
version: "3.7"

services:
  directory_sync_service:
    image: "docker.verbis.dkfz.de/cache/samply/directory_sync_service"
    environment:
      DS_DIRECTORY_URL: ${DS_DIRECTORY_URL:-https://directory.bbmri-eric.eu}
      DS_DIRECTORY_URL: ${DS_DIRECTORY_URL}
      DS_DIRECTORY_USER_NAME: ${DS_DIRECTORY_USER_NAME}
      DS_DIRECTORY_USER_PASS: ${DS_DIRECTORY_USER_PASS}
      DS_TIMER_CRON: ${DS_TIMER_CRON:-0 22 * * *}
      DS_DIRECTORY_ALLOW_STAR_MODEL: ${DS_DIRECTORY_ALLOW_STAR_MODEL:-true}
      DS_DIRECTORY_MOCK: ${DS_DIRECTORY_MOCK}
      DS_DIRECTORY_DEFAULT_COLLECTION_ID: ${DS_DIRECTORY_DEFAULT_COLLECTION_ID}
      DS_DIRECTORY_COUNTRY: ${DS_DIRECTORY_COUNTRY}
    depends_on:
      - "blaze"
      DS_DIRECTORY_PASS_CODE: ${DS_DIRECTORY_PASS_CODE}
      DS_TIMER_CRON: ${DS_TIMER_CRON}

53 bbmri/modules/dnpm-compose.yml (Normal file)
@@ -0,0 +1,53 @@
version: "3.7"

services:
  dnpm-beam-proxy:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-dnpm-beam-proxy
    environment:
      BROKER_URL: ${DNPM_BROKER_URL}
      PROXY_ID: ${DNPM_PROXY_ID}
      APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
      PRIVKEY_FILE: /run/secrets/proxy.pem
      ALL_PROXY: http://forward_proxy:3128
      TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
      ROOTCERT_FILE: /conf/root.crt.pem
    secrets:
      - proxy.pem
    depends_on:
      - "forward_proxy"
    volumes:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro

  dnpm-beam-connect:
    depends_on: [ dnpm-beam-proxy ]
    image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
    container_name: bridgehead-dnpm-beam-connect
    environment:
      PROXY_URL: http://dnpm-beam-proxy:8081
      PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
      APP_ID: dnpm-connect.${DNPM_PROXY_ID}
      DISCOVERY_URL: "./conf/central_targets.json"
      LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
      HTTP_PROXY: http://forward_proxy:3128
      HTTPS_PROXY: http://forward_proxy:3128
      NO_PROXY: dnpm-beam-proxy,dnpm-backend,host.docker.internal
      RUST_LOG: ${RUST_LOG:-info}
      NO_AUTH: "true"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
      - /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
      - "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
      - "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
      - "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
      - "traefik.http.routers.dnpm-connect.tls=true"

secrets:
  proxy.pem:
    file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
@@ -6,7 +6,6 @@ services:
    container_name: bridgehead-dnpm-backend
    environment:
      - ZPM_SITE=${ZPM_SITE}
      - N_RANDOM_FILES=${DNPM_SYNTH_NUM}
    volumes:
      - /etc/bridgehead/dnpm:/bwhc_config:ro
      - ${DNPM_DATA_DIR}:/bwhc_data
27 bbmri/modules/dnpm-node-setup.sh (Normal file)
@@ -0,0 +1,27 @@
#!/bin/bash

if [ -n "${ENABLE_DNPM_NODE}" ]; then
  log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
  OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"

  # Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
  DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  if [ -z "${ZPM_SITE+x}" ]; then
    log ERROR "Mandatory variable ZPM_SITE not defined!"
    exit 1
  fi
  if [ -z "${DNPM_DATA_DIR+x}" ]; then
    log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
    exit 1
  fi
  if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
    echo "Override of landing page url already in place"
  else
    echo "Adding override of landing page url"
    if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
      echo -e '  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
    else
      echo -e 'version: "3.7"\nservices:\n  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
    fi
  fi
fi
12 bbmri/modules/dnpm-setup.sh (Normal file)
@@ -0,0 +1,12 @@
#!/bin/bash

if [ -n "${ENABLE_DNPM}" ]; then
  log INFO "DNPM setup detected (Beam.Connect) -- will start Beam and Beam.Connect for DNPM."
  OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"

  # Set variables required for Beam-Connect
  DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
  DNPM_BROKER_ID="broker.ccp-it.dktk.dkfz.de"
  DNPM_BROKER_URL="https://${DNPM_BROKER_ID}"
  DNPM_PROXY_ID="${SITE_ID}.${DNPM_BROKER_ID}"
fi
82 bbmri/modules/ehds2-compose.yml (Normal file)
@@ -0,0 +1,82 @@
version: "3.7"

services:
  focus-ehds2:
    #image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
    image: samply/focus
    container_name: bridgehead-focus-ehds2
    environment:
      API_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
      BEAM_APP_ID_LONG: focus.${EHDS2_PROXY_ID}
      PROXY_ID: ${EHDS2_PROXY_ID}
      BLAZE_URL: "http://blaze:8080/fhir/"
      BEAM_PROXY_URL: http://beam-proxy-ehds2:8081
      RETRY_COUNT: ${FOCUS_RETRY_COUNT}
      OBFUSCATE: "no"
    depends_on:
      - "beam-proxy-ehds2"
      - "blaze"

  beam-proxy-ehds2:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-beam-proxy-ehds2
    environment:
      BROKER_URL: ${EHDS2_BROKER_URL}
      PROXY_ID: ${EHDS2_PROXY_ID}
      APP_focus_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
      PRIVKEY_FILE: /run/secrets/proxy.pem
      ALL_PROXY: http://forward_proxy:3128
      TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
      ROOTCERT_FILE: /conf/root.crt.pem
    secrets:
      - proxy.pem
    depends_on:
      - "forward_proxy"
    volumes:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/bbmri/modules/${EHDS2_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro

  # Convert ECDC CSV file into FHIR and push to Blaze
  transfair:
    container_name: transfair
    image: samply/transfair
    environment:
      FHIR_INPUT_URL: "http://source_blaze:8080/fhir"
      FHIR_OUTPUT_URL: "http://bridgehead-bbmri-blaze:8080/fhir"
      PROFILE: "amr2fhir"
      #WRITE_BUNDLES_TO_FILE: "true"
      AMR_FILE_PATH: "/app/data"
    restart: on-failure
    # The start up logic for TransFAIR is kind of complicated for the ECDC/EHDS2
    # pilot. This is because we only want to run it if 1. there are source data
    # files to be transformed and 2. if there is no lock file. We also need to
    # wait for Blaze to start, TransFAIR does not check for this. And finally,
    # once TransFAIR has finished loading data, a lock file is created, to stop
    # a time-consuming repeat run.
    command: bash -c " \
      echo listing /app/data && \
      ls -la /app/data && \
      ls /app/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && \
      [ ! -f /app/data/lock ] && \
      ( \
        echo 'Wait for Blaze to finish initializing' ; \
        sleep 360 ; \
        echo 'Remove old output files' ; \
        rm -rf /app/test/* ; \
        cd /app ; \
        echo 'Run TransFAIR' ; \
        java -jar transFAIR.jar ; \
        echo 'Touching lock file' ; \
        touch /app/data/lock \
      ) & tail -f /dev/null"
    # If you put .csv files into ./../ecdc/data, TransFAIR will try to process them.
    volumes:
      - ../../ecdc/test:/app/test/
      - ../../ecdc/data:/app/data/

  # Report on the data pushed to Blaze by TransFAIR
  test-data-loader:
    container_name: test-data-loader
    image: samply/test-data-loader
    command: sh -c "sleep 420 && echo Listing all resources in FHIR store && blazectl --server http://bridgehead-bbmri-blaze:8080/fhir count-resources && tail -f /dev/null"
28 bbmri/modules/ehds2-setup.sh (Normal file)
@@ -0,0 +1,28 @@
#!/bin/bash

if [ "${ENABLE_EHDS2}" == "true" ]; then
  log INFO "EHDS2 setup detected -- will start services for EHDS2/ECDC."
  OVERRIDE+=" -f ./$PROJECT/modules/ehds2-compose.yml"

  # The environment needs to be defined in /etc/bridgehead
  case "$ENVIRONMENT" in
    "production")
      export EHDS2_BROKER_ID=broker.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2
      ;;
    "test")
      export EHDS2_BROKER_ID=broker.test.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2.test
      ;;
    *)
      report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
      export EHDS2_BROKER_ID=broker.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2
      ;;
  esac

  EHDS2_BROKER_URL=https://${EHDS2_BROKER_ID}
  EHDS2_PROXY_ID=${SITE_ID}.${EHDS2_BROKER_ID}
  EHDS2_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
  EHDS2_SUPPORT_EMAIL=feedback@germanbiobanknode.de
fi
22 bbmri/modules/ehds2.root.crt.pem (Normal file)
@@ -0,0 +1,22 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUMy/n0zFRihhVR3aAD54LumzeYdwwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIxMDI1MDczNTA4WhcNMzIx
MDIyMDczNTM3WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL3qWliHIlIT1Qlsyq/NKJ1uj6/AF0STNg5NTNpb
Xqe5rmUqs6jmQepputGStBVe5TthFw56whISv9FqD5s1PZUGyFikW1pJUnF7ZYRf
MfrJHRi1vUnD3Gw36FCot+i6BAxfw/rdp9hoqFZ6erRkULLaYZ5S2cDHN0DWc18V
3VgZ66ah8QXSx7ERRNa/eWRkHrPIYhyVSoKuyZfvbVgsYZADSlviCgIHPrGLerLr
ylNUyuTxJ5RKStOwPn7A+Jp7nRT+MRh9BphA7s6NuK9h+eVe1DiLbIETWyCEfN3Y
INpunatn3QDhqOIfNcuBArjsAj7mg8l5KNba8nUP4v0EJYECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMvc5Fizz1vO
MEG3MIsy7UY69ZNIMB8GA1UdIwQYMBaAFMvc5Fizz1vOMEG3MIsy7UY69ZNIMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQBb8a5su820
h8JStJC+KpvXmDrGkwx9bHlEZMgQQejIrwPLEbA32KBvNxdoUxF9q1Y773MKdqbc
cCJwzQXE/NPZ13hCGrEIXs8DgH52GhEB5592k5/bRNcAvUwbZSXPPiT0rgq/eUOt
BYhgN0ov7h1MC5L6CYB/rQwqck7JPlmrXTkh2gix4/dEdBRzsHsn/xlo8ay5QYHG
rx2Adit76eZu/MJoJNzl1r8MPxLqyAie3KcIU54A+UMozLrWEQP/TyOyWZdjUjJt
cBYgkKJTjwdRhc+ehI3kFo7b/a/Z/jl9szKsAPHozMixSi8lGnsYwN80oqeRvT7h
wcMUK+igv3/K
-----END CERTIFICATE-----
22 bbmri/modules/ehds2.test.root.crt.pem (Normal file)
@@ -0,0 +1,22 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
vZdXnV1qjlM2
-----END CERTIFICATE-----
@@ -16,7 +16,7 @@ services:
      - "blaze"

  beam-proxy-eric:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-beam-proxy-eric
    environment:
      BROKER_URL: ${ERIC_BROKER_URL}

@@ -16,7 +16,7 @@ services:
      - "blaze"

  beam-proxy-gbn:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-beam-proxy-gbn
    environment:
      BROKER_URL: ${GBN_BROKER_URL}
15 bbmri/vars
@@ -4,7 +4,10 @@
# Makes only sense for German Biobanks
: ${ENABLE_GBN:=false}

FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
# Makes only sense for EHDS2 project
: ${ENABLE_EHDS2:=false}

FOCUS_RETRY_COUNT=128
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem

for module in $PROJECT/modules/*.sh
@@ -14,12 +17,16 @@ do
done

SUPPORT_EMAIL=$ERIC_SUPPORT_EMAIL
BROKER_URL_FOR_PREREQ="${ERIC_BROKER_URL:-$GBN_BROKER_URL}"
BROKER_URL_FOR_PREREQ="https://ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com"

if [ -n "$GBN_SUPPORT_EMAIL" ]; then
  SUPPORT_EMAIL=$GBN_SUPPORT_EMAIL
fi

if [ -n "$EHDS2_SUPPORT_EMAIL" ]; then
  SUPPORT_EMAIL=$EHDS2_SUPPORT_EMAIL
fi

function do_enroll {
  COUNT=0
  if [ "$ENABLE_ERIC" == "true" ]; then
@@ -30,6 +37,10 @@ function do_enroll {
    do_enroll_inner $GBN_PROXY_ID $GBN_SUPPORT_EMAIL
    COUNT=$((COUNT+1))
  fi
  if [ "$ENABLE_EHDS2" == "true" ]; then
    do_enroll_inner $EHDS2_PROXY_ID $EHDS2_SUPPORT_EMAIL
    COUNT=$((COUNT+1))
  fi
  if [ $COUNT -ge 2 ]; then
    echo
    echo "You just received $COUNT certificate signing requests (CSR). Please send $COUNT e-mails, with 1 CSR each, to the respective e-mail address."
45 bridgehead
@@ -32,18 +32,6 @@ case "$PROJECT" in
  bbmri)
    #nothing extra to do
    ;;
  cce)
    #nothing extra to do
    ;;
  itcc)
    #nothing extra to do
    ;;
  kr)
    #nothing extra to do
    ;;
  dhki)
    #nothing extra to do
    ;;
  minimal)
    #nothing extra to do
    ;;
@@ -62,8 +50,6 @@ loadVars() {
    source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
  fi
  fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
  setHostname
  optimizeBlazeMemoryUsage
  [ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
  set +a

@@ -78,25 +64,22 @@ loadVars() {
    OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml"
  fi
  detectCompose
  setHostname
  setupProxy

  # Set some project-independent default values
  : ${ENVIRONMENT:=production}
  export ENVIRONMENT

  case "$ENVIRONMENT" in
    "production")
      export FOCUS_TAG=main
      export BEAM_TAG=main
      ;;
    "test")
      export FOCUS_TAG=develop
      export BEAM_TAG=develop
      ;;
    *)
      report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
      export FOCUS_TAG=main
      export BEAM_TAG=main
      ;;
  esac
}
@@ -105,15 +88,25 @@ case "$ACTION" in
  start)
    loadVars
    hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
    chown -R bridgehead ${BASE}
    checkRequirements
    sync_secrets
    # Note: changes to "bridgehead" script will only take effect after next start.
    su bridgehead -c "git pull"
    chown -R bridgehead ${BASE}
    # Local versions of focus and transfair are needed by EHDS2
    clone_focus_if_nonexistent ${BASE}/..
    build_focus ${BASE}/..
    clone_transfair_if_nonexistent ${BASE}/..
    build_transfair ${BASE}/..
    # Location for input data and results for EHDS2
    mkdir -p ${BASE}/../ecdc/test
    mkdir -p ${BASE}/../ecdc/data
    chown -R bridgehead ${BASE}/../ecdc
    hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
    exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
    ;;
  stop)
    loadVars
    # Kill stale secret-sync instances if present
    docker kill $(docker ps -q --filter ancestor=docker.verbis.dkfz.de/cache/samply/secret-sync-local) 2>/dev/null || true
    # HACK: This is temporarily to properly shut down false bridgehead instances (bridgehead-ccp instead ccp)
    $COMPOSE -p bridgehead-$PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
    exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
@@ -122,16 +115,6 @@ case "$ACTION" in
    bk_is_running
    exit $?
    ;;
  logs)
    loadVars
    shift 2
    exec journalctl -u bridgehead@$PROJECT -u bridgehead-update@$PROJECT -a $@
    ;;
  docker-logs)
    loadVars
    shift 2
    exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE logs -f $@
    ;;
  update)
    loadVars
    exec ./lib/update-bridgehead.sh $PROJECT
@@ -1,63 +0,0 @@
version: "3.7"

services:
  blaze:
    image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
    container_name: bridgehead-cce-blaze
    environment:
      BASE_URL: "http://bridgehead-cce-blaze:8080"
      JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
      DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
      DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
      ENFORCE_REFERENTIAL_INTEGRITY: "false"
    volumes:
      - "blaze-data:/app/data"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.blaze_cce.rule=PathPrefix(`/cce-localdatamanagement`)"
      - "traefik.http.middlewares.cce_b_strip.stripprefix.prefixes=/cce-localdatamanagement"
      - "traefik.http.services.blaze_cce.loadbalancer.server.port=8080"
      - "traefik.http.routers.blaze_cce.middlewares=cce_b_strip,auth"
      - "traefik.http.routers.blaze_cce.tls=true"

  focus:
    image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
    container_name: bridgehead-focus
    environment:
      API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
      BEAM_APP_ID_LONG: focus.${PROXY_ID}
      PROXY_ID: ${PROXY_ID}
      BLAZE_URL: "http://bridgehead-cce-blaze:8080/fhir/"
      BEAM_PROXY_URL: http://beam-proxy:8081
      RETRY_COUNT: ${FOCUS_RETRY_COUNT}
      EPSILON: 0.28
    depends_on:
      - "beam-proxy"
      - "blaze"

  beam-proxy:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
    container_name: bridgehead-beam-proxy
    environment:
      BROKER_URL: ${BROKER_URL}
      PROXY_ID: ${PROXY_ID}
      APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
      PRIVKEY_FILE: /run/secrets/proxy.pem
      ALL_PROXY: http://forward_proxy:3128
      TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
      ROOTCERT_FILE: /conf/root.crt.pem
    secrets:
      - proxy.pem
    depends_on:
      - "forward_proxy"
    volumes:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/cce/root.crt.pem:/conf/root.crt.pem:ro


volumes:
  blaze-data:

secrets:
  proxy.pem:
    file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
@@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
Rtup0MTxSJtN
-----END CERTIFICATE-----
14 cce/vars
@@ -1,14 +0,0 @@
BROKER_ID=test-no-real-data.broker.samply.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=manoj.waikar@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL

for module in common/*.sh
do
  log DEBUG "sourcing $module"
  source $module
done
@@ -2,13 +2,11 @@ version: "3.7"

services:
  blaze:
    image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
    image: docker.verbis.dkfz.de/cache/samply/blaze:latest
    container_name: bridgehead-ccp-blaze
    environment:
      BASE_URL: "http://bridgehead-ccp-blaze:8080"
      JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
      DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
      DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
      JAVA_TOOL_OPTIONS: "-Xmx4g"
      ENFORCE_REFERENTIAL_INTEGRITY: "false"
    volumes:
      - "blaze-data:/app/data"
@@ -21,7 +19,7 @@ services:
      - "traefik.http.routers.blaze_ccp.tls=true"

  focus:
    image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
    image: docker.verbis.dkfz.de/cache/samply/focus:main
    container_name: bridgehead-focus
    environment:
      API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
@@ -31,15 +29,12 @@ services:
      BEAM_PROXY_URL: http://beam-proxy:8081
      RETRY_COUNT: ${FOCUS_RETRY_COUNT}
      EPSILON: 0.28
      QUERIES_TO_CACHE: '/queries_to_cache.conf'
    volumes:
      - /srv/docker/bridgehead/ccp/queries_to_cache.conf:/queries_to_cache.conf
    depends_on:
      - "beam-proxy"
      - "blaze"

  beam-proxy:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-beam-proxy
    environment:
      BROKER_URL: ${BROKER_URL}
18 ccp/modules/adt2fhir-rest-compose.yml (Normal file)
@@ -0,0 +1,18 @@
version: "3.7"

services:
  adt2fhir-rest:
    container_name: bridgehead-adt2fhir-rest
    image: docker.verbis.dkfz.de/ccp/adt2fhir-rest:main
    environment:
      IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
      MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
      SALT: ${LOCAL_SALT}
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.adt2fhir-rest.rule=PathPrefix(`/adt2fhir-rest`)"
      - "traefik.http.middlewares.adt2fhir-rest_strip.stripprefix.prefixes=/adt2fhir-rest"
      - "traefik.http.services.adt2fhir-rest.loadbalancer.server.port=8080"
      - "traefik.http.routers.adt2fhir-rest.tls=true"
      - "traefik.http.routers.adt2fhir-rest.middlewares=adt2fhir-rest_strip,auth"

13 ccp/modules/adt2fhir-rest-setup.sh (Normal file)
@@ -0,0 +1,13 @@
#!/bin/bash

function adt2fhirRestSetup() {
  if [ -n "$ENABLE_ADT2FHIR_REST" ]; then
    log INFO "ADT2FHIR-REST setup detected -- will start adt2fhir-rest API."
    if [ ! -n "$IDMANAGER_LOCAL_PATIENTLIST_APIKEY" ]; then
      log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
      exit 1;
    fi
    OVERRIDE+=" -f ./$PROJECT/modules/adt2fhir-rest-compose.yml"
    LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  fi
}
@ -1,157 +0,0 @@
|
||||
<template id="opal-ccp" source-id="blaze-store" opal-project="ccp-demo" target-id="opal" >
|
||||
|
||||
<container csv-filename="Patient-${TIMESTAMP}.csv" opal-table="patient" opal-entity-type="Patient">
|
||||
<attribute csv-column="patient-id" opal-value-type="text" primary-key="true" val-fhir-path="Patient.id.value" anonym="Pat" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="dktk-id-global" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Global').value.value"/>
|
||||
<attribute csv-column="dktk-id-lokal" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Lokal').value.value" />
|
||||
<attribute csv-column="geburtsdatum" opal-value-type="date" val-fhir-path="Patient.birthDate.value"/>
|
||||
<attribute csv-column="geschlecht" opal-value-type="text" val-fhir-path="Patient.gender.value" />
|
||||
<attribute csv-column="datum_des_letztbekannten_vitalstatus" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '75186-7').effective.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
|
||||
<attribute csv-column="vitalstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '75186-7').value.coding.code.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
|
||||
<!--fehlt in ADT2FHIR--><attribute csv-column="tod_tumorbedingt" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://fhir.de/CodeSystem/bfarm/icd-10-gm').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
|
||||
<!--fehlt in ADT2FHIR--><attribute csv-column="todesursachen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/JNUCS').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="Diagnosis-${TIMESTAMP}.csv" opal-table="diagnosis" opal-entity-type="Diagnosis">
|
||||
<attribute csv-column="diagnosis-id" primary-key="true" opal-value-type="text" val-fhir-path="Condition.id.value" anonym="Dia" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Condition.subject.reference.value" anonym="Pat"/>
|
||||
<attribute csv-column="primaerdiagnose" opal-value-type="text" val-fhir-path="Condition.code.coding.code.value"/>
|
||||
<attribute csv-column="tumor_diagnosedatum" opal-value-type="date" val-fhir-path="Condition.onset.value"/>
|
||||
<attribute csv-column="primaertumor_diagnosetext" opal-value-type="text" val-fhir-path="Condition.code.text.value"/>
|
||||
<attribute csv-column="version_des_icd-10_katalogs" opal-value-type="integer" val-fhir-path="Condition.code.coding.version.value"/>
|
||||
<attribute csv-column="lokalisation" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').code.value"/>
|
||||
<attribute csv-column="icd-o_katalog_topographie_version" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').version.value"/>
|
||||
<attribute csv-column="seitenlokalisation_nach_adt-gekid" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/SeitenlokalisationCS').code.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="Progress-${TIMESTAMP}.csv" opal-table="progress" opal-entity-type="Progress">
|
||||
<!--it would be better to generate a an ID, instead of extracting the ClinicalImpression id-->
|
||||
<attribute csv-column="progress-id" primary-key="true" opal-value-type="text" val-fhir-path="ClinicalImpression.id.value" anonym="Pro" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="ClinicalImpression.problem.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="ClinicalImpression.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="untersuchungs-_befunddatum_im_verlauf" opal-value-type="date" val-fhir-path="ClinicalImpression.effective.value" />
|
||||
<!-- just for evaluation: redundant to Untersuchungs-, Befunddatum im Verlauf-->
|
||||
<attribute csv-column="datum_lokales_oder_regionaeres_rezidiv" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').effective.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
<attribute csv-column="gesamtbeurteilung_tumorstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21976-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
|
||||
<attribute csv-column="lokales_oder_regionaeres_rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
|
||||
<attribute csv-column="lymphknoten-rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4370-8').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
<attribute csv-column="fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4226-2').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
</container>
|
||||
|
||||
<container csv-filename="Histology-${TIMESTAMP}.csv" opal-table="histology" opal-entity-type="Histology" >
|
||||
<attribute csv-column="histology-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="histologie_datum" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '59847-4').effective.value"/>
|
||||
<attribute csv-column="icd-o_katalog_morphologie_version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.version.value" />
|
||||
<attribute csv-column="morphologie" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
|
||||
<attribute csv-column="morphologie-freitext" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.text.value"/>
|
||||
<attribute csv-column="grading" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Metastasis-${TIMESTAMP}.csv" opal-table="metastasis" opal-entity-type="Metastasis" >
|
||||
<attribute csv-column="metastasis-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').id" anonym="Met" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_fernmetastasen" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21907-1').effective.value"/>
|
||||
<attribute csv-column="fernmetastasen_vorhanden" opal-value-type="boolean" val-fhir-path="Observation.where(code.coding.code = '21907-1').value.coding.code.value"/>
|
||||
<attribute csv-column="lokalisation_fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').bodySite.coding.code.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="TNM-${TIMESTAMP}.csv" opal-table="tnm" opal-entity-type="TNM">
|
||||
<attribute csv-column="tnm-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').id" anonym="TNM" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_der_tnm_dokumentation_datum_befund" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').effective.value"/>
|
||||
<attribute csv-column="uicc_stadium" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-y-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '59479-6' or code.coding.code = '59479-6').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-r-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21983-2' or code.coding.code = '21983-2').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-m-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '42030-7' or code.coding.code = '42030-7').value.coding.code.value"/>
|
||||
<!--nur bei UICC, nicht in ADT2FHIR--><attribute csv-column="tnm-version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.version.value"/>
|
||||
</container>
<container csv-filename="System-Therapy-${TIMESTAMP}.csv" opal-table="system-therapy" opal-entity-type="SystemTherapy">
|
||||
<attribute csv-column="system-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="MedicationStatement.id" anonym="Sys" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="MedicationStatement.reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="MedicationStatement.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="systemische_therapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
<attribute csv-column="intention_chemotherapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value"/>
|
||||
<attribute csv-column="therapieart" opal-value-type="text" val-fhir-path="MedicationStatement.category.coding.code.value"/>
|
||||
<attribute csv-column="systemische_therapie_beginn" opal-value-type="date" val-fhir-path="MedicationStatement.effective.start.value"/>
|
||||
<attribute csv-column="systemische_therapie_ende" opal-value-type="date" val-fhir-path="MedicationStatement.effective.end.value"/>
|
||||
<attribute csv-column="systemische_therapie_protokoll" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SystemischeTherapieProtokoll').value.text.value"/>
|
||||
<attribute csv-column="systemische_therapie_substanzen" opal-value-type="text" val-fhir-path="MedicationStatement.medication.text.value"/>
|
||||
<attribute csv-column="chemotherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'CH').exists().value" />
|
||||
<attribute csv-column="hormontherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'HO').exists().value" />
|
||||
<attribute csv-column="immuntherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'IM').exists().value" />
|
||||
<attribute csv-column="knochenmarktransplantation" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'KM').exists().value" />
|
||||
<attribute csv-column="abwartende_strategie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'WS').exists().value" />
|
||||
</container>
<container csv-filename="Surgery-${TIMESTAMP}.csv" opal-table="surgery" opal-entity-type="Surgery">
|
||||
<attribute csv-column="surgery-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').id" anonym="Sur" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="ops-code" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').code.coding.code.value"/>
|
||||
<attribute csv-column="datum_der_op" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'OP').performed.value"/>
|
||||
<attribute csv-column="intention_op" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-OPIntention').value.coding.code.value"/>
|
||||
<attribute csv-column="lokale_beurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/LokaleBeurteilungResidualstatusCS').code.value" />
|
||||
<attribute csv-column="gesamtbeurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/GesamtbeurteilungResidualstatusCS').code.value" />
|
||||
</container>
<container csv-filename="Radiation-Therapy-${TIMESTAMP}.csv" opal-table="radiation-therapy" opal-entity-type="RadiationTherapy">
|
||||
<attribute csv-column="radiation-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="strahlentherapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
<attribute csv-column="intention_strahlentherapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value" />
|
||||
<attribute csv-column="strahlentherapie_beginn" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.start.value"/>
|
||||
<attribute csv-column="strahlentherapie_ende" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.end.value"/>
|
||||
</container>
<container csv-filename="Molecular-Marker-${TIMESTAMP}.csv" opal-table="molecular-marker" opal-entity-type="MolecularMarker">
|
||||
<attribute csv-column="mol-marker-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').id" anonym="Mol" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').focus.reference.value" anonym="Dia" />
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_der_datenerhebung" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '69548-6').effective.value"/>
|
||||
<attribute csv-column="marker" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').component.value.coding.code.value"/>
|
||||
<attribute csv-column="status_des_molekularen_markers" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.coding.code.value" />
|
||||
<attribute csv-column="zusaetzliche_alternative_dokumentation" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.text.value"/>
|
||||
</container>
<container csv-filename="Sample-${TIMESTAMP}.csv" opal-table="sample" opal-entity-type="Sample">
|
||||
<attribute csv-column="sample-id" primary-key="true" opal-value-type="text" val-fhir-path="Specimen.id" anonym="Sam" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Specimen.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="entnahmedatum" opal-value-type="date" val-fhir-path="Specimen.collection.collectedDateTime.value"/>
|
||||
<attribute csv-column="probenart" opal-value-type="text" val-fhir-path="Specimen.type.coding.code.value"/>
|
||||
<attribute csv-column="status" opal-value-type="text" val-fhir-path="Specimen.status.code.value"/>
|
||||
<attribute csv-column="projekt" opal-value-type="text" val-fhir-path="Specimen.identifier.system.value"/>
|
||||
<!-- @TODO: clarify whether it would be better to take the quantity from collection.quantity instead -->
|
||||
<attribute csv-column="menge" opal-value-type="integer" val-fhir-path="Specimen.container.specimenQuantity.value.value"/>
|
||||
<attribute csv-column="einheit" opal-value-type="text" val-fhir-path="Specimen.container.specimenQuantity.unit.value"/>
|
||||
<attribute csv-column="aliquot" opal-value-type="text" val-fhir-path="Specimen.parent.reference.exists().value" />
|
||||
</container>
<fhir-rev-include>Observation:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Condition:patient</fhir-rev-include>
|
||||
<fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
|
||||
<fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Procedure:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Specimen:patient</fhir-rev-include>
</template>
|
@ -1,15 +0,0 @@
|
||||
[
|
||||
"berlin",
|
||||
"muenchen-lmu",
|
||||
"dresden",
|
||||
"freiburg",
|
||||
"muenchen-tum",
|
||||
"tuebingen",
|
||||
"mainz",
|
||||
"frankfurt",
|
||||
"essen",
|
||||
"dktk-datashield-test",
|
||||
"dktk-test",
|
||||
"mannheim",
|
||||
"central-ds-orchestrator"
|
||||
]
|
@ -16,14 +16,12 @@ services:
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
HTTPS_PROXY: "http://forward_proxy:3128"
|
||||
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
|
||||
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal
|
||||
RUST_LOG: ${RUST_LOG:-info}
|
||||
NO_AUTH: "true"
|
||||
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
labels:
|
||||
@ -33,7 +31,3 @@ services:
|
||||
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
|
||||
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
|
||||
- "traefik.http.routers.dnpm-connect.tls=true"
|
||||
|
||||
dnpm-echo:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-echo:latest
|
||||
container_name: bridgehead-dnpm-echo
|
33
ccp/modules/dnpm-node-compose.yml
Normal file
@ -0,0 +1,33 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
27
ccp/modules/dnpm-node-setup.sh
Normal file
@ -0,0 +1,27 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
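For reference, the override appended by this block produces a file along the lines of the sketch below (shown only to make the echo commands above easier to read; the path is the one used in the script).

```bash
#!/bin/bash
# Sketch: expected content of /srv/docker/bridgehead/minimal/docker-compose.override.yml
# after the block above has run on a fresh installation.
cat <<'EOF'
version: "3.7"
services:
  landing:
    labels:
      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"
EOF
```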
|
9
ccp/modules/dnpm-setup.sh
Normal file
@ -0,0 +1,9 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM}" ]; then
|
||||
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
|
||||
|
||||
# Set variables required for Beam-Connect
|
||||
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
fi
|
@ -1,6 +0,0 @@
|
||||
# Full Excel Export
|
||||
curl --location --request POST 'https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL' \
|
||||
--header 'x-api-key: ${EXPORT_API_KEY}'
|
||||
|
||||
# QB
|
||||
curl --location --request POST 'https://${HOST}/ccp-reporter/generate?template-id=ccp'
|
@ -1,5 +1,4 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
id-manager:
|
||||
image: docker.verbis.dkfz.de/bridgehead/magicpl
|
||||
@ -14,22 +13,21 @@ services:
|
||||
MAGICPL_CONNECTOR_APIKEY: ${IDMANAGER_READ_APIKEY}
|
||||
MAGICPL_CENTRAL_PATIENTLIST_APIKEY: ${IDMANAGER_CENTRAL_PATIENTLIST_APIKEY}
|
||||
MAGICPL_CONTROLNUMBERGENERATOR_APIKEY: ${IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY}
|
||||
MAGICPL_OIDC_CLIENT_ID: ${IDMANAGER_AUTH_CLIENT_ID}
|
||||
MAGICPL_OIDC_CLIENT_SECRET: ${IDMANAGER_AUTH_CLIENT_SECRET}
|
||||
depends_on:
|
||||
- patientlist
|
||||
- traefik-forward-auth
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.id-manager.rule=PathPrefix(`/id-manager`)"
|
||||
- "traefik.http.services.id-manager.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.id-manager.tls=true"
|
||||
- "traefik.http.routers.id-manager.middlewares=traefik-forward-auth-idm"
|
||||
|
||||
patientlist:
|
||||
image: docker.verbis.dkfz.de/bridgehead/mainzelliste
|
||||
container_name: bridgehead-patientlist
|
||||
environment:
|
||||
- TOMCAT_REVERSEPROXY_FQDN=${HOST}
|
||||
- TOMCAT_REVERSEPROXY_SSL=true
|
||||
- ML_SITE=${IDMANAGEMENT_FRIENDLY_ID}
|
||||
- ML_DB_PASS=${PATIENTLIST_POSTGRES_PASSWORD}
|
||||
- ML_API_KEY=${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
|
||||
@ -45,7 +43,7 @@ services:
|
||||
- patientlist-db
|
||||
|
||||
patientlist-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
|
||||
container_name: bridgehead-patientlist-db
|
||||
environment:
|
||||
POSTGRES_USER: "mainzelliste"
|
||||
@ -56,41 +54,5 @@ services:
|
||||
# NOTE: Add backups here. This is only imported if /var/lib/bridgehead/data/patientlist/ is empty!!!
|
||||
- "/tmp/bridgehead/patientlist/:/docker-entrypoint-initdb.d/"
|
||||
|
||||
traefik-forward-auth:
|
||||
image: docker.verbis.dkfz.de/cache/oauth2-proxy/oauth2-proxy:v7.6.0
|
||||
environment:
|
||||
- http_proxy=http://forward_proxy:3128
|
||||
- https_proxy=http://forward_proxy:3128
|
||||
- OAUTH2_PROXY_PROVIDER=oidc
|
||||
- OAUTH2_PROXY_SKIP_PROVIDER_BUTTON=true
|
||||
- OAUTH2_PROXY_OIDC_ISSUER_URL=https://login.verbis.dkfz.de/realms/master
|
||||
- OAUTH2_PROXY_CLIENT_ID=bridgehead-${SITE_ID}
|
||||
- OAUTH2_PROXY_CLIENT_SECRET=${IDMANAGER_AUTH_CLIENT_SECRET}
|
||||
- OAUTH2_PROXY_COOKIE_SECRET=${IDMANAGER_AUTH_COOKIE_SECRET}
|
||||
- OAUTH2_PROXY_COOKIE_DOMAINS=.${HOST}
|
||||
- OAUTH2_PROXY_HTTP_ADDRESS=:4180
|
||||
- OAUTH2_PROXY_REVERSE_PROXY=true
|
||||
- OAUTH2_PROXY_WHITELIST_DOMAINS=.${HOST}
|
||||
- OAUTH2_PROXY_UPSTREAMS=static://202
|
||||
- OAUTH2_PROXY_EMAIL_DOMAINS=*
|
||||
- OAUTH2_PROXY_SCOPE=openid profile email
|
||||
# Pass Authorization Header and some user information to backend services
|
||||
- OAUTH2_PROXY_SET_AUTHORIZATION_HEADER=true
|
||||
- OAUTH2_PROXY_SET_XAUTHREQUEST=true
|
||||
# Keycloak has an expiration time of 60s therefore oauth2-proxy needs to refresh after that
|
||||
- OAUTH2_PROXY_COOKIE_REFRESH=60s
|
||||
- OAUTH2_PROXY_ALLOWED_GROUPS=DKTK-CCP-PPSN
|
||||
- OAUTH2_PROXY_PROXY_PREFIX=/oauth2-idm
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.services.traefik-forward-auth.loadbalancer.server.port=4180"
|
||||
- "traefik.http.routers.traefik-forward-auth.rule=Host(`${HOST}`) && PathPrefix(`/oauth2-idm`)"
|
||||
- "traefik.http.routers.traefik-forward-auth.tls=true"
|
||||
- "traefik.http.middlewares.traefik-forward-auth-idm.forwardauth.address=http://traefik-forward-auth:4180"
|
||||
- "traefik.http.middlewares.traefik-forward-auth-idm.forwardauth.authResponseHeaders=Authorization"
|
||||
depends_on:
|
||||
forward_proxy:
|
||||
condition: service_healthy
|
||||
|
||||
volumes:
|
||||
patientlist-db-data:
|
@ -1,9 +1,9 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
function idManagementSetup() {
|
||||
if [ -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
log INFO "id-management setup detected -- will start id-management (mainzelliste & magicpl)."
|
||||
OVERRIDE+=" -f ./common/id-management-compose.yml"
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/id-management-compose.yml"
|
||||
|
||||
# Auto Generate local Passwords
|
||||
PATIENTLIST_POSTGRES_PASSWORD="$(echo \"id-management-module-db-password-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
mtba:
|
||||
image: docker.verbis.dkfz.de/cache/samply/mtba:develop
|
||||
image: docker.verbis.dkfz.de/cache/samply/mtba:1.0.0
|
||||
container_name: bridgehead-mtba
|
||||
environment:
|
||||
BLAZE_STORE_URL: http://blaze:8080
|
||||
@ -11,30 +11,22 @@ services:
|
||||
ID_MANAGER_API_KEY: ${IDMANAGER_UPLOAD_APIKEY}
|
||||
ID_MANAGER_PSEUDONYM_ID_TYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
|
||||
ID_MANAGER_URL: http://id-manager:8080/id-manager
|
||||
PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER:-FIRST_NAME}
|
||||
PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER:-LAST_NAME}
|
||||
PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER:-GENDER}
|
||||
PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER:-BIRTHDAY}
|
||||
PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER}
|
||||
PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER}
|
||||
PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER}
|
||||
PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER}
|
||||
CBIOPORTAL_URL: http://cbioportal:8080
|
||||
FILE_CHARSET: ${MTBA_FILE_CHARSET:-UTF-8}
|
||||
FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE:-LF}
|
||||
CSV_DELIMITER: ${MTBA_CSV_DELIMITER:-TAB}
|
||||
HTTP_RELATIVE_PATH: "/mtba"
|
||||
OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
|
||||
OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
|
||||
FILE_CHARSET: ${MTBA_FILE_CHARSET}
|
||||
FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE}
|
||||
CSV_DELIMITER: ${MTBA_CSV_DELIMITER}
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.mtba_ccp.rule=PathPrefix(`/mtba`)"
|
||||
- "traefik.http.services.mtba_ccp.loadbalancer.server.port=8480"
|
||||
- "traefik.http.routers.mtba_ccp.tls=true"
|
||||
|
||||
- "traefik.http.routers.mtba.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.mtba.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.mtba.tls=true"
|
||||
volumes:
|
||||
- /var/cache/bridgehead/ccp/mtba/input:/app/input
|
||||
- /var/cache/bridgehead/ccp/mtba/persist:/app/persist
|
||||
- /tmp/bridgehead/mtba/input:/app/input
|
||||
- /tmp/bridgehead/mtba/persist:/app/persist
|
||||
|
||||
# TODO: Include CBioPortal in Deployment ...
|
||||
# NOTE: CBioPortal can't load data while the system is running, so the bridgehead needs to be restarted after a data import!
|
@ -1,12 +1,12 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
function mtbaSetup() {
|
||||
if [ -n "$ENABLE_MTBA" ];then
|
||||
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
|
||||
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
|
||||
exit 1;
|
||||
fi
|
||||
OVERRIDE+=" -f ./common/mtba-compose.yml"
|
||||
add_private_oidc_redirect_url "/mtba/*"
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
|
||||
fi
|
||||
}
|
@ -1,5 +1,4 @@
|
||||
version: "3.7"
|
||||
|
||||
volumes:
|
||||
nngm-rest:
|
||||
|
||||
@ -22,6 +21,9 @@ services:
|
||||
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
|
||||
volumes:
|
||||
- nngm-rest:/var/log
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "$NNGM_CTS_APIKEY" ]; then
|
||||
log INFO "nNGM setup detected -- will start nNGM Connector."
|
||||
OVERRIDE+=" -f ./common/nngm-compose.yml"
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
|
||||
fi
|
@ -1,2 +0,0 @@
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCgpjb250ZXh0IFBhdGllbnQKCgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX1BSSU1BUllfRElBR05PU0lTX05PX1NPUlRfU1RSQVRJRklFUgpES1RLX1NUUkFUX0FHRV9DTEFTU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCiAgREtUS19TVFJBVF9ISVNUT0xPR1lfU1RSQVRJRklFUgpES1RLX1NUUkFUX0RFRl9JTl9JTklUSUFMX1BPUFVMQVRJT04KdHJ1ZQ==
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCmNvZGVzeXN0ZW0gaWNkMTA6ICdodHRwOi8vZmhpci5kZS9Db2RlU3lzdGVtL2JmYXJtL2ljZC0xMC1nbScKY29kZXN5c3RlbSBtb3JwaDogJ3VybjpvaWQ6Mi4xNi44NDAuMS4xMTM4ODMuNi40My4xJwoKY29udGV4dCBQYXRpZW50CgoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUklNQVJZX0RJQUdOT1NJU19OT19TT1JUX1NUUkFUSUZJRVIKREtUS19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCgogIERLVEtfU1RSQVRfSElTVE9MT0dZX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDNjEnIGZyb20gaWNkMTBdKSBhbmQgCigoZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDAvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDcvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0ODAvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg1MDAvMycpKQ==
|
18
ccp/vars
@ -2,25 +2,14 @@ BROKER_ID=broker.ccp-it.dktk.dkfz.de
|
||||
BROKER_URL=https://${BROKER_ID}
|
||||
PROXY_ID=${SITE_ID}.${BROKER_ID}
|
||||
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
|
||||
FOCUS_RETRY_COUNT=32
|
||||
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
OIDC_USER_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})"
|
||||
OIDC_ADMIN_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})_Verwalter"
|
||||
OIDC_PRIVATE_CLIENT_ID=${SITE_ID}-private
|
||||
OIDC_PUBLIC_CLIENT_ID=${SITE_ID}-public
|
||||
# Use "test-realm-01" for testing
|
||||
OIDC_REALM="${OIDC_REALM:-master}"
|
||||
OIDC_URL="https://login.verbis.dkfz.de"
|
||||
OIDC_ISSUER_URL="${OIDC_URL}/realms/${OIDC_REALM}"
|
||||
OIDC_GROUP_CLAIM="groups"
|
||||
|
||||
POSTGRES_TAG=15.6-alpine
|
||||
|
||||
for module in common/*.sh
|
||||
for module in $PROJECT/modules/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
@ -28,5 +17,4 @@ done
|
||||
|
||||
idManagementSetup
|
||||
mtbaSetup
|
||||
obds2fhirRestSetup
|
||||
blazeSecondarySetup
|
||||
adt2fhirRestSetup
|
@ -1,32 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
blaze-secondary:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
container_name: bridgehead-ccp-blaze-secondary
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-ccp-blaze-secondary:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-secondary-data:/app/data"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.blaze-secondary_ccp.rule=PathPrefix(`/ccp-localdatamanagement-secondary`)"
|
||||
- "traefik.http.middlewares.ccp_b-secondary_strip.stripprefix.prefixes=/ccp-localdatamanagement-secondary"
|
||||
- "traefik.http.services.blaze-secondary_ccp.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.blaze-secondary_ccp.middlewares=ccp_b-secondary_strip,auth"
|
||||
- "traefik.http.routers.blaze-secondary_ccp.tls=true"
|
||||
|
||||
obds2fhir-rest:
|
||||
environment:
|
||||
STORE_PATH: ${STORE_PATH:-http://blaze:8080/fhir}
|
||||
|
||||
exporter:
|
||||
environment:
|
||||
BLAZE_HOST: "blaze-secondary"
|
||||
|
||||
volumes:
|
||||
blaze-secondary-data:
|
@ -1,11 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
function blazeSecondarySetup() {
|
||||
if [ -n "$ENABLE_SECONDARY_BLAZE" ]; then
|
||||
log INFO "Secondary Blaze setup detected -- will start second blaze."
|
||||
OVERRIDE+=" -f ./common/blaze-secondary-compose.yml"
|
||||
#make oBDS2FHIR ignore ID-Management and replace target Blaze
|
||||
PATIENTLIST_URL=" "
|
||||
STORE_PATH="http://blaze-secondary:8080/fhir"
|
||||
fi
|
||||
}
|
@ -1,171 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
rstudio:
|
||||
container_name: bridgehead-rstudio
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rstudio:latest
|
||||
environment:
|
||||
#DEFAULT_USER: "rstudio" # This line is kept for informational purposes
|
||||
PASSWORD: "${RSTUDIO_ADMIN_PASSWORD}" # It is required, even if the authentication is disabled
|
||||
DISABLE_AUTH: "true" # https://rocker-project.org/images/versioned/rstudio.html#how-to-use
|
||||
HTTP_RELATIVE_PATH: "/rstudio"
|
||||
ALL_PROXY: "http://forward_proxy:3128" # https://rocker-project.org/use/networking.html
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.rstudio_ccp.rule=PathPrefix(`/rstudio`)"
|
||||
- "traefik.http.services.rstudio_ccp.loadbalancer.server.port=8787"
|
||||
- "traefik.http.middlewares.rstudio_ccp_strip.stripprefix.prefixes=/rstudio"
|
||||
- "traefik.http.routers.rstudio_ccp.tls=true"
|
||||
- "traefik.http.routers.rstudio_ccp.middlewares=oidcAuth,rstudio_ccp_strip"
|
||||
networks:
|
||||
- rstudio
|
||||
|
||||
opal:
|
||||
container_name: bridgehead-opal
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-opal:latest
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.opal_ccp.rule=PathPrefix(`/opal`)"
|
||||
- "traefik.http.services.opal_ccp.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.opal_ccp.tls=true"
|
||||
links:
|
||||
- opal-rserver
|
||||
- opal-db
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC -Dhttps.proxyHost=forward_proxy -Dhttps.proxyPort=3128"
|
||||
# OPAL_ADMINISTRATOR_USER: "administrator" # This line is kept for informational purposes
|
||||
OPAL_ADMINISTRATOR_PASSWORD: "${OPAL_ADMIN_PASSWORD}"
|
||||
POSTGRESDATA_HOST: "opal-db"
|
||||
POSTGRESDATA_DATABASE: "opal"
|
||||
POSTGRESDATA_USER: "opal"
|
||||
POSTGRESDATA_PASSWORD: "${OPAL_DB_PASSWORD}"
|
||||
ROCK_HOSTS: "opal-rserver:8085"
|
||||
APP_URL: "https://${HOST}/opal"
|
||||
APP_CONTEXT_PATH: "/opal"
|
||||
OPAL_PRIVATE_KEY: "/run/secrets/opal-key.pem"
|
||||
OPAL_CERTIFICATE: "/run/secrets/opal-cert.pem"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
|
||||
OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
|
||||
TOKEN_MANAGER_PASSWORD: "${TOKEN_MANAGER_OPAL_PASSWORD}"
|
||||
EXPORTER_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
BEAM_APP_ID: token-manager.${PROXY_ID}
|
||||
BEAM_SECRET: ${TOKEN_MANAGER_SECRET}
|
||||
BEAM_DATASHIELD_PROXY: request-manager
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/opal-metadata-db:/srv" # Opal metadata
|
||||
secrets:
|
||||
- opal-cert.pem
|
||||
- opal-key.pem
|
||||
|
||||
opal-db:
|
||||
container_name: bridgehead-opal-db
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
environment:
|
||||
POSTGRES_PASSWORD: "${OPAL_DB_PASSWORD}" # Set in datashield-setup.sh
|
||||
POSTGRES_USER: "opal"
|
||||
POSTGRES_DB: "opal"
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/opal-db:/var/lib/postgresql/data" # Opal project data (imported from exporter)
|
||||
|
||||
opal-rserver:
|
||||
container_name: bridgehead-opal-rserver
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rserver # datashield/rock-base + dsCCPhos
|
||||
tmpfs:
|
||||
- /srv
|
||||
|
||||
beam-connect:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
|
||||
container_name: bridgehead-datashield-connect
|
||||
environment:
|
||||
PROXY_URL: "http://beam-proxy:8081"
|
||||
TLS_CA_CERTIFICATES_DIR: /run/secrets
|
||||
APP_ID: datashield-connect.${SITE_ID}.${BROKER_ID}
|
||||
PROXY_APIKEY: ${DATASHIELD_CONNECT_SECRET}
|
||||
DISCOVERY_URL: "./map/central.json"
|
||||
LOCAL_TARGETS_FILE: "./map/local.json"
|
||||
NO_AUTH: "true"
|
||||
secrets:
|
||||
- opal-cert.pem
|
||||
depends_on:
|
||||
- beam-proxy
|
||||
volumes:
|
||||
- /tmp/bridgehead/opal-map/:/map/:ro
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.address=http://oauth2-proxy:4180/"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.trustForwardHeader=true"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.authResponseHeaders=X-Auth-Request-Access-Token,Authorization"
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
forward_proxy:
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
beam-proxy:
|
||||
environment:
|
||||
APP_datashield-connect_KEY: ${DATASHIELD_CONNECT_SECRET}
|
||||
APP_token-manager_KEY: ${TOKEN_MANAGER_SECRET}
|
||||
|
||||
# TODO: Allow users of group /DataSHIELD and OIDC_USER_GROUP at the same time:
|
||||
# Maybe a solution would be (https://oauth2-proxy.github.io/oauth2-proxy/configuration/oauth_provider):
|
||||
# --allowed-groups=/DataSHIELD,OIDC_USER_GROUP
|
||||
oauth2-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/oauth2-proxy/oauth2-proxy:latest
|
||||
container_name: bridgehead-oauth2proxy
|
||||
command: >-
|
||||
--allowed-group=DataSHIELD
|
||||
--oidc-groups-claim=${OIDC_GROUP_CLAIM}
|
||||
--auth-logging=true
|
||||
--whitelist-domain=${HOST}
|
||||
--http-address="0.0.0.0:4180"
|
||||
--reverse-proxy=true
|
||||
--upstream="static://202"
|
||||
--email-domain="*"
|
||||
--cookie-name="_BRIDGEHEAD_oauth2"
|
||||
--cookie-secret="${OAUTH2_PROXY_SECRET}"
|
||||
--cookie-expire="12h"
|
||||
--cookie-secure="true"
|
||||
--cookie-httponly="true"
|
||||
#OIDC settings
|
||||
--provider="keycloak-oidc"
|
||||
--provider-display-name="VerbIS Login"
|
||||
--client-id="${OIDC_PRIVATE_CLIENT_ID}"
|
||||
--client-secret="${OIDC_CLIENT_SECRET}"
|
||||
--redirect-url="https://${HOST}${OAUTH2_CALLBACK}"
|
||||
--oidc-issuer-url="${OIDC_ISSUER_URL}"
|
||||
--scope="openid email profile"
|
||||
--code-challenge-method="S256"
|
||||
--skip-provider-button=true
|
||||
#X-Forwarded-Header settings - true/false depending on your needs
|
||||
--pass-basic-auth=true
|
||||
--pass-user-headers=false
|
||||
--pass-access-token=false
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.oauth2_proxy.rule=Host(`${HOST}`) && PathPrefix(`/oauth2`)"
|
||||
- "traefik.http.services.oauth2_proxy.loadbalancer.server.port=4180"
|
||||
- "traefik.http.routers.oauth2_proxy.tls=true"
|
||||
environment:
|
||||
http_proxy: "http://forward_proxy:3128"
|
||||
https_proxy: "http://forward_proxy:3128"
|
||||
depends_on:
|
||||
forward_proxy:
|
||||
condition: service_healthy
|
||||
|
||||
secrets:
|
||||
opal-cert.pem:
|
||||
file: /tmp/bridgehead/opal-cert.pem
|
||||
opal-key.pem:
|
||||
file: /tmp/bridgehead/opal-key.pem
|
||||
|
||||
networks:
|
||||
rstudio:
|
@ -1,44 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_DATASHIELD" == true ]; then
|
||||
# HACK: This only works because exporter-setup.sh and teiler-setup.sh are sourced after datashield-setup.sh
|
||||
if [ -z "${ENABLE_EXPORTER}" ] || [ "${ENABLE_EXPORTER}" != "true" ]; then
|
||||
log WARN "The ENABLE_EXPORTER variable is either not set or not set to 'true'."
|
||||
fi
|
||||
OAUTH2_CALLBACK=/oauth2/callback
|
||||
OAUTH2_PROXY_SECRET="$(echo \"This is a salt string to generate one consistent encryption key for the oauth2_proxy. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 32)"
|
||||
add_private_oidc_redirect_url "${OAUTH2_CALLBACK}"
|
||||
|
||||
log INFO "DataSHIELD setup detected -- will start DataSHIELD services."
|
||||
OVERRIDE+=" -f ./common/datashield-compose.yml"
|
||||
EXPORTER_OPAL_PASSWORD="$(generate_password \"exporter in Opal\")"
|
||||
TOKEN_MANAGER_OPAL_PASSWORD="$(generate_password \"Token Manager in Opal\")"
|
||||
OPAL_DB_PASSWORD="$(echo \"Opal DB\" | generate_simple_password)"
|
||||
OPAL_ADMIN_PASSWORD="$(generate_password \"admin password for Opal\")"
|
||||
RSTUDIO_ADMIN_PASSWORD="$(generate_password \"admin password for R-Studio\")"
|
||||
DATASHIELD_CONNECT_SECRET="$(echo \"DataShield Connect\" | generate_simple_password)"
|
||||
TOKEN_MANAGER_SECRET="$(echo \"Token Manager\" | generate_simple_password)"
|
||||
if [ ! -e /tmp/bridgehead/opal-cert.pem ]; then
|
||||
mkdir -p /tmp/bridgehead/
|
||||
openssl req -x509 -newkey rsa:4096 -nodes -keyout /tmp/bridgehead/opal-key.pem -out /tmp/bridgehead/opal-cert.pem -days 3650 -subj "/CN=opal/C=DE"
|
||||
fi
|
||||
mkdir -p /tmp/bridgehead/opal-map
|
||||
sites="$(cat ./common/datashield-sites.json)"
|
||||
echo "$sites" | docker_jq -n --args '{"sites": input | map({
|
||||
"name": .,
|
||||
"id": .,
|
||||
"virtualhost": "\(.):443",
|
||||
"beamconnect": "datashield-connect.\(.).'"$BROKER_ID"'"
|
||||
})}' $sites >/tmp/bridgehead/opal-map/central.json
|
||||
echo "$sites" | docker_jq -n --args '[{
|
||||
"external": "'"$SITE_ID"':443",
|
||||
"internal": "opal:8443",
|
||||
"allowed": input | map("\(.).'"$BROKER_ID"'")
|
||||
}]' >/tmp/bridgehead/opal-map/local.json
|
||||
if [ "$USER" == "root" ]; then
|
||||
chown -R bridgehead:docker /tmp/bridgehead
|
||||
chmod g+wr /tmp/bridgehead/opal-map/*
|
||||
chmod g+r /tmp/bridgehead/opal-key.pem
|
||||
fi
|
||||
add_private_oidc_redirect_url "/opal/*"
|
||||
fi
|
@ -1,15 +0,0 @@
|
||||
[
|
||||
"berlin",
|
||||
"muenchen-lmu",
|
||||
"dresden",
|
||||
"freiburg",
|
||||
"muenchen-tum",
|
||||
"tuebingen",
|
||||
"mainz",
|
||||
"frankfurt",
|
||||
"essen",
|
||||
"dktk-datashield-test",
|
||||
"dktk-test",
|
||||
"mannheim",
|
||||
"central-ds-orchestrator"
|
||||
]
|
@ -1,28 +0,0 @@
|
||||
# DataSHIELD
This module provides the infrastructure to run DataSHIELD within the Bridgehead.
For more information about DataSHIELD, please visit https://www.datashield.org/

## R-Studio
To connect to the different bridgeheads of the CCP through DataSHIELD, you can use your own R-Studio environment.
However, the R-Studio bundled with this module already has the DataSHIELD libraries installed and is integrated within the Bridgehead,
which saves you the extra configuration of your own R-Studio environment.

## Opal
Opal is the core of DataSHIELD. The stack is made up of Opal itself, a Postgres database and an R-server.
For more information about Opal, please visit https://opaldoc.obiba.org

### Opal
Opal is OBiBa’s core database application for biobanks.

### Opal-DB
Opal requires a database to import the data for DataSHIELD. We use a Postgres instance as that database.
The data is imported within the Bridgehead through the exporter.

### Opal-R-Server
An R-server that executes the R scripts submitted through DataSHIELD.

## Beam
### Beam-Connect
Beam-Connect routes HTTP(S) traffic through Beam so that R-Studio can access data from other bridgeheads that have DataSHIELD enabled.
### Beam-Proxy
The usual Beam proxy used for communication.
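As a quick check that the Opal stack is reachable through the Bridgehead's reverse proxy, a minimal smoke test could look like the sketch below. HOST and OPAL_ADMIN_PASSWORD are assumed to come from your site configuration, and `/ws/projects` is assumed to be Opal's standard REST path; both are used here purely for illustration.

```bash
#!/bin/bash
# Minimal smoke test (sketch): list Opal projects through the Bridgehead proxy.
# HOST and OPAL_ADMIN_PASSWORD are assumed to be exported from the site configuration.
curl -s -u "administrator:${OPAL_ADMIN_PASSWORD}" \
  "https://${HOST}/opal/ws/projects" | head
```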
|
@ -1,28 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
OVERRIDE+=" -f ./common/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
|
@ -1,15 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ -n "${ENABLE_DNPM}" ]; then
|
||||
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
|
||||
OVERRIDE+=" -f ./common/dnpm-compose.yml"
|
||||
|
||||
# Set variables required for Beam-Connect
|
||||
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
# If the DNPM_NO_PROXY variable is set, prefix it with a comma (as it gets added to a comma separated list)
|
||||
if [ -n "${DNPM_NO_PROXY}" ]; then
|
||||
DNPM_ADDITIONAL_NO_PROXY=",${DNPM_NO_PROXY}"
|
||||
else
|
||||
DNPM_ADDITIONAL_NO_PROXY=""
|
||||
fi
|
||||
fi
|
@ -1,6 +0,0 @@
|
||||
# Full Excel Export
|
||||
curl --location --request POST 'https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL' \
|
||||
--header 'x-api-key: ${EXPORT_API_KEY}'
|
||||
|
||||
# QB
|
||||
curl --location --request POST 'https://${HOST}/ccp-reporter/generate?template-id=ccp'
|
@ -1,67 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
exporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
|
||||
container_name: bridgehead-ccp-exporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
EXPORTER_DB_USER: "exporter"
|
||||
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
|
||||
HTTP_RELATIVE_PATH: "/ccp-exporter"
|
||||
SITE: "${SITE_ID}"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
|
||||
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
|
||||
- "traefik.http.routers.exporter_ccp.tls=true"
|
||||
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
|
||||
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
|
||||
|
||||
exporter-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
container_name: bridgehead-ccp-exporter-db
|
||||
environment:
|
||||
POSTGRES_USER: "exporter"
|
||||
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
POSTGRES_DB: "exporter"
|
||||
volumes:
|
||||
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
|
||||
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
|
||||
|
||||
reporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
|
||||
container_name: bridgehead-ccp-reporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-reporter"
|
||||
SITE: "${SITE_ID}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
LOG_FHIR_VALIDATION: "false"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
|
||||
# In this initial development state of the bridgehead, we try to use as few volumes as possible.
# However, in the first runs at the CCP sites this volume has proven to be important: a report is
# a process that can take several hours, because it depends on the exporter, and there is a risk
# that the bridgehead restarts in the meantime, losing the already created export.
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
|
||||
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
|
||||
- "traefik.http.routers.reporter_ccp.tls=true"
|
||||
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
|
||||
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"
|
@ -1,8 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_EXPORTER" == true ]; then
|
||||
log INFO "Exporter setup detected -- will start Exporter service."
|
||||
OVERRIDE+=" -f ./common/exporter-compose.yml"
|
||||
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
|
||||
fi
|
@ -1,15 +0,0 @@
|
||||
# Exporter and Reporter

## Exporter
The exporter is a REST API that exports the data of the different databases of the Bridgehead as a set of tables.
It accepts different output formats such as CSV, Excel, JSON or XML, and it can also export data into Opal.

## Exporter-DB
A database in which queries are saved for execution by the exporter.
The exporter also uses this database to manage repeated executions of the same query.

## Reporter
This component is a plugin of the exporter that creates more complex Excel reports described in templates.
It is compatible with different template engines such as Groovy and Thymeleaf, and it is well suited
for generating documents such as our traditional CCP quality report.
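For illustration, a request against the exporter's REST API might look like the sketch below. The query parameters mirror the curl examples kept elsewhere in this repository; EXPORTER_API_KEY is the key generated in exporter-setup.sh, and HOST is your site's hostname.

```bash
#!/bin/bash
# Sketch: request a JSON export of all patients using the "ccp" template.
curl --location --request POST \
  "https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=JSON" \
  --header "x-api-key: ${EXPORTER_API_KEY}"
```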
|
@ -1,25 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
fhir2sql:
|
||||
depends_on:
|
||||
- "dashboard-db"
|
||||
- "blaze"
|
||||
image: docker.verbis.dkfz.de/cache/samply/fhir2sql:latest
|
||||
container_name: bridgehead-ccp-dashboard-fhir2sql
|
||||
environment:
|
||||
BLAZE_BASE_URL: "http://bridgehead-ccp-blaze:8080"
|
||||
PG_HOST: "dashboard-db"
|
||||
PG_USERNAME: "dashboard"
|
||||
PG_PASSWORD: "${DASHBOARD_DB_PASSWORD}" # Set in dashboard-setup.sh
|
||||
PG_DBNAME: "dashboard"
|
||||
|
||||
dashboard-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
container_name: bridgehead-ccp-dashboard-db
|
||||
environment:
|
||||
POSTGRES_USER: "dashboard"
|
||||
POSTGRES_PASSWORD: "${DASHBOARD_DB_PASSWORD}" # Set in dashboard-setup.sh
|
||||
POSTGRES_DB: "dashboard"
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/dashboard-db:/var/lib/postgresql/data"
|
@ -1,7 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_FHIR2SQL" == true ]; then
|
||||
log INFO "Dashboard setup detected -- will start Dashboard backend and FHIR2SQL service."
|
||||
OVERRIDE+=" -f ./common/fhir2sql-compose.yml"
|
||||
DASHBOARD_DB_PASSWORD="$(generate_simple_password 'fhir2sql')"
|
||||
fi
|
@ -1,36 +0,0 @@
|
||||
# fhir2sql
|
||||
fhir2sql connects to Blaze, retrieves data, and syncs it with a PostgreSQL database. The application is designed to run continuously, syncing data at regular intervals.
|
||||
The Dashboard module is an optional component of the Bridgehead CCP setup. When enabled, it starts two Docker services: **fhir2sql** and **dashboard-db**. Data held in PostgreSQL is only stored temporarily; Blaze is considered the 'leading system', i.e. the source of truth.
|
||||
|
||||
## Services
|
||||
### fhir2sql
|
||||
* Image: docker.verbis.dkfz.de/cache/samply/fhir2sql:latest
|
||||
* Container name: bridgehead-ccp-dashboard-fhir2sql
|
||||
* Depends on: dashboard-db
|
||||
* Environment variables:
|
||||
- BLAZE_BASE_URL: The base URL of the Blaze FHIR server (set to http://blaze:8080/fhir/)
|
||||
- PG_HOST: The hostname of the PostgreSQL database (set to dashboard-db)
|
||||
- PG_USERNAME: The username for the PostgreSQL database (set to dashboard)
|
||||
- PG_PASSWORD: The password for the PostgreSQL database (set to the value of DASHBOARD_DB_PASSWORD)
|
||||
- PG_DBNAME: The name of the PostgreSQL database (set to dashboard)
|
||||
|
||||
### dashboard-db
|
||||
|
||||
* Image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
* Container name: bridgehead-ccp-dashboard-db
|
||||
* Environment variables:
|
||||
- POSTGRES_USER: The username for the PostgreSQL database (set to dashboard)
|
||||
- POSTGRES_PASSWORD: The password for the PostgreSQL database (set to the value of DASHBOARD_DB_PASSWORD)
|
||||
- POSTGRES_DB: The name of the PostgreSQL database (set to dashboard)
|
||||
* Volumes:
|
||||
- /var/cache/bridgehead/ccp/dashboard-db:/var/lib/postgresql/data
|
||||
|
||||
The volume used by dashboard-db can be removed safely; a working state is restored simply by re-importing the data from Blaze.
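A minimal sketch of that reset is shown below. The systemd unit name `bridgehead@ccp.service` is an assumption and not taken from this module; the volume path is the one declared above.

```bash
#!/bin/bash
# Sketch: drop the temporary dashboard data; fhir2sql re-syncs it from Blaze on the next run.
sudo systemctl stop bridgehead@ccp.service       # assumed unit name
sudo rm -rf /var/cache/bridgehead/ccp/dashboard-db
sudo systemctl start bridgehead@ccp.service
```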
|
||||
|
||||
### Environment Variables
|
||||
* DASHBOARD_DB_PASSWORD: A generated password for the PostgreSQL database, created using a salt string and the SHA1 hash function.
|
||||
* POSTGRES_TAG: The tag of the PostgreSQL image to use (not set in this module, but required by the dashboard-db service).
|
||||
|
||||
|
||||
### Setup
|
||||
To enable the Dashboard module, set the ENABLE_FHIR2SQL environment variable to true. The dashboard-setup.sh script will then start the fhir2sql and dashboard-db services, using the environment variables and volumes defined above.
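A minimal sketch of enabling the module, assuming the site configuration lives in /etc/bridgehead/ccp.conf and the Bridgehead runs as the systemd unit `bridgehead@ccp.service` (both assumptions):

```bash
#!/bin/bash
# Sketch: switch the Dashboard module on and restart the Bridgehead.
echo "ENABLE_FHIR2SQL=true" | sudo tee -a /etc/bridgehead/ccp.conf
sudo systemctl restart bridgehead@ccp.service    # assumed unit name
```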
|
@ -1,33 +0,0 @@
|
||||
version: "3.7"
|
||||
services:
|
||||
landing:
|
||||
container_name: lens_federated-search
|
||||
image: docker.verbis.dkfz.de/ccp/lens:${SITE_ID}
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.landing.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.landing.tls=true"
|
||||
|
||||
spot:
|
||||
image: docker.verbis.dkfz.de/ccp-private/central-spot
|
||||
environment:
|
||||
BEAM_SECRET: "${FOCUS_BEAM_SECRET_SHORT}"
|
||||
BEAM_URL: http://beam-proxy:8081
|
||||
BEAM_PROXY_ID: ${SITE_ID}
|
||||
BEAM_BROKER_ID: ${BROKER_ID}
|
||||
BEAM_APP_ID: "focus"
|
||||
PROJECT_METADATA: "dktk_supervisors"
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.services.spot.loadbalancer.server.port=8080"
|
||||
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowmethods=GET,OPTIONS,POST"
|
||||
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolalloworiginlist=https://${HOST}"
|
||||
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowcredentials=true"
|
||||
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolmaxage=-1"
|
||||
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
|
||||
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
|
||||
- "traefik.http.routers.spot.tls=true"
|
||||
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"
|
@ -1,5 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "$ENABLE_LENS" ];then
|
||||
OVERRIDE+=" -f ./common/lens-compose.yml"
|
||||
fi
|
@ -1,6 +0,0 @@
|
||||
# Molecular Tumor Board Alliance (MTBA)
|
||||
|
||||
In this module, the genetic data to import is stored in a directory (/tmp/bridgehead/mtba/input). A process checks
regularly whether there are files in this directory. The files are pseudonymized when IDAT is provided, combined with
clinical data from Blaze and imported into cBioPortal. These files are also imported into Blaze.
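For illustration, handing a file to the watcher could look like the sketch below; the file name is hypothetical, the directory is the one named above.

```bash
#!/bin/bash
# Sketch: hand a genetic data file to the MTBA watcher.
sudo mkdir -p /tmp/bridgehead/mtba/input
sudo cp genetic-data-with-idat.csv /tmp/bridgehead/mtba/input/
```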
|
@ -1,20 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
obds2fhir-rest:
|
||||
container_name: bridgehead-obds2fhir-rest
|
||||
image: docker.verbis.dkfz.de/ccp/obds2fhir-rest:main
|
||||
environment:
|
||||
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
|
||||
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
|
||||
SALT: ${LOCAL_SALT}
|
||||
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
|
||||
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"
|
||||
- "traefik.http.middlewares.obds2fhir-rest_strip.stripprefix.prefixes=/obds2fhir-rest,/adt2fhir-rest"
|
||||
- "traefik.http.services.obds2fhir-rest.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.obds2fhir-rest.tls=true"
|
||||
- "traefik.http.routers.obds2fhir-rest.middlewares=obds2fhir-rest_strip,auth"
|
@ -1,13 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
function obds2fhirRestSetup() {
|
||||
if [ -n "$ENABLE_OBDS2FHIR_REST" ]; then
|
||||
log INFO "oBDS2FHIR-REST setup detected -- will start obds2fhir-rest module."
|
||||
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
|
||||
PATIENTLIST_URL=" "
|
||||
fi
|
||||
OVERRIDE+=" -f ./common/obds2fhir-rest-compose.yml"
|
||||
LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
fi
|
||||
}
|
@ -1,81 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
teiler-orchestrator:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
|
||||
container_name: bridgehead-teiler-orchestrator
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.rule=PathPrefix(`/ccp-teiler`)"
|
||||
- "traefik.http.services.teiler_orchestrator_ccp.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
|
||||
environment:
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
|
||||
teiler-dashboard:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
|
||||
container_name: bridgehead-teiler-dashboard
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.rule=PathPrefix(`/ccp-teiler-dashboard`)"
|
||||
- "traefik.http.services.teiler_dashboard_ccp.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_dashboard_ccp_strip.stripprefix.prefixes=/ccp-teiler-dashboard"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
|
||||
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
|
||||
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_USER: "${OIDC_USER_GROUP}"
|
||||
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
|
||||
REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
|
||||
EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"
|
||||
|
||||
|
||||
teiler-backend:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-teiler-backend:latest
|
||||
container_name: bridgehead-teiler-backend
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_backend_ccp.rule=PathPrefix(`/ccp-teiler-backend`)"
|
||||
- "traefik.http.services.teiler_backend_ccp.loadbalancer.server.port=8085"
|
||||
- "traefik.http.routers.teiler_backend_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
|
||||
- "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
|
||||
environment:
|
||||
LOG_LEVEL: "INFO"
|
||||
APPLICATION_PORT: "8085"
|
||||
APPLICATION_ADDRESS: "${HOST}"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
|
||||
CENTRAX_URL: "${CENTRAXX_URL}"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
ENABLE_MTBA: "${ENABLE_MTBA}"
|
||||
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
|
||||
secrets:
|
||||
- ccp.conf
|
||||
|
||||
secrets:
|
||||
ccp.conf:
|
||||
file: /etc/bridgehead/ccp.conf
|
@ -1,9 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_TEILER" == true ];then
|
||||
log INFO "Teiler setup detected -- will start Teiler services."
|
||||
OVERRIDE+=" -f ./common/teiler-compose.yml"
|
||||
TEILER_DEFAULT_LANGUAGE=DE
|
||||
TEILER_DEFAULT_LANGUAGE_LOWER_CASE=${TEILER_DEFAULT_LANGUAGE,,}
|
||||
add_public_oidc_redirect_url "/ccp-teiler/*"
|
||||
fi
|
@ -1,19 +0,0 @@
|
||||
# Teiler
|
||||
This module orchestrates the different microfrontends of the bridgehead as a single page application.
|
||||
|
||||
## Teiler Orchestrator
|
||||
Single-spa component that consists of the root HTML page of the single page application and the JavaScript code that
|
||||
fetches the information about the microfrontends from the Teiler backend and is responsible for registering them. With the
|
||||
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.
|
||||
|
||||
The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
|
||||
These microfrontends can also run on their own, but they need to be extended with single-spa (https://single-spa.js.org/docs/ecosystem).
|
||||
Three templates (Angular, Vue, React) are also available that can be extended directly for use in the Teiler.
|
||||
|
||||
## Teiler Dashboard
|
||||
It consists of the main dashboard and a set of embedded services.
|
||||
### Login
|
||||
The user name and password are configured in ccp.local.conf.
|
||||
|
||||
## Teiler Backend
|
||||
In this component, the microfrontends are configured.
|
@ -1,66 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
container_name: bridgehead-dhki-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-dhki-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.blaze_dhki.rule=PathPrefix(`/dhki-localdatamanagement`)"
|
||||
- "traefik.http.middlewares.dhki_b_strip.stripprefix.prefixes=/dhki-localdatamanagement"
|
||||
- "traefik.http.services.blaze_dhki.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.blaze_dhki.middlewares=dhki_b_strip,auth"
|
||||
- "traefik.http.routers.blaze_dhki.tls=true"
|
||||
|
||||
focus:
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
|
||||
container_name: bridgehead-focus
|
||||
environment:
|
||||
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
BEAM_APP_ID_LONG: focus.${PROXY_ID}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
BLAZE_URL: "http://bridgehead-dhki-blaze:8080/fhir/"
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
QUERIES_TO_CACHE: '/queries_to_cache.conf'
|
||||
volumes:
|
||||
- /srv/docker/bridgehead/dhki/queries_to_cache.conf:/queries_to_cache.conf
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
||||
beam-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
|
||||
container_name: bridgehead-beam-proxy
|
||||
environment:
|
||||
BROKER_URL: ${BROKER_URL}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
ROOTCERT_FILE: /conf/root.crt.pem
|
||||
secrets:
|
||||
- proxy.pem
|
||||
depends_on:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/dhki/root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
|
||||
volumes:
|
||||
blaze-data:
|
||||
|
||||
secrets:
|
||||
proxy.pem:
|
||||
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
|
@ -1,2 +0,0 @@
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0FHRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREhLSV9TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfRU5DT1VOVEVSX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIGljZDEwOiAnaHR0cDovL2ZoaXIuZGUvQ29kZVN5c3RlbS9iZmFybS9pY2QtMTAtZ20nCmNvZGVzeXN0ZW0gbW9ycGg6ICd1cm46b2lkOjIuMTYuODQwLjEuMTEzODgzLjYuNDMuMScKCmNvbnRleHQgUGF0aWVudAoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9BR0VfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpESEtJX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCgpESEtJX1NUUkFUX0VOQ09VTlRFUl9TVFJBVElGSUVSCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgooKChleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzM0LjknIGZyb20gaWNkMTBdKSBvcgooZXhpc3RzIFtDb25kaXRpb246IENvZGUgJ0MzNC44JyBmcm9tIGljZDEwXSkgb3IKKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDMzQuMCcgZnJvbSBpY2QxMF0pIG9yCihleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzM0LjInIGZyb20gaWNkMTBdKSBvcgooZXhpc3RzIFtDb25kaXRpb246IENvZGUgJ0MzNC4xJyBmcm9tIGljZDEwXSkgb3IKKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDMzQuMycgZnJvbSBpY2QxMF0pKSBhbmQKKChleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQxLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDMvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0Ny8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MjUwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgyNTEvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODI1Mi8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MjUzLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgyNTUvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODI2MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MzEwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgzMzMvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODQ3MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NDgwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0OTAvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODU1MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5
jb2RlIGNvbnRhaW5zICc4MDUyLzMnKSkp
|
@ -1,20 +0,0 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUSWUPebUMNfJvPKMjdgX+WiH+OXgwDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTA1MDg1NTM4WhcNMzQw
|
||||
MTAyMDg1NjA4WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBAL/nvo9Bn1/6Z/K4BKoLM6/mVziM4cmXTVx4npVz
|
||||
pnptwPPFU4rz47akRZ6ZMD5MO0bsyvaxG1nwVrW3aAGC42JIGTdZHKwMKrd35sxw
|
||||
k3YlGJagGUs+bKHUCL55OcSmyDWlh/UhA8+eeJWjOt9u0nYXv+vi+N4JSHA0oC9D
|
||||
bTF1v+7blrTQagf7PTPSF3pe22iXOjJYdOkZMWoMoNAjn6F958fkLNLY3csOZwvP
|
||||
/3eyNNawyAEPWeIm33Zk630NS8YHggz6WCqwXvuaKb6910mRP8jgauaYsqgsOyDt
|
||||
pbWuvk//aZWdGeN9RNsAA8eGppygiwm/m9eRC6I0shDwv6ECAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFn/dbW1J3ry
|
||||
7TBzbKo3H4vJr2MiMB8GA1UdIwQYMBaAFFn/dbW1J3ry7TBzbKo3H4vJr2MiMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCa2V8B8aad
|
||||
XNDS1EUIi9oMdvGvkolcdFwx9fI++qu9xSIaZs5GETHck3oYKZF0CFP5ESnKDn5w
|
||||
enWgm5M0y+hVZppzB163WmET1efBXwrdyn8j4336NjX352h63JGWCaI2CfZ1qG1p
|
||||
kf5W9CVXllSFaJe5r994ovgyHvK2ucWwe8l8iMJbQhH79oKi/9uJMCD6aUXnpg1K
|
||||
nPHW1lsVx6foqYWijdBdtFU2i7LSH2OYo0nb1PgRnY/SABV63JHfJnqW9dZy4f7G
|
||||
rpsvvrmFrKmEnCZH0n6qveY3Z5bMD94Yx0ebkCTYEqAw3pV65gwxrzBTpEg6dgF0
|
||||
eG0eKFUS0REJ
|
||||
-----END CERTIFICATE-----
|
20
dhki/vars
20
dhki/vars
@ -1,20 +0,0 @@
|
||||
BROKER_ID=broker.hector.dkfz.de
|
||||
BROKER_URL=https://${BROKER_ID}
|
||||
PROXY_ID=${SITE_ID}.${BROKER_ID}
|
||||
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
|
||||
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
POSTGRES_TAG=15.6-alpine
|
||||
|
||||
for module in common/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
done
|
||||
|
||||
idManagementSetup
|
||||
obds2fhirRestSetup
|
14
ecdc.service
Normal file
14
ecdc.service
Normal file
@ -0,0 +1,14 @@
|
||||
[Unit]
|
||||
Description=Start ECDC Bridgehead
|
||||
|
||||
[Service]
|
||||
Type=simple
|
||||
ExecStart=/srv/docker/bridgehead/restart_service.sh
|
||||
ExecStop=/srv/docker/bridgehead/shutdown_service.sh
|
||||
Restart=always
|
||||
RestartSec=36000
|
||||
KillMode=mixed
|
||||
|
||||
[Install]
|
||||
WantedBy=default.target
|
||||
|
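For reference, a unit like the one above is installed and activated with the usual systemd steps; run.sh further below performs exactly these steps automatically:

sudo cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
sudo systemctl daemon-reload
sudo systemctl enable ecdc.service
sudo systemctl start ecdc.service
# follow the service output
journalctl -u ecdc.service -f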
@ -1,63 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
container_name: bridgehead-itcc-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-itcc-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.blaze_itcc.rule=PathPrefix(`/itcc-localdatamanagement`)"
|
||||
- "traefik.http.middlewares.itcc_b_strip.stripprefix.prefixes=/itcc-localdatamanagement"
|
||||
- "traefik.http.services.blaze_itcc.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.blaze_itcc.middlewares=itcc_b_strip,auth"
|
||||
- "traefik.http.routers.blaze_itcc.tls=true"
|
||||
|
||||
focus:
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
|
||||
container_name: bridgehead-focus
|
||||
environment:
|
||||
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
BEAM_APP_ID_LONG: focus.${PROXY_ID}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
BLAZE_URL: "http://bridgehead-itcc-blaze:8080/fhir/"
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
||||
beam-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
|
||||
container_name: bridgehead-beam-proxy
|
||||
environment:
|
||||
BROKER_URL: ${BROKER_URL}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
ROOTCERT_FILE: /conf/root.crt.pem
|
||||
secrets:
|
||||
- proxy.pem
|
||||
depends_on:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/itcc/root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
|
||||
volumes:
|
||||
blaze-data:
|
||||
|
||||
secrets:
|
||||
proxy.pem:
|
||||
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
|
@ -1,20 +0,0 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
|
||||
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
|
||||
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
|
||||
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
|
||||
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
|
||||
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
|
||||
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
|
||||
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
|
||||
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
|
||||
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
|
||||
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
|
||||
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
|
||||
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
|
||||
Rtup0MTxSJtN
|
||||
-----END CERTIFICATE-----
|
14
itcc/vars
14
itcc/vars
@ -1,14 +0,0 @@
|
||||
BROKER_ID=test-no-real-data.broker.samply.de
|
||||
BROKER_URL=https://${BROKER_ID}
|
||||
PROXY_ID=${SITE_ID}.${BROKER_ID}
|
||||
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
|
||||
SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
for module in common/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
done
|
@ -1,63 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
container_name: bridgehead-kr-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-kr-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.blaze_kr.rule=PathPrefix(`/kr-localdatamanagement`)"
|
||||
- "traefik.http.middlewares.kr_b_strip.stripprefix.prefixes=/kr-localdatamanagement"
|
||||
- "traefik.http.services.blaze_kr.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.blaze_kr.middlewares=kr_b_strip,auth"
|
||||
- "traefik.http.routers.blaze_kr.tls=true"
|
||||
|
||||
focus:
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
|
||||
container_name: bridgehead-focus
|
||||
environment:
|
||||
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
BEAM_APP_ID_LONG: focus.${PROXY_ID}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
BLAZE_URL: "http://bridgehead-kr-blaze:8080/fhir/"
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
||||
beam-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
|
||||
container_name: bridgehead-beam-proxy
|
||||
environment:
|
||||
BROKER_URL: ${BROKER_URL}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
ROOTCERT_FILE: /conf/root.crt.pem
|
||||
secrets:
|
||||
- proxy.pem
|
||||
depends_on:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/kr/root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
|
||||
volumes:
|
||||
blaze-data:
|
||||
|
||||
secrets:
|
||||
proxy.pem:
|
||||
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
|
@ -1,20 +0,0 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
|
||||
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
|
||||
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
|
||||
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
|
||||
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
|
||||
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
|
||||
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
|
||||
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
|
||||
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
|
||||
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
|
||||
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
|
||||
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
|
||||
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
|
||||
Rtup0MTxSJtN
|
||||
-----END CERTIFICATE-----
|
16
kr/vars
16
kr/vars
@ -1,16 +0,0 @@
|
||||
BROKER_ID=test-no-real-data.broker.samply.de
|
||||
BROKER_URL=https://${BROKER_ID}
|
||||
PROXY_ID=${SITE_ID}.${BROKER_ID}
|
||||
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
|
||||
SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
for module in common/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
done
|
||||
|
||||
obds2fhirRestSetup
|
186
lib/functions.sh
186
lib/functions.sh
@ -53,8 +53,8 @@ checkOwner(){
|
||||
}
|
||||
|
||||
printUsage() {
|
||||
echo "Usage: bridgehead start|stop|logs|docker-logs|is-running|update|install|uninstall|adduser|enroll PROJECTNAME"
|
||||
echo "PROJECTNAME should be one of ccp|bbmri|cce|itcc|kr|dhki"
|
||||
echo "Usage: bridgehead start|stop|is-running|update|install|uninstall|adduser|enroll PROJECTNAME"
|
||||
echo "PROJECTNAME should be one of ccp|bbmri"
|
||||
}
|
||||
|
||||
checkRequirements() {
|
||||
@ -155,28 +155,6 @@ setHostname() {
|
||||
fi
|
||||
}
|
||||
|
||||
# This function optimizes Blaze's memory usage according to the official performance tuning guide:
|
||||
# https://github.com/samply/blaze/blob/master/docs/tuning-guide.md
|
||||
# Short summary of the adjustments made:
|
||||
# - set blaze memory cap to a quarter of the system memory
|
||||
# - set db block cache size to a quarter of the system memory
|
||||
# - limit the resource count allowed in Blaze to 1.25M per 4 GB of available system memory
|
||||
optimizeBlazeMemoryUsage() {
|
||||
if [ -z "$BLAZE_MEMORY_CAP" ]; then
|
||||
system_memory_in_mb=$(LC_ALL=C free -m | grep 'Mem:' | awk '{print $2}');
|
||||
export BLAZE_MEMORY_CAP=$(($system_memory_in_mb/4));
|
||||
fi
|
||||
if [ -z "$BLAZE_RESOURCE_CACHE_CAP" ]; then
|
||||
available_system_memory_chunks=$((BLAZE_MEMORY_CAP / 1000))
|
||||
if [ $available_system_memory_chunks -eq 0 ]; then
|
||||
log WARN "Only ${BLAZE_MEMORY_CAP} system memory available for Blaze. If your Blaze stores more than 128000 fhir ressources it will run significally slower."
|
||||
export BLAZE_RESOURCE_CACHE_CAP=128000;
|
||||
else
|
||||
export BLAZE_RESOURCE_CACHE_CAP=$((available_system_memory_chunks * 312500))
|
||||
fi
|
||||
fi
|
||||
}
|
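As a rough illustration of the defaults set by this function, assume a host with 32 GB (32768 MB) of RAM; the lines below simply replay its arithmetic:

system_memory_in_mb=32768                                               # example host with 32 GB RAM
BLAZE_MEMORY_CAP=$((system_memory_in_mb / 4))                           # 8192 MB
available_system_memory_chunks=$((BLAZE_MEMORY_CAP / 1000))             # 8
BLAZE_RESOURCE_CACHE_CAP=$((available_system_memory_chunks * 312500))   # 2,500,000 FHIR resources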
||||
|
||||
# Takes 1) The Backup Directory Path 2) The name of the Service to be backuped
|
||||
# Creates 3 Backups: 1) For the past seven days 2) For the current month and 3) for each calendar week
|
||||
createEncryptedPostgresBackup(){
|
||||
@ -262,112 +240,66 @@ add_basic_auth_user() {
|
||||
sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
|
||||
}
|
||||
|
||||
OIDC_PUBLIC_REDIRECT_URLS=${OIDC_PUBLIC_REDIRECT_URLS:-""}
|
||||
OIDC_PRIVATE_REDIRECT_URLS=${OIDC_PRIVATE_REDIRECT_URLS:-""}
|
||||
function clone_repo_if_nonexistent() {
|
||||
local repo_url="$1" # First argument: Repository URL
|
||||
local target_dir="$2" # Second argument: Target directory
|
||||
local branch_name="$3" # Third argument: Branch name
|
||||
|
||||
# Add a redirect url to the public oidc client of the bridgehead
|
||||
function add_public_oidc_redirect_url() {
|
||||
if [[ $OIDC_PUBLIC_REDIRECT_URLS == "" ]]; then
|
||||
OIDC_PUBLIC_REDIRECT_URLS+="$(generate_redirect_urls $1)"
|
||||
else
|
||||
OIDC_PUBLIC_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
|
||||
echo Repo directory: $target_dir
|
||||
|
||||
# Check if the target directory exists
|
||||
if [ ! -d "$target_dir" ]; then
|
||||
echo "Directory '$target_dir' does not exist. Cloning the repository..."
|
||||
# Clone the repository
|
||||
git clone "$repo_url" "$target_dir"
|
||||
fi
|
||||
|
||||
# Change to the cloned directory
|
||||
cd "$target_dir"
|
||||
|
||||
# Checkout the specified branch
|
||||
chown -R bridgehead .
|
||||
su bridgehead -c "git checkout $branch_name"
|
||||
|
||||
cd -
|
||||
}
|
||||
|
||||
function clone_transfair_if_nonexistent() {
|
||||
local base_dir="$1"
|
||||
|
||||
clone_repo_if_nonexistent https://github.com/samply/transFAIR.git $base_dir/transfair ehds2_develop
|
||||
}
|
||||
|
||||
function clone_focus_if_nonexistent() {
|
||||
local base_dir="$1"
|
||||
|
||||
clone_repo_if_nonexistent https://github.com/samply/focus.git $base_dir/focus ehds2
|
||||
}
|
||||
|
||||
|
||||
function build_transfair() {
|
||||
local base_dir="$1"
|
||||
|
||||
# We only take the trouble to build transfair if:
|
||||
#
|
||||
# 1. There is data available (any CSV files) and
|
||||
# 2. There is no data lock file (which means that no ETL has yet been run).
|
||||
if ls ../ecdc/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && [ ! -f ../ecdc/data/lock ]; then
|
||||
cd $base_dir/transfair
|
||||
su bridgehead -c "git pull"
|
||||
docker build --progress=plain -t samply/transfair --no-cache .
|
||||
chown -R bridgehead .
|
||||
cd -
|
||||
fi
|
||||
}
|
||||
|
||||
# Add a redirect url to the private oidc client of the bridgehead
|
||||
function add_private_oidc_redirect_url() {
|
||||
if [[ $OIDC_PRIVATE_REDIRECT_URLS == "" ]]; then
|
||||
OIDC_PRIVATE_REDIRECT_URLS+="$(generate_redirect_urls $1)"
|
||||
else
|
||||
OIDC_PRIVATE_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
|
||||
fi
|
||||
function build_focus() {
|
||||
local base_dir="$1"
|
||||
|
||||
cd $base_dir/focus
|
||||
su bridgehead -c "git pull"
|
||||
docker build --progress=plain -f DockerfileWithBuild -t samply/focus --no-cache .
|
||||
chown -R bridgehead .
|
||||
cd -
|
||||
}
|
||||
|
||||
function sync_secrets() {
|
||||
local delimiter=$'\x1E'
|
||||
local secret_sync_args=""
|
||||
if [[ $OIDC_PRIVATE_REDIRECT_URLS != "" ]]; then
|
||||
secret_sync_args="OIDC:OIDC_CLIENT_SECRET:private;$OIDC_PRIVATE_REDIRECT_URLS"
|
||||
fi
|
||||
if [[ $OIDC_PUBLIC_REDIRECT_URLS != "" ]]; then
|
||||
if [[ $secret_sync_args == "" ]]; then
|
||||
secret_sync_args="OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
|
||||
else
|
||||
secret_sync_args+="${delimiter}OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
|
||||
fi
|
||||
fi
|
||||
if [[ $secret_sync_args == "" ]]; then
|
||||
return
|
||||
fi
|
||||
mkdir -p /var/cache/bridgehead/secrets/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/secrets/'. Please run sudo './bridgehead install $PROJECT' again."
|
||||
touch /var/cache/bridgehead/secrets/oidc
|
||||
docker run --rm \
|
||||
-v /var/cache/bridgehead/secrets/oidc:/usr/local/cache \
|
||||
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
|
||||
-v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
|
||||
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
|
||||
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
|
||||
-e NO_PROXY=localhost,127.0.0.1 \
|
||||
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
|
||||
-e PROXY_ID=$PROXY_ID \
|
||||
-e BROKER_URL=$BROKER_URL \
|
||||
-e OIDC_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
|
||||
-e SECRET_DEFINITIONS=$secret_sync_args \
|
||||
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
|
||||
|
||||
set -a # Export variables as environment variables
|
||||
source /var/cache/bridgehead/secrets/*
|
||||
set +a # Export variables in the regular way
|
||||
}
|
||||
|
||||
capitalize_first_letter() {
|
||||
input="$1"
|
||||
capitalized="$(tr '[:lower:]' '[:upper:]' <<< ${input:0:1})${input:1}"
|
||||
echo "$capitalized"
|
||||
}
|
||||
|
||||
# Generate a string of ',' separated string of redirect urls relative to $HOST.
|
||||
# $1 will be appended to the url
|
||||
# If the host looks like dev-jan.inet.dkfz-heidelberg.de it will generate urls with dev-jan and the original $HOST as url Authorities
|
||||
function generate_redirect_urls(){
|
||||
local redirect_urls="https://${HOST}$1"
|
||||
local host_without_proxy="$(echo "$HOST" | cut -d '.' -f1)"
|
||||
# Only append the second URL if it is different and the host is not an IP address
|
||||
if [[ "$HOST" != "$host_without_proxy" && ! "$HOST" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
|
||||
redirect_urls+=",https://$host_without_proxy$1"
|
||||
fi
|
||||
echo "$redirect_urls"
|
||||
}
|
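For illustration, assuming the functions above are sourced and HOST=dev-jan.inet.dkfz-heidelberg.de (the example host mentioned in the comment), a call with the /ccp-teiler/* suffix used elsewhere in this repository would return both the full and the shortened authority:

HOST=dev-jan.inet.dkfz-heidelberg.de
generate_redirect_urls "/ccp-teiler/*"
# -> https://dev-jan.inet.dkfz-heidelberg.de/ccp-teiler/*,https://dev-jan/ccp-teiler/*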
||||
|
||||
# This password contains at least one special char, a random number and a random upper and lower case letter
|
||||
generate_password(){
|
||||
local seed_text="$1"
|
||||
local seed_num=$(awk 'BEGIN{FS=""} NR==1{print $10}' /etc/bridgehead/pki/${SITE_ID}.priv.pem | od -An -tuC)
|
||||
local nums="1234567890"
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 10}')
|
||||
local random_digit=${nums:$n:1}
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 26}')
|
||||
local upper="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
|
||||
local lower="abcdefghijklmnopqrstuvwxyz"
|
||||
local random_upper=${upper:$n:1}
|
||||
local random_lower=${lower:$n:1}
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 8}')
|
||||
local special='@#$%^&+='
|
||||
local random_special=${special:$n:1}
|
||||
|
||||
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
|
||||
local main_password=$(echo "${combined_text}" | sha1sum | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/\//A/g')
|
||||
|
||||
echo "${main_password}${random_digit}${random_upper}${random_lower}${random_special}"
|
||||
}
|
||||
|
||||
# This password only contains alphanumeric characters
|
||||
generate_simple_password(){
|
||||
local seed_text="$1"
|
||||
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
|
||||
echo "${combined_text}" | sha1sum | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/[+\/]/A/g'
|
||||
}
|
||||
|
||||
docker_jq() {
|
||||
docker run --rm -i docker.verbis.dkfz.de/cache/jqlang/jq:latest "$@"
|
||||
}
|
||||
|
@ -52,21 +52,6 @@ case "$PROJECT" in
|
||||
bbmri)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/bbmri-bridgehead-configs/"
|
||||
;;
|
||||
cce)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/cce-sites/"
|
||||
;;
|
||||
itcc)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/itcc-sites/"
|
||||
;;
|
||||
dhki)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
|
||||
;;
|
||||
kr)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/krebsregister-sites/"
|
||||
;;
|
||||
dhki)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
|
||||
;;
|
||||
minimal)
|
||||
site_configuration_repository_middle="git.verbis.dkfz.de/minimal-bridgehead-configs/"
|
||||
;;
|
||||
@ -104,9 +89,6 @@ elif [[ "$DEV_MODE" == "DEV" ]]; then
|
||||
fi
|
||||
|
||||
chown -R bridgehead /etc/bridgehead /srv/docker/bridgehead
|
||||
mkdir -p /tmp/bridgehead /var/cache/bridgehead
|
||||
chown -R bridgehead:docker /tmp/bridgehead /var/cache/bridgehead
|
||||
chmod -R g+wr /var/cache/bridgehead /tmp/bridgehead
|
||||
|
||||
log INFO "System preparation is completed and configuration is present."
|
||||
|
||||
|
@ -67,30 +67,29 @@ log INFO "Checking network access ($BROKER_URL_FOR_PREREQ) ..."
|
||||
source /etc/bridgehead/${PROJECT}.conf
|
||||
source ${PROJECT}/vars
|
||||
|
||||
if [ "${PROJECT}" != "minimal" ]; then
|
||||
set +e
|
||||
SERVERTIME="$(https_proxy=$HTTPS_PROXY_FULL_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
|
||||
RET=$?
|
||||
set -e
|
||||
if [ $RET -ne 0 ]; then
|
||||
log WARN "Unable to connect to Samply.Beam broker at $BROKER_URL_FOR_PREREQ. Please check your proxy settings.\nThe currently configured proxy was \"$HTTPS_PROXY_URL\". This error is normal when using proxy authentication."
|
||||
log WARN "Unable to check clock skew due to previous error."
|
||||
else
|
||||
log INFO "Checking clock skew ..."
|
||||
set +e
|
||||
SERVERTIME="$(https_proxy=$HTTPS_PROXY_FULL_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
|
||||
RET=$?
|
||||
set -e
|
||||
if [ $RET -ne 0 ]; then
|
||||
log WARN "Unable to connect to Samply.Beam broker at $BROKER_URL_FOR_PREREQ. Please check your proxy settings.\nThe currently configured proxy was \"$HTTPS_PROXY_URL\". This error is normal when using proxy authentication."
|
||||
log WARN "Unable to check clock skew due to previous error."
|
||||
else
|
||||
log INFO "Checking clock skew ..."
|
||||
|
||||
SERVERTIME_AS_TIMESTAMP=$(date --date="$SERVERTIME" +%s)
|
||||
MYTIME=$(date +%s)
|
||||
SKEW=$(($SERVERTIME_AS_TIMESTAMP - $MYTIME))
|
||||
SKEW=$(echo $SKEW | awk -F- '{print $NF}')
|
||||
SYNCTEXT="For example, consider entering a correct NTP server (e.g. your institution's Active Directory Domain Controller in /etc/systemd/timesyncd.conf (option NTP=) and restart systemd-timesyncd."
|
||||
if [ $SKEW -ge 300 ]; then
|
||||
report_error 5 "Your clock is not synchronized (${SKEW}s off). This will cause Samply.Beam's certificate will fail. Please setup time synchronization. $SYNCTEXT"
|
||||
exit 1
|
||||
elif [ $SKEW -ge 60 ]; then
|
||||
log WARN "Your clock is more than a minute off (${SKEW}s). Consider syncing to a time server. $SYNCTEXT"
|
||||
fi
|
||||
fi
|
||||
SERVERTIME_AS_TIMESTAMP=$(date --date="$SERVERTIME" +%s)
|
||||
MYTIME=$(date +%s)
|
||||
SKEW=$(($SERVERTIME_AS_TIMESTAMP - $MYTIME))
|
||||
SKEW=$(echo $SKEW | awk -F- '{print $NF}')
|
||||
SYNCTEXT="For example, consider entering a correct NTP server (e.g. your institution's Active Directory Domain Controller in /etc/systemd/timesyncd.conf (option NTP=) and restart systemd-timesyncd."
|
||||
if [ $SKEW -ge 300 ]; then
|
||||
report_error 5 "Your clock is not synchronized (${SKEW}s off). This will cause Samply.Beam's certificate will fail. Please setup time synchronization. $SYNCTEXT"
|
||||
log WARN "Server Time Error"
|
||||
elif [ $SKEW -ge 60 ]; then
|
||||
log WARN "Your clock is more than a minute off (${SKEW}s). Consider syncing to a time server. $SYNCTEXT"
|
||||
fi
|
||||
fi
|
||||
|
||||
checkPrivKey() {
|
||||
if [ -e /etc/bridgehead/pki/${SITE_ID}.priv.pem ]; then
|
||||
log INFO "Success - private key found."
|
||||
@ -101,7 +100,7 @@ checkPrivKey() {
|
||||
return 0
|
||||
}
|
||||
|
||||
if [[ "$@" =~ "noprivkey" || "${PROJECT}" != "minimal" ]]; then
|
||||
if [[ "$@" =~ "noprivkey" ]]; then
|
||||
log INFO "Skipping check for private key for now."
|
||||
else
|
||||
checkPrivKey || exit 1
|
||||
|
@ -1,9 +1,8 @@
|
||||
[Unit]
|
||||
Description=Daily Updates at 6am of Bridgehead (%i)
|
||||
Description=Hourly Updates of Bridgehead (%i)
|
||||
|
||||
[Timer]
|
||||
OnCalendar=*-*-* 06:00:00
|
||||
Persistent=true
|
||||
OnCalendar=*-*-* *:00:00
|
||||
|
||||
[Install]
|
||||
WantedBy=basic.target
|
||||
|
@ -86,7 +86,7 @@ done
|
||||
# Check docker updates
|
||||
log "INFO" "Checking for updates to running docker images ..."
|
||||
docker_updated="false"
|
||||
for IMAGE in $($COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE config | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
|
||||
for IMAGE in $(cat $PROJECT/docker-compose.yml ${OVERRIDE//-f/} minimal/docker-compose.yml | grep -v "^#" | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
|
||||
log "INFO" "Checking for Updates of Image: $IMAGE"
|
||||
if docker pull $IMAGE | grep "Downloaded newer image"; then
|
||||
CHANGE="Image $IMAGE updated."
|
||||
|
@ -42,13 +42,10 @@ services:
|
||||
- /var/spool/squid
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
|
||||
healthcheck:
|
||||
# Wait 1s before marking this service healthy. Required for the oauth2-proxy to talk to the OIDC provider on startup which will fail if the forward proxy is not started yet.
|
||||
test: ["CMD", "sleep", "1"]
|
||||
|
||||
landing:
|
||||
container_name: bridgehead-landingpage
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:main
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:master
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
|
||||
@ -58,4 +55,5 @@ services:
|
||||
HOST: ${HOST}
|
||||
PROJECT: ${PROJECT}
|
||||
SITE_NAME: ${SITE_NAME}
|
||||
ENVIRONMENT: ${ENVIRONMENT}
|
||||
|
||||
|
||||
|
@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-beam-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
|
||||
container_name: bridgehead-dnpm-beam-proxy
|
||||
environment:
|
||||
BROKER_URL: ${DNPM_BROKER_URL}
|
||||
@ -32,14 +32,12 @@ services:
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
HTTP_PROXY: http://forward_proxy:3128
|
||||
HTTPS_PROXY: http://forward_proxy:3128
|
||||
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
|
||||
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal
|
||||
RUST_LOG: ${RUST_LOG:-info}
|
||||
NO_AUTH: "true"
|
||||
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
labels:
|
||||
@ -50,10 +48,6 @@ services:
|
||||
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
|
||||
- "traefik.http.routers.dnpm-connect.tls=true"
|
||||
|
||||
dnpm-echo:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-echo:latest
|
||||
container_name: bridgehead-dnpm-echo
|
||||
|
||||
secrets:
|
||||
proxy.pem:
|
||||
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
@ -6,7 +6,6 @@ services:
|
||||
container_name: bridgehead-dnpm-backend
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
- N_RANDOM_FILES=${DNPM_SYNTH_NUM}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
|
@ -14,15 +14,14 @@ if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
@ -13,10 +13,4 @@ if [ -n "${ENABLE_DNPM}" ]; then
|
||||
log DEBUG "No Broker for clock check set; using $DNPM_BROKER_URL"
|
||||
fi
|
||||
DNPM_PROXY_ID="${SITE_ID}.${DNPM_BROKER_ID}"
|
||||
# If the DNPM_NO_PROXY variable is set, prefix it with a comma (as it gets added to a comma separated list)
|
||||
if [ -n "${DNPM_NO_PROXY}" ]; then
|
||||
DNPM_ADDITIONAL_NO_PROXY=",${DNPM_NO_PROXY}"
|
||||
else
|
||||
DNPM_ADDITIONAL_NO_PROXY=""
|
||||
fi
|
||||
fi
|
||||
|
30
restart_service.sh
Executable file
30
restart_service.sh
Executable file
@ -0,0 +1,30 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Start a running Bridgehead. If there is already a Bridgehead running,
|
||||
# stop it first.
|
||||
# This is intended to be used by systemctl.
|
||||
|
||||
cd /srv/docker/bridgehead
|
||||
|
||||
echo "git status before stop"
|
||||
git status
|
||||
|
||||
echo "Stopping running Bridgehead, if present"
|
||||
./bridgehead stop bbmri
|
||||
|
||||
# If "flush_blaze" is present, delete the Blaze volume before starting
|
||||
# the Bridgehead again. This allows a user to upload all data, if
|
||||
# requested.
|
||||
if [ -f "/srv/docker/ecdc/data/flush_blaze" ]; then
|
||||
docker volume rm bbmri_blaze-data
|
||||
rm -f /srv/docker/ecdc/data/flush_blaze
|
||||
fi
|
||||
|
||||
echo "git status before start"
|
||||
git status | systemd-cat -p info
|
||||
|
||||
echo "Start the Bridgehead anew"
|
||||
./bridgehead start bbmri
|
||||
|
||||
echo "Bridgehead has unexpectedly terminated"
|
||||
|
83
run.sh
Executable file
83
run.sh
Executable file
@ -0,0 +1,83 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Start a Bridgehead from the command line. Upload data if requested.
|
||||
# Behind the scenes we use systemctl to do the work.
|
||||
|
||||
# Function to print usage
|
||||
print_usage() {
|
||||
echo "Start a Bridghead, optionally upload data"
|
||||
echo "Usage: $0 [--upload | --upload-all | --help | -h]"
|
||||
echo "Options:"
|
||||
echo " --upload Run Bridgehead and upload just the new CSV data files."
|
||||
echo " --upload-all Run Bridgehead and upload all CSV data files."
|
||||
echo " --help, -h Display this help message."
|
||||
echo " No options Run Bridgehead only."
|
||||
}
|
||||
|
||||
# Initialize variables
|
||||
UPLOAD=false
|
||||
UPLOAD_ALL=false
|
||||
|
||||
# Parse arguments
|
||||
while [[ "$#" -gt 0 ]]; do
|
||||
case $1 in
|
||||
--upload)
|
||||
UPLOAD=true
|
||||
;;
|
||||
--upload-all)
|
||||
UPLOAD_ALL=true
|
||||
;;
|
||||
--help|-h)
|
||||
print_usage
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
echo "Error: Unknown argument '$1'"
|
||||
print_usage
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
# Check for conflicting options
|
||||
if [ "$UPLOAD" = true ] && [ "$UPLOAD_ALL" = true ]; then
|
||||
echo "Error: you must specify either --upload or --upload-all, specifying both is not permitted."
|
||||
print_usage
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Disable/stop standard Bridgehead systemctl services, if present
|
||||
sudo systemctl disable bridgehead@bbmri.service >& /dev/null
|
||||
sudo systemctl disable system-bridgehead.slice >& /dev/null
|
||||
sudo systemctl disable bridgehead-update@bbmri.timer >& /dev/null
|
||||
sudo systemctl stop bridgehead@bbmri.service >& /dev/null
|
||||
sudo systemctl stop system-bridgehead.slice >& /dev/null
|
||||
sudo systemctl stop bridgehead-update@bbmri.timer >& /dev/null
|
||||
|
||||
# Set up systemctl for EHDS2/ECDC if necessary
|
||||
cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
|
||||
systemctl daemon-reload
|
||||
systemctl enable ecdc.service
|
||||
|
||||
# Use systemctl to stop the Bridgehead if it is running
|
||||
sudo systemctl stop ecdc.service
|
||||
|
||||
# Use files to tell the Bridgehead what to do with any data present
|
||||
if [ "$UPLOAD" = true ] || [ "$UPLOAD_ALL" = true ]; then
|
||||
if [ -f /srv/docker/ecdc/data/lock ]; then
|
||||
rm /srv/docker/ecdc/data/lock
|
||||
fi
|
||||
fi
|
||||
if [ "$UPLOAD_ALL" = true ]; then
|
||||
echo "All CSV files in /srv/docker/ecdc/data will be uploaded"
|
||||
touch /srv/docker/ecdc/data/flush_blaze
|
||||
fi
|
||||
|
||||
# Start up the Bridgehead
|
||||
sudo systemctl start ecdc.service
|
||||
|
||||
# Show status of Bridgehead service
|
||||
sleep 10
|
||||
systemctl status ecdc.service
|
||||
|
13
shutdown_service.sh
Executable file
13
shutdown_service.sh
Executable file
@ -0,0 +1,13 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Shut down a running Bridgehead.
|
||||
# This is intended to be used by systemctl.
|
||||
|
||||
cd /srv/docker/bridgehead
|
||||
|
||||
echo "git status before stop"
|
||||
git status
|
||||
|
||||
echo "Stopping running Bridgehead, if present"
|
||||
./bridgehead stop bbmri
|
||||
|
43
stop.sh
Executable file
43
stop.sh
Executable file
@ -0,0 +1,43 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
# Shut down a running Bridgehead.
|
||||
# Behind the scenes we use systemctl to do the work.
|
||||
|
||||
# Function to print usage
|
||||
print_usage() {
|
||||
echo "Stop the running Bridgehead"
|
||||
echo "Usage: $0 [--help | -h]"
|
||||
echo "Options:"
|
||||
echo " --help, -h Display this help message."
|
||||
echo " No options Stop Bridgehead only."
|
||||
}
|
||||
|
||||
# Parse arguments
|
||||
while [[ "$#" -gt 0 ]]; do
|
||||
case $1 in
|
||||
--help|-h)
|
||||
print_usage
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
echo "Error: Unknown argument '$1'"
|
||||
print_usage
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
shift
|
||||
done
|
||||
|
||||
# Set up systemctl for EHDS2/ECDC if necessary
|
||||
cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
|
||||
systemctl daemon-reload
|
||||
systemctl enable ecdc.service
|
||||
|
||||
# Use systemctl to stop the Bridgehead if it is running
|
||||
sudo systemctl stop ecdc.service
|
||||
|
||||
# Show status of Bridgehead service
|
||||
sleep 20
|
||||
systemctl status ecdc.service
|
||||
docker ps
|
||||
|