Mirror of https://github.com/samply/bridgehead.git
synced 2025-06-16 20:40:15 +02:00

Compare commits: ehds2_deve...feature/cb (177 commits)
SHA1:
5ca134c35d, c565d14ee4, a9c09c0d05, 2f663c45e8, ebbe64abee, 26c9e1286d, 92984d24f3, abedfdaf64,
c751b72e0e, 65eee84a4f, c0f497255c, 56c2955b5d, 0ced9b0e4e, bb23d6f25b, 8ac4e36ac1, 4e809f66cb,
ca9f88421c, 8579bc879f, 18dda72e84, 164d1a66fb, ca7772421b, 2ec44c9d48, 9231c99141, 2542c478c1,
7fb27efae3, efd26aa761, cd36ab455b, e9eccd5cab, 381633d4a0, 30760075a6, 1299ee2ab4, fe56cbbc19,
bd784e703a, 109426cf43, 40d6c7cae5, a3c7a002fd, 91903fae24, ea0435bee3, ad1b00d16e, e696d3b5dd,
b683d07f48, eea53cd877, b0ced71197, 185143c084, 0a1070c3a5, 8505057863, 72d37c87f9, 8aa851bfe4,
67831ba57b, 0545189cec, faf46f9fea, c8e215199c, c2b994e0d1, 4cd66f6689, e99913cdc3, afacdae20e,
33d955f17d, c0ff03da6b, 6582db0523, 27948c2a64, c8e7cd5f25, f2c55ade84, b20758bc3d, 8d22ea2c19,
9eaaeb5064, c9b19a9368, b0b599c96b, b7c6b15425, 8e181a85c2, b9dbfd4803, 3389145c1d, 7d8b83b10c,
d799554f86, 4a4a1d76a7, 785dff29bf, 1f9733aa4d, c5578f81ee, fa5459c4dd, 31320a856c, a340b959c2,
c7d0bcf94d, 51ca9efe11, 1ef6142306, e71495c70f, e6b7a63ef7, 86ca652e8d, d1a0153a6e, 5b88df4912,
8345a9b1f1, 3d210ea303, d8451c0426, b63eb141f6, f24dfa43ef, b6e03f3a78, 8d1a7e7374, 4b705be264,
3e79598533, 6a97630bf1, 3fdda8b8d4, de0193b99b, 08c9e6c822, 61bfd661eb, 9e0c7fb5ca, ce5f94881f,
cb332142c3, 0cad79575b, 5f77168f65, 361950ba7f, cb659b87f1, 0801ebe5a5, ceb089ddd7, f71832dd65,
17036d459e, 146417f103, 0cd0dc555b, d3ecef5f04, 0caf98224f, 412419494c, 2f7797b1f1, dcd31bfd7c,
90afe71b1b, d5924e64a3, dfa12ca686, 213ac6370a, 9732ef33b7, dc6e0a349f, cb909dbcd4, 69f1748ae7,
0cbbee6906, 7170612376, 44134550ac, 4ba5144140, 0d07a09296, 5b62a1a248, bdf943f94e, bc476ed0a8,
f351cc931c, 9dd3c24a6d, b4581e8b3a, da49437ada, 1f4c2cad03, cae40aa39a, 8679d46b62, 1438c32455,
bfd33c0c1b, 0230303bd5, a86e594e85, 0807e52160, 75f9b73e98, 90e2f2e40b, b0da23ac1c, c836a7554f,
fdd26083b6, 3dcf83e4e8, a13b851edd, a2e9a86bc0, 1bb6df65fe, 8e591773f4, d915debbbb, 2c7bdfd868,
cf255bf08b, 813a698dbb, 30090f3633, 52f311ba1c, f32d124fda, 8140b6dd7b, a63bdbde54, 7b24e2b427,
5a9ab31fa4, 85d333cfe8, 0b61fc7f29, 39981c310c, 2f72ac2dc9, 81c0db0349, 005e5a1bf0, e90c087547,
44ac09b9c1
.gitignore (vendored, 2 changes)

@@ -1,7 +1,7 @@
##Ignore site configuration
.gitmodules
site-config/*

.idea
## Ignore site configuration
*/docker-compose.override.yml
README.md (101 changes)

@@ -9,7 +9,6 @@ This repository is the starting point for any information and tools you will need
  - [Software](#software)
  - [Network](#network)
2. [Deployment](#deployment)
  - [EHDS2/ECDC](#ehds2-ecdc)
  - [Site name](#site-name)
  - [Projects](#projects)
  - [GitLab repository](#gitlab-repository)

@@ -88,8 +87,6 @@ The following URLs need to be accessible (prefix with `https://`):
* gitlab.bbmri-eric.eu
* only for German Biobank Node
  * broker.bbmri.de
* only for EHDS2/ECDC
  * ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com

> 📝 This URL list is subject to change. Instead of the individual names, we highly recommend whitelisting wildcard domains: *.dkfz.de, github.com, *.docker.com, *.docker.io, *.samply.de, *.bbmri.de.
@@ -97,34 +94,6 @@ The following URLs need to be accessible (prefix with `https://`):

## Deployment

### EHDS2/ECDC

The ECDC Bridgehead allows you to connect your site/node to the [AMR Explorer](http://ehds2-lens.swedencentral.cloudapp.azure.com/), a non-public central web site that allows certified researchers to search, Europe-wide, for information relating to antibiotic resistance. You can supply the Bridgehead with data from your site in the form of CSV files, which will then be made available to the Explorer for searching purposes.

You will need to set up some configuration before you can start a Bridgehead. This can be done as follows:
```shell
sudo mkdir -p /etc/bridgehead
sudo cp /srv/docker/bridgehead/bbmri/modules/bbmri.conf /etc/bridgehead
```

Now edit ```/etc/bridgehead/bbmri.conf``` and customize the following variables for your site:

- SITE_NAME
- SITE_ID
- OPERATOR_FIRST_NAME
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL

If you run a proxy at your site, you will also need to give values to the ```HTTP*_PROXY*``` variables.
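For illustration, the edited file might end up looking like this (a sketch only: the operator values are hypothetical placeholders, and the site values follow the "Belgium"/"belgium" examples given in the comments of ```bbmri.conf``` itself):

```shell
SITE_NAME=Belgium
SITE_ID=belgium
OPERATOR_FIRST_NAME=Jane   # hypothetical operator details
OPERATOR_LAST_NAME=Doe
OPERATOR_EMAIL=jane.doe@example.org
# Only needed if your site routes outbound traffic through a proxy:
HTTP_PROXY_URL=http://my-proxy-host:my-proxy-port
```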
When you first start the Bridgehead, it will clone two extra repositories into /srv/docker, namely ```focus``` and ```transfair```. It will automatically build local images of these repositories for you. These components have the following functionality, customized for ECDC:

- *focus.* This component is responsible for completing the CQL that is used for running queries against the Blaze FHIR store. It uses a set of templates for doing this. Extra templates have been written for the ECDC use case. They can be found in /srv/docker/focus/resources/cql/EHDS2*.
- *transfair.* This is an ETL component. It takes the CSV data that you provide, converts it to FHIR, and loads it into Blaze. It is run once, if there is data in /srv/docker/ecdc/data. A lock file in the data directory ensures that it does not get run again. Remove this lock file and restart the Bridgehead if you want to load new data, as sketched below.
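For example, to load new data at a later point (a minimal sketch: the compose file further below shows transfair touching ```/app/data/lock```, which is mounted from ```/srv/docker/ecdc/data``` on the host):

```shell
sudo rm /srv/docker/ecdc/data/lock
sudo /srv/docker/bridgehead/run.sh
```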
These images will normally be rebuilt every time you restart the Bridgehead. This is a workaround for a bug: if you don't rebuild these images on every start, legacy versions will be used and you will lose the new ECDC functionality. The reason for this is still under investigation.
### Site name

You will need to choose a short name for your site. This is not a URL, just a simple identifying string. For the examples below, we will use "your-site-name", but you should choose something that is meaningful to you and unique.

@@ -139,8 +108,6 @@ Site names should adhere to the following conventions:

### GitLab repository

You can skip this section if you are doing an ECDC/EHDS2 installation.

In order to be able to install, you will need to have your own repository in GitLab for your site's configuration settings. This allows automated updates of the Bridgehead software.

To request a new repository, please contact your research network administration or send an email to one of the project specific addresses:

@@ -163,24 +130,7 @@ During the installation, your Bridgehead will download your site's configuration

### Base Installation

Clone the bridgehead repository:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```
If this is an ECDC/EHDS2 installation, switch to the ```ehds2``` branch and copy the configuration file to the required location:
```shell
cd /srv/docker/bridgehead
sudo git checkout ehds2
sudo mkdir -p /etc/bridgehead/
sudo cp bbmri/modules/bbmri.conf /etc/bridgehead/
sudo vi /etc/bridgehead/bbmri.conf # Modify to include national node name and admin contact details
```

For an ECDC/EHDS2 installation, you will also need to copy your data, as comma-separated value (CSV) files, to ```/srv/docker/ecdc/data```. Make sure the files are readable by all. Only files with the ending ```.csv``` will be read in; all other files will be ignored. See the sketch below.
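For example (a sketch, assuming a hypothetical export file ```amr-export.csv``` in your home directory):

```shell
sudo mkdir -p /srv/docker/ecdc/data
sudo cp ~/amr-export.csv /srv/docker/ecdc/data/
sudo chmod a+r /srv/docker/ecdc/data/*.csv
```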
If this is not an ECDC/EHDS2 installation, then download your site specific configuration repository:
First, download your site specific configuration repository:
```shell
sudo mkdir -p /etc/bridgehead/
sudo git clone <REPO_URL_FROM_EMAIL> /etc/bridgehead/
```
@@ -199,6 +149,12 @@ Pay special attention to:
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL

Clone the bridgehead repository:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```

Then, run the installation script:
@@ -217,38 +173,8 @@ sudo ./bridgehead enroll <PROJECT>

... and follow the instructions on the screen. Please send your default Collection ID and the display name of your site together with the certificate request when you enroll. You should then be prompted to do the next step:

Note: if you are doing an ECDC/EHDS2 installation, you will need to perform the Beam certificate signing yourself. Do not send an email to either of the email addresses suggested by the bridgehead enroll procedure. Instead, log on to the VM where Beam is running and perform the following (you will need root permissions):
```shell
cd /srv/docker/beam-broker
sudo mkdir -p csr
sudo vi csr/ecdc-bridgehead-<national node name>.csr # Copy and paste the certificate printed during the enroll
sudo pki-scripts/managepki sign --csr-file csr/ecdc-bridgehead-<national node name>.csr --common-name=ecdc-bridgehead-<national node name>.broker.bbmri.samply.de
```

You can check that the Bridgehead has connected to Beam with the following command:
```shell
pki-scripts/managepki list
```
### Starting and stopping your Bridgehead

For an ECDC/EHDS2 installation, this is done with the help of specialized scripts.

To start:

```shell
sudo /srv/docker/bridgehead/run.sh
```

To stop (you generally won't need to do this):

```shell
sudo /srv/docker/bridgehead/stop.sh
```

For regular installations, read on.

If you followed the above steps, your Bridgehead should already be configured to autostart (via systemd). If you would like to start/stop manually:

To start, run
@@ -394,19 +320,6 @@ There will be a delay before the effects of Directory sync become visible. First

The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).

For an ECDC/EHDS2 installation, you need to provide your data as tables in CSV (comma-separated value) files and place them in the directory /srv/docker/ecdc/data. You can provide as many data files as you like, and you can add new files incrementally over time.

In order for this new data to be loaded, you will need to execute the ```run.sh``` script with the appropriate arguments:

- To read just the most recently added data files: ```/srv/docker/bridgehead/run.sh --upload```.
- To read in all data from scratch: ```/srv/docker/bridgehead/run.sh --upload-all```.

These two variants give you the choice between uploading data in an incremental way that preserves the date used for statistics, or as a single upload that date-stamps everything with the current date.

The Bridgehead can be started without data, but obviously, any searches run from the Explorer will return zero results for your site if you do that. Note that an empty data directory will automatically be created on the first start of the Bridgehead if you don't set one up yourself.

For non-ECDC setups, read on.

You can load data into this store by using its FHIR API:
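As an illustration only (a sketch; assumes the host port mapping ```8081:8080``` from the compose file below and a hypothetical transaction bundle ```bundle.json```):

```shell
curl -X POST -H "Content-Type: application/fhir+json" \
     --data @bundle.json http://localhost:8081/fhir
```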
bbmri/docker-compose.yml

@@ -1,13 +1,10 @@
version: "3.7"

# This includes only the shared persistence for BBMRI-ERIC and GBN and EHDS2. Federation components are included as modules, see vars.
# This includes only the shared persistence for BBMRI-ERIC and GBN. Federation components are included as modules, see vars.

services:
  blaze:
    #image: docker.verbis.dkfz.de/cache/samply/blaze:latest
    # Blaze versions 0.26 and 0.27 do not return anything when you run a
    # CQL query, so I am pinning the version at 0.25.
    image: samply/blaze:0.25
    image: docker.verbis.dkfz.de/cache/samply/blaze:latest
    container_name: bridgehead-bbmri-blaze
    environment:
      BASE_URL: "http://bridgehead-bbmri-blaze:8080"
@@ -23,8 +20,6 @@ services:
      - "traefik.http.services.blaze_ccp.loadbalancer.server.port=8080"
      - "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
      - "traefik.http.routers.blaze_ccp.tls=true"
    ports:
      - "8081:8080"

volumes:
  blaze-data:
bbmri/modules/bbmri.conf (deleted file)

@@ -1,81 +0,0 @@
### DO NOT EDIT THIS FILE DIRECTLY.
###
### This file is collaboratively managed by yourself and the CCP-IT team at DKFZ.
### The Bridgehead will pull it from git every night and restart if required.
### To make any changes (or review changes by CCP-IT), please login here:
### [URL_TO_SITE_SPECIFIC_GIT_REPO]
###
### DO NOT EDIT THIS FILE DIRECTLY.

### A note on Secrets:
###
### Variable with a value of <VAULT> will be fetched from a central component
### upon each bridgehead startup.
### Using the proven Vaultwarden password manager puts you in full control of
### who can read the passwords. In particular, as long as you don't declare a
### secret as shared ("SITE+DKFZ"), DKFZ cannot read these strings.
### We recommend putting credentials such as local passwords into the password
### store, not the git repo. Please keep your master password safe (vault.conf).


### Common Configuration of all Components
## This is a descriptive human readable name of your site (e.g. Belgium)
SITE_NAME=<National node>
## This is the id for your site used in machine to machine communication (should be
## lower-case, e.g. belgium)
SITE_ID=<National node>
## This server's hostname, for access from other computers within your institution
## (e.g. mybridgehead.intern.myinstitution.org)
## Optional. If left empty, this is auto-generated via the `hostname` command.
HOST=

## Proxy Configuration
# leave empty if not applicable
# eg.: http://my-proxy-host:my-proxy-port
HTTP_PROXY_URL=
HTTP_PROXY_USERNAME=
HTTP_PROXY_PASSWORD=
HTTPS_PROXY_URL=$HTTP_PROXY_URL
HTTPS_PROXY_USERNAME=$HTTP_PROXY_USERNAME
HTTPS_PROXY_PASSWORD=$HTTP_PROXY_PASSWORD

## Maintenance Configuration
# By default, the bridgehead regularly performs certain housekeeping tasks such as pruning of old docker images to not run out of disk space.
# Set the following to false to opt-out. (Default: true)
#AUTO_HOUSEKEEPING=

### Connector Configuration
## The operator of the specific site.
OPERATOR_FIRST_NAME=
OPERATOR_LAST_NAME=
OPERATOR_EMAIL=
OPERATOR_PHONE=
## SMTP Server
# ex.: mailhost.intern.klinik.de
MAIL_HOST=
MAIL_PORT=
# ex.: no-reply@bridgehead.intern.klinik.de
MAIL_FROM_ADDRESS=
MAIL_FROM_NAME=

### Monitoring
# The apikey used for reporting to the central DKFZ monitoring. Leave empty to opt out.
MONITOR_APIKEY=

### Biobanking (BBMRI) specifics
## We consider BBMRI as BBMRI-ERIC (European) and German Biobank Node (Germany).
## Obviously, all German biobanks are by definition also European. Thus,
## any Bridgehead will by default connect to the BBMRI-ERIC services but not
## the national ones. We aim to proceed similarly for other BBMRI-ERIC National Nodes.
##
## The default values are correct for biobanks outside Germany.
## For a biobank inside Germany, set ENABLE_GBN=true.
# Connect to the European services, e.g. BBMRI-ERIC Sample Locator (Default: true)
ENABLE_ERIC=false
# Connect to the German services, e.g. Biobank Node Sample Locator (Default: false)
# Set this to true in German biobanks!
ENABLE_GBN=false
# Connect to the ECDC services, e.g. ECDC Sample Locator (Default: false)
# Set this to true in ECDC national nodes!
ENABLE_EHDS2=true
@@ -1,3 +1,5 @@
version: "3.7"

services:
  directory_sync_service:
    image: "docker.verbis.dkfz.de/cache/samply/directory_sync_service"
bbmri/modules/ehds2-compose.yml (deleted file)

@@ -1,82 +0,0 @@
version: "3.7"

services:
  focus-ehds2:
    #image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
    image: samply/focus
    container_name: bridgehead-focus-ehds2
    environment:
      API_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
      BEAM_APP_ID_LONG: focus.${EHDS2_PROXY_ID}
      PROXY_ID: ${EHDS2_PROXY_ID}
      BLAZE_URL: "http://blaze:8080/fhir/"
      BEAM_PROXY_URL: http://beam-proxy-ehds2:8081
      RETRY_COUNT: ${FOCUS_RETRY_COUNT}
      OBFUSCATE: "no"
    depends_on:
      - "beam-proxy-ehds2"
      - "blaze"

  beam-proxy-ehds2:
    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
    container_name: bridgehead-beam-proxy-ehds2
    environment:
      BROKER_URL: ${EHDS2_BROKER_URL}
      PROXY_ID: ${EHDS2_PROXY_ID}
      APP_focus_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
      PRIVKEY_FILE: /run/secrets/proxy.pem
      ALL_PROXY: http://forward_proxy:3128
      TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
      ROOTCERT_FILE: /conf/root.crt.pem
    secrets:
      - proxy.pem
    depends_on:
      - "forward_proxy"
    volumes:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/bbmri/modules/${EHDS2_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro

  # Convert ECDC CSV file into FHIR and push to Blaze
  transfair:
    container_name: transfair
    image: samply/transfair
    environment:
      FHIR_INPUT_URL: "http://source_blaze:8080/fhir"
      FHIR_OUTPUT_URL: "http://bridgehead-bbmri-blaze:8080/fhir"
      PROFILE: "amr2fhir"
      #WRITE_BUNDLES_TO_FILE: "true"
      AMR_FILE_PATH: "/app/data"
    restart: on-failure
    # The start up logic for TransFAIR is kind of complicated for the ECDC/EHDS2
    # pilot. This is because we only want to run it if 1. there are source data
    # files to be transformed and 2. if there is no lock file. We also need to
    # wait for Blaze to start, TransFAIR does not check for this. And finally,
    # once TransFAIR has finished loading data, a lock file is created, to stop
    # a time-consuming repeat run.
    command: bash -c " \
      echo listing /app/data && \
      ls -la /app/data && \
      ls /app/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && \
      [ ! -f /app/data/lock ] && \
      ( \
        echo 'Wait for Blaze to finish initializing' ; \
        sleep 360 ; \
        echo 'Remove old output files' ; \
        rm -rf /app/test/* ; \
        cd /app ; \
        echo 'Run TransFAIR' ; \
        java -jar transFAIR.jar ; \
        echo 'Touching lock file' ; \
        touch /app/data/lock \
      ) & tail -f /dev/null"
    # If you put .csv files into ./../ecdc/data, TransFAIR will try to process them.
    volumes:
      - ../../ecdc/test:/app/test/
      - ../../ecdc/data:/app/data/

  # Report on the data pushed to Blaze by TransFAIR
  test-data-loader:
    container_name: test-data-loader
    image: samply/test-data-loader
    command: sh -c "sleep 420 && echo Listing all resources in FHIR store && blazectl --server http://bridgehead-bbmri-blaze:8080/fhir count-resources && tail -f /dev/null"
bbmri/modules/ehds2-setup.sh (deleted file)

@@ -1,28 +0,0 @@
#!/bin/bash

if [ "${ENABLE_EHDS2}" == "true" ]; then
  log INFO "EHDS2 setup detected -- will start services for German Biobank Node."
  OVERRIDE+=" -f ./$PROJECT/modules/ehds2-compose.yml"

  # The environment needs to be defined in /etc/bridgehead
  case "$ENVIRONMENT" in
    "production")
      export EHDS2_BROKER_ID=broker.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2
      ;;
    "test")
      export EHDS2_BROKER_ID=broker.test.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2.test
      ;;
    *)
      report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
      export EHDS2_BROKER_ID=broker.bbmri.samply.de
      export EHDS2_ROOT_CERT=ehds2
      ;;
  esac

  EHDS2_BROKER_URL=https://${EHDS2_BROKER_ID}
  EHDS2_PROXY_ID=${SITE_ID}.${EHDS2_BROKER_ID}
  EHDS2_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
  EHDS2_SUPPORT_EMAIL=feedback@germanbiobanknode.de
fi
@@ -1,22 +0,0 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUMy/n0zFRihhVR3aAD54LumzeYdwwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIxMDI1MDczNTA4WhcNMzIx
MDIyMDczNTM3WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL3qWliHIlIT1Qlsyq/NKJ1uj6/AF0STNg5NTNpb
Xqe5rmUqs6jmQepputGStBVe5TthFw56whISv9FqD5s1PZUGyFikW1pJUnF7ZYRf
MfrJHRi1vUnD3Gw36FCot+i6BAxfw/rdp9hoqFZ6erRkULLaYZ5S2cDHN0DWc18V
3VgZ66ah8QXSx7ERRNa/eWRkHrPIYhyVSoKuyZfvbVgsYZADSlviCgIHPrGLerLr
ylNUyuTxJ5RKStOwPn7A+Jp7nRT+MRh9BphA7s6NuK9h+eVe1DiLbIETWyCEfN3Y
INpunatn3QDhqOIfNcuBArjsAj7mg8l5KNba8nUP4v0EJYECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMvc5Fizz1vO
MEG3MIsy7UY69ZNIMB8GA1UdIwQYMBaAFMvc5Fizz1vOMEG3MIsy7UY69ZNIMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQBb8a5su820
h8JStJC+KpvXmDrGkwx9bHlEZMgQQejIrwPLEbA32KBvNxdoUxF9q1Y773MKdqbc
cCJwzQXE/NPZ13hCGrEIXs8DgH52GhEB5592k5/bRNcAvUwbZSXPPiT0rgq/eUOt
BYhgN0ov7h1MC5L6CYB/rQwqck7JPlmrXTkh2gix4/dEdBRzsHsn/xlo8ay5QYHG
rx2Adit76eZu/MJoJNzl1r8MPxLqyAie3KcIU54A+UMozLrWEQP/TyOyWZdjUjJt
cBYgkKJTjwdRhc+ehI3kFo7b/a/Z/jl9szKsAPHozMixSi8lGnsYwN80oqeRvT7h
wcMUK+igv3/K
-----END CERTIFICATE-----
@@ -1,22 +0,0 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
vZdXnV1qjlM2
-----END CERTIFICATE-----
bbmri/vars (15 changes)

@@ -4,10 +4,7 @@
# Makes only sense for German Biobanks
: ${ENABLE_GBN:=false}

# Makes only sense for EHDS2 project
: ${ENABLE_EHDS2:=false}

FOCUS_RETRY_COUNT=128
FOCUS_RETRY_COUNT=32
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem

for module in $PROJECT/modules/*.sh
@@ -17,16 +14,12 @@ do
done

SUPPORT_EMAIL=$ERIC_SUPPORT_EMAIL
BROKER_URL_FOR_PREREQ="https://ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com"
BROKER_URL_FOR_PREREQ="${ERIC_BROKER_URL:-$GBN_BROKER_URL}"

if [ -n "$GBN_SUPPORT_EMAIL" ]; then
  SUPPORT_EMAIL=$GBN_SUPPORT_EMAIL
fi

if [ -n "$EHDS2_SUPPORT_EMAIL" ]; then
  SUPPORT_EMAIL=$EHDS2_SUPPORT_EMAIL
fi

function do_enroll {
  COUNT=0
  if [ "$ENABLE_ERIC" == "true" ]; then
@@ -37,10 +30,6 @@ function do_enroll {
    do_enroll_inner $GBN_PROXY_ID $GBN_SUPPORT_EMAIL
    COUNT=$((COUNT+1))
  fi
  if [ "$ENABLE_EHDS2" == "true" ]; then
    do_enroll_inner $EHDS2_PROXY_ID $EHDS2_SUPPORT_EMAIL
    COUNT=$((COUNT+1))
  fi
  if [ $COUNT -ge 2 ]; then
    echo
    echo "You just received $COUNT certificate signing requests (CSR). Please send $COUNT e-mails, with 1 CSR each, to the respective e-mail address."
bridgehead (17 changes)

@@ -41,6 +41,7 @@ case "$PROJECT" in
    ;;
esac

# TODO: Please add proper documentation for variable priorities (1. secrets, 2. vars, 3. PROJECT.local.conf, 4. PROJECT.conf, 5. ???
loadVars() {
  # Load variables from /etc/bridgehead and /srv/docker/bridgehead
  set -a
@@ -50,6 +51,7 @@ loadVars() {
    source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
  fi
  fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
  setHostname
  [ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
  set +a

@@ -64,7 +66,6 @@ loadVars() {
    OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml"
  fi
  detectCompose
  setHostname
  setupProxy

  # Set some project-independent default values
@@ -88,20 +89,8 @@ case "$ACTION" in
  start)
    loadVars
    hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
    chown -R bridgehead ${BASE}
    checkRequirements
    # Note: changes to "bridgehead" script will only take effect after next start.
    su bridgehead -c "git pull"
    chown -R bridgehead ${BASE}
    # Local versions of focus and transfair are needed by EHDS2
    clone_focus_if_nonexistent ${BASE}/..
    build_focus ${BASE}/..
    clone_transfair_if_nonexistent ${BASE}/..
    build_transfair ${BASE}/..
    # Location for input data and results for EHDS2
    mkdir -p ${BASE}/../ecdc/test
    mkdir -p ${BASE}/../ecdc/data
    chown -R bridgehead ${BASE}/../ecdc
    sync_secrets
    hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
    exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
    ;;
ccp/docker-compose.yml

@@ -52,6 +52,12 @@ services:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro

  traefik:
    labels:
      - "traefik.http.middlewares.oidcAuth.forwardAuth.address=http://oauth2_proxy:4180/"
      - "traefik.http.middlewares.oidcAuth.forwardAuth.trustForwardHeader=true"
      - "traefik.http.middlewares.oidcAuth.forwardAuth.authResponseHeaders=X-Auth-Request-Access-Token,Authorization"

volumes:
  blaze-data:
ccp/modules/cbioportal-compose.yml (new file, 53 lines)

@@ -0,0 +1,53 @@
version: '3.7'

services:
  cbioportal:
    # image: docker.verbis.dkfz.de/ccp/dktk-cbioportal:latest
    image: dktk-cbioportal
    container_name: bridgehead-cbioportal
    environment:
      DB_PASSWORD: ${CBIOPORTAL_DB_PASSWORD}
      HTTP_RELATIVE_PATH: "/cbioportal"
      UPLOAD_HTTP_RELATIVE_PATH: "/cbioportal-upload"
    depends_on:
      - cbioportal-database
      - cbioportal-session
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.cbioportal.rule=PathPrefix(`/cbioportal`)"
      - "traefik.http.routers.cbioportal.service=cbioportal"
      - "traefik.http.services.cbioportal.loadbalancer.server.port=8080"
      - "traefik.http.routers.cbioportal.tls=true"
      - "traefik.http.routers.cbioportal-upload.rule=PathPrefix(`/cbioportal-upload`)"
      - "traefik.http.routers.cbioportal-upload.service=cbioportal-upload"
      - "traefik.http.routers.cbioportal-upload.tls=true"
      - "traefik.http.services.cbioportal-upload.loadbalancer.server.port=8001"

  cbioportal-database:
    image: docker.verbis.dkfz.de/ccp/dktk-cbioportal-database:latest
    container_name: bridgehead-cbioportal-database
    environment:
      MYSQL_DATABASE: cbioportal
      MYSQL_USER: cbio_user
      MYSQL_PASSWORD: ${CBIOPORTAL_DB_PASSWORD}
      MYSQL_ROOT_PASSWORD: ${CBIOPORTAL_DB_ROOT_PASSWORD}
    volumes:
      - /var/cache/bridgehead/ccp/cbioportal_db_data:/var/lib/mysql

  cbioportal-session:
    image: cbioportal/session-service:0.6.1
    container_name: bridgehead-cbioportal-session
    environment:
      SERVER_PORT: 5000
      JAVA_OPTS: -Dspring.data.mongodb.uri=mongodb://cbioportal-session-database:27017/session-service
    depends_on:
      - cbioportal-session-database

  cbioportal-session-database:
    image: mongo:4.2
    container_name: bridgehead-cbioportal-session-database
    environment:
      MONGO_INITDB_DATABASE: session_service
    volumes:
      - /var/cache/bridgehead/ccp/cbioportal_session_db_data:/data/db
ccp/modules/cbioportal-setup.sh (new file, 8 lines)

@@ -0,0 +1,8 @@
#!/bin/bash -e

if [ "$ENABLE_CBIOPORTAL" == true ]; then
  log INFO "cBioPortal setup detected -- will start cBioPortal service."
  OVERRIDE+=" -f ./$PROJECT/modules/cbioportal-compose.yml"
  # Signing a fixed, public salt string with the site's private key always yields the
  # same bytes, so both passwords are deterministic across restarts without being stored.
  CBIOPORTAL_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the cbioportal database. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  CBIOPORTAL_DB_ROOT_PASSWORD="$(echo \"This is a salt string to generate one consistent root password for the cbioportal database. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
fi
ccp/modules/cbioportal.md (new file, 10 lines)

@@ -0,0 +1,10 @@
# cBioPortal Data Uploader

## Usage

We have integrated an API that allows you to upload data directly to cBioPortal without needing to have cBioPortal installed on your system.
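As an illustration only (a sketch: the ```/cbioportal-upload``` route comes from the compose file above, but the exact endpoint path, form field, and file format are assumptions):

```shell
# hypothetical study archive and form field name
curl -X POST -F "file=@study.tar.gz" https://<your-bridgehead-host>/cbioportal-upload
```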
## Tech stack

We used Flask to add this feature.
ccp/modules/datashield-compose.yml (new file, 161 lines)

@@ -0,0 +1,161 @@
version: "3.7"

services:
  rstudio:
    container_name: bridgehead-rstudio
    image: docker.verbis.dkfz.de/ccp/dktk-rstudio:latest
    environment:
      #DEFAULT_USER: "rstudio" # This line is kept for informational purposes
      PASSWORD: "${RSTUDIO_ADMIN_PASSWORD}" # It is required, even if the authentication is disabled
      DISABLE_AUTH: "true" # https://rocker-project.org/images/versioned/rstudio.html#how-to-use
      HTTP_RELATIVE_PATH: "/rstudio"
      ALL_PROXY: "http://forward_proxy:3128" # https://rocker-project.org/use/networking.html
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.rstudio_ccp.rule=PathPrefix(`/rstudio`)"
      - "traefik.http.services.rstudio_ccp.loadbalancer.server.port=8787"
      - "traefik.http.middlewares.rstudio_ccp_strip.stripprefix.prefixes=/rstudio"
      - "traefik.http.routers.rstudio_ccp.tls=true"
      - "traefik.http.routers.rstudio_ccp.middlewares=oidcAuth,rstudio_ccp_strip"
    networks:
      - rstudio

  opal:
    container_name: bridgehead-opal
    image: docker.verbis.dkfz.de/ccp/dktk-opal:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.opal_ccp.rule=PathPrefix(`/opal`)"
      - "traefik.http.services.opal_ccp.loadbalancer.server.port=8080"
      - "traefik.http.routers.opal_ccp.tls=true"
    links:
      - opal-rserver
      - opal-db
    environment:
      JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC -Dhttps.proxyHost=forward_proxy -Dhttps.proxyPort=3128"
      # OPAL_ADMINISTRATOR_USER: "administrator" # This line is kept for informational purposes
      OPAL_ADMINISTRATOR_PASSWORD: "${OPAL_ADMIN_PASSWORD}"
      POSTGRESDATA_HOST: "opal-db"
      POSTGRESDATA_DATABASE: "opal"
      POSTGRESDATA_USER: "opal"
      POSTGRESDATA_PASSWORD: "${OPAL_DB_PASSWORD}"
      ROCK_HOSTS: "opal-rserver:8085"
      APP_URL: "https://${HOST}/opal"
      APP_CONTEXT_PATH: "/opal"
      OPAL_PRIVATE_KEY: "/run/secrets/opal-key.pem"
      OPAL_CERTIFICATE: "/run/secrets/opal-cert.pem"
      KEYCLOAK_URL: "${KEYCLOAK_URL}"
      KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
      KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PRIVATE_CLIENT_ID}"
      KEYCLOAK_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
      KEYCLOAK_ADMIN_GROUP: "${KEYCLOAK_ADMIN_GROUP}"
      TOKEN_MANAGER_PASSWORD: "${TOKEN_MANAGER_OPAL_PASSWORD}"
      EXPORTER_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
      BEAM_APP_ID: token-manager.${PROXY_ID}
      BEAM_SECRET: ${TOKEN_MANAGER_SECRET}
      BEAM_DATASHIELD_PROXY: request-manager
    volumes:
      - "/var/cache/bridgehead/ccp/opal-metadata-db:/srv" # Opal metadata
    secrets:
      - opal-cert.pem
      - opal-key.pem

  opal-db:
    container_name: bridgehead-opal-db
    image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
    environment:
      POSTGRES_PASSWORD: "${OPAL_DB_PASSWORD}" # Set in datashield-setup.sh
      POSTGRES_USER: "opal"
      POSTGRES_DB: "opal"
    volumes:
      - "/var/cache/bridgehead/ccp/opal-db:/var/lib/postgresql/data" # Opal project data (imported from exporter)

  opal-rserver:
    container_name: bridgehead-opal-rserver
    image: docker.verbis.dkfz.de/ccp/dktk-rserver # datashield/rock-base + dsCCPhos
    tmpfs:
      - /srv

  beam-connect:
    image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
    container_name: bridgehead-datashield-connect
    environment:
      PROXY_URL: "http://beam-proxy:8081"
      TLS_CA_CERTIFICATES_DIR: /run/secrets
      APP_ID: datashield-connect.${SITE_ID}.${BROKER_ID}
      PROXY_APIKEY: ${DATASHIELD_CONNECT_SECRET}
      DISCOVERY_URL: "./map/central.json"
      LOCAL_TARGETS_FILE: "./map/local.json"
      NO_AUTH: "true"
    secrets:
      - opal-cert.pem
    depends_on:
      - beam-proxy
    volumes:
      - /tmp/bridgehead/opal-map/:/map/:ro
    networks:
      - default
      - rstudio

  traefik:
    networks:
      - default
      - rstudio

  forward_proxy:
    networks:
      - default
      - rstudio

  beam-proxy:
    environment:
      APP_datashield-connect_KEY: ${DATASHIELD_CONNECT_SECRET}
      APP_token-manager_KEY: ${TOKEN_MANAGER_SECRET}

  # TODO: Allow users of group /DataSHIELD and KEYCLOAK_USER_GROUP at the same time:
  # Maybe a solution would be (https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview/):
  # --allowed-groups=/DataSHIELD,KEYCLOAK_USER_GROUP
  oauth2_proxy:
    image: quay.io/oauth2-proxy/oauth2-proxy
    container_name: bridgehead_oauth2_proxy
    command: >-
      --allowed-group=/DataSHIELD
      --oidc-groups-claim=${KEYCLOAK_GROUP_CLAIM}
      --auth-logging=true
      --whitelist-domain=${HOST}
      --http-address="0.0.0.0:4180"
      --reverse-proxy=true
      --upstream="static://202"
      --email-domain="*"
      --cookie-name="_BRIDGEHEAD_oauth2"
      --cookie-secret="${OAUTH2_PROXY_SECRET}"
      --cookie-expire="12h"
      --cookie-secure="true"
      --cookie-httponly="true"
      #OIDC settings
      --provider="keycloak-oidc"
      --provider-display-name="VerbIS Login"
      --client-id="${KEYCLOAK_PRIVATE_CLIENT_ID}"
      --client-secret="${OIDC_CLIENT_SECRET}"
      --redirect-url="https://${HOST}${OAUTH2_CALLBACK}"
      --oidc-issuer-url="${KEYCLOAK_ISSUER_URL}"
      --scope="openid email profile"
      --code-challenge-method="S256"
      --skip-provider-button=true
      #X-Forwarded-Header settings - true/false depending on your needs
      --pass-basic-auth=true
      --pass-user-headers=false
      --pass-access-token=false
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.oauth2_proxy.rule=Host(`${HOST}`) && PathPrefix(`/oauth2`, `/oauth2/callback`)"
      - "traefik.http.services.oauth2_proxy.loadbalancer.server.port=4180"
      - "traefik.http.routers.oauth2_proxy.tls=true"

secrets:
  opal-cert.pem:
    file: /tmp/bridgehead/opal-cert.pem
  opal-key.pem:
    file: /tmp/bridgehead/opal-key.pem

networks:
  rstudio:
ccp/modules/datashield-import-template.xml (new file, 157 lines)

@@ -0,0 +1,157 @@
<template id="opal-ccp" source-id="blaze-store" opal-project="ccp-demo" target-id="opal" >

  <container csv-filename="Patient-${TIMESTAMP}.csv" opal-table="patient" opal-entity-type="Patient">
    <attribute csv-column="patient-id" opal-value-type="text" primary-key="true" val-fhir-path="Patient.id.value" anonym="Pat" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="dktk-id-global" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Global').value.value"/>
    <attribute csv-column="dktk-id-lokal" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Lokal').value.value" />
    <attribute csv-column="geburtsdatum" opal-value-type="date" val-fhir-path="Patient.birthDate.value"/>
    <attribute csv-column="geschlecht" opal-value-type="text" val-fhir-path="Patient.gender.value" />
    <attribute csv-column="datum_des_letztbekannten_vitalstatus" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '75186-7').effective.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
    <attribute csv-column="vitalstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '75186-7').value.coding.code.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
    <!--missing in ADT2FHIR--><attribute csv-column="tod_tumorbedingt" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://fhir.de/CodeSystem/bfarm/icd-10-gm').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
    <!--missing in ADT2FHIR--><attribute csv-column="todesursachen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/JNUCS').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
  </container>

  <container csv-filename="Diagnosis-${TIMESTAMP}.csv" opal-table="diagnosis" opal-entity-type="Diagnosis">
    <attribute csv-column="diagnosis-id" primary-key="true" opal-value-type="text" val-fhir-path="Condition.id.value" anonym="Dia" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Condition.subject.reference.value" anonym="Pat"/>
    <attribute csv-column="primaerdiagnose" opal-value-type="text" val-fhir-path="Condition.code.coding.code.value"/>
    <attribute csv-column="tumor_diagnosedatum" opal-value-type="date" val-fhir-path="Condition.onset.value"/>
    <attribute csv-column="primaertumor_diagnosetext" opal-value-type="text" val-fhir-path="Condition.code.text.value"/>
    <attribute csv-column="version_des_icd-10_katalogs" opal-value-type="integer" val-fhir-path="Condition.code.coding.version.value"/>
    <attribute csv-column="lokalisation" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').code.value"/>
    <attribute csv-column="icd-o_katalog_topographie_version" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').version.value"/>
    <attribute csv-column="seitenlokalisation_nach_adt-gekid" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/SeitenlokalisationCS').code.value"/>
  </container>

  <container csv-filename="Progress-${TIMESTAMP}.csv" opal-table="progress" opal-entity-type="Progress">
    <!--it would be better to generate an ID, instead of extracting the ClinicalImpression id-->
    <attribute csv-column="progress-id" primary-key="true" opal-value-type="text" val-fhir-path="ClinicalImpression.id.value" anonym="Pro" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="ClinicalImpression.problem.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="ClinicalImpression.subject.reference.value" anonym="Pat" />
    <attribute csv-column="untersuchungs-_befunddatum_im_verlauf" opal-value-type="date" val-fhir-path="ClinicalImpression.effective.value" />
    <!-- just for evaluation: redundant to Untersuchungs-, Befunddatum im Verlauf-->
    <attribute csv-column="datum_lokales_oder_regionaeres_rezidiv" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').effective.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
    <attribute csv-column="gesamtbeurteilung_tumorstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21976-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
    <attribute csv-column="lokales_oder_regionaeres_rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
    <attribute csv-column="lymphknoten-rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4370-8').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
    <attribute csv-column="fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4226-2').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
  </container>

  <container csv-filename="Histology-${TIMESTAMP}.csv" opal-table="histology" opal-entity-type="Histology" >
    <attribute csv-column="histology-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').focus.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').subject.reference.value" anonym="Pat" />
    <attribute csv-column="histologie_datum" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '59847-4').effective.value"/>
    <attribute csv-column="icd-o_katalog_morphologie_version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.version.value" />
    <attribute csv-column="morphologie" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
    <attribute csv-column="morphologie-freitext" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.text.value"/>
    <attribute csv-column="grading" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
  </container>

  <container csv-filename="Metastasis-${TIMESTAMP}.csv" opal-table="metastasis" opal-entity-type="Metastasis" >
    <attribute csv-column="metastasis-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').id" anonym="Met" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').focus.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').subject.reference.value" anonym="Pat" />
    <attribute csv-column="datum_fernmetastasen" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21907-1').effective.value"/>
    <attribute csv-column="fernmetastasen_vorhanden" opal-value-type="boolean" val-fhir-path="Observation.where(code.coding.code = '21907-1').value.coding.code.value"/>
    <attribute csv-column="lokalisation_fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').bodySite.coding.code.value"/>
  </container>

  <container csv-filename="TNM-${TIMESTAMP}.csv" opal-table="tnm" opal-entity-type="TNM">
    <attribute csv-column="tnm-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').id" anonym="TNM" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').focus.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').subject.reference.value" anonym="Pat" />
    <attribute csv-column="datum_der_tnm_dokumentation_datum_befund" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').effective.value"/>
    <attribute csv-column="uicc_stadium" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.code.value"/>
    <attribute csv-column="tnm-t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').value.coding.code.value"/>
    <attribute csv-column="tnm-n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').value.coding.code.value"/>
    <attribute csv-column="tnm-m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').value.coding.code.value"/>
    <attribute csv-column="c_p_u_preefix_t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
    <attribute csv-column="c_p_u_preefix_n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
    <attribute csv-column="c_p_u_preefix_m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
    <attribute csv-column="tnm-y-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '59479-6' or code.coding.code = '59479-6').value.coding.code.value"/>
    <attribute csv-column="tnm-r-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21983-2' or code.coding.code = '21983-2').value.coding.code.value"/>
    <attribute csv-column="tnm-m-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '42030-7' or code.coding.code = '42030-7').value.coding.code.value"/>
    <!--only for UICC, not in ADT2FHIR--><attribute csv-column="tnm-version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.version.value"/>
  </container>

  <container csv-filename="System-Therapy-${TIMESTAMP}.csv" opal-table="system-therapy" opal-entity-type="SystemTherapy">
    <attribute csv-column="system-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="MedicationStatement.id" anonym="Sys" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="MedicationStatement.reasonReference.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="MedicationStatement.subject.reference.value" anonym="Pat" />
    <attribute csv-column="systemische_therapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
    <attribute csv-column="intention_chemotherapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value"/>
    <attribute csv-column="therapieart" opal-value-type="text" val-fhir-path="MedicationStatement.category.coding.code.value"/>
    <attribute csv-column="systemische_therapie_beginn" opal-value-type="date" val-fhir-path="MedicationStatement.effective.start.value"/>
    <attribute csv-column="systemische_therapie_ende" opal-value-type="date" val-fhir-path="MedicationStatement.effective.end.value"/>
    <attribute csv-column="systemische_therapie_protokoll" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SystemischeTherapieProtokoll').value.text.value"/>
    <attribute csv-column="systemische_therapie_substanzen" opal-value-type="text" val-fhir-path="MedicationStatement.medication.text.value"/>
    <attribute csv-column="chemotherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'CH').exists().value" />
    <attribute csv-column="hormontherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'HO').exists().value" />
    <attribute csv-column="immuntherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'IM').exists().value" />
    <attribute csv-column="knochenmarktransplantation" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'KM').exists().value" />
    <attribute csv-column="abwartende_strategie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'WS').exists().value" />
  </container>

  <container csv-filename="Surgery-${TIMESTAMP}.csv" opal-table="surgery" opal-entity-type="Surgery">
    <attribute csv-column="surgery-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').id" anonym="Sur" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').reasonReference.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').subject.reference.value" anonym="Pat" />
    <attribute csv-column="ops-code" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').code.coding.code.value"/>
    <attribute csv-column="datum_der_op" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'OP').performed.value"/>
    <attribute csv-column="intention_op" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-OPIntention').value.coding.code.value"/>
    <attribute csv-column="lokale_beurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/LokaleBeurteilungResidualstatusCS').code.value" />
    <attribute csv-column="gesamtbeurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/GesamtbeurteilungResidualstatusCS').code.value" />
  </container>

  <container csv-filename="Radiation-Therapy-${TIMESTAMP}.csv" opal-table="radiation-therapy" opal-entity-type="RadiationTherapy">
    <attribute csv-column="radiation-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').reasonReference.reference.value" anonym="Dia"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').subject.reference.value" anonym="Pat" />
    <attribute csv-column="strahlentherapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
    <attribute csv-column="intention_strahlentherapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value" />
    <attribute csv-column="strahlentherapie_beginn" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.start.value"/>
    <attribute csv-column="strahlentherapie_ende" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.end.value"/>
  </container>

  <container csv-filename="Molecular-Marker-${TIMESTAMP}.csv" opal-table="molecular-marker" opal-entity-type="MolecularMarker">
    <attribute csv-column="mol-marker-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').id" anonym="Mol" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').focus.reference.value" anonym="Dia" />
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').subject.reference.value" anonym="Pat" />
    <attribute csv-column="datum_der_datenerhebung" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '69548-6').effective.value"/>
    <attribute csv-column="marker" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').component.value.coding.code.value"/>
    <attribute csv-column="status_des_molekularen_markers" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.coding.code.value" />
    <attribute csv-column="zusaetzliche_alternative_dokumentation" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.text.value"/>
  </container>

  <container csv-filename="Sample-${TIMESTAMP}.csv" opal-table="sample" opal-entity-type="Sample">
    <attribute csv-column="sample-id" primary-key="true" opal-value-type="text" val-fhir-path="Specimen.id" anonym="Sam" op="EXTRACT_RELATIVE_ID"/>
    <attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Specimen.subject.reference.value" anonym="Pat" />
    <attribute csv-column="entnahmedatum" opal-value-type="date" val-fhir-path="Specimen.collection.collectedDateTime.value"/>
    <attribute csv-column="probenart" opal-value-type="text" val-fhir-path="Specimen.type.coding.code.value"/>
    <attribute csv-column="status" opal-value-type="text" val-fhir-path="Specimen.status.code.value"/>
    <attribute csv-column="projekt" opal-value-type="text" val-fhir-path="Specimen.identifier.system.value"/>
    <!-- @TODO: it is still necessary to clarify whether it would not be better to take the quantity of collection.quantity -->
    <attribute csv-column="menge" opal-value-type="integer" val-fhir-path="Specimen.container.specimenQuantity.value.value"/>
    <attribute csv-column="einheit" opal-value-type="text" val-fhir-path="Specimen.container.specimenQuantity.unit.value"/>
    <attribute csv-column="aliquot" opal-value-type="text" val-fhir-path="Specimen.parent.reference.exists().value" />
  </container>

  <fhir-rev-include>Observation:patient</fhir-rev-include>
  <fhir-rev-include>Condition:patient</fhir-rev-include>
  <fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
  <fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
  <fhir-rev-include>Procedure:patient</fhir-rev-include>
  <fhir-rev-include>Specimen:patient</fhir-rev-include>

</template>
ccp/modules/datashield-mappings.json (new file, 13 lines)
@ -0,0 +1,13 @@
[
    "berlin",
    "muenchen-lmu",
    "dresden",
    "freiburg",
    "muenchen-tum",
    "tuebingen",
    "mainz",
    "frankfurt",
    "essen",
    "dktk-datashield-test",
    "dktk-test"
]
ccp/modules/datashield-setup.sh (new file, 33 lines)
@ -0,0 +1,33 @@
#!/bin/bash -e

if [ "$ENABLE_DATASHIELD" == true ]; then
  log INFO "DataSHIELD setup detected -- will start DataSHIELD services."
  OVERRIDE+=" -f ./$PROJECT/modules/datashield-compose.yml"
  EXPORTER_OPAL_PASSWORD="$(generate_password \"exporter in Opal\")"
  TOKEN_MANAGER_OPAL_PASSWORD="$(generate_password \"Token Manager in Opal\")"
  # Note: generate_simple_password takes the seed as its first argument and
  # ignores stdin, so it must be called with an argument, not via a pipe.
  OPAL_DB_PASSWORD="$(generate_simple_password "Opal DB")"
  OPAL_ADMIN_PASSWORD="$(generate_password \"admin password for Opal\")"
  RSTUDIO_ADMIN_PASSWORD="$(generate_password \"admin password for R-Studio\")"
  DATASHIELD_CONNECT_SECRET="$(generate_simple_password "DataShield Connect")"
  TOKEN_MANAGER_SECRET="$(generate_simple_password "Token Manager")"
  if [ ! -e /tmp/bridgehead/opal-cert.pem ]; then
    mkdir -p /tmp/bridgehead/
    chown -R bridgehead:docker /tmp/bridgehead/
    openssl req -x509 -newkey rsa:4096 -nodes -keyout /tmp/bridgehead/opal-key.pem -out /tmp/bridgehead/opal-cert.pem -days 3650 -subj "/CN=opal/C=DE"
    chmod g+r /tmp/bridgehead/opal-key.pem
  fi
  mkdir -p /tmp/bridgehead/opal-map
  jq -n '{"sites": input | map({
    "name": .,
    "id": .,
    "virtualhost": "\(.):443",
    "beamconnect": "datashield-connect.\(.).'"$BROKER_ID"'"
  })}' ./$PROJECT/modules/datashield-mappings.json > /tmp/bridgehead/opal-map/central.json
  jq -n '[{
    "external": "'"$SITE_ID"':443",
    "internal": "opal:8443",
    "allowed": input | map("datashield-connect.\(.).'"$BROKER_ID"'")
  }]' ./$PROJECT/modules/datashield-mappings.json > /tmp/bridgehead/opal-map/local.json
  chown -R bridgehead:docker /tmp/bridgehead/
  add_private_oidc_redirect_url "/opal/*"
fi
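As a worked illustration of the two jq calls above (an editor's sketch, not part of the changeset; the two-site mappings file and the BROKER_ID value are hypothetical):

```bash
# Illustration: what the central mapping looks like for two sites with a
# hypothetical BROKER_ID of broker.example.de.
echo '["berlin","dresden"]' > /tmp/mappings.json
jq -n '{"sites": input | map({
  "name": ., "id": .,
  "virtualhost": "\(.):443",
  "beamconnect": "datashield-connect.\(.).broker.example.de"
})}' /tmp/mappings.json
# -> {"sites":[{"name":"berlin","id":"berlin","virtualhost":"berlin:443",
#      "beamconnect":"datashield-connect.berlin.broker.example.de"}, ...]}
```

Each site entry thus pairs the Opal virtual host with the Beam-Connect endpoint that traffic for that site is routed through.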
ccp/modules/datashield.md (new file, 28 lines)
@ -0,0 +1,28 @@
# DataSHIELD
This module provides the infrastructure to run DataSHIELD within the bridgehead.
For more information about DataSHIELD, please visit https://www.datashield.org/

## R-Studio
To connect to the different bridgeheads of the CCP through DataSHIELD, you can use your own R-Studio environment.
However, this R-Studio instance already has the DataSHIELD libraries installed and is integrated within the bridgehead.
This can save you some extra configuration of your own R-Studio environment.

## Opal
This is the core of DataSHIELD. It is made up of Opal, a Postgres database and an R-server.
For more information about Opal, please visit https://opaldoc.obiba.org

### Opal
Opal is OBiBa's core database application for biobanks.

### Opal-DB
Opal requires a database to import the data for DataSHIELD. We use a Postgres instance as the database.
The data is imported within the bridgehead through the exporter.

### Opal-R-Server
An R-server that executes R scripts in DataSHIELD.

## Beam
### Beam-Connect
Beam-Connect is used to route HTTP(S) traffic through Beam, so that R-Studio can access data from other bridgeheads that have DataSHIELD enabled.
### Beam-Proxy
The usual Beam proxy used for communication.
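A quick local smoke test (an editor's sketch, not part of the changeset; the /opal path is assumed from the "/opal/*" OIDC redirect URL registered in datashield-setup.sh):

```bash
# Sketch: check that the local Opal answers behind the bridgehead proxy.
# -k is needed because the setup script above generates a self-signed
# certificate for Opal.
curl -k "https://${HOST}/opal/"
```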
@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/bash -e
 
 if [ -n "${ENABLE_DNPM}" ]; then
   log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
ccp/modules/export-and-qb.curl-templates (new file, 6 lines)
@ -0,0 +1,6 @@
# Full Excel Export
curl --location --request POST 'https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL' \
--header 'x-api-key: ${EXPORT_API_KEY}'

# QB
curl --location --request POST 'https://${HOST}/ccp-reporter/generate?template-id=ccp'
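A small usage sketch (not part of the changeset): the ${HOST} and ${EXPORT_API_KEY} placeholders in these templates are inside single quotes and are therefore not expanded by the shell. One way to fill them in from the environment is envsubst; the host and key values below are hypothetical:

```bash
# Sketch: expand the placeholders in the template file against the
# current environment, then review the resulting curl commands.
export HOST="bridgehead.example.org"             # hypothetical host
export EXPORT_API_KEY="<your-exporter-api-key>"  # hypothetical key
envsubst < ccp/modules/export-and-qb.curl-templates
```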
ccp/modules/exporter-compose.yml (new file, 67 lines)
@ -0,0 +1,67 @@
version: "3.7"

services:
  exporter:
    image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
    container_name: bridgehead-ccp-exporter
    environment:
      JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
      LOG_LEVEL: "INFO"
      EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
      CROSS_ORIGINS: "https://${HOST}"
      EXPORTER_DB_USER: "exporter"
      EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
      EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
      HTTP_RELATIVE_PATH: "/ccp-exporter"
      SITE: "${SITE_ID}"
      HTTP_SERVLET_REQUEST_SCHEME: "https"
      OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
      - "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
      - "traefik.http.routers.exporter_ccp.tls=true"
      - "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
      - "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
    volumes:
      - "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"

  exporter-db:
    image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
    container_name: bridgehead-ccp-exporter-db
    environment:
      POSTGRES_USER: "exporter"
      POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
      POSTGRES_DB: "exporter"
    volumes:
      # Consider removing this volume once we find a solution to save Lens queries to be executed in the explorer.
      - "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"

  reporter:
    image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
    container_name: bridgehead-ccp-reporter
    environment:
      JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
      LOG_LEVEL: "INFO"
      CROSS_ORIGINS: "https://${HOST}"
      HTTP_RELATIVE_PATH: "/ccp-reporter"
      SITE: "${SITE_ID}"
      EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
      EXPORTER_URL: "http://exporter:8092"
      LOG_FHIR_VALIDATION: "false"
      HTTP_SERVLET_REQUEST_SCHEME: "https"

    # In this initial development state of the bridgehead, we are trying to use as few volumes as possible.
    # However, in the first runs at the CCP sites, this volume has proven to be important: generating a report
    # can take several hours, because it depends on the exporter, and there is a risk that the bridgehead
    # restarts in the meantime, losing the already generated report.
    volumes:
      - "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
      - "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
      - "traefik.http.routers.reporter_ccp.tls=true"
      - "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
      - "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"
ccp/modules/exporter-setup.sh (new file, 8 lines)
@ -0,0 +1,8 @@
#!/bin/bash -e

if [ "$ENABLE_EXPORTER" == true ]; then
  log INFO "Exporter setup detected -- will start Exporter service."
  OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
  EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
fi
ccp/modules/exporter.md (new file, 15 lines)
@ -0,0 +1,15 @@
# Exporter and Reporter

## Exporter
The exporter is a REST API that exports the data of the different databases of the bridgehead as a set of tables.
It supports several output formats such as CSV, Excel, JSON and XML, and it can also export data into Opal.

## Exporter-DB
The exporter-DB is a database that stores queries for execution by the exporter.
The exporter also manages the repeated executions of the same query through this database.

## Reporter
This component is a plugin of the exporter that allows creating more complex Excel reports described in templates.
It is compatible with different template engines such as Groovy and Thymeleaf.
It is well suited to generating documents such as our traditional CCP quality report.
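A sketch of how the format choice surfaces in the API (derived from the curl template file earlier in this changeset and from the format list above; only the output-format value differs, and CSV support is assumed as documented):

```bash
# Sketch: request a CSV export instead of Excel. HOST and EXPORT_API_KEY
# come from the bridgehead configuration, as in the template file above.
curl --location --request POST \
  "https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=CSV" \
  --header "x-api-key: ${EXPORT_API_KEY}"
```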
@ -1,4 +1,5 @@
 version: "3.7"
 
 services:
   id-manager:
     image: docker.verbis.dkfz.de/bridgehead/magicpl
@ -43,7 +44,7 @@ services:
       - patientlist-db
 
   patientlist-db:
-    image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
+    image: docker.verbis.dkfz.de/cache/postgres:15.6-alpine
     container_name: bridgehead-patientlist-db
     environment:
       POSTGRES_USER: "mainzelliste"
@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/bash -e
 
 function idManagementSetup() {
   if [ -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
ccp/modules/login-compose.yml (new file, 47 lines)
@ -0,0 +1,47 @@
version: "3.7"

services:

  login-db:
    image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
    container_name: bridgehead-login-db
    environment:
      POSTGRES_USER: "keycloak"
      POSTGRES_PASSWORD: "${KEYCLOAK_DB_PASSWORD}" # Set in login-setup.sh
      POSTGRES_DB: "keycloak"
    tmpfs:
      - /var/lib/postgresql/data
    # Consider removing this comment once we have collected experience in production.
    # volumes:
    #   - "bridgehead-login-db:/var/lib/postgresql/data"

  login:
    image: docker.verbis.dkfz.de/ccp/dktk-keycloak:latest
    container_name: bridgehead-login
    environment:
      KEYCLOAK_ADMIN: "admin"
      KEYCLOAK_ADMIN_PASSWORD: "${LDM_AUTH}"
      TEILER_ADMIN: "${PROJECT}"
      TEILER_ADMIN_PASSWORD: "${LDM_AUTH}"
      TEILER_ADMIN_FIRST_NAME: "${OPERATOR_FIRST_NAME}"
      TEILER_ADMIN_LAST_NAME: "${OPERATOR_LAST_NAME}"
      TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
      KC_DB_PASSWORD: "${KEYCLOAK_DB_PASSWORD}" # Set in login-setup.sh
      KC_HOSTNAME_URL: "https://${HOST}/login"
      KC_HOSTNAME_STRICT: "false"
      KC_PROXY_ADDRESS_FORWARDING: "true"
      TEILER_ORCHESTRATOR_EXTERN_URL: "https://${HOST}/ccp-teiler"
    command:
      - start-dev --import-realm --proxy edge --http-relative-path=/login
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.login.rule=PathPrefix(`/login`)"
      - "traefik.http.services.login.loadbalancer.server.port=8080"
      - "traefik.http.routers.login.tls=true"
    depends_on:
      - login-db

# Consider removing this comment once we have collected experience in production.
#volumes:
#  bridgehead-login-db:
#    name: "bridgehead-login-db"
ccp/modules/login-setup.sh (new file, 7 lines)
@ -0,0 +1,7 @@
#!/bin/bash -e

if [ "$ENABLE_LOGIN" == true ]; then
  log INFO "Login setup detected -- will start Login services."
  OVERRIDE+=" -f ./$PROJECT/modules/login-compose.yml"
  KEYCLOAK_DB_PASSWORD="$(generate_password \"local Keycloak\")"
fi
ccp/modules/login.md (new file, 13 lines)
@ -0,0 +1,13 @@
# Login
The login component is a local Keycloak instance. In the future it will be replaced by the central Keycloak instance;
alternatively, it may be kept to add local identity providers to the bridgehead, or simply to ease the configuration of
the central Keycloak instance when integrating a new bridgehead.
The basic configuration of our Keycloak instance is contained in a small JSON file.

## Teiler User
Currently, the local Keycloak is used by the Teiler. There is a basic admin user in the basic configuration of Keycloak.
The user can be configured with the environment variables TEILER_ADMIN_XXX.

## Login-DB
Keycloak requires a local database for its configuration. However, as we use an initial JSON configuration file, if
neither a local identity provider nor any local user is configured, we theoretically don't need a volume for the login.
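A quick smoke test for the module (an editor's sketch, not part of the changeset): the compose file above publishes Keycloak under the /login path on the bridgehead host.

```bash
# Sketch: check that the local Keycloak answers behind the bridgehead proxy
# once ENABLE_LOGIN is set.
curl -k "https://${HOST}/login/"
```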
@ -2,7 +2,8 @@ version: "3.7"
 
 services:
   mtba:
-    image: docker.verbis.dkfz.de/cache/samply/mtba:1.0.0
+    #image: docker.verbis.dkfz.de/cache/samply/mtba:latest
+    image: docker.verbis.dkfz.de/cache/samply/mtba:develop
     container_name: bridgehead-mtba
     environment:
       BLAZE_STORE_URL: http://blaze:8080
@ -11,22 +12,30 @@ services:
       ID_MANAGER_API_KEY: ${IDMANAGER_UPLOAD_APIKEY}
       ID_MANAGER_PSEUDONYM_ID_TYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
       ID_MANAGER_URL: http://id-manager:8080/id-manager
-      PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER}
-      PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER}
-      PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER}
-      PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER}
+      PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER:-FIRST_NAME}
+      PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER:-LAST_NAME}
+      PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER:-GENDER}
+      PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER:-BIRTHDAY}
       CBIOPORTAL_URL: http://cbioportal:8080
-      FILE_CHARSET: ${MTBA_FILE_CHARSET}
-      FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE}
-      CSV_DELIMITER: ${MTBA_CSV_DELIMITER}
+      FILE_CHARSET: ${MTBA_FILE_CHARSET:-UTF-8}
+      FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE:-LF}
+      CSV_DELIMITER: ${MTBA_CSV_DELIMITER:-TAB}
+      HTTP_RELATIVE_PATH: "/mtba"
+      KEYCLOAK_ADMIN_GROUP: "${KEYCLOAK_ADMIN_GROUP}"
+      KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PRIVATE_CLIENT_ID}"
+      KEYCLOAK_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
+      KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
+      KEYCLOAK_URL: "${KEYCLOAK_URL}"
 
     labels:
       - "traefik.enable=true"
-      - "traefik.http.routers.mtba.rule=PathPrefix(`/`)"
-      - "traefik.http.services.mtba.loadbalancer.server.port=80"
-      - "traefik.http.routers.mtba.tls=true"
+      - "traefik.http.routers.mtba_ccp.rule=PathPrefix(`/mtba`)"
+      - "traefik.http.services.mtba_ccp.loadbalancer.server.port=8480"
+      - "traefik.http.routers.mtba_ccp.tls=true"
 
     volumes:
-      - /tmp/bridgehead/mtba/input:/app/input
-      - /tmp/bridgehead/mtba/persist:/app/persist
+      - /var/cache/bridgehead/ccp/mtba/input:/app/input
+      - /var/cache/bridgehead/ccp/mtba/persist:/app/persist
 
 # TODO: Include CBioPortal in deployment ...
 # NOTE: CBioPortal can't load data while the system is running, so after a data import the bridgehead needs to be restarted!
@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/bash -e
 
 function mtbaSetup() {
   if [ -n "$ENABLE_MTBA" ];then
@ -8,5 +8,6 @@ function mtbaSetup() {
       exit 1;
     fi
     OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
+    add_private_oidc_redirect_url "/mtba/*"
   fi
 }
ccp/modules/mtba.md (new file, 6 lines)
@ -0,0 +1,6 @@
# Molecular Tumor Board Alliance (MTBA)

In this module, the genetic data to import is placed in an input directory (/tmp/bridgehead/mtba/input). A process
regularly checks whether there are files in this directory. The files are pseudonymized when the IDAT is provided,
combined with clinical data from Blaze, and imported into cBioPortal. Additionally, these files are also imported into
Blaze.
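An editor's sketch of the resulting workflow (not part of the changeset; note that the compose change above remaps the watched input directory to /var/cache/bridgehead/ccp/mtba/input, and the file name below is hypothetical):

```bash
# Sketch: hand a genetic data file to MTBA by dropping it into the watched
# input directory (path taken from the updated volume mount above); the
# watcher then pseudonymizes and imports it.
cp my-variants.tsv /var/cache/bridgehead/ccp/mtba/input/
```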
@ -1,4 +1,5 @@
 version: "3.7"
 
 volumes:
   nngm-rest:
 
@ -21,9 +22,6 @@ services:
       - "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
     volumes:
       - nngm-rest:/var/log
 
   traefik:
     labels:
       - "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"
@ -1,4 +1,4 @@
-#!/bin/bash
+#!/bin/bash -e
 
 if [ -n "$NNGM_CTS_APIKEY" ]; then
   log INFO "nNGM setup detected -- will start nNGM Connector."
ccp/modules/teiler-compose.yml (new file, 82 lines)
@ -0,0 +1,82 @@
version: "3.7"

services:

  teiler-orchestrator:
    image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
    container_name: bridgehead-teiler-orchestrator
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.teiler_orchestrator_ccp.rule=PathPrefix(`/ccp-teiler`)"
      - "traefik.http.services.teiler_orchestrator_ccp.loadbalancer.server.port=9000"
      - "traefik.http.routers.teiler_orchestrator_ccp.tls=true"
      - "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
      - "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
    environment:
      TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
      TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
      DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE_LOWER_CASE}"
      HTTP_RELATIVE_PATH: "/ccp-teiler"

  teiler-dashboard:
    image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:latest
    container_name: bridgehead-teiler-dashboard
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.teiler_dashboard_ccp.rule=PathPrefix(`/ccp-teiler-dashboard`)"
      - "traefik.http.services.teiler_dashboard_ccp.loadbalancer.server.port=80"
      - "traefik.http.routers.teiler_dashboard_ccp.tls=true"
      - "traefik.http.middlewares.teiler_dashboard_ccp_strip.stripprefix.prefixes=/ccp-teiler-dashboard"
      - "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
    environment:
      DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE}"
      TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
      KEYCLOAK_URL: "${KEYCLOAK_URL}"
      KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
      KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PUBLIC_CLIENT_ID}"
      KEYCLOAK_TOKEN_GROUP: "${KEYCLOAK_GROUP_CLAIM}"
      TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
      TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
      TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
      TEILER_PROJECT: "${PROJECT}"
      EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
      TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
      TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
      TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
      TEILER_USER: "${KEYCLOAK_USER_GROUP}"
      TEILER_ADMIN: "${KEYCLOAK_ADMIN_GROUP}"
      REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
      EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"

  teiler-backend:
    # image: docker.verbis.dkfz.de/ccp/dktk-teiler-backend:latest
    image: dktk-teiler-backend
    container_name: bridgehead-teiler-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.teiler_backend_ccp.rule=PathPrefix(`/ccp-teiler-backend`)"
      - "traefik.http.services.teiler_backend_ccp.loadbalancer.server.port=8085"
      - "traefik.http.routers.teiler_backend_ccp.tls=true"
      - "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
      - "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
    environment:
      LOG_LEVEL: "INFO"
      APPLICATION_PORT: "8085"
      APPLICATION_ADDRESS: "${HOST}"
      DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE}"
      CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
      TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
      TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
      TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
      TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
      CENTRAX_URL: "${CENTRAXX_URL}"
      HTTP_PROXY: "http://forward_proxy:3128"
      ENABLE_MTBA: "${ENABLE_MTBA}"
      ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
    secrets:
      - ccp.conf

secrets:
  ccp.conf:
    file: /etc/bridgehead/ccp.conf
ccp/modules/teiler-setup.sh (new file, 7 lines)
@ -0,0 +1,7 @@
#!/bin/bash -e

if [ "$ENABLE_TEILER" == true ]; then
  log INFO "Teiler setup detected -- will start Teiler services."
  OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
  add_public_oidc_redirect_url "/ccp-teiler/*"
fi
ccp/modules/teiler-ui-compose.yml (new empty file)
ccp/modules/teiler.md (new file, 19 lines)
@ -0,0 +1,19 @@
# Teiler
This module orchestrates the different microfrontends of the bridgehead as a single page application.

## Teiler Orchestrator
A single-spa component that consists of the root HTML page of the single page application and the JavaScript code that
fetches the microfrontend information from the teiler backend and is responsible for registering the microfrontends.
With the resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.

The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
These microfrontends can also run standalone, but they need to be extended with single-spa (https://single-spa.js.org/docs/ecosystem).
Three templates (Angular, Vue, React) are available that can be extended for direct use in the teiler.

## Teiler Dashboard
It consists of the main dashboard and a set of embedded services.
### Login
The user and password are configured in ccp.local.conf.

## Teiler Backend
In this component, the microfrontends are configured.
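An editor's sketch of how the three components fit together at runtime (paths taken from the compose file above; not part of the changeset):

```bash
# Sketch: once ENABLE_TEILER is set, the three Teiler components are served
# behind traefik under these paths on the bridgehead host.
curl -k "https://${HOST}/ccp-teiler/"            # orchestrator: root single-page app
curl -k "https://${HOST}/ccp-teiler-dashboard/"  # dashboard microfrontend
curl -k "https://${HOST}/ccp-teiler-backend/"    # backend: microfrontend configuration
```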
ccp/vars (20 lines changed)
@ -7,7 +7,25 @@ SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
 PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
 
 BROKER_URL_FOR_PREREQ=$BROKER_URL
+DEFAULT_LANGUAGE=DE
+DEFAULT_LANGUAGE_LOWER_CASE=${DEFAULT_LANGUAGE,,}
+ENABLE_EXPORTER=true
+ENABLE_TEILER=true
+#ENABLE_DATASHIELD=true
+
+KEYCLOAK_USER_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})"
+KEYCLOAK_ADMIN_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})_Verwalter"
+KEYCLOAK_PRIVATE_CLIENT_ID=${SITE_ID}-private
+KEYCLOAK_PUBLIC_CLIENT_ID=${SITE_ID}-public
+# TODO: Change Keycloak Realm to productive. "test-realm-01" is only for testing
+KEYCLOAK_REALM="${KEYCLOAK_REALM:-test-realm-01}"
+KEYCLOAK_URL="https://login.verbis.dkfz.de"
+KEYCLOAK_ISSUER_URL="${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}"
+KEYCLOAK_GROUP_CLAIM="groups"
+OAUTH2_CALLBACK=/oauth2/callback
+OAUTH2_PROXY_SECRET="$(echo \"This is a salt string to generate one consistent encryption key for the oauth2_proxy. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 32)"
+
+add_private_oidc_redirect_url "${OAUTH2_CALLBACK}"
 
 for module in $PROJECT/modules/*.sh
 do
@ -17,4 +35,4 @@ done
 
 idManagementSetup
 mtbaSetup
-adt2fhirRestSetup
+adt2fhirRestSetup
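DataSHIELD ships disabled by default (the commented line above). An editor's sketch of how a site would switch it on, assuming the site-specific override file follows the ccp.local.conf name mentioned in teiler.md:

```bash
# Sketch: enable the DataSHIELD module for this site; the variable is read
# by ccp/modules/datashield-setup.sh at bridgehead startup.
echo "ENABLE_DATASHIELD=true" >> /etc/bridgehead/ccp.local.conf
```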
ecdc.service (deleted, 14 lines)
@ -1,14 +0,0 @@
-[Unit]
-Description=Start ECDC Bridgehead
-
-[Service]
-Type=simple
-ExecStart=/srv/docker/bridgehead/restart_service.sh
-ExecStop=/srv/docker/bridgehead/shutdown_service.sh
-Restart=always
-RestartSec=36000
-KillMode=mixed
-
-[Install]
-WantedBy=default.target
lib/functions.sh (156 lines changed)
@ -240,66 +240,108 @@ add_basic_auth_user() {
   sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
 }
 
-function clone_repo_if_nonexistent() {
-  local repo_url="$1"     # First argument: Repository URL
-  local target_dir="$2"   # Second argument: Target directory
-  local branch_name="$3"  # Third argument: Branch name
-
-  echo Repo directory: $target_dir
-
-  # Check if the target directory exists
-  if [ ! -d "$target_dir" ]; then
-    echo "Directory '$target_dir' does not exist. Cloning the repository..."
-    # Clone the repository
-    git clone "$repo_url" "$target_dir"
-  fi
-
-  # Change to the cloned directory
-  cd "$target_dir"
-
-  # Checkout the specified branch
-  chown -R bridgehead .
-  su bridgehead -c "git checkout $branch_name"
-
-  cd -
-}
-
-function clone_transfair_if_nonexistent() {
-  local base_dir="$1"
-
-  clone_repo_if_nonexistent https://github.com/samply/transFAIR.git $base_dir/transfair ehds2_develop
-}
-
-function clone_focus_if_nonexistent() {
-  local base_dir="$1"
-
-  clone_repo_if_nonexistent https://github.com/samply/focus.git $base_dir/focus ehds2
-}
-
-function build_transfair() {
-  local base_dir="$1"
-
-  # We only take the trouble to build transfair if:
-  #
-  # 1. There is data available (any CSV files) and
-  # 2. There is no data lock file (which means that no ETL has yet been run).
-  if ls ../ecdc/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && [ ! -f ../ecdc/data/lock ]; then
-    cd $base_dir/transfair
-    su bridgehead -c "git pull"
-    docker build --progress=plain -t samply/transfair --no-cache .
-    chown -R bridgehead .
-    cd -
-  fi
-}
-
-function build_focus() {
-  local base_dir="$1"
-
-  cd $base_dir/focus
-  su bridgehead -c "git pull"
-  docker build --progress=plain -f DockerfileWithBuild -t samply/focus --no-cache .
-  chown -R bridgehead .
-  cd -
-}
+OIDC_PUBLIC_REDIRECT_URLS=${OIDC_PUBLIC_REDIRECT_URLS:-""}
+OIDC_PRIVATE_REDIRECT_URLS=${OIDC_PRIVATE_REDIRECT_URLS:-""}
+
+# Add a redirect url to the public oidc client of the bridgehead
+function add_public_oidc_redirect_url() {
+  if [[ $OIDC_PUBLIC_REDIRECT_URLS == "" ]]; then
+    OIDC_PUBLIC_REDIRECT_URLS+="$(generate_redirect_urls $1)"
+  else
+    OIDC_PUBLIC_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
+  fi
+}
+
+# Add a redirect url to the private oidc client of the bridgehead
+function add_private_oidc_redirect_url() {
+  if [[ $OIDC_PRIVATE_REDIRECT_URLS == "" ]]; then
+    OIDC_PRIVATE_REDIRECT_URLS+="$(generate_redirect_urls $1)"
+  else
+    OIDC_PRIVATE_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
+  fi
+}
+function sync_secrets() {
+  local delimiter=$'\x1E'
+  local secret_sync_args=""
+  if [[ $OIDC_PRIVATE_REDIRECT_URLS != "" ]]; then
+    secret_sync_args="OIDC:OIDC_CLIENT_SECRET:private;$OIDC_PRIVATE_REDIRECT_URLS"
+  fi
+  if [[ $OIDC_PUBLIC_REDIRECT_URLS != "" ]]; then
+    if [[ $secret_sync_args == "" ]]; then
+      secret_sync_args="OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
+    else
+      secret_sync_args+="${delimiter}OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
+    fi
+  fi
+  if [[ $secret_sync_args == "" ]]; then
+    return
+  fi
+  mkdir -p /var/cache/bridgehead/secrets/
+  touch /var/cache/bridgehead/secrets/oidc
+  chown -R bridgehead:docker /var/cache/bridgehead/secrets
+  # The oidc provider will need to be switched based on the project at some point, I guess
+  docker run --rm \
+    -v /var/cache/bridgehead/secrets/oidc:/usr/local/cache \
+    -v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
+    -v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
+    -v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
+    -e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
+    -e HTTPS_PROXY=$HTTPS_PROXY_FULL_URL \
+    -e PROXY_ID=$PROXY_ID \
+    -e BROKER_URL=$BROKER_URL \
+    -e OIDC_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
+    -e SECRET_DEFINITIONS=$secret_sync_args \
+    docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
+  set -a # Export variables as environment variables
+  source /var/cache/bridgehead/secrets/*
+  set +a # Export variables in the regular way
+}
+capitalize_first_letter() {
+  input="$1"
+  capitalized="$(tr '[:lower:]' '[:upper:]' <<< ${input:0:1})${input:1}"
+  echo "$capitalized"
+}
+
+# Generate a ',' separated string of redirect urls relative to $HOST.
+# $1 will be appended to each url.
+# If the host looks like dev-jan.inet.dkfz-heidelberg.de, it will generate urls with both dev-jan and the original $HOST as url authorities.
+function generate_redirect_urls(){
+  local redirect_urls="https://${HOST}$1"
+  local host_without_proxy="$(echo "$HOST" | cut -d '.' -f1)"
+  # Only append the second url if it is different and the host is not an ip address
+  if [[ "$HOST" != "$host_without_proxy" && ! "$HOST" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
+    redirect_urls+=",https://$host_without_proxy$1"
+  fi
+  echo "$redirect_urls"
+}
+
+# This password contains at least one special char, a random number and a random upper and lower case letter
+generate_password(){
+  local seed_text="$1"
+  local seed_num=$(awk 'BEGIN{FS=""} NR==1{print $10}' /etc/bridgehead/pki/${SITE_ID}.priv.pem | od -An -tuC)
+  local nums="1234567890"
+  local n=$(echo "$seed_num" | awk '{print $1 % 10}')
+  local random_digit=${nums:$n:1}
+  local n=$(echo "$seed_num" | awk '{print $1 % 26}')
+  local upper="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
+  local lower="abcdefghijklmnopqrstuvwxyz"
+  local random_upper=${upper:$n:1}
+  local random_lower=${lower:$n:1}
+  local n=$(echo "$seed_num" | awk '{print $1 % 8}')
+  local special='@#$%^&+='
+  local random_special=${special:$n:1}
+
+  local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
+  local main_password=$(echo "${combined_text}" | openssl rsautl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/\//A/g')
+
+  echo "${main_password}${random_digit}${random_upper}${random_lower}${random_special}"
+}
+
+# This password only contains alphanumeric characters
+generate_simple_password(){
+  local seed_text="$1"
+  local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
+  echo "${combined_text}" | openssl rsautl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/[+\/]/A/g'
+}
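Two short worked examples of the new helpers (an editor's sketch, not part of the changeset; it assumes lib/functions.sh is sourced and the site's private key is in place, and the seed text is hypothetical):

```bash
# generate_redirect_urls: for a proxied host (the one from the comment
# above), both URL authorities are emitted.
HOST=dev-jan.inet.dkfz-heidelberg.de
generate_redirect_urls "/opal/*"
# -> https://dev-jan.inet.dkfz-heidelberg.de/opal/*,https://dev-jan/opal/*

# generate_password: deterministic per seed text, because the "random"
# parts are derived from the site's private key rather than from an RNG.
pw1="$(generate_password "demo secret")"
pw2="$(generate_password "demo secret")"
[ "$pw1" == "$pw2" ] && echo "same password on every call"
```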
@ -84,7 +84,7 @@ else
   SYNCTEXT="For example, consider entering a correct NTP server (e.g. your institution's Active Directory Domain Controller) in /etc/systemd/timesyncd.conf (option NTP=) and restart systemd-timesyncd."
   if [ $SKEW -ge 300 ]; then
-    report_error 5 "Your clock is not synchronized (${SKEW}s off). This will cause Samply.Beam's certificate validation to fail. Please set up time synchronization. $SYNCTEXT"
+    log WARN "Server Time Error"
     exit 1
   elif [ $SKEW -ge 60 ]; then
     log WARN "Your clock is more than a minute off (${SKEW}s). Consider syncing to a time server. $SYNCTEXT"
   fi
@ -45,7 +45,7 @@ services:
 
   landing:
     container_name: bridgehead-landingpage
-    image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:master
+    image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:main
     labels:
       - "traefik.enable=true"
       - "traefik.http.routers.landing.rule=PathPrefix(`/`)"
@ -1,30 +0,0 @@
-#!/bin/bash
-
-# Start a running Bridgehead. If there is already a Bridgehead running,
-# stop it first.
-# This is intended to be used by systemctl.
-
-cd /srv/docker/bridgehead
-
-echo "git status before stop"
-git status
-
-echo "Stopping running Bridgehead, if present"
-./bridgehead stop bbmri
-
-# If "flush_blaze" is present, delete the Blaze volume before starting
-# the Bridgehead again. This allows a user to upload all data, if
-# requested.
-if [ -f "/srv/docker/ecdc/data/flush_blaze" ]; then
-    docker volume rm bbmri_blaze-data
-    rm -f /srv/docker/ecdc/data/flush_blaze
-fi
-
-echo "git status before start"
-git status | systemd-cat -p info
-
-echo "Start the Bridgehead anew"
-./bridgehead start bbmri
-
-echo "Bridgehead has unexpectedly terminated"
run.sh (deleted, 83 lines)
@ -1,83 +0,0 @@
-#!/usr/bin/env bash
-
-# Start a Bridgehead from the command line. Upload data if requested.
-# Behind the scenes we use systemctl to do the work.
-
-# Function to print usage
-print_usage() {
-    echo "Start a Bridgehead, optionally upload data"
-    echo "Usage: $0 [--upload | --upload-all | --help | -h]"
-    echo "Options:"
-    echo "  --upload      Run Bridgehead and upload just the new CSV data files."
-    echo "  --upload-all  Run Bridgehead and upload all CSV data files."
-    echo "  --help, -h    Display this help message."
-    echo "  No options    Run Bridgehead only."
-}
-
-# Initialize variables
-UPLOAD=false
-UPLOAD_ALL=false
-
-# Parse arguments
-while [[ "$#" -gt 0 ]]; do
-    case $1 in
-        --upload)
-            UPLOAD=true
-            ;;
-        --upload-all)
-            UPLOAD_ALL=true
-            ;;
-        --help|-h)
-            print_usage
-            exit 0
-            ;;
-        *)
-            echo "Error: Unknown argument '$1'"
-            print_usage
-            exit 1
-            ;;
-    esac
-    shift
-done
-
-# Check for conflicting options
-if [ "$UPLOAD" = true ] && [ "$UPLOAD_ALL" = true ]; then
-    echo "Error: you must specify either --upload or --upload-all, specifying both is not permitted."
-    print_usage
-    exit 1
-fi
-
-# Disable/stop standard Bridgehead systemctl services, if present
-sudo systemctl disable bridgehead@bbmri.service >& /dev/null
-sudo systemctl disable system-bridgehead.slice >& /dev/null
-sudo systemctl disable bridgehead-update@bbmri.timer >& /dev/null
-sudo systemctl stop bridgehead@bbmri.service >& /dev/null
-sudo systemctl stop system-bridgehead.slice >& /dev/null
-sudo systemctl stop bridgehead-update@bbmri.timer >& /dev/null
-
-# Set up systemctl for EHDS2/ECDC if necessary
-cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
-systemctl daemon-reload
-systemctl enable ecdc.service
-
-# Use systemctl to stop the Bridgehead if it is running
-sudo systemctl stop ecdc.service
-
-# Use files to tell the Bridgehead what to do with any data present
-if [ "$UPLOAD" = true ] || [ "$UPLOAD_ALL" = true ]; then
-    if [ -f /srv/docker/ecdc/data/lock ]; then
-        rm /srv/docker/ecdc/data/lock
-    fi
-fi
-if [ "$UPLOAD_ALL" = true ]; then
-    echo "All CSV files in /srv/docker/ecdc/data will be uploaded"
-    touch /srv/docker/ecdc/data/flush_blaze
-fi
-
-# Start up the Bridgehead
-sudo systemctl start ecdc.service
-
-# Show status of Bridgehead service
-sleep 10
-systemctl status ecdc.service
@ -1,13 +0,0 @@
-#!/bin/bash
-
-# Shut down a running Bridgehead.
-# This is intended to be used by systemctl.
-
-cd /srv/docker/bridgehead
-
-echo "git status before stop"
-git status
-
-echo "Stopping running Bridgehead, if present"
-./bridgehead stop bbmri
stop.sh (deleted, 43 lines)
@ -1,43 +0,0 @@
-#!/usr/bin/env bash
-
-# Shut down a running Bridgehead.
-# Behind the scenes we use systemctl to do the work.
-
-# Function to print usage
-print_usage() {
-    echo "Stop the running Bridgehead"
-    echo "Usage: $0 [--help | -h]"
-    echo "Options:"
-    echo "  --help, -h    Display this help message."
-    echo "  No options    Stop Bridgehead only."
-}
-
-# Parse arguments
-while [[ "$#" -gt 0 ]]; do
-    case $1 in
-        --help|-h)
-            print_usage
-            exit 0
-            ;;
-        *)
-            echo "Error: Unknown argument '$1'"
-            print_usage
-            exit 1
-            ;;
-    esac
-    shift
-done
-
-# Set up systemctl for EHDS2/ECDC if necessary
-cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
-systemctl daemon-reload
-systemctl enable ecdc.service
-
-# Use systemctl to stop the Bridgehead if it is running
-sudo systemctl stop ecdc.service
-
-# Show status of Bridgehead service
-sleep 20
-systemctl status ecdc.service
-docker ps