Mirror of https://github.com/samply/bridgehead.git, synced 2025-06-16 21:50:14 +02:00

Compare commits: feature/cb ... feature/co (5 commits)
Commits (SHA1):
- 7837d3b542
- d832cfdd43
- 4537617cd0
- 6fa587fe4f
- 48d9a034c7
.gitignore (vendored): 2 changes

@@ -1,7 +1,7 @@
##Ignore site configuration
.gitmodules
site-config/*
.idea

## Ignore site configuration
*/docker-compose.override.yml

README.md: 88 changes

@@ -22,7 +22,6 @@ This repository is the starting point for any information and tools you will nee
- [TLS terminating proxies](#tls-terminating-proxies)
- [File structure](#file-structure)
- [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
- [Loading data](#loading-data)
4. [Things you should know](#things-you-should-know)
   - [Auto-Updates](#auto-updates)
   - [Auto-Backups](#auto-backups)

@@ -34,10 +33,6 @@ This repository is the starting point for any information and tools you will nee

## Requirements

The data protection group at your site will probably want to know exactly what our software does with patient data, and you may need to get their approval before you are allowed to install a Bridgehead. To help you with this, we have provided some data protection concepts:

- [Germany](https://www.bbmri.de/biobanking/it/infrastruktur/datenschutzkonzept/)

### Hardware

Hardware requirements strongly depend on the specific use-cases of your network as well as on the data it is going to serve. Most use-cases are well-served with the following configuration:

@@ -56,41 +51,24 @@ Ensure the following software (or newer) is installed:
- docker >= 20.10.1
- docker-compose >= 2.xx (`docker-compose` and `docker compose` are both supported).
- systemd
- curl

We recommend installing Docker (and Docker Compose) from the official sources as described on the [Docker website](https://docs.docker.com).
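
As a quick sanity check (a minimal sketch; `docker compose version` assumes the Compose plugin, while the standalone binary is queried with `docker-compose --version`), the installed versions can be verified with:

``` shell
# Print the versions of the required tools
docker --version
docker compose version        # or: docker-compose --version
systemctl --version | head -n 1
curl --version | head -n 1
```
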
> 📝 Note for Ubuntu: Snap versions of Docker are not supported.
Note for Ubuntu: Please note that snap versions of Docker are not supported.

Note for git and Docker: if you have a local proxy, you will need to adjust your setup appropriately, see [git proxy](https://gist.github.com/evantoli/f8c23a37eb3558ab8765) and [docker proxy](https://docs.docker.com/network/proxy/).

### Network

A Bridgehead communicates to all central components via outgoing HTTPS connections.
A running Bridgehead requires an outgoing HTTPS proxy to communicate with the central components.

Your site might require an outgoing proxy (i.e. HTTPS forward proxy) to connect to external servers; you should discuss this with your local systems administration. In that case, you will need to note down the URL of the proxy. If the proxy requires authentication, you will also need to make a note of its username and password. This information will be used later on during the installation process. TLS terminating proxies are also supported, see [here](#tls-terminating-proxies). Apart from the Bridgehead itself, you may also need to configure the proxy server in [git](https://gist.github.com/evantoli/f8c23a37eb3558ab8765) and [docker](https://docs.docker.com/network/proxy/).
Additionally, your site might use its own proxy. You should discuss this with your local systems administration. If a proxy is being used, you will need to note down the URL of the proxy. If it is a secure proxy, then you will also need to make a note of its username and password. This information will be used later on during the installation process.

The following URLs need to be accessible (prefix with `https://`):
* To fetch code and configuration from git repositories
  * github.com
  * git.verbis.dkfz.de
* To fetch docker images
  * docker.verbis.dkfz.de
  * Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/all))
    * hub.docker.com
    * registry-1.docker.io
    * production.cloudflare.docker.com
* To report the Bridgehead's operational status
  * healthchecks.verbis.dkfz.de
* only for DKTK/CCP
  * broker.ccp-it.dktk.dkfz.de
* only for BBMRI-ERIC
  * broker.bbmri.samply.de
  * gitlab.bbmri-eric.eu
* only for German Biobank Node
  * broker.bbmri.de

Note that git and Docker may also need to be configured to use this proxy. This is a job for your systems administrators.
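
For illustration only, a git client could be pointed at such a proxy as sketched below; `proxy.example.com:3128` is a placeholder rather than a value from this repository, and the Docker daemon has its own proxy settings (see the Docker Daemon Proxy Configuration section further down):

``` shell
# Hypothetical proxy address - replace with the URL provided by your systems administrators
git config --global http.proxy  http://proxy.example.com:3128
git config --global https.proxy http://proxy.example.com:3128
```
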
> 📝 This URL list is subject to change. Instead of the individual names, we highly recommend whitelisting wildcard domains: *.dkfz.de, github.com, *.docker.com, *.docker.io, *.samply.de, *.bbmri.de.
If there is a site firewall, this needs to be configured so that outgoing calls to the following URLs are allowed: *.dkfz.de, github.com, docker.io, *.docker.io, *.samply.de.

> 📝 Ubuntu's pre-installed uncomplicated firewall (ufw) is known to conflict with Docker, more info [here](https://github.com/chaifeng/ufw-docker).
Note for Ubuntu: Please note that the uncomplicated firewall (ufw) is known to conflict with Docker [here](https://github.com/chaifeng/ufw-docker).

## Deployment

@@ -123,7 +101,7 @@ Mention:
We will set the repository up for you. We will then send you:

- A Repository Short Name (RSN). Beware: this is distinct from your site name.
- Repository URL containing the access token, e.g. https://BH_Dummy:dummy_token@git.verbis.dkfz.de/<project>-bridgehead-configs/dummy.git
- Repository URL containing the access token, e.g. https://BH_Dummy:dummy_token@git.verbis.dkfz.de/bbmri-bridgehead-configs/dummy.git

During the installation, your Bridgehead will download your site's configuration from GitLab and you can review the details provided to us by email.

@@ -316,32 +294,6 @@ Once you edited the gitlab config, the bridgehead will autoupdate the config wit

There will be a delay before the effects of Directory sync become visible. First, you will need to wait until the time you have specified in ```TIMER_CRON```. Second, the information will then be synchronized from your national node with the central European Directory. This can take up to 24 hours.

### Loading data

The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).

You can load data into this store by using its FHIR API:

```
https://<Name of your server>/bbmri-localdatamanagement/fhir
```

The name of your server will generally be the full name of the VM that the Bridgehead runs on. You can alternatively supply an IP address.

The FHIR API uses basic auth. You can find the credentials in `/etc/bridgehead/<project>.local.conf`.

Note that if you don't have a DNS certificate for the Bridgehead, you will need to allow an insecure connection. E.g. with curl, use the `-k` flag.
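
As an illustration only (the bundle file name, user and password below are placeholders, not values from this repository), a FHIR transaction bundle could be posted with curl like this:

``` shell
# Upload a transaction bundle; take the credentials from /etc/bridgehead/<project>.local.conf.
# Add -k if the Bridgehead does not have a valid DNS certificate.
curl -u "<user>:<password>" \
     -H "Content-Type: application/fhir+json" \
     --data @bundle.json \
     https://<Name of your server>/bbmri-localdatamanagement/fhir
```
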
The storage space on your hard drive will depend on the number of FHIR resources that you intend to generate. This will be the sum of the number of patients/subjects, the number of samples, the number of conditions/diseases and the number of observations. As a general rule of thumb, you can assume that each resource will consume about 2 kilobytes of disk space.
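
To give a purely illustrative example: a hypothetical site with 10,000 patients, 50,000 samples, 20,000 conditions and 500,000 observations would hold roughly 580,000 resources, i.e. about 1.2 GB of disk space by this rule of thumb.
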
For more information on Blaze performance, please refer to [import performance](https://github.com/samply/blaze/blob/master/docs/performance/import.md).

#### ETL for BBMRI and GBA

Normally, you will need to build your own ETL to feed the Bridgehead. However, there is one case where a shortcut might be available:
- If you are using CentraXX as a BIMS and you have a FHIR-Export License, then you can employ standard mapping scripts that access the CentraXX-internal data structures and map the data onto the BBMRI FHIR profile. It may be necessary to adjust a few parameters, but this is nonetheless significantly easier than writing your own ETL.

You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).

## Things you should know

### Auto-Updates

@@ -385,28 +337,8 @@ Installation under WSL ought to work, but we have not tested this.

### Docker Daemon Proxy Configuration

Docker has a background daemon, responsible for downloading images and starting them. Sometimes, proxy configuration from your system won't carry over and it will fail to download images. In that case, you'll need to configure the proxy inside the system unit of docker by creating the file `/etc/systemd/system/docker.service.d/proxy.conf` with the following content:
Docker has a background daemon, responsible for downloading images and starting them. Sometimes, proxy configuration from your system won't carry over and it will fail to download images. In that case, configure the proxy for this daemon as described in the [official documentation](https://docs.docker.com).

``` ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,some-local-docker-registry.example.com,.corp"
```

After saving the configuration file, you'll need to reload the system daemon for the changes to take effect:

``` shell
sudo systemctl daemon-reload
```

and restart the docker daemon:

``` shell
sudo systemctl restart docker
```

For more information, please consult the [official documentation](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).

### Monitoring

@@ -1,5 +1,3 @@
version: "3.7"

services:
  directory_sync_service:
    image: "docker.verbis.dkfz.de/cache/samply/directory_sync_service"

@@ -18,7 +18,7 @@ services:
      - "forward_proxy"
    volumes:
      - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
      - /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
      - /srv/docker/bridgehead/ccp/root-new.crt.pem:/conf/root.crt.pem:ro

  dnpm-beam-connect:
    depends_on: [ dnpm-beam-proxy ]
@@ -32,11 +32,9 @@ services:
      LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
      HTTP_PROXY: http://forward_proxy:3128
      HTTPS_PROXY: http://forward_proxy:3128
      NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal
      NO_PROXY: dnpm-beam-proxy,dnpm-backend
      RUST_LOG: ${RUST_LOG:-info}
      NO_AUTH: "true"
    extra_host:
      - "host.docker.internal:host-gateway"
    volumes:
      - /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
      - /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
@ -1,33 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
@ -1,27 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
|
@@ -5,6 +5,7 @@ if [ -n "${ENABLE_DNPM}" ]; then
  OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"

  # Set variables required for Beam-Connect
  DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
  DNPM_BROKER_ID="broker.ccp-it.dktk.dkfz.de"
  DNPM_BROKER_URL="https://${DNPM_BROKER_ID}"
@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
focus-eric:
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:main
|
||||
container_name: bridgehead-focus-eric
|
||||
environment:
|
||||
API_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
|
||||
@ -22,6 +22,7 @@ services:
|
||||
BROKER_URL: ${ERIC_BROKER_URL}
|
||||
PROXY_ID: ${ERIC_PROXY_ID}
|
||||
APP_focus_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
|
||||
APP_monitoring_KEY: ${ERIC_MONITORING_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
@ -32,5 +33,14 @@ services:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro
|
||||
- /srv/docker/bridgehead/bbmri/modules/eric.root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
bridgehead-monitoring:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-monitoring:latest
|
||||
container_name: bridgehead-monitoring-eric
|
||||
environment:
|
||||
- BEAM_ID=monitoring.${ERIC_PROXY_ID}
|
||||
- BEAM_API_KEY=${ERIC_MONITORING_BEAM_SECRET_SHORT}
|
||||
- BEAM_PROXY_URL=http://beam-proxy-eric:8081
|
||||
depends_on:
|
||||
- beam-proxy-eric
|
||||
|
@ -4,25 +4,13 @@ if [ "${ENABLE_ERIC}" == "true" ]; then
|
||||
log INFO "BBMRI-ERIC setup detected -- will start services for BBMRI-ERIC."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/eric-compose.yml"
|
||||
|
||||
# The environment needs to be defined in /etc/bridgehead
|
||||
case "$ENVIRONMENT" in
|
||||
"production")
|
||||
export ERIC_BROKER_ID=broker.bbmri.samply.de
|
||||
export ERIC_ROOT_CERT=eric
|
||||
;;
|
||||
"test")
|
||||
export ERIC_BROKER_ID=broker-test.bbmri-test.samply.de
|
||||
export ERIC_ROOT_CERT=eric.test
|
||||
;;
|
||||
*)
|
||||
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
|
||||
export ERIC_BROKER_ID=broker.bbmri.samply.de
|
||||
export ERIC_ROOT_CERT=eric
|
||||
;;
|
||||
esac
|
||||
|
||||
# Set required variables
|
||||
ERIC_BROKER_ID=broker.bbmri.samply.de
|
||||
ERIC_BROKER_URL=https://${ERIC_BROKER_ID}
|
||||
ERIC_PROXY_ID=${SITE_ID}.${ERIC_BROKER_ID}
|
||||
ERIC_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
ERIC_SUPPORT_EMAIL=bridgehead@helpdesk.bbmri-eric.eu
|
||||
|
||||
#Monitoring
|
||||
ERIC_MONITORING_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
fi
|
||||
|
@ -1,20 +0,0 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
|
||||
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
|
||||
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
|
||||
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
|
||||
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
|
||||
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
|
||||
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
|
||||
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
|
||||
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
|
||||
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
|
||||
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
|
||||
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
|
||||
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
|
||||
vZdXnV1qjlM2
|
||||
-----END CERTIFICATE-----
|
@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
focus-gbn:
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
|
||||
image: docker.verbis.dkfz.de/cache/samply/focus:main
|
||||
container_name: bridgehead-focus-gbn
|
||||
environment:
|
||||
API_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
|
||||
@ -22,6 +22,7 @@ services:
|
||||
BROKER_URL: ${GBN_BROKER_URL}
|
||||
PROXY_ID: ${GBN_PROXY_ID}
|
||||
APP_focus_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
|
||||
APP_monitoring_KEY: ${GBN_MONITORING_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
@ -32,5 +33,15 @@ services:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/bbmri/modules/${GBN_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro
|
||||
- /srv/docker/bridgehead/bbmri/modules/gbn.root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
bridgehead-monitoring:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-monitoring:latest
|
||||
container_name: bridgehead-monitoring-gbn
|
||||
environment:
|
||||
- BEAM_ID=monitoring.${GBN_PROXY_ID}
|
||||
- BEAM_API_KEY=${GBN_MONITORING_BEAM_SECRET_SHORT}
|
||||
- BEAM_PROXY_URL=http://beam-proxy-gbn:8081
|
||||
depends_on:
|
||||
- beam-proxy-gbn
|
||||
|
||||
|
@ -4,25 +4,13 @@ if [ "${ENABLE_GBN}" == "true" ]; then
|
||||
log INFO "GBN setup detected -- will start services for German Biobank Node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/gbn-compose.yml"
|
||||
|
||||
# The environment needs to be defined in /etc/bridgehead
|
||||
case "$ENVIRONMENT" in
|
||||
"production")
|
||||
export GBN_BROKER_ID=broker.bbmri.de
|
||||
export GBN_ROOT_CERT=gbn
|
||||
;;
|
||||
"test")
|
||||
export GBN_BROKER_ID=broker.test.bbmri.de
|
||||
export GBN_ROOT_CERT=gbn.test
|
||||
;;
|
||||
*)
|
||||
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
|
||||
export GBN_BROKER_ID=broker.bbmri.de
|
||||
export GBN_ROOT_CERT=gbn
|
||||
;;
|
||||
esac
|
||||
|
||||
# Set required variables
|
||||
GBN_BROKER_ID=broker.bbmri.de
|
||||
GBN_BROKER_URL=https://${GBN_BROKER_ID}
|
||||
GBN_PROXY_ID=${SITE_ID}.${GBN_BROKER_ID}
|
||||
GBN_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
GBN_SUPPORT_EMAIL=feedback@germanbiobanknode.de
|
||||
|
||||
#Monitoring
|
||||
GBN_MONITORING_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
fi
|
||||
|
@ -1,20 +0,0 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUQJjusHYR89Xas+kRbg41aHZxfmcwDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwODIxMDk1MDI1WhcNMzMw
|
||||
ODE4MDk1MDU1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBAMP0jt2tSk23Bu+QeogqlFwjbMnqwRcWGKAOF4ch
|
||||
aOK2B5u/BnpqIZDZbhfSIJTv8DPe3+nA2VqRfSiW3HbV0auqxx1ii2ZmHYbvO2P/
|
||||
Jj6hyIiYYGqCMRVXk7iB+DfMysQEaSJO/7lJSprlVQCl0u7MAQ4q/szVNwcCm2Xi
|
||||
iE00Wlota2xTYjnJHYjeaLZL4kQsjqW2aCWHG4q77Z4NXT+lXN9XXedgoXLhuwWl
|
||||
UyHhXPjyCVu1iFzsXwSTodPAETGoInRYMqMA7PrbHZu1b2Jz0BwCQ+bark1td+Mf
|
||||
l3uP0QduhZnH6zGO0KyUFRzeiesgabv5bgUeSSsIOVjnLJUCAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFME99nPh1Vuo
|
||||
7eRaymL2Ps7qGxIdMB8GA1UdIwQYMBaAFME99nPh1Vuo7eRaymL2Ps7qGxIdMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB0WG0xT00R
|
||||
5CA0tVHaNo8bQuAXytu566TspKc5vVd3r6mglj/MiSSQG2MVz+GUU6LnnApgln1P
|
||||
pvZuyaldB0QdTTLeJVMr/eFtZonlxqcxkj+VW2Y7mRHT7Xx9GQvzKYvSK5m/+xzH
|
||||
pAQl8AirgkoZ5b+ltlzM0pDAH204xj3/skmGqM/o0FKzRtpetHYkZPiquHCmO2Cp
|
||||
nTMkv7c2qu5t2Dm5q0Tmb7ZRoA1yIYhDn/UfhTAVWQnoMfXK8oB9nkRRb7pAfOXo
|
||||
W1K4A+oWqKrJwfIH/Ycnw7hu8hPuGOyIN/PLnLpJp9M2I67vywp5lIvFib4UukyJ
|
||||
wJw6/iTienIA
|
||||
-----END CERTIFICATE-----
|
@@ -14,7 +14,7 @@ do
done

SUPPORT_EMAIL=$ERIC_SUPPORT_EMAIL
BROKER_URL_FOR_PREREQ="${ERIC_BROKER_URL:-$GBN_BROKER_URL}"
BROKER_URL_FOR_PREREQ=$ERIC_BROKER_URL

if [ -n "$GBN_SUPPORT_EMAIL" ]; then
  SUPPORT_EMAIL=$GBN_SUPPORT_EMAIL
bridgehead: 21 changes
@ -41,7 +41,6 @@ case "$PROJECT" in
|
||||
;;
|
||||
esac
|
||||
|
||||
# TODO: Please add proper documentation for variable priorities (1. secrets, 2. vars, 3. PROJECT.local.conf, 4. PROJECT.conf, 5. ???
|
||||
loadVars() {
|
||||
# Load variables from /etc/bridgehead and /srv/docker/bridgehead
|
||||
set -a
|
||||
@ -51,7 +50,6 @@ loadVars() {
|
||||
source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
|
||||
fi
|
||||
fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
|
||||
setHostname
|
||||
[ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
|
||||
set +a
|
||||
|
||||
@ -66,23 +64,7 @@ loadVars() {
|
||||
OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml"
|
||||
fi
|
||||
detectCompose
|
||||
setupProxy
|
||||
|
||||
# Set some project-independent default values
|
||||
: ${ENVIRONMENT:=production}
|
||||
|
||||
case "$ENVIRONMENT" in
|
||||
"production")
|
||||
export FOCUS_TAG=main
|
||||
;;
|
||||
"test")
|
||||
export FOCUS_TAG=develop
|
||||
;;
|
||||
*)
|
||||
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
|
||||
export FOCUS_TAG=main
|
||||
;;
|
||||
esac
|
||||
setHostname
|
||||
}
|
||||
|
||||
case "$ACTION" in
|
||||
@ -90,7 +72,6 @@ case "$ACTION" in
|
||||
loadVars
|
||||
hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
|
||||
checkRequirements
|
||||
sync_secrets
|
||||
hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
|
||||
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
|
||||
;;
|
||||
|
@ -28,7 +28,7 @@ services:
|
||||
BLAZE_URL: "http://bridgehead-ccp-blaze:8080/fhir/"
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
OBFUSCATE: "no"
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
@ -40,6 +40,7 @@ services:
|
||||
BROKER_URL: ${BROKER_URL}
|
||||
PROXY_ID: ${PROXY_ID}
|
||||
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
|
||||
APP_monitoring_KEY: ${CCP_MONITORING_BEAM_SECRET_SHORT}
|
||||
PRIVKEY_FILE: /run/secrets/proxy.pem
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
@ -52,11 +53,15 @@ services:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.address=http://oauth2_proxy:4180/"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.trustForwardHeader=true"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.authResponseHeaders=X-Auth-Request-Access-Token,Authorization"
|
||||
bridgehead-monitoring:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-monitoring:latest
|
||||
container_name: bridgehead-monitoring-ccp
|
||||
environment:
|
||||
- BEAM_ID=monitoring.${PROXY_ID}
|
||||
- BEAM_API_KEY=${CCP_MONITORING_BEAM_SECRET_SHORT}
|
||||
- BEAM_PROXY_URL=http://beam-proxy:8081
|
||||
depends_on:
|
||||
- beam-proxy
|
||||
|
||||
|
||||
volumes:
|
||||
|
@ -1,18 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
adt2fhir-rest:
|
||||
container_name: bridgehead-adt2fhir-rest
|
||||
image: docker.verbis.dkfz.de/ccp/adt2fhir-rest:main
|
||||
environment:
|
||||
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
|
||||
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
|
||||
SALT: ${LOCAL_SALT}
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.adt2fhir-rest.rule=PathPrefix(`/adt2fhir-rest`)"
|
||||
- "traefik.http.middlewares.adt2fhir-rest_strip.stripprefix.prefixes=/adt2fhir-rest"
|
||||
- "traefik.http.services.adt2fhir-rest.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.adt2fhir-rest.tls=true"
|
||||
- "traefik.http.routers.adt2fhir-rest.middlewares=adt2fhir-rest_strip,auth"
|
@ -1,13 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
function adt2fhirRestSetup() {
|
||||
if [ -n "$ENABLE_ADT2FHIR_REST" ]; then
|
||||
log INFO "ADT2FHIR-REST setup detected -- will start adt2fhir-rest API."
|
||||
if [ ! -n "$IDMANAGER_LOCAL_PATIENTLIST_APIKEY" ]; then
|
||||
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
|
||||
exit 1;
|
||||
fi
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/adt2fhir-rest-compose.yml"
|
||||
LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
fi
|
||||
}
|
@ -1,53 +0,0 @@
|
||||
version: '3.7'
|
||||
|
||||
services:
|
||||
cbioportal:
|
||||
# image: docker.verbis.dkfz.de/ccp/dktk-cbioportal:latest
|
||||
image: dktk-cbioportal
|
||||
container_name: bridgehead-cbioportal
|
||||
environment:
|
||||
DB_PASSWORD: ${CBIOPORTAL_DB_PASSWORD}
|
||||
HTTP_RELATIVE_PATH: "/cbioportal"
|
||||
UPLOAD_HTTP_RELATIVE_PATH: "/cbioportal-upload"
|
||||
depends_on:
|
||||
- cbioportal-database
|
||||
- cbioportal-session
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.cbioportal.rule=PathPrefix(`/cbioportal`)"
|
||||
- "traefik.http.routers.cbioportal.service=cbioportal"
|
||||
- "traefik.http.services.cbioportal.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.cbioportal.tls=true"
|
||||
- "traefik.http.routers.cbioportal-upload.rule=PathPrefix(`/cbioportal-upload`)"
|
||||
- "traefik.http.routers.cbioportal-upload.service=cbioportal-upload"
|
||||
- "traefik.http.routers.cbioportal-upload.tls=true"
|
||||
- "traefik.http.services.cbioportal-upload.loadbalancer.server.port=8001"
|
||||
|
||||
|
||||
cbioportal-database:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-cbioportal-database:latest
|
||||
container_name: bridgehead-cbioportal-database
|
||||
environment:
|
||||
MYSQL_DATABASE: cbioportal
|
||||
MYSQL_USER: cbio_user
|
||||
MYSQL_PASSWORD: ${CBIOPORTAL_DB_PASSWORD}
|
||||
MYSQL_ROOT_PASSWORD: ${CBIOPORTAL_DB_ROOT_PASSWORD}
|
||||
volumes:
|
||||
- /var/cache/bridgehead/ccp/cbioportal_db_data:/var/lib/mysql
|
||||
|
||||
cbioportal-session:
|
||||
image: cbioportal/session-service:0.6.1
|
||||
container_name: bridgehead-cbioportal-session
|
||||
environment:
|
||||
SERVER_PORT: 5000
|
||||
JAVA_OPTS: -Dspring.data.mongodb.uri=mongodb://cbioportal-session-database:27017/session-service
|
||||
depends_on:
|
||||
- cbioportal-session-database
|
||||
|
||||
cbioportal-session-database:
|
||||
image: mongo:4.2
|
||||
container_name: bridgehead-cbioportal-session-database
|
||||
environment:
|
||||
MONGO_INITDB_DATABASE: session_service
|
||||
volumes:
|
||||
- /var/cache/bridgehead/ccp/cbioportal_session_db_data:/data/db
|
@ -1,8 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_CBIOPORTAL" == true ]; then
|
||||
log INFO "cBioPortal setup detected -- will start cBioPortal service."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/cbioportal-compose.yml"
|
||||
CBIOPORTAL_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the cbioportal database. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
CBIOPORTAL_DB_ROOT_PASSWORD="$(echo \"This is a salt string to generate one consistent root password for the cbioportal database. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
|
||||
fi
|
@ -1,10 +0,0 @@
|
||||
# CBioPortal Data uploader
|
||||
|
||||
|
||||
## Usage
|
||||
|
||||
We have integrated an API that allows you to upload data directly to cbioportal without the need to have cbioportal installed in your system.
|
||||
|
||||
## Tech stack
|
||||
|
||||
We used Flask to add this feature
|
ccp/modules/ccp-setup.sh (new file): 2 changes
@@ -0,0 +1,2 @@
#Monitoring
CCP_MONITORING_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
@ -1,161 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
rstudio:
|
||||
container_name: bridgehead-rstudio
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rstudio:latest
|
||||
environment:
|
||||
#DEFAULT_USER: "rstudio" # This line is kept for informational purposes
|
||||
PASSWORD: "${RSTUDIO_ADMIN_PASSWORD}" # It is required, even if the authentication is disabled
|
||||
DISABLE_AUTH: "true" # https://rocker-project.org/images/versioned/rstudio.html#how-to-use
|
||||
HTTP_RELATIVE_PATH: "/rstudio"
|
||||
ALL_PROXY: "http://forward_proxy:3128" # https://rocker-project.org/use/networking.html
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.rstudio_ccp.rule=PathPrefix(`/rstudio`)"
|
||||
- "traefik.http.services.rstudio_ccp.loadbalancer.server.port=8787"
|
||||
- "traefik.http.middlewares.rstudio_ccp_strip.stripprefix.prefixes=/rstudio"
|
||||
- "traefik.http.routers.rstudio_ccp.tls=true"
|
||||
- "traefik.http.routers.rstudio_ccp.middlewares=oidcAuth,rstudio_ccp_strip"
|
||||
networks:
|
||||
- rstudio
|
||||
|
||||
opal:
|
||||
container_name: bridgehead-opal
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-opal:latest
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.opal_ccp.rule=PathPrefix(`/opal`)"
|
||||
- "traefik.http.services.opal_ccp.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.opal_ccp.tls=true"
|
||||
links:
|
||||
- opal-rserver
|
||||
- opal-db
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC -Dhttps.proxyHost=forward_proxy -Dhttps.proxyPort=3128"
|
||||
# OPAL_ADMINISTRATOR_USER: "administrator" # This line is kept for informational purposes
|
||||
OPAL_ADMINISTRATOR_PASSWORD: "${OPAL_ADMIN_PASSWORD}"
|
||||
POSTGRESDATA_HOST: "opal-db"
|
||||
POSTGRESDATA_DATABASE: "opal"
|
||||
POSTGRESDATA_USER: "opal"
|
||||
POSTGRESDATA_PASSWORD: "${OPAL_DB_PASSWORD}"
|
||||
ROCK_HOSTS: "opal-rserver:8085"
|
||||
APP_URL: "https://${HOST}/opal"
|
||||
APP_CONTEXT_PATH: "/opal"
|
||||
OPAL_PRIVATE_KEY: "/run/secrets/opal-key.pem"
|
||||
OPAL_CERTIFICATE: "/run/secrets/opal-cert.pem"
|
||||
KEYCLOAK_URL: "${KEYCLOAK_URL}"
|
||||
KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
|
||||
KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PRIVATE_CLIENT_ID}"
|
||||
KEYCLOAK_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
KEYCLOAK_ADMIN_GROUP: "${KEYCLOAK_ADMIN_GROUP}"
|
||||
TOKEN_MANAGER_PASSWORD: "${TOKEN_MANAGER_OPAL_PASSWORD}"
|
||||
EXPORTER_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
BEAM_APP_ID: token-manager.${PROXY_ID}
|
||||
BEAM_SECRET: ${TOKEN_MANAGER_SECRET}
|
||||
BEAM_DATASHIELD_PROXY: request-manager
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/opal-metadata-db:/srv" # Opal metadata
|
||||
secrets:
|
||||
- opal-cert.pem
|
||||
- opal-key.pem
|
||||
|
||||
opal-db:
|
||||
container_name: bridgehead-opal-db
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
|
||||
environment:
|
||||
POSTGRES_PASSWORD: "${OPAL_DB_PASSWORD}" # Set in datashield-setup.sh
|
||||
POSTGRES_USER: "opal"
|
||||
POSTGRES_DB: "opal"
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/opal-db:/var/lib/postgresql/data" # Opal project data (imported from exporter)
|
||||
|
||||
opal-rserver:
|
||||
container_name: bridgehead-opal-rserver
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rserver # datashield/rock-base + dsCCPhos
|
||||
tmpfs:
|
||||
- /srv
|
||||
|
||||
beam-connect:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
|
||||
container_name: bridgehead-datashield-connect
|
||||
environment:
|
||||
PROXY_URL: "http://beam-proxy:8081"
|
||||
TLS_CA_CERTIFICATES_DIR: /run/secrets
|
||||
APP_ID: datashield-connect.${SITE_ID}.${BROKER_ID}
|
||||
PROXY_APIKEY: ${DATASHIELD_CONNECT_SECRET}
|
||||
DISCOVERY_URL: "./map/central.json"
|
||||
LOCAL_TARGETS_FILE: "./map/local.json"
|
||||
NO_AUTH: "true"
|
||||
secrets:
|
||||
- opal-cert.pem
|
||||
depends_on:
|
||||
- beam-proxy
|
||||
volumes:
|
||||
- /tmp/bridgehead/opal-map/:/map/:ro
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
traefik:
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
forward_proxy:
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
beam-proxy:
|
||||
environment:
|
||||
APP_datashield-connect_KEY: ${DATASHIELD_CONNECT_SECRET}
|
||||
APP_token-manager_KEY: ${TOKEN_MANAGER_SECRET}
|
||||
|
||||
# TODO: Allow users of group /DataSHIELD and KEYCLOAK_USER_GROUP at the same time:
|
||||
# Maybe a solution would be (https://oauth2-proxy.github.io/oauth2-proxy/docs/configuration/overview/):
|
||||
# --allowed-groups=/DataSHIELD,KEYCLOAK_USER_GROUP
|
||||
oauth2_proxy:
|
||||
image: quay.io/oauth2-proxy/oauth2-proxy
|
||||
container_name: bridgehead_oauth2_proxy
|
||||
command: >-
|
||||
--allowed-group=/DataSHIELD
|
||||
--oidc-groups-claim=${KEYCLOAK_GROUP_CLAIM}
|
||||
--auth-logging=true
|
||||
--whitelist-domain=${HOST}
|
||||
--http-address="0.0.0.0:4180"
|
||||
--reverse-proxy=true
|
||||
--upstream="static://202"
|
||||
--email-domain="*"
|
||||
--cookie-name="_BRIDGEHEAD_oauth2"
|
||||
--cookie-secret="${OAUTH2_PROXY_SECRET}"
|
||||
--cookie-expire="12h"
|
||||
--cookie-secure="true"
|
||||
--cookie-httponly="true"
|
||||
#OIDC settings
|
||||
--provider="keycloak-oidc"
|
||||
--provider-display-name="VerbIS Login"
|
||||
--client-id="${KEYCLOAK_PRIVATE_CLIENT_ID}"
|
||||
--client-secret="${OIDC_CLIENT_SECRET}"
|
||||
--redirect-url="https://${HOST}${OAUTH2_CALLBACK}"
|
||||
--oidc-issuer-url="${KEYCLOAK_ISSUER_URL}"
|
||||
--scope="openid email profile"
|
||||
--code-challenge-method="S256"
|
||||
--skip-provider-button=true
|
||||
#X-Forwarded-Header settings - true/false depending on your needs
|
||||
--pass-basic-auth=true
|
||||
--pass-user-headers=false
|
||||
--pass-access-token=false
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.oauth2_proxy.rule=Host(`${HOST}`) && PathPrefix(`/oauth2`, `/oauth2/callback`)"
|
||||
- "traefik.http.services.oauth2_proxy.loadbalancer.server.port=4180"
|
||||
- "traefik.http.routers.oauth2_proxy.tls=true"
|
||||
|
||||
secrets:
|
||||
opal-cert.pem:
|
||||
file: /tmp/bridgehead/opal-cert.pem
|
||||
opal-key.pem:
|
||||
file: /tmp/bridgehead/opal-key.pem
|
||||
|
||||
networks:
|
||||
rstudio:
|
@ -1,157 +0,0 @@
|
||||
<template id="opal-ccp" source-id="blaze-store" opal-project="ccp-demo" target-id="opal" >
|
||||
|
||||
<container csv-filename="Patient-${TIMESTAMP}.csv" opal-table="patient" opal-entity-type="Patient">
|
||||
<attribute csv-column="patient-id" opal-value-type="text" primary-key="true" val-fhir-path="Patient.id.value" anonym="Pat" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="dktk-id-global" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Global').value.value"/>
|
||||
<attribute csv-column="dktk-id-lokal" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Lokal').value.value" />
|
||||
<attribute csv-column="geburtsdatum" opal-value-type="date" val-fhir-path="Patient.birthDate.value"/>
|
||||
<attribute csv-column="geschlecht" opal-value-type="text" val-fhir-path="Patient.gender.value" />
|
||||
<attribute csv-column="datum_des_letztbekannten_vitalstatus" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '75186-7').effective.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
|
||||
<attribute csv-column="vitalstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '75186-7').value.coding.code.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
|
||||
<!--fehlt in ADT2FHIR--><attribute csv-column="tod_tumorbedingt" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://fhir.de/CodeSystem/bfarm/icd-10-gm').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
|
||||
<!--fehlt in ADT2FHIR--><attribute csv-column="todesursachen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/JNUCS').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="Diagnosis-${TIMESTAMP}.csv" opal-table="diagnosis" opal-entity-type="Diagnosis">
|
||||
<attribute csv-column="diagnosis-id" primary-key="true" opal-value-type="text" val-fhir-path="Condition.id.value" anonym="Dia" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Condition.subject.reference.value" anonym="Pat"/>
|
||||
<attribute csv-column="primaerdiagnose" opal-value-type="text" val-fhir-path="Condition.code.coding.code.value"/>
|
||||
<attribute csv-column="tumor_diagnosedatum" opal-value-type="date" val-fhir-path="Condition.onset.value"/>
|
||||
<attribute csv-column="primaertumor_diagnosetext" opal-value-type="text" val-fhir-path="Condition.code.text.value"/>
|
||||
<attribute csv-column="version_des_icd-10_katalogs" opal-value-type="integer" val-fhir-path="Condition.code.coding.version.value"/>
|
||||
<attribute csv-column="lokalisation" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').code.value"/>
|
||||
<attribute csv-column="icd-o_katalog_topographie_version" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').version.value"/>
|
||||
<attribute csv-column="seitenlokalisation_nach_adt-gekid" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/SeitenlokalisationCS').code.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="Progress-${TIMESTAMP}.csv" opal-table="progress" opal-entity-type="Progress">
|
||||
<!--it would be better to generate a an ID, instead of extracting the ClinicalImpression id-->
|
||||
<attribute csv-column="progress-id" primary-key="true" opal-value-type="text" val-fhir-path="ClinicalImpression.id.value" anonym="Pro" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="ClinicalImpression.problem.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="ClinicalImpression.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="untersuchungs-_befunddatum_im_verlauf" opal-value-type="date" val-fhir-path="ClinicalImpression.effective.value" />
|
||||
<!-- just for evaluation: redundant to Untersuchungs-, Befunddatum im Verlauf-->
|
||||
<attribute csv-column="datum_lokales_oder_regionaeres_rezidiv" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').effective.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
<attribute csv-column="gesamtbeurteilung_tumorstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21976-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
|
||||
<attribute csv-column="lokales_oder_regionaeres_rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
|
||||
<attribute csv-column="lymphknoten-rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4370-8').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
<attribute csv-column="fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4226-2').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
|
||||
</container>
|
||||
|
||||
<container csv-filename="Histology-${TIMESTAMP}.csv" opal-table="histology" opal-entity-type="Histology" >
|
||||
<attribute csv-column="histology-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="histologie_datum" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '59847-4').effective.value"/>
|
||||
<attribute csv-column="icd-o_katalog_morphologie_version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.version.value" />
|
||||
<attribute csv-column="morphologie" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
|
||||
<attribute csv-column="morphologie-freitext" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.text.value"/>
|
||||
<attribute csv-column="grading" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Metastasis-${TIMESTAMP}.csv" opal-table="metastasis" opal-entity-type="Metastasis" >
|
||||
<attribute csv-column="metastasis-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').id" anonym="Met" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_fernmetastasen" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21907-1').effective.value"/>
|
||||
<attribute csv-column="fernmetastasen_vorhanden" opal-value-type="boolean" val-fhir-path="Observation.where(code.coding.code = '21907-1').value.coding.code.value"/>
|
||||
<attribute csv-column="lokalisation_fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').bodySite.coding.code.value"/>
|
||||
</container>
|
||||
|
||||
<container csv-filename="TNM-${TIMESTAMP}.csv" opal-table="tnm" opal-entity-type="TNM">
|
||||
<attribute csv-column="tnm-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').id" anonym="TNM" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').focus.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_der_tnm_dokumentation_datum_befund" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').effective.value"/>
|
||||
<attribute csv-column="uicc_stadium" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="c_p_u_preefix_m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-y-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '59479-6' or code.coding.code = '59479-6').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-r-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21983-2' or code.coding.code = '21983-2').value.coding.code.value"/>
|
||||
<attribute csv-column="tnm-m-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '42030-7' or code.coding.code = '42030-7').value.coding.code.value"/>
|
||||
<!--nur bei UICC, nicht in ADT2FHIR--><attribute csv-column="tnm-version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.version.value"/>
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="System-Therapy-${TIMESTAMP}.csv" opal-table="system-therapy" opal-entity-type="SystemTherapy">
|
||||
<attribute csv-column="system-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="MedicationStatement.id" anonym="Sys" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="MedicationStatement.reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="MedicationStatement.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="systemische_therapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
<attribute csv-column="intention_chemotherapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value"/>
|
||||
<attribute csv-column="therapieart" opal-value-type="text" val-fhir-path="MedicationStatement.category.coding.code.value"/>
|
||||
<attribute csv-column="systemische_therapie_beginn" opal-value-type="date" val-fhir-path="MedicationStatement.effective.start.value"/>
|
||||
<attribute csv-column="systemische_therapie_ende" opal-value-type="date" val-fhir-path="MedicationStatement.effective.end.value"/>
|
||||
<attribute csv-column="systemische_therapie_protokoll" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SystemischeTherapieProtokoll').value.text.value"/>
|
||||
<attribute csv-column="systemische_therapie_substanzen" opal-value-type="text" val-fhir-path="MedicationStatement.medication.text.value"/>
|
||||
<attribute csv-column="chemotherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'CH').exists().value" />
|
||||
<attribute csv-column="hormontherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'HO').exists().value" />
|
||||
<attribute csv-column="immuntherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'IM').exists().value" />
|
||||
<attribute csv-column="knochenmarktransplantation" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'KM').exists().value" />
|
||||
<attribute csv-column="abwartende_strategie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'WS').exists().value" />
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Surgery-${TIMESTAMP}.csv" opal-table="surgery" opal-entity-type="Surgery">
|
||||
<attribute csv-column="surgery-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').id" anonym="Sur" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="ops-code" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').code.coding.code.value"/>
|
||||
<attribute csv-column="datum_der_op" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'OP').performed.value"/>
|
||||
<attribute csv-column="intention_op" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-OPIntention').value.coding.code.value"/>
|
||||
<attribute csv-column="lokale_beurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/LokaleBeurteilungResidualstatusCS').code.value" />
|
||||
<attribute csv-column="gesamtbeurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/GesamtbeurteilungResidualstatusCS').code.value" />
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Radiation-Therapy-${TIMESTAMP}.csv" opal-table="radiation-therapy" opal-entity-type="RadiationTherapy">
|
||||
<attribute csv-column="radiation-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="strahlentherapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
<attribute csv-column="intention_strahlentherapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value" />
|
||||
<attribute csv-column="strahlentherapie_beginn" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.start.value"/>
|
||||
<attribute csv-column="strahlentherapie_ende" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.end.value"/>
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Molecular-Marker-${TIMESTAMP}.csv" opal-table="molecular-marker" opal-entity-type="MolecularMarker">
|
||||
<attribute csv-column="mol-marker-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').id" anonym="Mol" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').focus.reference.value" anonym="Dia" />
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="datum_der_datenerhebung" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '69548-6').effective.value"/>
|
||||
<attribute csv-column="marker" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').component.value.coding.code.value"/>
|
||||
<attribute csv-column="status_des_molekularen_markers" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.coding.code.value" />
|
||||
<attribute csv-column="zusaetzliche_alternative_dokumentation" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.text.value"/>
|
||||
</container>
|
||||
|
||||
|
||||
<container csv-filename="Sample-${TIMESTAMP}.csv" opal-table="sample" opal-entity-type="Sample">
|
||||
<attribute csv-column="sample-id" primary-key="true" opal-value-type="text" val-fhir-path="Specimen.id" anonym="Sam" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Specimen.subject.reference.value" anonym="Pat" />
|
||||
<attribute csv-column="entnahmedatum" opal-value-type="date" val-fhir-path="Specimen.collection.collectedDateTime.value"/>
|
||||
<attribute csv-column="probenart" opal-value-type="text" val-fhir-path="Specimen.type.coding.code.value"/>
|
||||
<attribute csv-column="status" opal-value-type="text" val-fhir-path="Specimen.status.code.value"/>
|
||||
<attribute csv-column="projekt" opal-value-type="text" val-fhir-path="Specimen.identifier.system.value"/>
|
||||
<!-- @TODO: it still needs to be clarified whether it would be better to take the quantity from collection.quantity -->
|
||||
<attribute csv-column="menge" opal-value-type="integer" val-fhir-path="Specimen.container.specimenQuantity.value.value"/>
|
||||
<attribute csv-column="einheit" opal-value-type="text" val-fhir-path="Specimen.container.specimenQuantity.unit.value"/>
|
||||
<attribute csv-column="aliquot" opal-value-type="text" val-fhir-path="Specimen.parent.reference.exists().value" />
|
||||
</container>
|
||||
|
||||
|
||||
|
||||
|
||||
<fhir-rev-include>Observation:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Condition:patient</fhir-rev-include>
|
||||
<fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
|
||||
<fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Procedure:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Specimen:patient</fhir-rev-include>
|
||||
|
||||
</template>
|
@ -1,13 +0,0 @@
|
||||
[
|
||||
"berlin",
|
||||
"muenchen-lmu",
|
||||
"dresden",
|
||||
"freiburg",
|
||||
"muenchen-tum",
|
||||
"tuebingen",
|
||||
"mainz",
|
||||
"frankfurt",
|
||||
"essen",
|
||||
"dktk-datashield-test",
|
||||
"dktk-test"
|
||||
]
|
@ -1,33 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_DATASHIELD" == true ]; then
|
||||
log INFO "DataSHIELD setup detected -- will start DataSHIELD services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/datashield-compose.yml"
|
||||
EXPORTER_OPAL_PASSWORD="$(generate_password \"exporter in Opal\")"
|
||||
TOKEN_MANAGER_OPAL_PASSWORD="$(generate_password \"Token Manager in Opal\")"
|
||||
OPAL_DB_PASSWORD="$(echo \"Opal DB\" | generate_simple_password)"
|
||||
OPAL_ADMIN_PASSWORD="$(generate_password \"admin password for Opal\")"
|
||||
RSTUDIO_ADMIN_PASSWORD="$(generate_password \"admin password for R-Studio\")"
|
||||
DATASHIELD_CONNECT_SECRET="$(echo \"DataShield Connect\" | generate_simple_password)"
|
||||
TOKEN_MANAGER_SECRET="$(echo \"Token Manager\" | generate_simple_password)"
|
||||
if [ ! -e /tmp/bridgehead/opal-cert.pem ]; then
|
||||
mkdir -p /tmp/bridgehead/
|
||||
chown -R bridgehead:docker /tmp/bridgehead/
|
||||
openssl req -x509 -newkey rsa:4096 -nodes -keyout /tmp/bridgehead/opal-key.pem -out /tmp/bridgehead/opal-cert.pem -days 3650 -subj "/CN=opal/C=DE"
|
||||
chmod g+r /tmp/bridgehead/opal-key.pem
|
||||
fi
|
||||
mkdir -p /tmp/bridgehead/opal-map
|
||||
jq -n '{"sites": input | map({
|
||||
"name": .,
|
||||
"id": .,
|
||||
"virtualhost": "\(.):443",
|
||||
"beamconnect": "datashield-connect.\(.).'"$BROKER_ID"'"
|
||||
})}' ./$PROJECT/modules/datashield-mappings.json > /tmp/bridgehead/opal-map/central.json
|
||||
jq -n '[{
|
||||
"external": "'"$SITE_ID"':443",
|
||||
"internal": "opal:8443",
|
||||
"allowed": input | map("datashield-connect.\(.).'"$BROKER_ID"'")
|
||||
}]' ./$PROJECT/modules/datashield-mappings.json > /tmp/bridgehead/opal-map/local.json
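# For illustration (comment added here, not part of the original script): with the site list from
# datashield-mappings.json and BROKER_ID set to e.g. broker.ccp-it.dktk.dkfz.de (the broker id used for DNPM
# elsewhere in this diff), central.json contains one entry per site of the form
#   {"name": "berlin", "id": "berlin", "virtualhost": "berlin:443", "beamconnect": "datashield-connect.berlin.broker.ccp-it.dktk.dkfz.de"}
# while local.json maps the external "<SITE_ID>:443" to the internal "opal:8443" and allows exactly those beamconnect addresses.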
|
||||
chown -R bridgehead:docker /tmp/bridgehead/
|
||||
add_private_oidc_redirect_url "/opal/*"
|
||||
fi
|
@ -1,28 +0,0 @@
|
||||
# DataSHIELD

This module provides the infrastructure to run DataSHIELD within the Bridgehead.
For more information about DataSHIELD, please visit https://www.datashield.org/

## R-Studio

To connect to the different Bridgeheads of the CCP through DataSHIELD, you can use your own R-Studio environment.
However, the R-Studio shipped with this module already has the DataSHIELD libraries installed and is integrated into the Bridgehead,
which saves you the extra configuration of your own R-Studio environment.

## Opal

Opal is the core of DataSHIELD. The module is made up of Opal, a PostgreSQL database and an R server.
For more information about Opal, please visit https://opaldoc.obiba.org

### Opal

Opal is OBiBa's core database application for biobanks.

### Opal-DB

Opal requires a database to import the data for DataSHIELD; we use a PostgreSQL instance for this.
The data is imported within the Bridgehead through the exporter.

### Opal-R-Server

The R server that executes the DataSHIELD R scripts.

## Beam

### Beam-Connect

Beam-Connect routes HTTP(S) traffic through Beam so that R-Studio can access data from other Bridgeheads that have DataSHIELD enabled.

### Beam-Proxy

The usual Beam proxy used for communication.
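
For a site that wants to use this module, a minimal sketch of switching it on (ENABLE_DATASHIELD is the flag checked in datashield-setup.sh above; the configuration file path and the systemd unit name are assumptions based on a typical Bridgehead installation):

```bash
# Hedged sketch: enable the DataSHIELD module for the ccp project and restart the Bridgehead.
# ENABLE_DATASHIELD is read by datashield-setup.sh; adjust the config path and unit name to your deployment.
echo 'ENABLE_DATASHIELD=true' | sudo tee -a /etc/bridgehead/ccp.conf
sudo systemctl restart bridgehead@ccp.service
```

After the restart, Opal is served behind the reverse proxy under the /opal path registered in datashield-setup.sh.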
|
@ -16,11 +16,9 @@ services:
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
HTTPS_PROXY: "http://forward_proxy:3128"
|
||||
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal
|
||||
NO_PROXY: beam-proxy,dnpm-backend
|
||||
RUST_LOG: ${RUST_LOG:-info}
|
||||
NO_AUTH: "true"
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
|
@ -1,33 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
@ -1,27 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
|
@ -1,9 +1,10 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM}" ]; then
|
||||
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
|
||||
|
||||
# Set variables required for Beam-Connect
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
fi
|
||||
|
@ -1,6 +0,0 @@
|
||||
# Full Excel export (double quotes so that ${HOST} and the API key are expanded by the shell)
curl --location --request POST "https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL" \
--header "x-api-key: ${EXPORT_API_KEY}"

# Quality report (QB)
curl --location --request POST "https://${HOST}/ccp-reporter/generate?template-id=ccp"
|
@ -1,67 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
exporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
|
||||
container_name: bridgehead-ccp-exporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
EXPORTER_DB_USER: "exporter"
|
||||
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
|
||||
HTTP_RELATIVE_PATH: "/ccp-exporter"
|
||||
SITE: "${SITE_ID}"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
|
||||
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
|
||||
- "traefik.http.routers.exporter_ccp.tls=true"
|
||||
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
|
||||
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
|
||||
|
||||
exporter-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
|
||||
container_name: bridgehead-ccp-exporter-db
|
||||
environment:
|
||||
POSTGRES_USER: "exporter"
|
||||
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
POSTGRES_DB: "exporter"
|
||||
volumes:
|
||||
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
|
||||
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
|
||||
|
||||
reporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
|
||||
container_name: bridgehead-ccp-reporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-reporter"
|
||||
SITE: "${SITE_ID}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
LOG_FHIR_VALIDATION: "false"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
|
||||
# In this initial development state of the Bridgehead, we are trying to use as few volumes as possible.
# However, in the first runs at the CCP sites this volume has proven to be very important: generating a report
# is a process that can take several hours, because it depends on the exporter, and there is a risk
# that the Bridgehead restarts and the already created export is lost.
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
|
||||
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
|
||||
- "traefik.http.routers.reporter_ccp.tls=true"
|
||||
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
|
||||
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"
|
@ -1,8 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_EXPORTER" == true ]; then
|
||||
log INFO "Exporter setup detected -- will start Exporter service."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
|
||||
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
|
||||
fi
|
@ -1,15 +0,0 @@
|
||||
# Exporter and Reporter

## Exporter

The exporter is a REST API that exports the data from the Bridgehead's various databases as a set of tables.
It supports several output formats, such as CSV, Excel, JSON and XML, and can also export data into Opal.
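
For illustration, a hedged example of triggering an export via this API, modelled on the curl snippet shown earlier in this diff (HOST comes from your site configuration, EXPORTER_API_KEY is generated in exporter-setup.sh; query, template id and output format are just examples):

```bash
# Hedged example: request a full Excel export of all Patient resources via the exporter REST API.
curl --location --request POST \
  "https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL" \
  --header "x-api-key: ${EXPORTER_API_KEY}"
```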
## Exporter-DB

The Exporter-DB stores queries so that the exporter can execute them.
The exporter also uses it to manage repeated executions of the same query.

## Reporter

This component is a plugin of the exporter that allows more complex Excel reports, described in templates, to be created.
It is compatible with different template engines such as Groovy and Thymeleaf.
It is well suited for generating a document such as our traditional CCP quality report.
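
A hedged example of generating such a report, again following the curl snippet earlier in this diff (ccp-qb is the default report template id configured in teiler-compose.yml; adjust it to your templates):

```bash
# Hedged example: ask the reporter to generate the CCP quality report.
curl --location --request POST "https://${HOST}/ccp-reporter/generate?template-id=ccp-qb"
```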
|
@ -1,5 +1,4 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
id-manager:
|
||||
image: docker.verbis.dkfz.de/bridgehead/magicpl
|
||||
@ -44,7 +43,7 @@ services:
|
||||
- patientlist-db
|
||||
|
||||
patientlist-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.6-alpine
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.1-alpine
|
||||
container_name: bridgehead-patientlist-db
|
||||
environment:
|
||||
POSTGRES_USER: "mainzelliste"
|
||||
|
@ -1,4 +1,4 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
function idManagementSetup() {
|
||||
if [ -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
@ -39,7 +39,6 @@ function applySpecialCases() {
|
||||
result="$1";
|
||||
result="${result/Lmu/LMU}";
|
||||
result="${result/Tum/TUM}";
|
||||
result="${result/Dktk Test/Teststandort}";
|
||||
echo "$result";
|
||||
}
|
||||
|
||||
|
@ -1,47 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
login-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
|
||||
container_name: bridgehead-login-db
|
||||
environment:
|
||||
POSTGRES_USER: "keycloak"
|
||||
POSTGRES_PASSWORD: "${KEYCLOAK_DB_PASSWORD}" # Set in login-setup.sh
|
||||
POSTGRES_DB: "keycloak"
|
||||
tmpfs:
|
||||
- /var/lib/postgresql/data
|
||||
# Consider removing this comment once we have collected experience in production.
|
||||
# volumes:
|
||||
# - "bridgehead-login-db:/var/lib/postgresql/data"
|
||||
|
||||
login:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-keycloak:latest
|
||||
container_name: bridgehead-login
|
||||
environment:
|
||||
KEYCLOAK_ADMIN: "admin"
|
||||
KEYCLOAK_ADMIN_PASSWORD: "${LDM_AUTH}"
|
||||
TEILER_ADMIN: "${PROJECT}"
|
||||
TEILER_ADMIN_PASSWORD: "${LDM_AUTH}"
|
||||
TEILER_ADMIN_FIRST_NAME: "${OPERATOR_FIRST_NAME}"
|
||||
TEILER_ADMIN_LAST_NAME: "${OPERATOR_LAST_NAME}"
|
||||
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
|
||||
KC_DB_PASSWORD: "${KEYCLOAK_DB_PASSWORD}" # Set in login-setup.sh
|
||||
KC_HOSTNAME_URL: "https://${HOST}/login"
|
||||
KC_HOSTNAME_STRICT: "false"
|
||||
KC_PROXY_ADDRESS_FORWARDING: "true"
|
||||
TEILER_ORCHESTRATOR_EXTERN_URL: "https://${HOST}/ccp-teiler"
|
||||
command:
|
||||
- start-dev --import-realm --proxy edge --http-relative-path=/login
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.login.rule=PathPrefix(`/login`)"
|
||||
- "traefik.http.services.login.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.login.tls=true"
|
||||
depends_on:
|
||||
- login-db
|
||||
|
||||
# Consider removing this comment once we have collected experience in production.
|
||||
#volumes:
|
||||
# bridgehead-login-db:
|
||||
# name: "bridgehead-login-db"
|
@ -1,7 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_LOGIN" == true ]; then
|
||||
log INFO "Login setup detected -- will start Login services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/login-compose.yml"
|
||||
KEYCLOAK_DB_PASSWORD="$(generate_password \"local Keycloak\")"
|
||||
fi
|
@ -1,13 +0,0 @@
|
||||
# Login

The login component is a local Keycloak instance. In the future it will either be replaced by the central Keycloak instance,
or it may be kept to add local identity providers to the Bridgehead or simply to ease the configuration of the central
Keycloak instance when a new Bridgehead is integrated.
The basic configuration of our Keycloak instance is contained in a small JSON file.

### Teiler User

Currently, the local Keycloak is used by the Teiler. The basic Keycloak configuration contains a basic admin user,
which can be configured with the TEILER_ADMIN_XXX environment variables.

## Login-DB

Keycloak requires a local database for its configuration. However, since we use an initial JSON configuration file,
no volume is strictly needed for the login as long as neither a local identity provider nor any local users are configured.
|
@ -2,7 +2,6 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
mtba:
|
||||
#image: docker.verbis.dkfz.de/cache/samply/mtba:latest
|
||||
image: docker.verbis.dkfz.de/cache/samply/mtba:develop
|
||||
container_name: bridgehead-mtba
|
||||
environment:
|
||||
@ -12,30 +11,22 @@ services:
|
||||
ID_MANAGER_API_KEY: ${IDMANAGER_UPLOAD_APIKEY}
|
||||
ID_MANAGER_PSEUDONYM_ID_TYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
|
||||
ID_MANAGER_URL: http://id-manager:8080/id-manager
|
||||
PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER:-FIRST_NAME}
|
||||
PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER:-LAST_NAME}
|
||||
PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER:-GENDER}
|
||||
PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER:-BIRTHDAY}
|
||||
PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER}
|
||||
PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER}
|
||||
PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER}
|
||||
PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER}
|
||||
CBIOPORTAL_URL: http://cbioportal:8080
|
||||
FILE_CHARSET: ${MTBA_FILE_CHARSET:-UTF-8}
|
||||
FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE:-LF}
|
||||
CSV_DELIMITER: ${MTBA_CSV_DELIMITER:-TAB}
|
||||
HTTP_RELATIVE_PATH: "/mtba"
|
||||
KEYCLOAK_ADMIN_GROUP: "${KEYCLOAK_ADMIN_GROUP}"
|
||||
KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PRIVATE_CLIENT_ID}"
|
||||
KEYCLOAK_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
|
||||
KEYCLOAK_URL: "${KEYCLOAK_URL}"
|
||||
|
||||
FILE_CHARSET: ${MTBA_FILE_CHARSET}
|
||||
FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE}
|
||||
CSV_DELIMITER: ${MTBA_CSV_DELIMITER}
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.mtba_ccp.rule=PathPrefix(`/mtba`)"
|
||||
- "traefik.http.services.mtba_ccp.loadbalancer.server.port=8480"
|
||||
- "traefik.http.routers.mtba_ccp.tls=true"
|
||||
|
||||
- "traefik.http.routers.mtba.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.mtba.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.mtba.tls=true"
|
||||
volumes:
|
||||
- /var/cache/bridgehead/ccp/mtba/input:/app/input
|
||||
- /var/cache/bridgehead/ccp/mtba/persist:/app/persist
|
||||
- /tmp/bridgehead/mtba/input:/app/input
|
||||
- /tmp/bridgehead/mtba/persist:/app/persist
|
||||
|
||||
# TODO: Include CBioPortal in Deployment ...
|
||||
# NOTE: CBioPortal can't load data while the system is running, so the Bridgehead needs to be restarted after each data import!
|
||||
|
@ -1,13 +1,13 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
function mtbaSetup() {
|
||||
if [ -n "$ENABLE_MTBA" ];then
|
||||
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
|
||||
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
|
||||
exit 1;
|
||||
fi
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
|
||||
add_private_oidc_redirect_url "/mtba/*"
|
||||
fi
|
||||
# TODO: Check if ID-Management Module is activated!
|
||||
if [ -n "$ENABLE_MTBA" ];then
|
||||
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
|
||||
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
|
||||
log ERROR "Detected MTBA Module configuration but ID-Management Module seems not to be configured!"
|
||||
exit 1;
|
||||
fi
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
|
||||
fi
|
||||
}
|
@ -1,6 +0,0 @@
|
||||
# Molecular Tumor Board Alliance (MTBA)

In this module, the genetic data to be imported is placed in a directory (/tmp/bridgehead/mtba/input). A process
regularly checks whether there are files in this directory. When the IDAT is provided, the files are pseudonymized,
combined with the clinical data from Blaze and imported into cBioPortal. In addition, these files are also imported
into Blaze.
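
For illustration, a minimal sketch of feeding this watcher, assuming the input volume from mtba-compose.yml above (newer compose versions map it to /var/cache/bridgehead/ccp/mtba/input) and a purely hypothetical file name:

```bash
# Hedged sketch: drop a genetic data file into the watched input directory
# (path taken from mtba-compose.yml; mutations.tsv is only an example name) ...
sudo cp mutations.tsv /var/cache/bridgehead/ccp/mtba/input/
# ... and follow the import in the MTBA container logs (container name from the compose file).
docker logs -f bridgehead-mtba
```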
|
@ -1,5 +1,4 @@
|
||||
version: "3.7"
|
||||
|
||||
volumes:
|
||||
nngm-rest:
|
||||
|
||||
@ -22,6 +21,9 @@ services:
|
||||
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
|
||||
volumes:
|
||||
- nngm-rest:/var/log
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"
|
||||
|
||||
|
||||
|
@ -1,6 +1,8 @@
|
||||
#!/bin/bash -e
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "$NNGM_CTS_APIKEY" ]; then
|
||||
log INFO "nNGM setup detected -- will start nNGM Connector."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
|
||||
fi
|
||||
function nngmSetup() {
|
||||
if [ -n "$NNGM_CTS_APIKEY" ]; then
|
||||
log INFO "nNGM setup detected -- will start nNGM Connector."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
|
||||
fi
|
||||
}
|
||||
|
@ -1,82 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
teiler-orchestrator:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
|
||||
container_name: bridgehead-teiler-orchestrator
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.rule=PathPrefix(`/ccp-teiler`)"
|
||||
- "traefik.http.services.teiler_orchestrator_ccp.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
|
||||
environment:
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
|
||||
DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE_LOWER_CASE}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
|
||||
teiler-dashboard:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:latest
|
||||
container_name: bridgehead-teiler-dashboard
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.rule=PathPrefix(`/ccp-teiler-dashboard`)"
|
||||
- "traefik.http.services.teiler_dashboard_ccp.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_dashboard_ccp_strip.stripprefix.prefixes=/ccp-teiler-dashboard"
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
KEYCLOAK_URL: "${KEYCLOAK_URL}"
|
||||
KEYCLOAK_REALM: "${KEYCLOAK_REALM}"
|
||||
KEYCLOAK_CLIENT_ID: "${KEYCLOAK_PUBLIC_CLIENT_ID}"
|
||||
KEYCLOAK_TOKEN_GROUP: "${KEYCLOAK_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
|
||||
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_USER: "${KEYCLOAK_USER_GROUP}"
|
||||
TEILER_ADMIN: "${KEYCLOAK_ADMIN_GROUP}"
|
||||
REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
|
||||
EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"
|
||||
|
||||
|
||||
teiler-backend:
|
||||
# image: docker.verbis.dkfz.de/ccp/dktk-teiler-backend:latest
|
||||
image: dktk-teiler-backend
|
||||
container_name: bridgehead-teiler-backend
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_backend_ccp.rule=PathPrefix(`/ccp-teiler-backend`)"
|
||||
- "traefik.http.services.teiler_backend_ccp.loadbalancer.server.port=8085"
|
||||
- "traefik.http.routers.teiler_backend_ccp.tls=true"
|
||||
- "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
|
||||
- "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
|
||||
environment:
|
||||
LOG_LEVEL: "INFO"
|
||||
APPLICATION_PORT: "8085"
|
||||
APPLICATION_ADDRESS: "${HOST}"
|
||||
DEFAULT_LANGUAGE: "${DEFAULT_LANGUAGE}"
|
||||
CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
|
||||
CENTRAX_URL: "${CENTRAXX_URL}"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
ENABLE_MTBA: "${ENABLE_MTBA}"
|
||||
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
|
||||
secrets:
|
||||
- ccp.conf
|
||||
|
||||
secrets:
|
||||
ccp.conf:
|
||||
file: /etc/bridgehead/ccp.conf
|
@ -1,7 +0,0 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_TEILER" == true ];then
|
||||
log INFO "Teiler setup detected -- will start Teiler services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
|
||||
add_public_oidc_redirect_url "/ccp-teiler/*"
|
||||
fi
|
@ -1,19 +0,0 @@
|
||||
# Teiler

This module orchestrates the different microfrontends of the Bridgehead as a single-page application.

## Teiler Orchestrator

A Single-SPA component consisting of the root HTML page of the single-page application and the JavaScript code that
fetches the microfrontend information from the Teiler backend and is responsible for registering the microfrontends. With the
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.

The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
They can also run on their own, but need to be extended with Single-SPA (https://single-spa.js.org/docs/ecosystem).
Three templates (Angular, Vue, React) are available that can be extended and used directly in the Teiler.

## Teiler Dashboard

It consists of the main dashboard and a set of embedded services.

### Login

The user and password are configured in ccp.local.conf.

## Teiler Backend

The microfrontends are configured in this component.
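
As a quick smoke test that the orchestrator is being served, one can probe the path configured in the Traefik rules of teiler-compose.yml (a hedged sketch; use -k only if your Bridgehead runs with a self-signed certificate):

```bash
# Hedged smoke test: the Teiler single-page application is served under /ccp-teiler.
curl -k -I "https://${HOST}/ccp-teiler/"
```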
|
20
ccp/root-new.crt.pem
Normal file
@ -0,0 +1,20 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUN7yzueIZzwpe8PaPEIMY8zoH+eMwDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNTIzMTAxNzIzWhcNMzMw
|
||||
NTIwMTAxNzUzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBAN5JAj+HydSGaxvA0AOcrXVTZ9FfsH0cMVBlQb72
|
||||
bGZgrRvkqtB011TNXZfsHl7rPxCY61DcsDJfFq3+8VHT+S9HE0qV1bEwP+oA3xc4
|
||||
Opq77av77cNNOqDC7h+jyPhHcUaE33iddmrH9Zn2ofWTSkKHHu3PAe5udCrc2QnD
|
||||
4PLRF6gqiEY1mcGknJrXj1ff/X0nRY/m6cnHNXz0Cvh8oPOtbdfGgfZjID2/fJNP
|
||||
fNoNKqN+5oJAZ+ZZ9id9rBvKj1ivW3F2EoGjZF268SgZzc5QrM/D1OpSBQf5SF/V
|
||||
qUPcQTgt9ry3YR+SZYazLkfKMEOWEa0WsqJVgXdQ6FyergcCAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEa70kcseqU5
|
||||
bHx2zSt4bG21HokhMB8GA1UdIwQYMBaAFEa70kcseqU5bHx2zSt4bG21HokhMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCGmE7NXW4T
|
||||
6J4mV3b132cGEMD7grx5JeiXK5EHMlswUS+Odz0NcBNzhUHdG4WVMbrilHbI5Ua+
|
||||
6jdKx5WwnqzjQvElP0MCw6sH/35gbokWgk1provOP99WOFRsQs+9Sm8M2XtMf9HZ
|
||||
m3wABwU/O+dhZZ1OT1PjSZD0OKWKqH/KvlsoF5R6P888KpeYFiIWiUNS5z21Jm8A
|
||||
ZcllJjiRJ60EmDwSUOQVJJSMOvtr6xTZDZLtAKSN8zN08lsNGzyrFwqjDwU0WTqp
|
||||
scMXEGBsWQjlvxqDnXyljepR0oqRIjOvgrWaIgbxcnu98tK/OdBGwlAPKNUW7Crr
|
||||
vO+eHxl9iqd4
|
||||
-----END CERTIFICATE-----
|
20
ccp/vars
@ -7,25 +7,7 @@ SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
DEFAULT_LANGUAGE=DE
|
||||
DEFAULT_LANGUAGE_LOWER_CASE=${DEFAULT_LANGUAGE,,}
|
||||
ENABLE_EXPORTER=true
|
||||
ENABLE_TEILER=true
|
||||
#ENABLE_DATASHIELD=true
|
||||
|
||||
KEYCLOAK_USER_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})"
|
||||
KEYCLOAK_ADMIN_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})_Verwalter"
|
||||
KEYCLOAK_PRIVATE_CLIENT_ID=${SITE_ID}-private
|
||||
KEYCLOAK_PUBLIC_CLIENT_ID=${SITE_ID}-public
|
||||
# TODO: Change Keycloak Realm to productive. "test-realm-01" is only for testing
|
||||
KEYCLOAK_REALM="${KEYCLOAK_REALM:-test-realm-01}"
|
||||
KEYCLOAK_URL="https://login.verbis.dkfz.de"
|
||||
KEYCLOAK_ISSUER_URL="${KEYCLOAK_URL}/realms/${KEYCLOAK_REALM}"
|
||||
KEYCLOAK_GROUP_CLAIM="groups"
|
||||
OAUTH2_CALLBACK=/oauth2/callback
|
||||
OAUTH2_PROXY_SECRET="$(echo \"This is a salt string to generate one consistent encryption key for the oauth2_proxy. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 32)"
|
||||
|
||||
add_private_oidc_redirect_url "${OAUTH2_CALLBACK}"
|
||||
|
||||
for module in $PROJECT/modules/*.sh
|
||||
do
|
||||
@ -34,5 +16,5 @@ do
|
||||
done
|
||||
|
||||
idManagementSetup
|
||||
nngmSetup
|
||||
mtbaSetup
|
||||
adt2fhirRestSetup
|
||||
|
137
lib/functions.sh
137
lib/functions.sh
@ -9,33 +9,6 @@ detectCompose() {
|
||||
fi
|
||||
}
|
||||
|
||||
setupProxy() {
|
||||
### Note: As the current data protection concepts do not allow communication via HTTP,
|
||||
### we are not setting a proxy for HTTP requests.
|
||||
|
||||
local http="no"
|
||||
local https="no"
|
||||
if [ $HTTPS_PROXY_URL ]; then
|
||||
local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
|
||||
local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
|
||||
local hostport=$(echo $HTTPS_PROXY_URL | sed -e "s,$proto,,g" | cut -d/ -f1)
|
||||
HTTPS_PROXY_HOST="$(echo $hostport | sed -e 's,:.*,,g')"
|
||||
HTTPS_PROXY_PORT="$(echo $hostport | sed -e 's,^.*:,:,g' -e 's,.*:\([0-9]*\).*,\1,g' -e 's,[^0-9],,g')"
|
||||
if [[ ! -z "$HTTPS_PROXY_USERNAME" && ! -z "$HTTPS_PROXY_PASSWORD" ]]; then
|
||||
local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
|
||||
local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
|
||||
HTTPS_PROXY_FULL_URL="$(echo $proto$HTTPS_PROXY_USERNAME:$HTTPS_PROXY_PASSWORD@$fqdn)"
|
||||
https="authenticated"
|
||||
else
|
||||
HTTPS_PROXY_FULL_URL=$HTTPS_PROXY_URL
|
||||
https="unauthenticated"
|
||||
fi
|
||||
fi
|
||||
|
||||
log INFO "Configuring proxy servers: $http http proxy (we're not supporting unencrypted comms), $https https proxy"
|
||||
export HTTPS_PROXY_HOST HTTPS_PROXY_PORT HTTPS_PROXY_FULL_URL
|
||||
}
|
||||
|
||||
exitIfNotRoot() {
|
||||
if [ "$EUID" -ne 0 ]; then
|
||||
log "ERROR" "Please run as root"
|
||||
@ -76,7 +49,7 @@ fetchVarsFromVault() {
|
||||
|
||||
set +e
|
||||
|
||||
PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy docker.verbis.dkfz.de/cache/samply/bridgehead-vaultfetcher:latest $@)
|
||||
PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy samply/bridgehead-vaultfetcher $@)
|
||||
RET=$?
|
||||
|
||||
if [ $RET -ne 0 ]; then
|
||||
@ -215,7 +188,7 @@ function do_enroll_inner {
|
||||
PARAMS+="--admin-email $SUPPORT_EMAIL"
|
||||
fi
|
||||
|
||||
docker run --rm -v /etc/bridgehead/pki:/etc/bridgehead/pki docker.verbis.dkfz.de/cache/samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
|
||||
docker run --rm -v /etc/bridgehead/pki:/etc/bridgehead/pki samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
|
||||
chmod 600 $PRIVATEKEYFILENAME
|
||||
}
|
||||
|
||||
@ -239,109 +212,3 @@ add_basic_auth_user() {
|
||||
log DEBUG "Saving clear text credentials in $FILE. If wanted, delete them manually."
|
||||
sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
|
||||
}
|
||||
|
||||
OIDC_PUBLIC_REDIRECT_URLS=${OIDC_PUBLIC_REDIRECT_URLS:-""}
|
||||
OIDC_PRIVATE_REDIRECT_URLS=${OIDC_PRIVATE_REDIRECT_URLS:-""}
|
||||
|
||||
# Add a redirect url to the public oidc client of the bridgehead
|
||||
function add_public_oidc_redirect_url() {
|
||||
if [[ $OIDC_PUBLIC_REDIRECT_URLS == "" ]]; then
|
||||
OIDC_PUBLIC_REDIRECT_URLS+="$(generate_redirect_urls $1)"
|
||||
else
|
||||
OIDC_PUBLIC_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
|
||||
fi
|
||||
}
|
||||
|
||||
# Add a redirect url to the private oidc client of the bridgehead
|
||||
function add_private_oidc_redirect_url() {
|
||||
if [[ $OIDC_PRIVATE_REDIRECT_URLS == "" ]]; then
|
||||
OIDC_PRIVATE_REDIRECT_URLS+="$(generate_redirect_urls $1)"
|
||||
else
|
||||
OIDC_PRIVATE_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
|
||||
fi
|
||||
}
|
||||
|
||||
function sync_secrets() {
|
||||
local delimiter=$'\x1E'
|
||||
local secret_sync_args=""
|
||||
if [[ $OIDC_PRIVATE_REDIRECT_URLS != "" ]]; then
|
||||
secret_sync_args="OIDC:OIDC_CLIENT_SECRET:private;$OIDC_PRIVATE_REDIRECT_URLS"
|
||||
fi
|
||||
if [[ $OIDC_PRIVATE_REDIRECT_URLS != "" ]]; then
|
||||
if [[ $secret_sync_args == "" ]]; then
|
||||
secret_sync_args="OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
|
||||
else
|
||||
secret_sync_args+="${delimiter}OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
|
||||
fi
|
||||
fi
|
||||
if [[ $secret_sync_args == "" ]]; then
|
||||
return
|
||||
fi
|
||||
mkdir -p /var/cache/bridgehead/secrets/
|
||||
touch /var/cache/bridgehead/secrets/oidc
|
||||
chown -R bridgehead:docker /var/cache/bridgehead/secrets
|
||||
# The oidc provider will need to be switched based on the project at some point I guess
|
||||
docker run --rm \
|
||||
-v /var/cache/bridgehead/secrets/oidc:/usr/local/cache \
|
||||
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
|
||||
-v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
|
||||
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
|
||||
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
|
||||
-e HTTPS_PROXY=$HTTPS_PROXY_FULL_URL \
|
||||
-e PROXY_ID=$PROXY_ID \
|
||||
-e BROKER_URL=$BROKER_URL \
|
||||
-e OIDC_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
|
||||
-e SECRET_DEFINITIONS=$secret_sync_args \
|
||||
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
|
||||
set -a # Export variables as environment variables
|
||||
source /var/cache/bridgehead/secrets/*
|
||||
set +a # Export variables in the regular way
|
||||
}
|
||||
|
||||
capitalize_first_letter() {
|
||||
input="$1"
|
||||
capitalized="$(tr '[:lower:]' '[:upper:]' <<< ${input:0:1})${input:1}"
|
||||
echo "$capitalized"
|
||||
}
|
||||
|
||||
# Generate a ','-separated string of redirect URLs relative to $HOST.
# $1 will be appended to each URL.
# If the host looks like dev-jan.inet.dkfz-heidelberg.de, URLs are generated with both dev-jan and the original $HOST as URL authorities.
|
||||
function generate_redirect_urls(){
|
||||
local redirect_urls="https://${HOST}$1"
|
||||
local host_without_proxy="$(echo "$HOST" | cut -d '.' -f1)"
|
||||
# Only append second url if its different and the host is not an ip address
|
||||
if [[ "$HOST" != "$host_without_proxy" && ! "$HOST" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
|
||||
redirect_urls+=",https://$host_without_proxy$1"
|
||||
fi
|
||||
echo "$redirect_urls"
|
||||
}
|
||||
|
||||
# This password contains at least one special char, a random number and a random upper and lower case letter
|
||||
generate_password(){
|
||||
local seed_text="$1"
|
||||
local seed_num=$(awk 'BEGIN{FS=""} NR==1{print $10}' /etc/bridgehead/pki/${SITE_ID}.priv.pem | od -An -tuC)
|
||||
local nums="1234567890"
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 10}')
|
||||
local random_digit=${nums:$n:1}
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 26}')
|
||||
local upper="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
|
||||
local lower="abcdefghijklmnopqrstuvwxyz"
|
||||
local random_upper=${upper:$n:1}
|
||||
local random_lower=${lower:$n:1}
|
||||
local n=$(echo "$seed_num" | awk '{print $1 % 8}')
|
||||
local special='@#$%^&+='
|
||||
local random_special=${special:$n:1}
|
||||
|
||||
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
|
||||
local main_password=$(echo "${combined_text}" | openssl rsautl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/\//A/g')
|
||||
|
||||
echo "${main_password}${random_digit}${random_upper}${random_lower}${random_special}"
|
||||
}
|
||||
|
||||
# This password only contains alphanumeric characters
|
||||
generate_simple_password(){
|
||||
local seed_text="$1"
|
||||
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
|
||||
echo "${combined_text}" | openssl rsautl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/[+\/]/A/g'
|
||||
}
|
||||
|
@ -47,8 +47,8 @@ function hc_send(){
|
||||
|
||||
if [ -n "$2" ]; then
|
||||
MSG="$2\n\nDocker stats:\n$UPTIME"
|
||||
echo -e "$MSG" | https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
|
||||
echo -e "$MSG" | https_proxy=$HTTPS_PROXY_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
|
||||
else
|
||||
https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
|
||||
https_proxy=$HTTPS_PROXY_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
|
||||
fi
|
||||
}
|
||||
|
@ -14,7 +14,7 @@ checkOwner /etc/bridgehead bridgehead || exit 1
|
||||
|
||||
## Check if user is a su
|
||||
log INFO "Checking if all prerequisites are met ..."
|
||||
prerequisites="git docker curl"
|
||||
prerequisites="git docker"
|
||||
for prerequisite in $prerequisites; do
|
||||
$prerequisite --version 2>&1
|
||||
is_available=$?
|
||||
@ -68,7 +68,7 @@ source /etc/bridgehead/${PROJECT}.conf
|
||||
source ${PROJECT}/vars
|
||||
|
||||
set +e
|
||||
SERVERTIME="$(https_proxy=$HTTPS_PROXY_FULL_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
|
||||
SERVERTIME="$(https_proxy=$HTTPS_PROXY_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
|
||||
RET=$?
|
||||
set -e
|
||||
if [ $RET -ne 0 ]; then
|
||||
|
@ -30,7 +30,7 @@ source $CONFFILE
|
||||
assertVarsNotEmpty SITE_ID || fail_and_report 1 "Update failed: SITE_ID empty"
|
||||
export SITE_ID
|
||||
|
||||
checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
|
||||
checkOwner . bridgehead || fail_and_report 1 "Update failed: Wrong permissions in $(pwd)"
|
||||
checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
|
||||
|
||||
CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
|
||||
@ -50,12 +50,12 @@ for DIR in /etc/bridgehead $(pwd); do
|
||||
git -C $DIR config credential.helper "$CREDHELPER"
|
||||
fi
|
||||
old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
|
||||
if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
|
||||
if [ -z "$HTTP_PROXY_URL" ]; then
|
||||
log "INFO" "Git is using no proxy!"
|
||||
OUT=$(retry 5 git -C $DIR fetch 2>&1 && retry 5 git -C $DIR pull 2>&1)
|
||||
else
|
||||
log "INFO" "Git is using proxy ${HTTPS_PROXY_URL} from ${CONFFILE}"
|
||||
OUT=$(retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR pull 2>&1)
|
||||
log "INFO" "Git is using proxy ${HTTP_PROXY_URL} from ${CONFFILE}"
|
||||
OUT=$(retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR pull 2>&1)
|
||||
fi
|
||||
if [ $? -ne 0 ]; then
|
||||
report_error log "Unable to update git $DIR: $OUT"
|
||||
@ -116,7 +116,7 @@ if [ -n "${BACKUP_DIRECTORY}" ]; then
|
||||
mkdir -p "$BACKUP_DIRECTORY"
|
||||
chown -R "$BACKUP_DIRECTORY" bridgehead;
|
||||
fi
|
||||
checkOwner "$BACKUP_DIRECTORY" bridgehead || fail_and_report 1 "Automatic maintenance failed: Wrong permissions for backup directory $BACKUP_DIRECTORY"
|
||||
checkOwner "$BACKUP_DIRECTORY" bridgehead || fail_and_report 1 "Automatic maintenance failed: Wrong permissions for backup directory $(pwd)"
|
||||
# Collect all container names that contain '-db'
|
||||
BACKUP_SERVICES="$(docker ps --filter name=-db --format "{{.Names}}" | tr "\n" "\ ")"
|
||||
log INFO "Performing automatic maintenance: Creating Backups for $BACKUP_SERVICES";
|
||||
|
@ -35,8 +35,8 @@ services:
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-forward-proxy:latest
|
||||
environment:
|
||||
HTTPS_PROXY: ${HTTPS_PROXY_URL}
|
||||
HTTPS_PROXY_USERNAME: ${HTTPS_PROXY_USERNAME}
|
||||
HTTPS_PROXY_PASSWORD: ${HTTPS_PROXY_PASSWORD}
|
||||
USERNAME: ${HTTPS_PROXY_USERNAME}
|
||||
PASSWORD: ${HTTPS_PROXY_PASSWORD}
|
||||
tmpfs:
|
||||
- /var/log/squid
|
||||
- /var/spool/squid
|
||||
@ -45,7 +45,7 @@ services:
|
||||
|
||||
landing:
|
||||
container_name: bridgehead-landingpage
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:main
|
||||
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:master
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
|
||||
|
@ -18,7 +18,7 @@ services:
|
||||
- "forward_proxy"
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
|
||||
- /srv/docker/bridgehead/ccp/root-new.crt.pem:/conf/root.crt.pem:ro
|
||||
|
||||
dnpm-beam-connect:
|
||||
depends_on: [ dnpm-beam-proxy ]
|
||||
@ -32,11 +32,9 @@ services:
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
HTTP_PROXY: http://forward_proxy:3128
|
||||
HTTPS_PROXY: http://forward_proxy:3128
|
||||
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal
|
||||
NO_PROXY: dnpm-beam-proxy,dnpm-backend
|
||||
RUST_LOG: ${RUST_LOG:-info}
|
||||
NO_AUTH: "true"
|
||||
extra_hosts:
|
||||
- "host.docker.internal:host-gateway"
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
|
@ -1,33 +0,0 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
@ -1,27 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
fi
|
@ -5,12 +5,9 @@ if [ -n "${ENABLE_DNPM}" ]; then
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
|
||||
|
||||
# Set variables required for Beam-Connect
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
|
||||
DNPM_BROKER_ID="broker.ccp-it.dktk.dkfz.de"
|
||||
DNPM_BROKER_URL="https://${DNPM_BROKER_ID}"
|
||||
if [ -z ${BROKER_URL_FOR_PREREQ+x} ]; then
|
||||
BROKER_URL_FOR_PREREQ=$DNPM_BROKER_URL
|
||||
log DEBUG "No Broker for clock check set; using $DNPM_BROKER_URL"
|
||||
fi
|
||||
DNPM_PROXY_ID="${SITE_ID}.${DNPM_BROKER_ID}"
|
||||
fi
|
||||
|
@ -1,29 +0,0 @@
|
||||
version: "3.7"
|
||||
volumes:
|
||||
nngm-rest:
|
||||
|
||||
services:
|
||||
connector:
|
||||
container_name: bridgehead-connector
|
||||
image: docker.verbis.dkfz.de/ccp/nngm-rest:main
|
||||
environment:
|
||||
CTS_MAGICPL_API_KEY: ${NNGM_MAGICPL_APIKEY}
|
||||
CTS_API_KEY: ${NNGM_CTS_APIKEY}
|
||||
CRYPT_KEY: ${NNGM_CRYPTKEY}
|
||||
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
|
||||
- "traefik.http.middlewares.connector_strip.stripprefix.prefixes=/nngm-connector"
|
||||
- "traefik.http.services.connector.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.connector.tls=true"
|
||||
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
|
||||
volumes:
|
||||
- nngm-rest:/var/log
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"
|
||||
|
||||
|
@ -1,6 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "$NNGM_CTS_APIKEY" ]; then
|
||||
log INFO "nNGM setup detected -- will start nNGM Connector."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
|
||||
fi
|