Merge branch 'main' into main_tls_docu

commit 6b025a8f6a
Author: Croft
Date:   2023-08-30 11:52:13 +02:00

18 changed files with 115 additions and 81 deletions

.gitignore

@@ -4,3 +4,6 @@ site-config/*
## Ignore site configuration
*/docker-compose.override.yml
## MAC OS
.DS_Store


@@ -56,6 +56,8 @@ We recommend to install Docker(-compose) from its official sources as described
Note for Ubuntu: snap versions of Docker are not supported.
Note for git and Docker: if you have a local proxy, you will need to adjust your setup appropriately, see [git proxy](https://gist.github.com/evantoli/f8c23a37eb3558ab8765) and [docker proxy](https://docs.docker.com/network/proxy/).
### Network
A running Bridgehead requires an outgoing HTTPS proxy to communicate with the central components.
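If the host itself can only reach the Internet through such a proxy, the Docker daemon usually has to be told about it as well. A hedged sketch using the systemd drop-in described in Docker's documentation (host name and port are placeholders; the client-side settings are covered by the docker proxy link above):
```
# /etc/systemd/system/docker.service.d/http-proxy.conf
# afterwards: systemctl daemon-reload && systemctl restart docker
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```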
@@ -262,28 +264,29 @@ Here a file will be mentioned, perhaps in the directory /etc/ssl/certs. The exac
Your Bridgehead's actual data is not stored in the above directories, but in named docker volumes, see `docker volume ls` and `docker volume inspect <volume_name>`.
### BBMRI-ERIC Directory
### BBMRI-ERIC Directory entry needed
If you run a biobank, you should register with the [Directory](https://directory.bbmri-eric.eu), a BBMRI-ERIC project that catalogs biobanks.
If you run a biobank, you should be listed, together with your collections, in the [Directory](https://directory.bbmri-eric.eu), a BBMRI-ERIC project that catalogs biobanks.
To do this, contact the BBMRI-ERIC national node for the country where your biobank is based, see [the list of nodes](http://www.bbmri-eric.eu/national-nodes/).
Once you have registered, **you should choose one of your sample collections as a default collection for your biobank**. This is the collection that will be automatically used to label any samples that have not been assigned a collection ID in your ETL process. Make a note of this ID, you will need it later on in the installation process.
Once you have added your biobank to the Directory, you will receive a persistent identifier (PID) for your biobank and unique identifiers (IDs) for your collections. The collection IDs are needed to assign biospecimens to collections and are used later in the data flows between the BBMRI-ERIC tools. If you cannot assign all of your biospecimens to collections via collection IDs, **you should choose one of your sample collections as a default collection for your biobank**. This collection will be automatically used to label any samples that have not been assigned a collection ID in your ETL process. Make a note of this default collection ID, you will need it later on in the installation process.
The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country specific directory url, username and password to enable this feature.
### Directory sync tool
The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. the number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country-specific Directory URL, username and password to enable this feature.
Full details can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).
To enable it, you will need to add these variables to the ```bbmri.conf``` file of your GitLab repository. Here is an example config:
```
### Directory sync service
DS_DIRECTORY_URL=https://directory.bbmri-eric.eu
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=qwdnqwswdvqHBVGFR9887
DS_TIMER_CRON="0 22 * * *"
```
You must contact the Directory for your national node to find the URL, and to register as a user.
You must contact the Directory team for your national node to find the URL, and to register as a user.
Additionally, you should choose when you want Directory sync to run. In the example above, this is set to happen at 10 pm every evening. You can modify this to suit your requirements. The timer specification should follow the [cron](https://crontab.guru) convention.
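If a different schedule is needed, the standard cron field order (minute, hour, day of month, month, day of week) applies. The alternatives below are purely illustrative:
```
DS_TIMER_CRON="0 22 * * *"   # every day at 22:00 (as in the example above)
DS_TIMER_CRON="30 1 * * 6"   # every Saturday at 01:30
DS_TIMER_CRON="0 6 1 * *"    # on the first day of each month at 06:00
```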


@@ -22,7 +22,7 @@ services:
dnpm-beam-connect:
depends_on: [ dnpm-beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:dnpm
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://dnpm-beam-proxy:8081
@@ -34,6 +34,7 @@ services:
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro


@@ -73,7 +73,6 @@ case "$ACTION" in
hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
checkRequirements
hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
export LDM_LOGIN=$(getLdmPassword)
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
;;
stop)
@@ -103,9 +102,16 @@ case "$ACTION" in
uninstall)
exec ./lib/uninstall-bridgehead.sh $PROJECT
;;
adduser)
loadVars
log "INFO" "Adding encrypted credentials in /etc/bridgehead/$PROJECT.local.conf"
read -p "Please choose the component (LDM_AUTH|NNGM_AUTH) you want to add a user to : " COMPONENT
read -p "Please enter a username: " USER
read -s -p "Please enter a password (will not be echoed): "$'\n' PASSWORD
add_basic_auth_user $USER $PASSWORD $COMPONENT $PROJECT
;;
enroll)
loadVars
do_enroll $PROXY_ID
;;
preRun | preUpdate)
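For orientation, a hedged usage sketch of the adduser action added above (the project name is only an example):
```
# Run as root from the Bridgehead directory; "ccp" is an illustrative project name
sudo ./bridgehead adduser ccp
# The script then prompts for the component (LDM_AUTH|NNGM_AUTH), a username and a password,
# and stores the htpasswd-encrypted credentials in /etc/bridgehead/ccp.local.conf
```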


@@ -7,7 +7,6 @@ services:
environment:
BASE_URL: "http://bridgehead-ccp-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx4g"
LOG_LEVEL: "debug"
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"


@@ -6,7 +6,7 @@ services:
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
dnpm-beam-connect:
depends_on: [ beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:dnpm
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://beam-proxy:8081
@@ -18,6 +18,7 @@ services:
HTTPS_PROXY: "http://forward_proxy:3128"
NO_PROXY: beam-proxy,dnpm-backend
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro

ccp/modules/mtba-setup.sh (new file)

@@ -0,0 +1,13 @@
#!/bin/bash
function mtbaSetup() {
# TODO: Check if ID-Management Module is activated!
if [ -n "$ENABLE_MTBA" ];then
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Detected MTBA Module configuration but ID-Management Module seems not to be configured!"
exit 1;
fi
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
fi
}


@@ -18,7 +18,12 @@ services:
- "traefik.http.middlewares.connector_strip.stripprefix.prefixes=/nngm-connector"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
- "traefik.http.routers.connector.middlewares=connector_strip,auth"
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
volumes:
- nngm-rest:/var/log
traefik:
labels:
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"


@@ -0,0 +1,8 @@
#!/bin/bash
function nngmSetup() {
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
fi
}


@@ -1,24 +0,0 @@
#!/bin/bash
##nNGM vars:
#NNGM_MAGICPL_APIKEY
#NNGM_CTS_APIKEY
#NNGM_CRYPTKEY
function nngmSetup() {
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/nngm-compose.yml"
fi
}
function mtbaSetup() {
# TODO: Check if ID-Management Module is activated!
if [ -n "$ENABLE_MTBA" ];then
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Detected MTBA Module configuration but ID-Management Module seems not to be configured!"
exit 1;
fi
OVERRIDE+=" -f ./$PROJECT/mtba-compose.yml"
fi
}


@@ -1,20 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUMeGRSrNPhRdQ1tU7uK5+lUa4f38wDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIwOTI5MTQxMjU1WhcNMzIw
OTI2MTQxMzI1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMYyroOUeb27mYzClOrjCmgIceLalsFA0aVCh5mZ
KtP8+1U3oq/7exP30gXiJojxW7xoerfyQY9s0Sz5YYbxYbuskFOYEtyAILB/pxgd
+k+J3tlZKolpfmo7WT5tZiHxH/zjrtAYGnuB2xPHRMCWh/tHYrELgXQuilNol24y
GBa1plTlARy0aKEDUHp87WLhD2qH7B8sFlLgo0+gunE1UtR2HMSPF45w3VXszyG6
fJNrAj0yPnKy3Dm1BMO3jDO2e0A9lCQ71a4j4TeKePfCk1xCArSu6PpiwiacKplF
c6CRR6KrWVm2g+8Y2hFcOBG/Py2xusm3PWbpylGq6vtFRkkCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEFxD6BQwQO5
xsJ+3cvZypsnh6dDMB8GA1UdIwQYMBaAFEFxD6BQwQO5xsJ+3cvZypsnh6dDMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB5zTeIhV/3
3Am6O144EFtnIeaZ2w0D6aEHqHAZp50vJv3+uQfOliCOzgw7VDxI4Zz2JALjlR/i
uOYHsu3YIRMIOmPOjqrdDJa6auB0ufL4oUPfCRln7Fh0f3JVlz3BUoHsSDt949p4
g0nnsciL2JHuzlqjn7Jyt3L7dAHrlFKulCcuidG5D3cqXrRCbF83f+k3TC/HRiNd
25oMi7I4MP/SOCdfQGUGIsHIf/0hSm3pNjDOrC/XuI/8gh2f5io+Y8V+hMwMBcm4
JbH8bdyBB+EIhsNbTwf2MWntD5bmg47sf7hh23aNvKXI67Li1pTI2t1CqiGnFR0U
fCEpeaEAHs0k
MIIDNTCCAh2gAwIBAgIUN7yzueIZzwpe8PaPEIMY8zoH+eMwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNTIzMTAxNzIzWhcNMzMw
NTIwMTAxNzUzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAN5JAj+HydSGaxvA0AOcrXVTZ9FfsH0cMVBlQb72
bGZgrRvkqtB011TNXZfsHl7rPxCY61DcsDJfFq3+8VHT+S9HE0qV1bEwP+oA3xc4
Opq77av77cNNOqDC7h+jyPhHcUaE33iddmrH9Zn2ofWTSkKHHu3PAe5udCrc2QnD
4PLRF6gqiEY1mcGknJrXj1ff/X0nRY/m6cnHNXz0Cvh8oPOtbdfGgfZjID2/fJNP
fNoNKqN+5oJAZ+ZZ9id9rBvKj1ivW3F2EoGjZF268SgZzc5QrM/D1OpSBQf5SF/V
qUPcQTgt9ry3YR+SZYazLkfKMEOWEa0WsqJVgXdQ6FyergcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEa70kcseqU5
bHx2zSt4bG21HokhMB8GA1UdIwQYMBaAFEa70kcseqU5bHx2zSt4bG21HokhMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCGmE7NXW4T
6J4mV3b132cGEMD7grx5JeiXK5EHMlswUS+Odz0NcBNzhUHdG4WVMbrilHbI5Ua+
6jdKx5WwnqzjQvElP0MCw6sH/35gbokWgk1provOP99WOFRsQs+9Sm8M2XtMf9HZ
m3wABwU/O+dhZZ1OT1PjSZD0OKWKqH/KvlsoF5R6P888KpeYFiIWiUNS5z21Jm8A
ZcllJjiRJ60EmDwSUOQVJJSMOvtr6xTZDZLtAKSN8zN08lsNGzyrFwqjDwU0WTqp
scMXEGBsWQjlvxqDnXyljepR0oqRIjOvgrWaIgbxcnu98tK/OdBGwlAPKNUW7Crr
vO+eHxl9iqd4
-----END CERTIFICATE-----


@@ -1,4 +1,4 @@
BROKER_ID=broker.dev.ccp-it.dktk.dkfz.de
BROKER_ID=broker.ccp-it.dktk.dkfz.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
@@ -8,17 +8,13 @@ PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
# This will load id-management setup. Effective only if id-management configuration is defined.
source $PROJECT/modules/id-management-setup.sh
idManagementSetup
# This will load nngm setup. Effective only if nngm configuration is defined.
source $PROJECT/nngm-setup.sh
nngmSetup
mtbaSetup
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
idManagementSetup
nngmSetup
mtbaSetup
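With this change, every script in $PROJECT/modules/ is sourced automatically. A minimal sketch of a hypothetical additional module following the same pattern as the mtba and nngm setup scripts (name and variables are invented for illustration):
```
#!/bin/bash
# Hypothetical example, e.g. ccp/modules/example-setup.sh -- not part of this changeset
function exampleSetup() {
    if [ -n "$EXAMPLE_APIKEY" ]; then
        log INFO "Example setup detected -- will start Example service."
        OVERRIDE+=" -f ./$PROJECT/modules/example-compose.yml"
    fi
}
```
As the lines above show, the corresponding setup function (here exampleSetup) still has to be called explicitly after the loop, just like idManagementSetup, nngmSetup and mtbaSetup.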


@@ -9,14 +9,6 @@ detectCompose() {
fi
}
getLdmPassword() {
if [ -n "$LDM_PASSWORD" ]; then
docker run --rm docker.verbis.dkfz.de/cache/httpd:alpine htpasswd -nb $PROJECT $LDM_PASSWORD | tr -d '\n' | tr -d '\r'
else
echo -n ""
fi
}
exitIfNotRoot() {
if [ "$EUID" -ne 0 ]; then
log "ERROR" "Please run as root"
@@ -34,7 +26,7 @@ checkOwner(){
}
printUsage() {
echo "Usage: bridgehead start|stop|is-running|update|install|uninstall|enroll PROJECTNAME"
echo "Usage: bridgehead start|stop|is-running|update|install|uninstall|adduser|enroll PROJECTNAME"
echo "PROJECTNAME should be one of ccp|bbmri"
}
@@ -196,10 +188,27 @@ function do_enroll_inner {
PARAMS+="--admin-email $SUPPORT_EMAIL"
fi
docker run --rm -ti -v /etc/bridgehead/pki:/etc/bridgehead/pki samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
docker run --rm -v /etc/bridgehead/pki:/etc/bridgehead/pki samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
chmod 600 $PRIVATEKEYFILENAME
}
function do_enroll {
do_enroll_inner $@
}
add_basic_auth_user() {
USER="${1}"
PASSWORD="${2}"
NAME="${3}"
PROJECT="${4}"
FILE="/etc/bridgehead/${PROJECT}.local.conf"
ENCRY_CREDENTIALS="$(docker run --rm docker.verbis.dkfz.de/cache/httpd:alpine htpasswd -nb $USER $PASSWORD | tr -d '\n' | tr -d '\r')"
if [ -f $FILE ] && grep -R -q "$NAME=" $FILE # if a specific basic auth user already exists:
then
sed -i "/$NAME/ s|='|='$ENCRY_CREDENTIALS,|" $FILE
else
echo -e "\n## Basic Authentication Credentials for:\n$NAME='$ENCRY_CREDENTIALS'" >> $FILE;
fi
log DEBUG "Saving clear text credentials in $FILE. If wanted, delete them manually."
sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
}
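To make the new helper easier to follow, a hedged sketch of a call and of what it appends to the site configuration (all values are placeholders and the hash is shortened):
```
add_basic_auth_user "alice" "s3cret" "LDM_AUTH" "ccp"
# appends roughly the following to /etc/bridgehead/ccp.local.conf:
#   ## Basic Authentication Credentials for:
#   LDM_AUTH='alice:$apr1$...'
#   # User: alice
#   # Password: s3cret
```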


@@ -29,12 +29,16 @@ bridgehead ALL= NOPASSWD: BRIDGEHEAD${PROJECT^^}
EOF
# TODO: Determine whether this should be located in setup-bridgehead (triggered through bridgehead install) or in update bridgehead (triggered every hour)
if [ -z "$LDM_PASSWORD" ]; then
log "INFO" "Now generating a password for the local data management. Please save the password for your ETL process!"
if [ -z "$LDM_AUTH" ]; then
log "INFO" "Now generating basic auth for the local data management (see adduser in bridgehead for more information). "
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
add_basic_auth_user $PROJECT $generated_passwd "LDM_AUTH" $PROJECT
fi
log "INFO" "Your generated credentials are:\n user: $PROJECT\n password: $generated_passwd"
echo -e "## Local Data Management Basic Authentication\n# User: $PROJECT\nLDM_PASSWORD=$generated_passwd" >> /etc/bridgehead/${PROJECT}.local.conf;
if [ ! -z "$NNGM_CTS_APIKEY" ] && [ -z "$NNGM_AUTH" ]; then
log "INFO" "Now generating basic auth for nNGM upload API (see adduser in bridgehead for more information). "
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
add_basic_auth_user "nngm" $generated_passwd "NNGM_AUTH" $PROJECT
fi
log "INFO" "Registering system units for bridgehead and bridgehead-update"


@@ -139,6 +139,15 @@ else
log WARN "Automated backups are disabled (variable AUTO_BACKUPS != \"true\")"
fi
#TODO: the following block can be deleted after successful update at all sites
if [ ! -z "$LDM_PASSWORD" ]; then
FILE="/etc/bridgehead/$PROJECT.local.conf"
log "INFO" "Migrating LDM_PASSWORD to encrypted credentials in $FILE"
add_basic_auth_user $PROJECT $LDM_PASSWORD "LDM_AUTH" $PROJECT
add_basic_auth_user $PROJECT $LDM_PASSWORD "NNGM_AUTH" $PROJECT
sed -i "/LDM_PASSWORD/{d;}" $FILE
fi
exit 0
# TODO: Print last commit explicit


@@ -21,7 +21,7 @@ services:
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_AUTH}"
ports:
- 80:80
- 443:443


@@ -22,7 +22,7 @@ services:
dnpm-beam-connect:
depends_on: [ dnpm-beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:dnpm
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://dnpm-beam-proxy:8081
@@ -34,6 +34,7 @@ services:
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro