Compare commits

4 Commits

| SHA1 | Message | Date |
|------|---------|------|
| ac160ab981 | Added root cert | 2023-02-24 12:15:43 +01:00 |
| 81d09aae29 | Switched to aql local spot | 2023-02-24 09:58:35 +01:00 |
| 2e7a349aac | Fixed docker-compose vars | 2023-02-24 07:54:36 +01:00 |
| 052f6f1239 | Added snap | 2023-02-23 14:25:39 +01:00 |
65 changed files with 433 additions and 2137 deletions

.gitignore vendored

@ -4,6 +4,3 @@ site-config/*
## Ignore site configuration
*/docker-compose.override.yml
## MAC OS
.DS_Store

README.md

@ -6,39 +6,27 @@ This repository is the starting point for any information and tools you will nee
1. [Requirements](#requirements)
    - [Hardware](#hardware)
    - [Software](#software)
    - [Network](#network)
    - [System](#system)
    - [Git](#git)
    - [Docker](#docker)
2. [Deployment](#deployment)
    - [EHDS2/ECDC](#ehds2-ecdc)
    - [Site name](#site-name)
    - [Projects](#projects)
    - [GitLab repository](#gitlab-repository)
    - [Base Installation](#base-installation)
    - [Installation](#installation)
    - [Register with Samply.Beam](#register-with-samplybeam)
    - [Starting and stopping your Bridgehead](#starting-and-stopping-your-bridgehead)
    - [Testing your new Bridgehead](#testing-your-new-bridgehead)
    - [De-installing a Bridgehead](#de-installing-a-bridgehead)
3. [Site-specific configuration](#site-specific-configuration)
    - [HTTPS Access](#https-access)
    - [TLS terminating proxies](#tls-terminating-proxies)
    - [File structure](#file-structure)
    - [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
    - [Loading data](#loading-data)
4. [Things you should know](#things-you-should-know)
    - [Auto-Updates](#auto-updates)
    - [Auto-Backups](#auto-backups)
    - [Non-Linux OS](#non-linux-os)
5. [Troubleshooting](#troubleshooting)
    - [Docker Daemon Proxy Configuration](#docker-daemon-proxy-configuration)
    - [Auto-starting your Bridgehead when the server starts](#auto-starting-your-bridgehead-when-the-server-starts)
6. [License](#license)
## Requirements
The data protection group at your site will probably want to know exactly what our software does with patient data, and you may need to get their approval before you are allowed to install a Bridgehead. To help you with this, we have provided some data protection concepts:
- [Germany](https://www.bbmri.de/biobanking/it/infrastruktur/datenschutzkonzept/)
### Hardware
Hardware requirements strongly depend on the specific use-cases of your network as well as on the data it is going to serve. Most use-cases are well-served with the following configuration:
@ -57,148 +45,28 @@ Ensure the following software (or newer) is installed:
- docker >= 20.10.1
- docker-compose >= 2.xx (`docker-compose` and `docker compose` are both supported).
- systemd
- curl
We recommend installing Docker(-compose) from its official sources as described on the [Docker website](https://docs.docker.com).
> 📝 Note for Ubuntu: Snap versions of Docker are not supported.
### Network
A Bridgehead communicates with all central components via outgoing HTTPS connections.
Since it carries sensitive patient data, a Bridgehead is intended to be deployed within your institution's secure network, and it is designed to behave well even under strict security settings, e.g. restrictive firewall rules. The only connectivity required is outgoing HTTPS (if necessary, via a forward proxy); TLS termination is supported, too (see [below](#tls-terminating-proxies)).
Your site might require an outgoing proxy (i.e. HTTPS forward proxy) to connect to external servers; you should discuss this with your local systems administration. In that case, you will need to note down the URL of the proxy. If the proxy requires authentication, you will also need to make a note of its username and password. This information will be used later on during the installation process. TLS terminating proxies are also supported, see [here](#tls-terminating-proxies). Apart from the Bridgehead itself, you may also need to configure the proxy server in [git](https://gist.github.com/evantoli/f8c23a37eb3558ab8765) and [docker](https://docs.docker.com/network/proxy/).
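For illustration, a forward proxy could be configured for git roughly like this (host, port and credentials below are placeholders; see the linked gist for further options):
```shell
# Hypothetical proxy settings for git; adapt host, port and credentials to your site
git config --global http.proxy http://USERNAME:PASSWORD@proxy.example.com:3128
git config --global https.proxy http://USERNAME:PASSWORD@proxy.example.com:3128
```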
The following URLs need to be accessible (prefix with `https://`):
* To fetch code and configuration from git repositories
  * github.com
  * git.verbis.dkfz.de
* To fetch docker images
  * docker.verbis.dkfz.de
  * Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/all))
    * hub.docker.com
    * registry-1.docker.io
    * production.cloudflare.docker.com
* To report the Bridgehead's operational status
  * healthchecks.verbis.dkfz.de
* Only for DKTK/CCP
  * broker.ccp-it.dktk.dkfz.de
* Only for BBMRI-ERIC
  * broker.bbmri.samply.de
  * gitlab.bbmri-eric.eu
* Only for German Biobank Node
  * broker.bbmri.de
* Only for EHDS2/ECDC
  * ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com
> 📝 This URL list is subject to change. Instead of the individual names, we highly recommend whitelisting wildcard domains: *.dkfz.de, github.com, *.docker.com, *.docker.io, *.samply.de, *.bbmri.de.
> 📝 Ubuntu's pre-installed uncomplicated firewall (ufw) is known to conflict with Docker, more info [here](https://github.com/chaifeng/ufw-docker).
## Deployment
### EHDS2/ECDC
The ECDC Bridgehead allows you to connect your site/node to the [AMR Explorer](http://ehds2-lens.swedencentral.cloudapp.azure.com/), a non-public central website that allows certified researchers to search for information relating to antibiotic resistance, Europe-wide. You can supply the Bridgehead with data from your site in the form of CSV files, which will then be made available to the Explorer for searching purposes.
You will need to set up some configuration before you can start a Bridgehead. This can be done as follows:
```shell
sudo mkdir -p /etc/bridgehead
sudo cp /srv/docker/bridgehead/bbmri/modules/bbmri.conf /etc/bridgehead
```
Now edit ```/etc/bridgehead/bbmri.conf``` and customize the following variables for your site:
- SITE_NAME
- SITE_ID
- OPERATOR_FIRST_NAME
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL
If you run a proxy at your site, you will also need to give values to the ```HTTP*_PROXY*``` variables.
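For illustration, the customized part of ```/etc/bridgehead/bbmri.conf``` might then look like this (all values below are placeholders):
```
SITE_NAME=Utopia
SITE_ID=utopia
OPERATOR_FIRST_NAME=Jane
OPERATOR_LAST_NAME=Doe
OPERATOR_EMAIL=jane.doe@example.org
# Only if your site uses a forward proxy:
HTTP_PROXY_URL=http://proxy.example.org:3128
HTTPS_PROXY_URL=http://proxy.example.org:3128
```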
When you first start the Bridgehead, it will clone two extra repositories into /srv/docker, namely ```focus``` and ```transfair```, and automatically build local images from them. These components have been customized for ECDC as follows:
- *focus.* This component is responsible for completing the CQL that is used for running queries against the Blaze FHIR store. It uses a set of templates for doing this. Extra templates have been written for the ECDC use case. They can be found in /srv/docker/focus/resources/cql/EHDS2*.
- *transfair.* This is an ETL component. It takes the CSV data that you provide, converts it to FHIR, and loads it to Blaze. This will be run once, if there is data in /srv/docker/ecdc/data. A lock file in the data directory ensures that it does not get run again. Remove this lock file and restart the Bridgehead if you want to load new data.
These images are normally rebuilt every time you restart the Bridgehead. This is a workaround for a bug: if the images are not rebuilt on every start, legacy versions are used and the new ECDC functionality is lost. The root cause is still under investigation.
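For example, to force a complete reload of your data on the next start, you could remove the lock file and re-run the upload (a sketch based on the paths and scripts described in this README):
```shell
sudo rm /srv/docker/ecdc/data/lock
sudo /srv/docker/bridgehead/run.sh --upload-all
```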
### Site name
You will need to choose a short name for your site. This is not a URL, just a simple identifying string. For the examples below, we will use "your-site-name", but you should obviously choose something that is meaningful to you and which is unique.
Site names should adhere to the following conventions:
- They should be lower-case.
- They should generally be named after the city where your site is based, e.g. ```karlsruhe```.
- If you have a multi-part name, please use a hyphen ("-") as separator, e.g. ```le-havre```.
- If your site is for testing purposes, rather than production, please append "-test", e.g. ```zaragoza-test```.
- If you are a developer and you are making changes to the Bridgehead, please use your name and prepend "dev-", e.g. ```dev-joe-doe```.
### GitLab repository
You can skip this section if you are doing an ECDC/EHDS2 installation.
In order to be able to install, you will need to have your own repository in GitLab for your site's configuration settings. This allows automated updates of the Bridgehead software.
To request a new repository, please contact your research network administration or send an email to one of the project specific addresses:
- For the bbmri project: bridgehead@helpdesk.bbmri-eric.eu.
- For the ccp project: support-ccp@dkfz-heidelberg.de
Mention:
- which project you belong to, i.e. "bbmri" or "ccp"
- site name (according to the conventions listed above)
- operator name and email
We will set the repository up for you. We will then send you:
- A Repository Short Name (RSN). Beware: this is distinct from your site name.
- Repository URL containing the access token, e.g. https://BH_Dummy:dummy_token@git.verbis.dkfz.de/<project>-bridgehead-configs/dummy.git
During the installation, your Bridgehead will download your site's configuration from GitLab and you can review the details provided to us by email.
### Base Installation
First, clone the Bridgehead repository to the directory `/srv/docker/bridgehead`:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```
If this is an ECDC/EHDS2 installation, switch to the ```ehds2``` branch and copy the configuration file to the required location:
```shell
cd /srv/docker/bridgehead
sudo git checkout ehds2
sudo mkdir -p /etc/bridgehead/
sudo cp bbmri/modules/bbmri.conf /etc/bridgehead/
sudo vi /etc/bridgehead/bbmri.conf # Modify to include national node name and admin contact details
```
For an ECDC/EHDS2 installation, you will also need to copy your data, as comma-separated value (CSV) files, to ```/srv/docker/ecdc/data```. Make sure the files are readable by all. Only files with the ending ```.csv``` will be read in; all other files will be ignored.
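For example (the file name below is just a placeholder):
```shell
sudo mkdir -p /srv/docker/ecdc/data
sudo cp amr-report-2023.csv /srv/docker/ecdc/data/
sudo chmod a+r /srv/docker/ecdc/data/*.csv
```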
If this is not an ECDC/EHDS2 installation, then download your site-specific configuration repository:
```shell
sudo mkdir -p /etc/bridgehead/
sudo git clone <REPO_URL_FROM_EMAIL> /etc/bridgehead/
```
Review the site configuration:
```shell
sudo cat /etc/bridgehead/bbmri.conf
```
Pay special attention to:
- SITE_NAME
- SITE_ID
- OPERATOR_FIRST_NAME
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL
Then, run the installation script:
```shell
@ -206,6 +74,8 @@ cd /srv/docker/bridgehead
sudo ./bridgehead install <PROJECT>
```
... and follow the instructions on the screen. You should then be prompted to do the next step:
### Register with Samply.Beam
Many Bridgehead services rely on the secure, performant and flexible messaging middleware called [Samply.Beam](https://github.com/samply/beam). You will need to register ("enroll") with Samply.Beam by creating a cryptographic key pair for your bridgehead:
@ -215,40 +85,10 @@ cd /srv/docker/bridgehead
sudo ./bridgehead enroll <PROJECT>
```
... and follow the instructions on the screen. Please send your default Collection ID and the display name of your site together with the certificate request when you enroll. You should then be prompted to do the next step:
Note: if you are doing an ECDC/EHDS2 installation, you will need to perform the Beam certificate signing yourself. Do not send an email to either of the email addresses suggested by the bridgehead enroll procedure. Instead, log on to the VM where Beam is running and perform the following (you will need root permissions):
```shell
cd /srv/docker/beam-broker
sudo mkdir -p csr
sudo vi csr/ecdc-bridgehead-<national node name>.csr # Paste the certificate signing request (CSR) printed during enrollment
sudo pki-scripts/managepki sign --csr-file csr/ecdc-bridgehead-<national node name>.csr --common-name=ecdc-bridgehead-<national node name>.broker.bbmri.samply.de
```
You can check that the Bridgehead has connected to Beam with the following command:
```shell
pki-scripts/managepki list
```
... and follow the instructions on the screen. You should then be prompted to do the next step:
### Starting and stopping your Bridgehead
For an ECDC/EHDS2 installation, this is done with the help of specialized scripts:
To start:
```shell
sudo /srv/docker/bridgehead/run.sh
```
To stop (you generally won't need to do this):
```shell
sudo /srv/docker/bridgehead/stop.sh
```
For regular installations, read on.
If you followed the above steps, your Bridgehead should already be configured to autostart (via systemd). If you would like to start/stop manually:
To start, run
@ -269,60 +109,6 @@ To enable/disable autostart, run
sudo systemctl [enable|disable] bridgehead@<PROJECT>.service
```
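For example, assuming the ```bbmri``` project, manual control via systemd might look like this (a sketch based on the unit name above):
```shell
sudo systemctl start bridgehead@bbmri.service
sudo systemctl stop bridgehead@bbmri.service
sudo systemctl status bridgehead@bbmri.service
```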
### Testing your new Bridgehead
After starting the Bridgehead, you can watch the initialization process with the following command:
```shell
journalctl -u bridgehead@bbmri -f
```
If this exits with something similar to the following:
```
bridgehead@bbmri.service: Main process exited, code=exited, status=1/FAILURE
```
Then you know that there was a problem with starting the Bridgehead. Scroll up the printout to find the cause of the error.
Once the Bridgehead is running, you can also view the individual Docker processes with:
```shell
docker ps
```
There should be 6 - 10 Docker processes. If there are fewer, then you know that something has gone wrong. To see what is going on, run:
```shell
journalctl -u bridgehead@bbmri -f
```
Once the Bridgehead has passed these checks, take a look at the landing page:
```
https://localhost
```
You can either do this in a browser or with curl. If you visit the URL in the browser, you will need to click through several warnings, because you will initially be using a self-signed certificate. With curl, you can bypass these checks:
```shell
curl -k https://localhost
```
If you get errors when you do this, you need to use ```docker logs``` to examine your landing page container in order to determine what is going wrong.
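For example, to inspect the landing page container (container names can be listed with ```docker ps```):
```shell
docker logs bridgehead-landingpage
```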
If you have chosen to take part in our monitoring program (by setting the ```MONITOR_APIKEY``` variable in the configuration), you will be informed by email when problems are detected in your Bridgehead.
### De-installing a Bridgehead
You may decide that you want to remove a Bridgehead installation from your machine, e.g. if you want to migrate it to a new location or if you want to start a fresh installation because the initial attempts did not work.
To do this, run:
```shell
sh bridgehead uninstall
```
## Site-specific configuration
### HTTPS Access
@ -333,21 +119,6 @@ Even within your internal network, the Bridgehead enforces HTTPS for all service
All of the Bridgehead's outgoing connections are secured by transport encryption (TLS) and a Bridgehead will refuse to connect if certificate verification fails. If your local forward proxy server performs TLS termination, please place its CA certificate in `/etc/bridgehead/trusted-ca-certs` as a `.pem` file, e.g. `/etc/bridgehead/trusted-ca-certs/mylocalca.pem`. Then, all Bridgehead components will pick up this certificate and trust it for outgoing connections.
To find the certificate file, first run the following:
```
curl -v https://broker.bbmri.samply.de/v1/health
```
In the output, look out for the line:
```
successfully set certificate verify locations:
```
A file will be mentioned here, perhaps in the directory /etc/ssl/certs; the exact location depends on your operating system. This is the file that you need to copy.
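For example (replace ```<ca-file>``` with the path reported by curl):
```shell
sudo cp <ca-file> /etc/bridgehead/trusted-ca-certs/mylocalca.pem
```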
### File structure
- `/srv/docker/bridgehead` contains this git repository with the shell scripts and *project-specific configuration*. In here, all files are identical for all sites. You should not make any changes here.
@ -360,96 +131,28 @@ Here a file will be mentioned, perhaps in the directory /etc/ssl/certs. The exac
Your Bridgehead's actual data is not stored in the above directories, but in named docker volumes, see `docker volume ls` and `docker volume inspect <volume_name>`.
### BBMRI-ERIC Directory entry needed
If you run a biobank, you should be listed, together with your collections, in the [Directory](https://directory.bbmri-eric.eu), a BBMRI-ERIC project that catalogs biobanks.
To do this, contact the BBMRI-ERIC national node for the country where your biobank is based, see [the list of nodes](http://www.bbmri-eric.eu/national-nodes/).
Once you have added your biobank to the Directory, you will receive a persistent identifier (PID) for your biobank and unique identifiers (IDs) for your collections. The collection IDs are needed to assign biospecimens to collections and are used later in the data flows between BBMRI-ERIC tools. If you cannot assign all your biospecimens to collections via collection IDs, **you should choose one of your sample collections as a default collection for your biobank**. This collection will be automatically used to label any samples that have not been assigned a collection ID in your ETL process. Make a note of this default collection ID; you will need it later on in the installation process.
### Directory sync tool
The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. the number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country-specific Directory URL, username and password to enable this feature.
Full details can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).
To enable it, you will need to add these variables to the ```bbmri.conf``` file of your GitLab repository. Here is an example config:
```
DS_DIRECTORY_URL=https://directory.bbmri-eric.eu
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=qwdnqwswdvqHBVGFR9887
DS_TIMER_CRON="0 22 * * *"
```
You must contact the Directory team for your national node to find the URL, and to register as a user.
Additionally, you should choose when you want Directory sync to run. In the example above, this is set to happen at 10 pm every evening. You can modify this to suit your requirements. The timer specification should follow the [cron](https://crontab.guru) convention.
Once you have edited the GitLab config, the Bridgehead will automatically pick up the new values and start synchronizing the data.
There will be a delay before the effects of Directory sync become visible. First, you will need to wait until the time you have specified in ```DS_TIMER_CRON```. Second, the information will then be synchronized from your national node with the central European Directory. This can take up to 24 hours.
### Loading data
The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).
For an ECDC/EHDS2 installation, you need to provide your data as tables in CSV (comma-separated value) files and place them in the directory /srv/docker/ecdc/data. You can provide as many data files as you like, and you can add new files incrementally over time.
In order for this new data to be loaded, you will need to execute the ```run.sh``` script with the appropriate arguments:
- To read just the most recently added data files: ```/srv/docker/bridgehead/run.sh --upload```.
- To read in all data from scratch: ```/srv/docker/bridgehead/run.sh --upload-all```.
These two variants give you the choice between uploading data in an incremental way that preserves the date used for statistics or as a single upload that date stamps everything with the current date.
The Bridgehead can be started without data but, obviously, any searches run from the Explorer will then return zero results for your site. Note that an empty data directory will automatically be created on the first start of the Bridgehead if you have not set one up yourself.
For non-ECDC setups, read on.
You can load data into this store by using its FHIR API:
```
https://<Name of your server>/bbmri-localdatamanagement/fhir
```
The name of your server will generally be the full name of the VM that the Bridgehead runs on. You can alternatively supply an IP address.
The FHIR API uses basic auth. You can find the credentials in `/etc/bridgehead/<project>.local.conf`.
Note that if you don't have a DNS certificate for the Bridgehead, you will need to allow an insecure connection. E.g. with curl, use the `-k` flag.
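As an illustration, uploading a FHIR transaction bundle with curl might look like this (server name, credentials and file name are placeholders; the file is assumed to contain a FHIR Bundle of type ```transaction```):
```shell
curl -k -u <USER>:<PASSWORD> \
  -H "Content-Type: application/fhir+json" \
  --data @bundle.json \
  https://<your-server>/bbmri-localdatamanagement/fhir
```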
The storage space on your hard drive will depend on the number of FHIR resources that you intend to generate. This will be the sum of the number of patients/subjects, the number of samples, the number of conditions/diseases and the number of observations. As a general rule of thumb, you can assume that each resource will consume about 2 kilobytes of disk space.
For more information on Blaze performance, please refer to [import performance](https://github.com/samply/blaze/blob/master/docs/performance/import.md).
#### ETL for BBMRI and GBA
Normally, you will need to build your own ETL to feed the Bridgehead. However, there is one case where a shortcut might be available:
- If you are using CentraXX as a BIMS and you have a FHIR-Export License, then you can employ standard mapping scripts that access the CentraXX-internal data structures and map the data onto the BBMRI FHIR profile. It may be necessary to adjust a few parameters, but this is nonetheless significantly easier than writing your own ETL.
You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).
## Things you should know
### Auto-Updates
Your Bridgehead will automatically and regularly check for updates. Whenever something has been updated (e.g. one of the git repositories or one of the docker images), your Bridgehead is automatically restarted. This happens automatically and does not need any configuration.
If you would like to understand what happens exactly and when, please check the systemd units deployed during the [installation](#base-installation) via `systemctl cat bridgehead-update@<PROJECT>.service` and `systemctl cat bridgehead-update@<PROJECT>.timer`.
### Auto-Backups
Some of the components in the Bridgehead store persistent data. For those components, we have integrated an automated backup solution into the Bridgehead updates. It will automatically save the backup in multiple files:
1) Last-XX, where XX represents a weekday, to allow re-import of at least one version of the database for each of the past seven days.
2) Year-KW-XX, where XX represents the calendar week, to allow re-import of at least one version per calendar week.
3) Year-Month, to allow re-import of at least one version per month.
To enable the Auto-Backup feature, please set the variable `BACKUP_DIRECTORY` in your site's configuration.
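For example (the path below is just an illustration):
```
BACKUP_DIRECTORY=/srv/bridgehead-backups
```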
### Development Installation
By using `./bridgehead dev-install <projectname>` instead of `install`, you can install a developer Bridgehead. The difference is that you can provide an arbitrary configuration repository during the installation, meaning that it does not have to adhere to the usual naming scheme. This allows for better decoupling between development and production configurations.
### Non-Linux OS
@ -468,42 +171,6 @@ We have tested the installation procedure with an Ubuntu 22.04 guest system runn
Installation under WSL ought to work, but we have not tested this.
## Troubleshooting
### Docker Daemon Proxy Configuration
Docker has a background daemon, responsible for downloading images and starting them. Sometimes, proxy configuration from your system won't carry over and it will fail to download images. In that case, you'll need to configure the proxy inside the system unit of docker by creating the file `/etc/systemd/system/docker.service.d/proxy.conf` with the following content:
``` ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,some-local-docker-registry.example.com,.corp"
```
After saving the configuration file, you'll need to reload the system daemon for the changes to take effect:
``` shell
sudo systemctl daemon-reload
```
and restart the docker daemon:
``` shell
sudo systemctl restart docker
```
For more information, please consult the [official documentation](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).
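As a quick sanity check, you could afterwards query the unit's environment to confirm that the proxy variables were picked up, for example:
```shell
sudo systemctl show --property=Environment docker
```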
### Monitoring
To keep all Bridgeheads up and working and to detect any errors before a user does, a central monitoring is in place:
- Your Bridgehead itself will report relevant system events, such as successful/failed updates, restarts, performance metrics or version numbers.
- Your Bridgehead is also monitored from the outside by your network's central components. For example, the federated search will regularly perform a black-box test by sending an empty query to your Bridgehead and checking if the results make sense.
In all monitoring cases, obviously no sensitive information is transmitted, in particular not any patient-related data. Aggregated data, e.g. total amount of datasets, may be transmitted for diagnostic purposes.
## License
Copyright 2019 - 2022 The Samply Community


@ -1,13 +1,60 @@
version: "3.7"
# This includes only the shared persistence for BBMRI-ERIC and GBN and EHDS2. Federation components are included as modules, see vars.
services:
traefik:
container_name: bridgehead-traefik
image: traefik:latest
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.file.directory=/configuration/
- --api.dashboard=true
- --accesslog=true
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
ports:
- 80:80
- 443:443
volumes:
- /etc/bridgehead/traefik-tls:/certs:ro
- ../lib/traefik-configuration/:/configuration:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
forward_proxy:
container_name: bridgehead-forward-proxy
image: samply/bridgehead-forward-proxy:latest
environment:
HTTPS_PROXY: ${HTTPS_PROXY_URL}
USERNAME: ${HTTPS_PROXY_USERNAME}
PASSWORD: ${HTTPS_PROXY_PASSWORD}
volumes:
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
landing:
container_name: bridgehead-landingpage
image: samply/bridgehead-landingpage:master
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
environment:
HOST: ${HOST}
PROJECT: ${PROJECT}
SITE_NAME: ${SITE_NAME}
blaze:
#image: docker.verbis.dkfz.de/cache/samply/blaze:latest
# Blaze versions 0.26 and 0.27 do not return anything when you run a
# CQL query, so I am pinning the version at 0.25.
image: samply/blaze:0.25
image: "samply/blaze:0.19"
container_name: bridgehead-bbmri-blaze
environment:
BASE_URL: "http://bridgehead-bbmri-blaze:8080"
@ -23,13 +70,43 @@ services:
- "traefik.http.services.blaze_ccp.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
- "traefik.http.routers.blaze_ccp.tls=true"
ports:
- "8081:8080"
spot:
image: samply/spot:latest
container_name: bridgehead-spot
environment:
SECRET: ${SPOT_BEAM_SECRET_LONG}
APPID: spot
PROXY_ID: ${PROXY_ID}
LDM_URL: http://bridgehead-bbmri-blaze:8080/fhir
BEAM_PROXY: http://beam-proxy:8081
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: "samply/beam-proxy:develop"
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_0_ID: spot
APP_0_KEY: ${SPOT_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- ./root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
# used in modules *-locator.yml
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem


@ -1,81 +0,0 @@
### DO NOT EDIT THIS FILE DIRECTLY.
###
### This file is collaboratively managed by yourself and the CCP-IT team at DKFZ.
### The Bridgehead will pull it from git every night and restart if required.
### To make any changes (or review changes by CCP-IT), please login here:
### [URL_TO_SITE_SPECIFIC_GIT_REPO]
###
### DO NOT EDIT THIS FILE DIRECTLY.
### A note on Secrets:
###
### Variables with a value of <VAULT> will be fetched from a central component
### upon each bridgehead startup.
### Using the proven Vaultwarden password manager puts you in full control of
### who can read the passwords. In particular, as long as you don't declare a
### secret as shared ("SITE+DKFZ"), DKFZ cannot read these strings.
### We recommend putting credentials such as local passwords into the password
### store, not the git repo. Please keep your master password safe (vault.conf).
### Common Configuration of all Components
## This is a descriptive human readable name of your site (e.g. Belgium)
SITE_NAME=<National node>
## This is the id for your site used in machine to machine communication (should be
## lower-case, e.g. belgium)
SITE_ID=<National node>
## This server's hostname, for access from other computers within your institution
## (e.g. mybridgehead.intern.myinstitution.org)
## Optional. If left empty, this is auto-generated via the `hostname` command.
HOST=
## Proxy Configuration
# leave empty if not applicable
# eg.: http://my-proxy-host:my-proxy-port
HTTP_PROXY_URL=
HTTP_PROXY_USERNAME=
HTTP_PROXY_PASSWORD=
HTTPS_PROXY_URL=$HTTP_PROXY_URL
HTTPS_PROXY_USERNAME=$HTTP_PROXY_USERNAME
HTTPS_PROXY_PASSWORD=$HTTP_PROXY_PASSWORD
## Maintenance Configuration
# By default, the bridgehead regularly performs certain housekeeping tasks such as pruning of old docker images to not run out of disk space.
# Set the following to false to opt-out. (Default: true)
#AUTO_HOUSEKEEPING=
### Connector Configuration
## The operator of the specific site.
OPERATOR_FIRST_NAME=
OPERATOR_LAST_NAME=
OPERATOR_EMAIL=
OPERATOR_PHONE=
## SMTP Server
# ex.: mailhost.intern.klinik.de
MAIL_HOST=
MAIL_PORT=
# ex.: no-reply@bridgehead.intern.klinik.de
MAIL_FROM_ADDRESS=
MAIL_FROM_NAME=
### Monitoring
# The apikey used for reporting to the central DKFZ monitoring. Leave empty to opt out.
MONITOR_APIKEY=
### Biobanking (BBMRI) specifics
## We consider BBMRI as BBMRI-ERIC (European) and German Biobank Node (Germany).
## Obviously, all German biobanks are by definition also European. Thus,
## any Bridgehead will by default connect to the BBMRI-ERIC services but not
## the national ones. We aim to proceed similarly for other BBMRI-ERIC National Nodes.
##
## The default values are correct for biobanks outside Germany.
## For a biobank inside Germany, set ENABLE_GBN=true.
# Connect to the European services, e.g. BBMRI-ERIC Sample Locator (Default: true)
ENABLE_ERIC=false
# Connect to the German services, e.g. Biobank Node Sample Locator (Default: false)
# Set this to true in German biobanks!
ENABLE_GBN=false
# Connect to the ECDC services, e.g. ECDC Sample Locator (Default: false)
# Set this to true in ECDC national nodes!
ENABLE_EHDS2=true


@ -1,8 +0,0 @@
services:
directory_sync_service:
image: "docker.verbis.dkfz.de/cache/samply/directory_sync_service"
environment:
DS_DIRECTORY_URL: ${DS_DIRECTORY_URL}
DS_DIRECTORY_USER_NAME: ${DS_DIRECTORY_USER_NAME}
DS_DIRECTORY_PASS_CODE: ${DS_DIRECTORY_PASS_CODE}
DS_TIMER_CRON: ${DS_TIMER_CRON}


@ -1,6 +0,0 @@
#!/bin/bash
if [ -n "${DS_DIRECTORY_USER_NAME}" ]; then
log INFO "Directory sync setup detected -- will start directory sync service."
OVERRIDE+=" -f ./$PROJECT/modules/directory-sync-compose.yml"
fi


@ -1,53 +0,0 @@
version: "3.7"
services:
dnpm-beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-dnpm-beam-proxy
environment:
BROKER_URL: ${DNPM_BROKER_URL}
PROXY_ID: ${DNPM_PROXY_ID}
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
dnpm-beam-connect:
depends_on: [ dnpm-beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://dnpm-beam-proxy:8081
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${DNPM_PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
HTTP_PROXY: http://forward_proxy:3128
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
- "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
- "traefik.http.routers.dnpm-connect.tls=true"
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem


@ -1,33 +0,0 @@
version: "3.7"
services:
dnpm-backend:
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
container_name: bridgehead-dnpm-backend
environment:
- ZPM_SITE=${ZPM_SITE}
volumes:
- /etc/bridgehead/dnpm:/bwhc_config:ro
- ${DNPM_DATA_DIR}:/bwhc_data
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
- "traefik.http.routers.bwhc-backend.tls=true"
dnpm-frontend:
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
container_name: bridgehead-dnpm-frontend
links:
- dnpm-backend
environment:
- NUXT_HOST=0.0.0.0
- NUXT_PORT=8080
- BACKEND_PROTOCOL=https
- BACKEND_HOSTNAME=$HOST
- BACKEND_PORT=443
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.bwhc-frontend.tls=true"


@ -1,27 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
if [ -z "${DNPM_DATA_DIR+x}" ]; then
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
exit 1
fi
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
echo "Override of landing page url already in place"
else
echo "Adding override of landing page url"
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
else
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
fi
fi
fi


@ -1,12 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM}" ]; then
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam and Beam.Connect for DNPM."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
# Set variables required for Beam-Connect
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
DNPM_BROKER_ID="broker.ccp-it.dktk.dkfz.de"
DNPM_BROKER_URL="https://${DNPM_BROKER_ID}"
DNPM_PROXY_ID="${SITE_ID}.${DNPM_BROKER_ID}"
fi


@ -1,82 +0,0 @@
version: "3.7"
services:
focus-ehds2:
#image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
image: samply/focus
container_name: bridgehead-focus-ehds2
environment:
API_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${EHDS2_PROXY_ID}
PROXY_ID: ${EHDS2_PROXY_ID}
BLAZE_URL: "http://blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy-ehds2:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
OBFUSCATE: "no"
depends_on:
- "beam-proxy-ehds2"
- "blaze"
beam-proxy-ehds2:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy-ehds2
environment:
BROKER_URL: ${EHDS2_BROKER_URL}
PROXY_ID: ${EHDS2_PROXY_ID}
APP_focus_KEY: ${EHDS2_FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/bbmri/modules/${EHDS2_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro
# Convert ECDC CSV file into FHIR and push to Blaze
transfair:
container_name: transfair
image: samply/transfair
environment:
FHIR_INPUT_URL: "http://source_blaze:8080/fhir"
FHIR_OUTPUT_URL: "http://bridgehead-bbmri-blaze:8080/fhir"
PROFILE: "amr2fhir"
#WRITE_BUNDLES_TO_FILE: "true"
AMR_FILE_PATH: "/app/data"
restart: on-failure
# The start up logic for TransFAIR is kind of complicated for the ECDC/EHDS2
# pilot. This is because we only want to run it if 1. there are source data
# files to be transformed and 2. if there is no lock file. We also need to
# wait for Blaze to start, TransFAIR does not check for this. And finally,
# once TransFAIR has finished loading data, a lock file is created, to stop
# a time-consuming repeat run.
command: bash -c " \
echo listing /app/data && \
ls -la /app/data && \
ls /app/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && \
[ ! -f /app/data/lock ] && \
( \
echo 'Wait for Blaze to finish initializing' ; \
sleep 360 ; \
echo 'Remove old output files' ; \
rm -rf /app/test/* ; \
cd /app ; \
echo 'Run TransFAIR' ; \
java -jar transFAIR.jar ; \
echo 'Touching lock file' ; \
touch /app/data/lock \
) & tail -f /dev/null"
# If you put .csv files into ./../ecdc/data, TransFAIR will try to process them.
volumes:
- ../../ecdc/test:/app/test/
- ../../ecdc/data:/app/data/
# Report on the data pushed to Blaze by TransFAIR
test-data-loader:
container_name: test-data-loader
image: samply/test-data-loader
command: sh -c "sleep 420 && echo Listing all resources in FHIR store && blazectl --server http://bridgehead-bbmri-blaze:8080/fhir count-resources && tail -f /dev/null"


@ -1,28 +0,0 @@
#!/bin/bash
if [ "${ENABLE_EHDS2}" == "true" ]; then
log INFO "EHDS2 setup detected -- will start services for German Biobank Node."
OVERRIDE+=" -f ./$PROJECT/modules/ehds2-compose.yml"
# The environment needs to be defined in /etc/bridgehead
case "$ENVIRONMENT" in
"production")
export EHDS2_BROKER_ID=broker.bbmri.samply.de
export EHDS2_ROOT_CERT=ehds2
;;
"test")
export EHDS2_BROKER_ID=broker.test.bbmri.samply.de
export EHDS2_ROOT_CERT=ehds2.test
;;
*)
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export EHDS2_BROKER_ID=broker.bbmri.samply.de
export EHDS2_ROOT_CERT=ehds2
;;
esac
EHDS2_BROKER_URL=https://${EHDS2_BROKER_ID}
EHDS2_PROXY_ID=${SITE_ID}.${EHDS2_BROKER_ID}
EHDS2_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
EHDS2_SUPPORT_EMAIL=feedback@germanbiobanknode.de
fi


@ -1,22 +0,0 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUMy/n0zFRihhVR3aAD54LumzeYdwwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIxMDI1MDczNTA4WhcNMzIx
MDIyMDczNTM3WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL3qWliHIlIT1Qlsyq/NKJ1uj6/AF0STNg5NTNpb
Xqe5rmUqs6jmQepputGStBVe5TthFw56whISv9FqD5s1PZUGyFikW1pJUnF7ZYRf
MfrJHRi1vUnD3Gw36FCot+i6BAxfw/rdp9hoqFZ6erRkULLaYZ5S2cDHN0DWc18V
3VgZ66ah8QXSx7ERRNa/eWRkHrPIYhyVSoKuyZfvbVgsYZADSlviCgIHPrGLerLr
ylNUyuTxJ5RKStOwPn7A+Jp7nRT+MRh9BphA7s6NuK9h+eVe1DiLbIETWyCEfN3Y
INpunatn3QDhqOIfNcuBArjsAj7mg8l5KNba8nUP4v0EJYECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFMvc5Fizz1vO
MEG3MIsy7UY69ZNIMB8GA1UdIwQYMBaAFMvc5Fizz1vOMEG3MIsy7UY69ZNIMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQBb8a5su820
h8JStJC+KpvXmDrGkwx9bHlEZMgQQejIrwPLEbA32KBvNxdoUxF9q1Y773MKdqbc
cCJwzQXE/NPZ13hCGrEIXs8DgH52GhEB5592k5/bRNcAvUwbZSXPPiT0rgq/eUOt
BYhgN0ov7h1MC5L6CYB/rQwqck7JPlmrXTkh2gix4/dEdBRzsHsn/xlo8ay5QYHG
rx2Adit76eZu/MJoJNzl1r8MPxLqyAie3KcIU54A+UMozLrWEQP/TyOyWZdjUjJt
cBYgkKJTjwdRhc+ehI3kFo7b/a/Z/jl9szKsAPHozMixSi8lGnsYwN80oqeRvT7h
wcMUK+igv3/K
-----END CERTIFICATE-----


@ -1,22 +0,0 @@
# DKFZ certificate
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
vZdXnV1qjlM2
-----END CERTIFICATE-----


@ -1,36 +0,0 @@
version: "3.7"
services:
focus-eric:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus-eric
environment:
API_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${ERIC_PROXY_ID}
PROXY_ID: ${ERIC_PROXY_ID}
BLAZE_URL: "http://blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy-eric:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
depends_on:
- "beam-proxy-eric"
- "blaze"
beam-proxy-eric:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy-eric
environment:
BROKER_URL: ${ERIC_BROKER_URL}
PROXY_ID: ${ERIC_PROXY_ID}
APP_focus_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro


@ -1,28 +0,0 @@
#!/bin/bash
if [ "${ENABLE_ERIC}" == "true" ]; then
log INFO "BBMRI-ERIC setup detected -- will start services for BBMRI-ERIC."
OVERRIDE+=" -f ./$PROJECT/modules/eric-compose.yml"
# The environment needs to be defined in /etc/bridgehead
case "$ENVIRONMENT" in
"production")
export ERIC_BROKER_ID=broker.bbmri.samply.de
export ERIC_ROOT_CERT=eric
;;
"test")
export ERIC_BROKER_ID=broker-test.bbmri-test.samply.de
export ERIC_ROOT_CERT=eric.test
;;
*)
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export ERIC_BROKER_ID=broker.bbmri.samply.de
export ERIC_ROOT_CERT=eric
;;
esac
ERIC_BROKER_URL=https://${ERIC_BROKER_ID}
ERIC_PROXY_ID=${SITE_ID}.${ERIC_BROKER_ID}
ERIC_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
ERIC_SUPPORT_EMAIL=bridgehead@helpdesk.bbmri-eric.eu
fi


@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
vZdXnV1qjlM2
-----END CERTIFICATE-----


@ -1,36 +0,0 @@
version: "3.7"
services:
focus-gbn:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus-gbn
environment:
API_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${GBN_PROXY_ID}
PROXY_ID: ${GBN_PROXY_ID}
BLAZE_URL: "http://blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy-gbn:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
depends_on:
- "beam-proxy-gbn"
- "blaze"
beam-proxy-gbn:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy-gbn
environment:
BROKER_URL: ${GBN_BROKER_URL}
PROXY_ID: ${GBN_PROXY_ID}
APP_focus_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/bbmri/modules/${GBN_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro


@ -1,28 +0,0 @@
#!/bin/bash
if [ "${ENABLE_GBN}" == "true" ]; then
log INFO "GBN setup detected -- will start services for German Biobank Node."
OVERRIDE+=" -f ./$PROJECT/modules/gbn-compose.yml"
# The environment needs to be defined in /etc/bridgehead
case "$ENVIRONMENT" in
"production")
export GBN_BROKER_ID=broker.bbmri.de
export GBN_ROOT_CERT=gbn
;;
"test")
export GBN_BROKER_ID=broker.test.bbmri.de
export GBN_ROOT_CERT=gbn.test
;;
*)
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export GBN_BROKER_ID=broker.bbmri.de
export GBN_ROOT_CERT=gbn
;;
esac
GBN_BROKER_URL=https://${GBN_BROKER_ID}
GBN_PROXY_ID=${SITE_ID}.${GBN_BROKER_ID}
GBN_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
GBN_SUPPORT_EMAIL=feedback@germanbiobanknode.de
fi


@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUckVOQQWZBTC0pWhn1X3lPxAWricwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwOTA0MDkwMTQ0WhcNMzMw
OTAxMDkwMjEzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAOOD+CVvteBmu1hKV1QlfbHmiLCnuf6F+9k+1u/b
6as6k7BURn8KZAxVLWSIwC6x2C7n9CHN9Jieb4DWpS0XmXQVUEpT1/yiLGBdxp2x
nrbzm7caOunsWsPlGOcXPJKJpzAhcg58RDzXZ+2+shulSmsgPNlWBaLhNL5wj0sQ
MzbwGVlGIJg18Ye/9WgQkO2ZcnTGb5cRsChKs4H43ZC34ZSSk7wqWg6P3e2xFam1
YKXBOZzhwHoI4AxUQ+gd6upz5dqcwbaNZm10VP8fMac2dMLw9cOCS0ueDCS4viLd
A69yds19AndBPMZhoEY1UHafjJ1uITRJQpaaB4vNliX+1rECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFC74YIorSwWD
/s5ozz3xvqUMDJ3qMB8GA1UdIwQYMBaAFC74YIorSwWD/s5ozz3xvqUMDJ3qMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCzcIccBzYr
sHCGTGsSyLGBYsuI5yl+hvFOitYTha/mC+XBxq2R6By2WzbfSZtyZkUtC/+FqdCY
VtMSjbDVXtBgsabfqODBobHmPyOEmNUX4IGcyn06rdM+rHQRah98lF+PhiPPO42F
9Wj8dkq4/Gf+Yarq31ZbY0sed2sEPZ/bV26Og8Ft9qip5gKwklyakAiCnDIq+QBd
ltvng3g08AQM0o5KIphP2/WU0UoSk1YPVMjRxuLiFg8xvr2EdCQQ9oA7xbhrmAXe
242HVW/7KokjmowyWTQlIUGnuGdCOtTl8h74eHTID0YWO68hHkA0J5Ox2j4dZxvw
HRFTxAR1gGKX
-----END CERTIFICATE-----


@ -1,20 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUQJjusHYR89Xas+kRbg41aHZxfmcwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwODIxMDk1MDI1WhcNMzMw
ODE4MDk1MDU1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMP0jt2tSk23Bu+QeogqlFwjbMnqwRcWGKAOF4ch
aOK2B5u/BnpqIZDZbhfSIJTv8DPe3+nA2VqRfSiW3HbV0auqxx1ii2ZmHYbvO2P/
Jj6hyIiYYGqCMRVXk7iB+DfMysQEaSJO/7lJSprlVQCl0u7MAQ4q/szVNwcCm2Xi
iE00Wlota2xTYjnJHYjeaLZL4kQsjqW2aCWHG4q77Z4NXT+lXN9XXedgoXLhuwWl
UyHhXPjyCVu1iFzsXwSTodPAETGoInRYMqMA7PrbHZu1b2Jz0BwCQ+bark1td+Mf
l3uP0QduhZnH6zGO0KyUFRzeiesgabv5bgUeSSsIOVjnLJUCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFME99nPh1Vuo
7eRaymL2Ps7qGxIdMB8GA1UdIwQYMBaAFME99nPh1Vuo7eRaymL2Ps7qGxIdMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB0WG0xT00R
5CA0tVHaNo8bQuAXytu566TspKc5vVd3r6mglj/MiSSQG2MVz+GUU6LnnApgln1P
pvZuyaldB0QdTTLeJVMr/eFtZonlxqcxkj+VW2Y7mRHT7Xx9GQvzKYvSK5m/+xzH
pAQl8AirgkoZ5b+ltlzM0pDAH204xj3/skmGqM/o0FKzRtpetHYkZPiquHCmO2Cp
nTMkv7c2qu5t2Dm5q0Tmb7ZRoA1yIYhDn/UfhTAVWQnoMfXK8oB9nkRRb7pAfOXo
W1K4A+oWqKrJwfIH/Ycnw7hu8hPuGOyIN/PLnLpJp9M2I67vywp5lIvFib4UukyJ
wJw6/iTienIA
-----END CERTIFICATE-----


@ -1,48 +1,7 @@
# Makes sense for all European Biobanks
: ${ENABLE_ERIC:=true}
# Makes only sense for German Biobanks
: ${ENABLE_GBN:=false}
# Makes only sense for EHDS2 project
: ${ENABLE_EHDS2:=false}
FOCUS_RETRY_COUNT=128
BROKER_ID=broker.bbmri.samply.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
SPOT_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
SPOT_BEAM_SECRET_LONG="ApiKey spot.${PROXY_ID} ${SPOT_BEAM_SECRET_SHORT}"
SUPPORT_EMAIL=bridgehead@helpdesk.bbmri-eric.eu
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
SUPPORT_EMAIL=$ERIC_SUPPORT_EMAIL
BROKER_URL_FOR_PREREQ="https://ecdc-vm-ehds-test1.swedencentral.cloudapp.azure.com"
if [ -n "$GBN_SUPPORT_EMAIL" ]; then
SUPPORT_EMAIL=$GBN_SUPPORT_EMAIL
fi
if [ -n "$EHDS2_SUPPORT_EMAIL" ]; then
SUPPORT_EMAIL=$EHDS2_SUPPORT_EMAIL
fi
function do_enroll {
COUNT=0
if [ "$ENABLE_ERIC" == "true" ]; then
do_enroll_inner $ERIC_PROXY_ID $ERIC_SUPPORT_EMAIL
COUNT=$((COUNT+1))
fi
if [ "$ENABLE_GBN" == "true" ]; then
do_enroll_inner $GBN_PROXY_ID $GBN_SUPPORT_EMAIL
COUNT=$((COUNT+1))
fi
if [ "$ENABLE_EHDS2" == "true" ]; then
do_enroll_inner $EHDS2_PROXY_ID $EHDS2_SUPPORT_EMAIL
COUNT=$((COUNT+1))
fi
if [ $COUNT -ge 2 ]; then
echo
echo "You just received $COUNT certificate signing requests (CSR). Please send $COUNT e-mails, with 1 CSR each, to the respective e-mail address."
fi
}


@ -32,7 +32,7 @@ case "$PROJECT" in
bbmri)
#nothing extra to do
;;
minimal)
snap)
#nothing extra to do
;;
*)
@ -54,95 +54,47 @@ loadVars() {
set +a
OVERRIDE=${OVERRIDE:=""}
# minimal contains shared components, so potential overrides must be applied in every project
if [ -f "minimal/docker-compose.override.yml" ]; then
log INFO "Applying Bridgehead common components override (minimal/docker-compose.override.yml)"
OVERRIDE+=" -f ./minimal/docker-compose.override.yml"
fi
if [ -f "$PROJECT/docker-compose.override.yml" ]; then
log INFO "Applying $PROJECT/docker-compose.override.yml"
OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml"
fi
detectCompose
setHostname
setupProxy
# Set some project-independent default values
: ${ENVIRONMENT:=production}
case "$ENVIRONMENT" in
"production")
export FOCUS_TAG=main
;;
"test")
export FOCUS_TAG=develop
;;
*)
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export FOCUS_TAG=main
;;
esac
}
case "$ACTION" in
start)
loadVars
hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
chown -R bridgehead ${BASE}
checkRequirements
# Note: changes to "bridgehead" script will only take effect after next start.
su bridgehead -c "git pull"
chown -R bridgehead ${BASE}
# Local versions of focus and transfair are needed by EHDS2
clone_focus_if_nonexistent ${BASE}/..
build_focus ${BASE}/..
clone_transfair_if_nonexistent ${BASE}/..
build_transfair ${BASE}/..
# Location for input data and results for EHDS2
mkdir -p ${BASE}/../ecdc/test
mkdir -p ${BASE}/../ecdc/data
chown -R bridgehead ${BASE}/../ecdc
hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
export LDM_LOGIN=$(getLdmPassword)
exec $COMPOSE -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
;;
stop)
loadVars
# HACK: This is temporarily to properly shut down false bridgehead instances (bridgehead-ccp instead ccp)
$COMPOSE -p bridgehead-$PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
;;
is-running)
bk_is_running
exit $?
exec $COMPOSE -f ./$PROJECT/docker-compose.yml $OVERRIDE down
;;
update)
loadVars
exec ./lib/update-bridgehead.sh $PROJECT
;;
install)
source ./lib/prepare-system.sh NODEV
loadVars
exec ./lib/install-bridgehead.sh $PROJECT
;;
dev-install)
exec ./lib/prepare-system.sh DEV
source ./lib/prepare-system.sh
loadVars
exec ./lib/install-bridgehead.sh $PROJECT
;;
uninstall)
exec ./lib/uninstall-bridgehead.sh $PROJECT
;;
adduser)
loadVars
log "INFO" "Adding encrypted credentials in /etc/bridgehead/$PROJECT.local.conf"
read -p "Please choose the component (LDM_AUTH|NNGM_AUTH) you want to add a user to : " COMPONENT
read -p "Please enter a username: " USER
read -s -p "Please enter a password (will not be echoed): "$'\n' PASSWORD
add_basic_auth_user $USER $PASSWORD $COMPONENT $PROJECT
;;
enroll)
loadVars
do_enroll $PROXY_ID
if [ -e $PRIVATEKEYFILENAME ]; then
log ERROR "Private key already exists at $PRIVATEKEYFILENAME. Please delete first to proceed."
exit 1
fi
docker run --rm -ti -v /etc/bridgehead/pki:/etc/bridgehead/pki samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $PROXY_ID --admin-email $SUPPORT_EMAIL
chmod 600 $PRIVATEKEYFILENAME
;;
preRun | preUpdate)
fixPermissions

View File

@ -1,12 +1,65 @@
version: "3.7"
services:
traefik:
container_name: bridgehead-traefik
image: traefik:latest
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.file.directory=/configuration/
- --api.dashboard=true
- --accesslog=true
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
ports:
- 80:80
- 443:443
volumes:
- /etc/bridgehead/traefik-tls:/certs:ro
- ../lib/traefik-configuration/:/configuration:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
forward_proxy:
container_name: bridgehead-forward-proxy
image: samply/bridgehead-forward-proxy:latest
environment:
HTTPS_PROXY: ${HTTPS_PROXY_URL}
USERNAME: ${HTTPS_PROXY_USERNAME}
PASSWORD: ${HTTPS_PROXY_PASSWORD}
volumes:
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
landing:
container_name: bridgehead-landingpage
image: samply/bridgehead-landingpage:master
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
environment:
HOST: ${HOST}
PROJECT: ${PROJECT}
SITE_NAME: ${SITE_NAME}
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:latest
image: "samply/blaze:0.19"
container_name: bridgehead-ccp-blaze
environment:
BASE_URL: "http://bridgehead-ccp-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx4g"
LOG_LEVEL: "debug"
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"
@ -18,28 +71,29 @@ services:
- "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
- "traefik.http.routers.blaze_ccp.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:main
container_name: bridgehead-focus
spot:
image: samply/spot:latest
container_name: bridgehead-spot
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${PROXY_ID}
SECRET: ${SPOT_BEAM_SECRET_LONG}
APPID: spot
PROXY_ID: ${PROXY_ID}
BLAZE_URL: "http://bridgehead-ccp-blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
LDM_URL: http://bridgehead-ccp-blaze:8080/fhir
BEAM_PROXY: http://beam-proxy:8081
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
image: "samply/beam-proxy:develop"
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
APP_0_ID: spot
APP_0_KEY: ${SPOT_BEAM_SECRET_SHORT}
APP_1_ID: report-hub
APP_1_KEY: ${REPORTHUB_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
@ -50,7 +104,7 @@ services:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
- ./root.crt.pem:/conf/root.crt.pem:ro
volumes:

34
ccp/exliquid-compose.yml Normal file
View File

@ -0,0 +1,34 @@
version: "3.7"
services:
exliquid-task-store:
image: "samply/blaze:0.19"
container_name: bridgehead-exliquid-task-store
environment:
BASE_URL: "http://bridgehead-exliquid-task-store:8080"
JAVA_TOOL_OPTIONS: "-Xmx1g"
volumes:
- "exliquid-task-store-data:/app/data"
labels:
- "traefik.enable=false"
exliquid-report-hub:
image: "samply/report-hub:latest"
container_name: bridgehead-exliquid-report-hub
environment:
SPRING_WEBFLUX_BASE_PATH: "/exliquid"
JAVA_TOOL_OPTIONS: "-Xmx1g"
APP_BEAM_APPID: "report-hub.${PROXY_ID}"
APP_BEAM_SECRET: ${REPORTHUB_BEAM_SECRET_SHORT}
APP_BEAM_PROXY_BASEURL: http://beam-proxy:8081
APP_TASKSTORE_BASEURL: "http://bridgehead-exliquid-task-store:8080/fhir"
APP_DATASTORE_BASEURL: http://bridgehead-ccp-blaze:8080/fhir
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.report-ccp.rule=PathPrefix(`/exliquid`)"
- "traefik.http.services.report-ccp.loadbalancer.server.port=8080"
- "traefik.http.routers.report-ccp.tls=true"
volumes:
exliquid-task-store-data:

19
ccp/exliquid-setup.sh Normal file
View File

@ -0,0 +1,19 @@
#!/bin/bash
function exliquidSetup() {
case ${SITE_ID} in
berlin|dresden|essen|frankfurt|freiburg|luebeck|mainz|muenchen-lmu|muenchen-tu|mannheim|tuebingen)
EXLIQUID=1
;;
dktk-test)
EXLIQUID=1
;;
*)
EXLIQUID=0
;;
esac
if [[ $EXLIQUID -eq 1 ]]; then
log INFO "EXLIQUID setup detected -- will start Report-Hub."
OVERRIDE+=" -f ./$PROJECT/exliquid-compose.yml"
fi
}

View File

@ -1,18 +0,0 @@
version: "3.7"
services:
adt2fhir-rest:
container_name: bridgehead-adt2fhir-rest
image: docker.verbis.dkfz.de/ccp/adt2fhir-rest:main
environment:
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
SALT: ${LOCAL_SALT}
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.adt2fhir-rest.rule=PathPrefix(`/adt2fhir-rest`)"
- "traefik.http.middlewares.adt2fhir-rest_strip.stripprefix.prefixes=/adt2fhir-rest"
- "traefik.http.services.adt2fhir-rest.loadbalancer.server.port=8080"
- "traefik.http.routers.adt2fhir-rest.tls=true"
- "traefik.http.routers.adt2fhir-rest.middlewares=adt2fhir-rest_strip,auth"

View File

@ -1,13 +0,0 @@
#!/bin/bash
function adt2fhirRestSetup() {
if [ -n "$ENABLE_ADT2FHIR_REST" ]; then
log INFO "ADT2FHIR-REST setup detected -- will start adt2fhir-rest API."
if [ ! -n "$IDMANAGER_LOCAL_PATIENTLIST_APIKEY" ]; then
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
exit 1;
fi
OVERRIDE+=" -f ./$PROJECT/modules/adt2fhir-rest-compose.yml"
LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
fi
}

View File

@ -1,33 +0,0 @@
version: "3.7"
services:
beam-proxy:
environment:
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
dnpm-beam-connect:
depends_on: [ beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://beam-proxy:8081
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
HTTP_PROXY: "http://forward_proxy:3128"
HTTPS_PROXY: "http://forward_proxy:3128"
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
- "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
- "traefik.http.routers.dnpm-connect.tls=true"

View File

@ -1,33 +0,0 @@
version: "3.7"
services:
dnpm-backend:
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
container_name: bridgehead-dnpm-backend
environment:
- ZPM_SITE=${ZPM_SITE}
volumes:
- /etc/bridgehead/dnpm:/bwhc_config:ro
- ${DNPM_DATA_DIR}:/bwhc_data
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
- "traefik.http.routers.bwhc-backend.tls=true"
dnpm-frontend:
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
container_name: bridgehead-dnpm-frontend
links:
- dnpm-backend
environment:
- NUXT_HOST=0.0.0.0
- NUXT_PORT=8080
- BACKEND_PROTOCOL=https
- BACKEND_HOSTNAME=$HOST
- BACKEND_PORT=443
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.bwhc-frontend.tls=true"

View File

@ -1,27 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
if [ -z "${DNPM_DATA_DIR+x}" ]; then
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
exit 1
fi
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
echo "Override of landing page url already in place"
else
echo "Adding override of landing page url"
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
else
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
fi
fi
fi

View File

@ -1,9 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM}" ]; then
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
# Set variables required for Beam-Connect
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
fi

View File

@ -1,58 +0,0 @@
version: "3.7"
services:
id-manager:
image: docker.verbis.dkfz.de/bridgehead/magicpl
container_name: bridgehead-id-manager
environment:
TOMCAT_REVERSEPROXY_FQDN: ${HOST}
TOMCAT_REVERSEPROXY_SSL: "true"
MAGICPL_SITE: ${IDMANAGEMENT_FRIENDLY_ID}
MAGICPL_ALLOWED_ORIGINS: https://${HOST}
MAGICPL_LOCAL_PATIENTLIST_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
MAGICPL_CENTRAXX_APIKEY: ${IDMANAGER_UPLOAD_APIKEY}
MAGICPL_CONNECTOR_APIKEY: ${IDMANAGER_READ_APIKEY}
MAGICPL_CENTRAL_PATIENTLIST_APIKEY: ${IDMANAGER_CENTRAL_PATIENTLIST_APIKEY}
MAGICPL_CONTROLNUMBERGENERATOR_APIKEY: ${IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY}
MAGICPL_OIDC_CLIENT_ID: ${IDMANAGER_AUTH_CLIENT_ID}
MAGICPL_OIDC_CLIENT_SECRET: ${IDMANAGER_AUTH_CLIENT_SECRET}
depends_on:
- patientlist
labels:
- "traefik.enable=true"
- "traefik.http.routers.id-manager.rule=PathPrefix(`/id-manager`)"
- "traefik.http.services.id-manager.loadbalancer.server.port=8080"
- "traefik.http.routers.id-manager.tls=true"
patientlist:
image: docker.verbis.dkfz.de/bridgehead/mainzelliste
container_name: bridgehead-patientlist
environment:
- TOMCAT_REVERSEPROXY_FQDN=${HOST}
- ML_SITE=${IDMANAGEMENT_FRIENDLY_ID}
- ML_DB_PASS=${PATIENTLIST_POSTGRES_PASSWORD}
- ML_API_KEY=${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
- ML_UPLOAD_API_KEY=${IDMANAGER_UPLOAD_APIKEY}
# Add Variables from /etc/patientlist-id-generators.env
- PATIENTLIST_SEEDS_TRANSFORMED
labels:
- "traefik.enable=true"
- "traefik.http.routers.patientlist.rule=PathPrefix(`/patientlist`)"
- "traefik.http.services.patientlist.loadbalancer.server.port=8080"
- "traefik.http.routers.patientlist.tls=true"
depends_on:
- patientlist-db
patientlist-db:
image: docker.verbis.dkfz.de/cache/postgres:15.4-alpine
container_name: bridgehead-patientlist-db
environment:
POSTGRES_USER: "mainzelliste"
POSTGRES_DB: "mainzelliste"
POSTGRES_PASSWORD: ${PATIENTLIST_POSTGRES_PASSWORD}
volumes:
- "patientlist-db-data:/var/lib/postgresql/data"
# NOTE: Add backups here. This is only imported if /var/lib/bridgehead/data/patientlist/ is empty!!!
- "/tmp/bridgehead/patientlist/:/docker-entrypoint-initdb.d/"
volumes:
patientlist-db-data:

View File

@ -1,53 +0,0 @@
#!/bin/bash
function idManagementSetup() {
if [ -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log INFO "id-management setup detected -- will start id-management (mainzelliste & magicpl)."
OVERRIDE+=" -f ./$PROJECT/modules/id-management-compose.yml"
# Auto Generate local Passwords
PATIENTLIST_POSTGRES_PASSWORD="$(echo \"id-management-module-db-password-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
IDMANAGER_LOCAL_PATIENTLIST_APIKEY="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
# Transform Seeds Configuration to pass it to the Mainzelliste Container
PATIENTLIST_SEEDS_TRANSFORMED="$(declare -p PATIENTLIST_SEEDS | tr -d '\"' | sed 's/\[/\[\"/g' | sed 's/\]/\"\]/g')"
# Ensure old ids are working !!!
export IDMANAGEMENT_FRIENDLY_ID=$(legacyIdMapping "$SITE_ID")
fi
}
# Transform into single string array, e.g. 'dktk-test' to 'dktk test'
# Usage: transformToSingleStringArray 'dktk-test' -> 'dktk test'
function transformToSingleStringArray() {
echo "${1//-/ }";
}
# Ensure all Words are Uppercase
# Usage: transformToUppercase 'dktk test' -> 'Dktk Test'
function transformToUppercase() {
result="";
for word in $1; do
result+=" ${word^}";
done
echo "$result";
}
# Handle all exceptions from the norm (e.g. LMU, TUM)
# Usage: applySpecialCases 'Muenchen Lmu Test' -> 'Muenchen LMU Test'
function applySpecialCases() {
result="$1";
result="${result/Lmu/LMU}";
result="${result/Tum/TUM}";
result="${result/Dktk Test/Teststandort}";
echo "$result";
}
# Transform current siteids to legacy version
# Usage: legacyIdMapping "dktk-test" -> "DktkTest"
function legacyIdMapping() {
single_string_array=$(transformToSingleStringArray "$1");
uppercase_string=$(transformToUppercase "$single_string_array");
normalized_string=$(applySpecialCases "$uppercase_string");
echo "$normalized_string" | tr -d ' '
}

View File

@ -1,66 +0,0 @@
# Module: Id-Management
This module provides integration with the CCP-Pseudonymization Service. To learn more about the background of this service, you can refer to the [CCP Data Protection Concept](https://dktk.dkfz.de/klinische-plattformen/documents-download).
## Getting Started
The following configuration variables are added to your site's configuration repository:
```
IDMANAGER_UPLOAD_APIKEY="<random-string>"
IDMANAGER_READ_APIKEY="<random-string>"
IDMANAGER_CENTRAL_PATIENTLIST_APIKEY="<given-to-you-by-ccp-it>"
IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY="<given-to-you-by-ccp-it>"
IDMANAGER_AUTH_CLIENT_ID="<given-to-you-by-ccp-it>"
IDMANAGER_AUTH_CLIENT_SECRET="<given-to-you-by-ccp-it>"
IDMANAGER_SEEDS_BK="<three-numbers>"
IDMANAGER_SEEDS_MDS="<three-numbers>"
IDMANAGER_SEEDS_DKTK000001985="<three-numbers>"
```
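One way to fill the `<random-string>` placeholders is to generate values the same way the Bridgehead scripts do elsewhere in this repository (this is only a suggestion; any sufficiently random string works):
```
IDMANAGER_UPLOAD_APIKEY="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
IDMANAGER_READ_APIKEY="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
```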
> NOTE: Additionally, the CCP-IT adds lines declaring the `PATIENTLIST_SEEDS` array in your site configuration. This will contain the seeds for the different id-generators used in all projects.
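For illustration, such a declaration could look roughly like the following. This is a purely hypothetical sketch; the actual array keys and seed values are defined by CCP-IT and will differ for your site:
```
# Hypothetical example only; the real keys and seeds are supplied by CCP-IT
declare -A PATIENTLIST_SEEDS=(
  [project1]="111 222 333"
  [project2]="444 555 666"
)
```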
Once your Bridgehead is updated and restarted, you're all set!
## Additional information you may want to know
### Services
Upon configuration, the Bridgehead will spawn the following services:
- The `bridgehead-id-manager` at https://bridgehead.local/id-manager provides a common interface for creating pseudonyms in the Bridgehead.
- The `bridgehead-patientlist` at https://bridgehead.local/patientlist is a local instance of the open-source software [Mainzelliste](https://mainzelliste.de). This service's primary task is to map patients' IDAT to pseudonyms identifying them across the different CCP projects.
- The `bridgehead-patientlist-db` is only accessible within the Bridgehead itself. This is a local PostgreSQL instance storing the database for `bridgehead-patientlist`. The data is persisted in the named volume `patientlist-db-data` (see the quick check below).
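If you want to convince yourself that these services came up after a restart, a quick check along the following lines can help (an illustrative snippet; the container and database names match the compose file above):
```
docker ps --filter name=bridgehead-patientlist
docker exec -it bridgehead-patientlist-db psql -U mainzelliste mainzelliste
```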
### How to import an existing database (e.g. from Legacy Windows or from Backups)
First, you must shut down your local Bridgehead instance:
```
systemctl stop bridgehead@ccp
```
Next you need to remove the current patientlist database:
```
docker volume rm patientlist-db-data;
```
Third, you need to place your postgres dump in the import directory, e.g. as `/tmp/bridgehead/patientlist/some-dump.sql`. It will only be imported if the volume `patientlist-db-data` was removed previously.
> NOTE: Please create the postgres dump with the options "--no-owner" and "--no-privileges". Additionally, ensure the dump is created in the plain format (SQL).
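On the source system, a dump matching these requirements could be created roughly like this (a sketch; adjust the user, database name and connection parameters to your legacy database):
```
pg_dump --no-owner --no-privileges --format=p -U mainzelliste mainzelliste > /tmp/bridgehead/patientlist/some-dump.sql
```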
After this, you can restart your bridgehead and the dump will be imported:
```
systemctl start bridgehead@ccp
```
### How to connect your local data-management
Typically, sites connect their local data management for pseudonym creation to the id-management in the Bridgehead. The following two sections describe where to change the configuration:
#### Sites using CentraXX
On your CentraXX server, you need to change the following settings in the "centraxx-dev.properties" file.
```
dktk.idmanagement.url=https://<your-linux-bk-host>/id-manager/translator/getId
dktk.idmanagement.apiKey=<your-setting-for-IDMANAGER_UPLOAD_APIKEY>
```
They typically already exist, but need to be changed to the new values!
#### Sites using ADT2FHIR
@Pierre
### How to connect the legacy windows bridgehead
You need to change the configuration file "..." of your Windows Bridgehead. TODO...

View File

@ -1,36 +0,0 @@
version: "3.7"
services:
mtba:
image: docker.verbis.dkfz.de/cache/samply/mtba:1.0.0
container_name: bridgehead-mtba
environment:
BLAZE_STORE_URL: http://blaze:8080
# NOTE: Currently uses the same permissions as MagicPL!!!
# TODO: Add separate ApiKey to MagicPL only for MTBA!
ID_MANAGER_API_KEY: ${IDMANAGER_UPLOAD_APIKEY}
ID_MANAGER_PSEUDONYM_ID_TYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
ID_MANAGER_URL: http://id-manager:8080/id-manager
PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER}
PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER}
PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER}
PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER}
CBIOPORTAL_URL: http://cbioportal:8080
FILE_CHARSET: ${MTBA_FILE_CHARSET}
FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE}
CSV_DELIMITER: ${MTBA_CSV_DELIMITER}
labels:
- "traefik.enable=true"
- "traefik.http.routers.mtba.rule=PathPrefix(`/`)"
- "traefik.http.services.mtba.loadbalancer.server.port=80"
- "traefik.http.routers.mtba.tls=true"
volumes:
- /tmp/bridgehead/mtba/input:/app/input
- /tmp/bridgehead/mtba/persist:/app/persist
# TODO: Include CBioPortal in Deployment ...
# NOTE: CBioPortal can't load data while the system is running, so after importing data the Bridgehead needs to be restarted!
# TODO: Find a trigger to let mtba signal a restart for CBioPortal
volumes:
mtba-data:

View File

@ -1,12 +0,0 @@
#!/bin/bash
function mtbaSetup() {
if [ -n "$ENABLE_MTBA" ];then
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
exit 1;
fi
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
fi
}

View File

@ -1,29 +0,0 @@
version: "3.7"
volumes:
nngm-rest:
services:
connector:
container_name: bridgehead-connector
image: docker.verbis.dkfz.de/ccp/nngm-rest:main
environment:
CTS_MAGICPL_API_KEY: ${NNGM_MAGICPL_APIKEY}
CTS_API_KEY: ${NNGM_CTS_APIKEY}
CRYPT_KEY: ${NNGM_CRYPTKEY}
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
- "traefik.http.middlewares.connector_strip.stripprefix.prefixes=/nngm-connector"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
volumes:
- nngm-rest:/var/log
traefik:
labels:
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"

View File

@ -1,6 +0,0 @@
#!/bin/bash
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
fi

32
ccp/nngm-compose.yml Normal file
View File

@ -0,0 +1,32 @@
version: "3.7"
services:
connector:
container_name: bridgehead-connector
image: docker.verbis.dkfz.de/ccp/connector:bk2
environment:
POSTGRES_PASSWORD: ${CONNECTOR_POSTGRES_PASSWORD}
NNGM_MAGICPL_APIKEY: ${NNGM_MAGICPL_APIKEY}
NNGM_MAINZELLISTE_APIKEY: ${NNGM_MAINZELLISTE_APIKEY}
NNGM_CTS_APIKEY: ${NNGM_CTS_APIKEY}
NNGM_CRYPTKEY: ${NNGM_CRYPTKEY}
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/ccp-connector`)"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
connector_db:
image: postgres:9.5-alpine
container_name: bridgehead-ccp-connector-db
volumes:
- "connector_db_data:/var/lib/postgresql/data"
environment:
POSTGRES_DB: "samplyconnector"
POSTGRES_USER: "samplyconnector"
POSTGRES_PASSWORD: ${CONNECTOR_POSTGRES_PASSWORD}
restart: always
volumes:
connector_db_data:

9
ccp/nngm-setup.sh Normal file
View File

@ -0,0 +1,9 @@
#!/bin/bash
function nngmSetup() {
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/nngm-compose.yml"
fi
CONNECTOR_POSTGRES_PASSWORD="$(echo \"This is a salt string to generate one consistent password. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
}

View File

@ -1,20 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUN7yzueIZzwpe8PaPEIMY8zoH+eMwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNTIzMTAxNzIzWhcNMzMw
NTIwMTAxNzUzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAN5JAj+HydSGaxvA0AOcrXVTZ9FfsH0cMVBlQb72
bGZgrRvkqtB011TNXZfsHl7rPxCY61DcsDJfFq3+8VHT+S9HE0qV1bEwP+oA3xc4
Opq77av77cNNOqDC7h+jyPhHcUaE33iddmrH9Zn2ofWTSkKHHu3PAe5udCrc2QnD
4PLRF6gqiEY1mcGknJrXj1ff/X0nRY/m6cnHNXz0Cvh8oPOtbdfGgfZjID2/fJNP
fNoNKqN+5oJAZ+ZZ9id9rBvKj1ivW3F2EoGjZF268SgZzc5QrM/D1OpSBQf5SF/V
qUPcQTgt9ry3YR+SZYazLkfKMEOWEa0WsqJVgXdQ6FyergcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEa70kcseqU5
bHx2zSt4bG21HokhMB8GA1UdIwQYMBaAFEa70kcseqU5bHx2zSt4bG21HokhMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCGmE7NXW4T
6J4mV3b132cGEMD7grx5JeiXK5EHMlswUS+Odz0NcBNzhUHdG4WVMbrilHbI5Ua+
6jdKx5WwnqzjQvElP0MCw6sH/35gbokWgk1provOP99WOFRsQs+9Sm8M2XtMf9HZ
m3wABwU/O+dhZZ1OT1PjSZD0OKWKqH/KvlsoF5R6P888KpeYFiIWiUNS5z21Jm8A
ZcllJjiRJ60EmDwSUOQVJJSMOvtr6xTZDZLtAKSN8zN08lsNGzyrFwqjDwU0WTqp
scMXEGBsWQjlvxqDnXyljepR0oqRIjOvgrWaIgbxcnu98tK/OdBGwlAPKNUW7Crr
vO+eHxl9iqd4
MIIDNTCCAh2gAwIBAgIUMeGRSrNPhRdQ1tU7uK5+lUa4f38wDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIwOTI5MTQxMjU1WhcNMzIw
OTI2MTQxMzI1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMYyroOUeb27mYzClOrjCmgIceLalsFA0aVCh5mZ
KtP8+1U3oq/7exP30gXiJojxW7xoerfyQY9s0Sz5YYbxYbuskFOYEtyAILB/pxgd
+k+J3tlZKolpfmo7WT5tZiHxH/zjrtAYGnuB2xPHRMCWh/tHYrELgXQuilNol24y
GBa1plTlARy0aKEDUHp87WLhD2qH7B8sFlLgo0+gunE1UtR2HMSPF45w3VXszyG6
fJNrAj0yPnKy3Dm1BMO3jDO2e0A9lCQ71a4j4TeKePfCk1xCArSu6PpiwiacKplF
c6CRR6KrWVm2g+8Y2hFcOBG/Py2xusm3PWbpylGq6vtFRkkCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEFxD6BQwQO5
xsJ+3cvZypsnh6dDMB8GA1UdIwQYMBaAFEFxD6BQwQO5xsJ+3cvZypsnh6dDMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB5zTeIhV/3
3Am6O144EFtnIeaZ2w0D6aEHqHAZp50vJv3+uQfOliCOzgw7VDxI4Zz2JALjlR/i
uOYHsu3YIRMIOmPOjqrdDJa6auB0ufL4oUPfCRln7Fh0f3JVlz3BUoHsSDt949p4
g0nnsciL2JHuzlqjn7Jyt3L7dAHrlFKulCcuidG5D3cqXrRCbF83f+k3TC/HRiNd
25oMi7I4MP/SOCdfQGUGIsHIf/0hSm3pNjDOrC/XuI/8gh2f5io+Y8V+hMwMBcm4
JbH8bdyBB+EIhsNbTwf2MWntD5bmg47sf7hh23aNvKXI67Li1pTI2t1CqiGnFR0U
fCEpeaEAHs0k
-----END CERTIFICATE-----

View File

@ -1,20 +1,15 @@
BROKER_ID=broker.ccp-it.dktk.dkfz.de
BROKER_ID=broker.dev.ccp-it.dktk.dkfz.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=32
SPOT_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
SPOT_BEAM_SECRET_LONG="ApiKey spot.${PROXY_ID} ${SPOT_BEAM_SECRET_SHORT}"
REPORTHUB_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
REPORTHUB_BEAM_SECRET_LONG="ApiKey report-hub.${PROXY_ID} ${REPORTHUB_BEAM_SECRET_SHORT}"
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
idManagementSetup
mtbaSetup
adt2fhirRestSetup
# This will load nngm setup. Effective only if nngm configuration is defined.
source $PROJECT/nngm-setup.sh
nngmSetup
source $PROJECT/exliquid-setup.sh
exliquidSetup

View File

@ -1,14 +0,0 @@
[Unit]
Description=Start ECDC Bridgehead
[Service]
Type=simple
ExecStart=/srv/docker/bridgehead/restart_service.sh
ExecStop=/srv/docker/bridgehead/shutdown_service.sh
Restart=always
RestartSec=36000
KillMode=mixed
[Install]
WantedBy=default.target

View File

@ -9,31 +9,12 @@ detectCompose() {
fi
}
setupProxy() {
### Note: As the current data protection concepts do not allow communication via HTTP,
### we are not setting a proxy for HTTP requests.
local http="no"
local https="no"
if [ $HTTPS_PROXY_URL ]; then
local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
local hostport=$(echo $HTTPS_PROXY_URL | sed -e "s,$proto,,g" | cut -d/ -f1)
HTTPS_PROXY_HOST="$(echo $hostport | sed -e 's,:.*,,g')"
HTTPS_PROXY_PORT="$(echo $hostport | sed -e 's,^.*:,:,g' -e 's,.*:\([0-9]*\).*,\1,g' -e 's,[^0-9],,g')"
if [[ ! -z "$HTTPS_PROXY_USERNAME" && ! -z "$HTTPS_PROXY_PASSWORD" ]]; then
local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
HTTPS_PROXY_FULL_URL="$(echo $proto$HTTPS_PROXY_USERNAME:$HTTPS_PROXY_PASSWORD@$fqdn)"
https="authenticated"
getLdmPassword() {
if [ -n "$LDM_PASSWORD" ]; then
docker run --rm httpd:alpine htpasswd -nb $PROJECT $LDM_PASSWORD | tr -d '\n' | tr -d '\r'
else
HTTPS_PROXY_FULL_URL=$HTTPS_PROXY_URL
https="unauthenticated"
echo -n ""
fi
fi
log INFO "Configuring proxy servers: $http http proxy (we're not supporting unencrypted comms), $https https proxy"
export HTTPS_PROXY_HOST HTTPS_PROXY_PORT HTTPS_PROXY_FULL_URL
}
exitIfNotRoot() {
@ -53,7 +34,7 @@ checkOwner(){
}
printUsage() {
echo "Usage: bridgehead start|stop|is-running|update|install|uninstall|adduser|enroll PROJECTNAME"
echo "Usage: bridgehead start|stop|update|install|uninstall|enroll PROJECTNAME"
echo "PROJECTNAME should be one of ccp|bbmri"
}
@ -76,7 +57,7 @@ fetchVarsFromVault() {
set +e
PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy docker.verbis.dkfz.de/cache/samply/bridgehead-vaultfetcher:latest $@)
PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy samply/bridgehead-vaultfetcher $@)
RET=$?
if [ $RET -ne 0 ]; then
@ -150,22 +131,11 @@ fail_and_report() {
setHostname() {
if [ -z "$HOST" ]; then
export HOST=$(hostname -f | tr "[:upper:]" "[:lower:]")
export HOST=$(hostname -f)
log DEBUG "Using auto-detected hostname $HOST."
fi
}
# Takes 1) the backup directory path and 2) the name of the service to be backed up
# Creates 3 backups: 1) for the past seven days, 2) for the current month and 3) for each calendar week
createEncryptedPostgresBackup(){
docker exec "$2" bash -c 'pg_dump -U $POSTGRES_USER $POSTGRES_DB --format=p --no-owner --no-privileges' | \
# TODO: Encrypt using /etc/bridgehead/pki/${SITE_ID}.priv.pem | \
tee "$1/$2/$(date +Last-%A).sql" | \
tee "$1/$2/$(date +%Y-%m).sql" > \
"$1/$2/$(date +%Y-KW%V).sql"
}
# from: https://gist.github.com/sj26/88e1c6584397bb7c13bd11108a579746
# ex. use: retry 5 /bin/false
function retry {
@ -188,118 +158,6 @@ function retry {
return 0
}
function bk_is_running {
detectCompose
RUNNING="$($COMPOSE -p $PROJECT -f minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE ps -q)"
NUMBEROFRUNNING=$(echo "$RUNNING" | wc -l)
if [ $NUMBEROFRUNNING -ge 2 ]; then
return 0
else
return 1
fi
}
function do_enroll_inner {
PARAMS=""
MANUAL_PROXY_ID="${1:-$PROXY_ID}"
if [ -z "$MANUAL_PROXY_ID" ]; then
log ERROR "No Proxy ID set"
exit 1
else
log INFO "Enrolling Beam Proxy Id $MANUAL_PROXY_ID"
fi
SUPPORT_EMAIL="${2:-$SUPPORT_EMAIL}"
if [ -n "$SUPPORT_EMAIL" ]; then
PARAMS+="--admin-email $SUPPORT_EMAIL"
fi
docker run --rm -v /etc/bridgehead/pki:/etc/bridgehead/pki docker.verbis.dkfz.de/cache/samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
chmod 600 $PRIVATEKEYFILENAME
}
function do_enroll {
do_enroll_inner $@
}
add_basic_auth_user() {
USER="${1}"
PASSWORD="${2}"
NAME="${3}"
PROJECT="${4}"
FILE="/etc/bridgehead/${PROJECT}.local.conf"
ENCRY_CREDENTIALS="$(docker run --rm docker.verbis.dkfz.de/cache/httpd:alpine htpasswd -nb $USER $PASSWORD | tr -d '\n' | tr -d '\r')"
if [ -f $FILE ] && grep -R -q "$NAME=" $FILE # if a specific basic auth user already exists:
then
sed -i "/$NAME/ s|='|='$ENCRY_CREDENTIALS,|" $FILE
else
echo -e "\n## Basic Authentication Credentials for:\n$NAME='$ENCRY_CREDENTIALS'" >> $FILE;
fi
log DEBUG "Saving clear text credentials in $FILE. If wanted, delete them manually."
sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
}
function clone_repo_if_nonexistent() {
local repo_url="$1" # First argument: Repository URL
local target_dir="$2" # Second argument: Target directory
local branch_name="$3" # Third argument: Branch name
echo Repo directory: $target_dir
# Check if the target directory exists
if [ ! -d "$target_dir" ]; then
echo "Directory '$target_dir' does not exist. Cloning the repository..."
# Clone the repository
git clone "$repo_url" "$target_dir"
fi
# Change to the cloned directory
cd "$target_dir"
# Checkout the specified branch
chown -R bridgehead .
su bridgehead -c "git checkout $branch_name"
cd -
}
function clone_transfair_if_nonexistent() {
local base_dir="$1"
clone_repo_if_nonexistent https://github.com/samply/transFAIR.git $base_dir/transfair ehds2_develop
}
function clone_focus_if_nonexistent() {
local base_dir="$1"
clone_repo_if_nonexistent https://github.com/samply/focus.git $base_dir/focus ehds2
}
function build_transfair() {
local base_dir="$1"
# We only take the trouble to build transfair if:
#
# 1. There is data available (any CSV files) and
# 2. There is no data lock file (which means that no ETL has yet been run).
if ls ../ecdc/data/*.[cC][sS][vV] 1> /dev/null 2>&1 && [ ! -f ../ecdc/data/lock ]; then
cd $base_dir/transfair
su bridgehead -c "git pull"
docker build --progress=plain -t samply/transfair --no-cache .
chown -R bridgehead .
cd -
fi
}
function build_focus() {
local base_dir="$1"
cd $base_dir/focus
su bridgehead -c "git pull"
docker build --progress=plain -f DockerfileWithBuild -t samply/focus --no-cache .
chown -R bridgehead .
cd -
}
##Setting Network properties
# currently not needed
#export HOSTIP=$(MSYS_NO_PATHCONV=1 docker run --rm --add-host=host.docker.internal:host-gateway ubuntu cat /etc/hosts | grep 'host.docker.internal' | awk '{print $1}');

View File

@ -29,16 +29,12 @@ bridgehead ALL= NOPASSWD: BRIDGEHEAD${PROJECT^^}
EOF
# TODO: Determine whether this should be located in setup-bridgehead (triggered through bridgehead install) or in update bridgehead (triggered every hour)
if [ -z "$LDM_AUTH" ]; then
log "INFO" "Now generating basic auth for the local data management (see adduser in bridgehead for more information). "
if [ -z "$LDM_PASSWORD" ]; then
log "INFO" "Now generating a password for the local data management. Please save the password for your ETL process!"
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
add_basic_auth_user $PROJECT $generated_passwd "LDM_AUTH" $PROJECT
fi
if [ ! -z "$NNGM_CTS_APIKEY" ] && [ -z "$NNGM_AUTH" ]; then
log "INFO" "Now generating basic auth for nNGM upload API (see adduser in bridgehead for more information). "
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
add_basic_auth_user "nngm" $generated_passwd "NNGM_AUTH" $PROJECT
log "INFO" "Your generated credentials are:\n user: $PROJECT\n password: $generated_passwd"
echo -e "## Local Data Management Basic Authentication\n# User: $PROJECT\nLDM_PASSWORD=$generated_passwd" >> /etc/bridgehead/${PROJECT}.local.conf;
fi
log "INFO" "Registering system units for bridgehead and bridgehead-update"

View File

@ -47,8 +47,8 @@ function hc_send(){
if [ -n "$2" ]; then
MSG="$2\n\nDocker stats:\n$UPTIME"
echo -e "$MSG" | https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
echo -e "$MSG" | https_proxy=$HTTPS_PROXY_URL curl -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
else
https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
https_proxy=$HTTPS_PROXY_URL curl -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
fi
}

View File

@ -1,21 +1,10 @@
#!/bin/bash -e
DEV_MODE="${1:-NODEV}"
source lib/log.sh
source lib/functions.sh
log "INFO" "Preparing your system for bridgehead installation ..."
# Check, if running in WSL
if [[ $(grep -i Microsoft /proc/version) ]]; then
# Check, if systemd is available
if [ "$(systemctl is-system-running)" = "offline" ]; then
log "ERROR" "It seems you have no active systemd environment in your WSL environment. Please follow the guide in https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/"
exit 1
fi
fi
# Create the bridgehead user
if id bridgehead &>/dev/null; then
log "INFO" "Existing user with id $(id -u bridgehead) will be used by the bridgehead system units."
@ -25,12 +14,7 @@ else
fi
# Clone the OpenSource repository of bridgehead
set +e
bridgehead_repository_url=$(git remote get-url origin)
if [ $? -ne 0 ]; then
bridgehead_repository_url="https://github.com/samply/bridgehead.git"
fi
set -e
bridgehead_repository_url="https://github.com/samply/bridgehead.git"
if [ -d "/srv/docker/bridgehead" ]; then
current_owner=$(stat -c '%U' /srv/docker/bridgehead)
if [ "$(su -c 'git -C /srv/docker/bridgehead remote get-url origin' $current_owner)" == "$bridgehead_repository_url" ]; then
@ -42,7 +26,7 @@ if [ -d "/srv/docker/bridgehead" ]; then
else
log "INFO" "Cloning $bridgehead_repository_url to /srv/docker/bridgehead"
mkdir -p /srv/docker/
git clone $bridgehead_repository_url /srv/docker/bridgehead
git clone $bridgehead_repository_url /srv/docker/bridgehead
fi
case "$PROJECT" in
@ -52,8 +36,8 @@ case "$PROJECT" in
bbmri)
site_configuration_repository_middle="git.verbis.dkfz.de/bbmri-bridgehead-configs/"
;;
minimal)
site_configuration_repository_middle="git.verbis.dkfz.de/minimal-bridgehead-configs/"
snap)
site_configuration_repository_middle="git.verbis.dkfz.de/bridgehead-configurations/bridgehead-config-"
;;
*)
log ERROR "Internal error, this should not happen."
@ -69,26 +53,18 @@ if [ -d /etc/bridgehead ]; then
else
log "WARN" "Your site configuration repository in /etc/bridgehead seems to have another origin than git.verbis.dkfz.de. Please check if the repository is correctly cloned!"
fi
elif [[ "$DEV_MODE" == "NODEV" ]]; then
else
log "INFO" "Now cloning your site configuration repository for you."
if [ -z "$site" ]; then
read -p "Please enter your site: " site
fi
if [ -z "$access_token" ]; then
read -s -p "Please enter the bridgehead's access token for your site configuration repository (will not be echoed): " access_token
fi
site_configuration_repository_url="https://bytoken:${access_token}@${site_configuration_repository_middle}$(echo $site | tr '[:upper:]' '[:lower:]').git"
git clone $site_configuration_repository_url /etc/bridgehead
if [ $? -gt 0 ]; then
log "ERROR" "Unable to clone your configuration repository. Please obtain correct access data and try again."
fi
elif [[ "$DEV_MODE" == "DEV" ]]; then
log "INFO" "Now cloning your developer configuration repository for you."
read -p "Please enter your config repository URL: " url
git clone "$url" /etc/bridgehead
fi
chown -R bridgehead /etc/bridgehead /srv/docker/bridgehead
log INFO "System preparation is completed and configuration is present."
log INFO "System preparation is completed and private key is present."

View File

@ -14,7 +14,7 @@ checkOwner /etc/bridgehead bridgehead || exit 1
## Check if user is a su
log INFO "Checking if all prerequisites are met ..."
prerequisites="git docker curl"
prerequisites="git docker"
for prerequisite in $prerequisites; do
$prerequisite --version 2>&1
is_available=$?
@ -62,34 +62,6 @@ if [ -e /etc/bridgehead/vault.conf ]; then
fi
fi
log INFO "Checking network access ($BROKER_URL_FOR_PREREQ) ..."
source /etc/bridgehead/${PROJECT}.conf
source ${PROJECT}/vars
set +e
SERVERTIME="$(https_proxy=$HTTPS_PROXY_FULL_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
RET=$?
set -e
if [ $RET -ne 0 ]; then
log WARN "Unable to connect to Samply.Beam broker at $BROKER_URL_FOR_PREREQ. Please check your proxy settings.\nThe currently configured proxy was \"$HTTPS_PROXY_URL\". This error is normal when using proxy authentication."
log WARN "Unable to check clock skew due to previous error."
else
log INFO "Checking clock skew ..."
SERVERTIME_AS_TIMESTAMP=$(date --date="$SERVERTIME" +%s)
MYTIME=$(date +%s)
SKEW=$(($SERVERTIME_AS_TIMESTAMP - $MYTIME))
SKEW=$(echo $SKEW | awk -F- '{print $NF}')
SYNCTEXT="For example, consider entering a correct NTP server (e.g. your institution's Active Directory Domain Controller in /etc/systemd/timesyncd.conf (option NTP=) and restart systemd-timesyncd."
if [ $SKEW -ge 300 ]; then
report_error 5 "Your clock is not synchronized (${SKEW}s off). This will cause Samply.Beam's certificate will fail. Please setup time synchronization. $SYNCTEXT"
log WARN "Server Time Error"
elif [ $SKEW -ge 60 ]; then
log WARN "Your clock is more than a minute off (${SKEW}s). Consider syncing to a time server. $SYNCTEXT"
fi
fi
checkPrivKey() {
if [ -e /etc/bridgehead/pki/${SITE_ID}.priv.pem ]; then
log INFO "Success - private key found."
@ -97,6 +69,8 @@ checkPrivKey() {
log ERROR "Unable to find private key at /etc/bridgehead/pki/${SITE_ID}.priv.pem. To fix, please run\n bridgehead enroll ${PROJECT}\nand follow the instructions."
return 1
fi
log INFO "Success - all prerequisites are met!"
hc_send log "Success - all prerequisites are met!"
return 0
}
@ -106,7 +80,4 @@ else
checkPrivKey || exit 1
fi
log INFO "Success - all prerequisites are met!"
hc_send log "Success - all prerequisites are met!"
exit 0

View File

@ -4,15 +4,10 @@ source lib/functions.sh
AUTO_HOUSEKEEPING=${AUTO_HOUSEKEEPING:-true}
if [ "$AUTO_HOUSEKEEPING" == "true" ]; then
A="Performing automatic maintenance: "
if bk_is_running; then
A="$A Cleaning docker images."
docker system prune -a -f
else
A="$A Not cleaning docker images since BK is not running."
fi
A="Performing automatic maintenance: Cleaning docker images."
hc_send log "$A"
log INFO "$A"
docker system prune -a -f
else
log WARN "Automatic housekeeping disabled (variable AUTO_HOUSEKEEPING != \"true\")"
fi
@ -30,7 +25,7 @@ source $CONFFILE
assertVarsNotEmpty SITE_ID || fail_and_report 1 "Update failed: SITE_ID empty"
export SITE_ID
checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
checkOwner . bridgehead || fail_and_report 1 "Update failed: Wrong permissions in $(pwd)"
checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
@ -50,12 +45,12 @@ for DIR in /etc/bridgehead $(pwd); do
git -C $DIR config credential.helper "$CREDHELPER"
fi
old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
if [ -z "$HTTP_PROXY_URL" ]; then
log "INFO" "Git is using no proxy!"
OUT=$(retry 5 git -C $DIR fetch 2>&1 && retry 5 git -C $DIR pull 2>&1)
else
log "INFO" "Git is using proxy ${HTTPS_PROXY_URL} from ${CONFFILE}"
OUT=$(retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR pull 2>&1)
log "INFO" "Git is using proxy ${HTTP_PROXY_URL} from ${CONFFILE}"
OUT=$(retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR pull 2>&1)
fi
if [ $? -ne 0 ]; then
report_error log "Unable to update git $DIR: $OUT"
@ -86,7 +81,7 @@ done
# Check docker updates
log "INFO" "Checking for updates to running docker images ..."
docker_updated="false"
for IMAGE in $(cat $PROJECT/docker-compose.yml ${OVERRIDE//-f/} minimal/docker-compose.yml | grep -v "^#" | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
for IMAGE in $(cat $PROJECT/docker-compose.yml | grep -v "^#" | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
log "INFO" "Checking for Updates of Image: $IMAGE"
if docker pull $IMAGE | grep "Downloaded newer image"; then
CHANGE="Image $IMAGE updated."
@ -108,46 +103,6 @@ else
hc_send log "$RES"
fi
if [ -n "${BACKUP_DIRECTORY}" ]; then
if [ ! -d "$BACKUP_DIRECTORY" ]; then
message="Performing automatic maintenance: Attempting to create backup directory $BACKUP_DIRECTORY."
hc_send log "$message"
log INFO "$message"
mkdir -p "$BACKUP_DIRECTORY"
chown -R "$BACKUP_DIRECTORY" bridgehead;
fi
checkOwner "$BACKUP_DIRECTORY" bridgehead || fail_and_report 1 "Automatic maintenance failed: Wrong permissions for backup directory $BACKUP_DIRECTORY"
# Collect all container names that contain '-db'
BACKUP_SERVICES="$(docker ps --filter name=-db --format "{{.Names}}" | tr "\n" "\ ")"
log INFO "Performing automatic maintenance: Creating Backups for $BACKUP_SERVICES";
for service in $BACKUP_SERVICES; do
if [ ! -d "$BACKUP_DIRECTORY/$service" ]; then
message="Performing automatic maintenance: Attempting to create backup directory for $service in $BACKUP_DIRECTORY."
hc_send log "$message"
log INFO "$message"
mkdir -p "$BACKUP_DIRECTORY/$service"
fi
if createEncryptedPostgresBackup "$BACKUP_DIRECTORY" "$service"; then
message="Performing automatic maintenance: Stored encrypted backup for $service in $BACKUP_DIRECTORY."
hc_send log "$message"
log INFO "$message"
else
fail_and_report 5 "Failed to create encrypted update for $service"
fi
done
else
log WARN "Automated backups are disabled (variable AUTO_BACKUPS != \"true\")"
fi
#TODO: the following block can be deleted after successful update at all sites
if [ ! -z "$LDM_PASSWORD" ]; then
FILE="/etc/bridgehead/$PROJECT.local.conf"
log "INFO" "Migrating LDM_PASSWORD to encrypted credentials in $FILE"
add_basic_auth_user $PROJECT $LDM_PASSWORD "LDM_AUTH" $PROJECT
add_basic_auth_user $PROJECT $LDM_PASSWORD "NNGM_AUTH" $PROJECT
sed -i "/LDM_PASSWORD/{d;}" $FILE
fi
exit 0
# TODO: Print last commit explicitly

View File

@ -1,53 +0,0 @@
version: "3.7"
services:
dnpm-beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-dnpm-beam-proxy
environment:
BROKER_URL: ${DNPM_BROKER_URL}
PROXY_ID: ${DNPM_PROXY_ID}
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
ROOTCERT_FILE: ./conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
dnpm-beam-connect:
depends_on: [ dnpm-beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://dnpm-beam-proxy:8081
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${DNPM_PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
HTTP_PROXY: http://forward_proxy:3128
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
- "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
- "traefik.http.routers.dnpm-connect.tls=true"
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem

View File

@ -1,33 +0,0 @@
version: "3.7"
services:
dnpm-backend:
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
container_name: bridgehead-dnpm-backend
environment:
- ZPM_SITE=${ZPM_SITE}
volumes:
- /etc/bridgehead/dnpm:/bwhc_config:ro
- ${DNPM_DATA_DIR}:/bwhc_data
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
- "traefik.http.routers.bwhc-backend.tls=true"
dnpm-frontend:
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
container_name: bridgehead-dnpm-frontend
links:
- dnpm-backend
environment:
- NUXT_HOST=0.0.0.0
- NUXT_PORT=8080
- BACKEND_PROTOCOL=https
- BACKEND_HOSTNAME=$HOST
- BACKEND_PORT=443
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.bwhc-frontend.tls=true"

View File

@ -1,27 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
if [ -z "${DNPM_DATA_DIR+x}" ]; then
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
exit 1
fi
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
echo "Override of landing page url already in place"
else
echo "Adding override of landing page url"
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
else
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
fi
fi
fi

View File

@ -1,16 +0,0 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM}" ]; then
log DEBUG "DNPM setup detected (Beam.Connect) -- will start Beam and Beam.Connect for DNPM."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
# Set variables required for Beam-Connect
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
DNPM_BROKER_ID="broker.ccp-it.dktk.dkfz.de"
DNPM_BROKER_URL="https://${DNPM_BROKER_ID}"
if [ -z ${BROKER_URL_FOR_PREREQ+x} ]; then
BROKER_URL_FOR_PREREQ=$DNPM_BROKER_URL
log DEBUG "No Broker for clock check set; using $DNPM_BROKER_URL"
fi
DNPM_PROXY_ID="${SITE_ID}.${DNPM_BROKER_ID}"
fi

View File

@ -1,29 +0,0 @@
version: "3.7"
volumes:
nngm-rest:
services:
connector:
container_name: bridgehead-connector
image: docker.verbis.dkfz.de/ccp/nngm-rest:main
environment:
CTS_MAGICPL_API_KEY: ${NNGM_MAGICPL_APIKEY}
CTS_API_KEY: ${NNGM_CTS_APIKEY}
CRYPT_KEY: ${NNGM_CRYPTKEY}
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
- "traefik.http.middlewares.connector_strip.stripprefix.prefixes=/nngm-connector"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
volumes:
- nngm-rest:/var/log
traefik:
labels:
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"

View File

@ -1,6 +0,0 @@
#!/bin/bash
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
fi

View File

@ -1,6 +0,0 @@
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem

View File

@ -1,30 +0,0 @@
#!/bin/bash
# Start a running Bridgehead. If there is already a Bridgehead running,
# stop it first.
# This is intended to be used by systemctl.
cd /srv/docker/bridgehead
echo "git status before stop"
git status
echo "Stopping running Bridgehead, if present"
./bridgehead stop bbmri
# If "flush_blaze" is present, delete the Blaze volume before starting
# the Bridgehead again. This allows a user to upload all data, if
# requested.
if [ -f "/srv/docker/ecdc/data/flush_blaze" ]; then
docker volume rm bbmri_blaze-data
rm -f /srv/docker/ecdc/data/flush_blaze
fi
echo "git status before start"
git status | systemd-cat -p info
echo "Start the Bridgehead anew"
./bridgehead start bbmri
echo "Bridgehead has unexpectedly terminated"

83
run.sh
View File

@ -1,83 +0,0 @@
#!/usr/bin/env bash
# Start a Bridgehead from the command line. Upload data if requested.
# Behind the scenes we use systemctl to do the work.
# Function to print usage
print_usage() {
echo "Start a Bridghead, optionally upload data"
echo "Usage: $0 [--upload | --upload-all | --help | -h]"
echo "Options:"
echo " --upload Run Bridgehead and upload just the new CSV data files."
echo " --upload-all Run Bridgehead and upload all CSV data files."
echo " --help, -h Display this help message."
echo " No options Run Bridgehead only."
}
# Initialize variables
UPLOAD=false
UPLOAD_ALL=false
# Parse arguments
while [[ "$#" -gt 0 ]]; do
case $1 in
--upload)
UPLOAD=true
;;
--upload-all)
UPLOAD_ALL=true
;;
--help|-h)
print_usage
exit 0
;;
*)
echo "Error: Unknown argument '$1'"
print_usage
exit 1
;;
esac
shift
done
# Check for conflicting options
if [ "$UPLOAD" = true ] && [ "$UPLOAD_ALL" = true ]; then
echo "Error: you must specify either --upload or --upload-all, specifying both is not permitted."
print_usage
exit 1
fi
# Disable/stop standard Bridgehead systemctl services, if present
sudo systemctl disable bridgehead@bbmri.service >& /dev/null
sudo systemctl disable system-bridgehead.slice >& /dev/null
sudo systemctl disable bridgehead-update@bbmri.timer >& /dev/null
sudo systemctl stop bridgehead@bbmri.service >& /dev/null
sudo systemctl stop system-bridgehead.slice >& /dev/null
sudo systemctl stop bridgehead-update@bbmri.timer >& /dev/null
# Set up systemctl for EHDS2/ECDC if necessary
cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
systemctl daemon-reload
systemctl enable ecdc.service
# Use systemctl to stop the Bridgehead if it is running
sudo systemctl stop ecdc.service
# Use files to tell the Bridgehead what to do with any data present
if [ "$UPLOAD" = true ] || [ "$UPLOAD_ALL" = true ]; then
if [ -f /srv/docker/ecdc/data/lock ]; then
rm /srv/docker/ecdc/data/lock
fi
fi
if [ "$UPLOAD_ALL" = true ]; then
echo "All CSV files in /srv/docker/ecdc/data will be uploaded"
touch /srv/docker/ecdc/data/flush_blaze
fi
# Start up the Bridgehead
sudo systemctl start ecdc.service
# Show status of Bridgehead service
sleep 10
systemctl status ecdc.service

View File

@ -1,13 +0,0 @@
#!/bin/bash
# Shut down a running Bridgehead.
# This is intended to be used by systemctl.
cd /srv/docker/bridgehead
echo "git status before stop"
git status
echo "Stopping running Bridgehead, if present"
./bridgehead stop bbmri

View File

@ -3,7 +3,7 @@ version: "3.7"
services:
traefik:
container_name: bridgehead-traefik
image: docker.verbis.dkfz.de/cache/traefik:latest
image: traefik:latest
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
@ -21,7 +21,7 @@ services:
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_AUTH}"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
ports:
- 80:80
- 443:443
@ -32,28 +32,52 @@ services:
forward_proxy:
container_name: bridgehead-forward-proxy
image: docker.verbis.dkfz.de/cache/samply/bridgehead-forward-proxy:latest
image: samply/bridgehead-forward-proxy:latest
environment:
HTTPS_PROXY: ${HTTPS_PROXY_URL}
HTTPS_PROXY_USERNAME: ${HTTPS_PROXY_USERNAME}
HTTPS_PROXY_PASSWORD: ${HTTPS_PROXY_PASSWORD}
tmpfs:
- /var/log/squid
- /var/spool/squid
USERNAME: ${HTTPS_PROXY_USERNAME}
PASSWORD: ${HTTPS_PROXY_PASSWORD}
volumes:
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
landing:
container_name: bridgehead-landingpage
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:master
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
spot:
image: docker.verbis.dkfz.de/ccp-private/aql-local-spot
container_name: bridgehead-spot
environment:
HOST: ${HOST}
PROJECT: ${PROJECT}
SITE_NAME: ${SITE_NAME}
SECRET: ${SPOT_BEAM_SECRET_LONG}
APPID: spot
PROXY_ID: ${PROXY_ID}
LDM_URL: ${LDM_URL}
AUTH_USER: ${AUTH_USER}
AUTH_PW: ${AUTH_PW}
BEAM_PROXY: http://beam-proxy:8081
depends_on:
- "beam-proxy"
beam-proxy:
image: "samply/beam-proxy:develop"
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_0_ID: snap
APP_0_KEY: ${SPOT_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- ./root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem

20
snap/root.crt.pem Normal file
View File

@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUMeGRSrNPhRdQ1tU7uK5+lUa4f38wDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIwOTI5MTQxMjU1WhcNMzIw
OTI2MTQxMzI1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMYyroOUeb27mYzClOrjCmgIceLalsFA0aVCh5mZ
KtP8+1U3oq/7exP30gXiJojxW7xoerfyQY9s0Sz5YYbxYbuskFOYEtyAILB/pxgd
+k+J3tlZKolpfmo7WT5tZiHxH/zjrtAYGnuB2xPHRMCWh/tHYrELgXQuilNol24y
GBa1plTlARy0aKEDUHp87WLhD2qH7B8sFlLgo0+gunE1UtR2HMSPF45w3VXszyG6
fJNrAj0yPnKy3Dm1BMO3jDO2e0A9lCQ71a4j4TeKePfCk1xCArSu6PpiwiacKplF
c6CRR6KrWVm2g+8Y2hFcOBG/Py2xusm3PWbpylGq6vtFRkkCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEFxD6BQwQO5
xsJ+3cvZypsnh6dDMB8GA1UdIwQYMBaAFEFxD6BQwQO5xsJ+3cvZypsnh6dDMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB5zTeIhV/3
3Am6O144EFtnIeaZ2w0D6aEHqHAZp50vJv3+uQfOliCOzgw7VDxI4Zz2JALjlR/i
uOYHsu3YIRMIOmPOjqrdDJa6auB0ufL4oUPfCRln7Fh0f3JVlz3BUoHsSDt949p4
g0nnsciL2JHuzlqjn7Jyt3L7dAHrlFKulCcuidG5D3cqXrRCbF83f+k3TC/HRiNd
25oMi7I4MP/SOCdfQGUGIsHIf/0hSm3pNjDOrC/XuI/8gh2f5io+Y8V+hMwMBcm4
JbH8bdyBB+EIhsNbTwf2MWntD5bmg47sf7hh23aNvKXI67Li1pTI2t1CqiGnFR0U
fCEpeaEAHs0k
-----END CERTIFICATE-----

9
snap/vars Normal file
View File

@ -0,0 +1,9 @@
BROKER_ID=broker.dev.ccp-it.dktk.dkfz.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
SPOT_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
SPOT_BEAM_SECRET_LONG="ApiKey spot.${PROXY_ID} ${SPOT_BEAM_SECRET_SHORT}"
REPORTHUB_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
REPORTHUB_BEAM_SECRET_LONG="ApiKey report-hub.${PROXY_ID} ${REPORTHUB_BEAM_SECRET_SHORT}"
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem

43
stop.sh
View File

@ -1,43 +0,0 @@
#!/usr/bin/env bash
# Shut down a running Bridgehead.
# Behind the scenes we use systemctl to do the work.
# Function to print usage
print_usage() {
echo "Stop the running Bridgehead"
echo "Usage: $0 [--help | -h]"
echo "Options:"
echo " --help, -h Display this help message."
echo " No options Stop Bridgehead only."
}
# Parse arguments
while [[ "$#" -gt 0 ]]; do
case $1 in
--help|-h)
print_usage
exit 0
;;
*)
echo "Error: Unknown argument '$1'"
print_usage
exit 1
;;
esac
shift
done
# Set up systemctl for EHDS2/ECDC if necessary
cp /srv/docker/bridgehead/ecdc.service /etc/systemd/system
systemctl daemon-reload
systemctl enable ecdc.service
# Use systemctl to stop the Bridgehead if it is running
sudo systemctl stop ecdc.service
# Show status of Bridgehead service
sleep 20
systemctl status ecdc.service
docker ps