Mirror of https://github.com/samply/bridgehead.git, synced 2025-09-11 17:11:23 +02:00

Compare commits

73 Commits

refactor/l ... feature/ex
| Author | SHA1 | Date |
|---|---|---|
| | 65055e2f16 | |
| | 5d55723bed | |
| | 8414604257 | |
| | 4c6f9e0f13 | |
| | a1cdc2659d | |
| | 92bc0557a3 | |
| | 141f1f22d0 | |
| | b4a788e010 | |
| | c33fbfc8bc | |
| | faa8abd4ee | |
| | 7693289d4d | |
| | d482324361 | |
| | b7a42f3d3b | |
| | fd013232f5 | |
| | eb52554892 | |
| | 08c695e960 | |
| | 1513fe1c6c | |
| | af08a9fb08 | |
| | b95f0efbe7 | |
| | 99567e2b40 | |
| | 96ff6043a1 | |
| | 844ce3386e | |
| | 9782bf66b6 | |
| | 87f0e8ad7f | |
| | 7365be3e7b | |
| | c5d08c50a4 | |
| | 72ecaadba8 | |
| | 2ddd535794 | |
| | 973547c322 | |
| | 6b649c9233 | |
| | 3144ee5214 | |
| | 68804dc71b | |
| | e5aebfe382 | |
| | 6f3aba1eaa | |
| | 82ced89b33 | |
| | 5d94bac0e2 | |
| | 83555540f5 | |
| | e396e00178 | |
| | ecb29830e4 | |
| | 98121c17e8 | |
| | e38511e118 | |
| | 8334fac84d | |
| | 8000356b57 | |
| | 74d8e68d96 | |
| | c568a56651 | |
| | 8384143387 | |
| | 8fe73a8123 | |
| | bca63e82a9 | |
| | 721627a78f | |
| | e08ff92401 | |
| | e3553370b6 | |
| | 1ad73d8f82 | |
| | 0b6fa439ba | |
| | 615990b92a | |
| | db950d6d87 | |
| | 6a71da3dd1 | |
| | 138a1fa5f1 | |
| | 39a87bcf61 | |
| | 655d0d24c7 | |
| | fa0d9fb8b4 | |
| | 139fcecabe | |
| | 2058a7a5c9 | |
| | 47364f999e | |
| | 910289079b | |
| | 1003cd73cf | |
| | 3d1105b97c | |
| | 5c28e704d2 | |
| | df1ec21848 | |
| | e3510363ad | |
| | 45aefd24e5 | |
| | 122ff16bb1 | |
| | a4e292dd18 | |
| | 75089ab428 | |
1
.github/CODEOWNERS
vendored
Normal file
@@ -0,0 +1 @@
* @samply/bridgehead-developers
136
README.md
@@ -22,11 +22,16 @@ This repository is the starting point for any information and tools you will nee
- [TLS terminating proxies](#tls-terminating-proxies)
- [File structure](#file-structure)
- [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
- [Directory sync tool](#directory-sync-tool)
- [Loading data](#loading-data)
- [Teiler (Frontend)](#teiler-frontend)
- [Data Exporter Service](#data-exporter-service)
- [Data Quality Report](#data-quality-report)
4. [Things you should know](#things-you-should-know)
- [Auto-Updates](#auto-updates)
- [Auto-Backups](#auto-backups)
- [Non-Linux OS](#non-linux-os)
- [FAQ](#faq)
5. [Troubleshooting](#troubleshooting)
- [Docker Daemon Proxy Configuration](#docker-daemon-proxy-configuration)
- [Monitoring](#monitoring)
@@ -76,7 +81,7 @@ The following URLs need to be accessible (prefix with `https://`):
* git.verbis.dkfz.de
* To fetch docker images
* docker.verbis.dkfz.de
* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/all))
* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/setup/allow-list/))
* hub.docker.com
* registry-1.docker.io
* production.cloudflare.docker.com
@@ -154,7 +159,7 @@ Pay special attention to:

Clone the bridgehead repository:

```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
sudo git clone -b main https://github.com/samply/bridgehead.git /srv/docker/bridgehead
```

Then, run the installation script:

@@ -254,6 +259,8 @@ sh bridgehead uninstall

## Site-specific configuration

[How to Change Config Access Token](docs/update-access-token.md)

### HTTPS Access

Even within your internal network, the Bridgehead enforces HTTPS for all services. During the installation, a self-signed, long-lived certificate was created for you. To increase security, you can simply replace the files under `/etc/bridgehead/traefik-tls` with ones from established certification authorities such as [Let's Encrypt](https://letsencrypt.org) or [DFN-AAI](https://www.aai.dfn.de).
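For example, a minimal sketch of swapping in a CA-issued certificate: the file names `fullchain.pem` and `privkey.pem` are assumptions, so match whatever names the self-signed files under `/etc/bridgehead/traefik-tls` already use at your site.

```shell
# Hypothetical example: replace the self-signed TLS material with CA-issued files.
# Stop the Bridgehead, copy the new certificate and key over the existing files,
# then start it again so the reverse proxy picks them up.
sudo systemctl stop bridgehead@<PROJECT>.service
sudo cp fullchain.pem privkey.pem /etc/bridgehead/traefik-tls/
sudo systemctl start bridgehead@<PROJECT>.service
```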
@@ -299,26 +306,38 @@ Once you have added your biobank to the Directory you got persistent identifier

### Directory sync tool

The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. the number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country-specific directory URL, username and password to enable this feature.
The Bridgehead's **Directory Sync** is an optional feature that keeps the BBMRI-ERIC Directory up to date with your local data, e.g. the number of samples. Conversely, it can also update the local FHIR store with the latest contact details etc. from the BBMRI-ERIC Directory.

You should talk with your local data protection group regarding the information that is published by Directory sync.

Full details can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).

To enable it, you will need to set these variables in the ```bbmri.conf``` file of your GitLab repository. Here is an example config:
To enable it, you will need to explicitly set the username and password variables for the BBMRI-ERIC Directory login in the configuration file of your GitLab repository (e.g. ```bbmri.conf```). Here is a minimal example config:

```
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=your_directory_password
```

Please contact your National Node to obtain this information.
Please contact your National Node or Directory support (directory-dev@helpdesk.bbmri-eric.eu) to obtain these credentials.

Optionally, you **may** change when you want Directory sync to run by specifying a [cron](https://crontab.guru) expression, e.g. `DS_TIMER_CRON="0 22 * * *"` for 10 pm every evening.
The following environment variables can be used in your config file to control the behavior of Directory sync:

Once you have edited the GitLab config, the Bridgehead will auto-update the config with the new values and will sync the data.

| Variable | Purpose | Default if not specified |
|:-----------------------------------|:------------------------------------------------------------------------------|:---------------------------------------|
| DS_DIRECTORY_URL | Base URL of the Directory | https://directory-backend.molgenis.net |
| DS_DIRECTORY_USER_NAME | User name for logging in to the Directory (**mandatory**) | |
| DS_DIRECTORY_USER_PASS | Password for logging in to the Directory (**mandatory**) | |
| DS_DIRECTORY_DEFAULT_COLLECTION_ID | ID of the collection to be used if it is not given in the samples | |
| DS_DIRECTORY_ALLOW_STAR_MODEL | Set to 'True' to send star model info to the Directory | True |
| DS_FHIR_STORE_URL | URL of the FHIR store | http://bridgehead-bbmri-blaze:8080 |
| DS_TIMER_CRON | Execution interval for Directory sync, in [cron](https://crontab.guru) format | 0 22 * * * |
| DS_IMPORT_BIOBANKS | Set to 'True' to import biobank metadata from the Directory | True |
| DS_IMPORT_COLLECTIONS | Set to 'True' to import collection metadata from the Directory | True |
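Putting it together, a fuller example config might look like the sketch below. The two credentials are the only mandatory entries, the remaining lines simply restate the defaults from the table above, and the collection ID is a placeholder you would replace with your own.

```
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=your_directory_password
DS_DIRECTORY_URL=https://directory-backend.molgenis.net
DS_DIRECTORY_DEFAULT_COLLECTION_ID=your_default_collection_id
DS_DIRECTORY_ALLOW_STAR_MODEL=True
DS_FHIR_STORE_URL=http://bridgehead-bbmri-blaze:8080
DS_TIMER_CRON="0 22 * * *"
DS_IMPORT_BIOBANKS=True
DS_IMPORT_COLLECTIONS=True
```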
Once you have finished editing the config, the Bridgehead will automatically pull the updated config and will sync data at regular intervals, at the time specified in DS_TIMER_CRON.

There will be a delay before the effects of Directory sync become visible. First, you will need to wait until the time you have specified in ```DS_TIMER_CRON```. Second, the information will then be synchronized from your national node with the central European Directory. This can take up to 24 hours.

More details of Directory sync can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).

### Loading data

The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).
@@ -338,6 +357,24 @@ The storage space on your hard drive will depend on the number of FHIR resources

For more information on Blaze performance, please refer to [import performance](https://github.com/samply/blaze/blob/master/docs/performance/import.md).
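As a rough illustration of what a loading step can look like (not part of the official ETL tooling), an ETL job running inside the Bridgehead's Docker network could POST a FHIR transaction bundle directly to the store, which the compose files in this repository address internally as `http://bridgehead-bbmri-blaze:8080`:

```shell
# Hypothetical upload step: send a FHIR transaction bundle to the Blaze store.
# The URL matches the store's internal BASE_URL from the compose file; adjust it
# if your ETL reaches the store through the Bridgehead's reverse proxy instead.
curl -X POST \
  -H "Content-Type: application/fhir+json" \
  --data @bundle.json \
  http://bridgehead-bbmri-blaze:8080/fhir
```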
### Clearing data

The Bridgehead's FHIR store, Blaze, saves its data in a Docker volume. This means that the data will persist even if you stop the Bridgehead. You can clear existing data from the FHIR store by deleting the relevant Docker volume.

First, stop the Bridgehead:
```shell
sudo systemctl stop bridgehead@<PROJECT>.service
```
Now remove the volume:
```shell
docker volume rm <PROJECT>_blaze-data
```
Finally, restart the Bridgehead:
```shell
sudo systemctl start bridgehead@<PROJECT>.service
```
You will need to do this, for example, if you are using a VM as a test environment and you subsequently want to use the same VM for production.

#### ETL for BBMRI and GBA

Normally, you will need to build your own ETL to feed the Bridgehead. However, there is one case where a shortcut might be available:
@@ -345,6 +382,39 @@ Normally, you will need to build your own ETL to feed the Bridgehead. However, t

You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).
### Teiler (Frontend)

Teiler is the web-based frontend of the Bridgehead, providing access to its various internal and external services and components.
To learn how to integrate your custom module into Teiler, please refer to https://github.com/samply/teiler-dashboard.
- To activate Teiler, set the following environment variable in your `<PROJECT>.conf` file:

```bash
ENABLE_TEILER=true
```
[For further information](ccp/modules/teiler.md)

### Data Exporter Service

The Exporter is a dedicated service for extracting and exporting Bridgehead data in (tabular) formats such as Excel, CSV, Opal, JSON, XML, ...
- To enable the Exporter service, set the following environment variable in your `<PROJECT>.conf` file:

```bash
ENABLE_EXPORTER=true
```
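The Exporter's API routes are protected by basic auth (the `EXPORTER_USER` credential referenced in the exporter compose file). A sketch of creating such a user with the repository's `adduser` helper, whose prompt includes `EXPORTER_USER`, assuming the `bbmri` project and a checkout under `/srv/docker/bridgehead`:

```shell
# Interactive helper from this repository; choose EXPORTER_USER when prompted
cd /srv/docker/bridgehead
sudo sh bridgehead adduser bbmri
```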
#### Data Quality Report

To assess the quality and plausibility of your imported data, the Reporter component is pre-configured to generate Excel reports with data quality metrics and statistical analyses. Reporter is part of the Exporter and can be enabled by setting the same environment variable in your `<PROJECT>.conf` file:
```bash
ENABLE_EXPORTER=true
```

For convenience, it is recommended to enable the Teiler web frontend alongside the Exporter to access export and quality control features via a web interface. Set the following environment variables in your `<PROJECT>.conf` file:
```bash
ENABLE_TEILER=true
ENABLE_EXPORTER=true
```
[For further information](ccp/modules/exporter.md)

## Things you should know

### Auto-Updates
@@ -384,6 +454,54 @@ We have tested the installation procedure with an Ubuntu 22.04 guest system runn

Installation under WSL ought to work, but we have not tested this.

### FAQ

**Q: How is the security of GitHub pulls, volumes/containers, and image signing ensured?**

A: Changes to Git branches that could be delivered to sites (main and develop) must be accepted via a pull request with at least two positive reviews.
Containers/images are not built manually, but rather automatically through a CI/CD pipeline, so that an image can be rolled back to a defined code version at any time without changes.
**Note:** If firewall access for (outgoing) connections to GitHub and/or Docker Hub is problematic at the site, mirrors for both services are available, operated by the DKFZ.

**Q: How is authentication between users and components regulated?**

A: When setting up a Bridgehead, a private key and a so-called Certificate Sign Request (CSR) are generated locally. This CSR is manually signed by the broker operator, which allows the Bridgehead access to the network infrastructure.
All communication runs via Samply.Beam and is therefore end-to-end encrypted, but also signed. This allows the integrity and authenticity of the sender to be technically verified (which happens automatically both in the broker and at the recipients).
The connection to the broker is additionally secured using traditional TLS (transport encryption over HTTPS).

**Q: Are there any statistics on incoming traffic from the Bridgehead (what goes in and what goes out)?**

A: Incoming and outgoing traffic can only enter/leave the Bridgehead via a forward or reverse proxy, respectively. These components log all connections.
Statistical analysis is not currently being conducted, but is on the roadmap for some projects. We are also working on a dashboard for all tasks/responses delivered via Samply.Beam.

**Q: How is container access controlled, and what permission level is used?**

A: Currently, it is not possible to run the Bridgehead "out-of-the-box" as a rootless Docker Compose stack. The main reason is the operation of the reverse proxy (Traefik), which binds to the privileged ports 80 (HTTP) and 443 (HTTPS).
Otherwise, there are no known technical obstacles, although we don't have concrete experience implementing this.
At the file system level, a "bridgehead" user is created during installation, which manages the configuration and Bridgehead folders.

**Q: Is a cloud installation (not a company-owned one, but an external service provider) possible?**

A: Technically, yes. This is primarily a data protection issue between the participant and their cloud provider.
The Bridgehead contains a data storage system that, during use, contains sensitive patient and sample data.
There are cloud providers with whom appropriately worded contracts can be concluded to make this possible.
Of course, the details must be discussed with the responsible data protection officer.

**Q: What needs to be considered regarding the Docker distribution/registry, and how is it used here?**

A: The Bridgehead images are located both in Docker Hub and mirrored in a registry operated by the DKFZ.
The latter is used by default, avoiding potential issues with Docker Hub URL activation or rate limits.
When using automatic updates (highly recommended), a daily check is performed for:
- site configuration updates
- Bridgehead software updates
- container image updates

If updates are found, they are downloaded and applied.
See the first question for the control mechanism.

**Q: Is data only transferred one-way (Bridgehead/FHIR Store → Central/Locator), or is two-way access necessary?**

A: By using Samply.Beam, only one outgoing connection to the broker is required at the network level (i.e., Bridgehead → Broker).

## Troubleshooting

### Docker Daemon Proxy Configuration
@@ -4,7 +4,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-bbmri-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-bbmri-blaze:8080"
|
||||
|
@@ -12,5 +12,7 @@ services:
|
||||
DS_DIRECTORY_MOCK: ${DS_DIRECTORY_MOCK}
|
||||
DS_DIRECTORY_DEFAULT_COLLECTION_ID: ${DS_DIRECTORY_DEFAULT_COLLECTION_ID}
|
||||
DS_DIRECTORY_COUNTRY: ${DS_DIRECTORY_COUNTRY}
|
||||
DS_IMPORT_BIOBANKS: ${DS_IMPORT_BIOBANKS:-true}
|
||||
DS_IMPORT_COLLECTIONS: ${DS_IMPORT_COLLECTIONS:-true}
|
||||
depends_on:
|
||||
- "blaze"
|
||||
|
@@ -10,6 +10,10 @@ if [ "${ENABLE_ERIC}" == "true" ]; then
|
||||
export ERIC_BROKER_ID=broker.bbmri.samply.de
|
||||
export ERIC_ROOT_CERT=eric
|
||||
;;
|
||||
"acceptance")
|
||||
export ERIC_BROKER_ID=broker-acc.bbmri-acc.samply.de
|
||||
export ERIC_ROOT_CERT=eric.acc
|
||||
;;
|
||||
"test")
|
||||
export ERIC_BROKER_ID=broker-test.bbmri-test.samply.de
|
||||
export ERIC_ROOT_CERT=eric.test
|
||||
|
20
bbmri/modules/eric.acc.root.crt.pem
Normal file
@@ -0,0 +1,20 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIDNTCCAh2gAwIBAgIUFzdpDi1OLdXyogtCsktHFhCILtMwDQYJKoZIhvcNAQEL
|
||||
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjUwNjEwMTQzNjE1WhcNMzUw
|
||||
NjA4MTQzNjQ1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
|
||||
AQEBBQADggEPADCCAQoCggEBALpJCWE9Qe19R9DqotdkPV6jfiuJSKI3UYkCWdWG
|
||||
nRfkKB6OaY5t3JCHDqaEME9FwSd2nFXhTp5F6snG/K7g8MCLIEzGzuSnrdjGqINq
|
||||
zXLfgqnxvQpPR4ARLNNgnKxZaq7m4Q3T/l+QAshK6CnCUWFQ6q5x3g/pZHFP2USd
|
||||
/G2FtDHX6YK4bHbbnigIPG6PdY2RYy60i30XGdIPBNf82XGkAtPUBz731gHOV5Vg
|
||||
d+jfAqTwZAhYC2CcNmswFw1H9GrvTI/9KZWKcZNUIqemc0A/FyEyONUM18/vjQ7D
|
||||
lUwOcQsgAg44QTOUPgqXv3sJPQM5EnGuv3yYV9u6Y2i78M8CAwEAAaN7MHkwDgYD
|
||||
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFPrDeNWgtEyZ
|
||||
VM0yeoRZdK2QGjyvMB8GA1UdIwQYMBaAFPrDeNWgtEyZVM0yeoRZdK2QGjyvMBYG
|
||||
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQAD2S0kqL18
|
||||
laewh+qnyZ0WMq12mLV/Rwll6ZuShCx2uAu3UZuIGWk3l7gG5zlws+i+zbaNcn4o
|
||||
HsS3WG9kiNLOMKp8LXGkjErl6RaQr+kb8qgYFTPjOr6v0OdVn6ve9RDNYB5Hd+zE
|
||||
9jAWmS8PfS2AldE4VAd0C4pWTAinhnKGrKdn1YAX5x+LMq1y0lc1Pd4CDgsjD6SS
|
||||
3td7JtenXqCX0mN0XSeck7vvFGa6QpcQoVcN9tRENctHZTwyeGA21IkXylpFPUkE
|
||||
LT60k48fNC8TZkBlfvtVGRebpm5krXIKEaVy5LniEpSuOR4hTqsgoQDntBjW4zHA
|
||||
GeWQ1wQNTEBX
|
||||
-----END CERTIFICATE-----
|
86
bbmri/modules/exporter-compose.yml
Normal file
@@ -0,0 +1,86 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
exporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
|
||||
container_name: bridgehead-bbmri-exporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
EXPORTER_DB_USER: "exporter"
|
||||
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
|
||||
HTTP_RELATIVE_PATH: "/bbmri-exporter"
|
||||
SITE: "${SITE_ID}"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.exporter_bbmri.rule=PathPrefix(`/bbmri-exporter`)"
|
||||
- "traefik.http.services.exporter_bbmri.loadbalancer.server.port=8092"
|
||||
- "traefik.http.routers.exporter_bbmri.tls=true"
|
||||
- "traefik.http.middlewares.exporter_bbmri_strip.stripprefix.prefixes=/bbmri-exporter"
|
||||
- "traefik.http.routers.exporter_bbmri.middlewares=exporter_bbmri_strip"
|
||||
# Main router
|
||||
- "traefik.http.routers.exporter_bbmri.priority=20"
|
||||
|
||||
# API router
|
||||
- "traefik.http.routers.exporter_bbmri_api.middlewares=exporter_bbmri_strip,exporter_auth"
|
||||
- "traefik.http.routers.exporter_bbmri_api.rule=PathRegexp(`/bbmri-exporter/.+`)"
|
||||
- "traefik.http.routers.exporter_bbmri_api.tls=true"
|
||||
- "traefik.http.routers.exporter_bbmri_api.priority=25"
|
||||
|
||||
# Shared middlewares
|
||||
- "traefik.http.middlewares.exporter_auth.basicauth.users=${EXPORTER_USER}"
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/bbmri/exporter-files:/app/exporter-files/output"
|
||||
|
||||
exporter-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
container_name: bridgehead-bbmri-exporter-db
|
||||
environment:
|
||||
POSTGRES_USER: "exporter"
|
||||
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
POSTGRES_DB: "exporter"
|
||||
volumes:
|
||||
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
|
||||
- "/var/cache/bridgehead/bbmri/exporter-db:/var/lib/postgresql/data"
|
||||
|
||||
reporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
|
||||
container_name: bridgehead-bbmri-reporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
HTTP_RELATIVE_PATH: "/bbmri-reporter"
|
||||
SITE: "${SITE_ID}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
LOG_FHIR_VALIDATION: "false"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
|
||||
# In this initial development state of the bridgehead, we are trying to use as few volumes as possible.
# However, in the first executions at the bbmri sites, this volume has proven to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/bbmri/reporter-files:/app/reports"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.reporter_bbmri.rule=PathPrefix(`/bbmri-reporter`)"
|
||||
- "traefik.http.services.reporter_bbmri.loadbalancer.server.port=8095"
|
||||
- "traefik.http.routers.reporter_bbmri.tls=true"
|
||||
- "traefik.http.middlewares.reporter_bbmri_strip.stripprefix.prefixes=/bbmri-reporter"
|
||||
- "traefik.http.routers.reporter_bbmri.middlewares=reporter_bbmri_strip"
|
||||
- "traefik.http.routers.reporter_bbmri.priority=20"
|
||||
|
||||
- "traefik.http.routers.reporter_bbmri_api.middlewares=reporter_bbmri_strip,exporter_auth"
|
||||
- "traefik.http.routers.reporter_bbmri_api.rule=PathRegexp(`/bbmri-reporter/.+`)"
|
||||
- "traefik.http.routers.reporter_bbmri_api.tls=true"
|
||||
- "traefik.http.routers.reporter_bbmri_api.priority=25"
|
||||
|
8
bbmri/modules/exporter-setup.sh
Normal file
@@ -0,0 +1,8 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_EXPORTER" == true ]; then
|
||||
log INFO "Exporter setup detected -- will start Exporter service."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
|
||||
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
|
||||
fi
|
69
bbmri/modules/teiler-compose.yml
Normal file
@@ -0,0 +1,69 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
teiler-orchestrator:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
|
||||
container_name: bridgehead-teiler-orchestrator
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_orchestrator_bbmri.rule=PathPrefix(`/bbmri-teiler`)"
|
||||
- "traefik.http.services.teiler_orchestrator_bbmri.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.teiler_orchestrator_bbmri.tls=true"
|
||||
- "traefik.http.middlewares.teiler_orchestrator_bbmri_strip.stripprefix.prefixes=/bbmri-teiler"
|
||||
- "traefik.http.routers.teiler_orchestrator_bbmri.middlewares=teiler_orchestrator_bbmri_strip"
|
||||
environment:
|
||||
TEILER_BACKEND_URL: "/bbmri-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/bbmri-teiler-dashboard"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
|
||||
HTTP_RELATIVE_PATH: "/bbmri-teiler"
|
||||
|
||||
teiler-dashboard:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
|
||||
container_name: bridgehead-teiler-dashboard
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_dashboard_bbmri.rule=PathPrefix(`/bbmri-teiler-dashboard`)"
|
||||
- "traefik.http.services.teiler_dashboard_bbmri.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.teiler_dashboard_bbmri.tls=true"
|
||||
- "traefik.http.middlewares.teiler_dashboard_bbmri_strip.stripprefix.prefixes=/bbmri-teiler-dashboard"
|
||||
- "traefik.http.routers.teiler_dashboard_bbmri.middlewares=teiler_dashboard_bbmri_strip"
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "/bbmri-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/bbmri-teiler-dashboard"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
|
||||
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
|
||||
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "/bbmri-teiler"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/bbmri-teiler"
|
||||
TEILER_USER: "${OIDC_USER_GROUP}"
|
||||
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
|
||||
REPORTER_DEFAULT_TEMPLATE_ID: "bbmri-qb"
|
||||
EXPORTER_DEFAULT_TEMPLATE_ID: "bbmri"
|
||||
|
||||
|
||||
teiler-backend:
|
||||
image: docker.verbis.dkfz.de/ccp/bbmri-teiler-backend:latest
|
||||
container_name: bridgehead-teiler-backend
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_backend_bbmri.rule=PathPrefix(`/bbmri-teiler-backend`)"
|
||||
- "traefik.http.services.teiler_backend_bbmri.loadbalancer.server.port=8085"
|
||||
- "traefik.http.routers.teiler_backend_bbmri.tls=true"
|
||||
- "traefik.http.middlewares.teiler_backend_bbmri_strip.stripprefix.prefixes=/bbmri-teiler-backend"
|
||||
- "traefik.http.routers.teiler_backend_bbmri.middlewares=teiler_backend_bbmri_strip"
|
||||
environment:
|
||||
LOG_LEVEL: "INFO"
|
||||
APPLICATION_PORT: "8085"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/bbmri-teiler"
|
||||
TEILER_ORCHESTRATOR_URL: "/bbmri-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "/bbmri-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "/bbmri-teiler-dashboard/en"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
8
bbmri/modules/teiler-setup.sh
Normal file
@@ -0,0 +1,8 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_TEILER" == true ];then
|
||||
log INFO "Teiler setup detected -- will start Teiler services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
|
||||
TEILER_DEFAULT_LANGUAGE=EN
|
||||
TEILER_DEFAULT_LANGUAGE_LOWER_CASE=${TEILER_DEFAULT_LANGUAGE,,}
|
||||
fi
|
@@ -1,3 +1,9 @@
|
||||
BROKER_ID=broker-test.bbmri-test.samply.de
|
||||
BROKER_URL=https://${BROKER_ID}
|
||||
PROXY_ID=${SITE_ID}.${BROKER_ID}
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
# Makes sense for all European Biobanks
|
||||
: ${ENABLE_ERIC:=true}
|
||||
|
||||
@@ -5,7 +11,6 @@
|
||||
: ${ENABLE_GBN:=false}
|
||||
|
||||
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
|
||||
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
||||
for module in $PROJECT/modules/*.sh
|
||||
do
|
||||
|
54
bridgehead
@@ -53,17 +53,47 @@ case "$PROJECT" in
|
||||
;;
|
||||
esac
|
||||
|
||||
# Loads config variables and runs the project's setup script
|
||||
loadVars() {
|
||||
# Load variables from /etc/bridgehead and /srv/docker/bridgehead
|
||||
set -a
|
||||
# Source the project specific config file
|
||||
source /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "/etc/bridgehead/$PROJECT.conf not found"
|
||||
# Source the project specific local config file if present
|
||||
# This file is ignored by git, as opposed to the regular config file, as it contains private site information like ETL auth data
|
||||
if [ -e /etc/bridgehead/$PROJECT.local.conf ]; then
|
||||
log INFO "Applying /etc/bridgehead/$PROJECT.local.conf"
|
||||
source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
|
||||
fi
|
||||
# Set execution environment: default to prod on the main branch, otherwise test
|
||||
if [[ -z "${ENVIRONMENT+x}" ]]; then
|
||||
if [ "$(git rev-parse --abbrev-ref HEAD)" == "main" ]; then
|
||||
ENVIRONMENT="production"
|
||||
else
|
||||
ENVIRONMENT="test" # we have acceptance environment in BBMRI ERIC and it would be more appropriate to default to that one in case the data they have in BH is real, but I'm gonna leave it as is for backward compatibility
|
||||
fi
|
||||
fi
|
||||
# Source the versions of the image components
|
||||
case "$ENVIRONMENT" in
|
||||
"production")
|
||||
source ./versions/prod
|
||||
;;
|
||||
"test")
|
||||
source ./versions/test
|
||||
;;
|
||||
"acceptance")
|
||||
source ./versions/acceptance
|
||||
;;
|
||||
*)
|
||||
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
|
||||
source ./versions/prod
|
||||
;;
|
||||
esac
|
||||
fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
|
||||
setHostname
|
||||
optimizeBlazeMemoryUsage
|
||||
# Run project specific setup if it exists
|
||||
# This will usually modify the `OVERRIDE` to include all the compose files that the project depends on
|
||||
# This is also where projects specify which modules to load
|
||||
[ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
|
||||
set +a
|
||||
|
||||
@@ -79,26 +109,6 @@ loadVars() {
|
||||
fi
|
||||
detectCompose
|
||||
setupProxy
|
||||
|
||||
# Set some project-independent default values
|
||||
: ${ENVIRONMENT:=production}
|
||||
export ENVIRONMENT
|
||||
|
||||
case "$ENVIRONMENT" in
|
||||
"production")
|
||||
export FOCUS_TAG=main
|
||||
export BEAM_TAG=main
|
||||
;;
|
||||
"test")
|
||||
export FOCUS_TAG=develop
|
||||
export BEAM_TAG=develop
|
||||
;;
|
||||
*)
|
||||
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
|
||||
export FOCUS_TAG=main
|
||||
export BEAM_TAG=main
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
case "$ACTION" in
|
||||
@@ -152,7 +162,7 @@ case "$ACTION" in
|
||||
adduser)
|
||||
loadVars
|
||||
log "INFO" "Adding encrypted credentials in /etc/bridgehead/$PROJECT.local.conf"
|
||||
read -p "Please choose the component (LDM_AUTH|NNGM_AUTH) you want to add a user to : " COMPONENT
|
||||
read -p "Please choose the component (LDM_AUTH|NNGM_AUTH|EXPORTER_USER) you want to add a user to : " COMPONENT
|
||||
read -p "Please enter a username: " USER
|
||||
read -s -p "Please enter a password (will not be echoed): "$'\n' PASSWORD
|
||||
add_basic_auth_user $USER $PASSWORD $COMPONENT $PROJECT
|
||||
|
@@ -2,13 +2,14 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-cce-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-cce-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
|
||||
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
@@ -31,6 +32,10 @@ services:
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
QUERIES_TO_CACHE: '/queries_to_cache.conf'
|
||||
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
|
||||
volumes:
|
||||
- /srv/docker/bridgehead/cce/queries_to_cache.conf:/queries_to_cache.conf:ro
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
87
cce/modules/exporter-compose.yml
Normal file
@@ -0,0 +1,87 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
exporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
|
||||
container_name: bridgehead-cce-exporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
EXPORTER_DB_USER: "exporter"
|
||||
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
|
||||
HTTP_RELATIVE_PATH: "/cce-exporter"
|
||||
SITE: "${SITE_ID}"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.exporter_cce.rule=PathPrefix(`/cce-exporter`)"
|
||||
- "traefik.http.services.exporter_cce.loadbalancer.server.port=8092"
|
||||
- "traefik.http.routers.exporter_cce.tls=true"
|
||||
- "traefik.http.middlewares.exporter_cce_strip.stripprefix.prefixes=/cce-exporter"
|
||||
- "traefik.http.routers.exporter_cce.middlewares=exporter_cce_strip"
|
||||
# Main router
|
||||
- "traefik.http.routers.exporter_cce.priority=20"
|
||||
|
||||
# API router
|
||||
- "traefik.http.routers.exporter_cce_api.middlewares=exporter_cce_strip,exporter_auth"
|
||||
- "traefik.http.routers.exporter_cce_api.rule=PathRegexp(`/cce-exporter/.+`)"
|
||||
- "traefik.http.routers.exporter_cce_api.tls=true"
|
||||
- "traefik.http.routers.exporter_cce_api.priority=25"
|
||||
|
||||
# Shared middlewares
|
||||
- "traefik.http.middlewares.exporter_auth.basicauth.users=${EXPORTER_USER}"
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/cce/exporter-files:/app/exporter-files/output"
|
||||
|
||||
exporter-db:
|
||||
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
|
||||
container_name: bridgehead-cce-exporter-db
|
||||
environment:
|
||||
POSTGRES_USER: "exporter"
|
||||
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
|
||||
POSTGRES_DB: "exporter"
|
||||
volumes:
|
||||
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
|
||||
- "/var/cache/bridgehead/cce/exporter-db:/var/lib/postgresql/data"
|
||||
|
||||
reporter:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
|
||||
container_name: bridgehead-cce-reporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
HTTP_RELATIVE_PATH: "/cce-reporter"
|
||||
SITE: "${SITE_ID}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
LOG_FHIR_VALIDATION: "false"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
|
||||
# In this initial development state of the bridgehead, we are trying to use as few volumes as possible.
# However, in the first executions at the cce sites, this volume has proven to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
|
||||
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/cce/reporter-files:/app/reports"
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.reporter_cce.rule=PathPrefix(`/cce-reporter`)"
|
||||
- "traefik.http.services.reporter_cce.loadbalancer.server.port=8095"
|
||||
- "traefik.http.routers.reporter_cce.tls=true"
|
||||
- "traefik.http.middlewares.reporter_cce_strip.stripprefix.prefixes=/cce-reporter"
|
||||
- "traefik.http.routers.reporter_cce.middlewares=reporter_cce_strip"
|
||||
- "traefik.http.routers.reporter_cce.priority=20"
|
||||
|
||||
- "traefik.http.routers.reporter_cce_api.middlewares=reporter_cce_strip,exporter_auth"
|
||||
- "traefik.http.routers.reporter_cce_api.rule=PathRegexp(`/cce-reporter/.+`)"
|
||||
- "traefik.http.routers.reporter_cce_api.tls=true"
|
||||
- "traefik.http.routers.reporter_cce_api.priority=25"
|
||||
|
8
cce/modules/exporter-setup.sh
Normal file
@@ -0,0 +1,8 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_EXPORTER" == true ]; then
|
||||
log INFO "Exporter setup detected -- will start Exporter service."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
|
||||
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
|
||||
fi
|
@@ -17,7 +17,6 @@ services:
|
||||
BEAM_PROXY_ID: ${SITE_ID}
|
||||
BEAM_BROKER_ID: ${BROKER_ID}
|
||||
BEAM_APP_ID: "focus"
|
||||
PROJECT_METADATA: "cce_supervisors"
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
labels:
|
||||
@@ -30,4 +29,4 @@ services:
|
||||
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
|
||||
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
|
||||
- "traefik.http.routers.spot.tls=true"
|
||||
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"
|
||||
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot,auth"
|
||||
|
69
cce/modules/teiler-compose.yml
Normal file
@@ -0,0 +1,69 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
|
||||
teiler-orchestrator:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
|
||||
container_name: bridgehead-teiler-orchestrator
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_orchestrator_cce.rule=PathPrefix(`/cce-teiler`)"
|
||||
- "traefik.http.services.teiler_orchestrator_cce.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.teiler_orchestrator_cce.tls=true"
|
||||
- "traefik.http.middlewares.teiler_orchestrator_cce_strip.stripprefix.prefixes=/cce-teiler"
|
||||
- "traefik.http.routers.teiler_orchestrator_cce.middlewares=teiler_orchestrator_cce_strip"
|
||||
environment:
|
||||
TEILER_BACKEND_URL: "/cce-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/cce-teiler-dashboard"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
|
||||
HTTP_RELATIVE_PATH: "/cce-teiler"
|
||||
|
||||
teiler-dashboard:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
|
||||
container_name: bridgehead-teiler-dashboard
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_dashboard_cce.rule=PathPrefix(`/cce-teiler-dashboard`)"
|
||||
- "traefik.http.services.teiler_dashboard_cce.loadbalancer.server.port=80"
|
||||
- "traefik.http.routers.teiler_dashboard_cce.tls=true"
|
||||
- "traefik.http.middlewares.teiler_dashboard_cce_strip.stripprefix.prefixes=/cce-teiler-dashboard"
|
||||
- "traefik.http.routers.teiler_dashboard_cce.middlewares=teiler_dashboard_cce_strip"
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "/cce-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/cce-teiler-dashboard"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
|
||||
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
|
||||
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "/cce-teiler"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/cce-teiler"
|
||||
TEILER_USER: "${OIDC_USER_GROUP}"
|
||||
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
|
||||
REPORTER_DEFAULT_TEMPLATE_ID: "cce-qb"
|
||||
EXPORTER_DEFAULT_TEMPLATE_ID: "cce"
|
||||
|
||||
|
||||
teiler-backend:
|
||||
image: docker.verbis.dkfz.de/ccp/cce-teiler-backend:latest
|
||||
container_name: bridgehead-teiler-backend
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.teiler_backend_cce.rule=PathPrefix(`/cce-teiler-backend`)"
|
||||
- "traefik.http.services.teiler_backend_cce.loadbalancer.server.port=8085"
|
||||
- "traefik.http.routers.teiler_backend_cce.tls=true"
|
||||
- "traefik.http.middlewares.teiler_backend_cce_strip.stripprefix.prefixes=/cce-teiler-backend"
|
||||
- "traefik.http.routers.teiler_backend_cce.middlewares=teiler_backend_cce_strip"
|
||||
environment:
|
||||
LOG_LEVEL: "INFO"
|
||||
APPLICATION_PORT: "8085"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/cce-teiler"
|
||||
TEILER_ORCHESTRATOR_URL: "/cce-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "/cce-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "/cce-teiler-dashboard/en"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
8
cce/modules/teiler-setup.sh
Normal file
@@ -0,0 +1,8 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
if [ "$ENABLE_TEILER" == true ];then
|
||||
log INFO "Teiler setup detected -- will start Teiler services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
|
||||
TEILER_DEFAULT_LANGUAGE=EN
|
||||
TEILER_DEFAULT_LANGUAGE_LOWER_CASE=${TEILER_DEFAULT_LANGUAGE,,}
|
||||
fi
|
2
cce/queries_to_cache.conf
Normal file
@@ -0,0 +1,2 @@
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0FHRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIGljZDEwOiAnaHR0cDovL2ZoaXIuZGUvQ29kZVN5c3RlbS9iZmFybS9pY2QtMTAtZ20nCmNvZGVzeXN0ZW0gbW9ycGg6ICd1cm46b2lkOjIuMTYuODQwLjEuMTEzODgzLjYuNDMuMScKCmNvbnRleHQgUGF0aWVudAoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9BR0VfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgooKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDNjEnIGZyb20gaWNkMTBdKSBhbmQKKChleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQ3LzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0ODAvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODUwMC8zJykpKQ==
|
@@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-ccp-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-ccp-blaze:8080"
|
||||
@@ -11,7 +11,6 @@ services:
|
||||
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
|
||||
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
LOG_LEVEL: ${LOG_LEVEL_BLAZE:-WARN}
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
labels:
|
||||
@@ -35,9 +34,8 @@ services:
|
||||
EPSILON: 0.28
|
||||
QUERIES_TO_CACHE: '/queries_to_cache.conf'
|
||||
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
|
||||
RUST_LOG: ${LOG_LEVEL_FOCUS:-WARN}
|
||||
volumes:
|
||||
- /srv/docker/bridgehead/ccp/queries_to_cache.conf:/queries_to_cache.conf
|
||||
- /srv/docker/bridgehead/ccp/queries_to_cache.conf:/queries_to_cache.conf:ro
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
@@ -53,7 +51,6 @@ services:
|
||||
ALL_PROXY: http://forward_proxy:3128
|
||||
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
|
||||
ROOTCERT_FILE: /conf/root.crt.pem
|
||||
RUST_LOG: ${LOG_LEVEL_FOCUS:-WARN}
|
||||
secrets:
|
||||
- proxy.pem
|
||||
depends_on:
|
||||
|
@@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze-secondary:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-ccp-blaze-secondary
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-ccp-blaze-secondary:8080"
|
||||
@@ -10,7 +10,6 @@ services:
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
LOG_LEVEL: ${LOG_LEVEL_BLAZE:-WARN}
|
||||
volumes:
|
||||
- "blaze-secondary-data:/app/data"
|
||||
labels:
|
||||
|
@@ -1,26 +1,6 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
rstudio:
|
||||
container_name: bridgehead-rstudio
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rstudio:latest
|
||||
environment:
|
||||
#DEFAULT_USER: "rstudio" # This line is kept for informational purposes
|
||||
PASSWORD: "${RSTUDIO_ADMIN_PASSWORD}" # It is required, even if the authentication is disabled
|
||||
DISABLE_AUTH: "true" # https://rocker-project.org/images/versioned/rstudio.html#how-to-use
|
||||
HTTP_RELATIVE_PATH: "/rstudio"
|
||||
ALL_PROXY: "http://forward_proxy:3128" # https://rocker-project.org/use/networking.html
|
||||
LOG_LEVEL: ${LOG_LEVEL_RSTUDIO:-WARN}
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.rstudio_ccp.rule=PathPrefix(`/rstudio`)"
|
||||
- "traefik.http.services.rstudio_ccp.loadbalancer.server.port=8787"
|
||||
- "traefik.http.middlewares.rstudio_ccp_strip.stripprefix.prefixes=/rstudio"
|
||||
- "traefik.http.routers.rstudio_ccp.tls=true"
|
||||
- "traefik.http.routers.rstudio_ccp.middlewares=oidcAuth,rstudio_ccp_strip"
|
||||
networks:
|
||||
- rstudio
|
||||
|
||||
opal:
|
||||
container_name: bridgehead-opal
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-opal:latest
|
||||
@@ -46,7 +26,6 @@ services:
|
||||
OPAL_PRIVATE_KEY: "/run/secrets/opal-key.pem"
|
||||
OPAL_CERTIFICATE: "/run/secrets/opal-cert.pem"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
|
||||
OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
|
||||
@@ -55,7 +34,6 @@ services:
|
||||
BEAM_APP_ID: token-manager.${PROXY_ID}
|
||||
BEAM_SECRET: ${TOKEN_MANAGER_SECRET}
|
||||
BEAM_DATASHIELD_PROXY: request-manager
|
||||
LOG_LEVEL: ${LOG_LEVEL_OPAL:-WARN}
|
||||
volumes:
|
||||
- "/var/cache/bridgehead/ccp/opal-metadata-db:/srv" # Opal metadata
|
||||
secrets:
|
||||
@@ -77,8 +55,6 @@ services:
|
||||
image: docker.verbis.dkfz.de/ccp/dktk-rserver # datashield/rock-base + dsCCPhos
|
||||
tmpfs:
|
||||
- /srv
|
||||
environment:
|
||||
LOG_LEVEL: ${LOG_LEVEL_OPAL:-WARN}
|
||||
|
||||
beam-connect:
|
||||
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
|
||||
@@ -91,86 +67,20 @@ services:
|
||||
DISCOVERY_URL: "./map/central.json"
|
||||
LOCAL_TARGETS_FILE: "./map/local.json"
|
||||
NO_AUTH: "true"
|
||||
RUST_LOG: ${LOG_LEVEL_BEAMCONNECT:-WARN}
|
||||
secrets:
|
||||
- opal-cert.pem
|
||||
depends_on:
|
||||
- beam-proxy
|
||||
volumes:
|
||||
- /tmp/bridgehead/opal-map/:/map/:ro
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.address=http://oauth2-proxy:4180/"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.trustForwardHeader=true"
|
||||
- "traefik.http.middlewares.oidcAuth.forwardAuth.authResponseHeaders=X-Auth-Request-Access-Token,Authorization"
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
forward_proxy:
|
||||
networks:
|
||||
- default
|
||||
- rstudio
|
||||
|
||||
beam-proxy:
|
||||
environment:
|
||||
APP_datashield-connect_KEY: ${DATASHIELD_CONNECT_SECRET}
|
||||
APP_token-manager_KEY: ${TOKEN_MANAGER_SECRET}
|
||||
|
||||
# TODO: Allow users of group /DataSHIELD and OIDC_USER_GROUP at the same time:
|
||||
# Maybe a solution would be (https://oauth2-proxy.github.io/oauth2-proxy/configuration/oauth_provider):
|
||||
# --allowed-groups=/DataSHIELD,OIDC_USER_GROUP
|
||||
oauth2-proxy:
|
||||
image: docker.verbis.dkfz.de/cache/oauth2-proxy/oauth2-proxy:latest
|
||||
container_name: bridgehead-oauth2proxy
|
||||
command: >-
|
||||
--allowed-group=DataSHIELD
|
||||
--oidc-groups-claim=${OIDC_GROUP_CLAIM}
|
||||
--auth-logging=true
|
||||
--whitelist-domain=${HOST}
|
||||
--http-address="0.0.0.0:4180"
|
||||
--reverse-proxy=true
|
||||
--upstream="static://202"
|
||||
--email-domain="*"
|
||||
--cookie-name="_BRIDGEHEAD_oauth2"
|
||||
--cookie-secret="${OAUTH2_PROXY_SECRET}"
|
||||
--cookie-expire="12h"
|
||||
--cookie-secure="true"
|
||||
--cookie-httponly="true"
|
||||
#OIDC settings
|
||||
--provider="keycloak-oidc"
|
||||
--provider-display-name="VerbIS Login"
|
||||
--client-id="${OIDC_PRIVATE_CLIENT_ID}"
|
||||
--client-secret="${OIDC_CLIENT_SECRET}"
|
||||
--redirect-url="https://${HOST}${OAUTH2_CALLBACK}"
|
||||
--oidc-issuer-url="${OIDC_ISSUER_URL}"
|
||||
--scope="openid email profile"
|
||||
--code-challenge-method="S256"
|
||||
--skip-provider-button=true
|
||||
#X-Forwarded-Header settings - true/false depending on your needs
|
||||
--pass-basic-auth=true
|
||||
--pass-user-headers=false
|
||||
--pass-access-token=false
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.oauth2_proxy.rule=PathPrefix(`/oauth2`)"
|
||||
- "traefik.http.services.oauth2_proxy.loadbalancer.server.port=4180"
|
||||
- "traefik.http.routers.oauth2_proxy.tls=true"
|
||||
environment:
|
||||
http_proxy: "http://forward_proxy:3128"
|
||||
https_proxy: "http://forward_proxy:3128"
|
||||
depends_on:
|
||||
forward_proxy:
|
||||
condition: service_healthy
|
||||
|
||||
secrets:
|
||||
opal-cert.pem:
|
||||
file: /tmp/bridgehead/opal-cert.pem
|
||||
opal-key.pem:
|
||||
file: /tmp/bridgehead/opal-key.pem
|
||||
|
||||
networks:
|
||||
rstudio:
|
||||
|
@@ -5,17 +5,12 @@ if [ "$ENABLE_DATASHIELD" == true ]; then
|
||||
if [ -z "${ENABLE_EXPORTER}" ] || [ "${ENABLE_EXPORTER}" != "true" ]; then
|
||||
log WARN "The ENABLE_EXPORTER variable is either not set or not set to 'true'."
|
||||
fi
|
||||
OAUTH2_CALLBACK=/oauth2/callback
|
||||
OAUTH2_PROXY_SECRET="$(echo \"This is a salt string to generate one consistent encryption key for the oauth2_proxy. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 32)"
|
||||
add_private_oidc_redirect_url "${OAUTH2_CALLBACK}"
|
||||
|
||||
log INFO "DataSHIELD setup detected -- will start DataSHIELD services."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/datashield-compose.yml"
|
||||
EXPORTER_OPAL_PASSWORD="$(generate_password \"exporter in Opal\")"
|
||||
TOKEN_MANAGER_OPAL_PASSWORD="$(generate_password \"Token Manager in Opal\")"
|
||||
OPAL_DB_PASSWORD="$(echo \"Opal DB\" | generate_simple_password)"
|
||||
OPAL_ADMIN_PASSWORD="$(generate_password \"admin password for Opal\")"
|
||||
RSTUDIO_ADMIN_PASSWORD="$(generate_password \"admin password for R-Studio\")"
|
||||
DATASHIELD_CONNECT_SECRET="$(echo \"DataShield Connect\" | generate_simple_password)"
|
||||
TOKEN_MANAGER_SECRET="$(echo \"Token Manager\" | generate_simple_password)"
|
||||
if [ ! -e /tmp/bridgehead/opal-cert.pem ]; then
|
||||
@@ -23,18 +18,13 @@ if [ "$ENABLE_DATASHIELD" == true ]; then
|
||||
openssl req -x509 -newkey rsa:4096 -nodes -keyout /tmp/bridgehead/opal-key.pem -out /tmp/bridgehead/opal-cert.pem -days 3650 -subj "/CN=opal/C=DE"
|
||||
fi
|
||||
mkdir -p /tmp/bridgehead/opal-map
|
||||
sites="$(cat ./$PROJECT/modules/datashield-sites.json)"
|
||||
echo "$sites" | docker_jq -n --args '{"sites": input | map({
|
||||
"name": .,
|
||||
"id": .,
|
||||
"virtualhost": "\(.):443",
|
||||
"beamconnect": "datashield-connect.\(.).'"$BROKER_ID"'"
|
||||
})}' $sites >/tmp/bridgehead/opal-map/central.json
|
||||
echo "$sites" | docker_jq -n --args '[{
|
||||
"external": "'"$SITE_ID"':443",
|
||||
echo '{"sites": []}' >/tmp/bridgehead/opal-map/central.json
|
||||
# Only allow connections from the central beam proxy that is used by all coder workspaces
|
||||
echo '[{
|
||||
"external": "'$SITE_ID':443",
|
||||
"internal": "opal:8443",
|
||||
"allowed": input | map("\(.).'"$BROKER_ID"'")
|
||||
}]' >/tmp/bridgehead/opal-map/local.json
|
||||
"allowed": ["central-ds-orchestrator.'$BROKER_ID'"]
|
||||
}]' > /tmp/bridgehead/opal-map/local.json
|
||||
if [ "$USER" == "root" ]; then
|
||||
chown -R bridgehead:docker /tmp/bridgehead
|
||||
chmod g+wr /tmp/bridgehead/opal-map/*
|
||||
|
@@ -1,15 +0,0 @@
|
||||
[
|
||||
"berlin",
|
||||
"muenchen-lmu",
|
||||
"dresden",
|
||||
"freiburg",
|
||||
"muenchen-tum",
|
||||
"tuebingen",
|
||||
"mainz",
|
||||
"frankfurt",
|
||||
"essen",
|
||||
"dktk-datashield-test",
|
||||
"dktk-test",
|
||||
"mannheim",
|
||||
"central-ds-orchestrator"
|
||||
]
|
@@ -13,11 +13,11 @@ services:
|
||||
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
|
||||
APP_ID: dnpm-connect.${PROXY_ID}
|
||||
DISCOVERY_URL: "./conf/central_targets.json"
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
HTTPS_PROXY: "http://forward_proxy:3128"
|
||||
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
|
||||
RUST_LOG: ${LOG_LEVEL_BEAMCONNECTDNPM:-WARN}
|
||||
RUST_LOG: ${RUST_LOG:-info}
|
||||
NO_AUTH: "true"
|
||||
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
|
||||
extra_hosts:
|
||||
@@ -25,7 +25,7 @@ services:
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
|
||||
|
@@ -1,34 +1,99 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
dnpm-mysql:
|
||||
image: mysql:9
|
||||
healthcheck:
|
||||
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
|
||||
interval: 3s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
- N_RANDOM_FILES=${DNPM_SYNTH_NUM}
|
||||
MYSQL_ROOT_HOST: "%"
|
||||
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
dnpm-authup:
|
||||
image: authup/authup:latest
|
||||
container_name: bridgehead-dnpm-authup
|
||||
volumes:
|
||||
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
|
||||
depends_on:
|
||||
dnpm-mysql:
|
||||
condition: service_healthy
|
||||
command: server/core start
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
- PUBLIC_URL=https://${HOST}/auth/
|
||||
- AUTHORIZE_REDIRECT_URL=https://${HOST}
|
||||
- ROBOT_ADMIN_ENABLED=true
|
||||
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
|
||||
- ROBOT_ADMIN_SECRET_RESET=true
|
||||
- DB_TYPE=mysql
|
||||
- DB_HOST=dnpm-mysql
|
||||
- DB_USERNAME=root
|
||||
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
|
||||
- DB_DATABASE=auth
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
||||
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth"
|
||||
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
|
||||
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
|
||||
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
|
||||
- "traefik.http.routers.dnpm-auth.tls=true"
|
||||
|
||||
dnpm-portal:
|
||||
image: ghcr.io/dnpm-dip/portal:latest
|
||||
container_name: bridgehead-dnpm-portal
|
||||
environment:
|
||||
- NUXT_API_URL=http://dnpm-backend:9000/
|
||||
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
|
||||
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
|
||||
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
|
||||
- "traefik.http.routers.dnpm-frontend.tls=true"
|
||||
|
||||
dnpm-backend:
|
||||
container_name: bridgehead-dnpm-backend
|
||||
image: ghcr.io/dnpm-dip/backend:latest
|
||||
environment:
|
||||
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
|
||||
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
|
||||
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
|
||||
- HATEOAS_HOST=https://${HOST}
|
||||
- CONNECTOR_TYPE=broker
|
||||
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm/config:/dnpm_config
|
||||
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
|
||||
depends_on:
|
||||
dnpm-authup:
|
||||
condition: service_healthy
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
|
||||
# expose everything
|
||||
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
|
||||
- "traefik.http.routers.dnpm-backend.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
|
||||
# except ETL
|
||||
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
|
||||
- "traefik.http.routers.dnpm-backend-etl.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
|
||||
# this needs an ETL processor with support for basic auth
|
||||
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
|
||||
# except peer-to-peer
|
||||
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
|
||||
- "traefik.http.routers.dnpm-backend-peer.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
|
||||
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
|
||||
# this effectively denies all requests
|
||||
# this is okay, because requests from peers don't go through Traefik
|
||||
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
|
||||
|
||||
landing:
|
||||
labels:
|
||||
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"
|
||||
|
@@ -1,28 +1,16 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
log INFO "DNPM setup detected -- will start DNPM:DIP node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
|
||||
DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
|
||||
DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
|
||||
fi
|
||||
|
@@ -6,6 +6,7 @@ services:
|
||||
container_name: bridgehead-ccp-exporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
EXPORTER_DB_USER: "exporter"
|
||||
@@ -15,7 +16,6 @@ services:
|
||||
SITE: "${SITE_ID}"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
|
||||
LOG_LEVEL: ${LOG_LEVEL_EXPORTER:-WARN}
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
|
||||
@@ -42,6 +42,7 @@ services:
|
||||
container_name: bridgehead-ccp-reporter
|
||||
environment:
|
||||
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
|
||||
LOG_LEVEL: "INFO"
|
||||
CROSS_ORIGINS: "https://${HOST}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-reporter"
|
||||
SITE: "${SITE_ID}"
|
||||
@@ -49,7 +50,6 @@ services:
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
LOG_FHIR_VALIDATION: "false"
|
||||
HTTP_SERVLET_REQUEST_SCHEME: "https"
|
||||
LOG_LEVEL: ${LOG_LEVEL_REPORTER:-WARN}
|
||||
|
||||
# In this initial development state of the bridgehead, we are trying to have so many volumes as possible.
|
||||
# However, in the first executions in the CCP sites, this volume seems to be very important. A report is
|
||||
@@ -69,4 +69,4 @@ services:
|
||||
focus:
|
||||
environment:
|
||||
EXPORTER_URL: "http://exporter:8092"
|
||||
AUTH_HEADER: "${EXPORTER_API_KEY}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
|
381
ccp/modules/exporter-templates.md
Normal file
381
ccp/modules/exporter-templates.md
Normal file
@@ -0,0 +1,381 @@
|
||||
# Exporter Templates
|
||||
|
||||
An exporter template describes the **structure** and **content** of the **export output**.
|
||||
|
||||
## Main Elements
|
||||
|
||||
* **converter**: Defines the **export job**, specifying **output** filenames and **data sources**.
|
||||
* **container**: Represents a logical grouping of data rows (like a **table**).
|
||||
* **attribute**: Defines individual data fields/**columns** extracted from the data source.
|
||||
|
||||
## Other Elements
|
||||
|
||||
* **cql**: Contains Clinical Quality Language metadata used to enrich or filter data.
|
||||
* **fhir-rev-include**: Defines FHIR reverse includes to fetch related resources.
|
||||
* **fhir-package**: Defines a FHIR package to be included in the FHIR query.
|
||||
* **fhir-terminology-server**: FHIR terminology server for validation support.
|
||||
|
||||
## Example Snippet
|
||||
|
||||
```xml
|
||||
<converter id="ccp" excel-filename="Export-${SITE}-${TIMESTAMP}.xlsx" source-id="blaze-store" >
|
||||
<container id="Patient" csv-filename="Patient-${SITE}-${TIMESTAMP}.csv" excel-sheet="Patient" xml-filename="Patient-${SITE}-${TIMESTAMP}.xml" xml-root-element="Patients" xml-element="Patient" json-filename="Patient-${SITE}-${TIMESTAMP}.json" json-key="Patients" >
|
||||
<attribute id="Patient-ID" default-name="PatientID" val-fhir-path="Patient.id.value" anonym="Pat" op="EXTRACT_RELATIVE_ID"/>
|
||||
|
||||
<attribute default-name="DKTKIDGlobal" val-fhir-path="Patient.identifier.where(type.coding.code = 'Global').value.value"/>
|
||||
<attribute default-name="DKTKIDLokal" val-fhir-path="Patient.identifier.where(type.coding.code = 'Lokal').value.value" />
|
||||
<attribute default-name="DateOfBirth" val-fhir-path="Patient.birthDate.value.toString().substring(0, 4) + '-01-01'"/>
|
||||
<attribute default-name="Gender" val-fhir-path="Patient.gender.value" />
|
||||
</container>
|
||||
|
||||
<container id="Diagnosis" csv-filename="Diagnosis-${SITE}-${TIMESTAMP}.csv" excel-sheet="Diagnosis" xml-filename="Diagnosis-${SITE}-${TIMESTAMP}.xml" xml-root-element="Diagnoses" xml-element="Diagnosis" json-filename="Diagnosis-${SITE}-${TIMESTAMP}.json" json-key="Diagnoses">
|
||||
<attribute id="Diagnosis-ID" default-name="DiagnosisID" val-fhir-path="Condition.id.value" anonym="Dia" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute id="Patient-ID" link="Patient.Patient-ID" default-name="PatientID" val-fhir-path="Condition.subject.reference.value" anonym="Pat"/>
|
||||
|
||||
<attribute default-name="ICD10Code" val-fhir-path="Condition.code.coding.code.value"/>
|
||||
<attribute default-name="ICDOTopographyCode" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').code.value"/>
|
||||
<attribute default-name="LocalizationSide" val-fhir-path="Condition.bodySite.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/SeitenlokalisationCS').code.value"/>
|
||||
</container>
|
||||
|
||||
<container id="Histology" csv-filename="Histology-${SITE}-${TIMESTAMP}.csv" excel-sheet="Histology" xml-filename="Histology-${SITE}-${TIMESTAMP}.xml" xml-root-element="Histologies" xml-element="Histology" json-filename="Histology-${SITE}-${TIMESTAMP}.json" json-key="Histologies" >
|
||||
<attribute id="Histology-ID" default-name="HistologyID" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute id="Diagnosis-ID" link="Diagnosis.Diagnosis-ID" default-name="DiagnosisID" val-fhir-path="Observation.where(code.coding.code = '59847-4').focus.reference.value" anonym="Dia"/>
|
||||
<attribute id="Patient-ID" link="Patient.Patient-ID" default-name="PatientID" val-fhir-path="Observation.where(code.coding.code = '59847-4').subject.reference.value" anonym="Pat" />
|
||||
|
||||
<attribute default-name="ICDOMorphologyCode" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
|
||||
<attribute default-name="Grading" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
|
||||
</container>
|
||||
|
||||
<container id="Radiation-Therapy" csv-filename="RadiationTherapy-${SITE}-${TIMESTAMP}.csv" excel-sheet="RadiationTherapy" xml-filename="RadiationTherapy-${SITE}-${TIMESTAMP}.xml" xml-root-element="Radiation-Therapies" xml-element="Radiation-Therapy" json-filename="RadiationTherapy-${SITE}-${TIMESTAMP}.json" json-key="Radiation Therapies">
|
||||
<attribute id="Radiation-Therapy-ID" default-name="RadiationTherapyID" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute id="Diagnosis-ID" link="Diagnosis.Diagnosis-ID" default-name="DiagnosisID" val-fhir-path="Procedure.where(category.coding.code = 'ST').reasonReference.reference.value" anonym="Dia"/>
|
||||
<attribute id="Patient-ID" link="Patient.Patient-ID" default-name="PatientID" val-fhir-path="Procedure.where(category.coding.code = 'ST').subject.reference.value" anonym="Pat" />
|
||||
|
||||
<attribute default-name="RadiationTherapyRelationToSurgery" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
<attribute default-name="RadiationTherapyIntention" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value" />
|
||||
<attribute default-name="RadiationTherapyStart" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.start.value"/>
|
||||
<attribute default-name="RadiationTherapyEnd" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.end.value"/>
|
||||
<attribute default-name="Nebenwirkung Grad" val-fhir-path="AdverseEvent.severity.coding.code.value" join-fhir-path="/AdverseEvent.suspectEntity.instance.reference.where(value.startsWith('Procedure')).value" />
|
||||
</container>
|
||||
|
||||
|
||||
<cql>
|
||||
<default-fhir-search-query>Patient</default-fhir-search-query>
|
||||
|
||||
<token key="DKTK_STRAT_MEDICATION_STRATIFIER" value="define MedicationStatement: if InInitialPopulation then [MedicationStatement] else {} as List <MedicationStatement> " />
|
||||
<token key="DKTK_STRAT_PRIMARY_DIAGNOSIS_NO_SORT_STRATIFIER" value="define PrimaryDiagnosis: First( from [Condition] C where C.extension.where(url='http://hl7.org/fhir/StructureDefinition/condition-related').empty()) " />
|
||||
|
||||
<measure-parameters>
|
||||
{
|
||||
"resourceType": "Parameters",
|
||||
"parameter": [
|
||||
{
|
||||
"name": "periodStart",
|
||||
"valueDate": "2000"
|
||||
},
|
||||
{
|
||||
"name": "periodEnd",
|
||||
"valueDate": "2030"
|
||||
},
|
||||
{
|
||||
"name": "reportType",
|
||||
"valueCode": "subject-list"
|
||||
}
|
||||
]
|
||||
}
|
||||
</measure-parameters>
|
||||
</cql>
|
||||
|
||||
|
||||
|
||||
<fhir-rev-include>Observation:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Condition:patient</fhir-rev-include>
|
||||
<fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
|
||||
<fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Procedure:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Specimen:patient</fhir-rev-include>
|
||||
<fhir-rev-include>AdverseEvent:subject</fhir-rev-include>
|
||||
<fhir-rev-include>CarePlan:patient</fhir-rev-include>
|
||||
|
||||
</converter>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 1. **Converter**
|
||||
|
||||
Main tag of an exporter template. The Exporter functions as a flexible system of converters. Given a specific input format and desired output, the exporter determines the optimal chain of converters to transform the input into the required output.
|
||||
|
||||
Each converter template provides essential details that help the exporter build the correct conversion chain and produce the final export. The template includes the following components:
|
||||
|
||||
- **Source**: The data source from which information is read. Sources are defined in converter.xml, and each template refers to a source by its ID.
|
||||
|
||||
- **Information to Export**: Specifies which elements from the source should be included in the output.
|
||||
|
||||
- **Metadata**: Defines output structure elements such as header names, column titles, sheet names, etc.
|
||||
|
||||
- **Additional Query Information**: Contains any extra data needed to complete and refine the user's query.
|
||||
|
||||
| Tag | Description |
|
||||
| ------------- | --------------------------------------------------------------------------------------------- |
|
||||
| `<converter>` | Main tag for exporter template containing sources, metadata, and additional query information |
|
||||
|
||||
| Attribute | Description | Example | Default |
|
||||
| ------------------------ | --------------------------------------------------------------------------------------- | --------------------------------------------------- | ------- |
|
||||
| id | ID to reference a template | `id="ccp-opal"` | — |
|
||||
| default-name | Default name when output is in a single file format (no extension; added automatically) | — | — |
|
||||
| ignore | Deactivate template but keep accessible | `ignore="true"` | false |
|
||||
| excel-filename | Name of the Excel output file (supports variables `${SITE}`, `${TIMESTAMP}`) | `excel-filename="Export-${SITE}-${TIMESTAMP}.xlsx"` | — |
|
||||
| csv-separator | CSV separator character | — | `"\t"` |
|
||||
| source-id | ID of the data source | `source-id="blaze-store"` | — |
|
||||
| target-id | ID of a target server for file transfer (e.g., Opal for DataSHIELD) | `target-id="opal"` | — |
|
||||
| opal-project | Opal-specific: name of project | — | — |
|
||||
| opal-permission-type | Opal permission type (`user` or `group`) | — | — |
|
||||
| opal-permission-subjects | Opal permission subjects | — | — |
|
||||
| opal-permission | Opal permission (`administrate` or `use`) | — | — |
|
||||
|
||||
**Notes:**
|
||||
* You can use variables such as `${SITE}`, `${TIMESTAMP}`, and other environment variables within tags.
|
||||
* To define environment variables for a specific export, use the HTTP parameter **`CONTEXT`**.
|
||||
The value must be a Base64-encoded string containing comma-separated key-value pairs.
|
||||
* **Example:**
|
||||
Plain: `KEY1=VALUE1,KEY2=VALUE2`
|
||||
Base64: `S0VZMT1WQUxVRTEsS0VZMj1WQUxVRTI=`
|
||||
|
||||
**Allowed child elements:**
|
||||
|
||||
* `<container>`, `<cql>`, `<fhir-rev-include>`, `<fhir-package>`, `<fhir-terminology-server>`
|
||||
|
||||
---
|
||||
|
||||
## 2. **Container**
|
||||
|
||||
Represents a data table with columns (attributes).
|
||||
|
||||
| Tag | Description |
|
||||
| ------------- | --------------------------------------------------- |
|
||||
| `<container>` | Defines a container/table with attributes (columns) |
|
||||
|
||||
| Attribute | Description | Example | Default |
|
||||
| ---------------- | ------------------------------------------------------------ | --------------------------------------------- | ------- |
|
||||
| id | Container ID to reference | — | — |
|
||||
| default-name | Name of Excel sheet/file (no extension, added automatically) | — | — |
|
||||
| csv-filename | Name of CSV file | `csv-filename="Diagnosis-${TIMESTAMP}.csv"` | — |
|
||||
| json-filename | Name of JSON file | `json-filename="diagnosis-${TIMESTAMP}.json"` | — |
|
||||
| xml-filename | Name of XML file | `xml-filename="diagnosis-${TIMESTAMP}.xml"` | — |
|
||||
| xml-root-element | Root element name in XML | `xml-root-element="diagnoses"` | — |
|
||||
| xml-element | Element name for each entry in XML | `xml-element="diagnosis"` | — |
|
||||
| excel-sheet | Excel sheet name | `excel-sheet="diagnosis-${TIMESTAMP}.xlsx"` | — |
|
||||
| opal-table | Opal table name | `opal-name="Diagnosis"` | — |
|
||||
| opal-entity-type | Opal entity type | — | — |
|
||||
|
||||
### Note
|
||||
The following attributes can be used to define the name of the output file:
|
||||
|
||||
- **default-name**: Used as a fallback name if no specific filename is provided for the selected output format.
|
||||
|
||||
- **csv-filename**: Specifies the filename for CSV output.
|
||||
|
||||
- **xml-filename**: Specifies the filename for XML output.
|
||||
|
||||
... and so on for other supported formats.
|
||||
|
||||
If the user selects an output format that does not have a specifically defined filename, the default-name will be used as the base, with the appropriate file extension automatically appended.
|
||||
If neither a format-specific filename nor a default-name is provided, a filename will be automatically generated using a UUID and the correct extension.
|
||||
---
|
||||
|
||||
## 3. **Attribute**
|
||||
|
||||
Represents a column in a container/table.
|
||||
|
||||
| Tag | Description |
|
||||
| ------------- | --------------------------- |
|
||||
| `<attribute>` | Defines an attribute/column |
|
||||
|
||||
| Attribute | Description | Example | Default |
|
||||
| ------------------------- | ---------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ | ------- |
|
||||
| id | Attribute ID | `id="Patient-ID"` | — |
|
||||
| default-name | Default name of the attribute (used if no output-specific name provided) | — | — |
|
||||
| link | Reference to an attribute of another container (format: `<container-name>.<attribute-id>`) | `link="Patient.Patient-ID"` | — |
|
||||
| csv-column | Name of the CSV column | — | — |
|
||||
| excel-column | Name of the Excel column | — | — |
|
||||
| json-key | JSON key | — | — |
|
||||
| xml-element | XML element name | — | — |
|
||||
| opal-value-type | Opal-specific value type | — | — |
|
||||
| opal-script | Script to be applied to the field in Opal | — | — |
|
||||
| primary-key | Marks attribute as primary key | `primary-key="true"` | false |
|
||||
| validation | Marks attribute as syntactic validation field (ends with `-Validation` in DKTK/BBMRI reporter) | `validation="true"` | false |
|
||||
| val-fhir-path | FHIR path to extract value (if source is a FHIR server) | `val-fhir-path="Patient.gender.value"` | — |
|
||||
| join-fhir-path | FHIR path for joining secondary resources to main resource | `join-fhir-path="/AdverseEvent.suspectEntity.instance.reference.where(value.startsWith('Procedure')).value"` | — |
|
||||
| condition-value-fhir-path | Condition filtering for complex value extraction (FHIR path syntax) | `condition-value-fhir-path="Patient.birthDate <= today() - 18 'years'"` | — |
|
||||
| anonym | Anonymization prefix; replaces real value with `anonym` + number | `anonym="Pat"` | — |
|
||||
| mdr | Metadata repository ID in DKTK context | `mdr="dktk:dataelement:20:3"` | — |
|
||||
| op | Operation applied on value (e.g., `EXTRACT_RELATIVE_ID`) | `op="EXTRACT_RELATIVE_ID"` | — |
|
||||
|
||||
---
|
||||
|
||||
### Notes on **join-fhir-path**
|
||||
|
||||
* Used to join resources in FHIR queries when container references multiple resources.
|
||||
* Two join types:
|
||||
|
||||
* **Direct:** main resource points to secondary resource in a **parent to child** relationship.
|
||||
* **Indirect:** secondary resource points back to main resource (path begins with `/`) in a **child to parent** relationship.
|
||||
* Joins can chain multiple resources, e.g., `R1 -> R2 -> R3`, with commas separating joins.
|
||||
* It is even possible to combine direct and indirect references: `R1 -> R2 <- R3`: `<fhir path reference R1 -> R2>,/<fhir path reference R3 -> R2>`
|
||||
|
||||
|
||||
*Examples*:
|
||||
|
||||
* Example of a **direct relationship**:
|
||||
```xml
|
||||
<container id="Histology">
|
||||
<attribute id="Histology-ID" default-name="HistologyID" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute default-name="ICDOMorphologyCode" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
|
||||
...
|
||||
<attribute default-name="Grading" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
|
||||
</container>
|
||||
```
|
||||
Here, the main observation Observation.where(code.coding.code = '59847-4') contains a reference to the secondary observation Observation.where(code.coding.code = '59542-1'), where we can find the value that we are looking for.
|
||||
|
||||
* Example of an **indirect relationship**:
|
||||
```xml
|
||||
<container id="Radiation-Therapy" ...>
|
||||
<attribute id="Radiation-Therapy-ID" default-name="RadiationTherapyID" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
|
||||
<attribute default-name="RadiationTherapyRelationToSurgery" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
|
||||
...
|
||||
<attribute default-name="Nebenwirkung Grad" val-fhir-path="AdverseEvent.severity.coding.code.value" join-fhir-path="/AdverseEvent.suspectEntity.instance.reference.where(value.startsWith('Procedure')).value" />
|
||||
</container>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Note
|
||||
The following attributes define the name of a column or field in the output:
|
||||
|
||||
- **default-name**: A general fallback name used when no format-specific name is provided.
|
||||
|
||||
- **csv-column**: Name used for the CSV output.
|
||||
|
||||
- **excel-column**: Name used for Excel output.
|
||||
|
||||
- **json-key**: Name used for JSON output.
|
||||
|
||||
- **xml-element**: Name used for XML output.
|
||||
|
||||
If a format-specific name is not defined for a given output, the default-name will be used.
|
||||
If default-name is also missing, a UUID will be generated and used as the name.
|
||||
|
||||
---
|
||||
|
||||
## 4. **CQL**
|
||||
|
||||
Contains metadata and details important for handling CQL queries.
|
||||
|
||||
| Tag | Description |
|
||||
| ------- | ---------------------------------------------------------------- |
|
||||
| `<cql>` | Container for CQL query metadata including tokens and parameters |
|
||||
|
||||
**Allowed child elements:**
|
||||
|
||||
* `<token>`, `<measure-parameters>`, `<default-fhir-search-query>`
|
||||
|
||||
---
|
||||
|
||||
## 5. **Token (CQL)**
|
||||
|
||||
Replaces keys in CQL queries with specific values (commonly used for stratifiers).
|
||||
|
||||
| Tag | Description |
|
||||
| --------- | ------------------------------------- |
|
||||
| `<token>` | Contains `key` and `value` attributes |
|
||||
|
||||
| Attribute | Description | Example |
|
||||
| --------- | ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
|
||||
| key | Key to replace in CQL | `key="DKTK_STRAT_MEDICATION_STRATIFIER"` |
|
||||
| value | CQL code snippet that replaces key | `value="define MedicationStatement: if InInitialPopulation then [MedicationStatement] else {} as List <MedicationStatement>"` |
|
||||
|
||||
---
|
||||
|
||||
## 6. **Measure Parameters (CQL)**
|
||||
|
||||
Parameters for a CQL measure query, typically in JSON format.
|
||||
|
||||
| Tag | Description |
|
||||
| ---------------------- | ----------------------------------------------------------- |
|
||||
| `<measure-parameters>` | Parameters such as `periodStart`, `periodEnd`, `reportType` |
|
||||
|
||||
*Example*:
|
||||
```xml
|
||||
<measure-parameters>
|
||||
{
|
||||
"resourceType": "Parameters",
|
||||
"parameter": [
|
||||
{
|
||||
"name": "periodStart",
|
||||
"valueDate": "2000"
|
||||
},
|
||||
{
|
||||
"name": "periodEnd",
|
||||
"valueDate": "2030"
|
||||
},
|
||||
{
|
||||
"name": "reportType",
|
||||
"valueCode": "subject-list"
|
||||
}
|
||||
]
|
||||
}
|
||||
</measure-parameters>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## 7. **Default FHIR Search Query (CQL)**
|
||||
|
||||
FHIR search query applied after obtaining measure reports from CQL.
|
||||
|
||||
| Tag | Description | Example |
|
||||
| ----------------------------- | ----------------------------------------------------- | --------- |
|
||||
| `<default-fhir-search-query>` | Defines a FHIR resource type to query (e.g., Patient) | `Patient` |
|
||||
|
||||
CQL (Clinical Quality Language) queries are primarily used to generate MeasureReports. However, in some cases, it is more useful to extract the underlying data used to build those MeasureReports.
|
||||
|
||||
In this context, the CQL query acts as a filtering mechanism—more expressive and powerful than a standard FHIR search query. When the Exporter processes a CQL input, it sends the query to the FHIR server along with the relevant MeasureReport request. The FHIR server responds with a reference to a subset of resources, typically a list of patient IDs. This subset serves as a filter for subsequent data extraction.
|
||||
|
||||
The behavior depends on the selected input format:
|
||||
|
||||
- **CQL**: The Exporter returns the MeasureReports resulting from the execution of the CQL query.
|
||||
|
||||
- **CQL_DATA**: After obtaining the list of matching resource references from the FHIR server, the Exporter performs a second request—a standard FHIR search query—on that filtered list to retrieve the actual data resources (e.g., Patients, Observations, etc.).
|
||||
|
||||
The default FHIR search query is applied to get the resources from the FHIR server after getting the list of patients.
|
||||
|
||||
---
|
||||
|
||||
## 8. **FHIR Reverse Include**
|
||||
|
||||
Defines which resources should be reverse-included when using FHIR search as input or CQL\_DATA.
|
||||
|
||||
| Tag | Description |
|
||||
| -------------------- | ------------------------------------------------------------ |
|
||||
| `<fhir-rev-include>` | Specifies reverse include resources to simplify FHIR queries |
|
||||
|
||||
This tag allows users to simplify the FHIR search query by only specifying the search criteria. The specific FHIR resources to be retrieved are defined in the template, not in the user’s query.
|
||||
|
||||
This design shifts responsibility:
|
||||
- The user focuses on defining what to filter (e.g., patients with a certain condition).
|
||||
- The template defines what information will be extracted from each matching FHIR resource (e.g., which fields from Patient, Observation, etc.).
|
||||
|
||||
By separating concerns in this way, the template ensures consistent and controlled data extraction while keeping the user's input simple.
|
||||
|
||||
*Example*:
|
||||
```xml
|
||||
<fhir-rev-include>Observation:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Condition:patient</fhir-rev-include>
|
||||
<fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
|
||||
<fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Procedure:patient</fhir-rev-include>
|
||||
<fhir-rev-include>Specimen:patient</fhir-rev-include>
|
||||
<fhir-rev-include>AdverseEvent:subject</fhir-rev-include>
|
||||
<fhir-rev-include>CarePlan:patient</fhir-rev-include>
|
||||
```
|
@@ -1,15 +1,91 @@
|
||||
# Exporter and Reporter
|
||||
|
||||
---
|
||||
|
||||
## Exporter
|
||||
The exporter is a REST API that exports the data of the different databases of the bridgehead in a set of tables.
|
||||
It can accept different output formats as CSV, Excel, JSON or XML. It can also export data into Opal.
|
||||
|
||||
**GitHub:** [https://github.com/samply/exporter](https://github.com/samply/exporter)
|
||||
|
||||
The Exporter is a **REST API** that enables the export of data from various **bridgehead databases** as **structured tables**. It currently supports only **FHIR sources** such as **Blaze**, but it is designed to be extended to **other types** of data sources. The Exporter provides multiple output formats, including **CSV, Excel, JSON, and XML**, and can also export data directly into **Opal (DataSHIELD)**.
|
||||
|
||||
### How it works
|
||||
|
||||
The **user** submits a **query** and specifies the desired **export template** and **output format**. The **query** acts like the `WHERE` clause in SQL, filtering data, while the **template** defines what data to select and how to format it, similar to the `SELECT` clause. The Exporter then processes this to generate the export files.
|
||||
|
||||
### Exporter Templates
|
||||
[For further information](exporter-templates.md)
|
||||
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Below is a list of configurable environment variables used by the Exporter:
|
||||
|
||||
| Variable | Default | Description |
|
||||
| --------------------------------------------------------- | ------------------------------------------- | ---------------------------------------------------------- |
|
||||
| APPLICATION\_PORT | 8092 | Port on which the application runs. |
|
||||
| ARCHIVE\_EXPIRED\_QUERIES\_CRON\_EXPRESSION | `0 0 2 * * *` | Cron expression for archiving expired queries. |
|
||||
| CLEAN\_TEMP\_FILES\_CRON\_EXPRESSION | `0 0 1 * * *` | Cron expression for cleaning temporary files. |
|
||||
| CLEAN\_WRITE\_FILES\_CRON\_EXPRESSION | `0 0 2 * * *` | Cron expression for cleaning written files. |
|
||||
| CONVERTER\_TEMPLATE\_DIRECTORY | | Directory containing conversion templates. |
|
||||
| CONVERTER\_XML\_APPLICATION\_CONTEXT\_PATH | | Path to the XML application context used by the converter. |
|
||||
| CROSS\_ORIGINS | | Allowed CORS origins (comma-separated). |
|
||||
| CSV\_SEPARATOR\_REPLACEMENT | | Character to replace CSV separators within values. |
|
||||
| EXCEL\_WORKBOOK\_WINDOW | 30000000 | Memory window size for Excel workbook processing. |
|
||||
| EXPORTER\_API\_KEY | | API key for authenticating access to the exporter. |
|
||||
| EXPORTER\_DB\_FLYWAY\_MIGRATION\_ENABLED | true | Enable Flyway DB migrations on startup. |
|
||||
| EXPORTER\_DB\_PASSWORD | | Password for exporter database. |
|
||||
| EXPORTER\_DB\_URL | `jdbc:postgresql://localhost:5432/exporter` | JDBC URL for exporter DB. |
|
||||
| EXPORTER\_DB\_USER | | Username for exporter DB. |
|
||||
| FHIR\_PACKAGES\_DIRECTORY | | Directory where FHIR packages are stored. |
|
||||
| HAPI\_FHIR\_CLIENT\_LOG\_LEVEL | OFF | Log level for HAPI FHIR client. |
|
||||
| HIBERNATE\_LOG | false | Enable Hibernate SQL logging. |
|
||||
| HTTP\_RELATIVE\_PATH | | Relative base path for HTTP endpoints. |
|
||||
| HTTP\_SERVLET\_REQUEST\_SCHEME | http | Default HTTP scheme. |
|
||||
| LOG\_FHIR\_VALIDATION | | Enable logging of FHIR validation results. |
|
||||
| LOG\_LEVEL | INFO | Application log level. |
|
||||
| MAX\_NUMBER\_OF\_EXCEL\_ROWS\_IN\_A\_SHEET | 100000 | Max rows per Excel sheet. |
|
||||
| MAX\_NUMBER\_OF\_RETRIES | 10 | Max retry attempts. |
|
||||
| MERGE\_FILENAME | | Name of merged output file. |
|
||||
| SITE | | Site identifier for filenames/logs. |
|
||||
| TEMP\_FILES\_LIFETIME\_IN\_DAYS | 1 | Lifetime of temporary files (days). |
|
||||
| TEMPORAL\_FILE\_DIRECTORY | | Directory for temporary files. |
|
||||
| TIMEOUT\_IN\_SECONDS | 10 | Default timeout (seconds). |
|
||||
| TIMESTAMP\_FORMAT | | Timestamp format string. |
|
||||
| WEBCLIENT\_BUFFER\_SIZE\_IN\_BYTES | 8192 | Buffer size for web client. |
|
||||
| WEBCLIENT\_CONNECTION\_TIMEOUT\_IN\_SECONDS | 5 | Connection timeout (seconds). |
|
||||
| WEBCLIENT\_MAX\_NUMBER\_OF\_RETRIES | 10 | Max retries for web client. |
|
||||
| WEBCLIENT\_REQUEST\_TIMEOUT\_IN\_SECONDS | 10 | Request timeout (seconds). |
|
||||
| WEBCLIENT\_TCP\_KEEP\_CONNECTION\_NUMBER\_OF\_TRIES | 3 | TCP keepalive retry attempts. |
|
||||
| WEBCLIENT\_TCP\_KEEP\_IDLE\_IN\_SECONDS | 30 | TCP keepalive idle time (seconds). |
|
||||
| WEBCLIENT\_TCP\_KEEP\_INTERVAL\_IN\_SECONDS | 10 | TCP keepalive probe interval (seconds). |
|
||||
| WEBCLIENT\_TIME\_IN\_SECONDS\_AFTER\_RETRY\_WITH\_FAILURE | 1 | Wait time after failed retry (seconds). |
|
||||
| WRITE\_FILE\_DIRECTORY | | Directory for final output files. |
|
||||
| WRITE\_FILES\_LIFETIME\_IN\_DAYS | 30 | Lifetime of written files (days). |
|
||||
| XML\_FILE\_MERGER\_ROOT\_ELEMENT | Containers | Root element for XML file merging. |
|
||||
| ZIP\_FILENAME | `exporter-files-${SITE}-${TIMESTAMP}.zip` | Pattern for ZIP archive naming. |
|
||||
|
||||
---
|
||||
|
||||
### About Cron Expressions in Spring
|
||||
|
||||
Cron expressions configure scheduled tasks and consist of six space-separated fields representing second, minute, hour, day of month, month, and day of week. For example, the default `0 0 2 * * *` means “at 2:00 AM every day.” These expressions allow precise scheduling for maintenance tasks such as cleaning files or archiving data.
|
||||
|
||||
---
|
||||
|
||||
## Exporter-DB
|
||||
It is a database to save queries for its execution in the exporter.
|
||||
The exporter manages also the different executions of the same query in through the database.
|
||||
|
||||
**GitHub:** [https://github.com/samply/exporter-db](https://github.com/samply/exporter-db) (If exists; if not, just remove or adjust accordingly)
|
||||
|
||||
The Exporter-DB stores queries for execution by the Exporter and tracks multiple executions of the same query, managing versioning and scheduling.
|
||||
|
||||
---
|
||||
|
||||
## Reporter
|
||||
This component is a plugin of the exporter that allows to create more complex Excel reports described in templates.
|
||||
It is compatible with different template engines as Groovy, Thymeleaf,...
|
||||
It is perfect to generate a document as our traditional CCP quality report.
|
||||
|
||||
**GitHub:** [https://github.com/samply/reporter](https://github.com/samply/reporter)
|
||||
|
||||
The Reporter is a **plugin for the Exporter** designed for generating **complex Excel reports** based on **customizable templates**. It supports various template engines like **Groovy** and **Thymeleaf**, making it ideal for producing detailed documents such as the traditional CCP **data quality report**.
|
||||
|
||||
---
|
||||
|
||||
|
||||
|
@@ -14,7 +14,6 @@ services:
|
||||
MAGICPL_CONNECTOR_APIKEY: ${IDMANAGER_READ_APIKEY}
|
||||
MAGICPL_CENTRAL_PATIENTLIST_APIKEY: ${IDMANAGER_CENTRAL_PATIENTLIST_APIKEY}
|
||||
MAGICPL_CONTROLNUMBERGENERATOR_APIKEY: ${IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY}
|
||||
ML_LOG_LEVEL: ${LOG_LEVEL_IDMANAGER:-WARN}
|
||||
depends_on:
|
||||
- patientlist
|
||||
- traefik-forward-auth
|
||||
@@ -45,8 +44,6 @@ services:
|
||||
- ML_UPLOAD_API_KEY=${IDMANAGER_UPLOAD_APIKEY}
|
||||
# Add Variables from /etc/patientlist-id-generators.env
|
||||
- PATIENTLIST_SEEDS_TRANSFORMED
|
||||
- ML_LOG_LEVEL=${LOG_LEVEL_PATIENTLIST:-WARN}
|
||||
#TODO confirm LOG_LEVEL
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.patientlist.rule=PathPrefix(`/patientlist`)"
|
||||
@@ -105,11 +102,11 @@ services:
|
||||
condition: service_healthy
|
||||
|
||||
ccp-patient-project-identificator:
|
||||
image: samply/ccp-patient-project-identificator
|
||||
image: docker.verbis.dkfz.de/cache/samply/ccp-patient-project-identificator
|
||||
container_name: bridgehead-ccp-patient-project-identificator
|
||||
environment:
|
||||
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
|
||||
SITE_NAME: ${SITE_NAME}
|
||||
SITE_NAME: ${IDMANAGEMENT_FRIENDLY_ID}
|
||||
|
||||
volumes:
|
||||
patientlist-db-data:
|
||||
|
@@ -23,9 +23,7 @@ services:
|
||||
OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
|
||||
OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
LOG_LEVEL: ${LOG_LEVEL_MTBA:-WARN}
|
||||
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
|
@@ -12,8 +12,6 @@ services:
|
||||
CTS_API_KEY: ${NNGM_CTS_APIKEY}
|
||||
CRYPT_KEY: ${NNGM_CRYPTKEY}
|
||||
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
|
||||
LOG_LEVEL: ${LOG_LEVEL_NNGM:-WARN}
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
|
||||
|
@@ -3,15 +3,13 @@ version: "3.7"
|
||||
services:
|
||||
obds2fhir-rest:
|
||||
container_name: bridgehead-obds2fhir-rest
|
||||
image: docker.verbis.dkfz.de/ccp/obds2fhir-rest:main
|
||||
image: docker.verbis.dkfz.de/samply/obds2fhir-rest:main
|
||||
environment:
|
||||
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
|
||||
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
|
||||
SALT: ${LOCAL_SALT}
|
||||
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
|
||||
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
|
||||
LOG_LEVEL: ${LOG_LEVEL_REPORTER:-WARN}
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"
|
||||
|
@@ -13,11 +13,10 @@ services:
|
||||
- "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
|
||||
- "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
|
||||
environment:
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
|
||||
TEILER_BACKEND_URL: "/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/ccp-teiler-dashboard"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
|
||||
HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
LOG_LEVEL: ${LOG_LEVEL_TEILER:-WARN}
|
||||
|
||||
teiler-dashboard:
|
||||
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
|
||||
@@ -31,9 +30,9 @@ services:
|
||||
- "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
TEILER_BACKEND_URL: "/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "/ccp-teiler-dashboard"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
|
||||
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
@@ -41,14 +40,12 @@ services:
|
||||
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
|
||||
TEILER_ORCHESTRATOR_URL: "/ccp-teiler"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_USER: "${OIDC_USER_GROUP}"
|
||||
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
|
||||
REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
|
||||
EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"
|
||||
LOG_LEVEL: ${LOG_LEVEL_TEILER:-WARN}
|
||||
|
||||
|
||||
teiler-backend:
|
||||
@@ -62,22 +59,14 @@ services:
|
||||
- "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
|
||||
- "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
|
||||
environment:
|
||||
LOG_LEVEL: "INFO"
|
||||
APPLICATION_PORT: "8085"
|
||||
APPLICATION_ADDRESS: "${HOST}"
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
|
||||
CENTRAX_URL: "${CENTRAXX_URL}"
|
||||
TEILER_ORCHESTRATOR_URL: "/ccp-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "/ccp-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "/ccp-teiler-dashboard/en"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
ENABLE_MTBA: "${ENABLE_MTBA}"
|
||||
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
|
||||
LOG_LEVEL: ${LOG_LEVEL_TEILER:-WARN}
|
||||
secrets:
|
||||
- ccp.conf
|
||||
|
||||
secrets:
|
||||
ccp.conf:
|
||||
file: /etc/bridgehead/ccp.conf
|
||||
IDMANAGER_UPLOAD_APIKEY: "${IDMANAGER_UPLOAD_APIKEY}" # Only used to check if the ID Manager is active
|
||||
|
@@ -1,19 +1,287 @@
|
||||
# Teiler
|
||||
This module orchestrates the different microfrontends of the bridgehead as a single page application.
|
||||
|
||||
**Teiler** is the central frontend of the **bridgehead system**. It brings together multiple independent tools—each built as a **microfrontend**—into a single, unified web application.
|
||||
|
||||
Users interact with Teiler as one coherent interface, but behind the scenes, it dynamically integrates and displays self-contained modules developed with different technologies (**Angular**, **Vue**, **React**, etc.). This modular approach makes Teiler highly flexible, allowing teams to develop, deploy, and maintain features independently.
|
||||
|
||||
Teiler ensures:
|
||||
|
||||
* **A consistent look and feel** across tools.
|
||||
* **Smooth navigation** between components.
|
||||
* **Seamless user authentication** across the entire interface.
|
||||
|
||||
Each independent tool integrated into Teiler is called a **bridgehead app**. A bridgehead app can be:
|
||||
|
||||
- A fully standalone microfrontend with its own frontend and backend services.
|
||||
- An embedded service inside the Teiler Dashboard.
|
||||
- An external link to another service, possibly hosted on a central server or elsewhere in the federated research network.
|
||||
|
||||
The modularity of Teiler enables it to adapt easily to the evolving needs of the research federated network by simply adding, updating, or removing bridgehead apps.
|
||||
|
||||
Below is a breakdown of Teiler's internal components that make this orchestration possible.
|
||||
|
||||
- [Teiler Orchestrator](#teiler-orchestrator)
|
||||
- [Teiler Dashboard](#teiler-dashboard)
|
||||
- [Teiler Backend](#teiler-backend)
|
||||
|
||||
---
|
||||
|
||||
## Teiler Orchestrator
|
||||
Single SPA component that consists on the root HTML site of the single page application and a javascript code that
|
||||
gets the information about the microfrontend calling the teiler backend and is responsible for registering them. With the
|
||||
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.
|
||||
|
||||
**GitHub repository:** [https://github.com/samply/teiler-orchestrator](https://github.com/samply/teiler-orchestrator)
|
||||
|
||||
The **Teiler Orchestrator** is the entry point of the **Single Page Application (SPA)**. It consists of:
|
||||
|
||||
The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React,...)
|
||||
This microfrontends can run as single alone but need an extension with Single-SPA (https://single-spa.js.org/docs/ecosystem).
|
||||
There are also available three templates (Angular, Vue, React) to be directly extended to be used directly in the teiler.
|
||||
- An **HTML root page**.
|
||||
- A **JavaScript layer** that:
|
||||
- **Retrieves microfrontend configurations** from the backend.
|
||||
- **Registers and manages** the microfrontends using [**Single-SPA**](https://single-spa.js.org/), the framework Teiler uses to create and coordinate its microfrontend environment.
|
||||
|
||||
Using this information, the orchestrator dynamically **loads the correct microfrontend** for a given route and manages its **lifecycle** (*init*, *mount*, *unmount*) in real time.
|
||||
|
||||
**Microfrontends** run in their own containers and can be implemented with any major frontend framework. To be compatible with Teiler, they must integrate with **Single-SPA**.
|
||||
|
||||
To encourage developers to create their own microfrontends and integrate them into Teiler, we provide **starter templates** for **Angular**, **Vue**, and **React**. Developing a new microfrontend is straightforward:
|
||||
|
||||
1. Use one of the templates.
|
||||
2. Extend it with your own functionality.
|
||||
3. Add its configuration in the **Teiler Backend**.
|
||||
|
||||
This modular approach accelerates development and fosters collaboration.
|
||||
|
||||
|
||||
---
|
||||
|
||||
## Teiler Dashboard
|
||||
It consists on the main dashboard and a set of embedded services.
|
||||
### Login
|
||||
user and password in ccp.local.conf
|
||||
|
||||
**GitHub repository:** [https://github.com/samply/teiler-dashboard](https://github.com/samply/teiler-dashboard)
|
||||
|
||||
The **Teiler Dashboard** is the unified interface users interact with after logging in. It provides:
|
||||
|
||||
- A **single point of access** where various bridgehead apps are embedded as microfrontends.
|
||||
- **Central navigation** and **session management** for a smooth user experience.
|
||||
|
||||
### Authentication and Authorization
|
||||
|
||||
Teiler uses **OpenID Connect (OIDC)** for user authentication, accessible via the **top navigation bar**.
|
||||
|
||||
We consider three possible **application roles**:
|
||||
|
||||
| Role | Description |
|
||||
|--------|-----------------------------------------------------------|
|
||||
| Public | Accessible by any user without the need to log in |
|
||||
| User | Normal users working with various bridgehead applications |
|
||||
| Admin | Bridgehead system administrators |
|
||||
|
||||
It is possible to **deactivate OIDC authentication** entirely. In such cases, **all apps must have at least the public role** to allow access. While this may be suitable for development or testing, we **strongly encourage** at least some external authentication mechanism or network-level access control to secure the bridgehead environment.
|
||||
|
||||
Alternatively, basic authentication can be enforced through the existing **Traefik infrastructure** integrated with the bridgehead.
|
||||
|
||||
---
|
||||
|
||||
## Teiler Backend
|
||||
In this component, the microfrontends are configured.
|
||||
|
||||
**GitHub repository:** [https://github.com/samply/teiler-backend](https://github.com/samply/teiler-backend)
|
||||
|
||||
The **Teiler Backend** serves as the central configuration hub for all microfrontends and bridgehead apps. It defines:
|
||||
|
||||
- Which bridgehead apps are available.
|
||||
- Their loading URLs and routes.
|
||||
- Optional metadata such as display names, icons, roles, and activation status.
|
||||
|
||||
It enables the orchestrator to remain **generic and flexible**, adapting dynamically to whatever apps are defined in the backend configuration.
|
||||
|
||||
### Assets Directory
|
||||
|
||||
There is an **assets** directory where you can save images and other static files to be accessible to your microfrontends. This helps configure and customize apps more easily and quickly.
|
||||
|
||||
Assets can be referenced via:
|
||||
|
||||
```
|
||||
<Teiler Backend URL>/assets/<filename>
|
||||
```
|
||||
|
||||
### App Configuration via Environment Variables
|
||||
|
||||
Apps are configured using environment variables with the following structure:
|
||||
|
||||
```
|
||||
TEILER_APP<Number>_<suffix>
|
||||
Optional: TEILER_APP<Number>_<LanguageCode>_<suffix>
|
||||
```
|
||||
|
||||
- The **number** is just for grouping variables for a single app and has no intrinsic meaning.
|
||||
- The **app** is the unit within Teiler, shown as a box in the dashboard.
|
||||
- Apps can be:
|
||||
- Embedded apps inside the Teiler Dashboard (there is a helper Python script for generating embedded apps: [create-embedded-app.py](https://github.com/samply/teiler-dashboard/blob/main/create-embedded-app.py))
|
||||
- External links (e.g., central services outside the local bridgehead instance)
|
||||
- An app's frontend (microfrontend or embedded app) can either contain the entire functionality or serve as a frontend communicating with other backend microservices in the bridgehead.
|
||||
|
||||
Currently supported languages in the main projects DKTK and BBMRI are **English (EN)** and **German (DE)**, but the system can be extended to other languages.
|
||||
|
||||
The Teiler Dashboard requests variables from the backend for each app and passes the desired language code. If a language-specific variable is unavailable, the default language value is returned.
|
||||
|
||||
### Internationalization (i18n)
|
||||
#### ⚠️ Important
|
||||
|
||||
If you make any changes to the **Teiler Dashboard**, and those changes involve text elements (e.g., labels, buttons, messages), you must also update the **English translations**, since the application uses **internationalization (i18n)**.
|
||||
|
||||
The **default language** of the project is **German**, so any new text must be manually translated into English after extracting the updated i18n entries.
|
||||
|
||||
To extract new translation entries, run the following command:
|
||||
|
||||
```bash
|
||||
ng extract-i18n --output-path src/i18n --format=xlf2
|
||||
````
|
||||
|
||||
This will generate or update the file:
|
||||
`src/i18n/messages.xlf`
|
||||
|
||||
---
|
||||
|
||||
#### ✅ Requirements to Run the Extraction Command
|
||||
|
||||
| Program | Purpose | Linux Shell (Ubuntu/Debian) | Windows PowerShell |
|
||||
| -------------------------------- | ---------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------- |
|
||||
| **Node.js** | JavaScript runtime required by Angular and npm | `sudo apt update && sudo apt install nodejs npm`<br>**or**<br>[Use NodeSource setup](https://github.com/nodesource/distributions) (recommended) | [Download from nodejs.org](https://nodejs.org) and install manually |
|
||||
| **npm** | Node package manager (comes with Node.js) | *(Included with Node.js)* | *(Included with Node.js)* |
|
||||
| **Angular CLI** | Command-line interface for Angular tooling | `npm install -g @angular/cli` | `npm install -g @angular/cli` |
|
||||
| **Angular project dependencies** | Required packages from `package.json` | `npm install` | `npm install` |
|
||||
|
||||
---
|
||||
|
||||
#### ✏️ Updating the English Translation
|
||||
|
||||
After running the extraction command, the file `src/i18n/messages.xlf` will contain any newly added i18n entries.
|
||||
|
||||
To provide English translations:
|
||||
|
||||
1. Open `src/i18n/messages.en.xlf`.
|
||||
2. Compare it with the updated `messages.xlf` to identify any new entries.
|
||||
3. Copy the new `<trans-unit>` blocks from `messages.xlf` into `messages.en.xlf`.
|
||||
4. For each entry, add the English translation inside the `<target>` tag (in `messages.en.xlf`):
|
||||
|
||||
```xml
|
||||
<trans-unit id="..." datatype="html">
|
||||
<source>Willkommen</source>
|
||||
<target>Welcome</target>
|
||||
</trans-unit>
|
||||
```
|
||||
|
||||
### App Availability Monitoring
|
||||
|
||||
The Teiler Backend regularly **pings apps** to check availability and displays status messages such as:
|
||||
|
||||
- "Frontend not available"
|
||||
- "Backend not available"
|
||||
- "Frontend and Backend not available"
|
||||
|
||||
### Accepted TEILER_APP Variable Suffixes

| Suffix | Description |
|------------------|---------------------------------------------------------------------------------------------------------------|
| NAME | Identifier of the app (no spaces). For embedded apps, must match the identifier defined in Teiler Dashboard. |
| TITLE | Display title shown to users. |
| DESCRIPTION | Brief description of the app. |
| BACKENDURL | URL of the backend microservice (if applicable). |
| BACKENDCHECKURL | URL that the backend pings to verify backend availability. Defaults to BACKENDURL if not specified. |
| SOURCEURL | URL of the microfrontend or external link (not used for embedded apps). |
| SOURCECHECKURL | URL to ping to check microfrontend or external link availability. Defaults to SOURCEURL if not specified. |
| ROLES | Comma-separated roles allowed: `TEILER_PUBLIC`, `TEILER_USER`, `TEILER_ADMIN`. |
| ISACTIVATED | `true` or `false`. Used to temporarily deactivate an app without deleting its config. |
| ICONCLASS | Bootstrap icon class to display in the app box (e.g., `"bi bi-search"`). |
| ICONSOURCEURL | URL to an image icon. Prefer using local assets instead of external URLs. |
| ORDER | Relative display order of the app in the dashboard. |
| ISEXTERNALLINK | `true` or `false`. Indicates if the app is an external link outside the local bridgehead. |
| ISLOCAL | `true` or `false`. Indicates if the app runs locally within the bridgehead site or on a central server. |

*Note:* Embedded apps often have many of these variables preconfigured and may not require manual specification. See the [Teiler Dashboard documentation](https://github.com/samply/teiler-dashboard) for details.
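
As an illustration of how these suffixes combine, the sketch below registers a hypothetical app. The `TEILER_APP1_` prefix and all values are placeholders and only an assumption about the naming scheme; check the Teiler Backend documentation for the exact variable names used at your site.

```bash
# Hypothetical example; variable prefix and values are placeholders.
TEILER_APP1_NAME="my-app"
TEILER_APP1_TITLE="My App"
TEILER_APP1_DESCRIPTION="Example microfrontend registered in Teiler"
TEILER_APP1_SOURCEURL="https://bridgehead.example.org/my-app"
TEILER_APP1_BACKENDURL="https://bridgehead.example.org/my-app/backend"
TEILER_APP1_ROLES="TEILER_USER,TEILER_ADMIN"
TEILER_APP1_ISACTIVATED="true"
TEILER_APP1_ICONCLASS="bi bi-search"
TEILER_APP1_ORDER="10"
TEILER_APP1_ISEXTERNALLINK="false"
TEILER_APP1_ISLOCAL="true"
```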
### Additional Teiler Backend Variables for Dashboard Configuration

| Variable Prefix | Description |
|------------------------------------|--------------------------------------------------------------------------------------------------------------|
| TEILER_DASHBOARD_ | General configuration of the dashboard. |
| TEILER_DASHBOARD_<LangCode>_ | Language-specific configuration overrides. |

Important suffixes include:

| Suffix | Description |
|----------------------|-------------------------------------------------------------------------------------------|
| WELCOME_TITLE | Title shown on the initial screen before login. |
| WELCOME_TEXT | Welcome message or instructions before login. |
| FURTHER_INFO | Additional informational text or links. |
| BACKGROUND_IMAGE_URL | URL to a background image (SVG recommended for scalability). |
| LOGO_URL | URL to the project or bridgehead logo. |
| LOGO_HEIGHT | Height of the displayed logo. |
| LOGO_TEXT | Title text of the bridgehead (e.g., "DKTK Bridgehead"). |
| COLOR_PALETTE | JSON link to color palettes for text, lines, icons, and background (especially for SVGs). |
| COLOR_PROFILE | Name of the color palette to use from the palette file. |
| FONT | Font family for the dashboard text. |
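
A sketch of how these variables might look in practice is shown below; all values are placeholders, and the `EN` segment stands for any `<LangCode>` whose entries override the general ones.

```bash
# Hypothetical example; adapt values to your site.
TEILER_DASHBOARD_WELCOME_TITLE="Willkommen"
TEILER_DASHBOARD_EN_WELCOME_TITLE="Welcome"
TEILER_DASHBOARD_LOGO_URL="/assets/logo.svg"
TEILER_DASHBOARD_LOGO_HEIGHT="40px"
TEILER_DASHBOARD_LOGO_TEXT="DKTK Bridgehead"
TEILER_DASHBOARD_COLOR_PALETTE="/assets/color-palettes.json"
TEILER_DASHBOARD_COLOR_PROFILE="Grey"
```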
### 🎨 Color Palette

Below is an example of a **color palette** definition in JSON format:

```json
{
  "color-palettes": [
    {
      "name": "Grey",
      "colors": {
        "text": "grey",
        "line": "grey",
        "icon": "grey",
        "background": "grey"
      }
    },
    {
      "name": "Black",
      "colors": {
        "text": "black",
        "line": "black",
        "icon": "black",
        "background": "#F7ADAD"
      }
    }
  ]
}
```

Each palette contains a unique `name` and a set of color values for different UI elements.
#### 🔍 Palette Elements

| **Variable** | **Description** |
| ------------ | --------------------------------------------------- |
| `name` | Identifier of the color palette |
| `text` | Color used for text |
| `line` | Color used for lines (e.g., borders, dividers) |
| `icon` | Color used for icons |
| `background` | Background color (especially useful for SVG images) |
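
If you want to sanity-check a palette file, `jq` can list the palette names it defines (a sketch; the file path is a placeholder and `jq` is assumed to be installed):

```bash
# Print the names of all palettes defined in a color palette file.
jq -r '."color-palettes"[].name' color-palettes.json
```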
---
### 🚀 Ready to Extend Teiler?

If you want to create your own **bridgehead app** and integrate it into **Teiler**, start by:

1. Selecting a template **or**
2. Building a microfrontend compatible with [Single-SPA](https://single-spa.js.org/).

Then, register your app’s configuration in the **Teiler Backend** as described above.

> 💡 **Tip:** This flexible, modular design makes it easy to plug in your own features and services.

---

### 🔧 Build & Contribute Your App!

🧩 **Join the ecosystem!**
Add your app to Teiler and expand its functionality for everyone.

Whether it’s a visualization tool, a data processing module, or a custom UI component — your contribution can help grow the platform. 💪

> 👉 **Get started today and shape the future of Teiler!**
@@ -1,2 +1,3 @@
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCgpjb250ZXh0IFBhdGllbnQKCgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX1BSSU1BUllfRElBR05PU0lTX05PX1NPUlRfU1RSQVRJRklFUgpES1RLX1NUUkFUX0FHRV9DTEFTU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCiAgREtUS19TVFJBVF9ISVNUT0xPR1lfU1RSQVRJRklFUgpES1RLX1NUUkFUX0RFRl9JTl9JTklUSUFMX1BPUFVMQVRJT04KdHJ1ZQ==
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCmNvZGVzeXN0ZW0gaWNkMTA6ICdodHRwOi8vZmhpci5kZS9Db2RlU3lzdGVtL2JmYXJtL2ljZC0xMC1nbScKY29kZXN5c3RlbSBtb3JwaDogJ3VybjpvaWQ6Mi4xNi44NDAuMS4xMTM4ODMuNi40My4xJwoKY29udGV4dCBQYXRpZW50CgoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUklNQVJZX0RJQUdOT1NJU19OT19TT1JUX1NUUkFUSUZJRVIKREtUS19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCgogIERLVEtfU1RSQVRfSElTVE9MT0dZX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDNjEnIGZyb20gaWNkMTBdKSBhbmQgCigoZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDAvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDcvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0ODAvMycpIG9yIAooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg1MDAvMycpKQ==
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCgpjb250ZXh0IFBhdGllbnQKCgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX1BSSU1BUllfRElBR05PU0lTX05PX1NPUlRfU1RSQVRJRklFUgpES1RLX1NUUkFUX0FHRV9DTEFTU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfUkVQTEFDRV9TUEVDSU1FTl9TVFJBVElGSUVSaWYgSW5Jbml0aWFsUG9wdWxhdGlvbiB0aGVuIFtTcGVjaW1lbl0gZWxzZSB7fSBhcyBMaXN0PFNwZWNpbWVuPgpES1RLX1NUUkFUX1BST0NFRFVSRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX01FRElDQVRJT05fU1RSQVRJRklFUgoKICBES1RLX1JFUExBQ0VfSElTVE9MT0dZX1NUUkFUSUZJRVIKIGlmIGhpc3RvLmNvZGUuY29kaW5nLndoZXJlKGNvZGUgPSAnNTk4NDctNCcpLmNvZGUuZmlyc3QoKSBpcyBudWxsIHRoZW4gMCBlbHNlIDEKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCmNvZGVzeXN0ZW0gaWNkMTA6ICdodHRwOi8vZmhpci5kZS9Db2RlU3lzdGVtL2JmYXJtL2ljZC0xMC1nbScKY29kZXN5c3RlbSBtb3JwaDogJ3VybjpvaWQ6Mi4xNi44NDAuMS4xMTM4ODMuNi40My4xJwoKY29udGV4dCBQYXRpZW50CgoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUklNQVJZX0RJQUdOT1NJU19OT19TT1JUX1NUUkFUSUZJRVIKREtUS19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1JFUExBQ0VfU1BFQ0lNRU5fU1RSQVRJRklFUmlmIEluSW5pdGlhbFBvcHVsYXRpb24gdGhlbiBbU3BlY2ltZW5dIGVsc2Uge30gYXMgTGlzdDxTcGVjaW1lbj4KREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCiAgREtUS19SRVBMQUNFX0hJU1RPTE9HWV9TVFJBVElGSUVSCiBpZiBoaXN0by5jb2RlLmNvZGluZy53aGVyZShjb2RlID0gJzU5ODQ3LTQnKS5jb2RlLmZpcnN0KCkgaXMgbnVsbCB0aGVuIDAgZWxzZSAxCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTihleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzYxJyBmcm9tIGljZDEwXSkgYW5kIAooKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQwLzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQ3LzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NDgwLzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NTAwLzMnKSk=
|
||||
ORGANOID_DASHBOARD_PUBLIC
|
||||
|
17
ccp/vars
17
ccp/vars
@@ -12,14 +12,9 @@ OIDC_USER_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})"
|
||||
OIDC_ADMIN_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})_Verwalter"
|
||||
OIDC_PRIVATE_CLIENT_ID=${SITE_ID}-private
|
||||
OIDC_PUBLIC_CLIENT_ID=${SITE_ID}-public
|
||||
# Use "test-realm-01" for testing
|
||||
OIDC_REALM="${OIDC_REALM:-master}"
|
||||
OIDC_URL="https://login.verbis.dkfz.de"
|
||||
OIDC_ISSUER_URL="${OIDC_URL}/realms/${OIDC_REALM}"
|
||||
OIDC_URL="https://sso.verbis.dkfz.de/application/o/${OIDC_PUBLIC_CLIENT_ID}/"
|
||||
OIDC_GROUP_CLAIM="groups"
|
||||
|
||||
POSTGRES_TAG=15.6-alpine
|
||||
|
||||
for module in $PROJECT/modules/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
@@ -29,4 +24,12 @@ done
|
||||
idManagementSetup
|
||||
mtbaSetup
|
||||
obds2fhirRestSetup
|
||||
blazeSecondarySetup
|
||||
blazeSecondarySetup
|
||||
|
||||
for module in modules/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
done
|
||||
|
||||
transfairSetup
|
@@ -2,7 +2,7 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-dhki-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-dhki-blaze:8080"
|
||||
@@ -33,7 +33,7 @@ services:
|
||||
EPSILON: 0.28
|
||||
QUERIES_TO_CACHE: '/queries_to_cache.conf'
|
||||
volumes:
|
||||
- /srv/docker/bridgehead/dhki/queries_to_cache.conf:/queries_to_cache.conf
|
||||
- /srv/docker/bridgehead/dhki/queries_to_cache.conf:/queries_to_cache.conf:ro
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
12
dhki/vars
12
dhki/vars
@@ -8,8 +8,6 @@ PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
|
||||
|
||||
BROKER_URL_FOR_PREREQ=$BROKER_URL
|
||||
|
||||
POSTGRES_TAG=15.6-alpine
|
||||
|
||||
for module in ccp/modules/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
@@ -17,4 +15,12 @@ do
|
||||
done
|
||||
|
||||
idManagementSetup
|
||||
obds2fhirRestSetup
|
||||
obds2fhirRestSetup
|
||||
|
||||
for module in modules/*.sh
|
||||
do
|
||||
log DEBUG "sourcing $module"
|
||||
source $module
|
||||
done
|
||||
|
||||
transfairSetup
|
42 docs/update-access-token.md Normal file
@@ -0,0 +1,42 @@
## How to Change Config Access Token

### 1. Generate a New Access Token

1. Go to your Git configuration repository provider; this is either [git.verbis.dkfz.de](https://git.verbis.dkfz.de) or [gitlab.bbmri-eric.eu](https://gitlab.bbmri-eric.eu).
2. Navigate to the configuration repository for your site.
3. Go to **Settings → Access Tokens** to check if your Access Token is valid or expired.
   - **If expired**, create a new Access Token.
4. Configure the new Access Token with the following settings:
   - **Expiration date**: One year from today, minus one day.
   - **Role**: Developer.
   - **Scope**: Only `read_repository`.
5. Save the newly generated Access Token in a secure location.

---

### 2. Replace the Old Access Token

1. Navigate to `/etc/bridgehead` on your system.
2. Run the following command to retrieve the current Git remote URL:
   ```bash
   git remote get-url origin
   ```
   Example output:
   ```
   https://name40dkfz-heidelberg.de:<old_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
   ```
3. Replace `<old_access_token>` with your new Access Token in the URL.
4. Set the updated URL using the following command:
   ```bash
   git remote set-url origin https://name40dkfz-heidelberg.de:<new_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
   ```
5. Start the Bridgehead update service by running:
   ```bash
   systemctl start bridgehead-update@<project>
   ```
6. View the output to ensure the update process is successful:
   ```bash
   journalctl -u bridgehead-update@<project> -f
   ```
@@ -2,13 +2,14 @@ version: "3.7"
|
||||
|
||||
services:
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-itcc-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-itcc-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
|
||||
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
|
||||
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
|
||||
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
|
||||
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "blaze-data:/app/data"
|
||||
@@ -31,6 +32,10 @@ services:
|
||||
BEAM_PROXY_URL: http://beam-proxy:8081
|
||||
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
|
||||
EPSILON: 0.28
|
||||
QUERIES_TO_CACHE: '/queries_to_cache.conf'
|
||||
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
|
||||
volumes:
|
||||
- /srv/docker/bridgehead/itcc/queries_to_cache.conf:/queries_to_cache.conf:ro
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
- "blaze"
|
||||
|
@@ -17,7 +17,6 @@ services:
|
||||
BEAM_PROXY_ID: ${SITE_ID}
|
||||
BEAM_BROKER_ID: ${BROKER_ID}
|
||||
BEAM_APP_ID: "focus"
|
||||
PROJECT_METADATA: "dktk_supervisors"
|
||||
depends_on:
|
||||
- "beam-proxy"
|
||||
labels:
|
||||
@@ -30,4 +29,4 @@ services:
|
||||
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
|
||||
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
|
||||
- "traefik.http.routers.spot.tls=true"
|
||||
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"
|
||||
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot,auth"
|
||||
|
2
itcc/queries_to_cache.conf
Normal file
2
itcc/queries_to_cache.conf
Normal file
@@ -0,0 +1,2 @@
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CkRLVEtfU1RSQVRfR0VOREVSX1NUUkFUSUZJRVIKICBES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCiAgSVRDQ19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgogIERLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgp0cnVl
|
||||
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIG1vbGVjdWxhck1hcmtlcjogJ2h0dHA6Ly93d3cuZ2VuZW5hbWVzLm9yZycKCmNvbnRleHQgUGF0aWVudApES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCiAgREtUS19TVFJBVF9ESUFHTk9TSVNfU1RSQVRJRklFUgogIElUQ0NfU1RSQVRfQUdFX0NMQVNTX1NUUkFUSUZJRVIKICBES1RLX1NUUkFUX0RFRl9JTl9JTklUSUFMX1BPUFVMQVRJT04KKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNjk1NDgtNicgZnJvbSBsb2luY10gTwp3aGVyZSBPLmNvbXBvbmVudC53aGVyZShjb2RlLmNvZGluZyBjb250YWlucyBDb2RlICc0ODAxOC02JyBmcm9tIGxvaW5jKS52YWx1ZS5jb2RpbmcgY29udGFpbnMgQ29kZSAnQlJBRicgZnJvbSBtb2xlY3VsYXJNYXJrZXIp
|
@@ -6,7 +6,7 @@ services:
|
||||
replicas: 0 #deactivate landing page
|
||||
|
||||
blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:0.28
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-kr-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-kr-blaze:8080"
|
||||
|
@@ -10,7 +10,6 @@ services:
|
||||
SALT: ${LOCAL_SALT}
|
||||
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
|
||||
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"
|
||||
|
@@ -31,8 +31,8 @@ services:
|
||||
environment:
|
||||
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
|
||||
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
|
||||
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
|
||||
OIDC_URL: "${OIDC_URL}"
|
||||
OIDC_REALM: "${OIDC_REALM}"
|
||||
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
|
||||
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
|
||||
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
|
||||
@@ -41,7 +41,6 @@ services:
|
||||
TEILER_PROJECT: "${PROJECT}"
|
||||
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
|
||||
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
|
||||
TEILER_USER: "${OIDC_USER_GROUP}"
|
||||
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
|
||||
@@ -69,7 +68,6 @@ services:
|
||||
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
|
||||
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
|
||||
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
|
||||
CENTRAX_URL: "${CENTRAXX_URL}"
|
||||
HTTP_PROXY: "http://forward_proxy:3128"
|
||||
ENABLE_MTBA: "${ENABLE_MTBA}"
|
||||
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
|
||||
|
@@ -116,7 +116,7 @@ assertVarsNotEmpty() {
|
||||
MISSING_VARS=""
|
||||
|
||||
for VAR in $@; do
|
||||
if [ -z "${!VAR}" ]; then
|
||||
if [ -z "${!VAR}" ]; then
|
||||
MISSING_VARS+="$VAR "
|
||||
fi
|
||||
done
|
||||
@@ -301,27 +301,108 @@ function sync_secrets() {
|
||||
if [[ $secret_sync_args == "" ]]; then
|
||||
return
|
||||
fi
|
||||
|
||||
if [ "$PROJECT" == "bbmri" ]; then
|
||||
# If the project is BBMRI, use the BBMRI-ERIC broker and not the GBN broker
|
||||
proxy_id=$ERIC_PROXY_ID
|
||||
broker_url=$ERIC_BROKER_URL
|
||||
broker_id=$ERIC_BROKER_ID
|
||||
root_crt_file="/srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem"
|
||||
else
|
||||
proxy_id=$PROXY_ID
|
||||
broker_url=$BROKER_URL
|
||||
broker_id=$BROKER_ID
|
||||
root_crt_file="/srv/docker/bridgehead/$PROJECT/root.crt.pem"
|
||||
fi
|
||||
|
||||
mkdir -p /var/cache/bridgehead/secrets/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/secrets/'. Please run sudo './bridgehead install $PROJECT' again."
|
||||
touch /var/cache/bridgehead/secrets/oidc
|
||||
docker run --rm \
|
||||
-v /var/cache/bridgehead/secrets/oidc:/usr/local/cache \
|
||||
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
|
||||
-v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
|
||||
-v $root_crt_file:/run/secrets/root.crt.pem:ro \
|
||||
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
|
||||
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
|
||||
-e NO_PROXY=localhost,127.0.0.1 \
|
||||
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
|
||||
-e PROXY_ID=$PROXY_ID \
|
||||
-e BROKER_URL=$BROKER_URL \
|
||||
-e OIDC_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
|
||||
-e PROXY_ID=$proxy_id \
|
||||
-e BROKER_URL=$broker_url \
|
||||
-e OIDC_PROVIDER=secret-sync-central.test-secret-sync.$broker_id \
|
||||
-e SECRET_DEFINITIONS=$secret_sync_args \
|
||||
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
|
||||
|
||||
set -a # Export variables as environment variables
|
||||
source /var/cache/bridgehead/secrets/*
|
||||
source /var/cache/bridgehead/secrets/oidc
|
||||
set +a # Export variables in the regular way
|
||||
}
|
||||
|
||||
function secret_sync_gitlab_token() {
|
||||
# Map the origin of the git repository /etc/bridgehead to the prefix recognized by Secret Sync
|
||||
local gitlab
|
||||
case "$(git -C /etc/bridgehead remote get-url origin)" in
|
||||
*git.verbis.dkfz.de*) gitlab=verbis;;
|
||||
*gitlab.bbmri-eric.eu*) gitlab=bbmri;;
|
||||
*)
|
||||
log "WARN" "Not running Secret Sync because the git repository /etc/bridgehead has unknown origin"
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
if [ "$PROJECT" == "bbmri" ]; then
|
||||
# If the project is BBMRI, use the BBMRI-ERIC broker and not the GBN broker
|
||||
proxy_id=$ERIC_PROXY_ID
|
||||
broker_url=$ERIC_BROKER_URL
|
||||
broker_id=$ERIC_BROKER_ID
|
||||
root_crt_file="/srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem"
|
||||
else
|
||||
proxy_id=$PROXY_ID
|
||||
broker_url=$BROKER_URL
|
||||
broker_id=$BROKER_ID
|
||||
root_crt_file="/srv/docker/bridgehead/$PROJECT/root.crt.pem"
|
||||
fi
|
||||
|
||||
# Create a temporary directory for Secret Sync that is valid per boot
|
||||
secret_sync_tempdir="/tmp/bridgehead/secret-sync.boot-$(cat /proc/sys/kernel/random/boot_id)"
|
||||
mkdir -p $secret_sync_tempdir
|
||||
|
||||
# Use Secret Sync to validate the GitLab token in $secret_sync_tempdir/cache.
|
||||
# If it is missing or expired, Secret Sync will create a new token and write it to the file.
|
||||
# The git credential helper reads the token from the file during git pull.
|
||||
log "INFO" "Running Secret Sync for the GitLab token (gitlab=$gitlab)"
|
||||
docker pull docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest # make sure we have the latest image
|
||||
docker run --rm \
|
||||
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
|
||||
-v $root_crt_file:/run/secrets/root.crt.pem:ro \
|
||||
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
|
||||
-v $secret_sync_tempdir:/secret-sync/ \
|
||||
-e CACHE_PATH=/secret-sync/gitlab-token \
|
||||
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
|
||||
-e NO_PROXY=localhost,127.0.0.1 \
|
||||
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
|
||||
-e PROXY_ID=$proxy_id \
|
||||
-e BROKER_URL=$broker_url \
|
||||
-e GITLAB_PROJECT_ACCESS_TOKEN_PROVIDER=secret-sync-central.central-secret-sync.$broker_id \
|
||||
-e SECRET_DEFINITIONS=GitLabProjectAccessToken:BRIDGEHEAD_CONFIG_REPO_TOKEN:$gitlab \
|
||||
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
|
||||
if [ $? -eq 0 ]; then
|
||||
log "INFO" "Secret Sync was successful"
|
||||
# In the past we used to hardcode tokens into the repository URL. We have to remove those now for the git credential helper to become effective.
|
||||
CLEAN_REPO="$(git -C /etc/bridgehead remote get-url origin | sed -E 's|https://[^@]+@|https://|')"
|
||||
git -C /etc/bridgehead remote set-url origin "$CLEAN_REPO"
|
||||
# Set the git credential helper
|
||||
git -C /etc/bridgehead config credential.helper /srv/docker/bridgehead/lib/gitlab-token-helper.sh
|
||||
else
|
||||
log "WARN" "Secret Sync failed"
|
||||
# Remove the git credential helper
|
||||
git -C /etc/bridgehead config --unset credential.helper
|
||||
fi
|
||||
|
||||
# In the past the git credential helper was also set for /srv/docker/bridgehead but never used.
|
||||
# Let's remove it to avoid confusion. This line can be removed at some point the future when we
|
||||
# believe that it was removed on all/most production servers.
|
||||
git -C /srv/docker/bridgehead config --unset credential.helper
|
||||
}
|
||||
|
||||
capitalize_first_letter() {
|
||||
input="$1"
|
||||
capitalized="$(tr '[:lower:]' '[:upper:]' <<< ${input:0:1})${input:1}"
|
||||
@@ -369,7 +450,3 @@ generate_simple_password(){
|
||||
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
|
||||
echo "${combined_text}" | sha1sum | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/[+\/]/A/g'
|
||||
}
|
||||
|
||||
docker_jq() {
|
||||
docker run --rm -i docker.verbis.dkfz.de/cache/jqlang/jq:latest "$@"
|
||||
}
|
||||
|
11 lib/gitlab-token-helper.sh Executable file
@@ -0,0 +1,11 @@
#!/bin/bash

[ "$1" = "get" ] || exit

source "/tmp/bridgehead/secret-sync.boot-$(cat /proc/sys/kernel/random/boot_id)/gitlab-token"

# Any non-empty username works, only the token matters
cat << EOF
username=bk
password=$BRIDGEHEAD_CONFIG_REPO_TOKEN
EOF
@@ -1,41 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ "$1" != "get" ]; then
|
||||
echo "Usage: $0 get"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
baseDir() {
|
||||
# see https://stackoverflow.com/questions/59895
|
||||
SOURCE=${BASH_SOURCE[0]}
|
||||
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
|
||||
DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )
|
||||
SOURCE=$(readlink "$SOURCE")
|
||||
[[ $SOURCE != /* ]] && SOURCE=$DIR/$SOURCE # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
|
||||
done
|
||||
DIR=$( cd -P "$( dirname "$SOURCE" )/.." >/dev/null 2>&1 && pwd )
|
||||
echo $DIR
|
||||
}
|
||||
|
||||
BASE=$(baseDir)
|
||||
cd $BASE
|
||||
|
||||
source lib/functions.sh
|
||||
|
||||
assertVarsNotEmpty SITE_ID || fail_and_report 1 "gitpassword.sh failed: SITE_ID is empty."
|
||||
|
||||
PARAMS="$(cat)"
|
||||
GITHOST=$(echo "$PARAMS" | grep "^host=" | sed 's/host=\(.*\)/\1/g')
|
||||
|
||||
fetchVarsFromVault GIT_PASSWORD
|
||||
|
||||
if [ -z "${GIT_PASSWORD}" ]; then
|
||||
fail_and_report 1 "gitpassword.sh failed: Git password not found."
|
||||
fi
|
||||
|
||||
cat <<EOF
|
||||
protocol=https
|
||||
host=$GITHOST
|
||||
username=bk-${SITE_ID}
|
||||
password=${GIT_PASSWORD}
|
||||
EOF
|
@@ -41,6 +41,20 @@ if [ ! -z "$NNGM_CTS_APIKEY" ] && [ -z "$NNGM_AUTH" ]; then
|
||||
add_basic_auth_user "nngm" $generated_passwd "NNGM_AUTH" $PROJECT
|
||||
fi
|
||||
|
||||
if [ -z "$TRANSFAIR_AUTH" ]; then
|
||||
if [[ -n "$TTP_URL" || -n "$EXCHANGE_ID_SYSTEM" ]]; then
|
||||
log "INFO" "Now generating basic auth user for transfair API (see adduser in bridgehead for more information). "
|
||||
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
|
||||
add_basic_auth_user "transfair" $generated_passwd "TRANSFAIR_AUTH" $PROJECT
|
||||
fi
|
||||
fi
|
||||
|
||||
if [ "$ENABLE_EXPORTER" == "true" ] && [ -z "$EXPORTER_USER" ]; then
|
||||
log "INFO" "Now generating basic auth for the exporter and reporter (see adduser in bridgehead for more information)."
|
||||
generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
|
||||
add_basic_auth_user $PROJECT $generated_passwd "EXPORTER_USER" $PROJECT
|
||||
fi
|
||||
|
||||
log "INFO" "Registering system units for bridgehead and bridgehead-update"
|
||||
cp -v \
|
||||
lib/systemd/bridgehead\@.service \
|
||||
|
@@ -19,7 +19,7 @@ fi
|
||||
|
||||
hc_send log "Checking for bridgehead updates ..."
|
||||
|
||||
CONFFILE=/etc/bridgehead/$1.conf
|
||||
CONFFILE=/etc/bridgehead/$PROJECT.conf
|
||||
|
||||
if [ ! -e $CONFFILE ]; then
|
||||
fail_and_report 1 "Configuration file $CONFFILE not found."
|
||||
@@ -33,7 +33,7 @@ export SITE_ID
|
||||
checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
|
||||
checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
|
||||
|
||||
CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
|
||||
secret_sync_gitlab_token
|
||||
|
||||
CHANGES=""
|
||||
|
||||
@@ -45,10 +45,6 @@ for DIR in /etc/bridgehead $(pwd); do
|
||||
if [ -n "$OUT" ]; then
|
||||
report_error log "The working directory $DIR is modified. Changed files: $OUT"
|
||||
fi
|
||||
if [ "$(git -C $DIR config --get credential.helper)" != "$CREDHELPER" ]; then
|
||||
log "INFO" "Configuring repo to use bridgehead git credential helper."
|
||||
git -C $DIR config credential.helper "$CREDHELPER"
|
||||
fi
|
||||
old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
|
||||
if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
|
||||
log "INFO" "Git is using no proxy!"
|
||||
@@ -58,7 +54,8 @@ for DIR in /etc/bridgehead $(pwd); do
|
||||
OUT=$(retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR pull 2>&1)
|
||||
fi
|
||||
if [ $? -ne 0 ]; then
|
||||
report_error log "Unable to update git $DIR: $OUT"
|
||||
OUT_SAN=$(echo $OUT | sed -E 's|://[^:]+:[^@]+@|://credentials@|g')
|
||||
report_error log "Unable to update git $DIR: $OUT_SAN"
|
||||
fi
|
||||
|
||||
new_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
|
||||
|
@@ -16,7 +16,7 @@ services:
|
||||
- --entrypoints.web.http.redirections.entrypoint.scheme=https
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard/`)"
|
||||
- "traefik.http.routers.dashboard.rule=PathPrefix(`/dashboard/`)"
|
||||
- "traefik.http.routers.dashboard.entrypoints=websecure"
|
||||
- "traefik.http.routers.dashboard.service=api@internal"
|
||||
- "traefik.http.routers.dashboard.tls=true"
|
||||
|
142
minimal/modules/dnpm-central-targets.json
Normal file
142
minimal/modules/dnpm-central-targets.json
Normal file
@@ -0,0 +1,142 @@
|
||||
{
|
||||
"sites": [
|
||||
{
|
||||
"id": "UKFR",
|
||||
"name": "Freiburg",
|
||||
"virtualhost": "ukfr.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKHD",
|
||||
"name": "Heidelberg",
|
||||
"virtualhost": "ukhd.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKT",
|
||||
"name": "Tübingen",
|
||||
"virtualhost": "ukt.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKU",
|
||||
"name": "Ulm",
|
||||
"virtualhost": "uku.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UM",
|
||||
"name": "Mainz",
|
||||
"virtualhost": "um.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKMR",
|
||||
"name": "Marburg",
|
||||
"virtualhost": "ukmr.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKE",
|
||||
"name": "Hamburg",
|
||||
"virtualhost": "uke.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKA",
|
||||
"name": "Aachen",
|
||||
"virtualhost": "uka.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "Charite",
|
||||
"name": "Berlin",
|
||||
"virtualhost": "charite.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.berlin-test.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "MRI",
|
||||
"name": "Muenchen-tum",
|
||||
"virtualhost": "mri.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.muenchen-tum.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "KUM",
|
||||
"name": "Muenchen-lmu",
|
||||
"virtualhost": "kum.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.muenchen-lmu.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "MHH",
|
||||
"name": "Hannover",
|
||||
"virtualhost": "mhh.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.hannover.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKDD",
|
||||
"name": "dresden-dnpm",
|
||||
"virtualhost": "ukdd.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dresden-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKB",
|
||||
"name": "Bonn",
|
||||
"virtualhost": "ukb.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.bonn-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKD",
|
||||
"name": "Duesseldorf",
|
||||
"virtualhost": "ukd.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.duesseldorf-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKK",
|
||||
"name": "Koeln",
|
||||
"virtualhost": "ukk.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UME",
|
||||
"name": "Essen",
|
||||
"virtualhost": "ume.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.essen.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKM",
|
||||
"name": "Muenster",
|
||||
"virtualhost": "ukm.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.muenster-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKF",
|
||||
"name": "Frankfurt",
|
||||
"virtualhost": "ukf.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.frankfurt.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UMG",
|
||||
"name": "Goettingen",
|
||||
"virtualhost": "umg.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.goettingen.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKW",
|
||||
"name": "Würzburg",
|
||||
"virtualhost": "ukw.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.wuerzburg-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "UKSH",
|
||||
"name": "Schleswig-Holstein",
|
||||
"virtualhost": "uksh.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.uksh-dnpm.broker.ccp-it.dktk.dkfz.de"
|
||||
},
|
||||
{
|
||||
"id": "TKT",
|
||||
"name": "Test",
|
||||
"virtualhost": "tkt.dnpm.de",
|
||||
"beamconnect": "dnpm-connect.tobias-develop.broker.ccp-it.dktk.dkfz.de"
|
||||
}
|
||||
]
|
||||
}
|
@@ -29,7 +29,7 @@ services:
|
||||
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
|
||||
APP_ID: dnpm-connect.${DNPM_PROXY_ID}
|
||||
DISCOVERY_URL: "./conf/central_targets.json"
|
||||
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
|
||||
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
|
||||
HTTP_PROXY: http://forward_proxy:3128
|
||||
HTTPS_PROXY: http://forward_proxy:3128
|
||||
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
|
||||
@@ -41,7 +41,7 @@ services:
|
||||
volumes:
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
|
||||
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
|
||||
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
|
||||
|
@@ -1,34 +1,99 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
dnpm-backend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
|
||||
container_name: bridgehead-dnpm-backend
|
||||
dnpm-mysql:
|
||||
image: mysql:9
|
||||
healthcheck:
|
||||
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
|
||||
interval: 3s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
environment:
|
||||
- ZPM_SITE=${ZPM_SITE}
|
||||
- N_RANDOM_FILES=${DNPM_SYNTH_NUM}
|
||||
MYSQL_ROOT_HOST: "%"
|
||||
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm:/bwhc_config:ro
|
||||
- ${DNPM_DATA_DIR}:/bwhc_data
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
|
||||
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
|
||||
- "traefik.http.routers.bwhc-backend.tls=true"
|
||||
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
|
||||
|
||||
dnpm-frontend:
|
||||
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
|
||||
container_name: bridgehead-dnpm-frontend
|
||||
links:
|
||||
- dnpm-backend
|
||||
dnpm-authup:
|
||||
image: authup/authup:latest
|
||||
container_name: bridgehead-dnpm-authup
|
||||
volumes:
|
||||
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
|
||||
depends_on:
|
||||
dnpm-mysql:
|
||||
condition: service_healthy
|
||||
command: server/core start
|
||||
environment:
|
||||
- NUXT_HOST=0.0.0.0
|
||||
- NUXT_PORT=8080
|
||||
- BACKEND_PROTOCOL=https
|
||||
- BACKEND_HOSTNAME=$HOST
|
||||
- BACKEND_PORT=443
|
||||
- PUBLIC_URL=https://${HOST}/auth/
|
||||
- AUTHORIZE_REDIRECT_URL=https://${HOST}
|
||||
- ROBOT_ADMIN_ENABLED=true
|
||||
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
|
||||
- ROBOT_ADMIN_SECRET_RESET=true
|
||||
- DB_TYPE=mysql
|
||||
- DB_HOST=dnpm-mysql
|
||||
- DB_USERNAME=root
|
||||
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
|
||||
- DB_DATABASE=auth
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.bwhc-frontend.tls=true"
|
||||
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth/"
|
||||
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
|
||||
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
|
||||
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
|
||||
- "traefik.http.routers.dnpm-auth.tls=true"
|
||||
|
||||
dnpm-portal:
|
||||
image: ghcr.io/dnpm-dip/portal:latest
|
||||
container_name: bridgehead-dnpm-portal
|
||||
environment:
|
||||
- NUXT_API_URL=http://dnpm-backend:9000/
|
||||
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
|
||||
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
|
||||
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
|
||||
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
|
||||
- "traefik.http.routers.dnpm-frontend.tls=true"
|
||||
|
||||
dnpm-backend:
|
||||
container_name: bridgehead-dnpm-backend
|
||||
image: ghcr.io/dnpm-dip/backend:latest
|
||||
environment:
|
||||
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
|
||||
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
|
||||
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
|
||||
- HATEOAS_HOST=https://${HOST}
|
||||
- CONNECTOR_TYPE=broker
|
||||
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
|
||||
volumes:
|
||||
- /etc/bridgehead/dnpm/config:/dnpm_config
|
||||
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
|
||||
depends_on:
|
||||
dnpm-authup:
|
||||
condition: service_healthy
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
|
||||
# expose everything
|
||||
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
|
||||
- "traefik.http.routers.dnpm-backend.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
|
||||
# except ETL
|
||||
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
|
||||
- "traefik.http.routers.dnpm-backend-etl.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
|
||||
# this needs an ETL processor with support for basic auth
|
||||
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
|
||||
# except peer-to-peer
|
||||
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
|
||||
- "traefik.http.routers.dnpm-backend-peer.tls=true"
|
||||
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
|
||||
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
|
||||
# this effectively denies all requests
|
||||
# this is okay, because requests from peers don't go through Traefik
|
||||
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
|
||||
|
||||
landing:
|
||||
labels:
|
||||
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"
|
||||
|
@@ -1,28 +1,16 @@
|
||||
#!/bin/bash
|
||||
|
||||
if [ -n "${ENABLE_DNPM_NODE}" ]; then
|
||||
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
|
||||
log INFO "DNPM setup detected -- will start DNPM:DIP node."
|
||||
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
|
||||
|
||||
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
|
||||
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
|
||||
if [ -z "${ZPM_SITE+x}" ]; then
|
||||
log ERROR "Mandatory variable ZPM_SITE not defined!"
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "${DNPM_DATA_DIR+x}" ]; then
|
||||
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
|
||||
exit 1
|
||||
fi
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
|
||||
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
|
||||
echo "Override of landing page url already in place"
|
||||
else
|
||||
echo "Adding override of landing page url"
|
||||
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
|
||||
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
else
|
||||
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
|
||||
fi
|
||||
fi
|
||||
mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
|
||||
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
|
||||
DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
|
||||
DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
|
||||
fi
|
||||
|
@@ -11,7 +11,6 @@ services:
|
||||
CTS_API_KEY: ${NNGM_CTS_APIKEY}
|
||||
CRYPT_KEY: ${NNGM_CRYPTKEY}
|
||||
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
|
||||
restart: always
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
|
||||
|
17
modules/ssh-tunnel-compose.yml
Normal file
17
modules/ssh-tunnel-compose.yml
Normal file
@@ -0,0 +1,17 @@
|
||||
version: "3.7"
|
||||
|
||||
services:
|
||||
ssh-tunnel:
|
||||
image: docker.verbis.dkfz.de/cache/samply/ssh-tunnel
|
||||
container_name: bridgehead-ccp-ssh-tunnel
|
||||
environment:
|
||||
SSH_TUNNEL_USERNAME: "${SSH_TUNNEL_USERNAME}"
|
||||
SSH_TUNNEL_HOST: "${SSH_TUNNEL_HOST}"
|
||||
SSH_TUNNEL_PORT: "${SSH_TUNNEL_PORT:-22}"
|
||||
volumes:
|
||||
- "/etc/bridgehead/ssh-tunnel.conf:/ssh-tunnel.conf:ro"
|
||||
secrets:
|
||||
- privkey
|
||||
secrets:
|
||||
privkey:
|
||||
file: /etc/bridgehead/pki/ssh-tunnel.priv.pem
|
6 modules/ssh-tunnel-setup.sh Normal file
@@ -0,0 +1,6 @@
#!/bin/bash

if [ -n "$ENABLE_SSH_TUNNEL" ]; then
  log INFO "SSH Tunnel setup detected -- will start SSH Tunnel."
  OVERRIDE+=" -f ./modules/ssh-tunnel-compose.yml"
fi
19 modules/ssh-tunnel.md Normal file
@@ -0,0 +1,19 @@
# SSH Tunnel Module

This module enables SSH tunneling capabilities for the Bridgehead installation.
The primary use case for this is to connect bridgehead components that are hosted externally due to security concerns.
To connect the new components to the locally running bridgehead infra one is supposed to write a docker-compose.override.yml changing the urls to point to the corresponding forwarded port of the ssh-tunnel container.

## Configuration Variables

- `ENABLE_SSH_TUNNEL`: Required to enable the module
- `SSH_TUNNEL_USERNAME`: Username for SSH connection
- `SSH_TUNNEL_HOST`: Target host for SSH tunnel
- `SSH_TUNNEL_PORT`: SSH port (defaults to 22)

## Configuration Files

The module requires the following files to be present:

- `/etc/bridgehead/ssh-tunnel.conf`: SSH tunnel configuration file. Detailed information can be found [here](https://github.com/samply/ssh-tunnel?tab=readme-ov-file#configuration).
- `/etc/bridgehead/pki/ssh-tunnel.priv.pem`: The SSH private key used to connect to the `SSH_TUNNEL_HOST`. **Passphrases for the key are not supported!**
86
modules/transfair-compose.yml
Normal file
86
modules/transfair-compose.yml
Normal file
@@ -0,0 +1,86 @@
|
||||
|
||||
services:
|
||||
transfair:
|
||||
image: docker.verbis.dkfz.de/cache/samply/transfair:latest
|
||||
container_name: bridgehead-transfair
|
||||
environment:
|
||||
# NOTE: Those 3 variables need only to be passed if their set, otherwise transfair will complain about empty url values
|
||||
- TTP_URL
|
||||
- TTP_ML_API_KEY
|
||||
- TTP_GW_SOURCE
|
||||
- TTP_GW_EPIX_DOMAIN
|
||||
- TTP_GW_GPAS_DOMAIN
|
||||
- TTP_TYPE
|
||||
- TTP_AUTH
|
||||
- PROJECT_ID_SYSTEM
|
||||
- FHIR_REQUEST_URL=${FHIR_REQUEST_URL}
|
||||
- FHIR_INPUT_URL=${FHIR_INPUT_URL}
|
||||
- FHIR_OUTPUT_URL=${FHIR_OUTPUT_URL:-http://blaze:8080}
|
||||
- FHIR_REQUEST_CREDENTIALS=${FHIR_REQUEST_CREDENTIALS}
|
||||
- FHIR_INPUT_CREDENTIALS=${FHIR_INPUT_CREDENTIALS}
|
||||
- FHIR_OUTPUT_CREDENTIALS=${FHIR_OUTPUT_CREDENTIALS}
|
||||
- EXCHANGE_ID_SYSTEM=${EXCHANGE_ID_SYSTEM:-SESSION_ID}
|
||||
- DATABASE_URL=sqlite://transfair/data_requests.sql?mode=rwc
|
||||
- RUST_LOG=${RUST_LOG:-info}
|
||||
- TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs
|
||||
- TLS_DISABLE=${TRANSFAIR_TLS_DISABLE:-false}
|
||||
- NO_PROXY=${TRANSFAIR_NO_PROXIES}
|
||||
- ALL_PROXY=http://forward_proxy:3128
|
||||
volumes:
|
||||
- /var/cache/bridgehead/${PROJECT}/transfair:/transfair
|
||||
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.middlewares.transfair-strip.stripprefix.prefixes=/transfair"
|
||||
- "traefik.http.routers.transfair.middlewares=transfair-strip,transfair-auth"
|
||||
- "traefik.http.routers.transfair.rule=PathPrefix(`/transfair`)"
|
||||
- "traefik.http.services.transfair.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.transfair.tls=true"
|
||||
|
||||
traefik:
|
||||
labels:
|
||||
- "traefik.http.middlewares.transfair-auth.basicauth.users=${TRANSFAIR_AUTH}"
|
||||
|
||||
transfair-input-blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-transfair-input-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-transfair-input-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx1024m"
|
||||
DB_BLOCK_CACHE_SIZE: 1024
|
||||
CQL_EXPR_CACHE_SIZE: 8
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "transfair-input-blaze-data:/app/data"
|
||||
profiles: ["transfair-input-blaze"]
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.transfair-input-blaze.rule=PathPrefix(`/data-delivery`)"
|
||||
- "traefik.http.middlewares.transfair-input-strip.stripprefix.prefixes=/data-delivery"
|
||||
- "traefik.http.services.transfair-input-blaze.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.transfair-input-blaze.middlewares=transfair-input-strip,transfair-auth"
|
||||
- "traefik.http.routers.transfair-input-blaze.tls=true"
|
||||
|
||||
transfair-request-blaze:
|
||||
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
|
||||
container_name: bridgehead-transfair-request-blaze
|
||||
environment:
|
||||
BASE_URL: "http://bridgehead-transfair-request-blaze:8080"
|
||||
JAVA_TOOL_OPTIONS: "-Xmx1024m"
|
||||
DB_BLOCK_CACHE_SIZE: 1024
|
||||
CQL_EXPR_CACHE_SIZE: 8
|
||||
ENFORCE_REFERENTIAL_INTEGRITY: "false"
|
||||
volumes:
|
||||
- "transfair-request-blaze-data:/app/data"
|
||||
profiles: ["transfair-request-blaze"]
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.transfair-request-blaze.rule=PathPrefix(`/data-requests`)"
|
||||
- "traefik.http.middlewares.transfair-request-strip.stripprefix.prefixes=/data-requests"
|
||||
- "traefik.http.services.transfair-request-blaze.loadbalancer.server.port=8080"
|
||||
- "traefik.http.routers.transfair-request-blaze.middlewares=transfair-request-strip,transfair-auth"
|
||||
- "traefik.http.routers.transfair-request-blaze.tls=true"
|
||||
|
||||
volumes:
|
||||
transfair-input-blaze-data:
|
||||
transfair-request-blaze-data:
|
35
modules/transfair-setup.sh
Executable file
35
modules/transfair-setup.sh
Executable file
@@ -0,0 +1,35 @@
|
||||
#!/bin/bash -e
|
||||
|
||||
function transfairSetup() {
|
||||
if [[ -n "$TTP_URL" || -n "$EXCHANGE_ID_SYSTEM" ]]; then
|
||||
echo "Starting transfair."
|
||||
OVERRIDE+=" -f ./modules/transfair-compose.yml"
|
||||
if [ -n "$FHIR_INPUT_URL" ]; then
|
||||
log INFO "TransFAIR input fhir store set to external $FHIR_INPUT_URL"
|
||||
else
|
||||
log INFO "TransFAIR input fhir store not set writing to internal blaze"
|
||||
FHIR_INPUT_URL="http://transfair-input-blaze:8080"
|
||||
OVERRIDE+=" --profile transfair-input-blaze"
|
||||
fi
|
||||
if [ -n "$FHIR_REQUEST_URL" ]; then
|
||||
log INFO "TransFAIR request fhir store set to external $FHIR_REQUEST_URL"
|
||||
else
|
||||
log INFO "TransFAIR request fhir store not set writing to internal blaze"
|
||||
FHIR_REQUEST_URL="http://transfair-request-blaze:8080"
|
||||
OVERRIDE+=" --profile transfair-request-blaze"
|
||||
fi
|
||||
if [ -n "$TTP_GW_SOURCE" ]; then
|
||||
log INFO "TransFAIR configured with greifswald as ttp"
|
||||
TTP_TYPE="greifswald"
|
||||
elif [ -n "$TTP_ML_API_KEY" ]; then
|
||||
log INFO "TransFAIR configured with mainzelliste as ttp"
|
||||
TTP_TYPE="mainzelliste"
|
||||
else
|
||||
log INFO "TransFAIR configured without ttp"
|
||||
fi
|
||||
TRANSFAIR_NO_PROXIES="transfair-input-blaze,blaze,transfair-requests-blaze"
|
||||
if [ -n "${TRANSFAIR_NO_PROXY}" ]; then
|
||||
TRANSFAIR_NO_PROXIES+=",${TRANSFAIR_NO_PROXY}"
|
||||
fi
|
||||
fi
|
||||
}
|
4 versions/acceptance Normal file
@@ -0,0 +1,4 @@
FOCUS_TAG=develop
BEAM_TAG=develop
BLAZE_TAG=main
POSTGRES_TAG=15.13-alpine
4 versions/prod Normal file
@@ -0,0 +1,4 @@
FOCUS_TAG=main
BEAM_TAG=main
BLAZE_TAG=0.32
POSTGRES_TAG=15.13-alpine
4 versions/test Normal file
@@ -0,0 +1,4 @@
FOCUS_TAG=develop
BEAM_TAG=develop
BLAZE_TAG=main
POSTGRES_TAG=15.13-alpine