Compare commits


48 Commits

Author SHA1 Message Date
364cf5f1c9 Added more detail to metadata feedback README 2025-02-06 16:20:54 +01:00
e571681fce Added clarifying information to README 2025-02-06 16:04:59 +01:00
4f5f6b17e7 Added metadata feedback to README 2025-02-06 11:23:19 +01:00
146235236b Removed stuff accumulated during testing phase
Most of the things added during testing were not necessary and they were
removed. This had the additional advantage that many files are now identical
to their equivalents in the develop branch, making the diff more manageable.
2025-02-06 09:28:48 +01:00
0169435074 We are not using the Teiler in this pilot 2025-02-05 15:10:29 +01:00
6c71a83e70 Added startup script for metadata feedback 2025-02-05 14:30:24 +01:00
ed95dff63e Merge branch 'develop' into metadata_fb 2025-02-05 14:02:47 +01:00
3841a98a3b Push hub URL into /etc/bridgehead/bbmri.conf 2025-02-05 13:50:12 +01:00
Jan
721627a78f feat: migrate to new dnpm:dip node (#251)
* feat: migrate to new dnpm:dip node

* hardcode dnpm connector type to broker

* use `SITE_NAME` for dnpm `LOCAL_SITE`

* host central targets in git

* dnpm: add goettingen to central targets

* dnpm: add uksh to central targets

* dnpm: replace named volumes with fs volumes

* chore: change dnpm images

* chore: pin mysql

* dnpm: Secure endpoints for ETL and p2p communications (#254)

* fix authup redirect (#262)

When an OIDC provider is configured, Keycloak redirects you to authup, which in turn redirects you to DNPM:DIP.

Currently the URL looks like this: https://myserver/authup//someurl
and produces an error. Manually removing the additional / fixes the issue.

* Whitespace formatting

---------

Co-authored-by: Niklas <niklas@ytvwld.de>
Co-authored-by: Niklas Reimer <niklas@backbord.net>
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2025-02-05 13:24:28 +01:00
710092c020 Autogenerate secrets for feedback-agent 2025-02-05 11:17:38 +01:00
Jan
e08ff92401 Merge pull request #268 from samply/main
Main backport
2025-02-05 08:45:35 +01:00
9977578aa5 Changes for metadata feedback
These changes fall into the following categories:

* Integration of the feedback components into Traefik.
* Small tweaks to Bridgehead parameters needed by metadata feedback.
* Reverting changes made to some files for testing purposes.
2025-02-04 14:55:08 +01:00
Jan
e3553370b6 feat: unify version handling (#265) 2025-01-29 13:17:59 +01:00
Jan
1ad73d8f82 fix: properly load oidc secrets (#267) 2025-01-29 10:59:27 +01:00
0b6fa439ba Merge pull request #266 from samply/fix/docker-hub-link
Fixed Docker Hub URL list link
2025-01-29 09:52:26 +01:00
615990b92a Use secret-sync for gitpassword (#257)
---------

Co-authored-by: Tim Schumacher <tim@tschumacher.net>
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
Co-authored-by: Tim Schumacher <tim.schumacher@dkfz-heidelberg.de>
2025-01-28 14:53:49 +01:00
db950d6d87 Fixed Docker Hub URL list link 2025-01-28 08:59:57 +00:00
6a71da3dd1 docs: documentation for changing your configuration repository access token (#256) 2025-01-27 10:49:10 +01:00
138a1fa5f1 fix: changed ccp_ppi to use IDMANAGEMENT_FRIENDLY_ID instead of SITE_NAME (#259) 2025-01-27 10:49:10 +01:00
39a87bcf61 Update: Blaze to version 0.31 (#260) 2025-01-27 10:49:10 +01:00
Jan
655d0d24c7 fix: remove restart: always in compose files (#261) 2025-01-27 10:49:10 +01:00
910289079b docs: documentation for changing your configuration repository access token (#256) 2025-01-21 09:35:25 +01:00
1003cd73cf fix: changed ccp_ppi to use IDMANAGEMENT_FRIENDLY_ID instead of SITE_NAME (#259) 2025-01-21 09:30:20 +01:00
3d1105b97c Update: Blaze to version 0.31 (#260) 2025-01-21 09:28:47 +01:00
Jan
5c28e704d2 fix: remove restart: always in compose files (#261) 2025-01-21 09:27:27 +01:00
df1ec21848 Merge pull request #246 from samply/develop
set environment variables needed for the focus blaze-and-sql endpoint to work when fhir2sql is enabled
2024-12-11 10:53:27 +01:00
e3510363ad fix: remove credentials from git remote, if update fails (#253)
Signed-off-by: Patrick Skowronek <patrick.skowronek@dkfz-heidelberg.de>
2024-12-10 17:18:07 +01:00
45aefd24e5 renamed exporter API key to EXPORTER_API_KEY (#252) 2024-12-06 11:27:38 +01:00
122ff16bb1 fix: use verbis cache image instead of docker-hub (#250) 2024-11-12 09:58:17 +01:00
967e45624e Add exporter configuration to focus (#249) 2024-11-05 15:46:00 +01:00
8cd5b93b95 Merge pull request #237 from samply/feat/ccp-ppi
Feat/ccp ppi
2024-11-05 13:59:01 +01:00
Jan
52c24ee6fa fix(fhir2sql): add the postgres connection string to focus (#245) 2024-11-05 13:34:36 +01:00
26712d3567 feat: add and set FOCUS_ENDPOINT_TYPE to support fhir2sql (#244)
* add and set FOCUS_ENDPOINT_TYPE

Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>

---------

Co-authored-by: davidmscholz <david.scholz@dkfz-heidelberg.de>
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
2024-10-30 15:21:15 +01:00
a4e292dd18 Merge pull request #241 from samply/develop
feat: add focus tags in ccp and bbmri (#240)
2024-10-23 07:24:06 +02:00
eea17c3478 fix: remove invalid beam-proxy tag and add it to focus instead (#242) 2024-10-22 16:48:48 +02:00
cf5230963c feat: add focus tags in ccp and bbmri (#240)
Co-authored-by: p.delpy@dkfz-heidelberg.de <p.delpy@dkfz-heidelberg.de>
2024-10-21 11:00:01 +02:00
634d4e2a4b Added comments explaining how things have been changed for local testing 2024-10-18 13:32:06 +02:00
bc24599c54 Beam suppressed
Changes have been made so that the Bridgehead can run without Beam.
2024-10-18 11:15:18 +02:00
75089ab428 Merge pull request #221 from samply/develop
Keep develop in sync with main
2024-10-15 14:30:15 +02:00
7aaee5e7d5 feat: add auto archiving action (#238)
* feat: add auto archiving action

---------

Co-authored-by: p.delpy@dkfz-heidelberg.de <p.delpy@dkfz-heidelberg.de>
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2024-10-15 13:03:42 +02:00
23981062bb Move ppi to id-management 2024-10-10 13:27:24 +02:00
8e7fe6851e fix: use correct mainzelliste api key 2024-10-10 13:11:43 +02:00
760d599b7c Add CCP-PPI 2024-10-10 12:56:36 +02:00
502eef0cc8 feat: adapt secret sync for bbmri 2024-06-12 08:36:28 +00:00
c4b7620fd6 BBMRI Teiler default templates: bbmri (exporter) and bbmri-qb (reporter) 2024-04-30 14:23:38 +02:00
6a21a5b641 Move exporter and teiler to bbmri/modules 2024-04-30 14:16:59 +02:00
c04ffc5f33 Merge branch 'main' into qr_bbmri 2024-04-30 14:04:34 +02:00
b1bdf48e55 changes for the bbmri-qr 2024-04-18 13:50:14 +02:00
54 changed files with 797 additions and 198 deletions

View File

@ -0,0 +1,39 @@
import os
import requests
from datetime import datetime, timedelta
# Configuration
GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')
REPO = 'samply/bridgehead'
HEADERS = {'Authorization': f'token {GITHUB_TOKEN}', 'Accept': 'application/vnd.github.v3+json'}
API_URL = f'https://api.github.com/repos/{REPO}/branches'
INACTIVE_DAYS = 365
CUTOFF_DATE = datetime.now() - timedelta(days=INACTIVE_DAYS)
# Fetch all branches
def get_branches():
response = requests.get(API_URL, headers=HEADERS)
response.raise_for_status()
return response.json() if response.status_code == 200 else []
# Rename inactive branches
def rename_branch(old_name, new_name):
rename_url = f'https://api.github.com/repos/{REPO}/branches/{old_name}/rename'
response = requests.post(rename_url, json={'new_name': new_name}, headers=HEADERS)
response.raise_for_status()
print(f"Renamed branch {old_name} to {new_name}" if response.status_code == 201 else f"Failed to rename {old_name}: {response.status_code}")
# Check if the branch is inactive
def is_inactive(commit_url):
last_commit_date = requests.get(commit_url, headers=HEADERS).json()['commit']['committer']['date']
return datetime.strptime(last_commit_date, '%Y-%m-%dT%H:%M:%SZ') < CUTOFF_DATE
# Rename inactive branches
def main():
for branch in get_branches():
if is_inactive(branch['commit']['url']):
#rename_branch(branch['name'], f"archived/{branch['name']}")
print(f"[LOG] Branch '{branch['name']}' is inactive and would be renamed to 'archived/{branch['name']}'")
if __name__ == "__main__":
main()

View File

@ -0,0 +1,27 @@
name: Cleanup - Rename Inactive Branches
on:
schedule:
- cron: '0 0 * * 0' # Runs every Sunday at midnight
jobs:
archive-stale-branches:
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install Libraries
run: pip install requests
- name: Run Script to Rename Inactive Branches
run: |
python .github/scripts/rename_inactive_branches.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
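For a local dry run of the archiving logic outside of GitHub Actions, something like the following should work; the rename call in the script above is commented out, so it only logs which branches would be archived. The token is assumed to have read access to the repository.

```shell
# Dry-run the branch archiving script locally
export GITHUB_TOKEN=<token with read access to samply/bridgehead>
pip install requests
python .github/scripts/rename_inactive_branches.py
```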

View File

@ -23,6 +23,7 @@ This repository is the starting point for any information and tools you will nee
- [File structure](#file-structure)
- [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
- [Loading data](#loading-data)
- [Metadata feedback](#metadata-feedback)
4. [Things you should know](#things-you-should-know)
- [Auto-Updates](#auto-updates)
- [Auto-Backups](#auto-backups)
@ -76,7 +77,7 @@ The following URLs need to be accessible (prefix with `https://`):
* git.verbis.dkfz.de
* To fetch docker images
* docker.verbis.dkfz.de
* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/all))
* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/setup/allow-list/))
* hub.docker.com
* registry-1.docker.io
* production.cloudflare.docker.com
@ -155,6 +156,7 @@ Clone the bridgehead repository:
```shell
sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
sudo git -C /srv/docker/bridgehead checkout metadata_fb # Only needed if you want to use metadata feedback
```
Then, run the installation script:
@ -254,6 +256,8 @@ sh bridgehead uninstall
## Site-specific configuration
[How to Change Config Access Token](docs/update-access-token.md)
### HTTPS Access
Even within your internal network, the Bridgehead enforces HTTPS for all services. During the installation, a self-signed, long-lived certificate was created for you. To increase security, you can simply replace the files under `/etc/bridgehead/traefik-tls` with ones from established certification authorities such as [Let's Encrypt](https://letsencrypt.org) or [DFN-AAI](https://www.aai.dfn.de).
@ -345,6 +349,26 @@ Normally, you will need to build your own ETL to feed the Bridgehead. However, t
You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).
### Metadata feedback
The Bridgehead comes with a tool that allows you to associate metadata with samples. Multiple arbitrary text strings can be attached to a sample. A typical use case would be publications based on research using a sample; here, one could store the DOI of the publication with the sample.
Full details of the system can be found [here](https://github.com/samply/feedback-deployment). To avail yourself of this feature, you need to:
- Use the bbmri project.
- Work with the ```metadata_fb``` branch of the Bridgehead repository.
- Build the feedback-agent Docker container (more details [here](https://github.com/samply/feedback-agent/); a build sketch follows below).
- Build the feedback-agent-ui Docker container (more details [here](https://github.com/samply/feedback-agent-ui/)).
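A minimal sketch of the build step, assuming each repository provides a Dockerfile at its root and that the resulting images should carry the names referenced in the compose file (see the linked repositories for the authoritative build instructions):

```shell
# Build the feedback agent backend image (image name assumed from the compose file)
git clone https://github.com/samply/feedback-agent.git
docker build -t samply/feedback-agent feedback-agent/

# Build the feedback agent UI image
git clone https://github.com/samply/feedback-agent-ui.git
docker build -t samply/feedback-agent-ui feedback-agent-ui/
```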
The following extra environment variables need to be added to your ```/etc/bridgehead/bbmri.conf``` file:
``` code
ENABLE_EXPORTER=true
ENABLE_FEEDBACK_AGENT=true
FEEDBACK_HUB_URL=<URL for central feedback hub backend API>
FOCUS_RETRY_COUNT=256
```
## Things you should know
### Auto-Updates

View File

@ -4,7 +4,7 @@ version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-bbmri-blaze
environment:
BASE_URL: "http://bridgehead-bbmri-blaze:8080"

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
focus-eric:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-bbmri
container_name: bridgehead-focus-eric
environment:
API_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
@ -22,6 +22,7 @@ services:
BROKER_URL: ${ERIC_BROKER_URL}
PROXY_ID: ${ERIC_PROXY_ID}
APP_focus_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
APP_feedback-agent_KEY: ${FEEDBACK_AGENT_BEAM_SECRET}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs

View File

@ -0,0 +1,67 @@
version: "3.7"
services:
exporter:
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
container_name: bridgehead-ccp-exporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
CROSS_ORIGINS: "https://${HOST}"
EXPORTER_DB_USER: "exporter"
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}"
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
HTTP_RELATIVE_PATH: "/ccp-exporter"
SITE: "${SITE_ID}"
HTTP_SERVLET_REQUEST_SCHEME: "https"
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
labels:
- "traefik.enable=true"
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
- "traefik.http.routers.exporter_ccp.tls=true"
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
volumes:
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
exporter-db:
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-ccp-exporter-db
environment:
POSTGRES_USER: "exporter"
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}"
POSTGRES_DB: "exporter"
volumes:
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
reporter:
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
container_name: bridgehead-ccp-reporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
CROSS_ORIGINS: "https://${HOST}"
HTTP_RELATIVE_PATH: "/ccp-reporter"
SITE: "${SITE_ID}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
EXPORTER_URL: "http://exporter:8092"
LOG_FHIR_VALIDATION: "false"
HTTP_SERVLET_REQUEST_SCHEME: "https"
# In this initial development state of the bridgehead, we are trying to use as few volumes as possible.
# However, in the first executions at the CCP sites, this volume has turned out to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
volumes:
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
labels:
- "traefik.enable=true"
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
- "traefik.http.routers.reporter_ccp.tls=true"
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"

View File

@ -0,0 +1,9 @@
#!/bin/bash -e
if [ "$ENABLE_EXPORTER" == true ]; then
log INFO "Exporter setup detected -- will start Exporter service."
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
POSTGRES_TAG=15.6-alpine
fi

15
bbmri/modules/exporter.md Normal file
View File

@ -0,0 +1,15 @@
# Exporter and Reporter
## Exporter
The exporter is a REST API that exports the data of the different databases of the bridgehead as a set of tables.
It supports several output formats, such as CSV, Excel, JSON or XML, and can also export data into Opal.
## Exporter-DB
A database in which queries are saved for execution by the exporter.
The exporter also uses this database to manage the different executions of the same query.
## Reporter
This component is a plugin of the exporter that allows more complex Excel reports, described in templates, to be created.
It is compatible with different template engines such as Groovy, Thymeleaf, ...
It is well suited to generating documents such as our traditional CCP quality report.
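For context, a minimal sketch of how a site switches the exporter on for the bbmri project; the setup script shown above then adds the compose file and derives the database password and API key deterministically from the site's private key. The exact way you restart the Bridgehead depends on your installation.

```shell
# Enable the exporter module, then restart the Bridgehead as usual for your site
echo 'ENABLE_EXPORTER=true' | sudo tee -a /etc/bridgehead/bbmri.conf
# Exported files end up in the volume mounted in the compose file:
ls /var/cache/bridgehead/ccp/exporter-files
```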

View File

@ -0,0 +1,59 @@
version: "3.7"
services:
feedback-agent-ui:
image: "samply/feedback-agent-ui"
environment:
- VUE_APP_EXPORTER_URL=https://localhost/ccp-exporter
- VUE_APP_FB_BACKEND_URL=http://localhost:8072
labels:
- traefik.enable=true
# HTTPS
- traefik.http.routers.feedback_agent_ui_ccp_https.rule=PathPrefix(`/ccp-feedback-agent-ui`)
- traefik.http.services.feedback_agent_ui_ccp_https.loadbalancer.server.port=8096
- traefik.http.routers.feedback_agent_ui_ccp_https.entrypoints=websecure
- traefik.http.routers.feedback_agent_ui_ccp_https.tls=true
feedback-agent:
image: "samply/feedback-agent"
environment:
- SPRING_DATASOURCE_URL=jdbc:postgresql://feedback-agent-db:5432/compose-postgres
- SPRING_DATASOURCE_USERNAME=compose-postgres
- SPRING_DATASOURCE_PASSWORD=${FEEDBACK_AGENT_DB_PASSWORD}
- SPRING_JPA_HIBERNATE_DDL_AUTO=update
- BEAM_PROXY_URI=http://beam-proxy-eric:8081
- FEEDBACK_HUB_URL=${FEEDBACK_HUB_URL}
- BLAZE_BASE_URL=http://blaze:8080/fhir
- FEEDBACK_AGENT_SECRET=${FEEDBACK_AGENT_BEAM_SECRET}
- FEEDBACK_AGENT_BEAM_ID=feedback-agent.${ERIC_PROXY_ID}
- FEEDBACK_HUB_BEAM_ID=feedback-hub.feedback-central.${ERIC_BROKER_ID}
- EXPORTER_API_KEY=${EXPORTER_API_KEY}
- CORS_ALLOWED_ORIGINS="https://${HOST}"
networks:
# Only needed for local testing.
- feedback
- default
labels:
- traefik.enable=true
# HTTPS
- traefik.http.routers.feedback_agent_ccp_https.rule=PathPrefix(`/ccp-feedback-agent`)
- traefik.http.services.feedback_agent_ccp_https.loadbalancer.server.port=8072
- traefik.http.routers.feedback_agent_ccp_https.entrypoints=websecure
- traefik.http.middlewares.feedback_agent_ccp_https_strip.stripprefix.prefixes=/ccp-feedback-agent
- traefik.http.routers.feedback_agent_ccp_https.middlewares=feedback_agent_ccp_https_strip
- traefik.http.routers.feedback_agent_ccp_https.tls=true
feedback-agent-db:
image: 'postgres:13.1-alpine'
container_name: feedback-agent-db
environment:
- POSTGRES_USER=compose-postgres
- POSTGRES_PASSWORD=${FEEDBACK_AGENT_DB_PASSWORD}
# This is needed when you run both agent and hub locally in a test
# environment. Not necessary in production, though it probably won't
# cause any problems.
networks:
# Network to connect agent and hub.
feedback:
name: feedback
driver: bridge

View File

@ -0,0 +1,8 @@
#!/bin/bash
if [ "$ENABLE_FEEDBACK_AGENT" == true ]; then
OVERRIDE+=" -f ./$PROJECT/modules/feedback-agent-compose.yml"
FEEDBACK_AGENT_BEAM_SECRET="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FEEDBACK_AGENT_DB_PASSWORD="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
fi

View File

@ -0,0 +1,6 @@
# Metadata feedback agent
This component can be used to choose the sample to be associated
with a given piece of metadata (generally the ID of a publication
relating to research done with the sample).
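Once the module is enabled, the UI is routed by Traefik under the path prefix defined in the compose file above; a quick reachability check from the Bridgehead host might look like this (the -k flag accounts for the self-signed certificate created during installation):

```shell
# Check that the feedback agent UI is being served behind Traefik
curl -k https://localhost/ccp-feedback-agent-ui/
```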

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
focus-gbn:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-bbmri
container_name: bridgehead-focus-gbn
environment:
API_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}

View File

@ -7,7 +7,6 @@
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
source versions
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -53,18 +53,45 @@ case "$PROJECT" in
;;
esac
# Loads config variables and runs the projects setup script
loadVars() {
# Load variables from /etc/bridgehead and /srv/docker/bridgehead
set -a
# Source the project specific config file
source /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "/etc/bridgehead/$PROJECT.conf not found"
# Source the project specific local config file if present
# This file is ignored by git, as opposed to the regular config file, as it contains private site information like ETL auth data
if [ -e /etc/bridgehead/$PROJECT.local.conf ]; then
log INFO "Applying /etc/bridgehead/$PROJECT.local.conf"
source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
fi
# Set execution environment: default to production on the main branch, otherwise test
if [[ -z "${ENVIRONMENT+x}" ]]; then
if [ "$(git rev-parse --abbrev-ref HEAD)" == "main" ]; then
ENVIRONMENT="production"
else
ENVIRONMENT="test"
fi
fi
# Source the versions of the component images
case "$ENVIRONMENT" in
"production")
source ./versions/prod
;;
"test")
source ./versions/test
;;
*)
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
source ./versions/prod
;;
esac
fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
setHostname
optimizeBlazeMemoryUsage
[ -e ./$PROJECT/vars ] && source ./versions ./$PROJECT/vars
# Run project specific setup if it exists
# This will usually modify the `OVERRIDE` to include all the compose files that the project depends on
# This is also where projects specify which modules to load
[ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
set +a
OVERRIDE=${OVERRIDE:=""}
@ -79,26 +106,6 @@ loadVars() {
fi
detectCompose
setupProxy
# Set some project-independent default values
: ${ENVIRONMENT:=production}
export ENVIRONMENT
case "$ENVIRONMENT" in
"production")
export FOCUS_TAG=main
export BEAM_TAG=main
;;
"test")
export FOCUS_TAG=develop
export BEAM_TAG=develop
;;
*)
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export FOCUS_TAG=main
export BEAM_TAG=main
;;
esac
}
case "$ACTION" in
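One practical consequence of the loadVars() change above: because the project configuration is sourced (with set -a) before the branch check, a site can pin the execution environment, and therefore which versions file is sourced, explicitly. A minimal sketch, using the bbmri project as an example:

```shell
# Pin the environment regardless of the checked-out branch;
# loadVars() only falls back to the git branch when ENVIRONMENT is unset.
echo 'ENVIRONMENT=production' | sudo tee -a /etc/bridgehead/bbmri.conf
```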

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-cce-blaze
environment:
BASE_URL: "http://bridgehead-cce-blaze:8080"

View File

@ -7,7 +7,6 @@ SUPPORT_EMAIL=manoj.waikar@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
source versions
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-ccp-blaze
environment:
BASE_URL: "http://bridgehead-ccp-blaze:8080"
@ -22,7 +22,7 @@ services:
- "traefik.http.routers.blaze_ccp.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-dktk
container_name: bridgehead-focus
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
@ -33,6 +33,7 @@ services:
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
QUERIES_TO_CACHE: '/queries_to_cache.conf'
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
volumes:
- /srv/docker/bridgehead/ccp/queries_to_cache.conf:/queries_to_cache.conf
depends_on:
@ -58,7 +59,6 @@ services:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:

View File

@ -2,14 +2,13 @@ version: "3.7"
services:
blaze-secondary:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-ccp-blaze-secondary
environment:
BASE_URL: "http://bridgehead-ccp-blaze-secondary:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-secondary-data:/app/data"

View File

@ -13,7 +13,7 @@ services:
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
HTTP_PROXY: "http://forward_proxy:3128"
HTTPS_PROXY: "http://forward_proxy:3128"
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
@ -25,7 +25,7 @@ services:
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"

View File

@ -1,34 +1,99 @@
version: "3.7"
services:
dnpm-backend:
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
container_name: bridgehead-dnpm-backend
dnpm-mysql:
image: mysql:9
healthcheck:
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
interval: 3s
timeout: 5s
retries: 5
environment:
- ZPM_SITE=${ZPM_SITE}
- N_RANDOM_FILES=${DNPM_SYNTH_NUM}
MYSQL_ROOT_HOST: "%"
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
volumes:
- /etc/bridgehead/dnpm:/bwhc_config:ro
- ${DNPM_DATA_DIR}:/bwhc_data
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
- "traefik.http.routers.bwhc-backend.tls=true"
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
dnpm-frontend:
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
container_name: bridgehead-dnpm-frontend
links:
- dnpm-backend
dnpm-authup:
image: authup/authup:latest
container_name: bridgehead-dnpm-authup
volumes:
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
depends_on:
dnpm-mysql:
condition: service_healthy
command: server/core start
environment:
- NUXT_HOST=0.0.0.0
- NUXT_PORT=8080
- BACKEND_PROTOCOL=https
- BACKEND_HOSTNAME=$HOST
- BACKEND_PORT=443
- PUBLIC_URL=https://${HOST}/auth/
- AUTHORIZE_REDIRECT_URL=https://${HOST}
- ROBOT_ADMIN_ENABLED=true
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
- ROBOT_ADMIN_SECRET_RESET=true
- DB_TYPE=mysql
- DB_HOST=dnpm-mysql
- DB_USERNAME=root
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
- DB_DATABASE=auth
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.bwhc-frontend.tls=true"
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth"
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-auth.tls=true"
dnpm-portal:
image: ghcr.io/dnpm-dip/portal:latest
container_name: bridgehead-dnpm-portal
environment:
- NUXT_API_URL=http://dnpm-backend:9000/
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-frontend.tls=true"
dnpm-backend:
container_name: bridgehead-dnpm-backend
image: ghcr.io/dnpm-dip/backend:latest
environment:
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- HATEOAS_HOST=https://${HOST}
- CONNECTOR_TYPE=broker
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
volumes:
- /etc/bridgehead/dnpm/config:/dnpm_config
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
depends_on:
dnpm-authup:
condition: service_healthy
labels:
- "traefik.enable=true"
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
# expose everything
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
- "traefik.http.routers.dnpm-backend.tls=true"
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
# except ETL
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-etl.tls=true"
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
# this needs an ETL processor with support for basic auth
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
# except peer-to-peer
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-peer.tls=true"
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
# this effectively denies all requests
# this is okay, because requests from peers don't go through Traefik
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
landing:
labels:
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"

View File

@ -1,28 +1,16 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
log INFO "DNPM setup detected -- will start DNPM:DIP node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
if [ -z "${DNPM_DATA_DIR+x}" ]; then
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
exit 1
fi
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
echo "Override of landing page url already in place"
else
echo "Adding override of landing page url"
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
else
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
fi
fi
mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
fi

View File

@ -65,3 +65,8 @@ services:
- "traefik.http.routers.reporter_ccp.tls=true"
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"
focus:
environment:
EXPORTER_URL: "http://exporter:8092"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"

View File

@ -23,3 +23,7 @@ services:
POSTGRES_DB: "dashboard"
volumes:
- "/var/cache/bridgehead/ccp/dashboard-db:/var/lib/postgresql/data"
focus:
environment:
POSTGRES_CONNECTION_STRING: "postgresql://dashboard:${DASHBOARD_DB_PASSWORD}@dashboard-db/dashboard"

View File

@ -4,4 +4,5 @@ if [ "$ENABLE_FHIR2SQL" == true ]; then
log INFO "Dashboard setup detected -- will start Dashboard backend and FHIR2SQL service."
OVERRIDE+=" -f ./$PROJECT/modules/fhir2sql-compose.yml"
DASHBOARD_DB_PASSWORD="$(generate_simple_password 'fhir2sql')"
FOCUS_ENDPOINT_TYPE="blaze-and-sql"
fi
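For context, a minimal sketch of how a site switches this module on, using the ccp project as an example; the setup script above then adds the compose file, generates the dashboard database password, and sets FOCUS_ENDPOINT_TYPE to blaze-and-sql:

```shell
# Enable fhir2sql/dashboard support for the ccp project
echo 'ENABLE_FHIR2SQL=true' | sudo tee -a /etc/bridgehead/ccp.conf
```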

View File

@ -101,5 +101,12 @@ services:
forward_proxy:
condition: service_healthy
ccp-patient-project-identificator:
image: docker.verbis.dkfz.de/cache/samply/ccp-patient-project-identificator
container_name: bridgehead-ccp-patient-project-identificator
environment:
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
SITE_NAME: ${IDMANAGEMENT_FRIENDLY_ID}
volumes:
patientlist-db-data:

View File

@ -12,7 +12,6 @@ services:
CTS_API_KEY: ${NNGM_CTS_APIKEY}
CRYPT_KEY: ${NNGM_CRYPTKEY}
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"

View File

@ -10,7 +10,6 @@ services:
SALT: ${LOCAL_SALT}
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"

View File

@ -18,7 +18,8 @@ OIDC_URL="https://login.verbis.dkfz.de"
OIDC_ISSUER_URL="${OIDC_URL}/realms/${OIDC_REALM}"
OIDC_GROUP_CLAIM="groups"
source versions
POSTGRES_TAG=15.6-alpine
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-dhki-blaze
environment:
BASE_URL: "http://bridgehead-dhki-blaze:8080"
@ -39,7 +39,7 @@ services:
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}

View File

@ -8,7 +8,8 @@ PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
source versions
POSTGRES_TAG=15.6-alpine
for module in ccp/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -0,0 +1,42 @@
## How to Change Config Access Token
### 1. Generate a New Access Token
1. Go to your Git configuration repository provider, which will be either [git.verbis.dkfz.de](https://git.verbis.dkfz.de) or [gitlab.bbmri-eric.eu](https://gitlab.bbmri-eric.eu).
2. Navigate to the configuration repository for your site.
3. Go to **Settings → Access Tokens** to check if your Access Token is valid or expired.
- **If expired**, create a new Access Token.
4. Configure the new Access Token with the following settings:
- **Expiration date**: One year from today, minus one day.
- **Role**: Developer.
- **Scope**: Only `read_repository`.
5. Save the newly generated Access Token in a secure location.
---
### 2. Replace the Old Access Token
1. Navigate to `/etc/bridgehead` in your system.
2. Run the following command to retrieve the current Git remote URL:
```bash
git remote get-url origin
```
Example output:
```
https://name40dkfz-heidelberg.de:<old_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
```
3. Replace `<old_access_token>` with your new Access Token in the URL.
4. Set the updated URL using the following command:
```bash
git remote set-url origin https://name40dkfz-heidelberg.de:<new_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
```
5. Start the Bridgehead update service by running:
```bash
systemctl start bridgehead-update@<project>
```
6. View the output to ensure the update process is successful:
```bash
journalctl -u bridgehead-update@<project> -f
```

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-itcc-blaze
environment:
BASE_URL: "http://bridgehead-itcc-blaze:8080"

View File

@ -7,7 +7,6 @@ SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
source versions
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -6,7 +6,7 @@ services:
replicas: 0 #deactivate landing page
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
image: docker.verbis.dkfz.de/cache/samply/blaze:0.31
container_name: bridgehead-kr-blaze
environment:
BASE_URL: "http://bridgehead-kr-blaze:8080"
@ -40,7 +40,7 @@ services:
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}

15
kr/modules/exporter.md Normal file
View File

@ -0,0 +1,15 @@
# Exporter and Reporter
## Exporter
The exporter is a REST API that exports the data of the different databases of the bridgehead as a set of tables.
It supports several output formats, such as CSV, Excel, JSON or XML, and can also export data into Opal.
## Exporter-DB
A database in which queries are saved for execution by the exporter.
The exporter also uses this database to manage the different executions of the same query.
## Reporter
This component is a plugin of the exporter that allows more complex Excel reports, described in templates, to be created.
It is compatible with different template engines such as Groovy, Thymeleaf, ...
It is well suited to generating documents such as our traditional CCP quality report.

View File

@ -10,7 +10,6 @@ services:
SALT: ${LOCAL_SALT}
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"

19
kr/modules/teiler.md Normal file
View File

@ -0,0 +1,19 @@
# Teiler
This module orchestrates the different microfrontends of the bridgehead as a single-page application.
## Teiler Orchestrator
Single-SPA component that consists of the root HTML page of the single-page application and the JavaScript code that
fetches the information about the microfrontends from the teiler backend and is responsible for registering them. With the
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.
The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
These microfrontends can also run standalone, but they need to be extended with Single-SPA (https://single-spa.js.org/docs/ecosystem).
Three templates (Angular, Vue, React) are also available; they can be extended and used directly in the teiler.
## Teiler Dashboard
It consists of the main dashboard and a set of embedded services.
### Login
The username and password are configured in ccp.local.conf.
## Teiler Backend
This component is where the microfrontends are configured.

View File

@ -7,7 +7,6 @@ SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
source versions
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -116,7 +116,7 @@ assertVarsNotEmpty() {
MISSING_VARS=""
for VAR in $@; do
if [ -z "${!VAR}" ]; then
if [ -z "${!VAR}" ]; then
MISSING_VARS+="$VAR "
fi
done
@ -170,11 +170,11 @@ optimizeBlazeMemoryUsage() {
available_system_memory_chunks=$((BLAZE_MEMORY_CAP / 1000))
if [ $available_system_memory_chunks -eq 0 ]; then
log WARN "Only ${BLAZE_MEMORY_CAP} system memory available for Blaze. If your Blaze stores more than 128000 FHIR resources it will run significantly slower."
export BLAZE_RESOURCE_CACHE_CAP=128000
export BLAZE_CQL_CACHE_CAP=32
export BLAZE_RESOURCE_CACHE_CAP=128000;
export BLAZE_CQL_CACHE_CAP=32;
else
export BLAZE_RESOURCE_CACHE_CAP=$((available_system_memory_chunks * 312500))
export BLAZE_CQL_CACHE_CAP=$((($system_memory_in_mb/4)/16))
export BLAZE_CQL_CACHE_CAP=$((($system_memory_in_mb/4)/16));
fi
fi
}
@ -318,7 +318,7 @@ function sync_secrets() {
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
set -a # Export variables as environment variables
source /var/cache/bridgehead/secrets/*
source /var/cache/bridgehead/secrets/oidc
set +a # Export variables in the regular way
}

11
lib/gitlab-token-helper.sh Executable file
View File

@ -0,0 +1,11 @@
#!/bin/bash
[ "$1" = "get" ] || exit
source /var/cache/bridgehead/secrets/gitlab_token
# Any non-empty username works, only the token matters
cat << EOF
username=bk
password=$BRIDGEHEAD_CONFIG_REPO_TOKEN
EOF
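The update script below wires this helper into the configuration repository via `git config credential.helper`. For reference, it can also be exercised by hand once Secret Sync has written the token file:

```shell
# Print the credentials git would use during a pull
# (requires /var/cache/bridgehead/secrets/gitlab_token to exist)
bash /srv/docker/bridgehead/lib/gitlab-token-helper.sh get
# Expected output:
#   username=bk
#   password=<BRIDGEHEAD_CONFIG_REPO_TOKEN>
```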

View File

@ -1,41 +0,0 @@
#!/bin/bash
if [ "$1" != "get" ]; then
echo "Usage: $0 get"
exit 1
fi
baseDir() {
# see https://stackoverflow.com/questions/59895
SOURCE=${BASH_SOURCE[0]}
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )
SOURCE=$(readlink "$SOURCE")
[[ $SOURCE != /* ]] && SOURCE=$DIR/$SOURCE # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR=$( cd -P "$( dirname "$SOURCE" )/.." >/dev/null 2>&1 && pwd )
echo $DIR
}
BASE=$(baseDir)
cd $BASE
source lib/functions.sh
assertVarsNotEmpty SITE_ID || fail_and_report 1 "gitpassword.sh failed: SITE_ID is empty."
PARAMS="$(cat)"
GITHOST=$(echo "$PARAMS" | grep "^host=" | sed 's/host=\(.*\)/\1/g')
fetchVarsFromVault GIT_PASSWORD
if [ -z "${GIT_PASSWORD}" ]; then
fail_and_report 1 "gitpassword.sh failed: Git password not found."
fi
cat <<EOF
protocol=https
host=$GITHOST
username=bk-${SITE_ID}
password=${GIT_PASSWORD}
EOF

View File

@ -57,9 +57,9 @@ case "$PROJECT" in
;;
itcc)
site_configuration_repository_middle="git.verbis.dkfz.de/itcc-sites/"
;;
dhki)
site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
;;
dhki)
site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
;;
kr)
site_configuration_repository_middle="git.verbis.dkfz.de/krebsregister-sites/"

View File

@ -67,7 +67,7 @@ fi
log INFO "Checking network access ($BROKER_URL_FOR_PREREQ) ..."
source "${CONFIG_DIR}${PROJECT}".conf
source ${PROJECT}/vars vars
source ${PROJECT}/vars
if [ "${PROJECT}" != "minimal" ]; then
set +e

View File

@ -19,7 +19,7 @@ fi
hc_send log "Checking for bridgehead updates ..."
CONFFILE=/etc/bridgehead/$1.conf
CONFFILE=/etc/bridgehead/$PROJECT.conf
if [ ! -e $CONFFILE ]; then
fail_and_report 1 "Configuration file $CONFFILE not found."
@ -33,7 +33,43 @@ export SITE_ID
checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
# Use Secret Sync to validate the GitLab token in /var/cache/bridgehead/secrets/gitlab_token.
# If it is missing or expired, Secret Sync will create a new token and write it to the file.
# The git credential helper reads the token from the file during git pull.
mkdir -p /var/cache/bridgehead/secrets
touch /var/cache/bridgehead/secrets/gitlab_token # the file has to exist to be mounted correctly in the Docker container
log "INFO" "Running Secret Sync for the GitLab token"
docker pull docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest # make sure we have the latest image
docker run --rm \
-v /var/cache/bridgehead/secrets/gitlab_token:/usr/local/cache \
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
-v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
-e NO_PROXY=localhost,127.0.0.1 \
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
-e PROXY_ID=$PROXY_ID \
-e BROKER_URL=$BROKER_URL \
-e GITLAB_PROJECT_ACCESS_TOKEN_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
-e SECRET_DEFINITIONS=GitLabProjectAccessToken:BRIDGEHEAD_CONFIG_REPO_TOKEN: \
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
if [ $? -eq 0 ]; then
log "INFO" "Secret Sync was successful"
# In the past we used to hardcode tokens into the repository URL. We have to remove those now for the git credential helper to become effective.
CLEAN_REPO="$(git -C /etc/bridgehead remote get-url origin | sed -E 's|https://[^@]+@|https://|')"
git -C /etc/bridgehead remote set-url origin "$CLEAN_REPO"
# Set the git credential helper
git -C /etc/bridgehead config credential.helper /srv/docker/bridgehead/lib/gitlab-token-helper.sh
else
log "WARN" "Secret Sync failed"
# Remove the git credential helper
git -C /etc/bridgehead config --unset credential.helper
fi
# In the past the git credential helper was also set for /srv/docker/bridgehead but never used.
# Let's remove it to avoid confusion. This line can be removed at some point in the future when we
# believe that it was removed on all/most production servers.
git -C /srv/docker/bridgehead config --unset credential.helper
CHANGES=""
@ -45,10 +81,6 @@ for DIR in /etc/bridgehead $(pwd); do
if [ -n "$OUT" ]; then
report_error log "The working directory $DIR is modified. Changed files: $OUT"
fi
if [ "$(git -C $DIR config --get credential.helper)" != "$CREDHELPER" ]; then
log "INFO" "Configuring repo to use bridgehead git credential helper."
git -C $DIR config credential.helper "$CREDHELPER"
fi
old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
log "INFO" "Git is using no proxy!"
@ -58,7 +90,8 @@ for DIR in /etc/bridgehead $(pwd); do
OUT=$(retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR pull 2>&1)
fi
if [ $? -ne 0 ]; then
report_error log "Unable to update git $DIR: $OUT"
OUT_SAN=$(echo $OUT | sed -E 's|://[^:]+:[^@]+@|://credentials@|g')
report_error log "Unable to update git $DIR: $OUT_SAN"
fi
new_git_hash="$(git -C $DIR rev-parse --verify HEAD)"

View File

@ -16,7 +16,7 @@ services:
- --entrypoints.web.http.redirections.entrypoint.scheme=https
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard/`)"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/dashboard/`)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"

View File

@ -0,0 +1,142 @@
{
"sites": [
{
"id": "UKFR",
"name": "Freiburg",
"virtualhost": "ukfr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKHD",
"name": "Heidelberg",
"virtualhost": "ukhd.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKT",
"name": "Tübingen",
"virtualhost": "ukt.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKU",
"name": "Ulm",
"virtualhost": "uku.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UM",
"name": "Mainz",
"virtualhost": "um.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKMR",
"name": "Marburg",
"virtualhost": "ukmr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKE",
"name": "Hamburg",
"virtualhost": "uke.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKA",
"name": "Aachen",
"virtualhost": "uka.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "Charite",
"name": "Berlin",
"virtualhost": "charite.dnpm.de",
"beamconnect": "dnpm-connect.berlin-test.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MRI",
"name": "Muenchen-tum",
"virtualhost": "mri.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-tum.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "KUM",
"name": "Muenchen-lmu",
"virtualhost": "kum.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-lmu.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MHH",
"name": "Hannover",
"virtualhost": "mhh.dnpm.de",
"beamconnect": "dnpm-connect.hannover.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKDD",
"name": "dresden-dnpm",
"virtualhost": "ukdd.dnpm.de",
"beamconnect": "dnpm-connect.dresden-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKB",
"name": "Bonn",
"virtualhost": "ukb.dnpm.de",
"beamconnect": "dnpm-connect.bonn-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKD",
"name": "Duesseldorf",
"virtualhost": "ukd.dnpm.de",
"beamconnect": "dnpm-connect.duesseldorf-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKK",
"name": "Koeln",
"virtualhost": "ukk.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UME",
"name": "Essen",
"virtualhost": "ume.dnpm.de",
"beamconnect": "dnpm-connect.essen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKM",
"name": "Muenster",
"virtualhost": "ukm.dnpm.de",
"beamconnect": "dnpm-connect.muenster-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKF",
"name": "Frankfurt",
"virtualhost": "ukf.dnpm.de",
"beamconnect": "dnpm-connect.frankfurt.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UMG",
"name": "Goettingen",
"virtualhost": "umg.dnpm.de",
"beamconnect": "dnpm-connect.goettingen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKW",
"name": "Würzburg",
"virtualhost": "ukw.dnpm.de",
"beamconnect": "dnpm-connect.wuerzburg-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKSH",
"name": "Schleswig-Holstein",
"virtualhost": "uksh.dnpm.de",
"beamconnect": "dnpm-connect.uksh-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "TKT",
"name": "Test",
"virtualhost": "tkt.dnpm.de",
"beamconnect": "dnpm-connect.tobias-develop.broker.ccp-it.dktk.dkfz.de"
}
]
}

View File

@ -29,7 +29,7 @@ services:
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${DNPM_PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
HTTP_PROXY: http://forward_proxy:3128
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
@ -41,7 +41,7 @@ services:
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"

View File

@ -1,34 +1,99 @@
version: "3.7"
services:
dnpm-backend:
image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
container_name: bridgehead-dnpm-backend
dnpm-mysql:
image: mysql:9
healthcheck:
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
interval: 3s
timeout: 5s
retries: 5
environment:
- ZPM_SITE=${ZPM_SITE}
- N_RANDOM_FILES=${DNPM_SYNTH_NUM}
MYSQL_ROOT_HOST: "%"
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
volumes:
- /etc/bridgehead/dnpm:/bwhc_config:ro
- ${DNPM_DATA_DIR}:/bwhc_data
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
- "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
- "traefik.http.routers.bwhc-backend.tls=true"
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
dnpm-frontend:
image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
container_name: bridgehead-dnpm-frontend
links:
- dnpm-backend
dnpm-authup:
image: authup/authup:latest
container_name: bridgehead-dnpm-authup
volumes:
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
depends_on:
dnpm-mysql:
condition: service_healthy
command: server/core start
environment:
- NUXT_HOST=0.0.0.0
- NUXT_PORT=8080
- BACKEND_PROTOCOL=https
- BACKEND_HOSTNAME=$HOST
- BACKEND_PORT=443
- PUBLIC_URL=https://${HOST}/auth/
- AUTHORIZE_REDIRECT_URL=https://${HOST}
- ROBOT_ADMIN_ENABLED=true
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
- ROBOT_ADMIN_SECRET_RESET=true
- DB_TYPE=mysql
- DB_HOST=dnpm-mysql
- DB_USERNAME=root
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
- DB_DATABASE=auth
labels:
- "traefik.enable=true"
- "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
- "traefik.http.routers.bwhc-frontend.tls=true"
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth/"
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-auth.tls=true"
dnpm-portal:
image: ghcr.io/dnpm-dip/portal:latest
container_name: bridgehead-dnpm-portal
environment:
- NUXT_API_URL=http://dnpm-backend:9000/
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-frontend.tls=true"
dnpm-backend:
container_name: bridgehead-dnpm-backend
image: ghcr.io/dnpm-dip/backend:latest
environment:
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- HATEOAS_HOST=https://${HOST}
- CONNECTOR_TYPE=broker
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
volumes:
- /etc/bridgehead/dnpm/config:/dnpm_config
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
depends_on:
dnpm-authup:
condition: service_healthy
labels:
- "traefik.enable=true"
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
# expose everything
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
- "traefik.http.routers.dnpm-backend.tls=true"
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
# except ETL
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-etl.tls=true"
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
# this needs an ETL processor with support for basic auth
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
# except peer-to-peer
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-peer.tls=true"
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
# this effectively denies all requests
# this is okay, because requests from peers don't go through Traefik
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
landing:
labels:
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"

View File

@ -1,28 +1,16 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
log INFO "DNPM setup detected -- will start DNPM:DIP node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
if [ -z "${DNPM_DATA_DIR+x}" ]; then
log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
exit 1
fi
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
echo "Override of landing page url already in place"
else
echo "Adding override of landing page url"
if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
echo -e ' landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
else
echo -e 'version: "3.7"\nservices:\n landing:\n labels:\n - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
fi
fi
mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
fi

View File

@ -11,7 +11,6 @@ services:
CTS_API_KEY: ${NNGM_CTS_APIKEY}
CRYPT_KEY: ${NNGM_CRYPTKEY}
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"

View File

@ -1,5 +1,3 @@
source versions
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"

View File

@ -1,2 +0,0 @@
BLAZE_TAG=0.28
POSTGRES_TAG=15.6-alpine

2
versions/prod Normal file
View File

@ -0,0 +1,2 @@
FOCUS_TAG=main
BEAM_TAG=main

2
versions/test Normal file
View File

@ -0,0 +1,2 @@
FOCUS_TAG=develop
BEAM_TAG=develop