Compare commits

...

31 Commits

Author SHA1 Message Date
364cf5f1c9 Added more detail to metadata feedback README 2025-02-06 16:20:54 +01:00
e571681fce Added clarifying information to README 2025-02-06 16:04:59 +01:00
4f5f6b17e7 Added metadata feedback to README 2025-02-06 11:23:19 +01:00
146235236b Removed stuff accumulated during testing phase
Most of the things added during testing were not necessary and they were
removed. This had the additional advantage that many files are now identical
to their equivalents in the develop branch, making the diff more manageable.
2025-02-06 09:28:48 +01:00
0169435074 We are not using the Teiler in this pilot 2025-02-05 15:10:29 +01:00
6c71a83e70 Added startup script for metadata feedback 2025-02-05 14:30:24 +01:00
ed95dff63e Merge branch 'develop' into metadata_fb 2025-02-05 14:02:47 +01:00
3841a98a3b Push hub URL into /etc/bridgehead/bbmri.conf 2025-02-05 13:50:12 +01:00
Jan
721627a78f feat: migrate to new dnpm:dip node (#251)
* feat: migrate to new dnpm:dip node

* hardcode dnpm connector type to broker

* use `SITE_NAME` for dnpm `LOCAL_SITE`

* host central targets in git

* dnpm: add goettingen to central targets

* dnpm: add uksh to central targets

* dnpm: replace named volumes with fs volumes

* chore: change dnpm images

* chore: pin mysql

* dnpm: Secure endpoints for ETL and p2p communications (#254)

* fix authup redirect (#262)

When an OIDC provider is configured, Keycloak redirects you to authup, which in turn redirects you to DNPM:DIP.

Currently the URL looks like this: https://myserver/authup//someurl
and produces an error. Manually removing the additional / fixes the issue.

* Whitespace formatting

---------

Co-authored-by: Niklas <niklas@ytvwld.de>
Co-authored-by: Niklas Reimer <niklas@backbord.net>
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2025-02-05 13:24:28 +01:00
710092c020 Autogenerate secrets for feedback-agent 2025-02-05 11:17:38 +01:00
Jan
e08ff92401 Merge pull request #268 from samply/main
Main backport
2025-02-05 08:45:35 +01:00
9977578aa5 Changes for metadata feedback
These changes fall into the following categories:

* Integration of the feedback components into Traefik.
* Small tweaks to Bridgehead parameters needed by metadata feedback.
* Reverting changes made to some files for testing purposes.
2025-02-04 14:55:08 +01:00
Jan
e3553370b6 feat: unify version handeling (#265) 2025-01-29 13:17:59 +01:00
Jan
1ad73d8f82 fix: properly load oidc secrets (#267) 2025-01-29 10:59:27 +01:00
0b6fa439ba Merge pull request #266 from samply/fix/docker-hub-link
Fixed Docker Hub URL list link
2025-01-29 09:52:26 +01:00
615990b92a Use secret-sync for gitpassword (#257)
---------

Co-authored-by: Tim Schumacher <tim@tschumacher.net>
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
Co-authored-by: Tim Schumacher <tim.schumacher@dkfz-heidelberg.de>
2025-01-28 14:53:49 +01:00
db950d6d87 Fixed Docker Hub URL list link 2025-01-28 08:59:57 +00:00
6a71da3dd1 docs: documentation for changing your configuration repository access token (#256) 2025-01-27 10:49:10 +01:00
138a1fa5f1 fix: changed ccp_ppi to use IDMANAGEMENT_FRIENDLY_ID instead of SITE_NAME (#259) 2025-01-27 10:49:10 +01:00
39a87bcf61 Update: Blaze to version 0.31 (#260) 2025-01-27 10:49:10 +01:00
Jan
655d0d24c7 fix: remove restart: always in compose files (#261) 2025-01-27 10:49:10 +01:00
df1ec21848 Merge pull request #246 from samply/develop
set environment variables needed for the focus blaze-and-sql endpoint to work when fhir2sql is enabled
2024-12-11 10:53:27 +01:00
a4e292dd18 Merge pull request #241 from samply/develop
feat: add focus tags in ccp and bbmri (#240)
2024-10-23 07:24:06 +02:00
634d4e2a4b Added comments explaining how things have been changed for local testing 2024-10-18 13:32:06 +02:00
bc24599c54 Beam suppressed
Changes have been made so that the Bridgehead can run without Beam.
2024-10-18 11:15:18 +02:00
75089ab428 Merge pull request #221 from samply/develop
Keep develop in sync with main
2024-10-15 14:30:15 +02:00
502eef0cc8 feat: adapt secret sync for bbmri 2024-06-12 08:36:28 +00:00
c4b7620fd6 BBMRI Teiler default templates: bbmri (exporter) and bbmri-qb (reporter) 2024-04-30 14:23:38 +02:00
6a21a5b641 Move exporter and teiler to bbmri/modules 2024-04-30 14:16:59 +02:00
c04ffc5f33 Merge branch 'main' into qr_bbmri 2024-04-30 14:04:34 +02:00
b1bdf48e55 changes for the bbmri-qr 2024-04-18 13:50:14 +02:00
23 changed files with 608 additions and 160 deletions

View File

@@ -23,6 +23,7 @@ This repository is the starting point for any information and tools you will nee
   - [File structure](#file-structure)
   - [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
   - [Loading data](#loading-data)
+  - [Metadata feedback](#metadata-feedback)
 4. [Things you should know](#things-you-should-know)
   - [Auto-Updates](#auto-updates)
   - [Auto-Backups](#auto-backups)
@@ -76,7 +77,7 @@ The following URLs need to be accessible (prefix with `https://`):
 * git.verbis.dkfz.de
 * To fetch docker images
 * docker.verbis.dkfz.de
-* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/all))
+* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/setup/allow-list/))
 * hub.docker.com
 * registry-1.docker.io
 * production.cloudflare.docker.com
@@ -155,6 +156,7 @@ Clone the bridgehead repository:
 ```shell
 sudo mkdir -p /srv/docker/
 sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead
+sudo git -C /srv/docker/bridgehead checkout metadata_fb # Only needed if you want to use metadata feedback
 ```
 
 Then, run the installation script:
@@ -347,6 +349,26 @@ Normally, you will need to build your own ETL to feed the Bridgehead. However, t
 You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).
 
+### Metadata feedback
+
+The Bridgehead comes with a tool that lets you associate metadata with samples. Any number of arbitrary text strings can be attached to a sample. A typical use case is recording publications that resulted from research on a sample, e.g. by storing the publication's DOI with the sample.
+
+Full details of the system can be found [here](https://github.com/samply/feedback-deployment). To use this feature, you need to:
+
+- Use the bbmri project.
+- Work with the `metadata_fb` branch of the Bridgehead repository.
+- Build the feedback-agent Docker container (more details [here](https://github.com/samply/feedback-agent/)).
+- Build the feedback-agent-ui Docker container (more details [here](https://github.com/samply/feedback-agent-ui/)).
+
+The following extra environment variables need to be added to your `/etc/bridgehead/bbmri.conf` file:
+
+```conf
+ENABLE_EXPORTER=true
+ENABLE_FEEDBACK_AGENT=true
+FEEDBACK_HUB_URL=<URL for central feedback hub backend API>
+FOCUS_RETRY_COUNT=256
+```
+
 ## Things you should know
 
 ### Auto-Updates
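The README section above asks you to build the two feedback images yourself. A minimal build sketch, assuming each repository ships a Dockerfile at its root and that the image tags should match the names used in the feedback-agent compose file further down (`samply/feedback-agent`, `samply/feedback-agent-ui`):

```shell
# Sketch only: build the two feedback images locally so the compose file can find them.
git clone https://github.com/samply/feedback-agent.git
docker build -t samply/feedback-agent ./feedback-agent

git clone https://github.com/samply/feedback-agent-ui.git
docker build -t samply/feedback-agent-ui ./feedback-agent-ui
```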

View File

@@ -22,6 +22,7 @@ services:
       BROKER_URL: ${ERIC_BROKER_URL}
       PROXY_ID: ${ERIC_PROXY_ID}
       APP_focus_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
+      APP_feedback-agent_KEY: ${FEEDBACK_AGENT_BEAM_SECRET}
       PRIVKEY_FILE: /run/secrets/proxy.pem
       ALL_PROXY: http://forward_proxy:3128
       TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs

View File

@@ -0,0 +1,67 @@
version: "3.7"
services:
exporter:
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
container_name: bridgehead-ccp-exporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
CROSS_ORIGINS: "https://${HOST}"
EXPORTER_DB_USER: "exporter"
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}"
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
HTTP_RELATIVE_PATH: "/ccp-exporter"
SITE: "${SITE_ID}"
HTTP_SERVLET_REQUEST_SCHEME: "https"
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
labels:
- "traefik.enable=true"
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
- "traefik.http.routers.exporter_ccp.tls=true"
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
volumes:
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
exporter-db:
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-ccp-exporter-db
environment:
POSTGRES_USER: "exporter"
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}"
POSTGRES_DB: "exporter"
volumes:
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
reporter:
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
container_name: bridgehead-ccp-reporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
CROSS_ORIGINS: "https://${HOST}"
HTTP_RELATIVE_PATH: "/ccp-reporter"
SITE: "${SITE_ID}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
EXPORTER_URL: "http://exporter:8092"
LOG_FHIR_VALIDATION: "false"
HTTP_SERVLET_REQUEST_SCHEME: "https"
# In this initial development state of the bridgehead, we are trying to have so many volumes as possible.
# However, in the first executions in the CCP sites, this volume seems to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
volumes:
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
labels:
- "traefik.enable=true"
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
- "traefik.http.routers.reporter_ccp.tls=true"
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"

View File

@@ -0,0 +1,9 @@
#!/bin/bash -e

if [ "$ENABLE_EXPORTER" == true ]; then
  log INFO "Exporter setup detected -- will start Exporter service."
  OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
  EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
  EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
  POSTGRES_TAG=15.6-alpine
fi
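The two `openssl pkeyutl` pipelines above derive the exporter secrets deterministically from the site's private key, so every Bridgehead start produces the same values without having to store them anywhere. A generalized sketch of that pattern (the helper name and salt strings here are made up for illustration; the script above inlines the pipeline instead):

```shell
# Sketch of the derivation pattern used above. Requires the site private key
# at /etc/bridgehead/pki/${SITE_ID}.priv.pem; the salt determines which secret you get.
derive_secret() {
  local salt="$1" length="$2"
  echo "$salt" \
    | sha1sum \
    | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" \
    | base64 \
    | head -c "$length"
}

EXPORTER_DB_PASSWORD="$(derive_secret 'exporter db password salt' 30)"
EXPORTER_API_KEY="$(derive_secret 'exporter api key salt' 64)"
```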

bbmri/modules/exporter.md (new file, 15 lines)
View File

@@ -0,0 +1,15 @@
# Exporter and Reporter

## Exporter
The exporter is a REST API that exports the data from the Bridgehead's various databases as a set of tables.
It supports several output formats, such as CSV, Excel, JSON and XML, and can also export data into Opal.

## Exporter-DB
This database stores the queries that are to be executed by the exporter.
The exporter also uses it to keep track of the different executions of the same query.

## Reporter
This component is a plugin for the exporter that creates more complex Excel reports described in templates.
It is compatible with different template engines, such as Groovy and Thymeleaf.
It is well suited to generating documents such as our traditional CCP quality report.

View File

@@ -0,0 +1,59 @@
version: "3.7"
services:
  feedback-agent-ui:
    image: "samply/feedback-agent-ui"
    environment:
      - VUE_APP_EXPORTER_URL=https://localhost/ccp-exporter
      - VUE_APP_FB_BACKEND_URL=http://localhost:8072
    labels:
      - traefik.enable=true
      # HTTPS
      - traefik.http.routers.feedback_agent_ui_ccp_https.rule=PathPrefix(`/ccp-feedback-agent-ui`)
      - traefik.http.services.feedback_agent_ui_ccp_https.loadbalancer.server.port=8096
      - traefik.http.routers.feedback_agent_ui_ccp_https.entrypoints=websecure
      - traefik.http.routers.feedback_agent_ui_ccp_https.tls=true

  feedback-agent:
    image: "samply/feedback-agent"
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://feedback-agent-db:5432/compose-postgres
      - SPRING_DATASOURCE_USERNAME=compose-postgres
      - SPRING_DATASOURCE_PASSWORD=${FEEDBACK_AGENT_DB_PASSWORD}
      - SPRING_JPA_HIBERNATE_DDL_AUTO=update
      - BEAM_PROXY_URI=http://beam-proxy-eric:8081
      - FEEDBACK_HUB_URL=${FEEDBACK_HUB_URL}
      - BLAZE_BASE_URL=http://blaze:8080/fhir
      - FEEDBACK_AGENT_SECRET=${FEEDBACK_AGENT_BEAM_SECRET}
      - FEEDBACK_AGENT_BEAM_ID=feedback-agent.${ERIC_PROXY_ID}
      - FEEDBACK_HUB_BEAM_ID=feedback-hub.feedback-central.${ERIC_BROKER_ID}
      - EXPORTER_API_KEY=${EXPORTER_API_KEY}
      - CORS_ALLOWED_ORIGINS=https://${HOST}
    networks:
      # Only needed for local testing.
      - feedback
      - default
    labels:
      - traefik.enable=true
      # HTTPS
      - traefik.http.routers.feedback_agent_ccp_https.rule=PathPrefix(`/ccp-feedback-agent`)
      - traefik.http.services.feedback_agent_ccp_https.loadbalancer.server.port=8072
      - traefik.http.routers.feedback_agent_ccp_https.entrypoints=websecure
      - traefik.http.middlewares.feedback_agent_ccp_https_strip.stripprefix.prefixes=/ccp-feedback-agent
      - traefik.http.routers.feedback_agent_ccp_https.middlewares=feedback_agent_ccp_https_strip
      - traefik.http.routers.feedback_agent_ccp_https.tls=true

  feedback-agent-db:
    image: 'postgres:13.1-alpine'
    container_name: feedback-agent-db
    environment:
      - POSTGRES_USER=compose-postgres
      - POSTGRES_PASSWORD=${FEEDBACK_AGENT_DB_PASSWORD}

# This is needed when you run both agent and hub locally in a test
# environment. It is not necessary in production, though it probably won't
# cause any problems.
networks:
  # Network to connect agent and hub.
  feedback:
    name: feedback
    driver: bridge
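For orientation, the two Beam IDs above are composed from the site's own proxy ID and the central broker ID. Purely illustrative resolved values for a hypothetical site (all identifiers below are made up, not a real deployment):

```shell
# Hypothetical example: ERIC_PROXY_ID=my-site.broker.bbmri.samply.de,
# ERIC_BROKER_ID=broker.bbmri.samply.de (invented values).
FEEDBACK_AGENT_BEAM_ID=feedback-agent.my-site.broker.bbmri.samply.de
FEEDBACK_HUB_BEAM_ID=feedback-hub.feedback-central.broker.bbmri.samply.de
```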

View File

@@ -0,0 +1,8 @@
#!/bin/bash

if [ "$ENABLE_FEEDBACK_AGENT" == true ]; then
  OVERRIDE+=" -f ./$PROJECT/modules/feedback-agent-compose.yml"
  FEEDBACK_AGENT_BEAM_SECRET="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
  FEEDBACK_AGENT_DB_PASSWORD="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
fi

View File

@@ -0,0 +1,6 @@
# Metadata feedback agent
This component can be used to choose the sample to be associated
with a given piece of metadata (generally the ID of a publication
relating to research done with the sample).

View File

@@ -53,17 +53,44 @@ case "$PROJECT" in
     ;;
 esac
 
+# Loads config variables and runs the project's setup script
 loadVars() {
+    # Load variables from /etc/bridgehead and /srv/docker/bridgehead
     set -a
+    # Source the project-specific config file
     source /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "/etc/bridgehead/$PROJECT.conf not found"
+    # Source the project-specific local config file if present.
+    # Unlike the regular config file, this file is ignored by git because it contains private site information such as ETL auth data.
     if [ -e /etc/bridgehead/$PROJECT.local.conf ]; then
         log INFO "Applying /etc/bridgehead/$PROJECT.local.conf"
         source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
     fi
+    # Set the execution environment: default to production on the main branch, test otherwise
+    if [[ -z "${ENVIRONMENT+x}" ]]; then
+        if [ "$(git rev-parse --abbrev-ref HEAD)" == "main" ]; then
+            ENVIRONMENT="production"
+        else
+            ENVIRONMENT="test"
+        fi
+    fi
+    # Source the versions of the image components
+    case "$ENVIRONMENT" in
+        "production")
+            source ./versions/prod
+            ;;
+        "test")
+            source ./versions/test
+            ;;
+        *)
+            report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
+            source ./versions/prod
+            ;;
+    esac
     fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
     setHostname
     optimizeBlazeMemoryUsage
+    # Run the project-specific setup if it exists.
+    # This will usually modify `OVERRIDE` to include all the compose files that the project depends on.
+    # This is also where projects specify which modules to load.
     [ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
     set +a
@@ -79,26 +106,6 @@ loadVars() {
     fi
     detectCompose
     setupProxy
-
-    # Set some project-independent default values
-    : ${ENVIRONMENT:=production}
-    export ENVIRONMENT
-    case "$ENVIRONMENT" in
-        "production")
-            export FOCUS_TAG=main
-            export BEAM_TAG=main
-            ;;
-        "test")
-            export FOCUS_TAG=develop
-            export BEAM_TAG=develop
-            ;;
-        *)
-            report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
-            export FOCUS_TAG=main
-            export BEAM_TAG=main
-            ;;
-    esac
 }
 
 case "$ACTION" in

View File

@@ -13,7 +13,7 @@ services:
       PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
       APP_ID: dnpm-connect.${PROXY_ID}
       DISCOVERY_URL: "./conf/central_targets.json"
-      LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
+      LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
       HTTP_PROXY: "http://forward_proxy:3128"
       HTTPS_PROXY: "http://forward_proxy:3128"
       NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
@@ -25,7 +25,7 @@ services:
     volumes:
       - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
       - /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
-      - /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
+      - /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
     labels:
       - "traefik.enable=true"
       - "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"

View File

@@ -1,34 +1,99 @@
 version: "3.7"
 services:
-  dnpm-backend:
-    image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
-    container_name: bridgehead-dnpm-backend
+  dnpm-mysql:
+    image: mysql:9
+    healthcheck:
+      test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
+      interval: 3s
+      timeout: 5s
+      retries: 5
     environment:
-      - ZPM_SITE=${ZPM_SITE}
-      - N_RANDOM_FILES=${DNPM_SYNTH_NUM}
+      MYSQL_ROOT_HOST: "%"
+      MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
     volumes:
-      - /etc/bridgehead/dnpm:/bwhc_config:ro
-      - ${DNPM_DATA_DIR}:/bwhc_data
-    labels:
-      - "traefik.enable=true"
-      - "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
-      - "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
-      - "traefik.http.routers.bwhc-backend.tls=true"
+      - /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
 
-  dnpm-frontend:
-    image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
-    container_name: bridgehead-dnpm-frontend
-    links:
-      - dnpm-backend
+  dnpm-authup:
+    image: authup/authup:latest
+    container_name: bridgehead-dnpm-authup
+    volumes:
+      - /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
+    depends_on:
+      dnpm-mysql:
+        condition: service_healthy
+    command: server/core start
     environment:
-      - NUXT_HOST=0.0.0.0
-      - NUXT_PORT=8080
-      - BACKEND_PROTOCOL=https
-      - BACKEND_HOSTNAME=$HOST
-      - BACKEND_PORT=443
+      - PUBLIC_URL=https://${HOST}/auth/
+      - AUTHORIZE_REDIRECT_URL=https://${HOST}
+      - ROBOT_ADMIN_ENABLED=true
+      - ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
+      - ROBOT_ADMIN_SECRET_RESET=true
+      - DB_TYPE=mysql
+      - DB_HOST=dnpm-mysql
+      - DB_USERNAME=root
+      - DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
+      - DB_DATABASE=auth
     labels:
       - "traefik.enable=true"
-      - "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
-      - "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
-      - "traefik.http.routers.bwhc-frontend.tls=true"
+      - "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth"
+      - "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
+      - "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
+      - "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
+      - "traefik.http.routers.dnpm-auth.tls=true"
+
+  dnpm-portal:
+    image: ghcr.io/dnpm-dip/portal:latest
+    container_name: bridgehead-dnpm-portal
+    environment:
+      - NUXT_API_URL=http://dnpm-backend:9000/
+      - NUXT_PUBLIC_API_URL=https://${HOST}/api/
+      - NUXT_AUTHUP_URL=http://dnpm-authup:3000/
+      - NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
+    labels:
+      - "traefik.enable=true"
+      - "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
+      - "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
+      - "traefik.http.routers.dnpm-frontend.tls=true"
+
+  dnpm-backend:
+    container_name: bridgehead-dnpm-backend
+    image: ghcr.io/dnpm-dip/backend:latest
+    environment:
+      - LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
+      - RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
+      - MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
+      - HATEOAS_HOST=https://${HOST}
+      - CONNECTOR_TYPE=broker
+      - AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
+    volumes:
+      - /etc/bridgehead/dnpm/config:/dnpm_config
+      - /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
+    depends_on:
+      dnpm-authup:
+        condition: service_healthy
+    labels:
+      - "traefik.enable=true"
+      - "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
+      # expose everything
+      - "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
+      - "traefik.http.routers.dnpm-backend.tls=true"
+      - "traefik.http.routers.dnpm-backend.service=dnpm-backend"
+      # except ETL
+      - "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
+      - "traefik.http.routers.dnpm-backend-etl.tls=true"
+      - "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
+      # this needs an ETL processor with support for basic auth
+      - "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
+      # except peer-to-peer
+      - "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
+      - "traefik.http.routers.dnpm-backend-peer.tls=true"
+      - "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
+      - "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
+      # this effectively denies all requests
+      # this is okay, because requests from peers don't go through Traefik
+      - "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
+
+  landing:
+    labels:
+      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"

View File

@@ -1,28 +1,16 @@
 #!/bin/bash
 
 if [ -n "${ENABLE_DNPM_NODE}" ]; then
-    log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
+    log INFO "DNPM setup detected -- will start DNPM:DIP node."
     OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
 
     # Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
-    DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
     if [ -z "${ZPM_SITE+x}" ]; then
         log ERROR "Mandatory variable ZPM_SITE not defined!"
         exit 1
     fi
-    if [ -z "${DNPM_DATA_DIR+x}" ]; then
-        log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
-        exit 1
-    fi
-    DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
-    if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
-        echo "Override of landing page url already in place"
-    else
-        echo "Adding override of landing page url"
-        if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
-            echo -e '  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
-        else
-            echo -e 'version: "3.7"\nservices:\n  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
-        fi
-    fi
+    mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
+    DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
+    DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
+    DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
 fi

View File

@@ -318,7 +318,7 @@ function sync_secrets() {
         docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
     set -a # Export variables as environment variables
-    source /var/cache/bridgehead/secrets/*
+    source /var/cache/bridgehead/secrets/oidc
     set +a # Export variables in the regular way
 }

lib/gitlab-token-helper.sh (new executable file, 11 lines)
View File

@@ -0,0 +1,11 @@
#!/bin/bash
[ "$1" = "get" ] || exit
source /var/cache/bridgehead/secrets/gitlab_token
# Any non-empty username works, only the token matters
cat << EOF
username=bk
password=$BRIDGEHEAD_CONFIG_REPO_TOKEN
EOF
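Git calls a configured credential helper with the action as its first argument (`get` here) and key=value pairs on stdin; the helper answers with `username=`/`password=` lines on stdout. A quick way to exercise the helper above by hand (the host name below is just an example value):

```shell
# Configure the helper for the config repository (update.sh below does the same):
git -C /etc/bridgehead config credential.helper /srv/docker/bridgehead/lib/gitlab-token-helper.sh

# Simulate what git sends during a fetch/pull:
printf 'protocol=https\nhost=git.verbis.dkfz.de\n' \
  | /srv/docker/bridgehead/lib/gitlab-token-helper.sh get
# Expected output:
#   username=bk
#   password=<contents of BRIDGEHEAD_CONFIG_REPO_TOKEN>
```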

View File

@@ -1,41 +0,0 @@
#!/bin/bash
if [ "$1" != "get" ]; then
echo "Usage: $0 get"
exit 1
fi
baseDir() {
# see https://stackoverflow.com/questions/59895
SOURCE=${BASH_SOURCE[0]}
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )
SOURCE=$(readlink "$SOURCE")
[[ $SOURCE != /* ]] && SOURCE=$DIR/$SOURCE # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR=$( cd -P "$( dirname "$SOURCE" )/.." >/dev/null 2>&1 && pwd )
echo $DIR
}
BASE=$(baseDir)
cd $BASE
source lib/functions.sh
assertVarsNotEmpty SITE_ID || fail_and_report 1 "gitpassword.sh failed: SITE_ID is empty."
PARAMS="$(cat)"
GITHOST=$(echo "$PARAMS" | grep "^host=" | sed 's/host=\(.*\)/\1/g')
fetchVarsFromVault GIT_PASSWORD
if [ -z "${GIT_PASSWORD}" ]; then
fail_and_report 1 "gitpassword.sh failed: Git password not found."
fi
cat <<EOF
protocol=https
host=$GITHOST
username=bk-${SITE_ID}
password=${GIT_PASSWORD}
EOF

View File

@@ -19,7 +19,7 @@ fi
 
 hc_send log "Checking for bridgehead updates ..."
 
-CONFFILE=/etc/bridgehead/$1.conf
+CONFFILE=/etc/bridgehead/$PROJECT.conf
 
 if [ ! -e $CONFFILE ]; then
     fail_and_report 1 "Configuration file $CONFFILE not found."
@@ -33,7 +33,43 @@ export SITE_ID
 checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
 checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
 
-CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
+# Use Secret Sync to validate the GitLab token in /var/cache/bridgehead/secrets/gitlab_token.
+# If it is missing or expired, Secret Sync will create a new token and write it to the file.
+# The git credential helper reads the token from the file during git pull.
+mkdir -p /var/cache/bridgehead/secrets
+touch /var/cache/bridgehead/secrets/gitlab_token # the file has to exist to be mounted correctly in the Docker container
+log "INFO" "Running Secret Sync for the GitLab token"
+docker pull docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest # make sure we have the latest image
+docker run --rm \
+    -v /var/cache/bridgehead/secrets/gitlab_token:/usr/local/cache \
+    -v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
+    -v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
+    -v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
+    -e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
+    -e NO_PROXY=localhost,127.0.0.1 \
+    -e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
+    -e PROXY_ID=$PROXY_ID \
+    -e BROKER_URL=$BROKER_URL \
+    -e GITLAB_PROJECT_ACCESS_TOKEN_PROVIDER=secret-sync-central.oidc-client-enrollment.$BROKER_ID \
+    -e SECRET_DEFINITIONS=GitLabProjectAccessToken:BRIDGEHEAD_CONFIG_REPO_TOKEN: \
+    docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
+if [ $? -eq 0 ]; then
+    log "INFO" "Secret Sync was successful"
+    # In the past we used to hardcode tokens into the repository URL. We have to remove those now for the git credential helper to become effective.
+    CLEAN_REPO="$(git -C /etc/bridgehead remote get-url origin | sed -E 's|https://[^@]+@|https://|')"
+    git -C /etc/bridgehead remote set-url origin "$CLEAN_REPO"
+    # Set the git credential helper
+    git -C /etc/bridgehead config credential.helper /srv/docker/bridgehead/lib/gitlab-token-helper.sh
+else
+    log "WARN" "Secret Sync failed"
+    # Remove the git credential helper
+    git -C /etc/bridgehead config --unset credential.helper
+fi
+
+# In the past the git credential helper was also set for /srv/docker/bridgehead but never used.
+# Let's remove it to avoid confusion. This line can be removed at some point in the future when we
+# believe that it was removed on all/most production servers.
+git -C /srv/docker/bridgehead config --unset credential.helper
 
 CHANGES=""
@@ -45,10 +81,6 @@ for DIR in /etc/bridgehead $(pwd); do
     if [ -n "$OUT" ]; then
         report_error log "The working directory $DIR is modified. Changed files: $OUT"
     fi
-    if [ "$(git -C $DIR config --get credential.helper)" != "$CREDHELPER" ]; then
-        log "INFO" "Configuring repo to use bridgehead git credential helper."
-        git -C $DIR config credential.helper "$CREDHELPER"
-    fi
     old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
 
     if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
         log "INFO" "Git is using no proxy!"

View File

@@ -16,7 +16,7 @@ services:
       - --entrypoints.web.http.redirections.entrypoint.scheme=https
     labels:
       - "traefik.enable=true"
-      - "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard/`)"
+      - "traefik.http.routers.dashboard.rule=PathPrefix(`/dashboard/`)"
       - "traefik.http.routers.dashboard.entrypoints=websecure"
       - "traefik.http.routers.dashboard.service=api@internal"
       - "traefik.http.routers.dashboard.tls=true"

View File

@@ -0,0 +1,142 @@
{
"sites": [
{
"id": "UKFR",
"name": "Freiburg",
"virtualhost": "ukfr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKHD",
"name": "Heidelberg",
"virtualhost": "ukhd.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKT",
"name": "Tübingen",
"virtualhost": "ukt.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKU",
"name": "Ulm",
"virtualhost": "uku.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UM",
"name": "Mainz",
"virtualhost": "um.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKMR",
"name": "Marburg",
"virtualhost": "ukmr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKE",
"name": "Hamburg",
"virtualhost": "uke.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKA",
"name": "Aachen",
"virtualhost": "uka.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "Charite",
"name": "Berlin",
"virtualhost": "charite.dnpm.de",
"beamconnect": "dnpm-connect.berlin-test.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MRI",
"name": "Muenchen-tum",
"virtualhost": "mri.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-tum.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "KUM",
"name": "Muenchen-lmu",
"virtualhost": "kum.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-lmu.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MHH",
"name": "Hannover",
"virtualhost": "mhh.dnpm.de",
"beamconnect": "dnpm-connect.hannover.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKDD",
"name": "dresden-dnpm",
"virtualhost": "ukdd.dnpm.de",
"beamconnect": "dnpm-connect.dresden-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKB",
"name": "Bonn",
"virtualhost": "ukb.dnpm.de",
"beamconnect": "dnpm-connect.bonn-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKD",
"name": "Duesseldorf",
"virtualhost": "ukd.dnpm.de",
"beamconnect": "dnpm-connect.duesseldorf-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKK",
"name": "Koeln",
"virtualhost": "ukk.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UME",
"name": "Essen",
"virtualhost": "ume.dnpm.de",
"beamconnect": "dnpm-connect.essen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKM",
"name": "Muenster",
"virtualhost": "ukm.dnpm.de",
"beamconnect": "dnpm-connect.muenster-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKF",
"name": "Frankfurt",
"virtualhost": "ukf.dnpm.de",
"beamconnect": "dnpm-connect.frankfurt.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UMG",
"name": "Goettingen",
"virtualhost": "umg.dnpm.de",
"beamconnect": "dnpm-connect.goettingen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKW",
"name": "Würzburg",
"virtualhost": "ukw.dnpm.de",
"beamconnect": "dnpm-connect.wuerzburg-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKSH",
"name": "Schleswig-Holstein",
"virtualhost": "uksh.dnpm.de",
"beamconnect": "dnpm-connect.uksh-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "TKT",
"name": "Test",
"virtualhost": "tkt.dnpm.de",
"beamconnect": "dnpm-connect.tobias-develop.broker.ccp-it.dktk.dkfz.de"
}
]
}

View File

@@ -29,7 +29,7 @@ services:
       PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
       APP_ID: dnpm-connect.${DNPM_PROXY_ID}
       DISCOVERY_URL: "./conf/central_targets.json"
-      LOCAL_TARGETS_FILE: "./conf/connect_targets.json"
+      LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
       HTTP_PROXY: http://forward_proxy:3128
       HTTPS_PROXY: http://forward_proxy:3128
       NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
@@ -41,7 +41,7 @@ services:
     volumes:
       - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
       - /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
-      - /etc/bridgehead/dnpm/central_targets.json:/conf/central_targets.json:ro
+      - /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
     labels:
       - "traefik.enable=true"
       - "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"

View File

@@ -1,34 +1,99 @@
 version: "3.7"
 services:
-  dnpm-backend:
-    image: ghcr.io/kohlbacherlab/bwhc-backend:1.0-snapshot-broker-connector
-    container_name: bridgehead-dnpm-backend
+  dnpm-mysql:
+    image: mysql:9
+    healthcheck:
+      test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
+      interval: 3s
+      timeout: 5s
+      retries: 5
     environment:
-      - ZPM_SITE=${ZPM_SITE}
-      - N_RANDOM_FILES=${DNPM_SYNTH_NUM}
+      MYSQL_ROOT_HOST: "%"
+      MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
     volumes:
-      - /etc/bridgehead/dnpm:/bwhc_config:ro
-      - ${DNPM_DATA_DIR}:/bwhc_data
-    labels:
-      - "traefik.enable=true"
-      - "traefik.http.routers.bwhc-backend.rule=PathPrefix(`/bwhc`)"
-      - "traefik.http.services.bwhc-backend.loadbalancer.server.port=9000"
-      - "traefik.http.routers.bwhc-backend.tls=true"
+      - /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
 
-  dnpm-frontend:
-    image: ghcr.io/kohlbacherlab/bwhc-frontend:2209
-    container_name: bridgehead-dnpm-frontend
-    links:
-      - dnpm-backend
+  dnpm-authup:
+    image: authup/authup:latest
+    container_name: bridgehead-dnpm-authup
+    volumes:
+      - /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
+    depends_on:
+      dnpm-mysql:
+        condition: service_healthy
+    command: server/core start
     environment:
-      - NUXT_HOST=0.0.0.0
-      - NUXT_PORT=8080
-      - BACKEND_PROTOCOL=https
-      - BACKEND_HOSTNAME=$HOST
-      - BACKEND_PORT=443
+      - PUBLIC_URL=https://${HOST}/auth/
+      - AUTHORIZE_REDIRECT_URL=https://${HOST}
+      - ROBOT_ADMIN_ENABLED=true
+      - ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
+      - ROBOT_ADMIN_SECRET_RESET=true
+      - DB_TYPE=mysql
+      - DB_HOST=dnpm-mysql
+      - DB_USERNAME=root
+      - DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
+      - DB_DATABASE=auth
     labels:
       - "traefik.enable=true"
-      - "traefik.http.routers.bwhc-frontend.rule=PathPrefix(`/`)"
-      - "traefik.http.services.bwhc-frontend.loadbalancer.server.port=8080"
-      - "traefik.http.routers.bwhc-frontend.tls=true"
+      - "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth/"
+      - "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
+      - "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
+      - "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
+      - "traefik.http.routers.dnpm-auth.tls=true"
+
+  dnpm-portal:
+    image: ghcr.io/dnpm-dip/portal:latest
+    container_name: bridgehead-dnpm-portal
+    environment:
+      - NUXT_API_URL=http://dnpm-backend:9000/
+      - NUXT_PUBLIC_API_URL=https://${HOST}/api/
+      - NUXT_AUTHUP_URL=http://dnpm-authup:3000/
+      - NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
+    labels:
+      - "traefik.enable=true"
+      - "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
+      - "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
+      - "traefik.http.routers.dnpm-frontend.tls=true"
+
+  dnpm-backend:
+    container_name: bridgehead-dnpm-backend
+    image: ghcr.io/dnpm-dip/backend:latest
+    environment:
+      - LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
+      - RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
+      - MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
+      - HATEOAS_HOST=https://${HOST}
+      - CONNECTOR_TYPE=broker
+      - AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
+    volumes:
+      - /etc/bridgehead/dnpm/config:/dnpm_config
+      - /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
+    depends_on:
+      dnpm-authup:
+        condition: service_healthy
+    labels:
+      - "traefik.enable=true"
+      - "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
+      # expose everything
+      - "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
+      - "traefik.http.routers.dnpm-backend.tls=true"
+      - "traefik.http.routers.dnpm-backend.service=dnpm-backend"
+      # except ETL
+      - "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
+      - "traefik.http.routers.dnpm-backend-etl.tls=true"
+      - "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
+      # this needs an ETL processor with support for basic auth
+      - "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
+      # except peer-to-peer
+      - "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
+      - "traefik.http.routers.dnpm-backend-peer.tls=true"
+      - "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
+      - "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
+      # this effectively denies all requests
+      # this is okay, because requests from peers don't go through Traefik
+      - "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
+
+  landing:
+    labels:
+      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"

View File

@@ -1,28 +1,16 @@
 #!/bin/bash
 
 if [ -n "${ENABLE_DNPM_NODE}" ]; then
-    log INFO "DNPM setup detected (BwHC Node) -- will start BwHC node."
+    log INFO "DNPM setup detected -- will start DNPM:DIP node."
     OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
 
     # Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
-    DNPM_APPLICATION_SECRET="$(echo \"This is a salt string to generate one consistent password for DNPM. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
    if [ -z "${ZPM_SITE+x}" ]; then
         log ERROR "Mandatory variable ZPM_SITE not defined!"
         exit 1
     fi
-    if [ -z "${DNPM_DATA_DIR+x}" ]; then
-        log ERROR "Mandatory variable DNPM_DATA_DIR not defined!"
-        exit 1
-    fi
-    DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:-0}
-    if grep -q 'traefik.http.routers.landing.rule=PathPrefix(`/landing`)' /srv/docker/bridgehead/minimal/docker-compose.override.yml 2>/dev/null; then
-        echo "Override of landing page url already in place"
-    else
-        echo "Adding override of landing page url"
-        if [ -f /srv/docker/bridgehead/minimal/docker-compose.override.yml ]; then
-            echo -e '  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
-        else
-            echo -e 'version: "3.7"\nservices:\n  landing:\n    labels:\n      - "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"' >> /srv/docker/bridgehead/minimal/docker-compose.override.yml
-        fi
-    fi
+    mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
+    DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
+    DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
+    DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
 fi

versions/prod (new file, 2 lines)
View File

@@ -0,0 +1,2 @@
FOCUS_TAG=main
BEAM_TAG=main

versions/test (new file, 2 lines)
View File

@@ -0,0 +1,2 @@
FOCUS_TAG=develop
BEAM_TAG=develop
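These version files are sourced by `loadVars()` (see the bridgehead script diff above), so compose files can pin image tags per environment. A hedged sketch of how such a tag is typically consumed — the image path is an assumption, since the consuming compose files are not part of this diff:

```shell
# Hypothetical: a compose file would reference the tag roughly like
#   image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
# so that "main" is pulled in production and "develop" in test.
```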