Compare commits

686 Commits

Author SHA1 Message Date
d9d6a8acb3 Fix GitLab token syncing for BBMRI 2025-03-18 11:42:53 +01:00
Jan
4ef585bfc5 Merge pull request #281 from samply/develop
Merge develop into main
2025-03-17 10:02:58 +01:00
6f3aba1eaa feat: support self-signed cert on ttp (#283) 2025-03-12 15:45:55 +01:00
Jan
82ced89b33 chore: unify blaze versioning and upgrade to 0.32 (#277) 2025-03-12 12:52:00 +01:00
5d94bac0e2 Mount queries_to_cache.conf readonly (#282) 2025-03-10 16:03:05 +01:00
83555540f5 chore: update central Secret Sync proxy ID (#280) 2025-03-10 12:08:19 +01:00
e396e00178 Fix/ccp cql cache (#279)
* fix: adapt focus caching to new cql and add focus caching for cce and itcc

---------

Co-authored-by: p.delpy@dkfz-heidelberg.de <p.delpy@dkfz-heidelberg.de>
2025-03-07 12:20:38 +01:00
ecb29830e4 Send the origin of /etc/bridgehead repo to Secret Sync
This is to anticipate an upcoming change in Secret Sync that will allow it to support multiple GitLab servers, e.g. verbis GitLab and BBMRI GitLab.
2025-03-05 11:04:22 +01:00
98121c17e8 Make Codeowners Rule apply to all Files in Repo 2025-02-20 17:28:38 +01:00
e38511e118 Add Codeowners File 2025-02-20 17:20:54 +01:00
6ca67ca082 Merge pull request #270 from samply/develop 2025-02-20 15:46:40 +01:00
8334fac84d fix: use correct obds2fhir-rest image
---------

Co-authored-by: Pierre Delpy <p.delpy@dkfz-heidelberg.de>
2025-02-20 13:48:10 +01:00
8000356b57 docs: explicitly clone main branch (#269) 2025-02-07 11:21:43 +01:00
74d8e68d96 Merge pull request #258 from samply/feat/routine-connector
feat: transFAIR
2025-02-07 10:39:54 +01:00
c568a56651 refactor: set transfair log to info 2025-02-07 10:30:43 +01:00
8384143387 fix: make transfair reach the internal blaze stores 2025-02-07 09:17:17 +01:00
8fe73a8123 fix: support mode without ttp 2025-02-07 09:17:07 +01:00
bca63e82a9 fix: don't use return in transfairSetup
For some reason, the return exits not only transfairSetup but also the bridgehead script.
2025-02-06 15:43:37 +01:00
Jan
721627a78f feat: migrate to new dnpm:dip node (#251)
* feat: migrate to new dnpm:dip node

* hardcode dnpm connector type to broker

* use `SITE_NAME` for dnpm `LOCAL_SITE`

* host central targets in git

* dnpm: add goettingen to central targets

* dnpm: add uksh to central targets

* dnpm: replace named volumes with fs volumes

* chore: change dnpm images

* chore: pin mysql

* dnpm: Secure endpoints for ETL and p2p communications (#254)

* fix authup redirect (#262)

When an OIDC provider is configured, you'll get redirected to authup by Keycloak, which redirects you to DNPM:DIP.

Currently the URL looks like this: https://myserver/authup//someurl
and produces an error. Manually removing the additional / fixes the issue.

* Whitespace formatting

---------

Co-authored-by: Niklas <niklas@ytvwld.de>
Co-authored-by: Niklas Reimer <niklas@backbord.net>
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2025-02-05 13:24:28 +01:00
Jan
e08ff92401 Merge pull request #268 from samply/main
Main backport
2025-02-05 08:45:35 +01:00
Jan
e3553370b6 feat: unify version handling (#265) 2025-01-29 13:17:59 +01:00
Jan
1ad73d8f82 fix: properly load oidc secrets (#267) 2025-01-29 10:59:27 +01:00
0b6fa439ba Merge pull request #266 from samply/fix/docker-hub-link
Fixed Docker Hub URL list link
2025-01-29 09:52:26 +01:00
615990b92a Use secret-sync for gitpassword (#257)
---------

Co-authored-by: Tim Schumacher <tim@tschumacher.net>
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
Co-authored-by: Tim Schumacher <tim.schumacher@dkfz-heidelberg.de>
2025-01-28 14:53:49 +01:00
db950d6d87 Fixed Docker Hub URL list link 2025-01-28 08:59:57 +00:00
6a71da3dd1 docs: documentation for changing your configuration repository access token (#256) 2025-01-27 10:49:10 +01:00
138a1fa5f1 fix: changed ccp_ppi to use IDMANAGEMENT_FRIENDLY_ID instead of SITE_NAME (#259) 2025-01-27 10:49:10 +01:00
39a87bcf61 Update: Blaze to version 0.31 (#260) 2025-01-27 10:49:10 +01:00
Jan
655d0d24c7 fix: remove restart: always in compose files (#261) 2025-01-27 10:49:10 +01:00
fa0d9fb8b4 restrict additional blaze memory usage 2025-01-24 09:23:08 +00:00
139fcecabe redo transfair setup 2025-01-23 13:27:42 +00:00
2058a7a5c9 update image url 2025-01-23 12:06:43 +00:00
47364f999e wip: routine connector 2025-01-21 13:55:08 +00:00
910289079b docs: documentation for changing your configuration repository access token (#256) 2025-01-21 09:35:25 +01:00
1003cd73cf fix: changed ccp_ppi to use IDMANAGEMENT_FRIENDLY_ID instead of SITE_NAME (#259) 2025-01-21 09:30:20 +01:00
3d1105b97c Update: Blaze to version 0.31 (#260) 2025-01-21 09:28:47 +01:00
Jan
5c28e704d2 fix: remove restart: always in compose files (#261) 2025-01-21 09:27:27 +01:00
df1ec21848 Merge pull request #246 from samply/develop
set environment variables needed for the focus blaze-and-sql endpoint to work when fhir2sql is enabled
2024-12-11 10:53:27 +01:00
e3510363ad fix: remove credentials from git remote, if update fails (#253)
Signed-off-by: Patrick Skowronek <patrick.skowronek@dkfz-heidelberg.de>
2024-12-10 17:18:07 +01:00
45aefd24e5 renamed exporter API key to EXPORTER_API_KEY (#252) 2024-12-06 11:27:38 +01:00
122ff16bb1 fix: use verbis cache image instead of docker-hub (#250) 2024-11-12 09:58:17 +01:00
967e45624e Add exporter configuration to focus (#249) 2024-11-05 15:46:00 +01:00
8cd5b93b95 Merge pull request #237 from samply/feat/ccp-ppi
Feat/ccp ppi
2024-11-05 13:59:01 +01:00
Jan
52c24ee6fa fix(fhir2sql): add the postgres connection string to focus (#245) 2024-11-05 13:34:36 +01:00
26712d3567 feat: add and set FOCUS_ENDPOINT_TYPE to support fhir2sql (#244)
* add and set FOCUS_ENDPOINT_TYPE

Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>

---------

Co-authored-by: davidmscholz <david.scholz@dkfz-heidelberg.de>
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
2024-10-30 15:21:15 +01:00
a4e292dd18 Merge pull request #241 from samply/develop
feat: add focus tags in ccp and bbmri (#240)
2024-10-23 07:24:06 +02:00
eea17c3478 fix: remove invalid beam-proxy tag and add it to focus instead (#242) 2024-10-22 16:48:48 +02:00
cf5230963c feat: add focus tags in ccp and bbmri (#240)
Co-authored-by: p.delpy@dkfz-heidelberg.de <p.delpy@dkfz-heidelberg.de>
2024-10-21 11:00:01 +02:00
75089ab428 Merge pull request #221 from samply/develop
Keep develop in sync with main
2024-10-15 14:30:15 +02:00
7aaee5e7d5 feat: add auto archiving action (#238)
* feat: add auto archiving action

---------

Co-authored-by: p.delpy@dkfz-heidelberg.de <p.delpy@dkfz-heidelberg.de>
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2024-10-15 13:03:42 +02:00
3312ca8a64 feat: added blaze cql cache (#236) 2024-10-10 14:34:28 +02:00
23981062bb Move ppi to id-management 2024-10-10 13:27:24 +02:00
8e7fe6851e fix: use correct mainzelliste api key 2024-10-10 13:11:43 +02:00
760d599b7c Add CCP-PPI 2024-10-10 12:56:36 +02:00
072ee348fc fix: deactivate landingpage for KR project (#234)
fix: deactivate landingpage for KR project
2024-10-09 09:24:27 +02:00
f328e40963 Merge pull request #233 from samply/fix/id-management-redirection
Allow Usage of Centraxx Interface without login
2024-10-08 13:30:44 +02:00
599bcfcec4 Feature/send branch to healthchecks (#232)
feature: log git branches to healthchecks and code refactoring
2024-10-02 07:53:20 +02:00
eb2955872f fix: allow usage of centraxx interface without login
Before this change, CentraXX was redirected to the central login servers when interacting with the id-management.
2024-10-01 13:30:23 +02:00
24da24d05e Traefik dashboard
Deactivate traefik dashboard by default. Add trailing slash to PathPrefix to clarify the URL the dashboard is available at.
2024-10-01 10:40:24 +02:00
65359c2ee6 Feature/pilot projects backup (#227)
add pilot projects
2024-09-12 10:04:53 +02:00
b1de62607f Merge pull request #223 from samply/feat/add-dkfz-hector-ki-project
Added DHKI project
2024-09-12 09:44:03 +02:00
969f1e7242 fix: remove accidental commit 2024-09-12 09:41:19 +02:00
77c870ab22 fix: fix bash path 2024-09-12 09:29:30 +02:00
f0bdb5c146 fix: re-add modules 2024-09-12 09:25:38 +02:00
735e064b03 Update queries_to_cache.conf 2024-09-12 09:19:21 +02:00
6465dcb0ad feat: added dhki project 2024-09-12 08:47:33 +02:00
c585322ee7 Merge pull request #226 from samply/main
Update Develop from Main
2024-09-12 08:22:17 +02:00
4568e32ffa readme: Data protection group --> officer 2024-09-04 09:26:37 +02:00
ed8dacaa59 Fix fhir2sql db pw generation (#219)
fix postgresql password generation so that the password does not contain problematic characters that mess with the connection string
2024-08-29 09:15:22 +02:00
Jan
3fe781255b feat: Configure beam-connect to trust ds-orchestrator beam proxy (#220) 2024-08-29 09:07:50 +02:00
b9f0bf7064 Merge pull request #205 from samply/fix/login-confusion-with-datashield
fix: specify host for id-management login
2024-08-20 08:22:24 +02:00
6228cb3762 fix: specify host for id-management login
Otherwise traefik will match the route with the one specified in datashield-compose.yml
2024-08-19 17:09:10 +02:00
05fa323c33 Merge pull request #198 from samply/fix/idmanagement-authentication
Switch ID-Management to Keycloak from Samply.Auth
2024-08-19 15:53:54 +02:00
33843fe961 fix: switch id-management to keycloak 2024-08-19 14:43:21 +02:00
0f1f88f538 Merge pull request #204 from samply/fix/environment
Don't repeat definition of ENVIRONMENT var
2024-08-19 09:15:52 +02:00
60acac619d Don't repeat definition of ENVIRONMENT var 2024-08-19 08:38:34 +02:00
376cd03bed Merge pull request #203 from samply/fix/environment
export ENVIRONMENT
2024-08-19 08:33:05 +02:00
ae95f14030 export ENVIRONMENT 2024-08-19 08:27:20 +02:00
25e1d4fb15 Merge pull request #202 from samply/main_fix_dirsync_passwd
Fixed environment variable passing for Directory sync
2024-08-15 13:43:39 +02:00
18c9e1bb30 Remove DP statement already present in readme. 2024-08-15 11:43:14 +00:00
de847f309c Provide defaults 2024-08-15 11:40:44 +00:00
3496fa7a0f Let Directory sync handle connection with Blaze
Remove the delayed start, because Directory sync will automatically keep trying to
connect to Blaze if not initially present.
2024-08-15 13:36:57 +02:00
95574f38be Included Blaze dependency 2024-08-15 10:33:28 +02:00
Jan
bff316cde1 Merge pull request #201 from samply/update/landing-page
Added env to landing-page
2024-08-15 10:00:09 +02:00
b8b81b1242 Fixed environment variable passing for Directory sync
There were problems with the passing of environment variables from
bbmri.conf to the Directory sync container:

* The Directory password variable was misspelled.
* Some useful variables were missing.

Additionally, a delay was added before launching Directory sync,
to give Blaze time to start up.
2024-08-15 09:17:34 +02:00
7c560a2e93 Added env to landing-page 2024-08-15 09:10:37 +02:00
4a13395408 Merge pull request #200 from samply/reduce-update-interval
Reduce bridgehead update interval to once a day at 6am
2024-08-05 08:29:15 +02:00
35d6a17778 Fix bridgehead update timer time convention
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2024-08-01 11:39:03 +02:00
ecd9269022 Add bridgehead update timer persistence
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2024-08-01 11:38:25 +02:00
5227dc57a7 Fix systemd timer description
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
2024-08-01 11:32:15 +02:00
62edaf99e0 Reduce bridgehead update interval to once a day at 6am 2024-08-01 11:23:56 +02:00
775cef59d6 Merge pull request #199 from samply/feat/ds-orchestrator
Configure beam-connect to trust ds-orchestrator beam proxy
2024-08-01 10:16:41 +02:00
9c941853bd Merge pull request #196 from samply/new-dashboard-backend
Add new/additional dashboard backend
2024-08-01 10:05:32 +02:00
aca22fb3e3 Merge pull request #144 from samply/documentation/sharing_vm
Recommendation for standalone Bridgehead
2024-08-01 09:52:28 +02:00
33a2505517 Move down, rephrase 2024-08-01 09:51:49 +02:00
83b653e0c3 feat: Configure beam-connect to trust ds-orchestrator beam proxy 2024-07-31 07:18:30 +00:00
2e5aeabca8 Rename fhir2sql module files 2024-07-30 07:44:47 +00:00
af44b6b446 Fix depends_on syntax 2024-07-30 07:40:49 +00:00
5ed07423f3 fix dashboard-compose 2024-07-30 09:24:07 +02:00
bc0f46ecc9 Merge pull request #197 from samply/beam-tag
In ENVIRONMENT=production, use main tag for Samply.Beam.
2024-07-29 15:16:53 +02:00
4ab10ff71d In ENVIRONMENT=production, use main tag for Samply.Beam. 2024-07-29 13:50:30 +02:00
df08d67839 add optional dashboard module 2024-07-29 10:45:00 +02:00
964c5324e6 Merge pull request #160 from samply/main_directory_sync_extra_attributes_and_star_model
Allow user to push star model facts to Directory
2024-07-26 14:09:39 +02:00
023be58528 Merge pull request #195 from samply/feature/caching-ccp
add caching in focus
2024-07-26 14:07:15 +02:00
8942b923b3 Added comment for consistency with Directory Sync README 2024-07-26 09:57:40 +02:00
3f8bb158bc Merge branch 'main' into main_directory_sync_extra_attributes_and_star_model 2024-07-25 16:54:47 +02:00
cfa85067f0 initialize develop; add itcc and cce 2024-07-25 12:44:39 +02:00
c3b770b70f Merge pull request #194 from samply/workaround/secondary-cortex-blaze
Workaround/secondary cortex blaze
2024-07-25 10:32:38 +02:00
d316f1c798 add caching in focus 2024-07-24 13:56:53 +02:00
293810f254 Added: exporter with blaze-secondary 2024-07-18 14:06:47 +02:00
6b4480c54b workaround: add second blaze 2024-07-18 14:06:47 +02:00
a92b2eff76 Merge pull request #193 from samply/Fix-patientlisturl-in-obds2fhir-rest-compose.yml
Fix patientlisturl in obds2fhir-rest-compose.yml
2024-07-16 10:59:53 +02:00
b36c9ae03e Fix patientlisturl in obds2fhir-rest-compose.yml 2024-07-16 10:49:23 +02:00
16629f3e45 Merge pull request #192 from samply/hotfix/idmanagementFlag
fix: use correct ID management flag for oBDS2FHIR and MTBA
2024-07-16 10:20:15 +02:00
91dc31d039 fix: use correct ID management flag for oBDS2FHIR 2024-07-08 14:31:09 +02:00
7c54b6bb08 Merge pull request #191 from samply/fix/oauth-redirect
fix: Fix traefik label for oauth2 redirect
2024-07-03 15:10:28 +02:00
9e4bc214ce fix: Fix traefik label for oauth2 redirect 2024-07-03 13:01:02 +00:00
94a38155b5 Merge pull request #190 from samply/feature/obds-fhir
feature: upgrade to oBDS2FHIR
2024-07-02 09:19:26 +02:00
2ee8e0185a feature: upgrade to oBDS2FHIR 2024-07-02 08:57:15 +02:00
f28e3c2cd2 Remove unnecessary default values 2024-07-01 15:19:44 +02:00
91ff51304b Add new dashboard backend 2024-07-01 15:14:38 +02:00
e28b125b93 Merge pull request #189 from samply/fix/updateBlazeVersion
fix: set blaze to version 0.28
2024-06-28 14:31:56 +02:00
f7751b9d92 fix: set blaze to version 0.28
The 0.28 release is not downgradeable, therefore we are switching back to 0.28
2024-06-28 14:29:56 +02:00
4da71353cc Merge pull request #188 from samply/fix/blazeDktkNotRunning
Switch to old blaze Version
2024-06-28 13:59:20 +02:00
0db7df1440 Update docker-compose.yml 2024-06-28 13:57:30 +02:00
373ba7a543 Merge pull request #187 from samply/feat/focus-retry
feat: allow setting focus retry count and increase default
2024-06-14 17:00:13 +02:00
e72c996952 feat: allow setting focus retry count and increase default 2024-06-13 07:29:54 +00:00
4fc53c00bf Fix typo 2024-06-11 08:41:35 +02:00
647aa05c73 Merge pull request #186 from samply/feat/journal-logs
feat: Add logs command for journalctl
2024-06-05 14:58:35 +02:00
Jan
ec9df1feec Update README.md
Co-authored-by: patrickskowronekdkfz <86347677+patrickskowronekdkfz@users.noreply.github.com>
2024-06-05 14:57:42 +02:00
a018104e0b feat: Add logs command for journalctl and rename old one to docker-logs 2024-06-05 12:35:44 +00:00
68f06c0d9d Merge pull request #185 from samply/update/focus_main
switch focus of ccp to tag
2024-05-21 16:21:22 +02:00
033da484d1 switch focus of ccp to tag 2024-05-21 16:16:40 +02:00
714e46f082 Merge pull request #184 from samply/refactor/mainzelliste-return-ssl
Ensure Mainzelliste returns SSL in Responses
2024-04-29 08:33:00 +02:00
29c2b5ef69 refactor: Ensure Mainzelliste returns SSL in Responses
Before, the Mainzelliste would always use http instead of https when referring to itself in responses
2024-04-26 11:29:38 +02:00
433edde75a Merge pull request #182 from samply/revert-177-maintenance/gbn
Revert "GBN maintenance mode"
2024-04-18 11:37:11 +02:00
fe3fc6204a Revert "GBN maintenance mode" 2024-04-18 11:01:04 +02:00
4b3b13b101 Merge pull request #177 from samply/maintenance/gbn
GBN maintenance mode
2024-04-17 20:47:35 +02:00
1afbf88a76 fix: use only bbmri broker 2024-04-16 09:23:42 +02:00
7d5f771181 Merge pull request #181 from samply/fix/secret-sync-args
fix: Generate public oidc client when there is no private client
2024-04-16 09:15:53 +02:00
f9a9baf13d fix: Generate public oidc client when there is no private client 2024-04-15 15:53:27 +02:00
d4259406a9 Merge pull request #180 from samply/fix/secret-sync
fix: Kill stale secret-sync instances
2024-04-15 13:18:38 +02:00
0745eab7b5 fix: Kill stale secret-sync instances 2024-04-15 13:14:46 +02:00
b404277083 Merge pull request #179 from samply/update/focus_0_4_4
update: dktk focus to 0.4.4
2024-04-15 11:02:31 +02:00
b767b3230f update: dktk focus to 0.4.4 2024-04-15 10:13:16 +02:00
dd653a7871 Merge pull request #178 from samply/PierreDelpy-patch-1
fix typo functions.sh
2024-04-15 09:09:47 +02:00
7418861e8c fix typo functions.sh 2024-04-15 09:08:56 +02:00
94b2c29bc7 GBN maintenance mode 2024-04-15 08:31:57 +02:00
ac3ff314ff Merge pull request #176 from samply/fix/bash-math
fix: Make math work on bash 4.2
2024-04-03 12:59:52 +02:00
2831fb9a22 fix: Make math work on bash 4.2 2024-04-02 14:36:23 +02:00
7934d912b8 Merge pull request #175 from samply/update/focus_0_4_2
Update focus to 0.4.2
2024-03-22 14:11:09 +01:00
70ad318b28 Update focus to 0.4.2 2024-03-22 13:59:42 +01:00
6da143f348 Merge pull request #164 from samply/feature/datashield
Exporter, Teiler, QB, DataSHIELD
2024-03-19 09:55:56 +01:00
4fac079aec Merge branch 'main' into feature/datashield 2024-03-19 09:52:40 +01:00
ec6f9302a1 Fix spelling of log WARN 2024-03-19 08:47:57 +00:00
896b24be9b Use bridgehead log functions in datashield setup 2024-03-19 08:45:50 +00:00
adf8e35ba9 Remove empty file (teiler-ui-compose.yml) 2024-03-18 19:22:10 +01:00
480bbe04e7 Changed: TEILER_DEFAULT_LANGUAGE 2024-03-18 16:47:40 +01:00
d8b9498ef9 Update minimal/docker-compose.yml
Co-authored-by: Jan <59206115+Threated@users.noreply.github.com>
2024-03-18 12:45:46 +01:00
3180d0fd76 Replace | openssl rsautl -sign with | sha1sum | openssl pkeyutl -sign 2024-03-18 12:44:34 +01:00
3a8df378a6 Update lib/functions.sh
Co-authored-by: Tobias Kussel <TKussel@users.noreply.github.com>
2024-03-18 12:36:09 +01:00
8cb33c2ddc Add warning if ENABLE_EXPORTER is not set or set to true 2024-03-18 12:18:19 +01:00
591d95e8db Remove empty line 2024-03-18 12:13:09 +01:00
349027e969 Rename oauth2_proxy docker service to oauth2-proxy 2024-03-18 12:12:16 +01:00
ff06782234 Remove todo 2024-03-18 12:04:04 +01:00
6969a7a3bc Remove unnecessary comment 2024-03-18 12:02:53 +01:00
8ea7da64b7 Merge pull request #165 from samply/fix/useCommonLanguage
Use always English Output of free command
2024-03-15 11:51:37 +01:00
6217e28590 fix: use always english output of free command 2024-03-15 11:48:25 +01:00
a87e9b9284 Merge pull request #154 from samply/refactor/blazePerformanceTuning
Optimize memory usage of Blaze
2024-03-15 09:55:34 +01:00
1f17fad366 fix: Dont change ownership of all files under /tmp/bridgehead and /var/cache/bridgehead 2024-03-14 14:09:21 +00:00
5a6322fcaa refactor: Move oauth2 proxy related things to datashield setup 2024-03-14 11:50:08 +00:00
f88dfb5654 Merge pull request #145 from samply/feature/datashield-central-keycloak
Remove local Keycloak Installation
2024-03-13 10:34:13 +01:00
1a233b81a4 Merge pull request #163 from samply/refactor/datashield
refactor: Move vars to their setup files
2024-03-13 10:13:10 +01:00
e1e523f1ac refactor: tune configuration of blaze according to system memory 2024-03-13 08:56:48 +01:00
7478d804df refactor: Move vars to their setup files 2024-03-11 10:34:05 +00:00
06033c8ea0 Merge pull request #162 from samply/update/focus_0_4_1
Updated ccp focus to 0.4.1
2024-03-08 16:26:41 +01:00
eeb17e7bfe feat: added optional resource cache cap 2024-03-08 13:33:30 +01:00
3223c22ff5 Merge pull request #161 from samply/fix/minimal-checks
Don't test clock skew and private key existence for minimal bridgeheads
2024-03-08 09:20:05 +01:00
ea6441fbcb Updated ccp focus to 0.4.1 2024-03-08 08:33:15 +01:00
1a928e6701 Included the new functionality into the README 2024-03-06 11:35:17 +01:00
8104711075 Allow user to push star model facts to Directory
This takes advantage of new functionality added to Directory sync.

Defaults to false.
2024-03-06 11:26:07 +01:00
b5c35211f6 Dont test clock skew and priv key for minimal bridgeheads 2024-03-05 14:58:06 +00:00
3777d4bf05 Add default value for BLAZE_MEMORY_CAP
Co-authored-by: Tobias Kussel <TKussel@users.noreply.github.com>
2024-03-05 10:35:16 +01:00
48e198fa0c Merge pull request #159 from samply/feat/dnpm-test-data-generation
Create env to control dnpm synthetic data generation
2024-02-29 08:18:13 +01:00
ad4430e480 Create env to control dnpm synthetic data generation 2024-02-28 10:11:03 +00:00
Jan
7245ddc720 Merge pull request #158 from samply/main
Fix focus version
2024-02-28 11:01:47 +01:00
443dcc6ec2 Merge pull request #157 from samply/fix/set-focus-version
fix: set focus version to 0.4.0
2024-02-28 10:56:00 +01:00
b2c933f5e5 fix: set focus version to 0.4.0 2024-02-28 10:52:57 +01:00
db9692795a fix: Fix if syntax 2024-02-27 12:47:33 +01:00
74eb86f8af fix: Update permissions on update 2024-02-27 12:47:33 +01:00
fb4da54297 chore: Add mannheim to datashield sites 2024-02-27 12:47:33 +01:00
3e44dab9f2 chore: Rename datashield mappings to datashield sites 2024-02-27 12:47:33 +01:00
f72e7c7799 Changed: replace keycloak with oidc 2024-02-27 12:47:33 +01:00
19d0fefe94 Changed: master realm 2024-02-27 12:47:33 +01:00
9a1860ccf9 Removed: / from groups 2024-02-27 12:47:33 +01:00
8a197ce5c7 Add oauth2_proxy 2024-02-27 12:47:33 +01:00
29d2bc0440 Add Keycloak to MTBA 2024-02-27 12:47:33 +01:00
2eb56e66c8 Integrate central Keycloak in Teiler 2024-02-27 12:47:33 +01:00
ef8866b943 fix: Start oauth proxy after forward_proxy is ready 2024-02-27 12:47:33 +01:00
cea577bde5 Removed: login-compose 2024-02-27 12:47:33 +01:00
97a558dd46 Removed: Login-compose 2024-02-27 12:47:33 +01:00
1995997ac2 fix: Wait for forward proxy to start 2024-02-27 12:47:33 +01:00
64250d9d21 refactor: Use beam proxy directly as proxy 2024-02-27 12:47:33 +01:00
f3fa1ce712 fix: secret sync account for minimal override 2024-02-27 12:47:33 +01:00
b241feecdb fix: Pull oauth2 proxy from harbor 2024-02-27 12:47:33 +01:00
4a9427a1bd fix: Use forward proxy for secret sync 2024-02-27 12:47:33 +01:00
af3e5231d8 Added: Proxy to R-Studio oauth2-proxy 2024-02-27 12:47:32 +01:00
51e8888fe1 Use latest jq 2024-02-27 12:47:32 +01:00
32ffb33ab1 fix: Only give writeable dirs the docker role 2024-02-27 12:47:32 +01:00
224c1472b2 fix: Correctly set file permissions 2024-02-27 12:47:32 +01:00
01d3a38e18 refactor: Use jq from docker 2024-02-27 12:47:32 +01:00
92a1f4bb59 Add dsCCPhos 2024-02-27 12:47:32 +01:00
4e3cd68922 Only sync secrets on startup 2024-02-27 12:47:32 +01:00
c60c9fc4b4 fix: Use strong pw for opal 2024-02-27 12:47:32 +01:00
f0a05b12ad fix: Generate stable passwords 2024-02-27 12:47:32 +01:00
935c45b74d Added: volume for opal metadata db (III) 2024-02-27 12:47:32 +01:00
01efc6f9b9 Added: volume for opal metadata db (II) 2024-02-27 12:47:32 +01:00
e54475f704 Added: volume for opal metadata db 2024-02-27 12:47:32 +01:00
2f04e51f96 Add test sites 2024-02-27 12:47:32 +01:00
d62f5a404b Add central token manager beam id 2024-02-27 12:47:32 +01:00
977ad139f8 Added: allowed-groups 2024-02-27 12:47:32 +01:00
643e9e67a6 Added: Enable MTBA and Enable DataSHIELD to Teiler Backend 2024-02-27 12:47:32 +01:00
37f100dc01 Default values for MTBA 2024-02-27 12:47:32 +01:00
0793ea9fc6 Use develop version of mtba 2024-02-27 12:47:32 +01:00
44d7b34834 Use last version of mtba 2024-02-27 12:47:32 +01:00
f6dac7038f Only users of group DataSHIELD can use R-Studio 2024-02-27 12:47:32 +01:00
8e5ddc493c teiler-orchestrator and teiler-dashboard latest 2024-02-27 12:47:32 +01:00
Jan
fa141f8e86 fix: undo permission changes on startup 2024-02-27 12:47:31 +01:00
Jan
2a024e751d fix: only change permissions on related files 2024-02-27 12:47:31 +01:00
d3da426610 fix: opal ssl cert 2024-02-27 12:47:31 +01:00
b34f4f2a0f fix: chown syntax 2024-02-27 12:47:31 +01:00
1edcdce5c6 fix: beam connect site renaming 2024-02-27 12:47:31 +01:00
b73ddc883c fix: Change permissions on new bridgehead dirs 2024-02-27 12:47:31 +01:00
9f31e950a5 fix: generate the right beam connect mappings 2024-02-27 12:47:31 +01:00
371097377a feat: Add token-manager to beam 2024-02-27 12:47:31 +01:00
0a2dbb4b2d fix: Restrict rstudio network access 2024-02-27 12:47:31 +01:00
148e87341f move OAUTH2_SECRET 2024-02-27 12:47:31 +01:00
28a612f218 add default template-ids of exporter and reporter 2024-02-27 12:47:31 +01:00
e411883d18 mtba develop 2024-02-27 12:47:31 +01:00
0b2e64a2d5 add /oauth2/callback and /mtba to Keycloak private client 2024-02-27 12:47:31 +01:00
25ac4d2590 mtba latest 2024-02-27 12:47:31 +01:00
f9b26b6958 Use develop branch for mtba 2024-02-27 12:47:27 +01:00
5d4d0405ab fix: public client generation 2024-02-27 12:47:14 +01:00
b44a208e08 Better redirect url handling 2024-02-27 12:47:13 +01:00
0cd4ededc7 Add oauth2_proxy 2024-02-27 12:47:13 +01:00
f6965859fe Add comment about PASSWORD and DISABLE_AUTH in R-Studio 2024-02-27 12:47:13 +01:00
ae965fddb3 Add proxy to R-Studio for loading R packages 2024-02-27 12:47:13 +01:00
903ef0df9b Add Keycloak to MTBA 2024-02-27 12:47:13 +01:00
e32f484c31 Add keycloak configuration 2024-02-27 12:47:13 +01:00
8486abedd4 Add R-Studio Admin Password 2024-02-27 12:47:13 +01:00
163650f592 Add generate_password function 2024-02-27 12:47:13 +01:00
9ebbf2ed9b Bugfix: Export /var/cache/bridgehead/secrets as environment variables 2024-02-27 12:47:13 +01:00
131b52f57b Account for ip address host values 2024-02-27 12:47:13 +01:00
043e12b985 Remove port handling when generating redirect url 2024-02-27 12:47:13 +01:00
bb076c5d5a Add function generate_redirect_urls 2024-02-27 12:47:13 +01:00
3c8ec73ac3 Update oidc provider to new url 2024-02-27 12:47:13 +01:00
0015365d1b Generate additional redirect url 2024-02-27 12:47:13 +01:00
dc3d5496e1 Integrate central Keycloak in Teiler 2024-02-27 12:47:13 +01:00
93a91326a2 Make sure path exists 2024-02-27 12:47:13 +01:00
4115319956 Setup hostname earlier 2024-02-27 12:47:13 +01:00
f854ab58ce Update to new secret-sync semantics 2024-02-27 12:47:13 +01:00
cec3dfe4cd Add secret sync to the bridgehead 2024-02-27 12:47:13 +01:00
3d136959e7 Bugfix: Add version in every docker compose file 2024-02-27 12:47:13 +01:00
8e171b71de Remove unnecessary version of docker-compose.override files 2024-02-27 12:47:13 +01:00
d3edb5e143 Bugfix: Add version in every docker compose file 2024-02-27 12:47:13 +01:00
b87d746a20 Remove unnecessary version of docker-compose.override files 2024-02-27 12:47:09 +01:00
afb63306a8 Remove unnecessary version of docker-compose.override files 2024-02-27 12:46:36 +01:00
90ee8d63f7 Externalize postgres version 2024-02-27 12:44:33 +01:00
8d4f487806 MTBA 1.0.0 2024-02-27 12:44:33 +01:00
a2c242583e Remove nngmSetup in vars 2024-02-27 12:44:33 +01:00
178867cde7 Prevent creation of volumes 2024-02-27 12:44:33 +01:00
77240ff92f Use Bridgehead's internal http proxy 2024-02-27 12:44:33 +01:00
876c4efa41 Make Opal use proxy server 2024-02-27 12:44:33 +01:00
058d1c83e6 Use newest version of beam-connect 2024-02-27 12:44:33 +01:00
ec6407414b Update export template script: FHIR_QUERY to FHIR_PATH 2024-02-27 12:44:33 +01:00
89c90d3aa0 /var/cache for mtba 2024-02-27 12:44:32 +01:00
0039efa353 Add docu about login in teiler 2024-02-27 12:44:32 +01:00
c1020c569a Bugfix: datashield local.json as array 2024-02-27 12:44:32 +01:00
2237562e6e Prevent anonymous volume creation 2024-02-27 12:44:32 +01:00
c8fc35576e Bugfix: Exporter and Reporter /var/cache volumes 2024-02-27 12:44:32 +01:00
3dfc4cf57d Postgres 15.4 in datashield, exporter and login 2024-02-27 12:44:32 +01:00
3a6520a668 Update ccp/modules/mtba.md
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2024-02-27 12:44:32 +01:00
dcddbf2235 Bugfix: Add version of docker-compose 2024-02-27 12:44:26 +01:00
e2f31b6eeb Make sure copy works and the correct owner is set 2024-02-27 12:43:34 +01:00
452946aa04 Add all sites 2024-02-27 12:43:34 +01:00
5c7da0d40d Auto generate mappings 2024-02-27 12:43:34 +01:00
77145277de Add ccp to /var/cache/bridgehead/* volumes 2024-02-27 12:43:34 +01:00
9cdcf2afb8 Rewrite comments 2024-02-27 12:43:34 +01:00
13a74e5dab Move exporter db to /var/cache/bridgehead 2024-02-27 12:43:34 +01:00
c33726d385 Exporter cache 2024-02-27 12:43:34 +01:00
f38d9f8c19 Rework commented sections 2024-02-27 12:43:34 +01:00
b5ca5ea4a7 Autogenerate maps for Opal's beam-connect. To be completed by @Threated with a map-generator in the script. 2024-02-27 12:43:34 +01:00
862e452f3c Cache opal in /var/cache/bridgehead 2024-02-27 12:43:34 +01:00
4aa8f0f3ba Bugfix: Add version in every docker compose file 2024-02-27 12:43:33 +01:00
ccf0b91f17 #!/bin/bash -e 2024-02-27 12:43:33 +01:00
720783249d Bugfix: LDM_AUTH instead of LDM_PASSWORD 2024-02-27 12:43:33 +01:00
2b3eabe95c Rename Teiler Backend, Teiler Dashboard and Teiler Orchestrator 2024-02-27 12:43:33 +01:00
14aece46f7 Add site to exporter and reporter 2024-02-27 12:43:33 +01:00
ff1f7904ad Add forward proxy to teiler-core 2024-02-27 12:43:33 +01:00
8d38adc91e Bugfix: mtba labels 2024-02-27 12:43:33 +01:00
cfc3c7c90e Bugfix: exporter 2024-02-27 12:43:33 +01:00
963144cc31 Disable datashield 2024-02-27 12:43:33 +01:00
765613b87f Bugfix: MTBA path prefix 2024-02-27 12:43:32 +01:00
2b61775652 Enable datashield 2024-02-27 12:43:32 +01:00
4b0b17424f Comment Keycloak volume 2024-02-27 12:43:32 +01:00
f26a8f7a71 Fix comment in login-compose.yml 2024-02-27 12:43:32 +01:00
973b5828f6 Remove old comment of exporter-setup.sh 2024-02-27 12:43:32 +01:00
839e7a4518 Comment on datashield volume 2024-02-27 12:43:32 +01:00
6cfb42dc9b Comment on export and report volumes 2024-02-27 12:43:32 +01:00
5d8bec53c0 Bugfix: JAVA_OPTS for exporter 2024-02-27 12:43:32 +01:00
c52975f204 Add mtba module documentation 2024-02-27 12:43:32 +01:00
957fa64ce9 Add teiler-ui module documentation 2024-02-27 12:43:32 +01:00
b4805af0a1 Add some docs about beam-connect 2024-02-27 12:43:31 +01:00
e3b8a7369b Add login module documentation 2024-02-27 12:43:31 +01:00
adeaf433dc Add Exporter module documentation 2024-02-27 12:43:31 +01:00
846e9c23a7 Add DataSHIELD module documentation 2024-02-27 12:43:31 +01:00
bb7451d8c3 Add JAVA_OPTS to reporter and exporter 2024-02-27 12:43:31 +01:00
26165232f0 Enable Login, Teiler and Exporter 2024-02-27 12:43:31 +01:00
7ed24f667d Export and QB Curl templates 2024-02-27 12:43:31 +01:00
d97ac56126 Generate exporter api key automatically 2024-02-27 12:43:31 +01:00
e7f6c0b1a0 Add default language to ccp 2024-02-27 12:43:31 +01:00
c4c4f743d2 Remove updater cron of teiler-core 2024-02-27 12:43:31 +01:00
be9adcbfa2 Remove clean temp files configuration of exporter 2024-02-27 12:43:30 +01:00
10a362c237 Add explanation of why the volume of exporter-db is currently so important for us. 2024-02-27 12:43:30 +01:00
75c86b79e8 Add Teiler Admin to Keycloak 2024-02-27 12:43:30 +01:00
a6443a6857 Remove IS_DKTK_SITE 2024-02-27 12:43:30 +01:00
f3745b973a Use default user rstudio in rstudio 2024-02-27 12:43:30 +01:00
50d28d293f Generate DATASHIELD_CONNECT_SECRET automatically 2024-02-27 12:43:30 +01:00
44415369cc Update ccp/modules/datashield-compose.yml 2024-02-27 12:43:30 +01:00
9b8331ed28 Update ccp/modules/datashield-compose.yml 2024-02-27 12:43:30 +01:00
73d969e374 Use LDM_PASSWORD for all admin passwords 2024-02-27 12:43:30 +01:00
840096d1d5 Enable only if true 2024-02-27 12:43:29 +01:00
43c45f0628 Remove todo in rstudio 2024-02-27 12:43:29 +01:00
e182e2fbe6 Remove unnecessary version of docker-compose.override files 2024-02-27 12:43:23 +01:00
c8bafb2461 R-Server rock-base:6.3 2024-02-27 12:42:38 +01:00
0866cacc5a Use postgres if docker.verbis.dkfz.de 2024-02-27 12:42:38 +01:00
a1e76a61b8 Remove ports of beam-connect in datashield-compose.yml 2024-02-27 12:42:38 +01:00
09aa33c912 Generate passwords only if modules are enabled 2024-02-27 12:42:38 +01:00
36ac8d41c8 Add http scheme to exporter 2024-02-27 12:42:37 +01:00
c003999721 Migrate to new app key syntax 2024-02-27 12:42:37 +01:00
50360d3f41 update new broker 2024-02-27 12:42:37 +01:00
5148e3382d Add parameter LOG_FHIR_VALIDATION to exporter 2024-02-27 12:42:37 +01:00
2d7d1d73b3 Add reporter 2024-02-27 12:42:37 +01:00
20c65336e6 Switch to no-auth branch of beam-connect 2024-02-27 12:42:37 +01:00
276f886546 secrets are readonly by default 2024-02-27 12:42:37 +01:00
bc239c0b02 change to dockerhub image 2024-02-27 12:42:37 +01:00
6438fc5f4e Change beam-connect version and load opal cert 2024-02-27 12:42:37 +01:00
f2f48869af Change cert permission and location 2024-02-27 12:42:36 +01:00
e9e1ce5a65 ccp.conf in teiler-core as secret 2024-02-27 12:42:36 +01:00
687dbba383 Add opal certificate 2024-02-27 12:42:36 +01:00
5e376b17ad Remove unnecessary volumes 2024-02-27 12:42:36 +01:00
04cf5128b0 Remove mongo db 2024-02-27 12:42:36 +01:00
43ab59563c Add Opal Password in Exporter 2024-02-27 12:42:36 +01:00
996f53a164 expose beam connect ports 2024-02-27 12:42:36 +01:00
b5ce188842 Fix beam connect app id 2024-02-27 12:42:36 +01:00
325ae1d574 beam connect and move beam-connect config 2024-02-27 12:42:36 +01:00
68782d1c32 Experiment 2024-02-27 12:42:35 +01:00
bedc2ca6d0 Add beam connect to docker-compose 2024-02-27 12:42:35 +01:00
dfde7c18ff Experiment 2024-02-27 12:42:35 +01:00
0b1e0474d7 Add DataSHIELD 2024-02-27 12:42:35 +01:00
72255e6211 Bugfix: cross origins of exporter 2024-02-27 12:42:35 +01:00
32de51eefb Merge id-management-setup with main 2024-02-27 12:42:35 +01:00
0cfe1d3617 Change salt string for exporter and login 2024-02-27 12:42:35 +01:00
fe07c63f36 Adapt teiler-ui to traefik 2024-02-27 12:42:35 +01:00
3a91259a8a Add keycloak teiler app to teiler-ui 2024-02-27 12:42:35 +01:00
4bbd2a15fe Change volume names for teiler components 2024-02-27 12:42:35 +01:00
0a17bbc81f Add stripprefix to teiler-ui 2024-02-27 12:42:35 +01:00
c794508880 Add stripprefix to teiler-core 2024-02-27 12:42:34 +01:00
3e0bf38018 Add forward strategy to teiler-core 2024-02-27 12:42:34 +01:00
e2d109558d Add forward strategy to teiler-core 2024-02-27 12:42:34 +01:00
9299a201a6 Deactivate traefik for mtba 2024-02-27 12:42:34 +01:00
c9b1975c9e Tidy teiler and mtba volumes 2024-02-27 12:42:34 +01:00
17f52a7907 Add Teiler Core 2024-02-27 12:42:34 +01:00
4d1a9bb701 Add Endpoint for Teiler 2024-02-27 12:42:34 +01:00
efc04cea4f Update Teiler Core config 2024-02-27 12:42:34 +01:00
8fe03a6cd2 Add original Keycloak config 2024-02-27 12:42:34 +01:00
c66dac9881 update keycloak config 2024-02-27 12:42:33 +01:00
38c7f3c24a beautiful config 2024-02-27 12:42:33 +01:00
49be101165 Rename teiler to exporter (bugfix) 2024-02-27 12:42:33 +01:00
6626f860a2 Rename teiler to exporter 2024-02-27 12:42:33 +01:00
eb17d8c159 Configure login extern URLs 2024-02-27 12:42:33 +01:00
6340acdbe8 Bugfix: services in teiler-ui-compose.yml 2024-02-27 12:42:32 +01:00
c916a357dc Change images of dktk-teiler and dktk-keycloak 2024-02-27 12:42:32 +01:00
20e2b2a0ed Add nngm and exliquid modules 2024-02-27 12:42:32 +01:00
2e6edb6179 Add Teiler UI and Teiler module 2024-02-27 12:42:32 +01:00
c58096aa27 Merge pull request #155 from samply/fix/dnpm-no-proxy
Add DNPM_NO_PROXY configuration option
2024-02-23 11:37:47 +01:00
b5ef856f12 refactor: calculate memory using free
Co-authored-by: Tobias Kussel <TKussel@users.noreply.github.com>
2024-02-23 08:27:06 +01:00
5470fd726a Merge pull request #156 from samply/fix/dnpm-beam-connect-trusted-certs
mount and process trusted certs in dnpm-beam-connect
2024-02-22 19:25:02 +01:00
3f6e3a2bb4 mount and process trusted certs in dnpm-beam-connect 2024-02-22 15:41:11 +00:00
9937002d06 Add DNPM_NO_PROXY configuration option 2024-02-21 15:04:00 +00:00
e4bc34cce9 Amended following Tobias' comments 2024-02-21 10:08:54 +01:00
a1d0e93106 Merge pull request #153 from samply/dnpm-not-in-bbmri
Remove DNPM code from BBMRI
2024-02-20 16:46:47 +01:00
7d07c0623d refactor: optimize memory usage of blaze 2024-02-20 15:27:00 +01:00
f367a406bb Remove DNPM code from BBMRI 2024-02-20 10:47:42 +01:00
8854670f4d Merge pull request #152 from samply/feature/dnpm-echo
Add dnpm echo to dnpm-compose
2024-02-20 10:36:26 +01:00
aac31945a3 Add dnpm echo to dnpm-compose 2024-02-19 08:50:15 +00:00
60b2bddf15 Merge pull request #146 from samply/fix/updates
Fix image updates for image names with vars
2024-02-14 16:37:28 +01:00
d8da5da7eb Merge pull request #150 from samply/feature/bridgehead-logs
Add `bridgehead logs` command
2024-02-14 16:25:41 +01:00
16fc40f8ae feat: Add bridgehead logs command 2024-02-14 14:43:17 +00:00
e90c087547 Merge pull request #148 from samply/increase-postgres-version
Bump postgres version to 15.6
2024-02-13 08:43:20 +01:00
001b84a774 Revert "Merge pull request #147 from samply/fix/set-focus-version-to-main"
This reverts commit 6550c0cdab, reversing
changes made to 40d991d94e.
2024-02-12 08:55:29 +00:00
ed0bd483dd Use backwards compatible compose config version 2024-02-12 08:54:15 +00:00
5516ad7641 Add project 2024-02-12 08:54:15 +00:00
d44ff4055f fix(updates): Use docker compose config to list images 2024-02-12 08:54:15 +00:00
44ac09b9c1 Bump postgres version to 15.6 2024-02-09 16:58:02 +01:00
f3abde1dfd Merge pull request #138 from samply/documentation/blaze_resources
Added Blaze performance info to README
2024-02-09 11:07:24 +01:00
6550c0cdab Merge pull request #147 from samply/fix/set-focus-version-to-main
fix: set focus version to main
2024-02-08 13:42:10 +01:00
2d5b6e6932 fix: set focus version to main 2024-02-08 12:30:42 +01:00
8fddb809a7 Recommendation for standalone Bridgehead
Requested by Zdenka
2024-01-25 13:49:46 +01:00
40d991d94e Merge pull request #124 from samply/documentation/gba_additions
Mentioned data protection concept and added GBA Firewall change
2024-01-25 13:42:39 +01:00
ae02526baf Merge pull request #142 from samply/addTestInstancesForBbmri
Tested for test GBN and ERIC BHs
2024-01-16 13:03:12 +01:00
0fd2481425 Merge pull request #143 from samply/mtba-hotfix
Hotfix Pierre: Use version 1.0.0 for MTBA
2024-01-09 12:50:34 +01:00
5ba1a1a820 Fix variable visibility 2024-01-09 11:47:02 +01:00
ea51fc5910 Hotfix Pierre: Use version 1.0.0 for MTBA 2024-01-09 11:32:01 +01:00
c4018aae08 fixed gbn broker url 2024-01-09 07:53:44 +01:00
417c158435 GBN broker IDs 2024-01-08 16:26:07 +01:00
00030a6141 GBN variable names 2024-01-08 16:15:46 +01:00
29fb0e7099 Use focus tag depending on ENVIRONMENT 2024-01-08 15:49:46 +01:00
00cae67fa1 Add missing switch-case for gbn 2024-01-08 15:49:28 +01:00
2074461ee7 Use new variable ENVIRONMENT in /etc/bridgehead; defaults to "production". 2024-01-08 13:03:12 +01:00
954d46efb1 Added test root certs and logic for beam to use test brokers 2024-01-05 11:58:42 +01:00
48558812aa Merge pull request #141 from samply/feature/nngm-module-in-minimal
Add nngm module to minimal project
2023-12-18 10:53:53 +01:00
a80a980cea Merge pull request #140 from samply/feature/dnpm-node
Feature/dnpm node
2023-12-18 10:53:39 +01:00
2606c62b1c Merge pull request #139 from samply/fix/dnpm-connect
Cleanup dnpm connect module
2023-12-18 10:53:25 +01:00
f66f2755d8 Change landing page path override to /landing 2023-12-15 13:32:00 +00:00
842c83c66f Use updated nngm module setup 2023-12-15 10:39:50 +00:00
d28a3ac889 Add dnpm node module to bbmri project 2023-12-15 09:46:40 +00:00
fb6af1c4af Add nngm module to minimal project 2023-12-15 09:43:31 +00:00
c02da838c7 Cleanup dnpm connect module 2023-12-15 09:41:11 +00:00
459fa7f78e Add DNPM Node feature to minimal 2023-12-15 09:24:06 +00:00
28c38ed569 Provided a more general relation between resource count & disk space 2023-12-08 10:37:08 +01:00
16211cfedf Added Blaze performance info to README 2023-12-05 13:05:10 +01:00
0b90cdb769 Merge pull request #137 from samply/mtba-hotfix-image
set mtba image to latest
2023-11-30 15:00:02 +01:00
9bf1b42003 set mtba image to latest 2023-11-30 14:58:54 +01:00
7b96864e63 Merge pull request #136 from samply/bugfix/mtba-1.0.0
Bugfix: MTBA 1.0.0
2023-11-30 13:57:58 +01:00
2ba9645ab4 Bugfix: MTBA 1.0.0 2023-11-30 12:29:58 +01:00
6457b21ac6 Merge pull request #130 from samply/documentation/data_load
Information about loading data into the Bridgehead's FHIR store
2023-11-21 11:10:52 +01:00
7ce501548a Merge pull request #134 from samply/feature/ccp-obfuscation
Enable obfuscation for ccp queries
2023-11-17 13:50:34 +01:00
5558d4fefc Merge branch 'main' into documentation/gba_additions 2023-11-08 10:05:00 +01:00
545c6175f5 Replaced reference to "bbmri" with more generic "project". 2023-11-08 09:55:21 +01:00
096225a77d Incorporated Patrick's comments for PR 130 2023-11-06 09:28:43 +01:00
2252504d78 add bwhc node module 2023-11-03 07:33:16 +00:00
6bf34b7732 Enable obfuscation for ccp queries 2023-11-02 07:34:56 +00:00
dc0d42ca07 Merge pull request #133 from samply/fix/auth_proxy_usage
fix: adjusted the forwarding of env vars to forward proxy
2023-10-26 11:41:06 +02:00
90248b331f fix: adjusted the forwarding of env vars to forward proxy 2023-10-25 15:22:08 +02:00
e693a8f0e6 Merge pull request #132 from samply/fix/proxy-git-update
git requires http proxy config even for https connections
2023-10-25 11:10:09 +02:00
d16eb6c94d git requires http proxy config even for https connections 2023-10-25 08:47:02 +00:00
b52d49b4ef Merge pull request #127 from samply/fix/proxy_usage
Added proxy user + pw detection
2023-10-24 19:15:13 +02:00
699d8d6398 fix: git call 2023-10-24 10:42:36 +02:00
7e7d184e8b Merge pull request #131 from samply/feature/adt2fhir-rest
Feature/adt2fhir rest
2023-10-24 09:45:08 +02:00
392afb6410 Fix code 2023-10-24 07:23:24 +00:00
f855a19865 Fix sed (?) 2023-10-24 07:12:18 +00:00
bbfc607104 Always define new vars 2023-10-24 07:07:06 +00:00
f008b18760 Redo proxy, set HTTPS_PROXY_HOST and HTTPS_PROXY_PORT 2023-10-24 07:01:22 +00:00
0555786435 fix bash logic 2023-10-24 07:42:27 +02:00
262b9bd62e add adt2fhir-rest service 2023-10-24 07:30:17 +02:00
e0990d99cb Comment out HTTP proxy parsing 2023-10-23 11:06:59 +00:00
9fc8564e4e Fixed git proxy check 2023-10-20 16:47:15 +02:00
74817a21da Rewrote proxy detection logic to deal with all combinations of no/authenticated/unauthenticated proxy servers 2023-10-20 15:59:24 +02:00
87cc0acecc Corrected Link to Docker Daemon Proxy Configuration (#129) 2023-10-20 14:18:56 +02:00
93026d2d89 Change tag for bridgehead-landingpage 2023-10-20 13:58:46 +02:00
d9794a1eea Information about loading data into the Bridgehead's FHIR store
Added this information because many sites have asked about it.
2023-10-20 10:21:52 +02:00
68cd62b981 reaf: var naming for proxy usage in our bridgehead scripts 2023-10-10 10:43:22 +02:00
85446b0a3e Added SECURE_PROXY if the https and http proxy are the same 2023-10-09 09:43:30 +02:00
4bdad68da5 Added proxy user + pw detection 2023-10-05 09:43:57 +02:00
3dadeef786 Merge pull request #125 from samply/urls
readme: Provide URL list for forward proxies
2023-09-27 10:06:48 +02:00
5ca11d1bf5 Update README.md 2023-09-27 10:06:28 +02:00
997c4df5c0 Merge pull request #126 from samply/fix/docker-cache
use docker cache for beam-enroll and vaultfetcher
2023-09-27 09:42:14 +02:00
3c0a994237 use docker cache for beam-enroll and vaultfetcher 2023-09-27 09:22:11 +02:00
377b003207 Refactor, add BBMRI-ERIC gitlab 2023-09-27 09:12:48 +02:00
0c75ac2810 Add healthchecks
Co-authored-by: Torben Brenner <76154651+torbrenner@users.noreply.github.com>
2023-09-27 08:48:45 +02:00
d21c6d7835 Move git/docker proxy config 2023-09-26 13:42:14 +02:00
de10c8508e readme: URL list 2023-09-26 13:25:26 +02:00
b3ace55898 Mentioned data protection concept and added GBA Firewall change 2023-09-25 13:48:41 +02:00
b07731442b Merge pull request #121 from samply/chown-message
Fix error messages about wrong permissions
2023-09-25 10:23:50 +02:00
52f6193fde Merge pull request #123 from samply/correct-broker-url
Use GBN Broker for timesync if ERIC is disabled
2023-09-22 09:37:35 +02:00
a4ce7f4eb6 Fix substitution 2023-09-22 07:35:05 +00:00
49b5cb976a Use GBN Broker for timesync if ERIC is disabled 2023-09-22 07:25:04 +00:00
Jan
50ef08ca6d Merge pull request #122 from samply/check-curl
Check if curl is installed
2023-09-22 08:22:52 +02:00
c354c450f3 Check if curl is installed 2023-09-21 16:23:22 +00:00
6bb0471a64 Merge pull request #120 from samply/maintenance/updatePostgres
Update Postgres from 15.1 to 15.4
2023-09-19 11:34:45 +02:00
2b0cdc0345 Fix error messages about wrong permissions 2023-09-19 11:33:19 +02:00
850a8eb973 chore: update postgres from 15.1 to 15.4 2023-09-18 15:06:00 +02:00
6b1ea4c74e Update gbn-setup.sh 2023-09-04 15:54:22 +02:00
8c8ebb9298 Added gbn broker url and e-mail 2023-09-04 15:30:44 +02:00
0536023ceb Merge pull request #117 from samply/feature/bbmri_de
Added broker.bbmri.de root cert
2023-09-04 14:44:38 +02:00
3a3a9d09a9 Added broker.bbmri.de root cert 2023-09-04 14:30:52 +02:00
c1f2131438 Merge pull request #116 from samply/main_missing_section
Changed section title in TOC
2023-09-01 14:13:10 +02:00
60e0db00a7 Changed section title in TOC 2023-09-01 14:02:46 +02:00
191be47252 Merge pull request #106 from samply/main_tls_docu
Added advice for finding PEM files
2023-08-30 11:55:56 +02:00
42300e923f Corrected URL 2023-08-30 11:53:06 +02:00
6b025a8f6a Merge branch 'main' into main_tls_docu 2023-08-30 11:52:13 +02:00
4ab1ff2008 Merge pull request #115 from samply/main_zdenkas_directory_documentation
Zdenkas updates for Directory and Collection usage
2023-08-30 11:28:59 +02:00
dddbf0efd0 Zdenkas updates for Directory and Collection usage
This updates documentation only.
2023-08-29 10:41:25 +02:00
f4ff6f418a Merge pull request #113 from samply/main_documentation_git_docker_proxy
Added info about git and Docker proxy to documentation.
2023-08-29 10:24:05 +02:00
53c9580a46 Merge pull request #114 from samply/fix/dnpm-connect-env
Fix dnpm connect env, from bool to string
2023-08-29 07:47:14 +02:00
169ce2436f fix dnpm connect env, from bool to string 2023-08-28 17:21:52 +00:00
66deff38a2 Corrected Docker link 2023-08-28 10:24:18 +02:00
eeba6bce39 Added info about git and Docker proxy to documentation. 2023-08-28 10:15:19 +02:00
09b02fe4b6 Merge pull request #112 from samply/fix/dnpm-connect-tag
Use beam-connect:develop for DNPM
2023-08-24 19:23:50 +02:00
bba8a03f9f Use beam-connect:develop for DNPM 2023-08-24 12:22:43 +00:00
Jan
86239a80e7 Merge pull request #107 from samply/feature/testable-bridgehead
Dont require beam enroll to run interactively
2023-08-22 10:22:21 +02:00
6cfa745385 Remove -it from docker run 2023-08-17 11:21:20 +00:00
cfb1bed7b4 Adapt to changes in main 2023-08-17 11:20:38 +00:00
ff942ac735 Merge pull request #109 from samply/dktk-migration
Move DKTK-migration to Main
2023-08-16 09:48:12 +02:00
8d83fa1781 Merge pull request #110 from samply/dktk-migration2
Dktk migration2
2023-08-16 09:41:33 +02:00
fa973e2cfa fix: path in mtba setup 2023-08-16 09:37:30 +02:00
bbda5e917f Cleanup 2023-08-16 09:35:36 +02:00
e69c0ec306 Merge branch 'main' into dktk-migration 2023-08-16 09:14:04 +02:00
6af6dae6b6 Merge pull request #108 from samply/feature/custom-basic-auth
refactor addUser to adduser - lowercase
2023-08-15 15:50:35 +02:00
d2e4fc3ea3 Merge pull request #105 from samply/feature/custom-basic-auth
Feature/custom basic auth
2023-08-15 15:44:43 +02:00
af25df79e3 refactor addUser to adduser - lowercase 2023-08-15 15:42:42 +02:00
b58348328c fix nngm migration 2023-08-15 15:34:49 +02:00
829102f23e Merge branch 'main' into feature/custom-basic-auth 2023-08-15 14:24:19 +02:00
4754eb282b add migration for old credentials 2023-08-15 14:08:22 +02:00
705fbeaf97 Added advice for finding PEM files 2023-08-14 13:18:42 +02:00
2c7de6c8b4 refactor strange formatting 2023-08-09 09:24:23 +02:00
3f43c32bd2 refactor addUser code 2023-08-09 09:10:20 +02:00
7e6c310148 Merge pull request #104 from samply/bbmri-combined
BBMRI-ERIC / GBN combined Bridgehead
2023-08-09 08:42:10 +02:00
dc0fc286b1 add generic bash function addBasicAuthUser 2023-08-08 09:28:59 +02:00
eeacf6cc11 Merge pull request #103 from samply/prevent-creation-of-many-anonymous-volumes
Prevent creation of anonymous volumes
2023-08-07 15:40:27 +02:00
54d83736c3 Move BBMRI-ERIC, GBN to modules 2023-08-07 15:19:41 +02:00
b32a19a7b5 Make Directory Sync a module 2023-08-07 13:00:24 +02:00
acc1e2361a Prevent creation of many anonymous volumes
In combination with https://github.com/samply/bridgehead-forward-proxy/pull/10, this will prevent the creation of two anonymous volumes per startup for the bridgehead-forward-proxy.
2023-08-04 17:20:46 +02:00
6ccf9b2a70 Merge pull request #64 from samply/automate
Allow to automate installation
2023-08-01 10:54:56 +02:00
8ff5405b18 Merge pull request #99 from samply/main_firewall_exceptions
Added URLs that need to be accessible for the installation to work.
2023-08-01 10:34:12 +02:00
e775ec5834 Update README.md 2023-08-01 10:34:06 +02:00
317e7bc017 Merge pull request #102 from samply/fix/time-check
Fixed the Time Check
2023-08-01 10:30:59 +02:00
7093166a53 fix: make the check case insensitive 2023-08-01 10:25:23 +02:00
20359fde71 fix: correctly parse curl output 2023-08-01 10:23:14 +02:00
708fc41d12 Merge pull request #100 from samply/checkClockSync
Check time sync in prereqs
2023-07-29 16:05:04 +02:00
b7ed90c5c8 Change logging/reporting order 2023-07-28 11:32:52 +00:00
11bfd94f2a Merge branch 'main' into checkClockSync 2023-07-28 13:28:18 +02:00
9facafd0c4 Only read headers from Broker to check clock skew 2023-07-28 11:23:54 +00:00
8046eddfef Merge pull request #98 from samply/fix/redirectWithSSL
Ensure Id Management redirects with SSL
2023-07-27 16:11:47 +02:00
788e4ea9f7 add generic bash function addBasicAuthUser 2023-07-27 13:53:20 +02:00
8c45e1da80 Added URLs that need to be accessible for the installation to work. 2023-07-27 09:44:55 +02:00
6ad91edefb Don't run Blaze in debug mode 2023-07-27 09:37:19 +02:00
3a4c7b2ece dont require beam enroll to run interactively 2023-07-26 12:26:37 +00:00
0a12720e4c fix: ensure id-management redirects with ssl 2023-07-25 13:27:21 +02:00
7feb903dfa Merge pull request #97 from samply/docs-directory
Docs: Move info about BBMRI-ERIC Directory
2023-07-12 11:47:27 +02:00
b311ff7831 Docs: Move info about BBMRI-ERIC Directory 2023-07-12 08:53:16 +02:00
ed56f19b4e Merge branch 'main' into dktk-migration 2023-07-11 06:33:28 +00:00
11db7e2be9 Remove ref to report-hub 2023-06-30 13:36:42 +02:00
3dec0a7178 Remove EXLIQUID 2023-06-30 13:31:11 +02:00
612f350a60 Merge pull request #96 from samply/remove_exliquid_components
Deleted EXLIQUID setup + compose
2023-06-30 12:59:05 +02:00
9ca3e0059e Removed generation of report-hub secret 2023-06-30 12:33:34 +02:00
512f335da8 Removed Report-Hub env var 2023-06-30 12:32:25 +02:00
f510275685 Deleted EXLIQUID setup + compose 2023-06-30 12:30:19 +02:00
6288f809fb Merge branch 'main' into dktk-migration 2023-06-30 09:02:37 +00:00
10da2af390 Merge pull request #95 from samply/fix/dnpm-root-cert
Add new dktk root cert for non ccp project
2023-06-30 09:52:54 +02:00
dd0c28daf3 Add new dktk root cert 2023-06-30 07:50:18 +00:00
ba34d24fac Merge pull request #94 from samply/fix/dnpm-connect-path
Add traefik path stripper to dnpm-connect
2023-06-30 08:15:47 +02:00
6d94ebd4eb Add path stripper 2023-06-30 06:02:32 +00:00
957753965f Merge branch 'main' into dktk-migration 2023-06-29 09:27:27 +00:00
f61b6ba6b1 Merge pull request #93 from samply/fix/docker-namespace-two
Fix docker namespaces step 2 of 2
2023-06-29 10:31:43 +02:00
431eceb071 Fix docker namespaces step 2 of 2 2023-06-29 08:27:00 +00:00
e040579acf Merge branch 'main' into dktk-migration 2023-06-29 07:55:18 +00:00
b37e9daf80 Merge pull request #92 from samply/fix/docker-namespace
Fix docker namespaces step 1 of 2
2023-06-29 09:54:41 +02:00
f41e7df820 Fix docker namespaces step 1 of 2 2023-06-29 07:52:11 +00:00
d101900088 Merge pull request #91 from samply/fix/root-cert-path
Absolute path for beam root cert
2023-06-28 15:41:07 +02:00
a033386464 Absolute path for beam root cert 2023-06-28 15:38:05 +02:00
9f4523cf9e Absolute path for beam root cert 2023-06-28 14:55:35 +02:00
7b32eed493 Merge branch 'main' into dktk-migration 2023-06-28 13:58:18 +02:00
6cd7423a0a Merge pull request #82 from samply/feature/dnpm-connect
Feature/dnpm connect
2023-06-28 12:00:23 +02:00
f0d423fcf7 Adapt to new beam app syntax 2023-06-28 11:48:47 +02:00
3304d2818d Merge branch 'main' into feature/dnpm-connect 2023-06-28 11:45:45 +02:00
499360712b Merge pull request #83 from samply/update/blaze-20
Change blaze tag to latest
2023-06-28 11:43:57 +02:00
37ed5f5cd9 Merge pull request #88 from samply/new-beam-apikey-naming
New beam-proxy api key syntax
2023-06-28 11:34:06 +02:00
12991e4796 Fix enrollment for minimal bh 2023-06-28 11:16:15 +02:00
21fce5a058 beam enroll use docker cache 2023-06-27 10:29:29 +02:00
097de41652 focus: Rename SECRET to API_KEY 2023-06-19 15:45:06 +02:00
188d8d109e Change focus tag to main 2023-06-19 15:45:06 +02:00
10169dca85 Merge pull request #89 from samply/fix-focus
Rename focus tag and API_KEY vars
2023-06-19 13:46:11 +02:00
23a500aae9 focus: Rename SECRET to API_KEY 2023-06-19 13:33:26 +02:00
2f20082d4c Change focus tag to main 2023-06-19 13:32:51 +02:00
b1ee2fa5f4 New beam-proxy api key syntax 2023-06-19 13:25:22 +02:00
6cbf7915f0 Merge pull request #87 from samply/main
Merge Focus as Spot replacement from main
2023-06-19 13:21:26 +02:00
53c6ab5e7a Merge pull request #86 from samply/feature/dktk_focus
DKTK local spot -> focus 🥳
2023-06-19 13:11:59 +02:00
5642141f3f CCP focus develop -> main 2023-06-19 10:32:39 +02:00
149a550940 deactivate exliquid in freiburg 2023-06-19 07:24:39 +02:00
9dadd3efa0 variables 2023-06-16 16:28:22 +02:00
c41ebd226d DKTK local spot -> focus 🥳 2023-06-16 16:24:48 +02:00
019304862e make ID Management skippable for MTBA 2023-06-15 11:02:59 +02:00
71fea9f098 Switch to new dktk broker url with new root cert 2023-06-13 06:16:00 +00:00
c70b0be905 Merge pull request #85 from samply/documentation_for_installation
Documentation for installation
2023-06-01 17:27:17 +02:00
1565907243 Minor text and formatting improvements 2023-06-01 10:47:30 +02:00
906b98f26e Simplified the Directory registration section 2023-06-01 10:40:48 +02:00
a00fed7df2 Changed as a result of onboarding experience on the 30th
* Proxy and firewall adjustments were highlighted.
* The need to register with the Directory and obtain a default
  Collection ID was mentioned.
2023-06-01 10:26:14 +02:00
f02587d9fa Change DNPM broker id 2023-05-25 11:20:18 +00:00
ff4fb06ad1 Address review comments 2023-05-19 11:53:03 +00:00
d91f1a8469 Merge pull request #81 from samply/feature/monitoring-timeout
add max-time for curl monitoring
2023-05-17 13:33:16 +02:00
a18b63e190 Use cached beam-connect image for dnpm 2023-05-17 10:04:35 +00:00
f4134bcfca Remove DNPM-BwHC experiment 2023-05-17 09:26:55 +00:00
4e7f023b8a Clean up bwhc startup 2023-05-16 10:56:28 +00:00
2de6504832 Change blaze tag to latest 2023-05-16 11:57:27 +02:00
187945b27e Merge remote-tracking branch 'origin/main' into feature/dnpm-connect 2023-05-16 09:25:08 +00:00
7b753c03c0 Add minimal project to readme 2023-05-16 10:46:17 +02:00
ee727fb220 add max-time for curl monitoring 2023-05-16 08:56:31 +02:00
c9806ad874 Adapt DNPM configuration 2023-05-15 13:43:01 +02:00
d87745443e support minimal project in system preparation 2023-05-10 20:15:14 +02:00
62a7e61685 Merge pull request #80 from samply/fix/enroll
Rely on beam-enroll message for existing key
2023-05-10 14:32:51 +02:00
64169acca2 Rely on beam-enroll message for existing key 2023-05-10 12:13:20 +00:00
11c3103968 Merge pull request #79 from samply/fix/openssl
Replace deprecated openssl command
2023-05-10 14:02:06 +02:00
498092d36a Replace deprecated openssl command 2023-05-10 10:59:13 +00:00
3e1659a38d Modularize DNPM components 2023-05-10 10:54:05 +00:00
465ba95e18 Merge pull request #78 from samply/feature/nngm-rest
nngm migration from connector to nngm-rest
2023-05-10 10:36:01 +02:00
dd0d2c64fd nngm migration from connector to nngm-rest 2023-05-09 07:55:30 +02:00
9260d0132a Merge pull request #74 from samply/documentation_minor_fix
Removed non-functioning links from Table of Contents
2023-04-25 16:31:08 +02:00
48dd477a94 Removed non-functioning links from Table of Contents
Removed Git and Docker from Requirements -> Software, since they are
no longer used.
2023-04-24 10:33:04 +02:00
3a42570ac4 Add DNPM discovery URL as public configuration 2023-04-04 13:11:33 +02:00
503f39820b Merge branch 'main' into dnpmconnector 2023-04-04 13:02:49 +02:00
c10ba98084 Merge pull request #53 from samply/directory_sync
Added a Directory sync component
2023-04-03 08:45:50 +02:00
5b926ba20c Remove opt directory_sync from compose 2023-03-31 12:15:43 +02:00
f4e65cc3d0 Implemented Torbens request for PR 53 2023-03-31 11:55:19 +02:00
e124e34d1e Merge branch 'directory_sync' of https://github.com/samply/bridgehead into directory_sync 2023-03-31 10:04:28 +02:00
fa41f8d77f Changed image to docker.verbis.dkfz.de/cache/
Requested by Torben Brenner in PR 53
2023-03-31 10:01:51 +02:00
df74d6d768 Make directory sync opt service 2023-03-31 08:04:28 +02:00
559d527258 Merge pull request #71 from samply/fix/bbmri-enroll-vars
Update variable name to make enroll command work for BBMRI
2023-03-27 16:03:58 +02:00
bdff02ce49 Update variable name to make enroll command work for BBMRI 2023-03-27 14:50:28 +02:00
88f1b031a7 Merge pull request #70 from samply/feature/focus
Feature/focus
2023-03-27 09:38:22 +02:00
bf291c1786 Merge branch 'main' of github.com:samply/bridgehead into feature/focus 2023-03-27 09:35:02 +02:00
bf408f9297 slash and quotation marks around blaze path 2023-03-27 09:28:55 +02:00
e8eb7b5563 Merge pull request #69 from samply/feature/focus
focus app name long
2023-03-23 15:43:19 +01:00
6530aca843 and proxy name 2023-03-23 15:30:28 +01:00
caeb303497 beam app id changed to avoid confusion 2023-03-23 15:26:10 +01:00
ebd213e119 focus app name long 2023-03-23 15:07:30 +01:00
c2d75044a5 Merge pull request #52 from samply/update_readme_for_installation
Enhanced the installation documentation.
2023-03-23 09:46:57 +01:00
9c2c6091e6 Merge pull request #68 from samply/feature/focus
replace local spot with focus
2023-03-22 14:26:06 +01:00
1c3785ace7 added missing variables and renamed correctly 2023-03-22 11:37:48 +01:00
8f3d2f0947 replace local spot with focus 2023-03-22 11:26:55 +01:00
8deafe2023 Merge branch 'main' into update_readme_for_installation 2023-03-22 09:56:11 +01:00
c39518f763 Update README.md 2023-03-17 11:25:56 +01:00
bf3989dcbd Update README.md
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2023-03-17 11:17:47 +01:00
c53fe491d9 Update README.md
Co-authored-by: Martin Lablans <6804500+lablans@users.noreply.github.com>
2023-03-17 11:17:25 +01:00
d7a983000b Update README.md 2023-03-17 11:03:19 +01:00
bedad57f41 Changes for Directory sync PR 53
* Change docker-compose.yml to reduce the number of environment
  variables being passed to Directory sync.
* Improve documentation.
2023-03-14 09:59:37 +01:00
bf79ade029 Merge pull request #67 from samply/hotfix/switchToOldProjectName
hotfix: Switch to old Project Name
2023-03-09 15:54:31 +01:00
25081c1bf4 hotfix: Switch to old Project Name 2023-03-09 15:28:07 +01:00
7743e2cf01 Merge pull request #66 from samply/use-docker-cache
Pull docker images from DKFZ mirror
2023-03-09 11:24:12 +01:00
33b50372c6 Pull docker images from DKFZ mirror 2023-03-09 11:16:34 +01:00
dd4066d1a0 Merge pull request #65 from samply/dont-delete-all-docker-images
Dont delete all docker images
2023-03-08 12:47:05 +01:00
380511d3bb Don't delete docker images if BK is not running 2023-03-08 10:37:37 +01:00
0ff153ef22 Use project name. Add is-running function. 2023-03-08 09:01:05 +00:00
ea3e148fd3 Merge pull request #62 from samply/feature/develop-install
Add developer install
2023-02-27 14:58:40 +01:00
b0086ee4af Merge pull request #60 from samply/feature/check-for-wsl
During install, check if running in WSL and if systemd is present
2023-02-27 13:16:07 +01:00
cedc97477f Add developer install option to the documentation 2023-02-27 13:02:59 +01:00
5d38f48f68 Add developer install 2023-02-24 16:32:17 +01:00
bfc00b9967 Prevent variable splitting in wsl check and improve error message 2023-02-24 11:41:05 +01:00
7a350a8c9b Fix string comparison in WSL check 2023-02-24 11:29:06 +01:00
a036d0a88c Merge pull request #61 from samply/develop
Merge develop into main
2023-02-24 08:35:59 +01:00
857e351b88 Support gitmirror for github.com repo 2023-02-23 18:05:53 +01:00
8b2e99200e Fix typo 2023-02-23 18:05:34 +01:00
2dc36433bf Fixed naming of site in exliquid script 2023-02-23 18:05:19 +01:00
3023b82bb1 Switch beam images to develop tag 2023-02-23 18:05:19 +01:00
fdda14c1be Fixed naming of site in exliquid script 2023-02-23 14:26:59 +01:00
4578c77d4b Fix systemd check 2023-02-22 15:42:52 +01:00
191e986364 Add check for installation in WSL and for systemd 2023-02-22 15:32:21 +01:00
a23f1ae075 Automate installation 2023-02-22 12:06:28 +00:00
5fdeaa7ca4 Merge pull request #59 from samply/switch-images-to-develop
Use beam-proxy:develop
2023-02-21 11:45:26 +01:00
90773ea92a Switch beam images to develop tag 2023-02-21 09:26:53 +01:00
8dd1b01842 Updates for PR52
* Incorporated some of Martin's suggestions (the ones where I had no questions)
* Updated the table of contents to reflect the current structure of the document.
2023-02-20 16:17:45 +01:00
068125c062 Updated environment variable names so that they start with "DS_" 2023-02-08 11:03:35 +01:00
92dd4b84c1 Incorporated new environment variable naming for Directory sync 2023-01-31 09:43:26 +01:00
3e55030b1b Added a Directory sync component
* Added new container to bbmri/docker-compose.yml.
* Added set up documentation to README.
2023-01-27 13:49:52 +01:00
6123a9aeba Addressed Torben's comments to PR 52
- Included email for CCP repositories.
- Used journalctl instead of docker ps for Bridgehead status.
2023-01-27 11:08:00 +01:00
7d9cec562e Corrected site naming convention to comply with DKTK 2023-01-27 09:46:30 +01:00
90fe31b6c9 Described Docker logging in README 2023-01-26 11:15:55 +01:00
92d88ad815 Added new section for testing the Bridgehead 2023-01-26 09:37:44 +01:00
d2c5ec0418 Added instructions for Bridgehead de-install 2023-01-25 14:09:14 +01:00
92ccb78674 Fix for Tobias' comment in PR52 2023-01-23 14:49:03 +01:00
0c2873132a Included site naming conventions 2023-01-19 11:22:48 +01:00
ee6f60ef65 Enhanced the installation documentation.
Explained the following:

* Bridgehead projects
* Configuration repository
2023-01-19 09:59:47 +01:00
4a53bb3fb2 Expose dnpm backend hostname 2022-11-09 12:36:58 +00:00
bec42764bb Build the dnpm frontend in host network mode 2022-11-09 11:39:21 +00:00
b6f0cd7a13 Set HTTP(S) Proxy for bwhc frontend build 2022-11-09 10:45:39 +00:00
e11b24bf70 Fix dnpm build context 2022-11-09 09:46:30 +00:00
455d45603c Fix dnpm volume mounting path 2022-11-08 12:45:29 +00:00
6c2d970d01 Support DNPM Discovery URL 2022-11-08 11:00:42 +01:00
3a5444dec0 Allow to run DNPM with Connect or with BWHC included 2022-11-08 10:55:18 +01:00
bece71441c Support DNPM 2022-11-08 10:39:11 +01:00
091402eea0 Update prerequisites.sh 2022-09-30 17:42:53 +02:00
f52012008d Update prerequisites.sh 2022-09-30 17:36:07 +02:00
1d05137bb9 Use proxy in time check 2022-09-30 17:13:28 +02:00
7d13eace32 Check clock skew even smarter 2022-09-30 17:02:05 +02:00
054d71538d Check time sync in prereqs 2022-09-30 16:55:36 +02:00
109 changed files with 3605 additions and 466 deletions

.github/CODEOWNERS (new file)

@ -0,0 +1 @@
* @samply/bridgehead-developers

.github/scripts/rename_inactive_branches.py (new file)

@ -0,0 +1,39 @@
import os
import requests
from datetime import datetime, timedelta

# Configuration
GITHUB_TOKEN = os.getenv('GITHUB_TOKEN')
REPO = 'samply/bridgehead'
HEADERS = {'Authorization': f'token {GITHUB_TOKEN}', 'Accept': 'application/vnd.github.v3+json'}
API_URL = f'https://api.github.com/repos/{REPO}/branches'
INACTIVE_DAYS = 365
CUTOFF_DATE = datetime.now() - timedelta(days=INACTIVE_DAYS)

# Fetch all branches
def get_branches():
    response = requests.get(API_URL, headers=HEADERS)
    response.raise_for_status()
    return response.json() if response.status_code == 200 else []

# Rename inactive branches
def rename_branch(old_name, new_name):
    rename_url = f'https://api.github.com/repos/{REPO}/branches/{old_name}/rename'
    response = requests.post(rename_url, json={'new_name': new_name}, headers=HEADERS)
    response.raise_for_status()
    print(f"Renamed branch {old_name} to {new_name}" if response.status_code == 201 else f"Failed to rename {old_name}: {response.status_code}")

# Check if the branch is inactive
def is_inactive(commit_url):
    last_commit_date = requests.get(commit_url, headers=HEADERS).json()['commit']['committer']['date']
    return datetime.strptime(last_commit_date, '%Y-%m-%dT%H:%M:%SZ') < CUTOFF_DATE

# Rename inactive branches
def main():
    for branch in get_branches():
        if is_inactive(branch['commit']['url']):
            #rename_branch(branch['name'], f"archived/{branch['name']}")
            print(f"[LOG] Branch '{branch['name']}' is inactive and would be renamed to 'archived/{branch['name']}'")

if __name__ == "__main__":
    main()


@ -0,0 +1,27 @@
name: Cleanup - Rename Inactive Branches

on:
  schedule:
    - cron: '0 0 * * 0' # Runs every Sunday at midnight

jobs:
  archive-stale-branches:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'

      - name: Install Libraries
        run: pip install requests

      - name: Run Script to Rename Inactive Branches
        run: |
          python .github/scripts/rename_inactive_branches.py
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore

@ -1,6 +1,9 @@
##Ignore site configuration ##Ignore site configuration
.gitmodules .gitmodules
site-config/* site-config/*
.idea
## Ignore site configuration ## Ignore site configuration
*/docker-compose.override.yml */docker-compose.override.yml
## MAC OS
.DS_Store

README.md

@ -6,27 +6,38 @@ This repository is the starting point for any information and tools you will nee
1. [Requirements](#requirements) 1. [Requirements](#requirements)
- [Hardware](#hardware) - [Hardware](#hardware)
- [System](#system) - [Software](#software)
- [Git](#git) - [Network](#network)
- [Docker](#docker)
2. [Deployment](#deployment) 2. [Deployment](#deployment)
- [Installation](#installation) - [Site name](#site-name)
- [Projects](#projects)
- [GitLab repository](#gitlab-repository)
- [Base Installation](#base-installation)
- [Register with Samply.Beam](#register-with-samplybeam) - [Register with Samply.Beam](#register-with-samplybeam)
- [Starting and stopping your Bridgehead](#starting-and-stopping-your-bridgehead) - [Starting and stopping your Bridgehead](#starting-and-stopping-your-bridgehead)
- [Auto-starting your Bridgehead when the server starts](#auto-starting-your-bridgehead-when-the-server-starts) - [Testing your new Bridgehead](#testing-your-new-bridgehead)
3. [Additional Services](#additional-Services) - [De-installing a Bridgehead](#de-installing-a-bridgehead)
- [Monitoring](#monitoring) 3. [Site-specific configuration](#site-specific-configuration)
- [Register with a Directory](#register-with-a-Directory)
4. [Site-specific configuration](#site-specific-configuration)
- [HTTPS Access](#https-access) - [HTTPS Access](#https-access)
- [Locally Managed Secrets](#locally-managed-secrets) - [TLS terminating proxies](#tls-terminating-proxies)
- [Git Proxy Configuration](#git-proxy-configuration) - [File structure](#file-structure)
- [Docker Daemon Proxy Configuration](#docker-daemon-proxy-configuration) - [BBMRI-ERIC Directory entry needed](#bbmri-eric-directory-entry-needed)
- [Loading data](#loading-data)
4. [Things you should know](#things-you-should-know)
- [Auto-Updates](#auto-updates)
- [Auto-Backups](#auto-backups)
- [Non-Linux OS](#non-linux-os) - [Non-Linux OS](#non-linux-os)
5. [License](#license) 5. [Troubleshooting](#troubleshooting)
- [Docker Daemon Proxy Configuration](#docker-daemon-proxy-configuration)
- [Monitoring](#monitoring)
6. [License](#license)
## Requirements ## Requirements
The data protection officer at your site will probably want to know exactly what our software does with patient data, and you may need to get their approval before you are allowed to install a Bridgehead. To help you with this, we have provided some data protection concepts:
- [Germany](https://www.bbmri.de/biobanking/it/infrastruktur/datenschutzkonzept/)
### Hardware ### Hardware
Hardware requirements strongly depend on the specific use-cases of your network as well as on the data it is going to serve. Most use-cases are well-served with the following configuration: Hardware requirements strongly depend on the specific use-cases of your network as well as on the data it is going to serve. Most use-cases are well-served with the following configuration:
@ -35,6 +46,8 @@ Hardware requirements strongly depend on the specific use-cases of your network
- 32 GB RAM - 32 GB RAM
- 160GB Hard Drive, SSD recommended - 160GB Hard Drive, SSD recommended
We recommend using a dedicated VM for the Bridgehead, with no other applications running on it. While the Bridgehead can, in principle, run on a shared VM, you might run into surprising problems such as resource conflicts (e.g., two apps using tcp port 443).
### Software ### Software
You are strongly recommended to install the Bridgehead under a Linux operating system (but see the section [Non-Linux OS](#non-linux-os)). You will need root (administrator) privileges on this machine in order to perform the deployment. We recommend the newest Ubuntu LTS server release. You are strongly recommended to install the Bridgehead under a Linux operating system (but see the section [Non-Linux OS](#non-linux-os)). You will need root (administrator) privileges on this machine in order to perform the deployment. We recommend the newest Ubuntu LTS server release.
@ -45,26 +58,103 @@ Ensure the following software (or newer) is installed:
- docker >= 20.10.1 - docker >= 20.10.1
- docker-compose >= 2.xx (`docker-compose` and `docker compose` are both supported). - docker-compose >= 2.xx (`docker-compose` and `docker compose` are both supported).
- systemd - systemd
- curl
We recommend to install Docker(-compose) from its official sources as described on the [Docker website](https://docs.docker.com). We recommend to install Docker(-compose) from its official sources as described on the [Docker website](https://docs.docker.com).
Note for Ubuntu: Please note that snap versions of Docker are not supported. > 📝 Note for Ubuntu: Snap versions of Docker are not supported.
### Network ### Network
Since it needs to carry sensitive patient data, Bridgeheads are intended to be deployed within your institution's secure network and behave well even in networks in strict security settings, e.g. firewall rules. The only connectivity required is an outgoing HTTPS proxy. TLS termination is supported, too (see [below](#tls-terminating-proxies)) A Bridgehead communicates with all central components via outgoing HTTPS connections.
Note for Ubuntu: Please note that the uncomplicated firewall (ufw) is known to conflict with Docker [here](https://github.com/chaifeng/ufw-docker). Your site might require an outgoing proxy (i.e. HTTPS forward proxy) to connect to external servers; you should discuss this with your local systems administration. In that case, you will need to note down the URL of the proxy. If the proxy requires authentication, you will also need to make a note of its username and password. This information will be used later on during the installation process. TLS terminating proxies are also supported, see [here](#tls-terminating-proxies). Apart from the Bridgehead itself, you may also need to configure the proxy server in [git](https://gist.github.com/evantoli/f8c23a37eb3558ab8765) and [docker](https://docs.docker.com/network/proxy/).
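For illustration, a minimal way to point git at such a forward proxy might look like this (the proxy URL is a placeholder; use the one provided by your systems administration):
```shell
# Placeholder proxy URL -- replace with your site's forward proxy
git config --global http.proxy http://proxy.example.com:3128
# Many other tools honor the standard environment variables instead:
export https_proxy=http://proxy.example.com:3128
```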
The following URLs need to be accessible (prefix with `https://`):
* To fetch code and configuration from git repositories
* github.com
* git.verbis.dkfz.de
* To fetch docker images
* docker.verbis.dkfz.de
* Official Docker, Inc. URLs (subject to change, see [official list](https://docs.docker.com/desktop/setup/allow-list/))
* hub.docker.com
* registry-1.docker.io
* production.cloudflare.docker.com
* To report the Bridgehead's operational status
* healthchecks.verbis.dkfz.de
* only for DKTK/CCP
* broker.ccp-it.dktk.dkfz.de
* only for BBMRI-ERIC
* broker.bbmri.samply.de
* gitlab.bbmri-eric.eu
* only for German Biobank Node
* broker.bbmri.de
> 📝 This URL list is subject to change. Instead of the individual names, we highly recommend whitelisting wildcard domains: *.dkfz.de, github.com, *.docker.com, *.docker.io, *.samply.de, *.bbmri.de.
> 📝 Ubuntu's pre-installed uncomplicated firewall (ufw) is known to conflict with Docker, more info [here](https://github.com/chaifeng/ufw-docker).
## Deployment ## Deployment
### Site name
You will need to choose a short name for your site. This is not a URL, just a simple identifying string. For the examples below, we will use "your-site-name", but you should obviously choose something that is meaningful to you and which is unique.
Site names should adhere to the following conventions:
- They should be lower-case.
- They should generally be named after the city where your site is based, e.g. ```karlsruhe```.
- If you have a multi-part name, please use a hyphen ("-") as a separator, e.g. ```le-havre```.
- If your site is for testing purposes, rather than production, please append "-test", e.g. ```zaragoza-test```.
- If you are a developer and you are making changes to the Bridgehead, please use your name and prepend "dev-", e.g. ```dev-joe-doe```.
### GitLab repository
In order to install, you will need your own GitLab repository for your site's configuration settings. This allows automated updates of the Bridgehead software.
To request a new repository, please contact your research network administration or send an email to one of the project specific addresses:
- For the bbmri project: bridgehead@helpdesk.bbmri-eric.eu.
- For the ccp project: support-ccp@dkfz-heidelberg.de
Mention:
- which project you belong to, i.e. "bbmri" or "ccp"
- site name (according to the conventions listed above)
- operator name and email
We will set the repository up for you. We will then send you:
- A Repository Short Name (RSN). Beware: this is distinct from your site name.
- Repository URL containing the access token, e.g. https://BH_Dummy:dummy_token@git.verbis.dkfz.de/<project>-bridgehead-configs/dummy.git
During the installation, your Bridgehead will download your site's configuration from GitLab and you can review the details provided to us by email.
### Base Installation ### Base Installation
First, clone the repository to the directory `/srv/docker/bridgehead`: First, download your site-specific configuration repository:
```shell
sudo mkdir -p /etc/bridgehead/
sudo git clone <REPO_URL_FROM_EMAIL> /etc/bridgehead/
```
Review the site configuration:
```shell
sudo cat /etc/bridgehead/bbmri.conf
```
Pay special attention to the following variables (a placeholder-only example excerpt follows the list):
- SITE_NAME
- SITE_ID
- OPERATOR_FIRST_NAME
- OPERATOR_LAST_NAME
- OPERATOR_EMAIL
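For orientation, such a file might contain entries like the following (all values are invented placeholders, not a real site configuration):
```shell
## Illustrative excerpt of /etc/bridgehead/bbmri.conf -- placeholder values only
SITE_NAME=Karlsruhe
SITE_ID=karlsruhe
OPERATOR_FIRST_NAME=Jane
OPERATOR_LAST_NAME=Doe
OPERATOR_EMAIL=jane.doe@example.org
```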
Clone the bridgehead repository:
```shell ```shell
sudo mkdir -p /srv/docker/ sudo mkdir -p /srv/docker/
sudo git clone https://github.com/samply/bridgehead.git /srv/docker/bridgehead sudo git clone -b main https://github.com/samply/bridgehead.git /srv/docker/bridgehead
``` ```
Then, run the installation script: Then, run the installation script:
@ -74,8 +164,6 @@ cd /srv/docker/bridgehead
sudo ./bridgehead install <PROJECT> sudo ./bridgehead install <PROJECT>
``` ```
... and follow the instructions on the screen. You should then be prompted to do the next step:
### Register with Samply.Beam ### Register with Samply.Beam
Many Bridgehead services rely on the secure, performant and flexible messaging middleware called [Samply.Beam](https://github.com/samply/beam). You will need to register ("enroll") with Samply.Beam by creating a cryptographic key pair for your bridgehead: Many Bridgehead services rely on the secure, performant and flexible messaging middleware called [Samply.Beam](https://github.com/samply/beam). You will need to register ("enroll") with Samply.Beam by creating a cryptographic key pair for your bridgehead:
@ -85,7 +173,7 @@ cd /srv/docker/bridgehead
sudo ./bridgehead enroll <PROJECT> sudo ./bridgehead enroll <PROJECT>
``` ```
... and follow the instructions on the screen. You should then be prompted to do the next step: ... and follow the instructions on the screen. Please send your default Collection ID and the display name of your site together with the certificate request when you enroll. You should then be prompted to do the next step:
### Starting and stopping your Bridgehead ### Starting and stopping your Bridgehead
@ -109,8 +197,65 @@ To enable/disable autostart, run
sudo systemctl [enable|disable] bridgehead@<PROJECT>.service sudo systemctl [enable|disable] bridgehead@<PROJECT>.service
``` ```
### Testing your new Bridgehead
After starting the Bridgehead, you can watch the initialization process with the following command:
```shell
/srv/docker/bridgehead/bridgehead logs <project> -f
```
If this exits with something similar to the following:
```
bridgehead@bbmri.service: Main process exited, code=exited, status=1/FAILURE
```
Then you know that there was a problem with starting the Bridgehead. Scroll up the printout to find the cause of the error.
Once the Bridgehead is running, you can also view the individual Docker processes with:
```shell
docker ps
```
There should be 6 to 10 Docker processes. If there are fewer, something has gone wrong. To see what is going on, run:
```shell
/srv/docker/bridgehead/bridgehead logs <Project> -f
```
This translates to a journalctl command, so all the regular journalctl flags can be used.
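For example (using `bbmri` as a placeholder project name; the extra flags are standard journalctl options, passed straight through):
```shell
# Show the last 100 log lines from today and keep following new output
/srv/docker/bridgehead/bridgehead logs bbmri -n 100 --since today -f
```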
Once the Bridgehead has passed these checks, take a look at the landing page:
```
https://localhost
```
You can either do this in a browser or with curl. If you visit the URL in the browser, you will need to click through several warnings, because you will initially be using a self-signed certificate. With curl, you can bypass these checks:
```shell
curl -k https://localhost
```
Should the landing page not show anything, you can inspect the logs of the containers to determine what is going wrong. To do this, you can use `./bridgehead docker-logs <Project> -f` to follow the logs of the containers. This translates to a docker compose logs command, meaning all the usual docker logs flags work.
If you have chosen to take part in our monitoring program (by setting the ```MONITOR_APIKEY``` variable in the configuration), you will be informed by email when problems are detected in your Bridgehead.
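For reference, this opt-in is a single entry in your site configuration (the key below is a made-up placeholder, not a real API key):
```shell
# In your site configuration, e.g. /etc/bridgehead/<project>.conf -- placeholder value
MONITOR_APIKEY=0123456789abcdef
```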
### De-installing a Bridgehead
You may decide that you want to remove a Bridgehead installation from your machine, e.g. if you want to migrate it to a new location or if you want to start a fresh installation because the initial attempts did not work.
To do this, run:
```shell
sh bridgehead uninstall
```
## Site-specific configuration ## Site-specific configuration
[How to Change Config Access Token](docs/update-access-token.md)
### HTTPS Access ### HTTPS Access
Even within your internal network, the Bridgehead enforces HTTPS for all services. During the installation, a self-signed, long-lived certificate was created for you. To increase security, you can simply replace the files under `/etc/bridgehead/traefik-tls` with ones from established certification authorities such as [Let's Encrypt](https://letsencrypt.org) or [DFN-AAI](https://www.aai.dfn.de). Even within your internal network, the Bridgehead enforces HTTPS for all services. During the installation, a self-signed, long-lived certificate was created for you. To increase security, you can simply replace the files under `/etc/bridgehead/traefik-tls` with ones from established certification authorities such as [Let's Encrypt](https://letsencrypt.org) or [DFN-AAI](https://www.aai.dfn.de).
@ -119,6 +264,21 @@ Even within your internal network, the Bridgehead enforces HTTPS for all service
All of the Bridgehead's outgoing connections are secured by transport encryption (TLS) and a Bridgehead will refuse to connect if certificate verification fails. If your local forward proxy server performs TLS termination, please place its CA certificate in `/etc/bridgehead/trusted-ca-certs` as a `.pem` file, e.g. `/etc/bridgehead/trusted-ca-certs/mylocalca.pem`. Then, all Bridgehead components will pick up this certificate and trust it for outgoing connections. All of the Bridgehead's outgoing connections are secured by transport encryption (TLS) and a Bridgehead will refuse to connect if certificate verification fails. If your local forward proxy server performs TLS termination, please place its CA certificate in `/etc/bridgehead/trusted-ca-certs` as a `.pem` file, e.g. `/etc/bridgehead/trusted-ca-certs/mylocalca.pem`. Then, all Bridgehead components will pick up this certificate and trust it for outgoing connections.
To find the certificate file, first run the following:
```
curl -v https://broker.bbmri.samply.de/v1/health
```
In the output, look out for the line:
```
successfully set certificate verify locations:
```
Here a file will be mentioned, perhaps in the directory /etc/ssl/certs. The exact location will depend on your operating system. This is the file that you need to copy.
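As a sketch, copying that file into place could then look like this (the source path is only an example; use the file that curl reported on your system):
```shell
sudo cp /etc/ssl/certs/ca-certificates.crt /etc/bridgehead/trusted-ca-certs/mylocalca.pem
```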
### File structure ### File structure
- `/srv/docker/bridgehead` contains this git repository with the shell scripts and *project-specific configuration*. In here, all files are identical for all sites. You should not make any changes here. - `/srv/docker/bridgehead` contains this git repository with the shell scripts and *project-specific configuration*. In here, all files are identical for all sites. You should not make any changes here.
@ -131,15 +291,72 @@ All of the Bridgehead's outgoing connections are secured by transport encryption
Your Bridgehead's actual data is not stored in the above directories, but in named docker volumes, see `docker volume ls` and `docker volume inspect <volume_name>`. Your Bridgehead's actual data is not stored in the above directories, but in named docker volumes, see `docker volume ls` and `docker volume inspect <volume_name>`.
### BBMRI-ERIC Directory entry needed
If you run a biobank, you should be listed together with your collections in the [Directory](https://directory.bbmri-eric.eu), a BBMRI-ERIC project that catalogs biobanks.
To do this, contact the BBMRI-ERIC national node for the country where your biobank is based, see [the list of nodes](http://www.bbmri-eric.eu/national-nodes/).
Once you have added your biobank to the Directory, you will have received a persistent identifier (PID) for your biobank and unique identifiers (IDs) for your collections. The collection IDs are needed to assign biospecimens to collections and are used later in the data flows between BBMRI-ERIC tools. If you cannot assign all of your biospecimens to collections via collection IDs, **you should choose one of your sample collections as a default collection for your biobank**. This collection will be automatically used to label any samples that have not been assigned a collection ID in your ETL process. Make a note of this default collection ID; you will need it later on in the installation process.
### Directory sync tool
The Bridgehead's **Directory Sync** is an optional feature that keeps the Directory up to date with your local data, e.g. the number of samples. Conversely, it also updates the local FHIR store with the latest contact details etc. from the Directory. You must explicitly set your country-specific Directory URL, username and password to enable this feature.
You should talk with your local data protection group regarding the information that is published by Directory sync.
Full details can be found in [directory_sync_service](https://github.com/samply/directory_sync_service).
To enable it, you will need to add these variables to the ```bbmri.conf``` file of your GitLab repository. Here is an example config:
```
DS_DIRECTORY_USER_NAME=your_directory_username
DS_DIRECTORY_USER_PASS=your_directory_password
```
Please contact your National Node to obtain this information.
Optionally, you **may** change when you want Directory sync to run by specifying a [cron](https://crontab.guru) expression, e.g. `DS_TIMER_CRON="0 22 * * *"` for 10 pm every evening.
Once you have edited the GitLab config, the Bridgehead will automatically pick up the new values and start syncing the data.
There will be a delay before the effects of Directory sync become visible. First, you will need to wait until the time you have specified in ```DS_TIMER_CRON```. Second, the information will then be synchronized from your national node with the central European Directory. This can take up to 24 hours.
### Loading data
The data accessed by the federated search is held in the Bridgehead in a FHIR store (we use Blaze).
You can load data into this store by using its FHIR API:
```
https://<Name of your server>/bbmri-localdatamanagement/fhir
```
The name of your server will generally be the full name of the VM that the Bridgehead runs on. You can alternatively supply an IP address.
The FHIR API uses basic auth. You can find the credentials in `/etc/bridgehead/<project>.local.conf`.
Note that if you don't have a DNS certificate for the Bridgehead, you will need to allow an insecure connection. E.g. with curl, use the `-k` flag.
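As an illustration, uploading a FHIR transaction bundle could look like this (hostname, credentials and file name are placeholders; the real credentials are in `/etc/bridgehead/<project>.local.conf`):
```shell
curl -k -u "username:password" \
     -H "Content-Type: application/fhir+json" \
     --data @transaction-bundle.json \
     https://bridgehead.example.org/bbmri-localdatamanagement/fhir
```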
The storage space on your hard drive will depend on the number of FHIR resources that you intend to generate. This will be the sum of the number of patients/subjects, the number of samples, the number of conditions/diseases and the number of observations. As a general rule of thumb, you can assume that each resource will consume about 2 kilobytes of disk space.
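As a rough worked example: a site holding 5 million FHIR resources in total would therefore need on the order of 5,000,000 × 2 kB ≈ 10 GB of disk space for the resources alone.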
For more information on Blaze performance, please refer to [import performance](https://github.com/samply/blaze/blob/master/docs/performance/import.md).
#### ETL for BBMRI and GBA
Normally, you will need to build your own ETL to feed the Bridgehead. However, there is one case where a shortcut might be available:
- If you are using CentraXX as a BIMS and you have a FHIR-Export License, then you can employ standard mapping scripts that access the CentraXX-internal data structures and map the data onto the BBMRI FHIR profile. It may be necessary to adjust a few parameters, but this is nonetheless significantly easier than writing your own ETL.
You can find the profiles for generating FHIR in [Simplifier](https://simplifier.net/bbmri.de/~resources?category=Profile).
## Things you should know ## Things you should know
### Auto-Updates ### Auto-Updates
Your Bridgehead will automatically and regularly check for updates. Whenever something has been updated (e.g., one of the git repositories or one of the docker images), your Bridgehead is automatically restarted. This should happen automatically and does not need any configuration. Your Bridgehead will automatically and regularly check for updates. Whenever something has been updated (e.g., one of the git repositories or one of the docker images), your Bridgehead is automatically restarted. This should happen automatically and does not need any configuration.
If you would like to understand what happens exactly and when, please check the systemd units deployed during the [installation](#base-installation) via `systemctl cat bridgehead-update@<PROJECT>.service` and `systemctl cat bridgehead-update@<PROJECT.timer`. If you would like to understand what happens exactly and when, please check the systemd units deployed during the [installation](#base-installation) via `systemctl cat bridgehead-update@<PROJECT>.service` and `systemctl cat bridgehead-update@<PROJECT>.timer`.
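For instance, to see when the next automatic update is scheduled (using `bbmri` as an example project):
```shell
systemctl list-timers 'bridgehead-update@*'
systemctl cat bridgehead-update@bbmri.timer
```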
### Auto-Backups ### Auto-Backups
Some of the components in the bridgehead will store persistent data. For those components, we integrated an automated backup solution in the bridgehead updates. It will automatically save the backup in multiple files: Some of the components in the bridgehead will store persistent data. For those components, we integrated an automated backup solution in the bridgehead updates. It will automatically save the backup in multiple files:
1) Last-XX, where XX represents a weekday to allow re-import of at least one version of the database for each of the past seven days. 1) Last-XX, where XX represents a weekday to allow re-import of at least one version of the database for each of the past seven days.
@ -148,20 +365,9 @@ Some of the components in the bridgehead will store persistent data. For those c
To enable the Auto-Backup feature, please set the variable `BACKUP_DIRECTORY` in your site's configuration. To enable the Auto-Backup feature, please set the variable `BACKUP_DIRECTORY` in your site's configuration.
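For example (the path is a placeholder; choose a directory with enough free space):
```shell
# In your site configuration, e.g. /etc/bridgehead/<project>.conf
BACKUP_DIRECTORY=/srv/bridgehead-backups
```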
### Monitoring ### Development Installation
To keep all Bridgeheads up and working and detect any errors before a user does, a central monitoring By using `./bridgehead dev-install <projectname>` instead of `install`, you can install a developer bridgehead. The difference is that you can provide an arbitrary configuration repository during the installation, meaning that it does not have to adhere to the usual naming scheme. This allows for better decoupling between development and production configurations.
- Your Bridgehead itself will report relevant system events, such as successful/failed updates, restarts, performance metrics or version numbers.
- Your Bridgehead is also monitored from the outside by your network's central components. For example, the federated search will regularly perform a black-box test by sending an empty query to your Bridgehead and checking if the results make sense.
In all monitoring cases, obviously no sensitive information is transmitted, in particular not any patient-related data. Aggregated data, e.g. total amount of datasets, may be transmitted for diagnostic purposes.
## Troubleshooting
### Docker Daemon Proxy Configuration
Docker has a background daemon, responsible for downloading images and starting them. Sometimes, proxy configuration from your system won't carry over and it will fail to download images. In that case, configure the proxy for this daemon as described in the [official documentation](https://docs.docker.com).
### Non-Linux OS ### Non-Linux OS
@ -180,6 +386,42 @@ We have tested the installation procedure with an Ubuntu 22.04 guest system runn
Installation under WSL ought to work, but we have not tested this. Installation under WSL ought to work, but we have not tested this.
## Troubleshooting
### Docker Daemon Proxy Configuration
Docker has a background daemon, responsible for downloading images and starting them. Sometimes, proxy configuration from your system won't carry over and it will fail to download images. In that case, you'll need to configure the proxy inside the system unit of docker by creating the file `/etc/systemd/system/docker.service.d/proxy.conf` with the following content:
``` ini
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=https://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,some-local-docker-registry.example.com,.corp"
```
After saving the configuration file, you'll need to reload the system daemon for the changes to take effect:
``` shell
sudo systemctl daemon-reload
```
and restart the docker daemon:
``` shell
sudo systemctl restart docker
```
For more information, please consult the [official documentation](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy).
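You can then verify that the daemon actually picked up the proxy settings with a standard systemd query:
```shell
sudo systemctl show --property=Environment docker
```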
### Monitoring
To keep all Bridgeheads up and working and to detect any errors before a user does, a central monitoring infrastructure is in place:
- Your Bridgehead itself will report relevant system events, such as successful/failed updates, restarts, performance metrics or version numbers.
- Your Bridgehead is also monitored from the outside by your network's central components. For example, the federated search will regularly perform a black-box test by sending an empty query to your Bridgehead and checking if the results make sense.
In all monitoring cases, obviously no sensitive information is transmitted, in particular no patient-related data. Aggregated data, e.g. the total number of datasets, may be transmitted for diagnostic purposes.
## License ## License
Copyright 2019 - 2022 The Samply Community Copyright 2019 - 2022 The Samply Community


@ -1,65 +1,17 @@
version: "3.7" version: "3.7"
# This includes only the shared persistence for BBMRI-ERIC and GBN. Federation components are included as modules, see vars.
services: services:
traefik:
container_name: bridgehead-traefik
image: traefik:latest
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.file.directory=/configuration/
- --api.dashboard=true
- --accesslog=true
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
ports:
- 80:80
- 443:443
volumes:
- /etc/bridgehead/traefik-tls:/certs:ro
- ../lib/traefik-configuration/:/configuration:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
forward_proxy:
container_name: bridgehead-forward-proxy
image: samply/bridgehead-forward-proxy:latest
environment:
HTTPS_PROXY: ${HTTPS_PROXY_URL}
USERNAME: ${HTTPS_PROXY_USERNAME}
PASSWORD: ${HTTPS_PROXY_PASSWORD}
volumes:
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
landing:
container_name: bridgehead-landingpage
image: samply/bridgehead-landingpage:master
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
environment:
HOST: ${HOST}
PROJECT: ${PROJECT}
SITE_NAME: ${SITE_NAME}
blaze: blaze:
image: "samply/blaze:0.19" image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-bbmri-blaze container_name: bridgehead-bbmri-blaze
environment: environment:
BASE_URL: "http://bridgehead-bbmri-blaze:8080" BASE_URL: "http://bridgehead-bbmri-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx4g" JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
LOG_LEVEL: "debug" DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
ENFORCE_REFERENTIAL_INTEGRITY: "false" ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes: volumes:
- "blaze-data:/app/data" - "blaze-data:/app/data"
@ -71,42 +23,10 @@ services:
- "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth" - "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
- "traefik.http.routers.blaze_ccp.tls=true" - "traefik.http.routers.blaze_ccp.tls=true"
spot:
image: samply/spot:latest
container_name: bridgehead-spot
environment:
SECRET: ${SPOT_BEAM_SECRET_LONG}
APPID: spot
PROXY_ID: ${PROXY_ID}
LDM_URL: http://bridgehead-bbmri-blaze:8080/fhir
BEAM_PROXY: http://beam-proxy:8081
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: samply/beam-proxy:main
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_0_ID: spot
APP_0_KEY: ${SPOT_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- ./root.crt.pem:/conf/root.crt.pem:ro
volumes: volumes:
blaze-data: blaze-data:
# used in modules *-locator.yml
secrets: secrets:
proxy.pem: proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem file: /etc/bridgehead/pki/${SITE_ID}.priv.pem


@ -0,0 +1,16 @@
version: "3.7"
services:
directory_sync_service:
image: "docker.verbis.dkfz.de/cache/samply/directory_sync_service"
environment:
DS_DIRECTORY_URL: ${DS_DIRECTORY_URL:-https://directory.bbmri-eric.eu}
DS_DIRECTORY_USER_NAME: ${DS_DIRECTORY_USER_NAME}
DS_DIRECTORY_USER_PASS: ${DS_DIRECTORY_USER_PASS}
DS_TIMER_CRON: ${DS_TIMER_CRON:-0 22 * * *}
DS_DIRECTORY_ALLOW_STAR_MODEL: ${DS_DIRECTORY_ALLOW_STAR_MODEL:-true}
DS_DIRECTORY_MOCK: ${DS_DIRECTORY_MOCK}
DS_DIRECTORY_DEFAULT_COLLECTION_ID: ${DS_DIRECTORY_DEFAULT_COLLECTION_ID}
DS_DIRECTORY_COUNTRY: ${DS_DIRECTORY_COUNTRY}
depends_on:
- "blaze"


@ -0,0 +1,6 @@
#!/bin/bash
if [ -n "${DS_DIRECTORY_USER_NAME}" ]; then
log INFO "Directory sync setup detected -- will start directory sync service."
OVERRIDE+=" -f ./$PROJECT/modules/directory-sync-compose.yml"
fi


@ -0,0 +1,36 @@
version: "3.7"
services:
focus-eric:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-bbmri
container_name: bridgehead-focus-eric
environment:
API_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${ERIC_PROXY_ID}
PROXY_ID: ${ERIC_PROXY_ID}
BLAZE_URL: "http://blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy-eric:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
depends_on:
- "beam-proxy-eric"
- "blaze"
beam-proxy-eric:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
container_name: bridgehead-beam-proxy-eric
environment:
BROKER_URL: ${ERIC_BROKER_URL}
PROXY_ID: ${ERIC_PROXY_ID}
APP_focus_KEY: ${ERIC_FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro


@ -0,0 +1,28 @@
#!/bin/bash
if [ "${ENABLE_ERIC}" == "true" ]; then
log INFO "BBMRI-ERIC setup detected -- will start services for BBMRI-ERIC."
OVERRIDE+=" -f ./$PROJECT/modules/eric-compose.yml"
# The environment needs to be defined in /etc/bridgehead
case "$ENVIRONMENT" in
"production")
export ERIC_BROKER_ID=broker.bbmri.samply.de
export ERIC_ROOT_CERT=eric
;;
"test")
export ERIC_BROKER_ID=broker-test.bbmri-test.samply.de
export ERIC_ROOT_CERT=eric.test
;;
*)
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export ERIC_BROKER_ID=broker.bbmri.samply.de
export ERIC_ROOT_CERT=eric
;;
esac
ERIC_BROKER_URL=https://${ERIC_BROKER_ID}
ERIC_PROXY_ID=${SITE_ID}.${ERIC_BROKER_ID}
ERIC_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
ERIC_SUPPORT_EMAIL=bridgehead@helpdesk.bbmri-eric.eu
fi


@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUJ0g7k2vrdAwNTU38S1/mU8NO26MwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNzEwMTIyMzQxWhcNMzMw
NzA3MTIyNDExWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALMvc/fApbsAl+/NXDszNgffNR5llAb9CfxzdnRn
ryoBqZdPevBYZZfKBARRKjFbXRDdPWbE7erDeo1LiCM6PObXCuT9wmGWJtvfkmqW
3Z/a75e4r360kceMEGVn4kWpi9dz8s7+oXVZURjW2r13h6pq6xQNZDNlXmpR8wHG
58TSrQC4n1vzdSwMWdptgOA8Sw8adR7ZJI1yNZpmynB2QolKKNESI7FcSKC/+b+H
LoPkseAwQG9yJo23qEw1GZS67B47iKIqX2wp9VLQobHw7ncrhKXQLSWq973k/Swp
7lBdfOsTouf72flLiF1HbdOLcFDmWgIbf5scj2HaQe8b/UcCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFHYxBJiJZieW
e6G1vwn6Q36/crgNMB8GA1UdIwQYMBaAFHYxBJiJZieWe6G1vwn6Q36/crgNMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCN6WVNYpWJ
6Z1Ee+otLZYMXhjyR6NUQ5s0aHiug97gB8mTiNlgXiiTgipCbofEmENgh1inYrPC
WfdXxqOaekSXCQW6nSO1KtBzEYtkN5LrN1cjKqt51P2DbkllinK37wwCS2Kfup1+
yjhTRxrehSIfsMVK6bTUeSoc8etkgwErZpORhlpqZKWhmOwcMpgsYJJOLhUetqc1
UNe/254bc0vqHEPT6VI/86c7qAmk1xR0RUfrnKAEqZtUeuoj2fe1L/6yOB16fxt5
3V3oim7EO6eZCTjDo9fU5DaFiqSMe7WVdr03Na0cWet60XKRH/xaiC6gMWdHWcbh
vZdXnV1qjlM2
-----END CERTIFICATE-----


@ -0,0 +1,36 @@
version: "3.7"
services:
focus-gbn:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-bbmri
container_name: bridgehead-focus-gbn
environment:
API_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${GBN_PROXY_ID}
PROXY_ID: ${GBN_PROXY_ID}
BLAZE_URL: "http://blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy-gbn:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
depends_on:
- "beam-proxy-gbn"
- "blaze"
beam-proxy-gbn:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
container_name: bridgehead-beam-proxy-gbn
environment:
BROKER_URL: ${GBN_BROKER_URL}
PROXY_ID: ${GBN_PROXY_ID}
APP_focus_KEY: ${GBN_FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/bbmri/modules/${GBN_ROOT_CERT}.root.crt.pem:/conf/root.crt.pem:ro


@ -0,0 +1,28 @@
#!/bin/bash
if [ "${ENABLE_GBN}" == "true" ]; then
log INFO "GBN setup detected -- will start services for German Biobank Node."
OVERRIDE+=" -f ./$PROJECT/modules/gbn-compose.yml"
# The environment needs to be defined in /etc/bridgehead
case "$ENVIRONMENT" in
"production")
export GBN_BROKER_ID=broker.bbmri.de
export GBN_ROOT_CERT=gbn
;;
"test")
export GBN_BROKER_ID=broker.test.bbmri.de
export GBN_ROOT_CERT=gbn.test
;;
*)
report_error 6 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
export GBN_BROKER_ID=broker.bbmri.de
export GBN_ROOT_CERT=gbn
;;
esac
GBN_BROKER_URL=https://${GBN_BROKER_ID}
GBN_PROXY_ID=${SITE_ID}.${GBN_BROKER_ID}
GBN_FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
GBN_SUPPORT_EMAIL=feedback@germanbiobanknode.de
fi


@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUckVOQQWZBTC0pWhn1X3lPxAWricwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwOTA0MDkwMTQ0WhcNMzMw
OTAxMDkwMjEzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAOOD+CVvteBmu1hKV1QlfbHmiLCnuf6F+9k+1u/b
6as6k7BURn8KZAxVLWSIwC6x2C7n9CHN9Jieb4DWpS0XmXQVUEpT1/yiLGBdxp2x
nrbzm7caOunsWsPlGOcXPJKJpzAhcg58RDzXZ+2+shulSmsgPNlWBaLhNL5wj0sQ
MzbwGVlGIJg18Ye/9WgQkO2ZcnTGb5cRsChKs4H43ZC34ZSSk7wqWg6P3e2xFam1
YKXBOZzhwHoI4AxUQ+gd6upz5dqcwbaNZm10VP8fMac2dMLw9cOCS0ueDCS4viLd
A69yds19AndBPMZhoEY1UHafjJ1uITRJQpaaB4vNliX+1rECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFC74YIorSwWD
/s5ozz3xvqUMDJ3qMB8GA1UdIwQYMBaAFC74YIorSwWD/s5ozz3xvqUMDJ3qMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCzcIccBzYr
sHCGTGsSyLGBYsuI5yl+hvFOitYTha/mC+XBxq2R6By2WzbfSZtyZkUtC/+FqdCY
VtMSjbDVXtBgsabfqODBobHmPyOEmNUX4IGcyn06rdM+rHQRah98lF+PhiPPO42F
9Wj8dkq4/Gf+Yarq31ZbY0sed2sEPZ/bV26Og8Ft9qip5gKwklyakAiCnDIq+QBd
ltvng3g08AQM0o5KIphP2/WU0UoSk1YPVMjRxuLiFg8xvr2EdCQQ9oA7xbhrmAXe
242HVW/7KokjmowyWTQlIUGnuGdCOtTl8h74eHTID0YWO68hHkA0J5Ox2j4dZxvw
HRFTxAR1gGKX
-----END CERTIFICATE-----


@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUQJjusHYR89Xas+kRbg41aHZxfmcwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwODIxMDk1MDI1WhcNMzMw
ODE4MDk1MDU1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAMP0jt2tSk23Bu+QeogqlFwjbMnqwRcWGKAOF4ch
aOK2B5u/BnpqIZDZbhfSIJTv8DPe3+nA2VqRfSiW3HbV0auqxx1ii2ZmHYbvO2P/
Jj6hyIiYYGqCMRVXk7iB+DfMysQEaSJO/7lJSprlVQCl0u7MAQ4q/szVNwcCm2Xi
iE00Wlota2xTYjnJHYjeaLZL4kQsjqW2aCWHG4q77Z4NXT+lXN9XXedgoXLhuwWl
UyHhXPjyCVu1iFzsXwSTodPAETGoInRYMqMA7PrbHZu1b2Jz0BwCQ+bark1td+Mf
l3uP0QduhZnH6zGO0KyUFRzeiesgabv5bgUeSSsIOVjnLJUCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFME99nPh1Vuo
7eRaymL2Ps7qGxIdMB8GA1UdIwQYMBaAFME99nPh1Vuo7eRaymL2Ps7qGxIdMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB0WG0xT00R
5CA0tVHaNo8bQuAXytu566TspKc5vVd3r6mglj/MiSSQG2MVz+GUU6LnnApgln1P
pvZuyaldB0QdTTLeJVMr/eFtZonlxqcxkj+VW2Y7mRHT7Xx9GQvzKYvSK5m/+xzH
pAQl8AirgkoZ5b+ltlzM0pDAH204xj3/skmGqM/o0FKzRtpetHYkZPiquHCmO2Cp
nTMkv7c2qu5t2Dm5q0Tmb7ZRoA1yIYhDn/UfhTAVWQnoMfXK8oB9nkRRb7pAfOXo
W1K4A+oWqKrJwfIH/Ycnw7hu8hPuGOyIN/PLnLpJp9M2I67vywp5lIvFib4UukyJ
wJw6/iTienIA
-----END CERTIFICATE-----


@ -1,7 +1,37 @@
BROKER_ID=broker.bbmri.samply.de # Makes sense for all European Biobanks
BROKER_URL=https://${BROKER_ID} : ${ENABLE_ERIC:=true}
PROXY_ID=${SITE_ID}.${BROKER_ID}
SPOT_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)" # Makes only sense for German Biobanks
SPOT_BEAM_SECRET_LONG="ApiKey spot.${PROXY_ID} ${SPOT_BEAM_SECRET_SHORT}" : ${ENABLE_GBN:=false}
SUPPORT_EMAIL=bridgehead@helpdesk.bbmri-eric.eu
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
SUPPORT_EMAIL=$ERIC_SUPPORT_EMAIL
BROKER_URL_FOR_PREREQ="${ERIC_BROKER_URL:-$GBN_BROKER_URL}"
if [ -n "$GBN_SUPPORT_EMAIL" ]; then
SUPPORT_EMAIL=$GBN_SUPPORT_EMAIL
fi
function do_enroll {
COUNT=0
if [ "$ENABLE_ERIC" == "true" ]; then
do_enroll_inner $ERIC_PROXY_ID $ERIC_SUPPORT_EMAIL
COUNT=$((COUNT+1))
fi
if [ "$ENABLE_GBN" == "true" ]; then
do_enroll_inner $GBN_PROXY_ID $GBN_SUPPORT_EMAIL
COUNT=$((COUNT+1))
fi
if [ $COUNT -ge 2 ]; then
echo
echo "You just received $COUNT certificate signing requests (CSR). Please send $COUNT e-mails, with 1 CSR each, to the respective e-mail address."
fi
}


@ -32,31 +32,80 @@ case "$PROJECT" in
bbmri) bbmri)
#nothing extra to do #nothing extra to do
;; ;;
cce)
#nothing extra to do
;;
itcc)
#nothing extra to do
;;
kr)
#nothing extra to do
;;
dhki)
#nothing extra to do
;;
minimal)
#nothing extra to do
;;
*) *)
printUsage printUsage
exit 1 exit 1
;; ;;
esac esac
# Loads config variables and runs the project's setup script
loadVars() { loadVars() {
# Load variables from /etc/bridgehead and /srv/docker/bridgehead
set -a set -a
# Source the project specific config file
source /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "/etc/bridgehead/$PROJECT.conf not found" source /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "/etc/bridgehead/$PROJECT.conf not found"
# Source the project specific local config file if present
# This file is ignored by git, as opposed to the regular config file, as it contains private site information like ETL auth data
if [ -e /etc/bridgehead/$PROJECT.local.conf ]; then if [ -e /etc/bridgehead/$PROJECT.local.conf ]; then
log INFO "Applying /etc/bridgehead/$PROJECT.local.conf" log INFO "Applying /etc/bridgehead/$PROJECT.local.conf"
source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import" source /etc/bridgehead/$PROJECT.local.conf || fail_and_report 1 "Found /etc/bridgehead/$PROJECT.local.conf but failed to import"
fi fi
# Set the execution environment: default to production on the main branch, otherwise test
if [[ -z "${ENVIRONMENT+x}" ]]; then
if [ "$(git rev-parse --abbrev-ref HEAD)" == "main" ]; then
ENVIRONMENT="production"
else
ENVIRONMENT="test"
fi
fi
# Source the versions of the image components
case "$ENVIRONMENT" in
"production")
source ./versions/prod
;;
"test")
source ./versions/test
;;
*)
report_error 7 "Environment \"$ENVIRONMENT\" is unknown. Assuming production. FIX THIS!"
source ./versions/prod
;;
esac
fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile" fetchVarsFromVaultByFile /etc/bridgehead/$PROJECT.conf || fail_and_report 1 "Unable to fetchVarsFromVaultByFile"
setHostname
optimizeBlazeMemoryUsage
# Run project specific setup if it exists
# This will usually modify `OVERRIDE` to include all the compose files that the project depends on
# This is also where projects specify which modules to load
[ -e ./$PROJECT/vars ] && source ./$PROJECT/vars [ -e ./$PROJECT/vars ] && source ./$PROJECT/vars
set +a set +a
OVERRIDE=${OVERRIDE:=""} OVERRIDE=${OVERRIDE:=""}
# minimal contains shared components, so potential overrides must be applied in every project
if [ -f "minimal/docker-compose.override.yml" ]; then
log INFO "Applying Bridgehead common components override (minimal/docker-compose.override.yml)"
OVERRIDE+=" -f ./minimal/docker-compose.override.yml"
fi
if [ -f "$PROJECT/docker-compose.override.yml" ]; then if [ -f "$PROJECT/docker-compose.override.yml" ]; then
log INFO "Applying $PROJECT/docker-compose.override.yml" log INFO "Applying $PROJECT/docker-compose.override.yml"
OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml" OVERRIDE+=" -f ./$PROJECT/docker-compose.override.yml"
fi fi
detectCompose detectCompose
setHostname setupProxy
} }
case "$ACTION" in case "$ACTION" in
@ -64,34 +113,60 @@ case "$ACTION" in
loadVars loadVars
hc_send log "Bridgehead $PROJECT startup: Checking requirements ..." hc_send log "Bridgehead $PROJECT startup: Checking requirements ..."
checkRequirements checkRequirements
sync_secrets
hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..." hc_send log "Bridgehead $PROJECT startup: Requirements checked out. Now starting bridgehead ..."
export LDM_LOGIN=$(getLdmPassword) exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
exec $COMPOSE -f ./$PROJECT/docker-compose.yml $OVERRIDE up --abort-on-container-exit
;; ;;
stop) stop)
loadVars loadVars
exec $COMPOSE -f ./$PROJECT/docker-compose.yml $OVERRIDE down # Kill stale secret-sync instances if present
docker kill $(docker ps -q --filter ancestor=docker.verbis.dkfz.de/cache/samply/secret-sync-local) 2>/dev/null || true
# HACK: This is temporary, to properly shut down wrongly named bridgehead instances (bridgehead-ccp instead of ccp)
$COMPOSE -p bridgehead-$PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE down
;;
is-running)
bk_is_running
exit $?
;;
logs)
loadVars
shift 2
exec journalctl -u bridgehead@$PROJECT -u bridgehead-update@$PROJECT -a $@
;;
docker-logs)
loadVars
shift 2
exec $COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE logs -f $@
;; ;;
update) update)
loadVars loadVars
exec ./lib/update-bridgehead.sh $PROJECT exec ./lib/update-bridgehead.sh $PROJECT
;; ;;
install) install)
source ./lib/prepare-system.sh source ./lib/prepare-system.sh NODEV
loadVars
exec ./lib/install-bridgehead.sh $PROJECT
;;
dev-install)
exec ./lib/prepare-system.sh DEV
loadVars loadVars
exec ./lib/install-bridgehead.sh $PROJECT exec ./lib/install-bridgehead.sh $PROJECT
;; ;;
uninstall) uninstall)
exec ./lib/uninstall-bridgehead.sh $PROJECT exec ./lib/uninstall-bridgehead.sh $PROJECT
;; ;;
adduser)
loadVars
log "INFO" "Adding encrypted credentials in /etc/bridgehead/$PROJECT.local.conf"
read -p "Please choose the component (LDM_AUTH|NNGM_AUTH) you want to add a user to : " COMPONENT
read -p "Please enter a username: " USER
read -s -p "Please enter a password (will not be echoed): "$'\n' PASSWORD
add_basic_auth_user $USER $PASSWORD $COMPONENT $PROJECT
;;
enroll) enroll)
loadVars loadVars
if [ -e $PRIVATEKEYFILENAME ]; then do_enroll $PROXY_ID
log ERROR "Private key already exists at $PRIVATEKEYFILENAME. Please delete first to proceed."
exit 1
fi
docker run --rm -ti -v /etc/bridgehead/pki:/etc/bridgehead/pki samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $PROXY_ID --admin-email $SUPPORT_EMAIL
chmod 600 $PRIVATEKEYFILENAME
;; ;;
preRun | preUpdate) preRun | preUpdate)
fixPermissions fixPermissions

cce/docker-compose.yml (new file)

@ -0,0 +1,68 @@
version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-cce-blaze
environment:
BASE_URL: "http://bridgehead-cce-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"
labels:
- "traefik.enable=true"
- "traefik.http.routers.blaze_cce.rule=PathPrefix(`/cce-localdatamanagement`)"
- "traefik.http.middlewares.cce_b_strip.stripprefix.prefixes=/cce-localdatamanagement"
- "traefik.http.services.blaze_cce.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze_cce.middlewares=cce_b_strip,auth"
- "traefik.http.routers.blaze_cce.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${PROXY_ID}
PROXY_ID: ${PROXY_ID}
BLAZE_URL: "http://bridgehead-cce-blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
QUERIES_TO_CACHE: '/queries_to_cache.conf'
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
volumes:
- /srv/docker/bridgehead/cce/queries_to_cache.conf:/queries_to_cache.conf:ro
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/cce/root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem


@ -0,0 +1,33 @@
version: "3.7"
services:
landing:
container_name: lens_federated-search
image: docker.verbis.dkfz.de/ccp/lens:${SITE_ID}
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
spot:
image: docker.verbis.dkfz.de/ccp-private/central-spot
environment:
BEAM_SECRET: "${FOCUS_BEAM_SECRET_SHORT}"
BEAM_URL: http://beam-proxy:8081
BEAM_PROXY_ID: ${SITE_ID}
BEAM_BROKER_ID: ${BROKER_ID}
BEAM_APP_ID: "focus"
PROJECT_METADATA: "cce_supervisors"
depends_on:
- "beam-proxy"
labels:
- "traefik.enable=true"
- "traefik.http.services.spot.loadbalancer.server.port=8080"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowmethods=GET,OPTIONS,POST"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolalloworiginlist=https://${HOST}"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowcredentials=true"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolmaxage=-1"
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
- "traefik.http.routers.spot.tls=true"
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"


@ -0,0 +1,5 @@
#!/bin/bash
if [ -n "$ENABLE_LENS" ];then
OVERRIDE+=" -f ./$PROJECT/modules/lens-compose.yml"
fi


@ -0,0 +1,2 @@
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0FHRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIGljZDEwOiAnaHR0cDovL2ZoaXIuZGUvQ29kZVN5c3RlbS9iZmFybS9pY2QtMTAtZ20nCmNvZGVzeXN0ZW0gbW9ycGg6ICd1cm46b2lkOjIuMTYuODQwLjEuMTEzODgzLjYuNDMuMScKCmNvbnRleHQgUGF0aWVudAoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9BR0VfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgooKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDNjEnIGZyb20gaWNkMTBdKSBhbmQKKChleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQ3LzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0ODAvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODUwMC8zJykpKQ==
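The two lines above are base64-encoded CQL queries (both decode to a library that begins with library Retrieve and using FHIR version '4.0.0'), presumably the cce/queries_to_cache.conf mounted into the Focus container earlier in this changeset. To inspect one of them, a minimal sketch assuming that mount path and coreutils base64:

head -n 1 /srv/docker/bridgehead/cce/queries_to_cache.conf | base64 -d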

cce/root.crt.pem Normal file

@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
Rtup0MTxSJtN
-----END CERTIFICATE-----

cce/vars Normal file

@ -0,0 +1,14 @@
BROKER_ID=test-no-real-data.broker.samply.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=manoj.waikar@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done


@ -1,65 +1,15 @@
 version: "3.7"
 services:
-  traefik:
-    container_name: bridgehead-traefik
-    image: traefik:latest
-    command:
-      - --entrypoints.web.address=:80
-      - --entrypoints.websecure.address=:443
-      - --providers.docker=true
-      - --providers.docker.exposedbydefault=false
-      - --providers.file.directory=/configuration/
-      - --api.dashboard=true
-      - --accesslog=true
-      - --entrypoints.web.http.redirections.entrypoint.to=websecure
-      - --entrypoints.web.http.redirections.entrypoint.scheme=https
-    labels:
-      - "traefik.enable=true"
-      - "traefik.http.routers.dashboard.rule=PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
-      - "traefik.http.routers.dashboard.entrypoints=websecure"
-      - "traefik.http.routers.dashboard.service=api@internal"
-      - "traefik.http.routers.dashboard.tls=true"
-      - "traefik.http.routers.dashboard.middlewares=auth"
-      - "traefik.http.middlewares.auth.basicauth.users=${LDM_LOGIN}"
-    ports:
-      - 80:80
-      - 443:443
-    volumes:
-      - /etc/bridgehead/traefik-tls:/certs:ro
-      - ../lib/traefik-configuration/:/configuration:ro
-      - /var/run/docker.sock:/var/run/docker.sock:ro
-  forward_proxy:
-    container_name: bridgehead-forward-proxy
-    image: samply/bridgehead-forward-proxy:latest
-    environment:
-      HTTPS_PROXY: ${HTTPS_PROXY_URL}
-      USERNAME: ${HTTPS_PROXY_USERNAME}
-      PASSWORD: ${HTTPS_PROXY_PASSWORD}
-    volumes:
-      - /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
-  landing:
-    container_name: bridgehead-landingpage
-    image: samply/bridgehead-landingpage:master
-    labels:
-      - "traefik.enable=true"
-      - "traefik.http.routers.landing.rule=PathPrefix(`/`)"
-      - "traefik.http.services.landing.loadbalancer.server.port=80"
-      - "traefik.http.routers.landing.tls=true"
-    environment:
-      HOST: ${HOST}
-      PROJECT: ${PROJECT}
-      SITE_NAME: ${SITE_NAME}
   blaze:
-    image: "samply/blaze:0.19"
+    image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
     container_name: bridgehead-ccp-blaze
     environment:
       BASE_URL: "http://bridgehead-ccp-blaze:8080"
-      JAVA_TOOL_OPTIONS: "-Xmx4g"
-      LOG_LEVEL: "debug"
+      JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
+      DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
+      DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
+      CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
       ENFORCE_REFERENTIAL_INTEGRITY: "false"
     volumes:
       - "blaze-data:/app/data"
@ -71,29 +21,32 @@ services:
       - "traefik.http.routers.blaze_ccp.middlewares=ccp_b_strip,auth"
       - "traefik.http.routers.blaze_ccp.tls=true"
-  spot:
-    image: samply/spot:latest
-    container_name: bridgehead-spot
+  focus:
+    image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}-dktk
+    container_name: bridgehead-focus
     environment:
-      SECRET: ${SPOT_BEAM_SECRET_LONG}
-      APPID: spot
+      API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
+      BEAM_APP_ID_LONG: focus.${PROXY_ID}
       PROXY_ID: ${PROXY_ID}
-      LDM_URL: http://bridgehead-ccp-blaze:8080/fhir
-      BEAM_PROXY: http://beam-proxy:8081
+      BLAZE_URL: "http://bridgehead-ccp-blaze:8080/fhir/"
+      BEAM_PROXY_URL: http://beam-proxy:8081
+      RETRY_COUNT: ${FOCUS_RETRY_COUNT}
+      EPSILON: 0.28
+      QUERIES_TO_CACHE: '/queries_to_cache.conf'
+      ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
+    volumes:
+      - /srv/docker/bridgehead/ccp/queries_to_cache.conf:/queries_to_cache.conf:ro
     depends_on:
       - "beam-proxy"
       - "blaze"
   beam-proxy:
-    image: samply/beam-proxy:main
+    image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
     container_name: bridgehead-beam-proxy
     environment:
       BROKER_URL: ${BROKER_URL}
       PROXY_ID: ${PROXY_ID}
-      APP_0_ID: spot
-      APP_0_KEY: ${SPOT_BEAM_SECRET_SHORT}
-      APP_1_ID: report-hub
-      APP_1_KEY: ${REPORTHUB_BEAM_SECRET_SHORT}
+      APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
       PRIVKEY_FILE: /run/secrets/proxy.pem
       ALL_PROXY: http://forward_proxy:3128
       TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
@ -104,8 +57,7 @@ services:
       - "forward_proxy"
     volumes:
       - /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
-      - ./root.crt.pem:/conf/root.crt.pem:ro
+      - /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
 volumes:
   blaze-data:


@ -1,34 +0,0 @@
version: "3.7"
services:
exliquid-task-store:
image: "samply/blaze:0.19"
container_name: bridgehead-exliquid-task-store
environment:
BASE_URL: "http://bridgehead-exliquid-task-store:8080"
JAVA_TOOL_OPTIONS: "-Xmx1g"
volumes:
- "exliquid-task-store-data:/app/data"
labels:
- "traefik.enable=false"
exliquid-report-hub:
image: "samply/report-hub:latest"
container_name: bridgehead-exliquid-report-hub
environment:
SPRING_WEBFLUX_BASE_PATH: "/exliquid"
JAVA_TOOL_OPTIONS: "-Xmx1g"
APP_BEAM_APPID: "report-hub.${PROXY_ID}"
APP_BEAM_SECRET: ${REPORTHUB_BEAM_SECRET_SHORT}
APP_BEAM_PROXY_BASEURL: http://beam-proxy:8081
APP_TASKSTORE_BASEURL: "http://bridgehead-exliquid-task-store:8080/fhir"
APP_DATASTORE_BASEURL: http://bridgehead-ccp-blaze:8080/fhir
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.report-ccp.rule=PathPrefix(`/exliquid`)"
- "traefik.http.services.report-ccp.loadbalancer.server.port=8080"
- "traefik.http.routers.report-ccp.tls=true"
volumes:
exliquid-task-store-data:


@ -1,19 +0,0 @@
#!/bin/bash
function exliquidSetup() {
case ${SITE_ID} in
berlin|dresden|essen|frankfurt|freiburg|luebeck|mainz|muenchen-lmu|muenchen-tu|mannheim|tuebingen)
EXLIQUID=1
;;
dktk-test)
EXLIQUID=1
;;
*)
EXLIQUID=0
;;
esac
if [[ $EXLIQUID -eq 1 ]]; then
log INFO "EXLIQUID setup detected -- will start Report-Hub."
OVERRIDE+=" -f ./$PROJECT/exliquid-compose.yml"
fi
}


@ -0,0 +1,32 @@
version: "3.7"
services:
blaze-secondary:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-ccp-blaze-secondary
environment:
BASE_URL: "http://bridgehead-ccp-blaze-secondary:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-secondary-data:/app/data"
labels:
- "traefik.enable=true"
- "traefik.http.routers.blaze-secondary_ccp.rule=PathPrefix(`/ccp-localdatamanagement-secondary`)"
- "traefik.http.middlewares.ccp_b-secondary_strip.stripprefix.prefixes=/ccp-localdatamanagement-secondary"
- "traefik.http.services.blaze-secondary_ccp.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze-secondary_ccp.middlewares=ccp_b-secondary_strip,auth"
- "traefik.http.routers.blaze-secondary_ccp.tls=true"
obds2fhir-rest:
environment:
STORE_PATH: ${STORE_PATH:-http://blaze:8080/fhir}
exporter:
environment:
BLAZE_HOST: "blaze-secondary"
volumes:
blaze-secondary-data:


@ -0,0 +1,11 @@
#!/bin/bash
function blazeSecondarySetup() {
if [ -n "$ENABLE_SECONDARY_BLAZE" ]; then
log INFO "Secondary Blaze setup detected -- will start second blaze."
OVERRIDE+=" -f ./$PROJECT/modules/blaze-secondary-compose.yml"
#make oBDS2FHIR ignore ID-Management and replace target Blaze
PATIENTLIST_URL=" "
STORE_PATH="http://blaze-secondary:8080/fhir"
fi
}


@ -0,0 +1,171 @@
version: "3.7"
services:
rstudio:
container_name: bridgehead-rstudio
image: docker.verbis.dkfz.de/ccp/dktk-rstudio:latest
environment:
#DEFAULT_USER: "rstudio" # This line is kept for informational purposes
PASSWORD: "${RSTUDIO_ADMIN_PASSWORD}" # It is required, even if the authentication is disabled
DISABLE_AUTH: "true" # https://rocker-project.org/images/versioned/rstudio.html#how-to-use
HTTP_RELATIVE_PATH: "/rstudio"
ALL_PROXY: "http://forward_proxy:3128" # https://rocker-project.org/use/networking.html
labels:
- "traefik.enable=true"
- "traefik.http.routers.rstudio_ccp.rule=PathPrefix(`/rstudio`)"
- "traefik.http.services.rstudio_ccp.loadbalancer.server.port=8787"
- "traefik.http.middlewares.rstudio_ccp_strip.stripprefix.prefixes=/rstudio"
- "traefik.http.routers.rstudio_ccp.tls=true"
- "traefik.http.routers.rstudio_ccp.middlewares=oidcAuth,rstudio_ccp_strip"
networks:
- rstudio
opal:
container_name: bridgehead-opal
image: docker.verbis.dkfz.de/ccp/dktk-opal:latest
labels:
- "traefik.enable=true"
- "traefik.http.routers.opal_ccp.rule=PathPrefix(`/opal`)"
- "traefik.http.services.opal_ccp.loadbalancer.server.port=8080"
- "traefik.http.routers.opal_ccp.tls=true"
links:
- opal-rserver
- opal-db
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC -Dhttps.proxyHost=forward_proxy -Dhttps.proxyPort=3128"
# OPAL_ADMINISTRATOR_USER: "administrator" # This line is kept for informational purposes
OPAL_ADMINISTRATOR_PASSWORD: "${OPAL_ADMIN_PASSWORD}"
POSTGRESDATA_HOST: "opal-db"
POSTGRESDATA_DATABASE: "opal"
POSTGRESDATA_USER: "opal"
POSTGRESDATA_PASSWORD: "${OPAL_DB_PASSWORD}"
ROCK_HOSTS: "opal-rserver:8085"
APP_URL: "https://${HOST}/opal"
APP_CONTEXT_PATH: "/opal"
OPAL_PRIVATE_KEY: "/run/secrets/opal-key.pem"
OPAL_CERTIFICATE: "/run/secrets/opal-cert.pem"
OIDC_URL: "${OIDC_URL}"
OIDC_REALM: "${OIDC_REALM}"
OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
TOKEN_MANAGER_PASSWORD: "${TOKEN_MANAGER_OPAL_PASSWORD}"
EXPORTER_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
BEAM_APP_ID: token-manager.${PROXY_ID}
BEAM_SECRET: ${TOKEN_MANAGER_SECRET}
BEAM_DATASHIELD_PROXY: request-manager
volumes:
- "/var/cache/bridgehead/ccp/opal-metadata-db:/srv" # Opal metadata
secrets:
- opal-cert.pem
- opal-key.pem
opal-db:
container_name: bridgehead-opal-db
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
environment:
POSTGRES_PASSWORD: "${OPAL_DB_PASSWORD}" # Set in datashield-setup.sh
POSTGRES_USER: "opal"
POSTGRES_DB: "opal"
volumes:
- "/var/cache/bridgehead/ccp/opal-db:/var/lib/postgresql/data" # Opal project data (imported from exporter)
opal-rserver:
container_name: bridgehead-opal-rserver
image: docker.verbis.dkfz.de/ccp/dktk-rserver # datashield/rock-base + dsCCPhos
tmpfs:
- /srv
beam-connect:
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-datashield-connect
environment:
PROXY_URL: "http://beam-proxy:8081"
TLS_CA_CERTIFICATES_DIR: /run/secrets
APP_ID: datashield-connect.${SITE_ID}.${BROKER_ID}
PROXY_APIKEY: ${DATASHIELD_CONNECT_SECRET}
DISCOVERY_URL: "./map/central.json"
LOCAL_TARGETS_FILE: "./map/local.json"
NO_AUTH: "true"
secrets:
- opal-cert.pem
depends_on:
- beam-proxy
volumes:
- /tmp/bridgehead/opal-map/:/map/:ro
networks:
- default
- rstudio
traefik:
labels:
- "traefik.http.middlewares.oidcAuth.forwardAuth.address=http://oauth2-proxy:4180/"
- "traefik.http.middlewares.oidcAuth.forwardAuth.trustForwardHeader=true"
- "traefik.http.middlewares.oidcAuth.forwardAuth.authResponseHeaders=X-Auth-Request-Access-Token,Authorization"
networks:
- default
- rstudio
forward_proxy:
networks:
- default
- rstudio
beam-proxy:
environment:
APP_datashield-connect_KEY: ${DATASHIELD_CONNECT_SECRET}
APP_token-manager_KEY: ${TOKEN_MANAGER_SECRET}
# TODO: Allow users of group /DataSHIELD and OIDC_USER_GROUP at the same time:
# Maybe a solution would be (https://oauth2-proxy.github.io/oauth2-proxy/configuration/oauth_provider):
# --allowed-groups=/DataSHIELD,OIDC_USER_GROUP
oauth2-proxy:
image: docker.verbis.dkfz.de/cache/oauth2-proxy/oauth2-proxy:latest
container_name: bridgehead-oauth2proxy
command: >-
--allowed-group=DataSHIELD
--oidc-groups-claim=${OIDC_GROUP_CLAIM}
--auth-logging=true
--whitelist-domain=${HOST}
--http-address="0.0.0.0:4180"
--reverse-proxy=true
--upstream="static://202"
--email-domain="*"
--cookie-name="_BRIDGEHEAD_oauth2"
--cookie-secret="${OAUTH2_PROXY_SECRET}"
--cookie-expire="12h"
--cookie-secure="true"
--cookie-httponly="true"
#OIDC settings
--provider="keycloak-oidc"
--provider-display-name="VerbIS Login"
--client-id="${OIDC_PRIVATE_CLIENT_ID}"
--client-secret="${OIDC_CLIENT_SECRET}"
--redirect-url="https://${HOST}${OAUTH2_CALLBACK}"
--oidc-issuer-url="${OIDC_ISSUER_URL}"
--scope="openid email profile"
--code-challenge-method="S256"
--skip-provider-button=true
#X-Forwarded-Header settings - true/false depending on your needs
--pass-basic-auth=true
--pass-user-headers=false
--pass-access-token=false
labels:
- "traefik.enable=true"
- "traefik.http.routers.oauth2_proxy.rule=PathPrefix(`/oauth2`)"
- "traefik.http.services.oauth2_proxy.loadbalancer.server.port=4180"
- "traefik.http.routers.oauth2_proxy.tls=true"
environment:
http_proxy: "http://forward_proxy:3128"
https_proxy: "http://forward_proxy:3128"
depends_on:
forward_proxy:
condition: service_healthy
secrets:
opal-cert.pem:
file: /tmp/bridgehead/opal-cert.pem
opal-key.pem:
file: /tmp/bridgehead/opal-key.pem
networks:
rstudio:


@ -0,0 +1,157 @@
<template id="opal-ccp" source-id="blaze-store" opal-project="ccp-demo" target-id="opal" >
<container csv-filename="Patient-${TIMESTAMP}.csv" opal-table="patient" opal-entity-type="Patient">
<attribute csv-column="patient-id" opal-value-type="text" primary-key="true" val-fhir-path="Patient.id.value" anonym="Pat" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="dktk-id-global" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Global').value.value"/>
<attribute csv-column="dktk-id-lokal" opal-value-type="text" val-fhir-path="Patient.identifier.where(type.coding.code = 'Lokal').value.value" />
<attribute csv-column="geburtsdatum" opal-value-type="date" val-fhir-path="Patient.birthDate.value"/>
<attribute csv-column="geschlecht" opal-value-type="text" val-fhir-path="Patient.gender.value" />
<attribute csv-column="datum_des_letztbekannten_vitalstatus" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '75186-7').effective.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
<attribute csv-column="vitalstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '75186-7').value.coding.code.value" join-fhir-path="/Observation.where(code.coding.code = '75186-7').subject.reference.value"/>
<!--fehlt in ADT2FHIR--><attribute csv-column="tod_tumorbedingt" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://fhir.de/CodeSystem/bfarm/icd-10-gm').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
<!--fehlt in ADT2FHIR--><attribute csv-column="todesursachen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '68343-3').value.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/JNUCS').code.value" join-fhir-path="/Observation.where(code.coding.code = '68343-3').subject.reference.value"/>
</container>
<container csv-filename="Diagnosis-${TIMESTAMP}.csv" opal-table="diagnosis" opal-entity-type="Diagnosis">
<attribute csv-column="diagnosis-id" primary-key="true" opal-value-type="text" val-fhir-path="Condition.id.value" anonym="Dia" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Condition.subject.reference.value" anonym="Pat"/>
<attribute csv-column="primaerdiagnose" opal-value-type="text" val-fhir-path="Condition.code.coding.code.value"/>
<attribute csv-column="tumor_diagnosedatum" opal-value-type="date" val-fhir-path="Condition.onset.value"/>
<attribute csv-column="primaertumor_diagnosetext" opal-value-type="text" val-fhir-path="Condition.code.text.value"/>
<attribute csv-column="version_des_icd-10_katalogs" opal-value-type="integer" val-fhir-path="Condition.code.coding.version.value"/>
<attribute csv-column="lokalisation" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').code.value"/>
<attribute csv-column="icd-o_katalog_topographie_version" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'urn:oid:2.16.840.1.113883.6.43.1').version.value"/>
<attribute csv-column="seitenlokalisation_nach_adt-gekid" opal-value-type="text" val-fhir-path="Condition.bodySite.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/SeitenlokalisationCS').code.value"/>
</container>
<container csv-filename="Progress-${TIMESTAMP}.csv" opal-table="progress" opal-entity-type="Progress">
<!--it would be better to generate a an ID, instead of extracting the ClinicalImpression id-->
<attribute csv-column="progress-id" primary-key="true" opal-value-type="text" val-fhir-path="ClinicalImpression.id.value" anonym="Pro" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="ClinicalImpression.problem.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="ClinicalImpression.subject.reference.value" anonym="Pat" />
<attribute csv-column="untersuchungs-_befunddatum_im_verlauf" opal-value-type="date" val-fhir-path="ClinicalImpression.effective.value" />
<!-- just for evaluation: redundant to Untersuchungs-, Befunddatum im Verlauf-->
<attribute csv-column="datum_lokales_oder_regionaeres_rezidiv" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').effective.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
<attribute csv-column="gesamtbeurteilung_tumorstatus" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21976-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
<attribute csv-column="lokales_oder_regionaeres_rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4583-6').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value"/>
<attribute csv-column="lymphknoten-rezidiv" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4370-8').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
<attribute csv-column="fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = 'LA4226-2').value.coding.code.value" join-fhir-path="ClinicalImpression.finding.itemReference.reference.value" />
</container>
<container csv-filename="Histology-${TIMESTAMP}.csv" opal-table="histology" opal-entity-type="Histology" >
<attribute csv-column="histology-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').id" anonym="His" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').focus.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').subject.reference.value" anonym="Pat" />
<attribute csv-column="histologie_datum" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '59847-4').effective.value"/>
<attribute csv-column="icd-o_katalog_morphologie_version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.version.value" />
<attribute csv-column="morphologie" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.coding.code.value"/>
<attribute csv-column="morphologie-freitext" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59847-4').value.text.value"/>
<attribute csv-column="grading" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '59542-1').value.coding.code.value" join-fhir-path="Observation.where(code.coding.code = '59847-4').hasMember.reference.value"/>
</container>
<container csv-filename="Metastasis-${TIMESTAMP}.csv" opal-table="metastasis" opal-entity-type="Metastasis" >
<attribute csv-column="metastasis-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').id" anonym="Met" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').focus.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').subject.reference.value" anonym="Pat" />
<attribute csv-column="datum_fernmetastasen" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21907-1').effective.value"/>
<attribute csv-column="fernmetastasen_vorhanden" opal-value-type="boolean" val-fhir-path="Observation.where(code.coding.code = '21907-1').value.coding.code.value"/>
<attribute csv-column="lokalisation_fernmetastasen" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21907-1').bodySite.coding.code.value"/>
</container>
<container csv-filename="TNM-${TIMESTAMP}.csv" opal-table="tnm" opal-entity-type="TNM">
<attribute csv-column="tnm-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').id" anonym="TNM" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').focus.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').subject.reference.value" anonym="Pat" />
<attribute csv-column="datum_der_tnm_dokumentation_datum_befund" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').effective.value"/>
<attribute csv-column="uicc_stadium" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.code.value"/>
<attribute csv-column="tnm-t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').value.coding.code.value"/>
<attribute csv-column="tnm-n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').value.coding.code.value"/>
<attribute csv-column="tnm-m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').value.coding.code.value"/>
<attribute csv-column="c_p_u_preefix_t" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21905-5' or code.coding.code = '21899-0').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
<attribute csv-column="c_p_u_preefix_n" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21906-3' or code.coding.code = '21900-6').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
<attribute csv-column="c_p_u_preefix_m" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21907-1' or code.coding.code = '21901-4').extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-TNMcpuPraefix').value.coding.code.value"/>
<attribute csv-column="tnm-y-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '59479-6' or code.coding.code = '59479-6').value.coding.code.value"/>
<attribute csv-column="tnm-r-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '21983-2' or code.coding.code = '21983-2').value.coding.code.value"/>
<attribute csv-column="tnm-m-symbol" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').component.where(code.coding.code = '42030-7' or code.coding.code = '42030-7').value.coding.code.value"/>
<!--nur bei UICC, nicht in ADT2FHIR--><attribute csv-column="tnm-version" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '21908-9' or code.coding.code = '21902-2').value.coding.version.value"/>
</container>
<container csv-filename="System-Therapy-${TIMESTAMP}.csv" opal-table="system-therapy" opal-entity-type="SystemTherapy">
<attribute csv-column="system-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="MedicationStatement.id" anonym="Sys" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="MedicationStatement.reasonReference.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="MedicationStatement.subject.reference.value" anonym="Pat" />
<attribute csv-column="systemische_therapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
<attribute csv-column="intention_chemotherapie" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value"/>
<attribute csv-column="therapieart" opal-value-type="text" val-fhir-path="MedicationStatement.category.coding.code.value"/>
<attribute csv-column="systemische_therapie_beginn" opal-value-type="date" val-fhir-path="MedicationStatement.effective.start.value"/>
<attribute csv-column="systemische_therapie_ende" opal-value-type="date" val-fhir-path="MedicationStatement.effective.end.value"/>
<attribute csv-column="systemische_therapie_protokoll" opal-value-type="text" val-fhir-path="MedicationStatement.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SystemischeTherapieProtokoll').value.text.value"/>
<attribute csv-column="systemische_therapie_substanzen" opal-value-type="text" val-fhir-path="MedicationStatement.medication.text.value"/>
<attribute csv-column="chemotherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'CH').exists().value" />
<attribute csv-column="hormontherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'HO').exists().value" />
<attribute csv-column="immuntherapie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'IM').exists().value" />
<attribute csv-column="knochenmarktransplantation" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'KM').exists().value" />
<attribute csv-column="abwartende_strategie" opal-value-type="boolean" val-fhir-path="MedicationStatement.where(category.coding.code = 'WS').exists().value" />
</container>
<container csv-filename="Surgery-${TIMESTAMP}.csv" opal-table="surgery" opal-entity-type="Surgery">
<attribute csv-column="surgery-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').id" anonym="Sur" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').reasonReference.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').subject.reference.value" anonym="Pat" />
<attribute csv-column="ops-code" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').code.coding.code.value"/>
<attribute csv-column="datum_der_op" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'OP').performed.value"/>
<attribute csv-column="intention_op" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-OPIntention').value.coding.code.value"/>
<attribute csv-column="lokale_beurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/LokaleBeurteilungResidualstatusCS').code.value" />
<attribute csv-column="gesamtbeurteilung_resttumor" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'OP').outcome.coding.where(system = 'http://dktk.dkfz.de/fhir/onco/core/CodeSystem/GesamtbeurteilungResidualstatusCS').code.value" />
</container>
<container csv-filename="Radiation-Therapy-${TIMESTAMP}.csv" opal-table="radiation-therapy" opal-entity-type="RadiationTherapy">
<attribute csv-column="radiation-therapy-id" primary-key="true" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').id" anonym="Rad" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').reasonReference.reference.value" anonym="Dia"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Procedure.where(category.coding.code = 'ST').subject.reference.value" anonym="Pat" />
<attribute csv-column="strahlentherapie_stellung_zu_operativer_therapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-StellungZurOp').value.coding.code.value"/>
<attribute csv-column="intention_strahlentherapie" opal-value-type="text" val-fhir-path="Procedure.extension('http://dktk.dkfz.de/fhir/StructureDefinition/onco-core-Extension-SYSTIntention').value.coding.code.value" />
<attribute csv-column="strahlentherapie_beginn" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.start.value"/>
<attribute csv-column="strahlentherapie_ende" opal-value-type="date" val-fhir-path="Procedure.where(category.coding.code = 'ST').performed.end.value"/>
</container>
<container csv-filename="Molecular-Marker-${TIMESTAMP}.csv" opal-table="molecular-marker" opal-entity-type="MolecularMarker">
<attribute csv-column="mol-marker-id" primary-key="true" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').id" anonym="Mol" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="diagnosis-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').focus.reference.value" anonym="Dia" />
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').subject.reference.value" anonym="Pat" />
<attribute csv-column="datum_der_datenerhebung" opal-value-type="date" val-fhir-path="Observation.where(code.coding.code = '69548-6').effective.value"/>
<attribute csv-column="marker" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').component.value.coding.code.value"/>
<attribute csv-column="status_des_molekularen_markers" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.coding.code.value" />
<attribute csv-column="zusaetzliche_alternative_dokumentation" opal-value-type="text" val-fhir-path="Observation.where(code.coding.code = '69548-6').value.text.value"/>
</container>
<container csv-filename="Sample-${TIMESTAMP}.csv" opal-table="sample" opal-entity-type="Sample">
<attribute csv-column="sample-id" primary-key="true" opal-value-type="text" val-fhir-path="Specimen.id" anonym="Sam" op="EXTRACT_RELATIVE_ID"/>
<attribute csv-column="patient-id" opal-value-type="text" val-fhir-path="Specimen.subject.reference.value" anonym="Pat" />
<attribute csv-column="entnahmedatum" opal-value-type="date" val-fhir-path="Specimen.collection.collectedDateTime.value"/>
<attribute csv-column="probenart" opal-value-type="text" val-fhir-path="Specimen.type.coding.code.value"/>
<attribute csv-column="status" opal-value-type="text" val-fhir-path="Specimen.status.code.value"/>
<attribute csv-column="projekt" opal-value-type="text" val-fhir-path="Specimen.identifier.system.value"/>
<!-- @TODO: it is still necessary to clarify whether it would not be better to take the quantity of collection.quantity -->
<attribute csv-column="menge" opal-value-type="integer" val-fhir-path="Specimen.container.specimenQuantity.value.value"/>
<attribute csv-column="einheit" opal-value-type="text" val-fhir-path="Specimen.container.specimenQuantity.unit.value"/>
<attribute csv-column="aliquot" opal-value-type="text" val-fhir-path="Specimen.parent.reference.exists().value" />
</container>
<fhir-rev-include>Observation:patient</fhir-rev-include>
<fhir-rev-include>Condition:patient</fhir-rev-include>
<fhir-rev-include>ClinicalImpression:patient</fhir-rev-include>
<fhir-rev-include>MedicationStatement:patient</fhir-rev-include>
<fhir-rev-include>Procedure:patient</fhir-rev-include>
<fhir-rev-include>Specimen:patient</fhir-rev-include>
</template>


@ -0,0 +1,44 @@
#!/bin/bash -e
if [ "$ENABLE_DATASHIELD" == true ]; then
# HACK: This only works because exporter-setup.sh and teiler-setup.sh are sourced after datashield-setup.sh
if [ -z "${ENABLE_EXPORTER}" ] || [ "${ENABLE_EXPORTER}" != "true" ]; then
log WARN "The ENABLE_EXPORTER variable is either not set or not set to 'true'."
fi
OAUTH2_CALLBACK=/oauth2/callback
OAUTH2_PROXY_SECRET="$(echo \"This is a salt string to generate one consistent encryption key for the oauth2_proxy. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 32)"
add_private_oidc_redirect_url "${OAUTH2_CALLBACK}"
log INFO "DataSHIELD setup detected -- will start DataSHIELD services."
OVERRIDE+=" -f ./$PROJECT/modules/datashield-compose.yml"
EXPORTER_OPAL_PASSWORD="$(generate_password \"exporter in Opal\")"
TOKEN_MANAGER_OPAL_PASSWORD="$(generate_password \"Token Manager in Opal\")"
OPAL_DB_PASSWORD="$(echo \"Opal DB\" | generate_simple_password)"
OPAL_ADMIN_PASSWORD="$(generate_password \"admin password for Opal\")"
RSTUDIO_ADMIN_PASSWORD="$(generate_password \"admin password for R-Studio\")"
DATASHIELD_CONNECT_SECRET="$(echo \"DataShield Connect\" | generate_simple_password)"
TOKEN_MANAGER_SECRET="$(echo \"Token Manager\" | generate_simple_password)"
if [ ! -e /tmp/bridgehead/opal-cert.pem ]; then
mkdir -p /tmp/bridgehead/
openssl req -x509 -newkey rsa:4096 -nodes -keyout /tmp/bridgehead/opal-key.pem -out /tmp/bridgehead/opal-cert.pem -days 3650 -subj "/CN=opal/C=DE"
fi
mkdir -p /tmp/bridgehead/opal-map
sites="$(cat ./$PROJECT/modules/datashield-sites.json)"
echo "$sites" | docker_jq -n --args '{"sites": input | map({
"name": .,
"id": .,
"virtualhost": "\(.):443",
"beamconnect": "datashield-connect.\(.).'"$BROKER_ID"'"
})}' $sites >/tmp/bridgehead/opal-map/central.json
echo "$sites" | docker_jq -n --args '[{
"external": "'"$SITE_ID"':443",
"internal": "opal:8443",
"allowed": input | map("\(.).'"$BROKER_ID"'")
}]' >/tmp/bridgehead/opal-map/local.json
if [ "$USER" == "root" ]; then
chown -R bridgehead:docker /tmp/bridgehead
chmod g+wr /tmp/bridgehead/opal-map/*
chmod g+r /tmp/bridgehead/opal-key.pem
fi
add_private_oidc_redirect_url "/opal/*"
fi
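A minimal configuration sketch for switching this module on, assuming the site configuration lives in /etc/bridgehead/ccp.conf as referenced elsewhere in this changeset; the exporter should be enabled as well, since the check above only warns when it is missing:

# /etc/bridgehead/ccp.conf (sketch)
ENABLE_DATASHIELD=true
ENABLE_EXPORTER=true   # datashield-setup.sh warns if this is not set to 'true'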


@ -0,0 +1,15 @@
[
"berlin",
"muenchen-lmu",
"dresden",
"freiburg",
"muenchen-tum",
"tuebingen",
"mainz",
"frankfurt",
"essen",
"dktk-datashield-test",
"dktk-test",
"mannheim",
"central-ds-orchestrator"
]

ccp/modules/datashield.md Normal file

@ -0,0 +1,28 @@
# DataSHIELD
This module constitutes the infrastructure to run DataSHIELD within the bridgehead.
For more information about DataSHIELD, please visit https://www.datashield.org/
## R-Studio
To connect to the different bridgeheads of the CCP through DataSHIELD, you can use your own R-Studio environment.
However, the R-Studio provided here already has the DataSHIELD libraries installed and is integrated within the bridgehead.
This can save you the extra configuration of your own R-Studio environment.
## Opal
This is the core of DataSHIELD. It is made up of Opal, a Postgres database and an R-server.
For more information about Opal, please visit https://opaldoc.obiba.org
### Opal
Opal is OBiBa's core database application for biobanks.
### Opal-DB
Opal requires a database to import the data for DataSHIELD; we use a Postgres instance as that database.
The data is imported within the bridgehead through the exporter.
### Opal-R-Server
R-Server to execute R scripts in DataSHIELD.
## Beam
### Beam-Connect
Beam-Connect is used to route HTTP(S) traffic through Beam so that R-Studio can access data from other bridgeheads that have DataSHIELD enabled.
### Beam-Proxy
The usual beam proxy used for communication.
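A quick way to check that the module came up, as a sketch only: the container name and the /opal route are taken from the compose file above, and -k is needed only while the host certificate is self-signed.

docker ps --filter name=bridgehead-opal --format '{{.Names}}: {{.Status}}'
curl -k -I https://localhost/opal/    # Opal UI is routed by Traefik under /opal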


@ -0,0 +1,39 @@
version: "3.7"
services:
beam-proxy:
environment:
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
dnpm-beam-connect:
depends_on: [ beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://beam-proxy:8081
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
HTTP_PROXY: "http://forward_proxy:3128"
HTTPS_PROXY: "http://forward_proxy:3128"
NO_PROXY: beam-proxy,dnpm-backend,host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
- "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
- "traefik.http.routers.dnpm-connect.tls=true"
dnpm-echo:
image: docker.verbis.dkfz.de/cache/samply/bridgehead-echo:latest
container_name: bridgehead-dnpm-echo


@ -0,0 +1,99 @@
version: "3.7"
services:
dnpm-mysql:
image: mysql:9
healthcheck:
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
interval: 3s
timeout: 5s
retries: 5
environment:
MYSQL_ROOT_HOST: "%"
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
volumes:
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
dnpm-authup:
image: authup/authup:latest
container_name: bridgehead-dnpm-authup
volumes:
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
depends_on:
dnpm-mysql:
condition: service_healthy
command: server/core start
environment:
- PUBLIC_URL=https://${HOST}/auth/
- AUTHORIZE_REDIRECT_URL=https://${HOST}
- ROBOT_ADMIN_ENABLED=true
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
- ROBOT_ADMIN_SECRET_RESET=true
- DB_TYPE=mysql
- DB_HOST=dnpm-mysql
- DB_USERNAME=root
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
- DB_DATABASE=auth
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth"
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-auth.tls=true"
dnpm-portal:
image: ghcr.io/dnpm-dip/portal:latest
container_name: bridgehead-dnpm-portal
environment:
- NUXT_API_URL=http://dnpm-backend:9000/
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-frontend.tls=true"
dnpm-backend:
container_name: bridgehead-dnpm-backend
image: ghcr.io/dnpm-dip/backend:latest
environment:
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- HATEOAS_HOST=https://${HOST}
- CONNECTOR_TYPE=broker
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
volumes:
- /etc/bridgehead/dnpm/config:/dnpm_config
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
depends_on:
dnpm-authup:
condition: service_healthy
labels:
- "traefik.enable=true"
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
# expose everything
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
- "traefik.http.routers.dnpm-backend.tls=true"
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
# except ETL
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-etl.tls=true"
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
# this needs an ETL processor with support for basic auth
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
# except peer-to-peer
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-peer.tls=true"
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
# this effectively denies all requests
# this is okay, because requests from peers don't go through Traefik
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
landing:
labels:
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"


@ -0,0 +1,16 @@
#!/bin/bash
if [ -n "${ENABLE_DNPM_NODE}" ]; then
log INFO "DNPM setup detected -- will start DNPM:DIP node."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-node-compose.yml"
# Set variables required for BwHC Node. ZPM_SITE is assumed to be set in /etc/bridgehead/<project>.conf
if [ -z "${ZPM_SITE+x}" ]; then
log ERROR "Mandatory variable ZPM_SITE not defined!"
exit 1
fi
mkdir -p /var/cache/bridgehead/dnpm/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/dnpm/'. Please run sudo './bridgehead install $PROJECT' again to fix the permissions."
DNPM_SYNTH_NUM=${DNPM_SYNTH_NUM:--1}
DNPM_MYSQL_ROOT_PASSWORD="$(generate_simple_password 'dnpm mysql')"
DNPM_AUTHUP_SECRET="$(generate_simple_password 'dnpm authup')"
fi
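A configuration sketch for enabling the node, assuming the /etc/bridgehead/<project>.conf mentioned in the comment above; the ZPM_SITE value is a hypothetical site ID:

# /etc/bridgehead/ccp.conf (sketch)
ENABLE_DNPM_NODE=true
ZPM_SITE=UKT    # {Site-ID}; combined with SITE_NAME to form LOCAL_SITE in the compose file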

ccp/modules/dnpm-setup.sh Normal file

@ -0,0 +1,15 @@
#!/bin/bash -e
if [ -n "${ENABLE_DNPM}" ]; then
log INFO "DNPM setup detected (Beam.Connect) -- will start Beam.Connect for DNPM."
OVERRIDE+=" -f ./$PROJECT/modules/dnpm-compose.yml"
# Set variables required for Beam-Connect
DNPM_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
# If the DNPM_NO_PROXY variable is set, prefix it with a comma (as it gets added to a comma separated list)
if [ -n "${DNPM_NO_PROXY}" ]; then
DNPM_ADDITIONAL_NO_PROXY=",${DNPM_NO_PROXY}"
else
DNPM_ADDITIONAL_NO_PROXY=""
fi
fi


@ -0,0 +1,6 @@
# Full Excel Export
curl --location --request POST 'https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL' \
--header 'x-api-key: ${EXPORT_API_KEY}'
# QB
curl --location --request POST 'https://${HOST}/ccp-reporter/generate?template-id=ccp'


@ -0,0 +1,72 @@
version: "3.7"
services:
exporter:
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
container_name: bridgehead-ccp-exporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
CROSS_ORIGINS: "https://${HOST}"
EXPORTER_DB_USER: "exporter"
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
HTTP_RELATIVE_PATH: "/ccp-exporter"
SITE: "${SITE_ID}"
HTTP_SERVLET_REQUEST_SCHEME: "https"
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
labels:
- "traefik.enable=true"
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
- "traefik.http.routers.exporter_ccp.tls=true"
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
volumes:
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
exporter-db:
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-ccp-exporter-db
environment:
POSTGRES_USER: "exporter"
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
POSTGRES_DB: "exporter"
volumes:
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
reporter:
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
container_name: bridgehead-ccp-reporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
CROSS_ORIGINS: "https://${HOST}"
HTTP_RELATIVE_PATH: "/ccp-reporter"
SITE: "${SITE_ID}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
EXPORTER_URL: "http://exporter:8092"
LOG_FHIR_VALIDATION: "false"
HTTP_SERVLET_REQUEST_SCHEME: "https"
# In this initial development state of the bridgehead, we are trying to have so many volumes as possible.
# However, in the first executions in the CCP sites, this volume seems to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
volumes:
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
labels:
- "traefik.enable=true"
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
- "traefik.http.routers.reporter_ccp.tls=true"
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"
focus:
environment:
EXPORTER_URL: "http://exporter:8092"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"


@ -0,0 +1,8 @@
#!/bin/bash -e
if [ "$ENABLE_EXPORTER" == true ]; then
log INFO "Exporter setup detected -- will start Exporter service."
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
fi

ccp/modules/exporter.md Normal file

@ -0,0 +1,15 @@
# Exporter and Reporter
## Exporter
The exporter is a REST API that exports the data of the different databases of the bridgehead in a set of tables.
It can accept different output formats such as CSV, Excel, JSON or XML. It can also export data into Opal.
## Exporter-DB
A database that stores queries for execution by the exporter.
The exporter also manages the different executions of the same query through this database.
## Reporter
This component is a plugin of the exporter that allows creating more complex Excel reports described in templates.
It is compatible with different template engines such as Groovy and Thymeleaf.
It is well suited to generating documents such as our traditional CCP quality report.
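Building on the export request documented earlier in this changeset, a hedged variation that only swaps the output format for one of the other formats listed above; host, API key and the /ccp-exporter path are the same assumptions as in that snippet:

curl --location --request POST "https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=JSON" \
     --header "x-api-key: ${EXPORT_API_KEY}"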


@ -0,0 +1,29 @@
version: "3.7"
services:
fhir2sql:
depends_on:
- "dashboard-db"
- "blaze"
image: docker.verbis.dkfz.de/cache/samply/fhir2sql:latest
container_name: bridgehead-ccp-dashboard-fhir2sql
environment:
BLAZE_BASE_URL: "http://bridgehead-ccp-blaze:8080"
PG_HOST: "dashboard-db"
PG_USERNAME: "dashboard"
PG_PASSWORD: "${DASHBOARD_DB_PASSWORD}" # Set in dashboard-setup.sh
PG_DBNAME: "dashboard"
dashboard-db:
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-ccp-dashboard-db
environment:
POSTGRES_USER: "dashboard"
POSTGRES_PASSWORD: "${DASHBOARD_DB_PASSWORD}" # Set in dashboard-setup.sh
POSTGRES_DB: "dashboard"
volumes:
- "/var/cache/bridgehead/ccp/dashboard-db:/var/lib/postgresql/data"
focus:
environment:
POSTGRES_CONNECTION_STRING: "postgresql://dashboard:${DASHBOARD_DB_PASSWORD}@dashboard-db/dashboard"


@ -0,0 +1,8 @@
#!/bin/bash -e
if [ "$ENABLE_FHIR2SQL" == true ]; then
log INFO "Dashboard setup detected -- will start Dashboard backend and FHIR2SQL service."
OVERRIDE+=" -f ./$PROJECT/modules/fhir2sql-compose.yml"
DASHBOARD_DB_PASSWORD="$(generate_simple_password 'fhir2sql')"
FOCUS_ENDPOINT_TYPE="blaze-and-sql"
fi

ccp/modules/fhir2sql.md Normal file

@ -0,0 +1,36 @@
# fhir2sql
fhir2sql connects to Blaze, retrieves data, and syncs it with a PostgreSQL database. The application is designed to run continuously, syncing data at regular intervals.
The Dashboard module is an optional component of the Bridgehead CCP setup. When enabled, it starts two Docker services: **fhir2sql** and **dashboard-db**. Data held in PostgreSQL is only stored temporarily, and Blaze is considered to be the 'leading system' or 'source of truth'.
## Services
### fhir2sql
* Image: docker.verbis.dkfz.de/cache/samply/fhir2sql:latest
* Container name: bridgehead-ccp-dashboard-fhir2sql
* Depends on: dashboard-db
* Environment variables:
- BLAZE_BASE_URL: The base URL of the Blaze FHIR server (set to http://bridgehead-ccp-blaze:8080 in the compose file)
- PG_HOST: The hostname of the PostgreSQL database (set to dashboard-db)
- PG_USERNAME: The username for the PostgreSQL database (set to dashboard)
- PG_PASSWORD: The password for the PostgreSQL database (set to the value of DASHBOARD_DB_PASSWORD)
- PG_DBNAME: The name of the PostgreSQL database (set to dashboard)
### dashboard-db
* Image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
* Container name: bridgehead-ccp-dashboard-db
* Environment variables:
- POSTGRES_USER: The username for the PostgreSQL database (set to dashboard)
- POSTGRES_PASSWORD: The password for the PostgreSQL database (set to the value of DASHBOARD_DB_PASSWORD)
- POSTGRES_DB: The name of the PostgreSQL database (set to dashboard)
* Volumes:
- /var/cache/bridgehead/ccp/dashboard-db:/var/lib/postgresql/data
The volume used by dashboard-db can be removed safely; it will be restored to a working state by re-importing the data from Blaze.
### Environment Variables
* DASHBOARD_DB_PASSWORD: A generated password for the PostgreSQL database, created using a salt string and the SHA1 hash function.
* POSTGRES_TAG: The tag of the PostgreSQL image to use (not set in this module, but required by the dashboard-db service).
### Setup
To enable the Dashboard module, set the ENABLE_FHIR2SQL environment variable to true. The dashboard-setup.sh script will then start the fhir2sql and dashboard-db services, using the environment variables and volumes defined above.
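As a minimal sketch, assuming the module flags live in `/etc/bridgehead/ccp.conf` like the other CCP module switches, enabling the Dashboard module comes down to a single line; `dashboard-setup.sh` then adds `fhir2sql-compose.yml` to the compose override on the next start.
```bash
# /etc/bridgehead/ccp.conf (sketch) -- evaluated by dashboard-setup.sh on startup
ENABLE_FHIR2SQL=true
```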

View File

@ -1,10 +1,12 @@
version: "3.7" version: "3.7"
services: services:
id-manager: id-manager:
image: docker.verbis.dkfz.de/bridgehead/magicpl image: docker.verbis.dkfz.de/bridgehead/magicpl
container_name: bridgehead-id-manager container_name: bridgehead-id-manager
environment: environment:
TOMCAT_REVERSEPROXY_FQDN: ${HOST} TOMCAT_REVERSEPROXY_FQDN: ${HOST}
TOMCAT_REVERSEPROXY_SSL: "true"
MAGICPL_SITE: ${IDMANAGEMENT_FRIENDLY_ID} MAGICPL_SITE: ${IDMANAGEMENT_FRIENDLY_ID}
MAGICPL_ALLOWED_ORIGINS: https://${HOST} MAGICPL_ALLOWED_ORIGINS: https://${HOST}
MAGICPL_LOCAL_PATIENTLIST_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY} MAGICPL_LOCAL_PATIENTLIST_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
@ -12,21 +14,30 @@ services:
MAGICPL_CONNECTOR_APIKEY: ${IDMANAGER_READ_APIKEY} MAGICPL_CONNECTOR_APIKEY: ${IDMANAGER_READ_APIKEY}
MAGICPL_CENTRAL_PATIENTLIST_APIKEY: ${IDMANAGER_CENTRAL_PATIENTLIST_APIKEY} MAGICPL_CENTRAL_PATIENTLIST_APIKEY: ${IDMANAGER_CENTRAL_PATIENTLIST_APIKEY}
MAGICPL_CONTROLNUMBERGENERATOR_APIKEY: ${IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY} MAGICPL_CONTROLNUMBERGENERATOR_APIKEY: ${IDMANAGER_CONTROLNUMBERGENERATOR_APIKEY}
MAGICPL_OIDC_CLIENT_ID: ${IDMANAGER_AUTH_CLIENT_ID}
MAGICPL_OIDC_CLIENT_SECRET: ${IDMANAGER_AUTH_CLIENT_SECRET}
depends_on: depends_on:
- patientlist - patientlist
- traefik-forward-auth
labels: labels:
- "traefik.enable=true" - "traefik.enable=true"
# Router with Authentication
- "traefik.http.routers.id-manager.rule=PathPrefix(`/id-manager`)" - "traefik.http.routers.id-manager.rule=PathPrefix(`/id-manager`)"
- "traefik.http.services.id-manager.loadbalancer.server.port=8080"
- "traefik.http.routers.id-manager.tls=true" - "traefik.http.routers.id-manager.tls=true"
- "traefik.http.routers.id-manager.middlewares=traefik-forward-auth-idm"
- "traefik.http.routers.id-manager.service=id-manager-service"
# Router without Authentication
- "traefik.http.routers.id-manager-compatibility.rule=PathPrefix(`/id-manager/paths/translator/getIds`)"
- "traefik.http.routers.id-manager-compatibility.tls=true"
- "traefik.http.routers.id-manager-compatibility.service=id-manager-service"
# Definition of Service
- "traefik.http.services.id-manager-service.loadbalancer.server.port=8080"
- "traefik.http.services.id-manager-service.loadbalancer.server.scheme=http"
patientlist: patientlist:
image: docker.verbis.dkfz.de/bridgehead/mainzelliste image: docker.verbis.dkfz.de/bridgehead/mainzelliste
container_name: bridgehead-patientlist container_name: bridgehead-patientlist
environment: environment:
- TOMCAT_REVERSEPROXY_FQDN=${HOST} - TOMCAT_REVERSEPROXY_FQDN=${HOST}
- TOMCAT_REVERSEPROXY_SSL=true
- ML_SITE=${IDMANAGEMENT_FRIENDLY_ID} - ML_SITE=${IDMANAGEMENT_FRIENDLY_ID}
- ML_DB_PASS=${PATIENTLIST_POSTGRES_PASSWORD} - ML_DB_PASS=${PATIENTLIST_POSTGRES_PASSWORD}
- ML_API_KEY=${IDMANAGER_LOCAL_PATIENTLIST_APIKEY} - ML_API_KEY=${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
@ -42,7 +53,7 @@ services:
- patientlist-db - patientlist-db
patientlist-db: patientlist-db:
image: postgres:15.1-alpine image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-patientlist-db container_name: bridgehead-patientlist-db
environment: environment:
POSTGRES_USER: "mainzelliste" POSTGRES_USER: "mainzelliste"
@ -53,5 +64,49 @@ services:
# NOTE: Add backups here. This is only imported if /var/lib/bridgehead/data/patientlist/ is empty!!! # NOTE: Add backups here. This is only imported if /var/lib/bridgehead/data/patientlist/ is empty!!!
- "/tmp/bridgehead/patientlist/:/docker-entrypoint-initdb.d/" - "/tmp/bridgehead/patientlist/:/docker-entrypoint-initdb.d/"
traefik-forward-auth:
image: docker.verbis.dkfz.de/cache/oauth2-proxy/oauth2-proxy:latest
environment:
- http_proxy=http://forward_proxy:3128
- https_proxy=http://forward_proxy:3128
- OAUTH2_PROXY_PROVIDER=oidc
- OAUTH2_PROXY_SKIP_PROVIDER_BUTTON=true
- OAUTH2_PROXY_OIDC_ISSUER_URL=https://login.verbis.dkfz.de/realms/master
- OAUTH2_PROXY_CLIENT_ID=bridgehead-${SITE_ID}
- OAUTH2_PROXY_CLIENT_SECRET=${IDMANAGER_AUTH_CLIENT_SECRET}
- OAUTH2_PROXY_COOKIE_SECRET=${IDMANAGER_AUTH_COOKIE_SECRET}
- OAUTH2_PROXY_COOKIE_NAME=_BRIDGEHEAD_oauth2_idm
- OAUTH2_PROXY_COOKIE_DOMAINS=.${HOST}
- OAUTH2_PROXY_HTTP_ADDRESS=:4180
- OAUTH2_PROXY_REVERSE_PROXY=true
- OAUTH2_PROXY_WHITELIST_DOMAINS=.${HOST}
- OAUTH2_PROXY_UPSTREAMS=static://202
- OAUTH2_PROXY_EMAIL_DOMAINS=*
- OAUTH2_PROXY_SCOPE=openid profile email
# Pass Authorization Header and some user information to backend services
- OAUTH2_PROXY_SET_AUTHORIZATION_HEADER=true
- OAUTH2_PROXY_SET_XAUTHREQUEST=true
# Keycloak has an expiration time of 60s therefore oauth2-proxy needs to refresh after that
- OAUTH2_PROXY_COOKIE_REFRESH=60s
- OAUTH2_PROXY_ALLOWED_GROUPS=DKTK-CCP-PPSN
- OAUTH2_PROXY_PROXY_PREFIX=/oauth2-idm
labels:
- "traefik.enable=true"
- "traefik.http.services.traefik-forward-auth.loadbalancer.server.port=4180"
- "traefik.http.routers.traefik-forward-auth.rule=Host(`${HOST}`) && PathPrefix(`/oauth2-idm`)"
- "traefik.http.routers.traefik-forward-auth.tls=true"
- "traefik.http.middlewares.traefik-forward-auth-idm.forwardauth.address=http://traefik-forward-auth:4180"
- "traefik.http.middlewares.traefik-forward-auth-idm.forwardauth.authResponseHeaders=Authorization"
depends_on:
forward_proxy:
condition: service_healthy
ccp-patient-project-identificator:
image: docker.verbis.dkfz.de/cache/samply/ccp-patient-project-identificator
container_name: bridgehead-ccp-patient-project-identificator
environment:
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
SITE_NAME: ${IDMANAGEMENT_FRIENDLY_ID}
volumes: volumes:
patientlist-db-data: patientlist-db-data:

View File

@ -1,12 +1,12 @@
- #!/bin/bash
+ #!/bin/bash -e
function idManagementSetup() {
if [ -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log INFO "id-management setup detected -- will start id-management (mainzelliste & magicpl)."
- OVERRIDE+=" -f ./$PROJECT/modules/id-management-compose.yml"
+ OVERRIDE+=" -f ./ccp/modules/id-management-compose.yml"
# Auto Generate local Passwords
- PATIENTLIST_POSTGRES_PASSWORD="$(echo \"id-management-module-db-password-salt\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
+ PATIENTLIST_POSTGRES_PASSWORD="$(echo \"id-management-module-db-password-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
IDMANAGER_LOCAL_PATIENTLIST_APIKEY="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
# Transform Seeds Configuration to pass it to the Mainzelliste Container
@ -39,6 +39,7 @@ function applySpecialCases() {
result="$1";
result="${result/Lmu/LMU}";
result="${result/Tum/TUM}";
+ result="${result/Dktk Test/Teststandort}";
echo "$result";
}

View File

@ -2,7 +2,7 @@ version: "3.7"
services:
mtba:
- image: samply/mtba:develop
+ image: docker.verbis.dkfz.de/cache/samply/mtba:develop
container_name: bridgehead-mtba
environment:
BLAZE_STORE_URL: http://blaze:8080
@ -11,22 +11,30 @@ services:
ID_MANAGER_API_KEY: ${IDMANAGER_UPLOAD_APIKEY}
ID_MANAGER_PSEUDONYM_ID_TYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
ID_MANAGER_URL: http://id-manager:8080/id-manager
- PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER}
+ PATIENT_CSV_FIRST_NAME_HEADER: ${MTBA_PATIENT_CSV_FIRST_NAME_HEADER:-FIRST_NAME}
- PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER}
+ PATIENT_CSV_LAST_NAME_HEADER: ${MTBA_PATIENT_CSV_LAST_NAME_HEADER:-LAST_NAME}
- PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER}
+ PATIENT_CSV_GENDER_HEADER: ${MTBA_PATIENT_CSV_GENDER_HEADER:-GENDER}
- PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER}
+ PATIENT_CSV_BIRTHDAY_HEADER: ${MTBA_PATIENT_CSV_BIRTHDAY_HEADER:-BIRTHDAY}
CBIOPORTAL_URL: http://cbioportal:8080
- FILE_CHARSET: ${MTBA_FILE_CHARSET}
+ FILE_CHARSET: ${MTBA_FILE_CHARSET:-UTF-8}
- FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE}
+ FILE_END_OF_LINE: ${MTBA_FILE_END_OF_LINE:-LF}
- CSV_DELIMITER: ${MTBA_CSV_DELIMITER}
+ CSV_DELIMITER: ${MTBA_CSV_DELIMITER:-TAB}
+ HTTP_RELATIVE_PATH: "/mtba"
+ OIDC_ADMIN_GROUP: "${OIDC_ADMIN_GROUP}"
+ OIDC_CLIENT_ID: "${OIDC_PRIVATE_CLIENT_ID}"
+ OIDC_CLIENT_SECRET: "${OIDC_CLIENT_SECRET}"
+ OIDC_REALM: "${OIDC_REALM}"
+ OIDC_URL: "${OIDC_URL}"
labels:
- "traefik.enable=true"
- - "traefik.http.routers.mtba.rule=PathPrefix(`/`)"
+ - "traefik.http.routers.mtba_ccp.rule=PathPrefix(`/mtba`)"
- - "traefik.http.services.mtba.loadbalancer.server.port=80"
+ - "traefik.http.services.mtba_ccp.loadbalancer.server.port=8480"
- - "traefik.http.routers.mtba.tls=true"
+ - "traefik.http.routers.mtba_ccp.tls=true"
volumes:
- - /tmp/bridgehead/mtba/input:/app/input
+ - /var/cache/bridgehead/ccp/mtba/input:/app/input
- - /tmp/bridgehead/mtba/persist:/app/persist
+ - /var/cache/bridgehead/ccp/mtba/persist:/app/persist
# TODO: Include CBioPortal in Deployment ...
# NOTE: CBioPortal can't load data while the system is running. So after import of data bridgehead needs to be restarted!

12
ccp/modules/mtba-setup.sh Normal file
View File

@ -0,0 +1,12 @@
#!/bin/bash -e
function mtbaSetup() {
if [ -n "$ENABLE_MTBA" ];then
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
fi
OVERRIDE+=" -f ./$PROJECT/modules/mtba-compose.yml"
add_private_oidc_redirect_url "/mtba/*"
fi
}

6
ccp/modules/mtba.md Normal file
View File

@ -0,0 +1,6 @@
# Molecular Tumor Board Alliance (MTBA)
In this module, the genetic data to import is stored in a directory (/tmp/bridgehead/mtba/input). A process checks
regularly whether there are files in this directory. The files are pseudonymized once the IDAT is provided, combined
with the clinical data from Blaze, and imported into cBioPortal. In addition, these files are also imported into
Blaze.
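As a rough sketch, importing a file means placing it in the watched directory; note that the current compose file mounts `/var/cache/bridgehead/ccp/mtba/input` as the input volume, while older setups used `/tmp/bridgehead/mtba/input`. The file name below is only an example.
```bash
# Drop a genetic data file into the MTBA input directory (sketch);
# the watcher process picks it up on its next regular check.
cp variants.tsv /var/cache/bridgehead/ccp/mtba/input/
```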

View File

@ -0,0 +1,26 @@
version: "3.7"
volumes:
nngm-rest:
services:
connector:
container_name: bridgehead-connector
image: docker.verbis.dkfz.de/ccp/nngm-rest:main
environment:
CTS_MAGICPL_API_KEY: ${NNGM_MAGICPL_APIKEY}
CTS_API_KEY: ${NNGM_CTS_APIKEY}
CRYPT_KEY: ${NNGM_CRYPTKEY}
#CTS_MAGICPL_SITE: ${SITE_ID}TODO
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/nngm-connector`)"
- "traefik.http.middlewares.connector_strip.stripprefix.prefixes=/nngm-connector"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
- "traefik.http.routers.connector.middlewares=connector_strip,auth-nngm"
volumes:
- nngm-rest:/var/log
traefik:
labels:
- "traefik.http.middlewares.auth-nngm.basicauth.users=${NNGM_AUTH}"

View File

@ -0,0 +1,6 @@
#!/bin/bash -e
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/modules/nngm-compose.yml"
fi

View File

@ -0,0 +1,19 @@
version: "3.7"
services:
obds2fhir-rest:
container_name: bridgehead-obds2fhir-rest
image: docker.verbis.dkfz.de/samply/obds2fhir-rest:main
environment:
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
SALT: ${LOCAL_SALT}
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
labels:
- "traefik.enable=true"
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"
- "traefik.http.middlewares.obds2fhir-rest_strip.stripprefix.prefixes=/obds2fhir-rest,/adt2fhir-rest"
- "traefik.http.services.obds2fhir-rest.loadbalancer.server.port=8080"
- "traefik.http.routers.obds2fhir-rest.tls=true"
- "traefik.http.routers.obds2fhir-rest.middlewares=obds2fhir-rest_strip,auth"

View File

@ -0,0 +1,13 @@
#!/bin/bash
function obds2fhirRestSetup() {
if [ -n "$ENABLE_OBDS2FHIR_REST" ]; then
log INFO "oBDS2FHIR-REST setup detected -- will start obds2fhir-rest module."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
PATIENTLIST_URL=" "
fi
OVERRIDE+=" -f ./ccp/modules/obds2fhir-rest-compose.yml"
LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
fi
}

View File

@ -0,0 +1,81 @@
version: "3.7"
services:
teiler-orchestrator:
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
container_name: bridgehead-teiler-orchestrator
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_orchestrator_ccp.rule=PathPrefix(`/ccp-teiler`)"
- "traefik.http.services.teiler_orchestrator_ccp.loadbalancer.server.port=9000"
- "traefik.http.routers.teiler_orchestrator_ccp.tls=true"
- "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
- "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
environment:
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
HTTP_RELATIVE_PATH: "/ccp-teiler"
teiler-dashboard:
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
container_name: bridgehead-teiler-dashboard
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_dashboard_ccp.rule=PathPrefix(`/ccp-teiler-dashboard`)"
- "traefik.http.services.teiler_dashboard_ccp.loadbalancer.server.port=80"
- "traefik.http.routers.teiler_dashboard_ccp.tls=true"
- "traefik.http.middlewares.teiler_dashboard_ccp_strip.stripprefix.prefixes=/ccp-teiler-dashboard"
- "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
environment:
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
OIDC_URL: "${OIDC_URL}"
OIDC_REALM: "${OIDC_REALM}"
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
TEILER_PROJECT: "${PROJECT}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
TEILER_USER: "${OIDC_USER_GROUP}"
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"
teiler-backend:
image: docker.verbis.dkfz.de/ccp/dktk-teiler-backend:latest
container_name: bridgehead-teiler-backend
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_backend_ccp.rule=PathPrefix(`/ccp-teiler-backend`)"
- "traefik.http.services.teiler_backend_ccp.loadbalancer.server.port=8085"
- "traefik.http.routers.teiler_backend_ccp.tls=true"
- "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
- "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
environment:
LOG_LEVEL: "INFO"
APPLICATION_PORT: "8085"
APPLICATION_ADDRESS: "${HOST}"
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
CENTRAX_URL: "${CENTRAXX_URL}"
HTTP_PROXY: "http://forward_proxy:3128"
ENABLE_MTBA: "${ENABLE_MTBA}"
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
secrets:
- ccp.conf
secrets:
ccp.conf:
file: /etc/bridgehead/ccp.conf

View File

@ -0,0 +1,9 @@
#!/bin/bash -e
if [ "$ENABLE_TEILER" == true ];then
log INFO "Teiler setup detected -- will start Teiler services."
OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
TEILER_DEFAULT_LANGUAGE=DE
TEILER_DEFAULT_LANGUAGE_LOWER_CASE=${TEILER_DEFAULT_LANGUAGE,,}
add_public_oidc_redirect_url "/ccp-teiler/*"
fi

19
ccp/modules/teiler.md Normal file
View File

@ -0,0 +1,19 @@
# Teiler
This module orchestrates the different microfrontends of the bridgehead as a single page application.
## Teiler Orchestrator
A Single-SPA component that consists of the root HTML site of the single page application and JavaScript code that
retrieves the information about the microfrontends from the Teiler backend and is responsible for registering them. With the
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.
The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
These microfrontends can also run standalone, but they need to be extended with Single-SPA (https://single-spa.js.org/docs/ecosystem).
Three templates (Angular, Vue, React) are available that can be extended for direct use in the Teiler.
## Teiler Dashboard
It consists of the main dashboard and a set of embedded services.
### Login
The user and password are configured in ccp.local.conf.
## Teiler Backend
In this component, the microfrontends are configured.
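For a quick check, the orchestrator is exposed by Traefik under the `/ccp-teiler` path configured in `teiler-compose.yml`; the following is only a sketch and assumes the module was enabled via `ENABLE_TEILER=true`:
```bash
# Fetch the root HTML site of the single page application (sketch).
curl -k "https://${HOST}/ccp-teiler/"
```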

View File

@ -1,32 +0,0 @@
version: "3.7"
services:
connector:
container_name: bridgehead-connector
image: docker.verbis.dkfz.de/ccp/connector:bk2
environment:
POSTGRES_PASSWORD: ${CONNECTOR_POSTGRES_PASSWORD}
NNGM_MAGICPL_APIKEY: ${NNGM_MAGICPL_APIKEY}
NNGM_MAINZELLISTE_APIKEY: ${NNGM_MAINZELLISTE_APIKEY}
NNGM_CTS_APIKEY: ${NNGM_CTS_APIKEY}
NNGM_CRYPTKEY: ${NNGM_CRYPTKEY}
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.connector.rule=PathPrefix(`/ccp-connector`)"
- "traefik.http.services.connector.loadbalancer.server.port=8080"
- "traefik.http.routers.connector.tls=true"
connector_db:
image: postgres:9.5-alpine
container_name: bridgehead-ccp-connector-db
volumes:
- "connector_db_data:/var/lib/postgresql/data"
environment:
POSTGRES_DB: "samplyconnector"
POSTGRES_USER: "samplyconnector"
POSTGRES_PASSWORD: ${CONNECTOR_POSTGRES_PASSWORD}
restart: always
volumes:
connector_db_data:

View File

@ -1,21 +0,0 @@
#!/bin/bash
function nngmSetup() {
if [ -n "$NNGM_CTS_APIKEY" ]; then
log INFO "nNGM setup detected -- will start nNGM Connector."
OVERRIDE+=" -f ./$PROJECT/nngm-compose.yml"
fi
CONNECTOR_POSTGRES_PASSWORD="$(echo \"This is a salt string to generate one consistent password. It is not required to be secret.\" | openssl rsautl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
}
function mtbaSetup() {
# TODO: Check if ID-Management Module is activated!
if [ -n "$ENABLE_MTBA" ];then
log INFO "MTBA setup detected -- will start MTBA Service and CBioPortal."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Detected MTBA Module configuration but ID-Management Module seems not to be configured!"
exit 1;
fi
OVERRIDE+=" -f ./$PROJECT/mtba-compose.yml"
fi
}

View File

@ -0,0 +1,2 @@
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCgpjb250ZXh0IFBhdGllbnQKCgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX1BSSU1BUllfRElBR05PU0lTX05PX1NPUlRfU1RSQVRJRklFUgpES1RLX1NUUkFUX0FHRV9DTEFTU19TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRLVEtfUkVQTEFDRV9TUEVDSU1FTl9TVFJBVElGSUVSaWYgSW5Jbml0aWFsUG9wdWxhdGlvbiB0aGVuIFtTcGVjaW1lbl0gZWxzZSB7fSBhcyBMaXN0PFNwZWNpbWVuPgpES1RLX1NUUkFUX1BST0NFRFVSRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX01FRElDQVRJT05fU1RSQVRJRklFUgoKICBES1RLX1JFUExBQ0VfSElTVE9MT0dZX1NUUkFUSUZJRVIKIGlmIGhpc3RvLmNvZGUuY29kaW5nLndoZXJlKGNvZGUgPSAnNTk4NDctNCcpLmNvZGUuZmlyc3QoKSBpcyBudWxsIHRoZW4gMCBlbHNlIDEKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwoKY29kZXN5c3RlbSBsb2luYzogJ2h0dHA6Ly9sb2luYy5vcmcnCmNvZGVzeXN0ZW0gaWNkMTA6ICdodHRwOi8vZmhpci5kZS9Db2RlU3lzdGVtL2JmYXJtL2ljZC0xMC1nbScKY29kZXN5c3RlbSBtb3JwaDogJ3VybjpvaWQ6Mi4xNi44NDAuMS4xMTM4ODMuNi40My4xJwoKY29udGV4dCBQYXRpZW50CgoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUklNQVJZX0RJQUdOT1NJU19OT19TT1JUX1NUUkFUSUZJRVIKREtUS19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpES1RLX1JFUExBQ0VfU1BFQ0lNRU5fU1RSQVRJRklFUmlmIEluSW5pdGlhbFBvcHVsYXRpb24gdGhlbiBbU3BlY2ltZW5dIGVsc2Uge30gYXMgTGlzdDxTcGVjaW1lbj4KREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREtUS19TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCiAgREtUS19SRVBMQUNFX0hJU1RPTE9HWV9TVFJBVElGSUVSCiBpZiBoaXN0by5jb2RlLmNvZGluZy53aGVyZShjb2RlID0gJzU5ODQ3LTQnKS5jb2RlLmZpcnN0KCkgaXMgbnVsbCB0aGVuIDAgZWxzZSAxCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTihleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzYxJyBmcm9tIGljZDEwXSkgYW5kIAooKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQwLzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQ3LzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NDgwLzMnKSBvciAKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NTAwLzMnKSk=

View File

@ -1,20 +1,20 @@
-----BEGIN CERTIFICATE-----
-MIIDNTCCAh2gAwIBAgIUMeGRSrNPhRdQ1tU7uK5+lUa4f38wDQYJKoZIhvcNAQEL
-BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjIwOTI5MTQxMjU1WhcNMzIw
-OTI2MTQxMzI1WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
-AQEBBQADggEPADCCAQoCggEBAMYyroOUeb27mYzClOrjCmgIceLalsFA0aVCh5mZ
-KtP8+1U3oq/7exP30gXiJojxW7xoerfyQY9s0Sz5YYbxYbuskFOYEtyAILB/pxgd
-+k+J3tlZKolpfmo7WT5tZiHxH/zjrtAYGnuB2xPHRMCWh/tHYrELgXQuilNol24y
-GBa1plTlARy0aKEDUHp87WLhD2qH7B8sFlLgo0+gunE1UtR2HMSPF45w3VXszyG6
-fJNrAj0yPnKy3Dm1BMO3jDO2e0A9lCQ71a4j4TeKePfCk1xCArSu6PpiwiacKplF
-c6CRR6KrWVm2g+8Y2hFcOBG/Py2xusm3PWbpylGq6vtFRkkCAwEAAaN7MHkwDgYD
-VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEFxD6BQwQO5
-xsJ+3cvZypsnh6dDMB8GA1UdIwQYMBaAFEFxD6BQwQO5xsJ+3cvZypsnh6dDMBYG
-A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQB5zTeIhV/3
-3Am6O144EFtnIeaZ2w0D6aEHqHAZp50vJv3+uQfOliCOzgw7VDxI4Zz2JALjlR/i
-uOYHsu3YIRMIOmPOjqrdDJa6auB0ufL4oUPfCRln7Fh0f3JVlz3BUoHsSDt949p4
-g0nnsciL2JHuzlqjn7Jyt3L7dAHrlFKulCcuidG5D3cqXrRCbF83f+k3TC/HRiNd
-25oMi7I4MP/SOCdfQGUGIsHIf/0hSm3pNjDOrC/XuI/8gh2f5io+Y8V+hMwMBcm4
-JbH8bdyBB+EIhsNbTwf2MWntD5bmg47sf7hh23aNvKXI67Li1pTI2t1CqiGnFR0U
-fCEpeaEAHs0k
+MIIDNTCCAh2gAwIBAgIUN7yzueIZzwpe8PaPEIMY8zoH+eMwDQYJKoZIhvcNAQEL
+BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjMwNTIzMTAxNzIzWhcNMzMw
+NTIwMTAxNzUzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
+AQEBBQADggEPADCCAQoCggEBAN5JAj+HydSGaxvA0AOcrXVTZ9FfsH0cMVBlQb72
+bGZgrRvkqtB011TNXZfsHl7rPxCY61DcsDJfFq3+8VHT+S9HE0qV1bEwP+oA3xc4
+Opq77av77cNNOqDC7h+jyPhHcUaE33iddmrH9Zn2ofWTSkKHHu3PAe5udCrc2QnD
+4PLRF6gqiEY1mcGknJrXj1ff/X0nRY/m6cnHNXz0Cvh8oPOtbdfGgfZjID2/fJNP
+fNoNKqN+5oJAZ+ZZ9id9rBvKj1ivW3F2EoGjZF268SgZzc5QrM/D1OpSBQf5SF/V
+qUPcQTgt9ry3YR+SZYazLkfKMEOWEa0WsqJVgXdQ6FyergcCAwEAAaN7MHkwDgYD
+VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFEa70kcseqU5
+bHx2zSt4bG21HokhMB8GA1UdIwQYMBaAFEa70kcseqU5bHx2zSt4bG21HokhMBYG
+A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCGmE7NXW4T
+6J4mV3b132cGEMD7grx5JeiXK5EHMlswUS+Odz0NcBNzhUHdG4WVMbrilHbI5Ua+
+6jdKx5WwnqzjQvElP0MCw6sH/35gbokWgk1provOP99WOFRsQs+9Sm8M2XtMf9HZ
+m3wABwU/O+dhZZ1OT1PjSZD0OKWKqH/KvlsoF5R6P888KpeYFiIWiUNS5z21Jm8A
+ZcllJjiRJ60EmDwSUOQVJJSMOvtr6xTZDZLtAKSN8zN08lsNGzyrFwqjDwU0WTqp
+scMXEGBsWQjlvxqDnXyljepR0oqRIjOvgrWaIgbxcnu98tK/OdBGwlAPKNUW7Crr
+vO+eHxl9iqd4
-----END CERTIFICATE-----

View File

@ -1,19 +1,32 @@
- BROKER_ID=broker.dev.ccp-it.dktk.dkfz.de
+ BROKER_ID=broker.ccp-it.dktk.dkfz.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
- SPOT_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
- SPOT_BEAM_SECRET_LONG="ApiKey spot.${PROXY_ID} ${SPOT_BEAM_SECRET_SHORT}"
- REPORTHUB_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
- REPORTHUB_BEAM_SECRET_LONG="ApiKey report-hub.${PROXY_ID} ${REPORTHUB_BEAM_SECRET_SHORT}"
+ FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
+ FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
- # This will load id-management setup. Effective only if id-management configuration is defined.
- source $PROJECT/modules/id-management-setup.sh
+ BROKER_URL_FOR_PREREQ=$BROKER_URL
+ OIDC_USER_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})"
+ OIDC_ADMIN_GROUP="DKTK_CCP_$(capitalize_first_letter ${SITE_ID})_Verwalter"
+ OIDC_PRIVATE_CLIENT_ID=${SITE_ID}-private
+ OIDC_PUBLIC_CLIENT_ID=${SITE_ID}-public
+ # Use "test-realm-01" for testing
+ OIDC_REALM="${OIDC_REALM:-master}"
+ OIDC_URL="https://login.verbis.dkfz.de"
+ OIDC_ISSUER_URL="${OIDC_URL}/realms/${OIDC_REALM}"
+ OIDC_GROUP_CLAIM="groups"
+ POSTGRES_TAG=15.6-alpine
+ for module in $PROJECT/modules/*.sh
+ do
+ log DEBUG "sourcing $module"
+ source $module
+ done
idManagementSetup
- # This will load nngm setup. Effective only if nngm configuration is defined.
- source $PROJECT/nngm-setup.sh
- nngmSetup
- source $PROJECT/exliquid-setup.sh
- exliquidSetup
mtbaSetup
+ obds2fhirRestSetup
+ blazeSecondarySetup

66
dhki/docker-compose.yml Normal file
View File

@ -0,0 +1,66 @@
version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-dhki-blaze
environment:
BASE_URL: "http://bridgehead-dhki-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"
labels:
- "traefik.enable=true"
- "traefik.http.routers.blaze_dhki.rule=PathPrefix(`/dhki-localdatamanagement`)"
- "traefik.http.middlewares.dhki_b_strip.stripprefix.prefixes=/dhki-localdatamanagement"
- "traefik.http.services.blaze_dhki.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze_dhki.middlewares=dhki_b_strip,auth"
- "traefik.http.routers.blaze_dhki.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${PROXY_ID}
PROXY_ID: ${PROXY_ID}
BLAZE_URL: "http://bridgehead-dhki-blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
QUERIES_TO_CACHE: '/queries_to_cache.conf'
volumes:
- /srv/docker/bridgehead/dhki/queries_to_cache.conf:/queries_to_cache.conf:ro
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/dhki/root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem

View File

@ -0,0 +1,2 @@
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CgpES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0FHRV9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RFQ0VBU0VEX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfRElBR05PU0lTX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfU1BFQ0lNRU5fU1RSQVRJRklFUgoKREtUS19TVFJBVF9QUk9DRURVUkVfU1RSQVRJRklFUgoKREhLSV9TVFJBVF9NRURJQ0FUSU9OX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfRU5DT1VOVEVSX1NUUkFUSUZJRVIKREtUS19TVFJBVF9ERUZfSU5fSU5JVElBTF9QT1BVTEFUSU9OCnRydWU=
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIGljZDEwOiAnaHR0cDovL2ZoaXIuZGUvQ29kZVN5c3RlbS9iZmFybS9pY2QtMTAtZ20nCmNvZGVzeXN0ZW0gbW9ycGg6ICd1cm46b2lkOjIuMTYuODQwLjEuMTEzODgzLjYuNDMuMScKCmNvbnRleHQgUGF0aWVudAoKREtUS19TVFJBVF9HRU5ERVJfU1RSQVRJRklFUgoKREtUS19TVFJBVF9BR0VfU1RSQVRJRklFUgoKREtUS19TVFJBVF9ERUNFQVNFRF9TVFJBVElGSUVSCgpES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCgpESEtJX1NUUkFUX1NQRUNJTUVOX1NUUkFUSUZJRVIKCkRLVEtfU1RSQVRfUFJPQ0VEVVJFX1NUUkFUSUZJRVIKCkRIS0lfU1RSQVRfTUVESUNBVElPTl9TVFJBVElGSUVSCgpESEtJX1NUUkFUX0VOQ09VTlRFUl9TVFJBVElGSUVSCkRLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgooKChleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzM0LjknIGZyb20gaWNkMTBdKSBvcgooZXhpc3RzIFtDb25kaXRpb246IENvZGUgJ0MzNC44JyBmcm9tIGljZDEwXSkgb3IKKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDMzQuMCcgZnJvbSBpY2QxMF0pIG9yCihleGlzdHMgW0NvbmRpdGlvbjogQ29kZSAnQzM0LjInIGZyb20gaWNkMTBdKSBvcgooZXhpc3RzIFtDb25kaXRpb246IENvZGUgJ0MzNC4xJyBmcm9tIGljZDEwXSkgb3IKKGV4aXN0cyBbQ29uZGl0aW9uOiBDb2RlICdDMzQuMycgZnJvbSBpY2QxMF0pKSBhbmQKKChleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MTQxLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgxNDMvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODE0Ny8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MjUwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgyNTEvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODI1Mi8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MjUzLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgyNTUvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODI2MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4MzEwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzgzMzMvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODQ3MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5jb2RlIGNvbnRhaW5zICc4NDgwLzMnKSBvcgooZXhpc3RzIGZyb20gW09ic2VydmF0aW9uOiBDb2RlICc1OTg0Ny00JyBmcm9tIGxvaW5jXSBPCndoZXJlIE8udmFsdWUuY29kaW5nLmNvZGUgY29udGFpbnMgJzg0OTAvMycpIG9yCihleGlzdHMgZnJvbSBbT2JzZXJ2YXRpb246IENvZGUgJzU5ODQ3LTQnIGZyb20gbG9pbmNdIE8Kd2hlcmUgTy52YWx1ZS5jb2RpbmcuY29kZSBjb250YWlucyAnODU1MC8zJykgb3IKKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNTk4NDctNCcgZnJvbSBsb2luY10gTwp3aGVyZSBPLnZhbHVlLmNvZGluZy5
jb2RlIGNvbnRhaW5zICc4MDUyLzMnKSkp

20
dhki/root.crt.pem Normal file
View File

@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUSWUPebUMNfJvPKMjdgX+WiH+OXgwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTA1MDg1NTM4WhcNMzQw
MTAyMDg1NjA4WjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL/nvo9Bn1/6Z/K4BKoLM6/mVziM4cmXTVx4npVz
pnptwPPFU4rz47akRZ6ZMD5MO0bsyvaxG1nwVrW3aAGC42JIGTdZHKwMKrd35sxw
k3YlGJagGUs+bKHUCL55OcSmyDWlh/UhA8+eeJWjOt9u0nYXv+vi+N4JSHA0oC9D
bTF1v+7blrTQagf7PTPSF3pe22iXOjJYdOkZMWoMoNAjn6F958fkLNLY3csOZwvP
/3eyNNawyAEPWeIm33Zk630NS8YHggz6WCqwXvuaKb6910mRP8jgauaYsqgsOyDt
pbWuvk//aZWdGeN9RNsAA8eGppygiwm/m9eRC6I0shDwv6ECAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFFn/dbW1J3ry
7TBzbKo3H4vJr2MiMB8GA1UdIwQYMBaAFFn/dbW1J3ry7TBzbKo3H4vJr2MiMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCa2V8B8aad
XNDS1EUIi9oMdvGvkolcdFwx9fI++qu9xSIaZs5GETHck3oYKZF0CFP5ESnKDn5w
enWgm5M0y+hVZppzB163WmET1efBXwrdyn8j4336NjX352h63JGWCaI2CfZ1qG1p
kf5W9CVXllSFaJe5r994ovgyHvK2ucWwe8l8iMJbQhH79oKi/9uJMCD6aUXnpg1K
nPHW1lsVx6foqYWijdBdtFU2i7LSH2OYo0nb1PgRnY/SABV63JHfJnqW9dZy4f7G
rpsvvrmFrKmEnCZH0n6qveY3Z5bMD94Yx0ebkCTYEqAw3pV65gwxrzBTpEg6dgF0
eG0eKFUS0REJ
-----END CERTIFICATE-----

28
dhki/vars Normal file
View File

@ -0,0 +1,28 @@
BROKER_ID=broker.hector.dkfz.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=support-ccp@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
POSTGRES_TAG=15.6-alpine
for module in ccp/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
idManagementSetup
obds2fhirRestSetup
for module in modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
transfairSetup

View File

@ -0,0 +1,42 @@
## How to Change Config Access Token
### 1. Generate a New Access Token
1. Go to your Git configuration repository provider; this is either [git.verbis.dkfz.de](https://git.verbis.dkfz.de) or [gitlab.bbmri-eric.eu](https://gitlab.bbmri-eric.eu).
2. Navigate to the configuration repository for your site.
3. Go to **Settings → Access Tokens** to check if your Access Token is valid or expired.
- **If expired**, create a new Access Token.
4. Configure the new Access Token with the following settings:
- **Expiration date**: One year from today, minus one day.
- **Role**: Developer.
- **Scope**: Only `read_repository`.
5. Save the newly generated Access Token in a secure location.
---
### 2. Replace the Old Access Token
1. Navigate to `/etc/bridgehead` in your system.
2. Run the following command to retrieve the current Git remote URL:
```bash
git remote get-url origin
```
Example output:
```
https://name40dkfz-heidelberg.de:<old_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
```
3. Replace `<old_access_token>` with your new Access Token in the URL.
4. Set the updated URL using the following command:
```bash
git remote set-url origin https://name40dkfz-heidelberg.de:<new_access_token>@git.verbis.dkfz.de/bbmri-bridgehead-configs/test.git
```
5. Start the Bridgehead update service by running:
```bash
systemctl start bridgehead-update@<project>
```
6. View the output to ensure the update process is successful:
```bash
journalctl -u bridgehead-update@<project> -f
```

68
itcc/docker-compose.yml Normal file
View File

@ -0,0 +1,68 @@
version: "3.7"
services:
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-itcc-blaze
environment:
BASE_URL: "http://bridgehead-itcc-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: ${BLAZE_MEMORY_CAP}
CQL_EXPR_CACHE_SIZE: ${BLAZE_CQL_CACHE_CAP:-32}
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"
labels:
- "traefik.enable=true"
- "traefik.http.routers.blaze_itcc.rule=PathPrefix(`/itcc-localdatamanagement`)"
- "traefik.http.middlewares.itcc_b_strip.stripprefix.prefixes=/itcc-localdatamanagement"
- "traefik.http.services.blaze_itcc.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze_itcc.middlewares=itcc_b_strip,auth"
- "traefik.http.routers.blaze_itcc.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${PROXY_ID}
PROXY_ID: ${PROXY_ID}
BLAZE_URL: "http://bridgehead-itcc-blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
QUERIES_TO_CACHE: '/queries_to_cache.conf'
ENDPOINT_TYPE: ${FOCUS_ENDPOINT_TYPE:-blaze}
volumes:
- /srv/docker/bridgehead/itcc/queries_to_cache.conf:/queries_to_cache.conf:ro
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/itcc/root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem

View File

@ -0,0 +1,33 @@
version: "3.7"
services:
landing:
container_name: lens_federated-search
image: docker.verbis.dkfz.de/ccp/lens:${SITE_ID}
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
spot:
image: docker.verbis.dkfz.de/ccp-private/central-spot
environment:
BEAM_SECRET: "${FOCUS_BEAM_SECRET_SHORT}"
BEAM_URL: http://beam-proxy:8081
BEAM_PROXY_ID: ${SITE_ID}
BEAM_BROKER_ID: ${BROKER_ID}
BEAM_APP_ID: "focus"
PROJECT_METADATA: "dktk_supervisors"
depends_on:
- "beam-proxy"
labels:
- "traefik.enable=true"
- "traefik.http.services.spot.loadbalancer.server.port=8080"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowmethods=GET,OPTIONS,POST"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolalloworiginlist=https://${HOST}"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowcredentials=true"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolmaxage=-1"
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
- "traefik.http.routers.spot.tls=true"
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"

View File

@ -0,0 +1,5 @@
#!/bin/bash
if [ -n "$ENABLE_LENS" ];then
OVERRIDE+=" -f ./$PROJECT/modules/lens-compose.yml"
fi

View File

@ -0,0 +1,2 @@
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwoKY29udGV4dCBQYXRpZW50CkRLVEtfU1RSQVRfR0VOREVSX1NUUkFUSUZJRVIKICBES1RLX1NUUkFUX0RJQUdOT1NJU19TVFJBVElGSUVSCiAgSVRDQ19TVFJBVF9BR0VfQ0xBU1NfU1RSQVRJRklFUgogIERLVEtfU1RSQVRfREVGX0lOX0lOSVRJQUxfUE9QVUxBVElPTgp0cnVl
bGlicmFyeSBSZXRyaWV2ZQp1c2luZyBGSElSIHZlcnNpb24gJzQuMC4wJwppbmNsdWRlIEZISVJIZWxwZXJzIHZlcnNpb24gJzQuMC4wJwpjb2Rlc3lzdGVtIFNhbXBsZU1hdGVyaWFsVHlwZTogJ2h0dHBzOi8vZmhpci5iYm1yaS5kZS9Db2RlU3lzdGVtL1NhbXBsZU1hdGVyaWFsVHlwZScKCmNvZGVzeXN0ZW0gbG9pbmM6ICdodHRwOi8vbG9pbmMub3JnJwpjb2Rlc3lzdGVtIG1vbGVjdWxhck1hcmtlcjogJ2h0dHA6Ly93d3cuZ2VuZW5hbWVzLm9yZycKCmNvbnRleHQgUGF0aWVudApES1RLX1NUUkFUX0dFTkRFUl9TVFJBVElGSUVSCiAgREtUS19TVFJBVF9ESUFHTk9TSVNfU1RSQVRJRklFUgogIElUQ0NfU1RSQVRfQUdFX0NMQVNTX1NUUkFUSUZJRVIKICBES1RLX1NUUkFUX0RFRl9JTl9JTklUSUFMX1BPUFVMQVRJT04KKGV4aXN0cyBmcm9tIFtPYnNlcnZhdGlvbjogQ29kZSAnNjk1NDgtNicgZnJvbSBsb2luY10gTwp3aGVyZSBPLmNvbXBvbmVudC53aGVyZShjb2RlLmNvZGluZyBjb250YWlucyBDb2RlICc0ODAxOC02JyBmcm9tIGxvaW5jKS52YWx1ZS5jb2RpbmcgY29udGFpbnMgQ29kZSAnQlJBRicgZnJvbSBtb2xlY3VsYXJNYXJrZXIp

20
itcc/root.crt.pem Normal file
View File

@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
Rtup0MTxSJtN
-----END CERTIFICATE-----

14
itcc/vars Normal file
View File

@ -0,0 +1,14 @@
BROKER_ID=test-no-real-data.broker.samply.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done

67
kr/docker-compose.yml Normal file
View File

@ -0,0 +1,67 @@
version: "3.7"
services:
landing:
deploy:
replicas: 0 #deactivate landing page
blaze:
image: docker.verbis.dkfz.de/cache/samply/blaze:${BLAZE_TAG}
container_name: bridgehead-kr-blaze
environment:
BASE_URL: "http://bridgehead-kr-blaze:8080"
JAVA_TOOL_OPTIONS: "-Xmx${BLAZE_MEMORY_CAP:-4096}m"
DB_RESOURCE_CACHE_SIZE: ${BLAZE_RESOURCE_CACHE_CAP:-2500000}
DB_BLOCK_CACHE_SIZE: $BLAZE_MEMORY_CAP
ENFORCE_REFERENTIAL_INTEGRITY: "false"
volumes:
- "blaze-data:/app/data"
labels:
- "traefik.enable=true"
- "traefik.http.routers.blaze_kr.rule=PathPrefix(`/kr-localdatamanagement`)"
- "traefik.http.middlewares.kr_b_strip.stripprefix.prefixes=/kr-localdatamanagement"
- "traefik.http.services.blaze_kr.loadbalancer.server.port=8080"
- "traefik.http.routers.blaze_kr.middlewares=kr_b_strip,auth"
- "traefik.http.routers.blaze_kr.tls=true"
focus:
image: docker.verbis.dkfz.de/cache/samply/focus:${FOCUS_TAG}
container_name: bridgehead-focus
environment:
API_KEY: ${FOCUS_BEAM_SECRET_SHORT}
BEAM_APP_ID_LONG: focus.${PROXY_ID}
PROXY_ID: ${PROXY_ID}
BLAZE_URL: "http://bridgehead-kr-blaze:8080/fhir/"
BEAM_PROXY_URL: http://beam-proxy:8081
RETRY_COUNT: ${FOCUS_RETRY_COUNT}
EPSILON: 0.28
depends_on:
- "beam-proxy"
- "blaze"
beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:develop
container_name: bridgehead-beam-proxy
environment:
BROKER_URL: ${BROKER_URL}
PROXY_ID: ${PROXY_ID}
APP_focus_KEY: ${FOCUS_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: /conf/trusted-ca-certs
ROOTCERT_FILE: /conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/kr/root.crt.pem:/conf/root.crt.pem:ro
volumes:
blaze-data:
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem

View File

@ -0,0 +1,6 @@
# Full Excel Export
curl --location --request POST 'https://${HOST}/ccp-exporter/request?query=Patient&query-format=FHIR_PATH&template-id=ccp&output-format=EXCEL' \
--header 'x-api-key: ${EXPORT_API_KEY}'
# QB
curl --location --request POST 'https://${HOST}/ccp-reporter/generate?template-id=ccp'

View File

@ -0,0 +1,67 @@
version: "3.7"
services:
exporter:
image: docker.verbis.dkfz.de/ccp/dktk-exporter:latest
container_name: bridgehead-ccp-exporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
CROSS_ORIGINS: "https://${HOST}"
EXPORTER_DB_USER: "exporter"
EXPORTER_DB_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
EXPORTER_DB_URL: "jdbc:postgresql://exporter-db:5432/exporter"
HTTP_RELATIVE_PATH: "/ccp-exporter"
SITE: "${SITE_ID}"
HTTP_SERVLET_REQUEST_SCHEME: "https"
OPAL_PASSWORD: "${EXPORTER_OPAL_PASSWORD}"
labels:
- "traefik.enable=true"
- "traefik.http.routers.exporter_ccp.rule=PathPrefix(`/ccp-exporter`)"
- "traefik.http.services.exporter_ccp.loadbalancer.server.port=8092"
- "traefik.http.routers.exporter_ccp.tls=true"
- "traefik.http.middlewares.exporter_ccp_strip.stripprefix.prefixes=/ccp-exporter"
- "traefik.http.routers.exporter_ccp.middlewares=exporter_ccp_strip"
volumes:
- "/var/cache/bridgehead/ccp/exporter-files:/app/exporter-files/output"
exporter-db:
image: docker.verbis.dkfz.de/cache/postgres:${POSTGRES_TAG}
container_name: bridgehead-ccp-exporter-db
environment:
POSTGRES_USER: "exporter"
POSTGRES_PASSWORD: "${EXPORTER_DB_PASSWORD}" # Set in exporter-setup.sh
POSTGRES_DB: "exporter"
volumes:
# Consider removing this volume once we find a solution to save Lens-queries to be executed in the explorer.
- "/var/cache/bridgehead/ccp/exporter-db:/var/lib/postgresql/data"
reporter:
image: docker.verbis.dkfz.de/ccp/dktk-reporter:latest
container_name: bridgehead-ccp-reporter
environment:
JAVA_OPTS: "-Xms1G -Xmx8G -XX:+UseG1GC"
LOG_LEVEL: "INFO"
CROSS_ORIGINS: "https://${HOST}"
HTTP_RELATIVE_PATH: "/ccp-reporter"
SITE: "${SITE_ID}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}" # Set in exporter-setup.sh
EXPORTER_URL: "http://exporter:8092"
LOG_FHIR_VALIDATION: "false"
HTTP_SERVLET_REQUEST_SCHEME: "https"
# In this initial development state of the bridgehead, we are trying to use as few volumes as possible.
# However, in the first executions at the CCP sites, this volume has proven to be very important. A report is
# a process that can take several hours, because it depends on the exporter.
# There is a risk that the bridgehead restarts, losing the already created export.
volumes:
- "/var/cache/bridgehead/ccp/reporter-files:/app/reports"
labels:
- "traefik.enable=true"
- "traefik.http.routers.reporter_ccp.rule=PathPrefix(`/ccp-reporter`)"
- "traefik.http.services.reporter_ccp.loadbalancer.server.port=8095"
- "traefik.http.routers.reporter_ccp.tls=true"
- "traefik.http.middlewares.reporter_ccp_strip.stripprefix.prefixes=/ccp-reporter"
- "traefik.http.routers.reporter_ccp.middlewares=reporter_ccp_strip"

View File

@ -0,0 +1,8 @@
#!/bin/bash -e
if [ "$ENABLE_EXPORTER" == true ]; then
log INFO "Exporter setup detected -- will start Exporter service."
OVERRIDE+=" -f ./$PROJECT/modules/exporter-compose.yml"
EXPORTER_DB_PASSWORD="$(echo \"This is a salt string to generate one consistent password for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
EXPORTER_API_KEY="$(echo \"This is a salt string to generate one consistent API KEY for the exporter. It is not required to be secret.\" | sha1sum | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 64)"
fi

15
kr/modules/exporter.md Normal file
View File

@ -0,0 +1,15 @@
# Exporter and Reporter
## Exporter
The exporter is a REST API that exports the data of the different databases of the Bridgehead into a set of tables.
It supports several output formats, such as CSV, Excel, JSON or XML, and it can also export data into Opal.
## Exporter-DB
The Exporter-DB stores queries so that the exporter can execute them.
The exporter also manages the different executions of the same query through this database.
## Reporter
This component is a plugin of the exporter that allows creating more complex Excel reports described in templates.
It is compatible with different template engines such as Groovy, Thymeleaf, ...
It is well suited to generating documents such as our traditional CCP quality report.
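For orientation, a quality report can be triggered through the reporter's endpoint exposed under `/ccp-reporter`; this is only a sketch, and the template id `ccp` is an example default.
```bash
# Trigger generation of an Excel report from a template (sketch).
curl --location --request POST "https://${HOST}/ccp-reporter/generate?template-id=ccp"
```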

View File

@ -0,0 +1,35 @@
version: "3.7"
services:
landing:
deploy:
replicas: 1 #reactivate if lens is in use
container_name: lens_federated-search
image: docker.verbis.dkfz.de/ccp/lens:${SITE_ID}
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
spot:
image: docker.verbis.dkfz.de/ccp-private/central-spot
environment:
BEAM_SECRET: "${FOCUS_BEAM_SECRET_SHORT}"
BEAM_URL: http://beam-proxy:8081
BEAM_PROXY_ID: ${SITE_ID}
BEAM_BROKER_ID: ${BROKER_ID}
BEAM_APP_ID: "focus"
PROJECT_METADATA: "kr_supervisors"
depends_on:
- "beam-proxy"
labels:
- "traefik.enable=true"
- "traefik.http.services.spot.loadbalancer.server.port=8080"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowmethods=GET,OPTIONS,POST"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolalloworiginlist=https://${HOST}"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolallowcredentials=true"
- "traefik.http.middlewares.corsheaders2.headers.accesscontrolmaxage=-1"
- "traefik.http.routers.spot.rule=Host(`${HOST}`) && PathPrefix(`/backend`)"
- "traefik.http.middlewares.stripprefix_spot.stripprefix.prefixes=/backend"
- "traefik.http.routers.spot.tls=true"
- "traefik.http.routers.spot.middlewares=corsheaders2,stripprefix_spot"

5
kr/modules/lens-setup.sh Normal file
View File

@ -0,0 +1,5 @@
#!/bin/bash
if [ -n "$ENABLE_LENS" ];then
OVERRIDE+=" -f ./$PROJECT/modules/lens-compose.yml"
fi

View File

@ -0,0 +1,19 @@
version: "3.7"
services:
obds2fhir-rest:
container_name: bridgehead-obds2fhir-rest
image: docker.verbis.dkfz.de/ccp/obds2fhir-rest:main
environment:
IDTYPE: BK_${IDMANAGEMENT_FRIENDLY_ID}_L-ID
MAINZELLISTE_APIKEY: ${IDMANAGER_LOCAL_PATIENTLIST_APIKEY}
SALT: ${LOCAL_SALT}
KEEP_INTERNAL_ID: ${KEEP_INTERNAL_ID:-false}
MAINZELLISTE_URL: ${PATIENTLIST_URL:-http://patientlist:8080/patientlist}
labels:
- "traefik.enable=true"
- "traefik.http.routers.obds2fhir-rest.rule=PathPrefix(`/obds2fhir-rest`) || PathPrefix(`/adt2fhir-rest`)"
- "traefik.http.middlewares.obds2fhir-rest_strip.stripprefix.prefixes=/obds2fhir-rest,/adt2fhir-rest"
- "traefik.http.services.obds2fhir-rest.loadbalancer.server.port=8080"
- "traefik.http.routers.obds2fhir-rest.tls=true"
- "traefik.http.routers.obds2fhir-rest.middlewares=obds2fhir-rest_strip,auth"

View File

@ -0,0 +1,13 @@
#!/bin/bash
function obds2fhirRestSetup() {
if [ -n "$ENABLE_OBDS2FHIR_REST" ]; then
log INFO "oBDS2FHIR-REST setup detected -- will start obds2fhir-rest module."
if [ ! -n "$IDMANAGER_UPLOAD_APIKEY" ]; then
log ERROR "Missing ID-Management Module! Fix this by setting up ID Management:"
PATIENTLIST_URL=" "
fi
OVERRIDE+=" -f ./$PROJECT/modules/obds2fhir-rest-compose.yml"
LOCAL_SALT="$(echo \"local-random-salt\" | openssl pkeyutl -sign -inkey /etc/bridgehead/pki/${SITE_ID}.priv.pem | base64 | head -c 30)"
fi
}

View File

@ -0,0 +1,81 @@
version: "3.7"
services:
teiler-orchestrator:
image: docker.verbis.dkfz.de/cache/samply/teiler-orchestrator:latest
container_name: bridgehead-teiler-orchestrator
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_orchestrator_ccp.rule=PathPrefix(`/ccp-teiler`)"
- "traefik.http.services.teiler_orchestrator_ccp.loadbalancer.server.port=9000"
- "traefik.http.routers.teiler_orchestrator_ccp.tls=true"
- "traefik.http.middlewares.teiler_orchestrator_ccp_strip.stripprefix.prefixes=/ccp-teiler"
- "traefik.http.routers.teiler_orchestrator_ccp.middlewares=teiler_orchestrator_ccp_strip"
environment:
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
TEILER_DASHBOARD_URL: "https://${HOST}/ccp-teiler-dashboard"
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE_LOWER_CASE}"
HTTP_RELATIVE_PATH: "/ccp-teiler"
teiler-dashboard:
image: docker.verbis.dkfz.de/cache/samply/teiler-dashboard:develop
container_name: bridgehead-teiler-dashboard
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_dashboard_ccp.rule=PathPrefix(`/ccp-teiler-dashboard`)"
- "traefik.http.services.teiler_dashboard_ccp.loadbalancer.server.port=80"
- "traefik.http.routers.teiler_dashboard_ccp.tls=true"
- "traefik.http.middlewares.teiler_dashboard_ccp_strip.stripprefix.prefixes=/ccp-teiler-dashboard"
- "traefik.http.routers.teiler_dashboard_ccp.middlewares=teiler_dashboard_ccp_strip"
environment:
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
TEILER_BACKEND_URL: "https://${HOST}/ccp-teiler-backend"
OIDC_URL: "${OIDC_URL}"
OIDC_REALM: "${OIDC_REALM}"
OIDC_CLIENT_ID: "${OIDC_PUBLIC_CLIENT_ID}"
OIDC_TOKEN_GROUP: "${OIDC_GROUP_CLAIM}"
TEILER_ADMIN_NAME: "${OPERATOR_FIRST_NAME} ${OPERATOR_LAST_NAME}"
TEILER_ADMIN_EMAIL: "${OPERATOR_EMAIL}"
TEILER_ADMIN_PHONE: "${OPERATOR_PHONE}"
TEILER_PROJECT: "${PROJECT}"
EXPORTER_API_KEY: "${EXPORTER_API_KEY}"
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
TEILER_DASHBOARD_HTTP_RELATIVE_PATH: "/ccp-teiler-dashboard"
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
TEILER_USER: "${OIDC_USER_GROUP}"
TEILER_ADMIN: "${OIDC_ADMIN_GROUP}"
REPORTER_DEFAULT_TEMPLATE_ID: "ccp-qb"
EXPORTER_DEFAULT_TEMPLATE_ID: "ccp"
teiler-backend:
image: docker.verbis.dkfz.de/ccp/dktk-teiler-backend:latest
container_name: bridgehead-teiler-backend
labels:
- "traefik.enable=true"
- "traefik.http.routers.teiler_backend_ccp.rule=PathPrefix(`/ccp-teiler-backend`)"
- "traefik.http.services.teiler_backend_ccp.loadbalancer.server.port=8085"
- "traefik.http.routers.teiler_backend_ccp.tls=true"
- "traefik.http.middlewares.teiler_backend_ccp_strip.stripprefix.prefixes=/ccp-teiler-backend"
- "traefik.http.routers.teiler_backend_ccp.middlewares=teiler_backend_ccp_strip"
environment:
LOG_LEVEL: "INFO"
APPLICATION_PORT: "8085"
APPLICATION_ADDRESS: "${HOST}"
DEFAULT_LANGUAGE: "${TEILER_DEFAULT_LANGUAGE}"
CONFIG_ENV_VAR_PATH: "/run/secrets/ccp.conf"
TEILER_ORCHESTRATOR_HTTP_RELATIVE_PATH: "/ccp-teiler"
TEILER_ORCHESTRATOR_URL: "https://${HOST}/ccp-teiler"
TEILER_DASHBOARD_DE_URL: "https://${HOST}/ccp-teiler-dashboard/de"
TEILER_DASHBOARD_EN_URL: "https://${HOST}/ccp-teiler-dashboard/en"
CENTRAX_URL: "${CENTRAXX_URL}"
HTTP_PROXY: "http://forward_proxy:3128"
ENABLE_MTBA: "${ENABLE_MTBA}"
ENABLE_DATASHIELD: "${ENABLE_DATASHIELD}"
secrets:
- ccp.conf
secrets:
ccp.conf:
file: /etc/bridgehead/ccp.conf


@ -0,0 +1,9 @@
#!/bin/bash -e
if [ "$ENABLE_TEILER" == true ];then
log INFO "Teiler setup detected -- will start Teiler services."
OVERRIDE+=" -f ./$PROJECT/modules/teiler-compose.yml"
TEILER_DEFAULT_LANGUAGE=DE
TEILER_DEFAULT_LANGUAGE_LOWER_CASE=${TEILER_DEFAULT_LANGUAGE,,}
add_public_oidc_redirect_url "/ccp-teiler/*"
fi
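# Illustrative note (not part of the module script): the ENABLE_TEILER flag is expected to come
# from the site configuration, typically /etc/bridgehead/<project>.conf, e.g.
#   ENABLE_TEILER=true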

kr/modules/teiler.md Normal file

@ -0,0 +1,19 @@
# Teiler
This module orchestrates the different microfrontends of the bridgehead as a single-page application.
## Teiler Orchestrator
Single-SPA component that consists of the root HTML page of the single-page application and the JavaScript code that
fetches the information about the microfrontends from the teiler backend and is responsible for registering them. With the
resulting mapping, it can initialize, mount and unmount the required microfrontends on the fly.
The microfrontends run independently in different containers and can be based on different frameworks (Angular, Vue, React, ...).
These microfrontends can also run standalone, but they need to be extended with Single-SPA (https://single-spa.js.org/docs/ecosystem).
There are also three templates (Angular, Vue, React) available that can be extended and used directly in the teiler.
## Teiler Dashboard
It consists of the main dashboard and a set of embedded services.
### Login
User and password are configured in ccp.local.conf.
## Teiler Backend
In this component, the microfrontends are configured.

kr/root.crt.pem Normal file

@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDNTCCAh2gAwIBAgIUW34NEb7bl0+Ywx+I1VKtY5vpAOowDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAxMLQnJva2VyLVJvb3QwHhcNMjQwMTIyMTMzNzEzWhcNMzQw
MTE5MTMzNzQzWjAWMRQwEgYDVQQDEwtCcm9rZXItUm9vdDCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAL5UegLXTlq3XRRj8LyFs3aF0tpRPVoW9RXp5kFI
TnBvyO6qjNbMDT/xK+4iDtEX4QQUvsxAKxfXbe9i1jpdwjgH7JHaSGm2IjAiKLqO
OXQQtguWwfNmmp96Ql13ArLj458YH08xMO/w2NFWGwB/hfARa4z/T0afFuc/tKJf
XbGCG9xzJ9tmcG45QN8NChGhVvaTweNdVxGWlpHxmi0Mn8OM9CEuB7nPtTTiBuiu
pRC2zVVmNjVp4ktkAqL7IHOz+/F5nhiz6tOika9oD3376Xj055lPznLcTQn2+4d7
K7ZrBopCFxIQPjkgmYRLfPejbpdUjK1UVJw7hbWkqWqH7JMCAwEAAaN7MHkwDgYD
VR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFGjvRcaIP4HM
poIguUAK9YL2n7fbMB8GA1UdIwQYMBaAFGjvRcaIP4HMpoIguUAK9YL2n7fbMBYG
A1UdEQQPMA2CC0Jyb2tlci1Sb290MA0GCSqGSIb3DQEBCwUAA4IBAQCbzycJSaDm
AXXNJqQ88djrKs5MDXS8RIjS/cu2ayuLaYDe+BzVmUXNA0Vt9nZGdaz63SLLcjpU
fNSxBfKbwmf7s30AK8Cnfj9q4W/BlBeVizUHQsg1+RQpDIdMrRQrwkXv8mfLw+w5
3oaXNW6W/8KpBp/H8TBZ6myl6jCbeR3T8EMXBwipMGop/1zkbF01i98Xpqmhx2+l
n+80ofPsSspOo5XmgCZym8CD/m/oFHmjcvOfpOCvDh4PZ+i37pmbSlCYoMpla3u/
7MJMP5lugfLBYNDN2p+V4KbHP/cApCDT5UWLOeAWjgiZQtHH5ilDeYqEc1oPjyJt
Rtup0MTxSJtN
-----END CERTIFICATE-----

kr/vars Normal file

@ -0,0 +1,16 @@
BROKER_ID=test-no-real-data.broker.samply.de
BROKER_URL=https://${BROKER_ID}
PROXY_ID=${SITE_ID}.${BROKER_ID}
FOCUS_BEAM_SECRET_SHORT="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 20)"
FOCUS_RETRY_COUNT=${FOCUS_RETRY_COUNT:-64}
SUPPORT_EMAIL=arturo.macias@dkfz-heidelberg.de
PRIVATEKEYFILENAME=/etc/bridgehead/pki/${SITE_ID}.priv.pem
BROKER_URL_FOR_PREREQ=$BROKER_URL
for module in $PROJECT/modules/*.sh
do
log DEBUG "sourcing $module"
source $module
done
obds2fhirRestSetup


@ -9,12 +9,31 @@ detectCompose() {
  fi
}
-getLdmPassword() {
-  if [ -n "$LDM_PASSWORD" ]; then
-    docker run --rm httpd:alpine htpasswd -nb $PROJECT $LDM_PASSWORD | tr -d '\n' | tr -d '\r'
-  else
-    echo -n ""
-  fi
-}
setupProxy() {
  ### Note: As the current data protection concepts do not allow communication via HTTP,
  ### we are not setting a proxy for HTTP requests.
  local http="no"
  local https="no"
  if [ $HTTPS_PROXY_URL ]; then
    local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
    local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
    local hostport=$(echo $HTTPS_PROXY_URL | sed -e "s,$proto,,g" | cut -d/ -f1)
    HTTPS_PROXY_HOST="$(echo $hostport | sed -e 's,:.*,,g')"
    HTTPS_PROXY_PORT="$(echo $hostport | sed -e 's,^.*:,:,g' -e 's,.*:\([0-9]*\).*,\1,g' -e 's,[^0-9],,g')"
    if [[ ! -z "$HTTPS_PROXY_USERNAME" && ! -z "$HTTPS_PROXY_PASSWORD" ]]; then
      local proto="$(echo $HTTPS_PROXY_URL | grep :// | sed -e 's,^\(.*://\).*,\1,g')"
      local fqdn="$(echo ${HTTPS_PROXY_URL/$proto/})"
      HTTPS_PROXY_FULL_URL="$(echo $proto$HTTPS_PROXY_USERNAME:$HTTPS_PROXY_PASSWORD@$fqdn)"
      https="authenticated"
    else
      HTTPS_PROXY_FULL_URL=$HTTPS_PROXY_URL
      https="unauthenticated"
    fi
  fi
  log INFO "Configuring proxy servers: $http http proxy (we're not supporting unencrypted comms), $https https proxy"
  export HTTPS_PROXY_HOST HTTPS_PROXY_PORT HTTPS_PROXY_FULL_URL
}
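# Worked example (hypothetical values, not part of the library): with
#   HTTPS_PROXY_URL=https://proxy.example.org:3128
#   HTTPS_PROXY_USERNAME=squiduser
#   HTTPS_PROXY_PASSWORD=changeme
# setupProxy derives
#   HTTPS_PROXY_HOST=proxy.example.org
#   HTTPS_PROXY_PORT=3128
#   HTTPS_PROXY_FULL_URL=https://squiduser:changeme@proxy.example.org:3128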
exitIfNotRoot() {
@ -34,8 +53,8 @@ checkOwner(){
}
printUsage() {
-  echo "Usage: bridgehead start|stop|update|install|uninstall|enroll PROJECTNAME"
-  echo "PROJECTNAME should be one of ccp|bbmri"
  echo "Usage: bridgehead start|stop|logs|docker-logs|is-running|update|install|uninstall|adduser|enroll PROJECTNAME"
  echo "PROJECTNAME should be one of ccp|bbmri|cce|itcc|kr|dhki"
}
checkRequirements() {
@ -57,7 +76,7 @@ fetchVarsFromVault() {
  set +e
-  PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy samply/bridgehead-vaultfetcher $@)
  PASS=$(BW_MASTERPASS="$BW_MASTERPASS" BW_CLIENTID="$BW_CLIENTID" BW_CLIENTSECRET="$BW_CLIENTSECRET" docker run --rm -e BW_MASTERPASS -e BW_CLIENTID -e BW_CLIENTSECRET -e http_proxy docker.verbis.dkfz.de/cache/samply/bridgehead-vaultfetcher:latest $@)
  RET=$?
  if [ $RET -ne 0 ]; then
@ -136,6 +155,30 @@ setHostname() {
  fi
}
# This function optimizes the usage of memory through blaze, according to the official performance tuning guide:
# https://github.com/samply/blaze/blob/master/docs/tuning-guide.md
# Short summary of the adjustments made:
# - set blaze memory cap to a quarter of the system memory
# - set db block cache size to a quarter of the system memory
# - limit resource count allowed in blaze to 1,25M per 4GB available system memory
optimizeBlazeMemoryUsage() {
if [ -z "$BLAZE_MEMORY_CAP" ]; then
system_memory_in_mb=$(LC_ALL=C free -m | grep 'Mem:' | awk '{print $2}');
export BLAZE_MEMORY_CAP=$(($system_memory_in_mb/4));
fi
if [ -z "$BLAZE_RESOURCE_CACHE_CAP" ]; then
available_system_memory_chunks=$((BLAZE_MEMORY_CAP / 1000))
if [ $available_system_memory_chunks -eq 0 ]; then
log WARN "Only ${BLAZE_MEMORY_CAP} system memory available for Blaze. If your Blaze stores more than 128000 fhir ressources it will run significally slower."
export BLAZE_RESOURCE_CACHE_CAP=128000;
export BLAZE_CQL_CACHE_CAP=32;
else
export BLAZE_RESOURCE_CACHE_CAP=$((available_system_memory_chunks * 312500))
export BLAZE_CQL_CACHE_CAP=$((($system_memory_in_mb/4)/16));
fi
fi
}
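# Worked example (hypothetical host reporting 16000 MB of memory, not part of the original script):
#   BLAZE_MEMORY_CAP         = 16000 / 4         = 4000 MB
#   available memory chunks  = 4000 / 1000       = 4
#   BLAZE_RESOURCE_CACHE_CAP = 4 * 312500        = 1250000 resources
#   BLAZE_CQL_CACHE_CAP      = (16000 / 4) / 16  = 250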
# Takes 1) The Backup Directory Path 2) The name of the Service to be backed up
# Creates 3 Backups: 1) For the past seven days 2) For the current month and 3) for each calendar week
createEncryptedPostgresBackup(){
@ -169,6 +212,228 @@ function retry {
  return 0
}
-##Setting Network properties
-# currently not needed
-#export HOSTIP=$(MSYS_NO_PATHCONV=1 docker run --rm --add-host=host.docker.internal:host-gateway ubuntu cat /etc/hosts | grep 'host.docker.internal' | awk '{print $1}');
function bk_is_running {
detectCompose
RUNNING="$($COMPOSE -p $PROJECT -f minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE ps -q)"
NUMBEROFRUNNING=$(echo "$RUNNING" | wc -l)
if [ $NUMBEROFRUNNING -ge 2 ]; then
return 0
else
return 1
fi
}
function do_enroll_inner {
PARAMS=""
MANUAL_PROXY_ID="${1:-$PROXY_ID}"
if [ -z "$MANUAL_PROXY_ID" ]; then
log ERROR "No Proxy ID set"
exit 1
else
log INFO "Enrolling Beam Proxy Id $MANUAL_PROXY_ID"
fi
SUPPORT_EMAIL="${2:-$SUPPORT_EMAIL}"
if [ -n "$SUPPORT_EMAIL" ]; then
PARAMS+="--admin-email $SUPPORT_EMAIL"
fi
docker run --rm -v /etc/bridgehead/pki:/etc/bridgehead/pki docker.verbis.dkfz.de/cache/samply/beam-enroll:latest --output-file $PRIVATEKEYFILENAME --proxy-id $MANUAL_PROXY_ID $PARAMS
chmod 600 $PRIVATEKEYFILENAME
}
function do_enroll {
do_enroll_inner $@
}
add_basic_auth_user() {
USER="${1}"
PASSWORD="${2}"
NAME="${3}"
PROJECT="${4}"
FILE="/etc/bridgehead/${PROJECT}.local.conf"
ENCRY_CREDENTIALS="$(docker run --rm docker.verbis.dkfz.de/cache/httpd:alpine htpasswd -nb $USER $PASSWORD | tr -d '\n' | tr -d '\r')"
if [ -f $FILE ] && grep -R -q "$NAME=" $FILE # if a specific basic auth user already exists:
then
sed -i "/$NAME/ s|='|='$ENCRY_CREDENTIALS,|" $FILE
else
echo -e "\n## Basic Authentication Credentials for:\n$NAME='$ENCRY_CREDENTIALS'" >> $FILE;
fi
log DEBUG "Saving clear text credentials in $FILE. If wanted, delete them manually."
sed -i "/^$NAME/ s|$|\n# User: $USER\n# Password: $PASSWORD|" $FILE
}
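# Illustrative call (placeholder values): add_basic_auth_user "ccp" "s3cr3t" "LDM_AUTH" "ccp"
# appends LDM_AUTH='<htpasswd hash>' to /etc/bridgehead/ccp.local.conf and documents the
# clear-text user and password in a comment below that entry.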
OIDC_PUBLIC_REDIRECT_URLS=${OIDC_PUBLIC_REDIRECT_URLS:-""}
OIDC_PRIVATE_REDIRECT_URLS=${OIDC_PRIVATE_REDIRECT_URLS:-""}
# Add a redirect url to the public oidc client of the bridgehead
function add_public_oidc_redirect_url() {
if [[ $OIDC_PUBLIC_REDIRECT_URLS == "" ]]; then
OIDC_PUBLIC_REDIRECT_URLS+="$(generate_redirect_urls $1)"
else
OIDC_PUBLIC_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
fi
}
# Add a redirect url to the private oidc client of the bridgehead
function add_private_oidc_redirect_url() {
if [[ $OIDC_PRIVATE_REDIRECT_URLS == "" ]]; then
OIDC_PRIVATE_REDIRECT_URLS+="$(generate_redirect_urls $1)"
else
OIDC_PRIVATE_REDIRECT_URLS+=",$(generate_redirect_urls $1)"
fi
}
function sync_secrets() {
local delimiter=$'\x1E'
local secret_sync_args=""
if [[ $OIDC_PRIVATE_REDIRECT_URLS != "" ]]; then
secret_sync_args="OIDC:OIDC_CLIENT_SECRET:private;$OIDC_PRIVATE_REDIRECT_URLS"
fi
if [[ $OIDC_PUBLIC_REDIRECT_URLS != "" ]]; then
if [[ $secret_sync_args == "" ]]; then
secret_sync_args="OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
else
secret_sync_args+="${delimiter}OIDC:OIDC_PUBLIC:public;$OIDC_PUBLIC_REDIRECT_URLS"
fi
fi
if [[ $secret_sync_args == "" ]]; then
return
fi
mkdir -p /var/cache/bridgehead/secrets/ || fail_and_report 1 "Failed to create '/var/cache/bridgehead/secrets/'. Please run sudo './bridgehead install $PROJECT' again."
touch /var/cache/bridgehead/secrets/oidc
docker run --rm \
-v /var/cache/bridgehead/secrets/oidc:/usr/local/cache \
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
-v /srv/docker/bridgehead/$PROJECT/root.crt.pem:/run/secrets/root.crt.pem:ro \
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
-e NO_PROXY=localhost,127.0.0.1 \
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
-e PROXY_ID=$PROXY_ID \
-e BROKER_URL=$BROKER_URL \
-e OIDC_PROVIDER=secret-sync-central.central-secret-sync.$BROKER_ID \
-e SECRET_DEFINITIONS=$secret_sync_args \
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
set -a # Export variables as environment variables
source /var/cache/bridgehead/secrets/oidc
set +a # Export variables in the regular way
}
function secret_sync_gitlab_token() {
# Map the origin of the git repository /etc/bridgehead to the prefix recognized by Secret Sync
local gitlab
case "$(git -C /etc/bridgehead remote get-url origin)" in
*git.verbis.dkfz.de*) gitlab=verbis;;
*gitlab.bbmri-eric.eu*) gitlab=bbmri;;
*)
log "WARN" "Not running Secret Sync because the git repository /etc/bridgehead has unknown origin"
return
;;
esac
if [ "$PROJECT" == "bbmri" ]; then
# If the project is BBMRI, use the BBMRI-ERIC broker and not the GBN broker
proxy_id=$ERIC_PROXY_ID
broker_url=$ERIC_BROKER_URL
broker_id=$ERIC_BROKER_ID
root_crt_file="/srv/docker/bridgehead/bbmri/modules/${ERIC_ROOT_CERT}.root.crt.pem"
else
proxy_id=$PROXY_ID
broker_url=$BROKER_URL
broker_id=$BROKER_ID
root_crt_file="/srv/docker/bridgehead/$PROJECT/root.crt.pem"
fi
# Use Secret Sync to validate the GitLab token in /var/cache/bridgehead/secrets/gitlab_token.
# If it is missing or expired, Secret Sync will create a new token and write it to the file.
# The git credential helper reads the token from the file during git pull.
mkdir -p /var/cache/bridgehead/secrets
touch /var/cache/bridgehead/secrets/gitlab_token # the file has to exist to be mounted correctly in the Docker container
log "INFO" "Running Secret Sync for the GitLab token (gitlab=$gitlab)"
docker pull docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest # make sure we have the latest image
docker run --rm \
-v /var/cache/bridgehead/secrets/gitlab_token:/usr/local/cache \
-v $PRIVATEKEYFILENAME:/run/secrets/privkey.pem:ro \
-v $root_crt_file:/run/secrets/root.crt.pem:ro \
-v /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro \
-e TLS_CA_CERTIFICATES_DIR=/conf/trusted-ca-certs \
-e NO_PROXY=localhost,127.0.0.1 \
-e ALL_PROXY=$HTTPS_PROXY_FULL_URL \
-e PROXY_ID=$proxy_id \
-e BROKER_URL=$broker_url \
-e GITLAB_PROJECT_ACCESS_TOKEN_PROVIDER=secret-sync-central.central-secret-sync.$broker_id \
-e SECRET_DEFINITIONS=GitLabProjectAccessToken:BRIDGEHEAD_CONFIG_REPO_TOKEN:$gitlab \
docker.verbis.dkfz.de/cache/samply/secret-sync-local:latest
if [ $? -eq 0 ]; then
log "INFO" "Secret Sync was successful"
# In the past we used to hardcode tokens into the repository URL. We have to remove those now for the git credential helper to become effective.
CLEAN_REPO="$(git -C /etc/bridgehead remote get-url origin | sed -E 's|https://[^@]+@|https://|')"
git -C /etc/bridgehead remote set-url origin "$CLEAN_REPO"
# Set the git credential helper
git -C /etc/bridgehead config credential.helper /srv/docker/bridgehead/lib/gitlab-token-helper.sh
else
log "WARN" "Secret Sync failed"
# Remove the git credential helper
git -C /etc/bridgehead config --unset credential.helper
fi
# In the past the git credential helper was also set for /srv/docker/bridgehead but never used.
# Let's remove it to avoid confusion. This line can be removed at some point the future when we
# believe that it was removed on all/most production servers.
git -C /srv/docker/bridgehead config --unset credential.helper
}
capitalize_first_letter() {
input="$1"
capitalized="$(tr '[:lower:]' '[:upper:]' <<< ${input:0:1})${input:1}"
echo "$capitalized"
}
# Generate a string of ',' separated string of redirect urls relative to $HOST.
# $1 will be appended to the url
# If the host looks like dev-jan.inet.dkfz-heidelberg.de it will generate urls with dev-jan and the original $HOST as url Authorities
function generate_redirect_urls(){
local redirect_urls="https://${HOST}$1"
local host_without_proxy="$(echo "$HOST" | cut -d '.' -f1)"
# Only append second url if its different and the host is not an ip address
if [[ "$HOST" != "$host_without_proxy" && ! "$HOST" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
redirect_urls+=",https://$host_without_proxy$1"
fi
echo "$redirect_urls"
}
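# Example (host taken from the comment above, path is a placeholder): with HOST=dev-jan.inet.dkfz-heidelberg.de
#   generate_redirect_urls "/ccp-teiler/*"
# returns "https://dev-jan.inet.dkfz-heidelberg.de/ccp-teiler/*,https://dev-jan/ccp-teiler/*"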
# This password contains at least one special char, a random number and a random upper and lower case letter
generate_password(){
local seed_text="$1"
local seed_num=$(awk 'BEGIN{FS=""} NR==1{print $10}' /etc/bridgehead/pki/${SITE_ID}.priv.pem | od -An -tuC)
local nums="1234567890"
local n=$(echo "$seed_num" | awk '{print $1 % 10}')
local random_digit=${nums:$n:1}
local n=$(echo "$seed_num" | awk '{print $1 % 26}')
local upper="ABCDEFGHIJKLMNOPQRSTUVWXYZ"
local lower="abcdefghijklmnopqrstuvwxyz"
local random_upper=${upper:$n:1}
local random_lower=${lower:$n:1}
local n=$(echo "$seed_num" | awk '{print $1 % 8}')
local special='@#$%^&+='
local random_special=${special:$n:1}
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
local main_password=$(echo "${combined_text}" | sha1sum | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/\//A/g')
echo "${main_password}${random_digit}${random_upper}${random_lower}${random_special}"
}
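# Illustrative call (placeholder seed text): generate_password "some-service" always returns the
# same 30-character password on a given site, because it is derived from the site's private key.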
# This password only contains alphanumeric characters
generate_simple_password(){
local seed_text="$1"
local combined_text="This is a salt string to generate one consistent password for ${seed_text}. It is not required to be secret."
echo "${combined_text}" | sha1sum | openssl pkeyutl -sign -inkey "/etc/bridgehead/pki/${SITE_ID}.priv.pem" 2> /dev/null | base64 | head -c 26 | sed 's/[+\/]/A/g'
}
docker_jq() {
docker run --rm -i docker.verbis.dkfz.de/cache/jqlang/jq:latest "$@"
}
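# Example usage (illustrative): extract a field from a JSON document
#   echo '{"site":"berlin"}' | docker_jq -r '.site'   # prints: berlin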

lib/gitlab-token-helper.sh Executable file

@ -0,0 +1,11 @@
#!/bin/bash
[ "$1" = "get" ] || exit
source /var/cache/bridgehead/secrets/gitlab_token
# Any non-empty username works, only the token matters
cat << EOF
username=bk
password=$BRIDGEHEAD_CONFIG_REPO_TOKEN
EOF
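# Illustrative invocation (this is how git calls the helper via the credential protocol; the host is only an example):
#   printf 'protocol=https\nhost=git.verbis.dkfz.de\n' | /srv/docker/bridgehead/lib/gitlab-token-helper.sh get
# prints the username/password pair above, with the token read from /var/cache/bridgehead/secrets/gitlab_token.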


@ -1,41 +0,0 @@
#!/bin/bash
if [ "$1" != "get" ]; then
echo "Usage: $0 get"
exit 1
fi
baseDir() {
# see https://stackoverflow.com/questions/59895
SOURCE=${BASH_SOURCE[0]}
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
DIR=$( cd -P "$( dirname "$SOURCE" )" >/dev/null 2>&1 && pwd )
SOURCE=$(readlink "$SOURCE")
[[ $SOURCE != /* ]] && SOURCE=$DIR/$SOURCE # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
DIR=$( cd -P "$( dirname "$SOURCE" )/.." >/dev/null 2>&1 && pwd )
echo $DIR
}
BASE=$(baseDir)
cd $BASE
source lib/functions.sh
assertVarsNotEmpty SITE_ID || fail_and_report 1 "gitpassword.sh failed: SITE_ID is empty."
PARAMS="$(cat)"
GITHOST=$(echo "$PARAMS" | grep "^host=" | sed 's/host=\(.*\)/\1/g')
fetchVarsFromVault GIT_PASSWORD
if [ -z "${GIT_PASSWORD}" ]; then
fail_and_report 1 "gitpassword.sh failed: Git password not found."
fi
cat <<EOF
protocol=https
host=$GITHOST
username=bk-${SITE_ID}
password=${GIT_PASSWORD}
EOF


@ -29,12 +29,16 @@ bridgehead ALL= NOPASSWD: BRIDGEHEAD${PROJECT^^}
EOF
# TODO: Determine whether this should be located in setup-bridgehead (triggered through bridgehead install) or in update bridgehead (triggered every hour)
-if [ -z "$LDM_PASSWORD" ]; then
-  log "INFO" "Now generating a password for the local data management. Please save the password for your ETL process!"
-  log "INFO" "Your generated credentials are:\n user: $PROJECT\n password: $generated_passwd"
-  echo -e "## Local Data Management Basic Authentication\n# User: $PROJECT\nLDM_PASSWORD=$generated_passwd" >> /etc/bridgehead/${PROJECT}.local.conf;
if [ -z "$LDM_AUTH" ]; then
  log "INFO" "Now generating basic auth for the local data management (see adduser in bridgehead for more information). "
  generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
  add_basic_auth_user $PROJECT $generated_passwd "LDM_AUTH" $PROJECT
fi
if [ ! -z "$NNGM_CTS_APIKEY" ] && [ -z "$NNGM_AUTH" ]; then
  log "INFO" "Now generating basic auth for nNGM upload API (see adduser in bridgehead for more information). "
  generated_passwd="$(cat /proc/sys/kernel/random/uuid | sed 's/[-]//g' | head -c 32)"
  add_basic_auth_user "nngm" $generated_passwd "NNGM_AUTH" $PROJECT
fi
log "INFO" "Registering system units for bridgehead and bridgehead-update"


@ -47,8 +47,8 @@ function hc_send(){
  if [ -n "$2" ]; then
    MSG="$2\n\nDocker stats:\n$UPTIME"
-    echo -e "$MSG" | https_proxy=$HTTPS_PROXY_URL curl -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
    echo -e "$MSG" | https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null -X POST --data-binary @- "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
  else
-    https_proxy=$HTTPS_PROXY_URL curl -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
    https_proxy=$HTTPS_PROXY_FULL_URL curl --max-time 5 -A "$USER_AGENT" -s -o /dev/null "$HCURL"/"$1" || log WARN "Monitoring failed: Unable to send data to $HCURL/$1"
  fi
}


@ -1,10 +1,21 @@
#!/bin/bash -e
DEV_MODE="${1:-NODEV}"
source lib/log.sh
source lib/functions.sh
log "INFO" "Preparing your system for bridgehead installation ..."
# Check, if running in WSL
if [[ $(grep -i Microsoft /proc/version) ]]; then
  # Check, if systemd is available
  if [ "$(systemctl is-system-running)" = "offline" ]; then
    log "ERROR" "It seems you have no active systemd environment in your WSL environment. Please follow the guide in https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/"
    exit 1
  fi
fi
# Create the bridgehead user
if id bridgehead &>/dev/null; then
  log "INFO" "Existing user with id $(id -u bridgehead) will be used by the bridgehead system units."
@ -14,7 +25,12 @@ else
fi
# Clone the OpenSource repository of bridgehead
set +e
bridgehead_repository_url=$(git remote get-url origin)
if [ $? -ne 0 ]; then
  bridgehead_repository_url="https://github.com/samply/bridgehead.git"
fi
set -e
if [ -d "/srv/docker/bridgehead" ]; then
  current_owner=$(stat -c '%U' /srv/docker/bridgehead)
  if [ "$(su -c 'git -C /srv/docker/bridgehead remote get-url origin' $current_owner)" == "$bridgehead_repository_url" ]; then
@ -26,7 +42,7 @@ if [ -d "/srv/docker/bridgehead" ]; then
else
  log "INFO" "Cloning $bridgehead_repository_url to /srv/docker/bridgehead"
  mkdir -p /srv/docker/
-  git clone bridgehead_repository_url /srv/docker/bridgehead
  git clone $bridgehead_repository_url /srv/docker/bridgehead
fi
case "$PROJECT" in
@ -36,6 +52,24 @@ case "$PROJECT" in
  bbmri)
    site_configuration_repository_middle="git.verbis.dkfz.de/bbmri-bridgehead-configs/"
    ;;
  cce)
    site_configuration_repository_middle="git.verbis.dkfz.de/cce-sites/"
    ;;
  itcc)
    site_configuration_repository_middle="git.verbis.dkfz.de/itcc-sites/"
    ;;
  dhki)
    site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
    ;;
  kr)
    site_configuration_repository_middle="git.verbis.dkfz.de/krebsregister-sites/"
    ;;
  dhki)
    site_configuration_repository_middle="git.verbis.dkfz.de/dhki/"
    ;;
  minimal)
    site_configuration_repository_middle="git.verbis.dkfz.de/minimal-bridgehead-configs/"
    ;;
  *)
    log ERROR "Internal error, this should not happen."
    exit 1
@ -50,18 +84,29 @@ if [ -d /etc/bridgehead ]; then
  else
    log "WARN" "Your site configuration repository in /etc/bridgehead seems to have another origin than git.verbis.dkfz.de. Please check if the repository is correctly cloned!"
  fi
-else
elif [[ "$DEV_MODE" == "NODEV" ]]; then
  log "INFO" "Now cloning your site configuration repository for you."
  if [ -z "$site" ]; then
    read -p "Please enter your site: " site
  fi
  if [ -z "$access_token" ]; then
    read -s -p "Please enter the bridgehead's access token for your site configuration repository (will not be echoed): " access_token
  fi
  site_configuration_repository_url="https://bytoken:${access_token}@${site_configuration_repository_middle}$(echo $site | tr '[:upper:]' '[:lower:]').git"
  git clone $site_configuration_repository_url /etc/bridgehead
  if [ $? -gt 0 ]; then
    log "ERROR" "Unable to clone your configuration repository. Please obtain correct access data and try again."
  fi
elif [[ "$DEV_MODE" == "DEV" ]]; then
  log "INFO" "Now cloning your developer configuration repository for you."
  read -p "Please enter your config repository URL: " url
  git clone "$url" /etc/bridgehead
fi
chown -R bridgehead /etc/bridgehead /srv/docker/bridgehead
-log INFO "System preparation is completed and private key is present."
mkdir -p /tmp/bridgehead /var/cache/bridgehead
chown -R bridgehead:docker /tmp/bridgehead /var/cache/bridgehead
chmod -R g+wr /var/cache/bridgehead /tmp/bridgehead
log INFO "System preparation is completed and configuration is present."


@ -3,18 +3,20 @@
source lib/functions.sh
detectCompose
CONFIG_DIR="/etc/bridgehead/"
COMPONENT_DIR="/srv/docker/bridgehead/"
if ! id "bridgehead" &>/dev/null; then
  log ERROR "User bridgehead does not exist. Please run bridgehead install $PROJECT"
  exit 1
fi
-checkOwner /srv/docker/bridgehead bridgehead || exit 1
-checkOwner /etc/bridgehead bridgehead || exit 1
checkOwner "${CONFIG_DIR}" bridgehead || exit 1
checkOwner "${COMPONENT_DIR}" bridgehead || exit 1
## Check if user is a su
log INFO "Checking if all prerequisites are met ..."
-prerequisites="git docker"
prerequisites="git docker curl"
for prerequisite in $prerequisites; do
  $prerequisite --version 2>&1
  is_available=$?
@ -32,52 +34,87 @@ fi
log INFO "Checking configuration ..."
## Download submodule
-if [ ! -d "/etc/bridgehead/" ]; then
-  fail_and_report 1 "Please set up the config folder at /etc/bridgehead. Instructions are in the readme."
if [ ! -d "${CONFIG_DIR}" ]; then
  fail_and_report 1 "Please set up the config folder at ${CONFIG_DIR}. Instructions are in the readme."
fi
# TODO: Check all required variables here in a generic loop
#check if project env is present
-if [ -d "/etc/bridgehead/${PROJECT}.conf" ]; then
-  fail_and_report 1 "Project config not found. Please copy the template from ${PROJECT} and put it under /etc/bridgehead-config/${PROJECT}.conf."
if [ -d "${CONFIG_DIR}${PROJECT}.conf" ]; then
  fail_and_report 1 "Project config not found. Please copy the template from ${PROJECT} and put it under ${CONFIG_DIR}${PROJECT}.conf."
fi
# TODO: Make sure you're in the right directory, or, even better, be independent from the working directory.
log INFO "Checking ssl cert for accessing bridgehead via https"
-if [ ! -d "/etc/bridgehead/traefik-tls" ]; then
if [ ! -d "${CONFIG_DIR}traefik-tls" ]; then
  log WARN "TLS certs for accessing bridgehead via https missing, we'll now create a self-signed one. Please consider getting an officially signed one (e.g. via Let's Encrypt ...) and put into /etc/bridgehead/traefik-tls"
  mkdir -p /etc/bridgehead/traefik-tls
fi
-if [ ! -e "/etc/bridgehead/traefik-tls/fullchain.pem" ]; then
if [ ! -e "${CONFIG_DIR}traefik-tls/fullchain.pem" ]; then
  openssl req -x509 -newkey rsa:4096 -nodes -keyout /etc/bridgehead/traefik-tls/privkey.pem -out /etc/bridgehead/traefik-tls/fullchain.pem -days 3650 -subj "/CN=$HOST"
fi
-if [ -e /etc/bridgehead/vault.conf ]; then
if [ -e "${CONFIG_DIR}"vault.conf ]; then
  if [ "$(stat -c "%a %U" /etc/bridgehead/vault.conf)" != "600 bridgehead" ]; then
    fail_and_report 1 "/etc/bridgehead/vault.conf has wrong owner/permissions. To correct this issue, run chmod 600 /etc/bridgehead/vault.conf && chown bridgehead /etc/bridgehead/vault.conf."
  fi
fi
log INFO "Checking network access ($BROKER_URL_FOR_PREREQ) ..."
source "${CONFIG_DIR}${PROJECT}".conf
source ${PROJECT}/vars
if [ "${PROJECT}" != "minimal" ]; then
  set +e
  SERVERTIME="$(https_proxy=$HTTPS_PROXY_FULL_URL curl -m 5 -s -I $BROKER_URL_FOR_PREREQ 2>&1 | grep -i -e '^Date: ' | sed -e 's/^Date: //i')"
  RET=$?
  set -e
  if [ $RET -ne 0 ]; then
    log WARN "Unable to connect to Samply.Beam broker at $BROKER_URL_FOR_PREREQ. Please check your proxy settings.\nThe currently configured proxy was \"$HTTPS_PROXY_URL\". This error is normal when using proxy authentication."
    log WARN "Unable to check clock skew due to previous error."
  else
    log INFO "Checking clock skew ..."
    SERVERTIME_AS_TIMESTAMP=$(date --date="$SERVERTIME" +%s)
    MYTIME=$(date +%s)
    SKEW=$(($SERVERTIME_AS_TIMESTAMP - $MYTIME))
    SKEW=$(echo $SKEW | awk -F- '{print $NF}')
    SYNCTEXT="For example, consider entering a correct NTP server (e.g. your institution's Active Directory Domain Controller) in /etc/systemd/timesyncd.conf (option NTP=) and restart systemd-timesyncd."
    if [ $SKEW -ge 300 ]; then
      report_error 5 "Your clock is not synchronized (${SKEW}s off). This will cause Samply.Beam's certificate validation to fail. Please set up time synchronization. $SYNCTEXT"
      exit 1
    elif [ $SKEW -ge 60 ]; then
      log WARN "Your clock is more than a minute off (${SKEW}s). Consider syncing to a time server. $SYNCTEXT"
    fi
  fi
fi
checkPrivKey() {
-  if [ -e /etc/bridgehead/pki/${SITE_ID}.priv.pem ]; then
  if [ -e "${CONFIG_DIR}pki/${SITE_ID}.priv.pem" ]; then
    log INFO "Success - private key found."
  else
-    log ERROR "Unable to find private key at /etc/bridgehead/pki/${SITE_ID}.priv.pem. To fix, please run\n bridgehead enroll ${PROJECT}\nand follow the instructions."
    log ERROR "Unable to find private key at ${CONFIG_DIR}pki/${SITE_ID}.priv.pem. To fix, please run\n bridgehead enroll ${PROJECT}\nand follow the instructions."
    return 1
  fi
-  log INFO "Success - all prerequisites are met!"
-  hc_send log "Success - all prerequisites are met!"
  return 0
}
-if [[ "$@" =~ "noprivkey" ]]; then
if [[ "$@" =~ "noprivkey" || "${PROJECT}" != "minimal" ]]; then
  log INFO "Skipping check for private key for now."
else
  checkPrivKey || exit 1
fi
for dir in "${CONFIG_DIR}" "${COMPONENT_DIR}"; do
  log INFO "Checking branch: $(cd $dir && echo "$dir $(git branch --show-current)")"
  hc_send log "Checking branch: $(cd $dir && echo "$dir $(git branch --show-current)")"
done
log INFO "Success - all prerequisites are met!"
hc_send log "Success - all prerequisites are met!"
exit 0


@ -1,8 +1,9 @@
[Unit]
-Description=Hourly Updates of Bridgehead (%i)
Description=Daily Updates at 6am of Bridgehead (%i)
[Timer]
-OnCalendar=*-*-* *:00:00
OnCalendar=*-*-* 06:00:00
Persistent=true
[Install]
WantedBy=basic.target


@ -4,17 +4,22 @@ source lib/functions.sh
AUTO_HOUSEKEEPING=${AUTO_HOUSEKEEPING:-true}
if [ "$AUTO_HOUSEKEEPING" == "true" ]; then
-  A="Performing automatic maintenance: Cleaning docker images."
-  docker system prune -a -f
  A="Performing automatic maintenance: "
  if bk_is_running; then
    A="$A Cleaning docker images."
    docker system prune -a -f
  else
    A="$A Not cleaning docker images since BK is not running."
  fi
  hc_send log "$A"
  log INFO "$A"
else
  log WARN "Automatic housekeeping disabled (variable AUTO_HOUSEKEEPING != \"true\")"
fi
hc_send log "Checking for bridgehead updates ..."
-CONFFILE=/etc/bridgehead/$1.conf
CONFFILE=/etc/bridgehead/$PROJECT.conf
if [ ! -e $CONFFILE ]; then
  fail_and_report 1 "Configuration file $CONFFILE not found."
@ -25,10 +30,10 @@ source $CONFFILE
assertVarsNotEmpty SITE_ID || fail_and_report 1 "Update failed: SITE_ID empty"
export SITE_ID
-checkOwner . bridgehead || fail_and_report 1 "Update failed: Wrong permissions in $(pwd)"
checkOwner /srv/docker/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /srv/docker/bridgehead"
checkOwner /etc/bridgehead bridgehead || fail_and_report 1 "Update failed: Wrong permissions in /etc/bridgehead"
-CREDHELPER="/srv/docker/bridgehead/lib/gitpassword.sh"
secret_sync_gitlab_token
CHANGES=""
@ -40,20 +45,17 @@ for DIR in /etc/bridgehead $(pwd); do
  if [ -n "$OUT" ]; then
    report_error log "The working directory $DIR is modified. Changed files: $OUT"
  fi
-  if [ "$(git -C $DIR config --get credential.helper)" != "$CREDHELPER" ]; then
-    log "INFO" "Configuring repo to use bridgehead git credential helper."
-    git -C $DIR config credential.helper "$CREDHELPER"
-  fi
  old_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
-  if [ -z "$HTTP_PROXY_URL" ]; then
  if [ -z "$HTTPS_PROXY_FULL_URL" ]; then
    log "INFO" "Git is using no proxy!"
    OUT=$(retry 5 git -C $DIR fetch 2>&1 && retry 5 git -C $DIR pull 2>&1)
  else
-    log "INFO" "Git is using proxy ${HTTP_PROXY_URL} from ${CONFFILE}"
-    OUT=$(retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTP_PROXY_URL -c https.proxy=$HTTPS_PROXY_URL -C $DIR pull 2>&1)
    log "INFO" "Git is using proxy ${HTTPS_PROXY_URL} from ${CONFFILE}"
    OUT=$(retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR fetch 2>&1 && retry 5 git -c http.proxy=$HTTPS_PROXY_FULL_URL -c https.proxy=$HTTPS_PROXY_FULL_URL -C $DIR pull 2>&1)
  fi
  if [ $? -ne 0 ]; then
-    report_error log "Unable to update git $DIR: $OUT"
    OUT_SAN=$(echo $OUT | sed -E 's|://[^:]+:[^@]+@|://credentials@|g')
    report_error log "Unable to update git $DIR: $OUT_SAN"
  fi
  new_git_hash="$(git -C $DIR rev-parse --verify HEAD)"
@ -81,7 +83,7 @@ done
# Check docker updates
log "INFO" "Checking for updates to running docker images ..."
docker_updated="false"
-for IMAGE in $(cat $PROJECT/docker-compose.yml ${OVERRIDE//-f/} | grep -v "^#" | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
for IMAGE in $($COMPOSE -p $PROJECT -f ./minimal/docker-compose.yml -f ./$PROJECT/docker-compose.yml $OVERRIDE config | grep "image:" | sed -e 's_^.*image: \(.*\).*$_\1_g; s_\"__g'); do
  log "INFO" "Checking for Updates of Image: $IMAGE"
  if docker pull $IMAGE | grep "Downloaded newer image"; then
    CHANGE="Image $IMAGE updated."
@ -111,7 +113,7 @@ if [ -n "${BACKUP_DIRECTORY}" ]; then
  mkdir -p "$BACKUP_DIRECTORY"
  chown -R "$BACKUP_DIRECTORY" bridgehead;
fi
-checkOwner "$BACKUP_DIRECTORY" bridgehead || fail_and_report 1 "Automatic maintenance failed: Wrong permissions for backup directory $(pwd)"
checkOwner "$BACKUP_DIRECTORY" bridgehead || fail_and_report 1 "Automatic maintenance failed: Wrong permissions for backup directory $BACKUP_DIRECTORY"
# Collect all container names that contain '-db'
BACKUP_SERVICES="$(docker ps --filter name=-db --format "{{.Names}}" | tr "\n" "\ ")"
log INFO "Performing automatic maintenance: Creating Backups for $BACKUP_SERVICES";
@ -134,6 +136,15 @@ else
  log WARN "Automated backups are disabled (variable AUTO_BACKUPS != \"true\")"
fi
#TODO: the following block can be deleted after successful update at all sites
if [ ! -z "$LDM_PASSWORD" ]; then
  FILE="/etc/bridgehead/$PROJECT.local.conf"
  log "INFO" "Migrating LDM_PASSWORD to encrypted credentials in $FILE"
  add_basic_auth_user $PROJECT $LDM_PASSWORD "LDM_AUTH" $PROJECT
  add_basic_auth_user $PROJECT $LDM_PASSWORD "NNGM_AUTH" $PROJECT
  sed -i "/LDM_PASSWORD/{d;}" $FILE
fi
exit 0
# TODO: Print last commit explicit


@ -0,0 +1,61 @@
version: "3.7"
services:
traefik:
container_name: bridgehead-traefik
image: docker.verbis.dkfz.de/cache/traefik:latest
command:
- --entrypoints.web.address=:80
- --entrypoints.websecure.address=:443
- --providers.docker=true
- --providers.docker.exposedbydefault=false
- --providers.file.directory=/configuration/
- --api.dashboard=false
- --accesslog=true
- --entrypoints.web.http.redirections.entrypoint.to=websecure
- --entrypoints.web.http.redirections.entrypoint.scheme=https
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=PathPrefix(`/dashboard/`)"
- "traefik.http.routers.dashboard.entrypoints=websecure"
- "traefik.http.routers.dashboard.service=api@internal"
- "traefik.http.routers.dashboard.tls=true"
- "traefik.http.routers.dashboard.middlewares=auth"
- "traefik.http.middlewares.auth.basicauth.users=${LDM_AUTH}"
ports:
- 80:80
- 443:443
volumes:
- /etc/bridgehead/traefik-tls:/certs:ro
- ../lib/traefik-configuration/:/configuration:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
forward_proxy:
container_name: bridgehead-forward-proxy
image: docker.verbis.dkfz.de/cache/samply/bridgehead-forward-proxy:latest
environment:
HTTPS_PROXY: ${HTTPS_PROXY_URL}
HTTPS_PROXY_USERNAME: ${HTTPS_PROXY_USERNAME}
HTTPS_PROXY_PASSWORD: ${HTTPS_PROXY_PASSWORD}
tmpfs:
- /var/log/squid
- /var/spool/squid
volumes:
- /etc/bridgehead/trusted-ca-certs:/docker/custom-certs/:ro
healthcheck:
# Wait 1s before marking this service healthy. Required for the oauth2-proxy to talk to the OIDC provider on startup which will fail if the forward proxy is not started yet.
test: ["CMD", "sleep", "1"]
landing:
container_name: bridgehead-landingpage
image: docker.verbis.dkfz.de/cache/samply/bridgehead-landingpage:main
labels:
- "traefik.enable=true"
- "traefik.http.routers.landing.rule=PathPrefix(`/`)"
- "traefik.http.services.landing.loadbalancer.server.port=80"
- "traefik.http.routers.landing.tls=true"
environment:
HOST: ${HOST}
PROJECT: ${PROJECT}
SITE_NAME: ${SITE_NAME}
ENVIRONMENT: ${ENVIRONMENT}


@ -0,0 +1,142 @@
{
"sites": [
{
"id": "UKFR",
"name": "Freiburg",
"virtualhost": "ukfr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKHD",
"name": "Heidelberg",
"virtualhost": "ukhd.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKT",
"name": "Tübingen",
"virtualhost": "ukt.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKU",
"name": "Ulm",
"virtualhost": "uku.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UM",
"name": "Mainz",
"virtualhost": "um.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKMR",
"name": "Marburg",
"virtualhost": "ukmr.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKE",
"name": "Hamburg",
"virtualhost": "uke.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKA",
"name": "Aachen",
"virtualhost": "uka.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "Charite",
"name": "Berlin",
"virtualhost": "charite.dnpm.de",
"beamconnect": "dnpm-connect.berlin-test.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MRI",
"name": "Muenchen-tum",
"virtualhost": "mri.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-tum.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "KUM",
"name": "Muenchen-lmu",
"virtualhost": "kum.dnpm.de",
"beamconnect": "dnpm-connect.muenchen-lmu.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "MHH",
"name": "Hannover",
"virtualhost": "mhh.dnpm.de",
"beamconnect": "dnpm-connect.hannover.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKDD",
"name": "dresden-dnpm",
"virtualhost": "ukdd.dnpm.de",
"beamconnect": "dnpm-connect.dresden-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKB",
"name": "Bonn",
"virtualhost": "ukb.dnpm.de",
"beamconnect": "dnpm-connect.bonn-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKD",
"name": "Duesseldorf",
"virtualhost": "ukd.dnpm.de",
"beamconnect": "dnpm-connect.duesseldorf-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKK",
"name": "Koeln",
"virtualhost": "ukk.dnpm.de",
"beamconnect": "dnpm-connect.dnpm-bridge.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UME",
"name": "Essen",
"virtualhost": "ume.dnpm.de",
"beamconnect": "dnpm-connect.essen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKM",
"name": "Muenster",
"virtualhost": "ukm.dnpm.de",
"beamconnect": "dnpm-connect.muenster-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKF",
"name": "Frankfurt",
"virtualhost": "ukf.dnpm.de",
"beamconnect": "dnpm-connect.frankfurt.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UMG",
"name": "Goettingen",
"virtualhost": "umg.dnpm.de",
"beamconnect": "dnpm-connect.goettingen.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKW",
"name": "Würzburg",
"virtualhost": "ukw.dnpm.de",
"beamconnect": "dnpm-connect.wuerzburg-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "UKSH",
"name": "Schleswig-Holstein",
"virtualhost": "uksh.dnpm.de",
"beamconnect": "dnpm-connect.uksh-dnpm.broker.ccp-it.dktk.dkfz.de"
},
{
"id": "TKT",
"name": "Test",
"virtualhost": "tkt.dnpm.de",
"beamconnect": "dnpm-connect.tobias-develop.broker.ccp-it.dktk.dkfz.de"
}
]
}


@ -0,0 +1,59 @@
version: "3.7"
services:
dnpm-beam-proxy:
image: docker.verbis.dkfz.de/cache/samply/beam-proxy:${BEAM_TAG}
container_name: bridgehead-dnpm-beam-proxy
environment:
BROKER_URL: ${DNPM_BROKER_URL}
PROXY_ID: ${DNPM_PROXY_ID}
APP_dnpm-connect_KEY: ${DNPM_BEAM_SECRET_SHORT}
PRIVKEY_FILE: /run/secrets/proxy.pem
ALL_PROXY: http://forward_proxy:3128
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
ROOTCERT_FILE: ./conf/root.crt.pem
secrets:
- proxy.pem
depends_on:
- "forward_proxy"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /srv/docker/bridgehead/ccp/root.crt.pem:/conf/root.crt.pem:ro
dnpm-beam-connect:
depends_on: [ dnpm-beam-proxy ]
image: docker.verbis.dkfz.de/cache/samply/beam-connect:develop
container_name: bridgehead-dnpm-beam-connect
environment:
PROXY_URL: http://dnpm-beam-proxy:8081
PROXY_APIKEY: ${DNPM_BEAM_SECRET_SHORT}
APP_ID: dnpm-connect.${DNPM_PROXY_ID}
DISCOVERY_URL: "./conf/central_targets.json"
LOCAL_TARGETS_FILE: "/conf/connect_targets.json"
HTTP_PROXY: http://forward_proxy:3128
HTTPS_PROXY: http://forward_proxy:3128
NO_PROXY: dnpm-beam-proxy,dnpm-backend, host.docker.internal${DNPM_ADDITIONAL_NO_PROXY}
RUST_LOG: ${RUST_LOG:-info}
NO_AUTH: "true"
TLS_CA_CERTIFICATES_DIR: ./conf/trusted-ca-certs
extra_hosts:
- "host.docker.internal:host-gateway"
volumes:
- /etc/bridgehead/trusted-ca-certs:/conf/trusted-ca-certs:ro
- /etc/bridgehead/dnpm/local_targets.json:/conf/connect_targets.json:ro
- /srv/docker/bridgehead/minimal/modules/dnpm-central-targets.json:/conf/central_targets.json:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-connect.rule=PathPrefix(`/dnpm-connect`)"
- "traefik.http.middlewares.dnpm-connect-strip.stripprefix.prefixes=/dnpm-connect"
- "traefik.http.routers.dnpm-connect.middlewares=dnpm-connect-strip"
- "traefik.http.services.dnpm-connect.loadbalancer.server.port=8062"
- "traefik.http.routers.dnpm-connect.tls=true"
dnpm-echo:
image: docker.verbis.dkfz.de/cache/samply/bridgehead-echo:latest
container_name: bridgehead-dnpm-echo
secrets:
proxy.pem:
file: /etc/bridgehead/pki/${SITE_ID}.priv.pem


@ -0,0 +1,99 @@
version: "3.7"
services:
dnpm-mysql:
image: mysql:9
healthcheck:
test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
interval: 3s
timeout: 5s
retries: 5
environment:
MYSQL_ROOT_HOST: "%"
MYSQL_ROOT_PASSWORD: ${DNPM_MYSQL_ROOT_PASSWORD}
volumes:
- /var/cache/bridgehead/dnpm/mysql:/var/lib/mysql
dnpm-authup:
image: authup/authup:latest
container_name: bridgehead-dnpm-authup
volumes:
- /var/cache/bridgehead/dnpm/authup:/usr/src/app/writable
depends_on:
dnpm-mysql:
condition: service_healthy
command: server/core start
environment:
- PUBLIC_URL=https://${HOST}/auth/
- AUTHORIZE_REDIRECT_URL=https://${HOST}
- ROBOT_ADMIN_ENABLED=true
- ROBOT_ADMIN_SECRET=${DNPM_AUTHUP_SECRET}
- ROBOT_ADMIN_SECRET_RESET=true
- DB_TYPE=mysql
- DB_HOST=dnpm-mysql
- DB_USERNAME=root
- DB_PASSWORD=${DNPM_MYSQL_ROOT_PASSWORD}
- DB_DATABASE=auth
labels:
- "traefik.enable=true"
- "traefik.http.middlewares.authup-strip.stripprefix.prefixes=/auth/"
- "traefik.http.routers.dnpm-auth.middlewares=authup-strip"
- "traefik.http.routers.dnpm-auth.rule=PathPrefix(`/auth`)"
- "traefik.http.services.dnpm-auth.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-auth.tls=true"
dnpm-portal:
image: ghcr.io/dnpm-dip/portal:latest
container_name: bridgehead-dnpm-portal
environment:
- NUXT_API_URL=http://dnpm-backend:9000/
- NUXT_PUBLIC_API_URL=https://${HOST}/api/
- NUXT_AUTHUP_URL=http://dnpm-authup:3000/
- NUXT_PUBLIC_AUTHUP_URL=https://${HOST}/auth/
labels:
- "traefik.enable=true"
- "traefik.http.routers.dnpm-frontend.rule=PathPrefix(`/`)"
- "traefik.http.services.dnpm-frontend.loadbalancer.server.port=3000"
- "traefik.http.routers.dnpm-frontend.tls=true"
dnpm-backend:
container_name: bridgehead-dnpm-backend
image: ghcr.io/dnpm-dip/backend:latest
environment:
- LOCAL_SITE=${ZPM_SITE}:${SITE_NAME} # Format: {Site-ID}:{Site-name}, e.g. UKT:Tübingen
- RD_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- MTB_RANDOM_DATA=${DNPM_SYNTH_NUM:--1}
- HATEOAS_HOST=https://${HOST}
- CONNECTOR_TYPE=broker
- AUTHUP_URL=robot://system:${DNPM_AUTHUP_SECRET}@http://dnpm-authup:3000
volumes:
- /etc/bridgehead/dnpm/config:/dnpm_config
- /var/cache/bridgehead/dnpm/backend-data:/dnpm_data
depends_on:
dnpm-authup:
condition: service_healthy
labels:
- "traefik.enable=true"
- "traefik.http.services.dnpm-backend.loadbalancer.server.port=9000"
# expose everything
- "traefik.http.routers.dnpm-backend.rule=PathPrefix(`/api`)"
- "traefik.http.routers.dnpm-backend.tls=true"
- "traefik.http.routers.dnpm-backend.service=dnpm-backend"
# except ETL
- "traefik.http.routers.dnpm-backend-etl.rule=PathRegexp(`^/api(/.*)?etl(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-etl.tls=true"
- "traefik.http.routers.dnpm-backend-etl.service=dnpm-backend"
# this needs an ETL processor with support for basic auth
- "traefik.http.routers.dnpm-backend-etl.middlewares=auth"
# except peer-to-peer
- "traefik.http.routers.dnpm-backend-peer.rule=PathRegexp(`^/api(/.*)?/peer2peer(/.*)?$`)"
- "traefik.http.routers.dnpm-backend-peer.tls=true"
- "traefik.http.routers.dnpm-backend-peer.service=dnpm-backend"
- "traefik.http.routers.dnpm-backend-peer.middlewares=dnpm-backend-peer"
# this effectively denies all requests
# this is okay, because requests from peers don't go through Traefik
- "traefik.http.middlewares.dnpm-backend-peer.ipWhiteList.sourceRange=0.0.0.0/32"
landing:
labels:
- "traefik.http.routers.landing.rule=PathPrefix(`/landing`)"

Some files were not shown because too many files have changed in this diff.