feat: sentry for arm
Some checks failed
Lock closed issues/PRs / lock (push) Has been cancelled
Test / Sentry self-hosted end-to-end tests (push) Has been cancelled
Test / unit tests (push) Has been cancelled
Test / Sentry upgrade test (push) Has been cancelled
Test / integration test v2.19.0 - customizations disabled (push) Has been cancelled
Test / integration test v2.19.0 - customizations enabled (push) Has been cancelled
Test / integration test v2.26.0 - customizations disabled (push) Has been cancelled
Test / integration test v2.26.0 - customizations enabled (push) Has been cancelled
Signed-off-by: 小草林(田梓萱) <xcl@xuegao-tzx.top>
parent 63d12d94b7
commit 90db12dfc0
.craft.yml (new file, +6)
@@ -0,0 +1,6 @@
minVersion: "0.23.1"
changelogPolicy: auto
artifactProvider:
  name: none
targets:
  - name: github
.editorconfig (new file, +16)
@@ -0,0 +1,16 @@
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
insert_final_newline = true

[*.sh]
indent_size = 2

[*.yml]
indent_size = 2

[nginx/*.conf]
indent_style = tab
.env (new file, +21)
@@ -0,0 +1,21 @@
COMPOSE_PROJECT_NAME=sentry-self-hosted
COMPOSE_PROFILES=feature-complete
SENTRY_EVENT_RETENTION_DAYS=90
# You can either use a port number or an IP:PORT combo for SENTRY_BIND
# See https://docs.docker.com/compose/compose-file/#ports for more
SENTRY_BIND=96
# Set SENTRY_MAIL_HOST to a valid FQDN (host/domain name) to be able to send emails!
# SENTRY_MAIL_HOST=example.com
SENTRY_IMAGE=getsentry/sentry:nightly
SNUBA_IMAGE=getsentry/snuba:nightly
RELAY_IMAGE=getsentry/relay:nightly
SYMBOLICATOR_IMAGE=getsentry/symbolicator:nightly
VROOM_IMAGE=getsentry/vroom:nightly
HEALTHCHECK_INTERVAL=30s
HEALTHCHECK_TIMEOUT=1m30s
HEALTHCHECK_RETRIES=10
# Caution: Raising max connections of postgres increases CPU and RAM usage
# see https://github.com/getsentry/self-hosted/pull/2740 for more information
POSTGRES_MAX_CONNECTIONS=100
# Set SETUP_JS_SDK_ASSETS to 1 to enable the setup of JS SDK assets
# SETUP_JS_SDK_ASSETS=1
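The `.env` file above is a plain `KEY=VALUE` list that both `install.sh` and Docker Compose read; the repo's `.gitignore` reserves `.env.custom` for local overrides. As a small sketch (the file path and key below are written to a temp file for illustration, not taken from this commit), this is how a shell script can read one of these keys back out:

```shell
# Create a throwaway env file with two of the keys defined above.
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
COMPOSE_PROJECT_NAME=sentry-self-hosted
SENTRY_EVENT_RETENTION_DAYS=90
EOF

# Extract a single value the way a POSIX shell script might:
# match the exact key at line start, then take everything after "=".
retention=$(grep '^SENTRY_EVENT_RETENTION_DAYS=' "$env_file" | cut -d= -f2)
echo "$retention"
```

With `docker compose`, the same overrides can be supplied via `--env-file .env.custom` instead of editing `.env` in place.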
.gitattributes (new file, +5)
@@ -0,0 +1,5 @@
/.gitattributes export-ignore
/.gitignore export-ignore
/.github export-ignore
/.editorconfig export-ignore
/.craft.yml export-ignore
.github/ISSUE_TEMPLATE/config.yml (new file, +5)
@@ -0,0 +1,5 @@
blank_issues_enabled: false
contact_links:
  - name: Report a security vulnerability
    url: https://sentry.io/security/#vulnerability-disclosure
    about: Please see our guide for responsible disclosure.
.github/ISSUE_TEMPLATE/feature-request.yml (new file, +28)
@@ -0,0 +1,28 @@
name: 💡 Feature Request
description: Tell us about a problem our software could solve but doesn't.
body:
  - type: textarea
    id: problem
    attributes:
      label: Problem Statement
      description: What problem could `self-hosted` solve that it doesn't?
      placeholder: |-
        I want to make whirled peas, but `self-hosted` doesn't blend.
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Solution Brainstorm
      description: We know you have bright ideas to share ... share away, friend.
      placeholder: |-
        Add a blender to `self-hosted`.
    validations:
      required: false
  - type: markdown
    attributes:
      value: |-
        ## Thanks
        Check our [triage docs](https://open.sentry.io/triage/) for what to expect next.
    validations:
      required: false
.github/ISSUE_TEMPLATE/problem-report.yml (new file, +86)
@@ -0,0 +1,86 @@
name: 🐞 Problem Report
description: Tell us about something that's not working the way you expect.
body:
  - type: input
    id: self_hosted_version
    attributes:
      label: Self-Hosted Version
      placeholder: 21.7.0 ← should look like this (check the footer)
      description: What version of self-hosted Sentry are you running?
    validations:
      required: true
  - type: input
    id: cpu_architecture
    attributes:
      label: CPU Architecture
      placeholder: x86_64 ← should look like this
      description: |
        What cpu architecture are you running self-hosted on?
        e.g: (docker info --format '{{.Architecture}}')
    validations:
      required: true
  - type: input
    id: docker_version
    attributes:
      label: Docker Version
      placeholder: 20.10.16 ← should look like this
      description: |
        What version of docker are you using to run self-hosted?
        e.g: (docker --version)
    validations:
      required: true
  - type: input
    id: docker_compose_version
    attributes:
      label: Docker Compose Version
      placeholder: 2.6.0 ← should look like this (docker compose version)
      description: |
        What version of docker compose are you using to run self-hosted?
        e.g: (docker compose version)
    validations:
      required: true
  - type: textarea
    id: repro
    attributes:
      label: Steps to Reproduce
      description: How can we see what you're seeing? Specific is terrific.
      placeholder: |-
        1. foo
        2. bar
        3. baz
    validations:
      required: true
  - type: textarea
    id: expected
    attributes:
      label: Expected Result
    validations:
      required: true
  - type: textarea
    id: actual
    attributes:
      label: Actual Result
      description: |
        Logs? Screenshots? Yes, please.
        e.g.:
        - latest install logs: `ls -1 sentry_install_log-*.txt | tail -1 | xargs cat`
        - `docker compose logs` output
      placeholder: |-
        e.g.:
        - logs output
    validations:
      required: true
  - type: textarea
    id: event_id
    attributes:
      label: Event ID
      description: |
        If you opted into sending errors to our error monitoring and the error has an event ID, enter it here!
      placeholder: c2d85058-d3b0-4d85-a509-e2ba965845d7
  - type: markdown
    attributes:
      value: |-
        ## Thanks
        Check our [triage docs](https://open.sentry.io/triage/) for what to expect next.
    validations:
      required: false
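The "latest install logs" one-liner suggested in the Actual Result field works because `sentry_install_log-*.txt` filenames embed timestamps that sort lexically, so `ls -1 | tail -1` lands on the newest file. A quick sketch on throwaway files (the filenames below are made up to illustrate the sort order):

```shell
# Simulate two install-log files from different runs in a scratch directory.
workdir=$(mktemp -d)
cd "$workdir"
echo "older run"  > sentry_install_log-2024-01-01_10-00-00.txt
echo "newest run" > sentry_install_log-2024-02-01_10-00-00.txt

# The template's one-liner: list logs sorted by name, keep the last, print it.
latest=$(ls -1 sentry_install_log-*.txt | tail -1 | xargs cat)
echo "$latest"
```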
.github/ISSUE_TEMPLATE/release.yml (new file, +30)
@@ -0,0 +1,30 @@
name: 📦 Release Issue
description: Start a new self-hosted Sentry release
title: Release YY.M.N
body:
  - type: textarea
    attributes:
      label: Body
      description: "Edit YY.M.N in the title and three times in the first line of the body, then submit. 👍"
      value: |
        [previous YY.M.N](https://github.com/getsentry/self-hosted/issues) | ***YY.M.N*** | [next YY.M.N](https://github.com/getsentry/self-hosted/issues)

        - [ ] Release all components (_replace items with [publish repo issue links](https://github.com/getsentry/publish/issues)_).
          - [ ] [`relay`](https://github.com/getsentry/relay/actions/workflows/release_binary.yml)
          - [ ] [`sentry`](https://github.com/getsentry/sentry/actions/workflows/release.yml)
          - [ ] [`snuba`](https://github.com/getsentry/snuba/actions/workflows/release.yml)
          - [ ] [`symbolicator`](https://github.com/getsentry/symbolicator/actions/workflows/release.yml)
          - [ ] [`vroom`](https://github.com/getsentry/vroom/actions/workflows/release.yaml)
        - [ ] Release self-hosted.
          - [ ] [Prepare the `self-hosted` release](https://github.com/getsentry/self-hosted/actions/workflows/release.yml) (_replace with publish issue repo link_).
          - [ ] Check to make sure the new release branch in self-hosted includes the appropriate CalVer images.
          - [ ] Accept (publish) the release.
        - [ ] Edit [release notes](https://github.com/getsentry/self-hosted/releases).
        - [ ] Follow up.
          - [ ] [Create the next release issue](https://github.com/getsentry/self-hosted/issues/new?assignees=&labels=&projects=&template=release.yml) and link it from this one.
            - _replace with link_
          - [ ] Update the [release issue template](https://github.com/getsentry/self-hosted/blob/master/.github/ISSUE_TEMPLATE/release.yml).
          - [ ] Create a PR to update relocation release tests to add the new version.
            - _replace with link_
    validations:
      required: true
.github/PULL_REQUEST_TEMPLATE.md (new file, +16)
@@ -0,0 +1,16 @@
<!-- Describe your PR here. -->

<!--

Sentry employees and contractors can delete or ignore the following.

-->

### Legal Boilerplate

Look, I get it. The entity doing business as "Sentry" was incorporated in the State of Delaware in 2015 as Functional Software, Inc. and is gonna need some rights from me in order to utilize my contributions in this here PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Sentry can use, modify, copy, and redistribute my contributions, under Sentry's choice of terms.
.github/dependabot.yml (new file, +19)
@@ -0,0 +1,19 @@
version: 2
updates:
  - package-ecosystem: docker
    directory: "/"
    schedule:
      interval: daily
    open-pull-requests-limit: 0 # only security updates
    reviewers:
      - "@getsentry/open-source"
      - "@getsentry/security"

  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      # Check for updates to GitHub Actions every week
      interval: "weekly"
    reviewers:
      - "@getsentry/open-source"
      - "@getsentry/security"
.github/workflows/enforce-license-compliance.yml (new file, +16)
@@ -0,0 +1,16 @@
name: Enforce License Compliance

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  enforce-license-compliance:
    runs-on: ubuntu-latest
    steps:
      - name: 'Enforce License Compliance'
        uses: getsentry/action-enforce-license-compliance@main
        with:
          fossa_api_key: ${{ secrets.FOSSA_API_KEY }}
.github/workflows/fast-revert.yml (new file, +40)
@@ -0,0 +1,40 @@
on:
  pull_request_target:
    types: [labeled]
  workflow_dispatch:
    inputs:
      pr:
        required: true
        description: pr number
      co_authored_by:
        required: true
        description: '`name <email>` for triggering user'

# disable all permissions -- we use the PAT's permissions instead
permissions: {}

jobs:
  revert:
    runs-on: ubuntu-latest
    if: |
      github.event_name == 'workflow_dispatch' || github.event.label.name == 'Trigger: Revert'
    steps:
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.BUMP_SENTRY_TOKEN }}
      - uses: getsentry/action-fast-revert@v2.0.1
        with:
          pr: ${{ github.event.number || github.event.inputs.pr }}
          co_authored_by: ${{ github.event.inputs.co_authored_by || format('{0} <{1}+{0}@users.noreply.github.com>', github.event.sender.login, github.event.sender.id) }}
          committer_name: getsentry-bot
          committer_email: bot@sentry.io
          token: ${{ secrets.BUMP_SENTRY_TOKEN }}
      - name: comment on failure
        run: |
          curl \
            --silent \
            -X POST \
            -H 'Authorization: token ${{ secrets.BUMP_SENTRY_TOKEN }}' \
            -d'{"body": "revert failed (conflict? already reverted?) -- [check the logs](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})"}' \
            https://api.github.com/repositories/${{ github.event.repository.id }}/issues/${{ github.event.number || github.event.inputs.pr }}/comments
        if: failure()
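When the revert is triggered by a label rather than manual dispatch, the workflow's `format('{0} <{1}+{0}@users.noreply.github.com>', ...)` expression builds a `Co-authored-by` identity from the sender's login and numeric user id. A sketch of the same string construction in shell (the login and id below are illustrative placeholders, not values from this commit):

```shell
# Hypothetical triggering user; GitHub's noreply addresses take the
# form <id>+<login>@users.noreply.github.com.
login="octocat"
id="583231"

# Mirror the workflow's format() call: "{login} <{id}+{login}@users.noreply.github.com>"
co_authored_by=$(printf '%s <%s+%s@users.noreply.github.com>' "$login" "$id" "$login")
echo "$co_authored_by"
```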
.github/workflows/lock.yml (new file, +17)
@@ -0,0 +1,17 @@
name: 'Lock closed issues/PRs'
on:
  schedule:
    - cron: '11 3 * * *'
  workflow_dispatch:
jobs:
  lock:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-latest
    steps:
      - uses: getsentry/forked-action-lock-threads@master
        with:
          github-token: ${{ github.token }}
          issue-lock-inactive-days: 15
          issue-lock-reason: ''
          pr-lock-inactive-days: 15
          pr-lock-reason: ''
.github/workflows/pre-commit.yml (new file, +16)
@@ -0,0 +1,16 @@
name: pre-commit

on:
  pull_request:
  push:
    branches: [master]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: 3.x
      - uses: pre-commit/action@v3.0.1
.github/workflows/release.yml (new file, +57)
@@ -0,0 +1,57 @@
name: Release
on:
  workflow_dispatch:
    inputs:
      version:
        description: Version to release (optional)
        required: false
      force:
        description: Force a release even when there are release-blockers (optional)
        required: false
  schedule:
    # We want the release to be at 10 or 11am Pacific Time
    # We also make this an hour after all others such as Sentry,
    # Snuba, and Relay to make sure their releases finish.
    - cron: "0 18 15 * *"
jobs:
  release:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-latest
    name: "Release a new version"
    steps:
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.GH_RELEASE_PAT }}
          fetch-depth: 0
      - name: Prepare release
        id: prepare-release
        uses: getsentry/action-prepare-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GH_RELEASE_PAT }}
        with:
          version: ${{ github.event.inputs.version }}
          force: ${{ github.event.inputs.force }}
          calver: true
    outputs:
      release-version: ${{ env.RELEASE_VERSION }}
  dogfood-release:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-latest
    name: Create release on self-hosted dogfood instance
    needs: release
    steps:
      - uses: actions/checkout@v4
        with:
          token: ${{ secrets.GH_RELEASE_PAT }}
          fetch-depth: 0
      - uses: getsentry/action-release@v1
        env:
          SENTRY_ORG: self-hosted
          SENTRY_PROJECT: installer
          SENTRY_URL: https://self-hosted.getsentry.net/
          SENTRY_AUTH_TOKEN: ${{ secrets.SELF_HOSTED_RELEASE_TOKEN }}
        with:
          environment: production
          version: ${{ needs.release.outputs.release-version }}
          ignore_empty: true
          ignore_missing: true
.github/workflows/test.yml (new file, +159)
@@ -0,0 +1,159 @@
name: Test
on:
  # Run CI on all pushes to the master and release/** branches, and on all new
  # pull requests, and on all pushes to pull requests (even if a pull request
  # is not against master).
  push:
    branches:
      - "master"
      - "release/**"
  pull_request:
  schedule:
    - cron: "0 0,12 * * *"
defaults:
  run:
    shell: bash
jobs:
  e2e-test:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-22.04
    name: "Sentry self-hosted end-to-end tests"
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: self-hosted

      - name: End to end tests
        uses: getsentry/action-self-hosted-e2e-tests@main
        with:
          project_name: self-hosted

  unit-test:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-22.04
    name: "unit tests"
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Unit Tests
        run: ./unit-test.sh

  upgrade-test:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-22.04
    name: "Sentry upgrade test"
    env:
      REPORT_SELF_HOSTED_ISSUES: 0
    steps:
      - name: Get latest self-hosted release version
        run: |
          LATEST_TAG=$(curl -s https://api.github.com/repos/getsentry/self-hosted/releases/latest | jq -r '.tag_name')
          echo "LATEST_TAG=$LATEST_TAG" >> $GITHUB_ENV

      - name: Checkout latest release
        uses: actions/checkout@v4
        with:
          ref: ${{ env.LATEST_TAG }}

      - name: Get Compose
        run: |
          # Docker Compose v1 is installed here, remove it
          sudo rm -f "/usr/local/bin/docker-compose"
          sudo rm -f "/usr/local/lib/docker/cli-plugins/docker-compose"
          sudo mkdir -p "/usr/local/lib/docker/cli-plugins"
          sudo curl -L https://github.com/docker/compose/releases/download/v2.26.0/docker-compose-`uname -s`-`uname -m` -o "/usr/local/lib/docker/cli-plugins/docker-compose"
          sudo chmod +x "/usr/local/lib/docker/cli-plugins/docker-compose"

      - name: Install ${{ env.LATEST_TAG }}
        run: ./install.sh

      - name: Checkout current ref
        uses: actions/checkout@v4

      - name: Install current ref
        run: ./install.sh

  integration-test:
    if: github.repository_owner == 'getsentry'
    runs-on: ubuntu-22.04
    name: integration test ${{ matrix.compose_version }} - customizations ${{ matrix.customizations }}
    strategy:
      fail-fast: false
      matrix:
        customizations: ["disabled", "enabled"]
        compose_version: ["v2.19.0", "v2.26.0"]
        include:
          - compose_version: "v2.19.0"
            compose_path: "/usr/local/lib/docker/cli-plugins"
          - compose_version: "v2.26.0"
            compose_path: "/usr/local/lib/docker/cli-plugins"
    env:
      COMPOSE_PROJECT_NAME: self-hosted-${{ strategy.job-index }}
      REPORT_SELF_HOSTED_ISSUES: 0
      SELF_HOSTED_TESTING_DSN: ${{ vars.SELF_HOSTED_TESTING_DSN }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup dev environment
        run: |
          pip install -r requirements-dev.txt
          echo "PY_COLORS=1" >> "$GITHUB_ENV"
          ### pytest-sentry configuration ###
          if [ "$GITHUB_REPOSITORY" = "getsentry/self-hosted" ]; then
            echo "PYTEST_SENTRY_DSN=$SELF_HOSTED_TESTING_DSN" >> $GITHUB_ENV
            echo "PYTEST_SENTRY_TRACES_SAMPLE_RATE=0" >> $GITHUB_ENV

            # This records failures on master to sentry in order to detect flakey tests, as it's
            # expected that people have failing tests on their PRs
            if [ "$GITHUB_REF" = "refs/heads/master" ]; then
              echo "PYTEST_SENTRY_ALWAYS_REPORT=1" >> $GITHUB_ENV
            fi
          fi

      - name: Get Compose
        run: |
          # Always remove `docker compose` support as that's the newer version
          # and comes installed by default nowadays.
          sudo rm -f "/usr/local/lib/docker/cli-plugins/docker-compose"
          # Docker Compose v1 is installed here, remove it
          sudo rm -f "/usr/local/bin/docker-compose"
          sudo rm -f "${{ matrix.compose_path }}/docker-compose"
          sudo mkdir -p "${{ matrix.compose_path }}"
          sudo curl -L https://github.com/docker/compose/releases/download/${{ matrix.compose_version }}/docker-compose-`uname -s`-`uname -m` -o "${{ matrix.compose_path }}/docker-compose"
          sudo chmod +x "${{ matrix.compose_path }}/docker-compose"

      - name: Install self-hosted
        uses: nick-fields/retry@v3
        with:
          timeout_minutes: 10
          max_attempts: 3
          command: ./install.sh

      - name: Integration Test
        run: |
          if [ "${{ matrix.compose_version }}" = "v2.19.0" ]; then
            pytest --reruns 3 --cov --junitxml=junit.xml _integration-test/ --customizations=${{ matrix.customizations }}
          else
            pytest --cov --junitxml=junit.xml _integration-test/ --customizations=${{ matrix.customizations }}
          fi

      - name: Inspect failure
        if: failure()
        run: |
          docker compose ps
          docker compose logs

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v5
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          slug: getsentry/self-hosted

      - name: Upload test results to Codecov
        if: ${{ !cancelled() }}
        uses: codecov/test-results-action@v1
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
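Both "Get Compose" steps above download the Compose v2 CLI plugin by assembling a release URL from the target version plus `uname -s` (OS) and `uname -m` (architecture), which is what makes the same workflow work on x86_64 and ARM runners alike: docker/compose publishes binaries named `docker-compose-<OS>-<arch>`. A sketch of just the URL construction:

```shell
# Version pinned by the workflow's matrix / upgrade job.
compose_version="v2.26.0"

# Same URL the curl step fetches; uname substitutes e.g. Linux-x86_64
# on an Intel runner or Linux-aarch64 on an ARM one.
url="https://github.com/docker/compose/releases/download/${compose_version}/docker-compose-$(uname -s)-$(uname -m)"
echo "$url"
```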
.gitignore (new file, +106)
@@ -0,0 +1,106 @@
# Error reporting choice cache
.reporterrors

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt
sentry_install_log*.txt
sentry_reset_log*.txt
sentry_restore_log*.txt
sentry_backup_log*.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Ipython Notebook
.ipynb_checkpoints

# pyenv
.python-version

# https://docs.docker.com/compose/extends/
docker-compose.override.yml

# https://docs.docker.com/compose/environment-variables/#using-the---env-file--option
.env.custom

*.tar
data/

# Editor / IDE
.vscode/tags
.idea

# custom Sentry config
sentry/sentry.conf.py
sentry/config.yml
sentry/*.bak
sentry/enhance-image.sh
sentry/requirements.txt
relay/credentials.json
relay/config.yml
symbolicator/config.yml
geoip/GeoIP.conf
geoip/*.mmdb
geoip/.geoipupdate.lock

# integration testing
_integration-test/custom-ca-roots/nginx/*
sentry/test-custom-ca-roots.py

# OSX minutia
.DS_Store
.pre-commit-config.yaml (new file, +24)
@@ -0,0 +1,24 @@
repos:
  - repo: local
    hooks:
      # Based on https://github.com/scop/pre-commit-shfmt/blob/main/.pre-commit-hooks.yaml
      # Customized to also work on ARM, and give diff for CI on failure.
      - id: shfmt
        name: shfmt
        description: Format shell source code
        language: docker_image
        entry: --net none mvdan/shfmt:v3.5.1
        args: [-w, -d]
        files: .*\.sh
        stages: [commit, merge-commit, push, manual]

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-merge-conflict
      - id: check-symlinks
      - id: end-of-file-fixer
      - id: trailing-whitespace
      - id: check-yaml
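The shfmt hook's `files: .*\.sh` pattern decides which staged paths the formatter runs on. pre-commit matches file paths with Python's `re.search`, which `grep -E` approximates closely enough for a sketch; note the pattern is unanchored, so it matches `.sh` anywhere in the path (the sample filenames below are illustrative):

```shell
# Feed a few candidate paths through the hook's regex and see which survive.
matched=$(printf '%s\n' install.sh README.md _unit-test/run.sh | grep -E '.*\.sh')
echo "$matched"
```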
669
CHANGELOG.md
Normal file
669
CHANGELOG.md
Normal file
@ -0,0 +1,669 @@
|
||||
# Changelog
|
||||
|
||||
## 24.11.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- feat(healthcheck): Improve redis healthcheck (#3422) by @hubertdeng123
|
||||
- fix: missing mime types and turning off autoindex for js-sdk endpoint (#3395) by @aldy505
|
||||
- fix: Use js.sentry-cdn.com for JS SDK downloads (#3417) by @BYK
|
||||
- fix(loader): provide js sdk assets from 4.x (#3415) by @aldy505
|
||||
- Revert "Revert "ref(feedback): remove issue platform flags after releasing issue types"" (#3403) by @BYK
|
||||
- Revert "ref(feedback): remove issue platform flags after releasing issue types" (#3402) by @BYK
|
||||
- ref(feedback): remove issue platform flags after releasing issue types (#3397) by @aliu39
|
||||
- fix(sentry-admin): Do not wait for command finish to display output (#3390) by @Makhonya
|
||||
|
||||
## 24.10.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- chore: Disable codecov for master/release branches (#3384) by @hubertdeng123
|
||||
- chore: replace old URLs of the repo with the new docs (#3375) by @victorelec14
|
||||
- ref: span normalization allowed host config (#3245) by @aldy505
|
||||
- docs: explicitly specify `mail.use-{tls,ssl}` is mutually exclusive (#3368) by @aldy505
|
||||
- ref: allow hosted js sdk bundles (#3365) by @aldy505
|
||||
- fix(clickhouse): Allow nullable key (#3354) by @nikhars
|
||||
|
||||
## 24.9.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- docs: link to develop docs (#3307) by @joshuarli
|
||||
- fix: more leeway for minimum RAM (#3290) by @joshuarli
|
||||
- Mandate minimum requirements for ram/cpu (#3275) by @hubertdeng123
|
||||
- ref(feedback): cleanup topic rollout option (#3276) by @aliu39
|
||||
- Update release template (#3270) by @hubertdeng123
|
||||
|
||||
## 24.8.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- Migrate to zookeeper-less kafka (#3263) by @hubertdeng123
|
||||
- Revert "ref(feedback): cleanup topic rollout option" (#3262) by @aliu39
|
||||
- ref(feedback): cleanup topic rollout option (#3256) by @aliu39
|
||||
- Remove cdc and wal2json and use the default postgres entrypoint (#3260) by @beezz
|
||||
- add `-euo pipefail` to enhance-image.example.sh (#3246) by @asottile-sentry
|
||||
- remove python-dev (#3242) by @asottile-sentry
|
||||
- feat: enable user feedback feature (#3193) by @aldy505
|
||||
- Use CDN by default for JS SDK Loader (#3213) by @stayallive
|
||||
|
||||
## 24.7.1
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- Fix: errors only config flag (#3220) by @hubertdeng123
|
||||
- Add errors only self-hosted infrastructure (#3190) by @hubertdeng123
|
||||
- feat(generic-metrics): Add gauges to docker compose, re-try (#3177) by @ayirr7
|
||||
|
||||
## 24.7.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- Check postgres os before proceeding with install (#3197) by @hubertdeng123
|
||||
- Update sentry-admin.sh to select its own working directory (#3184) by @theoriginalgri
|
||||
- feat: add insights feature flags (#3152) by @aldy505
|
||||
- feat(relay): Forward /api/0/relays/* to inner relays (#3144) by @iambriccardo
|
||||
|
||||
## 24.6.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- Use general kafka topic creation in self-hosted (#3121) by @hubertdeng123
|
||||
- Use non-alpine postgres (#3116) by @hubertdeng123
|
||||
- Bump Python SDK version used in tests (#3108) by @sentrivana
|
||||
|
||||
## 24.5.1
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- Update consumer flags (#3112) by @hubertdeng123
|
||||
- feat: Add crons task consumers (#3106) by @wedamija
|
||||
- Update minimum docker compose requirement (#3078) by @JannKleen
|
||||
- Different approach to editing permissions of docker volumes (#3084) by @hubertdeng123
|
||||
- ref(spans): Add new feature flags needed (#3092) by @phacops
|
||||
- chore: Add comment explaining the one liner in clickhouse config (#3085) by @hubertdeng123
|
||||
- Fix install: use dynamic docker root dir instead of hardcoded one (#3064) by @boutetnico
|
||||
- Typo in config.example.yml (#3063) by @luchaninov
|
||||
|
||||
## 24.5.0
|
||||
|
||||
### Various fixes & improvements
|
||||
|
||||
- fix: Make docker volume script respect compose project name (#3039) by @hubertdeng123
|
||||
- remove ref to skip writes (#3041) by @john-z-yang
|
||||
- Add clickhouse healthchecks to upgrade (#3024) by @hubertdeng123
- Upgrade clickhouse to 23.8 (#3009) by @hubertdeng123
- fix: use nginx realip module (#2977) by @oioki
- Add upgrade test (#3012) by @hubertdeng123
- Bump kafka and zookeeper versions (#2988) by @hubertdeng123

## 24.4.2

### Various fixes & improvements

- Edit test file name (#3002) by @hubertdeng123
- Revert "Sampling: Run e2e tests every 5 minutes" (#2999) by @hubertdeng123
- Fix master test failures (#3000) by @hubertdeng123
- Sampling: Run e2e tests every 5 minutes (#2994) by @hubertdeng123
- Tweak e2e test github action (#2987) by @hubertdeng123
- fix(performance): Add spans-first-ui flag to enable starfish/performance module views in ui (#2993) by @edwardgou-sentry
- Bump docker compose version in CI (#2980) by @hubertdeng123
- Upgrade postgres to 14.11 (#2975) by @mdtro
- Add workstation configuration (#2968) by @azaslavsky

## 24.4.1

### Various fixes & improvements

- chore(deps): bump memcached and redis to latest patch versions (#2973) by @mdtro
- Use docker compose exec to create additional kafka topics (#2904) by @saz
- Add example to docker compose version in problem report (#2959) by @edgariscoding
- Port last integration tests to python (#2966) by @hubertdeng123

## 24.4.0

### Various fixes & improvements

- Use python for e2e tests (#2953) by @hubertdeng123
- feat: adds group attributes consumer (#2927) by @scefali
- fix(spans): Adds organizations:standalone-span-ingestion flag to default config (#2936) by @edwardgou-sentry
- Bump ubuntu version for tests (#2923) by @hubertdeng123
- Write Customization tests in python (#2918) by @hubertdeng123
- feat(clickhouse): Added max_suspicious_broken_parts to the config.xml (#2853) by @victorelec14
- Port backup tests to python (#2907) by @hubertdeng123
- Fix defunct java processes (#2914) by @hubertdeng123
- Integration tests in python (#2892) by @hubertdeng123
- feat: run outcomes-billing consumer (#2909) by @lynnagara
- Remove duplicate feature flags (#2899) by @JannKleen
## 24.3.0

### Various fixes & improvements

- feat(spans): Ingest spans (#2861) by @phacops
- Integration test improvements (#2858) by @hubertdeng123
- increase postgres max_connections above 100 connections (#2740) by @erfantkerfan
- deps: bump maxmind/geoipupdate to 6.1.0 (#2859) by @victorelec14
- Enable proxy buffering in nginx (#2844) by @RexTim
- Add snuba rust consumers (#2831) by @hubertdeng123
- simplify if for open-ai-suggestion (#2732) by @LvckyAPI
- Upgrade to FSL-1.1 (#2835) by @chadwhitacre
- chore: provide clearer csrf url example (#2833) by @aldy505
- chore: Use django ORM to perform sql commands (#2827) by @hubertdeng123
- revert changes in 3067683f6c0e1c6dd9ceb72cb5155c1dbf3bf501 (#2829) by @hubertdeng123
- use rust consumers in self-hosted (3067683f) by @hubertdeng123

## 24.2.0

### Various fixes & improvements

- Bump nginx version (#2797) by @hubertdeng123
- build(deps): bump pre-commit/action from 3.0.0 to 3.0.1 (#2788) by @dependabot
- Tweak postgres indexing fix (#2792) by @hubertdeng123
- fix: DB migration script (#2779) by @hubertdeng123

## 24.1.2

### Various fixes & improvements

- Check memcached backend in Django (#2778) by @chadwhitacre
- Fix groupedmessage indexing error (#2777) by @hubertdeng123
- build(deps): bump actions/setup-python from 4 to 5 (#2644) by @dependabot
- feat: provide csrf settings information for sentry config (#2762) by @aldy505
- Fix apt config generation when http_proxy is set (#2725) (#2734) by @lemrouch

## 24.1.1

### Various fixes & improvements

- Revert "Move open ai key from env variables" (#2724) by @hubertdeng123
- Fix cache error self hosted (#2722) by @hubertdeng123

## 24.1.0

### Various fixes & improvements

- Enable crons (#2712) by @hubertdeng123
- Parameterize backup restore script (#2412) by @hubertdeng123
- Run tests only on getsentry repository (#2681) by @aminvakil
- Tweak the template now that we can see it (#2670) by @chadwhitacre
- Nginx client request body is buffered to a temporary file (#2630) by @zKoz210
## 23.12.1

### Various fixes & improvements

- Make a release issue template (#2666) by @chadwhitacre

## 23.12.0

### Various fixes & improvements

- test(backup): Use --no-prompt for backup tests (#2618) by @azaslavsky

## 23.11.2

- No documented changes.

## 23.11.1

### Various fixes & improvements

- feat: Add sentry-admin.sh tool (#2594) by @azaslavsky
- Patch for dev self-hosted environments (#2592) by @hubertdeng123
- Relicense under FSL-1.0-Apache-2.0 (#2586) by @chadwhitacre
- Bump minimum ram usage (#2585) by @hubertdeng123

## 23.11.0

### Various fixes & improvements

- feat: provide a toggle to enable discord integration (#2548) by @aldy505
- ref: fix a typo (#2556) by @asottile-sentry
- ref: use `git branch --show-current` instead of sed (#2550) by @asottile-sentry
- Remove sessions infra (#2514) by @hubertdeng123
- Upgrade Clickhouse to 21.8 (#2536) by @hubertdeng123
- [Snyk] Security upgrade debian from bullseye-slim to bookworm-20231009-slim (#2511) by @Indigi-managed
- snuba: Remove deprecated CLI arg (#2515) by @lynnagara

## 23.10.1

### Various fixes & improvements

- Revert "feat: upgrade to zookeeper-less kafka (#2445)" (#2500) by @hubertdeng123
- Add fast revert GH workflow (#2499) by @hubertdeng123
- build(deps): bump actions/checkout from 3 to 4 (#2493) by @dependabot
- configure dependabot (#2491) by @mdtro
- deps: bump nginx to 1.25.2 (#2490) by @mdtro
- feat: upgrade to zookeeper-less kafka (#2445) by @joshuarli
- Update outdated install option in README (#2440) by @hubertdeng123

## 23.10.0

### Various fixes & improvements

- Switch geoipupdate image to ghcr.io (#2442) by @hkraal
- Add system.url-prefix to config for visibility (#2426) by @hubertdeng123
- Remove CSPMiddleware since it is enabled by default in the upstream sentry (#2434) by @oioki
- Update nginx.conf (#2455) by @mwarkentin
- Update Redis container image to 6.2.13 (#2432) by @mencarellic
## 23.9.1

### Various fixes & improvements

- fix: e2e test jq bug (#2410) by @azaslavsky
- feat(backup): Support new backup script (#2407) by @azaslavsky
- Decrease frequency of e2e tests (#2383) by @hubertdeng123
- Reduce logs coming from clickhouse (#2382) by @hubertdeng123
- Increase frequency of e2e test runs (#2375) by @hubertdeng123
- Remove nginx content-disposition hack for safari (#2381) by @hubertdeng123
- Attempt to fix integration test flakiness (#2372) by @hubertdeng123
- change health check for kafka service (#2371) by @johnatannvmd
- Add metrics and generic metrics backend (#2355) by @hubertdeng123
- Bump self-hosted e2e action commit sha (#2369) by @hubertdeng123

## 23.8.0

### Various fixes & improvements

- Add issue platform infra (#2309) by @hubertdeng123

## 23.7.2

### Various fixes & improvements

- Ignore fixture-custom-ca-roots service in integration test (#2321) by @hubertdeng123
- Update GeoIpUpdate to v6.0.0 (#2287) by @victorelec14
- Bump healthcheck timeout (#2300) by @hubertdeng123

## 23.7.1

### Various fixes & improvements

- Resolve Safari Content-Disposition header bug (#2297) by @azaslavsky
- feat: vroom cleanup script that respects default retention days (#2211) by @aldy505

## 23.7.0

### Various fixes & improvements

- Remove nc -q option (#2275) by @hubertdeng123
- Move open ai key from env variables (#2274) by @hubertdeng123
- Fix command called in reset script (#2254) by @stayallive
- Remove stale-bot in self-hosted (#2255) by @hubertdeng123
- Update geoipupdate to 5.1.1 (#2236) by @williamdes

## 23.6.2

### Various fixes & improvements

- Update nginx to 1.25.1 (#2235) by @williamdes
- Fix error fingerprinting (#2237) by @chadwhitacre
- A couple unit testing improvements (#2238) by @chadwhitacre
- Fix #1684 (#2234) by @azaslavsky
- Update memcached to 1.6.21 (#2231) by @williamdes
- Update redis to 6.2.12 (#2230) by @williamdes
- ref: Move all consumers to unified consumer CLI (#2224) by @hubertdeng123
- Revert "ref: Move most consumers to unified consumer CLI" (#2223) by @hubertdeng123
- ref: Move most consumers to unified consumer CLI (#2203) by @untitaker
- Release 23.6.1 cleanup (#2209) by @hubertdeng123

## 23.6.1

### Various fixes & improvements

- Fix bump version script (#2207) by @hubertdeng123
## 23.6.0

### Various fixes & improvements

- Remove docker compose v1 (#2187) by @hubertdeng123
- ref(compose): Separate ingest consumers (#2193) by @jan-auer
- feat(profiling): Run profiling on self-hosted (#2154) by @phacops

## 23.5.2

- No documented changes.

## 23.5.1

### Various fixes & improvements

- fix(suggested-fix): key should be 'key', not 'token' (#2146) by @aldy505

## 23.5.0

### Various fixes & improvements

- Add no strict offset reset options to consumers (#2144) by @hubertdeng123
- Add settings for enabling CSP to config file (#2134) by @hubertdeng123
- feat: add suggested fix feature (#2115) by @aldy505
- adding ulimits for zookeeper, kafka, and web (#2123) by @jamincollins
- Uninstall Docker Compose v1 from CI so it's not used for tests (#2114) by @hubertdeng123
- Fixed docker compose issue in backup/restore (#2110) by @montaniasystemab
- Enable upstream keepalive (#2099) by @otbutz
- Bump commit sha for e2e test action (#2104) by @hubertdeng123
- Use docker compose exec to account for differences in container names for Postgres upgrade (#2096) by @hubertdeng123
- Change symbolicator to use CalVer for release (#2091) by @hubertdeng123

## 23.4.0

### Postgres 14 Upgrade

We've now included an upgrade from Postgres 9.6 to 14.5 that will automatically be run via the `./install.sh` script.

By: @hubertdeng123 (#2074)

### Various fixes & improvements

- Remove clean function testing line (#2082) by @hubertdeng123
- Fix command to get docker compose version in problem report template (#2080) by @hubertdeng123
- Tweak permissioning of backup file in backup script to read/write for all users (#2043) by @hubertdeng123
- Remove commit-batch-size parameter (#2058) by @hubertdeng123
- Support external sourcemaps bigger, than 1Mb (#2050) by @le0pard
- Add github setup instructions to config.example.yml (#2051) by @tm1000
- ref(snuba): Use snuba self-hosted settings (#2039) by @enochtangg
## 23.3.1

### Various fixes & improvements

- Bump Kafka version to keep up with SaaS (#2037) by @chadwhitacre
- Add Backup/restore scripts (#2029) by @hubertdeng123
- Add opt in error monitoring to reset and clean scripts (#2021) by @hubertdeng123

## 23.3.0

### Various fixes & improvements

- Remove ZooKeeper snapshot (#2020) by @dereckson
- feat(snuba): Add snuba sessions subscription service (#2006) by @klboke
- Add backup/restore integration tests (#2012) by @hubertdeng123
- ref(snuba): Remove snuba-cleanup, snuba-transactions-cleanup jobs (#2003) by @klboke
- ref(replays): Remove the session-replay-ui flag (#2010) by @ryan953
- Remove broken replay integration test (#2011) by @hubertdeng123
- Bump self-hosted-e2e-tests action commit sha (#2008) by @hubertdeng123
- Revert symbolicator tests (#2004) by @hubertdeng123
- post-process-forwarder: Update CLI command (#1999) by @lynnagara
- feat(replays): add replays to self hosted (#1990) by @JoshFerge
- Remove issue status helper automation (#1989) by @hubertdeng123
- Add proxy buffer size config to fix Bad Gateway (#1984) by @SCjona
- Reference paths relative to project root (#1800) by @spawnia
- Run close stale issues/PRs only on getsentry (#1969) by @aminvakil

## 23.2.0

### Various fixes & improvements

- Run lock issues/PRs only on getsentry (#1966) by @aminvakil
- Updates Redis to 6.2.10 (#1937) by @danielhartnell
- Handle missing example files gracefully (#1950) by @chadwhitacre
- Fix post-release.sh for `git pull` (#1938) by @BYK
- Manually change 23.1.1 to nightly (#1936) by @hubertdeng123

## 23.1.1

### Various fixes & improvements

- ci: Add test for symbolicator pipeline (#1916) by @ethanhs

## 23.1.0

### Various fixes & improvements

- ci: Check health of services after running integration tests and fix snuba-replacer (#1897) by @ethanhs
- Add wal2json debugging (#1906) by @chadwhitacre
- Pick up CI bugfix (#1905) by @chadwhitacre
- ref: Move jq build to error-handling.sh, and use proxy config (#1895) by @ethanhs
- fix(CI): use default curl retry mechanism for wal2json install (#1890) by @volokluev
- ref: Retry wal2json download in installer (#1881) by @ethanhs
- ci: Remove GCB and update Github Action SHA (#1880) by @ethanhs
## 22.12.0

### Various fixes & improvements

- Build each service image individually (#1858) by @ethanhs
- Set higher kafka healthcheck timeout and fix clickhouse timeout (#1855) by @ethanhs
- Add --skip-sse42-requirements to install.sh and enable SKIP_SSE42_REQUIREMENTS override (#1790) by @erinaceous
- Fix commit-log-topic parameter configuration problem (#1817) by @klboke
- Add .idea to .gitignore (#1803) by @spawnia
- Add USE_X_FORWARDED_HOST to example config (#1804) by @crinjes
- (fix): Fix contributor PR e2e tests (#1820) by @hubertdeng123

## 22.11.0

### Various fixes & improvements

- Fix jq usage (#1814) by @ethanhs
- Try adding end to end tests using new action (#1806) by @ethanhs
- Add context line, error msg to envelope (#1784) by @hubertdeng123
- Update to actions/checkoutv3 to address upcoming github deprecations (#1792) by @mattgauntseo-sentry
- ref: upgrade actions/setup-python to avoid set-output deprecation (#1789) by @asottile-sentry
- Enforce error reporting (#1777) by @hubertdeng123
- Upload end of log as breadcrumbs, use exceptions and stacktrace (#1775) by @ethanhs
- Fix sentry release for dogfood instance (#1768) by @hubertdeng123
- Add pre-commit config (#1738) by @ethanhs
- Do not send event on INT signal (#1773) by @hubertdeng123

## 22.10.0

### Various fixes & improvements

- Split post process forwarders (#1759) by @chadwhitacre
- Revert "Enforce error reporting for self-hosted" (#1755) by @hubertdeng123
- Enforce error reporting for self-hosted (#1753) by @hubertdeng123
- ref: Remove unused scripts and code (#1710) by @BYK
- Check to see if docker compose exists, else error out (#1733) by @hubertdeng123
- Fix minimum version requirements for docker and docker compose (#1732) by @hubertdeng123
- Factor out clean and use it in unit-test (#1731) by @chadwhitacre
- Reorganize unit test layout (#1729) by @hubertdeng123
- Request event ID in issue template (#1723) by @ethanhs
- Tag releases with sentry-cli (#1718) by @hubertdeng123
- Send full logs as an attachment to our dogfood instance (#1715) by @hubertdeng123

## 22.9.0

### Various fixes & improvements

- Fix traceback hash for error monitoring (#1700) by @hubertdeng123
- Add section about error monitoring to the README (#1699) by @ethanhs
- Switch from .reporterrors file to flag + envvar (#1697) by @chadwhitacre
- Rename flag to --skip-user-creation (#1696) by @chadwhitacre
- Default to not sending data to Sentry for now (#1695) by @chadwhitacre
- fix(e2e tests): Pull branch that initially triggers gcp build for PRs (#1694) by @hubertdeng123
- fix(e2e tests): Add .reporterrors file for GCP run of e2e tests (#1691) by @hubertdeng123
- Error monitoring of the self-hosted installer (#1679) by @ethanhs
- added docker commands in the description (#1673) by @victorelec14
- Use docker-compose 2.7.0 instead of 2.2.3 in CI (#1591) by @aminvakil
## 22.8.0

- No documented changes.

## 22.7.0

### Various fixes & improvements

- ref: use sort -V to check minimum versions (#1553) by @ethanhs
- Get more data from users in issue templates (#1497) by @aminvakil
- Add ARM support (#1538) by @chadwhitacre
- do not use gosu for snuba-transactions-cleanup and snuba-cleanup (#1564) by @goganchic
- ref: Replace regex with --short flag to get compose version (#1551) by @ethanhs
- Improve installation through proxy (#1543) by @goganchic
- Cleanup .env{,.custom} handling (#1539) by @chadwhitacre
- Bump nginx:1.22.0-alpine (#1506) by @aminvakil
- Run release a new version job only on getsentry (#1529) by @aminvakil

## 22.6.0

### Various fixes & improvements

- fix "services.web.healthcheck.retries must be a number" (#1482) by @yuval1986
- Add volume for nginx cache (#1511) by @glensc
- snuba: New subscriptions infrastructure rollout (#1507) by @lynnagara
- Ease modification of base image (#1479) by @spawnia

## 22.5.0

### Various fixes & improvements

- ref: reset user to root for installation (#1469) by @asottile-sentry
- Document From email display name (#1446) by @chadwhitacre
- Bring in CLA Lite (#1439) by @chadwhitacre
- fix: replace git.io links with redirect targets (#1430) by @asottile-sentry

## 22.4.0

### Various fixes & improvements

- Use better API key when available (#1408) by @chadwhitacre
- Use a custom action (#1407) by @chadwhitacre
- Add some debug logging (#1340) by @chadwhitacre
- meta(gha): Deploy workflow enforce-license-compliance.yml (#1388) by @chadwhitacre
- Turn off containers under old name as well (#1384) by @chadwhitacre

## 22.3.0

### Various fixes & improvements

- Run CI every night (#1334) by @aminvakil
- Docker-Compose: Avoid setting hostname to '' (#1365) by @glensc
- meta(gha): Deploy workflow enforce-license-compliance.yml (#1375) by @chadwhitacre
- ci: Change stale GitHub workflow to run once a day (#1371) by @kamilogorek
- ci: Temporary fix for interactive prompt on createuser (#1370) by @BYK
- meta(gha): Deploy workflow enforce-license-compliance.yml (#1347) by @chadwhitacre
- Add SaaS nudge to README (#1327) by @chadwhitacre
## 22.2.0

### Various fixes & improvements

- fix: unbound variable _group in reset/dc-detect-version script (#1283) (#1284) by @lovetodream
- Remove routing helper (#1323) by @chadwhitacre
- Bump nginx:1.21.6-alpine (#1319) by @aminvakil
- Add a cloudbuild.yaml for GCB (#1315) by @chadwhitacre
- Update set-up-and-migrate-database.sh (#1308) by @drmrbrewer
- Pull relay explicitly to avoid garbage in creds (#1301) by @chadwhitacre
- Improve logging of docker versions and relay creds (#1298) by @chadwhitacre
- Remove file again (#1299) by @chadwhitacre
- Clean up relay credentials generation (#1289) by @chadwhitacre
- Add CI compose version 1.29.2 / 2.0.1 / 2.2.3 (#1290) by @chadwhitacre
- Revert "Add CI compose version 1.29.2 / 2.0.1 / 2.2.3 (#1251)" (#1272) by @chadwhitacre
- Add CI compose version 1.29.2 / 2.0.1 / 2.2.3 (#1251) by @aminvakil

## 22.1.0

### Various fixes & improvements

- Make healthcheck variables configurable in .env (#1248) by @aminvakil
- Take some actions to avoid unhealthy containers (#1241) by @chadwhitacre
- Install: setup umask (#1222) by @glensc
- Deprecated /docker-entrypoint.sh call (#1218) by @marcinroman
- Bump nginx:1.21.5-alpine (#1230) by @aminvakil
- Fix reset.sh docker-compose call (#1215) by @aminvakil
- Set worker_processes to auto (#1207) by @aminvakil
## 21.12.0

### Support Docker Compose v2 (ongoing)

Self-hosted Sentry mostly works with Docker Compose v2 (in addition to v1 >= 1.28.0). There is [one more bug](https://github.com/getsentry/self-hosted/issues/1133) we are trying to squash.

By: @chadwhitacre (#1179)

### Prevent Component Drift

When a user runs the `install.sh` script, they get the latest versions of the Sentry, Snuba, Relay and Symbolicator projects. However, there is no guarantee they have pulled the latest `self-hosted` version first, and running an old one may cause problems. To mitigate this, we now perform a check during installation that the user is on the latest commit if they are on the `master` branch. You can disable this check with `--skip-commit-check`.

By: @chadwhitacre (#1191), @aminvakil (#1186)
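A minimal sketch of what such a check can look like (illustration only, not the actual `install.sh` code; the function name and message wording are invented):

```shell
# Hypothetical sketch of a commit-freshness check -- not the real installer code.
# Assumes the current directory is a git clone with an "origin" remote.
check_up_to_date() {
  if [ "$(git branch --show-current)" = "master" ]; then
    git fetch --quiet origin master
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
      echo "Checkout is behind origin/master; git pull first (or pass --skip-commit-check)." >&2
      return 1
    fi
  fi
}
```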
### React to log4shell

Self-hosted Sentry is [not vulnerable](https://github.com/getsentry/self-hosted/issues/1196) to the [log4shell](https://log4shell.com/) vulnerability.

By: @chadwhitacre (#1203)

### Forum → Issues

In the interest of reducing sources of truth, providing better support, and restarting the fire of the self-hosted Sentry community, we [deprecated the Discourse forum in favor of GitHub Issues](https://github.com/getsentry/self-hosted/issues/1151).

By: @chadwhitacre (#1167, #1160, #1159)

### Rename onpremise to self-hosted (ongoing)

In the beginning we used the term "on-premise" and over time we introduced the term "self-hosted." In an effort to regain some consistency for both branding and developer mental overhead purposes, we are standardizing on the term "self-hosted." This release includes a fair portion of the work towards this across multiple repos; hopefully a future release will include the remainder. Some orphaned containers / volumes / networks are [expected](https://github.com/getsentry/self-hosted/pull/1169#discussion_r756401917). You may clean them up with `docker-compose down --remove-orphans`.

By: @chadwhitacre (#1169)

### Add support for custom DotEnv file

There are several ways to [configure self-hosted Sentry](https://develop.sentry.dev/self-hosted/#configuration) and one of them is the `.env` file. In this release we add support for a `.env.custom` file that is git-ignored to make it easier for you to override keys configured this way with custom values. Thanks to @Sebi94nbg for the contribution!

By: @Sebi94nbg (#1113)
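For example (a sketch; `SENTRY_EVENT_RETENTION_DAYS` is one of the keys shipped in the stock `.env`, and the value below is only illustrative): start from a copy of `.env` and adjust the keys you care about.

```shell
# .env.custom -- git-ignored; created with `cp .env .env.custom`, then edited.
# Only the changed key is shown; the rest of the copied file stays as-is.
SENTRY_EVENT_RETENTION_DAYS=30
```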

### Various fixes & improvements

- Revert "Rename onpremise to self-hosted" (5495fe2e) by @chadwhitacre
- Rename onpremise to self-hosted (9ad05d87) by @chadwhitacre
## 21.11.0

### Various fixes & improvements

- Fix #1079 - bug in reset.sh (#1134) by @chadwhitacre
- ci: Enable parallel tests again, increase timeouts (#1125) by @BYK
- fix: Hide compose errors during version check (#1124) by @BYK
- build: Omit nightly bump commit from changelog (#1120) by @BYK
- build: Set master version to nightly (d3e77857)

## 21.10.0

### Support for Docker Compose v2 (ongoing)

You asked for it and you did it! Sentry self-hosted now can work with Docker Compose v2 thanks to our community's contributions.

PRs: #1116

### Various fixes & improvements

- docs: simplify Linux `sudo` instructions in README (#1096)
- build: Set master version to nightly (58874cf9)

## 21.9.0

- fix(healthcheck): Increase retries to 5 (#1072)
- fix(requirements): Make compose version check bw-compatible (#1068)
- ci: Test with the required minimum docker-compose (#1066)

  Run tests using docker-compose `1.28.0` instead of latest

- fix(clickhouse): Use correct HTTP port for healthcheck (#1069)

  Fixes the regular `Unexpected packet` errors in Clickhouse

## 21.8.0

- feat: Support custom CA roots ([#27062](https://github.com/getsentry/sentry/pull/27062)), see the [docs](https://develop.sentry.dev/self-hosted/custom-ca-roots/) for more details.
- fix: Fix `curl` image to version 7.77.0
- upgrade: docker-compose version to 1.29.2
- feat: Leverage health checks for depends_on

## 21.7.0

- No documented changes.

## 21.6.3

- No documented changes.

## 21.6.2

- BREAKING CHANGE: The frontend bundle will be loaded asynchronously (via [#25744](https://github.com/getsentry/sentry/pull/25744)). This is a breaking change that can affect custom plugins that access certain globals in the django template. Please see https://forum.sentry.io/t/breaking-frontend-changes-for-custom-plugins/14184 for more information.

## 21.6.1

- No documented changes.

## 21.6.0

- feat: Add healthchecks for redis, memcached and postgres (#975)
13
CONTRIBUTING.md
Normal file
@ -0,0 +1,13 @@
## Testing

### Running Tests with Pytest

We use pytest for running tests. To run the tests:

1) Ensure that you are in the root directory of the project.
2) Run the following command:

```bash
pytest
```

This will automatically discover and run all test cases in the project.
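Beyond a bare `pytest`, the standard selection flags help narrow a run. A small self-contained demonstration (the directory and test names are invented for the example; it assumes `pytest` is installed):

```shell
# Create a throwaway test file, then run only the tests matching a keyword.
demo_dir=$(mktemp -d)
cat > "$demo_dir/test_demo.py" <<'EOF'
def test_alpha():
    assert 1 + 1 == 2

def test_beta():
    assert "sentry".upper() == "SENTRY"
EOF
pytest -q "$demo_dir" -k alpha   # selects test_alpha only; test_beta is deselected
```

The same flags apply to this repo's suites, e.g. `pytest -k <keyword> _integration-test/` to run a subset of the integration tests.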
105
LICENSE.md
Normal file
@ -0,0 +1,105 @@
# Functional Source License, Version 1.1, Apache 2.0 Future License

## Abbreviation

FSL-1.1-Apache-2.0

## Notice

Copyright 2016-2024 Functional Software, Inc. dba Sentry

## Terms and Conditions

### Licensor ("We")

The party offering the Software under these Terms and Conditions.

### The Software

The "Software" is each version of the software that we make available under
these Terms and Conditions, as indicated by our inclusion of these Terms and
Conditions with the Software.

### License Grant

Subject to your compliance with this License Grant and the Patents,
Redistribution and Trademark clauses below, we hereby grant you the right to
use, copy, modify, create derivative works, publicly perform, publicly display
and redistribute the Software for any Permitted Purpose identified below.

### Permitted Purpose

A Permitted Purpose is any purpose other than a Competing Use. A Competing Use
means making the Software available to others in a commercial product or
service that:

1. substitutes for the Software;

2. substitutes for any other product or service we offer using the Software
   that exists as of the date we make the Software available; or

3. offers the same or substantially similar functionality as the Software.

Permitted Purposes specifically include using the Software:

1. for your internal use and access;

2. for non-commercial education;

3. for non-commercial research; and

4. in connection with professional services that you provide to a licensee
   using the Software in accordance with these Terms and Conditions.

### Patents

To the extent your use for a Permitted Purpose would necessarily infringe our
patents, the license grant above includes a license under our patents. If you
make a claim against any party that the Software infringes or contributes to
the infringement of any patent, then your patent license to the Software ends
immediately.

### Redistribution

The Terms and Conditions apply to all copies, modifications and derivatives of
the Software.

If you redistribute any copies, modifications or derivatives of the Software,
you must include a copy of or a link to these Terms and Conditions and not
remove any copyright notices provided in or with the Software.

### Disclaimer

THE SOFTWARE IS PROVIDED "AS IS" AND WITHOUT WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR
PURPOSE, MERCHANTABILITY, TITLE OR NON-INFRINGEMENT.

IN NO EVENT WILL WE HAVE ANY LIABILITY TO YOU ARISING OUT OF OR RELATED TO THE
SOFTWARE, INCLUDING INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES,
EVEN IF WE HAVE BEEN INFORMED OF THEIR POSSIBILITY IN ADVANCE.

### Trademarks

Except for displaying the License Details and identifying us as the origin of
the Software, you have no right under these Terms and Conditions to use our
trademarks, trade names, service marks or product names.

## Grant of Future License

We hereby irrevocably grant you an additional license to use the Software under
the Apache License, Version 2.0 that is effective on the second anniversary of
the date we make the Software available. On or after that date, you may use the
Software under the Apache License, Version 2.0, in which case the following
will apply:

Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
@ -1,2 +1,5 @@
# Sentry
# Self-Hosted Sentry

[Sentry](https://sentry.io/), feature-complete and packaged up for low-volume deployments and proofs-of-concept.

Documentation [here](https://develop.sentry.dev/self-hosted/).
82
_integration-test/conftest.py
Normal file
@ -0,0 +1,82 @@
import os
|
||||
import subprocess
|
||||
import time
|
||||
|
||||
import httpx
|
||||
import pytest
|
||||
|
||||
SENTRY_CONFIG_PY = "sentry/sentry.conf.py"
|
||||
SENTRY_TEST_HOST = os.getenv("SENTRY_TEST_HOST", "http://localhost:96")
|
||||
TEST_USER = "test@example.com"
|
||||
TEST_PASS = "test123TEST"
|
||||
TIMEOUT_SECONDS = 60
|
||||
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("--customizations", default="disabled")
|
||||
|
||||
|
||||
@pytest.fixture(scope="session", autouse=True)
|
||||
def configure_self_hosted_environment(request):
|
||||
subprocess.run(
|
||||
["docker", "compose", "--ansi", "never", "up", "-d"],
|
||||
check=True,
|
||||
capture_output=True,
|
||||
)
|
||||
for i in range(TIMEOUT_SECONDS):
|
||||
try:
|
||||
response = httpx.get(SENTRY_TEST_HOST, follow_redirects=True)
|
||||
except httpx.RequestError:
|
||||
time.sleep(1)
|
||||
else:
|
||||
if response.status_code == 200:
|
||||
break
|
||||
else:
|
||||
raise AssertionError("timeout waiting for self-hosted to come up")
|
||||
|
||||
if request.config.getoption("--customizations") == "enabled":
|
||||
os.environ["TEST_CUSTOMIZATIONS"] = "enabled"
|
||||
script_content = """\
|
||||
#!/bin/bash
|
||||
touch /created-by-enhance-image
|
||||
apt-get update
|
||||
apt-get install -y gcc libsasl2-dev python-dev-is-python3 libldap2-dev libssl-dev
|
||||
"""
|
||||
|
||||
with open("sentry/enhance-image.sh", "w") as script_file:
|
||||
script_file.write(script_content)
|
||||
# Set executable permissions for the shell script
|
||||
os.chmod("sentry/enhance-image.sh", 0o755)
|
||||
|
||||
# Write content to the requirements.txt file
|
||||
with open("sentry/requirements.txt", "w") as req_file:
|
||||
req_file.write("python-ldap\n")
|
||||
os.environ["MINIMIZE_DOWNTIME"] = "1"
|
||||
subprocess.run(["./install.sh"], check=True, capture_output=True)
|
||||
# Create test user
|
||||
subprocess.run(
|
||||
[
|
||||
"docker",
|
||||
"compose",
|
||||
"exec",
|
||||
"-T",
|
||||
"web",
|
||||
"sentry",
|
||||
"createuser",
|
||||
"--force-update",
|
||||
"--superuser",
|
||||
"--email",
|
||||
TEST_USER,
|
||||
"--password",
|
||||
TEST_PASS,
|
||||
"--no-input",
|
||||
],
|
||||
check=True,
|
||||
text=True,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture()
|
||||
def setup_backup_restore_env_variables():
|
||||
os.environ["SENTRY_DOCKER_IO_DIR"] = os.path.join(os.getcwd(), "sentry")
|
||||
os.environ["SKIP_USER_CREATION"] = "1"
|
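The readiness loop in `configure_self_hosted_environment` above leans on Python's `for … else` construct: the `else` branch runs only when the loop exhausts without hitting `break` (or returning). A minimal self-contained sketch of that timeout idiom — `wait_until` and `probe` are illustrative stand-ins, not part of the test suite:

```python
def wait_until(probe, attempts=5):
    """Return the first truthy probe() result, or raise after `attempts` tries."""
    for _ in range(attempts):
        result = probe()
        if result:
            return result
    else:  # reached only when the loop exhausts without an early exit
        raise AssertionError("timed out waiting for probe to succeed")


# Succeeds on the third call.
calls = iter([False, False, True])
assert wait_until(lambda: next(calls)) is True
```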
16 _integration-test/custom-ca-roots/custom-ca-roots-test.py Normal file
@@ -0,0 +1,16 @@
import unittest

import requests


class CustomCATests(unittest.TestCase):
    def test_valid_self_signed(self):
        self.assertEqual(requests.get("https://self.test").text, "ok")

    def test_invalid_self_signed(self):
        with self.assertRaises(requests.exceptions.SSLError):
            requests.get("https://fail.test")


if __name__ == "__main__":
    unittest.main()
12 _integration-test/custom-ca-roots/docker-compose.test.yml Normal file
@@ -0,0 +1,12 @@
version: '3.4'
services:
  fixture-custom-ca-roots:
    image: nginx:1.21.0-alpine
    restart: unless-stopped
    volumes:
      - ./_integration-test/custom-ca-roots/nginx:/etc/nginx:ro
    networks:
      default:
        aliases:
          - self.test
          - fail.test
5 _integration-test/fixtures/envelope-with-profile Normal file
@@ -0,0 +1,5 @@
{"event_id":"66578634d48d433db0ad52882d1efe5b","sent_at":"2023-05-17T14:54:31.057Z","sdk":{"name":"sentry.javascript.node","version":"7.46.0"},"trace":{"environment":"production","transaction":"fib: sourcemaps here","public_key":"05ab86aebbe14a24bcab62caa839cf27","trace_id":"33321bfbd5304bcc9663d1b53b08f84e","sample_rate":"1"}}
{"type":"transaction"}
{"contexts":{"profile":{"profile_id":"e73aaf1f29b24812be60132f32d09f92"},"trace":{"op":"test","span_id":"b38f2b24537c3858","trace_id":"33321bfbd5304bcc9663d1b53b08f84e"},"runtime":{"name":"node","version":"v16.16.0"},"app":{"app_start_time":"2023-05-17T14:54:27.678Z","app_memory":57966592},"os":{"kernel_version":"22.3.0","name":"macOS","version":"13.2","build":"22D49"},"device":{"boot_time":"2023-05-12T15:08:41.047Z","arch":"arm64","memory_size":34359738368,"free_memory":6861651968,"processor_count":10,"cpu_description":"Apple M1 Pro","processor_frequency":24},"culture":{"locale":"en-US","timezone":"America/New_York"}},"spans":[],"start_timestamp":1684335267.744,"tags":{},"timestamp":1684335271.033,"transaction":"fib: sourcemaps here","type":"transaction","transaction_info":{"source":"custom"},"platform":"node","server_name":"TK6G745PW1.local","event_id":"66578634d48d433db0ad52882d1efe5b","environment":"production","sdk":{"integrations":["InboundFilters","FunctionToString","Console","Http","OnUncaughtException","OnUnhandledRejection","ContextLines","LocalVariables","Context","Modules","RequestData","LinkedErrors","ProfilingIntegration"],"name":"sentry.javascript.node","version":"7.46.0","packages":[{"name":"npm:@sentry/node","version":"7.46.0"}]},"debug_meta":{"images":[]},"modules":{}}
{"type":"profile"}
{"event_id":"e73aaf1f29b24812be60132f32d09f92","timestamp":"2023-05-17T14:54:27.744Z","platform":"node","version":"1","release":"","environment":"production","runtime":{"name":"node","version":"16.16.0"},"os":{"name":"darwin","version":"22.3.0","build_number":"Darwin Kernel Version 22.3.0: Thu Jan 5 20:48:54 PST 2023; root:xnu-8792.81.2~2/RELEASE_ARM64_T6000"},"device":{"locale":"en_US.UTF-8","model":"arm64","manufacturer":"Darwin","architecture":"arm64","is_emulator":false},"debug_meta":{"images":[]},"profile":{"samples":[{"stack_id":0,"thread_id":"0","elapsed_since_start_ns":125000},{"stack_id":0,"thread_id":"0","elapsed_since_start_ns":13958000}],"frames":[{"lineno":14129,"colno":17,"function":"startProfiling","abs_path":"/Users/jonasbadalic/code/node-profiler/lib/index.js"}],"stacks":[[0]],"thread_metadata":{"0":{"name":"main"}}},"transaction":{"name":"fib: sourcemaps here","id":"66578634d48d433db0ad52882d1efe5b","trace_id":"33321bfbd5304bcc9663d1b53b08f84e","active_thread_id":"0"}}
3 _integration-test/fixtures/envelope-with-transaction Normal file
@@ -0,0 +1,3 @@
{"event_id":"66578634d48d433db0ad52882d1efe5b","sent_at":"2023-05-17T14:54:31.057Z","sdk":{"name":"sentry.javascript.node","version":"7.46.0"},"trace":{"environment":"production","transaction":"fib: sourcemaps here","public_key":"05ab86aebbe14a24bcab62caa839cf27","trace_id":"33321bfbd5304bcc9663d1b53b08f84e","sample_rate":"1"}}
{"type":"transaction"}
{"contexts":{"trace":{"op":"test","span_id":"b38f2b24537c3858","trace_id":"33321bfbd5304bcc9663d1b53b08f84e"},"runtime":{"name":"node","version":"v16.16.0"},"app":{"app_start_time":"2023-05-17T14:54:27.678Z","app_memory":57966592},"os":{"kernel_version":"22.3.0","name":"macOS","version":"13.2","build":"22D49"},"device":{"boot_time":"2023-05-12T15:08:41.047Z","arch":"arm64","memory_size":34359738368,"free_memory":6861651968,"processor_count":10,"cpu_description":"Apple M1 Pro","processor_frequency":24},"culture":{"locale":"en-US","timezone":"America/New_York"}},"spans":[],"start_timestamp":1684335267.744,"tags":{},"timestamp":1684335271.033,"transaction":"fib: sourcemaps here","type":"transaction","transaction_info":{"source":"custom"},"platform":"node","server_name":"TK6G745PW1.local","event_id":"66578634d48d433db0ad52882d1efe5b","environment":"production","sdk":{"integrations":["InboundFilters","FunctionToString","Console","Http","OnUncaughtException","OnUnhandledRejection","ContextLines","LocalVariables","Context","Modules","RequestData","LinkedErrors","ProfilingIntegration"],"name":"sentry.javascript.node","version":"7.46.0","packages":[{"name":"npm:@sentry/node","version":"7.46.0"}]},"debug_meta":{"images":[]},"modules":{}}
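The two fixtures above are Sentry envelopes: newline-delimited JSON, with one envelope-header line followed by item-header/payload pairs. A minimal parser sketch of just that framing (illustrative only — it ignores binary payloads and explicit `length` handling from the full format):

```python
import json


def parse_envelope(text):
    """Split an envelope into (header, [(item_header, payload), ...])."""
    lines = text.strip().split("\n")
    header = json.loads(lines[0])
    items = []
    # After the header, lines come in pairs: item header, then payload.
    for i in range(1, len(lines) - 1, 2):
        items.append((json.loads(lines[i]), lines[i + 1]))
    return header, items


envelope = '{"event_id":"abc"}\n{"type":"transaction"}\n{"contexts":{}}\n'
header, items = parse_envelope(envelope)
assert header["event_id"] == "abc"
assert items[0][0]["type"] == "transaction"
```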
73 _integration-test/test_backup.py Normal file
@@ -0,0 +1,73 @@
import os
import subprocess


def test_sentry_admin(setup_backup_restore_env_variables):
    sentry_admin_sh = os.path.join(os.getcwd(), "sentry-admin.sh")
    output = subprocess.run(
        [sentry_admin_sh, "--help"], check=True, capture_output=True, encoding="utf8"
    ).stdout
    assert "Usage: ./sentry-admin.sh" in output
    assert "SENTRY_DOCKER_IO_DIR" in output

    output = subprocess.run(
        [sentry_admin_sh, "permissions", "--help"],
        check=True,
        capture_output=True,
        encoding="utf8",
    ).stdout
    assert "Usage: ./sentry-admin.sh permissions" in output


def test_backup(setup_backup_restore_env_variables):
    # Docker was giving permission errors when creating this file and writing to
    # it, even after granting read + write access to group and owner. Instead,
    # create the empty file first and give everyone write access to it.
    file_path = os.path.join(os.getcwd(), "sentry", "backup.json")
    sentry_admin_sh = os.path.join(os.getcwd(), "sentry-admin.sh")
    open(file_path, "a", encoding="utf8").close()
    os.chmod(file_path, 0o666)
    assert os.path.getsize(file_path) == 0
    subprocess.run(
        [
            sentry_admin_sh,
            "export",
            "global",
            "/sentry-admin/backup.json",
            "--no-prompt",
        ],
        check=True,
    )
    assert os.path.getsize(file_path) > 0


def test_import(setup_backup_restore_env_variables):
    # Bring postgres down and recreate the docker volume
    subprocess.run(
        ["docker", "compose", "--ansi", "never", "stop", "postgres"], check=True
    )
    subprocess.run(
        ["docker", "compose", "--ansi", "never", "rm", "-f", "-v", "postgres"],
        check=True,
    )
    subprocess.run(["docker", "volume", "rm", "sentry-postgres"], check=True)
    subprocess.run(["docker", "volume", "create", "--name=sentry-postgres"], check=True)
    subprocess.run(
        ["docker", "compose", "--ansi", "never", "run", "web", "upgrade", "--noinput"],
        check=True,
    )
    subprocess.run(
        ["docker", "compose", "--ansi", "never", "up", "-d"],
        check=True,
        capture_output=True,
    )
    sentry_admin_sh = os.path.join(os.getcwd(), "sentry-admin.sh")
    subprocess.run(
        [
            sentry_admin_sh,
            "import",
            "global",
            "/sentry-admin/backup.json",
            "--no-prompt",
        ],
        check=True,
    )
446 _integration-test/test_run.py Normal file
@@ -0,0 +1,446 @@
import datetime
import json
import os
import re
import shutil
import subprocess
import time
from functools import lru_cache
from typing import Callable

import httpx
import pytest
import sentry_sdk
from bs4 import BeautifulSoup
from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

SENTRY_CONFIG_PY = "sentry/sentry.conf.py"
SENTRY_TEST_HOST = os.getenv("SENTRY_TEST_HOST", "http://localhost:96")
TEST_USER = "test@example.com"
TEST_PASS = "test123TEST"
TIMEOUT_SECONDS = 60


def poll_for_response(
    request: str, client: httpx.Client, validator: Callable = None
) -> httpx.Response:
    for i in range(TIMEOUT_SECONDS):
        response = client.get(
            request, follow_redirects=True, headers={"Referer": SENTRY_TEST_HOST}
        )
        if response.status_code == 200:
            if validator is None or validator(response.text):
                break
        time.sleep(1)
    else:
        raise AssertionError(
            "timeout waiting for response status code 200 or valid data"
        )
    return response


@lru_cache
def get_sentry_dsn(client: httpx.Client) -> str:
    response = poll_for_response(
        f"{SENTRY_TEST_HOST}/api/0/projects/sentry/internal/keys/",
        client,
        lambda x: len(json.loads(x)[0]["dsn"]["public"]) > 0,
    )
    sentry_dsn = json.loads(response.text)[0]["dsn"]["public"]
    return sentry_dsn


@pytest.fixture()
def client_login():
    client = httpx.Client()
    response = client.get(SENTRY_TEST_HOST, follow_redirects=True)
    parser = BeautifulSoup(response.text, "html.parser")
    login_csrf_token = parser.find("input", {"name": "csrfmiddlewaretoken"})["value"]
    login_response = client.post(
        f"{SENTRY_TEST_HOST}/auth/login/sentry/",
        follow_redirects=True,
        data={
            "op": "login",
            "username": TEST_USER,
            "password": TEST_PASS,
            "csrfmiddlewaretoken": login_csrf_token,
        },
        headers={"Referer": f"{SENTRY_TEST_HOST}/auth/login/sentry/"},
    )
    assert login_response.status_code == 200
    yield (client, login_response)


def test_initial_redirect():
    initial_auth_redirect = httpx.get(SENTRY_TEST_HOST, follow_redirects=True)
    assert initial_auth_redirect.url == f"{SENTRY_TEST_HOST}/auth/login/sentry/"


def test_login(client_login):
    client, login_response = client_login
    parser = BeautifulSoup(login_response.text, "html.parser")
    script_tag = parser.find(
        "script", string=lambda x: x and "window.__initialData" in x
    )
    assert script_tag is not None
    json_data = json.loads(script_tag.text.split("=", 1)[1].strip().rstrip(";"))
    assert json_data["isAuthenticated"] is True
    assert json_data["user"]["username"] == "test@example.com"
    assert json_data["user"]["isSuperuser"] is True
    assert login_response.cookies["sc"] is not None
    # Set up initial/required settings (InstallWizard request)
    client.headers.update({"X-CSRFToken": login_response.cookies["sc"]})
    response = client.put(
        f"{SENTRY_TEST_HOST}/api/0/internal/options/?query=is:required",
        follow_redirects=True,
        headers={"Referer": SENTRY_TEST_HOST},
        data={
            "mail.use-tls": False,
            "mail.username": "",
            "mail.port": 25,
            "system.admin-email": "test@example.com",
            "mail.password": "",
            "system.url-prefix": SENTRY_TEST_HOST,
            "auth.allow-registration": False,
            "beacon.anonymous": True,
        },
    )
    assert response.status_code == 200


def test_receive_event(client_login):
    event_id = None
    client, _ = client_login
    with sentry_sdk.init(dsn=get_sentry_dsn(client)):
        event_id = sentry_sdk.capture_exception(Exception("a failure"))
    assert event_id is not None
    response = poll_for_response(
        f"{SENTRY_TEST_HOST}/api/0/projects/sentry/internal/events/{event_id}/", client
    )
    response_json = json.loads(response.text)
    assert response_json["eventID"] == event_id
    assert response_json["metadata"]["value"] == "a failure"


def test_cleanup_crons_running():
    docker_services = subprocess.check_output(
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "ps",
            "-a",
        ],
        text=True,
    )
    pattern = re.compile(
        r"(\-cleanup\s+running)|(\-cleanup[_-].+\s+Up\s+)", re.MULTILINE
    )
    cleanup_crons = pattern.findall(docker_services)
    assert len(cleanup_crons) > 0


def test_custom_certificate_authorities():
    # Set environment variable
    os.environ["COMPOSE_FILE"] = (
        "docker-compose.yml:_integration-test/custom-ca-roots/docker-compose.test.yml"
    )

    test_nginx_conf_path = "_integration-test/custom-ca-roots/nginx"
    custom_certs_path = "certificates"

    # Generate tightly constrained CA
    ca_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048, backend=default_backend()
    )

    ca_name = x509.Name(
        [x509.NameAttribute(NameOID.COMMON_NAME, "TEST CA *DO NOT TRUST*")]
    )

    ca_cert = (
        x509.CertificateBuilder()
        .subject_name(ca_name)
        .issuer_name(ca_name)
        .public_key(ca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .add_extension(
            x509.KeyUsage(
                digital_signature=False,
                key_encipherment=False,
                content_commitment=False,
                data_encipherment=False,
                key_agreement=False,
                key_cert_sign=True,
                crl_sign=True,
                encipher_only=False,
                decipher_only=False,
            ),
            critical=True,
        )
        .add_extension(
            x509.NameConstraints([x509.DNSName("self.test")], None), critical=True
        )
        .sign(private_key=ca_key, algorithm=hashes.SHA256(), backend=default_backend())
    )

    ca_key_path = f"{test_nginx_conf_path}/ca.key"
    ca_crt_path = f"{test_nginx_conf_path}/ca.crt"

    with open(ca_key_path, "wb") as key_file:
        key_file.write(
            ca_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.TraditionalOpenSSL,
                encryption_algorithm=serialization.NoEncryption(),
            )
        )

    with open(ca_crt_path, "wb") as cert_file:
        cert_file.write(ca_cert.public_bytes(serialization.Encoding.PEM))

    # Create custom certs path and copy ca.crt
    os.makedirs(custom_certs_path, exist_ok=True)
    shutil.copyfile(ca_crt_path, f"{custom_certs_path}/test-custom-ca-roots.crt")

    # Generate server key and certificate
    self_test_key_path = os.path.join(test_nginx_conf_path, "self.test.key")
    self_test_csr_path = os.path.join(test_nginx_conf_path, "self.test.csr")
    self_test_cert_path = os.path.join(test_nginx_conf_path, "self.test.crt")

    self_test_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    self_test_req = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(
            x509.Name(
                [
                    x509.NameAttribute(
                        NameOID.COMMON_NAME, "Self Signed with CA Test Server"
                    )
                ]
            )
        )
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("self.test")]), critical=False
        )
        .sign(self_test_key, hashes.SHA256())
    )

    self_test_cert = (
        x509.CertificateBuilder()
        .subject_name(
            x509.Name(
                [
                    x509.NameAttribute(
                        NameOID.COMMON_NAME, "Self Signed with CA Test Server"
                    )
                ]
            )
        )
        .issuer_name(ca_cert.issuer)
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .public_key(self_test_req.public_key())
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("self.test")]), critical=False
        )
        .sign(private_key=ca_key, algorithm=hashes.SHA256())
    )

    # Save server key, CSR, and certificate
    with open(self_test_key_path, "wb") as key_file:
        key_file.write(
            self_test_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.TraditionalOpenSSL,
                encryption_algorithm=serialization.NoEncryption(),
            )
        )
    with open(self_test_csr_path, "wb") as csr_file:
        csr_file.write(self_test_req.public_bytes(serialization.Encoding.PEM))
    with open(self_test_cert_path, "wb") as cert_file:
        cert_file.write(self_test_cert.public_bytes(serialization.Encoding.PEM))

    # Generate server key and certificate for fake.test
    fake_test_key_path = os.path.join(test_nginx_conf_path, "fake.test.key")
    fake_test_cert_path = os.path.join(test_nginx_conf_path, "fake.test.crt")

    fake_test_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    fake_test_cert = (
        x509.CertificateBuilder()
        .subject_name(
            x509.Name(
                [x509.NameAttribute(NameOID.COMMON_NAME, "Self Signed Test Server")]
            )
        )
        .issuer_name(
            x509.Name(
                [x509.NameAttribute(NameOID.COMMON_NAME, "Self Signed Test Server")]
            )
        )
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=1))
        .public_key(fake_test_key.public_key())
        .add_extension(
            x509.SubjectAlternativeName([x509.DNSName("fake.test")]), critical=False
        )
        .sign(private_key=fake_test_key, algorithm=hashes.SHA256())
    )

    # Save server key and certificate for fake.test
    with open(fake_test_key_path, "wb") as key_file:
        key_file.write(
            fake_test_key.private_bytes(
                encoding=serialization.Encoding.PEM,
                format=serialization.PrivateFormat.TraditionalOpenSSL,
                encryption_algorithm=serialization.NoEncryption(),
            )
        )
    with open(fake_test_cert_path, "wb") as cert_file:
        cert_file.write(fake_test_cert.public_bytes(serialization.Encoding.PEM))
    # Our asserts for this test case must be executed within the web container,
    # so we copy a python test script into the mounted sentry directory.
    shutil.copyfile(
        "_integration-test/custom-ca-roots/custom-ca-roots-test.py",
        "sentry/test-custom-ca-roots.py",
    )

    subprocess.run(
        ["docker", "compose", "--ansi", "never", "up", "-d", "fixture-custom-ca-roots"],
        check=True,
    )
    subprocess.run(
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "run",
            "--no-deps",
            "web",
            "python3",
            "/etc/sentry/test-custom-ca-roots.py",
        ],
        check=True,
    )
    subprocess.run(
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "rm",
            "-s",
            "-f",
            "-v",
            "fixture-custom-ca-roots",
        ],
        check=True,
    )

    # Remove files
    os.remove(f"{custom_certs_path}/test-custom-ca-roots.crt")
    os.remove("sentry/test-custom-ca-roots.py")

    # Unset environment variable
    if "COMPOSE_FILE" in os.environ:
        del os.environ["COMPOSE_FILE"]


def test_receive_transaction_events(client_login):
    client, _ = client_login
    with sentry_sdk.init(
        dsn=get_sentry_dsn(client), profiles_sample_rate=1.0, traces_sample_rate=1.0
    ):

        def placeholder_fn():
            sum = 0
            for i in range(5):
                sum += i
                time.sleep(0.25)

        with sentry_sdk.start_transaction(op="task", name="Test Transactions"):
            placeholder_fn()
    poll_for_response(
        f"{SENTRY_TEST_HOST}/api/0/organizations/sentry/events/?dataset=profiles&field=profile.id&project=1&statsPeriod=1h",
        client,
        lambda x: len(json.loads(x)["data"]) > 0,
    )
    poll_for_response(
        f"{SENTRY_TEST_HOST}/api/0/organizations/sentry/events/?dataset=spansIndexed&field=id&project=1&statsPeriod=1h",
        client,
        lambda x: len(json.loads(x)["data"]) > 0,
    )


def test_customizations():
    commands = [
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "run",
            "--no-deps",
            "web",
            "bash",
            "-c",
            "if [ ! -e /created-by-enhance-image ]; then exit 1; fi",
        ],
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "run",
            "--no-deps",
            "--entrypoint=/etc/sentry/entrypoint.sh",
            "sentry-cleanup",
            "bash",
            "-c",
            "if [ ! -e /created-by-enhance-image ]; then exit 1; fi",
        ],
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "run",
            "--no-deps",
            "web",
            "python",
            "-c",
            "import ldap",
        ],
        [
            "docker",
            "compose",
            "--ansi",
            "never",
            "run",
            "--no-deps",
            "--entrypoint=/etc/sentry/entrypoint.sh",
            "sentry-cleanup",
            "python",
            "-c",
            "import ldap",
        ],
    ]
    for command in commands:
        result = subprocess.run(command, check=False)
        if os.getenv("TEST_CUSTOMIZATIONS", "disabled") == "enabled":
            assert result.returncode == 0
        else:
            assert result.returncode != 0
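The regex in `test_cleanup_crons_running` accepts two shapes of `docker compose ps` output: the Compose v2 table (`…-cleanup   running`) and the older v1 style (`…-cleanup_1 … Up …`). A self-contained sketch of the pattern against hypothetical sample rows (the container names and columns below are illustrative, not captured output):

```python
import re

# Same pattern as the test: matches either "-cleanup   running" (Compose v2)
# or "-cleanup_1 ... Up ..." (Compose v1 style).
pattern = re.compile(r"(\-cleanup\s+running)|(\-cleanup[_-].+\s+Up\s+)", re.MULTILINE)

# Hypothetical `docker compose ps -a` rows, for illustration only.
sample = (
    "sentry-self-hosted-sentry-cleanup   running\n"
    "sentry_onpremise_sentry-cleanup_1   /entrypoint.sh ...   Up 2 minutes\n"
)
assert len(pattern.findall(sample)) == 2
```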
57 _unit-test/_test_setup.sh Normal file
@@ -0,0 +1,57 @@
set -euo pipefail

source install/_lib.sh

_ORIGIN=$(pwd)

rm -rf /tmp/sentry-self-hosted-test-sandbox.*
_SANDBOX="$(mktemp -d /tmp/sentry-self-hosted-test-sandbox.XXX)"

source install/detect-platform.sh
docker build -t sentry-self-hosted-jq-local --platform="$DOCKER_PLATFORM" jq

report_success() {
  echo "$(basename $0) - Success 👍"
}

teardown() {
  test "${DEBUG:-}" || rm -rf "$_SANDBOX"
  cd "$_ORIGIN"
}

setup() {
  # Clone the local repo into a temp dir. FWIW `git clone --local` breaks for
  # me because it depends on hard-linking, which doesn't work across devices,
  # and I happen to have my workspace and /tmp on separate devices.
  git -c advice.detachedHead=false clone --depth=1 "file://$_ORIGIN" "$_SANDBOX"

  # Now propagate any local changes from the working copy to the sandbox. This
  # provides a pretty nice dev experience: edit the files in the working copy,
  # then run `DEBUG=1 some-test.sh` to leave the sandbox up for interactive
  # dev/debugging.
  git status --porcelain | while read line; do
    # $line here is something like `M some-script.sh`.

    local filepath="$(cut -f2 -d' ' <(echo $line))"
    local filestatus="$(cut -f1 -d' ' <(echo $line))"

    case $filestatus in
      D)
        rm "$_SANDBOX/$filepath"
        ;;
      A | M | AM | ??)
        ln -sf "$(realpath $filepath)" "$_SANDBOX/$filepath"
        ;;
      **)
        echo "Wuh? $line"
        exit 77
        ;;
    esac
  done

  cd "$_SANDBOX"

  trap teardown EXIT
}

setup
28 _unit-test/create-docker-volumes-test.sh Executable file
@@ -0,0 +1,28 @@
#!/usr/bin/env bash

source _unit-test/_test_setup.sh

get_volumes() {
  # If grep returns no strings, we still want to return without error
  docker volume ls --quiet | { grep '^sentry-.*' || true; } | sort
}

# Maybe they exist prior, maybe they don't. Script is idempotent.

expected_volumes="sentry-clickhouse
sentry-data
sentry-kafka
sentry-postgres
sentry-redis
sentry-symbolicator"

before=$(get_volumes)

test "$before" == "" || test "$before" == "$expected_volumes"

source install/create-docker-volumes.sh

after=$(get_volumes)
test "$after" == "$expected_volumes"

report_success
27 _unit-test/ensure-relay-credentials-test.sh Executable file
@@ -0,0 +1,27 @@
#!/usr/bin/env bash

source _unit-test/_test_setup.sh
source install/dc-detect-version.sh

# Use the _file suffix for these variables since a `creds` variable is already
# defined in dc-detect-version.sh.
cfg_file=relay/config.yml
creds_file=relay/credentials.json

# Relay files don't exist in a clean clone.
test ! -f $cfg_file
test ! -f $creds_file

# Running the install script adds them.
source install/ensure-relay-credentials.sh
test -f $cfg_file
test -f $creds_file
test "$(jq -r 'keys[2]' "$creds_file")" = "secret_key"

# If the files exist we don't touch them.
echo GARBAGE >$cfg_file
echo MOAR GARBAGE >$creds_file
source install/ensure-relay-credentials.sh
test "$(cat $cfg_file)" = "GARBAGE"
test "$(cat $creds_file)" = "MOAR GARBAGE"

report_success
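The `keys[2]` assertion above works because jq's `keys` builtin returns an object's keys sorted alphabetically, so index 2 picks the last key in sort order. A sketch of the same check in Python — the credential field names here are an assumption for illustration, not read from the repo:

```python
import json

# Hypothetical Relay credentials payload; the exact field names are assumed.
creds = json.loads('{"secret_key": "s", "public_key": "p", "id": "r"}')

# jq's `keys` sorts alphabetically: ["id", "public_key", "secret_key"],
# so keys[2] lands on "secret_key" regardless of insertion order.
assert sorted(creds.keys())[2] == "secret_key"
```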
86 _unit-test/error-handling-test.sh Executable file
@@ -0,0 +1,86 @@
#!/usr/bin/env bash

source _unit-test/_test_setup.sh

export REPORT_SELF_HOSTED_ISSUES=1

# This is set up in dc-detect-version.sh, but for
# our purposes we don't care about proxies.
dbuild="docker build"
source install/error-handling.sh

# mock send_envelope
send_envelope() {
  echo "Test Sending $1"
}

##########################

export -f send_envelope
echo "Testing initial send_event"
export log_file=test_log.txt
echo "Test Logs" >"$log_file"
echo "Error Msg" >>"$log_file"
breadcrumbs=$(generate_breadcrumb_json | sed '$d' | $jq -s -c)
SEND_EVENT_RESPONSE=$(
  send_event \
    "'foo' exited with status 1" \
    "Test exited with status 1" \
    "Traceback: ignore me" \
    "{\"ignore\": \"me\"}" \
    "$breadcrumbs"
)
rm "$log_file"
expected_filename='sentry-envelope-f73e4da437c42a1d28b86a81ebcff35d'
test "$SEND_EVENT_RESPONSE" == "Test Sending $expected_filename"
ENVELOPE_CONTENTS=$(cat "/tmp/$expected_filename")
test "$ENVELOPE_CONTENTS" == "$(cat _unit-test/snapshots/$expected_filename)"
echo "Pass."

##########################

echo "Testing send_event duplicate"
SEND_EVENT_RESPONSE=$(
  send_event \
    "'foo' exited with status 1" \
    "Test exited with status 1" \
    "Traceback: ignore me" \
    "{\"ignore\": \"me\"}" \
    "$breadcrumbs"
)
test "$SEND_EVENT_RESPONSE" == "Looks like you've already sent this error to us, we're on it :)"
echo "Pass."
rm "/tmp/$expected_filename"

##########################

echo "Testing cleanup without minimizing downtime"
export REPORT_SELF_HOSTED_ISSUES=0
export MINIMIZE_DOWNTIME=''
export dc=':'
echo "Test Logs" >"$log_file"
CLEANUP_RESPONSE=$(cleanup ERROR) # the line number of this line must match just below
rm "$log_file"
test "$CLEANUP_RESPONSE" == 'Error in _unit-test/error-handling-test.sh:62.
'\''local cmd="${BASH_COMMAND}"'\'' exited with status 0

Cleaning up...'
echo "Pass."

##########################

echo "Testing cleanup while minimizing downtime"
export REPORT_SELF_HOSTED_ISSUES=0
export MINIMIZE_DOWNTIME=1
echo "Test Logs" >"$log_file"
CLEANUP_RESPONSE=$(cleanup ERROR) # the line number of this line must match just below
rm "$log_file"
test "$CLEANUP_RESPONSE" == 'Error in _unit-test/error-handling-test.sh:76.
'\''local cmd="${BASH_COMMAND}"'\'' exited with status 0

*NOT* cleaning up, to clean your environment run "docker compose stop".'
echo "Pass."

##########################

report_success
17 _unit-test/geoip-test.sh Executable file
@@ -0,0 +1,17 @@
#!/usr/bin/env bash

source _unit-test/_test_setup.sh

mmdb=geoip/GeoLite2-City.mmdb

# Starts with no mmdb, ends up with empty.
test ! -f $mmdb
source install/geoip.sh
diff -rub $mmdb $mmdb.empty

# Doesn't clobber existing, though.
echo GARBAGE >$mmdb
source install/geoip.sh
test "$(cat $mmdb)" = "GARBAGE"

report_success
32 _unit-test/js-sdk-assets-test.sh Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env bash

source _unit-test/_test_setup.sh
source install/dc-detect-version.sh
$dcb --force-rm web

export SETUP_JS_SDK_ASSETS=1

source install/setup-js-sdk-assets.sh

sdk_files=$(docker compose run --no-deps --rm -v "sentry-nginx-www:/var/www" nginx ls -lah /var/www/js-sdk/)
sdk_tree=$(docker compose run --no-deps --rm -v "sentry-nginx-www:/var/www" nginx tree /var/www/js-sdk/ | tail -n 1)
non_empty_file_count=$(docker compose run --no-deps --rm -v "sentry-nginx-www:/var/www" nginx find /var/www/js-sdk/ -type f -size +1k | wc -l)

# `sdk_files` should contain 5 lines: '4.*', '5.*', '6.*', '7.*' and '8.*'
echo "$sdk_files"
total_directories=$(echo "$sdk_files" | grep -c '[45678]\.[0-9]*\.[0-9]*$')
echo "$total_directories"
test "5" == "$total_directories"
echo "Pass"

# `sdk_tree` should output "5 directories, 17 files"
echo "$sdk_tree"
test "5 directories, 17 files" == "$(echo "$sdk_tree")"
echo "Pass"

# Files should all be >1k (ensure they are not empty)
echo "Testing file sizes"
test "17" == "$non_empty_file_count"
echo "Pass"

report_success
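The directory-count assertion relies on `grep -c` matching only names that end in a semver-style `major.minor.patch` with major 4 through 8. A minimal standalone illustration of that counting pattern:

```shell
# Standalone illustration of the counting used above: grep -c counts only the
# lines ending in a major.minor.patch version with major 4-8. Sample listing
# lines are made up for the example.
listing=$'total 0\n4.7.0\n5.30.0\n6.19.7\n7.100.1\n8.9.2\nREADME'
count=$(echo "$listing" | grep -c '[45678]\.[0-9]*\.[0-9]*$')
echo "$count" # 5
```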
@@ -0,0 +1,6 @@
{"event_id":"f73e4da437c42a1d28b86a81ebcff35d","dsn":"https://19555c489ded4769978daae92f2346ca@self-hosted.getsentry.net/3"}
{"type":"event"}
{"level":"error","exception":{"values":[{"type":"Error","value":"Test exited with status 1","stacktrace":{"frames":[{"ignore":"me"}]}}]},"breadcrumbs":{"values":[{"message":"Test Logs","category":"log","level":"info"}]},"fingerprint":["f73e4da437c42a1d28b86a81ebcff35d"]}
{"type":"attachment","length":20,"content_type":"text/plain","filename":"install_log.txt"}
Test Logs
Error Msg
3  certificates/.gitignore  vendored  Normal file
@@ -0,0 +1,3 @@
# Add all custom CAs in this folder
*
!.gitignore
2  clickhouse/Dockerfile  Normal file
@@ -0,0 +1,2 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
33  clickhouse/config.xml  Normal file
@@ -0,0 +1,33 @@
<yandex>
  <!-- This include is important! It is required for the version of Clickhouse used on ARM to read the environment variable. This must be a one-liner to avoid errors in Clickhouse. -->
  <max_server_memory_usage_to_ram_ratio><include from_env="MAX_MEMORY_USAGE_RATIO"/></max_server_memory_usage_to_ram_ratio>
  <logger>
    <level>warning</level>
    <console>true</console>
  </logger>
  <query_thread_log remove="remove"/>
  <query_log remove="remove"/>
  <text_log remove="remove"/>
  <trace_log remove="remove"/>
  <metric_log remove="remove"/>
  <asynchronous_metric_log remove="remove"/>

  <!-- Update: Required for newer versions of Clickhouse -->
  <session_log remove="remove"/>
  <part_log remove="remove"/>

  <allow_nullable_key>1</allow_nullable_key>

  <profiles>
    <default>
      <log_queries>0</log_queries>
      <log_query_threads>0</log_query_threads>
    </default>
  </profiles>
  <merge_tree>
    <enable_mixed_granularity_parts>1</enable_mixed_granularity_parts>
    <!-- Increase "max_suspicious_broken_parts" in case of errors with Clickhouse like "Suspiciously many broken parts to remove".
         see: https://github.com/getsentry/self-hosted/issues/2832 -->
    <max_suspicious_broken_parts>10</max_suspicious_broken_parts>
  </merge_tree>
</yandex>
8  codecov.yml  Normal file
@@ -0,0 +1,8 @@
coverage:
  status:
    project:
      default:
        only_pulls: true
    patch:
      default:
        only_pulls: true
9  cron/Dockerfile  Normal file
@@ -0,0 +1,9 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
USER 0
RUN if [ -n "${http_proxy}" ]; then echo "Acquire::http::proxy \"${http_proxy}\";" >> /etc/apt/apt.conf; fi
RUN if [ -n "${https_proxy}" ]; then echo "Acquire::https::proxy \"${https_proxy}\";" >> /etc/apt/apt.conf; fi
RUN apt-get update && apt-get install -y --no-install-recommends cron && \
    rm -r /var/lib/apt/lists/*
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
19  cron/entrypoint.sh  Executable file
@@ -0,0 +1,19 @@
#!/usr/bin/env bash

if [ "$(ls -A /usr/local/share/ca-certificates/)" ]; then
  update-ca-certificates
fi

# Prior art:
# - https://github.com/renskiy/cron-docker-image/blob/5600db37acf841c6d7a8b4f3866741bada5b4622/debian/start-cron#L34-L36
# - https://blog.knoldus.com/running-a-cron-job-in-docker-container/

declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' >/container.env

{ for cron_job in "$@"; do echo -e "SHELL=/bin/bash
BASH_ENV=/container.env
${cron_job} > /proc/1/fd/1 2>/proc/1/fd/2"; done; } |
  sed --regexp-extended 's/\\(.)/\1/g' |
  crontab -
crontab -l
exec cron -f -l -L 15
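The `declare -p` snapshot is what lets cron jobs see the container's environment: cron launches jobs with an almost empty environment, and `BASH_ENV=/container.env` makes each job's shell re-source the snapshot first. A hedged, simplified sketch of that snapshot/restore round trip (the real entrypoint serializes all non-read-only variables; this demo serializes just one):

```shell
# Hedged, simplified sketch of the round trip above: serialize a variable with
# `declare -p`, then source the file later to rebuild the environment — the
# same thing BASH_ENV triggers for every cron job shell.
snapshot=$(mktemp)
export MY_SETTING="hello"
declare -p MY_SETTING >"$snapshot"   # writes: declare -x MY_SETTING="hello"
unset MY_SETTING                     # simulate the bare environment cron provides
source "$snapshot"                   # restore, as BASH_ENV would
echo "$MY_SETTING"                   # hello
rm "$snapshot"
```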
551  docker-compose.yml  Normal file
@@ -0,0 +1,551 @@
x-restart-policy: &restart_policy
  restart: unless-stopped
x-depends_on-healthy: &depends_on-healthy
  condition: service_healthy
x-depends_on-default: &depends_on-default
  condition: service_started
x-healthcheck-defaults: &healthcheck_defaults
  # Avoid setting the interval too small, as docker uses much more CPU than one would expect.
  # Related issues:
  # https://github.com/moby/moby/issues/39102
  # https://github.com/moby/moby/issues/39388
  # https://github.com/getsentry/self-hosted/issues/1000
  interval: "$HEALTHCHECK_INTERVAL"
  timeout: "$HEALTHCHECK_TIMEOUT"
  retries: $HEALTHCHECK_RETRIES
  start_period: 10s
x-sentry-defaults: &sentry_defaults
  <<: *restart_policy
  image: sentry-self-hosted-local
  # Set the platform to build for linux/arm64 when needed on Apple silicon Macs.
  platform: ${DOCKER_PLATFORM:-}
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  depends_on:
    redis:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    postgres:
      <<: *depends_on-healthy
    memcached:
      <<: *depends_on-default
    smtp:
      <<: *depends_on-default
    snuba-api:
      <<: *depends_on-default
    symbolicator:
      <<: *depends_on-default
  entrypoint: "/etc/sentry/entrypoint.sh"
  command: ["run", "web"]
  environment:
    PYTHONUSERBASE: "/data/custom-packages"
    SENTRY_CONF: "/etc/sentry"
    SNUBA: "http://snuba-api:1218"
    VROOM: "http://vroom:8085"
    # Force everything to use the system CA bundle
    # This is mostly needed to support installing custom CA certs
    # This one is used by botocore
    DEFAULT_CA_BUNDLE: &ca_bundle "/etc/ssl/certs/ca-certificates.crt"
    # This one is used by requests
    REQUESTS_CA_BUNDLE: *ca_bundle
    # This one is used by grpc/google modules
    GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR: *ca_bundle
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    COMPOSE_PROFILES:
    SENTRY_EVENT_RETENTION_DAYS:
    SENTRY_MAIL_HOST:
    SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:
    OPENAI_API_KEY:
  volumes:
    - "sentry-data:/data"
    - "./sentry:/etc/sentry"
    - "./geoip:/geoip:ro"
    - "./certificates:/usr/local/share/ca-certificates:ro"
x-snuba-defaults: &snuba_defaults
  <<: *restart_policy
  depends_on:
    clickhouse:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    redis:
      <<: *depends_on-healthy
  image: "$SNUBA_IMAGE"
  environment:
    SNUBA_SETTINGS: self_hosted
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: "kafka:9092"
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: "10000"
    UWSGI_DISABLE_LOGGING: "true"
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    SENTRY_EVENT_RETENTION_DAYS:
services:
  smtp:
    <<: *restart_policy
    platform: linux/amd64
    image: tianon/exim4
    hostname: "${SENTRY_MAIL_HOST:-}"
    volumes:
      - "sentry-smtp:/var/spool/exim4"
      - "sentry-smtp-log:/var/log/exim4"
  memcached:
    <<: *restart_policy
    image: "memcached:1.6.26-alpine"
    command: ["-I", "${SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:-1M}"]
    healthcheck:
      <<: *healthcheck_defaults
      # From: https://stackoverflow.com/a/31877626/5155484
      test: echo stats | nc 127.0.0.1 11211
  redis:
    <<: *restart_policy
    image: "redis:6.2.14-alpine"
    healthcheck:
      <<: *healthcheck_defaults
      test: redis-cli ping | grep PONG
    volumes:
      - "sentry-redis:/data"
      - type: bind
        read_only: true
        source: ./redis.conf
        target: /usr/local/etc/redis/redis.conf
    ulimits:
      nofile:
        soft: 10032
        hard: 10032
  postgres:
    <<: *restart_policy
    # Using the same postgres version as Sentry dev for consistency purposes
    image: "postgres:14.11"
    healthcheck:
      <<: *healthcheck_defaults
      # Using default user "postgres" from sentry/sentry.conf.example.py or value of POSTGRES_USER if provided
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
    command:
      [
        "postgres",
        "-c",
        "max_connections=${POSTGRES_MAX_CONNECTIONS:-100}",
      ]
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
    volumes:
      - "sentry-postgres:/var/lib/postgresql/data"
  kafka:
    <<: *restart_policy
    image: "confluentinc/cp-kafka:7.6.1"
    environment:
      # https://docs.confluent.io/platform/current/installation/docker/config-reference.html#cp-kakfa-example
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1001@127.0.0.1:29093"
      KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_NODE_ID: "1001"
      CLUSTER_ID: "MkU3OEVBNTcwNTJENDM2Qk"
      KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:29092,INTERNAL://0.0.0.0:9093,EXTERNAL://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://127.0.0.1:29092,INTERNAL://kafka:9093,EXTERNAL://kafka:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "PLAINTEXT"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
      KAFKA_LOG_RETENTION_HOURS: "24"
      KAFKA_MESSAGE_MAX_BYTES: "50000000" # 50MB or bust
      KAFKA_MAX_REQUEST_SIZE: "50000000" # 50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,state.change.logger=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
      KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-kafka:/var/lib/kafka/data"
      - "sentry-kafka-log:/var/lib/kafka/log"
      - "sentry-secrets:/etc/kafka/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 10s
      timeout: 10s
      retries: 30
  clickhouse:
    <<: *restart_policy
    image: clickhouse-self-hosted-local
    build:
      context: ./clickhouse
      args:
        BASE_IMAGE: "altinity/clickhouse-server:23.8.11.29.altinitystable"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - "sentry-clickhouse:/var/lib/clickhouse"
      - "sentry-clickhouse-log:/var/log/clickhouse-server"
      - type: bind
        read_only: true
        source: ./clickhouse/config.xml
        target: /etc/clickhouse-server/config.d/sentry.xml
    environment:
      # This limits Clickhouse's memory to 30% of the host memory.
      # If you have high volume and your searches return incomplete results,
      # you might want to change this to a higher value (and ensure your host has enough memory)
      MAX_MEMORY_USAGE_RATIO: 0.3
    healthcheck:
      test: [
          "CMD-SHELL",
          # Manually override any http_proxy envvar that might be set, because
          # this wget does not support no_proxy. See:
          # https://github.com/getsentry/self-hosted/issues/1537
          "http_proxy='' wget -nv -t1 --spider 'http://localhost:8123/' || exit 1",
        ]
      interval: 10s
      timeout: 10s
      retries: 30
  geoipupdate:
    image: "ghcr.io/maxmind/geoipupdate:v6.1.0"
    # Override the entrypoint in order to avoid using envvars for config.
    # Futz with settings so we can keep mmdb and conf in same dir on host
    # (image looks for them in separate dirs by default).
    entrypoint:
      ["/usr/bin/geoipupdate", "-d", "/sentry", "-f", "/sentry/GeoIP.conf"]
    volumes:
      - "./geoip:/sentry"
  snuba-api:
    <<: *snuba_defaults
  # Kafka consumer responsible for feeding events into Clickhouse
  snuba-errors-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage errors --consumer-group snuba-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding outcomes into Clickhouse
  # Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
  # since we did not do a proper migration
  snuba-outcomes-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage outcomes_raw --consumer-group snuba-consumers --auto-offset-reset=earliest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-outcomes-billing-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage outcomes_raw --consumer-group snuba-consumers --auto-offset-reset=earliest --max-batch-time-ms 750 --no-strict-offset-reset --raw-events-topic outcomes-billing
  snuba-group-attributes-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage group_attributes --consumer-group snuba-group-attributes-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-replacer:
    <<: *snuba_defaults
    command: replacer --storage errors --auto-offset-reset=latest --no-strict-offset-reset
  snuba-subscription-consumer-events:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset events --entity events --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-events-subscriptions-consumers --followed-consumer-group=snuba-consumers --schedule-ttl=60 --stale-threshold-seconds=900
  #############################################
  ## Feature Complete Sentry Snuba Consumers ##
  #############################################
  # Kafka consumer responsible for feeding transactions data into Clickhouse
  snuba-transactions-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-replays-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage replays --consumer-group snuba-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-issue-occurrence-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage search_issues --consumer-group generic_events_group --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-metrics-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage metrics_raw --consumer-group snuba-metrics-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-subscription-consumer-transactions:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset transactions --entity transactions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-transactions-subscriptions-consumers --followed-consumer-group=transactions_group --schedule-ttl=60 --stale-threshold-seconds=900
    profiles:
      - feature-complete
  snuba-subscription-consumer-metrics:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset metrics --entity metrics_sets --entity metrics_counters --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-metrics-subscriptions-consumers --followed-consumer-group=snuba-metrics-consumers --schedule-ttl=60 --stale-threshold-seconds=900
    profiles:
      - feature-complete
  snuba-generic-metrics-distributions-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage generic_metrics_distributions_raw --consumer-group snuba-gen-metrics-distributions-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-generic-metrics-sets-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage generic_metrics_sets_raw --consumer-group snuba-gen-metrics-sets-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-generic-metrics-counters-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage generic_metrics_counters_raw --consumer-group snuba-gen-metrics-counters-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-generic-metrics-gauges-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage generic_metrics_gauges_raw --consumer-group snuba-gen-metrics-gauges-consumers --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-profiling-profiles-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage profiles --consumer-group snuba-consumers --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-profiling-functions-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage functions_raw --consumer-group snuba-consumers --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
    profiles:
      - feature-complete
  snuba-spans-consumer:
    <<: *snuba_defaults
    command: rust-consumer --storage spans --consumer-group snuba-spans-consumers --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
    profiles:
      - feature-complete
  symbolicator:
    <<: *restart_policy
    image: "$SYMBOLICATOR_IMAGE"
    volumes:
      - "sentry-symbolicator:/data"
      - type: bind
        read_only: true
        source: ./symbolicator
        target: /etc/symbolicator
    command: run -c /etc/symbolicator/config.yml
  symbolicator-cleanup:
    <<: *restart_policy
    image: symbolicator-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: "$SYMBOLICATOR_IMAGE"
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - "sentry-symbolicator:/data"
  web:
    <<: *sentry_defaults
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    healthcheck:
      <<: *healthcheck_defaults
      test:
        - "CMD"
        - "/bin/bash"
        - "-c"
        # Courtesy of https://unix.stackexchange.com/a/234089/108960
        - 'exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e "GET /_health/ HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
  cron:
    <<: *sentry_defaults
    command: run cron
  worker:
    <<: *sentry_defaults
    command: run worker
  events-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-events --consumer-group ingest-consumer
  attachments-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-attachments --consumer-group ingest-consumer
  post-process-forwarder-errors:
    <<: *sentry_defaults
    command: run consumer --no-strict-offset-reset post-process-forwarder-errors --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-commit-log --synchronize-commit-group=snuba-consumers
  subscription-consumer-events:
    <<: *sentry_defaults
    command: run consumer events-subscription-results --consumer-group query-subscription-consumer
  ##############################################
  ## Feature Complete Sentry Ingest Consumers ##
  ##############################################
  transactions-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-transactions --consumer-group ingest-consumer
    profiles:
      - feature-complete
  metrics-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-metrics --consumer-group metrics-consumer
    profiles:
      - feature-complete
  generic-metrics-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-generic-metrics --consumer-group generic-metrics-consumer
    profiles:
      - feature-complete
  billing-metrics-consumer:
    <<: *sentry_defaults
    command: run consumer billing-metrics-consumer --consumer-group billing-metrics-consumer
    profiles:
      - feature-complete
  ingest-replay-recordings:
    <<: *sentry_defaults
    command: run consumer ingest-replay-recordings --consumer-group ingest-replay-recordings
    profiles:
      - feature-complete
  ingest-occurrences:
    <<: *sentry_defaults
    command: run consumer ingest-occurrences --consumer-group ingest-occurrences
    profiles:
      - feature-complete
  ingest-profiles:
    <<: *sentry_defaults
    command: run consumer ingest-profiles --consumer-group ingest-profiles
    profiles:
      - feature-complete
  ingest-monitors:
    <<: *sentry_defaults
    command: run consumer ingest-monitors --consumer-group ingest-monitors
    profiles:
      - feature-complete
  ingest-feedback-events:
    <<: *sentry_defaults
    command: run consumer ingest-feedback-events --consumer-group ingest-feedback
    profiles:
      - feature-complete
  monitors-clock-tick:
    <<: *sentry_defaults
    command: run consumer monitors-clock-tick --consumer-group monitors-clock-tick
    profiles:
      - feature-complete
  monitors-clock-tasks:
    <<: *sentry_defaults
    command: run consumer monitors-clock-tasks --consumer-group monitors-clock-tasks
    profiles:
      - feature-complete
  post-process-forwarder-transactions:
    <<: *sentry_defaults
    command: run consumer --no-strict-offset-reset post-process-forwarder-transactions --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-transactions-commit-log --synchronize-commit-group transactions_group
    profiles:
      - feature-complete
  post-process-forwarder-issue-platform:
    <<: *sentry_defaults
    command: run consumer --no-strict-offset-reset post-process-forwarder-issue-platform --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-generic-events-commit-log --synchronize-commit-group generic_events_group
    profiles:
      - feature-complete
  subscription-consumer-transactions:
    <<: *sentry_defaults
    command: run consumer transactions-subscription-results --consumer-group query-subscription-consumer
    profiles:
      - feature-complete
  subscription-consumer-metrics:
    <<: *sentry_defaults
    command: run consumer metrics-subscription-results --consumer-group query-subscription-consumer
    profiles:
      - feature-complete
  subscription-consumer-generic-metrics:
    <<: *sentry_defaults
    command: run consumer generic-metrics-subscription-results --consumer-group query-subscription-consumer
    profiles:
      - feature-complete
  sentry-cleanup:
    <<: *sentry_defaults
    image: sentry-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: sentry-self-hosted-local
    entrypoint: "/entrypoint.sh"
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
  nginx:
    <<: *restart_policy
    ports:
      - "$SENTRY_BIND:80/tcp"
    image: "nginx:1.25.4-alpine"
    volumes:
      - type: bind
        read_only: true
        source: ./nginx.conf
        target: /etc/nginx/nginx.conf
      - sentry-nginx-cache:/var/cache/nginx
      - sentry-nginx-www:/var/www
    depends_on:
      - web
      - relay
  relay:
    <<: *restart_policy
    image: "$RELAY_IMAGE"
    volumes:
      - type: bind
        read_only: true
        source: ./relay
        target: /work/.relay
      - type: bind
        read_only: true
        source: ./geoip
        target: /geoip
    depends_on:
      kafka:
        <<: *depends_on-healthy
      redis:
        <<: *depends_on-healthy
      web:
        <<: *depends_on-healthy
  vroom:
    <<: *restart_policy
    image: "$VROOM_IMAGE"
    environment:
      SENTRY_KAFKA_BROKERS_PROFILING: "kafka:9092"
      SENTRY_KAFKA_BROKERS_OCCURRENCES: "kafka:9092"
      SENTRY_BUCKET_PROFILES: file://localhost//var/lib/sentry-profiles
      SENTRY_SNUBA_HOST: "http://snuba-api:1218"
    volumes:
      - sentry-vroom:/var/lib/sentry-profiles
    depends_on:
      kafka:
        <<: *depends_on-healthy
    profiles:
      - feature-complete
  vroom-cleanup:
    <<: *restart_policy
    image: vroom-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: "$VROOM_IMAGE"
    entrypoint: "/entrypoint.sh"
    environment:
      # Leaving the value empty to just pass whatever is set
      # on the host system (or in the .env file)
      SENTRY_EVENT_RETENTION_DAYS:
    command: '"0 0 * * * find /var/lib/sentry-profiles -type f -mtime +$SENTRY_EVENT_RETENTION_DAYS -delete"'
    volumes:
      - sentry-vroom:/var/lib/sentry-profiles
    profiles:
      - feature-complete

volumes:
  # These store application data that should persist across restarts.
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  # This volume stores JS SDK assets and the data inside this volume should
  # be cleaned periodically on upgrades.
  sentry-nginx-www:
  # This volume stores profiles and should be persisted.
  # Not being external will still persist data across restarts.
  # It won't persist if someone does a docker compose down -v.
  sentry-vroom:
  # These store ephemeral data that needn't persist across restarts.
  # That said, volumes will be persisted across restarts until they are deleted.
  sentry-secrets:
  sentry-smtp:
  sentry-nginx-cache:
  sentry-kafka-log:
  sentry-smtp-log:
  sentry-clickhouse-log:
BIN  geoip/GeoLite2-City.mmdb.empty  Normal file (1.0 KiB)
Binary file not shown.
40  install.sh  Executable file
@@ -0,0 +1,40 @@
#!/usr/bin/env bash
set -eE

# Pre-pre-flight? 🤷
if [[ -n "$MSYSTEM" ]]; then
  echo "Seems like you are using an MSYS2-based system (such as Git Bash) which is not supported. Please use WSL instead."
  exit 1
fi

source install/_lib.sh

# Pre-flight. No impact yet.
source install/parse-cli.sh
source install/detect-platform.sh
source install/dc-detect-version.sh
source install/error-handling.sh
# We set the trap at the top level so that we get better tracebacks.
trap_with_arg cleanup ERR INT TERM EXIT
source install/check-latest-commit.sh
source install/check-minimum-requirements.sh

# Let's go! Start impacting things.
# Upgrading clickhouse needs to come first before turning things off, since we need the old clickhouse image
# in order to determine whether or not the clickhouse version needs to be upgraded.
source install/upgrade-clickhouse.sh
source install/turn-things-off.sh
source install/update-docker-volume-permissions.sh
source install/create-docker-volumes.sh
source install/ensure-files-from-examples.sh
source install/check-memcached-backend.sh
source install/ensure-relay-credentials.sh
source install/generate-secret-key.sh
source install/update-docker-images.sh
source install/build-docker-images.sh
source install/bootstrap-snuba.sh
source install/upgrade-postgres.sh
source install/set-up-and-migrate-database.sh
source install/geoip.sh
source install/setup-js-sdk-assets.sh
source install/wrap-up.sh
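install/error-handling.sh is not shown in this diff, but `trap_with_arg` is the usual wrapper for the fact that bash does not tell a handler which signal fired. A hedged re-sketch of that idiom (the real definition may differ):

```shell
# Hedged re-sketch of the trap_with_arg idiom used by install.sh above:
# register one trap per signal, baking the signal name into the handler call
# so the handler receives it as $1.
trap_with_arg() {
  local func="$1"
  shift
  for sig in "$@"; do
    trap "$func $sig" "$sig"
  done
}

handler() { echo "caught: $1"; }
# Subshell so the EXIT trap fires immediately for demonstration purposes.
(trap_with_arg handler EXIT; exit 0)   # prints: caught: EXIT
```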
55  install/_lib.sh  Normal file
@@ -0,0 +1,55 @@
set -euo pipefail
test "${DEBUG:-}" && set -x

# Override any user-supplied umask that could cause problems, see #1222
umask 002

# Thanks to https://unix.stackexchange.com/a/145654/108960
log_file=sentry_install_log-$(date +'%Y-%m-%d_%H-%M-%S').txt
exec &> >(tee -a "$log_file")

# Allow `.env` overrides using the `.env.custom` file.
# We pass this to docker compose in a couple places.
if [[ -f .env.custom ]]; then
  _ENV=.env.custom
else
  _ENV=.env
fi

# Read .env for default values with a tip o' the hat to https://stackoverflow.com/a/59831605/90297
t=$(mktemp) && export -p >"$t" && set -a && . $_ENV && set +a && . "$t" && rm "$t" && unset t

if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
  _group="::group::"
  _endgroup="::endgroup::"
else
  _group="▶ "
  _endgroup=""
fi

# A couple of the config files are referenced from other subscripts, so they
# get vars, while multiple subscripts call ensure_file_from_example.
function ensure_file_from_example {
  target="$1"
  if [[ -f "$target" ]]; then
    echo "$target already exists, skipped creation."
  else
    # sed from https://stackoverflow.com/a/25123013/90297
    example="$(echo "$target" | sed 's/\.[^.]*$/.example&/')"
    if [[ ! -f "$example" ]]; then
      echo "Oops! Where did $example go? 🤨 We need it in order to create $target."
      exit 1
    fi
    echo "Creating $target ..."
    cp -n "$example" "$target"
  fi
}

SENTRY_CONFIG_PY=sentry/sentry.conf.py
SENTRY_CONFIG_YML=sentry/config.yml

# Increase the default 10 second SIGTERM timeout
# to ensure celery queues are properly drained
# between upgrades as task signatures may change across
# versions
STOP_TIMEOUT=60 # seconds
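The one-line `.env` trick above is easy to misread: it snapshots the current environment with `export -p`, sources the env file with allexport (`set -a`) on so every assignment is exported, then re-sources the snapshot so host-provided values win over the file's defaults. A hedged standalone sketch with hypothetical file and variable names:

```shell
# Hedged standalone sketch of the .env-defaults trick used in install/_lib.sh
# (file and variable names here are made up for the demonstration).
envfile=$(mktemp)
printf 'GREETING=from-env-file\nCOLOR=blue\n' >"$envfile"
export GREETING="from-host"   # pre-set on the host; must win over the file
t=$(mktemp) && export -p >"$t" && set -a && . "$envfile" && set +a && . "$t" && rm "$t" && unset t
echo "$GREETING"   # from-host (host value restored from the snapshot)
echo "$COLOR"      # blue      (default picked up from the file)
rm "$envfile"
```

Sourcing the snapshot last is what turns the env file into pure defaults: anything it set that was not already in the environment survives, while anything the host had set is put back.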
9  install/_min-requirements.sh  Normal file
@@ -0,0 +1,9 @@
# Don't forget to update the README and other docs when you change these!
MIN_DOCKER_VERSION='19.03.6'
MIN_COMPOSE_VERSION='2.19.0'

# 16 GB minimum host RAM, but there'll be some overhead outside of what
# can be allotted to docker
MIN_RAM_HARD=14000 # MB

MIN_CPU_HARD=4
6
install/bootstrap-snuba.sh
Normal file
@ -0,0 +1,6 @@
echo "${_group}Bootstrapping and migrating Snuba ..."

$dcr snuba-api bootstrap --no-migrate --force
$dcr snuba-api migrations migrate --force

echo "${_endgroup}"
14
install/build-docker-images.sh
Normal file
@ -0,0 +1,14 @@
echo "${_group}Building and tagging Docker images ..."

echo ""
# Build any service that provides the image sentry-self-hosted-local first,
# as it is used as the base image for sentry-cleanup-self-hosted-local.
$dcb --force-rm web
# Build each remaining service individually, to better localize potential failures.
for service in $($dc config --services); do
  $dcb --force-rm "$service"
done
echo ""
echo "Docker images built."

echo "${_endgroup}"
15
install/check-latest-commit.sh
Normal file
@ -0,0 +1,15 @@
echo "${_group}Checking for latest commit ... "

# Check whether we are on the latest commit from GitHub when running from the master branch.
if [[ -d "../.git" && "${SKIP_COMMIT_CHECK:-0}" != 1 ]]; then
  if [[ $(git branch --show-current) == "master" ]]; then
    if [[ $(git rev-parse HEAD) != $(git ls-remote $(git rev-parse --abbrev-ref @{u} | sed 's/\// /g') | cut -f1) ]]; then
      echo "Seems like you are not using the latest commit from the self-hosted repository. Please pull the latest changes and try again, or suppress this check with --skip-commit-check."
      exit 1
    fi
  fi
else
  echo "skipped"
fi

echo "${_endgroup}"
16
install/check-memcached-backend.sh
Normal file
@ -0,0 +1,16 @@
echo "${_group}Checking memcached backend ..."

if grep -q "\.PyMemcacheCache" "$SENTRY_CONFIG_PY"; then
  echo "PyMemcacheCache found in $SENTRY_CONFIG_PY, gonna assume you're good."
else
  if grep -q "\.MemcachedCache" "$SENTRY_CONFIG_PY"; then
    echo "MemcachedCache found in $SENTRY_CONFIG_PY, you should switch to PyMemcacheCache."
    echo "See:"
    echo "  https://develop.sentry.dev/self-hosted/releases/#breaking-changes"
    exit 1
  else
    echo 'Your setup looks weird. Good luck.'
  fi
fi

echo "${_endgroup}"
59
install/check-minimum-requirements.sh
Normal file
@ -0,0 +1,59 @@
echo "${_group}Checking minimum requirements ..."

source install/_min-requirements.sh

# Check that the version $1 is greater than or equal to $2 using sort.
# Note: versions must be stripped of a leading "v".
function vergte() {
  printf "%s\n%s" $1 $2 | sort --version-sort --check=quiet --reverse
  echo $?
}

DOCKER_VERSION=$(docker version --format '{{.Server.Version}}' || echo '')
if [[ -z "$DOCKER_VERSION" ]]; then
  echo "FAIL: Unable to get docker version, is the docker daemon running?"
  exit 1
fi

if [[ "$(vergte ${DOCKER_VERSION//v/} $MIN_DOCKER_VERSION)" -eq 1 ]]; then
  echo "FAIL: Expected minimum Docker version to be $MIN_DOCKER_VERSION but found $DOCKER_VERSION"
  exit 1
fi
echo "Found Docker version $DOCKER_VERSION"

COMPOSE_VERSION=$($dc_base version --short || echo '')
if [[ -z "$COMPOSE_VERSION" ]]; then
  echo "FAIL: Docker Compose is required to run self-hosted"
  exit 1
fi

if [[ "$(vergte ${COMPOSE_VERSION//v/} $MIN_COMPOSE_VERSION)" -eq 1 ]]; then
  echo "FAIL: Expected minimum $dc_base version to be $MIN_COMPOSE_VERSION but found $COMPOSE_VERSION"
  exit 1
fi
echo "Found Docker Compose version $COMPOSE_VERSION"

CPU_AVAILABLE_IN_DOCKER=$(docker run --rm busybox nproc --all)
if [[ "$CPU_AVAILABLE_IN_DOCKER" -lt "$MIN_CPU_HARD" ]]; then
  echo "FAIL: Required minimum CPU cores available to Docker is $MIN_CPU_HARD, found $CPU_AVAILABLE_IN_DOCKER"
  exit 1
fi

RAM_AVAILABLE_IN_DOCKER=$(docker run --rm busybox free -m 2>/dev/null | awk '/Mem/ {print $2}')
if [[ "$RAM_AVAILABLE_IN_DOCKER" -lt "$MIN_RAM_HARD" ]]; then
  echo "FAIL: Required minimum RAM available to Docker is $MIN_RAM_HARD MB, found $RAM_AVAILABLE_IN_DOCKER MB"
  exit 1
fi

# SSE 4.2 is required by ClickHouse (https://clickhouse.yandex/docs/en/operations/requirements/).
# On KVM, cpuinfo can falsely omit SSE 4.2 support, so skip the check there:
# https://github.com/ClickHouse/ClickHouse/issues/20#issuecomment-226849297
# This may also happen on other virtualization software, such as VMware ESXi hosts.
IS_KVM=$(docker run --rm busybox grep -c 'Common KVM processor' /proc/cpuinfo || :)
if [[ ! "$SKIP_SSE42_REQUIREMENTS" -eq 1 && "$IS_KVM" -eq 0 && "$DOCKER_ARCH" = "x86_64" ]]; then
  SUPPORTS_SSE42=$(docker run --rm busybox grep -c sse4_2 /proc/cpuinfo || :)
  if [[ "$SUPPORTS_SSE42" -eq 0 ]]; then
    echo "FAIL: The CPU your machine is running on does not support the SSE 4.2 instruction set, which is required for one of the services Sentry uses (ClickHouse). See https://github.com/getsentry/self-hosted/issues/340 for more info."
    exit 1
  fi
fi

echo "${_endgroup}"
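The `vergte` helper above leans on GNU `sort --version-sort --check=quiet --reverse`: the pair is "sorted" exactly when the first version is greater than or equal to the second. A standalone sketch of its semantics (assuming GNU coreutils `sort` is available):

```shell
# Prints 0 when $1 >= $2 version-wise, 1 otherwise, mirroring vergte above:
# sort --check exits 0 iff the two lines are already in descending version order.
vergte() {
  printf "%s\n%s" "$1" "$2" | sort --version-sort --check=quiet --reverse
  echo $?
}

vergte 20.10.0 19.03.6 # prints 0: 20.10.0 >= 19.03.6
vergte 2.19.0 2.26.0   # prints 1: 2.19.0 < 2.26.0, reverse-sort check fails
```

Equal versions also yield 0, since equal adjacent lines count as sorted, which is why the script treats only an output of 1 as "too old".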
10
install/create-docker-volumes.sh
Normal file
@ -0,0 +1,10 @@
echo "${_group}Creating volumes for persistent storage ..."

echo "Created $(docker volume create --name=sentry-clickhouse)."
echo "Created $(docker volume create --name=sentry-data)."
echo "Created $(docker volume create --name=sentry-kafka)."
echo "Created $(docker volume create --name=sentry-postgres)."
echo "Created $(docker volume create --name=sentry-redis)."
echo "Created $(docker volume create --name=sentry-symbolicator)."

echo "${_endgroup}"
23
install/dc-detect-version.sh
Normal file
@ -0,0 +1,23 @@
if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
  _group="::group::"
  _endgroup="::endgroup::"
else
  _group="▶ "
  _endgroup=""
fi

echo "${_group}Initializing Docker Compose ..."

# To support users that are symlinking to docker-compose
dc_base="$(docker compose version &>/dev/null && echo 'docker compose' || echo 'docker-compose')"
if [[ "$(basename $0)" = "install.sh" ]]; then
  dc="$dc_base --ansi never --env-file ${_ENV}"
else
  dc="$dc_base --ansi never"
fi
proxy_args="--build-arg http_proxy=${http_proxy:-} --build-arg https_proxy=${https_proxy:-} --build-arg no_proxy=${no_proxy:-}"
dcr="$dc run --rm"
dcb="$dc build $proxy_args"
dbuild="docker build $proxy_args"

echo "${_endgroup}"
31
install/detect-platform.sh
Normal file
@ -0,0 +1,31 @@
echo "${_group}Detecting Docker platform"

# Sentry SaaS uses stock Yandex ClickHouse, but they don't provide images that
# support ARM, which is relevant especially for Apple M1 laptops, Sentry's
# standard developer environment. As a workaround, we use an Altinity image
# targeting ARM.
#
# See https://github.com/getsentry/self-hosted/issues/1385#issuecomment-1101824274
#
# Images built on ARM also need to be tagged to use linux/arm64 on Apple
# silicon Macs to work around an issue where they are built for
# linux/amd64 by default due to virtualization.
# See https://github.com/docker/cli/issues/3286 for the Docker bug.

if ! command -v docker &>/dev/null; then
  echo "FAIL: Could not find a \`docker\` binary on this system. Are you sure it's installed?"
  exit 1
fi

export DOCKER_ARCH=$(docker info --format '{{.Architecture}}')
if [[ "$DOCKER_ARCH" = "x86_64" ]]; then
  export DOCKER_PLATFORM="linux/amd64"
elif [[ "$DOCKER_ARCH" = "aarch64" ]]; then
  export DOCKER_PLATFORM="linux/arm64"
else
  echo "FAIL: Unsupported Docker architecture $DOCKER_ARCH."
  exit 1
fi
echo "Detected Docker platform is $DOCKER_PLATFORM"

echo "${_endgroup}"
7
install/ensure-files-from-examples.sh
Normal file
@ -0,0 +1,7 @@
echo "${_group}Ensuring files from examples ..."

ensure_file_from_example "$SENTRY_CONFIG_PY"
ensure_file_from_example "$SENTRY_CONFIG_YML"
ensure_file_from_example symbolicator/config.yml

echo "${_endgroup}"
42
install/ensure-relay-credentials.sh
Normal file
@ -0,0 +1,42 @@
echo "${_group}Ensuring Relay credentials ..."

RELAY_CONFIG_YML=relay/config.yml
RELAY_CREDENTIALS_JSON=relay/credentials.json

ensure_file_from_example $RELAY_CONFIG_YML

if [[ -f "$RELAY_CREDENTIALS_JSON" ]]; then
  echo "$RELAY_CREDENTIALS_JSON already exists, skipped creation."
else

  # There are a couple of gotchas here:
  #
  # 1. We need to use a tmp file because if we redirect output directly to
  #    credentials.json, then the shell will create an empty file that relay
  #    will then try to read from (regardless of options such as --stdout or
  #    --overwrite) and fail because it is empty.
  #
  # 2. We pull relay:nightly before invoking `run relay credentials generate`,
  #    because an implicit pull under the run causes extra stdout that results
  #    in a garbage credentials.json.
  #
  # 3. We need to use -T to ensure that we receive output on Docker Compose
  #    1.x and 2.2.3+ (funny story about that ... ;). Note that the long opt
  #    --no-tty doesn't exist in Docker Compose 1.

  $dc pull relay
  creds="$dcr --no-deps -T relay credentials"
  $creds generate --stdout >"$RELAY_CREDENTIALS_JSON".tmp
  mv "$RELAY_CREDENTIALS_JSON".tmp "$RELAY_CREDENTIALS_JSON"
  if ! grep -q Credentials <($creds show); then
    # Let's fail early if creds failed, to make debugging easier.
    echo "Failed to create relay credentials in $RELAY_CREDENTIALS_JSON."
    echo "--- credentials.json v ---------------------------------------"
    cat -v "$RELAY_CREDENTIALS_JSON" || true
    echo "--- credentials.json ^ ---------------------------------------"
    exit 1
  fi
  echo "Relay credentials written to $RELAY_CREDENTIALS_JSON."
fi

echo "${_endgroup}"
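Gotcha 1 above is the classic write-to-temp-then-rename pattern: write the full payload to a sibling tmp path, then `mv` it into place so no reader ever opens a partially written (or empty) file. A generic sketch with a hypothetical payload, standing in for the actual relay output:

```shell
# Write-then-rename: readers of "$out" see either the old file or the complete
# new one, never an empty or half-written file.
out="credentials.json"
tmp="$out.tmp"
printf '{"id": "example"}\n' >"$tmp" # hypothetical payload, not real relay credentials
mv "$tmp" "$out"                     # rename is atomic on the same filesystem
```

Keeping the tmp file next to the target (rather than in `/tmp`) matters: `mv` is only atomic when source and destination are on the same filesystem.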
235
install/error-handling.sh
Normal file
@ -0,0 +1,235 @@
echo "${_group}Setting up error handling ..."

if [ -z "${SENTRY_DSN:-}" ]; then
  export SENTRY_DSN='https://19555c489ded4769978daae92f2346ca@self-hosted.getsentry.net/3'
fi

$dbuild -t sentry-self-hosted-jq-local --platform="$DOCKER_PLATFORM" jq

jq="docker run --rm -i sentry-self-hosted-jq-local"
sentry_cli="docker run --rm -v /tmp:/work -e SENTRY_DSN=$SENTRY_DSN getsentry/sentry-cli"

send_envelope() {
  # Send envelope
  $sentry_cli send-envelope "$envelope_file"
}

generate_breadcrumb_json() {
  cat $log_file | $jq -R -c 'split("\n") | {"message": (.[0]//""), "category": "log", "level": "info"}'
}

send_event() {
  # Use the traceback hash as the UUID, since it is 32 characters long
  local cmd_exit=$1
  local error_msg=$2
  local traceback=$3
  local traceback_json=$4
  local breadcrumbs=$5
  local fingerprint_value=$(
    echo -n "$cmd_exit $error_msg $traceback" |
      docker run -i --rm busybox md5sum |
      cut -d' ' -f1
  )
  local envelope_file="sentry-envelope-${fingerprint_value}"
  local envelope_file_path="/tmp/$envelope_file"
  # If the envelope file exists, we've already sent it
  if [[ -f $envelope_file_path ]]; then
    echo "Looks like you've already sent this error to us, we're on it :)"
    return
  fi
  # If we haven't sent the envelope file, make it and send it to Sentry.
  # The format is documented at https://develop.sentry.dev/sdk/envelopes/
  # Grab the length of the log file, needed in the envelope header to send an attachment.
  local file_length=$(wc -c <$log_file | awk '{print $1}')

  # Add a header with the initial envelope information
  $jq -n -c --arg event_id "$fingerprint_value" \
    --arg dsn "$SENTRY_DSN" \
    '$ARGS.named' >"$envelope_file_path"
  # Add a header to specify the event type of envelope to be sent
  echo '{"type":"event"}' >>"$envelope_file_path"

  # Next we construct the meat of the event payload, which we build up
  # inside out using jq.
  # See https://develop.sentry.dev/sdk/event-payloads/
  # for details about the event payload.

  # Then we need the exception payload
  # https://develop.sentry.dev/sdk/event-payloads/exception/
  # but first we need to make the stacktrace, which goes in the exception payload.
  frames=$(echo "$traceback_json" | $jq -s -c)
  stacktrace=$($jq -n -c --argjson frames "$frames" '$ARGS.named')
  exception=$(
    $jq -n -c --arg "type" Error \
      --arg value "$error_msg" \
      --argjson stacktrace "$stacktrace" \
      '$ARGS.named'
  )

  # It'd be a bit cleaner in the Sentry UI if we passed the inputs to the
  # fingerprint_value hash rather than the hash itself (I believe the ultimate
  # hash ends up simply being a hash of our hash), but we want the hash locally
  # so that we can avoid resending the same event (a design decision to avoid
  # spam in the system). It was also futzy to figure out how to get the
  # traceback in there properly. Meh.
  event_body=$(
    $jq -n -c --arg level error \
      --argjson exception "{\"values\":[$exception]}" \
      --argjson breadcrumbs "{\"values\": $breadcrumbs}" \
      --argjson fingerprint "[\"$fingerprint_value\"]" \
      '$ARGS.named'
  )
  echo "$event_body" >>$envelope_file_path
  # Add the attachment to the event
  attachment=$(
    $jq -n -c --arg "type" attachment \
      --arg length "$file_length" \
      --arg content_type "text/plain" \
      --arg filename install_log.txt \
      '{"type": $type,"length": $length|tonumber,"content_type": $content_type,"filename": $filename}'
  )
  echo "$attachment" >>$envelope_file_path
  cat $log_file >>$envelope_file_path
  # Send envelope
  send_envelope $envelope_file
}
if [[ -z "${REPORT_SELF_HOSTED_ISSUES:-}" ]]; then
  echo
  echo "Hey, so ... we would love to automatically find out about issues with your"
  echo "Sentry instance so that we can improve the product. Turns out there is an app"
  echo "for that, called Sentry. Would you be willing to let us automatically send data"
  echo "about your instance upstream to Sentry for development and debugging purposes?"
  echo
  echo "  y / yes / 1"
  echo "  n / no / 0"
  echo
  echo "(Btw, we send this to our own self-hosted Sentry instance, not to Sentry SaaS,"
  echo "so that we can be in this together.)"
  echo
  echo "Here's the info we may collect:"
  echo
  echo "  - OS username"
  echo "  - IP address"
  echo "  - install log"
  echo "  - runtime errors"
  echo "  - performance data"
  echo
  echo "Thirty (30) day retention. No marketing. Privacy policy at sentry.io/privacy."
  echo

  yn=""
  until [ ! -z "$yn" ]; do
    read -p "y or n? " yn
    case $yn in
      y | yes | 1)
        export REPORT_SELF_HOSTED_ISSUES=1
        echo
        echo -n "Thank you."
        ;;
      n | no | 0)
        export REPORT_SELF_HOSTED_ISSUES=0
        echo
        echo -n "Understood."
        ;;
      *) yn="" ;;
    esac
  done

  echo " To avoid this prompt in the future, use one of these flags:"
  echo
  echo "  --report-self-hosted-issues"
  echo "  --no-report-self-hosted-issues"
  echo
  echo "or set the REPORT_SELF_HOSTED_ISSUES environment variable:"
  echo
  echo "  REPORT_SELF_HOSTED_ISSUES=1 to send data"
  echo "  REPORT_SELF_HOSTED_ISSUES=0 to not send data"
  echo
  sleep 5
fi

# Make sure we can use sentry-cli if we need it.
if [ "$REPORT_SELF_HOSTED_ISSUES" == 1 ]; then
  if ! docker pull getsentry/sentry-cli:latest; then
    echo "Failed to pull sentry-cli, won't report to Sentry after all."
    export REPORT_SELF_HOSTED_ISSUES=0
  fi
fi

# Courtesy of https://stackoverflow.com/a/2183063/90297
trap_with_arg() {
  func="$1"
  shift
  for sig; do
    trap "$func $sig" "$sig"
  done
}

DID_CLEAN_UP=0
# the cleanup function will be the exit point
cleanup() {
  local retcode=$?
  local cmd="${BASH_COMMAND}"
  if [[ "$DID_CLEAN_UP" -eq 1 ]]; then
    return 0
  fi
  DID_CLEAN_UP=1
  if [[ "$1" != "EXIT" ]]; then
    set +o xtrace
    # Save the error message that comes from the last line of the log file
    error_msg=$(tail -n 1 "$log_file")
    # Create the breadcrumb payload now, before the stacktrace is printed
    # https://develop.sentry.dev/sdk/event-payloads/breadcrumbs/
    # Use sed to remove the last line, which is reported through the error message
    breadcrumbs=$(generate_breadcrumb_json | sed '$d' | $jq -s -c)
    printf -v err '%s' "Error in ${BASH_SOURCE[1]}:${BASH_LINENO[0]}."
    printf -v cmd_exit '%s' "'$cmd' exited with status $retcode"
    printf '%s\n%s\n' "$err" "$cmd_exit"
    local stack_depth=${#FUNCNAME[@]}
    local traceback=""
    local traceback_json=""
    if [ $stack_depth -gt 2 ]; then
      for ((i = $(($stack_depth - 1)), j = 1; i > 0; i--, j++)); do
        local indent="$(yes a | head -$j | tr -d '\n')"
        local src=${BASH_SOURCE[$i]}
        local lineno=${BASH_LINENO[$i - 1]}
        local funcname=${FUNCNAME[$i]}
        JSON=$(
          $jq -n -c --arg filename "$src" \
            --arg "function" "$funcname" \
            --arg lineno "$lineno" \
            '{"filename": $filename, "function": $function, "lineno": $lineno|tonumber}'
        )
        # If we're in the stacktrace of the file we failed on, we can add a context line with the command that failed
        if [[ $i -eq 1 ]]; then
          JSON=$(
            $jq -n -c --arg cmd "$cmd" \
              --argjson json "$JSON" \
              '$json + {"context_line": $cmd}'
          )
        fi
        printf -v traceback_json '%s\n' "$traceback_json$JSON"
        printf -v traceback '%s\n' "$traceback${indent//a/-}> $src:$funcname:$lineno"
      done
    fi
    echo "$traceback"

    # Only send the event when the report-issues flag is set and the trap signal is not INT (Ctrl-C)
    if [[ "$REPORT_SELF_HOSTED_ISSUES" == 1 && "$1" != "INT" ]]; then
      send_event "$cmd_exit" "$error_msg" "$traceback" "$traceback_json" "$breadcrumbs"
    fi

    if [[ -n "$MINIMIZE_DOWNTIME" ]]; then
      echo "*NOT* cleaning up, to clean your environment run \"docker compose stop\"."
    else
      echo "Cleaning up..."
    fi
  fi

  if [[ -z "$MINIMIZE_DOWNTIME" ]]; then
    $dc stop -t $STOP_TIMEOUT &>/dev/null
  fi
}

echo "${_endgroup}"
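The `trap_with_arg` helper above exists because a plain `trap` handler cannot tell which signal fired; baking the signal name into each trap string passes it along as `$1`. A minimal demonstration (the `on_signal` handler is made up for illustration):

```shell
# Register one handler for several signals, passing the signal name as $1.
trap_with_arg() {
  func="$1"
  shift
  for sig; do
    trap "$func $sig" "$sig"
  done
}

on_signal() {
  echo "caught $1"
}

# Run in a subshell so the EXIT trap fires immediately; prints "caught EXIT".
(trap_with_arg on_signal EXIT INT TERM; exit 0)
```

This is exactly how `cleanup` above can branch on `"$1" != "EXIT"` and `"$1" != "INT"` to decide whether to report the failure.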
15
install/generate-secret-key.sh
Normal file
@ -0,0 +1,15 @@
echo "${_group}Generating secret key ..."

if grep -xq "system.secret-key: '!!changeme!!'" $SENTRY_CONFIG_YML; then
  # This is to escape the secret key to be used in sed below.
  # Note the need to set LC_ALL=C due to BSD tr and sed always trying to decode
  # whatever is passed to them. Kudos to https://stackoverflow.com/a/23584470/90297
  SECRET_KEY=$(
    export LC_ALL=C
    head /dev/urandom | tr -dc "a-z0-9@#%^&*(-_=+)" | head -c 50 | sed -e 's/[\/&]/\\&/g'
  )
  sed -i -e 's/^system.secret-key:.*$/system.secret-key: '"'$SECRET_KEY'"'/' $SENTRY_CONFIG_YML
  echo "Secret key written to $SENTRY_CONFIG_YML"
fi

echo "${_endgroup}"
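The pipeline above can be exercised on its own: draw random bytes, keep only characters from the allowed set, truncate to 50, then backslash-escape `/` and `&` so the key is safe inside the `sed` replacement. A sketch (using a fixed byte count rather than `head`'s default 10 lines, to make the sample size deterministic):

```shell
SECRET_KEY=$(
  # LC_ALL=C keeps BSD tr/sed from choking on raw urandom bytes.
  export LC_ALL=C
  head -c 4096 /dev/urandom | tr -dc "a-z0-9@#%^&*(-_=+)" | head -c 50 | sed -e 's/[\/&]/\\&/g'
)
# 50 characters before escaping; each '&' gains a leading backslash.
echo "${#SECRET_KEY}"
```

The escaping step is what lets the key be spliced into `sed 's/.../system.secret-key: KEY/'` without `&` expanding to the matched text.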
34
install/geoip.sh
Normal file
@ -0,0 +1,34 @@
echo "${_group}Setting up GeoIP integration ..."

install_geoip() {
  local mmdb=geoip/GeoLite2-City.mmdb
  local conf=geoip/GeoIP.conf
  local result='Done'

  echo "Setting up IP address geolocation ..."
  if [[ ! -f "$mmdb" ]]; then
    echo -n "Installing (empty) IP address geolocation database ... "
    cp "$mmdb.empty" "$mmdb"
    echo "done."
  else
    echo "IP address geolocation database already exists."
  fi

  if [[ ! -f "$conf" ]]; then
    echo "IP address geolocation is not configured for updates."
    echo "See https://develop.sentry.dev/self-hosted/geolocation/ for instructions."
    result='Error'
  else
    echo "IP address geolocation is configured for updates."
    echo "Updating IP address geolocation database ... "
    if ! $dcr geoipupdate; then
      result='Error'
    fi
    echo "$result updating IP address geolocation database."
  fi
  echo "$result setting up IP address geolocation."
}

install_geoip

echo "${_endgroup}"
79
install/parse-cli.sh
Normal file
@ -0,0 +1,79 @@
echo "${_group}Parsing command line ..."

show_help() {
  cat <<EOF
Usage: $0 [options]

Install Sentry with \`docker compose\`.

Options:
  -h, --help             Show this message and exit.
  --minimize-downtime    EXPERIMENTAL: try to keep accepting events for as long
                         as possible while upgrading. This will disable cleanup
                         on error, and might leave your installation in a
                         partially upgraded state. This option might not reload
                         all configuration, and is only meant for in-place
                         upgrades.
  --skip-commit-check    Skip the check for the latest commit when on the master
                         branch of a \`self-hosted\` Git working copy.
  --skip-user-creation   Skip the initial user creation prompt (ideal for non-
                         interactive installs).
  --skip-sse42-requirements
                         Skip checking that your environment meets the
                         requirements to run Sentry. Only do this if you know
                         what you are doing.
  --report-self-hosted-issues
                         Report error and performance data about your self-hosted
                         instance upstream to Sentry. See sentry.io/privacy for
                         our privacy policy.
  --no-report-self-hosted-issues
                         Do not report error and performance data about your
                         self-hosted instance upstream to Sentry.
EOF
}

depwarn() {
  echo "WARNING: The $1 is deprecated. Please use $2 instead."
}

if [ ! -z "${SKIP_USER_PROMPT:-}" ]; then
  depwarn "SKIP_USER_PROMPT variable" "SKIP_USER_CREATION"
  SKIP_USER_CREATION="${SKIP_USER_PROMPT}"
fi

SKIP_USER_CREATION="${SKIP_USER_CREATION:-}"
MINIMIZE_DOWNTIME="${MINIMIZE_DOWNTIME:-}"
SKIP_COMMIT_CHECK="${SKIP_COMMIT_CHECK:-}"
REPORT_SELF_HOSTED_ISSUES="${REPORT_SELF_HOSTED_ISSUES:-}"
SKIP_SSE42_REQUIREMENTS="${SKIP_SSE42_REQUIREMENTS:-}"

while (($#)); do
  case "$1" in
    -h | --help)
      show_help
      exit
      ;;
    --no-user-prompt)
      SKIP_USER_CREATION=1
      depwarn "--no-user-prompt flag" "--skip-user-creation"
      ;;
    --skip-user-prompt)
      SKIP_USER_CREATION=1
      depwarn "--skip-user-prompt flag" "--skip-user-creation"
      ;;
    --skip-user-creation) SKIP_USER_CREATION=1 ;;
    --minimize-downtime) MINIMIZE_DOWNTIME=1 ;;
    --skip-commit-check) SKIP_COMMIT_CHECK=1 ;;
    --report-self-hosted-issues) REPORT_SELF_HOSTED_ISSUES=1 ;;
    --no-report-self-hosted-issues) REPORT_SELF_HOSTED_ISSUES=0 ;;
    --skip-sse42-requirements) SKIP_SSE42_REQUIREMENTS=1 ;;
    --) ;;
    *)
      echo "Unexpected argument: $1. Use --help for usage information."
      exit 1
      ;;
  esac
  shift
done

echo "${_endgroup}"
39
install/set-up-and-migrate-database.sh
Normal file
@ -0,0 +1,39 @@
echo "${_group}Setting up / migrating database ..."

# Fixes https://github.com/getsentry/self-hosted/issues/2758, where a migration fails due to an indexing issue
$dc up -d postgres
# Wait for postgres
RETRIES=5
until $dc exec postgres psql -U postgres -c "select 1" >/dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done

os=$($dc exec postgres cat /etc/os-release | grep 'ID=debian')
if [[ -z $os ]]; then
  echo "Postgres image Debian check failed, exiting..."
  exit 1
fi

# Using the Django ORM to provide broader support for users with external databases
$dcr web shell -c "
from django.db import connection

with connection.cursor() as cursor:
    cursor.execute('ALTER TABLE IF EXISTS sentry_groupedmessage DROP CONSTRAINT IF EXISTS sentry_groupedmessage_project_id_id_515aaa7e_uniq;')
    cursor.execute('DROP INDEX IF EXISTS sentry_groupedmessage_project_id_id_515aaa7e_uniq;')
"

if [[ -n "${CI:-}" || "${SKIP_USER_CREATION:-0}" == 1 ]]; then
  $dcr web upgrade --noinput --create-kafka-topics
  echo ""
  echo "Did not prompt for user creation. Run the following command to create one"
  echo "yourself (recommended):"
  echo ""
  echo "  $dc_base run --rm web createuser"
  echo ""
else
  $dcr web upgrade --create-kafka-topics
fi

echo "${_endgroup}"
40
install/setup-js-sdk-assets.sh
Normal file
@ -0,0 +1,40 @@
# This will only run if the SETUP_JS_SDK_ASSETS environment variable is set to 1.
# Think of this as some kind of a feature flag.
if [[ "${SETUP_JS_SDK_ASSETS:-}" == "1" ]]; then
  echo "${_group}Setting up JS SDK assets"

  # If the `sentry-nginx-www` volume exists, we need to prune its contents.
  # We don't want to fill the volume with old JS SDK assets.
  # If people want to keep the old assets, they can set the environment variable
  # `SETUP_JS_SDK_KEEP_OLD_ASSETS` to any value.
  if [[ -z "${SETUP_JS_SDK_KEEP_OLD_ASSETS:-}" ]]; then
    echo "Cleaning up old JS SDK assets..."
    $dcr --no-deps --rm -v "sentry-nginx-www:/var/www" nginx rm -rf /var/www/js-sdk/*
  fi

  $dbuild -t sentry-self-hosted-jq-local --platform="$DOCKER_PLATFORM" jq

  jq="docker run --rm -i sentry-self-hosted-jq-local"

  loader_registry=$($dcr --no-deps --rm -T web cat /usr/src/sentry/src/sentry/loader/_registry.json)
  # The `loader_registry` output may start with "Updating certificates..." and
  # subsequent ca-certificates-related lines, which we need to delete.
  # We want to remove everything before the first '{'.
  loader_registry=$(echo "$loader_registry" | sed '0,/{/s/[^{]*//')

  # The Sentry backend provides SDK versions from v4.x up to v8.x.
  latest_js_v4=$(echo "$loader_registry" | $jq -r '.versions | reverse | map(select(.|any(.; startswith("4.")))) | .[0]')
  latest_js_v5=$(echo "$loader_registry" | $jq -r '.versions | reverse | map(select(.|any(.; startswith("5.")))) | .[0]')
  latest_js_v6=$(echo "$loader_registry" | $jq -r '.versions | reverse | map(select(.|any(.; startswith("6.")))) | .[0]')
  latest_js_v7=$(echo "$loader_registry" | $jq -r '.versions | reverse | map(select(.|any(.; startswith("7.")))) | .[0]')
  latest_js_v8=$(echo "$loader_registry" | $jq -r '.versions | reverse | map(select(.|any(.; startswith("8.")))) | .[0]')

  echo "Found JS SDKs: v${latest_js_v4}, v${latest_js_v5}, v${latest_js_v6}, v${latest_js_v7}, v${latest_js_v8}"

  versions="{$latest_js_v4,$latest_js_v5,$latest_js_v6,$latest_js_v7,$latest_js_v8}"
  variants="{bundle,bundle.tracing,bundle.tracing.replay,bundle.replay,bundle.tracing.replay.feedback,bundle.feedback}"

  # Download those versions & variants using curl
  $dcr --no-deps --rm -v "sentry-nginx-www:/var/www" nginx curl -w '%{response_code} %{url}\n' --no-progress-meter --compressed --retry 3 --create-dirs -fLo "/var/www/js-sdk/#1/#2.min.js" "https://browser.sentry-cdn.com/${versions}/${variants}.min.js" || true

  echo "${_endgroup}"
fi
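The single `curl` invocation above relies on curl's URL globbing: each `{a,b,...}` set in the URL expands into one request per member, and `#1`/`#2` in the `-o` template are replaced by the matched members, which is how one command fans out over every version/variant pair. A sketch of the URL being built, with hypothetical version numbers and no network access:

```shell
# Hypothetical stand-ins for the detected SDK versions and variants.
versions="{7.120.0,8.0.0}"
variants="{bundle,bundle.tracing}"
url="https://browser.sentry-cdn.com/${versions}/${variants}.min.js"
# curl (not the shell) expands the braces: 2 versions x 2 variants = 4 downloads,
# each saved to js-sdk/#1/#2.min.js with #1 = version and #2 = variant.
echo "$url"
```

The braces survive shell expansion because they sit inside double quotes; only curl interprets them, so the one process drives the whole download matrix.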
11
install/turn-things-off.sh
Normal file
@ -0,0 +1,11 @@
echo "${_group}Turning things off ..."

if [[ -n "$MINIMIZE_DOWNTIME" ]]; then
  # Stop everything but relay and nginx
  $dc rm -fsv $($dc config --services | grep -v -E '^(nginx|relay)$')
else
  # Clean up old stuff and ensure nothing is working while we install/update
  $dc down -t $STOP_TIMEOUT --rmi local --remove-orphans
fi

echo "${_endgroup}"
14
install/update-docker-images.sh
Normal file
@ -0,0 +1,14 @@
echo "${_group}Fetching and updating Docker images ..."

# We tag locally built images with a '-self-hosted-local' suffix. `docker
# compose pull` tries to pull these too, and shows a 404 error on the console,
# which is confusing and unnecessary. To overcome this, we add the
# stderr>stdout redirection below and pass it through grep, ignoring all lines
# having this '-self-hosted-local' suffix.

$dc pull -q --ignore-pull-failures 2>&1 | grep -v -- -self-hosted-local || true

# We may not have the set image on the repo (local images), so allow failures
docker pull ${SENTRY_IMAGE} || true

echo "${_endgroup}"
9
install/update-docker-volume-permissions.sh
Normal file
@ -0,0 +1,9 @@
echo "${_group}Ensuring Kafka and Zookeeper volumes have correct permissions ..."

# Only supported on Linux x86 platforms, not Apple silicon. We assume that folks
# using Apple silicon are doing so for dev purposes, and it's difficult to change
# permissions of Docker volumes there since Docker runs in a VM.
if [[ -n "$(docker volume ls -q -f name=sentry-zookeeper)" && -n "$(docker volume ls -q -f name=sentry-kafka)" ]]; then
  docker run --rm -v "sentry-zookeeper:/sentry-zookeeper-data" -v "sentry-kafka:/sentry-kafka-data" -v "${COMPOSE_PROJECT_NAME}_sentry-zookeeper-log:/sentry-zookeeper-log-data" busybox chmod -R a+w /sentry-zookeeper-data /sentry-kafka-data /sentry-zookeeper-log-data
fi

echo "${_endgroup}"
35
install/upgrade-clickhouse.sh
Normal file
@ -0,0 +1,35 @@
echo "${_group}Upgrading Clickhouse ..."

function wait_for_clickhouse() {
  # Wait for clickhouse
  RETRIES=30
  until $dc ps clickhouse | grep 'healthy' || [ $RETRIES -eq 0 ]; do
    echo "Waiting for clickhouse server, $((RETRIES--)) remaining attempts..."
    sleep 1
  done
}

# First check to see if the user is upgrading by checking for an existing clickhouse volume
if [[ -n "$(docker volume ls -q --filter name=sentry-clickhouse)" ]]; then
  # Start clickhouse if it is not already running
  $dc up -d clickhouse

  # Wait for clickhouse
  wait_for_clickhouse

  # In order to get to 23.8, we need to first upgrade from 21.8 -> 22.8 -> 23.3 -> 23.8
  version=$($dc exec clickhouse clickhouse-client -q 'SELECT version()')
  if [[ "$version" == "21.8.13.1.altinitystable" || "$version" == "21.8.12.29.altinitydev.arm" ]]; then
    $dc down clickhouse
    $dcb --build-arg BASE_IMAGE=altinity/clickhouse-server:22.8.15.25.altinitystable clickhouse
    $dc up -d clickhouse
    wait_for_clickhouse
    $dc down clickhouse
    $dcb --build-arg BASE_IMAGE=altinity/clickhouse-server:23.3.19.33.altinitystable clickhouse
    $dc up -d clickhouse
    wait_for_clickhouse
  else
    echo "Detected clickhouse version $version. Skipping upgrades!"
  fi
fi
echo "${_endgroup}"
44
install/upgrade-postgres.sh
Normal file
@ -0,0 +1,44 @@
echo "${_group}Ensuring proper PostgreSQL version ..."

if [[ -n "$(docker volume ls -q --filter name=sentry-postgres)" && "$(docker run --rm -v sentry-postgres:/db busybox cat /db/PG_VERSION 2>/dev/null)" == "9.6" ]]; then
  docker volume rm sentry-postgres-new || true
  # If this is Postgres 9.6 data, start upgrading it to 14 in a new volume
  docker run --rm \
    -v sentry-postgres:/var/lib/postgresql/9.6/data \
    -v sentry-postgres-new:/var/lib/postgresql/14/data \
    tianon/postgres-upgrade:9.6-to-14

  # Get rid of the old volume as we'll rename the new one to that
  docker volume rm sentry-postgres
  docker volume create --name sentry-postgres
  # There's no way to rename a volume in Docker, so copy the contents from the old to the new name
  # Also append the `host all all all trust` line as `tianon/postgres-upgrade:9.6-to-14`
  # doesn't do that automatically.
  docker run --rm -v sentry-postgres-new:/from -v sentry-postgres:/to alpine ash -c \
    "cd /from ; cp -av . /to ; echo 'host all all all trust' >> /to/pg_hba.conf"
  # Finally, remove the temporary volume as everything is in sentry-postgres now.
  docker volume rm sentry-postgres-new
  echo "Re-indexing due to glibc change, this may take a while..."
  echo "Starting up new PostgreSQL version"
  $dc up -d postgres

  # Wait for postgres
  RETRIES=5
  until $dc exec postgres psql -U postgres -c "select 1" >/dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
    echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
    sleep 1
  done

  # VOLUME_NAME is the same as container name
  # Reindex all databases and their system catalogs which are not templates
  DBS=$($dc exec postgres psql -qAt -U postgres -c "SELECT datname FROM pg_database WHERE datistemplate = false;")
  for db in ${DBS}; do
    echo "Re-indexing database: ${db}"
    $dc exec postgres psql -qAt -U postgres -d ${db} -c "reindex system ${db};"
    $dc exec postgres psql -qAt -U postgres -d ${db} -c "reindex database ${db};"
  done

  $dc stop postgres
fi

echo "${_endgroup}"
35
install/wrap-up.sh
Normal file
@ -0,0 +1,35 @@
if [[ "$MINIMIZE_DOWNTIME" ]]; then
  echo "${_group}Waiting for Sentry to start ..."

  # Start the whole setup, except nginx and relay.
  $dc up -d --remove-orphans $($dc config --services | grep -v -E '^(nginx|relay)$')
  $dc restart relay
  $dc exec -T nginx nginx -s reload

  docker run --rm --network="${COMPOSE_PROJECT_NAME}_default" alpine ash \
    -c 'while [[ "$(wget -T 1 -q -O- http://web:96/_health/)" != "ok" ]]; do sleep 0.5; done'

  # Make sure everything is up. This should only touch relay and nginx
  $dc up -d

  echo "${_endgroup}"
else
  echo ""
  echo "-----------------------------------------------------------------"
  echo ""
  echo "You're all done! Run the following command to get Sentry running:"
  echo ""
  if [[ "${_ENV}" =~ ".env.custom" ]]; then
    echo "  $dc_base --env-file ${_ENV} up -d"
  else
    echo "  $dc_base up -d"
  fi
  echo ""
  echo "-----------------------------------------------------------------"
  echo ""
fi

# TODO(getsentry/self-hosted#2489)
if docker volume ls | grep -qw sentry-zookeeper; then
  docker volume rm sentry-zookeeper
fi
11
jq/Dockerfile
Normal file
@ -0,0 +1,11 @@
FROM debian:bookworm-slim

LABEL MAINTAINER="oss@sentry.io"

RUN set -x \
  && apt-get update \
  && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends jq \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/*

ENTRYPOINT ["jq"]
106
nginx.conf
Normal file
@ -0,0 +1,106 @@
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;


events {
  worker_connections 1024;
}


http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;

  log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;

  sendfile on;
  tcp_nopush on;
  tcp_nodelay on;
  reset_timedout_connection on;

  keepalive_timeout 75s;

  gzip off;
  server_tokens off;

  server_names_hash_bucket_size 64;
  types_hash_max_size 2048;
  types_hash_bucket_size 64;
  client_body_buffer_size 64k;
  client_max_body_size 100m;

  proxy_http_version 1.1;
  proxy_redirect off;
  proxy_buffer_size 128k;
  proxy_buffers 4 256k;
  proxy_busy_buffers_size 256k;
  proxy_next_upstream error timeout invalid_header http_502 http_503 non_idempotent;
  proxy_next_upstream_tries 2;

  # Docker default address pools
  # https://github.com/moby/libnetwork/blob/3797618f9a38372e8107d8c06f6ae199e1133ae8/ipamutils/utils.go#L10-L22
  set_real_ip_from 172.17.0.0/16;
  set_real_ip_from 172.18.0.0/16;
  set_real_ip_from 172.19.0.0/16;
  set_real_ip_from 172.20.0.0/14;
  set_real_ip_from 172.24.0.0/14;
  set_real_ip_from 172.28.0.0/14;
  set_real_ip_from 192.168.0.0/16;
  set_real_ip_from 10.0.0.0/8;
  real_ip_header X-Forwarded-For;
  real_ip_recursive on;

  # Remove the Connection header if the client sends it,
  # it could be "close" to close a keepalive connection
  proxy_set_header Connection '';
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $remote_addr;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Request-Id $request_id;
  proxy_read_timeout 30s;
  proxy_send_timeout 5s;

  upstream relay {
    server relay:3000;
    keepalive 2;
  }

  upstream sentry {
    server web:96;
    keepalive 2;
  }

  server {
    listen 80;

    location /api/store/ {
      proxy_pass http://relay;
    }
    location ~ ^/api/[1-9]\d*/ {
      proxy_pass http://relay;
    }
    location ^~ /api/0/relays/ {
      proxy_pass http://relay;
    }
    location ^~ /js-sdk/ {
      root /var/www/;
      # This value is set to mimic the behavior of the upstream Sentry CDN. For security reasons,
      # it is recommended to change this to your Sentry URL (in most cases the same as system.url-prefix).
      add_header Access-Control-Allow-Origin *;
    }
    location / {
      proxy_pass http://sentry;
    }
    location /_static/ {
      proxy_pass http://sentry;
      proxy_hide_header Content-Disposition;
    }
  }
}
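The ingestion `location` blocks above send project-scoped event traffic to Relay while everything else falls through to the Sentry web upstream. A small sketch (hypothetical request paths) of which paths match the `location ~ ^/api/[1-9]\d*/` regex, using Python's `re` module with the same pattern:

```python
import re

# Same pattern as the nginx `location ~ ^/api/[1-9]\d*/` block:
# project IDs start at 1, so /api/0/... (the web API) never matches.
relay_project_re = re.compile(r"^/api/[1-9]\d*/")

# Hypothetical paths; /api/store/ is routed to Relay by its own exact
# location block, not by this regex.
for path in ["/api/42/envelope/", "/api/0/organizations/", "/api/store/"]:
    target = "relay (regex)" if relay_project_re.match(path) else "other rules"
    print(f"{path} -> {target}")
```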
27
redis.conf
Normal file
@ -0,0 +1,27 @@
# redis.conf

# The 'maxmemory' directive controls the maximum amount of memory Redis is allowed to use.
# Setting 'maxmemory 0' means there is no limit on memory usage, allowing Redis to use as much
# memory as the operating system allows. This is suitable for environments where memory
# constraints are not a concern.
#
# Alternatively, you can specify a limit, such as 'maxmemory 15gb', to restrict Redis to
# using a maximum of 15 gigabytes of memory.
#
# Example:
# maxmemory 0     # Unlimited memory usage
# maxmemory 15gb  # Limit memory usage to 15 GB

maxmemory 0

# This setting determines how Redis evicts keys when it reaches the memory limit.
# `allkeys-lru` evicts the least recently used keys from all keys stored in Redis,
# allowing frequently accessed data to remain in memory while older data is removed.
# That said, we use `volatile-lru` because Redis is used both as a cache and a
# processing queue in self-hosted Sentry.
# > The volatile-lru and volatile-random policies are mainly useful when you want to
# > use a single Redis instance for both caching and for a set of persistent keys.
# > However, you should consider running two separate Redis instances in a case like
# > this, if possible.

maxmemory-policy volatile-lru
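As a toy illustration (not Redis internals) of what `volatile-lru` means for the comment above: only keys that have an expiry (TTL) set are eviction candidates, and among those the least recently used is evicted first, so persistent queue keys are never dropped:

```python
# Toy model of volatile-lru with hypothetical keys: evict the
# least-recently-used key among only those keys that have a TTL set.
keys = {
    "cache:event:1": {"ttl": 300, "last_used": 10},   # evictable
    "queue:job:7": {"ttl": None, "last_used": 1},     # persistent, never evicted
    "cache:event:2": {"ttl": 300, "last_used": 50},   # evictable, used recently
}

evictable = [k for k, v in keys.items() if v["ttl"] is not None]
victim = min(evictable, key=lambda k: keys[k]["last_used"])
print(victim)  # cache:event:1
```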
13
relay/config.example.yml
Normal file
@ -0,0 +1,13 @@
relay:
  upstream: "http://web:96/"
  host: 0.0.0.0
  port: 3000
logging:
  level: WARN
processing:
  enabled: true
  kafka_config:
    - {name: "bootstrap.servers", value: "kafka:9092"}
    - {name: "message.max.bytes", value: 50000000} # 50MB
  redis: redis://redis:6379
  geoip_path: "/geoip/GeoLite2-City.mmdb"
8
requirements-dev.txt
Normal file
@ -0,0 +1,8 @@
sentry-sdk>=2.4.0
pytest>=8.0.0
pytest-cov>=4.1.0
pytest-rerunfailures>=11.0
pytest-sentry>=0.1.11
httpx>=0.25.2
beautifulsoup4>=4.7.1
cryptography>=43.0.3
96
scripts/_lib.sh
Executable file
@ -0,0 +1,96 @@
#!/usr/bin/env bash

set -eEuo pipefail

if [ -n "${DEBUG:-}" ]; then
  set -x
fi

function confirm() {
  read -p "$1 [y/n] " confirmation
  if [ "$confirmation" != "y" ]; then
    echo "Canceled. 😅"
    exit
  fi
}

# The purpose of this script is to make it easy to reset a local self-hosted
# install to a clean state, optionally targeting a particular version.

function reset() {
  # If we have a version given, validate it.
  # ----------------------------------------
  # Note that arbitrary git refs won't work, because the *_IMAGE variables in
  # .env will almost certainly point to :latest. Tagged releases are generally
  # the only refs where these component versions are pinned, so enforce that
  # we're targeting a valid tag here. Do this early in order to fail fast.
  if [ -n "$version" ]; then
    set +e
    git rev-parse --verify --quiet "refs/tags/$version" >/dev/null
    if [ $? -gt 0 ]; then
      echo "Bad version: $version"
      exit
    fi
    set -e
  fi

  # Make sure they mean it.
  if [ "${FORCE_CLEAN:-}" == "1" ]; then
    echo "☠️ Seeing FORCE=1, forcing cleanup."
    echo
  else
    confirm "☠️ Warning! 😳 This is highly destructive! 😱 Are you sure you wish to proceed?"
    echo "Okay ... good luck! 😰"
  fi

  # Hit the reset button.
  $dc down --volumes --remove-orphans --rmi local

  # Remove any remaining (likely external) volumes with name matching 'sentry-.*'.
  for volume in $(docker volume list --format '{{ .Name }}' | grep '^sentry-'); do
    docker volume remove $volume >/dev/null &&
      echo "Removed volume: $volume" ||
      echo "Skipped volume: $volume"
  done

  # If we have a version given, switch to it.
  if [ -n "$version" ]; then
    git checkout "$version"
  fi
}

function backup() {
  type=${1:-"global"}
  touch $(pwd)/sentry/backup.json
  chmod 666 $(pwd)/sentry/backup.json
  $dc run -v $(pwd)/sentry:/sentry-data/backup --rm -T -e SENTRY_LOG_LEVEL=CRITICAL web export $type /sentry-data/backup/backup.json
}

function restore() {
  type=${1:-"global"}
  $dc run --rm -T web import $type /etc/sentry/backup.json
}

# Needed variables to source error-handling script
MINIMIZE_DOWNTIME="${MINIMIZE_DOWNTIME:-}"
STOP_TIMEOUT=60

# Save logs in order to send envelope to Sentry
log_file=sentry_"$cmd"_log-$(date +'%Y-%m-%d_%H-%M-%S').txt
exec &> >(tee -a "$log_file")
version=""

while (($#)); do
  case "$1" in
  --report-self-hosted-issues) REPORT_SELF_HOSTED_ISSUES=1 ;;
  --no-report-self-hosted-issues) REPORT_SELF_HOSTED_ISSUES=0 ;;
  *) version=$1 ;;
  esac
  shift
done

# Source files needed to set up error-handling
source install/dc-detect-version.sh
source install/detect-platform.sh
source install/error-handling.sh
trap_with_arg cleanup ERR INT TERM EXIT
4
scripts/backup.sh
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env bash
cmd="backup $1"
source scripts/_lib.sh
$cmd
10
scripts/bump-version.sh
Executable file
@ -0,0 +1,10 @@
#!/bin/bash
set -eu

OLD_VERSION="$1"
NEW_VERSION="$2"

sed -i -e "s/^\(SENTRY\|SNUBA\|RELAY\|SYMBOLICATOR\|VROOM\)_IMAGE=\([^:]\+\):.\+\$/\1_IMAGE=\2:$NEW_VERSION/" .env
sed -i -e "s/^\# Self-Hosted Sentry .*/# Self-Hosted Sentry $NEW_VERSION/" README.md

echo "New version: $NEW_VERSION"
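The first `sed` expression above rewrites only the tag portion of each `*_IMAGE=` line in `.env`, preserving the image name. A rough Python equivalent of that substitution (the input line and new version are hypothetical):

```python
import re

NEW_VERSION = "24.1.0"  # hypothetical new version
line = "SENTRY_IMAGE=getsentry/sentry:nightly"  # hypothetical .env line

# Group 1 keeps the variable prefix, group 2 keeps the image name;
# only the tag after the last ':' is replaced.
bumped = re.sub(
    r"^(SENTRY|SNUBA|RELAY|SYMBOLICATOR|VROOM)_IMAGE=([^:]+):.+$",
    rf"\1_IMAGE=\2:{NEW_VERSION}",
    line,
)
print(bumped)  # SENTRY_IMAGE=getsentry/sentry:24.1.0
```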
7
scripts/post-release.sh
Executable file
@ -0,0 +1,7 @@
#!/bin/bash
set -eu

# Bring master back to nightlies after merge from release branch
git checkout master && git pull --rebase
./scripts/bump-version.sh '' 'nightly'
git diff --quiet || git commit -anm $'build: Set master version to nightly\n\n#skip-changelog' && git pull --rebase && git push
4
scripts/reset.sh
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env bash
cmd=reset
source scripts/_lib.sh
$cmd
4
scripts/restore.sh
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env bash
cmd="restore $1"
source scripts/_lib.sh
$cmd
69
sentry-admin.sh
Executable file
@ -0,0 +1,69 @@
#!/bin/bash

# Set the script directory as working directory.
cd $(dirname $0)

# Detect docker and platform state.
source install/dc-detect-version.sh
source install/detect-platform.sh

# Define the Docker volume mapping.
VOLUME_MAPPING="${SENTRY_DOCKER_IO_DIR:-$HOME/.sentry/sentry-admin}:/sentry-admin"

# Custom help text paragraphs
HELP_TEXT_SUFFIX="
All file paths are relative to the 'web' docker container, not the host environment. To pass files
to/from the host system for commands that require it ('execfile', 'export', 'import', etc), you may
specify a 'SENTRY_DOCKER_IO_DIR' environment variable to mount a volume for file IO operations into
the host filesystem. The default value of 'SENTRY_DOCKER_IO_DIR' points to '~/.sentry/sentry-admin'
on the host filesystem. Commands that write files should write them to '/sentry-admin' in the
'web' container (ex: './sentry-admin.sh export global /sentry-admin/my-export.json').
"

# Actual invocation that runs the command in the container.
invocation() {
  $dc run -v "$VOLUME_MAPPING" --rm -T -e SENTRY_LOG_LEVEL=CRITICAL web "$@" 2>&1
}

# Function to modify lines starting with `Usage: sentry` to say `Usage: ./sentry-admin.sh` instead.
rename_sentry_bin_in_help_output() {
  local output="$1"
  local help_prefix="$2"
  local usage_seen=false

  output=$(invocation "$@")

  echo -e "\n\n"

  while IFS= read -r line; do
    if [[ $line == "Usage: sentry"* ]] && [ "$usage_seen" = false ]; then
      echo -e "\n\n"
      echo "${line/sentry/./sentry-admin.sh}"
      echo "$help_prefix"
      usage_seen=true
    else
      if [[ $line == "Options:"* ]] && [ -n "$1" ]; then
        echo "$help_prefix"
      fi
      echo "$line"
    fi
  done <<<"$output"
}

# Check for the user passing ONLY the '--help' argument - we'll add a special prefix to the output.
if { [ "$1" = "help" ] || [ "$1" = "--help" ]; } && [ "$#" -eq 1 ]; then
  rename_sentry_bin_in_help_output "$(invocation "$@")" "$HELP_TEXT_SUFFIX"
  exit 0
fi

# Check for '--help' in other contexts.
for arg in "$@"; do
  if [ "$arg" = "--help" ]; then
    rename_sentry_bin_in_help_output "$(invocation "$@")"
    exit 0
  fi
done

# Help has not been requested - go ahead and execute the command.
echo -e "\n\n"
invocation "$@"
13
sentry/Dockerfile
Normal file
@ -0,0 +1,13 @@
ARG SENTRY_IMAGE
FROM ${SENTRY_IMAGE}

COPY . /usr/src/sentry

RUN if [ -s /usr/src/sentry/enhance-image.sh ]; then \
  /usr/src/sentry/enhance-image.sh; \
  fi

RUN if [ -s /usr/src/sentry/requirements.txt ]; then \
  echo "sentry/requirements.txt is deprecated, use sentry/enhance-image.sh - see https://develop.sentry.dev/self-hosted/#enhance-sentry-image"; \
  pip install -r /usr/src/sentry/requirements.txt; \
  fi
134
sentry/config.example.yml
Normal file
@ -0,0 +1,134 @@
# While a lot of configuration in Sentry can be changed via the UI, for all
# new-style config (as of 8.0) you can also declare values here in this file
# to enforce defaults or to ensure they cannot be changed via the UI. For more
# information see the Sentry documentation.

###############
# Mail Server #
###############

# mail.backend: 'smtp'  # Use dummy if you want to disable email entirely
mail.host: 'smtp'
# mail.port: 25
# mail.username: ''
# mail.password: ''
# NOTE: `mail.use-tls` and `mail.use-ssl` are mutually exclusive and should not
# appear at the same time. Only uncomment one of them.
# mail.use-tls: false
# mail.use-ssl: false

# NOTE: The following 2 configs (mail.from and mail.list-namespace) are set
# through SENTRY_MAIL_HOST in sentry.conf.py so remove those first if
# you want your values in this file to be effective!

# The email address to send on behalf of
# mail.from: 'root@localhost' or ...
# mail.from: 'System Administrator <root@localhost>'

# The mailing list namespace for emails sent by this Sentry server.
# This should be a domain you own (often the same domain as the domain
# part of the `mail.from` configuration parameter value) or `localhost`.
# mail.list-namespace: 'localhost'

# If you'd like to configure email replies, enable this.
# mail.enable-replies: true

# When email-replies are enabled, this value is used in the Reply-To header
# mail.reply-hostname: ''

# If you're using mailgun for inbound mail, set your API key and configure a
# route to forward to /api/hooks/mailgun/inbound/
# Also don't forget to set `mail.enable-replies: true` above.
# mail.mailgun-api-key: ''

###################
# System Settings #
###################

# If this file ever becomes compromised, it's important to generate a new key.
# Changing this value will result in all current sessions being invalidated.
# A new key can be generated with `$ sentry config generate-secret-key`
system.secret-key: '!!changeme!!'

# The ``redis.clusters`` setting is used, unsurprisingly, to configure Redis
# clusters. These clusters can then be referred to by name when configuring
# backends such as the cache, digests, or TSDB backend.
# redis.clusters:
#   default:
#     hosts:
#       0:
#         host: 127.0.0.1
#         port: 6379

################
# File storage #
################

# Uploaded media uses these `filestore` settings. The available
# backends are either `filesystem` or `s3`.

filestore.backend: 'filesystem'
filestore.options:
  location: '/data/files'
dsym.cache-path: '/data/dsym-cache'
releasefile.cache-path: '/data/releasefile-cache'

# filestore.backend: 's3'
# filestore.options:
#   access_key: 'AKIXXXXXX'
#   secret_key: 'XXXXXXX'
#   bucket_name: 's3-bucket-name'

# The URL prefix in which Sentry is accessible
# system.url-prefix: https://example.sentry.com
system.internal-url-prefix: 'http://web:96'
symbolicator.enabled: true
symbolicator.options:
  url: "http://symbolicator:3021"

transaction-events.force-disable-internal-project: true

######################
# GitHub Integration #
######################

# Refer to https://develop.sentry.dev/integrations/github/ for setup instructions.

# github-login.extended-permissions: ['repo']
# github-app.id: GITHUB_APP_ID
# github-app.name: 'GITHUB_APP_NAME'
# github-app.webhook-secret: 'GITHUB_WEBHOOK_SECRET'  # Use only if configured in GitHub
# github-app.client-id: 'GITHUB_CLIENT_ID'
# github-app.client-secret: 'GITHUB_CLIENT_SECRET'
# github-app.private-key: |
#   -----BEGIN RSA PRIVATE KEY-----
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   privatekeyprivatekeyprivatekeyprivatekey
#   -----END RSA PRIVATE KEY-----

#####################
# Slack Integration #
#####################

# Refer to https://develop.sentry.dev/integrations/slack/ for setup instructions.

# slack.client-id: <'client id'>
# slack.client-secret: <client secret>
# slack.signing-secret: <signing secret>
## If legacy-app is True use verification-token instead of signing-secret
# slack.verification-token: <verification token>


#######################
# Discord Integration #
#######################

# Refer to https://develop.sentry.dev/integrations/discord/

# discord.application-id: "<application id>"
# discord.public-key: "<public key>"
# discord.client-secret: "<client secret>"
# discord.bot-token: "<bot token>"
8
sentry/enhance-image.example.sh
Executable file
@ -0,0 +1,8 @@
#!/bin/bash
set -euo pipefail

# Enhance the base $SENTRY_IMAGE with additional dependencies, plugins - see https://develop.sentry.dev/self-hosted/#enhance-sentry-image
# For example:
# apt-get update
# apt-get install -y gcc libsasl2-dev libldap2-dev libssl-dev
# pip install python-ldap
12
sentry/entrypoint.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/bash
set -e

if [ "$(ls -A /usr/local/share/ca-certificates/)" ]; then
  update-ca-certificates
fi

if [ -e /etc/sentry/requirements.txt ]; then
  echo "sentry/requirements.txt is deprecated, use sentry/enhance-image.sh - see https://develop.sentry.dev/self-hosted/#enhance-sentry-image"
fi

source /docker-entrypoint.sh
1
sentry/requirements.example.txt
Normal file
@ -0,0 +1 @@
# sentry/requirements.txt is deprecated, use sentry/enhance-image.sh - see https://develop.sentry.dev/self-hosted/#enhance-sentry-image
404
sentry/sentry.conf.example.py
Normal file
@ -0,0 +1,404 @@
# This file is just Python, with a touch of Django, which means
# you can inherit and tweak settings to your heart's content.

from sentry.conf.server import *  # NOQA

BYTE_MULTIPLIER = 1024
UNITS = ("K", "M", "G")


def unit_text_to_bytes(text):
    unit = text[-1].upper()
    power = UNITS.index(unit) + 1
    return float(text[:-1]) * (BYTE_MULTIPLIER**power)
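`unit_text_to_bytes` interprets a trailing K/M/G suffix as a power of 1024. A quick check of that arithmetic (same function body, hypothetical inputs):

```python
BYTE_MULTIPLIER = 1024
UNITS = ("K", "M", "G")

def unit_text_to_bytes(text):
    # "50M" -> unit "M" -> power 2 -> 50 * 1024**2
    unit = text[-1].upper()
    power = UNITS.index(unit) + 1
    return float(text[:-1]) * (BYTE_MULTIPLIER**power)

print(unit_text_to_bytes("50M"))  # 52428800.0
print(unit_text_to_bytes("1G"))   # 1073741824.0
```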
|
||||
|
||||
# Generously adapted from pynetlinux: https://github.com/rlisagor/pynetlinux/blob/e3f16978855c6649685f0c43d4c3fcf768427ae5/pynetlinux/ifconfig.py#L197-L223
|
||||
def get_internal_network():
|
||||
import ctypes
|
||||
import fcntl
|
||||
import math
|
||||
import socket
|
||||
import struct
|
||||
|
||||
iface = b"eth0"
|
||||
sockfd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
|
||||
ifreq = struct.pack(b"16sH14s", iface, socket.AF_INET, b"\x00" * 14)
|
||||
|
||||
try:
|
||||
ip = struct.unpack(
|
||||
b"!I", struct.unpack(b"16sH2x4s8x", fcntl.ioctl(sockfd, 0x8915, ifreq))[2]
|
||||
)[0]
|
||||
netmask = socket.ntohl(
|
||||
struct.unpack(b"16sH2xI8x", fcntl.ioctl(sockfd, 0x891B, ifreq))[2]
|
||||
)
|
||||
except IOError:
|
||||
return ()
|
||||
base = socket.inet_ntoa(struct.pack(b"!I", ip & netmask))
|
||||
netmask_bits = 32 - int(round(math.log(ctypes.c_uint32(~netmask).value + 1, 2), 1))
|
||||
return "{0:s}/{1:d}".format(base, netmask_bits)
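The ioctl calls above read the `eth0` address and netmask; the last three lines are pure arithmetic that converts them into CIDR notation. A sketch of just that arithmetic, with assumed example values (`10.0.0.5` / `255.255.255.0`) standing in for the ioctl results:

```python
import ctypes
import math
import socket
import struct

# Assumed example values; in the real function both come from ioctl calls.
ip = struct.unpack("!I", socket.inet_aton("10.0.0.5"))[0]
netmask = struct.unpack("!I", socket.inet_aton("255.255.255.0"))[0]

# Mask off the host bits, then count the set bits of the netmask via log2.
base = socket.inet_ntoa(struct.pack("!I", ip & netmask))
netmask_bits = 32 - int(round(math.log(ctypes.c_uint32(~netmask).value + 1, 2), 1))
cidr = "{0:s}/{1:d}".format(base, netmask_bits)
print(cidr)  # 10.0.0.0/24
```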


INTERNAL_SYSTEM_IPS = (get_internal_network(),)


DATABASES = {
    "default": {
        "ENGINE": "sentry.db.postgres",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "",
        "HOST": "postgres",
        "PORT": "",
    }
}

# You should not change this setting after your database has been created
# unless you have altered all schemas first
SENTRY_USE_BIG_INTS = True

# If you're expecting any kind of real traffic on Sentry, we highly recommend
# configuring the CACHES and Redis settings

###########
# General #
###########

# Instruct Sentry that this install intends to be run by a single organization
# and thus various UI optimizations should be enabled.
SENTRY_SINGLE_ORGANIZATION = True

SENTRY_OPTIONS["system.event-retention-days"] = int(
    env("SENTRY_EVENT_RETENTION_DAYS", "90")
)
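Sentry's `env()` helper reads the variable with a `"90"` fallback; wrapping it in `int()` makes a non-numeric value fail loudly at startup. A standalone sketch using `os.environ` in place of the helper:

```python
import os

# Simulate the unset default; env("SENTRY_EVENT_RETENTION_DAYS", "90")
# behaves like os.environ.get with the same fallback.
os.environ.pop("SENTRY_EVENT_RETENTION_DAYS", None)
retention_days = int(os.environ.get("SENTRY_EVENT_RETENTION_DAYS", "90"))
print(retention_days)  # 90
```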

#########
# Redis #
#########

# Generic Redis configuration used as defaults for various things including:
# Buffers, Quotas, TSDB

SENTRY_OPTIONS["redis.clusters"] = {
    "default": {
        "hosts": {0: {"host": "redis", "password": "", "port": "6379", "db": "0"}}
    }
}

#########
# Queue #
#########

# See https://develop.sentry.dev/services/queue/ for more
# information on configuring your queue broker and workers. Sentry relies
# on a Python framework called Celery to manage queues.

rabbitmq_host = None
if rabbitmq_host:
    BROKER_URL = "amqp://{username}:{password}@{host}/{vhost}".format(
        username="guest", password="guest", host=rabbitmq_host, vhost="/"
    )
else:
    BROKER_URL = "redis://:{password}@{host}:{port}/{db}".format(
        **SENTRY_OPTIONS["redis.clusters"]["default"]["hosts"][0]
    )
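With the default `redis.clusters` values above and `rabbitmq_host = None`, the `else` branch expands to a Redis broker URL. A minimal sketch of that expansion:

```python
# The node dict mirrors the default cluster entry above.
redis_node = {"host": "redis", "password": "", "port": "6379", "db": "0"}
BROKER_URL = "redis://:{password}@{host}:{port}/{db}".format(**redis_node)
print(BROKER_URL)  # redis://:@redis:6379/0
```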


#########
# Cache #
#########

# Sentry currently utilizes two separate mechanisms. While CACHES is not a
# requirement, it will optimize several high throughput patterns.

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": ["memcached:11211"],
        "TIMEOUT": 3600,
        "OPTIONS": {"ignore_exc": True},
    }
}

# A primary cache is required for things such as processing events
SENTRY_CACHE = "sentry.cache.redis.RedisCache"

DEFAULT_KAFKA_OPTIONS = {
    "bootstrap.servers": "kafka:9092",
    "message.max.bytes": 50000000,
    "socket.timeout.ms": 1000,
}

SENTRY_EVENTSTREAM = "sentry.eventstream.kafka.KafkaEventStream"
SENTRY_EVENTSTREAM_OPTIONS = {"producer_configuration": DEFAULT_KAFKA_OPTIONS}

KAFKA_CLUSTERS["default"] = DEFAULT_KAFKA_OPTIONS

###############
# Rate Limits #
###############

# Rate limits apply to notification handlers and are enforced per-project
# automatically.

SENTRY_RATELIMITER = "sentry.ratelimits.redis.RedisRateLimiter"

##################
# Update Buffers #
##################

# Buffers (combined with queueing) act as an intermediate layer between the
# database and the storage API. They will greatly improve efficiency on large
# numbers of the same events being sent to the API in a short amount of time.
# (read: if you send any kind of real data to Sentry, you should enable buffers)

SENTRY_BUFFER = "sentry.buffer.redis.RedisBuffer"

##########
# Quotas #
##########

# Quotas allow you to rate limit individual projects or the Sentry install as
# a whole.

SENTRY_QUOTAS = "sentry.quotas.redis.RedisQuota"

########
# TSDB #
########

# The TSDB is used for building charts as well as making things like per-rate
# alerts possible.

SENTRY_TSDB = "sentry.tsdb.redissnuba.RedisSnubaTSDB"

#########
# SNUBA #
#########

SENTRY_SEARCH = "sentry.search.snuba.EventsDatasetSnubaSearchBackend"
SENTRY_SEARCH_OPTIONS = {}
SENTRY_TAGSTORE_OPTIONS = {}

###########
# Digests #
###########

# The digest backend powers notification summaries.

SENTRY_DIGESTS = "sentry.digests.backends.redis.RedisBackend"

###################
# Metrics Backend #
###################

SENTRY_RELEASE_HEALTH = "sentry.release_health.metrics.MetricsReleaseHealthBackend"
SENTRY_RELEASE_MONITOR = (
    "sentry.release_health.release_monitor.metrics.MetricReleaseMonitorBackend"
)

##############
# Web Server #
##############

SENTRY_WEB_HOST = "0.0.0.0"
SENTRY_WEB_PORT = 96
SENTRY_WEB_OPTIONS = {
    "http": "%s:%s" % (SENTRY_WEB_HOST, SENTRY_WEB_PORT),
    "protocol": "uwsgi",
    # This is needed in order to prevent https://github.com/getsentry/sentry/blob/c6f9660e37fcd9c1bbda8ff4af1dcfd0442f5155/src/sentry/services/http.py#L70
    "uwsgi-socket": None,
    "so-keepalive": True,
    # Keep this between 15s-75s as that's what Relay supports
    "http-keepalive": 15,
    "http-chunked-input": True,
    # the number of web workers
    "workers": 3,
    "threads": 4,
    "memory-report": False,
    # Some stuff so uwsgi will cycle workers sensibly
    "max-requests": 100000,
    "max-requests-delta": 500,
    "max-worker-lifetime": 86400,
    # Duplicate options from sentry default just so we don't get
    # bit by sentry changing a default value that we depend on.
    "thunder-lock": True,
    "log-x-forwarded-for": False,
    "buffer-size": 32768,
    "limit-post": 209715200,
    "disable-logging": True,
    "reload-on-rss": 600,
    "ignore-sigpipe": True,
    "ignore-write-errors": True,
    "disable-write-exception": True,
}

###########
# SSL/TLS #
###########

# If you're using a reverse SSL proxy, you should enable the X-Forwarded-Proto
# header and enable the settings below

# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# USE_X_FORWARDED_HOST = True
# SESSION_COOKIE_SECURE = True
# CSRF_COOKIE_SECURE = True
# SOCIAL_AUTH_REDIRECT_IS_HTTPS = True

# End of SSL/TLS settings

########
# Mail #
########

SENTRY_OPTIONS["mail.list-namespace"] = env("SENTRY_MAIL_HOST", "localhost")
SENTRY_OPTIONS["mail.from"] = f"sentry@{SENTRY_OPTIONS['mail.list-namespace']}"
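The `mail.from` address is derived from `mail.list-namespace` via an f-string. A sketch with an assumed mail host in place of the `env()` lookup:

```python
# "sentry.example.com" is an assumed value for env("SENTRY_MAIL_HOST", "localhost").
SENTRY_OPTIONS = {}
SENTRY_OPTIONS["mail.list-namespace"] = "sentry.example.com"
SENTRY_OPTIONS["mail.from"] = f"sentry@{SENTRY_OPTIONS['mail.list-namespace']}"
print(SENTRY_OPTIONS["mail.from"])  # sentry@sentry.example.com
```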

############
# Features #
############

SENTRY_FEATURES["projects:sample-events"] = False
SENTRY_FEATURES.update(
    {
        feature: True
        for feature in (
            "organizations:discover",
            "organizations:events",
            "organizations:global-views",
            "organizations:incidents",
            "organizations:integrations-issue-basic",
            "organizations:integrations-issue-sync",
            "organizations:invite-members",
            "organizations:metric-alert-builder-aggregate",
            "organizations:sso-basic",
            "organizations:sso-rippling",
            "organizations:sso-saml2",
            "organizations:performance-view",
            "organizations:advanced-search",
            "organizations:session-replay",
            "organizations:issue-platform",
            "organizations:profiling",
            "organizations:monitors",
            "organizations:dashboards-mep",
            "organizations:mep-rollout-flag",
            "organizations:dashboards-rh-widget",
            "organizations:metrics-extraction",
            "organizations:transaction-metrics-extraction",
            "projects:custom-inbound-filters",
            "projects:data-forwarding",
            "projects:discard-groups",
            "projects:plugins",
            "projects:rate-limits",
            "projects:servicehooks",
        )
        # Starfish related flags
        + (
            "organizations:deprecate-fid-from-performance-score",
            "organizations:indexed-spans-extraction",
            "organizations:insights-entry-points",
            "organizations:insights-initial-modules",
            "organizations:insights-addon-modules",
            "organizations:mobile-ttid-ttfd-contribution",
            "organizations:performance-calculate-score-relay",
            "organizations:standalone-span-ingestion",
            "organizations:starfish-browser-resource-module-image-view",
            "organizations:starfish-browser-resource-module-ui",
            "organizations:starfish-browser-webvitals",
            "organizations:starfish-browser-webvitals-pageoverview-v2",
            "organizations:starfish-browser-webvitals-replace-fid-with-inp",
            "organizations:starfish-browser-webvitals-use-backend-scores",
            "organizations:starfish-mobile-appstart",
            "projects:span-metrics-extraction",
            "projects:span-metrics-extraction-addons",
        )
        # User Feedback related flags
        + (
            "organizations:user-feedback-ingest",
            "organizations:user-feedback-replay-clip",
            "organizations:user-feedback-ui",
        )
    }
)
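The construct above is a single dict comprehension iterating over several concatenated tuples, so each group of flags can carry its own comment while all end up mapped to `True`. A minimal sketch of the pattern with a shortened flag list:

```python
# Two tuple groups, concatenated, then fed to one dict comprehension.
core_flags = ("organizations:discover", "organizations:events")
feedback_flags = ("organizations:user-feedback-ui",)

flags = {feature: True for feature in core_flags + feedback_flags}
print(flags)
```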

#######################
# MaxMind Integration #
#######################

GEOIP_PATH_MMDB = "/geoip/GeoLite2-City.mmdb"

#########################
# Bitbucket Integration #
#########################

# BITBUCKET_CONSUMER_KEY = 'YOUR_BITBUCKET_CONSUMER_KEY'
# BITBUCKET_CONSUMER_SECRET = 'YOUR_BITBUCKET_CONSUMER_SECRET'

##############################################
# Suggested Fix Feature / OpenAI Integration #
##############################################

# See https://docs.sentry.io/product/issues/issue-details/ai-suggested-solution/
# for more information about the feature. Make sure OpenAI's privacy policy is
# aligned with your company's.

# Set OPENAI_API_KEY in the .env or .env.custom file to a valid
# OpenAI API key to turn on the feature.
OPENAI_API_KEY = env("OPENAI_API_KEY", "")

SENTRY_FEATURES["organizations:open-ai-suggestion"] = bool(OPENAI_API_KEY)

##############################################
# Content Security Policy settings
##############################################

# CSP_REPORT_URI = "https://{your-sentry-installation}/api/{csp-project}/security/?sentry_key={sentry-key}"
CSP_REPORT_ONLY = True

# optional extra permissions
# https://django-csp.readthedocs.io/en/latest/configuration.html
# CSP_SCRIPT_SRC += ["example.com"]

#################
# CSRF Settings #
#################

# Since version 24.1.0, Sentry has migrated to Django 4, which contains stricter CSRF protection.
# If you are accessing Sentry from multiple domains behind a reverse proxy, you should set
# this to match your IPs/domains. Ports should be included if you are using custom ports.
# https://docs.djangoproject.com/en/4.2/ref/settings/#std-setting-CSRF_TRUSTED_ORIGINS

# CSRF_TRUSTED_ORIGINS = ["https://example.com", "http://127.0.0.1:96"]

#################
# JS SDK Loader #
#################

# Configure the Sentry JS SDK bundle URL template for Loader Scripts.
# Learn more about Loader Scripts: https://docs.sentry.io/platforms/javascript/install/loader/
# If you wish to host your own JS SDK bundles, set the `SETUP_JS_SDK_ASSETS` environment variable to `1`
# in your `.env` or `.env.custom` file. Then, replace the value below with your own public URL.
# For example: "https://sentry.example.com/js-sdk/%s/bundle%s.min.js"
#
# By default, the previous JS SDK assets version will be pruned during upgrades. If you wish
# to keep the old assets, set the `SETUP_JS_SDK_KEEP_OLD_ASSETS` environment variable to any value in
# your `.env` or `.env.custom` file. The files should only be a few KBs, and this might be useful
# if you're using it directly like a CDN instead of using the loader script.
JS_SDK_LOADER_DEFAULT_SDK_URL = "https://browser.sentry-cdn.com/%s/bundle%s.min.js"


# If you would like to use self-hosted Sentry with only errors enabled, set this accordingly
SENTRY_SELF_HOSTED_ERRORS_ONLY = env("COMPOSE_PROFILES") != "feature-complete"
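`COMPOSE_PROFILES` comes from the `.env` file; any value other than `feature-complete` switches the install to errors-only mode. A sketch using `os.environ` in place of Sentry's `env()` helper, with an assumed illustrative value:

```python
import os

# "errors-only" is an assumed example value for COMPOSE_PROFILES.
os.environ["COMPOSE_PROFILES"] = "errors-only"
SENTRY_SELF_HOSTED_ERRORS_ONLY = os.environ["COMPOSE_PROFILES"] != "feature-complete"
print(SENTRY_SELF_HOSTED_ERRORS_ONLY)  # True
```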

#####################
# Insights Settings #
#####################

# Since version 24.3.0, Insights features are available on self-hosted. For the Requests module,
# scrubbing logic runs on Relay to prevent high cardinality of stored HTTP hosts.
# In a self-hosted scenario, however, the set of stored HTTP hosts may be stable,
# and you may have an allow list of hosts that you want to keep. Uncomment the following line
# to allow specific hosts. Entries may be IP addresses or domain names (without `http://` or `https://`).

# SENTRY_OPTIONS["relay.span-normalization.allowed_hosts"] = ["example.com", "192.168.10.1"]
8 symbolicator/config.example.yml Normal file
@@ -0,0 +1,8 @@
# See: https://getsentry.github.io/symbolicator/#configuration
cache_dir: "/data"
bind: "0.0.0.0:3021"
logging:
  level: "warn"
metrics:
  statsd: null
sentry_dsn: null # TODO: Automatically fill this with the internal project DSN
21 unit-test.sh Executable file
@@ -0,0 +1,21 @@
#!/usr/bin/env bash

export REPORT_SELF_HOSTED_ISSUES=0 # will be overridden in the relevant test

FORCE_CLEAN=1 "./scripts/reset.sh"
fail=0
for test_file in _unit-test/*-test.sh; do
  if [ -n "$1" ] && [ "$1" != "$test_file" ]; then
    echo "🙊 Skipping $test_file ..."
    continue
  fi
  echo "🙈 Running $test_file ..."
  $test_file
  exit_code=$?
  if [ $exit_code != 0 ]; then
    echo "fail 👎 with exit code $exit_code"
    fail=1
  fi
done

exit $fail
11 workstation/200_download-self-hosted.sh Normal file
@@ -0,0 +1,11 @@
#!/bin/bash
#

# Create getsentry folder and enter.
mkdir /home/user/getsentry
cd /home/user/getsentry

# Pull down sentry and self-hosted.
git clone https://github.com/getsentry/sentry.git
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
8 workstation/201_install-self-hosted.sh Normal file
@@ -0,0 +1,8 @@
#!/bin/bash
#

# Install self-hosted. Assumes `200_download-self-hosted.sh` has already run.
./install.sh --skip-commit-check --skip-user-creation --skip-sse42-requirements --no-report-self-hosted-issues

# Apply CSRF override to the newly installed sentry settings.
echo "CSRF_TRUSTED_ORIGINS = [\"https://96-$WEB_HOST\"]" >>/home/user/getsentry/self-hosted/sentry/sentry.conf.py
9 workstation/299_setup-completed.sh Normal file
@@ -0,0 +1,9 @@
#!/bin/bash
#

# Add a dot file to the home directory indicating that the setup has completed successfully.
# The host side of the connection will look for this file when polling for completion to indicate to
# the user that the workstation is ready for use.
#
# Works under the assumption that this is the last setup script to run!
echo "ready_at: $(date -u +"%Y-%m-%dT%H:%M:%SZ")" >/home/user/.sentry.workstation.remote
160 workstation/README.md Normal file
@@ -0,0 +1,160 @@
# Remote Self-Hosted Development on Google Cloud Workstation

This document specifies how to set up remote workstation development for `self-hosted` using Google
Cloud. While this feature is primarily intended for Sentry developers seeking to develop or test
changes to `self-hosted`, in theory anyone with a Google Cloud account and the willingness to incur
the associated costs could replicate the setup described here.

The goal of remote workstations is to provide turn-key instances for developing on `self-hosted`, in
either postinstall (the `./install.sh` script has already run) or preinstall (it has not) modes. By
using Ubuntu as a base image, we are able to provide a fresh development environment that is very
similar to the Linux-based x86 instances that `self-hosted` is intended to be deployed to.

Specifically, the goals of this effort are:

- Create and manage turn-key virtual machines for development in either preinstall or postinstall
  mode quickly and with minimal manual user input.
- Simulate real `self-hosted` deployment environments as faithfully as possible.
- Create a smooth developer experience when using VSCode and GitHub.

The last point is worth emphasizing: this tool is specifically optimized to work well with VSCode
remote server (for the actual development) and GitHub (for pushing changes). Supporting any other
workflows is an explicit ***non-goal*** of this setup.

The instructions here describe how to set up workstations as an administrator (that is, the person in
charge of managing and paying for the entire fleet of workstations). End users are expected to
create, connect to, manage, and shut down workstations as needed via the `sentry` developer CLI
using the `sentry workstations ...` set of commands. For most use cases outside of
Sentry-the-company, the administrator and end user will be the same individual: they'll configure
their Google Cloud projects and billing, and then use them via `sentry workstations ...` commands on
their local machine.

## Configuring Google Cloud

You'll need to use two Google Cloud services to enable remote `self-hosted` deployment: Google Cloud
Workstations to run the actual virtual machines, and the Artifact Registry to store the base images
described in the adjacent Dockerfiles.

The rest of this document will assume that you are configuring these services to be used from the
west coast of the United States (i.e. `us-west1`), but a similar set of processes could be applied
for any region supported by Google Cloud.

### Creating an Artifact Registry

You can create an artifact registry using the Google Cloud Platform UI
[here](https://console.cloud.google.com/artifacts):



The dialog should be straightforward. We'll name our new repository `sentry-workstation-us` and put
it in `us-west1`, but you could change these to whatever options suit your liking. Leave the
remaining configurations as they are:



### Setting up Cloud Workstations

To use Google Cloud Workstations, you'll need to make at least one workstation cluster, and at least
one configuration therein.

Navigate to the service's [control panel](https://console.cloud.google.com/workstations/overview).
From here, you'll need to make one cluster for each region you plan to support. We'll make one for
`us-west1` in this tutorial, naming it `us-west` for clarity:



Now, create a new configuration for that cluster. There are a few choices to make here:

- Do you want it to be a preinstall (ie, `./install.sh` has not run) or postinstall (it has)
  instance?
- Do you want to use a small, standard, or large resource allocation?
- How aggressively do you want to auto-sleep and auto-shutdown instances? More aggressive setups
  will save money, but be more annoying for end users.

For this example, we'll make a `postinstall` instance with a `standard` resource allocation, but you
can of course change these as you wish.

On the first panel, name the instance (we recommend using the convention
`[INSTALL_KIND]-[SIZE]-[CLUSTER_NAME]`, so this one `postinstall-standard-us-west`) and assign it to
the existing cluster:



Next, pick a resource and cost saving configuration that makes sense for you. In our experience, an
E2 instance is plenty for most day-to-day development work.



On the third panel, select `Custom container image`, then choose one of your `postinstall` images
(see below for how to generate these). Assign the default Compute Engine service account to it, then
choose to `Create a new empty persistent disk` for it. A balanced 10GB disk should be plenty for
shorter development stints:



On the last screen, set the appropriate IAM policy to allow access to the new machine for your
users. You should be ready to go!

## Creating and uploading an image artifact

Each Cloud Workstation configuration you create will need to use a Docker image, the `Dockerfile`s
and scripts for which are found in this directory. There are two kinds of images: `preinstall` (ie,
`./install.sh` has not run) and `postinstall` (it has). To proceed, you'll need to install the
`gcloud` and `docker` CLIs, then log in to both and set your project as the default:

```shell
$> export GCP_PROJECT_ID=my-gcp-project # Obviously, your project is likely to have another name.
$> gcloud auth application-default login
$> gcloud config set project $GCP_PROJECT_ID
$> gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://us-docker.pkg.dev
```

Next, you'll set some useful variables for this session (note: the code below assumes we are pushing
to the `sentry-workstation-us` repository defined above):

```shell
$> export GROUP=sentry-workstation # Pick whatever name you like here.
$> export REGION=us # Name your regions as you see fit - these are not tied to GCP definitions.
$> export PHASE=pre # Use `pre` for preinstall, `post` for postinstall.
$> export REPO=${GROUP}-${REGION}
$> export IMAGE_TAG=${GROUP}/${PHASE}install:latest
$> export IMAGE_URL=us-docker.pkg.dev/${GCP_PROJECT_ID}/${REPO}/${GROUP}/${PHASE}install:latest
```
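With the example values above, the derived variables expand as follows (a sketch with the assumed `my-gcp-project` project ID; your names will differ):

```shell
GROUP=sentry-workstation
REGION=us
PHASE=pre
GCP_PROJECT_ID=my-gcp-project # assumed project name

# Derived exactly as in the exports above.
REPO=${GROUP}-${REGION}
IMAGE_TAG=${GROUP}/${PHASE}install:latest
IMAGE_URL=us-docker.pkg.dev/${GCP_PROJECT_ID}/${REPO}/${GROUP}/${PHASE}install:latest

echo "$IMAGE_TAG"  # sentry-workstation/preinstall:latest
echo "$IMAGE_URL"  # us-docker.pkg.dev/my-gcp-project/sentry-workstation-us/sentry-workstation/preinstall:latest
```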

Now, build the docker image of your choosing:

```shell
$> docker build -t ${IMAGE_TAG} -f ./${PHASE}install/Dockerfile .
$> docker image ls | grep "${GROUP}/${PHASE}install"
```

Finally, upload it to the Google Cloud Artifact Registry repository of interest:

```shell
$> docker tag ${IMAGE_TAG} ${IMAGE_URL}
$> docker push ${IMAGE_URL}
```

## Creating and connecting to a workstation

Once the Google Cloud services are configured and the docker images uploaded per the instructions
above, end users should be able to list the configurations available to them using `sentry
workstations configs`:

```shell
$> sentry workstations configs --project=$GCP_PROJECT_ID

NAME                          CLUSTER  REGION    MACHINE TYPE
postinstall-standard-us-west  us-west  us-west1  e2-standard-4
postinstall-large-us-west     us-west  us-west1  e2-standard-8
preinstall-standard-us-west   us-west  us-west1  e2-standard-4
preinstall-large-us-west      us-west  us-west1  e2-standard-8
```

They will then be able to create a new workstation using the `sentry workstations create` command,
connect to an existing one using `sentry workstations connect`, and use similar `sentry workstations
...` commands to disconnect from and destroy workstations, as well as to check the status of their
active connections. The `create` and `connect` commands will provide further instructions in-band on
how to connect their local VSCode to the remote server, use SSH to connect to the terminal
directly, add a GitHub access token, or access their running `self-hosted` instance via a web
browser.
4 workstation/commands.sh Normal file
@@ -0,0 +1,4 @@
sudo wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor >packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list'
sudo rm -f packages.microsoft.gpg
BIN workstation/docs/img/create_artifcat_registry.png Normal file (binary file not shown; 58 KiB)
BIN workstation/docs/img/create_cluster.png Normal file (binary file not shown; 62 KiB)
BIN workstation/docs/img/create_config_1.png Normal file (binary file not shown; 87 KiB)
BIN workstation/docs/img/create_config_2.png Normal file (binary file not shown; 103 KiB)
BIN workstation/docs/img/create_config_3.png Normal file (binary file not shown; 124 KiB)
Some files were not shown because too many files have changed in this diff.