Set up alerts
By default, findings live in the Ctadel UI. Alerts push them to your team's tools so nobody has to log in to find out.
Channels
Three options, often used together:
| Channel | Best for |
|---|---|
| Slack | Real-time team awareness, CRITICAL / HIGH severities. |
| Email | Daily / weekly digest, stakeholders who don't live in the UI. |
| Webhook | SIEM, ticketing, on-call escalation, anything programmatic. |
You can configure all three on one project.
Slack
1. Create an incoming webhook
In Slack, go to Apps, Custom Integrations, Incoming Webhooks. Pick the channel
(usually #sec-alerts or per-team). Copy the webhook URL.
2. Configure in Ctadel
- Settings, Connectors, Slack, Add channel.
- Paste the webhook URL.
- Optional: name the channel for routing (e.g. `prod-critical`, `staging-info`).
- Severity threshold: pick the minimum severity that triggers a Slack message (typical: `HIGH`).
- Detector filter (optional): only some detectors. For example, route Toxic Combinations to a senior team channel.
- Save.
3. Test
Click Send test message in the connector page. A test card appears in your Slack channel.
Real alerts arrive within ~1 minute of the finding being created.
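Under the hood, a real-time alert is just a JSON POST to the incoming webhook URL you pasted. A minimal sketch of that transport; the payload shape here is an illustrative assumption, not Ctadel's actual card format:

```python
import json
import urllib.request

def build_slack_payload(finding: dict) -> dict:
    """Illustrative payload; Ctadel's real alert cards carry more fields."""
    return {"text": f"[{finding['severity']}] {finding['title']}"}

def post_to_slack(webhook_url: str, finding: dict) -> int:
    """POST the payload to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_slack_payload(finding)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Knowing this is plain HTTP is useful when debugging: if the test message never arrives, you can replay the POST yourself with `curl` against the same URL.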
Email
1. Configure in Ctadel
- Settings, Connectors, Email.
- Add recipients (one per line).
- Pick the schedule:
- Daily digest at a time of day (typical: 09:00 local).
- Weekly digest on a day of week (typical: Monday).
- Real-time CRITICAL only for incident response.
- Save.
2. Test
Click Send test email. A digest with synthetic data lands in the recipients' inboxes within a minute.
Generic webhook
For SIEM, ticketing, or anything custom.
1. Configure in Ctadel
- Settings, Connectors, Webhooks, Add webhook.
- URL: your endpoint.
- Shared secret (recommended): used to sign payloads.
- Severity threshold and detector filter, same as Slack.
- Payload format: native, Splunk, or Elastic Common Schema.
- Save.
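If you set a shared secret, your receiving endpoint should verify each delivery before trusting it. A sketch of that check; the header name and scheme (hex HMAC-SHA256 over the raw request body) are assumptions, so confirm the exact format on the connector page:

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the raw body and compare in constant time.

    `signature_header` is the value of the signature header on the request
    (assumed here to be a hex HMAC-SHA256 digest).
    """
    expected = hmac.new(secret.encode("utf-8"), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

Always verify against the raw bytes as received, before any JSON parsing or re-serialization, or the digests won't match.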
2. Retry behaviour
Ctadel retries failed deliveries with exponential backoff. After several failures the delivery is dropped and a warning surfaces on the connector page.
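To make the shape of exponential backoff concrete, here is a sketch; the base delay, cap, and attempt count are illustrative assumptions, not Ctadel's documented schedule:

```python
def backoff_schedule(attempts: int, base: float = 2.0, cap: float = 300.0) -> list[float]:
    """Delay in seconds before each retry: base * 2**n, capped.

    The 2 s base and 300 s cap are assumptions for illustration only.
    """
    return [min(base * (2 ** n), cap) for n in range(attempts)]
```

The practical consequence: transient outages on your endpoint (a deploy, a blip) are absorbed, but an endpoint that stays down for the whole retry window loses that delivery, so watch the connector page for warnings.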
Routing per severity
In any connector, the Severity threshold field acts as a floor. Common patterns:
- Slack `#sec-critical`: threshold `CRITICAL`.
- Slack `#sec-alerts`: threshold `HIGH`.
- Email digest to the CISO: threshold `MEDIUM` (weekly).
- PagerDuty webhook: threshold `CRITICAL`, detector filter to Toxic Combinations only.
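The floor semantics can be sketched in a few lines: a connector fires whenever the finding's severity is at or above its threshold. Connector names and the severity ladder here are illustrative:

```python
SEVERITY_ORDER = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

def connectors_for(severity: str, connectors: dict[str, str]) -> list[str]:
    """Return connectors whose threshold (a floor) the finding meets.

    `connectors` maps a connector name to its severity threshold.
    """
    rank = SEVERITY_ORDER.index(severity)
    return [name for name, floor in connectors.items()
            if rank >= SEVERITY_ORDER.index(floor)]
```

Note that a single finding can match several connectors at once, which is exactly the duplicate-alert pitfall described below.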
Common pitfalls
- Alert fatigue. Don't route every detector to your busiest channel. Route Toxic Combinations + CRITICAL CSPM/KSPM to the noisy channel; the rest to digest.
- Duplicate alerts. Each connector fires independently. Three connectors with threshold `HIGH` mean each HIGH finding sends three messages.
- Slack rate limits. Slack throttles incoming webhooks at ~1/sec. A burst of 100 findings spaces out over a minute or two.