
Set up alerts

By default, findings live in the Ctadel UI. Alerts push them to your team's tools so nobody has to log in to find out.

Channels

Three options, often used together:

Channel    Best for
Slack      Real-time team awareness, CRITICAL / HIGH severities.
Email      Daily / weekly digest, stakeholders who don't live in the UI.
Webhook    SIEM, ticketing, on-call escalation, anything programmatic.

You can configure all three on one project.

Slack

1. Create an incoming webhook

In Slack, go to Apps → Custom Integrations → Incoming Webhooks. Pick the channel (usually #sec-alerts or a per-team channel). Copy the webhook URL.
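
To sanity-check the URL before pasting it into Ctadel, you can POST a message to it directly; Slack's incoming webhooks accept a minimal JSON body. The URL below is a placeholder:

    import json
    import urllib.request

    # Placeholder: substitute the webhook URL you copied from Slack.
    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXXXXXX"

    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": "Ctadel connector smoke test"}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Slack answers an incoming-webhook POST with HTTP 200 and the body "ok".
        print(resp.status, resp.read().decode())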

2. Configure in Ctadel

  1. Settings → Connectors → Slack → Add channel.
  2. Paste the webhook URL.
  3. Optional: name the channel for routing (e.g. prod-critical, staging-info).
  4. Severity threshold: pick the minimum severity that triggers a Slack message (typical: HIGH).
  5. Detector filter (optional): restrict the connector to specific detectors. For example, route Toxic Combinations to a senior team channel.
  6. Save.

3. Test

Click Send test message on the connector page. A test card appears in your Slack channel.

Real alerts arrive within ~1 minute of the finding being created.

Email

1. Configure in Ctadel

  1. Settings → Connectors → Email.
  2. Add recipients (one per line).
  3. Pick the schedule:
    • Daily digest at a time of day (typical: 09:00 local).
    • Weekly digest on a day of week (typical: Monday).
    • Real-time CRITICAL only for incident response.
  4. Save.

2. Test

Click Send test email. A digest with synthetic data lands in the recipients' inboxes within a minute.

Generic webhook

For SIEM, ticketing, or anything custom.

1. Configure in Ctadel

  1. Settings → Connectors → Webhooks → Add webhook.
  2. URL: your endpoint.
  3. Shared secret (recommended): used to sign payloads. A receiver-side verification sketch follows this list.
  4. Severity threshold and detector filter, same as Slack.
  5. Payload format: native, Splunk, or Elastic Common Schema.
  6. Save.
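
On the receiving side, verify the signature before trusting a payload. The sketch below assumes Ctadel sends a hex-encoded HMAC-SHA256 of the raw request body in an X-Ctadel-Signature header; the actual scheme and header name aren't specified on this page, so adjust both to match what the connector sends:

    import hashlib
    import hmac
    import os

    # Shared secret as configured on the connector; read from the environment here.
    SECRET = os.environ["CTADEL_SHARED_SECRET"].encode("utf-8")

    def verify(raw_body: bytes, signature_header: str) -> bool:
        """Return True if the signature matches the request body."""
        expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
        # Constant-time comparison avoids leaking information through timing.
        return hmac.compare_digest(expected, signature_header)

Verify against the raw request bytes, not parsed-and-reserialized JSON: reserialization can reorder keys or change whitespace and break the comparison.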

2. Retry behaviour

Ctadel retries failed deliveries with exponential backoff. After several failures the delivery is dropped and a warning surfaces on the connector page.
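
The exact base delay, multiplier, and attempt cap aren't documented here, but as an illustration of the shape of an exponential backoff schedule (parameters assumed):

    # Assumed parameters for illustration: 30 s base delay, doubling each attempt.
    BASE_SECONDS = 30
    for attempt in range(5):
        print(f"attempt {attempt + 1}: next retry in {BASE_SECONDS * 2 ** attempt} s")
    # attempt 1: 30 s, attempt 2: 60 s, ... attempt 5: 480 s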

Routing per severity

In any connector, the Severity threshold field acts as a floor: findings at or above the threshold are delivered, everything below is filtered out. Common patterns (the floor semantics are sketched after this list):

  • Slack #sec-critical: threshold CRITICAL.
  • Slack #sec-alerts: threshold HIGH.
  • Email digest to the CISO: threshold MEDIUM (weekly).
  • PagerDuty webhook: threshold CRITICAL, detector filter to Toxic Combinations only.
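
A minimal sketch of the floor check, assuming the severity scale below (this page names INFO, MEDIUM, HIGH, and CRITICAL; LOW is a guess to complete the scale):

    # Higher index = more severe.
    SEVERITY_ORDER = ["INFO", "LOW", "MEDIUM", "HIGH", "CRITICAL"]

    def crosses_threshold(finding: str, threshold: str) -> bool:
        return SEVERITY_ORDER.index(finding) >= SEVERITY_ORDER.index(threshold)

    assert crosses_threshold("CRITICAL", "HIGH")    # delivered
    assert not crosses_threshold("MEDIUM", "HIGH")  # filtered out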

Common pitfalls

  • Alert fatigue. Don't route every detector to your busiest channel. Reserve the real-time channel for Toxic Combinations + CRITICAL CSPM/KSPM; send the rest to a digest.
  • Duplicate alerts. Each connector fires independently. Three connectors with threshold HIGH mean each HIGH finding sends three messages.
  • Slack rate limits. Slack throttles incoming webhooks at ~1/sec. A burst of 100 findings spaces out over a minute or two.

What's next