
Documentation Index

Fetch the complete documentation index at: https://docs.struct.ai/llms.txt

Use this file to discover all available pages before exploring further.

Struct integrates with your existing stack to receive alerts, query logs and metrics, and deliver investigation reports where your team already works.
Category | Connections
Work & Collaboration | GitHub (required), Slack, Linear, Asana
Observability Platforms | Sentry, Datadog, Grafana, Prometheus, Loki, SumoLogic, Render
Cloud Log Providers | AWS CloudWatch, Azure Monitor, Google Cloud Logging
Container & Orchestration | Kubernetes
All observability and cloud log integrations are read-only. Struct never modifies your monitoring configuration, log groups, or workspaces.

Work & Collaboration

Integration | Required | Purpose
GitHub | Yes | Codebase context and PR creation
Slack | No | Receive investigations, trigger on-demand queries
Linear | No | Trigger from issues, track progress
Asana | No | Trigger from tasks (requires configuration)

GitHub

Required. GitHub is how Struct accesses your codebase during investigations and creates pull requests.
1. Go to Connections
2. Click Connect next to GitHub
3. Install the Struct GitHub App
4. Select which repositories to enable

With GitHub connected, Struct can:
  • Cross-reference errors with recent commits and deployments
  • Search your codebase for relevant code paths
  • Create branches and pull requests when a fix is approved
Start with a single repository to test the workflow before expanding access.

Slack

Connect Slack to receive investigation reports and trigger on-demand investigations.
1. Go to Connections
2. Click Connect next to Slack
3. Authorize the Struct app in your workspace
4. Invite @Struct to channels where alerts are posted

Triggering Investigations

Mention Struct in any channel where it’s been invited:
@Struct investigate high error rate on the payments API
Struct responds in-thread with a structured investigation report.
You can ask follow-up questions directly in the thread—Struct maintains full context throughout the conversation.

Auto-Investigation

Struct can also automatically investigate messages in your configured investigation channels. Configure keyword filters to control which alerts trigger investigations, or let AI classification decide. See Auto-Investigations for setup details.

Linear

Connect Linear to trigger investigations from issues and receive updates as comments.
1. Go to Connections
2. Click Connect next to Linear
3. Authorize and select teams to monitor

Assign an issue to Struct or @tag it in a comment to trigger an investigation. Struct posts its findings as a comment on the issue.

Auto-Investigation

Enable auto-investigation to have Struct automatically investigate new issues created in your default team. Configure keyword filters to focus on specific issue types. See Auto-Investigations for setup details.

Asana

Connect Asana to trigger investigations from tasks.
1. Go to Connections
2. Click Connect next to Asana
3. Authorize and select projects to monitor

Automatic investigation triggers are not enabled by default. Enable auto-investigation and configure keyword filters in Connections → Asana → Manage.
See Auto-Investigations for setup details.

Observability Platforms

Struct queries your observability platforms during investigations to pull error details, metrics, traces, and alert context. The more sources connected, the richer the investigation.
Platform | What Struct Pulls
Sentry | Stack traces, breadcrumbs, error frequency, affected users
Datadog | Metrics, monitors, APM traces, event context
Grafana | Dashboard data, alert rules, annotations
Prometheus | Metrics, alerting rules, time-series data
Loki | Log streams, label-based log queries
SumoLogic | Log analytics, search results
Render | Service logs, deploy events
To connect an observability platform, go to Connections and click Connect next to the platform.

Sentry

Recommended. Sentry is one of the most impactful integrations—it gives Struct direct access to error details and stack traces.
1. Go to Connections
2. Click Connect next to Sentry
3. Follow the instructions to complete setup

What Struct uses from Sentry

  • Stack traces — Pinpoints the exact code path that threw the error
  • Breadcrumbs — Reconstructs the sequence of events leading to the error
  • Error frequency — Determines if this is a new issue or a recurring pattern
  • Affected users — Assesses impact scope

Example

When a Sentry alert fires, Struct automatically:
  1. Pulls the full stack trace and breadcrumbs
  2. Cross-references the failing code with recent commits
  3. Checks if this error pattern has appeared in previous investigations
  4. Delivers a root cause report to Slack

Datadog

1. Go to Connections
2. Click Connect next to Datadog
3. Follow the instructions to complete setup

What Struct uses from Datadog

  • Metrics — CPU, memory, latency, error rates, and custom metrics
  • APM traces — Distributed traces across services
  • Monitors — Alert context and monitor history
  • Events — Deployment events, configuration changes
Struct can query Datadog metrics in natural language during an investigation. For example: “Show me p99 latency for the checkout service over the last 2 hours.”
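Under the hood, a natural-language request like that resolves to an ordinary Datadog metrics query. A rough sketch of what the translated query could look like — the metric and tag names below are illustrative, not Struct's actual output:

```text
p99:trace.http.request.duration{service:checkout}
```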

Grafana

1. Go to Connections
2. Click Connect next to Grafana
3. Follow the instructions to complete setup

What Struct uses from Grafana

  • Dashboard data — Queries panels for relevant metrics
  • Alert rules — Understands what thresholds triggered
  • Annotations — Correlates deployments and incidents with metric changes
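Annotation reads of this kind go through Grafana's HTTP API. A minimal sketch, assuming a service-account token and a Grafana host (both placeholders):

```shell
# List annotations in a time window (from/to are epoch milliseconds)
curl -s -H "Authorization: Bearer $GRAFANA_TOKEN" \
  "https://grafana.example.com/api/annotations?from=1700000000000&to=1700003600000"
```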

Prometheus

1. Go to Connections
2. Click Connect next to Prometheus
3. Follow the instructions to complete setup

What Struct uses from Prometheus

  • Metrics — Queries PromQL-compatible metrics during investigations
  • Alerting rules — Reads active and pending alerts for context
  • Time-series data — Analyzes trends around the time of the incident
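For instance, during an error-rate investigation Struct might issue a PromQL query along these lines (the metric and label names are illustrative, following common naming conventions):

```promql
# Per-service rate of 5xx responses over the last 5 minutes
sum by (service) (rate(http_requests_total{status=~"5.."}[5m]))
```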

Loki

1. Go to Connections
2. Click Connect next to Loki
3. Follow the instructions to complete setup

What Struct uses from Loki

  • Log streams — Queries logs by labels, service, and time range
  • Log context — Pulls surrounding log lines for errors

Example query Struct might run

{service="api-gateway"} |= "error" | json | status >= 500
If you use Grafana with Loki as a data source, connect both for the richest investigation context.

SumoLogic

1. Go to Connections
2. Click Connect next to SumoLogic
3. Follow the instructions to complete setup

What Struct uses from SumoLogic

  • Log search — Runs targeted searches across your log data
  • Analytics — Aggregations and recurring patterns from log analytics
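A search Struct might run could resemble the following (the source category is a placeholder for your own):

```text
_sourceCategory=prod/api error
| timeslice 5m
| count by _timeslice, _sourceHost
```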

Render

1. Go to Connections
2. Click Connect next to Render
3. Follow the instructions to complete setup

What Struct uses from Render

  • Service logs — Pulls application logs from your Render services
  • Deploy events — Correlates errors with recent deployments

Cloud Log Providers

Struct queries your cloud logs in real time during investigations—pulling relevant log entries, filtering by time range, and correlating log patterns with errors and alerts.
To connect a cloud log provider, go to Connections and follow the setup instructions for your provider.

AWS CloudWatch

1. Go to Connections
2. Click Connect next to AWS CloudWatch
3. Follow the instructions to complete setup

What Struct queries

  • Log groups — Application and service logs
  • Log streams — Filtered by time range and keywords
  • CloudWatch Insights — Structured queries across log groups

Example

During an investigation into a Lambda timeout, Struct might query:
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 50
Grant access to specific log groups rather than all logs if you want to limit scope.

Azure Monitor

1. Go to Connections
2. Click Connect next to Azure Monitor
3. Follow the instructions to complete setup

What Struct queries

  • Log Analytics workspaces — Application logs, custom logs, diagnostic logs
  • KQL queries — Structured queries across Azure log data
  • Resource logs — Logs from Azure services (App Service, Functions, AKS, etc.)

Example

Investigating a spike in Azure App Service errors, Struct might run:
AppServiceHTTPLogs
| where ScStatus >= 500
| where TimeGenerated > ago(1h)
| summarize count() by CsUriStem, bin(TimeGenerated, 5m)
| order by count_ desc

Google Cloud Logging

1. Go to Connections
2. Click Connect next to Google Cloud Logging
3. Follow the instructions to complete setup

What Struct queries

  • Cloud Logging entries — Application logs, request logs, system logs
  • Log-based queries — Filtered by resource, severity, and time
  • GKE and Cloud Run logs — Container and serverless application logs

Example

Investigating a Cloud Run service crash, Struct might query:
resource.type="cloud_run_revision"
severity>=ERROR
timestamp>="2024-01-15T10:00:00Z"
The more log sources Struct can access, the faster and more accurate root cause analysis becomes. Connect all relevant providers.

How Struct Uses Cloud Logs

During every investigation, Struct:
  1. Identifies relevant log sources based on the alert and affected services
  2. Builds targeted queries to pull error logs, warnings, and contextual entries
  3. Correlates log patterns with metrics, traces, and code changes
  4. Highlights key log lines in the investigation report with timestamps and context
Ensure your cloud provider credentials have sufficient permissions to read logs. Investigations will be less accurate without log access.

Container & Orchestration

Struct can query your Kubernetes clusters directly to pull pod logs, events, and resource status during investigations.
Platform | What Struct Pulls
Kubernetes | Pod logs, events, namespace resources

Kubernetes

1. Go to Connections
2. Click Connect next to Kubernetes
3. Choose ServiceAccount Token (recommended) or kubeconfig YAML
4. Follow the setup instructions below

What Struct uses from Kubernetes

  • Pod logs — Pulls application logs directly from pods
  • Events — Surfaces Kubernetes events (restarts, scheduling issues, OOM kills)
  • Namespace resources — Lists pods and their status
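These correspond to ordinary read-only operations; the equivalent kubectl commands look roughly like this (namespace and workload names are illustrative):

```shell
# Application logs from a workload over the last hour
kubectl logs deploy/checkout -n prod --since=1h

# Recent events (restarts, scheduling failures, OOM kills)
kubectl get events -n prod --sort-by=.lastTimestamp

# Pod status across the namespace
kubectl get pods -n prod -o wide
```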
The recommended ServiceAccount Token approach uses a Secret-bound token that works with all Kubernetes clusters, including GKE, GKE Autopilot, EKS, and AKS:
# Create the ServiceAccount
kubectl create serviceaccount log-reader -n default

# Create RBAC permissions
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: log-reader
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log", "namespaces", "events"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: log-reader-binding
subjects:
  - kind: ServiceAccount
    name: log-reader
    namespace: default
roleRef:
  kind: ClusterRole
  name: log-reader
  apiGroup: rbac.authorization.k8s.io
EOF

# Create a Secret-bound token (does not expire)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: log-reader-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: log-reader
type: kubernetes.io/service-account-token
EOF

# Retrieve the token
kubectl get secret log-reader-token -n default -o jsonpath='{.data.token}' | base64 -d
Secret-bound tokens are long-lived and don’t require rotation. This approach works on all clusters including GKE Autopilot, which enforces a 48-hour max on kubectl create token.

Getting the API Server URL

Run this command to find your cluster’s API server URL:
kubectl cluster-info | grep "control plane"

CA Certificate (Optional)

If your cluster uses a self-signed CA, you’ll need to provide the CA certificate:
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d

Need an integration we don’t support yet? New integrations can be built in days — reach out at help@struct.ai. Jira, ClickUp, and PagerDuty integrations are coming soon.