Agent Orchestration Loop
SignalPilot’s agent loop is the core engine that transforms single questions into comprehensive data investigations. Unlike single-shot AI completions, this loop maintains context, executes multi-step plans, and keeps analysts in control throughout the process.

How the Loop Works
The agent orchestration loop follows a 7-step process that repeats until the investigation is complete.

Step-by-Step Breakdown
Step 1: Context Resolution (Parallel)
When you ask a question, SignalPilot immediately fetches relevant context from multiple sources in parallel (a sketch of the concurrent fetch follows the source lists):
Internal Context Sources
- Kernel State: Current variables, dataframes, and their schemas
- Database Schemas: Table structures, column types, relationships
- Query History: Recent queries and their results
- Notebook Content: Code cells, markdown, and outputs
- Local Files: CSVs, configs, and related notebooks
External MCP Sources
- dbt: Model lineage, documentation, test results
- Slack: Recent discussions about relevant data or metrics
- Jira: Related tickets, deployment history
- Notion/GDocs: Design docs, runbooks, data dictionaries
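Conceptually, this fan-out can be sketched with asyncio, so total latency is bounded by the slowest source rather than the sum of all of them. The fetcher names below are hypothetical placeholders, not SignalPilot’s actual API:

```python
import asyncio

# Hypothetical fetchers standing in for SignalPilot's real context sources.
async def fetch_kernel_state() -> dict:
    return {"dataframes": ["orders_df"], "variables": ["conn"]}

async def fetch_db_schemas() -> dict:
    return {"tables": ["orders", "customers"]}

async def fetch_mcp(source: str) -> dict:
    return {source: f"results from {source}"}

async def resolve_context() -> dict:
    # asyncio.gather runs every fetch concurrently; the await completes
    # when the slowest source returns, not after the sum of all of them.
    parts = await asyncio.gather(
        fetch_kernel_state(),
        fetch_db_schemas(),
        fetch_mcp("dbt"),
        fetch_mcp("slack"),
        fetch_mcp("jira"),
    )
    merged: dict = {}
    for part in parts:
        merged.update(part)
    return merged

print(asyncio.run(resolve_context()))
```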
Step 2: System Prompt Construction
SignalPilot builds a comprehensive system prompt that includes:
- Base Instructions: How to investigate data problems
- Available Tools: What actions the agent can take
- Resolved Context: All fetched organizational knowledge
- Memory Recall: Relevant findings from past investigations
- Active Rules: Team-specific constraints and standards
The system prompt is dynamically constructed based on the question type. A revenue question includes financial schemas and business logic; an ML question includes model metadata and feature definitions.
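A toy sketch of how such a prompt might be assembled; the section names and the naive keyword routing are illustrative stand-ins for real question classification:

```python
# Hypothetical sketch of dynamic prompt assembly. All names here are
# illustrative, not SignalPilot's actual internals.

BASE_INSTRUCTIONS = "You are a data investigation agent. Work step by step."

def build_system_prompt(question: str, context: dict,
                        memories: list[str], rules: list[str]) -> str:
    sections = [BASE_INSTRUCTIONS]
    q = question.lower()
    if "revenue" in q or "mrr" in q:
        # Revenue questions pull in financial schemas and business logic.
        sections.append(f"Financial schemas: {context.get('finance_schemas')}")
    if "model" in q or "feature" in q:
        # ML questions pull in model metadata and feature definitions.
        sections.append(f"Model metadata: {context.get('model_metadata')}")
    if memories:
        sections.append("Relevant past findings:\n" + "\n".join(memories))
    if rules:
        sections.append("Team rules:\n" + "\n".join(rules))
    return "\n\n".join(sections)

print(build_system_prompt(
    "Why did MRR drop last week?",
    {"finance_schemas": ["billing.mrr", "billing.subscriptions"]},
    memories=["MRR dips every February because the month is shorter."],
    rules=["Never query production replicas during business hours."],
))
```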
Step 3: Plan Generation
The AI analyzes your question and generates a structured investigation plan. Plans break complex investigations into phases with manageable steps, and each phase has clear objectives that can be reviewed and modified.
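One plausible shape for such a plan, sketched with hypothetical dataclasses (SignalPilot’s internal representation may differ):

```python
from dataclasses import dataclass, field

# Hypothetical plan shape; field names are illustrative.

@dataclass
class Phase:
    objective: str                     # what this phase should establish
    steps: list[str] = field(default_factory=list)

@dataclass
class Plan:
    question: str
    phases: list[Phase] = field(default_factory=list)

plan = Plan(
    question="Why did MRR drop last week?",
    phases=[
        Phase("Establish the baseline", ["Query weekly MRR for the last 12 weeks"]),
        Phase("Isolate the segment", ["Break the drop down by plan and region"]),
        Phase("Test hypotheses", ["Check churn events", "Check failed payments"]),
    ],
)

# Each phase carries a clear objective an analyst can review before execution.
for i, phase in enumerate(plan.phases, 1):
    print(f"Phase {i}: {phase.objective} ({len(phase.steps)} steps)")
```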
Step 4: Analyst Approval Checkpoint
This is what makes SignalPilot “analyst-in-the-loop” rather than fully autonomous. Before executing any plan, SignalPilot presents it for your approval (a sketch of this checkpoint follows the options):
- ✅ Approve: Execute the plan as proposed
- ✏️ Modify: Adjust steps, add constraints, change approach
- ❌ Reject: Provide new direction or ask a different question
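A minimal sketch of the checkpoint logic, assuming a plain terminal prompt stands in for SignalPilot’s actual approval UI:

```python
# Hypothetical approval checkpoint: nothing executes until the analyst
# approves, modifies, or rejects the proposed plan.

def approval_checkpoint(plan: list[str]) -> list[str] | None:
    for i, step in enumerate(plan, 1):
        print(f"{i}. {step}")
    choice = input("Approve (a), modify (m), or reject (r)? ").strip().lower()
    if choice == "a":
        return plan                              # execute as proposed
    if choice == "m":
        extra = input("Add a constraint or step: ")
        return plan + [extra]                    # analyst adjusts the plan
    return None                                  # rejected: ask for new direction

approved = approval_checkpoint(["Query weekly MRR", "Segment by region"])
if approved is None:
    print("Plan rejected; awaiting new direction.")
else:
    print("Executing:", approved)
```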
Step 5: Tool Execution (Parallel)
Once approved, SignalPilot executes plan steps using available tools:

| Tool | Purpose | Example |
|---|---|---|
| execute_code | Run Python/SQL in notebook cells | Query database, transform data |
| write_cell | Create or update notebook cells | Add visualizations, documentation |
| read_schema | Introspect database structure | Discover tables and relationships |
| search_context | Query MCP sources | Find Slack discussions, Jira tickets |
| create_plot | Generate visualizations | Charts, graphs, dashboards |
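Dispatch over these tools can be pictured as a registry of callables. The stub implementations below are illustrative only; the real tools would talk to the kernel, database, and MCP servers:

```python
# Hypothetical tool registry and dispatcher.

def execute_code(code: str) -> str:
    return f"executed: {code}"

def read_schema(table: str) -> str:
    return f"schema for {table}"

def search_context(query: str) -> str:
    return f"matches for {query!r}"

TOOLS = {
    "execute_code": execute_code,
    "read_schema": read_schema,
    "search_context": search_context,
}

def dispatch(tool_calls: list[dict]) -> list[str]:
    # Each call names a registered tool; unknown tools fail loudly rather
    # than silently skipping a plan step.
    results = []
    for call in tool_calls:
        fn = TOOLS.get(call["tool"])
        if fn is None:
            raise ValueError(f"unknown tool: {call['tool']}")
        results.append(fn(call["arg"]))
    return results

print(dispatch([
    {"tool": "read_schema", "arg": "billing.mrr"},
    {"tool": "execute_code", "arg": "df.groupby('week').mrr.sum()"},
]))
```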
Step 6: Completion Check
After each execution phase, SignalPilot evaluates the following (a toy rubric follows the list):
- Goal Achievement: Did we answer the original question?
- Data Quality: Are results statistically significant?
- Completeness: Are there unexplored hypotheses?
- Confidence Level: How certain is the conclusion?
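A toy version of this check; the state keys and thresholds are illustrative, not SignalPilot’s actual values:

```python
# Hypothetical completion rubric scored after each execution phase.

def check_completion(state: dict) -> bool:
    answered = state.get("answered_question", False)       # goal achievement
    confident = state.get("confidence", 0.0) >= 0.8        # confidence level
    no_open_hypotheses = not state.get("open_hypotheses")  # completeness
    sufficient_data = state.get("sample_size", 0) >= 30    # crude quality proxy
    return answered and confident and no_open_hypotheses and sufficient_data

state = {"answered_question": True, "confidence": 0.9,
         "open_hypotheses": [], "sample_size": 1200}
print(check_completion(state))  # True -> proceed to memory persistence
```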
Step 7: Memory Persistence
When the investigation completes, SignalPilot saves the following (a sample record is sketched after the list):
- Findings: What was discovered and concluded
- Validated Assumptions: Business logic that was confirmed
- Data Quirks: Anomalies or gotchas discovered
- Solution Patterns: Approaches that worked for this type of question
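One way to picture the persisted record, using hypothetical field names that mirror the categories above and an append-only JSONL store:

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical memory record; the real persistence layer may differ.

@dataclass
class MemoryRecord:
    question: str
    findings: list[str]
    validated_assumptions: list[str]
    data_quirks: list[str]
    solution_patterns: list[str]

def persist(record: MemoryRecord, path: str = "memory.jsonl") -> None:
    # An append-only store keeps the full history of investigations so
    # later sessions can recall what was already learned.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

persist(MemoryRecord(
    question="Why did MRR drop last week?",
    findings=["Drop confined to EU annual plans."],
    validated_assumptions=["MRR excludes one-time fees."],
    data_quirks=["billing.mrr lags 24h behind raw events."],
    solution_patterns=["Segment by plan and region before testing churn."],
))
```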
Learn More: Multi-Session Memory
See how memory makes future investigations faster and more accurate
Why “Long-Running” Matters
Traditional AI interactions are single-shot: you ask, AI responds, done. This fails for data investigations because:

| Single-Shot Limitations | Long-Running Solution |
|---|---|
| Loses context between queries | Maintains full investigation state |
| Can’t adapt based on findings | Adjusts approach as data reveals patterns |
| No memory of what was tried | Tracks hypotheses tested and ruled out |
| Manual context management | Automatic context orchestration |
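To make the contrast concrete, here is a toy long-running loop that carries investigation state across iterations. Every name is a hypothetical stand-in for SignalPilot internals, stubbed so the example runs and terminates:

```python
# Toy long-running loop: state persists across iterations instead of being
# lost between single-shot completions. All logic here is illustrative.

def generate_plan(question: str, state: dict) -> list[str]:
    # Re-planning sees the accumulated state, so the approach adapts to
    # what earlier iterations found or ruled out.
    if not state["findings"]:
        return ["profile the data", "form hypotheses"]
    return ["test remaining hypotheses", "summarize conclusion"]

def execute_tools(plan: list[str], state: dict) -> None:
    state["findings"].extend(f"ran: {step}" for step in plan)
    state["hypotheses_ruled_out"].append("seasonality")

def investigation_complete(state: dict) -> bool:
    return len(state["findings"]) >= 4        # toy stopping rule

def run_investigation(question: str) -> dict:
    state = {"question": question, "findings": [], "hypotheses_ruled_out": []}
    while not investigation_complete(state):
        plan = generate_plan(question, state)  # adapts to prior findings
        execute_tools(plan, state)             # results accumulate in state
    return state

print(run_investigation("Why did MRR drop last week?"))
```

Because state survives each pass through the loop, re-planning reacts to what has already been tried rather than starting from scratch.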
Performance Characteristics
| Metric | Typical Value | Notes |
|---|---|---|
| Context resolution | 1-3 seconds | Parallel fetching from all sources |
| Plan generation | 2-5 seconds | Depends on question complexity |
| Tool execution | Varies | Database queries, code execution |
| Completion check | <1 second | Lightweight evaluation |
| Full investigation | 2-10 minutes | Complex questions with multiple phases |
Best Practices
Ask Clear Questions
Specific questions lead to focused investigation plans. “Why did MRR drop?” is better than “Something seems wrong with revenue.”
Review Plans Carefully
Take time at the approval checkpoint to ensure the plan addresses your actual question. Modify if needed.
Let the Loop Complete
Resist the urge to interrupt mid-investigation. The loop is designed to reach conclusions, not just surface data.