Automation modes control how much autonomy your agents have. Configure when agents need human approval, when they should pause for input, and how to handle failures—balancing automation efficiency with human oversight.

Understanding Automation Modes

Beam supports three automation modes that determine agent behavior at workflow checkpoints:
  • Fully Autonomous - Agents execute end-to-end without human intervention
  • Human-in-the-Loop (HITL) - Agents pause at designated checkpoints for human review and approval
  • Hybrid - Autonomous execution with selective human oversight at critical steps
Inbox System: All tasks requiring human attention route to a centralized Inbox across all agents in your workspace.
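The three modes boil down to one routing decision: should this step pause for a human? A minimal sketch of that decision, assuming hypothetical names (`AutomationMode`, `needs_human_review` are illustrative, not Beam's API):

```python
from enum import Enum

class AutomationMode(Enum):
    FULLY_AUTONOMOUS = "fully_autonomous"
    HITL = "human_in_the_loop"
    HYBRID = "hybrid"

def needs_human_review(mode: AutomationMode, step_is_critical: bool) -> bool:
    """Decide whether a workflow step should pause for human review."""
    if mode is AutomationMode.FULLY_AUTONOMOUS:
        return False          # execute end-to-end, no pauses
    if mode is AutomationMode.HITL:
        return True           # pause at every designated checkpoint
    # Hybrid: pause only at steps flagged as critical
    return step_is_critical
```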

Human-in-the-Loop (HITL)

Inbox Overview

The Inbox consolidates all agent tasks requiring human attention in one location.
Inbox Features:
  • Task Count: Total pending items requiring attention (24 in example)
  • Task List: All paused workflows across different agents
  • Task Type Indicators: Visual icons showing consent required, input needed, or failed execution
  • Agent Context: Which agent created the task
  • Timestamp: How long ago the task was created
Task Types in Inbox:

Consent Required

Agent needs approval before executing sensitive action

Input Required

Agent missing variable or data needed to continue workflow

Failed Execution

Workflow stopped due to error requiring human intervention

Consent Requests

Agents pause before executing actions that require explicit human permission.
Consent Workflow:
  1. Agent Pauses: Workflow stops at consent checkpoint node
  2. Context Provided: Shows execution steps completed so far and proposed action
  3. Human Reviews: Examines agent reasoning, data extracted, and draft output
  4. Decision Made: Approve to continue or Reject to stop workflow
Example Use Case: An agent drafts a customer email response after extracting invoice data and checking payment records. Before sending, it requires human approval to ensure tone and accuracy.
Approval Actions:
  • Accept: Agent proceeds to next workflow step with approved action
  • Reject: Workflow terminates with rejection reason logged
  • Provide Feedback: Optional context on why rejected for agent learning
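The three approval actions above can be sketched as a single decision handler. This is a hypothetical shape (`ConsentDecision`, `resolve_consent`, and the audit-log format are illustrative assumptions, not Beam's implementation):

```python
from dataclasses import dataclass

@dataclass
class ConsentDecision:
    approved: bool
    feedback: str = ""   # optional context on why rejected, for agent learning

def resolve_consent(decision: ConsentDecision, audit_log: list) -> str:
    """Apply a reviewer's decision at a consent checkpoint."""
    if decision.approved:
        audit_log.append("approved")
        return "continue"    # agent proceeds to the next workflow step
    audit_log.append(f"rejected: {decision.feedback or 'no reason given'}")
    return "terminate"       # workflow stops; rejection reason is logged
```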

Input Requests

Agents pause when missing required data or variables to complete workflow.
Input Request Workflow:
  1. Agent Identifies Gap: Cannot infer or extract required variable
  2. Execution Pauses: Workflow stops at step needing the data
  3. Input Form Presented: Human sees what’s needed with context
  4. Data Provided: User fills in missing information
  5. Agent Resumes: Continues from pause point without restarting
Example: An agent classifying debt reminder tier needs a “days past due” variable. After fetching debt details from the database, it cannot determine this value and requests human input.
Input Form Elements:
  • Question: Clear prompt for what data is needed
  • Context: Why agent needs this information
  • Input Field: Form field matching expected data type
  • Continue Button: Submits response and resumes workflow
Common Scenarios:
  • Variable not available in trigger data or memory
  • Conditional logic requires human judgment
  • External system unavailable, manual lookup needed
  • Ambiguous data requiring clarification
  • Custom business rules not encoded in workflow
Configuration:
  • Use “Request Input” node in workflow
  • Define variable name and data type
  • Provide clear question text
  • Set optional timeout and default value
Minimize Input Requests:
  • Automate data fetching where possible
  • Use integrations to pull missing data
  • Configure memory files with reference data
  • Set default values for optional fields
Clear Communication:
  • Explain why input is needed
  • Show what agent has done so far
  • Indicate how response will be used
  • Provide examples of valid inputs
Validation:
  • Define expected data type and format
  • Validate input before resuming workflow
  • Provide error messages for invalid data
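The validation guidance above can be sketched as a type-aware check that runs before the workflow resumes. A minimal illustration (the function name and return convention are assumptions, not Beam's API):

```python
def validate_input(value: str, expected_type: str):
    """Validate a human-provided value before resuming the workflow.

    Returns (ok, parsed_value) on success or (False, error_message) so the
    form can show a clear error for invalid data.
    """
    try:
        if expected_type == "number":
            return True, float(value)
        if expected_type == "boolean":
            normalized = value.strip().lower()
            if normalized in ("true", "yes"):
                return True, True
            if normalized in ("false", "no"):
                return True, False
            raise ValueError("expected yes/no")
        # Default: plain text, trimmed
        return True, value.strip()
    except ValueError as err:
        return False, f"Invalid {expected_type}: {err}"
```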

Managing Inbox Tasks

Task Details View

Click any inbox item to see complete execution context and take action.
Detail Panel Sections:
  • Task Header - Task ID, type (Consent/Input/Failed), and agent name
  • Task Activity - Timeline of creation and updates
  • Task Execution - Visual workflow showing completed steps and where the agent paused
  • Context Documents - Files, emails, or data the agent processed
  • Action Required - Form, approval buttons, or re-run option

Workflow Continuation

After providing input or approval, agents resume exactly where they paused.
Seamless Resumption:
  • No workflow restart required
  • Completed steps not re-executed
  • New input/approval incorporated
  • Execution continues to next node
Example Flow:
  • Steps 1-4: Agent detects email, extracts info, fetches data from Airtable, classifies tier
  • Step 5: Paused for human input (days past due)
  • After Input: Agent proceeds directly to Step 6 with provided data
  • Steps 6-9: Completes remaining workflow steps
This prevents redundant processing and maintains execution efficiency.
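Resumption from a pause point can be sketched as skipping the already-completed steps and merging the human input into the context before continuing. A simplified illustration (the step and context shapes are assumptions):

```python
def resume_workflow(steps, completed_count, context):
    """Resume from the pause point: completed steps are not re-executed."""
    for step in steps[completed_count:]:
        context = step(context)
    return context

# Hypothetical flow: steps 1-2 ran before the pause; the human then supplies
# "days_past_due", and only the remaining step executes.
steps = [
    lambda ctx: {**ctx, "email_detected": True},
    lambda ctx: {**ctx, "debt_details": "fetched"},
    lambda ctx: {**ctx, "tier": "Tier 2" if ctx["days_past_due"] > 30 else "Tier 1"},
]
paused_context = {"email_detected": True, "debt_details": "fetched"}
paused_context["days_past_due"] = 45   # human input merged in at the pause point
result = resume_workflow(steps, completed_count=2, context=paused_context)
```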
Handle multiple inbox items efficiently:
Filtering:
  • Filter by task type (Consent, Input, Failed)
  • Filter by agent
  • Filter by age or priority
Bulk Actions:
  • Approve multiple similar consent requests
  • Mark multiple failures for batch re-run
  • Delegate tasks to team members
Prioritization:
  • Sort by creation date (oldest first)
  • Sort by agent criticality
  • Custom priority tags
Stay informed of tasks requiring attention:
Email Notifications:
  • New consent request created
  • Input needed for critical workflow
  • Failed execution requiring review
Slack Integration:
  • Post to dedicated channel for urgent items
  • @mention specific team members
  • Include direct link to inbox task
In-App Badges:
  • Inbox count indicator in navigation
  • Real-time updates as tasks arrive
  • Desktop notifications for high-priority items
Handle execution failures in the inbox:
Review Failure:
  • See which step failed and why
  • Check error messages from tools/integrations
  • Examine input data that caused failure
Recovery Options:
  • Re-run: Retry from beginning with same inputs
  • Modify & Re-run: Edit trigger data before retry
  • Fix Workflow: Update agent flow to prevent future failures
  • Mark Resolved: Document failure and close task
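The recovery options above map naturally onto a small dispatch over the failed task's state. A hypothetical sketch (action names and the task dictionary shape are illustrative, not Beam's data model):

```python
def handle_failure(action: str, task: dict) -> dict:
    """Apply one of the inbox recovery options to a failed task."""
    if action == "rerun":
        task["status"] = "queued"      # retry from the beginning, same inputs
    elif action == "modify_rerun":
        # Edit trigger data before retry, if the reviewer supplied edits
        task["inputs"] = task.get("edited_inputs", task["inputs"])
        task["status"] = "queued"
    elif action == "resolve":
        task["status"] = "closed"      # failure documented, task closed
    else:
        raise ValueError(f"unknown recovery action: {action}")
    return task
```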
Failure Prevention:
  • Add error handling nodes
  • Improve validation criteria
  • Enhance tool configurations
  • Update prompts for edge cases

Configuring Automation Modes

Setting Up HITL Nodes

Add human checkpoints to your workflow for controlled execution.
Consent Node Configuration:
1. Add Consent Node

Insert the “Require Consent” node before the action requiring approval. Position it after preparation steps (data extraction, analysis) but before the execution step (send email, update database).
2. Configure Consent Context

Define what information human reviewer sees:
  • Previous step outputs to review
  • Proposed action to approve
  • Reasoning for recommendation
  • Deadline for response (optional)
3. Set Approval Logic

Configure workflow paths:
  • If Approved: Continue to next action step
  • If Rejected: Route to alternative handling or end workflow
  • If Timeout: Default action or escalation
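The three workflow paths above amount to a routing table keyed on the checkpoint outcome. A minimal sketch under assumed names (`route_consent` and the outcome strings are illustrative):

```python
def route_consent(outcome: str, timeout_action: str = "escalate") -> str:
    """Map a consent checkpoint outcome onto the configured workflow path."""
    routes = {
        "approved": "next_step",      # continue to the next action step
        "rejected": "end_workflow",   # or route to alternative handling
        "timeout": timeout_action,    # configured default action or escalation
    }
    if outcome not in routes:
        raise ValueError(f"unknown outcome: {outcome}")
    return routes[outcome]
```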
Input Node Configuration:
1. Add Input Request Node

Insert the “Request Input” node where the variable is needed. Place it at the point where the agent cannot reliably infer or extract the required data.
2. Define Input Schema

Specify what data to collect:
  • Variable name matching workflow reference
  • Data type (text, number, date, boolean)
  • Question text prompting for input
  • Validation rules (optional)
3. Handle Response

Configure timeout and defaults:
  • Timeout duration (e.g., 24 hours)
  • Default value if no response
  • Escalation if critical path blocked
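Timeout-with-default handling can be sketched as polling for a response until a deadline, then falling back. A simplified, synchronous illustration (a real system would persist the pause rather than block; `wait_for_input` and `poll` are assumed names):

```python
import time

def wait_for_input(poll, timeout_s: float, default=None):
    """Poll for a human response; fall back to the default value on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        value = poll()            # returns None while no response has arrived
        if value is not None:
            return value
        time.sleep(0.01)
    return default                # timeout reached: use configured default
```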

Automation Strategy

Choose the right balance of autonomy vs. oversight for each workflow.
Fully Autonomous:
  • Use When: High confidence in agent accuracy, low-risk actions, well-tested workflows
  • Examples: Data extraction, classification, reporting, simple notifications
  • Benefits: Maximum efficiency, 24/7 operation, instant processing
  • Risks: Errors propagate without review, missed edge cases
Human-in-the-Loop:
  • Use When: Sensitive actions, compliance requirements, learning phase, complex decisions
  • Examples: Customer communications, financial transactions, legal documents, hiring decisions
  • Benefits: Quality control, compliance adherence, human judgment
  • Tradeoffs: Slower processing, requires human availability, potential bottlenecks
Hybrid Approach:
  • Use When: Most production scenarios balancing efficiency and control
  • Examples: Agent extracts/analyzes autonomously, human approves final action
  • Benefits: 80% automation with 20% oversight on critical steps
  • Configuration: Consent nodes at decision points, input nodes for exceptions
Start with oversight, then remove it as confidence grows:
Phase 1: Full HITL (Weeks 1-2)
  • Consent required for all actions
  • Review every agent decision
  • Identify patterns in approvals/rejections
Phase 2: Selective HITL (Weeks 3-4)
  • Remove consent for consistently approved actions
  • Keep oversight on edge cases
  • Monitor evaluation scores
Phase 3: Autonomous with Exceptions (Weeks 5+)
  • Fully autonomous for standard cases
  • HITL only for low-confidence predictions
  • Periodic spot-checks for quality
Metrics to Track:
  • Approval rate by action type
  • Time to resolve inbox items
  • Error rate in autonomous vs HITL modes
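Approval rate by action type, the first metric above, can be computed from logged consent decisions. A hypothetical sketch (the decision-log shape is an assumption):

```python
from collections import defaultdict

def approval_rate(decisions):
    """Compute approval rate per action type from (action_type, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for action_type, was_approved in decisions:
        totals[action_type] += 1
        approved[action_type] += int(was_approved)
    return {t: approved[t] / totals[t] for t in totals}
```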
Distribute inbox management across the team:
Role-Based Routing:
  • Finance team handles payment approvals
  • Customer success reviews client communications
  • Legal approves compliance-sensitive actions
Assignment Rules:
  • Auto-assign based on agent or task type
  • Round-robin distribution
  • Skill-based routing
Handoff Workflows:
  • First approver reviews, second approver finalizes
  • Escalation path for complex decisions
  • Delegation when team member unavailable
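Of the assignment rules above, round-robin distribution is the simplest to illustrate. A hypothetical sketch (`make_round_robin` and the task shape are assumptions, not Beam's routing API):

```python
import itertools

def make_round_robin(reviewers):
    """Return an assignment function that cycles tasks across a team."""
    cycle = itertools.cycle(reviewers)

    def assign(task: dict) -> dict:
        task["assignee"] = next(cycle)   # each new task goes to the next reviewer
        return task

    return assign
```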

Best Practices

Make agent reasoning clear for human reviewers:
Show Work:
  • Display data extraction results
  • Explain classification logic
  • Highlight confidence scores
  • Surface validation checks performed
Provide Context:
  • Link to source documents
  • Show related historical tasks
  • Display relevant memory/knowledge base references
Reduce inbox bottlenecks:
SLA Targets:
  • Set response time goals (e.g., 2 hours for consent)
  • Monitor actual vs target
  • Alert when approaching deadline
Reduce Friction:
  • Pre-fill forms with best guesses
  • Provide one-click approvals for simple cases
  • Batch similar requests for efficient review
Fallback Handling:
  • Auto-approve after timeout for low-risk items
  • Escalate to manager for high-stakes decisions
  • Queue for next business hours if after-hours
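Monitoring actual response times against the SLA target can be sketched as a simple filter over pending inbox items, surfacing the most overdue first (the task shape and the 2-hour default are illustrative assumptions):

```python
def sla_breaches(tasks, sla_hours: float = 2.0):
    """Return inbox items older than the SLA target, most overdue first."""
    overdue = [t for t in tasks if t["age_hours"] > sla_hours]
    return sorted(overdue, key=lambda t: t["age_hours"], reverse=True)
```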
Use inbox data to improve automation:
Approval Analytics:
  • Track approval/rejection rates by task type
  • Identify consistently approved categories
  • Find patterns in rejections
Automation Opportunities:
  • High approval rate (95%+) → Remove consent requirement
  • Repeated input requests → Add data source integration
  • Common failures → Improve error handling
Continuous Improvement:
  • Review inbox monthly for optimization
  • Gradually increase autonomy for proven patterns
  • Add checkpoints when quality degrades
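The automation-opportunity rules above (95%+ approval → remove consent; repeated input requests → add an integration) can be expressed as threshold checks over inbox analytics. A hypothetical sketch (function name, input shapes, and thresholds are illustrative):

```python
def automation_opportunities(approval_rates, input_request_counts,
                             consent_threshold: float = 0.95,
                             input_threshold: int = 10):
    """Flag workflow optimizations suggested by inbox analytics."""
    return {
        # Consistently approved action types: candidates for removing consent
        "remove_consent": sorted(
            t for t, r in approval_rates.items() if r >= consent_threshold),
        # Frequently requested variables: candidates for a data-source integration
        "add_integration": sorted(
            v for v, n in input_request_counts.items() if n >= input_threshold),
    }
```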
Maintain records for regulated workflows:
Decision Logging:
  • Record all consent approvals/rejections
  • Capture human input provided
  • Store failure resolution actions
Audit Trail:
  • Who made decision and when
  • Reasoning/feedback provided
  • Original vs modified data
  • Outcome of continued workflow
Retention:
  • Archive completed inbox tasks
  • Export for compliance reporting
  • Link to final workflow execution records

Next Steps