Agents bring reasoning capabilities that allow automation to handle ambiguity and change, while deterministic workflows provide reliability for structured, repeatable, rule-based tasks. Today’s automation can combine both: agent steps excel at interpretation, decision-making, and unstructured data, whereas deterministic steps ensure consistency and speed. However, agents still face constraints, including hallucinations, higher cost, execution limits, and maintenance challenges. To mitigate these constraints, Micro-Agents focus on narrow, well-defined tasks and integrate with Blink’s deterministic workflows. This hybrid approach delivers reliable execution, controls cost by using agents selectively, preserves performance through fast deterministic steps, and simplifies maintenance with safer debugging and updates. The following sections provide guidance on when to use agents so you can design workflows that are both effective and efficient.

When to use Micro-Agents in Workflows

1. Reasoning & Summarization

Ideal to use when the task requires interpretation or synthesis of complex information.
Unlike deterministic workflows, which follow fixed rules and outputs, Micro-Agents are used when a workflow requires reasoning, analysis, or summarization. They can process unstructured or complex information and generate meaningful insights that the workflow can then use for follow-up actions, as sketched after the breakdown below.
Workflow Breakdown:
  1. Trigger: An incident is raised and the investigation workflow begins.
  2. Micro-Agent Task:
    • The agent reviews the incident timeline, log entries, and enrichment data.
    • It identifies key events, interprets patterns, and summarizes findings in natural language.
    • The agent suggests potential remediation steps (e.g., isolate endpoint, block IP, reset credentials).
  3. Follow-up Workflow Action:
    • The workflow takes the agent’s summary and automatically adds it to the Case Management ticket as the analyst’s notes.
    • This ensures structured documentation of the incident while keeping human analysts in the loop.
  4. Outcome:
    • The workflow blends automation with reasoning: repetitive tasks remain automated, while the Micro-Agent provides human-like interpretation and actionable recommendations.
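To make the pattern concrete, here is a minimal Python sketch of this step. It is only an illustration: run_micro_agent and add_case_note are hypothetical stand-ins for the agent step and the Case Management action, not Blink’s actual API.

```python
def run_micro_agent(prompt: str, context: dict) -> str:
    """Hypothetical agent step: returns a natural-language summary."""
    raise NotImplementedError("replace with the platform's agent action")

def add_case_note(case_id: str, note: str) -> None:
    """Hypothetical Case Management action."""
    raise NotImplementedError("replace with the platform's ticket action")

def handle_incident(case_id: str, timeline: list, logs: list, enrichment: dict) -> None:
    # Agent step: interpretation, pattern analysis, and remediation suggestions.
    summary = run_micro_agent(
        prompt="Summarize the incident timeline and suggest remediation steps.",
        context={"timeline": timeline, "logs": logs, "enrichment": enrichment},
    )
    # Deterministic step: structured documentation added to the Case Management ticket.
    add_case_note(case_id, note=summary)
```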

2. Unstructured / Ambiguous Inputs

Ideal to use for handling messy or unfamiliar input structures (such as incoming logs or alert payloads), especially when specific data must be extracted from new or unexpected formats. When the structure is already known, a deterministic workflow is recommended to ensure accuracy and consistency.
Micro-Agents are particularly valuable when workflows need to interpret or extract meaning from unstructured or inconsistent inputs. Unlike deterministic workflows, which rely on a clean schema and fixed mappings, Micro-Agents can adapt to varied formats and return usable, structured data for the workflow to act on, as sketched after the breakdown below.
Workflow Breakdown:
  1. Trigger: A new alert or log payload is ingested into the workflow.
  2. Format Check:
  • The workflow evaluates the type and format of the payload.
  • If the format matches an existing schema, the workflow proceeds with a deterministic path designed for that specific fixed input.
  • If the format is unknown, inconsistent, or ambiguous, the payload is routed to a Micro-Agent.
  3. Micro-Agent Task (for unknown/unstructured inputs):
  • The agent scans the payload, even if it is messy, inconsistent, or deeply nested.
  • It extracts the relevant field (e.g., an IP address), whether it appears as src_ip, ip_address, or buried in nested JSON.
  • The agent outputs the extracted IP(s) in a clean, standardized format.
  4. Follow-up Workflow Action:
  • The workflow consumes the standardized IP address from either the deterministic workflow or the Micro-Agent.
  • Example: block the identified IP across the firewall or EDR platform.
  5. Outcome:
  • Known formats are processed quickly and accurately through deterministic workflows.
  • Unknown or inconsistent formats are still usable thanks to the adaptive capabilities of Micro-Agents.
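A minimal routing sketch of this breakdown, assuming the payload carries a schema hint; extract_ip_with_agent and block_ip are hypothetical stand-ins for the agent step and the firewall/EDR action.

```python
KNOWN_SCHEMAS = {"firewall_v1": "src_ip", "edr_v2": "ip_address"}  # example fixed field mappings

def extract_ip_with_agent(payload: dict) -> str:
    """Hypothetical agent step: locates the IP in a messy or nested payload."""
    raise NotImplementedError("replace with the platform's agent action")

def block_ip(ip: str) -> None:
    """Hypothetical containment action (firewall or EDR)."""
    raise NotImplementedError("replace with the platform's blocking action")

def handle_alert(payload: dict) -> None:
    schema = payload.get("schema")
    if schema in KNOWN_SCHEMAS:
        # Deterministic path: fixed field mapping for a known format.
        ip = payload[KNOWN_SCHEMAS[schema]]
    else:
        # Agent path: unknown, inconsistent, or deeply nested format.
        ip = extract_ip_with_agent(payload)
    block_ip(ip)  # same follow-up action regardless of which path produced the IP
```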

3. Exploratory / Open-Ended Work

Ideal to use when steps are not predefined or the workflow may branch in different directions.
Micro-Agents can perform correlation and reasoning across different systems, making them ideal for investigations that require connecting signals from multiple sources. Unlike deterministic workflows that rely on predefined fields, the agent can interpret varied log formats and synthesize findings into a meaningful assessment, as sketched after the breakdown below.
Workflow Breakdown:
  1. Trigger: A suspicious login event is detected (e.g., unusual location, time, or behavior).
  2. Micro-Agent Task:
  • The agent collects and reviews logs across identity providers (Okta, Azure AD, Google Workspace) and endpoints.
  • It correlates the login activity with endpoint events (e.g., device posture, EDR signals).
  • The agent determines whether the login pattern suggests a possible account compromise.
  3. Follow-up Workflow Action:
  • The workflow appends the agent’s investigation summary to the Case Management record.
  • If compromise is likely, it triggers automated containment actions (e.g., force MFA reset, block session).
  4. Outcome:
  • The agent provides human-like investigation capabilities, surfacing insights from multiple noisy data sources.
  • The workflow ensures those insights are immediately actionable and documented.
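A minimal sketch of this investigation pattern; the log collectors, the agent call, and the containment actions below are hypothetical stand-ins for the corresponding workflow steps, not real APIs.

```python
# Hypothetical stand-ins for workflow actions; replace with real platform steps.
def collect_identity_logs(user: str) -> list: raise NotImplementedError   # Okta / Azure AD / Google Workspace / EDR
def assess_with_agent(user: str, logs: list) -> dict: raise NotImplementedError  # agent correlation step
def append_case_note(case_id: str, note: str) -> None: raise NotImplementedError
def force_mfa_reset(user: str) -> None: raise NotImplementedError
def block_active_sessions(user: str) -> None: raise NotImplementedError

def investigate_login(user: str, case_id: str) -> None:
    logs = collect_identity_logs(user)
    # Agent step: correlate login activity with endpoint signals and summarize.
    assessment = assess_with_agent(user, logs)  # e.g. {"summary": "...", "compromise_likely": True}
    append_case_note(case_id, assessment["summary"])
    # Deterministic follow-up: containment only when the agent flags likely compromise.
    if assessment.get("compromise_likely"):
        force_mfa_reset(user)
        block_active_sessions(user)
```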

4. Contextual Decision-Making

Ideal to use when a process requires evaluating tradeoffs or choosing the best next step from several options.
Micro-Agents are ideal when workflows require more nuanced, context-aware decisions instead of relying on rigid thresholds. They can weigh multiple signals together and reason about the likelihood of malicious activity, enabling more accurate outcomes than simple rule-based checks, as sketched after the breakdown below.
Workflow Breakdown:
  1. Trigger: A login event is detected.
  2. Micro-Agent Task:
  • The agent evaluates multiple contextual signals including:
    • User behavior patterns
    • Device posture
    • Login time and location
    • Text from related logs
  • Instead of a binary threshold (e.g., x > 0.9), the agent reasons about the combination of factors and determines whether the login is suspicious.
  3. Follow-up Workflow Action:
    • If suspicious, flag the login and create a Case Management entry or force MFA challenge.
    • If benign, allow the workflow to continue without escalation.
  4. Outcome:
    • The workflow benefits from context-aware decision-making, reducing false positives while still catching true anomalies.
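A minimal sketch contrasting the rigid threshold with the context-aware decision; classify_login_with_agent is a hypothetical agent step, and the signal names are illustrative.

```python
def classify_login_with_agent(signals: dict) -> str:
    """Hypothetical agent step: weighs the signals together, returns 'suspicious' or 'benign'."""
    raise NotImplementedError("replace with the platform's agent action")

def rigid_rule(risk_score: float) -> bool:
    # Deterministic alternative: a single signal against a fixed cutoff.
    return risk_score > 0.9

def evaluate_login(event: dict) -> str:
    signals = {
        "behavior": event.get("behavior_profile"),
        "device_posture": event.get("device_posture"),
        "time_and_location": (event.get("login_time"), event.get("geo")),
        "related_log_text": event.get("log_excerpt"),
    }
    verdict = classify_login_with_agent(signals)
    # "suspicious" -> create a Case Management entry or force an MFA challenge;
    # "benign"     -> let the workflow continue without escalation.
    return verdict
```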

When to use Fully Deterministic Workflows

1. Well-Defined, Structured Processes

Ideal to use when the task can be fully described as a fixed sequence of steps with clear inputs and outputs.
This workflow ensures that whenever a high-severity alert is raised, a Jira ticket is automatically created with consistent fields, reducing manual effort and maintaining compliance; a minimal sketch follows the breakdown below.
Workflow Breakdown:
  1. Trigger: High-severity alert occurs in the system.
  2. Data Extraction: Capture alert details (name, source system, timestamp, description).
  3. Ticket Creation: Automatically populate Jira fields:
  • Project: Security Incidents
  • Issue Type: Bug/Incident
  • Priority: Highest/Blocker
  • Summary: High-Severity Alert: [Alert Name]
  • Description: Alert details and context
  4. Notification (Optional): Notify security team via Slack or email.
  5. Logging: Record alert ID and ticket ID for traceability.
Outcome: Every high-severity alert consistently produces a properly formatted Jira ticket.
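A minimal sketch of the ticket-creation step, assuming the standard Jira REST API v2 issue endpoint; the base URL, credentials, project key, and issue type names are placeholders to adjust for your instance.

```python
import requests

def create_jira_ticket(alert: dict, base_url: str, auth: tuple) -> str:
    fields = {
        "project": {"key": "SEC"},              # placeholder key for the Security Incidents project
        "issuetype": {"name": "Incident"},
        "priority": {"name": "Highest"},
        "summary": f"High-Severity Alert: {alert['name']}",
        "description": (
            f"Source: {alert['source']}\n"
            f"Timestamp: {alert['timestamp']}\n\n"
            f"{alert['description']}"
        ),
    }
    resp = requests.post(f"{base_url}/rest/api/2/issue", json={"fields": fields}, auth=auth)
    resp.raise_for_status()
    ticket_key = resp.json()["key"]
    return ticket_key  # record alongside the alert ID for traceability
```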

2. Clear, Rule-Based Decisions

Ideal to use when the logic can be expressed through strict conditions or thresholds.
This workflow automatically performs remediation actions whenever the detection confidence exceeds a defined threshold, ensuring predictable and repeatable outcomes; a minimal sketch follows the breakdown below.
Workflow Breakdown:
  1. Trigger: Alert is detected.
  2. Evaluate Condition: Check if Detection Confidence > 0.9.
  3. Remediation Actions: Automatically execute predefined actions such as:
  • Isolating affected systems
  • Blocking malicious IPs or users
  • Quarantining suspicious files
  4. Logging: Record all remediation actions for auditing.
Outcome: High-confidence alerts are handled automatically, minimizing response time and human error.
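A minimal sketch of the rule-based decision; the remediation helpers are hypothetical stand-ins for predefined workflow actions.

```python
CONFIDENCE_THRESHOLD = 0.9

# Hypothetical stand-ins for predefined remediation actions.
def isolate_system(host_id: str) -> None: raise NotImplementedError
def block_ip(ip: str) -> None: raise NotImplementedError
def quarantine_file(file_hash: str) -> None: raise NotImplementedError
def log_remediation(alert_id: str, actions: list) -> None: raise NotImplementedError

def handle_detection(alert: dict) -> None:
    # Strict, repeatable condition: no interpretation required.
    if alert["detection_confidence"] > CONFIDENCE_THRESHOLD:
        isolate_system(alert["host_id"])
        block_ip(alert["source_ip"])
        quarantine_file(alert["file_hash"])
        log_remediation(alert["id"], ["isolate", "block_ip", "quarantine"])
```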

3. High-Volume, Cost-Sensitive Workflows

Ideal for tasks that run frequently and must remain efficient and low-cost.
This workflow enriches every incoming alert with additional context from threat intelligence and geo-IP data, enabling consistent analysis while remaining efficient; a minimal sketch follows the breakdown below.
Workflow Breakdown:
  1. Trigger: Runs automatically for every incoming alert.
  2. Data Extraction: Capture fields such as source IP, domain, or alert type.
  3. Enrichment Actions: Query threat intel sources and perform geo-IP lookups.
  4. Attach Enrichment Data: Add structured enrichment fields (e.g., Threat Level, Malware Family, Country of Origin).
  5. Logging: Track enrichment actions for traceability and prevent duplication.
Outcome: All alerts are consistently enriched, supporting faster investigation and decision-making.
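A minimal enrichment sketch; the threat intel and geo-IP lookups are hypothetical stand-ins for whichever sources the workflow is configured to query.

```python
# Hypothetical lookup actions; replace with the configured intel and geo-IP steps.
def lookup_threat_intel(indicator: str) -> dict: raise NotImplementedError
def lookup_geoip(ip: str) -> dict: raise NotImplementedError

def enrich_alert(alert: dict) -> dict:
    indicator = alert.get("source_ip") or alert.get("domain", "")
    intel = lookup_threat_intel(indicator)
    geo = lookup_geoip(alert.get("source_ip", ""))
    # Attach structured fields that downstream steps can rely on deterministically.
    alert["enrichment"] = {
        "threat_level": intel.get("threat_level"),
        "malware_family": intel.get("malware_family"),
        "country_of_origin": geo.get("country"),
    }
    return alert
```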

4. Predictable & Consistent Outcomes

Ideal to use when outcomes should always follow the same fixed sequence.
This workflow ensures that user offboarding always follows the same fixed sequence, removing access to systems in a predictable and auditable way; a minimal sketch follows the breakdown below.
Workflow Breakdown:
  1. Trigger: User offboarding event is initiated (e.g., HR system update).
  2. Disable Okta Account: Revoke authentication immediately.
  3. Revoke Cloud Credentials: Remove access to AWS, Azure, GCP, etc.
  4. Remove Slack Access: Remove from workspace and channels.
  5. Logging: Ensure all actions are recorded and the sequence is strictly enforced.
Outcome: Users are offboarded securely, consistently, and without risk of skipped steps.
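A minimal sketch of the fixed sequence; each helper is a hypothetical stand-in for the corresponding Okta, cloud, and Slack actions.

```python
# Hypothetical stand-ins for the offboarding actions.
def disable_okta_account(user: str) -> None: raise NotImplementedError
def revoke_cloud_credentials(user: str) -> None: raise NotImplementedError  # AWS / Azure / GCP
def remove_slack_access(user: str) -> None: raise NotImplementedError
def log_step(user: str, step_name: str) -> None: raise NotImplementedError

OFFBOARDING_SEQUENCE = [disable_okta_account, revoke_cloud_credentials, remove_slack_access]

def offboard_user(user: str) -> None:
    # Fixed order; any failure stops the sequence so no later step runs out of order.
    for step in OFFBOARDING_SEQUENCE:
        step(user)
        log_step(user, step.__name__)
```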

5. Structured Data and Clear Schemas

Ideal to use when inputs and outputs follow a stable, well-defined structure that the workflow can rely on deterministically.
This workflow triggers whenever a new user is added in Okta and automatically provisions them in downstream systems like Salesforce and Slack, ensuring structured data and predictable results; a minimal sketch follows the breakdown below.
Workflow Breakdown:
  1. Trigger: Okta “New user created” event occurs. Fields include: Email, Name, Role.
  2. Data Extraction: Capture and validate user details from Okta.
  3. Salesforce Provisioning: Create the user with correct email, name, and role.
  4. Slack Provisioning: Create the user in Slack with correct workspace access and channels.
  5. Logging: Record provisioning actions and flag any failures for review.
Outcome: Every new Okta user is automatically provisioned in multiple systems with consistent and accurate data.
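A minimal sketch of schema-driven provisioning; the event shape and the provisioning helpers are assumptions, not the actual Okta, Salesforce, or Slack APIs.

```python
REQUIRED_FIELDS = ("email", "name", "role")

# Hypothetical provisioning actions.
def provision_salesforce(user: dict) -> None: raise NotImplementedError
def provision_slack(user: dict) -> None: raise NotImplementedError
def log_provisioning(user: dict, status: str) -> None: raise NotImplementedError

def handle_new_okta_user(event: dict) -> None:
    # Deterministic validation against the known schema; a missing field fails fast.
    user = {field: event[field] for field in REQUIRED_FIELDS}
    provision_salesforce(user)
    provision_slack(user)
    log_provisioning(user, status="provisioned")
```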