# Skill System
The skill system provides modular, composable capabilities that the AI agent can use to generate responses. Skills are organized by category and can be enabled/disabled per client.
## Architecture

### Skill Definition

Each skill is defined in a `SKILL.md` file with YAML frontmatter:
```markdown
---
name: skill_name
default_enabled: true
description: Brief description for LLM selection
category: response_ending
required: false
emoji: "🎯"
always_on: false
---

# Instructions

Detailed instructions for the LLM...
```
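A loader for this format can be sketched as a small frontmatter splitter. This is a hypothetical helper, not the project's actual loader, and it only handles flat key: value pairs (a real implementation would use a YAML parser for lists such as trigger_examples):

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    """Split SKILL.md into (frontmatter metadata, instruction body)."""
    _, frontmatter, body = text.split("---", 2)
    meta: dict[str, object] = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        value = value.strip().strip('"')
        # Coerce YAML booleans; everything else stays a string.
        meta[key.strip()] = {"true": True, "false": False}.get(value, value)
    return meta, body.strip()
```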
### Skill Examples

Skills use two types of examples to guide LLM behavior:
#### 1. Trigger Examples (Frontmatter)

The `trigger_examples` field in the YAML frontmatter helps the skill selector LLM decide WHEN to select a skill. Only the first 3 examples are shown in the selection prompt.
```yaml
trigger_examples:
  - "We're a fintech company"    # Industry signal
  - "What ROI can I expect?"     # Results signal
  - "Do you have case studies?"  # Proof signal
```
Best practices for `trigger_examples`:

- Use semantic variety (different ways to express the same intent)
- Avoid language-specific phrases; prefer patterns the LLM can generalize
- Put the most important trigger type first (only the first 3 are shown to the LLM)
- Match the categories described in the `description` field
#### 2. Instruction Examples (Body)

The `# examples` section in the skill body shows the answer LLM HOW to use the skill correctly once selected. Use GOOD/BAD patterns:
```markdown
# examples

**GOOD - Natural transition to demo:**

**Visitor:** "We're looking to automate our onboarding"
**Agent:** "Automating onboarding can reduce manual work...
👇 Feel free to connect with a Sales expert using the button below."

**BAD - Missing emoji:**

**Agent:** "Feel free to connect with our Sales team."
**Why bad:** Missing 👇 emoji at the beginning of the last sentence.
```
Best practices for instruction examples:
- Always include a **Why bad:** explanation for BAD examples
- Show realistic visitor/agent exchanges
- Cover edge cases and common mistakes
- Keep examples generic (not client-specific) in global skills
## Skill Categories

| Category | Purpose | Examples |
|---|---|---|
| `response_ending` | How to end responses | demo_offer, content_gating, context_gathering, clean_ending |
| `response_handling` | Content generation | pricing, competitors, support |
| `system` | Auto-injected context | knowledge_retrieval, visitor_profile, conversation_history |
### Category Definition (CATEGORY.md)

Each category folder contains a `CATEGORY.md` file that configures how the category behaves in both the skill selector and the answer writer.

See `backend/apps/shared_data/prompts/website-agent/skills/response_ending/CATEGORY.md` for a full example.
#### CATEGORY.md Fields

Fields are consumed by two different LLMs:

| Field | Consumer | Purpose |
|---|---|---|
| `display_name` | Answer Writer | Heading text in assembled prompt |
| `answer_instructions` | Answer Writer | Instructions shown before skills in main prompt |
| `uncertainty_guidance` | Answer Writer | Shown when multiple skills from the category are selected |
| `selector_description` | Skill Selector | Brief description to help the LLM understand the category |
| `selector_guidance` | Skill Selector | Detailed rules for when to select skills |
| `order` | Both | Sort order for categories (lower = first) |
| `hidden_from_llm` | Both | If true, skills in the category are auto-injected by code |
#### How Categories Affect Skill Selection

The `selector_guidance` field is critical for LLM decision-making. Use it to:
- Define defaults: "clean_ending is the DEFAULT for most responses"
- Create decision frameworks: List conditions for each skill in priority order
- Set exclusivity rules: "content_gating > context_gathering when visitor states industry"
- Handle uncertainty: "When uncertain, select BOTH and let the answer LLM decide"
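Putting the fields and guidance together, a CATEGORY.md for an ending category might look like this sketch (field names match the table above; every value is illustrative, not the real configuration):

```yaml
display_name: "Response Ending"
answer_instructions: "Choose exactly one way to end the response."
uncertainty_guidance: "If several ending skills are present, prefer the most specific one."
selector_description: "Skills that control how the agent closes its reply."
selector_guidance: |
  clean_ending is the DEFAULT for most responses.
  Select demo_offer only when the visitor shows clear buying intent.
  When uncertain, select BOTH candidates and let the answer LLM decide.
order: 10
hidden_from_llm: false
```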
## Always-On Skills

Skills with `always_on: true` in their YAML frontmatter are always injected into the answer prompt, regardless of whether the LLM skill selector picked them. This is useful when the skill selector cannot determine whether a skill is needed (e.g., because the relevant context isn't available yet at selection time) but the answer LLM can decide at generation time.
When to use `always_on`:
- The skill depends on RAG context that isn't available during skill selection
- The answer LLM should conditionally apply the skill based on its own judgment
- The skill adds a fallback behavior (e.g., offering a CTA when the bot can't answer)
Example: The `book_a_call` skill for augment.org uses `always_on: true` because the skill selector runs before RAG retrieval and cannot know whether the knowledge base lacks information. The answer LLM sees the RAG context and decides whether to include the advisor booking link.
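Because `always_on` bypasses the selector, the injection itself reduces to a set union. A minimal sketch, assuming skill metadata is available as dicts (hypothetical helper; the real injection happens during selection post-processing):

```python
def inject_always_on(selected: set[str], skills: dict[str, dict]) -> set[str]:
    """Union the LLM-selected skills with every always_on skill."""
    return selected | {name for name, meta in skills.items() if meta.get("always_on")}
```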
## Skill Selection Pipeline

The `skill_selector` node uses a two-phase pipeline:

**Phase 1: LLM Selection**

The LLM selects skills based on conversation context and skill descriptions.

**Phase 2: Post-Processing Rules**

Mechanical rules enforce priority and exclusivity:

- **Ending Rules** (priority-based, first match wins):
    1. `demo_offer`: Priority 1, removes all other ending skills
    2. `content_gating`: Priority 2, removes `context_gathering` / `clean_ending`
    3. `context_gathering`: Priority 3, can add `clean_ending`
- **System Rules** (all apply additively):
    - `visitor_profile`: injected if valid company data exists
    - `conversation_history`: injected if turn > 0
    - `knowledge_retrieval`: always injected (RAG context)
    - Always-on skills: any skill with `always_on: true` for the current client
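The ending rules can be sketched as a first-match-wins function. This is an illustration of the described behavior, not the actual code in skill_selector.py:

```python
def apply_ending_rules(selected: set[str]) -> set[str]:
    """Sketch of the priority-based ending rules (first match wins)."""
    if "demo_offer" in selected:
        # Priority 1: demo_offer removes all other ending skills.
        return selected - {"content_gating", "context_gathering", "clean_ending"}
    if "content_gating" in selected:
        # Priority 2: content_gating removes context_gathering / clean_ending.
        return selected - {"context_gathering", "clean_ending"}
    if "context_gathering" in selected:
        # Priority 3: context_gathering can add clean_ending.
        return selected | {"clean_ending"}
    return selected
```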
## Skill Toggles

Skills can be enabled or disabled per client via the `skill_toggles` column in the `agent_config` table.
### Default Behavior

Skills are ENABLED by default unless they have `default_enabled: false` in their `SKILL.md` metadata.
### Configuration

```sql
-- Enable content_gating for a client
UPDATE agent_config
SET skill_toggles = '{"content_gating": {"enabled": true}}'
WHERE site_domain = 'example.com';
```
### Lookup Cascade

1. `agent_config.skill_toggles` (explicit toggle)
2. `SKILL_DEFAULT_ENABLED` (from `SKILL.md` metadata)
3. `True` (default: skills are enabled unless explicitly disabled)
## Content Gating Skill

The `content_gating` skill enables email capture by offering valuable content in exchange.
### How It Works

**2-Step Flow:**

1. **Turn N (Offer):** The agent answers the question and offers content with the 💌 emoji
2. **Turn N+1 (Capture):** The agent asks for the visitor's email if they accept
### Availability States

| State | Condition | Behavior |
|---|---|---|
| `allowed` | Enabled, no restrictions | Skill can be selected |
| `discouraged` | Within cooldown period | LLM discouraged from selecting |
| `forbidden` | Disabled OR email captured | Skill removed from selection |
### Cooldown Logic

After offering content (Turn N):

- **Turn N+1:** Allowed (must handle the visitor's response)
- **Turn N+2 to N+3:** Discouraged (cooldown active)
- **Turn N+4 and later:** Allowed (cooldown passed)
### Qualification Criteria

Optional criteria that trigger content gating offers:

```sql
UPDATE agent_config
SET qualification_criteria = '["number of support agents", "current tool"]'
WHERE site_domain = 'example.com';
```
When visitors mention these criteria, the LLM is more likely to offer relevant content.
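As an illustration of how such criteria could be matched mechanically, a naive substring check might look like the following (hypothetical helper; in practice the system relies on the LLM's judgment rather than exact matching):

```python
def mentions_criteria(message: str, criteria: list[str]) -> bool:
    """Naive check: does the visitor message mention any qualification criterion?"""
    text = message.lower()
    return any(criterion.lower() in text for criterion in criteria)
```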
## Adding New Skills

### 1. Create SKILL.md

```markdown
---
name: my_skill
default_enabled: true
description: When to use this skill
category: response_ending
emoji: "🎯"
always_on: false  # Set to true to bypass LLM selector and always inject
---

# Instructions

Your skill instructions here...
```
### 2. Add Post-Processing Rules (if needed)

For ending skills with priority/exclusivity rules, update `skill_selector.py`:

```python
ENDING_RULES = [
    _rule_demo_offer,
    _rule_content_gating,
    _rule_my_skill,  # Add new rule
    _rule_context_gathering,
]
```
### 3. Add Default Toggle (if disabled by default)

```python
# In agent_config.py
SKILL_DEFAULT_ENABLED: dict[str, bool] = {
    "content_gating": False,
    "my_skill": False,  # Requires explicit enablement
}
```
### 4. Test
## Related Documentation
- Prompt & Skill Development Workflow - Local development and Langfuse sync
- IXChat Package - Full chatbot architecture