StackOne AI provides a unified interface for accessing various SaaS tools through AI-friendly APIs.
- Unified interface for multiple SaaS tools
- AI-friendly tool descriptions and parameters
- Tool Calling: Direct method calling with `tool.call()` for intuitive usage
- MCP-backed Dynamic Discovery: Fetch tools at runtime via `fetch_tools()` with provider, action, and account filtering
- Advanced Tool Filtering:
  - Glob pattern filtering with patterns like `"salesforce_*"` and exclusions like `"!*_delete_*"`
  - Provider and action filtering
  - Multi-account support
- Semantic Search: AI-powered tool discovery using natural language queries
- Search Tool: Callable tool discovery for agent loops via `get_search_tool()`
- Integration with popular AI frameworks:
- OpenAI Functions
- LangChain Tools
- CrewAI Tools
- LangGraph Tool Node
- Python 3.10+
```bash
pip install 'stackone-ai[mcp]'

# Or with uv
uv add 'stackone-ai[mcp]'

# Install with CrewAI examples
pip install 'stackone-ai[mcp,examples]'
# or
uv add 'stackone-ai[mcp,examples]'
```

```python
from stackone_ai import StackOneToolSet

# Initialize with API key
toolset = StackOneToolSet()  # Uses STACKONE_API_KEY env var
# Or explicitly: toolset = StackOneToolSet(api_key="your-api-key")

# Get HRIS-related tools with glob patterns
tools = toolset.fetch_tools(actions=["bamboohr_*"], account_ids=["your-account-id"])

# Use a specific tool with the call method
employee_tool = tools.get_tool("bamboohr_get_employee")

# Call with keyword arguments
employee = employee_tool.call(id="employee-id")

# Or with the traditional execute method
employee = employee_tool.execute({"id": "employee-id"})
```

StackOne AI SDK provides powerful filtering capabilities to help you select the exact tools you need.
The fetch_tools() method provides filtering by providers, actions, and account IDs:
```python
from stackone_ai import StackOneToolSet

toolset = StackOneToolSet()

# Filter by account IDs
tools = toolset.fetch_tools(account_ids=["acc-123", "acc-456"])

# Filter by providers (case-insensitive)
tools = toolset.fetch_tools(providers=["hibob", "bamboohr"])

# Filter by action patterns with glob support
tools = toolset.fetch_tools(actions=["*_list_employees"])

# Combine multiple filters
tools = toolset.fetch_tools(
    account_ids=["acc-123"],
    providers=["hibob"],
    actions=["*_list_*"],
)

# Use set_accounts() for chaining
toolset.set_accounts(["acc-123", "acc-456"])
tools = toolset.fetch_tools(providers=["hibob"])
```

Filtering Options:
- `account_ids`: Filter tools by account IDs. Tools will be loaded for each specified account.
- `providers`: Filter by provider names (e.g., `["hibob", "bamboohr"]`). Case-insensitive matching.
- `actions`: Filter by action patterns with glob support:
  - Exact match: `["bamboohr_list_employees"]`
  - Glob pattern: `["*_list_employees"]` matches all tools ending with `_list_employees`
  - Provider prefix: `["bamboohr_*"]` matches all BambooHR tools
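The include/exclude pattern semantics described above can be sketched with standard glob matching. This is only an illustration of the documented behavior, not the SDK's internal implementation; `select_actions` is a hypothetical helper:

```python
from fnmatch import fnmatch

def select_actions(action_names, patterns):
    """Illustrate the documented action-filter semantics:
    plain glob patterns include matches, and patterns
    prefixed with "!" exclude them afterwards."""
    included = set()
    for name in action_names:
        for pattern in patterns:
            if not pattern.startswith("!") and fnmatch(name, pattern):
                included.add(name)
    for name in list(included):
        for pattern in patterns:
            if pattern.startswith("!") and fnmatch(name, pattern[1:]):
                included.discard(name)
    return sorted(included)

actions = [
    "bamboohr_list_employees",
    "bamboohr_get_employee",
    "bamboohr_delete_employee",
    "hibob_list_employees",
]
print(select_actions(actions, ["bamboohr_*", "!*_delete_*"]))
# ['bamboohr_get_employee', 'bamboohr_list_employees']
```

The exclusion pass runs after the include pass, so `"!*_delete_*"` removes destructive actions even when a broad prefix like `"bamboohr_*"` matched them first.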
The Python SDK can emit implicit behavioral feedback to LangSmith so you can triage low-quality tool results without manually tagging runs.
Set `LANGSMITH_API_KEY` in your environment and the SDK will initialize the implicit feedback manager on first tool execution. You can optionally fine-tune behavior with:
- `STACKONE_IMPLICIT_FEEDBACK_ENABLED` (`true`/`false`, defaults to `true` when an API key is present)
- `STACKONE_IMPLICIT_FEEDBACK_PROJECT` to pin a LangSmith project name
- `STACKONE_IMPLICIT_FEEDBACK_TAGS` with a comma-separated list of tags applied to every run
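These variables can be set once in your shell profile; the values below are placeholders for illustration only:

```shell
# Activate implicit feedback (placeholder values)
export LANGSMITH_API_KEY="ls-..."                       # required to activate
export STACKONE_IMPLICIT_FEEDBACK_ENABLED=true          # default when a key is present
export STACKONE_IMPLICIT_FEEDBACK_PROJECT="stackone-agents"
export STACKONE_IMPLICIT_FEEDBACK_TAGS="python-sdk,production"
```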
If you want custom session or user resolvers, call `configure_implicit_feedback` during start-up:

```python
from stackone_ai import configure_implicit_feedback

configure_implicit_feedback(
    api_key="/path/to/langsmith.key",
    project_name="stackone-agents",
    default_tags=["python-sdk"],
)
```

Providing your own `session_resolver`/`user_resolver` callbacks lets you derive identifiers from the request context before events are sent to LangSmith.
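The resolver callbacks can be plain callables. The signature below (a context mapping in, an identifier out) is an assumption for illustration, as are the `conversation_id` and `user_id` keys; adapt them to however your framework exposes request context:

```python
# Hypothetical resolvers: the exact callback signature is an assumption.
def session_resolver(context: dict) -> str:
    # Derive a stable session id from your request context
    return context.get("conversation_id", "anonymous-session")

def user_resolver(context: dict) -> str:
    return context.get("user_id", "anonymous-user")

# Wire them up at start-up (shown as a comment since the exact
# keyword arguments are assumptions):
# from stackone_ai import configure_implicit_feedback
# configure_implicit_feedback(
#     session_resolver=session_resolver,
#     user_resolver=user_resolver,
# )
```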
Both `tool.execute` and `tool.call` accept an `options` keyword that is excluded from the API request but forwarded to the feedback manager:

```python
tool.execute(
    {"id": "employee-id"},
    options={
        "feedback_session_id": "chat-42",
        "feedback_user_id": "user-123",
        "feedback_metadata": {"conversation_id": "abc"},
    },
)
```

When two calls for the same session happen within a few seconds, the SDK emits a `refinement_needed` event, and you can inspect suitability scores directly in LangSmith.
LangChain Integration
StackOne tools work seamlessly with LangChain, enabling powerful AI agent workflows:
```python
from langchain_openai import ChatOpenAI
from stackone_ai import StackOneToolSet

# Initialize StackOne tools
toolset = StackOneToolSet()
tools = toolset.fetch_tools(actions=["bamboohr_*"], account_ids=["your-account-id"])

# Convert to LangChain format
langchain_tools = tools.to_langchain()

# Use with LangChain models
model = ChatOpenAI(model="gpt-4o-mini")
model_with_tools = model.bind_tools(langchain_tools)

# Execute AI-driven tool calls
response = model_with_tools.invoke("Get employee information for ID: emp123")

# Handle tool calls
for tool_call in response.tool_calls:
    tool = tools.get_tool(tool_call["name"])
    if tool:
        result = tool.execute(tool_call["args"])
        print(f"Result: {result}")
```

LangGraph Integration
StackOne tools convert to LangChain tools, which LangGraph consumes via its prebuilt nodes:
Prerequisites:
```bash
pip install langgraph langchain-openai
```

```python
from typing import Annotated

from langchain_openai import ChatOpenAI
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import tools_condition

from stackone_ai import StackOneToolSet
from stackone_ai.integrations.langgraph import to_tool_node, bind_model_with_tools

# Prepare tools
toolset = StackOneToolSet()
tools = toolset.fetch_tools(actions=["bamboohr_*"], account_ids=["your-account-id"])
langchain_tools = tools.to_langchain()

class State(TypedDict):
    messages: Annotated[list, add_messages]

# Build a small agent loop: LLM -> maybe tools -> back to LLM
graph = StateGraph(State)
graph.add_node("tools", to_tool_node(langchain_tools))

def call_llm(state: State):
    llm = ChatOpenAI(model="gpt-4o-mini")
    llm = bind_model_with_tools(llm, langchain_tools)
    resp = llm.invoke(state["messages"])  # returns AIMessage with optional tool_calls
    # add_messages merges the returned message into state, so only
    # the new AIMessage needs to be returned here
    return {"messages": [resp]}

graph.add_node("llm", call_llm)
graph.add_edge(START, "llm")
graph.add_conditional_edges("llm", tools_condition)
graph.add_edge("tools", "llm")

app = graph.compile()
_ = app.invoke({"messages": [("user", "Get employee with id emp123")]})
```

CrewAI Integration
CrewAI uses LangChain tools natively, making integration seamless:
```python
from crewai import Agent, Crew, Task
from stackone_ai import StackOneToolSet

# Get tools and convert to LangChain format
toolset = StackOneToolSet()
tools = toolset.fetch_tools(actions=["bamboohr_*"], account_ids=["your-account-id"])
langchain_tools = tools.to_langchain()

# Create CrewAI agent with StackOne tools
agent = Agent(
    role="HR Manager",
    goal="Analyze employee data and generate insights",
    backstory="Expert in HR analytics and employee management",
    tools=langchain_tools,
    llm="gpt-4o-mini",
)

# Define task and execute
task = Task(
    description="Find all employees in the engineering department",
    agent=agent,
    expected_output="List of engineering employees with their details",
)

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

The SDK includes a feedback collection tool (`tool_feedback`) that allows users to submit feedback about their experience with StackOne tools. This tool is automatically included in the toolset and is designed to be invoked by AI agents after user permission.
```python
from stackone_ai import StackOneToolSet

toolset = StackOneToolSet()

# Get the feedback tool (included with the "tool_*" pattern or all tools)
tools = toolset.fetch_tools(actions=["tool_*"])
feedback_tool = tools.get_tool("tool_feedback")

# Submit feedback (typically invoked by AI after user consent)
result = feedback_tool.call(
    feedback="The HRIS tools are working great! Very fast response times.",
    account_id="acc_123456",
    tool_names=["bamboohr_list_employees", "bamboohr_get_employee"],
)
```

Important: The AI agent should always ask for user permission before submitting feedback:
- "Are you ok with sending feedback to StackOne? The LLM will take care of sending it."
- Only call the tool after the user explicitly agrees.
Search for tools using natural language queries. Works with both semantic (cloud) and local BM25+TF-IDF search.
```python
from stackone_ai import StackOneToolSet

# Get a callable search tool
toolset = StackOneToolSet()
all_tools = toolset.fetch_tools(account_ids=["your-account-id"])
search_tool = toolset.get_search_tool()

# Search for relevant tools — returns a Tools collection
tools = search_tool("manage employees", top_k=5)

# Execute a discovered tool directly
tools[0](limit=10)
```

Discover tools using natural language instead of exact names. Queries like "onboard new hire" resolve to the right actions even when the tool is called `bamboohr_create_employee`.
```python
from stackone_ai import StackOneToolSet

toolset = StackOneToolSet()

# Search by intent — returns Tools collection ready for any framework
tools = toolset.search_tools("manage employee records", account_ids=["your-account-id"], top_k=5)
openai_tools = tools.to_openai()

# Lightweight: inspect results without fetching full tool definitions
results = toolset.search_action_names("time off requests", top_k=5)
```

Control which search backend `search_tools()` uses via the `search` parameter:
# "auto" (default) — tries semantic search first, falls back to local
tools = toolset.search_tools("manage employees", search="auto")
# "semantic" — semantic API only, raises if unavailable
tools = toolset.search_tools("manage employees", search="semantic")
# "local" — local BM25+TF-IDF only, no semantic API call
tools = toolset.search_tools("manage employees", search="local")Results are automatically scoped to connectors in your linked accounts. See Semantic Search Example for SearchTool (get_search_tool) integration, OpenAI, and LangChain patterns.
For more examples, check out the examples/ directory:
- StackOne Account IDs
- File Uploads
- OpenAI Integration
- LangChain Integration
- CrewAI Integration
- Search Tool
- Semantic Search
This project includes a Nix flake for reproducible development environments. All development tools are defined in flake.nix and provided via Nix.
```bash
# Install Nix with flakes enabled (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf -L https://artifacts.nixos.org/experimental-installer | \
  sh -s -- install

# If flakes are not enabled, enable them with:
mkdir -p ~/.config/nix && echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
```

```bash
# Automatic activation with direnv (recommended)
direnv allow

# Or manual activation
nix develop
```

The Nix development environment includes:
- Python with uv package manager
- Automatic dependency installation
- Git hooks (treefmt + ty) auto-configured
- Consistent environment across all platforms
Apache 2.0 License