feat(meta-tools): add get_meta_tools() for LLM-driven search and execute #151
shashi-stackone wants to merge 10 commits into main
Conversation
2 issues found across 3 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="stackone_ai/meta_tools.py">
<violation number="1" location="stackone_ai/meta_tools.py:29">
P2: Validate `MetaToolsOptions.search`, `top_k`, and `min_similarity` to prevent invalid meta-tool configuration from silently producing incorrect search behavior.</violation>
</file>
<file name="examples/meta_tools_example.py">
<violation number="1" location="examples/meta_tools_example.py:132">
P2: Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
examples/meta_tools_example.py (outdated)

```python
if tool is None:
    result = {"error": f"Unknown tool: {tool_call.function.name}"}
else:
    result = tool.execute(tool_call.function.arguments)
```

P2: Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 132:
<comment>Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.</comment>
<file context>
@@ -0,0 +1,187 @@
+ if tool is None:
+     result = {"error": f"Unknown tool: {tool_call.function.name}"}
+ else:
+     result = tool.execute(tool_call.function.arguments)
+
+ messages.append(
</file context>
Pull request overview
Adds an agent-friendly “meta tools” interface to the SDK so LLMs can dynamically discover and execute StackOne tools via two stable tool definitions, instead of loading the entire catalog into the prompt.
Changes:
- Add `StackOneToolSet.get_meta_tools()` returning a `Tools` collection containing `tool_search` and `tool_execute`.
- Introduce `stackone_ai/meta_tools.py` implementing the two meta tools and a factory to build them.
- Add `examples/meta_tools_example.py` demonstrating an agent loop using the meta tools with OpenAI/Gemini and LangChain.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| stackone_ai/toolset.py | Exposes get_meta_tools() as the public entry point and wires it to the meta tools factory. |
| stackone_ai/meta_tools.py | Implements tool_search (delegates to search_tools) and tool_execute (fetches by name and executes). |
| examples/meta_tools_example.py | Demonstrates end-to-end usage of meta tools in an LLM-driven tool loop. |
stackone_ai/toolset.py (outdated)

```python
def get_meta_tools(
    self,
    *,
    account_ids: list[str] | None = None,
    search: SearchMode | None = None,
    connector: str | None = None,
    top_k: int | None = None,
    min_similarity: float | None = None,
) -> Tools:
    """Get LLM-callable meta tools (tool_search + tool_execute) for agent-driven workflows.

    Returns a Tools collection that can be passed directly to any LLM framework.
    The LLM uses tool_search to discover available tools, then tool_execute to run them.

    Args:
        account_ids: Account IDs to scope tool discovery and execution
        search: Search mode ('auto', 'semantic', or 'local')
        connector: Optional connector filter (e.g. 'bamboohr')
        top_k: Maximum number of search results. Defaults to 5.
        min_similarity: Minimum similarity score threshold 0-1

    Returns:
        Tools collection containing tool_search and tool_execute

    Example::

        toolset = StackOneToolSet(account_id="acc-123")
        meta_tools = toolset.get_meta_tools()

        # Pass to OpenAI
        tools = meta_tools.to_openai()

        # Pass to LangChain
        tools = meta_tools.to_langchain()
    """
    if self._search_config is None:
        raise ToolsetConfigError(
            "Search is disabled. Initialize StackOneToolSet with a search config to enable."
        )

    from stackone_ai.meta_tools import MetaToolsOptions, create_meta_tools

    options = MetaToolsOptions(
        account_ids=account_ids,
        search=search,
        connector=connector,
        top_k=top_k,
        min_similarity=min_similarity,
    )
    return create_meta_tools(self, options)
```
This PR introduces get_meta_tools() but there are no accompanying tests. Since the repo has good coverage for StackOneToolSet behavior, consider adding unit tests that (1) tool_search delegates to search_tools() with defaults/overrides, (2) tool_execute returns structured errors on StackOneAPIError, and (3) multi-account scoping behaves as intended.
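A delegation test along the lines suggested could look like the sketch below. It uses a stand-in `execute` closure to model what `SearchMetaTool.execute()` is expected to do; the real class's constructor and the exact `search_tools()` signature in the SDK may differ:

```python
import json
from unittest.mock import MagicMock

def test_tool_search_delegates_with_defaults():
    # Hypothetical stand-in for the toolset; only search_tools() is exercised.
    toolset = MagicMock()
    toolset.search_tools.return_value = []

    # Simulate the expected behavior of SearchMetaTool.execute(): parse the
    # LLM-provided JSON arguments and forward them with a default top_k of 5.
    def execute(arguments):
        parsed = json.loads(arguments)
        return toolset.search_tools(
            parsed["query"],
            connector=parsed.get("connector"),
            top_k=parsed.get("top_k") if parsed.get("top_k") is not None else 5,
        )

    execute(json.dumps({"query": "create an employee"}))
    toolset.search_tools.assert_called_once_with(
        "create an employee", connector=None, top_k=5
    )

test_tool_search_delegates_with_defaults()
print("ok")
```

The same shape extends to the other two suggested cases: patch `fetch_tools` to raise `StackOneAPIError` and assert a structured error dict comes back, and register two accounts with colliding tool names to pin down the scoping behavior.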
stackone_ai/meta_tools.py (outdated)

```python
class SearchMetaTool(StackOneTool):
    """LLM-callable tool that searches for available StackOne tools."""

    _toolset: Any = None
    _options: MetaToolsOptions = None  # type: ignore[assignment]
```
SearchMetaTool/ExecuteMetaTool store _toolset and _options as normal model attributes. Because these classes inherit from Pydantic BaseModel (via StackOneTool), these should be PrivateAttr to avoid accidental serialization/model_dump of a large StackOneToolSet object (and to match how _execute_config, _api_key, etc. are handled in StackOneTool).
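The `PrivateAttr` pattern keeps such references off the model's public schema entirely. A minimal sketch (`SearchMetaToolSketch` is a hypothetical stand-in inheriting straight from `BaseModel`, not the SDK's `StackOneTool`):

```python
from typing import Any
from pydantic import BaseModel, PrivateAttr

class SearchMetaToolSketch(BaseModel):
    """Sketch: private attributes stay out of model_dump() and serialization."""
    name: str = "tool_search"

    # PrivateAttr keeps the toolset reference off the public model schema,
    # matching how fields like _api_key are handled elsewhere.
    _toolset: Any = PrivateAttr(default=None)

tool = SearchMetaToolSketch()
tool._toolset = object()  # would be the StackOneToolSet instance
print(tool.model_dump())  # {'name': 'tool_search'} - no _toolset leaked
```

Without `PrivateAttr`, a plain underscore-prefixed annotation on a Pydantic model either errors or risks dragging the whole toolset object into `model_dump()` output.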
stackone_ai/meta_tools.py (outdated)

```python
{
    "name": t.name,
    "description": t.description,
    "parameters": t.parameters.properties,
```
tool_search returns "parameters": t.parameters.properties, which drops the top-level schema info (at least the type). Since the intent is to return a “parameter schema” the LLM can follow, consider returning the full ToolParameters schema (e.g., include both type and properties) rather than only properties.
Suggested change:

```diff
- "parameters": t.parameters.properties,
+ "parameters": t.parameters.model_dump(),
```
stackone_ai/meta_tools.py (outdated)

```python
all_tools = self._toolset.fetch_tools(account_ids=self._options.account_ids)
target = all_tools.get_tool(parsed.tool_name)

if target is None:
    return {
        "error": f'Tool "{parsed.tool_name}" not found. Use tool_search to find available tools.',
    }
```
tool_execute selects a tool only by name (all_tools.get_tool(parsed.tool_name)). If account_ids contains multiple accounts, tools with the same name but different account contexts will collide in Tools._tool_map, which can cause execution against the wrong account. Consider requiring an account_id argument (when multiple accounts are in scope) and selecting the matching tool instance by both name and get_account_id().
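The selection logic being suggested could look like the following sketch. `Tool` and `select_tool` are hypothetical stand-ins for illustration; in the SDK the account would come from each tool's `get_account_id()`:

```python
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    account_id: str

def select_tool(tools, tool_name, account_id=None):
    """Resolve a tool by name, disambiguating by account when names collide."""
    matches = [t for t in tools if t.name == tool_name]
    if not matches:
        return {"error": f'Tool "{tool_name}" not found.'}
    if len(matches) == 1:
        return matches[0]
    if account_id is None:
        # Multiple accounts expose the same tool name: force the caller to choose.
        return {
            "error": f'Multiple accounts expose "{tool_name}"; pass account_id.',
            "account_ids": [t.account_id for t in matches],
        }
    for t in matches:
        if t.account_id == account_id:
            return t
    return {"error": f'No "{tool_name}" tool for account "{account_id}".'}

tools = [Tool("hris_list_employees", "acc-1"), Tool("hris_list_employees", "acc-2")]
print(select_tool(tools, "hris_list_employees", account_id="acc-2").account_id)  # acc-2
```

Returning the candidate `account_ids` in the ambiguity error gives the LLM what it needs to retry with an explicit account.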
stackone_ai/meta_tools.py (outdated)

```python
all_tools = self._toolset.fetch_tools(account_ids=self._options.account_ids)
target = all_tools.get_tool(parsed.tool_name)
```
tool_execute calls fetch_tools() on every invocation, which re-downloads the full tool catalog each time and can dominate latency/cost in agent loops with multiple tool calls. Consider caching the fetched Tools per (account_ids, base_url) within the meta tool instance (optionally with a TTL) so repeated executions don’t repeatedly hit /mcp.
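A per-key cache with TTL along these lines is one option. This is a sketch under assumptions: `ToolCatalogCache` is hypothetical and `fetch` stands in for `toolset.fetch_tools()`:

```python
import time

class ToolCatalogCache:
    """Cache fetched tool catalogs keyed by the account scope, with a TTL."""

    def __init__(self, fetch, ttl_seconds=300.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._entries = {}  # key -> (timestamp, tools)

    def get(self, account_ids):
        # Normalize the scope so ["a", "b"] and ["b", "a"] share one entry.
        key = tuple(sorted(account_ids or []))
        entry = self._entries.get(key)
        if entry is not None and (time.monotonic() - entry[0]) < self._ttl:
            return entry[1]
        tools = self._fetch(account_ids)
        self._entries[key] = (time.monotonic(), tools)
        return tools

calls = []
cache = ToolCatalogCache(lambda ids: calls.append(ids) or ["tool_a", "tool_b"])
cache.get(["acc-1"])
cache.get(["acc-1"])  # served from cache; no second fetch
print(len(calls))  # 1
```

Keying on the account scope (and base URL, if it can vary) keeps multi-account agents from cross-contaminating catalogs, while the TTL bounds staleness.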
stackone_ai/meta_tools.py (outdated)

```python
def _create_search_tool(api_key: str, opts: MetaToolsOptions) -> SearchMetaTool:
    name = "tool_search"
    description = (
        "Search for available tools by describing what you need. "
        "Returns matching tool names, descriptions, and parameter schemas. "
        "Use the returned parameter schemas to know exactly what to pass when calling tool_execute."
    )
```
_create_search_tool(api_key: str, opts: MetaToolsOptions) takes opts but never uses it. Consider removing the parameter (and the corresponding argument at the callsite) to avoid confusion, or use it to set defaults that are reflected in the tool’s description/schema.
stackone_ai/meta_tools.py (outdated)

```python
    "Use tool_search first to find available tools. "
    "The parameters field must match the parameter schema returned by tool_search. "
    "Pass parameters as a nested object matching the schema structure."
)
```
_create_execute_tool(api_key: str, opts: MetaToolsOptions) takes opts but never uses it. Consider removing the parameter (and the corresponding argument at the callsite) to avoid confusion, or use it to set defaults that are reflected in the tool’s description/schema.
Suggested change:

```diff
  )
+ # Keep `opts` in the signature for API compatibility but reference it to avoid unused-parameter warnings.
+ _ = opts
```
stackone_ai/meta_tools.py (outdated)

```python
top_k: int | None = None
min_similarity: float | None = None
```
MetaToolsOptions accepts top_k and min_similarity but doesn’t validate their ranges. Since these values are passed through to search_tools() (and can affect slicing/thresholding), negative or out-of-range values can produce incorrect behavior. Consider adding Pydantic Field constraints (e.g., top_k ge=1/le=50 and min_similarity ge=0/le=1) and optionally normalizing empty strings for connector.
Suggested change:

```diff
- top_k: int | None = None
- min_similarity: float | None = None
+ top_k: int | None = Field(default=None, ge=1, le=50)
+ min_similarity: float | None = Field(default=None, ge=0, le=1)
+
+ @field_validator("connector", mode="before")
+ @classmethod
+ def normalize_connector(cls, v: str | None) -> str | None:
+     if v is None:
+         return None
+     if isinstance(v, str) and not v.strip():
+         return None
+     return v
```
stackone_ai/meta_tools.py (outdated)

```python
except StackOneAPIError as exc:
    # Return API errors to the LLM so it can adjust parameters and retry
    return {
        "error": str(exc),
        "status_code": exc.status_code,
        "tool_name": parsed.tool_name if "parsed" in dir() else "unknown",
    }
```
The StackOneAPIError handler uses parsed.tool_name if "parsed" in dir() else "unknown". This is brittle and unnecessary here (a StackOneAPIError is only likely after parsed is created). Consider initializing tool_name before the try/except (or using locals().get("parsed")) to keep the handler simpler and avoid dir() checks.
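The simpler handler shape binds `tool_name` before the try block so the except clause never needs to probe for locals. A sketch with stand-in types (`StackOneAPIError` below is a local stub, not the SDK class):

```python
import json

class StackOneAPIError(Exception):
    """Stand-in for the SDK's API error type."""
    def __init__(self, message, status_code):
        super().__init__(message)
        self.status_code = status_code

def execute(arguments):
    # Bind a sentinel up front; the handler can then reference it unconditionally.
    tool_name = "unknown"
    try:
        parsed = json.loads(arguments)
        tool_name = parsed["tool_name"]
        # ... the real implementation would execute the tool here;
        # simulate an upstream API failure:
        raise StackOneAPIError("upstream rejected the call", 422)
    except StackOneAPIError as exc:
        return {"error": str(exc), "status_code": exc.status_code, "tool_name": tool_name}

print(execute(json.dumps({"tool_name": "hris_create_employee"})))
# {'error': 'upstream rejected the call', 'status_code': 422, 'tool_name': 'hris_create_employee'}
```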
stackone_ai/meta_tools.py (outdated)

```python
parameters = ToolParameters(
    type="object",
    properties={
        "tool_name": {
            "type": "string",
            "description": "Exact tool name from tool_search results",
        },
        "parameters": {
            "type": "object",
            "description": "Parameters for the tool. Pass {} if none needed.",
        },
    },
```
🔴 The to_langchain() type mapping in models.py:415-424 has no "object" → dict case — it falls through to the default python_type = str. This means tool_execute's parameters field (declared as type: object at meta_tools.py:257) gets str in the generated Pydantic args_schema, so Pydantic v2 will reject the dict value with a ValidationError when a LangChain agent invokes tool_execute. Add elif type_str == "object": python_type = dict to the mapping, or similarly for "array" → list.
Extended reasoning...
What the bug is
The to_langchain() method on StackOneTool (models.py:415-431) converts tool parameter schemas into a dynamically-created Pydantic BaseModel for LangChain's args_schema. It maps JSON Schema types to Python types: number → float, integer → int, boolean → bool, with str as the fallback default (line 416). There is no mapping for "object" → dict.
How it manifests
The new tool_execute meta tool declares a parameters property with "type": "object" (meta_tools.py:257-259). When to_langchain() processes this, the "object" type falls through to the default, producing parameters: str in the generated args_schema class.
Step-by-step proof
- Call `toolset.get_meta_tools()` — returns a `Tools` collection containing `tool_execute`.
- Call `.to_langchain()` on the collection — iterates each tool, calling `StackOneTool.to_langchain()`.
- For `tool_execute`, the method iterates `self.parameters.properties`, which includes `{"parameters": {"type": "object", "description": "..."}}`.
- At line 418, `type_str = details.get("type", "string")` yields `"object"`.
- The if/elif chain checks `"number"`, `"integer"`, `"boolean"` — none match `"object"`.
- `python_type` stays as the default `str` from line 416.
- The dynamically created Pydantic model has `parameters: str`.
- When an LLM invokes `tool_execute`, it produces `{"parameters": {...}}` — a dict value.
- LangChain passes this to Pydantic v2 validation, which rejects a dict for a str-typed field with `ValidationError: Input should be a valid string`.
- The tool call fails before `_run()` is ever reached.
Impact
The PR description claims "all existing framework converters work automatically" and the example example_langchain_meta_tools explicitly demonstrates meta_tools.to_langchain(). In practice, tool_execute — the core tool for LLM-driven execution — is completely broken when used through LangChain. The tool_search tool works fine (it only has string and integer parameters), but tool_execute will always fail on validation.
How to fix
Add "object" → dict (and optionally "array" → list) to the type mapping in to_langchain() at models.py:419-424:

```python
elif type_str == "object":
    python_type = dict
elif type_str == "array":
    python_type = list
```

While the missing mapping is technically pre-existing in to_langchain(), no previous tool had a user-facing object-typed parameter critical to its operation. This PR introduces the first tool where it causes a practical failure, so it should be fixed as part of this PR.
stackone_ai/meta_tools.py (outdated)

```python
        "query": parsed.query,
    }
except json.JSONDecodeError as exc:
    raise StackOneError(f"Invalid JSON in arguments: {exc}") from exc
except Exception as error:
    if isinstance(error, StackOneError):
        raise
    raise StackOneError(f"Error searching tools: {error}") from error
```
🔴 Pydantic ValidationError from invalid LLM input (e.g., top_k=-1, empty query) is caught by the broad except Exception handler in both SearchMetaTool.execute() and ExecuteMetaTool.execute(), wrapped as StackOneError, and raised — crashing the agent loop instead of returning an error dict the LLM can retry with. Similarly, in ExecuteMetaTool, network-level StackOneError (DNS failure, timeout) is re-raised while only StackOneAPIError is caught and returned. Both error types should be caught and returned as error dicts, consistent with the existing StackOneAPIError handling and the PR's stated design goal of returning errors to the LLM for retry.
Extended reasoning...
What the bug is
Both SearchMetaTool.execute() and ExecuteMetaTool.execute() have error handling that contradicts the PR's stated design goal: "API errors are returned to the LLM (not thrown) so the agent can retry with different parameters." Two categories of recoverable errors crash the agent loop instead of being returned as error dicts:
- Pydantic `ValidationError` from invalid LLM input (e.g., `top_k=-1` violating `ge=1`, empty `query` violating `min_length=1`)
- `StackOneError` from network failures (DNS failure, timeout, connection refused) in `ExecuteMetaTool` only
How ValidationError crashes the loop
Step-by-step proof for SearchMetaTool:
- LLM calls `tool_search` with `{"query": "employees", "top_k": -1}`
- `SearchInput(query="employees", top_k=-1)` raises Pydantic `ValidationError` because `top_k` has a `ge=1` constraint
- The `except json.JSONDecodeError` handler (line 92) does not match
- The `except Exception as error` handler (line 94) catches it
- `isinstance(error, StackOneError)` is `False` — `ValidationError` is not a `StackOneError`
- The error is wrapped as `StackOneError("Error searching tools: ...")` and raised
- The example agent loop (meta_tools_example.py line 131) calls `tool.execute()` with no try/except, so the loop crashes
The same flow applies to ExecuteMetaTool when ExecuteInput validation fails (e.g., empty tool_name).
How network errors crash the loop (ExecuteMetaTool only)
Step-by-step proof:
- LLM calls `tool_execute` with valid parameters
- `target.execute()` makes an HTTP request that fails with a DNS error
- In `StackOneTool.execute()` (models.py line 277-279), `httpx.RequestError` is caught and re-raised as `StackOneError("Request failed: ...")`
- Back in `ExecuteMetaTool.execute()`, the `except StackOneAPIError` handler (line 144) does NOT match — `StackOneError` is the parent class, not `StackOneAPIError`
- The `except Exception as error` handler (line 153) catches it
- `isinstance(error, StackOneError)` is `True`, so it is re-raised
- The agent loop crashes
This creates an inconsistency: HTTP 4xx/5xx errors (StackOneAPIError) are gracefully returned to the LLM, but network errors (StackOneError) crash the loop.
Addressing the refutation about intentional design
One verifier argued this follows an established pattern from FeedbackTool. However, there is a key difference: ExecuteMetaTool already breaks from the FeedbackTool pattern by explicitly catching StackOneAPIError and returning it as an error dict (lines 144-150). This demonstrates the PR author's intent to make meta tools more resilient for agent loops. The ValidationError and StackOneError cases are gaps in that intent, not intentional design choices. The FeedbackTool also catches ValueError explicitly (line 140), which partially addresses validation — the meta tools don't even do that.
Impact
In LLM-driven agent loops, malformed tool calls are common — LLMs frequently send slightly invalid parameters. The meta tools are specifically designed for autonomous LLM workflows (the PR description and example code make this clear). Crashing the entire loop on a validation error or transient network failure defeats the purpose of the error-returning design.
Fix
Catch pydantic.ValidationError before the broad except Exception in both meta tools and return it as an error dict. In ExecuteMetaTool, also catch StackOneError (not just StackOneAPIError) and return it as an error dict. This is consistent with the existing StackOneAPIError handling pattern already in ExecuteMetaTool.
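The proposed handler ordering can be sketched with stand-in exception classes mirroring the SDK hierarchy (`StackOneAPIError` subclasses `StackOneError`; `ValidationError` here stands in for `pydantic.ValidationError`):

```python
# Stand-in exception classes; the real ones live in the SDK and pydantic.
class StackOneError(Exception): ...
class StackOneAPIError(StackOneError): ...
class ValidationError(Exception): ...

def execute(raise_exc):
    """Demonstrate the handler ordering: every recoverable error becomes a dict."""
    try:
        raise raise_exc
    except ValidationError as exc:
        # Invalid LLM input: return it so the agent can retry with fixed args.
        return {"error": f"Invalid arguments: {exc}"}
    except StackOneError as exc:
        # Parent class catches both API errors (4xx/5xx) and network failures.
        return {"error": str(exc)}

print(execute(ValidationError("top_k must be >= 1")))
print(execute(StackOneAPIError("404 Not Found")))
print(execute(StackOneError("Request failed: DNS error")))
```

Because `except StackOneError` matches subclasses, the dedicated `StackOneAPIError` branch can still come first if it needs to attach `status_code`; the key point is that no branch re-raises.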
stackone_ai/meta_tools.py (outdated)

```python
parameters = ToolParameters(
    type="object",
    properties={
        "query": {
            "type": "string",
            "description": (
                "Natural language description of what you need "
                '(e.g. "create an employee", "list time off requests")'
            ),
        },
        "connector": {
            "type": "string",
            "description": 'Optional connector filter (e.g. "bamboohr", "hibob")',
        },
        "top_k": {
            "type": "integer",
            "description": "Max results to return (1-50, default 5)",
            "minimum": 1,
            "maximum": 50,
        },
    },
```
🔴 Optional parameters connector and top_k in _create_search_tool (and parameters in _create_execute_tool) are missing "nullable": True, so to_openai_function() marks all properties as required. This forces the LLM to always provide values for these optional fields — it will hallucinate connector names instead of searching broadly. Add "nullable": True to the optional property dicts to match the convention in _normalize_schema_properties.
Extended reasoning...
What the bug is
The _create_search_tool function (meta_tools.py:198-218) defines the tool_search schema with three properties: query, connector, and top_k. Only query is truly required — connector and top_k are optional (their descriptions even say "Optional connector filter" and "default 5", and the SearchInput Pydantic model defaults both to None). Similarly, _create_execute_tool (meta_tools.py:250-262) defines parameters as optional (the ExecuteInput model defaults it to {}).
However, none of these optional properties include "nullable": True in their property dicts.
How to_openai_function() uses nullable
In models.py:377-379, the to_openai_function() method determines which properties are required:

```python
if not prop.get("nullable", False):
    required.append(name)
```

Since connector, top_k, and parameters all lack "nullable": True, they are all added to the required array. The resulting OpenAI function schema tells the LLM that ALL three properties of tool_search are required, and both properties of tool_execute are required.
Step-by-step proof
- `toolset.get_meta_tools()` calls `create_meta_tools()` which calls `_create_search_tool()`.
- The `connector` property dict is `{"type": "string", "description": "Optional connector filter..."}` — no `nullable` key.
- When `meta_tools.to_openai()` is called, `to_openai_function()` iterates over properties.
- For `connector`: `prop.get("nullable", False)` returns `False` (key missing), so `connector` is appended to `required`.
- Same for `top_k`: missing `nullable`, so it is marked required.
- The OpenAI tool definition now has `"required": ["query", "connector", "top_k"]`.
- The LLM is told it MUST provide `connector` and `top_k` on every call.
The same issue affects the to_langchain() conversion path (models.py:415-431), where all properties without a default get Field(description=...) without a default value, making them required in the generated Pydantic schema.
Impact
The LLM is forced to provide connector and top_k values for every tool_search call, even when the user wants a broad search across all connectors. In practice, the LLM will hallucinate connector names (e.g., guessing "bamboohr" when the user has "hibob") or always specify a top_k, wasting tokens and potentially returning wrong results. For tool_execute, the LLM is forced to provide parameters even for tools that take no arguments.
The codebase convention
The _normalize_schema_properties method (toolset.py:896-898) already establishes the convention:

```python
if name in required_fields:
    prop.setdefault("nullable", False)
else:
    prop.setdefault("nullable", True)
```

All MCP-fetched tools get this treatment automatically. The meta tools manually define their schemas and missed this convention.
Fix
Add "nullable": True to the connector and top_k property dicts in _create_search_tool, and to the parameters property dict in _create_execute_tool:

```python
"connector": {
    "type": "string",
    "description": "Optional connector filter...",
    "nullable": True,
},
"top_k": {
    "type": "integer",
    "description": "Max results to return (1-50, default 5)",
    "minimum": 1,
    "maximum": 50,
    "nullable": True,
},
```

And in _create_execute_tool:

```python
"parameters": {
    "type": "object",
    "description": "Parameters for the tool...",
    "nullable": True,
},
```
stackone_ai/meta_tools.py (outdated)

```python
connector=parsed.connector or self._options.connector,
top_k=parsed.top_k or self._options.top_k or 5,
```
🟡 Two minor code quality nits: (1) Lines 73-74 use or-chains (parsed.top_k or self._options.top_k or 5) instead of the is not None pattern used consistently elsewhere in the codebase (e.g., SearchTool.__call__, search_tools), which would silently skip falsy values like top_k=0. (2) Line 149 uses "parsed" in dir() to guard a variable reference, but dir() is not guaranteed to include locals per the Python spec, and parsed is always defined at that point anyway, making the check dead code. Consider using x if x is not None else default for (1) and removing the unnecessary guard for (2).
Extended reasoning...
or-chain pattern on lines 73-74
In SearchMetaTool.execute(), the code uses:

```python
connector=parsed.connector or self._options.connector,
top_k=parsed.top_k or self._options.top_k or 5,
```

The or operator in Python treats any falsy value (0, empty string, False, etc.) as equivalent to None. This means if someone explicitly sets MetaToolsOptions(top_k=0), the or chain would skip past it and fall through to 5. Similarly, if parsed.connector were an empty string "", it would fall through to self._options.connector.
Practical impact is low but the pattern is inconsistent
In practice, parsed.top_k cannot be 0 due to the ge=1 constraint on SearchInput, and top_k=0 on MetaToolsOptions is semantically nonsensical. However, the rest of the codebase consistently uses the x if x is not None else default pattern — for example, SearchTool.__call__ uses top_k if top_k is not None else self._config.get("top_k"), and search_tools() uses top_k if top_k is not None else self._search_config.get("top_k"). The or-chain here is an inconsistency that could confuse future maintainers or cause subtle bugs if constraints change.
Step-by-step proof for the or-chain issue
- A caller creates `MetaToolsOptions(top_k=0)` — this is allowed since the field is `int | None` with no `ge` constraint.
- `parsed.top_k` is `None` (the LLM didn't specify it).
- The expression evaluates: `None or 0 or 5` → `0` is falsy → result is `5`.
- The caller's explicit `top_k=0` is silently ignored.
The fix is straightforward:

```python
connector=parsed.connector if parsed.connector is not None else self._options.connector,
top_k=parsed.top_k if parsed.top_k is not None else (self._options.top_k if self._options.top_k is not None else 5),
```

"parsed" in dir() on line 149
In ExecuteMetaTool.execute(), the StackOneAPIError handler uses parsed.tool_name if "parsed" in dir() else "unknown". The dir() builtin without arguments is documented as returning "the most relevant, rather than complete, information" — it is not guaranteed to include local variables on all Python implementations. However, tracing the control flow shows that parsed is always assigned on line 133 (via ExecuteInput(**raw_params)), and StackOneAPIError can only be raised on line 143 (target.execute(...)) which executes after parsed is assigned. So "parsed" in dir() is dead code that always evaluates to True. The proper fix is to either use parsed.tool_name directly (since it's always defined) or initialize a sentinel like tool_name = "unknown" before the try block.
Summary
Both issues are minor code quality nits with negligible practical impact. The or-chain is the slightly more meaningful one since it represents an inconsistency with the established codebase pattern that could mask explicit falsy values in edge cases.
1 issue found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="stackone_ai/meta_tools.py">
<violation number="1" location="stackone_ai/meta_tools.py:91">
P2: `tool_execute` can throw uncaught `TypeError` on non-object JSON input because the narrowed exception handler omits that case.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
examples/meta_tools_example.py (outdated)

```python
toolset = StackOneToolSet(search={"method": "semantic", "top_k": 3})

# Get meta tools — returns a Tools collection with tool_search + tool_execute
meta_tools = toolset.get_meta_tools(account_ids=_account_ids or None)
openai_tools = meta_tools.to_openai()

print(f"Meta tools: {[t.name for t in meta_tools]}")
print()
```
you can pass account_ids to StackOneToolSet
also don't we have toolset.to_openai() - why do we need a get_meta_tools method when StackOneToolSet is the meta tool collection?
At the moment we don't have toolset.to_openai(). It exists on Tools (the collection) and StackOneTool (individual tool).
However, we can add this to StackOneToolSet, but we need to settle the API design to separate "all tools" vs the 2 meta tools (search, execute).
Asked Claude and it suggested the following options:
Option 1: Explicit parameter

```python
toolset.to_openai()            # meta tools (default)
toolset.to_openai(mode="all")  # all tools (calls fetch_tools internally)
```

Problem: "meta tools by default" is opinionated, could surprise users.
Option 2: Separate named methods

```python
toolset.to_openai_meta()  # 2 meta tools
toolset.to_openai_all()   # all tools
```

Problem: Method explosion — need this for every format (langchain, anthropic, etc.)
Option 3: Keep get_meta_tools() as-is, don't add shortcuts

```python
toolset.get_meta_tools().to_openai()  # 2 meta tools
toolset.fetch_tools().to_openai()     # all tools
```

This is already clear and symmetrical. The pattern is:
- `get_meta_tools()` → `Tools` collection (2 tools)
- `fetch_tools()` → `Tools` collection (N tools)
- Both return `Tools`, both have `.to_openai()`, `.to_langchain()`, etc.
Each has its pros and cons, and option 3 is what we already have.
I think option 1 is the best. @glebedel
- `toolset(search=None, execute=None)` as the defaults
- add `toolset().openai()`
- add `toolset().openai(mode=search_and_execute)`
- keep `toolset(search=semantic)`
- keep `toolset().search(mode=semantic)`
- keep `toolset().get_search_tool(mode=local)`
Happy with this approach.
- `execute=None` -> what would be the behaviour? Do we assume a default execution config, or do we not include `tool_execute` in the meta tools?
- `toolset.openai()` (no mode) calls `fetch_tools()`, I assume
Can add more methods like `.anthropic`, `.langchain`, etc. but will focus on `.openai` for now.
This is addressed now
the first example I looked at did not follow this pattern and had a .to_meta_tools()
examples/meta_tools_example.py (outdated)

```python
for iteration in range(max_iterations):
    print(f"--- Iteration {iteration + 1} ---")

    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=openai_tools,
        tool_choice="auto",
    )

    choice = response.choices[0]

    if not choice.message.tool_calls:
        print(f"\n{provider} final response: {choice.message.content}")
        break
```
just have one message so the example is simple
Made example simple
```python
def main() -> None:
    """Run all meta tools examples."""
    api_key = os.getenv("STACKONE_API_KEY")
    if not api_key:
        print("Set STACKONE_API_KEY to run these examples.")
        return

    example_openai_meta_tools()
    example_langchain_meta_tools()
```
these examples should show usage with a client, i.e. how do you use them with anthropic and openai, not just "here it is in langchain format". how would they then use the langchain format with an LLM?
we want init_stackone_tools -> pass to LLM client -> show how to use LLM client with stackone tools
(for each and all LLM clients / tool formats that we support)
1 issue found across 5 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="stackone_ai/toolset.py">
<violation number="1" location="stackone_ai/toolset.py:496">
P2: openai() uses `account_ids or ...`, so an explicit empty list cannot override the constructor execute config as documented.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
effective_account_ids = account_ids or (
    self._execute_config.get("account_ids") if self._execute_config else None
)
P2: openai() uses account_ids or ..., so an explicit empty list cannot override the constructor execute config as documented.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 496:
<comment>openai() uses `account_ids or ...`, so an explicit empty list cannot override the constructor execute config as documented.</comment>
<file context>
@@ -444,6 +464,44 @@ def get_meta_tools(
+ toolset = StackOneToolSet()
+ tools = toolset.openai(mode="search_and_execute")
+ """
+ effective_account_ids = account_ids or (
+ self._execute_config.get("account_ids") if self._execute_config else None
+ )
</file context>
Suggested change:
effective_account_ids = account_ids if account_ids is not None else (
    self._execute_config.get("account_ids") if self._execute_config else None
)
examples/meta_tools_example.py
Outdated
)

# 2. Get meta tools in OpenAI format
meta_tools = toolset.get_meta_tools()
this is not what we agreed. we said toolset().to_openai()
@willleeney Just double checking: toolset().to_openai() or toolset.openai()? As per your previous comment, do we agree on toolset.openai()?
Also, should I write the toolset.langchain/crewai/pydantic variants as part of this PR, or should we do that separately?
I think to_openai makes more linguistic sense. But I was speaking more about getting rid of get_meta_tools()
stackone_ai/meta_tools.py
Outdated
class MetaToolsOptions(BaseModel):
    """Options for get_meta_tools()."""

    account_ids: list[str] | None = None
    search: Any | None = Field(default=None, description="Search mode: 'auto', 'semantic', or 'local'")
    connector: str | None = None
    top_k: int | None = None
    min_similarity: float | None = None
This is all removed by now
stackone_ai/meta_tools.py
Outdated
def create_meta_tools(
    toolset: StackOneToolSet,
    options: MetaToolsOptions | None = None,
) -> Tools:
    """Create tool_search + tool_execute for LLM-driven workflows.

    Args:
        toolset: The StackOneToolSet to delegate search and execution to.
        options: Options to scope search and execution.

    Returns:
        Tools collection containing tool_search and tool_execute.
    """
    opts = options or MetaToolsOptions()
    api_key = toolset.api_key

    # tool_search
    search_tool = _create_search_tool(api_key)
    search_tool._toolset = toolset
    search_tool._options = opts

    # tool_execute
    execute_tool = _create_execute_tool(api_key)
    execute_tool._toolset = toolset
    execute_tool._options = opts

    return Tools([search_tool, execute_tool])
Sorry, the meta_tool removal changes had not been pushed by this time!
stackone_ai/toolset.py
Outdated
def get_meta_tools(
    self,
    *,
    account_ids: list[str] | None = None,
    search: SearchMode | None = None,
    connector: str | None = None,
    top_k: int | None = None,
    min_similarity: float | None = None,
meta_tool removal changes not pushed by this time
1 issue found across 3 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="stackone_ai/toolset.py">
<violation number="1" location="stackone_ai/toolset.py:529">
P1: `execute()` drops the account scoping used by `openai(mode="search_and_execute")`, so manual agent loops can execute tools against the wrong accounts.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
6 issues found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="examples/meta_tools_example.py">
<violation number="1" location="examples/meta_tools_example.py:159">
P3: Use `response.text` here. Gemini returns structured content blocks in `AIMessage.content`, so this prints the raw payload instead of the final answer.</violation>
<violation number="2" location="examples/meta_tools_example.py:167">
P2: Handle missing MCP dependencies before invoking LangChain tools. `tool_execute` calls `fetch_tools()` under the hood, so this example crashes at the first tool call when `stackone-ai[mcp]` is not installed.</violation>
</file>
<file name="stackone_ai/toolset.py">
<violation number="1" location="stackone_ai/toolset.py:638">
P1: Expose a public `get_meta_tools()` API; `_build_tools()` leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.</violation>
<violation number="2" location="stackone_ai/toolset.py:646">
P2: Don't persist `account_ids` on the toolset here; a scoped meta-tools call leaks into later `fetch_tools()`/`search_tools()` calls.</violation>
<violation number="3" location="stackone_ai/toolset.py:719">
P1: The LangChain meta-tools path makes optional args required, because `to_langchain()` ignores the new schemas' `nullable` fields.</violation>
</file>
<file name="tests/test_meta_tools.py">
<violation number="1" location="tests/test_meta_tools.py:45">
P2: These tests bypass `StackOneToolSet.get_meta_tools()`, so the new public API can break without failing the suite.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
)

if mode == "search_and_execute":
    return self._build_tools(account_ids=effective_account_ids).to_langchain()
P1: The LangChain meta-tools path makes optional args required, because to_langchain() ignores the new schemas' nullable fields.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 719:
<comment>The LangChain meta-tools path makes optional args required, because `to_langchain()` ignores the new schemas' `nullable` fields.</comment>
<file context>
@@ -499,10 +687,38 @@ def openai(
+ )
+
+ if mode == "search_and_execute":
+ return self._build_tools(account_ids=effective_account_ids).to_langchain()
+
+ return self.fetch_tools(account_ids=effective_account_ids).to_langchain()
</file context>
return SearchTool(self, config=config)

def _build_tools(self, account_ids: list[str] | None = None) -> Tools:
P1: Expose a public get_meta_tools() API; _build_tools() leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 638:
<comment>Expose a public `get_meta_tools()` API; `_build_tools()` leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.</comment>
<file context>
@@ -414,56 +635,23 @@ def get_search_tool(self, *, search: SearchMode | None = None) -> SearchTool:
- # Pass to LangChain
- tools = meta_tools.to_langchain()
- """
+ def _build_tools(self, account_ids: list[str] | None = None) -> Tools:
+ """Build tool_search + tool_execute tools scoped to this toolset."""
if self._search_config is None:
</file context>
for tool_call in response.tool_calls:
    print(f" -> {tool_call['name']}({json.dumps(tool_call['args'])})")
    tool = tools_by_name[tool_call["name"]]
    result = tool.invoke(tool_call["args"])
P2: Handle missing MCP dependencies before invoking LangChain tools. tool_execute calls fetch_tools() under the hood, so this example crashes at the first tool call when stackone-ai[mcp] is not installed.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 167:
<comment>Handle missing MCP dependencies before invoking LangChain tools. `tool_execute` calls `fetch_tools()` under the hood, so this example crashes at the first tool call when `stackone-ai[mcp]` is not installed.</comment>
<file context>
@@ -110,14 +111,74 @@ def example_gemini() -> None:
+ for tool_call in response.tool_calls:
+ print(f" -> {tool_call['name']}({json.dumps(tool_call['args'])})")
+ tool = tools_by_name[tool_call["name"]]
+ result = tool.invoke(tool_call["args"])
+ messages.append(ToolMessage(content=json.dumps(result), tool_call_id=tool_call["id"]))
+
</file context>
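One way to address the missing-dependency crash, sketched with hypothetical names (`safe_invoke` and `FakeTool` are stand-ins; the `stackone-ai[mcp]` extra is taken from the comment above):

```python
def safe_invoke(tool, args: dict) -> object:
    """Invoke a LangChain-style tool, converting a missing optional
    dependency into an actionable error payload instead of a crash."""
    try:
        return tool.invoke(args)
    except ImportError as exc:
        return {"error": f"Missing optional dependency ({exc}). Try: pip install 'stackone-ai[mcp]'"}

# Stand-in simulating tool_execute failing because the MCP package is absent.
class FakeTool:
    def invoke(self, args: dict) -> dict:
        raise ImportError("No module named 'mcp'")

result = safe_invoke(FakeTool(), {"query": "calendly"})
print(result["error"])  # actionable message; the agent loop keeps running
```

Catching `ImportError` at the call site keeps the example runnable even when the optional extra is not installed, while still surfacing the install hint.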
)

if account_ids:
    self._account_ids = account_ids
P2: Don't persist account_ids on the toolset here; a scoped meta-tools call leaks into later fetch_tools()/search_tools() calls.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 646:
<comment>Don't persist `account_ids` on the toolset here; a scoped meta-tools call leaks into later `fetch_tools()`/`search_tools()` calls.</comment>
<file context>
@@ -414,56 +635,23 @@ def get_search_tool(self, *, search: SearchMode | None = None) -> SearchTool:
- from stackone_ai.meta_tools import MetaToolsOptions, create_meta_tools
+ if account_ids:
+ self._account_ids = account_ids
- options = MetaToolsOptions(
</file context>
# 4. If no tool calls, print final answer and stop
if not response.tool_calls:
    print(f"Answer: {response.content}")
P3: Use response.text here. Gemini returns structured content blocks in AIMessage.content, so this prints the raw payload instead of the final answer.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 159:
<comment>Use `response.text` here. Gemini returns structured content blocks in `AIMessage.content`, so this prints the raw payload instead of the final answer.</comment>
<file context>
@@ -110,14 +111,74 @@ def example_gemini() -> None:
+
+ # 4. If no tool calls, print final answer and stop
+ if not response.tool_calls:
+ print(f"Answer: {response.content}")
+ break
+
</file context>
Suggested change:
print(f"Answer: {response.text}")
|
@willleeney Made changes and removed
Summary
- Adds a get_meta_tools() method to StackOneToolSet that returns tool_search + tool_execute as LLM-callable tools, so agents no longer need to load all tools upfront
- Returns a Tools collection so all existing framework converters work automatically (.to_openai(), .to_langchain(), .to_anthropic(), etc.)

Details
- tool_search delegates to search_tools() internally and returns tool names, descriptions, and parameter schemas
- tool_execute fetches the real tool by name and calls execute(); API errors are returned to the LLM (not thrown) so the agent can retry with different parameters
- FeedbackTool (StackOneTool subclass with custom execute override)
- examples/meta_tools_example.py demonstrates the full agent loop with Calendly using OpenAI/Gemini and LangChain
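The search-then-execute flow described above can be sketched with an in-memory catalog. This is a hypothetical stand-in: tool names are invented, and the real tool_search/tool_execute are backed by the StackOne API, not a dict.

```python
# Hypothetical in-memory catalog standing in for the StackOne tool index.
CATALOG = {
    "calendly_list_events": "List scheduled Calendly events",
    "hris_list_employees": "List employees from an HRIS provider",
}

def tool_search(query: str) -> list[dict]:
    """Return name + description for tools matching the query."""
    q = query.lower()
    return [
        {"name": name, "description": desc}
        for name, desc in CATALOG.items()
        if q in name.lower() or q in desc.lower()
    ]

def tool_execute(name: str, arguments: dict) -> dict:
    """Run a discovered tool; errors come back as data, not exceptions."""
    if name not in CATALOG:
        return {"error": f"Unknown tool: {name}"}
    return {"tool": name, "arguments": arguments}  # placeholder result

hits = tool_search("calendly")
print(hits[0]["name"])                   # calendly_list_events
print(tool_execute(hits[0]["name"], {}))
```

The LLM only ever sees these two tools in its schema; the full catalog stays server-side, which is where the token savings come from.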
Summary by cubic
Adds LLM-driven search-and-execute mode so agents can discover and run StackOne tools on demand, with lower token cost and optional account scoping. Removes the old get_meta_tools() API.

New Features
- StackOneToolSet.openai(mode="search_and_execute") and .langchain(mode="search_and_execute") return tool_search and tool_execute; they honor the constructor execute.account_ids and per-call account_ids.
- StackOneToolSet.execute(tool_name, arguments) for manual agent loops; caches tools and returns API errors as dicts.
- ExecuteToolsConfig for default account scoping; search is off by default, enable it via the search constructor option. Example: examples/agent_tool_search.py.

Bug Fixes
- Map object→dict and array→list, and return tools as a Sequence to resolve CI/type errors.

Written for commit 97f59f4. Summary will update on new commits.