
feat(meta-tools): add get_meta_tools() for LLM-driven search and execute#151

Open
shashi-stackone wants to merge 10 commits into main from meta_tools

Conversation


@shashi-stackone shashi-stackone commented Mar 11, 2026

Summary

  • Add get_meta_tools() method to StackOneToolSet that returns tool_search +
    tool_execute as LLM-callable tools
  • The LLM autonomously searches for relevant tools and executes them, replacing the need to
    load all tools upfront
  • Returns a Tools collection so all existing framework converters work automatically
    (.to_openai(), .to_langchain(), .to_anthropic(), etc.)

Details

  • tool_search delegates to search_tools() internally, returns tool names, descriptions,
    and parameter schemas
  • tool_execute fetches the real tool by name and calls execute() — API errors are
    returned to the LLM (not thrown) so the agent can retry with different parameters
  • Follows the same pattern as FeedbackTool (StackOneTool subclass with custom execute
    override)
  • New example examples/meta_tools_example.py demonstrates the full agent loop with Calendly
    using OpenAI/Gemini and LangChain
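
The full agent loop the example demonstrates can be sketched in miniature. This is a hedged, self-contained mock: `fake_tool_search`/`fake_tool_execute` and the canned Calendly tool name are stand-ins for the PR's real `SearchMetaTool`/`ExecuteMetaTool`, not the SDK's actual implementations.

```python
import json

def fake_tool_search(arguments: str) -> dict:
    # Stand-in for tool_search; the real version delegates to search_tools()
    parsed = json.loads(arguments)
    return {"tools": [{"name": "calendly_list_events", "parameters": {}}],
            "query": parsed["query"]}

def fake_tool_execute(arguments: str) -> dict:
    # Stand-in for tool_execute; the real version fetches the tool and runs it
    parsed = json.loads(arguments)
    if parsed["tool_name"] != "calendly_list_events":
        return {"error": f'Tool "{parsed["tool_name"]}" not found.'}
    return {"events": ["standup", "1:1"]}

def run_loop() -> dict:
    # 1. The LLM asks tool_search for tools matching its intent
    found = fake_tool_search(json.dumps({"query": "list calendar events"}))
    # 2. The LLM picks a name from the results and calls tool_execute
    name = found["tools"][0]["name"]
    return fake_tool_execute(json.dumps({"tool_name": name, "parameters": {}}))

print(run_loop())  # {'events': ['standup', '1:1']}
```

The real loop adds an LLM between the two steps; the shape of the search-then-execute handoff is the same.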

Summary by cubic

Adds LLM-driven search-and-execute mode so agents can discover and run StackOne tools on demand, with lower token cost and optional account scoping. Removes the old get_meta_tools() API.

  • New Features

    • StackOneToolSet.openai(mode="search_and_execute") and .langchain(mode="search_and_execute") return tool_search and tool_execute; honor constructor execute.account_ids and per-call account_ids.
    • StackOneToolSet.execute(tool_name, arguments) for manual agent loops; caches tools and returns API errors as dicts.
    • ExecuteToolsConfig for default account scoping; search is off by default—enable via the search constructor option. Example: examples/agent_tool_search.py.
  • Bug Fixes

    • LangChain typing: map object→dict and array→list, and return tools as a Sequence to resolve CI/type errors.

Written for commit 97f59f4. Summary will update on new commits.

Copilot AI review requested due to automatic review settings March 11, 2026 00:14
@shashi-stackone shashi-stackone changed the title add LLM-driven tool_search and tool_execute feat(meta-tools): add get_meta_tools() for LLM-driven search and execute Mar 11, 2026

@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 3 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="stackone_ai/meta_tools.py">

<violation number="1" location="stackone_ai/meta_tools.py:29">
P2: Validate `MetaToolsOptions.search`, `top_k`, and `min_similarity` to prevent invalid meta-tool configuration from silently producing incorrect search behavior.</violation>
</file>

<file name="examples/meta_tools_example.py">

<violation number="1" location="examples/meta_tools_example.py:132">
P2: Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

if tool is None:
    result = {"error": f"Unknown tool: {tool_call.function.name}"}
else:
    result = tool.execute(tool_call.function.arguments)

@cubic-dev-ai cubic-dev-ai bot Mar 11, 2026


P2: Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 132:

<comment>Handle tool execution exceptions in the loop; otherwise one malformed tool call can crash the example instead of letting the agent recover.</comment>

<file context>
@@ -0,0 +1,187 @@
+            if tool is None:
+                result = {"error": f"Unknown tool: {tool_call.function.name}"}
+            else:
+                result = tool.execute(tool_call.function.arguments)
+
+            messages.append(
</file context>
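
One way to address this, sketched with a stubbed exception class (the real `StackOneError` lives in `stackone_ai`): wrap the execute call so a failed tool call becomes an error payload the agent can recover from.

```python
class StackOneError(Exception):  # stand-in for stackone_ai's exception class
    pass

def flaky_execute(arguments: str) -> dict:
    # Simulates a tool call that fails at the transport level
    raise StackOneError("Request failed: connection refused")

def safe_call(execute, arguments: str) -> dict:
    try:
        return execute(arguments)
    except StackOneError as exc:
        # Return the failure to the LLM instead of crashing the loop
        return {"error": str(exc)}

result = safe_call(flaky_execute, "{}")
print(result)  # {'error': 'Request failed: connection refused'}
```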


Copilot AI left a comment


Pull request overview

Adds an agent-friendly “meta tools” interface to the SDK so LLMs can dynamically discover and execute StackOne tools via two stable tool definitions, instead of loading the entire catalog into the prompt.

Changes:

  • Add StackOneToolSet.get_meta_tools() returning a Tools collection containing tool_search and tool_execute.
  • Introduce stackone_ai/meta_tools.py implementing the two meta tools and a factory to build them.
  • Add examples/meta_tools_example.py demonstrating an agent loop using the meta tools with OpenAI/Gemini and LangChain.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

stackone_ai/toolset.py: Exposes get_meta_tools() as the public entry point and wires it to the meta tools factory.
stackone_ai/meta_tools.py: Implements tool_search (delegates to search_tools) and tool_execute (fetches by name and executes).
examples/meta_tools_example.py: Demonstrates end-to-end usage of meta tools in an LLM-driven tool loop.


Comment on lines +396 to +445
def get_meta_tools(
    self,
    *,
    account_ids: list[str] | None = None,
    search: SearchMode | None = None,
    connector: str | None = None,
    top_k: int | None = None,
    min_similarity: float | None = None,
) -> Tools:
    """Get LLM-callable meta tools (tool_search + tool_execute) for agent-driven workflows.

    Returns a Tools collection that can be passed directly to any LLM framework.
    The LLM uses tool_search to discover available tools, then tool_execute to run them.

    Args:
        account_ids: Account IDs to scope tool discovery and execution
        search: Search mode ('auto', 'semantic', or 'local')
        connector: Optional connector filter (e.g. 'bamboohr')
        top_k: Maximum number of search results. Defaults to 5.
        min_similarity: Minimum similarity score threshold 0-1

    Returns:
        Tools collection containing tool_search and tool_execute

    Example::

        toolset = StackOneToolSet(account_id="acc-123")
        meta_tools = toolset.get_meta_tools()

        # Pass to OpenAI
        tools = meta_tools.to_openai()

        # Pass to LangChain
        tools = meta_tools.to_langchain()
    """
    if self._search_config is None:
        raise ToolsetConfigError(
            "Search is disabled. Initialize StackOneToolSet with a search config to enable."
        )

    from stackone_ai.meta_tools import MetaToolsOptions, create_meta_tools

    options = MetaToolsOptions(
        account_ids=account_ids,
        search=search,
        connector=connector,
        top_k=top_k,
        min_similarity=min_similarity,
    )
    return create_meta_tools(self, options)

Copilot AI Mar 11, 2026


This PR introduces get_meta_tools() but there are no accompanying tests. Since the repo has good coverage for StackOneToolSet behavior, consider adding unit tests that (1) tool_search delegates to search_tools() with defaults/overrides, (2) tool_execute returns structured errors on StackOneAPIError, and (3) multi-account scoping behaves as intended.
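
A minimal sketch of test (1), the delegation check. The `search_execute` function here is a hypothetical stand-in mimicking how `SearchMetaTool.execute()` is described to delegate; the mock-based assertion pattern is the point, not the exact SDK surface.

```python
import json
from unittest.mock import Mock

toolset = Mock()
toolset.search_tools.return_value = []

def search_execute(arguments: str, default_top_k: int = 5) -> dict:
    # Hypothetical stand-in: parse the LLM's arguments, delegate to search_tools()
    parsed = json.loads(arguments)
    top_k = parsed.get("top_k")
    toolset.search_tools(
        query=parsed["query"],
        top_k=top_k if top_k is not None else default_top_k,
    )
    return {"tools": []}

search_execute(json.dumps({"query": "create employee"}))
# Assert the default top_k was applied when the LLM omitted it
toolset.search_tools.assert_called_once_with(query="create employee", top_k=5)
```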

Comment on lines +54 to +59
class SearchMetaTool(StackOneTool):
    """LLM-callable tool that searches for available StackOne tools."""

    _toolset: Any = None
    _options: MetaToolsOptions = None  # type: ignore[assignment]


Copilot AI Mar 11, 2026


SearchMetaTool/ExecuteMetaTool store _toolset and _options as normal model attributes. Because these classes inherit from Pydantic BaseModel (via StackOneTool), these should be PrivateAttr to avoid accidental serialization/model_dump of a large StackOneToolSet object (and to match how _execute_config, _api_key, etc. are handled in StackOneTool).
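
The difference is easy to demonstrate in isolation (Pydantic v2). This sketch uses an illustrative model, not the PR's actual classes: attributes declared with PrivateAttr stay out of model_dump(), while plain annotated attributes become serialized fields.

```python
from typing import Any

from pydantic import BaseModel, PrivateAttr

class MetaToolSketch(BaseModel):
    name: str
    _toolset: Any = PrivateAttr(default=None)  # excluded from serialization

tool = MetaToolSketch(name="tool_search")
tool._toolset = object()  # a large toolset object stays out of dumps
print(tool.model_dump())  # {'name': 'tool_search'}
```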

{
    "name": t.name,
    "description": t.description,
    "parameters": t.parameters.properties,

Copilot AI Mar 11, 2026


tool_search returns "parameters": t.parameters.properties, which drops the top-level schema info (at least the type). Since the intent is to return a “parameter schema” the LLM can follow, consider returning the full ToolParameters schema (e.g., include both type and properties) rather than only properties.

Suggested change
"parameters": t.parameters.properties,
"parameters": t.parameters.model_dump(),

Comment on lines +135 to +141
all_tools = self._toolset.fetch_tools(account_ids=self._options.account_ids)
target = all_tools.get_tool(parsed.tool_name)

if target is None:
    return {
        "error": f'Tool "{parsed.tool_name}" not found. Use tool_search to find available tools.',
    }

Copilot AI Mar 11, 2026


tool_execute selects a tool only by name (all_tools.get_tool(parsed.tool_name)). If account_ids contains multiple accounts, tools with the same name but different account contexts will collide in Tools._tool_map, which can cause execution against the wrong account. Consider requiring an account_id argument (when multiple accounts are in scope) and selecting the matching tool instance by both name and get_account_id().

Comment on lines +135 to +136
all_tools = self._toolset.fetch_tools(account_ids=self._options.account_ids)
target = all_tools.get_tool(parsed.tool_name)

Copilot AI Mar 11, 2026


tool_execute calls fetch_tools() on every invocation, which re-downloads the full tool catalog each time and can dominate latency/cost in agent loops with multiple tool calls. Consider caching the fetched Tools per (account_ids, base_url) within the meta tool instance (optionally with a TTL) so repeated executions don’t repeatedly hit /mcp.

Comment on lines +191 to +197
def _create_search_tool(api_key: str, opts: MetaToolsOptions) -> SearchMetaTool:
    name = "tool_search"
    description = (
        "Search for available tools by describing what you need. "
        "Returns matching tool names, descriptions, and parameter schemas. "
        "Use the returned parameter schemas to know exactly what to pass when calling tool_execute."
    )

Copilot AI Mar 11, 2026


_create_search_tool(api_key: str, opts: MetaToolsOptions) takes opts but never uses it. Consider removing the parameter (and the corresponding argument at the callsite) to avoid confusion, or use it to set defaults that are reflected in the tool’s description/schema.

    "Use tool_search first to find available tools. "
    "The parameters field must match the parameter schema returned by tool_search. "
    "Pass parameters as a nested object matching the schema structure."
)

Copilot AI Mar 11, 2026


_create_execute_tool(api_key: str, opts: MetaToolsOptions) takes opts but never uses it. Consider removing the parameter (and the corresponding argument at the callsite) to avoid confusion, or use it to set defaults that are reflected in the tool’s description/schema.

Suggested change
)
)
# Keep `opts` in the signature for API compatibility but reference it to avoid unused-parameter warnings.
_ = opts

Comment on lines +31 to +34
top_k: int | None = None
min_similarity: float | None = None



Copilot AI Mar 11, 2026


MetaToolsOptions accepts top_k and min_similarity but doesn’t validate their ranges. Since these values are passed through to search_tools() (and can affect slicing/thresholding), negative or out-of-range values can produce incorrect behavior. Consider adding Pydantic Field constraints (e.g., top_k ge=1/le=50 and min_similarity ge=0/le=1) and optionally normalizing empty strings for connector.

Suggested change
top_k: int | None = None
min_similarity: float | None = None
top_k: int | None = Field(default=None, ge=1, le=50)
min_similarity: float | None = Field(default=None, ge=0, le=1)

@field_validator("connector", mode="before")
@classmethod
def normalize_connector(cls, v: str | None) -> str | None:
    if v is None:
        return None
    if isinstance(v, str) and not v.strip():
        return None
    return v

Comment on lines +144 to +150
except StackOneAPIError as exc:
    # Return API errors to the LLM so it can adjust parameters and retry
    return {
        "error": str(exc),
        "status_code": exc.status_code,
        "tool_name": parsed.tool_name if "parsed" in dir() else "unknown",
    }

Copilot AI Mar 11, 2026


The StackOneAPIError handler uses parsed.tool_name if "parsed" in dir() else "unknown". This is brittle and unnecessary here (a StackOneAPIError is only likely after parsed is created). Consider initializing tool_name before the try/except (or using locals().get("parsed")) to keep the handler simpler and avoid dir() checks.

Comment on lines +250 to +261
parameters = ToolParameters(
    type="object",
    properties={
        "tool_name": {
            "type": "string",
            "description": "Exact tool name from tool_search results",
        },
        "parameters": {
            "type": "object",
            "description": "Parameters for the tool. Pass {} if none needed.",
        },
    },

🔴 The to_langchain() type mapping in models.py:415-424 has no "object"→dict case — it falls through to the default python_type = str. This means tool_execute's parameters field (declared as type: object at meta_tools.py:257) gets str in the generated Pydantic args_schema, so Pydantic v2 will reject the dict value with a ValidationError when a LangChain agent invokes tool_execute. Add elif type_str == "object": python_type = dict to the mapping, and similarly "array"→list.

Extended reasoning...

What the bug is

The to_langchain() method on StackOneTool (models.py:415-431) converts tool parameter schemas into a dynamically-created Pydantic BaseModel for LangChain's args_schema. It maps JSON Schema types to Python types: number→float, integer→int, boolean→bool, with str as the fallback default (line 416). There is no mapping for "object"→dict.

How it manifests

The new tool_execute meta tool declares a parameters property with "type": "object" (meta_tools.py:257-259). When to_langchain() processes this, the "object" type falls through to the default, producing parameters: str in the generated args_schema class.

Step-by-step proof

  1. Call toolset.get_meta_tools() — returns a Tools collection containing tool_execute.
  2. Call .to_langchain() on the collection — iterates each tool, calling StackOneTool.to_langchain().
  3. For tool_execute, the method iterates self.parameters.properties, which includes {"parameters": {"type": "object", "description": "..."}} .
  4. At line 418, type_str = details.get("type", "string") yields "object".
  5. The if/elif chain checks "number", "integer", "boolean" — none match "object".
  6. python_type stays as the default str from line 416.
  7. The dynamically created Pydantic model has parameters: str.
  8. When an LLM invokes tool_execute, it produces {"parameters": {...}} — a dict value.
  9. LangChain passes this to Pydantic v2 validation, which rejects dict for a str-typed field with ValidationError: Input should be a valid string.
  10. The tool call fails before _run() is ever reached.

Impact

The PR description claims "all existing framework converters work automatically" and the example example_langchain_meta_tools explicitly demonstrates meta_tools.to_langchain(). In practice, tool_execute — the core tool for LLM-driven execution — is completely broken when used through LangChain. The tool_search tool works fine (it only has string and integer parameters), but tool_execute will always fail on validation.

How to fix

Add "object"→dict (and optionally "array"→list) to the type mapping in to_langchain() at models.py:419-424:

elif type_str == "object":
    python_type = dict
elif type_str == "array":
    python_type = list

While the missing mapping is technically pre-existing in to_langchain(), no previous tool had a user-facing object-typed parameter critical to its operation. This PR introduces the first tool where it causes a practical failure, so it should be fixed as part of this PR.

Comment on lines +90 to +97
"query": parsed.query,
}
except json.JSONDecodeError as exc:
raise StackOneError(f"Invalid JSON in arguments: {exc}") from exc
except Exception as error:
if isinstance(error, StackOneError):
raise
raise StackOneError(f"Error searching tools: {error}") from error

🔴 Pydantic ValidationError from invalid LLM input (e.g., top_k=-1, empty query) is caught by the broad except Exception handler in both SearchMetaTool.execute() and ExecuteMetaTool.execute(), wrapped as StackOneError, and raised — crashing the agent loop instead of returning an error dict the LLM can retry with. Similarly, in ExecuteMetaTool, network-level StackOneError (DNS failure, timeout) is re-raised while only StackOneAPIError is caught and returned. Both error types should be caught and returned as error dicts, consistent with the existing StackOneAPIError handling and the PR's stated design goal of returning errors to the LLM for retry.

Extended reasoning...

What the bug is

Both SearchMetaTool.execute() and ExecuteMetaTool.execute() have error handling that contradicts the PR's stated design goal: "API errors are returned to the LLM (not thrown) so the agent can retry with different parameters." Two categories of recoverable errors crash the agent loop instead of being returned as error dicts:

  1. Pydantic ValidationError from invalid LLM input (e.g., top_k=-1 violating ge=1, empty query violating min_length=1)
  2. StackOneError from network failures (DNS failure, timeout, connection refused) in ExecuteMetaTool only

How ValidationError crashes the loop

Step-by-step proof for SearchMetaTool:

  1. LLM calls tool_search with {"query": "employees", "top_k": -1}
  2. SearchInput(query="employees", top_k=-1) raises Pydantic ValidationError because top_k has ge=1 constraint
  3. The except json.JSONDecodeError handler (line 92) does not match
  4. The except Exception as error handler (line 94) catches it
  5. isinstance(error, StackOneError) is False, since ValidationError is not a StackOneError
  6. The error is wrapped as StackOneError("Error searching tools: ...") and raised
  7. The example agent loop (meta_tools_example.py line 131) calls tool.execute() with no try/except, so the loop crashes

The same flow applies to ExecuteMetaTool when ExecuteInput validation fails (e.g., empty tool_name).

How network errors crash the loop (ExecuteMetaTool only)

Step-by-step proof:

  1. LLM calls tool_execute with valid parameters
  2. target.execute() makes an HTTP request that fails with a DNS error
  3. In StackOneTool.execute() (models.py line 277-279), httpx.RequestError is caught and re-raised as StackOneError("Request failed: ...")
  4. Back in ExecuteMetaTool.execute(), the except StackOneAPIError handler (line 144) does NOT match — StackOneError is the parent class, not StackOneAPIError
  5. The except Exception as error handler (line 153) catches it
  6. isinstance(error, StackOneError) is True, so it is re-raised
  7. The agent loop crashes

This creates an inconsistency: HTTP 4xx/5xx errors (StackOneAPIError) are gracefully returned to the LLM, but network errors (StackOneError) crash the loop.

Addressing the refutation about intentional design

One verifier argued this follows an established pattern from FeedbackTool. However, there is a key difference: ExecuteMetaTool already breaks from the FeedbackTool pattern by explicitly catching StackOneAPIError and returning it as an error dict (lines 144-150). This demonstrates the PR author's intent to make meta tools more resilient for agent loops. The ValidationError and StackOneError cases are gaps in that intent, not intentional design choices. The FeedbackTool also catches ValueError explicitly (line 140), which partially addresses validation — the meta tools don't even do that.

Impact

In LLM-driven agent loops, malformed tool calls are common — LLMs frequently send slightly invalid parameters. The meta tools are specifically designed for autonomous LLM workflows (the PR description and example code make this clear). Crashing the entire loop on a validation error or transient network failure defeats the purpose of the error-returning design.

Fix

Catch pydantic.ValidationError before the broad except Exception in both meta tools and return it as an error dict. In ExecuteMetaTool, also catch StackOneError (not just StackOneAPIError) and return it as an error dict. This is consistent with the existing StackOneAPIError handling pattern already in ExecuteMetaTool.
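
The described fix can be sketched with stub exception classes mirroring the hierarchy the comment relies on (StackOneAPIError subclassing StackOneError; ValidationError standing in for pydantic's). The handler ordering matters: subclass before parent, validation before the API/network cases.

```python
class StackOneError(Exception):
    pass

class StackOneAPIError(StackOneError):
    def __init__(self, msg: str, status_code: int):
        super().__init__(msg)
        self.status_code = status_code

class ValidationError(Exception):  # stand-in for pydantic.ValidationError
    pass

def execute_meta(run) -> dict:
    try:
        return run()
    except ValidationError as exc:
        # Malformed LLM input becomes a retryable error dict
        return {"error": f"Invalid arguments: {exc}"}
    except StackOneAPIError as exc:
        # HTTP 4xx/5xx: keep the status code so the LLM can adjust
        return {"error": str(exc), "status_code": exc.status_code}
    except StackOneError as exc:
        # Network failures (timeouts, DNS) also go back to the LLM
        return {"error": str(exc)}

def network_fail():
    raise StackOneError("Request failed: timeout")

print(execute_meta(network_fail))  # {'error': 'Request failed: timeout'}
```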

Comment on lines +198 to +218
parameters = ToolParameters(
    type="object",
    properties={
        "query": {
            "type": "string",
            "description": (
                "Natural language description of what you need "
                '(e.g. "create an employee", "list time off requests")'
            ),
        },
        "connector": {
            "type": "string",
            "description": 'Optional connector filter (e.g. "bamboohr", "hibob")',
        },
        "top_k": {
            "type": "integer",
            "description": "Max results to return (1-50, default 5)",
            "minimum": 1,
            "maximum": 50,
        },
    },

🔴 Optional parameters connector and top_k in _create_search_tool (and parameters in _create_execute_tool) are missing "nullable": True, so to_openai_function() marks all properties as required. This forces the LLM to always provide values for these optional fields — it will hallucinate connector names instead of searching broadly. Add "nullable": True to the optional property dicts to match the convention in _normalize_schema_properties.

Extended reasoning...

What the bug is

The _create_search_tool function (meta_tools.py:198-218) defines the tool_search schema with three properties: query, connector, and top_k. Only query is truly required — connector and top_k are optional (their descriptions even say "Optional connector filter" and "default 5", and the SearchInput Pydantic model defaults both to None). Similarly, _create_execute_tool (meta_tools.py:250-262) defines parameters as optional (the ExecuteInput model defaults it to {}).

However, none of these optional properties include "nullable": True in their property dicts.

How to_openai_function() uses nullable

In models.py:377-379, the to_openai_function() method determines which properties are required:

if not prop.get("nullable", False):
    required.append(name)

Since connector, top_k, and parameters all lack "nullable": True, they are all added to the required array. The resulting OpenAI function schema tells the LLM that ALL three properties of tool_search are required, and both properties of tool_execute are required.

Step-by-step proof

  1. toolset.get_meta_tools() calls create_meta_tools() which calls _create_search_tool().
  2. The connector property dict is {"type": "string", "description": "Optional connector filter..."} — no nullable key.
  3. When meta_tools.to_openai() is called, to_openai_function() iterates over properties.
  4. For connector: prop.get("nullable", False) returns False (key missing), so connector is appended to required.
  5. Same for top_k: missing nullable, so it is marked required.
  6. The OpenAI tool definition now has "required": ["query", "connector", "top_k"].
  7. The LLM is told it MUST provide connector and top_k on every call.

The same issue affects the to_langchain() conversion path (models.py:415-431), where all properties without a default get Field(description=...) without a default value, making them required in the generated Pydantic schema.

Impact

The LLM is forced to provide connector and top_k values for every tool_search call, even when the user wants a broad search across all connectors. In practice, the LLM will hallucinate connector names (e.g., guessing "bamboohr" when the user has "hibob") or always specify a top_k, wasting tokens and potentially returning wrong results. For tool_execute, the LLM is forced to provide parameters even for tools that take no arguments.

The codebase convention

The _normalize_schema_properties method (toolset.py:896-898) already establishes the convention:

if name in required_fields:
    prop.setdefault("nullable", False)
else:
    prop.setdefault("nullable", True)

All MCP-fetched tools get this treatment automatically. The meta tools manually define their schemas and missed this convention.

Fix

Add "nullable": True to the connector and top_k property dicts in _create_search_tool, and to the parameters property dict in _create_execute_tool:

"connector": {
    "type": "string",
    "description": "Optional connector filter...",
    "nullable": True,
},
"top_k": {
    "type": "integer",
    "description": "Max results to return (1-50, default 5)",
    "minimum": 1,
    "maximum": 50,
    "nullable": True,
},

And in _create_execute_tool:

"parameters": {
    "type": "object",
    "description": "Parameters for the tool...",
    "nullable": True,
},

Comment on lines +73 to +74
connector=parsed.connector or self._options.connector,
top_k=parsed.top_k or self._options.top_k or 5,

🟡 Two minor code quality nits: (1) Lines 73-74 use or-chains (parsed.top_k or self._options.top_k or 5) instead of the is not None pattern used consistently elsewhere in the codebase (e.g., SearchTool.__call__, search_tools), which would silently skip falsy values like top_k=0. (2) Line 149 uses "parsed" in dir() to guard a variable reference, but dir() is not guaranteed to include locals per the Python spec, and parsed is always defined at that point anyway, making the check dead code. Consider using x if x is not None else default for (1) and removing the unnecessary guard for (2).

Extended reasoning...

or-chain pattern on lines 73-74

In SearchMetaTool.execute(), the code uses:

connector=parsed.connector or self._options.connector,
top_k=parsed.top_k or self._options.top_k or 5,

The or operator in Python treats any falsy value (0, empty string, False, etc.) as equivalent to None. This means if someone explicitly sets MetaToolsOptions(top_k=0), the or chain would skip past it and fall through to 5. Similarly, if parsed.connector were an empty string "", it would fall through to self._options.connector.

Practical impact is low but the pattern is inconsistent

In practice, parsed.top_k cannot be 0 due to the ge=1 constraint on SearchInput, and top_k=0 on MetaToolsOptions is semantically nonsensical. However, the rest of the codebase consistently uses the x if x is not None else default pattern — for example, SearchTool.__call__ uses top_k if top_k is not None else self._config.get("top_k"), and search_tools() uses top_k if top_k is not None else self._search_config.get("top_k"). The or-chain here is an inconsistency that could confuse future maintainers or cause subtle bugs if constraints change.

Step-by-step proof for the or-chain issue

  1. A caller creates MetaToolsOptions(top_k=0) — this is allowed since the field is int | None with no ge constraint.
  2. parsed.top_k is None (the LLM didn’t specify it).
  3. The expression None or 0 or 5 evaluates left to right; 0 is falsy, so the result is 5.
  4. The caller’s explicit top_k=0 is silently ignored.

The fix is straightforward:

connector=parsed.connector if parsed.connector is not None else self._options.connector,
top_k=parsed.top_k if parsed.top_k is not None else (self._options.top_k if self._options.top_k is not None else 5),

"parsed" in dir() on line 149

In ExecuteMetaTool.execute(), the StackOneAPIError handler uses parsed.tool_name if "parsed" in dir() else "unknown". The dir() builtin without arguments is documented as returning "the most relevant, rather than complete, information" — it is not guaranteed to include local variables on all Python implementations. However, tracing the control flow shows that parsed is always assigned on line 133 (via ExecuteInput(**raw_params)), and StackOneAPIError can only be raised on line 143 (target.execute(...)) which executes after parsed is assigned. So "parsed" in dir() is dead code that always evaluates to True. The proper fix is to either use parsed.tool_name directly (since it’s always defined) or initialize a sentinel like tool_name = "unknown" before the try block.

Summary

Both issues are minor code quality nits with negligible practical impact. The or-chain is the slightly more meaningful one since it represents an inconsistency with the established codebase pattern that could mask explicit falsy values in edge cases.


@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 4 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="stackone_ai/meta_tools.py">

<violation number="1" location="stackone_ai/meta_tools.py:91">
P2: `tool_execute` can throw uncaught `TypeError` on non-object JSON input because the narrowed exception handler omits that case.</violation>
</file>


Comment on lines +73 to +80
toolset = StackOneToolSet(search={"method": "semantic", "top_k": 3})

# Get meta tools — returns a Tools collection with tool_search + tool_execute
meta_tools = toolset.get_meta_tools(account_ids=_account_ids or None)
openai_tools = meta_tools.to_openai()

print(f"Meta tools: {[t.name for t in meta_tools]}")
print()


you can pass account_ids to StackOneToolSet

also don't we have toolset.to_openai() - why do we need a get_meta_tools method when StackOneToolSet is the meta tool collection?

Contributor Author


At the moment, we don't have toolset.to_openai(). It exists on Tools (the collection) and StackOneTool (individual tool).

However, we can add this to StackOneToolSet, but we need to set the API design to separate "all tools" vs the 2 tools (search, execute).
Asked Claude, which suggested the following options:

Option 1: Explicit parameter
  toolset.to_openai()              # meta tools (default)
  toolset.to_openai(mode="all")    # all tools (calls fetch_tools internally)
  Problem: "meta tools by default" is opinionated, could surprise users.

Option 2: Separate named methods
  toolset.to_openai_meta()         # 2 meta tools
  toolset.to_openai_all()          # all tools
  Problem: Method explosion — need this for every format (langchain, anthropic, etc.)

Option 3: Keep get_meta_tools() as-is, don't add shortcuts
  toolset.get_meta_tools().to_openai()   # 2 meta tools
  toolset.fetch_tools().to_openai()      # all tools
  This is already clear and symmetrical. The pattern is:
  - get_meta_tools() → Tools collection (2 tools)
  - fetch_tools() → Tools collection (N tools)
  - Both return Tools, both have .to_openai(), .to_langchain(), etc.

Each has its pros and cons; option 3 is what we already have.


I think option 1 is the best. @glebedel


toolset(search=None, execute=None) as the defaults
add toolset().openai()
add toolset().openai(mode=search_and_execute)
keep toolset(search=semantic)
keep toolset().search(mode=semantic)
keep toolset().get_search_tool(mode=local)

Contributor Author

@shashi-stackone shashi-stackone Mar 12, 2026

Happy with this approach ..

  • execute=None -> What would be the behaviour? Do we assume a default execution config, or not include tool_execute in the meta tools?

  • toolset.openai() (no mode) calls fetch_tools(), I assume

Can add more methods like .anthropic, .langchain, etc., but will focus on .openai for now.

Contributor Author

This is addressed now


the first example I looked at did not follow this pattern and had a .to_meta_tools()

Comment on lines +102 to +117
for iteration in range(max_iterations):
print(f"--- Iteration {iteration + 1} ---")

response = client.chat.completions.create(
model=model,
messages=messages,
tools=openai_tools,
tool_choice="auto",
)

choice = response.choices[0]

if not choice.message.tool_calls:
print(f"\n{provider} final response: {choice.message.content}")
break


just have one message so the example is simple

Contributor Author

Made example simple

Comment on lines +173 to +182
def main() -> None:
"""Run all meta tools examples."""
api_key = os.getenv("STACKONE_API_KEY")
if not api_key:
print("Set STACKONE_API_KEY to run these examples.")
return

example_openai_meta_tools()
example_langchain_meta_tools()


these examples should show usage with a client, i.e. how you use them with Anthropic and OpenAI, not just "here it is in LangChain format". How would they then use the LangChain format with an LLM?

we want init_stackone_tools -> pass to LLM client -> show how to use LLM client with stackone tools
(for each and all LLM clients / tool formats that we support)

Contributor Author

Updated!

@cubic-dev-ai cubic-dev-ai bot left a comment

1 issue found across 5 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="stackone_ai/toolset.py">

<violation number="1" location="stackone_ai/toolset.py:496">
P2: openai() uses `account_ids or ...`, so an explicit empty list cannot override the constructor execute config as documented.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment on lines +496 to +498
effective_account_ids = account_ids or (
self._execute_config.get("account_ids") if self._execute_config else None
)
@cubic-dev-ai cubic-dev-ai bot Mar 12, 2026

P2: openai() uses account_ids or ..., so an explicit empty list cannot override the constructor execute config as documented.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 496:

<comment>openai() uses `account_ids or ...`, so an explicit empty list cannot override the constructor execute config as documented.</comment>

<file context>
@@ -444,6 +464,44 @@ def get_meta_tools(
+            toolset = StackOneToolSet()
+            tools = toolset.openai(mode="search_and_execute")
+        """
+        effective_account_ids = account_ids or (
+            self._execute_config.get("account_ids") if self._execute_config else None
+        )
</file context>
Suggested change
effective_account_ids = account_ids or (
self._execute_config.get("account_ids") if self._execute_config else None
)
effective_account_ids = account_ids if account_ids is not None else (
self._execute_config.get("account_ids") if self._execute_config else None
)

)

# 2. Get meta tools in OpenAI format
meta_tools = toolset.get_meta_tools()


this is not what we agreed. we said toolset().to_openai()

Contributor Author

@willleeney Just double-checking: toolset().to_openai() or toolset.openai()? As per the previous comment, did we agree on toolset.openai()?

Contributor Author

Also, should I write toolset.langchain/crewai/pydantic as part of this PR, or should we do it separately?


I think to_openai makes more linguistic sense. But I was speaking more about getting rid of get_meta_tools().

Comment on lines +24 to +32
class MetaToolsOptions(BaseModel):
"""Options for get_meta_tools()."""

account_ids: list[str] | None = None
search: Any | None = Field(default=None, description="Search mode: 'auto', 'semantic', or 'local'")
connector: str | None = None
top_k: int | None = None
min_similarity: float | None = None


?

Contributor Author

This is all removed by now

Comment on lines +160 to +186
def create_meta_tools(
toolset: StackOneToolSet,
options: MetaToolsOptions | None = None,
) -> Tools:
"""Create tool_search + tool_execute for LLM-driven workflows.

Args:
toolset: The StackOneToolSet to delegate search and execution to.
options: Options to scope search and execution.

Returns:
Tools collection containing tool_search and tool_execute.
"""
opts = options or MetaToolsOptions()
api_key = toolset.api_key

# tool_search
search_tool = _create_search_tool(api_key)
search_tool._toolset = toolset
search_tool._options = opts

# tool_execute
execute_tool = _create_execute_tool(api_key)
execute_tool._toolset = toolset
execute_tool._options = opts

return Tools([search_tool, execute_tool])


why is this still here

Contributor Author

Sorry, the meta_tool removal changes were not pushed at this point!

Comment on lines +417 to +424
def get_meta_tools(
self,
*,
account_ids: list[str] | None = None,
search: SearchMode | None = None,
connector: str | None = None,
top_k: int | None = None,
min_similarity: float | None = None,


?

Contributor Author

The meta_tool removal changes were not pushed at this point.

@cubic-dev-ai cubic-dev-ai bot left a comment

1 issue found across 3 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="stackone_ai/toolset.py">

<violation number="1" location="stackone_ai/toolset.py:529">
P1: `execute()` drops the account scoping used by `openai(mode="search_and_execute")`, so manual agent loops can execute tools against the wrong accounts.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

@cubic-dev-ai cubic-dev-ai bot left a comment

6 issues found across 4 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="examples/meta_tools_example.py">

<violation number="1" location="examples/meta_tools_example.py:159">
P3: Use `response.text` here. Gemini returns structured content blocks in `AIMessage.content`, so this prints the raw payload instead of the final answer.</violation>

<violation number="2" location="examples/meta_tools_example.py:167">
P2: Handle missing MCP dependencies before invoking LangChain tools. `tool_execute` calls `fetch_tools()` under the hood, so this example crashes at the first tool call when `stackone-ai[mcp]` is not installed.</violation>
</file>

<file name="stackone_ai/toolset.py">

<violation number="1" location="stackone_ai/toolset.py:638">
P1: Expose a public `get_meta_tools()` API; `_build_tools()` leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.</violation>

<violation number="2" location="stackone_ai/toolset.py:646">
P2: Don't persist `account_ids` on the toolset here; a scoped meta-tools call leaks into later `fetch_tools()`/`search_tools()` calls.</violation>

<violation number="3" location="stackone_ai/toolset.py:719">
P1: The LangChain meta-tools path makes optional args required, because `to_langchain()` ignores the new schemas' `nullable` fields.</violation>
</file>

<file name="tests/test_meta_tools.py">

<violation number="1" location="tests/test_meta_tools.py:45">
P2: These tests bypass `StackOneToolSet.get_meta_tools()`, so the new public API can break without failing the suite.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

)

if mode == "search_and_execute":
return self._build_tools(account_ids=effective_account_ids).to_langchain()
@cubic-dev-ai cubic-dev-ai bot Mar 16, 2026

P1: The LangChain meta-tools path makes optional args required, because to_langchain() ignores the new schemas' nullable fields.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 719:

<comment>The LangChain meta-tools path makes optional args required, because `to_langchain()` ignores the new schemas' `nullable` fields.</comment>

<file context>
@@ -499,10 +687,38 @@ def openai(
+        )
+
+        if mode == "search_and_execute":
+            return self._build_tools(account_ids=effective_account_ids).to_langchain()
+
+        return self.fetch_tools(account_ids=effective_account_ids).to_langchain()
</file context>


return SearchTool(self, config=config)

def _build_tools(self, account_ids: list[str] | None = None) -> Tools:
@cubic-dev-ai cubic-dev-ai bot Mar 16, 2026

P1: Expose a public get_meta_tools() API; _build_tools() leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 638:

<comment>Expose a public `get_meta_tools()` API; `_build_tools()` leaves the new meta-tools feature inaccessible outside the OpenAI/LangChain wrappers.</comment>

<file context>
@@ -414,56 +635,23 @@ def get_search_tool(self, *, search: SearchMode | None = None) -> SearchTool:
-            # Pass to LangChain
-            tools = meta_tools.to_langchain()
-        """
+    def _build_tools(self, account_ids: list[str] | None = None) -> Tools:
+        """Build tool_search + tool_execute tools scoped to this toolset."""
         if self._search_config is None:
</file context>

for tool_call in response.tool_calls:
print(f" -> {tool_call['name']}({json.dumps(tool_call['args'])})")
tool = tools_by_name[tool_call["name"]]
result = tool.invoke(tool_call["args"])
@cubic-dev-ai cubic-dev-ai bot Mar 16, 2026

P2: Handle missing MCP dependencies before invoking LangChain tools. tool_execute calls fetch_tools() under the hood, so this example crashes at the first tool call when stackone-ai[mcp] is not installed.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 167:

<comment>Handle missing MCP dependencies before invoking LangChain tools. `tool_execute` calls `fetch_tools()` under the hood, so this example crashes at the first tool call when `stackone-ai[mcp]` is not installed.</comment>

<file context>
@@ -110,14 +111,74 @@ def example_gemini() -> None:
+        for tool_call in response.tool_calls:
+            print(f"  -> {tool_call['name']}({json.dumps(tool_call['args'])})")
+            tool = tools_by_name[tool_call["name"]]
+            result = tool.invoke(tool_call["args"])
+            messages.append(ToolMessage(content=json.dumps(result), tool_call_id=tool_call["id"]))
+
</file context>

)

if account_ids:
self._account_ids = account_ids
@cubic-dev-ai cubic-dev-ai bot Mar 16, 2026

P2: Don't persist account_ids on the toolset here; a scoped meta-tools call leaks into later fetch_tools()/search_tools() calls.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At stackone_ai/toolset.py, line 646:

<comment>Don't persist `account_ids` on the toolset here; a scoped meta-tools call leaks into later `fetch_tools()`/`search_tools()` calls.</comment>

<file context>
@@ -414,56 +635,23 @@ def get_search_tool(self, *, search: SearchMode | None = None) -> SearchTool:
 
-        from stackone_ai.meta_tools import MetaToolsOptions, create_meta_tools
+        if account_ids:
+            self._account_ids = account_ids
 
-        options = MetaToolsOptions(
</file context>


# 4. If no tool calls, print final answer and stop
if not response.tool_calls:
print(f"Answer: {response.content}")
@cubic-dev-ai cubic-dev-ai bot Mar 16, 2026

P3: Use response.text here. Gemini returns structured content blocks in AIMessage.content, so this prints the raw payload instead of the final answer.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At examples/meta_tools_example.py, line 159:

<comment>Use `response.text` here. Gemini returns structured content blocks in `AIMessage.content`, so this prints the raw payload instead of the final answer.</comment>

<file context>
@@ -110,14 +111,74 @@ def example_gemini() -> None:
+
+        # 4. If no tool calls, print final answer and stop
+        if not response.tool_calls:
+            print(f"Answer: {response.content}")
+            break
+
</file context>
Suggested change
print(f"Answer: {response.content}")
print(f"Answer: {response.text}")

@shashi-stackone
Contributor Author

@willleeney Made changes and removed the meta_tools concept completely. Now supporting just raw OpenAI (which needs execute) and LangChain (executed via LangChain). Examples are working well. The code might still need further tuning, but feel free to have another look whenever you get a chance.
