Fix #180: Handle unsupported response_format parameter for non-OpenAI providers #186

Open

hkc5 wants to merge 1 commit into 666ghj:main from hkc5:fix-180-response-format

Conversation

hkc5 commented Mar 14, 2026

Problem

Ontology generation fails with a 500 error when using LLM providers, such as Groq, that don't support the OpenAI-specific response_format={"type": "json_object"} parameter.

Root Cause

The code passes response_format={"type": "json_object"}, which is an OpenAI-specific feature. When the request goes to Groq or another provider that doesn't support this parameter, the API call fails with a 500 error.

Solution

Added graceful fallback logic in three files:

  1. llm_client.py - Core LLM client with automatic retry without response_format
  2. simulation_config_generator.py - Direct API calls with fallback
  3. oasis_profile_generator.py - Direct API calls with fallback

When a 400/500 error related to response_format is detected, the code automatically retries without the parameter and relies on the system prompt to enforce JSON output format.
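A minimal sketch of this retry pattern (function and parameter names here are illustrative, not the PR's actual identifiers; the real logic lives in llm_client.py):

```python
def chat_json_with_fallback(create_completion, model, messages):
    """Request JSON mode; if the provider rejects response_format, retry
    once without it and rely on the system prompt for JSON output.

    `create_completion` is any OpenAI-style chat.completions.create callable.
    """
    try:
        return create_completion(
            model=model,
            messages=messages,
            response_format={"type": "json_object"},
        )
    except Exception as exc:
        # Providers such as Groq surface a 400/500 error whose message
        # mentions the unsupported parameter; anything else is re-raised.
        if "response_format" in str(exc).lower():
            return create_completion(model=model, messages=messages)
        raise
```

Retrying exactly once (rather than looping) keeps the failure mode simple: an unrelated error still propagates to the caller on either attempt.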

Testing

  • Tested with Groq API (llama-3.1-8b-instant, llama-3.2-1b-preview, llama-3.1-70b-versatile)
  • Backwards compatible with OpenAI API
  • No breaking changes to existing functionality

Fixes #180

The LLM client now gracefully handles providers (like Groq) that don't
support the OpenAI-specific response_format={type: json_object} parameter.

Changes:
- llm_client.py: Added try/except with fallback when response_format fails
- simulation_config_generator.py: Same fallback logic for direct API calls
- oasis_profile_generator.py: Same fallback logic for direct API calls

When a 400/500 error related to response_format is detected, the code
automatically retries without the parameter and relies on the system
prompt to enforce JSON output format.
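The fallback leans on the system prompt to keep replies parseable; that side can be sketched as follows (the prompt wording and helper name are assumptions for illustration, not the PR's code):

```python
import json
import re

# Illustrative system prompt used when JSON mode is unavailable.
JSON_SYSTEM_PROMPT = (
    "Respond ONLY with a single valid JSON object. "
    "Do not add prose, explanations, or Markdown code fences."
)

def parse_json_reply(text: str) -> dict:
    """Parse a model reply that should be JSON but may still arrive
    wrapped in Markdown fences or surrounding prose."""
    # Grab the outermost {...} span so stray fences/prose are ignored.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))
```

Without the hard guarantee of JSON mode, a tolerant parser like this is what makes the system-prompt approach reliable in practice.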
@dosubot (bot) added labels on Mar 14, 2026: size:L (This PR changes 100-499 lines, ignoring generated files), LLM API (Any questions regarding the LLM API)

lslsl3q commented Mar 15, 2026

This is important; I hope it gets fixed soon.


lslsl3q commented Mar 15, 2026

Thanks, I've verified that it really works. I can finally use my own API.

hkc5 (Author) commented Mar 15, 2026

Hi @666ghj @hzr1937,

This PR fixes the 500 error when using non-OpenAI LLM providers (like Groq). The issue was the hardcoded response_format={"type": "json_object"} which isn't supported by all providers.

The fix adds a graceful fallback: when a provider rejects response_format, the client retries without it and relies on the system prompt for JSON enforcement.

A user has already verified this works (see comments above). Would appreciate a review when you have time!

Thanks



Development

Successfully merging this pull request may close these issues.

500 Internal Server Error during ontology generation / handleNewProject (GraphRAG build fails)

2 participants