handle_ipc_connection:route:chat:dispatch:execute_task:generate:process_task:exe...

2025-10-10 18:05 Response: 2.53 s Status: success

🚨 Error Message

2025-10-10T14:26:05.544697+08:00 ERROR handle_ipc_connection:route:chat:dispatch:execute_task:generate:process_task:execute_plan: execute_plan:llm_raw_chat:llm_raw_chat:llm_raw_chat_custom_model: llm_client::llm_client: [LLM Client][Chat] Error sending request: url: "https://ssvip.dmxapi.com/v1/chat/completions", status: 404, text: "{\"error\":{\"message\":\"This model is only supported in v1/responses and not in v1/chat/completions. (tid: 2025101006260593753184461314102)\",\"type\":\"invalid_request_error\",\"param\":\"model\", \"code\":null}}" trace_id="07232d58d79f9d47f6b908f4a5d0c907" session_id=68e893b1e9f463e25b36ffaf task_id=68e8a6f94dea3174577b5706 message_id=68e8a6f94dea3174577b5705 session_id=68e893b1e9f463e25b36ffaf

🤖 AI Solution

[LLM Client][Chat] Error sending request: url: "https://ssvip.dmxapi.com/v1/chat/completions",
status: 404, text: "{\"error\":{\"message\":\"This model is only supported in v1/responses and
not in v1/chat/completions. (tid:
2025101006260593753184461314102)\",\"type\":\"invalid_request_error\",\"param\":\"model\",
\"code\":null}}"


Root Cause
The model you requested does not support the v1/chat/completions endpoint; the provider only serves it through v1/responses, so the gateway rejects the request with HTTP 404 and an invalid_request_error on the model parameter.

Solution
Change the request endpoint from v1/chat/completions to v1/responses. Note that on OpenAI-compatible gateways the Responses API also expects a slightly different request body (for example, input instead of messages), so the payload usually needs to be adapted as well, not just the URL.
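The endpoint swap above can be sketched as a small payload conversion. This is a minimal sketch, assuming the gateway at ssvip.dmxapi.com follows the OpenAI-compatible convention (input instead of messages, max_output_tokens instead of max_tokens); the model name and the to_responses_payload helper are hypothetical, and no request is actually sent here.

```python
# Base URLs taken from the error log; only the path changes.
CHAT_URL = "https://ssvip.dmxapi.com/v1/chat/completions"
RESPONSES_URL = "https://ssvip.dmxapi.com/v1/responses"


def to_responses_payload(chat_payload: dict) -> dict:
    """Map a chat/completions payload onto the Responses API shape (assumed
    OpenAI-compatible: "messages" becomes "input", "max_tokens" becomes
    "max_output_tokens")."""
    payload = {
        "model": chat_payload["model"],
        # Role/content message dicts carry over unchanged under "input".
        "input": chat_payload["messages"],
    }
    if "max_tokens" in chat_payload:
        payload["max_output_tokens"] = chat_payload["max_tokens"]
    return payload


# Hypothetical request that would have triggered the 404 above.
chat_payload = {
    "model": "o1-pro",  # placeholder for a responses-only model
    "messages": [{"role": "user", "content": "hello"}],
    "max_tokens": 256,
}
print(to_responses_payload(chat_payload))
```

The converted payload would then be POSTed to RESPONSES_URL instead of CHAT_URL, with the same Authorization header as before.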

Prevention

  • When integrating an LLM API, read the provider's official documentation carefully and confirm which endpoints each model supports before wiring it in.

  • Keep the API endpoint configurable per model rather than hardcoding it, so models that require a different endpoint can be routed without code changes.
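The second point above can be sketched as a per-model endpoint table. This is a minimal sketch under stated assumptions: the model names and their endpoint assignments are hypothetical, and in practice the mapping should come from configuration or the provider's published model list rather than a literal in code.

```python
BASE_URL = "https://ssvip.dmxapi.com"

# Hypothetical routing table; real entries belong in configuration.
ENDPOINT_BY_MODEL = {
    "o1-pro": "/v1/responses",           # assumed responses-only model
    "gpt-4o": "/v1/chat/completions",    # assumed regular chat model
}
DEFAULT_ENDPOINT = "/v1/chat/completions"


def endpoint_for(model: str) -> str:
    """Return the full URL for the endpoint this model is routed to,
    falling back to chat/completions for unknown models."""
    return BASE_URL + ENDPOINT_BY_MODEL.get(model, DEFAULT_ENDPOINT)


print(endpoint_for("o1-pro"))
```

With this in place, adding support for a new responses-only model is a one-line config change instead of an edit to the request-sending code.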