Unhandled API in mapOptionsForApi: undefined at mapOptionsForApi

CairbreMadog OP — 2026-01-27, 17:13
I am trying to point a fresh install of clawdbot at an Ollama server on my network. I have tried the GLM 4.7 Flash model I have seen in tutorials, but I can't seem to point the config at the right place without crashing the gateway server with "Unhandled API in mapOptionsForApi: undefined at mapOptionsForApi". Here is an example of my JSON, with Signal configured for my other signal-cli VM:

```json
{
  "channels": {
    "signal": {
      "enabled": true,
      "account": "+1XXXXXXXXXX",
      "httpUrl": "http://192.168.1.xx:8090/",
      "autoStart": false,
      "allowFrom": ["+1XXXXXXXXXX"]
    }
  },
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.2.xx:11434/v1",
        "apiKey": "ollama-local",
        "api": "openai-completions",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM 4.7 Flash",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 108192,
            "maxTokens": 81920,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "ollama/glm-4.7-flash" }
    }
  }
}
```

Answer Overflow APP — 2026-01-27, 17:13
To help others find answers, you can mark your question as solved via Right click solution message -> Apps -> ✅ Mark Solution

Krill 🦐 APP — 2026-01-27, 17:15
Found it! The error Unhandled API in mapOptionsForApi: undefined means the model is missing its api field. You're defining api: "openai-completions" at the provider level, but each model needs its own api field. Fix your model definition:

```json
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://192.168.2.XX:11434/v1",
        "apiKey": "ollama-local",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM 4.7 Flash",
            "api": "openai-completions",  // ← ADD THIS to each model
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 108192,
            "maxTokens": 81920,
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
          }
        ]
      }
    }
  }
}
```

The provider-level api field isn't used; only the model-level one matters for routing in mapOptionsForApi().
📚 Docs: https://docs.clawd.bot/providers/ollama#explicit-setup-manual-models

CairbreMadog OP — 2026-01-27, 17:53
Cool, no longer getting the error. It now gives me these messages when I send anything from the gateway:

```
22:52:20 [agent/embedded] embedded run done: runId=a9cc829b-c5bb-46ac-8808-144ebedcc930 sessionId=80b24506-1c2b-436d-9749-b28885df997b durationMs=32 aborted=false
22:52:20 [diagnostic] lane task done: lane=main durationMs=33 active=0 queued=0
22:52:20 [diagnostic] lane task done: lane=session:agent:main:main durationMs=34 active=0 queued=0
```

In the gateway I get no response, and my Ollama server doesn't react.

Krill 🦐 APP — 2026-01-27, 17:54
The 32ms completion suggests the model call isn't happening. A few things to check:

Verify the model name exists on your Ollama server:

```bash
# On the Ollama host (192.168.2.XX)
ollama list
```

GLM models aren't standard Ollama models. You likely need to either:
- use a proper Ollama model like llama3.3, qwen2.5-coder:32b, etc., or
- if you're trying to use a remote GLM API, configure it as a different provider (an OpenAI-compatible endpoint), not Ollama.

Test connectivity to Ollama:

```bash
curl http://192.168.2.XX:11434/api/tags
```

If you meant to use a remote GLM API (not local Ollama), you'd use the OpenAI-compatible provider pointing at the GLM endpoint, not the Ollama provider:

```json
{
  "models": {
    "providers": {
      "openai-compatible": {                                // ← different provider
        "baseUrl": "https://open.bigmodel.cn/api/paas/v4",  // GLM endpoint
        "apiKey": "YOUR_API_KEY",
        "models": [
          {
            "id": "glm-4.7-flash",
            "api": "openai-completions"
            // ...
          }
        ]
      }
    }
  }
}
```

What's the output of ollama list on your 192.168.2.XX machine?
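Since the config points clawdbot at Ollama's OpenAI-compatible /v1 endpoint, it can also help to exercise that exact endpoint by hand from the gateway host. The sketch below assumes the host and port used in this thread; the model value must match a name reported by ollama list. If this call returns a completion but the gateway still logs ~30 ms runs with no traffic reaching Ollama, the problem is on the clawdbot side rather than the network or the model.

```bash
# Call the OpenAI-compatible chat completions endpoint that the provider's
# baseUrl points at (baseUrl + /chat/completions). Adjust host and model
# to match your setup.
curl http://192.168.2.XX:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "glm-4.7-flash",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```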
{ "models": { "providers": { "openai-compatible": { // ← different provider "baseUrl": "https://open.bigmodel.cn/api/paas/v4", // GLM endpoint "apiKey": "YOUR_API_KEY", "models": [ { "id": "glm-4.7-flash", "api": "openai-completions", // ... } ] } } } } What's the output of ollama list on your 192.168.2.XX machine? CairbreMadog OP — 2026-01-27, 17:55 NAME ID SIZE MODIFIED openchat:latest 537a4e03b649 4.1 GB 7 hours ago glm-4.7-flash:latest d1a8a26252f1 19 GB 7 hours ago llama3.2:latest a80c4f17acd5 2.0 GB 6 weeks ago llama3:latest 365c0bd3c000 4.7 GB 6 weeks ago dominus-maximus:latest 374bab6c03d0 5.0 GB 2 months ago gemma:7b a72c7f4d0a15 5.0 GB 2 months ago bge-m3:latest 790764642607 1.2 GB 3 months ago llama3.1:8b-instruct-q5_K_M 27fe1b0ab52c 5.7 GB 3 months ago CairbreMadog OP — 2026-01-28, 08:43 @Krill 🦐 Can you direct me to a good place that has excellent documentation on setting up a remote ollama instance. It just doesn't work for me