AI Agent Studio: LLM node output is null in downstream nodes when using the default LLM (GPT 4.1 Mini)
Summary:
In 26B AI Agent Studio, the LLM node's output is null in downstream nodes when the node is set to the default LLM (GPT 4.1 Mini in our environment).
Content:
When the LLM node is configured to use the environment’s default LLM, its output appears as null/empty in downstream nodes, even though the LLM node itself returns a valid, correctly formatted JSON payload during testing. Switching the LLM node to a specific non-default model resolves the issue.
I confirmed the output JSON format is correct: when the node is run in testing, it shows non-null values in the expected format. However, when the LLM node's output is used as input to a downstream node, the downstream node receives it as null/an empty string. The only workaround I have found is selecting a non-default model in the agent's LLM settings.
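For illustration only, here is a hypothetical sketch of the behavior; the field names below are placeholders, not the actual AI Agent Studio schema. In testing, the LLM node returns a valid payload along the lines of:

```json
{
  "output": {
    "summary": "Order #123 was shipped on 2024-05-01",
    "status": "shipped"
  }
}
```

But when the default LLM (GPT 4.1 Mini) is selected, the downstream node receives something like:

```json
{
  "output": null
}
```

(or `"output": ""`, i.e. an empty string), even though the same run shows non-null values in the LLM node's own test output.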