
AI Agent Studio GPT 4.1 Mini LLM node output/input issue

edited May 8, 2026 7:47PM in AI for Fusion Applications

Summary:

In 26B AI Agent Studio, LLM node output is null in downstream nodes when the LLM is set to the default LLM (GPT 4.1 Mini in our environment).

Content (please ensure you mask any confidential information):

When the LLM node is configured to use the environment’s default LLM, its output appears as null/empty in downstream nodes, even though the LLM node itself returns a valid, correctly formatted JSON payload during testing. Switching the LLM node to a specific non-default model resolves the issue.

I confirmed the output JSON format is correct: when the node is run in testing, it returns non-null values in the expected format. However, when the LLM node's output is used as input to downstream nodes, it arrives as null/an empty string. The only workaround I have found is selecting a specific non-default model in the agent's LLM settings.
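Since this happens inside the Studio UI there is no code to share, but as an illustration of the symptom: any downstream consumer that expects the LLM node's JSON payload and instead receives an empty string will fail to parse it. The sketch below is purely illustrative (the function and payload names are hypothetical, not AI Agent Studio APIs):

```python
import json

def parse_llm_output(raw: str) -> dict:
    """Illustrative downstream-node input handling (hypothetical, not a Studio API)."""
    # With the default model selected, `raw` arrives as "" even though
    # the LLM node's own test run showed a valid JSON payload.
    if not raw:
        raise ValueError("LLM node output arrived null/empty in downstream node")
    return json.loads(raw)

# What the node's test run returns vs. what downstream nodes actually receive:
payload = parse_llm_output('{"status": "ok", "items": []}')  # parses fine
try:
    parse_llm_output("")  # the reported 26B behavior with the default model
except ValueError as e:
    print(e)
```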
