Prompt Engineering Course
Function Calling
Function calling is a prompt engineering technique that lets a language model respond with a structured, machine-readable function call instead of only free-form text.
Use it when you want the model to decide on an action and hand structured data to your application logic.
In production systems, function calling is what turns a chatbot into a real application.
Why Function Calling Exists
Pure text generation has limits.
Real systems need to:
- Fetch data from APIs
- Trigger workflows
- Run calculations
- Store or update information
Function calling bridges the gap between natural language and executable actions.
What Problem It Solves
Without function calling:
- The model guesses output formats
- Developers must parse text manually
- Errors increase in complex workflows
With function calling:
- Outputs follow strict schemas
- Application logic stays predictable
- LLMs behave like decision engines
How Function Calling Works Conceptually
At a high level, the flow is:
- User sends a natural language request
- The model decides whether a function is needed
- The model returns a structured function call
- Your code executes the function
The model does not execute the function itself — it only decides what should be called and with what arguments.
Defining a Function Schema
Before the model can call a function, you must describe it clearly.
This description acts as a contract between the model and your code.
const functions = [
  {
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: {
          type: "string",
          description: "Name of the city"
        }
      },
      required: ["city"]
    }
  }
];
This schema tells the model:
- What the function does
- What inputs are allowed
- Which inputs are mandatory
Why Structure Matters
If the schema is vague, the model produces unreliable arguments.
Clear structure ensures:
- Correct parameter types
- Valid JSON output
- Lower error rates
Calling the Model With Functions
Now the model is given both the user message and the available functions.
const response = await openai.chat.completions.create({
  model: "gpt-4.1",
  messages: [
    { role: "user", content: "What's the weather in London?" }
  ],
  functions: functions,
  function_call: "auto"
});
Here, the model decides whether calling get_weather is appropriate. (Newer versions of the API express the same pattern with tools and tool_choice, but the flow is identical.)
What Happens Inside the Model
Internally, the model:
- Understands the user intent
- Matches intent with function descriptions
- Generates structured arguments
Instead of replying with text, it returns a function call payload.
Example Function Call Output
{
  "name": "get_weather",
  "arguments": {
    "city": "London"
  }
}
This output is machine-readable and ready for execution. In the actual API response, the arguments field arrives as a JSON string that your code must parse before use.
Executing the Function
Your application reads the function call and executes it.
The model never touches your backend logic.
This separation keeps systems secure and predictable.
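The dispatch step can be sketched as follows. This is a minimal example, not the SDK's own mechanism: the get_weather implementation is a stub rather than a real weather API, and the function call payload is hardcoded to stand in for the model's response.

```javascript
// Map function names from the schema to local implementations.
// This stub returns a fixed string; a real system would call an API.
const implementations = {
  get_weather: ({ city }) => `Sunny, 18°C in ${city}`
};

// Stand-in for what the model returns. Note that `arguments`
// is a JSON string in the actual API response, not an object.
const functionCall = {
  name: "get_weather",
  arguments: '{"city": "London"}'
};

function executeFunctionCall(call) {
  const impl = implementations[call.name];
  if (!impl) {
    throw new Error(`Unknown function: ${call.name}`);
  }
  const args = JSON.parse(call.arguments); // parse before executing
  return impl(args);
}

const result = executeFunctionCall(functionCall);
```

The result would then typically be sent back to the model in a follow-up message so it can phrase a final answer for the user.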
Real-World Use Cases
Function calling is used in:
- Booking systems
- Customer support automation
- Database querying
- Agent-based workflows
Any system that requires structured actions benefits from this approach.
Common Mistakes
Developers often:
- Overload functions with too many parameters
- Use vague descriptions
- Trust function output without validation
Function calling improves structure — it does not replace validation.
Best Practices
For reliable function calling:
- Keep schemas simple
- Use clear descriptions
- Validate inputs before execution
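The validation step can be sketched by hand, as below; in practice a schema validator such as Ajv or Zod is a common choice. validateWeatherArgs is a hypothetical helper written against the get_weather schema defined earlier.

```javascript
// Check model-produced arguments against the get_weather schema
// before executing anything. Returns a list of error messages;
// an empty list means the arguments are safe to use.
function validateWeatherArgs(args) {
  const errors = [];
  if (typeof args !== "object" || args === null) {
    errors.push("arguments must be an object");
    return errors;
  }
  if (typeof args.city !== "string" || args.city.trim() === "") {
    errors.push("city must be a non-empty string");
  }
  return errors;
}
```

If validation fails, the application can reject the call or send the errors back to the model and ask it to try again, rather than executing bad arguments.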
Practice
What type of output does function calling produce?
What does the model return when calling a function?
What is the model's main role in function calling?
Quick Quiz
Function calling outputs are typically:
Who executes the function?
Why are function schemas important?
Recap: Function calling enables LLMs to produce structured actions instead of unstructured text.
Next up: Agentic prompting — coordinating multiple steps and tools intelligently.