
Cognitive Functions as Tools: integrating AI to communicate with other architecture components

March 2, 2025 | 07:00 PM

In many scenarios, applications integrating Large Language Models (LLMs) need to communicate with external systems or services to obtain information the model does not possess locally. To achieve this integration, we can expose a set of “Cognitive Functions as Tools,” which the LLM dynamically “discovers” and decides to invoke when it detects the need for external information or specific actions.

This integration strategy creates an interaction flow where the model, after processing the initial prompt or context, generates a message (usually in JSON format) describing the invocation of one of these cognitive functions as tools. The host system (the client application) receives this message, executes the corresponding function, and returns the result to the model so it can be incorporated into the final response.

Cognitive Functions as Tools refers to specialized functionalities or modules that an AI model can discover and invoke dynamically to access external knowledge, perform specific tasks, or carry out actions that extend beyond its native capabilities.


Cognitive Functions as Tools

Imagine you have a Digital Twin service that needs extra data from external modules, such as an occupancy IoT sensor. The diagram below illustrates the high-level flow:

In essence, the diagram shows an architectural pattern where a conversational engine can dynamically invoke external services and seamlessly merge their results into the user-facing response.

[Diagram: cognitive-functions]

Registering custom functions

To enable this dynamic, we define a function registration mechanism in which the host application publishes a catalog of the functions the model is allowed to call.

Each published function exposes a name, a natural-language description of what it does, and a schema describing its expected parameters.

The LLM can consult this function catalog to determine whether it should call one of them and, if so, constructs a JSON object with the appropriate arguments.

The component architecture or library orchestrating communication with the LLM intercepts that JSON object and triggers the execution of the corresponding function.

This design pattern is inspired by the concept of a “messaging bus” that transfers control from the model (which “requests”) to the host system (which “executes”) without strong coupling between them. The flexibility lies in the fact that the model does not directly invoke the underlying code; rather, it produces the “intention” to invoke in JSON form.
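To make the registration idea concrete, here is a minimal sketch of a catalog entry for the occupancy example used later in this post. The structure (a published name mapped to a description and a JSON-Schema-style parameter block) is illustrative and not tied to any particular library:

# Illustrative catalog: each published name maps to the metadata the model sees
# (a description and a schema of the expected arguments).
FUNCTION_CATALOG = {
    "get_current_occupation": {
        "description": "Get the current occupancy of a specific location in the city.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}

The model never receives the Python code itself, only this metadata; when it decides the data is needed, it answers with a JSON object naming the function and filling in the arguments.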

Define a Python function:

def get_current_occupation(location: str) -> dict:
    """
    Get the current occupation of a specific location in the city.

    Args:
        location: The name of the location to check.

    Returns:
        dict: A dictionary containing the location, the number of people currently there, and the status of the occupation.
    """
    # Simulated response (in a real case, this would come from an external API or sensor data)
    occupation_data = {
        "location": location,
        "occupation": 1250,  # Example value
        "status": "high"  # Could be "low", "moderate", or "high"
    }

    return occupation_data

Pass the function as a tool to the LLM. The snippet assumes a chat client (here called model) that accepts plain Python function references as tools and derives their schema from the signature and docstring, as the Ollama Python client does:

response = model.chat(
    messages=[{'role': 'user', 'content': 'What is the current occupation in Plaza Central?'}],
    tools=[get_current_occupation],  # Actual function reference
)

Call the function from the model response:

# Map tool names to the local callables the host is willing to execute
available_functions = {'get_current_occupation': get_current_occupation}

# Process function calls from the response
for tool in response.message.tool_calls or []:
    function_to_call = available_functions.get(tool.function.name)
    if function_to_call:
        function_args = tool.function.arguments
        print('Function output:', function_to_call(**function_args))
    else:
        print('Function not found:', tool.function.name)
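To close the loop shown in the diagram, the function output is usually appended to the conversation and the model is called again so it can merge the data into its user-facing answer. A minimal sketch, reusing the model client and available_functions mapping from the snippets above; the exact shape of the tool message varies between client libraries:

messages = [{'role': 'user', 'content': 'What is the current occupation in Plaza Central?'}]
response = model.chat(messages=messages, tools=[get_current_occupation])

for tool in response.message.tool_calls or []:
    function_to_call = available_functions.get(tool.function.name)
    if function_to_call:
        result = function_to_call(**tool.function.arguments)
        # Record the model's tool request and the data it asked for.
        messages.append(response.message)
        messages.append({'role': 'tool', 'content': str(result)})

# Second call: the model now has the sensor data and can answer in natural language.
final = model.chat(messages=messages)
print(final.message.content)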

Workflow

When the model decides it needs external data, it does not execute anything itself; it emits a JSON payload describing the intended call, for example:

{
  "name": "getOccupation",
  "arguments": {
    "location": "ZoneX"
  }
}
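On the host side, a generic dispatcher can translate such a payload into a call to the registered callable. A minimal sketch, assuming the published name ("getOccupation" here) has been registered in a mapping like the catalog above, with error handling kept deliberately simple:

import json

def dispatch_tool_call(payload: dict, registry: dict) -> dict:
    """Route a model-emitted tool call to the callable registered under its name."""
    name = payload["name"]
    handler = registry.get(name)
    if handler is None:
        return {"error": f"unknown function: {name}"}
    args = payload["arguments"]
    # Some client libraries deliver arguments as a JSON string rather than a dict.
    if isinstance(args, str):
        args = json.loads(args)
    return handler(**args)

# Example: route the payload above to the occupancy function.
result = dispatch_tool_call(
    {"name": "getOccupation", "arguments": {"location": "ZoneX"}},
    {"getOccupation": get_current_occupation},
)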

Architectural Benefits

Implementation Considerations