This workshop explores how standalone agents operate at the runtime level and how they differ from traditional AI pipelines. We examine agent architecture, planning loops, memory models, and tool execution. We also cover multi-agent coordination, including state isolation and resource control. A key focus is security and governance — capability-based access, sandboxing, and injection risks. Finally, we address observability and supervision: tracing reasoning, auditing tool usage, and implementing control mechanisms for production systems. All examples and concepts are grounded in the Node.js stack and we explore why Node.js is particularly well-suited for building production-ready agent runtimes — serving as the control plane for supervision, integration, streaming execution, and distributed coordination.
Software Engineer, Netherlands
My primary interests are self-development and craftsmanship. I enjoy exploring technologies, coding open source and enterprise projects, teaching, speaking and writing about programming - JavaScript, Node.js, TypeScript, Go, Java, Docker, Kubernetes, JSON Schema, DevOps, Web Components, Algorithms 🎧 ⚽️ 💻 👋 ☕️ 🌊 🎾

Software Engineer, Netherlands
JavaScript developer with full-stack experience and a passion for frontend. He happily works at ING in the Fraud Prevention department, where he helps protect the finances of ING customers.
npm i langchain
- Workflows are systems where LLMs and tools are orchestrated through predefined code paths
- Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks
An agent is a system that autonomously performs tasks on behalf of a user or another system by designing its workflow and utilizing available tools
LLM - Large Language Model: a model trained on vast amounts of sources and materials, with billions of parameters
Loop
Planning - task decomposition, multi-plan selection, external module-aided planning, reflection and refinement, memory-augmented planning, evaluation

$p = (a_0, a_1, \cdots, a_t) = \mathrm{plan}(E, g; \Theta, P)$
$g_0, g_1, \cdots, g_n = \mathrm{decompose}(E, g; \Theta, P)$
$p_i = (a_{i0}, a_{i1}, \cdots, a_{im}) = \text{sub-plan}(E, g_i; \Theta, P)$
where $E$ is the environment, $g$ the goal, $\Theta$ the LLM parameters, and $P$ the prompt.
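A minimal TypeScript sketch of this decompose-then-sub-plan flow; llmJson() is a hypothetical helper standing in for any LLM call that returns structured JSON:

// Hypothetical helper: any LLM call that returns parsed JSON
declare function llmJson<T>(prompt: string): Promise<T>;

type Action = { tool: string; input: unknown };

// plan(E, g): decompose the goal into sub-goals, sub-plan each one,
// and concatenate the action sequences into the final plan p
async function plan(env: string, goal: string): Promise<Action[]> {
  const subGoals = await llmJson<string[]>(
    `Environment: ${env}\nGoal: ${goal}\nDecompose the goal into ordered sub-goals.`
  );
  const subPlans: Action[][] = [];
  for (const g of subGoals) {
    subPlans.push(await llmJson<Action[]>(
      `Environment: ${env}\nSub-goal: ${g}\nList the actions (tool + input) to achieve it.`
    ));
  }
  return subPlans.flat();
}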
Prompt Architectures - ReAct, PRACT, RAISE, Reflexion, …
// ReAct: alternate reasoning (LLM call) and acting (tool execution)
// until the model answers without requesting a tool
while (true) {
  const response = await llm(messages, tools);
  if (response.tool_call) {
    const result = await runTool(response.tool_call);
    messages.push(result); // feed the observation back into the loop
  } else {
    return response.output;
  }
}
// Reflexion: run, self-evaluate, reflect, and retry with stored lessons
// (assumes runAgent, evaluateTrajectory, reflect, storeReflection are defined elsewhere)
export async function reflexionLoop(task: string) {
  let bestAnswer: string | null = null;
  let bestScore = -Infinity;
  for (let i = 0; i < 3; i++) {
    console.log(`Attempt ${i + 1}`);
    const trajectory = await runAgent(task);
    const evaluation = await evaluateTrajectory(task, trajectory);
    const reflection = await reflect(task, trajectory, evaluation);
    storeReflection(reflection); // lessons are retrieved on the next attempt
    console.log("Score:", evaluation.score);
    console.log("Lessons:", reflection.lessons);
    if (evaluation.score > bestScore) {
      bestScore = evaluation.score;
      bestAnswer = trajectory.finalAnswer;
    }
  }
  return bestAnswer;
}
Memory - the processes used to gain, store, retain, and later retrieve information. Short-term vs long-term memory.
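A rough sketch of the short-term/long-term split; embed() is a placeholder for any embedding API, and the names are illustrative, not a specific SDK:

// Short-term memory: the rolling message window that fits in the context.
// Long-term memory: persisted records retrieved by vector similarity.
declare function embed(text: string): Promise<number[]>;

const shortTerm: { role: string; content: string }[] = [];
const longTerm: { text: string; vector: number[] }[] = [];

async function remember(text: string) {
  longTerm.push({ text, vector: await embed(text) });
}

async function recall(query: string, k = 3): Promise<string[]> {
  const q = await embed(query);
  const cosine = (a: number[], b: number[]) => {
    let dot = 0, na = 0, nb = 0;
    for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
    return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
  };
  return longTerm
    .map((m) => ({ text: m.text, score: cosine(q, m.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((m) => m.text);
}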
Tools - extend the LLM with the ability to act outside its context: read data (files, APIs, web), compute (code execution), act (send email, write to a DB, click UI)
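Stripped of any SDK, a tool is just a described function the model can choose to call; a minimal hand-rolled shape (hypothetical names) looks like:

// A JSON-Schema description the model sees, plus the execute() the runtime runs
interface Tool {
  name: string;
  description: string;
  parameters: object; // JSON Schema for the arguments
  execute: (args: any) => Promise<unknown>;
}

const readFileTool: Tool = {
  name: "read_file",
  description: "Read a UTF-8 text file from the workspace.",
  parameters: {
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"],
  },
  execute: async ({ path }) => (await import("node:fs/promises")).readFile(path, "utf8"),
};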
| | Claude Agent SDK | OpenAI Agents SDK | Google ADK | AI SDK (Vercel) | LangChain / LangGraph |
|---|---|---|---|---|---|
| Primary purpose | Runtime for Claude-based agents with tool use + MCP | Build multi-step agents on OpenAI APIs | Build agents on Gemini / Vertex AI | Fullstack AI toolkit (not agent-first) | Composable chains + stateful agent graphs |
| Languages | TypeScript, Python ⚠️ (Python partial) | TypeScript, Python | Python, TypeScript, Go, and Java | TypeScript / JavaScript | Python, TypeScript |
| Model support | Claude only | OpenAI (⚠️ LiteLLM workaround) | Gemini / Vertex | Model-agnostic | Model-agnostic |
| Agent loop / orchestration | Subagents, tool loops, hooks | Agents + handoffs | Pipelines (seq/parallel) ⚠️ (loop flexibility unclear) | Tool-based loops (lightweight) | LangGraph DAG + cycles (full state machines) |
| Loop control | ⚠️ Hooks into steps, loop is internal | ❌ Hidden — tools + instructions only | ⚠️ Orchestration-based, not loop-level | ❌ Loop is internal | ✅ Full — define nodes, edges, stop conditions |
| Tools | MCP, bash, browser, file system | Function calling, tools, MCP | Google tools + functions ⚠️ (MCP maturity?) | Tool calling, MCP | 500+ integrations |
| Memory | CLAUDE.md + runtime context ⚠️ (not true long-term memory) | Threads + state | Vertex memory ⚠️ (needs validation depth) | Per-request (stateless by default) | Buffers + vector DB |
| Multi-agent | Subagents ⚠️ (basic vs true orchestration) | Native handoffs | A2A protocol ⚠️ (early stage) | ❌ Limited | ✅ Advanced (LangGraph multi-node) |
| MCP support | ✅ First-class | ✅ | ⚠️ Emerging | ✅ | ⚠️ Via adapters |
| Best fit | Tool-heavy automation agents | Fast production agents | Google ecosystem | AI web apps | Complex agent systems |
import {FunctionTool, LlmAgent} from '@google/adk';
import {z} from 'zod';

/* Mock tool implementation */
const getCurrentTime = new FunctionTool({
  name: 'get_current_time',
  description: 'Returns the current time in a specified city.',
  parameters: z.object({
    city: z.string().describe("The name of the city for which to retrieve the current time."),
  }),
  execute: ({city}) => {
    return {status: 'success', report: `The current time in ${city} is 10:30 AM`};
  },
});

export const rootAgent = new LlmAgent({
  name: 'hello_time_agent',
  model: 'gemini-flash-latest',
  description: 'Tells the current time in a specified city.',
  instruction: `You are a helpful assistant that tells the current time in a city.
Use the 'get_current_time' tool for this purpose.`,
  tools: [getCurrentTime],
});
Agentic AI - systems composed of multiple coordinated AI agents that can break down tasks, collaborate, and pursue complex objectives autonomously over extended periods.
Agent protocols are standardized frameworks that define the rules, formats, and procedures for structured communication among agents and between agents and external systems.
MCP - Model Context Protocol
Anthropic, November 2024 - a specification based on the function-calling flow
// Function calling: execute each requested tool and append its output to the input
for (const toolCall of response.output) {
  if (toolCall.type !== "function_call") {
    continue;
  }
  const name = toolCall.name;
  const args = JSON.parse(toolCall.arguments);

  const result = callFunction(name, args);
  input.push({
    type: "function_call_output",
    call_id: toolCall.call_id,
    output: result.toString(),
  });
}
MCP provides a standardized way for applications to:
- Share contextual information with language models
- Expose tools and capabilities to AI systems
- Build composable integrations and workflows

{
  "name": "get_weather_data",
  "title": "Weather Data Retriever",
  "description": "Get current weather data for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City name or zip code"
      }
    },
    "required": ["location"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "temperature": {
        "type": "number",
        "description": "Temperature in celsius"
      },
      "conditions": {
        "type": "string",
        "description": "Weather conditions description"
      },
      "humidity": {
        "type": "number",
        "description": "Humidity percentage"
      }
    },
    "required": ["temperature", "conditions", "humidity"]
  }
}
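The same tool exposed from an MCP server with the official TypeScript SDK (@modelcontextprotocol/sdk); a sketch with the weather lookup mocked:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather", version: "1.0.0" });

server.tool(
  "get_weather_data",
  "Get current weather data for a location",
  { location: z.string().describe("City name or zip code") },
  async ({ location }) => ({
    // Mocked result; a real server would call a weather API here
    content: [{
      type: "text",
      text: JSON.stringify({ temperature: 21, conditions: "Sunny", humidity: 40, location }),
    }],
  })
);

await server.connect(new StdioServerTransport());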
Agent-to-Agent (A2A) - Inter-Agent Protocol, April 2025
![The A2A agentic stack](https://a2a-protocol.org/latest/assets/agentic-stack.png)
Concepts:
const movieAgentCard: AgentCard = {
  name: 'Movie Agent',
  description: 'An agent that can answer questions about movies and actors using TMDB.',
  // Adjust the base URL and port as needed. /a2a is the default base in A2AExpressApp
  url: 'http://localhost:41241/', // Example: if baseUrl in A2AExpressApp
  provider: {
    organization: 'A2A Samples',
    url: 'https://example.com/a2a-samples' // Added provider URL
  },
  version: '0.0.2', // Incremented version
  capabilities: {
    streaming: true, // The new framework supports streaming
    pushNotifications: false, // Assuming not implemented for this agent yet
    stateTransitionHistory: true, // Agent uses history
  },
  securitySchemes: undefined, // Or define actual security schemes if any
  security: undefined,
  defaultInputModes: ['text'],
  defaultOutputModes: ['text', 'task-status'], // task-status is a common output mode
  skills: [
    {
      id: 'general_movie_chat',
      name: 'General Movie Chat',
      description: 'Answer general questions or chat about movies, actors, directors.',
      tags: ['movies', 'actors', 'directors'],
      examples: [
        'Tell me about the plot of Inception.',
        'Recommend a good sci-fi movie.',
        'Who directed The Matrix?',
        'What other movies has Scarlett Johansson been in?',
        'Find action movies starring Keanu Reeves',
        'Which came out first, Jurassic Park or Terminator 2?',
      ],
      inputModes: ['text'], // Explicitly defining for skill
      outputModes: ['text', 'task-status'] // Explicitly defining for skill
    },
  ],
  supportsAuthenticatedExtendedCard: false,
};
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "SendMessage",
  "params": {
    "message": {
      "role": "user",
      "parts": [
        {
          "text": "Generate an image of a sailboat on the ocean."
        }
      ],
      "messageId": "msg-user-001"
    }
  }
}
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "result": {
    "task": {
      "id": "task-boat-gen-123",
      "contextId": "ctx-conversation-abc",
      "status": {
        "state": "TASK_STATE_COMPLETED"
      },
      "artifacts": [
        {
          "artifactId": "artifact-boat-v1-xyz",
          "name": "sailboat_image.png",
          "description": "A generated image of a sailboat on the ocean.",
          "parts": [
            {
              "filename": "sailboat_image.png",
              "mediaType": "image/png",
              "raw": "base64_encoded_png_data_of_a_sailboat"
            }
          ]
        }
      ]
    }
  }
}
Part - Holds one of: text content, a file reference (URL or inline bytes), or structured data in messages and artifacts.
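In TypeScript this is naturally a discriminated union; field names vary slightly across A2A spec versions (the response above shows an older shape), so treat this as a sketch:

// Sketch of the A2A Part union, keyed by the "kind" discriminator
type Part =
  | { kind: "text"; text: string }
  | { kind: "file"; file: { uri?: string; bytes?: string; mimeType?: string; name?: string } }
  | { kind: "data"; data: Record<string, unknown> };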
Service Discovery
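Discovery typically starts from a well-known URL on the agent's host; a minimal client-side sketch (the exact path has moved between A2A versions, e.g. agent.json vs agent-card.json):

// Fetch a remote agent's card from its well-known URL
const res = await fetch("http://localhost:41241/.well-known/agent.json");
if (!res.ok) throw new Error(`Discovery failed: ${res.status}`);
const card = await res.json();
console.log(card.name, card.skills?.map((s: { id: string }) => s.id));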

ANP - Agent Network Protocol - Defines how agents connect with each other in an open, secure, and efficient collaboration network

options: {
  allowedTools: ["Read", "Glob", "Grep"],
  permissionMode: "acceptEdits",
  continue: true
},
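In context, these options are passed to the SDK's query() call; a hedged sketch assuming the @anthropic-ai/claude-agent-sdk package, with an illustrative prompt:

import { query } from "@anthropic-ai/claude-agent-sdk";

// Restrict the agent to read-only tools, auto-accept edits,
// and resume the previous session (`continue: true`)
for await (const message of query({
  prompt: "Summarize the structure of this repository", // illustrative prompt
  options: {
    allowedTools: ["Read", "Glob", "Grep"],
    permissionMode: "acceptEdits",
    continue: true
  }
})) {
  if (message.type === "result") console.log(message.result);
}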
docker run \
--cap-drop ALL \
--security-opt no-new-privileges \
--security-opt seccomp=/path/to/seccomp-profile.json \
--read-only \
--tmpfs /tmp:rw,noexec,nosuid,size=100m \
--tmpfs /home/agent:rw,noexec,nosuid,size=500m \
--network none \
--memory 2g \
--cpus 2 \
--pids-limit 100 \
--user 1000:1000 \
-v /path/to/code:/workspace:ro \
-v /var/run/proxy.sock:/var/run/proxy.sock:ro \
agent-image
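From the Node.js control plane, launching that sandbox is just a child process; a sketch using the built-in child_process module (flag set abridged from the command above):

import { spawn } from "node:child_process";

// Spawn the locked-down container and supervise it from Node.js
const docker = spawn("docker", [
  "run", "--rm",
  "--cap-drop", "ALL",
  "--security-opt", "no-new-privileges",
  "--read-only",
  "--network", "none",
  "--memory", "2g", "--cpus", "2", "--pids-limit", "100",
  "--user", "1000:1000",
  "agent-image",
], { stdio: ["pipe", "pipe", "inherit"] });

docker.stdout.on("data", (chunk) => process.stdout.write(chunk)); // stream agent output
docker.on("exit", (code) => console.log("sandbox exited with", code));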
Infrastructure Frameworks
| | n8n | CrewAI | MetaGPT | OpenClaw |
|---|---|---|---|---|
| Purpose | Workflow automation platform | Multi-agent framework | Multi-agent meta-framework | Agent orchestration & deployment |
| Orchestration style | Visual workflow DAG | Role-based agent crews | Role-based SOPs & pipelines | Graph-based agent routing |
| Hosting | Self-hosted / cloud | Self-hosted / cloud | Self-hosted | Self-hosted / cloud |
| Agent integration | Custom nodes, webhooks | Python-native | Python-native | API-first |
| Use case | Connect agents to business workflows | Collaborative task agents | Complex software development tasks | Production agent deployment |
| Language | JavaScript / TypeScript | Python | Python | Python / API |
An agent runtime is a control plane — coordination, policy, memory, and tooling around an LLM. Node.js fits well as that plane, and SDKs plus protocols (MCP, A2A) give you the building blocks. Production readiness comes from structured outputs, guardrails, tracing, and connecting it all to automation tools like n8n.
What’s coming in the next few years?
Please share your feedback on the workshop. Thank you, and happy coding!
If you like the workshop, you can become our patron, yay! 🙏