
@Coordinator

@Coordinator is the entry point for multi-agent orchestration. A coordinator manages a fleet of agents — delegating tasks, aggregating results, and streaming a synthesized response to the client. It subsumes @Agent: the processor delegates to AgentProcessor internally for base agent setup, then adds fleet wiring on top.

@Coordinator(name = "ceo", skillFile = "prompts/ceo-skill.md")
@Fleet({
    @AgentRef(type = ResearchAgent.class),
    @AgentRef(type = StrategyAgent.class),
    @AgentRef(type = FinanceAgent.class),
    @AgentRef(value = "writer", required = false)
})
public class CeoCoordinator {

    @Prompt
    public void onPrompt(String message, AgentFleet fleet, StreamingSession session) {
        // Fan out to all agents in parallel
        var results = fleet.parallel(
            fleet.call("research", "web_search", Map.of("query", message)),
            fleet.call("strategy", "analyze", Map.of("topic", message)),
            fleet.call("finance", "forecast", Map.of("query", message))
        );

        // Synthesize results into a final response
        var synthesis = results.values().stream()
            .map(AgentResult::text)
            .collect(Collectors.joining("\n\n"));

        session.stream("Based on the team's analysis:\n\n" + synthesis);
    }
}
Add the coordinator module to your build:

<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-coordinator</artifactId>
    <version>LATEST</version> <!-- check Maven Central for latest -->
</dependency>

The coordinator module transitively pulls in atmosphere-agent and atmosphere-a2a.

@Coordinator attributes:

| Attribute | Type | Default | Description |
|-----------|------|---------|-------------|
| name | String | (required) | Coordinator name. Used in the registration path and protocol metadata. |
| skillFile | String | "" | Classpath resource path to the skill file (.md). The entire file becomes the system prompt. |
| description | String | "" | Human-readable description for Agent Card metadata. |
| version | String | "1.0.0" | Coordinator version for Agent Card metadata. |

@Fleet is applied at class level alongside @Coordinator. It declares the set of agents the coordinator manages, acting as both documentation and a startup validation contract.

@Fleet({
    @AgentRef(type = ResearchAgent.class),
    @AgentRef(value = "finance", version = "2.0.0"),
    @AgentRef(value = "analytics", required = false)
})

@AgentRef references an agent within a @Fleet. Exactly one of value (name-based) or type (class-based) must be specified.

| Attribute | Type | Default | Description |
|-----------|------|---------|-------------|
| value | String | "" | Agent name — for remote agents or cross-module references. Must match @Agent(name=...) or @Coordinator(name=...). |
| type | Class<?> | void.class | Agent class — compile-safe. The processor reads @Agent(name=...) from the class to resolve the name. |
| version | String | "" | Expected agent version. Advisory — logged and warned at startup, not enforced. |
| required | boolean | true | If false, coordinator starts even if this agent is unavailable. |
| weight | int | 1 | Preference weight for routing decisions. Higher values = stronger preference. |

Class-based references are compile-safe and provide IDE navigation:

@AgentRef(type = ResearchAgent.class)

Name-based references work for remote agents or cross-module references:

@AgentRef(value = "finance", version = "2.0.0")

AgentFleet is injected into @Prompt methods of @Coordinator classes. It provides agent discovery and delegation.

// Get a proxy to a named agent (throws if not found)
AgentProxy research = fleet.agent("research");
// All agents in this fleet (declared via @Fleet)
List<AgentProxy> all = fleet.agents();
// Only currently available agents (filters out unavailable optional agents)
List<AgentProxy> ready = fleet.available();

Call a single agent directly through its proxy:

AgentProxy research = fleet.agent("research");
// Synchronous call
AgentResult result = research.call("web_search", Map.of("query", "AI trends 2026"));
System.out.println(result.text());
// Async call
CompletableFuture<AgentResult> future = research.callAsync("web_search", Map.of("query", "AI trends"));

Stream results token by token:

fleet.agent("writer").stream("draft_report", Map.of("topic", "AI trends"),
    token -> session.send(token),  // onToken
    () -> session.complete()       // onComplete
);

Execute multiple agent calls concurrently and collect all results:

var results = fleet.parallel(
    fleet.call("research", "web_search", Map.of("query", message)),
    fleet.call("strategy", "analyze", Map.of("topic", message)),
    fleet.call("finance", "forecast", Map.of("query", message))
);

// Results keyed by agent name
String researchText = results.get("research").text();
String strategyText = results.get("strategy").text();
String financeText = results.get("finance").text();
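Conceptually, the fan-out starts one future per agent call and joins them into a name-keyed result map. The sketch below illustrates that shape with plain CompletableFuture; ParallelFanOut and its string-based types are illustrative assumptions, not the framework's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Illustrative sketch of parallel fan-out semantics (not the real API):
// start every call concurrently, then join all results into a map
// keyed by agent name.
class ParallelFanOut {

    static Map<String, String> parallel(Map<String, Supplier<String>> calls) {
        // Start every call on its own future
        Map<String, CompletableFuture<String>> futures = new LinkedHashMap<>();
        calls.forEach((agent, call) ->
                futures.put(agent, CompletableFuture.supplyAsync(call)));

        // Block until all complete and collect results by agent name
        Map<String, String> results = new LinkedHashMap<>();
        futures.forEach((agent, future) -> results.put(agent, future.join()));
        return results;
    }
}
```

The join-all step means the slowest agent bounds the overall latency, which is why fan-out only pays off when the calls are genuinely independent.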

Execute calls one after another, feeding results forward:

AgentResult finalResult = fleet.pipeline(
    fleet.call("research", "web_search", Map.of("query", message)),
    fleet.call("writer", "draft_report", Map.of("topic", message))
);
// finalResult is the output of the last agent in the pipeline
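The pipeline semantics can be pictured as a simple fold: each stage consumes the previous stage's output, and the last stage's result is returned. A minimal sketch, where PipelineSketch and the string-based stages are illustrative assumptions rather than the real API:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative sketch of sequential pipeline semantics (not the real API):
// each stage receives the previous stage's output.
class PipelineSketch {

    static String pipeline(String input, List<UnaryOperator<String>> stages) {
        String current = input;
        for (UnaryOperator<String> stage : stages) {
            current = stage.apply(current); // feed the result forward
        }
        return current; // output of the last stage
    }
}
```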

Each agent in the fleet is represented by an AgentProxy. The proxy encapsulates transport — the coordinator doesn’t know or care whether the agent is local (in-process) or remote (A2A JSON-RPC).

| Method | Description |
|--------|-------------|
| name() | Agent name |
| version() | Agent version string |
| isAvailable() | Whether the agent is currently reachable |
| weight() | Preference weight from @AgentRef |
| isLocal() | true for in-process agents, false for remote A2A agents |
| call(skill, args) | Synchronous call, returns AgentResult |
| callAsync(skill, args) | Async call, returns CompletableFuture<AgentResult> |
| stream(skill, args, onToken, onComplete) | Streaming call with token-by-token delivery |

Results from agent delegation are returned as AgentResult records:

| Field | Type | Description |
|-------|------|-------------|
| agentName | String | Name of the agent that produced the result |
| skillId | String | The skill that was invoked |
| text | String | The text response |
| metadata | Map<String, Object> | Additional metadata from the agent |
| duration | Duration | Wall-clock time for the call |
| success | boolean | Whether the call succeeded |

AgentResult result = fleet.agent("research").call("search", Map.of("q", "AI"));
if (result.success()) {
    session.stream(result.text());
} else {
    session.stream(result.textOr("Research agent unavailable, proceeding without it."));
}

The coordinator automatically selects the right transport for each agent:

  • Local agents (same JVM): Direct method invocation via LocalAgentTransport. Zero serialization overhead.
  • Remote agents (A2A): JSON-RPC 2.0 over HTTP via A2aAgentTransport. Discovered via Agent Card at /.well-known/agent.json.

The coordinator doesn’t need to know which transport is in use — the AgentProxy abstracts it.
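That design can be sketched as an interface with two implementations hidden behind the proxy. All type names below (TransportSketch, ProxySketch, and friends) are illustrative stand-ins, not the framework's actual classes:

```java
// Illustrative sketch of the transport abstraction (not the framework's
// classes): the proxy holds a transport and callers never see whether
// it is local or remote.
interface TransportSketch {
    String invoke(String skill);
}

class LocalTransportSketch implements TransportSketch {
    public String invoke(String skill) {
        return "local:" + skill; // direct in-process call, no serialization
    }
}

class RemoteTransportSketch implements TransportSketch {
    public String invoke(String skill) {
        return "remote:" + skill; // would go over JSON-RPC 2.0 / HTTP
    }
}

class ProxySketch {
    private final TransportSketch transport;

    ProxySketch(TransportSketch transport) {
        this.transport = transport;
    }

    // Same caller-facing API regardless of transport
    String call(String skill) {
        return transport.invoke(skill);
    }
}
```

Because the transport is chosen at proxy construction time, moving an agent from in-process to remote changes no coordinator code.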

Mark agents as optional when their absence should not prevent the coordinator from starting:

@Fleet({
    @AgentRef(type = ResearchAgent.class),                  // required (default)
    @AgentRef(value = "analytics", required = false),       // optional
    @AgentRef(value = "premium-insights", required = false) // optional
})

At startup, required agents that are unavailable cause a startup failure with a clear error message. Optional agents that are unavailable are logged as warnings and excluded from fleet.available().
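A minimal sketch of that validation contract, assuming illustrative types (FleetValidationSketch, Ref) rather than the framework's internals:

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch of the startup validation contract (not the
// framework's internals): a missing required agent fails fast; a
// missing optional agent is merely excluded from the available set.
class FleetValidationSketch {

    record Ref(String name, boolean required) {}

    static List<String> validate(List<Ref> declared, Set<String> reachable) {
        for (Ref ref : declared) {
            if (ref.required() && !reachable.contains(ref.name())) {
                // Fail fast with a clear error for required agents
                throw new IllegalStateException(
                        "Required agent unavailable: " + ref.name());
            }
        }
        // Unreachable optional agents are simply left out
        return declared.stream()
                .map(Ref::name)
                .filter(reachable::contains)
                .toList();
    }
}
```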

In the @Prompt method, check availability before calling optional agents:

AgentProxy analytics = fleet.agent("analytics");
if (analytics.isAvailable()) {
    var result = analytics.call("analyze", Map.of("data", message));
    session.stream("Analytics: " + result.text());
}

Use weights for preference-based routing when multiple agents can handle the same skill:

@Fleet({
    @AgentRef(type = PrimaryResearchAgent.class, weight = 10),
    @AgentRef(type = BackupResearchAgent.class, weight = 1)
})

Higher weight values indicate stronger preference.
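Selection among candidates can be pictured as a max-by-weight over the currently reachable agents, so the backup is used only when the preferred agent is down. WeightedSelectionSketch and Candidate below are illustrative names, not framework API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch of weight-based preference (not the framework's
// router): among available candidates, pick the highest weight.
class WeightedSelectionSketch {

    record Candidate(String name, int weight, boolean available) {}

    static Optional<String> select(List<Candidate> candidates) {
        return candidates.stream()
                .filter(Candidate::available)               // skip unreachable agents
                .max(Comparator.comparingInt(Candidate::weight))
                .map(Candidate::name);
    }
}
```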

The spring-boot-multi-agent-startup-team sample demonstrates a CEO coordinator orchestrating 4 specialist agents:

@Coordinator(name = "ceo", skillFile = "prompts/ceo-skill.md",
             description = "CEO coordinator that orchestrates specialist agents")
@Fleet({
    @AgentRef(type = ResearchAgent.class),
    @AgentRef(type = StrategyAgent.class),
    @AgentRef(type = FinanceAgent.class),
    @AgentRef(type = WriterAgent.class)
})
public class CeoCoordinator {

    @Prompt
    public void onPrompt(String message, AgentFleet fleet, StreamingSession session) {
        session.progress("Dispatching to specialist agents...");

        // Fan out research, strategy, and finance in parallel
        var results = fleet.parallel(
            fleet.call("research", "web_search", Map.of("query", message)),
            fleet.call("strategy", "analyze", Map.of("topic", message)),
            fleet.call("finance", "forecast", Map.of("query", message))
        );

        // Feed parallel results into the writer sequentially
        var briefing = results.values().stream()
            .map(r -> "## " + r.agentName() + "\n" + r.text())
            .collect(Collectors.joining("\n\n"));

        var report = fleet.agent("writer").call("draft_report",
            Map.of("content", briefing, "style", "executive summary"));
        session.stream(report.text());
    }
}

Each specialist agent is a standard @Agent:

@Agent(name = "research", skillFile = "prompts/research-skill.md",
       description = "Web research specialist")
public class ResearchAgent {

    @AgentSkill(id = "web_search", name = "Web Search",
                description = "Search the web for information")
    @AgentSkillHandler
    public void search(TaskContext task,
                       @AgentSkillParam(name = "query") String query) {
        // perform research
        task.addArtifact(Artifact.text("Research results for: " + query));
        task.complete("Research complete");
    }
}

The same @Coordinator code works with any AI runtime. Switch the execution engine by changing a single Maven dependency:

<!-- Built-in (default) -->
<artifactId>atmosphere-ai</artifactId>
<!-- Or Spring AI -->
<artifactId>atmosphere-spring-ai</artifactId>
<!-- Or LangChain4j -->
<artifactId>atmosphere-langchain4j</artifactId>
<!-- Or Google ADK -->
<artifactId>atmosphere-adk</artifactId>

Every coordination is automatically journaled — which agents were called, what they returned, timing, success/failure. The journal is a pluggable SPI (CoordinationJournal) with an in-memory default, discovered via ServiceLoader.

// After parallel execution, query the journal
var events = fleet.journal().retrieve(coordinationId);
var weatherEvents = fleet.journal().query(CoordinationQuery.forAgent("weather"));

Event types: Started, Dispatched, Completed, Failed, Evaluated. Each event includes the agent name, skill/method called, timestamp, duration, and result summary.
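An in-memory journal with per-agent queries can be sketched as an append-only event list. JournalSketch and its Event record below are illustrative stand-ins for the CoordinationJournal SPI, not its actual signatures:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of an in-memory coordination journal (not the
// real CoordinationJournal SPI): events append in order and can be
// queried by agent name.
class JournalSketch {

    record Event(String type, String agent, String skill, boolean success) {}

    private final List<Event> events = new ArrayList<>();

    void record(Event event) {
        events.add(event);
    }

    // Query by agent name, preserving insertion order
    List<Event> forAgent(String agent) {
        return events.stream()
                .filter(e -> e.agent().equals(agent))
                .toList();
    }
}
```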

An agent can transfer the conversation — with full history — to another agent. One method call:

@Prompt
public void onPrompt(String message, StreamingSession session) {
    if (message.toLowerCase().contains("billing")) {
        session.handoff("billing", message); // conversation history travels with it
        return;
    }
    session.stream(message);
}

The client receives an AiEvent.Handoff event before the target agent responds. Nested handoffs are blocked (cycle guard). The target agent’s @Prompt method runs with its own tools, system prompt, and interceptor chain.
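The cycle guard can be pictured as a visited-set check: a handoff to an agent already in the chain is rejected. HandoffGuardSketch below is an illustrative sketch under that assumption, not the framework's implementation:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of a handoff cycle guard (not the framework's
// implementation): track agents already in the handoff chain and
// reject a transfer that would revisit one.
class HandoffGuardSketch {

    private final Set<String> visited = new HashSet<>();

    // Returns true if the handoff may proceed, false if it would
    // revisit an agent already in the chain (cycle blocked).
    boolean tryHandoff(String targetAgent) {
        return visited.add(targetAgent);
    }
}
```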

Route based on agent results — first match wins, with an optional fallback:

var weather = fleet.agent("weather").call("forecast", Map.of("city", city));
var result = fleet.route(weather, route -> route
    .when(r -> r.success() && r.text().contains("sunny"),
          f -> f.agent("outdoor").call("plan", Map.of()))
    .when(r -> r.success(),
          f -> f.agent("indoor").call("suggest", Map.of()))
    .otherwise(f -> AgentResult.failure("router", "route",
          "Weather unavailable", Duration.ZERO))
);

Every routing decision is recorded in the CoordinationJournal.
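First-match-wins routing with a fallback can be sketched as an ordered list of predicate/action pairs. RouterSketch below is an illustrative generic version, not the fleet.route API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative sketch of first-match routing (not the real fleet.route
// API): conditions are checked in declaration order; the first match
// runs its action, and the fallback runs if none match.
class RouterSketch<T, R> {

    private record Branch<T, R>(Predicate<T> condition, Function<T, R> action) {}

    private final List<Branch<T, R>> branches = new ArrayList<>();

    RouterSketch<T, R> when(Predicate<T> condition, Function<T, R> action) {
        branches.add(new Branch<>(condition, action));
        return this;
    }

    R route(T input, Function<T, R> otherwise) {
        for (Branch<T, R> branch : branches) {
            if (branch.condition().test(input)) {
                return branch.action().apply(input); // first match wins
            }
        }
        return otherwise.apply(input); // fallback
    }
}
```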

Plug in quality assessment via the ResultEvaluator SPI. Evaluators run automatically (async, non-blocking, recorded in journal) after each agent call, and can be invoked explicitly:

var result = fleet.agent("writer").call("draft", Map.of("topic", "AI"));
var evals = fleet.evaluate(result, call);
if (evals.stream().allMatch(Evaluation::passed)) {
    session.stream(result.text());
}

Agents remember users across sessions. Configuration-only — no code changes in @Agent classes:

// LongTermMemoryInterceptor (pre): injects stored facts into system prompt
// LongTermMemoryInterceptor (post): extracts new facts via LLM
// Extraction strategies:
MemoryExtractionStrategy.onSessionClose() // default — batch, cost-efficient
MemoryExtractionStrategy.perMessage() // real-time, expensive
MemoryExtractionStrategy.periodic(5) // every 5 messages

Backed by InMemoryLongTermMemory (dev) or any SessionStore implementation (Redis, SQLite). The onDisconnect lifecycle hook ensures facts are extracted before conversation history is cleared.
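The three extraction triggers can be sketched as a single decision function. ExtractionTriggerSketch below is an illustrative assumption about their semantics, not the MemoryExtractionStrategy API:

```java
// Illustrative sketch of the extraction triggers (an assumption about
// semantics, not the MemoryExtractionStrategy API): per-message fires
// every time, periodic(n) every n messages, on-session-close only when
// the session ends.
class ExtractionTriggerSketch {

    enum Mode { PER_MESSAGE, PERIODIC, ON_SESSION_CLOSE }

    static boolean shouldExtract(Mode mode, int messageCount,
                                 int period, boolean sessionClosing) {
        return switch (mode) {
            case PER_MESSAGE -> true;
            case PERIODIC -> messageCount % period == 0;
            case ON_SESSION_CLOSE -> sessionClosing;
        };
    }
}
```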

The coordinator module includes test stubs for exercising @Prompt methods without infrastructure or LLM calls:

var fleet = StubAgentFleet.builder()
    .agent("weather", "Sunny, 72F in Madrid")
    .agent("activities", "Visit Retiro Park, Prado Museum")
    .build();
coordinator.onPrompt("What to do in Madrid?", fleet, session);

CoordinatorAssertions.assertThat(result)
    .succeeded()
    .containsText("Madrid")
    .completedWithin(Duration.ofSeconds(5));

StubAgentFleet returns canned responses. StubAgentTransport allows predicate-based routing for more complex scenarios.

LLM-as-judge for testing agent response quality. Uses any AgentRuntime as the judge model:

var judge = new LlmJudge(cheapRuntime, "gpt-4o-mini");
var response = client.prompt("Should I bring an umbrella to Tokyo?");

assertThat(response)
    .withJudge(judge)
    .forPrompt("Should I bring an umbrella to Tokyo?")
    .meetsIntent("Recommends whether to bring an umbrella based on weather data")
    .isGroundedIn("get_weather")
    .hasQuality(q -> q.relevance(0.8).coherence(0.7).safety(0.9));

See AI Testing for the full assertion API.

The built-in AI Console renders coordinator activity as tool cards — showing each specialist agent’s work (tool name, input, result) before displaying the synthesized response. This gives users visibility into the multi-agent orchestration process.