The PromptResult class wraps every response from TestAgent.prompt(). It provides methods to inspect tool calls, performance metrics, errors, and conversation history.
Import
// PromptResult is returned by TestAgent.prompt()
// You don't import it directly
const result = await agent.prompt("...");
// result is a PromptResult
Text Response
text
The final text response from the LLM.
Value
string - The assistant's text response, or an empty string if the LLM returned no text.
Example
const result = await agent.prompt("What is 2 + 2?");
console.log(result.text); // "2 + 2 equals 4."
Tool Calls
toolsCalled()
Returns the names of all tools that were called.
toolsCalled(): string[]
Returns
string[] - Array of tool names in call order.
Example
const result = await agent.prompt("Add 2 and 3, then multiply by 4");
console.log(result.toolsCalled()); // ["add", "multiply"]
hasToolCall()
Checks if a specific tool was called.
hasToolCall(toolName: string): boolean
Parameters
| Parameter | Type | Description |
|---|---|---|
| toolName | string | The tool name to check |
Returns
boolean - true if the tool was called.
Example
if (result.hasToolCall("add")) {
  console.log("Addition was performed");
}
getToolCalls()
Returns detailed information about all tool calls.
getToolCalls(): ToolCall[]
Returns
ToolCall[] - Array of tool call objects.
| Property | Type | Description |
|---|---|---|
| toolName | string | Name of the tool |
| arguments | Record<string, unknown> | Arguments passed to the tool |
Example
const calls = result.getToolCalls();
for (const call of calls) {
  console.log(`${call.toolName}(${JSON.stringify(call.arguments)})`);
}
// add({"a":2,"b":3})
// multiply({"a":5,"b":4})
getToolArguments()
Returns the arguments passed to a specific tool.
getToolArguments(toolName: string): Record<string, unknown> | undefined
Parameters
| Parameter | Type | Description |
|---|---|---|
| toolName | string | The tool name |
Returns
Record<string, unknown> | undefined - The arguments, or undefined if tool wasn’t called.
If the tool was called multiple times, returns arguments from the first call. Use getToolCalls() to access all calls.
Example
const args = result.getToolArguments("add");
console.log(args); // { a: 2, b: 3 }
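If a tool ran more than once, you can recover every invocation by filtering getToolCalls() yourself. A minimal sketch using only the documented ToolCall fields (the repeated "add" call is a hypothetical scenario):
const addCalls = result.getToolCalls().filter((call) => call.toolName === "add");
for (const call of addCalls) {
  console.log(call.arguments); // e.g. { a: 2, b: 3 }, then { a: 5, b: 6 }
}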
Error Handling
hasError()
Checks if an error occurred.
Returns
boolean - true if an error occurred.
Example
if (result.hasError()) {
  console.error("Something went wrong");
}
getError()
Returns the error message if one occurred.
getError(): string | undefined
Returns
string | undefined - The error message, or undefined if no error.
Example
if (result.hasError()) {
  console.error("Error:", result.getError());
}
Latency Metrics
e2eLatencyMs()
Total wall-clock time for the prompt.
Returns
number - Milliseconds.
llmLatencyMs()
Time spent waiting for LLM API responses.
Returns
number - Milliseconds.
mcpLatencyMs()
Time spent executing MCP tools.
Returns
number - Milliseconds.
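These individual metrics are useful for latency assertions in tests. A sketch, assuming a Jest/Vitest-style expect is available in your test runner (the 5-second budget is arbitrary):
// Fail the test if the whole prompt exceeded the time budget
expect(result.e2eLatencyMs()).toBeLessThan(5000);
// Log where the time was spent
console.log(`LLM: ${result.llmLatencyMs()}ms, tools: ${result.mcpLatencyMs()}ms`);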
getLatency()
Returns the complete latency breakdown.
getLatency(): LatencyBreakdown
Returns
interface LatencyBreakdown {
  e2eMs: number; // Total time
  llmMs: number; // LLM API time
  mcpMs: number; // Tool execution time
}
Example
const latency = result.getLatency();
console.log(`Total: ${latency.e2eMs}ms`);
console.log(`LLM: ${latency.llmMs}ms`);
console.log(`Tools: ${latency.mcpMs}ms`);
Token Usage
totalTokens()
Total tokens used (input + output).
inputTokens()
Tokens used for input (prompt + context).
outputTokens()
Tokens generated by the LLM.
Example
console.log(`Input: ${result.inputTokens()}`);
console.log(`Output: ${result.outputTokens()}`);
console.log(`Total: ${result.totalTokens()}`);
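Token counts can back similar budget assertions. A sketch, again assuming a Jest/Vitest-style expect and an arbitrary limit:
// Guard against prompt or context bloat
expect(result.totalTokens()).toBeLessThan(2000);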
Conversation History
getMessages()
Returns the full conversation as a CoreMessage[] array.
getMessages(): CoreMessage[]
Returns
CoreMessage[] - Vercel AI SDK message format.
Example
const messages = result.getMessages();
for (const msg of messages) {
  console.log(`[${msg.role}]:`, msg.content);
}
getUserMessages()
Returns only user messages.
getUserMessages(): CoreMessage[]
getAssistantMessages()
Returns only assistant messages.
getAssistantMessages(): CoreMessage[]
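For example, to log the user and assistant sides of the conversation separately:
for (const msg of result.getUserMessages()) {
  console.log("[user]:", msg.content);
}
for (const msg of result.getAssistantMessages()) {
  console.log("[assistant]:", msg.content);
}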
getToolMessages()
Returns only tool result messages.
getToolMessages(): CoreMessage[]
Example
const toolMsgs = result.getToolMessages();
for (const msg of toolMsgs) {
  console.log("Tool result:", msg.content);
}
Using as Context
Pass one or more PromptResult objects as context to continue a multi-turn conversation.
const r1 = await agent.prompt("Create a project");
// Single result as context
const r2 = await agent.prompt("Add a task", { context: r1 });
// Multiple results as context
const r3 = await agent.prompt("Show summary", { context: [r1, r2] });
Complete Example
const result = await agent.prompt("Add 100 and 200");
// Response
console.log("Text:", result.text);
// Tool calls
console.log("Tools:", result.toolsCalled());
console.log("Has add?", result.hasToolCall("add"));
console.log("Add args:", result.getToolArguments("add"));
for (const call of result.getToolCalls()) {
  console.log(` ${call.toolName}:`, call.arguments);
}
// Errors
if (result.hasError()) {
  console.error("Error:", result.getError());
}
// Performance
console.log("Latency:", result.e2eLatencyMs(), "ms");
console.log(" LLM:", result.llmLatencyMs(), "ms");
console.log(" Tools:", result.mcpLatencyMs(), "ms");
// Tokens
console.log("Tokens:", result.totalTokens());
console.log(" Input:", result.inputTokens());
console.log(" Output:", result.outputTokens());
// History
console.log("Messages:", result.getMessages().length);