OpenBB AI SDK

The OpenBB AI SDK simplifies building custom agents for OpenBB Workspace by providing type-safe models and helper functions that handle schema validation for streaming Server-Sent Events (SSE). Instead of manually crafting SSE messages and managing event types, you can use simple Python functions to stream text, show reasoning steps, fetch widget data, and create visualizations.

Install the package in your agent backend:

pip install openbb-ai

The code is open source and available at https://github.com/OpenBB-finance/openbb-ai.

Building Your First Agent

Every agent starts with a query handler that receives a QueryRequest object containing everything your agent needs:

import openai

from openbb_ai import message_chunk, reasoning_step
from openbb_ai.models import QueryRequest

async def query(request: QueryRequest):
    """Main entry point for your agent."""

    # Show the user what you're doing
    yield reasoning_step(
        event_type="INFO",
        message="Processing your request...",
    ).model_dump()

    # Access the user's latest message
    last_message = request.messages[-1]
    if last_message.role == "human":
        user_query = last_message.content

        # Stream a response from the LLM
        client = openai.AsyncOpenAI()
        async for event in await client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_query}],
            stream=True,
        ):
            if chunk := event.choices[0].delta.content:
                yield message_chunk(chunk).model_dump()
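
The query handler is an async generator of SSE events; it still has to be exposed over HTTP. As a minimal sketch of one common wiring (not part of the SDK itself), assuming a FastAPI backend with sse-starlette serving the /query path that Workspace POSTs to:

from fastapi import FastAPI
from sse_starlette.sse import EventSourceResponse

from openbb_ai.models import QueryRequest

app = FastAPI()

@app.post("/query")
async def query_endpoint(request: QueryRequest) -> EventSourceResponse:
    # Stream the events produced by the query generator back to Workspace
    return EventSourceResponse(
        content=query(request),
        media_type="text/event-stream",
    )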

Understanding the QueryRequest

The QueryRequest object contains all context your agent needs:

Core Fields

messages - Chat conversation history

for message in request.messages:
    if message.role == "human":
        user_query = message.content
    elif message.role == "ai":
        previous_response = message.content

widgets - Widgets available to your agent

# User-selected widgets (requires widget-dashboard-select feature)
if request.widgets and request.widgets.primary:
    for widget in request.widgets.primary:
        print(f"User added: {widget.name}")

# Dashboard widgets (requires widget-dashboard-search feature)
if request.widgets and request.widgets.secondary:
    for widget in request.widgets.secondary:
        print(f"On dashboard: {widget.name}")

See the agents.json reference for details on how to enable the widget-dashboard-select and widget-dashboard-search features.
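
For illustration only, these flags typically sit in a features block of the agent's agents.json entry; the agent id, name, and endpoint URL below are placeholders, and the agents.json reference is authoritative:

{
  "my_agent": {
    "name": "My Agent",
    "description": "Example agent with dashboard widget access.",
    "endpoints": {"query": "http://localhost:7777/query"},
    "features": {
      "streaming": true,
      "widget-dashboard-select": true,
      "widget-dashboard-search": true
    }
  }
}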

workspace_state - Current workspace context

if request.workspace_state and request.workspace_state.current_dashboard_info:
    dashboard = request.workspace_state.current_dashboard_info
    print(f"Active tab: {dashboard.current_tab_id}")

Additional Fields

  • tools - MCP tools available for execution
  • urls - URLs shared in chat (max 4)
  • timezone - User's browser timezone (e.g., "America/New_York")
  • api_keys - Custom API keys from user
  • workspace_options - Enabled feature flags (including custom ones)
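
As a quick sketch of reading these fields (the attribute names follow the list above; treating each as optional is an assumption, so guard for missing values):

if request.timezone:
    user_timezone = request.timezone  # e.g. "America/New_York"

if request.urls:
    shared_urls = request.urls  # up to 4 URLs shared in chat

if request.api_keys:
    user_api_keys = request.api_keys  # custom API keys supplied by the user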

Requesting Widget Data

Both get_widget_data and MCP tool calls are executed client-side. When you yield these function calls, the following sequence occurs:

  1. Agent sends tool call - Your agent yields the function call (widget request or MCP tool)
  2. Connection closes - The SSE stream for the current request is terminated
  3. Frontend executes - Workspace executes the requested tool call/widget request in the browser
  4. New request initiated - Frontend sends a new POST /query request with the tool results
  5. Agent resumes - Your agent receives a new QueryRequest with the execution results as a tool message

This isn't a simple "pause" - it's a complete request/response cycle. Your agent must be stateless and handle being called multiple times. For a detailed sequence diagram of this flow, see the OpenBB AI SDK repository.

Example:

from openbb_ai import WidgetRequest, message_chunk, reasoning_step, get_widget_data
from openbb_ai.models import QueryRequest, ClientFunctionCallError, DataContent

async def query(request: QueryRequest):
    last_message = request.messages[-1]

    # Phase 2: if this is a tool response, process the returned widget data
    is_tool_response = (
        last_message
        and getattr(last_message, "role", None) == "tool"
        and getattr(last_message, "data", None)
    )

    if is_tool_response:
        yield reasoning_step(event_type="INFO", message="Processing data...").model_dump()

        for item in last_message.data:
            if isinstance(item, ClientFunctionCallError):
                yield reasoning_step(
                    event_type="ERROR",
                    message=f"Widget request failed: {item.content}",
                ).model_dump()
                continue
            if isinstance(item, DataContent):
                # Process the data and respond
                yield message_chunk("Here's what I found in the data...").model_dump()
        return

    # The orchestrator (openbb-copilot) can also trigger this agent
    orchestration_requested = (
        last_message.role == "ai" and last_message.agent_id == "openbb-copilot"
    )

    # Phase 1: request data for the primary widgets (added to context)
    if ((last_message.role == "human" or orchestration_requested)
            and request.widgets and request.widgets.primary):
        widget_requests = [
            WidgetRequest(
                widget=widget,
                input_arguments={
                    param.name: param.current_value for param in widget.params
                },
            )
            for widget in request.widgets.primary
        ]

        yield reasoning_step(event_type="INFO", message="Fetching widget data...").model_dump()
        yield get_widget_data(widget_requests).model_dump()
        return  # Exit and wait for the callback with the results

Use request.widgets.primary for widgets the user selected in chat and request.widgets.secondary for widgets already on the dashboard (when the dashboard features are enabled). The SDK formats the tool call for you; your only responsibilities are to return after yielding get_widget_data and to handle the callback that arrives as a tool message.

Streaming Responses

The SDK provides several ways to stream content back to users:

Text Streaming

import openai

from openbb_ai import message_chunk

client = openai.AsyncOpenAI()

# Stream from an LLM response
async for event in await client.chat.completions.create(
    model="gpt-4o",
    messages=messages,  # your OpenAI-format chat history
    stream=True,
):
    if chunk := event.choices[0].delta.content:
        yield message_chunk(chunk).model_dump()

Reasoning Steps

Show users what your agent is thinking:

from openbb_ai import reasoning_step

# Different event types for different states
yield reasoning_step(event_type="INFO", message="Analyzing market data").model_dump()
yield reasoning_step(event_type="SUCCESS", message="Found 50 matching results").model_dump()
yield reasoning_step(event_type="WARNING", message="Some data may be delayed").model_dump()
yield reasoning_step(event_type="ERROR", message="Failed to fetch real-time data").model_dump()

# Include additional details
yield reasoning_step(
    event_type="SUCCESS",
    message="Data retrieved",
    details={"records": 1000, "timeframe": "1Y"},
).model_dump()

Tables

Create interactive data tables:

from openbb_ai import table

yield table(
    data=[
        {"Symbol": "AAPL", "Price": 150.25, "Change": "+2.5%"},
        {"Symbol": "GOOGL", "Price": 2800.00, "Change": "-0.3%"},
        {"Symbol": "MSFT", "Price": 380.50, "Change": "+1.2%"},
    ],
    name="Stock Prices",
    description="Current market prices",
).model_dump()

Charts

Create visualizations with different chart types:

from openbb_ai import chart

# Line chart for time series
yield chart(
    type="line",
    data=[
        {"date": "2024-01-01", "price": 150.25},
        {"date": "2024-01-02", "price": 151.30},
        {"date": "2024-01-03", "price": 148.90},
    ],
    x_key="date",
    y_keys=["price"],
    name="Price History",
    description="Stock price over time",
).model_dump()

# Bar chart for comparisons
yield chart(
    type="bar",
    data=[
        {"symbol": "AAPL", "volume": 50000000},
        {"symbol": "GOOGL", "volume": 25000000},
        {"symbol": "MSFT", "volume": 40000000},
    ],
    x_key="symbol",
    y_keys=["volume"],
    name="Trading Volume",
    description="Volume by symbol",
).model_dump()

Supported chart types: line, bar, scatter, pie, donut
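
For illustration, a scatter chart follows the same pattern, assuming it takes the same x_key/y_keys arguments as the line and bar examples above (pie and donut charts may take different arguments; check the model reference):

# Scatter chart for relating two variables (hypothetical data)
yield chart(
    type="scatter",
    data=[
        {"volatility": 0.18, "annual_return": 0.07},
        {"volatility": 0.25, "annual_return": 0.11},
        {"volatility": 0.32, "annual_return": 0.09},
    ],
    x_key="volatility",
    y_keys=["annual_return"],
    name="Risk vs. Return",
    description="Annualized return against volatility",
).model_dump()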

Citations & Attribution

Always cite your data sources to maintain transparency:

from openbb_ai import cite, citations

# Create citations for widgets you used
citation_list = []

for widget in request.widgets.primary:
    citation_list.append(
        cite(
            widget=widget,
            input_arguments={
                param.name: param.current_value
                for param in widget.params
            },
            extra_details={"timeframe": "1D"},
        )
    )

# Send all citations at once
yield citations(citation_list).model_dump()

Data Format Reference

Widget data can come in various formats:

import base64
import json

import pandas as pd

from openbb_ai.models import (
    DataContent,
    DataFileReferences,
    ImageDataFormat,
    PdfDataFormat,
    RawObjectDataFormat,
    SingleDataContent,
    SingleFileReference,
    SpreadsheetDataFormat,
)

async def handle_widget_data(data: list[DataContent | DataFileReferences]):
    for result in data:
        for item in result.items:
            if isinstance(item.data_format, PdfDataFormat):
                # Handle PDF - use pdfplumber or similar
                if isinstance(item, SingleDataContent):
                    pdf_bytes = base64.b64decode(item.content)
                elif isinstance(item, SingleFileReference):
                    pdf_url = item.url

            elif isinstance(item.data_format, SpreadsheetDataFormat):
                # Handle Excel/CSV - use pandas
                df = pd.read_json(item.content)

            elif isinstance(item.data_format, ImageDataFormat):
                # Handle images - may need OCR
                image_data = base64.b64decode(item.content)

            else:  # RawObjectDataFormat
                # Handle JSON/dict data
                parsed = json.loads(item.content)
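
This helper can then be called from the tool-response branch of a query handler, for example (hypothetical wiring, reusing the last_message pattern from the examples above):

if getattr(last_message, "role", None) == "tool" and last_message.data:
    await handle_widget_data(last_message.data)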

Complete Agent Example

Here's a minimal but complete agent that demonstrates the key concepts:

import json

import openai

from openbb_ai import (
    WidgetRequest,
    cite,
    citations,
    get_widget_data,
    message_chunk,
    reasoning_step,
    table,
)
from openbb_ai.models import ClientFunctionCallError, DataContent, QueryRequest

async def query(request: QueryRequest):
    """Complete agent implementation with widget data flow."""

    last_message = request.messages[-1]

    # Check for orchestration requests
    orchestration_requested = (
        last_message.role == "ai" and last_message.agent_id == "openbb-copilot"
    )

    # Phase 1: Fetch widget data if needed
    if ((last_message.role == "human" or orchestration_requested)
            and request.widgets and request.widgets.primary):
        widget_requests = [
            WidgetRequest(
                widget=widget,
                input_arguments={
                    param.name: param.current_value for param in widget.params
                },
            )
            for widget in request.widgets.primary
        ]

        yield reasoning_step(
            event_type="INFO",
            message="Fetching market data...",
        ).model_dump()

        yield get_widget_data(widget_requests).model_dump()
        return  # Exit and wait for callback

    # Phase 2: Process widget data
    if getattr(last_message, "role", None) == "tool" and getattr(last_message, "data", None):
        yield reasoning_step(
            event_type="INFO",
            message="Analyzing data...",
        ).model_dump()

        # Collect the returned data, surfacing any client-side errors
        results = []
        for item in last_message.data:
            if isinstance(item, ClientFunctionCallError):
                yield reasoning_step(
                    event_type="ERROR",
                    message=f"Failed: {item.content}",
                ).model_dump()
                continue

            if isinstance(item, DataContent):
                for data_item in item.items:
                    results.append(json.loads(data_item.content))

        yield message_chunk("Based on the market data analysis:\n").model_dump()

        # Stream the analysis from the LLM
        openai_messages = [
            {
                "role": "user",
                "content": f"Summarize this market data: {json.dumps(results)}",
            }
        ]
        client = openai.AsyncOpenAI()
        async for event in await client.chat.completions.create(
            model="gpt-4o",
            messages=openai_messages,
            stream=True,
        ):
            if chunk := event.choices[0].delta.content:
                yield message_chunk(chunk).model_dump()

        # Show a data table
        if results:
            yield table(
                data=results[:5],  # Show the first 5 records
                name="Market Summary",
                description="Key metrics",
            ).model_dump()

        # Add citations for the widgets that supplied the data
        if request.widgets and request.widgets.primary:
            citation_list = [
                cite(
                    widget=widget,
                    input_arguments={
                        param.name: param.current_value for param in widget.params
                    },
                )
                for widget in request.widgets.primary
            ]
            yield citations(citation_list).model_dump()

        yield reasoning_step(
            event_type="SUCCESS",
            message="Analysis complete",
        ).model_dump()

    # Phase 3: Handle regular chat without widgets
    else:
        yield message_chunk("Please add some widgets to analyze market data.").model_dump()

Model Reference

For the full list of available models, see https://github.com/OpenBB-finance/openbb-ai/blob/main/openbb_ai/models.py.