The Pipecat React SDK provides hooks for accessing client functionality, managing media devices, and handling events.

usePipecatClient

Provides access to the PipecatClient instance originally passed to PipecatClientProvider.
import { usePipecatClient } from "@pipecat-ai/client-react";

function MyComponent() {
  const pcClient = usePipecatClient();

  // await requires an async context, e.g. an event handler
  const handleConnect = async () => {
    await pcClient.startBotAndConnect({
      endpoint: '/api/start',
      requestData: {
        // Any custom data your /start endpoint requires
      }
    });
  };
}

useRTVIClientEvent

Subscribes to RTVI client events. Wrap handlers in useCallback so the listener isn’t re-registered on every render.
import { useCallback } from "react";
import { RTVIEvent, TransportState } from "@pipecat-ai/client-js";
import { useRTVIClientEvent } from "@pipecat-ai/client-react";

function EventListener() {
  useRTVIClientEvent(
    RTVIEvent.TransportStateChanged,
    useCallback((transportState: TransportState) => {
      console.log("Transport state changed to", transportState);
    }, [])
  );
}
Arguments
  • event (RTVIEvent, required)
  • handler (function, required)

usePipecatClientMediaDevices

Manage and list available media devices.
import { usePipecatClientMediaDevices } from "@pipecat-ai/client-react";

function DeviceSelector() {
  const {
    availableCams,
    availableMics,
    selectedCam,
    selectedMic,
    updateCam,
    updateMic,
  } = usePipecatClientMediaDevices();

  return (
    <>
      <select
        name="cam"
        onChange={(ev) => updateCam(ev.target.value)}
        value={selectedCam?.deviceId}
      >
        {availableCams.map((cam) => (
          <option key={cam.deviceId} value={cam.deviceId}>
            {cam.label}
          </option>
        ))}
      </select>
      <select
        name="mic"
        onChange={(ev) => updateMic(ev.target.value)}
        value={selectedMic?.deviceId}
      >
        {availableMics.map((mic) => (
          <option key={mic.deviceId} value={mic.deviceId}>
            {mic.label}
          </option>
        ))}
      </select>
    </>
  );
}

usePipecatClientMediaTrack

Access audio and video tracks.
import { usePipecatClientMediaTrack } from "@pipecat-ai/client-react";

function MyTracks() {
  // MediaStreamTrack instances, or null until the track is available
  const localAudioTrack = usePipecatClientMediaTrack("audio", "local");
  const botAudioTrack = usePipecatClientMediaTrack("audio", "bot");
}
Arguments
  • trackType ('audio' | 'video', required)
  • participantType ('bot' | 'local', required)

usePipecatClientTransportState

Returns the current transport state.
import { usePipecatClientTransportState } from "@pipecat-ai/client-react";

function ConnectionStatus() {
  const transportState = usePipecatClientTransportState();
  return <div>Connection state: {transportState}</div>;
}

usePipecatClientCamControl

Controls the local participant’s camera state.
import { usePipecatClientCamControl } from "@pipecat-ai/client-react";
function CamToggle() {
  const { enableCam, isCamEnabled } = usePipecatClientCamControl();

  return (
    <button onClick={() => enableCam(!isCamEnabled)}>
      {isCamEnabled ? "Disable Camera" : "Enable Camera"}
    </button>
  );
}

usePipecatClientMicControl

Controls the local participant’s microphone state.
import { usePipecatClientMicControl } from "@pipecat-ai/client-react";
function MicToggle() {
  const { enableMic, isMicEnabled } = usePipecatClientMicControl();

  return (
    <button onClick={() => enableMic(!isMicEnabled)}>
      {isMicEnabled ? "Disable Microphone" : "Enable Microphone"}
    </button>
  );
}

usePipecatConversation

The primary hook for accessing the conversation message stream. Returns the current list of messages (ordered for display) and a function to inject messages programmatically. Each assistant message’s text parts are split into spoken and unspoken segments based on real-time speech progress, so you can style them differently (e.g. dim unspoken text).
import { usePipecatConversation } from "@pipecat-ai/client-react";
import type { ConversationMessage } from "@pipecat-ai/client-react";

function Messages() {
  const { messages } = usePipecatConversation({
    onMessageCreated(message: ConversationMessage) {
      console.log("New message:", message);
    },
    onMessageUpdated(message: ConversationMessage) {
      if (message.final) {
        console.log("Message finalized:", message);
      }
    },
  });

  return (
    <ul>
      {messages.map((msg, i) => (
        <li key={`${msg.createdAt}-${i}`}>
          <strong>{msg.role}:</strong>{" "}
          {msg.parts?.map((part, j) => {
            if (typeof part.text === "string") {
              return <span key={j}>{part.text}</span>;
            }
            // BotOutputText: { spoken, unspoken }
            return (
              <span key={j}>
                <span>{part.text.spoken}</span>
                <span style={{ opacity: 0.5 }}>{part.text.unspoken}</span>
              </span>
            );
          })}
        </li>
      ))}
    </ul>
  );
}
Options
  • onMessageCreated ((message: ConversationMessage) => void): Called once when a new message first enters the conversation. The message may or may not be complete at this point; check message.final.
  • onMessageUpdated ((message: ConversationMessage) => void): Called whenever an existing message’s content changes (e.g. streaming text appended, function call status changed, message finalized). Check message.final to detect finalization.
  • aggregationMetadata (Record<string, AggregationMetadata>): Metadata for aggregation types to control rendering and speech-progress behavior. Determines which aggregations are excluded from position-based speech splitting.
Returns
  • messages (ConversationMessage[]): The current list of conversation messages, ordered for display. Assistant messages have their text parts split into { spoken, unspoken } based on real-time speech progress.
  • injectMessage ((message: { role: string; parts: ConversationMessagePart[] }) => void): Programmatically inject a message into the conversation (e.g. a system prompt or user-typed input).

useConversationContext

Lower-level hook that provides direct access to the conversation context. Use this when you only need injectMessage without subscribing to the message stream, or to check whether the connected bot supports BotOutput events.
import { useConversationContext } from "@pipecat-ai/client-react";

function TextInput() {
  const { injectMessage, botOutputSupported } = useConversationContext();

  const send = (text: string) => {
    injectMessage({
      role: "user",
      parts: [{ type: "text", text }],
    });
  };

  return (
    <input
      onKeyDown={(e) => e.key === "Enter" && send(e.currentTarget.value)}
    />
  );
}
Returns
  • injectMessage ((message: { role: string; parts: ConversationMessagePart[] }) => void): Programmatically inject a message into the conversation.
  • botOutputSupported (boolean | null): Whether the connected bot supports BotOutput events (RTVI 1.1.0+). null means detection hasn’t completed yet.

UI Agent hooks

These hooks use the PipecatClient from the ambient PipecatClientProvider. For the full client/server pattern, including snapshots, commands, and task lifecycle, see the UI Agent guide.

useUIEventSender

Returns a callable that sends a named UI event to the server. The returned function is a no-op until a PipecatClient is available.
import { useUIEventSender } from "@pipecat-ai/client-react";

function NavLink({ view, label }: { view: string; label: string }) {
  const sendEvent = useUIEventSender();
  return <a onClick={() => sendEvent("nav_click", { view })}>{label}</a>;
}
Returns
  • sendEvent (<T>(event: string, payload?: T) => void): Emits a UI event with the given event name and optional payload.

useUICommandHandler

Register a handler for a named UI command. The handler is registered on mount and unregistered on unmount; if the handler reference changes between renders, the registration is refreshed. Pass a stable reference (via useCallback) to avoid per-render churn.
import { useCallback } from "react";
import { useUICommandHandler } from "@pipecat-ai/client-react";

function Toaster() {
  const onToast = useCallback((payload: { title: string }) => {
    showToast(payload.title);
  }, []);
  useUICommandHandler("toast", onToast);
  return null;
}
Arguments
  • name (string, required): App-defined command name, matching what the server emits via UIAgent.send_command.
  • handler (UICommandHandler<T>, required): Callback invoked with the command payload.

useUISnapshot

Capture a structured accessibility snapshot of the document and stream it to the server as a first-class ui-snapshot RTVI message. The server-side UIAgent stores the latest snapshot and renders it into LLM context as <ui_state> so the agent can reason about what’s on screen. Call once near the root of your app, inside a PipecatClientProvider. This hook is a thin lifecycle wrapper around PipecatClient.startUISnapshotStream; non-React apps should use the managed client method directly.
import { useUISnapshot } from "@pipecat-ai/client-react";

function App() {
  useUISnapshot();
  return <Routes>...</Routes>;
}
Behavior
  • Emits an initial snapshot shortly after mount.
  • Re-emits on DOM mutations, ARIA attribute changes, focus changes, scroll-end, window resize, tab visibility change, and text selection changes, coalesced by debounceMs.
  • No-op until a PipecatClient is available from the ambient PipecatClientProvider.
Options
  • enabled (boolean, default: true): Whether the hook is active. Set to false to stop emitting snapshots without unmounting the component.
  • debounceMs (number, default: 300): Minimum interval between snapshot emissions, in milliseconds.
  • trackViewport (boolean, default: true): When true, annotate every emitted node with "offscreen" in its state list if its bounding rect sits entirely outside the viewport.
  • logSnapshots (boolean, default: false): When true, log each emitted snapshot to the browser console (node count, rough token estimate, raw tree).

useDefaultScrollToHandler

Install the default scroll_to handler: scrollIntoView on the resolved target. Resolves by snapshot ref first, then falls back to document.getElementById(target_id).
import { useDefaultScrollToHandler } from "@pipecat-ai/client-react";

function App() {
  useDefaultScrollToHandler({
    block: "center",
    container: () => document.querySelector(".main"),
  });
  return <>...</>;
}
Options
  • block ("start" | "center" | "end" | "nearest", default: "start"): scrollIntoView block position.
  • inline ("start" | "center" | "end" | "nearest", default: "nearest"): scrollIntoView inline position.
  • defaultBehavior ("auto" | "instant" | "smooth", default: "smooth"): Fallback scroll behavior when payload.behavior is unset.
  • container (Element | null | (() => Element | null | undefined)): When set, scroll inside this element instead of relying on scrollIntoView walking to the nearest scrollable ancestor. The function form is evaluated on each scroll, so it can account for containers mounted after the hook.
  • offset ({ top?: number; left?: number }): Pixel offsets applied after scrolling, typically to clear a sticky header. Only applied when container is set.

useDefaultFocusHandler

Install the default focus handler: .focus() on the resolved target.
import { useDefaultFocusHandler } from "@pipecat-ai/client-react";
useDefaultFocusHandler({ preventScroll: true });
Options
  • preventScroll (boolean, default: false): Pass { preventScroll: true } to element.focus() so the focus change doesn’t also pan the viewport. Useful when focus happens alongside an explicit scroll_to.

useDefaultHighlightHandler

Install the default highlight handler: toggle a CSS class on the resolved target for duration_ms. Apps style the class themselves, e.g.
.ui-highlight {
  outline: 2px solid gold;
  transition: outline 0.25s;
}
import { useDefaultHighlightHandler } from "@pipecat-ai/client-react";

useDefaultHighlightHandler({
  className: "ui-highlight",
  defaultDurationMs: 2000,
  scrollIntoViewFirst: true,
});
Options
  • className (string, default: "ui-highlight"): CSS class toggled on the target for duration_ms.
  • defaultDurationMs (number, default: 1500): Fallback duration when payload.duration_ms is missing.
  • scrollIntoViewFirst (boolean, default: false): When true, the target is scrolled into view before the class is applied, so the flash is visible even if the target is currently offscreen.

useDefaultSelectTextHandler

Enable the default select_text handler. Resolves the target by ref / target_id and applies the page text selection (Range.selectNodeContents for document elements, el.select() for <input> / <textarea>). With start_offset / end_offset set, walks descendant text nodes to apply a sub-range selection.
import { useDefaultSelectTextHandler } from "@pipecat-ai/client-react";

useDefaultSelectTextHandler({ scrollIntoViewFirst: true });
Options
  • scrollIntoViewFirst (boolean, default: true): When true, scroll the target into view before applying the selection so the user actually sees what was selected.
  • block (ScrollLogicalPosition, default: "center"): scrollIntoView block position when scrolling first.

useDefaultSetInputValueHandler

Enable the default set_input_value handler. Resolves the target by ref / target_id and silently no-ops on disabled, readonly, and <input type="hidden"> targets so the agent can’t bypass UI affordances the user is meant to control. For text inputs and textareas it assigns el.value and dispatches single-shot input and change events, so React-controlled inputs and vanilla onChange listeners pick up the new value naturally. With replace: false on the payload, the new text is appended to the current value; the default replaces. Native <select> is also supported (the handler sets el.value and dispatches change); the replace flag is ignored there, since a select either has the value or doesn’t.
import { useDefaultSetInputValueHandler } from "@pipecat-ai/client-react";

useDefaultSetInputValueHandler({ focusFirst: true });
Options
  • focusFirst (boolean, default: false): When true, call focus() on the target before writing so the user sees the cursor land in the field. The element is blurred after the change events fire to avoid stealing keyboard focus mid-conversation.

useDefaultClickHandler

Enable the default click handler. Resolves the target by ref / target_id and calls el.click(). Silently no-ops on disabled and aria-disabled="true" targets so the agent can’t bypass UI affordances meant to be user-controlled.
import { useDefaultClickHandler } from "@pipecat-ai/client-react";

useDefaultClickHandler();
Takes no options.

useDefaultUICommandHandlers

Install all DOM-based default handlers (scroll_to, focus, highlight, select_text, set_input_value, click) at once. Pass per-handler option objects to customize.
import { useDefaultUICommandHandlers } from "@pipecat-ai/client-react";

useDefaultUICommandHandlers({
  scrollTo: { block: "center" },
  highlight: { defaultDurationMs: 2000 },
  setInputValue: { focusFirst: true },
});
Options
  • scrollTo (DefaultScrollToOptions): Forwarded to useDefaultScrollToHandler.
  • focus (DefaultFocusOptions): Forwarded to useDefaultFocusHandler.
  • highlight (DefaultHighlightOptions): Forwarded to useDefaultHighlightHandler.
  • selectText (DefaultSelectTextOptions): Forwarded to useDefaultSelectTextHandler.
  • setInputValue (DefaultSetInputValueOptions): Forwarded to useDefaultSetInputValueHandler.
useDefaultClickHandler takes no options and is always installed.

useToastHandler

Typed sugar for useUICommandHandler<ToastPayload>("toast", handler). Wire a toast renderer of your choice; the SDK doesn’t ship one.
import { useToastHandler } from "@pipecat-ai/client-react";
import { useCallback } from "react";

useToastHandler(
  useCallback((payload) => {
    myToastLib.show(payload.title, payload.subtitle);
  }, []),
);
Arguments
  • handler (UICommandHandler<ToastPayload>, required)

useNavigateHandler

Typed sugar for useUICommandHandler<NavigatePayload>("navigate", handler). Wire into your router of choice; the SDK doesn’t ship one.
import { useNavigateHandler } from "@pipecat-ai/client-react";
import { useCallback } from "react";
import { useNavigate } from "react-router-dom";

function NavWiring() {
  const navigate = useNavigate();
  useNavigateHandler(
    useCallback(
      (payload) => {
        navigate(`/${payload.view}`, { state: payload.params });
      },
      [navigate],
    ),
  );
  return null;
}
Arguments
  • handler (UICommandHandler<NavigatePayload>, required)

useUITasks

Subscribe to the in-flight task-group state surfaced by the server’s ui-task lifecycle envelopes. Returns the live list of task groups plus methods to cancel and prune task groups. For live task state, mount a UITasksProvider inside PipecatClientProvider. The provider listens to RTVIEvent.UITask envelopes from the underlying PipecatClient and reduces them into groups state.
import { useUITasks } from "@pipecat-ai/client-react";

function DiscoveryPanel() {
  const { groups, cancelTask } = useUITasks();
  const inflight = groups.find((g) => g.status === "running");
  if (!inflight) return null;
  return (
    <div>
      <h3>{inflight.label}</h3>
      {inflight.cancellable && (
        <button onClick={() => cancelTask(inflight.taskId, "user requested")}>
          Cancel
        </button>
      )}
    </div>
  );
}
Returns
  • groups (TaskGroup[]): Live array of in-flight and recently completed task groups, in arrival order. Each group has taskId, label, status ("running" / "completed" / "cancelled" / "error"), cancellable, timestamps, and per-worker entries with status, progress updates, and response.
  • cancelTask ((taskId: string, reason?: string) => void): Ask the server to cancel the named task group. Forwards to PipecatClient.cancelUITask. Honored only when the group was registered with cancellable=True on the server.
  • dismissTask ((taskId: string) => void): Remove a non-running group from local UI state. Running groups are kept.
  • clearCompleted (() => void): Remove every non-running group from local UI state.
When no UITasksProvider is mounted, useUITasks returns an empty groups list and no-op methods rather than throwing.
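For larger panels it can help to derive display state from groups with small pure helpers rather than inlining lookups in JSX. A sketch, using a local stand-in type that mirrors the TaskGroup fields documented above (the SDK’s own TaskGroup type should be used in real code):

```typescript
// Local stand-in mirroring the documented TaskGroup fields (illustration only;
// use the SDK's exported TaskGroup type in application code).
type TaskGroupLike = {
  taskId: string;
  label: string;
  status: "running" | "completed" | "cancelled" | "error";
  cancellable: boolean;
};

// First in-flight group, if any (groups arrive in order).
function activeGroup(groups: TaskGroupLike[]): TaskGroupLike | undefined {
  return groups.find((g) => g.status === "running");
}

// Groups that dismissTask / clearCompleted would remove (non-running only).
function prunable(groups: TaskGroupLike[]): TaskGroupLike[] {
  return groups.filter((g) => g.status !== "running");
}
```

These helpers are pure over the groups array, so they work unchanged whether a UITasksProvider is mounted or useUITasks is returning its empty fallback.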