PipeLLM gives teams a control plane for agent runtime, WebSearch, integrations, and governance across Bedrock, Anthropic, OpenAI, Gemini, and other approved models.
Keep your existing SDKs and agent tools while PipeLLM handles routing, provider access, and production controls.
PipeLLM helps teams run agents with managed tools, policy controls, and clear operational visibility.
View the platform docs
Keep agent runs stateful, reviewable, and ready for longer workflows.
Route through approved models and enforce cost limits and action-level permissions.
Add fallback logic, traceability, and operator visibility before launch.
Runtime flow
An agent run opens under one PipeLLM runtime with user, model, and team context attached.
Model allowlists, usage budgets, and tool permissions are checked before the next step executes.
The agent can call managed services like WebSearch while the runtime keeps every action traceable.
Runs stay visible for cost checks, incident review, and enterprise approval workflows.
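The flow above might look like the following minimal sketch. The `pipellm` module, its method names, and every field shown are illustrative assumptions for this page, not the actual SDK surface.

```python
# Minimal sketch of the runtime flow above. The `pipellm` module, its
# method names, and all fields are assumptions, not the real SDK.
from pipellm import Runtime  # hypothetical client library

# 1. A run opens under one runtime with user, model, and team context.
runtime = Runtime(api_key="YOUR_PIPELLM_KEY")
run = runtime.open_run(
    user="analyst@example.com",
    team="research",
    model="anthropic/claude-sonnet",  # must sit on the team's allowlist
)

# 2. Allowlists, usage budgets, and tool permissions are checked before
#    each step executes; a violation fails the step instead of running it.
step = run.step(
    prompt="Summarize this week's AWS agent announcements.",
    tools=["websearch"],  # only permitted managed tools are loaded
)

# 3. Every action stays traceable for cost checks, incident review,
#    and enterprise approval workflows.
print(step.output)
print(run.trace_url)  # assumed link into the run's audit trail
```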
Focus: Runtime and orchestration
Value: Stable production agent runs
Buyer signal: Governed agent infrastructure
This section keeps WebSearch concrete and compact; Companion, covered below, carries the broader story around sessions, memory, tools, and guided work.
Equip your AI agents with real-time web context via a single unified API.
Agent Action: research: summarize new AWS agent announcements this week
PipeLLM Route: https://api.pipellm.ai/v1/websearch/search?q=new+AWS+agent...
Vanilla LLM: "I don't have real-time data to provide this week's AWS announcements. My primary knowledge cutoff is 2023. Please check the official AWS news blog."
Grounded Agent (via the PipeLLM Gateway): This week, AWS announced major updates for Bedrock Agents:
1. Memory: Agents can retain state across sessions.
2. Routing: Dynamic routing between models.
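A direct call to the route shown above could look like this sketch using `requests`. The endpoint path comes from the demo itself, but the auth header and response shape are assumptions rather than documented API behavior.

```python
# Sketch of the WebSearch call from the demo above. The endpoint is taken
# from the route shown; the auth scheme and response shape are assumptions.
import requests

resp = requests.get(
    "https://api.pipellm.ai/v1/websearch/search",
    params={"q": "new AWS agent announcements this week"},
    headers={"Authorization": "Bearer YOUR_PIPELLM_KEY"},  # assumed auth scheme
    timeout=30,
)
resp.raise_for_status()

# Assumed: results arrive as a list of snippets the agent can ground on.
for result in resp.json().get("results", []):
    print(result.get("title"), result.get("url"))
```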
A unified conversational interface powered by your Sentinel orchestration system. Companion maintains long-term memory across sessions, executes workflows inside team channels, and pauses for human review before sensitive actions.
Persistent sessions
Keep conversation state, run history, and cumulative usage across ongoing work.
Long-term memory
Save durable decisions and preferences, then pull them back when work resumes.
Channel-native work
Operate inside Feishu or Telegram with docs, tasks, calendars, and workspace context.
Approval-aware actions
Ask for confirmation before sensitive actions and keep a clear pending-action step.
Plugin and tool access
Load managed services, npm plugins, and workspace skills into the same assistant.
Delegated workflows
Spawn isolated sub-agents for deeper tasks with timeout and depth controls.
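As a sketch of how the approval-aware actions and delegated workflows above might be driven programmatically: the `companion` module and every name in it are hypothetical, chosen only to make the behavior concrete.

```python
# Hypothetical sketch of approval-aware actions and delegated workflows.
# The `companion` module and all names here are assumptions, not the shipped API.
from companion import Assistant  # hypothetical

assistant = Assistant(channel="feishu", session="q3-planning")

# Sensitive actions park as a pending step until a human confirms in-channel.
action = assistant.propose_action(
    "delete_calendar_event",
    args={"event_id": "evt_123"},
    requires_approval=True,
)
if action.approved():  # blocks on human review
    action.execute()

# Deeper tasks go to an isolated sub-agent with timeout and depth controls.
report = assistant.delegate(
    task="audit last quarter's task completion",
    timeout_s=600,   # hard wall-clock limit for the sub-agent
    max_depth=2,     # cap on nested delegation
)
print(report.summary)
```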
Architecture Context
While Sentinel handles the underlying orchestration layer (state management, plugins, and guarded execution), Companion is the interactive surface built on top for your team to use.
PipeLLM stays underneath the tools teams already know.
Keep Claude Code, OpenClaw, OpenCode, or LangChain in front. PipeLLM becomes the layer for routing, tool access, and governance under the stack your team already uses.
PipeLLM lets teams keep familiar clients while moving execution onto governed runtime, managed search, and approved provider access.
VIEW INTEGRATIONS
Keep your existing OpenAI client and route it through PipeLLM to reach approved models.
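For example, the standard OpenAI Python SDK can be pointed at a different base URL. The gateway endpoint and model alias below are assumptions for illustration, not confirmed configuration.

```python
# Sketch: point the standard OpenAI SDK at PipeLLM instead of api.openai.com.
# The gateway base URL and model alias are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.pipellm.ai/v1",  # assumed gateway endpoint
    api_key="YOUR_PIPELLM_KEY",            # PipeLLM key, not an OpenAI key
)

# The request shape is unchanged; PipeLLM routes it to an approved model.
resp = client.chat.completions.create(
    model="gpt-4o",  # resolved against the team's model allowlist
    messages=[{"role": "user", "content": "Summarize this week's AWS agent news."}],
)
print(resp.choices[0].message.content)
```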
Use Anthropic-compatible tooling while PipeLLM handles routing, provider access, and runtime controls.
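The same pattern works with the Anthropic Python SDK, which also accepts a base URL override; again, the endpoint and model id below are assumed values.

```python
# Same pattern with Anthropic-compatible tooling; the base URL is an
# assumed PipeLLM gateway endpoint, not documented configuration.
from anthropic import Anthropic

client = Anthropic(
    base_url="https://api.pipellm.ai/v1",  # assumed gateway endpoint
    api_key="YOUR_PIPELLM_KEY",
)

msg = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative id on the allowlist
    max_tokens=512,
    messages=[{"role": "user", "content": "Draft a release note for the team."}],
)
print(msg.content[0].text)
```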
Keep familiar agent patterns while moving execution onto PipeLLM-managed models and tools.
Start with managed runtime and tool access, then layer in governance, approvals, and observability as your agent usage grows.