AI Integrations
WSO2 Integrator lets you build AI-powered integrations, including direct LLM calls, RAG pipelines, AI agents, and MCP servers.
Getting started
- Build a Sentiment Analyzer: Your first AI integration with a direct LLM call (sketch below).
- Build a Hotel Finder Agent: An agent with two custom tools and session-scoped memory.
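As a taste of the first tutorial, here is a minimal sketch of a sentiment analyzer in Python. It is a conceptual stand-in rather than WSO2 Integrator's own constructs: it assumes the official OpenAI Python SDK, and the model name, prompt, and `OPENAI_API_KEY` convention are illustrative.

```python
# Minimal sketch of a sentiment analyzer: one prompt in, one typed value out.
# Assumes the official openai Python SDK and an OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative.
from enum import Enum

from openai import OpenAI


class Sentiment(str, Enum):
    POSITIVE = "POSITIVE"
    NEGATIVE = "NEGATIVE"
    NEUTRAL = "NEUTRAL"


client = OpenAI()  # reads OPENAI_API_KEY from the environment


def analyze_sentiment(text: str) -> Sentiment:
    """One round trip to the model, with the reply parsed into a typed value."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {
                "role": "system",
                "content": "Reply with exactly one word: POSITIVE, NEGATIVE, or NEUTRAL.",
            },
            {"role": "user", "content": text},
        ],
    )
    return Sentiment(response.choices[0].message.content.strip().upper())


if __name__ == "__main__":
    print(analyze_sentiment("The new release fixed every issue I reported."))
```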
Develop AI applications
- Direct LLM calls: The simplest AI block. Send a prompt and bind the response to a typed value in a single round trip.
- RAG: Retrieval-augmented generation that grounds LLM responses in your own documents by retrieving relevant content at query time and injecting it into the prompt (sketch below).
- AI agents: Autonomous LLM-driven agents that reason over a system prompt, call tools, and maintain conversation state across turns (sketch below).
- MCP integration: Expose your integrations as MCP tools for AI assistants, or use external MCP tools with your agents (sketch below).
- Natural functions: (Experimental) Write the function body in plain English. The LLM returns a value that conforms to your declared return type.
- Model providers: Connect to OpenAI, Anthropic, Azure OpenAI, Google Vertex, Mistral, and others through one consistent interface.
- Embedding providers: Turn text into semantic vectors, used at both ingest and query time for similarity search.
- Vector stores: Persist embeddings and run similarity search, backed by an in-memory store, Pinecone, pgvector, Weaviate, or Milvus (sketch below).
- Knowledge bases: The indexable document store RAG reads from and writes to, composed of a vector store, embedding provider, and chunker.
- Chunkers: Split documents into chunks before embedding. Smaller chunks improve retrieval precision, while larger chunks preserve more surrounding context (sketch below).
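The sketches below illustrate several of these building blocks in plain Python. They are conceptual stand-ins, not WSO2 Integrator or vendor APIs, and every function and parameter name is an illustrative assumption. First, fixed-size chunking with overlap, the precision-versus-context trade-off the Chunkers entry describes:

```python
# Fixed-size chunking with overlap. Overlap keeps sentences that straddle a
# boundary intact in at least one chunk; chunk_size controls the
# precision-versus-context trade-off.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```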
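Next, an embedding provider and an in-memory vector store. `embed` here is a dependency-free toy (a hashed bag of words) standing in for a real provider; whichever provider you choose must be applied at both ingest and query time, as the Embedding providers entry notes:

```python
import hashlib
import math

DIM = 64  # toy dimensionality; real embedding models use hundreds or more


def embed(text: str) -> list[float]:
    """Toy stand-in for an embedding provider: a normalized hashed bag of words."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]


class InMemoryVectorStore:
    """Stores (vector, text) pairs; search ranks by cosine similarity
    (a plain dot product, since embed() returns unit vectors)."""

    def __init__(self) -> None:
        self.entries: list[tuple[list[float], str]] = []

    def add(self, text: str) -> None:
        self.entries.append((embed(text), text))

    def search(self, query: str, top_k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(
            self.entries,
            key=lambda entry: -sum(a * b for a, b in zip(q, entry[0])),
        )
        return [text for _, text in ranked[:top_k]]
```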
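With retrieval in place, the RAG query path is retrieve-then-augment, exactly as the RAG entry describes. This sketch reuses `chunk_text` and `InMemoryVectorStore` from the previous blocks; `complete` is a placeholder for whichever model provider you configure:

```python
def complete(prompt: str) -> str:
    # Placeholder: route this prompt to your configured model provider.
    return "(model response would appear here)"


def answer_with_rag(store: InMemoryVectorStore, question: str) -> str:
    # 1. Retrieve the stored chunks most similar to the question.
    context = "\n".join(store.search(question, top_k=3))
    # 2. Inject them into the prompt so the model answers from your documents.
    prompt = (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)


if __name__ == "__main__":
    store = InMemoryVectorStore()
    for chunk in chunk_text("WSO2 Integrator supports RAG pipelines...", 80, 10):
        store.add(chunk)
    print(answer_with_rag(store, "What does WSO2 Integrator support?"))
```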
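The agent loop behind the Hotel Finder tutorial follows the pattern in the AI agents entry: on each turn the model either requests a tool or returns a final answer, and tool results are appended to the conversation state before the loop repeats. Everything below is illustrative, including the scripted `model_step` stand-in for a real model call:

```python
from typing import Callable

# Illustrative tool registry; in a real agent these are your custom tools.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_hotels": lambda city: f"3 hotels found in {city}",
}


def model_step(messages: list[dict]) -> dict:
    """Scripted stand-in for a real model call: it requests a tool first,
    then answers once a tool result appears in the conversation."""
    if any(m["role"] == "tool" for m in messages):
        return {"answer": "I found 3 hotels in Colombo for you."}
    return {"tool": "search_hotels", "arg": "Colombo"}


def run_agent(user_query: str, max_turns: int = 5) -> str:
    messages = [
        {"role": "system", "content": "You are a hotel-finding assistant."},
        {"role": "user", "content": user_query},
    ]
    for _ in range(max_turns):
        step = model_step(messages)
        if "answer" in step:  # the model decided it is done
            return step["answer"]
        result = TOOLS[step["tool"]](step["arg"])  # call the requested tool
        messages.append({"role": "tool", "content": result})
    return "Stopped: too many tool calls."


if __name__ == "__main__":
    print(run_agent("Find me a hotel in Colombo"))
```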
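Finally, the server side of the MCP integration entry: exposing one operation as an MCP tool that AI assistants can call. This sketch assumes the official Python MCP SDK (`pip install mcp`) and its FastMCP helper; the server name, tool name, and body are placeholders for a real integration:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-lookup")


@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the current status of an order."""
    return f"Order {order_id}: shipped"  # placeholder for a real backend call


if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable assistant can attach
```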