# Getting started
Sails AI is a multi-provider AI hook for Sails.js. It gives you a clean, ergonomic API to chat and stream with any LLM provider — Ollama, Cloudflare Workers AI, OpenAI, Anthropic, or your own.

The hook uses an adapter pattern (similar to Sails Pay): you install the core hook, pick an adapter for your provider, configure it, and call `sails.ai.chat()` or `sails.ai.stream()`. Swap providers by changing one line of config.
## Install the hook
```sh
npm i sails-ai
```

## Install a provider adapter
Sails AI ships adapters as separate packages. Install the one that matches your provider:
```sh
npm i @sails-ai/local
npm i @sails-ai/openai
```

> **TIP**
>
> `@sails-ai/local` connects to Ollama, which runs open-source LLMs on your machine for free. Perfect for development.
>
> `@sails-ai/openai` works with any OpenAI-compatible provider — Together AI, Groq, Fireworks, OpenRouter, OpenAI, and more. See the OpenAI adapter docs for the full list.
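For example, pointing the OpenAI-compatible adapter at one of those providers might look like the sketch below. This is an illustration, not the adapter's documented API: the `apiKey` option name is an assumption, so check the OpenAI adapter docs for the real option names.

```javascript
// config/ai.js — hypothetical Groq setup via the OpenAI-compatible adapter
module.exports.ai = {
  provider: 'groq',
  providers: {
    groq: {
      adapter: '@sails-ai/openai',
      // Groq exposes an OpenAI-compatible endpoint at this base URL
      baseUrl: 'https://api.groq.com/openai/v1',
      apiKey: process.env.GROQ_API_KEY // assumed option name — verify in the adapter docs
    }
  }
}
```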
## Set up Ollama
If you're using the local adapter, install and start Ollama:
```sh
# macOS
brew install ollama
# Or download from https://ollama.com

# Start the server
ollama serve

# Pull a model
ollama pull qwen2.5:1.5b
```

## Create `config/ai.js`
```js
// config/ai.js
module.exports.ai = {
  provider: 'local',
  providers: {
    local: {
      adapter: '@sails-ai/local',
      baseUrl: process.env.OLLAMA_BASE_URL || 'http://localhost:11434'
    }
  }
}
```

## Try it out
Once Sails lifts, you'll see:
```
info: sails-ai: Loaded provider 'local'
```

Now use it in any action or helper:
```js
// api/controllers/example.js
module.exports = {
  fn: async function () {
    const reply = await sails.ai.chat('What is the capital of France?')
    return reply.content // "The capital of France is Paris..."
  }
}
```

That's it. Read on to learn about configuration, chat, streaming, and building adapters.