Getting started

Sails AI is a multi-provider AI hook for Sails.js. It exposes sails.ai.chat() and sails.ai.stream() on top of whichever LLM provider you configure, such as Ollama, Cloudflare Workers AI, OpenAI, Anthropic, or your own custom adapter.

The hook uses an adapter pattern: install the core hook, add an adapter for the provider you want, configure it, and call sails.ai.chat() or sails.ai.stream().

Install the hook

sh
npm i sails-ai

Install a provider adapter

Sails AI ships adapters as separate packages. Install the one that matches your provider:

sh
npm i @sails-ai/local
sh
npm i @sails-ai/openai

TIP

@sails-ai/local connects to Ollama, which runs open-source LLMs on your machine. This is useful for local development.

@sails-ai/openai works with any OpenAI-compatible provider — Together AI, Groq, Fireworks, OpenRouter, OpenAI, and more. See the OpenAI adapter docs for the full list.

Set up Ollama

If you're using the local adapter, install and start Ollama:

sh
# macOS
brew install ollama

# Or download from https://ollama.com

# Start the server
ollama serve

# Pull a model
ollama pull qwen2.5:1.5b
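
Before configuring the hook, you can check that the server is reachable on its default port (11434, the same one used in the config below):

sh
# The root endpoint should answer "Ollama is running"
curl http://localhost:11434

# Confirm the model you pulled is available
ollama list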

Create config/ai.js

js
// config/ai.js
module.exports.ai = {
  provider: 'local',

  providers: {
    local: {
      adapter: '@sails-ai/local',
      baseUrl: process.env.OLLAMA_BASE_URL || 'http://localhost:11434'
    }
  }
}
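
To use a hosted, OpenAI-compatible provider instead, the config keeps the same shape: a named entry under providers with its adapter and options, and provider selecting which entry is active. The sketch below is for illustration only; the option names apiKey and baseUrl are assumptions, so check the OpenAI adapter docs for the exact keys it accepts.

js
// config/ai.js (hypothetical hosted setup; option names besides `adapter` are assumptions)
module.exports.ai = {
  provider: 'openai',

  providers: {
    openai: {
      adapter: '@sails-ai/openai',
      apiKey: process.env.OPENAI_API_KEY,
      // Any OpenAI-compatible endpoint works here (Together AI, Groq, OpenRouter, ...)
      baseUrl: process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1'
    }
  }
}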

Try it out

Once Sails lifts, you'll see:

info: sails-ai: Loaded provider 'local'

Now use it in any action or helper:

js
// inside an action's `fn` in api/controllers/example.js (or in a helper)
const reply = await sails.ai.chat('What is the capital of France?')
console.log(reply.content) // "The capital of France is Paris..."
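
sails.ai.stream() follows the same pattern for streaming responses. The sketch below assumes the stream is an async iterable of chunks with a content field; that shape is an assumption, so see the streaming docs for the real chunk format.

js
// A sketch only: the chunk shape (chunk.content) is assumed, not confirmed here
const stream = await sails.ai.stream('Tell me about Paris.')
for await (const chunk of stream) {
  process.stdout.write(chunk.content)
}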

That's it. Read on to learn about configuration, chat, streaming, and building adapters.
