🔌 **Adapter Pattern**: Swap AI providers without changing application code. Use Ollama locally, Cloudflare in production, or build your own adapter.
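A minimal sketch of what the adapter pattern enables here: application code depends only on an interface, so providers can be swapped freely. The `ChatAdapter` interface and `EchoAdapter` class below are hypothetical stand-ins, not the library's actual types.

```typescript
// Application code depends only on this interface, never on a
// concrete provider (Ollama, Cloudflare, etc.).
interface ChatAdapter {
  chat(prompt: string): Promise<string>;
}

// A trivial stand-in adapter; a real one would call a provider's API.
class EchoAdapter implements ChatAdapter {
  async chat(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}

// Swapping providers means passing a different adapter; this
// function never changes.
async function ask(adapter: ChatAdapter, prompt: string): Promise<string> {
  return adapter.chat(prompt);
}
```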
🌊 **Streaming Built-in**: First-class streaming support via async generators. Stream AI responses to your users token by token, in real time.
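The async-generator approach can be sketched like this; `fakeModel` is a made-up stand-in that yields words in place of real model tokens.

```typescript
// A stand-in "model" that streams tokens one at a time.
async function* fakeModel(prompt: string): AsyncGenerator<string> {
  for (const token of prompt.split(" ")) {
    yield token;
  }
}

// Consumers iterate with `for await`, handling each token as it arrives.
async function collect(prompt: string): Promise<string[]> {
  const tokens: string[] = [];
  for await (const token of fakeModel(prompt)) {
    tokens.push(token);
  }
  return tokens;
}
```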
🎯 **Ergonomic API**: From a simple one-liner like `sails.ai.chat('Hello')` to full conversations with `sails.ai.stream({ messages, system, model })`.