Describe your problem

Experience local, state-of-the-art agentic AI in your browser

Start Chatting Now

Features of core.ml

Boost your AI workflow

Local model in the browser

Chat with a powerful AI model running entirely in your browser. No data is sent to external servers.
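The page does not say which runtime core.ml uses for in-browser inference; the sketch below assumes a WebLLM-style engine (`@mlc-ai/web-llm`), and the model ID is an illustrative choice, not core.ml's documented configuration.

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Minimal sketch: load a quantized model into the browser and run one chat turn.
// Everything below executes client-side; no prompt or response leaves the device.
async function chatLocally(prompt: string): Promise<string> {
  // Hypothetical model choice; core.ml's actual model is not specified on the page.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (report) => console.log(report.text), // download/compile progress
  });

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
  });
  return reply.choices[0].message.content ?? "";
}

chatLocally("Summarize WebGPU in two sentences.").then(console.log);
```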

Upload your files and use MCP

Leverages your GPU for accelerated AI inference through WebGPU technology, making interactions faster and more responsive.
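As a rough sketch of what that WebGPU path involves (assuming WebGPU type definitions such as `@webgpu/types` are available), a page can probe for a usable adapter before committing to GPU inference:

```ts
// Probe for WebGPU support and a usable GPU adapter before attempting accelerated inference.
async function hasUsableWebGPU(): Promise<boolean> {
  if (!("gpu" in navigator)) {
    return false; // browser does not expose the WebGPU API at all
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return false; // API present, but no compatible GPU or driver
  }
  const device = await adapter.requestDevice();
  console.log("Max GPU buffer size:", device.limits.maxBufferSize);
  return true;
}

hasUsableWebGPU().then((ok) => console.log(ok ? "GPU inference possible" : "Falling back"));
```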

Agent collaboration

Model responses support markdown formatting for rich text, code blocks, lists, and other structured content.
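The page does not name the renderer core.ml uses; one common way to handle such responses is to convert the markdown with `marked` and sanitize the resulting HTML with `DOMPurify` before inserting it into the page, as in this sketch (the `chat-output` element ID is hypothetical):

```ts
import { marked } from "marked";
import DOMPurify from "dompurify";

// Turn a markdown model reply into safe HTML for the chat transcript.
function renderMarkdownReply(markdown: string): string {
  const html = marked.parse(markdown) as string; // synchronous with default options
  return DOMPurify.sanitize(html); // strip anything the model should not be able to inject
}

const reply = "**Done.** Here is a list:\n\n- item one\n- item two";
document.getElementById("chat-output")!.innerHTML = renderMarkdownReply(reply);
```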

Reasoning and long context

Works across different platforms and devices as long as they have a compatible browser with WebGPU support.

RAG over libraries and news

The model is cached in your browser's storage after the first load, enabling faster startup on subsequent visits.
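From the application side, this roughly comes down to the browser's Cache Storage and Storage APIs; the sketch below shows how an app might inspect what is already cached and how much quota remains before triggering a multi-gigabyte download (the cache names are created by the runtime and are not documented on the page):

```ts
// Inspect browser storage before (re)downloading model weights.
async function reportModelStorage(): Promise<void> {
  // Cache Storage buckets created on the first load (names depend on the runtime in use).
  const cacheNames = await caches.keys();
  console.log("Existing caches:", cacheNames);

  // Overall origin usage vs. quota, useful for warning the user before a large download.
  const { usage = 0, quota = 0 } = await navigator.storage.estimate();
  console.log(`Using ${(usage / 1e9).toFixed(2)} GB of ~${(quota / 1e9).toFixed(2)} GB quota`);

  // Ask for persistent storage so the cached model is not evicted under storage pressure.
  const persisted = await navigator.storage.persist();
  console.log(persisted ? "Storage persisted" : "Storage may be evicted");
}

reportModelStorage();
```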

Learn and interact by voice

Watch as the model generates responses in real time, providing a more interactive and engaging experience.
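Continuing the WebLLM-style sketch from above (again an assumption about the runtime, not a documented core.ml API), streaming is an async iteration over completion chunks, appending each delta to the UI as it arrives; the `chat-output` element ID is hypothetical.

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Stream the reply token by token so the transcript updates while generation is still running.
async function streamReply(prompt: string, onToken: (text: string) => void): Promise<void> {
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC"); // illustrative model ID
  const chunks = await engine.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
    stream: true, // ask for incremental chunks instead of one final message
  });
  for await (const chunk of chunks) {
    onToken(chunk.choices[0]?.delta?.content ?? "");
  }
}

streamReply("Explain WebGPU in one paragraph.", (text) => {
  document.getElementById("chat-output")!.textContent += text;
});
```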