use-ai will be open source soon! → https://github.com/meetsmore/use-ai
For a full listing of features, check the README.md.
## Overview
use-ai is a client-side library and server for easily adding AI power to your React apps.
You can quickly add an AI chat to your app that can control components on your page, unlocking many use cases for automation for your users with very little code.
1. TodoList.tsx
```tsx
export default function TodoList() {
  const { todos, addTodo, deleteTodo, toggleTodo } = useTodoLogic();

  const { ref } = useAI({
    tools: { addTodo, deleteTodo, toggleTodo },
    prompt: `Todo List: ${JSON.stringify(todos)}`,
  });

  return <div ref={ref}>{/* render the todos */}</div>;
}
```
2. index.tsx
```tsx
root.render(
  <UseAIProvider serverUrl="ws://localhost:8081">
    <App />
  </UseAIProvider>
);
```
- Components call `useAI` to declare their tools and state (prompt) to use-ai.
- `UseAIProvider` provides a floating-action-button chat UI and aggregates `useAI` tools + prompts from all child components.
- `@meetsmore/use-ai-server` acts as a coordinator between your frontend and an LLM.
- ✨ The LLM can now call your `tools` functions in the frontend as MCP tools.
## Why?
We built use-ai because our MeetsMore and ProOne products had many different use cases where we wanted to let users get things done quickly with AI.
We had a list of prompts we wanted to support; some examples:
- "I want to delete a job I registered by mistake"
- "I want to move a specific job that is already in/after the Order phase back to before order"
- "I want to know this month’s revenue and costs"
Many members of our organization had ideas for how this could be used, so we wanted to provide simple, scalable tools through which this could be achieved, instead of each team building their own AI capabilities.
## Architecture

- 💻 [client] → `useAI` calls provide JavaScript functions with metadata to be used as tools.
- 💻 [client] → `UseAIProvider` collects all mounted components with `useAI` hooks and sends their tools to a `UseAIServer`.
- ☁️ [server] → `UseAIServer` coordinates between the client side and the LLM, exposing the client-side tools to the LLM as MCP tools.
- 🤖 [LLM] → The LLM agent runs and invokes client-side tools if needed.
- ☁️ [server] → The server asks the client to invoke the requested tool with the arguments the LLM chose.
- 💻 [client] → The client invokes the requested function with its arguments.
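The last three steps boil down to a request/response exchange between server and client. The real use-ai wire format is internal to the library; purely as an illustration, with hypothetical message shapes, the client side of that exchange might look like:

```typescript
// Hypothetical message shapes -- the actual use-ai protocol may differ.
type ToolCallRequest = {
  kind: 'tool_call';
  callId: string; // correlates request and response
  tool: string;   // e.g. 'deleteTodo'
  args: unknown;  // arguments chosen by the LLM
};

type ToolCallResponse = {
  kind: 'tool_result';
  callId: string;
  result?: unknown;
  error?: string;
};

// Client side: look up the registered tool and run it with the LLM's arguments.
async function handleToolCall(
  msg: ToolCallRequest,
  tools: Record<string, (args: unknown) => unknown>
): Promise<ToolCallResponse> {
  const tool = tools[msg.tool];
  if (!tool) {
    return { kind: 'tool_result', callId: msg.callId, error: `unknown tool: ${msg.tool}` };
  }
  try {
    return { kind: 'tool_result', callId: msg.callId, result: await tool(msg.args) };
  } catch (e) {
    return { kind: 'tool_result', callId: msg.callId, error: String(e) };
  }
}
```

The `callId` is what lets the server match an asynchronous result back to the LLM tool call that requested it.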
## What use-ai solves for you
### State
Internally, use-ai handles a lot of the complexity of keeping state in sync with the AI, so the AI is always aware of changes to your UI components and their state.
It uses the prompt argument of useAI hooks to represent state, so the actual React component state is not included, which reduces token waste.
Essentially, you can be sure that use-ai and its chat pane always have the right state for your UI, before and after changes.
Only mounted components’ state and tools are included, so if you switch screens in your app, the AI only knows what it can ‘see’.
You can still use invisible components (Providers) to provide ‘global’ tools and state to the AI.
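Conceptually, this is a registry keyed by mounted hooks: mounting registers a component’s prompt and tools, unmounting removes them. The sketch below is a simplified model of that idea, not use-ai’s actual internals:

```typescript
// Simplified model of prompt/tool aggregation -- not the real use-ai internals.
type Registration = {
  id: string;
  prompt: string;
  tools: Record<string, (args: unknown) => unknown>;
};

class Registry {
  private entries = new Map<string, Registration>();

  // Called when a component using useAI mounts (or when its prompt changes).
  register(entry: Registration) {
    this.entries.set(entry.id, entry);
  }

  // Called on unmount: the AI immediately stops 'seeing' this component.
  unregister(id: string) {
    this.entries.delete(id);
  }

  // What gets shared with the server: only mounted components' prompts and tools.
  snapshot() {
    const entries = [...this.entries.values()];
    return {
      prompt: entries.map((e) => e.prompt).join('\n'),
      tools: Object.assign({}, ...entries.map((e) => e.tools)) as Registration['tools'],
    };
  }
}
```

An invisible ‘Provider’ component is then just a registration that never corresponds to anything rendered on screen.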
### Fully featured Chat UI
use-ai comes with a fully featured chat UI out of the box, including:
- Local storage of chat history.
- Multiple chats.
- Agent selection.
- ‘/slash’ commands.
- File upload (drag and drop).
### Emergent Behaviour
One of the coolest things about use-ai is how much power even a few simple tools unlock.
```tsx
export default function TodoList() {
  const tools = useMemo(() => {
    const addTodo = defineTool(
      'Add a new todo item to the list',
      z.object({
        text: z.string().describe('The text content of the todo item'),
      }),
      (input) => addTodoFn(input.text)
    );

    const deleteTodo = defineTool(
      'Delete a todo item by its ID',
      z.object({
        id: z.number().describe('The ID of the todo item to delete'),
      }),
      (input) => deleteTodoFn(input.id),
      { confirmationRequired: true }
    );

    const toggleTodo = defineTool(
      'Toggle the completed status of a todo item',
      z.object({
        id: z.number().describe('The ID of the todo item to toggle'),
      }),
      (input) => toggleTodoFn(input.id)
    );

    return { addTodo, deleteTodo, toggleTodo };
  }, []);

  const { ref } = useAI({
    tools,
    prompt: `Todo List: ${JSON.stringify(todos)}`,
  });
}
```
Even with these simple tools, you can enable batch edit functionalities like:
- 'Add a shopping list to make tonkotsu ramen'.
- 'I already have all of the vegetables, check them off.'
Because LLMs can emit multiple tool calls in a single response, use-ai can perform all these actions in one batch!
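In isolation, a batched response is just a list of tool calls executed in order on the client. The sketch below uses hypothetical shapes, not the library’s API:

```typescript
// Hypothetical shape of one LLM response containing several tool calls.
type ToolCall = { tool: string; args: unknown };

// Run every call from a single response in order, collecting the results.
async function runBatch(
  calls: ToolCall[],
  tools: Record<string, (args: unknown) => unknown | Promise<unknown>>
): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const call of calls) {
    // Sequential on purpose: later calls may depend on state earlier ones changed.
    results.push(await tools[call.tool](call.args));
  }
  return results;
}
```

So “check off all the vegetables” becomes one response containing several `toggleTodo` calls, each applied in turn.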
### AG-UI
The Agent-User Interaction Protocol (AG-UI) is an event-based protocol that standardizes how AI agents connect to user-facing applications.
use-ai conforms to this protocol, which means you can swap in your own backend or frontend, provided it conforms to the same protocol.
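In AG-UI, a run is a stream of typed events that the frontend folds into UI state. As a minimal sketch (a small subset of event types with simplified fields; see the AG-UI spec for the full set):

```typescript
// A small subset of AG-UI-style events, with fields simplified for illustration.
type AgUiEvent =
  | { type: 'RUN_STARTED' }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'RUN_FINISHED' };

// Minimal consumer: fold a stream of events into the displayed assistant message.
function foldEvents(events: AgUiEvent[]): { text: string; done: boolean } {
  let text = '';
  let done = false;
  for (const ev of events) {
    if (ev.type === 'TEXT_MESSAGE_CONTENT') text += ev.delta; // streamed token chunk
    if (ev.type === 'RUN_FINISHED') done = true;
  }
  return { text, done };
}
```

Because both sides only depend on the event stream, either half (chat UI or server) can be replaced independently.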
## Features
use-ai has many features, all of which are detailed in the official README.md.
We won’t go into each in detail, but here is a quick list of all of its other capabilities.
| Component resolution | The AI can tell the difference between multiple instances of the same component. |
|---|---|
| Invisible components | ‘Provider’ components can be created that don’t render, simply giving the AI tools or context. |
| Suggestions | `useAI` hooks can provide suggestions, which the chat pane will show if the chat is blank. |
| `confirmationRequired` | Mark tools as requiring confirmation before the LLM executes them (for destructive actions). |
| Fully featured chat UI | A production-ready chat UI with many quality-of-life features. |
| Theming | Ability to theme the chat UI. |
| Internationalization | Ability to provide your own strings for any string the chat UI uses. |
| Ready-to-use server | A batteries-included `UseAIServer` you can use for most cases. |
| Server-side MCPs | Ability to add server-side MCPs. You can provide code to `UseAIProvider` in the frontend to pass auth credentials to these MCPs. |
| Rate limiting | Rate limiting included in `UseAIServer` to prevent abuse of your tokens. |
| Langfuse integration | Out-of-the-box support for Langfuse. |
| Plugin support | Ability to extend the server with plugins to add new features. |
| Headless workflows | Define headless workflows you can trigger using the Workflows Plugin. |
## Open Source
We’ve decided to open source use-ai in order to enable you to build great things with AI quickly.
We welcome contributions on our repository.
## Thanks
Thank you to Masuda-san for his contributions to the project.