smolagents
A smol library to build great agents! `smolagents` is a library that enables you to run powerful agents in a few lines of code (see the minimal sketch after the list below). It offers:
- Simplicity: the logic for agents fits in ~1,000 lines of code. We kept abstractions to their minimal shape above raw code!
- First-class support for Code Agents: our `CodeAgent` writes its actions in code (as opposed to "agents being used to write code"). To make it secure, we support executing in sandboxed environments via E2B or via Docker.
- Hub integrations: you can share/pull tools to/from the Hub, and more is to come!
- Model-agnostic: smolagents supports any LLM. It can be a local `transformers` or `ollama` model, one of many providers on the Hub, or any model from OpenAI, Anthropic, and many others via our LiteLLM integration.
- Modality-agnostic: agents support text, vision, video, even audio inputs!
- Tool-agnostic: you can use tools from LangChain or Anthropic's MCP, and you can even use a Hub Space as a tool.
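
As a quick illustration of the "few lines of code" claim, here is a minimal sketch of building and running an agent. It follows the library's quickstart pattern, but class names such as `HfApiModel` and `DuckDuckGoSearchTool` may vary between releases, so treat it as indicative rather than definitive:

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# Pick a model: HfApiModel calls a model hosted on the Hugging Face Hub.
# Swapping in TransformersModel (local) or LiteLLMModel (OpenAI, Anthropic, ...)
# is what makes the library model-agnostic.
model = HfApiModel()

# Give the agent a web-search tool and let it write Python code as its actions.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

agent.run("How many seconds would it take for a leopard at full speed to run through Pont des Arts?")
```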
Our `CodeAgent` works mostly like classical ReAct agents, the exception being that the LLM engine writes its actions as Python code snippets. Since actions are code, tool calls are performed as Python function calls. Writing actions as code snippets has been demonstrated to work better than the current industry practice of letting the LLM output a dictionary of the tools it wants to call: it uses 30% fewer steps and reaches higher performance on difficult benchmarks.
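
Because actions are Python snippets, every registered tool is exposed to the agent as an ordinary Python function that it can call, compose, and loop over within a single action. The sketch below uses a hypothetical `get_weather` tool to show the shape of this; the `@tool` decorator expects type hints and an `Args:` docstring section so the function can be described to the model:

```python
from smolagents import CodeAgent, HfApiModel, tool

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city.

    Args:
        city: Name of the city to look up.
    """
    # Hypothetical placeholder; a real tool would query a weather API here.
    return f"The weather in {city} is sunny, 22°C."

agent = CodeAgent(tools=[get_weather], model=HfApiModel())

# Inside its generated code, the agent calls the tool like any function,
# e.g. `report = get_weather(city="Paris")`, instead of emitting a JSON tool call.
agent.run("What is the weather like in Paris right now?")
```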