Agents on the Hub

This page lists all the libraries and tools Hugging Face offers for agentic workflows:

  • HF MCP Server: Connect MCP-compatible AI assistants directly to the Hugging Face Hub.
  • tiny-agents: A lightweight toolkit for MCP-powered agents, available in JS (@huggingface/tiny-agents) and Python (huggingface_hub).
  • Gradio MCP Server: Easily create MCP servers from Gradio apps and Spaces.
  • smolagents: A Python library that enables you to run powerful agents in a few lines of code.

HF MCP Server

The official Hugging Face MCP (Model Context Protocol) Server lets any MCP-compatible AI assistant (including VSCode, Cursor, Claude Desktop, and others) integrate seamlessly with the Hugging Face Hub.

With the HF MCP Server, you can enhance your AI assistant's capabilities by connecting directly to the Hub ecosystem. It includes:

  • A curated set of built-in tools, such as Spaces and Papers semantic search, model and dataset exploration, and more
  • MCP-enabled Gradio apps: connect to any MCP-compatible Gradio app in the community

Getting Started

Go to huggingface.co/settings/mcp to configure your MCP client and get started. You can also check out the dedicated guide: HF MCP Server.
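
For MCP clients that take a JSON mcpServers configuration (like the Gradio example later on this page), a typical configuration looks roughly like the sketch below. Treat it as illustrative only: use the exact snippet generated for your client on the settings page, note that the server name "hf-mcp-server" here is arbitrary, and replace the token placeholder with your own Hugging Face access token.

{
    "mcpServers": {
        "hf-mcp-server": {
            "url": "https://huggingface.co/mcp",
            "headers": {
                "Authorization": "Bearer <YOUR_HF_TOKEN>"
            }
        }
    }
}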

[!WARNING] This feature is experimental ⚗️ and will continue to evolve.

tiny-agents (JS and Python)

NEW: tiny-agents now supports the AGENTS.md standard. 🥳

tiny-agents is a lightweight toolkit for running and building MCP-powered agents on top of the Hugging Face Inference Client and the Model Context Protocol (MCP). It is available as a JS package, @huggingface/tiny-agents, and in the huggingface_hub Python package.

@huggingface/tiny-agents (JS)

@huggingface/tiny-agents offers a straightforward CLI and a simple programmatic interface for running and building MCP-powered agents in JS.

Getting Started

First, install the package:

npm install @huggingface/tiny-agents
# or
pnpm add @huggingface/tiny-agents

Then, you can run your agent:

npx @huggingface/tiny-agents [command] "agent/id"

Usage:
    tiny-agents [flags]
    tiny-agents run "agent/id"
    tiny-agents serve "agent/id"

Available Commands:
    run       Run the Agent in command-line
    serve     Run the Agent as an OpenAI-compatible HTTP server

You can load agents directly from the tiny-agents Dataset, or specify a path to your own local agent configuration.
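
For example, assuming a local folder my-agent/ containing an agent.json, you could run it with npx @huggingface/tiny-agents run ./my-agent, or expose it as an OpenAI-compatible server with npx @huggingface/tiny-agents serve ./my-agent.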

Advanced Usage

In addition to the CLI, you can use the Agent class for more fine-grained control. For lower-level interactions, use the MCPClient from the @huggingface/mcp-client package to connect directly to MCP servers and manage tool calls.

Learn more about tiny-agents in the huggingface.js documentation.

huggingface_hub (Python)

The huggingface_hub library is the easiest way to run MCP-powered agents in Python. It includes a high-level tiny-agents CLI as well as programmatic access via the Agent and MCPClient classes — all built to work with Hugging Face Inference Providers, local LLMs, or any inference endpoint compatible with OpenAI's API specs.

Getting started

Install the latest version with MCP support:

pip install "huggingface_hub[mcp]>=0.32.2"

Then, you can run your agent:

> tiny-agents run --help

Usage: tiny-agents run [OPTIONS] [PATH] COMMAND [ARGS]...

Run the Agent in the CLI


╭─ Arguments ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ path [PATH] Path to a local folder containing an agent.json file or a built-in agent stored in the 'tiny-agents/tiny-agents' Hugging Face dataset │
│ (https://huggingface.co/datasets/tiny-agents/tiny-agents) │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help Show this message and exit. │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

The CLI pulls the config, connects to its MCP servers, prints the available tools, and waits for your prompt.

Advanced Usage

For more fine-grained control, use the MCPClient directly. This low-level interface extends AsyncInferenceClient and allows LLMs to call tools via the Model Context Protocol (MCP). It supports both local (stdio) and remote (http/sse) MCP servers, handles tool registration and execution, and streams results back to the model in real-time.
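
As an illustrative sketch only — the constructor arguments and method names used here (add_mcp_server, process_single_turn_with_tools) reflect the current huggingface_hub MCP docs and should be checked against the linked documentation for your installed version — a stdio server can be wired up roughly like this:

import asyncio

from huggingface_hub import MCPClient

async def main():
    # Assumed usage: MCPClient as an async context manager around a model + inference provider
    async with MCPClient(model="Qwen/Qwen2.5-72B-Instruct", provider="nebius") as client:
        # Assumed API: register a local (stdio) MCP server; remote servers use type="http" or "sse" with a url
        await client.add_mcp_server(type="stdio", command="npx", args=["@playwright/mcp@latest"])

        messages = [{"role": "user", "content": "Open huggingface.co and summarize the front page."}]
        # Assumed API: stream one assistant turn, executing tool calls as they come back
        async for chunk in client.process_single_turn_with_tools(messages):
            print(chunk)

asyncio.run(main())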

Learn more in the huggingface_hub MCP documentation.

Custom Agents

To create your own agent, simply create a folder (e.g., my-agent/) and define your agent's configuration in an agent.json file. The following example shows a web-browsing agent configured to use the Qwen/Qwen2.5-72B-Instruct model via the Nebius inference provider, equipped with a Playwright MCP server that lets it use a web browser.

{
    "model": "Qwen/Qwen2.5-72B-Instruct",
    "provider": "nebius",
    "servers": [
        {
            "type": "stdio",
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        }
    ]
}

To use a local LLM (such as llama.cpp or LM Studio), just provide an endpointUrl:

{
    "model": "Qwen/Qwen3-32B",
    "endpointUrl": "http://localhost:1234/v1",
    "servers": [
        {
            "type": "stdio",
            "command": "npx",
            "args": ["@playwright/mcp@latest"]
        }
    ]
}

Optionally, add a PROMPT.md to customize the system prompt.
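
Once the folder is ready, point the CLI at it, for example tiny-agents run ./my-agent/, just as you would run one of the built-in agents from the dataset.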

Tip

Don't hesitate to contribute your agent to the community by opening a Pull Request in the tiny-agents Hugging Face dataset.

Gradio MCP Server / Tools

You can build an MCP server in just a few lines of Python with Gradio. If you have an existing Gradio app or Space you'd like to use as an MCP server / tool, it's just a single-line change.

To make a Gradio application an MCP server, simply pass in mcp_server=True when launching your demo, as follows.

# pip install gradio

import gradio as gr

def generate_image(prompt: str):
    """
    Generate an image based on a text prompt

    Args:
        prompt: a text string describing the image to generate
    """
    pass

demo = gr.Interface(
    fn=generate_image,
    inputs="text",
    outputs="image",
    title="Image Generator"
)

demo.launch(mcp_server=True)

The MCP server will be available at http://your-space-id.hf.space/gradio_api/mcp/sse where your application is served. It will have a tool corresponding to each function in your Gradio app, with the tool description automatically generated from the docstrings of your functions.

Lastly, add this to the settings of the MCP Client of your choice (e.g. Cursor).

{
    "mcpServers": {
        "gradio": {
            "url": "http://your-server:port/gradio_api/mcp/sse"
        }
    }
}

This is very powerful because it lets the LLM use any Gradio application as a tool. You can find thousands of them on Spaces. Learn more here.

smolagents

smolagents is a lightweight library that covers all agentic use cases, from code-writing agents to computer use, in a few lines of code. It is model agnostic, supporting local models served with Hugging Face Transformers, models offered through Inference Providers, and proprietary model providers.

It offers a unique kind of agent: CodeAgent, an agent that writes its actions in Python code. It also supports the standard agent that writes actions in JSON blobs, as most other agentic frameworks do, called ToolCallingAgent. To learn more about writing actions in code vs JSON, check out our new short course on DeepLearning.AI.

If you want to avoid defining agents yourself, the easiest way to start an agent is through the CLI, using the smolagent command.

smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." \
--model-type "InferenceClientModel" \
--model-id "Qwen/Qwen2.5-Coder-32B-Instruct" \
--imports "pandas numpy" \
--tools "web_search"

Agents can be pushed to Hugging Face Hub as Spaces. Check out all the cool agents people have built here.

smolagents also supports MCP servers as tools, as follows:

# pip install --upgrade smolagents mcp
from smolagents import MCPClient, CodeAgent, InferenceClientModel
from mcp import StdioServerParameters
import os

# Any smolagents model works here; InferenceClientModel uses Inference Providers by default
model = InferenceClientModel()

server_parameters = StdioServerParameters(
    command="uvx",  # Using uvx ensures dependencies are available
    args=["--quiet", "[email protected]"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPClient(server_parameters) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please find the latest research on COVID-19 treatment.")

Learn more in the documentation.