What is The Cloud for AI Agents
Hopx provides secure, blazing-fast Linux micro-VMs designed for AI agents and untrusted code execution. Sandboxes spin up in milliseconds, run with no time limits, and offer secure execution environments for multiple programming languages, including Python, JavaScript, and Go.
How to use The Cloud for AI Agents
- Install the Hopx SDK: Use `npm install @hopx-ai/sdk` for JavaScript.
- Create a sandbox: Instantiate a `Sandbox` object, specifying a template (e.g., 'code-interpreter').
- Execute code: Use the `runCode` method to execute scripts within the sandbox.
- Manage files: Interact with the sandbox's filesystem for uploading and downloading files.
- Execute commands: Run shell commands and capture their output.
- Stream output: Receive real-time output from code execution via WebSocket.
- Manage processes: Start, monitor, and manage background processes within the sandbox.
- Cleanup: Terminate the sandbox using the `kill` method when no longer needed.
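The lifecycle above (create → run → cleanup) can be sketched with a small mock. The names `Sandbox`, `runCode`, and `kill` mirror those used in this document, but the bodies below are illustrative stand-ins, not the real `@hopx-ai/sdk` implementation.

```javascript
// Illustrative mock of the sandbox lifecycle; not the real SDK.
class Sandbox {
  constructor(template) {
    this.template = template;
    this.alive = true;
  }
  // Real SDKs typically expose a factory like this for a named template.
  static create(template = "code-interpreter") {
    return new Sandbox(template);
  }
  runCode(source) {
    if (!this.alive) throw new Error("sandbox already killed");
    // A real micro-VM would execute `source` in isolation; here we
    // just return a result object shaped like { stdout, exitCode }.
    return { stdout: `ran ${source.length} bytes of code`, exitCode: 0 };
  }
  kill() {
    this.alive = false; // release the micro-VM
  }
}

// Typical lifecycle: create, run, clean up.
const sbx = Sandbox.create("code-interpreter");
const result = sbx.runCode("print('hello')");
sbx.kill();
```

The important shape is the explicit `kill` at the end: unlike serverless functions, a sandbox persists until you terminate it, so cleanup is the caller's responsibility.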
Features of The Cloud for AI Agents
- Speed: ~100 ms cold starts, enabled by prebuilt micro-VM snapshots.
- Security: Isolated at the VM Level using Firecracker microVMs, providing kernel isolation.
- Runtime: Unlimited execution time with full state persistence.
- SDKs: Available for Python, JavaScript/TypeScript, Go, .NET, Java, and PHP.
- Code Execution: Run Python, JavaScript, and other languages with rich output capture.
- File Operations: Upload, download, and manage files within the sandbox.
- Command Execution: Execute shell commands and capture stdout/stderr.
- Real-time Streaming: Stream output via WebSocket.
- Process Management: Start, monitor, and manage background processes.
- Templates: Predefined environments like 'code-interpreter' and options for custom templates.
- Desktop Automation: Control GUI, mouse, keyboard, and browser interactions.
- Metrics: Monitor CPU, memory, network, and disk usage in real time.
- Persistence: Long-running jobs and continuous execution without arbitrary timeouts.
- Orchestration: Compatible with frameworks like LangGraph and AutoGen for multi-agent systems.
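On the real-time streaming feature: output arriving over a WebSocket comes in arbitrary chunks that rarely align with line boundaries, so a consumer typically buffers until a newline. A minimal sketch of that pattern follows; `makeLineAssembler` is a hypothetical helper for illustration, not an SDK API.

```javascript
// Buffer streamed chunks and emit only complete lines.
// Illustrative helper, not part of any Hopx SDK.
function makeLineAssembler(onLine) {
  let buffer = "";
  return function handleChunk(chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf("\n")) !== -1) {
      onLine(buffer.slice(0, idx)); // emit one complete line
      buffer = buffer.slice(idx + 1);
    }
  };
}

// Simulated WebSocket frames that split lines mid-way.
const lines = [];
const handleChunk = makeLineAssembler((line) => lines.push(line));
["epoch 1: lo", "ss=0.41\nepoch 2: loss=0.", "29\n"].forEach(handleChunk);
// lines → ["epoch 1: loss=0.41", "epoch 2: loss=0.29"]
```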
Use Cases of The Cloud for AI Agents
- AI Agents: Running autonomous agents that write and execute code in dedicated runtimes.
- Untrusted Code Execution: Safely executing user-submitted or LLM-generated code.
- LLM Execution: Running LLM-generated code and dynamic workloads.
- Data Analysis: Launching Jupyter notebooks with preinstalled ML libraries.
- Deep Research: Enabling autonomous agents for continuous data gathering and analysis.
- Desktop Automation: Controlling cloud desktops for GUI automation and application interaction.
- Background Automations: Running workers, schedulers, and recurring jobs.
- Reinforcement Learning: Training and evaluating RL agents in isolated environments.
- Secure MCP Servers: Hosting MCP servers and tools in controlled, isolated perimeters.
- Long-Running Jobs: Executing tasks that require extended runtime without interruptions.
- Multi-Agent Mesh: Coordinating specialized agents across a network of isolated micro-VMs.
Pricing
Pricing is based on usage, with costs per second for compute, memory, and storage.
- Compute: $0.00001400 per vCPU-second
- Memory: $0.00000450 per GiB-second
- Storage: $0.00000003 per GiB-second
A free tier with $200 in credits is available for new users.
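The per-second rates above make cost easy to estimate. The helper below is a hypothetical calculator (not an SDK function) using those listed rates; the 2 vCPU / 4 GiB / 10 GiB sandbox shape is an assumed example.

```javascript
// Per-second rates from the pricing list above.
const RATES = {
  vcpuPerSec: 0.000014,      // $ per vCPU-second
  memGibPerSec: 0.0000045,   // $ per GiB-second of memory
  diskGibPerSec: 0.00000003, // $ per GiB-second of storage
};

// Hypothetical helper: estimated cost of a sandbox of the given
// shape running for `seconds`.
function estimateCost({ vcpus, memGib, diskGib, seconds }) {
  const perSecond =
    vcpus * RATES.vcpuPerSec +
    memGib * RATES.memGibPerSec +
    diskGib * RATES.diskGibPerSec;
  return perSecond * seconds;
}

// Example: 2 vCPU / 4 GiB RAM / 10 GiB disk for one hour ≈ $0.17.
const hourly = estimateCost({ vcpus: 2, memGib: 4, diskGib: 10, seconds: 3600 });
```

At these rates, compute dominates the bill: storage is roughly three orders of magnitude cheaper per GiB-second than memory.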
FAQ
- What is a Hopx Sandbox? Secure, isolated Linux micro-VMs that launch in milliseconds, providing a runtime environment for code execution and AI agents.
- How is it different from containers or serverless functions? MicroVMs offer stronger isolation at the VM level, beyond what containers provide, and persistent runtimes unlike typical serverless functions.
- Do I need a credit card to get started? New users receive a free $200 credit; a credit card may be required at signup or to continue on paid tiers once the free credits are used.
- What happens if a sandbox fails or shuts down unexpectedly? The platform aims for stability with persistent state and options for relaunching or snapshotting.