agentic-seek Mar 4, 2026 5 min read

Deploy AgenticSeek — A Fully Local Manus AI Alternative — to Your Own Server

HowToDeploy Team

Lead Engineer @ howtodeploy

AgenticSeek is a fully local alternative to Manus AI — a voice-enabled autonomous agent that browses the web, writes code, and plans tasks while keeping all data on your server. No cloud APIs required, no $200 monthly bills. Run it for the cost of electricity with local models via Ollama, LM Studio, or any OpenAI-compatible endpoint.

Setting it up manually involves Docker, Python, SearxNG, Redis, ChromeDriver configuration, and a local LLM provider. With HowToDeploy, the entire stack is provisioned and configured in minutes.

Why AgenticSeek?

  • 100% local — all data stays on your server, nothing leaves your machine
  • Autonomous web browsing — the agent navigates websites, fills forms, and extracts information
  • Code generation — writes, runs, and debugs Python scripts autonomously
  • Task planning — breaks complex requests into subtasks and delegates to specialised agents
  • Voice-enabled — speech-to-text input with text-to-speech responses
  • Multi-provider — works with Ollama, LM Studio, llama.cpp, or cloud APIs as a fallback
  • Web UI — clean browser-based interface for chatting and monitoring agent activity
  • No subscription — open-source, GPL-3.0 licensed, zero recurring API costs

Prerequisites

Before you start, you'll need:

  • A HowToDeploy account (sign up free)
  • A cloud provider API key (DigitalOcean, Hetzner, Vultr, Linode, or AWS)

Note: AgenticSeek runs best with a GPU for local LLM inference. If your cloud provider supports GPU instances, choose one for the best experience. Without a GPU, use a cloud LLM API key instead.

Step 1: Connect your cloud provider

Go to Settings → Cloud Providers and paste your API key.

Tip: AgenticSeek needs more resources than most apps — the default 8GB RAM / 4 CPU server handles 14B parameter models with Ollama. For larger models (70B+), consider a GPU instance or use a cloud API.
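To get a feel for why 8GB handles a 14B model but 70B models need a GPU box, here is a rough sizing sketch. The numbers are generic rules of thumb (about 2 bytes per parameter for FP16 weights, about 0.5 bytes for the 4-bit quantisations Ollama ships by default, plus runtime overhead), not AgenticSeek-specific figures:

```python
# Rule-of-thumb memory estimate for running a local LLM.
# Assumptions (not AgenticSeek-specific): FP16 weights need ~2 bytes
# per parameter; 4-bit quantised weights need ~0.5 bytes; add ~20%
# overhead for the KV cache and runtime.

def estimate_ram_gb(params_billion: float, bytes_per_param: float = 0.5,
                    overhead: float = 1.2) -> float:
    """Return an approximate memory requirement in GB."""
    return round(params_billion * bytes_per_param * overhead, 1)

print(estimate_ram_gb(14))        # 4-bit 14B model: ~8.4 GB (a tight fit)
print(estimate_ram_gb(70))        # 4-bit 70B model: ~42 GB
print(estimate_ram_gb(70, 2.0))   # FP16 70B model: ~168 GB
```

The takeaway: quantised 14B models sit right at the edge of the default server, and anything 70B+ realistically needs a GPU instance or a cloud API fallback.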

Step 2: Deploy AgenticSeek

Head to the Dashboard and find AgenticSeek in the AI Agents section. Click the card to open the deploy form.

You need one field:

  • LLM Model — the model identifier for your provider (e.g. deepseek-r1:14b, qwen2:7b)

Server size, region, Docker, SearxNG, and Redis are all pre-configured.
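Under the hood, the model choice maps onto AgenticSeek's config.ini. The keys below match the upstream project's example at the time of writing and may change between versions, so treat this as an illustrative sketch rather than the exact file the deployment generates:

```ini
[MAIN]
is_local = True
provider_name = ollama
provider_model = deepseek-r1:14b
provider_server_address = 127.0.0.1:11434
```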

Step 3: Add cloud API keys (optional)

If your server can't run LLMs locally, expand Advanced Settings and add a cloud provider key:

  • OpenAI API Key — use GPT models via the OpenAI API
  • Deepseek API Key — use Deepseek models via their cloud API
  • Google AI API Key — use Gemini models via Google AI Studio

These are optional — AgenticSeek's primary purpose is fully local inference.
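The reason one deployment can swap between Ollama, LM Studio, and these cloud providers is that they all speak the same OpenAI-style chat-completions request format. A minimal sketch of that request shape (the model name here is just an example):

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style /v1/chat/completions request body.
    The same shape is accepted by OpenAI, Deepseek, and local
    OpenAI-compatible servers such as Ollama or LM Studio."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = chat_request("deepseek-r1:14b", "Summarise today's AI news")
print(json.dumps(body, indent=2))
```

Switching providers is then just a matter of changing the base URL and API key; the payload stays the same.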

Step 4: Add a custom domain (optional)

Want your agent at seek.yourdomain.com? Enter your domain in Advanced Settings.

After deployment, point an A record for your domain to the server IP, click Verify DNS, and Caddy issues the SSL certificate automatically.
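Before clicking Verify DNS, you can check propagation yourself. A small sketch using Python's resolver (the domain and IP below are placeholders for your own values):

```python
import socket

def dns_points_to(domain: str, expected_ip: str) -> bool:
    """Return True if the domain currently resolves to expected_ip."""
    try:
        return socket.gethostbyname(domain) == expected_ip
    except socket.gaierror:
        return False  # not resolvable yet -> the record hasn't propagated

# Example with placeholders -- substitute your domain and server IP:
# dns_points_to("seek.yourdomain.com", "203.0.113.10")
```

DNS changes can take a while to propagate, so a False here right after creating the record is normal; retry after a few minutes.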

Step 5: Start using your agent

Once deployment completes, open the web interface and start giving your agent tasks:

  • "Search the web for the latest AI news and summarise the top 3 articles"
  • "Write a Python script that fetches stock prices for Tesla and save it as stock.py"
  • "Find free APIs for weather data, pick the best one, and write a script to fetch the forecast"

Be explicit in your requests — tell the agent how to proceed (e.g. "search the web for..." rather than "do you know about...").

How the agent system works

AgenticSeek routes your query to the best specialised agent:

Agent          Handles
-------------  -----------------------------------------------------
Web agent      Browsing, searching, form filling, data extraction
Code agent     Writing, running, and debugging Python scripts
Planner agent  Breaking complex tasks into subtasks and coordinating
File agent     Reading, writing, and organising files

The router picks the right agent automatically based on your query. For complex tasks, the planner agent delegates subtasks to the others.
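The routing idea can be pictured as a classifier from query to agent. AgenticSeek's actual router is model-based, so the keyword matching below is purely an illustrative sketch of the query-to-agent mapping:

```python
# Illustrative only: AgenticSeek's real router is ML-based, not
# keyword-based. This sketch just shows the query -> agent idea.

ROUTES = {
    "web":  ("search", "browse", "find", "website"),
    "code": ("script", "python", "debug", "write code"),
    "file": ("file", "folder", "organise", "save"),
}

def route(query: str) -> str:
    q = query.lower()
    for agent, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return agent
    return "planner"  # complex or ambiguous tasks go to the planner

print(route("Search the web for AI news"))  # -> web
print(route("Write a Python script"))       # -> code
print(route("Plan my week for me"))         # -> planner
```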

What's included

Every AgenticSeek deployment includes:

  • Docker — containerised services for isolation and reproducibility
  • SearxNG — privacy-respecting meta search engine for web browsing
  • Redis — required by SearxNG for caching and rate limiting
  • Ollama — pre-installed for local LLM inference
  • Chromium — headless browser for autonomous web navigation
  • Caddy — automatic HTTPS with Let's Encrypt (when using a custom domain)
  • Full SSH access — your server, your agent, your data

AgenticSeek vs. Manus AI

Feature         AgenticSeek             Manus AI
--------------  ----------------------  -----------
Monthly cost    $24-48/month (server)   $200/month
Data privacy    100% on your server     Cloud-based
Open source     ✅ GPL-3.0              ❌
Web browsing    ✅                      ✅
Code execution  ✅                      ✅
Local LLMs      ✅                      ❌
Voice input     ✅                      ✅
Self-hosted     ✅                      ❌

Choose AgenticSeek if: you want Manus-level capabilities without the $200/month bill and with complete data privacy.

Pricing

You pay your cloud provider directly for the server (typically $24-48/month for the recommended spec). HowToDeploy charges a small monthly management fee for monitoring and support.

Start with a 7-day free trial — no credit card required.


Ready to run your own local Manus alternative? Deploy AgenticSeek now →