Run any local AI model. The hive has your back.
Your laptop can't run Llama 3 70B alone — but a few friends' laptops can. HiveBear is a free, open-source P2P mesh that pools everyday machines so anyone can run big local AI models together. No cloud, no subscription, no one to sign up with. Just neighbors sharing compute.
Free forever. MIT-licensed. Built in the open. The more of us show up, the stronger the hive gets.

Nobody should have to run AI alone
The best local AI models need serious hardware. The hive pools everyone's spare compute into shared, self-hosted AI infrastructure — no cloud, no company, no catch.
The hive has your back
A neighbor's gaming PC sits idle 22 hours a day. You've got a laptop that can't quite run the local LLM you want. The hive connects you — their spare GPU becomes part of your AI. Together, everyday machines run models that used to need a data center.
Your den, your data
When you use cloud AI, your conversations live on someone else's servers. Maybe they train on it. Maybe it leaks in a breach. With self-hosted HiveBear, everything stays on your machines and the neighbors you choose to share with. Your thoughts are yours. Period.
Runs on what you already own
ChatGPT Plus costs $20/month. A decent GPU costs $800+. HiveBear runs on the machine sitting on your desk right now — even if it's five years old. No subscription, no credit card, no one to sign up with. Just local AI that works.
One machine starts it. The hive finishes it.
HiveBear gets the most out of whatever you've got — and when one machine isn't enough, the hive pools the rest.
The hive
Your laptop handles the layers it can. A neighbor's gaming PC picks up the rest. The hive splits local LLMs across machines using pipeline parallelism — so a 70B model that needs a $3,000 GPU alone can run across a few everyday laptops together. Neighbors helping neighbors. That's the whole idea.
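The split described above can be sketched in a few lines. This is a toy illustration, not HiveBear's actual scheduler: it assumes each peer advertises its free memory and gets a contiguous slice of the model's layers sized by its share of the pool.

```python
def split_layers(num_layers, peer_mem_gb):
    """Assign contiguous layer ranges to peers, proportional to free memory.

    Toy sketch of pipeline parallelism: each peer hosts a contiguous
    slice of the model, and activations flow peer-to-peer in order.
    """
    total = sum(peer_mem_gb)
    ranges, start = [], 0
    for i, mem in enumerate(peer_mem_gb):
        # The last peer takes whatever remains so every layer is covered.
        if i == len(peer_mem_gb) - 1:
            count = num_layers - start
        else:
            count = round(num_layers * mem / total)
        ranges.append((start, start + count))
        start += count
    return ranges

# An 80-layer model (roughly Llama-70B scale) split across three machines
# with 16, 24, and 8 GB free:
print(split_layers(80, [16, 24, 8]))  # → [(0, 27), (27, 67), (67, 80)]
```

The contiguous-slice shape is what makes this pipeline parallelism rather than tensor parallelism: each token's activations hop across the mesh once per stage, instead of every layer needing all peers at once.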
Every machine helps the hive
Pool GPUs and CPUs across every device you've got. Linux, Mac, Windows, Raspberry Pi, even your browser. From the tiniest cub to the biggest bear — if it has a processor, it has a place in the hive.
Knows your hardware
HiveBear sniffs out exactly what your machine can handle and finds the best local AI model for it. Community benchmarks from similar rigs give you real performance numbers — not guesses or marketing claims.
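The recommendation logic amounts to: estimate how much memory each quantized model needs, leave headroom for the KV cache and activations, and pick the largest one that fits. A minimal sketch, with an illustrative catalog and rough size estimates that are assumptions, not measured benchmarks:

```python
# Rough quantized weight sizes in GB (illustrative estimates only).
MODELS = [  # sorted largest first
    ("llama-3.1-70b-q4", 40.0),
    ("mixtral-8x7b-q4", 26.0),
    ("codellama-13b-q4", 7.5),
    ("llama-3.1-8b-q4", 4.7),
    ("phi-2-q4", 1.6),
]

def recommend(available_gb, headroom=1.2):
    """Return the largest model whose weights, plus ~20% headroom for
    KV cache and activations, fit in the available memory."""
    for name, size in MODELS:
        if size * headroom <= available_gb:
            return name
    return None  # nothing fits alone -- time to join the hive

# An M2 laptop with 12.4 GB free:
print(recommend(12.4))  # → codellama-13b-q4
```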
Your den, your data
Self-hosted, private AI that lives on your machines. Your conversations, your code, your weird 2am questions — none of it touches someone else's servers. None of it trains someone else's model. Your den, your rules.
Works with your tools
Already using tools that talk to OpenAI? Point them at HiveBear instead. Same API shape, local models, your data stays home. Drop-in replacement for anything speaking OpenAI's chat or completions API.
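Concretely, the swap is just a base URL: you send the same chat-completions request shape, aimed at your own node. A sketch using only the standard library — the port and model name below are assumptions (check your node's settings), and the request is built but not actually sent:

```python
import json
import urllib.request

# Hypothetical local HiveBear endpoint -- substitute whatever host/port
# your node actually listens on.
BASE_URL = "http://localhost:11435/v1"

payload = {
    "model": "llama-3.1-8b",  # a model your hive is serving
    "messages": [{"role": "user", "content": "Explain quantum computing"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a node running: resp = urllib.request.urlopen(req)
print(req.full_url)
```

Any client library that lets you override its base URL (most OpenAI-compatible SDKs do) works the same way: point it at your node and leave the rest of your code untouched.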
Model foraging
Browse thousands of open-source AI models — Llama 3, Mistral, DeepSeek R1, Qwen 2.5, Phi-3, Gemma 2, CodeLlama — and grab the one that fits your setup. One command and you're ready to go.
Community intelligence
Every benchmark you share helps a neighbor with similar hardware find the right local LLM. Anonymized, opt-in, privacy-first — the hive gets smarter the more of us show up.
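What "anonymized" means in practice is that a shared record carries only a coarse hardware class and a throughput number — no hostnames, serials, or network identifiers. A sketch of that shape (illustrative, not HiveBear's actual schema):

```python
def benchmark_record(cpu_model, ram_gb, model, tokens_per_s):
    """One opt-in benchmark record: coarse hardware class only.

    RAM is bucketed to the nearest 8 GB so records are comparable
    across similar rigs without fingerprinting a specific machine.
    """
    return {
        "hardware": {"cpu": cpu_model, "ram_gb": 8 * round(ram_gb / 8)},
        "model": model,
        "tokens_per_s": tokens_per_s,
    }

print(benchmark_record("Apple M2", 16, "llama-3.1-8b-q4", 21.5))
```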
From solo to swarm in 3 steps
Profile, connect, run. Models that need a data center now run on your kitchen table.
Let HiveBear sniff out your hardware
```
$ hivebear profile
CPU: Apple M2 (8 cores)
RAM: 16 GB (12.4 GB available)
GPU: Integrated (shared memory)
```

Join the hive

```
$ hivebear hive join
🐝 Connected to the hive
12 peers nearby · 847 peers worldwide
Your machine: contributing 8 GB · receiving up to 48 GB
```

Run models bigger than your machine

```
$ hivebear run llama-3.1-70b
Splitting across 3 peers (pipeline parallel)...
You: Explain quantum computing
AI: Quantum computing uses quantum bits...
```

HiveBear, Ollama, LM Studio & friends
We love the other tools in the local AI space — Ollama, LM Studio, Jan.ai, llama.cpp. They all make local LLMs easier. HiveBear is for the moments when one machine isn't enough: pool a few together and run models none of them could handle alone.
| Feature | HiveBear | Ollama | LM Studio | Jan.ai |
|---|---|---|---|---|
| P2P distributed inference | ✓ | – | – | – |
| Auto hardware profiling | ✓ | – | – | – |
| Smart model recommendation | ✓ | – | – | – |
| Multi-engine inference | llama.cpp + Candle + more | llama.cpp only | llama.cpp only | llama.cpp only |
| Community benchmarks | ✓ | – | – | – |
| Browser inference (WASM) | ✓ | – | – | – |
| OpenAI-compatible API | ✓ | ✓ | ✓ | ✓ |
| Native desktop GUI | ✓ | – | ✓ | ✓ |
| Open source | MIT | MIT | Proprietary | AGPL |
| Written in | Rust | Go | C++/Electron | TypeScript |
Whatever you've got, it works
From a Raspberry Pi in your closet to a workstation under your desk. And when you pool machines through the hive, there's no local AI model too big.
Models our neighbors are running right now. Pick one, or see them all →
| Device Class | RAM | What You Can Run |
|---|---|---|
| Raspberry Pi 5 | 8 GB | TinyLlama 1.1B, Phi-2 2.7B |
| Old laptop | 8 GB | Llama 3.1 8B (Q4), Mistral 7B (Q4) |
| Gaming PC | 16 GB | Llama 3.1 8B (Q8), CodeLlama 13B (Q4) |
| Workstation | 32+ GB | Llama 3.1 70B (Q4), Mixtral 8x7B |
| Multi-device mesh (the hive) | Any | Models too large for any single device |
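The rows above follow from simple memory arithmetic: a model's weight footprint is roughly parameters × bits-per-weight ÷ 8, before headroom for the KV cache and runtime. A quick sanity check (approximate, ignoring per-format overhead):

```python
def weight_gb(params_billion, bits):
    """Approximate weight memory in GB for a quantized model:
    billions of parameters x bits per weight / 8 bits per byte."""
    return params_billion * bits / 8

print(round(weight_gb(8, 4), 1))   # → 4.0  GB: Q4 8B fits an 8 GB laptop
print(round(weight_gb(8, 8), 1))   # → 8.0  GB: Q8 8B wants a 16 GB machine
print(round(weight_gb(70, 4), 1))  # → 35.0 GB: Q4 70B needs 32+ GB, or the hive
```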
We think local AI should belong to everyone.
We kept watching friends get priced out of running the models they wanted. A cloud subscription here, a $3,000 GPU there, another paywall every quarter. That's not how any of this should work.
HiveBear is a passion project, not a company. There's nothing to buy, nothing to sign up for, nothing gated. Just a growing hive of neighbors who think running big local LLMs on everyday hardware should be free, private, and something we do together.
If that sounds like your kind of thing — come hang out. Download HiveBear, join the hive, poke at the code on GitHub, or just drop into the Discord and say hi. Every machine that shows up makes the hive stronger for everyone.

Come join the hive
Download HiveBear for your platform, join the hive, and run local AI models your machine could never handle alone. Free, open-source, no sign-up. Your AI, your neighbors, your rules.
Prefer the terminal?
```
$ brew install BeckhamLabsLLC/hivebear/hivebear
$ curl -fsSL https://hivebear.com/install.sh | sh
```

Meet the neighbors.
Every machine in the hive makes local AI more accessible for someone who needs it. Maybe your gaming PC helps a student run their first LLM. Maybe a cluster of Raspberry Pis gives a researcher the power to experiment. This isn't a feature — it's people deciding that AI shouldn't be locked behind a paywall.

