People helping people run AI

Run any local AI model. The hive has your back.

Your laptop can't run Llama 3 70B alone — but a few friends' laptops can. HiveBear is a free, open-source P2P mesh that pools everyday machines so anyone can run big local AI models together. No cloud, no subscription, no one to sign up with. Just neighbors sharing compute.

Free forever. MIT-licensed. Built in the open. The more of us show up, the stronger the hive gets.

Join the hive · Meet the neighbors
All platforms & install options → · Read the docs →

Nobody should have to run AI alone

The best local AI models need serious hardware. The hive pools everyone's spare compute into shared, self-hosted AI infrastructure — no cloud, no company, no catch.

The hive has your back

A neighbor's gaming PC sits idle 22 hours a day. You've got a laptop that can't quite run the local LLM you want. The hive connects you — their spare GPU becomes part of your AI. Together, everyday machines run models that used to need a data center.

Your den, your data

When you use cloud AI, your conversations live on someone else's servers. Maybe they train on it. Maybe it leaks in a breach. With self-hosted HiveBear, everything stays on your machines and the neighbors you choose to share with. Your thoughts are yours. Period.

Runs on what you already own

ChatGPT Plus costs $20/month. A decent GPU costs $800+. HiveBear runs on the machine sitting on your desk right now — even if it's five years old. No subscription, no credit card, no one to sign up with. Just local AI that works.

One machine starts it. The hive finishes it.

HiveBear gets the most out of whatever you've got — and when one machine isn't enough, the hive pools the rest.

The hive

Your laptop handles the layers it can. A neighbor's gaming PC picks up the rest. The hive splits local LLMs across machines using pipeline parallelism — so a 70B model that needs a $3,000 GPU alone can run across a few everyday laptops together. Neighbors helping neighbors. That's the whole idea.
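The core idea can be sketched in a few lines (illustrative only — HiveBear's real scheduler has to handle networking, quantization, and failure; the peer count and toy "layers" here are made up): each peer holds a contiguous slice of the model's layers, and activations flow from one peer's slice to the next.

```python
# Toy pipeline parallelism: split a model's layers across peers so no
# single machine has to hold the whole model in memory.

def partition(layers, num_peers):
    """Split layers into contiguous slices, one slice per peer."""
    chunk = -(-len(layers) // num_peers)  # ceiling division
    return [layers[i:i + chunk] for i in range(0, len(layers), chunk)]

def forward(x, peer_slices):
    """Run the input through each peer's slice of layers, in order."""
    for peer_slice in peer_slices:
        for layer in peer_slice:
            x = layer(x)
    return x

# 80 stand-in "layers" (a 70B-class model has on this order of
# transformer blocks) split across 3 peers.
layers = [lambda v, i=i: v + i for i in range(80)]
peer_slices = partition(layers, 3)

print([len(s) for s in peer_slices])  # → [27, 27, 26] layers per peer
print(forward(0, peer_slices))        # → 3160, same as running all layers on one machine
```

The result is identical to running every layer locally; the split only changes *where* each layer lives, which is what lets a model too big for any one machine fit across several.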

Every machine helps the hive

Pool GPUs and CPUs across every device you've got. Linux, Mac, Windows, Raspberry Pi, even your browser. From the tiniest cub to the biggest bear — if it has a processor, it has a place in the hive.

Knows your hardware

HiveBear sniffs out exactly what your machine can handle and finds the best local AI model for it. Community benchmarks from similar rigs give you real performance numbers — not guesses or marketing claims.
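As a rough sketch of what a RAM-based recommendation could look like — using the tiers from the device-class table on this page, not HiveBear's actual heuristics (the function name and thresholds are illustrative):

```python
# Illustrative model recommendation keyed on available RAM, mirroring
# the device-class compatibility table on this page. Hypothetical logic,
# not HiveBear's real profiler.

def recommend(ram_gb: float) -> str:
    if ram_gb >= 32:
        return "Llama 3.1 70B (Q4)"
    if ram_gb >= 16:
        return "Llama 3.1 8B (Q8)"
    if ram_gb >= 8:
        return "Llama 3.1 8B (Q4)"
    return "TinyLlama 1.1B"

print(recommend(16))  # → Llama 3.1 8B (Q8)
print(recommend(64))  # → Llama 3.1 70B (Q4)
```

In practice the community benchmarks refine these cutoffs with real throughput numbers from machines like yours.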

Your den, your data

Self-hosted, private AI that lives on your machines. Your conversations, your code, your weird 2am questions — none of it touches someone else's servers. None of it trains someone else's model. Your den, your rules.

Works with your tools

Already using tools that talk to OpenAI? Point them at HiveBear instead. Same API shape, local models, your data stays home. Drop-in replacement for anything speaking OpenAI's chat or completions API.
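Concretely, any client that speaks OpenAI's chat-completions format can be aimed at a local endpoint instead. A minimal sketch with the standard library (the URL and port below are placeholders — check the docs for the address your HiveBear node actually serves on):

```python
import json
import urllib.request

# Placeholder endpoint; substitute your HiveBear node's real address.
HIVEBEAR_URL = "http://localhost:8080/v1/chat/completions"

# The same request body an OpenAI chat client would send.
payload = {
    "model": "llama-3.1-70b",
    "messages": [{"role": "user", "content": "Explain quantum computing"}],
}

req = urllib.request.Request(
    HIVEBEAR_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would then return an OpenAI-style JSON response.
```

Because the request shape is unchanged, existing SDKs and tools only need their base URL swapped; prompts and responses never leave your mesh.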

Model foraging

Browse thousands of open-source AI models — Llama 3, Mistral, DeepSeek R1, Qwen 2.5, Phi-3, Gemma 2, CodeLlama — and grab the one that fits your setup. One command and you're ready to go.

Community intelligence

Every benchmark you share helps a neighbor with similar hardware find the right local LLM. Anonymized, opt-in, privacy-first — the hive gets smarter the more of us show up.

From solo to swarm in 3 steps

Profile, connect, run. Models that need a data center now run on your kitchen table.

01

Let HiveBear sniff out your hardware

$ hivebear profile
 
CPU: Apple M2 (8 cores)
RAM: 16 GB (12.4 GB available)
GPU: Integrated (shared memory)

02

Join the hive

$ hivebear hive join
 
🐝 Connected to the hive
12 peers nearby · 847 peers worldwide
Your machine: contributing 8 GB · receiving up to 48 GB

03

Run models bigger than your machine

$ hivebear run llama-3.1-70b
 
Splitting across 3 peers (pipeline parallel)...
You: Explain quantum computing
AI: Quantum computing uses quantum bits...

HiveBear, Ollama, LM Studio & friends

We love the other tools in the local AI space — Ollama, LM Studio, Jan.ai, llama.cpp. They all make local LLMs easier. HiveBear is for the moments when one machine isn't enough: pool a few together and run models none of them could handle alone.

| Feature | HiveBear | Ollama | LM Studio | Jan.ai |
| --- | --- | --- | --- | --- |
| P2P distributed inference | ✓ | | | |
| Auto hardware profiling | ✓ | | | |
| Smart model recommendation | ✓ | | | |
| Multi-engine inference | llama.cpp + Candle + more | llama.cpp only | llama.cpp only | llama.cpp only |
| Community benchmarks | ✓ | | | |
| Browser inference (WASM) | ✓ | | | |
| OpenAI-compatible API | ✓ | | | |
| Native desktop GUI | ✓ | | | |
| Open source | MIT | MIT | Proprietary | AGPL |
| Written in | Rust | Go | C++/Electron | TypeScript |

Whatever you've got, it works

From a Raspberry Pi in your closet to a workstation under your desk. And when you pool machines through the hive, there's no local AI model too big.

Models our neighbors are running right now. Pick one, or see them all →

Llama 3 8B · Llama 3 70B · Mistral 7B · Mixtral 8x7B · DeepSeek R1 · Qwen 2.5 72B · Phi-3 · Gemma 2 · CodeLlama
| Device Class | RAM | What You Can Run |
| --- | --- | --- |
| Raspberry Pi 5 | 8 GB | TinyLlama 1.1B, Phi-2 2.7B |
| Old laptop | 8 GB | Llama 3.1 8B (Q4), Mistral 7B (Q4) |
| Gaming PC | 16 GB | Llama 3.1 8B (Q8), CodeLlama 13B (Q4) |
| Workstation | 32+ GB | Llama 3.1 70B (Q4), Mixtral 8x7B |
| The Hive (multi-device mesh) | Any | Models too large for any single device |
GPU acceleration is automatic when available (CUDA, Metal, Vulkan, WebGPU).

Why we're building this

We think local AI should belong to everyone.

We kept watching friends get priced out of running the models they wanted. A cloud subscription here, a $3,000 GPU there, another paywall every quarter. That's not how any of this should work.

HiveBear is a passion project, not a company. There's nothing to buy, nothing to sign up for, nothing gated. Just a growing hive of neighbors who think running big local LLMs on everyday hardware should be free, private, and something we do together.

If that sounds like your kind of thing — come hang out. Download HiveBear, join the hive, poke at the code on GitHub, or just drop into the Discord and say hi. Every machine that shows up makes the hive stronger for everyone.

Come join the hive

Download HiveBear for your platform, join the hive, and run local AI models your machine could never handle alone. Free, open-source, no sign-up. Your AI, your neighbors, your rules.

macOS · Windows · Linux

Prefer the terminal?

$ brew install BeckhamLabsLLC/hivebear/hivebear
$ curl -fsSL https://hivebear.com/install.sh | sh

View all download options →

Every idle GPU helps a neighbor

Meet the neighbors.

Every machine in the hive makes local AI more accessible for someone who needs it. Maybe your gaming PC helps a student run their first LLM. Maybe a cluster of Raspberry Pis gives a researcher the power to experiment. This isn't a feature — it's people deciding that AI shouldn't be locked behind a paywall.

Come say hi · Contribute on GitHub

HiveBear

Free, open-source, self-hosted AI that actually fits your machine. A P2P mesh of neighbors pooling everyday hardware to run big local AI models together. Written in Rust, powered by the hive.

Product

  • Download
  • Documentation
  • Playground
  • FAQ

Run a model

  • Run Llama 3 70B
  • Run DeepSeek R1
  • Run Qwen 2.5 72B
  • Run Mistral 7B
  • All models →

Compare

  • HiveBear vs Ollama
  • HiveBear vs LM Studio
  • HiveBear vs exo
  • HiveBear vs Jan.ai

Community

  • Discord
  • GitHub
  • Discussions
  • Community hub

Built with Rust. MIT License. © 2026 BeckhamLabs.

Privacy Policy