miolini/autoresearch-macos
Choose this fork if your target machine is a Mac and you want the upstream workflow adapted for MPS. Stick with upstream if you want the latest fixes or a more CUDA-oriented baseline.
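The core of an MPS port is device selection. As a minimal hedged sketch (my assumption about what such an adaptation involves, not code taken from the fork), a CUDA-first script typically replaces a hard-coded "cuda" with a fallback chain:

```python
import torch

# Illustrative device-selection sketch (not the fork's actual code):
# prefer CUDA, fall back to Apple's MPS backend, then CPU.
def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(16, 16).to(device)
x = torch.randn(4, 16, device=device)
print(device, model(x).shape)
```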
mutable-state-inc/autoresearch-at-home
active
significant_divergence
Choose this fork if you want the autoresearch idea wrapped in stronger coordination, onboarding, and collaboration tooling. Stay with upstream if you want the smallest, most current baseline with fewer abstractions.
jsegov/autoresearch-win-rtx
Prefer this fork if you want autoresearch to run on Windows and consumer RTX hardware with fewer manual runtime adjustments. Prefer upstream if you need the original Linux/H100-oriented setup or want maximum fidelity to the mainline research environment.
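One concrete example of the runtime adjustments such a port tends to make: H100-oriented baselines usually assume bf16, which pre-Ampere consumer RTX cards lack. A hedged sketch of the usual fallback (illustrative, not the fork's code):

```python
import torch

# Hedged sketch of a typical consumer-GPU adjustment (not the fork's code):
# pick the autocast dtype at runtime instead of hard-coding bf16.
assert torch.cuda.is_available(), "this sketch assumes an NVIDIA GPU"
amp_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16

model = torch.nn.Linear(256, 256).cuda()
x = torch.randn(8, 256, device="cuda")
with torch.autocast(device_type="cuda", dtype=amp_dtype):
    y = model(x)
print(amp_dtype, y.dtype)
```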
Prefer this fork if you want the autoresearch idea in Chinese with stronger onboarding and a skill-based workflow. Prefer upstream if you care most about staying current with bug fixes and the latest repository state.
kousun12/darwin-derby
active
significant_divergence
Choose this fork if you want a general agent benchmark and execution framework with scoring, CLI, and server tooling. Stay with upstream if you want the simplest possible single-GPU autonomous LLM training sandbox.
ncdrone/autoresearch-ANE
active
significant_divergence
Choose this fork if you specifically want Apple Neural Engine experimentation and are willing to trade away upstream simplicity. Stick with upstream if you want the cleanest, smallest Python/PyTorch agent-research loop.
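For context on what ANE experimentation entails: the Neural Engine is not directly addressable from PyTorch, so the usual route is to trace a module and convert it with coremltools, letting Core ML decide whether it runs on the ANE. A minimal sketch of that assumed workflow (not the fork's code):

```python
import torch
import coremltools as ct

# Hedged sketch of the usual ANE route (assumed workflow, not the fork's code):
# trace a PyTorch module, convert it to Core ML, and let compute_units
# make the model eligible for Neural Engine execution.
model = torch.nn.Linear(16, 16).eval()
example = torch.randn(1, 16)
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example.shape)],
    compute_units=ct.ComputeUnit.ALL,  # CPU, GPU, and ANE all allowed
)
mlmodel.save("linear.mlpackage")  # ANE placement is decided at load/run time
```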
Choose this fork if your goal is to run autoresearch on Hugging Face infra and you want the surrounding workflow to reflect that environment. Stick with upstream if you want the cleanest, most portable, local-first version of the project with the fewest HF-specific assumptions.
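As a rough illustration of the difference (an assumption about the workflow, not the fork's actual code), an HF-oriented setup resolves weights through the Hub cache rather than a local checkpoint path:

```python
from huggingface_hub import snapshot_download

# Hedged sketch (assumed workflow, not the fork's code): fetch and cache
# model weights from the Hugging Face Hub instead of reading a local path.
# The model id below is just an example.
local_dir = snapshot_download(repo_id="Qwen/Qwen2.5-0.5B")
print("weights cached at", local_dir)
```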
thenamangoyal/autoresearch
Choose this fork if you want the same autonomous research idea but with MLX/Apple Silicon support and more configurable training. Stick with upstream if you want the original CUDA-centered experience and the freshest upstream fixes.
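For a sense of what the MLX side looks like (a minimal sketch under my own assumptions, not the fork's training code): MLX keeps arrays in unified memory and evaluates lazily, so the PyTorch-style device shuffling disappears:

```python
import mlx.core as mx
import mlx.nn as nn

# Hedged MLX sketch (not the fork's code): arrays live in unified memory,
# so there is no .to(device) step, and computation is lazy until mx.eval.
layer = nn.Linear(16, 16)
x = mx.random.normal((4, 16))
y = layer(x)
mx.eval(y)  # force the lazy graph to execute
print(y.shape)
```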
Choose this fork if your main blocker is AMD GPU support. If you are on NVIDIA hardware, upstream is likely the safer default: it is more current, and this fork is only a small ROCm-focused divergence.
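The divergence can stay small because PyTorch's ROCm builds reuse the familiar cuda device namespace for AMD GPUs. An illustrative check (not the fork's code):

```python
import torch

# Illustrative check (not the fork's code): PyTorch ROCm wheels expose AMD
# GPUs through the usual "cuda" API; torch.version.hip tells the backends apart.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(backend, torch.cuda.get_device_name(0))
else:
    print("no GPU visible to this PyTorch build")
```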
ar0cket1/test-time-rl-discover-autoresearch
active
significant_divergence
Choose this fork if you want a substantially expanded agentic research system with rollout validation, reward logic, and remote execution. Stick with upstream if you want the smallest possible autonomous-training sandbox and prefer to stay close to the original nanochat-style loop.