Arcee AI: Coder Large
Coder‑Large is a 32B‑parameter descendant of Qwen 2.5‑Instruct, further trained on permissively licensed GitHub code, CodeSearchNet, and synthetic bug‑fix corpora. It supports a 32K‑token context window, enabling multi‑file refactoring or long diff review in a single call, and understands 30‑plus programming languages, with special attention to TypeScript, Go, and Terraform. Internal benchmarks show 5–8‑point gains over CodeLlama‑34B‑Python on HumanEval and competitive BugFix scores, thanks to a reinforcement pass that rewards compilable output. The model emits structured explanations alongside code blocks by default, making it suitable for educational tooling as well as production copilot scenarios. Cost‑wise, Together AI prices it well below proprietary incumbents, so teams can scale interactive coding without runaway spend.
Parameters: Undisclosed
Context Window: 33K tokens
License: Proprietary
Released: May 5, 2025
💰 Pricing
Input: $0.50 per 1M tokens
Output: $0.80 per 1M tokens
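As a quick sanity check on spend, the listed rates translate directly into per-request cost. A minimal TypeScript sketch (the token counts in the example are arbitrary, not from this page):

```typescript
// Back-of-the-envelope spend check using the listed rates:
// $0.50 per 1M input tokens, $0.80 per 1M output tokens.
const INPUT_USD_PER_M = 0.5;
const OUTPUT_USD_PER_M = 0.8;

function requestCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_M
  );
}

// Example with arbitrary sizes: a near-full 33K-token prompt plus a 2K-token reply.
console.log(requestCostUSD(33_000, 2_000).toFixed(4)); // "0.0181"
```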
API Available
This model is accessible via API for integration into your applications.
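The exact endpoint, model slug, and auth scheme depend on your provider. The sketch below assumes an OpenAI-compatible chat-completions API (the style Together AI and similar hosts expose); BASE_URL, the model slug, and the PROVIDER_API_KEY variable are placeholders, not values confirmed by this page:

```typescript
// Minimal sketch of calling Coder-Large through an OpenAI-compatible
// chat-completions endpoint. BASE_URL and MODEL are assumed placeholders:
// substitute whatever your provider actually documents.
const BASE_URL = "https://api.example-provider.com/v1"; // assumed placeholder
const MODEL = "arcee-ai/coder-large"; // assumed slug

async function reviewDiff(diff: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PROVIDER_API_KEY}`, // your key
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [
        { role: "system", content: "You are a careful code reviewer." },
        { role: "user", content: `Review this diff and flag any bugs:\n\n${diff}` },
      ],
      max_tokens: 1024,
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible responses put the text at choices[0].message.content.
  return data.choices[0].message.content;
}
```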
⭐ Related Models
Codestral (Mistral)
Mistral's code-specialized model. Trained specifically for code generation, completion, and understanding.
StarCoder 2 15B (BigCode)
Open-source code model trained on The Stack v2. Strong at code completion and understanding across 600+ languages.
Arcee AI: Trinity Large Thinking (arcee-ai)
Trinity Large Thinking is a powerful open-source reasoning model from the team at Arcee AI. It shows strong performance on PinchBench, agentic workloads, and reasoning tasks. It is free in OpenClaw for the first five days. Launch video: https://youtu.be/Gc82AXLa0Rg?si=4RLn6WBz33qT--B7