Introducing Sonus-1: A New Era of LLMs
At Sonus AI, we're driven by a mission to push the boundaries of AI. We're thrilled to announce the release of the Sonus-1 family of Large Language Models (LLMs). Sonus-1 is designed to combine high performance with versatility across a wide range of applications.
Meet the Sonus-1 Family: Pro, Air, and Mini
The Sonus-1 series is tailored to meet various needs:
Sonus-1 Mini: For when speed is of the essence, offering fast, cost-effective responses.
Sonus-1 Air: A versatile model with a balance of performance and resource usage.
Sonus-1 Pro: Our top-tier model, optimized for complex tasks requiring the best performance.
Sonus-1 Pro (w/ Reasoning): Our flagship model, featuring chain-of-thought reasoning for the most demanding tasks.
Benchmark Performance
The Sonus-1 family of models demonstrates impressive performance across different areas, with improvements in general reasoning, mathematics, and coding. Here are some highlights:
MMLU: Sonus-1 Pro (w/ Reasoning) reaches 90.15% on MMLU, demonstrating its powerful general reasoning skills.
MMLU-Pro: Sonus-1 Pro (w/ Reasoning) reaches 73.1% on MMLU-Pro, showing its robust capabilities.
Math (MATH-500): Sonus-1 Pro (w/ Reasoning) excels with a score of 91.8%, highlighting its ability to tackle complex math problems.
Reasoning (DROP): Sonus-1 Pro (w/ Reasoning) reaches 88.9%, showing its strong capabilities in reasoning tasks.
Reasoning (GPQA-Diamond): Sonus-1 Pro (w/ Reasoning) scores 67.3% on the challenging GPQA-Diamond benchmark, emphasizing its aptitude in scientific reasoning.
Code (HumanEval): Sonus-1 Pro (w/ Reasoning) scores 90.0%, a testament to its powerful coding abilities.
Code (LiveCodeBench): Sonus-1 Pro (w/ Reasoning) scores 51.9%, displaying solid performance in real-world code environments.
Math (GSM-8k): Sonus-1 Pro (w/ Reasoning) scores 97.0% on this grade-school math benchmark.
Code (Aider-Edit): Sonus-1 Pro (w/ Reasoning) achieves 72.6%, showing strong performance on code-editing tasks.
Notably, Sonus-1 Pro leads on several of these benchmarks, with particular strength in reasoning and mathematics, where it outpaces other proprietary models.
Capabilities
Sonus-1 delivers performance comparable to the most advanced proprietary models available today, as reflected in the benchmark results above.
Where To Try Sonus-1?
You can access the Sonus-1 suite of models at chat.sonus.ai.
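For programmatic access beyond the chat interface, the sketch below illustrates how one might query a Sonus-1 model through an OpenAI-style chat-completions endpoint. The base URL, authentication scheme, and model identifier (api.sonus.ai, SONUS_API_KEY, sonus-1-pro) are assumptions for illustration only, not a documented Sonus AI API; consult the official documentation for the actual endpoint and model names.

```python
# Hypothetical sketch: calling a Sonus-1 model over an OpenAI-style
# chat-completions API. The endpoint URL, auth header, and model name
# below are assumptions for illustration, not a documented interface.
import os
import requests

API_URL = "https://api.sonus.ai/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["SONUS_API_KEY"]                  # assumed auth scheme

def ask_sonus(prompt: str, model: str = "sonus-1-pro") -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_sonus("Summarize the Sonus-1 model family in one sentence."))
```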
What's Next?
We're committed to developing high-performance, affordable, reliable, and privacy-focused LLMs. As we expand the Sonus-1 series, we're working toward a stable release of additional models that can tackle even harder problems.