Mistral AI Unveils Codestral Mamba 7B for Advanced Code Generation

The new model offers fast, efficient code generation, with linear-time inference and support for very long input contexts.

  • Codestral Mamba 7B scores 75% on the HumanEval Python benchmark, outperforming comparably sized open-source code models.
  • The model handles contexts of up to 256k tokens, double the 128k window of OpenAI's GPT-4o.
  • Released under the Apache 2.0 license, the model is free to use, modify, and distribute.
  • The Mamba architecture replaces attention with a selective state-space model, enabling linear-time inference and efficient long-context handling (see the sketch after this list).
  • Mistral also launched Mathstral 7B, a companion model specialized for STEM reasoning, with a 32k-token context window.
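
The linear-time claim comes from Mamba replacing attention's ever-growing key-value cache with a fixed-size recurrent state. The toy NumPy sketch below is schematic only: the dimensions, the scalar decay `A`, and both step functions are illustrative assumptions, not Mistral's implementation. It shows why per-token work grows with context length under attention but stays constant for a state-space update.

```python
# Schematic contrast between attention-based decoding and a Mamba-style
# recurrent update. All shapes and update rules here are toy
# simplifications, not Mistral's actual model.
import numpy as np

d = 64           # hidden dimension (illustrative)
n_tokens = 1000  # length of the generated sequence

rng = np.random.default_rng(0)

# --- Attention-style decoding: the cache grows with the sequence. ---
# Each new token attends over every cached entry, so per-token cost and
# memory both grow with context length: O(n^2) for the whole sequence.
kv_cache = []

def attention_step(x):
    kv_cache.append(x)
    keys = np.stack(kv_cache)               # (t, d) -- grows every step
    scores = keys @ x / np.sqrt(d)           # O(t) work per token
    weights = np.exp(scores - scores.max())  # softmax over the cache
    weights /= weights.sum()
    return weights @ keys

# --- Mamba-style decoding: a fixed-size recurrent state. ---
# A state-space model folds the history into a constant-size state, so
# each token costs O(1) and the full sequence is O(n).
state = np.zeros(d)
A = 0.9  # toy scalar decay; real SSMs learn input-dependent dynamics

def ssm_step(x):
    global state
    state = A * state + x                    # constant work per token
    return state

for _ in range(n_tokens):
    x = rng.standard_normal(d)
    attention_step(x)   # cost grows with len(kv_cache)
    ssm_step(x)         # cost stays constant
```

Because the recurrent state never grows, memory use and per-token latency are independent of how much context the model has already consumed, which is what makes very long inputs practical for this architecture.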