Mistral Large 2
v2407: the new generation of our flagship model.
Overview
123B parameters, offering state-of-the-art performance.
128k token context window for long-context applications.
Strong multilingual support, excelling in over a dozen languages.
Advanced reasoning and mathematical capabilities.
Native function calling and JSON output for agentic workflows.
Key Features
Supports dozens of languages including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch and Polish.
Trained on over 80 coding languages such as Python, Java, C, C++, JavaScript, and Bash, as well as more specific languages like Swift and Fortran.
Best-in-class agentic capabilities with native function calling and JSON output; a minimal sketch follows this list.
State-of-the-art mathematical and reasoning capabilities, with high scores on benchmarks like GSM8K and MATH.
A large 128k context window, allowing for the processing of extensive documents and complex conversations.
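To make the agentic interface concrete, here is a minimal sketch of native function calling with the `mistralai` Python client (v1.x). The `get_exchange_rate` tool, the prompt, and the `MISTRAL_API_KEY` handling are illustrative assumptions, not part of the model card:

```python
import json
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical tool schema, used only to illustrate the calling convention.
tools = [{
    "type": "function",
    "function": {
        "name": "get_exchange_rate",
        "description": "Get the current exchange rate between two currencies.",
        "parameters": {
            "type": "object",
            "properties": {
                "base": {"type": "string", "description": "Base currency, e.g. EUR"},
                "quote": {"type": "string", "description": "Quote currency, e.g. USD"},
            },
            "required": ["base", "quote"],
        },
    },
}]

response = client.chat.complete(
    model="mistral-large-latest",  # or a pinned release such as "mistral-large-2407"
    messages=[{"role": "user", "content": "How many US dollars is 50 euros?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

message = response.choices[0].message
if message.tool_calls:
    # The model returns the chosen function and its arguments as JSON.
    call = message.tool_calls[0]
    print(call.function.name)                   # get_exchange_rate
    print(json.loads(call.function.arguments))  # {"base": "EUR", "quote": "USD"}
```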
Real-World Use Cases
Complex Code Generation
For: A developer who needs to generate a complex algorithm in Python for a data analysis task.
Example Prompt / Workflow
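A possible workflow, sketched with the `mistralai` Python client; the prompt wording and model alias are illustrative assumptions:

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical prompt for a non-trivial data-analysis algorithm.
prompt = (
    "Write a Python function that flags anomalies in a daily time series "
    "using a rolling median and MAD-based thresholds. Include type hints, "
    "a docstring, and a short usage example."
)

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # generated code, ready for review
```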
Multilingual Customer Support
For: A global company that wants to build a chatbot to handle customer support queries in multiple languages.
Example Prompt / Workflow
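A possible workflow: pin the support persona in a system prompt and let the model match the customer's language. The company name, prompt, and model alias below are illustrative assumptions:

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical system prompt: answer in whatever language the customer uses.
messages = [
    {
        "role": "system",
        "content": "You are a support agent for Acme GmbH. "
                   "Always reply in the language the customer writes in.",
    },
    {"role": "user", "content": "¿Cómo puedo restablecer mi contraseña?"},
]

response = client.chat.complete(model="mistral-large-latest", messages=messages)
reply = response.choices[0].message.content
print(reply)  # expected to come back in Spanish

# Keep the reply in the history so follow-up turns stay in context.
messages.append({"role": "assistant", "content": reply})
```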
Financial Data Analysis
For: A financial analyst who needs to analyze a large dataset of financial reports to identify trends and anomalies.
Example Prompt / Workflow
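A possible workflow that leans on the 128k context window: send the raw reports in a single request and ask for structured JSON back. The file name, prompt, and model alias are illustrative assumptions:

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical input file; 128k tokens is roughly several hundred pages,
# so a large batch of reports can fit in a single request.
with open("q3_reports.txt", encoding="utf-8") as f:
    reports = f.read()

response = client.chat.complete(
    model="mistral-large-latest",
    messages=[{
        "role": "user",
        "content": (
            "Identify revenue trends and anomalies in the financial reports "
            "below. Respond as JSON with keys 'trends' and 'anomalies'.\n\n"
            + reports
        ),
    }],
    response_format={"type": "json_object"},  # request machine-readable output
)
print(response.choices[0].message.content)
```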
Pros & Cons
Pros
- ✓ State-of-the-art performance in reasoning, code, and math.
- ✓ Excellent multilingual capabilities.
- ✓ Large 128k context window.
- ✓ Open weights under a research license.
- ✓ Strong instruction-following and conversational abilities.
Cons
- ✕ Commercial use requires a separate license.
- ✕ Requires significant computational resources for self-deployment.
- ✕ No built-in moderation mechanisms.
Quick Start
Step 1
Visit the Mistral AI website to access the model via la Plateforme.
Step 2
For self-deployment, download the model weights from Hugging Face.
Step 3
Follow the provided documentation to set up the model with `mistral_inference` or `transformers`.
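For example, a minimal `transformers` loading sketch; the repo id, dtype, and device settings are assumptions, and the full 123B model requires multiple high-memory GPUs:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Large-Instruct-2407"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory relative to float32
    device_map="auto",           # shard the 123B weights across available GPUs
)

messages = [{"role": "user", "content": "Write a haiku about long context windows."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```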
Step 4
Start building applications on top of the API, for example with streaming chat completions as sketched below.
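For example, a minimal streaming sketch with the `mistralai` Python client (v1.x); the model alias and prompt are illustrative assumptions:

```python
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Stream tokens as they are generated instead of waiting for the full reply.
stream = client.chat.stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "What does a 128k context window enable?"}],
)
for chunk in stream:
    delta = chunk.data.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```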
