IN TODAY'S SIGNAL
Read time: 4 min 48 sec
🎖️ Top News
📌 CodeRabbit
⚡️ Trending Signals
📌 NVIDIA
🧠 Top Papers
If you're enjoying AlphaSignal, please forward this email to a colleague.
It helps us keep this content free.
TOP NEWS
Open Source
Mistral Releases Large 2: An Open Source Multilingual LLM with 80+ Coding Languages
⇧ 2519 Likes
What's New
Mistral AI released Mistral Large 2, its largest dense model with 123 billion parameters. The model fits on a single H100 node, and its weights are open for non-commercial use. The release follows Meta's Llama 3.1 405B.
Key Specifications and Performance Metrics
- 123 billion parameters on a single H100 node
- 128k context window, supporting dozens of languages
- Achieves 84% on MMLU, 8.63 on MT Bench, and 92% on HumanEval
- Available on Hugging Face for research and non-commercial use
- Commercial license available for deployment
Enhanced Coding and Reasoning Capabilities
Mistral Large 2 excels at coding, trained on 80+ programming languages. It matches or surpasses models like GPT-4o, Opus-3, and Llama-3 405B on coding benchmarks.
Compared to its predecessor, the original Mistral Large, it hallucinates less and is more reliable on complex tasks.
Improved Instruction Following and Conversation Handling
Mistral Large 2 follows instructions better and handles long multi-turn conversations. On benchmarks like Wild Bench, Arena Hard, and MT Bench, it outperforms Llama 3.1 405B and Opus-3, and matches Sonnet-3.5 and GPT-4o. This improvement makes it suitable for applications requiring precise and sustained interactions.
Multilingual Training and Performance
Trained on a significant amount of multilingual data, Mistral Large 2 excels in languages like English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi.
It outperforms Llama-3.1 70B and is comparable to Llama-3.1 405B in multilingual tasks, making it versatile for global applications.
Advanced Function Calling
Mistral Large 2 features enhanced function calling and retrieval skills, performing parallel and sequential function calls effectively.
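The parallel-function-calling pattern can be sketched generically: the model emits one or more tool-call requests, and the client executes independent calls concurrently and returns the results. A minimal, hypothetical dispatcher (the tool names and call format here are illustrative, not Mistral's API):

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Hypothetical local tools; in practice these would wrap real services.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def get_time(tz: str) -> str:
    return f"12:00 in {tz}"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

def run_tool_calls(tool_calls):
    """Execute independent tool calls in parallel, preserving order."""
    def run_one(call):
        fn = TOOLS[call["name"]]
        return fn(**json.loads(call["arguments"]))
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, tool_calls))

# A tool-call batch shaped like what a model might emit:
calls = [
    {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    {"name": "get_time", "arguments": '{"tz": "UTC"}'},
]
print(run_tool_calls(calls))  # ['Sunny in Paris', '12:00 in UTC']
```

Sequential calls (where one tool's output feeds the next) would instead run in a loop, appending each result to the conversation before the next model turn.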
Access
You can use Mistral Large 2 today via la Plateforme under the name mistral-large-2407, and test it on le Chat. Weights for the instruct model are also hosted on Hugging Face.
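Mistral's hosted API uses an OpenAI-compatible chat-completions format; a minimal sketch of assembling such a request (the endpoint path and auth header are assumptions based on Mistral's public API conventions, so check the official docs before use):

```python
import json

# Sketch of a chat-completions request to la Plateforme.
# Endpoint and auth details are assumptions; verify against Mistral's API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> dict:
    """Assemble the HTTP request pieces for mistral-large-2407."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "mistral-large-2407",
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("Write a quicksort in Rust.", "YOUR_API_KEY")
# Send with e.g. requests.post(req["url"], headers=req["headers"], data=req["body"])
```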
READ MORE
CodeRabbit: Merge Code 10x faster with AI-driven code reviews
With CodeRabbit, you get:
- Codebase-aware, line-by-line reviews with 1-click fixes.
- Smart real-time chat for advice, code generation, and issue creation directly from review comments.
- Comprehensive pull request summaries and sequence diagrams of changes.
The most installed AI app on the GitHub and GitLab marketplaces, loved by thousands of developers.
Enjoy a 7-day trial and access for OSS projects.
TRY NOW
partner with us
TRENDING SIGNALS
Language Models · ⇧ 205 Likes
Open Source · ⇧ 1225 Likes
Notebooks · ⇧ 242 Likes
Fine-tuning · ⇧ 1429 Likes
Search · ⇧ 3949 Likes
NVIDIA releases a new way to instantly use and deploy the best models, including Llama 3.1 405B, 70B, and 8B
Their AI Foundry lets you create custom "supermodels" tailored to your needs, trained on proprietary data as well as synthetic data generated from Llama 3.1 405B.
It handles data curation, synthetic data generation, fine-tuning with proprietary data, accurate response retrieval, comprehensive evaluation, and deployment.
Try it now ↗️
TOP PAPERS
Video Generation
⇧ 1528 Likes
Problem
Reconstructing dynamic scenes from single videos is complex due to the ill-posed nature of the task. Traditional methods are limited as they require templates, function only in nearly static scenes, or cannot track full-sequence 3D motion, which makes them unsuitable for complex, moving scenes.
Solution
This approach uses SE(3) motion bases to model motion as a combination of base movements. It integrates data-driven priors like depth maps and 2D motion tracks into a unified scene representation, enhancing consistency and accuracy.
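The core idea, motion expressed as a weighted combination of a small set of SE(3) bases, can be illustrated numerically. This is a toy sketch, not the paper's implementation: it blends per-basis transformed points linearly (as in linear blend skinning) rather than blending in the Lie algebra.

```python
import numpy as np

def se3(axis_angle, t):
    """Build a 4x4 SE(3) matrix from an axis-angle rotation and a translation."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = axis_angle / theta
        # Rodrigues' formula via the cross-product matrix of the unit axis.
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Two motion bases: a small rotation about z, and a translation along x.
bases = [se3(np.array([0.0, 0.0, 0.1]), np.zeros(3)),
         se3(np.zeros(3), np.array([1.0, 0.0, 0.0]))]

def transform(point, weights):
    """Blend the per-basis transformed points by the point's basis weights."""
    p = np.append(point, 1.0)
    blended = sum(w * (B @ p) for w, B in zip(weights, bases))
    return blended[:3] / blended[3]

p_moved = transform(np.array([1.0, 0.0, 0.0]), [0.5, 0.5])
```

Each scene point carries its own weight vector, so a handful of shared bases can describe the motion of every point in the scene compactly.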
Results
The technique sets a new standard in 3D/2D motion tracking and novel view synthesis. It lowers 3D tracking error to 0.082 EPE and raises 3D tracking accuracy (within 5cm) to 43.0% on the iPhone dataset.
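The two metrics quoted above are simple to compute: EPE is the mean end-point distance between predicted and ground-truth 3D tracks, and the accuracy figure is the fraction of predictions within 5 cm. A minimal sketch with toy data:

```python
import numpy as np

def epe(pred, gt):
    """Mean end-point error: average Euclidean distance per tracked point."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def accuracy_within(pred, gt, thresh=0.05):
    """Fraction of predictions within `thresh` meters (5 cm) of ground truth."""
    return float((np.linalg.norm(pred - gt, axis=-1) < thresh).mean())

# Toy tracks: four 3D points, two predictions off by 1 cm, two by 10 cm.
gt = np.zeros((4, 3))
pred = np.array([[0.01, 0, 0], [0.01, 0, 0], [0.10, 0, 0], [0.10, 0, 0]])
print(epe(pred, gt))              # 0.055
print(accuracy_within(pred, gt))  # 0.5
```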
Language Models
⇧ 1411 Likes
Problem
Modern AI requires foundation models that integrate multilinguality, coding, reasoning, and tool usage. Existing models, while advanced, do not fully integrate these capabilities with high performance across various tasks.
Solution
Llama 3, a dense Transformer model with 405B parameters and a 128K token context window, addresses this need. It was pre-trained on a 15T token multilingual corpus using 3.8 × 10^25 FLOPs, significantly outscaling previous models. Llama 3 supports extensive multilingual capabilities and integrates coding, reasoning, and tool usage more seamlessly.
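The quoted compute budget is consistent with the standard 6ND approximation for dense-Transformer training FLOPs (N parameters, D training tokens); the small gap from the reported 3.8 × 10^25 plausibly reflects overheads the approximation ignores:

```python
# Rough training-compute check using the common 6*N*D approximation.
N = 405e9   # parameters
D = 15e12   # training tokens
flops = 6 * N * D
print(f"{flops:.3e}")  # on the order of 3.6e25, close to the reported 3.8e25
```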
Results
Llama 3 matches GPT-4's performance across numerous benchmarks. On MMLU and IFEval, Llama 3.1 405B scores 87.3% and 88.6% respectively, competitive with or superior to other leading models. Smaller versions also outperform comparable models, making Llama 3.1 a strong option across size scales.
KAN
⇧ 685 Likes
Problem
Existing comparisons between Kolmogorov-Arnold Networks (KAN) and Multi-Layer Perceptrons (MLP) are unfair because the models differ in parameter counts and FLOPs.
Solution
This study controls parameters and FLOPs to fairly compare KAN and MLP across tasks spanning machine learning, computer vision, NLP, audio processing, and symbolic formula representation. The impact of the B-spline activation is also isolated.
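Parameter matching starts from the per-layer counts: an MLP layer has d_in·d_out weights plus biases, while a B-spline KAN layer carries roughly (G + k) coefficients per edge, for grid size G and spline order k. The exact bookkeeping (base weights, scales) varies by implementation, so this sketch is an approximation:

```python
def mlp_layer_params(d_in, d_out):
    """Weights plus biases of one fully connected layer."""
    return d_in * d_out + d_out

def kan_layer_params(d_in, d_out, grid=5, order=3):
    """Approximate B-spline KAN layer: (grid + order) coefficients per edge.
    Exact counts vary by implementation (base weights, scales, etc.)."""
    return d_in * d_out * (grid + order)

# At equal width, the KAN layer is several times heavier, so a
# parameter-matched MLP can afford a much wider hidden layer.
d = 64
ratio = kan_layer_params(d, d) / mlp_layer_params(d, d)
print(f"{ratio:.1f}x")  # roughly 8x more parameters per layer
```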
Results
MLP outperformed KAN in machine learning (86.16% vs. 85.96%), computer vision (85.88% vs. 77.88%), NLP (80.45% vs. 79.95%), and audio processing (17.74% vs. 15.49%). KAN excelled only in symbolic formula representation (1.2e-3 vs. 7.4e-3 RMSE). The code is publicly available.