
On Google's Python code optimizer, Stability AI's open-weights audio model, a new Chinese video generation model, the multi-agent LLM course, and more.

Signup  |  Past Issues  |  Follow on X  |  Read on Web

AlphaSignal

Hey,

Welcome to today's edition of AlphaSignal. 


Whether you are a researcher, engineer, developer, or data scientist, our summaries keep you up to date with the latest breakthroughs in AI.


Let's get into it,


Lior


IN TODAY'S SIGNAL

Top News

  • Mistral launches fine-tuning API for Mistral 7B and Mistral Small models.

Axelera

Top 5 Signals

Top of HuggingFace

  • MiniCPM-Llama3-V-2_5 lets you run a multimodal LLM on mobile devices.

  • Mobius generates high-quality, unbiased images with fewer resources.

  • RMBG-1.4 removes image backgrounds with high accuracy using IS-Net.

  • fineweb-edu dataset offers 1.3 trillion educational tokens.

  • humanevalpack extends HumanEval to six languages and three coding tasks.

  • imageinwords provides detailed image descriptions with 1,612 examples.

Recommended Lecture

Read Time: 4 min 22 sec

Enjoying this newsletter?
Please forward it to a friend or colleague. It helps us keep this content free.

TOP NEWS

Fine-Tuning

Mistral launches fine-tuning API

⇧ 1121 Likes

What's New

New Fine-Tuning Capabilities

Mistral AI has launched fine-tuning services and an SDK for customizing its models, either through the hosted fine-tuning API on la Plateforme or on your own infrastructure.

Fine-tuning is a powerful technique for customizing LLMs, yielding better responses, more flexibility, and greater efficiency for specific applications. Tailoring a smaller model to a specific domain or use case can match the performance of larger models while reducing deployment costs and improving application speed.


Mistral models are known for their strong performance in text generation, coding assistance, and a range of other natural language processing tasks.


LoRA Training

Mistral uses the LoRA (Low-Rank Adaptation) training paradigm: instead of updating all model weights, only small low-rank adapter matrices are trained, which keeps fine-tuning memory-efficient and performant across a wide range of infrastructure.
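
As a rough illustration of the LoRA idea (plain numpy, not Mistral's code): the pretrained weight matrix stays frozen, and only two small rank-r matrices are trained; their product is added on top of the frozen weights.

    import numpy as np

    d, r, alpha = 4096, 8, 16        # model dim, LoRA rank (r << d), scaling

    W = np.random.randn(d, d)        # pretrained weights, frozen
    A = np.random.randn(r, d) * 0.01 # trainable, rank r
    B = np.zeros((d, r))             # trainable, starts at zero

    # Effective layer weights: only A and B (2*d*r parameters) are updated
    # during fine-tuning, instead of all d*d entries of W.
    W_eff = W + (alpha / r) * (B @ A)

Because B is initialized to zero, training starts from the unmodified base model, and each adapted layer adds only 2*d*r trainable parameters.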


SDK and Infrastructure Support

The mistral-finetune SDK supports multi-GPU setups and scales down to a single Nvidia A100 or H100 GPU for smaller models like Mistral 7B. Fine-tuning on a dataset such as UltraChat (1.4 million dialogs) takes about 30 minutes on eight H100 GPUs.


Three Entry Points for Fine-Tuning

Mistral offers three methods for fine-tuning:

  • Open-source SDK: Fine-tune models on your own infrastructure with the mistral-finetune SDK.

  • Serverless Fine-Tuning Services: Use Mistral’s managed services on la Plateforme for quick, cost-effective model adaptation.

  • Custom Training Services: Tailor models using proprietary data for specialized applications.

How to Start Using It

  1. Register on la Plateforme: Sign up to access the fine-tuning services.

  2. Download the SDK: Get the mistral-finetune SDK from the GitHub repository.

  3. Follow the Guide and Tutorial: Use the provided guide and tutorial to start building applications with your custom fine-tuned models.

  4. Choose Your Fine-Tuning Method: the open-source SDK, the serverless service on la Plateforme, or custom training (a minimal API sketch follows below).
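
For the serverless route, submitting a job looks roughly like the sketch below. Class and parameter names are assumptions based on Mistral's fine-tuning announcement and may not match the current mistralai client exactly; treat it as a shape, not a drop-in snippet.

    import os
    from mistralai.client import MistralClient

    client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

    # Upload training data in the JSONL chat format described in the docs.
    # "ultrachat_train.jsonl" is a placeholder file name.
    with open("ultrachat_train.jsonl", "rb") as f:
        training_file = client.files.create(file=("ultrachat_train.jsonl", f))

    # Launch a fine-tuning job on Mistral 7B (hyperparameter names assumed
    # from the launch docs; check the current SDK before running).
    job = client.jobs.create(
        model="open-mistral-7b",
        training_files=[training_file.id],
        hyperparameters={"training_steps": 10, "learning_rate": 1e-4},
    )
    print(job.id, job.status)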

MISTRAL API

AI acceleration hardware that doesn't break budgets

Discover Next-Gen Computer Vision Solutions
Looking for top AI performance, but put off by expensive, power-hungry GPU cards? Explore the Metis Evaluation Kits for building cutting-edge computer vision applications.


Industry-Defining AI Vision Inference
Axelera AI’s Metis Evaluation Kits are designed for rapid development of computer vision applications at the edge. The integrated Metis AI accelerators each provide up to 214 TOPS at a fraction of the cost and power consumption of GPU cards.

TRY NOW

partner with us

TRENDING SIGNALS

Coding

Google releases Code Transformation: a tool for automated Python code editing and optimization

⇧  221 Likes

Audio

Stability AI releases Stable Audio Open, an open-weights model that generates up to 47 seconds of audio from simple text prompts

⇧ 1006 Likes

Embeddings

Nomic expands its embeddings to multiple modalities with Nomic-Embed-Vision

⇧ 505 Likes

Open Source

Google releases an open-source Python UI framework to build AI/ML apps

⇧ 789 Likes

Video Generation

New Chinese video generation model drops before Sora, available on iOS

⇧ 315 Likes

TOP OF HUGGINGFACE

Models

  • MiniCPM-Llama3-V-2_5: lets you run a multimodal LLM on mobile devices with efficient deployment. It achieves 65.1 on OpenCompass and excels in OCR with a 700+ score on OCRBench. It supports over 30 languages and has low hallucination rates. Deployable on low-VRAM GPUs, it offers efficient inference and fine-tuning.


  • mobius: helps you generate unbiased, high-quality images across various styles and domains. It uses a constructive deconstruction framework, eliminating biases without needing extensive pretraining. Mobius outperforms other diffusion models in fairness, adaptability, and efficiency, requiring fewer resources for fine-tuning.

  • RMBG-1.4: helps you remove backgrounds from images with high accuracy. Trained on 12,000+ licensed images, it handles diverse scenarios like stock photos, e-commerce, and gaming. It achieves pixel-wise accuracy using the IS-Net architecture (see the sketch below).
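
To try RMBG-1.4 quickly, the model card's transformers pipeline route looks like this (the image path below is a placeholder):

    from transformers import pipeline

    # RMBG-1.4 ships custom pipeline code, hence trust_remote_code=True.
    pipe = pipeline("image-segmentation",
                    model="briaai/RMBG-1.4",
                    trust_remote_code=True)

    # Returns a PIL image with the background removed;
    # pass return_mask=True to get the mask instead.
    no_bg = pipe("product_photo.jpg")
    no_bg.save("product_photo_no_bg.png")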

Datasets

  • fineweb-edu: contains 1.3 trillion educational tokens filtered from FineWeb using an educational quality classifier. It outperforms FineWeb on educational benchmarks like MMLU, ARC, and OpenBookQA. You can access the dataset, classifier, and training code. Sample versions are available, including subsets of 350B, 100B, and 10B tokens (see the loading sketch after this list).

  • humanevalpack: extends OpenAI's HumanEval to six languages (Python, JavaScript, Java, Go, C++, Rust) with three tasks. It was built alongside CommitPack, 4TB of GitHub commits across 350 languages filtered for high-quality commit messages. It provides prompts, solutions, buggy solutions, bug types, and unit tests.

  • Google imageinwords: a dataset for generating hyper-detailed image descriptions. It includes 1,612 images with human-generated and machine-generated descriptions. Metrics include comprehensiveness, specificity, hallucination, and human-likeness.
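
To inspect fineweb-edu without pulling 1.3 trillion tokens, you can stream one of the sample subsets (config name taken from the dataset card):

    from datasets import load_dataset

    # Stream the 10B-token sample instead of downloading the full dataset.
    ds = load_dataset("HuggingFaceFW/fineweb-edu",
                      name="sample-10BT",
                      split="train",
                      streaming=True)

    for row in ds.take(3):
        print(row["text"][:200])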

LECTURE

AI AGENTS

The AI Agentic Course: Building multi-agent LLM applications with LangGraph

The AI Agentic course teaches you to use LangGraph for building single- and multi-agent LLM applications. It is a free course offered by DeepLearning.AI.


Led by Harrison Chase, founder of LangChain, and Rotem Weiss, founder of Tavily, this course covers integrating agentic search for enhanced agent knowledge, implementing agentic memory for state management, and using human-in-the-loop guidance. 


You'll build an agent from scratch and then reconstruct it with LangGraph, learning about its components. 


The AI Agentic course will teach you to:

  • Build an agent from scratch using Python and an LLM.

  • Rebuild the agent using LangGraph to understand its components (a minimal LangGraph sketch follows this list).

  • Integrate agentic search to enhance agent knowledge with query-focused answers.

  • Implement agentic memory for managing state across multiple threads and conversations.

  • Use human-in-the-loop input to guide agents at key points.
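
For a sense of what the course builds toward, here is a minimal LangGraph graph with a single node; the LLM call is stubbed with a placeholder function rather than course code:

    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class AgentState(TypedDict):
        question: str
        answer: str

    def respond(state: AgentState) -> dict:
        # Placeholder for a real LLM call; the course wires in a model and tools.
        return {"answer": f"(model response to: {state['question']})"}

    graph = StateGraph(AgentState)
    graph.add_node("respond", respond)
    graph.set_entry_point("respond")
    graph.add_edge("respond", END)
    app = graph.compile()

    print(app.invoke({"question": "What does LangGraph add over a plain loop?"}))
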
CHECK THE COURSE
