Liquid AI's LFM-2 Models: A New Era for Efficient On-Device AI
July 12, 2025
Just when the AI world seemed to have settled into a rhythm of bigger and bigger transformer models, Liquid AI has once again shaken things up. Following the success of their first-generation models, the company has just unveiled the LFM-2 series—a new family of Liquid Foundation Models that represents a major leap forward in efficient, powerful, and private AI.
For anyone running AI locally on their iPhone or Mac, this is huge news. The LFM architecture, built on a new hybrid of convolutions and attention, is uniquely suited for on-device applications where speed and memory are critical.
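To make the hybrid idea concrete, here is a toy sketch in plain Python. This is not Liquid AI's actual architecture (the post doesn't spell out LFM internals); it only illustrates why the combination is attractive: a short causal convolution mixes nearby tokens cheaply, while causal self-attention lets each token consult its entire prefix. The scalar-per-token simplification and the specific kernel are illustrative assumptions.

```python
import math

def causal_short_conv(x, kernel):
    """Causal 1-D convolution with a short kernel: each output mixes only
    a few recent tokens -- cheap, O(T * K) for sequence length T."""
    return [sum(kernel[k] * x[t - k] for k in range(len(kernel)) if t - k >= 0)
            for t in range(len(x))]

def causal_self_attention(x):
    """Scalar-valued causal self-attention: every token attends to all
    earlier tokens -- expressive, but O(T^2)."""
    out = []
    for t in range(len(x)):
        scores = [x[t] * x[s] for s in range(t + 1)]          # q . k
        m = max(scores)
        w = [math.exp(s - m) for s in scores]                 # stable softmax
        z = sum(w)
        out.append(sum(wi / z * x[s] for s, wi in enumerate(w)))  # weighted v
    return out

def hybrid_block(x, kernel):
    """One conv layer then one attention layer, each with a residual
    connection, loosely mirroring a conv/attention hybrid stack."""
    h = [a + b for a, b in zip(x, causal_short_conv(x, kernel))]
    return [a + b for a, b in zip(h, causal_self_attention(h))]

tokens = [0.5, -1.0, 0.25, 2.0]
print(hybrid_block(tokens, kernel=[0.6, 0.3, 0.1]))
```

The on-device payoff comes from the asymmetry in cost: if most layers are short convolutions (fixed, small receptive field) and only a few are full attention, both compute and cache memory grow far more slowly with context length than in a pure-attention transformer.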
From LFM-1 to LFM-2: A Focus on Hyper-Efficiency
The first generation of LFMs introduced the world to a new way of thinking about AI. The LFM-2 series builds on this foundation, addressing initial limitations and doubling down on what makes Liquid AI special: performance per watt. The new lineup is smaller, faster, and punches well above its weight class:
- LFM-2 350M: An ultra-lightweight model competitive with models twice its size, perfect for instant responses on any device.
- LFM-2 700M: The new sweet spot for mobile, outperforming 1B+ parameter models like Gemma 3 1B while being significantly faster.
- LFM-2 1.2B: A versatile and powerful model for Mac users that competes with models like Qwen3-1.7B, offering similar quality in a much smaller and faster package.
Why LFM-2 is a Game-Changer for Local AI
Liquid AI’s official benchmarks show the LFM-2 models are up to 2x faster than competitors on CPU, making them the new leaders for on-device inference.
What this means for your devices:
- Unmatched Speed on iPhone: The LFM-2 700M model is a breakthrough for mobile. It delivers a better experience than many 1B-class models, yet with the speed you’d expect from a much smaller model, enabling complex, real-time interactions.
- Peak Efficiency on Mac: The LFM-2 1.2B model gives Mac users a powerful new option for coding, writing, and analysis. It provides the quality of a much larger model without the associated memory and performance overhead, making it ideal for Apple Silicon’s unified memory.
- True General-Purpose Capability: The LFM-2 series shows strong performance across the board in knowledge, mathematics, and instruction following, making these small models surprisingly capable for a wide range of tasks.
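To see why these parameter counts play so well with unified memory, here's a back-of-the-envelope estimate of the weights' resident size at a few common quantization levels. The bits-per-weight figures are standard approximations, not measured numbers from Liquid AI, and the estimate ignores KV cache and runtime overhead.

```python
def weights_gb(params, bits_per_weight):
    """Approximate size of the model weights alone, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

for name, params in [("LFM-2 350M", 350e6), ("LFM-2 700M", 700e6), ("LFM-2 1.2B", 1.2e9)]:
    for bits in (16, 8, 4):
        print(f"{name}: ~{weights_gb(params, bits):.2f} GB at {bits}-bit")
```

Even at full 16-bit precision the 1.2B model's weights fit in about 2.4 GB, and a 4-bit quantization lands around 0.6 GB, comfortably inside the unified memory budget of any Apple Silicon Mac while leaving room for the rest of the system.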
Running LFM-2 Models with Enclave AI
The promise of highly efficient, powerful local models is at the heart of what we do at Enclave AI. Our team is already working to integrate the new LFM-2 models into our app, with a focus on optimizing them for Apple Silicon and the Neural Engine.
With Enclave AI, you’ll be able to run these next-generation models with the same privacy-first guarantee we offer for all our supported models. Your data never leaves your device. We expect to roll out support for the LFM-2 700M and LFM-2 1.2B models in the coming weeks.
The Future is Efficient
The LFM-2 series is a powerful statement that the future of AI isn’t just about scale, but about smart architecture. By creating models that deliver outsized performance with a tiny footprint, Liquid AI has opened the door to a new class of AI systems that are faster and better suited to the personal devices we use every day.
The era of truly powerful, private, and personal AI is here, and it’s running on liquid.
Stay tuned for updates on LFM-2 integration in Enclave AI. The future of on-device AI is evolving faster than ever, and we’re excited to bring these new capabilities directly to you.