ICLR 2026

Liquid AI


Posted on Apr 21, 2026

Location

San Francisco; Boston; Remote

Employment Type

Full time

Location Type

Hybrid

Department

R&D

Deadline to Apply

April 27, 2026 at 12:00 AM EDT

About Liquid AI

Spun out of MIT CSAIL, we build foundation models from scratch. Our Liquid Foundation Models (LFMs) use a fundamentally different hybrid architecture: they deliver faster inference, use less memory, and deploy where traditional models can't. We ship open-weight text, vision-language, and audio-language models that run on phones, laptops, vehicles, and embedded devices.

Why We're at ICLR

We're here because ICLR brings together the people working on the problems we care most about: efficient architectures, representation learning, multimodal reasoning, and the science of how models learn. If our conversation at the booth was interesting, this is the next step.

What We're Building

Liquid AI is hiring across several research and engineering areas. You don't need to fit neatly into one. If your work touches any of these, we want to talk:

  • Efficient Architectures. State space models, hybrid attention designs, neural ODEs, and alternatives to the transformer paradigm. Our LFM2 architecture combines gated short convolutions with grouped query attention. We're looking for people who think about what comes next.

  • Multimodal Vision. Vision-language models that run on-device under tight latency and memory constraints. Our VLM team has shipped multiple best-in-class models and owns the full pipeline from architecture through deployment.

  • Multimodal Audio. Speech foundation models, end-to-end audio-language systems, and real-time voice on constrained hardware. Our LFM2.5-Audio runs natively on devices with dramatically faster decoding than its predecessor.

  • Data Engineering. Pre-training data curation, synthetic data generation, data mixtures, and scaling strategies. The quality of what goes in determines everything that comes out.

  • Infrastructure & Performance. Distributed training, GPU kernel optimization, edge inference, and model serving at scale. We build the systems that make our architecture fast in practice, from custom kernels to on-device deployment pipelines.

  • Post-Training & Alignment. RLHF, preference optimization, multi-stage reinforcement learning, and evaluation. Our latest models were shaped by large-scale RL without supervised fine-tuning warmup.
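For a feel of the gated-short-convolution idea mentioned under Efficient Architectures, here is a toy single-channel sketch in plain Python. All parameters here are hypothetical illustrations; the production operator in LFM2 is learned, multi-channel, and heavily optimized, so this only shows the core pattern of a causal short convolution modulated by an input-dependent gate.

```python
import math

def gated_short_conv(x, kernel, gate_w):
    """Toy causal gated short convolution over a 1-D sequence.

    x:      list of floats (the input sequence)
    kernel: list of K causal taps (K is small, hence "short")
    gate_w: scalar weight for a simple sigmoid gate (hypothetical)
    """
    K = len(kernel)
    out = []
    for t in range(len(x)):
        # Causal convolution: only positions <= t contribute,
        # so the operator never looks at future tokens.
        conv = sum(kernel[k] * x[t - k] for k in range(K) if t - k >= 0)
        # Input-dependent sigmoid gate scales the conv output.
        gate = 1.0 / (1.0 + math.exp(-gate_w * x[t]))
        out.append(gate * conv)
    return out

# Example: an impulse input shows each kernel tap appearing in turn.
y = gated_short_conv([1.0, 0.0, 0.0, 0.0], [0.5, 0.25, 0.125], 1.0)
```

Because the kernel is short and causal, each output step costs O(K) rather than the O(T) of full attention, which is the efficiency argument behind operators of this family.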

Who Thrives Here

This is a small team where individuals own entire work-streams end-to-end, from research through shipped models. We publish. We release open weights. We present at the conferences you attend. If you want to do work that is visible and that ships, this is the environment for it.

What We're Looking For

  • Demonstrated research or engineering contribution in one or more of the areas above.

  • Ability to move from idea to implementation to shipped result.

  • M.S. or Ph.D. in Computer Science, Mathematics, Electrical Engineering, or a related field; or equivalent industry experience.

Stronger candidates will also have:

  • Published research at top-tier venues (NeurIPS, ICML, ICLR, CVPR, ACL, Interspeech, etc.).

  • Experience training or fine-tuning foundation models at scale.

  • Hands-on work with distributed training infrastructure (DeepSpeed, FSDP, Megatron-LM).

  • Open-source contributions (code, data, or models) on GitHub or HuggingFace.

  • Experience deploying models to edge or on-device environments.

What We Offer

  • Full ownership: You own your work from architecture to deployment.

  • Compensation: Competitive base salary with equity in a unicorn-stage company.

  • Health: We pay 100% of medical, dental, and vision premiums for employees and dependents.

  • Financial: 401(k) matching up to 4% of base pay.

  • Time Off: Unlimited PTO plus company-wide Refill Days throughout the year.

  • Visa Sponsorship: We sponsor O-1 and H-1B visas for exceptional talent. If you can't relocate, we'll find a way to work together.