Train Once, Use Everywhere — Universal-Adopter LoRA (UAL) for Google ADK Multi-Agent Systems #3373
hamehrabi started this conversation in Show and tell
Hi everyone, 👋
I’d like to share a project that I believe could contribute to the next generation of multi-agent systems, particularly for those building with the Google ADK framework.
Universal-Adopter LoRA (UAL) is a portable skill layer that allows you to train a LoRA once and then reuse that same “skill” across heterogeneous models (GPT-2, LLaMA, Qwen, TinyLLaMA, etc.) — without retraining, without original data, and with only a few seconds of adoption time.
The motivation came from building agentic systems where different models operate in different environments — small edge devices, mid-size servers, and large cloud models. Each time I needed domain-specific expertise (for example, in medicine, chemistry, or law), I had to rebuild everything: redesign prompts, add RAG pipelines, or fine-tune new LoRAs. It was costly, repetitive, and didn’t scale well. Moreover, in long conversations, I observed the “vanishing effect” — middle instructions quietly lose influence, making behaviour inconsistent over time.
UAL is designed to solve these challenges by introducing an Architecture-Agnostic Intermediate Representation (AIR): a format that describes adapter roles semantically (for example, `attention_query`, `mlp_up_projection`) rather than relying on model-specific layer names. A lightweight runtime binder connects these roles to any model family, and an SVD-based projection adjusts the tensors so they fit properly during inference.
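To make the role-binding idea concrete, here is a minimal sketch of how an AIR descriptor and a per-family binder might fit together. Every name below (`AIR_SKILL`, `ROLE_BINDINGS`, `bind`) is illustrative, not the actual ual-adapter API:

```python
# Minimal sketch of the AIR idea; names are illustrative, not the ual-adapter API.

AIR_SKILL = {
    "metadata": {"skill": "medical_reasoning", "rank": 8},
    "tensors": {
        # semantic role -> (lora_A, lora_B) low-rank factor files,
        # stored without any model-specific layer names
        "attention_query": ("A_q.npy", "B_q.npy"),
        "mlp_up_projection": ("A_up.npy", "B_up.npy"),
    },
}

# Each model family contributes one binding from semantic roles to its own
# module paths; supporting a new family means adding a small dict, not retraining.
ROLE_BINDINGS = {
    "llama": {
        "attention_query": "model.layers.{i}.self_attn.q_proj",
        "mlp_up_projection": "model.layers.{i}.mlp.up_proj",
    },
    "gpt2": {
        "attention_query": "transformer.h.{i}.attn.c_attn",
        "mlp_up_projection": "transformer.h.{i}.mlp.c_fc",
    },
}

def bind(skill: dict, family: str, layer: int) -> dict:
    """Resolve the skill's semantic roles to concrete module paths for one layer."""
    binding = ROLE_BINDINGS[family]
    return {binding[role].format(i=layer): factors
            for role, factors in skill["tensors"].items()}

print(bind(AIR_SKILL, "llama", 0))
# {'model.layers.0.self_attn.q_proj': ('A_q.npy', 'B_q.npy'),
#  'model.layers.0.mlp.up_proj': ('A_up.npy', 'B_up.npy')}
```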
In practice, the flow is: Train → Export (AIR) → Adopt (Any Model) → Answer
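The Adopt step is where the SVD-based projection earns its keep: the same low-rank factors have to fit layers of different widths across model families. One plausible way to do that (a guess on my part, not UAL's exact procedure) is to rebuild the dense delta, resize it to the target shape, and SVD-truncate back to the original rank:

```python
import torch

def svd_project(lora_A: torch.Tensor, lora_B: torch.Tensor,
                d_in_tgt: int, d_out_tgt: int, rank: int):
    """Re-fit a low-rank update to a target layer's shape.

    Illustrative only: rebuild the dense delta, resize it to the target
    dimensions with a naive interpolation, then SVD-truncate back to `rank`.
    """
    delta = lora_B @ lora_A                      # (d_out_src, d_in_src)
    delta = torch.nn.functional.interpolate(     # naive resize to target shape
        delta[None, None], size=(d_out_tgt, d_in_tgt),
        mode="bilinear", align_corners=False)[0, 0]
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    B_new = U[:, :rank] * S[:rank]               # (d_out_tgt, rank)
    A_new = Vh[:rank]                            # (rank, d_in_tgt)
    return A_new, B_new

# Example: move a rank-8 skill from a 768-dim model onto a 2048-dim one.
A = torch.randn(8, 768)
B = torch.randn(768, 8)
A2, B2 = svd_project(A, B, d_in_tgt=2048, d_out_tgt=2048, rank=8)
print(A2.shape, B2.shape)  # torch.Size([8, 2048]) torch.Size([2048, 8])
```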
This allows true portable expertise: the same “medical reasoning” skill, for instance, can move from an edge device to a cloud model instantly — no retraining, no prompt drift, no added latency. It keeps domain behaviour consistent and durable across models.
The implementation currently includes:
- GitHub: https://github.com/hamehrabi/ual-adapter
- Medium article: *Train Once, Use Everywhere — Make Your AI Agents “Wear” Portable Skills*
This idea also aligns with concepts like Skill.md (Anthropic), but instead of prompt-based instructions that compete with user tokens, UAL embeds expertise directly into portable weight layers. Skills become composable, transferable assets that models can adopt like modules — durable across updates and architectures.
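As a toy illustration of that composability, two adopted skills could stack on a single linear layer by summing their low-rank updates. The class and names here are mine, purely for illustration:

```python
import torch

class ComposedLoRALinear(torch.nn.Module):
    """Illustration only: stack several adopted skills on one linear layer
    by summing their scaled low-rank updates."""

    def __init__(self, base: torch.nn.Linear, skills):
        super().__init__()
        self.base = base
        # each skill is (A, B, scale) with A: (r, d_in), B: (d_out, r)
        self.skills = skills

    def forward(self, x):
        out = self.base(x)
        for A, B, scale in self.skills:
            out = out + scale * (x @ A.T @ B.T)
        return out

base = torch.nn.Linear(768, 768)
medical = (torch.randn(8, 768), torch.randn(768, 8), 0.5)
legal = (torch.randn(8, 768), torch.randn(768, 8), 0.5)
layer = ComposedLoRALinear(base, [medical, legal])
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```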
I’d be glad to discuss how this approach could be integrated with Google ADK’s skill routing or extended into shared skill libraries. Any feedback or collaboration ideas from the community would be greatly appreciated.
Thanks for reading,
Hamed Mehrabi, PhD