Text-to-LoRA: Hypernetwork that generates task-specific LLM adapters (LoRAs) (github.com)
108 points by dvrp 4 days ago | 10 comments
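
As a rough illustration of the idea in the title: a hypernetwork takes an embedding of a task description and emits the low-rank matrices of a LoRA adapter. A minimal one-layer sketch in PyTorch, with illustrative sizes and names rather than the paper's actual architecture (the paper presumably conditions on layer identity as well and emits adapters for every target layer):

    import torch
    import torch.nn as nn

    class LoRAHyperNet(nn.Module):
        # Maps a task-description embedding to one LoRA pair (A, B)
        # for a single target linear layer. This is just the one-layer
        # core of the idea, not the paper's full model.
        def __init__(self, task_dim, in_f, out_f, r=8):
            super().__init__()
            self.r, self.in_f, self.out_f = r, in_f, out_f
            self.net = nn.Sequential(
                nn.Linear(task_dim, 512),
                nn.ReLU(),
                nn.Linear(512, r * in_f + out_f * r),
            )

        def forward(self, task_emb):
            flat = self.net(task_emb)  # shape: (r*in_f + out_f*r,)
            A = flat[: self.r * self.in_f].view(self.r, self.in_f)
            B = flat[self.r * self.in_f :].view(self.out_f, self.r)
            return A, B  # the adapter's low-rank delta is B @ A
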
phildini 9 hours ago [-]
I got very briefly excited that this might be a new application layer on top of Meshtastic.
smcleod 4 hours ago [-]
Out of interest, why does it depend on, or at least recommend, such an old version of Python (3.10)?
jph00 7 hours ago [-]
The paper link on that site doesn't work -- here's a working link:

https://arxiv.org/abs/2506.06105

watkinss 7 hours ago [-]
Interesting work on generating LoRA adapters. A similar idea applied to VLMs: https://arxiv.org/abs/2412.16777
etaioinshrdlu 4 hours ago [-]
What is such a thing good for?
gdiamos 10 hours ago [-]
An alternative to prefix caching?
npollock 7 hours ago [-]
LoRA adapters modify the model's internal weights
jsight 5 hours ago [-]
Yeah, I honestly think some of the language used around LoRAs gets in the way of people understanding them. They become much easier to understand once you look at an actual implementation, and at how adapters can be merged into the base weights or kept separate.
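For example, here is a minimal sketch of what "kept separate" looks like, assuming PyTorch; the class and parameter names are illustrative, not taken from the repo:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen base layer; the adapter is just the pair (A, B).
        def __init__(self, base: nn.Linear, r=8, alpha=16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # base weights stay frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: starts as a no-op
            self.scale = alpha / r

        def forward(self, x):
            # Base output plus the low-rank update; nothing in self.base changes.
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

The base layer is never touched; the adapter is just the extra (A, B) pair, which is why adapters can be swapped out or shipped separately from the model.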
make3 6 hours ago [-]
Not unless they're explicitly merged; merging is optional, just a small inference-speed optimization.
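Concretely, merging just folds the low-rank product into the base weight matrix (continuing the illustrative LoRALinear sketch above, not any particular library's API):

    # Optional merge: fold the delta into the base weight so inference
    # pays no extra matmul. Functionally identical to the unmerged path.
    with torch.no_grad():
        layer.base.weight += layer.scale * (layer.B @ layer.A)

After the merge the extra matmul disappears, but the output is the same either way.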
vessenes 13 hours ago [-]
Sounds like a good candidate for an MCP tool!