modeless 2 days ago

This seems confusingly phrased. When they say things like "500 Vision Transformers", what they mean is 500 finetunes of the same base model, downloaded from the huggingface accounts of anonymous randos. These spaces are only "universal" to a single pretrained base model AFAICT. Is it really that surprising that finetunes would be extremely similar to each other? Especially LoRAs?

I visited one of the models they reference and huggingface says it has malware in it: https://huggingface.co/lucascruz/CheXpert-ViT-U-MultiClass

tech_ken a day ago | parent | next [-]

This is an important clarification; from the abstract and title I was super confused about how they identified a "subspace" that holds consistently across model structures (I was assuming they meant that they saw stability in the dimension of the weight subspace or something), but if they're just referring to one model class that clears things up substantially. It's definitely also a much weaker result IMO, basically just confirming that the model's loss function has a well-posed minimum, which... duh? I mean I guess I'm glad someone checked that, but calling it "the universal weight subspace hypothesis" seems a bit dramatic.

daemonologist 2 days ago | parent | prev | next [-]

I agree - the results on the finetunes are not very surprising. The trained-from-scratch ResNets (Figure 2 and Section 3.2.1) are definitely more interesting, though somewhat limited in scope.

In any case, my impression is that this is not immediately more useful than a LoRA (and is probably not intended to be), but is maybe an avenue for further research.

augment_me 2 days ago | parent [-]

I don't think it's that surprising, actually. And I think the paper in general completely oversells the idea.

The ResNet results hold from scratch because strict local constraints (e.g., 3x3 convolutions) force the emergence of fundamental signal-processing features (Gabor/Laplacian filters) regardless of the dataset. The architecture itself enforces the subspace.

The Transformer/ViT results rely on fine-tunes because of permutation symmetry. If you trained two ViTs from scratch, "Attention Head 4" in Model A might be functionally identical to "Head 7" in Model B, but mathematically orthogonal.

Because the authors' method (SVD) lacks a neuron-alignment step, scratch-trained ViTs would not look aligned. They had to use pre-trained models to ensure the weights shared a coordinate system. Effectively, I think they proved that CNNs converge because of their architecture, but for Transformers they mostly just confirmed that fine-tuning doesn't drift far from the parent model.
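
To make the permutation point concrete, here's a toy sketch (mine, not code from the paper): permuting the heads of a multi-head attention layer leaves the function unchanged while scrambling the raw weight matrices, so an SVD comparison without an alignment step has no reason to line up scratch-trained Transformers.

  # Numpy-only illustration; all shapes are invented and tiny.
  import numpy as np

  rng = np.random.default_rng(0)
  n_heads, d_head, d_model = 4, 8, 32

  def softmax(x):
      e = np.exp(x - x.max(axis=-1, keepdims=True))
      return e / e.sum(axis=-1, keepdims=True)

  def mha(x, Wq, Wk, Wv, Wo):
      # Wq/Wk/Wv: (n_heads, d_model, d_head); Wo: (n_heads, d_head, d_model)
      out = np.zeros_like(x)
      for h in range(n_heads):
          q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
          attn = softmax(q @ k.T / np.sqrt(d_head))
          out += attn @ v @ Wo[h]
      return out

  Wq = rng.normal(size=(n_heads, d_model, d_head))
  Wk = rng.normal(size=(n_heads, d_model, d_head))
  Wv = rng.normal(size=(n_heads, d_model, d_head))
  Wo = rng.normal(size=(n_heads, d_head, d_model))

  perm = np.array([1, 2, 3, 0])        # cyclic relabelling of the heads
  x = rng.normal(size=(5, d_model))    # a dummy 5-token sequence

  print(np.allclose(mha(x, Wq, Wk, Wv, Wo),
                    mha(x, Wq[perm], Wk[perm], Wv[perm], Wo[perm])))  # True: same function
  print(np.allclose(Wq, Wq[perm]))                                    # False: "different" weights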

mlpro a day ago | parent | next [-]

I think it's very surprising, although I would like the paper to show more experiments (they already have a lot, I know).

The ViT models are never really trained from scratch - they are always finetuned, as they require large amounts of data to converge nicely. The pretraining just provides a nice initialization. Why would one expect two ViTs finetuned on two very different things, image classification and text classification, to end up in the same subspace, as they show? I think this is groundbreaking.

I don't really agree with the "doesn't drift far from the parent model" idea. I think they drift pretty far in terms of their norms. Even the small LoRA adapters drift pretty far from the base model.

rhaps0dy 2 days ago | parent | prev | next [-]

Thank you for saving me a skim

swivelmaster 2 days ago | parent | prev [-]

You’ve explained this in plain and simple language far more directly than the linked study. Score yet another point for the theory that academic papers are deliberately written to be obtuse to laypeople rather than striving for accessibility.

bmacho 2 days ago | parent [-]

Vote for the Party that promises academic grants for people who write 1k-character forum posts for laypeople instead of for other experts in the field.

mapt a day ago | parent | next [-]

We have this already. It's called an abstract. Some do it better than others.

Perhaps we need to revisit the concept and have a narrow abstract and a lay abstract, given how niche science has become.

rocqua 2 days ago | parent | prev | next [-]

I don't think the parent post is complaining that academics are writing proposals (e.g., as opposed to people with common sense). Instead, it seems to me that he is complaining that academics are writing proposals and papers to impress funding committees and journal editors, and to some extent to increase their own clout among their peers, instead of writing to communicate clearly and honestly to their peers, or occasionally to laymen.

And this critique is likely not aimed at academics so much as at the systems and incentives of academia. This is partially on the parties managing grants (caring much more about impact and visibility than about actually moving science forward, which means everyone is scrounging for or lying about low-hanging fruit). It is partially on those who set (or rather maintain) the culture at academic institutions of gathering clout by getting 'impactful' publications. And those who manage journals also share blame, by trying to defend their moat, very much playing up "high impact", and aggressively rent-seeking.

swivelmaster 2 days ago | parent | prev | next [-]

I’m not sure that’s something we get to vote on.

eru a day ago | parent [-]

On the margin, you can let anything influence your voting decision.

swivelmaster a day ago | parent [-]

File under "technically true but not particularly useful"

eru 13 hours ago | parent [-]

Well, it's not like voting is particularly useful in the first place.

shoubidouwah a day ago | parent | prev [-]

and hope for a president that can do both

markisus a day ago | parent | prev | next [-]

Each fine tune drags the model weights away from the base model in a certain direction.

Given 500 fine tune datasets, we could expect the 500 drag directions to span a 500 dimensional space. After all, 500 random vectors in a high-dimensional space are likely to be nearly mutually orthogonal.

The paper shows, however, that the 500 drag directions live in a ~40 dimensional subspace.

Another way to say it is that you can compress fine tune weights into a vector of 40 floats.

Imagine if, one day, fine tunes on huggingface were not measured in gigabytes, megabytes, or even kilobytes. Suppose you started to see listings like 160 bytes. Would that be surprising?

I’m leaving out the detail that the basis direction vectors themselves would have to be on your machine and each basis direction is as big as the model itself. And I’m also taking for granted that the subspace dimension will not increase as the number of fine tune datasets increases.
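
A rough numpy sketch of both claims, with made-up sizes (nothing here comes from the paper's actual checkpoints): random "drag directions" are nearly orthogonal and need almost as many components as there are directions, whereas directions secretly mixed from a shared 40-dimensional basis show up immediately in the SVD spectrum.

  import numpy as np

  rng = np.random.default_rng(0)
  n_finetunes, dim, true_rank = 500, 20_000, 40

  # Case 1: 500 genuinely random directions -> pairwise cosines near zero.
  random_deltas = rng.normal(size=(n_finetunes, dim))
  u, v = random_deltas[0], random_deltas[1]
  print(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))   # ~0

  # Case 2: 500 directions built from the same 40 basis vectors plus small noise.
  basis = rng.normal(size=(true_rank, dim))
  coeffs = rng.normal(size=(n_finetunes, true_rank))
  shared_deltas = coeffs @ basis + 0.01 * rng.normal(size=(n_finetunes, dim))

  for name, deltas in [("random", random_deltas), ("shared subspace", shared_deltas)]:
      s = np.linalg.svd(deltas, compute_uv=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 0.99)) + 1
      print(name, "components for 99% of the energy:", k)   # nearly 500 vs. ~40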

I agree that the authors' decision to use random models on huggingface is unfortunate. I’m hopeful that this paper will inspire follow-up work that trains large models from scratch.

mapontosevenths a day ago | parent [-]

Agreed. What's surprising here to me isn't that the fine tunes are compressible, it's the degree to which they're compressible. It seems like very little useful new information is being added by the fine-tune.

They're using SVD to throw away almost all of the "new information" and apparently getting solid results anyhow. Which of course raises interesting questions if replicable. The code doesn't seem to have been released yet though.
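
In case anyone wants to poke at this before the code is out, here's a hypothetical helper (my own sketch, not the authors' pipeline) for asking "how compressible is this fine-tune": keep the top-k singular directions of a weight delta and report how much of its energy they retain.

  import numpy as np

  def delta_compressibility(w_base, w_finetuned, k):
      delta = w_finetuned - w_base
      s = np.linalg.svd(delta, compute_uv=False)
      retained = float(np.sum(s[:k] ** 2) / np.sum(s ** 2))
      # storing U_k, V_k and k singular values instead of the full delta
      compression = delta.size / (k * (sum(delta.shape) + 1))
      return retained, compression

  # Toy usage: a rank-4 update hiding under a bit of noise is almost fully
  # captured by its top 4 directions.
  rng = np.random.default_rng(0)
  base = rng.normal(size=(512, 512))
  update = rng.normal(size=(512, 4)) @ rng.normal(size=(4, 512))
  finetuned = base + update + 0.01 * rng.normal(size=(512, 512))

  print(delta_compressibility(base, finetuned, k=4))   # roughly (1.0, 64)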

farhanhubble 18 hours ago | parent [-]

Yeah, but it also made me wonder whether, deep down, neural networks are curated random basis vectors, like in random projections.

mlpro 2 days ago | parent | prev | next [-]

Why would they be similar if they are trained on very different data? Also, trained-from-scratch models are analyzed too, afaict.

modeless 2 days ago | parent | next [-]

They are trained on exactly the same data in the same order with the same optimizer because they are literally the same base model. With a little fine tuning added on top.

I see now that they did one experiment with trained-from-scratch models. They trained five ResNet-50s on five disjoint datasets of natural images, most quite small. And IIUC they were able to, without further training, combine them into one "universal" model that can be adapted, using only ~35 adaptation parameters, to get only somewhat worse performance on any one of the five datasets (actually one of them is pretty bad). Which is kind of cool I guess, but I also don't find it that surprising?
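
For what it's worth, my mental model of "adapted using ~35 parameters" is something like the toy torch sketch below. The shapes and names are invented and this is only a guess at the mechanics, not their code: freeze a shared basis of weight directions plus a mean, and train nothing but a small coefficient vector that mixes them.

  import torch

  n_components, n_weights = 35, 100_000            # flattened toy "network"
  basis = torch.randn(n_components, n_weights)     # shared principal directions, frozen
  w_mean = torch.randn(n_weights)                  # shared mean weights, frozen

  coeffs = torch.zeros(n_components, requires_grad=True)   # the only trainable parameters
  opt = torch.optim.SGD([coeffs], lr=0.3)

  # Stand-in objective: pretend the task optimum lies inside the subspace. In the
  # real setting the loss would come from running the target dataset through the
  # network rebuilt from w_mean + coeffs @ basis.
  target = w_mean + torch.randn(n_components) @ basis

  for step in range(100):
      w = w_mean + coeffs @ basis                  # task weights rebuilt from 35 numbers
      loss = ((w - target) ** 2).mean()
      opt.zero_grad()
      loss.backward()
      opt.step()

  print(loss.item())                               # driven to ~0 by tuning only 35 floats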

I don't expect that you'd get the same finding at large scale in LLMs trained from scratch on disjoint and dissimilar data with different optimizers etc. I would find that surprising. But it would be very expensive to do that experiment so I understand why they weren't able to.

mlpro a day ago | parent [-]

They are not trained on the same data. Even a skim of the paper shows very disjoint data.

The LLMs are finetuned on very disjoint data. I checked: some are on Chinese and others are for math. The pretrained model provides a good initialization. I'm convinced.

augment_me 2 days ago | parent | prev | next [-]

The trained-from-scratch models are similar because CNNs are local and impose a strong inductive bias. If you train a CNN for any task of recognizing things, you will find edge-detection filters in the first layers, for example. This can't happen for attention in the same way, because attention is a global association, so the paper failed to find this using SVD and just fine-tuned existing models instead.
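
You can see the inductive-bias point for yourself with a few lines (assumes torchvision is installed and can download pretrained weights; nothing here is from the paper): dump the 64 first-layer filters of an ImageNet ResNet-50 to an image and most of them look like oriented edge detectors and color blobs, i.e. Gabor-ish filters.

  from torchvision.models import resnet50, ResNet50_Weights
  from torchvision.utils import save_image

  model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
  filters = model.conv1.weight.detach().clone()      # shape (64, 3, 7, 7)

  # Rescale each filter to [0, 1] so it renders as a viewable RGB patch.
  fmin = filters.amin(dim=(1, 2, 3), keepdim=True)
  fmax = filters.amax(dim=(1, 2, 3), keepdim=True)
  save_image((filters - fmin) / (fmax - fmin), "resnet50_conv1_filters.png", nrow=8)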

godelski 2 days ago | parent | prev [-]

I think there are two maybe subtle but key concepts you're missing.

  1) "pertaining"
  2) architecture
1) Yes, they're trained on different data but "tune" implies most of the data is identical. So it should be surprising if the models end up significantly different.

2) The architecture and training methods matter. As a simple scenario, to make things a bit easier to understand, let's say we have two models with identical architectures and we'll use identical training methods (e.g. optimizer, learning rate, all that jazz), but learn on different data. Also, so you can even reproduce this on your own, let's train one on MNIST (numbers) and the other on FashionMNIST (clothing).

Do you expect these models to have similar latent spaces? You should! This is because, despite the data being very different visually, there is a ton of implicit information that's shared (this is a big reason we do tuning in the first place!). One of the most obvious things you'll see is subnetworks that do edge detection (there's a famous paper showing this with convolutions, but transformers do this too, just in a slightly different way). The more similar the data, the more similar this will be (order shouldn't matter too much with modern training methods, but it definitely influences things). So if we trained on LAION we should expect it to do really well on ImageNet, because even if there aren't identical images (there are some) there are the same classes (even if the labels are different)[0].
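
If you actually run that MNIST/FashionMNIST experiment, a standard way to put a number on "similar latent spaces" is linear CKA on penultimate-layer features extracted from the same probe images. A minimal sketch (the two training loops are omitted, and the random tensors below just stand in for real activations):

  import torch

  def linear_cka(x, y):
      # x, y: (n_samples, feature_dim) activations from the two models,
      # computed on the *same* probe inputs.
      x = x - x.mean(dim=0, keepdim=True)
      y = y - y.mean(dim=0, keepdim=True)
      return ((x.T @ y).norm() ** 2 / ((x.T @ x).norm() * (y.T @ y).norm())).item()

  # Sanity check: a rotated copy of the same features scores ~1.0, while
  # unrelated random features score much lower (toward 0 as n grows).
  feats = torch.randn(1000, 128)
  rotation, _ = torch.linalg.qr(torch.randn(128, 128))
  print(linear_cka(feats, feats @ rotation))        # ~1.0
  print(linear_cka(feats, torch.randn(1000, 128)))  # ~0.1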

If you think a bit here, you'll realize that some of this will happen even if you change architectures, because some principles are the same. Where the architecture similarity and training similarity really help is that they bias features to be learned at the same rate and in the same place. But this idea is also why you can distill between different architectures, not just by passing the final output but even by using intermediate information.

To help, remember that these models converge. Accuracy jumps a lot in the beginning then slows. For example you might get 70% accuracy in a few epochs but need a few hundred to get to 90% (example numbers). So ask yourself "what's being learned first and why?" A lot will make more sense if you do this.

[0] I have a whole rant about the practice of saying "zero shot" on ImageNet (or COCO) when trained on things like LAION or JFT. It's not zero shot because ImageNet is in distribution! We wouldn't say "we zero shotted the test set" smh

Havoc 2 days ago | parent | prev [-]

Looks like both Mistral and Llama per the text, but yeah, incredibly underwhelming for „universal"