jwuphysics 4 days ago:
Incredible, well-documented work -- this is an amazing effort! Two things that caught my eye were (i) your loss curves and (ii) the assessment of dead latents. Our team also studied SAEs -- trained to reconstruct dense embeddings of paper abstracts rather than individual tokens [1]. We observed a power-law scaling of the lower bound of the loss curves, even as we varied the sparsity level and the dimensionality of the SAE latent space. We were also able to completely mitigate dead latents with an auxiliary loss, and we saw smooth sinusoidal patterns throughout training iterations. I'm not sure whether these were artifacts of our specific application (paper-abstract embeddings) or whether they represent more general phenomena.
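(For readers unfamiliar with the trick: auxiliary losses of this flavor, such as the AuxK term in OpenAI's SAE scaling work, route the leftover reconstruction residual through the currently dead latents so they keep receiving gradient. A minimal numpy sketch follows; the shapes, the dead-latent stand-in, and the weight `alpha` are all illustrative, not the commenters' actual setup.)

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_latents, k, k_aux = 16, 64, 4, 8

# Toy SAE weights (tied shapes only; real SAEs learn these).
W_enc = rng.normal(size=(d, n_latents)) / np.sqrt(d)
W_dec = rng.normal(size=(n_latents, d)) / np.sqrt(n_latents)

x = rng.normal(size=(d,))
pre = x @ W_enc  # latent pre-activations

# Main loss: reconstruct x from the top-k active latents.
top = np.argsort(pre)[-k:]
z = np.zeros(n_latents)
z[top] = np.maximum(pre[top], 0.0)
residual = x - z @ W_dec
main_loss = float(residual @ residual)

# Auxiliary loss: let the "dead" latents reconstruct the residual.
# Here "dead" is a stand-in mask; in practice you track latents that
# have not fired for many training steps.
dead_mask = np.ones(n_latents, dtype=bool)
dead_mask[top] = False
dead_idx = np.where(dead_mask)[0]
aux_top = dead_idx[np.argsort(pre[dead_idx])[-k_aux:]]
z_aux = np.zeros(n_latents)
z_aux[aux_top] = np.maximum(pre[aux_top], 0.0)
aux_residual = residual - z_aux @ W_dec
aux_loss = float(aux_residual @ aux_residual)

alpha = 1 / 32  # small weight: nudge dead latents without dominating
total_loss = main_loss + alpha * aux_loss
```

Because only dead latents receive the auxiliary gradient, they get pulled toward directions that explain variance the live dictionary misses, which is why the term can revive them without distorting the main reconstruction.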
PaulPauls 4 days ago:
I'm very happy you appreciate it -- particularly the documentation. Writing the documentation was much harder for me than writing the code, so I'm glad it's appreciated. I've also downloaded your paper and will read through it tomorrow morning -- thank you for sharing it!