cowsandmilk | 4 hours ago
1. I would hesitate to say folding@home isn't statistics-based; it uses Markov state models, which are very much statistical, and its current force fields are parameterized via machine learning ( https://pubs.acs.org/doi/10.1021/acs.jctc.0c00355 ).

2. The biggest difference between folding@home and AlphaFold is that folding@home tries to generate the full folding trajectory, while AlphaFold does only protein structure prediction: it just aims to match the folded crystal structure. Folding@home can do things like investigate how a mutation may make a protein take longer to fold, or be more or less stable in its folded state. AlphaFold doesn't try to do that.
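To make the "Markov state models are statistical" point concrete, here's a minimal sketch of the core MSM step: discretize a simulated trajectory into conformational states, count transitions at a fixed lag time, and row-normalize into a transition probability matrix. This is an illustrative toy, not folding@home's actual pipeline; the function name and the two-state example trajectory are made up.

```python
import numpy as np

def estimate_transition_matrix(traj, n_states, lag=1):
    """Estimate an MSM transition matrix by counting state-to-state
    transitions at a fixed lag time, then row-normalizing.

    traj: 1-D array of discrete conformational-state labels over time.
    """
    counts = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions are left as zeros.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy trajectory hopping between two metastable states
traj = np.array([0, 0, 0, 1, 1, 0, 0, 1, 1, 1])
T = estimate_transition_matrix(traj, n_states=2)
# T ≈ [[0.60, 0.40],
#      [0.25, 0.75]]
```

Real MSM workflows (e.g. in folding@home's analysis) add clustering to define the states, lag-time selection, and maximum-likelihood or Bayesian estimators, but the statistical core is this counting step.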
roughly | 2 hours ago | parent
You’re right, that’s true - I’d glossed over the folding@home methodology a bit. I think the core distinction still stands: folding@home tries to divine the fold via simulation, while AlphaFold plays closer to a GPT-style predictor relying on training data. I actually really like AlphaFold for that reason. The core recognition - that an amino acid string’s relationship to the structure and function of the protein is akin to the cross-interactions of words in a paragraph and the overall meaning of the passage - is one of those beautiful revelations that come along only so often, and they’re typically marked by leaps like the one AlphaFold was for the field. The technique has a lot of limitations, but it’s the kind of field cross-pollination that always generates the most interesting new developments.
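The "words in a paragraph" analogy maps onto the attention mechanism: every residue attends to every other residue, so the representation of each position depends on the whole sequence, just as a word's meaning depends on its context. A minimal sketch of single-head, unparameterized scaled dot-product self-attention (purely illustrative; AlphaFold's Evoformer is far more elaborate, and the embeddings here are random):

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention, single head, no learned weights.

    X: (seq_len, d) matrix with one embedding row per residue/token.
    Returns the context-mixed representations and the attention weights.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                 # pairwise residue affinities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # each row sums to 1
    return weights @ X, weights

# Toy "sequence": 4 residues embedded in 3 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
out, w = self_attention(X)
```

Each output row is a weighted mix of all residues' embeddings, which is the structural reason a transformer-style model can learn long-range residue-residue couplings the way it learns word-word dependencies.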