roughly 9 hours ago:
As I understand it, Folding@home was a physics-based simulation solver, whereas AlphaFold and its progeny (including this) are statistical methods. The statistical methods are much, much cheaper computationally, but they rely on existing protein folds and can't generate strong predictions for proteins without some similarity to proteins in their training set. In other words, it's a different approach that trades versatility for speed. But that trade-off is significant enough to make it viable to generate folds for essentially any protein you're interested in: it moves folding from something almost computationally infeasible for most projects to something you can just do for any protein as part of a normal workflow.
cowsandmilk, 4 hours ago:
1. I would hesitate to say Folding@home isn't statistics-based; its analysis uses Markov state models, which are very much statistical. And its current force fields are parameterized via machine learning ( https://pubs.acs.org/doi/10.1021/acs.jctc.0c00355 ).

2. The biggest difference between Folding@home and AlphaFold is that Folding@home tries to generate the full folding trajectory, while AlphaFold is just protein structure prediction, only looking to match the folded crystal structure. Folding@home can do things like examine how a mutation may make a protein take longer to fold, or be more or less stable in its folded state. AlphaFold doesn't try to do that.
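To illustrate the statistical nature of Markov state models mentioned above, here is a minimal toy sketch (not Folding@home's actual pipeline, and the three coarse-grained states and transition counts are invented for illustration): short simulation trajectories are reduced to transition counts between conformational states, and the equilibrium populations fall out of the transition matrix.

```python
import numpy as np

# Hypothetical toy MSM: three coarse-grained conformational states.
# Rows are "from" states, columns are "to" states; counts would come
# from many short simulation trajectories in a real MSM workflow.
counts = np.array([
    [90.0,  8.0,  2.0],  # unfolded -> (unfolded, intermediate, folded)
    [10.0, 70.0, 20.0],  # intermediate
    [ 1.0,  9.0, 90.0],  # folded
])

# Row-normalize counts into a transition probability matrix T.
T = counts / counts.sum(axis=1, keepdims=True)

# The stationary (equilibrium) distribution is the left eigenvector
# of T with eigenvalue 1, normalized to sum to one.
eigvals, eigvecs = np.linalg.eig(T.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print("equilibrium populations:", pi)
```

The point is that everything here is estimated from sampled trajectories, which is why calling the approach statistical is fair, even though the underlying trajectories come from physics-based force fields.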
| ||||||||