datsci_est_2015 2 hours ago

This varies wildly from domain to domain. Highly structured data (time series, photos, audio, etc.) typically has a metric boatload of feature-extraction methodology. Neural networks often draw on and exploit that structure (e.g., convolutions). You could even get pretty good results by handing manually extracted, neural-network-esque features off to a random forest. This heuristic begins to fall off with deep learning though, which, imo, is a precursor to LLMs and showed that emergent complexity is possible with machine learning.
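A minimal sketch of what "manually extracted, neural-network-esque features" might look like, using only NumPy: fixed edge-detection kernels (Sobel filters, which are assumptions here, not anything from the comment), a ReLU-style nonlinearity, and max pooling — i.e., a conv layer whose weights a human chose rather than learned. The resulting feature vector is what you would hand off to a random forest.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D convolution (cross-correlation, as in most NN libraries)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(img):
    """Hand-crafted conv-style features: fixed edge filters + ReLU + 2x2 max pooling."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    feats = []
    for k in (sobel_x, sobel_y):
        fmap = np.maximum(conv2d_valid(img, k), 0.0)  # ReLU-style nonlinearity
        h, w = fmap.shape
        h2, w2 = h - h % 2, w - w % 2                 # crop to even dims for pooling
        pooled = fmap[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)

img = np.random.default_rng(0).random((8, 8))
features = extract_features(img)
# 8x8 input -> 6x6 conv maps -> 3x3 pooled maps, two filters -> 18 features
print(features.shape)  # (18,)
```

In practice you'd feed `features` into something like scikit-learn's `RandomForestClassifier`; the point is just that every "weight" above was picked by a human, not learned.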

But unstructured data? Pretty pointless to hand off to a neural network, imo.

srean an hour ago | parent

Ah! Your comment helped me understand the parent comment so much more. I thought it was more about data hygiene needs.

Yes, a DT on raw pixel values, or a DT on raw time-series values, will in general be quite terrible.

That said, the convolutional structure is hard-coded in those neural nets; only the weights are learned. It is not that the network discovered on its own that convolutions are a good idea. So NNs too rely on human insight and structure, upon which they can then build.