amelius 6 hours ago

What makes these models different from models used for e.g. audio?

Or other low-dimensional time domain signals?

carschno 3 hours ago

You could abstract speech or other audio as a series of sounds, where time is indeed a factor. Speech, however, has patterns more similar to written language than to the seasonal patterns typically assumed in time series. Although trained on different data, the architecture of TimesFM is actually similar to that of LLMs. But not identical, as pointed out at https://research.google/blog/a-decoder-only-foundation-model...:

> Firstly, we need a multilayer perceptron block with residual connections to convert a patch of time-series into a token that can be input to the transformer layers along with positional encodings (PE).

> [...]

> Secondly, at the other end, an output token from the stacked transformer can be used to predict a longer length of subsequent time-points than the input patch length, i.e., the output patch length can be larger than the input patch length.
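The two quoted differences can be sketched in a few lines of numpy. This is a minimal illustration, not TimesFM's actual implementation: all weight matrices, dimensions, and function names here are hypothetical, chosen only to show (1) a residual MLP mapping an input patch to a token and (2) an output head whose patch length exceeds the input patch length.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration; note output_patch_len > input_patch_len.
input_patch_len, d_model, output_patch_len = 32, 64, 128

# Hypothetical random weights standing in for trained parameters.
W_in = rng.normal(size=(input_patch_len, d_model)) * 0.05
W_hid = rng.normal(size=(d_model, d_model)) * 0.05
W_skip = rng.normal(size=(input_patch_len, d_model)) * 0.05  # projection for the residual path
W_out = rng.normal(size=(d_model, output_patch_len)) * 0.05

def make_patches(series, patch_len=input_patch_len):
    # Split a 1-D series into non-overlapping patches, zero-padding the tail.
    pad = (-len(series)) % patch_len
    padded = np.concatenate([series, np.zeros(pad)])
    return padded.reshape(-1, patch_len)

def tokenize(patches):
    # (1) Residual MLP: nonlinear transform plus a linear skip connection,
    # turning each patch into a d_model-sized token for the transformer stack.
    hidden = np.maximum(patches @ W_in, 0.0) @ W_hid  # ReLU MLP
    return hidden + patches @ W_skip                  # residual connection

def predict(tokens):
    # (2) Output head: each output token predicts output_patch_len future
    # points, i.e. more points than the input patch contained.
    return tokens @ W_out

series = np.sin(np.linspace(0.0, 20.0, 100))
patches = make_patches(series)       # shape (4, 32): 100 points -> 4 padded patches
tokens = tokenize(patches)           # shape (4, 64): one token per patch
forecast = predict(tokens[-1:])      # shape (1, 128): 128 points from a 32-point patch
```

The transformer layers and positional encodings between `tokenize` and `predict` are omitted; the point is only the asymmetry the blog post describes, where one decoded token covers a longer horizon than the patch that produced it.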

amelius 3 hours ago

If "seasonal patterns" is the thing that differentiates these two data sources, then perhaps time series models should be called seasonal models?