jtsylve 7 days ago

The answer may be in your question.

- This is currently solved by inference pipelines.
- Models and techniques improve over time.

What makes the pipeline work is that different agents with different specialties can add content while taking advantage of the context that's already there.

Storing that content in the format could let us keep refining the information we get from the image over time. Each tool that touches the image can add new context or improve existing context, and the image becomes more and more useful over time.
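As a sketch of what that accumulation could look like: each tool appends a provenance-tagged entry, and readers pick the best one. The schema, field names, and tool names here are hypothetical, invented for the example, and not part of any existing format.

```python
def add_context(metadata, tool, key, value, confidence):
    """Append a context entry with provenance, so later (better) tools
    can supersede earlier ones instead of overwriting them."""
    entries = metadata.setdefault("context", [])
    entries.append({"tool": tool, "key": key,
                    "value": value, "confidence": confidence})
    return metadata

def best_context(metadata, key):
    """Return the highest-confidence value recorded for a key,
    or None if no tool has produced one yet."""
    candidates = [e for e in metadata.get("context", []) if e["key"] == key]
    if not candidates:
        return None
    return max(candidates, key=lambda e: e["confidence"])["value"]

# A first-pass OCR model writes a rough transcription; a later,
# better model refines it. Both records survive in the metadata.
m = add_context({}, "ocr-v1", "alt_text", "hello wrld", 0.6)
m = add_context(m, "ocr-v2", "alt_text", "hello world", 0.9)
```

Keeping every entry (rather than overwriting) is what lets the image "improve over time": a consumer can always take the current best answer, while the history of earlier passes stays available.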

I like the idea.

kuberwastaken 7 days ago | parent | next [-]

Said it better than I could have

Also, the idea is to integrate the conversion processes/pipelines with other data that'll help with customized workflows.

ai_critic 7 days ago | parent | prev [-]

> Each tool that touches the image can add new context or improve existing context and the image becomes more and more useful over time.

This is literally the problem solved by chunk-based file formats. "How do we use multiple authoring tools without stepping on each other" is a very old and solved problem.
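The chunk model being referred to can be illustrated with PNG, whose ancillary tEXt chunks let independent tools attach metadata without touching each other's data or the pixel data. This is a minimal sketch, not production code; the `ocr:text` and `caption` keywords are made up for the example.

```python
import struct
import zlib

def chunk(ctype, data):
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data +
            struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png():
    """Build a valid 1x1 red-pixel PNG entirely in memory."""
    sig = b"\x89PNG\r\n\x1a\n"
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    raw = b"\x00\xff\x00\x00"  # filter byte + one red pixel
    return (sig + chunk(b"IHDR", ihdr) +
            chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

def add_text_chunk(png, keyword, text):
    """A 'tool' adds its own ancillary tEXt chunk just before IEND,
    leaving every existing chunk untouched."""
    iend = png.rindex(chunk(b"IEND", b""))
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return png[:iend] + chunk(b"tEXt", payload) + png[iend:]

def read_text_chunks(png):
    """Walk the chunk stream and collect all tEXt keyword/value pairs."""
    out, pos = {}, 8  # skip the 8-byte signature
    while pos < len(png):
        length = struct.unpack(">I", png[pos:pos + 4])[0]
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            k, _, v = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[k.decode("latin-1")] = v.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return out

# Two independent "authoring tools" each append their own chunk.
png = minimal_png()
png = add_text_chunk(png, "ocr:text", "hello")
png = add_text_chunk(png, "caption", "a red pixel")
```

Because each tool only appends its own chunk and unknown ancillary chunks are ignored by readers that don't understand them, tools don't step on each other, which is the "old and solved" property being pointed at.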