How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?(dunkels.com)
57 points by adunk 8 hours ago | 12 comments
fouc an hour ago | parent | next [-]

think about how much faster it would've been with a small local model!

ValdikSS 4 hours ago | parent | prev | next [-]

That's why LLMs will eventually be used only for the initial interaction with the user in their own language, to prepare the data for a specialized model.

Imagine face recognition working like a text chat, where the PC gets a frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".

FeepingCreature 29 minutes ago | parent | next [-]

That's actually how vision language models already work, pretty much.

stingraycharles 10 minutes ago | parent [-]

Huh? The images are tokenized in the same way language is and it’s just fed into one single model. Not multiple smaller expert models.

The image gets split into smaller pieces (e.g. 4x4 pixel patches) and each of those is assigned a token, similar to how text is broken up into tokens. And the whole thing is fed into a single model.
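
For illustration, a minimal pure-Python sketch of that patchify step, assuming a toy 8x8 grayscale image and 4x4 patches (real vision models typically use larger patches, e.g. 16x16, and then project each patch to an embedding):

```python
def patchify(image, patch_size=4):
    """Split an H x W image (list of rows of pixel values) into
    flattened patch_size x patch_size patches, in row-major order."""
    h = len(image)
    patches = []
    for py in range(0, h, patch_size):
        for px in range(0, len(image[0]), patch_size):
            patch = []
            for y in range(py, py + patch_size):
                patch.extend(image[y][px:px + patch_size])
            patches.append(patch)
    return patches

# a toy 8x8 "image" whose pixel value encodes its position
img = [[y * 8 + x for x in range(8)] for y in range(8)]
patches = patchify(img)  # four patches of 16 values each
```

Each flattened patch would then be mapped to a token embedding and concatenated with the text tokens before entering the single transformer.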

stingraycharles 23 minutes ago | parent | prev [-]

Do you know that MoE is a thing?

jampekka 2 minutes ago | parent [-]

The experts in MoEs aren't specialized in any meaningful task sense. At the level of what we'd think of as tasks, the experts in an MoE are selected essentially arbitrarily, per token.
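
A toy sketch of the per-token top-k routing being described, with made-up logits (in a real MoE layer the router is a learned linear projection of the token's hidden state, and routing happens independently at every layer):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(router_logits, k=2):
    """Top-k gating: pick the k experts with the highest router
    logits for this token and renormalize their weights."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return [(i, probs[i] / total) for i in top]

# hypothetical router logits for one token over 4 experts
chosen = route([0.1, 2.0, -1.0, 1.5])
```

Note that the choice is per token per layer, which is why the experts don't line up with anything like "the face-recognition expert" or "the networking expert".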

westurner 4 hours ago | parent | prev | next [-]

Wouldn't this be faster with an agent skill that has code?

/skill-creator [or /create-skill] Write an agent skill with code script(s) that use an existing user space IP library that works with your agent runtime, to [...]

ComposioHQ/awesome-claude-skills: https://github.com/ComposioHQ/awesome-claude-skills

anthopics/skills//skill-creator/SKILL.md: https://github.com/anthropics/skills/blob/main/skills/skill-...

/.agents/skills/skill-name/SKILL.md, scripts/{script_name.py,__init__.py}

https://agentskills.io/what-are-skills

trollbridge 3 hours ago | parent [-]

Well, yeah, of course it would be.

Even faster would be to just use code in the first place!
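
For instance, the "just use code" version of what the article has Claude doing by hand is a few lines of byte-twiddling. A minimal sketch of turning an ICMP echo request into an echo reply (RFC 792 message layout, RFC 1071 checksum; the raw-socket or tun-device plumbing around it is omitted):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 one's-complement checksum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_reply(request: bytes) -> bytes:
    """Turn an ICMP echo request (type 8) into an echo reply (type 0):
    flip the type, copy id/seq/payload, recompute the checksum."""
    reply = b"\x00" + request[1:2] + b"\x00\x00" + request[4:]
    return reply[:2] + struct.pack("!H", inet_checksum(reply)) + reply[4:]

# build a sample request: type 8, code 0, id 1, seq 1, payload b"ping"
body = b"\x08\x00\x00\x00" + struct.pack("!HH", 1, 1) + b"ping"
request = body[:2] + struct.pack("!H", inet_checksum(body)) + body[4:]
reply = icmp_echo_reply(request)
```

A receiver verifies a packet by checksumming the whole thing, checksum field included; a valid packet sums to zero.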

brcmthrowaway 4 hours ago | parent | prev | next [-]

Next up: Claude replacement to handle simdjson processing.

jeremyjh 2 hours ago | parent | prev [-]

Perhaps one day, all network services will be provided by LLMs natively. Truly, that would be a day in the future.

vrighter an hour ago | parent | next [-]

why? We already have more efficient specialized hardware.

codezero 2 hours ago | parent | prev [-]

I mean, we did decades of JavaScript, so... I mean... anything is possible, right? :)