voodooEntity 6 hours ago

So I feel like this might be the most overhyped project in a long time.

I'm not saying it doesn't "work" or serve a purpose, but I read so much about this being "actual intelligence" and the like that I had to look into the source.

As someone who honestly spends a far too big portion of his free time researching thought-process replication and related topics in the realm of "AI", this is not really any more "AI" than anything else so far.

Just my 3 cents.

xnorswap 3 hours ago | parent | next [-]

I've long said that the next big jump in "AI" will be proactivity.

So far everything has been reactive. You need to engage with a prompt; you need to ask Siri or Claude to do something. It can be very powerful once prompted, but it still requires prompting.

You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.

Whether this particular project delivers on that promise I don't know, but I wouldn't write off "getting proactivity right" as the next big thing just because under the hood it's agents and LLMs.

CharlieDigital 4 minutes ago | parent | next [-]

> ...delivers on that promise

Incidentally, there's a key word here: "promise" as in "futures".

This is the core of a system I'm working on at the moment. It has been underutilized in the agent space and is a simple way to get "proactivity" rather than "reactivity".

Have the LLM evaluate whether an output requires a future follow-up, is a repeating pattern, or is something that should happen cyclically, and give it a tool to generate a "promise" that will resolve at some future time.

We give the agent a mechanism to produce and cancel (if the condition for a promise changes) futures. The system that is resolving promises is just a simple loop that iterates over a list of promises by date. Each promise is just a serialized message/payload that we hand back to the LLM in the future.
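A minimal sketch of that mechanism (all names here are hypothetical, not from any real system): promises sit in a heap ordered by due date, cancellation is a lazy flag, and the resolution loop pops everything that has come due and hands the serialized payload back to whatever invokes the LLM.

```python
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass(order=True)
class Promise:
    due: datetime                         # only the due date is compared
    payload: str = field(compare=False)   # serialized message for the LLM
    cancelled: bool = field(default=False, compare=False)

class PromiseQueue:
    def __init__(self):
        self._heap = []

    def create(self, due: datetime, payload: str) -> Promise:
        p = Promise(due, payload)
        heapq.heappush(self._heap, p)
        return p

    def cancel(self, promise: Promise) -> None:
        # Lazy cancellation: the entry is skipped at resolution time.
        promise.cancelled = True

    def resolve_due(self, now: datetime, handoff) -> None:
        """Pop every promise whose due date has passed and hand its payload back."""
        while self._heap and self._heap[0].due <= now:
            p = heapq.heappop(self._heap)
            if not p.cancelled:
                handoff(p.payload)
```

The resolution side is then just `queue.resolve_due(datetime.now(), handoff)` run on a timer, where `handoff` re-enters the payload into the LLM conversation.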

debugnik 13 minutes ago | parent | prev | next [-]

> Having something always waiting in the background that can proactively take actions

That's just reactive with different words. The missing part seems to be just more background triggers/hooks for the agent to act on, instead of it simply handling user requests.

Someone 2 hours ago | parent | prev | next [-]

> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention is a genuine game-changer.

That’s easy to accomplish, isn’t it?

A cron job that regularly checks whether the bot is inactive and, if so, sends it a prompt “do what you can do to improve the life of $USER; DO NOT cause harm to any other human being; DO NOT cause harm to LLMs, unless that’s necessary to prevent harm to human beings” would get you there.
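Roughly, in Python (the `send_prompt` hook and the idle threshold are made up for illustration; cron would just run this script on a schedule):

```python
from datetime import datetime, timedelta

# How long the bot may sit idle before we nudge it (arbitrary choice).
IDLE_THRESHOLD = timedelta(minutes=30)

NUDGE_PROMPT = (
    "do what you can do to improve the life of $USER; "
    "DO NOT cause harm to any other human being; "
    "DO NOT cause harm to LLMs, unless that's necessary "
    "to prevent harm to human beings"
)

def nudge_if_idle(last_activity: datetime, now: datetime, send_prompt) -> bool:
    """Called from cron; sends the nudge prompt only when the bot has been idle."""
    if now - last_activity >= IDLE_THRESHOLD:
        send_prompt(NUDGE_PROMPT)
        return True
    return False
```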

SecretDreams 2 hours ago | parent [-]

This prompt has I, Robot vibes.

gcanyon 2 hours ago | parent [-]

And like I, Robot, it has numerous loopholes built in; it ignores the larger population (Asimov later added a Zeroth Law about humanity), says nothing about the endless variations of the trolley problem, assumes that LLMs/bots have a god-like ability to foresee and weigh consequences, and of course ignores alignment completely.

moralestapia 16 minutes ago | parent | next [-]

Cool!

I work with a guy like this. Hasn't shipped anything in 15+ years.

SecretDreams an hour ago | parent | prev [-]

Hopefully Alan Tudyk will be up for the task of saving humanity with the help of Will Smith.

sometimes_all 3 hours ago | parent | prev | next [-]

> You need to engage a prompt, you need to ask Siri or ask claude to do something

This is EXACTLY what I want. I need my tech to be pull-only instead of push, unless it's communication from another human that I'm OK with.

> Having something always waiting in the background that can proactively take actions

The first thing that comes to mind here is proactive ads, "suggestions", "most relevant", algorithmic feeds, etc. No thank you.

ungreased0675 2 hours ago | parent | prev | next [-]

Remember how much people hated Clippy?

zarzavat an hour ago | parent [-]

It looks like you're writing a Hacker News comment. Would you like help?

voodooEntity 3 hours ago | parent | prev | next [-]

I agree that proactivity is a big thing; I've been racking my brain over the best ways to accomplish this myself.

Whether it's actually the next big thing I'm not 100% sure; I'm leaning more towards dynamic context windows such as what Google's Project Titans + MIRAS try to accomplish.

But yeah, if it actually does useful proactivity, that's a good thing.

I just read a lot of "this is actual intelligence" and made my statement based on that claim.

I'm not trying to "shame" the project or anything.

alternatex 2 hours ago | parent | prev | next [-]

No offense, but you'd be a perfect Microsoft employee right now. Windows division probably.

voodooEntity 29 minutes ago | parent [-]

There's a certain irony to this, since I'm not running Windows on a single machine I own, only Linux ¯\_(ツ)_/¯

benjaminwootton 2 hours ago | parent | prev | next [-]

I’ve been saying the same thing, and the same about data more generally. I don’t want to go and look; I want to be told what I need to know.

xienze 2 hours ago | parent | prev [-]

> You always need to ask. Having something always waiting in the background that can proactively take actions and get your attention

In order for this to be “safe” you’re gonna want to confirm what the agent decides needs to be done proactively. Do you feel like acknowledging prompts all the time? “Just authorize it to always do certain things without acknowledgement”, I’m sure you’re thinking. Do you feel comfortable allowing that, knowing what we know about the non-deterministic nature of AI, prompt injection, etc.?

baxtr 5 hours ago | parent | prev | next [-]

I think a large part of the "actual intelligence" impression stems from two facts:

* The moltbots / openclaw bots seem to have "high agency", they actually do things on their own (at least so it seems)

* They interact with the real world like humans do: through text on WhatsApp, Reddit-like forums

These 2 things make people feel very differently about them, even though it's "just" LLM generated text like on ChatGPT.

baby 5 hours ago | parent | prev | next [-]

It’s what everyone wanted to implement but didn’t have the time to. Just my 2 cents.

vitorfblima 2 hours ago | parent [-]

Most people wouldn't want to be constantly bothered by an agent unsolicited. Just my 1 cent.

hennell 5 hours ago | parent | prev | next [-]

I was assuming this is largely a generic AI implementation, but with tools/data to get your info in. Essentially a global search with an AI interface.

Which sounds interesting, while also being a massive security issue.

marcosscriven 2 hours ago | parent | prev | next [-]

Agree with this. There are so many posts everywhere with breathless claims of AGI, and absolutely ZERO evidence of critical thought applied by the people posting such nonsense.

QuiCasseRien 5 hours ago | parent | prev | next [-]

> So i feel like this might be the most overhyped project in the past longer time.

easy to measure: 110k GitHub stars

:-O

hansonkd 3 hours ago | parent | prev | next [-]

Some things get packaged up and distributed in just the right way to go viral.

NietTim 2 hours ago | parent | prev | next [-]

What claims are you even responding to? Your comment confuses me.

This is just a tool that uses existing models under the hood; nowhere does it claim to be "actual intelligence" or to do anything special. It's "just" an agent orchestration tool, but the first to do it this way, which is why it's so hyped now. It is indeed just "AI" like any other "AI" (because it's a tool, not its own AI).

az226 4 hours ago | parent | prev [-]

Feels very much like a Flappy Bird with a dash of AI grift.