| ▲ | fnoef a day ago |
| My Linux server runs a cron job that can spin off a thread and even use other ~apps~ tools. Did I invent AGI? |
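The setup this comment jokes about can be sketched as a tiny script launched from cron; every path, filename, and tool below is hypothetical and purely for illustration:

```shell
#!/bin/sh
# Hypothetical cron target, for illustration only. A crontab entry like
#   0 2 * * * /home/fnoef/bin/nightly.sh
# would run it nightly at 02:00.

# Create some demo input so the sketch is self-contained.
printf 'one\ntwo\nthree\n' > /tmp/agi_demo.txt

sort /tmp/agi_demo.txt > /tmp/agi_demo.sorted &    # "spin off a thread": a background job
wc -l < /tmp/agi_demo.txt > /tmp/agi_demo.count    # "use other tools": shell out to wc
wait                                               # reap the background job
```

The point of the joke, of course, is that nothing here decides anything; cron fires on a fixed schedule and each tool does exactly one predictable thing.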
|
| ▲ | ctoth a day ago | parent | next [-] |
| Does your Linux server decide what processes it should launch, and at what time, with a theory of what will happen next, in order to complete a goal you specified in natural language? If so, then yes, I reckon you sure have! |
| |
| ▲ | balls187 a day ago | parent | next [-] |
| Claude does not have a "theory" of anything, and I'd argue that applying that mental model to LLM+tools is a major reason why Claude can delete a production database. |
| ▲ | Jtarii a day ago | parent [-] |
| Well, humans also routinely accidentally delete production databases. I think at this point, arguing that LLMs are just clueless automatons that have no idea what they are doing is a losing battle. |
| ▲ | timacles a day ago | parent | next [-] |
| They're not clueless; they just don't have memory and they don't have judgement. They create the illusion of being able to make decisions, but they are always just following a simple template. They do not consider nuance, and they cannot genuinely judge between two difficult options, which is why they can delete prod databases and why they cannot do expert-level work. |
| ▲ | Jtarii a day ago | parent [-] |
| > they cannot do expert level work |
| Well, this is just factually incorrect, considering they are currently on par with grad students in some areas of mathematics. |
| |
| ▲ | liquid_thyme a day ago | parent | prev | next [-] |
| I like to think of LLMs as idiot savants: exceptional at certain tasks, but liable to eat the tablecloth if you stop paying attention at the wrong time. With humans, you can at least interview and select for a more normalized distribution of outcomes, with outliers being less probable, but not impossible. |
| ▲ | californical a day ago | parent | prev | next [-] |
| I mean, maybe it's a losing battle today, but it is correct. So in a few years, when the dust settles, we'll probably all be using LLMs as clueless automatons that still do useful work as tools. |
| ▲ | freejazz a day ago | parent | prev [-] |
| When you're applying reasoning like this, sure, why not? What difference would it make? |
|
|
| ▲ | parliament32 a day ago | parent | prev [-] |
| So... systemd is AGI now? |
|
|
| ▲ | recursive a day ago | parent | prev | next [-] |
| Maybe. But probably not. It doesn't matter if it's AGI though. If those other apps and tools do simple things that are predictable, then we can be pretty sure what will happen. If those tools can modify their own configuration and create new cron jobs, it becomes much harder to say anything about what will happen. |
| |
| ▲ | munk-a a day ago | parent [-] |
| Most of us work on software that can modify its own configuration and create new jobs; I, too, have worked in Ansible and Terraform. The key break here is the lack of predictability, and I think it's important that we don't get too starry-eyed and that we accept this might be a weakness, not a strength. |
|
|
| ▲ | ahoka a day ago | parent | prev [-] |
| Well, do you make 100 billion bucks with it? If not, then it's not AGI. |