| ▲ | solid_fuel 7 hours ago |
| LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop. After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
- advertising
- astroturfing
- other forms of botting
- scamming old people out of their money |
|
| ▲ | simianwords 7 minutes ago | parent | next [-] |
| Extremely exaggerated comment. LLMs don't hallucinate that much.
That doesn't rule them out of every control loop. I mean, I think you have not put much thought into your theory. |
|
| ▲ | echelon 7 hours ago | parent | prev | next [-] |
| It's easily doubled my productivity as an engineer. As a filmmaker, my friends and I are getting more and more done as well:
https://www.youtube.com/watch?v=tAAiiKteM-U
https://www.youtube.com/watch?v=oqoCWdOwr2U
As long as humans are driving, I see AI as an exoskeleton for productivity: https://github.com/storytold/artcraft (this is what I'm making)
It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.
I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv. Apart from all the other madness in the world, this is the one thing that has been a dream come true.
As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.
There's financial capital and there's labor capital. AI is a force multiplier for labor capital. |
| |
| ▲ | navigate8310 7 hours ago | parent | next [-] | | > I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv. While I certainly respect your excitement and the force-multiplier nature of AI, this doesn't mean you should try to emulate an already existing piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of doing frontier work that you can truly call your own. | |
| ▲ | blks 7 hours ago | parent | prev | next [-] | | So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chat bot do it for you? Or what part of that is generated by a chat bot? Claims of productivity boosts must always be inspected very carefully, as they are often only perceived, and the reality may be the opposite (e.g. spending more time wrestling with the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments. | | |
| ▲ | echelon 7 hours ago | parent [-] | | > So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chat bot do it for you? Or what part of that is generated by a chat bot? There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay. I use a lot of 3D blocking with autoregressive editing models to essentially control scene composition, pose, blocking, camera focal length, etc. Here's a really old example of what that looks like (the models are a lot better at this now): https://www.youtube.com/watch?v=QYVgNNJP6Vc There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur. | | |
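For concreteness, a minimal sketch of the 3D-blocking idea described above, assuming the Hugging Face diffusers library, a Stable Diffusion 1.5 checkpoint, and a depth ControlNet; the model names, file names, and prompt are illustrative placeholders rather than anything the comment specifies:

    # Rough idea: a grey-box render from a 3D scene fixes composition, pose and
    # camera; a matching depth pass conditions a ControlNet so the diffusion
    # model restyles the frame without losing that geometry.
    # Checkpoints and file names below are assumptions, not from the comment.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    blockout = load_image("blocking_render.png")  # grey-box render exported from Blender
    depth = load_image("blocking_depth.png")      # depth pass from the same scene

    frame = pipe(
        prompt="moody noir interior, two figures at a window, 35mm, film grain",
        image=blockout,                      # img2img source keeps the rough composition
        control_image=depth,                 # depth conditioning locks geometry and camera
        strength=0.75,                       # how far the model may drift from the blockout
        controlnet_conditioning_scale=0.8,
        num_inference_steps=30,
    ).images[0]
    frame.save("shot_001.png")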
| ▲ | heliumtera 4 hours ago | parent [-] | | there are probably more tools for achieving this level of productivity than there are real humans interested in consuming this goyslop |
|
| |
| ▲ | jacquesm 7 hours ago | parent | prev | next [-] | | As a rule, real creativity blossoms under constraints, not under abundance. | | |
| ▲ | echelon 5 hours ago | parent [-] | | Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint. But to be more in the spirit of your comment: if you've used these systems at all, you know how many constraints you bump into on an almost minute-to-minute basis. These are not magical systems, and they have plenty of flaws. Real creativity is connecting these weird, novel things together into something nobody's ever seen before, and working in new ways that are unproven and completely novel. |
| |
| ▲ | gllmariuty 7 hours ago | parent | prev | next [-] | | > AI is a force multiplier for labor capital For a 2011 account, that's a shockingly naive take. Yes, AI is a labor capital multiplier - and the multiplicand is zero. Hint: soon you'll be competing not with humans without AI, but with AIs using AIs. | | |
| ▲ | Terr_ 6 hours ago | parent [-] | | Even if it's >1, it doesn't follow that it's good news for the "labor capitalist". "OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!" |
| |
| ▲ | queenkjuul 6 hours ago | parent | prev | next [-] | | Genuine question: does the agent work for you if you didn't build it, train it, or host it? It's ostensibly doing the things you ask of it, but on terms dictated by its owner. | | |
| ▲ | blibble 6 hours ago | parent [-] | | indeed, and it's even worse than that: you're literally training your replacement by using it, since it re-transmits what you're accepting/discarding - and you're even paying them to replace you |
| |
| ▲ | heliumtera 4 hours ago | parent | prev [-] | | always good to be in the pick and shovel biz |
|
|
| ▲ | ajross 7 hours ago | parent | prev [-] |
| > [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop. True, but no more true than it is if you replace the antecedent with "people". Saying that the tools make mistakes is correct. Saying that they can never be trained and deployed, the way people are, such that the mistakes are tolerable is an awfully strong claim. History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level, AI seems pretty typical in that sense. It's notable only because it's... kinda creepy, I guess. |
| |
| ▲ | solid_fuel 7 hours ago | parent | next [-] | | > True, but no more true than it is if you replace the antecedent with "people". Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example. Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0] [0] https://arxiv.org/abs/2401.11817 | | |
| ▲ | TheOtherHobbes 7 hours ago | parent | next [-] | | The suggestion that hallucinations are avoidable in humans is quite a bold claim. | |
| ▲ | CamperBob2 7 hours ago | parent | prev [-] | | What you (and the authors) call "hallucination," other people call "imagination." Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies. | | |
| ▲ | blibble 6 hours ago | parent [-] | | what I call it is "buggy garbage". it's not a person, it doesn't hallucinate or have imagination. it's simply unreliable software, riddled with bugs | |
|
| |
| ▲ | fao_ 7 hours ago | parent | prev | next [-] | | > Saying that they can never be trained and deployed, the way people are, such that the mistakes are tolerable is an awfully strong claim. It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies from companies that have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM. | | |
| ▲ | ajross 7 hours ago | parent [-] | | > We have numerous studies on why hallucinations are central to the architecture, And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point? Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks. | | |
| ▲ | TheOtherHobbes 6 hours ago | parent [-] | | It's a fine line. Humans don't always fuck shit up. But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so. The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight. |
|
| |
| ▲ | 7 hours ago | parent | prev [-] | | [deleted] |
|