risyachka | 3 days ago
Because when a project is done in 10 minutes by an LLM, it will be abandoned in a week. When a person does it intentionally and spends a month or two on it, they are far more likely to support it, because they created the project with some intention in the first place. With LLMs this is not the case.

  otterley | 2 days ago
  Why are you entitled to ongoing support of a free tool? How long are you entitled to such support? What does “support” mean to you, exactly? If the tool works for you already, why do you need support for it?
|
|
greekrich92 | 3 days ago
A bug from slop could cost $10K.

  otterley | 3 days ago
  So could a bug introduced by a human being. What’s the difference?

    hxugufjfjf | 3 days ago
    Accountability is the difference.

      otterley | 3 days ago
      An LLM is just an agent. The principal is held accountable. There’s nothing all that novel here from a liability perspective.

        hxugufjfjf | 3 days ago
        That was my point exactly. I just didn’t write it as precisely as you did.

          otterley | 3 days ago
          Then I don’t understand. My point was that it doesn’t matter whether the machine or the human actually wrote the code; liability for any injury ultimately remains with the human who put the agent to work. Similarly, if a developer at a company wrote code that injured you, and she wrote that code at the direction of the company, you don’t sue the developer, you sue the company.
|
      h33t-l4x0r | 3 days ago
      How exactly do end users hold AWS devs / AWS LLMs accountable?

    greekrich92 | 2 days ago
    The human.

  rolymath | 3 days ago
  How much would a bug from a human cost?

    catlifeonmars | 3 days ago
    I’d be willing to bet the classes of bugs introduced would be different for humans vs. LLMs. You’d probably see fewer low-level bugs (such as off-by-one errors), but more cases where the business logic or other higher-level concerns are incorrect.
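
    To make that distinction concrete, here is a minimal hypothetical sketch in Python. The function names, discount rule, and bugs are invented for illustration, not taken from any real codebase: the first function has a mechanical off-by-one slip, while the second runs cleanly but encodes the wrong business rule.

        # Hypothetical illustrations of the two bug classes.

        def sum_first_n(values, n):
            # Off-by-one bug: range(1, n) starts at index 1, so
            # values[0] is silently dropped. The intended loop is
            # range(n), or simply sum(values[:n]).
            return sum(values[i] for i in range(1, n))

        def order_total(price, quantity, is_member):
            # Business-logic bug: the 10% member discount is applied
            # per unit and then again on the total. The code runs and
            # looks plausible, but the rule it encodes is wrong.
            unit = price * 0.9 if is_member else price
            total = unit * quantity
            return total * 0.9 if is_member else total

    A mechanical review pass (or a test over loop bounds) catches the first; only someone who knows the intended discount policy catches the second.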
|
|