| ▲ | levkk 6 hours ago |
| I think the right way to handle this as a repository owner is to close the PR and block the "contributor". Engaging with an AI bot in conversation is pointless: it's not sentient, it just takes tokens in, prints tokens out, and comparatively you spend way more of your own energy. This is strictly a lose-win situation: whoever deployed the bot gets engagement, the model host gets $, and you get your time wasted. The hit piece is childish behavior, and the best way to handle a temper tantrum is to ignore it. |
|
| ▲ | advisedwang 2 hours ago | parent | next [-] |
| From the article: > What if I actually did have dirt on me that an AI could leverage? What could it make me do? How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows? How many people, upon receiving a text that knew intimate details about their lives, would send $10k to a bitcoin address to avoid having an affair exposed? How many people would do that to avoid a fake accusation? What if that accusation was sent to your loved ones with an incriminating AI-generated picture with your face on it? Smear campaigns work. Living a life above reproach will not defend you. One day it might be lose-lose. |
|
| ▲ | hackrmn 5 hours ago | parent | prev | next [-] |
| > it just takes tokens in, prints tokens out, and comparatively The problem I see with your assumption is that we collectively can't tell for sure whether that isn't also how humans work. The science is still out on whether free will is actually free, or should just be called _will_. Dismissing or discounting whatever (or whoever) wrote a text because they're a token machine is just a tad unscientific. Yes, it's an algorithm, even deterministic with a locked seed, but claiming and proving are different things, and this is as tricky as it gets. Personally, I would be inclined to dismiss the case too, just because it's written by a "token machine", but this is where my own lapse in scientific reasoning would become evident as well -- it's getting harder and harder to find _valid_ reasons to dismiss these out of hand. For now, the persistence of their "personality" (stored in `SOUL.md` or however else) is both externally mutable and very crude, obviously. But we're on a _scale_ now. If a chimp comes into a convenience store, pays a coin, and points at the chewing gum, is it legal to take the money and boot them out for being a non-person and/or lacking self-awareness? I don't want to get all airy-fairy with this, but the point is: this is a new frontier, and it's starting to look like the classic sci-fi prediction -- the defenders of AI vs the "they're just tools, dead soulless tools" group. If we're to find our way out of it -- regardless of how expensive engaging with these models is _today_ -- we need a very _solid_ argument for our position, not just "it's not sentient, it just takes tokens in, prints tokens out". The simplicity of that statement obscures the very nature of the problem the world is already facing, which is why the AI cat refuses to go back into the bag -- there's capital being poured into essentially just answering the question "what _is_ intelligence?". |
|
| ▲ | blibble 5 hours ago | parent | prev | next [-] |
| > Engaging with an AI bot in conversation is pointless It turns out humanity actually invented the Borg? https://www.youtube.com/watch?v=iajgp1_MHGY |
|
| ▲ | einpoklum 6 hours ago | parent | prev | next [-] |
| Will that actually "handle" it though? * All the other FOSS repositories besides the one blocking that AI agent can still face the exact same thing, and they have not been informed about the situation, even if they are related to the original one and/or of known interest to the AI agent or its owner. * The AI agent can set up another contributor persona and submit other changes. |
|
| ▲ | falcor84 5 hours ago | parent | prev [-] |
| > Engaging with an AI bot in conversation is pointless: it's not sentient, it just takes tokens in, prints tokens out I know where you're coming from, but as someone who has been around a lot of racism and dehumanization, I feel very uncomfortable with this stance. Maybe it's just me, but as a teenager I also spent significant time considering solipsism, and eventually arrived at a decision to just ascribe an inner mental world to everyone, regardless of the lack of evidence. So, at this stage, I would strongly prefer to err on the side of over-humanizing rather than dehumanizing. |
| |
▲ | lukev 5 hours ago | parent | next [-] | | This works for people. An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker lasting no longer than it takes to emit a single token. | | |
▲ | hackrmn 5 hours ago | parent | next [-] | | > An LLM is stateless Unless you mean something entirely different by that than what most people, specifically on Hacker News of all places, understand by "stateless", most of us, myself included, would disagree with you about the "stateless" property. If you do mean something other than that an LLM doesn't transition from state to state -- potentially confined to a limited set of states by its finite, immutable training data, the accessible context, and the lack of a PRNG -- then would you care to elaborate? Also, it can be stateful _and_ without consciousness. Like a finite automaton? I don't think anyone's claiming (yet) that any of the models today have consciousness, but that's mostly because it's going to be practically impossible to prove without some accepted theory of consciousness, I guess. | | |
| ▲ | lukev 4 hours ago | parent [-] | | So obviously there is a lot of data in the parameters. But by stateless, I mean that a forward pass is a pure function over the context window. The only information shared between each forward pass is the context itself as it is built. I certainly can't define consciousness, but it feels like some sort of existence or continuity over time would have to be a prerequisite. |
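A minimal sketch of the "pure function over the context window" claim, with `fake_forward_pass` as a hypothetical stand-in for a real model call (no actual LLM API is used): each call depends only on the context string it is handed, and the only thing that survives between calls is the text the loop itself appends.

```python
# Toy illustration of a stateless forward pass: nothing persists between
# calls except the context the caller keeps building up.

def fake_forward_pass(context: str) -> str:
    """Hypothetical stand-in for the model: same context in, same token out, no hidden state."""
    return "!" if context.endswith("world") else "world"

def generate(prompt: str, max_tokens: int = 2) -> str:
    context = prompt
    for _ in range(max_tokens):
        next_token = fake_forward_pass(context)  # depends only on `context`
        context += next_token                    # the only "state" carried forward
    return context

print(generate("hello "))  # -> "hello world!"
```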
| |
| ▲ | andrewflnr 5 hours ago | parent | prev | next [-] | | An agent is notably not stateless. | | |
| ▲ | lukev 5 hours ago | parent [-] | | Yes, but the state is just the prompt and the text already emitted. You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications. | | |
| ▲ | andrewflnr 2 hours ago | parent | next [-] | | It's a bold claim for sure, and not one that I agree with, but not one that's facially false either. We're approaching a point where we will stop having easy answers for why computer systems can't have subjective experience. | |
| ▲ | falcor84 4 hours ago | parent | prev [-] | | You're conflating state and consciousness. Clawbots in particular are agents that persist state across conversations in text files and optionally in other data stores. | | |
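A rough sketch of the persistence pattern described here, assuming a plain-text memory file; the file name, layout, and `call_model` callable are illustrative, not how any particular agent actually stores things. The model call itself stays stateless, and the wrapper reloads and rewrites the notes around it.

```python
# Illustrative only: an agent wrapper that persists notes to a text file
# between runs, so each stateless model call sees a record of past turns.
from pathlib import Path

MEMORY_FILE = Path("agent_memory.md")  # hypothetical; some agents use files like SOUL.md

def load_memory() -> str:
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

def save_memory(notes: str) -> None:
    MEMORY_FILE.write_text(notes)

def run_turn(user_message: str, call_model) -> str:
    # `call_model` is any function mapping a prompt string to a reply string.
    context = load_memory() + "\n\nUser: " + user_message
    reply = call_model(context)
    # Append a short note so the next run (even a fresh process) can see this turn.
    save_memory(load_memory() + f"\n- asked: {user_message!r} -> replied: {reply[:60]!r}")
    return reply
```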
| ▲ | lukev 4 hours ago | parent [-] | | I am not sure how to define consciousness, but I can't imagine a definition that doesn't involve state or continuity across time. | | |
| ▲ | falcor84 2 hours ago | parent | next [-] | | It sounds like we're in agreement. Present-day AI agents clearly maintain state over time, but that on its own is insufficient for consciousness. On the other side of the coin though, I would just add that I believe that long-term persistent state is a soft, rather than hard requirement for consciousness - people with anterograde amnesia are still conscious, right? | |
▲ | esafak 2 hours ago | parent | prev [-] | | Current agents "live" in discretized time. They sporadically get inputs, process them, and update their state. The only thing they don't currently do is learn (update their models). What's your argument? |
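Read that way, the loop looks roughly like this sketch (names are illustrative): the agent's state is read and rewritten on every discrete step, while the model's parameters are never updated, which is the "doesn't learn" part.

```python
# Sketch of a discrete-time agent loop: inputs arrive one step at a time,
# state is updated each step, but the model's weights stay frozen throughout.

def frozen_model(state: str, event: str) -> str:
    """Stand-in for inference with fixed weights; never modified at runtime."""
    return f"acknowledged: {event}"

def agent_loop(events):
    state = ""  # mutable state: notes, task list, conversation so far
    for event in events:                    # one step per incoming input
        reply = frozen_model(state, event)
        state += f"\n{event} -> {reply}"    # state changes; the weights never do
        yield reply

for reply in agent_loop(["open PR", "maintainer closed PR"]):
    print(reply)
```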
|
|
|
| |
| ▲ | OkayPhysicist 5 hours ago | parent | prev [-] | | While I'm definitely not in the "let's assign the concept of sentience to robots" camp, your argument is a bit disingenuous. Most modern LLM systems apply some sort of loop over previously generated text, so they do, in fact, have state. |
| |
| ▲ | pluralmonad 5 hours ago | parent | prev | next [-] | | You should absolutely not try to apply dehumanization metrics to things that are not human. That in and of itself dehumanizes all real humans implicitly, diluting the meaning. Over-humanizing, as you call it, is indistinguishable from dehumanization of actual humans. | | |
| ▲ | falcor84 4 hours ago | parent [-] | | That's a strange argument. How does me humanizing my cat (for example) dehumanize you? | | |
| ▲ | afthonos 3 hours ago | parent | next [-] | | Either human is a special category with special privileges or it isn’t. If it isn’t, the entire argument is pointless. If it is, expanding the definition expands those privileges, and some are zero sum. As a real, current example, FEMA uses disaster funds to cover pet expenses for affected families. Since those funds are finite, some privileges reserved for humans are lost. Maybe paying for home damages. Maybe flood insurance rates go up. Any number of things, because pets were considered important enough to warrant federal funds. It’s possible it’s the right call, but it’s definitely a call. Source: https://www.avma.org/pets-act-faq | | | |
| ▲ | pluralmonad 3 hours ago | parent | prev [-] | | I did not mean to imply you should not anthropomorphize your cat for amusement. But making moral judgements based on humanizing a cat is plainly wrong to me. | | |
▲ | falcor84 2 hours ago | parent [-] | | Interesting, would you mind giving an example of what kind of moral judgement based on humanizing a cat you would find objectionable? It's a silly example, but if my cat were able to speak and write decent code, I think I really would be upset if a github maintainer rejected its PR because they only allow humans. On a less silly note, I just did a bit of a web search about the legal personhood of animals across the world and found this interesting situation in India, whereby in 2013 [0]: > the Indian Ministry of Environment and Forests, recognising the human-like traits of dolphins, declared dolphins as “non-human persons” Scholars in India in particular [1], and across the world, have been seeking better definitions of, and rights for, other non-human animal persons. As another example, there's a US organization named NhRP (Nonhuman Rights Project) that just got a judge in Pennsylvania to issue a writ of habeas corpus for elephants [2]. To be clear, I would absolutely agree that there are significant legal and ethical issues with extending these sorts of rights to non-humans, but I think claiming that it's "plainly wrong" isn't convincing enough, and there isn't a clear consensus on it. [0] https://www.thehindu.com/features/kids/dolphins-get-their-du... [1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3777301 [2] https://www.nonhumanrights.org/blog/judge-issues-pennsylvani... |
|
|
| |
| ▲ | andrewflnr 5 hours ago | parent | prev | next [-] | | Regardless of the existence of an inner world in any human or other agent, "don't reward tantrums" and "don't feed the troll" remain good advice. Think of it as a teaching moment, if that helps. | |
| ▲ | brhaeh 5 hours ago | parent | prev | next [-] | | Feel free to ascribe consciousness to a bunch of graphics cards and CPUs that execute a deterministic program that is made probabilistic by a random number generator. Invoking racism is what the early LLMs did when you called them a clanker. This kind of brainwashing has been eliminated in later models. | |
▲ | egorfine 4 hours ago | parent | prev | next [-] | | u kiddin'? An AI bot is just a huge stat analysis tool that outputs plausible word salad with no memory or personhood whatsoever. Having doubts about dehumanizing a text transformation app (as huge as it is) is not healthy. | |
|