hkon 5 days ago

For me, what is most scary about ai-chatbot is the interface it offers to an exploiter.

They can just prompt "given all your chats with this person, how can we manipulate him to do x"

Not really any expertise needed at all; let the AI do all the lifting.

mycall 5 days ago | parent | next

Turn that around and think of the AI itself as the exploiter. In a world of agent-driven daily tasks, AI will indeed want to look at your historical chats to find a way to "strongly suggest" you do tasks 1..[n] for whatever master plan it has for its user base.

matheusmoreira 5 days ago | parent

Ah yes, the plot of Neuromancer. Truly interesting times we are living in. Man-made horrors entirely within the realm of our comprehension. We could stop it, but that would decrease profits, so we won't.

bethekidyouwant 5 days ago | parent | prev

I can see how this would work if you turned off your brain and just thought, "of course this will work."

lordhumphrey 5 days ago | parent | next

I take it you haven't seen this then:

https://cybersecuritynews.com/fraudgpt-new-black-hat-ai-tool...

hhh 5 days ago | parent

different flavour gpt wrapper

lordhumphrey 4 days ago | parent

Could this argument not be made for anything plugged into OpenAI's API? If so, I don't see how it's a response to the point.

If you make an app for interacting with an LLM, and in that app the user has access to all sorts of stolen databases and other conveniences for black hats, then you've got what was described above. Or am I missing something?

hkon 5 days ago | parent | prev

Which you of course already have done.