cookiengineer 5 hours ago

Before 2023 I thought the way Star Trek portrayed humans fiddling with tech and not understanding any side effects was fiction.

After 2023 I realized that's exactly how it's going to turn out.

I just wish those self-proclaimed AI engineers would go the extra mile and reimplement older models like RNNs, LSTMs, GRUs, and DNCs, and then move on to Transformers (or the "Attention Is All You Need" paper). That way they would understand much better what the limitations of the encoding tricks are, and why those side effects keep appearing.
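The core of that exercise can be surprisingly small. As a minimal sketch (in NumPy; the shapes and variable names here are illustrative assumptions, not anything from the thread), scaled dot-product attention, the building block of the Transformer, is just a softmax over query-key similarities used to mix the values:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_queries, n_keys) similarity matrix
    weights = softmax(scores, axis=-1)   # each row is a distribution over keys
    return weights @ V                   # weighted mix of value vectors

# Toy example: 4 query positions attending over 6 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(6, 8))
V = rng.normal(size=(6, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```

Reimplementing this by hand makes the parent's point concrete: the output is always a convex combination of the values, which is exactly the kind of structural limitation that's invisible if you only ever call a library.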

But yeah, here we are, humans vibing with tech they don't understand.

dijksterhuis 5 hours ago | parent | next [-]

curiosity (will probably) kill humanity

although whether humanity dies before the cat is an open question

hacker_homie 5 hours ago | parent | prev [-]

Is this new, though? I don't know how to make a drill, but I use one. I don't know how to make a car, but I drive one.

The issue I see is the personification: some people give vehicles names, and that's kinda okay because they usually don't talk back.

I think, like with every technological leap, people will learn to deal with LLMs. We already have words like "hallucination", which is really the non-personified version of lying. The next few years are going to be wild for sure.

cowl 11 minutes ago | parent | next [-]

Not the same thing. To use your tool analogy, the AI companies are saying: here is a fantastic angle grinder, you can do everything with it, even cut your bread. Technically yes, but it's not the best or safest tool to give the average Joe for cutting his bread.

cookiengineer 21 minutes ago | parent | prev | next [-]

I think the general problem I have with LLMs, even though I use them for grunt work, is that people who overuse the technology try to absolve themselves of responsibility. They tend to say, "I dunno, the AI generated it."

Would you do that with a drill, too?

"I dunno, the drill told me to screw the wrong way round" sounds pretty stupid, yet for AI/LLM or more intelligent tools it suddenly is okay?

And this absolution of humans from responsibility for their actions is exactly why AI should not be used in war. If there are no consequences for killing, then you are effectively legalizing killing outside the rule of law.

le-mark 5 hours ago | parent | prev [-]

Do you not see your own contradiction? Cars and drills don't kill people on their own; self-driving cars can! Normal cars can too, if they're operated unsafely by a human. These kinds of uncritical comments really highlight the level of euphoria in this moment.

hacker_homie 2 hours ago | parent [-]

https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in...