tropicalfruit 15 hours ago

reminds me of crypto a bit. most people i know are apathetic or dismissive.

when i see normies use it - it's to make selfies with celebrities.

in 5-10 years AI will be everywhere. a massive inequality creator.

the divide: those who know how to use it and can afford the best tools, and everyone else.

the biggest danger is dependency on AI. i really see people becoming dumber and dumber as they outsource more basic cognitive functions and decisions to AI.

and business will use it like any other tool. to strengthen their monopolies and extract more and more value out of less and less resources.

galaxyLogic 14 hours ago | parent [-]

> in 5-10 years AI will be everywhere. a massive inequality creator.

That is possible, even likely. But AI can also decrease inequality. I'm thinking of how rich people and companies spend millions, if not hundreds of millions, on legal fees that keep them out of prison. But me, I can't afford a lawyer. Heck, I can't even afford a doctor. I can't afford Stanford, Yale, or Harvard.

But now I can ask an AI for legal advice, which levels that playing field. Everybody who has a computer or smartphone and internet access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI to handle all the paperwork, basic business decisions, and most recurring business tasks. At least that's the direction I believe we are going.

The "moat" in front of AI is neither wide nor deep, because AI by its very nature is designed to be easy to use. Just talk to it.

There is also lots of competition in AI, which should keep prices low.

The root cause of inequality is corruption. AI could help reveal that and advise people on how to fight it, making the world a better, more equal place.

mns 13 hours ago | parent | next [-]

> But now I can ask legal advice from AI, which levels that playing field. Everybody who has a computer or smartphone and internet-access can consult an AI lawyer or doctor. AI can be my Harvard. I can start a business and basically rely on AI for handling all the paperwork and basic business decisions, and also most recurring business tasks. At least that's the direction we are going I believe.

We had a discussion in a group chat with some friends about some random sports topic, and one of my friends used ChatGPT to look up a fact about it. The answer was completely wrong, but it sounded so real. All you had to do was go on Wikipedia, or on the website of the sports organization we were discussing, to see the real fact. Now, considering that it hallucinated facts that are sitting right there on Wikipedia and on the organization's own website, what are the chances that the legal advice you get will be real and not some random hallucination?

aflag 14 hours ago | parent | prev | next [-]

The flaw in that idea is that the big law firms will also have access to AI, and they will have better prompts.

forgotoldacc 12 hours ago | parent | prev | next [-]

Judges have already noticed AI usage in court filings, and they aren't fond of it.

AI is just a really good bullshitter. Sometimes you want a bullshitter, and sometimes you need to be a bullshitter. But when your wealth is at risk in a lawsuit, or you're risking going to prison, you want something rock solid to back your case, and endless mounds of bullshit around you is not what you want. Bullshit is something you only pull out when you're definitely guilty and need to fight against all the facts, and even better than bullshit in those cases is finding cases similar to yours, or obscure laws that can serve as a loophole. And AI, instead of pulling out real cases, will bullshit against you with fake ones.

For things like code, where large parts of some areas are based on general feel and vibes, yeah, it's fine. It's good for general front-end development. But I wouldn't trust it for anything requiring accuracy, like scientific applications or OS-level code.

lazide 14 hours ago | parent | prev [-]

And when that legal advice is dangerously wrong?

At least lawyers can lose their bar license.