bilekas 14 hours ago

> The way work gets done has changed, and enterprises are starting to feel it in big ways.

Why do they say all of this fluff when everyone knows it’s not exactly true yet? It just makes me cynical of the rest.

When can we say we have enough AI? Even for enterprise? I would guess that for the majority of power users you could stop now and people would be generally okay with it; maybe push further in medical research or other things that are actually important.

For Sam Altman and microslop, though, it seems to be a numbers game: just get everyone in and own everything. It’s not even about AGI anymore, I feel.

WarmWash 13 hours ago | parent | next [-]

For classic engineering it's been a boon. This is in a pretty similar vein to the gains mathematicians have been making with AI.

These models can pretty reliably bang out what were once long mathematical derivations for hypothetical systems in incredibly short periods of time. They also make second- and third-order approximations way easier. What was a first-order approach that would take a day is now a second-order approach taking an hour.
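To make the jargon concrete, one generic reading of those "orders" (a textbook Taylor-style expansion, not tied to any specific system) is successive correction terms:

    f(x + h) ≈ f(x) + f'(x)·h                        (first-order estimate)
    f(x + h) ≈ f(x) + f'(x)·h + (1/2)·f''(x)·h^2     (adds the second-order term)

Each extra term buys accuracy at the cost of more algebra, which is exactly the part the models now grind through quickly.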

And to top it off, they're also pretty damn competent at pointing you in the right direction (if nothing else) when you need to get up to speed on adjacent areas.

I've been doing an electro-optical project recently as an electronics guy, and LLMs have been infinitely useful in helping with the optics portion (on top of the math speed-up on the electronics side).

It's still "trust, but verify" for sure, but damn, it's powerful.

Incipient 12 hours ago | parent [-]

If the AI is pointing you in a direction, how much creativity is lost because the mathematician is no longer doing that themselves?

I genuinely feel AI erodes the ability to come up with approaches in software dev.

gritspants 11 hours ago | parent | next [-]

I probe Claude on occasion when I don't feel like looking up source documentation. A couple of weeks ago, I was interrogating it about a system that happened to use Kafka. Last night, I was asking it to propose an entirely different solution and it kept trying to shoehorn Kafka into it. I asked it why and proposed a simpler alternative; I was absolutely right. Claude was just very eager to demonstrate familiarity with other systems I was working on!

I shudder to think of the things people will wind up shipping by blindly accepting AI guidance.

ayuoo9 12 hours ago | parent | prev [-]

Reminds me of this quote:

     To presume to point a man to the right and ultimate goal — to point with a trembling finger in the RIGHT direction is something only a fool would take upon himself.
     - Hunter S. Thompson
Waterluvian 14 hours ago | parent | prev | next [-]

> when everyone knows it’s not exactly true yet

I think two things:

1. Not everyone knows.

2. As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.

dbshapco 13 hours ago | parent [-]

I'm agnostic, but ...

"And for this cause God shall send them strong delusion, that they should believe a lie: That they all might be damned who believed not the truth, but had pleasure in unrighteousness."

For a more modern take, paraphrasing Hannah Arendt.

“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”

We live in an age where, for many, media is reality, uncritical and unchecked. Press releases are about creating reality, not reporting it; they are about psychological manipulation, not information.

> As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.

This actually happened in reverse with the spread of social media dynamics to politics and major media. Twitter made Trump president, not the other way around.

* No LLMs were harmed in the making of this comment.

throwaw12 13 hours ago | parent | prev | next [-]

I disagree with your sentiment and genuinely think something big is coming. It doesn't need to be perfect now, but it could be good enough to disrupt the SaaS market.

> say all of this fluff when everyone knows it’s not exactly true yet

How do you know it's not exactly true? I am already seeing employees in enterprises become heavily reliant on LLMs instead of using other SaaS vendors.

* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying

* Want to design something -> Lovable -> No need to wait for a designer, no need to get access to Figma; let the designer design and present, and for anything else use Lovable or alternatives

* Want to code -> obviously LLMs -> I sometimes feel like JetBrains is probably in code red at the moment, because I am barely opening it (saying this as a heavy user in the past)

To keep this message shorter, I will share my vision in a reply.

throwaw12 13 hours ago | parent [-]

Let's imagine AI is not there yet and won't get to 100% accuracy, but you still need accountability: you can't run everything on autopilot and hope you will make $10B ARR.

How do you overcome this limitation?

By making a human accountable. Imagine you come to work in the morning and your only task is to "Approve / Request improvement / Reject". You just press three buttons all day long (a rough sketch of this loop follows the examples):

* Customer is requesting pricing for X. Based on the requirements, I found CustomerA had similar requirements and we offered them $100/piece last month. What should I do? Approve / Reject / "Ask for $110"

* Customer (or their agent) is not happy with your $110 proposal. Based on historical data and on X, Y, Z, the minimum we can offer is $104 to keep ARR growing 15% year-over-year. What should I do? Approve / Reject / Your input

....
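Here is a minimal Python sketch of that approve/reject/improve loop. All names (Proposal, human_review, etc.) are hypothetical; no real agent-platform API is implied.

    # Hypothetical human-in-the-loop workflow: the agent proposes, the human decides.
    from dataclasses import dataclass

    @dataclass
    class Proposal:
        summary: str             # what the agent found, e.g. pricing context
        recommended_action: str  # what the agent wants to do

    def human_review(p: Proposal) -> str:
        print(p.summary)
        print("Agent recommends:", p.recommended_action)
        choice = input("[a]pprove / [r]eject / [i]mprove: ").strip().lower()
        if choice == "a":
            return "approved"
        if choice == "r":
            return "rejected"
        # anything else: send guidance back to the agent
        return "needs_improvement: " + input("Your instruction for the agent: ")

    decision = human_review(Proposal(
        summary="Customer requests pricing for X; a similar deal last month closed at $100/piece.",
        recommended_action="Quote $110/piece",
    ))
    print("Logged decision:", decision)

The point of logging the decision is that accountability stays with whoever pressed the button.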

Avicebron 11 hours ago | parent | next [-]

So what, you show up to work, hit your three-button rotation, and one day you just end up in prison because your agent asked you to approve fraud/abuse because the legal ramifications section was outside of its context window? This is asinine.

throwaw12 11 hours ago | parent [-]

nope, you won't.

your agentic platform vendor will be responsible for not showing important things

weikju 4 hours ago | parent | next [-]

> your agentic platform vendor will be responsible for not showing important things

That'll be covered by their ToS/contracts so they won't be liable

sjakzbbz 11 hours ago | parent | prev [-]

Ok, you’re the platform vendor and just enabled fraud. Now what?

torginus 11 hours ago | parent | prev [-]

That's exactly how I play RPGs

baxtr 14 hours ago | parent | prev | next [-]

Is there any reason not to assume that the article was created by an LLM?

bilekas 14 hours ago | parent | next [-]

It sure reads like it. These days, unfortunately, so many things do; there is a real “impersonality” (if that’s the right word) to the whole new communication style.

pixl97 14 hours ago | parent [-]

> a real “impersonality”

I mean that's been a lot of corporate writing for some time.

pegasus 14 hours ago | parent [-]

These two are not unrelated. It's just a step further along the same path.

mcmcmc 14 hours ago | parent | prev | next [-]

Without a byline it’s probably the safer assumption.

imafish 13 hours ago | parent | prev [-]

Well, it is OpenAI - I would be disappointed if it wasn't in some way created by an LLM.

rvnx 13 hours ago | parent [-]

If anything, the latest OpenAI releases suggest they were probably developed using Claude Code.

throwaw12 13 hours ago | parent | prev | next [-]

> for the majority of power users you could stop now and people would be generally okay with it

Why stop, though? Google didn't say AltaVista and Yahoo were good enough for the majority of power users, so let's not create something better.

When you have something good in hand and you see other possibilities, would you say "let's stop, this is enough"?

infecto 13 hours ago | parent | prev | next [-]

To be frank, I don’t think your worldview is directionally accurate. OpenAI is certainly trying to sell something, but with every incremental update to these models, more avenues of value generation are unlocked. For sure it’s not what all the talking heads in the industry hyped it up to be, but there are a lot of interesting ways to use these tools, and it’s not just for generating slop.

pier25 14 hours ago | parent | prev | next [-]

> Why do they say all of this fluff

They're desperate?

mikemarsh 13 hours ago | parent | prev | next [-]

It's a kind of gaslighting, probably first and foremost for themselves before others.

intended 13 hours ago | parent | prev [-]

> When can we say we have enough AI?

I’m good for now.

I am already tired of the disaster that is social media. Hilariously, we’ve gotten to the point that multiple countries are banning social media for under 18s.

The costs of AI slop are going to be paid by everyone, social media will ironically become far less useful, and the degree of fraud we will see will be … well, cyber fraud is already terrifying; what’s the value of infinity added to infinity?

I would say that tech firms are definitely running around setting society on fire at this point.

God, they built all of this on absurd amounts of piracy, and while I am happy to dance on the grave of the MPAA and RIAA, the farming of content from people who have no desire to be harvested is absurd. I believe Wikipedia has already started seeing a drop in traffic, which will lead to a reduction in donations. Smaller sites are going to have an even worse time.