SauntSolaire a day ago

Yes, that's what it means to be a professional, you take responsibility for the quality of your work.

peppersghost93 a day ago | parent | next [-]

It's a shame the slop generators don't ever have to take responsibility for the trash they've produced.

SauntSolaire a day ago | parent [-]

That's beside the point. While there may be many reasonable critiques of AI, none of them reduce the responsibilities of the scientist.

peppersghost93 a day ago | parent | next [-]

Yeah, this is a prime example of what I'm talking about. AIs produce trash, and it's everyone else's problem to deal with.

SauntSolaire a day ago | parent [-]

Yes, it's the scientist's problem to deal with it - that's the choice they made when they decided to use AI for their work. Again, this is what responsibility means.

peppersghost93 a day ago | parent [-]

This inspires me to make horrible products and shift the blame to the end user for the product being horrible in the first place. I can't take any blame for anything because I didn't force them to use it.

thfuran a day ago | parent | prev [-]

>While there may be many reasonable critiques of AI

But you just said we weren’t supposed to criticize the purveyors of AI or the tools themselves.

SauntSolaire a day ago | parent [-]

No, I merely said that the scientist is the one responsible for the quality of their own work. Any critiques you may have for the tools which they use don't lessen this responsibility.

thfuran a day ago | parent [-]

>No, I merely said that the scientist is the one responsible for the quality of their own work.

No, you expressed unqualified agreement with a comment containing

“And yet, we’re not supposed to criticize the tool or its makers?”

>Any critiques you may have for the tools which they use don't lessen this responsibility.

People don’t exist or act in a vacuum. That a scientist is responsible for the quality of their work doesn’t mean that a spectrometer manufacturer has made no contribution to the problem of bad results when it advertises specs its machines can’t match and, through discounts and/or dubious advertising claims, induces universities to push their labs to replace their existing spectrometers with new ones that exhibit bizarre and unexpected behaviors, including but not limited to sometimes just fabricating spurious readings.

SauntSolaire a day ago | parent [-]

You can criticize the tool or its makers, but not as a means to lessen the responsibility of the professional using it (the rest of the quoted comment). I agree with the GP, it's not a valid excuse for the scientist's poor quality of work.

thfuran a day ago | parent [-]

I just substantially edited the comment you replied to.

SauntSolaire a day ago | parent [-]

The scientist has (at the very least) a basic responsibility to perform due diligence. We can argue back and forth over what constitutes appropriate due diligence, but, with regard to the scientist under discussion, I think we'd be better suited discussing what constitutes negligence.

adestefan a day ago | parent | prev | next [-]

The entire thread is people missing this simple point.

bossyTeacher a day ago | parent | prev [-]

Well, then what does this say about LLM engineers at literally any AI company in existence, if they are delivering AI that is unreliable? Surely, they must take responsibility for the quality of their work and not blame it on something else.

embedding-shape a day ago | parent [-]

I feel like what "unreliable" means depends on how well you understand LLMs. I use them in my professional work, and they're reliable in the sense that I always get tokens back from them; I don't think my local models have failed even once at doing just that. And that is the product being sold.

Some people take that to mean that responses from LLMs are (by human standards) "always correct" and "based on knowledge", but this is a misunderstanding of how LLMs work. They don't know "correct", nor do they have "knowledge"; they have tokens that come after tokens, and that's about it.
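To make the "tokens that come after tokens" point concrete, here is a deliberately toy sketch (my own illustration, not how any real model is built): a hand-written bigram table sampled one token at a time. The generator has no notion of truth or knowledge, only a distribution over what token tends to follow the last one.

```python
import random

# Toy bigram "model": for each token, a probability distribution over
# possible next tokens. Hand-built for illustration only - a real LLM
# learns a vastly larger conditional distribution, but generation is
# still just sampling the next token given the ones so far.
BIGRAMS = {
    "the": {"cat": 0.5, "study": 0.5},
    "cat": {"sat": 1.0},
    "study": {"shows": 1.0},
    "sat": {".": 1.0},
    "shows": {".": 1.0},
}

def generate(start, max_tokens=5, seed=0):
    """Sample a continuation one token at a time from the bigram table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(out[-1])
        if dist is None:  # no known continuation: stop generating
            break
        tokens, weights = zip(*dist.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat ." or "the study shows ."
```

Whether it emits "the cat sat ." or "the study shows ." is just a coin flip on the distribution; neither output is "known" to be true, which is the sense in which correctness isn't part of the mechanism.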

bossyTeacher a day ago | parent | next [-]

> they're reliable in terms of I'm always getting tokens back from them

This is not what you are being sold, though. They are not selling you "tokens". Check their marketing articles and you will not see the word "token", or any synonym, in any of their headings or subheadings. You are being sold these abilities:

- “Generate reports, draft emails, summarize meetings, and complete projects.”

- “Automate repetitive tasks, like converting screenshots or dashboards into presentations … rearranging meetings … updating spreadsheets with new financial data while retaining the same formatting.”

- "Support-type automation: e.g. customer support agents that can summarize incoming messages, detect sentiment, route tickets to the right team."

- "For enterprise workflows: via Gemini Enterprise — allowing firms to connect internal data sources (e.g. CRM, BI, SharePoint, Salesforce, SAP) and build custom AI agents that can: answer complex questions, carry out tasks, iterate deliverables — effectively automating internal processes."

These are taken straight from their websites. The idea that you are JUST being sold tokens is as hilariously fictional as a company claiming it was selling you not an app but merely patterns of pixels on your screen.

amrocha a day ago | parent | prev [-]

it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Lawyers are ruining their careers by citing hallucinated cases. Researchers are writing papers with hallucinated references. Programmers are taking down production by not verifying AI code.

Humans were made to do things, not to verify things. Verifying something is 10x harder than doing it right. AI in the hands of humans is a foot rocket launcher.

embedding-shape a day ago | parent [-]

> it’s not “some people”, it’s practically everyone that doesn’t understand how these tools work, and even some people that do.

Again, that's true of most things. A lot of people are terrible drivers, terrible judges of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

I'd much rather push back on shoddy work no matter the source. I don't care if the citations are from a robot or a human: if they suck, then you suck, because you're presenting this as your work. I don't care if your paralegal actually wrote the document; be responsible for the work you supposedly do.

> Humans were made to do things, not to verify things.

I'm glad you seemingly have some grand idea of what humans were meant to do; I certainly wouldn't claim to, but then I'm also not religious. For me, humans do what humans do, and while we didn't use to mostly sit down and consume so much food and other things, now we do.

amrocha a day ago | parent [-]

>A lot of people are terrible drivers, terrible judge of their own character, and terrible recreational drug users. Does that mean we need to remove all those things that can be misused?

Uhh, yes??? We have completely reshaped our cities so that cars can thrive in them at the expense of people. We have laws and exams and enforcement all to prevent cars from being driven by irresponsible people.

And most drugs are literally illegal! The ones that aren't are highly regulated!

If your argument is that AI is like heroin then I agree, let’s ban it and arrest anyone making it.

pertymcpert 19 hours ago | parent [-]

People need to be responsible for things they put their name on. End of story. No AI company claims their models are perfect and don't hallucinate. But paper authors should at least verify every single character they submit.

bossyTeacher 18 hours ago | parent | next [-]

>No AI company claims their models are perfect and don’t hallucinate

You can't have it both ways. Either AIs are worth billions BECAUSE they can run mostly unsupervised, or they are not. This is exactly like the AI driving system in Autopilot: sold as autonomous, but reality doesn't live up to it.

amrocha 19 hours ago | parent | prev [-]

Yes, but they don’t. So clearly AI is a footgun. What are we doing about it?