shevy-java 5 hours ago

The title is a bit odd.

For instance:

"The affordances of AI systems have the effect of eroding expertise"

So, some expertise will be gone, that is true. At the same time, I am not sure this is solely AI's fault. If a lawyer wants 500€ per half hour of advice, whereas some AI tool is almost zero-cost, then even if the advice is only 80% of the quality of a good lawyer's, there is no contest: AI wins, even though it may arguably be worse.

If it were up to me, AI would be gone, but to insinuate that it is solely AI's fault that "institutions" are gone makes no real sense. It depends a lot on context and the people involved, as well as the opportunity cost of services. The above was an example about lawyers, but you can find the same pattern in many other professions. If 3D-printing plastic parts does not cost much, would anyone overpay at a shop that stocks those parts, one that may also take a long time to find, rather than just 3D-print them? Some technology simply changes society. I don't like AI, but AI definitely does change society, and not all of the changes are necessarily bad. Which institution has been destroyed by AI? And was that institution healthy before AI?

macleginn 4 hours ago | parent | next [-]

I think part of the negative attitude towards the effects of AI stems from the fact that it demolishes a lot of structure. Traditional institutions maintain well-structured, low-entropy societies in terms of knowledge: one goes to a lawyer for legal advice or to a doctor for medical advice, one goes (or sends one's children) to a university for higher education, etc. One knows what to do and whom to ask. With the advent of the internet this started to change, and now the old system is almost useless: as you note, it may be more efficient to go to AI for legal advice, and AI, used correctly, is definitely more knowledgeable about most things than most university teachers. In the limit, society as it existed before is not simply transformed but completely gone: everybody is a fully autonomous agent with a $AI_PROVIDER subscription. Ditto for professional groups and other types of association that were needed to organise and disseminate knowledge (what is a lawyer these days? a person with a $LEGAL_AI_PROVIDER subscription, if that is even a thing? what is a SWE?). Now we live in a maximum-entropy situation. How do values evolve and disseminate in this scenario? Everybody has an AI-supported opinion about what is right. How do we agree? How do we decide on the next steps? AI doesn't give us a structure for that.

empath75 2 hours ago | parent [-]

> AI is definitely more knowledgeable about most things than most university teachers

I think this is hugely under-appreciated. Yes, every university professor knows more than ChatGPT about quite a lot of things, especially in their specialty, but there is no university professor on earth who knows as much about as many things as ChatGPT, nor do they have the patience or time to explain what they know to people at scale, in an interactive way.

I was randomly watching a video about calculus on YouTube this morning, didn't understand something (Feynman's integration trick), and then spent 90 minutes talking to ChatGPT getting some clarity on the topic, finding related work and further reading, and working through more examples with its help. I don't go to college. I don't have a college math teacher on call. Wikipedia is useless for learning anything in math that you don't already know. ChatGPT has endless patience to drill down on individual topics and explain things at different levels of expertise.
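For anyone curious, Feynman's integration trick is differentiation under the integral sign (the Leibniz integral rule); a classic textbook example (not from the comment above) is evaluating an integral that has no elementary antiderivative in x by introducing a parameter:

```latex
% Evaluate I(b) = \int_0^1 \frac{x^b - 1}{\ln x}\,dx for b > -1.
% Differentiate under the integral sign with respect to b:
I'(b) = \int_0^1 \frac{\partial}{\partial b}\,\frac{x^b - 1}{\ln x}\,dx
      = \int_0^1 x^b\,dx
      = \frac{1}{b+1}
% Since I(0) = 0, integrating I'(b) from 0 to b gives
I(b) = \ln(b+1)
```

The trick is that the b-derivative kills the awkward ln x in the denominator, leaving an integral anyone can do.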

This is just a capability for individual learning that _didn't exist before AI_, and we have barely begun to unlock it for people.

mrwh 4 hours ago | parent | prev | next [-]

> If a lawyer wants 500€ per half hour of advice, whereas some AI tool is almost zero-cost, even if the advice may only be up to 80% of the quality of a good lawyer, then there is no contest here. AI wins, even if it may arguably be worse

Interesting example, because when I look at it I think of course I'm going to pay for the advice I can trust, when it really matters that I get advice I can trust. 20% confidently wrong legal advice is worse than no advice at all. Where it gets difficult is when that lawyer is offloading their work to an AI...

chrisjj 3 hours ago | parent [-]

Indeed the biggest threat to the legal profession is not chatbots used by non-lawyers. It is chatbots used by lawyers, undermining performance of and confidence in the entire profession.

chrisjj 3 hours ago | parent | prev [-]

> even if the advice may only be up to 80% of the quality of a good lawyer, then there is no contest here. AI wins, even if it may arguably be worse.

This overlooks the effect of the penalty for failure on quality. A lawyer who gives bad advice can get sued. The "AI" is totally immune.