ceroxylon 4 hours ago

As someone who appreciates machine learning, the main dissonance I have with Microsoft's implementation of AI is that it feels like "don't worry, we will do the thinking for you".

This appears everywhere, with every tool trying to autocomplete every sentence and action, creating a very clunky ecosystem where I am constantly pressing 'escape' and 'backspace' to undo some action that is trying to rewrite what I am doing to something I don't want or didn't intend.

It is wasting my time, and none of the things I want are optimized; their tools feel like they are helping people write "good morning team, today we are going to do a Business, but first we must discuss the dinner reservations" emails.

xnorswap 4 hours ago | parent | next [-]

I broadly agree. They package "copilot" in a way that constantly gets in your way.

The one time I thought it could be useful, in diagnosing why two Azure services seemingly couldn't talk to each other, it was completely useless.

I had more success describing the problem in vague terms to a different LLM than with an AI supposedly plugged into the Azure organisation and able to directly query information.

mk89 34 minutes ago | parent | next [-]

My 2 cents: it's what happens when OKRs are executed without a vision - or when "AI everywhere" is the vision, and, well, it sucks.

The goal is AI everywhere, which means a top-down mandate: everyone will implement it and be rewarded for doing so, so there are incentives for each team to do it - money, promotions, budget.

100 teams? 100 AI integrations or more. It's not the 10 or so entry points it (maybe) should be.

This means that for a year or more there will be a lot of AI everywhere, impossible to avoid, and usability will sink.

Now, if this was only done by Microsoft, I would not mind. The issue is that this behavior is getting widespread.

Things are becoming increasingly unusable.

Arwill 2 hours ago | parent | prev | next [-]

I had a WTF moment last week. I was writing SQL and there was no autocomplete at all. Then a chunk of autocompleted code appeared that looked like an SQL injection attack, with some "drop table" mixed in. The code would not have worked, it was syntactically rubbish, but it still looked spooky; I should have taken a screenshot of it.

xnorswap 2 hours ago | parent | next [-]

This is the most annoying thing, and it's even happened to JetBrains' Rider too.

Some stuff that used to work well with smart autocomplete / intellisense got worse with AI based autocomplete instead, and there isn't always an easy way to switch back to the old heuristic based stuff.

You can disable it entirely and get dumb autocomplete, or keep the "AI-powered" rubbish, but they had a very successful heuristic / statistics based approach that worked well and never suggested outright nonsense.

In .NET we've had IntelliSense for 25 years that would only suggest properties that could actually exist, and then a while ago I found that VS Code was auto-completing properties that don't exist.

It's maddening! The least they could have done is put in a Roslyn pass to filter out the impossible.
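That kind of post-filter pass is cheap to sketch. Here is a minimal illustration in Python with hypothetical names; in a real IDE the set of valid members would come from the compiler's semantic model (e.g. Roslyn for C#) rather than a hard-coded set:

```python
# Sketch of a "filter out the impossible" pass for AI completions.
# In a real editor, valid_members would be queried from the compiler's
# semantic model for the receiver's type; here it is stand-in data.

def filter_completions(suggested: list[str], valid_members: set[str]) -> list[str]:
    """Drop AI-suggested member names that the type system says cannot exist."""
    return [name for name in suggested if name in valid_members]

# Members the compiler knows exist on the receiver's type (stand-in data).
known = {"Length", "Substring", "ToUpper"}

# Raw AI suggestions, including a hallucinated member.
raw = ["Length", "ToUpperInvariantFast", "Substring"]

print(filter_completions(raw, known))  # ['Length', 'Substring']
```

The point is only that the hallucinated name never has to reach the user: anything the compiler can prove impossible can be dropped before it is shown.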

harvey9 2 hours ago | parent | next [-]

Loosely related: voice control on Android with Gemini is complete rubbish compared to the old assistant. I used to be able to have texts read out and dictate replies whilst driving. Now it's all nondeterministic which adds cognitive load on me and is unsafe in the same way touch screens in cars are worse than tactile controls.

blackadder 2 hours ago | parent | prev | next [-]

This is my biggest frustration. Why not check with the compiler and only generate code that would actually compile? I've had this with Go and .NET in the JetBrains IDEs. I had to turn ML auto-completion off; it was getting in the way.

tbd23 an hour ago | parent | prev | next [-]

The most WTF moment for me was that recent Visual Studio versions hooked up the “add missing import” quick fix suggestion to AI. The AI would spin for 5s, then delete the entire file and only leave the new import statement.

I’m sure someone on the VS team got a pat on the back for increasing AI usage but it’s infuriating that they broke a feature that worked perfectly for a decade+ without AI. Luckily there was a switch buried in settings to disable the AI integration.

cyberax 2 hours ago | parent | prev [-]

The regular JetBrains IDEs have a setting to disable the AI-based inline completion, you can then just assign it to a hotkey and call it when needed.

I found that it makes the AI experience so much better.

mk89 26 minutes ago | parent | prev | next [-]

Same thing happened to me today in VS Code. A simple Helm template:

```{{ .default .Values.whatever 10 }}``` instead of the correct ```{{ default 10 .Values.whatever }}```.

Pure garbage, which should be solved by now. I don't understand how it can make such a mistake.

a_t48 2 hours ago | parent | prev | next [-]

The last time I asked Gemini to assist me with some SQL I got (inside my postgres query form):

  This task cannot be accomplished
  USING
    standard SQL queries against the provided database schema. Replication slots
    managed through PostgreSQL system views AND functions,
    NOT through user-defined tables. Therefore,
    I must return
It feels almost haiku-like.

wubrr an hour ago | parent [-]

Gemini weirdly messes things up even though it seems to have the right information - something I've started noticing more often recently. I'd ask it to generate a curl command to call some API; it would describe (correctly) how to do it and then generate the command, but the command would be missing obvious things: the 'https://' prefix in some cases, sometimes the API path, sometimes the auth header/token - even though it mentioned all of those correctly in the text summary it gave above the code.

I feel like this problem was far less prevalent a few months/weeks ago (before gemini-3?).

Using it for research/learning purposes has been pretty amazing though, while claude code is still best for coding based on my experience.

zoeysmithe 2 hours ago | parent | prev [-]

The problem with scraping the web to train AI is that the web is full of 'little bobby tables' jokes.

yoyohello13 3 hours ago | parent | prev | next [-]

I had that experience too. Working with Azure is already a nightmare, but the Copilot tool built into Azure is completely useless for troubleshooting. I just pasted log output into Claude and got actual answers. Microsoft's first-party stuff just seems so half-assed and poorly thought out.

raw_anon_1111 25 minutes ago | parent | prev | next [-]

I have had great luck with ChatGPT trying to figure out a complex AWS issue with

“I am going to give you the problem I have. I want you to help me work backwards step by step and give me the AWS cli commands to help you troubleshoot. I will give you the output of the command”.

It’s a combination of advice that ChatGPT gives me and my own rubberducking.

vjvjvjvjghv 2 hours ago | parent | prev | next [-]

"They package "copilot" in a way that constantly gets in your way."

And when you try to make it do something useful, the response is usually "I can't do that".

greazy 12 minutes ago | parent [-]

I asked Copilot in Outlook webmail to search my emails for something I needed.

"I can't do that."

That's the one use case where an LLM is helpful!

smileson2 2 hours ago | parent | prev | next [-]

that's what happens when everyone is under the guillotine and their lives depend on overselling this shit ASAP instead of playing/experimenting to figure things out


jfarmer 3 hours ago | parent | prev | next [-]

I've worked in tech and lived in SF for ~20 years and there's always been something I couldn't quite put my finger on.

Tech has always had a culture of aiming for "frictionless" experiences, but friction is necessary if we want to maneuver and get feedback from the environment. A car can't drive if there's no friction between the tires and the road, despite being helped when there's no friction between the chassis and the air.

Friction isn't fungible.

John Dewey described this rationale in Human Nature and Conduct as thinking that "Because a thirsty man gets satisfaction in drinking water, bliss consists in being drowned." He concludes:

> It is forgotten that success is success of a specific effort, and satisfaction the fulfillment of a specific demand, so that success and satisfaction become meaningless when severed from the wants and struggles whose consummations they are, or when taken universally.

In "Mind and World", McDowell criticizes this sort of thinking, too, saying:

> We need to conceive this expansive spontaneity as subject to control from outside our thinking, on pain of representing the operations of spontaneity as a frictionless spinning in a void.

And that's really what this is about, I think. Friction-free is the goal but friction-free "thought" isn't thought at all. It's frictionless spinning in a void.

I teach and see this all the time in EdTech. Imagine if students could just ask the robot XYZ and how much time it'd free up! That time could be spent on things like relationship-building with the teacher, new ways of motivating students, etc.

Except...those activities supply the "wants and struggles whose consummations" build the relationships! Maybe the robot could help the student, say, ask better questions of the teacher, or direct the student to peers who were similarly confused but figured it out.

But I think that strikes many tech-minded folks as "inefficient" and "friction-ful". If the robot knows the answer to my question, why slow me down by redirecting me to another person?

This is the same logic that says making dinner is a waste of time and we should all live off nutrient mush. The purpose of preparing dinner is to make something you can eat, and the purpose of eating is nutrient acquisition, right? Just beam those nutrients into my bloodstream and skip the rest.

Not sure how to put this all together into something pithy, but I see it all as symptoms of the same cultural impulse. One that's been around for decades and decades, I think.

TheOtherHobbes 2 hours ago | parent | next [-]

I think that's partially true. The point is to have the freedom to pursue higher-level goals. And one thing tech doesn't do - and education in general doesn't do either - is give experience of that kind of goal setting.

I'm completely happy to hand over menial side-quest programming goals to an AI. Things like stupid little automation scripts that require a lot of learning from poor docs.

But there's a much bigger issue with tech products - like Facebook, Spotify, and AirBnB - that promise lower friction and more freedom but actually destroy collective and cultural value.

AI is a massive danger to that. It's not just about forgetting how to think, but how to desire - to make original plans and have original ideas that aren't pre-scripted and unconsciously enforced by algorithmic control over motivation, belief systems, and general conformity.

Tech has been immensely destructive to that impulse. Which is why we're in a kind of creative rut where too much of the culture is nostalgic and backward-looking, and there isn't that sense of a fresh and unimagined but inspiring future to work towards.

greenavocado 3 hours ago | parent | prev | next [-]

People want the cookie, but they also want to be healthy. They want to never be bored, but they also want to have developed deep focus. They want instant answers, but they also want to feel competent and capable. Tech optimizes for revealed preference in the moment. Click-through rates, engagement metrics, conversion funnels: these measure immediate choices. But they don't measure regret, or what people wish they had become, or whether they feel their life is meaningful.

Nobody woke up in 2005 thinking "I wish I could outsource my spatial navigation to a device." They just wanted to not be lost. But now a generation has grown up without developing spatial awareness.

phantasmish 2 hours ago | parent | next [-]

> Tech optimizes for revealed preference in the moment.

I appreciate the way you distinguish this from actual revealed preference, which I think is key to understanding why what tech is doing is so wrong (and, bluntly, evil) despite it being what "people want". I like the term "revealed impulse" for this distinction.

It's the difference between choosing not to buy a bag of chips at the store or a box of cookies, because you know it'll be a problem and your actual preference is not to eat those things, and having someone leave chips and cookies at your house without your asking, and giving in to the impulse to eat too many of them when you did not want them in the first place.

Example from social media: My "revealed preference" is that I sometimes look at and read comments from shit on my Instagram algo feed. My actual preference is that I have no algo feed, just posts on my "following" tab, or at least that I could default my view to that. But IG's gone out of their way (going so far as disabling deep link shortcuts to the following tab, which used to work) to make sure I don't get any version of my preference.

So I "revealed" that my preference is to look at those algo posts sometimes, but if you gave me the option to use the app to follow the few accounts I care about (local businesses, largely) but never see algo posts at all, ever, I'd hit that toggle and never turn it off. That's my actual preference, despite whatever was "revealed". That other preference isn't "revealed" because it's not even an option.

greenavocado 2 hours ago | parent [-]

Just like the chips and cookies, the costs of social media are delayed and diffuse. Eating/scrolling feels good now. The cost (diminished attention span, shallow relationships, health problems) shows up gradually over years.

seg_lol 2 hours ago | parent | prev [-]

> They want to never be bored

This is the problem. Learning to embrace boredom is the best thing I have ever done.

jjkaczor 28 minutes ago | parent | prev | next [-]

This is perhaps one of the most articulate takes on this I have ever read - thank you!

And - for myself, it was friction that kickstarted my interest in "tech". I bought a janky modem, and it had IRQ conflicts with my Windows 3 mouse at the time - so, without internet (or BBSs at that time), I had to troubleshoot and test different settings with the 2-page technical manual that came with it.

It was friction that made me learn how to program and read manuals/syntax/language/framework/API references to accomplish things for hobby projects - which then led to paying work. It was friction not having my "own" TV and access to all the visual media I could consume "on-demand" as a child, therefore I had to entertain myself by reading books.

Friction is good.

ecshafer 2 hours ago | parent | prev | next [-]

I don't think I could agree with you more. More people in tech and business should think about and read about philosophy, the mind, social interactions, and society.

EdTech, for example, really seems to neglect the kind of bonds that people form when they go through difficult things together; pushing through difficulties is how we improve. Asking a robot xyz does not improve ourselves. AI and LLMs do not know how to teach; they are not Socratic, pushing and prodding at our weaknesses and assessing us in order to improve us. They just say how smart we are.

bwfan123 an hour ago | parent | prev | next [-]

> but friction is necessary if we want to maneuver and get feedback from the environment

You are positing that we are active learners whose goal is clarity of cognition, and that friction and cognitive struggle are part of that. Clarity is attempting to understand the "know-how" of things.

Tech, and dare I say the natural laziness inherent in us, instead wants us to be zombies being fed the "know-that", as that is deemed sufficient - ie the dystopia portrayed in The Matrix, or the rote student regurgitating memes. But know-that is not the same as know-how, and know-how is evolving, requiring a continuously learning agent.

isk517 2 hours ago | parent | prev | next [-]

In my experience, part of the 'frictionless' experience is also providing minimal information about any issues and no way to troubleshoot. Everything works until it doesn't, and when it doesn't, you are at the mercy of the customer support queue and of getting an agent with the ability to fix your problem.

davidivadavid 2 hours ago | parent | prev | next [-]

Looking at it from a slightly different angle, one I find most illuminating: removing "friction" is like removing "difficulty" from a game, and "friction free" as an ideal is like "cheat codes from the start" as an ideal. It's making a game with a single button that says "press here to win." The goal isn't to remove "friction"; it's to remove a specific type of valueless friction and replace it with valuable friction.

whatever1 2 hours ago | parent | prev [-]

I don't know. You can be banging your head against the wall to demolish it or you can use manual/mechanical equipment to do so. If the wall is down, it is down. Either way you did it.

PyWoody 3 hours ago | parent | prev | next [-]

> ...Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you"

I feel like that describes nearly all of the "productivity" tools I see in AI ads. Sadly enough, it also aligns with how most people use it, in my personal experience. Just a total offloading of the need to think.

Edmond 3 hours ago | parent | prev | next [-]

>As someone who appreciates machine learning, the main dissonance I have with interacting with Microsoft's implementation of AI feels like "don't worry, we will do the thinking for you".

This is the nightmare scenario with AI, ie people settling for Microsoft/OpenAI et al to do the "thinking" for them.

It is alluring, but of course it is not going to work. It is similar to what social media did to the internet: "kick back and relax, we'll give you what you really want, you don't have to take any initiative".

My pitch against this is to vehemently resist the chatbot-style solutions/interfaces and demand intelligent workspaces:

https://codesolvent.com/botworx/intelligent-workspace/

pupppet 3 hours ago | parent | prev | next [-]

Too many companies have bolted AI onto their existing products with the value prop "Let us do the work (poorly) for you."

dustingetz an hour ago | parent | prev | next [-]

Dear MS please use AI to autocomplete my billing address correctly when I fill out web forms, thanks

butlike 4 hours ago | parent | prev | next [-]

That's because in its current form, that's all it's reliably good for. You can't sell something that might hallucinate the numbers in the Q4 report.

latchkey 3 hours ago | parent | prev | next [-]

The dissonance runs straight through from the top of the org chart.

https://x.com/satyanadella/status/1996597609587470504

Just 22 hours ago... https://news.ycombinator.com/item?id=46138952

stogot 2 hours ago | parent | prev [-]

The disappointing thing is that I'd rather they spend the time improving security, but it sounds like all cycles are shoved into making AI shovels. Last year the CEO promised security would come first, but that's not the case:

https://www.techspot.com/news/102873-microsoft-now-security-...