| ▲ | glitchc 6 days ago |
I'm not sure I follow: both Apple and Amazon are working on AI as we speak. They're just not following the popular approach of releasing a chatbot into the wild. Apple is focusing on a privacy-first approach with smaller models that run locally. Amazon is tying its models to an AWS subscription and incentivizing use by offering discounts, making its models cheaper to use than GPT, Opus, etc. |
|
| ▲ | twobitshifter 6 days ago | parent | next [-] |
Apple is not focusing on AI with any real emphasis. The engineers asked for $50B to train a model and Apple instead did stock buybacks. The stock kept underperforming, so they touted Apple Intelligence and a revamped Siri, only for it to fall flat. Siri was underinvested in for many years and should be at least as good as Claude or ChatGPT. ‘They’re not investing in a chatbot’ is a huge miss by Apple, which had a chatbot on everyone’s devices and a head start on the whole concept. |
| |
| ▲ | andsoitis 6 days ago | parent | next [-] | | > The engineers asked for $50B to train a model and Apple instead did stock buybacks. It is probably cheaper to simply integrate with OpenAI or Anthropic or whoever might unseat them in the future than to spend $50B on training a model. Not only is it cheaper, but it also gives them the flexibility to ride the wave of popularity without ceding hardware or software sales. | | |
| ▲ | twobitshifter 5 days ago | parent | next [-] | | And therein lies the issue: Apple does not believe they could do better than Google, Meta, xAI, Anthropic, or OpenAI. They are paying Google rather than building out their own products. Pre-Tim, Apple was pouring profits back into R&D, but now the priority is rewarding shareholders. | | |
| ▲ | const_cast 5 days ago | parent | next [-] | | It depends on what you think is going to happen with models. The way I see it, models were always predicated on openness and data-sharing. That, too, will be the competitive downfall of those who poured billions into creating said models. They won't stay caged up forever. Ultimately, the only thing OpenAI has between itself and its competitors is some really strong computers. Well... anybody can buy strong computers. This is, of course, assuming you don't believe the promise of ever-increasing cognition and eventual AGI, which I don't. The people going fast aren't going to be the winners. The second movers and those after them will be. They get all of the model at 1/100th of the cost. Ultimately, models, right now, for most consumers, are nothing more than novelties. Just look at Google Pixel - I have one, by the way. I can generate a custom image? Neat... I guess. It's a cool novelty. I can edit people out of my pictures? Well... actually Apple had that a couple of years ago... and it's not as useful as you would think. It's hard to see it because we're programmers, but right now, the AI products are really lacking for consumers. They're not good for music, or TV, or movies, or short-form entertainment. ChatGPT is neat for specific use cases like cheating on an essay or vibe coding, but how many people are doing that? Very, very few. Let me put it this way. Do I think Claude Code is good? Yes. Do I think Claude Code is the next Madonna? No. | | |
| ▲ | rovr138 5 days ago | parent [-] | | > Ultimately, the only thing OpenAI has between itself and its competitors is some really strong computers. There's a lot of difference between OpenAI and, let's say, Facebook (Llama). The difference between them is not only strong computers. There are architectural differences between the models. | | |
| ▲ | hakfoo 4 days ago | parent [-] | | From a technical perspective, yes. But from a business perspective they're going to try to cram every model into every possible use case for as long as possible. It will be a sign of a maturing market when we see vendors actually say "bad for X" and pull away from general-purpose messaging. You see it a little bit with the angling for code-specific products, but I think we're nowhere near a differentiated market. |
|
| |
| ▲ | bigyabai 5 days ago | parent | prev | next [-] | | > They are paying Google rather than building out their own products. This is the real death knell people should focus on. Apple buried their AI R&D to rush complete flops like Vision Pro out the door. Now that the dust has settled, the opportunity cost of these hardware ventures makes them look like a clear mistake. Apple had more than a decade to sharpen their knives and prepare for war with Nvidia, and now they're missing out on Nvidia's share of the datacenter market. Adding insult to injury, they're probably also ~10 years behind SOTA in the industry unless they hire well-paid veterans at great expense. Apple's chronic disdain for unprofitable products, combined with boneheaded ambition, will be the death of them. They cannot obviate real innovation and competition while dropping nothingburger software and hardware products clearly intended to bilk an unconscious userbase. | |
| ▲ | insane_dreamer 5 days ago | parent | prev [-] | | Apple doesn't need its own model--it can license Google's model; or, if the terms are unprofitable, license Anthropic's or OpenAI's or Mistral's, etc. And eventually it can build its own model. Think of how important it is for any AI model company to be the go-to model on the iPhone. Google pays Apple billions to be the default search engine on the iPhone. | | |
| ▲ | bigyabai 4 days ago | parent [-] | | According to my information, Apple is currently being paid ~0.0 billion dollars for granting the privilege of being the default AI provider. Am I supposed to keep waiting until that changes one day? | | |
| ▲ | insane_dreamer 4 days ago | parent [-] | | Exactly my point - OpenAI is giving Apple its model for free. That saves Apple the many $B's in compute it would take to train its own model. You could argue Apple is being paid in kind. Unlike OpenAI, Apple doesn't need to charge a subscription to get AI-based revenue. It just needs to properly integrate it into the products that billions of people are already using, to make those products more useful so people continue buying them. At that point most users don't care what model is powering it - could be GPT, Claude, Mistral etc. |
|
|
| |
| ▲ | zimpenfish 5 days ago | parent | prev [-] | | > Not only is it cheaper, but it also gives them the flexibility to ride the wave And also to hop off without any penalty if/when the wave collapses. |
| |
| ▲ | dpoloncsak 5 days ago | parent | prev | next [-] | | Didn't Apple's research lab release some open source/weights diffusion-based LLM that was blowing away all the benchmarks? Edit: Yes, it exists; it seems to be built off qwen2.5 coder. Not sure it proves the point I thought it did, but diffusion LLMs still seem neat | |
| ▲ | BlindEyeHalo 5 days ago | parent | prev | next [-] | | > ‘They’re not investing in a chatbot’ is a huge miss by apple Why? Because everyone else is doing it (and not making a profit btw)? | | |
| ▲ | zaphirplane 5 days ago | parent [-] | | Why bungle an AI release named Apple Intelligence that doesn’t do what was advertised, then half-ass the integration with OpenAI? Something about giving people an incentive to buy a phone that otherwise looks and acts identical to a 5-year-old phone |
| |
| ▲ | glitchc 5 days ago | parent | prev | next [-] | | > The engineers asked for $50B to train a model and Apple instead did stock buybacks. Source? | | | |
| ▲ | Gud 5 days ago | parent | prev [-] | | I’ll train their model for 49.9B |
|
|
| ▲ | gmays 6 days ago | parent | prev [-] |
Right, but remember Microsoft was 'working on' mobile too. The issue is that they're working on it the wrong way. Amazon is focused on price and treating it like a commodity. Apple is trying to keep the iPhone at the center of everything. Thus neither is fully committing to the paradigm shift: they say it is one, but they aren't acting like it, because their existing strategy/culture precludes them from doing so. |
| |
| ▲ | 9rx 6 days ago | parent | next [-] | | > The issue is that they're working on it the wrong way. So is everyone else, to be fair. Chat is a horrible way to interact with computers — and even if we accept that worse is better, its only viable future is to include ads in the responses. That isn't a game Apple is going to want to play. They are a hardware company. More likely, someday we'll get the "iPhone moment" when we realize all previous efforts were misguided. Can Apple rise up then? That remains to be seen, but it will likely be someone unexpected. Look at any successful business venture and the eventual "winner" is usually someone who sat back and watched all the mistakes be made first. | | |
| ▲ | nailer 6 days ago | parent | next [-] | | > Chat is a horrible way to interact with computers Why? We interact with people via chat when possible. It seems pretty clear that's humanity's preferred interaction model. | |
| ▲ | 9rx 5 days ago | parent [-] | | We begrudgingly accept chat as the lowest common denominator when there is no better option, but it's clear we don't prefer it when better options are available. Just look in any fast food restaurant that has adopted those ordering terminals and see how many are still lining up at the counter to chat with the cashier... In fact, McDonald's found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hindrance. We don't know what is better for this technology yet, so it stands to reason that we reverted to the lowest common denominator again, but there is no reason why we will or will want to stay there. Someone is bound to figure out a better way. Maybe even Apple. That business was built on being late to the party. Although, granted, it remains to be seen if that is something it can continue absent Jobs. | |
| ▲ | nailer 5 days ago | parent [-] | | > In fact, McDonalds found that their sales rose by 30% when they eliminated chatting from the process, so clearly people found it to be a hinderance. That's a good supporting argument, but I don't think McDonald's adequately represents more complex discussions. | | |
| ▲ | 9rx 5 days ago | parent [-] | | What is representative, though, is simple use: all you have to do is use chat to see how awful it is. It is better than nothing. It is arguably the best we have right now to make use of the technology. But, unless this AI thing is all hype and goes nowhere, smart minds aren't going to sit idle as the technology progresses towards maturity. | |
| ▲ | nailer 5 days ago | parent [-] | | I imagine it's like how humans converse: we talk, but sometimes we need diagrams and pictures. "What burgers do you have?" (expands to show a set of pictures) "I'll have the thing with chicken and lettuce" | | |
| ▲ | solid_fuel 4 days ago | parent | next [-] | | The problem with UX driven by this kind of interface is latency. Right now, this kind of flow goes more like:
"What burgers do you have?"
(Thinking...) (4 seconds later:) (expands to show a set of pictures)
"Sigh. I'll have the thing with chicken and lettuce"
(Thinking...) (3 seconds later:)
> "Do you mean the Crispy McChicken TM McSandwich TM?"
"Yes"
(Thinking...) (4 seconds later:)
> "Would you like anything else?"
"No"
(Thinking...) (5 seconds later:)
> "Would you like to supersize that?"
"Is there a human I can speak with? Or perhaps I can just point and grunt to one of the workers behind the counter? Anyone?"
It's just exasperating, and it's not easy to overcome until local inference is cheap and common. Even if you do voice recognition on the kiosk, which probably works well enough these days, there's still the round trip to OpenAI and then the inference time there. And of course, this whole scenario gets even worse and more frustrating anywhere with subpar internet. | | |
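A rough back-of-the-envelope tally of why that flow grates, using the per-turn waits from the dialogue above (the 4/3/4/5-second delays come from the example; the turn labels and the code itself are just an illustrative sketch, not any real kiosk API):

    # Sum the waiting time in the hypothetical kiosk exchange above.
    # Per-turn delays are taken from the example dialogue; everything
    # else is an illustrative assumption.
    turns = [
        ("What burgers do you have?", 4.0),
        ("I'll have the thing with chicken and lettuce", 3.0),
        ("Yes", 4.0),
        ("No", 5.0),
    ]

    total_wait = sum(delay for _, delay in turns)
    per_answer = total_wait / len(turns)
    print(f"Time spent staring at 'Thinking...': {total_wait:.0f}s "
          f"(~{per_answer:.0f}s per answer)")

Sixteen seconds of dead air across four trivial answers is overhead that a counter conversation or a plain touchscreen menu simply doesn't have.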
| ▲ | 9rx 5 days ago | parent | prev [-] | | Right. We talk when it is the only viable choice in front of us, but as soon as options are available, talk goes out the window pretty quickly. It is not our ideal mode of communication, just the lowest common denominator that works in most situations. But remember: unlike humans, AI can do things like materialize diagrams and pictures out of "thin air" and can even make them interactive right on the spot. It can also do a whole lot of things that you and I haven't even thought of yet. It is not bound by the limitations of the human mind and body. It is not human. What reason is there to think that chat will remain the primary mode of using this technology? It is the most obvious way to use the technology, so it is unsurprising that it is what we got first, but why would we stop here? Chat works, but it is not good. There are so many unexplored possibilities for something better, and we're just getting started. | |
| ▲ | nailer 5 days ago | parent [-] | | I think chat will remain dominant, but we'll go into other modes as needed. There's no more efficient way to communicate "show me the burgers" than saying it - thinking it is possible, but sending thoughts is too far off right now. Then you switch to imagery or hand gestures or whatever else when they're a better way to show something. |
|
|
|
|
|
| |
| ▲ | jpadkins 5 days ago | parent | prev | next [-] | | > Chat is a horrible way to interact with computers Chat is like the command line, but with easier syntax. This makes it usable by an order of magnitude more people. Entertainment tasks lend themselves well to GUI type interfaces. Information retrieval and manipulation tasks will probably be better with chat type interfaces. Command and control are also better with chat or voice (beyond the 4-6 most common controls that can be displayed on a GUI). | | |
| ▲ | kemayo 5 days ago | parent | next [-] | | > Chat is like the command line, but with easier syntax. I kinda disagree with this analogy. The command line is precise, concise, and opaque. If you know the right incantations, you can do some really powerful things really quickly. Some people understand the rules behind it, and so can be incredibly efficient with it. Most don't, though. Chat with LLMs is fuzzy, slow-and-iterative... and differently opaque. You don't need to know how the system works, but you can probably approach something powerful if you accept a certain amount of saying "close, but don't delete files that end in y". The "differently-opaque" for LLM chatbots comes in you needing to ultimately trust that the system is going to get it right based on what you said. The command line will do exactly what you told it to, if you know enough to understand what you told it to. The chatbot will do... something that's probably related to what you told it to, and might be what it did last time you asked for the same thing, or might not. For a lot of people the chatbot experience is undeniably better, or at least lets them attempt things they'd never have even approached with the raw command line. | |
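A minimal sketch of that contrast, assuming a hypothetical ask_llm helper (nothing below is a real API): the command-line-style path does exactly one precise thing every time, while the chat-style path hands a fuzzy sentence to a model and trusts its interpretation.

    # Hypothetical contrast between a precise CLI-style action and a fuzzy
    # chat-style request. ask_llm is a stand-in, not a real library call.
    from pathlib import Path

    def ask_llm(prompt: str) -> str:
        # Placeholder: a real call would go over the network and return
        # free-form text describing whatever the model decided to do.
        return f"(model interprets: {prompt!r})"

    def delete_logs_cli(root: str) -> list[Path]:
        """Command-line style: concise, precise, opaque to non-experts.
        Removes exactly the *.log files whose names don't end in 'y'."""
        victims = [p for p in Path(root).rglob("*.log") if not p.stem.endswith("y")]
        for p in victims:
            p.unlink()
        return victims

    def delete_logs_chat(root: str) -> str:
        """Chat style: fuzzy and iterative; you trust the model to map the
        words onto the same action, and the result may vary run to run."""
        return ask_llm(f"Clean up the logs under {root}, but don't delete files that end in y")

The first version does the same thing on every run; the second is far easier for most people to write, but you only find out what "clean up" meant after it has happened.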
| ▲ | 9rx 5 days ago | parent | prev [-] | | > Chat is like the command line Exactly. Nobody really wants to use the command-line as the primary mode of computing; even the experts who know how to use it well. People will accept it when there is no better tool for the job, but it is not going to become the preferred way to use computers again no matter how much easier it is to use this time. We didn't move away from the command-line simply because it required some specialized knowledge to use. Chatting with LLMs looks pretty good right now because we haven't yet figured out a better way, but there is no reason to think we won't figure out a better way. Almost certainly people will revert to chat for certain tasks, like people still use the command-line even today, but it won't be the primary mode of computing like the current crop of services are betting on. This technology is much too valuable for it to stay locked in shitty chat clients (and especially shitty chat clients serving advertisements, which is the inevitable future for these businesses betting on chat — they can't keep haemorrhaging money forever and individuals won't pay enough for a software service). |
| |
| ▲ | bobbylarrybobby 6 days ago | parent | prev [-] | | In my experience, Claude Code is a fantastic way to interact with a (limited subset of) my computer. I do not think Claude is too far off from being able to do stuff like read my texts, emails, and calendar and take actions in those apps, which is pretty much what people want Siri to (reliably) do these days. |
| |
| ▲ | glitchc 6 days ago | parent | prev | next [-] | | > Apple trying to keep iPhone at the centre of everything. Mac, iPad and iPhone, eventually Watch and Vision. Which makes sense since Apple is first and foremost a hardware company. | |
| ▲ | ninetyninenine 6 days ago | parent | prev | next [-] | | Well, no. Alexa Plus is the first LLM to integrate with the smart home in a big way. AWS is making strides, but in a different area. | |
| ▲ | fruitworks 6 days ago | parent | prev [-] | | It is a commodity. That's the paradigm shift. There is no moat |
|