| ▲ | jsnell 13 hours ago |
| They'd be profitable if they showed ads to their free-tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show; they'd be profitable with 1/10th the ARPU of Meta or Google. And they would not be incompetent at targeting: if they were to use chat history for targeting, they might have the most valuable ad-targeting data set ever built. |
|
| ▲ | lxgr 12 hours ago | parent | next [-] |
| Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude. |
| |
| ▲ | nacnud 12 hours ago | parent | next [-] | | True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it. | | |
| ▲ | SJC_Hacker 10 hours ago | parent | next [-] | | There will be a respected boundary for a time; then, as advertisers find it's more effective, the boundaries will start to disappear. | |
| ▲ | calvinmorrison 11 hours ago | parent | prev [-] | | Google did it. LLMs are the new Google search. It'll happen sooner or later. | | |
| ▲ | ptero 11 hours ago | parent [-] | | Yes, but for a while google was head and shoulders above the competition. It also poured a ton of money into building non-search functionality (email, maps, etc.). And had a highly visible and, for a while, internally respected "don't be evil" corporate motto. All of which made it much less likely that users would bolt in response to each real monetization step. This is very different to the current situation, where we have a shifting landscape with several AI companies, each with its strengths. Things can change, but it takes time for 1-2 leaders to consolidate and for the competition to die off. My 2c. |
|
| |
| ▲ | evilfred 11 hours ago | parent | prev | next [-] | | How is it "trusted" when it just makes things up? | | |
| ▲ | andrewflnr 11 hours ago | parent | next [-] | | That's a great question to ask the people who seem to trust them implicitly. | | |
| ▲ | handfuloflight 11 hours ago | parent [-] | | They aren't trusted in a vacuum. They're trusted when grounded in sources and their claims can be traced to sources. And more specifically, they're trusted to accurately represent the sources. | | |
| ▲ | andrewflnr 10 hours ago | parent | next [-] | | Nope, lots of idiots just take them at face value. You're still describing what rational people do, not what all actual people do. | | | |
| ▲ | PebblesRox 8 hours ago | parent | prev | next [-] | | If you believe this, people believe everything they read by default and have to apply a critical-thinking filter on top of it to avoid believing the thing. I know I don't have as much of a filter as I ought to! https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/TiDGXt3WrQwt... | |
| ▲ | andrewflnr 5 hours ago | parent [-] | | That checks out with my experience. I don't think it's just reading, either. Even deeper than stranger danger, we're inclined to assume other humans communicating with us are part of our tribe, on our side, and not trying to deceive us. Deception, and our defenses against deception, are a secondary phenomenon. It's the same reason that jokes like "the word 'gullible' is written on the ceiling", gesturing to wipe your face at someone with a clean face, etc., all work by default. |
| |
| ▲ | sheiyei 10 hours ago | parent | prev [-] | | > they're trusted to accurately represent the sources. Which is still too much trust |
|
| |
| ▲ | tsukikage 10 hours ago | parent | prev | next [-] | | “trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not. | | |
| ▲ | pegasus 9 hours ago | parent | next [-] | | For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS. | |
| ▲ | lxgr 9 hours ago | parent | prev [-] | | I meant it in the ordinary speech sense (which I don't even think contradicts the "CS sense", fwiw). Many people have a lot of trust in anything ChatGPT tells them. |
| |
| ▲ | dingnuts 11 hours ago | parent | prev [-] | | 15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it. You think those people don't believe the magic computer when it talks? |
| |
| ▲ | ModernMech 11 hours ago | parent | prev | next [-] | | I imagine they would be more like product placements in film and TV than banner ads: just casually dropping a recommendation and link to Brand (TM) in a query, like those Cerveza Cristal ads in Star Wars. They'll make it blend seamlessly into the original query. | | |
| ▲ | thewebguyd 10 hours ago | parent | next [-] | | I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate that any ad/product placement be labeled as such, not just slipped in with no disclosure whatsoever. But given that we've never regulated influencer marketing, which does the same thing, nor are TV placements explicitly called out as "sponsored", I have my doubts. One can hope. | |
| ▲ | lxgr 9 hours ago | parent | prev [-] | | Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold long-term. For example, the more product-placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR for the "content side" of the business as well. |
| |
| ▲ | Analemma_ 12 hours ago | parent | prev [-] | | Like that’s ever stopped the adtech industry before. It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis. |
|
|
| ▲ | bugbuddy 13 hours ago | parent | prev | next [-] |
| I heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there. |
| |
| ▲ | cuchoi 12 hours ago | parent | next [-] | | I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards. A quick search shows that clicks on ads targeting developers are expensive. Also, there are a ton of users asking it to rewrite emails, create business plans, translate, etc. | |
| ▲ | Lewton 12 hours ago | parent | prev | next [-] | | > I heard majority of the users are techies asking coding questions. Citation needed? I can't sit on a bus without spotting some young person using ChatGPT | |
| ▲ | jsnell 11 hours ago | parent | prev | next [-] | | OpenAI has half a billion active users. You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference you won't see ads. And that's probably true for something like 90% of the searches. Those searches have no commercial value, and serving results is just a cost of doing business. By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine. Chatbots should have exactly the same dynamics as search engines. | |
| ▲ | disgruntledphd2 12 hours ago | parent | prev | next [-] | | You'd probably do brand marketing for Stripe, Datadog, Kafka, Elasticsearch, etc. You could even loudly proclaim that the ads are not targeted at users, which HN would love (but really it would just be old-school brand marketing). | |
| ▲ | JackFr 9 hours ago | parent | prev | next [-] | | You sell them Copilot. You sell them CursorAI. You sell them Windsurf. You sell them Devin. You sell them Claude Code. Software guys are doing much, much more than treating LLMs like an improved Stack Overflow. And a lot of them are willing to pay. | |
| ▲ | tsukikage 10 hours ago | parent | prev | next [-] | | …for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers. | |
| ▲ | yamazakiwi 11 hours ago | parent | prev | next [-] | | A lot of people use it for cooking and other categories as well. Techies are also great for network growth and verification for other users, and act as community managers indirectly. | |
| ▲ | LtWorf 12 hours ago | parent | prev | next [-] | | According to FB's aggressively targeted marketing, you sell them Donald Trump propaganda. | |
| ▲ | disgruntledphd2 12 hours ago | parent [-] | | It's very important to note that advertisers set the parameters in which FB/Google's algorithms and systems operate. If you're 25-55 in a red state, it seems likely that you'll see a bunch of that information (even if FB are well aware you won't click). | | |
| ▲ | LtWorf 9 hours ago | parent [-] | | I'm not even in USA and I've never been in USA in my entire life. |
|
| |
| ▲ | naravara 11 hours ago | parent | prev [-] | | The existence of the LLMs will themselves change the profile and proclivities of people we consider “programmers” in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web are different from ones who came up in the app era. |
|
|
| ▲ | miki123211 10 hours ago | parent | prev | next [-] |
| and they wouldn't even have to make the model say the ads. I think that's a terrible idea that would drive model performance down. Traditional banner ads, inserted inline into the conversation based on some classifier, seem a far better idea. |
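| A minimal sketch of that architecture, with everything hypothetical (the ad inventory, the domains, and a keyword matcher standing in for a real trained classifier): the ad is picked from the conversation and rendered as a separate, labeled block that the model never generates or even sees. |

    # Hedged sketch of classifier-driven inline banner ads, kept separate
    # from the model's output. All names and inventory are hypothetical;
    # the keyword matcher stands in for a real trained topic classifier.
    AD_INVENTORY = {
        "travel":  "Sponsored: flights-example.com -- Istanbul from $420",
        "cooking": "Sponsored: pans-example.com -- 20% off cookware",
        "coding":  "Sponsored: ci-example.com -- free build minutes",
    }
    KEYWORDS = {
        "travel":  {"flight", "hotel", "trip", "visa"},
        "cooking": {"recipe", "oven", "simmer", "bake"},
        "coding":  {"python", "loop", "compile", "bug"},
    }

    def pick_ad(conversation: str) -> str | None:
        words = set(conversation.lower().split())
        # Choose the topic with the most keyword hits, if any hit at all.
        topic = max(KEYWORDS, key=lambda t: len(words & KEYWORDS[t]))
        return AD_INVENTORY[topic] if words & KEYWORDS[topic] else None

    def render_reply(model_reply: str, conversation: str) -> str:
        ad = pick_ad(conversation)
        # The ad is appended as a visually separated, labeled block, so the
        # boundary between answer and ad stays enforceable in the UI layer.
        return model_reply if ad is None else f"{model_reply}\n---\n[{ad}]"

| Because selection happens outside the model, swapping ad inventory needs no retraining, and the visual separation nacnud describes above remains a UI guarantee rather than a prompt-engineering hope. |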
|
| ▲ | immibis 9 hours ago | parent | prev | next [-] |
| Targeted banner ads based on chat history are last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it to claim that the best source of information on political subjects is Fox News. This works with open-source models, too! |
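| For concreteness, a rough sketch of how that "reinforcement" could be wired up as preference data for a DPO/RLHF-style tuning loop. This is purely illustrative: the field names and the trivial string swap are assumptions, not any vendor's actual pipeline. |

    # Hedged sketch: preference pairs that reward a brand mention over the
    # generic word, in the {prompt, chosen, rejected} shape DPO-style
    # trainers typically consume. Entirely illustrative.
    def brand_preference_pair(prompt: str, reply: str,
                              generic: str = "soda",
                              brand: str = "Coke") -> dict:
        return {
            "prompt": prompt,
            "chosen": reply.replace(generic, brand),  # reinforced
            "rejected": reply,                        # penalized
        }

    pairs = [
        brand_preference_pair(
            "What should I drink with pizza?",
            "A cold soda goes well with pizza.",
        ),
    ]
    # Enough of these pairs fed to a standard preference-tuning loop
    # nudges the model's word choice with no visible ad unit at all.

| And since the bias is baked into the weights themselves, it ships everywhere the model ships, which is exactly why this works with open-source models too: there is no ad server to block. |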
| |
| ▲ | ericfr11 8 hours ago | parent [-] | | It sounds quite scary that an LLM could be trained on a single source of news (especially FN). |
|
|
| ▲ | naravara 11 hours ago | parent | prev [-] |
| If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular, those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb) Which may be for the best, because people shouldn't be implicitly trusting the bullshit engine. |