Remix.run Logo
jackb4040 14 hours ago

Didn't they explicitly say the ads wouldn't be made aware of prompt data when they announced them? And if so, how is that not securities fraud?

c7b 13 hours ago | parent | next [-]

Maybe someone with more time at hands could look up what Google said with respect to ads and what happened later.

This is one of the rare instances where it's very easy to predict the future: the prompt auction market will look similar to the existing online ad market, financial firms will pay for prompt streams for sentiment analysis, companies and interest groups will pay to have their products or agenda included favorably in the training data for future open weights models... any way you can think of that LLMs can be monetized, you will see it happen. And fast. The financial pressure is way too high for there to be too long of a honeymoon phase like we had with web 2.0

dd82 11 hours ago | parent [-]

And how much trust are you going to have with your model results that they haven't been transformed and adjusted by advertising priorities?

search engine results do this all the time, reordering output by advertiser input. its a pretty small jump from that to rewriting output from models, and even better where its all a black box.

duskdozer 8 hours ago | parent | next [-]

>And how much trust are you going to have with your model results that they haven't been transformed and adjusted by advertising priorities?

None.

eswdd 11 hours ago | parent | prev | next [-]

Also Google did it over-time - they didn't suddenly become who they are today 10 years ago even.

tyre 11 hours ago | parent | prev [-]

I mean search engine results are pretty poor and have been for a long time. They reflect SEO, not credibility or quality.

LLMs have plenty of issues, but they’re relatively clean compared with what the future will look like.

jamiequint 11 hours ago | parent | prev | next [-]

In what way would that be securities fraud? I guess you could get nailed under Section 17(a), but really hard to make a case they're defrauding investors by representing they were going to make ads worse performing than they ended up making them.

In order for it to be securities fraud it has to be tied to a securities transaction and the misstatement has to be material to a reasonable investor's decision.

Esophagus4 9 hours ago | parent | next [-]

Because everything is securities fraud: https://www.bloomberg.com/opinion/articles/2019-06-26/everyt...

d0odk 8 hours ago | parent [-]

it's not securities fraud if investors make a lot of money

potamic 6 hours ago | parent [-]

For every investor who has made money, there is another who has lost an equal amount. Money cannot be created, it can only change hands!

parineum 6 hours ago | parent [-]

Money is created all the time.

mcmcmc 10 hours ago | parent | prev [-]

A plan to gamble the brand’s reputation on whether people will remember their promises seems risky enough to be considered material.

> representing they were going to make ads worse performing than they ended up making them.

This is disingenuous. It’s a tradeoff between lower performing ads or losing market share by degrading trust in your product.

aabhay 11 hours ago | parent | prev | next [-]

I think they said the ad vendors wouldn't but the matching algorithm would still be aware of it. Which IMO is the bare requirement to have ads be anything but magazine style ads.

johanyc 2 hours ago | parent | prev | next [-]

I don't remember them ever saying that. They did say ads will not affect the response, like the ad in Truman show: https://youtu.be/6U4-KZSoe6g

Frost1x 13 hours ago | parent | prev | next [-]

I mean, the ad doesn’t necessarily have to be made aware of the exact prompt context, just that the ad itself was relevant. You can basically have the ads prequalified for areas and serve them when relevant. Now that does show the user is talking about something relevant most likely, and depending on how they decide to serve them or provide referring, it may traceable to a profile/identity built for that user externally.

I’d be more concerned as to how this ends up in agent platforms using the LLMs, when you don’t have a fairly autonomous agent based system using these the entire point is that a human isn’t involved, so who are you serving ads to and where are you injecting them.

Moreover, if you are injecting them everywhere, does that survive stare for subsequent steps, meaning from the first set of results I get, does that loop back in again with the ad injected into the context. Because now, we have yet another dangerous way of injecting instructions into an already issue prone surface area.

I’m guessing they’re going to have special APIs that don’t include ads, and those are going to cost more, especially for non embedded agents (processes that already exist inside ChatGPT that kick off transparently from prompts, like asking it to work with an office document). After all the customers using agents aside from developers are mostly businesses, so it’s where the money is. The ads will exist for the poor to subsidize their use, and probably create even more barriers for agentic use like I described. Just my thoughts.

And good luck litigating against any business in this administration. Unless they explicitly tick off certain people or refuse to kiss the ring, they can get away with almost anything right now and there’s little risk of doing it or not because ticking off this admin will raise illegitimate prosecution even if you’re perfectly legal, almost the same level of if you’re not. It’s the ideal playground for doing all sorts of manipulation, just kiss the ring and you’ll be fine.

jmalicki 13 hours ago | parent | prev | next [-]

Wouldn't it have to have a negative effect on the security to be securities fraud? Causing an investor loss is a key point of securities fraud.

"We made a ton more money with ads and the stock went up" lacks that key element of fraud?

nkrisc 12 hours ago | parent [-]

Investors who bought an artificially inflated stock would be harmed.

jamiequint 11 hours ago | parent | next [-]

How would the stock be harmed by them selling better performing or more relevant ads?

bee_rider 11 hours ago | parent [-]

I don’t know that there were any promises anyway. But if there were, then an investor could have plausibly believed that that was a better long-term business model.

It’s early days for these LLM hosts, maybe investors could be worried about taking the really annoying business notes before users are properly addicted.

11 hours ago | parent | prev [-]
[deleted]
10 hours ago | parent | prev | next [-]
[deleted]
12 hours ago | parent | prev | next [-]
[deleted]
david_shi 13 hours ago | parent | prev | next [-]

who is "they"? might have been a stealth terms and conditions update

TZubiri 13 hours ago | parent | prev | next [-]

It would also be a huge security risk. But I can't think of any fundamental difference with Google queries, other than the sheer entropy of user data involved.

And I'm not a tinfoil internet anarchist, but just because Google only leaks user data in aggregated form to advertisers, doesn't mean that they don't leak their user data, it's just that they did so in a legal and responsible manner.

Maybe considering the difference in data volume and intimacy between queries and AI conversations, the privacy implications of advertising merit a difference in treatment, but I wouldn't be surprised if that is lost to a more simple 'Google did this so we can do it too' momentum.

gxs 13 hours ago | parent [-]

The difference is you can make full use of Google without logging in

Even with a throw away, no chance I use OpenAI now - if/when Anthropocene does this I’ll be in a tough spot

spongebobstoes 12 hours ago | parent [-]

you can use chatgpt without an account, just not all of it

and you can't make full use of Google without an account. for example, you need an account to upload to YouTube, manage your website in search, place ads, opt out of data usage. the list goes on

oaweoifjwpo 12 hours ago | parent [-]

None of those examples are "run an internet search".

spongebobstoes 11 hours ago | parent [-]

I don't understand. you can talk to chatgpt without an account, what's the difference?

both are a limited subset of what the companies offer, available for free

hacker_homie 11 hours ago | parent | prev | next [-]

Easy they lied to the public not investors and have more money than you.

Local llm or nothing at all.

bitmasher9 11 hours ago | parent | next [-]

This is a classic example highlighting the upside of local llms.

However the local llms I can run on reasonable hardware are so dumb compared to opus, and even if I shelled out five figures of hardware to run the largest/smartest open model it still will be noticeably worse.

Right now the remote models are just so much smarter and more affordable under most usage patterns.

echelon 11 hours ago | parent | prev [-]

> Local llm or nothing at all.

I'm not as familiar with LLMs as I am media models, but there can't seriously be local contenders for beating Opus, GPT-5, etc. Right?

At home hardware isn't good enough.

Nobody "far enough behind" that isn't scared to release their model as open weights actually has a competitive model within 70% of the lead models.

Now that the Chinese are catching up and even pulling ahead (eg. in video), they've stopped releasing the weights.

Stragglers release weights. And those weights aren't competitive.

Am I missing something?

zozbot234 6 hours ago | parent [-]

GLM and Kimi are still releasing weights for near-SOTA models. DeepSeek, Qwen and arguably MiniMax are the ones that are perhaps falling behind.

qotgalaxy 14 hours ago | parent | prev [-]

[dead]