bearjaws 18 hours ago

Feel like the canary was when Grokpedia became a project.

Giant waste of time while Anthropic/OAI keep surging forward.

I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure, keeping up with realtime topics can be useful, but I am not sure how much of a product that is.

paulbjensen 16 hours ago | parent | next [-]

The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.

Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.

With that data, you could work out:

- What celebrities/influencers to use in marketing campaigns
- Where to advertise, and on which TV/radio channels
- What potential brands to collaborate with to expand your customer base
- What tone of voice to use in your advertising

In some cases, we educated clients about who their actual customers were, better than they understood themselves.
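A toy sketch of the kind of affinity scoring this enables: score each taxonomy entity by how over-represented it is among a brand's followers versus the general population (a "lift" measure). All account names and follower numbers below are invented purely for illustration, not from any real dataset:

```python
# Toy follower graph: account -> set of user IDs following it.
# All names and numbers here are hypothetical, for illustration only.
followers = {
    "brand":       {1, 2, 3, 4, 5, 6},
    "celebrity_a": {1, 2, 3, 7},
    "celebrity_b": {20, 21, 22},
    "tv_show":     {2, 4, 5, 11, 12},
}

def affinity(brand: str, entity: str, population: int) -> float:
    """Lift: how over-represented the entity is among the brand's
    followers compared with the general population."""
    brand_set = followers[brand]
    entity_set = followers[entity]
    overlap = len(brand_set & entity_set) / len(brand_set)  # P(entity | brand)
    baseline = len(entity_set) / population                 # P(entity)
    return overlap / baseline if baseline else 0.0

population = 1000  # hypothetical total audience size
scores = {e: affinity("brand", e, population)
          for e in followers if e != "brand"}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Ranking taxonomy entities by this score surfaces which celebrities, shows, or brands a customer group disproportionately cares about, which is roughly the signal behind the campaign decisions listed above.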

In one scenario, we built a social media feed based on the things that a group of customers following a well-known deodorant brand in the UK would see.

When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”

The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones who had been watching their TV adverts of women on beaches chasing a man who happened to spray the deodorant on himself. Their advertising from the past had been very effective.

That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.

I’m pretty sure he must be delighted with how things have panned out since.

BLKNSLVR 15 hours ago | parent | next [-]

That entire description sounds worthless to any positive direction of humanity. Therefore, probably rapaciously profitable.

Very sad face.

mbs159 7 hours ago | parent | prev | next [-]

Damn, this only validates the use of ad-blockers / sponsor-blockers even more

rchaud 13 hours ago | parent | prev | next [-]

In other words, using flash-in-the-pan data to build an advertising goldmine.

smcin 16 hours ago | parent | prev | next [-]

That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?

johnisgood 14 hours ago | parent | prev | next [-]

This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.

When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.

And this happens at scale, invisibly. People never see the manipulation.

In any case, it is not useful for most people. It is useful for the people doing the deceiving.

caaqil 14 hours ago | parent | next [-]

The tech is interesting and useful, no need for the scary moral framing.

The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.

On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.

johnisgood 13 hours ago | parent | next [-]

I am not disputing that the tech is interesting. My point is about how it is being applied. The examples above are not about understanding people, they are about exploiting their latent preferences (before: "unconscious preference") for persuasion at scale.

Attempting to normalize that by saying "Palantir is worse" does not make it any less manipulative and sinister.

And to be more on topic, Twitter's value as a dataset is overstated. It's hardly the panacea people make it out to be.

hananova 13 hours ago | parent | prev [-]

To not frame the amorality and negative effects centrally and primarily is to be dishonest. Outside of those whose wage depends on not seeing it, there is not a single person who doesn't see that that entire branch of tech has strictly negative value to society.

But of course, line must go up, and it's not you personally being negatively affected, so it doesn't matter.

etchalon 14 hours ago | parent | prev [-]

It's marketing. That's how marketing works.

rhubarbtree 3 hours ago | parent [-]

And it’s far more important in capitalism than your products.

With the advent of AI, startups become solely about marketing, sales, and defensibility.

So most of the capitalist system will become of this nature. Doesn’t seem like such a good system, and inevitably unsustainable.

Gud 3 hours ago | parent | prev | next [-]

Ok, in that case I am glad that Elon fucked it up.

gwern 15 hours ago | parent | prev | next [-]

It's definitely very valuable, but for what AI model? How does any of that lead to AGI, or even just a good coding agent?

applfanboysbgon 15 hours ago | parent | next [-]

It doesn't need to lead to AGI or a good coding agent. Some of the only people who are actually profitable in the LLM industry are the people making actual chatbots. There are several bootstrapped startups that run open-weight models with a $10 or $20 monthly sub and make millions in profit off of inference from people just talking to the things, usually for character roleplay / "AI boyfriend/girlfriend" stuff etc. Some of them even took those profits and invested it into training their own bespoke models from scratch, usually on the smaller side although finetunes/retrains of Llama 70b, GLM, and Deepseek 670b have also been done. Grok could probably be profitable if it targeted this space, as the most "intelligent" conversational/uncensored model.

This is already presupposing that profit even matters, though. Musk already burned some $50 billion to control messaging on political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved through influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply personalised narratives.

KaiserPro 15 hours ago | parent | prev [-]

> but for what AI model?

Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.

For coding or "AGI", no, it's not useful. For building a text-based (possibly image-based) recategorisation system, it's top class.
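A minimal illustration of the "what words lead to what outcomes" idea is a multinomial Naive Bayes classifier trained on labelled posts; the training examples below are made up purely for illustration:

```python
import math
from collections import Counter, defaultdict

# Made-up labelled examples, standing in for labelled social posts.
train = [
    ("love this great product", "pos"),
    ("great service would recommend", "pos"),
    ("terrible waste of money", "neg"),
    ("awful service never again", "neg"),
]

word_counts = defaultdict(Counter)  # label -> word frequencies
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text: str) -> str:
    """Pick the label with the highest log-probability, using
    add-one smoothing for unseen words."""
    scores = {}
    total_docs = sum(label_counts.values())
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for w in text.split():
            score += math.log(
                (word_counts[label][w] + 1) / (total_words + len(vocab))
            )
        scores[label] = score
    return max(scores, key=scores.get)
```

Trained on millions of real posts instead of four toy lines, the same scheme is what makes a firehose of short text so useful for sentiment and categorisation work.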

15 hours ago | parent | prev | next [-]
[deleted]
alex1138 15 hours ago | parent | prev | next [-]

As an aside, that quote from MZ does bother me. There's more to making a web-scale, human-rights-respecting platform (because it has to be; it's the internet, and social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway if he's sinking apparently billions into the metaverse while having no account support).

Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey but it felt the spirit of the company at first was relatively neutral, it was a tool, it was what Jack came up with

Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?

[1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122

cyanydeez 16 hours ago | parent | prev [-]

It _was_ a great asset. However, just like models need proper data, as soon as Musk removed the clamps on valuable social signals, well, he basically took a dump where he intended to eat.

ohyoutravel 14 hours ago | parent [-]

They did say was, and did say Twitter, which existed in the past.

brokencode 17 hours ago | parent | prev | next [-]

It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

freehorse 17 hours ago | parent | next [-]

Many projects in his companies seem to be more and more Musk's vanity projects than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck stock that nobody wants to buy and that thus had to be bought by his other companies. And it is becoming worse and worse, especially ever since he bought Twitter and sped up his tweeting rate.

dmarcos 16 hours ago | parent | next [-]

FWIW it looks like there's now a demand surge with the introduction of the new cheap Cybertruck variant. Delivery dates pushed out to the fall of 2026.

robrain 16 hours ago | parent | next [-]

That was an artificial boost created by setting a time limit on a low price. There were ten days to buy at that price, then they put it back up. [1]

[1] https://electrek.co/2026/03/01/tesla-cybertruck-awd-price-in...

EDIT: grammar

parineum 15 hours ago | parent [-]

What's an artificial boost? Sounds like you're describing a sale.

hananova 13 hours ago | parent [-]

Sales are artificial boosts yes. The difference is in the connotation. A sale is given for something that people generally would buy anyway, but now more people will. An artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy.

Or in other words, sales raise $high_number to $higher_number while artificial boosts raise $essentially_zero to $acceptable_number.

dmarcos 12 hours ago | parent | next [-]

Your claim is that people that bought the cybertruck at a lower price don’t actually want it?

sigmarule 11 hours ago | parent [-]

I believe the claim is that the demand side did not change, the supply side did, as in sales != demand.

dmarcos 10 hours ago | parent [-]

Just quoting the above

“An artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy”

So people spent 60k on a cybertruck that they didn’t want? Is that the claim?

pas 19 minutes ago | parent [-]

the claim is that it moved sales forward in time, but it'll have a corresponding dip in sales later, whereas a good sales campaign increases total volume (virtually no dip, brings in new customers, etc)

parineum 12 hours ago | parent | prev | next [-]

> artificial boost is given to stuff nobody wants, but at a lower price can be convinced to buy.

People do want it, clearly, but it's too expensive for them.

Sales don't make people want things they otherwise don't.

sillyfluke 8 minutes ago | parent | next [-]

literally almost everything I have bought on sale is something I wasn't looking to buy at that moment in time.

bdangubic 11 hours ago | parent | prev [-]

> Sales don't make people want things they otherwise don't.

That is exactly what sales do. Most sales are made by selling people things they don't want, until sales does what sales does.

dmarcos 11 hours ago | parent [-]

So people spent 60k on a cybertruck they don’t want? Do you believe that?

bdangubic 10 hours ago | parent [-]

look around your house and see how much shit you got that you really want(ed). great salesman (and elon is the best in the history of the civilization) will sell you shit you never thought you wanted :)

dmarcos 9 hours ago | parent [-]

The motivation to buy something is always that you want it. That a product doesn't meet your needs or expectations later is a different story. What's your evidence for the claim that people spending 60k on a Cybertruck don't want it? What's your evidence for making a similar claim, or the opposite, about any other purchase? Without evidence, it feels like you are making baseless claims about people's motivations.

RobRivera 11 hours ago | parent | prev [-]

[X] doubt

NewJazz 16 hours ago | parent | prev | next [-]

Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.

MPSimmons 16 hours ago | parent | prev [-]

A push on delivery dates is as likely to mean production issues as it is an influx of interest.

scottyah 16 hours ago | parent | prev | next [-]

[flagged]

annexrichmond 14 hours ago | parent | prev [-]

Drivel. They’re selling just as well as Rivians.

squarefoot 17 hours ago | parent | prev | next [-]

Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

annexrichmond 14 hours ago | parent | prev | next [-]

Are you really suggesting everything on Wikipedia is truthful, complete, and free of all biases?

hananova 13 hours ago | parent | next [-]

Maybe not all of it, but a vast majority of it is. And almost certainly the parts that drove Elon to slopify it are true.

annexrichmond 11 hours ago | parent [-]

Citation needed.

comicjk 13 hours ago | parent | prev | next [-]

Not everything on Wikipedia is true, but the parts Elon Musk hates most are probably true.

annexrichmond 11 hours ago | parent [-]

So we just make things up on HN now? Care to share any examples?

scared_together 4 hours ago | parent | next [-]

Not sure if this is an example of something Musk hates, but here’s a paragraph from the “2016 presidential campaign” section of the Donald Trump article on Wikipedia.

> Trump's FEC-required reports listed assets above $1.4 billion and outstanding debts of at least $265 million.[140][141] He did not release his tax returns, contrary to the practice of every major candidate since 1976 and to promises he made in 2014 and 2015 to release them if he ran for office.[142][143]

I could not find any mention of tax returns on the Donald Trump page of Grokipedia.

Wikipedia:

https://en.wikipedia.org/wiki/Donald_Trump

Grokipedia:

https://grokipedia.com/page/Donald_Trump

mbs159 7 hours ago | parent | prev [-]

Well, you yourself did not provide any sources for the assertion that some of what is on Wikipedia is false.

6 hours ago | parent | prev | next [-]
[deleted]
6 hours ago | parent | prev [-]
[deleted]
Timon3 17 hours ago | parent | prev | next [-]

I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it, but the primary goal is not to convince humans; it is to influence the search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture-war ideas without ever being the wiser.

It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

danabramov 17 hours ago | parent [-]

I've seen Claude pick it up too. It's disconcerting.

Rover222 14 hours ago | parent | prev | next [-]

Wikipedia obviously is left leaning.

hananova 12 hours ago | parent [-]

Well yes, but so is reality. And Wikipedia as an encyclopedia is supposed to document reality. So what's the problem?

Rover222 36 minutes ago | parent | next [-]

That's an interesting take. Left or right leaning is kind of just relative to society as a whole. If the world really was so left, I think we'd be calling Wikipedia neutral.

beeflet 11 hours ago | parent | prev [-]

[flagged]

brokencode 8 hours ago | parent | next [-]

Have you ever wondered why the most educated and scholarly people in the country are left leaning?

I suppose you think they were indoctrinated. But finding and teaching the truth is essentially their job. Learning how to evaluate sources and approach research logically is like academia 101.

So doesn’t it seem strange that so few of them ever manage to see that they’re being indoctrinated?

Or do you think a person’s political beliefs are assigned at birth and lefties just like academia for some reason?

baublet 11 hours ago | parent | prev | next [-]

Are you suggesting that academia, and all the other places full of people who learn and know stuff for a living, being full of leftists is some conspiracy against you and the right wing?

Touch grass, my dude. These are the thoughts of someone who spends too much time on X.

beeflet 9 hours ago | parent [-]

I don't use X or Grok.

The fact that the elite and knowledge workers in this country are generally more left-leaning is pretty evident. The right wingers in these ranks make up a distinctive subgroup. These are the thoughts of pretty much everyone everywhere in the country and this becomes apparent if you ask randoms on the street, or you have attended college lectures, or have used a dictionary, or have read wikipedia talk pages, or have compared news sources.

Being a welder or a farmer or a carpenter requires "learning and knowing stuff". These right-wing associated jobs just don't produce knowledge for other people as an end product. That is what makes knowledge work an elite position; not everyone has the luxury of doing knowledge work.

I take issue with your implication that we should all bow down to knowledge workers because they know better. The knowledge workers are the issue here. They are the subject of discussion. This is like when the police investigate themselves and find no wrongdoing.

If we found that the population of professional athletes became dominated by a certain cultural ingroup, and eventually we failed to bring home gold medals at the olympics, we might be correct to question the state of our meritocracy wrt athleticism. Regardless of what people who "improve and use their bodies for a living" think.

The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.

adi_kurian 9 hours ago | parent | next [-]

You know the trades are union, i.e., left wing?

If you went to my hometown and asked working class men who work with their hands whether they support the conservatives, they'd laugh in your face.

Have you heard of the AFL-CIO?

nixon_why69 9 hours ago | parent | prev | next [-]

> The US is losing intellectually and technologically to countries like China. I call into question the general legitimacy of our academic and journalistic institutions.

China has tons of green energy, high speed rail and frequently extols the virtues of socialism.

kartakrak 8 hours ago | parent | prev [-]

hn is left leaning too

even pg tries to shit on Elon on X from time to time

funniest bit was garry tan hinting that yc x and y seasons has to be renamed because pg doesnt like it

6 hours ago | parent | prev [-]
[deleted]
alex1138 17 hours ago | parent | prev | next [-]

I can both not like Elon and also think Wikipedia is also very captured on some things

ryandrake 17 hours ago | parent | next [-]

Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?

calqacon 12 hours ago | parent | next [-]

How about Grabowski et al.: "Wikipedia's Intentional Distortion of the History of the Holocaust", about the outsize influence of certain coordinated Polish editors on the Wikipedia articles about Poland and the Holocaust?

https://www.tandfonline.com/doi/epdf/10.1080/25785648.2023.2...

Quote from the conclusion:

> This essay has shown that in the last decade, a handful of editors have been steering Wikipedia’s narrative on Holocaust history away from sound, evidence-driven research, toward a skewed version of events touted by right-wing Polish groups. Wikipedia’s articles on Jewish topics, especially on Polish–Jewish history before, during, and after World War II, contain and bolster harmful stereotypes and fallacies. Our study provides numerous examples, but many more exist. We have shown how the distortionist editors add false content and use unreliable sources or misrepresent legitimate ones.

For a more recent paper, "Disinformation as a tool for digital political activism: Croatian Wikipedia and the case for critical information literacy" by Car et al. says that:

> The Hr.WP [Croatian Wikipedia] case exemplifies disinformation not only as content manipulation, but also as process manipulation weaponising neutrality and verifiability policies to suppress dissent and enforce a single ideological position.

https://doi.org/10.1108/JD-01-2025-0020

servo_sausage 15 hours ago | parent | prev | next [-]

I find it more surprising that the common understanding has shifted away from "wikis are crap for anything new or political".

As soon as there is a plausible agenda for selecting a narrative the way Wikipedia works we should be sceptical.

For recent examples, everything to do with Biden and family, and Gamergate. These pages are still full of discussion; and what's written is more ideological than factual. You can follow these pages to see how an in-group selects a narrative.

And these topics are not nearly as controversial as race, feminism, or transgender topics.

ryandrake 14 hours ago | parent [-]

OK, is there a specific example on either the Biden or Gamergate page that is factually incorrect? Or are you saying the entire pages are false?

servo_sausage 12 hours ago | parent | next [-]

My point is more that the history of those pages is a good example of how Wikipedia works for controversial topics; it's not really a process of becoming more correct as better sources are found and argued about like it is on more neutral pages, instead it's an in group deciding what to represent, collecting their preferred opinion pieces. And this changes over time, getting no closer to neutrality within the same articles history.

You can write an equivalent article starting with "Gamergate was a movement reacting to the improper collusion between game developers and journalists" and find just as many sources, but the current article wants to promote the idea that it was a harassment campaign first.

datsci_est_2015 10 hours ago | parent [-]

It was also pretty credibly a psyop orchestrated by Steve Bannon and Jeffrey Epstein, but that’s probably better served in history books and biographies rather than an encyclopedia.

scarmig 12 hours ago | parent | prev | next [-]

Wiki's Gamergate opening paragraph:

> Gamergate or GamerGate (GG) was a loosely organized misogynistic online harassment campaign motivated by a right-wing backlash against feminism, diversity, and progressivism in video game culture. It was conducted using the hashtag "#Gamergate" primarily in 2014 and 2015. Gamergate targeted women in the video game industry, most notably feminist media critic Anita Sarkeesian and video game developers Zoë Quinn and Brianna Wu.

Grokipedia's:

> Gamergate was a grassroots online movement that emerged in August 2014, primarily focused on exposing conflicts of interest and lack of transparency in video game journalism, initiated by a blog post detailing the romantic involvement of indie developer Zoë Quinn with journalists who covered her work without disclosure. The controversy began when Eron Gjoni, Quinn's ex-boyfriend, published "The Zoe Post," accusing her of infidelity with multiple individuals, including Kotaku journalist Nathan Grayson, whose article on Quinn's game Depression Quest omitted any mention of their prior personal contact. This revelation highlighted broader patterns of undisclosed relationships and coordinated industry practices, such as private mailing lists among journalists, fueling demands for ethical reforms like mandatory disclosure policies.

I don't care about "Gamergate" and never use Grokipedia, but Wiki definitely has a stronger slant: it's as if an article about Black Lives Matter started with a statement that it was a campaign meant to scam people to pay for mansions for leadership.

yongjik 12 hours ago | parent | next [-]

Well, I'm naively assuming Grokipedia is being sympathetic to the cause(?) of Gamergate, but if the best thing they could lead the article was essentially "It all started when someone got mad at his ex-girlfriend and her many other boyfriends and wrote something that went viral" ...

... it does sound like an online harassment campaign.

baublet 10 hours ago | parent [-]

It was. In hindsight it signaled the beginning of the mass weaponization of the internet via social media. It also was NOT grassroots lol. It was very specifically and intentionally enflamed and groomed and funded by people like Steve Bannon and his good buddy Jeffrey Epstein. It wouldn’t have such a big Wikipedia article without them.

brendoelfrendo 11 hours ago | parent | prev [-]

Wikipedia's assessment is more accurate. Wikipedia does go on in its second paragraph to explain the context of the start of the campaign, including "The Zoe Post" and the accusations of conflict of interest. But the broader impact of Gamergate was as a misogynistic online harassment campaign, and Wikipedia is correct to make that the central part of its summary. Just because Grokipedia is more reluctant to state a conclusion does not make it less biased.

andoando 14 hours ago | parent | prev [-]

Which facts are represented is equally important as being factual though.

"Brian hit Jim" can be a fact. But if you omit "Jim murdered Brian's whole family", it's a distortion of the truth.

bdangubic 14 hours ago | parent [-]

specific examples other than the fictitious Jim & Brian?

andoando 13 hours ago | parent [-]

I haven't read Wikipedia in a long time, so I can't answer your question. I am just pointing out that saying "the facts are correct" is not enough to show there is no bias on Wikipedia.

AuryGlenz 17 hours ago | parent | prev | next [-]

[flagged]

JumpCrisscross 16 hours ago | parent | next [-]

The Minnesota Transracial Adoption Study was methodologically flawed. “Children with two black parents were significantly older at adoption, had been in the adoptive home a shorter time, and had experienced a greater number of preadoption placements.”

Reframed, the study seemed to find (a) black kids are adopted less readily and (b) the longer a kid spends in the foster system, the lower their IQ at 17. (There is also limited controlling for epigenetic factors because we didn’t understand those well in the 1970s and 80s.)

Based on how new human cognition is, and how genetically similar human races are, it would be somewhat groundbreaking to find an emergent complex trait like IQ mapping to social constructs like race, particularly ones as broad as American white and black. (There is more genetic diversity in single African tribes than in some small European countries. And American whites and blacks are all complex hybridized social categories.)

[1] https://en.wikipedia.org/wiki/Minnesota_Transracial_Adoption...

AuryGlenz 11 hours ago | parent [-]

[flagged]

tptacek 11 hours ago | parent [-]

What? No you can't.

And: it remains perfectly OK to study racial differences in IQ. It's an actively studied topic. In fact, it's studied by at least three major scientific fields (quantitative psychology, behavioral genetics, and molecular genetics). The idea that you can't is a cringe online racist canard borne out of the fact that the studies aren't coming out the way they want them to.

AuryGlenz 10 hours ago | parent [-]

Does it now? Noah Carl would disagree. He was a researcher at Cambridge University who was dismissed after an open letter signed by over 1,400 academics and students accused him of "racist pseudoscience" for merely arguing that race-IQ research should not be off-limits.

James Flynn (of the Flynn effect) has also publicly stated that grants for research clarifying genetic vs. environmental causes of IQ gaps weren't approved because of university fears of public furor.

tptacek 10 hours ago | parent | next [-]

You're trying to axiomatically win an argument that is already settled empirically. It won't work. You can just read the papers. My point being: the papers exist, and more are published every year. Once you acknowledge that, your argument is dead. Literally no matter what the papers say. Don't make dumb arguments.

Noah Carl has a sociology doctorate. He doesn't work in the fields that study this; he just tries to launder his way into them.

Flynn is, famously, a race/IQ skeptic.

akerl_ 10 hours ago | parent | prev [-]

https://medium.com/@racescienceopenletter/open-letter-no-to-...

https://www.theguardian.com/education/2019/may/01/cambridge-...

> for merely arguing that race-IQ research should not be off-limits.

Help me connect the dots here.

AlotOfReading 16 hours ago | parent | prev | next [-]

It seems like the root of your statement is with the existence of "race" as a purely biological classification. Wikipedia correctly notes the consensus position that race is a social construct [0] that's difficult to use accurately when discussing IQ. Grok makes the implicit and incorrect assumption that genetic factors = race, among other issues.

[0] https://www.genome.gov/genetics-glossary/Race

darkwater 15 hours ago | parent | next [-]

I wonder how much longer that link will stay up with the current administration...

AuryGlenz 11 hours ago | parent | prev [-]

Ok, change it to "what we call race as a proxy for general geographic locations that people's ancestors come from."

Which is what we all mean by race, anyways.

AlotOfReading 10 hours ago | parent | next [-]

That's not what your previous post was talking about. But if you insist, at least make your point clear. "African Americans" and "Africans" are wildly different genetic populations that get subsumed under the same "Black" racial category in the US. Which one were you talking about?

The latter is more genetically diverse than any other human population by an incredible margin. Making generalized statements about them is impossible (including this one). As for African American populations, ancestry estimates of how closely related they are to African populations vary massively for each individual. Many people are much closer to "white" populations than any African population, due to the history of African Americans in North America. If you really mean race as a geographic proxy, the "black" label is simply confusing what you actually mean.

AuryGlenz 8 hours ago | parent [-]

I understand your point (although I find the babybathwater-ing to be tiring), and I didn't mean to be drawn into a debate about this. But that was entirely the point - that there's a debate. Wikipedia would have you believe that there isn't.

For what it's worth, I'm mixed as hell: European, Asian, Jewish, North African, and Native American. I look white, though, and I am, in fact, majority European ancestry. Therefore in most studies (of anything race related), I would presumably be lumped in with white people. It's not a perfect "measure," but it's still the easiest proxy for the geographic location of our ancestors that we have, and on a population level it works just fine for studies.

lobf 11 hours ago | parent | prev [-]

But then what are you arguing? Geographic location determines IQ? (An inherently flawed measurement itself)

AuryGlenz 10 hours ago | parent [-]

I'm not arguing anything other than the fact that Wikipedia is biased.

Though I will say it's beyond argument that geographic ancestry has an effect on IQ on a statistical group level (the reasons for this are what's debated), and that IQ is the best measurement of G that we have.

lcnPylGDnU4H9OF 5 minutes ago | parent | next [-]

> I'm not arguing anything other than the fact that Wikipedia is biased.

It "is biased" to document human knowledge as accurately as possible. Is there something wrong with that?

lobf 9 hours ago | parent | prev [-]

Okay but you need to… actually present these arguments. Right now you’re stating your position and then affirming it as fact and expecting everyone to trust you.

AuryGlenz 7 hours ago | parent [-]

I already gave you two large meta-analyses and more on the first point, and as far as the second goes, in the field of psychology that's as established as 2+2=4 is in the math world. If you really want to research that yourself, go ahead; I don't feel like I should need to waste my time.

epgui 16 hours ago | parent | prev | next [-]

Have you considered the possibility that your opinion is just not representative of the scientific consensus?

AuryGlenz 11 hours ago | parent | next [-]

I asked ChatGPT on whether or not it was the "scientific consensus."

"Anonymous surveys of intelligence experts reveal division: a 2016 survey found that about 49% attributed 50% or more of the Black-White gap to genetics, while over 80% attributed at least 20%; an earlier 1980s survey showed similar splits. These views are more common in private or anonymous contexts, contrasting with public statements from bodies like the APA that find no support for genetic explanations."

Hm, sure seems like Wikipedia should probably have a more balanced, nuanced discussion considering the experts are split at least 50/50.

charcircuit 16 hours ago | parent | prev [-]

Wikipedia does not care about scientific consensus. It just summarizes "reliable" secondary sources.

epgui 12 hours ago | parent [-]

Wrong in two different ways:

- this tends to approximate consensus.

- Wikipedia does care, and has a policy on this: https://en.wikipedia.org/wiki/Wikipedia:Scientific_consensus

charcircuit 10 hours ago | parent [-]

>and has a policy on this

Look at the top of that page.

>This is an essay. It contains the advice or opinions of one or more Wikipedia contributors. This page is not an encyclopedia article or a Wikipedia policy, as it has not been reviewed by the community.

lobf 16 hours ago | parent | prev | next [-]

>As you can see, Wikipedia is very dismissive to the point of effectively lying.

Did I miss where you presented evidence that wikipedia is wrong? You seem to be taking an assumption you carry (race is related to IQ) and assuming everyone believes it's true as well, thus wikipedia is lying.

AuryGlenz 11 hours ago | parent [-]

There have been many, many studies that show that "race" is related to IQ. A true, unbiased article would show that as well as any well-founded criticisms of it.

lobf 11 hours ago | parent [-]

Can you cite them then?

AuryGlenz 10 hours ago | parent [-]

Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54(2), 297–330.

Rushton, J. P., & Jensen, A. R. (2005). Thirty years of research on race differences in cognitive ability. Psychology, Public Policy, and Law, 11(2), 235–294.

Neisser, U., et al. (1996). Intelligence: Knowns and unknowns. (APA Task Force report). American Psychologist, 51(2), 77–101.

erxam 16 hours ago | parent | prev [-]

[flagged]

gowld 17 hours ago | parent | prev | next [-]

It's not errors of fact, it's errors of omitted facts.

ibero 16 hours ago | parent | next [-]

Are there actual good examples showing errors of omitted facts on Wikipedia that are verifiably correct, that demonstrate how it is "captured"?

decimalenough 16 hours ago | parent | prev [-]

[flagged]

arjie 13 hours ago | parent | prev [-]

I’d say Wikipedia definitely has a strong “woke” bent to it. Either in the language or the choice of what facts to show. Here’s an example I deleted that had been there for quite a while https://en.wikipedia.org/w/index.php?title=Salvadoran_gang_c...

I really like Wikipedia, though, and I think over time we will get around to fixing it up.

klausa 12 hours ago | parent [-]

Why did you feel this passage was worth deleting?

arjie 12 hours ago | parent [-]

Anyone familiar with Wikipedia etiquette knows how to find the answer to this question. Rather than getting into an argument here about a subject there, I'd prefer you familiarize yourself with the norms of that community, and if you already have or are experienced with them, then you know where to discuss the subject guided by those norms.

scared_together 4 hours ago | parent [-]

But you’re responding to a comment here, not there. So why not abide by the norms that prevail here?

freehorse 17 hours ago | parent | prev | next [-]

I can understand somebody not liking wikipedia, I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

atonse 15 hours ago | parent | next [-]

> I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

Really? Have you used AI to write documentation for software? Or used AI to generate deep research reports by scouring the internet?

Because, while both can have some issues (so do humans), AI already does extremely well at both those tasks (multiple models do; look at the various labs' Deep Research products, or look at NotebookLM).

Grokipedia is roughly the same concept of "take these 10,000 topics, and for each topic make a deep research report, verify stuff, etc, and make minimal changes to the existing deep research report on it. preserve citations"
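A minimal sketch of that kind of pipeline in Python. Everything here is hypothetical - `call_llm` is a stub standing in for a real model API, and the prompt strings are illustrative, not xAI's actual implementation:

```python
# Stub standing in for a hosted model API; a real pipeline would make a
# network call here. Returns a tagged string so the flow runs offline.
def call_llm(prompt):
    return f"[report for: {prompt}]"

def build_article(topic, existing=None):
    """Produce a deep-research report for one topic. If a previous
    report exists, ask for minimal changes with citations preserved."""
    if existing is None:
        return call_llm(f"Write a verified, cited research report on {topic}")
    return call_llm(
        f"Minimally revise this report on {topic}, preserving citations:\n{existing}"
    )

def build_encyclopedia(topics):
    # "Take these 10,000 topics, and for each topic make a deep
    # research report" - one article per topic.
    return {topic: build_article(topic) for topic in topics}

articles = build_encyclopedia(["Alan Turing", "Photosynthesis"])
```

The key design point in the description above is the two modes: generate from scratch on the first pass, then make minimal, citation-preserving edits on later passes.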

So it's not like it's automatically some anti-woke can't-be-trusted thing. In fact, if you trust the idea of an AI doing deep research reports, this is a generalizable and automated form of that.

We can judge an idea by its merits, politics aside. I think it's a fascinating idea in general (like the idea of writing software documentation or doing deep research reports), whether it needs tweaks to remove political bias aside.

freehorse 15 hours ago | parent | next [-]

No, I don't trust an encyclopedia generated by AI. Projects with much narrower scopes are not comparable.

edit: I am not very excited by AI-generated documentation either. I think that LLMs are very useful tools, but I see a potential problem when the sources of information that their usefulness is largely based on are also LLM-generated. I am afraid that this will inevitably result in a drop in quality that will also affect the LLMs themselves downstream. I think we underestimate how much the intentionality in human-written text - in LLMs' training sets and context windows - matters for them to give relevant/useful output.

chipotle_coyote 15 hours ago | parent | prev | next [-]

> Have you used AI to write documentation for software?

Hi. I have edited AI-generated first drafts of documentation -- in the last few months, so we are not talking about old and moldy models -- and describing the performance as "extremely well" is exceedingly generous. Large language models write documentation the same way they do all tasks, i.e., through statistical computation of the most likely output. So, in no particular order:

- AI-authored documentation is not aware of your house style guide. (No, giving it your style guide will not help.)

- AI-authored documentation will not match your house voice. (No, saying "please write this in the voice of the other documentation in this repo" will not help.)

- The generated documentation will tend to be extremely generic and repetitive, often effectively duplicating other work in your documentation repo.

- Internal links to other pages will often be incorrect.

- Summaries will often be superfluous.

- It will love "here is a common problem and here is how to fix it" sections, whether or not that's appropriate for the kind of document it's writing. (It won't distinguish reliably between tutorial documentation, reference documentation, and cookbook articles.)

- The common problems it tells you how to fix are sometimes imagined and frequently not actually problems worth documenting.

- It's subject to unnecessary digression, e.g., while writing a high-level overview of how to accomplish a task, it will mention that using version control is a good idea, then detour for a hundred lines giving you a quick introduction to Git.

As for using AI "to generate deep research reports by scouring the internet", that sounds like an incredibly fraught idea. LLMs are not doing searches, they are doing statistical computation of likely results. In practice the results of that computation and a web search frequently line up, but "frequently" is not good enough for "deep research": the fewer points of reference for a complex query there are in an LLM's training corpus, the more likely it is to generate a bullshit answer delivered with a veneer of absolute confidence. Perhaps you can make the case that that's still a good place to start, but it is absolutely not something to rely on.

dyates 9 hours ago | parent [-]

>LLMs are not doing searches, they are doing statistical computation of likely results.

This was true of ChatGPT in 2022, but any modern platform that advertises a "deep research" feature provides its LLMs with tools to actually do a web search, pull the results it finds into context and cite them in the generated text.
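That loop can be sketched concretely. A hedged sketch in Python - both `model_step` and `web_search` are stubs (no real model or search backend), showing only the control flow of tool calling: the model emits a search request, the platform executes it and feeds the results back into context, and the model then answers citing them:

```python
# Stub for one LLM turn: if the context holds no tool results yet,
# request a web search; otherwise answer, citing the supplied sources.
def model_step(context):
    if not any(m["role"] == "tool" for m in context):
        return {"tool_call": {"name": "web_search", "query": context[0]["content"]}}
    sources = [m["content"] for m in context if m["role"] == "tool"]
    return {"answer": f"Answer based on {len(sources)} source(s)."}

# Stub for a real search backend.
def web_search(query):
    return [f"result about {query}"]

def deep_research(question):
    context = [{"role": "user", "content": question}]
    while True:
        step = model_step(context)
        if "tool_call" in step:
            # Execute the requested tool and pull results into context.
            for result in web_search(step["tool_call"]["query"]):
                context.append({"role": "tool", "content": result})
        else:
            return step["answer"]
```

The point is that the search results enter the context window before generation, so the final text can cite what was actually retrieved rather than relying on training-corpus statistics alone.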

15 hours ago | parent | prev [-]
[deleted]
scottyah 16 hours ago | parent | prev | next [-]

> "grokipedia" as idea

So you can understand someone not liking something, but you cannot understand that person liking the idea of an alternative? What is the idea for you if not just an alternative to the established service with the undesired part changed?

freehorse 15 hours ago | parent | next [-]

Because not liking something does not imply liking any possible alternative.

Which one is the "undesirable part changed" here? Wikipedia is written by humans, it has a not-for-profit governance model, it encompasses a large, international community of authors/editors that attempt to operate democratically, it has an investment/commitment in being an openly available and public source of information. Grokipedia, on the other hand, is AI-generated, and operated by a for-profit AI company. Even if "grokipedia" managed somehow to get traction and "overthrow" wikipedia, there is no reason on earth why a company would operate it for free and not try to make profit out of it, or use it for their ends in ways much more direct than what may or may not be happening to wikipedia. Having a billionaire basically control something that may be considered "ground truth" of information seems a bad idea, and having AI generate that an even worse one.

I can understand somebody not liking something in how wikipedia is governed or operated; after all, whatever has to do with getting humans to work together at such a scale is bound to be challenging. I can understand somebody ideologically disagreeing with some of the stances that such a project has to take eventually (even if one tries to be as neutral as possible, it is impossible to avoid some clash somewhere about where that neutrality exactly lies). But grokipedia is much more than "wikipedia but different ideologically".

edit: just to be clear, I see a critique of the "idea of grokipedia" as, e.g., the critique of it being a billionaire-controlled, AI-generated project meant to substitute for wikipedia; a critique of the implementation would be finding flaws in actual grokipedia articles (overall). I think the idea of it is already flawed enough.

15 hours ago | parent | prev | next [-]
[deleted]
debugnik 16 hours ago | parent | prev | next [-]

They meant the idea of Wikipedia rewritten by Grok (or another controversial LLM) specifically, not just any alternative.

wat10000 15 hours ago | parent | prev [-]

Not all alternatives are necessarily worthy. I can understand someone not liking tomatoes. I can't understand someone liking depleted uranium.

hunterpayne 13 hours ago | parent | next [-]

Maybe ask a Ukrainian soldier which they prefer (modern armor is often made of depleted uranium). Environment shapes such preferences far more than personality.

bdangubic 14 hours ago | parent | prev [-]

what do you have against depleted uranium? you know what they say, one man’s trash is another man’s treasure :)

psyklic 6 hours ago | parent | prev [-]

Elon at some point threatened to have an LLM rewrite all of the training data to remove woke. I assume Grokipedia is his experiment at doing this (and perhaps hoping it will infect other training sets too?) ...

Rover222 14 hours ago | parent | prev [-]

I appreciate you

tclancy 17 hours ago | parent | prev [-]

[flagged]

notahacker 18 hours ago | parent | prev | next [-]

Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

tclancy 17 hours ago | parent | next [-]

>Twitter's communication style being based around brevity

Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.

3rodents 17 hours ago | parent | next [-]

Elon was running some sort of $1m competition for the "best" Twitter post for a few months. I think those types of dissertations about Phrenology and the like have fallen off a cliff since the competition ended.

tclancy 12 hours ago | parent [-]

Ooohhhh. I am both glad and horrified to know this. Not how Seneca told me life would be when I learned things.

delecti 9 hours ago | parent | prev [-]

There's probably a selection bias involved. I haven't been a regular user for a while now, but the big threads like that were significantly outnumbered by individual posts. Meanwhile I'm not likely to send someone a link to a single one-sentence tweet, because there's not enough meat to it. The stuff that could be shared would usually be an image from the tweet, which I could share directly.

aleph_minus_one 18 hours ago | parent | prev | next [-]

> Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

This depends on what one wants to optimize the AI for. ;-)

libertine 17 hours ago | parent | prev [-]

And the amount of bots there isn't helpful either.

facemelt2 17 hours ago | parent [-]

recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use

tanjtanjtanj 17 hours ago | parent | next [-]

How recent? As recently as last weekend I was seeing blue check marks replying with AI generated only-technically-related replies on top of the majority of the posts I looked at.

rvnx 16 hours ago | parent | prev | next [-]

There are bots here too, a lot of them, to the point that the rules were amended; this is because it's very valuable to give points to new submissions.

libertine 17 hours ago | parent | prev [-]

If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.

UncleOxidant 17 hours ago | parent | prev | next [-]

> Giant waste of time while Anthropic/OAI keep surging forward.

And Google. They're quietly making a lot of progress in the coding space with Antigravity and Gemini 3.1.

koakuma-chan 17 hours ago | parent [-]

Has Antigravity gotten any better?

sunaookami 16 hours ago | parent | next [-]

It has gotten worse and they tightened the limits for paying customers recently: https://x.com/antigravity/status/2031835833716625883 (only announcement on Twitter, not in the app nor via email)

kivle 16 hours ago | parent [-]

Limits are so low that I cancelled after about two weeks on my initial $0 trial. I tried making a change to a tiny code base with Claude Sonnet (which they offer in Antigravity). It couldn't even finish the change before my weekly limit was used up, reset in 7 days.

koakuma-chan an hour ago | parent [-]

To be fair, you shouldn't expect them to subsidize Anthropic models. What about the limits for Gemini?

kivle 12 minutes ago | parent [-]

I tried the Anthropic models because gemini-pro had already been rate limited with a 5 day wait. I got some actual usage out of the Google model, but laughably little compared to what I got with ChatGPT Plus. This is definitely not an imagined thing from my side, you just have to look at the Antigravity forums:

https://discuss.ai.google.dev/new

htrp 12 hours ago | parent | prev | next [-]

>There is currently no support for:

>Bring-your-own-key or bring-your-own-endpoint for additional rate limits

>Organizational tiers in general availability, or via contract[1]

Literal clown car product.

No plan for serious enterprise support (even 6 months after launch)

[1]https://antigravity.google/docs/plans

UncleOxidant 15 hours ago | parent | prev | next [-]

I find it pretty good. And Gemini 3.1 Pro seems quite capable. Not as good at some things as Claude, but better at others. I was trying to target a Verilog design to an uncommon FPGA and board, and Gemini went out and searched for the FPGA docs and examined the schematics for the board in order to do the pin assignments (generating a .ccf file). Not sure if Claude could've done that.

BoredPositron 17 hours ago | parent | prev [-]

Probably the best value for a good amount of anthropic credits. You can also share your Google ai subscription with up to four family members and they all get the same amount of credits...

jmspring 18 hours ago | parent | prev | next [-]

Twitter has the mass adoption, and it takes an effort to avoid bot/particular view bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

ben_w 17 hours ago | parent | prev | next [-]

> Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`

Agree re Twitter "good" != valuable.

sroussey 15 hours ago | parent [-]

Where system prompt lists a certain someone’s latest tweets.

sheepscreek 16 hours ago | parent | prev | next [-]

AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.

sroussey 15 hours ago | parent [-]

You can use cursor with grok, though my experience is that grok is the worst of the API providers cursor supports.

giancarlostoro 17 hours ago | parent | prev | next [-]

> but I cannot imagine it's a valuable dataset.

It's going to be a mixed batch, but any time there are world events, for as far back as I can think, Twitter (now X) was always first in breaking news. There are plenty of people and news orgs still on X because they need to be for the audience.

samrus 15 hours ago | parent | prev | next [-]

Twitter as a data source is interesting. I think it gets overhyped because that's Elon's grift. But I can't deny that the real-time info aspect of it is pretty valuable. I definitely think, though, that it's not that much more valuable than the open internet as a source of context. Everything worthwhile on Twitter will end up elsewhere with a bit of lag, and the stuff that won't is noise anyway.

laidoffamazon 14 hours ago | parent | prev | next [-]

As someone trying to monitor the situation using Twitter the last few weeks it’s awful and it used to not be!

Rover222 14 hours ago | parent [-]

It’s flawed, but still the obvious place to monitor a situation.

rchaud 13 hours ago | parent [-]

It's long been taken over by Telegram, which among its other advantages (more like a message board than 'town square'), doesn't have hordes of people commenting "@grok explain this to me" under every post.

Rover222 37 minutes ago | parent [-]

I've never even heard of telegram competing with X for live world events updates. But maybe I'm just missing out.

BurningFrog 17 hours ago | parent | prev | next [-]

Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.

vibeprofessor 15 hours ago | parent | prev | next [-]

[dead]

EGreg 16 hours ago | parent | prev [-]

I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.

But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.

kennywinker 16 hours ago | parent | next [-]

I think the issue is simply this: wikipedia trends towards unbiased info through use of the crowd. Grok, with a single owner with an ax to grind, trends towards whatever elon wants. It’s poisoned information under the control of one man - cyberpunk novels have been written about less.

wat10000 15 hours ago | parent | next [-]

A concrete example: a few weeks ago, Musk was making a big deal about how most of his massive net worth was not held in cash, and by a total coincidence the phrase "primarily derived from equity stakes rather than cash" showed up on his Grokipedia page in the section about net worth. I checked the pages of several other extremely wealthy people and none of them had such a comment.

tmp10423288442 14 hours ago | parent | prev [-]

> wikipedia trends towards unbiased info through use of the crowd

See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration of squaring the naive assertion that Wikipedia distills the wisdom of the crowd with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.

kennywinker 13 hours ago | parent [-]

> and they reject edits that go against their views

Citation needed. See what i did there ;)

They reject edits that go against their views on tone and sourcing, not political views, as far as I am aware - I am sure it happens from time to time, but unless there's a consistent bias in one direction this isn't a valid criticism of the political neutrality of wikipedia.

Even if there is rampant bias in wikipedia, that's a reason to fork it and change the structure and gatekeeping - not to replace it with a techno-authoritarian AI version controlled by a single billionaire. That's amplifying the problem from an aggregate bias of 600,000 users who have made an edit in the last 30 days[1] to just one editor who uses AI to make it seem impartial.

[1] https://expandedramblings.com/index.php/wikipedia-statistics...

tmp10423288442 11 hours ago | parent [-]

I would prefer to fork Wikipedia as well, but in practice I don't think that works, given the many failed Wikipedia forks of the past 20 years. On the internet, the only way to get any alternative to a widely-used source like Wikipedia is to use a significantly different approach. Otherwise, you just look like a cheap knockoff, even to people who might otherwise agree with your approach. Worse is better, after all - worse in most ways, but better or different in at least one innovative way.

kennywinker 10 hours ago | parent [-]

Well, here’s hoping grokpedia goes and joins the rest of the failed attempts.

Avshalom 14 hours ago | parent | prev | next [-]

>>I don't like how he's been biasing Grok, etc.

>>But, what exactly is so bad about Grokipedia

sumeno 14 hours ago | parent | prev [-]

It's controlled by a guy who spends all day retweeting white supremacists and lying about his companies. Why should anyone who isn't a white supremacist use it?

baublet 10 hours ago | parent [-]

They would not. They do not.