moralestapia 7 hours ago
I was just thinking the other day about how AI will pretty soon be able to create these kinds of explainers on anything, quite quickly. Amazing times!
bayesnet 7 hours ago
It’s kind of sad, IMO. Bartosz has made a ton of these super interesting and meticulously designed explainers. Something thrown together with AI is much more likely to be made by someone who doesn’t know what they’re talking about, and I’m worried that the sheer volume will crowd out genuinely high-quality content like this.
hollowturtle 7 hours ago
I don't think so, and we should stop spreading damaging narratives like this. I'd say AI is already able to imitate this kind of explainer (badly) thanks to its training data. All the subtle teaching nuances, effort, know-how, and visual creativity that people like Bartosz Ciechanowski put into this kind of work are not reproducible except by statistically imitating them.
jkubicek 5 hours ago
Good rule of thumb: it should take less time to consume content than it does to create it. I don’t know how long it takes Ciechanowski to create these explainers, probably a few months? It shows, and it’s well worth spending your time reading through his content meticulously. How long does it take an LLM to crap out an equivalent explainer? 60 seconds? You should be spending less time than that reading it.
carlos-menezes 7 hours ago
Haven't people been saying this since 2023? I have yet to see AI build this kind of stuff "quite quickly".
tolerance 4 hours ago
To be taken seriously, I feel like statements like this need to be qualified with who the claimant imagines will be responsible for generating the anticipated output. The ‘A’ in AI isn’t for ‘autonomous’. Bartosz Ciechanowski could generate an explainer like this using Claude today if he wanted to. But would he? If someone like him had a mind to do it, then they could instead. But where’s it at? These types may hold themselves to a standard above this method. No shame in that.
estsauver 7 hours ago
I think it's actually already there. It's definitely possible to make these sorts of explainers with something like Claude Code; you just have to spend a fair amount of time making sure it's actually doing what you expect it to do. I think the biggest danger with something like Claude Code is that you get something that looks functionally correct but where the details are subtly wrong. I wrote a bit about this on my blog for some of the places where I've done visualizations, and I think it's remarkably easy to iterate on them now.
_verandaguy 7 hours ago
It's been said before, but this prediction isn't amazing, imo. I look forward to Bartosz's articles because they're rock-solid sources of information, and the visualizations are both easy to understand and surprisingly light on performance cost. It's all shockingly digestible. Honestly, as popular science writing goes, this is art as far as I'm concerned, and art is best when it comes from a place of passion and conviction, something AI will never be able to reproduce.