| ▲ | cupofjoakim 7 hours ago |
| > Opus 4.7 uses an updated tokenizer that improves how the model processes text. The tradeoff is that the same input can map to more tokens—roughly 1.0–1.35× depending on the content type. caveman[0] is becoming more relevant by the day. I already enjoy reading its output more than vanilla so suits me well. [0] https://github.com/JuliusBrussee/caveman/tree/main |
|
| ▲ | Tiberium 7 hours ago | parent | next [-] |
| I hope people realize that tools like caveman are mostly joke/prank projects - almost the entirety of the context spent is in file reads (for input) and reasoning (in output); you will barely save even 1% with such a tool, and might actually confuse the model or make it reason for more tokens because it'll have to formulate its response in a way that satisfies the requirements. |
| |
| ▲ | embedding-shape 6 hours ago | parent | next [-] | | > I hope people realize that tools like caveman are mostly joke/prank projects This seems to be a common thread in the LLM ecosystem: someone starts a project for shits and giggles, makes it public, most people get the joke, others think it's serious, the author eventually tries to turn the joke project into a VC-funded business, some people stand watching with their jaws open, the world moves on. | | |
| ▲ | simonw 6 hours ago | parent | next [-] | | I was convinced https://github.com/memvid/memvid was a joke until it turned out it wasn't. | | |
| ▲ | embedding-shape 6 hours ago | parent | next [-] | | To be fair, most of us looked at GPT1 and GPT2 as fun and unserious jokes, until it started putting together sentences that actually read like real text, I remember laughing with a group of friends about some early generated texts. Little did we know. | | |
| ▲ | Alifatisk 6 hours ago | parent | next [-] | | Are there any public records I can see from GPT1 and GPT2 output and how it was marketed? | | |
| ▲ | embedding-shape 5 hours ago | parent | next [-] | | HN submissions have a bunch of examples in them, but worth remembering they were released as "Look at this somewhat cool and potentially useful stuff" rather than what we see today, LLMs marketed as tools. https://news.ycombinator.com/item?id=21454273 / https://news.ycombinator.com/item?id=19830042 - OpenAI Releases Largest GPT-2 Text Generation Model HN search for GPT between 2018-2020, lots of results, lots of discussions: https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&... | |
| ▲ | mlsu 5 hours ago | parent | prev | next [-] | | I was first made aware of GPT2 from reading Gwern -- "huh, that sounds interesting" -- but really didn't start really reading model output until I saw this subreddit: https://www.reddit.com/r/SubSimulatorGPT2/ There is a companion Reddit, where real people discuss what the bots are posting: https://www.reddit.com/r/SubSimulatorGPT2Meta/ You can dig around at some of the older posts in there. | |
| ▲ | walthamstow 6 hours ago | parent | prev | next [-] | | I don't think it was marketed as such, they were research projects. GPT-3 was the first to be sold via API | |
| ▲ | PufPufPuf 2 hours ago | parent | prev | next [-] | | I used GPT-2 (fine-tuned) to generate Peppa Pig cartoons, it was cutely incoherent https://youtu.be/B21EJQjWUeQ | |
| ▲ | maplethorpe 5 hours ago | parent | prev | next [-] | | From a 2019 news article: > New AI fake text generator may be too dangerous to release, say creators > The Elon Musk-backed nonprofit company OpenAI declines to release research publicly for fear of misuse. > OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough. https://www.theguardian.com/technology/2019/feb/14/elon-musk... | | |
| ▲ | ethbr1 5 hours ago | parent [-] | | Aka 'We cared about misuse right up until it became apparent there was profit to be had.' OpenAI sure speedran the Google and Facebook 'Don't be evil' -> 'Optimize money' transition. | | |
| ▲ | sfn42 4 hours ago | parent [-] | | Or - making sensational statements gets attention. A dangerous tool is necessarily a powerful tool, so that statement is pretty much exactly what you'd say if you wanted to generate hype, make people excited and curious about your mysterious product that you won't let them use. | | |
|
| |
| ▲ | wat10000 5 hours ago | parent | prev | next [-] | | You can run GPT2! Here's the medium model: https://huggingface.co/openai-community/gpt2-medium I will now have it continue this comment: I've been running gps for a long time, and I always liked that there was something in my pocket (and not just me). One day when driving to work on the highway with no GPS app installed, I noticed one of the drivers had gone out after 5 hours without looking. He never came back! What's up with this?
So i thought it would be cool if a community can create an open source GPT2 application which will allow you not only to get around using your smartphone but also track how long you've been driving and use that data in the future for improving yourself...and I think everyone is pretty interested. [Updated on July 20]
I'll have this running from here, along with a few other features such as: - an update of my Google Maps app to take advantage it's GPS capabilities (it does not yet support driving directions) - GPT2 integration into your favorite web browser so you can access data straight from the dashboard without leaving any site!
Here is what I got working. [Updated on July 20] | | |
| ▲ | fancyfredbot 36 minutes ago | parent [-] | | Wow that is terrible. In my memory GPT 2 was more interesting than that. I remember thinking it could pass a Turing test but that output is barely better than a Markov chain. I guess I was using the large model? | | |
| |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | Bombthecat 5 hours ago | parent | prev [-] | | And now GPT is laughing, while it replaces coders lol |
| |
| ▲ | MarcelOlsz 6 hours ago | parent | prev | next [-] | | Why? Doesn't have jokey copy. Any thoughts on claude-mem[0] + context-mode[1]? [0] https://github.com/thedotmack/claude-mem [1] https://github.com/mksglu/context-mode | | |
| ▲ | simonw 5 hours ago | parent [-] | | The big idea with Memvid was to store embedding vector data as frames in a video file. That didn't seem like a serious idea to me. | | |
| ▲ | nico 5 hours ago | parent | next [-] | | Very cool idea. Been playing with a similar concept: break down one image into smaller self-similar images, order them by data similarity, use them as frames for a video You can then reconstruct the original image by doing the reverse, extracting frames from the video, then piecing them together to create the original bigger picture Results seem to really depend on the data. Sometimes the video version is smaller than the big picture. Sometimes it’s the other way around. So you can technically compress some videos by extracting frames, composing a big picture with them and just compressing with jpeg | |
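The reversible tiling step described above can be sketched roughly like this (a toy, not the actual project: tiles keep their coordinates, the ordering key is a crude pixel-sum stand-in for real similarity, and any actual compression would come from a video codec downstream):

```python
# Split a 2D "image" (list of rows) into fixed-size tiles, order the
# tiles so similar ones sit next to each other, then rebuild the
# original from the stored coordinates.
def tile(img, ts):
    h, w = len(img), len(img[0])
    tiles = []
    for y in range(0, h, ts):
        for x in range(0, w, ts):
            block = [row[x:x + ts] for row in img[y:y + ts]]
            tiles.append(((y, x), block))
    # Order "frames" by a crude similarity proxy (pixel sum) so a
    # codec could exploit frame-to-frame redundancy.
    tiles.sort(key=lambda t: sum(map(sum, t[1])))
    return tiles

def untile(tiles, h, w):
    img = [[0] * w for _ in range(h)]
    for (y, x), block in tiles:
        for dy, row in enumerate(block):
            img[y + dy][x:x + len(row)] = row
    return img

img = [[(r * 4 + c) % 7 for c in range(4)] for r in range(4)]
frames = tile(img, 2)
assert untile(frames, 4, 4) == img  # the round trip is lossless
```

Whether the reordered-frames video ends up smaller than the source really does depend on how much redundancy the codec can find, which matches the "sometimes smaller, sometimes not" observation.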
| ▲ | jermaustin1 5 hours ago | parent | prev [-] | | > embedding vector data as frames in a video file Interesting, when I heard about it, I read the readme, and I didn't take that as literal. I assumed it was meant as we used video frames as inspiration. I've never used it or looked deeper than that. My LLM memory "project" is essentially a `dict<"about", list<"memory">>` The key and memories are all embeddings, so vector searchable. I'm sure its naive and dumb, but it works for my tiny agents I write. |
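The `dict<"about", list<"memory">>` shape sketches out to something like this (illustrative only, not jermaustin1's actual code; the bag-of-characters `embed` is a toy stand-in for a real embedding model):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-characters
    # frequency vector. A real setup would call an embedding API here.
    return Counter(text.lower())

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    """Roughly dict<about, list<memory>>, with vector-searchable keys."""
    def __init__(self):
        self.store = {}  # about-text -> (embedding, [memories])

    def add(self, about, memory):
        self.store.setdefault(about, (embed(about), []))[1].append(memory)

    def recall(self, query, top_k=1):
        # Rank stored keys by similarity to the query embedding.
        q = embed(query)
        ranked = sorted(self.store.items(),
                        key=lambda kv: cosine(q, kv[1][0]), reverse=True)
        return [m for _, (_, mems) in ranked[:top_k] for m in mems]

mem = Memory()
mem.add("user preferences", "prefers dark mode")
mem.add("project deadlines", "launch is in March")
print(mem.recall("what does the user prefer?"))  # ['prefers dark mode']
```

Naive or not, for small agents this kind of flat keyed store is easy to reason about, which is most of its appeal.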
|
| |
| ▲ | niuzeta 6 hours ago | parent | prev | next [-] | | Just read through the readme and I was fairly sure this was a well-written satire through "Smart Frames". Honestly part of me still thinks this is a satire project but who knows. | |
| ▲ | DiffTheEnder 5 hours ago | parent | prev | next [-] | | Is this... just one file acting as memory? | |
| ▲ | 5 hours ago | parent | prev [-] | | [deleted] |
| |
| ▲ | combobyte 4 hours ago | parent | prev | next [-] | | > most people get the joke I hope you're right, but from my own personal experience I think you're being way too generous. | |
| ▲ | dakolli 4 hours ago | parent | prev | next [-] | | It's the same as the crypto/NFT hype cycles, except this time one of the joke projects is going to crash the economy. |
| ▲ | imiric 6 hours ago | parent | prev [-] | | A major reason for that is because there's no way to objectively evaluate the performance of LLMs. So the meme projects are equally as valid as the serious ones, since the merits of both are based entirely on anecdata. It also doesn't help that projects and practices are promoted and adopted based on influencer clout. Karpathy's takes will drown out ones from "lesser" personas, whether they have any value or not. |
| |
| ▲ | stingraycharles 6 hours ago | parent | prev | next [-] | | While the caveman stuff is obviously not serious, there is a lot of legit research in this area. Which means yes, you can actually influence this quite a bit. Read the paper “Compressed Chain of Thought”, for example; it shows it’s really easy to make significant reductions in reasoning tokens without affecting output quality. There is not too much research into this (about 5 papers in total), but with that it’s possible to reduce output tokens by about 60%. Given that output is an incredibly significant part of the total costs, this is important. https://arxiv.org/abs/2412.13171 | | |
| ▲ | altruios 5 hours ago | parent | next [-] | | Who would suspect that the companies selling 'tokens' would (unintentionally) train their models to prefer longer answers, reaping a HIGHER ROI (the thing a publicly traded company is legally required to pursue: good thing these are all still private...)... because it's not like private companies want to make money... | | |
| ▲ | stingraycharles 4 hours ago | parent | next [-] | | I don’t think this is a plausible argument, as they’re generally capacity constrained, and everyone would like shorter (= faster) responses. I’m fairly certain that in a few more releases we’ll have models with shorter CoT chains. Whether they’ll still let us see those is another question, as it seems like Anthropic wants to start hiding their CoT, potentially because it reveals some secret sauce. | |
| ▲ | fancyfredbot 28 minutes ago | parent | prev | next [-] | | Try setting up one laundry which charges by the hour and washes clothes really really slowly, and another which washes clothes at normal speed at cost plus some margin similar to your competitors. The one which maximizes ROI will not be the one you rigged to cost more and take longer. | |
| ▲ | gwern 2 hours ago | parent | prev [-] | | LLM APIs sell on value they deliver to the user, not the sheer number of tokens you can buy per $. The latter is roughly labor-theory-of-value levels of wrong. |
| |
| ▲ | ACCount37 6 hours ago | parent | prev | next [-] | | Some labs do it internally because RLVR is very token-expensive. But it degrades CoT readability even more than normal RL pressure does. It isn't free either - by default, models learn to offload some of their internal computation into the "filler" tokens. So reducing raw token count always cuts into reasoning capacity somewhat. Getting closer to "compute optimal" while reducing token use isn't an easy task. | | |
| ▲ | stingraycharles 6 hours ago | parent [-] | | Yeah the readability suffers, but as long as the actual output (ie the non-CoT part) stays unaffected it’s reasonably fine. I work on a few agentic open source tools and the interesting thing is that once I implemented these things, the overall feedback was a performance improvement rather than performance reduction, as the LLM would spend much less time on generating tokens. I didn’t implement it fully, just a few basic things like “reduce prose while thinking, don’t repeat your thoughts” etc would already yield massive improvements. |
| |
| ▲ | AdamN 6 hours ago | parent | prev [-] | | Yeah, you could easily imagine stenography-like inputs and outputs for rapid iteration loops. It's also true that on social media people already want faster-to-read snippets that drop grammar, so the desire for density is already there for human authors/readers. |
| |
| ▲ | ieie3366 6 hours ago | parent | prev | next [-] | | All LLMs also effectively work by "larping" a role. You steer it towards larping a caveman and, well... let's just say they weren't known for their high IQ | | |
| ▲ | roughly 6 hours ago | parent | next [-] | | Fun fact: Neanderthals actually had larger brains than Homo sapiens! Modern humans are thought to have outcompeted them by working better together in larger groups, but in terms of actual individual intelligence, Neanderthals may have had us beat. Similarly, humans have been undergoing a process of self-domestication over the last couple of millennia that has resulted in physiological changes that include a smaller brain size - again, our advantage over our wilder forebears remains that we're better in larger social groups than they were and are better at shared symbolic reasoning and synchronized activity, not necessarily that our brains are more capable. (No, none of this changes that if you make an LLM larp a caveman it's gonna act stupid, you're right about that.) | |
| ▲ | adwn 5 hours ago | parent [-] | | I thought we were way past the "bigger brain means more intelligence" stage of neuroscience? | | |
| ▲ | seba_dos1 5 hours ago | parent | next [-] | | Bigger brain does not automatically mean more intelligence, but we have reasons to suspect that homo neanderthalensis may have been more intelligent than contemporary homo sapiens other than bigger brains. | |
| ▲ | nomel 5 hours ago | parent | prev | next [-] | | All data shows there's a moderate correlation. | |
| ▲ | dtech 4 hours ago | parent | prev | next [-] | | You can't draw conclusions on individuals, but at a species level bigger brain, especially compared to body size, strongly correlates with intelligence | |
| ▲ | waffletower 5 hours ago | parent | prev [-] | | Even neuronal density is simplistic, and the dimension of size alone doesn't consider that. |
|
| |
| ▲ | Hikikomori 6 hours ago | parent | prev | next [-] | | Modern humans were also cavemen. | |
| ▲ | DiogenesKynikos 6 hours ago | parent | prev [-] | | This is why ancient Chinese scholar mode (also extremely terse) is better. |
| |
| ▲ | SEJeff an hour ago | parent | prev | next [-] | | I believe tools like graphify cut down the tokens in thinking dramatically. It makes a knowledge graph and dumps it into markdown that is honestly awesome. Then it has stubs that pretend to be some tools like grep that read from the knowledge graph first so it does less work. Easy to setup and use too. I like it. https://graphify.net/ | |
| ▲ | sambellll 32 minutes ago | parent | prev | next [-] | | Someone should make an MCP that parses every non-code file before it hits claude to turn it into caveman talk | |
| ▲ | bensyverson 6 hours ago | parent | prev | next [-] | | Exactly. The model is exquisitely sensitive to language. The idea that you would encourage it to think like a caveman to save a few tokens is hilarious but extremely counter-productive if you care about the quality of its reasoning. | |
| ▲ | reacharavindh 5 hours ago | parent | prev | next [-] | | This specific form may be a joke, but token-conscious work is becoming more and more relevant.
Look at
https://github.com/AgusRdz/chop And https://github.com/toon-format/toon | | | |
| ▲ | sidrag22 3 hours ago | parent | prev | next [-] | | I hesitated 100% when I saw caveman gaining steam. Changing something like this absolutely changes the behaviour of the model's responses; simply including an "lmao" or something casual in a reply shifts the tone into a more relaxed, "ya whatever" style. I think a lot of people echo my criticism, and I'd assume the major LLM providers are the actual winners of that repo getting popular, for the same reason you stated. > you will barely save even 1% with such a tool For the end user this doesn't make a huge impact; in fact it potentially hurts if it means you get less serious replies from the model. But as with any minor change multiplied across a ton of users, it adds up to significant savings for the providers. I still think the best current way to save tokens is keeping the model able to find what it needs without combing through files for no reason. It costs some upfront tokens if you delegate keeping those navigation files up to date to the agent, but it pays dividends when future sessions have a smaller context window and only the relevant portions of the project get loaded. |
| ▲ | causal 5 hours ago | parent | prev | next [-] | | Output tokens are more expensive | |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | Waterluvian 6 hours ago | parent | prev | next [-] | | Help me understand: I get that the file reading can be a lot. But I also expand the box to see its “reasoning” and there’s a ton of natural language going on there. | |
| ▲ | egorfine 6 hours ago | parent | prev | next [-] | | They are indeed impractical in agentic coding. However, in deep research-like products you can have an LLM pass that compresses web page text into caveman speak, hugely reducing token count. | |
| ▲ | claytongulick 6 hours ago | parent [-] | | I don't understand how this would work without a huge loss in resolution or "cognitive" ability. Prediction works based on the attention mechanism, and modern humans don't speak like cavemen - so how could you expect a useful token chain when the model wasn't trained on speech like that? I get the concept of transformers, but this isn't doing a 1:1 transform from English to French or whatever; you're fundamentally unable to represent certain concepts effectively in caveman speak... or am I missing something? | |
| ▲ | egorfine 4 hours ago | parent [-] | | Good catch actually. Okay maybe not exactly caveman dialect, but text compression using LLM is definitely possible to save on tokens in deep research. |
|
| |
| ▲ | addandsubtract 5 hours ago | parent | prev | next [-] | | We started out with oobabooga, so caveman is the next logical evolution on the road to AGI. | |
| ▲ | make3 7 hours ago | parent | prev | next [-] | | I wonder if you can have it reason in caveman | | |
| ▲ | 0123456789ABCDE 6 hours ago | parent [-] | | would you be surprised if this is what happens when you ask it to write like one? folks could have just asked for _austere reasoning notes_ instead of "write like you suffer from arrested development" | | |
| ▲ | Sohcahtoa82 6 hours ago | parent [-] | | > "write like you suffer from arrested development" My first thought was that this would mean that my life is being narrated by Ron Howard. |
|
| |
| ▲ | acedTrex 7 hours ago | parent | prev | next [-] | | You really think the 33k people that starred a 40 line markdown file realize that? | | |
| ▲ | andersa 6 hours ago | parent | next [-] | | You mean the 33k bots that created a nearly linear stars/day graph? There's a dip in the middle, but it was very blatant at the start (and now) | |
| ▲ | verdverm 6 hours ago | parent | prev | next [-] | | Stars are more akin to bookmarks and likes these days, as opposed to a show of support or "I use this" | | | |
| ▲ | pdntspa 6 hours ago | parent | prev [-] | | The amount of cargo culting amongst AI halfwits (who seem to have a lot of overlap with influencers and crypto bros) is INSANE I mean just look at the growth of all these "skills" that just reiterate knowledge the models already have |
| |
| ▲ | micromacrofoot 5 hours ago | parent | prev [-] | | I mean we had a shoe company pivot to AI and raise their stock value by 300%, how can we even know anymore |
|
|
| ▲ | gghootch 6 hours ago | parent | prev | next [-] |
| Caveman is fun, but the real tool you want for reducing token usage is headroom: https://github.com/gglucass/headroom-desktop (mac app) https://github.com/chopratejas/headroom (cli) |
| |
| ▲ | gilles_oponono 3 hours ago | parent | next [-] | | Different positioning:
- headroom compresses inputs, open source
- caveman works on outputs, open source
- edgee is a more corporate offering | |
| ▲ | kokakiwi 5 hours ago | parent | prev | next [-] | | Headroom looks great for client-side trimming. If you want to tackle this at the infrastructure level, we built Edgee (https://www.edgee.ai) as an AI Gateway that handles context compression, caching, and token budgeting across requests, so you're not relying on each client to do the right thing. (I work at Edgee, so biased, but happy to answer questions.) | | | |
| ▲ | stavros 4 hours ago | parent | prev [-] | | I tried to use rtk for the same, and my agent session would just loop the same tool call over and over again. Does headroom work better? | | |
|
|
| ▲ | computomatic 7 hours ago | parent | prev | next [-] |
| I was doing some experiments with removing top 100-1000 most common English words from my prompts. My hypothesis was that common words are effectively noise to agents. Based on the first few trials I attempted, there was no discernible difference in output. Would love to compare results with caveman. Caveat: I didn’t do enough testing to find the edge cases (eg, negation). |
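The experiment is trivial to reproduce. A minimal sketch, with the negation caveat handled via a whitelist (both word lists here are illustrative stand-ins, not the actual top-100/1000 frequency list used):

```python
# Drop very common English words from a prompt, but whitelist
# negations, which flip meaning if removed.
COMMON = {"the", "a", "an", "of", "to", "in", "is", "are", "that",
          "it", "and", "for", "on", "with", "as", "be", "this"}
NEGATIONS = {"not", "no", "never", "don't", "won't", "cannot"}

def strip_common(prompt):
    kept = [w for w in prompt.split()
            if w.lower() in NEGATIONS or w.lower() not in COMMON]
    return " ".join(kept)

print(strip_common("Do not use the deprecated API in this module"))
# Do not use deprecated API module
```

Note this only shrinks the prompt you type; it does nothing for file reads or reasoning tokens, which is where most context goes.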
| |
| ▲ | computerphage 6 hours ago | parent | next [-] | | Yeah, when I'm writing code I try to avoid zeros and ones, since those are the most common bits, making them essentially noise | |
| ▲ | ruairidhwm 6 hours ago | parent | prev | next [-] | | I literally just posted a blog on this. Some seemingly insignificant words are actually highly structural to the model. https://www.ruairidh.dev/blog/compressing-prompts-with-an-au... | | |
| ▲ | cheschire 6 hours ago | parent [-] | | I suspect even typos have an impact on how the model functions. I wonder if there’s a pre-processor that runs to remove typos before processing. If not, that feels like a space that could be worked on more thoroughly. | | |
| ▲ | ruairidhwm 5 hours ago | parent | next [-] | | I guess just a spell-check in the repo? But yes, I'd imagine that they have an effect. Even running the same input twice is non-deterministic. | | |
| ▲ | cheschire 5 hours ago | parent | next [-] | | The ability for audio processing to figure out spelling from context, especially with regards to acronyms that are pronounced as words, leads me to believe there’s potential for a more intelligent spell check preprocess using a cheaper model. | |
| ▲ | mathieudombrock 4 hours ago | parent | prev [-] | | The same input twice is only nondeterministic if you don't control the seed. |
| |
| ▲ | 0123456789ABCDE 6 hours ago | parent | prev [-] | | there is no pre-processor, i've had typos go through, with claude asking to make sure i meant one thing instead of the other | | |
| ▲ | PhilipRoman 5 hours ago | parent [-] | | I strongly suspected that there was some pre/postprocessing going on when trying to get it to output rot13("uryyb, jbyeq"), but it's probably just due to massively biased token probabilities. Still, it creates some hilarious output, even when you clearly point out the error:
> Hmm, but wait — the original you gave was jbyeq not jbeyq: j→w, b→o, y→l, e→r, q→d = world
> So the final answer is still hello, world. You're right that I was misreading the input. The result stands.
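For what it's worth, the transform is easy to check outside the model, since Python ships rot13 as a codec; it confirms the input string really did contain a transposed "world":

```python
import codecs

# rot13 shifts each letter 13 places; punctuation passes through.
print(codecs.decode("uryyb, jbyeq", "rot13"))  # hello, wolrd
```

So the model's confidently corrected "hello, world" was itself wrong: the right answer was "hello, wolrd".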
|
|
|
| |
| ▲ | AlecSchueler 6 hours ago | parent | prev [-] | | Doesn't it just use more tokens in reasoning? |
|
|
| ▲ | alach11 2 hours ago | parent | prev | next [-] |
| On my private internal oil and gas benchmark, I found a counterintuitive result. Opus 4.7 scores 80%, outperforming Opus 4.6 (64%) and GPT-5.4 (76%). But it's the cheapest of the three models by 2x. This is mainly driven by reduced reasoning token usage. It goes to show that "sticker price" per token is no longer adequate for comparing model cost. |
|
| ▲ | TIPSIO 6 hours ago | parent | prev | next [-] |
| Oh wow, I love this idea even if it's relatively insignificant in savings. I am finding my prompt-writing style is naturally getting lazier, shorter, and more caveman just like this too. If I'm honest, it has made writing emails harder. While messing around, I tried a version of this with HTML to preserve tokens; it worked surprisingly well but was only an experiment. Something like: > <h1 class="bg-red-500 text-green-300"><span>Hello</span></h1> AI compressed to: > h1 c bgrd5 tg3 sp hello sp h1 Or something like that. |
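That kind of lossy shorthand is easy to prototype. A rough sketch of the idea (the abbreviation table is invented for illustration; a real scheme would need the dictionary shared with the model so the shorthand stays decodable):

```python
import re

# Invented abbreviation table, mirroring the example above.
ABBREV = {"class=": "c ", "bg-red-500": "bgrd5",
          "text-green-300": "tg3", "span": "sp"}

def compress(html):
    text = re.sub(r'[<>/"]', " ", html)  # strip markup punctuation
    for long_form, short in ABBREV.items():
        text = text.replace(long_form, short)
    return " ".join(text.split()).lower()

print(compress('<h1 class="bg-red-500 text-green-300"><span>Hello</span></h1>'))
# h1 c bgrd5 tg3 sp hello sp h1
```

The catch, as others note in this thread, is that whether the model still reasons well over the compressed form is an open question.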
| |
|
| ▲ | motoboi 5 hours ago | parent | prev | next [-] |
| Caveman hurt model performance. If you need a dumber model with less token output, just use sonnet-4-6 or other non-reasoning model. |
| |
| ▲ | hayd 3 hours ago | parent [-] | | Does it? I'm not sure I'd necessarily notice but I haven't found it noticeably worse. |
|
|
| ▲ | chrisweekly 6 hours ago | parent | prev | next [-] |
| I really enjoy the party game "Neanderthal Poetry", in which you can only speak using monosyllabic words. I bet you would too. |
|
| ▲ | JustFinishedBSG 4 hours ago | parent | prev | next [-] |
| Interesting, it doesn't seem intuitive at all to me. My (wrong?) understanding was that there was a positive correlation between how "good" a tokenizer is in terms of compression and the downstream model performance. Guess not. |
|
| ▲ | nickspag 5 hours ago | parent | prev | next [-] |
| I find grep and common cli command spam to be the primary issue. I enjoy Rust Token Killer https://github.com/rtk-ai/rtk, and agents know how to get around it when it truncates too hard. |
|
| ▲ | fzaninotto 4 hours ago | parent | prev | next [-] |
| To reduce token count on command outputs you can also use RTK [0] [0]: https://github.com/rtk-ai/rtk |
|
| ▲ | 6 hours ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | user34283 6 hours ago | parent | prev | next [-] |
| I used Opus 4.7 for about 15 minutes on the auto effort setting. It nicely implemented two smallish features, and already consumed 100% of my session limit on the $20 plan. See you again in five hours. |
| |
|
| ▲ | p_stuart82 4 hours ago | parent | prev | next [-] |
| Caveman stops being a style tool and starts being self-defense. Once your prompts come in up to 1.35x fatter, they've basically moved visibility and control entirely into their black box. |
|
| ▲ | hayd 6 hours ago | parent | prev | next [-] |
| me feel that it needs some tweaking - it's a little annoyingly cute (and could be even terser). |
|
| ▲ | ctoth 4 hours ago | parent | prev | next [-] |
| 1.35 times! For Input!
For what kinds of tokens precisely? Programming? Unicode? If they seriously increased token usage by 35% for typical tasks this is gonna be rough. |
|
| ▲ | OtomotO 7 hours ago | parent | prev [-] |
| Another supply chain attack waiting to happen? Have you tried just adding an instruction to be terse? Don't get me wrong, I've tried out caveman as well, but these days I wonder whether something this popular will be hijacked. |
| |
| ▲ | pawelduda 6 hours ago | parent [-] | | People are really trigger-happy when it comes to throwing magic tools on top of AI that claim to "fix" the weak parts (often placeboing themselves because anthropic just fixed some issue on their end). Then the next month 90% of this can be replaced with new batch of supply chain attack-friendly gimmicks Especially Reddit seems to be full of such coding voodoo | | |
| ▲ | JohnMakin 6 hours ago | parent | next [-] | | My favorite to chuckle at are the prompt hack voodoo stuff, like, “tell it to be correct” or “say please” or “tell it someone will die if it doesnt do a good job,” often presented very seriously and with some fast cutting animations in a 30 second reel | | | |
| ▲ | xienze 6 hours ago | parent | prev [-] | | > coding voodoo Well, we've sacrificed the precision of actual programming languages for the ease of English prose interpreted by a non-deterministic black box that we can't reliably measure the outputs of. It's only natural that people are trying to determine the magical incantations required to get correct, consistent results. |
|
|