| ▲ | Open-source Zig book(zigbook.net) |
| 577 points by rudedogg 14 hours ago | 211 comments |
| |
|
| ▲ | poly2it 3 hours ago | parent | next [-] |
| > Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software. I'm not sure what they expect, but to me Zig looks very much like C with a modern standard lib and slightly different syntax. This isn't groundbreaking, nor a thought paradigm that should be especially novel to most systems engineers, the way, for example, OCaml could be. Stuff like this alienates people who want a technical justification for the use of a language. |
| |
| ▲ | obviouslynotme 2 hours ago | parent | next [-] | | There is nothing new under the Sun. However, some languages manifest as good rewrites of older languages. Rust is that for C++. Zig is that for C. Rust is the small, beautiful language hiding inside of Modern C++. Ownership isn't new. It's the core tenet of RAII. Rust just pulls it out of the backwards-compatible kitchen sink and builds it into the type system. Rust is worth learning just so that you can fully experience that lens of software development. Zig is Modern C development encapsulated in a new language. Most importantly, it dodges Rust and C++'s biggest mistake: not passing allocators into containers and functions. All realtime development has to rewrite its entire standard library, as with the EASTL. On top of the great standard library design, you get comptime, native build scripts, (err)defer, error sets, builtin SIMD, and tons of other small but important ideas. It's just a really good language that knows exactly what it is and who its audience is. | | |
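A concrete sketch of that allocator point, assuming a recent Zig release (the function name here is made up, and the stdlib surface does shift between versions): anything that needs heap memory takes the allocator as a parameter, so the caller decides whether it comes from an arena, a fixed buffer, or the page allocator.

    const std = @import("std");

    // Illustrative only: the caller chooses where memory comes from;
    // nothing allocates behind your back.
    fn repeatByte(allocator: std.mem.Allocator, byte: u8, count: usize) ![]u8 {
        const buf = try allocator.alloc(u8, count);
        @memset(buf, byte);
        return buf;
    }

    pub fn main() !void {
        const allocator = std.heap.page_allocator; // swap in an arena, a fixed buffer, etc.
        const line = try repeatByte(allocator, '-', 20);
        defer allocator.free(line);
        std.debug.print("{s}\n", .{line});
    }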
| ▲ | kibwen a minute ago | parent | next [-] | | > Rust and C++'s biggest mistake, not passing allocators into containers and functions Rather, basing its entire personality around this philosophy is Zig's biggest mistake. If you want to pass around allocators in C++ or Rust, you can just go ahead and do that. But the reason people don't isn't because it's impossible, it's because the overwhelming majority of the time it's a lot of ceremony for no benefit. | |
| ▲ | simonask an hour ago | parent | prev | next [-] | | I don't know man, Rust's borrowing semantics are pretty new under the sun, and actually do change the way you think about software. It's a pretty momentous paradigm shift. Zig is nice too, but it's not that. | | |
| ▲ | hu3 10 minutes ago | parent | next [-] | | There were languages with lifetimes and borrowing mechanics before Rust. Rust packages these mechanics in a nice way. Just like Zig encodes many niceties in a useful C language (comptime, simple cross-compilation, stdlib). | |
| ▲ | rjzzleep an hour ago | parent | prev [-] | | To call Rust syntax beautiful is a stretch. It seems that way in the beginning but then quickly devolves into a monstrosity when you start doing more complex things. Zig, on the other hand, specifically addresses some of C's syntax shortcomings, and it does it well. The claim that Rust is safer than C because it's more readable applies to Zig more than it does to Rust. I feel like the reason the Rust zealots lobby like crazy to embed Rust everywhere is twofold. One is that they genuinely believe in it, and the other is that they know that if other languages that address one of the main Rust claims without all the cruft gain popularity, they lose the chance of being permanently embedded in places like the kernel. Because once they're in, it's a decade-long job market. | | |
| ▲ | kibwen 3 minutes ago | parent | next [-] | | > if other languages that address one of the main rust claims without all the cruft But regardless of how much one likes Zig, it addresses none of the problems that Rust seeks to solve. It's not a replacement for Rust at all, and isn't suitable for any of the domains where Rust excels. | |
| ▲ | hnarn 41 minutes ago | parent | prev [-] | | > they know that if other languages that address one of the main Rust claims without all the cruft gain popularity they lose the chance of being permanently embedded in places like the kernel First of all, I'm really opposed to saying "the kernel". I am sure you're talking about the Linux kernel, but there are other kernels (BSD, Windows etc.) that are certainly big enough not to call it "the" kernel, and that may also have their own completely separate "rust-stories". Secondly, I think the logic behind this makes no sense, primarily because Rust at this point is 10 years old from stable and almost 20 years old from initial release; the adoption into the Linux kernel wasn't exactly rushed. Even if it was, why would Rust adoption in the Linux kernel exclude adoption of another language as well, or a switch to another, if it's better? The fact that Rust was accepted at all alongside C disproves the assumption, because clearly that kernel is open to "better" languages. The _simplest_ explanation for why Rust has succeeded is that it solves actual problems, not that "zealots" are lobbying for it to ensure they "have a job".
|
| |
| ▲ | renewiltord 2 hours ago | parent | prev | next [-] | | > Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions Funny. This was a great sell to me. I wonder why it isn’t the blurb. Maybe it isn’t a great sell to others. The problem for me with so many of these languages is that they’re always eager to teach you how to write a loop when I couldn’t care less and would rather see the juice. However, nowadays with comprehensive books like this, LLM tools can better produce good results for me as I try it out. Thank you. | | |
| ▲ | obviouslynotme 2 hours ago | parent [-] | | Very, very few people outside of foundational system software, HFT shops, and game studios understand why it's a great selling point. Everyone else likes the other points and doesn't realize the actual selling point of the language. |
| |
| ▲ | ninetyninenine 2 hours ago | parent | prev [-] | | >Rust is that for C++ No it's not. Rust has roots in functional languages. It is completely orthogonal to C++. | | |
| ▲ | jasode 30 minutes ago | parent [-] | | Graydon Hoare, a former C++ programmer on Mozilla Firefox and the original creator of Rust, acknowledges that for many people, Rust has become a viable alternative to C++ : https://graydon2.dreamwidth.org/307291.html And on slide #4, he mentions that "C++ is well past expiration date" : https://venge.net/graydon/talks/intro-talk-2.pdf It's possible that Graydon's earliest private versions of Rust, in the 4 years prior to that PDF, were closer to an OCaml-style language, but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it. |
|
| |
| ▲ | wavemode 3 hours ago | parent | prev | next [-] | | Much of the book's copy appears to have been written by AI (despite the foreword statement that none of it was), which explains the hokey overenthusiasm and exaggerations. | | |
| ▲ | ninetyninenine 2 hours ago | parent [-] | | That's ok, in the near future nobody will actually read this book. AI will be reading it. This is training data. |
| |
| ▲ | userbinator 3 hours ago | parent | prev | next [-] | | For those who actually want to learn languages which are "fundamentally changing how you think about software", I'd recommend the Lisp family and APL family. | | |
| ▲ | miki123211 2 hours ago | parent | next [-] | | Am I correct that you can essentially "learn APL without learning APL" by just learning Numpy / Pytorch? I looked at array languages briefly, and my impression was that "ooh, this is just Numpy but weirder." | | |
| ▲ | veqq an hour ago | parent [-] | | Not even close. While Numpy has many similar operations, it lacks the terseness, concepts like trains and forks etc. Modern APL style doesn't use... control flow (neither recursion nor loops nor if...) and often avoids variables (tacit/point-free style). |
| |
| ▲ | pjmlp 2 hours ago | parent | prev | next [-] | | And Prolog as well. | |
| ▲ | zwnow 2 hours ago | parent | prev [-] | | I'd also throw Erlang/Elixir out there. And I really wished Elm wasn't such a trainwreck of a project... | | |
| ▲ | matu3ba 38 minutes ago | parent [-] | | What is the most optimal Erlang/Elixir you can think of regarding standardized effect systems for recording non-determinism, replaying and reversible computing? How comparable are performance numbers of Erlang/Elixir with Java and wasm? | | |
| ▲ | zwnow 31 minutes ago | parent [-] | | I'd recommend asking the Elixir community about this as I didn't even understand your question.
I am by no means a professional with Erlang/Elixir. I threw it out there because these language force you to think differently compared to common OOP languages. |
|
|
| |
| ▲ | adev_ 16 minutes ago | parent | prev | next [-] | | > It is about fundamentally changing how you think about software. > I'm not sure what they expect, but to me Zig looks very much like C Yes. I think people should sincerely stop with this kind of wording. That makes Zig look like some kind of cult. Technically speaking, Zig democratized the concept of imperative compile-time meta-programming (which is an excellent thing). For everything else, it is mainly reuse and cherry-picking from other languages. |
| ▲ | 43 minutes ago | parent | prev | next [-] | | [deleted] | |
| ▲ | IshKebab an hour ago | parent | prev | next [-] | | I guess comptime is a little different but yeah I wouldn't say it fundamentally changes how you think about software. I wouldn't say that about OCaml either really though. It's not wildly different in the way that e.g. Lean's type system, or Rust's borrow checker or Haskell's purity is. | |
| ▲ | wolvesechoes an hour ago | parent | prev [-] | | The Zig community really tries to match the Rust one in terms of cult resemblance. | | |
| ▲ | IshKebab an hour ago | parent | next [-] | | Did it occur to you that Rust and Zig might actually be very good? | |
| ▲ | hnarn 37 minutes ago | parent | prev [-] | | People that consider other people that are excited about something "culty" are usually people that themselves are excited by absolutely nothing. |
|
|
|
| ▲ | atwrk 8 minutes ago | parent | prev | next [-] |
| > Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software. Written by ChatGPT? |
|
| ▲ | PaulRobinson 13 hours ago | parent | prev | next [-] |
| This looks fantastic. Pedagogically it makes sense to me, and I love this approach of not just teaching a language, but a paradigm (in this case, low-level systems programming), in a single text. Zig got me excited when I stumbled into it about a year ago, but life got busy and then the io changes came along and I thought about holding off until things settled down - it's still a very young language. But reading the first couple of chapters has piqued my interest in a language and the people who are working with it in a way I've not run into since I encountered Ruby in ~2006 (before Rails hit v1.0), I just hope the quality stays this high all the way through. |
|
| ▲ | eibrahim 9 hours ago | parent | prev | next [-] |
| So many comments about the AI generation part. Why does it matter? If it’s good and accurate and helpful why do you care? That’s like saying you used a calculator to calculate your equations so I can’t trust you. I am just impressed by the quality and details and approach of it all. Nicely done (PS: I know nothing about systems programming and I have been writing code for 25 years) |
| |
| ▲ | johnfn 7 hours ago | parent | next [-] | | Because the site explicitly says: > The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices. If the site had said something like "We use AI to clean up our prose, but it was all audited thoroughly by a human after", I wouldn't have an issue. Even better if they shared their prompts. |
| ▲ | gassi 9 hours ago | parent | prev | next [-] | | > Why does it matter? Because AI gets things wrong, often, in ways that can be very difficult to catch. By their very nature LLMs write text that sounds plausible enough to bypass manual review (see https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...), so some find it best to avoid using it at all when writing documentation. | | |
| ▲ | cycomanic 8 hours ago | parent | next [-] | | But all those "it's AI" posts are about the prose and "style", not the actual content. So even if (and that is a big if) the text was written using the help of AI (and there are many valid reasons to use it, e.g. if you're not a native speaker), that does not mean the content came from AI and thus contains AI mistakes. If it was so obviously written by AI, then finding those mistakes should be easy? | | |
| ▲ | spaqin 7 hours ago | parent [-] | | The style is the easiest thing for people to catch; GP has said that the technical issues can be more difficult to find, especially in longer texts, though there are times when they are indeed caught. Passing even correct information through an LLM may or may not taint it; it may create sentences which at first glance look similar, but have a different, imprecise meaning - specific wording may be crucial in some cases. So if the style is in question, the content is as well. And if you can write the technically correct text in the first place, why would you put it through another step? |
| |
| ▲ | DrNosferatu 9 hours ago | parent | prev [-] | | Humans get things wrong too. Quality prose usually only becomes that after many reviews. | | |
| ▲ | gassi 8 hours ago | parent | next [-] | | AI tools make different types of mistakes than humans, and that's a problem. We've spent eons creating systems to mitigate and correct human mistakes, which we don't have for the more subtle types of mistakes AI tends to make. | |
| ▲ | loeg 6 hours ago | parent | prev | next [-] | | AI gets things wrong ("hallucinates") much more often than actual subject matter experts. This is disingenuous. | | |
| ▲ | rootlocus 3 hours ago | parent [-] | | Presumably the "subject matter expert" will review the output of the LLM, just like a reviewer. I think it's disingenuous to assume that just because someone used AI they didn't look at or reviewed the output. | | |
| ▲ | littlestymaar 2 hours ago | parent [-] | | A serious one yes. But why would a serious person claim that they wrote this without AI when it's obvious they used it?! Using any tool is fine, but someone bragging about not having used a tool they actually used should make you suspicious about the amount of care that went to their work. |
|
| |
| ▲ | ares623 5 hours ago | parent | prev | next [-] | | Fortunately, we can't just get rid of humans (right?) so we have to use them _somehow_ | |
| ▲ | DrNosferatu 2 hours ago | parent | prev | next [-] | | If AI is used by “fire and forget”, sure - there’s a good chance of slop. But if you carefully review and iterate the contributions of your writers - human or otherwise - you get a quality outcome. | | |
| ▲ | littlestymaar 2 hours ago | parent [-] | | Absolutely. But why would you trust the author to have done that when they are lying in a very obvious way about not using AI? Using AI is fine, it's a tool, it's not bad per se. But claiming very loud you didn't use that tool when it's obvious you did is very off-putting. |
| |
| ▲ | righthand 7 hours ago | parent | prev [-] | | That’s fine. Write it out yourself and then ask an AI how it could be improved with a diff. Now you’ve given it double human review (once in creation then again reviewing the diff) and single AI review. | | |
| ▲ | maxbond 3 hours ago | parent [-] | | That's one review with several steps and some AI assistance. Checking your work twice is not equivalent to it having it reviewed by two people, part of reviewing your work (or the work of others) is checking multiple times and taking advantage of whatever tools are at your disposal. |
|
|
| |
| ▲ | nmilo 7 hours ago | parent | prev | next [-] | | Because the first thing you see when you click the link is "Zero AI" pasted under the most obviously AI-generated copy I've ever seen. It's just an insult to our intelligence, obviously we're gonna call OP out on this. Why lie like that? | | |
| ▲ | xeonmc 6 hours ago | parent [-] | | It's funny how everyone has gaslit themselves into doubting their own intuitions on the most blatant specimen where it's not just a mere whiff of the reek but an overpowering pungency assaulting the senses at every turn, forcing themselves to exclaim "the Emperor's fart smells wonderful!" “The Party told you to reject the evidence of your eyes and ears. It was their final, most essential command.”
|
| |
| ▲ | kaycey2022 7 hours ago | parent | prev | next [-] | | It matters because it irritates me to no end that I have to review AI-generated content that a human did not verify first. I don't like being made to work under the guise of someone giving me free content. |
| ▲ | Klonoar 9 hours ago | parent | prev | next [-] | | > That’s like saying you used a calculator to calculate your equations so I can’t trust you. A calculator exists solely for the realm of mathematics, where you can afford to more or less throw away the value of human input and overall craftsmanship. That is not the case with something like this, which - while it leans in to engineering - is in effect viewed as a work of art by people who give a shit about the actual craft of writing software. | |
| ▲ | wolvesechoes an hour ago | parent | prev | next [-] | | > Why does it matter? I am just a human supremacist. | |
| ▲ | hatefulmoron 9 hours ago | parent | prev | next [-] | | If you believed that you wouldn't explicitly say there was no AI generated content at all, you'd let it speak for itself. | |
| ▲ | Barrin92 6 hours ago | parent | prev | next [-] | | >That’s like saying you used a calculator to calculate your equations so I can’t trust you. No it isn't. My TI-83 is deterministic and will give me exactly what I ask for, and will always do so, and when someone uses it they need to understand the math first or otherwise the calculator is useless. These AI models on the other hand don't care about correctness, by design don't give you deterministic answers, and the person asking the question might as well be a monkey as far as their own understanding of the subject matter goes. These models are if anything an anti-calculator. As Dijkstra points out in his fantastic essay on the idiocy of natural language "computation", what you are doing is exactly not computation but a kind of medieval incantation. Computers were designed to render impossible precisely the nonsense that LLMs produce. The biggest idiot on earth will still get a correct result from the calculator because unlike the LLM it is based on boolean logic, not verbal or pictorial garbage. https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667... | |
| ▲ | ropable 3 hours ago | parent | prev | next [-] | | An awful lot of commenters are convinced that it's AI-generated, despite explicit statements to the contrary. Maybe they're wrong, maybe they're right, but none of them currently have any proof stronger than vibes. It's like everyone has gaslit themselves into thinking that humans can't write well-structured neutral-tone docs any more. | |
| ▲ | zwnow 2 hours ago | parent | prev | next [-] | | I value human work and I do NOT value work that has been done with heavy AI usage.
Most AI things I've seen are slop, I instantly recognize AI songs for example. I just dont want anything to do with it. The uniqueness of creative work is lost with using AI. | |
| ▲ | maipen 9 hours ago | parent | prev [-] | | Insecurity, that's why. I too have this feeling sometimes. It's a coping mechanism. I don't know why we have this but I guess we have to see past it and adapt to reality. |
|
|
| ▲ | jimmytucson 5 hours ago | parent | prev | next [-] |
| > [Learning Zig] is about fundamentally changing how you think about software. Learning LISP, Fortran, APL, Perl, or really any language that is different from what you’re used to, will also do this for you. |
| |
| ▲ | userbinator 3 hours ago | parent | next [-] | | I'd add Prolog to that list; but Fortran and Perl aren't all that different from other procedural languages. | | | |
| ▲ | jamiejquinn an hour ago | parent | prev [-] | | I agree, I love zig but the things that make me program differently are features like excellent enum/union support, defer and comptime, which aren't readily available in the other languages I tend to use (C++, Fortran and Python). |
|
|
| ▲ | charlie90 8 hours ago | parent | prev | next [-] |
| >Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software. Zig is just C with a marketing push. Most developers already know C. |
| |
| ▲ | pjmlp 2 hours ago | parent | next [-] | | I would rephrase it as, Zig is just Modula-2 with a C like syntax. | |
| ▲ | dcre 7 hours ago | parent | prev | next [-] | | I suspect most developers do not know C. | | |
| ▲ | downrightmike 6 hours ago | parent [-] | | C is fine C++ is where they jumped the shark | | |
| ▲ | MyOutfitIsVague 5 hours ago | parent | next [-] | | C++ is far better than C in very many ways. It's also far worse than C in very many other ways. Given a choice between the two, I'd still choose C++ every day just for RAII. There's only so much that we can blame programmers for memory leaks, use-after-free, buffer overflows, and other things that are still common in new C code. At some point, it is the language itself that is unsuitable and insufficient. | |
| ▲ | bnolsen 5 hours ago | parent | prev | next [-] | | C++ explored a lot of ideas that some modern languages borrowed. C++ just had to haul along all the cruft it inherited and built up. | |
| ▲ | jeltz 5 hours ago | parent | prev [-] | | No, C is not fine. It is a really bad language that I unfortunately have to code professionally. |
|
| |
| ▲ | keyle 7 hours ago | parent | prev [-] | | That tagline unfortunately turned me off the book, without even starting to read. I really don't need this kind of self-enlightenment rubbish. What if I read the whole book and felt no change? I think I understand SoA just fine. | | |
| ▲ | xeonmc 7 hours ago | parent [-] | | It is also just such a supremely unziglike thing to state. | | |
| ▲ | Zambyte 6 hours ago | parent [-] | | Early talks by Andrew explicitly leaned into the notion that "software can be perfect", which is a deviation from how most programmers view software development. Zig also encourages you to "think like a computer" (also an explicit goal stated by Andrew) even more than C does on modern machines, given things like real vectors instead of relying on auto vectorization, the lack of a standard global allocator, and the lack of implicit buffering on standard io functions. I would definitely put Zig on the list of languages that made me think about programming differently. | | |
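For what it's worth, a tiny sketch of the "real vectors" point (just the @Vector builtin, nothing book-specific, and assuming a recent Zig release):

    const std = @import("std");

    pub fn main() void {
        const a: @Vector(4, f32) = .{ 1.0, 2.0, 3.0, 4.0 };
        const b: @Vector(4, f32) = .{ 10.0, 20.0, 30.0, 40.0 };
        // Element-wise add on an explicit vector type, rather than hoping
        // the optimizer auto-vectorizes a loop; coerce to an array to print.
        const sum: [4]f32 = a + b;
        std.debug.print("{any}\n", .{sum});
    }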
| ▲ | jamiejquinn 38 minutes ago | parent | next [-] | | Has it changed how you program in other languages? Because that to me is the true mark of a thought-shifting language. | |
| ▲ | keyle 4 hours ago | parent | prev [-] | | I'm not sure how what you stated is different from writing highly performance C. | | |
| ▲ | budro 2 hours ago | parent [-] | | I think it mostly comes down to the standard library guiding you down this path explicitly. The C stdlib is quite outdated and is full of bad design that affects both performance and ergonomics. It certainly doesn't guide you down the path of smart design. Zig _the language_ barely does any of the heavy lifting on this front. The allocator and io stories are both just stdlib interfaces. Really the language just exists to facilitate the great toolchain and stdlib. From my experience the stdlib seems to make all the right choices, and the only time it doesn't is when the API was quickly created to get things working, but hasn't been revisited since. A great case study of the stdlib being almost perfect is SinglyLinkedList [1]. Many other languages implement it as a container, but Zig has opted to implement it as an intrusively embedded element. This might confuse a beginner who would expect SinglyLinkedList(T) instead, but it has implications surrounding allocation and it turns out that embedding it gives you a more powerful API. And of course all operations are defined with performance in mind. prepend is given to you since it's cheap, but if you want postpend you have to implement it yourself (it's a one liner, but clearly more expensive to the reader). Little decisions add up to make the language feel great to use and genuinely impressive for learning new things. [1] https://ziglang.org/documentation/master/std/#std.SinglyLink... |
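To illustrate the intrusive-node idea the comment describes without leaning on the exact std.SinglyLinkedList API (which has shifted between releases), a hand-rolled sketch might look like this - the node lives inside your own struct, so the list itself never allocates:

    const std = @import("std");

    const Node = struct { next: ?*Node = null };

    const List = struct {
        first: ?*Node = null,

        // Cheap O(1) prepend; anything more expensive is left to the user.
        fn prepend(list: *List, node: *Node) void {
            node.next = list.first;
            list.first = node;
        }
    };

    // Your own data embeds the node instead of the list owning the data.
    const Item = struct {
        value: u32,
        node: Node = .{},
    };

    pub fn main() void {
        var a = Item{ .value = 1 };
        var b = Item{ .value = 2 };

        var list = List{};
        list.prepend(&a.node);
        list.prepend(&b.node);

        var it = list.first;
        while (it) |n| : (it = n.next) {
            // Recover the enclosing Item from its embedded node
            // (two-arg @fieldParentPtr as in current Zig; older versions differ).
            const item: *Item = @fieldParentPtr("node", n);
            std.debug.print("{d}\n", .{item.value});
        }
    }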
|
|
|
|
|
|
| ▲ | smj-edison 12 hours ago | parent | prev | next [-] |
| It's pretty incredible how much ground this covers! However, the ordering feels a little confusing to me. One example is in chapter 1. It talks about symbol exporting based on platform type, without explaining ELF. This is before talking about while loops. It's had some interesting nuggets so far, and I've followed along since I'm familiar with some of the broad strokes, but I can see it being confusing to someone new to systems programming. |
|
| ▲ | amitav1 13 hours ago | parent | prev | next [-] |
| It looks cool! No experience with Zig so can't comment on the accuracy, but I will take a look at it this week. Also a bit annoying that there is no PDF version that I could download as the website is pretty slow. After taking a look at the repository (https://github.com/zigbook/zigbook/tree/main), each page seems to be written in AsciiDoc, so I'll take a look about compiling a PDF version later today. |
| |
| ▲ | hoshikihao 9 hours ago | parent [-] | | If there is a PDF version, please remember to give me one. Thank you in advance. | | |
| ▲ | forgotpwd16 6 minutes ago | parent [-] | | zigbook.pdf => https://files.catbox.moe/gobtw7.pdf HOWTO: The text can be found per-chapter in `/pages/{chapter}.adoc` but each chapter includes code snippets found in a respective `/chapters-data/code/{chapter}/` subdirectory. So, very hacky way to do it but quite lazy to fully figure out asciidoctor flags, moved all code subdirectories in `/pages`, created using a script a combined book.adoc that includes all others with `include::{chapter}.adoc` directives, then run `asciidoctor-pdf -a sourcedir=. -r asciidoctor-diagram -o book.pdf book.adoc`. |
|
|
|
| ▲ | CraftingLinks 2 hours ago | parent | prev | next [-] |
| The gratuitous accusations in this thread should be flagged. |
|
| ▲ | pkphilip 3 hours ago | parent | prev | next [-] |
| Very well done! Wow! Thanks for this. Going through this now. One comment about the syntax highlighting: the dark blue for keywords against a black background is very difficult to read. And if you opt for the white background, the text becomes off-white / grey, which again is very difficult to read. |
|
| ▲ | delifue 7 hours ago | parent | prev | next [-] |
| A nitpick about the website: the top progress bar is kind of distracting (high-contrast color with animation). It's also unnecessary because there is already a scrollbar on the right side. |
|
| ▲ | p2detar 12 hours ago | parent | prev | next [-] |
| Hmm, the explanation of Allocators is much more detailed in the book, but I feel that, although more compact, the treatment in the language reference is much more reasonable. [0] I'll keep exploring this book though, it does look very impressive. 0 - https://ziglang.org/documentation/master/#Memory |
|
| ▲ | shuraman7 12 hours ago | parent | prev | next [-] |
| It's really hard to believe this isn't AI generated, but today I was trying to use the HTTP server from std after the 0.15 changes and couldn't figure out how it's supposed to work until I searched repos on GitHub. LLMs couldn't figure it out either; they were stuck in a loop of changing/breaking things even further until they arrived at the solution of using the deprecated way. So I guess this is actually handwritten, which is amazing, because it looks like the best resource I've seen up until now for Zig |
| |
| ▲ | blks 11 hours ago | parent | next [-] | | > It's really hard to believe this isn't AI generated Case of a person who relies on LLMs so much that they cannot imagine doing something big by themselves. | | |
| ▲ | shuraman7 11 hours ago | parent [-] | | it's not only the size - it was pushed all at once, anonymously, using text that highly resembles that of an AI. I still think that some of the text is AI generated. perhaps not the code, but the wording of the text just reeks of AI | | |
| ▲ | trenchpilgrim 7 hours ago | parent | next [-] | | > it was pushed all at once For some of my projects I develop against my own private git server, then when I'm ready to go public, create a new git repo with a fully squashed history. My early commits are basically all `git commit -m "added stuff"` | |
| ▲ | BlackjackCF 10 hours ago | parent | prev [-] | | Can you provide some examples where the text reeks of AI? | | |
| ▲ | wild_egg 10 hours ago | parent | next [-] | | Literally the heading as soon as you click the submitted link > Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software. The "it's not X, it's Y" phrasing screams LLM these days | | |
| ▲ | ropable 10 hours ago | parent | next [-] | | It's almost as though the LLMs were trained on all the writing conventions which are used by humans and are parroting those, instead of generating novel outputs themselves. | | |
| ▲ | fifhtbtbf 9 hours ago | parent [-] | | They haven’t picked up any one human writing style; they’ve converged on a weird amalgamation of expressions and styles that, taken together, don’t resemble any real human’s writing and begin to feel quite unnatural. | | |
| |
| ▲ | anon7000 7 hours ago | parent | prev [-] | | Plenty of people use “it’s not X, it’s Y” As someone who uses em-dashes a lot, I’m getting pretty tired of hearing something “screams AI” about extremely simple (and common) human constructs. Yeah, the author does use that convention a number of times. But that makes sense, if that’s a tool in your writing toolbox, you’ll pull it out pretty frequently. It’s not signal by itself, it’s noise. (does that make me an AI!?) We really need to be considering a lot more than that. Reading through the first article, it appears to be compelling writing and a pretty high quality presentation. That’s all that matters, tbh. People get upset about AI slop because it’s utterly worthless and exceptionally low quality. |
| |
| ▲ | xeonmc 8 hours ago | parent | prev | next [-] | | https://www.zigbook.net/chapters/45__text-formatting-and-uni... The repetitiveness of the shell commands (and using zig build-exe instead of zig run when the samples consist of short snippets), the filler bullet points and section organization that fail to convey any actual conceptual structure.
And ultimately, throughout the book, the general style of thought processes lacks any of the zig community’s cultural anachronisms. If you take a look at the repository you’ll also notice baffling tech choices, not justified by the author, that run counter to the Zig ethos. (Edit: the build system chapter is an even worse offender in meaningless, cognitively-cluttering headings and flowcharts; it’s almost certainly entirely hallucinated, there is just an absurd degree of unziglikeness everywhere: https://www.zigbook.net/chapters/26__build-system-advanced-t... -- What’s with the completely irrelevant flowchart of building the Zig compiler? What even is the point of module-graph.txt? And the icing on the cake is in the “Vendoring vs Registry Dependencies” section.) |
| ▲ | dilap 8 hours ago | parent | prev [-] | | I read the first few paragraphs. Very much reads like LLM slop to me... E.g., "Zig takes a different path. It reveals complexity—and then gives you the tools to master it." If we had a reliable oracle, I would happily bet a $K on significant LLM authorship. | | |
| ▲ | sgt an hour ago | parent [-] | | Yeah, and then why would they explicitly deny it? Maybe the AI was instructed not to reveal its origin. It's hard to enjoy this book knowing it was likely made by an LLM. |
|
|
|
| |
| ▲ | tredre3 12 hours ago | parent | prev [-] | | I've had the same experience as you with Zig. I quite love the idea of Zig, but the undocumented churn is a bit much. I wish they had auto-generated docs that reflect the current state of the stdlib, at least. Even if it just listed the signatures with no commentary. I was trying to solve a simple problem but Google, the official docs, and LLMs were all out of date. I eventually found what I needed in Zig's commit history, where they casually renamed something without updating the docs. It's been renamed once more apparently, still not reflected in the docs :shrugs:. | | |
| ▲ | smj-edison 12 hours ago | parent | next [-] | | Wait, doesn't `zig std` launch the autogenerated docs? | | |
| ▲ | geophph 4 hours ago | parent [-] | | It’s currently broken, or was recently on the 0.16 dev branch (master) |
| |
| ▲ | Aeolun 5 hours ago | parent | prev [-] | | But you can tell your LLM to just go look at the source code (after checking it out so it doesn’t try 20s github requests). Always works like a charm for me. |
|
|
|
| ▲ | pelasaco 26 minutes ago | parent | prev | next [-] |
| I'm a C/C++ developer. I write production code in MQL5 (C-like) and Go, and I use Python for research and automation. I can work with other languages as well, but I keep asking myself: why should I learn Zig? If I want to do system or network programming, my current stack already covers those needs — and adding Rust would probably make it even more future-proof. But Zig? This is a genuine question, because the "Zig book" doesn't give me much insight into what the real use cases for Zig are. |
|
| ▲ | thomascountz 11 hours ago | parent | prev | next [-] |
> The book content itself is deliberately free of AI-generated prose. Drafts may start anywhere, but final text should be reviewed, edited, and owned by a human contributor.
There is more specificity around AI use in the project README. There may have been LLMs used during drafting, which has led to the "hallmarks" sticking around that some commenters are pointing out. |
| |
| ▲ | ummonk 6 hours ago | parent [-] | | That statement is honestly self-contradictory. If a draft was AI-generated and then reviewed, edited, and owned by a human contributor, then the parts which survived reviewing and editing verbatim were still AI-generated... | | |
| ▲ | ako 3 hours ago | parent [-] | | Why do you care? If a human reviewed and edited it, someone filtered it to make sure it’s correct. It’s validated to be correct; that is the main point. | | |
| ▲ | littlestymaar 2 hours ago | parent [-] | | > if a human reviewed and edited it, someone filtered it to make sure it’s correct Yes. But it's not “free from AI-generated prose”, so why advertise it as such? And since the first sentence is a lie, why should we believe the second sentence at all? |
|
|
|
|
| ▲ | reactordev 9 hours ago | parent | prev | next [-] |
| A lot of love went into this. It's evident throughout. Great job! |
| |
|
| ▲ | johnfn 12 hours ago | parent | prev | next [-] |
| The book claims it’s not written with the help of AI, but the content seems so blatantly AI-generated that I’m not sure what to conclude, unless the author is the guy OpenAI trained GPT-5 on: > Learning Zig is not just about adding a language to your resume. It is about fundamentally changing how you think about software. “Not just X - Y” constructions. > By Chapter 61, you will not just know Zig; you will understand it deeply enough to teach others, contribute to the ecosystem, and build systems that reflect your complete mastery. More not just X - Y constructions with parallelism. Even the “not made with AI” banner seems AI generated! Note the 3 item parallelism. > The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices. I don’t have anything against AI generated content. I’m just confused what’s going on here! EDIT: after scanning the contents of the book itself I don’t believe it’s AI generated - perhaps it’s just the intro? EDIT again: no, I’ve swung back to the camp of mostly AI generated. I would believe it if you told me the author wrote it by hand and then used AI to trim the style, but “no AI” seems hard to believe. The flow charts in particular stand out like a sore thumb - they just don’t have the kind of content a human would put in flow charts. |
| |
| ▲ | finder83 11 hours ago | parent | next [-] | | Every time I read things like this, it makes me think that AI was trained off of me. Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are all examples of how they teach to write essays in high school and college. They're also all examples of how I think and have learned to communicate. I'm not sure what to make of that either. | | |
| ▲ | johnfn 11 hours ago | parent | next [-] | | To be explicit, these aren’t general hallmarks of good writing. It’s exactly two common constructions: not X but Y, and 3 items in parallel. These two pop up in extreme disproportion to normal “good writing”. Good writers know to save these tricks for when they really want to make a point. | | |
| ▲ | anon7000 7 hours ago | parent | next [-] | | Most people aren’t great writers, though (including myself). I’d guess that if people find the “not X but Y” compelling, they’ll overuse it. Overusing some stylistic element is such a normal writing “mistake”. Unless they’re an extremely good writer with lots of tools in their toolbox. But that’s not most people. | | |
| ▲ | johnfn 7 hours ago | parent [-] | | I find the probability that a particular writer latches onto the exact same patterns that AI latches onto, and does not latch onto any of the patterns AI does not latch onto, to be quite low. Is it a 100% smoking gun? No. But it’s suspicious. |
| |
| ▲ | finder83 11 hours ago | parent | prev [-] | | Interesting, I'll have to look for those. |
| |
| ▲ | ummonk 6 hours ago | parent | prev [-] | | But you didn't write that "Using semicolons, utilizing classic writing patterns, and common use of compare and contrast are not just examples of how they teach to write essays in high school and college; they're also all examples of how I think and have learned to communicate." |
| |
| ▲ | rmonvfer an hour ago | parent | prev | next [-] | | I mean maybe the content is not AI generated (I wouldn’t say it is) but the website does have an AI generated smell to it. From the colors to the shapes, it looks like Sonnet or Opus definitely made some tweaks. | |
| ▲ | geysersam 12 hours ago | parent | prev [-] | | Clearly your perception of what is AI generated is wrong. You can't tell something is AI generated only because it uses "not just X - Y" constructions. I mean, the reason AI text often uses it is because it's common in the training material. So of course you're going to see it everywhere. | | |
| ▲ | gamegoblin 10 hours ago | parent | next [-] | | I sent the text through an AI detector with 0.1% false positive rate and it was highly confident the Zig book introduction was fully AI-written | |
| ▲ | johnfn 11 hours ago | parent | prev [-] | | Find me some text from pre-AI that uses so many of these constructions in such close proximity if it’s really so easy - I don’t think you’ll have much luck. Good authors have many tactics in their rhetorical bag of tricks. They don’t just keep using the same one over and over. | | |
| ▲ | andyferris 9 hours ago | parent | next [-] | | The style of marketing material was becoming SO heavily cargo-culted with telltale signs exactly like these in the leadup to LLMs. Humans were learning the same patterns off each other. Such style advice has been floating around on e.g. LinkedIn for a while now. Just a couple years later, humans are (predictably) still doing it, even if the LLMs are now too. We should be giving each other a bit of break. I'd personally be offended if someone thought I was a clanker. | |
| ▲ | anon7000 7 hours ago | parent | prev [-] | | You’re completely right, but blogs on the internet are almost entirely not written by great authors. So that’s of no use when checking if something is AI generated. |
|
|
|
|
| ▲ | jasonjmcghee 13 hours ago | parent | prev | next [-] |
| So despite this... > The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices. I just don't buy it. I'm 99% sure this is written by an LLM. Can the author... Convince me otherwise? > This journey begins with simplicity—the kind you encounter on the first day. By the end, you will discover a different kind of simplicity: the kind you earn by climbing through complexity and emerging with complete understanding on the other side. > Welcome to the Zigbook. Your transformation starts now. ... > You will know where every byte lives in memory, when the compiler executes your code, and what machine instructions your abstractions compile to. No hidden allocations. No mystery overhead. No surprises. ... > This is not about memorizing syntax. This is about earning mastery. |
| |
| ▲ | CathalMullan 12 hours ago | parent | next [-] | | Pretty clear it's all AI. The @zigbook account only has 1 activity prior to publishing this repo, and that's an issue where they mention "ai has made me too lazy": https://github.com/microsoft/vscode/issues/272725 | | |
| ▲ | smj-edison 11 hours ago | parent [-] | | After reading the first five chapters, I'm leaning this way. Not because of a specific phrase, but because the pacing is way off. It's really strange to start with symbol exporting, then moving to while loops, then moving to slices. It just feels like a strange order. The "how it works" and "key insights" also feel like a GPT summarization. Maybe that's just a writing tic, but the combination of correct grammar with bad pacing isn't something I feel like a human writer has. Either you have neither (due to lack of practice), or both (because when you do a lot of writing you also pick up at least some ability to pace). Could be wrong though. |
| |
| ▲ | simonklee 12 hours ago | parent | prev | next [-] | | It's just an odd claim to make when it feels very much like AI-generated content and the text was published anonymously. It's obviously possible to write like this without AI, but I can't remember reading something like this that wasn't written by AI. It doesn't take away from the fact that someone put a bunch of time and effort into this project. | | |
| ▲ | jasonjmcghee 12 hours ago | parent | next [-] | | To be clear, I did not dismiss the project or question its value - simply questioned this claim as my experience tells me otherwise and they make a big deal out of it being human written and "No AI" in multiple places. | | |
| ▲ | simonklee 12 hours ago | parent [-] | | I agree with you. After reading a couple of the chapters I'd be surprised if this wasn't written by an LLM. |
| |
| ▲ | gre 9 hours ago | parent | prev [-] | | Did they actually spend a bunch of time and effort though? I think you could get an LLM to generate the entire thing, website and all. Check out the sleek-looking terminal--there's no ls or cd, it's just an AI hallucination. | | |
| |
| ▲ | rudedogg 13 hours ago | parent | prev | next [-] | | I was pretty skeptical too, but it looks legit to me. I've been doing Zig off and on for several years, and have read through the things I feel like I have a good understanding of (though I'm not working on the compiler, contributing to the language, etc.) and they are explained correctly in a logical/thoughtful way. I also work with LLMs a ton at work, and you'd have to spoon-feed the model to get outputs this cohesive. | |
| ▲ | gamegoblin 13 hours ago | parent | prev | next [-] | | Pangram[1] flags the introduction as totally AI-written, which I also suspected for the same reasons you did [1] one of the only AI detectors that actually works, 99.9% accuracy, 0.1% false positive | | |
| ▲ | ants_everywhere 8 hours ago | parent [-] | | Keep in mind that pangram flags many hand-written things as AI. > I just ran excerpts from two unpublished science fiction / speculative fiction short stories through it. Both came back as ai with 99.9% confidence. Both stories were written in 2013. > I've been doing some extensive testing in the last 24 hours and I can confidently say that I believe the 1 in 10,000 rate is bullshit. I've been an author for over a decade and have dozens of books at hand that I can throw at this from years prior to AI even existing in anywhere close to its current capacity. Most of the time, that content is detected as AI-created, even when it's not. > Pangram is saying EVERYTHING I have hand written for school is AI. I've had to rewrite my paper four times already and it still says 99.9% AI even though I didn't even use AI for the research. > I've written an overview of a project plan based on a brief and, after reading an article on AI detection, I thought it would be interesting to run it through AI detection sites to see where my writing winds up. All of them, with the exception of Pangram, flagged the writing as 100% written by a human. Pangram has "99% confidence" of it being written by AI. I generally don't give startups my contact info, but if folks don't mind doing so, I recommend running pangram on some of their polished hand written stuff. https://www.reddit.com/r/teachingresources/comments/1icnren/... | | |
| ▲ | agucova 5 hours ago | parent | next [-] | | How long were the extracts you gave to Pangram? Pangram only has the stated very high accuracy for long-form text covering at least a handful of paragraphs. When I ran this book, I used an entire chapter. | |
| ▲ | gamegoblin 7 hours ago | parent | prev [-] | | Weird to me that nobody ever posts the actual alleged false positive text in these criticisms I've yet to see a single real Pangram false positive that was provably published when it says it was, yet plenty such comments claiming they exist |
|
| |
| ▲ | the-anarchist 12 hours ago | parent | prev | next [-] | | That doesn't rule out the author having used AI to optimise legibility. You can write stuff yourself and use an LLM to enhance the reading flow. Especially for non-native speakers it is immensely helpful to do so. It doesn't mean that the content is "AI-generated". The essence is still written by a human. | | |
| ▲ | tredre3 12 hours ago | parent | next [-] | | > Doesn't mean that the author might not use AI to optimise legibility. I agree that there is a difference between entirely LLM-generated, and LLM-reworded. But the statement is unequivocal to me: > The Zigbook intentionally contains no AI-generated content—it is hand-written If an LLM was used in any fashion, then this statement is simply a lie. | | |
| ▲ | mrob 2 hours ago | parent [-] | | >If an LLM was used in any fashion, then this statement is simply a lie. While I don't believe the article was created this way, it's possible to use an LLM purely as a classifier. E.g. prompt along the lines of "Does this paragraph contain any errors? Answer only yes or no." and generate only a single set of token probabilities, without any autoregression. Flag any paragraphs with sufficient probability of "yes" for human review. |
| |
| ▲ | lukan 12 hours ago | parent | prev [-] | | But then you cannot write that "The Zigbook intentionally contains no AI-generated content—it is hand-written" |
| |
| ▲ | NoboruWataya 11 hours ago | parent | prev | next [-] | | > Can the author... Convince me otherwise? Not disagreeing with you, but out of interest, how could you be convinced otherwise? | | |
| ▲ | Jach 5 hours ago | parent | next [-] | | To me it's another specimen in the "demonstrating personhood" problem that predates LLMs. e.g. Someone replies to you on HN or twitter or wherever, are they a real person worth engaging with? Sometimes it'll literally be a person but their behavior is indistinguishable from a bot, that's their problem. Convincing signs of life include account age, past writing samples, and topic diversity. | |
| ▲ | jasonjmcghee 10 hours ago | parent | prev | next [-] | | I'm not sure, but I try my best to assume good faith / be optimistic. This one hit a sore spot b/c many people are putting time and effort into writing things themselves and to claim "no ai use" if it is untrue is not fair. If the author had a good explanation... Idk not a native English writer and used an LLM to translate and that included the "no LLMs used" call-out and that was translated improperly etc | | |
| ▲ | chris_pie 10 hours ago | parent [-] | | note that the front page also says: "61 chapters • Project-based • Zero AI" |
| |
| ▲ | ummonk 6 hours ago | parent | prev [-] | | Git log / draft history |
| |
| ▲ | ninetyninenine 2 hours ago | parent | prev | next [-] | | The sweet irony of this post is that this very post itself is written by an LLM. | |
| ▲ | keyle 7 hours ago | parent | prev | next [-] | | I wish AI had the self-built irony of adding vomit emojis to their sycophantic sentences. | |
| ▲ | chris_pie 12 hours ago | parent | prev | next [-] | | I don't think so, I think it's just a pompous style of writing. | |
| ▲ | PaulRobinson 13 hours ago | parent | prev | next [-] | | You can't just say that a linguistic style "proves" or even "suggests" AI. Remember, AI is just spitting out things it's seen before elsewhere. There are plenty of other texts I've seen with this sort of writing style, written long before AI was around. Can I also ask: so what if it is or it isn't? While AI slop is infuriating, and the bubble hype is maddening, I'm not sure that calling out content as "must be AI" every time somebody dislikes its style, and debating whether it is or isn't, is any less maddening. It feels like all content published now gets debated like this, and I'm definitely not enjoying it. | | |
| ▲ | maxbond 12 hours ago | parent [-] | | You can be skeptical of anything but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest that it's generated text. As to why it matters, doesn't it matter when people lie? Aren't you worried about the veracity of the text if it's not only generated but was presented otherwise? That wouldn't erode your trust that the author reviewed the text and corrected any hallucinations even by an iota? | | |
| ▲ | geysersam 11 hours ago | parent [-] | | > but I think it's silly to say that these "Not just A, but B" constructions don't strongly suggest ai generated text Why? Didn't people use such constructions frequently before AI? Some authors probably overused them at the same frequency AI does. | | |
| ▲ | maxbond 11 hours ago | parent [-] | | I don't think there was very much abuse of "not just A, but B" before ChatGPT. I think that's more of a product of RLHF than the initial training. Very few people wrote with the incredibly overwrought and flowery style of AI, and the English speaking Internet where most of the (English language) training data was sourced from is largely casual, everyday language. I imagine other language communities on the Internet are similar but I wouldn't know. Don't we all remember 5 years ago? Did you regularly encounter people who write like every followup question was absolutely brilliant and every document was life changing? I think about why's (poignant) Guide to Ruby [1], a book explicitly about how learning to program is a beautiful experience. And the language is still pedestrian compared to the language in this book. Because most people find writing like that saccharin, and so don't write that way. Even when they're writing poetically. Regardless, some people born in England can speak French with a French accent. If someone speaks French to you with a French accent, where are you going to guess they were born? [1] https://poignant.guide/book/chapter-1.html | | |
| ▲ | PaulRobinson 10 hours ago | parent [-] | | It's been alleged that a major source of training data for many LLMs was libgen and SciHub - hardly casual. | | |
| ▲ | maxbond 10 hours ago | parent [-] | | Even if that were comparable in size to the conversational Internet, how many novels and academic papers have you read that used multiple "not just A, but B" constructions in a single chapter/paper (that were not written by/about AI)? |
|
|
|
|
| |
| ▲ | ants_everywhere 11 hours ago | parent | prev | next [-] | | IMO HN should add a guideline about not insinuating things were written by AI. It degrades the quality of the site similarly to many of the existing rules. Arguably it would be covered by some of the existing rules, but it's become such a common occurrence that it may need singling out. | | |
| ▲ | ModernMech 9 hours ago | parent [-] | | What degrades conversation is to lie about something not being AI when it actually is. People pointing out the fraud are right to do so. One thing I've learned is that comment sections are a vital defense against AI content spreading, because while you might fool some people, it's hard to fool all the people. There have been times I've been fooled by AI only to see in the comments the consensus that it is AI. So now it's my standard practice to check comments to see what others are saying. If mods put a rule into place that muzzles this community when it comes to alerting others that a fraud is being perpetrated, that just makes this place a target for AI scams. | | |
| ▲ | ants_everywhere 9 hours ago | parent [-] | | It's 2025, people are going to use technology and its use will spread. There are intentional communities devoted to stopping the spread of technology, but HN isn't currently one of them. And I've never seen an HN discussion where curiosity was promoted by accusations or insinuations of LLM use. It seems consistent to me with the rules against low effort snark, sarcasm, insinuating shilling, and ideological battles. I don't personally have a problem with people waging ideological battles about AI, but it does seem contrary to the spirit of the site for so many technical discussions to be derailed so consistently in ways that specifically try to silence a form of expression. | | |
| ▲ | ModernMech 9 hours ago | parent [-] | | I'm 100% okay with AI spreading. I use it every day. This isn't a matter of an ideological battle against AI, it's a matter of fraudulent misrepresentation. This wouldn't be a discussion if the author themselves hadn't claimed what they had, so I don't see why the community should be barred from calling that out. Why bother having curious discussions about this book when they are blatantly lying about what is presented here? Here's some curiosity: what else are they lying about, and why are they lying about this? | | |
| ▲ | ants_everywhere 9 hours ago | parent [-] | | To clarify there is no evidence of any lying or fraud. So far all we have evidence of is HN commenters assuming bad faith and engaging in linguistic phrenology. | | |
| ▲ | ModernMech 8 hours ago | parent [-] | | There is evidence, it's circumstantial, but there's never going to be 100% proof. And that's the point, that's why community detection is the best weapon we have against such efforts. | | |
| ▲ | maxbond 8 hours ago | parent [-] | | (Nitpick: it's actually direct evidence, not circumstantial evidence. I think you mean it isn't conclusive evidence. Circumstantial evidence is evidence that requires an additional inference, like the accused being placed at the scene of the crime implying they may have been the perpetrator. But stylometry doesn't require any additional inference, it's just not foolproof.) |
|
|
|
|
|
| |
| ▲ | Rochus 13 hours ago | parent | prev [-] | | Who cares? Still better than just nagging. | | |
| ▲ | maxbond 12 hours ago | parent | next [-] | | Using AI to write is one thing, claiming you didn't when you did should be objectionable to everyone. | | |
| ▲ | littlestymaar 12 hours ago | parent | next [-] | | This. I wouldn't mind a technical person transparently using AI to do the writing, which isn't necessarily their strength, as long as the content itself comes from the author's expertise and the generated writing is thoroughly vetted to make sure there's no hallucinated misunderstanding in the final text. At the end of the day this would just increase the amount of high-quality technical content available, because the set of people with both good writing skill and deep technical expertise is much narrower than just the latter. But claiming you didn't use AI when you did breaks all trust between you and your readership and makes the end result pretty much worthless, because why read a book if you don't trust the author not to waste your time? | | |
| ▲ | Rochus 12 hours ago | parent | prev [-] | | Who wants to be so petty? I'm sure there are more interesting things to say about this book. | |
| ▲ | maxbond 12 hours ago | parent [-] | | So petty as to lie about using AI or so petty as to call it out? Calling it out doesn't seem petty to me. I intend to learn Zig when it reaches 1.0 so I was interested in this book. Now that I see it was probably generated by someone who claimed otherwise, I suspect this book would have as much of a chance of hurting my understanding as helping it. So I'll skip it. Does that really sound petty? |
|
| |
| ▲ | Rochus 7 hours ago | parent | prev [-] | | My statement refers to this claim: "I'm 99% sure this is written by an LLM." The hypocrisy and entitlement mentality that prevails in this discussion is disgusting. My recommendation to the fellow below that he should write a book himself (instead of complaining) was even flagged, demonstrating once again the abuse of this feature to suppress other, completely legitimate opinions. | | |
| ▲ | maxbond 6 hours ago | parent [-] | | I'm guessing it was flagged because it came off as snark. I've gone ahead and vouched it but of course I can't guarantee it won't get flagged again. To be frank, this comment is probably also going to get flagged for the strong language you're using. I don't think either is an abusive use of flagging. Additionally, please note that I neither complained nor expressed an entitlement. The author owes me as much as I owe them (nothing beyond respect and courtesy). I'm just as entitled to express a criticism as they are to publish a book. I suppose you could characterize my criticism as complaints, but I don't see what purpose that really serves other than to turn up the rhetorical temperature. |
|
|
|
|
| ▲ | samgranieri 7 hours ago | parent | prev | next [-] |
As someone who is diving deep into Zig, I’m actually going to evaluate all this (and compare it to Ziglings or the Zig track on Exercism). |
|
| ▲ | finalhacker an hour ago | parent | prev | next [-] |
| wow, it's so cool. |
|
| ▲ | bricss 10 hours ago | parent | prev | next [-] |
| But can we train AI on this beautifully hand-crafted material, and ask it later to rewrite Rust with Zig? :] |
|
| ▲ | zer0x4d 6 hours ago | parent | prev | next [-] |
For me, personally, any new language needs to have a "why." If a new language can't convince me in 1-2 sentences why I need to learn it and how it's going to improve software development as a whole, it's 99% BS and not worth my time. DHH does a great job of clarifying this during his podcast with Lex Fridman: the "why" is immediately clear, and one can decide for themselves if it's what they're looking for. I have not yet seen a "why" for Zig. |
| |
|
| ▲ | serial_dev 13 hours ago | parent | prev | next [-] |
It was very hard to find a link to the table of contents… then I tried opening it and the link didn’t work. I’m on iOS. I’d have loved to take a quick look at what’s in the book… |
| |
|
| ▲ | gigatree 11 hours ago | parent | prev | next [-] |
| inb4 people start putting a standardized “not AI generated” symbol in website headers |
|
| ▲ | wosined 13 hours ago | parent | prev | next [-] |
| Some text is unreadable because it is so small. |
|
| ▲ | matsy007 9 hours ago | parent | prev | next [-] |
| Why do we need another language? |
|
| ▲ | mendelmaleh 11 hours ago | parent | prev | next [-] |
> The Zigbook intentionally contains no AI-generated content—it is hand-written, carefully curated, and continuously updated to reflect the latest language features and best practices. I think it's time to have a badge for non-LLM content, and avoid the rest. |
| |
| ▲ | Rasthor 11 hours ago | parent | next [-] | | There is also Brainmade:
https://brainmade.org/ | | |
| ▲ | kaycey2022 7 hours ago | parent | next [-] | | What's stopping AI-made content from including this as well? | |
| ▲ | oconnor663 4 hours ago | parent [-] | | I imagine it's kind of like "What's stopping someone from forging your signature on almost any document?" The point is less that it's hard to fake, and more that it's a line you're crossing where everyone agrees you can't say "oops I didn't know I wasn't supposed to do that." |
| |
| ▲ | benatkin 7 hours ago | parent | prev | next [-] | | The name seems odd to me, because I think it's fine to describe things as a digital brain, especially when the word brain applies not only to humans but also to organisms as simple as a 959-cell roundworm with 302 neurons. | |
| ▲ | righthand 7 hours ago | parent [-] | | Also the logo seems to imply a plant has taken over this person and the content was made by some sort of body-snatched pod person. | | |
| |
| ▲ | raincole 6 hours ago | parent | prev [-] | | If this gets any traction, AI bros on Twitter will put it on their generated images just out of spite. |
| |
| ▲ | gregsadetsky 11 hours ago | parent | prev | next [-] | | There seems to be https://notbyai.fyi/ and https://no-ai-icon.com/ ..! | |
| ▲ | fleebee 10 hours ago | parent | prev | next [-] | | I like these ones: https://cadence.moe/blog/2024-10-05-created-by-a-human-badge... | |
| ▲ | mikepurvis 8 hours ago | parent | prev | next [-] | | Even for content that isn’t directly composed by an LLM, I bet there’d be value in an alerting system that could ingest your docs and code+commits and flag places where behaviour referenced by the docs has changed and may need to be updated. This kind of “workflow” LLM use has the potential to deliver a lot of value even in a scenario where the final product is human-composed. | |
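(To make the workflow idea above concrete, here is a minimal, hypothetical sketch; this tool does not exist, and the names `flag_stale_docs` and `last_commit_epoch`, the `docs/` layout, and the `.zig` extension are all assumptions for illustration. It only flags Markdown docs whose last commit predates changes to the source files they mention; an LLM pass could then summarize what actually changed in each flagged pair.)

```python
# Hypothetical "doc drift" checker: flag docs that are older than the code
# they reference. Assumes it is run from the root of a git repository.
import re
import subprocess
from pathlib import Path


def last_commit_epoch(path: str) -> int:
    """Unix timestamp of the last commit touching `path`, or 0 if none."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out) if out else 0


def flag_stale_docs(doc_dir: str = "docs", src_ext: str = ".zig") -> list[tuple[str, str]]:
    """Return (doc, source) pairs where the source changed after the doc."""
    # Naive heuristic: any path-like token ending in the source extension
    # is treated as a reference to code.
    pattern = re.compile(rf"[\w/.-]+{re.escape(src_ext)}")
    stale = []
    for doc in Path(doc_dir).rglob("*.md"):
        doc_time = last_commit_epoch(str(doc))
        for ref in set(pattern.findall(doc.read_text(errors="ignore"))):
            if Path(ref).exists() and last_commit_epoch(ref) > doc_time:
                stale.append((str(doc), ref))
    return stale


if __name__ == "__main__":
    for doc, src in flag_stale_docs():
        print(f"{doc} may be stale: {src} changed more recently")
```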
| ▲ | James_K 9 hours ago | parent | prev | next [-] | | > Most programming languages hide complexity from you—they abstract away memory management, mask control flow with implicit operations, and shield you from the machine beneath. This feels simple at first, but eventually you hit a wall. You need to understand why something is slow, where a crash happened, or how to squeeze every ounce of performance from your hardware. Suddenly, the abstractions that helped you get started are now in your way. > Zig takes a different path. It reveals complexity—and then gives you the tools to master it. > This book will take you from Hello, world! to building systems that cross-compile to any platform, manage memory with surgical precision, and generate code at compile time. You will learn not just how Zig works, but why it works the way it does. Every allocation will be explicit. Every control path will be visible. Every abstraction will be precise, not vague. But sadly people like the prompter of this book will lie and pretend to have written things themselves that they did not. First three paragraphs by the way, and a bingo for every sign of AI. | | |
| ▲ | cycomanic 8 hours ago | parent | next [-] | | These posts are getting old. I had a discussion on some other submission a couple of weeks back, where several people were arguing "it's obviously AI generated" (the style btw was completely different to this, quite a few expletives...). When I put the text into 5 random AI detectors, which except for one (which said mixed, 10% AI or so) all said 100% human, I was downvoted and the argument became "AI detection tools can't detect AI", yet somehow the same people claim there are 100% clear telltale signs that it's AI (why those detection tools can't pick up on those signs is baffling to me). I have the feeling that the whole "it's AI" shtick has become a synonym for "I don't like this writing style". It really does not add to the discussion. If people immediately posted "there are spelling mistakes, this is rubbish", they would rightfully get downvoted, but somehow saying "it's AI" is acceptable. Would the book be any more or less useful if somebody used AI for writing it? So what is your point? | |
| ▲ | spacechild1 7 hours ago | parent | next [-] | | Check out the other examples presented in this thread or read some of the chapters. I'm pretty sure the author used LLMs to generate at least parts of this text. In this case that would be particularly outrageous, since the author explicitly advertises the content as 100% handwritten. > Would the book be any more or less useful if somebody used AI for writing it? Personally, I don't want to read AI-generated text. I would appreciate it if people were upfront about their LLM usage. At the very least they shouldn't lie about it. | |
| ▲ | agucova 5 hours ago | parent | prev [-] | | I ran the introduction chapter through Pangram [1], which is one of the most reliable AI-generated text classifiers out there [2] (with a benchmarked accuracy of 99.85% over long-form text), and it gives high confidence for it having been AI-generated. It's also very intuitively obvious if you play a lot with LLMs. I have no problem at all reading AI-generated content if it's good, but I don't appreciate dishonesty. [1]: https://www.pangram.com/
[2]: https://arxiv.org/pdf/2402.14873 |
| |
| ▲ | Brian_K_White 4 hours ago | parent | prev | next [-] | | Right in those same first few paragraphs...
"...hiding something from you. Because they are." Would most LLMs have written that invalid fragment sentence "Because they are." ? I don't think you have enough to go on to make this accusation. | | |
| ▲ | assbuttbuttass 3 hours ago | parent [-] | | Yes, that fragment in particular screams LLM to me. It's the exact kind of meaningless yet overly dramatic slop that LLMs love |
| |
| ▲ | mef51 9 hours ago | parent | prev [-] | | The em dashes? | | |
| ▲ | James_K 9 hours ago | parent [-] | | There's also the classic “it's not just X, it's Y”, adjective overuse, rule of 3, total nonsense (manage memory with surgical precision? what does that mean?), etc. One of these is excusable, but text composed entirely of AI indicators is either deliberately written to mimic AI style or the product of AI. | | |
|
| |
| ▲ | ModernMech 10 hours ago | parent | prev [-] | | Meh. I mean, who's it for? People should be adopting the stance that everything on the internet is AI and make decisions from there. If you start trusting people telling you that they're not using AI, you're setting yourself up to be conned. Edit: So I wrote this before I read the rest of the thread where everyone is pointing out this is indeed probably AI, so right off the bat the "AI-free" label is conning people. |
|
|
| ▲ | rrgok 3 hours ago | parent | prev | next [-] |
I guess the trend now is Zig. The era of JavaScript frameworks has come to an end. After that was the AI trend. And now we have Zig and its allocators, especially the arena allocator. /S |
|
| ▲ | gigatexal 13 hours ago | parent | prev | next [-] |
there's no way someone made this for free, where do I donate? I'm gonna get so much value from this; it feels like stealing |
| |
| ▲ | gamegoblin 12 hours ago | parent | next [-] | | It's AI-written FWIW, though maybe AI is getting to the point where it can do stuff like this somewhat decently | | | |
| ▲ | cultofmetatron 13 hours ago | parent | prev [-] | | SAME. I was looking for a donation button myself! I've paid for worse-quality instructional material. This is just the sort of thing I'm happy to support |
|
|
| ▲ | brcmthrowaway 11 hours ago | parent | prev | next [-] |
| Need this but to learn AI |
|
| ▲ | jesseb34r 8 hours ago | parent | prev [-] |
| This source is really hard to trust. AI or not, the author has done no work to really establish epistemological reliability and transparency. The entire book was published at once with no history, no evidence of the improvement and iteration it takes to create quality work, and no reference as to the creative process or collaborators or anything. And on top of that, the author does not seem to really have any other presence or history in the community. I love Zig, and have wanted more quality learning materials to exist. This, unfortunately, does not seem to be it. |
| |
| ▲ | ncgl 8 hours ago | parent | next [-] | | How do you feel about regular books, whose iterations and edits you don't see? | |
| ▲ | jesseb34r 6 hours ago | parent [-] | | For books published in more traditional ways, digital or paper, there is normally a credible publisher, editors, sometimes a foreword from a known figure, reviews from critics or experts in the field, and often a bio explaining who the author is and why they wrote the book. These elements are all signals of reliability; they help to convey that the content is more than just fluff around an attention-grabbing title, that it has depth and quality and holds up. The whole publishing business has put massive effort into establishing and building these markers of trust. |
| |
| ▲ | trenchpilgrim 7 hours ago | parent | prev [-] | | Do you have any criticism of the content, or just "I don't know the author"? | | |
| ▲ | Brian_K_White 4 hours ago | parent [-] | | They didn't say "this is in error", so they don't need any such example errors. They also didn't say just "I don't know the author". |
|
|