zachbee 9 days ago:
I didn't get into this in the article, but one of the major challenges with achieving superhuman performance on Verilog is the lack of high-quality training data. Most professional-quality Verilog is closed source, so LLMs are generally much worse at writing Verilog than, say, Python. And even then, LLMs are pretty bad at Python!
|
theptip 9 days ago:
That’s what your VC investment would be buying; “pay experts to create a private training set for fine-tuning” is an obvious new business model that is probably under-appreciated. If that’s the biggest gap, then YC is correct that it’s a good area for a startup to tackle.
adrian_b 8 days ago:
It would be hard to find any experts who could be paid "to create a private training set for fine-tuning", because those experts do not own the code they have written. The code is owned by big companies like NVIDIA, AMD, Intel, Samsung, and so on.

It is unlikely that these companies would be willing to provide their code for training, except perhaps for a custom LLM to be used internally, in which case the amount of code they could provide might not be very impressive. Even a designer who works at those companies may have great difficulty seeing significant quantities of archived Verilog/VHDL code, though one can hope that it still exists somewhere.
theptip 8 days ago:
When I say “pay to create” I generally mean authoring new material, distilling your career’s expertise. This isn't my field of expertise, but there seem to be experts founding startups etc. in the ASIC space, and Bitcoin miners were designed and built without any of the big companies participating, so I’m not following why we need Intel to be involved.

An obvious way to set up the flywheel here is to hire experts to do professional services or consulting on customer-submitted designs while you build up your corpus. And while I said “fine-tuning”, there is probably a lot of agent scaffolding to be built too, which disproportionately helps bigger companies with more work throughput.

(You can also acquire a company with the expertise and tooling, as Apple did with PA Semi in ~2008, though obviously that $100m order of magnitude is out of reach for a startup. https://www.forbes.com/2008/04/23/apple-buys-pasemi-tech-ebi...)
adrian_b 7 days ago:
I doubt any real expert would be tempted by an offer to author new material, because that cannot be done well. One could author some projects that can be implemented in FPGAs, but those do not provide good training material for generating code intended for an ASIC, because the design constraints are very different.

Designing an ASIC is a year-long process, and it is never complete before some prototypes are tested, whose manufacture may cost millions. Authoring Verilog or VHDL code for an imaginary product that cannot be tested on real hardware prototypes could only produce garbage training material, like the code of a program that has never been run to see whether it actually works as intended.

Learning to design an ASIC is not very difficult for a human, because unlike ML/AI, a human does not need a huge number of examples: humans learn the rules, and a few examples are enough. I have worked at a few companies designing ASICs. While those companies had internal training courses for their designers, those courses taught only their design methodologies, with practically no code examples from older projects, which is very unlike how an LLM would have to be trained.
|
|
|
|
jjk166 9 days ago:
I would imagine it is a reasonably straightforward thing to create a simulator that generates arbitrary chip designs and the corresponding Verilog to be used as training data, much like how AlphaFold was trained. The chip designs don't need to be good, or even useful; they just need to be valid so the LLM can learn the underlying relationships.
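As a rough illustration of what such a generator might emit, here is a minimal hypothetical sample: a randomly parameterized but syntactically valid, synthesizable registered adder. The module name, width, and operation stand in for choices a generator would make at random; nothing about it is meant to be a useful design.

```verilog
// Hypothetical machine-generated sample: valid and synthesizable, but not useful.
// The parameter value and the "+" operation would be randomized per sample.
module gen_sample_0042 #(
    parameter WIDTH = 13          // width drawn at random by the generator
) (
    input  wire             clk,
    input  wire             rst_n,
    input  wire [WIDTH-1:0] a,
    input  wire [WIDTH-1:0] b,
    output reg  [WIDTH-1:0] y
);
    // Register the randomly chosen combinational operation.
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            y <= {WIDTH{1'b0}};
        else
            y <= a + b;
    end
endmodule
```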
adrian_b 7 days ago:
I have never heard of any company, no matter how big and experienced, where it is possible to decide that an ASIC design is valid by any means other than paying for a set of masks to be made and for some prototypes to be manufactured, then tested in the lab. This validation costs millions, which is why it is hard to enter this field, even as a fabless designer. Many design errors are not caught even during hardware testing, but only after mass production, like the ugly MONITOR/MWAIT bug of Intel Lunar Lake.

Randomly generated HDL code, even if it has no syntax errors, and even if some testbench for it does not identify deviations from its specification, is no more likely to be valid when implemented in hardware than the proverbial output of a typewriting monkey.
jjk166 7 days ago:
Validating an arbitrary design is hard; it's equivalent to the halting problem. Working backwards using specific rules that guarantee validity is much easier. Again, the point is not to produce useful designs. The generated model doesn't need to be perfect, and indeed it can't be; it just needs to avoid the same issues that humans are looking for.
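One hedged reading of "working backwards using rules that guarantee validity" is correct-by-construction generation: the generator only instantiates primitives it already trusts and only makes width-matched connections. The sketch below assumes a small hypothetical library of pre-verified blocks; all module and signal names are illustrative.

```verilog
// Hypothetical pre-verified primitives the generator is allowed to draw on.
module alu_add8 (
    input  wire [7:0] a,
    input  wire [7:0] b,
    output wire [7:0] sum
);
    assign sum = a + b;
endmodule

module pipe_reg8 (
    input  wire       clk,
    input  wire       rst_n,
    input  wire [7:0] d,
    output reg  [7:0] q
);
    always @(posedge clk or negedge rst_n)
        if (!rst_n) q <= 8'd0;
        else        q <= d;
endmodule

// A composition the rule-based generator could emit: it only wires trusted
// primitives together, with every connection width-matched by construction.
module gen_compose_0007 (
    input  wire       clk,
    input  wire       rst_n,
    input  wire [7:0] in_a,
    input  wire [7:0] in_b,
    output wire [7:0] out_q
);
    wire [7:0] stage;

    alu_add8  u_add (.a(in_a), .b(in_b), .sum(stage));
    pipe_reg8 u_reg (.clk(clk), .rst_n(rst_n), .d(stage), .q(out_q));
endmodule
```

Whether such correct-by-construction samples carry enough signal for real ASIC work is exactly the point being disputed in this thread.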
astrange 9 days ago:
I know just enough about chips to be suspicious of "valid". The right solution for a chip at the HDL layer depends on your fab, the process you're targeting, what % of physical space on the chip you want it to take up, and how much you're willing to put into power optimization.
jjk166 7 days ago:
The goal is not to produce the right solution, or even a good one. The point is to create a large library of highly variable solutions so the trained model can pick up on underlying patterns. You want it to spit out lots of crap.
|
|
|
e_y_ 9 days ago:
That's probably where there's a big advantage to being a company like Nvidia, which has the proprietary chip design knowledge and data, as well as the resources, money, and AI/LLM expertise, to work on something specialized like this.
DannyBee 9 days ago:
I strongly doubt this. They don't have enough training data either; you are confusing (I think) the scale of their success with the amount of Verilog they possess. I.e., I think you are wildly underestimating the scale of training data needed, and wildly overestimating the amount of Verilog code Nvidia possesses.

GPUs work by having moderate-complexity cores (in the scheme of things) that are replicated 8000 times or whatever. That does not require having 8000 times as much useful Verilog, of course. The folks who have 8000 different chips, or 100 chips that each do 1000 things, would probably have orders of magnitude more Verilog to use for training.
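To make the replication point concrete, here is a minimal sketch (with hypothetical module and signal names) of how thousands of copies of one core get stamped out in Verilog: a single core definition plus a parameterized generate loop, so the amount of hand-written RTL barely grows with the copy count.

```verilog
// Hypothetical "moderate complexity" core: written once, no matter how many
// copies end up on the die.
module simple_core #(
    parameter WIDTH = 32
) (
    input  wire             clk,
    input  wire             rst_n,
    input  wire [WIDTH-1:0] din,
    output reg  [WIDTH-1:0] dout
);
    always @(posedge clk or negedge rst_n)
        if (!rst_n) dout <= {WIDTH{1'b0}};
        else        dout <= din + 1'b1;   // stand-in for the core's real work
endmodule

// Hypothetical top level: N_CORES identical copies via a generate loop.
// The training-data value is almost entirely in the one core above,
// not in the thousands of instantiations.
module gpu_like_top #(
    parameter N_CORES = 8192,
    parameter WIDTH   = 32
) (
    input  wire                     clk,
    input  wire                     rst_n,
    input  wire [N_CORES*WIDTH-1:0] in_bus,
    output wire [N_CORES*WIDTH-1:0] out_bus
);
    genvar i;
    generate
        for (i = 0; i < N_CORES; i = i + 1) begin : g_core
            simple_core #(.WIDTH(WIDTH)) u_core (
                .clk   (clk),
                .rst_n (rst_n),
                .din   (in_bus [i*WIDTH +: WIDTH]),
                .dout  (out_bus[i*WIDTH +: WIDTH])
            );
        end
    endgenerate
endmodule
```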
|