
I work on generative AI for circuit board design with tscircuit. IMO it's definitely going to be the dominant way of bootstrapping or combining circuit designs in the near future (<5 years).

Most people who think AI won't be able to do this soon are wrong. The same way you can't expect an AI to generate a website in assembly, but you CAN expect it to generate a website with React/tailwind, you can't expect an AI to generate circuits without having strong functional blocks to work with.

Great work from the author studying existing solutions/models; I'll post some of my findings soon as well! The more you play with it, the more inevitable it feels!



> The same way you can't expect an AI to generate a website in assembly, but you CAN expect it to generate a website with React/tailwind

Can you? Because last time I tried (probably about February) it still wasn’t a thing


I tried GPT-4o in May and had good results asking it to generate React+Tailwind components for me. It might not get things right the first time, but it is generally able to respond to feedback well.


That’s not the same as generating a website though. You still need to iterate on the components, and use them.

I agree that using LLMs for generating things like schemas, components, build scripts, etc. is a good use of the technology, but we're no closer to saying "make a SaaS landing page for X using Vercel" and having it ready to deploy than we were a year ago.


Depends on the website, right? A single index.html can easily be a website, and that it can generate.


I mean, yeah. But that’s not exactly helpful. Technically a web server can serve plain text which your browser will render so that meets the definition for most people.

I don’t think pedantry helps here, it doesn’t add to the conversation at all.


The problem is going to be getting those functional blocks in the first place.

The industry does not like sharing, and the openly available datasets are full of mistakes. As a junior EE you learn quite quickly to never trust third-party symbols and footprints, if you can find them at all. Even when they come directly from the manufacturer, there's a decent chance they don't 100% agree with the datasheet PDF. And good luck if that datasheet is locked behind an NDA!

If we can't even get basic stuff like that done properly, I don't think we can reasonably expect manufacturers to provide ready-to-use "building blocks" any time soon. It would require the manufacturers to invest a lot of engineer-hours into manually writing those, for essentially zero gain to them. After all, the information is already available to customers via the datasheet...


This is why I, and even some YC-backed companies, are working toward datasheet-to-component AI. We don't trust third-party libraries, but we do trust datasheets (at least enough to test against for a revision).


So why would anyone designing hardware trust your third-party component library? If I don't trust it when it is handwritten by a trained (albeit probably junior) engineer, I am definitely not going to trust anything generated by AI.

Datasheets get incredibly confusing incredibly fast, and every single detail is critical. It's quite common for one datasheet to describe multiple parts at the same time, even in the same tables and diagrams, and have contradictions between the two parts. You end up with pins labeled "EN/SET" where the xxx1 variant has the pin act as Enable and the xxx3 variant have the same pin act as Setpoint. If you don't generate two separate symbols for those, the symbols are essentially useless because they can't be trusted. And that's just about the easiest thing you're going to come across.
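To make the variant problem concrete, here is a minimal sketch of keeping per-variant pin maps separate instead of collapsing them into one shared symbol. The part numbers, pin names, and data layout are all invented for illustration; this is not any real library's format:

```python
# Hypothetical example: one datasheet covers two variants where the same
# physical pin has different functions. Each variant gets its own pin map;
# a merged "EN/SET" symbol would be untrustworthy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pin:
    number: int
    name: str
    function: str

# Invented part numbers for illustration.
VARIANT_PINS = {
    "XYZ1": [Pin(1, "VIN", "power"), Pin(2, "GND", "ground"),
             Pin(3, "EN", "enable")],      # pin 3 acts as Enable
    "XYZ3": [Pin(1, "VIN", "power"), Pin(2, "GND", "ground"),
             Pin(3, "SET", "setpoint")],   # same pin acts as Setpoint
}

def symbol_for(part_number: str) -> list[Pin]:
    """Return the pin map for exactly one variant; refuse to guess."""
    try:
        return VARIANT_PINS[part_number]
    except KeyError:
        raise ValueError(f"no verified symbol for {part_number!r}")

assert symbol_for("XYZ1")[2].function == "enable"
assert symbol_for("XYZ3")[2].function == "setpoint"
```

The design choice is simply that an unknown part number raises rather than falling back to a "close enough" sibling variant.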

This isn't a problem which can be solved downstream. Even trained experts are often confused because the input data is just really bad. You can't throw garbage into AI and expect diamonds to come out; the only way to solve it is to convince all the manufacturers to switch to a to-be-developed universal documentation protocol.


I'd be interested in reading more of your findings!

Are you able to accomplish this with prompt-engineering, or are you doing fine-tuning of LLMs / custom-trained models?


No fine-tuning needed. As long as the target language/DSL is fairly natural, just give it e.g. a couple of examples of tscircuit React, atopile, JITX, etc. and it can generate compliant circuits. It can hallucinate imports, but if you give it an import list you can improve that a lot.
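A rough sketch of that few-shot setup: prime the model with one or two DSL examples plus an explicit import whitelist to cut down on hallucinated imports. The tscircuit-style snippet and the package name are illustrative assumptions, not verified API:

```python
# Build a few-shot prompt for circuit-DSL generation. The example snippet
# and allowed-import list are invented placeholders.
FEW_SHOT_EXAMPLES = [
    # A minimal tscircuit-style React snippet (illustrative only).
    '<board width="10mm" height="10mm">\n'
    '  <resistor name="R1" resistance="1k" footprint="0402" />\n'
    '</board>',
]

ALLOWED_IMPORTS = ["@tscircuit/core"]  # assumed package name

def build_prompt(task: str) -> str:
    parts = ["You write circuits in a React-like DSL.",
             "Only use these imports: " + ", ".join(ALLOWED_IMPORTS)]
    for i, example in enumerate(FEW_SHOT_EXAMPLES, 1):
        parts.append(f"Example {i}:\n{example}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_prompt("an LED driven from 5V through a resistor")
assert "Only use these imports" in prompt
```

The resulting string would be sent as context to whatever model is generating the circuit; the import whitelist is what keeps made-up module names out of the output.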


I've found the same thing: with a little syntax example and some counterexamples, generative AI does well generating syntactically correct code for PCB design.

A lot of the netlists it produces are electrical nonsense when it's doing synthesis for me, though. Have you found otherwise?


Netlists, footprint diagrams, constraint diagrams, etc. are mostly nonsense. I'm working on fine-tuning Phi-3 and I'm hopeful it'll get better. I'm also working on synthesized datasets and mini-DSLs to make that tuning possible, e.g. https://text-to-footprint.tscircuit.com
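For context, a synthetic dataset like this can be generated programmatically: render a parametric footprint to a mini-DSL string and pair it with a natural-language description. The DSL syntax below is invented for illustration and is not the actual text-to-footprint format:

```python
# Generate (description, footprint-DSL) training pairs from random but
# valid parameters. Both the phrasing and the "soic(...)" syntax are
# placeholders, not a real format.
import random

def make_pair(rng: random.Random) -> tuple[str, str]:
    pins = rng.choice([4, 8, 14, 16])
    pitch = rng.choice([0.5, 0.65, 1.27])
    text = f"{pins}-pin SOIC footprint with {pitch}mm pitch"
    dsl = f"soic({pins}, pitch={pitch})"
    return text, dsl

rng = random.Random(42)
dataset = [make_pair(rng) for _ in range(3)]
for text, dsl in dataset:
    assert "pitch" in text and dsl.startswith("soic(")
```

Because the pairs are generated from parameters rather than scraped, every label is correct by construction, which is what makes fine-tuning on them viable.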

My impression is that synthetic datasets and fine-tuning will basically solve the problem completely, but eventually that capability will be available in general-purpose models, so it's not clear if it's worth it to build a dedicated model.

Overall the article’s analysis is great. I’m very optimistic that this will be solved in the next 2 years.



