| ▲ | jjcm 7 hours ago |
| A lot of these things are made fast and loose, and unfortunately this is the reality of using the bleeding edge. Even Figma went through this kind of thing very early on. To add something else to the discussion, however, I'd encourage people to skip Claude Design for another reason: the inherent restrictions of LLMs for visual design. LLMs are blind, and spatial reasoning across layers of nested HTML/CSS is tremendously hard. If you're early on, I'd recommend starting with diffusion first. GPT-Image-2 is phenomenal at UI design, and especially if you're just starting out it will let you align on a direction more rapidly than an LLM can. The difficulty will be converting from image->html, but you'll be able to explore different directions more cheaply and quickly than you could with Claude Design. I will note a bias disclaimer here - I quit Figma to work on my own diffusion-based UI design tool. Not promoting that here, but wanted to at least share my findings in this space. |
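To make that workflow concrete, here's roughly what the exploration loop looks like (a minimal sketch against OpenAI's images API; I'm using the documented gpt-image-1 model name as a stand-in, since I can't show an API call for GPT-Image-2 - swap in whichever image model you prefer):

    # Sketch: explore UI directions with an image model before writing any HTML/CSS.
    # Assumes the openai package is installed and OPENAI_API_KEY is set.
    import base64
    from openai import OpenAI

    client = OpenAI()

    result = client.images.generate(
        model="gpt-image-1",  # stand-in model name; use whatever you have access to
        prompt=(
            "Clean dashboard UI for a project-management app: left sidebar, "
            "card-based task board, neutral palette, generous whitespace"
        ),
        size="1024x1024",
    )

    # gpt-image-1 returns base64-encoded image data; save it for review.
    with open("ui-concept.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))

Iterating on the prompt and regenerating is the cheap exploration step; the image->html conversion only happens once you've settled on a direction.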
|
| ▲ | semiquaver 7 hours ago | parent | next [-] |
| What do you mean LLMs are blind? All frontier models are multimodal, which means they literally consume images as tokens. They can “see” exactly as well as they can “read”. Also, GPT-Image-2 is not a diffusion model; it's transformer-based, like other LLMs. |
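For anyone who hasn't tried it: passing an image to a frontier model looks like this (a sketch with Anthropic's Python SDK, since this thread is about Claude; the model name is illustrative):

    # Sketch: the image bytes go into the message alongside the text prompt.
    # The model receives image tokens directly, not a textual description.
    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("mockup.png", "rb") as f:
        image_data = base64.b64encode(f.read()).decode()

    message = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; any vision-capable model works
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_data}},
                {"type": "text",
                 "text": "Critique the visual hierarchy of this mockup."},
            ],
        }],
    )
    print(message.content[0].text)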
| |
| ▲ | embedding-shape 6 hours ago | parent | next [-] | | I guess they do "see", but more like "see an explanation of the image", not "see" as in experience visually. They're really bad at details and precision when it comes to images, and don't understand things like visual hierarchy, affordances, and other fundamental design concepts. Most of them can describe those concepts in words, but they don't seem to fundamentally grasp them when asked to build UIs, even when you mention these things explicitly. Try doing 100% vibe-coding with an agent, loosely specify what kind of application you want, and observe how the resulting UI and UX is a complete mess unless you spell out exactly how it should work in practice. If they actually had spatial understanding, together with the ability to visually experience images, they'd probably be able to build proper UI/UX from the get-go; but since they can only describe what those things are, you end up with the messes even the current SOTAs produce. | | |
| ▲ | stingraycharles 27 minutes ago | parent | next [-] | | > I guess they do "see" but more like "see an explanation of the image", not "see" as in experience visually. Images are tokenized and fed to the exact same model; they can "visually inspect" images, e.g. "find the two differences between these images" and "Where's Waldo"-style tasks. So your mental model that they see descriptions is inaccurate. | |
| ▲ | spongebobstoes 5 hours ago | parent | prev | next [-] | | The models can accept images directly as tokens: not a description of an image, the actual image itself. Yes, the visual intelligence is limited, but they do actually have vision capabilities. | |
| ▲ | marcus_holmes 5 hours ago | parent | prev [-] | | This is my experience too, but with all other aspects of the application. If you only loosely describe it, it comes out as a mess. You have to know what you're building to get the LLM to actually build something decent. I don't think this is purely a visual or design constraint. | | |
| ▲ | embedding-shape 5 hours ago | parent [-] | | When I'm using agents for programming, I can have an AGENTS.md outlining exactly what requirements, guidelines, and constraints all the code needs to follow, and the agent (Codex in my case) will pretty much nail it. I've tried doing the same for design work, outlining exactly how the UI and UX need to look and work, but for some reason it struggles a whole bunch with that, regardless of how clear I am. Maybe I'm just worse at explaining and describing the UI and UX I'm actually after, I suppose. | | |
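For reference, the kind of AGENTS.md section I mean looks something like this (an illustrative sketch, not my actual file; the token file path is made up):

    ## UI/UX guidelines
    - Use the design tokens in styles/tokens.css; no ad-hoc colors or spacing.
    - Primary actions are filled buttons; secondary actions are text links.
    - Every list view needs an explicit empty state and a loading state.
    - Forms validate inline on blur; never rely on a submit-time error page.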
| ▲ | marcus_holmes 3 hours ago | parent [-] | | I once worked at a startup where the CEO was originally a designer. He once spent two days huddled with the main designer for the product, trying to pick exactly the right font for the product. I have no idea how you'd have that kind of discussion with an LLM. But then, I would not spend more than five minutes on this decision, so I'm probably the wrong audience for this ;) |
|
|
| |
| ▲ | slashdave 6 hours ago | parent | prev | next [-] | | Tokens are not a substitute for a numerical measurement. Ask an LLM how much time has passed. Watch it hallucinate wildly. Has anyone noticed that Opus has trouble building ASCII diagrams (it often leaves out spaces, so lines are misaligned)? | |
| ▲ | arjie an hour ago | parent | next [-] | | LLMs are just one mechanical component. One might as well say "Ask your println how much time has passed" - that is not a question that makes sense. As an example, I did not construct my agent specifically to answer your question, and when I saw your question I queried the agent. Its answer was correct: https://imgur.com/a/j8j7hL9 As semiquaver said, modern LLMs are multimodal; they can reason in image-space and audio-space as well as in text-space. It is not a translate-then-operate kind of situation. Claude Design is not a raw LLM, nor an instruction-tuned LLM. It is an agent harness around an LLM that allows it to do certain things. | |
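To make the harness point concrete, giving the model a clock is just a matter of exposing a tool it can call (a sketch with Anthropic's tool-use API; the model name and tool definition are illustrative):

    # Sketch: the LLM can't measure elapsed time itself, but the harness can
    # answer on its behalf by exposing a clock as a callable tool.
    import datetime
    import anthropic

    client = anthropic.Anthropic()

    tools = [{
        "name": "current_time",  # hypothetical tool, for illustration only
        "description": "Returns the current UTC time as an ISO 8601 string.",
        "input_schema": {"type": "object", "properties": {}},
    }]

    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model name
        max_tokens=512,
        tools=tools,
        messages=[{"role": "user",
                   "content": "How much time has passed since noon UTC?"}],
    )

    # If the model chooses to call the tool, the harness runs it and feeds the
    # result back in a follow-up message (the second round trip is omitted here).
    for block in response.content:
        if block.type == "tool_use" and block.name == "current_time":
            print(datetime.datetime.now(datetime.timezone.utc).isoformat())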
| ▲ | semiquaver 6 hours ago | parent | prev [-] | | Ok? Your comment is in no way responsive to anything I said. |
| |
| ▲ | bombcar 6 hours ago | parent | prev [-] | | Claude has been kicking ass at code, but I asked it to “sketch” a second floor with a stairway and bedrooms with large closets and it made … something that bore no resemblance to what I asked for. |
|
|
| ▲ | jadar 6 hours ago | parent | prev | next [-] |
| This has not been my experience. Claude Artifacts at first, then Claude Design after it was released, have been excellent at design! The way I can steer the model to update the design with different ideas and visions, even adopting different design systems like Material 3 or Apple’s HIG, has been phenomenal. |
| |
| ▲ | jrumbut 5 hours ago | parent [-] | | It's also by far the best in my experience at a request like "it's 3:55 and I need a few slides on the topic of the Gettysburg Address for a 4PM meeting." I wish it were more integrated with PowerPoint, but it's still the best slide generator I've used. | | |
|
|
| ▲ | bandrami 3 hours ago | parent | prev | next [-] |
| If you say the image models don't "see", you also have to say the text models don't "read": there's a meaningful case to be made for either claim, but then you're left saying "they behave as if they see" or "they behave as if they read". |
|
| ▲ | pycassa 6 hours ago | parent | prev | next [-] |
| Thank you so much for your suggestion regarding UI design. As my main expertise is not this, I need some tool to depend on to ground my projects somehow. Even though Stitch by Google and Claude Design are not perfect, they give me a starting point. Then, after building the actual working project, I iterate until I like the look of it. This is how I'm using these tools right now. I can't even iterate within these design LLMs yet; their own UX is very clunky and not very friendly, or it's made more for design folks. But I will give GPT-Image-2 a try. Actually, a few months back I remember doing this UX/UI research in the ChatGPT app itself, just asking it to generate what a certain app might look like, etc. Please let me know about your UI design tool. I want to try it out. |
|
| ▲ | satvikpendem 6 hours ago | parent | prev | next [-] |
| Or just use Google's Stitch; it integrates both code generation via Gemini and UI image generation via Nano Banana, which I'd argue is even better than OpenAI's image models. |
| |
|
| ▲ | justinclift 6 hours ago | parent | prev | next [-] |
| > A lot of these things are made fast and loose Yeah, I'm starting to worry about Anthropic's security controls for customer information. To say they have a firehose of sensitive info from customers would be a massive understatement. Hackers gaining access to that, especially for a non-trivial duration, would be a disaster. |
|
| ▲ | adi_kurian 6 hours ago | parent | prev | next [-] |
| Multimodal LLMs are not blind. Claude Design, in my experience, is very, very solid. |
| |
| ▲ | teaearlgraycold 6 hours ago | parent [-] | | I’ve only used it for fairly basic stuff, things that are very well represented in the training data. But for that it has made me happy. |
|
|
| ▲ | TurdF3rguson 5 hours ago | parent | prev | next [-] |
| Huh, I never thought of asking an image model to prototype a UI. It's a good idea, though; I'll try it next time. |
|
| ▲ | FireBeyond 4 hours ago | parent | prev | next [-] |
| > A lot of these things are made fast and loose No kidding - you can't even delete a design system, draft or otherwise. "Research Preview" is accurate. It can do some things (though every system I've tried building has resorted to the "hero text with key word in a different color" trope, however I vary my prompts), but there's a lot missing. And when you ask Claude Design how to delete a design system, it gives you a completely inaccurate, hallucinated answer; when you say "fine, here's the project ID, do it for me," the reply is "Sorry, can't, only you can." |
|
| ▲ | SilverElfin 6 hours ago | parent | prev [-] |
| > A lot of these things are made fast and loose, and unfortunately this is the reality of using the bleeding edge. Anthropic lazily calls everything a preview and then pushes it hard on everyone. That feels dishonest. |