ch4s3 5 days ago

Trying to generate consistent images after using LLMs for coding has been really eye opening.

altruios 5 days ago | parent [-]

One-shot prompting: agreed.

Using a node-based workflow with ComfyUI, being able to draw, being able to train a LoRA on your own images, and using control nets and masks effectively: different story...
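
To make that concrete, here's a rough sketch of driving one of those graphs over ComfyUI's local HTTP API. The node class names are real ComfyUI nodes; the checkpoint and LoRA filenames are placeholders you'd swap for models in your own ComfyUI folders:

    import json
    import urllib.request

    # Minimal ComfyUI "API format" graph:
    # checkpoint -> LoRA -> prompts -> sample -> decode -> save.
    workflow = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
        "2": {"class_type": "LoraLoader",
              "inputs": {"model": ["1", 0], "clip": ["1", 1],
                         "lora_name": "my_style.safetensors",   # your own LoRA
                         "strength_model": 0.8, "strength_clip": 0.8}},
        "3": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["2", 1], "text": "ink sketch of a lighthouse"}},
        "4": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["2", 1], "text": "blurry, low quality"}},
        "5": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "6": {"class_type": "KSampler",
              "inputs": {"model": ["2", 0], "positive": ["3", 0],
                         "negative": ["4", 0], "latent_image": ["5", 0],
                         "seed": 42, "steps": 25, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "7": {"class_type": "VAEDecode",
              "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
        "8": {"class_type": "SaveImage",
              "inputs": {"images": ["7", 0], "filename_prefix": "lora_test"}},
    }

    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",   # default local ComfyUI server
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)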

In the near future I see a workflow where artists draw a sketch themselves, with the composition information, then use that as the base for 'rendering' the image, cleaning up afterwards with masking and hand drawing. That lowers the time to output images.
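
For the sketch-to-render step specifically, a minimal sketch with Hugging Face diffusers and a scribble ControlNet might look like this. The model IDs are commonly used public ones, but treat them and the parameters as starting points, and swap in whatever checkpoint and sketch file you actually have:

    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    # Condition generation on a hand-drawn scribble so the composition
    # survives the "rendering" pass.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",
        controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

    sketch = load_image("my_composition_sketch.png")  # your own drawing
    image = pipe("oil painting of a harbor at dusk",
                 image=sketch,
                 num_inference_steps=30,
                 controlnet_conditioning_scale=1.0).images[0]
    image.save("rendered.png")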

Commercial artists will be competing on many aspects that have nothing to do with the quality of the art itself. Speed and quantity are two of those factors; marketing, sales, and attention are others.

Just like the artisan weavers back in the day competing with inferior-quality automatic looms. Focusing on quality above all else misses what it means to be part of a society and to meet its needs.

Sometimes good enough is better than the best if it's more accessible/cheaper.

I see no such tooling a la ComfyUI available for text generation... everyone seems to rely on one-shotting results in that space.

mnky9800n 4 days ago | parent | next [-]

Yes, I feel like at least for data analysis it would be interesting to have the ability to build a data dashboard on the fly. You start with a text prompt and your data sources, or whatever document context you want. Then you can start exploring it and keeping the pieces you want. Kind of like a notebook, but without the linear execution flow. I feel like there is this giant effort to build a foundation model of everything, but most people who analyse data don't want to just dump it into a model and click predict; they have some interest in understanding the relationships in the data themselves.
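
A toy sketch of what "notebook without linear execution" could mean: cells as a dependency graph, where asking for one result recomputes only the cells it needs. Everything here is made up for illustration:

    # Toy reactive "cell" graph: each cell declares its inputs, and asking
    # for one output recomputes only its dependencies, in dependency order.
    cells, cache = {}, {}

    def cell(name, deps=()):
        def register(fn):
            cells[name] = (deps, fn)
            return fn
        return register

    def value(name):
        if name not in cache:
            deps, fn = cells[name]
            cache[name] = fn(*(value(d) for d in deps))
        return cache[name]

    @cell("raw")
    def raw():
        return [3, 1, 4, 1, 5, 9]

    @cell("cleaned", deps=("raw",))
    def cleaned(xs):
        return [x for x in xs if x > 1]

    @cell("summary", deps=("cleaned",))
    def summary(xs):
        return {"n": len(xs), "mean": sum(xs) / len(xs)}

    print(value("summary"))  # runs only the cells this result depends on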

robfitz 5 days ago | parent | prev | next [-]

An extremely eye-opening comment, thank you. I haven't played with the image generators for ages, and hadn't realized where the workflows had gotten to.

Very interesting to see differences between the "mature" AI coding workflow vs. the "mature" image workflow. Context and design docs vs. pipelines and modules...

I've also got a toe inside the publishing industry (which is ridiculously, hilariously tech-impaired), and this has certainly gotten me noodling over what the workflow there ought to be...

ch4s3 5 days ago | parent | prev [-]

I've tried at least 4 other tools/SaaSes and I'm just not seeing it. I've tried training models in other tools with input images, sketches, and long prompts built by other LLMs, and the output is usually really bad if you want something even remotely novel.

Aside from the terrible name, what does ComfyUI add? This[1] all screams AI slop to me.

[1] https://www.comfy.org/gallery

LelouBil 5 days ago | parent [-]

It's a node-based UI. So you can use multiple models in succession, apply them to parts of the image, or include a sketch like the person you're responding to said. You can also add stages that manipulate your prompt.

Basically it's way beyond just "typing a prompt and pressing enter"; you control every step of the way.

ch4s3 5 days ago | parent [-]

right, but how is it better than Lovart AI, Freepik, Recraft, or any of the others?

withinboredom 5 days ago | parent [-]

Your question is a bit like asking how a word processor is better than a typewriter... they both produce typed text, but otherwise not comparable.

ch4s3 5 days ago | parent | next [-]

I'm looking at their blog[1] and yeah it looks like they're doing literally the exact same thing the other tools I named are doing but with a UI inspired by things like shader pipeline tools in game engines. It isn't clear how it's doing all of the things the grandparent is claiming.

[1] https://blog.comfy.org/p/nano-banana-via-comfyui-api-nodes

lelandbatey 5 days ago | parent | next [-]

The killer app of ComfyUI, and node-based editors in general, is that they let "normal people" do programmer-like, almost script-like things. In short: you get better repeatability and appropriate flexibility/control. Control, because you can chain several operations in isolation, tweak each operation individually, and stack them to achieve the desired result. Repeatability, because you can get the "algorithm" (the sequence of steps) right for your needs and then start feeding different input images in to repeat an effect.
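
In code terms, the repeatability point is just: get the graph right once, then loop over inputs. A hypothetical sketch against ComfyUI's API, where the filename and the node ids ("3", "6") are placeholders from a graph exported with ComfyUI's "Save (API Format)" option:

    import json
    import urllib.request

    # Same graph, different inputs: tweak a couple of node values and
    # re-run the whole pipeline for each subject.
    workflow = json.load(open("sketch_pipeline.json"))

    for seed, subject in enumerate(["lighthouse", "windmill", "water tower"]):
        workflow["3"]["inputs"]["text"] = f"ink sketch of a {subject}"  # prompt node
        workflow["6"]["inputs"]["seed"] = seed                          # sampler node
        req = urllib.request.Request(
            "http://127.0.0.1:8188/prompt",
            data=json.dumps({"prompt": workflow}).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)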

I'd say that ComfyUI is like Photoshop vs. Paint: layers, non-destructive editing, those are all things whose effects you could replicate with Paint and skill, but by adopting Photoshop's more advanced concepts you can work faster and make changes more easily than in Paint.

So it is with node based editing in nearly any tool.

qarl 5 days ago | parent | prev [-]

There's no need to belittle dataflow graphs. They are quite a nice model in many settings. I daresay they might be the PERFECT model for networks of agents. But time will tell.

Think of it this way: spreadsheets had a massive impact on the world even though you can do the same thing with code. Dataflow graph interfaces provide a similar level of usefulness.

ch4s3 4 days ago | parent [-]

I'm not belittling it; in fact I pointed to a place where they work well. I just don't see how in this case it adds much over the other products I mentioned, which in some cases offer similar layering with a different UX. It still doesn't really do anything to help with style cohesion across assets or the nondeterminism issues.

qarl 4 days ago | parent [-]

Hm. It seemed like you were belittling it. Still seems that way.

dgfitz 5 days ago | parent | prev [-]

Interesting, have you used both? A typewriter types when the key is pressed; a word processor sends an interrupt through the keyboard into the interrupt device through a bus, and from there it's 57 different steps until it shows up on the screen.

They’re about as similar as oil and water.

withinboredom 5 days ago | parent [-]

I have! And the non-comparative nature was exactly the point I was trying to make.