elif 4 days ago

As someone who has tried to go through blender tutorials for multiple days, I can tell you, there is no chance I can get close to any of these examples.

I think you might be projecting your abilities a bit too much.

As someone who wants to make and use 3d models, not someone who wants to be a 3d model artist, this tech is insanely useful.

echelon 4 days ago | parent | next [-]

0.0001% of the population can sculpt 3D and leverage complex 3D toolchains. The rest of us (80% or whatever - the market will be big) don't want to touch those systems. We don't have the time, patience, or energy for it, yet we'd love to have custom 3D games and content quickly and easily. For all sorts of use cases.

But that misses the fact that this is only the beginning. These models will soon generate entire worlds. They will eventually surpass human modeller capabilities and they'll deliver stunning results in 1/100,000th the time. From an idea, photo, or video. And easy to mold, like clay. With just a few words, a click, or a tap.

Blender's days are numbered.

I'm short on Blender, Houdini, Unreal Engine, Godot, and the like. That entire industry is going to be reinvented from scratch and look nothing like what exists today.

That said, companies like CSM, Tripo, and Meshy are probably not the right solutions. They feel like steam-powered horses.

Something like Genie, but not from Google.

fwip 4 days ago | parent | next [-]

> These models will soon generate entire worlds.

They may. It's hard to expect this when we already see LLMs plateauing at their current abilities. Nothing you've said is certain.

scotty79 3 days ago | parent | next [-]

I don't see them plateauing. I see that they are in their infancy. So far, AI people have mostly just been doing the dumbest possible thing that turned out to work, with very limited understanding, insight, and innovation. At some point they will have bought all the GPUs and all the GWh they can, and will be forced to actually figure out how to really improve what they are doing. Then the real breakthroughs will start showing up. There are probably improvements of 3-4 orders of magnitude waiting right behind the finish line of the low-hanging-fruit-picking contest.

d0100 4 days ago | parent | prev [-]

AI will just be cheaper procedural environments

weregiraffe 4 days ago | parent | prev | next [-]

> That entire industry is going to be reinvented from scratch

Hey, I heard that one before! The entire financial industry was supposed to have been reinvented from scratch by crypto.

pomtato 4 days ago | parent [-]

Well, it kinda did change things up a bit. Being able to receive payments across borders without significant delay or crazy fees is a decent perk. You can hate crypto culture and the grifters trying to make a quick buck, but its applications are very real.

ares623 4 days ago | parent [-]

Won’t you get taxed on “gains” when you do that and then eventually convert to fiat?

I was considering this path a few years ago but all my research pointed to me being taxed for moving my own money from one country to another. Which would’ve cost significantly more than a good ol’ bank transfer. (I needed the fiat on the other end)

My understanding was that, as far as the receiving bank is concerned, the converted crypto would’ve appeared out of an investment/trading platform and needed to be taxed.

The bank transfer cost like a couple of bucks anyway so it wasn’t worth the risk of trying the crypto route in the end for me.

charcircuit 4 days ago | parent [-]

If you use stablecoins there will be no gains or losses.

catlifeonmars 4 days ago | parent | prev | next [-]

> These models will soon generate entire worlds. They will eventually surpass human modeller capabilities and they'll deliver stunning results in 1/100,000th the time. From an idea, photo, or video. And easy to mold, like clay. With just a few words, a click, or a tap.

This is a pretty sweeping and unqualified claim. Are you sure you’re not just trying to sell snake oil?

weregiraffe 4 days ago | parent [-]

I'm sure he is just trying to sell snake oil.

echelon 4 days ago | parent [-]

I've been predicting this since Deep Dream (which feels like a century ago) and HN loves to naysay.

I claimed three years ago that AI would totally disrupt the porn and film industries and we're practically on the cusp of it.

If you can't see how these models work and can't predict how they can be used to build amazing things, then that's on you. I have no reason to lift up anybody that doubts. More opportunity on the table.

bigyabai 4 days ago | parent | next [-]

FWIW I'm a 3D modeller (hard surface Blender modelling, ~10yrs) and I've been reading your comments for a while now. Reality wasn't disrupted quite as far as you suggested, most of the naysayers that advised restraint under your comments have largely been proven right. Time and time again, you made enormous claims and then refused to back them up with evidence or technical explanations. We waited just like you asked, and the piper still isn't paid.

Have you ever asked yourself why this revolution hasn't come yet? Why we're still "on the cusp" of it all? Because you can't push a button and generate better pornography than what two people can make with a VHS camera and some privacy. The platonic ideal of pornography and music and film and roleplaying video games and podcasting is already occupied by their human equivalent. The benchmark of quality in every artistic application of AI is inherently human, flawed, biased and petty. It isn't possible to commoditize human art with AI art unless there's a human element to it, no matter how good the AI gets.

There's merit to discussing the technical impetus for improvement (which I'm always interested in discussing), but the dependent variables here seem exclusively social; humanity simply might never have a Beatlemania for AI-generated content.

nl 4 days ago | parent | next [-]

I don't work in the field, but I observe it pretty closely, and comments like this remind me of the people I spoke to in the 1990s who said that Windows and Intel would never replace their Unix workstations.

Right now, if I go on LinkedIn, most header images on people's posts are AI-generated. On video posts it's a lot less, but we are beginning to see it now. The static-image transition has taken maybe 3 years? The video transition will probably take about the same.

There's a set of content where people care about the human content of art, but there is a lot of content where people just don't care.

The thing is that there is a lot of money in generating this content. That money drives tool improvement and those improved tools increase accessibility.

> Have you ever asked yourself why this revolution hasn't come yet?

We are in the middle of the revolution which makes it hard to see.

echelon 4 days ago | parent | prev [-]

I hope the walls don't cave in on you. Eyes up. My friends in VFX are adopting AI workflows and they say that it's essential.

> Why OnlyFans May Sell for 75% Less Than It’s Worth [1, 2]

> Netflix uses AI effects for first time to cut costs [3]

Look at all of the jobs Netflix has posted for AI content production [4].

> Gabe Newell says AI is a 'significant technology transition' on a par with the emergence of computers or the internet, and will be 'a cheat code for people who want to take advantage of it' [5]

Jeffrey Katzenberg, the cofounder of DreamWorks [6]:

> "Well, the good old days when, you know, I made an animated movie, it took 500 artists five years to make a world-class animated movie," he said. "I don't think it will take 10% of that three years out from now," he added.

I can keep finding no shortage of sources, but I don't want to waste my time.

I've brushed shoulders with the C-suite at Disney and Pixar and talked at length about this with them. This world is absolutely changing.

The best evidence is what you can already see.

[1] https://www.theinformation.com/articles/onlyfans-may-sell-75...

[2] https://archive.is/Xndzx

[3] https://www.bbc.com/news/articles/c9vr4rymlw9o

[4] https://explore.jobs.netflix.net/careers?query=Machine%20Lea...

[5] https://www.pcgamer.com/software/ai/gabe-newell-says-ai-is-a...

[6] https://www.yahoo.com/entertainment/cofounder-dreamworks-say...

topato 4 days ago | parent [-]

Frankly, that is all just speculative, once again. AI is hitting a significant roadblock. Look at how disappointing GPT-5 was. No amount of compute is ever going to live up to the hype behind those quotes.

The C-suite who don't realize how wrong they are about AI's potential are going to be facing a harsh reality. And artists will be the first to be hurt by their HYPE TRAIN management style and mindset.

Edit: most of all, the 3D generation in this LL3M model is about the same as the genAI 3D models from a year ago... and two years ago... A good counterpoint would be Tubi's recently released, mostly AI-generated short films. They were garbage and looked like garbage.

Netflix's foray, if memory serves, was a single scene where a building collapses. Hardly industry-shattering. And 3D modeling and genAI images/videos are substantially different.

mlinhares 4 days ago | parent [-]

The only consequence they will be facing is being parachuted out with boatloads of money after they have failed to deliver on their magical promises.

vrighter 4 days ago | parent | prev | next [-]

"On the cusp" means nothing. We are on the cusp of AGI, Tesla Autopilot, cryptocurrency taking over, achieving nuclear fusion, and a bunch of other things. Companies don't sell working products anymore; they sell products that are "on the cusp of working."

We have been on the cusp of some things for literal decades.

imtringued 4 days ago | parent | prev | next [-]

Your prediction compresses 24 hours into a single second or a single day of work into a third of a second. How exactly do you expect to be proven right when just the network latency alone will eat a big chunk of that time?

You'll literally be proven wrong simply because the AI will take time to generate things even if the quality of the output is high.

lelanthran 4 days ago | parent | prev | next [-]

> I claimed three years ago that AI would totally disrupt the porn and film industries and we're practically on the cusp of it.

Meh. We were on the cusp 5 years ago. Five years later, we're still on the cusp?

Maybe I'm working with a different meaning of "cusp", but to me "On the cusp of $FOO" means that there is no intervening step between now and $FOO.

The reality is that there are uncountable intervening steps between now and "film industry disrupted".

weregiraffe 4 days ago | parent | prev | next [-]

> practically on the cusp of it.

Two Girls One Cusp.

_0ffh 4 days ago | parent | prev [-]

> More opportunity on the table.

Hate to disappoint you, but as the models get better, and eventually deliver the results, you won't have to wait a microsecond until the masses roll in to take advantage.

mxmilkiib 4 days ago | parent | prev | next [-]

Blender will just add AI creation/editing

ilaksh 4 days ago | parent | next [-]

There are probably already a bunch of Blender Add-Ons or extensions that build with AI that are in the approval queue and just being ignored. https://extensions.blender.org/approval-queue/

echelon 4 days ago | parent | prev [-]

Why bolt magic rocket boosters onto a horse?

That's like saying we'll add internet browsing and YouTube to Lotus 1-2-3 for DOS.

It's weird, legacy software for a dying modality.

Where we're going, we don't need geometry node editing. The diffusion models understand the physics of optics better than our hand-written math. And they can express it across a wide variety of artistic styles that aren't even photoreal.

The future of 3D probably looks like the video game "Tiny Glade" mixed with Midjourney and Google Genie. And since it'll become so much more accessible, I think we'll probably wind up blending the act of creation with the act of consuming. Nothing like Blender.

HappyPanacea 4 days ago | parent [-]

> The diffusion models understand the physics of optics better than our hand-written math.

How well will they do with something like creating two adjacent mirrors with 30 degree angle between them with one of them covered with varying-polarized red tinted checkerboard pattern?

echelon 4 days ago | parent [-]

I don't know. Diffusion is weird and a little uneven.

They do a better job of fluid sim than most human efforts to date. And that's just one of thousands of things they do better than our math.

topato 4 days ago | parent [-]

Haha, despite the thousands of instances where it DOESN'T simulate correctly. Very specifically chosen, 10000-shot generated videos show somewhat impressive fluid physics... and even then, it MUST be something it's seen before. Diffusion is in NO way modeling physics in a realistic manner; there is not an infinite amount of training data to cover all fluid dynamics...

Now I know you're too far down the hype rabbit hole. Either that, or you lack a cursory understanding of diffusion models.

jayd16 4 days ago | parent | prev | next [-]

I think you'll come to realize that the gap between people willing to learn Blender today and people who want to generate models but won't learn how is razor thin.

What's the use case for generating a model if all the modelling tools and game engines are gone?

numpad0 4 days ago | parent | next [-]

All these pro-AI framings hinge on the claim that people can't tell AI output apart from human art. That's like saying that because someone doesn't know what an improper bounds check is, the code must be secure. It's just broken logic.

HappyPanacea 4 days ago | parent | prev | next [-]

>What's the use case of generating a model if all modelling and game engines are gone?

Because an LL3M-style technique will probably be cheaper and better (fidelity-, consistency-, and art-direction-wise) than generating the entire video/game with a video generation model.

echelon 4 days ago | parent | prev [-]

> you'll come to realize

No. The Roblox of this space is going to make billions of dollars.

There's going to be so much money to make here.

topato 4 days ago | parent [-]

So, an AI-generated pseudo-game engine with a majority of users under the age of 13? I'm sure that WILL make a lot of money. Those of us who didn't grow up playing Roblox will find this comparison impossibly stupid.

Somewhat related: I'm still amazed that no one has made a Roblox competitor, as in, a vague social building game that tricks children into wasting money on ridiculous MTXs. Maybe you are right, but I think taking an already sorry state of affairs, and then removing the only imagination or STEM skills required by giving children access to GenAI... is a really depressing thought.

I kinda meandered with my point lol.

x-complexity 4 days ago | parent [-]

> So, an AI-generated pseudo-game engine with a majority of users under the age of 13? I'm sure that WILL make a lot of money. Those of us who didn't grow up playing Roblox will find this comparison impossibly stupid.

I'm ignoring the insinuations here for obvious reasons.

1. Roblox is the newest (note: not necessarily the best) iteration of the genre that Secondlife & (to a limited extent) modded Minecraft servers occupy: An interactive 3D platform that permits user-generated content.

2. Generative models just get their development to the brick wall of complexity much faster.

> Somewhat related: I'm still amazed that no one has made a Roblox competitor

This comment is just the HN Dropbox phenomenon, *again*, only this time from the angle that thinks it's easy to build a "pseudo game-engine" from scratch.

https://news.ycombinator.com/item?id=8863

Few competitors exist because of the moat they have built in making their platform easy to develop on, so much so that kids can use them with little issue.

> ...as in, a vague social building game that tricks children into wasting money on ridiculous MTXs.

This part is entirely separate from the technical aspects of the platform. Roblox is a feces-covered silver bar, but the silver bar (their game platform) still exists.

> Maybe you are right, but I think that taking an already sorry state of affairs, and then removing the only imagination or STEM skills required by giving children access to GenAI.... is a really depressing thought.

This is a hyper-nihilistic opinion on children laid bare.

To think that children (*with the dedication to make a game in the first place*) wouldn't try to learn how to debug the code the models spit out, or that 100% of them would just stop writing their own code entirely, is a cynical viewpoint not worth any attention.

srid 4 days ago | parent | prev | next [-]

This reminds me of Elon Musk's recent claims on the future of gaming:

    This – but in real-time – is the future of gaming and all media
https://x.com/elonmusk/status/1954486538630476111
darepublic 4 days ago | parent [-]

Just don't slap a release year on this future and I'll be compelled to agree

Etherlord87 4 days ago | parent | prev | next [-]

The only sculpting example I see is the very first hat. Do you want to tell me you wouldn't be able to sculpt that?

I perfectly understand the time/patience/energy argument and my bias here. But even Spore (video game) editor with all its limitations gives you a similar result to the examples provided, and at least there you are the one giving the shape to your work, which gives you more control, and your art more soul, and moreover puts you on a creative path where the results are getting better.

Will the AI soon surpass human modellers? I don't know... I hear so much hype for AI, and I have fallen victim to it myself: I spent quite some time trying to use AI for some serious work, and guess what - it works as a search engine. It will give me an ffmpeg command that I could duckduckgo anyway; it will give me an AutoHotkey script that I could figure out myself after a quick search; etc.

The LLM fails even at the tasks that seem optimal for it. I have tried multiple times to translate movie subtitles with it, and while the translation was better than machine translation, at some point the AI goes crazy and decides to change the order of scenes in the movie - something I couldn't detect until I watched the movie with friends, so it was a critical failure. I described a word, and the AI failed to give me the word I couldn't remember, while a simple search on a thesaurus succeeded. I described what I remembered from a quote, and the AI failed to give me the quote, but my google-fu was enough to find it.

You probably know how to code, and would cringe if someone suggested you just ask the AI to write the code for a video game without knowing how to code yourself, at least enough to supervise and correct it. And yet you think 3D modelling will be good enough without the intervention of a 3D artist. Maybe, but as someone experienced in 3D, I just don't see it - just like I don't see AI making Hollywood movies, even though a lot of people claim it's a matter of years before that becomes reality.

Instead, what I see is AI slop everywhere, and I'm sure video games will be filled with AI crap, just like a lot of places were filled with machine-learning translations because Google seriously suggested at its conferences that the translations were good enough (and for someone who speaks only English, the Dunning-Kruger effect kicks in).

Sure, eventually we might have AGI and humanity will be obsolete. But I'm not a fan of extrapolating hyperbolic data; one YouTuber estimated that in a couple of decades Earth will be visited by aliens, because by then there won't be enough Earthlings to account for his extrapolated channel viewership stats.

numpad0 4 days ago | parent | prev [-]

100% of the population has all the tools needed, plus ChatGPT for free, to write a novel. Only 0.0001% can complete even a short story - they often can't hold a complete and consistent plot in their head.

"AI allows those excluded from the guild" is total BS.

Gut figures: ~85% of creativity comes from skill itself, and ~10% or so comes from prior art. And it's all multiplied by willingness, in [0, 1], for which >99.9999% of the population has a value << 0.0001. Tools just don't change that; they just weigh down on the creativity part.

spookie 4 days ago | parent | prev | next [-]

I'm surprised if anything.

All the examples are really just primitives, either extruded in one step or left as-is, with maybe five of them put together.

I don't want to sound mean but these are reachable with just another day at it. They really are.

leviathant 4 days ago | parent | next [-]

>I don't want to sound mean but these are reachable with just another day at it. They really are.

Semi-related, understanding Sketchup took a couple of false starts for me. The first time I tried it, I could not make heads or tails of what I was doing. I must have spent hours trying to figure out how to model a desk, and I gave up. Tried again a year or two later, and it just didn't click.

The third try, a couple years later, it suddenly made sense, and what started as modeling a new desk turned into modeling my room, then modeling the whole house, and now I've got a rough model of my neighborhood. And it's so easy once you know how - there's obviously a rabbit hole of detail work one can fall into, but the basics aren't bad.

charcircuit 4 days ago | parent | prev [-]

This is like for 2d art saying line art is just using the pen tool. Sure anyone can reproduce a single stroke, but figuring out what strokes to make has such a high skill ceiling.

spookie 4 days ago | parent [-]

No, the meshes involved are in the same ballpark as children's drawings are for 2D art.

I'm sure the most difficult part here is just understanding Blender's UI. Clearly more difficult than picking up a pencil, but a tutorial video should suffice.

For the chair example, you pick a face on the default cube and then use the extrude tool on the left. Now you have a base.

Add 4 more cubes, and do the same. Now you have legs.

Then boolean them.

For the hat? Use a sphere, go to the sculpt tab and go ham.

There are way better ways to do this, of course. But really, there is not such a high degree of skill involved here, and being just a little more patient (one more day of trying) is not that much to ask.
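The construction above can even be sketched as a script; this is just an illustrative stand-in in plain Python (the `box` helper and all coordinates are made up for the example - it is not Blender's actual API):

```python
def box(cx, cy, cz, sx, sy, sz):
    """Axis-aligned box as (center, size) - a stand-in for a Blender cube primitive."""
    return {"center": (cx, cy, cz), "size": (sx, sy, sz)}

# Seat: the default cube with one face extruded into a flat slab.
seat = box(0, 0, 1.0, 1.0, 1.0, 0.1)

# Legs: four more cubes, one under each corner of the seat.
legs = [box(sx * 0.45, sy * 0.45, 0.5, 0.1, 0.1, 1.0)
        for sx in (-1, 1) for sy in (-1, 1)]

# "Boolean them": in Blender this would be a union; here we just collect the parts.
chair = [seat] + legs
```

Five primitives, a couple of placements - that's the whole model.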

charcircuit 4 days ago | parent [-]

My point is that learning to use the tool is not the part people struggle with. The open-ended nature of creation is what is actually hard. Sure, it may be primitives, but figuring out what primitives are needed, what dimensions they need, and where they should go is not easy. Every time I attempt sculpting, whatever I do turns into an abomination. That's what happens when I go ham. Not everyone with one day of practice is going to be able to deconstruct what they have in their mind into the parts to create, or the steps to take, to get it to look right.

Etherlord87 4 days ago | parent | prev [-]

Wrong tutorials. A lot of these models consist of just taking a primitive like a sphere and scaling it, then creating another primitive, scaling it, and moving it, so you have overlapping hulls ("bad" topology). Then in shading you just create a default material and set its color.

There are models in the examples that require e.g. extrusion (which is literally: select faces, press E, drag mouse).

Some shapes are smoothed/subdivided with the Catmull-Clark Subdivision Surface modifier, which you can add simply by pressing CTRL+2 in Object Mode (the digit is the number of subdivisions; basically use 1 to 3, though you may set more for renders).

Here's a good, albeit old tutorial: https://www.youtube.com/watch?v=1jHUY3qoBu8

Yes, I made some assumptions when estimating it takes about a day to learn to make models like this: you have a free day to spend entirely on learning, and as a Hacker News user your IQ is above average and you're technically savvy. And one last assumption: you learn the required skills evenly, rather than going deep into a rabbit hole like correct topology. If you go through something like Andrew Price's doughnut tutorial, it may take more than a day, especially if you play around with the various functions of Blender rather than strictly following the videos - but you will end up making significantly better models than the examples presented, e.g. you will know to inset a cylinder's ngons to avoid the Catmull-Clark subdiv artifacts you can see in the 2nd column of hats.

> this tech is insanely useful.

No, it isn't, but you don't see it, because you don't have enough experience to see it (Dunning-Kruger effect) - this is why I mentioned my experience, not to flex but to point out I have the experience required to estimate the value of this tool.

dang 4 days ago | parent | next [-]

> No, it isn't, but you don't see it, because you don't have enough experience to see it (Dunning-Kruger effect)

That crosses into personal attack. Please don't do this. You can make your substantive points without it.

https://news.ycombinator.com/newsguidelines.html

xtracto 4 days ago | parent | prev | next [-]

It's amazing how little understanding some people with "a gift" for certain skills have.

I play guitar; it's easy and I enjoy it a lot. I've taught some friends to play it, and some of them just... don't have it in them.

Similarly, I've always liked drawing/painting and 3D modeling. But for some reason, that part of my brain is just not there. I just can't do visualization. I've even tried award-winning books (Drawing on the Right Side of the Brain) without success.

Way back in the day I tried 3D modeling with AW Maya, 3D Studio Max, and then Blender. I WANTED to convert a sphere into a nice warrior; I was dying to make 3D games: I had the C/C++ part covered, as well as the OpenGL one. But I couldn't model a trash can, even after following all the tutorials and books.

This technology solves that for those of us who don't have that gift. I understand that for people who can "draw the rest of the fking owl" it won't look like much, but darn, it opens up a world for me.

maplethorpe 4 days ago | parent | next [-]

I'm similar, honestly. I've spent countless hours trying to become good at drawing and 3D modeling, but I lack the ability to see something clearly in my mind's eye, and it feels like that's always held me back.

The thing is, I've actually worked as a 3D artist for a number of years. Some people even tell me I'm good. I suppose if that's true at all, it's because I've learned to use the computer to do the visualizing for me.

For some other artists, their process seems to be that they first picture a 'target' image in their mind, and then take steps towards that target until the target is reached. That seems impossible to me -- supernatural stuff. I almost don't believe they can really do it.

My process is closer to first finding some reference images, then taking a step in a random direction and asking whether I'm closer or further away from those references. I'm not necessarily trying to copy the references exactly, I'm just trying to match their level of quality. Then I take another random step, and check again. If you repeat this process enough times, you'll edge closer and closer to something that looks good. You'll also develop a vague sense of 'taste', around which random movements tend to produce more favourable results, and which random movements tend to produce more ugly results. It's a painful process, but it's doable.

I guess what I'm trying to say is that the ability to visualize isn't a prerequisite for 3D modeling.
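That reference-guided trial and error is essentially greedy hill climbing. A toy sketch, where a point in 2D stands in for the work and Euclidean distance stands in for "does this look closer to the reference?" (all names and numbers are illustrative):

```python
import random

def distance(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def hill_climb(start, reference, steps=2000, step_size=0.05):
    """Take random steps, keeping only the ones that land closer
    to the reference - greedy hill climbing."""
    current = list(start)
    for _ in range(steps):
        # A step in a random direction.
        candidate = [x + random.uniform(-step_size, step_size) for x in current]
        # Keep the step only if it reduced the distance to the reference.
        if distance(candidate, reference) < distance(current, reference):
            current = candidate
    return current
```

Repeated enough times, the random steps edge ever closer to the target, even though no single step "sees" the whole picture - much like the process described above.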

Etherlord87 4 days ago | parent | prev [-]

I can agree with this. If someone has some kind of disability, like aphantasia (I don't know if it really applies here, as you can look at a reference image) then perhaps the tool is useful. The thing is, none of the examples presented in this particular AI tool are stuff that require hard 3D-related skills e.g. knowing human anatomy.

I wish I could see you struggling to model a trash can and see if maybe you didn't have too high requirements for the quality of said trash can. After all it's just taking a cylinder, insetting the top face and extruding it down, and the top you can model in the exact same way. The rest is detail that the AI tool in question is terrible at. https://i.imgur.com/xeFrgpP.gif

xtracto 3 days ago | parent [-]

Hah! You should have seen me "drawing" a coffee cup that was in front of me at a drawing class: the cup was sitting there, I was looking at it, and supposedly I was drawing what I saw. The teacher came over and told me: squint your eyes, draw "lights and shadows". Theoretically, I did that, but my cup just didn't look like the others, haha.

The teacher then asked me for my pencil and started making some adjustments to my drawing. The shitty cup just came alive with a few touches here and there. All I could ask was HOW??? How did she SEE that?

The book Drawing on the Right Side of the Brain goes over it: a lot of us who are strongly left-brained see a cup and "abstract" away the forms. We are constantly drawing "lines" (like drawing a stick-figure person: the head is a circle, the body is a line, a girl's skirt is a triangle, etc.) and just cannot get past that reasoning in our brain.

Etherlord87 3 days ago | parent [-]

I remember getting the same piece of advice from the teacher. Problem is, even before getting it, I was already applying it, being a rare kid experienced in computer graphics. The teacher was just repeating a phrase she heard somewhere, without actual competence to direct me.

The way I see, and I think the way most people see, is that I have subpixels - not distributed in a square grid, and small and numerous enough that I can't count them - but I can see them when I close my eyes. It's somewhat similar to looking at colored noise - something like this: https://i.imgur.com/1P3n80k.gif - except you would have to display it at a ridiculously high resolution (I don't know, 64K or maybe more), and it would represent just a small fragment of the view.

Of course this unordered constellation of cones can be mapped onto a grid of pixels or a space on paper, so the only problem is that I can't make measurements in my head: I need to calibrate my "eye-balling" to figure out where on the paper to put what I see. I deal with it typically by imagining vertical and horizontal lines subdividing my view, and then likewise subdividing the paper.

So I don't really have a problem drawing what I see; the problem I have is the missing technique of how to use a pencil to draw what I actually want to draw.

I think most people work the same way but apparently you don't?

cthlee 4 days ago | parent | prev [-]

Most of these 3D asset generation tools oversimplify things down to stacking primitives and call it modeling, which skips fundamentals like extrusion, subdivision, and proper topology. If they wanted to make a tool actually worthwhile, what do you think the core features should be? It would be great if it enforced clean topology and streamlined subdivision workflows, but given your experience I'm curious what you'd consider essential.

Etherlord87 4 days ago | parent | next [-]

I could probably write a book to answer this question :D Blender has gone the "everything nodes" route, and in particular created the Geometry Nodes system. I'm very good with geonodes and pretty much specialize in them, and yet I think the system is severely flawed: the nodes in compositing and shading work very well, but in geonodes they are too low-level. You don't get the easy learning curve of the usual node systems (learning geonodes is very hard), while you get all the annoyances of using a very shoddy programming language, plus having to manage node positioning and fight with noodles...

...And here's where AI comes into play, if AI could be contained into steps:

- Input node: describe where the starting data comes from, and the AI automatically loads a file from the hard drive or the Internet, or generates a primitive
- Select node: describe a pattern by which to select elements of the geometry
- Modify Geometry node: perhaps this should be split into multiple nodes, as there are so many ways to modify geometry
- Sample/connect data node: create an attribute and describe its relation to something else, to create an underlying algorithm populating this attribute
- Save node: do you want to output the data through the usual pipeline, export to a file, or save to a simulation cache?

This way AI could do low-level stuff that I think it excels at, because this low-level stuff is so repeatable AI can be well trained on it. While the high-level decision making would be in control of an artist.
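A minimal sketch of what such a stepwise pipeline could look like as data - everything here (node kinds, prompts, the `build_pipeline` helper) is hypothetical, not real Blender or geonodes API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One step in the proposed AI-assisted geometry pipeline (hypothetical)."""
    kind: str    # "input", "select", "modify", "sample", or "save"
    prompt: str  # natural-language description the AI turns into low-level ops

def build_pipeline(*nodes):
    """Validate node kinds and chain the nodes in order."""
    allowed = {"input", "select", "modify", "sample", "save"}
    for n in nodes:
        if n.kind not in allowed:
            raise ValueError(f"unknown node kind: {n.kind}")
    return list(nodes)

pipeline = build_pipeline(
    Node("input", "generate a UV sphere primitive"),
    Node("select", "faces within 30 degrees of +Z"),
    Node("modify", "extrude selected faces 0.2m along their normals"),
    Node("save", "output through the usual modifier pipeline"),
)
```

The artist stays in charge of the high-level structure (which nodes, in what order), while the AI fills in only the low-level, well-trodden step each prompt describes.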

quikoa 4 days ago | parent | prev [-]

These models could still be useful when rendered, but probably less so when animated or used in a game. Maybe as a prototype, to get funding and hire an artist.

jdiff 4 days ago | parent [-]

There are countless troves of CC-licensed assets that would be better suited.

Etherlord87 4 days ago | parent [-]

I'd like to see an AI trained to search for an asset!