Show HN: DoodleMate: Animate Your Child's Hand Drawings Without Generative AI (doodlemate.com)
6 points by hjessmith 5 hours ago | 7 comments

Hi HN!

I made an app that takes a photo of a paper drawing and, in a handful of seconds, creates a fully rigged character that can be used in an animation or little story. It doesn’t use any image-to-video generative AI models. Instead, I built it using the years of insights I’ve picked up studying children’s drawings and character animation.

Today we’re releasing a community beta. I respect this community and would value any feedback you offer. It’s easy to try: you don’t need to create an account to check it out. We’ve got several free stories to drop your character into, and a Mother’s Day eCard.

I’m also working on a tool, DoodleMate Studio, that will let people author their own stories instead of using premade templates. What form that takes will depend heavily on the feedback we get from the community during this beta.

How this came to be:

I’ve worked in this space for a while. Here’s an old HN post related to a popular tech demo I did (https://news.ycombinator.com/item?id=30469321) and another from when I open-sourced the data and code (https://news.ycombinator.com/item?id=35561203). I also wrote a SIGGRAPH paper about the methodology (https://dl.acm.org/doi/full/10.1145/3592788).

I’d moved on to other things, but had always felt like there was such potential in this space. Last year I decided I was over big tech and, with a lot of encouragement from my family, finally decided to pursue this seriously. Since then, my wife and I have been building this together. We’re bootstrapping at the moment, trying to give ourselves time and space to make sure DoodleMate turns into something wonderful and wholesome.

Thanks, Jesse

doormatt 4 hours ago | parent | next

Why does your about page say this then?

>I want to build something with a low floor and a high ceiling; a tool that uses AI to smooth out the hardest parts of traditional animation, while letting creators keep full ownership of what they make.

And your terms of service says:

>We provide an online platform that allows users to upload images, drawings, and artwork and use our proprietary AI-powered technology to transform those images into animated videos. You may upload image files, customize certain settings related to the animation output, view and download the resulting videos, and manage their uploaded content and account. We reserve the right to establish limits on the file types, file sizes, and number of uploads permitted per user, and to modify those limits at any time. All video generation is performed automatically through a combination of AI and computer graphics technology.

hjessmith 4 hours ago | parent

That's a good question. The distinction is between generative AI (which takes in a prompt or image and generates every pixel of a new video) and non-generative AI models (e.g. classifiers, segmentation models, and pose estimation models). The second category helps us to infer characteristics about the input drawing, but it doesn't try to 'recreate' anything.

We use non-generative AI models to quickly auto-rig the character when it's uploaded. In a traditional computer graphics animation pipeline this rigging would be done by hand and would be slow; we use these models to speed that step up. The resulting animations don't use any AI at all (generative or otherwise).
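To make the distinction concrete, here's a minimal sketch (not our actual pipeline; the joint names and coordinates are hypothetical) of what happens after a pose-estimation model returns keypoints: the animation step is ordinary 2D graphics math, rotating bones of a skeleton, with no pixels being generated by a model.

```python
# Minimal sketch of non-generative rigging: a pose estimator supplies
# joint keypoints, and animating is plain computer graphics (rotating
# bones about a pivot). No generative model touches the output.
import math

# Hypothetical keypoints a pose-estimation model might return for a
# drawn figure, as (x, y) image coordinates.
joints = {
    "shoulder": (100.0, 80.0),
    "elbow":    (130.0, 110.0),
    "wrist":    (160.0, 140.0),
}

def rotate_about(point, pivot, angle_rad):
    """Rotate `point` around `pivot` by `angle_rad` (standard 2D rotation)."""
    px, py = pivot
    x, y = point[0] - px, point[1] - py
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (px + c * x - s * y, py + s * x + c * y)

def pose_arm(joints, shoulder_angle):
    """Swing the whole arm: elbow and wrist rotate rigidly about the shoulder."""
    posed = dict(joints)
    for name in ("elbow", "wrist"):
        posed[name] = rotate_about(joints[name], joints["shoulder"], shoulder_angle)
    return posed

# One animation frame: raise the arm by 45 degrees.
frame = pose_arm(joints, math.radians(45))
```

Driving those angles from a motion clip over time gives you an animation where every pixel still comes from the child's original drawing, deformed by the rig.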

doormatt 4 hours ago | parent

Gotcha! Appreciate the clarification!

photonair 2 hours ago | parent | prev | next

Would you say your approach is less flexible and creative vs gen AI then? Because you are bounded by what the pipeline can rig/interpret vs open-ended generation from gen AI. I suppose it does preserve the original authorship better though.

hjessmith an hour ago | parent

That's a good question. I would say that the two approaches are quite different and bring different strengths to the table. The major strength of genAI is that it is open-ended.

But it comes with costs. GenAI video is expensive to generate, and most tools constrain your animation to a handful of seconds, not long enough to tell a real story. You can generate multiple clips and stitch them together, but then you'll run up against another limit of genAI: subject consistency (especially with non-realistic subjects, like doodles).

It's also difficult to finely control genAI outputs, which I argue limits the creative expressivity of the human. And if you generate numerous clips to try to get things perfect, it can get expensive.

Our approach is limited by the motion/visual/audio assets we have access to. But when we release DoodleMate Studio, users will be able to upload their own visuals, record their own audio, capture their own motions, and specify their own high-level story scenes. This should be enough to let people tell expressive and personal stories. And if we get things right, it will also be a lot more fun than refining a prompt.

grvdrm 3 hours ago | parent | prev

I love this.

My older daughter draws so many funny looking characters and stick figures. I’ll show her and I’m sure she will light up seeing her drawings come to life.

hjessmith 2 hours ago | parent

I hope she likes it!

Please let me know how it goes and if there's any functionality she/you would want.