| ▲ | Apple Releases Open Weights Video Model (starflow-v.github.io) |
| 269 points by vessenes 10 hours ago | 75 comments |
| |
|
| ▲ | devinprater 6 hours ago | parent | next [-] |
| Apple has a video understanding model too. I can't wait to find out what accessibility stuff they'll do with the models. As a blind person, AI has changed my life. |
| |
| ▲ | densh 6 hours ago | parent | next [-] | | > As a blind person, AI has changed my life. Something one doesn't see in news headlines. Happy to see this comment. | | |
| ▲ | tippa123 6 hours ago | parent | next [-] | | +1 and I would be curious to read and learn more about it. | | |
| ▲ | swores 4 hours ago | parent | next [-] | | A blind comedian / TV personality in the UK has just done a TV show on this subject - I haven't seen it, but here's a recent article about it: https://www.theguardian.com/tv-and-radio/2025/nov/23/chris-m... | | |
| ▲ | latexr 2 hours ago | parent [-] | | Hilariously, he beat the other teams in the “Say What You See” round (yes, really) of last year’s Big Fat Quiz. No AI involved. https://youtu.be/i5NvNXz2TSE?t=4732 | | |
| ▲ | swores an hour ago | parent [-] | | Haha that's great! I'm not a fan of his (nothing against him, just not my cup of tea when it comes to comedy, and I've mostly not been interested in other stuff he's done), but the few times I have seen him as a guest on shows it's been clear that he's a generally clever person. |
|
| |
| ▲ | joedevon 5 hours ago | parent | prev [-] | | If you want to see more on this topic, check out (google) the podcast I co-host called Accessibility and Gen. AI. | | |
| |
| ▲ | badmonster 6 hours ago | parent | prev | next [-] | | What other accessibility features do you wish existed in video AI models? Real-time vs post-processing? | |
| ▲ | fguerraz 5 hours ago | parent | prev [-] | | > Something one doesn't see in news headlines. I hope this wasn't a terrible pun | | |
| ▲ | densh 40 minutes ago | parent [-] | | No pun intended but it's indeed an unfortunate choice of words on my part. | | |
| ▲ | 47282847 a minute ago | parent [-] | | My blind friends have gotten used to it and hear/receive it not as a literal “see” any more. They would not feel offended by your usage. |
|
|
| |
| ▲ | GeekyBear 2 hours ago | parent | prev | next [-] | | One cool feature they added for deaf parents a few years ago was a notification when it detects a baby crying. | | |
| ▲ | embedding-shape 10 minutes ago | parent [-] | | Is that something you actually need AI for though? A device with a sound sensor and something that shines/vibrates a remote device when it detects sound above some threshold would be cheaper, faster at detection, more reliable, easier to maintain, and more. | | |
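For illustration, the non-AI approach described above is roughly this — a minimal sketch, assuming the third-party `sounddevice` library and a hypothetical `notify()` hook (flash a light, buzz a wearable, etc.); tuning the threshold for the room is the hard part:

```python
# Minimal sketch of a threshold-based sound alert (no ML involved).
# Assumes the `sounddevice` library; notify() is a hypothetical stand-in
# for whatever actually flashes/vibrates the remote device.
import numpy as np
import sounddevice as sd

THRESHOLD_DB = -20.0  # anything louder than this triggers an alert

def notify():
    print("Loud sound detected")

def callback(indata, frames, time, status):
    rms = np.sqrt(np.mean(indata ** 2)) + 1e-12  # avoid log(0)
    level_db = 20 * np.log10(rms)
    if level_db > THRESHOLD_DB:
        notify()

with sd.InputStream(channels=1, samplerate=16000, callback=callback):
    sd.sleep(60_000)  # listen for one minute
```

A crude threshold like this also fires on slammed doors and barking dogs, which is presumably where a model that detects a crying baby specifically earns its keep.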
| ▲ | jfindper a minute ago | parent [-] | | >Is that something you actually need AI for though? Need? Probably not. >would be cheaper, faster detection, more reliable, easier to maintain, and more. Cheaper than the phone I already own? Easier to maintain than the phone that I don't need to do maintenance on? From a fun hacking perspective, a different sensor & device is cool. But I don't think it's any of the things you mentioned for the majority of people. |
|
| |
| ▲ | phyzix5761 6 hours ago | parent | prev | next [-] | | Can you share some ways AI has changed your life? | | |
| ▲ | darkwater 6 hours ago | parent | next [-] | | I guess that auto-generated audio descriptions for (almost?) any video you want is a very, very nice feature for a blind person. | | |
| ▲ | tippa123 5 hours ago | parent | next [-] | | My two cents, this seems like a case where it’s better to wait for the person’s response instead of guessing. | | |
| ▲ | darkwater 5 hours ago | parent | next [-] | | Fair enough. Anyway I wasn't trying to say what actually changed GP's life, I was just expressing my opinion on what video models could potentially bring as an improvement to a blind person. | |
| ▲ | nkmnz 3 hours ago | parent | prev [-] | | My two cents, this seems like a comment it should be up to the OP to make instead of virtue signaling. | | |
| ▲ | tippa123 2 hours ago | parent | next [-] | | > Can you share some ways AI has changed your life? A question directed to GP, directly asking about their life and pointing this out is somehow virtue signalling, OK. | | |
| ▲ | throwup238 2 hours ago | parent [-] | | You can safely assume that anyone who uses “virtue signaling” unironically has nothing substantive to say. |
| |
| ▲ | foobarian 2 hours ago | parent | prev | next [-] | | Yall could have gotten a serviceable answer about this topic out of ChatGPT. 2025 version of "let me google that for you" | |
| ▲ | MangoToupe an hour ago | parent | prev | next [-] | | ...you know, people can have opinions about the best way to behave outside of self-aggrandizement, even if your brain can't grasp this concept. | |
| ▲ | fragmede 2 hours ago | parent | prev [-] | | From the list of virtues, which one was this signaling? https://www.virtuesforlife.com/virtues-list/ |
|
| |
| ▲ | baq 6 hours ago | parent | prev [-] | | Guessing that being able to hear a description of what the camera is seeing (basically a special case of a video) in any circumstances is indeed life-changing if you're blind...? Take a picture through the window and ask what the commotion is? Door closed outside that's normally open - take a picture, tell me if there's a sign on it? etc. |
| |
| ▲ | gostsamo 5 hours ago | parent | prev [-] | | Not the GP, but I'm currently reading a web novel with a card game where the author didn't include alt text in the card images. I contacted them about it and they started adding it, but in the meantime AI was a big help. All kinds of other images on the internet as well, when they are significant to understanding the surrounding text. Better search experience when Google, DDG, and the like make finding answers difficult. I might use smart glasses for better outdoor orientation, though a good solution might take some time. Phone camera plus AI is also situationally useful. | | |
| ▲ | dzhiurgis 5 hours ago | parent [-] | | As a (web app) developer I'm never quite sure what to put in alt. Figured you might have some advice here? | | |
| ▲ | askew an hour ago | parent | next [-] | | One way to frame it is: "how would I describe this image to somebody sat next to me?" | | |
| ▲ | embedding-shape 8 minutes ago | parent [-] | | Important to add for blind people: "... assuming they've never seen anything and visual metaphors won't work". The number of times I've seen captions that wouldn't make sense for people who have never been able to see is staggering; I don't think most people realize how visual our typical language usage is. |
| |
| ▲ | gostsamo 5 hours ago | parent | prev [-] | | The question to ask is: what does a sighted person learn after looking at the image? The answer is the alt text. E.g. if the image is a floppy, maybe you communicate that this is the save button. If it shows a cat sleeping on the windowsill, the alt text is, yep: "my cat looking cute while sleeping on the windowsill". | | |
| ▲ | michaelbuckbee 4 hours ago | parent [-] | | I really like how you framed this: the takeaway or learning that needs to happen is what should be in the alt, not a recitation of the image. Where I've often had issues is more with things like business charts and illustrations and less with cute cat photos. | | |
| ▲ | isoprophlex 3 hours ago | parent | next [-] | | "A meaningless image of a chart, from which nevertheless emanates a feeling of stonks going up" | |
| ▲ | travisjungroth 3 hours ago | parent | prev | next [-] | | It might be that you’re not perfectly clear on what exactly you’re trying to convey with the image and why it’s there. | | |
| ▲ | hrimfaxi an hour ago | parent | next [-] | | What would you put for this? "Graph of All-Transactions House Price Index for the United States 1975-2025"? https://fred.stlouisfed.org/series/USSTHPI | | |
| ▲ | wlesieutre 31 minutes ago | parent [-] | | Charts are one I've wondered about, do I need to try to describe the trend of the data, or provide several conclusions that a person seeing the chart might draw? Just saying "It's a chart" doesn't feel like it'd be useful to someone who can't see the chart. But if the other text on the page talks about the chart, then maybe identifying it as the chart is enough? | | |
| ▲ | embedding-shape 6 minutes ago | parent | next [-] | | What are you trying to point out with your graph in general? Write that basically. Usually graphs are added for some purpose, and assuming it's not purposefully misleading, verbalizing the purpose usually works well. | |
| ▲ | gostsamo 15 minutes ago | parent | prev [-] | | It depends on the context. What do you want to say? How much of it is said in the text? Can the content of the image be inferred from the text part? Even in the best scenario though, giving a summary of the image in the alt text / caption could be immensely useful and include the reader in your thought process. |
|
| |
| ▲ | gostsamo 2 hours ago | parent | prev [-] | | sorry, snark does not help with my desire to improve accessibility in the wild. |
| |
| ▲ | gostsamo 3 hours ago | parent | prev [-] | | The logic stays the same, though the answer is longer and not always easy. Just saying "business chart" is totally useless. You can make a choice about what to focus on and say "a chart of the stock for the last five years with constant improvement and a clear increase of 17 percent in 2022" (if it is a simple point that you are trying to make), or you can provide an HTML table with the datapoints if there is data that the user needs to explore on their own. | | |
| ▲ | nextaccountic 30 minutes ago | parent [-] | | But the table exists outside the alt text, right? I don't know of a mechanism to say "this HTML table represents the contents of this image" in a way that screen readers and other accessibility technologies can take advantage of. | | |
| ▲ | gostsamo 20 minutes ago | parent [-] | | The figure tag can hold both the image and a caption tag that links them. As far as I remember, some content can be marked as screen-reader-only if you don't want the table to be visible to the rest of the users. Additionally, I've recently been a participant in accessibility studies where charts, diagrams and the like have been structured to be easier to explore with a screen reader. Those needed JS to work and some of them looked custom, but they are also an alternative way to layer the data. |
|
|
|
|
|
|
| |
| ▲ | javcasas 4 hours ago | parent | prev [-] | | Finally, good news about AI doing something good for people. | | |
|
|
| ▲ | RobotToaster 6 hours ago | parent | prev | next [-] |
The license[0] seems quite restrictive, limiting its use to non-commercial research. It doesn't meet the Open Source Definition, so it's more appropriate to call it weights-available. [0] https://github.com/apple/ml-starflow/blob/main/LICENSE_MODEL |
|
| ▲ | andersa 11 minutes ago | parent | prev | next [-] |
| The number of video models that are worse than Wan 2.2 and can safely be ignored has increased by 1. |
| |
| ▲ | embedding-shape 5 minutes ago | parent [-] | | To be fair, the sizes aren't comparable, and for the variant that is comparable, the results aren't that much worse. |
|
|
| ▲ | giancarlostoro 17 minutes ago | parent | prev | next [-] |
I was upset the page didn't have videos immediately available, then I realized I had to click on some of the tabs. One red flag on their GitHub is that the license looks to be their own flavor of MIT (though much closer to MS-PL). |
|
| ▲ | yegle 6 hours ago | parent | prev | next [-] |
| Looking at text to video examples (https://starflow-v.github.io/#text-to-video) I'm not impressed. Those gave me the feeling of the early Will Smith noodles videos. Did I miss anything? |
| |
| ▲ | M4v3R 6 hours ago | parent | next [-] | | These are ~2 years behind state of the art from the looks of it. Still cool that they're releasing anything that's open for researchers to play with, but it's nothing groundbreaking. | | |
| ▲ | Mashimo 6 hours ago | parent | next [-] | | But 7B is rather small, no? Are other open-weight video models also this small? Can this run on a single consumer card? | | |
| ▲ | dragonwriter 4 hours ago | parent | next [-] | | > But 7B is rather small, no? Sure, it's smallish. > Are other open-weight video models also this small? Apple's models are weights-available, not open weights, and yes: WAN 2.1 has 1.3B models as well as the 14B models; WAN 2.2 has a 5B model as well as the 14B models (the WAN 2.2 VAE used by Starflow-V is specifically the one used with the 5B model). And because the WAN models are largely actually open weights models (Apache 2.0 licensed), there are lots of downstream open-licensed derivatives. > Can this run on a single consumer card? Modern model runtimes like ComfyUI can run models that do not fit in VRAM on a single consumer card by swapping model layers between RAM and VRAM as needed; models bigger than this can run on single consumer cards. |
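For those curious what "swapping layers between RAM and VRAM" looks like, here is a bare-bones sketch of the idea in PyTorch — a hypothetical toy model, not ComfyUI's actual implementation, which does this far more cleverly (async prefetching, partial residency, etc.):

```python
# Bare-bones sketch of sequential CPU<->GPU offloading: keep all weights in
# system RAM and move one block at a time onto the GPU for its forward pass.
# Toy model for illustration; assumes a CUDA device is available.
import torch
import torch.nn as nn

blocks = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(40)])  # lives in RAM

@torch.no_grad()
def forward_offloaded(x: torch.Tensor) -> torch.Tensor:
    x = x.to("cuda")
    for block in blocks:
        block.to("cuda")   # copy this layer's weights into VRAM
        x = block(x)
        block.to("cpu")    # evict it to free VRAM for the next layer
    return x.cpu()

out = forward_offloaded(torch.randn(1, 4096))
```

The price is a PCIe transfer for every layer on every step, which is a big part of why offloaded runs are much slower but still feasible on a 12 GB card.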
| ▲ | Maxious 5 hours ago | parent | prev [-] | | Wan 2.2: "This generation was run on an RTX 3060 (12 GB VRAM) and took 900 seconds to complete at 840 × 420 resolution, producing 81 frames." https://www.nextdiffusion.ai/tutorials/how-to-run-wan22-imag... |
| |
| ▲ | tomthe 5 hours ago | parent | prev [-] | | No, it is not as good as Veo, but better than Grok, I would say. Definitely better than what was available 2 years ago. And it is only a 7B research model! |
| |
| ▲ | manmal 3 hours ago | parent | prev | next [-] | | I wanted to write exactly the same thing, this reminded me of the Will Smith noodles. The juice glass keeps filling up after the liquid stopped pouring in. | |
| ▲ | jfoster 3 hours ago | parent | prev [-] | | I think you need to go back and rewatch Will Smith eating spaghetti. These examples are far from perfect and probably not the best model right now, but they're far better than you're giving credit for. As far as I know, this might be the most advanced text-to-video model that has been released? I'm not sure whether the license will qualify as open enough in everyone's eyes, though. |
|
|
| ▲ | dymk 36 minutes ago | parent | prev | next [-] |
| Title is wrong, model isn’t released yet. Title also doesn’t appear in the link - why the editorializing? |
|
| ▲ | vessenes an hour ago | parent | prev | next [-] |
From the paper, this is a research model aimed at dealing with the runaway error common in diffusion video models - the latent space is (proposed to be) causal and therefore it should have better coherence. For a 7B model the results look pretty good! If Apple gets a model out here that is competitive with Wan or even Veo, I believe in my heart it will have been trained on images of the finest taste. |
|
| ▲ | coolspot 7 hours ago | parent | prev | next [-] |
| > STARFlow-V is trained on 96 H100 GPUs using approximately 20 million videos. They don’t say for how long. |
|
| ▲ | LoganDark 2 hours ago | parent | prev | next [-] |
| > Model Release Timeline: Pretrained checkpoints will be released soon. Please check back or watch this repository for updates. > The checkpoint files are not included in this repository due to size constraints. So it's not actually open weights yet. Maybe eventually once they actually release the weights it will be. "Soon" |
|
| ▲ | satvikpendem 7 hours ago | parent | prev | next [-] |
Looks good. I wonder what use case Apple has in mind though, or I suppose this is just what the researchers themselves were interested in, perhaps due to the current zeitgeist. I'm not really sure how it works at big tech companies with regard to research; are there top-down mandates? |
| |
| ▲ | ivape 18 minutes ago | parent [-] | | To add things to videos you create with your phone. TikTok and Insta will probably add this soon, but I suppose Apple is trying to provide this feature on “some level”. That means you don’t have to send your video through a social media platform first to creatively edit it (the platforms being the few tools that let you do generative video). They should really buy Snapchat. |
|
|
| ▲ | nothrowaways 6 hours ago | parent | prev | next [-] |
| Where do they get the video training data? |
| |
| ▲ | postalcoder 6 hours ago | parent [-] | | From the paper: > Datasets. We construct a diverse and high-quality collection of video datasets to train STARFlow-V. Specifically, we leverage the high-quality subset of Panda (Chen et al., 2024b) mixed with an in-house stock video dataset, with a total number of 70M text-video pairs. | | |
| ▲ | justinclift 4 hours ago | parent [-] | | > in-house stock video dataset Wonder if "iCloud backups" would be counted as "stock video" there? ;) | | |
| ▲ | anon7000 4 hours ago | parent | next [-] | | I have to delete as many videos as humanly possible before backing up to avoid blowing through my iCloud storage quota so I guess I’m safe | |
| ▲ | fragmede 4 hours ago | parent | prev [-] | | Turn on advanced data protection so they don't train on yours. | | |
| ▲ | givinguflac 2 hours ago | parent [-] | | That has nothing to do with it, and Apple wouldn’t train on user content, they’re not Google. If they ever did there would be opt in at best. There’s a reason they’re walking and observing, not running and trying to be the forefront cloud AI leader, like some others. |
|
|
|
|
|
| ▲ | camillomiller 6 hours ago | parent | prev | next [-] |
Hopefully this will make it into some useful feature in the ecosystem and not contribute to having just more terrible slop. Apple has saved itself from the destruction of quality and taste that these models enabled; I hope it stays that way. |
|
| ▲ | pulse7 5 hours ago | parent | prev | next [-] |
| <joke> GGUF when? </joke> |
|
| ▲ | mdrzn 5 hours ago | parent | prev [-] |
| "VAE: WAN2.2-VAE" so it's just a Wan2.2 edit, compressed to 7B. |
| |
| ▲ | kouteiheika 5 hours ago | parent | next [-] | | This doesn't necessarily mean that it's Wan2.2. People often don't train their own VAEs and just reuse an existing one, because a VAE isn't really what's doing the image generation part. A little bit more background for those who don't know what a VAE is (I'm simplifying here, so bear with me): it's essentially a model which turns raw RGB images into something called a "latent space". You can think of it as a fancy "color" space, but on steroids. There are two main reasons for this: one is to make the model which does the actual useful work more computationally efficient. VAEs usually downscale the spatial dimensions of the images they ingest, so your model, instead of having to process a 1024x1024 image, now only needs to work on a 256x256 image. (However they often do increase the number of channels to compensate, but I digress.) The other reason is that, unlike raw RGB space, the latent space is actually a higher-level representation of the image. Training a VAE isn't the most interesting part of image models, and while it is tricky, it's done entirely in an unsupervised manner. You give the VAE an RGB image, have it convert it to latent space, then have it convert it back to RGB, and take a diff between the input RGB image and the output RGB image; that's the signal you use when training them (in reality it's a little more complex, but, again, I'm simplifying here to make the explanation more clear). So it makes sense to reuse them, and concentrate on the actually interesting parts of an image generation model. | | | |
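To make that training signal concrete, here is a toy sketch in PyTorch — not the WAN or STARFlow-V VAE, just a hypothetical minimal autoencoder with the reconstruction loss described above; a real VAE also adds a KL-divergence term (the "variational" part) and usually perceptual/adversarial losses:

```python
# Toy autoencoder sketch of the reconstruction objective described above.
# Hypothetical architecture; real image/video VAEs are much larger and also
# use KL, perceptual, and adversarial losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 3x256x256 RGB -> 16x64x64 latent (4x smaller spatially,
        # more channels to compensate)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 16, kernel_size=4, stride=2, padding=1),
        )
        # Decoder: latent -> RGB
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, rgb):
        latent = self.encoder(rgb)
        return self.decoder(latent)

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(8, 3, 256, 256)   # stand-in for a batch of real images
recon = model(images)
loss = F.mse_loss(recon, images)      # the "diff" between input and output
loss.backward()
optimizer.step()
```

The generative model then learns to produce the small latents rather than raw pixels, which is why reusing an off-the-shelf VAE like WAN's is common practice.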
| ▲ | dragonwriter 5 hours ago | parent | prev | next [-] | | > "VAE: WAN2.2-VAE" so it's just a Wan2.2 edit No, using the WAN 2.2 VAE does not mean it is a WAN 2.2 edit. > compressed to 7B. No, if it were an edit of the WAN model that uses the 2.2 VAE, it would be expanded to 7B, not compressed (the 14B models of WAN 2.2 use the WAN 2.1 VAE; the WAN 2.2 VAE is used by the 5B WAN 2.2 model). |
| ▲ | BoredPositron 5 hours ago | parent | prev [-] | | They used the VAE of WAN like many other models do. For image models you see a lot of them using the Flux VAE. Which is perfectly fine: they are released as Apache 2.0 and save you time to focus on your transformer architecture... |
|