| ▲ | samtp 19 hours ago |
| What's a good use case for a defense contractor to generate AI images, besides including them in presentations? |
|
| ▲ | aigen000 19 hours ago | parent | next [-] |
| Fabricating evidence of weapons of mass destruction in some developing nation. I kid; more realistic use cases would be concept images for a new product or a marketing campaign. |

| ▲ | toasteros 11 hours ago | parent [-] |
| ...you can do that with a pencil, though. What an impossibly weird thing to "need" an LLM for. |
| ▲ | KeplerBoy 9 hours ago | parent | next [-] |
| You can also create images by poking bits in a hex editor. Some tools are better suited than others. |
| ▲ | Gud 10 hours ago | parent | prev [-] |
| I suppose you walk everywhere by foot? |
|
|
|
| ▲ | subroutine 18 hours ago | parent | prev | next [-] |
| Think of all the trivial ways an image generator could be used in business, and there is likely a similar use-case among the DoD and its contractors (e.g. create a cartoon image of a ship for a naval training aid; make a data dashboard wireframe concept for a decision aid). |
|
| ▲ | cuuupid 11 hours ago | parent | prev | next [-] |
| The very simple use case is generating mock targets. In movies they make it seem like they use mannequin-style targets or traditional concentric circles, but those are infeasible and unrealistic, respectively. There's an entire modeling industry here, and being able to replace it with infinitely diverse AI-generated targets is valuable! |
|
| ▲ | missedthecue 15 hours ago | parent | prev | next [-] |
| Generating 30,000 unique images of artillery pieces hiding in underbrush to train autonomous drone cameras. |

| ▲ | gmerc 9 hours ago | parent | next [-] |
| Unreal, Houdini, and a bunch of assets do this just fine, and provide actually usable depth / infrared / weather / fog / time-of-day / and other relevant data for training - likely cheaper than using their API. |
| See bifrost.ai and their fun videos of training naval drones to avoid whales in an ethical manner. |
| ▲ | junon 15 hours ago | parent | prev | next [-] |
| It's probably not that, but who knows. The real answer is probably way, way more mundane - generating images for marketing, etc. |
| ▲ | krzat 6 hours ago | parent | prev | next [-] |
| Interesting. Say we have those 30k synthetic images and also 30k real unique images; my guess is that the real ones would hold more useful information, but is this measurable? And how much more? |
| ▲ | wahnfrieden 6 hours ago | parent [-] |
| See the IDF’s Gospel AI - the goal isn’t always accuracy, it’s the speed of assigning new bombing targets per hour. |

| ▲ | Barrin92 12 hours ago | parent | prev | next [-] |
| I don't really understand the logic here. All the actual signal about what artillery in bushes looks like is already in the original training data. Synthetic data cannot conjure empirical evidence into existence; it's as likely to produce false images as real ones. Assuming the military has more privileged access to combat footage than a multi-purpose public chatbot, I'd expect synthetic data to degrade the accuracy of a drone. |
| ▲ | stormfather an hour ago | parent | next [-] |
| What you're saying just isn't true. I can get an AI to generate an image of a bear wearing a sombrero. There are no images of this in its training data, but there are bears, there are sombreros, and there are other things wearing sombreros. It can combine the distributions in a plausible way. |
| If I were trying to train a small model to fit into the optical sensor of a warhead to target bears wearing sombreros, this synthetic training set would be very useful. Same with artillery in bushes, or artillery in different lighting conditions. This stuff is useful for saturating the input space with synthetic examples. |
| ▲ | IanCal 3 hours ago | parent | prev | next [-] |
| I'm not arguing this is the purpose here, but data augmentation has been done for ages. It just kind of sucks a lot of the time. You take your images and crop, shift, etc. them so that your model doesn't learn "all x are in the middle of the image". For text you might auto-replace days of the week with others; there's a lot of work there. |
| Broadly, the intent is to keep the key information and generate realistic but irrelevant noise, so that you train a model that correctly ignores the noise. You don't want a model that identifies some class of ship based on how choppy the water is, just because that was the simple signal that correlated well. |
| There was a case of a radiology model that detected cancer well but was actually detecting rulers in the images, because images with tumors often included a ruler so the tumor could be sized. (I think it was cancer; the broad point applies if it was something else.) |
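
A minimal sketch of that kind of label-preserving augmentation, using torchvision; the specific transforms and the filename are illustrative assumptions, not anything from the thread:

    # Keep the label-relevant signal (the object), randomize everything else
    # (position, lighting, blur) so the model can't latch onto layout shortcuts.
    import torchvision.transforms as T
    from PIL import Image

    augment = T.Compose([
        T.RandomResizedCrop(224, scale=(0.6, 1.0)),   # random crop/shift/zoom
        T.RandomHorizontalFlip(),
        T.ColorJitter(brightness=0.3, contrast=0.3),  # lighting variation
        T.GaussianBlur(kernel_size=3),                # mild sensor blur
    ])

    img = Image.open("artillery_in_underbrush.jpg")   # illustrative filename
    variants = [augment(img) for _ in range(10)]      # 10 label-preserving variants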
| ▲ | johndough 11 hours ago | parent | prev | next [-] |
| Generative models can combine different concepts from the training data. For example, the training data might contain a single image of a new missile launcher at a military parade. The model can then generate an image of that missile launcher hiding in a bush, because it has internalized the general concept of things hiding in bushes, so it can apply it to new objects it has never seen hiding in bushes. |
| ▲ | rovr138 9 hours ago | parent | prev [-] |
| If you're building a system to detect something, you usually need enough variation, so you add noise to the images, etc. With this, you could create a dataset that by definition has that variation. You should still corroborate the data, but it's a step ahead of taking 1,000 photos and adding enough noise and variation to get to 30k. |

| ▲ | cortesoft 13 hours ago | parent | prev [-] |
| If the model can generate the images, can't it already recognize them? |
| ▲ | Falimonda 12 hours ago | parent [-] |
| The model they're training to perform detection/identification out in the field would presumably need to be much smaller and run locally, without relying on network connectivity. It makes sense, so long as the OpenAI model produces a training/validation set comparable to one their development team would otherwise need to curate by hand. |
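
A minimal sketch of that setup: a big model generates the labeled images offline, and a small, edge-deployable classifier is trained on them. The folder layout, model choice (MobileNetV3), and hyperparameters are all assumptions:

    # Train a small local model on synthetic images produced by a large one.
    # Expects generated images sorted into synthetic/train/<class_name>/ folders.
    import torch
    import torch.nn as nn
    import torchvision.transforms as T
    from torch.utils.data import DataLoader
    from torchvision.datasets import ImageFolder
    from torchvision.models import mobilenet_v3_small

    transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])
    train_set = ImageFolder("synthetic/train", transform=transform)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Small enough to run on edge hardware without network connectivity.
    model = mobilenet_v3_small(num_classes=len(train_set.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:   # one epoch, for brevity
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()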
|
|
|
| ▲ | ZeroTalent 19 hours ago | parent | prev | next [-] |
| Manufacturing consent |
|
| ▲ | potatoman22 16 hours ago | parent | prev | next [-] |
| Generating or augmenting data to train computer vision algorithms. I think a lot of defense problems involve messy or scarce data. |
|
| ▲ | golergka 17 hours ago | parent | prev | next [-] |
| Input one image of a known military installation and one of a civilian building. Prompt the model to generate a similar _civilian_ building that resembles the military installation in some way: similar structure, similar colors, similar lighting. Then include this image, labeled "civilian", in the dataset of another network. Trained on these look-alikes, the new network should have a lower false positive rate when asked "is this target military?". |
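
A rough sketch of that labeling pipeline; generate_image is a hypothetical stand-in for whatever image-generation API is used, and the prompt and file paths are illustrative:

    # Build "hard negatives": generated civilian buildings that deliberately
    # resemble a military installation, labeled civilian to lower false positives.
    import json

    def generate_image(prompt: str, reference_images: list, out_path: str) -> str:
        """Hypothetical stand-in for a real image-generation API call."""
        raise NotImplementedError

    PROMPT = (
        "A clearly civilian building - signage, parking lot, no defenses - "
        "with structure, colors, and lighting similar to the reference installation."
    )

    dataset = []
    for i in range(1000):
        path = generate_image(
            prompt=PROMPT,
            reference_images=["military_ref.jpg", "civilian_ref.jpg"],
            out_path=f"hard_negatives/{i}.png",
        )
        # The point: the look-alike is labeled civilian, so the downstream
        # classifier learns that resemblance alone does not mean "military".
        dataset.append({"image": path, "label": "civilian"})

    with open("hard_negatives/labels.json", "w") as f:
        json.dump(dataset, f)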

| ▲ | aprilthird2021 15 hours ago | parent [-] |
| You'll never get promoted thinking like that! Mark them all "military" and munitions sales will soar! |
| ▲ | derektank 14 hours ago | parent | next [-] |
| You might not believe it, but the US military actually places a premium on not committing war crimes. Every service member, or at least every airman in the Air Force (I can't speak for other branches), receives mandatory training on the Kunduz hospital strike before deployment, in an effort to prevent another similar tragedy. If they didn't care, they wouldn't spend thousands of man-hours on it. |
| ▲ | jncfhnb 13 hours ago | parent | next [-] |
| I knew a guy whose job was to assess and approve the legality of each strike, considering second-order impacts on the community. |
| ▲ | handfuloflight 14 hours ago | parent | prev | next [-] |
| > On 7 October 2015, President Barack Obama issued an apology and announced the United States would be making condolence payments of $6,000 to the families of those killed in the airstrike. |
| Definitely a premium. |
| ▲ | guappa 7 hours ago | parent | prev [-] |
| Most importantly, they finance propaganda films like "Eye in the Sky" to make it look like they give a shit about not killing civilians. Videos on WikiLeaks tell a different story. |

| ▲ | golergka 15 hours ago | parent | prev [-] |
| Bombs and other weapon systems that are "smarter" carry a higher markup; it's profitable to sell smarter weapons. Dumb weapons destroy whole cities, as Russia did in Ukraine. Smart weapons strike a tank, a car, an apartment, a bunker, knowing who's there and when - which obviously means a lower percentage of civilian casualties. |
| ▲ | guappa 7 hours ago | parent [-] |
| Remember when Obama redefined the terms so that "all adult males are terrorists"? That's how the USA reduces civilian casualties. |
|
|
|
|
| ▲ | tzury 16 hours ago | parent | prev | next [-] |
| AI image generation is a "statistical simulator". When fed the right information, it can generate scenery pretty close to reality. |
|
| ▲ | tyingq 5 hours ago | parent | prev | next [-] |
| Training, recruiting, sales (as you mention), testing image-based targeting. |
|
| ▲ | aprilthird2021 15 hours ago | parent | prev | next [-] |
| Generating pictures of "bad-guy-looking guys" so your automated bombs shoot more, so you sell more bombs. |
|
| ▲ | sandspar 17 hours ago | parent | prev [-] |
| Vastly oversimplified, but for every civilian job there's an equivalent military job. Superficially, the military is basically a country-sized, self-contained corporation. Anywhere that Wal-Mart's corporate office could use AI, so could the military. |