Show HN: Off Grid – Run AI text, image gen, vision offline on your phone (github.com)
66 points by ali_chherawalla 7 hours ago | 26 comments

Your phone has a GPU more powerful than most 2018 laptops. Right now it sits idle while you pay monthly subscriptions to run AI on someone else's server, sending your conversations, your photos, your voice to companies whose privacy policy you've never read. Off Grid is an open-source app that puts that hardware to work. Text generation, image generation, vision AI, voice transcription — all running on your phone, all offline, nothing ever uploaded.

That means you can use AI on a flight with no wifi. In a country with internet censorship. In a hospital where cloud services are a compliance nightmare. Or just because you'd rather not have your journal entries sitting in someone's training data.

The tech: llama.cpp for text (15-30 tok/s, any GGUF model), Stable Diffusion for images (5-10s on Snapdragon NPU), Whisper for voice, SmolVLM/Qwen3-VL for vision. Hardware-accelerated on both Android (QNN, OpenCL) and iOS (Core ML, ANE, Metal).
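For anyone curious what the text path looks like, here is a minimal sketch of the same llama.cpp + GGUF flow via the llama-cpp-python bindings; the model file, prompt, and parameters are just illustrative, not what the app itself ships.

    # Minimal sketch of a llama.cpp + GGUF text pipeline (llama-cpp-python
    # bindings, desktop-side illustration; Off Grid's own integration is
    # native and on-device).
    from llama_cpp import Llama

    # Any quantized GGUF model works; this file name is just an example.
    llm = Llama(model_path="llama-3.2-1b-instruct-q4_k_m.gguf", n_ctx=2048)

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize my day in two sentences."}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])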

MIT licensed. Android APK on GitHub Releases. Build from source for iOS.

sangaya 2 hours ago | parent | next [-]

Putting the power and the data of the users in the hands of the users themselves! Well done. Getting it set up was easy. Wish the app detected when the keyboard was displayed so the bottom menu and chat box weren't hidden under it.

ali_chherawalla 2 hours ago | parent [-]

Thank you!

resonious 3 hours ago | parent | prev | next [-]

On my Samsung phone it doesn't move the screen up to make room for the keyboard so I can't see what I'm typing.

Really awesome idea though. I want this to work.

dotancohen 2 hours ago | parent [-]

I can confirm this bug on a Samsung S24 Ultra.

ali_chherawalla an hour ago | parent [-]

Sorry about that one. I'm taking a look and fixing it right away.

ali_chherawalla 31 minutes ago | parent [-]

hey, just pushed a fix for it here: https://github.com/alichherawalla/off-grid-mobile/releases/t...

Thanks for spotting and reporting this.

nine_k 2 hours ago | parent | prev | next [-]

Is there something similar, but geared towards a Linux desktop / laptop? I suppose this would be relatively easy to adapt.

ali_chherawalla 2 hours ago | parent [-]

LM Studio solves that pretty well, I think. It doesn't do image gen etc., though.

bkmeneguello 4 hours ago | parent | prev | next [-]

Very nice, but I'm gonna wait for the F-Droid build.

ali_chherawalla an hour ago | parent [-]

ok, let me figure that one out.

flyingkiwi44 4 hours ago | parent | prev | next [-]

The repository is listed as offgrid-mobile everywhere on that page, but it's actually off-grid-mobile.

So the latest release is at https://github.com/alichherawalla/off-grid-mobile/releases/l...

And the clone would be: git clone https://github.com/alichherawalla/off-grid-mobile.git

ali_chherawalla 2 hours ago | parent [-]

Hey, yes, thanks. Just pushed that fix out.

wittlesus an hour ago | parent | prev | next [-]

This is genuinely exciting. The fact that you're getting 15-30 tok/s for text gen on phone hardware is wild — that's basically usable for real conversations.

Curious about a couple things: what GGUF model sizes are practical on a mid-range phone (say 8GB RAM)? And how's the battery impact during sustained inference — does it drain noticeably faster than, say, a video call?

The privacy angle is the real killer feature here IMO. There are so many use cases (journaling, health tracking, sensitive work notes) where people self-censor because they know it's going to a server somewhere. Removing that barrier entirely changes what people are willing to use AI for.

durhamg an hour ago | parent | next [-]

This sounds exactly like Claude wrote it. I've noticed Claude saying "genuinely" a lot lately, and the "real killer feature" segue just feels like Claude being asked to review something.

ali_chherawalla an hour ago | parent | prev | next [-]

I've added a section for recommended models, so basically you can choose from there.

I'd recommend going for any quantized 1B-parameter model: Llama 3.2 1B, Gemma 3 1B, or Qwen3 VL 2B (if you'd like vision).
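Rough back-of-envelope math on why those fit in 8 GB (my assumptions, not measured numbers): at roughly 4-5 bits per weight for a Q4-class quant, the weights of a 1B model come in well under a gigabyte, leaving plenty of room for the KV cache, the app, and the OS.

    # Illustrative footprint estimate: weights only; KV cache and runtime
    # overhead add a few hundred MB on top of this.
    def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    print(approx_weight_gb(1.0, 4.5))  # ~0.56 GB: a 1B model at a Q4-class quant
    print(approx_weight_gb(3.0, 4.5))  # ~1.7 GB: a 3B model still fits in 8 GB RAM
    print(approx_weight_gb(8.0, 4.5))  # ~4.5 GB: an 8B model starts getting tight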

Appreciate the kind words!

add-sub-mul-div an hour ago | parent | prev [-]

> that's basically usable for real conversations.

That's using the word "real" very loosely.

vachina 4 hours ago | parent | prev | next [-]

OK, it lists the instructions for building for iOS, but how do you sideload?

behole 4 hours ago | parent | next [-]

I found more info https://github.com/alichherawalla/off-grid-mobile/blob/main/...

ali_chherawalla 2 hours ago | parent [-]

Yeah, that's right. For now, for iOS you'll actually have to pod install etc.

Thanks for pointing this out

instagib 2 hours ago | parent | prev [-]

Wonder if you can use GitHub Actions to build for iOS.

I found a guide for VirtualBox macOS, which failed on Intel, then another for Hyper-V, but I haven't tried that one yet.

derac 4 hours ago | parent | prev | next [-]

I haven't run it, but I looked through the repo. It looks very well thought out, and the UI is nice. I appreciate the ethos behind the local/offline design. Cheers.

ali_chherawalla an hour ago | parent [-]

thank you!

snicky 4 hours ago | parent | prev | next [-]

GitHub Releases link is broken.

The dash in "off-grid" is missing.

ali_chherawalla an hour ago | parent [-]

yup, just took a look at that and fixed it. My bad!

chr15m an hour ago | parent | prev [-]

This rules. Godspeed!

ali_chherawalla an hour ago | parent [-]

wow. thank you