esperent 5 days ago

I would love to use Immich but I'm not into running a home server - electricity isn't that reliable here and putting in backup power is more expensive than I want to pay. Also I just don't want to manage the hardware.

I've looked into cloud hosting. But of course, photos and videos take up a lot of space. Object storage is cheap but not supported by Immich. Block storage is not cheap.

I did look into s3fuse but the consensus seemed to be that lots of tiny files like thumbnails wouldn't perform well.

Does anyone cloud host it? What's your solution?

freetonik 4 days ago | parent | next [-]

One very easy and painless way is Pikapods (https://www.pikapods.com/).

esperent 4 days ago | parent [-]

A pikapod with 2 vCPUs, 8 GB of RAM, and 1 TB of storage is $200 a year. That's not too bad, but that's the maximum amount of storage, so if you need to go over it you would have to attach separate storage (if that's possible).

justusthane 4 days ago | parent | prev | next [-]

Hetzner Storage Box is quite reasonable: https://www.hetzner.com/storage/storage-box/#matrix

esperent 4 days ago | parent [-]

That looks great but I'm in Asia and it's not available in this region.

moduspol 4 days ago | parent | prev | next [-]

I'm kind of surprised that using object storage wasn't a first-class concern. Though I guess if running it at home was the biggest thing, that's not the top priority, but still. Using fast, cheap object stores (often with CDNs in front) has been commonplace for images, videos, and similar content for decades now. For virtually anything that uses some dynamic amount of storage based on user actions, my expectation is that I'll be able to configure it to store and fetch from S3 (or similar).

esperent 4 days ago | parent [-]

Yeah this was pretty much my reaction, and seeing that it's not supported (either by Immich or Photoprism) really made me think these projects are not for me.

privatelypublic 4 days ago | parent [-]

Photoprism shouldn't be in consideration at all. They felt the need to post in their FAQ that not checking authorization/authentication before serving photos* that are directly linked is OK because it's "industry standard" and (paraphrased) "difficult/impossible to implement".

* might just be thumbnails. But let's be honest: a 1/10 scale thumbnail of a 40MP shot is... still a ton of detail.

sepher0 4 days ago | parent | prev | next [-]

The team just announced a 1-click option on DigitalOcean, if you want a cloud-hosted setup:

https://marketplace.digitalocean.com/apps/immich

esperent 4 days ago | parent [-]

Hosting it isn't the problem. Paying for 1-2tb of storage is.

aecsocket 4 days ago | parent | prev | next [-]

The cheapest possible Hetzner VPS (2 vCPU 40GB SSD) and a Hetzner storage box (1TB) works alright for cheap (less than EUR 10/mo). I store my database on the SSD, and the `/uploads` folder on the storage box attached as a CIFS drive. Put it behind Tailscale and it's worked fine for the past few months.
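
For reference, attaching a storage box over CIFS can be a single fstab entry along these lines (a sketch: the `uXXXXXX` account name, mount point, and uid/gid are placeholders for your own values):

```shell
# /etc/fstab entry mounting a Hetzner storage box via CIFS.
# Credentials live in a root-only file so they don't end up in fstab itself.
//uXXXXXX.your-storagebox.de/backup /mnt/storagebox cifs credentials=/etc/storagebox-credentials.txt,seal,uid=1000,gid=1000,file_mode=0660,dir_mode=0770 0 0

# /etc/storagebox-credentials.txt (chmod 600):
#   username=uXXXXXX
#   password=<storage box password>
```

You'd then point Immich's `UPLOAD_LOCATION` at a directory under `/mnt/storagebox` while keeping the Postgres data directory on the local SSD.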

mlangenberg 4 days ago | parent [-]

Wouldn’t you want your photos to be encrypted at rest on the Hetzner storage box?

aecsocket 4 days ago | parent | next [-]

I don't really care about that, since my threat model doesn't involve Hetzner looking through my photos and training an AI model on them. If/when I move this off to my own hardware, then I'll do full disk encryption, since my threat model may involve someone stealing my hardware.

j45 4 days ago | parent | prev [-]

Docker could be run on the VPS, and the storage leg could be encrypted.

I'm presuming some VPS providers allow converting your VPS disk image to something that supports encryption.

mlangenberg 4 days ago | parent [-]

Is that something that docker can do?

I presume gocryptfs can be used to wrap an SMB mounted Hetzner storage box. Haven’t tried it myself though.

I would be careful storing any personal data on it unencrypted.
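
A gocryptfs-over-SMB setup roughly follows this shape (a sketch; all paths are placeholders, and it assumes the storage box is already mounted at `/mnt/storagebox`):

```shell
# One-time: create an encrypted directory on the remote mount.
# gocryptfs prompts for a password and writes gocryptfs.conf into the dir.
gocryptfs -init /mnt/storagebox/immich-cipher

# Mount a decrypted view locally; only ciphertext ever touches the remote.
mkdir -p /srv/immich/upload
gocryptfs /mnt/storagebox/immich-cipher /srv/immich/upload
```

Immich (or its Docker bind mount) would then use `/srv/immich/upload`, while Hetzner only ever sees encrypted file names and contents.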

namibj 4 days ago | parent | next [-]

rclone.

Just use rclone if you need to turn object storage into an encrypted mount.

It doesn't do well with non-object-storage access patterns, but we're not putting an SQLite database on it here, so that should be fine.

rclone has a `crypt` layer you can put over any of its backends and still access through any of its usual interfaces.

I'd personally bind mount the database folder over the rclone mount (or the other way around, as needed) to keep the database on a local filesystem.
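
Concretely, that looks something like this (a sketch: the remote names, endpoint, and bucket are placeholders):

```shell
# ~/.config/rclone/rclone.conf — a crypt remote layered over S3-compatible storage:
#   [s3base]
#   type = s3
#   provider = Other
#   access_key_id = <key>
#   secret_access_key = <secret>
#   endpoint = https://<your-s3-endpoint>
#
#   [photos-crypt]
#   type = crypt
#   remote = s3base:my-photos-bucket
#   password = <generated with `rclone obscure`>

# Mount the encrypted remote with a local VFS cache so thumbnails and
# repeated reads don't hammer the object store with requests:
rclone mount photos-crypt: /srv/immich/library \
  --vfs-cache-mode full --vfs-cache-max-size 10G --daemon
```

The `--vfs-cache-mode full` cache is what makes small-file access patterns tolerable; the database itself still belongs on a real local filesystem.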

dd_xplore 4 days ago | parent | prev [-]

In my experience mounting smb share inside docker containers has been very very unreliable...

dddw 4 days ago | parent | prev | next [-]

I actually got it working with cloud storage on hetzner. Wasn't supersnappy, but it worked. I borked the build though and am planning to run it on my home-server

esperent 4 days ago | parent [-]

I'm currently using Nextcloud Memories connected to a Wasabi bucket for photos, and being "not supersnappy" is a real dealbreaker. When I scroll through hundreds or thousands of photos and have to wait five or ten seconds for each new page of thumbnails to load, then the same when I open a photo, I quickly go back to Google Photos.

anttiharju 4 days ago | parent | prev | next [-]

I wish it were easy to plug it into an S3 backend, with thumbnails and other ephemeral things just kept on disk.

esperent 4 days ago | parent [-]

Yes, that would be the obvious solution. Database and thumbnails on disk, everything else on s3.

jauntywundrkind 4 days ago | parent | prev | next [-]

There have been attempts to use s3fuse like layers, but:

> NOTE: I found it too expensive in S3 requests and CloudTrail data recordings to use S3 as the backend.

https://github.com/dubrowin/Immich-backed-by-S3

They used AWS's own Mountpoint for this. Perhaps s3fs with its caching could do better? Ideally someone would make an object store FUSE driver that caches the whole file tree and metadata, or perhaps one storing on SlateDB or some such. Being able to tune the local file cache would also be important: maybe s3fuse caching is good enough, but making sure thumbnails can cache seems super important. It would be interesting to see how Immich uses the filesystem.

goda90 4 days ago | parent | prev | next [-]

> electricity isn't that reliable here and putting in backup power is more expensive than I want to pay

A small UPS that can communicate its power state over USB isn't too expensive. If power goes out, it tells its host to shut down after a certain amount of time, and then when power is restored the server turns back on. I can understand the desire to not have to manage all that, though.
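
With Network UPS Tools (NUT), the shutdown half of that is a couple of config fragments (a sketch; the UPS name and password are placeholders, and powering back on is usually a BIOS "restore on AC power" setting rather than anything NUT does):

```shell
# /etc/nut/ups.conf — most consumer USB UPSes work with the generic HID driver:
#   [homeups]
#   driver = usbhid-ups
#   port = auto

# /etc/nut/upsmon.conf — shut the host down cleanly when the battery runs low:
#   MONITOR homeups@localhost 1 upsmon <password> master
#   SHUTDOWNCMD "/sbin/shutdown -h +0"
```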

jerf 4 days ago | parent | prev | next [-]

"But of course, photos and videos take up a lot of space."

Videos take up a lot of space. Photos increasingly don't. 20 years of family photos for me takes up 150GB, and that's with me being very slovenly about cleaning up the "bad" photos, if I found a decent workflow for trimming photos I could probably cut that down by 75% pretty easily. Linode will attach 160GB of storage for $16/month, plus you'd need a $5/month VM to attach that to. https://www.linode.com/pricing/#block-storage

I acknowledge that you may be in a position where that is too much, but on the other hand, broadly speaking, it's not going to get much cheaper than this even in the next few years. It's not like it's $500/month anymore and there's room for it to be cut by $300/month.

Immich can also survive without necessarily being up all the time. If you have a computer of any kind and any reasonable spec that spends a reasonable amount of time being on, you can use Tailscale or something to hook it to your phone and run a backup process every so often to cloud storage. It's OK that it isn't always up, and then you get to pay object storage prices, which for 150GB is as close to negligible as you can reasonably get.

j45 4 days ago | parent | prev | next [-]

You could run a small vm in the cloud and use a storage solution like backblaze or something that stores things relatively inexpensively.

The hardware isn't that much to manage anymore these days; a small USFF PC uses very little electricity and can stay up for a few hours on a UPS.

Tools like Proxmox make it point and click like any cloud provider within reason.

dd_xplore 4 days ago | parent | prev | next [-]

I did some half-assed backup solution. Bought a LiFePO4 12Ah battery, hooked up a compatible charger, hooked up a DC-DC converter to power a N100 mini PC, my MikroTik router, etc. Works perfectly fine as of now...

mnkmnk 3 days ago | parent [-]

Does this continuously charge and discharge the battery, using up battery cycles?

mbrochh 4 days ago | parent | prev | next [-]

If you want cloud hosting, fully E2E encrypted, by a team that deeply cares about privacy and security, try Ente. They also have a Google Authenticator alternative called Ente Auth.

jdc0589 4 days ago | parent | prev | next [-]

this is pretty much the situation I'm in re the storage. I'm perfectly fine running a home server, I already do, but workloads with heavy storage requirements scare me away from it. I don't want to have to think about that at home, and the cost of pretty much anything other than object storage in the cloud is prohibitive, and as you mentioned obj store support is non-existent or hacky and slow with most of these products.

namibj 4 days ago | parent | next [-]

`rclone mount` an `rclone crypt` over an rclone Cloudflare R2 backend? Or, if it's sufficiently often off/idle, take 3 USB HDDs (1~2 years ago I bought, iirc, a WD My Passport 5TB for a very similar workload) in a RAID-5, and keep an appropriate off-site backup that you actively verify: check that it successfully got the latest daily backup's file contents (a couple (3~5) random files as well as a few (3~5 ish) critical/database/metadata files) at least every 1~3 months.

Also, as opportunities arise like for example from a major upgrade to local storage capacity, try to fully test a backup restore emulated to "your home burned to the ground while you were at the office/on holiday" conditions every 1~3 years if you can afford to spend the bandwidth for it.

Consider burning in drives for a group-buy you do with local friends if necessary to at least get such a full restore trial every about 3 years. Try as best to consider a trial every about 5 years to be a "cost of doing business" that's not just nice to have but essential to the value proposition of the data archive storing home server.

Oh, and yes, I fully mean to let the drives go to sleep when you're not accessing them through manual/interactive means (exceptions are limited-time background queued work with a set override timer, and the daily backup runs, which also unlock the drives from their regular sleep-doesn't-get-interrupted-for-no-good-reason enforcement; of course this is all something you do only if you can and feel like it: just hunting down rogue accesses/wake-ups at odd times, by setting up some minor logging of which programs/files/accesses (or at the very least _when exactly_) are causing the drives to wake up, is something you could very well get away with). Also take care to ensure they get good airflow: stack them with gaps between, and ideally just take a decent but low-cost 120mm fan hooked to 5V from USB (if you don't have a fan header lying around) and rig it with some cardboard and tape to channel air across your drives. The drives want to be around 30~45 °C; consider hooking the SMART temperature readout to a kill switch in case of fan failure.

j45 4 days ago | parent | prev | next [-]

Self-hosting seems easiest to think about as a home appliance.

Out of compute, storage, database, networking, etc, which is most preferable to be just an appliance?

It's pretty reasonable to get reliable storage self-hosted without the headache. If a big setup isn't needed, it's reasonably attractive to set up your own storage with reasonable power draw, which can be kept up with a more reasonable UPS.

Just because one can build and run a storage array on their own doesn't mean it would be the best allocation of their ongoing attention to maintain, and be on call for, a daycare for hard drives.

If seamless storage as good (and sometimes better) than a cloud is the minimum, it has to be something trustable, and run like a reliable home appliance needing minimum maintenance.

Lots of folks choose NAS enclosures that have RAID mirroring and hot-swap drives built in quite inexpensively, using things like Synology or QNAP. The web admin interfaces on them are reasonable, and it's trivial to poke along with a YouTube video to set up RAID 5-10 and send email notifications however you like if it wants to bring something to your attention.

Other things that become way more valuable over the years:

- NAS can be configured to backup offsite to the cloud backup of your choice, or another NAS. I know folks running them for 5-10 years and never think about it. Decent NAS with appropriate drives, secured of course. Some people even mail the enclosure to a datacenter and have them plug it in and keep it online.

- If you get a reasonably basic NAS with an intel Celeron CPU, power usage can remain low, but ram can be upgraded on it to run a few services as needed on it, both directly, and as docker images. It's pretty wild.

- If you do consider it, my recommendation is to pick one that has 2 more drive slots than you need, and start from there. People who buy two bays can outgrow them quickly, plus two drives only allow a mirrored RAID. RAID 5 and higher is great: if one drive starts to have issues, you can just swap it while it's all running and the storage will heal.

Hope that helps. Having data close by to crunch can be valuable.

yesnomaybe 4 days ago | parent | prev [-]

It's really not that hard. I've set up Backblaze, which is reasonably cheap. With the help of AI I was able to set up a permanent cron job that backs up everything from local into B2 using rclone, with client-side encryption. It's epic. I haven't looked at it for a while, but every once in a while I do a DR test on a small subset and it works really well. I use Postgres as the DB and this is the big one to back up daily; the rest is just the increment. Could be further optimised I guess, but I'm happy with it.
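
A setup like that boils down to something like the following (a sketch; `b2crypt` is an assumed rclone crypt remote layered over a B2 bucket, and all paths, database names, and bucket names are placeholders):

```shell
# crontab entry: nightly at 03:00, dump Postgres, then push everything
# through the encrypted rclone remote. `rclone sync` only transfers changes,
# so the media sync is effectively incremental.
#   0 3 * * *  pg_dump -Fc immich > /srv/backup/immich.pgdump && \
#              rclone sync /srv/backup b2crypt:immich-backup/db && \
#              rclone sync /srv/immich/upload b2crypt:immich-backup/upload
```

Because the `crypt` remote encrypts client-side before upload, B2 only ever stores ciphertext, and a restore test is just `rclone copy` of a subset plus a `pg_restore` of the dump.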

amai 4 days ago | parent | prev | next [-]

Have a look at https://ente.io/.

esperent 4 days ago | parent [-]

That looks interesting, and it does support s3 from the look of it. However, I have to say I'm getting strong "performative open source" vibes from it. Have you tried self hosting? How was it?

linuxguy2 4 days ago | parent | prev | next [-]

I recently created https://immich.pro to partially address this problem. I've got spare compute and storage that I'd like to turn into MRR. While the privacy angle isn't _fantastic_ maybe some people won't care. Could be better than trusting Google/Apple.

joob123 4 days ago | parent | prev [-]

[dead]

DavideNL 4 days ago | parent [-]

Do they store photos end-to-end encrypted?

bo0tzz 4 days ago | parent [-]

No