| ▲ | switz 3 days ago |
| I didn't even really realize it was a SPOF in my deploy chain. I figured at least most of it would be cached locally. Nope, can't deploy. I don't work on mission-critical software (nor do I have anyone to answer to), so it's not the end of the world, but it has me wondering what my alternate deployment routes are. Is there a mirror registry with all the same basic images (node/alpine)? I suppose the fact that I didn't notice before says wonderful things about its reliability. |
|
| ▲ | tom1337 3 days ago | parent | next [-] |
| I guess the best way would be to run a self-hosted pull-through registry cache. That way you'd have all required images ready even when Docker Hub is offline. Unfortunately it doesn't help mid-outage, because you can't fill the cache now. |
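| For reference, a minimal sketch of such a cache using the official registry image (the port and local mirror URL are just examples): |

  docker run -d --name registry-mirror -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2

  # then point the daemon at the mirror in /etc/docker/daemon.json:
  #   { "registry-mirrors": ["http://localhost:5000"] }
  # and restart dockerd, so that everyday pulls fill the cache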
| |
| ▲ | cipherself 3 days ago | parent | next [-] |
| Even when you still have an image locally, trying to build will fail with an error complaining that it can't load metadata for the image because a HEAD request failed. So the real question is: why isn't there a way to disable the HEAD request for loading metadata for images? Perhaps there is one and I don't know it. |
| ▲ | Too 3 days ago | parent | next [-] |
| Sure? --pull=missing should be the default. |
| ▲ | cipherself 2 days ago | parent [-] |
| While I haven’t tried --pull=missing, I have tried --pull=never, which I assume is a stricter version, and it was still attempting the HEAD request. |
| |
| ▲ | switz 3 days ago | parent | prev [-] |
| Yeah, this is the actual error that I'm running into. Metadata requests are returning 401, and the build bails out. |
| |
| ▲ | tln 3 days ago | parent | prev | next [-] |
| You might still have it on your dev box or build box: |

  docker image ls
  docker tag name/name:version your.registry/here/name/name:version
  docker push your.registry/here/name/name:version
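
| And on the box that needs it, a pull against the same placeholder registry path: |

  docker pull your.registry/here/name/name:version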
| | |
| ▲ | tln 3 days ago | parent | next [-] |
| Per sibling comment, public.ecr.aws/docker/library/.... works even better. |
| ▲ | akshayKMR 3 days ago | parent | prev [-] |
| This saved me. I was able to push the image from one of my nodes. Thank you. |
| |
| ▲ | pebble 3 days ago | parent | prev [-] |
| This is the way, though it can lead to fun moments: I was just setting up a new cluster and couldn't figure out why I was having problems pulling images when the other clusters were pulling just fine. Took me a while to think of checking the Docker Hub status page. |
|
|
| ▲ | kam 3 days ago | parent | prev | next [-] |
| > Is there a mirror registry with all the same basic images? |
| https://gallery.ecr.aws/ |
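| For the node/alpine images mentioned upthread, the Docker official images are mirrored there under the docker/library namespace (the tag below is just an example): |

  docker pull public.ecr.aws/docker/library/node:20-alpine

  # or directly in a Dockerfile:
  # FROM public.ecr.aws/docker/library/node:20-alpine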
|
| ▲ | matt_kantor 2 days ago | parent | prev | next [-] |
| > I don't work on mission-critical software |
| > wondering what my alternate deployment routes are |
| If the stakes are low and you don't have any specific need for a persistent registry, then you could skip it entirely and push images to production from wherever they are built. This could be as simple as `docker save`/`scp`/`docker load`, or as fancy as running an ephemeral registry to get layer caching like you have with `docker push`/`docker pull` [1]. |
| [1]: https://stackoverflow.com/a/79758446/3625 |
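| A minimal sketch of the `docker save` route (the image name and host below are placeholders): |

  # on the build box: stream the image straight to the server over ssh
  docker save myapp:latest | gzip | ssh user@prod 'gunzip | docker load'

  # or in two steps, via a tarball
  docker save -o myapp.tar myapp:latest
  scp myapp.tar user@prod:
  ssh user@prod docker load -i myapp.tar

| No registry is involved at all, which also means no layer caching between deploys — that's the trade-off the ephemeral-registry option addresses. |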
|
| ▲ | XCSme 3 days ago | parent | prev | next [-] |
| It's a bit stupid that I can't restart my container (on Coolify) because pulling the image fails, even though I'm already running it, so I do have the image; I just need to restart the Node.js process... |
| |
| ▲ | XCSme 3 days ago | parent [-] |
| Never mind, I worked around it in the terminal, without going through Coolify: |

  docker ps                      # find the container
  docker restart <container_id>
|
|