| ▲ | gizmo686 17 hours ago |
This depends on the industry. Around here, working locally on a laptop is a luxury, and most devs are required to treat their laptop like a thin client. Of course, being developer laptops, they all come with 16 gigs of RAM. In contrast, the remote VMs where we do all of the actual work are limited to 4 GiB unless we get manager and IT approval for more.
|
| ▲ | sumanthvepa 14 hours ago | parent | next [-] |
| Interesting. I required all my devs to use local VMs for development. We've saved a fair bit on cloud costs. |

| ▲ | wongarsu 10 hours ago | parent | next [-]
> We've saved a fair bit on cloud costs

Our company just went with the "server in the basement" approach, with every employee having a user account (no VM or docker separation, just normal file permissions). Sure, sounds like the 80s, but it works really well. Remote access with wireguard, uptime similar or better than cloud, sharing the same beefy CPUs works well and gives good utilization. Running jobs that need hundreds of GB of RAM isn't an issue as long as you respect others' needs and don't hog the RAM all day. And in amortized cost per employee it's dirt cheap. I only wish we had more GPUs.
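
For a sense of how little the remote-access side involves, a minimal WireGuard client config for this kind of setup might look like the sketch below; the addresses, endpoint hostname, and keys are placeholders, not our actual setup:

    [Interface]
    # This employee's private key (placeholder)
    PrivateKey = <client-private-key>
    # This client's address inside the VPN subnet
    Address = 10.0.0.2/32

    [Peer]
    # The basement server's public key (placeholder)
    PublicKey = <server-public-key>
    # Public endpoint of the server (hypothetical hostname)
    Endpoint = vpn.example.com:51820
    # Route traffic for the office subnet through the tunnel
    AllowedIPs = 10.0.0.0/24
    # Keep NAT mappings alive across idle periods
    PersistentKeepalive = 25

One [Peer] entry per employee on the server side, and ordinary file permissions on the box do the rest.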

| ▲ | mr_toad 10 hours ago | parent | prev | next [-]
> Interesting. I required all my devs to use local VMs for development.

It doesn't work when you're developing against a large database, since it won't fit on the laptop. Database (and data warehouse) development has been held back from modern practices for exactly this reason.

| ▲ | layer8 10 hours ago | parent | prev | next [-]
For many companies, IP isn't allowed to leave environments controlled by the company, which employee laptops are not.

| ▲ | happymellon 13 hours ago | parent | prev [-]
My current job used to let us run containers locally, but then they decided to wrap first docker, and later podman, with "helper" scripts. These broke regularly and became too much overhead to maintain, so now we're mandated to do local dev but use a dev k8s cluster for any testing that goes beyond unit tests and needs a db. A real shame, as running local docker/podman for postgres was fine when you just ran the commands.
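
For reference, "just running the commands" was about this involved (container name and password here are placeholders):

    # Start a throwaway postgres for local dev
    docker run -d --name dev-postgres \
      -e POSTGRES_PASSWORD=devpass \
      -p 5432:5432 \
      postgres:16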

| ▲ | cdogl 12 hours ago | parent [-]
I find this quite surprising! What benefit does your org accrue by mandating that the db instance used for testing is centralised? Where I am, the tests simply assume that there's a database available on a certain port. docker-compose.yml makes it easy to spin this up for those so inclined (see the sketch below). At that stage it's immaterial whether it's running natively, or in docker, or forwarded from somewhere else. Our tests stump up all the data they need and tear down the db afterwards. In contrast, I imagine that a dev k8s cluster requires some management and would be a single point of failure.
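
As an illustration, a minimal docker-compose.yml along these lines might look like the following; the credentials, database name, and port are placeholders, and the healthcheck is just one way to let tests wait for readiness:

    services:
      db:
        image: postgres:16
        environment:
          # Placeholder credentials for a throwaway test database
          POSTGRES_PASSWORD: devpass
          POSTGRES_DB: testdb
        ports:
          # Tests assume the db is reachable on this port
          - "5432:5432"
        healthcheck:
          # Reports healthy once postgres accepts connections
          test: ["CMD-SHELL", "pg_isready -U postgres"]
          interval: 2s
          timeout: 5s
          retries: 15

A test runner can then do `docker compose up -d --wait db` before the suite and `docker compose down -v` afterwards to tear everything down.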

| ▲ | happymellon 4 hours ago | parent [-]
I really don't understand why they do what they do. Large corp gotta large corp? My guess is that being able to pull containers means you can run code they haven't explicitly approved, and the laptop scanning tools can't see inside them?
|
| ▲ | user34283 13 hours ago | parent | prev [-] |
Yes, zero-latency typing in your local IDE on a laptop sounds like the dream. In enterprise, we get shared servers with constant connection issues, performance problems, and full disks. Alternatively we can use Windows VMs in Azure, with network-attached storage where "git log" can take a full minute. And that's apparently the strategic solution. Not to mention that in Azure, 8 vCPUs gets you four physical cores of a previous-gen server CPU. To anyone working with 4 vCPUs, i.e. 2 physical cores: good luck.