nickmonad · 4 hours ago
> So you're stuck debugging a system you don't control, through screenshots and copy-pasted logs on a Zoom call.

This is very real. I work with a deployment that operates in this fashion, although unfortunately we can't maintain _any_ connection back to our servers. Pull or push, doesn't matter.

The goal right now is to build out tooling to export logs and telemetry data from an environment, such that a customer could trigger that export on our request, or (ideally) as part of the support ticketing process. Then our engineers can analyze async. This can be a ton of data, though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information -- even IDs, file names, etc. that only matter to customers.
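The scrub-then-export flow described above can be sketched in a few lines. This is a minimal illustration, not the commenter's actual tooling: the redaction patterns, placeholder tokens, and function names are all hypothetical, and a real deployment would need patterns tuned to its own ID and file-name conventions.

```python
import gzip
import re

# Hypothetical redaction rules: anything matching these patterns is
# rewritten before the log line leaves the customer's environment.
REDACTIONS = [
    (re.compile(r"/[\w./-]*\.(?:csv|pdf|docx)\b"), "<file>"),  # file paths
    (re.compile(r"\bcust_[0-9a-f]{8}\b"), "<customer-id>"),    # internal IDs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),       # email addresses
]

def scrub(line: str) -> str:
    """Apply every redaction rule to a single log line."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line

def export_bundle(lines, path):
    """Scrub, then gzip-compress, a batch of log lines for export."""
    with gzip.open(path, "wt", encoding="utf-8") as out:
        for line in lines:
            out.write(scrub(line) + "\n")

logs = [
    "2024-05-01 upload ok: /data/acme/q2-report.csv by alice@example.com",
    "2024-05-01 retry for cust_deadbeef on node-3",
]
print([scrub(line) for line in logs])
```

One design note: scrubbing before compression matters here, since the redacted placeholders are highly repetitive and compress far better than the high-entropy IDs and paths they replace.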
alongub · 3 hours ago
> Although unfortunately, we can't maintain _any_ connection back to our servers. Pull or push, doesn't matter.

We're working on something for this! Stay tuned.
nodesocket · 4 hours ago
I also used to work with on-premises installs of Kubernetes, and their "security" postures prevented any inbound access. It was a painful process of requesting access, getting on a Zoom call, and then controlling their screen via a Windows client and PuTTY. It was beyond painful and frustrating. I tried to pitch a tool like Twingate, which doesn't open any inbound ports and can be locked down very tight using SSO, 2FA, access control rules, and IP limiting, but to no avail. They were stuck in their Windows-based IT mentality.
jcgrillo · 4 hours ago
> This can be a ton of data though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information.

This is fundamentally a data modeling problem. Currently, telemetry data are just little bags of UTF-8 bytes, or at best something like `list<map<bytes, bytes>>`. IMO this needs to change from the ground up: logging libraries should emit structured data conforming to a user-supplied schema -- not some open-ended schema that tries to be everything to everyone.

Then it's easy to solve both problems. Each field is a typed column which can be compressed optimally, and marking a field as "safe" is something encoded in its type. So upon export, only the safe fields make it off the box, or out of the VPC, or whatever -- note you can have a richer ACL structure than just "safe yes/no".

I applaud the industry for trying so hard for so long to make everything backwards compatible with the unstructured-bytes base case, but I'm not sure that's ever really been the right north star.
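The "safety encoded in the type" idea above can be sketched with per-field metadata in a schema, with export as a projection onto the safe columns. The event type, field names, and `safe` flag here are all hypothetical, purely to illustrate the shape of the idea (a richer ACL would replace the boolean with a label).

```python
from dataclasses import dataclass, field, fields
from typing import Any

# A user-supplied schema: each event is a typed record, and each
# column declares whether it may leave the box.
@dataclass
class UploadEvent:
    timestamp: int = field(metadata={"safe": True})
    status: str = field(metadata={"safe": True})
    duration_ms: int = field(metadata={"safe": True})
    file_name: str = field(metadata={"safe": False})    # customer data
    customer_id: str = field(metadata={"safe": False})  # customer data

def export_view(event: Any) -> dict:
    """Project an event onto its exportable columns only."""
    return {
        f.name: getattr(event, f.name)
        for f in fields(event)
        if f.metadata.get("safe", False)
    }

ev = UploadEvent(1714500000, "ok", 212, "q2-report.csv", "cust_deadbeef")
print(export_view(ev))  # only the safe, typed columns survive export
```

The same typing pays off on the compression side: a column of `int` timestamps can be delta-encoded and a low-cardinality `status` column dictionary-encoded, neither of which is possible when everything is an opaque bag of bytes.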