TZubiri a day ago

How about wrapping the log.trace param in a lambda and monkeypatching log.trace to take a function that returns a string, and of course pushing the conditional to the monkeypatched func.
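A minimal sketch of that idea in Python (the patched trace() and expensive_repr() here are made up for illustration; stdlib logging has no TRACE level, so DEBUG stands in):

    import logging

    logger = logging.getLogger(__name__)

    def expensive_repr():
        # Stand-in for a costly serialization of program state.
        return ", ".join(str(i) for i in range(1000))

    # Patched trace(): takes a zero-argument callable instead of a string,
    # so the message is only built when the level check passes.
    def trace(msg_fn):
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug("%s", msg_fn())

    # The caller wraps the message in a lambda; nothing is concatenated
    # unless trace logging is actually enabled.
    trace(lambda: "state dump: " + expensive_repr())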

cluckindan a day ago | parent | next [-]

Then you still have the overhead of the log.trace function call and of constructing the lambda (which is not cheap, because it closes over the params being logged and is passed as an argument to a function call, so it probably gets allocated on the heap).

TZubiri a day ago | parent [-]

>Then you still have the overhead of the log.trace function call

That's not an overhead at all. Even if it were, it's not comparable to string concatenation.

Regarding the overhead of the lambda and of copying params: it depends on the language, but usually strings are passed by reference and pass-by-value arguments are just one word long, so we are talking one cycle and 8 bytes of memory per variable, costs which were already being paid anyway.

That said, logging functions that just take a list of vars, like Python's print(), are even better:

    def printtrace(*args):
        # Only format and print when tracing is enabled.
        if trace:
            print(*args)

    printtrace("var x and y:", x, y)

Python gets a lot of flak for being a slow language, but you get so much expressiveness that you can invest in optimization after paying a flat cycle cost.

jeeeb a day ago | parent [-]

That’s what most languages, including Java, do.

The problem the OP is pointing out is that some programmers are incompetent and do string concatenation anyway, a mistake that, if anything, is even easier to make in Python thanks to string interpolation.
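For illustration (not from the thread), Python's stdlib logging shows both sides: an f-string builds the message eagerly even when the level is disabled, while %-style arguments are only formatted if the record is actually emitted:

    import logging

    logger = logging.getLogger(__name__)
    x, y = 42, "some large payload"

    # Eager: the f-string is evaluated before debug() is even called,
    # so the formatting cost is paid even when DEBUG is disabled.
    logger.debug(f"x={x}, y={y}")

    # Lazy: format string and args are passed separately and only
    # interpolated if a handler actually emits the record.
    logger.debug("x=%s, y=%s", x, y)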

01HNNWZ0MV43FF a day ago | parent | prev [-]

That is why the popular `tracing` crate in Rust uses macros for logging instead of functions: if the event's level is not enabled, the body of the macro is never evaluated.

tsimionescu a day ago | parent | next [-]

Does that mean the log level is a compilation parameter? Ideally, log levels shouldn't even be startup parameters; they should be changeable on the fly, at least for any server-side code. Having to restart is bad enough; having to recompile to get debug logs would be an extraordinary nightmare (not only do you need to get your customers to reproduce the issue with debug logs, you actually have to ship them new binaries, which likely implies export controls, security validations, etc.).

bluGill a day ago | parent | next [-]

I don't know how Rust does it, but my internal C++ framework has a global static array so that we can look up the current log level quickly and change it at runtime as needed. It is very valuable to be able to turn on specific debug logs when someone has a problem and we want to know what some code is doing.
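A rough Python analog of that idea (the category table and function names below are hypothetical, not the framework described above):

    import logging

    logging.basicConfig(level=logging.DEBUG)

    # Per-subsystem levels kept in a module-level table; the check is a
    # cheap dict lookup, and an operator can flip a category at runtime.
    LOG_LEVELS = {"net": logging.INFO, "db": logging.INFO}

    def set_level(category, level):
        LOG_LEVELS[category] = level

    def trace(category, fmt, *args):
        # Only format and emit when the category's level allows DEBUG.
        if LOG_LEVELS.get(category, logging.INFO) <= logging.DEBUG:
            logging.getLogger(category).debug(fmt, *args)

    trace("db", "query took %s ms", 42)   # suppressed: "db" is at INFO
    set_level("db", logging.DEBUG)        # turned on at runtime
    trace("db", "query took %s ms", 42)   # now emitted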

TZubiri a day ago | parent | prev [-]

I know this is standard practice, but I personally think it's more professional to attach a gdb-like debugger to the process instead of depending on coded log statements.

tsimionescu a day ago | parent | next [-]

A very common thing that will happen in professional environments is that you ship software to your customers, and they will occasionally complain that in certain situations (often ones they don't fully understand) the software misbehaves. You can't attach a debugger to your customer's setup that had a problem over the weekend and got restarted: the only way to debug such issues is to have set up programmed logging ahead of time.

ekidd 16 hours ago | parent | prev [-]

In my professional life, somewhere over 99% of the time, the code suffering the error has been one of:

1. Production code running somewhere on a cluster.

2. Released code running somewhere on an end-user's machine.

3. Released production code running somewhere on an end-user's cluster.

And errors happen at weird times, like 3am on a Sunday morning on someone else's cluster. So I'd just as soon not have to wake up, figure out all the paperwork to get access to some other company's cluster, and then figure out how to attach a debugger. Especially when the error is some non-reproducible corner case in a distributed algorithm that happens once every few months, and the failing process is long gone. Just no.

It is so much easier to ask the user to turn up logging and send me the logs. Nine times out of ten, this will fix the problem. The tenth time, I add more logging and ask the user to keep an eye open.

TZubiri 12 hours ago | parent [-]

I think I get the idea: gdb is too powerful. In contexts where the operator is distinct from the manufacturer, the debug/logging tool needs to be weaker and not ad hoc, so that it can be audited and doesn't exfiltrate user data.

tsimionescu 6 hours ago | parent [-]

It's not so much about power as about the ad-hoc nature of attaching a debugger. If you're not there to catch and treat the error as it happens, a debugger is not useful in the slightest: by the time you can attach it, the error, or the context where it happened, is long gone. Not to mention that even if you can attach a debugger, it's most often not acceptable to pause the execution of the entire process while you debug the error.

Especially since a lot of the time the exception being raised is not the actual bug: the bug happened many function calls earlier. By logging key aspects of the program's state, even in non-error cases, you have a much better chance, when an error does happen, of piecing together how you got into the error state in the first place.

jeeeb a day ago | parent | prev [-]

The idea in Java is to let the JIT optimise away the logging code.

This is more flexible as it still allows runtime configuration of the logging level.

The OP is simply pointing out that some programmers are incompetent and call the trace function incorrectly.