MobiusHorizons a day ago

What you are proposing sounds like a nightmare to debug. The high-level perspective of the operation is of course valuable for determining whether an investigation is necessary, but the low-level perspective in the library code is almost always where the relevant details are hiding. Not logging those details means you are in the dark about anything your abstractions are hiding from higher-level code (which is usually a lot).

cwillu a day ago | parent | next [-]

Those details don't belong at the error log level; that's what info or trace is for.

dpark a day ago | parent | next [-]

They were replying to a person who says “it’s almost always wrong for library functions to log anything”. Not just errors.

Retric a day ago | parent [-]

If it’s not your code, how is a log useful vs. returning an error?

Even relatively complex operations, like converting a document into a PDF, basically have only two useful states: either it worked, or something specific failed, at which point just tell me that thing.

Now, independent software like web servers or databases can have useful logs, because they have completely independent interfaces with the outside world. But I call libraries; they don’t call me.

lazyasciiart a day ago | parent [-]

That’s a very simple operation. Try “take these 100 user-generated PDFs and translate all of them”. Oh, “cannot parse unexpected character 0x001”? Cool beans, I wish I knew more.

Retric a day ago | parent [-]

That’s ok, I’ll just check the log: 50MB of “This is my happy place.” followed by a one-liner “cannot parse unexpected character 0x001”?

Any library can do a bad job here; that doesn’t come down to logging vs error messages.

dpark 11 hours ago | parent [-]

The unspoken assumption you are making is that anyone who would disagree with your philosophy on this is incompetent.

Retric 9 hours ago | parent [-]

Being incorrect doesn’t imply general incompetence.

dpark 8 hours ago | parent [-]

Your statement that logging would contain zero useful information indicates an assumption of incompetence.

Retric 7 hours ago | parent [-]

No, I’m only saying a useless error code and a useless log are both possible. Either could be useful, or both could be useless because the creator was actively malicious, etc. Thus, the possibility of a useless error code doesn’t inherently mean a log would improve things.

Really, the only thing we can definitely say is that when both approaches are executed well, it’s harder to use log entries in your code. If something returns an error, that error is tied to a specific call from a specific bit of code, whereas a log entry could in theory have come from anything.

heisenbit 19 hours ago | parent | prev [-]

Trace can become so voluminous that it is switched on only on an as-needed basis, which can be too late for rare events. Also, the trace level, being more of a debugging tool, tends to be less scrutinized for exposing sensitive data, making it unsuitable for continuous use in live production.

Too a day ago | parent | prev | next [-]

Simple: include those relevant details in the exceptions instead of hiding them.

layer8 a day ago | parent | next [-]

It’s not that simple. First, this results in exception messages that are a concatenation of multiple levels of error escalation. These become difficult to read and have to be broken up again in reverse order.

Second, it can lose information about exactly when, and in what order, things happened. For example, cleanup operations during stack unwinding can also produce log messages, and then it’s no longer clear that the original error happened before those.

Even when you include a timestamp at each level, that’s often not sufficient to establish a unique ordering, unless you add some sort of unique counter.

It gets even more complicated when exceptions are escalated across thread boundaries.
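
One rough sketch of such a counter in Python, assuming the standard logging module (the filter and the "seq" field name are made up):

    import itertools
    import logging

    _seq = itertools.count()  # process-wide monotonic counter

    class SequenceFilter(logging.Filter):
        # Stamp every record with a sequence number; timestamps alone
        # can collide and so cannot establish a unique ordering.
        def filter(self, record):
            record.seq = next(_seq)
            return True

    handler = logging.StreamHandler()
    handler.addFilter(SequenceFilter())
    handler.setFormatter(logging.Formatter("%(seq)06d %(asctime)s %(message)s"))
    logging.getLogger().addHandler(handler)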

ninkendo 15 hours ago | parent [-]

> First, this results in exception messages that are a concatenation of multiple levels of error escalation. These become difficult to read and have to be broken up again in reverse order

Personally I don't mind it... the whole "$outer: $inner" convention naturally lends itself to messages that still parse in my brain and actually include the details in a pretty natural way. Something like:

"Error starting up: Could not connect to database: Could not read database configuration: Could not open config file: Permission denied"

Tells me the config file for the database has broken permissions. Because the permission denied error caused a failure opening the config file, which caused a failure reading the database configuration, which caused a failure connecting to the database, which caused an error starting up. It's deterministic in that for "$outer: $inner", $inner always caused $outer.
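
A rough Python sketch of how such a chain gets built (the function names and config path are made up); each layer catches, prefixes its own context, and re-raises:

    def read_db_config(path):
        try:
            with open(path) as f:
                return f.read()
        except OSError as e:
            # Yields "$outer: $inner", e.g.
            # "Could not open config file: [Errno 13] Permission denied: 'db.conf'"
            raise RuntimeError(f"Could not open config file: {e}") from e

    def connect_to_database():
        try:
            config = read_db_config("db.conf")  # hypothetical path
        except RuntimeError as e:
            raise RuntimeError(f"Could not connect to database: {e}") from e
        ...  # connect using config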

Maybe it's just experience though, in a sense that it takes a lot of time and familiarity for someone to actually prefer the above. Non-technical people probably hate such messages and I don't necessarily blame them.

MobiusHorizons a day ago | parent | prev | next [-]

Sometimes you don’t have all the relevant details in scope at the point of error. For instance, some recoverable thing might have happened first, exercising a backup path with slightly different data. This is not exception-worthy, and execution continues. Then maybe some piece of data in this backup path interacts poorly with some other backend causing an error. The exception won’t tell you how you got there, only where you got stuck. Logging can tell you the steps that led up to that, which is useful. Of course you need a way to deal with verbose logs effectively, but such systems aren’t exactly rare these days.

Hackbraten 20 hours ago | parent [-]

> Then maybe some piece of data in this backup path interacts poorly with some other backend causing an error. The exception won’t tell you how you got there, only where you got stuck.

Then catch the exception on the backup path and wrap it in a custom exception that conveys to the handler the fact that you were on the backup path. Then throw the new exception.
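
A rough Python sketch of that (all names are hypothetical stand-ins):

    class BackupPathError(Exception):
        """The failure happened on the backup path."""

    def process(data):
        # primary_path, backup_path, RecoverableError are stand-ins
        try:
            return primary_path(data)
        except RecoverableError:
            try:
                return backup_path(data)
            except Exception as e:
                # The handler now learns *how* we got here,
                # not just where we got stuck.
                raise BackupPathError("failed while on the backup path") from e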

awesome_dude a day ago | parent | prev | next [-]

At the extreme end: if my JavaScript frontend is being told about a database configuration error happening in the backend when a call with specific parameters is made, that is a SERIOUS security problem.

Errors are massaged for the reader. A database access library will know that a DNS error occurred and that this (the first step for debugging) is why it cannot connect to the specified datastore. The service layer caller does not need to know that there was a DNS error; it just needs to know that the specified datastore is uncontactable (and then it can move on to the appropriate resilience strategy: retry the same datastore, fall back to a different datastore, or tell the API that it cannot complete the call at all).

The caller can then decide what to do (typically say "Well, I tried, but nothing's happening, have yourself a merry 500").

It makes no sense for the service layer to know the details of why the database access layer could not connect, any more than it makes sense for the database access layer to know why there is a DNS configuration error. The database access layer just needs to log the reasons (for humans to investigate) and tell the caller (the service layer) that it could not do the task it was asked to do.

If the service layer is told that the database access layer encountered a DNS problem, what is it going to do?

Nothing. The best it can do is log (tell the humans monitoring it) that a DB access call (to a specific DB service layer) failed, and try something else, which is a generic strategy, one that applies to a host of errors the database call could return.
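
A rough Python sketch of that separation (all names are hypothetical):

    import logging
    import socket

    log = logging.getLogger("db_access")

    class DatastoreUnavailable(Exception):
        """All the service layer needs to know."""

    def connect(host):
        try:
            return open_connection(host)  # hypothetical low-level call
        except socket.gaierror as e:
            # Log the DNS detail for the humans; give the caller
            # only what it can act on.
            log.error("DNS lookup failed for %s: %s", host, e)
            raise DatastoreUnavailable(host) from e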

kgklxksnrb 20 hours ago | parent [-]

That’s how we get errors like “file not found”, without a file name. A pain for mankind.

Kwpolska a day ago | parent | prev [-]

Not all problems cause exceptions.

energy123 a day ago | parent [-]

That's a matter of good taste, but there's nothing preventing you from throwing exceptions on every issue and requiring consumers to handle them.

makeitdouble 20 hours ago | parent [-]

Imagine you have a caching library that handles DB fallback. A cache that should be there but goes missing is arguably an issue.

Should it throw an exception for that to let you know, or should it gracefully fall back so your service stays alive? The middle ground is leaving a log and chugging along; your proposition throws that out the window.
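
A rough Python sketch of that middle ground (the cache API and exception are made up):

    import logging

    log = logging.getLogger("cache")

    def get(key, cache, db):
        try:
            return cache.get(key)
        except CacheMissingError as e:  # hypothetical: a cache that should exist is gone
            # Not worth killing the service over; leave a trail and chug along.
            log.warning("cache missing for %r, falling back to DB: %s", key, e)
            return db.get(key)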

TZubiri a day ago | parent | prev [-]

You can log your IO, and as long as your functions are idempotent, that should be enough info to replicate.
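
A rough Python sketch of what that might look like (the decorator is made up):

    import functools
    import logging

    log = logging.getLogger("io_trace")

    def trace_io(fn):
        # Log inputs and outputs so a failing call can be replayed later.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            log.info("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            result = fn(*args, **kwargs)
            log.info("ret  %s -> %r", fn.__name__, result)
            return result
        return wrapper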

dpark a day ago | parent [-]

Assuming everything is idempotent is a tall order.

There are a lot of libraries that have non-idempotent actions. There are a lot of inputs that can be problematic to log, too.

TZubiri a day ago | parent [-]

Say, like opening a file?

I guess in those cases standard practice is for the lib to return a detailed error, yeah.

As for traces, trying to solve issues that depend on external systems is indeed a tall order for your code. Isn't it beyond the scope of the thing being programmed?

sigseg1v 11 hours ago | parent | next [-]

From my experience working on B2B applications, I am happy that everything is generally spammed to the logs because there would simply be no other reasonable way to diagnose many problems.

It is very, very common that the code that you have written isn't even the code that executes. It gets modified by enterprise antivirus or "endpoint security". All too often I see "File.Open" calls report that the caller has access, when actually what's happened is that the AV has intercepted the call, blocked it improperly, and returned a zero-byte file that exists (even though there is actually a larger file there) instead of saying the file cannot be opened.

I will never, in a million years, be granted access to attach a debugger to such a client computer. In fact, they will not even initially disclose that they are using antivirus. They will just say the machine is set up per company policy and that your software doesn't work, fix it. The assumption is always that your software is to blame, and they give you nearly nothing, except for the logs.

The only way I ever get this solved in a reasonable amount of time is by looking at verbose logs, determining that the scenario they have described is impossible, explaining which series of log messages could not have occurred, yet occurred on their system, and asking them to investigate further. Usually this ends up being closed with a resolution like "Checked SuperProtectPro360 logs and found it was writing internal error logs at the same time the software was in use. Adjusted the monitoring settings and the problem is now resolved."
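
The verbose logging that makes that kind of diagnosis possible is mundane; a rough Python sketch (names made up):

    import logging
    import os

    log = logging.getLogger("files")

    def read_report(path):
        size = os.path.getsize(path)
        log.debug("opening %s, OS reports %d bytes", path, size)
        with open(path, "rb") as f:
            data = f.read()
        # "OS reports 4096 bytes" followed by "read 0 bytes" points at
        # something outside the program, e.g. AV intercepting the open.
        log.debug("read %d bytes from %s", len(data), path)
        return data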

dpark a day ago | parent | prev [-]

I don’t really understand what you mean about opening files. Is this just an example of an idempotent action or is there some specific significance here?

Either way logging the input (file name) is notably not sufficient for debugging if the file can change between invocations. The action can be idempotent and still be affected by other changes in the system.

> trying to solve issues that depend on external systems is indeed a tall order for your code. Isn't it beyond the scope of the thing being programmed?

If my program is broken I need it fixed regardless of why it’s broken. The specific example here of a file changing is likely to manifest as flakiness that’s impossible to diagnose without detailed logs from within the library.

TZubiri a day ago | parent [-]

I was just trying to think of an example of a non-idempotent function, as in one that depends on an external IO device.

I will say that error handling and logging in general is one of my weak points, but I made a comment about my approach so far being gdb/pdb based: attaching a debugger and creating breakpoints and prints ad hoc rather than writing them in code. I'm sure there are reasons why it isn't used as much and logging in code is so much more common, but I have faith that it's a path worth specializing in.

Back to the file-reading example, for a non-idempotent function. Considering we are using an encapsulating approach, we have to split ourselves into 3 roles: we can be the IO library writer, we can be the calling-code writer, and we can be an admin responsible for the whole product. I think a common trap engineers fall into is trying to keep all of the "global" context (or as much as they can handle) at all times.

In this case we wouldn't be writing the non-idempotent library, so of course that's not a hat we wear; we don't much care about the innards of the function and its state. Rather, we have a well-defined set of errors that are part of the function's interface (EINVAL, EACCES, EEXIST).
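
In Python terms, a rough sketch of consuming that contract (the fallback behavior is made up):

    import errno

    def load(path):
        try:
            with open(path, "rb") as f:
                return f.read()
        except OSError as e:
            # The documented errno values are the interface; branch on them
            # instead of peeking inside the library.
            if e.errno == errno.ENOENT:
                return None  # missing file: treat as "no data yet"
            if e.errno == errno.EACCES:
                raise RuntimeError(f"not permitted to read {path}") from e
            raise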

In this sense we respect the encapsulation boundaries and are provided the necessary information by the library. If we ever need to dive into the actual library code, then first, the encapsulation is broken and we are dealing with a leaky abstraction; second, we just dive into the library code (or the filesystem admin logs themselves).

It's not precisely the type of responsibility that can be handled at design time and in code anyway; when we code, we are wearing the calling-module programmer hat. We cannot think of everything that the sysadmin might need at the time of experiencing an error; we have to trust that they will be sufficiently armed with tools to gather the necessary information. And thank god for that! Checking /proc/fs, looking at crash dumps, and attaching to processes with gdb will yield far better info than relying on whatever print statements you somehow added to your program.

Anyways, at least that's my take on the specific example of glibc-like implementations of POSIX file operations like open(). I'm sure the implications may change for other non-idempotent functions, but at some point, talking about specifics is a bit more productive than talking in the abstract.

dpark a day ago | parent [-]

The issue with relying on gdb is that you generally cannot do this in production. You can’t practically attach a debugger to a production instance of a service for both performance and privacy reasons, and the same generally applies to desktop and mobile applications being run by your customers. Gdb is mostly for local debugging and the truth is that “printf debugging” is how it often works for production. (Plus exception traces, crash dumps, etc. But there is a lot of debugging based on logging.) Interactive debugging is so much more efficient for local development but capable preexisting logging is so much more efficient for debugging production issues.

I generally agree that I would not expect a core library to do a bunch of logging, at least not onto your application logs. This stuff generally is very stable with a clean interface and well defined error reporting.

But there’s a whole world of libraries that are not as clean, not as stable, and not as well defined. Most libraries in my experience are nowhere near as clean as standard IO libraries. They often do very complex stuff to simplify for the calling application and have weakly defined error behavior. The more complexity a library contains, the more it likely has this issue. Arguably that is leaky abstraction but it’s also the reality of a lot of software and I’m not even sure that’s a bad thing. A good library that leaks in unexpected conditions might be just fine for many real world purposes.

TZubiri 4 hours ago | parent [-]

It's coming together more clearly now.

I guess my experience is more from the role of a startup or even in-house software. So we both develop and operate the software. But in scenarios where you ship the software and it's operated by someone else, it makes sense to have more auditable and restricted logging instead of all-too-powerful ad-hoc debugging.