bayindirh 9 hours ago
> which can't sort the numbers they don't see

Then you sort at the point where you can see the numbers and discard them later.

> Also, why do I need to count columns like a cave man in 'sort -k 5' instead of doing the obvious "sort by size"

awk can sort the columns for you. Plus, ls can already sort by size: try "ls -lS" for the biggest file first, or "ls -lSr" for the smallest file first. Add "-h" to make the sizes human readable.

> A problem that would disappear with... structured data!

No. A problem that would disappear with a small if block that asks which environment I'm in. If you're in a shell, the "-t" test in sh/bash will tell you that. If you're writing a tool, there are standard ways to do it (via termcap, IIRC). Standard UNIX tools have been doing this for decades now. IOW, structured data is not a cure for laziness...

> so use the other "some" that can?

Yes, because their authors are not that lazy.
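For what it's worth, a minimal sketch of the "sort while the number is visible, then discard it" workaround, and of letting ls do the sorting itself (assuming GNU coreutils; "./src" is just a placeholder path):

    # Sort du output on the visible byte count, then drop that column:
    du -a ./src | sort -n | cut -f2-

    # Or let ls sort by size directly, no column counting needed:
    ls -lS ./src     # biggest file first
    ls -lShr ./src   # smallest first, with human-readable sizes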
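And a sketch of the "-t" check mentioned above, which is one way a tool can decide between human-oriented and pipe-oriented output (plain POSIX sh):

    # [ -t 1 ] is true when stdout is attached to a terminal,
    # false when it is piped or redirected.
    if [ -t 1 ]; then
        echo "interactive: pretty, human-readable output"
    else
        echo "piped: plain, machine-parseable output"
    fi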
eviks 9 hours ago
> Then you sort at the point you can see the numbers and discard them later

This sort of human overhead is only needed to compensate for the deficiencies of the data structures.

> ls can already sort by size

That's the benefit of integration you're arguing against with your deficient piping suggestions.

> IOW, structured data is not a cure for laziness...

It is precisely what good design is for - it reduces the need for the various dumb workarounds that bad design requires, which means you can be lazier and avoid said workarounds.

> Yes, because their authors are not that lazy.

This just ignores the argument, which was that "some better new tools don't do that" isn't relevant when some better new tools also do that.