klibertp | 4 days ago
> I noticed myself that really large HISTSIZE ...

Right, I totally forgot about that. My history file is 4.5 MB and $HISTSIZE is 1M. I even wrote a Scala app[1] some time ago to collect history files from all my machines (I used to have many more than my current two), merging and deduping them once a day. On top of that, the file is 13 years old at this point and probably contains quite a few KB of mis-pasted text, so it makes sense that it's this large, and that processing it takes a while, especially with deduping enabled.

I'll check, but if that's the reason, then I'd be reluctant to do anything with it: having fzf search through all my command lines dating back to 2012 is very valuable. I'll see how that would work with spooling. Thanks for the profiling tip, I'll check it out! As mentioned, I'm not thinking of jumping ship, so I'm willing to do some digging to make the situation better :)

[1] https://github.com/piotrklibert/zsh-merge-hist

EDIT: yeah, history is the reason:
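(The linked Scala tool is the real implementation; as a rough illustration of the merge-and-dedup idea, here is a minimal shell sketch. It assumes EXTENDED_HISTORY entries of the form `: <epoch>:<dur>;<cmd>`, one command per line, and uses sample data in a temp directory rather than real history files.)

```shell
# Build two tiny sample "per-host" history files in EXTENDED_HISTORY format.
dir=$(mktemp -d)
printf '%s\n' ': 1000:0;ls' ': 1005:0;git status' > "$dir/host1"
printf '%s\n' ': 1002:0;ls' ': 1010:0;make'       > "$dir/host2"

# Merge the files, keep only the newest copy of each command (keyed by the
# text after the first ';'), then restore chronological order by the epoch
# field (field 2 when splitting on ':').
cat "$dir/host1" "$dir/host2" \
  | awk '{ i = index($0, ";"); latest[substr($0, i + 1)] = $0 }
         END { for (c in latest) print latest[c] }' \
  | sort -t: -k2,2n > "$dir/merged"

cat "$dir/merged"
```

Real zsh history can contain multi-line entries (backslash continuations), which this one-line-per-entry sketch ignores; a robust merger has to parse those, which is presumably part of why a dedicated tool exists.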
cb321 | 4 days ago | parent
In that case, since you are already de-duping "externally", you might experiment with combinations of `setopt HIST_IGNORE_ALL_DUPS HIST_IGNORE_DUPS HIST_SAVE_NO_DUPS`. It's been many years since I looked at it, but I think these options conspire with large saved histories to slow things down a lot at startup, during the initial history parse. I don't recall whether that cost is inherent or just an artifact of the simple algorithm, so you might actually be able to get Zsh fixed if there is some quadratic behavior that can be made linear with a hash table. The Zsh mailing list is quite accommodating in my experience.
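(For reference, a `.zshrc` fragment with the options named above, plus the standard ways to measure where startup time goes. The comments summarize each option's documented behavior; this is a configuration sketch, not a recommendation for any particular combination.)

```shell
# Dedup-related history options (zshoptions(1)):
setopt HIST_IGNORE_DUPS      # don't record an entry identical to the previous one
setopt HIST_IGNORE_ALL_DUPS  # when a command repeats, delete the older copy
setopt HIST_SAVE_NO_DUPS     # never write duplicate entries to $HISTFILE

# Coarse startup measurement: time a full interactive shell start.
#   time zsh -i -c exit
# Finer profiling: put `zmodload zsh/zprof` at the very top of .zshrc,
# then run `zprof` in the new shell to see per-function timings.
```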