cb321 | 4 days ago
That sounds way too long. Mine takes like 15 ms on a 2015 CPU, and I activate zsh-syntax-highlighting and new-style completion and everything, but yeah, oh-my-zsh often adds nutso overhead. Anyway, I suggest you profile your zsh start-up. Here's one copy-paste friendly way to do that:
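The actual snippet didn't survive the copy-paste; a minimal reconstruction in the same spirit, using `$EPOCHREALTIME` timestamps (the `profile_mark` name and the "compinit" labels are just for illustration):

```shell
# near the top of $ZDOTDIR/.zshrc (after zmodload zsh/datetime, see note below):
_zsh_start=$EPOCHREALTIME

profile_mark() {
  # print milliseconds elapsed since shell start, with a label;
  # awk does the float arithmetic so the snippet stays copy-paste portable
  awk -v now="$EPOCHREALTIME" -v t0="$_zsh_start" -v label="$1" \
      'BEGIN { printf "%8.1f ms  %s\n", (now - t0) * 1000, label }'
}

# then sprinkle marks around suspect sections of your config:
profile_mark 'before compinit'
# ... compinit, plugins, prompt setup, etc. ...
profile_mark 'after compinit'
```

Whichever section shows the biggest jump between two marks is where the time goes.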
(Note: for $EPOCHREALTIME to work you need a `zmodload zsh/datetime` somewhere early on. I might suggest the top of `$ZDOTDIR/.zshenv` for this kind of thing.)

Also, if something seems limited by "just parsing", you can usually speed that up a lot with `zcompile`. I do that with a `.zcompdump.zwc` and a `digraphs.zsh.zwc`.

EDIT: I noticed myself that a really large HISTSIZE (in the 100s of thousands, with the history actually filled to that limit) combined with de-duplication seems to be a bad combination. I just lowered my HISTSIZE and added a when-too-big spool-off to a longer-term history file for cold storage.
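For the `zcompile` part, a fragment along these lines in `.zshrc` keeps the compiled copies fresh (the paths are examples; zsh automatically uses `file.zwc` instead of `file` when sourcing, as long as the `.zwc` is newer):

```shell
# recompile when the .zwc is missing or older than its source;
# zsh then loads the pre-parsed .zwc instead of re-parsing the text
for f in "$ZDOTDIR/.zcompdump" "$ZDOTDIR/digraphs.zsh"; do
  if [[ -s $f && (! -s $f.zwc || $f -nt $f.zwc) ]]; then
    zcompile "$f"
  fi
done
```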
klibertp | 4 days ago
> I noticed myself that really large HISTSIZE

...right, I totally forgot that. Yeah, my history file is 4.5 MB, and $HISTSIZE is 1M. I even wrote a Scala app[1] some time ago to collect hist files from all my machines (I used many more than the current 2, at some point), merging and deduping them once a day. Adding to that, it's 13 years old at this point, and probably has quite a few KB of mis-pasted text files in it, so I guess it makes sense it's this large. It also makes sense that processing it takes a while, especially with deduping enabled.

I'll check, but if that's the reason, then I'd be reluctant to do anything with it. Having fzf search through all my command lines dating back to 2012 is very valuable. I'll see how that would work with spooling.

Thanks for the profiling tip, I'll check it out! As mentioned, I'm not thinking of jumping ship, so I'm willing to do some digging to make the situation better :)

[1] https://github.com/piotrklibert/zsh-merge-hist

EDIT: yeah, history is the reason:
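(The timing output itself got eaten by the formatting.) As for the spooling idea, a minimal sketch of the kind of spool-off described above; the function name, archive path, and threshold are all made up, and note that a naive line-based split can cut a multi-line EXTENDED_HISTORY entry in two at the boundary:

```shell
# keep only the newest $2 lines of history file $1, appending the
# overflow to $1.archive -- cold storage that fzf can still search
spool_history() {
  histfile=$1
  keep=$2
  archive=$histfile.archive
  lines=$(wc -l < "$histfile")
  if [ "$lines" -gt "$keep" ]; then
    # oldest entries go to the archive, newest stay in place
    head -n "$((lines - keep))" "$histfile" >> "$archive"
    tail -n "$keep" "$histfile" > "$histfile.tmp" &&
      mv "$histfile.tmp" "$histfile"
  fi
}

# e.g. from a daily cron job:
# spool_history "$HOME/.zsh_history" 50000
```

fzf can then be pointed at `cat "$HISTFILE" "$HISTFILE.archive"` to keep the full 2012-onwards search while zsh itself only loads and dedupes the recent portion.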