klibertp 4 days ago:

It takes 3.5 seconds for a new login shell to open on my laptop, which has a decent CPU and a fast SSD. I do have quite a few lines of config - around 2k SLOC of Zsh - but no oh-my-zsh and almost no plugins. Meanwhile, I have 22.3k SLOC of Emacs Lisp config, and Emacs starts up (granted, after AOT-compiling the bytecode to native code) in ~4 seconds. To me, that suggests there's something really wrong with Zsh in terms of performance. Unfortunately, it's better than Bash in almost every other way, so I've learned to live with it. Still, at least in my setup, Zsh really is slow, even on modern hardware. I wonder if it would even run on a 486...
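(A quick way to measure that kind of number, assuming a stock zsh on PATH, is to time a throwaway interactive login shell and compare it with one that skips the user startup files:)

    time zsh -lic exit      # interactive login shell: sources .zshenv, .zprofile, .zshrc, .zlogin
    time zsh -f -lic exit   # -f (NO_RCS) skips everything except /etc/zshenv, for comparison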
cb321 4 days ago:

That sounds way too long. Mine takes about 15 ms on a 2015 CPU, and I activate zsh-syntax-highlighting and new-style completion and everything - but yeah, oh-my-zsh often adds nutso overhead. Anyway, I suggest you profile your zsh start-up. Here's one copy-paste-friendly way to do that:

    (PS4='+$EPOCHREALTIME ' zsh -licx exit) 2> err
    era=$(grep '^+[1-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9].[0-9]*' < err |
          head -c6)   # -c6 here rounds to 100_000 seconds (eg +17483xxxyy)
    awk '/^\'${era}'[0-9][0-9][0-9][0-9][0-9]\.[0-9]*/{
           if (c) printf "%.06f %s\n", $1 - t0, c; t0 = $1; c = $0 }
         END { printf "%.06f %s\n", 0, c; }' < err | sort -g > startup-profile

(Note: for $EPOCHREALTIME to work you need a `zmodload zsh/datetime` somewhere early on. I might suggest the top of `$ZDOTDIR/.zshenv` for this kind of thing.)

Also, if something seems limited by "just parsing", you can usually speed that up a lot with `zcompile`. I do that with a `.zcompdump.zwc` and a `digraphs.zsh.zwc`.

EDIT: I noticed myself that a really large HISTSIZE (in the hundreds of thousands, with the limit actually reached) combined with de-duplication seems to be a bad combination. I just lowered my HISTSIZE, with a when-too-big spool-off to longer-term history / cold storage.
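(A minimal sketch of those two suggestions; the file names are just the examples mentioned above, so adjust paths to your setup:)

    # top of $ZDOTDIR/.zshenv: make $EPOCHREALTIME available to early startup code
    zmodload zsh/datetime

    # precompile large scripts/dumps into wordcode so zsh skips re-parsing them;
    # zsh reads foo.zwc automatically whenever it is newer than foo
    zcompile "$ZDOTDIR/.zcompdump"     # -> .zcompdump.zwc
    zcompile "$ZDOTDIR/digraphs.zsh"   # -> digraphs.zsh.zwc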
klibertp 4 days ago:

> I noticed myself that really large HISTSIZE

...right, I totally forgot about that. Yeah, my history file is 4.5MB, and $HISTSIZE is 1M. I even wrote a Scala app[1] some time ago to collect the history files from all my machines (I used many more than the current 2 at some point), merging and deduping them once a day. On top of that, the file is 13 years old by now and probably has quite a few KB of mis-pasted text files in it, so I guess it makes sense that it's this large - and that processing it takes a while, especially with deduping enabled. I'll check, but if that's the reason, I'd be reluctant to do anything about it: having fzf search through all my command lines dating back to 2012 is very valuable. I'll see how that would work with spooling.

Thanks for the profiling tip, I'll check it out! As mentioned, I'm not thinking of jumping ship, so I'm willing to do some digging to make the situation better :)

[1] https://github.com/piotrklibert/zsh-merge-hist

EDIT: yeah, history is the reason:

    -▶ time HISTFILE=/dev/null zsh -c 'echo $ERL_AFLAGS'   # variable from the end of my .zshrc
    -kernel shell_history enabled
    HISTFILE=/dev/null zsh -c 'echo $ERL_AFLAGS'  0,20s user 0,03s system 98% cpu 0,233 total
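(For what it's worth, a rough sketch of what the "when-too-big spool-off" mentioned above could look like - line-based and GNU-coreutils-specific, with a made-up 50000-entry threshold and archive name, so treat it as an illustration only:)

    # hypothetical: keep the newest 50000 lines in $HISTFILE, append the rest to an archive
    # (multi-line EXTENDED_HISTORY entries can be split at the cut, so this is approximate)
    if (( $(wc -l < "$HISTFILE") > 50000 )); then
      head -n -50000 "$HISTFILE" >> "$HISTFILE.archive"   # GNU head: everything but the last 50000 lines
      tail -n 50000 "$HISTFILE" > "$HISTFILE.new" && mv "$HISTFILE.new" "$HISTFILE"
    fi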
cb321 4 days ago:

In that case, since you are already de-duping "externally", you might play with combinations of `setopt HIST_IGNORE_ALL_DUPS HIST_IGNORE_DUPS HIST_SAVE_NO_DUPS`. It's been many years since I looked at it, but I think those options conspire with large saved histories to slow things down a lot at startup / the initial history parse. I don't even recall whether that cost is necessary or just an artifact of a simple algorithm. So you might actually be able to get Zsh fixed, if there is some quadratic thing that can be turned linear with a hash table. The Zsh mailing list is quite accommodating in my experience.
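(For reference, the three options named above are real zsh options, documented in zshoptions(1). Since the de-duplication already happens externally, one combination worth timing is simply turning them all off:)

    unsetopt HIST_IGNORE_DUPS      # when set: don't record a command identical to the previous event
    unsetopt HIST_IGNORE_ALL_DUPS  # when set: a repeated command removes the older duplicate from the list
    unsetopt HIST_SAVE_NO_DUPS     # when set: older duplicates are skipped when writing $HISTFILE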