A not-insignificant share of our PR traffic comes from Dependabot, even
with the grouped update feature (something like 20% of all PRs in total
are from Dependabot, last I checked).
We don't really need daily updates, and with the current crate
dependency graph we practically get updates *every* day. Bump it to
weekly instead to stem the tide a little.
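The change amounts to a one-word tweak in `.github/dependabot.yml`; a
sketch (the real file may cover more ecosystems and have grouping rules
not shown here):

```yaml
# Sketch only: the actual dependabot.yml likely lists additional
# ecosystems and group configuration beyond this fragment.
version: 2
updates:
  - package-ecosystem: "cargo"
    directory: "/"
    schedule:
      interval: "weekly"   # was "daily"
```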
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This uses `zlib-rs`, a native Rust library that is comparable in
performance to `zlib-ng`. Since there’s no complicated C build
and gitoxide only has one hashing backend now, this lets us drop our
`packaging` feature without adding any awkward build requirements.
`zlib-rs` is generally faster at decompression than
`zlib-ng`, and faster at compression on levels 6 and 9; see
<https://trifectatech.org/blog/zlib-rs-is-faster-than-c/>
for details.
I couldn’t get reliable‐looking benchmark results out of my
temperamental laptop; `hyperfine` seemed to think that some random
`jj` workloads I tested might be slightly slower than with `zlib-ng`,
but it wasn’t unambiguously distinguishable from noise, so I’d
like to see measurements from others.
It’s certainly a lot faster than the previous default, and I
think it’s likely that `zlib-rs` will continue to get faster
and that it’s more than worth avoiding the headaches of a native
library with a CMake build dependency. (Though on the other hand,
if distributions move in the direction of shipping `zlib-ng` by
default, maybe there will be more motivation to make `libz-ng-sys`
support system libraries.)
I think this check will only be useful if it actually blocks the
build. Since the resolution is simple (just bump the limit), I think
it’s okay to add a small amount of friction and ask people to take
a moment to consider other options first.
Follow-up to b1bb5e1
This creates a `.github/scripts/count-cargo-lock-packages` script to
count packages with our methodology that one can run outside CI.
I also renamed the check so that it's clearer what it does.
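For illustration, a sketch of how such a counting script might work; the
real script's methodology may differ, and both the `count_packages`
helper and the `500` limit here are made up:

```shell
#!/bin/sh
# Hypothetical sketch: count [[package]] stanzas in a Cargo.lock file,
# roughly what a count-cargo-lock-packages script might do.
count_packages() {
    grep -c '^\[\[package\]\]' "$1"
}

# Hypothetical threshold; the real check's limit lives in CI config.
LIMIT=500

check_limit() {
    n=$(count_packages "$1") || return 1
    if [ "$n" -gt "$LIMIT" ]; then
        echo "too many packages: $n > $LIMIT" >&2
        return 1
    fi
    echo "$n packages (limit $LIMIT)"
}
```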
After some discussion on Discord yesterday, Emily floated this idea to
have a check that fails if `Cargo.lock` has too many dependencies, where
"too many" means "more than a random number I made up that sounds good."
This implements that, as a non-required check, and to do so it invokes
the power of an extremely evil and annoying Dragon. We could ask this
Dragon to do other things too, I suppose (pending future contract
negotiations).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
As discussed several times before, we want to restrict permission to
approve PRs to the maintainers only. This patch adds a GitHub
CODEOWNERS file for that purpose. Once this has been merged, I'm going
to update the rulesets to make PRs require approval from a
maintainer.
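A `.github/CODEOWNERS` along these lines would do it; the team name
here is a guess, and the real file may be more granular:

```
# Hypothetical CODEOWNERS fragment; the actual maintainer team name
# may differ from @jj-vcs/maintainers.
* @jj-vcs/maintainers
```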
As with the CI runners, I've occasionally seen spikes where binary
builds on 'main' were failing. Bump the timeout a little to avoid that.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
We're well under the 15-minute limit in the p90 case at this point, but it
seems like the p99 build tends to run into ~10-minute spikes due to underlying
runner anomalies (oversaturation?).
Let's go ahead and use a timeout of 20 minutes to give it a little more slack.
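The knob in question, sketched as a workflow fragment (the job name is
a placeholder):

```yaml
jobs:
  build:
    # 20 minutes gives the occasional p99 runner spike some slack.
    timeout-minutes: 20
```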
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This might be the cause of Dependabot PRs not having auto-merge
enabled. Let's just revert it and see what happens when the next PRs
roll in.
This backs out commit 47cd10669de28ecc36f0d7dbbb9964945124b730.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This shaves over 3½ minutes off the tests, taking this from the
slowest job to faster than Windows again. The macOS 13 builders
presumably won’t be around forever, and the newer free builders are
all Apple Silicon, but this is an easy substantial improvement for now.
This shaves off something like 20 to 40 seconds for Linux. It’s
seemingly within the margin of error for Windows, so not sure if
we’ll want to keep it there.
This configuration is for the CI builders, so by moving it to another
file we can add more opinionated things without risking breaking
builds for anyone else.
Just using the fastest platform should be fine for this. Hopefully it
shouldn’t slow down CI too much since it’s an independent build
job and only temporary, though a potential alternative would be to
just check the build instead. (It wouldn’t catch build regressions
in the test code, though.)
It turns out that merge queues bundle auto‐merge for free, so we
can just do this the old way.
This backs out commit eba4257ab9a0268460108c234dc0d046efcd6c33.
Ideally we’d just cancel workflows entirely when they’re ejected
from the merge queue but apparently that’s kinda hard for some
inexplicable reason. This should be a cheap and harmless win. I
guess it gives you slightly less information if something fails
because of changes in the queue but you can just rebase your PR to
get everything running.
I don’t think `github.event.merge_queue.head_ref` actually does
anything useful, but I’m not sure if `github.ref` works for merge
groups and it seems better to have a fallback than not. I’d really
like to cancel PR workflows when a PR enters the merge queue, and
cancel queue workflows when a PR is ejected from the merge queue,
but apparently this simple task is high‐level wizardry when you’re
using GitHub Actions, so I’m postponing that for later.
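The concurrency group described above would look roughly like this
(the group key mirrors the text; as noted, whether
`github.event.merge_queue.head_ref` is ever populated is unclear,
hence the fallback):

```yaml
concurrency:
  # Fall back to github.ref when the merge-queue ref is empty/unset.
  group: ${{ github.workflow }}-${{ github.event.merge_queue.head_ref || github.ref }}
  cancel-in-progress: true
```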
This was running all the checks twice for pushes to pull requests,
unnecessarily limiting our concurrency and clogging up the status
report. The only benefit is if someone is pushing to a branch that
they don’t have a PR for and waiting for the checks to run. I
suspect nobody is doing this with regularity, but if it turns out
the functionality is important, we could just ask people to add this
back to the `.github/workflows/ci.yml` on the branches they want
GitHub to test, or add logic to try and cancel `push` runs that match
`pull_request` ones.
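Concretely, the trigger set this leaves behind is roughly (a sketch;
any branch filters in the real workflow are omitted):

```yaml
on:
  # No bare `push` trigger: pushes to PR branches already run via
  # pull_request, and the merge queue runs via merge_group.
  pull_request:
  merge_group:
```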
This reports the status of the checks that are currently set as
required in the repository configuration, but only in the merge
queue. The advantages are twofold:
1. We can adjust required checks with our normal CI process rather
than having to bug Martin for it. (This doesn’t really place any
more trust in anyone than we do currently, because a malicious PR
could always just replace the jobs with ones that unconditionally
succeed anyway.)
2. We can make the job only ever fail in the merge queue. Currently,
we can only submit a PR into the merge queue after all its checks
pass. Those checks then immediately get run again in the merge
queue. If you do one final fix‐up to an approved PR, it takes
half an hour after that to merge instead of fifteen minutes. We
make this less painful by using auto‐merges, but it’s silly
to block on the PR checks when the actual guarantees are provided
by the merge queue.
Unfortunately GitHub demands the same jobs be required for putting
something into the merge queue and taking it out. We can work
around this by only requiring the `required-checks` job and having
it report its status depending on the context.
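A sketch of what such a context-dependent job might look like; the
`build`/`test` job names are placeholders, and the real workflow's
dependency list and conditions may differ:

```yaml
required-checks:
  # Placeholder needs list; the real job would depend on every check
  # we want enforced in the merge queue.
  needs: [build, test]
  if: always()
  runs-on: ubuntu-latest
  steps:
    - name: Report status
      run: |
        # Outside the merge queue, succeed unconditionally so PR
        # checks never block; inside it, fail if any dependency did.
        if [ "${{ github.event_name }}" != "merge_group" ]; then
          exit 0
        fi
        if [ "${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}" = "true" ]; then
          exit 1
        fi
```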
Tasks for Martin:
1. Go to <https://github.com/jj-vcs/jj/settings> and disable “Allow
auto-merge”. Auto-merge would now only block on PR approval /
discussion resolution, so it’s probably more confusing than helpful;
I’ve
hit the button and then come back an hour later to discover that
I forgot to resolve discussions.
2. Go to <https://github.com/jj-vcs/jj/settings/rules/3400426> and
replace all of the required GitHub Actions checks under “Require
status checks to pass” with the single `required-checks` check.
3. While you’re at it, go to
<https://github.com/jj-vcs/jj/settings/actions> and ensure that
“Workflow permissions” is set to “Read repository contents
and packages permissions”. We already handle this correctly in
all our workflows but the default is to allow write permissions
for legacy reasons.