This patch replaces the single-char L type aliases with P, and renames the L aliases
where that makes sense. Many of the P aliases will be removed later by
introducing a generic wrap<T>() trait.
I think this was a remainder of the old design where wrap_*() functions took
&TemplateLanguage as the self argument. Since the wrapper type implements accessor
functions like .try_into_boolean(), it makes sense that the same type implements
::wrap_boolean(), etc.
These .wrap_<T>() functions will become generic over T.
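As a rough sketch of that direction (all type names here are hypothetical stand-ins, not jj's actual definitions), a single generic trait can subsume the per-type methods:

```rust
// Hypothetical reduction of the idea: one generic wrap() entry point
// replacing the per-type wrap_boolean(), wrap_integer(), ... methods.
struct BoolProperty(bool);
struct IntProperty(i64);

enum Wrapped {
    Boolean(BoolProperty),
    Integer(IntProperty),
}

trait WrapProperty<T> {
    fn wrap(property: T) -> Self;
}

impl WrapProperty<BoolProperty> for Wrapped {
    fn wrap(property: BoolProperty) -> Self {
        Wrapped::Boolean(property)
    }
}

impl WrapProperty<IntProperty> for Wrapped {
    fn wrap(property: IntProperty) -> Self {
        Wrapped::Integer(property)
    }
}

fn main() {
    // Call sites no longer choose a per-type method by hand:
    let _ = Wrapped::wrap(BoolProperty(true));
    let _ = Wrapped::wrap(IntProperty(42));
}
```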
It seems like a small usability improvement if users don't need to
enter the "$schema" link manually when they create a new config file.
This doesn't help existing users.
This change, from an enum to a struct, more accurately represents
how a ConfigPath actually works; additionally, it lets us add
different information without modifying every single variant of the enum.
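A minimal before/after sketch of the shape of this change (field names are illustrative, not the actual jj definitions):

```rust
use std::path::PathBuf;

// Before: every variant repeats the payload, and adding information
// means touching every variant.
enum ConfigPathOld {
    Existing(PathBuf),
    Missing(PathBuf),
}

// After: the path is always present, and new fields can be added
// without rippling through every match.
struct ConfigPath {
    path: PathBuf,
    exists: bool,
}

fn main() {
    let _old = ConfigPathOld::Missing(PathBuf::from("config.toml"));
    let _new = ConfigPath {
        path: PathBuf::from("config.toml"),
        exists: false,
    };
}
```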
I'm trying to refactor the property wrapping functions, and noticed that it's odd
that .wrap_<property>() does boxing internally whereas .wrap_template() doesn't.
Also, it sometimes makes sense to turn a property into a trait object earlier. For
example, we can deduplicate L::wrap_boolean() in build_binary_operation().
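A hedged mini-model of the inconsistency (hypothetical types; the real code is jj's template machinery):

```rust
// wrap_boolean() boxes internally, while wrap_template() already
// takes a trait object.
trait Property {}
trait Template {}

enum Wrapped {
    Boolean(Box<dyn Property>),
    Template(Box<dyn Template>),
}

impl Wrapped {
    // Boxes internally: a caller that already holds a Box<dyn Property>
    // cannot pass it along without double-boxing.
    fn wrap_boolean(property: impl Property + 'static) -> Self {
        Wrapped::Boolean(Box::new(property))
    }

    // Takes the box: the property can become a trait object earlier,
    // so duplicated call sites can share one code path.
    fn wrap_template(template: Box<dyn Template>) -> Self {
        Wrapped::Template(template)
    }
}

struct BoolLiteral;
impl Property for BoolLiteral {}
struct Text;
impl Template for Text {}

fn main() {
    let _ = Wrapped::wrap_boolean(BoolLiteral);
    let _ = Wrapped::wrap_template(Box::new(Text));
}
```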
This also adds a test case for the completion of arguments following
multi-argument aliases, to cover the bug reported in issue #5377.
The default command is like a special kind of alias, one which is
expanded from zero tokens. Consequently, it also triggers bug
#5377 on bash/zsh, even if the `default-command` is just a single token.
The fix is along the lines sketched out by the "TODO" comment. Bash and
Zsh don't behave identically, so the padding ([""]) still needs to be
applied (and removed) conditionally in a disciplined manner.
The completion mechanism works differently in different shells:
For example, when the command line `jj aaa bb ccc` is completed at the
end of the `bb` token, bash and zsh pass the completer the whole line
`-- jj aaa bb ccc` and an index of 2 which refers to the `bb` token;
they are then expected to complete `bb`. Meanwhile, fish and PowerShell
only pass the command up to the completion point, so `-- jj aaa bb`;
the completer is always expected to complete the last token. In all
cases, the shell ultimately decides what to do with the completions,
e.g. to complete up to a common prefix (bash), to show an interactive
picker (zsh, fish), or to insert the completion if it is the only one
(all shells). Remaining tokens (`ccc`) are also always appended by the
shell and not part of the completion.
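A simplified model of the two flavors described above (illustrative only; the real handling lives in clap_complete and `CommandEnv::try_complete()`, and the leading `--` separator is omitted here for brevity):

```rust
struct CompletionRequest<'a> {
    args: Vec<&'a str>,
    // Some(index) for bash/zsh (full line plus the index of the token
    // to complete); None for fish/PowerShell (line truncated at the
    // cursor, always complete the last token).
    index: Option<usize>,
}

fn token_to_complete<'a>(req: &CompletionRequest<'a>) -> &'a str {
    match req.index {
        Some(i) => req.args[i],
        None => req.args.last().copied().unwrap_or(""),
    }
}

fn main() {
    // `jj aaa bb ccc` completed at the end of `bb`:
    let bash = CompletionRequest {
        args: vec!["jj", "aaa", "bb", "ccc"],
        index: Some(2),
    };
    let fish = CompletionRequest {
        args: vec!["jj", "aaa", "bb"],
        index: None,
    };
    assert_eq!(token_to_complete(&bash), "bb");
    assert_eq!(token_to_complete(&fish), "bb");
}
```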
While this is mostly handled by the clap_complete crate, we do expand
aliases and present clap_complete with a fake view of the real command
line. This reaches into clap_complete's internals: `CommandEnv::try_complete()`
wraps the interface between the completion shell script provided by
clap_complete and its Rust code. If we get this wrong,
completion might yield unexpected results, so it is worth testing
completion for both flavors of shells whenever aliases are potentially
in the mix.
To avoid redundancy, the shell-specific invocation of `jj` is factored
into a `complete_at()` function of the test fixture. The `test-case`
crate is then used to instantiate each test case for different values of
clap_complete's `Shell` enum.
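The resulting test shape might look roughly like this (the `complete_at()` signature is an assumption; only its name comes from the actual fixture):

```rust
use clap_complete::Shell;
use test_case::test_case;

// Stand-in for the fixture's complete_at(); the real helper invokes
// `jj` with the shell-specific completion environment and arguments.
fn complete_at(_shell: Shell, _line: &str, _index: usize) -> Vec<String> {
    Vec::new()
}

#[test_case(Shell::Bash)]
#[test_case(Shell::Zsh)]
#[test_case(Shell::Fish)]
#[test_case(Shell::PowerShell)]
fn completes_after_multi_token_alias(shell: Shell) {
    let completions = complete_at(shell, "jj my-alias ", 2);
    // The alias expansion must not shift which token gets completed.
    assert!(completions.iter().all(|c| !c.is_empty()));
}
```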
filter
bash/zsh specific behavior
move impl
The `CliRunner` lets a custom binary add processing that should happen
before running a command. This patch replaces that with a hook that can
do processing before and/or after. I thought I would want to use this
for adding telemetry (such as timings) to our custom binary at
Google. I ended up adding that logging outside of `CliRunner::run()`
instead, so it gives a more accurate timing of the whole invocation. I
think this patch is still an improvement, as it's more generic than
the start hook.
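A rough sketch of the difference (hypothetical signatures, not the actual `CliRunner` API):

```rust
type CommandResult = Result<(), String>;

// Before: a start hook could only run setup code. After: the hook
// wraps the command and may do processing before and/or after it.
fn run_with_hook<F, C>(hook: F, command: C) -> CommandResult
where
    F: FnOnce(&dyn Fn() -> CommandResult) -> CommandResult,
    C: Fn() -> CommandResult,
{
    hook(&command)
}

fn main() {
    let result = run_with_hook(
        |run| {
            // Processing before...
            let start = std::time::Instant::now();
            let result = run();
            // ...and after, e.g. telemetry such as timings.
            eprintln!("command took {:?}", start.elapsed());
            result
        },
        || Ok(()),
    );
    result.unwrap();
}
```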
In the builtin diff editor, we materialize conflicts, so we need to parse them
back to reproduce the original (or partially-resolved) contents. OTOH, the
merge editor should write the merged contents transparently.
This change also revealed that binary hunks wouldn't be processed correctly in
the merge editor.
make_diff_files() will become an async function that uses materialized_diff_stream()
internally. apply_diff_builtin() will take callbacks to handle binary/conflict
files.
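A hedged sketch of what that could look like (names and signatures are assumptions):

```rust
struct FileEntry {
    contents: Vec<u8>,
    is_binary: bool,
    has_conflict: bool,
}

struct DiffCallbacks<'a> {
    // Diff editor: parse materialized conflict markers back into the
    // original (or partially-resolved) contents.
    on_conflict: &'a dyn Fn(&FileEntry) -> Vec<u8>,
    // Merge editor: pass merged contents through; binary hunks must
    // not go through text materialization at all.
    on_binary: &'a dyn Fn(&FileEntry) -> Vec<u8>,
}

fn apply_diff_builtin(files: &[FileEntry], callbacks: &DiffCallbacks<'_>) -> Vec<Vec<u8>> {
    files
        .iter()
        .map(|file| {
            if file.is_binary {
                (callbacks.on_binary)(file)
            } else if file.has_conflict {
                (callbacks.on_conflict)(file)
            } else {
                file.contents.clone()
            }
        })
        .collect()
}

fn main() {
    let files = vec![FileEntry {
        contents: b"text".to_vec(),
        is_binary: false,
        has_conflict: false,
    }];
    let callbacks = DiffCallbacks {
        on_conflict: &|f: &FileEntry| f.contents.clone(),
        on_binary: &|f: &FileEntry| f.contents.clone(),
    };
    let _ = apply_diff_builtin(&files, &callbacks);
}
```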
If `git.fetch` contains remotes that are not available, we currently error even
if other remotes are available. For common fork workflows with separate
`upstream` and `origin` remotes (for example), this requires a user to either
set both remotes in their user config and override single-remote repos or set
only one in their user config and override all multi-remote repos to fetch from
`upstream` (or both).
This change updates fetching to only *warn* about unknown remotes **if** other
remotes are available. If none of the configured remotes are available, an error
is still raised as before.
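A minimal sketch of the new rule (function and names are illustrative, not the actual implementation):

```rust
// Unknown remotes only degrade to a warning when at least one
// configured remote actually exists; otherwise we still error.
fn resolve_fetch_remotes(
    configured: &[String],
    available: &[String],
) -> Result<Vec<String>, String> {
    let (known, unknown): (Vec<_>, Vec<_>) = configured
        .iter()
        .cloned()
        .partition(|r| available.contains(r));
    if known.is_empty() {
        return Err(format!("no matching remotes: {unknown:?}"));
    }
    for remote in &unknown {
        eprintln!("warning: skipping unknown remote {remote}");
    }
    Ok(known)
}

fn main() {
    let configured = vec!["upstream".to_string(), "origin".to_string()];
    let available = vec!["origin".to_string()];
    // Fetches from origin and warns about upstream:
    assert_eq!(resolve_fetch_remotes(&configured, &available).unwrap(), vec!["origin"]);
}
```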
When we ask the user to provide a commit description, we currently
write a file to `.jj/repo/` with the draft description and then pass
that to the editor. If the editor exits with an error status, we leave
the file in place and tell the user about the path so they can recover
the description. I'm not sure I've ever used one of these files. I
have certainly never used a file that's not from the most recent
edit. I have, however, cleaned up old such files. This patch changes
the code so we write them to /tmp instead, so we get the cleanup for
free.
A pattern has emerged where integration tests check for the
availability of an external tool (`git`, `taplo`, `gpg`, ...) and skip
the test (by simply passing it) when it is not available. To check this,
the program is run with the `--version` flag.
Some tests require that the program be available at least when running
in CI, by calling `ensure_running_outside_ci` conditionally on the
outcome. The decision is up to each test, though; the utility merely
returns a `bool`.
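The pattern boils down to something like this (the helper name is illustrative; `ensure_running_outside_ci` is the utility mentioned above, with an assumed signature):

```rust
use std::process::Command;

// Probe the external tool by running it with --version.
fn external_tool_available(program: &str) -> bool {
    Command::new(program)
        .arg("--version")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

// Example use in a test:
// if !external_tool_available("taplo") {
//     ensure_running_outside_ci("taplo must be in the CI image");
//     return; // skip the test by passing it
// }
```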
The `CommandNameAndArgs` struct is used in multiple places to specify
external tools. Previously, the schema only allowed for this in
`ui.pager`.
This commit adds a few sample configs which define various editors and
fix tools as commands with env vars.
The schema has also been updated to make these valid.
Not sure if that is intentional or should rather be considered a bug,
but currently the "structured" option for specifying an external tool
requires that both the "command" and the "env" keys are
specified. The "command" key makes sense; for the "env" key it came as
a surprise, but it can be argued that this form should only be used when
environment variables need to be specified and the plain string or array
form should be used otherwise.
Either way, the schema did not accurately reflect the current behavior;
now it does. Two sample configs have been added as schema test cases.
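As an illustrative reduction (assuming the serde and toml crates; the real `CommandNameAndArgs` lives in jj's config code and differs in detail), the observed behavior matches an untagged enum whose structured variant requires both keys:

```rust
use std::collections::HashMap;

use serde::Deserialize;

// Required vs. defaulted fields here follow the observed behavior
// described above, not the actual jj source.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum CommandNameAndArgs {
    String(String),
    Vec(Vec<String>),
    Structured {
        command: Vec<String>,
        // Currently required in the structured form: omitting it makes
        // deserialization fall through all variants and fail.
        env: HashMap<String, String>,
    },
}

fn main() {
    // Both keys present: the structured variant matches.
    let ok: Result<CommandNameAndArgs, _> =
        toml::from_str("command = [\"meld\"]\nenv = { LANG = \"C\" }");
    assert!(ok.is_ok());

    // "env" missing: no variant matches, mirroring the errors below.
    let err: Result<CommandNameAndArgs, _> = toml::from_str("command = [\"meld\"]");
    assert!(err.is_err());
}
```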
Anytime an external tool is referenced in the config, the command can be
provided as a string or as a token array. In the latter case, the array
must not be empty; at least the command name must be provided.
The schema didn't previously object to an empty array, though; this has
now been rectified. I've added more sample configs to cover this case.
Those same configs can also be used to illustrate that this is indeed
jj's current behavior:
```sh
$ jj --config-file cli/tests/sample-configs/invalid/ui.pager_empty_array.toml show
Config error: Invalid type or value for ui.pager
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/ui.pager.command_empty_array.toml show
Config error: Invalid type or value for ui.pager
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/ui.editor_empty_array.toml config edit --user
Config error: Invalid type or value for ui.editor
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/ui.diff-editor_empty_array.toml split
Error: Failed to load tool configuration
Caused by:
1: Invalid type or value for ui.diff-editor
2: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/ui.merge-editor_empty_array.toml resolve
Error: Failed to load tool configuration
Caused by:
1: Invalid type or value for ui.merge-editor
2: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/ui.diff.tool_empty_array.toml diff
Config error: Invalid type or value for ui.diff.tool
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
$ jj --config-file cli/tests/sample-configs/invalid/fix.tools.command_empty_array.toml fix
Config error: Invalid type or value for fix.tools.black
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
in `command`
```
As a notable exception, `ui.default-command` *is* allowed to be an empty
array. In that case, `jj` will print a usage message. This is also
covered by a valid sample config.
While `ui.pager` can be a string which will be tokenized on whitespace,
an argument token array, or a command/env table, the `command` key
within that table currently must be an array. The schema previously
explicitly also allowed it to be a string but that does not actually
work, as exemplified by running:
```sh
$ jj --config-file cli/tests/sample-configs/invalid/ui.pager_command-env_string.toml config list
Config error: Invalid type or value for ui.pager
Caused by: data did not match any variant of untagged enum CommandNameAndArgs
```
`CommandNameAndArgs` should potentially be changed to allow strings.
For now, the schema has been updated to reflect the status quo. A new
sample toml has been added to the `invalid` directory to cover this;
prior to updating the schema, this new test case failed. Once the
behavior is changed to allow strings, the file merely needs to be moved
to `valid`.
These are two more instances where the default values were wrong and in
fact not even consistent with the schema itself.
I've found these by running
```sh
jq -r 'paths(type == "object" and has("default")) as $p | getpath($p).default | tojson as $v | $p | map("\"\(select(. != "properties"))\"") | join(".") as $k | "\($k) = \($v)"' cli/src/config-schema.json | taplo check --schema=file://$PWD/cli/src/config-schema.json -
```
which uses `jq` to filter the default values from the schema definition
to create a rudimentary TOML file containing all the defaults according
to the schema, and then uses `taplo` to validate this TOML against the
schema.
This approach could be developed further to also parse the intermediate
TOML file and compare the result with the default config (from parsing
an empty config). That would not only test for self-consistency of the
schema's proclaimed defaults but also for consistency with the actual
defaults as assumed by jj.
Adds a bunch of additional sample config toml files. Via the
`datatest_runner`, these each correspond to a test case to check that
the toml is correctly (in-)validated according to the schema.
The `valid/*.toml` files typically define multiple related config
options at once. Where there's some overlap with the default configs in
`cli/src/config`, the aim was to choose different allowed values, e.g.
hex colors, file size in bytes (numeric), etc.
The `invalid/*.toml` files typically only define a single offending
property so as not to obscure individual false negatives. All of the
"invalid" files are still valid toml as the aim is not to test the
`toml_edit` crate or Taplo.
The sample files all contain a Taplo schema directive. This allows them
to be validated against the schema on the fly by Taplo's LSP and derived
IDE plugins to speed up editing and immediately highlight offending
options.
Closes #5695.
The `datatest-stable` crate makes it possible to dynamically instantiate
test cases based on available files. This is applied to `test_config_schema` to
create one test case per config file. As a case in point, the test case
for `hints.toml` was previously missing, hence the total number of tests
is up by one.
This will become useful when adding more config examples to somewhat
exhaust the schema.
`datatest-stable` uses a custom test harness and thus cannot be used in
the same integration test binary that all of the other test modules run
in. However, if data-driven tests are to be used for other applications,
they can share the same binary, so the module structure is already
set up to mirror the central "runner" approach.
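A minimal sketch of the wiring (paths and names are illustrative; the test target needs `harness = false` in Cargo.toml):

```rust
use std::path::Path;

// One test case is instantiated per file matching the pattern below.
fn check_config_schema(path: &Path) -> datatest_stable::Result<()> {
    let _contents = std::fs::read_to_string(path)?;
    // Validate the file against cli/src/config-schema.json here and
    // assert success/failure depending on the valid/invalid directory.
    Ok(())
}

datatest_stable::harness!(check_config_schema, "tests/sample-configs", r"^.*\.toml$");
```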