Mirror of https://github.com/nushell/nushell.git (synced 2025-05-05 23:42:56 +00:00)
87 Commits
Author | SHA1 | Message | Date
---|---|---|---
|
0ca5c2f135
|
Add cat and get-content to open 's search terms (#15643)
# Description
A friend of mine started using Nushell on Windows and wondered why the `cat` command wasn't available. I told him he could use `help -f` or F1 to find the command, but then we both realized that neither `cat` nor `Get-Content` were part of `open`'s search terms. So I added them.
# User-Facing Changes
None.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib` |
||
|
e10ac2ede6
|
fix: command open sets default flags when calling "from xxx" converters (#15383)
Fixes #13722
# Description
Simple solution: `eval_block` -> `eval_call` with empty arguments.
# User-Facing Changes
Should be none.
# Tests + Formatting
+1 |
||
|
c5a14bb8ff
|
check signals in nu-glob and ls (#15140)
Fixes #10144
# User-Facing Changes
Long-running glob expansions and `ls` runs (e.g. `ls /**/*`) can now be interrupted with ctrl-c. |
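For readers curious what "checking signals" amounts to in practice, here is a minimal, self-contained sketch of the pattern; the flag type mirrors the `Arc<AtomicBool>` nushell uses for ctrl-c, but the function and names are illustrative, not the actual nu-glob code.
```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

/// Illustrative only: walk a list of work items (think directory entries),
/// bailing out early if an interrupt flag set by a ctrl-c handler is raised.
fn walk_with_interrupt(items: &[&str], interrupt: &Arc<AtomicBool>) -> Vec<String> {
    let mut out = Vec::new();
    for item in items {
        // Poll the flag on every iteration so ctrl-c takes effect promptly,
        // even in the middle of a huge expansion like `ls /**/*`.
        if interrupt.load(Ordering::Relaxed) {
            break;
        }
        out.push((*item).to_string());
    }
    out
}

fn main() {
    let interrupt = Arc::new(AtomicBool::new(false));
    let results = walk_with_interrupt(&["a", "b", "c"], &interrupt);
    println!("{results:?}");
}
```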
||
|
bda3245725
|
More precise ErrorKind::NotFound errors (#15149)
This PR adds two new variants to `ErrorKind`, `FileNotFound` and `DirectoryNotFound`, along with a `not_found_as` method on `ErrorKind` to easily produce the more specific `NotFound` errors. I also updated the places I could think of to use these new variants, and the message for `NotFound` is no longer "Entity not found" but "Not found", which is less strange.
closes #15142
closes #15055 |
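As a rough sketch of the idea (the names mirror the description above, but the enum shape and method signature here are assumptions, not the real nushell API): a generic `NotFound` can be upgraded to a more specific variant while every other kind passes through unchanged.
```rust
/// Illustrative sketch only; the real ErrorKind in nushell covers more cases
/// and integrates with std::io::ErrorKind.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ErrorKind {
    NotFound,
    FileNotFound,
    DirectoryNotFound,
    PermissionDenied,
}

impl ErrorKind {
    /// Upgrade a generic NotFound into a more specific variant; every other
    /// kind is passed through unchanged.
    fn not_found_as(self, specific: ErrorKind) -> ErrorKind {
        match self {
            ErrorKind::NotFound => specific,
            other => other,
        }
    }
}

fn main() {
    let file = ErrorKind::NotFound.not_found_as(ErrorKind::FileNotFound);
    assert_eq!(file, ErrorKind::FileNotFound);

    let dir = ErrorKind::NotFound.not_found_as(ErrorKind::DirectoryNotFound);
    assert_eq!(dir, ErrorKind::DirectoryNotFound);

    // Anything that is not NotFound keeps its original kind.
    let denied = ErrorKind::PermissionDenied.not_found_as(ErrorKind::FileNotFound);
    assert_eq!(denied, ErrorKind::PermissionDenied);
}
```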
||
|
2f6b4c5e9b
|
bump the rust toolchain to 1.83.0 (#15148)
# Description
This PR bumps the rust toolchain to 1.83.0 and fixes a clippy lint. We do this because Rust 1.85.0 was released today, and we try to stay 2 versions behind. |
||
|
f04db2a7a3
|
Swap additional context and label in I/O errors (#14954)
# Description
Tweaks the error style for the I/O errors introduced in #14927. Moves the additional context below the text that says "I/O error", and always shows the error kind in the label. (Before/after screenshot comparison, with and without additional context, omitted.)
# User-Facing Changes
N/A, as this is a follow-up to #14927, which has not been included in a release.
# Tests + Formatting
N/A
# After Submitting
N/A
---------
Co-authored-by: Piepmatz <git+github@cptpiepmatz.de> |
||
|
13d5a15f75
|
Run-time pipeline input typechecking tweaks (#14922)
<!-- if this PR closes one or more issues, you can automatically link the PR with them by using one of the [*linking keywords*](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword), e.g. - this PR should close #xxxx - fixes #xxxx you can also mention related issues, PRs or discussions! --> # Description <!-- Thank you for improving Nushell. Please, check our [contributing guide](../CONTRIBUTING.md) and talk to the core team before making major changes. Description of your pull request goes here. **Provide examples and/or screenshots** if your changes affect the user experience. --> This PR makes two changes related to [run-time pipeline input type checking](https://github.com/nushell/nushell/pull/14741): 1. The check which bypasses type checking for commands with only `Type::Nothing` input types has been expanded to work with commands with multiple `Type::Nothing` inputs for different outputs. For example, `ast` has three input/output type pairs, but all of the inputs are `Type::Nothing`: ``` ╭───┬─────────┬────────╮ │ # │ input │ output │ ├───┼─────────┼────────┤ │ 0 │ nothing │ table │ │ 1 │ nothing │ record │ │ 2 │ nothing │ string │ ╰───┴─────────┴────────╯ ``` Before this PR, passing a value (which would otherwise be ignored) to `ast` caused a run-time type error: ``` Error: nu:🐚:only_supports_this_input_type × Input type not supported. ╭─[entry #1:1:6] 1 │ echo 123 | ast -j -f "hi" · ─┬─ ─┬─ · │ ╰── only nothing, nothing, and nothing input data is supported · ╰── input type: int ╰──── ``` After this PR, no error is raised. This doesn't really matter for `ast` (the only other built-in command with a similar input/output type signature is `cal`), but it's more logically consistent. 2. Bypasses input type-checking (parse-time ***and*** run-time) for some (not all, see below) commands which have both a `Type::Nothing` input and some other non-nothing `Type` input. This is accomplished by adding a `Type::Any` input with the same output as the corresponding `Type::Nothing` input/output pair. This is necessary because some commands are intended to operate on an argument with empty pipeline input, or operate on an empty pipeline input with no argument. This causes issues when a value is implicitly passed to one of these commands. I [discovered this issue](https://discord.com/channels/601130461678272522/615962413203718156/1329945784346611712) when working with an example where the `open` command is used in `sort-by` closure: ```nushell ls | sort-by { open -r $in.name | lines | length } ``` Before this PR (but after the run-time input type checking PR), this error is raised: ``` Error: nu:🐚:only_supports_this_input_type × Input type not supported. ╭─[entry #1:1:1] 1 │ ls | sort-by { open -r $in.name | lines | length } · ─┬ ──┬─ · │ ╰── only nothing and string input data is supported · ╰── input type: record<name: string, type: string, size: filesize, modified: date> ╰──── ``` While this error is technically correct, we don't actually want to return an error here since `open` ignores its pipeline input when an argument is passed. This would be a parse-time error as well if the parser was able to infer that the closure input type was a record, but our type inference isn't that robust currently, so this technically incorrect form snuck by type checking until #14741. However, there are some commands with the same kind of type signature where this behavior is actually desirable. 
This means we can't just bypass type-checking for any command with a `Type::Nothing` input. These commands operate on true `null` values, rather than ignoring their input. For example, `length` returns `0` when passed a `null` value. It's correct, and even desirable, to throw a run-time error when `length` is passed an unexpected type. For example, a string, which should instead be measured with `str length`: ```nushell ["hello" "world"] | sort-by { length } # => Error: nu:🐚:only_supports_this_input_type # => # => × Input type not supported. # => ╭─[entry #32:1:10] # => 1 │ ["hello" "world"] | sort-by { length } # => · ───┬─── ───┬── # => · │ ╰── only list<any>, binary, and nothing input data is supported # => · ╰── input type: string # => ╰──── ``` We need a more robust way for commands to express how they handle the `Type::Nothing` input case. I think a possible solution here is to allow commands to express that they operate on `PipelineData::Empty`, rather than `Value::Nothing`. Then, a command like `open` could have an empty pipeline input type rather than a `Type::Nothing`, and the parse-time and run-time pipeline input type checks know that `open` will safely ignore an incorrectly typed input. That being said, we have a release coming up and the above solution might take a while to implement, so while unfortunate, bypassing input type-checking for these problematic commands serves as a workaround to avoid breaking changes in the release until a more robust solution is implemented. This PR bypasses input type-checking for the following commands: * `load-env`: can take record of envvars as input or argument * `nu-check`: checks input string or filename argument * `open`: can take filename as input or argument * `polars when`: can be used with input, or can be chained with another `polars when` * `stor insert`: data record can be passed as input or argument * `stor update`: data record can be passed as input or argument * `format date`: `--list` ignores input value * `into datetime`: `--list` ignores input value (also added a `Type::Nothing` input which was missing from this command) These commands have a similar input/output signature to the above commands, but are working as intended: * `cd`: The input/output signature was actually incorrect, `cd` always ignores its input. I fixed this in this PR. * `generate` * `get` * `history import` * `interleave` * `into bool` * `length` # User-Facing Changes <!-- List of all changes that impact the user experience here. This helps us keep track of breaking changes. --> As a temporary workaround, pipeline input type-checking for the following commands has been bypassed to avoid undesirable run-time input type checking errors which were previously not caught at parse-time: * `open` * `load-env` * `format date` * `into datetime` * `nu-check` * `stor insert` * `stor update` * `polars when` # Tests + Formatting <!-- Don't forget to add tests that cover your changes. 
Make sure you've run and fixed any issues with these commands: - `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes) - `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to check that you're using the standard code style - `cargo test --workspace` to check that all tests pass (on Windows make sure to [enable developer mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging)) - `cargo run -- -c "use toolkit.nu; toolkit test stdlib"` to run the tests for the standard library > **Note** > from `nushell` you can also use the `toolkit` as follows > ```bash > use toolkit.nu # or use an `env_change` hook to activate it automatically > toolkit check pr > ``` --> CI became green in the time it took me to type the description 😄 # After Submitting <!-- If your PR had any user-facing changes, update [the documentation](https://github.com/nushell/nushell.github.io) after the PR is merged, if necessary. This will help us keep the docs up to date. --> N/A |
||
|
66bc0542e0
|
Refactor I/O Errors (#14927)
# Description
As mentioned in #10698, we have too many `ShellError` variants, with
some even overlapping in meaning. This PR simplifies and improves I/O
error handling by restructuring the `ShellError` variants related to I/O issues.
Previously, `ShellError::IOError` only contained a message string,
making it convenient but overly generic. It was widely used without
providing spans (#4323).
This PR introduces a new `ShellError::Io` variant that consolidates
multiple I/O-related errors (except for `ShellError::NetworkFailure`,
which remains distinct for now). The new `ShellError::Io` variant
replaces the following:
- `FileNotFound`
- `FileNotFoundCustom`
- `IOInterrupted`
- `IOError`
- `IOErrorSpanned`
- `NotADirectory`
- `DirectoryNotFound`
- `MoveNotPossible`
- `CreateNotPossible`
- `ChangeAccessTimeNotPossible`
- `ChangeModifiedTimeNotPossible`
- `RemoveNotPossible`
- `ReadingFile`
## The `IoError`
`IoError` includes the following fields:
1. **`kind`**: Extends `std::io::ErrorKind` to specify the type of I/O
error without needing new `ShellError` variants. This aligns with the
approach used in `std::io::Error`. This adds a second dimension to error
reporting by combining the `kind` field with `ShellError` variants,
making it easier to describe errors in more detail. As proposed by
@kubouch in [#design-discussion on
Discord](https://discord.com/channels/601130461678272522/615329862395101194/1323699197165178930),
this helps reduce the number of `ShellError` variants. In the error
report, the `kind` field is displayed as the "source" of the error,
e.g., "I/O error," followed by the specific kind of I/O error.
2. **`span`**: A non-optional field to encourage providing spans for
better error reporting (#4323).
3. **`path`**: Optional `PathBuf` to give context about the file or
directory involved in the error (#7695). If provided, it’s shown as a
help entry in error reports.
4. **`additional_context`**: Allows adding custom messages when the
span, kind, and path are insufficient. This is rendered in the error
report at the labeled span.
5. **`location`**: Sometimes, I/O errors occur in the engine itself and
are not caused directly by user input. In such cases, if we don’t have a
span and must set it to `Span::unknown()`, we need another way to
reference the error. For this, the `location` field uses the new
`Location` struct, which records the Rust file and line number where the
error occurred. This ensures that we at least know the Rust code
location that failed, helping with debugging. To make this work, a new
`location!` macro was added, which retrieves `file!`, `line!`, and
`column!` values accurately. If `Location::new` is used directly, it
issues a warning to remind developers to use the macro instead, ensuring
consistent and correct usage.
### Constructor Behavior
`IoError` provides five constructor methods:
- `new` and `new_with_additional_context`: Used for errors caused by
user input and require a valid (non-unknown) span to ensure precise
error reporting.
- `new_internal` and `new_internal_with_path`: Used for internal errors
where a span is not available. These methods require additional context
and the `Location` struct to pinpoint the source of the error in the
engine code.
- `factory`: Returns a closure that maps an `std::io::Error` to an
`IoError`. This is useful for handling multiple I/O errors that share
the same span and path, streamlining error handling in such cases.
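A minimal sketch of how these constructors can fit together, using simplified stand-in types; the field and method names follow the description above, but the exact signatures are not the real nu-protocol API.
```rust
use std::io::ErrorKind;
use std::path::PathBuf;

/// Stand-in for nushell's Span; the real type lives in nu-protocol.
#[derive(Clone, Copy)]
struct Span {
    start: usize,
    end: usize,
}

/// Simplified IoError carrying the fields described above.
struct IoError {
    kind: ErrorKind,
    span: Span,
    path: Option<PathBuf>,
    additional_context: Option<String>,
}

impl IoError {
    /// For errors caused by user input: callers should pass a real span.
    fn new(kind: ErrorKind, span: Span, path: Option<PathBuf>) -> Self {
        Self { kind, span, path, additional_context: None }
    }

    /// Returns a closure mapping std::io::Error to IoError, so several
    /// fallible calls can share the same span and path.
    fn factory(span: Span, path: PathBuf) -> impl Fn(std::io::Error) -> IoError {
        move |err| IoError::new(err.kind(), span, Some(path.clone()))
    }
}

fn main() {
    let span = Span { start: 0, end: 10 };
    let path = PathBuf::from("Cargo.toml");

    // Several fallible calls against the same file can reuse one factory closure.
    let from_io_error = IoError::factory(span, path.clone());
    match std::fs::read_to_string(&path).map_err(from_io_error) {
        Ok(contents) => println!("read {} bytes", contents.len()),
        Err(err) => println!(
            "I/O error {:?} at {}..{} (path: {:?}, context: {:?})",
            err.kind, err.span.start, err.span.end, err.path, err.additional_context
        ),
    }
}
```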
## New Report Look
This is a simulation of how the I/O errors look (the `open crates` call is
simulated to show how internal errors are now referenced):

## `Span::test_data()`
To enable better testing, `Span::test_data()` now returns a value
distinct from `Span::unknown()`. Both `Span::test_data()` and
`Span::unknown()` refer to invalid source code, but having a separate
value for test data helps identify issues during testing while keeping
spans unique.
## Cursed Sneaky Error Transfers
I removed the conversions between `std::io::Error` and `ShellError` as
they often removed important information and were used too broadly to
handle I/O errors. This also removed the problematic implementation
found here:
|
||
|
dc52a6fec5
|
Handle permission denied error at nu_engine::glob_from (#14679)
# Description
#14528 mentioned that trying to `open` a file in a directory where you don't have read access results in a "file not found" error. I investigated the error and found the root issue in the `nu_engine::glob_from` function. It uses `std::path::Path::canonicalize` some layers down, which may return a `std::io::Error`. All of these errors were handled as "directory not found", which is translated to "file not found" by the `open` command. To fix this, I handled the `PermissionDenied` kind of the io error and passed that down. Now trying to `open` a file from a directory with no permissions returns a "permission denied" error. (Before/after screenshot omitted.)
# User-Facing Changes
That error is fixed, so the correct error message is shown.
# Tests + Formatting
- 🟢 `toolkit fmt`
- 🟢 `toolkit clippy`
- 🟢 `toolkit test`
- 🟢 `toolkit test stdlib`
# After Submitting
fixes #14528 |
||
|
76afa74320
|
open : Assign content_type metadata for filetypes not handled with a from converter (#14670)
# Description Filetypes which are converted during `open` should not have (and have not had) a `content_type` metadata field. However, filetypes which aren't converted now behave the same as with `--raw` and assign the appropriate `content_type`. ## Before ```nushell open toolkit.nu | metadata # => ╭────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/toolkit.nu │ # => ╰────────┴──────────────────────────────────────────── open --raw toolkit.nu | metadata # => ╭──────────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/toolkit.nu │ # => │ content_type │ application/x-nuscript │ # => ╰──────────────┴────────────────────────────────────────────╯ open script.py | metadata # => ╭────────┬─────────────────────────────╮ # => │ source │ /home/ntd/testing/script.py │ # => ╰────────┴─────────────────────────────╯ open Cargo.toml | metadata # => ╭────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/Cargo.toml │ # => ╰────────┴────────────────────────────────────────────╯ ``` ## After ```nushell # Not converted, so adds content_type open toolkit.nu | metadata # => ╭──────────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/toolkit.nu │ # => │ content_type │ application/x-nuscript │ # => ╰──────────────┴────────────────────────────────────────────╯ # Not converted, so adds content_type open --raw toolkit.nu | metadata # => ╭──────────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/toolkit.nu │ # => │ content_type │ application/x-nuscript │ # => ╰──────────────┴────────────────────────────────────────────╯ # Not converted, so adds content_type open script.py | metadata # => ╭──────────────┬─────────────────────────────╮ # => │ source │ /home/ntd/testing/script.py │ # => │ content_type │ text/plain │ # => ╰──────────────┴───────────────────────────── # Converted, so does not add content_type (no change) open Cargo.toml | metadata # => ╭────────┬────────────────────────────────────────────╮ # => │ source │ /home/ntd/src/ntd-forks/nushell/Cargo.toml │ # => ╰────────┴────────────────────────────────────────────╯ ``` # User-Facing Changes `open <file>` assigns the appropriate content type when the filetype is not converted via a `from <format>`. # Tests + Formatting - 🟢 `toolkit fmt` - 🟢 `toolkit clippy` - 🟢 `toolkit test` - 🟢 `toolkit test stdlib` # After Submitting N/A |
||
|
bcd85b6f3e
|
Remove duplicate implementations of CallExt::rest (#14484)
# Description
Removes unnecessary usages of `Call::rest_iter_flattened` and `get_rest_for_glob_pattern` and replaces them with `CallExt::rest`.
# User-Facing Changes
None |
||
|
af9c31152a
|
Add metadata on open --raw with bytestreams (#14141)
# Description
This PR closes #14137 and allows the display hook to be set on byte streams. So, with a hook like the one below:
```nushell
display_output: {
    metadata access {|meta|
        match $meta.content_type? {
            "application/x-nuscript" | "application/x-nuon" | "text/x-nushell" => { nu-highlight },
            "application/json" => { ^bat --language=json --color=always --style=plain --paging=never },
            _ => {},
        }
    } | table
}
```
you could type `open toolkit.nu` and the text of toolkit.nu would be highlighted by nu-highlight. This PR also changes the way content-type is assigned with `open`. Previously it would only be assigned if `--raw` was specified. Lastly, it changes the `is_external()` function to only say that `ByteStreamSource::Child`s are external, instead of both Child and `ByteStreamSource::File`. Again, this was to allow the hook to function properly. I'm not sure what negative ramifications changing `is_external()` could have, but there may be some. |
||
|
95b78eee25
|
Change the usage misnomer to "description" (#13598)
# Description
The meaning of the word "usage" is specific to describing how a command function is *used*, not a synonym for a general description. Usage can describe the SYNOPSIS or EXAMPLES sections of a man page, where the permitted argument combinations are shown or example *uses* are given. Let's not confuse people and call it what it is: a description. Our `help` command already creates its own *Usage* section based on the available arguments and doesn't refer to the description as usage.
# User-Facing Changes
`help commands` and `scope commands` will now use `description` or `extra_description`.
`usage` -> `description`
`extra_usage` -> `extra_description`
Breaking change in the plugin protocol: in the signature record communicated with the engine,
`usage` -> `description`
`extra_usage` -> `extra_description`
The same rename also takes place for the methods on `SimplePluginCommand` and `PluginCommand`.
# Tests + Formatting
- Updated plugin protocol specific changes
# After Submitting
- [ ] update plugin protocol doc |
||
|
73e8de9753
|
Attempt to guess the content type of a file when opening with --raw (#13521)
# Description
Attempt to guess the content type of a file when opening with `--raw` and set it in the pipeline metadata. (Screenshot omitted.)
# User-Facing Changes
- Content of files can be directly piped into commands like `http post` with the content type set appropriately when using `--raw`. |
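The guessing itself is typically a lookup keyed on the file extension. A minimal sketch using the `mime_guess` crate (an assumed dependency for this illustration; nushell's exact implementation may differ):
```rust
// Cargo.toml (assumed): mime_guess = "2"
use std::path::Path;

/// Guess a MIME type from the file extension, falling back to a generic
/// binary type when the extension is unknown.
fn guess_content_type(path: &Path) -> String {
    mime_guess::from_path(path)
        .first_or_octet_stream()
        .essence_str()
        .to_string()
}

fn main() {
    assert_eq!(guess_content_type(Path::new("data.json")), "application/json");
    assert_eq!(guess_content_type(Path::new("notes.txt")), "text/plain");
    println!("{}", guess_content_type(Path::new("mystery.bin")));
}
```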
||
|
d7392f1f3b
|
Internal representation (IR) compiler and evaluator (#13330)
# Description This PR adds an internal representation language to Nushell, offering an alternative evaluator based on simple instructions, stream-containing registers, and indexed control flow. The number of registers required is determined statically at compile-time, and the fixed size required is allocated upon entering the block. Each instruction is associated with a span, which makes going backwards from IR instructions to source code very easy. Motivations for IR: 1. **Performance.** By simplifying the evaluation path and making it more cache-friendly and branch predictor-friendly, code that does a lot of computation in Nushell itself can be sped up a decent bit. Because the IR is fairly easy to reason about, we can also implement optimization passes in the future to eliminate and simplify code. 2. **Correctness.** The instructions mostly have very simple and easily-specified behavior, so hopefully engine changes are a little bit easier to reason about, and they can be specified in a more formal way at some point. I have made an effort to document each of the instructions in the docs for the enum itself in a reasonably specific way. Some of the errors that would have happened during evaluation before are now moved to the compilation step instead, because they don't make sense to check during evaluation. 3. **As an intermediate target.** This is a good step for us to bring the [`new-nu-parser`](https://github.com/nushell/new-nu-parser) in at some point, as code generated from new AST can be directly compared to code generated from old AST. If the IR code is functionally equivalent, it will behave the exact same way. 4. **Debugging.** With a little bit more work, we can probably give control over advancing the virtual machine that `IrBlock`s run on to some sort of external driver, making things like breakpoints and single stepping possible. Tools like `view ir` and [`explore ir`](https://github.com/devyn/nu_plugin_explore_ir) make it easier than before to see what exactly is going on with your Nushell code. The goal is to eventually replace the AST evaluator entirely, once we're sure it's working just as well. You can help dogfood this by running Nushell with `$env.NU_USE_IR` set to some value. The environment variable is checked when Nushell starts, so config runs with IR, or it can also be set on a line at the REPL to change it dynamically. It is also checked when running `do` in case within a script you want to just run a specific piece of code with or without IR. # Example ```nushell view ir { |data| mut sum = 0 for n in $data { $sum += $n } $sum } ``` ```gas # 3 registers, 19 instructions, 0 bytes of data 0: load-literal %0, int(0) 1: store-variable var 904, %0 # let 2: drain %0 3: drop %0 4: load-variable %1, var 903 5: iterate %0, %1, end 15 # for, label(1), from(14:) 6: store-variable var 905, %0 7: load-variable %0, var 904 8: load-variable %2, var 905 9: binary-op %0, Math(Plus), %2 10: span %0 11: store-variable var 904, %0 12: load-literal %0, nothing 13: drain %0 14: jump 5 15: drop %0 # label(0), from(5:) 16: drain %0 17: load-variable %0, var 904 18: return %0 ``` # Benchmarks All benchmarks run on a base model Mac Mini M1. ## Iterative Fibonacci sequence This is about as best case as possible, making use of the much faster control flow. Most code will not experience a speed improvement nearly this large. 
```nushell def fib [n: int] { mut a = 0 mut b = 1 for _ in 2..=$n { let c = $a + $b $a = $b $b = $c } $b } use std bench bench { 0..50 | each { |n| fib $n } } ``` IR disabled: ``` ╭───────┬─────────────────╮ │ mean │ 1ms 924µs 665ns │ │ min │ 1ms 700µs 83ns │ │ max │ 3ms 450µs 125ns │ │ std │ 395µs 759ns │ │ times │ [list 50 items] │ ╰───────┴─────────────────╯ ``` IR enabled: ``` ╭───────┬─────────────────╮ │ mean │ 452µs 820ns │ │ min │ 427µs 417ns │ │ max │ 540µs 167ns │ │ std │ 17µs 158ns │ │ times │ [list 50 items] │ ╰───────┴─────────────────╯ ```  ## [gradient_benchmark_no_check.nu](https://github.com/nushell/nu_scripts/blob/main/benchmarks/gradient_benchmark_no_check.nu) IR disabled: ``` ╭───┬──────────────────╮ │ 0 │ 27ms 929µs 958ns │ │ 1 │ 21ms 153µs 459ns │ │ 2 │ 18ms 639µs 666ns │ │ 3 │ 19ms 554µs 583ns │ │ 4 │ 13ms 383µs 375ns │ │ 5 │ 11ms 328µs 208ns │ │ 6 │ 5ms 659µs 542ns │ ╰───┴──────────────────╯ ``` IR enabled: ``` ╭───┬──────────────────╮ │ 0 │ 22ms 662µs │ │ 1 │ 17ms 221µs 792ns │ │ 2 │ 14ms 786µs 708ns │ │ 3 │ 13ms 876µs 834ns │ │ 4 │ 13ms 52µs 875ns │ │ 5 │ 11ms 269µs 666ns │ │ 6 │ 6ms 942µs 500ns │ ╰───┴──────────────────╯ ``` ## [random-bytes.nu](https://github.com/nushell/nu_scripts/blob/main/benchmarks/random-bytes.nu) I got pretty random results out of this benchmark so I decided not to include it. Not clear why. # User-Facing Changes - IR compilation errors may appear even if the user isn't evaluating with IR. - IR evaluation can be enabled by setting the `NU_USE_IR` environment variable to any value. - New command `view ir` pretty-prints the IR for a block, and `view ir --json` can be piped into an external tool like [`explore ir`](https://github.com/devyn/nu_plugin_explore_ir). # Tests + Formatting All tests are passing with `NU_USE_IR=1`, and I've added some more eval tests to compare the results for some very core operations. I will probably want to add some more so we don't have to always check `NU_USE_IR=1 toolkit test --workspace` on a regular basis. # After Submitting - [ ] release notes - [ ] further documentation of instructions? - [ ] post-release: publish `nu_plugin_explore_ir` |
||
|
399a7c8836
|
Add and use new Signals struct (#13314)
# Description
This PR introduces a new `Signals` struct to replace our ad-hoc passing around of `ctrlc: Option<Arc<AtomicBool>>`. Doing so has a few benefits:
- We can better enforce when/where resetting or triggering an interrupt is allowed.
- Consolidates `nu_utils::ctrl_c::was_pressed` and other ad-hoc re-implementations into a single place: `Signals::check`.
- This allows us to add other types of signals later if we want, e.g., exiting or suspension.
- Similarly, we can more easily change the underlying implementation if we need to in the future.
- Places that used to have a `ctrlc` of `None` now use `Signals::empty()`, so we can double check these usages for correctness in the future. |
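A rough sketch of the shape such a wrapper can take, with method names taken from the description above; the real `Signals` type has more to it, so treat this as illustrative only.
```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

/// Illustrative sketch of a Signals-style wrapper around the old
/// Option<Arc<AtomicBool>> ctrl-c flag.
#[derive(Clone)]
struct Signals {
    interrupt: Option<Arc<AtomicBool>>,
}

impl Signals {
    fn new(interrupt: Arc<AtomicBool>) -> Self {
        Self { interrupt: Some(interrupt) }
    }

    /// Stand-in for the places that used to pass a ctrlc of None.
    fn empty() -> Self {
        Self { interrupt: None }
    }

    /// Single place to ask "was ctrl-c pressed?"; returns a plain error value
    /// here instead of nushell's ShellError.
    fn check(&self) -> Result<(), &'static str> {
        match &self.interrupt {
            Some(flag) if flag.load(Ordering::Relaxed) => Err("interrupted"),
            _ => Ok(()),
        }
    }
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let signals = Signals::new(flag.clone());
    assert!(signals.check().is_ok());
    flag.store(true, Ordering::Relaxed);
    assert!(signals.check().is_err());
    assert!(Signals::empty().check().is_ok());
}
```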
||
|
3fae77209a
|
Revert "Span ID Refactor (Step 2): Make Call SpanId-friendly (#13268)" (#13292)
This reverts commit 0cfd5fbece6f25b54ab9dc417a9e06af9d83f282.
The original PR messed up syntax highlighting of aliases and caused completion panics in the presence of an alias. |
||
|
0cfd5fbece
|
Span ID Refactor (Step 2): Make Call SpanId-friendly (#13268)
# Description
Part of https://github.com/nushell/nushell/issues/12963, step 2. This PR refactors `Call` and related argument structures to remove their dependency on `Expression::span`, which will be removed in the future.
# User-Facing Changes
Should be none. If you see error messages that look broken, please report them. |
||
|
0d060aeae8
|
Use pipeline data for http post|put|patch|delete commands. (#13254)
# Description
Provides the ability to use http commands as part of a pipeline. Additionally, this pull request extends the pipeline metadata to add a content_type field. The content_type metadata field allows commands such as `to json` to set the metadata in the pipeline, allowing the http commands to use it when making requests. This pull request also introduces the ability to directly stream http requests from streaming pipelines. One other small change is that Content-Type will always be set if it is passed in to the http commands, either indirectly or through the content-type flag. Previously it was not preserved for requests that were not of type json or form data.
# User-Facing Changes
* `http post`, `http put`, `http patch`, `http delete` can be used as part of a pipeline
* `to text`, `to json`, `from json` all set the content_type metadata field, and the http commands will utilize it when making requests. |
||
|
cc9f41e553
|
Use CommandType in more places (#12832)
# Description
Kind of a vague title, but this PR does two main things:
1. Rather than overriding functions like `Command::is_parser_keyword`, this PR instead changes commands to override `Command::command_type`. The `CommandType` returned by `Command::command_type` is then used to automatically determine whether `Command::is_parser_keyword` and the other `is_{type}` functions should return true. These changes allow us to remove the `CommandType::Other` case and should also guarantee that only one of the `is_{type}` functions on `Command` will return true.
2. Uses the new, reworked `Command::command_type` function in the `scope commands` and `which` commands.
# User-Facing Changes
- Breaking change for `scope commands`: multiple columns (`is_builtin`, `is_keyword`, `is_plugin`, etc.) have been merged into the `type` column.
- Breaking change: the `which` command can now report `plugin` or `keyword` instead of `built-in` in the `type` column. It may also now report `external` instead of `custom` in the `type` column for known `extern`s. |
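A condensed sketch of the pattern in point 1, using simplified stand-ins rather than the exact nu-protocol definitions: commands override only `command_type`, and the `is_{type}` helpers are derived from it, so at most one can return true.
```rust
/// Simplified stand-in for nu-protocol's CommandType.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum CommandType {
    Builtin,
    Keyword,
    Plugin,
}

/// Commands override command_type(); the is_{type} helpers are derived from
/// it instead of being overridden one by one.
trait Command {
    fn command_type(&self) -> CommandType {
        CommandType::Builtin
    }

    fn is_parser_keyword(&self) -> bool {
        self.command_type() == CommandType::Keyword
    }

    fn is_plugin(&self) -> bool {
        self.command_type() == CommandType::Plugin
    }
}

/// A keyword-style command: it only states its type once.
struct Def;

impl Command for Def {
    fn command_type(&self) -> CommandType {
        CommandType::Keyword
    }
}

fn main() {
    let cmd = Def;
    assert!(cmd.is_parser_keyword());
    assert!(!cmd.is_plugin());
    println!("{:?}", cmd.command_type());
}
```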
||
|
6fd854ed9f
|
Replace ExternalStream with new ByteStream type (#12774)
# Description This PR introduces a `ByteStream` type which is a `Read`-able stream of bytes. Internally, it has an enum over three different byte stream sources: ```rust pub enum ByteStreamSource { Read(Box<dyn Read + Send + 'static>), File(File), Child(ChildProcess), } ``` This is in comparison to the current `RawStream` type, which is an `Iterator<Item = Vec<u8>>` and has to allocate for each read chunk. Currently, `PipelineData::ExternalStream` serves a weird dual role where it is either external command output or a wrapper around `RawStream`. `ByteStream` makes this distinction more clear (via `ByteStreamSource`) and replaces `PipelineData::ExternalStream` in this PR: ```rust pub enum PipelineData { Empty, Value(Value, Option<PipelineMetadata>), ListStream(ListStream, Option<PipelineMetadata>), ByteStream(ByteStream, Option<PipelineMetadata>), } ``` The PR is relatively large, but a decent amount of it is just repetitive changes. This PR fixes #7017, fixes #10763, and fixes #12369. This PR also improves performance when piping external commands. Nushell should, in most cases, have competitive pipeline throughput compared to, e.g., bash. | Command | Before (MB/s) | After (MB/s) | Bash (MB/s) | | -------------------------------------------------- | -------------:| ------------:| -----------:| | `throughput \| rg 'x'` | 3059 | 3744 | 3739 | | `throughput \| nu --testbin relay o> /dev/null` | 3508 | 8087 | 8136 | # User-Facing Changes - This is a breaking change for the plugin communication protocol, because the `ExternalStreamInfo` was replaced with `ByteStreamInfo`. Plugins now only have to deal with a single input stream, as opposed to the previous three streams: stdout, stderr, and exit code. - The output of `describe` has been changed for external/byte streams. - Temporary breaking change: `bytes starts-with` no longer works with byte streams. This is to keep the PR smaller, and `bytes ends-with` already does not work on byte streams. - If a process core dumped, then instead of having a `Value::Error` in the `exit_code` column of the output returned from `complete`, it now is a `Value::Int` with the negation of the signal number. # After Submitting - Update docs and book as necessary - Release notes (e.g., plugin protocol changes) - Adapt/convert commands to work with byte streams (high priority is `str length`, `bytes starts-with`, and maybe `bytes ends-with`). - Refactor the `tee` code, Devyn has already done some work on this. --------- Co-authored-by: Devyn Cairns <devyn.cairns@gmail.com> |
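Since each `ByteStreamSource` variant above ultimately yields something `Read`-able, consumers can treat the stream uniformly instead of iterating over allocated chunks. A stripped-down sketch of that idea (not the real `ByteStream` API):
```rust
use std::fs::File;
use std::io::{self, Cursor, Read};

/// Cut-down version of the enum above: the real one also has a
/// Child(ChildProcess) variant for external command output.
enum ByteStreamSource {
    Read(Box<dyn Read + Send + 'static>),
    File(File),
}

impl ByteStreamSource {
    /// Expose every variant as a single Read-able object, which is the key
    /// difference from the old RawStream's Iterator<Item = Vec<u8>> design.
    fn reader(self) -> Box<dyn Read + Send + 'static> {
        match self {
            ByteStreamSource::Read(reader) => reader,
            ByteStreamSource::File(file) => Box::new(file),
        }
    }
}

fn main() -> io::Result<()> {
    let source = ByteStreamSource::Read(Box::new(Cursor::new(b"hello".to_vec())));
    let mut buf = String::new();
    source.reader().read_to_string(&mut buf)?;
    assert_eq!(buf, "hello");
    Ok(())
}
```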
||
|
e879d4ecaf
|
ListStream touchup (#12524)
# Description
Does some misc changes to `ListStream`:
- Moves it into its own module/file separate from `RawStream`.
- `ListStream`s now have an associated `Span`.
  - This required changes to `ListStreamInfo` in `nu-plugin`. Not sure if this is a breaking change for the plugin protocol.
- Hides the internals of `ListStream` but also adds a few more methods.
  - This includes two functions to more easily alter a stream (these take a `ListStream` and return a `ListStream` instead of having to go through the whole `into_pipeline_data(..)` route).
    - `map`: takes a `FnMut(Value) -> Value`
    - `modify`: takes a function to modify the inner stream. |
||
|
bdb6daa4b5
|
Migrate to a new PWD API (#12603)
This is the first PR towards migrating to a new `$env.PWD` API that returns potentially un-canonicalized paths. Refer to PR #12515 for motivations.
## New API: `EngineState::cwd()`
The goal of the new API is to cover both parse-time and runtime use cases, and to avoid unintentional misuse. It takes an `Option<Stack>` as argument which, if supplied, will search for `$env.PWD` on the stack in addition to the engine state. I think with this design there's less confusion over parse-time and runtime environments. If you have access to a stack, just supply it; otherwise supply `None` (a small usage sketch follows this entry).
## Deprecation of other PWD-related APIs
Other APIs are re-implemented using `EngineState::cwd()` and properly documented. They're marked deprecated, but their behavior is unchanged. Unused APIs are deleted, and code that accesses `$env.PWD` directly without using an API is rewritten.
Deprecated APIs:
* `EngineState::current_work_dir()`
* `StateWorkingSet::get_cwd()`
* `env::current_dir()`
* `env::current_dir_str()`
* `env::current_dir_const()`
* `env::current_dir_str_const()`
Other changes:
* `EngineState::get_cwd()` (deleted)
* `StateWorkingSet::list_env()` (deleted)
* `repl::do_run_cmd()` (rewritten with `env::current_dir_str()`)
## `cd` and `pwd` now use logical paths by default
This pulls the changes from PR #12515. It's currently somewhat broken because using non-canonicalized paths exposed a bug in our path normalization logic (Issue #12602). Once that is fixed, this should work.
## Future plans
This PR needs some tests. Which test helpers should I use, and where should I put those tests? I noticed that unquoted paths are expanded within `eval_filepath()` and `eval_directory()` before they even reach the `cd` command. This means every path is expanded twice. Is this intended?
Once this PR lands, the plan is to review all usages of the deprecated APIs and migrate them to `EngineState::cwd()`. In the meantime, these usages are annotated with `#[allow(deprecated)]` to avoid breaking CI.
---------
Co-authored-by: Jakub Žádník <kubouch@gmail.com> |
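A toy illustration of the calling convention described above, using stand-in types; the real `EngineState::cwd` lives in nushell's engine code and does proper error handling rather than returning an `Option`.
```rust
use std::collections::HashMap;
use std::path::PathBuf;

/// Stand-ins for nushell's engine state and stack, each holding an
/// environment map that may contain PWD.
struct Stack { env: HashMap<String, PathBuf> }
struct EngineState { env: HashMap<String, PathBuf> }

impl EngineState {
    /// Prefer PWD from the stack when one is supplied, otherwise fall back
    /// to the engine state's copy.
    fn cwd(&self, stack: Option<&Stack>) -> Option<PathBuf> {
        stack
            .and_then(|s| s.env.get("PWD").cloned())
            .or_else(|| self.env.get("PWD").cloned())
    }
}

fn main() {
    let engine = EngineState { env: HashMap::from([("PWD".into(), PathBuf::from("/tmp"))]) };
    let stack = Stack { env: HashMap::from([("PWD".into(), PathBuf::from("/home/user"))]) };
    // Runtime: a stack is available, so its PWD wins.
    assert_eq!(engine.cwd(Some(&stack)), Some(PathBuf::from("/home/user")));
    // Parse time: no stack, fall back to the engine state.
    assert_eq!(engine.cwd(None), Some(PathBuf::from("/tmp")));
}
```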
||
|
02de69de92
|
Fix inconsistent print behavior (#12675)
# Description I found a bunch of issues relating to the specialized reimplementation of `print()` that's done in `nu-cli` and it just didn't seem necessary. So I tried to unify the behavior reasonably. `PipelineData::print()` already handles the call to `table` and it even has a `no_newline` option. One of the most major issues before was that we were using the value iterator, and then converting to string, and then printing each with newlines. This doesn't work well for an external stream, because its iterator ends up creating `Value::binary()` with each buffer... so we were doing lossy UTF-8 conversion on those and then printing them with newlines, which was very weird:  You can see the random newline inserted in a break between buffers, but this would be even worse if it were on a multibyte UTF-8 character. You can produce this by writing a large amount of text to a text file, and then doing `nu -c 'open file.txt'` - in my case I just wrote `^find .`; it just has to be large enough to trigger a buffer break. Using `print()` instead led to a new issue though, because it doesn't abort on errors. This is so that certain commands can produce a stream of errors and have those all printed. There are tests for e.g. `rm` that depend on this behavior. I assume we want to keep that, so instead I made my target `BufferedReader`, and had that fuse closed if an error was encountered. I can't imagine we want to keep reading from a wrapped I/O stream if an error occurs; more often than not the error isn't going to magically resolve itself, it's not going to be a different error each time, and it's just going to lead to an infinite stream of the same error. The test that broke without that was `open . | lines`, because `lines` doesn't fuse closed on error. But I don't know if it's expected or not for it to do that, so I didn't target that. I think this PR makes things better but I'll keep looking for ways to improve on how errors and streams interact, especially trying to eliminate cases where infinite error loops can happen. # User-Facing Changes - **Breaking**: `BufferedReader` changes + no more public fields - A raw I/O stream from e.g. `open` won't produce infinite errors anymore, but I consider that to be a plus - the implicit `print` on script output is the same as the normal one now # Tests + Formatting Everything passes but I didn't add anything specific. |
||
|
e889679d42
|
Use nightly clippy to kill dead code/fix style (#12334)
- **Remove duplicated imports**
- **Remove unused field in `CompletionOptions`**
- **Remove unused struct in `nu-table`**
- **Clarify generic bounds**
- **Simplify a subtrait bound for `ExactSizeIterator`**
- **Use `unwrap_or_default`**
- **Use `Option` directly instead of empty string**
- **Elide unneeded clone in `to html`** |
||
|
c747ec75c9
|
Add command_prelude module (#12291)
# Description When implementing a `Command`, one must also import all the types present in the function signatures for `Command`. This makes it so that we often import the same set of types in each command implementation file. E.g., something like this: ```rust use nu_protocol::ast::Call; use nu_protocol::engine::{Command, EngineState, Stack}; use nu_protocol::{ record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, PipelineData, ShellError, Signature, Span, Type, Value, }; ``` This PR adds the `nu_engine::command_prelude` module which contains the necessary and commonly used types to implement a `Command`: ```rust // command_prelude.rs pub use crate::CallExt; pub use nu_protocol::{ ast::{Call, CellPath}, engine::{Command, EngineState, Stack}, record, Category, Example, IntoInterruptiblePipelineData, IntoPipelineData, IntoSpanned, PipelineData, Record, ShellError, Signature, Span, Spanned, SyntaxShape, Type, Value, }; ``` This should reduce the boilerplate needed to implement a command and also gives us a place to track the breadth of the `Command` API. I tried to be conservative with what went into the prelude modules, since it might be hard/annoying to remove items from the prelude in the future. Let me know if something should be included or excluded. |
||
|
87c5f6e455
|
ls, rm, cp, open, touch, mkdir: Don't expand tilde if input path is quoted string or a variable. (#12232)
# Description
Fixes: #11887
Fixes: #11626
This PR unifies the tilde-expansion behavior over several filesystem-relative commands. It follows the same rule as glob expansion:
| command | result |
| ----------- | ------ |
| ls ~/aaa | expand tilde |
| ls "~/aaa" | don't expand tilde |
| let f = "~/aaa"; ls $f | don't expand tilde; if you want to, use `ls ($f \| path expand)` |
| let f: glob = "~/aaa"; ls $f | expand tilde |
They don't expand on the `mkdir` and `touch` commands. Actually I'm not sure about the 4th item; currently it expands just because it follows the same rule as glob expansion.
### About the change
It changes `expand_path_with` to accept a new argument called `expand_tilde`. If it's true, expand the tilde; if not, just keep it as `~` itself.
# User-Facing Changes
After this change, `ls "~/aaa"` won't expand the tilde.
# Tests + Formatting
Done |
||
|
b6c7656194
|
IO and redirection overhaul (#11934)
# Description The PR overhauls how IO redirection is handled, allowing more explicit and fine-grain control over `stdout` and `stderr` output as well as more efficient IO and piping. To summarize the changes in this PR: - Added a new `IoStream` type to indicate the intended destination for a pipeline element's `stdout` and `stderr`. - The `stdout` and `stderr` `IoStream`s are stored in the `Stack` and to avoid adding 6 additional arguments to every eval function and `Command::run`. The `stdout` and `stderr` streams can be temporarily overwritten through functions on `Stack` and these functions will return a guard that restores the original `stdout` and `stderr` when dropped. - In the AST, redirections are now directly part of a `PipelineElement` as a `Option<Redirection>` field instead of having multiple different `PipelineElement` enum variants for each kind of redirection. This required changes to the parser, mainly in `lite_parser.rs`. - `Command`s can also set a `IoStream` override/redirection which will apply to the previous command in the pipeline. This is used, for example, in `ignore` to allow the previous external command to have its stdout redirected to `Stdio::null()` at spawn time. In contrast, the current implementation has to create an os pipe and manually consume the output on nushell's side. File and pipe redirections (`o>`, `e>`, `e>|`, etc.) have precedence over overrides from commands. This PR improves piping and IO speed, partially addressing #10763. Using the `throughput` command from that issue, this PR gives the following speedup on my setup for the commands below: | Command | Before (MB/s) | After (MB/s) | Bash (MB/s) | | --------------------------- | -------------:| ------------:| -----------:| | `throughput o> /dev/null` | 1169 | 52938 | 54305 | | `throughput \| ignore` | 840 | 55438 | N/A | | `throughput \| null` | Error | 53617 | N/A | | `throughput \| rg 'x'` | 1165 | 3049 | 3736 | | `(throughput) \| rg 'x'` | 810 | 3085 | 3815 | (Numbers above are the median samples for throughput) This PR also paves the way to refactor our `ExternalStream` handling in the various commands. For example, this PR already fixes the following code: ```nushell ^sh -c 'echo -n "hello "; sleep 0; echo "world"' | find "hello world" ``` This returns an empty list on 0.90.1 and returns a highlighted "hello world" on this PR. Since the `stdout` and `stderr` `IoStream`s are available to commands when they are run, then this unlocks the potential for more convenient behavior. E.g., the `find` command can disable its ansi highlighting if it detects that the output `IoStream` is not the terminal. Knowing the output streams will also allow background job output to be redirected more easily and efficiently. # User-Facing Changes - External commands returned from closures will be collected (in most cases): ```nushell 1..2 | each {|_| nu -c "print a" } ``` This gives `["a", "a"]` on this PR, whereas this used to print "a\na\n" and then return an empty list. ```nushell 1..2 | each {|_| nu -c "print -e a" } ``` This gives `["", ""]` and prints "a\na\n" to stderr, whereas this used to return an empty list and print "a\na\n" to stderr. - Trailing new lines are always trimmed for external commands when piping into internal commands or collecting it as a value. (Failure to decode the output as utf-8 will keep the trailing newline for the last binary value.) 
In the current nushell version, the following three code snippets differ only in parenthesis placement, but they all also have different outputs: 1. `1..2 | each { ^echo a }` ``` a a ╭────────────╮ │ empty list │ ╰────────────╯ ``` 2. `1..2 | each { (^echo a) }` ``` ╭───┬───╮ │ 0 │ a │ │ 1 │ a │ ╰───┴───╯ ``` 3. `1..2 | (each { ^echo a })` ``` ╭───┬───╮ │ 0 │ a │ │ │ │ │ 1 │ a │ │ │ │ ╰───┴───╯ ``` But in this PR, the above snippets will all have the same output: ``` ╭───┬───╮ │ 0 │ a │ │ 1 │ a │ ╰───┴───╯ ``` - All existing flags on `run-external` are now deprecated. - File redirections now apply to all commands inside a code block: ```nushell (nu -c "print -e a"; nu -c "print -e b") e> test.out ``` This gives "a\nb\n" in `test.out` and prints nothing. The same result would happen when printing to stdout and using a `o>` file redirection. - External command output will (almost) never be ignored, and ignoring output must be explicit now: ```nushell (^echo a; ^echo b) ``` This prints "a\nb\n", whereas this used to print only "b\n". This only applies to external commands; values and internal commands not in return position will not print anything (e.g., `(echo a; echo b)` still only prints "b"). - `complete` now always captures stderr (`do` is not necessary). # After Submitting The language guide and other documentation will need to be updated. |
||
|
14d1c67863
|
Debugger experiments (#11441)
# Description

This PR adds a new evaluator path with callbacks to a mutable trait object implementing a Debugger trait. The trait object can do anything, e.g., profiling, code coverage, step debugging. Currently, entering/leaving a block and a pipeline element is marked with callbacks, but more callbacks can be added as necessary. Not all callbacks need to be used by all debuggers; unused ones are simply empty calls. A simple profiler is implemented as a proof of concept.

The debugging support is implemented by making `eval_xxx()` functions generic depending on whether we're debugging or not. This has zero computational overhead, but makes the binary slightly larger (see benchmarks below). `eval_xxx()` variants called from commands (like `eval_block_with_early_return()` in `each`) are chosen with dynamic dispatch, for two reasons: to avoid growing the binary size by duplicating the code of many commands, and because a generic parameter there isn't possible anyway, since it would make `Command` trait objects object-unsafe.

In the future, I hope it will be possible to allow plugin callbacks such that users would be able to implement their own profiler plugins instead of having to recompile Nushell. [DAP](https://microsoft.github.io/debug-adapter-protocol/) would also be interesting to explore.

Try `help debug profile`.

## Screenshots

Basic output: (screenshot omitted)

To profile with more granularity, increase the profiler depth (you'll see that repeated `is-windows` calls take a large chunk of total time, making it a good candidate for optimizing): (screenshot omitted)

## Benchmarks

### Binary size

Binary size increase vs. main: **+40360 bytes**. _(Both built with `--release --features=extra,dataframe`.)_

### Time

```nushell
# bench_debug.nu
use std bench
let test = { 1..100 | each { ls | each {|row| $row.name | str length } } | flatten | math avg }
print 'debug:'
let res2 = bench { debug profile $test } --pretty
print $res2
```

```nushell
# bench_nodebug.nu
use std bench
let test = { 1..100 | each { ls | each {|row| $row.name | str length } } | flatten | math avg }
print 'no debug:'
let res1 = bench { do $test } --pretty
print $res1
```

`cargo run --release -- bench_debug.nu` is consistently 1--2 ms slower than `cargo run --release -- bench_nodebug.nu` due to the collection overhead + gathering the report. This is expected. When gathering more stuff, the overhead is obviously higher.

For `cargo run --release -- bench_nodebug.nu` vs. `nu bench_nodebug.nu` I didn't measure any difference. Both benchmarks report times between 97 and 103 ms randomly, without one being consistently higher than the other. This suggests that, at least in this particular case, there is no runtime overhead when not running any debugger.

## API changes

This PR adds a generic parameter to all `eval_xxx` functions that forces you to specify whether you use the debugger. You can resolve it in two ways:

* Use a provided helper that will figure it out for you. If you wanted to use `eval_block(&engine_state, ...)`, call `let eval_block = get_eval_block(&engine_state); eval_block(&engine_state, ...)`.
* If you know you're in an evaluation path that doesn't need debugger support, call `eval_block::<WithoutDebug>(&engine_state, ...)` (this is the case for hooks, for example).

I tried to add more explanation in the docstring of `debugger_trait.rs`.

## TODO

- [x] Better profiler output to reduce the spam of iterative commands like `each`
- [x] Resolve `TODO: DEBUG` comments
- [x] Resolve unwraps
- [x] Add doc comments
- [x] Add usage and extra usage for `debug profile`, explaining all columns

# User-Facing Changes

Hopefully none.

# Tests + Formatting

# After Submitting |
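For anyone wanting to try the profiler described above, a minimal sketch (the closure body is an arbitrary placeholder) could look like this:

```nushell
# A throwaway closure to profile; the body is an arbitrary placeholder.
let work = { 1..100 | each {|i| $i * $i } | math sum }

# Prints a table of profiled pipeline elements with timing information.
# See `help debug profile` for the flags that control profiling depth.
debug profile $work
```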
||
|
387328fe73
|
Glob: don't allow implicit casting between glob and string (#11992)
# Description

As the title says, on the latest main nushell confuses users by allowing implicit casting between glob and string:

```nushell
let x = "*.txt"
def glob-test [g: glob] { open $g }
glob-test $x
```

It always expands the glob even though `$x` is defined as a string. This PR implements a solution from @kubouch:

> We could make it really strict and disallow all autocasting between globs and strings because that's what's causing the "magic" confusion. Then, modify all builtins that accept globs to accept oneof(glob, string) and the rules would be that globs always expand and strings never expand

# User-Facing Changes

After this PR, users need to use `into glob` when passing a string variable to `glob-test`:

```nushell
let x = "*.txt"
def glob-test [g: glob] { open $g }
glob-test ($x | into glob)
```

Otherwise nushell will return an error:

```
3 │ glob-test $x
  ·           ─┬
  ·            ╰── can't convert string to glob
```

# Tests + Formatting

Done

# After Submitting

Nan |
||
|
f7d647ac3c
|
`open`, `rm`, `umv`, `cp`, `rm` and `du`: Don't glob if inputs are variables or string interpolation (#11886)
# Description
This is a follow-up to
https://github.com/nushell/nushell/pull/11621#issuecomment-1937484322
Also Fixes: #11838
## About the code change
It applies the same logic as when we pass variables to external commands (see the sketch below):
|
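A hedged sketch of the intended behavior (the file name is hypothetical): a glob-looking string held in a variable is taken literally, and expansion has to be requested explicitly with `into glob`:

```nushell
let path = "a[123]b"

# The bracket pattern stored in a variable is treated as a literal file name.
rm $path

# Expansion must be requested explicitly by converting the string to a glob.
rm ($path | into glob)
```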
||
|
6ff3a4180b
|
Specify which file not found in error (#11868)
# Description Currently, `ShellError::FileNotFound` shows the span where the error occurred but doesn't say which file wasn't found. This PR makes it so the help includes that (like the `DirectoryNotFound` error). # User-Facing Changes No breaking changes, it's just that when a file can't be found, the help will say which file couldn't be found:  |
||
|
1c49ca503a
|
Name the Value conversion functions more clearly (#11851)
# Description

This PR renames the conversion functions on `Value` to be more consistent. It follows the Rust [API guidelines](https://rust-lang.github.io/api-guidelines/naming.html#ad-hoc-conversions-follow-as_-to_-into_-conventions-c-conv) for ad-hoc conversions. The conversion functions on `Value` now come in a few forms:

- `coerce_{type}` takes a `&Value` and attempts to convert the value to `type` (e.g., `i64` values are converted to `f64`). This is the old behavior of some of the `as_{type}` functions -- these functions have simply been renamed to better reflect what they do.
- The new `as_{type}` functions take a `&Value` and return an `Ok` result only if the value is of `type` (no conversion is attempted). The returned value will be borrowed if `type` is non-`Copy`, otherwise an owned value is returned.
- `into_{type}` exists for non-`Copy` types, but otherwise does not attempt conversion, just like `as_{type}`. It takes an owned `Value` and always returns an owned result.
- `coerce_into_{type}` has the same relationship with `coerce_{type}` as `into_{type}` does with `as_{type}`.
- `to_{kind}_string`: conversion to different string formats (debug, abbreviated, etc.). Only two of the old string conversion functions were removed; the rest have only been renamed.
- `to_{type}`: other conversion functions. Currently, only `to_path` exists. (And `to_string` through `Display`.)

This table summarizes the above:

| Form | Cost | Input Ownership | Output Ownership | Converts `Value` case/`type` |
| ---------------------------- | ----- | --------------- | ---------------- | -------- |
| `as_{type}` | Cheap | Borrowed | Borrowed/Owned | No |
| `into_{type}` | Cheap | Owned | Owned | No |
| `coerce_{type}` | Cheap | Borrowed | Borrowed/Owned | Yes |
| `coerce_into_{type}` | Cheap | Owned | Owned | Yes |
| `to_{kind}_string` | Expensive | Borrowed | Owned | Yes |
| `to_{type}` | Expensive | Borrowed | Owned | Yes |

# User-Facing Changes

Breaking API change for `Value` in `nu-protocol`, which is exposed as part of the plugin API. |
||
|
903afda6d9
|
Remove required positional arg for some file system commands (#11858)
# Description Fixes (most of) #11796. Some filesystem commands have a required positional argument which hinders spreading rest args. This PR removes the required positional arg from `rm`, `open`, and `touch` to be consistent with other filesystem commands that already only have a single rest arg (`mkdir` and `cp`). # User-Facing Changes `rm`, `open`, and `touch` might no longer error when they used to, but otherwise there should be no noticeable changes. |
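As a hedged illustration of why the required positional got in the way, spreading a list of (made-up) paths now works on its own:

```nushell
let files = [a.txt b.txt c.txt]

# With the required positional removed, a spread rest argument alone is accepted.
touch ...$files
open ...$files
rm ...$files
```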
||
|
d646903161
|
Unify glob behavior on `open`, `rm`, `cp-old`, `mv`, `umv`, `cp` and `du` commands (#11621)
# Description

This PR is a follow-up to [#11569](https://github.com/nushell/nushell/pull/11569#issuecomment-1902279587):

> Revert the logic in https://github.com/nushell/nushell/pull/10694 and apply the logic in this pr to mv, cp, rv will require a larger change, I need to think how to achieve the bahavior

And sorry @bobhy for reverting some of your changes.

This PR unifies glob behavior across the following commands:

* open
* rm
* cp-old
* mv
* umv
* cp
* du

They now behave the same as `ls`: if the given parameter is quoted by a single quote (`'`) or double quote (`"`), the glob pattern is not auto-expanded; if it is not quoted, the glob pattern is auto-expanded (see the sketch below).

Fixes: #9558
Fixes: #10211
Fixes: #9310
Fixes: #10364

# TODO

One thing remains: if we give a variable to the command, it will always auto-expand the glob pattern, e.g.:

```nushell
let path = "a[123]b"
rm $path
```

I don't think that's expected, but I also think users might want to auto-expand glob patterns held in variables. So I'll introduce a new command called `glob escape`; if a user doesn't want the glob pattern to be auto-expanded, they can do this: `rm ($path | glob escape)`.

# User-Facing Changes

# Tests + Formatting

Done

# After Submitting

## NOTE

This PR changes the semantics of `GlobPattern`: before this PR it would `expand path` after being evaluated, which left `nu_engine::glob_from` no chance to glob correctly when a path contains a glob pattern, e.g. [#9310](https://github.com/nushell/nushell/issues/9310#issuecomment-1886824030), #10211. I think changing the semantics is fine, because it makes globbing work when a path contains something like '*'. It may be a breaking change if a custom command's arguments are annotated with `: glob`. |
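A short sketch of the quoting rule described above (file names are hypothetical):

```nushell
# Unquoted: the pattern is auto-expanded and every matching .txt file is removed.
rm *.txt

# Quoted: the pattern is taken literally, removing a file actually named "*.txt".
rm "*.txt"
```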
||
|
1867bb1a88
|
Fix incorrect handling of boolean flags for builtin commands (#11492)
# Description
Possible fix of #11456
This PR fixes a bug where builtin commands did not respect the logic of
dynamically passed boolean flags. The reason is
[has_flag](
|
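The description above is truncated, but the gist is that a boolean flag whose value comes from a variable should be honored by builtin commands. A hedged sketch of the kind of call affected (the command and flag are placeholders, not taken from the PR):

```nushell
let show_hidden = false

# With the fix, the builtin respects the dynamically passed boolean,
# so this behaves like plain `ls` rather than `ls --all`.
ls --all=$show_hidden
```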
||
|
aeffa188f0
|
Fix an infinite loop if the input stream and output stream are the same (#11384)
# Description Fixes #11382 # User-Facing Changes * before ```console nushell/test (109f629) [✘?] ❯ open hello.md hello nushell/test (109f629) [✘?] ❯ ls hello.md | get size ╭───┬─────╮ │ 0 │ 6 B │ ╰───┴─────╯ nushell/test (109f629) [✘?] ❯ open --raw hello.md | prepend "world" | save --raw --force hello.md ^C nushell/test (109f629) [✘?] ❯ ls hello.md | get size ╭───┬─────────╮ │ 0 │ 2.8 GiB │ ╰───┴─────────╯ ``` * after ```console nushell/test on fix_save [✘!?⇡] ❯ open hello.md | prepend "hello" | save --force hello.md nushell/test on fix_save [✘!?⇡] ❯ open --raw hello.md | prepend "hello" | save --raw --force ../test/hello.md Error: × pipeline input and output are same file ╭─[entry #4:1:1] 1 │ open --raw hello.md | prepend "hello" | save --raw --force ../test/hello.md · ────────┬─────── · ╰── can't save output to '/data/source/nushell/test/hello.md' while it's being reading ╰──── help: you should change output path nushell/test on fix_save [✘!?⇡] ❯ open hello | prepend "hello" | save --force hello Error: × pipeline input and output are same file ╭─[entry #5:1:1] 1 │ open hello | prepend "hello" | save --force hello · ──┬── · ╰── can't save output to '/data/source/nushell/test/hello' while it's being reading ╰──── help: you should change output path ``` # Tests + Formatting Make sure you've run and fixed any issues with these commands: - [x] add `commands::save::save_same_file_with_extension` - [x] add `commands::save::save_same_file_without_extension` - [x] `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes) - [x] `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used` to check that you're using the standard code style - [x] `cargo test --workspace` to check that all tests pass (on Windows make sure to [enable developer mode](https://learn.microsoft.com/en-us/windows/apps/get-started/developer-mode-features-and-debugging)) - [x] `cargo run -- -c "use std testing; testing run-tests --path crates/nu-std"` to run the tests for the standard library # After Submitting |
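Since saving back onto the file being read now errors as shown above, one hedged workaround sketch (the temporary file name is made up) is to write elsewhere first and then replace the original:

```nushell
# Write to a separate file first, then replace the original
# (depending on your nushell version, mv may need --force/-f to overwrite).
open --raw hello.md | prepend "world" | save --raw --force hello.tmp
mv hello.tmp hello.md
```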
||
|
5b01685fc3
|
Enforce required, optional, and rest positional arguments start with an uppercase and end with a period. (#11285)
# Description This updates all the positional arguments (except with `--features=dataframe` or `--features=extra`) to start with an uppercase letter and end with a period. Part of #5066, specifically [this comment](/nushell/nushell/issues/5066#issuecomment-1421528910) Some arguments had example data removed from them because it also appears in the examples. There are other inconsistencies in positional arguments I noticed while making the tests pass which I will bring up in #5066. # User-Facing Changes Positional arguments are now consistent # Tests + Formatting - 🟢 `toolkit fmt` - 🟢 `toolkit clippy` - 🟢 `toolkit test` - 🟢 `toolkit test stdlib` # After Submitting Automatic documentation updates |
||
|
a95a4505ef
|
Convert ShellError::GenericError to named fields (#11230)
# Description Replace `.to_string()` used in `GenericError` with `.into()`, as `.into()` seems more popular. Replace `Vec::new()` used in `GenericError` with `vec![]`, as `vec![]` seems more popular. (There are so, so many.) |
||
|
8386bc0919
|
Convert more ShellError variants to named fields (#11173)
# Description Convert these ShellError variants to named fields: * CreateNotPossible * MoveNotPossibleSingle * DirectoryNotFoundCustom * DirectoryNotFound * NotADirectory * OutOfMemoryError * PermissionDeniedError * IOErrorSpanned * IOError * IOInterrupted Also place the `span` field of `DirectoryNotFound` last to match other errors. Part of #10700 (almost half done!) # User-Facing Changes None # Tests + Formatting - 🟢 `toolkit fmt` - 🟢 `toolkit clippy` - 🟢 `toolkit test` - 🟢 `toolkit test stdlib` # After Submitting N/A |
||
|
a324a50bb7
|
Convert FileNotFound to named fields (#11120)
# Description Part of #10700 # User-Facing Changes None # Tests + Formatting - 🟢 `toolkit fmt` - 🟢 `toolkit clippy` - 🟢 `toolkit test` - 🟢 `toolkit test stdlib` # After Submitting N/A |
||
|
1751ac12f4
|
allow multiple extensions (#10593)
# Description

This PR allows `open` to handle files with multiple extensions; i.e., it will try to call `from tar.gz`, then `from gz`, when calling

```nu
open file.tar.gz
```

# User-Facing Changes

No breaking changes. |
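Because `open` dispatches to `from <extension>` converters, a user-defined converter can target the compound extension described above. A hedged sketch (the command body is a placeholder, not a real archive parser):

```nushell
# Hypothetical converter for the compound extension; `open` tries
# `from tar.gz` first and only then falls back to `from gz`.
def "from tar.gz" [] {
    # Placeholder body: a real converter would decompress and parse the archive.
    $in | into binary
}

open file.tar.gz
```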
||
|
16453b6986
|
Making open case-insensitive to file extensions (#10451)
# Description

Closes #10441

Uses `String::to_lowercase()` when the file's extension `ext` is parsed, so that `from_decl(format!("from {ext}"))` returns the desired output regardless of extension case. It doesn't work with sqlite files since those are handled earlier in the parsing, but I think that is fine: there's no standard file extension used by sqlite, so a user will likely want case sensitivity in that case.

This also has the (possibly undesired) effect of making `open` completely case insensitive, e.g. `open foo.JSON` will work on a file named `foo.json` and vice versa. This is good on Windows, as it treats `foo.json` and `foo.JSON` as the same file, but may not be the desired behaviour on Unix. If this behaviour is undesired, I assume it could be fixed with a `#[cfg(not(unix))]` attribute on the `to_lowercase()` operation, but that produces slightly "uglier" code that I didn't wish to submit unless necessary.

old behaviour: (screenshot omitted) new behaviour: (screenshot omitted)

# User-Facing Changes

`open` will now present a table when `open`-ing files with capitalized extensions, rather than the file's raw data.

# Tests + Formatting

new test: `parses_file_with_uppercase_extension`, which tests the desired behaviour

---------

Co-authored-by: Stefan Holderbach <sholderbach@users.noreply.github.com> |
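A small, hedged example of the user-facing effect (the file name is hypothetical):

```nushell
# A file literally named data.JSON is now parsed as JSON and rendered as
# structured data, instead of being treated as an unknown format.
open data.JSON
```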
||
|
1c677c9577
|
Map DirectoryNotFound to FileNotFound for open command (#10089)
# Description This PR should close #10085 Maps `DirectoryNotFound` errors to `FileNotFound`. All other errors are left unchanged. # User-Facing Changes This means a user will see `FileNotFound` instead of `DirectoryNotFound`, which is more meaningful to the user. |
||
|
bb06661d24
|
Document that open looks up from subcommands (#10255)
# Description Related to https://github.com/nushell/nushell.github.io/pull/1048 Include this information in the command help. # User-Facing Changes As soon as this information is documented, people are much more likely to depend on it, so we need to be careful in the future about whether this design sparks joy or not. |
||
|
0ca49091c0
|
Add rest and glob support to 'open' (#8506)
# Description This adds two different features to `open`: * The ability to pass more than one file to `open`. * Support for using globs in the filenames `open` will create a list stream and stream the output if there is more than one file opened Examples: ``` open file1.csv file2.csv file3.csv ``` ``` open *.nu | where $it =~ "echo" ``` # User-Facing Changes Multi-file and glob support in `open`. Original `open` functionality should continue as before. # Tests + Formatting Don't forget to add tests that cover your changes. Make sure you've run and fixed any issues with these commands: - `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes) - `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A clippy::needless_collect` to check that you're using the standard code style - `cargo test --workspace` to check that all tests pass > **Note** > from `nushell` you can also use the `toolkit` as follows > ```bash > use toolkit.nu # or use an `env_change` hook to activate it automatically > toolkit check pr > ``` # After Submitting If your PR had any user-facing changes, update [the documentation](https://github.com/nushell/nushell.github.io) after the PR is merged, if necessary. This will help us keep the docs up to date. |
||
|
f7b8f97873
|
Document and critically review ShellError variants - Ep. 2 (#8326)
Continuation of #8229

# Description

The `ShellError` enum at the moment is kind of messy. Many variants are basic tuple structs where you always have to reference the implementation with its macro invocation to know which field serves which purpose. Furthermore, we have variants that are either somewhat redundant, overly broad to be useful for the user to match on, or overly specific with few uses.

So I set out to start fixing the lacking documentation and naming to make it feasible to critically review the individual usages and fix those. Furthermore, we can decide to join or split up variants that don't seem to be fit for purpose.

**Everyone:** Feel free to add review comments if you spot inconsistent use of `ShellError` variants.

- Name fields of `SE::IncorrectValue`
- Merge and name fields on `SE::TypeMismatch`
- Name fields on `SE::UnsupportedOperator`
- Name fields on `AssignmentRequires*` and fix doc
- Name fields on `SE::UnknownOperator`
- Name fields on `SE::MissingParameter`
- Name fields on `SE::DelimiterError`
- Name fields on `SE::IncompatibleParametersSingle`

# User-Facing Changes

(None now, end goal more explicit and consistent error messages)

# Tests + Formatting

(No additional tests needed so far) |
||
|
99076af18b
|
Use imported names in Command::run signatures (#7967)
# Description _(Thank you for improving Nushell. Please, check our [contributing guide](../CONTRIBUTING.md) and talk to the core team before making major changes.)_ I opened this PR to unify the run command method. It's mainly to improve consistency across the tree. # User-Facing Changes None. # Tests + Formatting Don't forget to add tests that cover your changes. Make sure you've run and fixed any issues with these commands: - `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes) - `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A clippy::needless_collect` to check that you're using the standard code style - `cargo test --workspace` to check that all tests pass # After Submitting If your PR had any user-facing changes, update [the documentation](https://github.com/nushell/nushell.github.io) after the PR is merged, if necessary. This will help us keep the docs up to date. |
||
|
ab480856a5
|
Use variable names directly in the format strings (#7906)
# Description Lint: `clippy::uninlined_format_args` More readable in most situations. (May be slightly confusing for modifier format strings https://doc.rust-lang.org/std/fmt/index.html#formatting-parameters) Alternative to #7865 # User-Facing Changes None intended # Tests + Formatting (Ran `cargo +stable clippy --fix --workspace -- -A clippy::all -D clippy::uninlined_format_args` to achieve this. Depends on Rust `1.67`) |
||
|
82ac590412
|
Progress bar Implementation (#7661)
# Description _(Description of your pull request goes here. **Provide examples and/or screenshots** if your changes affect the user experience.)_ I implemented the status bar we talk about yesterday. The idea was inspired by the progress bar of `wget`. I decided to go for the second suggestion by `@Reilly` > 2. add an Option<usize> or whatever to RawStream (and ListStream?) for situations where you do know the length ahead of time For now only works with the command `save` but after the approve of this PR we can see how we can implement it on commands like `cp` and `mv` When using `fetch` nushell will check if there is any `content-length` attribute in the request header. If so, then `fetch` will send it through the new `Option` variable in the `RawStream` to the `save`. If we know the total size we show the progress bar  but if we don't then we just show the stats like: data already saved, bytes per second, and time lapse.   Please let me know If I need to make any changes and I will be happy to do it. # User-Facing Changes A new flag (`--progress` `-p`) was added to the `save` command Examples: ```nu fetch https://github.com/torvalds/linux/archive/refs/heads/master.zip | save --progress -f main.zip fetch https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-desktop-amd64.iso | save --progress -f main.zip open main.zip --raw | save --progress main.copy ``` # Tests + Formatting Don't forget to add tests that cover your changes. Make sure you've run and fixed any issues with these commands: - `cargo fmt --all -- --check` to check standard code formatting (`cargo fmt --all` applies these changes) - `cargo clippy --workspace -- -D warnings -D clippy::unwrap_used -A clippy::needless_collect` to check that you're using the standard code style - `cargo test --workspace` to check that all tests pass - I am getting some errors and its weird because the errors are showing up in files i haven't touch. Is this normal? # After Submitting If your PR had any user-facing changes, update [the documentation](https://github.com/nushell/nushell.github.io) after the PR is merged, if necessary. This will help us keep the docs up to date. Co-authored-by: Reilly Wood <reilly.wood@icloud.com> |