
Conversation

@JoeKar
Collaborator

@JoeKar JoeKar commented Jul 16, 2025

This solves the problem of being unable to save backups for very long paths due to URL-escaped UTF-8 characters.

If we think SHA256 is a better option than MD5 (which it should be for new functions), then we simply need to rename it to SHA256.

Fixes #3794
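Roughly, the idea is to derive the backup file name from a fixed-length hash of the target path instead of escaping the path itself. A minimal sketch of that approach (the function and directory names here are illustrative, not the actual PR code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"path/filepath"
)

// hashedBackupPath maps an arbitrary target path to a fixed-length file
// name inside the backup directory, so the result never exceeds
// filesystem name limits regardless of how long the target path is.
func hashedBackupPath(backupDir, target string) string {
	sum := sha256.Sum256([]byte(target))
	return filepath.Join(backupDir, hex.EncodeToString(sum[:]))
}

func main() {
	fmt.Println(hashedBackupPath("/home/user/.config/micro/backups",
		"/tmp/some/very/long/target/path.txt"))
}

The trade-off discussed below is that a hash, unlike an escaped path, cannot be decoded back into the original path.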

@Andriamanitra
Contributor

I think it would be nice to include part of the path (maybe the last 10 characters or so) so the name gives at least some clue about the origin in case someone looks at the backups manually.

@usfbih8u
Contributor

I check the backups/ directory with a plugin to recover those backups. This plugin relies on the path being at least decodable. Could base64 be an alternative to the sha256/md5 functions for solving the underlying problem that this PR tries to address?

The paths would be stored in base64 at $HOME/.config/micro/backups/BASE64 and could be decoded if someone (me, for instance) wants to.

@JoeKar
Collaborator Author

JoeKar commented Jul 19, 2025

I thought about using base64 encoding as well, but even then the paths become quite large, especially with wide runes:
大 = 5aSn
This single character/rune results in four times the length.
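A quick way to see the expansion (just an illustration of the point above):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	s := "大" // a single rune, 3 bytes in UTF-8
	enc := base64.StdEncoding.EncodeToString([]byte(s))
	fmt.Println(enc, len(s), len(enc)) // prints: 5aSn 3 4
}

So every 3 bytes of UTF-8 become 4 base64 characters, i.e. a long path still grows by roughly a third, and a single wide rune turns into four characters.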

@usfbih8u
Contributor

I read the issue and reread the first comment. I totally ignored the "long path" part. I thought the issue was related only to UTF-8.

@JoeKar JoeKar force-pushed the fix/backup-path branch from a557994 to 910fc54 Compare July 23, 2025 20:04
if _, err := os.Stat(md5sum); err == nil {
return md5sum
runes := []rune(filepath.Base(path))
truncBaseName := string(runes[len(runes)-Min(len(runes), 16):])
Collaborator


Why the last 16 characters, not the first 16 characters?

Contributor


I suggested the last N characters so the file extension is kept, but in this implementation the md5 hash comes after the truncated name. I would probably swap the parts (<md5>_<last16> instead of <last16>_<md5>).

@dmaluka
Collaborator

dmaluka commented Jul 24, 2025

Human-readability and (especially) decodability are indeed solid arguments against this new approach. If the user accidentally discovers that there is a backup of some file saved in ~/.config/micro/backups, the user bloody wants to know which file that was. Whereas this new approach makes it not just hard but impossible, by the very definition of a hash function.

So, we need an encoding that would be:

  1. unambiguous
  2. decodable
  3. human-readable
  4. minimizing likelihood of exceeding file path length limits

The original pre-#3273 encoding (i.e. simply replacing slashes and colons with %) satisfied 2, 3 and 4 but had a problem with 1. The URL encoding, introduced in #3273, addressed 1 but introduced a problem with 4, and hurt 3 too. The hash encoding, introduced here, addresses both 1 and 4 but kills both 2 and 3.

So, an example encoding satisfying all 4 (i.e. similar to the original one but unambiguous):

/ -> %^
: -> %@
% -> %%

(I picked ^ and @ just because they don't seem to be special characters in bash, unlike characters like $ or & which would need to be escaped when typed in the shell command line.)
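For illustration, a minimal Go sketch of this kind of mapping (the replacer pairs mirror the table above; the helper names are mine, not micro's code):

package main

import (
	"fmt"
	"strings"
)

// strings.Replacer works in a single left-to-right pass, so doubling every
// literal % during escaping keeps the mapping unambiguous and reversible.
var (
	escapePath   = strings.NewReplacer("%", "%%", "/", "%^", ":", "%@")
	unescapePath = strings.NewReplacer("%%", "%", "%^", "/", "%@", ":")
)

func main() {
	p := "/home/user/100%_correct:file.txt"
	e := escapePath.Replace(p)
	fmt.Println(e)                       // %^home%^user%^100%%_correct%@file.txt
	fmt.Println(unescapePath.Replace(e)) // round-trips back to the original path
}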

BTW the original encoding escaped both / and : on Windows, but only / on Unix (since : is not special on Unix); still, I think we should escape both regardless of the OS. The user may boot both OSes on the same machine and use micro to work with the same files (and thus with the same backups) in both (or, for example, may copy the entire ~/.config/micro/, including backups, to a different machine with a different OS), so it's better to have the same encoding everywhere.

@niten94
Contributor

niten94 commented Jul 25, 2025

So, an example encoding satisfying all 4 (i.e. similar to the original one but unambiguous):

/ -> %^
: -> %@
% -> %%

(I picked ^ and @ just because they don't seem to be special characters in bash, unlike characters like $ or & which would need to be escaped when typed in the shell command line.)

If we want escaping to never be necessary, this may not be a valid criterion for picking characters, since cmd.exe and PowerShell on Windows interpret (most or all of) those characters anyway.

BTW the original encoding escaped both / and : on Windows, but only / on Unix (since : is not special on Unix); still, I think we should escape both regardless of the OS. The user may boot both OSes on the same machine and use micro to work with the same files (and thus with the same backups) in both (or, for example, may copy the entire ~/.config/micro/, including backups, to a different machine with a different OS), so it's better to have the same encoding everywhere.

Micro cannot automatically use the same backups as described in the former use case, since the FS structure and path syntax are too different between Unix and Windows.

However, I agree with the latter reason, the ability to store the same config directory under either OS, since this will prevent errors on Windows.

@niten94
Contributor

niten94 commented Jul 25, 2025

I didn't research or read #3273 due to the amount of time it would take, but I came up with two other methods for letting the user know the original path. Neither may be fully reliable, but both try to be cross-platform and to meet the conditions @dmaluka mentioned.

  1. Based on the hash encoding introduced in this PR, but also insert the path at the beginning of the backup file. If we want to handle paths containing \n while using it as the delimiter, the path has to be escaped (but Micro probably doesn't display it properly anyway).

  2. Based on the hash encoding introduced in this PR, but also store another file which only contains the original path. The filenames would be like <hash>_<last16> (content) and <hash>-original_path, with a different format to avoid a technically possible conflict.

    I originally thought about creating a symbolic link to the original file, but this isn't supported on FAT32 (unlike NTFS) and requires extra permissions by default on Windows.

Not really related, but I hope there's a tool to search comments in a PR like #3273.

Edit: The reason I came up with the two methods above is to completely avoid any path length limitation that can be encountered when using other programs.

@JoeKar
Collaborator Author

JoeKar commented Jul 25, 2025

  1. unambiguous
  2. decodable
  3. human-readable
  4. minimizing likelihood of exceeding file path length limits

From my point of view 3 isn't that important as long as 1, 2 & 4 are fulfilled; 3 follows from 2 as long as there is a mechanism to decode easily.
Currently I assume that one doesn't check the backup directory frequently.
Without compression 4 will become a problem, because:

[RANDOM_EDITOR] [MAX_POSSIBLE_PATH_LENGTH]

...might work, while...

micro [MAX_POSSIBLE_PATH_LENGTH]

...doesn't, due to the expansion to [BACKUP_DIR]+[MAX_POSSIBLE_PATH_LENGTH].

While we're at it with "compression", what about DEFLATEing the target path?
It fulfills 1, 2 & 4, and it can be turned into 3 on the command line and/or via micro -decode-backup-path [STRING].
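For what it's worth, a rough sketch of that DEFLATE idea (raw DEFLATE plus a filename-safe base64 alphabet; all names are mine, and note that short, high-entropy paths may not actually shrink much):

package main

import (
	"bytes"
	"compress/flate"
	"encoding/base64"
	"fmt"
	"io"
)

// encodeBackupName deflates the target path and renders the result with a
// filename-safe base64 alphabet.
func encodeBackupName(path string) (string, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.BestCompression)
	if err != nil {
		return "", err
	}
	if _, err := w.Write([]byte(path)); err != nil {
		return "", err
	}
	if err := w.Close(); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf.Bytes()), nil
}

// decodeBackupName reverses encodeBackupName, e.g. for a hypothetical
// `micro -decode-backup-path` helper.
func decodeBackupName(name string) (string, error) {
	raw, err := base64.RawURLEncoding.DecodeString(name)
	if err != nil {
		return "", err
	}
	r := flate.NewReader(bytes.NewReader(raw))
	defer r.Close()
	out, err := io.ReadAll(r)
	return string(out), err
}

func main() {
	enc, _ := encodeBackupName("/tmp/some/deeply/nested/build/output/generated_file_name.txt")
	dec, _ := decodeBackupName(enc)
	fmt.Println(enc)
	fmt.Println(dec)
}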

@Andriamanitra
Contributor

while it can be turned into 3 on the command-line and/or via micro -decode-backup-path [STRING].

The person looking at the file names will most likely be completely unaware that there's a command to decode them, and even if they know about it they wouldn't know which one(s) to decode. It's a lot of extra friction just to handle a rare edge case.

@usfbih8u
Contributor

usfbih8u commented Jul 25, 2025

I am quite certain that I will end up with backups that I don't recognize, and I'm unsure how to recover them without using grep -rn in some likely directories.

Another idea is to use a hash file similar to those used in GitHub releases.

BackupFilepath1 Hash1
BackupFilepath2 Hash2
BackupFilepath3 Hash3
BackupFilepath4 Hash4
BackupFilepath5 Hash5

If a BackupFilepath matches our path, retrieve the Hash and use it as the filename in the backups/ directory.
It does not need to be a hash; it could be backup-TIMESTAMP.txt.
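A rough sketch of how such an index file could be consulted (the format and file names are assumptions based on the lines above; since the second column never contains spaces, splitting on the last space tolerates spaces in paths):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// lookupBackupName scans an index file of "<target path> <name>" lines and
// returns the backup name registered for the given target path, if any.
func lookupBackupName(indexFile, target string) (string, bool, error) {
	f, err := os.Open(indexFile)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		i := strings.LastIndexByte(line, ' ')
		if i > 0 && line[:i] == target {
			return line[i+1:], true, nil
		}
	}
	return "", false, sc.Err()
}

func main() {
	name, ok, err := lookupBackupName(
		"/home/user/.config/micro/backups/index",
		"/tmp/some/very/long/target/path.txt")
	fmt.Println(name, ok, err)
}

If no entry exists, micro would append a new "<target path> <hash or timestamp>" line before writing the backup.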

@JoeKar
Collaborator Author

JoeKar commented Jul 25, 2025

[...], and even if they know about it they wouldn't know which one(s) to decode

Isn't that reflected in the following block?

const OverwriteFailMsg = `An error occurred while writing to the file:
%s
The file may be corrupted now. The good news is that it has been
successfully backed up. Next time you open this file with Micro,
Micro will ask if you want to recover it from the backup.
The backup path is:
%s`

It's a lot of extra friction just to handle a rare edge case.

Long paths aren't rare... they're quite common in build environments that are heavily nested and use generated file names.

So, an example encoding satisfying all 4 (i.e. similar to the original one but unambiguous):

/ -> %^
: -> %@
% -> %%

It is human readable, but is it interpretable without checking the documentation or code? It can't be used for copy & paste without modification either.

The additional lookup file could solve this, but it needs some more logic to keep it in sync.

@dmaluka
Collaborator

dmaluka commented Jul 25, 2025

If we want escaping to never be necessary, this may not be a valid criterion for picking characters, since cmd.exe and PowerShell on Windows interpret (most or all of) those characters anyway.

I'm not insisting on ^ and @ at all; any other characters allowed in filenames would suffice too.

Micro cannot automatically use the same backups as described in the former use case, since the FS structure and path syntax are too different between Unix and Windows.

Ah, indeed.

@dmaluka
Collaborator

dmaluka commented Jul 25, 2025

It is human readable, but is it interpretable without checking the documentation or code?

Sure. It is not hard to guess that %^home%^user%^foo%^bar.txt stands for /home/user/foo/bar.txt, right?

@dmaluka
Collaborator

dmaluka commented Jul 25, 2025

What is not quite clear to me: the original bug report in #3794 says:

  1. On a Windows system with LongPathsEnabled set to 1 in the registry, install and run a recent nightly build of Micro.
  2. Create a directory structure with a total path length exceeding 113 characters.

@yuhoocom are you sure this is accurate? From my brief googling it looks like with LongPathsEnabled the maximum path length is 32767 characters. It doesn't sound plausible that the URL encoding reduced this limit by a factor of 300. Even in the worst case (a path consisting entirely of 4-byte unicode characters) it would only reduce it by a factor of 12, right?

@Andriamanitra
Contributor

Andriamanitra commented Jul 26, 2025

What is not quite clear to me: the original bug report in #3794 says:

  1. On a Windows system with LongPathsEnabled set to 1 in the registry, install and run a recent nightly build of Micro.
  2. Create a directory structure with a total path length exceeding 113 characters.

@yuhoocom are you sure this is accurate? From my brief googling it looks like with LongPathsEnabled the maximum path length is 32767 characters. It doesn't sound plausible that the URL encoding reduced this limit by a factor of 300. Even in the worst case (a path consisting entirely of 4-byte unicode characters) it would only reduce it by a factor of 12, right?

The maximum file path length may be increased to 32767, but the maximum file name length is typically still only 255. According to the Microsoft Learn article on the maximum path length limitation:

The Windows API has many functions that also have Unicode versions to permit an extended-length path for a maximum total path length of 32,767 characters. This type of path is composed of components separated by backslashes, each up to the value returned in the lpMaximumComponentLength parameter of the GetVolumeInformation function (this value is commonly 255 characters).

@dmaluka
Collaborator

dmaluka commented Jul 26, 2025

Aaa, got it... we are talking about the length of the backup file name, which encodes a file path but is itself a file name, not a file path. So LongPathsEnabled doesn't matter here, while this lpMaximumComponentLength is what matters.

@JoeKar
Collaborator Author

JoeKar commented Jul 26, 2025

Exactly and...

/tmp/home/user/documents/hidden new folder/\"private\" stuff/do not open/really/go_away_there_is_nothing_to_see/why do you not stop here?/I warned you/then go ahead/if you think you are smarter/try_&_test_a_very_long_file_name_@10:01_to_be_100%_correct.txt

...results in...

vim (OK):
/tmp/%tmp%home%user%documents%hidden new folder%"private" stuff%do not open%really%go_away_there_is_nothing_to_see%why do you not stop here?%I warned you%then go ahead%if you think you are smarter%try_&_test_a_very_long_file_name_@10:01_to_be_100%_correct.txt

micro (NOK):
/home/user/.config/micro/backups/%^tmp%^home%^user%^documents%^hidden new folder%^"private" stuff%^do not open%^really%^go_away_there_is_nothing_to_see%^why do you not stop here?%^I warned you%^then go ahead%^if you think you are smarter%^try_&_test_a_very_long_file_name_@10%@01_to_be_100%%_correct.txt

Exchanging one character with two is already something we should prevent, since it unnecessarily extends the resulting file name. That was the reason why I came up with something like shortening or rather encoding the path/name, but I agree that it is not trivial to recover.

So which options do we have?
If we want neither hashing nor compression, we can only exchange each special character with a single replacement character.
Unfortunately Windows has one more character that needs to be replaced, which makes it even harder to prevent ambiguity.

Does someone know whether Windows can handle the following replacement characters: U+FFFD, U+FFFE & U+FFFF? (Though they would be too long as well.)

@dmaluka
Collaborator

dmaluka commented Jul 26, 2025

So, it seems like the only options that satisfy everyone are the following (or their variations):

  1. hash + "index file" (this file may not be used by micro itself, since the hash is already enough to match the file, but may be used by users or other tools)
  2. deflate

Now, regarding option 2: is there a compression algorithm with a compression ratio of 128 (32768 / 256)? :)

@JoeKar
Collaborator Author

JoeKar commented Jul 26, 2025

I don't think so, since 100(%) is already done by the DEL key.

@Andriamanitra
Contributor

Now, regarding option 2: is there a compression algorithm with a compression ratio of 128 (32768 / 256)? :)

A lossless algorithm that reduces the size of every possible input is mathematically impossible (pigeonhole principle); the compression ratio is irrelevant.

I would suggest keeping the old method (URL encoding) as-is for the happy path, but when the result is longer than N bytes [1]:

  1. Hash the URL encoded path
  2. Drop enough bytes from the beginning to make it fit in N bytes: fname = {hash}_{encoded_path[N-len(hash)-1:]}
  3. Store a lookup table (fname => encoded_path or hash => encoded_path) in a separate file

The advantage of this approach is that the behavior doesn't change at all in most cases, and even when the path is long the filename will be mostly human readable. If we use the full fname in the lookup table, even just a 16-bit hash (4 hexadecimal digits) would make collisions extremely unlikely. (A rough sketch of this naming scheme follows below the footnote.)

Footnotes

  1. N=255 seems like a safe bet: https://en.wikipedia.org/wiki/Comparison_of_file_systems#Limits
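A rough Go sketch of that naming scheme (url.PathEscape is only a stand-in for micro's actual escaping; the limit N=255 and the short hash length are assumptions, and the tail of the escaped path is kept as suggested above):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/url"
)

const maxNameLen = 255 // assumed per-component limit (N)

// backupName keeps the plain escaped path for short names and falls back to
// "<hash>_<tail of escaped path>" once the escaped name would exceed N.
func backupName(target string) string {
	escaped := url.PathEscape(target) // stand-in for micro's own escaping
	if len(escaped) <= maxNameLen {
		return escaped
	}
	sum := sha256.Sum256([]byte(target))
	hash := hex.EncodeToString(sum[:8]) // a short hash is enough with a lookup table
	keep := maxNameLen - len(hash) - 1  // room left after "<hash>_"
	// Slicing may split a %XX triple, which is harmless in a file name.
	return hash + "_" + escaped[len(escaped)-keep:]
}

func main() {
	fmt.Println(backupName("/tmp/short/path.txt"))
}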

@dmaluka
Collaborator

dmaluka commented Jul 26, 2025

Ok, maybe that's the way to go.

Small notes:

  1. Hash the URL encoded path

Or the original path?

  2. Drop enough bytes from the beginning to make it fit in N bytes: fname = {hash}_{encoded_path[N-len(hash)-1:]}

Using _ makes it ambiguous, since _ is allowed in URL encoding. So e.g. {hash}:{encoded_path} instead of {hash}_{encoded_path}?

...That being said,

The advantage of this approach is that the behavior doesn't change at all in most cases

We already deprecated one encoding once (while providing backward compatibility support for it); we can do that again. :) And I'm not very comfortable with keeping using URL encoding, since, as we've realized, it is pretty pointless to use. We really only need to encode the file separators (/ and :), while URL encoding encodes many more different characters, including any non-ASCII characters, thus in many cases makes backup file names much longer and much less readable than they need to be.

@Andriamanitra
Contributor

  1. Hash the URL encoded path

Or the original path?

I guess it doesn't make a difference; either one should work.

  2. Drop enough bytes from the beginning to make it fit in N bytes: fname = {hash}_{encoded_path[N-len(hash)-1:]}

Using _ makes it ambiguous, since _ is allowed in URL encoding. So e.g. {hash}:{encoded_path} instead of {hash}_{encoded_path}?

Wouldn't paths normally start with something like / or C:\ rather than anything that could be mistaken for a hash? But my idea was that it would be differentiated by length (len(fname) >= N -> includes a hash).

And I'm not very comfortable with keeping using URL encoding, since, as we've realized, it is pretty pointless to use. We really only need to encode the file separators (/ and :), while URL encoding encodes many more different characters, including any non-ASCII characters, thus in many cases makes backup file names much longer and much less readable than they need to be.

I think URL encoding is simple enough to encode/decode that it's not a big deal, and it could prevent bugs under some (admittedly unlikely) circumstances. If the backups are stored on a different file system than the file you're editing, it could have different restrictions. On FAT32 filenames can't use " * / : < > ? \ |, and the maximum length is 255 UCS-2 characters rather than bytes [1]. Truncating also becomes more treacherous if you may have arbitrary Unicode, as it needs to be done on a grapheme cluster boundary [2]. If we stick to ASCII we don't even need to think about these kinds of things.

Footnotes

  1. https://en.wikipedia.org/wiki/Long_filename#Limits

  2. Technically you could even have a valid file path /X where X is a single grapheme cluster that is 255 bytes long(!) so you can't truncate it. Although I don't think this is a case that we should be worried about, the point is that Unicode is weird and can cause unexpected issues.

@dmaluka
Collaborator

dmaluka commented Jul 27, 2025

If the backups are stored on a different file system than the file you're editing it could have different restrictions.

Good point.

Ok, let's keep using URL encoding then.

@JoeKar
Collaborator Author

JoeKar commented Aug 5, 2025

  2. Drop enough bytes from the beginning to make it fit in N bytes: fname = {hash}_{encoded_path[N-len(hash)-1:]}

The intention was to truncate to the length of N, right?
It will result in something like: fname = {hash}_{encoded_path[len(encoded_path)-N+len(hash)+1:]}

  3. Store a lookup table (fname => encoded_path or hash => encoded_path) in a separate file

In case we store a LUT we can point to the (target) path directly instead of the (URL) encoded one (or both):
fname => target path or hash => target path
The user doesn't need to decode the URL encoded path here.

In general:
How do we plan to handle dozens of backup files when permbackup is active?
We need to collect them on startup and have to sync them with the LUT, otherwise we end up in a mess with this LUT.

@JoeKar
Collaborator Author

JoeKar commented Aug 21, 2025

One more question:
Should this, or rather #3794, block v2.0.15?
I'd say yes, since it is a regression.

@dmaluka
Collaborator

dmaluka commented Aug 21, 2025

Yes, I think so too.

@niten94
Contributor

niten94 commented Sep 6, 2025

Wouldn't the LUT need to be read every time before doing a backup, due to the possibility that other Micro instances have written to it? That's the reason I suggested writing the path in a separate file (maybe in backups/paths/?) per file to be backed up, but I don't know whether this would be worse.

In general:
How do we plan to handle dozens of backup files when permbackup is active?
We need to collect them on startup and have to sync them with the LUT, otherwise we end up in a mess with this LUT.

Sorry, may I ask what you specifically mean by collecting backup files at startup and syncing them with the LUT?

@JoeKar
Collaborator Author

JoeKar commented Sep 6, 2025

Sorry, may I ask what you specifically mean by collecting backup files at startup and syncing them with the LUT?

I'll answer with your suggestion...

Wouldn't the LUT need to be read every time before doing a backup, due to the possibility that other Micro instances have written to it?

...because it is even better to do it just before writing.
Additionally, the backups could have been resolved/deleted manually outside [1] of micro, and in this scenario the LUT could be left out of date.

Anyway, we have to check all backups present in the backup directory, verify that all of them have entries in the LUT, and delete unnecessary entries.

It is quite a lot of new logic, which is only needed in the scenarios where someone needs to take care of these backup files manually.

Footnotes

  1. Being able to decode the backup file names manually was the reason for this LUT.

@JoeKar
Collaborator Author

JoeKar commented Sep 17, 2025

Also, the last quoted sentence seems to imply that backups without an entry should be added in the LUT.

Most probably we don't need to... assuming the presence of a backup implies that it was created by micro and thus already stored in the LUT by micro itself.

Still, maybe an argument to micro -clean could be accepted instead of always syncing, to perform the LUT sync only at desired times, for specified target paths or for all entries if none are given.

The command could probably instead do other operations as well, like deleting specified backups together with the entry. Tools that manage backups should use this command, and users could use it as well.

Might be something to think about.

@JoeKar JoeKar force-pushed the fix/backup-path branch 2 times, most recently from bc7419e to f6a4f7c Compare September 22, 2025 19:46
@JoeKar
Collaborator Author

JoeKar commented Sep 22, 2025

I pushed the first draft for tests.
Do we really still need to store the backup name in such a complex and long form when we have this lookup file?

@dmaluka
Collaborator

dmaluka commented Sep 22, 2025

I haven't followed the recent discussions, but it sounded like the complexity and overhead (and fragility) of maintaining a LUT file is not worth it, i.e. just fname = {hash}_{encoded_path[N-len(hash)-1:]} is enough?

@niten94
Contributor

niten94 commented Sep 23, 2025

I haven't followed the recent discussions, but it sounded like the complexity and overhead (and fragility) of maintaining a LUT file is not worth it, i.e. just fname = {hash}_{encoded_path[N-len(hash)-1:]} is enough?

It isn't enough; the LUT is provided to let users or programs determine the complete target path of a backup.

@niten94
Contributor

niten94 commented Sep 23, 2025

Sorry, please ignore the comment I just linked, since it might be referring to an earlier suggestion with less information in the backup filename.

@JoeKar
Collaborator Author

JoeKar commented Sep 23, 2025

I haven't followed the recent discussions, but it sounded like the complexity and overhead (and fragility) of maintaining a LUT file is not worth it, i.e. just fname = {hash}_{encoded_path[N-len(hash)-1:]} is enough?

It isn't enough; the LUT is provided to let users or programs determine the complete target path of a backup.

Now we have at least a proof of concept.

What I'm still struggling with is whether we need to truncate the full "happy path" when it exceeds the filename limit.
Once it is truncated it isn't that "happy" any longer and can't be restored without the help of a LUT. Since we have the LUT, we could directly use the hash alone as the full filename:

fname = hash instead of fname = hash + "_" + fname[length-fileNameLengthLimit+len(hash)+len(backupSuffix)+1:]

What do you think?

@dmaluka
Collaborator

dmaluka commented Sep 23, 2025

What I'm still struggling with is whether we need to truncate the full "happy path" when it exceeds the filename limit.

I'm not sure what you mean by "happy path" here, but I think @Andriamanitra by "happy path" meant precisely the case when the URL-encoded path does not exceed the filename limit. What else could it mean?

...Anyway, why don't we explore other options?

For example, we could store the full path inside the backup file itself. Or, probably better, we could store it in a separate file per backup file. For example, in {hash}_{encoded_path[N-len(hash)-1:]}.fullpath.

And thus avoid the scalability issues of having a single LUT file for all backups.

And, in line with what @Andriamanitra suggested, we don't need to do that in the happy case when the encoded path doesn't exceed the filename limit, so in most real cases the behavior wouldn't change from the existing one.

@dmaluka
Collaborator

dmaluka commented Sep 23, 2025

For example, in {hash}_{encoded_path[N-len(hash)-1:]}.fullpath.

Or, speaking of names, indeed, as @JoeKar noted, why stuff a part of the path into the backup file name if the full path is stored in a separate file anyway? So in such case the backup file name could be e.g. just {hash}.backup, and its path info file could be e.g. {hash}.path. (But again, in the happy case the backup file name could be still the URL-encoded path and wouldn't need the path info file.)
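A minimal sketch of how that could look (the {hash}.backup/{hash}.path names, the hash choice and the limit are taken from the discussion above, not from the merged code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"os"
	"path/filepath"
)

const fileNameLengthLimit = 255 // assumed per-component limit

// backupFile returns the file name to use under backupDir and, for long
// escaped names, writes a companion "<hash>.path" file holding the original
// target path so users and tools can resolve the hash later.
func backupFile(backupDir, target, escaped string) (string, error) {
	if len(escaped) <= fileNameLengthLimit {
		return filepath.Join(backupDir, escaped), nil
	}
	sum := md5.Sum([]byte(target))
	hash := hex.EncodeToString(sum[:])
	pathInfo := filepath.Join(backupDir, hash+".path")
	if err := os.WriteFile(pathInfo, []byte(target+"\n"), 0o644); err != nil {
		return "", err
	}
	return filepath.Join(backupDir, hash+".backup"), nil
}

func main() {
	name, err := backupFile("/home/user/.config/micro/backups",
		"/tmp/some/target.txt", "%2Ftmp%2Fsome%2Ftarget.txt")
	fmt.Println(name, err)
}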

@JoeKar
Collaborator Author

JoeKar commented Sep 24, 2025

So in such case the backup file name could be e.g. just {hash}.backup, and its path info file could be e.g. {hash}.path. (But again, in the happy case the backup file name could be still the URL-encoded path and wouldn't need the path info file.)

Sounds like the best option so far. I'll throw the LUT away...
Thanks for the additional exploration round.

@JoeKar JoeKar changed the title util: Hash the path in DetermineEscapePath() with precedence and rename it to DeterminePath() backup+util: Prevent too long backup file names with hashing and store an additional file to resolve the hashed path Sep 24, 2025
@JoeKar JoeKar changed the title backup+util: Prevent too long backup file names with hashing and store an additional file to resolve the hashed path backup+util: Prevent too long backup file names with hashing + resolve file Sep 24, 2025
name := filepath.Base(path)
if len(name) > fileNameLengthLimit {
dir := filepath.Dir(path)
path = filepath.Join(dir, HashStringMd5(path))
Collaborator


Just the hash, without any extension?

Can't we just adjust DetermineEscapePath() to use fileNameLengthLimit - len(".micro-backup") instead of fileNameLengthLimit?

Collaborator Author


You're right. No need to process this twice.
The only benefit was to have a slightly longer filename... now we hash earlier, even when the escaped path itself doesn't exceed the max filename limit.


newPath := b.Path != filename
if newPath {
b.RemoveBackup()
Collaborator


Good catch.

...in case the escaped path exceeds the file name length limit
… file

Since full escaped backup paths can become longer than the maximum filename size
and hashed filenames cannot be restored, it is helpful to have a lookup file for
the user to resolve the hashed path.
@JoeKar JoeKar merged commit 284942d into zyedidia:master Oct 19, 2025
6 checks passed
@JoeKar JoeKar deleted the fix/backup-path branch October 19, 2025 10:46


Development

Successfully merging this pull request may close these issues.

Nightly build fails to save files with long paths on Windows
