[pull] master from git:master #12
Open: pull wants to merge 10,000 commits into gl-prout:master from git:master.
Some of the static functions in `packfile.c` access global variables, which can simply be avoided by passing the `repository` struct down to them. Let's do that. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The function `odb_pack_name` currently relies on the global variable `the_repository`. To eliminate global variable usage in `packfile.c`, we should progressively shift the dependency on the_repository to higher layers. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The functions `has_object[_kept]_pack` currently rely on the global variable `the_repository`. To eliminate global variable usage in `packfile.c`, we should progressively shift the dependency on the_repository to higher layers. Let's remove its usage from these functions and any related ones. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The function `for_each_packed_object` currently relies on the global variable `the_repository`. To eliminate global variable usage in `packfile.c`, we should progressively shift the dependency on the_repository to higher layers. Let's remove its usage from this function and closely related function `is_promisor_object`. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
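The four messages above all follow the same mechanical pattern: a static helper in packfile.c stops reaching for `the_repository` and instead receives the repository from its (non-static) caller. A minimal, self-contained sketch of that shape, using hypothetical names rather than the actual packfile.c functions:

  /*
   * Sketch only: "struct repository" is reduced to a single stand-in
   * field, and helper()/entry_point() are hypothetical names.
   */
  struct repository {
          int pack_window_open;        /* stand-in for real repository state */
  };

  /* Before (sketch): static int helper(void) { return the_repository->...; } */

  /* After: the dependency is passed down explicitly by the caller. */
  static int helper(const struct repository *repo)
  {
          return repo->pack_window_open;
  }

  int entry_point(const struct repository *repo)
  {
          return helper(repo);         /* callers thread the repository through */
  }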
The `delta_base_cache_limit` variable is a global config variable used by multiple subsystems. Let's make it non-global by adding the variable independently to the subsystems where it is used. First, add the setting to the `repo_settings` struct; this provides access to the config in places where the repository is available. Use this in `packfile.c`. In `index-pack.c`, add it to the `pack_idx_option` struct and its constructor. While the repository struct is available here, it may not be set, because `git index-pack` can be used without a repository. In `gc.c`, add it to the `gc_config` struct and also to the constructor function. The gc functions currently do not have direct access to a repository struct. These changes remove the usage of `delta_base_cache_limit` as a global variable in `packfile.c`. This brings us one step closer to removing the `USE_THE_REPOSITORY_VARIABLE` definition in `packfile.c`, which we complete in the next patch. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
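As a rough illustration of where such a setting ends up living, here is a minimal sketch; the struct layouts, names, and constructor are illustrative stand-ins, not the real `repo_settings`, `pack_idx_option`, or `gc_config` definitions (only the 96 MiB default mirrors core.deltaBaseCacheLimit):

  #include <stddef.h>

  #define SKETCH_DELTA_BASE_CACHE_LIMIT (96u * 1024 * 1024)

  /* Illustrative per-subsystem homes for the former global. */
  struct repo_settings_sketch   { size_t delta_base_cache_limit; };
  struct pack_idx_option_sketch { size_t delta_base_cache_limit; };

  /*
   * index-pack may run without a repository, so its constructor falls
   * back to a hard-coded default instead of reading repo settings.
   */
  void pack_idx_option_sketch_init(struct pack_idx_option_sketch *opts)
  {
          opts->delta_base_cache_limit = SKETCH_DELTA_BASE_CACHE_LIMIT;
  }

  /* Code that does have a repository reads the limit from its settings. */
  size_t cache_limit_from_settings(const struct repo_settings_sketch *s)
  {
          return s->delta_base_cache_limit;
  }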
The variables `packed_git_window_size` and `packed_git_limit` are global config variables used in the `packfile.c` file. Since they are only used in this file, let's change them from global config variables to local variables for the subsystem. With this, we rid `packfile.c` of all global variable usage, which means we can also remove the `USE_THE_REPOSITORY_VARIABLE` guard from the file. Helped-by: Taylor Blau <[email protected]> Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The `multi_pack_index` struct represents the MIDX for a repository. Here, we add a pointer to the repository in this struct, allowing direct use of the repository variable without relying on the global `the_repository` struct. With this addition, we can determine the repository associated with a `bitmap_index` struct. A `bitmap_index` points to either a `packed_git` or a `multi_pack_index`, both of which have direct repository references. To support this, we introduce a static helper function, `bitmap_repo`, in `pack-bitmap.c`, which retrieves a repository given a `bitmap_index`. With this, we clear up all usages of `the_repository` within `pack-bitmap.c` and also remove the `USE_THE_REPOSITORY_VARIABLE` definition, bringing us another step closer to removing all global variable usage. Although this change also opens up the potential to clean up `midx.c`, doing so would require additional refactoring to pass the repository struct to functions where the MIDX struct is created: a task better suited for future patches. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
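The helper described above is small; a sketch of its shape, with the structs pared down to just the repository pointers (the real `packed_git`, `multi_pack_index`, and `bitmap_index` carry far more state than shown here):

  struct repository;

  struct packed_git_sketch       { struct repository *repo; };
  struct multi_pack_index_sketch { struct repository *repo; };

  struct bitmap_index_sketch {
          /* exactly one of these two is set for a given bitmap */
          struct packed_git_sketch *pack;
          struct multi_pack_index_sketch *midx;
  };

  static struct repository *bitmap_repo(struct bitmap_index_sketch *bitmap_git)
  {
          if (bitmap_git->midx)
                  return bitmap_git->midx->repo;
          return bitmap_git->pack->repo;
  }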
In 454ea2e (treewide: use get_all_packs, 2018-08-20) we converted existing calls to both:
  - get_packed_git(), as well as
  - the_repository->objects->packed_git,
to instead use the new get_all_packs() function. In the instance that this commit addresses, there was a preceding call to prepare_packed_git(), which dates all the way back to 660c889 (sha1_file: add for_each iterators for loose and packed objects, 2014-10-15) when its caller (for_each_packed_object()) was first introduced. This call could have been removed in 454ea2e, since get_all_packs() itself calls prepare_packed_git(). But the translation in 454ea2e was (to the best of my knowledge) a find-and-replace rather than inspecting each individual caller. Having an extra prepare_packed_git() call here is harmless, since it will notice that we have already set the 'packed_git_initialized' field and the call will be a noop. So we're only talking about a few dozen CPU cycles to set up and tear down the stack frame. But having a lone prepare_packed_git() call immediately before a call to get_all_packs() confused me, so let's remove it as redundant to avoid more confusion in the future. Signed-off-by: Taylor Blau <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The rev-list documentation doesn't mention that the given commit must be in the specified commit range, leading to unexpected results. Signed-off-by: Kai Koponen <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Commit da91a90 (fast-import: disallow more path components, 2024-11-30) added two separate verify_path() calls (one for added/modified files, and one for renames/copies). But our tests only exercise the first one. Let's protect ourselves against regressions by tweaking one of the tests to rename into the bad path. There are adjacent tests that will stay as additions, so now both calls are covered. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Give a bit of advice/hint when "git maintenance" stops after finding a lock file left by another instance that is potentially still running.
* ps/gc-stale-lock-warning:
  t7900: fix host-dependent behaviour when testing git-maintenance(1)
  builtin/gc: provide hint when maintenance hits a stale schedule lock
Leakfixes.
* ps/leakfixes-part-10: (27 commits)
  t: remove TEST_PASSES_SANITIZE_LEAK annotations
  test-lib: unconditionally enable leak checking
  t: remove unneeded !SANITIZE_LEAK prerequisites
  t: mark some tests as leak free
  t5601: work around leak sanitizer issue
  git-compat-util: drop now-unused `UNLEAK()` macro
  global: drop `UNLEAK()` annotation
  t/helper: fix leaking commit graph in "read-graph" subcommand
  builtin/branch: fix leaking sorting options
  builtin/init-db: fix leaking directory paths
  builtin/help: fix leaks in `check_git_cmd()`
  help: fix leaking return value from `help_unknown_cmd()`
  help: fix leaking `struct cmdnames`
  help: refactor to not use globals for reading config
  builtin/sparse-checkout: fix leaking sanitized patterns
  split-index: fix memory leak in `move_cache_to_base_index()`
  git: refactor builtin handling to use a `struct strvec`
  git: refactor alias handling to use a `struct strvec`
  strvec: introduce new `strvec_splice()` function
  line-log: fix leak when rewriting commit parents
  ...
The migration procedure between two ref backends has been optimized.
* ps/ref-backend-migration-optim:
  reftable: rename scratch buffer
  refs: adapt `initial_transaction` flag to be unsigned
  reftable/block: optimize allocations by using scratch buffer
  reftable/block: rename `block_writer::buf` variable
  reftable/writer: optimize allocations by using a scratch buffer
  refs: don't normalize log messages with `REF_SKIP_CREATE_REFLOG`
  refs: skip collision checks in initial transactions
  refs: use "initial" transaction semantics to migrate refs
  refs/files: support symbolic and root refs in initial transaction
  refs: introduce "initial" transaction flag
  refs/files: move logic to commit initial transaction
  refs: allow passing flags when setting up a transaction
"git fsck" learned to issue warnings on "curiously formatted" ref contents that have always been taken valid but something Git wouldn't have written itself (e.g., missing terminating end-of-line after the full object name). * sj/ref-contents-check: ref: add symlink ref content check for files backend ref: check whether the target of the symref is a ref ref: add basic symref content check for files backend ref: add more strict checks for regular refs ref: port git-fsck(1) regular refs check for files backend ref: support multiple worktrees check for refs ref: initialize ref name outside of check functions ref: check the full refname instead of basename ref: initialize "fsck_ref_report" with zero
A trivial "correctness" fix that does not yet matter in practice. * tb/boundary-traversal-fix: pack-bitmap.c: typofix in `find_boundary_objects()`
Use the right helper program to measure file size in performance tests. * tb/use-test-file-size-more: t/perf: use 'test_file_size' in more places
Work around Coverity warning that would not trigger in practice. * ps/bisect-double-free-fix: bisect: address Coverity warning about potential double free
Built-in Git subcommands are supplied the repository object to work with; they learned to do the same when they invoke sub-subcommands. * kn/pass-repo-to-builtin-sub-sub-commands: builtin: pass repository to sub commands
Drop support for older libcURL and Perl.
* bc/drop-ancient-libcurl-and-perl:
  gitweb: make use of s///r
  Require Perl 5.26.0
  INSTALL: document requirement for libcurl 7.61.0
  git-curl-compat: remove check for curl 7.56.0
  git-curl-compat: remove check for curl 7.53.0
  git-curl-compat: remove check for curl 7.52.0
  git-curl-compat: remove check for curl 7.44.0
  git-curl-compat: remove check for curl 7.43.0
  git-curl-compat: remove check for curl 7.39.0
  git-curl-compat: remove check for curl 7.34.0
  git-curl-compat: remove check for curl 7.25.0
  git-curl-compat: remove check for curl 7.21.5
Documentation mark-up updates.
* ja/git-diff-doc-markup:
  doc: git-diff: apply format changes to config part
  doc: git-diff: apply format changes to diff-generate-patch
  doc: git-diff: apply format changes to diff-format
  doc: git-diff: apply format changes to diff-options
  doc: git-diff: apply new documentation guidelines
Signed-off-by: Junio C Hamano <[email protected]>
* kn/the-repository:
  packfile.c: remove unnecessary prepare_packed_git() call
  midx: add repository to `multi_pack_index` struct
  config: make `packed_git_(limit|window_size)` non-global variables
  config: make `delta_base_cache_limit` a non-global variable
  packfile: pass down repository to `for_each_packed_object`
  packfile: pass down repository to `has_object[_kept]_pack`
  packfile: pass down repository to `odb_pack_name`
  packfile: pass `repository` to static function in the file
  packfile: use `repository` from `packed_git` directly
  packfile: add repository to struct `packed_git`
…wo-the-repository
* kn/pass-repo-to-builtin-sub-sub-commands:
  builtin: pass repository to sub commands
  Git 2.47.1
  Makefile(s): avoid recipe prefix in conditional statements
  doc: switch links to https
  doc: update links to current pages
  The eleventh batch
  pack-objects: only perform verbatim reuse on the preferred pack
  t5332-multi-pack-reuse.sh: demonstrate duplicate packing failure
  test-lib: move malloc-debug setup after $PATH setup
  builtin/difftool: intialize some hashmap variables
  refspec: store raw refspecs inside refspec_item
  refspec: drop separate raw_nr count
  fetch: adjust refspec->raw_nr when filtering prefetch refspecs
  test-lib: check malloc debug LD_PRELOAD before using
In 'midx-write.c' there are many static functions that use the global variables `the_repository` or `the_hash_algo`. In a follow-up commit, the repository variable will be added to `write_midx_context`, which some of the functions can use. But for functions which do not have access to this struct, pass down the required information from the non-static functions `write_midx_file` and `write_midx_file_only`. This requires that the function `hash_to_hex` is also replaced with `hash_to_hex_algop`, since the former internally accesses the global variable `the_hash_algo`. This ensures that the usage of global variables is limited to these non-static functions, which will be cleaned up in a follow-up commit. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The function `read_refs_snapshot()` uses `parse_oid_hex()`, which relies on the global `the_hash_algo` variable. Let's instead use `parse_oid_hex_algop()` and provide the hash algo via `revs->repo`. Also, while here, fix a missing newline after the function's definition. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
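The underlying idea is simply that the parser is told how long an object ID is instead of consulting a global hash-algo variable. A self-contained sketch of that idea with hypothetical names (this is not Git's real `object_id`/`parse_oid_hex_algop()` API):

  #include <stddef.h>

  struct hash_algo_sketch { size_t hexsz; };           /* e.g. 40 for SHA-1 */
  struct object_id_sketch { unsigned char hash[32]; };

  static int hexval(char c)
  {
          if (c >= '0' && c <= '9') return c - '0';
          if (c >= 'a' && c <= 'f') return c - 'a' + 10;
          return -1;
  }

  /* Consume exactly algo->hexsz hex digits; 0 on success, -1 on error. */
  int parse_id_hex(const char *hex, struct object_id_sketch *oid,
                   const char **end, const struct hash_algo_sketch *algo)
  {
          size_t i;

          for (i = 0; i < algo->hexsz; i += 2) {
                  int hi = hexval(hex[i]);
                  int lo = hi < 0 ? -1 : hexval(hex[i + 1]);

                  if (hi < 0 || lo < 0)
                          return -1;
                  oid->hash[i / 2] = (unsigned char)(hi << 4 | lo);
          }
          *end = hex + algo->hexsz;
          return 0;
  }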
The struct `write_midx_context` is used to pass context for creating MIDX files. Add the repository field here to ensure that most functions within `midx-write.c` have access to the field and can use that instead of the global `the_repository` variable. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
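A bare-bones sketch of the "context struct carries the repository" idea; the context shown here is heavily simplified compared to the real `write_midx_context`, and the helper name is invented for illustration:

  #include <stddef.h>

  struct repository;

  struct write_midx_context_sketch {
          struct repository *repo;     /* set once by write_midx_file[_only]() */
          unsigned int nr;             /* ...plus whatever state the writer keeps */
  };

  /* Static helpers read ctx->repo instead of the global the_repository. */
  int have_repo(const struct write_midx_context_sketch *ctx)
  {
          return ctx->repo != NULL;
  }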
In a previous commit, we passed the repository field to all subcommands in the `builtin/` directory. Utilize this to pass the repository field down to the `write_midx_file[_only]` functions to remove the usage of the `the_repository` global variable. With this, all usage of global variables in `midx-write.c` is removed; hence, remove the `USE_THE_REPOSITORY_VARIABLE` guard from the file. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
In the `midx.c` file, there are multiple usages of `the_repository` and `the_hash_algo` within static functions of the file. Some of these usages can simply be swapped out with the available `repository` struct, while others can be swapped out by passing the repository to the required functions. This leaves only some other usages of `the_repository` and `the_hash_algo` in non-static functions of the file, which we'll tackle in upcoming commits. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The `load_multi_pack_index` function in midx uses the `the_repository` variable to access the `repository` struct. Modify the function and its callers to pass down the `repository` field. This moves usage of `the_repository` to the `test-read-midx.c` file. While that is not optimal, it is okay, since the upcoming commits will slowly move the usage of `the_repository` up the layers and remove it eventually. Signed-off-by: Karthik Nayak <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Now that we generate man pages, articles, and the user manual with Meson, the only thing still missing in an installation of HTML documents is a couple of static files. Wire these up to finalize Meson's support for generating HTML documentation. Diffing an installation that uses our Makefile against an installation that uses Meson only surfaces a couple of discrepancies now:
- Meson doesn't install "everyday.html" and "git-remote-helpers.html". These files are marked as obsolete and don't contain any useful information anymore: they simply point to their modern equivalents.
- Meson doesn't install "*.txt" files when asking for HTML docs. I'm not sure why our Makefiles do this in the first place, and it does seem like the resulting installation is fully functional even without those files.
Other than that, both layout and file contents are the exact same. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The "check-meson" target uses process substitution to check whether extracted contents from "meson.build" match expected contents. Process substitution is unportable though and thus the target will fail when using for example Dash. Fix this by writing data into a temporary directory. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Wire up sanity checks for Meson to verify that no man pages are missing. This check is similar to the same check we already have for our tests. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
When realloc(3) fails, it returns NULL and keeps the original allocation intact. REFTABLE_ALLOC_GROW overwrites both the original pointer and the allocation count variable in that case, simultaneously leaking the original allocation and misrepresenting the number of storable items. parse_names() and reftable_buf_add() avoid leaking by restoring the original pointer value on failure, but all other callers seem to be OK with losing the old allocation. Add a new variant of the macro, REFTABLE_ALLOC_GROW_OR_NULL, which plugs the leak and zeros the allocation counter. Use it for those callers. Signed-off-by: René Scharfe <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
When realloc(3) fails, it returns NULL and keeps the original allocation intact. REFTABLE_ALLOC_GROW overwrites both the original pointer and the allocation count variable in that case, simultaneously leaking the original allocation and misrepresenting the number of storable items. parse_names() avoids the leak by keeping the original pointer if reallocation fails, but still increases the allocation count in such a case as if it succeeded. That's OK, because the error handling code just frees everything and doesn't look at names_cap anymore. reftable_buf_add() does the same, but here it is a problem as it leaves the reftable_buf in a broken state, with ->alloc being roughly twice as big as the actually allocated memory, allowing out-of-bounds writes in subsequent calls. Reimplement REFTABLE_ALLOC_GROW to avoid leaks, keep allocation counts in sync, and still signal failures to callers while avoiding code duplication in callers. Make it an expression that evaluates to 0 if no reallocation is needed or it succeeded, and to 1 on failure while keeping the original pointer and allocation counter values. Adjust REFTABLE_ALLOC_GROW_OR_NULL to the new calling convention for REFTABLE_ALLOC_GROW, but keep its support for non-size_t alloc variables for now. Signed-off-by: René Scharfe <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
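A self-contained sketch of the reallocation-growth behaviour the two messages above describe, written as a plain helper function rather than the actual reftable macro: it returns 0 when nothing needs to be done or the reallocation succeeded, returns 1 on failure, and in the failure case leaves both the pointer and the allocation counter untouched. The name and the growth policy are illustrative:

  #include <stdlib.h>

  int grow_array(void **slot, size_t needed, size_t *alloc, size_t elem_size)
  {
          void *tmp;
          size_t new_alloc;

          if (needed <= *alloc)
                  return 0;                    /* already big enough */

          new_alloc = 2 * *alloc + 16;         /* simple doubling policy */
          if (new_alloc < needed)
                  new_alloc = needed;

          tmp = realloc(*slot, new_alloc * elem_size);
          if (!tmp)
                  return 1;                    /* keep old pointer and old *alloc */

          *slot = tmp;
          *alloc = new_alloc;
          return 0;
  }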
Check the final reallocation for adding the terminating NULL and handle it just like those in the loop. Simply use REFTABLE_ALLOC_GROW instead of keeping the REFTABLE_REALLOC_ARRAY call and adding code to preserve the original pointer value around it. Signed-off-by: René Scharfe <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Check reallocation errors in unit tests, like everywhere else. Signed-off-by: René Scharfe <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
The developer documentation has been updated to give the latest info on gitk and git-gui maintainer. * as/gitk-git-gui-repo-update: Update the official repo of gitk
The Meson-based build without GitWeb failed the self tests.
* ps/meson-test-wo-gitweb:
  meson: enable auto-discovered "gitweb"
  GIT-BUILD-OPTIONS: wire up NO_GITWEB option
  GIT-BUILD-OPTIONS: sort variables alphabetically
When storing output in test-results/, we usually give each numbered run in a --stress set its own output file. But we don't do that for storing LSan logs, so something like:
  ./t0003-attributes.sh --stress
will have many scripts simultaneously creating, writing to, and deleting the test-results/t0003-attributes.leak directory. This can cause logs from one run to be attributed to another, spurious failures when creation and deletion race, and so on. This has always been broken, but nobody noticed because it's rare to do a --stress run with LSan (since the point is for the code to run quickly many times in order to hit races). But if you're trying to find a race in the leak sanitizing code, it makes sense to use these together. We can fix it by using $TEST_RESULTS_BASE, which already incorporates the stress job suffix. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
This reverts commit 993d38a. That commit was trying to solve a race between LSan setting up a thread's stack and another thread calling exit(), by making sure that all pthread_create() calls have finished before doing any work that might trigger the exit(). But that isn't sufficient. The setup code actually runs in the individual threads themselves, not in the spawning thread's call to pthread_create(). So while it may have improved the race a bit, you can still trigger it pretty quickly with:
  make SANITIZE=leak
  cd t
  ./t5309-pack-delta-cycles.sh --stress
Let's back out that failed attempt so we can try again. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
One thread primitive we don't yet support is a barrier: it waits for all threads to reach a synchronization point before letting any of them continue. This would be useful for avoiding the LSan race we see in index-pack (and other places) by having all threads complete their initialization before any of them start to do real work. POSIX introduced a pthread_barrier_t in 2004, which does what we want. But if we want to rely on it:
1. Our Windows pthread emulation would need a new set of wrapper functions. There's a Synchronization Barrier primitive there, which was introduced in Windows 8 (which is old enough for us to depend on).
2. macOS (and possibly other systems) has pthreads but not pthread_barrier_t. So there we'd have to implement our own barrier based on the mutex and cond primitives.
Those are do-able, but since we only care about avoiding races in our LSan builds, there's an easier way: make it a noop on systems without a native pthread barrier. This patch introduces a "maybe_thread_barrier" API. The clunky name (rather than just using pthread_barrier directly) should hopefully clue people in that on some systems it will do nothing. It's wired to a Makefile knob which has to be triggered manually, and we enable it for the linux-leaks CI jobs (since we know we'll have it there). There are some other possible options:
- we could turn it on all the time for Linux systems based on uname. But we really only care about it for LSan builds, and there is no need to add extra code to regular builds.
- we could turn it on only for LSan builds. But that would break builds on non-Linux platforms (like macOS) that otherwise should support sanitizers.
- we could trigger only on the combination of Linux and LSan together. This isn't too hard to do, but the uname check isn't completely accurate. It is really about what your libc supports, and non-glibc systems might not have it (though at least musl seems to). So we'd risk breaking builds on those systems, which would need to add a new knob. Though the upside would be that running local "make SANITIZE=leak test" would be protected automatically.
And of course none of this protects LSan runs from races on systems without pthread barriers. It's probably OK in practice to protect only our CI jobs, though. The race is rare-ish and most leak-checking happens through CI. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
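In other words, the API degrades to a no-op where no native barrier exists. A sketch of that fallback shape; the macro spellings and the THREAD_BARRIER_PTHREAD guard are taken from the description above and the test invocations below, but treat them as approximations rather than the exact definitions in thread-utils:

  #include <stddef.h>

  #ifdef THREAD_BARRIER_PTHREAD
  #include <pthread.h>

  typedef pthread_barrier_t maybe_thread_barrier_t;
  #define maybe_thread_barrier_init(b, n) pthread_barrier_init((b), NULL, (n))
  #define maybe_thread_barrier_wait(b)    pthread_barrier_wait(b)

  #else
  /* No native barrier available: every call quietly does nothing. */
  typedef int maybe_thread_barrier_t;
  #define maybe_thread_barrier_init(b, n) do { (void)(b); (void)(n); } while (0)
  #define maybe_thread_barrier_wait(b)    do { (void)(b); } while (0)
  #endif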
We sometimes get false positives from our linux-leaks CI job because of a race in LSan itself. The problem is that one thread is still initializing its stack in LSan's code (and allocating memory to do so) while another thread calls die(), taking down the whole process and triggering a leak check. The problem is described in more detail in 993d38a (index-pack: spawn threads atomically, 2024-01-05), which tried to fix it by pausing worker threads until all calls to pthread_create() had completed. But that's not enough to fix the problem, because the LSan setup code runs in the threads themselves. So even though pthread_create() has returned, we have no idea if all threads actually finished their setup before letting any of them do real work. We can fix that by using a barrier inside the threads themselves, waiting for all of them to hit the start of their main function before any of them proceed. You can test for the race by running:
  make SANITIZE=leak THREAD_BARRIER_PTHREAD=YesOnLinux
  cd t
  ./t5309-pack-delta-cycles.sh --stress
which fails quickly before this patch, and should run indefinitely with it. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
There's a race with LSan when spawning threads and one of the threads calls die(). We worked around one such problem with index-pack in the previous commit, but it exists in git-grep, too. You can see it with:
  make SANITIZE=leak THREAD_BARRIER_PTHREAD=YesOnLinux
  cd t
  ./t0003-attributes.sh --stress
which fails pretty quickly with:
  ==git==4096424==ERROR: LeakSanitizer: detected memory leaks
  Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 0x7f906de14556 in realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
    #1 0x7f906dc9d2c1 in __pthread_getattr_np nptl/pthread_getattr_np.c:180
    #2 0x7f906de2500d in __sanitizer::GetThreadStackTopAndBottom(bool, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:150
    #3 0x7f906de25187 in __sanitizer::GetThreadStackAndTls(bool, unsigned long*, unsigned long*, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:614
    #4 0x7f906de17d18 in __lsan::ThreadStart(unsigned int, unsigned long long, __sanitizer::ThreadType) ../../../../src/libsanitizer/lsan/lsan_posix.cpp:53
    #5 0x7f906de143a9 in ThreadStartFunc<false> ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:431
    #6 0x7f906dc9bf51 in start_thread nptl/pthread_create.c:447
    #7 0x7f906dd1a677 in __clone3 ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
As with the previous commit, we can fix this by inserting a barrier that makes sure all threads have finished their setup before continuing. But there's one twist in this case: the thread which calls die() is not one of the worker threads, but the main thread itself! So we need the main thread to wait in the barrier, too, until all threads have gotten to it. And thus we initialize the barrier for num_threads+1, to account for all of the worker threads plus the main one. If we then test as above, t0003 should run indefinitely. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
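A compact, self-contained sketch of that num_threads+1 arrangement (compile with -pthread on a platform that provides pthread_barrier_t; the thread count and names are illustrative, not the git-grep code):

  #include <pthread.h>
  #include <stddef.h>

  #define NUM_WORKERS 4

  static pthread_barrier_t start_barrier;

  static void *worker(void *arg)
  {
          (void)arg;
          /* Block until every worker and the main thread reach this point. */
          pthread_barrier_wait(&start_barrier);
          /* ... real work that might eventually call die() ... */
          return NULL;
  }

  int main(void)
  {
          pthread_t threads[NUM_WORKERS];
          int i;

          /* +1 so the main thread participates in the barrier, too. */
          pthread_barrier_init(&start_barrier, NULL, NUM_WORKERS + 1);

          for (i = 0; i < NUM_WORKERS; i++)
                  pthread_create(&threads[i], NULL, worker, NULL);

          /* The main thread waits here until all workers finish their setup. */
          pthread_barrier_wait(&start_barrier);

          for (i = 0; i < NUM_WORKERS; i++)
                  pthread_join(threads[i], NULL);

          pthread_barrier_destroy(&start_barrier);
          return 0;
  }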
In 1b9e9be (csum-file.c: use unsafe SHA-1 implementation when available, 2024-09-26) we have converted our `struct hashfile` to use the unsafe SHA1 backend, which results in a significant speedup. One needs to be careful with how to use that structure now, though, because callers need to consistently use either the safe or unsafe variants of SHA1, as otherwise one can easily trigger corruption. As it turns out, we have one inconsistent usage in our tree because we directly initialize `struct hashfile_checkpoint::ctx` with the safe variant of SHA1, but end up writing to that context with the unsafe ones. This went unnoticed so far because our CI systems do not exercise different hash functions for these two backends, and consequently safe and unsafe variants are equivalent. But when using SHA1DC as the safe and OpenSSL as the unsafe backend this leads to a crash in t1050:
  ++ git -c core.compression=0 add large1
  AddressSanitizer:DEADLYSIGNAL
  =================================================================
  ==1367==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000040 (pc 0x7ffff7a01a99 bp 0x507000000db0 sp 0x7fffffff5690 T0)
  ==1367==The signal is caused by a READ memory access.
  ==1367==Hint: address points to the zero page.
    #0 0x7ffff7a01a99 in EVP_MD_CTX_copy_ex (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4)
    #1 0x555555ddde56 in openssl_SHA1_Clone ../sha1/openssl.h:40:2
    #2 0x555555dce2fc in git_hash_sha1_clone_unsafe ../object-file.c:123:2
    #3 0x555555c2d5f8 in hashfile_checkpoint ../csum-file.c:211:2
    #4 0x555555b9905d in deflate_blob_to_pack ../bulk-checkin.c:286:4
    #5 0x555555b98ae9 in index_blob_bulk_checkin ../bulk-checkin.c:362:15
    #6 0x555555ddab62 in index_blob_stream ../object-file.c:2756:9
    #7 0x555555dda420 in index_fd ../object-file.c:2778:9
    #8 0x555555ddad76 in index_path ../object-file.c:2796:7
    #9 0x555555e947f3 in add_to_index ../read-cache.c:771:7
    #10 0x555555e954a4 in add_file_to_index ../read-cache.c:804:9
    #11 0x5555558b5c39 in add_files ../builtin/add.c:355:7
    #12 0x5555558b412e in cmd_add ../builtin/add.c:578:18
    #13 0x555555b1f493 in run_builtin ../git.c:480:11
    #14 0x555555b1bfef in handle_builtin ../git.c:740:9
    #15 0x555555b1e6f4 in run_argv ../git.c:807:4
    #16 0x555555b1b87a in cmd_main ../git.c:947:19
    #17 0x5555561649e6 in main ../common-main.c:64:11
    #18 0x7ffff742a1fb in __libc_start_call_main (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a1fb) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4)
    #19 0x7ffff742a2b8 in __libc_start_main@GLIBC_2.2.5 (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a2b8) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4)
    #20 0x555555772c84 in _start (git+0x21ec84)
  ==1367==Register values:
  rax = 0x0000511000001080 rbx = 0x0000000000000000 rcx = 0x000000000000000c rdx = 0x0000000000000000
  rdi = 0x0000000000000000 rsi = 0x0000507000000db0 rbp = 0x0000507000000db0 rsp = 0x00007fffffff5690
  r8 = 0x0000000000000000 r9 = 0x0000000000000000 r10 = 0x0000000000000000 r11 = 0x00007ffff7a01a30
  r12 = 0x0000000000000000 r13 = 0x00007fffffff6b38 r14 = 0x00007ffff7ffd000 r15 = 0x00005555563b9910
  AddressSanitizer can not provide additional info.
  SUMMARY: AddressSanitizer: SEGV (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) in EVP_MD_CTX_copy_ex
  ==1367==ABORTING
  ./test-lib.sh: line 1023: 1367 Aborted git $config add large1
  error: last command exited with $?=134
  not ok 4 - add with -c core.compression=0
Fix the issue by using the unsafe variant instead. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Same as with the preceding commit, git-fast-import(1) is using the safe variant to initialize a hashfile checkpoint. This leads to a segfault when passing the checkpoint into the hashfile subsystem because it would use the unsafe variants instead:
  ++ git --git-dir=R/.git fast-import --big-file-threshold=1
  AddressSanitizer:DEADLYSIGNAL
  =================================================================
  ==577126==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000040 (pc 0x7ffff7a01a99 bp 0x5070000009c0 sp 0x7fffffff5b30 T0)
  ==577126==The signal is caused by a READ memory access.
  ==577126==Hint: address points to the zero page.
    #0 0x7ffff7a01a99 in EVP_MD_CTX_copy_ex (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4)
    #1 0x555555ddde56 in openssl_SHA1_Clone ../sha1/openssl.h:40:2
    #2 0x555555dce2fc in git_hash_sha1_clone_unsafe ../object-file.c:123:2
    #3 0x555555c2d5f8 in hashfile_checkpoint ../csum-file.c:211:2
    #4 0x5555559647d1 in stream_blob ../builtin/fast-import.c:1110:2
    #5 0x55555596247b in parse_and_store_blob ../builtin/fast-import.c:2031:3
    #6 0x555555967f91 in file_change_m ../builtin/fast-import.c:2408:5
    #7 0x55555595d8a2 in parse_new_commit ../builtin/fast-import.c:2768:4
    #8 0x55555595bb7a in cmd_fast_import ../builtin/fast-import.c:3614:4
    #9 0x555555b1f493 in run_builtin ../git.c:480:11
    #10 0x555555b1bfef in handle_builtin ../git.c:740:9
    #11 0x555555b1e6f4 in run_argv ../git.c:807:4
    #12 0x555555b1b87a in cmd_main ../git.c:947:19
    #13 0x5555561649e6 in main ../common-main.c:64:11
    #14 0x7ffff742a1fb in __libc_start_call_main (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a1fb) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4)
    #15 0x7ffff742a2b8 in __libc_start_main@GLIBC_2.2.5 (/nix/store/65h17wjrrlsj2rj540igylrx7fqcd6vq-glibc-2.40-36/lib/libc.so.6+0x2a2b8) (BuildId: bf320110569c8ec2425e9a0c5e4eb7e97f1fb6e4)
    #16 0x555555772c84 in _start (git+0x21ec84)
  ==577126==Register values:
  rax = 0x0000511000000cc0 rbx = 0x0000000000000000 rcx = 0x000000000000000c rdx = 0x0000000000000000
  rdi = 0x0000000000000000 rsi = 0x00005070000009c0 rbp = 0x00005070000009c0 rsp = 0x00007fffffff5b30
  r8 = 0x0000000000000000 r9 = 0x0000000000000000 r10 = 0x0000000000000000 r11 = 0x00007ffff7a01a30
  r12 = 0x0000000000000000 r13 = 0x00007fffffff6b60 r14 = 0x00007ffff7ffd000 r15 = 0x00005555563b9910
  AddressSanitizer can not provide additional info.
  SUMMARY: AddressSanitizer: SEGV (/nix/store/h1ydpxkw9qhjdxjpic1pdc2nirggyy6f-openssl-3.3.2/lib/libcrypto.so.3+0x201a99) (BuildId: 41746a580d39075fc85e8c8065b6c07fb34e97d4) in EVP_MD_CTX_copy_ex
  ==577126==ABORTING
  ./test-lib.sh: line 1039: 577126 Aborted git --git-dir=R/.git fast-import --big-file-threshold=1 < input
  error: last command exited with $?=134
  not ok 167 - R: blob bigger than threshold
The segfault is only exposed in case the unsafe and safe backends are different from one another. Fix the issue by initializing the context with the unsafe SHA1 variant. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
In the preceding commit we have fixed a segfault when using an unsafe SHA1 backend that is different from the safe one. This segfault went unnoticed only because we never set up an unsafe backend in our CI systems. Fix this omission by setting `OPENSSL_SHA1_UNSAFE` in our TEST-vars job. Signed-off-by: Patrick Steinhardt <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Test modernization. * ms/t7611-test-path-is-file: t7611: replace test -f with test_path_is* helpers
Signed-off-by: Junio C Hamano <[email protected]>
The custom allocator code in the reftable library did not handle failing realloc() very well, which has been addressed.
* rs/reftable-realloc-errors:
  t-reftable-merged: handle realloc errors
  reftable: handle realloc error in parse_names()
  reftable: fix allocation count on realloc error
  reftable: avoid leaks on realloc error
An earlier "csum-file checksum does not have to be computed with sha1dc" topic had a few code paths that had initialized an implementation of a hash function to be used by an unmatching hash by mistake, which have been corrected. * ps/weak-sha1-for-tail-sum-fix: ci: exercise unsafe OpenSSL backend builtin/fast-import: fix segfault with unsafe SHA1 backend bulk-checkin: fix segfault with unsafe SHA1 backend
CI jobs that run threaded programs under LSan have been giving false positives from time to time, which has been worked around.
* jk/lsan-race-with-barrier:
  grep: work around LSan threading race with barrier
  index-pack: work around LSan threading race with barrier
  thread-utils: introduce optional barrier type
  Revert "index-pack: spawn threads atomically"
  test-lib: use individual lsan dir for --stress runs
Signed-off-by: Junio C Hamano <[email protected]>
The extra "barrier" approach was too much code whose sole purpose was to work around a race that is not even ours (i.e. in LSan's teardown code). In preparation for queuing a solution taking a much-less-invasive approach, let's revert them.
When we run with sanitizers, we set abort_on_error=1 so that the tests themselves can detect problems directly (when the buggy program exits with SIGABRT). This has one blind spot, though: we don't always check the exit codes for all programs (e.g., helpers like upload-pack invoked behind the scenes). For ASan and UBSan this is mostly fine; they exit as soon as they see an error, so the unexpected abort of the program causes the test to fail anyway. But for LSan, the program runs to completion, since we can only check for leaks at the end. And in that case we could miss leak reports. And thus we started checking LSan logs in faececa (test-lib: have the "check" mode for SANITIZE=leak consider leak logs, 2022-07-28). Originally the logs were optional, but logs are generated (and checked) always as of 8c1d669 (test-lib: GIT_TEST_SANITIZE_LEAK_LOG enabled by default, 2024-07-11). And we even check them for each test snippet, as of cf14643 (test-lib: check for leak logs after every test, 2024-09-24). So now aborting on error is superfluous for LSan! We can get everything we need by checking the logs. And checking the logs is actually preferable, since it gives us more control over silencing false positives (something we do not yet do, but will soon). So let's tell LSan to just exit normally, even if it finds leaks. We can do so with exitcode=0, which also suppresses the abort_on_error flag. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
We have a function to count the number of leaks found (actually, it is the number of processes which produced a log file). Once upon a time we cared about seeing if this number increased between runs. But we simplified that away in 95c679a (test-lib: stop showing old leak logs, 2024-09-24), and now we only care if it returns any results or not. In preparation for refactoring it further, let's drop the counting function entirely, and roll it into the "is it empty" check. The outcome should be the same, but we'll be free to return a boolean "did we find anything" without worrying about somebody adding a new call to the counting function. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
When we check the leak logs, our original strategy was to check for any non-empty log file produced by LSan. We later amended that to ignore noisy lines in 370ef7e (test-lib: ignore uninteresting LSan output, 2023-08-28). This makes it hard to ignore noise which is more than a single line; we'd have to actually parse the file to determine the meaning of each line. But there's an easy line-oriented solution. Because we always pass the dedup_token_length option, the output will contain a DEDUP_TOKEN line for each leak that has been found. So if we invert our strategy to stop ignoring useless lines and only look for useful ones, we can just count the number of DEDUP_TOKEN lines. If it's non-zero, then we found at least one leak (it would even give us a count of unique leaks, but we really only care if it is non-zero). This should yield the same outcome, but will help us build more false positive detection on top. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
Our CI jobs sometimes see false positive leaks like this:
  =================================================================
  ==3904583==ERROR: LeakSanitizer: detected memory leaks
  Direct leak of 32 byte(s) in 1 object(s) allocated from:
    #0 0x7fa790d01986 in __interceptor_realloc ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:98
    #1 0x7fa790add769 in __pthread_getattr_np nptl/pthread_getattr_np.c:180
    #2 0x7fa790d117c5 in __sanitizer::GetThreadStackTopAndBottom(bool, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:150
    #3 0x7fa790d11957 in __sanitizer::GetThreadStackAndTls(bool, unsigned long*, unsigned long*, unsigned long*, unsigned long*) ../../../../src/libsanitizer/sanitizer_common/sanitizer_linux_libcdep.cpp:598
    #4 0x7fa790d03fe8 in __lsan::ThreadStart(unsigned int, unsigned long long, __sanitizer::ThreadType) ../../../../src/libsanitizer/lsan/lsan_posix.cpp:51
    #5 0x7fa790d013fd in __lsan_thread_start_func ../../../../src/libsanitizer/lsan/lsan_interceptors.cpp:440
    #6 0x7fa790adc3eb in start_thread nptl/pthread_create.c:444
    #7 0x7fa790b5ca5b in clone3 ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
This is not a leak in our code, but appears to be a race between one thread calling exit() and another one that is still in LSan's stack setup code. You can reproduce it easily by running t0003 or t5309 with --stress (these trigger it because of the threading in git-grep and index-pack respectively). This may be a bug in LSan, but regardless of whether it is eventually fixed, it is useful to work around it so that we stop seeing these false positives. We can recognize it by the mention of the sanitizer functions in the DEDUP_TOKEN line. With this patch, the scripts mentioned above should run with --stress indefinitely. Signed-off-by: Jeff King <[email protected]> Signed-off-by: Junio C Hamano <[email protected]>
CI jobs that run threaded programs under LSan have been giving false positives from time to time, which has been worked around. This is an alternative to the jk/lsan-race-with-barrier topic with a much smaller change to the production code.
* jk/lsan-race-ignore-false-positive:
  test-lib: ignore leaks in the sanitizer's thread code
  test-lib: check leak logs for presence of DEDUP_TOKEN
  test-lib: simplify leak-log checking
  test-lib: rely on logs to detect leaks
  Revert barrier-based LSan threading race workaround
The build procedure based on Meson learned to generate HTML documentation pages.
* ps/build-meson-html:
  Documentation: wire up sanity checks for Meson
  t/Makefile: make "check-meson" work with Dash
  meson: install static files for HTML documentation
  meson: generate articles
  Documentation: refactor "howto-index.sh" for out-of-tree builds
  Documentation: refactor "api-index.sh" for out-of-tree builds
  meson: generate user manual
  Documentation: inline user-manual.conf
  meson: generate HTML pages for all man page categories
  meson: fix generation of merge tools
  meson: properly wire up dependencies for our docs
  meson: wire up support for AsciiDoctor