Merged
25 commits
d61ff9c
Merge branch 'ps/object-file-cleanup' into ps/object-store-cleanup
gitster Apr 24, 2025
cbc1d8e
ci: update the message for unavailable third-party software
gitster Apr 25, 2025
956acbe
ci: download JGit from maven, not eclipse.org
gitster Apr 25, 2025
2cfe054
meson: report detected runtime executable paths
pks-t Apr 25, 2025
4cba20f
meson: prefer shell at "/bin/sh"
pks-t Apr 25, 2025
d235c46
send-email: retrieve Message-ID from outlook SMTP server
AdityaGarg8 Apr 25, 2025
89d557b
test-tool: add pack-deltas helper
derrickstolee Apr 28, 2025
fd7fd7a
t5309: create failing test for 'git index-pack'
derrickstolee Apr 28, 2025
98f8854
index-pack: allow revisiting REF_DELTA chains
derrickstolee Apr 28, 2025
daec3c0
send-email: add --[no-]outlook-id-fix option
AdityaGarg8 Apr 29, 2025
ddb28da
object-store: move `struct packed_git` into "packfile.h"
pks-t Apr 29, 2025
56ef85e
object-store: drop `loose_object_path()`
pks-t Apr 29, 2025
0b8ed25
object-store: move and rename `odb_pack_keep()`
pks-t Apr 29, 2025
1a79326
object-store: move function declarations to their respective subsystems
pks-t Apr 29, 2025
f8fc4ca
object-store: allow fetching objects via `has_object()`
pks-t Apr 29, 2025
062b914
treewide: convert users of `repo_has_object_file()` to `has_object()`
pks-t Apr 29, 2025
8a9e27b
object-store: drop `repo_has_object_file()`
pks-t Apr 29, 2025
03f2915
xdiff: disable cleanup_records heuristic with --minimal
yndolg Apr 29, 2025
38758be
Merge branch 'ag/send-email-outlook'
gitster May 12, 2025
bd99d6e
Merge branch 'ps/object-store-cleanup'
gitster May 12, 2025
a9d67d6
Merge branch 'jc/ci-skip-unavailable-external-software'
gitster May 12, 2025
6dbc416
Merge branch 'ds/fix-thin-fix'
gitster May 12, 2025
a4ad13d
Merge branch 'ng/xdiff-truly-minimal'
gitster May 12, 2025
b8cc1a9
Merge branch 'ps/meson-bin-sh'
gitster May 12, 2025
38af977
The thirteenth batch
gitster May 12, 2025
15 changes: 15 additions & 0 deletions Documentation/RelNotes/2.50.0.adoc
Original file line number Diff line number Diff line change
@@ -53,6 +53,15 @@ UI, Workflows & Features

* The build procedure installs bash (but not zsh) completion script.

* send-email has been updated to work better with Outlook's SMTP server.

* "git diff --minimal" used to give non-minimal output when its
optimization kicked in; that optimization is now disabled.

* "git index-pack --fix-thin" used to abort to prevent a cycle in
delta chains from forming in a corner case even when there is no
such cycle.


Performance, Internal Implementation, Development Support etc.
--------------------------------------------------------------
@@ -134,6 +143,8 @@ Performance, Internal Implementation, Development Support etc.

* Add an equivalent to "make hdr-check" target to meson based builds.

* Further code clean-up in the object-store layer.


Fixes since v2.49
-----------------
@@ -261,6 +272,10 @@ Fixes since v2.49
now detected and the command errors out.
(merge 974f0d4664 ps/mv-contradiction-fix later to maint).

* Further refinement on CI messages when an optional external
software is unavailable (e.g. due to third-party service outage).
(merge 956acbefbd jc/ci-skip-unavailable-external-software later to maint).

* Other code cleanup, docfix, build fix, etc.
(merge 227c4f33a0 ja/doc-block-delimiter-markup-fix later to maint).
(merge 2bfd3b3685 ab/decorate-code-cleanup later to maint).
13 changes: 13 additions & 0 deletions Documentation/git-send-email.adoc
@@ -115,6 +115,19 @@ illustration below where `[PATCH v2 0/3]` is in reply to `[PATCH 0/2]`:
Only necessary if --compose is also set. If --compose
is not set, this will be prompted for.

--[no-]outlook-id-fix::
Microsoft Outlook SMTP servers discard the Message-ID sent via email and
assign a new random Message-ID, thus breaking threads.
+
With `--outlook-id-fix`, 'git send-email' uses a mechanism specific to
Outlook servers to learn the Message-ID the server assigned to fix the
threading. Use it only when you know that the server reports the
rewritten Message-ID the same way as Outlook servers do.
+
Without this option specified, the fix is done by default when talking
to 'smtp.office365.com' or 'smtp-mail.outlook.com'. Use
`--no-outlook-id-fix` to disable even when talking to these two servers.

--subject=<string>::
Specify the initial subject of the email thread.
Only necessary if --compose is also set. If --compose
1 change: 1 addition & 0 deletions Makefile
@@ -819,6 +819,7 @@ TEST_BUILTINS_OBJS += test-mergesort.o
TEST_BUILTINS_OBJS += test-mktemp.o
TEST_BUILTINS_OBJS += test-name-hash.o
TEST_BUILTINS_OBJS += test-online-cpus.o
TEST_BUILTINS_OBJS += test-pack-deltas.o
TEST_BUILTINS_OBJS += test-pack-mtimes.o
TEST_BUILTINS_OBJS += test-parse-options.o
TEST_BUILTINS_OBJS += test-parse-pathspec-file.o
3 changes: 2 additions & 1 deletion builtin/cat-file.c
@@ -169,7 +169,8 @@ static int cat_one_file(int opt, const char *exp_type, const char *obj_name,
goto cleanup;

case 'e':
ret = !repo_has_object_file(the_repository, &oid);
ret = !has_object(the_repository, &oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR);
goto cleanup;

case 'w':
4 changes: 1 addition & 3 deletions builtin/clone.c
@@ -504,9 +504,7 @@ static void write_followtags(const struct ref *refs, const char *msg)
continue;
if (ends_with(ref->name, "^{}"))
continue;
if (!repo_has_object_file_with_flags(the_repository, &ref->old_oid,
OBJECT_INFO_QUICK |
OBJECT_INFO_SKIP_FETCH_OBJECT))
if (!has_object(the_repository, &ref->old_oid, 0))
continue;
refs_update_ref(get_main_ref_store(the_repository), msg,
ref->name, &ref->old_oid, NULL, 0,
2 changes: 1 addition & 1 deletion builtin/count-objects.c
@@ -12,7 +12,7 @@
#include "parse-options.h"
#include "quote.h"
#include "packfile.h"
#include "object-store.h"
#include "object-file.h"

static unsigned long garbage;
static off_t size_garbage;
3 changes: 2 additions & 1 deletion builtin/fast-import.c
@@ -811,7 +811,8 @@ static char *keep_pack(const char *curr_index_name)
int keep_fd;

odb_pack_name(pack_data->repo, &name, pack_data->hash, "keep");
keep_fd = odb_pack_keep(name.buf);
keep_fd = safe_create_file_with_leading_directories(pack_data->repo,
name.buf);
if (keep_fd < 0)
die_errno("cannot create keep file");
write_or_die(keep_fd, keep_msg, strlen(keep_msg));
15 changes: 7 additions & 8 deletions builtin/fetch.c
@@ -337,7 +337,6 @@ static void find_non_local_tags(const struct ref *refs,
struct string_list_item *remote_ref_item;
const struct ref *ref;
struct refname_hash_entry *item = NULL;
const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;

refname_hash_init(&existing_refs);
refname_hash_init(&remote_refs);
@@ -367,9 +366,9 @@
*/
if (ends_with(ref->name, "^{}")) {
if (item &&
!repo_has_object_file_with_flags(the_repository, &ref->old_oid, quick_flags) &&
!has_object(the_repository, &ref->old_oid, 0) &&
!oidset_contains(&fetch_oids, &ref->old_oid) &&
!repo_has_object_file_with_flags(the_repository, &item->oid, quick_flags) &&
!has_object(the_repository, &item->oid, 0) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);
item = NULL;
* fetch.
*/
if (item &&
!repo_has_object_file_with_flags(the_repository, &item->oid, quick_flags) &&
!has_object(the_repository, &item->oid, 0) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);

* checked to see if it needs fetching.
*/
if (item &&
!repo_has_object_file_with_flags(the_repository, &item->oid, quick_flags) &&
!has_object(the_repository, &item->oid, 0) &&
!oidset_contains(&fetch_oids, &item->oid))
clear_item(item);

@@ -911,7 +910,8 @@ static int update_local_ref(struct ref *ref,
struct commit *current = NULL, *updated;
int fast_forward = 0;

if (!repo_has_object_file(the_repository, &ref->new_oid))
if (!has_object(the_repository, &ref->new_oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
die(_("object %s not found"), oid_to_hex(&ref->new_oid));

if (oideq(&ref->old_oid, &ref->new_oid)) {
@@ -1330,8 +1330,7 @@ static int check_exist_and_connected(struct ref *ref_map)
* we need all direct targets to exist.
*/
for (r = rm; r; r = r->next) {
if (!repo_has_object_file_with_flags(the_repository, &r->old_oid,
OBJECT_INFO_SKIP_FETCH_OBJECT))
if (!has_object(the_repository, &r->old_oid, HAS_OBJECT_RECHECK_PACKED))
return -1;
}

2 changes: 1 addition & 1 deletion builtin/gc.c
@@ -28,7 +28,7 @@
#include "commit.h"
#include "commit-graph.h"
#include "packfile.h"
#include "object-store.h"
#include "object-file.h"
#include "pack.h"
#include "pack-objects.h"
#include "path.h"
65 changes: 35 additions & 30 deletions builtin/index-pack.c
@@ -892,9 +892,8 @@ static void sha1_object(const void *data, struct object_entry *obj_entry,

if (startup_info->have_repository) {
read_lock();
collision_test_needed =
repo_has_object_file_with_flags(the_repository, oid,
OBJECT_INFO_QUICK);
collision_test_needed = has_object(the_repository, oid,
HAS_OBJECT_FETCH_PROMISOR);
read_unlock();
}

@@ -1109,8 +1108,8 @@ static void *threaded_second_pass(void *data)
set_thread_data(data);
for (;;) {
struct base_data *parent = NULL;
struct object_entry *child_obj;
struct base_data *child;
struct object_entry *child_obj = NULL;
struct base_data *child = NULL;

counter_lock();
display_progress(progress, nr_resolved_deltas);
parent = list_first_entry(&work_head, struct base_data,
list);

if (parent->ref_first <= parent->ref_last) {
while (parent->ref_first <= parent->ref_last) {
int offset = ref_deltas[parent->ref_first++].obj_no;
child_obj = objects + offset;
if (child_obj->real_type != OBJ_REF_DELTA)
die("REF_DELTA at offset %"PRIuMAX" already resolved (duplicate base %s?)",
(uintmax_t) child_obj->idx.offset,
oid_to_hex(&parent->obj->idx.oid));
if (child_obj->real_type != OBJ_REF_DELTA) {
child_obj = NULL;
continue;
}
child_obj->real_type = parent->obj->real_type;
} else {
break;
}

if (!child_obj && parent->ofs_first <= parent->ofs_last) {
child_obj = objects +
ofs_deltas[parent->ofs_first++].obj_no;
assert(child_obj->real_type == OBJ_OFS_DELTA);
@@ -1178,29 +1180,32 @@
}
work_unlock();

if (parent) {
child = resolve_delta(child_obj, parent);
if (!child->children_remaining)
FREE_AND_NULL(child->data);
} else {
child = make_base(child_obj, NULL);
if (child->children_remaining) {
/*
* Since this child has its own delta children,
* we will need this data in the future.
* Inflate now so that future iterations will
* have access to this object's data while
* outside the work mutex.
*/
child->data = get_data_from_pack(child_obj);
child->size = child_obj->size;
if (child_obj) {
if (parent) {
child = resolve_delta(child_obj, parent);
if (!child->children_remaining)
FREE_AND_NULL(child->data);
} else {
child = make_base(child_obj, NULL);
if (child->children_remaining) {
/*
* Since this child has its own delta children,
* we will need this data in the future.
* Inflate now so that future iterations will
* have access to this object's data while
* outside the work mutex.
*/
child->data = get_data_from_pack(child_obj);
child->size = child_obj->size;
}
}
}

work_lock();
if (parent)
parent->retain_data--;
if (child->data) {

if (child && child->data) {
/*
* This child has its own children, so add it to
* work_head.
Expand All @@ -1209,7 +1214,7 @@ static void *threaded_second_pass(void *data)
base_cache_used += child->size;
prune_base_data(NULL);
free_base_data(child);
} else {
} else if (child) {
/*
* This child does not have its own children. It may be
* the last descendant of its ancestors; free those
@@ -1565,7 +1570,7 @@ static void write_special_file(const char *suffix, const char *msg,
else
filename = odb_pack_name(the_repository, &name_buf, hash, suffix);

fd = odb_pack_keep(filename);
fd = safe_create_file_with_leading_directories(the_repository, filename);
if (fd < 0) {
if (errno != EEXIST)
die_errno(_("cannot write %s file '%s'"),
4 changes: 3 additions & 1 deletion builtin/receive-pack.c
@@ -1506,7 +1506,9 @@ static const char *update(struct command *cmd, struct shallow_info *si)
}
}

if (!is_null_oid(new_oid) && !repo_has_object_file(the_repository, new_oid)) {
if (!is_null_oid(new_oid) &&
!has_object(the_repository, new_oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR)) {
error("unpack should have generated %s, "
"but I can't find it!", oid_to_hex(new_oid));
ret = "bad pack";
3 changes: 2 additions & 1 deletion builtin/remote.c
@@ -454,7 +454,8 @@ static int get_push_ref_states(const struct ref *remote_refs,
info->status = PUSH_STATUS_UPTODATE;
else if (is_null_oid(&ref->old_oid))
info->status = PUSH_STATUS_CREATE;
else if (repo_has_object_file(the_repository, &ref->old_oid) &&
else if (has_object(the_repository, &ref->old_oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR) &&
ref_newer(&ref->new_oid, &ref->old_oid))
info->status = PUSH_STATUS_FASTFORWARD;
else
3 changes: 2 additions & 1 deletion builtin/show-ref.c
@@ -35,7 +35,8 @@ static void show_one(const struct show_one_options *opts,
const char *hex;
struct object_id peeled;

if (!repo_has_object_file(the_repository, oid))
if (!has_object(the_repository, oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
die("git show-ref: bad ref %s (%s)", refname,
oid_to_hex(oid));

3 changes: 2 additions & 1 deletion builtin/unpack-objects.c
@@ -449,7 +449,8 @@ static void unpack_delta_entry(enum object_type type, unsigned long delta_size,
delta_data = get_data(delta_size);
if (!delta_data)
return;
if (repo_has_object_file(the_repository, &base_oid))
if (has_object(the_repository, &base_oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
; /* Ok we have this one */
else if (resolve_against_held(nr, &base_oid,
delta_data, delta_size))
3 changes: 2 additions & 1 deletion bulk-checkin.c
@@ -130,7 +130,8 @@ static void flush_batch_fsync(void)
static int already_written(struct bulk_checkin_packfile *state, struct object_id *oid)
{
/* The object may already exist in the repository */
if (repo_has_object_file(the_repository, oid))
if (has_object(the_repository, oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
return 1;

/* Might want to keep the list sorted */
13 changes: 9 additions & 4 deletions cache-tree.c
@@ -238,7 +238,9 @@ int cache_tree_fully_valid(struct cache_tree *it)
int i;
if (!it)
return 0;
if (it->entry_count < 0 || !repo_has_object_file(the_repository, &it->oid))
if (it->entry_count < 0 ||
!has_object(the_repository, &it->oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
return 0;
for (i = 0; i < it->subtree_nr; i++) {
if (!cache_tree_fully_valid(it->down[i]->cache_tree))
@@ -289,7 +291,9 @@ static int update_one(struct cache_tree *it,
}
}

if (0 <= it->entry_count && repo_has_object_file(the_repository, &it->oid))
if (0 <= it->entry_count &&
has_object(the_repository, &it->oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))
return it->entry_count;

/*
@@ -395,7 +399,8 @@ static int update_one(struct cache_tree *it,
ce_missing_ok = mode == S_IFGITLINK || missing_ok ||
!must_check_existence(ce);
if (is_null_oid(oid) ||
(!ce_missing_ok && !repo_has_object_file(the_repository, oid))) {
(!ce_missing_ok && !has_object(the_repository, oid,
HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR))) {
strbuf_release(&buffer);
if (expected_missing)
return -1;
@@ -443,7 +448,7 @@ static int update_one(struct cache_tree *it,
struct object_id oid;
hash_object_file(the_hash_algo, buffer.buf, buffer.len,
OBJ_TREE, &oid);
if (repo_has_object_file_with_flags(the_repository, &oid, OBJECT_INFO_SKIP_FETCH_OBJECT))
if (has_object(the_repository, &oid, HAS_OBJECT_RECHECK_PACKED))
oidcpy(&it->oid, &oid);
else
to_invalidate = 1;