Add index scan #2351

Open
sandy-bes wants to merge 2 commits into apache:master from sandy-bes:add_index_scan

Conversation

@sandy-bes sandy-bes commented Mar 3, 2026

Motivation / Problem:
As a result of load testing, a significant performance degradation was found in insertion scenarios. The scenarios used were taken from an open-source benchmark and rewritten in pure SQL. Examples of the queries can be found here:

  1. https://github.com/ldbc/ldbc_snb_interactive_v1_impls/blob/main/cypher/queries/interactive-update-1.cypher
  2. https://github.com/ldbc/ldbc_snb_interactive_v1_impls/blob/main/cypher/queries/interactive-update-6.cypher
  3. https://github.com/ldbc/ldbc_snb_interactive_v1_impls/blob/main/cypher/queries/interactive-update-7.cypher

Perf analysis showed that the main bottleneck is the entity_exists function. The root cause lies in the use of a Sequential Scan (SeqScan) to check for the existence of an entity prior to insertion. The time complexity of a SeqScan is O(N), meaning the search time grows linearly as the number of rows in the table increases. The larger the graph became, the longer each individual insertion took. This led to a drop in TPS regardless of the concurrency level (the issue was consistently reproduced with both 1 and 30 threads).

Profile Analysis (Flame Graphs & perf): I am attaching Flame Graphs for comparison.

  • Before Patch: The Flame Graph shows a massive "plateau" for the entity_exists function and its underlying sequential scan operations.
[Flame graph: before patch]
  • After Patch: The "plateau" of entity_exists has disappeared, confirming that the bottleneck is resolved.
[Flame graph: after patch]

Changes Made:

  • Added an index scan (IndexScan, time complexity O(log N)) inside the entity_exists function.
  • Refactored other functions that relied on SeqScan; they were also migrated to IndexScan wherever applicable.

Performance Impact:
Benchmarks were conducted on a server with 30 CPU cores and 32 GB of RAM, using a graph ranging from 20,000 to 200,000 objects over a 2-minute duration. The transition to index access completely eliminated the performance degradation associated with data volume growth:

  • Before: ~1,500 TPS (at peak, with subsequent degradation as the table grew).
  • After: Stable ~15,000 TPS (a 10x speedup).

Here are the scripts and commands for local reproduction:

1. Graph creation and population:
The sf parameter here defines the scale factor of the generated graph.
psql -d your_database -f generate_graph.sql -v sf=1
generate_graph.sql

2. Creating wrapper functions for the workload:
This script sets up the environment and creates functions with the correct parameter mapping for Apache AGE.
psql -d your_database -f setup_func_for_workload.sql
setup_func_for_workload.sql

3. Running the load test:
Running 8 clients for 120 seconds.
pgbench -d your_database -f workload_update.sql -D sf=1 -c 8 -T 120 -P 5
workload_update.sql

Acknowledgments:

  • Huge thanks to Daria Barsukova for conducting the load testing and isolating the issue.
  • Implementation of index scanning: Alexandra Bondar.

Alexandra Bondar and others added 2 commits March 3, 2026 14:03
This commit fixes performance degradation during insertion scenarios by replacing SeqScan with IndexScan.

Co-authored-by: Daria Barsukova <d.barsukova@g.nsu.ru>
Co-authored-by: Alexandra Bondar <s6311704@gmail.com>
Copilot AI left a comment

Pull request overview

This PR addresses a load-test regression in which insertion throughput degraded as label tables grew, replacing O(N) sequential scans with index-backed lookups (primarily around entity_exists and other hot paths that check entity/label presence).

Changes:

  • Switch entity_exists() to use an index scan when a suitable index is available (fallback to seq scan otherwise).
  • Refactor several other table lookups (vertex fetch, label enumeration, SET/DELETE code paths) to prefer index scans.
  • Add index discovery logic to locate a usable index when a PK index isn’t present.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

File Description
src/backend/utils/adt/agtype.c Uses index scan to fetch a vertex tuple by graphid, with fallback to sequential scan.
src/backend/utils/adt/age_global_graph.c Uses index scan to enumerate labels for a graph_oid, filtering kind in the heap.
src/backend/executor/cypher_utils.c Reworks entity_exists() to prefer index scans over seq scans.
src/backend/executor/cypher_set.c Uses index scan to find the tuple to update (SET/REMOVE) instead of seq scan.
src/backend/executor/cypher_delete.c Uses index scan to find tuples for DELETE and to accelerate connected-edge checks where possible.
src/backend/catalog/ag_label.c Uses ag_label_graph_oid_index to accelerate edge-label enumeration for a graph.


(errcode(ERRCODE_INTERNAL_ERROR),
errmsg("missing RLS cache entry for relation %u",
relid)));
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
Copilot AI commented Mar 3, 2026

The index-scan update path skips the RLS USING-policy check (check_security_quals) that the sequential-scan path performs. This can allow SET/REMOVE to update rows that should be filtered by RLS, and also changes behavior (it should silently skip when filtered). Apply the same RLS gating logic in the index-scan branch before calling update_entity_tuple().

Suggested change
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
/*
* Apply the same RLS USING-policy gating as in the
* sequential-scan path: if the tuple is filtered by
* RLS, silently skip the update.
*/
if (check_security_quals(estate, resultRelInfo, heap_tuple))
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);

Comment on lines +687 to +691
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
}

ExecStoreHeapTuple(heap_tuple, entry->slot, false);
should_update = check_security_quals(entry->qualExprs,
entry->slot,
econtext);
if (shouldFree)
heap_freetuple(heap_tuple);
Copilot AI commented Mar 3, 2026

In the index-scan branch, heap_tuple is overwritten with the HeapTuple returned by update_entity_tuple(), but the code then calls heap_freetuple(heap_tuple) based on shouldFree from ExecFetchSlotHeapTuple(index_slot,...). This can free the wrong tuple (and potentially double-free or corrupt memory). Keep the fetched tuple in a separate variable and only free that one when shouldFree is true.

Comment on lines +5708 to +5709
ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_OIDEQ,
Int64GetDatum(graphid));
Copilot AI commented Mar 3, 2026

The sequential-scan fallback still uses F_OIDEQ to compare the graphid column. Since the index path uses F_GRAPHIDEQ (and the column type is GRAPHIDOID), the fallback should also use F_GRAPHIDEQ/GRAPHID_GET_DATUM to avoid incorrect comparisons when no suitable index is found.

Suggested change
ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_OIDEQ,
Int64GetDatum(graphid));
ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_GRAPHIDEQ,
GRAPHID_GET_DATUM(graphid));

Comment on lines +365 to +369
// setup scan keys to get all edges for the given graph oid
ScanKeyInit(&scan_keys[1], Anum_ag_label_graph, BTEqualStrategyNumber,
F_OIDEQ, ObjectIdGetDatum(graph_oid));
ScanKeyInit(&scan_keys[0], Anum_ag_label_kind, BTEqualStrategyNumber,
F_CHAREQ, CharGetDatum(LABEL_TYPE_EDGE));
Copilot AI commented Mar 3, 2026

This file uses C++-style '//' comments in newly added code. The rest of the backend C codebase follows C90/C-style block comments, and using '//' can break builds under stricter C90 settings. Please convert these to /* ... */ comments for consistency and portability.

Comment on lines +323 to +325
ScanKeyInit(&scan_keys[0], Anum_ag_label_name, BTEqualStrategyNumber,
F_OIDEQ, ObjectIdGetDatum(graph_oid));

Copilot AI commented Mar 3, 2026

In the index-scan setup, ScanKeyInit() uses Anum_ag_label_name as the scan-key attno. For index scans the attno should be the index column number (typically 1 for the first index key), not a heap attribute-number macro (even if it currently happens to be 1). Using the heap Anum macro here is misleading and easy to break if catalog columns are ever reordered—use an explicit index-key attno instead (and consider using ag_label_graph_oid_index_id() rather than name-based lookup).


jrgemignani commented Mar 3, 2026

@sandy-bes Please address the above Copilot suggestions. Additionally, please make sure to note any AI usage, if used.

jrgemignani (Contributor) commented:

@sandy-bes Please see this as well (Opus 4.6 review). All of these items need to be addressed.

Review of PR #2351 — Add index scan

The core idea — replacing O(N) sequential scans with O(log N) index scans in entity_exists and related hot paths — is excellent, and the performance numbers are compelling. However, there are several correctness, security, and code quality issues that need to be addressed before this can be merged.

Critical Issues

1. RLS bypass in cypher_set.c index-scan path (security bug)

The index-scan branch in process_update_list() calls update_entity_tuple() directly without checking check_security_quals. The seq-scan fallback correctly gates the update on RLS USING-policy evaluation. This means SET/REMOVE will bypass row-level security when an index is present. The same RLS gating logic must be applied in the index-scan branch.

2. heap_freetuple on wrong tuple in cypher_set.c (memory corruption)

heap_tuple = ExecFetchSlotHeapTuple(index_slot, true, &shouldFree);
// ...
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
// ...
if (shouldFree)
    heap_freetuple(heap_tuple);  // <-- frees the UPDATE result, not the fetched tuple

After update_entity_tuple(), heap_tuple points to the new tuple, but shouldFree applies to the originally fetched tuple. This can free the wrong memory. Keep the fetched tuple in a separate variable and only free that one.
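
A sketch of the fix the review describes, against PostgreSQL's executor API (variable names are hypothetical; untested):

```c
bool      shouldFree = false;
HeapTuple fetched_tuple;
HeapTuple updated_tuple;

/* Keep the fetched tuple separate from the update result. */
fetched_tuple = ExecFetchSlotHeapTuple(index_slot, true, &shouldFree);
updated_tuple = update_entity_tuple(resultRelInfo, slot, estate,
                                    fetched_tuple);

if (shouldFree)
    heap_freetuple(fetched_tuple);  /* free only the fetched copy */
```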

Correctness Issues

3. F_OIDEQ vs F_GRAPHIDEQ mismatch in agtype.c fallback path

The index path correctly uses F_GRAPHIDEQ, but the seq-scan fallback at line ~5709 still uses F_OIDEQ. The column type is graphid, so both paths should use F_GRAPHIDEQ / GRAPHID_GET_DATUM.

4. Index attno vs heap attno confusion in ag_label.c

ScanKeyInit for the index scan uses Anum_ag_label_name as the attribute number. For index scans, the attno should be the index column number (typically 1), not a heap attribute-number macro. Additionally, the index is on graph_oid, so referencing Anum_ag_label_name is semantically incorrect — it only works by coincidence if the value happens to equal 1. Consider using ag_label_graph_oid_index_id() instead of get_relname_relid() string lookup as well.
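
A sketch of the corrected ScanKeyInit call for item 4 (mirroring the Copilot suggestion above; for an index scan the key number refers to the index's own column, which for ag_label_graph_oid_index is graph_oid as key column 1):

```c
ScanKeyInit(&scan_keys[0],
            1,                      /* index key column number, not a heap Anum */
            BTEqualStrategyNumber, F_OIDEQ,
            ObjectIdGetDatum(graph_oid));
```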

Performance Issue

5. Index discovery runs per-tuple in hot paths

In process_delete_list() and process_update_list(), the index discovery block (RelationGetIndexList + open/close each index) runs inside the per-entity loop. For a DELETE or SET touching N entities of the same label, this runs N times. The index OID should be cached per-label or discovered once outside the inner loop.

Code Quality Issues

6. Massive code duplication

The index discovery pattern (iterate RelationGetIndexList, open each, check indisvalid/indnatts/indkey.values[0]) is copy-pasted 6+ times across the changed files. This should be factored into a shared utility function, e.g.:

Oid find_usable_index_for_attr(Relation rel, AttrNumber attnum);
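
One possible shape for that helper, sketched (untested) against PostgreSQL's relcache and index-access APIs; the name comes from the suggestion above, and the validity checks mirror the pattern the PR already repeats:

```c
/* Return the OID of a valid single-column index on the given heap
 * attribute, or InvalidOid if none exists. Sketch only. */
Oid
find_usable_index_for_attr(Relation rel, AttrNumber attnum)
{
    List     *indexoidlist = RelationGetIndexList(rel);
    ListCell *lc;
    Oid       result = InvalidOid;

    foreach(lc, indexoidlist)
    {
        Oid      indexoid = lfirst_oid(lc);
        Relation indexrel = index_open(indexoid, AccessShareLock);
        Form_pg_index indform = indexrel->rd_index;

        /* valid index whose first (only) key column is attnum */
        if (indform->indisvalid &&
            indform->indnatts == 1 &&
            indform->indkey.values[0] == attnum)
            result = indexoid;

        index_close(indexrel, AccessShareLock);

        if (OidIsValid(result))
            break;
    }
    list_free(indexoidlist);
    return result;
}
```

Calling this once per label (and caching the result) outside the per-entity loops would also resolve issue 5.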

7. DETACH DELETE duplication in cypher_delete.c

The two-pass connected-edge check (START_ID pass + END_ID pass) duplicates ~120 lines of nearly identical ACL check + RLS check + delete + error logic. A helper function would reduce this significantly.

8. C++ style comments

New code uses // comments in ag_label.c and elsewhere. The AGE/PostgreSQL codebase uses /* ... */ exclusively. Please convert for consistency and C90 portability.

9. Brace style inconsistency

The PR mixes } else { with the existing codebase convention of:

}
else
{

Please follow the existing style.

10. Inconsistent lock levels

Index open calls use RowExclusiveLock in some read-only scan paths (ag_label.c label enumeration, age_global_graph.c) where AccessShareLock would be more appropriate. The lock levels for index access during reads vs writes should be reviewed for consistency.

Summary

# Issue Severity
1 RLS bypass in SET index path Critical
2 heap_freetuple on wrong tuple High
3 F_OIDEQ vs F_GRAPHIDEQ mismatch Medium
4 Index attno vs heap attno confusion Medium
5 Index discovery per-tuple in loops Medium
6 Index discovery code duplication (6x) High
7 DETACH DELETE logic duplication Medium
8 C++ style comments Low
9 Brace style inconsistency Low
10 Inconsistent lock levels Medium

The performance improvement is real and valuable. With the security fix (#1), memory safety fix (#2), correctness fixes (#3-4), and a refactoring pass to deduplicate the index discovery logic (#5-7), this will be in good shape.

sandy-bes (Author) commented:

I've updated the PR description with the steps to reproduce the tests locally.

jrgemignani (Contributor) commented:

@sandy-bes

> I've updated the PR description with the steps to reproduce the tests locally.

I was able to validate the perf claims with Opus earlier.

But, more importantly, I can't do anything with this PR until all of the above issues (Copilot & Opus) have been addressed. Including noting any use of AI. Btw, using AI is fine, it just needs to be noted.

sandy-bes (Author) commented:

Yes, absolutely, I understand that the review process needs to be finalized first. I will do my best to address all the feedback over the next few days.

Regarding the use of AI agents: I did not use any AI agents while writing this code.
