Conversation
This commit fixes performance degradation during insertion scenarios by replacing SeqScan with IndexScan. Co-authored-by: Daria Barsukova <d.barsukova@g.nsu.ru> Co-authored-by: Alexandra Bondar <s6311704@gmail.com>
Pull request overview
This PR targets a load-test regression where insertion throughput degraded as label tables grew, by replacing O(N) sequential scans with index-backed lookups (primarily around entity_exists and other hot paths that check entity/label presence).
Changes:
- Switch entity_exists() to use an index scan when a suitable index is available (fallback to seq scan otherwise).
- Refactor several other table lookups (vertex fetch, label enumeration, SET/DELETE code paths) to prefer index scans.
- Add index discovery logic to locate a usable index when a PK index isn’t present.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/backend/utils/adt/agtype.c | Uses index scan to fetch a vertex tuple by graphid, with fallback to sequential scan. |
| src/backend/utils/adt/age_global_graph.c | Uses index scan to enumerate labels for a graph_oid, filtering kind in the heap. |
| src/backend/executor/cypher_utils.c | Reworks entity_exists() to prefer index scans over seq scans. |
| src/backend/executor/cypher_set.c | Uses index scan to find the tuple to update (SET/REMOVE) instead of seq scan. |
| src/backend/executor/cypher_delete.c | Uses index scan to find tuples for DELETE and to accelerate connected-edge checks where possible. |
| src/backend/catalog/ag_label.c | Uses ag_label_graph_oid_index to accelerate edge-label enumeration for a graph. |
```c
                (errcode(ERRCODE_INTERNAL_ERROR),
                 errmsg("missing RLS cache entry for relation %u",
                        relid)));
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
```
The index-scan update path skips the RLS USING-policy check (check_security_quals) that the sequential-scan path performs. This can allow SET/REMOVE to update rows that should be filtered by RLS, and also changes behavior (it should silently skip when filtered). Apply the same RLS gating logic in the index-scan branch before calling update_entity_tuple().
Suggested change:
```diff
-    heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
+    /*
+     * Apply the same RLS USING-policy gating as in the
+     * sequential-scan path: if the tuple is filtered by
+     * RLS, silently skip the update.
+     */
+    if (check_security_quals(estate, resultRelInfo, heap_tuple))
+        heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
```
```c
ExecStoreHeapTuple(heap_tuple, entry->slot, false);
should_update = check_security_quals(entry->qualExprs,
                                     entry->slot,
                                     econtext);
if (shouldFree)
    heap_freetuple(heap_tuple);
```
In the index-scan branch, heap_tuple is overwritten with the HeapTuple returned by update_entity_tuple(), but the code then calls heap_freetuple(heap_tuple) based on shouldFree from ExecFetchSlotHeapTuple(index_slot,...). This can free the wrong tuple (and potentially double-free or corrupt memory). Keep the fetched tuple in a separate variable and only free that one when shouldFree is true.
```c
ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_OIDEQ,
            Int64GetDatum(graphid));
```
The sequential-scan fallback still uses F_OIDEQ to compare the graphid column. Since the index path uses F_GRAPHIDEQ (and the column type is GRAPHIDOID), the fallback should also use F_GRAPHIDEQ/GRAPHID_GET_DATUM to avoid incorrect comparisons when no suitable index is found.
Suggested change:
```diff
-    ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_OIDEQ,
-                Int64GetDatum(graphid));
+    ScanKeyInit(&scan_keys[0], 1, BTEqualStrategyNumber, F_GRAPHIDEQ,
+                GRAPHID_GET_DATUM(graphid));
```
```c
// setup scan keys to get all edges for the given graph oid
ScanKeyInit(&scan_keys[1], Anum_ag_label_graph, BTEqualStrategyNumber,
            F_OIDEQ, ObjectIdGetDatum(graph_oid));
ScanKeyInit(&scan_keys[0], Anum_ag_label_kind, BTEqualStrategyNumber,
            F_CHAREQ, CharGetDatum(LABEL_TYPE_EDGE));
```
This file uses C++-style '//' comments in newly added code. The rest of the backend C codebase follows C90/C-style block comments, and using '//' can break builds under stricter C90 settings. Please convert these to /* ... */ comments for consistency and portability.
```c
ScanKeyInit(&scan_keys[0], Anum_ag_label_name, BTEqualStrategyNumber,
            F_OIDEQ, ObjectIdGetDatum(graph_oid));
```
In the index-scan setup, ScanKeyInit() uses Anum_ag_label_name as the scan-key attno. For index scans the attno should be the index column number (typically 1 for the first index key), not a heap attribute-number macro (even if it currently happens to be 1). Using the heap Anum macro here is misleading and easy to break if catalog columns are ever reordered—use an explicit index-key attno instead (and consider using ag_label_graph_oid_index_id() rather than name-based lookup).
|
@sandy-bes Please address the above Copilot suggestions. Additionally, please make sure to note any AI usage, if any.
|
@sandy-bes Please see this as well (Opus 4.6 review). All of these items need to be addressed.

**Review of PR #2351 — Add index scan**

The core idea — replacing O(N) sequential scans with O(log N) index scans in entity_exists() and the other hot paths — is sound.

**Critical Issues**

1. **RLS bypass in the SET/REMOVE index-scan path.** The index-scan branch skips the RLS USING-policy check that the sequential-scan path performs, so SET/REMOVE can update rows that RLS should filter out.

2. **Wrong tuple freed in the index-scan update path.**

```c
heap_tuple = ExecFetchSlotHeapTuple(index_slot, true, &shouldFree);
// ...
heap_tuple = update_entity_tuple(resultRelInfo, slot, estate, heap_tuple);
// ...
if (shouldFree)
    heap_freetuple(heap_tuple); // <-- frees the UPDATE result, not the fetched tuple
```

After update_entity_tuple() overwrites heap_tuple, the shouldFree cleanup releases the wrong tuple, risking a double free or memory corruption. Keep the fetched tuple in a separate variable.

**Correctness Issues**

3. The index path correctly uses F_GRAPHIDEQ, but the sequential-scan fallback still compares the graphid column with F_OIDEQ; the fallback should match.

4. **Index attno vs heap attno confusion.** ScanKeyInit() for the index scan is passed a heap attribute-number macro where the index column number is expected.

**Performance Issue**

5. **Index discovery runs per-tuple in hot paths.** The index-lookup logic executes for every tuple instead of once per scan; resolve the index once and reuse it.

**Code Quality Issues**

6. **Massive code duplication.** The index discovery pattern is repeated across files; extract a shared helper, e.g.:

```c
Oid find_usable_index_for_attr(Relation rel, AttrNumber attnum);
```

7. **DETACH DELETE duplication.** The two-pass connected-edge check (START_ID pass + END_ID pass) duplicates ~120 lines of nearly identical ACL check + RLS check + delete + error logic. A helper function would reduce this significantly.

8. **C++ style comments.** New code uses // comments; the backend codebase uses /* ... */ block comments.

9. **Brace style inconsistency.** The PR mixes brace placements such as:

```c
}
else
{
```

Please follow the existing style.

10. **Inconsistent lock levels.** Index open calls use lock levels that do not match the corresponding heap opens; please keep them consistent.

**Summary**

The performance improvement is real and valuable. With the security fix (#1), memory safety fix (#2), correctness fixes (#3-4), and a refactoring pass to deduplicate the index discovery logic (#5-7), this will be in good shape.
I've updated the PR description with the steps to reproduce the tests locally. |
I was able to validate the perf claims with Opus earlier. But, more importantly, I can't do anything with this PR until all of the above issues (Copilot & Opus) have been addressed, including noting any use of AI. To be clear, using AI is fine; it just needs to be noted.
Yes, absolutely, I understand that the review process needs to be finalized first. I will do my best to address all the feedback over the next few days. Regarding the use of AI agents: I did not use any AI agents while writing this code. |
Motivation / Problem:
As a result of load testing, a significant performance degradation was found in insertion scenarios. The scenarios used were taken from an open-source benchmark and rewritten in pure SQL. Examples of the queries can be found here:
Perf analysis showed that the main bottleneck is the entity_exists function. The root cause lies in the use of a Sequential Scan (SeqScan) to check for the existence of an entity prior to insertion. The time complexity of a SeqScan is O(N), meaning the search time grows linearly as the number of rows in the table increases. The larger the graph became, the longer each individual insertion took. This led to a drop in TPS regardless of the concurrency level (the issue was consistently reproduced with both 1 and 30 threads).
Profile Analysis (Flame Graphs & perf): I am attaching Flame Graphs for comparison. The profiles are dominated by the entity_exists function and its underlying sequential scan operations.

Changes Made:
Performance Impact:
Benchmarks were conducted on a server with 30 CPU cores and 32 GB of RAM, using a graph ranging from 20,000 to 200,000 objects over a 2-minute duration. The transition to index access completely eliminated the performance degradation associated with data volume growth:
Here are the scripts and commands for local reproduction:
1. Graph creation and population:
The sf parameter here defines the scale factor of the generated graph.

```shell
psql -d your_database -f generate_graph.sql -v sf=1
```
generate_graph.sql
2. Creating wrapper functions for the workload:
This script sets up the environment and creates functions with the correct parameter mapping for Apache AGE.
```shell
psql -d your_database -f setup_func_for_workload.sql
```
setup_func_for_workload.sql
3. Running the load test:
Running 8 clients for 120 seconds.
```shell
pgbench -d your_database -f workload_update.sql -D sf=1 -c 8 -T 120 -P
```
workload_update.sql
Acknowledgments: