Add E2E homogeneous graph store training example #514
kmontemayor2-sc wants to merge 45 commits into main from
Conversation
/e2e_test
GiGL Automation @ 16:26:40 UTC: 🔄 @ 18:01:01 UTC: ❌ Workflow failed.
Sorry, it can be hard to see pure GH comments if they're not on a file :/ Let's look at https://console.cloud.google.com/vertex-ai/pipelines/locations/us-central1/runs/hom-cora-sup-test-on-20260303-010325?project=external-snap-ci-github-gigl for a colocated test, and https://console.cloud.google.com/vertex-ai/pipelines/locations/us-central1/runs/hom-cora-sup-gs-test-on-20260303-010325?project=external-snap-ci-github-gigl for a graph store test, both on Cora. They take roughly the same time (43 minutes and change). But I don't see model metrics for either run; see below for the logs I see in the colocated pipeline. Is it possible we broke the metrics for these pipelines at some point? I'd rather have that be a separate fix if possible.
/e2e_test
GiGL Automation @ 18:07:23 UTC: 🔄 @ 19:34:50 UTC: ❌ Workflow failed.
/e2e_test
GiGL Automation @ 19:49:01 UTC: 🔄 @ 21:12:51 UTC: ❌ Workflow failed.
/e2e_test
GiGL Automation @ 21:44:39 UTC: 🔄 @ 23:12:47 UTC: ❌ Workflow failed.
Scope of work done
Adds an example for graph store homogeneous training, and updates the splitter slightly to allow tuple edge types to be passed in.
Will follow up with the heterogeneous loop when ready :)
Adds a new deployment/configs/e2e_glt_gs_train_resource_config.yaml so we can have a GS trainer as a temporary workaround.
Where is the documentation for this feature?: N/A
Did you add automated tests or write a test plan?
Updated Changelog.md?: NO
Ready for code review?: NO
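The splitter change above (accepting tuple edge types) can be sketched with a small normalization helper. This is an illustrative sketch only; `EdgeType` and `coerce_edge_type` are hypothetical names for this example and are not GiGL's actual API:

```python
from typing import NamedTuple, Tuple, Union

# Hypothetical edge-type container for illustration;
# GiGL's real EdgeType class may differ.
class EdgeType(NamedTuple):
    src_node_type: str
    relation: str
    dst_node_type: str

def coerce_edge_type(
    edge_type: Union[EdgeType, Tuple[str, str, str]]
) -> EdgeType:
    """Accept either an EdgeType or a plain (src, relation, dst)
    tuple and normalize it to EdgeType."""
    # Check EdgeType first: a NamedTuple is also a tuple, so the
    # order of these isinstance checks matters.
    if isinstance(edge_type, EdgeType):
        return edge_type
    if isinstance(edge_type, tuple) and len(edge_type) == 3:
        return EdgeType(*edge_type)
    raise TypeError(f"Cannot interpret {edge_type!r} as an edge type")
```

With a helper like this at the splitter's entry point, callers can pass `("paper", "cites", "paper")` or an `EdgeType` instance interchangeably, and downstream code only ever sees the normalized form.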