Reliable, minimal and scalable library for pretraining foundation and world models
Updated Mar 5, 2026 - Python
This repository contains my research internship code at the University of Illinois Chicago under the supervision of Prof. Pedram Rooshenas.
This repository contains the implementation of the experiments and the approach presented in the paper: CoNCRA: A Convolutional Neural Network Code Retrieval Approach
A simple and efficient implementation of Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (I-JEPA)
Demo for Binding Text, Images, Graphs, and Audio for Music Representation Learning
Code and Models for Binding Text, Images, Graphs, and Audio for Music Representation Learning
[CVPR 2026] FLAC: Few-Shot Acoustic Synthesis with Flow Matching. FLAC enables RIR generation in novel scenes from a single acoustic observation. The repository also provides AGREE, a joint embedding space for RIRs and room geometry, designed to evaluate the geometric consistency of RIR synthesis models.
PyTorch code and models for VJEPA2 self-supervised learning from video.
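Several of the repositories above (I-JEPA, VJEPA2) share the same joint-embedding predictive pattern: a context encoder predicts, directly in embedding space, the output of a slowly updated target encoder. A minimal NumPy sketch of that pattern follows; the toy dimensions, linear "encoders", and identity predictor are all hypothetical stand-ins for the real networks, not code from any of the listed repositories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not taken from any repo above).
INPUT_DIM, EMBED_DIM, BATCH = 16, 8, 4

# Linear "encoders" standing in for the context and target networks.
# In I-JEPA-style training the target encoder starts as a copy of the
# context encoder and is updated by an exponential moving average (EMA).
W_context = rng.normal(size=(INPUT_DIM, EMBED_DIM))
W_target = W_context.copy()
W_predictor = np.eye(EMBED_DIM)  # predictor: context space -> target space

def embed(x, W):
    """Project inputs and L2-normalize the embeddings."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

x = rng.normal(size=(BATCH, INPUT_DIM))      # a batch of toy "patches"
z_pred = embed(x, W_context) @ W_predictor   # predicted target embeddings
z_tgt = embed(x, W_target)                   # targets (no gradient in practice)

# Regression loss computed in embedding space, not pixel space.
loss = np.mean((z_pred - z_tgt) ** 2)

# EMA update of the target encoder with momentum m.
m = 0.99
W_target = m * W_target + (1 - m) * W_context
```

Because the target encoder here is an exact copy and the predictor is the identity, the loss starts at zero; in real training the predictor is a learned network, the context encoder sees only a masked view, and the EMA keeps the target encoder a stable, slowly moving regression target.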