Non-record: Mixed-Int6 LZMA9 B3072 Warm5000#1438

Open
sabdulmajid wants to merge 1 commit into openai:main from sabdulmajid:pr/mixed-int6-lzma9-b3072-warm5000

Conversation

@sabdulmajid

Summary

Adds a non-record unlimited-compute 16MB submission: Mixed-Int6 LZMA9 B3072 Warm5000.

This is not a 10-minute record attempt; rather, it is a 16.1-hour single-GPU run using the established EMA + XSA(last-4) + BigramHash3072 + LeakyReLU^2 flat-transformer stack. It then exports the preserved raw checkpoint with broad mixed-int6 quantization over mlp;attn;embed and LZMA9 extreme compression.
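The export step described above boils down to two stages: quantize the weights to a low-bit integer grid, then squeeze the integer tensors with LZMA at its most aggressive preset. The sketch below is a minimal illustration of that idea, not the PR's actual exporter; the per-tensor symmetric scheme and the storage of int6 values in int8 containers (a real exporter would bit-pack them) are my assumptions.

```python
import lzma
import numpy as np

def quantize_int6(w: np.ndarray):
    """Symmetric per-tensor int6 quantization: map floats onto [-31, 31]."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 31.0 if max_abs > 0 else 1.0
    # Values are stored in int8 containers here; real exporters bit-pack 6-bit codes.
    q = np.clip(np.round(w / scale), -31, 31).astype(np.int8)
    return q, scale

def dequantize_int6(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Quantize a weight matrix and compress the codes with LZMA preset 9 | EXTREME.
rng = np.random.default_rng(0)
w = rng.standard_normal((3072, 768)).astype(np.float32)
q, scale = quantize_int6(w)
blob = lzma.compress(q.tobytes(), preset=9 | lzma.PRESET_EXTREME)
```

The compressed blob is what counts against the 16MB artifact budget; dequantization reverses only the scaling, so the rounding error (at most half a quantization step) is paid once at export time.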

Result

  • val_bpb: 1.20289664
  • val_loss: 1.99963255
  • pre-quant sliding BPB: 1.16618894
  • artifact bytes: 15,991,188
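For readers unfamiliar with the two loss figures above: val_loss is mean cross-entropy in nats per token, while val_bpb rescales it to bits per raw byte of text, which depends on the tokenizer's token-to-byte ratio. A minimal sketch of the usual conversion follows; the ratio is corpus- and tokenizer-specific, and the example numbers are illustrative, not taken from this run.

```python
import math

def bits_per_byte(loss_nats: float, num_tokens: int, num_bytes: int) -> float:
    """Convert mean cross-entropy (nats/token) to bits per byte of raw text."""
    bits_per_token = loss_nats / math.log(2)  # nats -> bits
    return bits_per_token * num_tokens / num_bytes

# Illustrative: a loss of ln(2) nats/token on a corpus with 1 token per byte
# is exactly 1.0 bits per byte.
example = bits_per_byte(math.log(2), num_tokens=100, num_bytes=100)
```

This is why val_bpb can be well below val_loss: with several bytes of text per token, the per-token loss is spread across more bytes.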

This beats the listed 4-hour non-record baseline, but does not beat the current 1-bit non-record result or the 10-minute SOTA. It was a fun little experiment I trained and ran on limited compute (as a student in college 😃).

Training: NVIDIA RTX A4500, 20GB VRAM
Export & eval: NVIDIA GeForce RTX 3050, 8GB VRAM
