Collection on FP8 Quantization of Weights, Activations and KV Cache
NM Testing
- nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors (Text Generation, 8B)
- nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform (Text Generation, 8B)
- nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test (Text Generation, 8B)
- nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test (8B)
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8 (Text Generation)
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV (Text Generation, 8B)
- RedHatAI/Mixtral-8x7B-Instruct-v0.1-AutoFP8 (Text Generation, 47B)
- RedHatAI/Meta-Llama-3-70B-Instruct-FP8 (Text Generation, 71B)

Collection of State-of-the-art FP8 Block Quantized Models
Models used by the https://github.com/vllm-project/speculators CI system.

- RedHatAI/Sparse-Llama-3.1-8B-gsm8k-2of4 (Text Generation, 8B)
- RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic (Text Generation, 8B)
- RedHatAI/Sparse-Llama-3.1-8B-2of4 (Text Generation, 8B)
- RedHatAI/Sparse-Llama-3.1-8B-gsm8k-2of4-FP8-dynamic (Text Generation, 8B)
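Several of the checkpoints listed above (e.g. the W8A8-FP8-Channelwise one) apply per-channel symmetric FP8 quantization to the weights. As a rough illustration only, here is a minimal plain-Python sketch of the idea, assuming the E4M3 format (3 mantissa bits, largest finite value 448) and ignoring subnormals and the exact rounding behavior of the real vLLM/compressed-tensors kernels; the function names are my own, not part of any library:

```python
import math

FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3


def quantize_fp8_sim(x: float) -> float:
    """Simulate rounding x to the nearest E4M3 value.

    E4M3 has 3 mantissa bits, so the significand is rounded to 3
    fractional bits; subnormals and NaN handling are ignored here.
    """
    if x == 0.0:
        return 0.0
    s = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))   # abs(x) == m * 2**e with m in [0.5, 1)
    m = round(m * 16) / 16      # keep 1 implicit + 3 mantissa bits
    return s * min(math.ldexp(m, e), FP8_E4M3_MAX)


def quantize_channel(weights: list[float]) -> tuple[list[float], float]:
    """Per-channel symmetric quantization: one scale per channel maps
    the channel's max-abs weight onto the FP8 dynamic range."""
    scale = max(abs(w) for w in weights) / FP8_E4M3_MAX
    q = [quantize_fp8_sim(w / scale) for w in weights]
    return q, scale


# Toy channel: the largest-magnitude weight lands on +/-448 after scaling.
channel = [0.02, -0.15, 0.5, -1.2]
q, scale = quantize_channel(channel)
dequant = [v * scale for v in q]
for w, d in zip(channel, dequant):
    print(f"{w:+.4f} -> {d:+.4f}")
```

Dequantizing (multiplying back by the per-channel scale) shows the small rounding error the format introduces; the "Dynamic" checkpoints compute activation scales like this on the fly at inference time rather than ahead of time.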
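The Sparse-Llama "2of4" checkpoints use 2:4 semi-structured sparsity: in every contiguous group of four weights, at least two are exactly zero, which is the pattern NVIDIA sparse tensor cores can accelerate. A hypothetical checker sketch (the group size and zero count are the standard 2:4 pattern; the function name is my own):

```python
def is_2of4_sparse(weights: list[float], group: int = 4, min_zeros: int = 2) -> bool:
    """Return True if every contiguous group of `group` weights
    contains at least `min_zeros` exact zeros (the 2:4 pattern)."""
    if len(weights) % group != 0:
        return False  # a 2:4-sparse row must tile evenly into groups of 4
    return all(
        weights[i:i + group].count(0.0) >= min_zeros
        for i in range(0, len(weights), group)
    )


# A row satisfying the 2:4 pattern, and one violating it in its first group:
print(is_2of4_sparse([0.0, 1.5, 0.0, -0.3, 2.0, 0.0, 0.0, 0.1]))  # True
print(is_2of4_sparse([1.0, 1.5, 0.0, -0.3, 2.0, 0.0, 0.0, 0.1]))  # False
```

The "2of4-FP8-dynamic" checkpoints combine both techniques: the surviving weights in each group of four are additionally stored in FP8.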