1. Please decompress src.tar.gz

% tar xzvf src.tar.gz
% cd src

2. Please download optimization.py from the TernaryBERT GitHub repository
(https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TernaryBERT).

% mkdir transformer
% cd transformer
% wget https://github.com/huawei-noah/Pretrained-Language-Model/raw/refs/heads/master/TernaryBERT/transformer/optimization.py

3. Please download the GLUE dataset following the instructions at https://github.com/nyu-mll/GLUE-baselines

4. Please evaluate our trained models.

4-1. RTE task

% tar xzvf rte.tar.gz
% python quant_task_glue_ma2_eval.py --calc_int_precision --eval_only --input_bits 8 --weight_bits 2 --weight_smax_bits 8 --weight_norm_bits 8 --student_model ./rte --task_name rte --data_dir (GLUE dataset path) --model bert-base-uncased

***** Eval only *****
acc = 0.6967509025270758

4-2. MNLI task

% tar xzvf mnli.tar.gz
% python quant_task_glue_ma2_eval.py --calc_int_precision --eval_only --input_bits 8 --weight_bits 2 --weight_smax_bits 8 --weight_norm_bits 8 --student_model ./mnli --task_name mnli --data_dir (GLUE dataset path) --model bert-base-uncased

***** Eval only *****
acc = 0.8308711156393276
mm-acc = 0.8295362082994304

4-3. CoLA task

% tar xzvf cola.tar.gz
% python quant_task_glue_ma2_eval.py --calc_int_precision --eval_only --input_bits 8 --weight_bits 2 --weight_smax_bits 8 --weight_norm_bits 8 --student_model ./cola --task_name cola --data_dir (GLUE dataset path) --model bert-base-uncased

***** Eval only *****
mcc = 0.5291140309961344

(Appendix)
We verified the models using Python 3.9.21, PyTorch 2.3.1, and Transformers 4.45.0.
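The three evaluations in step 4 differ only in the task name, so they can be driven by one small wrapper. The sketch below is a dry run: it only prints the command lines from steps 4-1 to 4-3 so you can check them before piping the output to `sh`. The GLUE_DIR value is a placeholder you must replace with your actual GLUE dataset path; all other flags are taken verbatim from the steps above.

```shell
#!/bin/sh
# Dry run: assemble the per-task commands from steps 4-1..4-3 and print them.
# Pipe the output to `sh` to actually execute the evaluations.
GLUE_DIR="/path/to/glue"   # placeholder: substitute your GLUE dataset path
CMDS=""
for TASK in rte mnli cola; do
  # Each task uses the same flags; only the archive, model dir, and task name vary.
  CMDS="${CMDS}tar xzvf ${TASK}.tar.gz
python quant_task_glue_ma2_eval.py --calc_int_precision --eval_only --input_bits 8 --weight_bits 2 --weight_smax_bits 8 --weight_norm_bits 8 --student_model ./${TASK} --task_name ${TASK} --data_dir ${GLUE_DIR} --model bert-base-uncased
"
done
printf '%s' "$CMDS"
```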