
Releases: changcheng967/Kata_web

KW29-b18c384nbt-1018

19 Oct 21:38
0c509ae


🏆 KW29 Custom Fine‑Tuned Model Release: kw29‑b18c384nbt‑1018

Release Date: 2025‑10‑19
Author: changcheng967
With support from: Doulet Media
Powered by: KataGo — full credit to the KataGo authors, license, and required attributions.


📦 Overview

This release introduces KW29 (b18c384nbt‑1018), a custom fine‑tuned KataGo model.
It was fine‑tuned on ~2 million samples, on top of a network lineage trained on roughly 10 billion cumulative samples (the ~10B figure refers to training data seen, not parameters).
The evaluation benchmark compares KW29 against the baseline FMSWA7 (b28c512) under equal‑time conditions.


🧪 Experiment Setup

  • Board size: 19x19
  • Baseline: FMSWA7 (b28c512)
  • KW29: b18c384nbt‑1018
  • Visits: FMSWA7 = 10, KW29 = 40 (equal‑time fairness)
  • Games played: 100
  • Resignation: enabled (threshold −0.95, 6 turns)
  • Backend: TensorRT on RTX 5080 Laptop GPU
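
For reproducibility, a run like this can be driven by KataGo's built-in match tool. The fragment below is an illustrative sketch of such a config: the key names follow the style of cpp/configs/match_example.cfg in the KataGo repo (verify against that file before use), and the model filenames are assumptions, not files shipped with this release.

```ini
# Illustrative match config for the equal-time benchmark above.
# Key names follow cpp/configs/match_example.cfg; verify before use.
numGamesTotal = 100
bSizes = 19
bSizeRelProbs = 1

allowResignation = true
resignThreshold = -0.95
resignConsecTurns = 6

numBots = 2
botName0 = FMSWA7
nnModelFile0 = models/fmswa7-b28c512.bin.gz    # assumed filename
maxVisits0 = 10
botName1 = KW29
nnModelFile1 = models/kw29-b18c384nbt-1018.bin.gz    # assumed filename
maxVisits1 = 40
```

A run of the form `./cpp/main match -config match.cfg` then produces the per-move timing and NN-row counts of the kind summarized in the results tables.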

📊 Results Summary

Match Log Metrics

| Model | Avg Move Time (s) | Total Moves (sample) | NN Rows |
|---|---|---|---|
| FMSWA7 | 0.0754 | 8068 | 387,146 |
| KW29 b18c384nbt | 0.0621 | 8087 | 633,603 |

➡️ KW29 averages ~18% less time per move than FMSWA7 (a relative speed of ~1.21×) under equal‑time settings.

Elo & Win Rates

| Model | Win% vs Opponent | Elo (± error) |
|---|---|---|
| FMSWA7 | 70.5% | +48.41 ± 25.16 |
| KW29 b18c384nbt | 29.5% | −48.41 ± 25.16 |

➡️ FMSWA7 remains stronger overall, but KW29 shows improved throughput and efficiency.

Derived Metrics

| Model | Relative Speed | Elo / NN Row | Efficiency (Elo/sec) | Verdict |
|---|---|---|---|---|
| FMSWA7 | 1.00 | +1.25e‑4 | +641 | Baseline |
| KW29 b18c384nbt | 1.21 | −7.6e‑5 | −780 | Weaker but more efficient |
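
The derived metrics can be recomputed from the match-log table. A minimal sketch, assuming the Efficiency column is the Elo difference divided by average move time (inferred from the values, not stated in the log):

```python
# Recompute the derived metrics from the match-log numbers above.
avg_time = {"FMSWA7": 0.0754, "KW29": 0.0621}   # seconds per move
nn_rows = {"FMSWA7": 387_146, "KW29": 633_603}
elo = {"FMSWA7": +48.41, "KW29": -48.41}

# Relative speed: FMSWA7's move time divided by each model's move time.
rel_speed = {m: avg_time["FMSWA7"] / t for m, t in avg_time.items()}
# Elo gained (or lost) per neural-net row evaluated.
elo_per_row = {m: elo[m] / nn_rows[m] for m in elo}
# Assumed definition of the Efficiency column: Elo diff per second of move time.
elo_per_sec = {m: elo[m] / avg_time[m] for m in elo}

print(round(rel_speed["KW29"], 2))       # 1.21
print(f"{elo_per_row['FMSWA7']:.2e}")    # 1.25e-04
print(round(elo_per_sec["KW29"]))        # -780
```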

📈 Key Insights

  • Training scale: KW29 adds ~2M fine‑tuning samples on top of a lineage trained on roughly 10B cumulative samples.
  • Efficiency: KW29 processes significantly more NN rows, achieving higher throughput.
  • Strength: FMSWA7 still leads in Elo, but KW29 narrows the gap compared to earlier KW versions.
  • Fairness: Visit scaling (10 vs 40) kept time per move comparable, approximating equal‑time conditions.
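
The efficiency point can be made concrete by dividing NN rows by moves in the match-log table:

```python
# NN rows evaluated per move, from the match-log table above.
rows_per_move = {
    "FMSWA7": 387_146 / 8068,
    "KW29": 633_603 / 8087,
}
print(round(rows_per_move["FMSWA7"], 1))  # 48.0
print(round(rows_per_move["KW29"], 1))    # 78.3
```

KW29 evaluates roughly 1.6× as many network rows per move while still spending less wall-clock time, which is the throughput advantage the bullet refers to.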

✅ Next Steps

  • Continue training KW series (KW30+) to close the Elo gap.
  • Track Elo progression across versions with the KW Report dashboard.
  • Explore tuning resign thresholds and temperature for further balance.

🙌 Credits

  • Created by changcheng967 with help from Doulet Media
  • Built on the open‑source KataGo engine

Full Changelog: KW28-b18c384nbt-1017...KW29-b18c384nbt-1018

KW28-b18c384nbt-1017

18 Oct 15:36
20da6ca


KW27-b18c384nbt-1017

17 Oct 18:39
c227348


KW20-b28c512nbt-0930

17 Oct 18:34
c227348


KW19-b28c512nbt-0930.bin

16 Oct 00:03
40eeaa9


What's Changed

Full Changelog: kw17-b28c512nbt-0928...KW19-b28c512nbt-0930.bin

kw18-b28c512nbt-0929

30 Sep 01:36


🏆 kata_web Custom Fine-Tuned Model Release: kw18-b28c512nbt-0929

🔗 Trained using the KataGo open-source framework — full credit to KataGo authors, license, and required attributions.


📌 Overview

kw18 is a fine-tuned 19x19 Go model based on the KW17 checkpoint (kw17-b28c512nbt-0928).
It was trained for 1 epoch on 200,000 samples, totaling 3,050,944 training steps. KW18 continues the KW-series with the strongest Elo so far, achieving a 55% win rate and a new peak in Calculated Elo.


🧠 Model Information

  • Model Name: kw18-b28c512nbt-0929.bin
  • Configuration: b28c512nbt (28 blocks, 512 channels)
  • Board Size: 19x19
  • File Size: ~330 MB
  • Base Model: kw17-b28c512nbt-0928
  • Training Steps: 3,050,944
  • Training Data: 200k rows × 1 epoch of shuffled self-play
  • Training Time: ~1.5 hours (A100-class GPU, approximate)
  • Framework: KataGo v1.17.0+ (PyTorch export)

📊 Performance Metrics

| Model | Games | Wins | Losses | Win rate (%) | Avg Time (s) | Total Playouts | Baseline Elo | Elo Diff | Calculated Elo | Rank | Weighted Elo | Samples | Elo Realistic | Efficiency |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| KW18-b28c512nbt-0929 | 20 | 11 | 9 | 55 | 389.530 | 217,964 | 14085 | +35.22 | 14120.22 | 1 | 14068.31479 | 3,050,944 | 4663 | 559.5563885 |

⚡ KW18 surpasses KW17 with a 55% win rate, higher Calculated Elo (14120), and best-in-series Weighted Elo. This marks a new performance peak for the KW line.
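
For context, the Elo Diff column is close to what the standard logistic mapping gives for these win rates (the report's own estimator evidently differs slightly in the decimals):

```python
import math

def elo_diff(win_rate: float) -> float:
    """Elo gap implied by a win rate under the standard logistic model."""
    return 400 * math.log10(win_rate / (1 - win_rate))

# KW18's 55% win rate (11 wins in 20 games):
print(round(elo_diff(0.55), 1))  # 34.9 (reported: +35.22)
```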


🚀 Usage Instructions

1. Download the model

wget https://github.com/changcheng967/kata_web/releases/download/v1.0/kw18-b28c512nbt-0929.bin
mv kw18-b28c512nbt-0929.bin ~/KataGo/models/

2. Download checkpoint (optional)

wget https://github.com/changcheng967/kata_web/releases/download/v1.0/model.ckpt
mv model.ckpt ~/KataGo/models/

3. Run with GTP

./cpp/main gtp \
  -model ~/KataGo/models/kw18-b28c512nbt-0929.bin \
  -config cpp/configs/gtp.cfg

Example session:

boardsize 19
clear_board
genmove B
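
The same session can also be scripted. Below is a minimal sketch of a GTP driver; the engine command and model path echo the commands above, and any GTP-speaking engine can be substituted:

```python
# Minimal GTP driver; scripts the same session as above.
import subprocess


def parse_gtp_response(raw: str) -> str:
    """Strip the GTP status prefix: '=' marks success, '?' marks an error."""
    if raw.startswith("?"):
        raise RuntimeError(raw[1:].strip())
    return raw.lstrip("=").strip()


class GtpEngine:
    def __init__(self, cmd):
        self.proc = subprocess.Popen(
            cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )

    def send(self, command: str) -> str:
        self.proc.stdin.write(command + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == "":  # a GTP response ends with a blank line
                break
            lines.append(line.rstrip("\n"))
        return parse_gtp_response("\n".join(lines))


# Usage (assumes the binary and model paths from this release):
# engine = GtpEngine(["./cpp/main", "gtp",
#                     "-model", "models/kw18-b28c512nbt-0929.bin",
#                     "-config", "cpp/configs/gtp.cfg"])
# engine.send("boardsize 19")
# engine.send("clear_board")
# print(engine.send("genmove B"))
```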

4. GUI Integration

  • Sabaki → add engine: ./cpp/main, args: gtp -model models/kw18-b28c512nbt-0929.bin
  • Lizzie → configure with same GTP arguments
  • KaTrain → add as custom GTP engine

📦 Files Included

  • kw18-b28c512nbt-0929.bin: Final exported model
  • model.ckpt: Checkpoint file for resuming training
  • results.xlsx: Full benchmark results (KW10 → KW18)

Full Changelog: kw16-b28c512nbt-0928...kw18-b28c512nbt-0929

kw17-b28c512nbt-0928

30 Sep 01:10


🏆 kata_web Custom Fine-Tuned Model Release: kw17-b28c512nbt-0928

🔗 Trained using the KataGo open-source framework — full credit to KataGo authors, license, and required attributions.


📌 Overview

kw17 is a fine-tuned 19x19 Go model based on the KW16 checkpoint (kw16-b28c512nbt-0928).
It was trained for 1 epoch on 200,000 samples, totaling 2,840,000 training steps. Compared to KW16, it stabilizes performance and achieves a balanced 50% win rate, with stronger Elo and improved consistency.


🧠 Model Information

  • Model Name: kw17-b28c512nbt-0928.bin.gz
  • Configuration: b28c512nbt (28 blocks, 512 channels)
  • Board Size: 19x19
  • File Size: ~330 MB
  • Base Model: kw16-b28c512nbt-0928
  • Training Steps: 2,840,000
  • Training Data: 200k rows × 1 epoch of shuffled self-play
  • Training Time: ~1.4 hours (A100-class GPU, approximate)
  • Framework: KataGo v1.17.0+ (PyTorch export)

📊 Performance Metrics

| Model | Games | Wins | Losses | Win rate (%) | Avg Time (s) | Total Playouts | Baseline Elo | Elo Diff | Calculated Elo | Rank | Weighted Elo | Samples | Elo Realistic | Efficiency |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline (FMGo7) | 0 | | | | | | 14085 | | 14085 | 2 | 11268 | | 4660 | |
| KW14-b28c512nbt-0927 | 20 | 7 | 13 | 35 | 361.964 | 202,978 | 14085 | -120.41 | 13964.59 | 7 | 13919.19557 | 1,240,000 | 4647 | 560.7684742 |
| KW15-b28c512nbt-0928 | 20 | 9 | 11 | 45 | 333.465 | 186,586 | 14085 | -38.38 | 14046.62 | 5 | 13994.8897 | 1,940,000 | 4656 | 559.5369829 |
| KW16-b28c512nbt-0928 | 20 | 7 | 13 | 35 | 364.026 | 207,507 | 14085 | -155.63 | 13929.37 | 8 | 13929.37 | 2,640,000 | 4643 | 570.0334591 |
| KW17-b28c512nbt-0928 | 20 | 10 | 10 | 50 | 408.813 | 230,241 | 14085 | 0 | 14085 | 2 | 14051.20031 | 2,840,000 | 4660 | 563.1939297 |

⚡ KW17 shows recovery after KW16’s regression, stabilizing win rates with improved Elo. It achieves parity with the FMGo7 baseline while maintaining good efficiency.


🚀 Usage Instructions

1. Download the model

wget https://github.com/changcheng967/kata_web/releases/download/v1.0/kw17-b28c512nbt-0928.bin.gz
mv kw17-b28c512nbt-0928.bin.gz ~/KataGo/models/

2. Run with GTP

./cpp/main gtp \
  -model ~/KataGo/models/kw17-b28c512nbt-0928.bin.gz \
  -config cpp/configs/gtp.cfg

Example session:

boardsize 19
clear_board
genmove B

3. GUI Integration

  • Sabaki → add engine: ./cpp/main, args: gtp -model models/kw17-b28c512nbt-0928.bin.gz
  • Lizzie → configure with same GTP arguments
  • KaTrain → add as custom GTP engine

📦 Files Included

  • kw17-b28c512nbt-0928.bin.gz: Final exported model
  • training_log.txt: Training log and metrics for KW17

Full Changelog: kw16-b28c512nbt-0928...kw17-b28c512nbt-0928

kw16-b28c512nbt-0928

29 Sep 13:30


🏆 kata_web Custom Fine-Tuned Model Release: kw16-b28c512nbt-0928

🔗 Trained using the KataGo open-source framework — full credit to KataGo authors, license, and required attributions.


📌 Overview

kw16 is a fine-tuned 19x19 Go model based on the KW15 checkpoint (kw15-b28c512nbt-0928).
It was trained for 1 epoch on 700,000 samples, totaling 2,640,000 training steps. This continues the KW-series experiments, with improved efficiency and stronger performance compared to KW14.


🧠 Model Information

  • Model Name: kw16-b28c512nbt-0928.bin.gz
  • Configuration: b28c512nbt (28 blocks, 512 channels)
  • Board Size: 19x19
  • File Size: ~330 MB
  • Base Model: kw15-b28c512nbt-0928
  • Training Steps: 2,640,000
  • Training Data: 700k rows × 1 epoch of shuffled self-play
  • Training Time: ~1 hour (A100-class GPU, approximate)
  • Framework: KataGo v1.17.0+ (PyTorch export)

📊 Performance Metrics

| Model | Games | Wins | Losses | Win rate (%) | Avg Time (s) | Total Playouts | Baseline Elo | Elo Diff | Calculated Elo | Rank | Weighted Elo | Samples | Elo Realistic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline (FMGo7) | 0 | | | | | | 14085 | | 14085 | 1 | #VALUE! | | 4660 |
| KW10-b28c512nbt-0926 | 20 | 6 | 14 | 30 | 341.182 | 192,826 | 14085 | -155.63 | 13929.37 | 6 | 10533.09615 | 50,000 | 4643 |
| KW11-b28c512nbt-0926 | 20 | 9 | 11 | 45 | 351.037 | 195,286 | 14085 | -35.22 | 14049.78 | 2 | 10606.35553 | 200,000 | 4656 |
| KW12-b28c512nbt-0926 | 20 | 8 | 12 | 40 | 369.678 | 207,036 | 14085 | -70.44 | 14014.56 | 4 | 10539.21332 | 400,000 | 4652 |
| KW13-b28c512nbt-0926 | 20 | 5 | 15 | 25 | 369.541 | 210,117 | 14085 | -226.07 | 13858.93 | 7 | 10418.51515 | 600,000 | 4635 |
| KW14-b28c512nbt-0927 | 20 | 7 | 13 | 35 | 361.964 | 202,978 | 14085 | -120.41 | 13964.59 | 5 | 10516.77178 | 1,240,000 | 4647 |
| KW15-b28c512nbt-0928 | 20 | 9 | 11 | 45 | 333.465 | 186,586 | 14085 | -38.38 | 14046.62 | 3 | 10642.93645 | 1,940,000 | 4656 |
| KW16-b28c512nbt-0928 | 20 | 7 | 13 | 35 | 364.026 | 207,507 | 14085 | -155.63 | 13929.37 | N/A | 10481.71731 | 2,640,000 | 4643 |

⚡ KW16 did not improve in strength (it only stabilized): it dropped around 120 Elo and is one of the worst-performing models in the series, which is why it is not the latest release.
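
The size of the regression can be read off the Calculated Elo column (KW15 vs KW16):

```python
# Calculated Elo values from the table above.
kw15_elo, kw16_elo = 14046.62, 13929.37
drop = kw15_elo - kw16_elo
print(round(drop, 2))  # 117.25, i.e. roughly 120 Elo
```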


🚀 Usage Instructions

1. Download the model

wget https://github.com/changcheng967/kata_web/releases/download/kw16-b28c512nbt-0928/kw16-b28c512nbt-0928.bin.gz
mv kw16-b28c512nbt-0928.bin.gz ~/KataGo/models/

2. Run with GTP

./cpp/main gtp \
  -model ~/KataGo/models/kw16-b28c512nbt-0928.bin.gz \
  -config cpp/configs/gtp.cfg

Example session:

boardsize 19
clear_board
genmove B

3. GUI Integration

  • Sabaki → add engine: ./cpp/main, args: gtp -model models/kw16-b28c512nbt-0928.bin.gz
  • Lizzie → configure with same GTP arguments
  • KaTrain → add as custom GTP engine

📦 Files Included

  • kw16-b28c512nbt-0928.bin.gz: Final exported model
  • training_log.txt

Full Changelog: kw15-b28c512nbt-0928...kw16-b28c512nbt-0928

kw15-b28c512nbt-0928

28 Sep 21:58
e1ce029


🏆 kata_web Custom Fine-Tuned Model Release: kw15-b28c512nbt-0928

🔗 Trained using the KataGo open-source framework — full credit to KataGo authors, license, and required attributions.


📌 Overview

kw15 is a fine-tuned 19x19 Go model based on the KW14 checkpoint (kw14-b28c512nbt-0927).
It was trained for 1 epoch on 700,000 samples, totaling 1,940,000 training steps. This continues the KW-series experiments, with improved efficiency and stronger performance compared to KW14.


🧠 Model Information

  • Model Name: kw15-b28c512nbt-0928.bin.gz
  • Configuration: b28c512nbt (28 blocks, 512 channels)
  • Board Size: 19x19
  • File Size: ~330 MB
  • Base Model: kw14-b28c512nbt-0927
  • Training Steps: 1,940,000
  • Training Data: 700k rows × 1 epoch of shuffled self-play
  • Training Time: ~1 hour (A100-class GPU, approximate)
  • Framework: KataGo v1.17.0+ (PyTorch export)

📊 Performance Metrics

| Model | Games | Wins | Losses | Win rate (%) | Avg Time (s) | Total Playouts | Baseline Elo | Elo Diff | Calculated Elo | Rank | Weighted Elo | Samples | Elo Realistic |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline (FMGo7) | 0 | | | | | | 14085 | | 14085 | 1 | #VALUE! | | 4660 |
| KW10-b28c512nbt-0926 | 20 | 6 | 14 | 30 | 341.182 | 192,826 | 14085 | -155.63 | 13929.37 | 6 | 10533.09615 | 50,000 | 4643 |
| KW11-b28c512nbt-0926 | 20 | 9 | 11 | 45 | 351.037 | 195,286 | 14085 | -35.22 | 14049.78 | 2 | 10606.35553 | 200,000 | 4656 |
| KW12-b28c512nbt-0926 | 20 | 8 | 12 | 40 | 369.678 | 207,036 | 14085 | -70.44 | 14014.56 | 4 | 10539.21332 | 400,000 | 4652 |
| KW13-b28c512nbt-0926 | 20 | 5 | 15 | 25 | 369.541 | 210,117 | 14085 | -226.07 | 13858.93 | 7 | 10418.51515 | 600,000 | 4635 |
| KW14-b28c512nbt-0927 | 20 | 7 | 13 | 35 | 361.964 | 202,978 | 14085 | -120.41 | 13964.59 | 5 | 10516.77178 | 1,240,000 | 4647 |
| KW15-b28c512nbt-0928 | 20 | 9 | 11 | 45 | 333.465 | 186,586 | 14085 | -38.38 | 14046.62 | 3 | 10642.93645 | 1,940,000 | 4656 |
| KW16-b28c512nbt-0928 | 20 | 0 | 20 | 0 | | | 14085 | #VALUE! | | | | 2,640,000 | #VALUE! |

⚡ KW15 shows a clear improvement over KW14 and KW11, with higher efficiency (lower average time, fewer playouts) and stronger Weighted Elo performance.


🚀 Usage Instructions

1. Download the model

wget https://github.com/changcheng967/kata_web/releases/download/v1.0/kw15-b28c512nbt-0928.bin.gz
mv kw15-b28c512nbt-0928.bin.gz ~/KataGo/models/

2. Run with GTP

./cpp/main gtp \
  -model ~/KataGo/models/kw15-b28c512nbt-0928.bin.gz \
  -config cpp/configs/gtp.cfg

Example session:

boardsize 19
clear_board
genmove B

3. GUI Integration

  • Sabaki → add engine: ./cpp/main, args: gtp -model models/kw15-b28c512nbt-0928.bin.gz
  • Lizzie → configure with same GTP arguments
  • KaTrain → add as custom GTP engine

📦 Files Included

  • kw15-b28c512nbt-0928.bin.gz: Final exported model
  • training_log.txt

kw14-b28c512nbt-0927

27 Sep 19:56
9225d59


🏆 kata_web Custom Fine-Tuned Model Release: kw14-b28c512nbt-0927

🔗 Trained using the KataGo open-source framework — full credit to KataGo authors, license, and required attributions.


📌 Overview

kw14 is a fine-tuned 19x19 Go model based on the official KataGo foundation checkpoint (kata1-b28c512nbt-s10904468224-d5317014586).
It was trained for 2 epochs on 620,000 samples per epoch, totaling 1,240,000 samples, continuing the KW-series experiments with deeper data exposure.


🧠 Model Information

  • Model Name: kw14-b28c512nbt-0927.bin.gz
  • Configuration: b28c512nbt (28 blocks, 512 channels)
  • Board Size: 19x19
  • File Size: ~330 MB
  • Base Model: kata1-b28c512nbt-s10904468224-d5317014586
  • Training Steps: ~1,240,000 samples (2 epochs)
  • Training Data: 620k rows × 2 epochs of shuffled self-play
  • Training Time: ~1 hour (A100-class GPU, approximate)
  • Framework: KataGo v1.17.0+ (PyTorch export)

📊 Performance Metrics

  • Final Loss: reported from training only
  • Validation: none used (self-play fine-tune)
  • SWA: ❌ disabled

⚠️ Strength improvement over KW13 is modest; Elo trails KW11 and KW12 in match testing.


⚙️ Training & Export

Training Command

python train.py \
  -traindir /notebooks/katago/shuffledata \
  -datadir /notebooks/katago/shuffledata \
  -exportdir /notebooks/katago/models \
  -exportprefix "kw14-b28c512nbt-0927" \
  -initial-checkpoint /notebooks/katago/initial_checkpoint/kata1-b28c512nbt-s10904468224-d5317014586/model.ckpt \
  -use-fp16 \
  -batch-size 64 \
  -pos-len 19 \
  -max-epochs-this-instance 2 \
  -lr-scale-auto \
  -epochs-per-export 1 \
  -samples-per-epoch 620000

Export Command

python export_model_pytorch.py \
  -checkpoint /notebooks/katago/models/kw14-b28c512nbt-0927-s10905030448-d612345/model.ckpt \
  -export-dir /notebooks/katago/models \
  -model-name kw14-b28c512nbt-0927 \
  -filename-prefix kw14-b28c512nbt-0927

🚀 Usage Instructions

1. Download the model

wget https://github.com/changcheng967/kata_web/releases/download/v1.0/kw14-b28c512nbt-0927.bin.gz
mv kw14-b28c512nbt-0927.bin.gz ~/KataGo/models/

2. Run with GTP

./cpp/main gtp \
  -model ~/KataGo/models/kw14-b28c512nbt-0927.bin.gz \
  -config cpp/configs/gtp.cfg

Example session:

boardsize 19
clear_board
genmove B

3. GUI Integration

  • Sabaki → add engine: ./cpp/main, args: gtp -model models/kw14-b28c512nbt-0927.bin.gz
  • Lizzie → configure with same GTP arguments
  • KaTrain → add as custom GTP engine

📦 Files Included

  • kw14-b28c512nbt-0927.bin.gz: Final exported model
  • training_log.txt: Training log (1.24M samples, 2 epochs)
  • README.md: This release note

🌟 Highlights

  • Fine-tuned with 1.24 million samples over 2 epochs
  • Fully reproducible with clear commands
  • Useful for education, experiments, and pipeline testing
  • Performance slightly below KW11/KW12, but stronger than KW13

📝 License

This release complies with the KataGo license.
Full credit to the KataGo team for their work and open-source framework.


What's Changed

  • Potential fix for code scanning alert no. 2: Workflow does not contain permissions by @changcheng967 in PR #16
  • Potential fix for code scanning alert no. 1: Workflow does not contain permissions by @changcheng967 in PR #15

Full Changelog: KW-20250919-b18c384nbt-71k-9x9-final → kw14-b28c512nbt-0927