docs/source_en/Usage Guide/Server and Client/Overview.md (23 additions, 21 deletions)
@@ -1,4 +1,4 @@
-# Server and Client
+# Overview
 
 Twinkle provides a complete HTTP Server/Client architecture that supports deploying models as services and remotely calling them through clients to complete training, inference, and other tasks. This architecture decouples **model hosting (Server side)** and **training logic (Client side)**, allowing multiple users to share the same base model for training.
 
@@ -14,7 +14,7 @@ Twinkle Server supports two protocol modes:
 | Mode | server_type | Description |
 |------|------------|------|
 |**Twinkle Server**|`twinkle`| Native Twinkle protocol, used with `twinkle_client`, simpler API |
-|**Tinker Compatible Server**|`tinker`| Compatible with Tinker protocol, used with `init_tinker_compat_client`, can reuse existing Tinker training code |
+|**Tinker Compatible Server**|`tinker`| Compatible with Tinker protocol, can reuse existing Tinker training code |
 
 ### Two Model Backends
 
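The server scripts shown later in this page read their settings from a `server_config.yaml`. As a hedged illustration of where the `server_type` value from the table above would live, a minimal config might look like the sketch below; only the `twinkle`/`tinker` values are documented here, and every other key and value is an assumption, not the actual Twinkle schema.

```yaml
# Hypothetical server_config.yaml sketch.
# Only server_type's values (twinkle / tinker) come from the table above;
# all other keys are illustrative assumptions.
server_type: twinkle          # or "tinker" for Tinker-protocol compatibility
model: Qwen/Qwen2.5-7B-Instruct   # hypothetical model id
host: 0.0.0.0
port: 8000
```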
@@ -30,7 +30,7 @@ Regardless of Server mode, model loading supports two backends:
 | Client | Initialization Method | Description |
 |--------|---------|------|
 |**Twinkle Client**|`init_twinkle_client`| Native client, simply change `from twinkle import` to `from twinkle_client import` to migrate local training code to remote calls |
-|**Tinker Compatible Client**|`init_tinker_compat_client`| Patches Tinker SDK, allowing existing Tinker training code to be directly reused |
+|**Tinker Client**|`init_tinker_client`| Patches Tinker SDK, allowing existing Tinker training code to be directly reused |
 
 ## How to Choose
 
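The Twinkle Client migration described in the table above is purely mechanical: only the import path changes. The sketch below illustrates that one-line rewrite; the helper function and the `TrainingClient` name are hypothetical illustrations, not part of either SDK.

```python
def to_remote(source: str) -> str:
    """Rewrite local Twinkle imports for remote use.

    Illustrative helper only -- not part of the twinkle or
    twinkle_client packages. It shows that migrating local training
    code to remote calls is a textual import swap.
    """
    return source.replace("from twinkle import", "from twinkle_client import")


# Hypothetical local training code; TrainingClient is an assumed name.
local_code = "from twinkle import TrainingClient"
print(to_remote(local_code))  # → from twinkle_client import TrainingClient
```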
@@ -47,7 +47,7 @@ Regardless of Server mode, model loading supports two backends:
 | Scenario | Recommendation |
 |------|------|
 | Existing Twinkle local training code, want to switch to remote | Twinkle Client — only need to change import paths |
-| Existing Tinker training code, want to reuse | Tinker Compatible Client — only need to initialize patch |
+| Existing Tinker training code, want to reuse | Tinker Client — only need to initialize patch |
 | New project | Twinkle Client — simpler API |
 
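The scenario table above is a simple decision rule. As a minimal sketch (the function and scenario keys below are hypothetical, introduced only to encode the table), it reduces to a lookup:

```python
def recommend_client(scenario: str) -> str:
    """Encode the client-selection table above as a lookup.

    Illustrative only; the scenario keys are invented labels for the
    three rows of the table, not Twinkle API identifiers.
    """
    recommendations = {
        "existing_twinkle_local": "Twinkle Client",  # change import paths
        "existing_tinker_code": "Tinker Client",     # initialize patch
        "new_project": "Twinkle Client",             # simpler API
    }
    return recommendations[scenario]
```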
### Model Backend Selection
@@ -65,33 +65,35 @@ Complete runnable examples are located in the `cookbook/client/` directory:
 ```
 cookbook/client/
 ├── twinkle/                    # Twinkle native protocol examples
-│   ├── transformer/            # Transformers backend
+│   ├── transformer/            # Transformers backend server config
 │   │   ├── server.py           # Startup script
-│   │   ├── server_config.yaml  # Configuration file
-│   │   └── lora.py             # LoRA training client
-│   └── megatron/               # Megatron backend
-│       ├── server.py
-│       ├── server_config.yaml
-│       └── lora.py
+│   │   └── server_config.yaml  # Configuration file
+│   ├── megatron/               # Megatron backend server config
+│   │   ├── server.py
+│   │   └── server_config.yaml
+│   ├── grpo.py                 # GRPO training client
+│   ├── sample.py               # Inference sampling client
+│   └── self_congnition.py      # Self-cognition training client
 └── tinker/                     # Tinker compatible protocol examples
-    ├── transformer/            # Transformers backend
+    ├── transformer/            # Transformers backend server config