Merged
22 changes: 11 additions & 11 deletions README.md
@@ -51,18 +51,18 @@ opengradient config init
import os
import opengradient as og

-og_client = og.new_client(
+client = og.init(
+    private_key=os.environ.get("OG_PRIVATE_KEY"),
    email=None, # Optional: only needed for model uploads
    password=None,
-    private_key=os.environ.get("OG_PRIVATE_KEY"),
)
```

### 3. Basic Usage

#### LLM Chat
```python
-completion = og_client.llm_chat(
+completion = client.llm.chat(
model=og.TEE_LLM.GPT_4O,
messages=[{"role": "user", "content": "Hello!"}],
)
@@ -73,7 +73,7 @@ print(f"Tx hash: {completion.transaction_hash}")
#### Custom Model Inference
Browse models on our [Model Hub](https://hub.opengradient.ai/) or upload your own:
```python
-result = og_client.infer(
+result = client.inference.infer(
model_cid="your-model-cid",
model_input={"input": [1.0, 2.0, 3.0]},
inference_mode=og.InferenceMode.VANILLA,
@@ -86,7 +86,7 @@ print(f"Output: {result.model_output}")
OpenGradient supports secure, verifiable inference through TEE for leading LLM providers. Access models from OpenAI, Anthropic, Google, and xAI with cryptographic attestation:
```python
# Use TEE mode for verifiable AI execution
-completion = og_client.llm_chat(
+completion = client.llm.chat(
model=og.TEE_LLM.CLAUDE_3_7_SONNET,
messages=[{"role": "user", "content": "Your message here"}],
)
@@ -112,10 +112,10 @@ The Alpha Testnet provides access to experimental features, including **workflow
```python
import opengradient as og

-og.init(
+client = og.init(
+    private_key="your-private-key",
    email="your-email",
    password="your-password",
-    private_key="your-private-key",
)

# Define input query for historical price data
@@ -129,7 +129,7 @@ input_query = og.HistoricalInputQuery(
)

# Deploy a workflow (optionally with scheduling)
-contract_address = og.alpha.new_workflow(
+contract_address = client.alpha.new_workflow(
model_cid="your-model-cid",
input_query=input_query,
input_tensor_name="input",
@@ -141,14 +141,14 @@ print(f"Workflow deployed at: {contract_address}")
#### Execute and Read Results
```python
# Manually trigger workflow execution
-result = og.alpha.run_workflow(contract_address)
+result = client.alpha.run_workflow(contract_address)
print(f"Inference output: {result}")

# Read the latest result
-latest = og.alpha.read_workflow_result(contract_address)
+latest = client.alpha.read_workflow_result(contract_address)

# Get historical results
-history = og.alpha.read_workflow_history(contract_address, num_results=5)
+history = client.alpha.read_workflow_history(contract_address, num_results=5)
```

### 6. Examples
@@ -4,7 +4,7 @@ outline: [2,3]



-# Package opengradient.llm
+# Package opengradient.agents

OpenGradient LLM Adapters

@@ -19,7 +19,7 @@ into existing applications and agent frameworks.
### Langchain adapter

```python
-def langchain_adapter(private_key: str, model_cid: opengradient.types.LLM, max_tokens: int = 300) ‑> opengradient.llm.og_langchain.OpenGradientChatModel
+def langchain_adapter(private_key: str, model_cid: opengradient.types.LLM, max_tokens: int = 300) ‑> opengradient.agents.og_langchain.OpenGradientChatModel
```


@@ -34,7 +34,7 @@ and can be plugged into LangChain agents.
### Openai adapter

```python
-def openai_adapter(private_key: str) ‑> opengradient.llm.og_openai.OpenGradientOpenAIClient
+def openai_adapter(private_key: str) ‑> opengradient.agents.og_openai.OpenGradientOpenAIClient
```


4 changes: 2 additions & 2 deletions docs/opengradient/alphasense/index.md
@@ -15,7 +15,7 @@ OpenGradient AlphaSense Tools
### Create read workflow tool

```python
-def create_read_workflow_tool(tool_type: opengradient.alphasense.types.ToolType, workflow_contract_address: str, tool_name: str, tool_description: str, output_formatter: Callable[..., str] = <function <lambda>>) ‑> Union[langchain_core.tools.base.BaseTool, Callable]
+def create_read_workflow_tool(tool_type: opengradient.alphasense.types.ToolType, workflow_contract_address: str, tool_name: str, tool_description: str, alpha: Optional[opengradient.client.alpha.Alpha] = None, output_formatter: Callable[..., str] = <function <lambda>>) ‑> Union[langchain_core.tools.base.BaseTool, Callable]
```
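The `output_formatter` defaults to an identity-style lambda; passing a custom formatter lets the tool shape the raw workflow result into agent-readable text. A minimal sketch (the formatter below and the commented-out wiring are illustrative, not the SDK's own code; the `ToolType` member name and contract address are assumed):

```python
# Illustrative formatter for create_read_workflow_tool: converts a raw
# workflow result into a sentence the agent can consume. Only this function
# runs standalone; the tool creation itself requires the SDK.
def format_workflow_result(result) -> str:
    return f"Latest workflow output: {result}"

# tool = create_read_workflow_tool(
#     tool_type=ToolType.LANGCHAIN,            # member name assumed
#     workflow_contract_address="0x...",       # hypothetical address
#     tool_name="read_eth_forecast",
#     tool_description="Reads the latest ETH forecast from the workflow",
#     output_formatter=format_workflow_result,
# )

print(format_workflow_result(42.0))
```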


@@ -44,7 +44,7 @@ Callable: For ToolType.SWARM, returns a decorated function with appropriate meta
### Create run model tool

```python
-def create_run_model_tool(tool_type: opengradient.alphasense.types.ToolType, model_cid: str, tool_name: str, model_input_provider: Callable[..., Dict[str, Union[str, int, float, List, numpy.ndarray]]], model_output_formatter: Callable[[opengradient.types.InferenceResult], str], tool_input_schema: Optional[Type[pydantic.main.BaseModel]] = None, tool_description: str = 'Executes the given ML model', inference_mode: opengradient.types.InferenceMode = InferenceMode.VANILLA) ‑> Union[langchain_core.tools.base.BaseTool, Callable]
+def create_run_model_tool(tool_type: opengradient.alphasense.types.ToolType, model_cid: str, tool_name: str, model_input_provider: Callable[..., Dict[str, Union[str, int, float, List, numpy.ndarray]]], model_output_formatter: Callable[[opengradient.types.InferenceResult], str], inference: Optional[opengradient.client.onchain_inference.Inference] = None, tool_input_schema: Optional[Type[pydantic.main.BaseModel]] = None, tool_description: str = 'Executes the given ML model', inference_mode: opengradient.types.InferenceMode = InferenceMode.VANILLA) ‑> Union[langchain_core.tools.base.BaseTool, Callable]
```
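The two required callables are the interesting part of this signature: `model_input_provider` maps the agent's tool arguments to named input tensors, and `model_output_formatter` turns the `InferenceResult` into text. A hedged sketch of such a pair (the attribute name `model_output` follows the README example above; everything else is hypothetical):

```python
# Illustrative callables for create_run_model_tool; not the SDK's own code.
def model_input_provider(**tool_args):
    # Map the agent-supplied arguments to the model's named inputs.
    # A real provider would use tool_args (e.g. prices, timestamps).
    return {"input": [1.0, 2.0, 3.0]}

def model_output_formatter(result) -> str:
    # In the SDK, `result` is an InferenceResult; fall back to the raw
    # value here so the sketch runs standalone.
    output = getattr(result, "model_output", result)
    return f"Model output: {output}"
```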


@@ -4,7 +4,7 @@ outline: [2,3]



-# Package opengradient.alpha
+# Package opengradient.client.alpha

Alpha Testnet features for OpenGradient SDK.

@@ -16,7 +16,7 @@ including workflow management and ML model execution.

### Alpha

-<code>class <b>Alpha</b>(client: Client)</code>
+<code>class <b>Alpha</b>(blockchain: [Web3](docs/main.md#Web3), wallet_account: [local](docs/signers.md#local))</code>



53 changes: 53 additions & 0 deletions docs/opengradient/client/client.md
@@ -0,0 +1,53 @@
---
outline: [2,3]
---



# Package opengradient.client.client

## Classes


### Client

<code>class <b>Client</b>(private_key: str, email: Optional[str] = None, password: Optional[str] = None, rpc_url: str = 'https://ogevmdevnet.opengradient.ai', api_url: str = 'https://sdk-devnet.opengradient.ai', contract_address: str = '0x8383C9bD7462F12Eb996DD02F78234C0421A6FaE', og_llm_server_url: Optional[str] = 'https://llmogevm.opengradient.ai', og_llm_streaming_server_url: Optional[str] = 'https://llmogevm.opengradient.ai')</code>




Initialize the OpenGradient client.


**Arguments**

* **`private_key`**: Private key for OpenGradient transactions.
* **`email`**: Email for Model Hub authentication. Optional.
* **`password`**: Password for Model Hub authentication. Optional.
* **`rpc_url`**: RPC URL for the blockchain network.
* **`api_url`**: API URL for the OpenGradient API.
* **`contract_address`**: Inference contract address.
* **`og_llm_server_url`**: OpenGradient LLM server URL.
* **`og_llm_streaming_server_url`**: OpenGradient LLM streaming server URL.
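Since every argument except `private_key` has a default, a minimal setup only needs the key. A hedged sketch of environment-based configuration (the env-var names are conventions from the README above; the `Client` call is commented out because it requires the SDK):

```python
import os

# Fall back to the default devnet endpoints documented above.
rpc_url = os.environ.get("OG_RPC_URL", "https://ogevmdevnet.opengradient.ai")
private_key = os.environ.get("OG_PRIVATE_KEY", "")

# Requires the SDK; shown for shape only.
# from opengradient.client.client import Client
# client = Client(private_key=private_key, rpc_url=rpc_url)
```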


#### Variables



* static `inference : opengradient.client.onchain_inference.Inference` - Namespace for on-chain model inference.

* static `llm : opengradient.client.llm.LLM` - Namespace for LLM chat and completions.

* static `model_hub : opengradient.client.model_hub.ModelHub` - Namespace for Model Hub operations.



* `alpha` - Access Alpha Testnet features.

Returns:
Alpha: Alpha namespace with workflow and ML model execution methods.

Example:
client = og.Client(...)
result = client.alpha.new_workflow(model_cid, input_query, input_tensor_name)
188 changes: 188 additions & 0 deletions docs/opengradient/client/exceptions.md
@@ -0,0 +1,188 @@
---
outline: [2,3]
---



# Package opengradient.client.exceptions

## Classes


### AuthenticationError

<code>class <b>AuthenticationError</b>(message='Authentication failed', **kwargs)</code>




Raised when there's an authentication error





### FileNotFoundError

<code>class <b>FileNotFoundError</b>(file_path)</code>




Raised when a file is not found





### InferenceError

<code>class <b>InferenceError</b>(message, model_cid=None, **kwargs)</code>




Raised when there's an error during inference





### InsufficientCreditsError

<code>class <b>InsufficientCreditsError</b>(message='Insufficient credits', required_credits=None, available_credits=None, **kwargs)</code>




Raised when the user has insufficient credits for the operation





### InvalidInputError

<code>class <b>InvalidInputError</b>(message, invalid_fields=None, **kwargs)</code>




Raised when invalid input is provided





### NetworkError

<code>class <b>NetworkError</b>(message, status_code=None, response=None)</code>




Raised when a network error occurs





### OpenGradientError

<code>class <b>OpenGradientError</b>(message, status_code=None, response=None)</code>




Base exception for OpenGradient SDK


#### Subclasses
* `AuthenticationError`
* `FileNotFoundError`
* `InferenceError`
* `InsufficientCreditsError`
* `InvalidInputError`
* `NetworkError`
* `RateLimitError`
* `ResultRetrievalError`
* `ServerError`
* `TimeoutError`
* `UnsupportedModelError`
* `UploadError`



### RateLimitError

<code>class <b>RateLimitError</b>(message='Rate limit exceeded', retry_after=None, **kwargs)</code>




Raised when API rate limit is exceeded





### ResultRetrievalError

<code>class <b>ResultRetrievalError</b>(message, inference_cid=None, **kwargs)</code>




Raised when there's an error retrieving results





### ServerError

<code>class <b>ServerError</b>(message, status_code=None, response=None)</code>




Raised when a server error occurs





### TimeoutError

<code>class <b>TimeoutError</b>(message='Request timed out', timeout=None, **kwargs)</code>




Raised when a request times out





### UnsupportedModelError

<code>class <b>UnsupportedModelError</b>(model_type)</code>




Raised when an unsupported model type is used





### UploadError

<code>class <b>UploadError</b>(message, file_path=None, **kwargs)</code>




Raised when there's an error during file upload
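Because every exception above subclasses `OpenGradientError`, callers can catch the base class to handle any SDK failure in one place. A minimal sketch of that pattern, using stand-in classes that mirror the documented signatures (not the SDK's own code):

```python
# Stand-in classes mirroring the signatures above, for illustration only.
class OpenGradientError(Exception):
    def __init__(self, message, status_code=None, response=None):
        super().__init__(message)
        self.status_code = status_code
        self.response = response

class RateLimitError(OpenGradientError):
    def __init__(self, message="Rate limit exceeded", retry_after=None, **kwargs):
        super().__init__(message, **kwargs)
        self.retry_after = retry_after

try:
    raise RateLimitError(retry_after=30, status_code=429)
except OpenGradientError as err:
    # One handler covers every SDK error, including rate limits.
    print(f"SDK call failed ({err}); retry_after={getattr(err, 'retry_after', None)}")
```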