2 changes: 1 addition & 1 deletion docs/appendix/enhanced-js.md
@@ -2,7 +2,7 @@

Enhanced JS nodes allow you to utilize all built-in functions for external calls, such as networking and database operations. If your requirement is solely to process and operate on data records, it is recommended to use [standard JS nodes](standard-js.md).

-For detailed instructions on how to use enhanced JS nodes and explore various scenarios, please refer to the documentation and resources available for [JS processing node](../data-transformation/process-node.md#js-process).
+For detailed instructions on how to use enhanced JS nodes and explore various scenarios, please refer to the documentation and resources available for [JS processing node](../data-transformation/process-node.md#js-processing).

:::tip

2 changes: 1 addition & 1 deletion docs/appendix/standard-js.md
@@ -3,7 +3,7 @@

Standard JS nodes can only process and operate on data records. If you require the usage of system built-in functions for external calls, such as networking or database operations, you can utilize [enhanced JS nodes](enhanced-js.md).

-For information on how to use and scenarios, see [JS processing node](../data-transformation/process-node.md#js-process).
+For information on how to use and scenarios, see [JS processing node](../data-transformation/process-node.md#js-processing).

## DateUtil

2 changes: 1 addition & 1 deletion docs/case-practices/best-practice/alert-via-qqmail.md
@@ -62,7 +62,7 @@ The email authorization code is a special password used by QQ Mail to log into t
![SMTP Service Settings](../../images/qqmail_smtp_settings.png)

* **SMTP Service Account**: Enter your QQ email address.
-* **SMTP Service Password**: Enter the authorization code you obtained in [Step One](#mail-code).
+* **SMTP Service Password**: Enter the authorization code you obtained in [Step One](#step-one-obtain-email-authorization-code).
* **Encryption Method**: Select **SSL** to ensure security.
* **SMTP Service Host**: Enter **smtp.qq.com**, the SMTP server for sending emails from QQ Mail.
* **SMTP Service Port**: Enter **465** or **587**.
2 changes: 1 addition & 1 deletion docs/case-practices/best-practice/raw-logs-solution.md
@@ -11,7 +11,7 @@ To enhance the efficiency of capturing data changes, TapData supports not only u
* **Operating System**: Linux 64 or Windows 64 platforms.
* **Storage**: Supported file systems include ext4, btrfs, zfs, xfs, sshfs; supported database block sizes are 2k, 4k, 8k, 16k, 32k.
* **Port Requirements**: Some server ports must be open for service communication, including: default data transfer port: **8203**, web management default port: **8303**, raw log service port: **8190**.
-* **Permission**: The operating system user running the raw log plugin must have read access to redo log files; in addition to the permissions required for the source database as per the [Oracle Preparation Work](../../connectors/on-prem-databases/oracle.md#source) and enabling archive logs, additional permissions must be granted to simulate Oracle's data information structure and processes to cache part of Oracle Schema information to support the parsing of redo logs.
+* **Permission**: The operating system user running the raw log plugin must have read access to redo log files; in addition to the permissions required for the source database as per the [Oracle Preparation Work](../../connectors/on-prem-databases/oracle.md#as-a-source-database) and enabling archive logs, additional permissions must be granted to simulate Oracle's data information structure and processes to cache part of Oracle Schema information to support the parsing of redo logs.

```sql
-- Replace <DSTUSER> with the actual username
2 changes: 1 addition & 1 deletion docs/case-practices/pipeline-tutorial/mysql-to-bigquery.md
@@ -85,7 +85,7 @@ For more information, See [Management Tasks](../../data-transformation/manage-ta



-## <span id="faq"> FAQ</span>
+## FAQ

* Q: Why does Agent's machine require access to Google Cloud Services?

2 changes: 2 additions & 0 deletions docs/connectors/on-prem-databases/opengauss.md
@@ -9,7 +9,9 @@ Please follow the instructions below to successfully add and use openGauss datab
openGauss 3.0.0 and above

## Supported Field Types
+
<details>
+
<summary><b>Click to expand and view the detailed list</b></summary>

- smallint
4 changes: 4 additions & 0 deletions docs/connectors/on-prem-databases/sqlserver.md
@@ -59,6 +59,7 @@ In addition, for synchronization from SQL Server to PostgreSQL, extra support is
* When SQL Server is used as the source database and a DDL operation (such as adding a column) is performed on the fields of a table under incremental sync, you will need to restart change data capture for the table to avoid data synchronization errors or failures.

<details>
+
<summary>Restart change data capture for the corresponding table</summary>

```sql
@@ -342,8 +343,11 @@ When configuring SQL Server as the source node for a task, TapData provides seve
**A**: SQL Server 2005 does not support CDC. Incremental data capture can be done via [field polling](../../introduction/change-data-capture-mechanism.md), or by using the following method:

<details>
+
<summary>SQL Server 2005 as a Source Solution</summary>
+
Since CDC is supported from SQL Server 2008 onward, for earlier versions, you can simulate change data capture using Custom SQL. When replicating data from older versions, the source table must have a change-tracking column, such as <b>LAST_UPDATED_TIME</b>, which is updated with every insert or update. When creating the data replication task, set the task synchronization type to <b>Full</b>, enable <b>Repeat Custom SQL</b> as <b>True</b>, and provide appropriate Custom SQL in the mapping design.
+
</details>
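The docs do not show the Custom SQL itself; a minimal sketch of what such a polling query could look like follows. The table name `orders` and the five-minute window are hypothetical, only the <b>LAST_UPDATED_TIME</b> change-tracking column comes from the text above.

```sql
-- Hypothetical polling query for SQL Server 2005 (which lacks CDC).
-- "orders" is an illustrative table name; LAST_UPDATED_TIME is the
-- change-tracking column described above, updated on every insert/update.
SELECT *
FROM orders
WHERE LAST_UPDATED_TIME > DATEADD(MINUTE, -5, GETDATE());
```

With <b>Repeat Custom SQL</b> set to <b>True</b>, the task re-executes this query on each pass, approximating incremental capture for pre-2008 versions.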

* **Q**: Certain tables cannot enable CDC while others work normally. How can I resolve this issue without restarting the entire database?
2 changes: 1 addition & 1 deletion docs/connectors/on-prem-databases/tidb.md
@@ -165,6 +165,6 @@ GRANT SELECT, INSERT, UPDATE, DELETE, ALTER, CREATE, DROP ON *.* TO 'username';

1. Extract the downloaded file, then navigate to the extracted directory and run the `make` command to compile.

-2. Locate the generated **cdc** binary file and place it in the **{tapData-dir}/run-resource/ti-db/tool** directory on the TapData engine machine (replace if necessary).
+2. Locate the generated **cdc** binary file and place it in the **\{tapData-dir\}/run-resource/ti-db/tool** directory on the TapData engine machine (replace if necessary).

3. Use the `chmod` command to grant read, write, and execute permissions to the files in this directory.
5 changes: 3 additions & 2 deletions docs/connectors/others/dummy.md
@@ -4,7 +4,9 @@

Dummy is a data source that generates test data. This article describes how to add Dummy data sources to TapData Cloud.

-<details><summary>Supported Generated Field Types</summary>
+<details>
+
+<summary>Supported Generated Field Types</summary>

| Type | Description | Parameters |
| ------------------------- | ---------------------- | ------------------------------------------------------------ |
@@ -58,4 +60,3 @@ Dummy is a data source that generates test data. This article describes how to a
If the connection test fails, follow the prompts on the page to fix it.

:::
-
12 changes: 6 additions & 6 deletions docs/connectors/pre-check.md
@@ -16,22 +16,22 @@ After the data replication/development task configuration is completed, execute
| -------------------------- | ------------------------------------------------------------ |
| Task Sync Type Check | 【ERROR】【2023-01-01 00:00:00】【Task Sync Type Check】【Node-Source】The sync type (Full) of this node does not match the task sync type (Full + Incremental), the task cannot start normally, please check the relevant configuration. |
| Default Timezone Check | 【WARN】【2023-01-01 00:00:00】【Default Timezone Check】The default timezone connected by the source node (Node-Source) is inconsistent with the default timezone connected by the target node (Node-Target), which may cause inconsistency in synchronized data. |
-| Task Model Inference Check | 【ERROR】【2023-01-01 00:00:00】【Task Model Inference Check】Task configuration model inference failed, the task cannot start normally, please check the relevant issues. <br /> { <br />Node Name 1: Table Name 1: Field Name 1; <br />Node Name 2: Table Name 2: Field Name 2 <br />} |
+| Task Model Inference Check | 【ERROR】【2023-01-01 00:00:00】【Task Model Inference Check】Task configuration model inference failed, the task cannot start normally, please check the relevant issues. <br /> &#123; <br />Node Name 1: Table Name 1: Field Name 1; <br />Node Name 2: Table Name 2: Field Name 2 <br />&#125; |

## Source Node Check

| Check Item | Log Example |
| ------------------------------ | ------------------------------------------------------------ |
-| Source Connection Status Check | 【ERROR】【2023-01-01 00:00:00】【Source Connection Status Check】【Node-Source】The data connection of this node is unavailable, the task cannot start normally, please check the relevant issues. { Data source login permission check failed: Wrong username or password } |
-| Source Model Load Check | 【ERROR】【2023-01-01 00:00:00】【Source Model Load Check】【Node-Source】The data model loading of this node failed, the task cannot start normally, please check the relevant issues. <br />{ Table Name 1; <br />Table Name 2 <br />} |
+| Source Connection Status Check | 【ERROR】【2023-01-01 00:00:00】【Source Connection Status Check】【Node-Source】The data connection of this node is unavailable, the task cannot start normally, please check the relevant issues. &#123; Data source login permission check failed: Wrong username or password &#125; |
+| Source Model Load Check | 【ERROR】【2023-01-01 00:00:00】【Source Model Load Check】【Node-Source】The data model loading of this node failed, the task cannot start normally, please check the relevant issues. <br />&#123; Table Name 1; <br />Table Name 2 <br />&#125; |
| Source Type Mapping Check | 【WARN】【2023-01-01 00:00:00】【Source Type Mapping Check】【Node-Source】【Personinfo】【id】The data type of this field is temporarily unsupported, and will be ignored during data reading. |

## Target Node Check

| Check Item | Log Example |
| ------------------------------ | ------------------------------------------------------------ |
-| Target Connection Status Check | 【ERROR】【2023-01-01 00:00:00】【Target Connection Status Check】【Node-Target】The data connection of this node is unavailable, the task cannot start normally, please check the relevant issues. { Data source login permission check failed: Wrong username or password } |
-| Target Model Load Check | 【ERROR】【2023-01-01 00:00:00】【Target Model Load Check】【Node-Target】The data model loading of this node failed, the task cannot start normally, please check the relevant issues. { Table Name 1; Table Name 2 } |
+| Target Connection Status Check | 【ERROR】【2023-01-01 00:00:00】【Target Connection Status Check】【Node-Target】The data connection of this node is unavailable, the task cannot start normally, please check the relevant issues. &#123; Data source login permission check failed: Wrong username or password &#125; |
+| Target Model Load Check | 【ERROR】【2023-01-01 00:00:00】【Target Model Load Check】【Node-Target】The data model loading of this node failed, the task cannot start normally, please check the relevant issues. &#123; Table Name 1; Table Name 2 &#125; |
| Target Type Mapping Failure | 【WARN】【2023-01-01 00:00:00】【Target Type Mapping Check】【Node-Target】【Personinfo】【id】The data type of this field is temporarily unsupported, and will be ignored during data writing. |
| Target Type Mapping Warning | 【WARN】【2023-01-01 00:00:00】【Source Type Mapping Check】【Node-Target】【Personinfo】【pic】The target data type mapped by this field is a system-guessed result, which may be biased. Please check and confirm whether it meets the expectations, and adjust accordingly. |

@@ -41,4 +41,4 @@
| ----------- | -------------------------------- | ------------------------------------------------------------ |
| Source Node | MariaDB Timezone Check | 【WARN】【2023-01-01 00:00:00】【MariaDB Timezone Check】【Node-MariaDB】The timezone setting in the MariaDB connection configuration only takes effect in the Full phase and is invalid in the Incremental phase. |
| Target Node | Oracle NOT NULL Constraint Check | 【WARN】【2023-01-01 00:00:00】【Oracle NOT NULL Constraint Check】【Node-Oracle】【Personinfo】【id】When writing NOT NULL fields with Oracle data source as the target, "" cannot be processed. |
-| Target Node | CK Primary Key Type Check | 【WARN】【2023-01-01 00:00:00】【CK Primary Key Type Check】【Node-CK】【Personinfo】When using ClickHouse data source as the target and the primary key data type is floating-point, this data table does not support the processing of update delete events. |
+| Target Node | CK Primary Key Type Check | 【WARN】【2023-01-01 00:00:00】【CK Primary Key Type Check】【Node-CK】【Personinfo】When using ClickHouse data source as the target and the primary key data type is floating-point, this data table does not support the processing of update delete events. |
1 change: 1 addition & 0 deletions docs/connectors/saas-and-api/coding.md
@@ -47,6 +47,7 @@ For more details on data structures and event support, refer to Coding's [offici
- **Webhook**: Uses Coding’s Webhook feature to listen for events and sends notifications to the Tapdata platform via HTTP POST. If selected, click **Generate** to obtain a service URL, and follow the steps below to configure Webhook on the Coding platform.

<details>
+
<summary>Configure Webhook on the Coding Platform</summary>

1. Log in to the [Coding platform](https://e.coding.net/login) as an administrator.
6 changes: 4 additions & 2 deletions docs/connectors/saas-and-api/lark-im.md
@@ -23,7 +23,8 @@ Lark is an enterprise collaboration and management platform that integrates inst
```

<details>
-<summary>Fields Description</summary>
+
+<summary>Fields Description</summary>

| Field Name | Description |
| --------------- | ------------------------------------------------------------ |
@@ -33,7 +34,8 @@ Lark is an enterprise collaboration and management platform that integrates inst
| **content** | Message content, formatted as a JSON string |

For more details, see the [Send message content structure](https://open.feishu.cn/document/server-docs/im-v1/message-content-description/create_json).
-</details>
+
+</details>

## Prerequisites

3 changes: 2 additions & 1 deletion docs/connectors/saas-and-api/zoho-desk.md
@@ -97,7 +97,8 @@ Before connecting Zoho Desk to TapData, follow these steps to retrieve authentic


<details>
-<summary>Create Webhook (Click to expand) </summary>
+
+<summary>Create Webhook (Click to expand) </summary>

1. In [Zoho Desk](https://www.zoho.com/), click the ![settings](../../images/setting_icon.png) icon in the top-right.

7 changes: 5 additions & 2 deletions docs/connectors/warehouses-and-lake/big-query.md
@@ -20,8 +20,10 @@ TapData Platform's machine can access to Google Cloud Services.
2. On the redirected page, enter the role name and click **ADD PERMISSIONS**.
3. In the pop-up dialog, search for each permission one by one and grant them accordingly.

-<details>
-<summary>Minimum Permissions List (Click to expand) </summary>
+<details>
+
+<summary>Minimum Permissions List (Click to expand) </summary>
+
<div>
<div>
bigquery.datasets.create<br/>
@@ -45,6 +47,7 @@ TapData Platform's machine can access to Google Cloud Services.
bigquery.tables.updateData
</div>
</div>
+
</details>

2. After the permission selection is complete, click **CREATE**.
1 change: 1 addition & 0 deletions docs/connectors/warehouses-and-lake/gaussdb.md
@@ -49,6 +49,7 @@ import TabItem from '@theme/TabItem';


<details>
+
<summary><b>What is the distribution column?</b></summary>

In GaussDB (DWS), distribution columns determine how data is distributed across nodes in a distributed table, impacting query performance. For more details, see [Best Practices for Choosing Distribution Columns](https://support.huaweicloud.com/intl/en-us/performance-dws/dws_10_0042.html).
7 changes: 5 additions & 2 deletions docs/data-replication/create-task.md
@@ -23,8 +23,11 @@ Before you create a data replication task, you need to perform the following pre
As an example of creating a data replication task, the article demonstrates the real-time replication of data from MySQL to MongoDB. However, it's important to note that TapData supports replication tasks between various data sources, so you can configure replication between different combinations of databases based on your specific requirements.

<details>
-<summary>Best Practices</summary>
-To build efficient and reliable data replication tasks, it is recommended to read the <a href="../case-practices/best-practice/data-sync">Data Synchronization Best Practices</a> before starting to configure tasks.
+
+<summary>Best Practices</summary>
+
+To build efficient and reliable data replication tasks, it is recommended to read the <a href="../case-practices/best-practice/data-sync">Data Synchronization Best Practices</a> before starting to configure tasks.
+
</details>

1. Log in to TapData Platform.
4 changes: 3 additions & 1 deletion docs/data-transformation/create-views/overview.md
@@ -23,7 +23,9 @@ import TapDataAnimation from '@site/src/components/Animation/TapDataAnimation';
<TapDataAnimation />


-<details><summary>Ways to Add Related Fields</summary>
+<details>
+
+<summary>Ways to Add Related Fields</summary>

When designing your Incremental Materialized View, you can choose how data from related tables is included in your main record. TapData lets you customize this structure to match your analysis needs and downstream use cases:

6 changes: 4 additions & 2 deletions docs/data-transformation/create-views/using-tapflow.md
@@ -71,7 +71,10 @@ You can build the real-time, incremental materialized view directly in **TapShel
)
```

-<details><summary>Understanding type and path in .lookup()</summary>
+<details>
+
+<summary>Understanding type and path in .lookup()</summary>
+
These parameters control **how** related data is merged:

- **type="object"** – embeds the joined record as a nested document at `path`. Ideal for one-to-one enrichments like adding user profiles inside orders.
@@ -247,4 +250,3 @@ This structure is analysis-ready, API-friendly, and tailored for real-time use.
- **Publish the view as an API** so other teams or systems can consume fresh, structured order data via REST or GraphQL.


-
1 change: 1 addition & 0 deletions docs/experimental/mcp/quick-start.md
@@ -92,6 +92,7 @@ import TabItem from '@theme/TabItem';
```

<details>
+
<summary><b>Understand Tapdata MCP Server Primitives</b></summary>

Tapdata MCP Server is built on three core primitives: **Prompts**, **Resources**, and **Tools**. These form the foundation for AI models to interact with data systems—allowing them to discover available resources, select appropriate operations, and use prompt templates to retrieve structured context for accurate and efficient reasoning.