diff --git a/_static/images/New_Dark_Gray.png b/_static/images/New_Dark_Gray.png
new file mode 100644
index 000000000..34ac016eb
Binary files /dev/null and b/_static/images/New_Dark_Gray.png differ
diff --git a/_static/images/SAP_BO.png b/_static/images/SAP_BO.png
new file mode 100644
index 000000000..413ce7d0e
Binary files /dev/null and b/_static/images/SAP_BO.png differ
diff --git a/_static/images/SAP_BO_2.png b/_static/images/SAP_BO_2.png
new file mode 100644
index 000000000..91bc53d1a
Binary files /dev/null and b/_static/images/SAP_BO_2.png differ
diff --git a/_static/images/SQream_logo_without background-15.png b/_static/images/SQream_logo_without background-15.png
new file mode 100644
index 000000000..1a4460581
Binary files /dev/null and b/_static/images/SQream_logo_without background-15.png differ
diff --git a/_static/images/chunks_and_extents.png b/_static/images/chunks_and_extents.png
index bb092fab7..972b624e3 100644
Binary files a/_static/images/chunks_and_extents.png and b/_static/images/chunks_and_extents.png differ
diff --git a/_static/images/color_table.png b/_static/images/color_table.png
new file mode 100644
index 000000000..b815f9616
Binary files /dev/null and b/_static/images/color_table.png differ
diff --git a/_static/images/new.png b/_static/images/new.png
new file mode 100644
index 000000000..a0df8ff0f
Binary files /dev/null and b/_static/images/new.png differ
diff --git a/_static/images/new_2022.1.1.png b/_static/images/new_2022.1.1.png
new file mode 100644
index 000000000..2ffb80039
Binary files /dev/null and b/_static/images/new_2022.1.1.png differ
diff --git a/_static/images/new_2022.1.png b/_static/images/new_2022.1.png
new file mode 100644
index 000000000..27b2d285a
Binary files /dev/null and b/_static/images/new_2022.1.png differ
diff --git a/_static/images/new_dark_gray_2022.1.1.png b/_static/images/new_dark_gray_2022.1.1.png
new file mode 100644
index 000000000..6d290734a
Binary files /dev/null and b/_static/images/new_dark_gray_2022.1.1.png differ
diff --git a/_static/images/new_gray_2022.1.1.png b/_static/images/new_gray_2022.1.1.png
new file mode 100644
index 000000000..7c6cd28db
Binary files /dev/null and b/_static/images/new_gray_2022.1.1.png differ
diff --git a/_static/images/storage_organization.png b/_static/images/storage_organization.png
index 8cde6d70e..d6a06d763 100644
Binary files a/_static/images/storage_organization.png and b/_static/images/storage_organization.png differ
diff --git a/_static/images/table_columns_storage.png b/_static/images/table_columns_storage.png
index 322538dac..071e140ea 100644
Binary files a/_static/images/table_columns_storage.png and b/_static/images/table_columns_storage.png differ
diff --git a/architecture/filesystem_and_filesystem_usage.rst b/architecture/filesystem_and_filesystem_usage.rst
index d1838d4e8..634097e23 100644
--- a/architecture/filesystem_and_filesystem_usage.rst
+++ b/architecture/filesystem_and_filesystem_usage.rst
@@ -27,7 +27,7 @@
 The **cluster root** is the directory in which all data for SQream DB is stored.
 
 The databases directory houses all of the actual data in tables and columns.
 
-Each database is stored as it's own directory. Each table is stored under it's respective database, and columns are stored in their respective table.
+Each database is stored as its own directory. Each table is stored under its respective database, and columns are stored in their respective table.
 
 .. figure:: /_static/images/table_columns_storage.png
@@ -63,10 +63,10 @@
 Each column directory will contain extents, which are collections of chunks.
 
 .. figure:: /_static/images/chunks_and_extents.png
 
-``metadata`` or ``leveldb``
+``metadata`` or ``rocksdb``
 ----------------------------
 
-SQream DB's metadata is an embedded key-value store, based on LevelDB. LevelDB helps SQream DB ensure efficient storage for keys, handle atomic writes, snapshots, durability, and automatic recovery.
+SQream DB's metadata is an embedded key-value store, based on RocksDB. RocksDB helps SQream DB ensure efficient storage for keys, handle atomic writes, snapshots, durability, and automatic recovery.
 
 The metadata is where all database objects are stored, including roles, permissions, database and table structures, chunk mappings, and more.
diff --git a/architecture/internals_architecture.rst b/architecture/internals_architecture.rst
index f25dfeb22..571b5f9a0 100644
--- a/architecture/internals_architecture.rst
+++ b/architecture/internals_architecture.rst
@@ -45,7 +45,7 @@ The storage is split into the :ref:`metadata layer` and an appe
 Metadata layer
 ^^^^^^^^^^^^^^^^^^^^^^
 
-The metadata layer uses LevelDB, and uses LevelDB's snapshot and write atomic features as part of the transaction system.
+The metadata layer uses RocksDB, and uses RocksDB's snapshot and write atomic features as part of the transaction system.
 
 The metadata layer, together with the append-only bulk data layer help ensure consistency.
diff --git a/conf.py b/conf.py
index 95a622cd6..191fc86ce 100644
--- a/conf.py
+++ b/conf.py
@@ -21,12 +21,12 @@
 # -- Project information -----------------------------------------------------
 
 project = 'SQream DB'
-copyright = '2022 SQream'
+copyright = '2023 SQream'
 author = 'SQream Documentation'
 
 # The full version, including alpha/beta/rc tags
-release = '2021.2'
+release = '4.0.0'
@@ -68,7 +68,7 @@
     'css/custom.css',  # Relative to the _static path
 ]
 
-html_logo = '_static/images/sqream_logo.png'
+html_logo = '_static/images/SQream_logo_without background-15.png'
 
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
@@ -90,7 +90,7 @@
     'logo_only': True  # Hide "SQream DB" title and only show logo
     , 'display_version': True  # Display version at the top
     , 'style_external_links': True  # Show little icon next to external links
-    , 'style_nav_header_background': '#0f9790'  # SQream teal
+    , 'style_nav_header_background': '#133148'  # SQream dark blue
     , 'navigation_depth': -1
     , 'collapse_navigation': False
     , 'titles_only': True
diff --git a/configuration_guides/admin_cluster_flags.rst b/configuration_guides/admin_cluster_flags.rst
index 3c74819d6..f17cee34a 100644
--- a/configuration_guides/admin_cluster_flags.rst
+++ b/configuration_guides/admin_cluster_flags.rst
@@ -6,4 +6,4 @@ Cluster Administration Flags
 
 The **Cluster Administration Flags** page describes **Cluster** modification type flags, which can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command:
 
-* `Persisting Your Cache Directory `_
\ No newline at end of file
+* `Persisting Your Cache Directory `_
\ No newline at end of file
diff --git a/configuration_guides/admin_regular_flags.rst b/configuration_guides/admin_regular_flags.rst
index 5300310b6..fd7131692 100644
--- a/configuration_guides/admin_regular_flags.rst
+++ b/configuration_guides/admin_regular_flags.rst
@@ -3,29 +3,30 @@
 *************************
 Regular Administration Flags
 *************************
-The **Regular Administration Flags** page describes **Regular** modification type flags, which can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command:
+The **Regular Administration Flags** page describes **Regular** modification type flags, which can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command:
 
-* `Setting Bin Size `_
-* `Setting CUDA Memory `_
-* `Limiting Runtime to Utility Functions `_
-* `Enabling High Bin Control Granularity `_
-* `Reducing CPU Hashtable Sizes `_
-* `Setting Chunk Size for Copying from CPU to GPU `_
-* `Indicating GPU Synchronicity `_
-* `Enabling Modification of R&D Flags `_
-* `Checking for Post-Production CUDA Errors `_
-* `Enabling Modification of clientLogger_debug File `_
-* `Activating the NVidia Profiler Markers `_
-* `Appending String at End of Log Lines `_
-* `Monitoring and Printing Pinned Allocation Reports `_
-* `Increasing Chunk Size to Reduce Query Speed `_
-* `Adding Rechunker before Expensing Chunk Producer `_
-* `Setting the Buffer Size `_
-* `Setting Memory Used to Abort Server `_
-* `Splitting Large Reads for Concurrent Execution `_
-* `Setting Worker Amount to Handle Concurrent Reads `_
-* `Setting Implicit Casts in ORC Files `_
-* `Setting Timeout Limit for Locking Objects before Executing Statements `_
-* `Interpreting Decimal Literals as Double Instead of Numeric `_
-* `Interpreting VARCHAR as TEXT `_
-* `VARCHAR Identifiers `_
+* :ref:`Setting Bin Size`
+* :ref:`Setting CUDA Memory`
+* :ref:`Limiting Runtime to Utility Functions`
+* :ref:`Enabling High Bin Control Granularity`
+* :ref:`Reducing CPU Hashtable Sizes`
+* :ref:`Setting Chunk Size for Copying from CPU to GPU`
+* :ref:`Indicating GPU Synchronicity`
+* :ref:`Setting the Graceful Server Shutdown`
+* :ref:`Enabling Modification of R&D Flags`
+* :ref:`Checking for Post-Production CUDA Errors`
+* :ref:`Enabling Modification of clientLogger_debug File`
+* :ref:`Activating the NVidia Profiler Markers`
+* :ref:`Appending String at End of Log Lines`
+* :ref:`Monitoring and Printing Pinned Allocation Reports`
+* :ref:`Increasing Chunk Size to Reduce Query Speed`
+* :ref:`Adding Rechunker before Expensing Chunk Producer`
+* :ref:`Setting the Buffer Size`
+* :ref:`Setting Memory Used to Abort Server`
+* :ref:`Splitting Large Reads for Concurrent Execution`
+* :ref:`Setting Worker Amount to Handle Concurrent Reads`
+* :ref:`Setting Implicit Casts in ORC Files`
+* :ref:`Setting Timeout Limit for Locking Objects before Executing Statements`
+* :ref:`Interpreting Decimal Literals as Double Instead of Numeric`
+* :ref:`Using Legacy String Literals`
+* :ref:`Blocking New VARCHAR Objects`
diff --git a/configuration_guides/admin_worker_flags.rst b/configuration_guides/admin_worker_flags.rst
index 2be570695..130131b57 100644
--- a/configuration_guides/admin_worker_flags.rst
+++ b/configuration_guides/admin_worker_flags.rst
@@ -3,9 +3,18 @@
 *************************
 Worker Administration Flags
 *************************
+
+.. |icon-new_gray_2022.1.1| image:: /_static/images/new_gray_2022.1.1.png
+   :align: middle
+   :width: 110
+
+
 The **Worker Administration Flags** page describes **Worker** modification type flags, which can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command:
 
-* `Setting Total Device Memory Usage in SQream Instance `_
-* `Enabling Manually Setting Reported IP `_
-* `Setting Port Used for Metadata Server Connection `_
-* `Assigning Local Network IP `_
\ No newline at end of file
+* `Setting Total Device Memory Usage in SQream Instance `_
+* `Enabling Manually Setting Reported IP `_
+* `Setting Port Used for Metadata Server Connection `_
+* `Assigning Local Network IP `_
+* `Enabling the Query Healer `_
+* `Configuring the Query Healer `_
+* `Adjusting Permitted Log-in Attempts `_
\ No newline at end of file
diff --git a/configuration_guides/block_new_varchar_objects.rst b/configuration_guides/block_new_varchar_objects.rst
new file mode 100644
index 000000000..a64ff7995
--- /dev/null
+++ b/configuration_guides/block_new_varchar_objects.rst
@@ -0,0 +1,12 @@
+.. _block_new_varchar_objects:
+
+*************************
+Blocking New VARCHAR Objects
+*************************
+The ``blockNewVarcharObjects`` flag disables the creation of new tables, views, external tables containing Varchar columns, and the creation of user-defined functions with Varchar arguments or a Varchar return value.
+
+The following describes the ``blockNewVarcharObjects`` flag:
+
+* **Data type** - boolean
+* **Default value** - ``false``
+* **Allowed values** - ``true``, ``false``
\ No newline at end of file
diff --git a/configuration_guides/configuring_sqream.rst b/configuration_guides/configuring_sqream.rst
new file mode 100644
index 000000000..901af9e3d
--- /dev/null
+++ b/configuration_guides/configuring_sqream.rst
@@ -0,0 +1,22 @@
+.. _configuring_sqream:
+
+*************************
+Configuring SQream
+*************************
+The **Configuring SQream** page describes the following configuration topics:
+
+.. toctree::
+   :maxdepth: 1
+   :glob:
+   :titlesonly:
+
+   current_method_configuration_levels
+   current_method_flag_types
+   current_method_configuration_roles
+   current_method_modification_methods
+   current_method_configuring_your_parameter_values
+   current_method_command_examples
+   current_method_showing_all_flags_in_the_catalog_table
+   current_method_all_configurations
+
+
diff --git a/configuration_guides/current_configuration_method.rst b/configuration_guides/current_configuration_method.rst
deleted file mode 100644
index e7ca5c0d3..000000000
--- a/configuration_guides/current_configuration_method.rst
+++ /dev/null
@@ -1,729 +0,0 @@
-.. _current_configuration_method:
-
-**************************
-Configuring SQream
-**************************
-The **Configuring SQream** page describes SQream’s method for configuring your instance of SQream and includes the following topics:
-
-.. contents::
-   :local:
-   :depth: 1
-
-Overview
------
-Modifications that you make to your configurations are persistent based on whether they are made at the session or cluster level. Persistent configurations are modifications made to attributes that are retained after shutting down your system.
-
-Modifying Your Configuration
-----
-The **Modifying Your Configuration** section describes the following:
-
-..
contents:: - :local: - :depth: 1 - -Modifying Your Configuration Using the Worker Configuration File -~~~~~~~~~~~ -You can modify your configuration using the **worker configuration file (config.json)**. Changes that you make to worker configuration files are persistent. Note that you can only set the attributes in your worker configuration file **before** initializing your SQream worker, and while your worker is active these attributes are read-only. - -The following is an example of a worker configuration file: - -.. code-block:: postgres - - { - “cluster”: “/home/test_user/sqream_testing_temp/sqreamdb”, - “gpu”: 0, - “licensePath”: “home/test_user/SQream/tests/license.enc”, - “machineIP”: “127.0.0.1”, - “metadataServerIp”: “127.0.0.1”, - “metadataServerPort”: “3105, - “port”: 5000, - “useConfigIP”” true, - “legacyConfigFilePath”: “home/SQream_develop/SqrmRT/utils/json/legacy_congif.json” - } - -You can access the legacy configuration file from the ``legacyConfigFilePath`` parameter shown above. If all (or most) of your workers require the same flag settings, you can set the ``legacyConfigFilePath`` attribute to the same legacy file. - -Modifying Your Configuration Using a Legacy Configuration File -~~~~~~~~~~~ -You can modify your configuration using a legacy configuration file. - -The Legacy configuration file provides access to the read/write flags used in SQream’s previous configuration method. A link to this file is provided in the **legacyConfigFilePath** parameter in the worker configuration file. - -The following is an example of the legacy configuration file: - -.. code-block:: postgres - - { - “developerMode”: true, - “reextentUse”: false, - “useClientLog”: true, - “useMetadataServer”” false - } - -Session vs Cluster Based Configuration -============================== -.. 
contents:: - :local: - :depth: 1 - -Cluster-Based Configuration --------------- -SQream uses cluster-based configuration, enabling you to centralize configurations for all workers on the cluster. Only flags set to the regular or cluster flag type have access to cluster-based configuration. Configurations made on the cluster level are persistent and stored at the metadata level. The parameter settings in this file are applied globally to all workers connected to it. - -For more information, see the following: - -* `Using SQream SQL `_ - modifying flag attributes from the CLI. -* `SQream Acceleration Studio `_ - modifying flag attributes from Studio. - -For more information on flag-based access to cluster-based configuration, see **Configuration Flag Types** below. - -Session-Based Configuration ----------------- -Session-based configurations are not persistent and are deleted when your session ends. This method enables you to modify all required configurations while avoiding conflicts between flag attributes modified on different devices at different points in time. - -The **SET flag_name** command is used to modify flag attributes. Any modifications you make with the **SET flag_name** command apply only to your open session, and are not saved when it ends - -For example, when the query below has completed executing, the values configured will be restored to its previous setting: - -.. code-block:: console - - set spoolMemoryGB=700; - select * from table a where date='2021-11-11' - -For more information, see the following: - -* `Using SQream SQL `_ - modifying flag attributes from the CLI. -* `SQream Acceleration Studio `_ - modifying flag attributes from Studio. - -Configuration Flag Types -========== -The flag type attribute can be set for each flag and determines its write access as follows: - -* **Administration:** session-based read/write flags that can be stored in the metadata file. 
-* **Cluster:** global cluster-based read/write flags that can be stored in the metadata file. -* **Worker:** single worker-based read-only flags that can be stored in the worker configuration file. - -The flag type determines which files can be accessed and which commands or commands sets users can run. - -The following table describes the file or command modification rights for each flag type: - -.. list-table:: - :widths: 20 20 20 20 - :header-rows: 1 - - * - **Flag Type** - - **Legacy Configuration File** - - **ALTER SYSTEM SET** - - **Worker Configuration File** - * - :ref:`Regular` - - Can modify - - Can modify - - Cannot modify - * - :ref:`Cluster` - - Cannot modify - - Can modify - - Cannot modify - * - :ref:`Worker` - - Cannot modify - - Cannot modify - - Can modify - -.. _regular_flag_types: - -Regular Flag Types ---------------------- -The following is an example of the correct syntax for running a **Regular** flag type command: - -.. code-block:: console - - SET spoolMemoryGB= 11; - executed - -The following table describes the Regular flag types: - -.. list-table:: - :widths: 2 5 10 - :header-rows: 1 - - * - **Command** - - **Description** - - **Example** - * - ``SET `` - - Used for modifying flag attributes. - - ``SET developerMode=true`` - * - ``SHOW / ALL`` - - Used to preset either a specific flag value or all flag values. - - ``SHOW `` - * - ``SHOW ALL LIKE`` - - Used as a wildcard character for flag names. - - ``SHOW `` - * - ``show_conf_UF`` - - Used to print all flags with the following attributes: - - * Flag name - * Default value - * Is developer mode (Boolean) - * Flag category - * Flag type - - ``rechunkThreshold,90,true,RND,regular`` - * - ``show_conf_extended UF`` - - Used to print all information output by the show_conf UF command, in addition to description, usage, data type, default value and range. 
- - ``compilerGetsOnlyUFs,false,generic,regular,Makes runtime pass to compiler only`` - ``utility functions names,boolean,true,false`` - * - ``show_md_flag UF`` - - Used to show a specific flag/all flags stored in the metadata file. - - - * Example 1: ``* master=> ALTER SYSTEM SET heartbeatTimeout=111;`` - * Example 2: ``* master=> select show_md_flag(‘all’); heartbeatTimeout,111`` - * Example 3: ``* master=> select show_md_flag(‘heartbeatTimeout’); heartbeatTimeout,111`` - -.. _cluster_flag_types: - -Cluster Flag Types ---------------------- -The following is an example of the correct syntax for running a **Cluster** flag type command: - -.. code-block:: console - - ALTER SYSTEM RESET useMetadataServer; - executed - -The following table describes the Cluster flag types: - -.. list-table:: - :widths: 1 5 10 - :header-rows: 1 - - * - **Command** - - **Description** - - **Example** - * - ``ALTER SYSTEM SET `` - - Used to storing or modifying flag attributes in the metadata file. - - ``ALTER SYSTEM SET `` - * - ``ALTER SYSTEM RESET `` - - Used to remove a flag or all flag attributes from the metadata file. - - ``ALTER SYSTEM RESET `` - * - ``SHOW / ALL`` - - Used to print the value of a specified value or all flag values. - - ``SHOW `` - * - ``SHOW ALL LIKE`` - - Used as a wildcard character for flag names. - - ``SHOW `` - * - ``show_conf_UF`` - - Used to print all flags with the following attributes: - - * Flag name - * Default value - * Is developer mode (Boolean) - * Flag category - * Flag type - - ``rechunkThreshold,90,true,RND,regular`` - * - ``show_conf_extended UF`` - - Used to print all information output by the show_conf UF command, in addition to description, usage, data type, default value and range. - - ``compilerGetsOnlyUFs,false,generic,regular,Makes runtime pass to compiler only`` - ``utility functions names,boolean,true,false`` - * - ``show_md_flag UF`` - - Used to show a specific flag/all flags stored in the metadata file. 
- - - * Example 1: ``* master=> ALTER SYSTEM SET heartbeatTimeout=111;`` - * Example 2: ``* master=> select show_md_flag(‘all’); heartbeatTimeout,111`` - * Example 3: ``* master=> select show_md_flag(‘heartbeatTimeout’); heartbeatTimeout,111`` - -.. _worker_flag_types: - -Worker Flag Types ---------------------- -The following is an example of the correct syntax for running a **Worker** flag type command: - -.. code-block:: console - - SHOW spoolMemoryGB; - -The following table describes the Worker flag types: - -.. list-table:: - :widths: 1 5 10 - :header-rows: 1 - - * - **Command** - - **Description** - - **Example** - * - ``ALTER SYSTEM SET `` - - Used to storing or modifying flag attributes in the metadata file. - - ``ALTER SYSTEM SET `` - * - ``ALTER SYSTEM RESET `` - - Used to remove a flag or all flag attributes from the metadata file. - - ``ALTER SYSTEM RESET `` - * - ``SHOW / ALL`` - - Used to print the value of a specified value or all flag values. - - ``SHOW `` - * - ``SHOW ALL LIKE`` - - Used as a wildcard character for flag names. - - ``SHOW `` - * - ``show_conf_UF`` - - Used to print all flags with the following attributes: - - * Flag name - * Default value - * Is developer mode (Boolean) - * Flag category - * Flag type - - ``rechunkThreshold,90,true,RND,regular`` - * - ``show_conf_extended UF`` - - Used to print all information output by the show_conf UF command, in addition to description, usage, data type, default value and range. - - - ``compilerGetsOnlyUFs,false,generic,regular,Makes runtime pass to compiler only`` - ``utility functions names,boolean,true,false`` - * - ``show_md_flag UF`` - - Used to show a specific flag/all flags stored in the metadata file. 
- - - * Example 1: ``* master=> ALTER SYSTEM SET heartbeatTimeout=111;`` - * Example 2: ``* master=> select show_md_flag(‘all’); heartbeatTimeout,111`` - * Example 3: ``* master=> select show_md_flag(‘heartbeatTimeout’); heartbeatTimeout,111`` - -All Configurations ---------------------- -The following table describes the **Generic** and **Administration** configuration flags: - -.. list-table:: - :header-rows: 1 - :widths: 1 2 1 15 1 20 - :class: my-class - :name: my-name - - * - Flag Name - - Access Control - - Modification Type - - Description - - Data Type - - Default Value - - * - ``binSizes`` - - Administration - - Regular - - Sets the custom bin size in the cache to enable high granularity bin control. - - string - - - ``16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,`` - ``131072,262144,524288,1048576,2097152,4194304,8388608,16777216,`` - ``33554432,67108864,134217728,268435456,536870912,786432000,107374,`` - ``1824,1342177280,1610612736,1879048192,2147483648,2415919104,`` - ``2684354560,2952790016,3221225472`` - - * - ``checkCudaMemory`` - - Administration - - Regular - - Sets the pad device memory allocations with safety buffers to catch out-of-bounds writes. - - boolean - - ``FALSE`` - - * - ``compilerGetsOnlyUFs`` - - Administration - - Regular - - Sets the runtime to pass only utility functions names to the compiler. - - boolean - - ``FALSE`` - - * - ``copyToRestrictUtf8`` - - Administration - - Regular - - Sets the custom bin size in the cache to enable high granularity bin control. - - boolean - - ``FALSE`` - - * - ``cpuReduceHashtableSize`` - - Administration - - Regular - - Sets the hash table size of the CpuReduce. - - uint - - ``10000`` - - * - ``csvLimitRowLength`` - - Administration - - Cluster - - Sets the maximum supported CSV row length. - - uint - - ``100000`` - - * - ``cudaMemcpyMaxSizeBytes`` - - Administration - - Regular - - Sets the chunk size for copying from CPU to GPU. If set to 0, do not divide. 
- - uint - - ``0`` - - * - ``CudaMemcpySynchronous`` - - Administration - - Regular - - Indicates if copying from/to GPU is synchronous. - - boolean - - ``FALSE`` - - * - ``cudaMemQuota`` - - Administration - - Worker - - Sets the percentage of total device memory to be used by the instance. - - uint - - ``90`` - - * - ``developerMode`` - - Administration - - Regular - - Enables modifying R&D flags. - - boolean - - ``FALSE`` - - * - ``enableDeviceDebugMessages`` - - Administration - - Regular - - Activates the Nvidia profiler (nvprof) markers. - - boolean - - ``FALSE`` - - * - ``enableLogDebug`` - - Administration - - Regular - - Enables creating and logging in the clientLogger_debug file. - - boolean - - ``TRUE`` - - * - ``enableNvprofMarkers`` - - Administration - - Regular - - Activates the Nvidia profiler (nvprof) markers. - - boolean - - ``FALSE`` - - * - ``endLogMessage`` - - Administration - - Regular - - Appends a string at the end of every log line. - - string - - ``EOM`` - - - - * - ``varcharIdentifiers`` - - Administration - - Regular - - Activates using varchar as an identifier. - - boolean - - ``true`` - - - - * - ``extentStorageFileSizeMB`` - - Administration - - Cluster - - Sets the minimum size in mebibytes of extents for table bulk data. - - uint - - ``20`` - - * - ``gatherMemStat`` - - Administration - - Regular - - Monitors all pinned allocations and all **memcopies** to/from device, and prints a report of pinned allocations that were not memcopied to/from the device using the **dump_pinned_misses** utility function. - - boolean - - ``FALSE`` - - * - ``increaseChunkSizeBeforeReduce`` - - Administration - - Regular - - Increases the chunk size to reduce query speed. - - boolean - - ``FALSE`` - - * - ``increaseMemFactors`` - - Administration - - Regular - - Adds rechunker before expensive chunk producer. - - boolean - - ``TRUE`` - - * - ``leveldbWriteBufferSize`` - - Administration - - Regular - - Sets the buffer size. 
- - uint - - ``524288`` - - * - ``machineIP`` - - Administration - - Worker - - Manual setting of reported IP. - - string - - ``127.0.0.1`` - - - - - * - ``memoryResetTriggerMB`` - - Administration - - Regular - - Sets the size of memory used during a query to trigger aborting the server. - - uint - - ``0`` - - * - ``metadataServerPort`` - - Administration - - Worker - - Sets the port used to connect to the metadata server. SQream recommends using port ranges above 1024† because ports below 1024 are usually reserved, although there are no strict limitations. Any positive number (1 - 65535) can be used. - - uint - - ``3105`` - - * - ``mtRead`` - - Administration - - Regular - - Splits large reads to multiple smaller ones and executes them concurrently. - - boolean - - ``FALSE`` - - * - ``mtReadWorkers`` - - Administration - - Regular - - Sets the number of workers to handle smaller concurrent reads. - - uint - - ``30`` - - * - ``orcImplicitCasts`` - - Administration - - Regular - - Sets the implicit cast in orc files, such as **int** to **tinyint** and vice versa. - - boolean - - ``TRUE`` - - * - ``statementLockTimeout`` - - Administration - - Regular - - Sets the timeout (seconds) for acquiring object locks before executing statements. - - uint - - ``3`` - - * - ``useConfigIP`` - - Administration - - Worker - - Activates the machineIP (true). Setting to false ignores the machineIP and automatically assigns a local network IP. This cannot be activated in a cloud scenario (on-premises only). - - boolean - - ``FALSE`` - - * - ``useLegacyDecimalLiterals`` - - Administration - - Regular - - Interprets decimal literals as **Double** instead of **Numeric**. Used to preserve legacy behavior in existing customers. - - boolean - - ``FALSE`` - - * - ``useLegacyStringLiterals`` - - Administration - - Regular - - Interprets ASCII-only strings as **VARCHAR** instead of **TEXT**. Used to preserve legacy behavior in existing customers. 
- - boolean - - ``FALSE`` - - * - ``flipJoinOrder`` - - Generic - - Regular - - Reorders join to force equijoins and/or equijoins sorted by table size. - - boolean - - ``FALSE`` - - * - ``limitQueryMemoryGB`` - - Generic - - Worker - - Prevents a query from processing more memory than the flag’s value. - - uint - - ``100000`` - - * - ``cacheEvictionMilliseconds`` - - Generic - - Regular - - Sets how long the cache stores contents before being flushed. - - size_t - - ``2000`` - - - * - ``cacheDiskDir`` - - Generic - - Regular - - Sets the ondisk directory location for the spool to save files on. - - size_t - - Any legal string - - - * - ``cacheDiskGB`` - - Generic - - Regular - - Sets the amount of memory (GB) to be used by Spool on the disk. - - size_t - - ``128`` - - * - ``cachePartitions`` - - Generic - - Regular - - Sets the number of partitions that the cache is split into. - - size_t - - ``4`` - - - * - ``cachePersistentDir`` - - Generic - - Regular - - Sets the persistent directory location for the spool to save files on. - - string - - Any legal string - - - * - ``cachePersistentGB`` - - Generic - - Regular - - Sets the amount of data (GB) for the cache to store persistently. - - size_t - - ``128`` - - - * - ``cacheRamGB`` - - Generic - - Regular - - Sets the amount of memory (GB) to be used by Spool InMemory. - - size_t - - ``16`` - - - - - - - - * - ``logSysLevel`` - - Generic - - Regular - - - Determines the client log level: - 0 - L_SYSTEM, - 1 - L_FATAL, - 2 - L_ERROR, - 3 - L_WARN, - 4 - L_INFO, - 5 - L_DEBUG, - 6 - L_TRACE - - uint - - ``100000`` - - * - ``maxAvgBlobSizeToCompressOnGpu`` - - Generic - - Regular - - Sets the CPU to compress columns with size above (flag’s value) * (row count). - - uint - - ``120`` - - - * - ``sessionTag`` - - Generic - - Regular - - Sets the name of the session tag. 
- - string - - Any legal string - - - - * - ``spoolMemoryGB`` - - Generic - - Regular - - Sets the amount of memory (GB) to be used by the server for spooling. - - uint - - ``8`` - -Configuration Commands -========== -The configuration commands are associated with particular flag types based on permissions. - -The following table describes the commands or command sets that can be run based on their flag type. Note that the flag names described in the following table are described in the :ref:`Configuration Roles` section below. - -.. list-table:: - :header-rows: 1 - :widths: 1 2 10 17 - :class: my-class - :name: my-name - - * - Flag Type - - Command - - Description - - Example - * - Regular - - ``SET `` - - Used for modifying flag attributes. - - ``SET developerMode=true`` - * - Cluster - - ``ALTER SYSTEM SET `` - - Used to storing or modifying flag attributes in the metadata file. - - ``ALTER SYSTEM SET `` - * - Cluster - - ``ALTER SYSTEM RESET `` - - Used to remove a flag or all flag attributes from the metadata file. - - ``ALTER SYSTEM RESET `` - * - Regular, Cluster, Worker - - ``SHOW / ALL`` - - Used to print the value of a specified value or all flag values. - - ``SHOW `` - * - Regular, Cluster, Worker - - ``SHOW ALL LIKE`` - - Used as a wildcard character for flag names. - - ``SHOW `` - * - Regular, Cluster, Worker - - ``show_conf_UF`` - - Used to print all flags with the following attributes: - - * Flag name - * Default value - * Is developer mode (Boolean) - * Flag category - * Flag type - - - - - ``rechunkThreshold,90,true,RND,regular`` - * - Regular, Cluster, Worker - - ``show_conf_extended UF`` - - Used to print all information output by the show_conf UF command, in addition to description, usage, data type, default value and range. 
- - ``spoolMemoryGB,15,false,generic,regular,Amount of memory (GB)`` - ``the server can use for spooling,”Statement that perform “”group by””,`` - ``“”order by”” or “”join”” operation(s) on large set of data will run`` - ``much faster if given enough spool memory, otherwise disk spooling will`` - ``be used resulting in performance hit.”,uint,,0-5000`` - * - Regular, Cluster, Worker - - ``show_md_flag UF`` - - Used to show a specific flag/all flags stored in the metadata file. - - - * Example 1: ``* master=> ALTER SYSTEM SET heartbeatTimeout=111;`` - * Example 2: ``* master=> select show_md_flag(‘all’); heartbeatTimeout,111`` - * Example 3: ``* master=> select show_md_flag(‘heartbeatTimeout’); heartbeatTimeout,111`` - -.. _configuration_roles: - -Configuration Roles -=========== -SQream divides flags into the following roles, each with their own set of permissions: - -* **`Administration flags `_**: can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command. -* **`Generic flags `_**: can be modified by standard users on a session basis. - -Showing All Flags in the Catalog Table -======= -SQream uses the **sqream_catalog.parameters** catalog table for showing all flags, providing the scope (default, cluster and session), description, default value and actual value. - -The following is the correct syntax for a catalog table query: - -.. code-block:: console - - SELECT * FROM sqream_catalog.settings - -The following is an example of a catalog table query: - -.. code-block:: console - - externalTableBlobEstimate, 100, 100, default, - varcharEncoding, ascii, ascii, default, Changes the expected encoding for Varchar columns - useCrcForTextJoinKeys, true, true, default, - hiveStyleImplicitStringCasts, false, false, default, - -This guide covers the configuration files and the ``SET`` statement. 
\ No newline at end of file diff --git a/configuration_guides/current_method_all_configurations.rst b/configuration_guides/current_method_all_configurations.rst new file mode 100644 index 000000000..9a2352faf --- /dev/null +++ b/configuration_guides/current_method_all_configurations.rst @@ -0,0 +1,431 @@ +.. _current_method_all_configurations: + +************************** +All Configurations +************************** +The following table describes all **Generic** and **Administration** configuration flags: + +.. list-table:: + :header-rows: 1 + :widths: 1 2 1 15 1 20 + :class: my-class + :name: my-name + + * - Flag Name + - Access Control + - Modification Type + - Description + - Data Type + - Default Value + + * - ``binSizes`` + - Admin + - Regular + - Sets the custom bin sizes in the cache to enable high granularity bin control. + - string + - + ``16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,`` + ``131072,262144,524288,1048576,2097152,4194304,8388608,16777216,`` + ``33554432,67108864,134217728,268435456,536870912,786432000,`` + ``1073741824,1342177280,1610612736,1879048192,2147483648,2415919104,`` + ``2684354560,2952790016,3221225472`` + + * - ``cacheEvictionMilliseconds`` + - Generic + - Regular + - Sets how long the cache stores contents before being flushed. + - size_t + - ``2000`` + + + * - ``cacheDiskDir`` + - Generic + - Regular + - Sets the on-disk directory location for the spool to save files on. + - string + - Any legal string + + + * - ``cacheDiskGB`` + - Generic + - Regular + - Sets the amount of memory (GB) to be used by Spool on the disk. + - size_t + - ``128`` + + * - ``cachePartitions`` + - Generic + - Regular + - Sets the number of partitions that the cache is split into. + - size_t + - ``4`` + + + * - ``cachePersistentDir`` + - Generic + - Regular + - Sets the persistent directory location for the spool to save files on.
+ - string + - Any legal string + + + * - ``cachePersistentGB`` + - Generic + - Regular + - Sets the amount of data (GB) for the cache to store persistently. + - size_t + - ``128`` + + + * - ``cacheRamGB`` + - Generic + - Regular + - Sets the amount of memory (GB) to be used by Spool InMemory. + - size_t + - ``16`` + + + + * - ``checkCudaMemory`` + - Admin + - Regular + - Pads device memory allocations with safety buffers to catch out-of-bounds writes. + - boolean + - ``FALSE`` + + * - ``compilerGetsOnlyUFs`` + - Admin + - Regular + - Sets the runtime to pass only utility function names to the compiler. + - boolean + - ``FALSE`` + + * - ``copyToRestrictUtf8`` + - Admin + - Regular + - Sets the custom bin size in the cache to enable high granularity bin control. + - boolean + - ``FALSE`` + + * - ``cpuReduceHashtableSize`` + - Admin + - Regular + - Sets the hash table size of the CpuReduce. + - uint + - ``10000`` + + * - ``csvLimitRowLength`` + - Admin + - Cluster + - Sets the maximum supported CSV row length. + - uint + - ``100000`` + + * - ``cudaMemcpyMaxSizeBytes`` + - Admin + - Regular + - Sets the chunk size for copying from CPU to GPU. A value of 0 disables chunked copying. + - uint + - ``0`` + + * - ``CudaMemcpySynchronous`` + - Admin + - Regular + - Indicates if copying from/to GPU is synchronous. + - boolean + - ``FALSE`` + + * - ``cudaMemQuota`` + - Admin + - Worker + - Sets the percentage of total device memory to be used by the instance. + - uint + - ``90`` + + * - ``developerMode`` + - Admin + - Regular + - Enables modifying R&D flags. + - boolean + - ``FALSE`` + + * - ``enableDeviceDebugMessages`` + - Admin + - Regular + - Activates the Nvidia profiler (nvprof) markers. + - boolean + - ``FALSE`` + + * - ``enableLogDebug`` + - Admin + - Regular + - Enables creating and logging in the clientLogger_debug file. + - boolean + - ``TRUE`` + + * - ``enableNvprofMarkers`` + - Admin + - Regular + - Activates the Nvidia profiler (nvprof) markers.
+ - boolean + - ``FALSE`` + + * - ``endLogMessage`` + - Admin + - Regular + - Appends a string at the end of every log line. + - string + - ``EOM`` + + + + + + + + * - ``extentStorageFileSizeMB`` + - Admin + - Cluster + - Sets the minimum size in mebibytes of extents for table bulk data. + - uint + - ``20`` + + + * - ``externalTableBlobEstimate`` + - ? + - Regular + - ? + - ? + - ? + + + + + + * - ``flipJoinOrder`` + - Generic + - Regular + - Reorders join to force equijoins and/or equijoins sorted by table size. + - boolean + - ``FALSE`` + + + + * - ``gatherMemStat`` + - Admin + - Regular + - Monitors all pinned allocations and all **memcopies** to/from device, and prints a report of pinned allocations that were not memcopied to/from the device using the **dump_pinned_misses** utility function. + - boolean + - ``FALSE`` + + + * - ``healerMaxInactivityHours`` + - Admin + - Worker + - Defines the threshold for creating a log recording a slow statement. + - size_t + - ``5`` + + + + + * - ``increaseChunkSizeBeforeReduce`` + - Admin + - Regular + - Increases the chunk size to reduce query speed. + - boolean + - ``FALSE`` + + * - ``increaseMemFactors`` + - Admin + - Regular + - Adds rechunker before expensive chunk producer. + - boolean + - ``TRUE`` + + + * - ``isHealerOn`` + - Admin + - Worker + - Periodically examines the progress of running statements and logs statements exceeding the ``healerMaxInactivityHours`` flag setting. + - boolean + - ``TRUE`` + + + + + + * - ``leveldbWriteBufferSize`` + - Admin + - Regular + - Sets the buffer size. + - uint + - ``524288`` + + * - ``limitQueryMemoryGB`` + - Generic + - Worker + - Prevents a query from processing more memory than the flag’s value. + - uint + - ``100000`` + + + + + * - ``loginMaxRetries`` + - Admin + - Worker + - Sets the permitted log-in attempts. 
+ - size_t + - ``5`` + + + + * - ``logSysLevel`` + - Generic + - Regular + - + Determines the client log level: + 0 - L_SYSTEM, + 1 - L_FATAL, + 2 - L_ERROR, + 3 - L_WARN, + 4 - L_INFO, + 5 - L_DEBUG, + 6 - L_TRACE + - uint + - ``100000`` + + + + + + * - ``machineIP`` + - Admin + - Worker + - Manual setting of reported IP. + - string + - ``127.0.0.1`` + + + * - ``maxAvgBlobSizeToCompressOnGpu`` + - Generic + - Regular + - Sets the CPU to compress columns with size above (flag’s value) * (row count). + - uint + - ``120`` + + + * - ``maxPinnedPercentageOfTotalRAM`` + - Admin + - Regular + - Sets the maximum percentage CPU RAM that pinned memory can use. + - uint + - ``70`` + + + + * - ``memMergeBlobOffsetsCount`` + - Admin + - Regular + - Sets the size of memory used during a query to trigger aborting the server. + - uint + - ``0`` + + + + * - ``memoryResetTriggerMB`` + - Admin + - Regular + - Sets the size of memory used during a query to trigger aborting the server. + - uint + - ``0`` + + * - ``metadataServerPort`` + - Admin + - Worker + - Sets the port used to connect to the metadata server. SQream recommends using port ranges above 1024† because ports below 1024 are usually reserved, although there are no strict limitations. Any positive number (1 - 65535) can be used. + - uint + - ``3105`` + + * - ``mtRead`` + - Admin + - Regular + - Splits large reads to multiple smaller ones and executes them concurrently. + - boolean + - ``FALSE`` + + * - ``mtReadWorkers`` + - Admin + - Regular + - Sets the number of workers to handle smaller concurrent reads. + - uint + - ``30`` + + * - ``orcImplicitCasts`` + - Admin + - Regular + - Sets the implicit cast in orc files, such as **int** to **tinyint** and vice versa. + - boolean + - ``TRUE`` + + + * - ``sessionTag`` + - Generic + - Regular + - Sets the name of the session tag. 
+ - string + - Any legal string + + + + * - ``spoolMemoryGB`` + - Generic + - Regular + - Sets the amount of memory (GB) to be used by the server for spooling. + - uint + - ``8`` + + + * - ``statementLockTimeout`` + - Admin + - Regular + - Sets the timeout (seconds) for acquiring object locks before executing statements. + - uint + - ``3`` + + * - ``useConfigIP`` + - Admin + - Worker + - Activates the machineIP (true). Setting to false ignores the machineIP and automatically assigns a local network IP. This cannot be activated in a cloud scenario (on-premises only). + - boolean + - ``FALSE`` + + * - ``useLegacyDecimalLiterals`` + - Admin + - Regular + - Interprets decimal literals as **Double** instead of **Numeric**. Used to preserve legacy behavior in existing customers. + - boolean + - ``FALSE`` + + * - ``useLegacyStringLiterals`` + - Admin + - Regular + - Interprets ASCII-only strings as **VARCHAR** instead of **TEXT**. Used to preserve legacy behavior in existing customers. + - boolean + - ``FALSE`` + + + + + + + + + + * - ``blockNewVarcharObjects`` + - Admin + - Regular + - Disables the creation of new tables, views, external tables containing Varchar columns, and the creation of user-defined functions with Varchar arguments or a Varchar return value. + - boolean + - ``FALSE`` \ No newline at end of file diff --git a/configuration_guides/current_method_command_examples.rst b/configuration_guides/current_method_command_examples.rst new file mode 100644 index 000000000..2f75a4711 --- /dev/null +++ b/configuration_guides/current_method_command_examples.rst @@ -0,0 +1,36 @@ +.. _current_method_command_examples: + +************************** +Command Examples +************************** +This section includes the following command examples: + +.. contents:: + :local: + :depth: 1 + +Running a Regular Flag Type Command +--------------------- +The following is an example of running a **Regular** flag type command: + +.. 
code-block:: console + + SET spoolMemoryGB=11; + executed + +Running a Worker Flag Type Command +---------------------------------- +The following is an example of running a **Worker** flag type command: + +.. code-block:: console + + SHOW spoolMemoryGB; + +Running a Cluster Flag Type Command +----------------------------------- +The following is an example of running a **Cluster** flag type command: + +.. code-block:: console + + ALTER SYSTEM RESET useMetadataServer; + executed \ No newline at end of file diff --git a/configuration_guides/current_method_configuration_levels.rst b/configuration_guides/current_method_configuration_levels.rst new file mode 100644 index 000000000..ad5143def --- /dev/null +++ b/configuration_guides/current_method_configuration_levels.rst @@ -0,0 +1,33 @@ +.. _current_method_configuration_levels: + +************************** +Configuration Levels +************************** +SQream's configuration parameters are based on the following hierarchy: + +.. contents:: + :local: + :depth: 1 + +Cluster-Based Configuration +--------------------------- +Cluster-based configuration lets you centralize configurations for all workers on the cluster. Only :ref:`Regular and Cluster flag types` can be modified on the cluster level. These modifications are persistent, stored at the metadata level, and applied globally to all workers in the cluster. + +.. note:: While cluster-based configuration was designed for configuring Workers, you can only configure Worker values set to the Regular or Cluster type. + +Worker-Based Configuration +-------------------------- +Worker-based configuration lets you modify individual workers using a worker configuration file. Worker-based configuration modifications are persistent. + +For more information on making configurations from the worker configuration file, see :ref:`previous_configuration_method`. + +Session-Based Configuration +--------------------------- +Session-based configurations are not persistent and are deleted when your session ends.
This method enables you to modify all required configurations while avoiding conflicts between flag attributes modified on different devices at different points in time. The **SET flag_name** command is used to modify flag values on the session level. Any modifications you make with the **SET flag_name** command apply only to your open session, and are not saved when it ends. + +For example, when the queries below have completed executing, the configured value will be restored to its previous setting: + +.. code-block:: console + + set spoolMemoryGB=700; + select * from table a where date='2021-11-11' \ No newline at end of file diff --git a/configuration_guides/current_method_configuration_roles.rst b/configuration_guides/current_method_configuration_roles.rst new file mode 100644 index 000000000..b3afc2b46 --- /dev/null +++ b/configuration_guides/current_method_configuration_roles.rst @@ -0,0 +1,17 @@ +.. _current_method_configuration_roles: + +************************** +Configuration Roles +************************** +SQream divides flags into the following roles, each with its own set of permissions: + +* :ref:`admin_flags` - can be modified by administrators on a session and cluster basis using the ``ALTER SYSTEM SET`` command: + + * Regular + * Worker + * Cluster + +* :ref:`generic_flags` - can be modified by standard users on a session basis: + + * Regular + * Worker \ No newline at end of file diff --git a/configuration_guides/current_method_configuring_your_parameter_values.rst b/configuration_guides/current_method_configuring_your_parameter_values.rst new file mode 100644 index 000000000..d082db693 --- /dev/null +++ b/configuration_guides/current_method_configuring_your_parameter_values.rst @@ -0,0 +1,40 @@ +.. _current_method_configuring_your_parameter_values: + +************************** +Configuring Your Parameter Values +************************** +The method you must use to configure your parameter values depends on the configuration level.
Each configuration level has its own command or set of commands used to configure values, as shown below: + ++-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ +| **Configuration Level** | ++=================================================================================================================================================================================================================================================================================================================+ +| **Regular, Worker, and Cluster** | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| **Command** | **Description** | **Example** | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``SET `` | Used for modifying flag attributes. | ``SET developerMode=true`` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``SHOW / ALL`` | Used to preset either a specific flag value or all flag values. 
| ``SHOW `` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``SHOW ALL LIKE`` | Used as a wildcard character for flag names. | ``SHOW `` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``show_conf_UF`` | Used to print all flags with the following attributes: | ``rechunkThreshold,90,true,RND,regular`` | +| | | | +| | * Flag name | | +| | * Default value | | +| | * Is Developer Mode (Boolean) | | +| | * Flag category | | +| | * Flag type | | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``show_conf_extended UF`` | Used to print all information output by the show_conf UF command, in addition to description, usage, data type, default value and range. | ``rechunkThreshold,90,true,RND,regular`` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``show_md_flag UF`` | Used to show a specific flag/all flags stored in the metadata file. 
|* Example 1: ``* master=> ALTER SYSTEM SET heartbeatTimeout=111;`` | +| | |* Example 2: ``* master=> select show_md_flag(‘all’); heartbeatTimeout,111`` | +| | |* Example 3: ``* master=> select show_md_flag(‘heartbeatTimeout’); heartbeatTimeout,111`` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| **Worker and Cluster** | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``ALTER SYSTEM SET `` | Used for storing or modifying flag attributes in the metadata file. | ``ALTER SYSTEM SET `` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ +| ``ALTER SYSTEM RESET `` | Used to remove a flag or all flag attributes from the metadata file. 
| ``ALTER SYSTEM RESET `` | ++-----------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ \ No newline at end of file diff --git a/configuration_guides/current_method_flag_types.rst b/configuration_guides/current_method_flag_types.rst new file mode 100644 index 000000000..93e82e79b --- /dev/null +++ b/configuration_guides/current_method_flag_types.rst @@ -0,0 +1,20 @@ +.. _current_method_flag_types: + +************************** +Flag Types +************************** +SQream uses three flag types, **Cluster**, **Worker**, and **Regular**. Each of these flag types is associated with one of three hierarchical configuration levels described earlier, making it easier to configure your system. + +The highest level in the hierarchy is Cluster, which lets you set configurations across all workers in a given cluster. Modifying cluster values is **persistent**, meaning that any configurations you set are retained after shutting down your system. Configurations set at the Cluster level take the highest priority and override settings made on the Regular and Worker level. This is known as **cluster-based configuration**. Note that Cluster-based configuration lets you modify Cluster *and* Regular flag types. An example of a Cluster flag is **persisting your cache directory.** + +The second level is Worker, which lets you configure individual workers. Modifying Worker values are also **persistent**. This is known as **worker-based configuration**. Some examples of Worker flags includes **setting total device memory usage** and **setting metadata server connection port**. + +The lowest level is Regular, which means that modifying values of Regular flags affects only your current session and are not persistent. 
This means that they are automatically restored to their default value when the session ends. This is known as **session-based configuration**. Some examples of Regular flags include **setting your bin size** and **setting CUDA memory**. + +To see each flag's default value, see one of the following: + +* The **Default Value** column in the :ref:`All Configurations` section. + + :: + +* The flag's individual description page, such as :ref:`Setting CUDA Memory`. \ No newline at end of file diff --git a/configuration_guides/current_method_modification_methods.rst b/configuration_guides/current_method_modification_methods.rst new file mode 100644 index 000000000..05825b07a --- /dev/null +++ b/configuration_guides/current_method_modification_methods.rst @@ -0,0 +1,50 @@ +.. _current_method_modification_methods: + +************************** +Modification Methods +************************** +SQream provides two ways to modify your configurations. The current method, based on the hierarchical configuration described above, makes modifications in the **worker configuration file**; the previous method, which you can still use, relies on the **legacy configuration file**. Both are described below: + +.. contents:: + :local: + :depth: 1 + +Modifying Your Configuration Using the Worker Configuration File +---------------------------------------------------------------- +You can modify your configuration using the **worker configuration file (config.json)**. Changes that you make to worker configuration files are persistent. Note that you can only set the attributes in your worker configuration file **before** initializing your SQream worker; while your worker is active these attributes are read-only. + +The following is an example of a worker configuration file: + +.. 
code-block:: json + + { + "cluster": "/home/test_user/sqream_testing_temp/sqreamdb", + "gpu": 0, + "licensePath": "home/test_user/SQream/tests/license.enc", + "machineIP": "127.0.0.1", + "metadataServerIp": "127.0.0.1", + "metadataServerPort": 3105, + "port": 5000, + "useConfigIP": true, + "legacyConfigFilePath": "home/SQream_develop/SqrmRT/utils/json/legacy_config.json" + } + +You can access the legacy configuration file from the ``legacyConfigFilePath`` parameter shown above. If all (or most) of your workers require the same flag settings, you can set the ``legacyConfigFilePath`` attribute to the same legacy file. + +Modifying Your Configuration Using a Legacy Configuration File +-------------------------------------------------------------- +You can modify your configuration using a legacy configuration file. + +The legacy configuration file provides access to the read/write flags used in SQream's previous configuration method. A link to this file is provided in the **legacyConfigFilePath** parameter in the worker configuration file. + +The following is an example of the legacy configuration file: + +.. code-block:: json + + { + "developerMode": true, + "reextentUse": false, + "useClientLog": true, + "useMetadataServer": false + } + +For more information on using the previous configuration method, see :ref:`previous_configuration_method`. \ No newline at end of file diff --git a/configuration_guides/current_method_showing_all_flags_in_the_catalog_table.rst b/configuration_guides/current_method_showing_all_flags_in_the_catalog_table.rst new file mode 100644 index 000000000..647a401b6 --- /dev/null +++ b/configuration_guides/current_method_showing_all_flags_in_the_catalog_table.rst @@ -0,0 +1,21 @@ +.. 
_current_method_showing_all_flags_in_the_catalog_table: + +************************** +Showing All Flags in the Catalog Table +************************** +SQream uses the **sqream_catalog.parameters** catalog table for showing all flags, providing the scope (default, cluster and session), description, default value and actual value. + +The following is the correct syntax for a catalog table query: + +.. code-block:: console + + SELECT * FROM sqream_catalog.settings + +The following is an example of a catalog table query: + +.. code-block:: console + + externalTableBlobEstimate, 100, 100, default, + varcharEncoding, ascii, ascii, default, Changes the expected encoding for Varchar columns + useCrcForTextJoinKeys, true, true, default, + hiveStyleImplicitStringCasts, false, false, default, \ No newline at end of file diff --git a/configuration_guides/generic_regular_flags.rst b/configuration_guides/generic_regular_flags.rst index f8235ae07..34ff6850c 100644 --- a/configuration_guides/generic_regular_flags.rst +++ b/configuration_guides/generic_regular_flags.rst @@ -6,16 +6,16 @@ Regular Generic Flags The **Regular Generic Flags** page describes **Regular** modification type flags, which can be modified by standard users on a session basis: -* `Flipping Join Order to Force Equijoins `_ -* `Determining Client Level `_ -* `Setting CPU to Compress Defined Columns `_ -* `Setting Query Memory Processing Limit `_ -* `Setting the Spool Memory `_ -* `Setting Cache Partitions `_ -* `Setting Cache Flushing `_ -* `Setting InMemory Spool Memory `_ -* `Setting Disk Spool Memory `_ -* `Setting Spool Saved File Directory Location `_ -* `Setting Data Stored Persistently on Cache `_ -* `Setting Persistent Spool Saved File Directory Location `_ -* `Setting Session Tag Name `_ \ No newline at end of file +* `Flipping Join Order to Force Equijoins `_ +* `Determining Client Level `_ +* `Setting CPU to Compress Defined Columns `_ +* `Setting Query Memory Processing Limit `_ +* `Setting the 
Spool Memory `_ +* `Setting Cache Partitions `_ +* `Setting Cache Flushing `_ +* `Setting InMemory Spool Memory `_ +* `Setting Disk Spool Memory `_ +* `Setting Spool Saved File Directory Location `_ +* `Setting Data Stored Persistently on Cache `_ +* `Setting Persistent Spool Saved File Directory Location `_ +* `Setting Session Tag Name `_ \ No newline at end of file diff --git a/configuration_guides/generic_worker_flags.rst b/configuration_guides/generic_worker_flags.rst index 97cee4ecf..7ae2a0ee7 100644 --- a/configuration_guides/generic_worker_flags.rst +++ b/configuration_guides/generic_worker_flags.rst @@ -5,4 +5,4 @@ Worker Generic Flags ************************* The **Worker Generic Flags** page describes **Worker** modification type flags, which can be modified by standard users on a session basis: - * `Persisting Your Cache Directory `_ \ No newline + * `Persisting Your Cache Directory `_ \ No newline at end of file diff --git a/configuration_guides/graceful_shutdown.rst b/configuration_guides/graceful_shutdown.rst new file mode 100644 index 000000000..8b4f4b55c --- /dev/null +++ b/configuration_guides/graceful_shutdown.rst @@ -0,0 +1,22 @@ +.. _graceful_shutdown: + +************************* +Setting the Graceful Server Shutdown +************************* +The ``defaultGracefulShutdownTimeoutMinutes`` flag sets the amount of time that passes before SQream performs a graceful server shutdown. + +The following describes the ``defaultGracefulShutdownTimeoutMinutes`` flag: + +* **Data type** - size_t +* **Default value** - ``5`` +* **Allowed values** - 1-4000000000 + +For more information, see :ref:`shutdown_server`. 
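+A minimal illustrative sketch of adjusting this flag, assuming it accepts the ``ALTER SYSTEM SET`` syntax used for persistent flags elsewhere in this guide (the value shown is hypothetical):
+
+.. code-block:: postgres
+
+   -- Allow up to 10 minutes before a graceful server shutdown (illustrative value)
+   ALTER SYSTEM SET defaultGracefulShutdownTimeoutMinutes = 10;
+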
+ +For related flags, see the following: + +* :ref:`is_healer_on` + + :: + +* :ref:`healer_max_inactivity_hours` \ No newline at end of file diff --git a/configuration_guides/healer_detection_frequency_seconds.rst b/configuration_guides/healer_detection_frequency_seconds.rst new file mode 100644 index 000000000..9e7805852 --- /dev/null +++ b/configuration_guides/healer_detection_frequency_seconds.rst @@ -0,0 +1,14 @@ +.. _healer_detection_frequency_seconds: + +************************* +Healer Detection Frequency Seconds +************************* +The ``healerDetectionFrequencySeconds`` flag defines how frequently (in seconds) the Query Healer examines the progress of running statements. + +The following describes the ``healerDetectionFrequencySeconds`` worker level flag: + +* **Data type** - size_t +* **Default value** - ``1`` +* **Allowed values** - 1-3600 + +For related flags, see :ref:`is_healer_on`. \ No newline at end of file diff --git a/configuration_guides/healer_max_statement_inactivity_seconds.rst b/configuration_guides/healer_max_statement_inactivity_seconds.rst new file mode 100644 index 000000000..142e76eb7 --- /dev/null +++ b/configuration_guides/healer_max_statement_inactivity_seconds.rst @@ -0,0 +1,14 @@ +.. _healer_max_statement_inactivity_seconds: + +************************* +Max Statement Inactivity Seconds +************************* +The ``maxStatementInactivitySeconds`` flag defines the inactivity threshold (in seconds) beyond which a slow statement is logged. The log includes memory, CPU, and GPU information. + +The following describes the ``maxStatementInactivitySeconds`` worker level flag: + +* **Data type** - size_t +* **Default value** - ``5`` +* **Allowed values** - 1-4000000000 + +For related flags, see :ref:`is_healer_on`.
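+
+As an illustrative sketch, and assuming this worker-level flag can be set with the ``ALTER SYSTEM SET`` syntax shown elsewhere in this guide, the threshold could be raised as follows (the value is hypothetical):
+
+.. code-block:: postgres
+
+   -- Log statements that show no progress for 60 seconds (illustrative value)
+   ALTER SYSTEM SET maxStatementInactivitySeconds = 60;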
\ No newline at end of file diff --git a/configuration_guides/index.rst b/configuration_guides/index.rst index 6a65853f0..9c0b206e7 100644 --- a/configuration_guides/index.rst +++ b/configuration_guides/index.rst @@ -11,8 +11,7 @@ The **Configuration Guides** page describes the following configuration informat :glob: spooling - configuration_methods + configuring_sqream + ldap configuration_flags - - - + previous_configuration_method \ No newline at end of file diff --git a/configuration_guides/is_healer_on.rst b/configuration_guides/is_healer_on.rst new file mode 100644 index 000000000..1eb0e6384 --- /dev/null +++ b/configuration_guides/is_healer_on.rst @@ -0,0 +1,14 @@ +.. _is_healer_on: + +************************* +Is Healer On +************************* +The ``isHealerOn`` flag enables the Query Healer, which periodically examines the progress of running statements and logs statements exceeding the ``maxStatementInactivitySeconds`` flag setting. + +The following describes the ``isHealerOn`` flag: + +* **Data type** - boolean +* **Default value** - ``true`` +* **Allowed values** - ``true``, ``false`` + +For related flags, see :ref:`healer_max_statement_inactivity_seconds`. \ No newline at end of file diff --git a/configuration_guides/ldap.rst b/configuration_guides/ldap.rst new file mode 100644 index 000000000..45104ee19 --- /dev/null +++ b/configuration_guides/ldap.rst @@ -0,0 +1,134 @@ +.. _ldap: + +************************************* +Configuring LDAP Authentication +************************************* + + +Lightweight Directory Access Protocol (LDAP) is an authentication management service widely used with Microsoft Active Directory. Once it has been configured to authenticate SQream roles, all existing and newly added roles must be authenticated by an LDAP server, with the exception of the initial system deployment ``sqream`` role, which is granted full control permissions upon deployment.
+
+Prior to integrating SQream with LDAP, two preconditions must be considered:
+
+ * If SQream DB is being installed within an LDAP-integrated environment, it is best practice to ensure that the newly created SQream role names are consistent with existing LDAP user names.
+
+ * If LDAP is being integrated with an existing SQream environment, it is best practice to ensure that the newly created LDAP user names are consistent with existing SQream role names. Note that after LDAP has been successfully integrated, SQream roles that were mistakenly not configured, or whose names conflict with LDAP, will be recreated in SQream as roles without the ability to log in, without permissions, and without a default schema.
+
+.. contents:: In this topic:
+   :local:
+
+Before You Begin
+================
+
+Enable self-signed certificates for OpenLDAP by adding the following line to the ``ldap.conf`` configuration file:
+
+.. code-block:: text
+
+   TLS_REQCERT allow
+
+Configuring SQream Roles
+========================
+
+**Procedure**
+
+1. Create a new role:
+
+.. code-block:: postgres
+
+   CREATE ROLE <role_name>;
+
+2. Grant the new role login permission:
+
+.. code-block:: postgres
+
+   GRANT LOGIN TO <role_name>;
+
+3. Grant the new role ``CONNECT`` permission:
+
+.. code-block:: postgres
+
+   GRANT CONNECT ON DATABASE <database_name> TO <role_name>;
+
+
+You may also wish to :ref:`rename SQream roles`.
+
+
+Configuring LDAP Authentication
+===============================
+
+
+Flag Attributes
+---------------
+To enable LDAP authentication, configure the following **cluster** flag attributes using the ``ALTER SYSTEM SET`` command:
+
+.. list-table::
+   :widths: auto
+   :header-rows: 1
+
+   * - Attribute
+     - Description
+   * - ``authenticationMethod``
+     - Configures the authentication method. May be set to either ``sqream`` or ``ldap``. To configure LDAP authentication, choose ``ldap``.
+   * - ``ldapDomain``
+     - Configures the users' domain.
+   * - ``ldapIpAddress``
+     - Configures the IP address or the Fully Qualified Domain Name (FQDN) of your LDAP server, together with a protocol. Of the two protocols, ``ldap`` and ``ldaps``, we recommend using the encrypted ``ldaps`` protocol.
+   * - ``ldapConnTimeoutSec``
+     - Configures the LDAP connection timeout threshold (seconds). The default is 30 seconds.
+
+Enabling LDAP Authentication
+-------------------------------
+
+Roles with admin privileges or higher may enable LDAP authentication.
+
+**Procedure**
+
+1. Set the ``ldapIpAddress`` attribute:
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET ldapIpAddress = '<ldap_server_address>';
+
+2. Set the ``ldapDomain`` attribute:
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET ldapDomain = '<domain>';
+
+3. Optionally, set the ``ldapConnTimeoutSec`` attribute:
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET ldapConnTimeoutSec = <...>;
+
+4. Set the ``authenticationMethod`` attribute:
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET authenticationMethod = 'ldap';
+
+5. Restart all sqreamd servers.
+
+Example
+-------
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET ldapIpAddress = '<ldap_server_address>';
+   ALTER SYSTEM SET ldapDomain = '<@sqream.loc>';
+   ALTER SYSTEM SET ldapConnTimeoutSec = <15>;
+   ALTER SYSTEM SET authenticationMethod = 'ldap';
+
+
+Disabling LDAP Authentication
+-----------------------------
+
+To disable LDAP authentication and revert to sqream authentication:
+
+1. Execute the following syntax:
+
+.. code-block:: postgres
+
+   ALTER SYSTEM SET authenticationMethod = 'sqream';
+
+2. Restart all sqreamd servers.
diff --git a/configuration_guides/login_max_retries.rst b/configuration_guides/login_max_retries.rst
new file mode 100644
index 000000000..bf3ae6d40
--- /dev/null
+++ b/configuration_guides/login_max_retries.rst
@@ -0,0 +1,11 @@
+.. _login_max_retries:
+
+***********************************
+Adjusting Permitted Log-in Attempts
+***********************************
+The ``loginMaxRetries`` flag sets the maximum number of permitted log-in attempts.
+
+The following describes the ``loginMaxRetries`` flag:
+
+* **Data type** - size_t
+* **Default value** - ``5``
\ No newline at end of file
diff --git a/configuration_guides/varchar_identifiers.rst b/configuration_guides/varchar_identifiers.rst
deleted file mode 100644
index 889e5c16e..000000000
--- a/configuration_guides/varchar_identifiers.rst
+++ /dev/null
@@ -1,12 +0,0 @@
-.. _varchar_identifiers:
-
-*************************
-Interpreting VARCHAR as TEXT
-*************************
-The ``varcharIdentifiers`` flag activates using **varchar** as an identifier.
-
-The following describes the ``varcharIdentifiers`` flag:
-
-* **Data type** - boolean
-* **Default value** - ``true``
-* **Allowed values** - ``true``, ``false``
\ No newline at end of file
diff --git a/connecting_to_sqream/client_drivers/dotnet/index.rst b/connecting_to_sqream/client_drivers/dotnet/index.rst
new file mode 100644
index 000000000..ed0c61557
--- /dev/null
+++ b/connecting_to_sqream/client_drivers/dotnet/index.rst
@@ -0,0 +1,131 @@
+.. _net:
+
+*******************************
+Connecting to SQream Using .NET
+*******************************
+The SqreamNet ADO.NET Data Provider lets you connect to SQream through your .NET environment.
+
+The .NET page includes the following sections:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Integrating SQreamNet
+==================================
+The **Integrating SQreamNet** section describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Prerequisites
+----------------
+The SqreamNet provider requires .NET version 6 or newer.
+
+Getting the DLL File
+--------------------
+The .NET driver is available for download from the :ref:`client drivers download page <client_drivers>`.
+
+Integrating SQreamNet
+-------------------------
+After downloading the .NET driver, save the archive file to a known location. Next, in your IDE, add a SqreamNet.dll reference to your project.
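+
+For example, in an SDK-style C# project, the reference can also be declared directly in the project file. This is a sketch only; the ``HintPath`` below is a hypothetical placeholder for wherever you saved the SqreamNet.dll file:
+
+.. code-block:: xml
+
+   <!-- Hypothetical path; point HintPath at your saved SqreamNet.dll -->
+   <ItemGroup>
+     <Reference Include="SqreamNet">
+       <HintPath>..\libs\SqreamNet.dll</HintPath>
+     </Reference>
+   </ItemGroup>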
+
+If you wish to upgrade SQreamNet within an existing project, you may either replace the existing .dll file with an updated one or point the project's reference at the new location.
+
+
+Known Driver Limitations
+----------------------------
+ * Unicode characters are not supported when using ``INSERT INTO AS SELECT``.
+
+ * To avoid possible casting issues, use ``getDouble`` when using ``FLOAT``.
+
+Connecting to SQream for the First Time
+==============================================
+An initial connection to SQream must be established by creating a **SqreamConnection** object using a connection string.
+
+.. contents::
+   :local:
+   :depth: 1
+
+
+Connection String
+--------------------
+To connect to SQream, instantiate a **SqreamConnection** object using this connection string.
+
+The following is the syntax for SQream:
+
+.. code-block:: text
+
+   Data Source=<hostname>,<port>;User=<username>;Password=<password>;Initial Catalog=<database_name>;Integrated Security=true;
+
+Connection Parameters
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. list-table::
+   :widths: auto
+   :header-rows: 1
+
+   * - Item
+     - State
+     - Default
+     - Description
+   * - ``Data Source``
+     - Mandatory
+     - None
+     - Hostname/IP/FQDN and port of the SQream DB worker. For example, ``127.0.0.1,5000``, ``sqream.mynetwork.co,3108``
+   * - ``Initial Catalog``
+     - Mandatory
+     - None
+     - Database name to connect to. For example, ``master``
+   * - ``User``
+     - Mandatory
+     - None
+     - Username of a role to use for connection. For example, ``User=rhendricks``
+   * - ``Password``
+     - Mandatory
+     - None
+     - Specifies the password of the selected role. For example, ``Password=Tr0ub4dor&3``
+   * - ``service``
+     - Optional
+     - ``sqream``
+     - Specifies the service queue to use. For example, ``service=etl``
+   * - ``ssl``
+     - Optional
+     - ``false``
+     - Specifies SSL for this connection. For example, ``ssl=true``
+   * - ``cluster``
+     - Optional
+     - ``true``
+     - Connects via a load balancer (use only if one exists, and verify the port).
+
+Connection String Examples
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following is an example of a SQream cluster with a load balancer and no service queues (with SSL):
+
+.. code-block:: text
+
+   Data Source=sqream.mynetwork.co,3108;User=rhendricks;Password=Tr0ub4dor&3;Initial Catalog=master;Integrated Security=true;ssl=true;cluster=true;
+
+
+The following is a minimal example for a local standalone SQream database:
+
+.. code-block:: text
+
+   Data Source=127.0.0.1,5000;User=rhendricks;Password=Tr0ub4dor&3;Initial Catalog=master;
+
+The following is an example of a SQream cluster with a load balancer and a specific service queue named ``etl``, to the database named ``raviga``:
+
+.. code-block:: text
+
+   Data Source=sqream.mynetwork.co,3108;User=rhendricks;Password=Tr0ub4dor&3;Initial Catalog=raviga;Integrated Security=true;service=etl;cluster=true;
+
+Sample C# Program
+--------------------
+You can download the :download:`.NET Application Sample File <sample.cs>` below by right-clicking and saving it to your computer.
+
+..
literalinclude:: sample.cs
   :language: C#
   :caption: .NET Application Sample
   :linenos:
diff --git a/connecting_to_sqream/client_drivers/dotnet/sample.cs b/connecting_to_sqream/client_drivers/dotnet/sample.cs
new file mode 100644
index 000000000..54a19e0da
--- /dev/null
+++ b/connecting_to_sqream/client_drivers/dotnet/sample.cs
@@ -0,0 +1,93 @@
+    public void Test()
+    {
+        var connection = OpenConnection("192.168.4.62", 5000, "sqream", "sqream", "master");
+
+        ExecuteSQLCommand(connection, "create or replace table tbl_example as select 1 as x , 'a' as y;");
+
+        var tableData = ReadExampleData(connection, "select * from tbl_example;");
+    }
+
+    /// <summary>
+    /// Builds a connection string to the sqream server and opens a connection
+    /// </summary>
+    /// <param name="ipAddress">host to connect</param>
+    /// <param name="port">port sqreamd is running on</param>
+    /// <param name="username">role username</param>
+    /// <param name="password">role password</param>
+    /// <param name="databaseName">database name</param>
+    /// <param name="isCluster">optional - set to true when the ip,port endpoint is a server picker process</param>
+    /// <returns>SQream connection object</returns>
+    /// <exception>Throws SqreamException if it fails to open a connection</exception>
+    public SqreamConnection OpenConnection(string ipAddress, int port, string username, string password, string databaseName, bool isCluster = false)
+    {
+        // create the connection string according to the format
+        var connectionString = string.Format(
+            "Data Source={0},{1};User={2};Password={3};Initial Catalog={4};Cluster={5}",
+            ipAddress,
+            port,
+            username,
+            password,
+            databaseName,
+            isCluster
+        );
+
+        // create a sqream connection object
+        var connection = new SqreamConnection(connectionString);
+
+        // open a connection
+        connection.Open();
+
+        // return the connection object
+        return connection;
+    }
+
+    /// <summary>
+    /// Executes a SQL command on the sqream server
+    /// </summary>
+    /// <param name="connection">connection to sqream server</param>
+    /// <param name="sql">sql command</param>
+    /// <exception>thrown when the connection is not open</exception>
+    public void ExecuteSQLCommand(SqreamConnection connection, string sql)
+    {
+        // validate the connection is open and throw an exception if not
+        if (connection.State != 
System.Data.ConnectionState.Open)
+            throw new InvalidOperationException(string.Format("connection to sqream is not open. connection.State: {0}", connection.State));
+
+        // create a new command object utilizing the sql and the connection
+        var command = new SqreamCommand(sql, connection);
+
+        // execute the command
+        command.ExecuteNonQuery();
+    }
+
+    /// <summary>
+    /// Executes a SQL command on the sqream server, and reads the result set using a DataReader
+    /// </summary>
+    /// <param name="connection">connection to sqream server</param>
+    /// <param name="sql">sql command</param>
+    /// <exception>thrown when the connection is not open</exception>
+    public List<Tuple<int, string>> ReadExampleData(SqreamConnection connection, string sql)
+    {
+        // validate the connection is open and throw an exception if not
+        if (connection.State != System.Data.ConnectionState.Open)
+            throw new InvalidOperationException(string.Format("connection to sqream is not open. connection.State: {0}", connection.State));
+
+        // create a new command object utilizing the sql and the connection
+        var command = new SqreamCommand(sql, connection);
+
+        // create a reader object to iterate over the result set
+        var reader = (SqreamDataReader)command.ExecuteReader();
+
+        // list of results
+        var result = new List<Tuple<int, string>>();
+
+        // iterate the reader and read the table's int,string values into a result tuple object
+        while (reader.Read())
+            result.Add(new Tuple<int, string>(reader.GetInt32(0), reader.GetString(1)));
+
+        // return the result set
+        return result;
+    }
+
diff --git a/third_party_tools/client_drivers/index.rst b/connecting_to_sqream/client_drivers/index.rst
similarity index 56%
rename from third_party_tools/client_drivers/index.rst
rename to connecting_to_sqream/client_drivers/index.rst
index 2b486d47f..d35b1662f 100644
--- a/third_party_tools/client_drivers/index.rst
+++ b/connecting_to_sqream/client_drivers/index.rst
@@ -1,110 +1,119 @@
-..
_client_drivers: - -************************************ -Client Drivers for |latest_version| -************************************ - -The guides on this page describe how to use the Sqream DB client drivers and client applications with SQream. - -Client Driver Downloads -============================= - -All Operating Systems ---------------------------- -The following are applicable to all operating systems: - -.. _jdbc: - -* **JDBC** - recommended installation via ``mvn``: - - * `JDBC .jar file `_ - sqream-jdbc-4.5.3 (.jar) - * `JDBC driver `_ - - -.. _python: - -* **Python** - Recommended installation via ``pip``: - - * `Python .tar file `_ - pysqream v3.1.3 (.tar.gz) - * `Python driver `_ - - -.. _nodejs: - -* **Node.JS** - Recommended installation via ``npm``: - - * `Node.JS `_ - sqream-v4.2.4 (.tar.gz) - * `Node.JS driver `_ - - -.. _tableau_connector: - -* **Tableau**: - - * `Tableau connector `_ - SQream (.taco) - * `Tableau manual installation `_ - - -.. _powerbi_connector: - -* **Power BI**: - - * `Power BI PowerQuery connector `_ - SQream (.mez) - * `Power BI manual installation `_ - - -Windows --------------- -The following are applicable to Windows: - -* **ODBC installer** - SQream Drivers v2020.2.0, with Tableau customizations. Please contact your `Sqream represenative `_ for this installer. - - For more information on installing and configuring ODBC on Windows, see :ref:`Install and configure ODBC on Windows `. 
- - -* **Net driver** - `SQream .Net driver v3.0.2 `_ - - - -Linux --------------- -The following are applicable to Linux: - -* `SQream SQL (x86_64) `_ - sqream-sql-v2020.1.1_stable.x86_64.tar.gz -* `Sqream SQL CLI Reference `_ - Interactive command-line SQL client for Intel-based machines - - :: - -* `SQream SQL*(IBM POWER9) `_ - sqream-sql-v2020.1.1_stable.ppc64le.tar.gz -* `Sqream SQL CLI Reference `_ - Interactive command-line SQL client for IBM POWER9-based machines - - :: - -* ODBC Installer - Please contact your SQream representative for this installer. - - :: - -* C++ connector - `libsqream-4.0 `_ -* `C++ shared object library `_ - - -.. toctree:: - :maxdepth: 4 - :caption: Client Driver Documentation: - :titlesonly: - - jdbc/index - python/index - nodejs/index - odbc/index - cpp/index - - - -.. rubric:: Need help? - -If you couldn't find what you're looking for, we're always happy to help. Visit `SQream's support portal `_ for additional support. - -.. rubric:: Looking for older drivers? - +.. _client_drivers: + +************** +Client Drivers +************** + +The guides on this page describe how to use the Sqream DB client drivers and client applications with SQream. + +Client Driver Downloads +============================= + +All Operating Systems +--------------------------- +The following are applicable to all operating systems: + +.. _jdbc: + +* **JDBC** - recommended installation via ``mvn``: + + * `JDBC .jar file `_ - sqream-jdbc-4.5.3 (.jar) + * :ref:`java_jdbc` + +.. _.net: + +* **.NET**: + + * `.NET .dll file `_ + * :ref:`net` + +* **Kafka**: + + * `Kafka download file <>`_ + * :ref:`kafka` + +.. _python: + +* **Python** - Recommended installation via ``pip``: + + * `Python .tar file `_ - pysqream v3.1.3 (.tar.gz) + * :ref:`pysqream` + + +.. _nodejs: + +* **Node.JS** - Recommended installation via ``npm``: + + * `Node.JS `_ - sqream-v4.2.4 (.tar.gz) + * :ref:`nodejs` + + +.. 
_tableau_connector:
+
+* **Tableau**:
+
+  * `Tableau connector `_ - SQream (.taco)
+  * :ref:`tableau`
+
+
+.. _powerbi_connector:
+
+* **Power BI**:
+
+  * `Power BI PowerQuery connector `_ - SQream (.mez)
+  * :ref:`power_bi`
+
+
+Windows
+--------------
+The following are applicable to Windows:
+
+* **ODBC installer** - SQream Drivers v2020.2.0, with Tableau customizations. Please contact your `Sqream representative `_ for this installer.
+
+   For more information on installing and configuring ODBC on Windows, see :ref:`Install and configure ODBC on Windows `.
+
+
+* **Net driver** - `SQream .Net driver v3.0.2 `_
+
+
+
+Linux
+--------------
+The following are applicable to Linux:
+
+* `SQream SQL (x86_64) `_ - sqream-sql-v2020.1.1_stable.x86_64.tar.gz
+* :ref:`sqream_sql_cli_reference` - Interactive command-line SQL client for Intel-based machines
+
+   ::
+
+* `SQream SQL (IBM POWER9) `_ - sqream-sql-v2020.1.1_stable.ppc64le.tar.gz
+* :ref:`sqream_sql_cli_reference` - Interactive command-line SQL client for IBM POWER9-based machines
+
+   ::
+
+* ODBC Installer - Please contact your SQream representative for this installer.
+
+
+
+.. toctree::
+   :maxdepth: 4
+   :caption: Client Driver Documentation:
+   :titlesonly:
+
+   jdbc/index
+   python/index
+   dotnet/index
+   kafka/index
+   nodejs/index
+   odbc/index
+
+
+
+.. rubric:: Need help?
+
+If you couldn't find what you're looking for, we're always happy to help. Visit :ref:`information_for_support` for additional support.
+
+.. rubric:: Looking for older drivers?
+
 If you're looking for an older version of SQream DB drivers, versions 1.10 through 2019.2.1 are available at https://sqream.com/product/client-drivers/.
\ No newline at end of file
diff --git a/connecting_to_sqream/client_drivers/jdbc/index.rst b/connecting_to_sqream/client_drivers/jdbc/index.rst
new file mode 100644
index 000000000..438df0ad7
--- /dev/null
+++ b/connecting_to_sqream/client_drivers/jdbc/index.rst
@@ -0,0 +1,171 @@
+..
_java_jdbc:
+
+*******************************
+Connecting to SQream Using JDBC
+*******************************
+The SQream JDBC driver lets you connect to SQream using many Java applications and tools. This page describes how to write a Java application using the JDBC interface. The JDBC driver requires Java 1.8 or newer.
+
+The JDBC page includes the following sections:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Installing the JDBC Driver
+==================================
+The **Installing the JDBC Driver** section describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Prerequisites
+----------------
+The SQream JDBC driver requires Java 1.8 or newer, and SQream recommends using Oracle Java or OpenJDK:
+
+* **Oracle Java** - Download and install `Java 8 `_ from Oracle for your platform.
+
+   ::
+
+* **OpenJDK** - Install `OpenJDK `_
+
+   ::
+
+* **Windows** - SQream recommends installing `Zulu 8 `_
+
+Getting the JAR file
+---------------------
+SQream provides the JDBC driver as a zipped JAR file, available for download from the :ref:`client drivers download page <client_drivers>`. This JAR file can be integrated into your Java-based applications or projects.
+
+Extracting the ZIP Archive
+--------------------------
+Run the following command to extract the JAR file from the ZIP archive:
+
+.. code-block:: console
+
+   $ unzip sqream-jdbc-4.3.0.zip
+
+Setting Up the Class Path
+----------------------------
+To use the driver, you must include the JAR named ``sqream-jdbc-<version>.jar`` in the class path, either by adding it to the ``CLASSPATH`` environment variable, or by using flags on the relevant Java command line.
+
+For example, if the JDBC driver has been unzipped to ``/home/sqream/sqream-jdbc-4.3.0.jar``, the following commands are used to run the application:
+
+.. code-block:: console
+
+   $ export CLASSPATH=/home/sqream/sqream-jdbc-4.3.0.jar:$CLASSPATH
+   $ java my_java_app
+
+Alternatively, you can pass ``-classpath`` to the Java executable file:
+
+..
code-block:: console
+
+   $ java -classpath .:/home/sqream/sqream-jdbc-4.3.0.jar my_java_app
+
+Connecting to SQream Using a JDBC Application
+==============================================
+To connect to SQream from a JDBC application, use the following settings:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Driver Class
+--------------
+Use ``com.sqream.jdbc.SQDriver`` as the driver class in the JDBC application.
+
+Connection String
+--------------------
+JDBC drivers rely on a connection string.
+
+The following is the syntax for SQream:
+
+.. code-block:: text
+
+   jdbc:Sqream://<host>:<port>/<database_name>;user=<username>;password=<password>;[<optional_parameter>=<value>; ...]
+
+Connection Parameters
+^^^^^^^^^^^^^^^^^^^^^^^^
+The following table shows the connection string parameters:
+
+.. list-table::
+   :widths: auto
+   :header-rows: 1
+
+   * - Item
+     - State
+     - Default
+     - Description
+   * - ``<host>:<port>``
+     - Mandatory
+     - None
+     - Hostname and port of the SQream DB worker. For example, ``127.0.0.1:5000``, ``sqream.mynetwork.co:3108``
+   * - ``<database_name>``
+     - Mandatory
+     - None
+     - Database name to connect to. For example, ``master``
+   * - ``username=``
+     - Mandatory
+     - None
+     - Username of a role to use for connection. For example, ``username=rhendricks``
+   * - ``password=``
+     - Mandatory
+     - None
+     - Specifies the password of the selected role. For example, ``password=Tr0ub4dor&3``
+   * - ``service=``
+     - Optional
+     - ``sqream``
+     - Specifies the service queue to use. For example, ``service=etl``
+   * - ``ssl=``
+     - Optional
+     - ``false``
+     - Specifies SSL for this connection. For example, ``ssl=true``
+   * - ``cluster=``
+     - Optional
+     - ``true``
+     - Connects via a load balancer (use only if one exists, and verify the port).
+   * - ``fetchSize=``
+     - Optional
+     - ``true``
+     - Enables on-demand loading, and defines the double buffer size for results. The ``fetchSize`` value is rounded up to the chunk size. For example, ``fetchSize=1`` loads one row and is rounded to one chunk. If ``fetchSize`` is 100,600 and the chunk size is 100,000, the value is rounded up to two chunks.
+   * - ``insertBuffer=``
+     - Optional
+     - ``true``
+     - Defines the buffer size, in bytes, to fill before flushing data to the server. Clients running a parameterized insert (network insert) can define the amount of data to collect before flushing the buffer.
+   * - ``loggerLevel=``
+     - Optional
+     - ``true``
+     - Defines the logger level as either ``debug`` or ``trace``.
+   * - ``logFile=``
+     - Optional
+     - ``true``
+     - Enables the file appender and defines the file name. The file name can be set as either the file name or the file path.
+
+Connection String Examples
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following is an example of a SQream cluster with a load balancer and no service queues (with SSL):
+
+.. code-block:: text
+
+   jdbc:Sqream://sqream.mynetwork.co:3108/master;user=rhendricks;password=Tr0ub4dor&3;ssl=true;cluster=true
+
+The following is a minimal example for a local standalone SQream database:
+
+.. code-block:: text
+
+   jdbc:Sqream://127.0.0.1:5000/master;user=rhendricks;password=Tr0ub4dor&3
+
+The following is an example of a SQream cluster with a load balancer and a specific service queue named ``etl``, to the database named ``raviga``:
+
+.. code-block:: text
+
+   jdbc:Sqream://sqream.mynetwork.co:3108/raviga;user=rhendricks;password=Tr0ub4dor&3;cluster=true;service=etl
+
+Sample Java Program
+--------------------
+You can download the :download:`JDBC Application Sample File <sample.java>` below by right-clicking and saving it to your computer.
+
+..
literalinclude:: sample.java + :language: java + :caption: JDBC Application Sample + :linenos: diff --git a/third_party_tools/client_drivers/jdbc/sample.java b/connecting_to_sqream/client_drivers/jdbc/sample.java similarity index 97% rename from third_party_tools/client_drivers/jdbc/sample.java rename to connecting_to_sqream/client_drivers/jdbc/sample.java index 1b7af5804..3ff670747 100644 --- a/third_party_tools/client_drivers/jdbc/sample.java +++ b/connecting_to_sqream/client_drivers/jdbc/sample.java @@ -1,67 +1,67 @@ -import java.sql.Connection; -import java.sql.DatabaseMetaData; -import java.sql.DriverManager; -import java.sql.Statement; -import java.sql.ResultSet; - -import java.io.IOException; -import java.security.KeyManagementException; -import java.security.NoSuchAlgorithmException; -import java.sql.SQLException; - - - -public class SampleTest { - - // Replace with your connection string - static final String url = "jdbc:Sqream://sqream.mynetwork.co:3108/master;user=rhendricks;password=Tr0ub4dor&3;ssl=true;cluster=true"; - - // Allocate objects for result set and metadata - Connection conn = null; - Statement stmt = null; - ResultSet rs = null; - DatabaseMetaData dbmeta = null; - - int res = 0; - - public void testJDBC() throws SQLException, IOException { - - // Create a connection - conn = DriverManager.getConnection(url,"rhendricks","Tr0ub4dor&3"); - - // Create a table with a single integer column - String sql = "CREATE TABLE test (x INT)"; - stmt = conn.createStatement(); // Prepare the statement - stmt.execute(sql); // Execute the statement - stmt.close(); // Close the statement handle - - // Insert some values into the newly created table - sql = "INSERT INTO test VALUES (5),(6)"; - stmt = conn.createStatement(); - stmt.execute(sql); - stmt.close(); - - // Get values from the table - sql = "SELECT * FROM test"; - stmt = conn.createStatement(); - rs = stmt.executeQuery(sql); - // Fetch all results one-by-one - while(rs.next()) { - res = rs.getInt(1); 
- System.out.println(res); // Print results to screen - } - rs.close(); // Close the result set - stmt.close(); // Close the statement handle - } - - - public static void main(String[] args) throws SQLException, KeyManagementException, NoSuchAlgorithmException, IOException, ClassNotFoundException{ - - // Load SQream DB JDBC driver - Class.forName("com.sqream.jdbc.SQDriver"); - - // Create test object and run - SampleTest test = new SampleTest(); - test.testJDBC(); - } +import java.sql.Connection; +import java.sql.DatabaseMetaData; +import java.sql.DriverManager; +import java.sql.Statement; +import java.sql.ResultSet; + +import java.io.IOException; +import java.security.KeyManagementException; +import java.security.NoSuchAlgorithmException; +import java.sql.SQLException; + + + +public class SampleTest { + + // Replace with your connection string + static final String url = "jdbc:Sqream://sqream.mynetwork.co:3108/master;user=rhendricks;password=Tr0ub4dor&3;ssl=true;cluster=true"; + + // Allocate objects for result set and metadata + Connection conn = null; + Statement stmt = null; + ResultSet rs = null; + DatabaseMetaData dbmeta = null; + + int res = 0; + + public void testJDBC() throws SQLException, IOException { + + // Create a connection + conn = DriverManager.getConnection(url,"rhendricks","Tr0ub4dor&3"); + + // Create a table with a single integer column + String sql = "CREATE TABLE test (x INT)"; + stmt = conn.createStatement(); // Prepare the statement + stmt.execute(sql); // Execute the statement + stmt.close(); // Close the statement handle + + // Insert some values into the newly created table + sql = "INSERT INTO test VALUES (5),(6)"; + stmt = conn.createStatement(); + stmt.execute(sql); + stmt.close(); + + // Get values from the table + sql = "SELECT * FROM test"; + stmt = conn.createStatement(); + rs = stmt.executeQuery(sql); + // Fetch all results one-by-one + while(rs.next()) { + res = rs.getInt(1); + System.out.println(res); // Print results to screen 
+            }
+            rs.close();    // Close the result set
+            stmt.close();  // Close the statement handle
+    }
+
+
+    public static void main(String[] args) throws SQLException, KeyManagementException, NoSuchAlgorithmException, IOException, ClassNotFoundException{
+
+        // Load SQream DB JDBC driver
+        Class.forName("com.sqream.jdbc.SQDriver");
+
+        // Create test object and run
+        SampleTest test = new SampleTest();
+        test.testJDBC();
+    }
}
\ No newline at end of file
diff --git a/connecting_to_sqream/client_drivers/kafka/index.rst b/connecting_to_sqream/client_drivers/kafka/index.rst
new file mode 100644
index 000000000..983b0b96e
--- /dev/null
+++ b/connecting_to_sqream/client_drivers/kafka/index.rst
@@ -0,0 +1,250 @@
+.. _kafka:
+
+********************************
+Connecting to SQream Using Kafka
+********************************
+
+If you are using Apache Kafka for distributed streaming and wish to use it with SQream, follow these instructions.
+
+
+.. contents::
+   :local:
+   :depth: 1
+
+
+Before You Begin
+================
+
+* You must have Java 11 installed
+* You must have `JDBC `_ installed
+* Your network bandwidth must be at least 100 megabits per second
+* The supported data format for streamed data is JSON
+
+High Level Workflow
+===================
+
+1. Install the JDBC Connector.
+2. Install kafka_2.12-3.2.1.
+3. Run your Kafka Connect API.
+4.
+
+
+
+Installation and Configuration
+==============================
+
+Before you configure the Kafka Connector, make sure that Kafka and Zookeeper are both running.
+
+Kafka Connector workflow:
+
+.. figure:: /_static/images/kafka_flow.png
+
+.. contents::
+   :local:
+   :depth: 1
+
+Sink Connector
+---------------
+
+The Sink Connector reads JSON format Kafka topics and writes the messages inside each topic into text files. The files are created with the extension ``.tmp`` and stored in a specified directory.
The ``sqream.batchRecordCount`` parameter defines the number of records to be written to each file; when the specified number is reached, the Sink Connector closes the file, renames it with the ``sqream.fileExtension`` extension, and then creates a new file. Unlike data streaming, which continuously sends data from the Kafka topic to the database, the Sink Connector only sends data once the configured record count is reached, which means that data arrives in batches.
+
+SQream tables must be created according to the columns configured in ``csvOrder``.
+
+
+Sink Connector Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configuration file structure:
+
+ .. code-block:: text
+
+    name=SQReamFileSink
+    topics=topsqreamtest1
+    tasks.max=4
+    connector.class=tr.com.entegral.FileSinkConnector
+    errors.tolerance=all
+    errors.log.enable=true
+    errors.log.include.messages=true
+    value.converter=org.apache.kafka.connect.json.JsonConverter
+    value.converter.schemas.enable=false
+    transforms=flatten
+    transforms.flatten.type=org.apache.kafka.connect.transforms.Flatten$Value
+    transforms.flatten.delimiter=.
+    sqream.outputdir=/home/sqream/kafkaconnect/outputs
+    sqream.batchRecordCount=10
+    sqream.fileExtension=csv
+    sqream.removeNewline=false
+    sqream.outputType=csv
+    sqream.csvOrder=receivedTime,equipmentId,asdf,timestamp,intv
+
+The following parameters require configuration.
+
+.. list-table::
+   :widths: auto
+   :header-rows: 1
+
+   * - Parameter
+     - Description
+   * - Topic
+     - A category or feed name to which messages are published and from which they are consumed
+   * - ``sqream.batchRecordCount``
+     - The record count to be written to each file
+   * - ``outputdir``
+     - Copy the ``sqream.outputdir`` path, up to and including ``outputs``, and save it to a known location; this section of the path is required when configuring the SQream loader
+   * - ``csvOrder``
+     - Defines table columns. 
SQream table columns must align with the ``csvOrder`` table columns
+
+
+Configuration file:
+
+ .. code-block:: console
+
+    vi /home/sqream/kafkaconnect1/sqream-kafka-connector/sqream-kafkaconnect/config/sqream-filesink.properties
+
+Running commands:
+
+ .. code-block:: console
+
+    export JAVA_HOME=/home/sqream/copy-from-util/jdk-11;export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar;cd /home/sqream/kafkaconnect1/kafka/bin/ && ./connect-standalone.sh /home/sqream/kafkaconnect1/sqream-kafka-connector/sqream-kafkaconnect/config/connect-standalone.properties /home/sqream/kafkaconnect1/sqream-kafka-connector/sqream-kafkaconnect/config/sqream-filesink.properties &
+
+
+
+
+JDBC
+-------------
+
+The JDBC connector can be used to ingest data from Kafka, allowing SQream DB to consume the messages directly. This enables efficient and secure data ingestion into SQream DB.
+
+.. contents::
+   :local:
+   :depth: 1
+
+JDBC Configuration
+~~~~~~~~~~~~~~~~~~
+
+.. code-block:: console
+
+   vi /home/sqream/kafkaconnect1/sqream-kafka-connector/sqream-kafkaconnect/config/sqream-jdbcsink.properties
+
+Example:
+
+.. code-block:: text
+
+   name=SQReamJDBCSink
+   topics=demo1
+   tasks.max=1
+   connector.class=tr.com.entegral.JDBCSinkConnector
+   errors.tolerance=all
+   errors.log.enable=true
+   errors.log.include.messages=true
+   value.converter=org.apache.kafka.connect.json.JsonConverter
+   value.converter.schemas.enable=false
+   transforms=flatten
+   transforms.flatten.type=org.apache.kafka.connect.transforms.Flatten$Value
+   transforms.flatten.delimiter=.
+ sqream.batchRecordCount=3 + #sqream.jdbc.connectionstring=jdbc:sqlserver://localhost;databaseName=TestDB;user=kafka;password=kafka;encrypt=true;trustServerCertificate=true; + sqream.jdbc.connectionstring=jdbc:Sqream://192.168.0.102:5001/kafka;user=sqream;password=sqream;cluster=false + sqream.input.inputfields=intStr,inInt,indateTime,inFloat + sqream.jdbc.tablename=testtable + sqream.jdbc.table.columnnames=colStr,colInt,Coldatetime,ColFloat + sqream.jdbc.table.columntypes=VARCHAR,INTEGER,TIMESTAMP,FLOAT + sqream.jdbc.dateformat=yyyy-MM-dd HH:mm:ss + +SQream Loader Configuration +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + +Building the SQream Loader: + + .. code-block:: console + + git clone -b develop http://gitlab.sq.l/java/copy-from-util.git + mvn clean package + +The following parameters require configuration: + +.. list-table:: + :widths: auto + :header-rows: 1 + + + * - Parameter + - Description + * - ``root`` + - Paste the output path section copied from the Sink Connector configuration + * - ``schema`` + - The schema of the target table, for example ``public`` + * - ``name`` + - The name of the target table + +Configuration file structure: + + .. code-block:: yaml + + #config.yaml + + com: + sqream: + kafka: + common: + root: "/home/sqream/copy_from_root" + readyFileSuffix: ".csv" + connection: + ip: "127.0.0.1" + port: 3108 + database: "master" + cluster: true + user: sqream + pass: sqream + delimiter: "," + tables: + - schema: "public" + name: "t1" + parallel: 5 + - schema: "public" + name: "t2" + parallel: 3 + - schema: "public" + name: "t3" + parallel: 1 + + + + +Running the SQream Loader: + + .. code-block:: console + + /home/sqream/copy-from-util/jdk-11/bin/java -jar /home/sqream/copy-from-util/copy-from-util/target/copy-from-util-0.0.1-SNAPSHOT.jar --spring.config.additional-location=/home/sqream/copy-from-util/config.yaml & + +Logging and Monitoring +======================== + +Log files are created in the following cases: + * The Java application (consumer or loader) fails
+ * Files cannot be saved to the output folder, because of either: + + * a folder permission issue, or + * the SQream loader folder not being the same as the Kenan folder + +Purging +======= + +Ingested files are automatically zipped and archived for 60 days. + +Limitations +=========== + +* Latency +* Retention + +Examples +========= diff --git a/third_party_tools/client_drivers/nodejs/index.rst b/connecting_to_sqream/client_drivers/nodejs/index.rst similarity index 96% rename from third_party_tools/client_drivers/nodejs/index.rst rename to connecting_to_sqream/client_drivers/nodejs/index.rst index cb7db193b..94b84a072 100644 --- a/third_party_tools/client_drivers/nodejs/index.rst +++ b/connecting_to_sqream/client_drivers/nodejs/index.rst @@ -1,382 +1,382 @@ -.. _nodejs: - -************************* -Node.JS -************************* - -The SQream DB Node.JS driver allows Javascript applications and tools connect to SQream DB. -This tutorial shows you how to write a Node application using the Node.JS interface. - -The driver requires Node 10 or newer. - -.. contents:: In this topic: - :local: - -Installing the Node.JS driver -================================== - -Prerequisites ----------------- - -* Node.JS 10 or newer. Follow instructions at `nodejs.org `_ . - -Install with NPM -------------------- - -Installing with npm is the easiest and most reliable method. -If you need to install the driver in an offline system, see the offline method below. - -.. code-block:: console - - $ npm install @sqream/sqreamdb - -Install from an offline package ------------------------------------- - -The Node driver is provided as a tarball for download from the `SQream Drivers page `_ . - -After downloading the tarball, use ``npm`` to install the offline package. - -..
code-block:: console - - $ sudo npm install sqreamdb-4.0.0.tgz - - -Connect to SQream DB with a Node.JS application -==================================================== - -Create a simple test ------------------------------------------- - -Replace the connection parameters with real parameters for a SQream DB installation. - -.. code-block:: javascript - :caption: sqreamdb-test.js - - const Connection = require('@sqream/sqreamdb'); - - const config = { - host: 'localhost', - port: 3109, - username: 'rhendricks', - password: 'super_secret_password', - connectDatabase: 'raviga', - cluster: true, - is_ssl: true, - service: 'sqream' - }; - - const query1 = 'SELECT 1 AS test, 2*6 AS "dozen"'; - - const sqream = new Connection(config); - sqream.execute(query1).then((data) => { - console.log(data); - }, (err) => { - console.error(err); - }); - - -Run the test ----------------- - -A successful run should look like this: - -.. code-block:: console - - $ node sqreamdb-test.js - [ { test: 1, dozen: 12 } ] - - -API reference -==================== - -Connection parameters ---------------------------- - -.. list-table:: - :widths: auto - :header-rows: 1 - - * - Item - - Optional - - Default - - Description - * - ``host`` - - ✗ - - None - - Hostname for SQream DB worker. For example, ``127.0.0.1``, ``sqream.mynetwork.co`` - * - ``port`` - - ✗ - - None - - Port for SQream DB end-point. For example, ``3108`` for the load balancer, ``5000`` for a worker. - * - ``username`` - - ✗ - - None - - Username of a role to use for connection. For example, ``rhendricks`` - * - ``password`` - - ✗ - - None - - Specifies the password of the selected role. For example, ``Tr0ub4dor&3`` - * - ``connectDatabase`` - - ✗ - - None - - Database name to connect to. For example, ``master`` - * - ``service`` - - ✓ - - ``sqream`` - - Specifices service queue to use. For example, ``etl`` - * - ``is_ssl`` - - ✓ - - ``false`` - - Specifies SSL for this connection. 
For example, ``true`` - * - ``cluster`` - - ✓ - - ``false`` - - Connect via load balancer (use only if exists, and check port). For example, ``true`` - -Events -------------- - -The connector handles event returns with an event emitter - -getConnectionId - The ``getConnectionId`` event returns the executing connection ID. - -getStatementId - The ``getStatementId`` event returns the executing statement ID. - -getTypes - The ``getTypes`` event returns the results columns types. - -Example -^^^^^^^^^^^^^^^^^ - -.. code-block:: javascript - - const myConnection = new Connection(config); - - myConnection.runQuery(query1, function (err, data){ - myConnection.events.on('getConnectionId', function(data){ - console.log('getConnectionId', data); - }); - - myConnection.events.on('getStatementId', function(data){ - console.log('getStatementId', data); - }); - - myConnection.events.on('getTypes', function(data){ - console.log('getTypes', data); - }); - }); - -Input placeholders -------------------------- - -The Node.JS driver can replace parameters in a statement. - -Input placeholders allow values like user input to be passed as parameters into queries, with proper escaping. - -The valid placeholder formats are provided in the table below. - -.. list-table:: - :widths: auto - :header-rows: 1 - - * - Placeholder - - Type - * - ``%i`` - - Identifier (e.g. table name, column name) - * - ``%s`` - - A text string - * - ``%d`` - - A number value - * - ``%b`` - - A boolean value - -See the :ref:`input placeholders example` below. - -Examples -=============== - -Setting configuration flags ------------------------------------ - -SQream DB configuration flags can be set per statement, as a parameter to ``runQuery``. - -For example: - -.. 
code-block:: javascript - - const setFlag = 'SET showfullexceptioninfo = true;'; - - const query_string = 'SELECT 1'; - - const myConnection = new Connection(config); - myConnection.runQuery(query_string, function (err, data){ - console.log(err, data); - }, setFlag); - - -Lazyloading ------------------------------------ - -To process rows without keeping them in memory, you can lazyload the rows with an async: - -.. code-block:: javascript - - - const Connection = require('@sqream/sqreamdb'); - - const config = { - host: 'localhost', - port: 3109, - username: 'rhendricks', - password: 'super_secret_password', - connectDatabase: 'raviga', - cluster: true, - is_ssl: true, - service: 'sqream' - }; - - const sqream = new Connection(config); - - const query = "SELECT * FROM public.a_very_large_table"; - - (async () => { - const cursor = await sqream.executeCursor(query); - let count = 0; - for await (let rows of cursor.fetchIterator(100)) { - // fetch rows in chunks of 100 - count += rows.length; - } - await cursor.close(); - return count; - })().then((total) => { - console.log('Total rows', total); - }, (err) => { - console.error(err); - }); - - -Reusing a connection ------------------------------------ - -It is possible to execeute multiple queries with the same connection (although only one query can be executed at a time). - -.. 
code-block:: javascript - - const Connection = require('@sqream/sqreamdb'); - - const config = { - host: 'localhost', - port: 3109, - username: 'rhendricks', - password: 'super_secret_password', - connectDatabase: 'raviga', - cluster: true, - is_ssl: true, - service: 'sqream' - }; - - const sqream = new Connection(config); - - (async () => { - - const conn = await sqream.connect(); - try { - const res1 = await conn.execute("SELECT 1"); - const res2 = await conn.execute("SELECT 2"); - const res3 = await conn.execute("SELECT 3"); - conn.disconnect(); - return {res1, res2, res3}; - } catch (err) { - conn.disconnect(); - throw err; - } - - })().then((res) => { - console.log('Results', res) - }, (err) => { - console.error(err); - }); - - -.. _input_placeholders_example: - -Using placeholders in queries ------------------------------------ - -Input placeholders allow values like user input to be passed as parameters into queries, with proper escaping. - -.. code-block:: javascript - - const Connection = require('@sqream/sqreamdb'); - - const config = { - host: 'localhost', - port: 3109, - username: 'rhendricks', - password: 'super_secret_password', - connectDatabase: 'raviga', - cluster: true, - is_ssl: true, - service: 'sqream' - }; - - const sqream = new Connection(config); - - const sql = "SELECT %i FROM public.%i WHERE name = %s AND num > %d AND active = %b"; - - sqream.execute(sql, "col1", "table2", "john's", 50, true); - - -The query that will run is ``SELECT col1 FROM public.table2 WHERE name = 'john''s' AND num > 50 AND active = true`` - - -Troubleshooting and recommended configuration -================================================ - - -Preventing ``heap out of memory`` errors --------------------------------------------- - -Some workloads may cause Node.JS to fail with the error: - -.. 
code-block:: none - - FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory - -To prevent this error, modify the heap size configuration by setting the ``--max-old-space-size`` run flag. - -For example, set the space size to 2GB: - -.. code-block:: console - - $ node --max-old-space-size=2048 my-application.js - -BIGINT support ------------------------- - -The Node.JS connector supports fetching ``BIGINT`` values from SQream DB. However, some applications may encounter an error when trying to serialize those values. - -The error that appears is: -.. code-block:: none - - TypeError: Do not know how to serialize a BigInt - -This is because JSON specification do not support BIGINT values, even when supported by Javascript engines. - -To resolve this issue, objects with BIGINT values should be converted to string before serializing, and converted back after deserializing. - -For example: - -.. code-block:: javascript - - const rows = [{test: 1n}] - const json = JSON.stringify(rows, , (key, value) => - typeof value === 'bigint' - ? value.toString() - : value // return everything else unchanged - )); - console.log(json); // [{"test": "1"}] - +.. _nodejs: + +********************************** +Connecting to SQream Using Node.JS +********************************** + +The SQream DB Node.JS driver allows JavaScript applications and tools to connect to SQream DB. +This tutorial shows you how to write a Node application using the Node.JS interface. + +The driver requires Node 10 or newer. + +.. contents:: In this topic: + :local: + +Installing the Node.JS driver +================================== + +Prerequisites +---------------- + +* Node.JS 10 or newer. Follow instructions at `nodejs.org `_ . + +Install with NPM +------------------- + +Installing with npm is the easiest and most reliable method. +If you need to install the driver in an offline system, see the offline method below. + +..
code-block:: console + + $ npm install @sqream/sqreamdb + +Install from an offline package +------------------------------------- + +The Node driver is provided as a tarball for download from the `SQream Drivers page `_ . + +After downloading the tarball, use ``npm`` to install the offline package. + +.. code-block:: console + + $ sudo npm install sqreamdb-4.0.0.tgz + + +Connect to SQream DB with a Node.JS application +==================================================== + +Create a simple test +------------------------------------------ + +Replace the connection parameters with real parameters for a SQream DB installation. + +.. code-block:: javascript + :caption: sqreamdb-test.js + + const Connection = require('@sqream/sqreamdb'); + + const config = { + host: 'localhost', + port: 3109, + username: 'rhendricks', + password: 'super_secret_password', + connectDatabase: 'raviga', + cluster: true, + is_ssl: true, + service: 'sqream' + }; + + const query1 = 'SELECT 1 AS test, 2*6 AS "dozen"'; + + const sqream = new Connection(config); + sqream.execute(query1).then((data) => { + console.log(data); + }, (err) => { + console.error(err); + }); + + +Run the test +---------------- + +A successful run should look like this: + +.. code-block:: console + + $ node sqreamdb-test.js + [ { test: 1, dozen: 12 } ] + + +API reference +==================== + +Connection parameters +--------------------------- + +.. list-table:: + :widths: auto + :header-rows: 1 + + * - Item + - Optional + - Default + - Description + * - ``host`` + - ✗ + - None + - Hostname for SQream DB worker. For example, ``127.0.0.1``, ``sqream.mynetwork.co`` + * - ``port`` + - ✗ + - None + - Port for SQream DB end-point. For example, ``3108`` for the load balancer, ``5000`` for a worker. + * - ``username`` + - ✗ + - None + - Username of a role to use for connection. For example, ``rhendricks`` + * - ``password`` + - ✗ + - None + - Specifies the password of the selected role. 
For example, ``Tr0ub4dor&3`` + * - ``connectDatabase`` + - ✗ + - None + - Database name to connect to. For example, ``master`` + * - ``service`` + - ✓ + - ``sqream`` + - Specifies the service queue to use. For example, ``etl`` + * - ``is_ssl`` + - ✓ + - ``false`` + - Specifies SSL for this connection. For example, ``true`` + * - ``cluster`` + - ✓ + - ``false`` + - Connect via load balancer (use only if exists, and check port). For example, ``true`` + +Events +------------- + +The connector handles event returns with an event emitter. + +getConnectionId + The ``getConnectionId`` event returns the executing connection ID. + +getStatementId + The ``getStatementId`` event returns the executing statement ID. + +getTypes + The ``getTypes`` event returns the result column types. + +Example +^^^^^^^^^^^^^^^^^ + +.. code-block:: javascript + + const myConnection = new Connection(config); + + myConnection.runQuery(query1, function (err, data){ + myConnection.events.on('getConnectionId', function(data){ + console.log('getConnectionId', data); + }); + + myConnection.events.on('getStatementId', function(data){ + console.log('getStatementId', data); + }); + + myConnection.events.on('getTypes', function(data){ + console.log('getTypes', data); + }); + }); + +Input placeholders +------------------------- + +The Node.JS driver can replace parameters in a statement. + +Input placeholders allow values like user input to be passed as parameters into queries, with proper escaping. + +The valid placeholder formats are provided in the table below. + +.. list-table:: + :widths: auto + :header-rows: 1 + + * - Placeholder + - Type + * - ``%i`` + - Identifier (e.g. table name, column name) + * - ``%s`` + - A text string + * - ``%d`` + - A number value + * - ``%b`` + - A boolean value + +See the :ref:`input placeholders example` below.
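The substitution semantics of these placeholders can be sketched with a small standalone mock. The ``formatQuery`` helper below is hypothetical, not part of ``@sqream/sqreamdb`` (the real replacement happens inside the driver's ``execute``); it only illustrates how ``%i``, ``%s``, ``%d``, and ``%b`` are expanded:

```javascript
// Hypothetical helper mimicking the driver's placeholder substitution.
// NOT part of @sqream/sqreamdb; shown only to illustrate the expansion
// rules, including single-quote doubling for %s.
function formatQuery(sql, ...params) {
  let i = 0;
  return sql.replace(/%[isdb]/g, (ph) => {
    const value = params[i++];
    switch (ph) {
      case '%i': return String(value);                                 // identifier, inserted as-is
      case '%s': return "'" + String(value).replace(/'/g, "''") + "'"; // string, quotes escaped
      case '%d': return String(Number(value));                         // number
      case '%b': return value ? 'true' : 'false';                      // boolean
    }
  });
}

const sql = "SELECT %i FROM public.%i WHERE name = %s AND num > %d AND active = %b";
console.log(formatQuery(sql, "col1", "table2", "john's", 50, true));
// SELECT col1 FROM public.table2 WHERE name = 'john''s' AND num > 50 AND active = true
```

The doubled single quote in ``'john''s'`` is standard SQL string escaping; it is what prevents user input from breaking out of the string literal.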
+ +Examples +=============== + +Setting configuration flags +----------------------------------- + +SQream DB configuration flags can be set per statement, as a parameter to ``runQuery``. + +For example: + +.. code-block:: javascript + + const setFlag = 'SET showfullexceptioninfo = true;'; + + const query_string = 'SELECT 1'; + + const myConnection = new Connection(config); + myConnection.runQuery(query_string, function (err, data){ + console.log(err, data); + }, setFlag); + + +Lazyloading +----------------------------------- + +To process rows without keeping them in memory, you can lazyload the rows with an async iterator: + +.. code-block:: javascript + + + const Connection = require('@sqream/sqreamdb'); + + const config = { + host: 'localhost', + port: 3109, + username: 'rhendricks', + password: 'super_secret_password', + connectDatabase: 'raviga', + cluster: true, + is_ssl: true, + service: 'sqream' + }; + + const sqream = new Connection(config); + + const query = "SELECT * FROM public.a_very_large_table"; + + (async () => { + const cursor = await sqream.executeCursor(query); + let count = 0; + for await (let rows of cursor.fetchIterator(100)) { + // fetch rows in chunks of 100 + count += rows.length; + } + await cursor.close(); + return count; + })().then((total) => { + console.log('Total rows', total); + }, (err) => { + console.error(err); + }); + + +Reusing a connection +----------------------------------- + +It is possible to execute multiple queries with the same connection (although only one query can be executed at a time). + +..
code-block:: javascript + + const Connection = require('@sqream/sqreamdb'); + + const config = { + host: 'localhost', + port: 3109, + username: 'rhendricks', + password: 'super_secret_password', + connectDatabase: 'raviga', + cluster: true, + is_ssl: true, + service: 'sqream' + }; + + const sqream = new Connection(config); + + (async () => { + + const conn = await sqream.connect(); + try { + const res1 = await conn.execute("SELECT 1"); + const res2 = await conn.execute("SELECT 2"); + const res3 = await conn.execute("SELECT 3"); + conn.disconnect(); + return {res1, res2, res3}; + } catch (err) { + conn.disconnect(); + throw err; + } + + })().then((res) => { + console.log('Results', res); + }, (err) => { + console.error(err); + }); + + +.. _input_placeholders_example: + +Using placeholders in queries +----------------------------------- + +Input placeholders allow values like user input to be passed as parameters into queries, with proper escaping. + +.. code-block:: javascript + + const Connection = require('@sqream/sqreamdb'); + + const config = { + host: 'localhost', + port: 3109, + username: 'rhendricks', + password: 'super_secret_password', + connectDatabase: 'raviga', + cluster: true, + is_ssl: true, + service: 'sqream' + }; + + const sqream = new Connection(config); + + const sql = "SELECT %i FROM public.%i WHERE name = %s AND num > %d AND active = %b"; + + sqream.execute(sql, "col1", "table2", "john's", 50, true); + + +The query that will run is ``SELECT col1 FROM public.table2 WHERE name = 'john''s' AND num > 50 AND active = true``. + + +Troubleshooting and recommended configuration +================================================ + + +Preventing ``heap out of memory`` errors +-------------------------------------------- + +Some workloads may cause Node.JS to fail with the error: + +..
code-block:: none + + FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory + +To prevent this error, modify the heap size configuration by setting the ``--max-old-space-size`` run flag. + +For example, set the space size to 2GB: + +.. code-block:: console + + $ node --max-old-space-size=2048 my-application.js + +BIGINT support +------------------------ + +The Node.JS connector supports fetching ``BIGINT`` values from SQream DB. However, some applications may encounter an error when trying to serialize those values. + +The error that appears is: + +.. code-block:: none + + TypeError: Do not know how to serialize a BigInt + +This is because the JSON specification does not support BIGINT values, even when they are supported by Javascript engines. + +To resolve this issue, objects with BIGINT values should be converted to string before serializing, and converted back after deserializing. + +For example: + +.. code-block:: javascript + + const rows = [{test: 1n}]; + const json = JSON.stringify(rows, (key, value) => + typeof value === 'bigint' + ? value.toString() + : value // return everything else unchanged + ); + console.log(json); // [{"test": "1"}] + diff --git a/third_party_tools/client_drivers/nodejs/sample.js b/connecting_to_sqream/client_drivers/nodejs/sample.js similarity index 95% rename from third_party_tools/client_drivers/nodejs/sample.js rename to connecting_to_sqream/client_drivers/nodejs/sample.js index a8ec3db66..cf3e19099 100644 --- a/third_party_tools/client_drivers/nodejs/sample.js +++ b/connecting_to_sqream/client_drivers/nodejs/sample.js @@ -1,21 +1,21 @@ -const Connection = require('@sqream/sqreamdb'); - -const config = { - host: 'localhost', - port: 3109, - username: 'rhendricks', - password: 'super_secret_password', - connectDatabase: 'raviga', - cluster: true, - is_ssl: true, - service: 'sqream' - }; - -const query1 = 'SELECT 1 AS test, 2*6 AS "dozen"'; - -const sqream = new Connection(config); -sqream.execute(query1).then((data) => { - console.log(data); -}, (err) => { - console.error(err); +const Connection = require('@sqream/sqreamdb'); + +const config = { + host: 'localhost', + port: 3109, + username: 'rhendricks', + password: 'super_secret_password', + connectDatabase: 'raviga', + cluster: true, + is_ssl: true, + service: 'sqream' + }; + +const query1 = 'SELECT 1 AS test, 2*6 AS "dozen"'; + +const sqream = new Connection(config); +sqream.execute(query1).then((data) => { + console.log(data); +}, (err) => { + console.error(err); }); \ No newline at end of file diff --git a/third_party_tools/client_drivers/odbc/index.rst b/connecting_to_sqream/client_drivers/odbc/index.rst similarity index 96% rename from third_party_tools/client_drivers/odbc/index.rst rename to connecting_to_sqream/client_drivers/odbc/index.rst index 7623b4e99..691e7e999 100644 --- a/third_party_tools/client_drivers/odbc/index.rst +++ b/connecting_to_sqream/client_drivers/odbc/index.rst @@ -1,58 +1,58 @@ -.. _odbc: - -************************* -ODBC -************************* - -..
toctree:: - :maxdepth: 1 - :titlesonly: - :hidden: - - install_configure_odbc_windows - install_configure_odbc_linux - -SQream has an ODBC driver to connect to SQream DB. This tutorial shows how to install the ODBC driver for Linux or Windows for use with applications like Tableau, PHP, and others that use ODBC. - -.. list-table:: - :widths: auto - :header-rows: 1 - - * - Platform - - Versions supported - - * - Windows - - * Windows 7 (64 bit) - * Windows 8 (64 bit) - * Windows 10 (64 bit) - * Windows Server 2008 R2 (64 bit) - * Windows Server 2012 - * Windows Server 2016 - * Windows Server 2019 - - * - Linux - - * Red Hat Enterprise Linux (RHEL) 7 - * CentOS 7 - * Ubuntu 16.04 - * Ubuntu 18.04 - -Other distributions may also work, but are not officially supported by SQream. - -.. contents:: In this topic: - :local: - -Downloading the ODBC driver -================================== - -The SQream DB ODBC driver is distributed by your SQream account manager. Before contacting your account manager, verify which platform the ODBC driver will be used on. Go to `SQream Support `_ or contact your SQream account manager to get the driver. - -The driver is provided as an executable installer for Windows, or a compressed tarball for Linux platforms. -After downloading the driver, follow the relevant instructions to install and configure the driver for your platform: - -Install and configure the ODBC driver -======================================= - -Continue based on your platform: - -* :ref:`install_odbc_windows` +.. _odbc: + +******************************* +Connecting to SQream Using ODBC +******************************* + +.. toctree:: + :maxdepth: 1 + :titlesonly: + :hidden: + + install_configure_odbc_windows + install_configure_odbc_linux + +SQream has an ODBC driver to connect to SQream DB. This tutorial shows how to install the ODBC driver for Linux or Windows for use with applications like Tableau, PHP, and others that use ODBC. + +..
list-table:: + :widths: auto + :header-rows: 1 + + * - Platform + - Versions supported + + * - Windows + - * Windows 7 (64 bit) + * Windows 8 (64 bit) + * Windows 10 (64 bit) + * Windows Server 2008 R2 (64 bit) + * Windows Server 2012 + * Windows Server 2016 + * Windows Server 2019 + + * - Linux + - * Red Hat Enterprise Linux (RHEL) 7 + * CentOS 7 + * Ubuntu 16.04 + * Ubuntu 18.04 + +Other distributions may also work, but are not officially supported by SQream. + +.. contents:: In this topic: + :local: + +Downloading the ODBC driver +================================== + +The SQream DB ODBC driver is distributed by your SQream account manager. Before contacting your account manager, verify which platform the ODBC driver will be used on. Go to `SQream Support `_ or contact your SQream account manager to get the driver. + +The driver is provided as an executable installer for Windows, or a compressed tarball for Linux platforms. +After downloading the driver, follow the relevant instructions to install and configure the driver for your platform: + +Install and configure the ODBC driver +======================================= + +Continue based on your platform: + +* :ref:`install_odbc_windows` * :ref:`install_odbc_linux` \ No newline at end of file diff --git a/third_party_tools/client_drivers/odbc/install_configure_odbc_linux.rst b/connecting_to_sqream/client_drivers/odbc/install_configure_odbc_linux.rst similarity index 96% rename from third_party_tools/client_drivers/odbc/install_configure_odbc_linux.rst rename to connecting_to_sqream/client_drivers/odbc/install_configure_odbc_linux.rst index 737768756..61919f161 100644 --- a/third_party_tools/client_drivers/odbc/install_configure_odbc_linux.rst +++ b/connecting_to_sqream/client_drivers/odbc/install_configure_odbc_linux.rst @@ -1,253 +1,253 @@ -.. _install_odbc_linux: - -**************************************** -Install and configure ODBC on Linux -**************************************** - -.. 
toctree:: - :maxdepth: 1 - :titlesonly: - :hidden: - - -The ODBC driver for Windows is provided as a shared library. - -This tutorial shows how to install and configure ODBC on Linux. - -.. contents:: In this topic: - :local: - :depth: 2 - -Prerequisites -============== - -.. _unixODBC: - -unixODBC ------------- - -The ODBC driver requires a driver manager to manage the DSNs. SQream DB's driver is built for unixODBC. - -Verify unixODBC is installed by running: - -.. code-block:: console - - $ odbcinst -j - unixODBC 2.3.4 - DRIVERS............: /etc/odbcinst.ini - SYSTEM DATA SOURCES: /etc/odbc.ini - FILE DATA SOURCES..: /etc/ODBCDataSources - USER DATA SOURCES..: /home/rhendricks/.odbc.ini - SQLULEN Size.......: 8 - SQLLEN Size........: 8 - SQLSETPOSIROW Size.: 8 - -Take note of the location of ``.odbc.ini`` and ``.odbcinst.ini``. In this case, ``/etc``. If ``odbcinst`` is not installed, follow the instructions for your platform below: - -.. contents:: Install unixODBC on: - :local: - :depth: 1 - -Install unixODBC on RHEL 7 / CentOS 7 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code-block:: console - - $ yum install -y unixODBC unixODBC-devel - -Install unixODBC on Ubuntu -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code-block:: console - - $ sudo apt-get install unixodbc unixodbc-dev - - -Install the ODBC driver with a script -======================================= - -Use this method if you have never used ODBC on your machine before. If you have existing DSNs, see the manual install process below. - -#. Unpack the tarball - Copy the downloaded file to any directory, and untar it to a new directory: - - .. code-block:: console - - $ mkdir -p sqream_odbc64 - $ tar xf sqream_2019.2.1_odbc_3.0.0_x86_64_linux.tar.gz -C sqream_odbc64 - -#. Run the first-time installer. The installer will create an editable DSN. - - .. code-block:: console - - $ cd sqream_odbc64 - ./odbc_install.sh --install - - -#. Edit the DSN created by editing ``/etc/.odbc.ini``. 
See the parameter explanation in the section :ref:`ODBC DSN Parameters `. - - -Install the ODBC driver manually -======================================= - -Use this method when you have existing ODBC DSNs on your machine. - -#. Unpack the tarball - Copy the file you downloaded to the directory where you want to install it, and untar it: - - .. code-block:: console - - $ tar xf sqream_2019.2.1_odbc_3.0.0_x86_64_linux.tar.gz -C sqream_odbc64 - - Take note of the directory where the driver was unpacked. For example, ``/home/rhendricks/sqream_odbc64`` - -#. Locate the ``.odbc.ini`` and ``.odbcinst.ini`` files, using ``odbcinst -j``. - - #. In ``.odbcinst.ini``, add the following lines to register the driver (change the highlighted paths to match your specific driver): - - .. code-block:: ini - :emphasize-lines: 6,7 - - [ODBC Drivers] - SqreamODBCDriver=Installed - - [SqreamODBCDriver] - Description=Driver DSII SqreamODBC 64bit - Driver=/home/rhendricks/sqream_odbc64/sqream_odbc64.so - Setup=/home/rhendricks/sqream_odbc64/sqream_odbc64.so - APILevel=1 - ConnectFunctions=YYY - DriverODBCVer=03.80 - SQLLevel=1 - IconvEncoding=UCS-4LE - - #. In ``.odbc.ini``, add the following lines to configure the DSN (change the highlighted parameters to match your installation): - - .. code-block:: ini - :emphasize-lines: 6,7,8,9,10,11,12,13,14 - - [ODBC Data Sources] - MyTest=SqreamODBCDriver - - [MyTest] - Description=64-bit Sqream ODBC - Driver=/home/rhendricks/sqream_odbc64/sqream_odbc64.so - Server="127.0.0.1" - Port="5000" - Database="raviga" - Service="" - User="rhendricks" - Password="Tr0ub4dor&3" - Cluster=false - Ssl=false - - Parameters are in the form of ``parameter = value``. For details about the parameters that can be set for each DSN, see the section :ref:`ODBC DSN Parameters `. - - - #. Create a file called ``.sqream_odbc.ini`` for managing the driver settings and logging. 
- This file should be created alongside the other files, and add the following lines (change the highlighted parameters to match your installation): - - .. code-block:: ini - :emphasize-lines: 5,7 - - # Note that this default DriverManagerEncoding of UTF-32 is for iODBC. unixODBC uses UTF-16 by default. - # If unixODBC was compiled with -DSQL_WCHART_CONVERT, then UTF-32 is the correct value. - # Execute 'odbc_config --cflags' to determine if you need UTF-32 or UTF-16 on unixODBC - [Driver] - DriverManagerEncoding=UTF-16 - DriverLocale=en-US - ErrorMessagesPath=/home/rhendricks/sqream_odbc64/ErrorMessages - LogLevel=0 - LogNamespace= - LogPath=/tmp/ - ODBCInstLib=libodbcinst.so - - -Install the driver dependencies -================================== - -Add the ODBC driver path to ``LD_LIBRARY_PATH``: - -.. code-block:: console - - $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/rhendricks/sqream_odbc64/lib - -You can also add this previous command line to your ``~/.bashrc`` file in order to keep this installation working between reboots without re-entering the command manually - -Testing the connection -======================== - -Test the driver using ``isql``. - -If the DSN created is called ``MyTest`` as the example, run isql in this format: - -.. code-block:: console - - $ isql MyTest - - -.. _dsn_params: - -ODBC DSN Parameters -======================= - -.. list-table:: - :widths: auto - :header-rows: 1 - - * - Item - - Default - - Description - * - Data Source Name - - None - - An easily recognizable name that you'll use to reference this DSN. - * - Description - - None - - A description of this DSN for your convenience. This field can be left blank - * - User - - None - - Username of a role to use for connection. For example, ``User="rhendricks"`` - * - Password - - None - - Specifies the password of the selected role. For example, ``User="Tr0ub4dor&3"`` - * - Database - - None - - Specifies the database name to connect to. 
For example, ``Database="master"`` - * - Service - - ``sqream`` - - Specifices :ref:`service queue` to use. For example, ``Service="etl"``. Leave blank (``Service=""``) for default service ``sqream``. - * - Server - - None - - Hostname of the SQream DB worker. For example, ``Server="127.0.0.1"`` or ``Server="sqream.mynetwork.co"`` - * - Port - - None - - TCP port of the SQream DB worker. For example, ``Port="5000"`` or ``Port="3108"`` for the load balancer - * - Cluster - - ``false`` - - Connect via load balancer (use only if exists, and check port). For example, ``Cluster=true`` - * - Ssl - - ``false`` - - Specifies SSL for this connection. For example, ``Ssl=true`` - * - DriverManagerEncoding - - ``UTF-16`` - - Depending on how unixODBC is installed, you may need to change this to ``UTF-32``. - * - ErrorMessagesPath - - None - - Location where the driver was installed. For example, ``ErrorMessagePath=/home/rhendricks/sqream_odbc64/ErrorMessages``. - * - LogLevel - - 0 - - Set to 0-6 for logging. Use this setting when instructed to by SQream Support. For example, ``LogLevel=1`` - - .. hlist:: - :columns: 3 - - * 0 = Disable tracing - * 1 = Fatal only error tracing - * 2 = Error tracing - * 3 = Warning tracing - * 4 = Info tracing - * 5 = Debug tracing - * 6 = Detailed tracing - - - +.. _install_odbc_linux: + +**************************************** +Install and configure ODBC on Linux +**************************************** + +.. toctree:: + :maxdepth: 1 + :titlesonly: + :hidden: + + +The ODBC driver for Linux is provided as a shared library. + +This tutorial shows how to install and configure ODBC on Linux. + +.. contents:: In this topic: + :local: + :depth: 2 + +Prerequisites +============== + +.. _unixODBC: + +unixODBC +------------ + +The ODBC driver requires a driver manager to manage the DSNs. SQream DB's driver is built for unixODBC. + +Verify unixODBC is installed by running: + +.. 
code-block:: console + + $ odbcinst -j + unixODBC 2.3.4 + DRIVERS............: /etc/odbcinst.ini + SYSTEM DATA SOURCES: /etc/odbc.ini + FILE DATA SOURCES..: /etc/ODBCDataSources + USER DATA SOURCES..: /home/rhendricks/.odbc.ini + SQLULEN Size.......: 8 + SQLLEN Size........: 8 + SQLSETPOSIROW Size.: 8 + +Take note of the location of ``.odbc.ini`` and ``.odbcinst.ini``. In this case, ``/etc``. If ``odbcinst`` is not installed, follow the instructions for your platform below: + +.. contents:: Install unixODBC on: + :local: + :depth: 1 + +Install unixODBC on RHEL 7 / CentOS 7 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. code-block:: console + + $ yum install -y unixODBC unixODBC-devel + +Install unixODBC on Ubuntu +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. code-block:: console + + $ sudo apt-get install unixodbc unixodbc-dev + + +Install the ODBC driver with a script +======================================= + +Use this method if you have never used ODBC on your machine before. If you have existing DSNs, see the manual install process below. + +#. Unpack the tarball + Copy the downloaded file to any directory, and untar it to a new directory: + + .. code-block:: console + + $ mkdir -p sqream_odbc64 + $ tar xf sqream_2019.2.1_odbc_3.0.0_x86_64_linux.tar.gz -C sqream_odbc64 + +#. Run the first-time installer. The installer will create an editable DSN. + + .. code-block:: console + + $ cd sqream_odbc64 + $ ./odbc_install.sh --install + + +#. Edit the created DSN by editing ``/etc/.odbc.ini``. See the parameter explanation in the section :ref:`ODBC DSN Parameters `. + + +Install the ODBC driver manually +======================================= + +Use this method when you have existing ODBC DSNs on your machine. + +#. Unpack the tarball + Copy the file you downloaded to the directory where you want to install it, and untar it: + + .. 
code-block:: console + + $ tar xf sqream_2019.2.1_odbc_3.0.0_x86_64_linux.tar.gz -C sqream_odbc64 + + Take note of the directory where the driver was unpacked. For example, ``/home/rhendricks/sqream_odbc64`` + +#. Locate the ``.odbc.ini`` and ``.odbcinst.ini`` files, using ``odbcinst -j``. + + #. In ``.odbcinst.ini``, add the following lines to register the driver (change the highlighted paths to match your specific driver): + + .. code-block:: ini + :emphasize-lines: 6,7 + + [ODBC Drivers] + SqreamODBCDriver=Installed + + [SqreamODBCDriver] + Description=Driver DSII SqreamODBC 64bit + Driver=/home/rhendricks/sqream_odbc64/sqream_odbc64.so + Setup=/home/rhendricks/sqream_odbc64/sqream_odbc64.so + APILevel=1 + ConnectFunctions=YYY + DriverODBCVer=03.80 + SQLLevel=1 + IconvEncoding=UCS-4LE + + #. In ``.odbc.ini``, add the following lines to configure the DSN (change the highlighted parameters to match your installation): + + .. code-block:: ini + :emphasize-lines: 6,7,8,9,10,11,12,13,14 + + [ODBC Data Sources] + MyTest=SqreamODBCDriver + + [MyTest] + Description=64-bit Sqream ODBC + Driver=/home/rhendricks/sqream_odbc64/sqream_odbc64.so + Server="127.0.0.1" + Port="5000" + Database="raviga" + Service="" + User="rhendricks" + Password="Tr0ub4dor&3" + Cluster=false + Ssl=false + + Parameters are in the form of ``parameter = value``. For details about the parameters that can be set for each DSN, see the section :ref:`ODBC DSN Parameters `. + + + #. Create a file called ``.sqream_odbc.ini`` for managing the driver settings and logging. 
+ # Execute 'odbc_config --cflags' to determine if you need UTF-32 or UTF-16 on unixODBC + [Driver] + DriverManagerEncoding=UTF-16 + DriverLocale=en-US + ErrorMessagesPath=/home/rhendricks/sqream_odbc64/ErrorMessages + LogLevel=0 + LogNamespace= + LogPath=/tmp/ + ODBCInstLib=libodbcinst.so + + +Install the driver dependencies +================================== + +Add the ODBC driver path to ``LD_LIBRARY_PATH``: + +.. code-block:: console + + $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/rhendricks/sqream_odbc64/lib + +You can also add this previous command line to your ``~/.bashrc`` file in order to keep this installation working between reboots without re-entering the command manually + +Testing the connection +======================== + +Test the driver using ``isql``. + +If the DSN created is called ``MyTest`` as the example, run isql in this format: + +.. code-block:: console + + $ isql MyTest + + +.. _dsn_params: + +ODBC DSN Parameters +======================= + +.. list-table:: + :widths: auto + :header-rows: 1 + + * - Item + - Default + - Description + * - Data Source Name + - None + - An easily recognizable name that you'll use to reference this DSN. + * - Description + - None + - A description of this DSN for your convenience. This field can be left blank + * - User + - None + - Username of a role to use for connection. For example, ``User="rhendricks"`` + * - Password + - None + - Specifies the password of the selected role. For example, ``User="Tr0ub4dor&3"`` + * - Database + - None + - Specifies the database name to connect to. For example, ``Database="master"`` + * - Service + - ``sqream`` + - Specifices :ref:`service queue` to use. For example, ``Service="etl"``. Leave blank (``Service=""``) for default service ``sqream``. + * - Server + - None + - Hostname of the SQream DB worker. For example, ``Server="127.0.0.1"`` or ``Server="sqream.mynetwork.co"`` + * - Port + - None + - TCP port of the SQream DB worker. 
For example, ``Port="5000"`` or ``Port="3108"`` for the load balancer + * - Cluster + - ``false`` + - Connect via load balancer (use only if a load balancer exists, and check the port). For example, ``Cluster=true`` + * - Ssl + - ``false`` + - Specifies SSL for this connection. For example, ``Ssl=true`` + * - DriverManagerEncoding + - ``UTF-16`` + - Depending on how unixODBC is installed, you may need to change this to ``UTF-32``. + * - ErrorMessagesPath + - None + - Location where the driver was installed. For example, ``ErrorMessagesPath=/home/rhendricks/sqream_odbc64/ErrorMessages``. + * - LogLevel + - 0 + - Set to 0-6 for logging. Use this setting when instructed to by SQream Support. For example, ``LogLevel=1`` + + .. hlist:: + :columns: 3 + + * 0 = Disable tracing + * 1 = Fatal only error tracing + * 2 = Error tracing + * 3 = Warning tracing + * 4 = Info tracing + * 5 = Debug tracing + * 6 = Detailed tracing + + + diff --git a/third_party_tools/client_drivers/odbc/install_configure_odbc_windows.rst b/connecting_to_sqream/client_drivers/odbc/install_configure_odbc_windows.rst similarity index 97% rename from third_party_tools/client_drivers/odbc/install_configure_odbc_windows.rst rename to connecting_to_sqream/client_drivers/odbc/install_configure_odbc_windows.rst index 7749b44ab..4972e3057 100644 --- a/third_party_tools/client_drivers/odbc/install_configure_odbc_windows.rst +++ b/connecting_to_sqream/client_drivers/odbc/install_configure_odbc_windows.rst @@ -1,134 +1,134 @@ -.. _install_odbc_windows: - -**************************************** -Install and Configure ODBC on Windows -**************************************** - -The ODBC driver for Windows is provided as a self-contained installer. - -This tutorial shows you how to install and configure ODBC on Windows. - -.. contents:: In this topic: - :local: - :depth: 2 - -Installing the ODBC Driver -================================== - -Prerequisites ----------------- - -.. 
_vcredist: - -Visual Studio 2015 Redistributables -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -To install the ODBC driver you must first install Microsoft's **Visual C++ Redistributable for Visual Studio 2015**. To install Visual C++ Redistributable for Visual Studio 2015, see the `Install Instructions `_. - -Administrator Privileges -^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The SQream DB ODBC driver requires administrator privileges on your computer to add the DSNs (data source names). - - -1. Run the Windows installer ------------------------------- - -Install the driver by following the on-screen instructions in the easy-to-follow installer. - -.. image:: /_static/images/odbc_windows_installer_screen1.png - -.. note:: The installer will install the driver in ``C:\Program Files\SQream Technologies\ODBC Driver`` by default. This path is changable during the installation. - -2. Selecting Components -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The installer includes additional components, like JDBC and Tableau customizations. - -.. image:: /_static/images/odbc_windows_installer_screen2.png - -You can deselect items you don't want to install, but the items named **ODBC Driver DLL** and **ODBC Driver Registry Keys** must remain selected for a complete installation of the ODBC driver. - -Once the installer finishes, you will be ready to configure the DSN for connection. - -.. _create_windows_odbc_dsn: - -3. Configuring the ODBC Driver DSN -====================================== - -ODBC driver configurations are done via DSNs. Each DSN represents one SQream DB database. - -#. Open up the Windows menu by clicking the Windows button on your keyboard (:kbd:`⊞ Win`) or pressing the Windows button with your mouse. - -#. Type **ODBC** and select **ODBC Data Sources (64-bit)**. Click the item to open up the setup window. - - .. image:: /_static/images/odbc_windows_startmenu.png - -#. 
The installer has created a sample User DSN named **SQreamDB** - - You can modify this DSN, or create a new one (:menuselection:`Add --> SQream ODBC Driver --> Next`) - - .. image:: /_static/images/odbc_windows_dsns.png - -#. Enter your connection parameters. See the reference below for a description of the parameters. - - .. image:: /_static/images/odbc_windows_dsn_config.png - -#. When completed, save the DSN by selecting :menuselection:`OK` - -.. tip:: Test the connection by clicking :menuselection:`Test` before saving. A successful test looks like this: - - .. image:: /_static/images/odbc_windows_dsn_test.png - -#. You can now use this DSN in ODBC applications like :ref:`Tableau `. - - - -Connection Parameters ------------------------ - -.. list-table:: - :widths: auto - :header-rows: 1 - - * - Item - - Description - * - Data Source Name - - An easily recognizable name that you'll use to reference this DSN. Once you set this, it can not be changed. - * - Description - - A description of this DSN for your convenience. You can leave this blank. - * - User - - Username of a role to use for connection. For example, ``rhendricks`` - * - Password - - Specifies the password of the selected role. For example, ``Tr0ub4dor&3`` - * - Database - - Specifies the database name to connect to. For example, ``master`` - * - Service - - Specifices :ref:`service queue` to use. For example, ``etl``. Leave blank for default service ``sqream``. - * - Server - - Hostname of the SQream DB worker. For example, ``127.0.0.1`` or ``sqream.mynetwork.co`` - * - Port - - TCP port of the SQream DB worker. For example, ``5000`` or ``3108`` - * - User server picker - - Connect via load balancer (use only if exists, and check port) - * - SSL - - Specifies SSL for this connection - * - Logging options - - Use this screen to alter logging options when tracing the ODBC connection for possible connection issues. 
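Once saved, the DSN can be referenced from any ODBC-aware application. As a minimal sketch, a DSN-based ODBC connection string can be assembled from the parameters in the table above; the helper function is illustrative only (not part of the SQream driver), ``SQreamDB`` is the sample DSN created by the installer, and the credentials are the placeholder values used throughout this page:

```python
# Assemble a DSN-based ODBC connection string from the DSN parameters above.
# The helper is a hypothetical convenience, not part of the SQream driver.
def build_odbc_connection_string(dsn, user, password):
    """Return a connection string referencing an existing ODBC DSN."""
    return ";".join([f"DSN={dsn}", f"UID={user}", f"PWD={password}"])

conn_str = build_odbc_connection_string("SQreamDB", "rhendricks", "Tr0ub4dor&3")
print(conn_str)
# DSN=SQreamDB;UID=rhendricks;PWD=Tr0ub4dor&3

# With the third-party pyodbc package installed, the string could then be
# passed to pyodbc.connect(conn_str) to open a connection through the DSN.
```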
- - -Troubleshooting -================== - -Solving "Code 126" ODBC errors ---------------------------------- - -After installing the ODBC driver, you may experience the following error: - -.. code-block:: none - - The setup routines for the SQreamDriver64 ODBC driver could not be loaded due to system error - code 126: The specified module could not be found. - (c:\Program Files\SQream Technologies\ODBC Driver\sqreamOdbc64.dll) - -This is an issue with the Visual Studio Redistributable packages. Verify you've correctly installed them, as described in the :ref:`Visual Studio 2015 Redistributables ` section above. +.. _install_odbc_windows: + +**************************************** +Install and Configure ODBC on Windows +**************************************** + +The ODBC driver for Windows is provided as a self-contained installer. + +This tutorial shows you how to install and configure ODBC on Windows. + +.. contents:: In this topic: + :local: + :depth: 2 + +Installing the ODBC Driver +================================== + +Prerequisites +---------------- + +.. _vcredist: + +Visual Studio 2015 Redistributables +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +To install the ODBC driver you must first install Microsoft's **Visual C++ Redistributable for Visual Studio 2015**. To install Visual C++ Redistributable for Visual Studio 2015, see the `Install Instructions `_. + +Administrator Privileges +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The SQream DB ODBC driver requires administrator privileges on your computer to add the DSNs (data source names). + + +1. Run the Windows installer +------------------------------ + +Install the driver by following the on-screen instructions in the easy-to-follow installer. + +.. image:: /_static/images/odbc_windows_installer_screen1.png + +.. note:: The installer will install the driver in ``C:\Program Files\SQream Technologies\ODBC Driver`` by default. This path is changeable during the installation. + +2. 
Selecting Components +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The installer includes additional components, like JDBC and Tableau customizations. + +.. image:: /_static/images/odbc_windows_installer_screen2.png + +You can deselect items you don't want to install, but the items named **ODBC Driver DLL** and **ODBC Driver Registry Keys** must remain selected for a complete installation of the ODBC driver. + +Once the installer finishes, you will be ready to configure the DSN for connection. + +.. _create_windows_odbc_dsn: + +3. Configuring the ODBC Driver DSN +====================================== + +ODBC driver configurations are done via DSNs. Each DSN represents one SQream DB database. + +#. Open up the Windows menu by clicking the Windows button on your keyboard (:kbd:`⊞ Win`) or pressing the Windows button with your mouse. + +#. Type **ODBC** and select **ODBC Data Sources (64-bit)**. Click the item to open up the setup window. + + .. image:: /_static/images/odbc_windows_startmenu.png + +#. The installer has created a sample User DSN named **SQreamDB** + + You can modify this DSN, or create a new one (:menuselection:`Add --> SQream ODBC Driver --> Next`) + + .. image:: /_static/images/odbc_windows_dsns.png + +#. Enter your connection parameters. See the reference below for a description of the parameters. + + .. image:: /_static/images/odbc_windows_dsn_config.png + +#. When completed, save the DSN by selecting :menuselection:`OK` + +.. tip:: Test the connection by clicking :menuselection:`Test` before saving. A successful test looks like this: + + .. image:: /_static/images/odbc_windows_dsn_test.png + +#. You can now use this DSN in ODBC applications like :ref:`Tableau `. + + + +Connection Parameters +----------------------- + +.. list-table:: + :widths: auto + :header-rows: 1 + + * - Item + - Description + * - Data Source Name + - An easily recognizable name that you'll use to reference this DSN. Once you set this, it can not be changed. 
+ * - Description + - A description of this DSN for your convenience. You can leave this blank. + * - User + - Username of a role to use for connection. For example, ``rhendricks`` + * - Password + - Specifies the password of the selected role. For example, ``Tr0ub4dor&3`` + * - Database + - Specifies the database name to connect to. For example, ``master`` + * - Service + - Specifies the :ref:`service queue` to use. For example, ``etl``. Leave blank for the default service ``sqream``. + * - Server + - Hostname of the SQream DB worker. For example, ``127.0.0.1`` or ``sqream.mynetwork.co`` + * - Port + - TCP port of the SQream DB worker. For example, ``5000`` or ``3108`` + * - User server picker + - Connect via load balancer (use only if a load balancer exists, and check the port) + * - SSL + - Specifies SSL for this connection + * - Logging options + - Use this screen to alter logging options when tracing the ODBC connection for possible connection issues. + + +Troubleshooting +================== + +Solving "Code 126" ODBC errors +--------------------------------- + +After installing the ODBC driver, you may experience the following error: + +.. code-block:: none + + The setup routines for the SQreamDriver64 ODBC driver could not be loaded due to system error + code 126: The specified module could not be found. + (c:\Program Files\SQream Technologies\ODBC Driver\sqreamOdbc64.dll) + +This is an issue with the Visual Studio Redistributable packages. Verify you've correctly installed them, as described in the :ref:`Visual Studio 2015 Redistributables ` section above. diff --git a/connecting_to_sqream/client_drivers/python/index.rst b/connecting_to_sqream/client_drivers/python/index.rst new file mode 100644 index 000000000..f29679bc0 --- /dev/null +++ b/connecting_to_sqream/client_drivers/python/index.rst @@ -0,0 +1,477 @@ +.. 
_pysqream: + +******************************************** +Connecting to SQream Using Python (pysqream) +******************************************** + +The **Python** connector page describes the following: + +.. contents:: + :local: + :depth: 1 + +Overview +============= +The SQream Python connector is a set of packages that allows Python programs to connect to SQream DB. + +* ``pysqream`` is a pure Python connector. It can be installed with ``pip`` on any operating system, including Linux, Windows, and macOS. + +* ``pysqream-sqlalchemy`` is a SQLAlchemy dialect for ``pysqream``. + +The connector supports Python 3.6.5 and newer. The base ``pysqream`` package conforms to Python DB-API specifications `PEP-249 `_. + +Installing the Python Connector +================================== + +Prerequisites +---------------- +Installing the Python connector includes the following prerequisites: + +.. contents:: + :local: + :depth: 1 + +Python +^^^^^^^^^^^^ + +The connector requires Python 3.6.5 or newer. To verify your version of Python: + +.. code-block:: console + + $ python --version + Python 3.7.3 + + +PIP +^^^^^^^^^^^^ +The Python connector is installed via ``pip``, the Python package manager and installer. + +We recommend upgrading to the latest version of ``pip`` before installing. To verify that you are on the latest version, run the following command: + +.. code-block:: console + + $ python3 -m pip install --upgrade pip + Collecting pip + Downloading https://files.pythonhosted.org/packages/00/b6/9cfa56b4081ad13874b0c6f96af8ce16cfbc1cb06bedf8e9164ce5551ec1/pip-19.3.1-py2.py3-none-any.whl (1.4MB) + |████████████████████████████████| 1.4MB 1.6MB/s + Installing collected packages: pip + Found existing installation: pip 19.1.1 + Uninstalling pip-19.1.1: + Successfully uninstalled pip-19.1.1 + Successfully installed pip-19.3.1 + +.. 
note:: + * On macOS, you may want to use virtualenv to install Python and the connector, to ensure compatibility with the built-in Python environment + * If you encounter an error including ``SSLError`` or ``WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.`` - please be sure to reinstall Python with SSL enabled, or use virtualenv or Anaconda. + +OpenSSL for Linux +^^^^^^^^^^^^^^^^^^^^^^^^^^ +Some distributions of Python do not include OpenSSL. The Python connector relies on OpenSSL for secure connections to SQream DB. + +* To install OpenSSL on RHEL/CentOS + + .. code-block:: console + + $ sudo yum install -y libffi-devel openssl-devel + +* To install OpenSSL on Ubuntu + + .. code-block:: console + + $ sudo apt-get install libssl-dev libffi-dev -y + +Installing via PIP +------------------- +The Python connector is available via `PyPi `_. + +Install the connector with ``pip``: + +.. code-block:: console + + $ pip3 install pysqream pysqream-sqlalchemy + +``pip3`` will automatically install all necessary libraries and modules. + +Upgrading an Existing Installation +-------------------------------------- +The Python drivers are updated periodically. To upgrade an existing pysqream installation, use pip's ``-U`` flag: + +.. code-block:: console + + $ pip3 install pysqream pysqream-sqlalchemy -U + +Validating Your Installation +----------------------------- +This section describes how to validate your installation. + +**To validate your installation**: + +1. Create a file called ``sample.py``, containing the following: + +.. literalinclude:: sample.py + :language: python + :caption: pysqream Validation Script + :linenos: + +2. Verify that the parameters in the connection have been replaced with your respective SQream installation parameters. + + :: + +3. Run the sample file to verify that you can connect to SQream: + + .. 
code-block:: console + + $ python sample.py + Version: v2020.1 + + If the validation was successful, you can build an application using the SQream Python connector. If you receive a connection error, verify the following: + + * You have access to a running SQream database. + + :: + + * The connection parameters are correct. + +SQLAlchemy Examples +======================== +SQLAlchemy is an **Object-Relational Mapper (ORM)** for Python. When you install the SQream dialect (``pysqream-sqlalchemy``) you can use frameworks such as Pandas, TensorFlow, and Alembic to query SQream directly. + +This section includes the following examples: + +.. contents:: + :local: + :depth: 1 + +Standard Connection Example +--------------------------------- + + +.. code-block:: python + + import sqlalchemy as sa + from sqlalchemy.engine.url import URL + + engine_url = URL('sqream' + , username='rhendricks' + , password='secret_password' + , host='localhost' + , port=5000 + , database='raviga' + , query={'use_ssl': False}) + + engine = sa.create_engine(engine_url) + + res = engine.execute('create or replace table test (ints int, ints2 int)') + res = engine.execute('insert into test (ints,ints2) values (5,1), (6,2)') + res = engine.execute('select * from test') + for item in res: + print(item) + +Multi Cluster Connection Example +---------------------------------- + +The following example is for using a ServerPicker: + +.. 
code-block:: python + + import sqlalchemy as sa + from sqlalchemy.engine.url import URL + + + engine_url = URL('sqream' + , username='dor' + , password='DorBerg123$' + , host='localhost' + , port=3108 + , database='pushlive') + + engine = sa.create_engine(engine_url,connect_args={"clustered": True}) + + res = engine.execute("create or replace table test100 (dor int);") + res = engine.execute('insert into test100 values (5), (6);') + res = engine.execute('select * from test100') + for item in res: + print(item) + + +Pulling a Table into Pandas +--------------------------- +The following example shows how to pull a table into Pandas. This example uses the URL method to create the connection string: + +.. code-block:: python + + import sqlalchemy as sa + import pandas as pd + from sqlalchemy.engine.url import URL + + + engine_url = URL('sqream' + , username='rhendricks' + , password='secret_password' + , host='localhost' + , port=5000 + , database='raviga' + , query={'use_ssl': False}) + + engine = sa.create_engine(engine_url) + + table_df = pd.read_sql("select * from nba", con=engine) + +API Examples +=============== +This section includes the following examples: + +.. contents:: + :local: + :depth: 1 + + +Using the Cursor +-------------------------------------------- +The DB-API specification includes several methods for fetching results from the cursor. This section shows an example using the ``nba`` table, which looks as follows: + +.. csv-table:: nba + :file: nba-t10.csv + :widths: auto + :header-rows: 1 + +As before, you must import the library and create a :py:meth:`~Connection`, followed by :py:meth:`~Connection.execute` on a simple ``SELECT *`` query: + +.. 
code-block:: python + + import pysqream + con = pysqream.connect(host='127.0.0.1', port=3108, database='master' + , username='rhendricks', password='Tr0ub4dor&3' + , clustered=True) + + cur = con.cursor() # Create a new cursor + # The select statement: + statement = 'SELECT * FROM nba' + cur.execute(statement) + +When the statement has finished executing, you have a :py:meth:`Connection` cursor object waiting. A cursor is iterable, meaning that it advances the cursor to the next row when fetched. + +You can use :py:meth:`~Connection.fetchone` to fetch one record at a time: + +.. code-block:: python + + first_row = cur.fetchone() # Fetch one row at a time (first row) + + second_row = cur.fetchone() # Fetch one row at a time (second row) + +To fetch several rows at a time, use :py:meth:`~Connection.fetchmany`: + +.. code-block:: python + + # executing `fetchone` twice is equivalent to this form: + third_and_fourth_rows = cur.fetchmany(2) + +To fetch all rows at once, use :py:meth:`~Connection.fetchall`: + +.. code-block:: python + + # To get all rows at once, use `fetchall` + remaining_rows = cur.fetchall() + + cur.close() + + + # Close the connection when done + con.close() + +The following is an example of the contents of the row variables used in our examples: + +.. code-block:: pycon + + >>> print(first_row) + ('Avery Bradley', 'Boston Celtics', 0, 'PG', 25, '6-2', 180, 'Texas', 7730337) + >>> print(second_row) + ('Jae Crowder', 'Boston Celtics', 99, 'SF', 25, '6-6', 235, 'Marquette', 6796117) + >>> print(third_and_fourth_rows) + [('John Holland', 'Boston Celtics', 30, 'SG', 27, '6-5', 205, 'Boston University', None), ('R.J. 
Hunter', 'Boston Celtics', 28, 'SG', 22, '6-5', 185, 'Georgia State', 1148640)] + >>> print(remaining_rows) + [('Jonas Jerebko', 'Boston Celtics', 8, 'PF', 29, '6-10', 231, None, 5000000), ('Amir Johnson', 'Boston Celtics', 90, 'PF', 29, '6-9', 240, None, 12000000), ('Jordan Mickey', 'Boston Celtics', 55, 'PF', 21, '6-8', 235, 'LSU', 1170960), ('Kelly Olynyk', 'Boston Celtics', 41, 'C', 25, '7-0', 238, 'Gonzaga', 2165160), + [...] + +.. note:: Calling a fetch command after all rows have been fetched will return an empty array (``[]``). + +Reading Result Metadata +---------------------------- +When you execute a statement, the connection object also contains metadata about the result set, such as **column names** and **types**. + +The metadata is stored in the :py:attr:`Connection.description` object of the cursor: + +.. code-block:: pycon + + >>> import pysqream + >>> con = pysqream.connect(host='127.0.0.1', port=3108, database='master' + ... , username='rhendricks', password='Tr0ub4dor&3' + ... , clustered=True) + >>> cur = con.cursor() + >>> statement = 'SELECT * FROM nba' + >>> cur.execute(statement) + + >>> print(cur.description) + [('Name', 'STRING', 24, 24, None, None, True), ('Team', 'STRING', 22, 22, None, None, True), ('Number', 'NUMBER', 1, 1, None, None, True), ('Position', 'STRING', 2, 2, None, None, True), ('Age (as of 2018)', 'NUMBER', 1, 1, None, None, True), ('Height', 'STRING', 4, 4, None, None, True), ('Weight', 'NUMBER', 2, 2, None, None, True), ('College', 'STRING', 21, 21, None, None, True), ('Salary', 'NUMBER', 4, 4, None, None, True)] + +You can fetch a list of column names by iterating over the ``description`` list: + +.. code-block:: pycon + + >>> [ i[0] for i in cur.description ] + ['Name', 'Team', 'Number', 'Position', 'Age (as of 2018)', 'Height', 'Weight', 'College', 'Salary'] + +Loading Data into a Table +--------------------------- +This example shows how to load 10,000 rows of dummy data to an instance of SQream. 
+ +**To load 10,000 rows of dummy data into an instance of SQream:** + +1. Run the following: + + .. code-block:: python + + import pysqream + from datetime import date, datetime + from time import time + + con = pysqream.connect(host='127.0.0.1', port=3108, database='master' + , username='rhendricks', password='Tr0ub4dor&3' + , clustered=True) + cur = con.cursor() + +2. Create a table for loading: + + .. code-block:: python + + create = 'create or replace table perf (b bool, t tinyint, sm smallint, i int, bi bigint, f real, d double, s varchar(12), ss text, dt date, dtt datetime)' + cur.execute(create) + +3. Load your data into the table using the ``INSERT`` command. + + :: + +4. Create dummy data matching the table you created: + + .. code-block:: python + + data = (False, 2, 12, 145, 84124234, 3.141, -4.3, "Marty McFly" , u"キウイは楽しい鳥です" , date(2019, 12, 17), datetime(1955, 11, 4, 1, 23, 0, 0)) + + row_count = 10**4 + +5. Execute the insert: + + .. code-block:: python + + insert = 'insert into perf values (?,?,?,?,?,?,?,?,?,?,?)' + start = time() + cur.executemany(insert, [data] * row_count) + print (f"Total insert time for {row_count} rows: {time() - start} seconds") + +6. Close this cursor: + + .. code-block:: python + + cur.close() + +7. Verify that the data was inserted correctly: + + .. code-block:: python + + cur = con.cursor() + cur.execute('select count(*) from perf') + result = cur.fetchall() # `fetchall` collects the entire data set + print (f"Count of inserted rows: {result[0][0]}") + +8. Close the cursor: + + .. code-block:: python + + cur.close() + +9. Close the connection: + + .. code-block:: python + + con.close() + + + +Using SQLAlchemy ORM to Create and Populate Tables +----------------------------------------------------------------------- +This section shows how to use the ORM to create and populate tables from Python objects. + +**To use SQLAlchemy ORM to create and populate tables:** + +1. Run the following: + + .. 
code-block:: python + + import sqlalchemy as sa + import pandas as pd + from sqlalchemy.engine.url import URL + + + engine_url = URL('sqream' + , username='rhendricks' + , password='secret_password' + , host='localhost' + , port=5000 + , database='raviga' + , query={'use_ssl': False}) + + engine = sa.create_engine(engine_url) + +2. Build a metadata object and bind it: + + .. code-block:: python + + metadata = sa.MetaData() + metadata.bind = engine + +3. Create a table in the local metadata: + + .. code-block:: python + + employees = sa.Table( + 'employees' + , metadata + , sa.Column('id', sa.Integer) + , sa.Column('name', sa.VARCHAR(32)) + , sa.Column('lastname', sa.VARCHAR(32)) + , sa.Column('salary', sa.Float) + ) + + The ``create_all()`` function uses the SQream engine object. + +4. Create all the defined table objects: + + .. code-block:: python + + metadata.create_all(engine) + +5. Populate your table. + + :: + +6. Build the data rows: + + .. code-block:: python + + insert_data = [ {'id': 1, 'name': 'Richard','lastname': 'Hendricks', 'salary': 12000.75} + ,{'id': 3, 'name': 'Bertram', 'lastname': 'Gilfoyle', 'salary': 8400.0} + ,{'id': 8, 'name': 'Donald', 'lastname': 'Dunn', 'salary': 6500.40} + ] + +7. Build the ``INSERT`` command: + + .. code-block:: python + + ins = employees.insert(insert_data) + +8. Execute the command: + + .. code-block:: python + + result = engine.execute(ins) + +For more information, see the :ref:`python_api_reference_guide`. 
\ No newline at end of file diff --git a/third_party_tools/client_drivers/python/nba-t10.csv b/connecting_to_sqream/client_drivers/python/nba-t10.csv similarity index 98% rename from third_party_tools/client_drivers/python/nba-t10.csv rename to connecting_to_sqream/client_drivers/python/nba-t10.csv index fe9ced442..024530355 100644 --- a/third_party_tools/client_drivers/python/nba-t10.csv +++ b/connecting_to_sqream/client_drivers/python/nba-t10.csv @@ -1,10 +1,10 @@ -Name,Team,Number,Position,Age,Height,Weight,College,Salary -Avery Bradley,Boston Celtics,0.0,PG,25.0,6-2,180.0,Texas,7730337.0 -Jae Crowder,Boston Celtics,99.0,SF,25.0,6-6,235.0,Marquette,6796117.0 -John Holland,Boston Celtics,30.0,SG,27.0,6-5,205.0,Boston University, -R.J. Hunter,Boston Celtics,28.0,SG,22.0,6-5,185.0,Georgia State,1148640.0 -Jonas Jerebko,Boston Celtics,8.0,PF,29.0,6-10,231.0,,5000000.0 -Amir Johnson,Boston Celtics,90.0,PF,29.0,6-9,240.0,,12000000.0 -Jordan Mickey,Boston Celtics,55.0,PF,21.0,6-8,235.0,LSU,1170960.0 -Kelly Olynyk,Boston Celtics,41.0,C,25.0,7-0,238.0,Gonzaga,2165160.0 -Terry Rozier,Boston Celtics,12.0,PG,22.0,6-2,190.0,Louisville,1824360.0 +Name,Team,Number,Position,Age,Height,Weight,College,Salary +Avery Bradley,Boston Celtics,0.0,PG,25.0,6-2,180.0,Texas,7730337.0 +Jae Crowder,Boston Celtics,99.0,SF,25.0,6-6,235.0,Marquette,6796117.0 +John Holland,Boston Celtics,30.0,SG,27.0,6-5,205.0,Boston University, +R.J. 
Hunter,Boston Celtics,28.0,SG,22.0,6-5,185.0,Georgia State,1148640.0 +Jonas Jerebko,Boston Celtics,8.0,PF,29.0,6-10,231.0,,5000000.0 +Amir Johnson,Boston Celtics,90.0,PF,29.0,6-9,240.0,,12000000.0 +Jordan Mickey,Boston Celtics,55.0,PF,21.0,6-8,235.0,LSU,1170960.0 +Kelly Olynyk,Boston Celtics,41.0,C,25.0,7-0,238.0,Gonzaga,2165160.0 +Terry Rozier,Boston Celtics,12.0,PG,22.0,6-2,190.0,Louisville,1824360.0 diff --git a/third_party_tools/client_drivers/python/test.py b/connecting_to_sqream/client_drivers/python/test.py similarity index 95% rename from third_party_tools/client_drivers/python/test.py rename to connecting_to_sqream/client_drivers/python/test.py index 51d0b4a92..d7de6305a 100644 --- a/third_party_tools/client_drivers/python/test.py +++ b/connecting_to_sqream/client_drivers/python/test.py @@ -1,37 +1,37 @@ -#!/usr/bin/env python - -import pysqream - -""" -Connection parameters include: -* IP/Hostname -* Port -* database name -* username -* password -* Connect through load balancer, or direct to worker (Default: false - direct to worker) -* use SSL connection (default: false) -* Optional service queue (default: 'sqream') -""" - -# Create a connection object - -con = pysqream.connect(host='127.0.0.1', port=5000, database='master' - , username='sqream', password='sqream' - , clustered=False) - -# Create a new cursor -cur = con.cursor() - -# Prepare and execute a query -cur.execute('select show_version()') - -result = cur.fetchall() # `fetchall` gets the entire data set - -print (f"Version: {result[0][0]}") - -# This should print the SQream DB version. For example ``Version: v2020.1``. 
- -# Finally, close the connection - +#!/usr/bin/env python + +import pysqream + +""" +Connection parameters include: +* IP/Hostname +* Port +* database name +* username +* password +* Connect through load balancer, or direct to worker (Default: false - direct to worker) +* use SSL connection (default: false) +* Optional service queue (default: 'sqream') +""" + +# Create a connection object + +con = pysqream.connect(host='127.0.0.1', port=5000, database='master' + , username='sqream', password='sqream' + , clustered=False) + +# Create a new cursor +cur = con.cursor() + +# Prepare and execute a query +cur.execute('select show_version()') + +result = cur.fetchall() # `fetchall` gets the entire data set + +print (f"Version: {result[0][0]}") + +# This should print the SQream DB version. For example ``Version: v2020.1``. + +# Finally, close the connection + con.close() \ No newline at end of file diff --git a/third_party_tools/client_platforms/connect2.sas b/connecting_to_sqream/client_platforms/connect.sas similarity index 100% rename from third_party_tools/client_platforms/connect2.sas rename to connecting_to_sqream/client_platforms/connect.sas diff --git a/third_party_tools/client_platforms/connect.sas b/connecting_to_sqream/client_platforms/connect2.sas similarity index 95% rename from third_party_tools/client_platforms/connect.sas rename to connecting_to_sqream/client_platforms/connect2.sas index 78c670762..10fcdb0a2 100644 --- a/third_party_tools/client_platforms/connect.sas +++ b/connecting_to_sqream/client_platforms/connect2.sas @@ -1,27 +1,27 @@ -options sastrace='d,d,d,d' -sastraceloc=saslog -nostsuffix -msglevel=i -sql_ip_trace=(note,source) -DEBUG=DBMS_SELECT; - -options validvarname=any; - -libname sqlib jdbc driver="com.sqream.jdbc.SQDriver" - classpath="/opt/sqream/sqream-jdbc-4.0.0.jar" - URL="jdbc:Sqream://sqream-cluster.piedpiper.com:3108/raviga;cluster=true" - user="rhendricks" - password="Tr0ub4dor3" - schema="public" - PRESERVE_TAB_NAMES=YES - 
PRESERVE_COL_NAMES=YES; - -proc sql; - title 'Customers table'; - select * - from sqlib.customers; -quit; - -data sqlib.customers; - set sqlib.customers; +options sastrace='d,d,d,d' +sastraceloc=saslog +nostsuffix +msglevel=i +sql_ip_trace=(note,source) +DEBUG=DBMS_SELECT; + +options validvarname=any; + +libname sqlib jdbc driver="com.sqream.jdbc.SQDriver" + classpath="/opt/sqream/sqream-jdbc-4.0.0.jar" + URL="jdbc:Sqream://sqream-cluster.piedpiper.com:3108/raviga;cluster=true" + user="rhendricks" + password="Tr0ub4dor3" + schema="public" + PRESERVE_TAB_NAMES=YES + PRESERVE_COL_NAMES=YES; + +proc sql; + title 'Customers table'; + select * + from sqlib.customers; +quit; + +data sqlib.customers; + set sqlib.customers; run; \ No newline at end of file diff --git a/third_party_tools/client_platforms/connect3.sas b/connecting_to_sqream/client_platforms/connect3.sas similarity index 100% rename from third_party_tools/client_platforms/connect3.sas rename to connecting_to_sqream/client_platforms/connect3.sas diff --git a/third_party_tools/client_platforms/index.rst b/connecting_to_sqream/client_platforms/index.rst similarity index 91% rename from third_party_tools/client_platforms/index.rst rename to connecting_to_sqream/client_platforms/index.rst index 30280c788..421a99ced 100644 --- a/third_party_tools/client_platforms/index.rst +++ b/connecting_to_sqream/client_platforms/index.rst @@ -1,37 +1,43 @@ -.. _client_platforms: - -************************************ -Client Platforms -************************************ -These topics explain how to install and connect a variety of third party tools. - -Browse the articles below, in the sidebar, or use the search to find the information you need. - -Overview -========== - -SQream DB is designed to work with most common database tools and interfaces, allowing you direct access through a variety of drivers, connectors, tools, vizualisers, and utilities. - -The tools listed have been tested and approved for use with SQream DB. 
Most 3\ :sup:`rd` party tools that work through JDBC, ODBC, and Python should work. - -If you are looking for a tool that is not listed, SQream and our partners can help. Go to `SQream Support `_ or contact your SQream account manager for more information. - -.. toctree:: - :maxdepth: 4 - :caption: In this section: - :titlesonly: - - power_bi - tibco_spotfire - sas_viya - sql_workbench - tableau - pentaho - microstrategy - informatica - r - php - xxtalend - xxdiagnosing_common_connectivity_issues - -.. image:: /_static/images/connectivity_ecosystem.png \ No newline at end of file +.. _client_platforms: + +************************************ +Client Platforms +************************************ +These topics explain how to install and connect a variety of third party tools. + +Browse the articles below, in the sidebar, or use the search to find the information you need. + +Overview +========== + +SQream DB is designed to work with most common database tools and interfaces, allowing you direct access through a variety of drivers, connectors, tools, vizualisers, and utilities. + +The tools listed have been tested and approved for use with SQream DB. Most 3\ :sup:`rd` party tools that work through JDBC, ODBC, and Python should work. + +If you are looking for a tool that is not listed, SQream and our partners can help. Go to `SQream Support `_ or contact your SQream account manager for more information. + +.. toctree:: + :maxdepth: 4 + :caption: In this section: + :titlesonly: + + + + trino + informatica + microstrategy + pentaho + php + power_bi + r + sap_businessobjects + sas_viya + sql_workbench + tableau + talend + tibco_spotfire + xxdiagnosing_common_connectivity_issues + +.. 
image:: /_static/images/connectivity_ecosystem.png + + diff --git a/third_party_tools/client_platforms/informatica.rst b/connecting_to_sqream/client_platforms/informatica.rst similarity index 97% rename from third_party_tools/client_platforms/informatica.rst rename to connecting_to_sqream/client_platforms/informatica.rst index 6bc50b22a..ec39a0129 100644 --- a/third_party_tools/client_platforms/informatica.rst +++ b/connecting_to_sqream/client_platforms/informatica.rst @@ -143,7 +143,7 @@ After establishing a connection between SQream and Informatica you can establish 2. In the **JDBC_IC Connection Properties** section, in the **JDBC Connection URL** field, establish a JDBC connection by providing the correct connection string. - For connection string examples, see `Connection Strings `_. + For connection string examples, see `Connection Strings `_. :: diff --git a/third_party_tools/client_platforms/microstrategy.rst b/connecting_to_sqream/client_platforms/microstrategy.rst similarity index 94% rename from third_party_tools/client_platforms/microstrategy.rst rename to connecting_to_sqream/client_platforms/microstrategy.rst index 6d2be281f..370312a0d 100644 --- a/third_party_tools/client_platforms/microstrategy.rst +++ b/connecting_to_sqream/client_platforms/microstrategy.rst @@ -1,185 +1,185 @@ -.. _microstrategy: - - -************************* -Connect to SQream Using MicroStrategy -************************* - -.. _ms_top: - -Overview ---------------- -This document is a Quick Start Guide that describes how to install MicroStrategy and connect a datasource to the MicroStrategy dasbhoard for analysis. - - - -The **Connecting to SQream Using MicroStrategy** page describes the following: - - -.. contents:: - :local: - - - - - - -What is MicroStrategy? -================ -MicroStrategy is a Business Intelligence software offering a wide variety of data analytics capabilities. SQream uses the MicroStrategy connector for reading and loading data into SQream. 
- -MicroStrategy provides the following: - -* Data discovery -* Advanced analytics -* Data visualization -* Embedded BI -* Banded reports and statements - - -For more information about Microstrategy, see `MicroStrategy `_. - - - -:ref:`Back to Overview ` - - - - - -Connecting a Data Source -======================= - -1. Activate the **MicroStrategy Desktop** app. The app displays the Dossiers panel to the right. - - :: - -2. Download the most current version of the `SQream JDBC driver `_. - - :: - -3. Click **Dossiers** and **New Dossier**. The **Untitled Dossier** panel is displayed. - - :: - -4. Click **New Data**. - - :: - -5. From the **Data Sources** panel, select **Databases** to access data from tables. The **Select Import Options** panel is displayed. - - :: - -6. Select one of the following: - - * Build a Query - * Type a Query - * Select Tables - - :: - -7. Click **Next**. - - :: - -8. In the Data Source panel, do the following: - - 1. From the **Database** dropdown menu, select **Generic**. The **Host Name**, **Port Number**, and **Database Name** fields are removed from the panel. - - :: - - 2. In the **Version** dropdown menu, verify that **Generic DBMS** is selected. - - :: - - 3. Click **Show Connection String**. - - :: - - 4. Select the **Edit connection string** checkbox. - - :: - - 5. From the **Driver** dropdown menu, select a driver for one of the following connectors: - - * **JDBC** - The SQream driver is not integrated with MicroStrategy and does not appear in the dropdown menu. However, to proceed, you must select an item, and in the next step you must specify the path to the SQream driver that you installed on your machine. - * **ODBC** - SQreamDB ODBC - - :: - - 6. In the **Connection String** text box, type the relevant connection string and path to the JDBC jar file using the following syntax: - - .. code-block:: console - - $ jdbc:Sqream:///;user=;password=sqream;[; ...] 
- - The following example shows the correct syntax for the JDBC connector: - - .. code-block:: console - - jdbc;MSTR_JDBC_JAR_FOLDER=C:\path\to\jdbc\folder;DRIVER=;URL={jdbc:Sqream:///;user=;password=;[; ...];} - - The following example shows the correct syntax for the ODBC connector: - - .. code-block:: console - - odbc:Driver={SqreamODBCDriver};DSN={SQreamDB ODBC};Server=;Port=;Database=;User=;Password=;Cluster=; - - For more information about the available **connection parameters** and other examples, see `Connection Parameters `_. - - 7. In the **User** and **Password** fields, fill out your user name and password. - - :: - - 8. In the **Data Source Name** field, type **SQreamDB**. - - :: - - 9. Click **Save**. The SQreamDB that you picked in the Data Source panel is displayed. - - -9. In the **Namespace** menu, select a namespace. The tables files are displayed. - - :: - -10. Drag and drop the tables into the panel on the right in your required order. - - :: - -11. **Recommended** - Click **Prepare Data** to customize your data for analysis. - - :: - -12. Click **Finish**. - - :: - -13. From the **Data Access Mode** dialog box, select one of the following: - - - * Connect Live - * Import as an In-memory Dataset - -Your populated dashboard is displayed and is ready for data discovery and analytics. - - - - - - -.. _supported_sqream_drivers: - -:ref:`Back to Overview ` - -Supported SQream Drivers -================ - -The following list shows the supported SQream drivers and versions: - -* **JDBC** - Version 4.3.3 and higher. -* **ODBC** - Version 4.0.0. - - -.. _supported_tools_and_operating_systems: - -:ref:`Back to Overview ` +.. _microstrategy: + + +************************* +Connect to SQream Using MicroStrategy +************************* + +.. _ms_top: + +Overview +--------------- +This document is a Quick Start Guide that describes how to install MicroStrategy and connect a datasource to the MicroStrategy dasbhoard for analysis. 
+ + + +The **Connecting to SQream Using MicroStrategy** page describes the following: + + +.. contents:: + :local: + + + + + + +What is MicroStrategy? +================ +MicroStrategy is a Business Intelligence software offering a wide variety of data analytics capabilities. SQream uses the MicroStrategy connector for reading and loading data into SQream. + +MicroStrategy provides the following: + +* Data discovery +* Advanced analytics +* Data visualization +* Embedded BI +* Banded reports and statements + + +For more information about Microstrategy, see `MicroStrategy `_. + + + +:ref:`Back to Overview ` + + + + + +Connecting a Data Source +======================= + +1. Activate the **MicroStrategy Desktop** app. The app displays the Dossiers panel to the right. + + :: + +2. Download the most current version of the `SQream JDBC driver `_. + + :: + +3. Click **Dossiers** and **New Dossier**. The **Untitled Dossier** panel is displayed. + + :: + +4. Click **New Data**. + + :: + +5. From the **Data Sources** panel, select **Databases** to access data from tables. The **Select Import Options** panel is displayed. + + :: + +6. Select one of the following: + + * Build a Query + * Type a Query + * Select Tables + + :: + +7. Click **Next**. + + :: + +8. In the Data Source panel, do the following: + + 1. From the **Database** dropdown menu, select **Generic**. The **Host Name**, **Port Number**, and **Database Name** fields are removed from the panel. + + :: + + 2. In the **Version** dropdown menu, verify that **Generic DBMS** is selected. + + :: + + 3. Click **Show Connection String**. + + :: + + 4. Select the **Edit connection string** checkbox. + + :: + + 5. From the **Driver** dropdown menu, select a driver for one of the following connectors: + + * **JDBC** - The SQream driver is not integrated with MicroStrategy and does not appear in the dropdown menu. 
However, to proceed, you must select an item, and in the next step you must specify the path to the SQream driver that you installed on your machine. + * **ODBC** - SQreamDB ODBC + + :: + + 6. In the **Connection String** text box, type the relevant connection string and path to the JDBC jar file using the following syntax: + + .. code-block:: console + + $ jdbc:Sqream:///;user=;password=sqream;[; ...] + + The following example shows the correct syntax for the JDBC connector: + + .. code-block:: console + + jdbc;MSTR_JDBC_JAR_FOLDER=C:\path\to\jdbc\folder;DRIVER=;URL={jdbc:Sqream:///;user=;password=;[; ...];} + + The following example shows the correct syntax for the ODBC connector: + + .. code-block:: console + + odbc:Driver={SqreamODBCDriver};DSN={SQreamDB ODBC};Server=;Port=;Database=;User=;Password=;Cluster=; + + For more information about the available **connection parameters** and other examples, see `Connection Parameters `_. + + 7. In the **User** and **Password** fields, fill out your user name and password. + + :: + + 8. In the **Data Source Name** field, type **SQreamDB**. + + :: + + 9. Click **Save**. The SQreamDB that you picked in the Data Source panel is displayed. + + +9. In the **Namespace** menu, select a namespace. The tables files are displayed. + + :: + +10. Drag and drop the tables into the panel on the right in your required order. + + :: + +11. **Recommended** - Click **Prepare Data** to customize your data for analysis. + + :: + +12. Click **Finish**. + + :: + +13. From the **Data Access Mode** dialog box, select one of the following: + + + * Connect Live + * Import as an In-memory Dataset + +Your populated dashboard is displayed and is ready for data discovery and analytics. + + + + + + +.. _supported_sqream_drivers: + +:ref:`Back to Overview ` + +Supported SQream Drivers +================ + +The following list shows the supported SQream drivers and versions: + +* **JDBC** - Version 4.3.3 and higher. +* **ODBC** - Version 4.0.0. + + +.. 
_supported_tools_and_operating_systems: + +:ref:`Back to Overview ` diff --git a/third_party_tools/client_platforms/odbc-sqream.tdc b/connecting_to_sqream/client_platforms/odbc-sqream.tdc similarity index 98% rename from third_party_tools/client_platforms/odbc-sqream.tdc rename to connecting_to_sqream/client_platforms/odbc-sqream.tdc index f1bbe279d..36cd55e33 100644 --- a/third_party_tools/client_platforms/odbc-sqream.tdc +++ b/connecting_to_sqream/client_platforms/odbc-sqream.tdc @@ -1,25 +1,25 @@ - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/third_party_tools/client_platforms/pentaho.rst b/connecting_to_sqream/client_platforms/pentaho.rst similarity index 76% rename from third_party_tools/client_platforms/pentaho.rst rename to connecting_to_sqream/client_platforms/pentaho.rst index 1cd95866f..fa8146c41 100644 --- a/third_party_tools/client_platforms/pentaho.rst +++ b/connecting_to_sqream/client_platforms/pentaho.rst @@ -1,249 +1,253 @@ -.. _pentaho_data_integration: - -************************* -Connect to SQream Using Pentaho Data Integration -************************* -.. _pentaho_top: - -Overview -========= -This document is a Quick Start Guide that describes how to install Pentaho, create a transformation, and define your output. - -The Connecting to SQream Using Pentaho page describes the following: - -* :ref:`Installing Pentaho ` -* :ref:`Installing and setting up the JDBC driver ` -* :ref:`Creating a transformation ` -* :ref:`Defining your output ` -* :ref:`Importing your data ` - -.. _install_pentaho: - -Installing Pentaho -~~~~~~~~~~~~~~~~~ -To install PDI, see the `Pentaho Community Edition (CE) Installation Guide `_. - -The **Pentaho Community Edition (CE) Installation Guide** describes how to do the following: - -* Downloading the PDI software. -* Installing the **JRE (Java Runtime Environment)** and **JDK (Java Development Kit)**. 
-* Setting up the JRE and JDK environment variables for PDI. - -:ref:`Back to Overview ` - -.. _install_set_up_jdbc_driver: - -Installing and Setting Up the JDBC Driver -~~~~~~~~~~~~~~~~~ -After installing Pentaho you must install and set up the JDBC driver. This section explains how to set up the JDBC driver using Pentaho. These instructions use Spoon, the graphical transformation and job designer associated with the PDI suite. - -You can install the driver by copying and pasting the SQream JDBC .jar file into your **/design-tools/data-integration/lib** directory. - -**NOTE:** Contact your SQream license account manager for the JDBC .jar file. - -:ref:`Back to Overview ` - -.. _create_transformation: - -Creating a Transformation -~~~~~~~~~~~~~~~~~~ -After installing Pentaho you can create a transformation. - -**To create a transformation:** - -1. Use the CLI to open the PDI client for your operating system (Windows): - - .. code-block:: console - - $ spoon.bat - -2. Open the spoon.bat file from its folder location. - -:: - -3. In the **View** tab, right-click **Transformations** and click **New**. - -A new transformation tab is created. - -4. In the **Design** tab, click **Input** to show its file contents. - -:: - -5. Drag and drop the **CSV file input** item to the new transformation tab that you created. - -:: - -6. Double-click **CSV file input**. The **CSV file input** panel is displayed. - -:: - -7. In the **Step name** field, type a name. - -:: - -8. To the right of the **Filename** field, click **Browse**. - -:: - -9. Select the file that you want to read from and click **OK**. - -:: - -10. In the CSV file input window, click **Get Fields**. - -:: - -11. In the **Sample data** window, enter the number of lines you want to sample and click **OK**. The default setting is **100**. - -The tool reads the file and suggests the field name and type. - -12. In the CSV file input window, click **Preview**. - -:: - -13. 
In the **Preview size** window, enter the number of rows you want to preview and click **OK**. The default setting is **1000**. - -:: - -14. Verify that the preview data is correct and click **Close**. - -:: - -15. Click **OK** in the **CSV file input** window. - -:ref:`Back to Overview ` - -.. _define_output: - -Defining Your Output ------------------ -After creating your transformation you must define your output. - -**To define your output:** - -1. In the **Design** tab, click **Output**. - - The Output folder is opened. - -2. Drag and drop **Table output** item to the Transformation window. - -:: - -3. Double-click **Table output** to open the **Table output** dialog box. - -:: - -4. From the **Table output** dialog box, type a **Step name** and click **New** to create a new connection. Your **steps** are the building blocks of a transformation, such as file input or a table output. - -The **Database Connection** window is displayed with the **General** tab selected by default. - -5. Enter or select the following information in the Database Connection window and click **Test**. - -The following table shows and describes the information that you need to fill out in the Database Connection window: - -.. list-table:: - :widths: 6 31 73 - :header-rows: 1 - - * - No. - - Element Name - - Description - * - 1 - - Connection name - - Enter a name that uniquely describes your connection, such as **sampledata**. - * - 2 - - Connection type - - Select **Generic database**. - * - 3 - - Access - - Select **Native (JDBC)**. - * - 4 - - Custom connection URL - - Insert **jdbc:Sqream:///;user=;password=;[; ...];**. The IP is a node in your SQream cluster and is the name or schema of the database you want to connect to. Verify that you have not used any leading or trailing spaces. - * - 5 - - Custom driver class name - - Insert **com.sqream.jdbc.SQDriver**. Verify that you have not used any leading or trailing spaces. - * - 6 - - Username - - Your SQreamdb username. 
If you leave this blank, you will be prompted to provide it when you connect. - * - 7 - - Password - - Your password. If you leave this blank, you will be prompted to provide it when you connect. - -The following message is displayed: - -.. image:: /_static/images/third_party_connectors/pentaho/connection_tested_successfully_2.png - -6. Click **OK** in the window above, in the Database Connection window, and Table Output window. - -:ref:`Back to Overview ` - -.. _import_data: - -Importing Data ------------------ -After defining your output you can begin importing your data. - -For more information about backing up users, permissions, or schedules, see `Backup and Restore Pentaho Repositories `_ - -**To import data:** - -1. Double-click the **Table output** connection that you just created. - -:: - -2. To the right of the **Target schema** field, click **Browse** and select a schema name. - -:: - -3. Click **OK**. The selected schema name is displayed in the **Target schema** field. - -:: - -4. Create a new hop connection between the **CSV file input** and **Table output** steps: - - 1. On the CSV file input step item, click the **new hop connection** icon. - - .. image:: /_static/images/third_party_connectors/pentaho/csv_file_input_options.png - - 2. Drag an arrow from the **CSV file input** step item to the **Table output** step item. - - .. image:: /_static/images/third_party_connectors/pentaho/csv_file_input_options_2.png - - 3. Release the mouse button. The following options are displayed. - - 4. Select **Main output of step**. - - .. image:: /_static/images/third_party_connectors/pentaho/main_output_of_step.png - -:: - -5. Double-click **Table output** to open the **Table output** dialog box. - -:: - -6. In the **Target table** field, define a target table name. - -:: - -7. Click **SQL** to open the **Simple SQL editor.** - -:: - -8. In the **Simple SQL editor**, click **Execute**. - - The system processes and displays the results of the SQL statements. - -9. 
Close all open dialog boxes. - -:: - -10. Click the play button to execute the transformation. - - .. image:: /_static/images/third_party_connectors/pentaho/execute_transformation.png - - The **Run Options** dialog box is displayed. - -11. Click **Run**. The **Execution Results** are displayed. - -:ref:`Back to Overview ` +.. _pentaho_data_integration: + +************************* +Connecting to SQream Using Pentaho Data Integration +************************* +.. _pentaho_top: + +Overview +========= +This document is a Quick Start Guide that describes how to install Pentaho, create a transformation, and define your output. + +The Connecting to SQream Using Pentaho page describes the following: + +* :ref:`Installing Pentaho ` +* :ref:`Installing and setting up the JDBC driver ` +* :ref:`Creating a transformation ` +* :ref:`Defining your output ` +* :ref:`Importing your data ` + +.. _install_pentaho: + +Installing Pentaho +~~~~~~~~~~~~~~~~~ +To install PDI, see the `Pentaho Community Edition (CE) Installation Guide `_. + +The **Pentaho Community Edition (CE) Installation Guide** describes how to do the following: + +* Downloading the PDI software. +* Installing the **JRE (Java Runtime Environment)** and **JDK (Java Development Kit)**. +* Setting up the JRE and JDK environment variables for PDI. + +:ref:`Back to Overview ` + +.. _install_set_up_jdbc_driver: + +Installing and Setting Up the JDBC Driver +~~~~~~~~~~~~~~~~~ +After installing Pentaho you must install and set up the JDBC driver. This section explains how to set up the JDBC driver using Pentaho. These instructions use Spoon, the graphical transformation and job designer associated with the PDI suite. + +You can install the driver by copying and pasting the SQream JDBC .jar file into your **/design-tools/data-integration/lib** directory. + +**NOTE:** Contact your SQream license account manager for the JDBC .jar file. + +:ref:`Back to Overview ` + +.. 
_create_transformation: + +Creating a Transformation +~~~~~~~~~~~~~~~~~~ +After installing Pentaho you can create a transformation. + +**To create a transformation:** + +1. Use the CLI to open the PDI client for your operating system (Windows): + + .. code-block:: console + + $ spoon.bat + +2. Open the spoon.bat file from its folder location. + +:: + +3. In the **View** tab, right-click **Transformations** and click **New**. + + A new transformation tab is created. + +4. In the **Design** tab, click **Input** to show its file contents. + +:: + +5. Drag and drop the **CSV file input** item to the new transformation tab that you created. + +:: + +6. Double-click **CSV file input**. The **CSV file input** panel is displayed. + +:: + +7. In the **Step name** field, type a name. + +:: + +8. To the right of the **Filename** field, click **Browse**. + +:: + +9. Select the file that you want to read from and click **OK**. + +:: + +10. In the CSV file input window, click **Get Fields**. + +:: + +11. In the **Sample data** window, enter the number of lines you want to sample and click **OK**. The default setting is **100**. + + The tool reads the file and suggests the field name and type. + +12. In the CSV file input window, click **Preview**. + +:: + +13. In the **Preview size** window, enter the number of rows you want to preview and click **OK**. The default setting is **1000**. + +:: + +14. Verify that the preview data is correct and click **Close**. + +:: + +15. Click **OK** in the **CSV file input** window. + +:ref:`Back to Overview ` + +.. _define_output: + +Defining Your Output +----------------- +After creating your transformation you must define your output. + +**To define your output:** + +1. In the **Design** tab, click **Output**. + + The Output folder is opened. + +2. Drag and drop **Table output** item to the Transformation window. + +:: + +3. Double-click **Table output** to open the **Table output** dialog box. + +:: + +4. 
From the **Table output** dialog box, type a **Step name** and click **New** to create a new connection. Your **steps** are the building blocks of a transformation, such as file input or a table output. + + The **Database Connection** window is displayed with the **General** tab selected by default. + +5. Enter or select the following information in the Database Connection window and click **Test**. + + The following table shows and describes the information that you need to fill out in the Database Connection window: + + .. list-table:: + :widths: 6 31 73 + :header-rows: 1 + + * - No. + - Element Name + - Description + * - 1 + - Connection name + - Enter a name that uniquely describes your connection, such as **sampledata**. + * - 2 + - Connection type + - Select **Generic database**. + * - 3 + - Access + - Select **Native (JDBC)**. + * - 4 + - Custom connection URL + - Insert **jdbc:Sqream:///;user=;password=;[; ...];**. The IP is a node in your SQream cluster and is the name or schema of the database you want to connect to. Verify that you have not used any leading or trailing spaces. + * - 5 + - Custom driver class name + - Insert **com.sqream.jdbc.SQDriver**. Verify that you have not used any leading or trailing spaces. + * - 6 + - Username + - Your SQreamdb username. If you leave this blank, you will be prompted to provide it when you connect. + * - 7 + - Password + - Your password. If you leave this blank, you will be prompted to provide it when you connect. + + The following message is displayed: + +.. image:: /_static/images/third_party_connectors/pentaho/connection_tested_successfully_2.png + +6. Click **OK** in the window above, in the Database Connection window, and Table Output window. + +:ref:`Back to Overview ` + +.. _import_data: + +Importing Data +----------------- +After defining your output you can begin importing your data. 
+ +For more information about backing up users, permissions, or schedules, see `Backup and Restore Pentaho Repositories `_ + +**To import data:** + +1. Double-click the **Table output** connection that you just created. + +:: + +2. To the right of the **Target schema** field, click **Browse** and select a schema name. + +:: + +3. Click **OK**. The selected schema name is displayed in the **Target schema** field. + +:: + +4. Create a new hop connection between the **CSV file input** and **Table output** steps: + + 1. On the CSV file input step item, click the **new hop connection** icon. + + .. image:: /_static/images/third_party_connectors/pentaho/csv_file_input_options.png + + 2. Drag an arrow from the **CSV file input** step item to the **Table output** step item. + + .. image:: /_static/images/third_party_connectors/pentaho/csv_file_input_options_2.png + + 3. Release the mouse button. The following options are displayed. + + :: + + 4. Select **Main output of step**. + + .. image:: /_static/images/third_party_connectors/pentaho/main_output_of_step.png + +:: + +5. Double-click **Table output** to open the **Table output** dialog box. + +:: + +6. In the **Target table** field, define a target table name. + +:: + +7. Click **SQL** to open the **Simple SQL editor.** + +:: + +8. In the **Simple SQL editor**, click **Execute**. + + The system processes and displays the results of the SQL statements. + +9. Close all open dialog boxes. + +:: + +10. Click the play button to execute the transformation. + + .. image:: /_static/images/third_party_connectors/pentaho/execute_transformation.png + + The **Run Options** dialog box is displayed. + +11. Click **Run**. + + The **Execution Results** are displayed. 
+ +:ref:`Back to Overview ` \ No newline at end of file diff --git a/connecting_to_sqream/client_platforms/php.rst b/connecting_to_sqream/client_platforms/php.rst new file mode 100644 index 000000000..ebb2c796f --- /dev/null +++ b/connecting_to_sqream/client_platforms/php.rst @@ -0,0 +1,76 @@ +.. _php: + +***************************** +Connect to SQream Using PHP +***************************** + +Overview +========== +PHP is an open source scripting language that executes scripts on servers. The **Connect to PHP** page explains how to connect to a SQream cluster, and describes the following: + +.. contents:: + :local: + :depth: 1 + +Installing PHP +------------------- +**To install PHP:** + +1. Download the JDBC driver installer from the `SQream Drivers page `_. + + :: + +2. Create a DSN. + + :: + +3. Install the **uODBC** extension for your PHP installation. + + For more information, navigate to `PHP Documentation `_ and see the topic menu on the right side of the page. + +Configuring PHP +------------------- +You can configure PHP in one of the following ways: + +* When compiling, configure PHP to enable uODBC using ``./configure --with-pdo-odbc=unixODBC,/usr/local``. + + :: + +* Install ``php-odbc`` and ``php-pdo`` along with PHP using your distribution package manager. SQream recommends a minimum of version 7.1 for the best results. + +.. note:: PHP's string size limitations truncates fetched text, which you can override by doing one of the following: + + * Increasing the **php.ini** default setting, such as the *odbc.defaultlrl* to **10000**. + + :: + + * Setting the size limitation in your code before making your connection using **ini_set("odbc.defaultlrl", "10000");**. + + :: + + * Setting the size limitation in your code before fetchng your result using **odbc_longreadlen($result, "10000");**. + +Operating PHP +------------------- +After configuring PHP, you can test your connection. + +**To test your connection:** + +#. 
Create a test connection file using the correct parameters for your SQream installation, as shown below: + + .. literalinclude:: test.php + :language: php + :emphasize-lines: 4 + :linenos: + + For more information, download the sample :download:`PHP example connection file ` shown above. + + The following is an example of a valid DSN line: + + .. code:: php + + $dsn = "odbc:Driver={SqreamODBCDriver};Server=192.168.0.5;Port=5000;Database=master;User=rhendricks;Password=super_secret;Service=sqream"; + +#. Run the PHP file either directly with PHP (``php test.php``) or through a browser. + + For more information about supported DSN parameters, see :ref:`dsn_params`. \ No newline at end of file diff --git a/third_party_tools/client_platforms/power_bi.rst b/connecting_to_sqream/client_platforms/power_bi.rst similarity index 94% rename from third_party_tools/client_platforms/power_bi.rst rename to connecting_to_sqream/client_platforms/power_bi.rst index 3b9f662bd..b43adf578 100644 --- a/third_party_tools/client_platforms/power_bi.rst +++ b/connecting_to_sqream/client_platforms/power_bi.rst @@ -1,7 +1,7 @@ .. _power_bi: ************************* -Connect to SQream Using Power BI Desktop +Connecting to SQream Using Power BI Desktop ************************* Overview @@ -22,7 +22,7 @@ SQream integrates with Power BI Desktop to do the following: SQream uses Power BI for extracting data sets using the following methods: -* **Direct query** - Direct queries lets you connect easily with no errors, and refreshes Power BI artifacts, such as graphs and reports, in a considerable amount of time in relation to the time taken for queries to run using the `SQream SQL CLI Reference guide `_. +* **Direct query** - Direct queries lets you connect easily with no errors, and refreshes Power BI artifacts, such as graphs and reports, in a considerable amount of time in relation to the time taken for queries to run using the `SQream SQL CLI Reference guide `_. 
:: @@ -52,7 +52,7 @@ Installing Power BI Desktop 2. Download and configure your ODBC driver. - For more information about configuring your ODBC driver, see `ODBC `_. + For more information about configuring your ODBC driver, see `ODBC `_. 3. Navigate to **Windows** > **Documents** and create a folder called **Power BI Desktop Custom Connectors**. @@ -140,4 +140,4 @@ SQream supports the following SQream driver versions: Related Information ------------------- -For more information, see the `Glossary `_. \ No newline at end of file +For more information, see the `Glossary `_. \ No newline at end of file diff --git a/third_party_tools/client_platforms/r.rst b/connecting_to_sqream/client_platforms/r.rst similarity index 96% rename from third_party_tools/client_platforms/r.rst rename to connecting_to_sqream/client_platforms/r.rst index 6abe27031..c84bf901b 100644 --- a/third_party_tools/client_platforms/r.rst +++ b/connecting_to_sqream/client_platforms/r.rst @@ -1,151 +1,151 @@ -.. _r: - -***************************** -Connect to SQream Using R -***************************** - -You can use R to interact with a SQream DB cluster. - -This tutorial is a guide that will show you how to connect R to SQream DB. - -.. contents:: In this topic: - :local: - -JDBC -========= - - -#. Get the :ref:`SQream DB JDBC driver`. - -#. - In R, install RJDBC - - .. code-block:: rconsole - - > install.packages("RJDBC") - Installing package into 'C:/Users/r/...' - (as 'lib' is unspecified) - - package 'RJDBC' successfully unpacked and MD5 sums checked - -#. - Import the RJDBC library - - .. code-block:: rconsole - - > library(RJDBC) - -#. - Set the classpath and initialize the JDBC driver which was previously installed. For example, on Windows: - - .. 
code-block:: rconsole - - > cp = c("C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") - > .jinit(classpath=cp) - > drv <- JDBC("com.sqream.jdbc.SQDriver","C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") -#. - Open a connection with a :ref:`JDBC connection string` and run your first statement - - .. code-block:: rconsole - - > con <- dbConnect(drv,"jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=Tr0ub4dor&3;cluster=true") - - > dbGetQuery(con,"select top 5 * from t") - xint xtinyint xsmallint xbigint - 1 1 82 5067 1 - 2 2 14 1756 2 - 3 3 91 22356 3 - 4 4 84 17232 4 - 5 5 13 14315 5 - -#. - Close the connection - - .. code-block:: rconsole - - > close(con) - -A full example ------------------ - -.. code-block:: rconsole - - > library(RJDBC) - > cp = c("C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") - > .jinit(classpath=cp) - > drv <- JDBC("com.sqream.jdbc.SQDriver","C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") - > con <- dbConnect(drv,"jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=Tr0ub4dor&3;cluster=true") - > dbGetQuery(con,"select top 5 * from t") - xint xtinyint xsmallint xbigint - 1 1 82 5067 1 - 2 2 14 1756 2 - 3 3 91 22356 3 - 4 4 84 17232 4 - 5 5 13 14315 5 - > close(con) - -ODBC -========= - -#. Install the :ref:`SQream DB ODBC driver` for your operating system, and create a DSN. - -#. - In R, install RODBC - - .. code-block:: rconsole - - > install.packages("RODBC") - Installing package into 'C:/Users/r/...' - (as 'lib' is unspecified) - - package 'RODBC' successfully unpacked and MD5 sums checked - -#. - Import the RODBC library - - .. code-block:: rconsole - - > library(RODBC) - -#. - Open a connection handle to an existing DSN (``my_cool_dsn`` in this example) - - .. code-block:: rconsole - - > ch <- odbcConnect("my_cool_dsn",believeNRows=F) - -#. 
- Run your first statement - - .. code-block:: rconsole - - > sqlQuery(ch,"select top 5 * from t") - xint xtinyint xsmallint xbigint - 1 1 82 5067 1 - 2 2 14 1756 2 - 3 3 91 22356 3 - 4 4 84 17232 4 - 5 5 13 14315 5 - -#. - Close the connection - - .. code-block:: rconsole - - > close(ch) - -A full example ------------------ - -.. code-block:: rconsole - - > library(RODBC) - > ch <- odbcConnect("my_cool_dsn",believeNRows=F) - > sqlQuery(ch,"select top 5 * from t") - xint xtinyint xsmallint xbigint - 1 1 82 5067 1 - 2 2 14 1756 2 - 3 3 91 22356 3 - 4 4 84 17232 4 - 5 5 13 14315 5 - > close(ch) +.. _r: + +***************************** +Connect to SQream Using R +***************************** + +You can use R to interact with a SQream DB cluster. + +This tutorial is a guide that will show you how to connect R to SQream DB. + +.. contents:: In this topic: + :local: + +JDBC +========= + + +#. Get the :ref:`SQream DB JDBC driver`. + +#. + In R, install RJDBC + + .. code-block:: rconsole + + > install.packages("RJDBC") + Installing package into 'C:/Users/r/...' + (as 'lib' is unspecified) + + package 'RJDBC' successfully unpacked and MD5 sums checked + +#. + Import the RJDBC library + + .. code-block:: rconsole + + > library(RJDBC) + +#. + Set the classpath and initialize the JDBC driver which was previously installed. For example, on Windows: + + .. code-block:: rconsole + + > cp = c("C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") + > .jinit(classpath=cp) + > drv <- JDBC("com.sqream.jdbc.SQDriver","C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") +#. + Open a connection with a :ref:`JDBC connection string` and run your first statement + + .. 
code-block:: rconsole + + > con <- dbConnect(drv,"jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=Tr0ub4dor&3;cluster=true") + + > dbGetQuery(con,"select top 5 * from t") + xint xtinyint xsmallint xbigint + 1 1 82 5067 1 + 2 2 14 1756 2 + 3 3 91 22356 3 + 4 4 84 17232 4 + 5 5 13 14315 5 + +#. + Close the connection + + .. code-block:: rconsole + + > close(con) + +A full example +----------------- + +.. code-block:: rconsole + + > library(RJDBC) + > cp = c("C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") + > .jinit(classpath=cp) + > drv <- JDBC("com.sqream.jdbc.SQDriver","C:\\Program Files\\SQream Technologies\\JDBC Driver\\2020.1-3.2.0\\sqream-jdbc-3.2.jar") + > con <- dbConnect(drv,"jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=Tr0ub4dor&3;cluster=true") + > dbGetQuery(con,"select top 5 * from t") + xint xtinyint xsmallint xbigint + 1 1 82 5067 1 + 2 2 14 1756 2 + 3 3 91 22356 3 + 4 4 84 17232 4 + 5 5 13 14315 5 + > close(con) + +ODBC +========= + +#. Install the :ref:`SQream DB ODBC driver` for your operating system, and create a DSN. + +#. + In R, install RODBC + + .. code-block:: rconsole + + > install.packages("RODBC") + Installing package into 'C:/Users/r/...' + (as 'lib' is unspecified) + + package 'RODBC' successfully unpacked and MD5 sums checked + +#. + Import the RODBC library + + .. code-block:: rconsole + + > library(RODBC) + +#. + Open a connection handle to an existing DSN (``my_cool_dsn`` in this example) + + .. code-block:: rconsole + + > ch <- odbcConnect("my_cool_dsn",believeNRows=F) + +#. + Run your first statement + + .. code-block:: rconsole + + > sqlQuery(ch,"select top 5 * from t") + xint xtinyint xsmallint xbigint + 1 1 82 5067 1 + 2 2 14 1756 2 + 3 3 91 22356 3 + 4 4 84 17232 4 + 5 5 13 14315 5 + +#. + Close the connection + + .. code-block:: rconsole + + > close(ch) + +A full example +----------------- + +.. 
code-block:: rconsole + + > library(RODBC) + > ch <- odbcConnect("my_cool_dsn",believeNRows=F) + > sqlQuery(ch,"select top 5 * from t") + xint xtinyint xsmallint xbigint + 1 1 82 5067 1 + 2 2 14 1756 2 + 3 3 91 22356 3 + 4 4 84 17232 4 + 5 5 13 14315 5 + > close(ch) diff --git a/connecting_to_sqream/client_platforms/sap_businessobjects.rst b/connecting_to_sqream/client_platforms/sap_businessobjects.rst new file mode 100644 index 000000000..4c740b034 --- /dev/null +++ b/connecting_to_sqream/client_platforms/sap_businessobjects.rst @@ -0,0 +1,60 @@ +.. _sap_businessobjects: + +************************* +Connecting to SQream Using SAP BusinessObjects +************************* +The **Connecting to SQream Using SAP BusinessObjects** guide includes the following sections: + +.. contents:: + :local: + :depth: 1 + +Overview +========== +The **Connecting to SQream Using SAP BusinessObjects** guide describes the best practices for configuring a connection between SQream and the SAP BusinessObjects BI platform. SAP BO's multi-tier architecture includes both client and server components, and this guide describes integrating SQream with SAP BO's object client tools using a generic JDBC connector. The instructions in this guide are relevant to both the **Universe Design Tool (UDT)** and the **Information Design Tool (IDT)**. This document only covers how to establish a connection using the generic out-of-the-box JDBC connectors, and does not cover related business object products, such as the **Business Objects Data Integrator**. + +The **Define a new connection** window below shows the generic JDBC driver, which you can use to establish a new connection to a database. + +.. image:: /_static/images/SAP_BO_2.png + +SAP BO also lets you customize the interface to include a SQream data source. + +Establising a New Connection Using a Generic JDCB Connector +========== +This section shows an example of using a generic JDBC connector to establish a new connection. 
+ +**To establish a new connection using a generic JDBC connector:** + +1. In the fields, provide a user name, password, database URL, and JDBC class. + + The following is the correct format for the database URL: + + .. code-block:: console + +
jdbc:Sqream://<host>:3108/<database>
+	  
+   SQream recommends quickly testing your connection to SQream by selecting the Generic JDBC data source in the **Define a new connection** window. When you connect using a generic JDBC data source you do not need to modify your configuration files, but are limited to the out-of-the-box settings defined in the default **jdbc.prm** file.
+   
+   .. note:: Modifying the jdbc.prm file for the generic driver impacts all other databases using the same driver.
+
+For more information, see `Connection String Examples `_.
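The URL format above can be sketched in code. The following minimal Java example builds a SQream JDBC URL of the documented form; the host, port, and database shown are placeholder values, not details from this guide:

```java
public class SqreamUrlBuilder {
    // Builds a SQream JDBC URL of the form jdbc:Sqream://<host>:<port>/<database>.
    static String buildUrl(String host, int port, String database) {
        return String.format("jdbc:Sqream://%s:%d/%s", host, port, database);
    }

    public static void main(String[] args) {
        // 3108 is the cluster port used in this guide; the host and database are examples.
        String url = buildUrl("192.168.0.5", 3108, "master");
        System.out.println(url);
    }
}
```

This is the same string you would paste into the **Define a new connection** window; building it programmatically just makes the three required parts explicit.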
+
+2. (Optional) If you are using the generic JDBC driver specific to SQream, modify the jdbc.sbo file to include the SQream JDBC driver location by adding the following lines under the Database section of the file:
+
+   .. code-block:: console
+
+      <Database Active="Yes" Name="SQream JDBC data source">
+         <JDBCDriver>
+            <ClassPath>
+               <Path>C:\Program Files\SQream Technologies\JDBC Driver\2021.2.0-4.5.3\sqream-jdbc-4.5.3.jar</Path>
+            </ClassPath>
+            <Parameter Name="JDBC Class">com.sqream.jdbc.SQDriver</Parameter>
+         </JDBCDriver>
+      </Database>
+
+3. Restart the BusinessObjects server.
+
+   When the connection is established, **SQream** is listed as a driver selection.
\ No newline at end of file
diff --git a/third_party_tools/client_platforms/sas_viya.rst b/connecting_to_sqream/client_platforms/sas_viya.rst
similarity index 91%
rename from third_party_tools/client_platforms/sas_viya.rst
rename to connecting_to_sqream/client_platforms/sas_viya.rst
index fc0806296..ef4a338a4 100644
--- a/third_party_tools/client_platforms/sas_viya.rst
+++ b/connecting_to_sqream/client_platforms/sas_viya.rst
@@ -1,185 +1,185 @@
-.. _connect_to_sas_viya:
-
-*************************
-Connect to SQream Using SAS Viya
-*************************
-
-Overview
-==========
-SAS Viya is a cloud-enabled analytics engine used for producing useful insights. The **Connect to SQream Using SAS Viya** page describes how to connect to SAS Viya, and describes the following:
-
-.. contents:: 
-   :local:
-   :depth: 1
-
-Installing SAS Viya
--------------------
-The **Installing SAS Viya** section describes the following:
-
-.. contents:: 
-   :local:
-   :depth: 1 
-
-Downloading SAS Viya
-~~~~~~~~~~~~~~~~~~
-Integrating with SQream has been tested with SAS Viya v.03.05 and newer.
-
-To download SAS Viya, see `SAS Viya `_.
-
-Installing the JDBC Driver
-~~~~~~~~~~~~~~~~~~
-The SQream JDBC driver is required for establishing a connection between SAS Viya and SQream.
-
-**To install the JDBC driver:**
-
-#. Download the `JDBC driver `_.
-
-    ::
-
-#. Unzip the JDBC driver into a location on the SAS Viya server.
-   
-   SQream recommends creating the directory ``/opt/sqream`` on the SAS Viya server.
-   
-Configuring SAS Viya
--------------------
-After installing the JDBC driver, you must configure the JDBC driver from the SAS Studio so that it can be used with SQream Studio.
-
-**To configure the JDBC driver from the SAS Studio:**
-
-#. Sign in to the SAS Studio.
-
-    ::
-
-#. From the **New** menu, click **SAS Program**.
-   
-    ::
-	
-#. Configure the SQream JDBC connector by adding the following rows:
-
-   .. literalinclude:: connect3.sas
-      :language: php
-
-For more information about writing a connection string, see **Connect to SQream DB with a JDBC Application** and navigate to `Connection String `_.
-
-Operating SAS Viya
---------------------  
-The **Operating SAS Viya** section describes the following:
-
-.. contents:: 
-   :local:
-   :depth: 1
-   
-Using SAS Viya Visual Analytics
-~~~~~~~~~~~~~~~~~~
-This section describes how to use SAS Viya Visual Analytics.
-
-**To use SAS Viya Visual Analytics:**
-
-#. Log in to `SAS Viya Visual Analytics `_ using your credentials:
-
-    ::
-
-2. Click **New Report**.
-
-    ::
-
-3. Click **Data**.
-
-    ::
-
-4. Click **Data Sources**.
-
-    ::
-
-5. Click the **Connect** icon.
-
-    ::
-
-6. From the **Type** menu, select **Database**.
-
-    ::
-
-7. Provide the required information and select **Persist this connection beyond the current session**.
-
-    ::
-
-8. Click **Advanced** and provide the required information.
-
-    ::
-
-9. Add the following additional parameters by clicking **Add Parameters**:
-
-.. list-table::
-   :widths: 10 90
-   :header-rows: 1   
-   
-   * - Name
-     - Value
-   * - class
-     - com.sqream.jdbc.SQDriver
-   * - classPath
-     - **   
-   * - url
-     - \jdbc:Sqream://**:**/**;cluster=true
-   * - username
-     - 
-   * - password
-     - 
-   
-10. Click **Test Connection**.
-
-     ::
-
-11. If the connection is successful, click **Save**.
-
-If your connection is not successful, see :ref:`troubleshooting_sas_viya` below.
-
-.. _troubleshooting_sas_viya:
-
-Troubleshooting SAS Viya
--------------------------
-The **Best Practices and Troubleshooting** section describes the following best practices and troubleshooting procedures when connecting to SQream using SAS Viya:
-
-.. contents:: 
-   :local:
-   :depth: 1
-
-Inserting Only Required Data
-~~~~~~~~~~~~~~~~~~
-When using SAS Viya, SQream recommends using only data that you need, as described below:
-
-* Insert only the data sources you need into SAS Viya, excluding tables that don’t require analysis.
-
-    ::
-
-* To increase query performance, add filters before analyzing. Every modification you make while analyzing data queries the SQream database, sometimes several times. Adding filters to the datasource before exploring limits the amount of data analyzed and increases query performance.
-
-Creating a Separate Service for SAS Viya
-~~~~~~~~~~~~~~~~~~
-SQream recommends creating a separate service for SAS Viya with the DWLM. This reduces the impact that Tableau has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
-
-Locating the SQream JDBC Driver
-~~~~~~~~~~~~~~~~~~
-In some cases, SAS Viya cannot locate the SQream JDBC driver, generating the following error message:
-
-.. code-block:: text
-
-   java.lang.ClassNotFoundException: com.sqream.jdbc.SQDriver
-
-**To locate the SQream JDBC driver:**
-
-1. Verify that you have placed the JDBC driver in a directory that SAS Viya can access.
-
-    ::
-
-2. Verify that the classpath in your SAS program is correct, and that SAS Viya can access the file that it references.
-
-    ::
-
-3. Restart SAS Viya.
-
-For more troubleshooting assistance, see the `SQream Support Portal `_.
-
-Supporting TEXT
-~~~~~~~~~~~~~~~~~~
-In SAS Viya versions lower than 4.0, casting ``TEXT`` to ``CHAR`` changes the size to 1,024, such as when creating a table including a ``TEXT`` column. This is resolved by casting ``TEXT`` into ``CHAR`` when using the JDBC driver.
+.. _connect_to_sas_viya:
+
+*************************
+Connect to SQream Using SAS Viya
+*************************
+
+Overview
+==========
+SAS Viya is a cloud-enabled analytics engine used for producing useful insights. The **Connect to SQream Using SAS Viya** page describes how to connect SAS Viya to SQream, and covers the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Installing SAS Viya
+-------------------
+The **Installing SAS Viya** section describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1 
+
+Downloading SAS Viya
+~~~~~~~~~~~~~~~~~~
+Integrating with SQream has been tested with SAS Viya v.03.05 and newer.
+
+To download SAS Viya, see `SAS Viya `_.
+
+Installing the JDBC Driver
+~~~~~~~~~~~~~~~~~~
+The SQream JDBC driver is required for establishing a connection between SAS Viya and SQream.
+
+**To install the JDBC driver:**
+
+#. Download the `JDBC driver `_.
+
+    ::
+
+#. Unzip the JDBC driver into a location on the SAS Viya server.
+   
+   SQream recommends creating the directory ``/opt/sqream`` on the SAS Viya server.
+   
+Configuring SAS Viya
+-------------------
+After installing the JDBC driver, you must configure it from SAS Studio so that SAS Viya can connect to SQream.
+
+**To configure the JDBC driver from the SAS Studio:**
+
+#. Sign in to the SAS Studio.
+
+    ::
+
+#. From the **New** menu, click **SAS Program**.
+   
+    ::
+	
+#. Configure the SQream JDBC connector by adding the following rows:
+
+   .. literalinclude:: connect3.sas
+      :language: php
+
+For more information about writing a connection string, see **Connect to SQream DB with a JDBC Application** and navigate to `Connection String `_.
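As a rough illustration of the connection-string anatomy referenced above, the following Java sketch splits the option portion of a SQream JDBC URL into key/value pairs; the host and credentials shown are placeholders:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SqreamUrlAnatomy {
    // Splits the option part of a SQream JDBC URL into key/value pairs.
    static Map<String, String> options(String url) {
        Map<String, String> opts = new LinkedHashMap<>();
        String[] parts = url.split(";");
        // parts[0] is the jdbc:Sqream://<host>:<port>/<database> prefix; the rest are options.
        for (int i = 1; i < parts.length; i++) {
            String[] kv = parts[i].split("=", 2);
            if (kv.length == 2) opts.put(kv[0], kv[1]);
        }
        return opts;
    }

    public static void main(String[] args) {
        // Placeholder values only; replace with your own cluster details.
        String url = "jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=secret;cluster=true";
        System.out.println(options(url));
    }
}
```

The ``cluster=true`` option is the one this guide relies on for load-balanced connections.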
+
+Operating SAS Viya
+--------------------  
+The **Operating SAS Viya** section describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Using SAS Viya Visual Analytics
+~~~~~~~~~~~~~~~~~~
+This section describes how to use SAS Viya Visual Analytics.
+
+**To use SAS Viya Visual Analytics:**
+
+#. Log in to `SAS Viya Visual Analytics `_ using your credentials:
+
+    ::
+
+2. Click **New Report**.
+
+    ::
+
+3. Click **Data**.
+
+    ::
+
+4. Click **Data Sources**.
+
+    ::
+
+5. Click the **Connect** icon.
+
+    ::
+
+6. From the **Type** menu, select **Database**.
+
+    ::
+
+7. Provide the required information and select **Persist this connection beyond the current session**.
+
+    ::
+
+8. Click **Advanced** and provide the required information.
+
+    ::
+
+9. Add the following additional parameters by clicking **Add Parameters**:
+
+.. list-table::
+   :widths: 10 90
+   :header-rows: 1   
+   
+   * - Name
+     - Value
+   * - class
+     - com.sqream.jdbc.SQDriver
+   * - classPath
+     - **   
+   * - url
+     - jdbc:Sqream://<IP>:<port>/<database>;cluster=true
+   * - username
+     - 
+   * - password
+     - 
+   
+10. Click **Test Connection**.
+
+     ::
+
+11. If the connection is successful, click **Save**.
+
+If your connection is not successful, see :ref:`troubleshooting_sas_viya` below.
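The additional parameters in the table above amount to a plain key/value map. The following Java sketch mirrors them for reference; the jar path, host, and credentials are placeholders, not values required by SAS Viya:

```java
import java.util.Properties;

public class SasViyaJdbcParams {
    // Mirrors the "Add Parameters" table; all values below are placeholders.
    static Properties connectionParams() {
        Properties p = new Properties();
        p.setProperty("class", "com.sqream.jdbc.SQDriver");
        p.setProperty("classPath", "/opt/sqream/sqream-jdbc.jar");            // placeholder jar location
        p.setProperty("url", "jdbc:Sqream://127.0.0.1:3108/master;cluster=true"); // placeholder cluster details
        p.setProperty("username", "rhendricks");                               // placeholder credentials
        p.setProperty("password", "secret");
        return p;
    }

    public static void main(String[] args) {
        connectionParams().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

Only ``class`` is fixed by the SQream JDBC driver; every other value depends on your installation.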
+
+.. _troubleshooting_sas_viya:
+
+Troubleshooting SAS Viya
+-------------------------
+The **Best Practices and Troubleshooting** section describes the following best practices and troubleshooting procedures when connecting to SQream using SAS Viya:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Inserting Only Required Data
+~~~~~~~~~~~~~~~~~~
+When using SAS Viya, SQream recommends using only data that you need, as described below:
+
+* Insert only the data sources you need into SAS Viya, excluding tables that don’t require analysis.
+
+    ::
+
+* To increase query performance, add filters before analyzing. Every modification you make while analyzing data queries the SQream database, sometimes several times. Adding filters to the datasource before exploring limits the amount of data analyzed and increases query performance.
+
+Creating a Separate Service for SAS Viya
+~~~~~~~~~~~~~~~~~~
+SQream recommends creating a separate service for SAS Viya with the DWLM. This reduces the impact that SAS Viya has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
+
+Locating the SQream JDBC Driver
+~~~~~~~~~~~~~~~~~~
+In some cases, SAS Viya cannot locate the SQream JDBC driver, generating the following error message:
+
+.. code-block:: text
+
+   java.lang.ClassNotFoundException: com.sqream.jdbc.SQDriver
+
+**To locate the SQream JDBC driver:**
+
+1. Verify that you have placed the JDBC driver in a directory that SAS Viya can access.
+
+    ::
+
+2. Verify that the classpath in your SAS program is correct, and that SAS Viya can access the file that it references.
+
+    ::
+
+3. Restart SAS Viya.
+
+For more troubleshooting assistance, see the `SQream Support Portal `_.
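One standalone way to reproduce or rule out the ``ClassNotFoundException`` above is to check whether the driver class is loadable from the current classpath, independent of SAS Viya:

```java
public class DriverClasspathCheck {
    // Returns true if the given class can be loaded from the current classpath.
    static boolean isLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Prints false unless the SQream JDBC jar is on the classpath.
        System.out.println(isLoadable("com.sqream.jdbc.SQDriver"));
    }
}
```

Run this with the same ``-cp`` value your SAS program uses; if it prints ``false``, the classpath is the problem, not SAS Viya.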
+
+Supporting TEXT
+~~~~~~~~~~~~~~~~~~
+In SAS Viya versions lower than 4.0, casting ``TEXT`` to ``CHAR`` changes the size to 1,024, such as when creating a table including a ``TEXT`` column. This is resolved by casting ``TEXT`` into ``CHAR`` when using the JDBC driver.
\ No newline at end of file
diff --git a/third_party_tools/client_platforms/sql_workbench.rst b/connecting_to_sqream/client_platforms/sql_workbench.rst
similarity index 92%
rename from third_party_tools/client_platforms/sql_workbench.rst
rename to connecting_to_sqream/client_platforms/sql_workbench.rst
index d46d45ae6..a5f7e8871 100644
--- a/third_party_tools/client_platforms/sql_workbench.rst
+++ b/connecting_to_sqream/client_platforms/sql_workbench.rst
@@ -1,135 +1,137 @@
-.. _connect_to_sql_workbench:
-
-*****************************
-Connect to SQream Using SQL Workbench
-*****************************
-
-You can use SQL Workbench to interact with a SQream DB cluster. SQL Workbench/J is a free SQL query tool, and is designed to run on any JRE-enabled environment. 
-
-This tutorial is a guide that will show you how to connect SQL Workbench to SQream DB.
-
-.. contents:: In this topic:
-   :local:
-
-Installing SQL Workbench with the SQream DB installer (Windows only)
-=====================================================================
-
-SQream DB's driver installer for Windows can install the Java prerequisites and SQL Workbench for you.
-
-#. Get the JDBC driver installer available for download from the `SQream Drivers page `_. The Windows installer takes care of the Java prerequisites and subsequent configuration.
-
-#. Install the driver by following the on-screen instructions in the easy-to-follow installer.
-   By default, the installer does not install SQL Workbench. Make sure to select the item!
-   
-   .. image:: /_static/images/jdbc_windows_installer_screen.png
-
-.. note:: The installer will install SQL Workbench in ``C:\Program Files\SQream Technologies\SQLWorkbench`` by default. You can change this path during the installation.
-
-#. Once finished, SQL Workbench is installed and contains the necessary configuration for connecting to SQream DB clusters.
-
-#. Start SQL Workbench from the Windows start menu. Be sure to select **SQL Workbench (64)** if you're on 64-bit Windows.
-   
-   .. image:: /_static/images/sql_workbench_launch.png
-
-You are now ready to create a profile for your cluster. Continue to :ref:`Creating a new connection profile `.
-
-Installing SQL Workbench manually (Linux, MacOS)
-===================================================
-
-Install Java Runtime 
-------------------------
-
-Both SQL Workbench and the SQream DB JDBC driver require Java 1.8 or newer. You can install either Oracle Java or OpenJDK.
-
-**Oracle Java**
-
-Download and install Java 8 from Oracle for your platform - https://www.java.com/en/download/manual.jsp
-
-**OpenJDK**
-
-For Linux and BSD, see https://openjdk.java.net/install/
-
-For Windows, SQream recommends Zulu 8 https://www.azul.com/downloads/zulu-community/?&version=java-8-lts&architecture=x86-64-bit&package=jdk
-
-Get the SQream DB JDBC driver
--------------------------------
-
-SQream DB's JDBC driver is provided as a zipped JAR file, available for download from the `SQream Drivers page `_. 
-
-Download and extract the JAR file from the zip archive.
-
-Install SQL Workbench
------------------------
-
-#. Download the latest stable release from https://www.sql-workbench.eu/downloads.html . The **Generic package for all systems** is recommended.
-
-#. Extract the downloaded ZIP archive into a directory of your choice.
-
-#. Start SQL workbench. If you are using 64 bit windows, run ``SQLWorkbench64.exe`` instead of ``SQLWOrkbench.exe``.
-
-Setting up the SQream DB JDBC driver profile
----------------------------------------------
-
-#. Define a connection profile - :menuselection:`&File --> &Connect window (Alt+C)`
-   
-   .. image:: /_static/images/sql_workbench_connect_window1.png
-
-#. Open the drivers management window - :menuselection:`&Manage Drivers`
-   
-   .. image:: /_static/images/sql_workbench_manage_drivers.png
-   
-   
-   
-#. Create the SQream DB driver profile
-   
-   .. image:: /_static/images/sql_workbench_create_driver.png
-   
-   #. Click on the Add new driver button ("New" icon)
-   
-   #. Name the driver as you see fit. We recommend calling it SQream DB , where  is the version you have installed.
-   
-   #. 
-      Add the JDBC drivers from the location where you extracted the SQream DB JDBC JAR.
-      
-      If you used the SQream installer, the file will be in ``C:\Program Files\SQream Technologies\JDBC Driver\``
-   
-   #. Click the magnifying glass button to detect the classname automatically. Other details are purely optional
-   
-   #. Click OK to save and return to "new connection screen"
-
-
-.. _new_connection_profile:
-
-Create a new connection profile for your cluster
-=====================================================
-
-   .. image:: /_static/images/sql_workbench_connection_profile.png
-
-#. Create new connection by clicking the New icon (top left)
-
-#. Give your connection a descriptive name
-
-#. Select the SQream Driver that was created in the previous screen
-
-#. Type in your connection string. To find out more about your connection string (URL), see the :ref:`Connection string documentation `.
-
-#. Text the connection details
-
-#. Click OK to save the connection profile and connect to SQream DB
-
-Suggested optional configuration
-==================================
-
-If you installed SQL Workbench manually, you can set a customization to help SQL Workbench show information correctly in the DB Explorer panel.
-
-#. Locate your workbench.settings file
-   On Windows, typically: ``C:\Users\\.sqlworkbench\workbench.settings``
-   On Linux, ``$HOME/.sqlworkbench``
-   
-#. Add the following line at the end of the file:
-   
-   .. code-block:: text
-      
-      workbench.db.sqreamdb.schema.retrieve.change.catalog=true
-
-#. Save the file and restart SQL Workbench
+.. _connect_to_sql_workbench:
+
+****************************************
+Connect to SQream Using SQL Workbench
+****************************************
+
+You can use SQL Workbench to interact with a SQream DB cluster. SQL Workbench/J is a free SQL query tool that runs on any JRE-enabled environment. 
+
+This tutorial shows you how to connect SQL Workbench to SQream DB.
+
+.. contents:: In this topic:
+   :local:
+
+Installing SQL Workbench with the SQream Installer
+=====================================================================
+This section applies to Windows only.
+
+SQream DB's driver installer for Windows can install the Java prerequisites and SQL Workbench for you.
+
+#. Download the JDBC driver installer from the `SQream Drivers page `_. The Windows installer takes care of the Java prerequisites and subsequent configuration.
+
+#. Install the driver by following the on-screen instructions in the installer.
+   By default, the installer does not install SQL Workbench. Make sure to select the SQL Workbench item during installation.
+   
+   .. image:: /_static/images/jdbc_windows_installer_screen.png
+
+   .. note:: The installer will install SQL Workbench in ``C:\Program Files\SQream Technologies\SQLWorkbench`` by default. You can change this path during the installation.
+
+#. Once finished, SQL Workbench is installed and contains the necessary configuration for connecting to SQream DB clusters.
+
+#. Start SQL Workbench from the Windows start menu. Be sure to select **SQL Workbench (64)** if you're on 64-bit Windows.
+   
+   .. image:: /_static/images/sql_workbench_launch.png
+
+You are now ready to create a profile for your cluster. Continue to :ref:`Creating a new connection profile `.
+
+Installing SQL Workbench Manually
+===================================================
+This section applies to Linux and MacOS only.
+
+Install Java Runtime 
+------------------------
+
+Both SQL Workbench and the SQream DB JDBC driver require Java 1.8 or newer. You can install either Oracle Java or OpenJDK.
+
+**Oracle Java**
+
+Download and install Java 8 from Oracle for your platform - https://www.java.com/en/download/manual.jsp
+
+**OpenJDK**
+
+For Linux and BSD, see https://openjdk.java.net/install/
+
+For Windows, SQream recommends Zulu 8 https://www.azul.com/downloads/zulu-community/?&version=java-8-lts&architecture=x86-64-bit&package=jdk
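+
+Either runtime will work; you can confirm that a suitable Java version is available on your ``PATH`` before continuing (the version shown below is illustrative):
+
+.. code-block:: console
+
+   $ java -version
+   openjdk version "1.8.0_292"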
+
+Get the SQream DB JDBC Driver
+-------------------------------
+
+SQream DB's JDBC driver is provided as a zipped JAR file, available for download from the `SQream Drivers page `_. 
+
+Download and extract the JAR file from the zip archive.
+
+Install SQL Workbench
+-----------------------
+
+#. Download the latest stable release from https://www.sql-workbench.eu/downloads.html. The **Generic package for all systems** is recommended.
+
+#. Extract the downloaded ZIP archive into a directory of your choice.
+
+#. Start SQL Workbench. If you are using 64-bit Windows, run ``SQLWorkbench64.exe`` instead of ``SQLWorkbench.exe``.
+
+Setting up the SQream DB JDBC Driver Profile
+---------------------------------------------
+
+#. Define a connection profile - :menuselection:`&File --> &Connect window (Alt+C)`
+   
+   .. image:: /_static/images/sql_workbench_connect_window1.png
+
+#. Open the drivers management window - :menuselection:`&Manage Drivers`
+   
+   .. image:: /_static/images/sql_workbench_manage_drivers.png
+   
+   
+   
+#. Create the SQream DB driver profile
+   
+   .. image:: /_static/images/sql_workbench_create_driver.png
+   
+   #. Click on the Add new driver button ("New" icon)
+   
+   #. Name the driver as you see fit. We recommend calling it ``SQream DB <version>``, where ``<version>`` is the version you have installed.
+   
+   #. 
+      Add the JDBC drivers from the location where you extracted the SQream DB JDBC JAR.
+      
+      If you used the SQream installer, the file will be in ``C:\Program Files\SQream Technologies\JDBC Driver\``
+   
+   #. Click the magnifying glass button to detect the classname automatically. The other details are optional.
+   
+   #. Click OK to save and return to the new connection screen.
+
+
+.. _new_connection_profile:
+
+Create a New Connection Profile for Your Cluster
+=====================================================
+
+   .. image:: /_static/images/sql_workbench_connection_profile.png
+
+#. Create a new connection by clicking the New icon (top left)
+
+#. Give your connection a descriptive name
+
+#. Select the SQream Driver that was created in the previous screen
+
+#. Type in your connection string. To find out more about your connection string (URL), see the :ref:`Connection string documentation `.
+
+#. Test the connection details
+
+#. Click OK to save the connection profile and connect to SQream DB
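+
+A typical SQream JDBC connection string looks like the following sketch; the host, port, database, and credentials are illustrative and must match your own cluster:
+
+.. code-block:: text
+
+   jdbc:Sqream://127.0.0.1:3108/master;user=rhendricks;password=Tr0ub4dor&3;cluster=true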
+
+Suggested Optional Configuration
+==================================
+
+If you installed SQL Workbench manually, you can set a customization to help SQL Workbench show information correctly in the DB Explorer panel.
+
+#. Locate your ``workbench.settings`` file.
+   On Windows, typically: ``C:\Users\<username>\.sqlworkbench\workbench.settings``
+   On Linux: ``$HOME/.sqlworkbench/workbench.settings``
+   
+#. Add the following line at the end of the file:
+   
+   .. code-block:: text
+      
+      workbench.db.sqreamdb.schema.retrieve.change.catalog=true
+
+#. Save the file and restart SQL Workbench
diff --git a/connecting_to_sqream/client_platforms/tableau.rst b/connecting_to_sqream/client_platforms/tableau.rst
new file mode 100644
index 000000000..1d2ca17b6
--- /dev/null
+++ b/connecting_to_sqream/client_platforms/tableau.rst
@@ -0,0 +1,215 @@
+.. _tableau:
+
+****************************************
+Connecting to SQream Using Tableau
+****************************************
+
+Overview
+=====================
+SQream's Tableau connector plugin, based on standard JDBC, enables storing and fast querying of large volumes of data.
+
+The **Connecting to SQream Using Tableau** page is a quick start guide that describes how to install Tableau and the JDBC driver and how to connect to SQream for data analysis. It also describes best practices and how to troubleshoot issues that may occur while installing Tableau. SQream supports both Tableau Desktop and Tableau Server on Windows, MacOS, and Linux distributions.
+
+For more information on SQream's integration with Tableau, see `Tableau's Extension Gallery `_.
+
+The Connecting to SQream Using Tableau page describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Installing the JDBC Driver and Tableau Connector Plugin
+-------------------------------------------------------
+This section describes how to install the JDBC driver using the fully-integrated Tableau connector plugin (Tableau Connector, or **.taco** file). SQream has been tested with Tableau versions 9.2 and newer.
+
+You can connect to SQream using Tableau as follows:
+
+   * **For MacOS or Linux** - See :ref:`Installing the JDBC Driver `.
+
+.. _tableau_jdbc_installer:
+   
+Installing the JDBC Driver
+---------------------------
+If you are using MacOS, Linux, or the Tableau server, after installing the Tableau Desktop application you can install the JDBC driver manually. When the driver is installed, you can connect to SQream.
+
+**To install the JDBC driver:**
+
+1. Download the JDBC installer and SQream Tableau connector (.taco) file from the :ref:`client drivers page`.
+
+    ::
+
+2. Based on your operating system, your Tableau driver directory is located in one of the following places:
+
+   * **Tableau Desktop on MacOS:** *~/Library/Tableau/Drivers*
+   
+      ::
+	  
+   * **Tableau Desktop on Windows:** *C:\\Program Files\\Tableau\\Drivers*
+      
+      ::
+   
+   * **Tableau on Linux**: */opt/tableau/tableau_driver/jdbc*
+	  
+   Note the following when installing the JDBC driver:
+
+   * You must have read permissions on the .jar file.
+   
+      ::
+	  
+   * Tableau requires a JDBC 4.0 or later driver.
+   
+      ::
+	  
+   * Tableau requires a Type 4 JDBC driver.
+   
+      ::
+	  
+   * The latest 64-bit version of Java 8 must be installed.
+
+3. Install the **SQreamDB.taco** file by moving it into the Tableau connectors directory.
+   
+   Based on the installation method that you used, your Tableau connectors directory is located in one of the following places:
+
+   * **Tableau Desktop on Windows:** *C:\\Users\\\\My Tableau Repository\\Connectors*
+   
+      ::
+	  
+   * **Tableau Desktop on MacOS:** *~/My Tableau Repository/Connectors*
+
+You can now restart Tableau Desktop or Server to begin using the SQream driver by connecting to SQream as described in the section below.
+
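+On MacOS, for example, placing the two files can be done from a terminal as in the following sketch (the jar file name and version are assumptions; use the files you actually downloaded):
+
+.. code-block:: console
+
+   $ cp sqream-jdbc-4.5.3.jar ~/Library/Tableau/Drivers/
+   $ cp SQreamDB.taco ~/"My Tableau Repository"/Connectors/
+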
+Connecting to SQream
+---------------------
+After installing the JDBC driver you can connect to SQream.
+
+**To connect to SQream:**
+
+#. Start Tableau Desktop.
+
+    ::
+	
+#. In the **Connect** menu, in the **To a Server** sub-menu, click **More...**.
+
+   More connection options are displayed.
+
+    ::
+	
+#. Select **SQream DB by SQream Technologies**.
+
+   The **New Connection** dialog box is displayed.
+
+    ::
+	
+#. In the New Connection dialog box, fill in the fields and click **Sign In**.
+
+  The following table describes the fields:
+   
+  .. list-table:: 
+     :widths: 15 38 38
+     :header-rows: 1
+   
+     * - Item
+       - Description
+       - Example
+     * - Server
+       - Defines the server of the SQream worker.
+       - ``127.0.0.1`` or ``sqream.mynetwork.co``
+     * - Port
+       - Defines the TCP port of the SQream worker.
+       - ``3108`` when using a load balancer, or ``5100`` when connecting directly to a worker with SSL.
+     * - Database
+       - Defines the database to establish a connection with.
+       - ``master``
+     * - Cluster
+       - Enables (``true``) or disables (``false``) the load balancer. After enabling or disabling the load balance, verify the connection.
+       - 
+     * - Username
+       - Specifies the username of a role to use when connecting.
+       - ``rhendricks``	 
+     * - Password
+       - Specifies the password of the selected role.
+       - ``Tr0ub4dor&3``
+     * - Require SSL (recommended)
+       - Sets SSL as a requirement for establishing this connection.
+       - 
+
+The connection is established and the data source page is displayed.
+  
+.. _set_up_sqream_tables_as_data_sources:
+
+Setting Up SQream Tables as Data Sources
+----------------------------------------
+After connecting to SQream you must set up the SQream tables as data sources.
+
+**To set up SQream tables as data sources:**
+	
+1. From the **Table** menu, select the desired database and schema.
+
+   SQream's default schema is **public**.
+   
+    ::
+	
+#. Drag the desired tables into the main area (labeled **Drag tables here**).
+
+   This area is also used for specifying joins and data source filters.
+   
+    ::
+	
+#. Open a new sheet to analyze data. 
+
+Tableau Best Practices and Troubleshooting
+------------------------------------------
+This section describes the following best practices and troubleshooting procedures when connecting to SQream using Tableau:
+
+.. contents::
+   :local:
+
+Using Tableau's Table Query Syntax
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Dragging your desired tables into the main area in Tableau builds queries based on Tableau's own syntax. This helps ensure good performance, while using views or custom SQL may degrade it. In addition, SQream recommends using :ref:`create_view` to create pre-optimized views for your data sources to point to. 
+
+Creating a Separate Service for Tableau
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+SQream recommends creating a separate service for Tableau with the DWLM. This reduces the impact that Tableau has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
+
+Troubleshooting Workbook Performance Before Deploying to the Tableau Server
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Tableau has a built-in `performance recorder `_ that shows how time is being spent. If you're seeing slow performance, this could be the result of a misconfiguration such as setting concurrency too low.
+
+Use the Tableau Performance Recorder for viewing the performance of queries run by Tableau. You can use this information to identify queries that can be optimized by using views.
+
+Troubleshooting Error Codes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Tableau may be unable to locate the SQream JDBC driver. The following message is displayed when Tableau cannot locate the driver:
+
+.. code-block:: console
+     
+   Error Code: 37CE01A3, No suitable driver installed or the URL is incorrect
+   
+**To troubleshoot error codes:**
+
+If Tableau cannot locate the SQream JDBC driver, do the following:
+
+ 1. Verify that the JDBC driver is located in the correct directory:
+ 
+   * **Tableau Desktop on Windows:** *C:\\Program Files\\Tableau\\Drivers*
+   
+      ::
+	  
+   * **Tableau Desktop on MacOS:** *~/Library/Tableau/Drivers*
+   
+      ::
+	  
+   * **Tableau on Linux**: */opt/tableau/tableau_driver/jdbc*
+   
+ 2. Find the file path for the JDBC driver and add it to the Java classpath:
+   
+   * **For Linux** - ``export CLASSPATH=<path-to-sqream-jdbc-jar>:$CLASSPATH``
+
+        ::
+		
+   * **For Windows** - add an environment variable for the classpath:  
+
+	.. image:: /_static/images/Third_Party_Connectors/tableau/envrionment_variable_for_classpath.png
+
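+   On Windows, a persistent ``CLASSPATH`` entry can also be set from a command prompt, as in this sketch (the jar path and name are assumptions; point it at your actual SQream JDBC jar):
+
+   .. code-block:: console
+
+      setx CLASSPATH "C:\Program Files\Tableau\Drivers\sqream-jdbc-4.5.3.jar;%CLASSPATH%"
+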
+If you experience issues after restarting Tableau, see the `SQream support portal `_.
\ No newline at end of file
diff --git a/connecting_to_sqream/client_platforms/talend.rst b/connecting_to_sqream/client_platforms/talend.rst
new file mode 100644
index 000000000..7f11092fb
--- /dev/null
+++ b/connecting_to_sqream/client_platforms/talend.rst
@@ -0,0 +1,123 @@
+.. _talend:
+
+****************************************
+Connecting to SQream Using Talend
+****************************************
+
+Overview
+================= 
+This page describes how to use Talend to interact with a SQream cluster. The Talend connector is used for reading data from a SQream cluster and loading data into SQream. In addition, this page provides a viability report on Talend's compatibility with SQream for stakeholders.
+
+The **Connecting to SQream Using Talend** page describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Creating a New Metadata JDBC DB Connection
+------------------------------------------
+**To create a new metadata JDBC DB connection:**
+
+1. In the **Repository** panel, navigate to **Metadata** and right-click **Db connections**.
+
+    ::
+	
+2. Select **Create connection**.
+
+    ::
+	
+3. In the **Name** field, type a name.
+
+    ::
+
+   Note that the name cannot contain spaces.
+
+4. In the **Purpose** field, type a purpose and click **Next**.
+
+   Note that you cannot continue to the next step until you define both a Name and a Purpose.
+
+    ::
+
+5. In the **DB Type** field, select **JDBC**.
+
+    ::
+
+6. In the **JDBC URL** field, type the relevant connection string.
+
+   For connection string examples, see `Connection Strings `_.
+   
+7. In the **Drivers** field, click the **Add** button.
+
+   The **"newLine"** entry is added.
+
+8. On the **newLine** entry, click the ellipsis.
+
+   The **Module** window is displayed.
+
+9. From the Module window, select **Artifact repository (local m2/nexus)** and select **Install a new module**.
+
+    ::
+
+10. Click the ellipsis.
+
+    Your hard drive is displayed.	
+
+11. Navigate to a **JDBC jar file** (such as **sqream-jdbc-4.5.3.jar**) and click **Open**.
+
+     ::
+
+12. Click **Detect the module install status**.
+
+     ::
+
+13. Click **OK**.
+
+    The JDBC that you selected is displayed in the **Driver** field.
+
+14. Click **Select class name**.
+
+     ::
+
+15. Click **Test connection**.
+
+    If a driver class is not found (for example, you didn't select a JDBC jar file), the following error message is displayed:
+
+    After creating a new metadata JDBC DB connection, you can do the following:
+
+    * Use your new metadata connection.
+	
+	   ::
+	   
+    * Drag it to the **job** screen.
+	
+	   ::
+	   
+    * Build Talend components.
+ 
+    For more information on loading data from JSON files to the Talend Open Studio, see `How to Load Data from JSON Files in Talend `_.
+
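+The JDBC URL used in step 6 follows SQream's standard format, sketched below (host, port, database, and credentials are placeholders):
+
+.. code-block:: text
+
+   jdbc:Sqream://<host>:<port>/<database>;user=<username>;password=<password>;cluster=true
+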
+Supported SQream Drivers
+------------------------
+The following list shows the supported SQream drivers and versions:
+
+* **JDBC** - Version 4.3.3 and higher.
+
+   ::
+   
+* **ODBC** - Version 4.0.0. This version requires a Bridge to connect. For more information on the required Bridge, see `Connecting Talend on Windows to an ODBC Database `_.
+
+Supported Data Sources
+----------------------
+Talend Cloud connectors let you create reusable connections with a wide variety of systems and environments, such as those shown below. This lets you access and read records from a range of diverse data sources.
+
+* **Connections:** Connections are environments or systems for storing datasets, including databases, file systems, distributed systems and platforms. Because these systems are reusable, you only need to establish connectivity with them once.
+
+   ::
+
+* **Datasets:** Datasets include database tables, file names, topics (Kafka), queues (JMS) and file paths (HDFS). For more information on the complete list of connectors and datasets that Talend supports, see `Introducing Talend Connectors `_.
+
+Known Issues
+----------------
+As of 6/1/2021, schemas were not displayed for tables with identical names.
+
+If you experience issues using Talend, see the `SQream support portal `_.
\ No newline at end of file
diff --git a/third_party_tools/client_platforms/test.php b/connecting_to_sqream/client_platforms/test.php
similarity index 96%
rename from third_party_tools/client_platforms/test.php
rename to connecting_to_sqream/client_platforms/test.php
index fef04e699..88ec88338 100644
--- a/third_party_tools/client_platforms/test.php
+++ b/connecting_to_sqream/client_platforms/test.php
@@ -1,16 +1,16 @@
- 
+ 
diff --git a/third_party_tools/client_platforms/tibco_spotfire.rst b/connecting_to_sqream/client_platforms/tibco_spotfire.rst
similarity index 90%
rename from third_party_tools/client_platforms/tibco_spotfire.rst
rename to connecting_to_sqream/client_platforms/tibco_spotfire.rst
index b0a707c51..2d85fecf1 100644
--- a/third_party_tools/client_platforms/tibco_spotfire.rst
+++ b/connecting_to_sqream/client_platforms/tibco_spotfire.rst
@@ -18,7 +18,7 @@ Establishing a Connection between TIBCO Spotfire and SQream
 -----------------
 TIBCO Spotfire supports the following versions:
 
-* **JDBC driver** - Version 4.5.3
+* **JDBC driver** - Version 4.5.2 
 * **ODBC driver** - Version 4.1.1
 
 SQream supports TIBCO Spotfire version 7.12.0.
@@ -128,52 +128,52 @@ After creating a connection, you can create your SQream data source template.
        .. code-block:: console
 	
           
-            SQream   
-            com.sqream.jdbc.SQDriver   
-            jdbc:Sqream://<host>:<port>/database;user=sqream;password=sqream;cluster=true   
-            true   
-            true   
-            false   
-            TABLE,EXTERNAL_TABLE   
+            SQream
+            com.sqream.jdbc.SQDriver
+            jdbc:Sqream://<host>:<port>/database;user=sqream;password=sqream;cluster=true
+            true
+            true
+            false
+            TABLE,EXTERNAL_TABLE
             
              
-                Bool   
-                Integer   
+                Bool
+                Integer
               
               
-                VARCHAR(2048)   
-                String   
+                VARCHAR(2048)
+                String
               
               
-                INT   
-                Integer   
+                INT
+                Integer
               
               
-                BIGINT   
-                LongInteger   
+                BIGINT
+                LongInteger
               
               
-                Real   
-                Real   
+                Real
+                Real
               
 	           
-                Decimal   
-                Float   
+                Decimal
+                Float
               
                
-                Numeric   
-                Float   
+                Numeric
+                Float
               
               
-                Date   
-                DATE   
+                Date
+                DATE
               
               
-                DateTime   
-                DateTime   
+                DateTime
+                DateTime
               
              
-               
+            
           			
 	
 4. Click **Save configuration**.
@@ -384,4 +384,4 @@ Information Services do not Support Live Queries
 ~~~~~~~~~~~
 TIBCO Spotfire data connectors support live queries, but no APIs currently exist for creating custom data connectors. This is resolved by creating a customized SQream adapter using TIBCO's **Data Virtualization (TDV)** or the **Spotfire Advanced Services (ADS)**. These can be used from the built-in TDV connector to enable live queries.
 
-This resolution applies to JDBC and ODBC drivers.
+This resolution applies to JDBC and ODBC drivers.
\ No newline at end of file
diff --git a/third_party_tools/connectivity_ecosystem.jpg b/connecting_to_sqream/connectivity_ecosystem.jpg
similarity index 100%
rename from third_party_tools/connectivity_ecosystem.jpg
rename to connecting_to_sqream/connectivity_ecosystem.jpg
diff --git a/third_party_tools/index.rst b/connecting_to_sqream/index.rst
similarity index 92%
rename from third_party_tools/index.rst
rename to connecting_to_sqream/index.rst
index 1052f9f27..ecb9e4715 100644
--- a/third_party_tools/index.rst
+++ b/connecting_to_sqream/index.rst
@@ -1,18 +1,18 @@
-.. _third_party_tools:
-
-*************************
-Third Party Tools
-*************************
-SQream supports the most common database tools and interfaces, giving you direct access through a variety of drivers, connectors, and visualiztion tools and utilities. The tools described on this page have been tested and approved for use with SQream. Most third party tools that work through JDBC, ODBC, and Python should work.
-
-This section provides information about the following third party tools:
-
-.. toctree::
-   :maxdepth: 2
-   :glob:
-   :titlesonly:
-   
-   client_platforms/index
-   client_drivers/index
-
+.. _connecting_to_sqream:
+
+*************************
+Connecting to SQream
+*************************
+SQream supports the most common database tools and interfaces, giving you direct access through a variety of drivers, connectors, and visualization tools and utilities. The tools described on this page have been tested and approved for use with SQream. Most third-party tools that work through JDBC, ODBC, and Python should work.
+
+This section provides information about the following third party tools:
+
+.. toctree::
+   :maxdepth: 2
+   :glob:
+   :titlesonly:
+   
+   client_platforms/index
+   client_drivers/index
+
 If you need a tool that SQream does not support, contact SQream Support or your SQream account manager for more information.
\ No newline at end of file
diff --git a/data_ingestion/avro.rst b/data_ingestion/avro.rst
new file mode 100644
index 000000000..a548ec265
--- /dev/null
+++ b/data_ingestion/avro.rst
@@ -0,0 +1,370 @@
+.. _avro:
+
+**************************
+Ingesting Data from Avro
+**************************
+The **Ingesting Data from Avro** page describes how to ingest data from Avro files into SQream and includes the following:
+
+
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Overview
+===========
+**Avro** is a well-known data serialization system that relies on schemas. Due to its flexibility as an efficient data storage method, SQream supports the Avro binary data format as an alternative to JSON. Avro files are represented using the **Object Container File** format, in which the Avro schema is encoded alongside binary data. Multiple files loaded in the same transaction are serialized using the same schema. If they are not serialized using the same schema, an error message is displayed. SQream uses the **.avro** extension for ingested Avro files.
+
+Making Avro Files Accessible to Workers
+=======================================
+To give workers access to files, every node must have the same view of the storage being used.
+
+The following requirements apply for Avro files to be accessible to workers:
+
+* For files hosted on NFS, ensure that the mount is accessible from all servers.
+
+* For HDFS, ensure that SQream servers have access to the HDFS name node with the correct **user-id**. For more information, see :ref:`hdfs`.
+
+* For S3, ensure network access to the S3 endpoint. For more information, see :ref:`s3`.
+
+For more information about restricted worker access, see :ref:`workload_manager`.
+
+Preparing Your Table
+====================
+You can base your table structure on either a regular table or a foreign table:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Creating a Table
+---------------------   
+Before loading data, you must build a ``CREATE TABLE`` statement that corresponds with the file structure of the inserted table.
+
+The example in this section is based on the source ``nba.avro`` table shown below:
+
+.. csv-table:: nba.avro
+   :file: nba-t10.csv
+   :widths: auto
+   :header-rows: 1 
+
+The following example shows the correct file structure used to create the ``CREATE TABLE`` statement based on the **nba.avro** file:
+
+.. code-block:: postgres
+   
+   CREATE TABLE ext_nba
+   (
+
+        Name       TEXT(40),
+        Team       TEXT(40),
+        Number     BIGINT,
+        Position   TEXT(2),
+        Age        BIGINT,
+        Height     TEXT(4),
+        Weight     BIGINT,
+        College    TEXT(40),
+        Salary     FLOAT
+    )
+    WRAPPER avro_fdw
+    OPTIONS
+    (
+      LOCATION =  's3://sqream-demo-data/nba.avro'
+    );
+
+.. tip:: 
+
+   An exact match must exist between the SQream and Avro types. For unsupported column types, you can set the type to any type and exclude it from subsequent queries.
+
+.. note:: The **nba.avro** file is stored on S3 at ``s3://sqream-demo-data/nba.avro``.
+
+Creating a Foreign Table
+------------------------
+Before loading data, you must build a ``CREATE FOREIGN TABLE`` statement that corresponds with the file structure of the inserted table.
+
+The example in this section is based on the source ``nba.avro`` table shown below:
+
+.. csv-table:: nba.avro
+   :file: nba-t10.csv
+   :widths: auto
+   :header-rows: 1 
+
+The following example shows the correct file structure used to create the ``CREATE FOREIGN TABLE`` statement based on the **nba.avro** file:
+
+.. code-block:: postgres
+   
+   CREATE FOREIGN TABLE ext_nba
+   (
+
+        Name       TEXT(40),
+        Team       TEXT(40),
+        Number     BIGINT,
+        Position   TEXT(2),
+        Age        BIGINT,
+        Height     TEXT(4),
+        Weight     BIGINT,
+        College    TEXT(40),
+        Salary     FLOAT
+    )
+    WRAPPER avro_fdw
+    OPTIONS
+    (
+      LOCATION =  's3://sqream-demo-data/nba.avro'
+    );
+
+.. tip:: 
+
+   An exact match must exist between the SQream and Avro types. For unsupported column types, you can set the type to any type and exclude it from subsequent queries.
+
+.. note:: The **nba.avro** file is stored on S3 at ``s3://sqream-demo-data/nba.avro``.
+
+.. note:: The examples in the sections above are identical except for the syntax used to create the tables.
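+
+Once the foreign table exists, a common next step is to materialize its contents as a regular SQream table, sketched below using the example above (assuming ``CREATE TABLE ... AS`` is available in your SQream version):
+
+.. code-block:: postgres
+
+   CREATE TABLE nba AS
+     SELECT * FROM ext_nba;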
+
+Mapping Between SQream and Avro Data Types
+==========================================
+Mapping between SQream and Avro data types depends on the Avro data type:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Primitive Data Types
+--------------------
+The following table shows the supported **Primitive** data types:
+
++-------------+------------------------------------------------------+
+| Avro Type   | SQream Type                                          |
+|             +-----------+---------------+-----------+--------------+
+|             | Number    | Date/Datetime | String    | Boolean      |
++=============+===========+===============+===========+==============+
+| ``null``    | Supported | Supported     | Supported | Supported    |
++-------------+-----------+---------------+-----------+--------------+
+| ``boolean`` |           |               | Supported | Supported    |
++-------------+-----------+---------------+-----------+--------------+
+| ``int``     | Supported |               | Supported |              |
++-------------+-----------+---------------+-----------+--------------+
+| ``long``    | Supported |               | Supported |              |
++-------------+-----------+---------------+-----------+--------------+
+| ``float``   | Supported |               | Supported |              |
++-------------+-----------+---------------+-----------+--------------+
+| ``double``  | Supported |               | Supported |              |
++-------------+-----------+---------------+-----------+--------------+
+| ``bytes``   |           |               |           |              |
++-------------+-----------+---------------+-----------+--------------+
+| ``string``  |           | Supported     | Supported |              |
++-------------+-----------+---------------+-----------+--------------+
+
+Complex Data Types
+------------------
+The following table shows the supported **Complex** data types:
+
++------------+-------------------------------------------------------+
+|            | SQream Type                                           |
+|            +------------+----------------+-------------+-----------+
+|Avro Type   | Number     |  Date/Datetime |   String    | Boolean   |
++============+============+================+=============+===========+
+| ``record`` |            |                |             |           |
++------------+------------+----------------+-------------+-----------+
+| ``enum``   |            |                | Supported   |           |
++------------+------------+----------------+-------------+-----------+
+| ``array``  |            |                |             |           |
++------------+------------+----------------+-------------+-----------+
+| ``map``    |            |                |             |           |
++------------+------------+----------------+-------------+-----------+
+| ``union``  |  Supported | Supported      | Supported   | Supported |
++------------+------------+----------------+-------------+-----------+
+| ``fixed``  |            |                |             |           |
++------------+------------+----------------+-------------+-----------+
+
+Logical Data Types
+------------------
+The following table shows the supported **Logical** data types:
+
++----------------------------+-------------------------------------------------+
+| Avro Type                  | SQream Type                                     |
+|                            +-----------+---------------+-----------+---------+
+|                            | Number    | Date/Datetime | String    | Boolean |
++============================+===========+===============+===========+=========+
+| ``decimal``                | Supported |               | Supported |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``uuid``                   |           |               | Supported |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``date``                   |           | Supported     | Supported |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``time-millis``            |           |               |           |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``time-micros``            |           |               |           |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``timestamp-millis``       |           | Supported     | Supported |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``timestamp-micros``       |           | Supported     | Supported |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``local-timestamp-millis`` |           |               |           |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``local-timestamp-micros`` |           |               |           |         |
++----------------------------+-----------+---------------+-----------+---------+
+| ``duration``               |           |               |           |         |
++----------------------------+-----------+---------------+-----------+---------+
+
+.. note:: Number types include **tinyint**, **smallint**, **int**, **bigint**, **real** and **float**, and **numeric**. String types include **text**.
+
+Mapping Objects to Rows
+========================
+When mapping objects to rows, each Avro object or message must contain one ``record`` type object corresponding to a single row in SQream. The ``record`` fields are associated by name to their target table columns. Additional unmapped fields will be ignored. Note that using the JSONPath option overrides this.
+
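+For example, the following hypothetical Avro schema defines a ``record`` whose fields map by name to a three-column table (the field and table names here are illustrative, not part of any SQream sample data):
+
+.. code-block:: json
+
+   {
+     "type": "record",
+     "name": "user",
+     "fields": [
+       {"name": "id", "type": "int"},
+       {"name": "name", "type": "string"},
+       {"name": "email", "type": "string"}
+     ]
+   }
+
+Each ``user`` object in the file becomes one row, with ``id``, ``name``, and ``email`` written to the columns of the same names; any additional fields in the record are ignored.
+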
+Ingesting Data into SQream
+===========================
+This section includes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Syntax
+-----------
+Before ingesting data into SQream from an Avro file, you must create a destination table.
+
+After creating a table, you can ingest data from an Avro file into SQream using the following syntax:
+
+.. code-block:: postgres
+
+   COPY [schema_name.]table_name
+     FROM WRAPPER avro_fdw
+   ;
+
+For Avro files, the foreign data wrapper name (``fdw_name``) is ``avro_fdw``.
+   
+Example
+-----------
+The following is the general form of the ``COPY FROM`` statement, including its options:
+
+.. code-block:: postgres
+   
+   COPY t
+     FROM WRAPPER fdw_name
+     OPTIONS
+     (
+       [ copy_from_option [, ...] ]
+     )
+   ;
+
+The following is an example of loading data from an Avro file into SQream:
+
+.. code-block:: postgres
+
+    COPY t
+      FROM WRAPPER avro_fdw
+    OPTIONS
+    (
+      LOCATION =  's3://sqream-demo-data/nba.avro'
+    );
+	  
+For more examples, see :ref:`additional_examples`.
+
+Parameters
+===================
+The following table shows the Avro parameter:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+   * - ``schema_name``
+     - The schema name for the table. Defaults to ``public`` if not specified.
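+
+As a sketch of using the ``schema_name`` parameter explicitly, the following qualifies the target table with a schema (the schema, table, and bucket names are illustrative):
+
+.. code-block:: postgres
+
+   COPY staging.t
+     FROM WRAPPER avro_fdw
+     OPTIONS
+     (
+       LOCATION = 's3://my-bucket/data/*.avro'
+     );
+
+Omitting the ``staging.`` prefix would load into ``public.t``, since the schema defaults to ``public``.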
+
+Best Practices
+===============
+Because foreign tables do not automatically verify file integrity or structure, SQream recommends manually verifying your table output when ingesting Avro files into SQream. This lets you confirm that the table output matches the data you originally inserted.
+
+The following is an example of the output based on the **nba.avro** table:
+
+.. code-block:: psql
+   
+   t=> SELECT * FROM ext_nba LIMIT 10;
+   Name          | Team           | Number | Position | Age | Height | Weight | College           | Salary  
+   --------------+----------------+--------+----------+-----+--------+--------+-------------------+---------
+   Avery Bradley | Boston Celtics |      0 | PG       |  25 | 6-2    |    180 | Texas             |  7730337
+   Jae Crowder   | Boston Celtics |     99 | SF       |  25 | 6-6    |    235 | Marquette         |  6796117
+   John Holland  | Boston Celtics |     30 | SG       |  27 | 6-5    |    205 | Boston University |         
+   R.J. Hunter   | Boston Celtics |     28 | SG       |  22 | 6-5    |    185 | Georgia State     |  1148640
+   Jonas Jerebko | Boston Celtics |      8 | PF       |  29 | 6-10   |    231 |                   |  5000000
+   Amir Johnson  | Boston Celtics |     90 | PF       |  29 | 6-9    |    240 |                   | 12000000
+   Jordan Mickey | Boston Celtics |     55 | PF       |  21 | 6-8    |    235 | LSU               |  1170960
+   Kelly Olynyk  | Boston Celtics |     41 | C        |  25 | 7-0    |    238 | Gonzaga           |  2165160
+   Terry Rozier  | Boston Celtics |     12 | PG       |  22 | 6-2    |    190 | Louisville        |  1824360
+   Marcus Smart  | Boston Celtics |     36 | PG       |  22 | 6-4    |    220 | Oklahoma State    |  3431040
+
+.. note:: If your table output has errors, verify that the structure of the Avro files correctly corresponds to the external table structure that you created.
+
+.. _additional_examples:
+
+Additional Examples
+====================
+This section includes the following additional examples of loading data into SQream:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Omitting Unsupported Column Types
+----------------------------------
+When loading data, you can omit columns using the ``NULL as`` argument. You can use this argument to omit unsupported columns from queries that access foreign tables. Because omitted columns are never read, they do not generate a "type mismatch" error.
+
+In the example below, the ``Position`` column is not supported due to its type.
+
+.. code-block:: postgres
+   
+   CREATE TABLE nba AS
+      SELECT Name, Team, Number, NULL as Position, Age, Height, Weight, College, Salary FROM ext_nba;   
+
+Modifying Data Before Loading
+------------------------------
+One of the main reasons for staging data with a foreign table is to examine and modify table contents before loading them into SQream.
+
+For example, we can replace pounds with kilograms using the :ref:`create_table_as` statement.
+
+In the example below, the ``Position`` column is set to the default ``NULL``.
+
+.. code-block:: postgres
+   
+   CREATE TABLE nba AS 
+      SELECT name, team, number, NULL as Position, age, height, (weight / 2.205) as weight, college, salary 
+              FROM ext_nba
+              ORDER BY weight;
+
+Loading a Table from a Directory of Avro Files on HDFS
+--------------------------------------------------------
+The following is an example of loading a table from a directory of Avro files on HDFS:
+
+.. code-block:: postgres
+
+   CREATE FOREIGN TABLE ext_users
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
+   WRAPPER avro_fdw
+   OPTIONS
+     (
+        LOCATION =  'hdfs://hadoop-nn.piedpiper.com/rhendricks/users/*.avro'
+     );
+   
+   CREATE TABLE users AS SELECT * FROM ext_users;
+
+For more configuration option examples, navigate to the :ref:`create_foreign_table` page and see the **Parameters** table.
+
+Loading a Table from a Directory of Avro Files on S3
+------------------------------------------------------
+The following is an example of loading a table from a directory of Avro files on S3:
+
+.. code-block:: postgres
+
+   CREATE FOREIGN TABLE ext_users
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
+   WRAPPER avro_fdw
+   OPTIONS
+     ( LOCATION = 's3://pp-secret-bucket/users/*.avro',
+       AWS_ID = 'our_aws_id',
+       AWS_SECRET = 'our_aws_secret'
+      );
+   
+   CREATE TABLE users AS SELECT * FROM ext_users;
\ No newline at end of file
diff --git a/data_ingestion/csv.rst b/data_ingestion/csv.rst
index f44c3c9e9..afe819fb5 100644
--- a/data_ingestion/csv.rst
+++ b/data_ingestion/csv.rst
@@ -1,16 +1,17 @@
 .. _csv:
 
 **********************
-Inserting Data from a CSV File
+Ingesting Data from a CSV File
 **********************
 
-This guide covers inserting data from CSV files into SQream DB using the :ref:`copy_from` method. 
+This guide covers ingesting data from CSV files into SQream DB using the :ref:`copy_from` method. 
 
 
-.. contents:: In this topic:
+.. contents:: 
    :local:
+   :depth: 1
 
-1. Prepare CSVs
+Prepare CSVs
 =====================
 
 Prepare the source CSVs, with the following requirements:
@@ -44,7 +45,7 @@ Prepare the source CSVs, with the following requirements:
    .. note:: If a text field is quoted but contains no content (``""``) it is considered an empty text field. It is not considered ``NULL``.
 
 
-2. Place CSVs where SQream DB workers can access
+Place CSVs where SQream DB workers can access
 =======================================================
 
 During data load, the :ref:`copy_from` command can run on any worker (unless explicitly speficied with the :ref:`workload_manager`).
@@ -56,7 +57,7 @@ It is important that every node has the same view of the storage being used - me
 
 * For S3, ensure network access to the S3 endpoint. See our :ref:`s3` guide for more information.
 
-3. Figure out the table structure
+Figure out the table structure
 ===============================================
 
 Prior to loading data, you will need to write out the table structure, so that it matches the file structure.
@@ -83,19 +84,19 @@ We will make note of the file structure to create a matching ``CREATE TABLE`` st
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
 
-4. Bulk load the data with COPY FROM
+Bulk load the data with COPY FROM
 ====================================
 
 The CSV is a standard CSV, but with two differences from SQream DB defaults:
diff --git a/data_ingestion/index.rst b/data_ingestion/index.rst
index 83aca40ea..6e8530710 100644
--- a/data_ingestion/index.rst
+++ b/data_ingestion/index.rst
@@ -3,16 +3,18 @@
 *************************
 Data Ingestion Sources
 *************************
-The **Data Ingestion Sources** provides information about the following:
+The **Data Ingestion Sources** page provides information about the following:
 
 .. toctree::
    :maxdepth: 1
    :glob:
    
-   inserting_data
+   ingesting_data
+   avro
    csv
    parquet
    orc
    oracle
+   json
 
-For information about database tools and interfaces that SQream supports, see `Third Party Tools `_.
+For information about database tools and interfaces that SQream supports, see `Third Party Tools `_.
\ No newline at end of file
diff --git a/data_ingestion/ingesting_data.rst b/data_ingestion/ingesting_data.rst
new file mode 100644
index 000000000..97d57ef4c
--- /dev/null
+++ b/data_ingestion/ingesting_data.rst
@@ -0,0 +1,481 @@
+.. _ingesting_data:
+
+***************************
+Ingesting Data Overview
+***************************
+The **Ingesting Data Overview** page provides basic information useful when ingesting data into SQream from a variety of sources and locations, and describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+   
+Getting Started
+================================
+SQream supports ingesting data using the following methods:
+
+* Executing the ``INSERT`` statement using a client driver.
+
+* Executing the ``COPY FROM`` statement or ingesting data from foreign tables:
+
+  * Local filesystem and locally mounted network filesystems
+  * Ingesting Data using the Amazon S3 object storage service
+  * Ingesting Data using an HDFS data storage system
+
+SQream supports loading files from the following formats:
+
+* Text - CSV, TSV, and PSV
+* Parquet
+* ORC
+
+For more information, see the following:
+
+* Using the ``INSERT`` statement - :ref:`insert`
+
+* Using client drivers - :ref:`Client drivers`
+
+* Using the ``COPY FROM`` statement - :ref:`copy_from`
+
+* Using the Amazon S3 object storage service - :ref:`s3`
+
+* Using the HDFS data storage system - :ref:`hdfs`
+
+* Loading data from foreign tables - :ref:`foreign_tables`
+
+Data Loading Considerations
+================================
+The **Data Loading Considerations** section describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Verifying Data and Performance after Loading
+----------------------------------------------
+Like many RDBMSs, SQream recommends its own set of best practices for table design and query optimization. When using SQream, verify the following:
+
+* That your data is structured as you expect (row counts, data types, formatting, content).
+
+* That your query performance is adequate.
+
+* That you followed the table design best practices (:ref:`Optimization and Best Practices`).
+
+* That you've tested and verified that your applications work (such as :ref:`Tableau`).
+
+* That your data types have not been over-provisioned.
+
+File Source Location when Loading
+----------------------------------
+While loading data, the ``COPY FROM`` statement can run on any worker. If you are running multiple nodes, verify that all nodes have the same view of the source files. Loading from a local file that exists on only one node, and not on shared storage, may fail. If required, you can control which node a statement runs on using the Workload Manager.
+
+For more information, see the following:
+
+* :ref:`copy_from`
+
+* :ref:`workload_manager`
+
+Supported Load Methods
+-------------------------------
+You can use the ``COPY FROM`` syntax to load CSV files.
+
+.. note:: The ``COPY FROM`` statement cannot be used for loading data from Parquet and ORC files.
+
+You can use foreign tables to load text files, Parquet, and ORC files, and to transform your data before generating a full table, as described in the following table:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   :stub-columns: 1
+   
+   * - Method/File Type
+     - Text (CSV)
+     - Parquet
+     - ORC
+     - Streaming Data
+   * - COPY FROM
+     - Supported
+     - Not supported
+     - Not supported
+     - Not supported
+   * - Foreign tables
+     - Supported
+     - Supported
+     - Supported
+     - Not supported
+   * - INSERT
+     - Not supported
+     - Not supported
+     - Not supported
+     - Supported (Python, JDBC, Node.JS)
+	 
+For more information, see the following:
+
+* :ref:`COPY FROM`
+
+* :ref:`Foreign tables`
+
+* :ref:`INSERT`
+
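+As a sketch of the first row in the table above, a minimal ``COPY FROM`` load of a CSV file might look as follows (the table and file names are illustrative, and the exact options available depend on your SQream version):
+
+.. code-block:: postgres
+
+   COPY nba
+     FROM 'nba.csv'
+     WITH OFFSET 2;
+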
+Unsupported Data Types
+-----------------------------
+SQream does not support certain features that are supported by other databases, such as ``ARRAY``, ``BLOB``, ``ENUM``, and ``SET``. You must convert these data types before loading them. For example, you can store ``ENUM`` as ``TEXT``.
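+
+For example, a column that was an ``ENUM`` in the source database can be declared as text in SQream (the table below is illustrative):
+
+.. code-block:: postgres
+
+   CREATE TABLE orders
+   (
+      id BIGINT,
+      status TEXT -- originally ENUM('new','shipped','closed') in the source database
+   );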
+
+Handling Extended Errors
+----------------------------
+While you can use foreign tables to load CSVs, the ``COPY FROM`` statement provides more fine-grained error handling options and extended support for non-standard CSVs with multi-character delimiters, alternate timestamp formats, and more.
+
+For more information, see :ref:`foreign tables`.
+
+Best Practices for CSV
+------------------------------
+Text files, such as CSV, rarely conform to `RFC 4180 `_, so you may need to make the following modifications:
+
+* Use ``OFFSET 2`` for files containing header rows.
+
+* You can capture failed rows in a log file for later analysis, or skip them. See :ref:`capturing_rejected_rows` for information on skipping rejected rows.
+
+* You can modify record delimiters (new lines) using the :ref:`RECORD DELIMITER` syntax.
+
+* If the date formats deviate from ISO 8601, refer to the :ref:`copy_date_parsers` section for overriding the default parsing.
+
+* *(Optional)* You can quote fields in a CSV using double-quotes (``"``).
+
+.. note:: You must quote any field containing a new line or another double-quote character.
+
+* If a field is quoted, you must double quote any double quote, similar to the **string literals quoting rules**. For example, to encode ``What are "birds"?``, the field should appear as ``"What are ""birds""?"``. For more information, see :ref:`string literals quoting rules`.
+
+* Field delimiters do not have to be a displayable ASCII character. For all supported field delimiters, see :ref:`field_delimiters`.
+
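+Putting the quoting rules above together, a CSV row encoding the text ``What are "birds"?`` in its second field might look like this (the other fields are illustrative):
+
+.. code-block:: text
+
+   1,"What are ""birds""?",2016-05-01
+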
+Best Practices for Parquet
+--------------------------------
+The following list shows the best practices when ingesting data from Parquet files:
+
+* You must load Parquet files through :ref:`foreign_tables`. Note that the destination table structure must match the number of columns in the source files.
+
+* Parquet files support **predicate pushdown**. When a query is issued over Parquet files, SQream uses row-group metadata to determine which row-groups in a file must be read for a particular query and the row indexes can narrow the search to a particular set of rows.
+
+Supported Types and Behavior Notes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Unlike the ORC format, the column types should match the data types exactly, as shown in the table below:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   :stub-columns: 1
+   
+   * -   SQream DB type →
+   
+         Parquet source
+     - ``BOOL``
+     - ``TINYINT``
+     - ``SMALLINT``
+     - ``INT``
+     - ``BIGINT``
+     - ``REAL``
+     - ``DOUBLE``
+     - Text [#f0]_
+     - ``DATE``
+     - ``DATETIME``
+   * - ``BOOLEAN``
+     - Supported 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``INT16``
+     - 
+     - 
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``INT32``
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``INT64``
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``FLOAT``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+     - 
+     - 
+   * - ``DOUBLE``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+     - 
+   * - ``BYTE_ARRAY`` [#f2]_
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+   * - ``INT96`` [#f3]_
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported [#f4]_
+
+If a Parquet file has an unsupported type, such as ``enum``, ``uuid``, ``time``, ``json``, ``bson``, ``lists``, or ``maps``, but the table does not reference this data (i.e., the data does not appear in the :ref:`SELECT` query), the statement will succeed. If the table **does** reference an unsupported column, an error is displayed explaining that the type is not supported; omit the column from the query to avoid this.
+
+Best Practices for ORC
+--------------------------------
+The following list shows the best practices when ingesting data from ORC files:
+
+* You must load ORC files through :ref:`foreign_tables`. Note that the destination table structure must match the number of columns in the source files.
+
+* ORC files support **predicate pushdown**. When a query is issued over ORC files, SQream uses ORC metadata to determine which stripes in a file need to be read for a particular query and the row indexes can narrow the search to a particular set of 10,000 rows.
+
+Type Support and Behavior Notes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+You must load ORC files through a foreign table. Note that the destination table structure must match the number of columns in the source files.
+
+For more information, see :ref:`foreign_tables`.
+
+The types should match to some extent within the same "class", as shown in the following table:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   :stub-columns: 1
+   
+   * -   SQream DB Type →
+   
+         ORC Source
+     - ``BOOL``
+     - ``TINYINT``
+     - ``SMALLINT``
+     - ``INT``
+     - ``BIGINT``
+     - ``REAL``
+     - ``DOUBLE``
+     - Text [#f0]_
+     - ``DATE``
+     - ``DATETIME``
+   * - ``boolean``
+     - Supported 
+     - Supported [#f5]_
+     - Supported [#f5]_
+     - Supported [#f5]_
+     - Supported [#f5]_
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``tinyint``
+     - ○ [#f6]_
+     - Supported
+     - Supported
+     - Supported
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``smallint``
+     - ○ [#f6]_
+     - ○ [#f7]_
+     - Supported
+     - Supported
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``int``
+     - ○ [#f6]_
+     - ○ [#f7]_
+     - ○ [#f7]_
+     - Supported
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``bigint``
+     - ○ [#f6]_
+     - ○ [#f7]_
+     - ○ [#f7]_
+     - ○ [#f7]_
+     - Supported
+     - 
+     - 
+     - 
+     - 
+     - 
+   * - ``float``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - Supported
+     - 
+     - 
+     - 
+   * - ``double``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - Supported
+     - 
+     - 
+     - 
+   * - ``string`` / ``char`` / ``varchar``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - 
+     - 
+   * - ``date``
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+     - Supported
+   * - ``timestamp``, ``timestamp`` with timezone
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - 
+     - Supported
+
+* If an ORC file has an unsupported type, such as ``binary``, ``list``, ``map``, or ``union``, but the data is not referenced in the table (it does not appear in the :ref:`SELECT` query), the statement will succeed. If the column is referenced, an error is displayed explaining that the type is not supported; omit the column from the query to avoid this.
+
+
+
+..
+   insert
+
+   example
+
+   are there some variations to highlight?:
+
+   create table as
+
+   sequences, default values
+
+   insert select
+
+   make distinction between an insert command, and a parameterized/bulk
+   insert "over the network"
+
+
+   copy
+
+
+   best practices for insert
+
+   chunks and extents, and storage reorganisation
+
+   copy:
+
+   give an example
+
+   supports csv and parquet
+
+   what else do we have right now? any other formats? have the s3 and
+   hdfs url support also
+
+   error handling
+
+   best practices
+
+   try to combine sensibly with the external table stuff
+
+Further Reading and Migration Guides
+=======================================
+For more information, see the following:
+
+* :ref:`copy_from`
+* :ref:`insert`
+* :ref:`foreign_tables`
+
+.. rubric:: Footnotes
+
+.. [#f0] Text values include ``TEXT``, ``VARCHAR``, and ``NVARCHAR``
+
+.. [#f2] With UTF8 annotation
+
+.. [#f3] With ``TIMESTAMP_NANOS`` or ``TIMESTAMP_MILLIS`` annotation
+
+.. [#f4] Any microseconds will be rounded down to milliseconds.
+
+.. [#f5] Boolean values are cast to 0, 1
+
+.. [#f6] Will succeed if all values are 0, 1
+
+.. [#f7] Will succeed if all values fit the destination type
\ No newline at end of file
diff --git a/data_ingestion/inserting_data.rst b/data_ingestion/inserting_data.rst
deleted file mode 100644
index 660cd61bd..000000000
--- a/data_ingestion/inserting_data.rst
+++ /dev/null
@@ -1,474 +0,0 @@
-.. _inserting_data:
-
-***************************
-Inserting Data Overview
-***************************
-
-The **Inserting Data Overview** page describes how to insert data into SQream, specifically how to insert data from a variety of sources and locations. 
-
-.. contents:: In this topic:
-   :local:
-
-
-Getting Started
-================================
-
-SQream supports importing data from the following sources:
-
-* Using :ref:`insert` with :ref:`a client driver`
-* Using :ref:`copy_from`:
-
-   - Local filesystem and locally mounted network filesystems
-   - :ref:`s3`
-   - :ref:`hdfs`
-
-* Using :ref:`external_tables`:
-
-   - Local filesystem and locally mounted network filesystems
-   - :ref:`s3`
-   - :ref:`hdfs`
-
-
-SQream DB supports loading files in the following formats:
-
-* Text - CSV, TSV, PSV
-* Parquet
-* ORC
-
-Data Loading Considerations
-================================
-
-Verifying Data and Performance after Loading
------------------------------------------
-
-Like other RDBMSs, SQream DB has its own set of best practcies for table design and query optimization.
-
-SQream therefore recommends:
-
-* Verify that the data is as you expect it (e.g. row counts, data types, formatting, content)
-
-* The performance of your queries is adequate
-
-* :ref:`Best practices` were followed for table design
-
-* Applications such as :ref:`Tableau` and others have been tested, and work
-
-* Data types were not over-provisioned (e.g. don't use VARCHAR(2000) to store a short string)
-
-File Soure Location when Loading
---------------------------------
-
-During loading using :ref:`copy_from`, the statement can run on any worker. If you are running multiple nodes, make sure that all nodes can see the source the same. If you load from a local file which is only on 1 node and not on shared storage, it will fail some of the time. (If you need to, you can also control which node a statement runs on using the :ref:`workload_manager`).
-
-Supported load methods
--------------------------------
-
-SQream DB's :ref:`COPY FROM` syntax can be used to load CSV files, but can't be used for Parquet and ORC.
-
-:ref:`FOREIGN TABLE` can be used to load text files, Parquet, and ORC files, and can also transform the data prior to materialization as a full table.
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   :stub-columns: 1
-   
-   * - Method / File type
-     - Text (CSV)
-     - Parquet
-     - ORC
-     - Streaming data
-   * - :ref:`copy_from`
-     - ✓
-     - ✗
-     - ✗
-     - ✗
-   * - :ref:`external_tables`
-     - ✓
-     - ✓
-     - ✓
-     - ✗
-   * - :ref:`insert`
-     - ✗
-     - ✗
-     - ✗
-     - ✓ (Python, JDBC, Node.JS)
-
-Unsupported Data Types
------------------------------
-
-SQream DB doesn't support the entire set of features that some other database systems may have, such as ``ARRAY``, ``BLOB``, ``ENUM``, ``SET``, etc.
-
-These data types will have to be converted before load. For example, ``ENUM`` can often be stored as a ``VARCHAR``.
-
-Handing Extended Errors
-----------------------------
-
-While :ref:`external tables` can be used to load CSVs, the ``COPY FROM`` statement provides more fine-grained error handling options, as well as extended support for non-standard CSVs with multi-character delimiters, alternate timestamp formats, and more.
-
-Best Practices for CSV
-------------------------------
-
-Text files like CSV rarely conform to `RFC 4180 `_ , so alterations may be required:
-
-* Use ``OFFSET 2`` for files containing header rows
-
-* Failed rows can be captured in a log file for later analysis, or just to skip them. See :ref:`capturing_rejected_rows` for information on skipping rejected rows.
-
-* Record delimiters (new lines) can be modified with the :ref:`RECORD DELIMITER` syntax.
-
-* If the date formats differ from ISO 8601, refer to the :ref:`copy_date_parsers` section to see how to override default parsing.
-
-* 
-   Fields in a CSV can be optionally quoted with double-quotes (``"``). However, any field containing a newline or another double-quote character must be quoted.
-
-   If a field is quoted, any double quote that appears must be double-quoted (similar to the :ref:`string literals quoting rules`. For example, to encode ``What are "birds"?``, the field should appear as ``"What are ""birds""?"``.
-
-* Field delimiters don't have a to be a displayable ASCII character. See :ref:`field_delimiters` for all options.
-
-
-Best Practices for Parquet
---------------------------------
-
-* Parquet files are loaded through :ref:`external_tables`. The destination table structure has to match in number of columns between the source files.
-
-* Parquet files support predicate pushdown. When a query is issued over Parquet files, SQream DB uses row-group metadata to determine which row-groups in a file need to be read for a particular query and the row indexes can narrow the search to a particular set of rows.
-
-Type Support and Behavior Notes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* Unlike ORC, the column types should match the data types exactly (see table below).
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   :stub-columns: 1
-   
-   * -   SQream DB type →
-   
-         Parquet source
-     - ``BOOL``
-     - ``TINYINT``
-     - ``SMALLINT``
-     - ``INT``
-     - ``BIGINT``
-     - ``REAL``
-     - ``DOUBLE``
-     - Text [#f0]_
-     - ``DATE``
-     - ``DATETIME``
-   * - ``BOOLEAN``
-     - ✓ 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``INT16``
-     - 
-     - 
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``INT32``
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``INT64``
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``FLOAT``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-     - 
-     - 
-   * - ``DOUBLE``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-     - 
-   * - ``BYTE_ARRAY`` [#f2]_
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-   * - ``INT96`` [#f3]_
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓ [#f4]_
-
-* If a Parquet file has an unsupported type like ``enum``, ``uuid``, ``time``, ``json``, ``bson``, ``lists``, ``maps``, but the data is not referenced in the table (it does not appear in the :ref:`SELECT` query), the statement will succeed. If the column is referenced, an error will be thrown to the user, explaining that the type is not supported, but the column may be ommited.
-
-Best Practices for ORC
---------------------------------
-
-* ORC files are loaded through :ref:`external_tables`. The destination table structure has to match in number of columns between the source files.
-
-* ORC files support predicate pushdown. When a query is issued over ORC files, SQream DB uses ORC metadata to determine which stripes in a file need to be read for a particular query and the row indexes can narrow the search to a particular set of 10,000 rows.
-
-Type Support and Behavior Notes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* ORC files are loaded through :ref:`external_tables`. The destination table structure has to match in number of columns between the source files.
-
-* The types should match to some extent within the same "class" (see table below).
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   :stub-columns: 1
-   
-   * -   SQream DB type →
-   
-         ORC source
-     - ``BOOL``
-     - ``TINYINT``
-     - ``SMALLINT``
-     - ``INT``
-     - ``BIGINT``
-     - ``REAL``
-     - ``DOUBLE``
-     - Text [#f0]_
-     - ``DATE``
-     - ``DATETIME``
-   * - ``boolean``
-     - ✓ 
-     - ✓ [#f5]_
-     - ✓ [#f5]_
-     - ✓ [#f5]_
-     - ✓ [#f5]_
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``tinyint``
-     - ○ [#f6]_
-     - ✓
-     - ✓
-     - ✓
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``smallint``
-     - ○ [#f6]_
-     - ○ [#f7]_
-     - ✓
-     - ✓
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``int``
-     - ○ [#f6]_
-     - ○ [#f7]_
-     - ○ [#f7]_
-     - ✓
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``bigint``
-     - ○ [#f6]_
-     - ○ [#f7]_
-     - ○ [#f7]_
-     - ○ [#f7]_
-     - ✓
-     - 
-     - 
-     - 
-     - 
-     - 
-   * - ``float``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - ✓
-     - 
-     - 
-     - 
-   * - ``double``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - ✓
-     - 
-     - 
-     - 
-   * - ``string`` / ``char`` / ``varchar``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - 
-     - 
-   * - ``date``
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-     - ✓
-   * - ``timestamp``, ``timestamp`` with timezone
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - 
-     - ✓
-
-* If an ORC file has an unsupported type like ``binary``, ``list``, ``map``, and ``union``, but the data is not referenced in the table (it does not appear in the :ref:`SELECT` query), the statement will succeed. If the column is referenced, an error will be thrown to the user, explaining that the type is not supported, but the column may be ommited.
-
-
-
-..
-   insert
-
-   example
-
-   are there some variations to highlight?:
-
-   create table as
-
-   sequences, default values
-
-   insert select
-
-   make distinction between an insert command, and a parameterized/bulk
-   insert "over the network"
-
-
-   copy
-
-
-   best practices for insert
-
-   chunks and extents, and storage reorganisation
-
-   copy:
-
-   give an example
-
-   supports csv and parquet
-
-   what else do we have right now? any other formats? have the s3 and
-   hdfs url support also
-
-   error handling
-
-   best practices
-
-   try to combine sensibly with the external table stuff
-
-Further Reading and Migration Guides
-=======================================
-
-.. toctree::
-   :caption: Data loading guides
-   :titlesonly:
-   
-   migration/csv
-   migration/parquet
-   migration/orc
-
-.. toctree::
-   :caption: Migration guides
-   :titlesonly:
-   
-   migration/oracle
-
-
-.. rubric:: See also:
-
-* :ref:`copy_from`
-* :ref:`insert`
-* :ref:`external_tables`
-
-.. rubric:: Footnotes
-
-.. [#f0] Text values include ``TEXT``, ``VARCHAR``, and ``NVARCHAR``
-
-.. [#f2] With UTF8 annotation
-
-.. [#f3] With ``TIMESTAMP_NANOS`` or ``TIMESTAMP_MILLIS`` annotation
-
-.. [#f4] Any microseconds will be rounded down to milliseconds.
-
-.. [#f5] Boolean values are cast to 0, 1
-
-.. [#f6] Will succeed if all values are 0, 1
-
-.. [#f7] Will succeed if all values fit the destination type
diff --git a/data_ingestion/json.rst b/data_ingestion/json.rst
new file mode 100644
index 000000000..50fe2258c
--- /dev/null
+++ b/data_ingestion/json.rst
@@ -0,0 +1 @@
+.. _json:

**************************
Ingesting Data from JSON
**************************

.. contents:: 
   :local:
   :depth: 1
   
Overview
========

JSON (JavaScript Object Notation) is used both as a file format and as a serialization method. The JSON file format is flexible and is commonly used for dynamic, nested, and semi-structured data representations.

The SQream DB JSON parser supports the `RFC 8259 `_ data interchange format and accepts both JSON objects and JSON object arrays.

SQream supports only the `JSON Lines `_ data format.


Making JSON Files Accessible to Workers
=======================================

To give workers access to files, every node in your system must have access to the storage being used.

The following are required for JSON files to be accessible to workers:

* For files hosted on NFS, ensure that the mount is accessible from all servers.

* For HDFS, ensure that SQream servers have access to the HDFS NameNode with the correct **user-id**. For more information, see :ref:`hdfs`.

* For S3, ensure network access to the S3 endpoint. For more information, see :ref:`s3`.

For more information about configuring worker access, see :ref:`workload_manager`.


Mapping between JSON and SQream
===============================

A JSON field consists of a key name and a value.

Key names, which are case sensitive, are mapped to SQream columns. Key names which do not have corresponding SQream table columns are treated as errors by default, unless the ``IGNORE_EXTRA_FIELDS`` parameter is set to ``true``, in which case these key names will be ignored during the mapping process.

SQream table columns which do not have corresponding JSON fields are automatically set to ``null``.

Values may be one of the following reserved words (lower-case): ``false``, ``true``, or ``null``, or any of the following data types:

.. list-table:: 
   :widths: auto
   :header-rows: 1
   
   * - JSON Data Type
     - Representation in SQream
   * - Number
     - ``TINYINT``, ``SMALLINT``, ``INT``, ``BIGINT``, ``FLOAT``, ``DOUBLE``, ``NUMERIC``
   * - String
     - ``TEXT``
   * - JSON Literal
     - ``NULL``, ``TRUE``, ``FALSE``
   * - JSON Array
     - ``TEXT``
   * - JSON Object
     - ``TEXT``
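
Because JSON arrays and objects have no native SQream representation, they are ingested verbatim into ``TEXT`` columns. The round trip can be sketched in Python (the field names below are illustrative, not part of SQream):

```python
import json

# One JSON Lines record; "teams" (array) and "stats" (object) would land
# in TEXT columns, while "age" maps to an integer column.
record = {
    "name": "Avery Bradley",           # String      -> TEXT
    "age": 25,                         # Number      -> TINYINT/INT/BIGINT/...
    "teams": ["Celtics", "Mavericks"], # JSON Array  -> TEXT
    "stats": {"ppg": 14.3},            # JSON Object -> TEXT
}

# Serialize a nested value the way it would appear in a TEXT column.
teams_text = json.dumps(record["teams"])

# After loading, the TEXT value can be parsed back into a structure.
assert json.loads(teams_text) == ["Celtics", "Mavericks"]
```

This is why nested data remains queryable: the ``TEXT`` value is still valid JSON and can be parsed on the client side after retrieval.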
 


Character Escaping
------------------

The ASCII 10 character (LF) marks the end of a JSON object. When a value contains a line feed that is not meant to end the object, escape it as ``\\n``.
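
Serializing with a standard JSON library performs this escaping automatically. A minimal Python sketch:

```python
import json

# A value containing a literal line feed (ASCII 10).
record = {"name": "Avery Bradley", "note": "line one\nline two"}

# json.dumps escapes the line feed as the two characters "\" and "n",
# so the serialized object occupies a single line -- valid JSON Lines.
line = json.dumps(record)
assert "\n" not in line    # no raw LF inside the serialized object
assert "\\n" in line       # the escaped form is present

# Parsing restores the original value, line feed included.
assert json.loads(line)["note"] == "line one\nline two"
```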



Ingesting JSON Data into SQream
===============================

.. contents:: In this topic:
   :local:

Syntax
-------
To access JSON files, use ``json_fdw`` with a ``COPY FROM``, ``COPY TO``, or ``CREATE FOREIGN TABLE`` statement.

The Foreign Data Wrapper (FDW) syntax is:

.. code-block:: 

	json_fdw [OPTIONS(option=value[,...])]


Parameters
------------

The following parameters are supported by ``json_fdw``:

.. list-table:: 
   :widths: auto
   :header-rows: 1
   
   * - Parameter
     - Description
   * - ``DATETIME_FORMAT``
     - Default format is ``yyyy-mm-dd``. Other supported date formats are: ``iso8601``, ``iso8601c``, ``dmy``, ``ymd``, ``mdy``, ``yyyymmdd``, ``yyyy-m-d``, ``yyyy-mm-dd``, ``yyyy/m/d``, ``yyyy/mm/dd``, ``d/m/yyyy``, ``dd/mm/yyyy``, ``mm/dd/yyyy``, ``dd-mon-yyyy``, ``yyyy-mon-dd``.
   * - ``IGNORE_EXTRA_FIELDS``
     - Default value is ``false``. When set to ``true``, key names which do not have corresponding SQream table columns are ignored. This parameter may be used with the ``COPY FROM`` and ``CREATE FOREIGN TABLE`` statements.
   * - ``COMPRESSION``
     - Supported values are ``auto``, ``gzip``, and ``none``. ``auto`` automatically detects the compression type upon import and is not supported for export. ``gzip`` applies ``gzip`` compression. ``none`` applies no compression or decompression.
   * - ``LOCATION``
     - A path on the local filesystem, an S3 URI, or an HDFS URI. The local path must be an absolute path that SQream DB can access.
   * - ``LIMIT``
     - When specified, tells SQream DB to stop ingesting after the specified number of rows. Unlimited if unset.
   * - ``OFFSET``
     - The row number from which to start ingesting.
   * - ``ERROR_LOG``
     - When a row fails to be copied during a ``COPY`` command, error information is written to the file specified by ``ERROR_LOG``.

         * If an existing file path is specified, the file will be overwritten.
         
         * Specifying the same file for ``ERROR_LOG`` and ``REJECTED_DATA`` is not allowed and will result in an error.
         
         * Specifying an error log when creating a foreign table will write a new error log for every query on the foreign table.
   * - ``CONTINUE_ON_ERROR``
     - Specifies whether faulty records should be skipped. When set to ``true``, the transaction continues despite rejected data. This parameter should be set together with ``ERROR_COUNT``. When reading multiple files, a file that cannot be opened at all is skipped.
   * - ``ERROR_COUNT``
     - Specifies the maximum number of faulty records to ignore. Must be used in conjunction with ``CONTINUE_ON_ERROR``.
   * - ``MAX_FILE_SIZE``
     - Sets the maximum file size (bytes).
   * - ``ENFORCE_SINGLE_FILE``
     - Permitted values are ``true`` or ``false``. When set to ``true``, a single file of unlimited size is created. This single file is not limited by the ``MAX_FILE_SIZE`` parameter. ``false`` permits creating several files together limited by the ``MAX_FILE_SIZE`` parameter. Default value: ``false``.

   * - ``AWS_ID``, ``AWS_SECRET``
     - Specifies the authentication details for secured S3 buckets.
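
The ``LIMIT`` and ``OFFSET`` parameters operate on rows of the JSON Lines input. Their semantics can be illustrated in Python (a sketch of the behavior, not SQream's implementation):

```python
import io
import json

def read_rows(stream, offset=0, limit=None):
    """Yield parsed JSON objects, skipping `offset` rows and
    stopping after `limit` rows (unlimited when limit is None)."""
    taken = 0
    for i, line in enumerate(stream):
        if not line.strip():
            continue                       # ignore empty lines
        if i < offset:
            continue                       # skip rows before the offset
        if limit is not None and taken >= limit:
            break                          # stop once the limit is reached
        taken += 1
        yield json.loads(line)

data = io.StringIO('{"id": 1}\n{"id": 2}\n{"id": 3}\n{"id": 4}\n')
rows = list(read_rows(data, offset=1, limit=2))
assert [r["id"] for r in rows] == [2, 3]
```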
 

Automatic Schema Inference
---------------------------

You may let SQream DB automatically infer the schema of a foreign table when using ``json_fdw``. 

For more information, see the :ref:`Automatic Foreign Table DDL Resolution` page.

Automatic Schema Inference example:

.. code-block:: postgres
   
   CREATE FOREIGN TABLE t
     WRAPPER json_fdw
     OPTIONS
     (
       location = 'somefile.json'
     )
   ;


Examples
------------

JSON objects (JSON Lines):

.. code-block:: json

	{ "name":"Avery Bradley", "age":25, "position":"PG" }
	{ "name":"Jae Crowder", "age":25, "position":"SF" }
	{ "name":"John Holland", "age":27, "position":"SG" }

JSON object array:

.. code-block:: json

	[
	{ "name":"Avery Bradley", "age":25, "position":"PG" },
	{ "name":"Jae Crowder", "age":25, "position":"SF" },
	{ "name":"John Holland", "age":27, "position":"SG" }
	]
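
Both layouts above are valid input. A quick way to check which form a file uses before loading (a hypothetical helper, not part of SQream):

```python
import json

def detect_layout(text: str) -> str:
    """Return 'array' for a JSON object array, 'lines' for JSON Lines."""
    if text.lstrip().startswith("["):
        json.loads(text)           # the whole file is one JSON document
        return "array"
    # JSON Lines: each non-empty line must be an independent JSON object
    for line in text.splitlines():
        if line.strip():
            json.loads(line)
    return "lines"

lines_sample = '{"name":"Avery Bradley","age":25}\n{"name":"Jae Crowder","age":25}'
array_sample = '[{"name":"Avery Bradley","age":25},{"name":"Jae Crowder","age":25}]'
assert detect_layout(lines_sample) == "lines"
assert detect_layout(array_sample) == "array"
```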

Using the ``COPY FROM`` statement:

.. code-block:: postgres
   
   COPY t
     FROM WRAPPER json_fdw
     OPTIONS
     (
       location = 'somefile.json'
     )
   ;

Note that JSON files generated using the ``COPY TO`` statement will contain JSON objects (one per line), not object arrays.

.. code-block:: postgres
   
   COPY t
     TO WRAPPER json_fdw
     OPTIONS
     (
       location = 'somefile.json'
     )
   ;

When using the ``CREATE FOREIGN TABLE`` statement, make sure that the table schema corresponds with the JSON file structure.

.. code-block:: postgres
   
   CREATE FOREIGN TABLE t
	 (
	   id int not null
	 )
     WRAPPER json_fdw
     OPTIONS
     (
       location = 'somefile.json'
     )
   ;

The following is an example of loading data from a JSON file into SQream:

.. code-block:: postgres

   COPY t
     FROM WRAPPER json_fdw
     OPTIONS
     (
       LOCATION = 'somefile.json'
     );
	  


.. tip:: 

   An exact match must exist between the SQream and JSON types. For an unsupported column type, set the column to an arbitrary type and exclude it from subsequent queries.



\ No newline at end of file
diff --git a/data_ingestion/nba-t10.csv b/data_ingestion/nba-t10.csv
index 024530355..e57ad3131 100644
--- a/data_ingestion/nba-t10.csv
+++ b/data_ingestion/nba-t10.csv
@@ -1,10 +1,10 @@
 Name,Team,Number,Position,Age,Height,Weight,College,Salary
-Avery Bradley,Boston Celtics,0.0,PG,25.0,6-2,180.0,Texas,7730337.0
-Jae Crowder,Boston Celtics,99.0,SF,25.0,6-6,235.0,Marquette,6796117.0
-John Holland,Boston Celtics,30.0,SG,27.0,6-5,205.0,Boston University,
-R.J. Hunter,Boston Celtics,28.0,SG,22.0,6-5,185.0,Georgia State,1148640.0
-Jonas Jerebko,Boston Celtics,8.0,PF,29.0,6-10,231.0,,5000000.0
-Amir Johnson,Boston Celtics,90.0,PF,29.0,6-9,240.0,,12000000.0
-Jordan Mickey,Boston Celtics,55.0,PF,21.0,6-8,235.0,LSU,1170960.0
-Kelly Olynyk,Boston Celtics,41.0,C,25.0,7-0,238.0,Gonzaga,2165160.0
-Terry Rozier,Boston Celtics,12.0,PG,22.0,6-2,190.0,Louisville,1824360.0
+Avery Bradley,Boston Celtics,0,PG,25,6-2,180,Texas,7730337
+Jae Crowder,Boston Celtics,99,SF,25,6-6,235,Marquette,6796117
+John Holland,Boston Celtics,30,SG,27,6-5,205,Boston University,
+R.J. Hunter,Boston Celtics,28,SG,22,6-5,185,Georgia State,1148640
+Jonas Jerebko,Boston Celtics,8,PF,29,6-10,231,,5000000
+Amir Johnson,Boston Celtics,90,PF,29,6-9,240,,12000000
+Jordan Mickey,Boston Celtics,55,PF,21,6-8,235,LSU,1170960
+Kelly Olynyk,Boston Celtics,41,C,25,7-0,238,Gonzaga,2165160
+Terry Rozier,Boston Celtics,12,PG,22,6-2,190,Louisville,1824360
diff --git a/data_ingestion/nba.json b/data_ingestion/nba.json
new file mode 100644
index 000000000..e4df53204
--- /dev/null
+++ b/data_ingestion/nba.json
@@ -0,0 +1,9 @@
+{"name":"Avery Bradley","team":"Boston Celtics","number":0,"position":"PG","age":25,"height":"6-2","weight":180.0,"college":"Texas","salary":7730337.0}
+{"name":"Jae Crowder","team":"Boston Celtics","number":99,"position":"SF","age":25,"height":"6-6","weight":235.0,"college":"Marquette","salary":6796117.0}
+{"name":"John Holland","team":"Boston Celtics","number":30,"position":"SG","age":27,"height":"6-5","weight":205.0,"college":"Boston University","salary":null}
+{"name":"R.J. Hunter","team":"Boston Celtics","number":28,"position":"SG","age":22,"height":"6-5","weight":185.0,"college":"Georgia State","salary":1148640.0}
+{"name":"Jonas Jerebko","team":"Boston Celtics","number":8,"position":"PF","age":29,"height":"6-10","weight":231.0,"college":null,"salary":5000000.0}
+{"name":"Amir Johnson","team":"Boston Celtics","number":90,"position":"PF","age":29,"height":"6-9","weight":240.0,"college":null,"salary":12000000.0}
+{"name":"Jordan Mickey","team":"Boston Celtics","number":55,"position":"PF","age":21,"height":"6-8","weight":235.0,"college":"LSU","salary":1170960.0}
+{"name":"Kelly Olynyk","team":"Boston Celtics","number":41,"position":"C","age":25,"height":"7-0","weight":238.0,"college":"Gonzaga","salary":2165160.0}
+{"name":"Terry Rozier","team":"Boston Celtics","number":12,"position":"PG","age":22,"height":"6-2","weight":190.0,"college":"Louisville","salary":1824360.0}
\ No newline at end of file
diff --git a/data_ingestion/oracle.rst b/data_ingestion/oracle.rst
deleted file mode 100644
index 0b0e6d5c8..000000000
--- a/data_ingestion/oracle.rst
+++ /dev/null
@@ -1,353 +0,0 @@
-.. _oracle:
-
-**********************
-Migrating Data from Oracle
-**********************
-
-This guide covers actions required for migrating from Oracle to SQream DB with CSV files. 
-
-.. contents:: In this topic:
-   :local:
-
-
-1. Preparing the tools and login information
-====================================================
-
-* Migrating data from Oracle requires a username and password for your Oracle system.
-
-* In this guide, we'll use the `Oracle Data Pump `_ , specifically the `Data Pump Export utility `_ .
-
-
-2. Export the desired schema
-===================================
-
-Use the Data Pump Export utility to export the database schema.
-
-The format for using the Export utility is
-
-   ``expdp / DIRECTORY= DUMPFILE= CONTENT=metadata_only NOLOGFILE``
-
-The resulting Oracle-only schema is stored in a dump file.
-
-
-Examples
-------------
-
-Dump all tables
-^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: console
-
-   $ expdp rhendricks/secretpassword DIRECTORY=dpumpdir DUMPFILE=tables.dmp CONTENT=metadata_only NOLOGFILE
-
-
-Dump only specific tables
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In this example, we specify two tables for dumping.
-
-.. code-block:: console
-
-   $ expdp rhendricks/secretpassword DIRECTORY=dpumpdir DUMPFILE=tables.dmp CONTENT=metadata_only TABLES=employees,jobs NOLOGFILE
-
-3. Convert the Oracle dump to standard SQL
-=======================================================
-
-Oracle's Data Pump Import utility will help us convert the dump from the previous step to standard SQL.
-
-The format for using the Import utility is
-
-   ``impdp / DIRECTORY= DUMPFILE= SQLFILE= TRANSFORM=SEGMENT_ATTRIBUTES:N:table PARTITION_OPTIONS=MERGE``
-
-* ``TRANSFORM=SEGMENT_ATTRIBUTES:N:table`` excludes segment attributes (both STORAGE and TABLESPACE) from the tables
-
-* ``PARTITON_OPTIONS=MERGE`` combines all partitions and subpartitions into one table.
-
-Example
-----------
-
-.. code-block:: console
-   
-   $ impdp rhendricks/secretpassword DIRECTORY=dpumpdir DUMPFILE=tables.dmp SQLFILE=sql_export.sql TRANSFORM=SEGMENT_ATTRIBUTES:N:table PARTITION_OPTIONS=MERGE
-
-4. Figure out the database structures
-===============================================
-
-Using the SQL file created in the previous step, write CREATE TABLE statements to match the schemas of the tables.
-
-Remove unsupported attributes
------------------------------------
-
-Trim unsupported primary keys, indexes, constraints, and other unsupported Oracle attributes.
-
-Match data types
----------------------
-
-Refer to the table below to match the Oracle source data type to a new SQream DB type:
-
-.. list-table:: Data types
-   :widths: auto
-   :header-rows: 1
-   
-   * - Oracle Data type
-     - Precision
-     - SQream DB data type
-   * - ``CHAR(n)``, ``CHARACTER(n)``
-     - Any ``n``
-     - ``VARCHAR(n)``
-   * - ``BLOB``, ``CLOB``, ``NCLOB``, ``LONG``
-     - 
-     - ``TEXT``
-   * - ``DATE``
-     - 
-     - ``DATE``
-   * - ``FLOAT(p)``
-     - p <= 63
-     - ``REAL``
-   * - ``FLOAT(p)``
-     - p > 63
-     - ``FLOAT``, ``DOUBLE``
-
-   * - ``NCHAR(n)``, ``NVARCHAR2(n)``
-     - Any ``n``
-     - ``TEXT`` (alias of ``NVARCHAR``)
-
-   * - ``NUMBER(p)``, ``NUMBER(p,0)``
-     - p < 5
-     - ``SMALLINT``
-
-   * - ``NUMBER(p)``, `NUMBER(p,0)``
-     - p < 9
-     - ``INT``
-
-   * - ``NUMBER(p)``, `NUMBER(p,0)``
-     - p < 19
-     - ``INT``
-
-   * - ``NUMBER(p)``, `NUMBER(p,0)``
-     - p >= 20
-     - ``BIGINT``
-
-   * - ``NUMBER(p,f)``, ``NUMBER(*,f)``
-     - f > 0
-     - ``FLOAT`` / ``DOUBLE``
-
-   * - ``VARCHAR(n)``, ``VARCHAR2(n)``
-     - Any ``n``
-     - ``VARCHAR(n)`` or ``TEXT``
-   * - ``TIMESTAMP``
-     -  
-     - ``DATETIME``
-
-Read more about :ref:`supported data types in SQream DB`.
-
-Additional considerations
------------------------------
-
-* Understand how :ref:`tables are created in SQream DB`
-
-* Learn how :ref:`SQream DB handles null values`, particularly with regards to constraints.
-
-* Oracle roles and user management commands need to be rewritten to SQream DB's format. SQream DB supports :ref:`full role-based access control (RBAC)` similar to Oracle.
-
-5. Create the tables in SQream DB
-======================================
-
-After rewriting the table strucutres, create them in SQream DB.
-
-Example
----------
-
-
-Consider Oracle's ``HR.EMPLOYEES`` sample table:
-
-.. code-block:: sql
-
-      CREATE TABLE employees
-         ( employee_id NUMBER(6)
-         , first_name VARCHAR2(20)
-         , last_name VARCHAR2(25)
-         CONSTRAINT emp_last_name_nn NOT NULL
-         , email VARCHAR2(25)
-         CONSTRAINT emp_email_nn NOT NULL
-         , phone_number VARCHAR2(20)
-         , hire_date DATE
-         CONSTRAINT emp_hire_date_nn NOT NULL
-         , job_id VARCHAR2(10)
-         CONSTRAINT emp_job_nn NOT NULL
-         , salary NUMBER(8,2)
-         , commission_pct NUMBER(2,2)
-         , manager_id NUMBER(6)
-         , department_id NUMBER(4)
-         , CONSTRAINT emp_salary_min
-         CHECK (salary > 0) 
-         , CONSTRAINT emp_email_uk
-         UNIQUE (email)
-         ) ;
-      CREATE UNIQUE INDEX emp_emp_id_pk
-               ON employees (employee_id) ;
-             
-      ALTER TABLE employees
-               ADD ( CONSTRAINT emp_emp_id_pk
-         PRIMARY KEY (employee_id)
-         , CONSTRAINT emp_dept_fk
-         FOREIGN KEY (department_id)
-         REFERENCES departments
-         , CONSTRAINT emp_job_fk
-         FOREIGN KEY (job_id)
-         REFERENCES jobs (job_id)
-         , CONSTRAINT emp_manager_fk
-         FOREIGN KEY (manager_id)
-         REFERENCES employees
-         ) ;
-
-This table rewritten for SQream DB would be created like this:
-
-.. code-block:: postgres
-   
-   CREATE TABLE employees
-   (
-     employee_id      SMALLINT NOT NULL,
-     first_name       VARCHAR(20),
-     last_name        VARCHAR(25) NOT NULL,
-     email            VARCHAR(20) NOT NULL,
-     phone_number     VARCHAR(20),
-     hire_date        DATE NOT NULL,
-     job_id           VARCHAR(10) NOT NULL,
-     salary           FLOAT,
-     commission_pct   REAL,
-     manager_id       SMALLINT,
-     department_id    TINYINT
-   );
-
-
-6. Export tables to CSVs
-===============================
-
-Exporting CSVs from Oracle servers is not a trivial task.
-
-.. contents:: Options for exporting to CSVs
-   :local:
-
-Using SQL*Plus to export data lists
-------------------------------------------
-
-Here's a sample SQL*Plus script that will export PSVs in a format that SQream DB can read:
-
-:download:`Download to_csv.sql `
-
-.. literalinclude:: to_csv.sql
-    :language: sql
-    :caption: Oracle SQL*Plus CSV export script
-    :linenos:
-
-Enter SQL*Plus and export tables one-by-one interactively:
-
-.. code-block:: console
-   
-   $ sqlplus rhendricks/secretpassword
-
-   @spool employees
-   @spool jobs
-   [...]
-   EXIT
-
-Each table is exported as a data list file (``.lst``).
-
-Creating CSVs using stored procedures
--------------------------------------------
-
-You can use stored procedures if you have them set-up.
-
-Examples of `stored procedures for generating CSVs `_` can be found in the Ask The Oracle Mentors forums.
-
-CSV generation considerations
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-* Files should be a valid CSV. By default, SQream DB's CSV parser can handle `RFC 4180 standard CSVs `_ , but can also be modified to support non-standard CSVs (with multi-character delimiters, unquoted fields, etc).
-
-* Files are UTF-8 or ASCII encoded
-
-* Field delimiter is an ASCII character or characters
-
-* Record delimiter, also known as a new line separator, is a Unix-style newline (``\n``), DOS-style newline (``\r\n``), or Mac style newline (``\r``).
-
-* Fields are optionally enclosed by double-quotes, or mandatory quoted if they contain one of the following characters:
-
-   * The record delimiter or field delimiter
-
-   * A double quote character
-
-   * A newline
-
-* 
-   If a field is quoted, any double quote that appears must be double-quoted (similar to the :ref:`string literals quoting rules`. For example, to encode ``What are "birds"?``, the field should appear as ``"What are ""birds""?"``.
-   
-   Other modes of escaping are not supported (e.g. ``1,"What are \"birds\"?"`` is not a valid way of escaping CSV values).
-
-* ``NULL`` values can be marked in two ways in the CSV:
-   
-   - An explicit null marker. For example, ``col1,\N,col3``
-   - An empty field delimited by the field delimiter. For example, ``col1,,col3``
-   
-   .. note:: If a text field is quoted but contains no content (``""``) it is considered an empty text field. It is not considered ``NULL``.
-
-
-7. Place CSVs where SQream DB workers can access
-=======================================================
-
-During data load, the :ref:`copy_from` command can run on any worker (unless explicitly speficied with the :ref:`workload_manager`).
-It is important that every node has the same view of the storage being used - meaning, every SQream DB worker should have access to the files.
-
-* For files hosted on NFS, ensure that the mount is accessible from all servers.
-
-* For HDFS, ensure that SQream DB servers can access the HDFS name node with the correct user-id
-
-* For S3, ensure network access to the S3 endpoint
-
-8. Bulk load the CSVs
-=================================
-
-Issue the :ref:`copy_from` commands to SQream DB to insert a table from the CSVs created.
-
-Repeat the ``COPY FROM`` command for each table exported from Oracle.
-
-Example
--------------
-
-For the ``employees`` table, run the following command:
-
-.. code-block:: postgres
-   
-   COPY employees FROM 'employees.lst' WITH DELIMITER '|';
-
-9. Rewrite Oracle queries
-=====================================
-
-SQream DB supports a large subset of ANSI SQL.
-
-You will have to refactor much of Oracle's SQL and functions that often are not ANSI SQL. 
-
-We recommend the following resources:
-
-* :ref:`sql_feature_support` - to understand SQream DB's SQL feature support.
-
-* :ref:`sql_best_practices` - to understand best practices for SQL queries and schema design.
-
-* :ref:`common_table_expressions` - CTEs can be used to rewrite complex queries in a compact form.
-
-* :ref:`concurrency_and_locks` - to understand the difference between Oracle's transactions and SQream DB's concurrency.
-
-* :ref:`identity` - SQream DB supports sequences, but no triggers for auto-increment.
-
-* :ref:`joins` - SQream DB supports ANSI join syntax. Oracle uses the ``+`` operator which SQream DB doesn't support.
-
-* :ref:`saved_queries` - Saved queries can be used to emulate some stored procedures.
-
-* :ref:`subqueries` - SQream DB supports a limited set of subqueries.
-
-* :ref:`python_functions` - SQream DB supports Python User Defined Functions which can be used to run complex operations in-line.
-
-* :ref:`Views` - SQream DB supports logical views, but does not support materialized views.
-
-* :ref:`window_functions` - SQream DB supports a wide array of window functions.
\ No newline at end of file
diff --git a/data_ingestion/orc.rst b/data_ingestion/orc.rst
index d199e958e..4906a80de 100644
--- a/data_ingestion/orc.rst
+++ b/data_ingestion/orc.rst
@@ -1,21 +1,25 @@
 .. _orc:
 
 **********************
-Inserting Data from an ORC File
+Ingesting Data from an ORC File
 **********************
 
-This guide covers inserting data from ORC files into SQream DB using :ref:`FOREIGN TABLE`. 
+.. contents:: 
+   :local:
+   :depth: 1
 
+This guide covers ingesting data from ORC files into SQream DB using :ref:`FOREIGN TABLE`. 
 
-1. Prepare the files
+
+Prepare the files
 =====================
 
 Prepare the source ORC files, with the following requirements:
 
 .. list-table:: 
-   :widths: auto
+   :widths: 5 5 70 70 70 70 5 5 5 5 5
    :header-rows: 1
-   :stub-columns: 1
+
    
    * -   SQream DB type →
    
@@ -27,15 +31,15 @@ Prepare the source ORC files, with the following requirements:
      - ``BIGINT``
      - ``REAL``
      - ``DOUBLE``
-     - Text [#f0]_
+     - ``TEXT`` [#f0]_
      - ``DATE``
      - ``DATETIME``
    * - ``boolean``
-     - ✓ 
-     - ✓ [#f5]_
-     - ✓ [#f5]_
-     - ✓ [#f5]_
-     - ✓ [#f5]_
+     - Supported 
+     - Supported [#f5]_
+     - Supported [#f5]_
+     - Supported [#f5]_
+     - Supported [#f5]_
      - 
      - 
      - 
@@ -43,10 +47,10 @@ Prepare the source ORC files, with the following requirements:
      - 
    * - ``tinyint``
      - ○ [#f6]_
-     - ✓
-     - ✓
-     - ✓
-     - ✓
+     - Supported
+     - Supported
+     - Supported
+     - Supported
      - 
      - 
      - 
@@ -55,9 +59,9 @@ Prepare the source ORC files, with the following requirements:
    * - ``smallint``
      - ○ [#f6]_
      - ○ [#f7]_
-     - ✓
-     - ✓
-     - ✓
+     - Supported
+     - Supported
+     - Supported
      - 
      - 
      - 
@@ -67,8 +71,8 @@ Prepare the source ORC files, with the following requirements:
      - ○ [#f6]_
      - ○ [#f7]_
      - ○ [#f7]_
-     - ✓
-     - ✓
+     - Supported
+     - Supported
      - 
      - 
      - 
@@ -79,7 +83,7 @@ Prepare the source ORC files, with the following requirements:
      - ○ [#f7]_
      - ○ [#f7]_
      - ○ [#f7]_
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -91,8 +95,8 @@ Prepare the source ORC files, with the following requirements:
      - 
      - 
      - 
-     - ✓
-     - ✓
+     - Supported
+     - Supported
      - 
      - 
      - 
@@ -102,12 +106,12 @@ Prepare the source ORC files, with the following requirements:
      - 
      - 
      - 
-     - ✓
-     - ✓
+     - Supported
+     - Supported
      - 
      - 
      - 
-   * - ``string`` / ``char`` / ``varchar``
+   * - ``string`` / ``char`` / ``text``
      - 
      - 
      - 
@@ -115,7 +119,7 @@ Prepare the source ORC files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
    * - ``date``
@@ -127,8 +131,8 @@ Prepare the source ORC files, with the following requirements:
      - 
      - 
      - 
-     - ✓
-     - ✓
+     - Supported
+     - Supported
    * - ``timestamp``, ``timestamp`` with timezone
      - 
      - 
@@ -139,13 +143,13 @@ Prepare the source ORC files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
 
 * If an ORC file has an unsupported type like ``binary``, ``list``, ``map``, and ``union``, but the data is not referenced in the table (it does not appear in the :ref:`SELECT` query), the statement will succeed. If the column is referenced, an error will be thrown to the user, explaining that the type is not supported, but the column may be omitted. This can be worked around. See more information in the examples.
 
 .. rubric:: Footnotes
 
-.. [#f0] Text values include ``TEXT``, ``VARCHAR``, and ``NVARCHAR``
+.. [#f0] Text values include ``TEXT``
 
 .. [#f5] Boolean values are cast to 0, 1
 
@@ -153,7 +157,7 @@ Prepare the source ORC files, with the following requirements:
 
 .. [#f7] Will succeed if all values fit the destination type
 
-2. Place ORC files where SQream DB workers can access them
+Place ORC files where SQream DB workers can access them
 ================================================================
 
 Any worker may try to access files (unless explicitly specified with the :ref:`workload_manager`).
@@ -165,7 +169,7 @@ It is important that every node has the same view of the storage being used - me
 
 * For S3, ensure network access to the S3 endpoint. See our :ref:`s3` guide for more information.
 
-3. Figure out the table structure
+Figure out the table structure
 ===============================================
 
 Prior to loading data, you will need to write out the table structure, so that it matches the file structure.
@@ -186,14 +190,14 @@ We will make note of the file structure to create a matching ``CREATE FOREIGN TA
    
    CREATE FOREIGN TABLE ext_nba
    (
-        Name       VARCHAR(40),
-        Team       VARCHAR(40),
+        Name       TEXT(40),
+        Team       TEXT(40),
         Number     BIGINT,
-        Position   VARCHAR(2),
+        Position   TEXT(2),
         Age        BIGINT,
-        Height     VARCHAR(4),
+        Height     TEXT(4),
         Weight     BIGINT,
-        College    VARCHAR(40),
+        College    TEXT(40),
         Salary     FLOAT
     )
       WRAPPER orc_fdw
@@ -209,7 +213,7 @@ We will make note of the file structure to create a matching ``CREATE FOREIGN TA
    If the column type isn't supported, a possible workaround is to set it to any arbitrary type and then exclude it from subsequent queries.
 
 
-4. Verify table contents
+Verify table contents
 ====================================
 
 External tables do not verify file integrity or structure, so verify that the table definition matches up and contains the correct data.
@@ -232,7 +236,7 @@ External tables do not verify file integrity or structure, so verify that the ta
 
 If any errors show up at this stage, verify the structure of the ORC files and match them to the external table structure you created.
 
-5. Copying data into SQream DB
+Copying data into SQream DB
 ===================================
 
 To load the data into SQream DB, use the :ref:`create_table_as` statement:
@@ -288,7 +292,7 @@ Loading a table from a directory of ORC files on HDFS
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE ext_users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
    WRAPPER orc_fdw
      OPTIONS
        ( 
@@ -303,7 +307,7 @@ Loading a table from a bucket of files on S3
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE ext_users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
    WRAPPER orc_fdw
    OPTIONS
      (  LOCATION = 's3://pp-secret-bucket/users/*.ORC',
@@ -312,4 +316,4 @@ Loading a table from a bucket of files on S3
       )
    ;
    
-   CREATE TABLE users AS SELECT * FROM ext_users;
+   CREATE TABLE users AS SELECT * FROM ext_users;
\ No newline at end of file
diff --git a/data_ingestion/parquet.rst b/data_ingestion/parquet.rst
index 800c3122a..2128cf17c 100644
--- a/data_ingestion/parquet.rst
+++ b/data_ingestion/parquet.rst
@@ -1,39 +1,73 @@
 .. _parquet:
 
 **********************
-Inserting Data from a Parquet File
+Ingesting Data from a Parquet File
 **********************
+This guide covers ingesting data from Parquet files into SQream using :ref:`FOREIGN TABLE`, and describes the following:
 
-This guide covers inserting data from Parquet files into SQream DB using :ref:`FOREIGN TABLE`. 
-
-.. contents:: In this topic:
+.. contents:: 
    :local:
+   :depth: 1
 
-1. Prepare the files
-=====================
+Overview
+===================
+SQream supports ingesting data from Parquet files. However, because Parquet is an open-source, column-oriented storage format, you may prefer to retain your data in external Parquet files instead of ingesting it into SQream. SQream also supports executing queries directly on external Parquet files.
 
-Prepare the source Parquet files, with the following requirements:
+Preparing Your Parquet Files
+============================
+Prepare your source Parquet files according to the requirements described in the following table:
 
 .. list-table:: 
-   :widths: auto
+   :widths: 40 5 20 20 20 20 5 5 5 5 10
    :header-rows: 1
-   :stub-columns: 1
    
-   * -   SQream DB type →
+   * -   SQream Type →
    
-         Parquet source
+          ::
+
+         Parquet Source ↓
      - ``BOOL``
+
+          ::
+
      - ``TINYINT``
+
+          ::
+
      - ``SMALLINT``
+
+          ::
+
      - ``INT``
+
+          ::
+
      - ``BIGINT``
+
+          ::
+
      - ``REAL``
+
+          ::
+
      - ``DOUBLE``
-     - Text [#f0]_
+
+          ::
+
+     - ``TEXT`` [#f0]_
+
+          ::
+
      - ``DATE``
+
+          ::
+
      - ``DATETIME``
+
+          ::
+
    * - ``BOOLEAN``
-     - ✓ 
+     - Supported 
      - 
      - 
      - 
@@ -46,7 +80,7 @@ Prepare the source Parquet files, with the following requirements:
    * - ``INT16``
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -58,7 +92,7 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -70,7 +104,7 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -82,7 +116,7 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -94,7 +128,7 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
      - 
@@ -106,7 +140,7 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓
+     - Supported
      - 
      - 
    * - ``INT96`` [#f3]_
@@ -119,13 +153,13 @@ Prepare the source Parquet files, with the following requirements:
      - 
      - 
      - 
-     - ✓ [#f4]_
+     - Supported [#f4]_
 
-* If a Parquet file has an unsupported type like ``enum``, ``uuid``, ``time``, ``json``, ``bson``, ``lists``, ``maps``, but the data is not referenced in the table (it does not appear in the :ref:`SELECT` query), the statement will succeed. If the column is referenced, an error will be thrown to the user, explaining that the type is not supported, but the column may be ommited. This can be worked around. See more information in the examples.
+* Your statements will succeed even if your Parquet file contains an unsupported type, such as ``enum``, ``uuid``, ``time``, ``json``, ``bson``, ``lists``, or ``maps``, as long as the data is not referenced in the table (it does not appear in the :ref:`SELECT` query). If a column containing an unsupported type is referenced, an error message is displayed explaining that the type is not supported and that the column may be omitted. For a workaround, see **Omitting Unsupported Column Types** in the **Examples** section.
 
 .. rubric:: Footnotes
 
-.. [#f0] Text values include ``TEXT``, ``VARCHAR``, and ``NVARCHAR``
+.. [#f0] Text values are represented by the ``TEXT`` type
 
 .. [#f2] With UTF8 annotation
 
@@ -133,48 +167,41 @@ Prepare the source Parquet files, with the following requirements:
 
 .. [#f4] Any microseconds will be rounded down to milliseconds.
 
-2. Place Parquet files where SQream DB workers can access them
+Making Parquet Files Accessible to Workers
 ================================================================
-
-Any worker may try to access files (unless explicitly speficied with the :ref:`workload_manager`).
-It is important that every node has the same view of the storage being used - meaning, every SQream DB worker should have access to the files.
+To give workers access to files, every node must have the same view of the storage being used.
 
 * For files hosted on NFS, ensure that the mount is accessible from all servers.
 
-* For HDFS, ensure that SQream DB servers can access the HDFS name node with the correct user-id. See our :ref:`hdfs` guide for more information.
+* For HDFS, ensure that SQream servers can access the HDFS name node with the correct user ID. For more information, see the :ref:`hdfs` guide.
 
-* For S3, ensure network access to the S3 endpoint. See our :ref:`s3` guide for more information.
+* For S3, ensure network access to the S3 endpoint. For more information, see the :ref:`s3` guide.
 
-
-3. Figure out the table structure
+Creating a Table
 ===============================================
+Before loading data, you must write a ``CREATE FOREIGN TABLE`` statement that corresponds to the structure of the source files.
 
-Prior to loading data, you will need to write out the table structure, so that it matches the file structure.
-
-For example, to import the data from ``nba.parquet``, we will first look at the source table:
+The example in this section is based on the source ``nba.parquet`` file, shown below:
 
 .. csv-table:: nba.parquet
    :file: nba-t10.csv
    :widths: auto
    :header-rows: 1 
 
-* The file is stored on S3, at ``s3://sqream-demo-data/nba.parquet``.
-
-
-We will make note of the file structure to create a matching ``CREATE EXTERNAL TABLE`` statement.
+The following example shows a ``CREATE FOREIGN TABLE`` statement that matches the file structure of ``nba.parquet``:
 
 .. code-block:: postgres
    
    CREATE FOREIGN TABLE ext_nba
    (
-        Name       VARCHAR(40),
-        Team       VARCHAR(40),
+        Name       TEXT(40),
+        Team       TEXT(40),
         Number     BIGINT,
-        Position   VARCHAR(2),
+        Position   TEXT(2),
         Age        BIGINT,
-        Height     VARCHAR(4),
+        Height     TEXT(4),
         Weight     BIGINT,
-        College    VARCHAR(40),
+        College    TEXT(40),
         Salary     FLOAT
     )
     WRAPPER parquet_fdw
@@ -183,71 +210,53 @@ We will make note of the file structure to create a matching ``CREATE EXTERNAL T
       LOCATION =  's3://sqream-demo-data/nba.parquet'
     );
 
-.. tip:: 
-
-   Types in SQream DB must match Parquet types exactly.
-   
-   If the column type isn't supported, a possible workaround is to set it to any arbitrary type and then exclude it from subsequent queries.
-
-
-4. Verify table contents
-====================================
+.. tip:: SQream types must match the Parquet types exactly. If a column type isn't supported, a possible workaround is to set it to an arbitrary supported type and then exclude it from subsequent queries.
 
-External tables do not verify file integrity or structure, so verify that the table definition matches up and contains the correct data.
+.. note:: The **nba.parquet** file is stored on S3 at ``s3://sqream-demo-data/nba.parquet``.
 
-.. code-block:: psql
-   
-   t=> SELECT * FROM ext_nba LIMIT 10;
-   Name          | Team           | Number | Position | Age | Height | Weight | College           | Salary  
-   --------------+----------------+--------+----------+-----+--------+--------+-------------------+---------
-   Avery Bradley | Boston Celtics |      0 | PG       |  25 | 6-2    |    180 | Texas             |  7730337
-   Jae Crowder   | Boston Celtics |     99 | SF       |  25 | 6-6    |    235 | Marquette         |  6796117
-   John Holland  | Boston Celtics |     30 | SG       |  27 | 6-5    |    205 | Boston University |         
-   R.J. Hunter   | Boston Celtics |     28 | SG       |  22 | 6-5    |    185 | Georgia State     |  1148640
-   Jonas Jerebko | Boston Celtics |      8 | PF       |  29 | 6-10   |    231 |                   |  5000000
-   Amir Johnson  | Boston Celtics |     90 | PF       |  29 | 6-9    |    240 |                   | 12000000
-   Jordan Mickey | Boston Celtics |     55 | PF       |  21 | 6-8    |    235 | LSU               |  1170960
-   Kelly Olynyk  | Boston Celtics |     41 | C        |  25 | 7-0    |    238 | Gonzaga           |  2165160
-   Terry Rozier  | Boston Celtics |     12 | PG       |  22 | 6-2    |    190 | Louisville        |  1824360
-   Marcus Smart  | Boston Celtics |     36 | PG       |  22 | 6-4    |    220 | Oklahoma State    |  3431040
-
-If any errors show up at this stage, verify the structure of the Parquet files and match them to the external table structure you created.
-
-5. Copying data into SQream DB
+Ingesting Data into SQream
 ===================================
+This section describes the following:
 
-To load the data into SQream DB, use the :ref:`create_table_as` statement:
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Syntax
+-----------
+You can use the :ref:`create_table_as` statement to load the data into SQream, as shown below:
 
 .. code-block:: postgres
    
    CREATE TABLE nba AS
       SELECT * FROM ext_nba;
 
-Working around unsupported column types
----------------------------------------------
+Examples
+----------------
+This section describes the following examples:
 
-Suppose you only want to load some of the columns - for example, if one of the columns isn't supported.
+.. contents:: 
+   :local:
+   :depth: 1
 
-By ommitting unsupported columns from queries that access the ``EXTERNAL TABLE``, they will never be called, and will not cause a "type mismatch" error.
+Omitting Unsupported Column Types
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+When loading data, you can omit a column by selecting ``NULL`` in its place. Use this to omit unsupported columns from queries that access external tables; because the omitted columns are never referenced, they do not generate a "type mismatch" error.
 
-For this example, assume that the ``Position`` column isn't supported because of its type.
+In the example below, the ``Position`` column is not supported due to its type.
 
 .. code-block:: postgres
    
    CREATE TABLE nba AS
       SELECT Name, Team, Number, NULL as Position, Age, Height, Weight, College, Salary FROM ext_nba;
-   
-   -- We ommitted the unsupported column `Position` from this query, and replaced it with a default ``NULL`` value, to maintain the same table structure.
-
 
-Modifying data during the copy process
-------------------------------------------
+Modifying Data Before Loading
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+One of the main reasons for staging data in an external table is to examine and modify the table contents before loading them into SQream.
 
-One of the main reasons for staging data with ``EXTERNAL TABLE`` is to examine the contents and modify them before loading them.
+For example, we can convert **pounds** to **kilograms** using the ``CREATE TABLE AS`` statement.
 
-Assume we are unhappy with weight being in pounds, because we want to use kilograms instead. We can apply the transformation as part of the :ref:`create_table_as` statement.
-
-Similar to the previous example, we will also set the ``Position`` column as a default ``NULL``.
+In the example below, the ``Position`` column is also set to the default ``NULL``.
 
 .. code-block:: postgres
    
@@ -256,20 +265,14 @@ Similar to the previous example, we will also set the ``Position`` column as a d
               FROM ext_nba
               ORDER BY weight;
 
-
-Further Parquet loading examples
-=======================================
-
-:ref:`create_foreign_table` contains several configuration options. See more in :ref:`the CREATE FOREIGN TABLE parameters section`.
-
-
-Loading a table from a directory of Parquet files on HDFS
-------------------------------------------------------------
+Loading a Table from a Directory of Parquet Files on HDFS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The following is an example of loading a table from a directory of Parquet files on HDFS:
 
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE ext_users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
    WRAPPER parquet_fdw
    OPTIONS
      (
@@ -278,13 +281,14 @@ Loading a table from a directory of Parquet files on HDFS
    
    CREATE TABLE users AS SELECT * FROM ext_users;
 
-Loading a table from a bucket of files on S3
------------------------------------------------
+Loading a Table from a Directory of Parquet Files on S3
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The following is an example of loading a table from a directory of Parquet files on S3:
 
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE ext_users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name TEXT(30) NOT NULL, email TEXT(50) NOT NULL)  
    WRAPPER parquet_fdw
    OPTIONS
      ( LOCATION = 's3://pp-secret-bucket/users/*.parquet',
@@ -293,3 +297,29 @@ Loading a table from a bucket of files on S3
       );
    
    CREATE TABLE users AS SELECT * FROM ext_users;
+
+For more configuration option examples, navigate to the :ref:`create_foreign_table` page and see the **Parameters** table.
+
+Best Practices
+==============
+Because external tables do not verify file integrity or structure, SQream recommends manually verifying your table output when ingesting Parquet files. This lets you confirm that the table definition matches the source files and contains the correct data.
+
+The following is an example of the output based on the **nba.parquet** table:
+
+.. code-block:: psql
+   
+   t=> SELECT * FROM ext_nba LIMIT 10;
+   Name          | Team           | Number | Position | Age | Height | Weight | College           | Salary  
+   --------------+----------------+--------+----------+-----+--------+--------+-------------------+---------
+   Avery Bradley | Boston Celtics |      0 | PG       |  25 | 6-2    |    180 | Texas             |  7730337
+   Jae Crowder   | Boston Celtics |     99 | SF       |  25 | 6-6    |    235 | Marquette         |  6796117
+   John Holland  | Boston Celtics |     30 | SG       |  27 | 6-5    |    205 | Boston University |         
+   R.J. Hunter   | Boston Celtics |     28 | SG       |  22 | 6-5    |    185 | Georgia State     |  1148640
+   Jonas Jerebko | Boston Celtics |      8 | PF       |  29 | 6-10   |    231 |                   |  5000000
+   Amir Johnson  | Boston Celtics |     90 | PF       |  29 | 6-9    |    240 |                   | 12000000
+   Jordan Mickey | Boston Celtics |     55 | PF       |  21 | 6-8    |    235 | LSU               |  1170960
+   Kelly Olynyk  | Boston Celtics |     41 | C        |  25 | 7-0    |    238 | Gonzaga           |  2165160
+   Terry Rozier  | Boston Celtics |     12 | PG       |  22 | 6-2    |    190 | Louisville        |  1824360
+   Marcus Smart  | Boston Celtics |     36 | PG       |  22 | 6-4    |    220 | Oklahoma State    |  3431040
+
+.. note:: If your table output has errors, verify that the structure of the Parquet files correctly corresponds to the external table structure that you created.
\ No newline at end of file
diff --git a/data_type_guides/converting_and_casting_types.rst b/data_type_guides/converting_and_casting_types.rst
index ee5e273da..25e150881 100644
--- a/data_type_guides/converting_and_casting_types.rst
+++ b/data_type_guides/converting_and_casting_types.rst
@@ -3,7 +3,7 @@
 *************************
 Converting and Casting Types
 *************************
-SQream supports explicit and implicit casting and type conversion. The system may automatically add implicit casts when combining different data types in the same expression. In many cases, while the details related to this are not important, they can affect the query results of a query. When necessary, an explicit cast can be used to override the automatic cast added by SQream DB.
+SQream supports explicit and implicit casting and type conversion. The system may automatically add implicit casts when combining different data types in the same expression. In many cases, while the details related to this are not important, they can affect the results of a query. When necessary, an explicit cast can be used to override the automatic cast added by SQream DB.
 
 For example, the ANSI standard defines a ``SUM()`` aggregation over an ``INT`` column as an ``INT``. However, when dealing with large amounts of data this could cause an overflow. 
 
@@ -15,7 +15,7 @@ You can rectify this by casting the value to a larger data type, as shown below:
 
 SQream supports the following three data conversion types:
 
-* ``CAST( TO )``, to convert a value from one type to another. For example, ``CAST('1997-01-01' TO DATE)``, ``CAST(3.45 TO SMALLINT)``, ``CAST(some_column TO VARCHAR(30))``.
+* ``CAST( TO )``, to convert a value from one type to another. For example, ``CAST('1997-01-01' TO DATE)``, ``CAST(3.45 TO SMALLINT)``, ``CAST(some_column TO TEXT)``.
 
    ::
   
@@ -23,4 +23,24 @@ SQream supports the following three data conversion types:
 
    ::
   
-* See the :ref:`SQL functions reference ` for additional functions that convert from a specific value which is not an SQL type, such as :ref:`from_unixts`, etc.
\ No newline at end of file
+* See the :ref:`SQL functions reference ` for additional functions that convert from a specific value which is not an SQL type, such as :ref:`from_unixts`, etc.
+
+
+Supported Casts
+---------------
+
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+|                                              | **BOOL**  | **TINYINT**/**SMALLINT**/**INT**/**BIGINT**  | **REAL/FLOAT**  | **NUMERIC**  | **DATE**/**DATETIME**  | **VARCHAR**/**TEXT**  |
++==============================================+===========+==============================================+=================+==============+========================+=======================+
+| **BOOL**                                     | N/A       | ✓                                            | ✗               | ✗            | ✗                      | ✓                     |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+| **TINYINT**/**SMALLINT**/**INT**/**BIGINT**  | ✓         | N/A                                          | ✓               | ✓            | ✗                      | ✓                     |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+| **REAL/FLOAT**                               | ✗         | ✓                                            | N/A             | ✓            | ✗                      | ✓                     |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+| **NUMERIC**                                  | ✗         | ✓                                            | ✓               | ✓            | ✗                      | ✓                     |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+| **DATE**/**DATETIME**                        | ✗         | ✗                                            | ✗               | ✗            | N/A                    | ✓                     |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
+| **VARCHAR**/**TEXT**                         | ✓         | ✓                                            | ✓               | ✓            | ✓                      | N/A                   |
++----------------------------------------------+-----------+----------------------------------------------+-----------------+--------------+------------------------+-----------------------+
diff --git a/data_type_guides/sql_data_types_date.rst b/data_type_guides/sql_data_types_date.rst
index da83f80cc..88236c113 100644
--- a/data_type_guides/sql_data_types_date.rst
+++ b/data_type_guides/sql_data_types_date.rst
@@ -108,5 +108,5 @@ The following table shows the possible ``DATE`` and ``DATETIME`` value conversio
    
    * - Type
      - Details
-   * - ``VARCHAR(n)``
+   * - ``TEXT``
      - ``'1997-01-01'`` → ``'1997-01-01'``, ``'1955-11-05 01:24'`` → ``'1955-11-05 01:24:00.000'``
\ No newline at end of file
diff --git a/data_type_guides/sql_data_types_floating_point.rst b/data_type_guides/sql_data_types_floating_point.rst
index 18227140c..3edc8362d 100644
--- a/data_type_guides/sql_data_types_floating_point.rst
+++ b/data_type_guides/sql_data_types_floating_point.rst
@@ -74,7 +74,6 @@ The following table shows the possible Floating Point value conversions:
      - ``1.0`` → ``true``, ``0.0`` → ``false``
    * - ``TINYINT``, ``SMALLINT``, ``INT``, ``BIGINT``
      - ``2.0`` → ``2``, ``3.14159265358979`` → ``3``, ``2.718281828459`` → ``2``, ``0.5`` → ``0``, ``1.5`` → ``1``
-   * - ``VARCHAR(n)`` (n > 6 recommended)
-     - ``1`` → ``'1.0000'``, ``3.14159265358979`` → ``'3.1416'``
+
 
 .. note:: As shown in the above examples, casting ``real`` to ``int`` rounds down.
\ No newline at end of file
diff --git a/data_type_guides/sql_data_types_integer.rst b/data_type_guides/sql_data_types_integer.rst
index 9d4210731..cd27f6956 100644
--- a/data_type_guides/sql_data_types_integer.rst
+++ b/data_type_guides/sql_data_types_integer.rst
@@ -79,5 +79,5 @@ The following table shows the possible Integer value conversions:
      - Details
    * - ``REAL``, ``DOUBLE``
      - ``1`` → ``1.0``, ``-32`` → ``-32.0``
-   * - ``VARCHAR(n)`` (All numberic values must fit in the string length)
+   * - ``TEXT`` (All numeric values must fit in the string length)
      - ``1`` → ``'1'``, ``2451`` → ``'2451'``
\ No newline at end of file
diff --git a/data_type_guides/sql_data_types_string.rst b/data_type_guides/sql_data_types_string.rst
index beb970b8d..df4261d8f 100644
--- a/data_type_guides/sql_data_types_string.rst
+++ b/data_type_guides/sql_data_types_string.rst
@@ -3,42 +3,18 @@
 *************************
 String
 *************************
-``TEXT`` and ``VARCHAR`` are types designed for storing text or strings of characters.
+``TEXT`` is designed for storing text or strings of characters.
 
-SQream separates ASCII (``VARCHAR``) and UTF-8 representations (``TEXT``).
-
-.. note:: The data type ``NVARCHAR`` has been deprecated by ``TEXT`` as of version 2020.1.
-
-String Types
-^^^^^^^^^^^^^^^^^^^^^^
-The following table describes the String types:
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   
-   * - Name
-     - Details
-     - Data Size (Not Null, Uncompressed)
-     - Example
-   * - ``TEXT [(n)]``, ``NVARCHAR (n)``
-     - Varaiable length string - UTF-8 unicode. ``NVARCHAR`` is synonymous with ``TEXT``.
-     - Up to ``4*n`` bytes
-     - ``'キウイは楽しい鳥です'``
-   * - ``VARCHAR (n)``
-     - Variable length string - ASCII only
-     - ``n`` bytes
-     - ``'Kiwis have tiny wings, but cannot fly.'``
+SQream stores ``TEXT`` values using a UTF-8 representation.
 
 Length
 ^^^^^^^^^
-When using ``TEXT``, specifying a size is optional. If not specified, the text field carries no constraints. To limit the size of the input, use ``VARCHAR(n)`` or ``TEXT(n)``, where ``n`` is the permitted number of characters.
+When using ``TEXT``, specifying a size is optional. If not specified, the text field carries no constraints. To limit the size of the input, use ``TEXT(n)``, where ``n`` is the permitted number of characters.
 
 The following apply to setting the String type length:
 
 * If the data exceeds the column length limit on ``INSERT`` or ``COPY`` operations, SQream DB will return an error.
-* When casting or converting, the string has to fit in the target. For example, ``'Kiwis are weird birds' :: VARCHAR(5)`` will return an error. Use ``SUBSTRING`` to truncate the length of the string.
-* ``VARCHAR`` strings are padded with spaces.
+* When casting or converting, the string has to fit in the target. For example, ``'Kiwis are weird birds' :: TEXT(5)`` will return an error. Use ``SUBSTRING`` to truncate the length of the string.
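+
+For example, a string that is too long for the target type can be truncated explicitly before the cast (a short illustrative sketch):
+
+.. code-block:: postgres
+
+   -- Fails: the string does not fit into TEXT(5)
+   SELECT 'Kiwis are weird birds' :: TEXT(5);
+
+   -- Works: truncate to 5 characters first
+   SELECT SUBSTRING('Kiwis are weird birds', 1, 5) :: TEXT(5);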
 
 Syntax
 ^^^^^^^^
@@ -47,7 +23,7 @@ String types can be written with standard SQL string literals, which are enclose
 
 Size
 ^^^^^^
-``VARCHAR(n)`` can occupy up to *n* bytes, whereas ``TEXT(n)`` can occupy up to *4*n* bytes. However, the size of strings is variable and is compressed by SQream.
+``TEXT(n)`` can occupy up to ``4*n`` bytes. However, the size of strings is variable and is compressed by SQream.
 
 String Examples
 ^^^^^^^^^^
diff --git a/data_type_guides/supported_data_types.rst b/data_type_guides/supported_data_types.rst
index 2743b054b..afc735ad1 100644
--- a/data_type_guides/supported_data_types.rst
+++ b/data_type_guides/supported_data_types.rst
@@ -51,17 +51,17 @@ The following table shows the supported data types.
      - 8 bytes
      - ``0.000003``
      - ``FLOAT``/``DOUBLE PRECISION``
-   * - ``TEXT [(n)]``, ``NVARCHAR (n)``
+   * - ``TEXT [(n)]``
      - Variable length string - UTF-8 unicode
      - Up to ``4*n`` bytes
      - ``'キウイは楽しい鳥です'``
-     - ``CHAR VARYING``, ``CHAR``, ``CHARACTER VARYING``, ``CHARACTER``, ``NATIONAL CHARACTER VARYING``, ``NATIONAL CHARACTER``, ``NCHAR VARYING``, ``NCHAR``, ``NVARCHAR``
+     - ``CHAR VARYING``, ``CHAR``, ``CHARACTER VARYING``, ``CHARACTER``, ``NATIONAL CHARACTER VARYING``, ``NATIONAL CHARACTER``, ``NCHAR VARYING``, ``NCHAR``
    * - ``NUMERIC``
      -  38 digits
      - 16 bytes
      - ``0.123245678901234567890123456789012345678``
      - ``DECIMAL``
-   * - ``VARCHAR (n)``
+   * - ``TEXT (n)``
      - Variable length string - ASCII only
      - ``n`` bytes
      - ``'Kiwis have tiny wings, but cannot fly.'``
diff --git a/external_storage_platforms/hdfs.rst b/external_storage_platforms/hdfs.rst
new file mode 100644
index 000000000..a6ffe0ead
--- /dev/null
+++ b/external_storage_platforms/hdfs.rst
@@ -0,0 +1,252 @@
+.. _hdfs:
+
+.. _back_to_top_hdfs:
+
+Using SQream in an HDFS Environment
+=======================================
+
+.. _configuring_an_hdfs_environment_for_the_user_sqream:
+
+Configuring an HDFS Environment for the User **sqream**
+----------------------------------------------------------
+
+This section describes how to configure an HDFS environment for the user **sqream** and is only relevant for users with an HDFS environment.
+
+**To configure an HDFS environment for the user sqream:**
+
+1. Open your **bash_profile** configuration file for editing:
+
+   .. code-block:: console
+     
+       $ vim /home/sqream/.bash_profile
+       
+   Add the following environment variable settings to the file:
+
+   .. code-block:: console
+     
+      #PATH=$PATH:$HOME/.local/bin:$HOME/bin
+
+      #export PATH
+
+      # PS1
+      #MYIP=$(curl -s -XGET "http://ip-api.com/json" | python -c 'import json,sys; jstr=json.load(sys.stdin); print jstr["query"]')
+      #PS1="\[\e[01;32m\]\D{%F %T} \[\e[01;33m\]\u@\[\e[01;36m\]$MYIP \[\e[01;31m\]\w\[\e[37;36m\]\$ \[\e[1;37m\]"
+
+      SQREAM_HOME=/usr/local/sqream
+      export SQREAM_HOME
+
+      export JAVA_HOME=${SQREAM_HOME}/hdfs/jdk
+      export HADOOP_INSTALL=${SQREAM_HOME}/hdfs/hadoop
+      export CLASSPATH=`${HADOOP_INSTALL}/bin/hadoop classpath --glob`
+      export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
+      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${SQREAM_HOME}/lib:$HADOOP_COMMON_LIB_NATIVE_DIR
+
+
+      PATH=$PATH:$HOME/.local/bin:$HOME/bin:${SQREAM_HOME}/bin/:${JAVA_HOME}/bin:$HADOOP_INSTALL/bin
+      export PATH
+
+2. Reload the profile to apply your changes:
+
+   .. code-block:: console
+     
+      $ source /home/sqream/.bash_profile
+       
+3. Check if you can access Hadoop from your machine:       
+       
+   .. code-block:: console
+     
+      $ hadoop fs -ls hdfs://:8020/
+      
+   .. note:: If you cannot access Hadoop from your machine because the cluster uses Kerberos, see :ref:`authenticate_hadoop_servers_that_require_kerberos` below.
+
+
+4. Verify that an HDFS environment exists for SQream services:
+
+   .. code-block:: console
+     
+      $ ls -l /etc/sqream/sqream_env.sh
+	  
+.. _step_6:
+
+      
+5. If an HDFS environment does not exist for SQream services, create one (sqream_env.sh):
+   
+   .. code-block:: console
+     
+      #!/bin/bash
+
+      SQREAM_HOME=/usr/local/sqream
+      export SQREAM_HOME
+
+      export JAVA_HOME=${SQREAM_HOME}/hdfs/jdk
+      export HADOOP_INSTALL=${SQREAM_HOME}/hdfs/hadoop
+      export CLASSPATH=`${HADOOP_INSTALL}/bin/hadoop classpath --glob`
+      export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_INSTALL}/lib/native
+      export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${SQREAM_HOME}/lib:$HADOOP_COMMON_LIB_NATIVE_DIR
+
+
+      PATH=$PATH:$HOME/.local/bin:$HOME/bin:${SQREAM_HOME}/bin/:${JAVA_HOME}/bin:$HADOOP_INSTALL/bin
+      export PATH
+	  
+:ref:`Back to top `
+
+	  
+.. _authenticate_hadoop_servers_that_require_kerberos:
+
+Authenticating Hadoop Servers that Require Kerberos
+---------------------------------------------------
+
+If your Hadoop server requires Kerberos authentication, do the following:
+
+1. Create a principal for the user **sqream**.
+
+   .. code-block:: console
+   
+      $ kadmin -p root/admin@SQ.COM
+      $ addprinc sqream@SQ.COM
+      
+2. If you do not know your Kerberos root credentials, connect to the Kerberos server as a root user with ssh and run **kadmin.local**:
+
+   .. code-block:: console
+   
+      $ kadmin.local
+      
+   Running **kadmin.local** does not require a password.
+
+3. Change the password for the principal **sqream@SQ.COM**:
+
+   .. code-block:: console
+   
+      $ change_password sqream@SQ.COM
+
+4. Connect to the Hadoop name node using ssh, and navigate to the following directory:
+
+   .. code-block:: console
+   
+      $ cd /var/run/cloudera-scm-agent/process
+
+5. List the directory contents, sorted by modification time:
+
+   .. code-block:: console
+   
+      $ ls -lrt
+
+6. Look for a recently updated folder containing the text **hdfs**.
+
+   The following is an example of the correct folder name:
+
+   .. code-block:: console
+   
+      cd -hdfs-
+	  
+   This folder should contain a file named **hdfs.keytab** or another similar .keytab file.
+   
+
+
+7. Copy the .keytab file to the **sqream** user's home directory on each remote machine on which you plan to use Hadoop.
+
+8. Copy the following files to the ``sqream@server:/hdfs/hadoop/etc/hadoop`` directory:
+
+   * core-site.xml
+   * hdfs-site.xml
+
+9. Connect to the SQream server and verify that the .keytab file is owned by the user **sqream** and has the correct permissions:
+
+   .. code-block:: console
+   
+      $ sudo chown sqream:sqream /home/sqream/hdfs.keytab
+      $ sudo chmod 600 /home/sqream/hdfs.keytab
+
+10. Log into the sqream server.
+
+11. Log in as the user **sqream**.
+
+12. Navigate to the home directory and check the name of the Kerberos principal represented by the .keytab file:
+
+   .. code-block:: console
+   
+      $ klist -kt hdfs.keytab
+
+   The following is an example of the correct output:
+
+   .. code-block:: console
+   
+      sqream@Host-121 ~ $ klist -kt hdfs.keytab
+      Keytab name: FILE:hdfs.keytab
+      KVNO Timestamp           Principal
+      ---- ------------------- ------------------------------------------------------
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 HTTP/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+         5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
+
+13. Verify that the hdfs service named **hdfs/nn1@SQ.COM** is shown in the generated output above.
+
+14. Run the following:
+
+   .. code-block:: console
+   
+      $ kinit -kt hdfs.keytab hdfs/nn1@SQ.COM
+
+15. Check the output:
+  
+   .. code-block:: console
+   
+      $ klist
+      
+   The following is an example of the correct output:
+
+   .. code-block:: console
+   
+      Ticket cache: FILE:/tmp/krb5cc_1000
+      Default principal: sqream@SQ.COM
+
+      Valid starting       Expires              Service principal
+      09/16/2020 13:44:18  09/17/2020 13:44:18  krbtgt/SQ.COM@SQ.COM
+
+16. List the files located on the Hadoop server, specifying the server name or IP address in the URI:
+
+   .. code-block:: console
+   
+      $ hadoop fs -ls hdfs://:8020/
+
+17. Do one of the following:
+
+    * If the file list is output, continue to the next step.
+    * If the list is not output, verify that your environment has been set up correctly.
+	
+If any of the following are empty, verify that you followed :ref:`Step 6 ` in the **Configuring an HDFS Environment for the User sqream** section above correctly:
+
+  .. code-block:: console
+   
+      $ echo $JAVA_HOME
+      $ echo $SQREAM_HOME
+      $ echo $CLASSPATH
+      $ echo $HADOOP_COMMON_LIB_NATIVE_DIR
+      $ echo $LD_LIBRARY_PATH
+      $ echo $PATH
+
+18. Verify that you copied the correct keytab file.
+
+19. Review this procedure to verify that you have followed each step.
+
+:ref:`Back to top `
\ No newline at end of file
diff --git a/external_storage_platforms/index.rst b/external_storage_platforms/index.rst
new file mode 100644
index 000000000..827fdf190
--- /dev/null
+++ b/external_storage_platforms/index.rst
@@ -0,0 +1,26 @@
+.. _external_storage_platforms:
+
+**************************
+External Storage Platforms
+**************************
+SQream supports the following external storage platforms:
+
+.. toctree::
+   :maxdepth: 1
+   :titlesonly:
+
+   s3
+   
+   hdfs
+   
+For more information, see the following:
+
+* :ref:`foreign_tables`
+
+   ::
+   
+* :ref:`copy_from`
+
+   ::
+   
+* :ref:`copy_to`
diff --git a/external_storage_platforms/s3.rst b/external_storage_platforms/s3.rst
new file mode 100644
index 000000000..a3111f3b9
--- /dev/null
+++ b/external_storage_platforms/s3.rst
@@ -0,0 +1,127 @@
+.. _s3:
+
+******************************
+Inserting Data Using Amazon S3
+******************************
+SQream uses a native S3 connector for inserting data. The ``s3://`` URI specifies an external file path on an S3 bucket. File names may contain wildcard characters, and the files can be in CSV or columnar format, such as Parquet and ORC.
+
+The **Amazon S3** page describes the following topics:
+
+.. contents::
+   :local:
+   
+S3 Configuration
+==============================
+
+A best practice for granting access to AWS S3 is to create an `Identity and Access Management (IAM) `_ user account. If creating an IAM user account is not possible, you may follow AWS guidelines for `using the global configuration object `_ and setting an `AWS region `_.
+
+S3 URI Format
+===============
+With S3, specify a location for a file (or files) when using :ref:`copy_from` or :ref:`external_tables`.
+
+The following is an example of the general S3 syntax:
+
+.. code-block:: console
+ 
+   s3://bucket_name/path
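+
+As noted above, file names may contain wildcard characters to match multiple files. For example (the bucket and path below are illustrative):
+
+.. code-block:: console
+ 
+   s3://my-bucket/data/*.parquet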
+
+Authentication
+=================
+SQream supports ``AWS ID`` and ``AWS SECRET`` authentication. These should be specified when executing a statement.
+
+Examples
+==========
+Use a foreign table to stage data from S3 before loading from CSV, Parquet, or ORC files.
+
+The **Examples** section includes the following examples:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Planning for Data Staging
+--------------------------------
+The examples in this section are based on a CSV file, as shown in the following table:
+
+.. csv-table:: nba.csv
+   :file: ../nba-t10.csv
+   :widths: auto
+   :header-rows: 1 
+
+The file is stored on Amazon S3, and this bucket is public and listable. Take note of the file structure so that you can create a matching ``CREATE FOREIGN TABLE`` statement.
+
+Creating a Foreign Table
+-----------------------------
+Based on the source file's structure, you can create a foreign table with the appropriate structure, and point it to your file as shown in the following example:
+
+.. code-block:: postgres
+   
+   CREATE FOREIGN TABLE nba
+   (
+      Name varchar(40),
+      Team varchar(40),
+      Number tinyint,
+      Position varchar(2),
+      Age tinyint,
+      Height varchar(4),
+      Weight real,
+      College varchar(40),
+      Salary float
+    )
+    WRAPPER csv_fdw
+    OPTIONS
+      (
+         LOCATION = 's3://sqream-demo-data/nba_players.csv',
+         RECORD_DELIMITER = '\r\n' -- DOS delimited file
+      )
+    ;
+
+In the example above, the file format is CSV, and it is stored as an S3 object. If the path is on HDFS, you must change the URI accordingly. Note that the record delimiter is a DOS newline (``\r\n``).
+
+For more information, see the following:
+
+* **Creating a foreign table** - see :ref:`create a foreign table`.
+* **Using SQream in an HDFS environment** - see :ref:`hdfs`.
+
+Querying Foreign Tables
+------------------------------
+The following shows the data in the foreign table:
+
+.. code-block:: psql
+   
+   t=> SELECT * FROM nba LIMIT 10;
+   name          | team           | number | position | age | height | weight | college           | salary  
+   --------------+----------------+--------+----------+-----+--------+--------+-------------------+---------
+   Avery Bradley | Boston Celtics |      0 | PG       |  25 | 6-2    |    180 | Texas             |  7730337
+   Jae Crowder   | Boston Celtics |     99 | SF       |  25 | 6-6    |    235 | Marquette         |  6796117
+   John Holland  | Boston Celtics |     30 | SG       |  27 | 6-5    |    205 | Boston University |         
+   R.J. Hunter   | Boston Celtics |     28 | SG       |  22 | 6-5    |    185 | Georgia State     |  1148640
+   Jonas Jerebko | Boston Celtics |      8 | PF       |  29 | 6-10   |    231 |                   |  5000000
+   Amir Johnson  | Boston Celtics |     90 | PF       |  29 | 6-9    |    240 |                   | 12000000
+   Jordan Mickey | Boston Celtics |     55 | PF       |  21 | 6-8    |    235 | LSU               |  1170960
+   Kelly Olynyk  | Boston Celtics |     41 | C        |  25 | 7-0    |    238 | Gonzaga           |  2165160
+   Terry Rozier  | Boston Celtics |     12 | PG       |  22 | 6-2    |    190 | Louisville        |  1824360
+   Marcus Smart  | Boston Celtics |     36 | PG       |  22 | 6-4    |    220 | Oklahoma State    |  3431040
+   
+Bulk Loading a File from a Public S3 Bucket
+----------------------------------------------
+The ``COPY FROM`` command can also be used to load data without staging it first.
+
+.. note:: The bucket must be publicly accessible and its objects must be listable.
+
+The following is an example of bulk loading a file from a public S3 bucket:
+
+.. code-block:: postgres
+
+   COPY nba FROM 's3://sqream-demo-data/nba.csv' WITH OFFSET 2 RECORD DELIMITER '\r\n';
+   
+For more information on the ``COPY FROM`` command, see :ref:`copy_from`.
+
+Loading Files from an Authenticated S3 Bucket
+---------------------------------------------------
+The following is an example of loading files from an authenticated S3 bucket:
+
+.. code-block:: postgres
+
+   COPY nba FROM 's3://secret-bucket/*.csv' WITH OFFSET 2 RECORD DELIMITER '\r\n' 
+   AWS_ID '12345678'
+   AWS_SECRET 'super_secretive_secret';
\ No newline at end of file
diff --git a/external_storage_platforms/storing_data_on_parquet.rst b/external_storage_platforms/storing_data_on_parquet.rst
new file mode 100644
index 000000000..0aa54dbc4
--- /dev/null
+++ b/external_storage_platforms/storing_data_on_parquet.rst
@@ -0,0 +1,8 @@
+.. _storing_data_on_parquet:
+
+***********************
+Storing Data on Parquet
+***********************
+As described in the **Data Ingestion Sources** section, users can insert data into SQream from Parquet files. However, because Parquet is an open-source, column-oriented data storage format, users may prefer to retain their data in Parquet files rather than inserting it into SQream. To support this, SQream users can execute queries directly on external Parquet files.
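+
+For example, a foreign table can be defined over a Parquet file and queried in place (the file path, table name, and column list below are illustrative):
+
+.. code-block:: postgres
+
+   CREATE FOREIGN TABLE parquet_data
+   (
+      id BIGINT,
+      name TEXT(40)
+   )
+   WRAPPER parquet_fdw
+   OPTIONS (LOCATION = '/tmp/file.parquet');
+
+   SELECT * FROM parquet_data;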
+
+
diff --git a/feature_guides/automatic_foreign_table_ddl_resolution.rst b/feature_guides/automatic_foreign_table_ddl_resolution.rst
new file mode 100644
index 000000000..6c0d1605c
--- /dev/null
+++ b/feature_guides/automatic_foreign_table_ddl_resolution.rst
@@ -0,0 +1,46 @@
+.. _automatic_foreign_table_ddl_resolution:
+
+**************************************
+Automatic Foreign Table DDL Resolution
+**************************************
+The **Automatic Foreign Table DDL Resolution** page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+   
+Overview
+----------
+SQream must be able to access a schema when reading and mapping external files to a foreign table. To facilitate this, you must specify the correct schema in the statement that creates the foreign table, which must also include the correct list of columns. To avoid human error related to this complex process, SQream can now automatically identify the corresponding schema, saving you the time and effort required to build your schema manually. This is especially useful for particular file formats, such as Parquet, which include a built-in schema declaration.
+
+Usage Notes
+-----------
+The automatic foreign table DDL resolution feature supports Parquet, ORC, JSON, and Avro files; using it with CSV files generates an error. You can activate this feature by omitting the column list when you create a foreign table, as described in the **Syntax** section below.
+
+When using this feature, the path you specify in the ``LOCATION`` option must point to at least one existing file. If no files exist for the schema to read, an error is generated. If this occurs, you can still specify the schema manually.
+
+.. note:: When using this feature, SQream assumes that all files in the path use the same schema.
+
+Syntax
+----------
+The following is the syntax for using the automatic foreign table DDL resolution feature:
+
+.. code-block:: console
+   
+   CREATE FOREIGN TABLE table_name
+   [FOREIGN DATA] WRAPPER fdw_name
+   [OPTIONS (...)];
+
+Example
+----------
+The following is an example of using the automatic foreign table DDL resolution feature:
+
+.. code-block:: console
+
+   create foreign table parquet_table
+   wrapper parquet_fdw
+   options (location = '/tmp/file.parquet');
+   
+Permissions
+-----------
+The automatic foreign table DDL resolution feature requires **Read** permissions.
\ No newline at end of file
diff --git a/feature_guides/compression.rst b/feature_guides/compression.rst
index 710641036..e6143bc59 100644
--- a/feature_guides/compression.rst
+++ b/feature_guides/compression.rst
@@ -3,57 +3,58 @@
 ***********************
 Compression
 ***********************
+The **Compression** page describes the following:
 
-SQream DB uses compression and encoding techniques to optimize query performance and save on disk space.
+.. contents:: 
+   :local:
+   :depth: 1
 
-Encoding
-=============
+.. |icon-new_dark_gray_2022.1.1.png| image:: /_static/images/new_dark_gray_2022.1.1.png
+   :align: middle
+   :width: 110
 
-Encoding converts data into a common format.
+SQream uses a variety of compression and encoding methods to optimize query performance and to save disk space.
 
-When data is stored in a columnar format, it is often in a common format. This is in contrast with data stored in CSVs for example, where everything is stored in a text format.
-
-Because encoding uses specific data formats and encodings, it increases performance and reduces data size. 
+Encoding
+=============
+**Encoding** is an automatic operation that converts data into common formats. For example, data stored in a columnar format is often already encoded in a common format, in contrast with data stored in a CSV file, where everything is stored as text.
 
-SQream DB encodes data in several ways depending on the data type. For example, a date is stored as an integer, with March 1st 1CE as the start. This is a lot more efficient than encoding the date as a string, and offers a wider range than storing it relative to the Unix Epoch. 
+Encoding enhances performance and reduces data size by using specific data formats and encoding methods. SQream encodes data in a number of ways in accordance with the data type. For example, a **date** is stored as an **integer**, starting with **March 1st 1CE**, which is significantly more efficient than encoding the date as a string. In addition, it offers a wider range than storing it relative to the Unix Epoch. 
 
-Compression
+Lossless Compression
 ==============
+**Compression** transforms data into a smaller format without sacrificing accuracy, known as **lossless compression**.
 
-Compression transforms data into a smaller format without losing accuracy (lossless).
-
-After encoding a set of column values, SQream DB packs the data and compresses it.
-
-Before data can be accessed, SQream DB needs to decompress it.
+After encoding a set of column values, SQream packs and compresses the data; the data is decompressed when it is accessed. Depending on the compression scheme used, these operations can be performed on the CPU or the GPU. Some users find that GPU compression provides better performance.
 
-Depending on the compression scheme, the operations can be performed on the CPU or the GPU. Some users find that GPU compressions perform better for their data.
-
-Automatic compression
+Automatic Compression
 ------------------------
-
-By default, SQream DB automatically compresses every column (see :ref:`Specifying compressions` below for overriding default compressions). This feature is called **automatic adaptive compression** strategy.
+By default, SQream automatically compresses every column (see :ref:`Specifying Compression Strategies` below for overriding default compressions). This feature is called **automatic adaptive compression** strategy.
 
 When loading data, SQream DB automatically decides on the compression schemes for specific chunks of data by trying several compression schemes and selecting the one that performs best. SQream DB tries to balance more aggressive compressions with the time and CPU/GPU time required to compress and decompress the data.
 
-Compression strategies
+Compression Methods
 ------------------------
 
+
+The following table shows the supported compression methods:
+
 .. list-table:: 
    :widths: auto
    :header-rows: 1
 
-   * - Compression name
-     - Supported data types
+   * - Compression Method
+     - Supported Data Types
      - Description
      - Location
    * - ``FLAT``
      - All types
      - No compression (forced)
-     - -
+     - NA
    * - ``DEFAULT``
      - All types
      - Automatic scheme selection
-     - -
+     - NA
    * - ``DICT``
      - Integer types, dates and timestamps, short texts
      - 
@@ -89,26 +90,34 @@ Compression strategies
      - Integer types
      - Optimized RLE + Delta type for built-in :ref:`identity columns`. 
      - GPU
+   * - ``zlib``
+     - All types
+     - Compresses and decompresses data using the general-purpose **zlib** format.
+     - CPU
+	 
+.. note:: Automatic compression does **not** select the ``zlib`` compression method.
 
 .. _specifying_compressions:
 
-Specifying compression strategies
+Specifying Compression Strategies
 ----------------------------------
+When you create a table without defining any compression specifications, SQream defaults to automatic adaptive compression (``"default"``). However, you can override this by specifying a compression strategy when creating a table.
 
-When creating a table without any compression specifications, SQream DB defaults to automatic adaptive compression (``"default"``).
+This section describes the following compression strategies:
 
-However, this can be overriden by specifying a compression strategy when creating a table.
+.. contents:: 
+   :local:
+   :depth: 1
 
-Explicitly specifying automatic compression
+Explicitly Specifying Automatic Compression
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The following two are equivalent:
+The following two statements are equivalent:
 
 .. code-block:: postgres
    
    CREATE TABLE t (
       x INT,
-      y VARCHAR(50)
+      y TEXT(50)
    );
 
 In this version, the default compression is specified explicitly:
@@ -117,47 +126,50 @@ In this version, the default compression is specified explicitly:
    
    CREATE TABLE t (
       x INT CHECK('CS "default"'),
-      y VARCHAR(50) CHECK('CS "default"')
+      y TEXT(50) CHECK('CS "default"')
    );
 
-Forcing no compression (flat)
+Forcing No Compression
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+**Forcing no compression** (also known as "flat") removes compression entirely from selected columns. This may be useful for reducing CPU or GPU resource utilization at the expense of increased I/O.
 
-In some cases, you may wish to remove compression entirely on some columns,
-in order to reduce CPU or GPU resource utilization at the expense of increased I/O.
+The following is an example of removing compression:
 
 .. code-block:: postgres
    
    CREATE TABLE t (
       x INT NOT NULL CHECK('CS "flat"'), -- This column won't be compressed
-      y VARCHAR(50) -- This column will still be compressed automatically
+      y TEXT(50) -- This column will still be compressed automatically
    );
 
-
-Forcing compressions
+Forcing Compression
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-In some cases, you may wish to force SQream DB to use a specific compression scheme based
-on your knowledge of the data. 
-
-For example:
+In other cases, you may want to force SQream to use a specific compression scheme based on your knowledge of the data, as shown in the following example:
 
 .. code-block:: postgres
    
    CREATE TABLE t (
       id BIGINT NOT NULL CHECK('CS "sequence"'),
-      y VARCHAR(110) CHECK('CS "lz4"'), -- General purpose text compression
-      z VARCHAR(80) CHECK('CS "dict"'), -- Low cardinality column
+      y TEXT(110) CHECK('CS "lz4"'), -- General purpose text compression
+      z TEXT(80) CHECK('CS "dict"') -- Low cardinality column
    );
 
+However, if SQream finds that the specified compression method cannot effectively compress the data, it falls back to the default compression type.
 
-Examining compression effectiveness
+Examining Compression Effectiveness
 --------------------------------------
+Queries made on the internal metadata catalog can expose how effective the compression is, as well as what compression schemes were selected.
 
-Queries to the internal metadata catalog can expose how effective the compression is, as well as what compression schemes were selected.
+This section describes the following:
 
-Here is a sample query we can use to query the catalog:
+.. contents:: 
+   :local:
+   :depth: 1
+
+Querying the Catalog
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following is a sample query that can be used to query the catalog:
 
 .. code-block:: postgres
    
@@ -178,7 +190,9 @@ Here is a sample query we can use to query the catalog:
       GROUP BY 1,
                2;
 
-Example (subset) from the ``ontime`` table:
+Example Subset from the Ontime Table
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following is an example (subset) from the ``ontime`` table:
 
 .. code-block:: psql
    
@@ -268,43 +282,43 @@ Example (subset) from the ``ontime`` table:
    uniquecarrier             | dict               |     578221 |      7230705 |                     11.96 | default             
    year                      | rle                |          6 |      2065915 |                 317216.08 | default             
 
-
-Notes on reading this table:
+Notes on Reading the Ontime Table
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following are some useful notes on reading the "Ontime" table shown above:
 
-#. Higher numbers in the *effectiveness* column represent better compressions. 0 represents a column that wasn't compressed at all.
+#. Higher numbers in the **Compression effectiveness** column represent better compressions. **0** represents a column that has **not been compressed**.
 
-#. Column names are the internal representation. Names with ``@null`` and ``@val`` suffixes represent a nullable column's null (boolean) and values respectively, but are treated as one logical column.
+    ::
 
+#. Column names are an internal representation. Names with ``@null`` and ``@val`` suffixes represent a nullable column's null (boolean) and values respectively, but are treated as one logical column.
+
+    ::
+	
 #. The query lists all actual compressions for a column, so it may appear several times if the compression has changed mid-way through the loading (as with the ``carrierdelay`` column).
 
-#. When ``default`` is the compression strategy, the system automatically selects the best compression. This can also mean no compression at all (``flat``).
+    ::
+	
+#. When your compression strategy is ``default``, the system automatically selects the best compression, including no compression at all (``flat``).
 
-Compression best practices
+Best Practices
 ==============================
+This section describes the best compression practices:
 
-Let SQream DB decide on the compression strategy
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Letting SQream Determine the Compression Strategy
 ----------------------------------------------------
+In general, SQream determines the best compression strategy for most cases. If you decide to override SQream's selected compression strategies, we recommend benchmarking your query and load performance **in addition to** your storage size.
 
-In general, SQream DB will decide on the best compression strategy in most cases.
-
-When overriding compression strategies, we recommend benchmarking not just storage size but also query and load performance.
-
-
-Maximize the advantage of each compression schemes
+Maximizing the Advantage of Each Compression Scheme
 -------------------------------------------------------
-
-Some compression schemes perform better when data is organized in a specific way.
-
-For example, to take advantage of RLE, sorting a column may result in better performance and reduced disk-space and I/O usage.
+Some compression schemes perform better when data is organized in a specific way. For example, to take advantage of RLE, sorting a column may result in better performance and reduced disk-space and I/O usage.
 Sorting a column partially may also be beneficial. As a rule of thumb, aim for run-lengths of more than 10 consecutive values.
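+
+For example, a column that is loaded in sorted order with long runs of repeated values can be declared with RLE explicitly, using the ``CHECK('CS ...')`` syntax shown earlier (the table and column layout below are illustrative):
+
+.. code-block:: postgres
+
+   CREATE TABLE events (
+      event_date DATE CHECK('CS "rle"'), -- long runs of identical dates compress well
+      payload TEXT(100) -- compressed automatically
+   );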
 
-Choose data types that fit the data
+Choosing Data Types that Fit Your Data
 ---------------------------------------
+Adapting to the narrowest data type improves query performance while reducing disk space usage. In addition, smaller data types may compress better than larger types.
 
-Adapting to the narrowest data type will improve query performance and also reduce disk space usage.
-However, smaller data types may compress better than larger types.
-
-For example, use the smallest numeric data type that will accommodate your data. Using ``BIGINT`` for data that fits in ``INT`` or ``SMALLINT`` can use more disk space and memory for query execution.
-
-Using ``FLOAT`` to store integers will reduce compression's effectiveness significantly.
\ No newline at end of file
+For example, SQream recommends using the smallest numeric data type that will accommodate your data. Using ``BIGINT`` for data that fits in ``INT`` or ``SMALLINT`` can use more disk space and memory for query execution. Using ``FLOAT`` to store integers will reduce compression's effectiveness significantly.
\ No newline at end of file
diff --git a/feature_guides/concurrency_and_locks.rst b/feature_guides/concurrency_and_locks.rst
index e18dea015..2a85d2642 100644
--- a/feature_guides/concurrency_and_locks.rst
+++ b/feature_guides/concurrency_and_locks.rst
@@ -10,7 +10,7 @@ Read only transactions are never blocked, and never block anything. Even if you
 
 .. _locking_modes:
 
-Locking modes
+Locking Modes
 ================
 
 SQream DB has two kinds of locks:
@@ -27,7 +27,7 @@ SQream DB has two kinds of locks:
    
    This lock allows other statements to insert or delete data from a table, but they'll have to wait in order to run DDL.
 
-When are locks obtained?
+When are Locks Obtained?
 ============================
 
 .. list-table::
@@ -64,23 +64,7 @@ When are locks obtained?
 
 Statements that wait will exit with an error if they hit the lock timeout. The default timeout is 3 seconds, see ``statementLockTimeout``.
 
-Global locks
-----------------
-
-Some operations require exclusive global locks at the cluster level. These usually short-lived locks will be obtained for the following operations:
-
-   * :ref:`create_database`
-   * :ref:`create_role`
-   * :ref:`create_table`
-   * :ref:`alter_role`
-   * :ref:`alter_table`
-   * :ref:`drop_database`
-   * :ref:`drop_role`
-   * :ref:`drop_table`
-   * :ref:`grant`
-   * :ref:`revoke`
-
-Monitoring locks
+Monitoring Locks
 ===================
 
 Monitoring locks across the cluster can be useful when transaction contention takes place, and statements appear "stuck" while waiting for a previous statement to release locks.
@@ -101,7 +85,4 @@ In this example, we create a table based on results (:ref:`create_table_as`), bu
    287          | CREATE OR REPLACE TABLE nba2 AS SELECT "Name" FROM nba WHERE REGEXP_COUNT("Name", '( )+', 8)>1; | sqream   | 192.168.1.91 | 5000 | table$t$public$nba2$Insert      | Exclusive | 2019-12-26 00:03:30  | 2019-12-26 00:03:30
    287          | CREATE OR REPLACE TABLE nba2 AS SELECT "Name" FROM nba WHERE REGEXP_COUNT("Name", '( )+', 8)>1; | sqream   | 192.168.1.91 | 5000 | table$t$public$nba2$Update      | Exclusive | 2019-12-26 00:03:30  | 2019-12-26 00:03:30
 
-For more information on troubleshooting lock related issues, see 
-
-
-
+For more information on troubleshooting lock related issues, see :ref:`lock_related_issues`.
\ No newline at end of file
diff --git a/feature_guides/data_encryption.rst b/feature_guides/data_encryption.rst
new file mode 100644
index 000000000..607d6c09a
--- /dev/null
+++ b/feature_guides/data_encryption.rst
@@ -0,0 +1,20 @@
+.. _data_encryption:
+
+***********************
+Data Encryption
+***********************
+The **Data Encryption** page describes the following:
+
+.. |icon-new_2022.1| image:: /_static/images/new_2022.1.png
+   :align: middle
+   :width: 110
+   
+.. toctree::
+   :maxdepth: 1
+   :titlesonly:
+
+   data_encryption_overview
+   data_encryption_methods
+   data_encryption_types
+   data_encryption_syntax
+   data_encryption_permissions
\ No newline at end of file
diff --git a/feature_guides/data_encryption_methods.rst b/feature_guides/data_encryption_methods.rst
new file mode 100644
index 000000000..db789d02f
--- /dev/null
+++ b/feature_guides/data_encryption_methods.rst
@@ -0,0 +1,17 @@
+.. _data_encryption_methods:
+
+***********************
+Encryption Methods
+***********************
+Data exists in one of the following states, which determines the encryption method used:
+
+
+Encrypting Data in Transit
+--------------------------
+**Data in transit** refers to data that is being moved between locations, such as data transferred between physical or remote sites through email or uploaded to the cloud. This type of data must therefore be protected while **in transit**. SQream encrypts data in transit using SSL when, for example, users insert data files from external repositories over a JDBC or ODBC connection.
+
+For more information, see `Use TLS/SSL When Possible `_.
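+
+For example, a JDBC connection string that enables SSL might look like the following (the host, port, database, and credentials are illustrative, and the exact parameter names may vary by driver version):
+
+.. code-block:: console
+
+   jdbc:Sqream://127.0.0.1:3108/master;user=sqream;password=sqream;ssl=true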
+
+Encrypting Data at Rest
+-----------------------
+**Data at rest** refers to data stored on your hard drive or on the cloud. Because this data can potentially be intercepted **physically**, it requires a form of encryption that protects your data wherever you store it. SQream facilitates encryption by letting you encrypt any columns located in your database that you want to keep private.
\ No newline at end of file
diff --git a/feature_guides/data_encryption_overview.rst b/feature_guides/data_encryption_overview.rst
new file mode 100644
index 000000000..256e3d718
--- /dev/null
+++ b/feature_guides/data_encryption_overview.rst
@@ -0,0 +1,28 @@
+.. _data_encryption_overview:
+
+***********************
+Overview
+***********************
+**Data Encryption** helps protect sensitive data at rest by concealing it from unauthorized users in the event of a breach. This is achieved by scrambling the content into an unreadable format based on encryption and decryption keys. Typically, this data pertains to **Personally Identifiable Information (PII)**, which is sensitive information such as credit card numbers and other details related to an identifiable person.
+
+Users encrypt their data on a column basis by specifying ``column_name`` in the encryption syntax.
+
+The demand for confidentiality has steadily increased to protect the growing volumes of private data stored on computer systems and transmitted over the internet. To this end, regulatory bodies such as the **General Data Protection Regulation (GDPR)** have produced requirements to standardize and enforce compliance aimed at protecting customer data.
+
+Encryption can be used for the following:
+
+* Creating tables with up to three encrypted columns.
+
+   ::
+   
+* Joining encrypted columns with other tables.
+
+   ::
+   
+* Selecting data from an encrypted column.
+
+.. warning:: The ``SELECT`` statement decrypts information by default. When executing ``CREATE TABLE AS SELECT`` or ``INSERT INTO TABLE AS SELECT``, encrypted information will appear as clear text in the newly created table.
+
+For more information on the encryption syntax, see :ref:`data_encryption_syntax`.
+
+For more information on GDPR compliance requirements, see the `GDPR checklist `_.
\ No newline at end of file
diff --git a/feature_guides/data_encryption_permissions.rst b/feature_guides/data_encryption_permissions.rst
new file mode 100644
index 000000000..ba51f2501
--- /dev/null
+++ b/feature_guides/data_encryption_permissions.rst
@@ -0,0 +1,6 @@
+.. _data_encryption_permissions:
+
+***********************
+Permissions
+***********************
+Because data encryption is not associated with any role, users with **Read** and **Insert** permissions can read tables containing encrypted data.
\ No newline at end of file
diff --git a/feature_guides/data_encryption_syntax.rst b/feature_guides/data_encryption_syntax.rst
new file mode 100644
index 000000000..56934378e
--- /dev/null
+++ b/feature_guides/data_encryption_syntax.rst
@@ -0,0 +1,32 @@
+.. _data_encryption_syntax:
+
+***********************
+Syntax
+***********************
+The following is the syntax for encrypting a new table:
+
+.. code-block:: console
+     
+   CREATE TABLE <table_name> (
+        <column_name> <type> NOT NULL ENCRYPT,
+        <column_name> <type> ENCRYPT,
+        <column_name> <type>,
+        <column_name> <type> ENCRYPT);
+		
+The following is an example of encrypting a new table:
+
+.. code-block:: console
+     
+   CREATE TABLE client_name (
+        id BIGINT NOT NULL ENCRYPT,
+        first_name TEXT ENCRYPT,
+        last_name TEXT,
+        salary INT ENCRYPT);
+		   
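+Once created, the table can be used like any other table; encryption and decryption are transparent to queries (the values below are sample data):
+
+.. code-block:: console
+
+   INSERT INTO client_name VALUES (1, 'Jane', 'Doe', 50000);
+
+   -- Returns decrypted values:
+   SELECT first_name, salary FROM client_name;
+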
+.. note:: Because encryption is not associated with any role, users with **Read** or **Insert** permissions can read tables containing encrypted data.
+
+You cannot encrypt more than three columns. Attempting to do so displays the following error message:
+
+.. code-block:: console
+
+   Error preparing statement: Cannot create a table with more than three encrypted columns.
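+
+For illustration, a statement like the following (hypothetical table and column names) triggers this error because it defines four encrypted columns:
+
+.. code-block:: console
+
+   CREATE TABLE too_many_encrypted (
+        a BIGINT ENCRYPT,
+        b TEXT ENCRYPT,
+        c TEXT ENCRYPT,
+        d INT ENCRYPT);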
diff --git a/feature_guides/data_encryption_types.rst b/feature_guides/data_encryption_types.rst
new file mode 100644
index 000000000..ad6d96dc3
--- /dev/null
+++ b/feature_guides/data_encryption_types.rst
@@ -0,0 +1,14 @@
+.. _data_encryption_types:
+
+***********************
+Data Types
+***********************
+Typically, sensitive data pertains to **PII (Personally Identifiable Information)**: information, such as credit card numbers, related to an identifiable person.
+
+SQream's data encryption feature supports encrypting column-based data belonging to the following data types:
+
+* INT
+* BIGINT
+* TEXT
+
+For more information on the above data types, see :ref:`supported_data_types`.
\ No newline at end of file
diff --git a/feature_guides/delete.rst b/feature_guides/delete.rst
deleted file mode 100644
index 24ab5a218..000000000
--- a/feature_guides/delete.rst
+++ /dev/null
@@ -1,214 +0,0 @@
-.. _delete_guide:
-
-***********************
-Deleting Data
-***********************
-
-SQream DB supports deleting data, but it's important to understand how this works and how to maintain deleted data.
-
-How does deleting in SQream DB work?
-========================================
-
-In SQream DB, when you run a delete statement, any rows that match the delete predicate will no longer be returned when running subsequent queries.
-Deleted rows are tracked in a separate location, in *delete predicates*.
-
-After the delete statement, a separate process can be used to reclaim the space occupied by these rows, and to remove the small overhead that queries will have until this is done. 
-
-Some benefits to this design are:
-
-#. Delete transactions complete quickly
-
-#. The total disk footprint overhead at any time for a delete transaction or cleanup process is small and bounded (while the system still supports low overhead commit, rollback and recovery for delete transactions).
-
-
-Phase 1: Delete
----------------------------
-
-.. TODO: isn't the delete cleanup able to complete a certain amount of work transactionally, so that you can do a massive cleanup in stages?
-
-.. TODO: our current best practices is to use a cron job with sqream sql to run the delete cleanup. we should document how to do this, we have customers with very different delete schedules so we can give a few extreme examples and when/why you'd use them
-   
-When a :ref:`delete` statement is run, SQream DB records the delete predicates used. These predicates will be used to filter future statements on this table until all this delete predicate's matching rows have been physically cleaned up.
-
-This filtering process takes full advantage of SQream's zone map feature.
-
-Phase 2: Clean-up
---------------------
-
-The cleanup process is not automatic. This gives control to the user or DBA, and gives flexibility on when to run the clean up.
-
-Files marked for deletion during the logical deletion stage are removed from disk. This is achieved by calling both utility function commands: ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` sequentially.
-
-.. note::
-   * :ref:`alter_table` and other DDL operations are blocked on tables that require clean-up. See more in the :ref:`concurrency_and_locks` guide.
-   * If the estimated time for a cleanup processs is beyond a threshold, you will get an error message about it. The message will explain how to override this limitation and run the process anywhere.
-
-Notes on data deletion
-=========================================
-
-.. note::
-   * If the number of deleted records crosses the threshold defined by the ``mixedColumnChunksThreshold`` parameter, the delete operation will be aborted.
-   * This is intended to alert the user that the large number of deleted records may result in a large number of mixed chuncks.
-   * To circumvent this alert, replace XXX with the desired number of records before running the delete operation:
-
-.. code-block:: postgres
-
-   set mixedColumnChunksThreshold=XXX;
-   
-
-Deleting data does not free up space
------------------------------------------
-
-With the exception of a full table delete (:ref:`TRUNCATE`), deleting data does not free up disk space. To free up disk space, trigger the cleanup process.
-
-``SELECT`` performance on deleted rows
-----------------------------------------
-
-Queries on tables that have deleted rows may have to scan data that hasn't been cleaned up.
-In some cases, this can cause queries to take longer than expected. To solve this issue, trigger the cleanup process.
-
-Use ``TRUNCATE`` instead of ``DELETE``
----------------------------------------
-For tables that are frequently emptied entirely, consider using :ref:`truncate` rather than :ref:`delete`. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent cleanup to free up disk space.
-
-Cleanup is I/O intensive
--------------------------------
-
-The cleanup process actively compacts tables by writing a complete new version of column chunks with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.
-
-Cleanup operations can create significant I/O load on the database. Consider this when planning the best time for the cleanup process.
-
-If this is an issue with your environment, consider using ``CREATE TABLE AS`` to create a new table and then rename and drop the old table.
-
-
-Example
-=============
-
-Deleting values from a table
-------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Deleting values based on more complex predicates
----------------------------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Identifying and cleaning up tables
----------------------------------------
-
-List tables that haven't been cleaned up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-   
-   farm=> SELECT t.table_name FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      GROUP BY 1;
-   cool_animals
-   
-   1 row
-
-Identify predicates for clean-up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   weight > 1000
-   
-   1 row
-
-Triggering a cleanup
-^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   -- Chunk reorganization (aka SWEEP)
-   farm=> SELECT CLEANUP_CHUNKS('public','cool_animals');
-   executed
-
-   -- Delete leftover files (aka VACUUM)
-   farm=> SELECT CLEANUP_EXTENTS('public','cool_animals');
-   executed
-   
-   
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   
-   0 rows
-
-
-
-Best practices for data deletion
-=====================================
-
-* Run ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` after large ``DELETE`` operations.
-
-* When deleting large proportions of data from very large tables, consider running a ``CREATE TABLE AS`` operation instead, then rename and drop the original table.
-
-* Avoid killing ``CLEANUP_EXTENTS`` operations after they've started.
-
-* SQream DB is optimised for time-based data. When data is naturally ordered by a date or timestamp, deleting based on those columns will perform best. For more information, see our :ref:`time based data management guide`.
-
-
-
-.. soft update concept
-
-.. delete cleanup and it's properties. automatic/manual, in transaction or background
-
-.. automatic background gives fast delete, minimal transaction overhead,
-.. small cost to queries until background reorganised
-
-.. when does delete use the metadata effectively
-
-.. more examples
-
diff --git a/feature_guides/delete_guide.rst b/feature_guides/delete_guide.rst
deleted file mode 100644
index 24ab5a218..000000000
--- a/feature_guides/delete_guide.rst
+++ /dev/null
@@ -1,214 +0,0 @@
-.. _delete_guide:
-
-***********************
-Deleting Data
-***********************
-
-SQream DB supports deleting data, but it's important to understand how this works and how to maintain deleted data.
-
-How does deleting in SQream DB work?
-========================================
-
-In SQream DB, when you run a delete statement, any rows that match the delete predicate will no longer be returned when running subsequent queries.
-Deleted rows are tracked in a separate location, in *delete predicates*.
-
-After the delete statement, a separate process can be used to reclaim the space occupied by these rows, and to remove the small overhead that queries will have until this is done. 
-
-Some benefits to this design are:
-
-#. Delete transactions complete quickly
-
-#. The total disk footprint overhead at any time for a delete transaction or cleanup process is small and bounded (while the system still supports low overhead commit, rollback and recovery for delete transactions).
-
-
-Phase 1: Delete
----------------------------
-
-.. TODO: isn't the delete cleanup able to complete a certain amount of work transactionally, so that you can do a massive cleanup in stages?
-
-.. TODO: our current best practices is to use a cron job with sqream sql to run the delete cleanup. we should document how to do this, we have customers with very different delete schedules so we can give a few extreme examples and when/why you'd use them
-   
-When a :ref:`delete` statement is run, SQream DB records the delete predicates used. These predicates will be used to filter future statements on this table until all this delete predicate's matching rows have been physically cleaned up.
-
-This filtering process takes full advantage of SQream's zone map feature.
-
-Phase 2: Clean-up
---------------------
-
-The cleanup process is not automatic. This gives control to the user or DBA, and gives flexibility on when to run the clean up.
-
-Files marked for deletion during the logical deletion stage are removed from disk. This is achieved by calling both utility function commands: ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` sequentially.
-
-.. note::
-   * :ref:`alter_table` and other DDL operations are blocked on tables that require clean-up. See more in the :ref:`concurrency_and_locks` guide.
-   * If the estimated time for a cleanup processs is beyond a threshold, you will get an error message about it. The message will explain how to override this limitation and run the process anywhere.
-
-Notes on data deletion
-=========================================
-
-.. note::
-   * If the number of deleted records crosses the threshold defined by the ``mixedColumnChunksThreshold`` parameter, the delete operation will be aborted.
-   * This is intended to alert the user that the large number of deleted records may result in a large number of mixed chuncks.
-   * To circumvent this alert, replace XXX with the desired number of records before running the delete operation:
-
-.. code-block:: postgres
-
-   set mixedColumnChunksThreshold=XXX;
-   
-
-Deleting data does not free up space
------------------------------------------
-
-With the exception of a full table delete (:ref:`TRUNCATE`), deleting data does not free up disk space. To free up disk space, trigger the cleanup process.
-
-``SELECT`` performance on deleted rows
-----------------------------------------
-
-Queries on tables that have deleted rows may have to scan data that hasn't been cleaned up.
-In some cases, this can cause queries to take longer than expected. To solve this issue, trigger the cleanup process.
-
-Use ``TRUNCATE`` instead of ``DELETE``
----------------------------------------
-For tables that are frequently emptied entirely, consider using :ref:`truncate` rather than :ref:`delete`. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent cleanup to free up disk space.
-
-Cleanup is I/O intensive
--------------------------------
-
-The cleanup process actively compacts tables by writing a complete new version of column chunks with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.
-
-Cleanup operations can create significant I/O load on the database. Consider this when planning the best time for the cleanup process.
-
-If this is an issue with your environment, consider using ``CREATE TABLE AS`` to create a new table and then rename and drop the old table.
-
-
-Example
-=============
-
-Deleting values from a table
-------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Deleting values based on more complex predicates
----------------------------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Identifying and cleaning up tables
----------------------------------------
-
-List tables that haven't been cleaned up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-   
-   farm=> SELECT t.table_name FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      GROUP BY 1;
-   cool_animals
-   
-   1 row
-
-Identify predicates for clean-up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   weight > 1000
-   
-   1 row
-
-Triggering a cleanup
-^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   -- Chunk reorganization (aka SWEEP)
-   farm=> SELECT CLEANUP_CHUNKS('public','cool_animals');
-   executed
-
-   -- Delete leftover files (aka VACUUM)
-   farm=> SELECT CLEANUP_EXTENTS('public','cool_animals');
-   executed
-   
-   
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   
-   0 rows
-
-
-
-Best practices for data deletion
-=====================================
-
-* Run ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` after large ``DELETE`` operations.
-
-* When deleting large proportions of data from very large tables, consider running a ``CREATE TABLE AS`` operation instead, then rename and drop the original table.
-
-* Avoid killing ``CLEANUP_EXTENTS`` operations after they've started.
-
-* SQream DB is optimised for time-based data. When data is naturally ordered by a date or timestamp, deleting based on those columns will perform best. For more information, see our :ref:`time based data management guide`.
-
-
-
-.. soft update concept
-
-.. delete cleanup and it's properties. automatic/manual, in transaction or background
-
-.. automatic background gives fast delete, minimal transaction overhead,
-.. small cost to queries until background reorganised
-
-.. when does delete use the metadata effectively
-
-.. more examples
-
diff --git a/feature_guides/flexible_data_clustering.rst b/feature_guides/flexible_data_clustering.rst
deleted file mode 100644
index ce0f3d321..000000000
--- a/feature_guides/flexible_data_clustering.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _flexible_data_clustering:
-
-***********************
-Flexible Data Clustering
-***********************
-The **Flexible Data Clustering** section describes the following:
-
-.. toctree::
-   :maxdepth: 4
-   :titlesonly:
-
-   flexible_data_clustering_overview
-   flexible_data_clustering_chunks
-   flexible_data_clustering_data_clustering_methods
-   flexible_data_clustering_data_rechunking_data
-   flexible_data_clustering_data_examples
\ No newline at end of file
diff --git a/feature_guides/flexible_data_clustering_chunks.rst b/feature_guides/flexible_data_clustering_chunks.rst
deleted file mode 100644
index b8146d0fc..000000000
--- a/feature_guides/flexible_data_clustering_chunks.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-.. _flexible_data_clustering_chunks:
-
-***********************
-What are Chunks?
-***********************
-Chunks, sometimes referred to as **partitions**, are a contiguous number of rows in a specific column. SQream relies on an advanced partitioning method called **chunking**, which provides all static partitioning capabilities without the known limitations.
-
-The following figure shows a table rows grouped as chunks:
-
-.. figure:: /_static/images/chunking2.png
-   :scale: 75 %
-   :align: center
-   
-The following figure shows the rows from the table above converted into chunks:
-   
-.. figure:: /_static/images/chunking_metadata2.png
-   :scale: 75 %
-   :align: center
\ No newline at end of file
diff --git a/feature_guides/flexible_data_clustering_data_clustering_methods.rst b/feature_guides/flexible_data_clustering_data_clustering_methods.rst
deleted file mode 100644
index 347be830a..000000000
--- a/feature_guides/flexible_data_clustering_data_clustering_methods.rst
+++ /dev/null
@@ -1,180 +0,0 @@
-.. _flexible_data_clustering_data_clustering_methods:
-
-***********************
-Data Clustering Methods
-***********************
-The following data clustering methods can be used in tandem or separately to enhance query performance:
-
-.. contents:: 
-   :local:
-   :depth: 1
-   
-Using Time-Based Data Management
-============
-Overview
-~~~~~~~~~~
-**Time-based data management** refers to sorting table data along naturally occuring dimensions. The most common and naturally occuring sorting mechanism is a **timestamp**, which indicates the point in time at which data was inserted into SQream. Because SQream is a columnar storage system, timestamped metadata facilitates quick and easy query processing.
-
-The following is the correct syntax for timestamping a chunk:
-
-.. code-block:: postgres
-
-   SELECT DATEPART(HOUR, timestamp),
-          MIN(transaction_amount),
-          MAX(transaction_amount),
-          avg(transaction_amount)
-   FROM transactions
-   WHERE timestamp BETWEEN (CURRENT_TIMESTAMP AND DATEADD(MONTH,-3,CURRENT_TIMESTAMP))
-   GROUP BY 1;
-
-Timestamping data includes the following properties:
-
-* Data is loaded in a natural order while being inserted.
-
-   ::
-   
-* Updates are infrequent or non-existent. Updates occur by inserting new rows, which have their own timestamps.
-
-   ::
-   
-* Queries on timestamped data is typically on continuous time range.
-
-   ::
-   
-* Inserting and reading data are performed independently, not in the operation or transaction.
-
-   ::
-  
-* Timestamped data has a high data volume and accumulates faster than typical online transactional processing workloads.
-
-The following are some scenarios ideal for timestamping:
-
-* Running analytical queries spanning specific date ranges (such as the sum of transactions during August-July 2020 versus August-July 2019).
-
-   ::
-   
-* Deleting data older than a specific number of months old.
-
-   ::
-
-* Regulations require you to maintain several years of data that you do not need to query on a regular basis.
-
-Best Practices for Time-Based Management
-~~~~~~~~~~
-Data inserted in bulks is automatically timestamped with the insertion date and time. Therefore, inserting data through small and frequent bulks has the effect of naturally ordering data according to timestamp. Frequent bulks generally refers to short time frames, such as at 15-minute, hourly, or daily intervals. As you insert new data, SQream chunks and appends it into your existing tables according to its timestamp.
-
-The ``DATE`` and ``DATETIME`` types were created to improve performance, minimze storage size, and maintain data integrity. SQream recommends using them instead of ``VARCHAR``.
-
-Using Clustering Keys
-============
-Overview
-~~~~~~~~~~
-While data clustering occurs relatively naturally within a table, certain practices can be used to actively enhance query performance and runtime. Defining **clustering keys** increases performance by explicitly co-locating your data, enabling SQream to avoid processing irrelevant chunks.
-
-A clustering key is a subset of table columns or expressions and is defined using the ``CLUSTER BY`` statement, as shown below:
-
-.. code-block:: postgres
-     
-   CREATE TABLE users (
-      name VARCHAR(30) NOT NULL,
-      start_date datetime not null,
-      country VARCHAR(30) DEFAULT 'Unknown' NOT NULL
-   ) CLUSTER BY country;
-   
-
-   
-The ``CLUSTER BY`` statement splits ingested data based on the range of data corresponding to the clustering key. This helps create chunks based on specific or related data, avoiding mixed chunks as much as possible. For example, instead of creating chunks based on a fixed number of rows, the ``CLUSTER_BY`` statement creates them based on common values. This optimizes the ``DELETE`` command as well, which deletes rows based on their location in a table.
-
-For more information, see the following:
-
-* `The CLUSTER_BY statement `_
-* `The DELETE statement `_
-* `The Deleting Data Guide `_
-
-Inspecting Clustered Table Health
-~~~~~~~~~~
-You can use the ``clustering_health`` utility function to check how well a table is clustered, as shown below:
-
-.. code-block:: postgres
-
-   SELECT CLUSTERING_HEALTH('table_name','clustering_keys');
-   
-The ``CLUSTERING_HEALTH`` function returns the average clustering depth of your table relative to the clustering keys. A lower value indicates a well-clustered table.
-
-Clustering keys are useful for restructuring large tables not optimally ordered when inserted or as a result of extensive DML. A table that uses clustering keys is referred to as a **clustered table**. Tables that are not clustered require SQream's query optimizer to scan entire tables while running queries, dramatically increasing runtime. Some queries significantly benefit from clustering, such as filtering or joining extensively on clustered columns.
-
-SQream partially sorts data that you load into a clustered table. Note that while clustering tables increases query performance, clustering during the insertion stage can decrease performance by 75%. Nevertheless, once a table is clustered subsequent queries run more quickly.
-
-.. note:: 
-
-   To determine whether clustering will enhance performance, SQream recommends end-to-end testing your clustering keys on a small subset of your data before committing them to permanent use. This is relevant for testing insert and query performance.   
-
-For more information, see the following:
-
-* **Data Manipulation commands (DML)** - see `Data Manipulation Commands (DML) `_.
-
-* **Creating tables** - see :ref:`create_table`. When you create a table, all new data is clustered upon insert.
-   
-* **Modifying tables** - see :ref:`cluster_by`.
-   
-* **Modifying a table schema** - see :ref:`alter_table`.
-
-Using Metadata
-============
-SQream uses an automated and transparent system for collecting metadata describing each chunk. This metadata enables skipping unnecessary chunks and extents during query runtime. The system collects chunk metadata when data is inserted into SQream. This is done by splitting data into chunks and collecting and storing specific parameters to be used later.
-
-Because collecting metadata is not process-heavy and does not contribute significantly to query processing, it occurs continuously as a background process. Most metadata collection is typically performed by the GPU. For example, for a 10TB dataset, the metadata storage overhead is approximately 0.5GB.
-
-When a query includes a filter (such as a ``WHERE`` or ``JOIN`` condition) on a range of values spanning a fraction of the table values, SQream scans only the filtered segment of the table.
-
-Once collected, several metadata parameters are stored for later use, including:
- 
-* The range of values on each column chunk (minimum, maximum).
-
-   ::
- 
-* The number of values.
-
-   ::
- 
-* Additional information for query optimization.
-
-Data is collected automatically and transparently on every column type.
-
-Queries filtering highly granular date and time ranges are the most effective, particularly when data is timestamped, and when tables contain a large amount of historical data.
-
-Using Chunks and Extents
-============
-SQream stores data in logical tables made up of rows spanning one or more columns. Internally, data is stored in vertical partitions by column, and horizontally by chunks. The **Using Chunks and Extents** section describes how to leverge chunking to optimize query performance.
-
-A **chunk** is a contiguous number of rows in a specific column. Depending on data type, a chunk's uncompressed size typically ranges between 1MB and a few hundred megabytes. This size range is suitable for filtering and deleting data from large tables, which may contain between hundreds, millions, or billions of chunks.
-   
-An **extent** is a specific number of contiguous chunks. Extents optimize disk access patterns, at around 20MB uncompressed, on-disk. Extents typically include between one and 25 chunks based on the compressed size of each chunk.
-
-.. note:: 
-
-   SQream compresses all data. In addition, all tables are automatically and transparently chunked.
-
-Unlike node-partitioning (or sharding), chunks are:
-
-* Small enough to be read concurrently by multiple workers.
-
-   ::
-   
-* Optimized for inserting data quickly.
-
-   ::
-  
-* Capable of carrying metadata, which narrows down their contents for the query optimizer.
-
-   ::
- 
-* Ideal for data retension because they can be deleted in bulk.
-
-   ::
- 
-* Optimized for reading into RAM and the GPU.
-
-   ::
- 
-* Compressed individually to improve compression and data locality.
\ No newline at end of file
diff --git a/feature_guides/flexible_data_clustering_data_examples.rst b/feature_guides/flexible_data_clustering_data_examples.rst
deleted file mode 100644
index 0c720ff04..000000000
--- a/feature_guides/flexible_data_clustering_data_examples.rst
+++ /dev/null
@@ -1,22 +0,0 @@
-.. _flexible_data_clustering_data_examples:
-
-***********************
-Examples
-***********************
-The **Examples** includes the following examples:
-
-.. contents:: 
-   :local:
-   :depth: 1
-   
-Creating a Clustered Table
------------------------------
-The following is an example of syntax for creating a clustered table on a table naturally ordered by ``start_date``. An alternative cluster key can be defined on such a table to improve performance on queries already ordered by ``country``:
-
-.. code-block:: postgres
-
-   CREATE TABLE users (
-      name VARCHAR(30) NOT NULL,
-      start_date datetime not null,
-      country VARCHAR(30) DEFAULT 'Unknown' NOT NULL
-   ) CLUSTER BY country;
\ No newline at end of file
diff --git a/feature_guides/flexible_data_clustering_data_rechunking_data.rst b/feature_guides/flexible_data_clustering_data_rechunking_data.rst
deleted file mode 100644
index 30a74bbaa..000000000
--- a/feature_guides/flexible_data_clustering_data_rechunking_data.rst
+++ /dev/null
@@ -1,11 +0,0 @@
-.. _flexible_data_clustering_data_rechunking_data:
-
-***********************
-Rechunking Data
-***********************
-SQream performs background storage reorganization operations to optimize I/O and read patterns.
-
-For example, when small batches of data are inserted, SQream runs two background processes called **rechunk** and **reextent** to reorganize the data into larger contiguous chunks and extents. This is also what happens when data is deleted.
-
-
-Instead of overwriting data, SQream writes new optimized chunks and extents to replace old ones. After rewriting all old data, SQream switches to the new optimized chunks and extents and deletes the old data.
\ No newline at end of file
diff --git a/feature_guides/flexible_data_clustering_overview.rst b/feature_guides/flexible_data_clustering_overview.rst
deleted file mode 100644
index 3ba59a603..000000000
--- a/feature_guides/flexible_data_clustering_overview.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _flexible_data_clustering_overview:
-
-***********************
-Overeview
-***********************
-**Flexible data clustering** refers to sorting table data along naturally occuring dimensions, such as name, date, or location. Data clustering optimizes table structure to significantly improve query performance, especially on very large tables. A well-clustered table increases the effectivity of the metadata collected by focusing on a specific and limited range of rows, called **chunks**.
-
-The following are some scenarios ideal for data clustering:
-
-* Queries containg a ``WHERE`` predicate written as ``column COMPARISON value``, such as ``date_column > '2019-01-01'`` or ``id = 107`` when the columns referenced are clustering keys.
-
-  In such a case SQream reads the portion of data that contain values matching these predicates only.
-
-* Two clustered tables joined by their respective clustering keys.
-
-  In such a case SQream uses metadata to more easily identify matching chunks.
\ No newline at end of file
diff --git a/feature_guides/index.rst b/feature_guides/index.rst
index a0a996fc8..abb975c5d 100644
--- a/feature_guides/index.rst
+++ b/feature_guides/index.rst
@@ -9,14 +9,13 @@ This section describes the following features:
 
 .. toctree::
    :maxdepth: 1
-   :titlesonly:
+   :titlesonly:  
 
-   delete_guide
+   automatic_foreign_table_ddl_resolution
+   query_healer
+   data_encryption
    compression
-   flexible_data_clustering
    python_functions
-   saved_queries
-   viewing_system_objects_as_ddl
    workload_manager
    transactions
    concurrency_and_locks
diff --git a/feature_guides/python_functions.rst b/feature_guides/python_functions.rst
index 3717cdcd8..7aaea5051 100644
--- a/feature_guides/python_functions.rst
+++ b/feature_guides/python_functions.rst
@@ -6,6 +6,8 @@ Python UDF (User-Defined Functions)
 
 User-defined functions (UDFs) are a feature that extends SQream DB's built in SQL functionality. SQream DB's Python UDFs allow developers to create new functionality in SQL by writing the lower-level language implementation in Python. 
 
+.. note:: Starting with v2022.1.4, Python UDFs are disabled by default to enhance product security. Use the ``enablePythonUdfs`` configuration flag to enable Python UDFs.
+
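+The flag setting can be sketched as follows (a hypothetical configuration snippet; the exact file and section where ``enablePythonUdfs`` is set depend on your deployment):
+
+.. code-block:: json
+
+   {
+      "enablePythonUdfs": true
+   }
+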
 .. contents:: In this topic:
    :local:
 
diff --git a/feature_guides/query_healer.rst b/feature_guides/query_healer.rst
new file mode 100644
index 000000000..257de9e32
--- /dev/null
+++ b/feature_guides/query_healer.rst
@@ -0,0 +1,68 @@
+.. _query_healer:
+
+***********************
+Query Healer
+***********************
+The **Query Healer** page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+   
+Overview
+----------
+The **Query Healer** periodically examines the progress of running statements, creating a log entry for all statements exceeding a defined time period.   
+
+Configuration
+-------------
+The following **Administration Worker** flags are required to configure the Query Healer:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+
+   * - Flag
+     - Description
+   * - ``is_healer_on``
+     - The :ref:`is_healer_on` flag enables and disables the Query Healer.
+   * - ``maxStatementInactivitySeconds``
+     - The :ref:`healer_max_statement_inactivity_seconds` worker-level flag defines the threshold for creating a log entry recording a slow statement. The log includes information about memory, CPU, and GPU usage. The default setting is five hours.
+   * - ``healerDetectionFrequencySeconds``
+     - The :ref:`healer_detection_frequency_seconds` worker level flag triggers the healer to examine the progress of running statements. The default setting is one hour. 
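+
+Based on the table above, the flags might be set as follows (a hypothetical snippet; the configuration file layout depends on your deployment, and the values shown are the stated defaults of five hours and one hour in seconds):
+
+.. code-block:: json
+
+   {
+      "is_healer_on": true,
+      "maxStatementInactivitySeconds": 18000,
+      "healerDetectionFrequencySeconds": 3600
+   }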
+
+Query Log
+---------------
+
+The following is an example of a log record for a query stuck in the query detection phase for more than five hours:
+
+.. code-block:: console
+
+   |INFO|0x00007f9a497fe700:Healer|192.168.4.65|5001|-1|master|sqream|-1|sqream|0|"[ERROR]|cpp/SqrmRT/healer.cpp:140 |"Stuck query found. Statement ID: 72, Last chunk producer updated: 1.
+
+Once you identify the stuck worker, you can execute the ``shutdown_server`` utility function from this specific worker, as described in the next section.
+
+Activating a Graceful Shutdown
+-------------------------------
+You can activate a graceful shutdown if your log entry says ``Stuck query found``, as shown in the example above. You can do this by executing the **shutdown_server** utility function using ``SELECT shutdown_server();``.
+
+**To activate a graceful shutdown:**
+
+1. Locate the IP and the Port of the stuck worker from the logs.
+
+   .. note:: The log in the previous section identifies the IP **(192.168.4.65)** and port **(5001)** of the worker running the stuck query.
+
+2. From the machine of the stuck query (IP: **192.168.4.65**, port: **5001**), connect to SQream SQL client:
+
+   .. code-block:: console
+
+      ./sqream sql --host=$STUCK_WORKER_IP --port=$STUCK_WORKER_PORT --username=$SQREAM_USER --password=$SQREAM_PASSWORD databasename=$SQREAM_DATABASE
+
+3. Execute ``shutdown_server``.
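+
+   For example, using the statement referenced above:
+
+   .. code-block:: psql
+
+      test=> SELECT shutdown_server();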
+
+For more information, see the following:
+
+* Activating the :ref:`shutdown_server_command` utility function. This page describes all ``shutdown_server`` options.
+
+* Configuring the :ref:`shutdown_server` flag.
\ No newline at end of file
diff --git a/feature_guides/workload_manager.rst b/feature_guides/workload_manager.rst
index c8eb2468d..aece3d11f 100644
--- a/feature_guides/workload_manager.rst
+++ b/feature_guides/workload_manager.rst
@@ -4,12 +4,11 @@
 Workload Manager
 ***********************
 
-The **Workload Manager** allows SQream DB workers to identify their availability to clients with specific service names. The load balancer uses that information to route statements to specific workers. 
+The **Workload Manager** allows SQream workers to identify their availability to clients with specific service names. The load balancer uses that information to route statements to specific workers. 
 
 Overview
 ===============================
-
-The Workload Manager allows a system engineer or database administrator to allocate specific workers and compute resoucres for various tasks.
+The Workload Manager allows a system engineer or database administrator to allocate specific workers and compute resources for various tasks.
 
 For example:
 
@@ -60,7 +59,7 @@ The configuration in this example allocates resources as shown below:
      - ✓
      - ✓
 
-This configuration gives the ETL queue dedicated access to two workers, one of which cannot be used by regular queries.
+This configuration gives the ETL queue dedicated access to one worker, which cannot be used by regular queries.
 
 Queries from the management queue use any available worker.
 
diff --git a/getting_started/creating_a_database.rst b/getting_started/creating_a_database.rst
new file mode 100644
index 000000000..6b208f123
--- /dev/null
+++ b/getting_started/creating_a_database.rst
@@ -0,0 +1,34 @@
+.. _creating_a_database:
+
+****************************
+Creating a Database
+****************************
+Once you've installed SQream, you can create a database.
+
+**To create a database:**
+
+1. Write a :ref:`create_database` statement.
+
+   The following is an example of creating a new database:
+
+   .. code-block:: psql
+
+      master=> CREATE DATABASE test;
+      executed
+
+2. Reconnect to the newly created database.
+
+   1. Exit the client by typing ``\q`` and pressing **Enter**.
+   2. From the Linux shell, restart the client with the new database name:
+
+      .. code-block:: psql
+
+         $ sqream sql --port=5000 --username=rhendricks -d test
+         Password:
+   
+         Interactive client mode
+         To quit, use ^D or \q.
+   
+         test=> _
+
+   The name of the new database that you are connected to is displayed in the prompt.
\ No newline at end of file
diff --git a/getting_started/creating_your_first_table.rst b/getting_started/creating_your_first_table.rst
index c070b43ed..2837907f8 100644
--- a/getting_started/creating_your_first_table.rst
+++ b/getting_started/creating_your_first_table.rst
@@ -21,7 +21,7 @@ The ``CREATE TABLE`` syntax is used to create your first table. This table inclu
 
    CREATE TABLE cool_animals (
       id INT NOT NULL,
-      name VARCHAR(20),
+      name TEXT(20),
       weight INT
    );
 
@@ -37,7 +37,7 @@ You can drop an existing table and create a new one by adding the ``OR REPLACE``
 
    CREATE OR REPLACE TABLE cool_animals (
       id INT NOT NULL,
-      name VARCHAR(20),
+      name TEXT(20),
       weight INT
    );
 
@@ -54,7 +54,7 @@ You can list the full, verbose ``CREATE TABLE`` statement for a table by using t
    test=> SELECT GET_DDL('cool_animals');
    create table "public"."cool_animals" (
    "id" int not null,
-   "name" varchar(20),
+   "name" text(20),
    "weight" int
    );
 
diff --git a/operational_guides/hardware_guide.rst b/getting_started/hardware_guide.rst
similarity index 80%
rename from operational_guides/hardware_guide.rst
rename to getting_started/hardware_guide.rst
index d66797223..b8d9691fd 100644
--- a/operational_guides/hardware_guide.rst
+++ b/getting_started/hardware_guide.rst
@@ -3,9 +3,7 @@
 ***********************
 Hardware Guide
 ***********************
-
-This guide describes the SQream reference architecture, emphasizing the benefits to the technical audience, and provides guidance for end-users on selecting the right configuration for a SQream installation.
-
+The **Hardware Guide** describes the SQream reference architecture, emphasizing the benefits to the technical audience, and provides guidance for end-users on selecting the right configuration for a SQream installation.
 
 .. rubric:: Need help?
 
@@ -15,20 +13,22 @@ Visit `SQream's support portal `_ document.
+
+.. note:: Non-production hardware requirements may be found in `Non Production HW Requirements `_.
\ No newline at end of file
diff --git a/getting_started/index.rst b/getting_started/index.rst
index f9a57a460..87b98870d 100644
--- a/getting_started/index.rst
+++ b/getting_started/index.rst
@@ -11,6 +11,6 @@ The **Getting Started** page describes the following things you need to start us
 
    preparing_your_machine_to_install_sqream
    installing_sqream
-   creating_a_database
    executing_statements_in_sqream
-   performing_basic_sqream_operations
\ No newline at end of file
+   performing_basic_sqream_operations
+   hardware_guide
\ No newline at end of file
diff --git a/getting_started/non_production_hardware_guide.rst b/getting_started/non_production_hardware_guide.rst
new file mode 100644
index 000000000..03c738887
--- /dev/null
+++ b/getting_started/non_production_hardware_guide.rst
@@ -0,0 +1,49 @@
+.. _non_production_hardware_guide:
+
+***************************************
+Staging and Development Hardware Guide
+***************************************
+The **Staging and Development Hardware Guide** describes SQream's recommended hardware for development, staging, and QA desktops and servers.
+
+.. warning:: The hardware specifications on this page are not intended for production use!
+
+Development Desktop
+-----------------------------------
+
++------------------+-----------------------------------------------+
+| **Component**    | **Type**                                      |
++==================+===============================================+
+| Server           | PC                                            |
++------------------+-----------------------------------------------+
+| Processor        | Intel i7                                      |
++------------------+-----------------------------------------------+
+| RAM              | 64GB RAM                                      |
++------------------+-----------------------------------------------+
+| Onboard storage  | 2TB SSD                                       |
++------------------+-----------------------------------------------+
+| GPU              | 1x NVIDIA RTX A4000 16GB                      |
++------------------+-----------------------------------------------+
+| Operating System | Red Hat Enterprise Linux v7.9 or CentOS v7.9  |
++------------------+-----------------------------------------------+
+
+
+Lab Server
+-----------------------------------
+
++------------------+------------------------------------------------------------+
+| **Component**    | **Type**                                                   |
++==================+============================================================+
+| Server           | Dell R640 or similar                                       |
++------------------+------------------------------------------------------------+
+| Processor        | x2 Intel(R) Xeon(R) Silver 4112 CPU @ 2.60GHz              |
++------------------+------------------------------------------------------------+
+| RAM              | 128 or 256 GB                                              |
++------------------+------------------------------------------------------------+
+| Onboard storage  | 2x 960GB SSD 2.5in hot plug for OS, RAID1                  |
++------------------+------------------------------------------------------------+
+|                  | 1(or more)x 3.84TB SSD 2.5in hot plug for storage, RAID5   |
++------------------+------------------------------------------------------------+
+| GPU              | 1xNVIDIA T4 or A40 or A10                                  |
++------------------+------------------------------------------------------------+
+| Operating System | Red Hat Enterprise Linux v7.9 or CentOS v7.9               |
++------------------+------------------------------------------------------------+
diff --git a/getting_started/performing_basic_sqream_operations.rst b/getting_started/performing_basic_sqream_operations.rst
index ba0a6fc3f..0808c22dd 100644
--- a/getting_started/performing_basic_sqream_operations.rst
+++ b/getting_started/performing_basic_sqream_operations.rst
@@ -15,4 +15,9 @@ After installing SQream you can perform the operations described on this page:
    inserting_rows
    running_queries
    deleting_rows
-   saving_query_results_to_a_csv_or_psv_file
\ No newline at end of file
+   saving_query_results_to_a_csv_or_psv_file
+
+For more information on other basic SQream operations, see the following:
+
+* :ref:`creating_a_database`
+* :ref:`data_ingestion`
\ No newline at end of file
diff --git a/getting_started/preparing_your_machine_to_install_sqream.rst b/getting_started/preparing_your_machine_to_install_sqream.rst
index 435f35de0..d57861b1a 100644
--- a/getting_started/preparing_your_machine_to_install_sqream.rst
+++ b/getting_started/preparing_your_machine_to_install_sqream.rst
@@ -32,4 +32,4 @@ To prepare your machine to install SQream, do the following:
 For more information, see the following:
 
 * :ref:`recommended_pre-installation_configurations`
-* `Hardware Guide `_
+* :ref:`hardware_guide`
\ No newline at end of file
diff --git a/getting_started/querying_data.rst b/getting_started/querying_data.rst
index 36bf9e78b..7b6b46aed 100644
--- a/getting_started/querying_data.rst
+++ b/getting_started/querying_data.rst
@@ -11,14 +11,14 @@ To begin familiarizing yourself with querying data, you can create the following
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/index.rst b/index.rst
index e1455f21a..61e31b593 100644
--- a/index.rst
+++ b/index.rst
@@ -4,7 +4,50 @@
 SQream DB Documentation
 *************************
 
-For SQream version 2021.2.
+
+SQream DB is a columnar analytic SQL database management system. SQream DB supports regular SQL including :ref:`a substantial amount of ANSI SQL`, uses :ref:`serializable transactions`, and :ref:`scales horizontally` for concurrent statements. Even a :ref:`basic SQream DB machine` can support tens to hundreds of terabytes of data. SQream DB easily plugs into third-party tools like :ref:`Tableau` and comes with standard SQL client drivers, including :ref:`JDBC`, :ref:`ODBC`, and :ref:`Python DB-API`.
+
+
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| Topic                                             | Description                                                                                                                            |
++===================================================+========================================================================================================================================+
+| **Getting Started**                                                                                                                                                                        |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`preparing_your_machine_to_install_sqream`   | Set up your local machine according to SQream’s recommended pre-installation configurations.                                           |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`executing_statements_in_sqream`             | Provides more information about the available methods for executing statements in SQream.                                              |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`performing_basic_sqream_operations`         | Provides more information on performing basic operations.                                                                              |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`hardware_guide`                             | Describes SQream’s mandatory and recommended hardware settings, designed for a technical audience.                                     |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| **Installation Guides**                                                                                                                                                                    |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`installing_and_launching_sqream`            | Refers to SQream’s installation guides.                                                                                                |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`sqream_studio_installation`                 | Refers to all installation guides required for installations related to Studio.                                                        |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| **Ingesting Data**                                                                                                                                                                         |
++--------------------------+------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`csv`               | :ref:`avro`            |                                                                                                                                        |
++--------------------------+------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`parquet`           | :ref:`orc`             |                                                                                                                                        |
++--------------------------+------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`oracle`                                                                                                                                                                              |
++--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| **Connecting to SQream**                                                                                                                                                                   |
++--------------------------+------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`client_platforms`                           | Describes how to install and connect a variety of third party connection platforms and tools.                                          |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`client_drivers`                             | Describes how to use the SQream client drivers and client applications with SQream.                                                    |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| **External Storage Platforms**                                                                                                                                                             |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`s3`                                         | Describes how to insert data over a native S3 connector.                                                                               |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+| :ref:`hdfs`                                       | Describes how to configure an HDFS environment for the user sqream and is only relevant for users with an HDFS environment.            |
++---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------+
+
 
 .. only:: html
 
@@ -16,71 +59,6 @@ For SQream version 2021.2.
    
    .. tip:: This documentation is available online at https://docs.sqream.com/
 
-SQream DB is a columnar analytic SQL database management system. 
-
-SQream DB supports regular SQL including :ref:`a substantial amount of ANSI SQL`, uses :ref:`serializable transactions`, and :ref:`scales horizontally` for concurrent statements.
-
-Even a :ref:`basic SQream DB machine` can support tens to hundreds of terabytes of data.
-
-SQream DB easily plugs in to third-party tools like :ref:`Tableau` comes with standard SQL client drivers, including :ref:`JDBC`, :ref:`ODBC`, and :ref:`Python DB-API`.
-
-.. 
-   .. ref`features_tour`
-
-.. list-table::
-   :widths: 33 33 33
-   :header-rows: 0
-
-   * - **Get Started**
-     - **Reference**
-     - **Guides**
-   * -
-         `Getting Started `_
-         
-         :ref:`sql_feature_support`
-         
-         :ref:`Bulk load CSVs`
-     - 
-         :ref:`SQL Reference`
-         
-         :ref:`sql_statements`
-         
-         :ref:`sql_functions`
-     - 
-         `Setting up SQream `_
-         
-         :ref:`Best practices`
-         
-         :ref:`connect_to_tableau`
-
-   * - **Releases**
-     - **Driver and Deployment**
-     - **Help and Support**
-   * -
-         :ref:`2021.2<2021.2>`
-
-         :ref:`2021.1<2021.1>`
-        
-         :ref:`2020.3<2020.3>`
-
-         :ref:`2020.2<2020.2>`
-         
-         :ref:`2020.1<2020.1>`
-                  
-         :ref:`All recent releases`
-
-     - 
-         :ref:`Client drivers`
-
-         :ref:`Third party tools integration`
-
-         :ref:`connect_to_tableau`
-     - 
-         :ref:`troubleshooting` guide
-         
-         :ref:`information_for_support`
-
-
 
 .. rubric:: Need help?
 
@@ -89,9 +67,8 @@ If you couldn't find what you're looking for, we're always happy to help. Visit
 
 .. rubric:: Looking for older versions?
 
-This version of the documentation is for SQream DB Version 2021.2.
 
-If you're looking for an older version of the documentation, versions 1.10 through 2019.2.1 are available at http://previous.sqream.com .
+If you're looking for an older version of the documentation, go to http://previous.sqream.com .
 
 .. toctree::
    :caption: Contents:
@@ -103,10 +80,12 @@ If you're looking for an older version of the documentation, versions 1.10 throu
    getting_started/index
    installation_guides/index
    data_ingestion/index
-   third_party_tools/index
+   connecting_to_sqream/index
+   external_storage_platforms/index
+   loading_and_unloading_data/index
    feature_guides/index
    operational_guides/index
-   sqream_studio_5.4.3/index
+   sqream_studio_5.4.7/index
    architecture/index
    configuration_guides/index
    reference/index
diff --git a/installation_guides/installing_and_launching_sqream.rst b/installation_guides/installing_and_launching_sqream.rst
index 4ef1ef706..6a41ba52b 100644
--- a/installation_guides/installing_and_launching_sqream.rst
+++ b/installation_guides/installing_and_launching_sqream.rst
@@ -3,7 +3,7 @@
 ********************************
 Installing and Launching SQream
 ********************************
-The **Installing SQream Studio** page incudes the following installation guides:
+The **Installing and Launching SQream** page includes the following installation guides:
 
 .. toctree::
    :maxdepth: 1
@@ -14,7 +14,4 @@ The **Installing SQream Studio** page incudes the following installation guides:
    running_sqream_in_a_docker_container
    installing_sqream_with_kubernetes
    installing_monit
-   launching_sqream_with_monit
-
-
-
+   launching_sqream_with_monit
\ No newline at end of file
diff --git a/installation_guides/installing_monit.rst b/installation_guides/installing_monit.rst
index b27800cce..ab49164f6 100644
--- a/installation_guides/installing_monit.rst
+++ b/installation_guides/installing_monit.rst
@@ -1,319 +1,315 @@
-.. _installing_monit:
-
-*********************************************
-Installing Monit
-*********************************************
-
-Getting Started
-==============================
-
-Before installing SQream with Monit, verify that you have followed the required :ref:`recommended pre-installation configurations `.
-
-The procedures in the **Installing Monit** guide must be performed on each SQream cluster node.
-
-
-
-
-
-.. _back_to_top:
-
-Overview
-==============================
-
-
-Monit is a free open source supervision utility for managing and monitoring Unix and Linux. Monit lets you view system status directly from the command line or from a native HTTP web server. Monit can be used to conduct automatic maintenance and repair, such as executing meaningful causal actions in error situations.
-
-SQream uses Monit as a watchdog utility, but you can use any other utility that provides the same or similar functionality.
-
-The **Installing Monit** procedures describes how to install, configure, and start Monit.
-
-You can install Monit in one of the following ways:
-
-* :ref:`Installing Monit on CentOS `
-* :ref:`Installing Monit on CentOS offline `
-* :ref:`Installing Monit on Ubuntu `
-* :ref:`Installing Monit on Ubuntu offline `
- 
- 
- 
-
-
-
-
-.. _installing-monit-on-centos:
-
-Installing Monit on CentOS:
-------------------------------------
-
-
-
-**To install Monit on CentOS:**   
-   
-1. Install Monit as a superuser on CentOS:
- 
-    .. code-block:: console
-     
-       $ sudo yum install monit  
-       
-       
-.. _installing-monit-on-centos-offline:
-
-
-	   
-Installing Monit on CentOS Offline:
-------------------------------------
-
-
-Installing Monit on CentOS offline can be done in either of the following ways:
-
-* :ref:`Building Monit from Source Code `
-* :ref:`Building Monit from Pre-Built Binaries `
-
- 
- 
- 
-.. _building_monit_from_source_code:
-
-Building Monit from Source Code
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-
-
-**To build Monit from source code:**
-
-1. Copy the Monit package for the current version:
-       
-   .. code-block:: console
-     
-      $ tar zxvf monit-.tar.gz
-       
- The value ``x.y.z`` denotes the version numbers.
-       
-2. Navigate to the directory where you want to store the package:
-
-   .. code-block:: console
-     
-      $ cd monit-x.y.z
- 
-3. Configure the files in the package:
-
-   .. code-block:: console
-     
-      $ ./configure (use ./configure --help to view available options)
- 
-4. Build and install the package:
-
-   .. code-block:: console
-     
-      $ make && make install
-      
-The following are the default storage directories:
-
-* The Monit package: **/usr/local/bin/**
-* The **monit.1 man-file**: **/usr/local/man/man1/**
-
-5. **Optional** - To change the above default location(s), use the **--prefix** option to ./configure.
-
-..
-  _**Comment - I took this line directly from the external online documentation. Is the "prefix option" referrin gto the "--help" in Step 3? URL: https://mmonit.com/wiki/Monit/Installation**
-
-6. **Optional** - Create an RPM package for CentOS directly from the source code:
-
-   .. code-block:: console
-     
-      $ rpmbuild -tb monit-x.y.z.tar.gz
-      
-..
-  _**Comment - Is this an optional or mandatory step?**
-
- 
-
-
-.. _building_monit_from_pre_built_binaries:   
-
-Building Monit from Pre-Built Binaries
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-**To build Monit from pre-built binaries:**
-
-1. Copy the Monit package for the current version:
-       
-   .. code-block:: console
-
-      $ tar zxvf monit-x.y.z-linux-x64.tar.gz
-      
-   The value ``x.y.z`` denotes the version numbers.
-
-2. Navigate to the directory where you want to store the package:
-
-   .. code-block:: console$ cd monit-x.y.z
-   
-3. Copy the **bin/monit** and **/usr/local/bin/** directories:
- 
-    .. code-block:: console
-
-      $ cp bin/monit /usr/local/bin/
- 
-4. Copy the **conf/monitrc** and **/etc/** directories:
- 
-    .. code-block:: console
-
-      $ cp conf/monitrc /etc/
-       
-..
-  _**Comment - please review this procedure.**
-
-For examples of pre-built Monit binarties, see :ref:`Download Precompiled Binaries`.
-
-:ref:`Back to top `
-
-
-
-.. _installing-monit-on-ubuntu:
-
-
-      
-Installing Monit on Ubuntu:
-------------------------------------
-
-
-**To install Monit on Ubuntu:**   
-   
-1. Install Monit as a superuser on Ubuntu:
-
-    .. code-block:: console
-     
-       $ sudo apt-get install monit
-	   
-:ref:`Back to top `
-
-
-	   
-.. _installing-monit-on-ubuntu-offline:
-
-
-Installing Monit on Ubuntu Offline:
--------------------------------------
-
-
-You can install Monit on Ubuntu when you do not have an internet connection.
-
-**To install Monit on Ubuntu offline:**   
-   
-1. Compress the required file:
-
-   .. code-block:: console
-     
-      $ tar zxvf monit--linux-x64.tar.gz
-      
-   **NOTICE:** ** denotes the version number.
-
-2. Navigate to the directory where you want to save the file:
-   
-   .. code-block:: console
-     
-      $ cd monit-x.y.z
-       
-3. Copy the **bin/monit** directory into the **/usr/local/bin/** directory:
-
-   .. code-block:: console
-     
-      $ cp bin/monit /usr/local/bin/
-       
-4. Copy the **conf/monitrc** directory into the **/etc/** directory:
-       
-   .. code-block:: console
-     
-      $ cp conf/monitrc /etc/
-	  
-:ref:`Back to top `
-
-       
-Configuring Monit
-====================================
-
-When the installation is complete, you can configure Monit. You configure Monit by modifying the Monit configuration file, called **monitrc**. This file contains blocks for each service that you want to monitor.
-
-The following is an example of a service block:
-
-    .. code-block:: console
-     
-       $ #SQREAM1-START
-       $ check process sqream1 with pidfile /var/run/sqream1.pid
-       $ start program = "/usr/bin/systemctl start sqream1"
-       $ stop program = "/usr/bin/systemctl stop sqream1"
-       $ #SQREAM1-END
-
-For example, if you have 16 services, you can configure this block by copying the entire block 15 times and modifying all service names as required, as shown below:
-
-    .. code-block:: console
-     
-       $ #SQREAM2-START
-       $ check process sqream2 with pidfile /var/run/sqream2.pid
-       $ start program = "/usr/bin/systemctl start sqream2"
-       $ stop program = "/usr/bin/systemctl stop sqream2"
-       $ #SQREAM2-END
-       
-For servers that don't run the **metadataserver** and **serverpicker** commands, you can use the block example above, but comment out the related commands, as shown below:
-
-    .. code-block:: console
-     
-       $ #METADATASERVER-START
-       $ #check process metadataserver with pidfile /var/run/metadataserver.pid
-       $ #start program = "/usr/bin/systemctl start metadataserver"
-       $ #stop program = "/usr/bin/systemctl stop metadataserver"
-       $ #METADATASERVER-END
-
-**To configure Monit:**   
-   
-1. Copy the required block for each required service.
-2. Modify all service names in the block.
-3. Copy the configured **monitrc** file to the **/etc/monit.d/** directory:
-
-   .. code-block:: console
-     
-      $ cp monitrc /etc/monit.d/
-       
-4. Set file permissions to **600** (full read and write access):
- 
-    .. code-block:: console
-
-       $ sudo chmod 600 /etc/monit.d/monitrc
-       
-5. Reload the system to activate the current configurations:
- 
-    .. code-block:: console
-     
-       $ sudo systemctl daemon-reload
- 
-6. **Optional** - Navigate to the **/etc/sqream** directory and create a symbolic link to the **monitrc** file:
- 
-    .. code-block:: console
-     
-      $ cd /etc/sqream
-      $ sudo ln -s /etc/monit.d/monitrc monitrc    
-         
-Starting Monit
-====================================  
-
-After configuring Monit, you can start it.
-
-**To start Monit:**
-
-1. Start Monit as a super user:
-
-   .. code-block:: console
-     
-      $ sudo systemctl start monit   
- 
-2. View Monit's service status:
-
-   .. code-block:: console
-     
-      $ sudo systemctl status monit
-
-3. If Monit is functioning correctly, enable the Monit service to start on boot:
-    
-   .. code-block:: console
-     
-      $ sudo systemctl enable monit
+.. _installing_monit:
+
+*********************************************
+Installing Monit
+*********************************************
+
+Getting Started
+==============================
+
+Before installing SQream with Monit, verify that you have followed the :ref:`recommended pre-installation configurations`.
+
+The procedures in the **Installing Monit** guide must be performed on each SQream cluster node.
+
+.. _back_to_top:
+
+Overview
+==============================
+
+
+Monit is a free, open source supervision utility for managing and monitoring Unix and Linux systems. Monit lets you view system status directly from the command line or from a native HTTP web server. You can also use Monit to conduct automatic maintenance and repair, such as executing meaningful causal actions in error situations.
+
+SQream uses Monit as a watchdog utility, but you can use any other utility that provides the same or similar functionality.
+
+The **Installing Monit** procedures describe how to install, configure, and start Monit.
+
+You can install Monit in one of the following ways:
+
+* :ref:`Installing Monit on CentOS <installing-monit-on-centos>`
+* :ref:`Installing Monit on CentOS offline <installing-monit-on-centos-offline>`
+* :ref:`Installing Monit on Ubuntu <installing-monit-on-ubuntu>`
+* :ref:`Installing Monit on Ubuntu offline <installing-monit-on-ubuntu-offline>`
+ 
+ 
+ 
+
+
+
+
+.. _installing-monit-on-centos:
+
+Installing Monit on CentOS
+------------------------------------
+
+
+
+**To install Monit on CentOS:**   
+   
+1. Install Monit as a superuser on CentOS:
+ 
+    .. code-block:: console
+     
+       $ sudo yum install monit  
+       
+       
+.. _installing-monit-on-centos-offline:
+
+
+	   
+Installing Monit on CentOS Offline
+------------------------------------
+
+
+Installing Monit on CentOS offline can be done in either of the following ways:
+
+* :ref:`Building Monit from Source Code <building_monit_from_source_code>`
+* :ref:`Building Monit from Pre-Built Binaries <building_monit_from_pre_built_binaries>`
+
+ 
+ 
+ 
+.. _building_monit_from_source_code:
+
+Building Monit from Source Code
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+
+**To build Monit from source code:**
+
+1. Copy the Monit package for the current version:
+       
+   .. code-block:: console
+     
+      $ tar zxvf monit-x.y.z.tar.gz
+       
+   The value ``x.y.z`` denotes the version number.
+       
+2. Navigate to the directory where you want to store the package:
+
+   .. code-block:: console
+     
+      $ cd monit-x.y.z
+ 
+3. Configure the files in the package:
+
+   .. code-block:: console
+     
+      $ ./configure
+
+   Use ``./configure --help`` to view the available options.
+ 
+4. Build and install the package:
+
+   .. code-block:: console
+     
+      $ make && make install
+      
+   The following are the default storage directories:
+
+   * The Monit package: **/usr/local/bin/**
+   * The **monit.1 man-file**: **/usr/local/man/man1/**
+
+5. **Optional** - To change the default locations above, pass the ``--prefix`` option to ``./configure``.
+
+
+6. **Optional** - Create an RPM package for CentOS directly from the source code:
+
+   .. code-block:: console
+     
+      $ rpmbuild -tb monit-x.y.z.tar.gz
+      
+
+ 
+
+
+.. _building_monit_from_pre_built_binaries:   
+
+Building Monit from Pre-Built Binaries
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+**To build Monit from pre-built binaries:**
+
+1. Copy the Monit package for the current version:
+       
+   .. code-block:: console
+
+      $ tar zxvf monit-x.y.z-linux-x64.tar.gz
+      
+   The value ``x.y.z`` denotes the version number.
+
+2. Navigate to the directory where you want to store the package:
+
+   .. code-block:: console
+
+      $ cd monit-x.y.z
+   
+3. Copy the **bin/monit** file into the **/usr/local/bin/** directory:
+ 
+    .. code-block:: console
+
+      $ cp bin/monit /usr/local/bin/
+ 
+4. Copy the **conf/monitrc** file into the **/etc/** directory:
+ 
+    .. code-block:: console
+
+      $ cp conf/monitrc /etc/
+       
+
+For examples of pre-built Monit binaries, see :ref:`Download Precompiled Binaries`.
+
+:ref:`Back to top <back_to_top>`
+
+
+
+.. _installing-monit-on-ubuntu:
+
+
+      
+Installing Monit on Ubuntu
+------------------------------------
+
+
+**To install Monit on Ubuntu:**   
+   
+1. Install Monit as a superuser on Ubuntu:
+
+    .. code-block:: console
+     
+       $ sudo apt-get install monit
+	   
+:ref:`Back to top <back_to_top>`
+
+
+	   
+.. _installing-monit-on-ubuntu-offline:
+
+
+Installing Monit on Ubuntu Offline
+-------------------------------------
+
+
+You can install Monit on Ubuntu when you do not have an internet connection.
+
+**To install Monit on Ubuntu offline:**   
+   
+1. Extract the required file:
+
+   .. code-block:: console
+     
+      $ tar zxvf monit-x.y.z-linux-x64.tar.gz
+      
+   The value ``x.y.z`` denotes the version number.
+
+2. Navigate to the extracted directory:
+   
+   .. code-block:: console
+     
+      $ cd monit-x.y.z
+       
+3. Copy the **bin/monit** directory into the **/usr/local/bin/** directory:
+
+   .. code-block:: console
+     
+      $ cp bin/monit /usr/local/bin/
+       
+4. Copy the **conf/monitrc** directory into the **/etc/** directory:
+       
+   .. code-block:: console
+     
+      $ cp conf/monitrc /etc/
+	  
+:ref:`Back to top <back_to_top>`
+
+       
+Configuring Monit
+====================================
+
+When the installation is complete, you can configure Monit by modifying the Monit configuration file, called **monitrc**. This file contains a block for each service that you want to monitor.
+
+The following is an example of a service block:
+
+    .. code-block:: console
+     
+       #SQREAM1-START
+       check process sqream1 with pidfile /var/run/sqream1.pid
+       start program = "/usr/bin/systemctl start sqream1"
+       stop program = "/usr/bin/systemctl stop sqream1"
+       #SQREAM1-END
+
+For example, if you have 16 services, you can configure this block by copying the entire block 15 times and modifying all service names as required, as shown below:
+
+    .. code-block:: console
+     
+       #SQREAM2-START
+       check process sqream2 with pidfile /var/run/sqream2.pid
+       start program = "/usr/bin/systemctl start sqream2"
+       stop program = "/usr/bin/systemctl stop sqream2"
+       #SQREAM2-END
+       
+For servers that do not run the **metadataserver** and **serverpicker** services, you can use the block example above with the related commands commented out, as shown below:
+
+    .. code-block:: console
+     
+       #METADATASERVER-START
+       #check process metadataserver with pidfile /var/run/metadataserver.pid
+       #start program = "/usr/bin/systemctl start metadataserver"
+       #stop program = "/usr/bin/systemctl stop metadataserver"
+       #METADATASERVER-END
+
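+The same pattern applies to the **serverpicker** service. The following block is a sketch that assumes a systemd unit and pidfile both named ``serverpicker``; verify these names against your installation before using it:
+
+    .. code-block:: console
+     
+       #SERVERPICKER-START
+       check process serverpicker with pidfile /var/run/serverpicker.pid
+       start program = "/usr/bin/systemctl start serverpicker"
+       stop program = "/usr/bin/systemctl stop serverpicker"
+       #SERVERPICKER-END
+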
+**To configure Monit:**   
+   
+1. Copy the required block for each required service.
+2. Modify all service names in the block.
+3. Copy the configured **monitrc** file to the **/etc/monit.d/** directory:
+
+   .. code-block:: console
+     
+      $ cp monitrc /etc/monit.d/
+       
+4. Set the file permissions to **600** (read and write access for the owner only):
+ 
+    .. code-block:: console
+
+       $ sudo chmod 600 /etc/monit.d/monitrc
+       
+5. Reload the systemd daemon to activate the current configuration:
+ 
+    .. code-block:: console
+     
+       $ sudo systemctl daemon-reload
+ 
+6. **Optional** - Navigate to the **/etc/sqream** directory and create a symbolic link to the **monitrc** file:
+ 
+    .. code-block:: console
+     
+      $ cd /etc/sqream
+      $ sudo ln -s /etc/monit.d/monitrc monitrc    
+         
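+Rather than copying blocks by hand, the repetitive service blocks can also be generated with a short shell loop. The following is a sketch that assumes four services named ``sqream1`` through ``sqream4`` with matching pidfiles; adjust the count and names to match your cluster:
+
+    .. code-block:: console
+     
+       $ for i in 1 2 3 4; do
+       >   cat <<EOF >> monitrc
+       > #SQREAM${i}-START
+       > check process sqream${i} with pidfile /var/run/sqream${i}.pid
+       > start program = "/usr/bin/systemctl start sqream${i}"
+       > stop program = "/usr/bin/systemctl stop sqream${i}"
+       > #SQREAM${i}-END
+       > EOF
+       > done
+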
+Starting Monit
+====================================  
+
+After configuring Monit, you can start it.
+
+**To start Monit:**
+
+1. Start Monit as a superuser:
+
+   .. code-block:: console
+     
+      $ sudo systemctl start monit   
+ 
+2. View Monit's service status:
+
+   .. code-block:: console
+     
+      $ sudo systemctl status monit
+
+3. If Monit is functioning correctly, enable the Monit service to start on boot:
+    
+   .. code-block:: console
+     
+      $ sudo systemctl enable monit
diff --git a/installation_guides/installing_nginx_proxy_over_secure_connection.rst b/installation_guides/installing_nginx_proxy_over_secure_connection.rst
new file mode 100644
index 000000000..5aef5eaff
--- /dev/null
+++ b/installation_guides/installing_nginx_proxy_over_secure_connection.rst
@@ -0,0 +1,403 @@
+.. _installing_nginx_proxy_over_secure_connection:
+
+**************************************************
+Installing an NGINX Proxy Over a Secure Connection
+**************************************************
+Configuring your NGINX server to use strong encryption for client connections secures server requests, preventing outside parties from gaining access to your traffic.
+
+The **Installing an NGINX Proxy Over a Secure Connection** page describes the following:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Overview
+==============
+The Node.js platform that SQream uses with our Studio user interface is susceptible to web exposure. This page describes how to implement HTTPS access on your proxy server to establish a secure connection.
+
+**TLS (Transport Layer Security)**, and its predecessor **SSL (Secure Sockets Layer)**, are standard web protocols used to wrap normal traffic in a protected, encrypted layer. This technology prevents the interception of server-client traffic. It also uses a certificate system that helps users verify the identity of the sites they visit. The **Installing an NGINX Proxy Over a Secure Connection** guide describes how to set up a self-signed SSL certificate for use with an NGINX web server on a CentOS 7 server.
+
+.. note:: A self-signed certificate encrypts communication between your server and any clients. However, because it is not signed by trusted certificate authorities included with web browsers, you cannot use the certificate to automatically validate the identity of your server.
+
+A self-signed certificate may be appropriate if your domain name is not associated with your server, and in cases where your encrypted web interface is not user-facing. If you do have a domain name, using a CA-signed certificate is generally preferable.
+
+For more information on setting up a free trusted certificate, see `How To Secure Nginx with Let's Encrypt on CentOS 7 `_.
+
+Prerequisites
+==============
+The following prerequisites are required for installing an NGINX proxy over a secure connection:
+
+* Super user privileges
+
+   ::
+   
+* A domain name to create a certificate for
+
+Installing NGINX and Adjusting the Firewall
+===========================================
+After verifying that you have the prerequisites above, you must verify that the NGINX web server is installed on your machine.
+
+Though NGINX is not available in the default CentOS repositories, it is available from the **EPEL (Extra Packages for Enterprise Linux)** repository.
+
+**To install NGINX and adjust the firewall:**
+
+1. Enable the EPEL repository to give your server access to the NGINX package:
+
+   .. code-block:: console
+
+      $ sudo yum install epel-release
+
+2. Install NGINX:
+
+   .. code-block:: console
+
+      $ sudo yum install nginx
+ 
+3. Start the NGINX service:
+
+   .. code-block:: console
+
+      $ sudo systemctl start nginx
+ 
+4. Verify that the service is running:
+
+   .. code-block:: console
+
+      $ systemctl status nginx
+
+   The following is an example of the correct output:
+
+   .. code-block:: console
+
+      Output● nginx.service - The nginx HTTP and reverse proxy server
+         Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
+         Active: active (running) since Fri 2017-01-06 17:27:50 UTC; 28s ago
+
+      . . .
+
+      Jan 06 17:27:50 centos-512mb-nyc3-01 systemd[1]: Started The nginx HTTP and reverse proxy server.
+
+5. Enable NGINX to start when your server boots up:
+
+   .. code-block:: console
+
+      $ sudo systemctl enable nginx
+ 
+6. Verify that access to **ports 80 and 443** is not blocked by a firewall.
+
+    ::
+	
+7. Do one of the following:
+
+   * If you are not using a firewall, skip to :ref:`Creating Your SSL Certificate <creating_your_ssl_certificate>`.
+
+      ::
+	  
+   * If you have a running firewall, open ports 80 and 443:
+
+     .. code-block:: console
+
+        $ sudo firewall-cmd --add-service=http
+        $ sudo firewall-cmd --add-service=https
+        $ sudo firewall-cmd --runtime-to-permanent 
+
+8. If you are running an **iptables** firewall with a basic rule set, add HTTP and HTTPS access:
+
+   .. code-block:: console
+
+      $ sudo iptables -I INPUT -p tcp -m tcp --dport 80 -j ACCEPT
+      $ sudo iptables -I INPUT -p tcp -m tcp --dport 443 -j ACCEPT
+
+   .. note:: The commands in Step 8 above are highly dependent on your current rule set.
+
+9. Verify that you can access the default NGINX page from a web browser.
+
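+   If no browser is available, you can check the default page from the command line instead (shown here against ``localhost``; substitute your server's IP address when checking from another machine):
+
+   .. code-block:: console
+
+      $ curl -I http://localhost/
+
+   A response status line of ``HTTP/1.1 200 OK`` indicates that NGINX is serving the default page.
+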
+.. _creating_your_ssl_certificate:
+
+Creating Your SSL Certificate
+=============================
+After installing NGINX and adjusting your firewall, you must create your SSL certificate.
+
+TLS/SSL combines public certificates with private keys. The SSL key, kept private on your server, is used to encrypt content sent to clients, while the SSL certificate is publicly shared with anyone requesting content. In addition, the SSL certificate can be used to decrypt the content signed by the associated SSL key. Your public certificate is located in the **/etc/ssl/certs** directory on your server.
+
+This section describes how to create the **/etc/ssl/private** directory, used for storing your private key file. Because the privacy of this key is essential for security, its permissions must be locked down to prevent unauthorized access.
+
+**To create your SSL certificate:**
+
+1. Create the private key directory and restrict its permissions:
+
+   .. code-block:: console
+
+      $ sudo mkdir /etc/ssl/private
+      $ sudo chmod 700 /etc/ssl/private
+ 
+2. Create a self-signed key and certificate pair with OpenSSL with the following command:
+
+   .. code-block:: console
+
+      $ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt
+ 
+   The following list describes the elements in the command above:
+   
+   * **openssl** - The basic command line tool used for creating and managing OpenSSL certificates, keys, and other files.
+   
+    ::
+
+   * **req** - A subcommand for X.509 **Certificate Signing Request (CSR)** management. X.509 is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management.
+
+    ::
+
+   * **-x509** - Modifies the ``req`` subcommand to generate a self-signed certificate instead of a certificate signing request.
+
+    ::
+
+   * **-nodes** - Sets **OpenSSL** to skip securing the certificate with a passphrase, letting NGINX read the file without user intervention when the server is activated. If you do not use **-nodes**, you must enter your passphrase after every restart.
+
+    ::
+
+   * **-days 365** - Sets the certificate's validation duration to one year.
+
+    ::
+
+   * **-newkey rsa:2048** - Simultaneously generates a new certificate and a new key. Because the key required to sign the certificate was not created in the previous step, it must be created along with the certificate. The **rsa:2048** option generates an RSA key that is 2048 bits long.
+
+    ::
+
+   * **-keyout** - Determines the location of the generated private key file.
+
+    ::
+
+   * **-out** - Determines the location of the certificate.
+
+   After creating a self-signed key and certificate pair with OpenSSL, a series of prompts about your server is presented so that the information you provide is correctly embedded in the certificate.
+
+3. Provide the information requested by the prompts.
+
+   The most important piece of information is the **Common Name**, which is either the server **FQDN** or **your** name. You must enter the domain name associated with your server or your server’s public IP address.
+
+   The following is an example of a filled out set of prompts:
+
+   .. code-block:: console
+
+      OutputCountry Name (2 letter code) [AU]:US
+      State or Province Name (full name) [Some-State]:New York
+      Locality Name (eg, city) []:New York City
+      Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
+      Organizational Unit Name (eg, section) []:Ministry of Water Slides
+      Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
+      Email Address []:admin@your_domain.com
+
+   Both files you create are stored in their own subdirectories of the **/etc/ssl** directory.
+
+   Although SQream uses OpenSSL, we also recommend creating a strong **Diffie-Hellman** group, used for negotiating **Perfect Forward Secrecy** with clients.
+   
+4. Create a strong Diffie-Hellman group:
+
+   .. code-block:: console
+
+      $ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
+ 
+   Creating the Diffie-Hellman group takes a few minutes. The group is stored as the **dhparam.pem** file in the **/etc/ssl/certs** directory and can be used in the configuration.
+   
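+As an optional sanity check, you can confirm that the certificate and private key generated above match by comparing the MD5 digests of their RSA moduli; the two digests must be identical. This is a sketch using the file paths from the steps above:
+
+.. code-block:: console
+
+   $ sudo openssl x509 -noout -modulus -in /etc/ssl/certs/nginx-selfsigned.crt | openssl md5
+   $ sudo openssl rsa -noout -modulus -in /etc/ssl/private/nginx-selfsigned.key | openssl md5
+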
+Configuring NGINX to use SSL
+============================
+After creating your SSL certificate, you must configure NGINX to use SSL.
+
+The default CentOS NGINX configuration is fairly unstructured, with the default HTTP server block located in the main configuration file. NGINX checks for files ending in **.conf** in the **/etc/nginx/conf.d** directory for additional configuration.
+
+SQream creates a new file in the **/etc/nginx/conf.d** directory to configure a server block. This block serves content using the certificate files we generated. In addition, the default server block can be optionally configured to redirect HTTP requests to HTTPS.
+
+.. note:: The example on this page uses the IP address **127.0.0.1**, which you should replace with your machine's IP address.
+
+**To configure NGINX to use SSL:**
+
+1. Create and open a file called **ssl.conf** in the **/etc/nginx/conf.d** directory:
+
+   .. code-block:: console
+
+      $ sudo vi /etc/nginx/conf.d/ssl.conf
+
+2. In the file you created in Step 1 above, open a server block:
+
+   1. Listen on **port 443**, which is the TLS/SSL default port.
+   
+       ::
+   
+   2. Set the ``server_name`` to the server’s domain name or IP address you used as the Common Name when generating your certificate.
+   
+       ::
+	   
+   3. Use the ``ssl_certificate``, ``ssl_certificate_key``, and ``ssl_dhparam`` directives to set the location of the SSL files you generated, as shown in the **/etc/nginx/conf.d/ssl.conf** file below:
+   
+   .. code-block:: console
+
+      upstream ui {
+          server 127.0.0.1:8080;
+      }
+
+      server {
+          listen 443 http2 ssl;
+          listen [::]:443 http2 ssl;
+
+          server_name nginx.sq.l;
+
+          ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
+          ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
+          ssl_dhparam /etc/ssl/certs/dhparam.pem;
+
+      root /usr/share/nginx/html;
+
+      #    location / {
+      #    }
+
+        location / {
+              proxy_pass http://ui;
+              proxy_set_header           X-Forwarded-Proto https;
+              proxy_set_header           X-Forwarded-For $proxy_add_x_forwarded_for;
+              proxy_set_header           X-Real-IP       $remote_addr;
+              proxy_set_header           Host $host;
+              add_header                 Front-End-Https   on;
+              add_header                 X-Cache-Status $upstream_cache_status;
+              proxy_cache                off;
+              proxy_cache_revalidate     off;
+              proxy_cache_min_uses       1;
+              proxy_cache_valid          200 302 1h;
+              proxy_cache_valid          404 3s;
+              proxy_cache_use_stale      error timeout invalid_header updating http_500 http_502 http_503 http_504;
+              proxy_no_cache             $cookie_nocache $arg_nocache $arg_comment $http_pragma $http_authorization;
+              proxy_redirect             default;
+              proxy_max_temp_file_size   0;
+              proxy_connect_timeout      90;
+              proxy_send_timeout         90;
+              proxy_read_timeout         90;
+              proxy_buffer_size          4k;
+              proxy_buffering            on;
+              proxy_buffers              4 32k;
+              proxy_busy_buffers_size    64k;
+              proxy_temp_file_write_size 64k;
+              proxy_intercept_errors     on;
+
+              proxy_set_header           Upgrade $http_upgrade;
+              proxy_set_header           Connection "upgrade";
+          }
+
+          error_page 404 /404.html;
+          location = /404.html {
+          }
+
+          error_page 500 502 503 504 /50x.html;
+          location = /50x.html {
+          }
+      }
+ 
+3. Open and modify the **nginx.conf** file located in the **/etc/nginx/conf.d** directory as follows:
+
+   .. code-block:: console
+
+      $ sudo vi /etc/nginx/conf.d/nginx.conf
+	 
+   .. code-block:: console      
+
+       server {
+           listen       80;
+           listen       [::]:80;
+           server_name  _;
+           root         /usr/share/nginx/html;
+
+           # Load configuration files for the default server block.
+           include /etc/nginx/default.d/*.conf;
+
+           error_page 404 /404.html;
+           location = /404.html {
+           }
+
+           error_page 500 502 503 504 /50x.html;
+           location = /50x.html {
+           }
+       }
+	   
+Redirecting Studio Access from HTTP to HTTPS
+============================================
+After configuring NGINX to use SSL, you must redirect Studio access from HTTP to HTTPS.
+
+According to your current configuration, NGINX responds with encrypted content for requests on port 443, but with **unencrypted** content for requests on **port 80**. This means that our site offers encryption, but does not enforce its usage. This may be fine for some use cases, but it is usually better to require encryption. This is especially important when confidential data like passwords may be transferred between the browser and the server.
+
+The default NGINX configuration file lets you easily add directives to the default port 80 server block by adding files to the **/etc/nginx/default.d** directory.
+
+**To create a redirect from HTTP to HTTPS:**
+
+1. Create a new file called **ssl-redirect.conf** and open it for editing:
+
+   .. code-block:: console
+
+      $ sudo vi /etc/nginx/default.d/ssl-redirect.conf
+
+2. Copy and paste this line:
+
+   .. code-block:: console
+
+      return 301 https://$host$request_uri:8080/;
+	  
+Activating Your NGINX Configuration
+===================================
+After redirecting from HTTP to HTTPS, you must restart NGINX to activate your new configuration.
+
+**To activate your NGINX configuration:**
+
+1. Verify that your files contain no syntax errors:
+
+   .. code-block:: console
+
+      $ sudo nginx -t
+   
+   The following output is generated if your files contain no syntax errors:
+
+   .. code-block:: console
+
+      nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
+      nginx: configuration file /etc/nginx/nginx.conf test is successful
+
+2. Restart NGINX to activate your configuration:
+
+   .. code-block:: console
+
+      $ sudo systemctl restart nginx
+
+Verifying that NGINX is Running
+===============================
+After activating your NGINX configuration, you must verify that NGINX is running correctly.
+
+**To verify that NGINX is running correctly:**
+
+1. Check that the service is up and running:
+
+   .. code-block:: console
+
+      $ systemctl status nginx
+  
+   The following is an example of the correct output:
+
+   .. code-block:: console
+
+      Output● nginx.service - The nginx HTTP and reverse proxy server
+         Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
+         Active: active (running) since Fri 2017-01-06 17:27:50 UTC; 28s ago
+
+      . . .
+
+      Jan 06 17:27:50 centos-512mb-nyc3-01 systemd[1]: Started The nginx HTTP and reverse proxy server.
+ 
+2. Run the following command:
+
+   .. code-block:: console
+
+      $ sudo netstat -nltp |grep nginx
+ 
+   The following is an example of the correct output:
+
+   .. code-block:: console
+
+      [sqream@dorb-pc etc]$ sudo netstat -nltp |grep nginx
+      tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      15486/nginx: master 
+      tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      15486/nginx: master 
+      tcp6       0      0 :::80                   :::*                    LISTEN      15486/nginx: master 
+      tcp6       0      0 :::443                  :::*                    LISTEN      15486/nginx: master
\ No newline at end of file
diff --git a/installation_guides/installing_prometheus_using_binary_packages.rst b/installation_guides/installing_prometheus_using_binary_packages.rst
index a6104bdd0..5031ec181 100644
--- a/installation_guides/installing_prometheus_using_binary_packages.rst
+++ b/installation_guides/installing_prometheus_using_binary_packages.rst
@@ -102,7 +102,7 @@ You must install Prometheus before installing the Dashboard Data Collector.
        $ sudo chown -R prometheus:prometheus /etc/prometheus/consoles
        $ sudo chown -R prometheus:prometheus /etc/prometheus/console_libraries
 
-For more information on installing the Dashboard Data Collector, see `Installing the Dashboard Data Collector `_.
+For more information on installing the Dashboard Data Collector, see `Installing the Dashboard Data Collector `_.
 
 Back to :ref:`Installing Prometheus Using Binary Packages`
 
@@ -238,4 +238,4 @@ From the **Query** tab you can query metrics, as shown below:
    
    * - .. image:: /_static/images/3c9c4e8b-49bd-44a8-9829-81d1772ed962.gif   
 
-Back to :ref:`Installing Prometheus Using Binary Packages`
+Back to :ref:`Installing Prometheus Using Binary Packages`
\ No newline at end of file
diff --git a/installation_guides/installing_sqream_with_binary.rst b/installation_guides/installing_sqream_with_binary.rst
index dd1207ab7..7a3c7fff8 100644
--- a/installation_guides/installing_sqream_with_binary.rst
+++ b/installation_guides/installing_sqream_with_binary.rst
@@ -1,279 +1,277 @@
-.. _installing_sqream_with_binary:
-
-*********************************************
-Installing SQream Using Binary Packages
-*********************************************
-This procedure describes how to install SQream using Binary packages and must be done on all servers.
-
-**To install SQream using Binary packages:**
-
-1. Copy the SQream package to the **/home/sqream** directory for the current version:
-
-   .. code-block:: console
-   
-      $ tar -xf sqream-db-v<2020.2>.tar.gz
-
-2. Append the version number to the name of the SQream folder. The version number in the following example is **v2020.2**:
-
-   .. code-block:: console
-   
-      $ mv sqream sqream-db-v<2020.2>
-
-3. Move the new version of the SQream folder to the **/usr/local/** directory:
-
-   .. code-block:: console
-   
-      $ sudo mv sqream-db-v<2020.2> /usr/local/
-      
-4. Change the ownership of the folder to **sqream folder**:
-
-   .. code-block:: console
-   
-      $ sudo chown -R sqream:sqream  /usr/local/sqream-db-v<2020.2>
-
-5. Navigate to the **/usr/local/** directory and create a symbolic link to SQream:
-
-   .. code-block:: console
-   
-      $ cd /usr/local
-      $ sudo ln -s sqream-db-v<2020.2> sqream
-      
-6. Verify that the symbolic link that you created points to the folder that you created:
-
-   .. code-block:: console
-   
-      $ ls -l
-      
-7. Verify that the symbolic link that you created points to the folder that you created:
-
-   .. code-block:: console
-   
-      $ sqream -> sqream-db-v<2020.2>
-      
-8. Create the SQream configuration file destination folders and set their ownership to **sqream**:
-
-   .. code-block:: console
-   
-      $ sudo mkdir /etc/sqream
-      $ sudo chown -R sqream:sqream /etc/sqream
-      
-9. Create the SQream service log destination folders and set their ownership to **sqream**:
-
-   .. code-block:: console
-   
-      $ sudo mkdir /var/log/sqream
-      $ sudo chown -R sqream:sqream /var/log/sqream
-
-10. Navigate to the **/usr/local/** directory and copy the SQream configuration files from them:
-
-   .. code-block:: console
-   
-      $ cd /usr/local/sqream/etc/
-      $ cp * /etc/sqream
-      
-The configuration files are **service configuration files**, and the JSON files are **SQream configuration files**, for a total of four files. The number of SQream configuration files and JSON files must be identical.
-      
-**NOTICE** - Verify that the JSON files have been configured correctly and that all required flags have been set to the correct values.
-
-In each JSON file, the following parameters **must be updated**:
-
-* instanceId
-* machineIP
-* metadataServerIp
-* spoolMemoryGB
-* limitQueryMemoryGB
-* gpu
-* port
-* ssl_port
-
-Note the following:
-
-* The value of the **metadataServerIp** parameter must point to the IP that the metadata is running on.
-* The value of the **machineIP** parameter must point to the IP of your local machine.
-
-It would be same on server running metadataserver and different on other server nodes.
-
-11. **Optional** - To run additional SQream services, copy the required configuration files and create additional JSON files:
-
-   .. code-block:: console
-   
-      $ cp sqream2_config.json sqream3_config.json
-      $ vim sqream3_config.json
-      
-**NOTICE:** A unique **instanceID** must be used in each JSON file. IN the example above, the instanceID **sqream_2** is changed to **sqream_3**.
-
-12. **Optional** - If you created additional services in **Step 11**, verify that you have also created their additional configuration files:
-
-    .. code-block:: console
-   
-       $ cp sqream2-service.conf sqream3-service.conf
-       $ vim sqream3-service.conf
-      
-13. For each SQream service configuration file, do the following:
-
-    1. Change the **SERVICE_NAME=sqream2** value to **SERVICE_NAME=sqream3**.
-    
-    2. Change **LOGFILE=/var/log/sqream/sqream2.log** to **LOGFILE=/var/log/sqream/sqream3.log**.
-    
-**NOTE:** If you are running SQream on more than one server, you must configure the ``serverpicker`` and ``metadatserver`` services to start on only one of the servers. If **metadataserver** is running on the first server, the ``metadataServerIP`` value in the second server's /etc/sqream/sqream1_config.json file must point to the IP of the server on which the ``metadataserver`` service is running.
-    
-14. Set up **servicepicker**:
-
-    1. Do the following:
-
-       .. code-block:: console
-   
-          $ vim /etc/sqream/server_picker.conf
-    
-    2. Change the IP **127.0.0.1** to the IP of the server that the **metadataserver** service is running on.    
-    
-    3. Change the **CLUSTER** to the value of the cluster path.
-     
-15. Set up your service files:      
-      
-    .. code-block:: console
-   
-       $ cd /usr/local/sqream/service/
-       $ cp sqream2.service sqream3.service
-       $ vim sqream3.service      
-       
-16. Increment each **EnvironmentFile=/etc/sqream/sqream2-service.conf** configuration file for each SQream service file, as shown below:
-
-    .. code-block:: console
-     
-       $ EnvironmentFile=/etc/sqream/sqream<3>-service.conf
-       
-17. Copy and register your service files into systemd:       
-       
-    .. code-block:: console
-     
-       $ sudo cp metadataserver.service /usr/lib/systemd/system/
-       $ sudo cp serverpicker.service /usr/lib/systemd/system/
-       $ sudo cp sqream*.service /usr/lib/systemd/system/
-       
-18. Verify that your service files have been copied into systemd:
-
-    .. code-block:: console
-     
-       $ ls -l /usr/lib/systemd/system/sqream*
-       $ ls -l /usr/lib/systemd/system/metadataserver.service
-       $ ls -l /usr/lib/systemd/system/serverpicker.service
-       $ sudo systemctl daemon-reload       
-       
-19. Copy the license into the **/etc/license** directory:
-
-    .. code-block:: console
-     
-       $ cp license.enc /etc/sqream/   
-
-       
-If you have an HDFS environment, see :ref:`Configuring an HDFS Environment for the User sqream `.
-
-
-
-
-
-
-Upgrading SQream Version
--------------------------
-Upgrading your SQream version requires stopping all running services while you manually upgrade SQream.
-
-**To upgrade your version of SQream:**
-
-1. Stop all actively running SQream services.
-
-**Notice-** All SQream services must remain stopped while the upgrade is in process. Ensuring that SQream services remain stopped depends on the tool being used.
-
-For an example of stopping actively running SQream services, see :ref:`Launching SQream with Monit `.
-
-
-      
-2. Verify that SQream has stopped listening on ports **500X**, **510X**, and **310X**:
-
-   .. code-block:: console
-
-      $ sudo netstat -nltp    #to make sure sqream stopped listening on 500X, 510X and 310X ports.
-
-3. Replace the old version ``sqream-db-v2020.2``, with the new version ``sqream-db-v2021.1``:
-
-   .. code-block:: console
-    
-      $ cd /home/sqream
-      $ mkdir tempfolder
-      $ mv sqream-db-v2021.1.tar.gz tempfolder/
-      $ tar -xf sqream-db-v2021.1.tar.gz
-      $ sudo mv sqream /usr/local/sqream-db-v2021.1
-      $ cd /usr/local
-      $ sudo chown -R sqream:sqream sqream-db-v2021.1
-   
-4. Remove the symbolic link:
-
-   .. code-block:: console
-   
-      $ sudo rm sqream
-   
-5. Create a new symbolic link named "sqream" pointing to the new version:
-
-   .. code-block:: console  
-
-      $ sudo ln -s sqream-db-v2021.1 sqream
-
-6. Verify that the symbolic SQream link points to the real folder:
-
-   .. code-block:: console  
-
-      $ ls -l
-	 
-   The following is an example of the correct output:
-
-   .. code-block:: console
-    
-      $ sqream -> sqream-db-v2021.1
-
-7. **Optional-** (for major versions) Upgrade your version of SQream storage cluster, as shown in the following example:
-
-   .. code-block:: console  
-
-      $ cat /etc/sqream/sqream1_config.json |grep cluster
-      $ ./upgrade_storage 
-	  
-   The following is an example of the correct output:
-	  
-   .. code-block:: console  
-
-	  get_leveldb_version path{}
-	  current storage version 23
-      upgrade_v24
-      upgrade_storage to 24
-	  upgrade_storage to 24 - Done
-	  upgrade_v25
-	  upgrade_storage to 25
-	  upgrade_storage to 25 - Done
-	  upgrade_v26
-	  upgrade_storage to 26
-	  upgrade_storage to 26 - Done
-	  validate_leveldb
-	  ...
-      upgrade_v37
-	  upgrade_storage to 37
-	  upgrade_storage to 37 - Done
-	  validate_leveldb
-      storage has been upgraded successfully to version 37
- 
-8. Verify that the latest version has been installed:
-
-   .. code-block:: console
-    
-      $ ./sqream sql --username sqream --password sqream --host localhost --databasename master -c "SELECT SHOW_VERSION();"
-      
-   The following is an example of the correct output:
- 
-   .. code-block:: console
-    
-      v2021.1
-      1 row
-      time: 0.050603s 
- 
-For more information, see the `upgrade_storage `_ command line program.
-
-For more information about installing Studio on a stand-alone server, see `Installing Studio on a Stand-Alone Server `_.
\ No newline at end of file
+.. _installing_sqream_with_binary:
+
+*********************************************
+Installing SQream Using Binary Packages
+*********************************************
+This procedure describes how to install SQream using binary packages. It must be performed on all servers.
+
+**To install SQream using binary packages:**
+
+1. Copy the SQream package into the **/home/sqream** directory and extract it:
+
+   .. code-block:: console
+   
+      $ tar -xf sqream-db-v<2020.2>.tar.gz
+
+2. Append the version number to the name of the SQream folder. The version number in the following example is **v2020.2**:
+
+   .. code-block:: console
+   
+      $ mv sqream sqream-db-v<2020.2>
+
+3. Move the new version of the SQream folder to the **/usr/local/** directory:
+
+   .. code-block:: console
+   
+      $ sudo mv sqream-db-v<2020.2> /usr/local/
+      
+4. Change the ownership of the folder to the **sqream** user:
+
+   .. code-block:: console
+   
+      $ sudo chown -R sqream:sqream  /usr/local/sqream-db-v<2020.2>
+
+5. Navigate to the **/usr/local/** directory and create a symbolic link to SQream:
+
+   .. code-block:: console
+   
+      $ cd /usr/local
+      $ sudo ln -s sqream-db-v<2020.2> sqream
+      
+6. Verify that the symbolic link that you created points to the folder that you created:
+
+   .. code-block:: console
+   
+      $ ls -l
+      
+7. Confirm that the output shows the symbolic link pointing to the versioned folder:
+
+   .. code-block:: console
+   
+      sqream -> sqream-db-v<2020.2>
+      
+8. Create the SQream configuration file destination folders and set their ownership to **sqream**:
+
+   .. code-block:: console
+   
+      $ sudo mkdir /etc/sqream
+      $ sudo chown -R sqream:sqream /etc/sqream
+      
+9. Create the SQream service log destination folders and set their ownership to **sqream**:
+
+   .. code-block:: console
+   
+      $ sudo mkdir /var/log/sqream
+      $ sudo chown -R sqream:sqream /var/log/sqream
+
+10. Navigate to the **/usr/local/sqream/etc/** directory and copy the SQream configuration files from it:
+
+    .. code-block:: console
+
+       $ cd /usr/local/sqream/etc/
+       $ cp * /etc/sqream
+      
+The copied files comprise **service configuration files** (``.conf``) and **SQream configuration files** (JSON), four files in total. The number of service configuration files and JSON files must be identical.
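The file-count check can be scripted. The sketch below uses a scratch directory with hypothetical file names rather than ``/etc/sqream``, since the exact names vary per installation:

```shell
# Create a scratch directory that mimics /etc/sqream (file names are hypothetical).
mkdir -p etc-sqream-demo
touch etc-sqream-demo/sqream1-service.conf etc-sqream-demo/sqream2-service.conf
touch etc-sqream-demo/sqream1_config.json etc-sqream-demo/sqream2_config.json

# Count each file type; the two counts must match.
conf_count=$(ls etc-sqream-demo/*-service.conf | wc -l)
json_count=$(ls etc-sqream-demo/*_config.json | wc -l)
if [ "$conf_count" -eq "$json_count" ]; then
    echo "OK: $conf_count service files, $json_count JSON files"
else
    echo "MISMATCH: $conf_count service files, $json_count JSON files"
fi
```

On a real install, point the two ``ls`` globs at ``/etc/sqream`` instead of the scratch directory.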
+      
+.. note:: Verify that the JSON files have been configured correctly and that all required flags have been set to the correct values.
+
+In each JSON file, the following parameters **must be updated**:
+
+* instanceId
+* machineIP
+* metadataServerIp
+* spoolMemoryGB
+* limitQueryMemoryGB
+* gpu
+* port
+* ssl_port
+
+Note the following:
+
+* The value of the **metadataServerIp** parameter must point to the IP of the server on which the ``metadataserver`` service is running.
+* The value of the **machineIP** parameter must point to the IP of your local machine.
+
+These two values are identical on the server running the ``metadataserver`` service, and differ on the other server nodes.
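As a sketch, a minimal SQream configuration JSON covering the parameters above might look like the following. All values (instance name, IPs, memory sizes, GPU ordinal, and ports) are illustrative placeholders, not defaults:

```json
{
  "instanceId": "sqream_1",
  "machineIP": "10.0.0.21",
  "metadataServerIp": "10.0.0.21",
  "spoolMemoryGB": 4,
  "limitQueryMemoryGB": 8,
  "gpu": 0,
  "port": 5000,
  "ssl_port": 5100
}
```

On a server node other than the one running ``metadataserver``, only ``machineIP`` would change to that node's own IP.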
+
+11. **Optional** - To run additional SQream services, copy the required configuration files and create additional JSON files:
+
+    .. code-block:: console
+
+       $ cp sqream2_config.json sqream3_config.json
+       $ vim sqream3_config.json
+      
+.. note:: A unique **instanceId** must be used in each JSON file. In the example above, the instanceId **sqream_2** is changed to **sqream_3**.
+
+12. **Optional** - If you created additional services in **Step 11**, verify that you have also created their additional configuration files:
+
+    .. code-block:: console
+   
+       $ cp sqream2-service.conf sqream3-service.conf
+       $ vim sqream3-service.conf
+      
+13. For each SQream service configuration file, do the following:
+
+    1. Change the **SERVICE_NAME=sqream2** value to **SERVICE_NAME=sqream3**.
+    
+    2. Change **LOGFILE=/var/log/sqream/sqream2.log** to **LOGFILE=/var/log/sqream/sqream3.log**.
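The two edits in step 13 can also be applied with ``sed``. The sketch below works on a scratch copy with an assumed file layout; on a real install, point it at the file under ``/etc/sqream``:

```shell
# Scratch copy with the assumed layout of a service configuration file.
cat > sqream3-service.conf <<'EOF'
SERVICE_NAME=sqream2
LOGFILE=/var/log/sqream/sqream2.log
EOF

# Apply both edits from step 13 in place.
sed -i -e 's/^SERVICE_NAME=sqream2$/SERVICE_NAME=sqream3/' \
       -e 's|^LOGFILE=/var/log/sqream/sqream2\.log$|LOGFILE=/var/log/sqream/sqream3.log|' \
       sqream3-service.conf

cat sqream3-service.conf
```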
+    
+.. note:: If you are running SQream on more than one server, you must configure the ``serverpicker`` and ``metadataserver`` services to start on only one of the servers. If ``metadataserver`` is running on the first server, the ``metadataServerIp`` value in the second server's ``/etc/sqream/sqream1_config.json`` file must point to the IP of the server on which the ``metadataserver`` service is running.
+    
+14. Set up **serverpicker**:
+
+    1. Do the following:
+
+       .. code-block:: console
+   
+          $ vim /etc/sqream/server_picker.conf
+    
+    2. Change the IP **127.0.0.1** to the IP of the server that the **metadataserver** service is running on.    
+    
+    3. Change the **CLUSTER** to the value of the cluster path.
+     
+15. Set up your service files:      
+      
+    .. code-block:: console
+   
+       $ cd /usr/local/sqream/service/
+       $ cp sqream2.service sqream3.service
+       $ vim sqream3.service      
+       
+16. In each new SQream service file, increment the **EnvironmentFile** path so that it points to the matching service configuration file, as shown below:
+
+    .. code-block:: console
+
+       EnvironmentFile=/etc/sqream/sqream<3>-service.conf
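Steps 15 and 16 together amount to cloning the unit file and bumping the ``EnvironmentFile`` path. The unit content below is a hypothetical minimal stand-in for the real shipped file, so the sketch can run anywhere:

```shell
# Hypothetical minimal unit file standing in for sqream2.service.
cat > sqream2.service <<'EOF'
[Service]
EnvironmentFile=/etc/sqream/sqream2-service.conf
EOF

# Clone it for the next service and increment the EnvironmentFile path.
cp sqream2.service sqream3.service
sed -i 's|/etc/sqream/sqream2-service.conf|/etc/sqream/sqream3-service.conf|' sqream3.service

grep EnvironmentFile sqream3.service
```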
+       
+17. Copy and register your service files into systemd:       
+       
+    .. code-block:: console
+     
+       $ sudo cp metadataserver.service /usr/lib/systemd/system/
+       $ sudo cp serverpicker.service /usr/lib/systemd/system/
+       $ sudo cp sqream*.service /usr/lib/systemd/system/
+       
+18. Verify that your service files have been copied into systemd:
+
+    .. code-block:: console
+     
+       $ ls -l /usr/lib/systemd/system/sqream*
+       $ ls -l /usr/lib/systemd/system/metadataserver.service
+       $ ls -l /usr/lib/systemd/system/serverpicker.service
+       $ sudo systemctl daemon-reload       
+       
+19. Copy the license into the **/etc/sqream** directory:
+
+    .. code-block:: console
+     
+       $ cp license.enc /etc/sqream/   
+
+       
+If you have an HDFS environment, see :ref:`Configuring an HDFS Environment for the User sqream `.
+
+Upgrading SQream Version
+-------------------------
+Upgrading your SQream version requires stopping all running services while you manually upgrade SQream.
+
+**To upgrade your version of SQream:**
+
+1. Stop all actively running SQream services.
+
+.. note:: All SQream services must remain stopped while the upgrade is in progress. How you keep them stopped depends on the service management tool being used.
+
+For an example of stopping actively running SQream services, see :ref:`Launching SQream with Monit `.
+   
+2. Verify that SQream has stopped listening on ports **500X**, **510X**, and **310X**:
+
+   .. code-block:: console
+
+      $ sudo netstat -nltp    # verify nothing is listening on the 500X, 510X, and 310X ports
+
+3. Replace the old version, ``sqream-db-v2020.2``, with the new version, ``sqream-db-v2021.1``:
+
+   .. code-block:: console
+    
+      $ cd /home/sqream
+      $ mkdir tempfolder
+      $ mv sqream-db-v2021.1.tar.gz tempfolder/
+      $ cd tempfolder
+      $ tar -xf sqream-db-v2021.1.tar.gz
+      $ sudo mv sqream /usr/local/sqream-db-v2021.1
+      $ cd /usr/local
+      $ sudo chown -R sqream:sqream sqream-db-v2021.1
+   
+4. Remove the symbolic link:
+
+   .. code-block:: console
+   
+      $ sudo rm sqream
+   
+5. Create a new symbolic link named ``sqream`` that points to the new version:
+
+   .. code-block:: console  
+
+      $ sudo ln -s sqream-db-v2021.1 sqream
+
+6. Verify that the ``sqream`` symbolic link points to the new folder:
+
+   .. code-block:: console
+
+      $ ls -l
+
+   The following is an example of the correct output:
+
+   .. code-block:: console
+
+      sqream -> sqream-db-v2021.1
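``readlink`` prints the link target directly, which is easier to script than parsing ``ls -l`` output. The sketch below builds a throwaway directory mirroring the layout, so it can run anywhere; the version name is illustrative:

```shell
# Build a throwaway layout mirroring /usr/local (version name is illustrative).
mkdir -p demo/sqream-db-v2021.1
ln -sfn sqream-db-v2021.1 demo/sqream

# Print the symlink target; expected: sqream-db-v2021.1
readlink demo/sqream
```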
+
+7. **Optional** - For major versions, upgrade your SQream storage cluster, as shown in the following example:
+
+   .. code-block:: console  
+
+      $ cat /etc/sqream/sqream1_config.json |grep cluster
+      $ ./upgrade_storage 
+
+   The following is an example of the correct output:
+
+   .. code-block:: console
+
+      get_leveldb_version path{}
+      current storage version 23
+      upgrade_v24
+      upgrade_storage to 24
+      upgrade_storage to 24 - Done
+      upgrade_v25
+      upgrade_storage to 25
+      upgrade_storage to 25 - Done
+      upgrade_v26
+      upgrade_storage to 26
+      upgrade_storage to 26 - Done
+      validate_leveldb
+      ...
+      upgrade_v37
+      upgrade_storage to 37
+      upgrade_storage to 37 - Done
+      validate_leveldb
+      storage has been upgraded successfully to version 37
+ 
+8. Verify that the latest version has been installed:
+
+   .. code-block:: console
+    
+      $ ./sqream sql --username sqream --password sqream --host localhost --databasename master -c "SELECT SHOW_VERSION();"
+      
+   The following is an example of the correct output:
+ 
+   .. code-block:: console
+    
+      v2021.1
+      1 row
+      time: 0.050603s 
+ 
+For more information, see the `upgrade_storage `_ command line program.
+
+For more information about installing Studio on a stand-alone server, see `Installing Studio on a Stand-Alone Server `_.
\ No newline at end of file
diff --git a/installation_guides/installing_sqream_with_kubernetes.rst b/installation_guides/installing_sqream_with_kubernetes.rst
index 093f21ba3..fbf2566ed 100644
--- a/installation_guides/installing_sqream_with_kubernetes.rst
+++ b/installation_guides/installing_sqream_with_kubernetes.rst
@@ -197,27 +197,32 @@ After completing all of the steps above, you must check the CUDA version.
 
    .. code-block:: postgres
    
-      $ +-----------------------------------------------------------------------------+
-      $ | NVIDIA-SMI 418.87.00    Driver Version: 418.87.00    CUDA Version: 10.1     |
-      $ |-------------------------------+----------------------+----------------------+
-      $ | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
-      $ | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-      $ |===============================+======================+======================|
-      $ |   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
-      $ | 32%   38C    P0    N/A /  75W |      0MiB /  4039MiB |      0%      Default |
-      $ +-------------------------------+----------------------+----------------------+
-      $                                                                                
-      $ +-----------------------------------------------------------------------------+
-      $ | Processes:                                                       GPU Memory |
-      $ |  GPU       PID   Type   Process name                             Usage      |
-      $ |=============================================================================|
-      $ |  No running processes found                                                 |
-      $ +-----------------------------------------------------------------------------+
+      +-----------------------------------------------------------------------------+
+      | NVIDIA-SMI 470.82.01    Driver Version: 470.82.01    CUDA Version: 10.1     |
+      |-------------------------------+----------------------+----------------------+
+      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
+      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
+      |                               |                      |               MIG M. |
+      |===============================+======================+======================|
+      |   0  NVIDIA A100-PCI...  On   | 00000000:17:00.0 Off |                    0 |
+      | N/A   34C    P0    64W / 300W |  79927MiB / 80994MiB |      0%      Default |
+      |                               |                      |             Disabled |
+      +-------------------------------+----------------------+----------------------+
+      |   1  NVIDIA A100-PCI...  On   | 00000000:CA:00.0 Off |                    0 |
+      | N/A   35C    P0    60W / 300W |  79927MiB / 80994MiB |      0%      Default |
+      |                               |                      |             Disabled |
+      +-------------------------------+----------------------+----------------------+
+
+      +-----------------------------------------------------------------------------+
+      | Processes:                                                       GPU Memory |
+      |  GPU       PID   Type   Process name                             Usage      |
+      |=============================================================================|
+      |  No running processes found                                                 |
+      +-----------------------------------------------------------------------------+
 
 In the above output, the CUDA version is **10.1**.
 
-If the above output is not generated, CUDA has not been installed. To install CUDA, see `Installing the CUDA driver `_.
-
+If the above output is not generated, CUDA has not been installed. To install CUDA, see :ref:`installing-the-cuda-driver`.
 
 Go back to :ref:`Setting Up Your Hosts`
 
@@ -795,40 +800,46 @@ Installing the NVIDIA Docker2 Toolkit on an x86_64 Bit Processor on CentOS
 
    .. code-block:: postgres
    
-      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi
+      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1.3-base-centos7 nvidia-smi
 
    The following is an example of the correct output:
 
    .. code-block:: postgres
    
-      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi
-      $ Unable to find image 'nvidia/cuda:10.1-base' locally
-      $ 10.1-base: Pulling from nvidia/cuda
-      $ d519e2592276: Pull complete 
-      $ d22d2dfcfa9c: Pull complete 
-      $ b3afe92c540b: Pull complete 
-      $ 13a10df09dc1: Pull complete 
-      $ 4f0bc36a7e1d: Pull complete 
-      $ cd710321007d: Pull complete 
-      $ Digest: sha256:635629544b2a2be3781246fdddc55cc1a7d8b352e2ef205ba6122b8404a52123
-      $ Status: Downloaded newer image for nvidia/cuda:10.1-base
-      $ Sun Feb 14 13:27:58 2021       
-      $ +-----------------------------------------------------------------------------+
-      $ | NVIDIA-SMI 418.87.00    Driver Version: 418.87.00    CUDA Version: 10.1     |
-      $ |-------------------------------+----------------------+----------------------+
-      $ | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
-      $ | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
-      $ |===============================+======================+======================|
-      $ |   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
-      $ | 32%   37C    P0    N/A /  75W |      0MiB /  4039MiB |      0%      Default |
-      $ +-------------------------------+----------------------+----------------------+
-      $                                                                                
-      $ +-----------------------------------------------------------------------------+
-      $ | Processes:                                                       GPU Memory |
-      $ |  GPU       PID   Type   Process name                             Usage      |
-      $ |=============================================================================|
-      $ |  No running processes found                                                 |
-      $ +-----------------------------------------------------------------------------+
+      docker run --runtime=nvidia --rm nvidia/cuda:10.1.3-base-centos7 nvidia-smi
+      Unable to find image 'nvidia/cuda:10.1.3-base-centos7' locally
+      10.1.3-base-centos7: Pulling from nvidia/cuda
+      d519e2592276: Pull complete 
+      d22d2dfcfa9c: Pull complete 
+      b3afe92c540b: Pull complete 
+      13a10df09dc1: Pull complete 
+      4f0bc36a7e1d: Pull complete 
+      cd710321007d: Pull complete 
+      Digest: sha256:635629544b2a2be3781246fdddc55cc1a7d8b352e2ef205ba6122b8404a52123
+      Status: Downloaded newer image for nvidia/cuda:10.1.3-base-centos7
+      Sun Feb 14 13:27:58 2021       
+      +-----------------------------------------------------------------------------+
+      | NVIDIA-SMI 470.82.01    Driver Version: 470.82.01    CUDA Version: 10.1     |
+      |-------------------------------+----------------------+----------------------+
+      | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
+      | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
+      |                               |                      |               MIG M. |
+      |===============================+======================+======================|
+      |   0  NVIDIA A100-PCI...  On   | 00000000:17:00.0 Off |                    0 |
+      | N/A   34C    P0    64W / 300W |  79927MiB / 80994MiB |      0%      Default |
+      |                               |                      |             Disabled |
+      +-------------------------------+----------------------+----------------------+
+      |   1  NVIDIA A100-PCI...  On   | 00000000:CA:00.0 Off |                    0 |
+      | N/A   35C    P0    60W / 300W |  79927MiB / 80994MiB |      0%      Default |
+      |                               |                      |             Disabled |
+      +-------------------------------+----------------------+----------------------+
+                                                                                        
+      +-----------------------------------------------------------------------------+
+      | Processes:                                                       GPU Memory |
+      |  GPU       PID   Type   Process name                             Usage      |
+      |=============================================================================|
+      |  No running processes found                                                 |
+      +-----------------------------------------------------------------------------+
 
 For more information on installing the NVIDIA Docker2 Toolkit on an x86_64 Bit Processor on CentOS, see `NVIDIA Docker Installation - CentOS distributions `_
      
diff --git a/installation_guides/installing_studio_on_stand_alone_server.rst b/installation_guides/installing_studio_on_stand_alone_server.rst
index 874adba8d..ab28f0a7c 100644
--- a/installation_guides/installing_studio_on_stand_alone_server.rst
+++ b/installation_guides/installing_studio_on_stand_alone_server.rst
@@ -5,9 +5,7 @@
 ***********************
 Installing Studio on a Stand-Alone Server
 ***********************
-
-
-The **Installing Studio on a Stand-Alone Server** guide describes how to install SQream Studio on a stand-alone server. A stand-alone server is a server that does not run SQream based on binary files, Docker, or Kubernetes.
+A stand-alone server is a server that does not run SQream based on binary files or Kubernetes.
 
 The Installing Studio on a Stand-Alone Server guide includes the following sections:
 
@@ -147,7 +145,7 @@ After installing the Dashboard Data Collector, you can install Studio.
  
    .. code-block:: console
      
-      $ npm run setup -- -y --host= --port=3108
+      $ npm run setup -- -y --host= --port=3108 --data-collector-url=http://:8100/api/dashboard/data
 
    The above command creates the **sqream-admin-config.json** configuration file in the **sqream-admin** folder and shows the following output:
    
@@ -158,6 +156,40 @@ After installing the Dashboard Data Collector, you can install Studio.
    For more information about the available set-up arguments, see :ref:`Set-Up Arguments`.
 
   ::
+  
+5. To access Studio over a secure connection, do the following in your configuration file:
+
+   #. Change your ``port`` value to **3109**.
+   
+       ::
+	   
+   #. Change your ``ssl`` flag value to **true**.
+   
+      The following is an example of the correctly modified configuration file:
+	  
+      .. code-block:: console
+     
+         {
+           "debugSqream": false,
+           "webHost": "localhost",
+           "webPort": 8080,
+           "webSslPort": 8443,
+           "logsDirectory": "",
+           "clusterType": "standalone",
+           "dataCollectorUrl": "",
+           "connections": [
+             {
+               "host": "127.0.0.1",
+               "port":3109,
+               "isCluster": true,
+               "name": "default",
+               "service": "sqream",
+               "ssl":true,
+               "networkTimeout": 60000,
+               "connectionTimeout": 3000
+             }
+           ]
+         }
    
 5. If you have installed Studio on a server where SQream is already installed, move the **sqream-admin-config.json** file to **/etc/sqream/**:
 
@@ -399,180 +431,3 @@ To upgrade Studio you need to stop the version that you currently have.
 
 Back to :ref:`Installing Studio on a Stand-Alone Server`
 
-.. _install_studio_docker_container:
-
-Installing Studio in a Docker Container
-^^^^^^^^^^^^^^^^^^^^^^^
-This guide explains how to install SQream Studio in a Docker container and includes the following sections:
-
-.. contents::
-   :local:
-   :depth: 1
-
-Installing Studio
---------------
-If you have already installed Docker, you can install Studio in a Docker container.
-
-**To install Studio:**
-
-1. Copy the downloaded image onto the target server.
-  
-::  
-
-2. Load the Docker image.
-
-   .. code-block:: console
-
-      $ docker load -i 
-
-::
-	
-3. If the downloaded image is called **sqream-acceleration-studio-5.1.3.x86_64.docker18.0.3.tar,** run the following command:
-
-   .. code-block:: console
-
-      $ docker load -i sqream-acceleration-studio-5.1.3.x86_64.docker18.0.3.tar
-
-::
-	
-4. Start the Docker container.
-
-   .. code-block:: console
-
-      $ docker run -d --restart=unless-stopped -p :8080 -e runtime=docker -e SQREAM_K8S_PICKER= -e SQREAM_PICKER_PORT= -e SQREAM_DATABASE_NAME= -e SQREAM_ADMIN_UI_PORT=8080 --name=sqream-admin-ui 
-
-   The following is an example of the command above:
-
-   .. code-block:: console
-
-      $ docker run -d --name sqream-studio  -p 8080:8080 -e runtime=docker -e SQREAM_K8S_PICKER=192.168.0.183 -e SQREAM_PICKER_PORT=3108 -e SQREAM_DATABASE_NAME=master -e SQREAM_ADMIN_UI_PORT=8080 sqream-acceleration-studio:5.1.3
-
-Back to :ref:`Installing Studio in a Docker Container`
-
-Accessing Studio
------------------
-You can access Studio from Port 8080: ``http://:8080``.
-
-If you want to use Studio over a secure connection (https), you must use the parameter values shown in the following table:
-	 
-.. list-table::
-   :widths: 10 25 65
-   :header-rows: 1  
-   
-   * - Parameter
-     - Default Value
-     - Description
-   * - ``--web-ssl-port``
-     - 8443
-     - 
-   * - ``--web-ssl-key-path``
-     - None
-     - The path of SSL key PEM file for enabling https. Leave empty to disable.
-   * - ``--web-ssl-cert-path``
-     - None
-     - The path of SSL certificate PEM file for enabling https. Leave empty to disable.	 
-
-You can configure the above parameters using the following syntax:
-
-.. code-block:: console
-
-  $ npm run setup -- -y --host=127.0.0.1 --port=3108
-  
-.. _using_docker_container_commands:
-
-Back to :ref:`Installing Studio in a Docker Container`
-
-Using Docker Container Commands
----------------
-When installing Studio in Docker, you can run the following commands:
-
-* View Docker container logs:
-
-  .. code-block:: console
-
-     $ docker logs -f sqream-admin-ui
-	  
-* Restart the Docker container: 
-
-  .. code-block:: console
-
-     $ docker restart sqream-admin-ui
-	  
-* Kill the Docker container:
-
-  .. code-block:: console
-
-     $ docker rm -f sqream-admin-ui
-      
-Back to :ref:`Installing Studio in a Docker Container`
-
-Setting Up Argument Configurations
-----------------
-When creating the **sqream-admin-config.json** configuration file, you can add ``-y`` to create the configuration file in non-interactive mode. Configuration files created in non-interactive mode use all the parameter defaults not provided in the command.
-
-The following table shows the available arguments:
-
-.. list-table::
-   :widths: 10 25 65
-   :header-rows: 1  
-   
-   * - Parameter
-     - Default Value
-     - Description
-   * - ``--web--host``
-     - 8443
-     - 
-   * - ``--web-port``
-     - 8080
-     - 
-   * - ``--web-ssl-port``
-     - 8443
-     - 
-   * - ``--web-ssl-key-path``
-     - None
-     - The path of the SSL Key PEM file for enabling https. Leave empty to disable.
-   * - ``--web-ssl-cert-path``
-     - None
-     - The path of the SSL Certificate PEM file for enabling https. Leave empty to disable.
-   * - ``--debug-sqream (flag)``
-     - false
-     - 
-   * - ``--host``
-     - 127.0.0.1
-     - 
-   * - ``--port``
-     - 3108
-     - 
-   * - ``is-cluster (flag)``
-     - true
-     - 
-   * - ``--service``
-     - sqream
-     - 
-   * - ``--ssl (flag)``
-     - false
-     - Enables the SQream SSL connection.
-   * - ``--name``
-     - default
-     - 
-   * - ``--data-collector-url``
-     - localhost:8100/api/dashboard/data
-     - Enables the Dashboard. Leaving this blank disables the Dashboard. Using a mock URL uses mock data.
-   * - ``--cluster-type``
-     - standalone (``standalone`` or ``k8s``)
-     - 
-   * - ``--config-location``
-     - ./sqream-admin-config.json
-     - 
-   * - ``--network-timeout``
-     - 60000 (60 seconds)
-     - 
-   * - ``--access-key``
-     - None
-     - If defined, UI access is blocked unless ``?ui-access=`` is included in the URL.
-	 
-Back to :ref:`Installing Studio in a Docker Container`
-
-  ::	 
-
-Back to :ref:`Installing Studio on a Stand-Alone Server`
diff --git a/installation_guides/running_sqream_in_a_docker_container.rst b/installation_guides/running_sqream_in_a_docker_container.rst
index 040223936..2a2454164 100644
--- a/installation_guides/running_sqream_in_a_docker_container.rst
+++ b/installation_guides/running_sqream_in_a_docker_container.rst
@@ -1,11 +1,9 @@
 .. _running_sqream_in_a_docker_container:
 
-
-
 ***********************
 Installing and Running SQream in a Docker Container
 ***********************
-The **Running SQream in a Docker Container** page describes how to prepare your machine's environment for installing and running SQream in a Docker container.
+The **Installing and Running SQream in a Docker Container** page describes how to prepare your machine's environment for installing and running SQream in a Docker container.
 
 This page describes the following:
 
@@ -36,19 +34,19 @@ To run SQream in a Docker container you must create a local user.
 
 1. Add a local user:
 
-   .. code-block:: console
+   .. code-block::
      
       $ useradd -m -U 
 
 2. Set the local user's password:
 
-   .. code-block:: console
+   .. code-block::
      
       $ passwd 
 
 3. Add the local user to the ``wheel`` group:
 
-   .. code-block:: console
+   .. code-block::
      
       $ usermod -aG wheel 
 
@@ -64,13 +62,13 @@ After creating a local user you must set a local language.
 
 1. Set the local language:
 
-   .. code-block:: console
+   .. code-block::
      
       $ sudo localectl set-locale LANG=en_US.UTF-8
 
 2. Set the local time zone:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo timedatectl set-timezone Asia/Jerusalem
 
@@ -86,13 +84,13 @@ After setting a local language you must add the EPEL repository.
 
    1. RedHat (RHEL 7):
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
       
    2. CentOS 7
     
-   .. code-block:: console
+   .. code-block::
 
       $ sudo yum install epel-release
 
@@ -102,9 +100,9 @@ After adding the EPEL repository, you must install the required NTP packages.
 
 You can install the required NTP packages by running the following command:
 
-.. code-block:: console
+.. code-block::
 
-   $ sudo yum install ntp  pciutils python36 kernel-devel-$(uname -r) kernel-headers-$(uname -r) 	gcc
+   $ sudo yum install ntp  pciutils python36 kernel-devel-$(uname -r) kernel-headers-$(uname -r) gcc
 
 Installing the Recommended Tools
 --------------------------------
@@ -112,7 +110,7 @@ After installing the required NTP packages you must install the recommended tool
 
 SQream recommends installing the following tools:
 
-.. code-block:: console
+.. code-block::
 
    $ sudo yum install bash-completion.noarch  vim-enhanced.x86_64 vim-common.x86_64 net-tools iotop htop psmisc screen xfsprogs wget yum-utils deltarpm dos2unix
 
@@ -136,7 +134,7 @@ After updating to the current version of the operating system you must configure
 
 2. Configure the **ntpd** service to begin running when your machine is started:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl enable ntpd
       $ sudo systemctl start ntpd
@@ -148,15 +146,15 @@ After configuring the NTP package you must configure the performance profile.
 
 **To configure the performance profile:**
 
-1. Switch the active profile:
+1. *Optional* - Switch the active profile:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo tuned-adm profile throughput-performance 
 
 2. Change the multi-user's default run level:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl set-default multi-user.target
 
@@ -168,19 +166,19 @@ After configuring the performance profile you must configure your security limit
 
 1. Run the **bash** shell as a super-user: 
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo bash
 
 2. Run the following command:
 
-   .. code-block:: console
+   .. code-block::
 
       $ echo -e "sqream soft nproc 500000\nsqream hard nproc 500000\nsqream soft nofile 500000\nsqream hard nofile 500000\nsqream soft core unlimited\nsqream hard core unlimited" >> /etc/security/limits.conf
 
 3. Run the following command:
 
-   .. code-block:: console
+   .. code-block::
 
       $ echo -e "vm.dirty_background_ratio = 5 \n vm.dirty_ratio = 10 \n vm.swappiness = 10 \n vm.zone_reclaim_mode = 0 \n vm.vfs_cache_pressure = 200 \n"  >> /etc/sysctl.conf
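The one-line ``echo -e`` commands above are easy to mistype. As a sketch (same six limits entries; ``emit_limits`` is a hypothetical helper, not part of the SQream installer), the ``limits.conf`` entries can equivalently be generated with a loop:

```shell
# Emit the same six sqream limits entries as the echo -e command above,
# one "sqream <soft|hard> <item> <value>" entry per line.
emit_limits() {
  for item in "nproc 500000" "nofile 500000" "core unlimited"; do
    for type in soft hard; do
      echo "sqream $type $item"
    done
  done
}

emit_limits   # append with: emit_limits | sudo tee -a /etc/security/limits.conf
```

The loop form makes it easier to spot a missing entry than the single long ``echo -e`` string.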
 
@@ -196,7 +194,7 @@ After configuring your security limits you must disable the following automatic
 
 You can disable the above bug-reporting tools by running the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ for i in abrt-ccpp.service abrtd.service abrt-oops.service abrt-pstoreoops.service abrt-vmcore.service abrt-xorg.service ; do sudo systemctl disable $i; sudo systemctl stop $i; done
    
@@ -205,7 +203,7 @@ Installing the Nvidia CUDA Driver
 
 1. Verify that the Tesla NVIDIA card has been installed and is detected by the system:
 
-   .. code-block:: console
+   .. code-block::
 
       $ lspci | grep -i nvidia
 
@@ -213,7 +211,7 @@ Installing the Nvidia CUDA Driver
 
 #. Verify that the open-source upstream Nvidia driver is running:
 
-   .. code-block:: console
+   .. code-block::
 
       $ lsmod | grep nouveau
 
@@ -223,7 +221,7 @@ Installing the Nvidia CUDA Driver
 
    1. Disable the open-source upstream Nvidia driver:
 
-      .. code-block:: console
+      .. code-block::
 
          $ sudo bash
          $ echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
@@ -233,24 +231,24 @@ Installing the Nvidia CUDA Driver
     
    2. Reboot the server and verify that the Nouveau module has not been loaded:
 
-      .. code-block:: console
+      .. code-block::
 
          $ lsmod | grep nouveau
 	 
 #. Check if the Nvidia CUDA driver has already been installed:
 
-   .. code-block:: console
+   .. code-block::
 
       $ nvidia-smi
 
    The following is an example of the correct output:
 
-   .. code-block:: console
+   .. code-block::
 
       nvidia-smi
       Wed Oct 30 14:05:42 2019
       +-----------------------------------------------------------------------------+
-      | NVIDIA-SMI 418.87.00    Driver Version: 418.87.00    CUDA Version: 10.1     |
+      | NVIDIA-SMI 470.82.01    Driver Version: 470.82.01    CUDA Version: 10.1     |
       |-------------------------------+----------------------+----------------------+
       | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
       | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
@@ -312,14 +310,13 @@ For installer type, SQream recommends selecting **runfile (local)**. The availab
 
 2. Download the base installer for Linux CentOS 7 x86_64:
 
-   .. code-block:: console
+   .. code-block::
 
       wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda-repo-rhel7-10-1-local-10.1.243-418.87.00-1.0-1.x86_64.rpm
 
-
 3. Install the base installer for Linux CentOS 7 x86_64 by running the following commands:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo yum localinstall cuda-repo-rhel7-10-1-local-10.1.243-418.87.00-1.0-1.x86_64.rpm
       $ sudo yum clean all
@@ -335,31 +332,27 @@ For installer type, SQream recommends selecting **runfile (local)**. The availab
 
 5. Enable the Nvidia service to start at boot and start it:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl enable nvidia-persistenced.service && sudo systemctl start nvidia-persistenced.service
 
-6. Create a symbolic link from the **/etc/systemd/system/multi-user.target.wants/nvidia-persistenced.service** file to the **/usr/lib/systemd/system/nvidia-persistenced.service** file.
-
-    ::
-
 7. Reboot the server.
 
     ::
 8. Verify that the Nvidia driver has been installed and that it shows all available GPUs:
 
-   .. code-block:: console
+   .. code-block::
 
       $ nvidia-smi
 	  
    The following is the correct output:
 
-   .. code-block:: console
+   .. code-block::
       
       nvidia-smi
       Wed Oct 30 14:05:42 2019
       +-----------------------------------------------------------------------------+
-      | NVIDIA-SMI 418.87.00    Driver Version: 418.87.00    CUDA Version: 10.1     |
+      | NVIDIA-SMI 470.82.01    Driver Version: 470.82.01    CUDA Version: 10.1     |
       |-------------------------------+----------------------+----------------------+
       | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
       | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
@@ -387,14 +380,14 @@ Installing the CUDA Driver Version 10.1 for IBM Power9
 
 1. Download the base installer for Linux CentOS 7 PPC64le:
 
-   .. code-block:: console
+   .. code-block::
 
       wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda-repo-rhel7-10-1-local-10.1.243-418.87.00-1.0-1.ppc64le.rpm
 
 
 #. Install the base installer for Linux CentOS 7 PPC64le by running the following commands:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo rpm -i cuda-repo-rhel7-10-1-local-10.1.243-418.87.00-1.0-1.ppc64le.rpm
       $ sudo yum clean all
@@ -410,20 +403,16 @@ Installing the CUDA Driver Version 10.1 for IBM Power9
    
 4. If you are using RHEL 7 version (7.6 or later), comment out, remove, or change the hot-pluggable memory rule located in file copied to the **/etc/udev/rules.d** directory by running the following command:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo cp /lib/udev/rules.d/40-redhat.rules /etc/udev/rules.d 
       $ sudo sed -i 's/SUBSYSTEM!="memory",.*GOTO="memory_hotplug_end"/SUBSYSTEM=="*", GOTO="memory_hotplug_end"/' /etc/udev/rules.d/40-redhat.rules
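As an illustration of what the ``sed`` expression above does (the sample rule line below is invented for the demo; the real ``40-redhat.rules`` content varies by RHEL release):

```shell
# A sample memory hot-plug rule in the style of 40-redhat.rules (illustrative)
rule='SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"'

# The same substitution the installation step applies to the copied file:
# it rewrites the rule so it matches every subsystem, effectively
# disabling the hot-pluggable memory handling.
echo "$rule" | sed 's/SUBSYSTEM!="memory",.*GOTO="memory_hotplug_end"/SUBSYSTEM=="*", GOTO="memory_hotplug_end"/'
```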
 
 #. Enable the **nvidia-persisted.service** file:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl enable nvidia-persistenced.service 
-
-#. Create a symbolic link from the **/etc/systemd/system/multi-user.target.wants/nvidia-persistenced.service** file to the **/usr/lib/systemd/system/nvidia-persistenced.service** file.
-
-    ::
    
 #. Reboot your system to initialize the above modifications.
 
@@ -431,13 +420,13 @@ Installing the CUDA Driver Version 10.1 for IBM Power9
    
 #. Verify that the Nvidia driver and the **nvidia-persistenced.service** files are running:
 
-   .. code-block:: console
+   .. code-block::
 
      $ nvidia-smi
 
    The following is the correct output:
 
-   .. code-block:: console       
+   .. code-block::       
 
       nvidia-smi
       Wed Oct 30 14:05:42 2019
@@ -463,13 +452,13 @@ Installing the CUDA Driver Version 10.1 for IBM Power9
 
 #. Verify that the **nvidia-persistenced** service is running:
 
-   .. code-block:: console
+   .. code-block::
 
       $ systemctl status nvidia-persistenced
 
    The following is the correct output:
 
-   .. code-block:: console
+   .. code-block::
 
       root@gpudb ~]systemctl status nvidia-persistenced
         nvidia-persistenced.service - NVIDIA Persistence Daemon
@@ -519,19 +508,15 @@ Installing the Docker Engine on an IBM Power9 Processor
 --------------------------------------------------------
 The IBM Power9 processor only supports installing the **Docker Community Edition (CE)** version 18.03.
 
-
 **To install the Docker Engine on an IBM Power9 processor:**
 
 You can install the Docker Engine on an IBM Power9 processor by running the following command:
 
-.. code-block:: console
-
-   $ wget http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/container-selinux-2.9-4.el7.noarch.rpm
-   $ wget http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/docker-ce-18.03.1.ce-1.el7.centos.ppc64le.rpm
-   $ yum install -y container-selinux-2.9-4.el7.noarch.rpm
-   $ docker-ce-18.03.1.ce-1.el7.centos.ppc64le.rpm
-
+.. code-block::
 
+   wget http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/container-selinux-2.9-4.el7.noarch.rpm
+   wget http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/docker-ce-18.03.1.ce-1.el7.centos.ppc64le.rpm
+   yum install -y container-selinux-2.9-4.el7.noarch.rpm docker-ce-18.03.1.ce-1.el7.centos.ppc64le.rpm
  
 For more information on installing the Docker Engine CE on an IBM Power9 processor, see `Install Docker Engine on Ubuntu `_.
 
@@ -543,13 +528,13 @@ After installing the Docker engine you must configure Docker on your local machi
 
 1. Enable Docker to start on boot:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl enable docker && sudo systemctl start docker
 	  
 2. Enable managing Docker as a non-root user:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo usermod -aG docker $USER
 
@@ -559,7 +544,7 @@ After installing the Docker engine you must configure Docker on your local machi
 
 4. Verify that you can run the following Docker command as a non-root user (without ``sudo``):
 
-   .. code-block:: console
+   .. code-block::
 
       $ docker run hello-world
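If the non-root ``docker run`` still fails with a permission error, a common cause is that the group change from step 2 has not reached the current session. A quick check (standard Docker troubleshooting, not SQream-specific; ``in_group`` is a hypothetical helper):

```shell
# in_group NAME: succeed if the current user's groups include NAME.
in_group() { id -nG | tr ' ' '\n' | grep -qx "$1"; }

if in_group docker; then
  echo "docker group active - docker commands should work without sudo"
else
  echo "log out and back in (or run 'newgrp docker') first"
fi
```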
 
@@ -576,8 +561,8 @@ After configuring Docker on your local machine you must install the Nvidia Docke
 
 This section describes the following:
 
-* :ref:`Installing the NVIDIA Docker2 Toolkit on an x86_64 processor. `
-* :ref:`Installing the NVIDIA Docker2 Toolkit on a PPC64le processor. `
+* :ref:`Installing the NVIDIA Docker2 Toolkit on an x86_64 processor `
+* :ref:`Installing the NVIDIA Docker2 Toolkit on a PPC64le processor `
 
 .. _install_nvidia_docker2_toolkit_x8664_processor:
 
@@ -599,16 +584,15 @@ Installing the NVIDIA Docker2 Toolkit on a CentOS Operating System
 
 1. Install the repository for your distribution:
 
-   .. code-block:: console
+   .. code-block::
 
-      $ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
-      $ curl -s -L
-      $ https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
-      $ sudo tee /etc/yum.repos.d/nvidia-docker.repo
+      distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+      curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
+      sudo tee /etc/yum.repos.d/nvidia-docker.repo
 
 2. Install the ``nvidia-docker2`` package and reload the Docker daemon configuration:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo yum install nvidia-docker2
       $ sudo pkill -SIGHUP dockerd
@@ -625,7 +609,7 @@ Installing the NVIDIA Docker2 Toolkit on a CentOS Operating System
     1. Run the ``sudo vi /etc/yum.repos.d/nvidia-docker.repo`` command if the following error is displayed when installing the ``nvidia-docker2`` package:
     
 
-       .. code-block:: console
+       .. code-block::
 
           https://nvidia.github.io/nvidia-docker/centos7/ppc64le/repodata/repomd.xml:
           [Errno -1] repomd.xml signature could not be verified for nvidia-docker
@@ -636,9 +620,9 @@ Installing the NVIDIA Docker2 Toolkit on a CentOS Operating System
 
 5. Verify that the NVIDIA-Docker run has been installed correctly:
 
-   .. code-block:: console
+   .. code-block::
 
-      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi
+      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1.3-base-centos7 nvidia-smi
 
 For more information on installing the NVIDIA Docker2 Toolkit on a CentOS operating system, see :ref:`Installing the NVIDIA Docker2 Toolkit on a CentOS operating system `
 
@@ -652,23 +636,22 @@ Installing the NVIDIA Docker2 Toolkit on an Ubuntu Operating System
 
 1. Install the repository for your distribution:
 
-   .. code-block:: console
+   .. code-block::
 
-      $ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
-      $ sudo apt-key add -
-      $ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
-      $ curl -s -L
-      $ https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
-      $ sudo tee /etc/apt/sources.list.d/nvidia-docker.list
-      $ sudo apt-get update
+      curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
+      distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+      curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
+      sudo apt-get update
 
 2. Install the ``nvidia-docker2`` package and reload the Docker daemon configuration:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo apt-get install nvidia-docker2
       $ sudo pkill -SIGHUP dockerd
+      
 3. Do one of the following:
+
    * If you received an error when installing the ``nvidia-docker2`` package, skip to :ref:`Step 4 `.
    * If you successfully installed the ``nvidia-docker2`` package, skip to :ref:`Step 5 `.
 
@@ -678,7 +661,7 @@ Installing the NVIDIA Docker2 Toolkit on an Ubuntu Operating System
 
     1. Run the ``sudo vi /etc/yum.repos.d/nvidia-docker.repo`` command if the following error is displayed when installing the ``nvidia-docker2`` package:
 
-       .. code-block:: console
+       .. code-block::
 
           https://nvidia.github.io/nvidia-docker/centos7/ppc64le/repodata/repomd.xml:
           [Errno -1] repomd.xml signature could not be verified for nvidia-docker
@@ -689,9 +672,9 @@ Installing the NVIDIA Docker2 Toolkit on an Ubuntu Operating System
 
 5. Verify that the NVIDIA-Docker run has been installed correctly:
 
-   .. code-block:: console
+   .. code-block::
 
-      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi
+      $ docker run --runtime=nvidia --rm nvidia/cuda:10.1.3-base-centos7 nvidia-smi
 
 For more information on installing the NVIDIA Docker2 Toolkit on an Ubuntu operating system, see :ref:`Installing the NVIDIA Docker2 Toolkit on an Ubuntu operating system `
 
@@ -706,7 +689,7 @@ This section describes how to install the NVIDIA Docker2 Toolkit on an IBM RHEL
 
 1. Import the repository and install the ``libnvidia-container`` and the ``nvidia-container-runtime`` containers.
 
-   .. code-block:: console
+   .. code-block::
 
       $ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
       $ curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
@@ -724,7 +707,7 @@ This section describes how to install the NVIDIA Docker2 Toolkit on an IBM RHEL
 
    1. Run the ``sudo vi /etc/yum.repos.d/nvidia-docker.repo`` command if the following error is displayed when installing the containers:
     
-      .. code-block:: console
+      .. code-block::
 
          https://nvidia.github.io/nvidia-docker/centos7/ppc64le/repodata/repomd.xml:
          [Errno -1] repomd.xml signature could not be verified for nvidia-docker
@@ -735,7 +718,7 @@ This section describes how to install the NVIDIA Docker2 Toolkit on an IBM RHEL
 		
    3. Install the ``libnvidia-container`` container.
     
-      .. code-block:: console
+      .. code-block::
 
          $ sudo yum install -y libnvidia-container*         
 
@@ -743,13 +726,13 @@ This section describes how to install the NVIDIA Docker2 Toolkit on an IBM RHEL
 
 4. Install the ``nvidia-container-runtime`` container:
 
-   .. code-block:: console
+   .. code-block::
        
       $ sudo yum install -y nvidia-container-runtime*
 
 5. Add ``nvidia runtime`` to the Docker daemon:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo mkdir -p /etc/systemd/system/docker.service.d/
       $ sudo vi /etc/systemd/system/docker.service.d/override.conf
@@ -760,14 +743,14 @@ This section describes how to install the NVIDIA Docker2 Toolkit on an IBM RHEL
 
 6. Restart Docker:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sudo systemctl daemon-reload
       $ sudo systemctl restart docker
 
 7. Verify that the NVIDIA-Docker run has been installed correctly:
 
-   .. code-block:: console
+   .. code-block::
       
       $ docker run --runtime=nvidia --rm nvidia/cuda-ppc64le nvidia-smi
 	  
@@ -795,8 +778,8 @@ For more information about the correct directory to copy the above files into, s
 
 For related information, see the following sections:
 
-* :ref:`Configuring the Hadoop and Kubernetes Configuration Files `.
-* :ref:`Setting the Hadoop and Kubernetes Configuration Parameters `.
+* :ref:`Configuring the Hadoop and Kubernetes Configuration Files `
+* :ref:`Setting the Hadoop and Kubernetes Configuration Parameters `
 
 .. _installing_sqream_software:
 
@@ -836,13 +819,13 @@ The **sqream_installer-nnn-DBnnn-COnnn-EDnnn-.tar.gz** file includes the f
 
 2. Extract the tarball file:
 
-   .. code-block:: console
+   .. code-block::
 
       $ tar -xvf sqream_installer-1.1.5-DB2019.2.1-CO1.5.4-ED3.0.0-x86_64.tar.gz
 
-When the tarball file has been extracted, a new folder will be created. The new folder is automatically given the name of the tarball file:
+   When the tarball file has been extracted, a new folder will be created. The new folder is automatically given the name of the tarball file:
 
-   .. code-block:: console
+   .. code-block::
 
      drwxrwxr-x 9 sqream sqream 4096 Aug 11 11:51 sqream_installer-1.1.5-DB2019.2.1-CO1.5.4-ED3.0.0-x86_64/
       -rw-rw-r-- 1 sqream sqream 3130398797 Aug 11 11:20 sqream_installer-1.1.5-DB2019.2.1-CO1.5.4-ED3.0.0-x86_64.tar.gz
@@ -853,13 +836,13 @@ When the tarball file has been extracted, a new folder will be created. The new
 
 4. Verify that the folder you just created contains all of the required files.
 
-   .. code-block:: console
+   .. code-block::
 
       $ ls -la
 
    The following is an example of the files included in the new folder:
 
-   .. code-block:: console
+   .. code-block::
 
       drwxrwxr-x. 10 sqream sqream   198 Jun  3 17:57 .
       drwx------. 25 sqream sqream  4096 Jun  7 18:11 ..
@@ -966,7 +949,7 @@ Installing Your License
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Once you've configured your local environment, you must install your license by copying it into the SQream installation package folder located in the **./license** folder:
 
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -k
 
@@ -977,19 +960,19 @@ Validating Your License
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 You can validate the license package located in the **/license** folder by running the following command:
    
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -K
 
 The following mandatory flags must be used in the first run:
    
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -i -k -v 
 
 The following is an example of the correct command syntax:
    
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -i -k -c /etc/sqream -v /home/sqream/sqreamdb -l /var/log/sqream -d /home/sqream/data_ingest
    
@@ -1001,13 +984,13 @@ The information in this section is optional, and is only relevant for Hadoop use
 
 The following is the correct syntax when setting the Hadoop and Kubernetes connectivity parameters:
 
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -p  -e  :
 
 The following is an example of setting the Hadoop and Kubernetes connectivity parameters:
 
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -p  -e  kdc.sq.com:<192.168.1.111>
    
@@ -1020,7 +1003,7 @@ Modifying Your Data Ingest Folder
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Once you've validated your license, you can modify your data ingest folder after the first run by running the following command:
    
-.. code-block:: console
+.. code-block::
 
    $ sudo ./sqream-install -d /home/sqream/data_in
 
@@ -1032,7 +1015,7 @@ Once you've modified your data ingest folder (if needed), you must validate that
 
 1. To verify that your server network and Docker network do not overlap, run the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ ifconfig | grep 172.
 
@@ -1041,7 +1024,7 @@ Once you've modified your data ingest folder (if needed), you must validate that
  * If the above command returns no results, continue the installation process.
  * If the above command returns results, run the following command:
 
-    .. code-block:: console
+    .. code-block::
 
        $ ifconfig | grep 192.168.
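The two ``grep`` checks above can be wrapped in a small helper so the decision is explicit (a sketch; ``check_overlap`` is a hypothetical function, the real steps are just the ``ifconfig | grep`` commands shown):

```shell
# check_overlap PREFIX: read interface addresses on stdin and report
# whether any address starts with PREFIX.
check_overlap() {
  if grep -q "inet $1"; then
    echo "overlap with $1 - Docker's default network must be changed"
  else
    echo "no overlap with $1"
  fi
}

# In practice: ifconfig | check_overlap '172.'
# Here, a captured sample line stands in for the ifconfig output:
printf 'inet 10.0.0.5 netmask 255.255.255.0\n' | check_overlap '172.'
```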
 
@@ -1052,13 +1035,13 @@ Once you've configured your network for Docker, you can check and verify your sy
 
 Running the following command shows you all the variables used by your SQream system:
 
-.. code-block:: console
+.. code-block::
 
    $ ./sqream-install -s
 
 The following is an example of the correct output:
 
-.. code-block:: console
+.. code-block::
 
    SQREAM_CONSOLE_TAG=1.5.4
    SQREAM_TAG=2019.2.1
@@ -1108,7 +1091,7 @@ Starting Your SQream Console
 
 You can start your SQream console by running the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ ./sqream-console
 
@@ -1121,13 +1104,13 @@ Starting the SQream Master
 
 1. Start the metadata server (default port 3105) and picker (default port 3108) by running the following command:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sqream master --start
       
    The following is the correct output:
 
-   .. code-block:: console
+   .. code-block::
 
       sqream-console> sqream master --start
       starting master server in single_host mode ...
@@ -1136,7 +1119,7 @@ Starting the SQream Master
 
 2. *Optional* - Change the metadata and server picker ports by adding ``-p `` and ``-m ``:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sqream-console>sqream master --start -p 4105 -m 43108
       $ starting master server in single_host mode ...
@@ -1153,13 +1136,13 @@ Starting SQream Workers
 When starting SQream workers, the ```` value sets how many workers to start. Leaving it unspecified uses all of the available resources.
 
 
-.. code-block:: console
+.. code-block::
 
    $ sqream worker --start  
 
    The following is an example of expected output when setting the ```` value to ``2``:
 
-   .. code-block:: console
+   .. code-block::
 
       sqream-console>sqream worker --start 2
       started sqream_single_host_worker_0 on port 5000, allocated gpu: 0
@@ -1173,13 +1156,13 @@ Listing the Running Services
 
 You can list running SQream services to look for container names and IDs by running the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream master --list
 
 The following is an example of the expected output:
 
-.. code-block:: console
+.. code-block::
 
    sqream-console>sqream master --list
    container name: sqream_single_host_worker_0, container id: c919e8fb78c8
@@ -1195,13 +1178,13 @@ You can stop running services either for a single SQream worker, or all SQream s
 
 The following is the command for stopping a running service for a single SQream worker:
 
-.. code-block:: console
+.. code-block::
      
    $ sqream worker --stop 
 
 The following is an example of expected output when stopping a running service for a single SQream worker:
 
-.. code-block:: console
+.. code-block::
 
    sqream worker stop 
    stopped container sqream_single_host_worker_0, id: 892a8f1a58c5
@@ -1209,13 +1192,13 @@ The following is an example of expected output when stopping a running service f
 
 You can stop all running SQream services (both master and worker) by running the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream master --stop --all
 
 The following is an example of expected output when stopping all running services:
 
-.. code-block:: console
+.. code-block::
 
    sqream-console>sqream master --stop --all
    stopped container sqream_single_host_worker_0, id: 892a8f1a58c5
@@ -1232,13 +1215,13 @@ SQream Studio is an SQL statement editor.
 
 1. Run the following command:
 
-   .. code-block:: console
+   .. code-block::
 
       $ sqream studio --start
 
 The following is an example of the expected output:
 
-   .. code-block:: console
+   .. code-block::
 
       SQream Acceleration Studio is available at http://192.168.1.62:8080
 
@@ -1249,13 +1232,13 @@ The following is an example of the expected output:
 
 You can stop your SQream Studio by running the following command:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream studio --stop
 
 The following is an example of the expected output:
 
-.. code-block:: console
+.. code-block::
 
    sqream_admin    stopped
 
@@ -1264,8 +1247,6 @@ The following is an example of the expected output:
 
 Using the SQream Client
 ~~~~~~~~~~~~~~~~~~~~~~~
-
-
 You can use the embedded SQream Client on the following nodes:
 
 * Master node
@@ -1279,7 +1260,7 @@ When using the SQream Client on the Master node, the following default settings
 
 The following is an example:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream client --master -u sqream -w sqream
 
@@ -1288,7 +1269,7 @@ When using the SQream Client on a Worker node (or nodes), you should use the ``-
 
 The following is an example:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream client --worker -p 5000 -u sqream -w sqream
 
@@ -1322,7 +1303,7 @@ From the console you can define a spool size value.
 
 The following example shows the spool size being set to ``50``:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream worker --start 2 -m 50
 
@@ -1339,7 +1320,7 @@ You can start more than one sqreamd on a single GPU by splitting it.
 
 The following example shows the GPU being split into **two** sqreamd's on the GPU in **slot 0**:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream worker --start 2 -g 0
 
@@ -1350,7 +1331,7 @@ Splitting GPU and Setting the Spool Size
 
 You can simultaneously split a GPU and set the spool size by appending the ``-m`` flag:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream worker --start 2 -g 0 -m 50
 
@@ -1367,19 +1348,19 @@ The SQream console does not validate the integrity of your external configuratio
 
 When using your custom configuration file, you can use the ``-j`` flag to define the full path to the Configuration file, as in the example below: 
 
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream worker --start 1 -j /etc/sqream/configfile.json
 
 .. note:: To start more than one sqream daemon, you must provide files for each daemon, as in the example below:
 
-.. code-block:: console
+.. code-block::
 
    $ sqream worker --start 2 -j /etc/sqream/configfile.json /etc/sqream/configfile2.json
 
 .. note:: To split a specific GPU, you must also list the GPU flag, as in the example below:
    
-.. code-block:: console
+.. code-block::
 
    $ sqream worker --start 2 -g 0 -j /etc/sqream/configfile.json /etc/sqream/configfile2.json
 
@@ -1390,7 +1371,7 @@ Clustering Your Docker Environment
 
 SQream lets you connect to a remote Master node to start Docker in Distributed mode. If you have already connected to a Slave node server in Distributed mode, the **sqream Master** and **Client** commands are only available on the Master node.
    
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream worker --start 1 --master-host 192.168.0.120
@@ -1409,13 +1390,13 @@ Checking the Status of SQream Services from the SQream Console
 
 From the SQream console, you can check the status of SQream services by running the following command:
    
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream master --list
 
 The following is an example of the expected output:
    
-.. code-block:: console
+.. code-block::
 
    $ sqream-console>sqream master --list
    $ checking 3 sqream services:
@@ -1430,7 +1411,7 @@ Checking the Status of SQream Services from Outside the SQream Console
 
 From outside the Sqream Console, you can check the status of SQream services by running the following commands:
  
-.. code-block:: console
+.. code-block::
      
    $ sqream-status
    $ NAMES STATUS PORTS
@@ -1456,19 +1437,19 @@ This section describes how to upgrade your SQream system.
 3. Extract the tarball file received from the SQream Support team, using the same user and the same folder that you used while :ref:`downloading the SQream software <download_sqream_software>`.
 
  
-   .. code-block:: console
+   .. code-block::
      
       $ tar -xvf sqream_installer-2.0.5-DB2019.2.1-CO1.6.3-ED3.0.0-x86_64/
 
 4. Navigate to the new folder created as a result of extracting the tarball file:
 
-   .. code-block:: console
+   .. code-block::
      
       $ cd sqream_installer-2.0.5-DB2019.2.1-CO1.6.3-ED3.0.0-x86_64/
 
 5. Initiate the upgrade process:
 
-   .. code-block:: console
+   .. code-block::
    
       $ ./sqream-install -i
 
diff --git a/installation_guides/sqream_studio_installation.rst b/installation_guides/sqream_studio_installation.rst
index 8d6c16546..53c89772d 100644
--- a/installation_guides/sqream_studio_installation.rst
+++ b/installation_guides/sqream_studio_installation.rst
@@ -9,11 +9,8 @@ The **Installing SQream Studio** page includes the following installation guides:
    :maxdepth: 1
    :glob:
 
-   installing_studio_on_stand_alone_server
    installing_prometheus_exporters
    installing_prometheus_using_binary_packages
    installing_dashboard_data_collector
-
-
-
-
+   installing_studio_on_stand_alone_server
+   installing_nginx_proxy_over_secure_connection
\ No newline at end of file
diff --git a/loading_and_unloading_data/index.rst b/loading_and_unloading_data/index.rst
new file mode 100644
index 000000000..bc515f023
--- /dev/null
+++ b/loading_and_unloading_data/index.rst
@@ -0,0 +1,36 @@
+.. _loading_and_unloading_data:
+
+**************************
+Loading and Unloading Data
+**************************
+The **Loading Data** section describes concepts and operations related to importing data into your SQream database:
+
+* `Overview of loading data `_ - Describes best practices and considerations for loading data into SQream from a variety of sources and locations.
+
+* `Alternatives to loading data (foreign tables) `_ - Useful for running queries directly on external data without importing it into your SQream database.
+
+* `Supported data types `_ - Overview of supported data types, including descriptions, examples, and relevant aliases.
+   
+* `Ingesting data from external sources `_ - List of data ingestion sources that SQream supports.
+
+* `Inserting data from external tables `_ - Inserts one or more rows into a table.
+
+* `Ingesting data from third-party client platforms `_ - Gives you direct access to a variety of drivers, connectors, tools, visualizers, and utilities.
+
+* `Using the COPY FROM statement `_ - Used for loading data from files located on a filesystem into SQream tables. 
+   
+* `Importing data using Studio `_ - SQream's web-based client providing users with all functionality available from the command line in an intuitive and easy-to-use format.
+
+* `Loading data using Amazon S3 `_ - Used for loading data from Amazon S3.
+
+* Troubleshooting - Describes troubleshooting solutions related to importing data from the following:
+
+  * `SAS Viya `_
+
+  * `Tableau `_
+  
+The **Unloading Data** section describes concepts and operations related to exporting data from your SQream database:
+
+* `Overview of unloading data `_ - Describes best practices and considerations for unloading data from SQream to a variety of sources and locations.
+
+* `The COPY TO statement `_ - Used for unloading data from a SQream database table or query to a file on a filesystem.
\ No newline at end of file
diff --git a/login_5.3.1.png b/login_5.3.1.png
deleted file mode 100644
index 48c725a4c..000000000
Binary files a/login_5.3.1.png and /dev/null differ
diff --git a/operational_guides/access_control.rst b/operational_guides/access_control.rst
index 7f92f8eaf..9a1ce9f17 100644
--- a/operational_guides/access_control.rst
+++ b/operational_guides/access_control.rst
@@ -4,594 +4,12 @@
 Access Control
 **************
 
-.. contents:: In this topic:
-   :local:
-
-Overview
-==========
-
-Access control provides authentication and authorization in SQream DB. 
-
-SQream DB manages authentication and authorization using a role-based access control system (RBAC), like ANSI SQL and other SQL products.
-
-SQream DB has a default permissions system which is inspired by Postgres, but with more power. In most cases, this allows an administrator to set things up so that every object gets permissions set automatically.
-
-In SQream DB, users log in from any worker which verifies their roles and permissions from the metadata server. Each statement issues commands as the currently logged in role.
-
-Roles are defined at the cluster level, meaning they are valid for all databases in the cluster.
-
-To bootstrap SQream DB, a new install will always have one ``SUPERUSER`` role, typically named ``sqream``. To create more roles, you should first connect as this role.
-
-
-Terminology
-================
-
-Roles
-----------
-
-:term:`Role` : a role can be a user, a group, or both.
-
-Roles can own database objects (e.g. tables), and can assign permissions on those objects to other roles.
-
-Roles can be members of other roles, meaning a user role can inherit permissions from its parent role.
-
-Authentication
---------------------
-
-:term:`Authentication` : verifying the identity of the role. User roles have usernames (:term:`role names`) and passwords.
-
-
-Authorization
-----------------
-
-:term:`Authorization` : checking the role has permissions to do a particular thing. The :ref:`grant` command is used for this.
-
-
-Roles
-=====
-
-Roles are used for both users and groups.
-
-Roles are global across all databases in the SQream DB cluster.
-
-To use a ``ROLE`` as a user, it should have a password, the login permission, and connect permissions to the relevant databases.
-
-Creating new roles (users)
-------------------------------
-
-A user role can log in to the database, so it should have ``LOGIN`` permissions, as well as a password.
-
-For example:
-
-.. code-block:: postgres
-                
-   CREATE ROLE role_name ;
-   GRANT LOGIN to role_name ;
-   GRANT PASSWORD 'new_password' to role_name ;
-   GRANT CONNECT ON DATABASE database_name to role_name ;
-
-Examples:
-
-.. code-block:: postgres
-
-   CREATE  ROLE  new_role_name  ;  
-   GRANT  LOGIN  TO  new_role_name;  
-   GRANT  PASSWORD  'my_password'  TO  new_role_name;  
-   GRANT  CONNECT  ON  DATABASE  master  TO  new_role_name;
-
-A database role may have a number of permissions that define what tasks it can perform. These are assigned using the :ref:`grant` command.
-
-Dropping a user
----------------
-
-.. code-block:: postgres
-
-   DROP ROLE role_name ;
-
-Examples:
-
-.. code-block:: postgres
-
-   DROP ROLE  admin_role ;
-
-Altering a user name
-------------------------
-
-Renaming a user's role:
-
-.. code-block:: postgres
-
-   ALTER ROLE role_name RENAME TO new_role_name ;
-
-Examples:
-
-.. code-block:: postgres
-
-   ALTER ROLE  admin_role  RENAME  TO  copy_role ;
-
-.. _change_password:
-
-Changing user passwords
---------------------------
-
-To change a user role's password, grant the user a new password.
-
-.. code-block:: postgres
-
-   GRANT  PASSWORD  'new_password'  TO  rhendricks;  
-
-.. note:: Granting a new password overrides any previous password. Changing the password while the role has an active running statement does not affect that statement, but will affect subsequent statements.
-
-Public Role
------------
-
-There is a public role which always exists. Each role is granted to the ``PUBLIC`` role (i.e. is a member of the public group), and this cannot be revoked. You can alter the permissions granted to the public role.
-
-The ``PUBLIC`` role has ``USAGE`` and ``CREATE`` permissions on ``PUBLIC`` schema by default, therefore, new users can create, :ref:`insert`, :ref:`delete`, and :ref:`select` from objects in the ``PUBLIC`` schema.
-
-
-Role membership (groups)
--------------------------
-
-Many database administrators find it useful to group user roles together. By grouping users, permissions can be granted to, or revoked from a group with one command. In SQream DB, this is done by creating a group role, granting permissions to it, and then assigning users to that group role.
-
-To use a role purely as a group, omit granting it ``LOGIN`` and ``PASSWORD`` permissions.
-
-The ``CONNECT`` permission can be given directly to user roles, and/or to the groups they are part of.
-
-.. code-block:: postgres
-
-   CREATE ROLE my_group;
-
-Once the group role exists, you can add user roles (members) using the ``GRANT`` command. For example:
-
-.. code-block:: postgres
-
-   -- Add my_user to this group
-   GRANT my_group TO my_user;
-
-
-To manage object permissions like databases and tables, you would then grant permissions to the group-level role (see :ref:`the permissions table` below.
-
-All member roles then inherit the permissions from the group. For example:
-
-.. code-block:: postgres
-
-   -- Grant all group users connect permissions
-   GRANT  CONNECT  ON  DATABASE  a_database  TO  my_group;
-   
-   -- Grant all permissions on tables in public schema
-   GRANT  ALL  ON  all  tables  IN  schema  public  TO  my_group;
-
-Removing users and permissions can be done with the ``REVOKE`` command:
-
-.. code-block:: postgres
-
-   -- remove my_other_user from this group
-   REVOKE my_group FROM my_other_user;
-
-.. _permissions_table:
-
-Permissions
-===========
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-
-   * - Object/layer
-     - Permission
-     - Description
-
-   * - all databases
-     - ``LOGIN``
-     - use role to log into the system (the role also needs connect permission on the database it is connecting to)
-
-   * - all databases
-     - ``PASSWORD``
-     - the password used for logging into the system
-
-   * - all databases
-     - ``SUPERUSER``
-     - no permission restrictions on any activity
-
-   * - database
-     - ``SUPERUSER``
-     - no permission restrictions on any activity within that database (this does not include modifying roles or permissions)
-
-   * - database
-     - ``CONNECT``
-     - connect to the database
-
-   * - database
-     - ``CREATE``
-     - create schemas in the database
-
-   * - database
-     - ``CREATE FUNCTION``
-     - create and drop functions
-     
-   * - schema
-     - ``USAGE``
-     - allows additional permissions within the schema
-
-   * - schema
-     - ``CREATE``
-     - create tables in the schema
-
-   * - table
-     - ``SELECT``
-     - :ref:`select` from the table
-
-   * - table
-     - ``INSERT``
-     - :ref:`insert` into the table
-
-   * - table
-     - ``DELETE``
-     - :ref:`delete` and :ref:`truncate` on the table
-
-   * - table
-     - ``DDL``
-     - drop and alter on the table
-
-   * - table
-     - ``ALL``
-     - all the table permissions
-
-   * - function
-     - ``EXECUTE``
-     - use the function
-
-   * - function
-     - ``DDL``
-     - drop and alter on the function
-
-   * - function
-     - ``ALL``
-     - all function permissions
-
-GRANT
------
-
-:ref:`grant` gives permissions to a role.
-
-.. code-block:: postgres
-
-   -- Grant permissions at the instance/ storage cluster level:
-   GRANT 
-
-   { SUPERUSER
-   | LOGIN 
-   | PASSWORD '' 
-   } 
-   TO  [, ...] 
-
-   -- Grant permissions at the database level:
-        GRANT {{CREATE | CONNECT| DDL | SUPERUSER | CREATE FUNCTION} [, ...] | ALL [PERMISSIONS]}
-
-   ON DATABASE  [, ...]
-   TO  [, ...] 
-
-   -- Grant permissions at the schema level: 
-   GRANT {{ CREATE | DDL | USAGE | SUPERUSER } [, ...] | ALL [ 
-   PERMISSIONS ]} 
-   ON SCHEMA  [, ...] 
-   TO  [, ...] 
-       
-   -- Grant permissions at the object level: 
-   GRANT {{SELECT | INSERT | DELETE | DDL } [, ...] | ALL [PERMISSIONS]} 
-   ON { TABLE  [, ...] | ALL TABLES IN SCHEMA  [, ...]} 
-   TO  [, ...]
-       
-   -- Grant execute function permission: 
-   GRANT {ALL | EXECUTE | DDL} ON FUNCTION function_name 
-   TO role; 
-       
-   -- Allows role2 to use permissions granted to role1
-   GRANT  [, ...] 
-   TO  
-
-    -- Also allows the role2 to grant role1 to other roles:
-   GRANT  [, ...] 
-   TO  
-   WITH ADMIN OPTION
-  
-``GRANT`` examples:
-
-.. code-block:: postgres
-
-   GRANT  LOGIN,superuser  TO  admin;
-
-   GRANT  CREATE  FUNCTION  ON  database  master  TO  admin;
-
-   GRANT  SELECT  ON  TABLE  admin.table1  TO  userA;
-
-   GRANT  EXECUTE  ON  FUNCTION  my_function  TO  userA;
-
-   GRANT  ALL  ON  FUNCTION  my_function  TO  userA;
-
-   GRANT  DDL  ON  admin.main_table  TO  userB;
-
-   GRANT  ALL  ON  all  tables  IN  schema  public  TO  userB;
-
-   GRANT  admin  TO  userC;
-
-   GRANT  superuser  ON  schema  demo  TO  userA
-
-   GRANT  admin_role  TO  userB;
-
-REVOKE
-------
-
-:ref:`revoke` removes permissions from a role.
-
-.. code-block:: postgres
-
-   -- Revoke permissions at the instance/ storage cluster level:
-   REVOKE
-   { SUPERUSER
-   | LOGIN
-   | PASSWORD
-   }
-   FROM  [, ...]
-            
-   -- Revoke permissions at the database level:
-   REVOKE {{CREATE | CONNECT | DDL | SUPERUSER | CREATE FUNCTION}[, ...] |ALL [PERMISSIONS]}
-   ON DATABASE  [, ...]
-   FROM  [, ...]
-
-   -- Revoke permissions at the schema level:
-   REVOKE { { CREATE | DDL | USAGE | SUPERUSER } [, ...] | ALL [PERMISSIONS]}
-   ON SCHEMA  [, ...]
-   FROM  [, ...]
-            
-   -- Revoke permissions at the object level:
-   REVOKE { { SELECT | INSERT | DELETE | DDL } [, ...] | ALL }
-   ON { [ TABLE ]  [, ...] | ALL TABLES IN SCHEMA
-
-          [, ...] }
-   FROM  [, ...]
-            
-   -- Removes access to permissions in role1 by role 2
-   REVOKE  [, ...] FROM  [, ...] WITH ADMIN OPTION
-
-   -- Removes permissions to grant role1 to additional roles from role2
-   REVOKE  [, ...] FROM  [, ...] WITH ADMIN OPTION
-
-
-Examples:
-
-.. code-block:: postgres
-
-   REVOKE  superuser  on  schema  demo  from  userA;
-
-   REVOKE  delete  on  admin.table1  from  userB;
-
-   REVOKE  login  from  role_test;
-
-   REVOKE  CREATE  FUNCTION  FROM  admin;
-
-Default permissions
--------------------
-
-The default permissions system (See :ref:`alter_default_permissions`) 
-can be used to automatically grant permissions to newly 
-created objects (See the departmental example below for one way it can be used).
-
-A default permissions rule looks for a schema being created, or a
-table (possibly by schema), and is table to grant any permission to
-that object to any role. This happens when the create table or create
-schema statement is run.
-
-
-.. code-block:: postgres
-
-
-   ALTER DEFAULT PERMISSIONS FOR target_role_name
-        [IN schema_name, ...]
-        FOR { TABLES | SCHEMAS }
-        { grant_clause | DROP grant_clause}
-        TO ROLE { role_name | public };
-
-   grant_clause ::=
-     GRANT
-        { CREATE FUNCTION
-        | SUPERUSER
-        | CONNECT
-        | CREATE
-        | USAGE
-        | SELECT
-        | INSERT
-        | DELETE
-        | DDL
-        | EXECUTE
-        | ALL
-        }
-
-
-Departmental Example
-=======================
-
-You work in a company with several departments.
-
-The example below shows you how to manage permissions in a database shared by multiple departments, where each department has different roles for the tables by schema. It walks you through how to set the permissions up for existing objects and how to set up default permissions rules to cover newly created objects.
-
-The concept is that you set up roles for each new schema with the correct permissions, then the existing users can use these roles. 
-
-A superuser must do new setup for each new schema which is a limitation, but superuser permissions are not needed at any other time, and neither are explicit grant statements or object ownership changes.
-
-In the example, the database is called ``my_database``, and the new or existing schema being set up to be managed in this way is called ``my_schema``.
-
-.. figure:: /_static/images/access_control_department_example.png
-   :scale: 60 %
-   
-   Our departmental example has four user group roles and seven users roles
-
-There will be a group for this schema for each of the following:
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-
-   * - Group
-     - Activities
-
-   * - database designers
-     - create, alter and drop tables
-
-   * - updaters
-     - insert and delete data
-
-   * - readers
-     - read data
-
-   * - security officers
-     - add and remove users from these groups
-
-Setting up the department permissions
-------------------------------------------
-
-As a superuser, you connect to the system and run the following:
-
-.. code-block:: postgres
-
-   -- create the groups
-
-   CREATE ROLE my_schema_security_officers;
-   CREATE ROLE my_schema_database_designers;
-   CREATE ROLE my_schema_updaters;
-   CREATE ROLE my_schema_readers;
-
-   -- grant permissions for each role
-   -- we grant permissions for existing objects here too, 
-   -- so you don't have to start with an empty schema
-
-   -- security officers
-
-   GRANT connect ON DATABASE my_database TO my_schema_security_officers;
-   GRANT usage ON SCHEMA my_schema TO my_schema_security_officers;
-
-   GRANT my_schema_database_designers TO my_schema_security_officers WITH ADMIN OPTION;
-   GRANT my_schema_updaters TO my_schema_security_officers WITH ADMIN OPTION;
-   GRANT my_schema_readers TO my_schema_security_officers WITH ADMIN OPTION;
-
-   -- database designers
-
-   GRANT connect ON DATABASE my_database TO my_schema_database_designers;
-   GRANT usage ON SCHEMA my_schema TO my_schema_database_designers;
-
-   GRANT create,ddl ON SCHEMA my_schema TO my_schema_database_designers;
-
-   -- updaters
-
-   GRANT connect ON DATABASE my_database TO my_schema_updaters;
-   GRANT usage ON SCHEMA my_schema TO my_schema_updaters;
-
-   GRANT SELECT,INSERT,DELETE ON ALL TABLES IN SCHEMA my_schema TO my_schema_updaters;
-
-   -- readers
-
-   GRANT connect ON DATABASE my_database TO my_schema_readers;
-   GRANT usage ON SCHEMA my_schema TO my_schema_readers;
-
-   GRANT SELECT ON ALL TABLES IN SCHEMA my_schema TO my_schema_readers;
-   GRANT EXECUTE ON ALL FUNCTIONS TO my_schema_readers;
-
-
-   -- create the default permissions for new objects
-
-   ALTER DEFAULT PERMISSIONS FOR my_schema_database_designers IN my_schema
-    FOR TABLES GRANT SELECT,INSERT,DELETE TO my_schema_updaters;
-
-   -- For every table created by my_schema_database_designers, give access to my_schema_readers:
-   
-   ALTER DEFAULT PERMISSIONS FOR my_schema_database_designers IN my_schema
-    FOR TABLES GRANT SELECT TO my_schema_readers;
-
-.. note::
-   * This process needs to be repeated by a user with ``SUPERUSER`` permissions each time a new schema is brought into this permissions management approach.
-   
-   * 
-      By default, any new object created will not be accessible by our new ``my_schema_readers`` group.
-      Running a ``GRANT SELECT ...`` only affects objects that already exist in the schema or database.
-   
-      If you're getting a ``Missing the following permissions: SELECT on table 'database.public.tablename'`` error, make sure that
-      you've altered the default permissions with the ``ALTER DEFAULT PERMISSIONS`` statement.
-
-Creating new users in the departments
------------------------------------------
-
-After the group roles have been created, you can now create user roles for each of your users.
-
-.. code-block:: postgres
-
-   -- create the new database designer users
-   
-   CREATE  ROLE  ecodd;
-   GRANT  LOGIN  TO  ecodd;
-   GRANT  PASSWORD  'ecodds_secret_password'  TO ecodd;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  ecodd;
-   GRANT my_schema_database_designers TO ecodd;
-
-   CREATE  ROLE  ebachmann;
-   GRANT  LOGIN  TO  ebachmann;
-   GRANT  PASSWORD  'another_secret_password'  TO ebachmann;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  ebachmann;
-   GRANT my_database_designers TO ebachmann;
-
-   -- If a user already exists, we can assign that user directly to the group
-   
-   GRANT my_schema_updaters TO rhendricks;
-   
-   -- Create users in the readers group
-   
-   CREATE  ROLE  jbarker;
-   GRANT  LOGIN  TO  jbarker;
-   GRANT  PASSWORD  'action_jack'  TO jbarker;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  jbarker;
-   GRANT my_schema_readers TO jbarker;
-   
-   CREATE  ROLE  lbream;
-   GRANT  LOGIN  TO  lbream;
-   GRANT  PASSWORD  'artichoke123'  TO lbream;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  lbream;
-   GRANT my_schema_readers TO lbream;
-   
-   CREATE  ROLE  pgregory;
-   GRANT  LOGIN  TO  pgregory;
-   GRANT  PASSWORD  'c1ca6a'  TO pgregory;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  pgregory;
-   GRANT my_schema_readers TO pgregory;
-
-   -- Create users in the security officers group
-
-   CREATE  ROLE  hoover;
-   GRANT  LOGIN  TO  hoover;
-   GRANT  PASSWORD  'mintchip'  TO hoover;
-   GRANT  CONNECT  ON  DATABASE  my_database  TO  hoover;
-   GRANT my_schema_security_officers TO hoover;
-
-
-.. todo:
-   create some example users
-   show that they have the right permission
-   try out the with admin option. we can't really do a security officer because
-   only superusers can create users and logins. see what can be done
-   need 1-2 users in each group, for at least 2 schemas/departments
-   this example will be very big just to show what this setup can do ...
-   example: a security officer for a department which will only have
-     read only access to a schema can only get that with admin option
-     access granted to them
-
-After this setup:
-
-* Database designers will be able to run any ddl on objects in the schema and create new objects, including ones created by other database designers
-* Updaters will be able to insert and delete to existing and new tables
-* Readers will be able to read from existing and new tables
-
-All this will happen without having to run any more ``GRANT`` statements.
-
-Any security officer will be able to add and remove users from these
-groups. Creating and dropping login users themselves must be done by a
-superuser.
+.. toctree::
+   :maxdepth: 1
+   :titlesonly:
+
+   access_control_overview
+   access_control_password_policy
+   access_control_managing_roles
+   access_control_permissions
+   access_control_departmental_example
\ No newline at end of file
diff --git a/operational_guides/access_control_departmental_example.rst b/operational_guides/access_control_departmental_example.rst
new file mode 100644
index 000000000..0a6b55e54
--- /dev/null
+++ b/operational_guides/access_control_departmental_example.rst
@@ -0,0 +1,185 @@
+.. _access_control_departmental_example:
+
+********************
+Departmental Example
+********************
+
+You work in a company with several departments.
+
+The example below shows you how to manage permissions in a database shared by multiple departments, where each department has different roles for the tables by schema. It walks you through how to set the permissions up for existing objects and how to set up default permissions rules to cover newly created objects.
+
+The concept is that you set up roles for each new schema with the correct permissions, then the existing users can use these roles. 
+
+A superuser must repeat this setup for each new schema, which is a limitation, but superuser permissions are not needed at any other time, and neither are explicit grant statements or object ownership changes.
+
+In the example, the database is called ``my_database``, and the new or existing schema being set up to be managed in this way is called ``my_schema``.
+
+Our departmental example has four user group roles and seven user roles.
+
+There will be a group for this schema for each of the following:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+
+   * - Group
+     - Activities
+
+   * - database designers
+     - create, alter and drop tables
+
+   * - updaters
+     - insert and delete data
+
+   * - readers
+     - read data
+
+   * - security officers
+     - add and remove users from these groups
+
+Setting up the department permissions
+------------------------------------------
+
+As a superuser, you connect to the system and run the following:
+
+.. code-block:: postgres
+
+   -- create the groups
+
+   CREATE ROLE my_schema_security_officers;
+   CREATE ROLE my_schema_database_designers;
+   CREATE ROLE my_schema_updaters;
+   CREATE ROLE my_schema_readers;
+
+   -- grant permissions for each role
+   -- we grant permissions for existing objects here too, 
+   -- so you don't have to start with an empty schema
+
+   -- security officers
+
+   GRANT connect ON DATABASE my_database TO my_schema_security_officers;
+   GRANT usage ON SCHEMA my_schema TO my_schema_security_officers;
+
+   GRANT my_schema_database_designers TO my_schema_security_officers WITH ADMIN OPTION;
+   GRANT my_schema_updaters TO my_schema_security_officers WITH ADMIN OPTION;
+   GRANT my_schema_readers TO my_schema_security_officers WITH ADMIN OPTION;
+
+   -- database designers
+
+   GRANT connect ON DATABASE my_database TO my_schema_database_designers;
+   GRANT usage ON SCHEMA my_schema TO my_schema_database_designers;
+
+   GRANT create,ddl ON SCHEMA my_schema TO my_schema_database_designers;
+
+   -- updaters
+
+   GRANT connect ON DATABASE my_database TO my_schema_updaters;
+   GRANT usage ON SCHEMA my_schema TO my_schema_updaters;
+
+   GRANT SELECT,INSERT,DELETE ON ALL TABLES IN SCHEMA my_schema TO my_schema_updaters;
+
+   -- readers
+
+   GRANT connect ON DATABASE my_database TO my_schema_readers;
+   GRANT usage ON SCHEMA my_schema TO my_schema_readers;
+
+   GRANT SELECT ON ALL TABLES IN SCHEMA my_schema TO my_schema_readers;
+   GRANT EXECUTE ON ALL FUNCTIONS TO my_schema_readers;
+
+
+   -- create the default permissions for new objects
+
+   ALTER DEFAULT PERMISSIONS FOR my_schema_database_designers IN my_schema
+    FOR TABLES GRANT SELECT,INSERT,DELETE TO my_schema_updaters;
+
+   -- For every table created by my_schema_database_designers, give access to my_schema_readers:
+   
+   ALTER DEFAULT PERMISSIONS FOR my_schema_database_designers IN my_schema
+    FOR TABLES GRANT SELECT TO my_schema_readers;
+
+.. note::
+   * This process needs to be repeated by a user with ``SUPERUSER`` permissions each time a new schema is brought into this permissions management approach.
+   
+   * 
+      By default, any new object created will not be accessible by our new ``my_schema_readers`` group.
+      Running a ``GRANT SELECT ...`` only affects objects that already exist in the schema or database.
+   
+      If you're getting a ``Missing the following permissions: SELECT on table 'database.public.tablename'`` error, make sure that
+      you've altered the default permissions with the ``ALTER DEFAULT PERMISSIONS`` statement.
+
+Creating new users in the departments
+-----------------------------------------
+
+After the group roles have been created, you can now create user roles for each of your users.
+
+.. code-block:: postgres
+
+   -- create the new database designer users
+   
+   CREATE  ROLE  ecodd;
+   GRANT  LOGIN  TO  ecodd;
+   GRANT  PASSWORD  'ecodds_secret_password'  TO ecodd;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  ecodd;
+   GRANT my_schema_database_designers TO ecodd;
+
+   CREATE  ROLE  ebachmann;
+   GRANT  LOGIN  TO  ebachmann;
+   GRANT  PASSWORD  'another_secret_password'  TO ebachmann;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  ebachmann;
+   GRANT my_schema_database_designers TO ebachmann;
+
+   -- If a user already exists, we can assign that user directly to the group
+   
+   GRANT my_schema_updaters TO rhendricks;
+   
+   -- Create users in the readers group
+   
+   CREATE  ROLE  jbarker;
+   GRANT  LOGIN  TO  jbarker;
+   GRANT  PASSWORD  'action_jack'  TO jbarker;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  jbarker;
+   GRANT my_schema_readers TO jbarker;
+   
+   CREATE  ROLE  lbream;
+   GRANT  LOGIN  TO  lbream;
+   GRANT  PASSWORD  'artichoke123'  TO lbream;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  lbream;
+   GRANT my_schema_readers TO lbream;
+   
+   CREATE  ROLE  pgregory;
+   GRANT  LOGIN  TO  pgregory;
+   GRANT  PASSWORD  'c1ca6a'  TO pgregory;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  pgregory;
+   GRANT my_schema_readers TO pgregory;
+
+   -- Create users in the security officers group
+
+   CREATE  ROLE  hoover;
+   GRANT  LOGIN  TO  hoover;
+   GRANT  PASSWORD  'mintchip'  TO hoover;
+   GRANT  CONNECT  ON  DATABASE  my_database  TO  hoover;
+   GRANT my_schema_security_officers TO hoover;
+
+
+.. todo:
+   create some example users
+   show that they have the right permission
+   try out the with admin option. we can't really do a security officer because
+   only superusers can create users and logins. see what can be done
+   need 1-2 users in each group, for at least 2 schemas/departments
+   this example will be very big just to show what this setup can do ...
+   example: a security officer for a department which will only have
+     read only access to a schema can only get that with admin option
+     access granted to them
+
+After this setup:
+
+* Database designers will be able to run any DDL on objects in the schema and create new objects, including ones created by other database designers
+* Updaters will be able to insert into and delete from existing and new tables
+* Readers will be able to read from existing and new tables
+
+All this will happen without having to run any more ``GRANT`` statements.
+
+Any security officer will be able to add and remove users from these
+groups. Creating and dropping login users themselves must be done by a
+superuser.
\ No newline at end of file
diff --git a/operational_guides/access_control_managing_roles.rst b/operational_guides/access_control_managing_roles.rst
new file mode 100644
index 000000000..15729fa53
--- /dev/null
+++ b/operational_guides/access_control_managing_roles.rst
@@ -0,0 +1,124 @@
+.. _access_control_managing_roles:
+
+**************
+Managing Roles
+**************
+Roles are used for both users and groups, and are global across all databases in the SQream cluster. For a ``ROLE`` to be used as a user, it requires a password, the ``LOGIN`` permission, and ``CONNECT`` permissions to the relevant databases.
+
+The Managing Roles section describes the following role-related operations:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Creating New Roles (Users)
+------------------------------
+A user role that logs in to the database requires the ``LOGIN`` permission and a password.
+
+The following is the syntax for creating a new role:
+
+.. code-block:: postgres
+                
+   CREATE ROLE <role_name> ;
+   GRANT LOGIN TO <role_name> ;
+   GRANT PASSWORD <'new_password'> TO <role_name> ;
+   GRANT CONNECT ON DATABASE <database_name> TO <role_name> ;
+
+The following is an example of creating a new role:
+
+.. code-block:: postgres
+
+   CREATE ROLE new_role_name;
+   GRANT LOGIN TO new_role_name;
+   GRANT PASSWORD 'my_password' TO new_role_name;
+   GRANT CONNECT ON DATABASE master TO new_role_name;
+
+A database role may have a number of permissions that define the tasks it can perform, which are assigned using the :ref:`grant` command.
+
+Dropping a User
+------------------------------
+The following is the syntax for dropping a user:
+
+.. code-block:: postgres
+
+   DROP ROLE <role_name> ;
+
+The following is an example of dropping a user:
+
+.. code-block:: postgres
+
+   DROP ROLE admin_role;
+
+Altering a User Name
+------------------------------
+The following is the syntax for altering a user name:
+
+.. code-block:: postgres
+
+   ALTER ROLE <role_name> RENAME TO <new_role_name> ;
+
+The following is an example of altering a user name:
+
+.. code-block:: postgres
+
+   ALTER ROLE admin_role RENAME TO copy_role ;
+
+Changing a User Password
+------------------------------
+You can change a user role's password by granting the user a new password.
+
+The following is an example of changing a user password:
+
+.. code-block:: postgres
+
+   GRANT PASSWORD <'new_password'> TO rhendricks;
+
+.. note:: Granting a new password overrides any previous password. Changing the password while the role has an active running statement does not affect that statement, but will affect subsequent statements.
+
+Altering Public Role Permissions
+--------------------------------
+
+A ``PUBLIC`` role always exists. Every role is a member of the ``PUBLIC`` group (i.e., is granted the ``PUBLIC`` role), and this membership cannot be revoked. However, you can alter the permissions granted to the ``PUBLIC`` role.
+
+By default, the ``PUBLIC`` role has ``USAGE`` and ``CREATE`` permissions on the ``PUBLIC`` schema. Therefore, new users can create objects in the ``PUBLIC`` schema and run :ref:`insert`, :ref:`delete`, :ref:`select`, and :ref:`update` on objects in it.
+
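+For example, to prevent new users from creating objects in the ``PUBLIC`` schema, a superuser can revoke that permission from the ``PUBLIC`` role (a sketch; adjust to your own security policy):
+
+.. code-block:: postgres
+
+   -- Remove the default CREATE permission from the public role:
+   REVOKE CREATE ON SCHEMA public FROM public;
+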
+
+Altering Role Membership (Groups)
+---------------------------------
+
+Many database administrators find it useful to group user roles together. By grouping users, permissions can be granted to, or revoked from, a group with a single command. In SQream DB, this is done by creating a group role, granting permissions to it, and then assigning users to that group role.
+
+To use a role purely as a group, omit granting it ``LOGIN`` and ``PASSWORD`` permissions.
+
+The ``CONNECT`` permission can be given directly to user roles, and/or to the groups they are part of.
+
+.. code-block:: postgres
+
+   CREATE ROLE my_group;
+
+Once the group role exists, you can add user roles (members) using the ``GRANT`` command. For example:
+
+.. code-block:: postgres
+
+   -- Add my_user to this group
+   GRANT my_group TO my_user;
+
+
+To manage object permissions on databases and tables, grant the permissions to the group role (see :ref:`access_control_permissions`).
+
+All member roles then inherit the permissions from the group. For example:
+
+.. code-block:: postgres
+
+   -- Grant all group users connect permissions
+   GRANT CONNECT ON DATABASE a_database TO my_group;
+
+   -- Grant all permissions on tables in the public schema
+   GRANT ALL ON ALL TABLES IN SCHEMA public TO my_group;
+
+Removing users and permissions can be done with the ``REVOKE`` command:
+
+.. code-block:: postgres
+
+   -- remove my_other_user from this group
+   REVOKE my_group FROM my_other_user;
\ No newline at end of file
diff --git a/operational_guides/access_control_overview.rst b/operational_guides/access_control_overview.rst
new file mode 100644
index 000000000..080797fec
--- /dev/null
+++ b/operational_guides/access_control_overview.rst
@@ -0,0 +1,20 @@
+.. _access_control_overview:
+
+**************
+Overview
+**************
+Access control refers to SQream's authentication and authorization operations, managed using a **Role-Based Access Control (RBAC)** system similar to that of ANSI SQL and other SQL products. SQream's default permissions system is similar to that of Postgres, but more powerful. SQream's method lets administrators prepare the system to automatically provide objects with their required permissions.
+
+SQream users can log in from any worker, which verifies their roles and permissions against the metadata server. Each statement is executed with the permissions of the role you are currently logged in as. Roles are defined at the cluster level, and are valid for all databases in the cluster. To bootstrap SQream, new installations require one ``SUPERUSER`` role, typically named ``sqream``. You can only create new roles by connecting as this role.
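+
+For example, a common first step after installation is to connect as the bootstrap superuser and create a second superuser (a sketch; the role name and password are illustrative):
+
+.. code-block:: postgres
+
+   -- Connected as the bootstrap superuser (e.g. sqream):
+   CREATE ROLE second_admin;
+   GRANT LOGIN TO second_admin;
+   GRANT PASSWORD 'Adm1n?Pass' TO second_admin;
+   GRANT SUPERUSER TO second_admin;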
+
+Access control refers to the following basic concepts:
+
+ * **Role** - A role can be a user, a group, or both. Roles can own database objects (such as tables) and can assign permissions on those objects to other roles. Roles can be members of other roles, meaning a user role can inherit permissions from its parent role.
+
+    ::
+   
+ * **Authentication** - Verifies the identity of the role. User roles have usernames (or **role names**) and passwords.
+
+    ::
+ 
+ * **Authorization** - Checks that a role has permissions to perform a particular operation, such as the :ref:`grant` command.
\ No newline at end of file
diff --git a/operational_guides/access_control_password_policy.rst b/operational_guides/access_control_password_policy.rst
new file mode 100644
index 000000000..6c69257ed
--- /dev/null
+++ b/operational_guides/access_control_password_policy.rst
@@ -0,0 +1,76 @@
+.. _access_control_password_policy:
+
+***************
+Password Policy
+***************
+The **Password Policy** describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Password Strength Requirements
+==============================
+As part of our compliance with GDPR standards, SQream enforces a strong password policy when accessing the CLI or Studio, with the following requirements:
+
+* Must be at least eight characters long.
+
+   ::
+
+* Must include both uppercase and lowercase letters.
+
+   ::
+
+* Must include at least one numeric character.
+
+   ::
+
+* Must not include the username.
+
+   ::
+
+* Must include at least one special character, such as **?**, **!**, **$**, etc.
+
+You can create a password using the Studio graphical interface or the CLI, as in the following example:
+
+.. code-block:: postgres
+
+   CREATE ROLE user_a ;
+   GRANT LOGIN to user_a ;
+   GRANT PASSWORD 'BBAu47?fqPL' to user_a ;
+
+Creating a password that does not comply with the password policy generates an error message listing the requirements that were not met:
+
+.. code-block:: console
+
+   The password you attempted to create does not comply with SQream's security requirements.
+
+   Your password must:
+
+   * Be at least eight characters long.
+
+   * Contain upper and lowercase letters.
+
+   * Contain at least one numeric character.
+
+   * Not include a username.
+
+   * Include at least one special character, such as **?**, **!**, **$**, etc.
+
+Brute Force Prevention
+==============================
+After five unsuccessful login attempts, the following message is displayed:
+
+.. code-block:: console
+
+   The user is locked. Please contact your system administrator to reset the password and regain access functionality.
+
+To release a locked user, a superuser must grant the user a new password:
+
+.. code-block:: postgres
+
+   GRANT PASSWORD <'new_password'> TO <role_name>;
+
+For more information, see :ref:`login_max_retries`.
+
+.. warning:: Because superusers can also be blocked, **you must have** at least two superusers per cluster.
\ No newline at end of file
diff --git a/operational_guides/access_control_permissions.rst b/operational_guides/access_control_permissions.rst
new file mode 100644
index 000000000..2d8b9bf0e
--- /dev/null
+++ b/operational_guides/access_control_permissions.rst
@@ -0,0 +1,219 @@
+.. _access_control_permissions:
+
+**************
+Permissions
+**************
+
+The following table displays the access control permissions:
+
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| **Permission**     | **Description**                                                                                                         |
++====================+=========================================================================================================================+
+| **Object/Layer: All Databases**                                                                                                              |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``LOGIN``          | use role to log into the system (the role also needs connect permission on the database it is connecting to)            |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``PASSWORD``       | the password used for logging into the system                                                                           |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``SUPERUSER``      | no permission restrictions on any activity                                                                              |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| **Object/Layer: Database**                                                                                                                   |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``SUPERUSER``      | no permission restrictions on any activity within that database (this does not include modifying roles or permissions)  |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``CONNECT``        | connect to the database                                                                                                 |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``CREATE``         | create schemas in the database                                                                                          |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``CREATE FUNCTION``| create and drop functions                                                                                               |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| **Object/Layer: Schema**                                                                                                                     |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``USAGE``          | allows additional permissions within the schema                                                                         |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``CREATE``         | create tables in the schema                                                                                             |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| **Object/Layer: Table**                                                                                                                      |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``SELECT``         | :ref:`select` from the table                                                                                            |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``INSERT``         | :ref:`insert` into the table                                                                                            |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``UPDATE``         | :ref:`update` the value of certain columns in existing rows                                                             |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``DELETE``         | :ref:`delete` and :ref:`truncate` on the table                                                                          |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``DDL``            | drop and alter on the table                                                                                             |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``ALL``            | all the table permissions                                                                                               |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| **Object/Layer: Function**                                                                                                                   |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``EXECUTE``        | use the function                                                                                                        |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``DDL``            | drop and alter on the function                                                                                          |   
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+| ``ALL``            | all function permissions                                                                                                |
++--------------------+-------------------------------------------------------------------------------------------------------------------------+
+
+
+
+
+GRANT
+-----
+
+:ref:`grant` gives permissions to a role.
+
+.. code-block:: postgres
+
+   -- Grant permissions at the instance/storage cluster level:
+   GRANT
+   { SUPERUSER
+   | LOGIN
+   | PASSWORD <'password'>
+   }
+   TO <role_name> [, ...]
+
+   -- Grant permissions at the database level:
+   GRANT {{CREATE | CONNECT | DDL | SUPERUSER | CREATE FUNCTION} [, ...] | ALL [PERMISSIONS]}
+   ON DATABASE <database_name> [, ...]
+   TO <role_name> [, ...]
+
+   -- Grant permissions at the schema level:
+   GRANT {{ CREATE | DDL | USAGE | SUPERUSER } [, ...] | ALL [PERMISSIONS]}
+   ON SCHEMA <schema_name> [, ...]
+   TO <role_name> [, ...]
+
+   -- Grant permissions at the object level:
+   GRANT {{SELECT | INSERT | DELETE | DDL | UPDATE} [, ...] | ALL [PERMISSIONS]}
+   ON { TABLE <table_name> [, ...] | ALL TABLES IN SCHEMA <schema_name> [, ...]}
+   TO <role_name> [, ...]
+
+   -- Grant execute function permission:
+   GRANT {ALL | EXECUTE | DDL} ON FUNCTION <function_name>
+   TO <role_name>
+
+   -- Allow role2 to use permissions granted to role1:
+   GRANT <role1> [, ...]
+   TO <role2>
+
+   -- Also allow role2 to grant role1 to other roles:
+   GRANT <role1> [, ...]
+   TO <role2>
+   WITH ADMIN OPTION
+  
+``GRANT`` examples:
+
+.. code-block:: postgres
+
+   GRANT LOGIN, SUPERUSER TO admin;
+
+   GRANT CREATE FUNCTION ON DATABASE master TO admin;
+
+   GRANT SELECT ON TABLE admin.table1 TO userA;
+
+   GRANT EXECUTE ON FUNCTION my_function TO userA;
+
+   GRANT ALL ON FUNCTION my_function TO userA;
+
+   GRANT DDL ON admin.main_table TO userB;
+
+   GRANT ALL ON ALL TABLES IN SCHEMA public TO userB;
+
+   GRANT admin TO userC;
+
+   GRANT SUPERUSER ON SCHEMA demo TO userA;
+
+   GRANT admin_role TO userB;
+
+REVOKE
+------
+
+:ref:`revoke` removes permissions from a role.
+
+.. code-block:: postgres
+
+   -- Revoke permissions at the instance/storage cluster level:
+   REVOKE
+   { SUPERUSER
+   | LOGIN
+   | PASSWORD
+   }
+   FROM <role_name> [, ...]
+
+   -- Revoke permissions at the database level:
+   REVOKE {{CREATE | CONNECT | DDL | SUPERUSER | CREATE FUNCTION} [, ...] | ALL [PERMISSIONS]}
+   ON DATABASE <database_name> [, ...]
+   FROM <role_name> [, ...]
+
+   -- Revoke permissions at the schema level:
+   REVOKE {{ CREATE | DDL | USAGE | SUPERUSER } [, ...] | ALL [PERMISSIONS]}
+   ON SCHEMA <schema_name> [, ...]
+   FROM <role_name> [, ...]
+
+   -- Revoke permissions at the object level:
+   REVOKE {{ SELECT | INSERT | DELETE | DDL | UPDATE } [, ...] | ALL}
+   ON { [ TABLE ] <table_name> [, ...] | ALL TABLES IN SCHEMA <schema_name> [, ...] }
+   FROM <role_name> [, ...]
+
+   -- Remove role2's access to permissions granted through role1:
+   REVOKE <role1> [, ...] FROM <role2> [, ...]
+
+   -- Remove role2's permission to grant role1 to additional roles:
+   REVOKE <role1> [, ...] FROM <role2> [, ...] WITH ADMIN OPTION
+
+
+Examples:
+
+.. code-block:: postgres
+
+   REVOKE SUPERUSER ON SCHEMA demo FROM userA;
+
+   REVOKE DELETE ON admin.table1 FROM userB;
+
+   REVOKE LOGIN FROM role_test;
+
+   REVOKE CREATE FUNCTION FROM admin;
+
+Default Permissions
+-------------------
+
+The default permissions system (see :ref:`alter_default_permissions`)
+can be used to automatically grant permissions to newly
+created objects (see the departmental example for one way it can be used).
+
+A default permissions rule watches for a schema or table being created
+(optionally filtered by schema), and can grant any permission on that
+object to any role. The rule is applied when the ``CREATE TABLE`` or
+``CREATE SCHEMA`` statement is run.
+
+
+.. code-block:: postgres
+
+
+   ALTER DEFAULT PERMISSIONS FOR target_role_name
+        [IN schema_name, ...]
+        FOR { TABLES | SCHEMAS }
+        { grant_clause | DROP grant_clause}
+        TO ROLE { role_name | public };
+
+   grant_clause ::=
+     GRANT
+        { CREATE FUNCTION
+        | SUPERUSER
+        | CONNECT
+        | CREATE
+        | USAGE
+        | SELECT
+        | INSERT
+        | DELETE
+        | DDL
+        | UPDATE
+        | EXECUTE
+        | ALL
+        }
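+
+For example, the following sketch grants ``SELECT`` on every table that ``r_etl`` subsequently creates in the ``staging`` schema to ``r_analyst`` (all role and schema names here are illustrative):
+
+.. code-block:: postgres
+
+   ALTER DEFAULT PERMISSIONS FOR r_etl
+        IN staging
+        FOR TABLES
+        GRANT SELECT
+        TO ROLE r_analyst;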
\ No newline at end of file
diff --git a/operational_guides/delete.rst b/operational_guides/delete.rst
deleted file mode 100644
index 24ab5a218..000000000
--- a/operational_guides/delete.rst
+++ /dev/null
@@ -1,214 +0,0 @@
-.. _delete_guide:
-
-***********************
-Deleting Data
-***********************
-
-SQream DB supports deleting data, but it's important to understand how this works and how to maintain deleted data.
-
-How does deleting in SQream DB work?
-========================================
-
-In SQream DB, when you run a delete statement, any rows that match the delete predicate will no longer be returned when running subsequent queries.
-Deleted rows are tracked in a separate location, in *delete predicates*.
-
-After the delete statement, a separate process can be used to reclaim the space occupied by these rows, and to remove the small overhead that queries will have until this is done. 
-
-Some benefits to this design are:
-
-#. Delete transactions complete quickly
-
-#. The total disk footprint overhead at any time for a delete transaction or cleanup process is small and bounded (while the system still supports low overhead commit, rollback and recovery for delete transactions).
-
-
-Phase 1: Delete
----------------------------
-
-.. TODO: isn't the delete cleanup able to complete a certain amount of work transactionally, so that you can do a massive cleanup in stages?
-
-.. TODO: our current best practices is to use a cron job with sqream sql to run the delete cleanup. we should document how to do this, we have customers with very different delete schedules so we can give a few extreme examples and when/why you'd use them
-   
-When a :ref:`delete` statement is run, SQream DB records the delete predicates used. These predicates will be used to filter future statements on this table until all this delete predicate's matching rows have been physically cleaned up.
-
-This filtering process takes full advantage of SQream's zone map feature.
-
-Phase 2: Clean-up
---------------------
-
-The cleanup process is not automatic. This gives control to the user or DBA, and gives flexibility on when to run the clean up.
-
-Files marked for deletion during the logical deletion stage are removed from disk. This is achieved by calling both utility function commands: ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` sequentially.
-
-.. note::
-   * :ref:`alter_table` and other DDL operations are blocked on tables that require clean-up. See more in the :ref:`concurrency_and_locks` guide.
-   * If the estimated time for a cleanup processs is beyond a threshold, you will get an error message about it. The message will explain how to override this limitation and run the process anywhere.
-
-Notes on data deletion
-=========================================
-
-.. note::
-   * If the number of deleted records crosses the threshold defined by the ``mixedColumnChunksThreshold`` parameter, the delete operation will be aborted.
-   * This is intended to alert the user that the large number of deleted records may result in a large number of mixed chuncks.
-   * To circumvent this alert, replace XXX with the desired number of records before running the delete operation:
-
-.. code-block:: postgres
-
-   set mixedColumnChunksThreshold=XXX;
-   
-
-Deleting data does not free up space
------------------------------------------
-
-With the exception of a full table delete (:ref:`TRUNCATE`), deleting data does not free up disk space. To free up disk space, trigger the cleanup process.
-
-``SELECT`` performance on deleted rows
-----------------------------------------
-
-Queries on tables that have deleted rows may have to scan data that hasn't been cleaned up.
-In some cases, this can cause queries to take longer than expected. To solve this issue, trigger the cleanup process.
-
-Use ``TRUNCATE`` instead of ``DELETE``
----------------------------------------
-For tables that are frequently emptied entirely, consider using :ref:`truncate` rather than :ref:`delete`. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent cleanup to free up disk space.
-
-Cleanup is I/O intensive
--------------------------------
-
-The cleanup process actively compacts tables by writing a complete new version of column chunks with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.
-
-Cleanup operations can create significant I/O load on the database. Consider this when planning the best time for the cleanup process.
-
-If this is an issue with your environment, consider using ``CREATE TABLE AS`` to create a new table and then rename and drop the old table.
-
-
-Example
-=============
-
-Deleting values from a table
-------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Deleting values based on more complex predicates
----------------------------------------------------
-
-.. code-block:: psql
-
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   4,Elephant            ,6500
-   5,Rhinoceros          ,2100
-   6,\N,\N
-   
-   6 rows
-   
-   farm=> DELETE FROM cool_animals WHERE weight > 1000;
-   executed
-   
-   farm=> SELECT * FROM cool_animals;
-   1,Dog                 ,7
-   2,Possum              ,3
-   3,Cat                 ,5
-   6,\N,\N
-   
-   4 rows
-
-Identifying and cleaning up tables
----------------------------------------
-
-List tables that haven't been cleaned up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-   
-   farm=> SELECT t.table_name FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      GROUP BY 1;
-   cool_animals
-   
-   1 row
-
-Identify predicates for clean-up
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   weight > 1000
-   
-   1 row
-
-Triggering a cleanup
-^^^^^^^^^^^^^^^^^^^^^^
-
-.. code-block:: psql
-
-   -- Chunk reorganization (aka SWEEP)
-   farm=> SELECT CLEANUP_CHUNKS('public','cool_animals');
-   executed
-
-   -- Delete leftover files (aka VACUUM)
-   farm=> SELECT CLEANUP_EXTENTS('public','cool_animals');
-   executed
-   
-   
-   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
-      JOIN sqream_catalog.tables t
-      ON dp.table_id = t.table_id
-      WHERE t.table_name = 'cool_animals';
-   
-   0 rows
-
-
-
-Best practices for data deletion
-=====================================
-
-* Run ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` after large ``DELETE`` operations.
-
-* When deleting large proportions of data from very large tables, consider running a ``CREATE TABLE AS`` operation instead, then rename and drop the original table.
-
-* Avoid killing ``CLEANUP_EXTENTS`` operations after they've started.
-
-* SQream DB is optimised for time-based data. When data is naturally ordered by a date or timestamp, deleting based on those columns will perform best. For more information, see our :ref:`time based data management guide`.
-
-
-
-.. soft update concept
-
-.. delete cleanup and it's properties. automatic/manual, in transaction or background
-
-.. automatic background gives fast delete, minimal transaction overhead,
-.. small cost to queries until background reorganised
-
-.. when does delete use the metadata effectively
-
-.. more examples
-
diff --git a/operational_guides/delete_guide.rst b/operational_guides/delete_guide.rst
new file mode 100644
index 000000000..0d6c4a41c
--- /dev/null
+++ b/operational_guides/delete_guide.rst
@@ -0,0 +1,262 @@
+.. _delete_guide:
+
+***********************
+Deleting Data
+***********************
+The **Deleting Data** page describes how the **Delete** statement works and how to maintain data that you delete:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Overview
+========================================
+Deleting data typically refers to deleting rows, but can refer to deleting other table content as well. The general workflow is to delete the data and then trigger a clean-up operation. The clean-up operation reclaims the space occupied by the deleted rows, as discussed further below.
+
+The **DELETE** statement deletes rows defined by a predicate that you have specified, preventing them from appearing in subsequent queries.
+
+For example, the predicate below defines and deletes rows containing animals heavier than 1000 weight units:
+
+.. code-block:: psql
+
+   farm=> DELETE FROM cool_animals WHERE weight > 1000;
+
+The major benefit of this design is that delete transactions complete simply and quickly.
+
+The Deletion Process
+====================
+Deleting rows occurs in the following two phases:
+
+* **Phase 1 - Deletion** - All rows you mark for deletion are ignored when you run any query. These rows are not deleted until the clean-up phase. 
+
+   ::
+   
+* **Phase 2 - Clean-up** - The rows you marked for deletion in Phase 1 are physically deleted. The clean-up phase is not automated, letting users or DBAs control when to activate it. The files marked for deletion during Phase 1 are removed from disk by sequentially running the utility function commands ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS``.
+
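+The two clean-up utility functions from Phase 2 are run sequentially against a schema and table, for example:
+
+.. code-block:: psql
+
+   -- Chunk reorganization (aka SWEEP)
+   farm=> SELECT CLEANUP_CHUNKS('public','cool_animals');
+   executed
+
+   -- Delete leftover files (aka VACUUM)
+   farm=> SELECT CLEANUP_EXTENTS('public','cool_animals');
+   executed
+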
+.. TODO: isn't the delete cleanup able to complete a certain amount of work transactionally, so that you can do a massive cleanup in stages?
+
+.. TODO: our current best practices is to use a cron job with sqream sql to run the delete cleanup. we should document how to do this, we have customers with very different delete schedules so we can give a few extreme examples and when/why you'd use them.
+
+Usage Notes
+=====================
+The **Usage Notes** section includes important information about the DELETE statement:
+
+.. contents::
+   :local:
+   :depth: 1
+   
+General Notes
+----------------
+This section describes the general notes applicable when deleting rows:
+
+* The :ref:`alter_table` command and other DDL operations are blocked on tables that require clean-up. If the estimated clean-up time exceeds the permitted threshold, an error message is displayed describing how to override the threshold limitation. For more information, see :ref:`concurrency_and_locks`.
+
+   ::
+
+* If the number of deleted records exceeds the threshold defined by the ``mixedColumnChunksThreshold`` parameter, the delete operation is aborted. This alerts users that the large number of deleted records may result in a large number of mixed chunks. To circumvent this alert, use the following syntax (replacing ``XXX`` with the desired number of records) before running the delete operation:
+
+  .. code-block:: postgres
+
+     set mixedColumnChunksThreshold=XXX;
+   
+Deleting Data does not Free Space
+-----------------------------------------
+With the exception of running a full table delete, deleting data does not free unused disk space. To free unused disk space, you must trigger the clean-up process.
+
+For more information on running a full table delete, see :ref:`truncate`.
+
+For more information on freeing disk space, see *Identifying and Cleaning Up Tables* below.
+
+Clean-Up Operations Are I/O Intensive
+-------------------------------------
+The clean-up process reduces table size by removing all unused space from column chunks. While this reduces subsequent query time, the clean-up itself can take a long time, and it occupies extra disk space for the new copy of the table until the operation is complete.
+
+.. tip::  Because clean-up operations can create significant I/O load on your database, consider running them sparingly, during off-peak hours.
+
+If this is an issue with your environment, consider using ``CREATE TABLE AS`` to create a new table and then rename and drop the old table.
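+
+As a sketch of this alternative, you can copy only the rows you want to keep into a new table, then swap it in place of the original (table and column names follow the examples on this page; adjust to your schema):
+
+.. code-block:: postgres
+
+   -- Keep only the rows that would survive the delete:
+   CREATE TABLE cool_animals_clean AS
+      SELECT * FROM cool_animals WHERE weight <= 1000;
+
+   -- Swap the new table in place of the original:
+   DROP TABLE cool_animals;
+   ALTER TABLE cool_animals_clean RENAME TO cool_animals;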
+
+Examples
+=============
+The **Examples** section includes the following examples:
+
+.. contents::
+   :local:
+   :depth: 1
+   
+Deleting Rows from a Table
+------------------------------
+The following example shows how to delete rows from a table.
+
+1. Display the table:
+
+   .. code-block:: psql
+
+      farm=> SELECT * FROM cool_animals;
+   
+   The following table is displayed:
+
+   .. code-block:: psql
+
+      1,Dog                 ,7
+      2,Possum              ,3
+      3,Cat                 ,5
+      4,Elephant            ,6500
+      5,Rhinoceros          ,2100
+      6,\N,\N
+   
+2. Delete rows from the table:
+
+   .. code-block:: psql
+
+      farm=> DELETE FROM cool_animals WHERE weight > 1000;
+	  
+3. Display the table:
+
+   .. code-block:: psql
+
+      farm=> SELECT * FROM cool_animals;
+   
+   The following table is displayed:
+  
+   .. code-block:: psql    
+
+      1,Dog                 ,7
+      2,Possum              ,3
+      3,Cat                 ,5
+      6,\N,\N
+   
+Deleting Values Based on Complex Predicates
+---------------------------------------------------
+The following example shows how to delete values based on complex predicates.
+
+1. Display the table:
+
+   .. code-block:: psql
+
+      farm=> SELECT * FROM cool_animals;
+   
+   The following table is displayed:
+
+   .. code-block:: psql
+
+      1,Dog                 ,7
+      2,Possum              ,3
+      3,Cat                 ,5
+      4,Elephant            ,6500
+      5,Rhinoceros          ,2100
+      6,\N,\N
+   
+2. Delete rows from the table using a compound predicate:
+
+   .. code-block:: psql
+
+      farm=> DELETE FROM cool_animals WHERE weight > 1000 OR weight IS NULL;
+
+3. Display the table:
+
+   .. code-block:: psql
+
+      farm=> SELECT * FROM cool_animals;
+
+   The following table is displayed:
+
+   .. code-block:: psql
+
+      1,Dog                 ,7
+      2,Possum              ,3
+      3,Cat                 ,5
+   
+Identifying and Cleaning Up Tables
+---------------------------------------
+The **Identifying and Cleaning Up Tables** section includes the following examples:
+
+.. contents::
+   :local:
+   :depth: 1
+   
+Listing Tables that Have Not Been Cleaned Up
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following example shows how to list tables that have not been cleaned up:
+
+.. code-block:: psql
+   
+   farm=> SELECT t.table_name FROM sqream_catalog.delete_predicates dp
+      JOIN sqream_catalog.tables t
+      ON dp.table_id = t.table_id
+      GROUP BY 1;
+   cool_animals
+   
+   1 row
+
+Identifying Predicates for Clean-Up
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The following example shows how to identify predicates for clean-up:
+
+.. code-block:: psql
+
+   farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
+      JOIN sqream_catalog.tables t
+      ON dp.table_id = t.table_id
+      WHERE t.table_name = 'cool_animals';
+   weight > 1000
+   
+   1 row
+   
+.. _trigger_cleanup:
+
+Triggering a Clean-Up
+^^^^^^^^^^^^^^^^^^^^^^
+The following example shows how to trigger a clean-up:
+
+1. Run the ``CLEANUP_CHUNKS`` command (also known as ``SWEEP``) to reorganize the chunks:
+
+   .. code-block:: psql
+
+      farm=> SELECT CLEANUP_CHUNKS('public','cool_animals');
+
+2. Run the ``CLEANUP_EXTENTS`` command (also known as ``VACUUM``) to delete the leftover files:
+
+   .. code-block:: psql
+   
+      farm=> SELECT CLEANUP_EXTENTS('public','cool_animals');
+   
+3. Verify that no delete predicates remain for the table:
+
+   .. code-block:: psql
+   
+      farm=> SELECT delete_predicate FROM sqream_catalog.delete_predicates dp
+         JOIN sqream_catalog.tables t
+         ON dp.table_id = t.table_id
+         WHERE t.table_name = 'cool_animals';
+		 
+Best Practices
+=====================================
+This section lists best practices for deleting rows:
+
+* Run ``CLEANUP_CHUNKS`` and ``CLEANUP_EXTENTS`` after running large ``DELETE`` operations.
+
+   ::
+
+* When deleting large segments of data from very large tables, consider running a ``CREATE TABLE AS`` operation instead, then renaming and dropping the original table.
+
+   ::
+
+* Avoid killing ``CLEANUP_EXTENTS`` operations in progress.
+
+   ::
+
+* SQream is optimized for time-based data, which is data naturally ordered according to date or timestamp. Deleting rows based on such columns leads to increased performance.
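+
+For example, on a table naturally ordered by a date column (the table and column names here are illustrative only), a date-based predicate lets SQream use chunk metadata to skip data efficiently:
+
+.. code-block:: postgres
+
+   DELETE FROM sensor_readings WHERE read_date < '2021-01-01';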
+
+.. soft update concept
+
+.. delete cleanup and its properties. automatic/manual, in transaction or background
+
+.. automatic background gives fast delete, minimal transaction overhead,
+.. small cost to queries until background reorganised
+
+.. when does delete use the metadata effectively
+
+.. more examples
\ No newline at end of file
diff --git a/operational_guides/external_data.rst b/operational_guides/external_data.rst
index 98d157ab2..c9a6cfb33 100644
--- a/operational_guides/external_data.rst
+++ b/operational_guides/external_data.rst
@@ -3,8 +3,7 @@
 **********************************
 Working with External Data
 **********************************
-
-SQream DB supports external data sources for use with :ref:`external_tables`, :ref:`copy_from`, and :ref:`copy_to`.
+SQream supports the following external data sources:
 
 .. toctree::
    :maxdepth: 1
@@ -12,4 +11,16 @@ SQream DB supports external data sources for use with :ref:`external_tables`, :r
 
    s3
    hdfs
+   mounting_an_nfs_shared_drive
+   
+For more information, see the following:
+
+* :ref:`external_tables`
+
+   ::
+   
+* :ref:`copy_from`
+
+   ::
    
+* :ref:`copy_to`
\ No newline at end of file
diff --git a/operational_guides/external_tables.rst b/operational_guides/foreign_tables.rst
similarity index 74%
rename from operational_guides/external_tables.rst
rename to operational_guides/foreign_tables.rst
index 005dc961f..bc8c401a6 100644
--- a/operational_guides/external_tables.rst
+++ b/operational_guides/foreign_tables.rst
@@ -1,65 +1,66 @@
-.. _external_tables:
+.. _foreign_tables:
 
 ***********************
-External Tables
+Foreign Tables
 ***********************
-External tables can be used to run queries directly on data without inserting it into SQream DB first.
-SQream DB supports read only external tables, so you can query from external tables, but you cannot insert to them, or run deletes or updates on them.
+Foreign tables can be used to run queries directly on data without inserting it into SQream DB first.
+SQream DB supports read-only foreign tables: you can query foreign tables, but you cannot insert into them, or run deletes or updates on them.
+
 Running queries directly on external data is most effectively used for things like one off querying. If you will be repeatedly querying data, the performance will usually be better if you insert the data into SQream DB first.
-Although external tables can be used without inserting data into SQream DB, one of their main use cases is to help with the insertion process. An insert select statement on an external table can be used to insert data into SQream using the full power of the query engine to perform ETL.
+
+Although foreign tables can be used without inserting data into SQream DB, one of their main use cases is to help with the insertion process. An insert select statement on a foreign table can be used to insert data into SQream using the full power of the query engine to perform ETL.
 
 .. contents:: In this topic:
    :local:
    
-What kind of data is supported?
+Supported Data Formats
 =====================================
-SQream DB supports external tables over:
+SQream DB supports foreign tables over:
 
-* text files (e.g. CSV, PSV, TSV)
+* Text files (e.g. CSV, PSV, TSV)
 * ORC
 * Parquet
 
-What kind of data staging is supported?
+Supported Data Staging
 ============================================
-SQream DB can stage data from:
+SQream can stage data from:
 
 * a local filesystem (e.g. ``/mnt/storage/....``)
 * :ref:`s3` buckets (e.g. ``s3://pp-secret-bucket/users/*.parquet``)
 * :ref:`hdfs` (e.g. ``hdfs://hadoop-nn.piedpiper.com/rhendricks/*.csv``)
 
-Using external tables - a practical example
+Using Foreign Tables
 ==============================================
-Use an external table to stage data before loading from CSV, Parquet or ORC files.
+Use a foreign table to stage data before loading from CSV, Parquet or ORC files.
 
-Planning for data staging
+Planning for Data Staging
 --------------------------------
 For the following examples, we will want to interact with a CSV file. Here's a peek at the table contents:
-
+  
 .. csv-table:: nba.csv
-
    :file: nba-t10.csv
    :widths: auto
-   :header-rows: 1 
+   :header-rows: 1
 
 The file is stored on :ref:`s3`, at ``s3://sqream-demo-data/nba_players.csv``.
 We will make note of the file structure, to create a matching ``CREATE_EXTERNAL_TABLE`` statement.
 
-Creating the external table
+Creating a Foreign Table
 -----------------------------
-Based on the source file structure, we we :ref:`create an external table` with the appropriate structure, and point it to the file.
+Based on the source file structure, we :ref:`create a foreign table` with the appropriate structure, and point it to the file.
 
 .. code-block:: postgres
    
-   CREATE EXTERNAL TABLE nba
+   CREATE FOREIGN TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name varchar,
+      Team varchar,
       Number tinyint,
-      Position varchar(2),
+      Position varchar,
       Age tinyint,
-      Height varchar(4),
+      Height varchar,
       Weight real,
-      College varchar(40),
+      College varchar,
       Salary float
     )
       USING FORMAT CSV -- Text file
@@ -67,11 +68,13 @@ Based on the source file structure, we we :ref:`create an external table SELECT * FROM nba;
       master=> select * from nba;
       Record delimiter mismatch during CSV parsing. User defined line delimiter \n does not match the first delimiter \r\n found in s3://sqream-demo-data/nba.csv
-* Since the data for an external table is not stored in SQream DB, it can be changed or removed at any time by an external process. As a result, the same query can return different results each time it runs against an external table. Similarly, a query might fail if the external data is moved, removed, or has changed structure.
+* Since the data for a foreign table is not stored in SQream DB, it can be changed or removed at any time by an external process. As a result, the same query can return different results each time it runs against a foreign table. Similarly, a query might fail if the external data is moved, removed, or has changed structure.
\ No newline at end of file
diff --git a/operational_guides/hdfs.rst b/operational_guides/hdfs.rst
index 274926e36..e59c49cc7 100644
--- a/operational_guides/hdfs.rst
+++ b/operational_guides/hdfs.rst
@@ -46,13 +46,13 @@ This section describes how to configure an HDFS environment for the user **sqrea
       $ PATH=$PATH:$HOME/.local/bin:$HOME/bin:${SQREAM_HOME}/bin/:${JAVA_HOME}/bin:$HADOOP_INSTALL/bin
       $ export PATH
 
-3. Verify that the edits have been made:
+2. Verify that the edits have been made:
 
    .. code-block:: console
      
       source /home/sqream/.bash_profile
        
-4. Check if you can access Hadoop from your machine:       
+3. Check if you can access Hadoop from your machine:       
        
   .. code-block:: console
      
@@ -63,7 +63,7 @@ This section describes how to configure an HDFS environment for the user **sqrea
    **NOTICE:** If you cannot access Hadoop from your machine because it uses Kerberos, see `Connecting a SQream Server to Cloudera Hadoop with Kerberos `_
 
 
-5. Verify that an HDFS environment exists for SQream services:
+4. Verify that an HDFS environment exists for SQream services:
 
    .. code-block:: console
      
@@ -72,7 +72,7 @@ This section describes how to configure an HDFS environment for the user **sqrea
 .. _step_6:
 
       
-6. If an HDFS environment does not exist for SQream services, create one (sqream_env.sh):
+5. If an HDFS environment does not exist for SQream services, create one (sqream_env.sh):
    
    .. code-block:: console
      
@@ -92,13 +92,11 @@ This section describes how to configure an HDFS environment for the user **sqrea
       $ export PATH
 	  
 :ref:`Back to top `
-
 	  
 .. _authenticate_hadoop_servers_that_require_kerberos:
 
 Authenticating Hadoop Servers that Require Kerberos
 ---------------------------------------------------
-
 If your Hadoop server requires Kerberos authentication, do the following:
 
 1. Create a principal for the user **sqream**.
@@ -134,9 +132,9 @@ If your Hadoop server requires Kerberos authentication, do the following:
    
       $ ls -lrt
 
-5. Look for a recently updated folder containing the text **hdfs**.
+6. Look for a recently updated folder containing the text **hdfs**.
 
-The following is an example of the correct folder name:
+   The following is an example of the correct folder name:
 
    .. code-block:: console
    
@@ -150,25 +148,31 @@ The following is an example of the correct folder name:
    Comment: - Does "something" need to be replaced with "file name"
    
 
-6. Copy the .keytab file to user **sqream's** Home directory on the remote machines that you are planning to use Hadoop on.
+7. Copy the .keytab file to the user **sqream**'s home directory on the remote machines on which you plan to use Hadoop.
+
+    ::
 
-7. Copy the following files to the **sqream sqream@server:/hdfs/hadoop/etc/hadoop:** directory:
+8. Copy the following files to the **sqream@server:/hdfs/hadoop/etc/hadoop** directory:
 
    * core-site.xml
    * hdfs-site.xml
 
-8. Connect to the sqream server and verify that the .keytab file's owner is a user sqream and is granted the correct permissions:
+9. Connect to the sqream server and verify that the .keytab file's owner is the user **sqream** and that it is granted the correct permissions:
 
    .. code-block:: console
    
       $ sudo chown sqream:sqream /home/sqream/hdfs.keytab
       $ sudo chmod 600 /home/sqream/hdfs.keytab
 
-9. Log into the sqream server.
+10. Log into the sqream server.
+
+     ::
+
+11. Log in as the user **sqream**.
 
-10. Log in as the user **sqream**.
+     ::
 
-11. Navigate to the Home directory and check the name of a Kerberos principal represented by the following .keytab file:
+12. Navigate to the Home directory and check the name of a Kerberos principal represented by the following .keytab file:
 
    .. code-block:: console
    
@@ -199,15 +203,17 @@ The following is an example of the correct folder name:
       $    5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
       $    5 09/15/2020 18:03:05 hdfs/nn1@SQ.COM
 
-12. Verify that the hdfs service named **hdfs/nn1@SQ.COM** is shown in the generated output above.
+13. Verify that the hdfs service named **hdfs/nn1@SQ.COM** is shown in the generated output above.
 
-13. Run the following:
+     ::
+
+14. Run the following:
 
    .. code-block:: console
    
       $ kinit -kt hdfs.keytab hdfs/nn1@SQ.COM
 
- 13. Check the output:
+15. Verify that the output is correct:
   
    .. code-block:: console
    
@@ -223,15 +229,20 @@ The following is an example of the correct folder name:
       $ Valid starting       Expires              Service principal
       $ 09/16/2020 13:44:18  09/17/2020 13:44:18  krbtgt/SQ.COM@SQ.COM
 
-14. List the files located at the defined server name or IP address:
+16. List the files located at the defined server name or IP address:
 
    .. code-block:: console
    
       $ hadoop fs -ls hdfs://:8020/
 
-15. Do one of the following:
+17. Do one of the following:
+
+     ::
+
+    * If the list below is output, continue with Step 18.
+
+     ::
 
-    * If the list below is output, continue with Step 16.
     * If the list is not output, verify that your environment has been set up correctly.
 	
 If any of the following are empty, verify that you followed :ref:`Step 6 ` in the **Configuring an HDFS Environment for the User sqream** section above correctly:
@@ -245,8 +256,10 @@ If any of the following are empty, verify that you followed :ref:`Step 6 `
\ No newline at end of file
diff --git a/operational_guides/index.rst b/operational_guides/index.rst
index b7ea1502d..048efb06f 100644
--- a/operational_guides/index.rst
+++ b/operational_guides/index.rst
@@ -15,7 +15,8 @@ This section summarizes the following operational guides:
    access_control
    creating_or_cloning_a_storage_cluster
    external_data
-   external_tables
+   foreign_tables
+   delete_guide
    exporting_data
    logging
    monitoring_query_performance
diff --git a/operational_guides/logging.rst b/operational_guides/logging.rst
index a40e08601..03950784d 100644
--- a/operational_guides/logging.rst
+++ b/operational_guides/logging.rst
@@ -354,7 +354,7 @@ Assuming logs are stored at ``/home/rhendricks/sqream_storage/logs/``, a databas
 
    CREATE FOREIGN TABLE logs 
    (
-     start_marker      VARCHAR(4),
+     start_marker      TEXT(4),
      row_id            BIGINT,
      timestamp         DATETIME,
      message_level     TEXT,
@@ -368,7 +368,7 @@ Assuming logs are stored at ``/home/rhendricks/sqream_storage/logs/``, a databas
      service_name      TEXT,
      message_type_id   INT,
      message           TEXT,
-     end_message       VARCHAR(5)
+     end_message       TEXT(5)
    )
    WRAPPER csv_fdw
    OPTIONS
@@ -416,8 +416,8 @@ Finding Fatal Errors
 .. code-block:: psql
 
    t=> SELECT message FROM logs WHERE message_type_id=1010;
-   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/leveldb/LOCK: Resource temporarily unavailable
-   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/leveldb/LOCK: Resource temporarily unavailable
+   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/rocksdb/LOCK: Resource temporarily unavailable
+   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/rocksdb/LOCK: Resource temporarily unavailable
    Mismatch in storage version, upgrade is needed,Storage version: 25, Server version is: 26
    Mismatch in storage version, upgrade is needed,Storage version: 25, Server version is: 26
    Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/LOCK: Resource temporarily unavailable
diff --git a/operational_guides/monitoring_query_performance.rst b/operational_guides/monitoring_query_performance.rst
index a542f61e6..057512968 100644
--- a/operational_guides/monitoring_query_performance.rst
+++ b/operational_guides/monitoring_query_performance.rst
@@ -47,7 +47,7 @@ First, create a foreign table for the logs
 .. code-block:: postgres
    CREATE FOREIGN TABLE logs 
    (
-     start_marker      VARCHAR(4),
+     start_marker      TEXT(4),
      row_id            BIGINT,
      timestamp         DATETIME,
      message_level     TEXT,
@@ -61,7 +61,7 @@ First, create a foreign table for the logs
      service_name      TEXT,
      message_type_id   INT,
      message           TEXT,
-     end_message       VARCHAR(5)
+     end_message       TEXT(5)
    )
    WRAPPER cdv_fdw
    OPTIONS
@@ -200,7 +200,7 @@ Commonly Seen Nodes
      - Description
    * - ``CpuDecompress``
      - CPU
-     - Decompression operation, common for longer ``VARCHAR`` types
+     - Decompression operation, common for longer ``TEXT`` types
    * - ``CpuLoopJoin``
      - CPU
      - A non-indexed nested loop join, performed on the CPU
@@ -621,9 +621,9 @@ Common Solutions for Improving Filtering
 * Use :ref:`clustering keys and naturally ordered data` in your filters.
 * Avoid full table scans when possible
 
-4. Joins with ``varchar`` Keys
+4. Joins with ``text`` Keys
 -----------------------------------
-Joins on long text keys, such as ``varchar(100)`` do not perform as well as numeric data types or very short text keys.
+Joins on long text keys do not perform as well as numeric data types or very short text keys.
 
 Identifying the Situation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -636,20 +636,20 @@ For example, consider these two table structures:
      amt            FLOAT NOT NULL,
      i              INT NOT NULL,
      ts             DATETIME NOT NULL,
-     country_code   VARCHAR(3) NOT NULL,
-     flag           VARCHAR(10) NOT NULL,
-     fk             VARCHAR(50) NOT NULL
+     country_code   TEXT(3) NOT NULL,
+     flag           TEXT(10) NOT NULL,
+     fk             TEXT(50) NOT NULL
    );
    CREATE TABLE t_b 
    (
-     id          VARCHAR(50) NOT NULL
+     id          TEXT(50) NOT NULL
      prob        FLOAT NOT NULL,
      j           INT NOT NULL,
    );
 #. 
    Run a query.
      
-   In this example, we will join ``t_a.fk`` with ``t_b.id``, both of which are ``VARCHAR(50)``.
+   In this example, we will join ``t_a.fk`` with ``t_b.id``, both of which are ``TEXT(50)``.
    
    .. code-block:: postgres
       
@@ -688,7 +688,7 @@ For example, consider these two table structures:
    
 Improving Query Performance
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-* In general, try to avoid ``VARCHAR`` as a join key. As a rule of thumb, ``BIGINT`` works best as a join key.
+* In general, try to avoid ``TEXT`` as a join key. As a rule of thumb, ``BIGINT`` works best as a join key.
 * 
    Convert text values on-the-fly before running the query. For example, the :ref:`crc64` function takes a text
    input and returns a ``BIGINT`` hash.
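 
    For example, using the ``t_a`` and ``t_b`` tables defined above, the join keys can be hashed on the fly (a sketch; performance should be validated on your own data):
 
    .. code-block:: postgres
 
       SELECT t_a.i, t_b.j
       FROM t_a
       JOIN t_b ON crc64(t_a.fk) = crc64(t_b.id);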
@@ -726,10 +726,10 @@ Improving Query Performance
    
 * You can map some text values to numeric types by using a dimension table. Then, reconcile the values when you need them by joining the dimension table.
 
-5. Sorting on big ``VARCHAR`` fields
+5. Sorting on big ``TEXT`` fields
 ---------------------------------------
 In general, SQream DB automatically inserts a ``Sort`` node which arranges the data prior to reductions and aggregations.
-When running a ``GROUP BY`` on large ``VARCHAR`` fields, you may see nodes for ``Sort`` and ``Reduce`` taking a long time.
+When running a ``GROUP BY`` on large ``TEXT`` fields, you may see nodes for ``Sort`` and ``Reduce`` taking a long time.
 
 Identifying the Situation
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -749,9 +749,9 @@ For example:
          i INT NOT NULL,
          amt DOUBLE NOT NULL,
          ts DATETIME NOT NULL,
-         country_code VARCHAR(100) NOT NULL,
-         flag VARCHAR(10) NOT NULL,
-         string_fk VARCHAR(50) NOT NULL
+         country_code TEXT(100) NOT NULL,
+         flag TEXT(10) NOT NULL,
+         string_fk TEXT(50) NOT NULL
       );
    
    We will run a query, and inspect it's execution details:
@@ -800,16 +800,16 @@ For example:
       max
       ---
       3
-   With a maximum string length of just 3 characters, our ``VARCHAR(100)`` is way oversized.
+   With a maximum string length of just 3 characters, our ``TEXT(100)`` is way oversized.
 #. 
-   We can recreate the table with a more restrictive ``VARCHAR(3)``, and can examine the difference in performance:
+   We can recreate the table with a more restrictive ``TEXT(3)``, and can examine the difference in performance:
    
    .. code-block:: psql
       t=> CREATE TABLE t_efficient 
       .     AS SELECT i,
       .              amt,
       .              ts,
-      .              country_code::VARCHAR(3) AS country_code,
+      .              country_code::TEXT(3) AS country_code,
       .              flag
       .         FROM t_inefficient;
       executed
@@ -832,8 +832,8 @@ For example:
 
 Improving Sort Performance on Text Keys
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-When using VARCHAR, ensure that the maximum length defined in the table structure is as small as necessary.
-For example, if you're storing phone numbers, don't define the field as ``VARCHAR(255)``, as that affects sort performance.
+When using TEXT, ensure that the maximum length defined in the table structure is as small as necessary.
+For example, if you're storing phone numbers, don't define the field as ``TEXT(255)``, as that affects sort performance.
    
 You can run a query to get the maximum column length (e.g. ``MAX(LEN(a_column))``), and potentially modify the table structure.
 
diff --git a/operational_guides/nba-t10.csv b/operational_guides/nba-t10.csv
new file mode 100644
index 000000000..024530355
--- /dev/null
+++ b/operational_guides/nba-t10.csv
@@ -0,0 +1,10 @@
+Name,Team,Number,Position,Age,Height,Weight,College,Salary
+Avery Bradley,Boston Celtics,0.0,PG,25.0,6-2,180.0,Texas,7730337.0
+Jae Crowder,Boston Celtics,99.0,SF,25.0,6-6,235.0,Marquette,6796117.0
+John Holland,Boston Celtics,30.0,SG,27.0,6-5,205.0,Boston University,
+R.J. Hunter,Boston Celtics,28.0,SG,22.0,6-5,185.0,Georgia State,1148640.0
+Jonas Jerebko,Boston Celtics,8.0,PF,29.0,6-10,231.0,,5000000.0
+Amir Johnson,Boston Celtics,90.0,PF,29.0,6-9,240.0,,12000000.0
+Jordan Mickey,Boston Celtics,55.0,PF,21.0,6-8,235.0,LSU,1170960.0
+Kelly Olynyk,Boston Celtics,41.0,C,25.0,7-0,238.0,Gonzaga,2165160.0
+Terry Rozier,Boston Celtics,12.0,PG,22.0,6-2,190.0,Louisville,1824360.0
diff --git a/operational_guides/optimization_best_practices.rst b/operational_guides/optimization_best_practices.rst
index 1cc0ca01e..51982f527 100644
--- a/operational_guides/optimization_best_practices.rst
+++ b/operational_guides/optimization_best_practices.rst
@@ -20,20 +20,7 @@ This section describes best practices and guidelines for designing tables.
 Use date and datetime types for columns
 -----------------------------------------
 
-When creating tables with dates or timestamps, using the purpose-built ``DATE`` and ``DATETIME`` types over integer types or ``VARCHAR`` will bring performance and storage footprint improvements, and in many cases huge performance improvements (as well as data integrity benefits). SQream DB stores dates and datetimes very efficiently and can strongly optimize queries using these specific types.
-
-Reduce varchar length to a minimum
---------------------------------------
-
-With the ``VARCHAR`` type, the length has a direct effect on query performance.
-
-If the size of your column is predictable, by defining an appropriate column length (no longer than the maximum actual value) you will get the following benefits:
-
-* Data loading issues can be identified more quickly
-
-* SQream DB can reserve less memory for decompression operations
-
-* Third-party tools that expect a data size are less likely to over-allocate memory
+When creating tables with dates or timestamps, using the purpose-built ``DATE`` and ``DATETIME`` types over integer types or ``TEXT`` will bring performance and storage footprint improvements, and in many cases huge performance improvements (as well as data integrity benefits). SQream DB stores dates and datetimes very efficiently and can strongly optimize queries using these specific types.
 
 Don't flatten or denormalize data
 -----------------------------------
@@ -61,7 +48,6 @@ The one situation when this wouldn't be as useful is when data will be only quer
 
 Use information about the column data to your advantage
 -------------------------------------------------------------
-
 Knowing the data types and their ranges can help design a better table.
 
 Set ``NULL`` or ``NOT NULL`` when relevant
@@ -71,14 +57,6 @@ For example, if a value can't be missing (or ``NULL``), specify a ``NOT NULL`` c
 
 Not only does specifying ``NOT NULL`` save on data storage, it lets the query compiler know that a column cannot have a ``NULL`` value, which can improve query performance.
 
-Keep VARCHAR lengths to a minimum
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-While it won't make a big difference in storage, large strings allocate a lot of memory at query time.
-
-If a column's string length never exceeds 50 characters, specify ``VARCHAR(50)`` rather than an arbitrarily large number.
-
-
 Sorting 
 ==============
 
@@ -86,7 +64,7 @@ Data sorting is an important factor in minimizing storage size and improving que
 
 * Minimizing storage saves on physical resources and increases performance by reducing overall disk I/O. Prioritize the sorting of low-cardinality columns. This reduces the number of chunks and extents that SQream DB reads during query execution.
 
-* Where possible, sort columns with the lowest cardinality first. Avoid sorting ``VARCHAR`` and ``TEXT/NVARCHAR`` columns with lengths exceeding 50 characters.
+* Where possible, sort columns with the lowest cardinality first. Avoid sorting ``TEXT`` columns with lengths exceeding 50 characters.
 
 * For longer-running queries that run on a regular basis, performance can be improved by sorting data based on the ``WHERE`` and ``GROUP BY`` parameters. Data can be sorted during insert by using :ref:`external_tables` or by using :ref:`create_table_as`.
 
diff --git a/operational_guides/s3.rst b/operational_guides/s3.rst
index bba878830..5e4f8b264 100644
--- a/operational_guides/s3.rst
+++ b/operational_guides/s3.rst
@@ -1,24 +1,22 @@
 .. _s3:
 
 ***********************
-Amazon S3
+Inserting Data Using Amazon S3
 ***********************
+SQream uses a native S3 connector to insert data from a number of external sources directly into SQream. This is done using the ``s3://`` URI to specify an external file path on an S3 bucket. Your files can be saved in CSV or columnar format, such as Parquet and ORC, and your file names can include wildcard characters.
 
-SQream uses a native S3 connector for inserting data. The ``s3://`` URI specifies an external file path on an S3 bucket. File names may contain wildcard characters, and the files can be in CSV or columnar format, such as Parquet and ORC.
-
-The **Amazon S3** describes the following topics:
+The **Amazon S3** page describes the following topics:
 
 .. contents::
    :local:
+   :depth: 1
    
-S3 Configuration
+Configuring Amazon S3
 ==============================
-
 Any database host with access to S3 endpoints can access S3 without any configuration. To read files from an S3 bucket, the database must have listable files.
 
-S3 URI Format
+Setting the S3 URI Format
 ===============
-
 With S3, specify a location for a file (or files) when using :ref:`copy_from` or :ref:`external_tables`.
 
 The following is an example of the general S3 syntax:
@@ -27,53 +25,48 @@ The following is an example of the general S3 syntax:
  
    s3://bucket_name/path
 
-Authentication
+Authenticating Users
 =================
 
 SQream supports ``AWS ID`` and ``AWS SECRET`` authentication. These should be specified when executing a statement.
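 
 As an illustrative sketch (the bucket path and credential values are placeholders, and the exact option syntax should be checked against the :ref:`copy_from` reference for your version):
 
 .. code-block:: postgres
 
    COPY nba FROM 's3://my-bucket/nba.csv'
       WITH AWS_ID 'my_aws_id'
            AWS_SECRET 'my_aws_secret';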
 
 Examples
 ==========
+You can use a foreign table to stage data from S3 before loading from CSV, Parquet, or ORC files.
 
-Use a foreign table to stage data from S3 before loading from CSV, Parquet, or ORC files.
-
-The **Examples** section includes the following examples:
+This section includes the following examples:
 
 .. contents::
    :local:
    :depth: 1
 
-
-
 Planning for Data Staging
 --------------------------------
-
-The examples in this section are based on a CSV file, as shown in the following table:
-
-.. csv-table:: nba.csv
-   :file: ../nba-t10.csv
+The examples in this section are based on the CSV file shown in the following table: 
+   
+.. csv-table:: nba-t10
+   :file: ../_static/samples/nba-t10.csv
    :widths: auto
-   :header-rows: 1 
+   :header-rows: 1
 
-The file is stored on Amazon S3, and this bucket is public and listable. To create a matching ``CREATE FOREIGN TABLE`` statement you can make note of the file structure.
+This CSV file is stored on Amazon S3 in a public, listable bucket. To create a matching ``CREATE FOREIGN TABLE`` statement, make a note of your source file's structure and use it to reproduce a corresponding foreign table, as shown in the following section.
 
 Creating a Foreign Table
 -----------------------------
-
-Based on the source file's structure, you can create a foreign table with the appropriate structure, and point it to your file as shown in the following example:
+Based on the source file's structure above, you can create a foreign table with the structure you want and point it to your file, as shown in the following example:
 
 .. code-block:: postgres
    
    CREATE FOREIGN TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     )
     WRAPPER csv_fdw
@@ -84,17 +77,16 @@ Based on the source file's structure, you can create a foreign table with the ap
       )
     ;
 
-In the example above the file format is CSV, and it is stored as an S3 object. If the path is on HDFS, you must change the URI accordingly. Note that the record delimiter is a DOS newline (``\r\n``).
+.. note:: In the example above, the file is in CSV format and is stored as an S3 object. If your file is on HDFS, you must change the URI accordingly. Note that the record delimiter is a DOS newline (``\r\n``).
 
 For more information, see the following:
 
-* **Creating a foreign table** - see :ref:`create a foreign table`.
+* **Creating a foreign table** - see :ref:`creating a foreign table`.
 * **Using SQream in an HDFS environment** - see :ref:`hdfs`.
 
 Querying Foreign Tables
 ------------------------------
-
-The following shows the data in the foreign table:
+The following shows the data in the foreign table:
 
 .. code-block:: psql
    
@@ -114,8 +106,7 @@ The following shows the data in the foreign table:
    
 Bulk Loading a File from a Public S3 Bucket
 ----------------------------------------------
-
-The ``COPY FROM`` command can also be used to load data without staging it first.
+You can use the ``COPY FROM`` command to load data without staging it first.
 
 .. note:: The bucket must be publicly available and objects can be listed.
 
@@ -135,4 +126,4 @@ The following is an example of loading files from an authenticated S3 bucket:
 
    COPY nba FROM 's3://secret-bucket/*.csv' WITH OFFSET 2 RECORD DELIMITER '\r\n' 
    AWS_ID '12345678'
-   AWS_SECRET 'super_secretive_secret';
+   AWS_SECRET 'super_secretive_secret';
\ No newline at end of file
diff --git a/operational_guides/saved_queries.rst b/operational_guides/saved_queries.rst
index d554b4dc8..2ec42f247 100644
--- a/operational_guides/saved_queries.rst
+++ b/operational_guides/saved_queries.rst
@@ -4,7 +4,12 @@
 Saved Queries
 ***********************
 
-Saved queries can be used to reuse a query plan for a query to eliminate compilation times for repeated queries. They also provide a way to implement 'parameterized views'. 
+Using the ``save_query`` command both generates and saves an execution plan, saving compilation time for frequently used complex queries.
+
+Note that a saved execution plan is tightly coupled with the structure of its underlying tables; if any object referenced in the query is modified, the saved query must be re-created.
+
 
 How saved queries work
 ==========================
@@ -14,11 +19,11 @@ Saved queries are compiled when they are created. When a saved query is run, thi
 Parameters support
 ===========================
 
-Query parameters can be used as substitutes for literal expressions in queries.
+Query parameters can be used as substitutes for constant expressions in queries.
 
-* Parameters cannot be used to substitute things like column names and table names.
+* Parameters cannot be used to substitute identifiers like column names and table names.
 
-* Query parameters of a string datatype (like ``VARCHAR``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc.)
+* Query parameters of a string datatype (like ``TEXT``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc.)
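+
+For example, parameters (written as ``?``) can stand in for constant values such as a weight threshold or a team name; the sketch below reuses the saved-query names listed later on this page:
+
+.. code-block:: postgres
+
+   SELECT SAVE_QUERY('select_by_weight_and_team',
+      'SELECT * FROM nba WHERE "Weight" > ? AND "Team" = ?');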
 
 Creating a saved query
 ======================
@@ -51,32 +56,8 @@ Use parameters to replace them later at execution time.
 ..   executed
 
 
-Listing and executing saved queries
-======================================
-
-Saved queries are saved as a database objects. They can be listed in one of two ways:
-
-Using the :ref:`catalog`:
-
-.. code-block:: psql
-
-   t=> SELECT * FROM sqream_catalog.savedqueries;
-   name                      | num_parameters
-   --------------------------+---------------
-   select_all                |              0
-   select_by_weight          |              1
-   select_by_weight_and_team |              2
-
-Using the :ref:`list_saved_queries` utility function:
-
-.. code-block:: psql
-
-   t=> SELECT LIST_SAVED_QUERIES();
-   saved_query              
-   -------------------------
-   select_all               
-   select_by_weight         
-   select_by_weight_and_team
+Executing saved queries
+=======================
 
-Executing a saved query requires calling it by it's name in a :ref:`execute_saved_query` statement. A saved query with no parameter is called without parameters.
+Executing a saved query requires calling it by its name in an :ref:`execute_saved_query` statement. A saved query with no parameters is called without any arguments.
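+
+For example, a saved query without parameters, such as ``select_all``, is executed as follows:
+
+.. code-block:: postgres
+
+   SELECT EXECUTE_SAVED_QUERY('select_all');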
 
@@ -103,6 +84,33 @@ Executing a saved query with parameters requires specifying the parameters in th
    Jason Thompson    | Toronto Raptors |      1 | PF       |  29 | 6-11   |    250 | Rider       |  245177
    Jonas Valanciunas | Toronto Raptors |     17 | C        |  24 | 7-0    |    255 |             | 4660482
 
+Listing saved queries
+=======================
+
+Saved queries are saved as database objects. They can be listed in one of two ways:
+
+Using the :ref:`catalog`:
+
+.. code-block:: psql
+
+   t=> SELECT * FROM sqream_catalog.savedqueries;
+   name                      | num_parameters
+   --------------------------+---------------
+   select_all                |              0
+   select_by_weight          |              1
+   select_by_weight_and_team |              2
+
+Using the :ref:`list_saved_queries` utility function:
+
+.. code-block:: psql
+
+   t=> SELECT LIST_SAVED_QUERIES();
+   saved_query              
+   -------------------------
+   select_all               
+   select_by_weight         
+   select_by_weight_and_team
+
 
 Dropping a saved query
 =============================
@@ -119,4 +127,4 @@ When you're done with a saved query, or would like to replace it with another, y
    t=> SELECT LIST_SAVED_QUERIES();
    saved_query              
    -------------------------
-   select_by_weight         
+   select_by_weight         
\ No newline at end of file
diff --git a/operational_guides/seeing_system_objects_as_ddl.rst b/operational_guides/seeing_system_objects_as_ddl.rst
index 4f9f596dd..2aacb49e8 100644
--- a/operational_guides/seeing_system_objects_as_ddl.rst
+++ b/operational_guides/seeing_system_objects_as_ddl.rst
@@ -22,7 +22,7 @@ Getting the DDL for a table
    farm=> SELECT GET_DDL('cool_animals');
    create table "public"."cool_animals" (
      "id" int not null,
-     "name" varchar(30) not null,
+     "name" text(30) not null,
      "weight" double null,
      "is_agressive" bool default false not null )
      ;
@@ -142,7 +142,7 @@ Exporting database DDL to a client
    farm=> SELECT DUMP_DATABASE_DDL();
    create table "public"."cool_animals" (
      "id" int not null,
-     "name" varchar(30) not null,
+     "name" text(30) not null,
      "weight" double null,
      "is_agressive" bool default false not null
    )
diff --git a/reference/catalog_reference.rst b/reference/catalog_reference.rst
index 8cfa8e832..8fc0593b8 100644
--- a/reference/catalog_reference.rst
+++ b/reference/catalog_reference.rst
@@ -1,606 +1,16 @@
 .. _catalog_reference:
 
 *************************************
-Catalog reference
+Catalog Reference Guide
 *************************************
+The **Catalog Reference Guide** describes the following:
 
-SQream DB contains a schema called ``sqream_catalog`` that contains information about your database's objects - tables, columns, views, permissions, and more.
+.. toctree::
+   :maxdepth: 1
+   :glob:
 
-Some additional catalog tables are used primarily for internal introspection, which could change across SQream DB versions.
-
-
-.. contents:: In this topic:
-   :local:
-
-Types of data exposed by ``sqream_catalog``
-==============================================
-
-.. list-table:: Database objects
-   :widths: auto
-   :header-rows: 1
-   
-   * - Object
-     - Table
-   * - Clustering keys
-     - ``clustering_keys``
-   * - Columns
-     - ``columns``, ``external_table_columns``
-   * - Databases
-     - ``databases``
-   * - Permissions
-     - ``table_permissions``, ``database_permissions``, ``schema_permissions``, ``permission_types``, ``udf_permissions``
-   * - Roles
-     - ``roles``, ``roles_memeberships``
-   * - Schemas
-     - ``schemas``
-   * - Sequences
-     - ``identity_key``
-   * - Tables
-     - ``tables``, ``external_tables``
-   * - Views
-     - ``views``
-   * - UDFs
-     - ``user_defined_functions``
-
-The catalog contains a few more tables which contain storage details for internal use
-
-.. list-table:: Storage objects
-   :widths: auto
-   :header-rows: 1
-   
-   * - Object
-     - Table
-   * - Extents
-     - ``extents``
-   * - Chunks
-     - ``chunks``
-   * - Delete predicates
-     - ``delete_predicates``
-
-Tables in the catalog
-========================
-
-clustering_keys
------------------------
-
-Explicit clustering keys for tables.
-
-When more than one clustering key is defined, each key is listed in a separate row.
-
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the table
-   * - ``table_id``
-     - ID of the table containing the column
-   * - ``schema_name``
-     - Name of the schema containing the table
-   * - ``table_name``
-     - Name of the table containing the column
-   * - ``clustering_key``
-     - Name of the column that is a clustering key for this table
-
-columns
---------
-
-Column objects for standard tables
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the table
-   * - ``schema_name``
-     - Name of the schema containing the table
-   * - ``table_id``
-     - ID of the table containing the column
-   * - ``table_name``
-     - Name of the table containing the column
-   * - ``column_id``
-     - Ordinal of the column in the table (begins at 0)
-   * - ``column_name``
-     - Name of the column
-   * - ``type_name``
-     - :ref:`Data type ` of the column
-   * - ``column_size``
-     - The maximum length in bytes.
-   * - ``has_default``
-     - ``NULL`` if the column has no default value. ``1`` if the default is a fixed value, or ``2`` if the default is an :ref:`identity`
-   * - ``default_value``
-     - :ref:`Default value` for the column
-   * - ``compression_strategy``
-     - User-overridden compression strategy
-   * - ``created``
-     - Timestamp when the column was created
-   * - ``altered``
-     - Timestamp when the column was last altered
-
-
-.. _external_tables_table:
-
-external_tables
-----------------
-
-``external_tables`` identifies external tables in the database.
-
-For ``TABLES`` see :ref:`tables `
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the table
-   * - ``table_id``
-     - Database-unique ID for the table
-   * - ``schema_name``
-     - Name of the schema containing the table
-   * - ``table_name``
-     - Name of the table
-   * - ``format``
-     - 
-         Identifies the foreign data wrapper used.
-      
-         ``0`` for csv_fdw, ``1`` for parquet_fdw, ``2`` for orc_fdw.
-         
-   * - ``created``
-     - Identifies the clause used to create the table
-
-external_table_columns
-------------------------
-
-Column objects for external tables
-
-databases
------------
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_Id``
-     - Unique ID of the database
-   * - ``database_name``
-     - Name of the database
-   * - ``default_disk_chunk_size``
-     - Internal use
-   * - ``default_process_chunk_size``
-     - Internal use
-   * - ``rechunk_size``
-     - Internal use
-   * - ``storage_subchunk_size``
-     - Internal use
-   * - ``compression_chunk_size_threshold``
-     - Internal use
-
-database_permissions
-----------------------
-
-``database_permissions`` identifies all permissions granted to databases. 
-
-There is one row for each combination of role (grantee) and permission granted to a database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database the permission applies to
-   * - ``role_id``
-     - ID of the role granted permissions (grantee)
-   * - ``permission_type``
-     - Identifies the permission type
-  
-
-identity_key
---------------
-
-
-permission_types
-------------------
-
-``permission_types`` Identifies the permission names that exist in the database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``permission_type_id``
-     - ID of the permission type
-   * - ``name``
-     - Name of the permission type
-
-roles
-------
-
-``roles`` identifies the roles in the database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``role_id``
-     - Database-unique ID of the role
-   * - ``name``
-     - Name of the role
-   * - ``superuser``
-     - Identifies if this role is a superuser. ``1`` for superuser or ``0`` otherwise.
-   * - ``login``
-     - Identifies if this role can be used to log in to SQream DB. ``1`` for yes or ``0`` otherwise.
-   * - ``has_password``
-     - Identifies if this role has a password. ``1`` for yes or ``0`` otherwise.
-   * - ``can_create_function``
-     - Identifies if this role can create UDFs. ``1`` for yes, ``0`` otherwise.
-     
-roles_memberships
--------------------
-
-``roles_memberships`` identifies the role memberships in the database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``role_id``
-     - Role ID
-   * - ``member_role_id``
-     - ID of the parent role from which this role will inherit
-   * - ``inherit``
-     - Identifies if permissions are inherited. ``1`` for yes or ``0`` otherwise.
-
-savedqueries
-----------------
-
-``savedqueries`` identifies the :ref:`saved_queries` in the database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``name``
-     - Saved query name
-   * - ``num_parameters``
-     - Number of parameters to be replaced at run-time
-
-schemas
-----------
-
-``schemas`` identifies all the database's schemas.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``schema_id``
-     - Unique ID of the schema
-   * - ``schema_name``
-     - Name of the schema
-   * - ``schema_owner``
-     - Name of the role who owns this schema
-   * - ``rechunker_ignore``
-     - Internal use
-
-
-schema_permissions
---------------------
-
-``schema_permissions`` identifies all permissions granted to schemas. 
-
-There is one row for each combination of role (grantee) and permission granted to a schema.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the schema
-   * - ``schema_id``
-     - ID of the schema the permission applies to
-   * - ``role_id``
-     - ID of the role granted permissions (grantee)
-   * - ``permission_type``
-     - Identifies the permission type
-  
-
-.. _tables_table:
-
-tables
-----------
-
-``tables`` identifies proper SQream tables in the database.
-
-For ``EXTERNAL TABLES`` see :ref:`external_tables `
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the table
-   * - ``table_id``
-     - Database-unique ID for the table
-   * - ``schema_name``
-     - Name of the schema containing the table
-   * - ``table_name``
-     - Name of the table
-   * - ``row_count_valid``
-     - Identifies if the ``row_count`` can be used
-   * - ``row_count``
-     - Number of rows in the table
-   * - ``rechunker_ignore``
-     - Internal use
-
-
-table_permissions
-------------------
-
-``table_permissions`` identifies all permissions granted to tables. 
-
-There is one row for each combination of role (grantee) and permission granted to a table.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the table
-   * - ``table_id``
-     - ID of the table the permission applies to
-   * - ``role_id``
-     - ID of the role granted permissions (grantee)
-   * - ``permission_type``
-     - Identifies the permission type
-  
-
-udf_permissions
-------------------
-
-user_defined_functions
--------------------------
-
-``user_defined_functions`` identifies UDFs in the database. 
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the database containing the view
-   * - ``function_id``
-     - Database-unique ID for the UDF
-   * - ``function_name``
-     - Name of the UDF
-
-views
--------
-
-``views`` identifies views in the database.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``view_id``
-     - Database-unique ID for the view
-   * - ``view_schema``
-     - Name of the schema containing the view
-   * - ``view_name``
-     - Name of the view
-   * - ``view_data``
-     - Internal use
-   * - ``view_query_text``
-     - Identifies the ``AS`` clause used to create the view
-
-
-Additional tables 
-======================
-
-There are additional tables in the catalog that can be used for performance monitoring and inspection.
-
-The definition for these tables is provided below could change across SQream DB versions.
-
-extents
-----------
-
-``extents`` identifies storage extents.
-
-Each storage extents can contain several chunks.
-
-.. note:: This is an internal table designed for low-level performance troubleshooting.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the databse containing the extent
-   * - ``table_id``
-     - ID of the table containing the extent
-   * - ``column_id``
-     - ID of the column containing the extent
-   * - ``extent_id``
-     - ID for the extent
-   * - ``size``
-     - Extent size in megabytes
-   * - ``path``
-     - Full path to the extent on the file system
-
-chunk_columns
--------------------
-
-``chunk_columns`` lists chunk information by column.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the databse containing the extent
-   * - ``table_id``
-     - ID of the table containing the extent
-   * - ``column_id``
-     - ID of the column containing the extent
-   * - ``chunk_id``
-     - ID for the chunk
-   * - ``extent_id``
-     - ID for the extent
-   * - ``compressed_size``
-     - Actual chunk size in bytes
-   * - ``uncompressed_size``
-     - Uncompressed chunk size in bytes
-   * - ``compression_type``
-     - Actual compression scheme for this chunk
-   * - ``long_min``
-     - Minimum numeric value in this chunk (if exists)
-   * - ``long_max``
-     - Maximum numeric value in this chunk (if exists)
-   * - ``string_min``
-     - Minimum text value in this chunk (if exists)
-   * - ``string_max``
-     - Maximum text value in this chunk (if exists)
-   * - ``offset_in_file``
-     - Internal use
-
-.. note:: This is an internal table designed for low-level performance troubleshooting.
-
-chunks
--------
-
-``chunks`` identifies storage chunks.
-
-.. note:: This is an internal table designed for low-level performance troubleshooting.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the databse containing the chunk
-   * - ``table_id``
-     - ID of the table containing the chunk
-   * - ``column_id``
-     - ID of the column containing the chunk
-   * - ``rows_num``
-     - Amount of rows contained in the chunk
-   * - ``deletion_status``
-     - When data is deleted from the table, it is first deleted logically. This value identifies how much data is deleted from the chunk. ``0`` for no data, ``1`` for some data, ``2`` to specify the entire chunk is deleted.
-
-delete_predicates
--------------------
-
-``delete_predicates`` identifies the existing delete predicates that have not been cleaned up.
-
-Each :ref:`DELETE ` command may result in several entries in this table.
-
-.. note:: This is an internal table designed for low-level performance troubleshooting.
-
-.. list-table::
-   :widths: auto
-   :header-rows: 1
-   
-   * - Column
-     - Description
-   * - ``database_name``
-     - Name of the databse containing the predicate
-   * - ``table_id``
-     - ID of the table containing the predicate
-   * - ``max_chunk_id``
-     - Internal use. Placeholder marker for the highest ``chunk_id`` logged during the DELETE operation.
-   * - ``delete_predicate``
-     - Identifies the DELETE predicate
-
-
-Examples
-===========
-
-List all tables in the database
-----------------------------------
-
-.. code-block:: psql
-
-   master=> SELECT * FROM sqream_catalog.tables;
-   database_name | table_id | schema_name | table_name     | row_count_valid | row_count | rechunker_ignore
-   --------------+----------+-------------+----------------+-----------------+-----------+-----------------
-   master        |        1 | public      | nba            | true            |       457 |                0
-   master        |       12 | public      | cool_dates     | true            |         5 |                0
-   master        |       13 | public      | cool_numbers   | true            |         9 |                0
-   master        |       27 | public      | jabberwocky    | true            |         8 |                0
-
-List all schemas in the database
-------------------------------------
-
-.. code-block:: psql
-   
-   master=> SELECT * FROM sqream_catalog.schemas;
-   schema_id | schema_name   | schema_owner | rechunker_ignore
-   ----------+---------------+--------------+-----------------
-           0 | public        | sqream       | false           
-           1 | secret_schema | mjordan      | false           
-
-
-List columns and their types for a specific table
----------------------------------------------------
-
-.. code-block:: postgres
-
-   SELECT column_name, type_name 
-   FROM sqream_catalog.columns
-   WHERE table_name='cool_animals';
-
-List delete predicates
-------------------------
-
-.. code-block:: postgres
-
-   SELECT  t.table_name, d.*  FROM 
-   sqream_catalog.delete_predicates AS d  
-   INNER JOIN sqream_catalog.tables AS t  
-   ON d.table_id=t.table_id;
-
-
-List :ref:`saved_queries`
------------------------------
-
-.. code-block:: postgres
-
-   SELECT * FROM sqream_catalog.savedqueries;
+   catalog_reference_overview
+   catalog_reference_schema_information
+   catalog_reference_catalog_tables
+   catalog_reference_additonal_tables
+   catalog_reference_examples
\ No newline at end of file
diff --git a/reference/catalog_reference_additonal_tables.rst b/reference/catalog_reference_additonal_tables.rst
new file mode 100644
index 000000000..7d34429d3
--- /dev/null
+++ b/reference/catalog_reference_additonal_tables.rst
@@ -0,0 +1,120 @@
+.. _catalog_reference_additonal_tables:
+
+*************************************
+Additional Tables
+*************************************
+The Reference Catalog includes additional tables that can be used for performance monitoring and inspection. The definitions of the tables described on this page may change across SQream versions.
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Extents
+----------
+The ``extents`` storage object identifies storage extents. Each storage extent can contain several chunks.
+
+.. note:: This is an internal table designed for low-level performance troubleshooting.
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the extent.
+   * - ``table_id``
+     - Shows the ID of the table containing the extent.
+   * - ``column_id``
+     - Shows the ID of the column containing the extent.
+   * - ``extent_id``
+     - Shows the ID for the extent.
+   * - ``size``
+     - Shows the extent size in megabytes.
+   * - ``path``
+     - Shows the full path to the extent on the file system.
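+
+As a sketch, extent sizes can be aggregated per table by joining on ``table_id`` against ``sqream_catalog.tables``:
+
+.. code-block:: postgres
+
+   SELECT t.table_name, SUM(e.size) AS total_mb
+   FROM sqream_catalog.extents AS e
+   INNER JOIN sqream_catalog.tables AS t ON e.table_id = t.table_id
+   GROUP BY t.table_name;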
+
+Chunk Columns
+-------------------
+The ``chunk_columns`` storage object lists chunk information by column.
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the extent.
+   * - ``table_id``
+     - Shows the ID of the table containing the extent.
+   * - ``column_id``
+     - Shows the ID of the column containing the extent.
+   * - ``chunk_id``
+     - Shows the chunk ID.
+   * - ``extent_id``
+     - Shows the extent ID.
+   * - ``compressed_size``
+     - Shows the compressed chunk size in bytes.
+   * - ``uncompressed_size``
+     - Shows the uncompressed chunk size in bytes.
+   * - ``compression_type``
+     - Shows the chunk's actual compression scheme.
+   * - ``long_min``
+     - Shows the minimum numeric value in the chunk (if one exists).
+   * - ``long_max``
+     - Shows the maximum numeric value in the chunk (if one exists).
+   * - ``string_min``
+     - Shows the minimum text value in the chunk (if one exists).
+   * - ``string_max``
+     - Shows the maximum text value in the chunk (if one exists).
+   * - ``offset_in_file``
+     - Reserved for internal use.
+
+.. note:: This is an internal table designed for low-level performance troubleshooting.
+
+Chunks
+-------
+The ``chunks`` storage object identifies storage chunks.
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the chunk.
+   * - ``table_id``
+     - Shows the ID of the table containing the chunk.
+   * - ``column_id``
+     - Shows the ID of the column containing the chunk.
+   * - ``rows_num``
+     - Shows the amount of rows in the chunk.
+   * - ``deletion_status``
+     - When data is deleted from a table, it is first deleted logically. This value identifies how much of the chunk's data has been deleted: ``0`` for no data, ``1`` for some data, and ``2`` when the entire chunk is deleted.
+
+.. note:: This is an internal table designed for low-level performance troubleshooting.
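+
+For example, chunks containing logically deleted rows can be found by filtering on ``deletion_status``:
+
+.. code-block:: postgres
+
+   SELECT table_id, column_id, rows_num
+   FROM sqream_catalog.chunks
+   WHERE deletion_status > 0;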
+
+Delete Predicates
+-------------------
+The ``delete_predicates`` storage object identifies the existing delete predicates that have not been cleaned up.
+
+Each :ref:`DELETE ` command may result in several entries in this table.
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the predicate.
+   * - ``table_id``
+     - Shows the ID of the table containing the predicate.
+   * - ``max_chunk_id``
+     - Reserved for internal use, this is a placeholder marker for the highest ``chunk_id`` logged during the ``DELETE`` operation.
+   * - ``delete_predicate``
+     - Identifies the DELETE predicate.
+
+.. note:: This is an internal table designed for low-level performance troubleshooting.
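+
+The following query, adapted from the catalog examples, lists delete predicates together with their table names:
+
+.. code-block:: postgres
+
+   SELECT t.table_name, d.*
+   FROM sqream_catalog.delete_predicates AS d
+   INNER JOIN sqream_catalog.tables AS t
+   ON d.table_id = t.table_id;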
\ No newline at end of file
diff --git a/reference/catalog_reference_catalog_tables.rst b/reference/catalog_reference_catalog_tables.rst
new file mode 100644
index 000000000..4f4d60b76
--- /dev/null
+++ b/reference/catalog_reference_catalog_tables.rst
@@ -0,0 +1,453 @@
+.. _catalog_reference_catalog_tables:
+
+*************************************
+Catalog Tables
+*************************************
+The ``sqream_catalog`` includes the following tables:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+.. _clustering_keys:
+   
+Clustering Keys
+----------------
+The ``clustering_keys`` data object lists the explicit clustering keys defined for tables. If you define more than one clustering key, each key is listed in a separate row, as described in the following table:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the table.
+   * - ``table_id``
+     - Shows the ID of the table containing the column.
+   * - ``schema_name``
+     - Shows the name of the schema containing the table.
+   * - ``table_name``
+     - Shows the name of the table containing the column.
+   * - ``clustering_key``
+     - Shows the name of the column used as a clustering key for this table.
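+
+For example, to list all clustering keys defined in the database:
+
+.. code-block:: postgres
+
+   SELECT * FROM sqream_catalog.clustering_keys;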
+
+.. _columns:
+
+Columns
+----------------
+This section describes the following column-related data objects:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Columns
+***********
+The ``columns`` data object is used with standard tables and is described in the following table:
+
+.. list-table::
+   :widths: 20 150
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the table.
+   * - ``schema_name``
+     - Shows the name of the schema containing the table.
+   * - ``table_id``
+     - Shows the ID of the table containing the column.
+   * - ``table_name``
+     - Shows the name of the table containing the column.
+   * - ``column_id``
+     - Shows the ordinal number of the column in the table (begins at **0**).
+   * - ``column_name``
+     - Shows the column's name.
+   * - ``type_name``
+     - Shows the column's data type. For more information see :ref:`Supported Data Types `.
+   * - ``column_size``
+     - Shows the maximum length in bytes.
+   * - ``has_default``
+     - Shows ``NULL`` if the column has no default value, ``1`` if the default is a fixed value, or ``2`` if the default is an identity. For more information, see :ref:`identity`.
+   * - ``default_value``
+     - Shows the column's default value. For more information, see :ref:`Default Value Constraints`.
+   * - ``compression_strategy``
+     - Shows the compression strategy that a user has overridden.
+   * - ``created``
+     - Shows the timestamp displaying when the column was created.
+   * - ``altered``
+     - Shows the timestamp displaying when the column was last altered.
+
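+For example, the following query (taken from the catalog examples) lists the columns and their types for a hypothetical table named ``cool_animals``:
+
+.. code-block:: postgres
+
+   SELECT column_name, type_name
+   FROM sqream_catalog.columns
+   WHERE table_name = 'cool_animals';
+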
+External Table Columns
+**********************
+The ``external_table_columns`` data object is used for viewing data from foreign tables.
+
+For more information on foreign tables, see :ref:`CREATE FOREIGN TABLE`.
+
+.. _databases:
+
+Databases
+----------------
+The ``databases`` data object is used for displaying database information, and is described in the following table:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_Id``
+     - Shows the database's unique ID.
+   * - ``database_name``
+     - Shows the database's name.
+   * - ``default_disk_chunk_size``
+     - Reserved for internal use.
+   * - ``default_process_chunk_size``
+     - Reserved for internal use.
+   * - ``rechunk_size``
+     - Reserved for internal use.
+   * - ``storage_subchunk_size``
+     - Reserved for internal use.
+   * - ``compression_chunk_size_threshold``
+     - Reserved for internal use.
+
+.. _permissions:
+
+Permissions
+----------------
+The ``permissions`` data object is used for displaying permissions information, such as roles (also known as **grantees**), and is described in the following tables:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Permission Types
+****************
+The ``permission_types`` object identifies the permission names existing in the database.
+
+The following table describes the ``permission_types`` data object:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``permission_type_id``
+     - Shows the permission type's ID.
+   * - ``name``
+     - Shows the name of the permission type.
+   
+Default Permissions
+*******************
+The commands included in the **Default Permissions** section describe how to check the following default permissions:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Default Table Permissions
+~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``sqream_catalog.table_default_permissions`` catalog table contains the columns described below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the database that the default permission rule applies to.
+   * - ``schema_id``
+     - Shows the schema that the rule applies to, or ``NULL`` if the ``ALTER`` statement does not specify a schema.
+   * - ``modifier_role_id``
+     - Shows the role to apply the rule to.
+   * - ``getter_role_id``
+     - Shows the role that the permission is granted to.
+   * - ``permission_type``
+     - Shows the type of permission granted.
+	 
+Default Schema Permissions
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+The ``sqream_catalog.schema_default_permissions`` catalog table contains the columns described below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the database that the default permission rule applies to.
+   * - ``modifier_role_id``
+     - Shows the role to apply the rule to.
+   * - ``getter_role_id``
+     - Shows the role that the permission is granted to.
+   * - ``permission_type``
+     - Shows the type of permission granted.
+	 
+For an example of using the ``sqream_catalog.table_default_permissions`` command, see `Granting Default Table Permissions `_.
+
+Table Permissions
+*****************
+The ``table_permissions`` data object identifies all permissions granted to tables. Each role-permission combination is shown as one row.
+
+The following table describes the ``table_permissions`` data object: 
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the table.
+   * - ``table_id``
+     - Shows the ID of the table the permission applies to.
+   * - ``role_id``
+     - Shows the ID of the role granted permissions.
+   * - ``permission_type``
+     - Identifies the permission type.
+	 
+Database Permissions
+********************
+The ``database_permissions`` data object identifies all permissions granted to databases. Each role-permission combination is shown as one row.
+
+The following table describes the ``database_permissions`` data object: 
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database the permission applies to.
+   * - ``role_id``
+     - Shows the ID of the role granted permissions.
+   * - ``permission_type``
+     - Identifies the permission type.
+	 
+Schema Permissions
+******************
+The ``schema_permissions`` data object identifies all permissions granted to schemas. Each role-permission combination is shown as one row.
+
+The following table describes the ``schema_permissions`` data object: 
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the schema.
+   * - ``schema_id``
+     - Shows the ID of the schema the permission applies to.
+   * - ``role_id``
+     - Shows the ID of the role granted permissions.
+   * - ``permission_type``
+     - Identifies the permission type.	 
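+
+To make these IDs readable, the permissions objects can be joined with ``roles`` and ``permission_types``. A sketch (assuming ``permission_type`` stores a ``permission_type_id``):
+
+.. code-block:: postgres
+
+   SELECT r.name AS role_name, pt.name AS permission
+   FROM sqream_catalog.table_permissions AS tp
+   INNER JOIN sqream_catalog.roles AS r ON tp.role_id = r.role_id
+   INNER JOIN sqream_catalog.permission_types AS pt ON tp.permission_type = pt.permission_type_id;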
+
+UDF Permissions
+***************
+The ``udf_permissions`` data object identifies permissions granted to user-defined functions.
+
+.. _queries:
+
+Queries
+----------------
+The ``savedqueries`` data object identifies the saved queries in the database, as shown in the following table:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``name``
+     - Shows the saved query name.
+   * - ``num_parameters``
+     - Shows the number of parameters to be replaced at run-time.
+
+For more information, see :ref:`saved_queries`.
+
+.. _roles:
+	 
+Roles
+----------------
+The ``roles`` data object is used for displaying role information, and is described in the following tables:
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Roles
+***********
+The ``roles`` data object identifies the roles in the database, as shown in the following table:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``role_id``
+     - Shows the role's database-unique ID.
+   * - ``name``
+     - Shows the role's name.
+   * - ``superuser``
+     - Identifies whether the role is a superuser (``1`` - superuser, ``0`` - regular user).
+   * - ``login``
+     - Identifies whether the role can be used to log in to SQream (``1`` - yes, ``0`` - no).
+   * - ``has_password``
+     - Identifies whether the role has a password (``1`` - yes, ``0`` - no).
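+
+For example, to list all superuser roles that can log in:
+
+.. code-block:: postgres
+
+   SELECT name
+   FROM sqream_catalog.roles
+   WHERE superuser = 1 AND login = 1;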
+     
+Role Memberships
+****************
+The ``roles_memberships`` data object identifies the role memberships in the database, as shown below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``role_id``
+     - Shows the role ID.
+   * - ``member_role_id``
+     - Shows the ID of the parent role that this role inherits from.
+   * - ``inherit``
+     - Identifies whether permissions are inherited (``1`` - yes, ``0`` - no).	 
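+
+For example, to show each membership with role names instead of IDs, ``roles_memberships`` can be joined against ``roles`` twice:
+
+.. code-block:: postgres
+
+   SELECT r.name AS role_name, p.name AS inherits_from
+   FROM sqream_catalog.roles_memberships AS rm
+   INNER JOIN sqream_catalog.roles AS r ON rm.role_id = r.role_id
+   INNER JOIN sqream_catalog.roles AS p ON rm.member_role_id = p.role_id;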
+
+.. _schemas:
+
+Schemas
+----------------
+The ``schemas`` data object identifies all the database's schemas, as shown below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``schema_id``
+     - Shows the schema's unique ID.
+   * - ``schema_name``
+     - Shows the schema's name.
+   * - ``schema_owner``
+     - Shows the name of the role that owns the schema.
+   * - ``rechunker_ignore``
+     - Reserved for internal use.
+
+.. _sequences:
+
+Sequences
+----------------
+The ``sequences`` data object is used for displaying identity key information.
+
+Identity Key
+************
+The ``identity_key`` data object identifies identity key columns in the database. For more information, see :ref:`identity`.
+
+.. _tables:
+
+Tables
+----------------
+The ``tables`` data object is used for displaying table information, and is described in the following tables:
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Tables
+***********
+The ``tables`` data object identifies regular (non-foreign) SQream tables in the database, as shown in the following table:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the table.
+   * - ``table_id``
+     - Shows the table's database-unique ID.
+   * - ``schema_name``
+     - Shows the name of the schema containing the table.
+   * - ``table_name``
+     - Shows the name of the table.
+   * - ``row_count_valid``
+     - Identifies whether the ``row_count`` can be used.
+   * - ``row_count``
+     - Shows the number of rows in the table.
+   * - ``rechunker_ignore``
+     - Reserved for internal use.
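+
+For example, to list tables with their row counts wherever the cached count is usable:
+
+.. code-block:: postgres
+
+   SELECT table_name, row_count
+   FROM sqream_catalog.tables
+   WHERE row_count_valid = true;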
+	 
+Foreign Tables
+**************
+The ``external_tables`` data object identifies foreign tables in the database, as shown below:
+
+.. list-table::
+   :widths: 20 200
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the table.
+   * - ``table_id``
+     - Shows the table's database-unique ID.
+   * - ``schema_name``
+     - Shows the name of the schema containing the table.
+   * - ``table_name``
+     - Shows the name of the table.
+   * - ``format``
+     - Identifies the foreign data wrapper used. ``0`` for ``csv_fdw``, ``1`` for ``parquet_fdw``, ``2`` for ``orc_fdw``.         
+   * - ``created``
+     - Identifies the clause used to create the table.
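+
+For example, the numeric ``format`` column can be decoded into a wrapper name with a ``CASE`` expression:
+
+.. code-block:: postgres
+
+   SELECT table_name,
+          CASE format
+             WHEN 0 THEN 'csv_fdw'
+             WHEN 1 THEN 'parquet_fdw'
+             WHEN 2 THEN 'orc_fdw'
+          END AS foreign_data_wrapper
+   FROM sqream_catalog.external_tables;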
+
+.. _views:
+
+Views
+----------------
+The ``views`` data object is used for displaying views in the database, as shown below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``view_id``
+     - Shows the view's database-unique ID.
+   * - ``view_schema``
+     - Shows the name of the schema containing the view.
+   * - ``view_name``
+     - Shows the name of the view.
+   * - ``view_data``
+     - Reserved for internal use.
+   * - ``view_query_text``
+     - Identifies the ``AS`` clause used to create the view.
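+
+For example, to see each view together with the query text it was created from:
+
+.. code-block:: postgres
+
+   SELECT view_name, view_query_text
+   FROM sqream_catalog.views;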
+
+.. _udfs:
+
+User Defined Functions
+----------------------
+The ``udf`` data object is used for displaying UDFs in the database, as shown below:
+
+.. list-table::
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``database_name``
+     - Shows the name of the database containing the UDF.
+   * - ``function_id``
+     - Shows the UDF's database-unique ID.
+   * - ``function_name``
+     - Shows the name of the UDF.
\ No newline at end of file
diff --git a/reference/catalog_reference_examples.rst b/reference/catalog_reference_examples.rst
new file mode 100644
index 000000000..4531dfefc
--- /dev/null
+++ b/reference/catalog_reference_examples.rst
@@ -0,0 +1,64 @@
+.. _catalog_reference_examples:
+
+*************************************
+Examples
+*************************************
+The **Examples** page includes the following examples:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Listing All Tables in a Database
+----------------------------------
+
+.. code-block:: psql
+
+   master=> SELECT * FROM sqream_catalog.tables;
+   database_name | table_id | schema_name | table_name     | row_count_valid | row_count | rechunker_ignore
+   --------------+----------+-------------+----------------+-----------------+-----------+-----------------
+   master        |        1 | public      | nba            | true            |       457 |                0
+   master        |       12 | public      | cool_dates     | true            |         5 |                0
+   master        |       13 | public      | cool_numbers   | true            |         9 |                0
+   master        |       27 | public      | jabberwocky    | true            |         8 |                0
+
+Listing All Schemas in a Database
+------------------------------------
+
+.. code-block:: psql
+   
+   master=> SELECT * FROM sqream_catalog.schemas;
+   schema_id | schema_name   | schema_owner | rechunker_ignore
+   ----------+---------------+--------------+-----------------
+           0 | public        | sqream       | false           
+           1 | secret_schema | mjordan      | false           
+
+
+Listing Columns and Their Types for a Specific Table
+----------------------------------------------------
+
+.. code-block:: postgres
+
+   SELECT column_name, type_name 
+   FROM sqream_catalog.columns
+   WHERE table_name='cool_animals';
+
+Listing Delete Predicates
+-------------------------
+
+.. code-block:: postgres
+
+   SELECT t.table_name, d.*
+   FROM sqream_catalog.delete_predicates AS d
+   INNER JOIN sqream_catalog.tables AS t
+   ON d.table_id = t.table_id;
+
+
+Listing Saved Queries
+-----------------------------
+
+.. code-block:: postgres
+
+   SELECT * FROM sqream_catalog.savedqueries;
+   
+For more information, see :ref:`saved_queries`.
\ No newline at end of file
diff --git a/reference/catalog_reference_overview.rst b/reference/catalog_reference_overview.rst
new file mode 100644
index 000000000..b74663509
--- /dev/null
+++ b/reference/catalog_reference_overview.rst
@@ -0,0 +1,11 @@
+.. _catalog_reference_overview:
+
+*************************************
+Overview
+*************************************
+The SQream database uses a schema called ``sqream_catalog`` that contains information about your database's objects, such as tables, columns, views, and permissions. Some additional catalog tables are used primarily for internal analysis and may differ across SQream versions.
+
+* :ref:`catalog_reference_schema_information`
+* :ref:`catalog_reference_catalog_tables`
+* :ref:`catalog_reference_additonal_tables`
+* :ref:`catalog_reference_examples`
\ No newline at end of file
diff --git a/reference/catalog_reference_schema_information.rst b/reference/catalog_reference_schema_information.rst
new file mode 100644
index 000000000..6cd43ab6a
--- /dev/null
+++ b/reference/catalog_reference_schema_information.rst
@@ -0,0 +1,62 @@
+.. _catalog_reference_schema_information:
+
+*****************************************
+What Information Does the Schema Contain?
+*****************************************
+The schema includes tables designated for both external and internal use:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+External Tables
+-----------------
+The following table shows the data objects contained in the ``sqream_catalog`` schema designated for external use:
+
+.. list-table:: Database Objects
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Database Object
+     - Table
+   * - :ref:`Clustering Keys`
+     - ``clustering_keys``
+   * - :ref:`Columns`
+     - ``columns``, ``external_table_columns``
+   * - :ref:`Databases`
+     - ``databases``
+   * - :ref:`Permissions`
+     - ``table_permissions``, ``database_permissions``, ``schema_permissions``, ``permission_types``, ``udf_permissions``, ``sqream_catalog.table_default_permissions``
+   * - :ref:`Queries`
+     - ``savedqueries``
+   * - :ref:`Roles`
+     - ``roles``, ``roles_memberships``
+   * - :ref:`Schemas`
+     - ``schemas``
+   * - :ref:`Sequences`
+     - ``identity_key``
+   * - :ref:`Tables`
+     - ``tables``, ``external_tables``
+   * - :ref:`Views`
+     - ``views``
+   * - :ref:`User Defined Functions`
+     - ``user_defined_functions``
+
+Internal Tables
+-----------------
+The following table shows the data objects contained in the ``sqream_catalog`` schema designated for internal use:
+
+.. list-table:: Storage Objects
+   :widths: 20 180
+   :header-rows: 1
+   
+   * - Database Object
+     - Table
+   * - Extents
+     - ``extents``
+   * - Chunk columns
+     - ``chunks_columns``
+   * - Chunks
+     - ``chunks``
+   * - Delete predicates
+     - ``delete_predicates``. For more information, see :ref:`Deleting Data`.
\ No newline at end of file
diff --git a/reference/cli/server_picker.rst b/reference/cli/server_picker.rst
index b4869c181..3bfd4a004 100644
--- a/reference/cli/server_picker.rst
+++ b/reference/cli/server_picker.rst
@@ -1,7 +1,7 @@
 .. _server_picker_cli_reference:
 
 *************************
-server_picker
+Server Picker
 *************************
 
 SQream DB's load balancer is called ``server_picker``.
@@ -31,7 +31,7 @@ Positional command line arguments
    * - ``TCP listen port``
      - ``3108``
      - TCP port for server picker to listen on
-   * - ``Metadata server port``
+   * - ``SSL listen port``
      - ``3109``
      - SSL port for server picker to listen on
 
diff --git a/reference/cli/sqream_sql.rst b/reference/cli/sqream_sql.rst
index 54a38300d..a4af489cf 100644
--- a/reference/cli/sqream_sql.rst
+++ b/reference/cli/sqream_sql.rst
@@ -225,7 +225,7 @@ Creating a new database and switching over to it without reconnecting:
 
 .. code-block:: psql
 
-   farm=> create table animals(id int not null, name varchar(30) not null, is_angry bool not null);
+   farm=> create table animals(id int not null, name text(30) not null, is_angry bool not null);
    executed
    time: 0.011940s
 
@@ -303,7 +303,7 @@ Assuming a file containing SQL statements (separated by semicolons):
 
    $ cat some_queries.sql
       CREATE TABLE calm_farm_animals 
-     ( id INT IDENTITY(0, 1), name VARCHAR(30) 
+     ( id INT IDENTITY(0, 1), name TEXT(30) 
      ); 
 
    INSERT INTO calm_farm_animals (name) 
diff --git a/reference/cli/upgrade_storage.rst b/reference/cli/upgrade_storage.rst
index c95ef25f3..867df3e03 100644
--- a/reference/cli/upgrade_storage.rst
+++ b/reference/cli/upgrade_storage.rst
@@ -51,7 +51,7 @@ Results and error codes
      - ``no need to upgrade``
      - Storage doesn't need an upgrade
    * - Failure: can't read storage
-     - ``levelDB is in use by another application``
+     - ``RocksDB is in use by another application``
      - Check permissions, and ensure no SQream DB workers or :ref:`metadata_server ` are running when performing this operation.
 
 
@@ -64,7 +64,7 @@ Upgrade SQream DB's storage cluster
 .. code-block:: console
 
    $ ./upgrade_storage /home/rhendricks/raviga_database
-   get_leveldb_version path{/home/rhendricks/raviga_database}
+   get_rocksdb_version path{/home/rhendricks/raviga_database}
    current storage version 23
    upgrade_v24
    upgrade_storage to 24
@@ -75,7 +75,7 @@ Upgrade SQream DB's storage cluster
    upgrade_v26
    upgrade_storage to 26
    upgrade_storage to 26 - Done
-   validate_leveldb
+   validate_rocksdb
    storage has been upgraded successfully to version 26
 
 This message confirms that the cluster has already been upgraded correctly.
diff --git a/reference/configuration.rst b/reference/configuration.rst
deleted file mode 100644
index bf487496e..000000000
--- a/reference/configuration.rst
+++ /dev/null
@@ -1,5 +0,0 @@
-.. _configuration_reference:
-
-*************************
-Configuration
-*************************
diff --git a/reference/sql/sql_functions/aggregate_functions/avg.rst b/reference/sql/sql_functions/aggregate_functions/avg.rst
index be0294d46..4768061c8 100644
--- a/reference/sql/sql_functions/aggregate_functions/avg.rst
+++ b/reference/sql/sql_functions/aggregate_functions/avg.rst
@@ -62,14 +62,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/corr.rst b/reference/sql/sql_functions/aggregate_functions/corr.rst
index 212ab89d2..5963c835f 100644
--- a/reference/sql/sql_functions/aggregate_functions/corr.rst
+++ b/reference/sql/sql_functions/aggregate_functions/corr.rst
@@ -51,14 +51,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/count.rst b/reference/sql/sql_functions/aggregate_functions/count.rst
index 803e529c0..133d658f4 100644
--- a/reference/sql/sql_functions/aggregate_functions/count.rst
+++ b/reference/sql/sql_functions/aggregate_functions/count.rst
@@ -67,14 +67,14 @@ The examples in this section are based on a table named ``nba``, structured as f
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/covar_pop.rst b/reference/sql/sql_functions/aggregate_functions/covar_pop.rst
index c0d7cd35d..24300b5d7 100644
--- a/reference/sql/sql_functions/aggregate_functions/covar_pop.rst
+++ b/reference/sql/sql_functions/aggregate_functions/covar_pop.rst
@@ -55,14 +55,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/covar_samp.rst b/reference/sql/sql_functions/aggregate_functions/covar_samp.rst
index 29d7b7493..2c8451023 100644
--- a/reference/sql/sql_functions/aggregate_functions/covar_samp.rst
+++ b/reference/sql/sql_functions/aggregate_functions/covar_samp.rst
@@ -56,14 +56,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/index.rst b/reference/sql/sql_functions/aggregate_functions/index.rst
index 9bf0527d6..5d3dfc125 100644
--- a/reference/sql/sql_functions/aggregate_functions/index.rst
+++ b/reference/sql/sql_functions/aggregate_functions/index.rst
@@ -6,31 +6,27 @@ Aggregate Functions
 
 Overview
 ===========
-
 Aggregate functions perform calculations based on a set of values and return a single value. Most aggregate functions ignore null values. Aggregate functions are often used with the ``GROUP BY`` clause of the :ref:`select` statement.
 
 Available Aggregate Functions
 =============================
 The following list shows the available aggregate functions:
 
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
+.. hlist::
+   :columns: 2
    
-
-   avg
-   corr
-   count
-   covar_pop
-   covar_samp
-   max
-   min
-   mode
-   percentile_cont
-   percentile_disc
-   stddev_pop
-   stddev_samp
-   sum
-   var_pop
-   var_samp
+   * :ref:`AVG`
+   * :ref:`CORR`
+   * :ref:`COUNT`
+   * :ref:`COVAR_POP`
+   * :ref:`COVAR_SAMP`
+   * :ref:`MAX`
+   * :ref:`MIN`
+   * :ref:`MODE`
+   * :ref:`PERCENTILE_CONT`
+   * :ref:`PERCENTILE_DISC`
+   * :ref:`STDDEV_POP`
+   * :ref:`STDDEV_SAMP`
+   * :ref:`SUM`
+   * :ref:`VAR_POP`
+   * :ref:`VAR_SAMP`
\ No newline at end of file
diff --git a/reference/sql/sql_functions/aggregate_functions/max.rst b/reference/sql/sql_functions/aggregate_functions/max.rst
index 529e2230d..994a3aaca 100644
--- a/reference/sql/sql_functions/aggregate_functions/max.rst
+++ b/reference/sql/sql_functions/aggregate_functions/max.rst
@@ -53,14 +53,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/min.rst b/reference/sql/sql_functions/aggregate_functions/min.rst
index d488d87fc..dd4d39177 100644
--- a/reference/sql/sql_functions/aggregate_functions/min.rst
+++ b/reference/sql/sql_functions/aggregate_functions/min.rst
@@ -53,14 +53,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/stddev_pop.rst b/reference/sql/sql_functions/aggregate_functions/stddev_pop.rst
index 5a8a7e677..8687c0e76 100644
--- a/reference/sql/sql_functions/aggregate_functions/stddev_pop.rst
+++ b/reference/sql/sql_functions/aggregate_functions/stddev_pop.rst
@@ -58,14 +58,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/stddev_samp.rst b/reference/sql/sql_functions/aggregate_functions/stddev_samp.rst
index 0328e2241..81c7a1f51 100644
--- a/reference/sql/sql_functions/aggregate_functions/stddev_samp.rst
+++ b/reference/sql/sql_functions/aggregate_functions/stddev_samp.rst
@@ -62,14 +62,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/sum.rst b/reference/sql/sql_functions/aggregate_functions/sum.rst
index e8f648894..51d7f6d97 100644
--- a/reference/sql/sql_functions/aggregate_functions/sum.rst
+++ b/reference/sql/sql_functions/aggregate_functions/sum.rst
@@ -65,14 +65,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/var_pop.rst b/reference/sql/sql_functions/aggregate_functions/var_pop.rst
index 4ddf45a1e..de078a3ae 100644
--- a/reference/sql/sql_functions/aggregate_functions/var_pop.rst
+++ b/reference/sql/sql_functions/aggregate_functions/var_pop.rst
@@ -58,14 +58,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/aggregate_functions/var_samp.rst b/reference/sql/sql_functions/aggregate_functions/var_samp.rst
index 6f0c91e63..cc9721225 100644
--- a/reference/sql/sql_functions/aggregate_functions/var_samp.rst
+++ b/reference/sql/sql_functions/aggregate_functions/var_samp.rst
@@ -63,14 +63,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/index.rst b/reference/sql/sql_functions/index.rst
index 46d643308..a66760b7f 100644
--- a/reference/sql/sql_functions/index.rst
+++ b/reference/sql/sql_functions/index.rst
@@ -117,9 +117,7 @@ The following table shows the **date and time** functions:
 
 Numeric
 ^^^^^^^^^^^
-The following table shows the **arithmetic operators**
-
-For more information about arithmetic operator, see :ref:`arithmetic_operators`.
+The following table shows the **arithmetic operators**:
 
 .. list-table:: Arithmetic Operators
    :widths: auto
@@ -150,6 +148,8 @@ For more information about arithmetic operator, see :ref:`arithmetic_operators`.
      - ``a % b``
     - Modulo of ``a`` by ``b``. See also :ref:`mod`
 
+For more information about arithmetic operators, see :ref:`arithmetic_operators`.
+
 The following table shows the **arithmetic operator** functions:
 
 .. list-table:: Arithmetic Operator Functions
@@ -209,7 +209,7 @@ The following table shows the **arithmetic operator** functions:
 
 Strings
 ^^^^^^^^^^^
-The following table shows the **string* functions:
+The following table shows the **string** functions:
 
 .. list-table:: 
    :widths: auto
@@ -223,6 +223,8 @@ The following table shows the **string* functions:
      - Calculates the position where a string starts inside another string
    * - :ref:`concat`
      - Concatenates two strings
+   * - :ref:`decode`
+     - Decodes or extracts binary data from a textual input string
    * - :ref:`isprefixof`
      - Matches if a string is the prefix of another string
    * - :ref:`left`
@@ -268,15 +270,13 @@ The following table shows the **string* functions:
 
 User-Defined Scalar Functions
 ---------------------
-For more information about user-defined scalar functions, see :ref:`scalar_sql_udf`
+For more information about user-defined scalar functions, see :ref:`scalar_sql_udf`.
 
 
 Aggregate Functions
 ---------------------
 The following table shows the **aggregate** functions:
 
-For more information about aggregate functions, see :ref:`aggregate_functions`.
-
 .. list-table:: 
    :widths: auto
    :header-rows: 1
@@ -321,12 +321,12 @@ For more information about aggregate functions, see :ref:`aggregate_functions`.
      - ``varp``
      - Calculates population variance of values
 
+For more information about aggregate functions, see :ref:`aggregate_functions`.
+
 Window Functions
 -------------------
 The following table shows the **window** functions:
 
-For more information about window functions, see :ref:`window_functions`.
-
 .. list-table:: 
    :widths: auto
    :header-rows: 1
@@ -360,33 +360,7 @@ For more information about window functions, see :ref:`window_functions`.
    * - :ref:`ntile`
      - Returns an integer ranging between ``1`` and the argument value, dividing the partitions as equally as possible
 
-
-System Functions
-------------------
-System functions allow you to execute actions in the system, such as aborting a query or get information about system processes.
-
-The following table shows the **system** functions:
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   
-   * - Function
-     - Description
-   * - :ref:`explain`
-     - Returns a static query plan for a statement
-   * - :ref:`show_connections`
-     - Returns a list of jobs and statements on the current worker
-   * - :ref:`show_locks`
-     - Returns any existing locks in the database
-   * - :ref:`show_node_info`
-     - Returns a query plan for an actively running statement with timing information
-   * - :ref:`show_server_status`
-     - Shows running statements across the cluster
-   * - :ref:`show_version`
-     - Returns the version of SQream DB
-   * - :ref:`stop_statement`
-     - Stops a query (or statement) if it is currently running
+For more information about window functions, see :ref:`window_functions`.
 
 Workload Management Functions
 ---------------------------------
@@ -415,7 +389,4 @@ The following table shows the **workload management** functions:
    scalar_functions/index
    user_defined_functions/index
    aggregate_functions/index
-   window_functions/index
-   system_functions/index
-
-
+   window_functions/index
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/conditionals/is_ascii.rst b/reference/sql/sql_functions/scalar_functions/conditionals/is_ascii.rst
index bb9e3b2f9..495e46e76 100644
--- a/reference/sql/sql_functions/scalar_functions/conditionals/is_ascii.rst
+++ b/reference/sql/sql_functions/scalar_functions/conditionals/is_ascii.rst
@@ -45,20 +45,19 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
    
-   CREATE TABLE dictionary (id INT NOT NULL, fw TEXT(30), en VARCHAR(30));
-   
-   INSERT INTO dictionary VALUES (1, '行こう', 'Let''s go'), (2, '乾杯', 'Cheers'), (3, 'L''chaim', 'Cheers');
+   CREATE TABLE dictionary (id INT NOT NULL, text TEXT);
+   INSERT INTO dictionary VALUES (1, '行こう'), (2, '乾杯'), (3, 'L''chaim');
+   SELECT id, text, IS_ASCII(text) FROM dictionary;
 
-IS NULL
+IS_ASCII
 -----------
 
 .. code-block:: psql
 
-   m=> SELECT id, en, fw, IS_ASCII(fw) FROM dictionary;
-   id | en       | fw       | is_ascii
+   id | text     | is_ascii
-   ---+----------+----------+---------
+   ---+----------+---------
-    1 | Let's go | 行こう     | false   
-    2 | Cheers   | 乾杯      | false   
-    3 | Cheers   | L'chaim  | true    
+    1 | 行こう    | false
+    2 | 乾杯      | false
+    3 | L'chaim   | true
 
 
diff --git a/reference/sql/sql_functions/scalar_functions/conditionals/is_null.rst b/reference/sql/sql_functions/scalar_functions/conditionals/is_null.rst
index c99f4e7d1..94f8605f7 100644
--- a/reference/sql/sql_functions/scalar_functions/conditionals/is_null.rst
+++ b/reference/sql/sql_functions/scalar_functions/conditionals/is_null.rst
@@ -40,7 +40,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
    
-   CREATE TABLE t (id INT NOT NULL, name VARCHAR(30), weight INT);
+   CREATE TABLE t (id INT NOT NULL, name TEXT(30), weight INT);
    
    INSERT INTO t VALUES (1, 'Kangaroo', 120), (2, 'Koala', 20), (3, 'Wombat', 60)
                        ,(4, 'Kappa', NULL),(5, 'Echidna', 8),(6, 'Chupacabra', NULL)
diff --git a/reference/sql/sql_functions/scalar_functions/conversion/to_hex.rst b/reference/sql/sql_functions/scalar_functions/conversion/to_hex.rst
index f3cf6fb82..d33643f22 100644
--- a/reference/sql/sql_functions/scalar_functions/conversion/to_hex.rst
+++ b/reference/sql/sql_functions/scalar_functions/conversion/to_hex.rst
@@ -11,7 +11,7 @@ Syntax
 
 .. code-block:: postgres
 
-   TO_HEX( expr ) --> VARCHAR
+   TO_HEX( expr ) --> TEXT
 
 Arguments
 ============
@@ -28,7 +28,7 @@ Arguments
 Returns
 ============
 
-* Representation of the hexadecimal number of type ``VARCHAR``.
+* Representation of the hexadecimal number of type ``TEXT``.
 
 
 Examples
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/dateadd.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/dateadd.rst
index cff5268e6..240b0bf2e 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/dateadd.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/dateadd.rst
@@ -106,7 +106,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/datediff.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/datediff.rst
index 5c91a88d9..1af8827de 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/datediff.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/datediff.rst
@@ -100,7 +100,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/datepart.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/datepart.rst
index 8a43a1472..a779663b5 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/datepart.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/datepart.rst
@@ -109,7 +109,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/eomonth.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/eomonth.rst
index 92e3f7940..50bcf7410 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/eomonth.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/eomonth.rst
@@ -48,7 +48,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/extract.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/extract.rst
index 2fd79ca86..f0ca54a58 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/extract.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/extract.rst
@@ -86,7 +86,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/date_and_time/trunc.rst b/reference/sql/sql_functions/scalar_functions/date_and_time/trunc.rst
index d9d791cc3..1e888cfe5 100644
--- a/reference/sql/sql_functions/scalar_functions/date_and_time/trunc.rst
+++ b/reference/sql/sql_functions/scalar_functions/date_and_time/trunc.rst
@@ -104,7 +104,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE cool_dates(name VARCHAR(40), d DATE, dt DATETIME);
+   CREATE TABLE cool_dates(name TEXT(40), d DATE, dt DATETIME);
    
    INSERT INTO cool_dates VALUES ('Marty McFly goes back to this time','1955-11-05','1955-11-05 01:21:00.000')
        , ('Marty McFly came from this time', '1985-10-26', '1985-10-26 01:22:00.000')
diff --git a/reference/sql/sql_functions/scalar_functions/index.rst b/reference/sql/sql_functions/scalar_functions/index.rst
index 1a7d639b3..ae70874e3 100644
--- a/reference/sql/sql_functions/scalar_functions/index.rst
+++ b/reference/sql/sql_functions/scalar_functions/index.rst
@@ -1,20 +1,86 @@
 .. _scalar_functions:
 
 ****************
-Built-In Scalar functions
+Built-In Scalar Functions
 ****************
+The **Built-In Scalar Functions** page describes functions that return one value per call:
 
-Built-in scalar functions return one value per call.
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Built-in scalar functions
-   :glob:
-   
-   bitwise/*
-   conditionals/*
-   conversion/*
-   date_and_time/*
-   numeric/*
-   string/*
\ No newline at end of file
+.. hlist::
+   :columns: 5
+
+   * `AND `_
+   * `NOT `_
+   * `OR `_
+   * `SHIFT_LEFT `_
+   * `SHIFT_RIGHT `_
+   * `XOR `_
+   * :ref:`between`
+   * :ref:`case`
+   * :ref:`coalesce`
+   * :ref:`decode`
+   * :ref:`in`
+   * :ref:`is_ascii`
+   * :ref:`is_null`
+   * :ref:`isnull`
+   * :ref:`from_unixts`
+   * :ref:`to_hex`
+   * :ref:`to_unixts`
+   * :ref:`curdate`
+   * :ref:`current_date`
+   * :ref:`current_timestamp`
+   * :ref:`dateadd`
+   * :ref:`datediff`
+   * :ref:`datepart`
+   * :ref:`eomonth`
+   * :ref:`extract`
+   * :ref:`getdate`
+   * :ref:`sysdate`
+   * :ref:`trunc`
+   * :ref:`abs`
+   * :ref:`acos`
+   * :ref:`asin`
+   * :ref:`atan`
+   * :ref:`atn2`
+   * :ref:`ceiling`
+   * :ref:`cos`
+   * :ref:`cot`
+   * :ref:`crc64`
+   * :ref:`degrees`
+   * :ref:`exp`
+   * :ref:`floor`
+   * :ref:`log`
+   * :ref:`log10`
+   * :ref:`mod`
+   * :ref:`pi`
+   * :ref:`power`
+   * :ref:`radians`
+   * :ref:`round`
+   * :ref:`sin`
+   * :ref:`sqrt`
+   * :ref:`square`
+   * :ref:`tan`
+   * :ref:`trunc`
+   * :ref:`char_length`
+   * :ref:`charindex`
+   * :ref:`concat`
+   * :ref:`isprefixof`
+   * :ref:`left`
+   * :ref:`len`
+   * :ref:`like`
+   * :ref:`lower`
+   * :ref:`ltrim`
+   * :ref:`octet_length`
+   * :ref:`patindex`
+   * :ref:`regexp_count`
+   * :ref:`regexp_instr`
+   * :ref:`regexp_replace`
+   * :ref:`regexp_substr`
+   * :ref:`repeat`
+   * :ref:`replace`
+   * :ref:`reverse`
+   * :ref:`right`
+   * :ref:`rlike`
+   * :ref:`rtrim`
+   * :ref:`substring`
+   * :ref:`trim`
+   * :ref:`upper`
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/numeric/ceiling.rst b/reference/sql/sql_functions/scalar_functions/numeric/ceiling.rst
index 2f4e2d988..314c33c02 100644
--- a/reference/sql/sql_functions/scalar_functions/numeric/ceiling.rst
+++ b/reference/sql/sql_functions/scalar_functions/numeric/ceiling.rst
@@ -15,7 +15,7 @@ Syntax
 
    CEILING( expr )
    
-   CEIL ( expr ) --> DOUBLE
+   CEIL ( expr )
 
 Arguments
 ============
@@ -32,9 +32,8 @@ Arguments
 Returns
 ============
 
-* ``CEIL`` Always returns a floating point result.
+``CEILING`` and ``CEIL`` always return a ``double`` floating point number.
 
-* ``CEILING`` returns the same type as the argument supplied.
 
 Notes
 =======
diff --git a/reference/sql/sql_functions/scalar_functions/numeric/crc64.rst b/reference/sql/sql_functions/scalar_functions/numeric/crc64.rst
index 7dbd6ddf1..8d067485d 100644
--- a/reference/sql/sql_functions/scalar_functions/numeric/crc64.rst
+++ b/reference/sql/sql_functions/scalar_functions/numeric/crc64.rst
@@ -12,8 +12,6 @@ Syntax
 .. code-block:: postgres
 
    CRC64( expr ) --> BIGINT
-   
-   CRC64_JOIN( expr ) --> BIGINT
 
 Arguments
 ============
@@ -25,21 +23,14 @@ Arguments
    * - Parameter
      - Description
    * - ``expr``
-     - Text expression (``VARCHAR``, ``TEXT``)
+     - Text expression (``TEXT``)
 
 Returns
 ============
 
 Returns a CRC-64 hash of the text input, of type ``BIGINT``.
 
-Notes
-=======
-
-* If the input value is NULL, the result is NULL.
-
-* The ``CRC64_JOIN`` can be used with ``VARCHAR`` only. It can not be used with ``TEXT``.
-
-* The ``CRC64_JOIN`` variant ignores leading whitespace when used as a ``JOIN`` key.
+.. note:: If the input value is NULL, the result is NULL.
 
 Examples
 ===========
@@ -49,10 +40,5 @@ Calculate a CRC-64 hash of a string
 
 .. code-block:: psql
 
-   numbers=> SELECT CRC64(x) FROM 
-   .    (VALUES ('This is a relatively long text string, that can be converted to a shorter hash' :: varchar(80)))
-   .    as t(x);
-   crc64               
-   --------------------
-   -9085161068710498500
-
+   SELECT CRC64(x) FROM (VALUES ('This is a relatively long text string, that can be converted to a shorter hash' :: text)) as t(x);
+   crc64
+   --------------------
+   -8397827068206190216
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/numeric/round.rst b/reference/sql/sql_functions/scalar_functions/numeric/round.rst
index 842224edd..420bd3c0f 100644
--- a/reference/sql/sql_functions/scalar_functions/numeric/round.rst
+++ b/reference/sql/sql_functions/scalar_functions/numeric/round.rst
@@ -13,8 +13,10 @@ Syntax
 
 .. code-block:: postgres
 
-   ROUND( expr [, scale ] )
-
+   ROUND( numeric [, int ] ) --> numeric
+   ROUND( double ) --> double
+   
 Arguments
 ============
 
@@ -24,20 +26,23 @@ Arguments
    
    * - Parameter
      - Description
-   * - ``expr``
-     - Numeric expression to round
-   * - ``scale``
-     - Number of digits after the decimal point to round to. Defaults to 0 if not specified.
+   * - ``numeric``
+     - The numeric expression to round
+   * - ``int``
+     - The number of digits after the decimal point to round to. Defaults to ``0`` if not specified
 
 Returns
 ============
 
-Always returns a floating point result.
+The ``ROUND()`` function returns a ``numeric`` value when used with numeric input types, such as ``integer`` or ``decimal``. When the input is ``double``, the return type is also ``double``.
+
+
+.. note:: ``integer`` data types are automatically cast to ``numeric`` data types.
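+
+A brief sketch of the return-type rule above (assumed literal inputs; display formatting may differ):
+
+.. code-block:: postgres
+
+   SELECT ROUND(123.456, 2);       -- numeric input, numeric result: 123.46
+   SELECT ROUND(2.7182 :: double); -- double input, double result: 3.0
+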
 
 Notes
 =======
 
-* If the input value is NULL, the result is NULL.
+If the input value is NULL, the result is NULL.
 
 Examples
 ===========
diff --git a/reference/sql/sql_functions/scalar_functions/string/char_length.rst b/reference/sql/sql_functions/scalar_functions/string/char_length.rst
index f89c397ab..79bbdcdbc 100644
--- a/reference/sql/sql_functions/scalar_functions/string/char_length.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/char_length.rst
@@ -1,25 +1,22 @@
 .. _char_length:
 
-**************************
-CHAR_LENGTH
-**************************
+******************************
+CHARACTER_LENGTH / CHAR_LENGTH
+******************************
 
 Calculates the number of characters in a string.
 
 .. note::
-   
-   * This function is supported on ``TEXT`` only.
-   
+     
    * To get the length in bytes, see :ref:`octet_length`.
    
-   * For ``VARCHAR`` strings, the octet length is the number of characters. Use :ref:`len` instead.
-
 Syntax
 ==========
 
 .. code-block:: postgres
 
-   CHAR_LEN( text_expr ) --> INT
+   CHAR_LENGTH( text_expr ) --> INT
+   CHARACTER_LENGTH( text_expr ) --> INT
 
 Arguments
 ============
@@ -36,7 +33,7 @@ Arguments
 Returns
 ============
 
 Returns an integer containing the number of characters in the string.
 
 Notes
 =======
@@ -63,7 +60,7 @@ Length in characters and bytes of strings
 
 ASCII characters take up 1 byte per character, while Thai takes up 3 bytes and Hebrew takes up 2 bytes.
 
-Unlike :ref:`len`, ``CHAR_LENGTH`` preserves the trailing whitespaces.
+Unlike :ref:`len`, ``CHARACTER_LENGTH`` and ``CHAR_LENGTH`` preserve trailing whitespace.
 
 .. code-block:: psql
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/charindex.rst b/reference/sql/sql_functions/scalar_functions/string/charindex.rst
index fa9c89027..3ad4ca1f3 100644
--- a/reference/sql/sql_functions/scalar_functions/string/charindex.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/charindex.rst
@@ -49,7 +49,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE jabberwocky(line VARCHAR(50));
+   CREATE TABLE jabberwocky(line TEXT(50));
 
    INSERT INTO jabberwocky VALUES 
       ('''Twas brillig, and the slithy toves '), ('      Did gyre and gimble in the wabe: ')
@@ -73,4 +73,4 @@ Using ``CHARINDEX``
    "Beware the Jabberwock, my son!                 |         9
          The jaws that bite, the claws that catch! |        27
    Beware the Jubjub bird, and shun                |         8
-         The frumious Bandersnatch!"               |         0
+         The frumious Bandersnatch!"               |         0
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/string/concat.rst b/reference/sql/sql_functions/scalar_functions/string/concat.rst
index 0409216cd..c612a9bdd 100644
--- a/reference/sql/sql_functions/scalar_functions/string/concat.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/concat.rst
@@ -48,14 +48,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
@@ -76,7 +76,7 @@ Convert values to string types before concatenation
 .. code-block:: psql
 
    
-   nba=> SELECT ("Age" :: VARCHAR(2)) || "Name" FROM nba ORDER BY 1 DESC LIMIT 5;
+   nba=> SELECT ("Age" :: TEXT(2)) || "Name" FROM nba ORDER BY 1 DESC LIMIT 5;
    ?column?        
    ----------------
    40Tim Duncan    
@@ -116,12 +116,11 @@ Add a space and concatenate it first to bypass the space trimming issue
 
 .. code-block:: psql
 
-   nba=> SELECT ("Age" :: VARCHAR(2) || (' ' || "Name")) FROM nba ORDER BY 1 DESC LIMIT 5;
+   nba=> SELECT ("Age" :: TEXT(2) || (' ' || "Name")) FROM nba ORDER BY 1 DESC LIMIT 5;
    ?column?         
    -----------------
    40 Tim Duncan    
    40 Kevin Garnett 
    40 Andre Miller  
    39 Vince Carter  
-   39 Pablo Prigioni
-
+   39 Pablo Prigioni
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/string/decode.rst b/reference/sql/sql_functions/scalar_functions/string/decode.rst
new file mode 100644
index 000000000..1ed10e399
--- /dev/null
+++ b/reference/sql/sql_functions/scalar_functions/string/decode.rst
@@ -0,0 +1,47 @@
+.. _decode:
+
+********************
+DECODE
+********************
+The **DECODE** function is a PostgreSQL-compatible function used for decoding or extracting binary data from a textual input string.
+
+Syntax
+==========
+The following shows the correct syntax for the DECODE function:
+
+.. code-block:: postgres
+
+   DECODE( input_text, type_text )
+
+Parameters
+============
+The following table shows the DECODE parameters:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+   * - ``input_text``
+     - Defines the input text string.
+   * - ``type_text``
+     - Defines the format used for decoding the input text.
+
+Returns
+=========
+**Comment** - *What does it return?*
+
+Notes
+===========
+**Comment** - *Are there any relevant notes?*
+
+Examples
+===========
+**Comment** - *What does the actual output look like? Can you provide an example?*
+   
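+A hedged sketch, assuming the function follows PostgreSQL ``decode`` semantics (formats such as ``base64`` and ``hex``); actual SQream behavior and output should be confirmed:
+
+.. code-block:: postgres
+
+   SELECT DECODE('SGVsbG8=', 'base64');
+   SELECT DECODE('48656c6c6f', 'hex');
+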
+Permissions
+=============
+**Comment** - *Please confirm what permissions the role requires.*
+
+The role must have ``SUPERUSER`` permissions.
\ No newline at end of file
diff --git a/reference/sql/sql_functions/scalar_functions/string/isprefixof.rst b/reference/sql/sql_functions/scalar_functions/string/isprefixof.rst
index 4a978b1ff..3da356969 100644
--- a/reference/sql/sql_functions/scalar_functions/string/isprefixof.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/isprefixof.rst
@@ -50,7 +50,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE jabberwocky(line VARCHAR(50));
+   CREATE TABLE jabberwocky(line TEXT(50));
 
    INSERT INTO jabberwocky VALUES 
       ('''Twas brillig, and the slithy toves '), ('      Did gyre and gimble in the wabe: ')
diff --git a/reference/sql/sql_functions/scalar_functions/string/left.rst b/reference/sql/sql_functions/scalar_functions/string/left.rst
index 77c99b0f1..143c6eeb7 100644
--- a/reference/sql/sql_functions/scalar_functions/string/left.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/left.rst
@@ -27,13 +27,15 @@ Arguments
    * - ``expr``
      - String expression
    * - ``character_count``
-     - A positive integer that specifies how many characters to return.
-
+     - A positive integer that specifies how many characters to return. If ``character_count <= 0``, an empty string is returned.
+
 Returns
 ============
 
 Returns the same type as the argument supplied.
 
+
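+A quick sketch of the boundary behavior described in the Arguments section (assumed inputs):
+
+.. code-block:: postgres
+
+   SELECT LEFT('Jabberwocky', 4); -- 'Jabb'
+   SELECT LEFT('Jabberwocky', 0); -- ''
+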
 Notes
 =======
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/len.rst b/reference/sql/sql_functions/scalar_functions/string/len.rst
index d3cba24b2..fdd671423 100644
--- a/reference/sql/sql_functions/scalar_functions/string/len.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/len.rst
@@ -49,7 +49,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
    
-   CREATE TABLE jabberwocky(line VARCHAR(50));
+   CREATE TABLE jabberwocky(line TEXT(50));
    
    INSERT INTO jabberwocky VALUES 
       ($$'Twas brillig, and the slithy toves$$), ('      Did gyre and gimble in the wabe:')
diff --git a/reference/sql/sql_functions/scalar_functions/string/like.rst b/reference/sql/sql_functions/scalar_functions/string/like.rst
index 4640ba9be..ce5ca4942 100644
--- a/reference/sql/sql_functions/scalar_functions/string/like.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/like.rst
@@ -83,14 +83,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/lower.rst b/reference/sql/sql_functions/scalar_functions/string/lower.rst
index 69dfd4f1a..4318015dc 100644
--- a/reference/sql/sql_functions/scalar_functions/string/lower.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/lower.rst
@@ -45,7 +45,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE jabberwocky(line VARCHAR(50));
+   CREATE TABLE jabberwocky(line TEXT(50));
 
    INSERT INTO jabberwocky VALUES 
       ('''Twas brillig, and the slithy toves'), ('      Did gyre and gimble in the wabe:')
diff --git a/reference/sql/sql_functions/scalar_functions/string/octet_length.rst b/reference/sql/sql_functions/scalar_functions/string/octet_length.rst
index 8bb1e3daf..0836b4685 100644
--- a/reference/sql/sql_functions/scalar_functions/string/octet_length.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/octet_length.rst
@@ -6,14 +6,10 @@ OCTET_LENGTH
 
 Calculates the number of bytes in a string.
 
-.. note::
+.. note::
+
+   * To get the length in characters, see :ref:`char_length`.
    
-   * This function is supported on ``TEXT`` strings only.
-   
-   * To get the length in characters, see :ref:`char_length`.
-   
-   * For ``VARCHAR`` strings, the octet length is the number of characters. Use :ref:`len` instead.
-
 Syntax
 ==========
 The following is the correct syntax for the ``OCTET_LENGTH`` function:
diff --git a/reference/sql/sql_functions/scalar_functions/string/patindex.rst b/reference/sql/sql_functions/scalar_functions/string/patindex.rst
index 063fe6d5c..b1ffcda9b 100644
--- a/reference/sql/sql_functions/scalar_functions/string/patindex.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/patindex.rst
@@ -69,7 +69,7 @@ Notes
 
 * If the value is NULL, the result is NULL.
 
-* PATINDEX works on ``VARCHAR`` text types only.
+* PATINDEX works on the ``TEXT`` data type only.
 
 * PATINDEX does not work on all literal values - only on column values.
    
diff --git a/reference/sql/sql_functions/scalar_functions/string/regexp_count.rst b/reference/sql/sql_functions/scalar_functions/string/regexp_count.rst
index 5f3bd75a0..26191eccc 100644
--- a/reference/sql/sql_functions/scalar_functions/string/regexp_count.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/regexp_count.rst
@@ -99,14 +99,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/regexp_instr.rst b/reference/sql/sql_functions/scalar_functions/string/regexp_instr.rst
index f401fdfff..d72f0af4f 100644
--- a/reference/sql/sql_functions/scalar_functions/string/regexp_instr.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/regexp_instr.rst
@@ -104,14 +104,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/regexp_substr.rst b/reference/sql/sql_functions/scalar_functions/string/regexp_substr.rst
index 1730d6ebf..36611424e 100644
--- a/reference/sql/sql_functions/scalar_functions/string/regexp_substr.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/regexp_substr.rst
@@ -104,14 +104,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/replace.rst b/reference/sql/sql_functions/scalar_functions/string/replace.rst
index 5552be269..d2fab561a 100644
--- a/reference/sql/sql_functions/scalar_functions/string/replace.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/replace.rst
@@ -6,9 +6,6 @@ REPLACE
 
 Replaces all occurrences of a specified string value with another string value.
 
-.. warning:: With ``VARCHAR``, a substring can only be replaced with another substring of equal **byte length**. See :ref:`octet_length`.
-
-
 Syntax
 ==========
 
@@ -37,12 +34,7 @@ Returns
 
 Returns the same type as the argument supplied.
 
-Notes
-=======
-
-* In ``VARCHAR`` strings, the ``source_expr`` and ``replacement_expr`` must be the same **byte length**. See :ref:`octet_length`.
-
-* If the value is NULL, the result is NULL.
+.. note:: If the value is NULL, the result is NULL.
 
 Examples
 ===========
diff --git a/reference/sql/sql_functions/scalar_functions/string/right.rst b/reference/sql/sql_functions/scalar_functions/string/right.rst
index 158de7da0..8cf750510 100644
--- a/reference/sql/sql_functions/scalar_functions/string/right.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/right.rst
@@ -28,7 +28,8 @@ Arguments
      - String expression
    * - ``character_count``
      - A positive integer that specifies how many characters to return.
-
+       If ``character_count <= 0``, an empty string is returned.
+
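+
+A quick sketch of the boundary behavior described above (assumed inputs):
+
+.. code-block:: postgres
+
+   SELECT RIGHT('Jabberwocky', 5); -- 'wocky'
+   SELECT RIGHT('Jabberwocky', 0); -- ''
+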
 Returns
 ============
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/rlike.rst b/reference/sql/sql_functions/scalar_functions/string/rlike.rst
index 324a6e525..3c61cb959 100644
--- a/reference/sql/sql_functions/scalar_functions/string/rlike.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/rlike.rst
@@ -99,14 +99,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/rtrim.rst b/reference/sql/sql_functions/scalar_functions/string/rtrim.rst
index 2bd5bbc38..61fdc877d 100644
--- a/reference/sql/sql_functions/scalar_functions/string/rtrim.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/rtrim.rst
@@ -35,7 +35,7 @@ Returns the same type as the argument supplied.
 Notes
 =======
 
-* When using ``VARCHAR`` values, SQream DB automatically trims the trailing whitespace. Using ``RTRIM`` on ``VARCHAR`` does not affect the result.
+* When using ``TEXT`` values, SQream DB automatically trims the trailing whitespace. Using ``RTRIM`` on ``TEXT`` does not affect the result.
 
 * This function is equivalent to the ANSI form ``TRIM( TRAILING FROM expr )``
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/substring.rst b/reference/sql/sql_functions/scalar_functions/string/substring.rst
index b07d951fb..6e6359167 100644
--- a/reference/sql/sql_functions/scalar_functions/string/substring.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/substring.rst
@@ -54,14 +54,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/trim.rst b/reference/sql/sql_functions/scalar_functions/string/trim.rst
index d6e90c2f8..d249c8952 100644
--- a/reference/sql/sql_functions/scalar_functions/string/trim.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/trim.rst
@@ -35,7 +35,7 @@ Returns the same type as the argument supplied.
 Notes
 =======
 
-* When using ``VARCHAR`` values, SQream DB automatically trims the trailing whitespace.
+* When using ``TEXT`` values, SQream DB automatically trims the trailing whitespace.
 
 * This function is equivalent to the ANSI form ``TRIM( BOTH FROM expr )``
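+
+A minimal sketch of this equivalence, using illustrative literals:
+
+.. code-block:: postgres
+
+   SELECT TRIM('  sqream  ');              -- returns 'sqream'
+   SELECT TRIM( BOTH FROM '  sqream  ');   -- returns 'sqream'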
 
diff --git a/reference/sql/sql_functions/scalar_functions/string/upper.rst b/reference/sql/sql_functions/scalar_functions/string/upper.rst
index 219bc854e..1f9cc1b96 100644
--- a/reference/sql/sql_functions/scalar_functions/string/upper.rst
+++ b/reference/sql/sql_functions/scalar_functions/string/upper.rst
@@ -45,7 +45,7 @@ For these examples, consider the following table and contents:
 
 .. code-block:: postgres
 
-   CREATE TABLE jabberwocky(line VARCHAR(50));
+   CREATE TABLE jabberwocky(line TEXT(50));
 
    INSERT INTO jabberwocky VALUES 
       ('''Twas brillig, and the slithy toves'), ('      Did gyre and gimble in the wabe:')
diff --git a/reference/sql/sql_functions/system_functions/index.rst b/reference/sql/sql_functions/system_functions/index.rst
deleted file mode 100644
index 734a9133a..000000000
--- a/reference/sql/sql_functions/system_functions/index.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. _system_functions_functions:
-
-********************
-System Functions
-********************
-
-System functions are used for working with database objects, settings, and values.
-
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-   :hidden:
-   
-   show_version
-
diff --git a/reference/sql/sql_functions/user_defined_functions/index.rst b/reference/sql/sql_functions/user_defined_functions/index.rst
index 225c9614e..028e35a07 100644
--- a/reference/sql/sql_functions/user_defined_functions/index.rst
+++ b/reference/sql/sql_functions/user_defined_functions/index.rst
@@ -4,22 +4,9 @@
 User-Defined Functions
 ********************
 
-The following user-defined functions are functions that can be defined and configured by users:
+User-defined functions (UDFs) are functions that can be defined and configured by users.
 
+The **User-Defined Functions** page describes the following:
 
-
-* `Python user-defined functions `_.
-* `Scalar SQL user-defined functions `_.
-
-
-
-
-.. toctree::
-   :maxdepth: 8
-   :glob:
-   :hidden:
-   
-   
-   
-   python_functions
-   scalar_sql_udf
+* `Python user-defined functions `_
+* `Scalar SQL user-defined functions `_
\ No newline at end of file
diff --git a/reference/sql/sql_functions/window_functions/first_value.rst b/reference/sql/sql_functions/window_functions/first_value.rst
index 07708872d..78896746a 100644
--- a/reference/sql/sql_functions/window_functions/first_value.rst
+++ b/reference/sql/sql_functions/window_functions/first_value.rst
@@ -5,7 +5,7 @@ FIRST_VALUE
 **************************
 The **FIRST_VALUE** function returns the value located in the selected column of the first row of a segment. If the table is not segmented, the FIRST_VALUE function returns the value from the first row of the whole table.
 
-This function returns the same type of variable that you input for your requested value. For example, requesting the value for the first employee in a list using an ``int`` type output returns an ``int`` type ID column. If you use a ``varchar`` type, the function returns a ``varchar`` type name column.
+This function returns the same type of variable that you input for your requested value. For example, requesting the value for the first employee in a list using an ``int`` type output returns an ``int`` type ID column. If you use a ``text`` type, the function returns a ``text`` type name column.
 
 Syntax
 -------
@@ -21,4 +21,4 @@ None
 
 Returns
 ---------
-Returns the value located in the selected column of the first row of a segment.
+Returns the value located in the selected column of the first row of a segment.
\ No newline at end of file
diff --git a/reference/sql/sql_functions/window_functions/index.rst b/reference/sql/sql_functions/window_functions/index.rst
index 4061ad239..e949e0e3c 100644
--- a/reference/sql/sql_functions/window_functions/index.rst
+++ b/reference/sql/sql_functions/window_functions/index.rst
@@ -4,26 +4,21 @@
 Window Functions
 ********************
 
-Window functions are functions applied over a subset (known as a window) of the rows returned by a :ref:`select` query. 
+Window functions are functions applied over a subset (known as a window) of the rows returned by a :ref:`select` query. This page describes the following functions:
 
-Read more about :ref:`window_functions` in the :ref:`sql_syntax` section.
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Window Functions:
-   :glob:
-   :hidden:
+.. hlist::
+   :columns: 1
    
-   lag
-   lead
-   row_number
-   rank
-   first_value
-   last_value
-   nth_value
-   dense_rank
-   percent_rank
-   cume_dist
-   ntile
-
+   * :ref:`lag`
+   * :ref:`lead`
+   * :ref:`row_number`
+   * :ref:`rank`
+   * :ref:`first_value`
+   * :ref:`last_value`
+   * :ref:`nth_value`
+   * :ref:`dense_rank`
+   * :ref:`percent_rank`
+   * :ref:`cume_dist`
+   * :ref:`ntile`
 
+For more information, see :ref:`window_functions` in the :ref:`sql_syntax` section.
\ No newline at end of file
diff --git a/reference/sql/sql_functions/window_functions/lag.rst b/reference/sql/sql_functions/window_functions/lag.rst
index e93be2821..96ea55bed 100644
--- a/reference/sql/sql_functions/window_functions/lag.rst
+++ b/reference/sql/sql_functions/window_functions/lag.rst
@@ -59,14 +59,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/window_functions/last_value.rst b/reference/sql/sql_functions/window_functions/last_value.rst
index 3cadaa9d1..ae1276e79 100644
--- a/reference/sql/sql_functions/window_functions/last_value.rst
+++ b/reference/sql/sql_functions/window_functions/last_value.rst
@@ -5,7 +5,7 @@ LAST_VALUE
 **************************
 The **LAST_VALUE** function returns the value located in the selected column of the last row of a segment. If the table is not segmented, the LAST_VALUE function returns the value from the last row of the whole table.
 
-This function returns the same type of variable that you input for your requested value. For example, requesting the value for the last employee in a list using an ``int`` type output returns an ``int`` type ID column. If you use a ``varchar`` type, the function returns a ``varchar`` type name column. 
+This function returns the same type of variable that you input for your requested value. For example, requesting the value for the last employee in a list using an ``int`` type output returns an ``int`` type ID column. If you use a ``text`` type, the function returns a ``text`` type name column. 
 
 Syntax
 -------
@@ -21,4 +21,4 @@ None
 
 Returns
 ---------
-Returns the value located in the selected column of the last row of a segment.
+Returns the value located in the selected column of the last row of a segment.
\ No newline at end of file
diff --git a/reference/sql/sql_functions/window_functions/lead.rst b/reference/sql/sql_functions/window_functions/lead.rst
index bc311689f..f1c52e1ec 100644
--- a/reference/sql/sql_functions/window_functions/lead.rst
+++ b/reference/sql/sql_functions/window_functions/lead.rst
@@ -59,14 +59,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/window_functions/nth_value.rst b/reference/sql/sql_functions/window_functions/nth_value.rst
index a2c1dd9a6..75e97a939 100644
--- a/reference/sql/sql_functions/window_functions/nth_value.rst
+++ b/reference/sql/sql_functions/window_functions/nth_value.rst
@@ -22,8 +22,8 @@ The following example shows the syntax for a table named ``superstore`` used for
 
    CREATE TABLE superstore
    (
-      "Section" varchar(40),
-      "Product_Name" varchar(40),
+      "Section" text(40),
+      "Product_Name" text(40),
       "Sales_In_K" int,
        );
 	   
diff --git a/reference/sql/sql_functions/window_functions/rank.rst b/reference/sql/sql_functions/window_functions/rank.rst
index 28856bd04..7699fd399 100644
--- a/reference/sql/sql_functions/window_functions/rank.rst
+++ b/reference/sql/sql_functions/window_functions/rank.rst
@@ -48,14 +48,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_functions/window_functions/row_number.rst b/reference/sql/sql_functions/window_functions/row_number.rst
index ea5786aef..cfcc14b7b 100644
--- a/reference/sql/sql_functions/window_functions/row_number.rst
+++ b/reference/sql/sql_functions/window_functions/row_number.rst
@@ -48,14 +48,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_statements/access_control_commands/alter_default_permissions.rst b/reference/sql/sql_statements/access_control_commands/alter_default_permissions.rst
index 220e05eb3..a49e53f88 100644
--- a/reference/sql/sql_statements/access_control_commands/alter_default_permissions.rst
+++ b/reference/sql/sql_statements/access_control_commands/alter_default_permissions.rst
@@ -3,22 +3,27 @@
 *****************************
 ALTER DEFAULT PERMISSIONS
 *****************************
+The **ALTER DEFAULT PERMISSIONS** page describes the following:
 
-``ALTER DEFAULT PERMISSIONS`` allows granting automatic permissions to future objects.
+.. contents:: 
+   :local:
+   :depth: 1
 
-By default, if one user creates a table, another user will not have ``SELECT`` permissions on it.
-By modifying the target role's default permissions, a database administrator can ensure that
-all objects created by that role will be accessible to others.
+Overview
+=============
+The ``ALTER DEFAULT PERMISSIONS`` command lets you grant automatic permissions to future objects.
+
+By default, users do not have ``SELECT`` permissions on tables created by other users. Database administrators can grant access to other users by modifying the target role's default permissions.
 
-Learn more about the permission system in the :ref:`access control guide`.
+For more information about access control, see :ref:`Access Control`.
 
 Permissions
 =============
-
-To alter default permissions, the current role must have the ``SUPERUSER`` permission.
+The ``SUPERUSER`` permission is required to alter default permissions.
 
 Syntax
 ==========
+The following is the syntax for altering default permissions:
 
 .. code-block:: postgres
 
@@ -38,6 +43,7 @@ Syntax
          | USAGE
          | SELECT
          | INSERT
+         | UPDATE
          | DELETE
          | DDL
          | EXECUTE
@@ -55,32 +61,71 @@ Syntax
    :start-line: 127
    :end-line: 180
 
-
 Examples
 ============
+This section includes the following examples:
 
-Automatic permissions for newly created schemas
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Granting Default Table Permissions
 -------------------------------------------------
+This example is based on the roles **r1** and **r2**, created as follows:
 
-When role ``demo`` creates a new schema, roles u1,u2 will get USAGE and CREATE permissions in the new schema:
+.. code-block:: postgres
+
+   create role r1;
+   create role r2;
+   alter default permissions for r1 for tables grant select to r2;
+
+Once the roles are created, you can run the following query against the catalog to inspect default table permissions:
 
 .. code-block:: postgres
 
-   ALTER DEFAULT PERMISSIONS FOR demo FOR SCHEMAS GRANT USAGE, CREATE TO u1,u2;
+   select
+     tdp.database_name as "database_name",
+     ss.schema_name as "schema_name",
+     rs1.name as "table_creator",
+     rs2.name as "grant_to",
+     pts.name  as "permission_type"
+   from sqream_catalog.table_default_permissions tdp
+   inner join sqream_catalog.roles rs1 on tdp.modifier_role_id = rs1.role_id
+   inner join sqream_catalog.roles rs2 on tdp.getter_role_id = rs2.role_id
+   left join sqream_catalog.schemas ss on tdp.schema_id = ss.schema_id
+   inner join sqream_catalog.permission_types pts on pts.permission_type_id=tdp.permission_type
+   ;   
+   
+The following is an example of the output generated by the above query:
 
++-----------------------+----------------------+-------------------+--------------+------------------------------+
+| **database_name**     | **schema_name**      | **table_creator** | **grant_to** | **permission_type**          |
++-----------------------+----------------------+-------------------+--------------+------------------------------+
+| master                |   NULL               | public            | public       | select                       | 
++-----------------------+----------------------+-------------------+--------------+------------------------------+
 
-Automatic permissions for newly created tables in a schema
-----------------------------------------------------------------
+For more information about default permissions, see `Default Permissions `_.  
+   
+Granting Automatic Permissions for Newly Created Schemas
+--------------------------------------------------------------
+When the role ``demo`` creates a new schema, roles **u1** and **u2** are granted ``USAGE`` and ``CREATE`` permissions in the new schema, as shown below:
 
-When role ``demo`` creates a new table in schema ``s1``, roles u1,u2 wil be granted with SELECT on it:
+.. code-block:: postgres
+
+   ALTER DEFAULT PERMISSIONS FOR demo FOR SCHEMAS GRANT USAGE, CREATE TO u1,u2;
+
+Granting Automatic Permissions for Newly Created Tables in a Schema
+--------------------------------------------------------------------------
+When the role ``demo`` creates a new table in schema ``s1``, roles **u1** and **u2** are granted ``SELECT`` permissions, as shown below:
 
 .. code-block:: postgres
 
    ALTER DEFAULT PERMISSIONS FOR demo IN s1 FOR TABLES GRANT SELECT TO u1,u2;
 
-Revoke (``DROP GRANT``) permissions for newly created tables
+Revoking Permissions from Newly Created Tables
 ---------------------------------------------------------------
+Permissions for newly created tables are revoked using the ``DROP GRANT`` clause, as shown below:
 
 .. code-block:: postgres
 
-   ALTER DEFAULT PERMISSIONS FOR public FOR TABLES DROP GRANT SELECT,DDL,INSERT,DELETE TO public;
+   ALTER DEFAULT PERMISSIONS FOR public FOR TABLES DROP GRANT SELECT,DDL,INSERT,DELETE TO public;
\ No newline at end of file
diff --git a/reference/sql/sql_statements/access_control_commands/drop_role.rst b/reference/sql/sql_statements/access_control_commands/drop_role.rst
index f519870cd..61c4d5bb9 100644
--- a/reference/sql/sql_statements/access_control_commands/drop_role.rst
+++ b/reference/sql/sql_statements/access_control_commands/drop_role.rst
@@ -13,7 +13,7 @@ See also :ref:`create_role`.
 Permissions
 =============
 
-To drop a role, the current role must have the ``SUPERUSER`` permission.
+To drop a role, the current role must have the ``SUPERUSER`` cluster-level permission.
 
 Syntax
 ==========
diff --git a/reference/sql/sql_statements/access_control_commands/grant.rst b/reference/sql/sql_statements/access_control_commands/grant.rst
index dfb48c212..d86522900 100644
--- a/reference/sql/sql_statements/access_control_commands/grant.rst
+++ b/reference/sql/sql_statements/access_control_commands/grant.rst
@@ -15,13 +15,9 @@ Learn more about the permission system in the :ref:`access control guide`, and use a more flexible foreign data wrapper concept. See :ref:`create_foreign_table` instead.
-   
-   Upgrading to a new version of SQream DB converts existing tables automatically. When creating a new external tables, use the new foreign table syntax.
-
-
-``CREATE TABLE`` creates a new external table in an existing database.
-
-See more in the :ref:`External tables guide`.
-
-.. tip::
-
-   * Data in an external table can change if the sources change, and frequent access to remote files may harm performance.
-
-   * To create a regular table, see :ref:`CREATE TABLE `
-
-Permissions
-=============
-
-The role must have the ``CREATE`` permission at the database level.
-
-Syntax
-==========
-
-.. code-block:: postgres
-
-   create_table_statement ::=
-       CREATE [ OR REPLACE ] EXTERNAL TABLE [schema_name].table_name (
-           { column_def [, ...] }
-       )
-       USING FORMAT format_def
-       WITH { external_table_option [ ...] }
-       ;
-
-   schema_name ::= identifier  
-
-   table_name ::= identifier  
-
-   format_def ::= { PARQUET | ORC | CSV }
-   
-   external_table_option ::= {
-      PATH '{ path_spec }' 
-      | FIELD DELIMITER '{ field_delimiter }'
-      | RECORD DELIMITER '{ record_delimiter }'
-      | AWS_ID '{ AWS ID }'
-      | AWS_SECRET '{ AWS SECRET }'
-   }
-   
-   path_spec ::= { local filepath | S3 URI | HDFS URI }
-   
-   field_delimiter ::= delimiter_character
-   
-   record_delimiter ::= delimiter_character
-      
-   column_def ::= { column_name type_name [ default ] [ column_constraint ] }
-
-   column_name ::= identifier
-   
-   column_constraint ::=
-       { NOT NULL | NULL }
-   
-   default ::=
-   
-       DEFAULT default_value
-       | IDENTITY [ ( start_with [ , increment_by ] ) ]
-
-.. _cet_parameters:
-
-Parameters
-============
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   
-   * - Parameter
-     - Description
-   * - ``OR REPLACE``
-     - Create a new table, and overwrite any existing table by the same name. Does not return an error if the table already exists. ``CREATE OR REPLACE`` does not check the table contents or structure, only the table name.
-   * - ``schema_name``
-     - The name of the schema in which to create the table.
-   * - ``table_name``
-     - The name of the table to create, which must be unique inside the schema.
-   * - ``column_def``
-     - A comma separated list of column definitions. A minimal column definition includes a name identifier and a datatype. Other column constraints and default values can be added optionally.
-   * - ``USING FORMAT ...``
-     - Specifies the format of the source files, such as ``PARQUET``, ``ORC``, or ``CSV``.
-   * - ``WITH PATH ...``
-     - Specifies a path or URI of the source files, such as ``/path/to/*.parquet``.
-   * - ``FIELD DELIMITER``
-     - Specifies the field delimiter for CSV files. Defaults to ``,``.
-   * - ``RECORD DELIMITER``
-     - Specifies the record delimiter for CSV files. Defaults to a newline, ``\n``
-   * - ``AWS_ID``, ``AWS_SECRET``
-     - Credentials for authenticated S3 access
-
-
-Examples
-===========
-
-A simple table from Tab-delimited file (TSV)
-----------------------------------------------
-
-.. code-block:: postgres
-
-   CREATE OR REPLACE EXTERNAL TABLE cool_animals
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, weight FLOAT NOT NULL)  
-   USING FORMAT csv 
-   WITH  PATH  '/home/rhendricks/cool_animals.csv'
-         FIELD DELIMITER '\t';
-
-
-A table from a directory of Parquet files on HDFS
------------------------------------------------------
-
-.. code-block:: postgres
-
-   CREATE EXTERNAL TABLE users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
-   USING FORMAT Parquet
-   WITH  PATH  'hdfs://hadoop-nn.piedpiper.com/rhendricks/users/*.parquet';
-
-A table from a bucket of files on S3
---------------------------------------
-
-.. code-block:: postgres
-
-   CREATE EXTERNAL TABLE users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
-   USING FORMAT Parquet
-   WITH  PATH  's3://pp-secret-bucket/users/*.parquet'
-         AWS_ID 'our_aws_id'
-         AWS_SECRET 'our_aws_secret';
-
-
-Changing an external table to a regular table
-------------------------------------------------
-
-Materializes an external table into a regular table.
-
-.. tip: Using an external table allows you to perform ETL-like operations in SQream DB by applying SQL functions and operations to raw files
-
-.. code-block:: postgres
-
-   CREATE TABLE real_table
-    AS SELECT * FROM external_table;
-
diff --git a/reference/sql/sql_statements/ddl_commands/create_foreign_table.rst b/reference/sql/sql_statements/ddl_commands/create_foreign_table.rst
index d50e13380..2cda212d6 100644
--- a/reference/sql/sql_statements/ddl_commands/create_foreign_table.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_foreign_table.rst
@@ -6,7 +6,7 @@ CREATE FOREIGN TABLE
 
 .. note:: 
    
-   Starting with SQream DB v2020.2, external tables have been renamed to foreign tables, and use a more flexible foreign data wrapper concept.
+   Starting with SQream DB v2020.2, external tables have been renamed to foreign tables, and use a more flexible foreign data wrapper concept. 
    
    Upgrading to a new version of SQream DB converts existing external tables automatically. 
 
@@ -113,7 +113,7 @@ A simple table from Tab-delimited file (TSV)
 .. code-block:: postgres
 
    CREATE OR REPLACE FOREIGN TABLE cool_animals
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, weight FLOAT NOT NULL)  
+     (id INT NOT NULL, name text(30) NOT NULL, weight FLOAT NOT NULL)  
    WRAPPER csv_fdw
    OPTIONS
      ( LOCATION = '/home/rhendricks/cool_animals.csv',
@@ -128,7 +128,7 @@ A table from a directory of Parquet files on HDFS
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name text(30) NOT NULL, email text(50) NOT NULL)  
    WRAPPER parquet_fdw
    OPTIONS
      (
@@ -141,7 +141,7 @@ A table from a bucket of ORC files on S3
 .. code-block:: postgres
 
    CREATE FOREIGN TABLE users
-     (id INT NOT NULL, name VARCHAR(30) NOT NULL, email VARCHAR(50) NOT NULL)  
+     (id INT NOT NULL, name text(30) NOT NULL, email text(50) NOT NULL)  
    WRAPPER orc_fdw
    OPTIONS
      (
diff --git a/reference/sql/sql_statements/ddl_commands/create_function.rst b/reference/sql/sql_statements/ddl_commands/create_function.rst
index 339543a0a..d28da9784 100644
--- a/reference/sql/sql_statements/ddl_commands/create_function.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_function.rst
@@ -4,7 +4,7 @@
 CREATE FUNCTION
 *****************
 
-``CREATE FUNCTION`` creates a new user-defined function (UDF) in an existing database.
+``CREATE FUNCTION`` creates a new user-defined function (UDF) in an existing database. 
 
 See more in our :ref:`Python UDF (user-defined functions)` guide.
 
@@ -52,7 +52,7 @@ Parameters
    * - ``argument_list``
      - A comma separated list of column definitions. A column definition includes a name identifier and a datatype.
    * - ``return_type``
-     - The SQL datatype of the return value, such as ``INT``, ``VARCHAR``, etc.
+     - The SQL datatype of the return value, such as ``INT``, ``TEXT``, etc.
    * - ``function_body``
      - Python code, dollar-quoted (``$$``). 
 
diff --git a/reference/sql/sql_statements/ddl_commands/create_schema.rst b/reference/sql/sql_statements/ddl_commands/create_schema.rst
index e85f328a9..aefa64b4f 100644
--- a/reference/sql/sql_statements/ddl_commands/create_schema.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_schema.rst
@@ -3,7 +3,7 @@
 *****************
 CREATE SCHEMA
 *****************
-The **CREATE SCHEMA** page describes the following:
+The **CREATE SCHEMA** page describes the following: 
 
 
 .. contents:: 
diff --git a/reference/sql/sql_statements/ddl_commands/create_table.rst b/reference/sql/sql_statements/ddl_commands/create_table.rst
index b660e442c..5ca988f19 100644
--- a/reference/sql/sql_statements/ddl_commands/create_table.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_table.rst
@@ -51,7 +51,7 @@ The following parameters can be used when creating a table:
    * - Parameter
      - Description
    * - ``OR REPLACE``
-     - Creates a new tables and overwrites any existing table by the same name. Does not return an error if the table already exists. ``CREATE OR REPLACE`` does not check the table contents or structure, only the table name.
+     - Creates a new table and overwrites any existing table by the same name. Does not return an error if the table already exists. ``CREATE OR REPLACE`` does not check table contents or structure, only the table name.
    * - ``schema_name``
      - The name of the schema in which to create the table.
    * - ``table_name``
@@ -62,7 +62,7 @@ The following parameters can be used when creating a table:
      - 
          A commma separated list of clustering column keys.
          
-         See :ref:`data_clustering` for more information.
+         See :ref:`flexible_data_clustering` for more information.
    * - ``LIKE``
      - Duplicates the column structure of an existing table.
 	 
@@ -74,7 +74,7 @@ Default Value Constraints
 
 The ``DEFAULT`` value constraint specifies a value to use if one is not defined in an :ref:`insert` or :ref:`copy_from` statement. 
 
-The value may be either a literal or **GETDATE()**, which is evaluated at the time the row is created.
+The default value may be either a literal or ``NULL``.
 
 .. note:: The ``DEFAULT`` constraint only applies if the column does not have a value specified in the :ref:`insert` or :ref:`copy_from` statement. You can still insert a ``NULL`` into an nullable column by explicitly inserting ``NULL``. For example, ``INSERT INTO cool_animals VALUES (1, 'Gnu', NULL)``.
 
@@ -141,7 +141,7 @@ The following is an example of the syntax used to create a standard table:
 
    CREATE TABLE cool_animals (
       id INT NOT NULL,
-      name varchar(30) NOT NULL,
+      name text(30) NOT NULL,
       weight FLOAT,
       is_agressive BOOL
    );
@@ -155,7 +155,7 @@ The following is an example of the syntax used to create a table with default va
 
    CREATE TABLE cool_animals (
       id INT NOT NULL,
-      name varchar(30) NOT NULL,
+      name text(30) NOT NULL,
       weight FLOAT,
       is_agressive BOOL DEFAULT false NOT NULL
    );
@@ -171,14 +171,11 @@ The following is an example of the syntax used to create a table with an identit
 
    CREATE TABLE users (
       id BIGINT IDENTITY(0,1) NOT NULL , -- Start with 0, increment by 1
-      name VARCHAR(30) NOT NULL,
-      country VARCHAR(30) DEFAULT 'Unknown' NOT NULL
+      name TEXT(30) NOT NULL,
+      country TEXT(30) DEFAULT 'Unknown' NOT NULL
    );
 
-.. note:: 
-   * Identity columns are supported on ``BIGINT`` columns.
-   
-   * Identity does not enforce the uniqueness of values. The identity value can be bypassed by specifying it in an :ref:`insert` command.
+.. note:: Identity does not enforce the uniqueness of values. The identity value can be bypassed by specifying it in an :ref:`insert` command.
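+
+For example, the identity value of the ``users`` table above can be bypassed by supplying ``id`` explicitly (the values shown are illustrative):
+
+.. code-block:: postgres
+
+   INSERT INTO users (id, name, country) VALUES (9999, 'Jane', 'US');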
 
 Creating a Table from a SELECT Query
 -----------------------------------------
@@ -203,9 +200,9 @@ The following is an example of the syntax used to create a table with a clusteri
 .. code-block:: postgres
 
    CREATE TABLE users (
-      name VARCHAR(30) NOT NULL,
+      name TEXT(30) NOT NULL,
       start_date datetime not null,
-      country VARCHAR(30) DEFAULT 'Unknown' NOT NULL
+      country TEXT(30) DEFAULT 'Unknown' NOT NULL
    ) CLUSTER BY start_date;
    
 For more information on data clustering, see :ref:`data_clustering`.
@@ -261,9 +258,9 @@ Either of the following examples can be used to create a second table based on t
    
 The generated output of both of the statements above is identical.
    
-Creating a Table based on External Tables and Views
+Creating a Table based on Foreign Tables and Views
 ~~~~~~~~~~~~
-The following is example of creating a table based on external tables and views:
+The following is an example of creating a table based on foreign tables and views:
 
 
 .. code-block:: postgres
@@ -271,7 +268,25 @@ The following is example of creating a table based on external tables and views:
    CREATE VIEW v as SELECT x+1,y,y || 'abc' from t1;
    CREATE TABLE t3 LIKE v;
 
-When duplicating the column structure of an existing table, the target table of the ``LIKE`` clause can be a regular or an external table, or a view.
+When duplicating the column structure of an existing table, the target table of the ``LIKE`` clause can be a native table, an external table, or a view.
+
+The following table describes which properties are copied from the target table to the newly created table:
+
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| **Property**                | **Native Table** | **External Table**              | **View**                        |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| Column names                | Copied           | Copied                          | Copied                          |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| Column types                | Copied           | Copied                          | Copied                          |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| ``NULL``/``NOT NULL``       | Copied           | Copied                          | Copied                          |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| ``text`` length constraints | Copied           | Copied                          | Does not exist in source object |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| Compression specification   | Copied           | Does not exist in source object | Does not exist in source object |
++-----------------------------+------------------+---------------------------------+---------------------------------+
+| Default/identity            | Copied           | Does not exist in source object | Does not exist in source object |
++-----------------------------+------------------+---------------------------------+---------------------------------+
 
 Permissions
 =============
diff --git a/reference/sql/sql_statements/ddl_commands/create_table_as.rst b/reference/sql/sql_statements/ddl_commands/create_table_as.rst
index a7f9dd4d4..7ffd565df 100644
--- a/reference/sql/sql_statements/ddl_commands/create_table_as.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_table_as.rst
@@ -3,7 +3,7 @@
 *****************
 CREATE TABLE AS
 *****************
-
+ 
 The ``CREATE TABLE AS`` commands creates a new table from the result of a select query.
 
 
@@ -64,6 +64,8 @@ This section includes the following examples:
    :local:
    :depth: 1
 
+.. warning:: The ``SELECT`` statement decrypts information by default. When executing ``CREATE TABLE AS SELECT``, encrypted information will appear as clear text in the newly created table.
+
 Creating a Copy of a Foreign Table or View
 ---------------------------------------------------------------------------
 
diff --git a/reference/sql/sql_statements/ddl_commands/create_view.rst b/reference/sql/sql_statements/ddl_commands/create_view.rst
index 9812ddeec..4c6a98427 100644
--- a/reference/sql/sql_statements/ddl_commands/create_view.rst
+++ b/reference/sql/sql_statements/ddl_commands/create_view.rst
@@ -3,7 +3,7 @@
 *****************
 CREATE VIEW
 *****************
-
+ 
 ``CREATE VIEW`` creates a new view in an existing database. A view is a virtual table.
 
 .. tip:: 
diff --git a/reference/sql/sql_statements/ddl_commands/drop_clustering_key.rst b/reference/sql/sql_statements/ddl_commands/drop_clustering_key.rst
index 41b10bdfa..06cb848ad 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_clustering_key.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_clustering_key.rst
@@ -3,7 +3,7 @@
 **********************
 DROP CLUSTERING KEY
 **********************
-
+ 
 ``DROP CLUSTERING KEY`` drops all clustering keys in a table.
 
 Read our :ref:`data_clustering` guide for more information.
diff --git a/reference/sql/sql_statements/ddl_commands/drop_column.rst b/reference/sql/sql_statements/ddl_commands/drop_column.rst
index 391367e16..f4c4e4504 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_column.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_column.rst
@@ -3,7 +3,7 @@
 **********************
 DROP COLUMN
 **********************
-
+ 
 ``DROP COLUMN`` can be used to remove columns from a table.
 
 Permissions
diff --git a/reference/sql/sql_statements/ddl_commands/drop_database.rst b/reference/sql/sql_statements/ddl_commands/drop_database.rst
index 0cfbbcd30..f45b0a5c4 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_database.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_database.rst
@@ -3,7 +3,7 @@
 **********************
 DROP DATABASE
 **********************
-
+ 
 ``DROP DATABASE`` can be used to remove a database and all of its objects.
 
 Permissions
diff --git a/reference/sql/sql_statements/ddl_commands/drop_function.rst b/reference/sql/sql_statements/ddl_commands/drop_function.rst
index 726085f9f..98b957ad8 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_function.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_function.rst
@@ -3,7 +3,7 @@
 **********************
 DROP FUNCTION
 **********************
-
+ 
 ``DROP FUNCTION`` can be used to remove a user defined function.
 
 Permissions
diff --git a/reference/sql/sql_statements/ddl_commands/drop_schema.rst b/reference/sql/sql_statements/ddl_commands/drop_schema.rst
index c10ab7f8f..0f2dfd454 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_schema.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_schema.rst
@@ -3,7 +3,7 @@
 **********************
 DROP SCHEMA
 **********************
-
+ 
 ``DROP SCHEMA`` can be used to remove a schema.
 
 The schema has to be empty before removal. 
diff --git a/reference/sql/sql_statements/ddl_commands/drop_table.rst b/reference/sql/sql_statements/ddl_commands/drop_table.rst
index e2a704ff8..53fdc8445 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_table.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_table.rst
@@ -3,7 +3,7 @@
 **********************
 DROP TABLE
 **********************
-
+ 
 ``DROP TABLE`` can be used to remove a table and all of its contents.
 
 Permissions
diff --git a/reference/sql/sql_statements/ddl_commands/drop_view.rst b/reference/sql/sql_statements/ddl_commands/drop_view.rst
index e93629ab4..6e18254e6 100644
--- a/reference/sql/sql_statements/ddl_commands/drop_view.rst
+++ b/reference/sql/sql_statements/ddl_commands/drop_view.rst
@@ -3,7 +3,7 @@
 **********************
 DROP VIEW
 **********************
-
+ 
 ``DROP VIEW`` can be used to remove a view.
 
 Because a view is logical, this does not affect any data in any of the referenced tables.
diff --git a/reference/sql/sql_statements/ddl_commands/rename_column.rst b/reference/sql/sql_statements/ddl_commands/rename_column.rst
index f91933f71..1022ce0f2 100644
--- a/reference/sql/sql_statements/ddl_commands/rename_column.rst
+++ b/reference/sql/sql_statements/ddl_commands/rename_column.rst
@@ -3,16 +3,11 @@
 **********************
 RENAME COLUMN
 **********************
-
-``RENAME COLUMN`` can be used to rename columns in a table.
-
-Permissions
-=============
-
-The role must have the ``DDL`` permission at the database or table level.
+The ``RENAME COLUMN`` command can be used to rename columns in a table.
 
 Syntax
 ==========
+The following is the correct syntax for the ``RENAME COLUMN`` command:
 
 .. code-block:: postgres
 
@@ -30,6 +25,7 @@ Syntax
 
 Parameters
 ============
+The following table describes the ``RENAME COLUMN`` parameters:
 
 .. list-table:: 
    :widths: auto
@@ -48,18 +44,29 @@ Parameters
      
 Examples
 ===========
+The **Examples** section includes the following examples:
 
-Renaming a column
+.. contents::
+   :local:
+   :depth: 1
+
+Renaming a Column
 -----------------------------------------
+The following is an example of renaming a column:
 
 .. code-block:: postgres
 
    -- Rename the 'weight' column to 'mass'
    ALTER TABLE users RENAME COLUMN weight TO mass;
 
-Renaming a quoted name
+Renaming a Quoted Name
 --------------------------
+The following is an example of renaming a quoted name:
 
 .. code-block:: postgres
 
-   ALTER TABLE users RENAME COLUMN "mass" TO "Mass (Kilograms);
\ No newline at end of file
+   ALTER TABLE users RENAME COLUMN "mass" TO "Mass (Kilograms)";
+   
+Permissions
+=============
+The role must have the ``DDL`` permission at the database or table level.
\ No newline at end of file
diff --git a/reference/sql/sql_statements/ddl_commands/rename_table.rst b/reference/sql/sql_statements/ddl_commands/rename_table.rst
index e24ba6efe..96cc7102e 100644
--- a/reference/sql/sql_statements/ddl_commands/rename_table.rst
+++ b/reference/sql/sql_statements/ddl_commands/rename_table.rst
@@ -3,8 +3,8 @@
 **********************
 RENAME TABLE
 **********************
-
-``RENAME TABLE`` can be used to rename a table.
+ 
+``RENAME TABLE`` can be used to rename a table. 
 
 .. warning:: Renaming a table can void existing views that use this table. See more about :ref:`recompiling views `.
 
diff --git a/reference/sql/sql_statements/dml_commands/copy_to.rst b/reference/sql/sql_statements/dml_commands/copy_to.rst
index 61e6b35b2..167f70c23 100644
--- a/reference/sql/sql_statements/dml_commands/copy_to.rst
+++ b/reference/sql/sql_statements/dml_commands/copy_to.rst
@@ -3,20 +3,23 @@
 **********************
 COPY TO
 **********************
+The **COPY TO** page includes the following sections:
 
+.. contents:: 
+   :local:
+   :depth: 1
+
+Overview
+==========
 ``COPY ... TO`` is a statement that can be used to export data from a SQream database table or query to a file on the filesystem.
 
 In general, ``COPY`` moves data between filesystem files and SQream DB tables.
 
 .. note:: To copy data from a file to a table, see :ref:`COPY FROM`.
 
-Permissions
-=============
-
-The role must have the ``SELECT`` permission on every table or schema that is referenced by the statement.
-
 Syntax
 ==========
+The following is the correct syntax for using the **COPY TO** statement:
 
 .. code-block:: postgres
 
@@ -29,7 +32,7 @@ Syntax
        )
        ;
        
-   fdw_name ::= csw_fdw | parquet_fdw | orc_fdw
+   fdw_name ::= csv_fdw | parquet_fdw | orc_fdw
    
    schema_name ::= identifer
   
@@ -48,6 +51,11 @@ Syntax
       | AWS_ID = '{ AWS ID }'
       
       | AWS_SECRET = '{ AWS Secret }'
+	  
+      | MAX_FILE_SIZE = '{ size_in_bytes }'
+	  
+      | ENFORCE_SINGLE_FILE = { true | false }
+
 
   delimiter ::= string
 
@@ -57,8 +65,12 @@ Syntax
 
   AWS Secret ::= string
 
+  
+.. note:: In Studio, you must write the parameters in lowercase letters. Uppercase letters generate an error.
+
 Elements
 ============
+The following table describes the ``COPY ... TO`` elements:
 
 .. list-table:: 
    :widths: auto
@@ -71,50 +83,343 @@ Elements
    * - ``query``
      - An SQL query that returns a table result, or a table name
    * - ``fdw_name``
-     - The name of the Foreign Data Wrapper to use. Supported FDWs are ``csv_fdw``, ``orc_fdw``, or ``parquet_fdw``.
+     - The name of the Foreign Data Wrapper to use. Supported FDWs are ``csv_fdw``, ``orc_fdw``, ``avro_fdw``, or ``parquet_fdw``.
    * - ``LOCATION``
      - A path on the local filesystem, S3, or HDFS URI. For example, ``/tmp/foo.csv``, ``s3://my-bucket/foo.csv``, or ``hdfs://my-namenode:8020/foo.csv``. The local path must be an absolute path that SQream DB can access.
    * - ``HEADER``
      - The CSV file will contain a header line with the names of each column in the file. This option is allowed only when using CSV format.
    * - ``DELIMITER``
-     - Specifies the character that separates fields (columns) within each row of the file. The default is a comma character (``,``).
+     - Specifies the character or string that separates fields (columns) within each row of the file. The default is a comma character (``,``). This option is allowed only when using CSV format.
+   * - ``RECORD_DELIMITER``
+     - Specifies the character or string that separates records in a data set. This option is allowed only when using CSV format.
    * - ``AWS_ID``, ``AWS_SECRET``
      - Specifies the authentication details for secured S3 buckets
+   * - ``MAX_FILE_SIZE``
+     - Sets the maximum file size in bytes. Default value: ``16777216`` (16 MB).
+   * - ``ENFORCE_SINGLE_FILE``
+     - Determines whether the export is written to a single file. Permitted values: ``true`` - creates one file whose size is not limited by the ``MAX_FILE_SIZE`` setting; ``false`` - permits creating several files, whose combined size cannot exceed the ``MAX_FILE_SIZE`` setting. Default value: ``true``.
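+
+For illustration, the following sketch (the table name ``t`` and the output path are hypothetical, and the ``COPY ... TO WRAPPER`` form follows the Syntax section above) combines the two new options to split an export into several CSV files of up to 1 GB each rather than one unbounded file:
+
+.. code-block:: postgres
+
+   COPY t TO WRAPPER csv_fdw
+      OPTIONS
+      (
+         LOCATION = '/tmp/t.csv',
+         MAX_FILE_SIZE = '1073741824',
+         ENFORCE_SINGLE_FILE = false
+      );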
 
-Usage notes
+Usage Notes
 ===============
+The **Usage Notes** section describes the following topics:
+
+.. contents:: 
+   :local:
+   :depth: 1
 
-Supported field delimiters
+Supported Field Delimiters
 ------------------------------
+The **Supported Field Delimiters** section describes the following:
 
-Printable characters
-^^^^^^^^^^^^^^^^^^^^^
+.. contents:: 
+   :local:
+   :depth: 1
 
+Printable ASCII Characters
+^^^^^^^^^^^^^^^^^^^^^^^^^^
 Any printable ASCII character can be used as a delimiter without special syntax. The default CSV field delimiter is a comma (``,``).
 
-A printable character is any ASCII character in the range 32 - 126.
-
-Non-printable characters
+The following table shows the supported printable ASCII characters:
+
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| **Character** | **Description**      | **ASCII** | **Octal** | **Hex** | **Binary** | **HTML Code** | **HTML Name** |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| (Space)       | Space                | 32        | 40        | 20      | 100000     |           |               |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| !             | Exclamation Mark     | 33        | 41        | 21      | 100001     | !         | !        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| #             | Hash or Number       | 35        | 43        | 23      | 100011     | #         | #         |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| $             | Dollar Sign          | 36        | 44        | 24      | 100100     | $         | $      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| %             | Percentage           | 37        | 45        | 25      | 100101     | %         | %      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| &             | Ampersand            | 38        | 46        | 26      | 100110     | &         | &         |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| (             | Left Parenthesis     | 40        | 50        | 28      | 101000     | (         | (        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| )             | Right Parenthesis    | 41        | 51        | 29      | 101001     | )         | )        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| *             | Asterisk             | 42        | 52        | 2A      | 101010     | *         | *         |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| +             | Plus Sign            | 43        | 53        | 2B      | 101011     | +         | +        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ,             | Comma                | 44        | 54        | 2C      | 101100     | ,         | ,       |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| /             | Slash                | 47        | 57        | 2F      | 101111     | /         | /         |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ;             | Semicolon            | 59        | 73        | 3B      | 111011     | ;         | ;        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| <             | Less Than            | 60        | 74        | 3C      | 111100     | <         | <          |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| =             | Equals Sign          | 61        | 75        | 3D      | 111101     | =         | =      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| >             | Greater Than         | 62        | 76        | 3E      | 111110     | >         | >          |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ?             | Question Mark        | 63        | 77        | 3F      | 111111     | ?         | ?       |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| @             | At Sign              | 64        | 100       | 40      | 1000000    | @         | @      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| [             | Left Square Bracket  | 91        | 133       | 5B      | 1011011    | [         | [        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| \             | Backslash            | 92        | 134       | 5C      | 1011100    | \&\#92\;      | \        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ]             | Right Square Bracket | 93        | 135       | 5D      | 1011101    | ]         | ]        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ^             | Caret or Circumflex  | 94        | 136       | 5E      | 1011110    | ^         | &hat;         |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| _             | Underscore           | 95        | 137       | 5F      | 1011111    | _         | _      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| `             | Grave Accent         | 96        | 140       | 60      | 1100000    | `         | `       |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| {             | Left Curly Bracket   | 123       | 173       | 7B      | 1111011    | {        | {        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| |             | Vertical Bar         | 124       | 174       | 7C      | 1111100    | |        | |      |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| }             | Right Curly Bracket  | 125       | 175       | 7D      | 1111101    | }        | }        |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ~             | Tilde                | 126       | 176       | 7E      | 1111110    | ~        | ˜       |
++---------------+----------------------+-----------+-----------+---------+------------+---------------+---------------+
+
+Non-Printable ASCII Characters
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A non-printable character (1 - 31, 127) can be used in its octal form. 
-
+The following table shows the supported non-printable ASCII characters:
+
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| **Character** | **Description**           | **Octal** | **ASCII** | **Hex** | **Binary** | **HTML Code** | **HTML Name** |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| NUL           | Null                      | 0         | 0         | 0       | 0          | �          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| SOH           | Start of Heading          | 1         | 1         | 1       | 1          |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| STX           | Start of Text             | 2         | 2         | 2       | 10         |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ETX           | End of Text               | 3         | 3         | 3       | 11         |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| EOT           | End of Transmission       | 4         | 4         | 4       | 100        |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ENQ           | Enquiry                   | 5         | 5         | 5       | 101        |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ACK           | Acknowledge               | 6         | 6         | 6       | 110        |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| BEL           | Bell                      | 7         | 7         | 7       | 111        |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| BS            | Backspace                 | 10        | 8         | 8       | 1000       |           |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| HT            | Horizontal Tab            | 11        | 9         | 9       | 1001       | 	          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| VT            | Vertical Tab              | 13        | 11        | 0B      | 1011       |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| FF            | NP Form Feed, New Page    | 14        | 12        | 0C      | 1100       |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| SO            | Shift Out                 | 16        | 14        | 0E      | 1110       |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| SI            | Shift In                  | 17        | 15        | 0F      | 1111       |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DLE           | Data Link Escape          | 20        | 16        | 10      | 10000      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DC1           | Device Control 1          | 21        | 17        | 11      | 10001      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DC2           | Device Control 2          | 22        | 18        | 12      | 10010      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DC3           | Device Control 3          | 23        | 19        | 13      | 10011      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DC4           | Device Control 4          | 24        | 20        | 14      | 10100      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| NAK           | Negative Acknowledge      | 25        | 21        | 15      | 10101      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| SYN           | Synchronous Idle          | 26        | 22        | 16      | 10110      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ETB           | End of Transmission Block | 27        | 23        | 17      | 10111      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| CAN           | Cancel                    | 30        | 24        | 18      | 11000      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| EM            | End of Medium             | 31        | 25        | 19      | 11001      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| SUB           | Substitute                | 32        | 26        | 1A      | 11010      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| ESC           | Escape                    | 33        | 27        | 1B      | 11011      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| FS            | File Separator            | 34        | 28        | 1C      | 11100      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| GS            | Group Separator           | 35        | 29        | 1D      | 11101      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| RS            | Record Separator          | 36        | 30        | 1E      | 11110      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| US            | Unit Separator            | 37        | 31        | 1F      | 11111      |          |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+| DEL           | Delete                    | 177       | 127       | 7F      | 1111111    |         |               |
++---------------+---------------------------+-----------+-----------+---------+------------+---------------+---------------+
+   
 A tab can be specified by escaping it, for example ``\t``. Other non-printable characters can be specified using their octal representations, by using the ``E'\000'`` format, where ``000`` is the octal value of the character.
 
 For example, ASCII character ``15``, known as "shift in", can be specified using ``E'\017'``.
 
+.. note:: Delimiters are only applicable to the CSV file format.
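+
+As a sketch (the table name ``t`` and the output path are hypothetical, and the ``COPY ... TO WRAPPER`` form follows the Syntax section above), the octal form can be passed directly as the delimiter:
+
+.. code-block:: postgres
+
+   COPY t TO WRAPPER csv_fdw
+      OPTIONS (LOCATION = '/tmp/t.csv', DELIMITER = E'\017');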
 
-Date format
+Unsupported ASCII Field Delimiters
+----------------------------------
+The following table shows the unsupported ASCII field delimiters:
+
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| **ASCII** | **Character** | **Description**        | **Octal** | **Hex** | **Binary** | **HTML Code** | **HTML Name** |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 10        | LF            | NL Line Feed, New Line | 12        | 0A      | 1010       | 
         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 13        | CR            | Carriage Return        | 15        | 0D      | 1101       | 
         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 34        | "             | Double Quote           | 42        | 22      | 100010     | "         | "        |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 45        | -             | Minus Sign             | 55        | 2D      | 101101     | -         | −       |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 46        | .             | Period                 | 56        | 2E      | 101110     | .         | .      |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 48        | 0             | Zero                   | 60        | 30      | 110000     | 0         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 49        | 1             | Number One             | 61        | 31      | 110001     | 1         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 50        | 2             | Number Two             | 62        | 32      | 110010     | 2         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 51        | 3             | Number Three           | 63        | 33      | 110011     | 3         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 52        | 4             | Number Four            | 64        | 34      | 110100     | 4         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 53        | 5             | Number Five            | 65        | 35      | 110101     | 5         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 54        | 6             | Number Six             | 66        | 36      | 110110     | 6         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 55        | 7             | Number Seven           | 67        | 37      | 110111     | 7         |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 56        | 8             | Number Eight           | 70        | 38      | 111000     | 8             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 57        | 9             | Number Nine            | 71        | 39      | 111001     | 9             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 58        | :             | Colon                  | 72        | 3A      | 111010     | :             | :             |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 65        | A             | Upper Case Letter A    | 101       | 41      | 1000001    | A             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 66        | B             | Upper Case Letter B    | 102       | 42      | 1000010    | B             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 67        | C             | Upper Case Letter C    | 103       | 43      | 1000011    | C             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 68        | D             | Upper Case Letter D    | 104       | 44      | 1000100    | D             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 69        | E             | Upper Case Letter E    | 105       | 45      | 1000101    | E             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 70        | F             | Upper Case Letter F    | 106       | 46      | 1000110    | F             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 71        | G             | Upper Case Letter G    | 107       | 47      | 1000111    | G             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 72        | H             | Upper Case Letter H    | 110       | 48      | 1001000    | H             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 73        | I             | Upper Case Letter I    | 111       | 49      | 1001001    | I             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 74        | J             | Upper Case Letter J    | 112       | 4A      | 1001010    | J             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 75        | K             | Upper Case Letter K    | 113       | 4B      | 1001011    | K             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 76        | L             | Upper Case Letter L    | 114       | 4C      | 1001100    | L             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 77        | M             | Upper Case Letter M    | 115       | 4D      | 1001101    | M             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 78        | N             | Upper Case Letter N    | 116       | 4E      | 1001110    | N             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 79        | O             | Upper Case Letter O    | 117       | 4F      | 1001111    | O             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 80        | P             | Upper Case Letter P    | 120       | 50      | 1010000    | P             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 81        | Q             | Upper Case Letter Q    | 121       | 51      | 1010001    | Q             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 82        | R             | Upper Case Letter R    | 122       | 52      | 1010010    | R             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 83        | S             | Upper Case Letter S    | 123       | 53      | 1010011    | S             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 84        | T             | Upper Case Letter T    | 124       | 54      | 1010100    | T             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 85        | U             | Upper Case Letter U    | 125       | 55      | 1010101    | U             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 86        | V             | Upper Case Letter V    | 126       | 56      | 1010110    | V             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 87        | W             | Upper Case Letter W    | 127       | 57      | 1010111    | W             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 88        | X             | Upper Case Letter X    | 130       | 58      | 1011000    | X             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 89        | Y             | Upper Case Letter Y    | 131       | 59      | 1011001    | Y             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 90        | Z             | Upper Case Letter Z    | 132       | 5A      | 1011010    | Z             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 92        | \\            | Backslash              | 134       | 5C      | 1011100    | \&\#92\;      |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 97        | a             | Lower Case Letter a    | 141       | 61      | 1100001    | a             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 98        | b             | Lower Case Letter b    | 142       | 62      | 1100010    | b             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 99        | c             | Lower Case Letter c    | 143       | 63      | 1100011    | c             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 100       | d             | Lower Case Letter d    | 144       | 64      | 1100100    | d             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 101       | e             | Lower Case Letter e    | 145       | 65      | 1100101    | e             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 102       | f             | Lower Case Letter f    | 146       | 66      | 1100110    | f             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 103       | g             | Lower Case Letter g    | 147       | 67      | 1100111    | g             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 104       | h             | Lower Case Letter h    | 150       | 68      | 1101000    | h             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 105       | i             | Lower Case Letter i    | 151       | 69      | 1101001    | i             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 106       | j             | Lower Case Letter j    | 152       | 6A      | 1101010    | j             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 107       | k             | Lower Case Letter k    | 153       | 6B      | 1101011    | k             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 108       | l             | Lower Case Letter l    | 154       | 6C      | 1101100    | l             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 109       | m             | Lower Case Letter m    | 155       | 6D      | 1101101    | m             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 110       | n             | Lower Case Letter n    | 156       | 6E      | 1101110    | n             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 111       | o             | Lower Case Letter o    | 157       | 6F      | 1101111    | o             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 112       | p             | Lower Case Letter p    | 160       | 70      | 1110000    | p             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 113       | q             | Lower Case Letter q    | 161       | 71      | 1110001    | q             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 114       | r             | Lower Case Letter r    | 162       | 72      | 1110010    | r             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 115       | s             | Lower Case Letter s    | 163       | 73      | 1110011    | s             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 116       | t             | Lower Case Letter t    | 164       | 74      | 1110100    | t             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 117       | u             | Lower Case Letter u    | 165       | 75      | 1110101    | u             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 118       | v             | Lower Case Letter v    | 166       | 76      | 1110110    | v             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 119       | w             | Lower Case Letter w    | 167       | 77      | 1110111    | w             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 120       | x             | Lower Case Letter x    | 170       | 78      | 1111000    | x             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 121       | y             | Lower Case Letter y    | 171       | 79      | 1111001    | y             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+| 122       | z             | Lower Case Letter z    | 172       | 7A      | 1111010    | z             |               |
++-----------+---------------+------------------------+-----------+---------+------------+---------------+---------------+
+
+Date Format
 ---------------
-
 The date format in the output CSV is formatted as ISO 8601 (``2019-12-31 20:30:55.123``), regardless of how it was parsed initially with :ref:`COPY FROM date parsers`.
 
+For more information on the ``datetime`` format, see :ref:`sql_data_types_date`.
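+
+For example, exporting a ``datetime`` column always produces ISO 8601 values, regardless of the format used at load time. The following sketch assumes a hypothetical table ``t`` with a ``datetime`` column named ``date_col``:
+
+.. code-block:: psql
+
+	COPY (SELECT date_col FROM t) TO WRAPPER csv_fdw OPTIONS (LOCATION = '/tmp/dates.csv');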
 
 Examples
 ===========
+The **Examples** section shows the following examples:
+
+.. contents:: 
+   :local:
+   :depth: 1
 
-Export table to a CSV without HEADER
+Exporting a Table to a CSV File without a HEADER Row
 -----------------------------------------------------
+The following is an example of exporting a table to a CSV file without a HEADER row:
 
 .. code-block:: psql
    
@@ -130,8 +435,9 @@ Export table to a CSV without HEADER
    Jonas Jerebko,Boston Celtics,8,PF,29,6-10,231,\N,5000000
    Amir Johnson,Boston Celtics,90,PF,29,6-9,240,\N,12000000
 
-Export table to a CSV with a HEADER row
+Exporting a Table to a CSV File with a HEADER Row
 --------------------------------------------------
+The following is an example of exporting a table to a CSV file with a HEADER row:
 
 .. code-block:: psql
    
@@ -147,12 +453,13 @@ Export table to a CSV with a HEADER row
    R.J. Hunter,Boston Celtics,28,SG,22,6-5,185,Georgia State,1148640
    Jonas Jerebko,Boston Celtics,8,PF,29,6-10,231,\N,5000000
 
-Export table to a TSV with a header row
+Exporting a Table to a TSV File with a HEADER Row
 --------------------------------------------------
+The following is an example of exporting a table to a TSV file with a HEADER row:
 
 .. code-block:: psql
    
-	COPY nba TO WRAPPER csv_fdw OPTIONS (LOCATION = '/tmp/nba_export.csv', DELIMITER = '|', HEADER = true);
+	COPY nba TO WRAPPER csv_fdw OPTIONS (LOCATION = '/tmp/nba_export.tsv', DELIMITER = '\t', HEADER = true);
 
 .. code-block:: console
    
@@ -164,8 +471,9 @@ Export table to a TSV with a header row
    R.J. Hunter     Boston Celtics  28      SG      22      6-5     185     Georgia State   1148640
    Jonas Jerebko   Boston Celtics  8       PF      29      6-10    231     \N     5000000
 
-Use non-printable ASCII characters as delimiter
+Using Non-Printable ASCII Characters as Delimiters
 -------------------------------------------------------
+The following is an example of using non-printable ASCII characters as delimiters:
 
 Non-printable characters can be specified using their octal representations, by using the ``E'\000'`` format, where ``000`` is the octal value of the character.
 
@@ -179,8 +487,9 @@ For example, ASCII character ``15``, known as "shift in", can be specified using
    
 	COPY nba TO WRAPPER csv_fdw OPTIONS (LOCATION = '/tmp/nba_export.csv', DELIMITER = E'\011'); -- 011 is a tab character
 
-Exporting the result of a query to a CSV
+Exporting the Result of a Query to a CSV File
 ----------------------------------------------
+The following is an example of exporting the result of a query to a CSV file:
 
 .. code-block:: psql
    
@@ -195,40 +504,46 @@ Exporting the result of a query to a CSV
    Charlotte Hornets,5222728
    Chicago Bulls,5785558
 
-Saving files to an authenticated S3 bucket
+Saving Files to an Authenticated S3 Bucket
 --------------------------------------------
+The following is an example of saving files to an authenticated S3 bucket:
 
 .. code-block:: psql
    
 	COPY (SELECT "Team", AVG("Salary") FROM nba GROUP BY 1) TO WRAPPER csv_fdw OPTIONS (LOCATION = 's3://my_bucket/salaries/nba_export.csv', AWS_ID = 'my_aws_id', AWS_SECRET = 'my_aws_secret');
 
-Saving files to an HDFS path
+Saving Files to an HDFS Path
 --------------------------------------------
+The following is an example of saving files to an HDFS path:
 
 .. code-block:: psql
    
    	COPY (SELECT "Team", AVG("Salary") FROM nba GROUP BY 1) TO WRAPPER csv_fdw OPTIONS (LOCATION = 'hdfs://pp_namenode:8020/nba_export.csv');
 
-
-Export table to a parquet file
+Exporting a Table to a Parquet File
 ------------------------------------
+The following is an example of exporting a table to a Parquet file:
 
 .. code-block:: psql
    
 	COPY nba TO WRAPPER parquet_fdw OPTIONS (LOCATION = '/tmp/nba_export.parquet');
 
-
-Export a query to a parquet file
+Exporting a Query to a Parquet File
 ------------------------------------
+The following is an example of exporting a query to a Parquet file:
 
 .. code-block:: psql
 
 	COPY (select x,y from t where z=0) TO WRAPPER parquet_fdw OPTIONS (LOCATION = '/tmp/file.parquet');
 
-
-Export table to a ORC file
+Exporting a Table to an ORC File
 ---------------------------------
+The following is an example of exporting a table to an ORC file:
 
 .. code-block:: psql
    
 	COPY nba TO WRAPPER orc_fdw OPTIONS (LOCATION = '/tmp/nba_export.orc');
+
+Permissions
+=============
+The role must have the ``SELECT`` permission on every table or schema that is referenced by the statement.
\ No newline at end of file
diff --git a/reference/sql/sql_statements/dml_commands/delete.rst b/reference/sql/sql_statements/dml_commands/delete.rst
index 2aa9c6729..b561eaf78 100644
--- a/reference/sql/sql_statements/dml_commands/delete.rst
+++ b/reference/sql/sql_statements/dml_commands/delete.rst
@@ -72,6 +72,13 @@ The following is the correct syntax for triggering a clean-up:
    
    schema_name ::= identifier
 
+For systems with delete parallelism capabilities, use the following syntax to enhance deletion performance and shorten runtime:
+
+.. code-block:: postgres
+
+	SELECT set_parallel_delete_threads(x);
+
+.. note:: You may configure up to 10 threads.
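+
+For example, assuming a hypothetical table named ``t``, you might raise the thread count before running a large delete:
+
+.. code-block:: postgres
+
+	SELECT set_parallel_delete_threads(4);
+	DELETE FROM t WHERE year < 2015;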
 
 Parameters
 ============
diff --git a/reference/sql/sql_statements/dml_commands/insert.rst b/reference/sql/sql_statements/dml_commands/insert.rst
index 2607669bd..4005b471e 100644
--- a/reference/sql/sql_statements/dml_commands/insert.rst
+++ b/reference/sql/sql_statements/dml_commands/insert.rst
@@ -113,6 +113,9 @@ For example,
      SELECT name, weight FROM all_animals
      WHERE region = 'Australia';
 
+
+.. warning:: The ``SELECT`` statement decrypts information by default. When executing ``INSERT INTO TABLE AS SELECT``, encrypted information will appear as clear text in the newly created table.
+
 Inserting data with positional placeholders
 ---------------------------------------------
 
diff --git a/reference/sql/sql_statements/dml_commands/select.rst b/reference/sql/sql_statements/dml_commands/select.rst
index f47ffaec1..03bd20d70 100644
--- a/reference/sql/sql_statements/dml_commands/select.rst
+++ b/reference/sql/sql_statements/dml_commands/select.rst
@@ -163,14 +163,14 @@ Assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/reference/sql/sql_statements/dml_commands/update.rst b/reference/sql/sql_statements/dml_commands/update.rst
new file mode 100644
index 000000000..9f1d321b9
--- /dev/null
+++ b/reference/sql/sql_statements/dml_commands/update.rst
@@ -0,0 +1,194 @@
+.. _update:
+
+**********************
+UPDATE
+**********************
+The **UPDATE** statement page describes the following:
+
+.. |icon-new_2022.1| image:: /_static/images/new_2022.1.png
+   :align: middle
+   :width: 110
+
+.. contents::
+   :local:
+   :depth: 1
+
+Overview
+==========
+The ``UPDATE`` statement is used to modify the value of certain columns in existing rows without creating a table.
+
+It can be used to do the following:
+
+* Perform localized changes in existing data, such as correcting mistakes discovered after ingesting data.
+
+* Set columns based on the values of other columns.
+
+.. warning:: Using the ``UPDATE`` command on a column clustered using a cluster key can undo your clustering.
+
+The ``UPDATE`` statement cannot be used to reference other tables in the ``WHERE`` or ``SET`` clauses.
+
+Syntax
+==========
+The following is the correct syntax for the ``UPDATE`` command:
+
+.. code-block:: postgres
+ 
+   UPDATE target_table_name [[AS] alias1]
+   SET column_name = expression [,...]
+   [FROM additional_table_name [[AS] alias2][,...]]
+   [WHERE condition]
+  
+The following is the correct syntax for triggering a clean-up:
+
+.. code-block:: postgres
+
+   SELECT cleanup_chunks('schema_name','table_name');
+   SELECT cleanup_extents('schema_name','table_name');
+   
+Parameters
+============
+The following table describes the ``UPDATE`` parameters:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+   * - ``target_table_name``
+     - Specifies the table containing the data to be updated.
+   * - ``column_name``
+     - Specifies the column containing the data to be updated.
+   * - ``additional_table_name``
+     - Specifies additional tables used in the ``WHERE`` condition for performing complex joins.
+   * - ``condition``
+     - Specifies the condition for updating the data.
+	 
+.. note:: Similar to a ``DELETE`` statement, an ``UPDATE`` statement may leave some uncleaned data behind, which requires a cleanup operation.
+
+Examples
+===========
+The **Examples** section includes the following examples:
+
+.. contents::
+   :local:
+   :depth: 1
+
+Updating an Entire Table
+-------------------------
+The examples in this section show how to modify the value of certain columns in existing rows, and are based on the following tables:
+
+.. image:: /_static/images/delete_optimization.png
+
+The following methods for updating an entire table generate the same output, setting ``records_sold`` to ``0`` for every record in the ``bands`` table:
+
+.. code-block:: postgres
+
+   UPDATE bands SET records_sold = 0;
+   
+.. code-block:: postgres
+
+   UPDATE bands SET records_sold = 0 WHERE true;
+   
+.. code-block:: postgres
+
+   UPDATE bands SET records_sold = 0 FROM countries;
+
+.. code-block:: postgres
+
+   UPDATE bands SET records_sold = 0 FROM countries WHERE 1=1;
+
+Performing Simple Updates
+--------------------------
+The following is an example of performing a simple update:
+
+.. code-block:: postgres
+
+   UPDATE bands SET records_sold = records_sold + 1 WHERE name LIKE 'The %';
+
+Updating Tables that Contain Multi-Table Conditions
+----------------------------------------------------
+The following shows an example of updating tables that contain multi-table conditions:
+
+.. code-block:: postgres
+
+   UPDATE bands
+   SET records_sold = records_sold + 1
+   WHERE EXISTS (
+     SELECT 1 FROM countries
+     WHERE countries.id=bands.country_id
+     AND countries.name = 'Sweden'
+   );
+
+You can also write the statement above using the ``FROM`` clause:
+
+.. code-block:: psql
+
+   UPDATE bands
+   SET records_sold = records_sold + 1
+   FROM countries
+   WHERE countries.id=bands.country_id AND countries.name = 'Sweden';
+
+Updating Tables that Contain Multi-Table Expressions
+-----------------------------------------------------
+The following shows an example of updating tables that contain multi-table expressions:
+
+.. code-block:: postgres
+
+   UPDATE bands
+   SET records_sold = records_sold +
+     CASE
+       WHEN c.name = 'Israel' THEN 2
+       ELSE 1
+     END
+   FROM countries c
+   
+Configuring Update Behavior
+----------------------------
+The ``failOnNondeterministicUpdate`` flag is used to configure ``UPDATE`` behavior when updating tables containing multi-table expressions. This flag is needed when you use the ``FROM`` clause along with a set expression containing columns from additional tables. Doing this can cause a match to occur between a row from the target table with multiple rows from the additional tables.
+
+For instance, the example in the previous section sets the records sold to ``2`` when the country name is Israel. If you were to insert a new entry into this table with Israel spelled in Hebrew (using the same country ID), you would have two rows with identical country IDs.
+
+When this happens, both rows 5 and 6 in the ``bands`` table match both Israel entries. Because no algorithm exists for determining which entry to use, updating this table may either increase ``records_sold`` by 2 (for Israel in English) or 1 (for Israel in Hebrew).
+
+You must set the ``failOnNondeterministicUpdate`` flag to ``FALSE`` to prevent an error from occurring.
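+
+Assuming the flag can be set at the session level (the exact scope and syntax may vary by deployment; consult your configuration reference), the statement would look like this:
+
+.. code-block:: postgres
+
+   SET failOnNondeterministicUpdate = false;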
+
+Note that a similar ambiguity can occur when the Hebrew spelling is used in the following example:
+
+.. code-block:: postgres
+
+   UPDATE bands
+   SET records_sold = records_sold + 1
+   FROM countries c
+   WHERE c.name = 'Israel'
+   
+However, the ``WHERE`` clause above prevents a match with any entry other than the defined one. Because the target table row must match the ``WHERE`` condition at least once to be included in the ``UPDATE`` statement, this scenario does not require configuring the ``failOnNondeterministicUpdate`` flag.
+
+Triggering a Clean-Up
+---------------------------------------
+The following shows an example of triggering a clean-up:
+
+.. code-block:: psql
+
+   SELECT * FROM sqream_catalog.discarded_chunks;
+   SELECT cleanup_discarded_chunks('public','t'); 
+
+The following is an example of the output generated from the above:
+
+* **database_name** - _discarded_master
+* **table_id** - 24
+* **column_id** - 1
+* **extent_ID** - 0
+   
+Permissions
+=============
+Executing an ``UPDATE`` statement requires the following permissions:
+
+* Both ``UPDATE`` and ``SELECT`` permissions on the target table.
+* The ``SELECT`` permission for each additional table you reference in the statement (in either the ``FROM`` clause or ``WHERE`` subquery section).
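+
+For example, granting these permissions for the ``bands`` examples above (using a hypothetical role named ``etl_role``) might look like this:
+
+.. code-block:: postgres
+
+   GRANT UPDATE, SELECT ON TABLE bands TO etl_role;
+   GRANT SELECT ON TABLE countries TO etl_role;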
+
+Locking and Concurrency
+========================
+Executing the ``UPDATE`` statement obtains an exclusive UPDATE lock on the target table, but does not lock the additional tables referenced in the statement.
\ No newline at end of file
diff --git a/reference/sql/sql_statements/index.rst b/reference/sql/sql_statements/index.rst
index 955e1212f..325647bda 100644
--- a/reference/sql/sql_statements/index.rst
+++ b/reference/sql/sql_statements/index.rst
@@ -3,78 +3,87 @@
 ***************
 SQL Statements
 ***************
+The **SQL Statements** page describes the following commands:
 
-SQream DB supports commands from ANSI SQL.
+.. contents::
+   :local:
+   :depth: 1
+
+SQream supports commands from ANSI SQL.
 
 .. _ddl_commands_list:
 
 Data Definition Commands (DDL)
 ================================
+The following table shows the Data Definition commands:
 
-.. list-table:: DDL Commands
-   :widths: auto
+.. list-table::
+   :widths: 10 100
    :header-rows: 1
    :name: ddl_commands
    
    * - Command
      - Usage
-   * - :ref:`ADD COLUMN`
+   * - :ref:`ADD_COLUMN`
      - Add a new column to a table
-   * - :ref:`ALTER DEFAULT SCHEMA`
+   * - :ref:`ALTER_DEFAULT_SCHEMA`
      - Change the default schema for a role
-   * - :ref:`ALTER TABLE`
+   * - :ref:`ALTER_TABLE`
      - Change the schema of a table
-   * - :ref:`CREATE DATABASE`
+   * - :ref:`CLUSTER_BY`
+     - Change clustering keys in a table
+   * - :ref:`CREATE_DATABASE`
      - Create a new database
-   * - :ref:`CREATE EXTERNAL TABLE`
-     - Create a new external table in the database (deprecated)
-   * - :ref:`CREATE FOREIGN TABLE`
+   * - :ref:`CREATE_FOREIGN_TABLE`
      - Create a new foreign table in the database
-   * - :ref:`CREATE FUNCTION `
+   * - :ref:`CREATE_FUNCTION`
      - Create a new user defined function in the database
-   * - :ref:`CREATE SCHEMA`
+   * - :ref:`CREATE_SCHEMA`
      - Create a new schema in the database
-   * - :ref:`CREATE TABLE`
+   * - :ref:`CREATE_TABLE`
      - Create a new table in the database
-   * - :ref:`CREATE TABLE AS`
+   * - :ref:`CREATE_TABLE_AS`
      - Create a new table in the database using results from a select query
-   * - :ref:`CREATE VIEW`
+   * - :ref:`CREATE_VIEW`
      - Create a new view in the database
-   * - :ref:`DROP COLUMN`
+   * - :ref:`DROP_CLUSTERING_KEY`
+     - Drop all clustering keys in a table
+   * - :ref:`DROP_COLUMN`
      - Drop a column from a table
-   * - :ref:`DROP DATABASE`
+   * - :ref:`DROP_DATABASE`
      - Drop a database and all of its objects
-   * - :ref:`DROP FUNCTION`
+   * - :ref:`DROP_FUNCTION`
      - Drop a function
-   * - :ref:`DROP SCHEMA`
+   * - :ref:`DROP_SCHEMA`
      - Drop a schema
-   * - :ref:`DROP TABLE`
+   * - :ref:`DROP_TABLE`
      - Drop a table and its contents from a database
-   * - :ref:`DROP VIEW`
+   * - :ref:`DROP_VIEW`
      - Drop a view
-   * - :ref:`RENAME COLUMN`
+   * - :ref:`RENAME_COLUMN`
      - Rename a column
-   * - :ref:`RENAME TABLE`
+   * - :ref:`RENAME_TABLE`
      - Rename a table
 
+
 Data Manipulation Commands (DML)
 ================================
+The following table shows the Data Manipulation commands:
 
-.. list-table:: DML Commands
-   :widths: auto
+.. list-table::
+   :widths: 10 100
    :header-rows: 1
    :name: dml_commands
-
    
    * - Command
      - Usage
-   * - :ref:`CREATE TABLE AS`
+   * - :ref:`CREATE_TABLE_AS`
      - Create a new table in the database using results from a select query
    * - :ref:`DELETE`
      - Delete specific rows from a table
-   * - :ref:`COPY FROM`
+   * - :ref:`COPY_FROM`
      - Bulk load CSV data into an existing table
-   * - :ref:`COPY TO`
+   * - :ref:`COPY_TO`
      - Export a select query or entire table to CSV files
    * - :ref:`INSERT`
      - Insert rows into a table
@@ -82,18 +91,31 @@ Data Manipulation Commands (DML)
      - Select rows and column from a table
    * - :ref:`TRUNCATE`
      - Delete all rows from a table
+   * - :ref:`UPDATE`
+     - Modify the value of certain columns in existing rows without creating a table
    * - :ref:`VALUES`
      - Return rows containing literal values
 
 Utility Commands
 ==================
+The following table shows the Utility commands:
 
-.. list-table:: Utility Commands
-   :widths: auto
+.. list-table::
+   :widths: 10 100
    :header-rows: 1
    
    * - Command
      - Usage
+   * - :ref:`DROP SAVED QUERY`
+     - Drops a saved query
+   * - :ref:`EXECUTE SAVED QUERY`
+     - Executes a previously saved query
+   * - :ref:`EXPLAIN`
+     - Returns a static query plan, which can be used to debug query plans
+   * - :ref:`LIST SAVED QUERIES`
+     - Lists previously saved query names, one per row
+   * - :ref:`RECOMPILE SAVED QUERY`
+     - Recompiles a saved query that has been invalidated due to a schema change
    * - :ref:`SELECT GET_LICENSE_INFO`
      - View a user's license information
    * - :ref:`SELECT GET_DDL`
@@ -106,63 +128,37 @@ Utility Commands
      - Recreate a view after schema changes
    * - :ref:`SELECT DUMP_DATABASE_DDL`
      - View the ``CREATE TABLE`` statement for the current database
-
-Saved Queries
-===================
-
-.. list-table:: Saved Queries
-   :widths: auto
-   :header-rows: 1
-   
-   * - Command
-     - Usage
-   * - :ref:`SELECT DROP_SAVED_QUERY`
-     - Drop a saved query
-   * - :ref:`SELECT EXECUTE_SAVED_QUERY`
-     - Executes a saved query
-   * - :ref:`SELECT LIST_SAVED_QUERIES`
-     - Returns a list of saved queries
-   * - :ref:`SELECT RECOMPILE_SAVED_QUERY`
-     - Recompiles a query that has been invalidated by a schema change
-   * - :ref:`SELECT SAVE_QUERY`
-     - Compiles and saves a query for re-use and sharing
-   * - :ref:`SELECT SHOW_SAVED_QUERY`
-     - Shows query text for a saved query
-	 
-For more information, see :ref:`saved_queries`
-
-
-Monitoring
-===============
-
-Monitoring statements allow a database administrator to execute actions in the system, such as aborting a query or get information about system processes.
-
-.. list-table:: Monitoring
-   :widths: auto
-   :header-rows: 1
-   
-   * - Command
-     - Usage
-   * - :ref:`explain`
-     - Returns a static query plan for a statement
-   * - :ref:`show_connections`
-     - Returns a list of jobs and statements on the current worker
-   * - :ref:`show_locks`
-     - Returns any existing locks in the database
-   * - :ref:`show_node_info`
-     - Returns a query plan for an actively running statement with timing information
-   * - :ref:`show_server_status`
-     - Shows running statements across the cluster
-   * - :ref:`show_version`
-     - Returns the version of SQream DB
-   * - :ref:`stop_statement`
-     - Stops a query (or statement) if it is currently running
+   * - :ref:`SHOW CONNECTIONS`
+     - Returns a list of active sessions on the current worker
+   * - :ref:`SHOW LOCKS`
+     - Returns a list of locks from across the cluster
+   * - :ref:`SHOW NODE INFO`
+     - Returns a snapshot of the current query plan, similar to ``EXPLAIN ANALYZE`` from other databases
+   * - :ref:`SHOW SAVED QUERY`
+     - Returns a single row result containing the saved query string
+   * - :ref:`SHOW SERVER STATUS`
+     - Returns a list of active sessions across the cluster
+   * - :ref:`SHOW VERSION`
+     - Returns the system version for SQream DB
+   * - :ref:`SHUTDOWN_SERVER`
+     - Instructs the server to finish all active queries before shutting down, within a user-defined timeout
+   * - :ref:`STOP STATEMENT`
+     - Stops or aborts an active statement
+
+.. |icon-new_2022.1| image:: /_static/images/new_2022.1.png
+   :align: middle
+   :width: 110
+
+.. |icon-New_Dark_Gray| image:: /_static/images/New_Dark_Gray.png
+   :align: middle
+   :width: 110
 
 Workload Management
 ======================
+The following table shows the Workload Management commands:
 
-.. list-table:: Workload Management
-   :widths: auto
+.. list-table::
+   :widths: 10 100
    :header-rows: 1
    
    * - Command
@@ -170,16 +166,17 @@ Workload Management
    * - :ref:`subscribe_service`
      - Add a SQream DB worker to a service queue 
    * - :ref:`unsubscribe_service`
-     - Remove a SQream DB worker to a service queue
+     - Remove a SQream DB worker from a service queue
    * - :ref:`show_subscribed_instances`
      - Return a list of service queues and workers
 
 Access Control Commands
 ================================
+The following table shows the Access Control commands:
 
-.. list-table:: Access Control Commands
-   :widths: auto
-   :header-rows: 1
+.. list-table::
+   :widths: 10 100
+   :header-rows: 1   
    
    * - Command
      - Usage
@@ -191,6 +188,16 @@ Access Control Commands
     - Creates a role, which lets a database administrator control permissions on tables and databases
    * - :ref:`drop_role`
      - Removes roles
+   * - :ref:`get_role_permissions`
+     - Returns all permissions granted to a role in table format
+   * - :ref:`get_role_global_ddl`
+     - Returns the definition of a global role in DDL format
+   * - :ref:`get_all_roles_global_ddl`
+     - Returns the definition of all global roles in DDL format
+   * - :ref:`get_role_database_ddl`
+     - Returns a role's database-level definition in DDL format
+   * - :ref:`get_all_roles_database_ddl`
+     - Returns the database-level definitions of all roles in DDL format
    * - :ref:`get_statement_permissions`
      - Returns a list of permissions required to run a statement or query
    * - :ref:`grant`
@@ -198,18 +205,4 @@ Access Control Commands
    * - :ref:`revoke`
      - Revoke permissions from a role
    * - :ref:`rename_role`
-     - Rename a role
-
-
-.. toctree::
-   :maxdepth: 1
-   :titlesonly:
-   :hidden:
-   :glob:
-
-   ddl_commands/*
-   dml_commands/*
-   utility_commands/*
-   monitoring_commands/*
-   wlm_commands/*
-   access_control_commands/*
\ No newline at end of file
+     - Rename a role
\ No newline at end of file
diff --git a/reference/sql/sql_statements/monitoring_commands/show_server_status.rst b/reference/sql/sql_statements/monitoring_commands/show_server_status.rst
deleted file mode 100644
index f59f79ccc..000000000
--- a/reference/sql/sql_statements/monitoring_commands/show_server_status.rst
+++ /dev/null
@@ -1,108 +0,0 @@
-.. _show_server_status:
-
-********************
-SHOW_SERVER_STATUS
-********************
-
-``SHOW_SERVER_STATUS`` returns a list of active sessions across the cluster.
-
-To list active statements on the current worker only, see :ref:`show_connections`.
-
-Permissions
-=============
-
-The role must have the ``SUPERUSER`` permissions.
-
-Syntax
-==========
-
-.. code-block:: postgres
-
-   show_server_status_statement ::=
-       SELECT SHOW_SERVER_STATUS()
-       ;
-
-Parameters
-============
-
-None
-
-Returns
-=========
-
-This function returns a list of active sessions. If no sessions are active across the cluster, the result set will be empty.
-
-.. list-table:: Result columns
-   :widths: auto
-   :header-rows: 1
-   
-   * - ``service``
-     - The service name for the statement
-   * - ``instance``
-     - The worker ID
-   * - ``connection_id``
-     - Connection ID
-   * - ``serverip``
-     - Worker end-point IP
-   * - ``serverport``
-     - Worker end-point port
-   * - ``database_name``
-     - Database name for the statement
-   * - ``user_name``
-     - Username running the statement
-   * - ``clientip``
-     - Client IP
-   * - ``statementid``
-     - Statement ID
-   * - ``statement``
-     - Statement text
-   * - ``statementstarttime``
-     - Statement start timestamp
-   * - ``statementstatus``
-     - Statement status (see table below)
-   * - ``statementstatusstart``
-     - Last updated timestamp
-
-.. include from here: 66
-
-
-.. list-table:: Statement status values
-   :widths: auto
-   :header-rows: 1
-   
-   * - Status
-     - Description
-   * - ``Preparing``
-     - Statement is being prepared
-   * - ``In queue``
-     - Statement is waiting for execution
-   * - ``Initializing``
-     - Statement has entered execution checks
-   * - ``Executing``
-     - Statement is executing
-   * - ``Stopping``
-     - Statement is in the process of stopping
-
-
-.. include until here 86
-
-Notes
-===========
-
-* This utility shows the active sessions. Some sessions may be actively connected, but not running any statements.
-
-Examples
-===========
-
-Using ``SHOW_SERVER_STATUS`` to get statement IDs
-----------------------------------------------------
-
-
-.. code-block:: psql
-
-   t=> SELECT SHOW_SERVER_STATUS();
-   service | instanceid | connection_id | serverip     | serverport | database_name | user_name  | clientip    | statementid | statement                   | statementstarttime  | statementstatus | statementstatusstart
-   --------+------------+---------------+--------------+------------+---------------+------------+-------------+-------------+-----------------------------+---------------------+-----------------+---------------------
-   sqream  |            |           102 | 192.168.1.91 |       5000 | t             | rhendricks | 192.168.0.1 |         128 | SELECT SHOW_SERVER_STATUS() | 24-12-2019 00:14:53 | Executing       | 24-12-2019 00:14:53 
-
-The statement ID is ``128``, running on worker ``192.168.1.91``.
diff --git a/reference/sql/sql_statements/utility_commands/drop_saved_query.rst b/reference/sql/sql_statements/utility_commands/drop_saved_query.rst
index f7faef6c5..9e7d8d725 100644
--- a/reference/sql/sql_statements/utility_commands/drop_saved_query.rst
+++ b/reference/sql/sql_statements/utility_commands/drop_saved_query.rst
@@ -6,7 +6,7 @@ DROP_SAVED_QUERY
 
 ``DROP_SAVED_QUERY`` drops a :ref:`previously saved query <saved_queries>`.
 
-Read more in the :ref:`saved_queries` guide.
+Read more in the :ref:`saved_queries` guide.
 
 See also: :ref:`save_query`, :ref:`execute_saved_query`, :ref:`show_saved_query`, :ref:`list_saved_queries`.
 
diff --git a/reference/sql/sql_statements/utility_commands/dump_database_ddl.rst b/reference/sql/sql_statements/utility_commands/dump_database_ddl.rst
index bf246b803..fc9ca1282 100644
--- a/reference/sql/sql_statements/utility_commands/dump_database_ddl.rst
+++ b/reference/sql/sql_statements/utility_commands/dump_database_ddl.rst
@@ -51,7 +51,7 @@ Getting the DDL for a database
    farm=> SELECT DUMP_DATABASE_DDL();
    create table "public"."cool_animals" (
      "id" int not null,
-     "name" varchar(30) not null,
+     "name" text(30) not null,
      "weight" double null,
      "is_agressive" bool default false not null
    )
diff --git a/reference/sql/sql_statements/utility_commands/execute_saved_query.rst b/reference/sql/sql_statements/utility_commands/execute_saved_query.rst
index 6fe41fa08..39675d47f 100644
--- a/reference/sql/sql_statements/utility_commands/execute_saved_query.rst
+++ b/reference/sql/sql_statements/utility_commands/execute_saved_query.rst
@@ -8,7 +8,7 @@ EXECUTE_SAVED_QUERY
 
 Read more in the :ref:`saved_queries` guide.
 
-See also: ref:`save_query`, :ref:`drop_saved_query`,  ref:`show_saved_query`,  ref:`list_saved_queries`.
+See also: :ref:`save_query`, :ref:`drop_saved_query`, :ref:`show_saved_query`, :ref:`list_saved_queries`.
 
 Permissions
 =============
@@ -53,7 +53,7 @@ Notes
 
 * Query parameters can be used as substitutes for literal expressions. Parameters cannot be used to substitute identifiers, column names, table names, or other parts of the query.
 
-* Query parameters of a string datatype (like ``VARCHAR``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc)
+* Query parameters of a string datatype (like ``text``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc)
 
 * Query parameters' types are inferred at compile time.
 
@@ -66,14 +66,14 @@ Assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
diff --git a/reference/sql/sql_statements/monitoring_commands/explain.rst b/reference/sql/sql_statements/utility_commands/explain.rst
similarity index 100%
rename from reference/sql/sql_statements/monitoring_commands/explain.rst
rename to reference/sql/sql_statements/utility_commands/explain.rst
diff --git a/reference/sql/sql_statements/utility_commands/get_all_roles_database_ddl.rst b/reference/sql/sql_statements/utility_commands/get_all_roles_database_ddl.rst
new file mode 100644
index 000000000..430ede73e
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/get_all_roles_database_ddl.rst
@@ -0,0 +1,46 @@
+.. _get_all_roles_database_ddl:
+
+**************************
+GET_ALL_ROLES_DATABASE_DDL
+**************************
+The ``GET_ALL_ROLES_DATABASE_DDL`` statement returns the database-level definitions of all roles in DDL format.
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Syntax
+==========
+The following is the correct syntax for using the ``GET_ALL_ROLES_DATABASE_DDL`` statement:
+
+.. code-block:: postgres
+
+   select get_all_roles_database_ddl()
+
+Example
+===========
+The following is an example of using the ``GET_ALL_ROLES_DATABASE_DDL`` statement:
+
+.. code-block:: psql
+
+   select get_all_roles_database_ddl();
+   
+Output
+==========
+The following is an example of the output of the ``GET_ALL_ROLES_DATABASE_DDL`` statement:
+
+.. code-block:: postgres
+
+   grant create, usage on schema "public" to "public" ;
+   alter default schema for "public" to "public";
+   alter default permissions for "public" for schemas grant superuser to creator_role ;
+   alter default permissions for "public" for tables grant select, insert, delete, ddl, update to creator_role ;
+   grant select, insert, delete, ddl, update on table "public"."customer" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."d_customer" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."demo_customer" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."demo_lineitem" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."lineitem" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."nation" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."orders" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."part" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."partsupp" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."region" to "sqream" ;
+   grant select, insert, delete, ddl, update on table "public"."supplier" to "sqream" ;
+   alter default schema for "sqream" to "public";
+
+Permissions
+=============
+Using the ``GET_ALL_ROLES_DATABASE_DDL`` statement requires no special permissions.
+
+For more information, see the following:
+
+* :ref:`get_all_roles_global_ddl`
+* :ref:`get_role_permissions`
\ No newline at end of file
diff --git a/reference/sql/sql_statements/utility_commands/get_all_roles_global_ddl.rst b/reference/sql/sql_statements/utility_commands/get_all_roles_global_ddl.rst
new file mode 100644
index 000000000..7a77dd810
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/get_all_roles_global_ddl.rst
@@ -0,0 +1,47 @@
+.. _get_all_roles_global_ddl:
+
+************************
+GET_ALL_ROLES_GLOBAL_DDL
+************************
+The ``GET_ALL_ROLES_GLOBAL_DDL`` statement returns the definition of all global roles in DDL format.
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Syntax
+==========
+The following is the correct syntax for using the ``GET_ALL_ROLES_GLOBAL_DDL`` statement:
+
+.. code-block:: postgres
+
+   select get_all_roles_global_ddl()
+   
+Example
+===========
+The following is an example of using the ``GET_ALL_ROLES_GLOBAL_DDL`` statement:
+
+.. code-block:: psql
+
+   select get_all_roles_global_ddl();
+
+
+Output
+==========
+The following is an example of the output of the ``GET_ALL_ROLES_GLOBAL_DDL`` statement:
+
+.. code-block:: postgres
+
+   create role "public"; create role "sqream"; grant superuser, login to "sqream" ;
+
+Permissions
+=============
+Using the ``GET_ALL_ROLES_GLOBAL_DDL`` statement requires no special permissions.
+
+For more information, see the following:
+
+* :ref:`get_all_roles_database_ddl`
+* :ref:`get_role_permissions`
\ No newline at end of file
diff --git a/reference/sql/sql_statements/utility_commands/get_ddl.rst b/reference/sql/sql_statements/utility_commands/get_ddl.rst
index f2566e99a..bc3b9ef54 100644
--- a/reference/sql/sql_statements/utility_commands/get_ddl.rst
+++ b/reference/sql/sql_statements/utility_commands/get_ddl.rst
@@ -55,7 +55,7 @@ The result of the ``GET_DDL`` function is a verbose version of the :ref:`create_
 
    farm=> CREATE TABLE cool_animals (
       id INT NOT NULL,
-      name varchar(30) NOT NULL,
+      name text(30) NOT NULL,
       weight FLOAT,
       is_agressive BOOL DEFAULT false NOT NULL
    );
@@ -64,7 +64,7 @@ The result of the ``GET_DDL`` function is a verbose version of the :ref:`create_
    farm=> SELECT GET_DDL('cool_animals');
    create table "public"."cool_animals" (
      "id" int not null,
-     "name" varchar(30) not null,
+     "name" text(30) not null,
      "weight" double null,
      "is_agressive" bool default false not null )
      ;
diff --git a/reference/sql/sql_statements/utility_commands/get_role_database_ddl.rst b/reference/sql/sql_statements/utility_commands/get_role_database_ddl.rst
new file mode 100644
index 000000000..5ff3ecf79
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/get_role_database_ddl.rst
@@ -0,0 +1,61 @@
+.. _get_role_database_ddl:
+
+*********************
+GET_ROLE_DATABASE_DDL
+*********************
+The ``GET_ROLE_DATABASE_DDL`` statement returns a role's database-level definition in DDL format.
+
+The ``GET_ROLE_DATABASE_DDL`` page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Syntax
+==========
+The following is the correct syntax for using the ``GET_ROLE_DATABASE_DDL`` statement:
+
+.. code-block:: postgres
+
+   select get_role_database_ddl(<'role_name'>)
+
+Example
+===========
+The following is an example of using the ``GET_ROLE_DATABASE_DDL`` statement:
+
+.. code-block:: psql
+
+   select get_role_database_ddl('public');
+
+Parameters
+============
+The following table shows the ``GET_ROLE_DATABASE_DDL`` parameters:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+   * - ``role_name``
+     - The name of the role whose database-level DDL is returned.
+   
+Output
+==========
+The following is an example of the output of the ``GET_ROLE_DATABASE_DDL`` statement:
+
+.. code-block:: postgres
+
+   grant create, usage on schema "public" to "public" ;
+   alter default schema for "public" to "public";
+   alter default permissions for "public" for schemas grant superuser to creator_role ;
+   alter default permissions for "public" for tables grant select, insert, delete, ddl to creator_role ;
+
+Permissions
+=============
+Using the ``GET_ROLE_DATABASE_DDL`` statement requires no special permissions.
+
+For more information, see the following:
+
+* :ref:`get_role_global_ddl`
+* :ref:`get_role_permissions`
\ No newline at end of file
diff --git a/reference/sql/sql_statements/utility_commands/get_role_global_ddl.rst b/reference/sql/sql_statements/utility_commands/get_role_global_ddl.rst
new file mode 100644
index 000000000..3ba0255d3
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/get_role_global_ddl.rst
@@ -0,0 +1,61 @@
+.. _get_role_global_ddl:
+
+********************
+GET_ROLE_GLOBAL_DDL
+********************
+The ``GET_ROLE_GLOBAL_DDL`` statement returns the definition of a global role in DDL format.
+
+The ``GET_ROLE_GLOBAL_DDL`` page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1   
+
+Syntax
+==========
+The following is the correct syntax for using the ``GET_ROLE_GLOBAL_DDL`` statement:
+
+.. code-block:: postgres
+
+   select get_role_global_ddl(<'role_name'>)
+   
+Example
+===========
+The following is an example of using the ``GET_ROLE_GLOBAL_DDL`` statement:
+
+.. code-block:: psql
+
+   select get_role_global_ddl('public');
+
+Parameters
+============
+The following table shows the ``GET_ROLE_GLOBAL_DDL`` parameters:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+   * - ``role_name``
+     - The name of the role whose global DDL is returned.
+
+Output
+==========
+The following is an example of the output of the ``GET_ROLE_GLOBAL_DDL`` statement:
+
+.. code-block:: postgres
+
+   create role "public";
+
+Permissions
+=============
+Using the ``GET_ROLE_GLOBAL_DDL`` statement requires no special permissions.
+
+For more information, see the following:
+
+* :ref:`get_role_database_ddl`
+* :ref:`get_role_permissions`
\ No newline at end of file
diff --git a/reference/sql/sql_statements/utility_commands/get_role_permissions.rst b/reference/sql/sql_statements/utility_commands/get_role_permissions.rst
new file mode 100644
index 000000000..8723f98c8
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/get_role_permissions.rst
@@ -0,0 +1,74 @@
+.. _get_role_permissions:
+
+********************
+GET_ROLE_PERMISSIONS
+********************
+The ``GET_ROLE_PERMISSIONS`` statement returns all permissions granted to a role in table format.
+
+The ``GET_ROLE_PERMISSIONS`` page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1 
+
+Syntax
+==========
+The following is the correct syntax for using the ``GET_ROLE_PERMISSIONS`` statement:
+
+.. code-block:: postgres
+
+   select get_role_permissions()
+      
+Example
+===========
+The following is an example of using the ``GET_ROLE_PERMISSIONS`` statement:
+
+.. code-block:: psql
+
+   select get_role_permissions();
+
+Parameters
+============
+The ``GET_ROLE_PERMISSIONS`` statement takes no parameters.
+
+Output
+==========
+The following is an example of the output of the ``GET_ROLE_PERMISSIONS`` statement:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Column
+     - Description
+     - Example
+   * - ``permission_type``
+     - The permission type granted to the role.
+     - SUPERUSER
+   * - ``object_type``
+     - The data object type.
+     - table
+   * - ``object_name``
+     - The name of the object.
+     - master.public.nba
+
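+An illustrative ``psql`` result, built from the example values above (your roles and objects will differ), might look like the following:
+
+.. code-block:: psql
+
+   permission_type | object_type | object_name
+   ----------------+-------------+------------------
+   SUPERUSER       | table       | master.public.nba
+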
+Permissions
+=============
+Using the ``GET_ROLE_PERMISSIONS`` statement requires no special permissions.
+
+For more information, see the following:
+
+* :ref:`get_role_database_ddl`
+* :ref:`get_role_global_ddl`
\ No newline at end of file
diff --git a/reference/sql/sql_statements/utility_commands/recompile_saved_query.rst b/reference/sql/sql_statements/utility_commands/recompile_saved_query.rst
index d6b63e30e..97b1139e9 100644
--- a/reference/sql/sql_statements/utility_commands/recompile_saved_query.rst
+++ b/reference/sql/sql_statements/utility_commands/recompile_saved_query.rst
@@ -6,6 +6,8 @@ RECOMPILE_SAVED_QUERY
 
 ``RECOMPILE_SAVED_QUERY`` recompiles a saved query that has been invalidated due to a schema change.
 
+Read more in the :ref:`saved_queries` guide.
+
 Permissions
 =============
 
diff --git a/reference/sql/sql_statements/utility_commands/save_query.rst b/reference/sql/sql_statements/utility_commands/save_query.rst
index be34c33ed..c65cd48ba 100644
--- a/reference/sql/sql_statements/utility_commands/save_query.rst
+++ b/reference/sql/sql_statements/utility_commands/save_query.rst
@@ -56,7 +56,7 @@ Notes
 
 * Query parameters can be used as substitutes for literal expressions. Parameters cannot be used to substitute identifiers, column names, table names, or other parts of the query.
 
-* Query parameters of a string datatype (like ``VARCHAR``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc)
+* Query parameters of a string datatype (like ``TEXT``) must be of a fixed length, and can be used in equality checks, but not patterns (e.g. :ref:`like`, :ref:`rlike`, etc)
 
 * Query parameters' types are inferred at compile time.
 
@@ -70,14 +70,14 @@ Assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      Name varchar(40),
-      Team varchar(40),
+      Name text(40),
+      Team text(40),
       Number tinyint,
-      Position varchar(2),
+      Position text(2),
       Age tinyint,
-      Height varchar(4),
+      Height text(4),
       Weight real,
-      College varchar(40),
+      College text(40),
       Salary float
     );
 
@@ -123,5 +123,4 @@ Use parameters to replace them later at execution time.
    Bismack Biyombo   | Toronto Raptors |      8 | C        |  23 | 6-9    |    245 |             | 2814000
    James Johnson     | Toronto Raptors |      3 | PF       |  29 | 6-9    |    250 | Wake Forest | 2500000
    Jason Thompson    | Toronto Raptors |      1 | PF       |  29 | 6-11   |    250 | Rider       |  245177
-   Jonas Valanciunas | Toronto Raptors |     17 | C        |  24 | 7-0    |    255 |             | 4660482
-
+   Jonas Valanciunas | Toronto Raptors |     17 | C        |  24 | 7-0    |    255 |             | 4660482
\ No newline at end of file
diff --git a/reference/sql/sql_statements/monitoring_commands/show_connections.rst b/reference/sql/sql_statements/utility_commands/show_connections.rst
similarity index 100%
rename from reference/sql/sql_statements/monitoring_commands/show_connections.rst
rename to reference/sql/sql_statements/utility_commands/show_connections.rst
diff --git a/reference/sql/sql_statements/monitoring_commands/show_locks.rst b/reference/sql/sql_statements/utility_commands/show_locks.rst
similarity index 100%
rename from reference/sql/sql_statements/monitoring_commands/show_locks.rst
rename to reference/sql/sql_statements/utility_commands/show_locks.rst
diff --git a/reference/sql/sql_statements/monitoring_commands/show_node_info.rst b/reference/sql/sql_statements/utility_commands/show_node_info.rst
similarity index 99%
rename from reference/sql/sql_statements/monitoring_commands/show_node_info.rst
rename to reference/sql/sql_statements/utility_commands/show_node_info.rst
index 345d16440..9c1e1ec11 100644
--- a/reference/sql/sql_statements/monitoring_commands/show_node_info.rst
+++ b/reference/sql/sql_statements/utility_commands/show_node_info.rst
@@ -108,7 +108,7 @@ This is a full list of node types:
      - Compress data with both CPU and GPU schemes
    * - ``CpuDecompress``
      - CPU
-     - Decompression operation, common for longer ``VARCHAR`` types
+     - Decompression operation, common for longer ``TEXT`` types
    * - ``CpuLoopJoin``
      - CPU
      - A non-indexed nested loop join, performed on the CPU
diff --git a/reference/sql/sql_statements/utility_commands/show_server_status.rst b/reference/sql/sql_statements/utility_commands/show_server_status.rst
new file mode 100644
index 000000000..73902a046
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/show_server_status.rst
@@ -0,0 +1,108 @@
+.. _show_server_status:
+
+********************
+SHOW_SERVER_STATUS
+********************
+``SHOW_SERVER_STATUS`` returns a list of active sessions across the cluster.
+
+To list active statements on the current worker only, see :ref:`show_connections`.
+
+Syntax
+==========
+The following is the correct syntax when showing your server status:
+
+.. code-block:: postgres
+
+   show_server_status_statement ::=
+       SELECT SHOW_SERVER_STATUS()
+       ;
+
+Parameters
+============
+``SHOW_SERVER_STATUS`` takes no parameters.
+
+Returns
+=========
+The ``SHOW_SERVER_STATUS`` function returns a list of active sessions. If no sessions are active across the cluster, the result set will be empty.
+
+The following table shows the ``SHOW_SERVER_STATUS`` result columns:
+
+.. list-table:: Result Columns
+   :widths: auto
+   :header-rows: 1
+   
+   * - Column
+     - Description
+   * - ``service``
+     - Shows the statement's service name.
+   * - ``instance``
+     - Shows the worker ID.
+   * - ``connection_id``
+     - Shows the connection ID.
+   * - ``serverip``
+     - Shows the worker end-point IP.
+   * - ``serverport``
+     - Shows the worker end-point port.
+   * - ``database_name``
+     - Shows the statement's database name.
+   * - ``user_name``
+     - Shows the username running the statement.
+   * - ``clientip``
+     - Shows the client IP.
+   * - ``statementid``
+     - Shows the statement ID.
+   * - ``statement``
+     - Shows the statement text.
+   * - ``statementstarttime``
+     - Shows the statement start timestamp.
+   * - ``statementstatus``
+     - Shows the statement status (see table below).
+   * - ``statementstatusstart``
+     - Shows the most recently updated timestamp.
+
+.. include from here: 66
+
+The following table shows the statement status values:
+
+.. list-table:: Statement Status Values
+   :widths: auto
+   :header-rows: 1
+   
+   * - Status
+     - Description
+   * - ``Preparing``
+     - The statement is being prepared.
+   * - ``In queue``
+     - The statement is waiting for execution.
+   * - ``Initializing``
+     - The statement has entered execution checks.
+   * - ``Executing``
+     - The statement is executing.
+   * - ``Stopping``
+     - The statement is in the process of stopping.
+
+.. include until here 86
+
+Notes
+===========
+This utility shows the active sessions. Some sessions may be actively connected, but not running any statements.
+
+Example
+===========
+
+Using SHOW_SERVER_STATUS to Get Statement IDs
+----------------------------------------------------
+The following example shows how to use the ``SHOW_SERVER_STATUS`` statement to get statement IDs:
+
+.. code-block:: psql
+
+   t=> SELECT SHOW_SERVER_STATUS();
+   service | instanceid | connection_id | serverip      | serverport | database_name | user_name        | clientip      | statementid | statement                                                                                             | statementstarttime  | statementstatus | statementstatusstart
+   --------+------------+---------------+---------------+------------+---------------+------------------+---------------+-------------+-------------------------------------------------------------------------------------------------------+---------------------+-----------------+---------------------
+   sqream  | sqream_2   |  19           | 192.168.0.111 |       5000 | master        | etl              | 192.168.0.011 |2484923      | SELECT t1.account, t1.msisd from table a t1 join table b t2 on t1.id = t2.id where t1.msid='123123';  | 17-01-2022 16:19:31 | Executing       | 17-01-2022 16:19:32
+   sqream  | sqream_1   |  2            | 192.168.1.112 |       5000 | master        | etl              | 192.168.1.112 |2484924      | select show_server_status();                                                                          | 17-01-2022 16:19:39 | Executing       | 17-01-2022 16:19:39
+   sqream  | None       |  248          | 192.168.1.112 |       5007 | master        | maintenance_user | 192.168.1.112 |2484665      | select * from  sqream_catalog.tables;                                                                 | 17-01-2022 15:55:01 | In Queue        | 17-01-2022 15:55:02
+
+In this example, statement ID ``2484923`` is running on worker ``192.168.0.111``, and statement ``2484665`` is queued.
+
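+To stop one of the listed statements, pass its statement ID to :ref:`stop_statement`, for example:
+
+.. code-block:: psql
+
+   t=> SELECT STOP_STATEMENT(2484923);
+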
+Permissions
+=============
+The role must have the ``SUPERUSER`` permissions.
diff --git a/reference/sql/sql_functions/system_functions/show_version.rst b/reference/sql/sql_statements/utility_commands/show_version.rst
similarity index 100%
rename from reference/sql/sql_functions/system_functions/show_version.rst
rename to reference/sql/sql_statements/utility_commands/show_version.rst
diff --git a/reference/sql/sql_statements/utility_commands/shutdown_server_command.rst b/reference/sql/sql_statements/utility_commands/shutdown_server_command.rst
new file mode 100644
index 000000000..bc42fcd5d
--- /dev/null
+++ b/reference/sql/sql_statements/utility_commands/shutdown_server_command.rst
@@ -0,0 +1,113 @@
+.. _shutdown_server_command:
+
+********************
+SHUTDOWN_SERVER
+********************
+The **SHUTDOWN_SERVER** guide describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+Overview
+===============
+You stop the SQream server by running the ``shutdown_server()`` utility command. Because calling it with no arguments shuts the server down abruptly while statements are still executing, the command also supports a graceful form, ``select shutdown_server([is_graceful, [timeout]]);``, which waits for queued statements to complete before shutting down.
+
+How Does it Work?
+========================
+Running the ``SHUTDOWN_SERVER`` command gives you more control over the following:
+
+* Preventing new queries from connecting to the server by:
+
+  * Setting the server as unavailable in the metadata server.
+  * Unsubscribing the server from its service.
+
+* Stopping users from making new connections to the server. Attempting to connect to the server after activating a graceful shutdown displays the following message:
+
+  .. code-block:: postgres
+
+     Server is shutting down, no new connections are possible at the moment.
+  
+* The amount of time to wait before shutting down the server.
+
+* Configurations related to shutting down the server.
+
+Syntax
+==========
+The following is the syntax for using the ``SHUTDOWN_SERVER`` command:
+
+.. code-block:: postgres
+
+   select shutdown_server([true/false, [timeout]]);
+   
+Returns
+==========
+Running the ``shutdown_server`` command returns no output.
+
+Parameters
+============
+The following table shows the ``shutdown_server`` parameters:
+
+.. list-table:: 
+   :widths: auto
+   :header-rows: 1
+   
+   * - Parameter
+     - Description
+     - Example
+     - Default
+   * - ``is_graceful``
+     - Determines the shutdown method. Setting ``true`` performs a graceful shutdown; setting ``false`` shuts the server down immediately, even while queries are running.
+     - ``select shutdown_server(true);``
+     - NA
+   * - ``timeout``
+     - Sets the maximum number of minutes a graceful shutdown may run before the server is shut down forcefully.
+     - ``select shutdown_server(true, 30);``
+     - 5 minutes
+	 
+.. note:: When ``is_graceful`` is set to ``true`` and a ``timeout`` value is defined, the server is shut down mid-query once the defined time elapses.
+
+You can define the ``timeout`` argument as the number of minutes after which a forceful shutdown will run, even if a graceful shutdown is in progress.
+
+Note that activating a forced shutdown with a timeout, such as ``select shutdown_server(false, 30)``, outputs the following error message:
+
+.. code-block:: postgres
+
+   forced shutdown has no timeout timer
+
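+For example (a sketch), the following runs a graceful shutdown that falls back to a forceful shutdown after 10 minutes:
+
+.. code-block:: postgres
+
+   select shutdown_server(true, 10);
+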
+.. note:: You can set the timeout value using the ``defaultGracefulShutdownTimeoutMinutes`` flag in the Acceleration Studio.
+
+For more information, see :ref:`shutdown_server`.
+
+Examples
+===================
+This section shows the following examples:
+
+**Example 1 - Activating a Forceful Shutdown**
+
+.. code-block:: postgres
+
+   select shutdown_server();
+
+**Example 2 - Activating a Graceful Shutdown**
+
+.. code-block:: postgres
+
+   select shutdown_server(true);
+
+**Example 3 - Overriding the timeout Default with Another Value**
+
+.. code-block:: postgres
+
+   select shutdown_server(true, 500);
+
+The ``timeout`` unit is minutes.
+
+Permissions
+=============
+Using the ``shutdown_server`` command requires no special permissions.
\ No newline at end of file
diff --git a/reference/sql/sql_statements/monitoring_commands/stop_statement.rst b/reference/sql/sql_statements/utility_commands/stop_statement.rst
similarity index 100%
rename from reference/sql/sql_statements/monitoring_commands/stop_statement.rst
rename to reference/sql/sql_statements/utility_commands/stop_statement.rst
diff --git a/reference/sql/sql_statements/wlm_commands/unsubscribe_service.rst b/reference/sql/sql_statements/wlm_commands/unsubscribe_service.rst
index a26df0554..939d8918a 100644
--- a/reference/sql/sql_statements/wlm_commands/unsubscribe_service.rst
+++ b/reference/sql/sql_statements/wlm_commands/unsubscribe_service.rst
@@ -47,7 +47,7 @@ Notes
 
 * If the service name does not currently exist, it will be created
 
-.. warning:: ``UNSUBSCRIBE_SERVICE`` applies the service subscription immediately, but the setting applies for the duration of the session. To apply a persistent setting, use the ``initialSubscribedServices`` configuration setting. Read the :ref:`Workload manager guide` for more information.
+.. warning:: ``UNSUBSCRIBE_SERVICE`` removes the service subscription immediately, but the setting applies for the duration of the session. To apply a persistent setting, use the ``initialSubscribedServices`` configuration setting. Read the :ref:`Workload manager guide` for more information.
 
 Examples
 ===========
diff --git a/reference/sql/sql_syntax/index.rst b/reference/sql/sql_syntax/index.rst
index 90caf01e5..9e0422db4 100644
--- a/reference/sql/sql_syntax/index.rst
+++ b/reference/sql/sql_syntax/index.rst
@@ -4,18 +4,16 @@
 SQL Syntax Features
 **********************
 
-SQream DB supports SQL from the ANSI 92 syntax.
+SQream DB supports SQL based on the ANSI 92 syntax. This section describes the following:
 
-.. toctree::
-   :maxdepth: 2
-   :caption: SQL Syntax Topics
-   :glob:
+.. hlist::
+   :columns: 1
 
-   keywords_and_identifiers
-   literals
-   scalar_expressions
-   joins
-   common_table_expressions
-   window_functions
-   subqueries
-   null_handling
+   * :ref:`keywords_and_identifiers`
+   * :ref:`literals`
+   * :ref:`scalar_expressions`
+   * :ref:`joins`
+   * :ref:`common_table_expressions`
+   * :ref:`window_functions`
+   * :ref:`subqueries`
+   * :ref:`null_handling`
\ No newline at end of file
diff --git a/reference/sql/sql_syntax/joins.rst b/reference/sql/sql_syntax/joins.rst
index 2563e7a5d..b12b08875 100644
--- a/reference/sql/sql_syntax/joins.rst
+++ b/reference/sql/sql_syntax/joins.rst
@@ -46,8 +46,8 @@ The following shows the correct syntax for creating an **inner join**:
 
 .. code-block:: postgres
 
-   left_side [ INNER ] JOIN right_side ON value_expr
-   left_side [ INNER ] JOIN right_side USING ( join_column [, ... ] )
+   left_side [ INNER ] JOIN right_side ON value_expr
+   left_side [ INNER ] JOIN right_side USING ( join_column [, ... ] )
 
 
 Inner joins are the default join type and return rows from the ``left_side`` and ``right_side`` based on a matching condition.
@@ -60,7 +60,6 @@ An inner join can also be specified by listing several tables in the ``FROM`` cl
    [ { INNER JOIN
      | LEFT [OUTER] JOIN
-     | RIGHT [OUTER] JOIN
-     | FULL [OUTER] JOIN } table2
+     | RIGHT [OUTER] JOIN } table2
    ON table1.column1 = table2.column1 ]
 
 Omitting the ``ON`` or ``WHERE`` clause creates a ``CROSS JOIN``, where every ``left_side`` row is matched with every ``right_side`` row.
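+
+For example (a sketch with hypothetical table names), the following two statements are equivalent cross joins:
+
+.. code-block:: postgres
+
+   SELECT * FROM t1, t2;
+   SELECT * FROM t1 CROSS JOIN t2;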
@@ -117,7 +116,7 @@ The ``CROSS JOIN`` clause cannot have an ``ON`` clause, but the ``WHERE`` clause
 
 The following is an example of two tables that will be used as the basis for a cross join:
 
-.. image:: /_static/images/joins/color_table.png
+.. image:: /_static/images/color_table.png
 
 The following is the output result of the cross join:
 
@@ -323,4 +322,4 @@ The following is an example of using a join hint:
    --+---
    2 |  2
    4 |  4
-   5 |  5
+   5 |  5
\ No newline at end of file
diff --git a/reference/sql/sql_syntax/keywords_and_identifiers.rst b/reference/sql/sql_syntax/keywords_and_identifiers.rst
index e5d1a6fcf..bc2cb1de6 100644
--- a/reference/sql/sql_syntax/keywords_and_identifiers.rst
+++ b/reference/sql/sql_syntax/keywords_and_identifiers.rst
@@ -13,13 +13,13 @@ Regular identifiers must follow these rules:
 * Must be case-insensitive. SQream converts all identifiers to lowercase unless quoted.
 * Does not equal any keywords, such as ``SELECT``, ``OR``, or ``AND``, etc.
 
-To bypass the rules above you can surround an identifier with double quotes (``"``).
+To bypass the rules above you can surround an identifier with double quotes (``"``) or square brackets (``[]``).
 
 Quoted identifiers must follow these rules:
 
-* Must be surrounded with double quotes (``"``).
+* Must be surrounded with double quotes (``"``) or square brackets (``[]``).
 * May contain any ASCII character except ``@``, ``$`` or ``"``.
-* Must be case-sensitive and referenced with double quotes.
+* Must be case-sensitive and referenced with double quotes or square brackets (``[]``).
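+
+For example (a sketch using hypothetical names), both quoting styles reference the same case-sensitive identifier:
+
+.. code-block:: postgres
+
+   CREATE TABLE "My Table" ("Select" INT);
+   SELECT "Select" FROM "My Table";
+   SELECT [Select] FROM [My Table];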
 
 Identifiers are different than **keywords**, which are predefined words reserved with specific meanings in a statement. Some examples of keywords are ``SELECT``, ``CREATE``, and ``WHERE``. Note that keywords **cannot** be used as identifiers.
 
@@ -28,43 +28,45 @@ The following table shows a full list of the reserved keywords:
 +-------------------------------------------------------------------------------------------------+
 | **Keywords**                                                                                    |
 +-------------------+---------------------+--------------------+------------------+---------------+
+| **A - C**         | **C - G**           | **H - N**          | **N - S**        | **S - W**     |
++-------------------+---------------------+--------------------+------------------+---------------+
 | ``ALL``           | ``CURRENT_CATALOG`` | ``HASH``           | ``NOT``          | ``SIMILAR``   |
 +-------------------+---------------------+--------------------+------------------+---------------+
 | ``ANALYSE``       | ``CURRENT_ROLE``    | ``HAVING``         | ``NOTNULL``      | ``SOME``      |
 +-------------------+---------------------+--------------------+------------------+---------------+
 | ``ANALYZE``       | ``CURRENT_TIME``    | ``ILIKE``          | ``NULL``         | ``SYMMETRIC`` |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``AND``           | ``CURRENT_USER``    | ``IN``             | ``OFFSET``       | ``SYMMETRIC`` |
-+-------------------+---------------------+--------------------+------------------+---------------+
-| ``ANY``           | ``DEFAULT``         | ``INITIALLY``      | ``ON``           | ``TABLE``     |
+| ``AND``           | ``CURRENT_USER``    | ``IN``             | ``OFFSET``       | ``TABLE``     |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``ARRAY``         | ``DEFERRABLE``      | ``INNER``          | ``ONLY``         | ``THEN``      |
+| ``ANY``           | ``DEFAULT``         | ``INITIALLY``      | ``ON``           | ``THEN``      |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``AS``            | ``DESC``            | ``INTERSECT``      | ``OPTION``       | ``TO``        |
+| ``ARRAY``         | ``DEFERRABLE``      | ``INNER``          | ``ONLY``         | ``TO``        |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``ASC``           | ``DISTINCT``        | ``INTO``           | ``OR``           | ``TRAILING``  |
+| ``AS``            | ``DESC``            | ``INTERSECT``      | ``OPTION``       | ``TRAILING``  |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``AUTHORIZATION`` | ``DO``              | ``IS``             | ``ORDER``        | ``TRUE``      |
+| ``ASC``           | ``DISTINCT``        | ``INTO``           | ``OR``           | ``TRUE``      |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``BINARY``        | ``ELSE``            | ``ISNULL``         | ``OUTER``        | ``UNION``     |
+| ``AUTHORIZATION`` | ``DO``              | ``IS``             | ``ORDER``        | ``UNION``     |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``BOTH``          | ``END``             | ``JOIN``           | ``OVER``         | ``UNIQUE``    |
+| ``BINARY``        | ``ELSE``            | ``ISNULL``         | ``OUTER``        | ``UNIQUE``    |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CASE``          | ``EXCEPT``          | ``LEADING``        | ``OVERLAPS``     | ``USER``      |
+| ``BOTH``          | ``END``             | ``JOIN``           | ``OVER``         | ``USER``      |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CAST``          | ``FALSE``           | ``LEFT``           | ``PLACING``      | ``USING``     |
+| ``CASE``          | ``EXCEPT``          | ``LEADING``        | ``OVERLAPS``     | ``USING``     |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CHECK``         | ``FETCH``           | ``LIKE``           | ``PRIMARY``      | ``VARIADIC``  |
+| ``CAST``          | ``FALSE``           | ``LEFT``           | ``PLACING``      | ``VARIADIC``  |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``COLLATE``       | ``FOR``             | ``LIMIT``          | ``REFERENCES``   | ``VERBOSE``   |
+| ``CHECK``         | ``FETCH``           | ``LIKE``           | ``PRIMARY``      | ``VERBOSE``   |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``COLUMN``        | ``FREEZE``          | ``LOCALTIME``      | ``RETURNING``    | ``WHEN``      |
+| ``COLLATE``       | ``FOR``             | ``LIMIT``          | ``REFERENCES``   | ``WHEN``      |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CONCURRENTLY``  | ``FROM``            | ``LOCALTIMESTAMP`` | ``RIGHT``        | ``WHERE``     |
+| ``COLUMN``        | ``FREEZE``          | ``LOCALTIME``      | ``RETURNING``    | ``WHERE``     |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CONSTRAINT``    | ``FULL``            | ``LOOP``           | ``RLIKE``        | ``WINDOW``    |
+| ``CONCURRENTLY``  | ``FROM``            | ``LOCALTIMESTAMP`` | ``RIGHT``        | ``WINDOW``    |
 +-------------------+---------------------+--------------------+------------------+---------------+
-| ``CREATE``        | ``GRANT``           | ``MERGE``          | ``SELECT``       | ``WITH``      |
-+-------------------+---------------------+--------------------+------------------+               |
-| ``CROSS``         | ``GROUP``           | ``NATURAL``        | ``SESSION_USER`` |               |
+| ``CONSTRAINT``    | ``FULL``            | ``LOOP``           | ``RLIKE``        | ``WITH``      |
++-------------------+---------------------+--------------------+------------------+               | 
+| ``CREATE``        | ``GRANT``           | ``MERGE``          | ``SELECT``       |               |
++-------------------+---------------------+--------------------+------------------+               |  
+| ``CROSS``         | ``GROUP``           | ``NATURAL``        | ``SESSION_USER`` |               |  
 +-------------------+---------------------+--------------------+------------------+---------------+
diff --git a/reference/sql/sql_syntax/subqueries.rst b/reference/sql/sql_syntax/subqueries.rst
index 4cd995977..7f788ff46 100644
--- a/reference/sql/sql_syntax/subqueries.rst
+++ b/reference/sql/sql_syntax/subqueries.rst
@@ -28,14 +28,14 @@ The following is an example of table named ``nba`` with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql/sql_syntax/window_functions.rst b/reference/sql/sql_syntax/window_functions.rst
index cb87e085e..7718e8b3a 100644
--- a/reference/sql/sql_syntax/window_functions.rst
+++ b/reference/sql/sql_syntax/window_functions.rst
@@ -161,7 +161,7 @@ Without ``PARTITION BY``, all rows produced by the query are treated as a single
 ``ORDER BY``
 ----------------------
 
-The ``ORDER BY`` clause determines the order in which the rows of a partition are processed by the window function. It works similarly to a query-level ``ORDER BY`` clause, but cannot use output-column names or numbers.
+The ``ORDER BY`` clause determines the order in which the rows of a partition are processed by the window function. It works similarly to a query-level ``ORDER BY`` clause, but cannot use output-column names or indexes.
 
 Without ``ORDER BY``, rows are processed in an unspecified order.
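+
+For example (a sketch using the ``nba`` example table), a per-partition running total uses ``ORDER BY`` inside the window:
+
+.. code-block:: postgres
+
+   SELECT "Team",
+          "Name",
+          SUM("Salary") OVER (PARTITION BY "Team" ORDER BY "Age") AS running_salary
+   FROM nba;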
 
@@ -221,14 +221,14 @@ For these examples, assume a table named ``nba``, with the following structure:
    
    CREATE TABLE nba
    (
-      "Name" varchar(40),
-      "Team" varchar(40),
+      "Name" text(40),
+      "Team" text(40),
       "Number" tinyint,
-      "Position" varchar(2),
+      "Position" text(2),
       "Age" tinyint,
-      "Height" varchar(4),
+      "Height" text(4),
       "Weight" real,
-      "College" varchar(40),
+      "College" text(40),
       "Salary" float
     );
 
diff --git a/reference/sql_feature_support.rst b/reference/sql_feature_support.rst
index ba4ca39e4..4443d631a 100644
--- a/reference/sql_feature_support.rst
+++ b/reference/sql_feature_support.rst
@@ -14,7 +14,7 @@ To understand which ANSI SQL and other SQL features SQream DB supports, use the
 Data Types and Values
 =========================
 
-Read more about :ref:`supported data types`.
+Read more about :ref:`supported data types`.
 
 .. list-table:: Data Types and Values
    :widths: auto
@@ -24,46 +24,43 @@ Read more about :ref:`supported data types`.
      - Supported
      - Further information
    * - ``BOOL``
-     - ✓
+     - Yes
      - Boolean values
   * - ``TINYINT``
-     - ✓
+     - Yes
      - Unsigned 1 byte integer (0 - 255)
    * - ``SMALLINT``
-     - ✓
+     - Yes
      - 2 byte integer (-32,768 - 32,767)
    * - ``INT``
-     - ✓
+     - Yes
      - 4 byte integer (-2,147,483,648 - 2,147,483,647)
    * - ``BIGINT``
-     - ✓
+     - Yes
      - 8 byte integer (-9,223,372,036,854,775,808 - 9,223,372,036,854,775,807)
    * - ``REAL``
-     - ✓
+     - Yes
      - 4 byte floating point
    * - ``DOUBLE``, ``FLOAT``
-     - ✓
+     - Yes
      - 8 byte floating point
    * - ``DECIMAL``, ``NUMERIC``
-     - ✓
+     - Yes
      - Fixed-point numbers.
-   * - ``VARCHAR``
-     - ✓
-     - Variable length string - ASCII only
    * - ``TEXT``
-     - ✓
+     - Yes
      - Variable length string - UTF-8 encoded
    * - ``DATE``
-     - ✓
+     - Yes
      - Date
    * - ``DATETIME``, ``TIMESTAMP``
-     - ✓
+     - Yes
      - Date and time
    * - ``NULL``
-     - ✓
+     - Yes
      - ``NULL`` values
    * - ``TIME``
-     - ✗
+     - No
      - Can be stored as a text string or as part of a ``DATETIME``
 
 
@@ -77,14 +74,14 @@ Contraints
    * - Item
      - Supported
      - Further information
-   * - Not null
-     - ✓
+   * - ``Not null``
+     - Yes
      - ``NOT NULL``
-   * - Default values
-     - ✓
+   * - ``Default values``
+     - Yes
      - ``DEFAULT``
    * - ``AUTO INCREMENT``
-     - ✓ Different name
+     - Yes (different name)
      - ``IDENTITY``
 
 
@@ -118,43 +115,43 @@ Schema Changes
      - Supported
      - Further information
    * - ``ALTER TABLE``
-     - ✓
+     - Yes
      - :ref:`alter_table` - Add column, alter column, drop column, rename column, rename table, modify clustering keys
    * - Rename database
-     - ✗
+     - No
      - 
    * - Rename table
-     - ✓
+     - Yes
      - :ref:`rename_table`
    * - Rename column
-     - ✓ 
+     - Yes 
      - :ref:`rename_column`
    * - Add column
-     - ✓
+     - Yes
      - :ref:`add_column`
    * - Remove column
-     - ✓
+     - Yes
      - :ref:`drop_column`
    * - Alter column data type
-     - ✗
+     - No
      - 
    * - Add / modify clustering keys
-     - ✓
+     - Yes
      - :ref:`cluster_by`
    * - Drop clustering keys
-     - ✓
+     - Yes
      - :ref:`drop_clustering_key`
    * - Add / Remove constraints
-     - ✗
+     - No
      - 
    * - Rename schema
-     - ✗
+     - No
      - 
    * - Drop schema
-     - ✓
+     - Yes
      - :ref:`drop_schema`
    * - Alter default schema per user
-     - ✓
+     - Yes
      - :ref:`alter_default_schema`
 
 
@@ -169,28 +166,28 @@ Statements
      - Supported
      - Further information
    * - SELECT
-     - ✓
+     - Yes
      - :ref:`select`
    * - CREATE TABLE
-     - ✓
+     - Yes
      - :ref:`create_table`
    * - CREATE FOREIGN / EXTERNAL TABLE
-     - ✓
+     - Yes
      - :ref:`create_foreign_table`
    * - DELETE
-     - ✓
+     - Yes
      - :ref:`delete_guide`
    * - INSERT
-     - ✓
+     - Yes
      - :ref:`insert`, :ref:`copy_from`
    * - TRUNCATE
-     - ✓
+     - Yes
      - :ref:`truncate`
    * - UPDATE
-     - ✗
+     - No
      -
    * - VALUES
-     - ✓
+     - Yes
      - :ref:`values`
 
 Clauses
@@ -204,19 +201,19 @@ Clauses
      - Supported
      - Further information
    * - ``LIMIT`` / ``TOP``
-     - ✓
+     - Yes
      -
    * - ``LIMIT`` with ``OFFSET``
-     - ✗
+     - No
      -
    * - ``WHERE``
-     - ✓
+     - Yes
      -
    * - ``HAVING``
-     - ✓
+     - Yes
      -
    * - ``OVER``
-     - ✓
+     - Yes
      -
 
 Table Expressions
@@ -230,19 +227,19 @@ Table Expressions
      - Supported
      - Further information
    * - Tables, Views
-     - ✓
+     - Yes
      -
    * - Aliases, ``AS``
-     - ✓
+     - Yes
      -
    * - ``JOIN`` - ``INNER``, ``LEFT [ OUTER ]``, ``RIGHT [ OUTER ]``, ``CROSS``
-     - ✓
+     - Yes
      -
    * - Table expression subqueries
-     - ✓
+     - Yes
      -
    * - Scalar subqueries
-     - ✗
+     - No
      - 
 
 
@@ -259,34 +256,34 @@ Read more about :ref:`scalar_expressions`.
      - Supported
      - Further information
    * - Common functions
-     - ✓
+     - Yes
      - ``CURRENT_TIMESTAMP``, ``SUBSTRING``, ``TRIM``, ``EXTRACT``, etc.
    * - Comparison operators
-     - ✓
+     - Yes
      - ``<``, ``<=``, ``>``, ``>=``, ``=``, ``<>, !=``, ``IS``, ``IS NOT``
    * - Boolean operators
-     - ✓
+     - Yes
      - ``AND``, ``NOT``, ``OR``
    * - Conditional expressions
-     - ✓
+     - Yes
      - ``CASE .. WHEN``
    * - Conditional functions
-     - ✓
+     - Yes
      - ``COALESCE``
    * - Pattern matching
-     - ✓
+     - Yes
      - ``LIKE``, ``RLIKE``, ``ISPREFIXOF``, ``CHARINDEX``, ``PATINDEX``
    * - REGEX POSIX pattern matching
-     - ✓
+     - Yes
      - ``RLIKE``, ``REGEXP_COUNT``, ``REGEXP_INSTR``, ``REGEXP_SUBSTR``, 
    * - ``EXISTS``
-     - ✗
+     - No
      - 
    * - ``IN``, ``NOT IN``
      - Partial
      - Literal values only
    * - Bitwise arithmetic
-     - ✓
+     - Yes
      - ``&``, ``|``, ``XOR``, ``~``, ``>>``, ``<<``
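+
+As noted above, ``IN`` and ``NOT IN`` accept literal values only; for example (a sketch with hypothetical names):
+
+.. code-block:: postgres
+
+   SELECT * FROM t WHERE col IN (1, 2, 3);          -- supported: literal values
+   SELECT * FROM t WHERE col IN (SELECT c FROM s);  -- not supported: subquery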
 
 
@@ -304,16 +301,16 @@ Read more about :ref:`access_control` in SQream DB.
      - Supported
      - Further information
    * - Roles as users and groups
-     - ✓
+     - Yes
      - 
    * - Object default permissions
-     - ✓
+     - Yes
      - 
    * - Column / Row based permissions
-     - ✗
+     - No
      -
    * - Object ownership
-     - ✗
+     - No
      - 
 
 
@@ -329,20 +326,20 @@ Extra Functionality
      - Supported
      - Further information
    * - Information schema
-     - ✓
+     - Yes
      - :ref:`catalog_reference`
    * - Views
-     - ✓
+     - Yes
      - :ref:`create_view`
    * - Window functions
-     - ✓
+     - Yes
      - :ref:`window_functions`
    * - CTEs
-     - ✓
+     - Yes
      - :ref:`common_table_expressions`
    * - Saved queries, Saved queries with parameters
-     - ✓
+     - Yes
      - :ref:`saved_queries`
    * - Sequences
-     - ✓
+     - Yes
      - :ref:`identity`
diff --git a/releases/2019.2.1.rst b/releases/2019.2.1.rst
deleted file mode 100644
index c9c96b59b..000000000
--- a/releases/2019.2.1.rst
+++ /dev/null
@@ -1,94 +0,0 @@
-.. _2019.2.1:
-
-******************************
-Release Notes 2019.2.1
-******************************
-
-* 250 bugs fixed. Thanks to all of our customers and an unprecedented number of deployments for helping us find and fix these!
-* Improved Unicode text handling on the GPU
-* Improved logging and monitoring of statements
-* Alibaba DataX connector
-
-
-Improvements
-=====================
-
-* We’ve updated the ``show_server_status()`` function to more accurately reflect the status of statements across the cluster:
-
-   * Preparing – Initial validation
-   * In queue – Waiting for execution
-   * Initializing – Pre-execution processing
-   * Executing – statement is running
-
-* We’ve improved our log files and have unified them into a single file per worker, per date. Each message type has a unique code which can help identify potential issues. See the documentation for full details on the changes to the log structures.
-
-* ``WITH ADMIN OPTION`` added in ``GRANT``/``REVOKE`` operations, allowing roles to grant their own permissions to others.
-
-* HA cluster fully supports qualified hostnames, and no longer requires explicit IP addresses.
-
-* SQream DB CLI’s history can be disabled, by passing ``./ClientCmd --no-history``
-
-
-Behaviour Changes
-=====================
-
-* SQream DB no longer applies an implicit cast from a long text column to a shorter text column (``VARCHAR``/``TEXT``). This means some ``INSERT``/``COPY`` operations will now error instead of truncating the text. This is intended to prevent accidental truncation of text columns. If you want the old truncation behaviour, you can use the ``SUBSTRING`` function to truncate the text.
-
-
-Operations
-=====================
-
-* The client-server protocol has been updated to support a wider range of encodings. End users are required to use only the latest ClientCmd, JDBC, and ODBC drivers delivered with this version.
-
-* Clients such as SecureCRT and other shells must have locale set as ``cp874`` or equivalent
-
-* When upgrading from SQream DB v3.2 or lower, the storage version must be upgraded using the :ref:`upgrade_storage_cli_reference` utility: ``./bin/upgrade_storage /path/to/storage/sqreamdb/``
-
-
-Known Issues and Limitations
-===================================
-
-* TEXT columns cannot be used as a ``GROUP BY`` key when there are multiple ``COUNT (DISTINCT …)`` operations in a query
-
-* TEXT columns cannot be used in a statement containing window functions
-
-* TEXT is not supported as a join key
-
-* The following functions are not supported on ``TEXT`` column types: ``chr``, ``min``, ``max``, ``patindex``, ``to_binary``, ``to_hex``, ``rlike``, ``regexp_count``, ``regexp_instr``, ``regexp_substr``
-
-* SQream Dashboard: Only works with a HA clustered installation
-
-* SQream Editor: External tables and UDFs don’t appear in the DB Tree but do appear in the relevant sqream_catalog entries.
-
-
-Fixes
-=====================
-
-250 bugs and issues fixed, including:
-
-* Variety of performance improvements:
-
-* Improved performance of ``TEXT`` by up to 315% for a variety of scenarios, including ``COPY FROM``, ``INNER JOIN``, ``LEFT JOIN``.
-
-* Improved load performance from previous versions
-
-* Faster compilation times for very complex queries
-
-* DWLM:
-
-   * Fixed situation where queries were not distributed correctly among all available workers
-   * Fixed ``cannot execute - reconnectDb error`` error
-   * Fixed occasional hanging statement
-   * Fixed occasional ``Connection refused``
-
-* Window functions:
-
-   * Fixed window function edge-case ``error WindowA with no functions``
-   * Fixed situations where the SUM window function is applied on a column, partitioned by a second, and sorted by a third would return wrong results when scanning very large datasets
-
-* Other bugs:
-
-   * Fixed situation where many concurrent statements running would result in ``map::at`` appearing
-   * Fixed situation where SQream DB would restart when force-stopping an ``INSERT`` over the network
-   * Fixed situation where RAM wasn’t released immediately after statement has been executed
-   * Fixed Type doesn’t have a fixed size error that appeared when using an external table joined with a standard SQream DB table
diff --git a/releases/2020.1.rst b/releases/2020.1.rst
index e4928855e..769fa0ebb 100644
--- a/releases/2020.1.rst
+++ b/releases/2020.1.rst
@@ -185,4 +185,5 @@ Upgrading to v2020.1
 
 Versions are available for IBM POWER9, RedHat (CentOS) 7, Ubuntu 18.04, and other OSs via Docker.
 
-Contact your account manager to get the latest release of SQream DB.
+Contact your account manager to get the latest release of SQream.
+
diff --git a/releases/2020.2.rst b/releases/2020.2.rst
index 3dc25b78a..5a66e99bd 100644
--- a/releases/2020.2.rst
+++ b/releases/2020.2.rst
@@ -113,3 +113,4 @@ Upgrading to  Version 2020.2
 Versions are available for IBM POWER9, RedHat (CentOS) 7, Ubuntu 18.04, and other OSs via Docker.
 
 Contact your account manager to get the latest release of SQream.
+
diff --git a/releases/2020.3.1.rst b/releases/2020.3.1.rst
index 0667306d7..9fa40cbb0 100644
--- a/releases/2020.3.1.rst
+++ b/releases/2020.3.1.rst
@@ -69,4 +69,5 @@ Upgrading to v2020.3.1
 
 Versions are available for IBM POWER9, RedHat (CentOS) 7, Ubuntu 18.04, and other OSs via Docker.
 
-Contact your account manager to get the latest release of SQream DB.
\ No newline at end of file
+Contact your account manager to get the latest release of SQream.
+
diff --git a/releases/2020.3.2.1.rst b/releases/2020.3.2.1.rst
index 3c551b636..29f3b3e88 100644
--- a/releases/2020.3.2.1.rst
+++ b/releases/2020.3.2.1.rst
@@ -28,4 +28,5 @@ Upgrading to v2020.3.2.1
 
 Versions are available for IBM POWER9, RedHat (CentOS) 7, Ubuntu 18.04, and other OSs via Docker.
 
-Contact your account manager to get the latest release of SQream DB.
\ No newline at end of file
+Contact your account manager to get the latest release of SQream.
+
diff --git a/releases/2020.3.2.rst b/releases/2020.3.2.rst
new file mode 100644
index 000000000..c97e2bd47
--- /dev/null
+++ b/releases/2020.3.2.rst
@@ -0,0 +1,28 @@
+.. _2020.3.2:
+
+**************************
+What's new in 2020.3.2
+**************************
+
+SQream DB v2020.3.2 contains major performance improvements and some bug fixes.
+
+Performance Enhancements
+=========================
+* Metadata on Demand optimization resulting in reduced latency and improved overall performance
+
+
+Known Issues & Limitations
+================================
+* Bug with the ``STDDEV_SAMP``, ``STDDEV_POP``, and ``STDEV`` functions
+* Window function queries may return wrong results
+* ``RANK()`` in a window function sometimes returns garbage values
+* Window functions on ``NULL`` values may return bad results
+* The ``LEAD()`` window function on ``varchar`` columns may return garbage values
+* Performance degradation when using ``GROUP BY`` or outer joins
+
+Upgrading to v2020.3.2
+========================
+
+Versions are available for IBM POWER9, RedHat (CentOS) 7, Ubuntu 18.04, and other OSs via Docker.
+
+Contact your account manager to get the latest release of SQream DB.
diff --git a/releases/2020.3.rst b/releases/2020.3.rst
index d072b15da..eb8ca8f62 100644
--- a/releases/2020.3.rst
+++ b/releases/2020.3.rst
@@ -26,9 +26,12 @@ The following list describes the new features:
 
 * ``TEXT`` is ramping up with new features (previously only available with VARCHARs):
 
-    * :ref:`substring`, :ref:`lower`, :ref:`ltrim`, :ref:`charindex`, :ref:`replace`, etc.
+    * `SUBSTRING `_ 
+    * `LOWER `_ 
+    * `LTRIM `_ 
+    * `CHARINDEX `_
+    * `REPLACE `_ 
 
-    * Binary operators - :ref:`concat`, :ref:`like`, etc.
+    * Binary operators - `CONCAT `_ , `LIKE `_ , etc.
 
     * Casts to and from ``TEXT``
 
diff --git a/releases/2020.3_index.rst b/releases/2020.3_index.rst
index b13340b52..a662cb48a 100644
--- a/releases/2020.3_index.rst
+++ b/releases/2020.3_index.rst
@@ -3,7 +3,7 @@
 **************************
 Release Notes 2020.3
 **************************
-The 2020.3 Release Notes describe the following releases:
+The 2020.3 release notes describe the following releases:
 
 .. contents:: 
    :local:
@@ -14,5 +14,6 @@ The 2020.3 Release Notes describe the following releases:
    :glob:
 
    2020.3.2.1
+   2020.3.2
    2020.3.1
    2020.3
\ No newline at end of file
diff --git a/releases/2021.1.1.rst b/releases/2021.1.1.rst
deleted file mode 100644
index 8e6417a43..000000000
--- a/releases/2021.1.1.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-.. _2021.1.1:
-
-**************************
-Release Notes 2021.1.1
-**************************
-The 2021.1.1 release notes were released on 7/27/2021 and describe the following:
-
-.. contents:: 
-   :local:
-   :depth: 1   
-   
-New Features
--------------
-The 2021.1.1 Release Notes include the following new features:
-
-.. contents:: 
-   :local:
-   :depth: 1   
-
-Complete Ranking Function Support
-************
-SQream now supports the following new ranking functions:
-
-.. list-table::
-   :widths: 1 23 76
-   :header-rows: 1
-   
-   * - Function
-     - Return Type
-     - Description
-   * - first_value
-     - Same type as value
-     - Returns the value in the first row of a window.
-   * - last_value
-     - Same type as value
-     - Returns the value in the last row of a window.
-   * - nth_value
-     - Same type as value
-     - Returns the value in a specified (``n``) row of a window. if the specified row does not exist, this function returns ``NULL``.
-   * - dense_rank
-     - bigint
-     - Returns the rank of the current row with no gaps.
-   * - percent_rank
-     - double
-     - Returns the relative rank of the current row.
-   * - cume_dist
-     - double
-     - Returns the cumulative distribution of rows.
-   * - ntile(buckets)
-     - integer
-     - Returns an integer ranging between ``1`` and the argument value, dividing the partitions as equally as possible.
-
-For more information, navigate to Windows Functions and scroll to the `Ranking Functions table `_.
-
-
-Resolved Issues
--------------
-The following list describes the resolved issues:
-
-* SQream did not support exporting and reading **Int64** columns as **bigint** in Parquet. This was fixed.
-* The Decimal column was not supported when inserting data from Parquet files. This was fixed.
-* Values in Parquet Numeric columns were not being converted correctly. This was fixed.
-* Converting ``string`` data type to ``datetime`` was not working correctly. This was fixed.
-* Casting ``datetime`` to ``text`` truncated the time. This was fixed.
\ No newline at end of file
diff --git a/releases/2021.1.2.rst b/releases/2021.1.2.rst
index 43ce6db7d..ee33cfbd8 100644
--- a/releases/2021.1.2.rst
+++ b/releases/2021.1.2.rst
@@ -40,17 +40,17 @@ String Literals Containing ASCII Characters Interepreted as TEXT
 ************
 SQream now interprets all string literals, including those containing ASCII characters, as ``text``.
 
-For more information, see `String Types `_.
+For more information, see `String Types `_.
 
 Decimal Literals Interpreted as Numeric Columns
 ************
 SQream now interprets literals containing decimal points as ``numeric`` instead of as ``double``.
 
-For more information, see `Data Types `_.
+For more information, see `Data Types `_.
 
-Roles Area Added to Studio Version 5.3.3
+Roles Area Added to Studio Version 5.4.3
 ****************
-The **Roles** area has been added to `Studio version 5.3.3 `_. From the Roles area users can create and assign roles and manage user permissions.
+The **Roles** area has been added to `Studio version 5.4.3 `_. From the Roles area, users can create and assign roles and manage user permissions.
 
 Resolved Issues
 -------------
@@ -58,4 +58,5 @@ The following list describes the resolved issues:
 
 * In Parquet files, ``float`` columns could not be mapped to SQream ``double`` columns. This was fixed.
 * The ``REPLACE`` function only supported constant values as arguments. This was fixed.
-* The ``LIKE`` function did not check for incorrect patterns or handle escape characters. This was fixed.
\ No newline at end of file
+* The ``LIKE`` function did not check for incorrect patterns or handle escape characters. This was fixed.
+
diff --git a/releases/2021.1.rst b/releases/2021.1.rst
index b2b0dcfd8..02d5377d5 100644
--- a/releases/2021.1.rst
+++ b/releases/2021.1.rst
@@ -40,7 +40,7 @@ SQream now supports Numeric Data types for the following operations:
    * All aggregation types (not including Window functions).
    * Scalar functions (not including some trigonometric and logarithmic functions).
    
-For more information, see `Numeric Data Types `_.
+For more information, see `Numeric Data Types `_.
 
 Text Data Type
 ************
@@ -54,14 +54,14 @@ SQream now supports TEXT data types in all operations, which is default string d
    * Support text columns in queries with multiple distinct aggregates.
    * Text literal support for all functions.
    
-For more information, see `String Types `_.
+For more information, see `String Types `_.
 
 
 Supports Scalar Subqueries
 ************
 SQream now supports running initial scalar subqueries.
 
-For more information, see `Subqueries `_.
+For more information, see `Subqueries `_.
 
 Literal Arguments
 ************
@@ -72,7 +72,7 @@ Simple Scalar SQL UDFs
 ************
 SQream now supports simple scalar SQL UDF's.
 
-For more information, see `Simple Scalar SQL UDF’s `_.
+For more information, see `Simple Scalar SQL UDF’s `_.
 
 Logging Enhancements
 ************
@@ -91,7 +91,7 @@ Improved Presented License Information
 ************
 SQream now displays information related to data size limitations, expiration date, type of license shown by the new UF. The **Utility Function (UF)** name is ``get_license_info()``.
 
-For more information, see `GET_LICENSE_INFO `_.
+For more information, see `GET_LICENSE_INFO `_.
 
 
   
@@ -171,7 +171,7 @@ Operations and Configuration Changes
 Recommended SQream Configuration on Cloud
 ************
 
-For more information about AWS, see `Amazon S3 `_.
+For more information about AWS, see `Amazon S3 `_.
 
 
 
@@ -183,7 +183,7 @@ SQream now has a new ``runtimeGlobalFlags`` flag called ``WriteToFileThreads``.
 
 This flag configures the number of threads in the **WriteToFile** function. The default value is ``16``.
 
-For more information about the ``runtimeGlobalFlags`` flag, see the **Runtime Global Flags** table in `Configuration `_.
+For more information about the ``runtimeGlobalFlags`` flag, see the **Runtime Global Flags** table in `Configuration `_.
 
 
 
@@ -211,3 +211,4 @@ The the list below describes the following known issues and limitations:
 Upgrading to v2021.1
 -------
 Due to the known issue of a limitation on the amount of access requests that can be simultaneously sent to AWS, deploying S3 requires setting the ``ObjectStoreClients`` parameter to ``40``.
+
diff --git a/releases/2021.2.1.24.rst b/releases/2021.2.1.24.rst
new file mode 100644
index 000000000..2016dd564
--- /dev/null
+++ b/releases/2021.2.1.24.rst
@@ -0,0 +1,85 @@
+.. _2021.2.1.24:
+
+**************************
+Release Notes 2021.2.1.24
+**************************
+The 2021.2.1.24 release notes were released on 7/28/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+----------
+The 2021.2.1.24 Release Notes include a query maintenance feature.
+
+New Features
+----------
+The 2021.2.1.24 Release Notes include the following new features:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Query Healer
+************
+The new **Query Healer** feature periodically examines the progress of running statements, and is used for query maintenance.
+
+For more information, see `Query Healer `_.
+
+Resolved Issues
+---------
+The following table lists the resolved issues for Version 2021.2.1.24:
+
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                                                                    |
++=============+====================================================================================================================================+
+| SQ-10606    | Queries were getting stuck in the queue for a prolonged time.                                                                      |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-10691    | The DB schema identifier was causing an error when running queries from joins suite.                                               |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-10918    | The Workload Manager was only assigning jobs sequentially, delaying user SQLs assigned to workers running very large jobs.         |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-10955    | Metadata filters were not being applied when users filtered by nullable dates using ``dateadd``                                    |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+
+Known Issues
+---------
+The following table lists the known issues for Version 2021.2.1.24:
+
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                                                                    |
++=============+====================================================================================================================================+
+| SQ-10071    | An error occurred on existing subqueries with ``TEXT`` and ``VARCHAR`` equality conditions.                                        |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-10902    | Inserting a null value into non-null column was causing SQream to crash.                                                           |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-11088    | Specific workers caused low performance during compilation.                                                                        |
++-------------+------------------------------------------------------------------------------------------------------------------------------------+
+
+Operations and Configuration Changes 
+--------
+The following worker level configuration flags were added:
+
+ * :ref:`is_healer_on`
+
+    ::
+
+ * :ref:`healer_max_statement_inactivity_seconds`
+ 
+    ::
+	
+ * :ref:`healer_detection_frequency_seconds`
+
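As a sketch of how these worker-level flags might look in practice — the file path and JSON layout below are illustrative assumptions; only the three flag names come from this release:

```shell
# Write a hypothetical worker configuration fragment. Only the three flag
# names are from this release; the path, layout, and values are assumptions.
cat > /tmp/sqream_worker_example.json <<'EOF'
{
  "is_healer_on": true,
  "healer_max_statement_inactivity_seconds": 18000,
  "healer_detection_frequency_seconds": 600
}
EOF

# List the healer-related flags that the fragment defines.
grep -o '"[a-z_]*healer[a-z_]*"' /tmp/sqream_worker_example.json
```

The values shown are placeholders; consult the flag reference pages above for the actual defaults.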
+Naming Changes
+-------
+No relevant naming changes were made.
+
+Deprecated Features
+-------
+Version 2021.2.1.24 includes no deprecated features.
+
+End of Support
+-------
+The End of Support section is not relevant to Version 2021.2.1.24.
+ 
diff --git a/releases/2021.2.1.rst b/releases/2021.2.1.rst
index f17bdd516..d50f1084e 100644
--- a/releases/2021.2.1.rst
+++ b/releases/2021.2.1.rst
@@ -21,30 +21,29 @@ CREATE TABLE
 ************
 SQream now supports duplicating the column structure of an existing table using the ``LIKE`` clause.
 
-For more information, see `Duplicating the Column Structure of an Existing Table `_.
+For more information, see `Duplicating the Column Structure of an Existing Table `_.
 
 PERCENTILE FUNCTIONS
 ************
 SQream now supports the following aggregation functions:
 
-* :ref:`percentile_cont`
-* :ref:`percentile_disc`
-* :ref:`mode`
+* `PERCENTILE_CONT `_
+* `PERCENTILE_DISC `_
+* `MODE `_
 
 REGEX REPLACE
 ************   
 SQream now supports the ``REGEXP_REPLACE`` function for finding and replacing text column substrings.
 
-For more information, see :ref:`regexp_replace`.
+For more information, see `REGEX_REPLACE `_.
 
 Delete Optimization
 ************
 The ``DELETE`` statement can now delete values that contain multi-table conditions.
 
-For more information, see `Deleting Values that Contain Multi-Table Conditions `_.
-
-For more information, see :ref:`regexp_replace`.
+For more information, see `Deleting Values that Contain Multi-Table Conditions `_.
 
+
 
 Performance Enhancements
 ------
@@ -61,8 +60,7 @@ The following table lists the issues that were resolved in Version 2021.2.1:
    * - SQ No.
      - Description
    * - SQ-8267
-     - A method has been provided for including the ``GROUP BY`` and ``DISTINCT COUNT`` statements.     
-  
+     - A method has been provided for including the ``GROUP BY`` and ``DISTINCT COUNT`` statements.
 
 Known Issues
 ------
@@ -78,4 +76,5 @@ The **End of Support** section is not relevant to Version 2021.2.1.
 
 Deprecated Features
 ------
-The **Deprecated Components** section is not relevant to Version 2021.2.1.
\ No newline at end of file
+The **Deprecated Components** section is not relevant to Version 2021.2.1.
+
diff --git a/releases/2021.2.rst b/releases/2021.2.rst
index ec4773669..f95de16a5 100644
--- a/releases/2021.2.rst
+++ b/releases/2021.2.rst
@@ -33,18 +33,14 @@ SQream now uses a new configuration system based on centralized configuration ac
 
 For more information, see the following:
 
-* `Configuration `_ - describes how to configure your instance of SQream from a centralized location.
-* `SQream Studio 5.4.2 `_ - configure your instance of SQream from Studio.
+* `Configuration `_ - describes how to configure your instance of SQream from a centralized location.
+* `SQream Studio 5.4.3 `_ - configure your instance of SQream from Studio.
    
 Qualifying Schemas Without Providing an Alias
 ************
 When running queries, SQream now supports qualifying schemas without providing an alias.
 
-For more information, see :ref:`create_schema`.
-
-
-
-
+For more information, see `CREATE SCHEMA `_.
 
 Double-Quotations Supported When Importing and Exporting CSVs
 ************
@@ -52,9 +48,10 @@ When importing and exporting CSVs, SQream now supports using quotation character
 
 For more information, see the following:
 
-* :ref:`copy_from`
+* `COPY_FROM `_
+
+* `COPY_TO `_
 
-* :ref:`copy_to`
 
 
 Note the following:
@@ -92,9 +89,9 @@ Note the following:
 For more information, see the following statements:
 
 
-* :ref:`copy_from`
+* `COPY_FROM `_
 
-* :ref:`create_foreign_table`
+* `CREATE_FOREIGN_TABLE `_
 
 Performance Enhancements
 ------
@@ -132,7 +129,7 @@ NVARCHAR Data Type Renamed TEXT
 The ``NVARCHAR`` data type has been renamed ``TEXT``.
 
 
-For more information on the ``TEXT`` data type, see `String (TEXT) `_
+For more information on the ``TEXT`` data type, see `String (TEXT) `_
 
 End of Support
 ------
@@ -159,14 +156,15 @@ When upgrading from a SQream version earlier than 2021.2 you must upgrade your s
       $ cat /etc/sqream/sqream1_config.json |grep cluster
       $ ./upgrade_storage 
 	  
-For more information on upgrading your SQream version, see `Upgrading SQream Version `_.
+For more information on upgrading your SQream version, see `Upgrading SQream Version `_.
 
 Upgrading Your Client Drivers
 ************
-For more information on the client drivers for version 2021.2, see `Client Drivers for 2021.2 `_.
+For more information on the client drivers for version 2021.2, see `Client Drivers for 2021.2 `_.
 
 Configuring Your Instance of SQream
 ************
 A new configuration method is used starting with Version 2021.2.
 
-For more information about configuring your instance of SQream, see :ref:`configuration`.
\ No newline at end of file
+For more information about configuring your instance of SQream, see `Configuration `_.
+
diff --git a/releases/2021.2_index.rst b/releases/2021.2_index.rst
index 77a22b0ae..9bee5fd66 100644
--- a/releases/2021.2_index.rst
+++ b/releases/2021.2_index.rst
@@ -13,5 +13,6 @@ The 2021.2 Release Notes describe the following releases:
    :maxdepth: 1
    :glob:
 
+   2021.2.1.24
    2021.2.1
    2021.2
\ No newline at end of file
diff --git a/releases/2022.1.1.rst b/releases/2022.1.1.rst
new file mode 100644
index 000000000..436287c6c
--- /dev/null
+++ b/releases/2022.1.1.rst
@@ -0,0 +1,109 @@
+.. _2022.1.1:
+
+**************************
+Release Notes 2022.1.1
+**************************
+The 2022.1.1 release notes were released on 7/19/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+----------
+The 2022.1.1 Release Notes describe the following:
+
+* Enhanced security features.
+
+New Features
+----------
+The 2022.1.1 Release Notes include the following new features:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Password Security Compliance
+************
+In compliance with GDPR standards, SQream now requires a strong password policy when accessing the CLI or Studio.
+
+For more information, see :ref:`access_control_password_policy`.
+
+Known Issues
+---------
+The following table lists the known issues for Version 2022.1.1:
+
++-------------+------------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                                |
++=============+================================================================================================+
+| SQ-6419     | An internal compiler error occurred when casting Numeric literals in an aggregation function.  |
++-------------+------------------------------------------------------------------------------------------------+
+
+Resolved Issues
+---------
+The following table lists the issues that were resolved in Version 2022.1.1:
+
++-------------+----------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                        |
++=============+========================================================================================+
+| SQ-10873    | Inserting 100K bytes into a text column resulted in an unclear error message.          |
++-------------+----------------------------------------------------------------------------------------+
+| SQ-10892    | An unclear message was displayed when users ran ``UPDATE`` on foreign tables.          |
++-------------+----------------------------------------------------------------------------------------+
+
+Operations and Configuration Changes
+--------
+The new ``login_max_retries`` configuration flag adjusts the number of permitted log-in attempts.
+
+For more information, see `Adjusting the Permitted Log-In Attempts `_.
+
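As an illustration of the flag — the flag name comes from this release, while the file path, JSON layout, and the value ``5`` are assumptions for the sketch:

```shell
# Hypothetical configuration fragment showing the new flag; only the flag
# name is from this release -- the path, layout, and value are illustrative.
cat > /tmp/sqream_login_example.json <<'EOF'
{
  "login_max_retries": 5
}
EOF

# Confirm the flag is present in the fragment.
grep '"login_max_retries"' /tmp/sqream_login_example.json
```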
+Naming Changes
+-------
+No relevant naming changes were made.
+
+Deprecated Features
+-------
+In SQream version 2022.1, the ``VARCHAR`` data type has been deprecated and replaced with ``TEXT``. SQream will maintain ``VARCHAR`` support in all earlier versions until the migration to ``TEXT`` is complete, at which point ``VARCHAR`` will be deprecated in those versions as well. SQream also provides an automated and secure tool to facilitate and simplify the migration from ``VARCHAR`` to ``TEXT``.
+
+If you are using an earlier version of SQream, see the `Using Legacy String Literals `_ configuration flag.
+
+End of Support
+-------
+The End of Support section is not relevant to Version 2022.1.1.
+
+Upgrading to v2022.1.1
+-------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from one major version to another requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version `_ procedure.
+  
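The numbered procedure above can be summarized as an ordered dry-run; it echoes each step rather than executing it, and every path and name below is a placeholder, not a documented value:

```shell
# Dry-run outline of the upgrade order. All paths and names are placeholders.
upgrade_outline() {
  echo "1. backup:  select backup_metadata('out_path');  -- run in a SQL client"
  echo "2. stop:    shut down all SQream services"
  echo "3. extract: unpack the generated back-up tarball"
  echo "4. restore: replace the current metadata with the backed-up copy"
  echo "5. cd:      cd <new_package>/bin"
  echo "6. run:     ./upgrade_storage"
}
upgrade_outline
```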
diff --git a/releases/2022.1.2.rst b/releases/2022.1.2.rst
new file mode 100644
index 000000000..5c007f0ec
--- /dev/null
+++ b/releases/2022.1.2.rst
@@ -0,0 +1,99 @@
+.. _2022.1.2:
+
+**************************
+Release Notes 2022.1.2
+**************************
+The 2022.1.2 release notes were released on 8/24/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+----------
+The 2022.1.2 Release Notes describe the following:
+
+* Automatic schema identification.
+
+   ::
+
+* Optimized queries on external Parquet tables.
+
+New Features
+----------
+The 2022.1.2 Release Notes include the following new features:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Parquet Read Optimization
+************
+Querying Parquet foreign tables has been optimized and is now up to 20x faster than in previous versions.
+
+Resolved Issues
+---------
+The following table lists the issues that were resolved in Version 2022.1.2:
+
++-------------+-------------------------------------------------------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                                                                           |
++=============+===========================================================================================================================================+
+| SQ-10892    | An incorrect error message was displayed when users ran the ``UPDATE`` command on foreign tables.                                         |
++-------------+-------------------------------------------------------------------------------------------------------------------------------------------+
+| SQ-11273    | Clustering optimization only occurs when copying data from CSV files.                                                                     |
++-------------+-------------------------------------------------------------------------------------------------------------------------------------------+
+
+Operations and Configuration Changes
+--------
+No configuration changes were made.
+
+Naming Changes
+-------
+No relevant naming changes were made.
+
+Deprecated Features
+-------
+In SQream version 2022.1, the ``VARCHAR`` data type has been deprecated and replaced with ``TEXT``. SQream will maintain ``VARCHAR`` support in all earlier versions until the migration to ``TEXT`` is complete, at which point ``VARCHAR`` will be deprecated in those versions as well. SQream also provides an automated and secure tool to facilitate and simplify the migration from ``VARCHAR`` to ``TEXT``.
+
+If you are using an earlier version of SQream, see the `Using Legacy String Literals `_ configuration flag.
+
+End of Support
+-------
+The End of Support section is not relevant to Version 2022.1.2.
+
+Upgrading to v2022.1.2
+-------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from one major version to another requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version `_ procedure.
+  
diff --git a/releases/2022.1.3.rst b/releases/2022.1.3.rst
new file mode 100644
index 000000000..b88eb78d9
--- /dev/null
+++ b/releases/2022.1.3.rst
@@ -0,0 +1,122 @@
+.. _2022.1.3:
+
+**************************
+Release Notes 2022.1.3
+**************************
+The 2022.1.3 release notes were released on 9/20/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+----------
+The 2022.1.3 Release Notes describe the following:
+
+* Delete operations have been optimized by removing redundant calls.
+
+   ::
+
+* The ``LIKE`` condition is now supported for filtering metadata.
+
+   ::
+
+* A migration tool is now available for converting ``VARCHAR`` columns into ``TEXT`` columns.
+
+   ::
+
+* Sub-queries are now supported in the ``UPDATE`` condition.
+
+Known Issues
+---------
+The following table lists the issues that are known limitations in Version 2022.1.3:
+
++-------------+--------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                            |
++=============+============================================================================================+
+| SQ-11677    | UPDATE or DELETE using a sub-query that includes '%' (modulo) is crashing SQreamDB worker  |
++-------------+--------------------------------------------------------------------------------------------+
+
+
+Resolved Issues
+---------
+The following table lists the issues that were resolved in Version 2022.1.3:
+
++-------------+-------------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                                 |
++=============+=================================================================================================+
+| SQ-11487    | COPY FROM with offset = 0 (which is an unsupported option) is stuck up to the query timeout.    |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-11373    | SQL statement fails after changing the foreign table the statement tries to query.              |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-11320    | Locked users are not being released on system reset.                                            |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-11310    | Using "create table like" on foreign tables results in flat compression of the created table.   |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-11287    | SQL User Defined Function fails when function definition contains parentheses                   |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-11187    | FLAT compression is wrongly chosen when dealing with data sets starting with all-nulls          |
++-------------+-------------------------------------------------------------------------------------------------+
+| SQ-10892    | Update - enhanced error message when trying to run update on foreign table.                     |
++-------------+-------------------------------------------------------------------------------------------------+
+
+
+
+Operations and Configuration Changes
+--------
+No configuration changes were made.
+
+Naming Changes
+-------
+No relevant naming changes were made.
+
+Deprecated Features
+-------
+SQream is declaring the end of support for the ``VARCHAR`` data type. This decision stems from SQream's effort to enhance its core functionality and to keep pace with ever-changing ecosystem requirements.
+
+``VARCHAR`` is no longer supported for new customers, effective immediately.
+
+The ``TEXT`` data type is replacing ``VARCHAR``. SQream will maintain ``VARCHAR`` support until 09/30/2023.
+
+As part of release 2022.1.3, SQream provides an automated and secure migration tool to help customers convert from ``VARCHAR`` to ``TEXT``. Please contact SQream for further information.
+
+End of Support
+-------
+No End of Support changes were made.
+
+Upgrading to v2022.1.3
+-------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from one major version to another requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
diff --git a/releases/2022.1.4.rst b/releases/2022.1.4.rst
new file mode 100644
index 000000000..338fdf164
--- /dev/null
+++ b/releases/2022.1.4.rst
@@ -0,0 +1,103 @@
+.. _2022.1.4:
+
+**************************
+Release Notes 2022.1.4
+**************************
+The 2022.1.4 release notes were released on 10/11/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+----------
+The 2022.1.4 Release Notes describe the following:
+
+* Security enhancement - Python UDFs are now disabled by default.
+
+
+
+
+Known Issues
+---------
+No relevant known issues.
+
+
+Resolved Issues
+---------
+The following table lists the issues that were resolved in Version 2022.1.4:
+
++---------------------+------------------------------------------------------------------------------------------------------------------+
+| **SQ No.**          | **Description**                                                                                                  |
++=====================+==================================================================================================================+
+| SQ-11782            | Alter default permissions to grant update results in error                                                       |
++---------------------+------------------------------------------------------------------------------------------------------------------+
+| SQ-11740            | A correlated subquery is blocked when having 'not exist' where clause in update query                            |
++---------------------+------------------------------------------------------------------------------------------------------------------+
+| SQ-11686, SQ-11584  | CUDA malloc error                                                                                                |
++---------------------+------------------------------------------------------------------------------------------------------------------+
+| SQ-10602            | Group by clause error                                                                                            |
++---------------------+------------------------------------------------------------------------------------------------------------------+
+| SQ-9813             | When executing copy from a Parquet file that contains date values earlier than 1970, values are changed to 1970. |
++---------------------+------------------------------------------------------------------------------------------------------------------+
+
+
+
+
+Operations and Configuration Changes
+--------
+No configuration changes were made.
+
+Naming Changes
+-------
+No relevant naming changes were made.
+
+Deprecated Features
+-------
+SQream is declaring the end of support for the ``VARCHAR`` data type. This decision stems from SQream's effort to enhance its core functionality and to keep pace with ever-changing ecosystem requirements.
+
+``VARCHAR`` is no longer supported for new customers, effective from Version 2022.1.3 (September 2022).
+
+The ``TEXT`` data type is replacing ``VARCHAR``. SQream will maintain ``VARCHAR`` support until 09/30/2023.
+
+
+End of Support
+-------
+No End of Support changes were made.
+
+Upgrading to v2022.1.4
+-------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from one major version to another requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
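+
+The numbered steps above can be sketched as the following console outline. This is an illustrative sketch only; the service names and paths are hypothetical placeholders that vary per deployment:
+
+.. code-block:: console
+
+   # Step 2: shut down all SQream services (service names are deployment-specific)
+   $ sudo systemctl stop sqream-metadata sqream-picker 'sqream-worker@*'
+
+   # Steps 3-4: extract the back-up file and replace the current metadata with it
+   $ tar -xf metadata_backup.tar -C /tmp/meta_restore
+   $ mv /path/to/cluster/metadata /path/to/cluster/metadata.old
+   $ cp -r /tmp/meta_restore/metadata /path/to/cluster/
+
+   # Steps 5-6: navigate to the new package bin folder and run upgrade_storage
+   $ cd /path/to/new_sqream_package/bin
+   $ ./upgrade_storage
+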
diff --git a/releases/2022.1.5.rst b/releases/2022.1.5.rst
new file mode 100644
index 000000000..d63bf3b7a
--- /dev/null
+++ b/releases/2022.1.5.rst
@@ -0,0 +1,116 @@
+.. _2022.1.5:
+
+**************************
+Release Notes 2022.1.5
+**************************
+The 2022.1.5 release notes were released on 11/02/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+New Features
+------------
+The 2022.1.5 Release Notes include the following new features:
+ 
+* ``keys_evaluate`` utility function enhancement: the problematic chunk ID was added to the function's output report.
+
+	::
+
+* Database client connections that have been open for 24 hours without any active statements are now automatically closed.
+
+	::
+
+* ``release_defunct_locks`` utility function enhancement: the function now accepts a new optional input parameter that specifies a timeout. For more details, see `Lock Related Issues <../troubleshooting/lock_related_issues.html>`_.
+
+   
+
+
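+As an illustration of the new timeout parameter, a call might look as follows. The argument shown is an assumed example value; see the Lock Related Issues page linked above for the exact parameter semantics:
+
+.. code-block:: console
+
+   $ select release_defunct_locks(30);
+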
+Known Issues
+------------
+An issue was recently discovered in the encryption feature. At this time, SQream recommends avoiding this feature; a fix will be introduced in the near future.
+
+
+Resolved Issues
+---------------
+The following table lists the issues that were resolved in Version 2022.1.5:
+
++--------------+------------------------------------------------------------------------------------------+
+| **SQ No.**   | **Description**                                                                          |
++==============+==========================================================================================+
+| SQ-11081     | Tableau connections were not being closed                                                |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11473     | SQream Command Line Interface connectivity issues                                        |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11551     | SQream Studio Logs pages filtering issues                                                |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11631     | Log related configuration flags are not working as expected                              |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11745     | Missing validation of sufficient GPU memory                                              |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11792     | CUME_DIST function causes query execution errors                                         |
++--------------+------------------------------------------------------------------------------------------+
+| SQ-11905     | Casting GetDate to text returns DATE with 0s in the time part or no time part at all     |
++--------------+------------------------------------------------------------------------------------------+
+
+
+
+
+
+Operations and Configuration Changes
+------------------------------------
+No configuration changes were made.
+
+Naming Changes
+--------------
+No relevant naming changes were made.
+
+Deprecated Features
+-------------------
+SQream is declaring the end of support for the VARCHAR data type. This decision stems from SQream's effort to enhance its core functionality and to keep up with ever-changing ecosystem requirements.
+
+VARCHAR is no longer supported for new customers - effective from Version 2022.1.3 (September 2022).  
+
+TEXT data type is replacing VARCHAR - SQream will maintain VARCHAR data type support until 09/30/2023.
+
+
+End of Support
+--------------
+No End of Support changes were made.
+
+Upgrading to v2022.1.5
+----------------------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from a major version to another major version requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
diff --git a/releases/2022.1.6.rst b/releases/2022.1.6.rst
new file mode 100644
index 000000000..67eb46222
--- /dev/null
+++ b/releases/2022.1.6.rst
@@ -0,0 +1,102 @@
+.. _2022.1.6:
+
+**************************
+Release Notes 2022.1.6
+**************************
+The 2022.1.6 release notes were released on 11/29/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+New Features
+------------
+ 
+* :ref:`.Net Driver` now supports .NET version 6 or newer. 
+
+	::
+
+Known Issues
+------------
+:ref:`Percentile` is not supported for Window functions.
+
+Version 2022.1.6 Resolved Issues
+--------------------------------
+
++--------------------------------+--------------------------------------------------------------------------------------------+
+|  **SQ No.**                    |  **Description**                                                                           |
++================================+============================================================================================+
+| SQ-10160                       | Spotfire casting issues when reading SQream data                                           |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-12089                       | ``COUNT (*)`` execution fails when using foreign table                                     |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-12019                       | Using ``PERCENTILE_DISC`` function with ``PARTITION BY`` function causes internal error    |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-12117                       | Running TCPH-21 results in out of memory                                                   |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-11940, SQ-11926, SQ-11874   |  Known encryption issues                                                                   |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-11295                       | ``max_file_size`` when executing ``COPY_TO`` is imprecise                                  |
++--------------------------------+--------------------------------------------------------------------------------------------+
+| SQ-12204                       | Possible issue when trying to INSERT Unicode data using .Net client                        |
++--------------------------------+--------------------------------------------------------------------------------------------+
+ 
+
+
+Configuration Changes
+---------------------
+No configuration changes were made.
+
+Naming Changes
+--------------
+No relevant naming changes were made.
+
+Deprecated Features
+-------------------
+SQream is declaring the end of support for the VARCHAR data type. This decision stems from SQream's effort to enhance its core functionality and to keep up with ever-changing ecosystem requirements.
+
+VARCHAR is no longer supported for new customers - effective from Version 2022.1.3 (September 2022).  
+
+TEXT data type is replacing VARCHAR and NVARCHAR - SQream will maintain VARCHAR data type support until 09/30/2023.
+
+
+End of Support
+--------------
+No End of Support changes were made.
+
+Upgrading to v2022.1.6
+----------------------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from a major version to another major version requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
diff --git a/releases/2022.1.7.rst b/releases/2022.1.7.rst
new file mode 100644
index 000000000..b3fda33ff
--- /dev/null
+++ b/releases/2022.1.7.rst
@@ -0,0 +1,97 @@
+.. _2022.1.7:
+
+**************************
+Release Notes 2022.1.7
+**************************
+The 2022.1.7 release notes were released on 12/15/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+New Features
+------------
+
+ * Ingesting data from :ref:`JSON` files.
+
+	::
+
+ * ZLIB compression performance enhancements.
+
+	::
+
+
+Known Issues
+------------
+:ref:`Percentile` is not supported for Window functions.
+
+Version 2022.1.7 Resolved Issues
+--------------------------------
+
++------------------+-----------------------------------------------------------------------+
+| **SQ No.**       | **Description**                                                       |
++==================+=======================================================================+
+| SQ-11523         | ``SAVED QUERY`` execution internal error                              |
++------------------+-----------------------------------------------------------------------+
+| SQ-11811         |  Missing metadata optimization when joining ``TEXT`` columns          |
++------------------+-----------------------------------------------------------------------+
+| SQ-12178         | SQreamNet does not support the ``ExecuteNonQuery`` ADO.NET command    |
++------------------+-----------------------------------------------------------------------+
+
+Configuration Changes
+---------------------
+No configuration changes were made.
+
+Naming Changes
+--------------
+No relevant naming changes were made.
+
+Deprecated Features
+-------------------
+SQream is declaring the end of support for the VARCHAR data type. This decision stems from SQream's effort to enhance its core functionality and to keep up with ever-changing ecosystem requirements.
+
+VARCHAR is no longer supported for new customers - effective from Version 2022.1.3 (September 2022).  
+
+TEXT data type is replacing VARCHAR and NVARCHAR - SQream will maintain VARCHAR data type support until 09/30/2023.
+
+
+End of Support
+--------------
+No End of Support changes were made.
+
+Upgrading to v2022.1.7
+----------------------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from a major version to another major version requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
diff --git a/releases/2022.1.rst b/releases/2022.1.rst
new file mode 100644
index 000000000..6614d6458
--- /dev/null
+++ b/releases/2022.1.rst
@@ -0,0 +1,131 @@
+.. _2022.1:
+
+**************************
+Release Notes 2022.1
+**************************
+The 2022.1 release notes were released on 7/19/2022 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+Version Content
+---------------
+The 2022.1 Release Notes describe the following:
+
+* Enhanced security features.
+* New data manipulation command.
+* Additional data ingestion format.
+
+New Features
+------------
+The 2022.1 Release Notes include the following new features:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Data Encryption
+***************
+SQream now supports data encryption mechanisms in accordance with **General Data Protection Regulation (GDPR)** standards.
+
+Using the data encryption feature may degrade performance by up to 10%.
+
+For more information, see `Data Encryption `_.
+
+Update Feature
+**************
+SQream now supports the DML **Update** feature, which is used for modifying the value of certain columns in existing rows.
+
+For more information, see `UPDATE `_.
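+
+As a brief illustration, an ``UPDATE`` statement modifies values in rows that match a condition. The table and column names below are hypothetical:
+
+.. code-block:: console
+
+   $ UPDATE employees SET salary = salary * 1.1 WHERE department = 'R&D';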
+
+Avro Ingestion
+**************
+SQream now supports ingesting data from Avro files.
+
+For more information, see `Inserting Data from Avro `_.
+
+Known Issues
+------------
+The following table lists the known issues for Version 2022.1:
+
++-------------+-------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                           |
++=============+===========================================================================================+
+| SQ-7732     | Reading numeric columns from an external Parquet file generated an error.                 |
++-------------+-------------------------------------------------------------------------------------------+
+| SQ-9889     | Running a query including Thai characters generated an internal runtime error.            |
++-------------+-------------------------------------------------------------------------------------------+
+| SQ-10071    | Error on existing subqueries with TEXT and VARCHAR equality condition                     |
++-------------+-------------------------------------------------------------------------------------------+
+| SQ-10191    | The ``ALTER DEFAULT SCHEMA`` command was not functioning correctly.                       |
++-------------+-------------------------------------------------------------------------------------------+
+| SQ-10629    | Inserting data into a table significantly slowed down running queries.                    |
++-------------+-------------------------------------------------------------------------------------------+
+| SQ-10659    | Using a comment generated a compile error.                                                |
++-------------+-------------------------------------------------------------------------------------------+
+
+Resolved Issues
+---------------
+The following table lists the issues that were resolved in Version 2022.1:
+
++-------------+-------------------------------------------------------------------------------------------+
+| **SQ No.**  | **Description**                                                                           |
++=============+===========================================================================================+
+| SQ-10111    | Reading numeric columns from an external Parquet file generated an error.                 |
++-------------+-------------------------------------------------------------------------------------------+
+
+Operations and Configuration Changes
+------------------------------------
+No relevant operations and configuration changes were made.
+
+Naming Changes
+--------------
+No relevant naming changes were made.
+
+Deprecated Features
+-------------------
+In SQream version 2022.1 the ``VARCHAR`` data type has been deprecated and replaced with ``TEXT``. SQream will maintain ``VARCHAR`` in all previous versions until completing the migration to ``TEXT``, at which point it will be deprecated in all earlier versions. SQream also provides an automated and secure tool to facilitate and simplify migration from ``VARCHAR`` to ``TEXT``.  
+
+If you are using an earlier version of SQream, see the `Using Legacy String Literals `_ configuration flag.
+
+End of Support
+--------------
+The End of Support section is not relevant to Version 2022.1.
+
+Upgrading to v2022.1
+--------------------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from a major version to another major version requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version `_ procedure.
+  
diff --git a/releases/2022.1_index.rst b/releases/2022.1_index.rst
new file mode 100644
index 000000000..e63294405
--- /dev/null
+++ b/releases/2022.1_index.rst
@@ -0,0 +1,23 @@
+.. _2022.1_index:
+
+**************************
+Release Notes 2022.1
+**************************
+The 2022.1 Release Notes describe the following releases:
+
+.. contents:: 
+   :local:
+   :depth: 1
+
+.. toctree::
+   :maxdepth: 1
+   :glob:
+
+   2022.1.7
+   2022.1.6
+   2022.1.5
+   2022.1.4
+   2022.1.3
+   2022.1.2
+   2022.1.1
+   2022.1
\ No newline at end of file
diff --git a/releases/4.0.0_index.rst b/releases/4.0.0_index.rst
new file mode 100644
index 000000000..15f442f1a
--- /dev/null
+++ b/releases/4.0.0_index.rst
@@ -0,0 +1,122 @@
+.. _4.0.0:
+
+**************************
+Release Notes 4.0.0
+**************************
+
+SQream is introducing a new version release system that follows the more commonly used Major.Minor.Patch versioning schema. The newly released **4.0.0 version** is a minor version upgrade and does not require considerable preparation.
+
+The 4.0.0 release notes were released on 01/25/2023 and describe the following:
+
+.. contents:: 
+   :local:
+   :depth: 1      
+
+New Features
+------------
+
+ * Re-enabling an enhanced version of the :ref:`License Storage Capacity` feature 
+
+	::
+
+ * :ref:`Lightweight Directory Access Protocol(LDAP)` may be used to authenticate SQream roles
+
+	::
+
+ * :ref:`Physical deletion performance enhancement` by supporting file systems with parallelism capabilities
+ 
+SQream Studio Updates and Improvements
+--------------------------------------
+
+ * When creating a **New Role**, you may now create a group role by selecting **Set as a group role**.
+
+	::
+
+ * When editing an **Existing Role**, the user is no longer required to update the password.
+
+	::
+
+Known Issues
+------------
+:ref:`Percentile` is not supported for Window functions.
+
+Version 4.0.0 Resolved Issues
+-----------------------------
+
++-----------------+------------------------------------------------------------------------------------------+
+|  **SQ No.**     | **Description**                                                                          |
++=================+==========================================================================================+
+| SQ-10544        | SQream Studio Dashboard periodic update enhancement                                      |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-11772        | Slow query performance when using ``JOIN`` clause                                        |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12318        | JDBC connector ``insertBuffer`` parameter issue                                          |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12364        | ``GET DDL`` foreign table output issue                                                   |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12446        | SQream Studio group role modification issue                                              |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12468        | Internal compiler error                                                                  |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12580        | Server Picker GPU dependency                                                             |
++-----------------+------------------------------------------------------------------------------------------+
+| SQ-12652        | SQream Studio result panel adjustment                                                    |
++-----------------+------------------------------------------------------------------------------------------+
+
+
+Configuration Changes
+---------------------
+No configuration changes were made.
+
+Naming Changes
+--------------
+No relevant naming changes were made.
+
+Deprecated Features
+-------------------
+SQream is declaring the end of support for the VARCHAR data type. This decision stems from SQream's effort to enhance its core functionality and to keep up with ever-changing ecosystem requirements.
+
+VARCHAR is no longer supported for new customers - effective from Version 2022.1.3 (September 2022).  
+
+TEXT data type is replacing VARCHAR and NVARCHAR - SQream will maintain VARCHAR data type support until 09/30/2023.
+
+
+End of Support
+---------------
+No End of Support changes were made.
+
+Upgrading to v4.0.0
+-------------------
+1. Generate a back-up of the metadata by running the following command:
+
+   .. code-block:: console
+
+      $ select backup_metadata('out_path');
+	  
+   .. tip:: SQream recommends storing the generated back-up locally in case it is needed.
+   
+   SQream runs the Garbage Collector and creates a clean backup tarball package.
+   
+2. Shut down all SQream services.
+
+    ::
+
+3. Extract the recently created back-up file.
+
+    ::
+
+4. Replace your current metadata with the metadata you stored in the back-up file.
+
+    ::
+
+5. Navigate to the new SQream package bin folder.
+
+    ::
+
+6. Run the following command:
+
+   .. code-block:: console
+
+      $ ./upgrade_storage 
+
+   .. note:: Upgrading from a major version to another major version requires you to follow the **Upgrade Storage** step. This is described in Step 7 of the `Upgrading SQream Version <../installation_guides/installing_sqream_with_binary.html#upgrading-sqream-version>`_ procedure.
+  
diff --git a/releases/index.rst b/releases/index.rst
index 472197afc..107f323c8 100644
--- a/releases/index.rst
+++ b/releases/index.rst
@@ -12,6 +12,10 @@ Release Notes
    
    * - Version
      - Release Date
+   * - :ref:`4.0.0`
+     - January 25, 2023
+   * - :ref:`2022.1`
+     - July 19, 2022
    * - :ref:`2021.2`
      - September 13, 2021
    * - :ref:`2021.1`
@@ -30,8 +34,10 @@ Release Notes
    :glob:
    :hidden:
 
+   4.0.0_index
+   2022.1_index
    2021.2_index
    2021.1_index
    2020.3_index
    2020.2
-   2020.1
+   2020.1
\ No newline at end of file
diff --git a/sqream_studio_5.4.3/configuring_your_instance_of_sqream.rst b/sqream_studio_5.4.7/configuring_your_instance_of_sqream.rst
similarity index 91%
rename from sqream_studio_5.4.3/configuring_your_instance_of_sqream.rst
rename to sqream_studio_5.4.7/configuring_your_instance_of_sqream.rst
index 2a60146e0..3fcac7861 100644
--- a/sqream_studio_5.4.3/configuring_your_instance_of_sqream.rst
+++ b/sqream_studio_5.4.7/configuring_your_instance_of_sqream.rst
@@ -1,23 +1,23 @@
-.. _configuring_your_instance_of_sqream:
-
-****************************
-Configuring Your Instance of SQream
-****************************
-The **Configuration** section lets you edit parameters from one centralized location. While you can edit these parameters from the **worker configuration file (config.json)** or from your CLI, you can also modify them in Studio in an easy-to-use format.
-
-Configuring your instance of SQream in Studio is session-based, which enables you to edit parameters per session on your own device. 
-Because session-based configurations are not persistent and are deleted when your session ends, you can edit your required parameters while avoiding conflicts between parameters edited on different devices at different points in time.
-
-Editing Your Parameters
--------------------------------
-When configuring your instance of SQream in Studio you can edit parameters for the **Generic** and **Admin** parameters only.
-
-Studio includes two types of parameters: toggle switches, such as **flipJoinOrder**, and text fields, such as **logSysLevel**. After editing a parameter, you can reset each one to its previous value or to its default value individually, or revert all parameters to their default setting simultaneously. Note that you must click **Save** to save your configurations.
-
-You can hover over the **information** icon located on each parameter to read a short description of its behavior.
-
-Exporting and Importing Configuration Files
--------------------------
-You can also export and import your configuration settings into a .json file. This allows you to easily edit your parameters and to share this file with other users if required.
-
-For more information about configuring your instance of SQream, see `Configuration Guides `_.
+.. _configuring_your_instance_of_sqream:
+
+***********************************
+Configuring Your Instance of SQream
+***********************************
+The **Configuration** section lets you edit parameters from one centralized location. While you can edit these parameters from the **worker configuration file (config.json)** or from your CLI, you can also modify them in Studio in an easy-to-use format.
+
+Configuring your instance of SQream in Studio is session-based, which enables you to edit parameters per session on your own device. 
+Because session-based configurations are not persistent and are deleted when your session ends, you can edit your required parameters while avoiding conflicts between parameters edited on different devices at different points in time.
+
+Editing Your Parameters
+-------------------------------
+When configuring your instance of SQream in Studio you can edit parameters for the **Generic** and **Admin** parameters only.
+
+Studio includes two types of parameters: toggle switches, such as **flipJoinOrder**, and text fields, such as **logSysLevel**. After editing a parameter, you can reset each one to its previous value or to its default value individually, or revert all parameters to their default setting simultaneously. Note that you must click **Save** to save your configurations.
+
+You can hover over the **information** icon located on each parameter to read a short description of its behavior.
+
+Exporting and Importing Configuration Files
+-------------------------
+You can also export and import your configuration settings into a .json file. This allows you to easily edit your parameters and to share this file with other users if required.
+
+For more information about configuring your instance of SQream, see :ref:`Configuration`.
\ No newline at end of file
diff --git a/sqream_studio_5.4.3/creating_assigning_and_managing_roles_and_permissions.rst b/sqream_studio_5.4.7/creating_assigning_and_managing_roles_and_permissions.rst
similarity index 95%
rename from sqream_studio_5.4.3/creating_assigning_and_managing_roles_and_permissions.rst
rename to sqream_studio_5.4.7/creating_assigning_and_managing_roles_and_permissions.rst
index 325563761..31ff716cb 100644
--- a/sqream_studio_5.4.3/creating_assigning_and_managing_roles_and_permissions.rst
+++ b/sqream_studio_5.4.7/creating_assigning_and_managing_roles_and_permissions.rst
@@ -1,98 +1,98 @@
-.. _creating_assigning_and_managing_roles_and_permissions:
-
-.. _roles_5.4.3:
-
-****************************
-Creating, Assigning, and Managing Roles and Permissions
-****************************
-The **Creating, Assigning, and Managing Roles and Permissions** describes the following:
-
-.. contents:: 
-   :local:
-   :depth: 1   
-   
-Overview
----------------
-In the **Roles** area you can create and assign roles and manage user permissions. 
-
-The **Type** column displays one of the following assigned role types:
-
-.. list-table::
-   :widths: 15 75
-   :header-rows: 1   
-   
-   * - Role Type
-     - Description
-   * - Groups
-     - Roles with no users.
-   * - Enabled users
-     - Users with log-in permissions and a password.
-   * - Disabled users
-     - Users with log-in permissions and with a disabled password. An admin may disable a user's password permissions to temporary disable access to the system.
-
-.. note:: If you disable a password, when you enable it you have to create a new one.
-
-:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
-
-
-Viewing Information About a Role
---------------------
-Clicking a role in the roles table displays the following information:
-
- * **Parent Roles** - displays the parent roles of the selected role. Roles inherit all roles assigned to the parent.
- 
-    ::
-   
- * **Members** - displays all members that the role has been assigned to. The arrow indicates the roles that the role has inherited. Hovering over a member displays the roles that the role is inherited from.
-
-    ::
-   
- * **Permissions** - displays the role's permissions. The arrow indicates the permissions that the role has inherited. Hovering over a permission displays the roles that the permission is inherited from.
- 
-:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
-
-
-Creating a New Role
---------------------
-You can create a new role by clicking **New Role**.
-
-
-   
-An admin creates a **user** by granting login permissions and a password to a role. Each role is defined by a set of permissions. An admin can also group several roles together to form a **group** to manage them simultaneously. For example, permissions can be granted to or revoked on a group level.
-
-Clicking **New Role** lets you do the following:
-
- * Add and assign a role name (required)
- * Enable or disable log-in permissions for the role.
- * Set a password.
- * Assign or delete parent roles.
- * Add or delete permissions.
- * Grant the selected user with superuser permissions.
- 
-From the New Role panel you view directly and indirectly (or inherited) granted permissions. Disabled permissions have no connect permissions for the referenced database and are displayed in gray text. You can add or remove permissions from the **Add permissions** field. From the New Role panel you can also search and scroll through the permissions. In the **Search** field you can use the **and** operator to search for strings that fulfill multiple criteria.
-
-When adding a new role, you must select the **Enable login for this role** and **Has password** check boxes.
-
-:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
-
-Editing a Role
---------------------
-Once you've created a role, clicking the **Edit Role** button lets you do the following:
-
- * Edit the role name.
- * Enable or disable log-in permissions.
- * Set a password.
- * Assign or delete parent roles.
- * Assign a role **administrator** permissions.
- * Add or delete permissions.
- * Grant the selected user with superuser permissions.
-
-From the Edit Role panel you view directly and indirectly (or inherited) granted permissions. Disabled permissions have no connect permissions for the referenced database and are displayed in gray text. You can add or remove permissions from the **Add permissions** field. From the Edit Role panel you can also search and scroll through the permissions. In the **Search** field you can use the **and** operator to search for strings that fulfill multiple criteria.
-
-:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
-
-Deleting a Role
------------------
-Clicking the **delete** icon displays a confirmation message with the amount of users and groups that will be impacted by deleting the role.
-
-:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
\ No newline at end of file
+.. _creating_assigning_and_managing_roles_and_permissions:
+
+.. _roles_5.4.7:
+
+*******************************************************
+Creating, Assigning, and Managing Roles and Permissions
+*******************************************************
+The **Creating, Assigning, and Managing Roles and Permissions** page describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1   
+   
+Overview
+---------------
+In the **Roles** area you can create and assign roles and manage user permissions. 
+
+The **Type** column displays one of the following assigned role types:
+
+.. list-table::
+   :widths: 15 75
+   :header-rows: 1   
+   
+   * - Role Type
+     - Description
+   * - Groups
+     - Roles with no users.
+   * - Enabled users
+     - Users with log-in permissions and a password.
+   * - Disabled users
+     - Users with log-in permissions whose password has been disabled. An admin may disable a user's password to temporarily block access to the system.
+
+.. note:: If you disable a password, you must create a new password when you re-enable it.
+
+:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
+
+
+Viewing Information About a Role
+--------------------------------
+Clicking a role in the roles table displays the following information:
+
+ * **Parent Roles** - displays the parent roles of the selected role. Roles inherit all roles assigned to the parent.
+ 
+    ::
+   
+ * **Members** - displays all members to which the role has been assigned. The arrow indicates roles that were inherited. Hovering over a member displays the roles it is inherited from.
+
+    ::
+   
+ * **Permissions** - displays the role's permissions. The arrow indicates permissions that the role has inherited. Hovering over a permission displays the roles it is inherited from.
+ 
+:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
+
+
+Creating a New Role
+--------------------
+You can create a new role by clicking **New Role**.
+
+An admin creates a **user** by granting login permissions and a password to a role. Each role is defined by a set of permissions. An admin can also group several roles together to form a **group** to manage them simultaneously. For example, permissions can be granted to or revoked on a group level.
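+
+In SQL terms, this corresponds roughly to the following sketch (the role names and password are hypothetical):
+
+.. code-block:: sql
+
+   -- A group: a role without login permissions
+   CREATE ROLE analysts;
+
+   -- A user: a role granted login permissions and a password
+   CREATE ROLE jane_d;
+   GRANT LOGIN TO jane_d;
+   GRANT PASSWORD 'Str0ng_Pass1' TO jane_d;
+
+   -- Assigning the user to the group lets permissions be
+   -- granted or revoked at the group level
+   GRANT analysts TO jane_d;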
+
+Clicking **New Role** lets you do the following:
+
+ * Add and assign a role name (required).
+ * Enable or disable log-in permissions for the role.
+ * Set a password.
+ * Assign or delete parent roles.
+ * Add or delete permissions.
+ * Grant the selected role superuser permissions.
+ 
+From the New Role panel you can view directly and indirectly (inherited) granted permissions. Permissions shown in gray text are disabled because they have no connect permission for the referenced database. You can add or remove permissions from the **Add permissions** field, and you can search and scroll through the permissions. In the **Search** field, you can use the **and** operator to search for strings that fulfill multiple criteria.
+
+When adding a new role, you must select the **Enable login for this role** and **Has password** check boxes.
+
+:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
+
+Editing a Role
+--------------------
+Once you've created a role, clicking the **Edit Role** button lets you do the following:
+
+ * Edit the role name.
+ * Enable or disable log-in permissions.
+ * Set a password.
+ * Assign or delete parent roles.
+ * Assign a role **administrator** permissions.
+ * Add or delete permissions.
+ * Grant the selected role superuser permissions.
+
+From the Edit Role panel you can view directly and indirectly (inherited) granted permissions. Permissions shown in gray text are disabled because they have no connect permission for the referenced database. You can add or remove permissions from the **Add permissions** field, and you can search and scroll through the permissions. In the **Search** field, you can use the **and** operator to search for strings that fulfill multiple criteria.
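+
+Granting a role superuser permissions through Studio corresponds to SQL along these lines (a sketch; the role name is hypothetical):
+
+.. code-block:: sql
+
+   GRANT SUPERUSER TO analysts;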
+
+:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
+
+Deleting a Role
+-----------------
+Clicking the **delete** icon displays a confirmation message with the number of users and groups that will be impacted by deleting the role.
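+
+Deleting a role through Studio corresponds to the SQL ``DROP ROLE`` statement, a minimal sketch (the role name is hypothetical):
+
+.. code-block:: sql
+
+   DROP ROLE analysts;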
+
+:ref:`Back to Creating, Assigning, and Managing Roles and Permissions`
\ No newline at end of file
diff --git a/sqream_studio_5.4.3/executing_statements_and_running_queries_from_the_editor.rst b/sqream_studio_5.4.7/executing_statements_and_running_queries_from_the_editor.rst
similarity index 89%
rename from sqream_studio_5.4.3/executing_statements_and_running_queries_from_the_editor.rst
rename to sqream_studio_5.4.7/executing_statements_and_running_queries_from_the_editor.rst
index 55369d761..882b226e0 100644
--- a/sqream_studio_5.4.3/executing_statements_and_running_queries_from_the_editor.rst
+++ b/sqream_studio_5.4.7/executing_statements_and_running_queries_from_the_editor.rst
@@ -1,492 +1,492 @@
-.. _executing_statements_and_running_queries_from_the_editor:
-
-.. _editor_top_5.4.3:
-
-****************************
-Executing Statements and Running Queries from the Editor
-****************************
-The **Editor** is used for the following:
-
-* Selecting an active database and executing queries.
-* Performing statement-related operations and showing metadata.
-* Executing pre-defined queries.
-* Writing queries and statements and viewing query results.
-	 
-The following is a brief description of the Editor panels:
-
-
-.. list-table::
-   :widths: 10 34 56
-   :header-rows: 1  
-   
-   * - No.
-     - Element
-     - Description
-   * - 1
-     - :ref:`Toolbar`
-     - Used to select the active database you want to work on, limit the number of rows, save query, etc.
-   * - 2
-     - :ref:`Database Tree and System Queries panel`
-     - Shows a hierarchy tree of databases, views, tables, and columns
-   * - 3
-     - :ref:`Statement panel`
-     - Used for writing queries and statements
-   * - 4
-     - :ref:`Results panel`
-     - Shows query results and execution information.
-
-
-
-.. _top_5.4.3:
-
-.. _studio_5.4.3_editor_toolbar:
-
-Executing Statements from the Toolbar
-================
-You can access the following from the Toolbar pane:
-
-* **Database dropdown list** - select a database that you want to run statements on.
-
-    ::
-
-* **Service dropdown list** - select a service that you want to run statements on. The options in the service dropdown menu depend on the database you select from the **Database** dropdown list.
-
-    ::
-
-* **Execute** - lets you set which statements to execute. The **Execute** button toggles between **Execute** and **Stop**, and can be used to stop an active statement before it completes:
-
-  * **Statements** - executes the statement at the location of the cursor.
-  * **Selected** - executes only the highlighted text. This mode should be used when executing subqueries or sections of large queries (as long as they are valid SQLs).
-  * **All** - executes all statements in a selected tab.
-   
-* **Format SQL** - Lets you reformat and reindent statements.
-
-    ::
-
-* **Download query** - Lets you download query text to your computer.
-
-    ::
-
-* **Open query** - Lets you upload query text from your computer.
-
-    ::
-
-* **Max Rows** - By default, the Editor fetches only the first 10,000 rows. You can modify this number by selecting an option from the **Max Rows** dropdown list. Note that setting a higher number may slow down your browser if the result is very large. This number is limited to 100,000 results. To see a higher number, you can save the results in a file or a table using the :ref:`create_table_as` command.
-
-
-For more information on stopping active statements, see the :ref:`STOP_STATEMENT` command.
-
-:ref:`Back to Executing Statements and Running Queries from the Editor`
-
-
-.. _studio_5.4.3_editor_db_tree:
-
-Performing Statement-Related Operations from the Database Tree
-================
-From the Database Tree you can perform statement-related operations and show metadata (such as a number indicating the amount of rows in the table).
-
-
-
-
-
-The database object functions are used to perform the following:
-
-* The **SELECT** statement - copies the selected table's **columns** into the Statement panel as ``SELECT`` parameters.  
-
-   ::
-
-* The **copy** feature |icon-copy| - copies the selected table's **name** into the Statement panel. 
-
-   ::
-
-* The **additional operations** |icon-dots| - displays the following additional options:
-  
-
-.. |icon-user| image:: /_static/images/studio_icon_user.png
-   :align: middle
-   
-.. |icon-dots| image:: /_static/images/studio_icon_dots.png
-   :align: middle   
-   
-.. |icon-editor| image:: /_static/images/studio_icon_editor.png
-   :align: middle
-
-.. |icon-copy| image:: /_static/images/studio_icon_copy.png
-   :align: middle
-
-.. |icon-select| image:: /_static/images/studio_icon_select.png
-   :align: middle
-
-.. |icon-dots| image:: /_static/images/studio_icon_dots.png
-   :align: middle
-
-.. |icon-filter| image:: /_static/images/studio_icon_filter.png
-   :align: middle
-
-.. |icon-ddl-edit| image:: /_static/images/studio_icon_ddl_edit.png
-   :align: middle
-
-.. |icon-run-optimizer| image:: /_static/images/studio_icon_run_optimizer.png
-   :align: middle
-
-.. |icon-generate-create-statement| image:: /_static/images/studio_icon_generate_create_statement.png
-   :align: middle
-
-.. |icon-plus| image:: /_static/images/studio_icon_plus.png
-   :align: middle
-
-.. |icon-close| image:: /_static/images/studio_icon_close.png
-   :align: middle
-
-.. |icon-left| image:: /_static/images/studio_icon_left.png
-   :align: middle
-
-.. |icon-right| image:: /_static/images/studio_icon_right.png
-   :align: middle
-
-.. |icon-format-sql| image:: /_static/images/studio_icon_format.png
-   :align: middle
-
-.. |icon-download-query| image:: /_static/images/studio_icon_download_query.png
-   :align: middle
-
-.. |icon-open-query| image:: /_static/images/studio_icon_open_query.png
-   :align: middle
-
-.. |icon-execute| image:: /_static/images/studio_icon_execute.png
-   :align: middle
-
-.. |icon-stop| image:: /_static/images/studio_icon_stop.png
-   :align: middle
-
-.. |icon-dashboard| image:: /_static/images/studio_icon_dashboard.png
-   :align: middle
-
-.. |icon-expand| image:: /_static/images/studio_icon_expand.png
-   :align: middle
-
-.. |icon-scale| image:: /_static/images/studio_icon_scale.png
-   :align: middle
-
-.. |icon-expand-down| image:: /_static/images/studio_icon_expand_down.png
-   :align: middle
-
-.. |icon-add| image:: /_static/images/studio_icon_add.png
-   :align: middle
-
-.. |icon-add-worker| image:: /_static/images/studio_icon_add_worker.png
-   :align: middle
-
-.. |keep-tabs| image:: /_static/images/studio_keep_tabs.png
-   :align: middle
-
-
-.. list-table::
-   :widths: 30 70
-   :header-rows: 1   
-   
-   * - Function
-     - Description
-   * - Insert statement
-     - Generates an `INSERT `_ statement for the selected table in the editing area.
-   * - Delete statement
-     - Generates a `DELETE `_ statement for the selected table in the editing area.
-   * - Create Table As statement
-     - Generates a `CREATE TABLE AS `_ statement for the selected table in the editing area.	 
-   * - Rename statement
-     - Generates an `RENAME TABLE AS `_ statement for renaming the selected table in the editing area.
-   * - Adding column statement
-     - Generates an `ADD COLUMN `_ statement for adding columns to the selected table in the editing area.
-   * - Truncate table statement
-     - Generates a `TRUNCATE_IF_EXISTS `_ statement for the selected table in the editing area.
-   * - Drop table statement
-     - Generates a ``DROP`` statement for the selected object in the editing area.
-   * - Table DDL
-     - Generates a DDL statement for the selected object in the editing area. To get the entire database DDL, click the |icon-ddl-edit| icon next to the database name in the tree root. See `Seeing System Objects as DDL `_.
-   * - DDL Optimizer
-     - The `DDL Optimizer `_  lets you analyze database tables and recommends possible optimizations.
-
-Optimizing Database Tables Using the DDL Optimizer
------------------------
-The **DDL Optimizer** tab analyzes database tables and recommends possible optimizations according to SQream's best practices.
-
-As described in the previous table, you can access the DDL Optimizer by clicking the **additional options icon** and selecting **DDL Optimizer**.
-
-The following table describes the DDL Optimizer screen:
-
-.. list-table::
-   :widths: 15 75
-   :header-rows: 1   
-   
-   * - Element
-     - Description
-   * - Column area
-     - Shows the column **names** and **column types** from the selected table. You can scroll down or to the right/left for long column lists.
-   * - Optimization area
-     - Shows the number of rows to sample as the basis for running an optimization, the default setting (1,000,000) when running an optimization (this is also the overhead threshold used when analyzing ``VARCHAR`` fields),  and the default percent buffer to add to ``VARCHAR`` lengths (10%). Attempts to determine field nullability.
-   * - Run Optimizer
-     - Starts the optimization process.
-
-Clicking **Run Optimizer** adds a tab to the Statement panel showing the optimized results of the selected object.
-
-For more information, see `Optimization and Best Practices `_.
-
-Executing Pre-Defined Queries from the System Queries Panel
----------------
-The **System Queries** panel lets you execute predefined queries and includes the following system query types:
-
-* **Catalog queries** - used for analyzing table compression rates, users and permissions, etc.
-    
-	::
-	
-* **Admin queries** - queries related to available  (describe the functionality in a general way). Queries useful for SQream database management.
-
-Clicking an item pastes the query into the Statement pane, and you can undo a previous operation by pressing **Ctrl + Z**.
-
-.. _studio_5.4.3_editor_statement_area:
-
-Writing Statements and Queries from the Statement Panel
-==============
-The multi-tabbed statement area is used for writing queries and statements, and is used in tandem with the toolbar. When writing and executing statements, you must first select a database from the **Database** dropdown menu in the toolbar. When you execute a statement, it passes through a series of statuses until completed. Knowing the status helps you with statement maintenance, and the statuses are shown in the **Results panel**.
-
-The auto-complete feature assists you when writing statements by suggesting statement options.
-
-The following table shows the statement statuses:
-	 
-.. list-table::
-   :widths: 45 160
-   :header-rows: 1  
-   
-   * - Status
-     - Description
-   * - Pending
-     - The statement is pending.
-   * - In queue
-     - The statement is waiting for execution.
-   * - Initializing
-     - The statement has entered execution checks.
-   * - Executing
-     - The statement is executing.
-   * - Statement stopped
-     - The statement has been stopped.
-	 
-You can add and name new tabs for each statement that you need to execute, and Studio preserves your created tabs when you switch between databases. You can add new tabs by clicking |icon-plus| , which creates a new tab to the right with a default name of SQL and an increasing number. This helps you keep track of your statements.
-
-You can also rename the default tab name by double-clicking it and typing a new name and write multiple statements in tandem in the same tab by separating them with semicolons (``;``).If too many tabs to fit into the Statement Pane are open at the same time, the tab arrows are displayed. You can scroll through the tabs by clicking |icon-left| or |icon-right|, and close tabs by clicking |icon-close|. You can also close all tabs at once by clicking **Close all** located to the right of the tabs.
-
-.. tip:: If this is your first time using SQream, see `Getting Started `_.
-
-
-.. Keyboard shortcuts
-.. ^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. :kbd:`Ctrl` +: kbd:`Enter` - Execute all queries in the statement area, or just the highlighted part of the query.
-
-.. :kbd:`Ctrl` + :kbd:`Space` - Auto-complete the current keyword
-
-.. :kbd:`Ctrl` + :kbd:`↑` - Switch to next tab.
-
-.. :kbd:`Ctrl` + :kbd:`↓` - Switch to previous tab
-
-.. _studio_editor_results_5.4.3:
-
-:ref:`Back to Executing Statements and Running Queries from the Editor`
-
-.. _studio_5.4.3_editor_results:
-
-.. _results_panel_5.4.3:
-
-Viewing Statement and Query Results from the Results Panel
-==============
-The results panel shows statement and query results. By default, only the first 10,000 results are returned, although you can modify this from the :ref:`studio_editor_toolbar`, as described above. By default, executing several statements together opens a separate results tab for each statement. Executing statements together executes them serially, and any failed statement cancels all subsequent executions.
-
-.. image:: /_static/images/results_panel.png
-
-The following is a brief description of the Results panel views highlighted in the figure above:
-
-.. list-table::
-   :widths: 45 160
-   :header-rows: 1  
-   
-   * - Element
-     - Description
-   * - :ref:`Results view`
-     - Lets you view search query results.
-   * - :ref:`Execution Details view`
-     - Lets you analyze your query for troubleshooting and optimization purposes.
-   * - :ref:`SQL view`
-     - Lets you see the SQL view.
-
-
-.. _results_view_5.4.3:
-
-:ref:`Back to Executing Statements and Running Queries from the Editor`
-	 
-Searching Query Results in the Results View
-----------------
-The **Results view** lets you view search query results.
-
-From this view you can also do the following:
-
-* View the amount of time (in seconds) taken for a query to finish executing.
-* Switch and scroll between tabs.
-* Close all tabs at once.
-* Enable keeping tabs by selecting **Keep tabs**.
-* Sort column results.
-
-Saving Results to the Clipboard
-^^^^^^^^^^^^
-The **Save results to clipboard** function lets you save your results to the clipboard to paste into another text editor or into Excel for further analysis.
-
-.. _save_results_to_local_file_5.4.3:
-
-Saving Results to a Local File
-^^^^^^^^^^^^
-The **Save results to local file** functions lets you save your search query results to a local file. Clicking **Save results to local file** downloads the contents of the Results panel to an Excel sheet. You can then use copy and paste this content into other editors as needed.
-
-In the Results view you can also run parallel statements, as described in **Running Parallel Statements** below.
-
-.. _running_parallel_statements_5.4.3:
-
-Running Parallel Statements
-^^^^^^^^^^^^
-While Studio's default functionality is to open a new tab for each executed statement, Studio supports running parallel statements in one statement tab. Running parallel statements requires using macros and is useful for advanced users.
-
-The following shows the syntax for running parallel statements:
-
-.. code-block:: console
-     
-   $ @@ parallel
-   $ $$
-   $ select 1;
-   $ select 2;
-   $ select 3;
-   $ $$
-
-
-:ref:`Back to Viewing Statement and Query Results from the Results Panel`
-
-.. _execution_details_view_5.4.3:
-
-.. _execution_tree_5.4.3:
-
-Execution Details View
---------------
-The **Execution Details View** section describes the following:
-
-.. contents:: 
-   :local:
-   :depth: 1
-   
-Overview
-^^^^^^^^^^^^
-Clicking **Execution Details View** displays the **Execution Tree**, which is a chronological tree of processes that occurred to execute your queries. The purpose of the Execution Tree is to analyze all aspects of your query for troubleshooting and optimization purposes, such as resolving queries with an exceptionally long runtime.
-
-.. note::  The **Execution Details View** button is enabled only when a query takes longer than five seconds. 
-
-From this screen you can scroll in, out, and around the execution tree with the mouse to analyze all aspects of your query. You can navigate around the execution tree by dragging or by using the mini-map in the bottom right corner.
-
-.. image:: /_static/images/execution_tree_1.png
-
-You can also search for query data by pressing **Ctrl+F** or clicking the search icon |icon-search| in the search field in the top right corner and typing text.
-
-.. image:: /_static/images/search_field.png
-
-Pressing **Enter** takes you directly to the next result matching your search criteria, and pressing **Shift + Enter** takes you directly to the previous result. You can also search next and previous results using the up and down arrows.
-
-.. |icon-search| image:: /_static/images/studio_icon_search.png
-   :align: middle
-
-The nodes are color-coded based on the following:
-
-* **Slow nodes** - red
-* **In progress nodes** - yellow
-* **Completed nodes** - green
-* **Pending nodes** - white
-* **Currently selected node** - blue
-* **Search result node** - purple (in the mini-map)
-
-The execution tree displays the same information as shown in the plain view in tree format.
-
-The Execution Tree tracks each phase of your query in real time as a vertical tree of nodes. Each node refers to an operation that occurred on the GPU or CPU. When a phase is completed, the next branch begins to its right until the entire query is complete. Joins are displayed as two parallel branches merged together in a node called **Join**, as shown in the figure above. The nodes are connected by a line indicating the number of rows passed from one node to the next. The width of the line indicates the amount of rows on a logarithmic scale.
-
-Each node displays a number displaying its **node ID**, its **type**, **table name** (if relevant), **status**, and **runtime**. The nodes are color-coded for easy identification. Green nodes indicate **completed nodes**, yellow indicates **nodes in progress**, and red indicates **slowest nodes**, typically joins, as shown below:
-
-.. image:: /_static/images/nodes.png
-
-Viewing Query Statistics
-^^^^^^^^^^^^
-The following statistical information is displayed in the top left corner, as shown in the figure above:
-
-* **Query Statistics**:
-
-    * **Elapsed** - the total time taken for the query to complete.
-    * **Result rows** - the amount of rows fetched.
-    * **Running nodes completion**
-    * **Total query completion** - the amount of the total execution tree that was executed (nodes marked green).
-	
-* **Slowest Nodes** information is displayed in the top right corner in red text. Clicking the slowest node centers automatically on that node in the execution tree.
-
-You can also view the following **Node Statistics** in the top right corner for each individual node by clicking a node:
-
-.. list-table::
-   :widths: 45 160
-   :header-rows: 1  
-   
-   * - Element
-     - Description
-   * - Node type
-     - Shows the node type.
-   * - Status
-     - Shows the execution status.
-   * - Time
-     - The total time taken to execute.
-   * - Rows
-     - Shows the number of produced rows passed to the next node.
-   * - Chunks
-     - Shows number of produced chunks.
-   * - Average rows per chunk
-     - Shows the number of average rows per chunk.
-   * - Table (for **ReadTable** and joins only)
-     - Shows the table name.
-   * - Write (for joins only)
-     - Shows the total date size written to the disk.
-   * - Read (for **ReadTable** and joins only)
-     - Shows the total data size read from the disk.
-
-Note that you can scroll the Node Statistics table. You can also download the execution plan table in .csv format by clicking the download arrow |icon-download| in the upper-right corner.
-
-.. |icon-download| image:: /_static/images/studio_icon_download.png
-   :align: middle
-
-Using the Plain View
-^^^^^^^^^^^^
-You can use the **Plain View** instead of viewing the execution tree by clicking **Plain View** |icon-plain| in the top right corner. The plain view displays the same information as shown in the execution tree in table format.
-
-.. |icon-plain| image:: /_static/images/studio_icon_plain.png
-   :align: middle
-   
-
-
-
-The plain view lets you view a query’s execution plan for monitoring purposes and highlights rows based on how long they ran relative to the entire query.
-
-This can be seen in the **timeSum** column as follows:
-
-* **Rows highlighted red** - longest runtime
-* **Rows highlighted orange** - medium runtime
-* **Rows highlighted yellow** - shortest runtime
-
-:ref:`Back to Viewing Statement and Query Results from the Results Panel`
-
-.. _sql_view_5.4.3:
-
-Viewing Wrapped Strings in the SQL View
-------------------
-The SQL View panel allows you to more easily view certain queries, such as a long string that appears on one line. The SQL View makes it easier to see by wrapping it so that you can see the entire string at once. It also reformats and organizes query syntax entered in the Statement panel for more easily locating particular segments of your queries. The SQL View is identical to the **Format SQL** feature in the Toolbar, allowing you to retain your originally constructed query while viewing a more intuitively structured snapshot of it.
-
-.. _save_results_to_clipboard_5.4.3:
-
-:ref:`Back to Viewing Statement and Query Results from the Results Panel`
-
-:ref:`Back to Executing Statements and Running Queries from the Editor`
+.. _executing_statements_and_running_queries_from_the_editor:
+
+.. _editor_top_5.4.7:
+
+********************************************************
+Executing Statements and Running Queries from the Editor
+********************************************************
+The **Editor** is used for the following:
+
+* Selecting an active database and executing queries.
+* Performing statement-related operations and showing metadata.
+* Executing pre-defined queries.
+* Writing queries and statements and viewing query results.
+	 
+The following is a brief description of the Editor panels:
+
+
+.. list-table::
+   :widths: 10 34 56
+   :header-rows: 1  
+   
+   * - No.
+     - Element
+     - Description
+   * - 1
+     - :ref:`Toolbar`
+     - Used to select the active database you want to work on, limit the number of rows, save queries, and so on.
+   * - 2
+     - :ref:`Database Tree and System Queries panel`
+     - Shows a hierarchy tree of databases, views, tables, and columns
+   * - 3
+     - :ref:`Statement panel`
+     - Used for writing queries and statements
+   * - 4
+     - :ref:`Results panel`
+     - Shows query results and execution information.
+
+
+
+.. _top_5.4.7:
+
+.. _studio_5.4.7_editor_toolbar:
+
+Executing Statements from the Toolbar
+=====================================
+You can access the following from the Toolbar pane:
+
+* **Database dropdown list** - select a database that you want to run statements on.
+
+    ::
+
+* **Service dropdown list** - select a service that you want to run statements on. The options in the service dropdown menu depend on the database you select from the **Database** dropdown list.
+
+    ::
+
+* **Execute** - lets you set which statements to execute. The **Execute** button toggles between **Execute** and **Stop**, and can be used to stop an active statement before it completes:
+
+  * **Statements** - executes the statement at the location of the cursor.
+  * **Selected** - executes only the highlighted text. This mode should be used when executing subqueries or sections of large queries (as long as they are valid SQL).
+  * **All** - executes all statements in a selected tab.
+   
+* **Format SQL** - Lets you reformat and reindent statements.
+
+    ::
+
+* **Download query** - Lets you download query text to your computer.
+
+    ::
+
+* **Open query** - Lets you upload query text from your computer.
+
+    ::
+
+* **Max Rows** - By default, the Editor fetches only the first 10,000 rows. You can modify this number by selecting an option from the **Max Rows** dropdown list. Note that setting a higher number may slow down your browser if the result is very large. This number is limited to 100,000 results. To see a higher number, you can save the results in a file or a table using the :ref:`create_table_as` command.
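+
+For example, instead of fetching a very large result into the browser, you can persist it with ``CREATE TABLE AS`` (a sketch; the table name and filter are hypothetical):
+
+.. code-block:: sql
+
+   CREATE TABLE large_results AS
+     SELECT *
+     FROM transactions
+     WHERE amount > 1000;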
+
+
+For more information on stopping active statements, see the :ref:`STOP_STATEMENT` command.
+
+:ref:`Back to Executing Statements and Running Queries from the Editor`
+
+
+.. _studio_5.4.7_editor_db_tree:
+
+Performing Statement-Related Operations from the Database Tree
+==============================================================
+From the Database Tree, you can perform statement-related operations and show metadata (such as the number of rows in a table).
+
+
+
+
+
+The database object functions are used to perform the following:
+
+* The **SELECT** statement - copies the selected table's **columns** into the Statement panel as ``SELECT`` parameters.  
+
+   ::
+
+* The **copy** feature |icon-copy| - copies the selected table's **name** into the Statement panel. 
+
+   ::
+
+* The **additional operations** |icon-dots| - displays the following additional options:
+  
+
+.. |icon-user| image:: /_static/images/studio_icon_user.png
+   :align: middle
+   
+.. |icon-dots| image:: /_static/images/studio_icon_dots.png
+   :align: middle   
+   
+.. |icon-editor| image:: /_static/images/studio_icon_editor.png
+   :align: middle
+
+.. |icon-copy| image:: /_static/images/studio_icon_copy.png
+   :align: middle
+
+.. |icon-select| image:: /_static/images/studio_icon_select.png
+   :align: middle
+
+
+.. |icon-filter| image:: /_static/images/studio_icon_filter.png
+   :align: middle
+
+.. |icon-ddl-edit| image:: /_static/images/studio_icon_ddl_edit.png
+   :align: middle
+
+.. |icon-run-optimizer| image:: /_static/images/studio_icon_run_optimizer.png
+   :align: middle
+
+.. |icon-generate-create-statement| image:: /_static/images/studio_icon_generate_create_statement.png
+   :align: middle
+
+.. |icon-plus| image:: /_static/images/studio_icon_plus.png
+   :align: middle
+
+.. |icon-close| image:: /_static/images/studio_icon_close.png
+   :align: middle
+
+.. |icon-left| image:: /_static/images/studio_icon_left.png
+   :align: middle
+
+.. |icon-right| image:: /_static/images/studio_icon_right.png
+   :align: middle
+
+.. |icon-format-sql| image:: /_static/images/studio_icon_format.png
+   :align: middle
+
+.. |icon-download-query| image:: /_static/images/studio_icon_download_query.png
+   :align: middle
+
+.. |icon-open-query| image:: /_static/images/studio_icon_open_query.png
+   :align: middle
+
+.. |icon-execute| image:: /_static/images/studio_icon_execute.png
+   :align: middle
+
+.. |icon-stop| image:: /_static/images/studio_icon_stop.png
+   :align: middle
+
+.. |icon-dashboard| image:: /_static/images/studio_icon_dashboard.png
+   :align: middle
+
+.. |icon-expand| image:: /_static/images/studio_icon_expand.png
+   :align: middle
+
+.. |icon-scale| image:: /_static/images/studio_icon_scale.png
+   :align: middle
+
+.. |icon-expand-down| image:: /_static/images/studio_icon_expand_down.png
+   :align: middle
+
+.. |icon-add| image:: /_static/images/studio_icon_add.png
+   :align: middle
+
+.. |icon-add-worker| image:: /_static/images/studio_icon_add_worker.png
+   :align: middle
+
+.. |keep-tabs| image:: /_static/images/studio_keep_tabs.png
+   :align: middle
+
+
+.. list-table::
+   :widths: 30 70
+   :header-rows: 1   
+   
+   * - Function
+     - Description
+   * - Insert statement
+     - Generates an `INSERT `_ statement for the selected table in the editing area.
+   * - Delete statement
+     - Generates a `DELETE `_ statement for the selected table in the editing area.
+   * - Create Table As statement
+     - Generates a `CREATE TABLE AS `_ statement for the selected table in the editing area.	 
+   * - Rename statement
+     - Generates a `RENAME TABLE `_ statement for renaming the selected table in the editing area.
+   * - Adding column statement
+     - Generates an `ADD COLUMN `_ statement for adding columns to the selected table in the editing area.
+   * - Truncate table statement
+     - Generates a `TRUNCATE_IF_EXISTS `_ statement for the selected table in the editing area.
+   * - Drop table statement
+     - Generates a ``DROP`` statement for the selected object in the editing area.
+   * - Table DDL
+     - Generates a DDL statement for the selected object in the editing area. To get the entire database DDL, click the |icon-ddl-edit| icon next to the database name in the tree root. See `Seeing System Objects as DDL `_.
+   * - DDL Optimizer
+     - The `DDL Optimizer `_ analyzes database tables and recommends possible optimizations.
+
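+For example, for a hypothetical table named ``customers``, the generated **Insert statement** and **Create Table As statement** skeletons would look roughly like the following (the table and column names are illustrative only, not output copied from Studio):
+
+.. code-block:: sql
+
+   INSERT INTO customers (id, name, city) VALUES (?, ?, ?);
+
+   CREATE TABLE customers_copy AS
+     SELECT id, name, city FROM customers;
+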
+Optimizing Database Tables Using the DDL Optimizer
+--------------------------------------------------
+The **DDL Optimizer** tab analyzes database tables and recommends possible optimizations according to SQream's best practices.
+
+As described in the previous table, you can access the DDL Optimizer by clicking the **additional options icon** and selecting **DDL Optimizer**.
+
+The following table describes the DDL Optimizer screen:
+
+.. list-table::
+   :widths: 15 75
+   :header-rows: 1   
+   
+   * - Element
+     - Description
+   * - Column area
+     - Shows the column **names** and **column types** from the selected table. You can scroll down or to the right/left for long column lists.
+   * - Optimization area
+     - Shows the number of rows to sample as the basis for running an optimization (1,000,000 by default; this is also the overhead threshold used when analyzing ``TEXT`` fields), the default percent buffer to add to ``TEXT`` lengths (10%), and whether to attempt to determine field nullability.
+   * - Run Optimizer
+     - Starts the optimization process.
+
+Clicking **Run Optimizer** adds a tab to the Statement panel showing the optimized results of the selected object.
+
+For more information, see `Optimization and Best Practices `_.
+
+Executing Pre-Defined Queries from the System Queries Panel
+-----------------------------------------------------------
+The **System Queries** panel lets you execute predefined queries and includes the following system query types:
+
+* **Catalog queries** - Used for analyzing table compression rates, users and permissions, etc.
+
+* **Admin queries** - Queries useful for SQream database management.
+
+Clicking an item pastes the query into the Statement pane, and you can undo a previous operation by pressing **Ctrl + Z**.
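+Catalog queries of this kind typically read from SQream's ``sqream_catalog`` schema. For example, a minimal catalog query might look like the following (an illustrative sketch, not the panel's exact text):
+
+.. code-block:: sql
+
+   -- List the tables in the current database
+   SELECT table_name
+   FROM sqream_catalog.tables;
+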
+
+.. _studio_5.4.7_editor_statement_area:
+
+Writing Statements and Queries from the Statement Panel
+=======================================================
+The multi-tabbed statement area is used for writing queries and statements, and is used in tandem with the toolbar. When writing and executing statements, you must first select a database from the **Database** dropdown menu in the toolbar. When you execute a statement, it passes through a series of statuses until completed. Knowing the status helps you with statement maintenance, and the statuses are shown in the **Results panel**.
+
+The auto-complete feature assists you when writing statements by suggesting statement options.
+
+The following table shows the statement statuses:
+	 
+.. list-table::
+   :widths: 45 160
+   :header-rows: 1  
+   
+   * - Status
+     - Description
+   * - Pending
+     - The statement is pending.
+   * - In queue
+     - The statement is waiting for execution.
+   * - Initializing
+     - The statement has entered execution checks.
+   * - Executing
+     - The statement is executing.
+   * - Statement stopped
+     - The statement has been stopped.
+	 
+You can add and name new tabs for each statement that you need to execute, and Studio preserves your created tabs when you switch between databases. You can add a new tab by clicking |icon-plus|, which creates a tab to the right with a default name of **SQL** and an incrementing number. This helps you keep track of your statements.
+
+You can also rename the default tab name by double-clicking it and typing a new name, and you can write multiple statements in the same tab by separating them with semicolons (``;``). If more tabs are open than fit into the Statement panel, the tab arrows are displayed. You can scroll through the tabs by clicking |icon-left| or |icon-right|, and close tabs by clicking |icon-close|. You can also close all tabs at once by clicking **Close all**, located to the right of the tabs.
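+For example, the following statements, written in a single tab and separated by semicolons, execute in order (the table name is illustrative):
+
+.. code-block:: sql
+
+   CREATE TABLE t (x INT);
+   INSERT INTO t VALUES (1), (2), (3);
+   SELECT x FROM t;
+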
+
+.. tip:: If this is your first time using SQream, see `Getting Started `_.
+
+
+.. Keyboard shortcuts
+.. ^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. :kbd:`Ctrl` +: kbd:`Enter` - Execute all queries in the statement area, or just the highlighted part of the query.
+
+.. :kbd:`Ctrl` + :kbd:`Space` - Auto-complete the current keyword
+
+.. :kbd:`Ctrl` + :kbd:`↑` - Switch to next tab.
+
+.. :kbd:`Ctrl` + :kbd:`↓` - Switch to previous tab
+
+.. _studio_editor_results_5.4.7:
+
+:ref:`Back to Executing Statements and Running Queries from the Editor`
+
+.. _studio_5.4.7_editor_results:
+
+.. _results_panel_5.4.7:
+
+Viewing Statement and Query Results from the Results Panel
+==========================================================
+The results panel shows statement and query results. By default, only the first 10,000 results are returned, although you can modify this from the :ref:`studio_editor_toolbar`, as described above. By default, executing several statements together opens a separate results tab for each statement. Executing statements together executes them serially, and any failed statement cancels all subsequent executions.
+
+.. image:: /_static/images/results_panel.png
+
+The following is a brief description of the Results panel views highlighted in the figure above:
+
+.. list-table::
+   :widths: 45 160
+   :header-rows: 1  
+   
+   * - Element
+     - Description
+   * - :ref:`Results view`
+     - Lets you view search query results.
+   * - :ref:`Execution Details view`
+     - Lets you analyze your query for troubleshooting and optimization purposes.
+   * - :ref:`SQL view`
+     - Lets you see the SQL view.
+
+
+.. _results_view_5.4.7:
+
+:ref:`Back to Executing Statements and Running Queries from the Editor`
+	 
+Searching Query Results in the Results View
+-------------------------------------------
+The **Results view** lets you view search query results.
+
+From this view you can also do the following:
+
+* View the amount of time (in seconds) taken for a query to finish executing.
+* Switch and scroll between tabs.
+* Close all tabs at once.
+* Enable keeping tabs by selecting **Keep tabs**.
+* Sort column results.
+
+Saving Results to the Clipboard
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The **Save results to clipboard** function lets you save your results to the clipboard to paste into another text editor or into Excel for further analysis.
+
+.. _save_results_to_local_file_5.4.7:
+
+Saving Results to a Local File
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The **Save results to local file** function lets you save your search query results to a local file. Clicking **Save results to local file** downloads the contents of the Results panel as an Excel sheet. You can then copy and paste this content into other editors as needed.
+
+In the Results view you can also run parallel statements, as described in **Running Parallel Statements** below.
+
+.. _running_parallel_statements_5.4.7:
+
+Running Parallel Statements
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+While Studio's default functionality is to open a new tab for each executed statement, Studio supports running parallel statements in one statement tab. Running parallel statements requires using macros and is useful for advanced users.
+
+The following shows the syntax for running parallel statements:
+
+.. code-block:: none
+
+   @@ parallel
+   $$
+   select 1;
+   select 2;
+   select 3;
+   $$
+
+
+:ref:`Back to Viewing Statement and Query Results from the Results Panel`
+
+.. _execution_details_view_5.4.7:
+
+.. _execution_tree_5.4.7:
+
+Execution Details View
+----------------------
+The **Execution Details View** section describes the following:
+
+.. contents:: 
+   :local:
+   :depth: 1
+   
+Overview
+^^^^^^^^^^^^
+Clicking **Execution Details View** displays the **Execution Tree**, which is a chronological tree of processes that occurred to execute your queries. The purpose of the Execution Tree is to analyze all aspects of your query for troubleshooting and optimization purposes, such as resolving queries with an exceptionally long runtime.
+
+.. note::  The **Execution Details View** button is enabled only when a query takes longer than five seconds. 
+
+From this screen you can scroll in, out, and around the execution tree with the mouse to analyze all aspects of your query. You can navigate around the execution tree by dragging or by using the mini-map in the bottom right corner.
+
+.. image:: /_static/images/execution_tree_1.png
+
+You can also search for query data by pressing **Ctrl+F** or clicking the search icon |icon-search| in the search field in the top right corner and typing text.
+
+.. image:: /_static/images/search_field.png
+
+Pressing **Enter** takes you directly to the next result matching your search criteria, and pressing **Shift + Enter** takes you directly to the previous result. You can also search next and previous results using the up and down arrows.
+
+.. |icon-search| image:: /_static/images/studio_icon_search.png
+   :align: middle
+
+The nodes are color-coded based on the following:
+
+* **Slow nodes** - red
+* **In progress nodes** - yellow
+* **Completed nodes** - green
+* **Pending nodes** - white
+* **Currently selected node** - blue
+* **Search result node** - purple (in the mini-map)
+
+The execution tree displays the same information as shown in the plain view in tree format.
+
+The Execution Tree tracks each phase of your query in real time as a vertical tree of nodes. Each node refers to an operation that occurred on the GPU or CPU. When a phase is completed, the next branch begins to its right until the entire query is complete. Joins are displayed as two parallel branches merged together in a node called **Join**, as shown in the figure above. The nodes are connected by a line indicating the number of rows passed from one node to the next. The width of the line indicates the amount of rows on a logarithmic scale.
+
+Each node displays its **node ID**, **type**, **table name** (if relevant), **status**, and **runtime**. The nodes are color-coded for easy identification. Green indicates **completed nodes**, yellow indicates **nodes in progress**, and red indicates the **slowest nodes**, typically joins, as shown below:
+
+.. image:: /_static/images/nodes.png
+
+Viewing Query Statistics
+^^^^^^^^^^^^^^^^^^^^^^^^
+The following statistical information is displayed in the top left corner, as shown in the figure above:
+
+* **Query Statistics**:
+
+    * **Elapsed** - the total time taken for the query to complete.
+    * **Result rows** - the number of rows fetched.
+    * **Running nodes completion**
+    * **Total query completion** - the portion of the total execution tree that was executed (nodes marked green).
+	
+* **Slowest Nodes** information is displayed in the top right corner in red text. Clicking the slowest node centers automatically on that node in the execution tree.
+
+You can also view the following **Node Statistics** in the top right corner for each individual node by clicking a node:
+
+.. list-table::
+   :widths: 45 160
+   :header-rows: 1  
+   
+   * - Element
+     - Description
+   * - Node type
+     - Shows the node type.
+   * - Status
+     - Shows the execution status.
+   * - Time
+     - The total time taken to execute.
+   * - Rows
+     - Shows the number of produced rows passed to the next node.
+   * - Chunks
+     - Shows the number of produced chunks.
+   * - Average rows per chunk
+     - Shows the average number of rows per chunk.
+   * - Table (for **ReadTable** and joins only)
+     - Shows the table name.
+   * - Write (for joins only)
+     - Shows the total data size written to the disk.
+   * - Read (for **ReadTable** and joins only)
+     - Shows the total data size read from the disk.
+
+Note that you can scroll the Node Statistics table. You can also download the execution plan table in .csv format by clicking the download arrow |icon-download| in the upper-right corner.
+
+.. |icon-download| image:: /_static/images/studio_icon_download.png
+   :align: middle
+
+Using the Plain View
+^^^^^^^^^^^^^^^^^^^^
+You can use the **Plain View** instead of viewing the execution tree by clicking **Plain View** |icon-plain| in the top right corner. The plain view displays the same information as shown in the execution tree in table format.
+
+.. |icon-plain| image:: /_static/images/studio_icon_plain.png
+   :align: middle
+   
+
+
+
+The plain view lets you view a query’s execution plan for monitoring purposes and highlights rows based on how long they ran relative to the entire query.
+
+This can be seen in the **timeSum** column as follows:
+
+* **Rows highlighted red** - longest runtime
+* **Rows highlighted orange** - medium runtime
+* **Rows highlighted yellow** - shortest runtime
+
+:ref:`Back to Viewing Statement and Query Results from the Results Panel`
+
+.. _sql_view_5.4.7:
+
+Viewing Wrapped Strings in the SQL View
+---------------------------------------
+The SQL View panel makes certain queries easier to read, such as a long string that would otherwise appear on a single line, by wrapping it so that you can see the entire string at once. It also reformats and organizes query syntax entered in the Statement panel, making it easier to locate particular segments of your queries. The SQL View is identical to the **Format SQL** feature in the toolbar, but lets you retain your originally constructed query while viewing a more intuitively structured snapshot of it.
+
+.. _save_results_to_clipboard_5.4.7:
+
+:ref:`Back to Viewing Statement and Query Results from the Results Panel`
+
+:ref:`Back to Executing Statements and Running Queries from the Editor`
diff --git a/sqream_studio_5.4.3/getting_started.rst b/sqream_studio_5.4.7/getting_started.rst
similarity index 84%
rename from sqream_studio_5.4.3/getting_started.rst
rename to sqream_studio_5.4.7/getting_started.rst
index 3b9644cdc..21d9348c0 100644
--- a/sqream_studio_5.4.3/getting_started.rst
+++ b/sqream_studio_5.4.7/getting_started.rst
@@ -1,61 +1,61 @@
-.. _getting_started:
-
-****************************
-Getting Started with SQream Acceleration Studio 5.4.3
-****************************
-Setting Up and Starting Studio
-----------------
-Studio is included with all `dockerized installations of SQream DB `_. When starting Studio, it listens on the local machine on port 8080.
-
-Logging In to Studio
----------------
-**To log in to SQream Studio:**
-
-1. Open a browser to the host on **port 8080**.
-
-   For example, if your machine IP address is ``192.168.0.100``, insert the IP address into the browser as shown below:
-
-   .. code-block:: console
-
-      $ http://192.168.0.100:8080
-
-2. Fill in your SQream DB login credentials. These are the same credentials used for :ref:`sqream sql` or JDBC.
-
-   When you sign in, the License Warning is displayed.
-   
-Navigating Studio's Main Features
--------------
-When you log in, you are automatically taken to the **Editor** screen. The Studio's main functions are displayed in the **Navigation** pane on the left side of the screen.
-
-From here you can navigate between the main areas of the Studio:
-
-.. list-table::
-   :widths: 10 90
-   :header-rows: 1   
-   
-   * - Element
-     - Description
-   * - :ref:`Dashboard`
-     - Lets you monitor system health and manage queues and workers.
-   * - :ref:`Editor`
-     - Lets you select databases, perform statement operations, and write and execute queries.   
-   * - :ref:`Logs`
-     - Lets you view usage logs.
-   * - :ref:`Roles`
-     - Lets you create users and manage user permissions.
-   * - :ref:`Configuration`
-     - Lets you configure your instance of SQream.
-
-By clicking the user icon, you can also use it for logging out and viewing the following:
-
-* User information
-* Connection type
-* SQream version
-* SQream Studio version
-* License expiration date
-* License storage capacity
-* Log out
-
-.. _back_to_dashboard_5.4.3:
-
-.. _studio_dashboard_5.4.3:
+.. _getting_started:
+
+*****************************************************
+Getting Started with SQream Acceleration Studio 5.4.7
+*****************************************************
+
+Setting Up and Starting Studio
+------------------------------
+Studio is included with all `dockerized installations of SQream DB `_. When starting Studio, it listens on the local machine on port 8080.
+
+Logging In to Studio
+--------------------
+**To log in to SQream Studio:**
+
+1. Open a browser to the host on **port 8080**.
+
+   For example, if your machine IP address is ``192.168.0.100``, insert the IP address into the browser as shown below:
+
+   .. code-block:: none
+
+      http://192.168.0.100:8080
+
+2. Fill in your SQream DB login credentials. These are the same credentials used for :ref:`sqream sql` or JDBC.
+
+   When you sign in, the License Warning is displayed.
+   
+Navigating Studio's Main Features
+---------------------------------
+When you log in, you are automatically taken to the **Editor** screen. The Studio's main functions are displayed in the **Navigation** pane on the left side of the screen.
+
+From here you can navigate between the main areas of the Studio:
+
+.. list-table::
+   :widths: 10 90
+   :header-rows: 1   
+   
+   * - Element
+     - Description
+   * - :ref:`Dashboard`
+     - Lets you monitor system health and manage queues and workers.
+   * - :ref:`Editor`
+     - Lets you select databases, perform statement operations, and write and execute queries.   
+   * - :ref:`Logs`
+     - Lets you view usage logs.
+   * - :ref:`Roles`
+     - Lets you create users and manage user permissions.
+   * - :ref:`Configuration`
+     - Lets you configure your instance of SQream.
+
+By clicking the user icon, you can log out and view the following:
+
+* User information
+* Connection type
+* SQream version
+* SQream Studio version
+* License expiration date
+* License storage capacity
+* Log out
+
+.. _back_to_dashboard_5.4.7:
+
+.. _studio_dashboard_5.4.7:
diff --git a/sqream_studio_5.4.3/index.rst b/sqream_studio_5.4.7/index.rst
similarity index 52%
rename from sqream_studio_5.4.3/index.rst
rename to sqream_studio_5.4.7/index.rst
index ac607b121..17c7ae05c 100644
--- a/sqream_studio_5.4.3/index.rst
+++ b/sqream_studio_5.4.7/index.rst
@@ -1,19 +1,19 @@
-.. _sqream_studio_5.4.3:
-
-**********************************
-SQream Acceleration Studio 5.4.3
-**********************************
-The SQream Acceleration Studio is a web-based client for use with SQream. Studio provides users with all functionality available from the command line in an intuitive and easy-to-use format. This includes running statements, managing roles and permissions, and managing SQream clusters.
-
-This section describes how to use the SQream Accleration Studio version 5.4.3:
-
-.. toctree::
-   :maxdepth: 1
-   :glob:
-
-   getting_started
-   monitoring_workers_and_services_from_the_dashboard
-   executing_statements_and_running_queries_from_the_editor
-   viewing_logs
-   creating_assigning_and_managing_roles_and_permissions
+.. _sqream_studio_5.4.7:
+
+**********************************
+SQream Acceleration Studio 5.4.7
+**********************************
+The SQream Acceleration Studio 5.4.7 is a web-based client for use with SQream. Studio provides users with all functionality available from the command line in an intuitive and easy-to-use format. This includes running statements, managing roles and permissions, and managing SQream clusters.
+
+This section describes how to use the SQream Acceleration Studio version 5.4.7:
+
+.. toctree::
+   :maxdepth: 1
+   :glob:
+
+   getting_started
+   monitoring_workers_and_services_from_the_dashboard
+   executing_statements_and_running_queries_from_the_editor
+   viewing_logs
+   creating_assigning_and_managing_roles_and_permissions
    configuring_your_instance_of_sqream
\ No newline at end of file
diff --git a/sqream_studio_5.4.3/monitoring_workers_and_services_from_the_dashboard.rst b/sqream_studio_5.4.7/monitoring_workers_and_services_from_the_dashboard.rst
similarity index 83%
rename from sqream_studio_5.4.3/monitoring_workers_and_services_from_the_dashboard.rst
rename to sqream_studio_5.4.7/monitoring_workers_and_services_from_the_dashboard.rst
index e30962f37..4283f64a8 100644
--- a/sqream_studio_5.4.3/monitoring_workers_and_services_from_the_dashboard.rst
+++ b/sqream_studio_5.4.7/monitoring_workers_and_services_from_the_dashboard.rst
@@ -1,265 +1,265 @@
-.. _monitoring_workers_and_services_from_the_dashboard:
-
-.. _back_to_dashboard_5.4.3:
-
-****************************
-Monitoring Workers and Services from the Dashboard
-****************************
-The **Dashboard** is used for the following:
-
-* Monitoring system health.
-* Viewing, monitoring, and adding defined service queues.
-* Viewing and managing worker status and add workers.
-
-The following is an image of the Dashboard:
-
-.. image:: /_static/images/dashboard.png
-
-You can only access the Dashboard if you signed in with a ``SUPERUSER`` role.
-
-The following is a brief description of the Dashboard panels:
-
-.. list-table::
-   :widths: 10 25 65
-   :header-rows: 1  
-   
-   * - No.
-     - Element
-     - Description
-   * - 1
-     - :ref:`Services panel`
-     - Used for viewing and monitoring the defined service queues.
-   * - 2
-     - :ref:`Workers panel`
-     - Monitors system health and shows each Sqreamd worker running in the cluster.
-   * - 3
-     - :ref:`License information`
-     - Shows the remaining amount of days left on your license.
-   
-
-.. _data_storage_panel_5.4.3:
-
-
-
-:ref:`Back to Monitoring Workers and Services from the Dashboard`
-
-.. _services_panel_5.4.3:
-
-Subscribing to Workers from the Services Panel
---------------------------
-Services are used to categorize and associate (also known as **subscribing**) workers to particular services. The **Service** panel is used for viewing, monitoring, and adding defined `service queues `_.
-
-
-
-The following is a brief description of each pane:
-	 
-.. list-table::
-   :widths: 10 90
-   :header-rows: 1  
-   
-   * - No.
-     - Description
-   * - 1
-     - Adds a worker to the selected service.
-   * - 2
-     - Shows the service name.
-   * - 3
-     - Shows a trend graph of queued statements loaded over time.
-   * - 4
-     - Adds a service.
-   * - 5
-     - Shows the currently processed queries belonging to the service/total queries for that service in the system (including queued queries).	 
-
-Adding A Service
-^^^^^^^^^^^^^^^^^^^^^	 
-You can add a service by clicking **+ Add** and defining the service name.
-
-.. note:: If you do not associate a worker with the new service, it will not be created.
-
-You can manage workers from the **Workers** panel. For more information about managing workers, see the following:
-
-* :ref:`Managing Workers from the Workers Panel`
-* `Workers `_
-
-:ref:`Back to Monitoring Workers and Services from the Dashboard`
-
-.. _workers_panel_5.4.3:
-
-Managing Workers from the Workers Panel
-------------
-From the **Workers** panel you can do the following:
-
-* :ref:`View workers `
-* :ref:`Add a worker to a service`
-* :ref:`View a worker's active query information`
-* :ref:`View a worker's execution plan`
-
-.. _view_workers_5.4.3:
-
-Viewing Workers
-^^^^^^^^
-The **Worker** panel shows each worker (``sqreamd``) running in the cluster. Each worker has a status bar that represents the status over time. The status bar is divided into 20 equal segments, showing the most dominant activity in that segment.
-	 
-From the **Scale** dropdown menu you can set the time scale of the displayed information
-You can hover over segments in the status bar to see the date and time corresponding to each activity type:
-
-* **Idle** – the worker is idle and available for statements.
-* **Compiling** – the worker is compiling a statement and is preparing for execution.
-* **Executing** – the worker is executing a statement after compilation.
-* **Stopped** – the worker was stopped (either deliberately or due to an error).
-* **Waiting** – the worker was waiting on an object locked by another worker.
-
-.. _add_worker_to_service_5.4.3:
-
-Adding A Worker to A Service
-^^^^^^^^^^^^^^^^^^^^^	 
-You can add a worker to a service by clicking the **add** button. 
-
-
-
-Clicking the **add** button shows the selected service's workers. You can add the selected worker to the service by clicking **Add Worker**. Adding a worker to a service does not break associations already made between that worker and other services.
-
-
-.. _view_worker_query_information_5.4.3:
-
-Viewing A Worker's Active Query Information
-^^^^^^^^^^^^^^^^^^^^^	 
-You can view a worker's active query information by clicking **Queries**, which displays them in the selected service.
-
-
-Each statement shows the **query ID**, **status**, **service queue**, **elapsed time**, **execution time**, and **estimated completion status**. In addition, each statement can be stopped or expanded to show its execution plan and progress. For more information on viewing a statement's execution plan and progress, see :ref:`Viewing a Worker's Execution Plan ` below.
-
-Viewing A Worker's Host Utilization
-^^^^^^^^^^^^^^^^^^^^^	 
-
-While viewing a worker's query information, clicking the **down arrow** expands to show the host resource utilization.
-
-
-
-The graphs show the resource utilization trends over time, and the **CPU memory** and **utilization** and the **GPU utilization** values on the right. You can hover over the graph to see more information about the activity at any point on the graph.
-
-Error notifications related to statements are displayed, and you can hover over them for more information about the error. 
-
-
-.. _view_worker_execution_plan_5.4.3:
-
-Viewing a Worker's Execution Plan
-^^^^^^^^^^^^^^^^^^^^^
-	 
-Clicking the ellipsis in a service shows the following additional options:
-
-* **Stop Query** - stops the query.
-* **Show Execution Plan** - shows the execution plan as a table. The columns in the **Show Execution Plan** table can be sorted.
-
-For more information on the current query plan, see `SHOW_NODE_INFO `_. For more information on checking active sessions across the cluster, see `SHOW_SERVER_STATUS `_.
-
-.. include:: /reference/sql/sql_statements/monitoring_commands/show_server_status.rst
-   :start-line: 67
-   :end-line: 84
-
-Managing Worker Status
-^^^^^^^^^^^^^^^^^^^^^
-
-In some cases you may want to stop or restart workers for maintenance purposes. Each Worker line has a :kbd:`⋮` menu used for stopping, starting, or restarting workers.
-
-
-Starting or restarting workers terminates all queries related to that worker. When you stop a worker, its background turns gray.
-
-
-
-
-.. |icon-user| image:: /_static/images/studio_icon_user.png
-   :align: middle
-   
-.. |icon-dots| image:: /_static/images/studio_icon_dots.png
-   :align: middle   
-   
-.. |icon-editor| image:: /_static/images/studio_icon_editor.png
-   :align: middle
-
-.. |icon-copy| image:: /_static/images/studio_icon_copy.png
-   :align: middle
-
-.. |icon-select| image:: /_static/images/studio_icon_select.png
-   :align: middle
-
-.. |icon-dots| image:: /_static/images/studio_icon_dots.png
-   :align: middle
-
-.. |icon-filter| image:: /_static/images/studio_icon_filter.png
-   :align: middle
-
-.. |icon-ddl-edit| image:: /_static/images/studio_icon_ddl_edit.png
-   :align: middle
-
-.. |icon-run-optimizer| image:: /_static/images/studio_icon_run_optimizer.png
-   :align: middle
-
-.. |icon-generate-create-statement| image:: /_static/images/studio_icon_generate_create_statement.png
-   :align: middle
-
-.. |icon-plus| image:: /_static/images/studio_icon_plus.png
-   :align: middle
-
-.. |icon-close| image:: /_static/images/studio_icon_close.png
-   :align: middle
-
-.. |icon-left| image:: /_static/images/studio_icon_left.png
-   :align: middle
-
-.. |icon-right| image:: /_static/images/studio_icon_right.png
-   :align: middle
-
-.. |icon-format-sql| image:: /_static/images/studio_icon_format.png
-   :align: middle
-
-.. |icon-download-query| image:: /_static/images/studio_icon_download_query.png
-   :align: middle
-
-.. |icon-open-query| image:: /_static/images/studio_icon_open_query.png
-   :align: middle
-
-.. |icon-execute| image:: /_static/images/studio_icon_execute.png
-   :align: middle
-
-.. |icon-stop| image:: /_static/images/studio_icon_stop.png
-   :align: middle
-
-.. |icon-dashboard| image:: /_static/images/studio_icon_dashboard.png
-   :align: middle
-
-.. |icon-expand| image:: /_static/images/studio_icon_expand.png
-   :align: middle
-
-.. |icon-scale| image:: /_static/images/studio_icon_scale.png
-   :align: middle
-
-.. |icon-expand-down| image:: /_static/images/studio_icon_expand_down.png
-   :align: middle
-
-.. |icon-add| image:: /_static/images/studio_icon_add.png
-   :align: middle
-
-.. |icon-add-worker| image:: /_static/images/studio_icon_add_worker.png
-   :align: middle
-
-.. |keep-tabs| image:: /_static/images/studio_keep_tabs.png
-   :align: middle
-   
-:ref:`Back to Monitoring Workers and Services from the Dashboard`
-
-
-
-.. _license_information_5.4.3:
-   
-License Information
-----------------------
-The license information section shows the following:
-
- * The amount of time in days remaining on the license.
- * The license storage capacity.
- 
-.. image:: /_static/images/license_storage_capacity.png
-
- 
-:ref:`Back to Monitoring Workers and Services from the Dashboard`
+.. _monitoring_workers_and_services_from_the_dashboard:
+
+.. _back_to_dashboard_5.4.7:
+
+**************************************************
+Monitoring Workers and Services from the Dashboard
+**************************************************
+The **Dashboard** is used for the following:
+
+* Monitoring system health.
+* Viewing, monitoring, and adding defined service queues.
+* Viewing and managing worker status and adding workers.
+
+The following is an image of the Dashboard:
+
+.. image:: /_static/images/dashboard.png
+
+You can only access the Dashboard if you are signed in with a ``SUPERUSER`` role.
+
+The following is a brief description of the Dashboard panels:
+
+.. list-table::
+   :widths: 10 25 65
+   :header-rows: 1  
+   
+   * - No.
+     - Element
+     - Description
+   * - 1
+     - :ref:`Services panel`
+     - Used for viewing and monitoring the defined service queues.
+   * - 2
+     - :ref:`Workers panel`
+     - Monitors system health and shows each ``sqreamd`` worker running in the cluster.
+   * - 3
+     - :ref:`License information`
+     - Shows the number of days remaining on your license.
+   
+
+.. _data_storage_panel_:
+
+
+
+:ref:`Back to Monitoring Workers and Services from the Dashboard`
+
+.. _services_panel_:
+
+Subscribing to Workers from the Services Panel
+----------------------------------------------
+Services are used to categorize workers by associating (also known as **subscribing**) them with particular services. The **Services** panel is used for viewing, monitoring, and adding defined `service queues `_.
+
+
+
+The following is a brief description of each pane:
+	 
+.. list-table::
+   :widths: 10 90
+   :header-rows: 1  
+   
+   * - No.
+     - Description
+   * - 1
+     - Adds a worker to the selected service.
+   * - 2
+     - Shows the service name.
+   * - 3
+     - Shows a trend graph of queued statements loaded over time.
+   * - 4
+     - Adds a service.
+   * - 5
+     - Shows the number of queries currently being processed by the service out of the total queries for that service in the system (including queued queries).
+
+Adding a Service
+^^^^^^^^^^^^^^^^
+You can add a service by clicking **+ Add** and defining the service name.
+
+.. note:: If you do not associate a worker with the new service, it will not be created.
+
+You can manage workers from the **Workers** panel. For more information about managing workers, see the following:
+
+* :ref:`Managing Workers from the Workers Panel`
+* `Workers `_
+
+:ref:`Back to Monitoring Workers and Services from the Dashboard`
+
+.. _workers_panel_:
+
+Managing Workers from the Workers Panel
+---------------------------------------
+From the **Workers** panel you can do the following:
+
+* :ref:`View workers `
+* :ref:`Add a worker to a service`
+* :ref:`View a worker's active query information`
+* :ref:`View a worker's execution plan`
+
+.. _view_workers_:
+
+Viewing Workers
+^^^^^^^^^^^^^^^
+The **Worker** panel shows each worker (``sqreamd``) running in the cluster. Each worker has a status bar that represents the status over time. The status bar is divided into 20 equal segments, showing the most dominant activity in that segment.
+	 
+From the **Scale** dropdown menu you can set the time scale of the displayed information.
+You can hover over segments in the status bar to see the date and time corresponding to each activity type:
+
+* **Idle** – the worker is idle and available for statements.
+* **Compiling** – the worker is compiling a statement and is preparing for execution.
+* **Executing** – the worker is executing a statement after compilation.
+* **Stopped** – the worker was stopped (either deliberately or due to an error).
+* **Waiting** – the worker was waiting on an object locked by another worker.
+
+.. _add_worker_to_service_:
+
+Adding a Worker to a Service
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+You can add a worker to a service by clicking the **add** button. 
+
+
+
+Clicking the **add** button shows the selected service's workers. You can add the selected worker to the service by clicking **Add Worker**. Adding a worker to a service does not break associations already made between that worker and other services.
+
+
+.. _view_worker_query_information_:
+
+Viewing a Worker's Active Query Information
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+You can view a worker's active query information by clicking **Queries**, which displays the active queries in the selected service.
+
+
+Each statement shows the **query ID**, **status**, **service queue**, **elapsed time**, **execution time**, and **estimated completion status**. In addition, each statement can be stopped or expanded to show its execution plan and progress. For more information on viewing a statement's execution plan and progress, see :ref:`Viewing a Worker's Execution Plan ` below.
+
+Viewing a Worker's Host Utilization
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+While viewing a worker's query information, clicking the **down arrow** expands the view to show the host's resource utilization.
+
+
+
+The graphs show the resource utilization trends over time, with the **CPU memory and utilization** and the **GPU utilization** values shown on the right. You can hover over the graph to see more information about the activity at any point on the graph.
+
+Error notifications related to statements are displayed, and you can hover over them for more information about the error. 
+
+
+.. _view_worker_execution_plan_:
+
+Viewing a Worker's Execution Plan
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+	 
+Clicking the ellipsis in a service shows the following additional options:
+
+* **Stop Query** - stops the query.
+* **Show Execution Plan** - shows the execution plan as a table. The columns in the **Show Execution Plan** table can be sorted.
+
+For more information on the current query plan, see `SHOW_NODE_INFO `_. For more information on checking active sessions across the cluster, see `SHOW_SERVER_STATUS `_.
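+
+The same information shown in Studio is also available directly from a SQream DB worker through the utility commands referenced above. A minimal sketch (the statement ID ``128`` is illustrative only):
+
+.. code-block:: psql
+
+   -- Check active sessions and statements across the cluster
+   SELECT show_server_status();
+
+   -- Show the execution plan of a running statement by its statement ID
+   SELECT show_node_info(128);
+
+   -- Stop a running statement (the equivalent of Stop Query in Studio)
+   SELECT stop_statement(128);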
+
+.. include:: /reference/sql/sql_statements/monitoring_commands/show_server_status.rst
+   :start-line: 67
+   :end-line: 84
+
+Managing Worker Status
+^^^^^^^^^^^^^^^^^^^^^^
+
+In some cases you may want to stop or restart workers for maintenance purposes. Each Worker line has a :kbd:`⋮` menu used for stopping, starting, or restarting workers.
+
+
+Stopping or restarting a worker terminates all queries running on that worker. When you stop a worker, its background turns gray.
+
+
+
+
+.. |icon-user| image:: /_static/images/studio_icon_user.png
+   :align: middle
+   
+.. |icon-dots| image:: /_static/images/studio_icon_dots.png
+   :align: middle   
+   
+.. |icon-editor| image:: /_static/images/studio_icon_editor.png
+   :align: middle
+
+.. |icon-copy| image:: /_static/images/studio_icon_copy.png
+   :align: middle
+
+.. |icon-select| image:: /_static/images/studio_icon_select.png
+   :align: middle
+
+
+.. |icon-filter| image:: /_static/images/studio_icon_filter.png
+   :align: middle
+
+.. |icon-ddl-edit| image:: /_static/images/studio_icon_ddl_edit.png
+   :align: middle
+
+.. |icon-run-optimizer| image:: /_static/images/studio_icon_run_optimizer.png
+   :align: middle
+
+.. |icon-generate-create-statement| image:: /_static/images/studio_icon_generate_create_statement.png
+   :align: middle
+
+.. |icon-plus| image:: /_static/images/studio_icon_plus.png
+   :align: middle
+
+.. |icon-close| image:: /_static/images/studio_icon_close.png
+   :align: middle
+
+.. |icon-left| image:: /_static/images/studio_icon_left.png
+   :align: middle
+
+.. |icon-right| image:: /_static/images/studio_icon_right.png
+   :align: middle
+
+.. |icon-format-sql| image:: /_static/images/studio_icon_format.png
+   :align: middle
+
+.. |icon-download-query| image:: /_static/images/studio_icon_download_query.png
+   :align: middle
+
+.. |icon-open-query| image:: /_static/images/studio_icon_open_query.png
+   :align: middle
+
+.. |icon-execute| image:: /_static/images/studio_icon_execute.png
+   :align: middle
+
+.. |icon-stop| image:: /_static/images/studio_icon_stop.png
+   :align: middle
+
+.. |icon-dashboard| image:: /_static/images/studio_icon_dashboard.png
+   :align: middle
+
+.. |icon-expand| image:: /_static/images/studio_icon_expand.png
+   :align: middle
+
+.. |icon-scale| image:: /_static/images/studio_icon_scale.png
+   :align: middle
+
+.. |icon-expand-down| image:: /_static/images/studio_icon_expand_down.png
+   :align: middle
+
+.. |icon-add| image:: /_static/images/studio_icon_add.png
+   :align: middle
+
+.. |icon-add-worker| image:: /_static/images/studio_icon_add_worker.png
+   :align: middle
+
+.. |keep-tabs| image:: /_static/images/studio_keep_tabs.png
+   :align: middle
+   
+:ref:`Back to Monitoring Workers and Services from the Dashboard`
+
+
+
+.. _license_information_:
+   
+License Information
+----------------------
+The license information section shows the following:
+
+* The number of days remaining on the license.
+* The license storage capacity.
+ 
+.. image:: /_static/images/license_storage_capacity.png
+
+ 
+:ref:`Back to Monitoring Workers and Services from the Dashboard`
diff --git a/sqream_studio_5.4.3/viewing_logs.rst b/sqream_studio_5.4.7/viewing_logs.rst
similarity index 85%
rename from sqream_studio_5.4.3/viewing_logs.rst
rename to sqream_studio_5.4.7/viewing_logs.rst
index 0a8350a45..c4e4b73a3 100644
--- a/sqream_studio_5.4.3/viewing_logs.rst
+++ b/sqream_studio_5.4.7/viewing_logs.rst
@@ -1,122 +1,122 @@
-.. _viewing_logs:
-
-.. _logs_top_5.4.3:
-
-****************************
-Viewing Logs
-****************************
-The **Logs** screen is used for viewing logs and includes the following elements:
-
-.. list-table::
-   :widths: 15 75
-   :header-rows: 1   
-   
-   * - Element
-     - Description
-   * - :ref:`Filter area`
-     - Lets you filter the data shown in the table. 
-   * - :ref:`Query tab`
-     - Shows basic query information logs, such as query number and the time the query was run. 
-   * - :ref:`Session tab`
-     - Shows basic session information logs, such as session ID and user name.
-   * - :ref:`System tab`
-     - Shows all system logs.
-   * - :ref:`Log lines tab`
-     - Shows the total amount of log lines.
-
-
-.. _filter_5.4.3:
-
-Filtering Table Data
--------------
-From the Logs tab, from the **FILTERS** area you can also apply the **TIMESPAN**, **ONLY ERRORS**, and additional filters (**Add**). The **Timespan** filter lets you select a timespan. The **Only Errors** toggle button lets you show all queries, or only queries that generated errors. The **Add** button lets you add additional filters to the data shown in the table. The **Filter** button applies the selected filter(s).
-
-Other filters require you to select an item from a dropdown menu:
-
-* INFO
-* WARNING
-* ERROR
-* FATAL
-* SYSTEM
-
-You can also export a record of all of your currently filtered logs in Excel format by clicking **Download** located above the Filter area.
-
-.. _queries_5.4.3:
-
-:ref:`Back to Viewing Logs`
-
-
-Viewing Query Logs
-----------
-The **QUERIES** log area shows basic query information, such as query number and the time the query was run. The number next to the title indicates the amount of queries that have been run.
-
-From the Queries area you can see and sort by the following:
-
-* Query ID
-* Start time
-* Query
-* Compilation duration
-* Execution duration
-* Total duration
-* Details (execution details, error details, successful query details)
-
-In the Queries table, you can click on the **Statement ID** and **Query** items to set them as your filters. In the **Details** column you can also access additional details by clicking one of the **Details** options for a more detailed explanation of the query.
-
-:ref:`Back to Viewing Logs`
-
-.. _sessions_5.4.3:
-
-Viewing Session Logs
-----------
-The **SESSIONS** tab shows the sessions log table and is used for viewing activity that has occurred during your sessions. The number at the top indicates the amount of sessions that have occurred.
-
-From here you can see and sort by the following:
-
-* Timestamp
-* Connection ID
-* Username
-* Client IP
-* Login (Success or Failed)
-* Duration (of session)
-* Configuration Changes
-
-In the Sessions table, you can click on the **Timestamp**, **Connection ID**, and **Username** items to set them as your filters.
-
-:ref:`Back to Viewing Logs`
-
-.. _system_5.4.3:
-
-Viewing System Logs
-----------
-The **SYSTEM** tab shows the system log table and is used for viewing all system logs. The number at the top indicates the amount of sessions that have occurred. Because system logs occur less frequently than queries and sessions, you may need to increase the filter timespan for the table to display any system logs.
-
-From here you can see and sort by the following:
-
-* Timestamp
-* Log type
-* Message
-
-In the Systems table, you can click on the **Timestamp** and **Log type** items to set them as your filters. In the **Message** column, you can also click on an item to show more information about the message.
-
-:ref:`Back to Viewing Logs`
-
-.. _log_lines_5.4.3:
-
-Viewing All Log Lines
-----------
-The **LOG LINES** tab is used for viewing the total amount of log lines in a table. From here users can view a more granular breakdown of log information collected by Studio. The other tabs (QUERIES, SESSIONS, and SYSTEM) show a filtered form of the raw log lines. For example, the QUERIES tab shows an aggregation of several log lines.
-
-From here you can see and sort by the following:
-
-* Timestamp
-* Message level
-* Worker hostname
-* Worker port
-* Connection ID
-* Database name
-* User name
-* Statement ID
-
-In the **LOG LINES** table, you can click on any of the items to set them as your filters.
-
-:ref:`Back to Viewing Logs`
\ No newline at end of file
+.. _viewing_logs:
+
+.. _logs_top_5.4.7:
+
+****************************
+Viewing Logs
+****************************
+The **Logs** screen is used for viewing logs and includes the following elements:
+
+.. list-table::
+   :widths: 15 75
+   :header-rows: 1   
+   
+   * - Element
+     - Description
+   * - :ref:`Filter area`
+     - Lets you filter the data shown in the table. 
+   * - :ref:`Query tab`
+     - Shows basic query information logs, such as query number and the time the query was run. 
+   * - :ref:`Session tab`
+     - Shows basic session information logs, such as session ID and user name.
+   * - :ref:`System tab`
+     - Shows all system logs.
+   * - :ref:`Log lines tab`
+     - Shows the total number of log lines.
+
+
+.. _filter_5.4.7:
+
+Filtering Table Data
+--------------------
+From the **FILTERS** area of the Logs tab, you can apply the **TIMESPAN** and **ONLY ERRORS** filters, as well as additional filters (**Add**). The **Timespan** filter lets you select a timespan. The **Only Errors** toggle button lets you show all queries, or only queries that generated errors. The **Add** button lets you add additional filters to the data shown in the table. The **Filter** button applies the selected filter(s).
+
+Other filters require you to select an item from a dropdown menu:
+
+* INFO
+* WARNING
+* ERROR
+* FATAL
+* SYSTEM
+
+You can also export a record of all of your currently filtered logs in Excel format by clicking **Download** located above the Filter area.
+
+.. _queries_5.4.7:
+
+:ref:`Back to Viewing Logs`
+
+
+Viewing Query Logs
+------------------
+The **QUERIES** log area shows basic query information, such as query number and the time the query was run. The number next to the title indicates the number of queries that have been run.
+
+From the Queries area you can see and sort by the following:
+
+* Query ID
+* Start time
+* Query
+* Compilation duration
+* Execution duration
+* Total duration
+* Details (execution details, error details, successful query details)
+
+In the Queries table, you can click on the **Statement ID** and **Query** items to set them as your filters. In the **Details** column you can also access additional details by clicking one of the **Details** options for a more detailed explanation of the query.
+
+:ref:`Back to Viewing Logs`
+
+.. _sessions_5.4.7:
+
+Viewing Session Logs
+--------------------
+The **SESSIONS** tab shows the sessions log table and is used for viewing activity that has occurred during your sessions. The number at the top indicates the number of sessions that have occurred.
+
+From here you can see and sort by the following:
+
+* Timestamp
+* Connection ID
+* Username
+* Client IP
+* Login (Success or Failed)
+* Duration (of session)
+* Configuration Changes
+
+In the Sessions table, you can click on the **Timestamp**, **Connection ID**, and **Username** items to set them as your filters.
+
+:ref:`Back to Viewing Logs`
+
+.. _system_5.4.7:
+
+Viewing System Logs
+-------------------
+The **SYSTEM** tab shows the system log table and is used for viewing all system logs. The number at the top indicates the number of system logs that have occurred. Because system logs occur less frequently than queries and sessions, you may need to increase the filter timespan for the table to display any system logs.
+
+From here you can see and sort by the following:
+
+* Timestamp
+* Log type
+* Message
+
+In the Systems table, you can click on the **Timestamp** and **Log type** items to set them as your filters. In the **Message** column, you can also click on an item to show more information about the message.
+
+:ref:`Back to Viewing Logs`
+
+.. _log_lines_5.4.7:
+
+Viewing All Log Lines
+---------------------
+The **LOG LINES** tab is used for viewing the total number of log lines in a table. From here users can view a more granular breakdown of log information collected by Studio. The other tabs (QUERIES, SESSIONS, and SYSTEM) show a filtered form of the raw log lines. For example, the QUERIES tab shows an aggregation of several log lines.
+
+From here you can see and sort by the following:
+
+* Timestamp
+* Message level
+* Worker hostname
+* Worker port
+* Connection ID
+* Database name
+* User name
+* Statement ID
+
+In the **LOG LINES** table, you can click on any of the items to set them as your filters.
+
+:ref:`Back to Viewing Logs`
\ No newline at end of file
diff --git a/studio_login_5.3.2.png b/studio_login_5.3.2.png
deleted file mode 100644
index e888aca13..000000000
Binary files a/studio_login_5.3.2.png and /dev/null differ
diff --git a/third_party_tools/client_drivers/cpp/connect_test.cpp b/third_party_tools/client_drivers/cpp/connect_test.cpp
deleted file mode 100644
index dc199f06b..000000000
--- a/third_party_tools/client_drivers/cpp/connect_test.cpp
+++ /dev/null
@@ -1,34 +0,0 @@
-// Trivial example
-
-#include 
-
-#include  "sqream.h"
-
-int main () {
-
-   sqream::driver sqc;
-
-   // Connection parameters: Hostname, Port, Use SSL, Username, Password,
-   // Database name, Service name
-   sqc.connect("127.0.0.1", 5000, false, "rhendricks", "Tr0ub4dor&3",
-               "raviga", "sqream");
-
-   // create table with data
-   run_direct_query(&sqc, "CREATE TABLE test_table (x int)");
-   run_direct_query(&sqc, "INSERT INTO test_table VALUES (5), (6), (7), (8)");
-
-   // query it
-   sqc.new_query("SELECT * FROM test_table");
-   sqc.execute_query();
-
-   // See the results
-   while (sqc.next_query_row()) {
-       std::cout << "Received: " << sqc.get_int(0) << std::endl;
-   }
-
-   sqc.finish_query();
-
-   // Close the connection completely
-   sqc.disconnect();
-
-}
diff --git a/third_party_tools/client_drivers/cpp/index.rst b/third_party_tools/client_drivers/cpp/index.rst
deleted file mode 100644
index fbbf6fb39..000000000
--- a/third_party_tools/client_drivers/cpp/index.rst
+++ /dev/null
@@ -1,87 +0,0 @@
-.. _cpp_native:
-
-*************************
-C++ Driver
-*************************
-
-The SQream DB C++ driver allows C++ programs and tools to connect to SQream DB.
-
-This tutorial shows how to write a C++ program that uses this driver.
-
-.. contents:: In this topic:
-   :depth: 2
-   :local:
-
-
-Installing the C++ driver
-==================================
-
-Prerequisites
-----------------
-
-The SQream DB C++ driver was built on 64-bit Linux, and is designed to work with RHEL 7 and Ubuntu 16.04 and newer.
-
-Getting the library
----------------------
-
-The C++ driver is provided as a tarball containing the compiled ``libsqream.so`` file and a header ``sqream.h``. Get the driver from the `SQream Drivers page `_. The library can be integrated into your C++-based applications or projects.
-
-
-Extract the tarball archive
------------------------------
-
-Extract the library files from the tarball
-
-.. code-block:: console
-
-   $ tar xf libsqream-3.0.tar.gz
-
-Examples
-==============================================
-
-Assuming there is a SQream DB worker to connect to, we'll connect to it using the application and run some statements.
-
-Testing the connection to SQream DB
---------------------------------------------
-
-Download this file by right clicking and saving to your computer :download:`connect_test.cpp `.
-
-.. literalinclude:: connect_test.cpp
-    :language: cpp
-    :caption: Connect to SQream DB
-    :linenos:
-
-
-Compiling and running the application
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To build this code, place the library and header file in ./libsqream-3.0/ and run
-
-.. code-block:: console
-
-   $ g++ -Wall -Ilibsqream-3.0 -Llibsqream-3.0 -lsqream connect_test.cpp -o connect_test
-   $ ./connect_test
-
-Modify the ``-I`` and ``-L`` arguments to match the ``.so`` library and ``.h`` file if they are in another directory.
-
-Creating a table and inserting values
---------------------------------------------
-
-Download this file by right clicking and saving to your computer :download:`insert_test.cpp `.
-
-.. literalinclude:: insert_test.cpp
-    :language: cpp
-    :caption: Inserting data to a SQream DB table
-    :linenos:
-
-
-Compiling and running the application
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To build this code, use
-
-.. code-block:: console
-
-   $ g++ -Wall -Ilibsqream-3.0 -Llibsqream-3.0 -lsqream insert_test.cpp -o insert_test
-   $ ./insert_test
-
diff --git a/third_party_tools/client_drivers/cpp/insert_test.cpp b/third_party_tools/client_drivers/cpp/insert_test.cpp
deleted file mode 100644
index 8a16618a4..000000000
--- a/third_party_tools/client_drivers/cpp/insert_test.cpp
+++ /dev/null
@@ -1,39 +0,0 @@
-// Insert with parameterized statement example
-
-#include 
-
-#include  "sqream.h"
-
-int main () {
-
-   sqream::driver sqc;
-
-   // Connection parameters: Hostname, Port, Use SSL, Username, Password,
-   // Database name, Service name
-   sqc.connect("127.0.0.1", 5000, false, "rhendricks", "Tr0ub4dor&3",
-               "raviga", "sqream");
-
-   run_direct_query(&sqc,
-       "CREATE TABLE animals (id INT NOT NULL, name VARCHAR(10) NOT NULL)");
-
-   // prepare the statement
-   sqc.new_query("INSERT INTO animals VALUES (?, ?)");
-   sqc.execute_query();
-
-   // Data to insert
-   int row0[] = {1,2,3};
-   std::string row1[] = {"Dog","Cat","Possum"};
-   int len = sizeof(row0)/sizeof(row0[0]);
-
-   for (int i = 0; i < len; ++i) {  
-      sqc.set_int(0, row0[i]);
-      sqc.set_varchar(1, row1[i]);
-      sqc.next_query_row();
-   }  
-
-   // This commits the insert
-   sqc.finish_query(); 
-
-   sqc.disconnect();
-
-}
diff --git a/third_party_tools/client_drivers/jdbc/index.rst b/third_party_tools/client_drivers/jdbc/index.rst
deleted file mode 100644
index 42a04548f..000000000
--- a/third_party_tools/client_drivers/jdbc/index.rst
+++ /dev/null
@@ -1,162 +0,0 @@
-.. _java_jdbc:
-
-*************************
-JDBC
-*************************
-
-The SQream DB JDBC driver allows many Java applications and tools connect to SQream DB.
-This tutorial shows how to write a Java application using the JDBC interface.
-
-The JDBC driver requires Java 1.8 or newer.
-
-.. contents:: In this topic:
-   :local:
-
-Installing the JDBC driver
-==================================
-
-Prerequisites
-----------------
-
-The SQream DB JDBC driver requires Java 1.8 or newer. We recommend either Oracle Java or OpenJDK.
-
-**Oracle Java**
-
-Download and install Java 8 from Oracle for your platform
-
-https://www.java.com/en/download/manual.jsp
-
-**OpenJDK**
-
-For Linux and BSD, see https://openjdk.java.net/install/
-
-For Windows, SQream recommends Zulu 8 https://www.azul.com/downloads/zulu-community/?&version=java-8-lts&architecture=x86-64-bit&package=jdk
-
-.. _get_jdbc_jar:
-
-Getting the JAR file
----------------------
-
-The JDBC driver is provided as a zipped JAR file, available for download from the :ref:`client drivers download page`. This JAR file can integrate into your Java-based applications or projects.
-
-
-Extract the zip archive
--------------------------
-
-Extract the JAR file from the zip archive
-
-.. code-block:: console
-
-   $ unzip sqream-jdbc-4.3.0.zip
-
-Setting up the Class Path
-----------------------------
-
-To use the driver, the JAR named ``sqream-jdbc-.jar`` (for example, ``sqream-jdbc-4.3.0.jar``) needs to be included in the class path, either by putting it in the ``CLASSPATH`` environment variable, or by using flags on the relevant Java command line.
-
-For example, if the JDBC driver has been unzipped to ``/home/sqream/sqream-jdbc-4.3.0.jar``, the application should be run as follows:
-
-.. code-block:: console
-
-   $ export CLASSPATH=/home/sqream/sqream-jdbc-4.3.0.jar:$CLASSPATH
-   $ java my_java_app
-
-An alternative method is to pass ``-classpath`` to the Java executable:
-
-.. code-block:: console
-
-   $ java -classpath .:/home/sqream/sqream-jdbc-4.3.0.jar my_java_app
-
-
-Connect to SQream DB with a JDBC application
-==============================================
-
-Driver class
---------------
-
-Use ``com.sqream.jdbc.SQDriver`` as the driver class in the JDBC application.
-
-
-.. _connection_string:
-
-Connection string
---------------------
-
-JDBC drivers rely on a connection string. Use the following syntax for SQream DB
-
-.. code-block:: text
-
-   jdbc:Sqream:///;user=;password=sqream;[; ...]
-
-Connection parameters
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. list-table:: 
-   :widths: auto
-   :header-rows: 1
-   
-   * - Item
-     - Optional
-     - Default
-     - Description
-   * - ````
-     - ✗
-     - None
-     - Hostname and port of the SQream DB worker. For example, ``127.0.0.1:5000``, ``sqream.mynetwork.co:3108``
-   * - ````
-     - ✗
-     - None
-     - Database name to connect to. For example, ``master``
-   * - ``username=``
-     - ✗
-     - None
-     - Username of a role to use for connection. For example, ``username=rhendricks``
-   * - ``password=``
-     - ✗
-     - None
-     - Specifies the password of the selected role. For example, ``password=Tr0ub4dor&3``
-   * - ``service=``
-     - ✓
-     - ``sqream``
-     - Specifices service queue to use. For example, ``service=etl``
-   * - ````
-     - ✓
-     - ``false``
-     - Specifies SSL for this connection. For example, ``ssl=true``
-   * - ````
-     - ✓
-     - ``true``
-     - Connect via load balancer (use only if exists, and check port).
-
-Connection string examples
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-For a SQream DB cluster with load balancer and no service queues, with SSL
-
-.. code-block:: text
-
-   jdbc:Sqream://sqream.mynetwork.co:3108/master;user=rhendricks;password=Tr0ub4dor&3;ssl=true;cluster=true
-
-Minimal example for a local, standalone SQream DB
-
-.. code-block:: text 
-
-   jdbc:Sqream://127.0.0.1:5000/master;user=rhendricks;password=Tr0ub4dor&3
-
-For a SQream DB cluster with load balancer and a specific service queue named ``etl``, to the database named ``raviga``
-
-.. code-block:: text
-
-   jdbc:Sqream://sqream.mynetwork.co:3108/raviga;user=rhendricks;password=Tr0ub4dor&3;cluster=true;service=etl
-
-
-Sample Java program
---------------------
-
-Download this file by right clicking and saving to your computer :download:`sample.java `.
-
-.. literalinclude:: sample.java
-    :language: java
-    :caption: JDBC application sample
-    :linenos:
-
diff --git a/third_party_tools/client_drivers/python/api-reference.rst b/third_party_tools/client_drivers/python/api-reference.rst
deleted file mode 100644
index 28e1205e6..000000000
--- a/third_party_tools/client_drivers/python/api-reference.rst
+++ /dev/null
@@ -1,191 +0,0 @@
-.. _pysqream_api_reference:
-
-*************************
-pysqream API reference
-*************************
-
-The SQream Python connector allows Python programs to connect to SQream DB.
-
-pysqream conforms to Python DB-API specifications `PEP-249 `_
-
-
-The main module is pysqream, which contains the :py:meth:`Connection` class.
-
-.. method:: connect(host, port, database, username, password, clustered = False, use_ssl = False, service='sqream', reconnect_attempts=3, reconnect_interval=10)
-   
-   Creates a new :py:meth:`Connection` object and connects to SQream DB.
-   
-   host
-      SQream DB hostname or IP
-
-   port
-      SQream DB port 
-
-   database
-      database name
-
-   username
-      Username to use for connection
-
-   password
-      Password for ``username``
-
-   clustered
-      Connect through load balancer, or direct to worker (Default: false - direct to worker)
-
-   use_ssl
-      use SSL connection (default: false)
-
-   service
-      Optional service queue (default: 'sqream')
-
-   reconnect_attempts
-      Number of reconnection attempts to attempt before closing the connection
-
-   reconnect_interval
-      Time in seconds between each reconnection attempt
-
-.. class:: Connection
-   
-   .. attribute:: arraysize
-   
-      Specifies the number of rows to fetch at a time with :py:meth:`~Connection.fetchmany`. Defaults to 1 - one row at a time.
-
-   .. attribute:: rowcount
-   
-      Unused, always returns -1.
-   
-   .. attribute:: description
-      
-      Read-only attribute that contains result set metadata.
-      
-      This attribute is populated after a statement is executed.
-      
-      .. list-table:: 
-         :widths: auto
-         :header-rows: 1
-         
-         * - Value
-           - Description
-         * - ``name``
-           - Column name
-         * - ``type_code``
-           - Internal type code
-         * - ``display_size``
-           - Not used - same as ``internal_size``
-         * - ``internal_size``
-           - Data size in bytes
-         * - ``precision``
-           - Precision of numeric data (not used)
-         * - ``scale``
-           - Scale for numeric data (not used)
-         * - ``null_ok``
-           - Specifies if ``NULL`` values are allowed for this column
-
-   .. method:: execute(self, query, params=None)
-      
-      Execute a statement.
-      
-      Parameters are not supported
-      
-      self
-         :py:meth:`Connection`
-
-      query
-         statement or query text
-      
-      params
-         Unused
-      
-   .. method:: executemany(self, query, rows_or_cols=None, data_as='rows', amount=None)
-      
-      Prepares a statement and executes it against all parameter sequences found in ``rows_or_cols``.
-
-      self
-         :py:meth:`Connection`
-
-      query
-         INSERT statement
-         
-      rows_or_cols
-         Data buffer to insert. This should be a sequence of lists or tuples.
-      
-      data_as
-         (Optional) Read data as rows or columns
-      
-      amount
-         (Optional) count of rows to insert
-   
-   .. method:: close(self)
-      
-      Close a statement and connection.
-      After a statement is closed, it must be reopened by creating a new cursor.
-            
-      self
-         :py:meth:`Connection`
-
-   .. method:: cursor(self)
-      
-      Create a new :py:meth:`Connection` cursor.
-      
-      We recommend creating a new cursor for every statement.
-      
-      self
-         :py:meth:`Connection`
-
-   .. method:: fetchall(self, data_as='rows')
-      
-         Fetch all remaining records from the result set.
-         
-         An empty sequence is returned when no more rows are available.
-      
-      self
-         :py:meth:`Connection`
-
-      data_as
-         (Optional) Read data as rows or columns
-
-   .. method:: fetchone(self, data_as='rows')
-      
-      Fetch one record from the result set.
-      
-      An empty sequence is returned when no more rows are available.
-      
-      self
-         :py:meth:`Connection`
-
-      data_as
-         (Optional) Read data as rows or columns
-
-
-   .. method:: fetchmany(self, size=[Connection.arraysize], data_as='rows')
-      
-         Fetches the next several rows of a query result set.
-
-         An empty sequence is returned when no more rows are available.
-
-      self
-         :py:meth:`Connection`
-
-      size
-         Number of records to fetch. If not set, fetches :py:obj:`Connection.arraysize` (1 by default) records
-
-      data_as
-         (Optional) Read data as rows or columns
-
-   .. method:: __iter__()
-
-      Makes the cursor iterable.
-
-
-.. attribute:: apilevel = '2.0'
-   
-   String constant stating the supported API level. The connector supports API "2.0".
-
-.. attribute:: threadsafety = 1
-      
-   Level of thread safety the interface supports. pysqream currently supports level 1, which states that threads can share the module, but not connections.
-
-.. attribute:: paramstyle = 'qmark'
-   
-   The placeholder marker. Set to ``qmark``, which is a question mark (``?``).
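Since the connector uses the ``qmark`` paramstyle, INSERT statements are built with one ``?`` per column and values are bound positionally. A minimal sketch of constructing such a statement (the ``employees`` table and its columns are hypothetical, for illustration only):

```python
# Build a qmark-style INSERT string for a hypothetical 3-column table.
# Placeholders are question marks (?), one per column, bound positionally.
columns = ["id", "name", "salary"]
placeholders = ", ".join(["?"] * len(columns))
insert = f"insert into employees ({', '.join(columns)}) values ({placeholders})"
print(insert)  # insert into employees (id, name, salary) values (?, ?, ?)
```

The resulting string would then be passed to ``executemany`` together with a sequence of row tuples.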
diff --git a/third_party_tools/client_drivers/python/index.rst b/third_party_tools/client_drivers/python/index.rst
deleted file mode 100644
index 1c69752d7..000000000
--- a/third_party_tools/client_drivers/python/index.rst
+++ /dev/null
@@ -1,502 +0,0 @@
-.. _pysqream:
-
-*************************
-Python (pysqream)
-*************************
-
-The SQream Python connector is a set of packages that allows Python programs to connect to SQream DB.
-
-* ``pysqream`` is a pure Python connector. It can be installed with ``pip`` on any operating system, including Linux, Windows, and macOS.
-
-* ``pysqream-sqlalchemy`` is a SQLAlchemy dialect for ``pysqream``
-
-The connector supports Python 3.6.5 and newer.
-
-The base ``pysqream`` package conforms to the Python DB-API specification `PEP-249 <https://peps.python.org/pep-0249/>`_.
-
-.. contents:: In this topic:
-   :local:
-
-Installing the Python connector
-==================================
-
-Prerequisites
-----------------
-
-1. Python
-^^^^^^^^^^^^
-
-The connector requires Python 3.6.5 or newer. To verify your version of Python:
-
-.. code-block:: console
-
-   $ python --version
-   Python 3.7.3
-   
-
-.. note:: If both Python 2.x and 3.x are installed, you can run ``python3`` and ``pip3`` instead of ``python`` and ``pip`` respectively for the rest of this guide.
-
-.. warning:: If you're running an older version of Python, ``pip`` will fetch an older version of ``pysqream`` (earlier than 3.0.0), which is not supported.
-
-2. PIP
-^^^^^^^^^^^^
-The Python connector is installed via ``pip``, the Python package manager and installer.
-
-We recommend upgrading to the latest version of ``pip`` before installing. To upgrade ``pip``, run the following command:
-
-.. code-block:: console
-
-   $ python -m pip install --upgrade pip
-   Collecting pip
-      Downloading https://files.pythonhosted.org/packages/00/b6/9cfa56b4081ad13874b0c6f96af8ce16cfbc1cb06bedf8e9164ce5551ec1/pip-19.3.1-py2.py3-none-any.whl (1.4MB)
-        |████████████████████████████████| 1.4MB 1.6MB/s
-   Installing collected packages: pip
-     Found existing installation: pip 19.1.1
-       Uninstalling pip-19.1.1:
-         Successfully uninstalled pip-19.1.1
-   Successfully installed pip-19.3.1
-
-.. note:: 
-   * On macOS, you may want to use virtualenv to install Python and the connector, to ensure compatibility with the built-in Python environment
-   *  If you encounter an error including ``SSLError`` or ``WARNING: pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available.`` - please be sure to reinstall Python with SSL enabled, or use virtualenv or Anaconda.
-
-3. OpenSSL for Linux
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Some distributions of Python do not include OpenSSL. The Python connector relies on OpenSSL for secure connections to SQream DB.
-
-* To install OpenSSL on RHEL/CentOS
-
-   .. code-block:: console
-   
-      $ sudo yum install -y libffi-devel openssl-devel
-
-* To install OpenSSL on Ubuntu
-
-   .. code-block:: console
-   
-      $ sudo apt-get install libssl-dev libffi-dev -y
-
-4. Cython (optional)
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-Installing Cython is optional but highly recommended, as it improves the performance of Python applications.
-
-   .. code-block:: console
-   
-      $ pip install cython
-
-Install via pip
------------------
-
-The Python connector is available via `PyPI <https://pypi.org/project/pysqream/>`_.
-
-Install the connector with ``pip``:
-
-.. code-block:: console
-   
-   $ pip install pysqream pysqream-sqlalchemy
-
-``pip`` will automatically install all necessary libraries and modules.
-
-Upgrading an existing installation
---------------------------------------
-
-The Python drivers are updated periodically.
-To upgrade an existing pysqream installation, use pip's ``-U`` flag.
-
-.. code-block:: console
-   
-   $ pip install pysqream pysqream-sqlalchemy -U
-
-
-Validate the installation
------------------------------
-
-Create a file called ``test.py``, containing the following:
-
-.. literalinclude:: test.py
-    :language: python
-    :caption: pysqream Validation Script
-    :linenos:
-
-Make sure to replace the parameters in the connection with the respective parameters for your SQream DB installation.
-
-Run the test file to verify that you can connect to SQream DB:
-
-.. code-block:: console
-   
-   $ python test.py
-   Version: v2020.1
-
-If all went well, you are now ready to build an application using the SQream DB Python connector!
-
-If any connection error appears, verify that you have access to a running SQream DB and that the connection parameters are correct.
-
-SQLAlchemy examples
-========================
-
-SQLAlchemy is a SQL toolkit and Object-Relational Mapper (ORM) for Python.
-
-When you install the SQream DB dialect (``pysqream-sqlalchemy``), you can use frameworks like Pandas, TensorFlow, and Alembic to query SQream DB directly.
-
-A simple connection example
----------------------------------
-
-.. code-block:: python
-
-   import sqlalchemy as sa
-   from sqlalchemy.engine.url import URL
-
-   engine_url = URL('sqream'
-                 , username='rhendricks'
-                 , password='secret_password'
-                 , host='localhost'
-                 , port=5000
-                 , database='raviga'
-                 , query={'use_ssl': False})
-
-   engine = sa.create_engine(engine_url)
-
-   res = engine.execute('create table test (ints int)')
-   res = engine.execute('insert into test values (5), (6)')
-   res = engine.execute('select * from test')
-
-Pulling a table into Pandas
----------------------------------
-
-In this example, we use the URL method to create the connection string.
-
-.. code-block:: python
-
-   import sqlalchemy as sa
-   import pandas as pd
-   from sqlalchemy.engine.url import URL
-
-
-   engine_url = URL('sqream'
-                 , username='rhendricks'
-                 , password='secret_password'
-                 , host='localhost'
-                 , port=5000
-                 , database='raviga'
-                 , query={'use_ssl': False})
-
-   engine = sa.create_engine(engine_url)
-   
-   table_df = pd.read_sql("select * from nba", con=engine)
-
-
-API Examples
-===============
-
-Explaining the connection example
----------------------------------------
-
-First, import the package and create a connection
-
-.. code-block:: python
-   
-   # Import pysqream package
-   
-   import pysqream
-
-   """
-   Connection parameters include:
-   * IP/Hostname
-   * Port
-   * database name
-   * username
-   * password 
-   * Connect through load balancer, or direct to worker (Default: false - direct to worker)
-   * use SSL connection (default: false)
-   * Optional service queue (default: 'sqream')
-   """
-   
-   # Create a connection object
-   
-   con = pysqream.connect(host='127.0.0.1', port=3108, database='raviga'
-                      , username='rhendricks', password='Tr0ub4dor&3'
-                      , clustered=True)
-
-Then, run a query and fetch the results
-
-.. code-block:: python
-
-   cur = con.cursor()  # Create a new cursor
-   # Prepare and execute a query
-   cur.execute('select show_version()')
-   
-   result = cur.fetchall() # `fetchall` gets the entire data set
-   
-   print (f"Version: {result[0][0]}")
-
-This should print the SQream DB version. For example ``v2020.1``.
-
-Finally, we will close the connection
-
-.. code-block:: python
-   
-   con.close()
-
-Using the cursor
---------------------------------------------
-
-The DB-API specification includes several methods for fetching results from the cursor.
-
-We will use the ``nba`` example. Here's a peek at the table contents:
-
-.. csv-table:: nba
-   :file: nba-t10.csv
-   :widths: auto
-   :header-rows: 1 
-
-Like before, we will import the library and create a :py:meth:`~Connection`, followed by :py:meth:`~Connection.execute` on a simple ``SELECT *`` query.
-
-.. code-block:: python
-   
-   import pysqream
-   con = pysqream.connect(host='127.0.0.1', port=3108, database='master'
-                      , username='rhendricks', password='Tr0ub4dor&3'
-                      , clustered=True)
-
-   cur = con.cursor() # Create a new cursor
-   # The select statement:
-   statement = 'SELECT * FROM nba'
-   cur.execute(statement)
-
-After executing the statement, we have a :py:meth:`Connection` cursor object waiting. A cursor is iterable, meaning that every time we fetch, the cursor advances to the next row.
-
-Use :py:meth:`~Connection.fetchone` to get one record at a time:
-
-.. code-block:: python
-   
-   first_row = cur.fetchone() # Fetch one row at a time (first row)
-   
-   second_row = cur.fetchone() # Fetch one row at a time (second row)
-
-To get several rows at a time, use :py:meth:`~Connection.fetchmany`:
-
-.. code-block:: python
-   
-   # executing `fetchone` twice is equivalent to this form:
-   third_and_fourth_rows = cur.fetchmany(2)
-
-To get all rows at once, use :py:meth:`~Connection.fetchall`:
-
-.. code-block:: python
-   
-   # To get all rows at once, use `fetchall`
-   remaining_rows = cur.fetchall()
-
-   # Close the connection when done
-   con.close()
-
-Here are the contents of the row variables we used:
-
-.. code-block:: pycon
-   
-   >>> print(first_row)
-   ('Avery Bradley', 'Boston Celtics', 0, 'PG', 25, '6-2', 180, 'Texas', 7730337)
-   >>> print(second_row)
-   ('Jae Crowder', 'Boston Celtics', 99, 'SF', 25, '6-6', 235, 'Marquette', 6796117)
-   >>> print(third_and_fourth_rows)
-   [('John Holland', 'Boston Celtics', 30, 'SG', 27, '6-5', 205, 'Boston University', None), ('R.J. Hunter', 'Boston Celtics', 28, 'SG', 22, '6-5', 185, 'Georgia State', 1148640)]
-   >>> print(remaining_rows)
-   [('Jonas Jerebko', 'Boston Celtics', 8, 'PF', 29, '6-10', 231, None, 5000000), ('Amir Johnson', 'Boston Celtics', 90, 'PF', 29, '6-9', 240, None, 12000000), ('Jordan Mickey', 'Boston Celtics', 55, 'PF', 21, '6-8', 235, 'LSU', 1170960), ('Kelly Olynyk', 'Boston Celtics', 41, 'C', 25, '7-0', 238, 'Gonzaga', 2165160),
-   [...]
-
-.. note:: Calling a fetch command after all rows have been fetched will return an empty array (``[]``).
-
-Reading result metadata
-----------------------------
-
-When executing a statement, the connection object also contains metadata about the result set (e.g., column names and types).
-
-The metadata is stored in the :py:attr:`Connection.description` object of the cursor.
-
-.. code-block:: pycon
-   
-   >>> import pysqream
-   >>> con = pysqream.connect(host='127.0.0.1', port=3108, database='master'
-   ...                , username='rhendricks', password='Tr0ub4dor&3'
-   ...                , clustered=True)
-   >>> cur = con.cursor()
-   >>> statement = 'SELECT * FROM nba'
-   >>> cur.execute(statement)
-   
-   >>> print(cur.description)
-   [('Name', 'STRING', 24, 24, None, None, True), ('Team', 'STRING', 22, 22, None, None, True), ('Number', 'NUMBER', 1, 1, None, None, True), ('Position', 'STRING', 2, 2, None, None, True), ('Age (as of 2018)', 'NUMBER', 1, 1, None, None, True), ('Height', 'STRING', 4, 4, None, None, True), ('Weight', 'NUMBER', 2, 2, None, None, True), ('College', 'STRING', 21, 21, None, None, True), ('Salary', 'NUMBER', 4, 4, None, None, True)]
-
-To get a list of column names, iterate over the ``description`` list:
-   
-.. code-block:: pycon
-   
-   >>> [ i[0] for i in cur.description ]
-   ['Name', 'Team', 'Number', 'Position', 'Age (as of 2018)', 'Height', 'Weight', 'College', 'Salary']
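Because ``description`` lines up positionally with each fetched row, the two can be zipped together to produce dict-style records. A minimal sketch, using hypothetical sample data in place of a live cursor (only the first two fields of each description tuple are shown):

```python
# Hypothetical stand-ins for cur.description and cur.fetchall() output.
description = [('Name', 'STRING'), ('Team', 'STRING'), ('Number', 'NUMBER')]
rows = [('Avery Bradley', 'Boston Celtics', 0),
        ('Jae Crowder', 'Boston Celtics', 99)]

# The first element of each description tuple is the column name;
# pair each name with its value in every row.
col_names = [col[0] for col in description]
records = [dict(zip(col_names, row)) for row in rows]
print(records[0]['Name'])  # Avery Bradley
```

With a real cursor, ``description`` and the fetched rows would come from ``cur.description`` and ``cur.fetchall()`` after ``cur.execute(...)``.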
-
-Loading data into a table
----------------------------
-
-This example loads 10,000 rows of dummy data into a SQream DB instance.
-
-.. code-block:: python
-   
-   import pysqream
-   from datetime import date, datetime
-   from time import time
-
-   con = pysqream.connect(host='127.0.0.1', port=3108, database='master'
-                      , username='rhendricks', password='Tr0ub4dor&3'
-                      , clustered=True)
-   
-   # Create a table for loading
-   create = 'create or replace table perf (b bool, t tinyint, sm smallint, i int, bi bigint, f real, d double, s varchar(12), ss text, dt date, dtt datetime)'
-   con.execute(create)
-
-   # After creating the table, we can load data into it with the INSERT command
-
-   # Create dummy data which matches the table we created
-   data = (False, 2, 12, 145, 84124234, 3.141, -4.3, "Marty McFly" , u"キウイは楽しい鳥です" , date(2019, 12, 17), datetime(1955, 11, 4, 1, 23, 0, 0))
-   
-   
-   row_count = 10**4
-
-   # Get a new cursor
-   cur = con.cursor()
-   insert = 'insert into perf values (?,?,?,?,?,?,?,?,?,?,?)'
-   start = time()
-   cur.executemany(insert, [data] * row_count)
-   print (f"Total insert time for {row_count} rows: {time() - start} seconds")
-
-   # Close this cursor
-   cur.close()
-   
-   # Verify that the data was inserted correctly
-   # Get a new cursor
-   cur = con.cursor()
-   cur.execute('select count(*) from perf')
-   result = cur.fetchall() # `fetchall` collects the entire data set
-   print (f"Count of inserted rows: {result[0][0]}")
-
-   # When done, close the cursor
-   cur.close()
-   
-   # Close the connection
-   con.close()
-
-Reading data from a CSV file for load into a table
-----------------------------------------------------------
-
-We will write a helper function to create an :ref:`insert` statement by reading an existing table's metadata.
-
-.. code-block:: python
-   
-   import pysqream
-   import csv
-   import datetime
-
-   def insert_from_csv(cur, table_name, csv_filename, field_delimiter = ',', null_markers = []):
-      """
-      We will first ask SQream DB for some table information.
-      This is important for understanding the number of columns, and will help
-      to create a matching INSERT statement
-      """
-
-      column_info = cur.execute(f"SELECT * FROM {table_name} LIMIT 0").description
-
-
-      def parse_datetime(v):
-         try:
-            return datetime.datetime.strptime(v, '%Y-%m-%d %H:%M:%S.%f')
-         except ValueError:
-            try:
-               return datetime.datetime.strptime(v, '%Y-%m-%d %H:%M:%S')
-            except ValueError:
-               return datetime.datetime.strptime(v, '%Y-%m-%d')
-
-      # Create enough placeholders (`?`) for the INSERT query string
-      qstring = ','.join(['?'] * len(column_info))
-      insert_statement = f"insert into {table_name} values ({qstring})"
-
-      # Open the CSV file
-      with open(csv_filename, mode='r') as csv_file:
-         csv_reader = csv.reader(csv_file, delimiter=field_delimiter)
-
-         # Execute the INSERT statement with the CSV data while the file is open
-         cur.executemany(insert_statement, [row for row in csv_reader])
-
-
-   con = pysqream.connect(host='127.0.0.1', port=3108, database='master'
-                      , username='rhendricks', password='Tr0ub4dor&3'
-                      , clustered=True)
-   
-   cur = con.cursor()
-   insert_from_csv(cur, 'nba', 'nba.csv', field_delimiter = ',', null_markers = [])
-   
-   con.close()
-
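The fallback parsing used by the ``parse_datetime`` helper above can also be written as a loop over candidate formats, which makes it easy to test on its own, independent of any SQream connection. A small sketch (restructured, not the original helper):

```python
import datetime

def parse_datetime(v):
    # Try progressively coarser timestamp formats, from full
    # microsecond precision down to a bare date.
    for fmt in ('%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S', '%Y-%m-%d'):
        try:
            return datetime.datetime.strptime(v, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized datetime format: {v!r}")

print(parse_datetime('2019-12-17'))  # 2019-12-17 00:00:00
```

Adding a format to the tuple extends the helper without nesting another ``try``/``except`` block.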
-
-Using SQLAlchemy ORM to create tables and fill them with data
------------------------------------------------------------------------
-
-You can also use the ORM to create tables and insert data into them from Python objects.
-
-For example:
-
-.. code-block:: python
-   
-   import sqlalchemy as sa
-   import pandas as pd
-   from sqlalchemy.engine.url import URL
-
-
-   engine_url = URL('sqream'
-                 , username='rhendricks'
-                 , password='secret_password'
-                 , host='localhost'
-                 , port=5000
-                 , database='raviga'
-                 , query={'use_ssl': False})
-
-   engine = sa.create_engine(engine_url)
-   
-   # Build a metadata object and bind it
-   
-   metadata = sa.MetaData()
-   metadata.bind = engine
-   
-   # Create a table in the local metadata
-   
-   employees = sa.Table(
-   'employees'
-   , metadata 
-   , sa.Column('id', sa.Integer)
-   , sa.Column('name', sa.VARCHAR(32))
-   , sa.Column('lastname', sa.VARCHAR(32))
-   , sa.Column('salary', sa.Float)
-   )
-
-   # The create_all() function uses the SQream DB engine object
-   # to create all the defined table objects.
-
-   metadata.create_all(engine)
-   
-   # Now that the table exists, we can insert data into it.
-   
-   # Build the data rows
-   insert_data = [ {'id': 1, 'name': 'Richard','lastname': 'Hendricks',   'salary': 12000.75}
-                  ,{'id': 3,  'name': 'Bertram', 'lastname': 'Gilfoyle', 'salary': 8400.0}
-                  ,{'id': 8,  'name': 'Donald', 'lastname': 'Dunn', 'salary': 6500.40}
-                 ]
-
-   # Build the insert command
-   ins = employees.insert()
-   
-   # Execute the command with the data rows
-   result = engine.execute(ins, insert_data)
-
-.. toctree::
-   :maxdepth: 8
-   :caption: Further information
-   
-   api-reference
diff --git a/third_party_tools/client_platforms/php.rst b/third_party_tools/client_platforms/php.rst
deleted file mode 100644
index 599d6a578..000000000
--- a/third_party_tools/client_platforms/php.rst
+++ /dev/null
@@ -1,46 +0,0 @@
-.. _php:
-
-*****************************
-Connect to SQream Using PHP
-*****************************
-
-You can use PHP to interact with a SQream DB cluster.
-
-This guide shows you how to connect a PHP application to SQream DB.
-
-.. contents:: In this topic:
-   :local:
-
-Prerequisites
-===============
-
-#. Install the :ref:`SQream DB ODBC driver for Linux` and create a DSN.
-
-#. 
-   Install the `uODBC `_ extension for your PHP installation.
-   To enable uODBC, configure PHP with ``./configure --with-pdo-odbc=unixODBC,/usr/local`` when compiling, or install ``php-odbc`` and ``php-pdo`` alongside PHP (version 7.1 or newer for best results) using your distribution's package manager.
-
-Testing the connection
-===========================
-
-#. 
-   Create a test connection file. Be sure to use the correct parameters for your SQream DB installation.
-
-   Download this :download:`PHP example connection file ` .
-
-   .. literalinclude:: test.php
-      :language: php
-      :emphasize-lines: 4
-      :linenos:
-
-   .. tip::
-      An example of a valid DSN line is:
-      
-      .. code:: php
-         
-         $dsn = "odbc:Driver={SqreamODBCDriver};Server=192.168.0.5;Port=5000;Database=master;User=rhendricks;Password=super_secret;Service=sqream";
-      
-      For more information about supported DSN parameters, see :ref:`dsn_params`.
-
-#. Run the PHP file either directly with PHP (``php test.php``) or through a browser.
-
diff --git a/third_party_tools/client_platforms/tableau.rst b/third_party_tools/client_platforms/tableau.rst
deleted file mode 100644
index 666b2f198..000000000
--- a/third_party_tools/client_platforms/tableau.rst
+++ /dev/null
@@ -1,453 +0,0 @@
-.. _connect_to_tableau:
-
-**********************************
-Connecting to SQream Using Tableau
-**********************************
-
-Overview
-=====================
-SQream's Tableau connector plugin, based on standard JDBC, enables storing and fast querying large volumes of data. 
-
-The **Connecting to SQream Using Tableau** page is a quick start guide that describes how to install Tableau and the JDBC and ODBC drivers, and how to connect to SQream using those drivers for data analysis. It also describes best practices and how to troubleshoot issues that may occur while installing Tableau. SQream supports both Tableau Desktop and Tableau Server on Windows, macOS, and Linux distributions.
-
-For more information on SQream's integration with Tableau, see `Tableau's Extension Gallery `_.
-
-The Connecting to SQream Using Tableau page describes the following:
-
-.. contents::
-   :local:
-
-Installing the JDBC Driver and Tableau Connector Plugin
------------------------------------------------------------
-This section describes how to install the JDBC driver using the fully-integrated Tableau connector plugin (Tableau Connector, or **.taco** file). SQream has been tested with Tableau versions 9.2 and newer.
-
-**To connect to SQream using Tableau:**
-   
-#. Install the Tableau Desktop application.
-
-   For more information about installing the Tableau Desktop application, see the `Tableau products page `_ and click **Download Free Trial**. Note that Tableau offers a 14-day trial version.
-   
-   ::
-
-#. Do one of the following:
-
-   * **For Windows** - See :ref:`Installing Tableau Using the Windows Installer `. 
-   * **For MacOS or Linux** - See :ref:`Installing the JDBC Driver Manually `.
-
-.. note:: For Tableau **2019.4 versions and later**, SQream recommends installing the JDBC driver instead of the previously recommended ODBC driver.
-
-.. _tableau_windows_installer:
-
-Installing the JDBC Driver Using the Windows Installer
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you are using Windows, after installing the Tableau Desktop application you can install the JDBC driver using the Windows installer. The Windows installer is an installation wizard that guides you through the JDBC driver installation steps. When the driver is installed, you can connect to SQream.
-
-**To install Tableau using the Windows installer**:
-
-#. Close Tableau Desktop.
-
-    ::
-
-#. Download the most current version of the `SQream JDBC driver `_.
-
-    ::
-	
-#. Do the following:
-
-   #. Start the installer.
-   #. Verify that the **Tableau Desktop connector** item is selected.
-   #. Follow the installation steps.
-
-    ::
-
-You can now restart Tableau Desktop or Server to begin using the SQream driver by :ref:`connecting to SQream `.
-
-.. _tableau_jdbc_installer:
-
-Installing the JDBC Driver Manually
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you are using MacOS, Linux, or the Tableau server, after installing the Tableau Desktop application you can install the JDBC driver manually. When the driver is installed, you can connect to SQream.
-
-**To install the JDBC driver manually:**
-
-1. Download the JDBC installer and SQream Tableau connector (.taco) file from the :ref:`client drivers page`.
-
-    ::
-
-#. Install the JDBC driver by unzipping the JDBC driver into a Tableau driver directory.
-   
-   Based on the installation method that you used, your Tableau driver directory is located in one of the following places:
-
-   * **Tableau Desktop on Windows:** *C:\\Program Files\\Tableau\\Drivers*
-   * **Tableau Desktop on MacOS:** *~/Library/Tableau/Drivers*
-   * **Tableau on Linux**: */opt/tableau/tableau_driver/jdbc*
-	  
-.. note:: If the driver includes only a single .jar file, copy it to *C:\\Program Files\\Tableau\\Drivers*. If the driver includes multiple files, create a subfolder in *C:\\Program Files\\Tableau\\Drivers* and copy all of the files into it.
-
-Note the following when installing the JDBC driver:
-
-* You must have read permissions on the .jar file.
-* Tableau requires a JDBC 4.0 or later driver.
-* Tableau requires a Type 4 JDBC driver.
-* The latest 64-bit version of Java 8 must be installed.
-
-3. Install the **SQreamDB.taco** file by moving the SQreamDB.taco file into the Tableau connectors directory.
-   
-   Based on the installation method that you used, your Tableau driver directory is located in one of the following places:
-
-   * **Tableau Desktop on Windows:** *C:\\Users\\<username>\\My Tableau Repository\\Connectors*
-   * **Tableau Desktop on MacOS:** *~/My Tableau Repository/Connectors*
-   
-      ::
-	  
-4. *Optional* - If you are using the Tableau Server, do the following:
-   
-   1. Create a directory for Tableau connectors and give it a descriptive name, such as *C:\\tableau_connectors*.
-      
-      This directory needs to exist on all Tableau servers.
-      
-       ::
-   
-   2. Copy the SQreamDB.taco file into the new directory.
-   
-       ::
-   
-   3. Using ``tsm``, set the **native_api.connect_plugins_path** option to your connectors directory, as shown in the following example:
-
-      .. code-block:: console
-   
-         $ tsm configuration set -k native_api.connect_plugins_path -v C:/tableau_connectors
-      
-      If a configuration error is displayed, add ``--force-keys`` to the end of the command as shown in the following example:
-
-      .. code-block:: console
-   
-         $ tsm configuration set -k native_api.connect_plugins_path -v C:/tableau_connectors --force-keys
-		 
-   4. To apply the pending configuration changes, run the following command:
-
-      .. code-block:: console
-    
-         $ tsm pending-changes apply
-      
-      .. warning:: This restarts the server.
-
-You can now restart Tableau Desktop or Server to begin using the SQream driver by :ref:`connecting to SQream ` as described in the section below.
-
-.. _tableau_connect_to_sqream:
-	
-
-Installing the ODBC Driver for Tableau Versions 2019.3 and Earlier
----------------------------------------------------------------------
-
-
-This section describes the installation method for Tableau version 2019.3 or earlier and describes the following:
-
-.. contents::
-   :local:
-
-.. note:: SQream recommends installing the JDBC driver to provide improved connectivity.
-
-Automatically Reconfiguring the ODBC Driver After Initial Installation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-If you've already installed the SQream ODBC driver and installed Tableau, SQream recommends reinstalling the ODBC driver with the **.TDC Tableau Settings for SQream DB** configuration shown in the image below:
-
-.. image:: /_static/images/odbc_windows_installer_tableau.png
-
-SQream recommends this configuration because Tableau creates temporary tables and runs several discovery queries that may impact performance. The ODBC driver installer avoids this by automatically reconfiguring Tableau.
-
-For more information about reinstalling the ODBC driver installer, see :ref:`Install and Configure ODBC on Windows `.
-
-If you want to manually reconfigure the ODBC driver, see :ref:`Manually Reconfiguring the ODBC Driver After Initial Installation ` below.
-
-.. _manually_reconfigure_odbc_driver:
-
-Manually Reconfiguring the ODBC Driver After Initial Installation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The **Tableau Datasource Customization (TDC)** file lets Tableau make full use of SQream DB's features and capabilities.
-
-**To manually reconfigure the ODBC driver after initial installation:**
-
-1. Do one of the following:
-
-   1. Download the :download:`odbc-sqream.tdc ` file to your machine and open it in a text editor.
-   
-       ::
-   
-   2. Copy the text below into a text editor:
-   
-   .. literalinclude:: odbc-sqream.tdc
-      :language: xml
-      :caption: SQream ODBC TDC File
-      :emphasize-lines: 2
-
-#. Check which version of Tableau you are using.
-
-    ::
-
-#. In the text of the file shown above, in the highlighted line, replace the version number with the **major** version of Tableau that you are using.
-
-   For example, if you are using Tableau version **2019.2.1**, enter **2019.2**.
-
-    ::
-
-#. Do one of the following:
-
-   * If you are using **Tableau Desktop** - save the TDC file to *C:\\Users\\<username>\\Documents\\My Tableau Repository\\Datasources*, where ``<username>`` is the Windows username that you have installed Tableau under.
- 
-    ::
-	
-   * If you are using the **Tableau Server** - save the TDC file to *C:\\ProgramData\\Tableau\\Tableau Server\\data\\tabsvc\\vizqlserver\\Datasources*.
-
-Configuring the ODBC Connection
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The ODBC connection uses a DSN when connecting to ODBC data sources, and each DSN represents one SQream database.
-
-**To configure the ODBC connection:**
-
-1. Create an ODBC DSN.
-
-    ::
-
-#. Open the Windows menu by pressing the Windows button (:kbd:`⊞ Win`) or clicking the **Windows** menu button.
-
-    ::
-	
-#. Type **ODBC** and select **ODBC Data Sources (64-bit)**. 
-
-   During installation, the installer created a sample user DSN named **SQreamDB**.
-   
-    ::
-   
-#. *Optional* - Do one or both of the following:
-
-   * Modify the DSN name.
-   
-      ::
-	 
-   * Create a new DSN name by clicking **Add** and selecting **SQream ODBC Driver**.
-   
-.. image:: /_static/images/odbc_windows_dsns.png
-   
-	  
-5. Click **Finish**.
-
-    ::
-
-6. Enter your connection parameters.
-
-   The following table describes the connection parameters:
-	 
-   .. list-table:: 
-      :widths: 15 38 38
-      :header-rows: 1
-   
-      * - Item
-        - Description
-        - Example
-      * - Data Source Name
-        - The Data Source Name. SQream recommends using a descriptive and easily recognizable name for referencing your DSN. Once set, the Data Source Name cannot be changed.
-        - 
-      * - Description
-        - The description of your DSN. This field is optional.
-        - 
-      * - User
-        - The username of a role to use for establishing the connection.
-        - ``rhendricks``
-      * - Password
-        - The password of the selected role.
-        - ``Tr0ub4dor``
-      * - Database
-        - The database name to connect to.
-        - ``master``	 
-      * - Service
-        - The :ref:`service queue` to use.
-        - For example, ``etl``. For the default service ``sqream``, leave blank.
-      * - Server
-        - The hostname of the SQream worker.
-        - ``127.0.0.1`` or ``sqream.mynetwork.co``
-      * - Port
-        - The TCP port of the SQream worker.
-        - ``5000`` or ``3108``
-      * - Use Server Picker
-        - Uses the load balancer when establishing a connection. Use only if a load balancer exists, and verify the port.
-        - 
-      * - SSL
-        - Uses SSL when establishing a connection.
-        - 
-      * - Logging Options
-        - Lets you modify your logging options when tracking the ODBC connection for connection issues.
-        - 
-
-.. tip:: Test the connection by clicking **Test** before saving your DSN.
-
-7. Save the DSN by clicking **OK**.
-
-Connecting Tableau to SQream
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-**To connect Tableau to SQream:**
-
-1. Start Tableau Desktop.
-
-    ::
-	
-#. In the **Connect** menu, in the **To a server** sub-menu, click **More Servers** and select **Other Databases (ODBC)**.
-
-   The **Other Databases (ODBC)** window is displayed.
-   
-    ::
-	
-#. In the Other Databases (ODBC) window, select the DSN that you created in :ref:`Configuring the ODBC Connection `.
-
-   Tableau may display the **Sqream ODBC Driver Connection Dialog** window and prompt you to provide your username and password.
-
-#. Provide your username and password and click **OK**.   
-  
-.. _tableau_connect_to_sqream_db:
-
-
-Connecting to SQream
----------------------
-After installing the JDBC driver you can connect to SQream.
-
-**To connect to SQream:**
-
-#. Start Tableau Desktop.
-
-    ::
-	
-#. In the **Connect** menu, in the **To a Server** sub-menu, click **More...**.
-
-   More connection options are displayed.
-
-    ::
-	
-#. Select **SQream DB by SQream Technologies**.
-
-   The **New Connection** dialog box is displayed.
-
-    ::
-	
-#. In the New Connection dialog box, fill in the fields and click **Sign In**.
-
-  The following table describes the fields:
-   
-  .. list-table:: 
-     :widths: 15 38 38
-     :header-rows: 1
-   
-     * - Item
-       - Description
-       - Example
-     * - Server
-       - Defines the server of the SQream worker.
-       - ``127.0.0.1`` or ``sqream.mynetwork.co``
-     * - Port
-       - Defines the TCP port of the SQream worker.
-       - ``3108`` when using a load balancer, or ``5100`` when connecting directly to a worker with SSL.
-     * - Database
-       - Defines the database to establish a connection with.
-       - ``master``
-     * - Cluster
-       - Enables (``true``) or disables (``false``) the load balancer. After enabling or disabling the load balancer, verify the connection.
-       - 
-     * - Username
-       - Specifies the username of a role to use when connecting.
-       - ``rhendricks``	 
-     * - Password
-       - Specifies the password of the selected role.
-       - ``Tr0ub4dor&3``
-     * - Require SSL (recommended)
-       - Sets SSL as a requirement for establishing this connection.
-       - 
-
-The connection is established and the data source page is displayed.
-
-.. tip:: 
-   Tableau automatically assigns your connection a default name based on the DSN and table. SQream recommends giving the connection a more descriptive name.
-   
-.. _set_up_sqream_tables_as_data_sources:
-
-Setting Up SQream Tables as Data Sources
-----------------------------------------
-After connecting to SQream you must set up the SQream tables as data sources.
-
-**To set up SQream tables as data sources:**
-	
-1. From the **Table** menu, select the desired database and schema.
-
-   SQream's default schema is **public**.
-   
-    ::
-	
-#. Drag the desired tables into the main area (labeled **Drag tables here**).
-
-   This area is also used for specifying joins and data source filters.
-   
-    ::
-	
-#. Open a new sheet to analyze data. 
-
-.. tip:: 
-   For more information about configuring data sources, joining, and filtering, see Tableau's `Set Up Data Sources `_ tutorials.   
-
-Tableau Best Practices and Troubleshooting
-------------------------------------------
-This section describes the following best practices and troubleshooting procedures when connecting to SQream using Tableau:
-
-.. contents::
-   :local:
-
-Inserting Only Required Data
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When using Tableau, SQream recommends using only data that you need, as described below:
-
-* Insert only the data sources you need into Tableau, excluding tables that don't require analysis.
-
-   ::
-
-* To increase query performance, add filters before analyzing. Every modification you make while analyzing data queries the SQream database, sometimes several times. Adding filters to the data source before exploring limits the amount of data analyzed and increases query performance.
-
-Using Tableau's Table Query Syntax
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Dragging your desired tables into the main area in Tableau builds queries based on Tableau's own syntax. This helps ensure good performance, while using views or custom SQL may degrade performance. In addition, SQream recommends using :ref:`create_view` to create pre-optimized views that your data sources point to. 
-
-Creating a Separate Service for Tableau
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-SQream recommends creating a separate service for Tableau with the DWLM. This reduces the impact that Tableau has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
-
-Troubleshooting Workbook Performance Before Deploying to the Tableau Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Tableau has a built-in `performance recorder `_ that shows how time is being spent. If you're seeing slow performance, this could be the result of a misconfiguration such as setting concurrency too low.
-
-Use the Tableau Performance Recorder for viewing the performance of queries run by Tableau. You can use this information to identify queries that can be optimized by using views.
-
-Troubleshooting Error Codes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Tableau may be unable to locate the SQream JDBC driver. The following message is displayed when Tableau cannot locate the driver:
-
-.. code-block:: console
-     
-   Error Code: 37CE01A3, No suitable driver installed or the URL is incorrect
-   
-**To troubleshoot error codes:**
-
-If Tableau cannot locate the SQream JDBC driver, do the following:
-
- 1. Verify that the JDBC driver is located in the correct directory:
- 
-   * **Tableau Desktop on Windows:** *C:\Program Files\Tableau\Drivers*
-   * **Tableau Desktop on MacOS:** *~/Library/Tableau/Drivers*
-   * **Tableau on Linux**: */opt/tableau/tableau_driver/jdbc*
-   
- 2. Find the file path for the JDBC driver and add it to the Java classpath:
-   
-   * **For Linux** - ``export CLASSPATH=;$CLASSPATH``
-
-        ::
-		
-   * **For Windows** - add an environment variable for the classpath:
- 
-	.. image:: /_static/images/third_party_connectors/tableau/envrionment_variable_for_classpath.png
-
-If you experience issues after restarting Tableau, see the `SQream support portal `_.
diff --git a/third_party_tools/client_platforms/talend.rst b/third_party_tools/client_platforms/talend.rst
deleted file mode 100644
index 6e34a7168..000000000
--- a/third_party_tools/client_platforms/talend.rst
+++ /dev/null
@@ -1,177 +0,0 @@
-.. _talend:
-
-*********************************
-Connecting to SQream Using Talend
-*********************************
-.. _top:
-
-Overview
-=================
- 
-This page describes how to use Talend to interact with a SQream DB cluster. The Talend connector is used for reading data from a SQream DB cluster and loading data into SQream DB. 
-
-In addition, this page provides a viability report on Talend's compatibility with SQream DB for stakeholders.
-
-It includes the following:
-
-* :ref:`A Quick Start guide `
-* :ref:`Information about supported SQream drivers `
-* :ref:`Supported data sources ` and :ref:`tool and operating system versions `
-* :ref:`A description of known issues `
-* :ref:`Related links `
-
-About Talend
-=================
-Talend is an open-source data integration platform. It provides various software and services for Big Data integration and management, enterprise application integration, data quality and cloud storage.
-
-For more information about Talend, see `Talend `_.
-
-.. _quickstart_guide:
-
-Quick Start Guide
-=======================
-
-Creating a New Metadata JDBC DB Connection
-------------------------------------------
-**To create a new metadata JDBC DB connection:**
-
-1. In the **Repository** panel, navigate to **Metadata** and right-click **Db connections**.
-
-::
-
-2. Select **Create connection**.
-
-3. In the **Name** field, type a name.
-
-The name cannot contain spaces.
-
-4. In the **Purpose** field, type a purpose and click **Next**. You cannot go to the next step until you define both a Name and a Purpose.
-
-::
-
-5. In the **DB Type** field, select **JDBC**.
-
-::
-
-6. In the **JDBC URL** field, type the relevant connection string.
-
-   For connection string examples, see `Connection Strings `_.
-   
-7. In the **Drivers** field, click the **Add** button.
-
-   The **newLine** entry is added.
-
-8. On the **newLine** entry, click the ellipsis.
-
-.. image:: /_static/images/Third_Party_Connectors/Creating_a_New_Metadata_JDBC_DB_Connection_8.png
-
-The **Module** window is displayed.
-
-9. From the Module window, select **Artifact repository(local m2/nexus)** and select **Install a new module**.
-
-::
-
-10. Click the ellipsis.
-
-.. image:: /_static/images/Third_Party_Connectors/Creating_a_New_Metadata_JDBC_DB_Connection_9.5.png
-
-A file browser is displayed.	
-
-11. Navigate to a **JDBC jar file** (such as **sqream-jdbc-4.4.0.jar**) and click **Open**.
-
-::
-
-12. Click **Detect the module install status**.
-
-::
-
-13. Click **OK**.
-
-The JDBC that you selected is displayed in the **Driver** field.
-
-14. Click **Select class name**.
-
-::
-
-15. Click **Test connection**.
-
-If a driver class is not found (for example, you didn't select a JDBC jar file), the following error message is displayed:
-
-After creating a new metadata JDBC DB connection, you can do the following:
-
- * Use your new metadata connection.
- * Drag it to the **job** screen.
- * Build Talend components.
- 
-For more information on loading data from JSON files into Talend Open Studio, see `How to Load Data from JSON Files in Talend `_.
-
-:ref:`Back to top `
-
-.. _supported_sqream_drivers:
- 
-Supported SQream Drivers
-========================
-
-The following list shows the supported SQream drivers and versions:
-
-* **JDBC** - Version 4.3.3 and higher.
-* **ODBC** - Version 4.0.0. This version requires a Bridge to connect. For more information on the required Bridge, see `Connecting Talend on Windows to an ODBC Database `_.
-
-:ref:`Back to top `
-
-.. _supported_data_sources:
-
-Supported Data Sources
-============================
-Talend Cloud connectors let you create reusable connections with a wide variety of systems and environments, such as those shown below. This lets you access and read records from a wide range of data sources.
-
-* **Connections:** Connections are environments or systems for storing datasets, including databases, file systems, distributed systems and platforms. Because these systems are reusable, you only need to establish connectivity with them once.
-
-* **Datasets:** Datasets include database tables, file names, topics (Kafka), queues (JMS) and file paths (HDFS). For more information on the complete list of connectors and datasets that Talend supports, see `Introducing Talend Connectors `_.
-
-:ref:`Back to top `
-
-.. _supported_tools_os_sys_versions:
-
-Supported Tool and Operating System Versions
-============================================
-Talend was tested using the following:
-
-* Talend version 7.4.1M6
-* Windows 10
-* SQream version 2021.1
-* JDBC version 
-
-:ref:`Back to top ` 
-
-.. _known_issues:
-
-Known Issues
-===========================  
-The list below describes known issues as of 6/1/2021:
-
-* Schemas not displayed for tables with identical names.
-
-:ref:`Back to top `
-
-.. _related_links:
-
-Related Links
-===============
-The following is a list of links relevant to the Talend connector:
-
-* `Talend Home page `_
-* `Talend Community page `_
-* `Talend BugTracker `_
-
-Download Links
-==================
-The following is a list of download links relevant to the Talend connector:
-
-* `Talend Open Studio for Big Data `_
-* `Latest version of SQream JDBC `_
-
-:ref:`Back to top `
-	 
-.. contents:: In this topic:
-   :local:
\ No newline at end of file
diff --git a/troubleshooting/index.rst b/troubleshooting/index.rst
index efbcdd412..985008e09 100644
--- a/troubleshooting/index.rst
+++ b/troubleshooting/index.rst
@@ -15,11 +15,6 @@ The **Troubleshooting** page describes solutions to the following issues:
    examining_logs
    identifying_configuration_issues
    lock_related_issues
-   sas_viya_related_issues
-   tableau_related_issues
-   solving_code_126_odbc_errors
    log_related_issues
-   node_js_related_issues
    core_dumping_related_issues
-   sqream_sql_installation_related_issues
    information_for_support
\ No newline at end of file
diff --git a/troubleshooting/lock_related_issues.rst b/troubleshooting/lock_related_issues.rst
index 1a15858ec..ed1e21579 100644
--- a/troubleshooting/lock_related_issues.rst
+++ b/troubleshooting/lock_related_issues.rst
@@ -26,4 +26,6 @@ If the locks still appear in the :ref:`show_locks` utility, we can force remove
    t=> SELECT RELEASE_DEFUNCT_LOCKS();
    executed
 
+.. tip:: ``RELEASE_DEFUNCT_LOCKS`` has an optional input parameter to specify the number of seconds, after which ``RELEASE_DEFUNCT_LOCKS`` will execute.
+
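+For example, a minimal sketch of passing that parameter, assuming it is given positionally as a number of seconds (the value ``30`` is illustrative only):
+
+.. code-block:: psql
+
+   t=> SELECT RELEASE_DEFUNCT_LOCKS(30);
+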
 .. warning:: This operation can cause some statements to fail on the specific worker on which they are queued. This is intended as a "last resort" to solve stale locks.
\ No newline at end of file
diff --git a/troubleshooting/log_related_issues.rst b/troubleshooting/log_related_issues.rst
index a260f59d5..a259bff35 100644
--- a/troubleshooting/log_related_issues.rst
+++ b/troubleshooting/log_related_issues.rst
@@ -18,7 +18,7 @@ Assuming logs are stored at ``/home/rhendricks/sqream_storage/logs/``, a databas
 
    CREATE FOREIGN TABLE logs 
    (
-     start_marker      VARCHAR(4),
+     start_marker      TEXT(4),
      row_id            BIGINT,
      timestamp         DATETIME,
      message_level     TEXT,
@@ -32,7 +32,7 @@ Assuming logs are stored at ``/home/rhendricks/sqream_storage/logs/``, a databas
      service_name      TEXT,
      message_type_id   INT,
      message           TEXT,
-     end_message       VARCHAR(5)
+     end_message       TEXT(5)
    )
    WRAPPER csv_fdw
    OPTIONS
@@ -81,8 +81,8 @@ Finding Fatal Errors
 .. code-block:: psql
 
    t=> SELECT message FROM logs WHERE message_type_id=1010;
-   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/leveldb/LOCK: Resource temporarily unavailable
-   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/leveldb/LOCK: Resource temporarily unavailable
+   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/rocksdb/LOCK: Resource temporarily unavailable
+   Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/rocksdb/LOCK: Resource temporarily unavailable
    Mismatch in storage version, upgrade is needed,Storage version: 25, Server version is: 26
    Mismatch in storage version, upgrade is needed,Storage version: 25, Server version is: 26
    Internal Runtime Error,open cluster metadata database:IO error: lock /home/rhendricks/sqream_storage/LOCK: Resource temporarily unavailable
diff --git a/troubleshooting/node_js_related_issues.rst b/troubleshooting/node_js_related_issues.rst
deleted file mode 100644
index b3b95b2ed..000000000
--- a/troubleshooting/node_js_related_issues.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-.. _node_js_related_issues:
-
-***********************
-Node.js Related Issues
-***********************
-The **Node.js Related Issues** page describes how to resolve the following common issues:
-
-.. toctree::
-   :maxdepth: 2
-   :glob:
-   :titlesonly:
-
-Preventing Heap Out of Memory Errors
---------------------------------------------
-
-Some workloads may cause Node.js to fail with the following error:
-
-.. code-block:: none
-
-   FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
-
-To prevent this error, modify the heap size configuration by setting the ``--max-old-space-size`` run flag.
-
-For example, set the space size to 2GB:
-
-.. code-block:: console
-   
-   $ node --max-old-space-size=2048 my-application.js
-
-Providing Support for BIGINT Data Type
--------------------------------------
-
-The Node.js connector supports fetching ``BIGINT`` values from SQream DB. However, some applications may encounter an error when trying to serialize those values.
-
-The error that appears is:
-
-.. code-block:: none
-   
-   TypeError: Do not know how to serialize a BigInt
-
-This is because the JSON specification does not support ``BIGINT`` values, even when they are supported by JavaScript engines.
-
-To resolve this issue, convert objects with ``BIGINT`` values to strings before serializing, and convert them back after deserializing.
-
-For example:
-
-.. code-block:: javascript
-
-   const rows = [{test: 1n}]
-   const json = JSON.stringify(rows, (key, value) =>
-     typeof value === 'bigint'
-         ? value.toString()
-         : value // return everything else unchanged
-   );
-   console.log(json); // [{"test":"1"}]
\ No newline at end of file
diff --git a/troubleshooting/remedying_slow_queries.rst b/troubleshooting/remedying_slow_queries.rst
index 8a109f0c0..9bd05b324 100644
--- a/troubleshooting/remedying_slow_queries.rst
+++ b/troubleshooting/remedying_slow_queries.rst
@@ -33,7 +33,7 @@ The following table is a checklist you can use to identify the cause of your slo
      - 
          Use ``SELECT show_cluster_nodes();`` to list the active cluster workers.
          
-         If the worker list is incomplete, follow the :ref:`cluster troubleshooting` section below.
+         If the worker list is incomplete, locate and start the missing worker(s).
          
          If all workers are up, continue to step 4.
    * - 4
@@ -73,7 +73,7 @@ The following table is a checklist you can use to identify the cause of your slo
      - Check free memory across hosts
      - 
          #. Check free memory across the hosts by running ``$ free -th`` from the terminal.
-         #. If the machine has less than 5% free memory, consider **lowering** the ``limitQueryMemoryGB`` and ``spoolMemoryGB`` settings. Refer to the :ref:`configuration` guide.
+         #. If the machine has less than 5% free memory, consider **lowering** the ``limitQueryMemoryGB`` and ``spoolMemoryGB`` settings. Refer to the :ref:`spooling` guide.
          #. If the machine has a lot of free memory, consider **increasing** the ``limitQueryMemoryGB`` and ``spoolMemoryGB`` settings.
          
          If performance does not improve, contact SQream support for more help.
\ No newline at end of file
diff --git a/troubleshooting/sas_viya_related_issues.rst b/troubleshooting/sas_viya_related_issues.rst
deleted file mode 100644
index 6661dec95..000000000
--- a/troubleshooting/sas_viya_related_issues.rst
+++ /dev/null
@@ -1,55 +0,0 @@
-.. _sas_viya_related_issues:
-
-***********************
-SAS Viya Related Issues
-***********************
-
-This section describes the following best practices and troubleshooting procedures when connecting to SQream using SAS Viya:
-
-.. contents::
-   :local:
-
-Inserting Only Required Data
----------------------------
-When using SAS Viya, SQream recommends using only data that you need, as described below:
-
-* Insert only the data sources you need into SAS Viya, excluding tables that don’t require analysis.
-
-    ::
-
-
-* To increase query performance, add filters before analyzing. Every modification you make while analyzing data queries the SQream database, sometimes several times. Adding filters to the data source before exploring limits the amount of data analyzed and increases query performance.
-
-
-Creating a Separate Service for SAS Viya
-----------------------------------------
-SQream recommends creating a separate service for SAS Viya with the DWLM. This reduces the impact that SAS Viya has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
-
-Locating the SQream JDBC Driver
--------------------------------
-In some cases, SAS Viya cannot locate the SQream JDBC driver, generating the following error message:
-
-.. code-block:: text
-
-   java.lang.ClassNotFoundException: com.sqream.jdbc.SQDriver
-
-**To locate the SQream JDBC driver:**
-
-1. Verify that you have placed the JDBC driver in a directory that SAS Viya can access.
-
-    ::
-
-
-2. Verify that the classpath in your SAS program is correct, and that SAS Viya can access the file that it references.
-
-    ::
-
-
-3. Restart SAS Viya.
-
-For more troubleshooting assistance, see the `SQream Support Portal `_.
-
-
-Supporting TEXT
----------------
-In SAS Viya versions lower than 4.0, casting ``TEXT`` to ``CHAR`` changes the size to 1,024, such as when creating a table including a ``TEXT`` column. This is resolved by casting ``TEXT`` into ``CHAR`` when using the JDBC driver.
diff --git a/troubleshooting/solving_code_126_odbc_errors.rst b/troubleshooting/solving_code_126_odbc_errors.rst
deleted file mode 100644
index 2e652b113..000000000
--- a/troubleshooting/solving_code_126_odbc_errors.rst
+++ /dev/null
@@ -1,14 +0,0 @@
-.. _solving_code_126_odbc_errors:
-
-******************************
-Solving "Code 126" ODBC Errors
-******************************
-After installing the ODBC driver, you may experience the following error: 
-
-.. code-block:: none
-
-   The setup routines for the SQreamDriver64 ODBC driver could not be loaded due to system error
-   code 126: The specified module could not be found.
-   (c:\Program Files\SQream Technologies\ODBC Driver\sqreamOdbc64.dll)
-
-This is an issue with the Visual Studio Redistributable packages. Verify you've correctly installed them, as described in the :ref:`Visual Studio 2015 Redistributables ` section above.
diff --git a/troubleshooting/sqream_sql_installation_related_issues.rst b/troubleshooting/sqream_sql_installation_related_issues.rst
deleted file mode 100644
index 8225a2f18..000000000
--- a/troubleshooting/sqream_sql_installation_related_issues.rst
+++ /dev/null
@@ -1,33 +0,0 @@
-.. _sqream_sql_installation_related_issues:
-
-**************************************
-SQream SQL Installation Related Issues
-**************************************
-
-The **SQream SQL Installation Related Issues** page describes how to resolve SQream SQL installation related issues.
-
-Upon running SQream SQL for the first time, you may get the following error: ``error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory``.
-
-Solving this error requires installing the ``ncurses`` or ``libtinfo`` libraries, depending on your operating system.
-
-* Ubuntu:
-
-   #. Install ``libtinfo``:
-      
-      ``$ sudo apt-get install -y libtinfo``
-   #. Depending on your Ubuntu version, you may need to create a symbolic link to the newer libtinfo that was installed.
-   
-      For example, if ``libtinfo`` was installed as ``/lib/x86_64-linux-gnu/libtinfo.so.6.2``:
-      
-      ``$ sudo ln -s /lib/x86_64-linux-gnu/libtinfo.so.6.2 /lib/x86_64-linux-gnu/libtinfo.so.5``
-      
-* CentOS / RHEL:
-
-   #. Install ``ncurses``:
-   
-      ``$ sudo yum install -y ncurses-libs``
-   #. Depending on your RHEL version, you may need to create a symbolic link to the newer libtinfo that was installed.
-   
-      For example, if ``libtinfo`` was installed as ``/usr/lib64/libtinfo.so.6``:
-      
-      ``$ sudo ln -s /usr/lib64/libtinfo.so.6 /usr/lib64/libtinfo.so.5``
\ No newline at end of file
diff --git a/troubleshooting/tableau_related_issues.rst b/troubleshooting/tableau_related_issues.rst
deleted file mode 100644
index 99b4a04dd..000000000
--- a/troubleshooting/tableau_related_issues.rst
+++ /dev/null
@@ -1,73 +0,0 @@
-.. _tableau_related_issues:
-
-***********************
-Tableau Related Issues
-***********************
-This section describes the following best practices and troubleshooting procedures when connecting to Tableau:
-
-.. contents::
-   :local:
-
-Inserting Only Required Data
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-When using Tableau, SQream recommends using only data that you need, as described below:
-
-* Insert only the data sources you need into Tableau, excluding tables that don't require analysis.
-
-   ::
-
-* To increase query performance, add filters before analyzing. Every modification you make while analyzing data queries the SQream database, sometimes several times. Adding filters to the data source before exploring limits the amount of data analyzed and increases query performance.
-
-Using Tableau's Table Query Syntax
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Dragging your desired tables into the main area in Tableau builds queries based on Tableau's own syntax. This helps ensure good performance, while using views or custom SQL may degrade performance. In addition, SQream recommends using :ref:`create_view` to create pre-optimized views that your data sources point to. 
-
-Creating a Separate Service for Tableau
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-SQream recommends creating a separate service for Tableau with the DWLM. This reduces the impact that Tableau has on other applications and processes, such as ETL. In addition, this works in conjunction with the load balancer to ensure good performance.
-
-Error Saving Large Quantities of Data as Files
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-An **FAB9A2C5** error can occur when saving large quantities of data as files. If you receive this error, set the ``fetchSize`` parameter in your connection string to ``1``, as shown below:
-
-.. code-block:: text
-
-   jdbc:Sqream:///;user=;password=sqream;[; fetchSize=1...]
-   
-For more information on troubleshooting error **FAB9A2C5**, see the `Tableau Knowledge Base `_.
-
-Troubleshooting Workbook Performance Before Deploying to the Tableau Server
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Tableau has a built-in `performance recorder `_ that shows how time is being spent. If you're seeing slow performance, this could be the result of a misconfiguration such as setting concurrency too low.
-
-Use the Tableau Performance Recorder for viewing the performance of queries run by Tableau. You can use this information to identify queries that can be optimized by using views.
-
-Troubleshooting Error Codes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Tableau may be unable to locate the SQream JDBC driver. The following message is displayed when Tableau cannot locate the driver:
-
-.. code-block:: console
-     
-   Error Code: 37CE01A3, No suitable driver installed or the URL is incorrect
-   
-**To troubleshoot error codes:**
-
-If Tableau cannot locate the SQream JDBC driver, do the following:
-
- 1. Verify that the JDBC driver is located in the correct directory:
- 
-   * **Tableau Desktop on Windows:** *C:\Program Files\Tableau\Drivers*
-   * **Tableau Desktop on MacOS:** *~/Library/Tableau/Drivers*
-   * **Tableau on Linux**: */opt/tableau/tableau_driver/jdbc*
-   
- 2. Find the file path for the JDBC driver and add it to the Java classpath:
-   
-   * **For Linux** - ``export CLASSPATH=;$CLASSPATH``
-
-        ::
-		
-   * **For Windows** - add an environment variable for the classpath:
- 
-	
-
-If you experience issues after restarting Tableau, see the `SQream support portal `_.