diff --git a/content/Development/desingdocs/column-statistics-in-hive.md b/content/Development/desingdocs/column-statistics-in-hive.md index e3562e78..e6a9c3f1 100644 --- a/content/Development/desingdocs/column-statistics-in-hive.md +++ b/content/Development/desingdocs/column-statistics-in-hive.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Column Statistics in Hive -{{< toc >}} - ### **Introduction** This document describes changes to a) HiveQL, b) metastore schema, and c) metastore Thrift API to support column level statistics in Hive. Please note that the document doesn’t describe the changes needed to persist histograms in the metastore yet. diff --git a/content/Development/desingdocs/design.md b/content/Development/desingdocs/design.md index eeaf56e1..41033e53 100644 --- a/content/Development/desingdocs/design.md +++ b/content/Development/desingdocs/design.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page contains details about the Hive design and architecture. A brief technical report about Hive is available at [hive.pdf]({{< ref "#hive-pdf" >}}). -{{< toc >}} - ## Hive Architecture Figure 1 diff --git a/content/Development/desingdocs/dynamicpartitions.md b/content/Development/desingdocs/dynamicpartitions.md index 5e32c80b..04675207 100644 --- a/content/Development/desingdocs/dynamicpartitions.md +++ b/content/Development/desingdocs/dynamicpartitions.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : DynamicPartitions -{{< toc >}} - ## Documentation This is the design document for dynamic partitions in Hive. 
Usage information is also available: diff --git a/content/Development/desingdocs/enabling-grpc-in-hive-metastore.md b/content/Development/desingdocs/enabling-grpc-in-hive-metastore.md index a793bd62..d99cb6b8 100644 --- a/content/Development/desingdocs/enabling-grpc-in-hive-metastore.md +++ b/content/Development/desingdocs/enabling-grpc-in-hive-metastore.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Enabling gRPC in Hive/Hive Metastore (Proposal) -{{< toc >}} - ## Contacts Cameron Moberg (Google), Zhou Fang (Google), Feng Lu (Google), Thejas Nair (Cloudera), Vihang Karajgaonkar (Cloudera), Naveen Gangam (Cloudera) diff --git a/content/Development/desingdocs/filterpushdowndev.md b/content/Development/desingdocs/filterpushdowndev.md index 57b4a20c..fa74e597 100644 --- a/content/Development/desingdocs/filterpushdowndev.md +++ b/content/Development/desingdocs/filterpushdowndev.md @@ -7,8 +7,6 @@ date: 2024-12-12 This document explains how we are planning to add support in Hive's optimizer for pushing filters down into physical access methods. This is an important optimization for minimizing the amount of data scanned and processed by an access method (e.g. for an indexed key lookup), as well as reducing the amount of data passed into Hive for further query evaluation. -{{< toc >}} - ## Use Cases Below are the main use cases we are targeting. diff --git a/content/Development/desingdocs/groupbywithrollup.md b/content/Development/desingdocs/groupbywithrollup.md index 9a84ff61..1050ec95 100644 --- a/content/Development/desingdocs/groupbywithrollup.md +++ b/content/Development/desingdocs/groupbywithrollup.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Group By With Rollup -{{< toc >}} - ## Terminology * (No) Map Aggr: Shorthand for whether the configuration variable hive.map.aggr is set to true or false, meaning mapside aggregation is allowed or not respectively. 
diff --git a/content/Development/desingdocs/hbasebulkload.md b/content/Development/desingdocs/hbasebulkload.md index 26fb7657..d4a0ff43 100644 --- a/content/Development/desingdocs/hbasebulkload.md +++ b/content/Development/desingdocs/hbasebulkload.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page explains how to use Hive to bulk load data into a new (empty) HBase table per [HIVE-1295](https://issues.apache.org/jira/browse/HIVE-1295). (If you're not using a build which contains this functionality yet, you'll need to build from source and make sure this patch and HIVE-1321 are both applied.) -{{< toc >}} - ## Overview Ideally, bulk load from Hive into HBase would be part of [HBaseIntegration]({{< ref "hbaseintegration" >}}), making it as simple as this: diff --git a/content/Development/desingdocs/hbasemetastoredevelopmentguide.md b/content/Development/desingdocs/hbasemetastoredevelopmentguide.md index 9863af27..5ebefd55 100644 --- a/content/Development/desingdocs/hbasemetastoredevelopmentguide.md +++ b/content/Development/desingdocs/hbasemetastoredevelopmentguide.md @@ -11,8 +11,6 @@ Guide for contributors to the metastore on hbase development work. Umbrella JIR This work is discontinued and the code is removed in release 3.0.0 ([HIVE-17234](https://issues.apache.org/jira/browse/HIVE-17234)). -{{< toc >}} - # Building You will need to download the source for Tephra and build it from the develop branch.  You need Tephra 0.5.1-SNAPSHOT.  You can get Tephra from [Cask's github](https://github.com/caskdata/tephra).  Switch to the branch develop and doing 'mvn install' will build the version you need. 
diff --git a/content/Development/desingdocs/hive-across-multiple-data-centers.md b/content/Development/desingdocs/hive-across-multiple-data-centers.md index 3eb2ca76..47ba7f8f 100644 --- a/content/Development/desingdocs/hive-across-multiple-data-centers.md +++ b/content/Development/desingdocs/hive-across-multiple-data-centers.md @@ -7,8 +7,6 @@ date: 2024-12-12 This project has been abandoned. We're leaving the design doc here in case someone decides to attempt this project in the future. -{{< toc >}} - ## Use Cases Inside facebook, we are running out of power inside a data center (physical cluster), and we have a need to have a bigger cluster. diff --git a/content/Development/desingdocs/hive-on-spark-join-design-master.md b/content/Development/desingdocs/hive-on-spark-join-design-master.md index 2eaf25e8..48b01157 100644 --- a/content/Development/desingdocs/hive-on-spark-join-design-master.md +++ b/content/Development/desingdocs/hive-on-spark-join-design-master.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Hive on Spark: Join Design Master -{{< toc >}} - ## Purpose and Prerequisites The purpose of this document is to summarize the findings of all the research of different joins and describe a unified design to attack the problem in Spark.  It will identify the optimization processors will be involved and their responsibilities. diff --git a/content/Development/desingdocs/hive-on-tez.md b/content/Development/desingdocs/hive-on-tez.md index 1a4267ae..b8119ec8 100644 --- a/content/Development/desingdocs/hive-on-tez.md +++ b/content/Development/desingdocs/hive-on-tez.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Hive on Tez -{{< toc >}} - # Overview [Tez](http://tez.apache.org/) is a new application framework built on Hadoop Yarn that can execute complex directed acyclic graphs of general data processing tasks. In many ways it can be thought of as a more flexible and powerful successor of the map-reduce framework. 
diff --git a/content/Development/desingdocs/hivereplicationdevelopment.md b/content/Development/desingdocs/hivereplicationdevelopment.md index 627d7b81..2bc5e1fc 100644 --- a/content/Development/desingdocs/hivereplicationdevelopment.md +++ b/content/Development/desingdocs/hivereplicationdevelopment.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HiveReplicationDevelopment -{{< toc >}} - # Introduction Replication in the context of databases and warehouses is the process of duplication of entities from one warehouse to another. This can be at the broader level of an entire database, or at a smaller level such as a table or partition. The goal of replication is to have a replica which changes whenever the base entity changes. diff --git a/content/Development/desingdocs/hivereplicationv2development.md b/content/Development/desingdocs/hivereplicationv2development.md index 1dd351a8..936198a9 100644 --- a/content/Development/desingdocs/hivereplicationv2development.md +++ b/content/Development/desingdocs/hivereplicationv2development.md @@ -9,8 +9,6 @@ This document describes the second version of Hive Replication. Please refer to This work is under development and interfaces are subject to change. This has been designed for use in conjunction with external orchestration tools, which would be responsible for co-ordinating the right sequence of commands between source and target clusters, fault tolerance/failure handling, and also providing correct configuration options that are necessary to be able to do cross cluster replication. -{{< toc >}} - # Version information As of Hive 3.0.0 release : only managed table replication where Hive user owns the table contents is supported. External tables, ACID tables, statistics and constraint replication are not supported. 
diff --git a/content/Development/desingdocs/hybrid-grace-hash-join-v1-0.md b/content/Development/desingdocs/hybrid-grace-hash-join-v1-0.md index 2a5123c4..e38d31d8 100644 --- a/content/Development/desingdocs/hybrid-grace-hash-join-v1-0.md +++ b/content/Development/desingdocs/hybrid-grace-hash-join-v1-0.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Hybrid Hybrid Grace Hash Join, v1.0 -{{< toc >}} - # Overview We are proposing an enhanced hash join algorithm called “hybrid hybrid grace hash join”. We can benefit from this feature as illustrated below: diff --git a/content/Development/desingdocs/indexdev-bitmap.md b/content/Development/desingdocs/indexdev-bitmap.md index d767cc1b..5ae88a28 100644 --- a/content/Development/desingdocs/indexdev-bitmap.md +++ b/content/Development/desingdocs/indexdev-bitmap.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Bitmap Indexing -{{< toc >}} - ## Introduction This document explains the proposed design for adding a bitmap index handler (). diff --git a/content/Development/desingdocs/indexdev.md b/content/Development/desingdocs/indexdev.md index fbf9f278..a24afa97 100644 --- a/content/Development/desingdocs/indexdev.md +++ b/content/Development/desingdocs/indexdev.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Indexes -{{< toc >}} - ## Indexing Is Removed since 3.0 There are alternate options which might work similarily to indexing: diff --git a/content/Development/desingdocs/listbucketing.md b/content/Development/desingdocs/listbucketing.md index 87fd2aa6..81d7bdd6 100644 --- a/content/Development/desingdocs/listbucketing.md +++ b/content/Development/desingdocs/listbucketing.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : ListBucketing -{{< toc >}} - # Goal The top level problem is as follows: diff --git a/content/Development/desingdocs/llap.md b/content/Development/desingdocs/llap.md index 80e6079a..09c5b129 100644 --- a/content/Development/desingdocs/llap.md +++ b/content/Development/desingdocs/llap.md @@ -8,8 
+8,6 @@ date: 2024-12-12 Live Long And Process (LLAP) functionality was added in Hive 2.0 ([HIVE-7926](https://issues.apache.org/jira/browse/HIVE-7926) and associated tasks). [HIVE-9850](https://issues.apache.org/jira/browse/HIVE-9850) links documentation, features, and issues for this enhancement. For configuration of LLAP, see the LLAP Section of [Configuration Properties]({{< ref "#configuration-properties" >}}). -{{< toc >}} - ## Overview Hive has become significantly faster thanks to various features and improvements that were built by the community in recent years, including [Tez]({{< ref "hive-on-tez" >}}) and [Cost-based-optimization]({{< ref "cost-based-optimization-in-hive" >}}). The following were needed to take Hive to the next level: diff --git a/content/Development/desingdocs/locking.md b/content/Development/desingdocs/locking.md index 5d427fdd..284cb99e 100644 --- a/content/Development/desingdocs/locking.md +++ b/content/Development/desingdocs/locking.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Locking -{{< toc >}} - # Hive Concurrency Model ## Use Cases diff --git a/content/Development/desingdocs/mapjoin-and-partition-pruning.md b/content/Development/desingdocs/mapjoin-and-partition-pruning.md index a9198b62..63ddf5e6 100644 --- a/content/Development/desingdocs/mapjoin-and-partition-pruning.md +++ b/content/Development/desingdocs/mapjoin-and-partition-pruning.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : MapJoin and Partition Pruning -{{< toc >}} - # Overview In Hive, Map-Join is a technique that materializes data for all tables involved in the join except for the largest table and then large table is streamed over the materialized data from small tables. Map-Join is often a good join approach for star-schema joins where the fact table will be streamed over materialized dimension tables. 
diff --git a/content/Development/desingdocs/mapjoinoptimization.md b/content/Development/desingdocs/mapjoinoptimization.md index 70ce551c..74ed9438 100644 --- a/content/Development/desingdocs/mapjoinoptimization.md +++ b/content/Development/desingdocs/mapjoinoptimization.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : MapJoinOptimization -{{< toc >}} - # 1. Map Join Optimization ## 1.1 Using Distributed Cache to Propagate Hashtable File diff --git a/content/Development/desingdocs/outerjoinbehavior.md b/content/Development/desingdocs/outerjoinbehavior.md index 6b29d4fd..842cf610 100644 --- a/content/Development/desingdocs/outerjoinbehavior.md +++ b/content/Development/desingdocs/outerjoinbehavior.md @@ -7,8 +7,6 @@ date: 2024-12-12 # Hive Outer Join Behavior -{{< toc >}} - This document is based on a writeup of [DB2 Outer Join Behavior](http://www.ibm.com/developerworks/data/library/techarticle/purcell/0112purcell.html). The original HTML can be found [here](/attachments/OuterJoinBehavior.html). ## Definitions diff --git a/content/Development/desingdocs/partitionedviews.md b/content/Development/desingdocs/partitionedviews.md index 5adc2da0..bb06d3f0 100644 --- a/content/Development/desingdocs/partitionedviews.md +++ b/content/Development/desingdocs/partitionedviews.md @@ -7,8 +7,6 @@ date: 2024-12-12 This is a followup to [ViewDev]({{< ref "viewdev" >}}) for adding partition-awareness to views. -{{< toc >}} - # Use Cases 1. An administrator wants to create a set of views as a table/column renaming layer on top of an existing set of base tables, without breaking any existing dependencies on those tables. To read-only users, the views should behave exactly the same as the underlying tables in every way. Among other things, this means users should be able to browse available partitions. 
diff --git a/content/Development/desingdocs/statsdev.md b/content/Development/desingdocs/statsdev.md index 017d11b8..79848979 100644 --- a/content/Development/desingdocs/statsdev.md +++ b/content/Development/desingdocs/statsdev.md @@ -7,8 +7,6 @@ date: 2024-12-12 This document describes the support of statistics for Hive tables (see [HIVE-33](http://issues.apache.org/jira/browse/HIVE-33)). -{{< toc >}} - ## Motivation Statistics such as the number of rows of a table or partition and the histograms of a particular interesting column are important in many ways. One of the key use cases of statistics is query optimization. Statistics serve as the input to the cost functions of the optimizer so that it can compare different plans and choose among them. Statistics may sometimes meet the purpose of the users' queries. Users can quickly get the answers for some of their queries by only querying stored statistics rather than firing long-running execution plans. Some examples are getting the quantile of the users' age distribution, the top 10 apps that are used by people, and the number of distinct sessions. diff --git a/content/Development/desingdocs/theta-join.md b/content/Development/desingdocs/theta-join.md index 35710b7f..4168c8a9 100644 --- a/content/Development/desingdocs/theta-join.md +++ b/content/Development/desingdocs/theta-join.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Theta Join -{{< toc >}} - ## Preliminaries ### Overview diff --git a/content/Development/desingdocs/top-k-stats.md b/content/Development/desingdocs/top-k-stats.md index 06f9edd1..0e79b402 100644 --- a/content/Development/desingdocs/top-k-stats.md +++ b/content/Development/desingdocs/top-k-stats.md @@ -7,8 +7,6 @@ date: 2024-12-12 This document is an addition to [Statistics in Hive](https://hive.apache.org/development/desingdocs/statsdev). It describes the support of collecting column level top K values for Hive tables (see [HIVE-3421](https://issues.apache.org/jira/browse/HIVE-3421)). 
-{{< toc >}} - ## Scope In addition to the partition statistics, column level top K values can also be estimated for Hive tables. diff --git a/content/Development/desingdocs/vectorized-query-execution.md b/content/Development/desingdocs/vectorized-query-execution.md index 552e16ed..655c4ce7 100644 --- a/content/Development/desingdocs/vectorized-query-execution.md +++ b/content/Development/desingdocs/vectorized-query-execution.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Vectorized Query Execution -{{< toc >}} - # Introduction Vectorized query execution is a Hive feature that greatly reduces the CPU usage for typical query operations like scans, filters, aggregates, and joins. A standard query execution system processes one row at a time. This involves long code paths and significant metadata interpretation in the inner loop of execution. Vectorized query execution streamlines operations by processing a block of 1024 rows at a time. Within the block, each column is stored as a vector (an array of a primitive data type). Simple operations like arithmetic and comparisons are done by quickly iterating through the vectors in a tight loop, with no or very few function calls or conditional branches inside the loop. These loops compile in a streamlined way that uses relatively few instructions and finishes each instruction in fewer clock cycles, on average, by effectively using the processor pipeline and cache memory. A detailed design document is attached to the vectorized query execution JIRA, at . diff --git a/content/Development/desingdocs/viewdev.md b/content/Development/desingdocs/viewdev.md index 6982ef92..151abeec 100644 --- a/content/Development/desingdocs/viewdev.md +++ b/content/Development/desingdocs/viewdev.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Views -{{< toc >}} - ## Use Cases Views () are a standard DBMS feature and their uses are well understood. 
A typical use case might be to create an interface layer with a consistent entity/attribute naming scheme on top of an existing set of inconsistently named tables, without having to cause disruption due to direct modification of the tables. More advanced use cases would involve predefined filters, joins, aggregations, etc for simplifying query construction by end users, as well as sharing common definitions within ETL pipelines. diff --git a/content/Development/gettingstarted-latest.md b/content/Development/gettingstarted-latest.md index 64df71dc..db2cd551 100644 --- a/content/Development/gettingstarted-latest.md +++ b/content/Development/gettingstarted-latest.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : GettingStarted -{{< toc >}} - ## Installation and Configuration You can install a stable release of Hive by downloading a tarball, or you can download the source code and build Hive from that. diff --git a/content/Development/qtest.md b/content/Development/qtest.md index 4c36a23b..076c786a 100644 --- a/content/Development/qtest.md +++ b/content/Development/qtest.md @@ -26,8 +26,6 @@ draft: false Query File Test is a JUnit-based integration test suite for Apache Hive. Developers write any SQL; the testing framework runs it and verifies the result and output. -{{< toc >}} - ## Tutorial: How to run a specific test case ### Preparation diff --git a/content/community/bylaws.md b/content/community/bylaws.md index f8e8c438..25501670 100644 --- a/content/community/bylaws.md +++ b/content/community/bylaws.md @@ -11,8 +11,6 @@ Hive is a project of the [Apache Software Foundation](http://www.apache.org/foun Hive is typical of Apache projects in that it operates under a set of principles, known collectively as the 'Apache Way'. If you are new to Apache development, please refer to the [Incubator Project](http://incubator.apache.org/) for more information on how Apache projects operate. 
-{{< toc >}} - ## Roles and Responsibilities Apache projects define a set of roles with associated rights and responsibilities. These roles govern what tasks an individual may perform within the project. The roles are defined in the following sections. diff --git a/content/community/resources/developerguide.md b/content/community/resources/developerguide.md index c31b50c8..518f7c78 100644 --- a/content/community/resources/developerguide.md +++ b/content/community/resources/developerguide.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : DeveloperGuide -{{< toc >}} - ## Code Organization and a Brief Architecture ### Introduction diff --git a/content/community/resources/hive-apis-overview.md b/content/community/resources/hive-apis-overview.md index ff332176..8acb62b7 100644 --- a/content/community/resources/hive-apis-overview.md +++ b/content/community/resources/hive-apis-overview.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page aims to catalogue and describe the various public facing APIs exposed by Hive in order to inform developers wishing to integrate their applications and frameworks with the Hive ecosystem. To date the following APIs have been identified in the Hive project that are either considered public, or widely used in the public domain: -{{< toc >}} - # API categories The APIs can be segmented into two conceptual categories: operation based APIs and query based APIs. diff --git a/content/community/resources/hivedeveloperfaq.md b/content/community/resources/hivedeveloperfaq.md index b6ca575d..5f6ece12 100644 --- a/content/community/resources/hivedeveloperfaq.md +++ b/content/community/resources/hivedeveloperfaq.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HiveDeveloperFAQ -{{< toc >}} - ## Developing ### How do I move some files? 
diff --git a/content/community/resources/howtocommit.md b/content/community/resources/howtocommit.md index a857853a..b350383d 100644 --- a/content/community/resources/howtocommit.md +++ b/content/community/resources/howtocommit.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page contains guidelines for committers of the Apache Hive project. (If you're currently a contributor, and are interested in how we add new committers, read [BecomingACommitter]({{< ref "/community/becomingcommitter" >}})) -{{< toc >}} - ## New committers New committers are encouraged to first read Apache's generic committer documentation: diff --git a/content/community/resources/howtocontribute.md b/content/community/resources/howtocontribute.md index 40c92570..f399e6a4 100644 --- a/content/community/resources/howtocontribute.md +++ b/content/community/resources/howtocontribute.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page describes the mechanics of *how* to contribute software to Apache Hive. For ideas about *what* you might contribute, please see open tickets in [Jira](https://issues.apache.org/jira/browse/HIVE). -{{< toc >}} - ## Getting the Source Code First of all, you need the Hive source code. diff --git a/content/community/resources/howtorelease.md b/content/community/resources/howtorelease.md index 939bb0c4..4c2a9ad9 100644 --- a/content/community/resources/howtorelease.md +++ b/content/community/resources/howtorelease.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HowToRelease -{{< toc >}} - ## Introduction This page is prepared for Hive committers. You need committer rights to create a new Hive release. 
diff --git a/content/community/resources/presentations.md b/content/community/resources/presentations.md index ef4d4bc7..565f0a60 100644 --- a/content/community/resources/presentations.md +++ b/content/community/resources/presentations.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Presentations -{{< toc >}} - # Hive Meetups ## January 2016 Hive User Group Meetup diff --git a/content/community/resources/unit-testing-hive-sql.md b/content/community/resources/unit-testing-hive-sql.md index 60919b48..1357a596 100644 --- a/content/community/resources/unit-testing-hive-sql.md +++ b/content/community/resources/unit-testing-hive-sql.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Unit Testing Hive SQL -{{< toc >}} - # Motivations Hive is widely applied as a solution to numerous distinct problem types in the domain of big data. Quite clearly it is often used for the ad hoc querying of large datasets. However it is also used to implement ETL type processes. Unlike ad hoc queries, the Hive SQL written for ETLs has some distinct attributes: diff --git a/content/docs/latest/admin/adminmanual-configuration.md b/content/docs/latest/admin/adminmanual-configuration.md index 9d412f9d..7026f3a9 100644 --- a/content/docs/latest/admin/adminmanual-configuration.md +++ b/content/docs/latest/admin/adminmanual-configuration.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : AdminManual Configuration -{{< toc >}} - ## Configuring Hive A number of configuration variables in Hive can be used by the administrator to change the behavior for their installations and user sessions. 
These variables can be configured in any of the following ways, shown in the order of preference: diff --git a/content/docs/latest/admin/adminmanual-installation.md b/content/docs/latest/admin/adminmanual-installation.md index a84f49e0..e6663e80 100644 --- a/content/docs/latest/admin/adminmanual-installation.md +++ b/content/docs/latest/admin/adminmanual-installation.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : AdminManual Installation -{{< toc >}} - # Installing Hive You can install a stable release of Hive by downloading and unpacking a tarball, or you can download the source code and build Hive using Maven (release 0.13 and later) or Ant (release 0.12 and earlier). diff --git a/content/docs/latest/admin/adminmanual-metastore-3-0-administration.md b/content/docs/latest/admin/adminmanual-metastore-3-0-administration.md index f62f3f88..3f103bc4 100644 --- a/content/docs/latest/admin/adminmanual-metastore-3-0-administration.md +++ b/content/docs/latest/admin/adminmanual-metastore-3-0-administration.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : AdminManual Metastore 3.0 Administration -{{< toc >}} - ## Version Note **This document applies only to the Metastore in Hive 3.0 and later releases.**  For Hive 0, 1, and 2 releases please see the [Metastore Administration]({{< ref "adminmanual-metastore-administration" >}}) document. diff --git a/content/docs/latest/admin/adminmanual-metastore-administration.md b/content/docs/latest/admin/adminmanual-metastore-administration.md index dc3efcb3..9b368a21 100644 --- a/content/docs/latest/admin/adminmanual-metastore-administration.md +++ b/content/docs/latest/admin/adminmanual-metastore-administration.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page only documents the MetaStore in Hive 2.x and earlier. 
For 3.x and later releases please see [AdminManual Metastore 3.0 Administration]({{< ref "adminmanual-metastore-3-0-administration" >}}) -{{< toc >}} - ### Introduction All the metadata for Hive tables and partitions are accessed through the Hive Metastore. Metadata is persisted using [JPOX](http://www.datanucleus.org/) ORM solution (Data Nucleus) so any database that is supported by it can be used by Hive. Most of the commercial relational databases and many open source databases are supported. See the list of [supported databases]({{< ref "#supported-databases" >}}) in section below. diff --git a/content/docs/latest/admin/hive-on-spark-getting-started.md b/content/docs/latest/admin/hive-on-spark-getting-started.md index 6d03a485..b12da289 100644 --- a/content/docs/latest/admin/hive-on-spark-getting-started.md +++ b/content/docs/latest/admin/hive-on-spark-getting-started.md @@ -12,8 +12,6 @@ set hive.execution.engine=spark; ``` Hive on Spark was added in [HIVE-7292](https://issues.apache.org/jira/browse/HIVE-7292). -{{< toc >}} - ## Version Compatibility Hive on Spark is only tested with a specific version of Spark, so a given version of Hive is only guaranteed to work with a specific version of Spark. Other versions of Spark may work with a given version of Hive, but that is not guaranteed. Below is a list of Hive versions and their corresponding compatible Spark versions. diff --git a/content/docs/latest/admin/hive-schema-tool.md b/content/docs/latest/admin/hive-schema-tool.md index 06b41b04..8324905a 100644 --- a/content/docs/latest/admin/hive-schema-tool.md +++ b/content/docs/latest/admin/hive-schema-tool.md @@ -6,8 +6,6 @@ date: 2025-10-14 # Apache Hive : Hive Schema Tool -{{< toc >}} - ## About Schema tool helps to initialise and upgrade metastore database and hive sys schema. 
diff --git a/content/docs/latest/admin/hivederbyservermode.md b/content/docs/latest/admin/hivederbyservermode.md index f7ef7820..2609a753 100644 --- a/content/docs/latest/admin/hivederbyservermode.md +++ b/content/docs/latest/admin/hivederbyservermode.md @@ -9,8 +9,6 @@ Hive in embedded mode has a limitation of one active user at a time. You may wan See [Metadata Store]({{< ref "#metadata-store" >}}) and [Embedded Metastore]({{< ref "#embedded-metastore" >}}) for more information. -{{< toc >}} - ### Download Derby It is suggested you download the version of Derby that ships with Hive. If you have already run Hive in embedded mode, the first line of `derby.log` contains the version. diff --git a/content/docs/latest/admin/hivejdbcinterface.md b/content/docs/latest/admin/hivejdbcinterface.md index 9eb42e42..7dfcf65d 100644 --- a/content/docs/latest/admin/hivejdbcinterface.md +++ b/content/docs/latest/admin/hivejdbcinterface.md @@ -9,8 +9,6 @@ The current JDBC interface for Hive only supports running queries and fetching r To see how the JDBC interface can be used, see [sample code]({{< ref "hiveclient" >}}). -{{< toc >}} - ### Integration with Pentaho 1. Download pentaho report designer from the [pentaho website](http://sourceforge.net/project/showfiles.php?group_id=140317&package_id=192362). diff --git a/content/docs/latest/admin/hiveodbc.md b/content/docs/latest/admin/hiveodbc.md index 3dbce65d..67766fee 100644 --- a/content/docs/latest/admin/hiveodbc.md +++ b/content/docs/latest/admin/hiveodbc.md @@ -8,8 +8,6 @@ date: 2024-12-12 These instructions are for the Hive ODBC driver available in Hive for [HiveServer1]({{< ref "hiveserver" >}}). There is no ODBC driver available for [HiveServer2]({{< ref "setting-up-hiveserver2" >}}) as part of Apache Hive. There are third party ODBC drivers available from different vendors, and most of them seem to be free. 
-{{< toc >}} - ## Introduction The Hive ODBC Driver is a software library that implements the Open Database Connectivity (ODBC) API standard for the Hive database management system, enabling ODBC compliant applications to interact seamlessly (ideally) with Hive through a standard interface. This driver will NOT be built as a part of the typical Hive build process and will need to be compiled and built separately according to the instructions below. diff --git a/content/docs/latest/admin/iceberg-rest-catalog.md b/content/docs/latest/admin/iceberg-rest-catalog.md index 8769dd3d..a71c9258 100644 --- a/content/docs/latest/admin/iceberg-rest-catalog.md +++ b/content/docs/latest/admin/iceberg-rest-catalog.md @@ -5,8 +5,6 @@ date: 2025-11-14 # Apache Hive : Iceberg REST Catalog API backed by Hive Metastore -{{< toc >}} - ## Introduction ![](../images/hive-iceberg-rest-integration.png) diff --git a/content/docs/latest/admin/manual-installation.md b/content/docs/latest/admin/manual-installation.md index e5544d43..91e21b0c 100644 --- a/content/docs/latest/admin/manual-installation.md +++ b/content/docs/latest/admin/manual-installation.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Manual Installation -{{< toc >}} - # Installing, configuring and running Hive You can install a stable release of Hive by downloading and unpacking a tarball, or you can download the source code and build Hive using Maven (release 3.6.3 and later). diff --git a/content/docs/latest/admin/replication.md b/content/docs/latest/admin/replication.md index 64f53295..58ae9fcd 100644 --- a/content/docs/latest/admin/replication.md +++ b/content/docs/latest/admin/replication.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Replication -{{< toc >}} - ## Overview Hive Replication builds on the [metastore event]({{< ref "hcatalog-notification" >}}) and [ExIm]({{< ref "languagemanual-importexport" >}}) features to provide a framework for replicating Hive metadata and data changes between clusters. 
There is no requirement for the source cluster and replica to run the same Hadoop distribution, Hive version, or metastore RDBMS. The replication system has a fairly 'light touch', exhibiting a low degree of coupling and using the Hive-metastore Thrift service as an integration point. However, the current implementation is not an 'out of the box' solution. In particular it is necessary to provide some kind of orchestration service that is responsible for requesting replication tasks and executing them. diff --git a/content/docs/latest/admin/setting-up-hiveserver2.md b/content/docs/latest/admin/setting-up-hiveserver2.md index 75f76537..9c213be7 100644 --- a/content/docs/latest/admin/setting-up-hiveserver2.md +++ b/content/docs/latest/admin/setting-up-hiveserver2.md @@ -12,8 +12,6 @@ date: 2024-12-12 This document describes how to set up the server. How to use a client with this server is described in the [HiveServer2 Clients document]({{< ref "hiveserver2-clients" >}}). -{{< toc >}} - ## Version information Introduced in Hive version 0.11. See [HIVE-2935](https://issues.apache.org/jira/browse/HIVE-2935). 
diff --git a/content/docs/latest/admin/setting-up-metastore-with-mariadb.md b/content/docs/latest/admin/setting-up-metastore-with-mariadb.md index 16fcaf4a..4e243676 100644 --- a/content/docs/latest/admin/setting-up-metastore-with-mariadb.md +++ b/content/docs/latest/admin/setting-up-metastore-with-mariadb.md @@ -5,8 +5,6 @@ date: 2025-11-05 # Apache Hive : Setting up Metastore backed by MariaDB -{{< toc >}} - ## Note **Starting from mysql-connector-java 8.0.12, the Metastore cannot start up and serve requests when using the default MySQL driver.** diff --git a/content/docs/latest/admin/user-and-group-filter-support-with-ldap-atn-provider-in-hiveserver2.md b/content/docs/latest/admin/user-and-group-filter-support-with-ldap-atn-provider-in-hiveserver2.md index 146d9fe4..38d39ad5 100644 --- a/content/docs/latest/admin/user-and-group-filter-support-with-ldap-atn-provider-in-hiveserver2.md +++ b/content/docs/latest/admin/user-and-group-filter-support-with-ldap-atn-provider-in-hiveserver2.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : User and Group Filter Support with LDAP Atn Provider in HiveServer2 -{{< toc >}} - ## User and Group Filter Support with LDAP Starting in Hive 1.3.0, [HIVE-7193](https://issues.apache.org/jira/browse/HIVE-7193) adds support in HiveServer2 for diff --git a/content/docs/latest/hcatalog/hcatalog-authorization.md b/content/docs/latest/hcatalog/hcatalog-authorization.md index 23bef50b..345e269b 100644 --- a/content/docs/latest/hcatalog/hcatalog-authorization.md +++ b/content/docs/latest/hcatalog/hcatalog-authorization.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Authorization -{{< toc >}} - # Storage Based Authorization ## Default Authorization Model of Hive diff --git a/content/docs/latest/hcatalog/hcatalog-cli.md b/content/docs/latest/hcatalog/hcatalog-cli.md index bf144635..0cdd13a5 100644 --- a/content/docs/latest/hcatalog/hcatalog-cli.md +++ b/content/docs/latest/hcatalog/hcatalog-cli.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive
: HCatalog Command Line Interface -{{< toc >}} - ## Set Up The HCatalog command line interface (CLI) can be invoked as `HIVE_HOME=`*hive_home hcat_home*`/bin/hcat` where *hive_home* is the directory where Hive has been installed and *hcat_home* is the directory where HCatalog has been installed. diff --git a/content/docs/latest/hcatalog/hcatalog-configuration-properties.md b/content/docs/latest/hcatalog/hcatalog-configuration-properties.md index 04ad2591..2efc2a80 100644 --- a/content/docs/latest/hcatalog/hcatalog-configuration-properties.md +++ b/content/docs/latest/hcatalog/hcatalog-configuration-properties.md @@ -7,8 +7,6 @@ date: 2024-12-12 Apache HCatalog's behaviour can be modified through the use of a few configuration parameters specified in jobs submitted to it. This document details all the various knobs that users have available to them, and what they accomplish.  -{{< toc >}} - ## Setup The properties described in this page are meant to be job-level properties set on HCatalog through the jobConf passed into it. This means that this page is relevant for Pig users of [HCatLoader/HCatStorer]({{< ref "hcatalog-loadstore" >}}), or MapReduce users of [HCatInputFormat/HCatOutputFormat]({{< ref "hcatalog-inputoutput" >}}). For a MapReduce user of HCatalog, these must be present as key-values in the Configuration (JobConf/Job/JobContext) used to instantiate HCatOutputFormat or HCatInputFormat. For Pig users of HCatStorer, these parameters are set using the Pig "set" command before instantiating an HCatLoader/HCatStorer. 
diff --git a/content/docs/latest/hcatalog/hcatalog-dynamicpartitions.md b/content/docs/latest/hcatalog/hcatalog-dynamicpartitions.md index 2ced5b7f..54a5e4af 100644 --- a/content/docs/latest/hcatalog/hcatalog-dynamicpartitions.md +++ b/content/docs/latest/hcatalog/hcatalog-dynamicpartitions.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Dynamic Partitioning -{{< toc >}} - ## Overview When writing data in HCatalog it is possible to write all records to a single partition. In this case the partition column(s) need not be in the output data. diff --git a/content/docs/latest/hcatalog/hcatalog-inputoutput.md b/content/docs/latest/hcatalog/hcatalog-inputoutput.md index 657728d7..64a46f24 100644 --- a/content/docs/latest/hcatalog/hcatalog-inputoutput.md +++ b/content/docs/latest/hcatalog/hcatalog-inputoutput.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Input and Output Interfaces -{{< toc >}} - ## Set Up No HCatalog-specific setup is required for the HCatInputFormat and HCatOutputFormat interfaces. diff --git a/content/docs/latest/hcatalog/hcatalog-installhcat.md b/content/docs/latest/hcatalog/hcatalog-installhcat.md index 3b4f0ebc..52f05b9f 100644 --- a/content/docs/latest/hcatalog/hcatalog-installhcat.md +++ b/content/docs/latest/hcatalog/hcatalog-installhcat.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Installation from Tarball -{{< toc >}} - ## HCatalog Installed with Hive Version diff --git a/content/docs/latest/hcatalog/hcatalog-loadstore.md b/content/docs/latest/hcatalog/hcatalog-loadstore.md index f7e127b7..10690c41 100644 --- a/content/docs/latest/hcatalog/hcatalog-loadstore.md +++ b/content/docs/latest/hcatalog/hcatalog-loadstore.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Load and Store Interfaces -{{< toc >}} - ## Set Up The HCatLoader and HCatStorer interfaces are used with Pig scripts to read and write data in HCatalog-managed tables. No HCatalog-specific setup is required for these interfaces. 
diff --git a/content/docs/latest/hcatalog/hcatalog-notification.md b/content/docs/latest/hcatalog/hcatalog-notification.md index f2b9a693..92703954 100644 --- a/content/docs/latest/hcatalog/hcatalog-notification.md +++ b/content/docs/latest/hcatalog/hcatalog-notification.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Notification -{{< toc >}} - ## Overview Since version 0.2, HCatalog provides notifications for certain events happening in the system. This way applications such as Oozie can wait for those events and schedule the work that depends on them. The current version of HCatalog supports two kinds of events: diff --git a/content/docs/latest/hcatalog/hcatalog-readerwriter.md b/content/docs/latest/hcatalog/hcatalog-readerwriter.md index 620b1caf..eb88c90a 100644 --- a/content/docs/latest/hcatalog/hcatalog-readerwriter.md +++ b/content/docs/latest/hcatalog/hcatalog-readerwriter.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Reader and Writer Interfaces -{{< toc >}} - ## Overview HCatalog provides a data transfer API for parallel input and output without using MapReduce. This API provides a way to read data from a Hadoop cluster or write data into a Hadoop cluster, using a basic storage abstraction of tables and rows. diff --git a/content/docs/latest/hcatalog/hcatalog-storageformats.md b/content/docs/latest/hcatalog/hcatalog-storageformats.md index 6077add6..4892b227 100644 --- a/content/docs/latest/hcatalog/hcatalog-storageformats.md +++ b/content/docs/latest/hcatalog/hcatalog-storageformats.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Storage Formats -{{< toc >}} - ### SerDes and Storage Formats HCatalog uses Hive's SerDe class to serialize and deserialize data. SerDes are provided for RCFile, CSV text, JSON text, and SequenceFile formats. Check the [SerDe documentation]({{< ref "serde" >}}) for additional SerDes that might be included in new versions. 
For example, the [Avro SerDe]({{< ref "avroserde" >}}) was added in Hive 0.9.1, the [ORC]({{< ref "languagemanual-orc" >}}) file format was added in Hive 0.11.0, and [Parquet]({{< ref "parquet" >}}) was added in Hive 0.10.0 (plug-in) and Hive 0.13.0 (native). diff --git a/content/docs/latest/hcatalog/hcatalog-streaming-mutation-api.md b/content/docs/latest/hcatalog/hcatalog-streaming-mutation-api.md index d27f8f76..65ba2c30 100644 --- a/content/docs/latest/hcatalog/hcatalog-streaming-mutation-api.md +++ b/content/docs/latest/hcatalog/hcatalog-streaming-mutation-api.md @@ -7,8 +7,6 @@ date: 2024-12-12 A Java API focused on mutating (insert/update/delete) records into transactional tables using Hive’s [ACID](https://hive.apache.org/docs/latest/user/hive-transactions) feature. It is introduced in Hive 2.0.0 ([HIVE-10165](https://issues.apache.org/jira/browse/HIVE-10165)). -{{< toc >}} - # Background In certain data processing use cases it is necessary to modify existing data when new facts arrive. An example of this is the classic ETL merge where a copy of a data set is kept in sync with a master by the frequent application of deltas. The deltas describe the mutations (inserts, updates, deletes) that have occurred to the master since the previous sync. To implement such a case using Hadoop traditionally demands that the partitions containing records targeted by the mutations be rewritten. This is a coarse approach; a partition containing millions of records might be rebuilt because of a single record change. Additionally these partitions cannot be restated atomically; at some point the old partition data must be swapped with the new partition data. When this swap occurs, usually by issuing an HDFS `rm` followed by a `mv`, the possibility exists where the data appears to be unavailable and hence any downstream jobs consuming the data might unexpectedly fail. 
Therefore data processing patterns that restate raw data on HDFS cannot operate robustly without some external mechanism to orchestrate concurrent access to changing data. diff --git a/content/docs/latest/hcatalog/hcatalog-usinghcat.md b/content/docs/latest/hcatalog/hcatalog-usinghcat.md index 240ac85e..a0c02fad 100644 --- a/content/docs/latest/hcatalog/hcatalog-usinghcat.md +++ b/content/docs/latest/hcatalog/hcatalog-usinghcat.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HCatalog Usage -{{< toc >}} - ## Version information HCatalog graduated from the Apache incubator and merged with the Hive project on March 26, 2013. diff --git a/content/docs/latest/language/datasketches-integration.md b/content/docs/latest/language/datasketches-integration.md index e4cb90fb..67e8e9bc 100644 --- a/content/docs/latest/language/datasketches-integration.md +++ b/content/docs/latest/language/datasketches-integration.md @@ -8,8 +8,6 @@ date: 2024-12-12 Apache DataSketches () is integrated into Hive via [HIVE-22939](https://issues.apache.org/jira/browse/HIVE-22939). This enables various kinds of sketch operations through regular SQL statements. -{{< toc >}} - # Sketch functions ## Naming convention diff --git a/content/docs/latest/language/enhanced-aggregation-cube-grouping-and-rollup.md b/content/docs/latest/language/enhanced-aggregation-cube-grouping-and-rollup.md index 2e2f4669..d4c04110 100644 --- a/content/docs/latest/language/enhanced-aggregation-cube-grouping-and-rollup.md +++ b/content/docs/latest/language/enhanced-aggregation-cube-grouping-and-rollup.md @@ -7,8 +7,6 @@ date: 2024-12-12 This document describes enhanced aggregation features for the GROUP BY clause of SELECT statements. -{{< toc >}} - Version Grouping sets, CUBE and ROLLUP operators, and the GROUPING__ID function were added in Hive 0.10.0.
diff --git a/content/docs/latest/language/genericudafcasestudy.md b/content/docs/latest/language/genericudafcasestudy.md index 54751113..6d174fa4 100644 --- a/content/docs/latest/language/genericudafcasestudy.md +++ b/content/docs/latest/language/genericudafcasestudy.md @@ -11,8 +11,6 @@ This tutorial walks through the development of the `histogram()` UDAF, which com **NOTE:** In this tutorial, we walk through the creation of a `histogram()` function. Starting with the 0.6.0 release of Hive, this appears as the built-in function `histogram_numeric()`. -{{< toc >}} - ## Preliminaries Make sure you have the latest Hive trunk by running `svn up` in your Hive directory. More detailed instructions on downloading and setting up Hive can be found at [Getting Started](http://wiki.apache.org/hadoop/Hive/GettingStarted). Your local copy of Hive should work by running `build/dist/bin/hive` from the Hive root directory, and you should have some tables of data loaded into your local instance for testing whatever UDAF you have in mind. For this example, assume that a table called `normal` exists with a single `double` column called `val`, containing a large number of random numbers drawn from the standard normal distribution. diff --git a/content/docs/latest/language/hive-udfs.md b/content/docs/latest/language/hive-udfs.md index 3c95b29b..3facde34 100644 --- a/content/docs/latest/language/hive-udfs.md +++ b/content/docs/latest/language/hive-udfs.md @@ -7,8 +7,6 @@ date: 2024-12-12 Hive User-Defined Functions (UDFs) are custom functions developed in Java and seamlessly integrated with Apache Hive. UDFs are routines designed to accept parameters, execute a specific action, and return the resulting value. The return value can either be a single scalar row or a complete result set, depending on the UDF's code and the implemented interface.
UDFs represent a powerful capability that enhances classical SQL functionality by allowing the integration of custom code, providing Hive users with a versatile toolset. Apache Hive comes equipped with a variety of built-in UDFs that users can leverage. Similar to other SQL-based solutions, Hive also offers functionality to expand its already rich set of UDFs by incorporating custom ones as needed. -{{< toc >}} - ## Overview Every UDF's evaluate method is invoked one row at a time! This means that if your UDF has complex code, it could introduce performance issues at execution time. diff --git a/content/docs/latest/language/hiveplugins.md b/content/docs/latest/language/hiveplugins.md index 8b2ac3e7..4bbbeff4 100644 --- a/content/docs/latest/language/hiveplugins.md +++ b/content/docs/latest/language/hiveplugins.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Plugins -{{< toc >}} - ## Creating Custom UDFs First, you need to create a new class that extends UDF, with one or more methods named evaluate. diff --git a/content/docs/latest/language/languagemanual-archiving.md b/content/docs/latest/language/languagemanual-archiving.md index 86bb6e2e..90c8f50a 100644 --- a/content/docs/latest/language/languagemanual-archiving.md +++ b/content/docs/latest/language/languagemanual-archiving.md @@ -7,8 +7,6 @@ date: 2024-12-12 Archiving for File Count Reduction. -{{< toc >}} - ## Overview Due to the design of HDFS, the number of files in the filesystem directly affects the memory consumption in the namenode. While normally not a problem for small clusters, memory usage may hit the limits of accessible memory on a single machine when there are >50-100 million files. In such situations, it is advantageous to have as few files as possible.
diff --git a/content/docs/latest/language/languagemanual-authorization.md b/content/docs/latest/language/languagemanual-authorization.md index 80a2631b..6a1609d7 100644 --- a/content/docs/latest/language/languagemanual-authorization.md +++ b/content/docs/latest/language/languagemanual-authorization.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Authorization -{{< toc >}} - ## Introduction Note that this documentation is referring to Authorization which is verifying if a user has permission to perform a certain action, and not about Authentication (verifying the identity of the user). Strong authentication for tools like the [Hive command line]({{< ref "languagemanual-cli" >}}) is provided through the use of Kerberos. There are additional authentication options for users of [HiveServer2]({{< ref "setting-up-hiveserver2" >}}). diff --git a/content/docs/latest/language/languagemanual-cli.md b/content/docs/latest/language/languagemanual-cli.md index 68422758..2186f1be 100644 --- a/content/docs/latest/language/languagemanual-cli.md +++ b/content/docs/latest/language/languagemanual-cli.md @@ -7,8 +7,6 @@ date: 2024-12-12 $HIVE_HOME/bin/hive is a shell utility which can be used to run Hive queries in either interactive or batch mode. -{{< toc >}} - # Deprecation in favor of Beeline CLI HiveServer2 (introduced in Hive 0.11) has its own CLI called [Beeline]({{< ref "#beeline" >}}), which is a JDBC client based on SQLLine.  Due to new development being focused on HiveServer2, [Hive CLI will soon be deprecated](https://issues.apache.org/jira/browse/HIVE-10304) in favor of Beeline ([HIVE-10511](https://issues.apache.org/jira/browse/HIVE-10511)). 
diff --git a/content/docs/latest/language/languagemanual-ddl.md b/content/docs/latest/language/languagemanual-ddl.md index 84c87962..f064a8a0 100644 --- a/content/docs/latest/language/languagemanual-ddl.md +++ b/content/docs/latest/language/languagemanual-ddl.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual DDL -{{< toc >}} - ## Overview HiveQL DDL statements are documented here, including: diff --git a/content/docs/latest/language/languagemanual-dml.md b/content/docs/latest/language/languagemanual-dml.md index b2ee2864..a29ab933 100644 --- a/content/docs/latest/language/languagemanual-dml.md +++ b/content/docs/latest/language/languagemanual-dml.md @@ -7,8 +7,6 @@ date: 2024-12-12 # Hive Data Manipulation Language -{{< toc >}} - ### Loading files into tables Hive does not do any transformation while loading data into tables. Load operations are currently pure copy/move operations that move datafiles into locations corresponding to Hive tables. diff --git a/content/docs/latest/language/languagemanual-explain.md b/content/docs/latest/language/languagemanual-explain.md index de39f01b..4ffca5b6 100644 --- a/content/docs/latest/language/languagemanual-explain.md +++ b/content/docs/latest/language/languagemanual-explain.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Explain -{{< toc >}} - ## EXPLAIN Syntax Hive provides an `EXPLAIN` command that shows the execution plan for a query. 
The syntax for this statement is as follows: diff --git a/content/docs/latest/language/languagemanual-groupby.md b/content/docs/latest/language/languagemanual-groupby.md index 5fc9df88..ae0774c5 100644 --- a/content/docs/latest/language/languagemanual-groupby.md +++ b/content/docs/latest/language/languagemanual-groupby.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual GroupBy -{{< toc >}} - ## Group By Syntax ``` diff --git a/content/docs/latest/language/languagemanual-importexport.md b/content/docs/latest/language/languagemanual-importexport.md index aba0e263..16f733f5 100644 --- a/content/docs/latest/language/languagemanual-importexport.md +++ b/content/docs/latest/language/languagemanual-importexport.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Import/Export -{{< toc >}} - ### Version information The `EXPORT` and `IMPORT` commands were added in Hive 0.8.0 (see [HIVE-1918](https://issues.apache.org/jira/browse/HIVE-1918)). diff --git a/content/docs/latest/language/languagemanual-indexing.md b/content/docs/latest/language/languagemanual-indexing.md index 7bfb2bb9..e658ddc8 100644 --- a/content/docs/latest/language/languagemanual-indexing.md +++ b/content/docs/latest/language/languagemanual-indexing.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Indexing -{{< toc >}} - ## Indexing Is Removed since 3.0 There are alternate options which might work similarly to indexing: diff --git a/content/docs/latest/language/languagemanual-joinoptimization.md b/content/docs/latest/language/languagemanual-joinoptimization.md index 2b519aca..8aeecdbf 100644 --- a/content/docs/latest/language/languagemanual-joinoptimization.md +++ b/content/docs/latest/language/languagemanual-joinoptimization.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Join Optimization -{{< toc >}} - ## Improvements to the Hive Optimizer Version diff --git a/content/docs/latest/language/languagemanual-joins.md
b/content/docs/latest/language/languagemanual-joins.md index faf57e50..58ddb30a 100644 --- a/content/docs/latest/language/languagemanual-joins.md +++ b/content/docs/latest/language/languagemanual-joins.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Joins -{{< toc >}} - ## Join Syntax Hive supports the following syntax for joining tables: diff --git a/content/docs/latest/language/languagemanual-lateralview.md b/content/docs/latest/language/languagemanual-lateralview.md index 9614fd87..b212ed5d 100644 --- a/content/docs/latest/language/languagemanual-lateralview.md +++ b/content/docs/latest/language/languagemanual-lateralview.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual LateralView -{{< toc >}} - ## Lateral View Syntax ``` diff --git a/content/docs/latest/language/languagemanual-lzo.md b/content/docs/latest/language/languagemanual-lzo.md index 39e0dbb8..182fb7ae 100644 --- a/content/docs/latest/language/languagemanual-lzo.md +++ b/content/docs/latest/language/languagemanual-lzo.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual LZO Compression -{{< toc >}} - ## General LZO Concepts LZO is a lossless data compression library that favors speed over compression ratio. See and for general information about LZO and see [Compressed Data Storage]({{< ref "compressedstorage" >}}) for information about compression in Hive. 
diff --git a/content/docs/latest/language/languagemanual-orc.md b/content/docs/latest/language/languagemanual-orc.md index 8a9c74db..7cdb5119 100644 --- a/content/docs/latest/language/languagemanual-orc.md +++ b/content/docs/latest/language/languagemanual-orc.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual ORC -{{< toc >}} - # ORC Files ## ORC File Format diff --git a/content/docs/latest/language/languagemanual-sampling.md b/content/docs/latest/language/languagemanual-sampling.md index abcb110a..2c9e5823 100644 --- a/content/docs/latest/language/languagemanual-sampling.md +++ b/content/docs/latest/language/languagemanual-sampling.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Sampling -{{< toc >}} - ## Sampling Syntax ### Sampling Bucketized Table diff --git a/content/docs/latest/language/languagemanual-select.md b/content/docs/latest/language/languagemanual-select.md index 414c3d33..3a35f0da 100644 --- a/content/docs/latest/language/languagemanual-select.md +++ b/content/docs/latest/language/languagemanual-select.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Select -{{< toc >}} - ## Select Syntax ``` diff --git a/content/docs/latest/language/languagemanual-sortby.md b/content/docs/latest/language/languagemanual-sortby.md index 9125c99d..70a38ffd 100644 --- a/content/docs/latest/language/languagemanual-sortby.md +++ b/content/docs/latest/language/languagemanual-sortby.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual SortBy -{{< toc >}} - # Order, Sort, Cluster, and Distribute By This describes the syntax of SELECT clauses ORDER BY, SORT BY, CLUSTER BY, and DISTRIBUTE BY.  See [Select Syntax]({{< ref "#select-syntax" >}}) for general information. 
diff --git a/content/docs/latest/language/languagemanual-subqueries.md b/content/docs/latest/language/languagemanual-subqueries.md index 24299bd5..1fddc36b 100644 --- a/content/docs/latest/language/languagemanual-subqueries.md +++ b/content/docs/latest/language/languagemanual-subqueries.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual SubQueries -{{< toc >}} - # Subqueries in the FROM Clause ``` diff --git a/content/docs/latest/language/languagemanual-transform.md b/content/docs/latest/language/languagemanual-transform.md index 17026e3f..a751009d 100644 --- a/content/docs/latest/language/languagemanual-transform.md +++ b/content/docs/latest/language/languagemanual-transform.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Transform -{{< toc >}} - ## Transform/Map-Reduce Syntax Users can also plug in their own custom mappers and reducers in the data stream by using features natively supported in the Hive language. For example, in order to run a custom mapper script - map_script - and a custom reducer script - reduce_script - the user can issue the following command which uses the TRANSFORM clause to embed the mapper and the reducer scripts. diff --git a/content/docs/latest/language/languagemanual-types.md b/content/docs/latest/language/languagemanual-types.md index a8521f5c..8f6b6c5f 100644 --- a/content/docs/latest/language/languagemanual-types.md +++ b/content/docs/latest/language/languagemanual-types.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Data Types -{{< toc >}} - ## Overview This lists all supported data types in Hive. See [Type System]({{< ref "#type-system" >}}) in the [Tutorial]({{< ref "tutorial" >}}) for additional information.
diff --git a/content/docs/latest/language/languagemanual-udf.md b/content/docs/latest/language/languagemanual-udf.md index 6a21ce7b..cdce416c 100644 --- a/content/docs/latest/language/languagemanual-udf.md +++ b/content/docs/latest/language/languagemanual-udf.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Operators and User-Defined Functions -{{< toc >}} - ## Overview All Hive keywords are case-insensitive, including the names of Hive operators and functions. diff --git a/content/docs/latest/language/languagemanual-union.md b/content/docs/latest/language/languagemanual-union.md index 014de1cf..fe9716eb 100644 --- a/content/docs/latest/language/languagemanual-union.md +++ b/content/docs/latest/language/languagemanual-union.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual Union -{{< toc >}} - ## Union Syntax ``` diff --git a/content/docs/latest/language/languagemanual-variablesubstitution.md b/content/docs/latest/language/languagemanual-variablesubstitution.md index 01b827ef..13af1fe4 100644 --- a/content/docs/latest/language/languagemanual-variablesubstitution.md +++ b/content/docs/latest/language/languagemanual-variablesubstitution.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual VariableSubstitution -{{< toc >}} - # Introduction Hive is used for batch and interactive queries. Variable Substitution allows for tasks such as separating environment-specific configuration variables from code. 
diff --git a/content/docs/latest/language/languagemanual-virtualcolumns.md b/content/docs/latest/language/languagemanual-virtualcolumns.md index 34a3c434..8fec2177 100644 --- a/content/docs/latest/language/languagemanual-virtualcolumns.md +++ b/content/docs/latest/language/languagemanual-virtualcolumns.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual VirtualColumns -{{< toc >}} - ## Virtual Columns Hive 0.8.0 provides support for two virtual columns: diff --git a/content/docs/latest/language/languagemanual-windowingandanalytics.md b/content/docs/latest/language/languagemanual-windowingandanalytics.md index 183798f2..d94d88a3 100644 --- a/content/docs/latest/language/languagemanual-windowingandanalytics.md +++ b/content/docs/latest/language/languagemanual-windowingandanalytics.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : LanguageManual WindowingAndAnalytics -{{< toc >}} - ## Enhancements to Hive QL Introduced in Hive version 0.11. diff --git a/content/docs/latest/language/materialized-views.md b/content/docs/latest/language/materialized-views.md index 9d648c55..3fac44a7 100644 --- a/content/docs/latest/language/materialized-views.md +++ b/content/docs/latest/language/materialized-views.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page documents the work done for the supporting materialized views in Apache Hive. -{{< toc >}} - ## Version information Materialized views support is introduced in Hive 3.0.0. 
diff --git a/content/docs/latest/language/scheduled-queries.md b/content/docs/latest/language/scheduled-queries.md index 20bc47e1..9ff2ddcf 100644 --- a/content/docs/latest/language/scheduled-queries.md +++ b/content/docs/latest/language/scheduled-queries.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Scheduled Queries -{{< toc >}} - # Introduction Executing statements periodically can be useful in diff --git a/content/docs/latest/language/sql-standard-based-hive-authorization.md b/content/docs/latest/language/sql-standard-based-hive-authorization.md index 0d6e51f8..48cf3751 100644 --- a/content/docs/latest/language/sql-standard-based-hive-authorization.md +++ b/content/docs/latest/language/sql-standard-based-hive-authorization.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : SQL Standard Based Hive Authorization -{{< toc >}} - # Status of Hive Authorization before Hive 0.13 The [default authorization in Hive]({{< ref "#default-authorization-in-hive" >}}) is not designed with the intent to protect against malicious users accessing data they should not be accessing. It only helps in preventing users from accidentally doing operations they are not supposed to do. It is also incomplete because it does not have authorization checks for many operations including the grant statement. The authorization checks happen during Hive query compilation. But as the user is allowed to execute dfs commands, user-defined functions and shell commands, it is possible to bypass the client security checks.
diff --git a/content/docs/latest/language/statisticsanddatamining.md b/content/docs/latest/language/statisticsanddatamining.md index e5b24aee..71fccead 100644 --- a/content/docs/latest/language/statisticsanddatamining.md +++ b/content/docs/latest/language/statisticsanddatamining.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page is the secondary documentation for the slightly more advanced statistical and data mining functions that are being integrated into Hive, and especially the functions that warrant more than one-line descriptions. -{{< toc >}} - ## ngrams() and context_ngrams(): N-gram frequency estimation [N-grams](http://en.wikipedia.org/wiki/N-gram) are subsequences of length **N** drawn from a longer sequence. The purpose of the `ngrams()` UDAF is to find the `k` most frequent n-grams from one or more sequences. It can be used in conjunction with the `sentences()` UDF to analyze unstructured natural language text, or the `collect()` function to analyze more general string data. diff --git a/content/docs/latest/user/Hive-Transactions-ACID.md b/content/docs/latest/user/Hive-Transactions-ACID.md index 226fc1ba..4a0c0a1d 100644 --- a/content/docs/latest/user/Hive-Transactions-ACID.md +++ b/content/docs/latest/user/Hive-Transactions-ACID.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Hive Transactions (Hive ACID) -{{< toc >}} - ## What is ACID and why should you use it? ACID stands for four traits of database transactions:  Atomicity (an operation either succeeds completely or fails, it does not leave partial data), Consistency (once an application performs an operation the results of that operation are visible to it in every subsequent operation), [Isolation](https://en.wikipedia.org/wiki/Isolation_(database_systems)) (an incomplete operation by one user does not cause unexpected side effects for other users), and Durability (once an operation is complete it will be preserved even in the face of machine or system failure).  
These traits have long been expected of database systems as part of their transaction functionality.   diff --git a/content/docs/latest/user/accumulointegration.md b/content/docs/latest/user/accumulointegration.md index e06aa420..ffba8561 100644 --- a/content/docs/latest/user/accumulointegration.md +++ b/content/docs/latest/user/accumulointegration.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Accumulo Integration -{{< toc >}} - ## Overview [Apache Accumulo](http://accumulo.apache.org) is a sorted, distributed key-value store based on the Google BigTable paper. The API methods that Accumulo provides are in terms of Keys and Values which present the highest level of flexibility in reading and writing data; however, higher-level query abstractions are typically an exercise left to the user. Leveraging Apache Hive as a SQL interface to Accumulo complements its existing high-throughput batch access and low-latency random lookups. diff --git a/content/docs/latest/user/authdev.md b/content/docs/latest/user/authdev.md index 8a75f123..b70f90b2 100644 --- a/content/docs/latest/user/authdev.md +++ b/content/docs/latest/user/authdev.md @@ -7,8 +7,6 @@ date: 2024-12-12 This is the design document for the [original Hive authorization mode]({{< ref "hive-deprecated-authorization-mode" >}}). See [Authorization]({{< ref "languagemanual-authorization" >}}) for an overview of authorization modes, which include [storage based authorization]({{< ref "storage-based-authorization-in-the-metastore-server" >}}) and [SQL standards based authorization]({{< ref "sql-standard-based-hive-authorization" >}}). -{{< toc >}} - # 1. 
Privilege ## 1.1 Access Privilege diff --git a/content/docs/latest/user/avroserde.md b/content/docs/latest/user/avroserde.md index 14a929ee..95905376 100644 --- a/content/docs/latest/user/avroserde.md +++ b/content/docs/latest/user/avroserde.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : AvroSerDe -{{< toc >}} - ### Availability Earliest version AvroSerde is available diff --git a/content/docs/latest/user/configuration-properties.md b/content/docs/latest/user/configuration-properties.md index aeaf410e..a6b9a083 100644 --- a/content/docs/latest/user/configuration-properties.md +++ b/content/docs/latest/user/configuration-properties.md @@ -15,8 +15,6 @@ Version information As of Hive 0.14.0 ( [HIVE-7211](https://issues.apache.org/jira/browse/HIVE-7211) ), a configuration name that starts with "hive." is regarded as a Hive system property. With the [hive.conf.validation]({{< ref "#hiveconfvalidation" >}}) option true (default), any attempts to set a configuration property that starts with "hive." which is not registered to the Hive system will throw an exception. -{{< toc >}} - ## Query and DDL Execution ##### hive.execution.engine diff --git a/content/docs/latest/user/cost-based-optimization-in-hive.md b/content/docs/latest/user/cost-based-optimization-in-hive.md index 8cb6cfd1..8367b4e1 100644 --- a/content/docs/latest/user/cost-based-optimization-in-hive.md +++ b/content/docs/latest/user/cost-based-optimization-in-hive.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Cost-based optimization in Hive -{{< toc >}} - # Abstract Apache Hadoop is a framework for the distributed processing of large data sets using clusters of computers typically composed of commodity hardware. Over the last few years Apache Hadoop has become the de facto platform for distributed data processing using commodity hardware. Apache Hive is a popular SQL interface for data processing using Apache Hadoop. 
diff --git a/content/docs/latest/user/csv-serde.md b/content/docs/latest/user/csv-serde.md index 9a37c373..f1297d2c 100644 --- a/content/docs/latest/user/csv-serde.md +++ b/content/docs/latest/user/csv-serde.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : CSV Serde -{{< toc >}} - ### Availability Earliest version CSVSerde is available diff --git a/content/docs/latest/user/druid-integration.md b/content/docs/latest/user/druid-integration.md index 34b147f9..2e8c215a 100644 --- a/content/docs/latest/user/druid-integration.md +++ b/content/docs/latest/user/druid-integration.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page documents the work done for the integration between Druid and Hive introduced in Hive 2.2.0 ([HIVE-14217](https://issues.apache.org/jira/browse/HIVE-14217)). Initially it was compatible with Druid 0.9.1.1, the latest stable release of Druid to that date. -{{< toc >}} - ## Objectives Our main **goal** is to be able to index data from Hive into Druid, and to be able to query Druid datasources from Hive. Completing this work will bring benefits to the Druid and Hive systems alike: diff --git a/content/docs/latest/user/hbaseintegration.md b/content/docs/latest/user/hbaseintegration.md index b18834f8..b6480724 100644 --- a/content/docs/latest/user/hbaseintegration.md +++ b/content/docs/latest/user/hbaseintegration.md @@ -11,8 +11,6 @@ A presentation is available from the [HBase HUG10 Meetup](http://wiki.apache.org This feature is a work in progress, and suggestions for its improvement are very welcome. 
-{{< toc >}} - ## Version Information ### Avro Data Stored in HBase Columns diff --git a/content/docs/latest/user/hive-deprecated-authorization-mode.md b/content/docs/latest/user/hive-deprecated-authorization-mode.md index ee9a2ff0..432eaa2f 100644 --- a/content/docs/latest/user/hive-deprecated-authorization-mode.md +++ b/content/docs/latest/user/hive-deprecated-authorization-mode.md @@ -7,8 +7,6 @@ date: 2024-12-12 This document describes Hive security using the basic authorization scheme, which regulates access to Hive metadata on the client side. This was the default authorization mode used when authorization was enabled. The default was changed to [SQL Standard authorization]({{< ref "sql-standard-based-hive-authorization" >}}) in Hive 2.0 ([HIVE-12429](https://issues.apache.org/jira/browse/HIVE-12429)). -{{< toc >}} - ### Disclaimer Hive authorization is not completely secure. The basic authorization scheme is intended primarily to prevent good users from accidentally doing bad things, but makes no promises about preventing malicious users from doing malicious things.  See the [Hive authorization main page]({{< ref "languagemanual-authorization" >}}) for the secure options. diff --git a/content/docs/latest/user/hive-on-spark.md b/content/docs/latest/user/hive-on-spark.md index c11a6653..db454805 100644 --- a/content/docs/latest/user/hive-on-spark.md +++ b/content/docs/latest/user/hive-on-spark.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Hive on Spark -{{< toc >}} - # 1. Introduction We propose modifying Hive to add Spark as a third execution backend([HIVE-7292](https://issues.apache.org/jira/browse/HIVE-7292)), parallel to MapReduce and Tez. 
diff --git a/content/docs/latest/user/hive-transactions.md b/content/docs/latest/user/hive-transactions.md index 63fe0553..7ccaddc0 100644 --- a/content/docs/latest/user/hive-transactions.md +++ b/content/docs/latest/user/hive-transactions.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : ACID Transactions -{{< toc >}} - ## Upgrade to Hive 3+ Any transactional tables created by a Hive version prior to Hive 3 require Major Compaction to be run on every partition before upgrading to 3.0.  More precisely, any partition which has had any update/delete/merge statements executed on it since the last Major Compaction, has to undergo another Major Compaction.  No more update/delete/merge may happen on this partition until after Hive is upgraded to Hive 3. diff --git a/content/docs/latest/user/hiveclient.md b/content/docs/latest/user/hiveclient.md index d62698d2..f591966c 100644 --- a/content/docs/latest/user/hiveclient.md +++ b/content/docs/latest/user/hiveclient.md @@ -9,8 +9,6 @@ This page describes the different clients supported by Hive. The command line cl For details about the standalone server see [Hive Server]({{< ref "hiveserver" >}}) or [HiveServer2]({{< ref "setting-up-hiveserver2" >}}). -{{< toc >}} - # Command Line Operates in embedded mode only, that is, it needs to have access to the Hive libraries. For more details see [Getting Started]({{< ref "gettingstarted-latest" >}}) and [Hive CLI]({{< ref "languagemanual-cli" >}}). diff --git a/content/docs/latest/user/hiveserver2-clients.md b/content/docs/latest/user/hiveserver2-clients.md index 274661c7..5e9adf55 100644 --- a/content/docs/latest/user/hiveserver2-clients.md +++ b/content/docs/latest/user/hiveserver2-clients.md @@ -7,8 +7,6 @@ date: 2024-12-12 This page describes the different clients supported by [HiveServer2]({{< ref "setting-up-hiveserver2" >}}). -{{< toc >}} - # Version information Introduced in Hive version 0.11. See [HIVE-2935](https://issues.apache.org/jira/browse/HIVE-2935). 
diff --git a/content/docs/latest/user/hiveserver2-overview.md b/content/docs/latest/user/hiveserver2-overview.md index ea507aaa..c94884ab 100644 --- a/content/docs/latest/user/hiveserver2-overview.md +++ b/content/docs/latest/user/hiveserver2-overview.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : HiveServer2 Overview -{{< toc >}} - # Introduction HiveServer2 (HS2) is a service that enables clients to execute queries against Hive. HiveServer2 is the successor to [HiveServer1](https://hive.apache.org/docs/latest/admin/hiveserver) which has been deprecated. HS2 supports multi-client concurrency and authentication. It is designed to provide better support for open API clients like JDBC and ODBC. diff --git a/content/docs/latest/user/jdbc-storage-handler.md b/content/docs/latest/user/jdbc-storage-handler.md index 5e2e8204..6ca4df3b 100644 --- a/content/docs/latest/user/jdbc-storage-handler.md +++ b/content/docs/latest/user/jdbc-storage-handler.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : JDBC Storage Handler -{{< toc >}} - # Syntax JdbcStorageHandler supports reading from a JDBC data source in Hive. Currently, writing to a JDBC data source is not supported. To use JdbcStorageHandler, you need to create an external table using JdbcStorageHandler. Here is a simple example: diff --git a/content/docs/latest/user/kudu-integration.md b/content/docs/latest/user/kudu-integration.md index 6d1c430c..5be3050e 100644 --- a/content/docs/latest/user/kudu-integration.md +++ b/content/docs/latest/user/kudu-integration.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Kudu Integration -{{< toc >}} - ## Overview [Apache Kudu](https://kudu.apache.org) is an open source data storage engine that makes fast analytics on fast and changing data easy.  
diff --git a/content/docs/latest/user/parquet.md b/content/docs/latest/user/parquet.md index fb9e8783..699ae095 100644 --- a/content/docs/latest/user/parquet.md +++ b/content/docs/latest/user/parquet.md @@ -7,8 +7,6 @@ date: 2024-12-12 Parquet is supported by a plugin in Hive 0.10, 0.11, and 0.12 and natively in Hive 0.13 and later. -{{< toc >}} - ## **Introduction** Parquet () is an ecosystem-wide columnar format for Hadoop. Read [Dremel made simple with Parquet](https://blog.twitter.com/2013/dremel-made-simple-with-parquet) for a good introduction to the format, while the Parquet project has an [in-depth description of the format](https://github.com/Parquet/parquet-format) including motivations and diagrams. At the time of this writing, Parquet supports the following engines and data description languages: diff --git a/content/docs/latest/user/query-reexecution.md b/content/docs/latest/user/query-reexecution.md index ed694e7d..74ad6c70 100644 --- a/content/docs/latest/user/query-reexecution.md +++ b/content/docs/latest/user/query-reexecution.md @@ -7,8 +7,6 @@ date: 2024-12-12 Query reexecution provides a facility to re-run a query multiple times in case an unfortunate event happens. Introduced in Hive 3.0 ([HIVE-17626](https://issues.apache.org/jira/browse/HIVE-17626)). -{{< toc >}} - # ReExecution strategies ## Overlay diff --git a/content/docs/latest/user/rcfilecat.md b/content/docs/latest/user/rcfilecat.md index 7922b626..02c76deb 100644 --- a/content/docs/latest/user/rcfilecat.md +++ b/content/docs/latest/user/rcfilecat.md @@ -7,8 +7,6 @@ date: 2024-12-12 $HIVE_HOME/bin/hive --rcfilecat is a shell utility which can be used to print data or metadata from [RC files]({{< ref "rcfile" >}}). -{{< toc >}} - ## Data Prints out the rows stored in an RCFile; columns are tab-separated and rows are newline-separated. 
diff --git a/content/docs/latest/user/replacing-the-implementation-of-hive-cli-using-beeline.md b/content/docs/latest/user/replacing-the-implementation-of-hive-cli-using-beeline.md index dbdfd162..e4959796 100644 --- a/content/docs/latest/user/replacing-the-implementation-of-hive-cli-using-beeline.md +++ b/content/docs/latest/user/replacing-the-implementation-of-hive-cli-using-beeline.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Replacing the Implementation of Hive CLI Using Beeline -{{< toc >}} - ## Why Replace the Existing Hive CLI? [Hive CLI]({{< ref "#hive-cli" >}}) is a legacy tool which had two main use cases. The first is that it served as a thick client for SQL on Hadoop and the second is that it served as a command line tool for Hive Server (the original Hive server, now often referred to as "HiveServer1"). Hive Server has been deprecated and removed from the Hive code base as of Hive 1.0.0 ([HIVE-6977](https://issues.apache.org/jira/browse/HIVE-6977)) and replaced with HiveServer2 ([HIVE-2935](https://issues.apache.org/jira/browse/HIVE-2935)), so the second use case no longer applies. For the first use case, [Beeline]({{< ref "#beeline" >}}) provides or is supposed to provide equal functionality, yet is implemented differently from Hive CLI. diff --git a/content/docs/latest/user/serde.md b/content/docs/latest/user/serde.md index a3c629bb..5b06a95b 100644 --- a/content/docs/latest/user/serde.md +++ b/content/docs/latest/user/serde.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : SerDe -{{< toc >}} - # SerDe Overview SerDe is short for Serializer/Deserializer. Hive uses the SerDe interface for IO. The interface handles both serialization and deserialization and also interpreting the results of serialization as individual fields for processing. 
diff --git a/content/docs/latest/user/storage-based-authorization-in-the-metastore-server.md b/content/docs/latest/user/storage-based-authorization-in-the-metastore-server.md index 524660b2..ac28c079 100644 --- a/content/docs/latest/user/storage-based-authorization-in-the-metastore-server.md +++ b/content/docs/latest/user/storage-based-authorization-in-the-metastore-server.md @@ -12,8 +12,6 @@ The metastore server security feature with storage based authorization was added * For additional information about storage based authorization in the metastore server, see the HCatalog document [Storage Based Authorization]({{< ref "hcatalog-authorization" >}}). * For an overview of Hive authorization models and other security options, see the [Authorization]({{< ref "languagemanual-authorization" >}}) document. -{{< toc >}} - ## The Need for Metastore Server Security When multiple clients access the same metastore in a backing database, such as MySQL, the database connection credentials may be visible in the `hive-site.xml` configuration file. A malicious or incompetent user could cause serious damage to metadata even though the underlying data is protected by HDFS access controls. diff --git a/content/docs/latest/user/streaming-data-ingest-v2.md b/content/docs/latest/user/streaming-data-ingest-v2.md index 80c564e9..98823c21 100644 --- a/content/docs/latest/user/streaming-data-ingest-v2.md +++ b/content/docs/latest/user/streaming-data-ingest-v2.md @@ -7,8 +7,6 @@ date: 2024-12-12 Starting in Hive release 3.0.0, [Streaming Data Ingest]({{< ref "streaming-data-ingest" >}}) is deprecated and replaced by the newer V2 API ([HIVE-19205](https://issues.apache.org/jira/browse/HIVE-19205)).  -{{< toc >}} - # Hive Streaming API Traditionally, adding new data into Hive requires gathering a large amount of data onto HDFS and then periodically adding a new partition. This is essentially a “batch insertion”.  
diff --git a/content/docs/latest/user/streaming-data-ingest.md b/content/docs/latest/user/streaming-data-ingest.md index 5e1a1775..b4b22020 100644 --- a/content/docs/latest/user/streaming-data-ingest.md +++ b/content/docs/latest/user/streaming-data-ingest.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Streaming Data Ingest -{{< toc >}} - # Hive 3 Streaming API [Hive 3 Streaming API Documentation](https://hive.apache.org/docs/latest/user/streaming-data-ingest-v2) - new API available in Hive 3 diff --git a/content/docs/latest/user/teradatabinaryserde.md b/content/docs/latest/user/teradatabinaryserde.md index 09b52c55..872e006c 100644 --- a/content/docs/latest/user/teradatabinaryserde.md +++ b/content/docs/latest/user/teradatabinaryserde.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : TeradataBinarySerde -{{< toc >}} - ### Availability Earliest version TeradataBinarySerde is available diff --git a/content/docs/latest/user/tutorial.md b/content/docs/latest/user/tutorial.md index 57410846..6f3624ff 100644 --- a/content/docs/latest/user/tutorial.md +++ b/content/docs/latest/user/tutorial.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Tutorial -{{< toc >}} - # Concepts ## What Is Hive diff --git a/content/docs/latest/user/user-faq.md b/content/docs/latest/user/user-faq.md index 63dfeba6..b2e09aeb 100644 --- a/content/docs/latest/user/user-faq.md +++ b/content/docs/latest/user/user-faq.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : User FAQ -{{< toc >}} - ## General ### I see errors like: Server access Error: Connection timed out url= diff --git a/content/docs/latest/user/using-tidb-as-the-hive-metastore-database.md b/content/docs/latest/user/using-tidb-as-the-hive-metastore-database.md index c7e1ddf2..39bb41d0 100644 --- a/content/docs/latest/user/using-tidb-as-the-hive-metastore-database.md +++ b/content/docs/latest/user/using-tidb-as-the-hive-metastore-database.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : Using TiDB as the Hive Metastore database -{{< toc 
>}} - # Why use TiDB in Hive as the Metastore database? [TiDB](https://github.com/pingcap/tidb) is a distributed SQL database built by [PingCAP](https://pingcap.com/) and its open-source community. **It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.** It's a one-stop solution for both Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP) workloads. diff --git a/content/docs/latest/webhcat/webhcat-configure.md b/content/docs/latest/webhcat/webhcat-configure.md index 18a86c58..a810863b 100644 --- a/content/docs/latest/webhcat/webhcat-configure.md +++ b/content/docs/latest/webhcat/webhcat-configure.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Configure -{{< toc >}} - ## Configuration Files The configuration for WebHCat (Templeton) merges the normal Hadoop configuration with the WebHCat-specific variables. Because WebHCat is designed to connect services that are not normally connected, the configuration is more complex than might be desirable. diff --git a/content/docs/latest/webhcat/webhcat-installwebhcat.md b/content/docs/latest/webhcat/webhcat-installwebhcat.md index 7a99c5d9..4273b81a 100644 --- a/content/docs/latest/webhcat/webhcat-installwebhcat.md +++ b/content/docs/latest/webhcat/webhcat-installwebhcat.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Installation -{{< toc >}} - ## WebHCat Installed with Hive WebHCat and HCatalog are installed with Hive, starting with Hive release 0.11.0. diff --git a/content/docs/latest/webhcat/webhcat-reference-ddl.md b/content/docs/latest/webhcat/webhcat-reference-ddl.md index f9cfced5..ccdaa406 100644 --- a/content/docs/latest/webhcat/webhcat-reference-ddl.md +++ b/content/docs/latest/webhcat/webhcat-reference-ddl.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DDL -{{< toc >}} - ## Description Performs an [HCatalog DDL]({{< ref "#hcatalog-ddl" >}}) command. 
The command is executed immediately upon request. Responses are limited to 1 MB. For requests which may return longer results consider using the [Hive resource]({{< ref "webhcat-reference-hive" >}}) as an alternative. diff --git a/content/docs/latest/webhcat/webhcat-reference-deletedb.md b/content/docs/latest/webhcat/webhcat-reference-deletedb.md index e70e3fd3..4e26da7c 100644 --- a/content/docs/latest/webhcat/webhcat-reference-deletedb.md +++ b/content/docs/latest/webhcat/webhcat-reference-deletedb.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DeleteDB -{{< toc >}} - ## Description Delete a database. diff --git a/content/docs/latest/webhcat/webhcat-reference-deletejob.md b/content/docs/latest/webhcat/webhcat-reference-deletejob.md index 929c367e..1013038a 100644 --- a/content/docs/latest/webhcat/webhcat-reference-deletejob.md +++ b/content/docs/latest/webhcat/webhcat-reference-deletejob.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DeleteJob -{{< toc >}} - ## Description Kill a job given its job ID. Substitute ":jobid" with the job ID received when the job was created. diff --git a/content/docs/latest/webhcat/webhcat-reference-deletejobid.md b/content/docs/latest/webhcat/webhcat-reference-deletejobid.md index 92618294..7bf181b6 100644 --- a/content/docs/latest/webhcat/webhcat-reference-deletejobid.md +++ b/content/docs/latest/webhcat/webhcat-reference-deletejobid.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DeleteJobID -{{< toc >}} - ## Description Kill a job given its job ID. Substitute ":jobid" with the job ID received when the job was created. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-deletepartition.md b/content/docs/latest/webhcat/webhcat-reference-deletepartition.md index 1683cd5d..1971a7d0 100644 --- a/content/docs/latest/webhcat/webhcat-reference-deletepartition.md +++ b/content/docs/latest/webhcat/webhcat-reference-deletepartition.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DeletePartition -{{< toc >}} - ## Description Delete (drop) a partition in an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-deletetable.md b/content/docs/latest/webhcat/webhcat-reference-deletetable.md index 085a8b00..b1005049 100644 --- a/content/docs/latest/webhcat/webhcat-reference-deletetable.md +++ b/content/docs/latest/webhcat/webhcat-reference-deletetable.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference DeleteTable -{{< toc >}} - ## Description Delete (drop) an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-getcolumn.md b/content/docs/latest/webhcat/webhcat-reference-getcolumn.md index f4446d1d..8c9a5f3e 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getcolumn.md +++ b/content/docs/latest/webhcat/webhcat-reference-getcolumn.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetColumn -{{< toc >}} - ## Description Describe a single column in an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-getcolumns.md b/content/docs/latest/webhcat/webhcat-reference-getcolumns.md index 1564dfa3..918ecc13 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getcolumns.md +++ b/content/docs/latest/webhcat/webhcat-reference-getcolumns.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetColumns -{{< toc >}} - ## Description List the columns in an HCatalog table. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-getdb.md b/content/docs/latest/webhcat/webhcat-reference-getdb.md index 1c0474c8..6b114582 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getdb.md +++ b/content/docs/latest/webhcat/webhcat-reference-getdb.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetDB -{{< toc >}} - ## Description Describe a database. (Note: This resource has a "format=extended" parameter however the output structure does not change if it is used.) diff --git a/content/docs/latest/webhcat/webhcat-reference-getdbs.md b/content/docs/latest/webhcat/webhcat-reference-getdbs.md index 048a90b2..daa3d35f 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getdbs.md +++ b/content/docs/latest/webhcat/webhcat-reference-getdbs.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetDBs -{{< toc >}} - ## Description List the databases in HCatalog. diff --git a/content/docs/latest/webhcat/webhcat-reference-getpartition.md b/content/docs/latest/webhcat/webhcat-reference-getpartition.md index 4dcd63d2..c4f54674 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getpartition.md +++ b/content/docs/latest/webhcat/webhcat-reference-getpartition.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetPartition -{{< toc >}} - ## Description Describe a single partition in an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-getpartitions.md b/content/docs/latest/webhcat/webhcat-reference-getpartitions.md index 4ed98853..d517aed4 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getpartitions.md +++ b/content/docs/latest/webhcat/webhcat-reference-getpartitions.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetPartitions -{{< toc >}} - ## Description List all the partitions in an HCatalog table. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-getproperties.md b/content/docs/latest/webhcat/webhcat-reference-getproperties.md index 5d6573eb..720b0b71 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getproperties.md +++ b/content/docs/latest/webhcat/webhcat-reference-getproperties.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetProperties -{{< toc >}} - ## Description List all the properties of an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-getproperty.md b/content/docs/latest/webhcat/webhcat-reference-getproperty.md index 070372f9..82a67783 100644 --- a/content/docs/latest/webhcat/webhcat-reference-getproperty.md +++ b/content/docs/latest/webhcat/webhcat-reference-getproperty.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetProperty -{{< toc >}} - ## Description Return the value of a single table property. diff --git a/content/docs/latest/webhcat/webhcat-reference-gettable.md b/content/docs/latest/webhcat/webhcat-reference-gettable.md index 72ab0971..e75271aa 100644 --- a/content/docs/latest/webhcat/webhcat-reference-gettable.md +++ b/content/docs/latest/webhcat/webhcat-reference-gettable.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetTable -{{< toc >}} - ## Description Describe an HCatalog table. Normally returns a simple list of columns (using "desc table"), but the extended format will show more information (using "show table extended like"). diff --git a/content/docs/latest/webhcat/webhcat-reference-gettables.md b/content/docs/latest/webhcat/webhcat-reference-gettables.md index a9ef3167..3593b426 100644 --- a/content/docs/latest/webhcat/webhcat-reference-gettables.md +++ b/content/docs/latest/webhcat/webhcat-reference-gettables.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference GetTables -{{< toc >}} - ## Description List the tables in an HCatalog database. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-hive.md b/content/docs/latest/webhcat/webhcat-reference-hive.md index 80c560dd..f88808e0 100644 --- a/content/docs/latest/webhcat/webhcat-reference-hive.md +++ b/content/docs/latest/webhcat/webhcat-reference-hive.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference Hive -{{< toc >}} - ## Description Runs a [Hive](http://hive.apache.org/) query or set of commands. diff --git a/content/docs/latest/webhcat/webhcat-reference-job.md b/content/docs/latest/webhcat/webhcat-reference-job.md index 2c52124a..78adb517 100644 --- a/content/docs/latest/webhcat/webhcat-reference-job.md +++ b/content/docs/latest/webhcat/webhcat-reference-job.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference Job -{{< toc >}} - ## Description Check the status of a job and get related job information given its job ID. Substitute ":jobid" with the job ID received when the job was created. diff --git a/content/docs/latest/webhcat/webhcat-reference-jobids.md b/content/docs/latest/webhcat/webhcat-reference-jobids.md index 01f7dc83..8ce6a385 100644 --- a/content/docs/latest/webhcat/webhcat-reference-jobids.md +++ b/content/docs/latest/webhcat/webhcat-reference-jobids.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference JobIDs -{{< toc >}} - ## Description Return a list of all job IDs. diff --git a/content/docs/latest/webhcat/webhcat-reference-jobinfo.md b/content/docs/latest/webhcat/webhcat-reference-jobinfo.md index 5a32bbf1..3f336864 100644 --- a/content/docs/latest/webhcat/webhcat-reference-jobinfo.md +++ b/content/docs/latest/webhcat/webhcat-reference-jobinfo.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference JobInfo -{{< toc >}} - ## Description Check the status of a job and get related job information given its job ID. Substitute ":jobid" with the job ID received when the job was created. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-jobs.md b/content/docs/latest/webhcat/webhcat-reference-jobs.md index eeb03e7e..5290c5b5 100644 --- a/content/docs/latest/webhcat/webhcat-reference-jobs.md +++ b/content/docs/latest/webhcat/webhcat-reference-jobs.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference Jobs -{{< toc >}} - ## Description Return a list of all job IDs. diff --git a/content/docs/latest/webhcat/webhcat-reference-mapreducejar.md b/content/docs/latest/webhcat/webhcat-reference-mapreducejar.md index 04eca102..bf17b632 100644 --- a/content/docs/latest/webhcat/webhcat-reference-mapreducejar.md +++ b/content/docs/latest/webhcat/webhcat-reference-mapreducejar.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference MapReduceJar -{{< toc >}} - ## Description Creates and queues a standard [Hadoop MapReduce](http://hadoop.apache.org/docs/stable/commands_manual.html) job. diff --git a/content/docs/latest/webhcat/webhcat-reference-mapreducestream.md b/content/docs/latest/webhcat/webhcat-reference-mapreducestream.md index 003608fe..a6c2cb45 100644 --- a/content/docs/latest/webhcat/webhcat-reference-mapreducestream.md +++ b/content/docs/latest/webhcat/webhcat-reference-mapreducestream.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference MapReduceStream -{{< toc >}} - ## Description Create and queue a [Hadoop streaming MapReduce](http://hadoop.apache.org/docs/stable/streaming.html) job. diff --git a/content/docs/latest/webhcat/webhcat-reference-pig.md b/content/docs/latest/webhcat/webhcat-reference-pig.md index 3654924c..a714aa39 100644 --- a/content/docs/latest/webhcat/webhcat-reference-pig.md +++ b/content/docs/latest/webhcat/webhcat-reference-pig.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference Pig -{{< toc >}} - ## Description Create and queue a [Pig](http://pig.apache.org/) job. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-posttable.md b/content/docs/latest/webhcat/webhcat-reference-posttable.md index 36b5ab7a..fa2062d1 100644 --- a/content/docs/latest/webhcat/webhcat-reference-posttable.md +++ b/content/docs/latest/webhcat/webhcat-reference-posttable.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference PostTable -{{< toc >}} - ## Description Rename an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-putcolumn.md b/content/docs/latest/webhcat/webhcat-reference-putcolumn.md index 310f61ca..473874e0 100644 --- a/content/docs/latest/webhcat/webhcat-reference-putcolumn.md +++ b/content/docs/latest/webhcat/webhcat-reference-putcolumn.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference PutColumn -{{< toc >}} - ## Description Create a column in an HCatalog table. diff --git a/content/docs/latest/webhcat/webhcat-reference-putdb.md b/content/docs/latest/webhcat/webhcat-reference-putdb.md index ce78f386..74e7f0f9 100644 --- a/content/docs/latest/webhcat/webhcat-reference-putdb.md +++ b/content/docs/latest/webhcat/webhcat-reference-putdb.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference PutDB -{{< toc >}} - ## Description Create a database. diff --git a/content/docs/latest/webhcat/webhcat-reference-putpartition.md b/content/docs/latest/webhcat/webhcat-reference-putpartition.md index a7d711c1..bab7f047 100644 --- a/content/docs/latest/webhcat/webhcat-reference-putpartition.md +++ b/content/docs/latest/webhcat/webhcat-reference-putpartition.md @@ -5,8 +5,6 @@ date: 2024-12-12 # Apache Hive : WebHCat Reference PutPartition -{{< toc >}} - ## Description Create a partition in an HCatalog table. 
diff --git a/content/docs/latest/webhcat/webhcat-reference-putproperty.md b/content/docs/latest/webhcat/webhcat-reference-putproperty.md
index 8f3627f6..62e50b93 100644
--- a/content/docs/latest/webhcat/webhcat-reference-putproperty.md
+++ b/content/docs/latest/webhcat/webhcat-reference-putproperty.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference PutProperty
 
-{{< toc >}}
-
 ## Description
 
 Add a single property on an HCatalog table. This will also reset an existing property.
diff --git a/content/docs/latest/webhcat/webhcat-reference-puttable.md b/content/docs/latest/webhcat/webhcat-reference-puttable.md
index ba796a80..e4510348 100644
--- a/content/docs/latest/webhcat/webhcat-reference-puttable.md
+++ b/content/docs/latest/webhcat/webhcat-reference-puttable.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference PutTable
 
-{{< toc >}}
-
 ## Description
 
 Create a new HCatalog table. For more information, please refer to the Hive documentation for [CREATE TABLE]({{< ref "#create-table" >}}).
diff --git a/content/docs/latest/webhcat/webhcat-reference-puttablelike.md b/content/docs/latest/webhcat/webhcat-reference-puttablelike.md
index 797db849..006c4e34 100644
--- a/content/docs/latest/webhcat/webhcat-reference-puttablelike.md
+++ b/content/docs/latest/webhcat/webhcat-reference-puttablelike.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference PutTableLike
 
-{{< toc >}}
-
 ## Description
 
 Create a new HCatalog table like an existing one.
diff --git a/content/docs/latest/webhcat/webhcat-reference-responsetypes.md b/content/docs/latest/webhcat/webhcat-reference-responsetypes.md
index 35b1a6e4..7f7f41e2 100644
--- a/content/docs/latest/webhcat/webhcat-reference-responsetypes.md
+++ b/content/docs/latest/webhcat/webhcat-reference-responsetypes.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference ResponseTypes
 
-{{< toc >}}
-
 ## Description
 
 Returns a list of the response types supported by WebHCat (Templeton).
diff --git a/content/docs/latest/webhcat/webhcat-reference-status.md b/content/docs/latest/webhcat/webhcat-reference-status.md
index 7a864b47..f15115e9 100644
--- a/content/docs/latest/webhcat/webhcat-reference-status.md
+++ b/content/docs/latest/webhcat/webhcat-reference-status.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference Status
 
-{{< toc >}}
-
 ## Description
 
 Returns the current status of the WebHCat (Templeton) server. Useful for heartbeat monitoring.
diff --git a/content/docs/latest/webhcat/webhcat-reference-version.md b/content/docs/latest/webhcat/webhcat-reference-version.md
index 7d64ef7c..4b3ee5de 100644
--- a/content/docs/latest/webhcat/webhcat-reference-version.md
+++ b/content/docs/latest/webhcat/webhcat-reference-version.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference Version
 
-{{< toc >}}
-
 ## Description
 
 Returns a list of supported versions and the current version.
diff --git a/content/docs/latest/webhcat/webhcat-reference-versionhadoop.md b/content/docs/latest/webhcat/webhcat-reference-versionhadoop.md
index d4016141..5db30358 100644
--- a/content/docs/latest/webhcat/webhcat-reference-versionhadoop.md
+++ b/content/docs/latest/webhcat/webhcat-reference-versionhadoop.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference VersionHadoop
 
-{{< toc >}}
-
 ## Description
 
 Return the version of Hadoop being run when WebHCat creates a MapReduce job ([POST mapreduce/jar]({{< ref "webhcat-reference-mapreducejar" >}}) or [mapreduce/streaming]({{< ref "webhcat-reference-mapreducestream" >}})).
diff --git a/content/docs/latest/webhcat/webhcat-reference-versionhive.md b/content/docs/latest/webhcat/webhcat-reference-versionhive.md
index 23042999..130bf18d 100644
--- a/content/docs/latest/webhcat/webhcat-reference-versionhive.md
+++ b/content/docs/latest/webhcat/webhcat-reference-versionhive.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat Reference VersionHive
 
-{{< toc >}}
-
 ## Description
 
 Return the version of Hive being run when WebHCat issues Hive queries or commands ([POST hive]({{< ref "webhcat-reference-hive" >}})).
diff --git a/content/docs/latest/webhcat/webhcat-usingwebhcat.md b/content/docs/latest/webhcat/webhcat-usingwebhcat.md
index 82f05dac..de3f155a 100644
--- a/content/docs/latest/webhcat/webhcat-usingwebhcat.md
+++ b/content/docs/latest/webhcat/webhcat-usingwebhcat.md
@@ -5,8 +5,6 @@ date: 2024-12-12
 
 # Apache Hive : WebHCat UsingWebHCat
 
-{{< toc >}}
-
 ## Version information
 
 The HCatalog project graduated from the Apache incubator and merged with the Hive project on March 26, 2013.