I’m trying to run HiveServer2 in Kubernetes. The container starts normally and there are no errors in the logs:
+ : derby
+ SKIP_SCHEMA_INIT=true
+ [[ true = true ]]
+ VERBOSE_MODE=--verbose
+ export HIVE_CONF_DIR=/opt/hive/conf
+ HIVE_CONF_DIR=/opt/hive/conf
+ '[' -d '' ']'
+ export 'HADOOP_CLIENT_OPTS= -Xmx1G '
+ HADOOP_CLIENT_OPTS=' -Xmx1G '
+ [[ true == false ]]
+ '[' hiveserver2 == hiveserver2 ']'
+ export 'HADOOP_CLASSPATH=/opt/tez/*:/opt/tez/lib/*:'
+ HADOOP_CLASSPATH='/opt/tez/*:/opt/tez/lib/*:'
+ exec /opt/hive/bin/hive --skiphadoopversion --skiphbasecp --service hiveserver2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.18.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/tez/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2024-07-26 05:26:24: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.18.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/tez/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Hive Session ID = bb16c35f-808a-4f82-8df2-06d0d16fd251
I exposed it via a LoadBalancer service, but I cannot connect to the HiveServer2 web UI at http://192.168.0.240:10002/
kubectl get services
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                           AGE
hiveserver2-hiveserver2-standalone   LoadBalancer   10.152.183.129   192.168.0.240   10002:32450/TCP,10000:32301/TCP   34m
kubernetes                           ClusterIP      10.152.183.1     <none>          443/TCP                           4d13h
The same image, when run directly on one of the nodes (outside Kubernetes), works fine and I can reach the web UI via the host IP on port 10002.
An nginx image run on the same Kubernetes cluster also works fine, and I get the default page via the LoadBalancer-provided IP.
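To isolate which layer is failing, this is the kind of check I have in mind (a sketch; it assumes kubectl access and uses the names from the `kubectl get services` output above):

```shell
# 1) Does the Service actually have endpoints? An empty list would mean
#    the selector matches no pods.
kubectl get endpoints hiveserver2-hiveserver2-standalone || true

# 2) Bypass the LoadBalancer entirely: forward straight to the deployment
#    and curl the web UI. If this works, the Service/LB layer is at fault;
#    if it fails too, the container itself is not reachable on 10002.
kubectl port-forward deploy/hiveserver2-hiveserver2-standalone 10002:10002 >/dev/null 2>&1 &
sleep 2
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:10002/ || true

# Clean up the background port-forward, if one was started.
kill %1 2>/dev/null || true
```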
I have tried various things, but nothing has worked so far.
This is the output of helm template for my release:
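One thing I have not ruled out is the web UI bind address. As far as I can tell from the Hive configuration reference, hive.server2.webui.host already defaults to 0.0.0.0, but pinning it (and the Thrift bind host for port 10000) explicitly in hive-site.xml would look like this (a sketch, not my actual config):

```xml
<!-- hive-site.xml: hypothetical explicit bind settings; per the Hive
     configuration docs the defaults should already be 0.0.0.0 -->
<property>
  <name>hive.server2.webui.host</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hive.server2.thrift.bind.host</name>
  <value>0.0.0.0</value>
</property>
```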
---
# Source: hiveserver2-standalone/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-hiveserver2-standalone
  labels:
    helm.sh/chart: hiveserver2-standalone-0.1.0
    app.kubernetes.io/name: hiveserver2-standalone
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: hiveserver2-standalone/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-hiveserver2-standalone
  labels:
    helm.sh/chart: hiveserver2-standalone-0.1.0
    app.kubernetes.io/name: hiveserver2-standalone
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: LoadBalancer
  ports:
    - port: 10002
      protocol: TCP
      name: http
    - port: 10000
      protocol: TCP
      name: beeline
  selector:
    app.kubernetes.io/name: hiveserver2-standalone
    app.kubernetes.io/instance: release-name
---
# Source: hiveserver2-standalone/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-hiveserver2-standalone
  labels:
    helm.sh/chart: hiveserver2-standalone-0.1.0
    app.kubernetes.io/name: hiveserver2-standalone
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hiveserver2-standalone
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        helm.sh/chart: hiveserver2-standalone-0.1.0
        app.kubernetes.io/name: hiveserver2-standalone
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/version: "1.16.0"
        app.kubernetes.io/managed-by: Helm
    spec:
      serviceAccountName: release-name-hiveserver2-standalone
      securityContext: {}
      containers:
        - name: hiveserver2-standalone
          securityContext: {}
          image: "erntoto/hive:4.0.0"
          imagePullPolicy: Always
          env:
            - name: SERVICE_NAME
              value: hiveserver2
            - name: VERBOSE
              value: "true"
            - name: IS_RESUME
              value: "true"
          ports:
            - name: http
              containerPort: 10002
              protocol: TCP
            - name: beeline
              containerPort: 10000
              protocol: TCP
          livenessProbe: null
          readinessProbe: null
          resources: {}