I used juju deploy canonical-kubernetes to deploy Kubernetes. But when I run ./kubectl cluster-info as the Canonical Distribution of Kubernetes charm documentation describes, I get the error below:

Error from server: an error on the server ("<html>\r\n<head><title>502
Bad Gateway</title></head>\r\n<body bgcolor=\"white\">\r\n<center>
<h1>502 Bad Gateway</h1></center>\r\n<hr><center>nginx/1.10.0
(Ubuntu)</center>\r\n</body>\r\n</html>") has prevented the request from succeeding
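
That is, the steps were:

juju deploy canonical-kubernetes
./kubectl cluster-info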

Juju status output:

MODEL    CONTROLLER  CLOUD/REGION         VERSION
default  lxd-test    localhost/localhost  2.0-rc3

APP                    VERSION  STATUS       SCALE  CHARM                  STORE       REV  OS      NOTES
easyrsa                3.0.1    active           1  easyrsa                jujucharms    2  ubuntu  
elasticsearch                   active           2  elasticsearch          jujucharms   19  ubuntu  
etcd                   2.2.5    active           3  etcd                   jujucharms   13  ubuntu  
filebeat                        active           4  filebeat               jujucharms    5  ubuntu  
flannel                0.6.1    waiting          4  flannel                jujucharms    3  ubuntu  
kibana                          active           1  kibana                 jujucharms   15  ubuntu  
kubeapi-load-balancer  1.10.0   active           1  kubeapi-load-balancer  jujucharms    2  ubuntu  exposed
kubernetes-master      1.4.0    maintenance      1  kubernetes-master      jujucharms    3  ubuntu  
kubernetes-worker      1.4.0    waiting          3  kubernetes-worker      jujucharms    3  ubuntu  exposed
topbeat                         active           3  topbeat                jujucharms    5  ubuntu  

UNIT                      WORKLOAD     AGENT      MACHINE  PUBLIC-ADDRESS  PORTS            MESSAGE
easyrsa/0*                active       idle       0        10.181.160.79                    Certificate Authority connected.
elasticsearch/0*          active       idle       1        10.181.160.62   9200/tcp         Ready
elasticsearch/1           active       idle       2        10.181.160.72   9200/tcp         Ready
etcd/0*                   active       idle       3        10.181.160.41   2379/tcp         Healthy with 3 known peers. (leader)
etcd/1                    active       idle       4        10.181.160.135  2379/tcp         Healthy with 3 known peers.
etcd/2                    active       idle       5        10.181.160.204  2379/tcp         Healthy with 3 known peers.
kibana/0*                 active       idle       6        10.181.160.54   80/tcp,9200/tcp  ready
kubeapi-load-balancer/0*  active       idle       7        10.181.160.42   443/tcp          Loadbalancer ready.
kubernetes-master/0*      maintenance  idle       8        10.181.160.208                   Rendering authentication templates.
  filebeat/0              active       idle                10.181.160.208                   Filebeat ready.
  flannel/0*              waiting      idle                10.181.160.208                   Flannel is starting up.
kubernetes-worker/0*      waiting      idle       9        10.181.160.94                    Waiting for cluster-manager to initiate start.
  filebeat/1*             active       idle                10.181.160.94                    Filebeat ready.
  flannel/1               waiting      idle                10.181.160.94                    Flannel is starting up.
  topbeat/0               active       idle                10.181.160.94                    Topbeat ready.
kubernetes-worker/1       waiting      idle       10       10.181.160.95                    Waiting for cluster-manager to initiate start.
  filebeat/2              active       idle                10.181.160.95                    Filebeat ready.
  flannel/2               waiting      idle                10.181.160.95                    Flannel is starting up.
  topbeat/1*              active       executing           10.181.160.95                    (update-status) Topbeat ready.
kubernetes-worker/2       waiting      idle       11       10.181.160.148                   Waiting for cluster-manager to initiate start.
  filebeat/3              active       idle                10.181.160.148                   Filebeat ready.
  flannel/3               waiting      idle                10.181.160.148                   Flannel is starting up.
  topbeat/2               active       idle                10.181.160.148                   Topbeat ready.

MACHINE  STATE    DNS             INS-ID          SERIES  AZ
0        started  10.181.160.79   juju-23ce86-0   xenial  
1        started  10.181.160.62   juju-23ce86-1   trusty  
2        started  10.181.160.72   juju-23ce86-2   trusty  
3        started  10.181.160.41   juju-23ce86-3   xenial  
4        started  10.181.160.135  juju-23ce86-4   xenial  
5        started  10.181.160.204  juju-23ce86-5   xenial  
6        started  10.181.160.54   juju-23ce86-6   trusty  
7        started  10.181.160.42   juju-23ce86-7   xenial  
8        started  10.181.160.208  juju-23ce86-8   xenial  
9        started  10.181.160.94   juju-23ce86-9   xenial  
10       started  10.181.160.95   juju-23ce86-10  xenial  
11       started  10.181.160.148  juju-23ce86-11  xenial  

RELATION           PROVIDES               CONSUMES               TYPE
certificates       easyrsa                kubeapi-load-balancer  regular
certificates       easyrsa                kubernetes-master      regular
certificates       easyrsa                kubernetes-worker      regular
peer               elasticsearch          elasticsearch          peer
elasticsearch      elasticsearch          filebeat               regular
rest               elasticsearch          kibana                 regular
elasticsearch      elasticsearch          topbeat                regular
cluster            etcd                   etcd                   peer
etcd               etcd                   flannel                regular
etcd               etcd                   kubernetes-master      regular
juju-info          filebeat               kubernetes-master      regular
juju-info          filebeat               kubernetes-worker      regular
sdn-plugin         flannel                kubernetes-master      regular
sdn-plugin         flannel                kubernetes-worker      regular
loadbalancer       kubeapi-load-balancer  kubernetes-master      regular
kube-api-endpoint  kubeapi-load-balancer  kubernetes-worker      regular
beats-host         kubernetes-master      filebeat               subordinate
host               kubernetes-master      flannel                subordinate
kube-dns           kubernetes-master      kubernetes-worker      regular
beats-host         kubernetes-worker      filebeat               subordinate
host               kubernetes-worker      flannel                subordinate
beats-host         kubernetes-worker      topbeat                subordinate
  • Are all the charm units in an active state? – Bilal Baqar Oct 11 '16 at 07:41
  • This is indicative of a problem establishing communication between the NGINX load balancer and kube-apiserver, which would be a bug. Can I trouble you to pastebin the juju status output? juju status | pastebinit and copy that link here as a comment. I can direct from there. – lazyPower Oct 12 '16 at 18:51
  • http://paste.ubuntu.com/23315901/ lots of waiting status – fkpwolf Oct 13 '16 at 03:19

1 Answer

This seems to be because you're deploying Kubernetes on LXD. According to the README for Canonical Kubernetes:

kubernetes-master, kubernetes-worker, kubeapi-load-balancer and etcd are not supported on LXD at this time.

This is a limitation between Docker and LXD - one we're hoping to have sorted soon. In the meantime those components need to be run on at least a VM.

You can do this manually with LXD: deploy the rest of the components to LXD, and then manually launch a few KVM instances on your machine for the components listed above.
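
I haven't verified this end to end, but roughly, assuming you have already created the KVM instances yourself (for example with uvtool) and can SSH into them as the ubuntu user, the manual placement could look something like this (the machine numbers and host names are placeholders):

juju add-machine ssh:ubuntu@<kvm-host-1>   # register manually created KVM instances with Juju
juju add-machine ssh:ubuntu@<kvm-host-2>
juju status                                # note the machine numbers Juju assigned

# etcd, kubernetes-master, kubernetes-worker and kubeapi-load-balancer
# go onto the KVM machines; everything else can stay in LXD containers.
juju deploy etcd --to <kvm-machine>
juju deploy kubernetes-master --to <kvm-machine>
juju deploy kubernetes-worker --to <kvm-machine>
juju deploy kubeapi-load-balancer --to <kvm-machine>
juju deploy easyrsa --to lxd
# ...plus the juju add-relation calls from the canonical-kubernetes bundle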

I'll try to get a clean set of instructions for that and reply back here with them.

Marco Ceppi
  • Juju 2.0 (https://jujucharms.com/) supports Kubernetes using LXD and ZFS. This solution is shown on its homepage, so I think LXD should work. – fkpwolf Oct 13 '16 at 13:22
  • I'm not sure where you're seeing that. Juju supports LXD, however not every workload can be placed in LXD. There are very few that have this problem, but Kubernetes is one of them. We're working to address this, and there may already be a workaround, but I'm not aware of one. – Marco Ceppi Oct 13 '16 at 14:35