
Get Started - not able to access dashboard #628

Open
cre4ture opened this issue Feb 11, 2025 · 11 comments
Labels
bug Something isn't working

Comments

@cre4ture

Hello,

I'm totally new to this topic and may be making some beginner mistakes.
But I thought I could at least get my setup to a stage where I can access the dashboard.
With the help of the "Get Started" guide, and by trial and error across several possible interpretations, I got a cluster running that reports all services READY.
The tenant-root namespace also looks OK.

I hope you can give me some input on this.
As a reward, I will share all my findings about potential improvements to the "Get Started" documentation ;-)

The issues I can see are these:

  • I can't ping the second IP address listed for "root-ingress-controller".
  • For DNS, I'm not sure how to run this example without having a real domain ready to use. I guess there should be a way to work around this for testing purposes, no? I tried to use my home router's default domain, "fritz.box", and assigned the working IP to a hostname so that I could access it via "https://dashboard.fritz.box/". But all I get is "default backend - 404".

Below is some console output from a few commands in the tutorial.
Please tell me what else you need for diagnosis.

Thanks, creature

uli@hp-poly:~/cozy-c$ kubectl get hr -A
NAMESPACE                        NAME                        AGE     READY   STATUS
cozy-cert-manager                cert-manager                7h43m   True    Helm install succeeded for release cozy-cert-manager/cert-manager.v1 with chart [email protected]
cozy-cert-manager                cert-manager-crds           7h43m   True    Helm install succeeded for release cozy-cert-manager/cert-manager-crds.v1 with chart [email protected]
cozy-cert-manager                cert-manager-issuers        7h43m   True    Helm install succeeded for release cozy-cert-manager/cert-manager-issuers.v1 with chart [email protected]
cozy-cilium                      cilium                      7h43m   True    Helm upgrade succeeded for release cozy-cilium/cilium.v4 with chart [email protected]+1
cozy-clickhouse-operator         clickhouse-operator         7h43m   True    Helm install succeeded for release cozy-clickhouse-operator/clickhouse-operator.v1 with chart [email protected]
cozy-cluster-api                 capi-operator               7h43m   True    Helm install succeeded for release cozy-cluster-api/capi-operator.v1 with chart [email protected]
cozy-cluster-api                 capi-providers              7h43m   True    Helm install succeeded for release cozy-cluster-api/capi-providers.v1 with chart [email protected]
cozy-dashboard                   dashboard                   7h43m   True    Helm upgrade succeeded for release cozy-dashboard/dashboard.v2 with chart [email protected]
cozy-etcd-operator               etcd-operator               7h43m   True    Helm install succeeded for release cozy-etcd-operator/etcd-operator.v1 with chart [email protected]
cozy-fluxcd                      fluxcd                      7h43m   True    Helm upgrade succeeded for release cozy-fluxcd/fluxcd.v2 with chart [email protected]
cozy-fluxcd                      fluxcd-operator             7h43m   True    Helm upgrade succeeded for release cozy-fluxcd/fluxcd-operator.v2 with chart [email protected]
cozy-grafana-operator            grafana-operator            7h43m   True    Helm install succeeded for release cozy-grafana-operator/grafana-operator.v1 with chart [email protected]
cozy-kafka-operator              kafka-operator              7h43m   True    Helm install succeeded for release cozy-kafka-operator/kafka-operator.v1 with chart [email protected]
cozy-kamaji                      kamaji                      7h43m   True    Helm install succeeded for release cozy-kamaji/kamaji.v1 with chart [email protected]
cozy-kubeovn                     kubeovn                     7h43m   True    Helm upgrade succeeded for release cozy-kubeovn/kubeovn.v4 with chart [email protected]
cozy-kubevirt-cdi                kubevirt-cdi                7h43m   True    Helm install succeeded for release cozy-kubevirt-cdi/kubevirt-cdi.v1 with chart [email protected]
cozy-kubevirt-cdi                kubevirt-cdi-operator       7h43m   True    Helm install succeeded for release cozy-kubevirt-cdi/kubevirt-cdi-operator.v1 with chart [email protected]
cozy-kubevirt                    kubevirt                    7h43m   True    Helm install succeeded for release cozy-kubevirt/kubevirt.v1 with chart [email protected]
cozy-kubevirt                    kubevirt-instancetypes      7h43m   True    Helm install succeeded for release cozy-kubevirt/kubevirt-instancetypes.v1 with chart [email protected]
cozy-kubevirt                    kubevirt-operator           7h43m   True    Helm install succeeded for release cozy-kubevirt/kubevirt-operator.v1 with chart [email protected]
cozy-linstor                     linstor                     7h43m   True    Helm install succeeded for release cozy-linstor/linstor.v1 with chart [email protected]
cozy-linstor                     piraeus-operator            7h43m   True    Helm install succeeded for release cozy-linstor/piraeus-operator.v1 with chart [email protected]
cozy-mariadb-operator            mariadb-operator            7h43m   True    Helm install succeeded for release cozy-mariadb-operator/mariadb-operator.v1 with chart [email protected]
cozy-metallb                     metallb                     7h43m   True    Helm install succeeded for release cozy-metallb/metallb.v1 with chart [email protected]
cozy-monitoring                  monitoring-agents           7h43m   True    Helm install succeeded for release cozy-monitoring/monitoring-agents.v1 with chart [email protected]
cozy-objectstorage-controller    objectstorage-controller    7h43m   True    Helm install succeeded for release cozy-objectstorage-controller/objectstorage-controller.v1 with chart [email protected]
cozy-postgres-operator           postgres-operator           7h43m   True    Helm install succeeded for release cozy-postgres-operator/postgres-operator.v1 with chart [email protected]
cozy-rabbitmq-operator           rabbitmq-operator           7h43m   True    Helm install succeeded for release cozy-rabbitmq-operator/rabbitmq-operator.v1 with chart [email protected]
cozy-redis-operator              redis-operator              7h43m   True    Helm install succeeded for release cozy-redis-operator/redis-operator.v1 with chart [email protected]
cozy-snapshot-controller         snapshot-controller         7h43m   True    Helm install succeeded for release cozy-snapshot-controller/snapshot-controller.v1 with chart [email protected]
cozy-system                      cozystack-api               7h43m   True    Helm install succeeded for release cozy-system/cozystack-api.v1 with chart [email protected]
cozy-system                      cozystack-controller        7h43m   True    Helm install succeeded for release cozy-system/cozystack-controller.v1 with chart [email protected]
cozy-victoria-metrics-operator   victoria-metrics-operator   7h43m   True    Helm install succeeded for release cozy-victoria-metrics-operator/victoria-metrics-operator.v1 with chart [email protected]
tenant-root                      etcd                        4h40m   True    Helm install succeeded for release tenant-root/etcd.v1 with chart [email protected]
tenant-root                      ingress                     4h40m   True    Helm upgrade succeeded for release tenant-root/ingress.v10 with chart [email protected]
tenant-root                      ingress-nginx-system        4h40m   True    Helm upgrade succeeded for release tenant-root/ingress-nginx-system.v9 with chart [email protected]
tenant-root                      monitoring                  4h40m   True    Helm install succeeded for release tenant-root/monitoring.v1 with chart [email protected]



uli@hp-poly:~/cozy-c$ kubectl get svc -n tenant-root
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP                     PORT(S)                      AGE
alerta                              ClusterIP   10.96.10.207    <none>                          80/TCP                       4h39m
alerta-db-r                         ClusterIP   10.96.144.141   <none>                          5432/TCP                     4h39m
alerta-db-ro                        ClusterIP   10.96.212.31    <none>                          5432/TCP                     4h39m
alerta-db-rw                        ClusterIP   10.96.143.181   <none>                          5432/TCP                     4h39m
etcd                                ClusterIP   10.96.10.75     <none>                          2379/TCP                     4h39m
etcd-headless                       ClusterIP   None            <none>                          2380/TCP,2379/TCP            4h39m
grafana-db-r                        ClusterIP   10.96.168.115   <none>                          5432/TCP                     4h39m
grafana-db-ro                       ClusterIP   10.96.110.161   <none>                          5432/TCP                     4h39m
grafana-db-rw                       ClusterIP   10.96.97.169    <none>                          5432/TCP                     4h39m
grafana-service                     ClusterIP   10.96.245.29    <none>                          3000/TCP                     4h39m
root-ingress-controller             ClusterIP   10.96.234.194   192.168.178.10,192.168.178.11   80/TCP,443/TCP               4h39m
root-ingress-controller-admission   ClusterIP   10.96.203.126   <none>                          443/TCP                      4h39m
root-ingress-controller-metrics     ClusterIP   10.96.148.12    <none>                          10254/TCP                    4h39m
root-ingress-defaultbackend         ClusterIP   10.96.23.180    <none>                          80/TCP                       4h39m
vlogs-generic                       ClusterIP   10.96.155.89    <none>                          9428/TCP                     4h39m
vmalert-vmalert-shortterm           ClusterIP   10.96.106.231   <none>                          8080/TCP                     4h38m
vmalertmanager-alertmanager         ClusterIP   None            <none>                          9093/TCP,9094/TCP,9094/UDP   4h39m
vminsert-longterm                   ClusterIP   10.96.27.181    <none>                          8480/TCP                     25m
vminsert-shortterm                  ClusterIP   10.96.121.58    <none>                          8480/TCP                     4h37m
vmselect-longterm                   ClusterIP   None            <none>                          8481/TCP                     25m
vmselect-shortterm                  ClusterIP   None            <none>                          8481/TCP                     4h38m
vmstorage-longterm                  ClusterIP   None            <none>                          8482/TCP,8400/TCP,8401/TCP   4h38m
vmstorage-shortterm                 ClusterIP   None            <none>                          8482/TCP,8400/TCP,8401/TCP   4h38m


uli@hp-poly:~/cozy-c$ ping 192.168.178.10
PING 192.168.178.10 (192.168.178.10) 56(84) bytes of data.
64 bytes from 192.168.178.10: icmp_seq=1 ttl=63 time=1.36 ms
64 bytes from 192.168.178.10: icmp_seq=2 ttl=63 time=2.08 ms
64 bytes from 192.168.178.10: icmp_seq=3 ttl=63 time=2.74 ms
64 bytes from 192.168.178.10: icmp_seq=4 ttl=63 time=1.47 ms
64 bytes from 192.168.178.10: icmp_seq=5 ttl=63 time=2.67 ms
64 bytes from 192.168.178.10: icmp_seq=6 ttl=63 time=2.45 ms
^C
--- 192.168.178.10 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5009ms
rtt min/avg/max/mdev = 1.359/2.128/2.743/0.549 ms


uli@hp-poly:~/cozy-c$ ping 192.168.178.11
PING 192.168.178.11 (192.168.178.11) 56(84) bytes of data.
From 192.168.178.28 icmp_seq=3 Destination Host Unreachable
From 192.168.178.28 icmp_seq=6 Destination Host Unreachable
^C
--- 192.168.178.11 ping statistics ---
8 packets transmitted, 0 received, +2 errors, 100% packet loss, time 7304ms



uli@hp-poly:~/cozy-c$ ping dashboard.fritz.box
PING dashboard.fritz.box (192.168.178.10) 56(84) bytes of data.
64 bytes from dashboard.fritz.box (192.168.178.10): icmp_seq=1 ttl=63 time=1.33 ms
64 bytes from dashboard.fritz.box (192.168.178.10): icmp_seq=2 ttl=63 time=1.47 ms
64 bytes from dashboard.fritz.box (192.168.178.10): icmp_seq=3 ttl=63 time=1.26 ms
^C
--- dashboard.fritz.box ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.262/1.353/1.470/0.086 ms



uli@hp-poly:~/cozy-c$ linstor node list
+---------------------------------------------------------------+
| Node        | NodeType  | Addresses                  | State  |
|===============================================================|
| talos-2f314 | SATELLITE | 192.168.178.107:3367 (SSL) | Online |
| talos-723a2 | SATELLITE | 192.168.178.108:3367 (SSL) | Online |
| talos-08346 | SATELLITE | 192.168.178.101:3367 (SSL) | Online |
+---------------------------------------------------------------+

uli@hp-poly:~/cozy-c$ linstor sp l
+---------------------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool          | Node        | Driver   | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName                       |
|===================================================================================================================================================|
| DfltDisklessStorPool | talos-08346 | DISKLESS |          |              |               | False        | Ok    | talos-08346;DfltDisklessStorPool |
| DfltDisklessStorPool | talos-2f314 | DISKLESS |          |              |               | False        | Ok    | talos-2f314;DfltDisklessStorPool |
| DfltDisklessStorPool | talos-723a2 | DISKLESS |          |              |               | False        | Ok    | talos-723a2;DfltDisklessStorPool |
| data                 | talos-08346 | ZFS      | data     |     3.58 TiB |      3.72 TiB | True         | Ok    | talos-08346;data                 |
| data                 | talos-2f314 | ZFS      | data     |     3.56 TiB |      3.72 TiB | True         | Ok    | talos-2f314;data                 |
| data                 | talos-723a2 | ZFS      | data     |     3.55 TiB |      3.72 TiB | True         | Ok    | talos-723a2;data                 |
+---------------------------------------------------------------------------------------------------------------------------------------------------+

uli@hp-poly:~/cozy-c$ kubectl get storageclasses
NAME              PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   linstor.csi.linbit.com   Delete          WaitForFirstConsumer   true                   4h57m
replicated        linstor.csi.linbit.com   Delete          Immediate              true                   4h57m

dosubot bot commented Feb 11, 2025

Hello @cre4ture 👋 I'm here to assist you with your issue and help you navigate through any beginner mistakes. I can provide guidance and support while we wait for a human maintainer.

To resolve the issue of not being able to access the dashboard, follow these steps:

  1. Verify the External IP Configuration:
    Ensure that the root-ingress-controller service has an EXTERNAL-IP assigned. You can check this by running:

    kubectl get svc -n tenant-root root-ingress-controller

    The output should show an EXTERNAL-IP like 192.168.178.10 or 192.168.178.11. It seems you have two IPs, but only 192.168.178.10 is reachable.

  2. Patch the Ingress for Dashboard Access:
    If the EXTERNAL-IP is correctly assigned, ensure that the dashboard access is enabled via the ingress controller:

    kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{"spec":{"dashboard": true}}'
  3. Check Domain Configuration:
    Make sure that your local domain (e.g., dashboard.fritz.box) is correctly pointing to the EXTERNAL-IP of the ingress controller (192.168.178.10). This can be done by adding an entry in your /etc/hosts file or configuring your DNS server.

  4. Verify Network Connectivity:
    Ensure that there are no network issues preventing access to the EXTERNAL-IP. You should be able to ping 192.168.178.10 from your local machine, which you can.

  5. Access the Dashboard:
    Use the configured local domain (e.g., dashboard.fritz.box) to access the system dashboard.

  6. Authentication Token:
    If you need to authenticate, retrieve the authentication token from tenant-root:

    kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'

If these steps do not resolve the issue, further investigation into network configurations and DNS settings may be necessary. Additionally, ensure that all pods, including the root-ingress-controller, are running as expected by checking their status with:

kubectl get pod -n tenant-root

For local testing without a true domain, using a local domain like fritz.box should work as long as it resolves to the correct IP address [1][2].
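
As a minimal sketch of the /etc/hosts approach from step 3 (a local-testing workaround, not an official instruction; the hostname must also match the host configured in the ingress resource, which can be checked with kubectl get ingress -A):

# /etc/hosts on the test client
192.168.178.10   dashboard.fritz.box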


dosubot added the "bug" label on Feb 11, 2025
@cre4ture (Author)

dosubot mentioned one new aspect: using /etc/hosts, which I haven't tried yet. This should be added to the "Get Started" documentation. But dosubot also confirmed that my own approach should actually have worked as well.

So it would be nice if a real developer could have a look. Please.

@cre4ture (Author)

@dosubot is there a way to configure a gateway (or similar) which allows direct access to the internal IPs of the cluster?


dosubot bot commented Feb 13, 2025

To allow direct access to the internal IPs of your cluster, you can configure a gateway or similar solution. One approach is to use the ingress-nginx controller's internal service configuration. This involves setting up internal service settings such as clusterIP, externalIPs, loadBalancerIP, and loadBalancerSourceRanges to expose internal IPs.

Additionally, you can enable an internal load balancer by setting specific annotations. For example, in a Kubernetes environment on AWS, you can use the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true" to ensure the load balancer is internal. Here's an example configuration snippet:

controller:
  service:
    type: ClusterIP
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      ports:
        http: 80
        https: 443
      targetPorts:
        http: 80
        https: 443

If you are using Kube-OVN, you can configure a VPC NAT gateway to manage network address translation for your cluster. This can be done by setting up a ConfigMap for the VPC NAT configuration:

kind: ConfigMap
apiVersion: v1
metadata:
  name: ovn-vpc-nat-config
  namespace: {{ .Values.namespace }}
  annotations:
    kubernetes.io/description: |
      kube-ovn vpc-nat common config
data:
  image: {{ .Values.global.registry.address }}/{{ .Values.global.images.kubeovn.vpcRepository }}:{{ .Values.global.images.kubeovn.tag }}

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: ovn-vpc-nat-gw-config
  namespace: kube-system
data:
  enable-vpc-nat-gw: "{{ .Values.func.ENABLE_NAT_GW }}"

These configurations help control how internal IPs are exposed and accessed, ensuring the load balancer is internal and the appropriate ports are configured [1][2][3].

For further assistance, it would be beneficial for a maintainer to take a look at your setup. This is a tough nut to crack, and their expertise could really help.


@chumkaska (Contributor) commented Feb 14, 2025

@dosubot is there a way to configure a gateway (or similar) which allows direct access to the internal IPs of the cluster?

You can use Telepresence to access cluster resources: https://www.telepresence.io/ (see also https://github.com/aenix-io/cozystack/tree/main/packages/system/telepresence).
telepresence connect --also-proxy=10.244.0.0/16
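
A minimal usage sketch, assuming Telepresence is installed on the workstation and 10.244.0.0/16 is the pod CIDR as above (the ClusterIP comes from the service listing earlier in this thread):

# connect to the cluster and additionally proxy the given CIDR
telepresence connect --also-proxy=10.244.0.0/16
# verify the connection
telepresence status
# cluster-internal addresses should now be reachable directly, e.g. the ingress controller ClusterIP
curl -k https://10.96.234.194/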

@chumkaska (Contributor)

> [quoted: the original issue report and console output from above]

To access the dashboard, you can run:

kubectl port-forward -n cozy-dashboard service/dashboard 8080:80
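
With the port-forward active, the dashboard should then be reachable at http://localhost:8080 in a browser; the login token can be read with the kubectl get secret command shown earlier in this thread. A quick smoke test (hypothetical, run in a second terminal while the forward is active):

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/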

Please show the output of:

kubectl get ingress -A

@cre4ture (Author)

@chumkaska yay, thanks a lot. You made me as happy as a little kid :-)

  1. kubectl port-forward -n cozy-dashboard service/dashboard 8080:80 worked as suggested. I could log in to the dashboard for the very first time.
  2. I had apparently configured the domain name wrongly somewhere. This became visible by running kubectl get ingress -A, as suggested. I created an entry for dashboard.cozy.local in /etc/hosts that maps to 192.168.178.10, as dosubot had already recommended, and it finally worked via that domain name as well (see the hosts-file sketch after the ingress listing below). I will re-read the "Get Started" documentation to check which step I got wrong. Maybe I can fix it to use the "fritz.box" domain; that would be great, because then I could reach it from every host in the network without touching /etc/hosts on every test client.
uli@hp-poly:~/cozy-c$ kubectl get ingress -A
NAMESPACE        NAME                    CLASS         HOSTS                  ADDRESS         PORTS     AGE
cozy-dashboard   dashboard-tenant-root   tenant-root   dashboard.cozy.local   10.96.234.194   80, 443   3d1h
tenant-root      alerta                  tenant-root   alerta.cozy.local      10.96.234.194   80, 443   3d1h
tenant-root      grafana-ingress         tenant-root   grafana.cozy.local     10.96.234.194   80, 443   3d1h
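
For reference, a hosts-file sketch matching the ingress hosts above (a local-testing workaround only; 192.168.178.10 is the reachable external IP of root-ingress-controller):

# /etc/hosts on each test client
192.168.178.10   dashboard.cozy.local alerta.cozy.local grafana.cozy.local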

I'm happy to receive any further advice. For example, it's unclear to me why only 192.168.178.10 works, but not the other external IP, 192.168.178.11.
But anyway - even with just this setup I'm already confident that I can play around and gain some experience.

As promised, I would then also go ahead and create a pull request with suggested improvements to the documentation. Is that fine?


dosubot bot commented Feb 14, 2025

You can use Telepresence to connect your local environment to the cluster, allowing you to access internal IPs directly. By running the command telepresence connect --also-proxy=10.244.0.0/16, you can set up a proxy for the specified IP range, enabling direct access to cluster resources. This tool is particularly useful for local development and testing with Kubernetes services. For more details, you can refer to the Telepresence documentation.


@chumkaska (Contributor)

> [quoted: the previous comment from @cre4ture]

The second one isn't working because you have only one A record in DNS. Or did you create an A record for the second one and it's still not working?
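
If the second address does not even answer ping (as in the output above), it may also be worth checking whether it is actually being announced on the network at all. A diagnostic sketch (the MetalLB namespace and speaker label are assumptions based on the cozy-metallb release listed earlier, not confirmed for this setup):

kubectl describe svc -n tenant-root root-ingress-controller
kubectl logs -n cozy-metallb -l app.kubernetes.io/component=speaker --tail=50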

@tym83 (Collaborator) commented Feb 20, 2025

@cre4ture how is it going? Is it fine now, or do you still need some help with that?

@cre4ture (Author)

@chumkaska @tym83 I'm still busy familiarizing myself with everything. I'm making slow progress, and the issue with the second IP is not yet solved, but it's currently not my highest priority.
I'm very thankful that you're offering your (further) help. But I also don't want to ask about every tiny step - I guess that would be too much effort on your side.
