How to change, edit, and save the /etc/hosts file from Azure Bash for Kubernetes?

I deployed a project to Kubernetes, and for its gateway to route requests to the services it requires the following host entries:

127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project

and so on.

I can't run any sudo commands, so sudo nano /etc/hosts doesn't work, and vi /etc/hosts gives a permission denied error. How can I edit the /etc/hosts file, or do some configuration on Azure, to make it work like that?

Edit:

To give more information, I have uploaded a project to Kubernetes that has reverse-proxy settings.

So the web app of that project is not reachable via IP. Instead, when I run the application locally, I have to edit the hosts file of the computer I'm using with entries like

127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project

and so on. Then whenever I type web-app.my.project the gateway routes the request to the web-app service, if I type app.my.project it routes to the app service, and so on.
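As a side note, the same host-header based routing can be exercised without editing the hosts file at all, for example with curl's --resolve option. This is only a sketch using the hostnames above; the port 80 is an assumption:

# Resolve app.my.project to 127.0.0.1 for this request only,
# while still sending the original Host header to the gateway
curl --resolve app.my.project:80:127.0.0.1 http://app.my.project/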

When I uploaded it to Azure Kubernetes Service, it added a default-http-backend in the ingress-nginx namespace, which it created by itself. To expose these services, I enabled the HTTP routing option in Azure, which gave me the load balancer on the left side of the image. So if I'm reading the situation correctly (I'm most probably wrong, though), it is something like the image below:

[image: AKS]

So I added hostAliases in the kube-system, ingress-nginx, and default namespaces to mimic editing the hosts file the way I did when running the project locally. But it still gives me the default backend - 404 error from the ingress.
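A quick way to see why the default backend answers is to check which hosts the ingress rules actually serve and whether they match the Host header being sent. A sketch; the ingress name and namespace are placeholders:

# List all ingress resources and the hosts they respond to
kubectl get ingress --all-namespaces

# Inspect the rules of one ingress in detail
kubectl describe ingress <ingress-name> -n <namespace>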

Edit 2:

I have an nginx-ingress-controller which, as far as I understand, does the routing. So I add hostAliases to it as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostAliases:
      - ip: "127.0.0.1"
      hostnames:
      - "app.ota.local"
      - "gateway.ota.local"
      - "treehub.ota.local"
      - "tuf-reposerver.ota.local"
      - "web-events.ota.local"
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: {{ .ingress_controller_docker_image }}
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: tcp
            containerPort: 8000
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

So when I edit the YAML file as shown above, Azure gives the following error:

Failed to update the deployment Failed to update the deployment 'nginx-ingress-controller'. Error: BadRequest (400) : Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|liases":["ip "127.0|..., bigger context ...|theus.io/scrape":"true"}},"spec":{"hostAliases":["ip "127.0.0.1""],"hostnames":["app.ota.local","g|...

If I edit the YAML file locally and apply it with a local kubectl that is connected to Azure, it gives the following error:

serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
daemonset.apps/weave-net configured
Using cluster from kubectl context: k8s_14

namespace/ingress-nginx unchanged
deployment.apps/default-http-backend unchanged
service/default-http-backend unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
error: error validating "/home/.../ota-community-edition/scripts/../generated/templates/ingress": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "hostnames" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
make: *** [Makefile:34: start_start-all] Error 1
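Both errors are complaining about the shape of the hostAliases block: in the manifest above, hostnames is indented as a sibling of the - ip: entry, so the API server reads it as an unknown field of the PodSpec instead of as part of the hostAliases item. For reference, a corrected sketch of just that block, with hostnames nested under the - ip: item and reusing the hostnames from the manifest above:

      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "app.ota.local"
        - "gateway.ota.local"
        - "treehub.ota.local"
        - "tuf-reposerver.ota.local"
        - "web-events.ota.local"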



Solution 1:[1]

Adding entries to a Pod's /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec.

Modifying the file without hostAliases is not recommended, because the file is managed by the kubelet and can be overwritten during Pod creation or restart.

I suggest that you use hostAliases instead:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "app.my.project"
    - "event-manager.my.project"
    - "logger.my.project"

  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

Sources

[1] Solution 1 by Rakesh Gupta, Stack Overflow. Content licensed under CC BY-SA 3.0, following Stack Overflow's attribution requirements.