How to get logs from Kubernetes using Go?

I'm looking for a way to get logs from a pod in a Kubernetes cluster using Go. I've looked at "https://github.com/kubernetes/client-go" and "https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client", but couldn't work out how to use them for this purpose. I have no issues getting information about a pod or any other object in K8S, except for logs.

For example, I'm using Get() from "https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#example-Client--Get" to get K8S job info:

found := &batchv1.Job{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: job.Name, Namespace: job.Namespace}, found)

Please share how you get a pod's logs nowadays. Any suggestions would be appreciated!

Update: The solution provided in Kubernetes go client api for log of a particular pod is out of date. It has some useful tips, but it is not up to date with the current libraries.



Solution 1:[1]

Here is what we eventually came up with, using the client-go library:

// Requires imports: bytes, context, io, corev1 "k8s.io/api/core/v1",
// "k8s.io/client-go/kubernetes", "k8s.io/client-go/rest".
func getPodLogs(pod corev1.Pod) string {
    podLogOpts := corev1.PodLogOptions{}
    config, err := rest.InClusterConfig()
    if err != nil {
        return "error in getting config"
    }
    // creates the clientset
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return "error in getting access to K8S"
    }
    req := clientset.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &podLogOpts)
    podLogs, err := req.Stream(context.TODO()) // Stream requires a context in current client-go
    if err != nil {
        return "error in opening stream"
    }
    defer podLogs.Close()

    buf := new(bytes.Buffer)
    _, err = io.Copy(buf, podLogs)
    if err != nil {
        return "error in copying information from podLogs to buf"
    }
    str := buf.String()

    return str
}

I hope it will help someone. Please share your thoughts or solutions for how you get logs from pods in Kubernetes.

Solution 2:[2]

And if you want to read the stream in client-go v11.0.0+, the code looks like this; feel free to create the clientset yourself:

func GetPodLogs(namespace string, podName string, containerName string, follow bool) error {
    count := int64(100)
    podLogOptions := v1.PodLogOptions{
        Container: containerName,
        Follow:    follow,
        TailLines: &count,
    }

    podLogRequest := clientSet.CoreV1().
        Pods(namespace).
        GetLogs(podName, &podLogOptions)
    stream, err := podLogRequest.Stream(context.TODO())
    if err != nil {
        return err
    }
    defer stream.Close()

    for {
        buf := make([]byte, 2000)
        numBytes, err := stream.Read(buf)
        if numBytes > 0 {
            // Consume any bytes returned before checking for EOF:
            // Read may return data together with io.EOF, and checking
            // numBytes == 0 first would loop forever at end of stream.
            message := string(buf[:numBytes])
            fmt.Print(message)
        }
        if err == io.EOF {
            break
        }
        if err != nil {
            return err
        }
    }
    return nil
}

Solution 3:[3]

The controller-runtime client library does not yet support subresources other than /status, so you would have to use client-go as shown in the other question.

Solution 4:[4]

Combining some answers found elsewhere and here, this streams (tails) logs for all containers, init containers included:

func GetPodLogs(namespace string, podName string) error {
    pod, err := clientSet.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    wg := &sync.WaitGroup{}
    functionList := []func(){}
    for _, container := range append(pod.Spec.InitContainers, pod.Spec.Containers...) {
        tailLines := int64(100)
        podLogOpts := v1.PodLogOptions{
            Follow:    true,
            TailLines: &tailLines,
            Container: container.Name,
        }
        podLogs, err := clientSet.CoreV1().Pods(namespace).GetLogs(podName, &podLogOpts).Stream(ctx)
        if err != nil {
            return err
        }
        functionList = append(functionList, func() {
            defer wg.Done()
            // Close the stream when this goroutine finishes, rather than
            // deferring inside the loop and holding every stream open
            // until the whole function returns.
            defer podLogs.Close()
            reader := bufio.NewScanner(podLogs)
            for reader.Scan() {
                select {
                case <-ctx.Done():
                    return
                default:
                    line := reader.Text()
                    fmt.Println(podName+"/"+podLogOpts.Container, line)
                }
            }
            // reader.Err() is nil on a clean EOF, so check before use.
            if err := reader.Err(); err != nil {
                log.Printf("INFO log stream ended with error %v: %s/%s", err, podName, podLogOpts.Container)
            } else {
                log.Printf("INFO log EOF: %s/%s", podName, podLogOpts.Container)
            }
        })
    }

    wg.Add(len(functionList))
    for _, f := range functionList {
        go f()
    }
    wg.Wait()
    return nil
}

Solution 5:[5]

The answer by anon_coward got me interested in getting logs in a slightly more complicated case:

  1. I want to perform the action multiple times and check the logs multiple times.
  2. I want to have many pods that will react the same way.

Here are a few examples: https://github.com/nwaizer/GetPodLogsEfficiently One example is:

package main

import (
    "GetPodLogsEfficiently/client"
    "GetPodLogsEfficiently/utils"
    "bufio"
    "context"
    "fmt"
    corev1 "k8s.io/api/core/v1"
    "time"
)

func GetPodLogs(cancelCtx context.Context, PodName string) {
    PodLogsConnection := client.Client.Pods(utils.Namespace).GetLogs(PodName, &corev1.PodLogOptions{
        Follow:    true,
        TailLines: &[]int64{int64(10)}[0],
    })
    LogStream, err := PodLogsConnection.Stream(context.Background())
    if err != nil {
        fmt.Printf("Pod: %v error opening log stream: %v\n", PodName, err)
        return
    }
    defer LogStream.Close()

    reader := bufio.NewScanner(LogStream)
    for reader.Scan() {
        select {
        case <-cancelCtx.Done():
            // A bare break here would only exit the select, not the loop,
            // so return instead.
            return
        default:
            line := reader.Text()
            fmt.Printf("Pod: %v line: %v\n", PodName, line)
        }
    }
}
func main() {
    ctx := context.Background()
    cancelCtx, endGofunc := context.WithCancel(ctx)
    for _, pod := range utils.GetPods().Items {
        fmt.Println(pod.Name)
        go GetPodLogs(cancelCtx, pod.Name)
    }
    time.Sleep(10 * time.Second)
    endGofunc()
}

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Stepan Maksimchuk
Solution 2 HelloWood
Solution 3 coderanger
Solution 4 anon_coward
Solution 5