[Question]: Timestamp info #1352

Open

nsankar opened this issue Dec 18, 2024 · 0 comments

Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've included steps to reproduce the behavior

Affected Components

  • K8sGPT (CLI)
  • K8sGPT Operator

K8sGPT Version

No response

Kubernetes Version

No response

Host OS and its Version

Linux

Steps to reproduce

k8sgpt analyze --explain --with-doc --filter=Pod --output=json

Expected behaviour

Results of analyze should contain the timestamp of each detected problem event.
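
For illustration, something like the following per error entry would be enough (the Timestamp field is hypothetical, purely to show what I am asking for):

"error": [
  {
    "Text": "Back-off pulling image \"nginx-1:latest\"",
    "Timestamp": "2024-12-18T09:41:00Z",
    "KubernetesDoc": "",
    "Sensitive": []
  }
]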

Actual behaviour

Hi,

How do we know when the problems detected by analyze occurred, in terms of timestamp info? A sample of the output I get is shown below.
My questions are: (1) Is there a way to get the timestamp of a problem's occurrence? (2) What time range is considered when analyzing problems with the analyze command? Kindly let me know.
(Note: when using --filter=Log, I do get the log's timestamp.)
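
For comparison, the timestamps I am after do exist on the underlying Kubernetes events, and I can read them with plain kubectl (a rough sketch; the deploynosvc namespace is just one from my cluster, nothing k8sgpt-specific):

k1:~$ kubectl get events -n deploynosvc --field-selector involvedObject.kind=Pod \
        -o custom-columns=LAST:.lastTimestamp,REASON:.reason,OBJECT:.involvedObject.name

By contrast, the analyze output below carries no timestamp field at all: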

k1:~$ k8sgpt analyze --explain --with-doc --filter=Pod --output=json
{
  "provider": "openai",
  "errors": null,
  "status": "ProblemDetected",
  "problems": 41,
  "results": [
    {
      "kind": "Pod",
      "name": "deploynosvc/nginx-deployment-xxxx",
      "error": [
        {
          "Text": "Back-off pulling image \"nginx-1:latest\"",
          "KubernetesDoc": "",
          "Sensitive": []
        }
      ],
      "details": "Error: The Kubernetes cluster is unable to pull the Docker image \"nginx-1:latest,\" likely due to the image not being found or a network issue.\n\nSolution: \n1. Check if the image \"nginx-1:latest\" exists in the specified registry.\n2. Verify your Kubernetes deployment configuration for the correct image name.\n3. Ensure your cluster has internet access or access to the private registry.\n4. If using a private registry, check your image pull secrets.",
      "parentObject": "Deployment/nginx-deployment"
    },
    {
      "kind": "Pod",
      "name": "sprint52alert/high-mem",
      "error": [
        {
          "Text": "0/12 nodes are available: 10 Insufficient memory, 2 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }. preemption: 0/12 nodes are available: 10 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling..",
          "KubernetesDoc": "",
          "Sensitive": []
        }
      ],
      "details": "Error: No nodes are available to schedule the pod due to insufficient memory on 10 nodes and 2 nodes having a taint that the pod cannot tolerate.\n\nSolution: \n1. Check memory usage on nodes.\n2. Scale up nodes or optimize workloads to free memory.\n3. Remove or modify the taint on the 2 nodes if appropriate.\n4. Retry scheduling the pod.",
      "parentObject": ""
    },
    {
      "kind": "Pod",
      "name": "sprint55alert/liveness-daemonset-3333",
      "error": [
        {
          "Text": "the last termination reason is Error container=liveness pod=xxxx",
          "KubernetesDoc": "",
          "Sensitive": []
        }
      ],
      "details": "Error: The liveness probe for the container in the liveness-daemonset failed, indicating that the container is not responding as expected.\n\nSolution: \n1. Check the container logs for errors: kubectl logs liveness-daemonset-3323.\n2. Verify the liveness probe configuration in the pod spec.\n3. Ensure the application inside the container is running correctly.\n4. Adjust the liveness probe settings if necessary.\n5. Redeploy the pod.",
      "parentObject": "DaemonSet/liveness-daemonset"
    },
    {
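
(Output truncated.) This is also how I currently post-process the results, and there is simply no time field to select (jq here is just my own tooling, not part of k8sgpt):

k1:~$ k8sgpt analyze --explain --with-doc --filter=Pod --output=json \
        | jq -r '.results[] | [.kind, .name, .error[0].Text] | @tsv'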

Additional Information

No response
