Hi, we are using ESPv2 on GKE with the image gcr.io/endpoints-release/endpoints-runtime:2. Sometimes I see the health checks fail like this:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 10m (x17 over 16d) kubelet Readiness probe failed: Get "http://10.32.11.246:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Warning Unhealthy 10m (x16 over 7d20h) kubelet Liveness probe failed: Get "http://10.32.11.246:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Normal Killing 10m kubelet Container esp failed liveness probe, will be restarted
Normal Started 10m (x2 over 16d) kubelet Started container esp
Normal Pulled 10m (x2 over 16d) kubelet Container image "gcr.io/endpoints-release/endpoints-runtime:2" already present on machine
Normal Created 10m (x2 over 16d) kubelet Created container esp
Warning Unhealthy 10m (x2 over 16d) kubelet Liveness probe failed: Get "http://10.32.11.246:8080/healthz": dial tcp 10.32.11.246:8080: connect: connection refused
Warning Unhealthy 10m (x3 over 16d) kubelet Readiness probe failed: Get "http://10.32.11.246:8080/healthz": dial tcp 10.32.11.246:8080: connect: connection refused
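The "context deadline exceeded" entries mean the HTTP GET to /healthz did not respond within the probe's timeout, while the "connection refused" ones most likely come from the moment right after a restart, before the container is listening again. A minimal sketch of how to pull the restart reason and the configured probe settings with kubectl (pod and namespace names are placeholders):

```shell
# Placeholders: replace <esp-pod> and <ns> with your pod and namespace.

# Last termination state of the esp container (why it was restarted):
kubectl get pod <esp-pod> -n <ns> \
  -o jsonpath='{.status.containerStatuses[?(@.name=="esp")].lastState.terminated}'

# The configured probes (timeoutSeconds, periodSeconds, failureThreshold, ...):
kubectl get pod <esp-pod> -n <ns> \
  -o jsonpath='{.spec.containers[?(@.name=="esp")].livenessProbe}'
kubectl get pod <esp-pod> -n <ns> \
  -o jsonpath='{.spec.containers[?(@.name=="esp")].readinessProbe}'
```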
Then the pod was killed, so I checked the ESP log:
API failed with: 503 and body: upstream connect error or disconnect/reset before headers. reset reason: connection failure
2023-10-12 05:34:44,076: WARNING: got signal: SIGTERM
2023-10-12 05:34:44,673: INFO: sending TERM to PID=8
2023-10-12 05:34:44,674: INFO: sending TERM to PID=53
W1012 05:34:44.674 53 external/envoy/source/server/server.cc:854] [53][main]caught ENVOY_SIGTERM
I1012 05:34:44.674 53 external/envoy/source/server/server.cc:985] [53][main]shutting down server instance
I1012 05:34:44.674 53 external/envoy/source/server/server.cc:920] [53][main]main dispatch loop exited
W1012 05:34:44.674965 8 server.go:74] Server got signal terminated, stopping
E1012 05:34:45.174 70 src/envoy/http/service_control/client_cache.cc:161] [70][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.275 73 src/envoy/http/service_control/client_cache.cc:161] [73][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.377 81 src/envoy/http/service_control/client_cache.cc:161] [81][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.473 85 src/envoy/http/service_control/client_cache.cc:161] [85][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.478 86 src/envoy/http/service_control/client_cache.cc:161] [86][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.673 92 src/envoy/http/service_control/client_cache.cc:161] [92][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.776 101 src/envoy/http/service_control/client_cache.cc:161] [101][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:45.778 102 src/envoy/http/service_control/client_cache.cc:161] [102][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:46.181 121 src/envoy/http/service_control/client_cache.cc:161] [121][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
E1012 05:34:46.276 128 src/envoy/http/service_control/client_cache.cc:161] [128][filter]Failed to call report, error: CANCELLED:Request cancelled, str body:
[libprotobuf ERROR external/servicecontrol_client_git/src/service_control_client_impl.cc:183] Failed in Report call: Request cancelled
W1012 05:34:46.280 53 external/envoy/source/common/config/grpc_stream.h:201] [53][config]StreamAggregatedResources gRPC config stream to @espv2-ads-cluster closed since 1455705s ago: 14, upstream connect error or disconnect/reset before headers. reset reason: connection termination
I1012 05:34:46.280 53 external/envoy/source/server/server.cc:972] [53][main]exiting
2023-10-12 05:34:50,681: INFO: ===waitpid: pid=8: doesn't exit
2023-10-12 05:34:50,681: CRITICAL: Config Manager is down, killing envoy process.
2023-10-12 05:34:50,681: INFO: Killing process: pid=53
2023-10-12 05:34:50,681: ERROR: The child process: pid=53 may not exist.
I want to know why the pod was killed.
How can I check whether it is related to ESP?
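One way to narrow this down: the event "Container esp failed liveness probe, will be restarted" shows that the kubelet killed the esp container, and the 503 "upstream connect error" in the ESP log usually means ESP could not connect to its upstream, i.e. the application container in the same pod. A hedged checklist with kubectl (the container names "esp" and "<app>" are assumptions, check your own first):

```shell
# List the containers in the pod; "esp" and "<app>" below are assumptions.
kubectl get pod <pod> -o jsonpath='{.spec.containers[*].name}'

# ESP sidecar log from before the last restart (the SIGTERM sequence above):
kubectl logs <pod> -c esp --previous

# Application container log around the same timestamps:
kubectl logs <pod> -c <app> --since=1h

# Why each container was last terminated (e.g. Error, OOMKilled):
kubectl get pod <pod> \
  -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.lastState.terminated.reason}{"\n"}{end}'
```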
@TAOXUY We use Kubernetes; the pods run in us-west1-a, us-west1-b, and us-west1-c.
You mentioned that the application is deployed as a sidecar alongside ESP. The assumption is that the application container is unresponsive and that the esp container gets restarted as a result. Is there a way to confirm this? The errors seen so far are only 503s, 504s, or "killed" logs, so it is unclear which part of the chain is actually failing.
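To confirm whether the application container behind ESP is actually unresponsive, one option is to bypass ESP and hit the backend directly with a port-forward. This is only a sketch: port 8080 is taken from the failing probe URL above, while 8081 and the /healthz path on the backend are assumptions; use the backend address and a path your application actually serves.

```shell
# Hit ESP itself (port 8080, as in the failing probe URL).
# Run the port-forward in a separate terminal, or background it as here:
kubectl port-forward pod/<pod> 9000:8080 &
curl -v http://localhost:9000/healthz

# Hit the application container directly, bypassing ESP.
# 8081 is an assumption; use the backend port ESP is configured to forward to,
# and a path your application really serves:
kubectl port-forward pod/<pod> 9001:8081 &
curl -v http://localhost:9001/healthz
```

If the direct request to the backend also hangs or is refused while ESP's own port still answers, that points at the application container rather than ESP.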