jvb CrashLoopBackOff #118

Open
HuXinjing opened this issue Jun 2, 2024 · 14 comments

@HuXinjing

Hello, I'm new to Kubernetes and Jitsi. I installed the service in minikube, but the jvb pod keeps restarting and I have no idea what's going on. Could you please tell me what I should do?

I pulled the logs for the jvb pod; there seem to be five points that need fixing:
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 01-set-timezone: executing...
[cont-init.d] 01-set-timezone: exited 0.
[cont-init.d] 10-config: executing...
Error: any valid prefix is expected rather than ";;".
No AUTOSCALER_URL defined, leaving autoscaler sidecar disabled
[cont-init.d] 10-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
JVB 2024-06-02 22:54:28.936 INFO: [1] JitsiConfig.#47: Initialized newConfig: merge of /config/jvb.conf: 1,application.conf @ jar:file:/usr/share/jitsi-videobridge/jitsi-videobridge.jar!/application.conf: 1,system properties,reference.conf @ jar:file:/usr/share/jitsi-videobridge/jitsi-videobridge.jar!/reference.conf: 1,reference.conf @ jar:file:/usr/share/jitsi-videobridge/lib/ice4j-3.0-66-g1c60acc.jar!/reference.conf: 1,reference.conf @ jar:file:/usr/share/jitsi-videobridge/lib/jitsi-media-transform-2.3-61-g814bffd6.jar!/reference.conf: 1
JVB 2024-06-02 22:54:29.047 INFO: [1] ReadOnlyConfigurationService.reloadConfiguration#51: loading config file at path /config/sip-communicator.properties
JVB 2024-06-02 22:54:29.049 INFO: [1] ReadOnlyConfigurationService.reloadConfiguration#56: Error loading config file: java.io.FileNotFoundException: /config/sip-communicator.properties (No such file or directory)
JVB 2024-06-02 22:54:29.052 INFO: [1] JitsiConfig.#68: Initialized legacyConfig: sip communicator props (no description provided)
JVB 2024-06-02 22:54:29.143 INFO: [1] JitsiConfig$Companion.reloadNewConfig#94: Reloading the Typesafe config source (previously reloaded 0 times).
JVB 2024-06-02 22:54:29.331 INFO: [1] MainKt.main#75: Starting jitsi-videobridge version 2.3.61-g814bffd6
JVB 2024-06-02 22:54:31.431 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Adding a static mapping: StaticMapping(localAddress=10.244.0.6, publicAddress=172.18.36.160, localPort=null, publicPort=null, name=ip-0)
JVB 2024-06-02 22:54:31.445 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Using AwsCandidateHarvester.
JVB 2024-06-02 22:54:32.358 INFO: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.initializeConnectAndJoin#288: Initializing a new MucClient for [ org.jitsi.xmpp.mucclient.MucClientConfiguration id=shard0 domain=auth.jitsi.xunshi.com hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local port=5222 username=jvb mucs=[[email protected]] mucNickname=myjitsi-jitsi-meet-jvb-6f4889c549-s7m99 disableCertificateVerification=true]
JVB 2024-06-02 22:54:32.545 INFO: [1] TaskPools.#87: TaskPools detected 32 processors, creating the CPU pool with that many threads
JVB 2024-06-02 22:54:32.553 WARNING: [27] MucClient.createXMPPTCPConnectionConfiguration#117: Disabling certificate verification!
JVB 2024-06-02 22:54:32.651 INFO: [1] HealthChecker.start#122: Started with interval=60000, timeout=PT1M30S, maxDuration=PT3S, stickyFailures=false.
JVB 2024-06-02 22:54:32.759 INFO: [1] UlimitCheck.printUlimits#109: Running with open files limit 1048576 (hard 1048576), thread limit unlimited (hard unlimited).
JVB 2024-06-02 22:54:32.838 INFO: [1] VideobridgeExpireThread.start#88: Starting with 60 second interval.
JVB 2024-06-02 22:54:33.063 INFO: [1] MainKt.main#113: Starting public http server
JVB 2024-06-02 22:54:33.456 INFO: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.initializeConnectAndJoin#350: Dispatching a thread to connect and login.
JVB 2024-06-02 22:54:33.574 INFO: [1] ColibriWebSocketService.#56: WebSockets are not enabled
JVB 2024-06-02 22:54:33.800 INFO: [1] ColibriWebSocketService.registerServlet#94: Disabled, not registering servlet
JVB 2024-06-02 22:54:33.835 INFO: [1] org.eclipse.jetty.server.Server.doStart: jetty-11.0.14; built: 2023-02-22T23:41:48.575Z; git: 4601fe8dd805ce75b69c64466c115a162586641b; jvm 11.0.21+9-post-Debian-1deb11u1
JVB 2024-06-02 22:54:33.968 INFO: [1] org.eclipse.jetty.server.handler.ContextHandler.doStart: Started o.e.j.s.ServletContextHandler@470a9030{/,null,AVAILABLE}
JVB 2024-06-02 22:54:34.029 INFO: [1] org.eclipse.jetty.server.AbstractConnector.doStart: Started ServerConnector@1db0ec27{HTTP/1.1, (http/1.1)}{0.0.0.0:9090}
JVB 2024-06-02 22:54:34.045 INFO: [1] org.eclipse.jetty.server.Server.doStart: Started Server@5c84624f{STARTING}[11.0.14,sto=0] @6397ms
JVB 2024-06-02 22:54:34.047 INFO: [1] MainKt.main#131: Starting private http server
JVB 2024-06-02 22:54:34.243 INFO: [1] org.eclipse.jetty.server.Server.doStart: jetty-11.0.14; built: 2023-02-22T23:41:48.575Z; git: 4601fe8dd805ce75b69c64466c115a162586641b; jvm 11.0.21+9-post-Debian-1deb11u1
JVB 2024-06-02 22:54:35.634 WARNING: [1] org.glassfish.jersey.server.wadl.WadlFeature.configure: JAXBContext implementation could not be found. WADL feature is disabled.
JVB 2024-06-02 22:54:36.085 WARNING: [1] org.glassfish.jersey.internal.inject.Providers.checkProviderRuntime: A provider org.jitsi.rest.Health registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.jitsi.rest.Health will be ignored.
JVB 2024-06-02 22:54:36.086 WARNING: [1] org.glassfish.jersey.internal.inject.Providers.checkProviderRuntime: A provider org.jitsi.rest.Version registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.jitsi.rest.Version will be ignored.
JVB 2024-06-02 22:54:36.087 WARNING: [1] org.glassfish.jersey.internal.inject.Providers.checkProviderRuntime: A provider org.jitsi.rest.prometheus.Prometheus registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.jitsi.rest.prometheus.Prometheus will be ignored.
JVB 2024-06-02 22:54:36.552 INFO: [1] org.eclipse.jetty.server.handler.ContextHandler.doStart: Started o.e.j.s.ServletContextHandler@10c47c79{/,null,AVAILABLE}
JVB 2024-06-02 22:54:36.558 INFO: [1] org.eclipse.jetty.server.AbstractConnector.doStart: Started ServerConnector@4bdcaf36{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
JVB 2024-06-02 22:54:36.558 INFO: [1] org.eclipse.jetty.server.Server.doStart: Started Server@1e6308a9{STARTING}[11.0.14,sto=0] @8911ms
JVB 2024-06-02 22:54:38.040 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:54:43.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:54:48.032 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:54:51.557 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.createStunHarvesters: Using meet-jit-si-turnrelay.jitsi.net:443/udp for StunMappingCandidateHarvester (localAddress=10.244.0.6:0/udp).
JVB 2024-06-02 22:54:51.616 INFO: [74] org.ice4j.ice.harvest.StunMappingCandidateHarvester.discover: We failed to obtain addresses for the following reason:
java.lang.IllegalArgumentException: unresolved address
at java.base/java.net.DatagramPacket.setSocketAddress(DatagramPacket.java:316)
at java.base/java.net.DatagramPacket.(DatagramPacket.java:144)
at org.ice4j.stack.Connector.sendMessage(Connector.java:326)
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:634)
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:581)
at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:618)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:585)
at org.ice4j.stunclient.BlockingRequestSender.sendRequestAndWaitForResponse(BlockingRequestSender.java:166)
at org.ice4j.stunclient.SimpleAddressDetector.getMappingFor(SimpleAddressDetector.java:123)
at org.ice4j.ice.harvest.StunMappingCandidateHarvester.discover(StunMappingCandidateHarvester.java:92)
at org.ice4j.ice.harvest.MappingCandidateHarvesters.lambda$createStunHarvesters$0(MappingCandidateHarvesters.java:268)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:54:52.132 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.maybeAdd: Discarding a mapping harvester: org.ice4j.ice.harvest.AwsCandidateHarvester@3dfb5327
JVB 2024-06-02 22:54:52.139 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Using org.ice4j.ice.harvest.StaticMappingCandidateHarvester(face=10.244.0.6:9/udp, mask=172.18.36.160:9/udp)
JVB 2024-06-02 22:54:52.148 INFO: [24] org.ice4j.ice.harvest.MappingCandidateHarvesters.initialize: Initialized mapping harvesters (delay=22796ms). stunDiscoveryFailed=true
JVB 2024-06-02 22:54:53.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:54:53.486 WARNING: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local: Temporary failure in name resolution')
at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:54:58.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:54:58.488 WARNING: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local')
at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:55:03.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:03.084 WARNING: [25] org.jivesoftware.smackx.ping.PingManager.pingServerIfNecessary: XMPPTCPConnection[not-authenticated] (0) was not authenticated
JVB 2024-06-02 22:55:08.030 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:13.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:18.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:23.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:23.511 WARNING: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local: Temporary failure in name resolution')
at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:55:28.030 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:28.512 WARNING: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local')
at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:55:32.359 SEVERE: [29] HealthChecker.run#181: Health check failed in PT0.000128S: Result(success=false, hardFailure=true, responseCode=null, sticky=false, message=Address discovery through STUN failed)
JVB 2024-06-02 22:55:33.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:38.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:43.031 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:48.030 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:53.030 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
JVB 2024-06-02 22:55:53.532 WARNING: [27] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local: Temporary failure in name resolution')
at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
JVB 2024-06-02 22:55:58.030 WARNING: [33] [hostname=myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local id=shard0] MucClient.setPresenceExtensions#467: Cannot set presence extension: not connected.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
[s6-finish] sending all processes the KILL signal and exiting.

A large language model helped me spot a few errors:

First:
JVB 2024-06-02 22:54:29.049 INFO: [1] ReadOnlyConfigurationService.reloadConfiguration#56: Error loading config file: java.io.FileNotFoundException: /config/sip-communicator.properties (No such file or directory)

Second:
[cont-init.d] 10-config: executing...
Error: any valid prefix is expected rather than ";;"

Third:
org.jivesoftware.smack.SmackException$EndpointConnectionException: Could not lookup the following endpoints: RemoteConnectionEndpointLookupFailure(description='DNS lookup exception for myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local' exception='java.net.UnknownHostException: myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local')

Fourth:
Health check failed in PT0.000128S: Result(success=false, hardFailure=true, responseCode=null, sticky=false, message=Address discovery through STUN failed)

Fifth:
WARNING: Disabling certificate verification!

But I have little idea how to fix them. Do I need to add myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local to the kube-dns ConfigMap? And if the prosody pod restarts, will its IP stay the same? Does that approach even make sense?

@thiagosestini

Hello!

Looking at the failed address resolution, it's looking for "myjitsi-prosody.ingress-nginx.svc.kube-system.svc.cluster.local". This seems wrong. The Kubernetes DNS structure is "my-svc.my-namespace.svc.cluster.local", but that record has two namespaces in it (ingress-nginx and kube-system). My guess is that somewhere there was an entry expecting only a service name, and it was filled with the service's full DNS record.

You don't need to fix DNS resolution; you need to fix the entry in the Jitsi config that's malforming the service name.
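For reference, assuming the release is called myjitsi and was deployed into the ingress-nginx namespace (as the logs above suggest), the record should be just myjitsi-prosody.ingress-nginx.svc.cluster.local, and you can confirm it resolves with a throwaway pod, e.g.:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup myjitsi-prosody.ingress-nginx.svc.cluster.local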

@HuXinjing
Author


Thanks so much. I corrected "clusterDomain" to "cluster.local" in values.yaml, and it helped: the duplicated domain no longer appears in jvb's log.
However, three problems remain:
1. I get nothing but "timeout" and "no servers could be reached" from CoreDNS when I run "kubectl exec -it -n ingress-nginx ingress-nginx-controller-xxx -- nslookup myjitsi-prosody.ingress-nginx.svc.cluster.local".
2. The CoreDNS log still shows the repetition:
[INFO] 10.244.0.11:41827 - 34118 "A IN myjitsi-prosody.ingress-nginx.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 97 false 512" NXDOMAIN qr,aa,rd 190 0.000367317s
3. When I nslookup any service domain from inside a pod, the reply comes from an unexpected IP: the CoreDNS pod rather than the kube-dns Service. For example, the reply comes from 10.0.224.3 (the CoreDNS pod IP) when I expected 10.0.12.4 (the kube-dns Service IP, as shown by "kubectl get svc -A"). If I change the nameserver in /etc/resolv.conf from 10.0.12.4 to 10.0.224.3, nslookup succeeds.

That's everything I found after following your advice. I would really appreciate any further advice.
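For reference on problem 3, the kube-dns Service IP and the CoreDNS pod IPs behind it can be cross-checked with something like:

kubectl -n kube-system get svc kube-dns -o wide
kubectl -n kube-system get endpoints kube-dns

(the Service IP is what pods' /etc/resolv.conf normally points at; the endpoints are the CoreDNS pod IPs).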

@mh6m

mh6m commented Jun 13, 2024

I encountered a similar problem.

Overall, everything works fine within the cluster; all services are accessible and connected. However, for some reason, both JVB and Jicofo, when starting up, are querying Cloudflare's proxy instead of the internal cluster.

My cluster is running on dedicated servers.

JVB 2024-06-13 11:39:08.565 WARNING: [32] [hostname=jitsi-prosody.jitsi.svc.cluster.local id=shard0] MucClient.lambda$getConnectAndLoginCallable$9#640: Error connecting:
org.jivesoftware.smack.SmackException$EndpointConnectionException: The following addresses failed: 'RFC 6120 A/AAAA Endpoint + [jitsi-prosody.jitsi.svc.cluster.local:5222] (jitsi-prosody.jitsi.svc.cluster.local/3.64.163.50:5222)' failed becaus
    at org.jivesoftware.smack.SmackException$EndpointConnectionException.from(SmackException.java:334)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectUsingConfiguration(XMPPTCPConnection.java:664)
    at org.jivesoftware.smack.tcp.XMPPTCPConnection.connectInternal(XMPPTCPConnection.java:849)
    at org.jivesoftware.smack.AbstractXMPPConnection.connect(AbstractXMPPConnection.java:526)
    at org.jitsi.xmpp.mucclient.MucClient.lambda$getConnectAndLoginCallable$9(MucClient.java:635)
    at org.jitsi.retry.RetryStrategy$TaskRunner.run(RetryStrategy.java:167)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)

@spijet
Collaborator

spijet commented Jun 13, 2024

Hello everyone! Sorry for the delay. 😅

@HuXinjing, the last error you've posted says that JVB is trying to connect to a service with the usual k8s service domain added twice:

A IN myjitsi-prosody.ingress-nginx.svc.cluster.local.ingress-nginx.svc.cluster.local.

Please double-check your chart values for any weird names. If it'd be easier for you, you can run helm get values <release name> and post the values here (minus the passwords and other sensitive info). Also, did you deploy the chart in the ingress-nginx namespace? If not, then this service name is wrong, since all Jitsi Meet components are supposed to connect to Prosody directly within the same namespace (so, e.g., if you deployed the chart in the jitsi namespace, the FQDN would be myjitsi-prosody.jitsi.svc.cluster.local).
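For example (release name and namespace assumed from your logs):

helm -n ingress-nginx get values myjitsi          # only the values you supplied
helm -n ingress-nginx get values myjitsi --all    # all computed values, if you want the full picture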

@spijet
Collaborator

spijet commented Jun 13, 2024

@krama-d, the second line of the log got truncated unfortunately. Can you please post it again?

Other than that, this looks like a problem with your cluster's DNS settings (unless you're using 3.64.163.0/24 as the Service CIDR, that is 😁).

Also, as a precaution, please keep in mind that you need to do a rollout restart on your Prosody pod when you re-deploy the chart (unless you have all the XMPP usernames and passwords pre-defined in chart values), so Prosody is able to re-read new passwords from all the secrets and use them.
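Something along these lines, with the workload kind and names adjusted to whatever kubectl -n <namespace> get all shows for Prosody:

kubectl -n <namespace> rollout restart deployment/<release>-prosody
# or, if Prosody runs as a StatefulSet in your setup:
kubectl -n <namespace> rollout restart statefulset/<release>-prosody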

@mh6m

mh6m commented Jun 13, 2024

I figured it out. The parameter useHostNetwork: true solves the problem; it makes the queries go directly to the IPs from the cluster instead of the IP 3.64.163.50, which belongs to Cloudflare. However, it's still quite strange that the queries are going through a proxy that doesn't belong to me, bypassing the external IPs of the Kubernetes cluster nodes.
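For anyone else landing here, that flag lives under the jvb section of the chart values (see the defaults posted below in this thread), so a minimal override file would look like:

jvb:
  useHostNetwork: true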

@spijet
Collaborator

spijet commented Jun 13, 2024

This option is useful for incoming connections from users to JVBs, and should make no difference for JVB -> Prosody connections (which should go through the prosody service IP only).
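If you want to double-check, you can compare the Prosody Service's ClusterIP with what the name actually resolves to inside the JVB pod (namespace, release, and deployment names below are assumed from your log, and getent is assumed to be available in the jvb image):

kubectl -n jitsi get svc | grep prosody
kubectl -n jitsi exec deploy/<your-jvb-deployment> -- getent hosts jitsi-prosody.jitsi.svc.cluster.local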

@HuXinjing
Author

HuXinjing commented Jun 13, 2024

# Default values for jitsi-meet.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

global:
  # Set your cluster's DNS domain here.
  # "cluster.local" should work for most environments.
  # Set to "" to disable the use of FQDNs (default in older chart versions).
  #clusterDomain: kube-system.svc.cluster.local
  clusterDomain: cluster.local
  podLabels: {}
  podAnnotations: {}
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

enableAuth: false
enableGuests: true
publicURL: "https://jitsi.xunshi.com"

tz: Asia/Shanghai

image:
  pullPolicy: IfNotPresent

## WebSocket configuration:
#
#  Both Colibri and XMPP WebSockets are disabled by default,
#  since some LoadBalancer / Reverse Proxy setups can't pass
#  WebSocket connections properly, which might result in breakage
#  for some clients.
#
#  Enable both Colibri and XMPP WebSockets to replicate the current
#  upstream `meet.jit.si` setup. Keep both disabled to replicate
#  older setups which might be more compatible in some cases.
websockets:
  ## Colibri (JVB signalling):
  colibri:
    enabled: false
  ## XMPP (Prosody signalling):
  xmpp:
    enabled: false

web:
  replicaCount: 1
  image:
    repository: jitsi/web

  extraEnvs: {}
  service:
    type: ClusterIP
    port: 80
    externalIPs: []

  ingress:
    enabled: false
    # ingressClassName: "nginx-ingress-0"
    annotations: {}
      # kubernetes.io/tls-acme: "true"
    hosts:
    - host: jitsi.local
      paths: ['/']
    tls: []
    #  - secretName: jitsi-web-certificate
    #    hosts:
    #      - jitsi.local

  # Useful for ingresses that don't support http-to-https redirect by themself, (namely: GKE),
  httpRedirect: false

  # When tls-termination by the ingress is not wanted, enable this and set web.service.type=Loadbalancer
  httpsEnabled: false

  ## Resolver IP for nginx.
  #
  #  Starting with version `stable-8044`, the web container can
  #  auto-detect the nameserver from /etc/resolv.conf.
  #  Use this option if you want to override the nameserver IP.
  #
  # resolverIP: 10.43.0.10

  livenessProbe:
    httpGet:
      path: /
      port: 80
  readinessProbe:
    httpGet:
      path: /
      port: 80

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
    # fsGroup: 2000

  securityContext: {}
    # capabilities:
    #   drop:
    #   - ALL
    # readOnlyRootFilesystem: true
    # runAsNonRoot: true
    # runAsUser: 1000

  resources: {}
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    # limits:
    #   cpu: 100m
    #   memory: 128Mi
    # requests:
    #   cpu: 100m
    #   memory: 128Mi

  nodeSelector: {}

  tolerations: []

  affinity: {}

jicofo:
  replicaCount: 1
  image:
    repository: jitsi/jicofo

  xmpp:
    password:
    componentSecret:

  livenessProbe:
    tcpSocket:
      port: 8888
  readinessProbe:
    tcpSocket:
      port: 8888

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}

jvb:
  replicaCount: 1
  image:
    repository: jitsi/jvb

  xmpp:
    user: jvb
    password:

  ## Set public IP addresses to be advertised by JVB.
  #  You can specify your nodes' IP addresses,
  #  or IP addresses of proxies/LoadBalancers used for your
  #  Jitsi Meet installation. Or both!
  #
  #  Note that only the first IP address will be used for legacy
  #  `DOCKER_HOST_ADDRESS` environment variable.
  #
  publicIPs:
    - 172.18.36.160
  #   - 5.6.7.8
  ## Use a STUN server to help some users punch through some
  #  especially nasty NAT setups. Usually makes sense for P2P calls.
  stunServers: 'meet-jit-si-turnrelay.jitsi.net:443'
  ## Try to use the hostPort feature:
  #  (might not be supported by some clouds or CNI engines)
  useHostPort: false
  ## Use host's network namespace:
  #  (not recommended, but might help for some cases)
  useHostNetwork: false
  ## UDP transport port:
  UDPPort: 10000
  ## Use a pre-defined external port for NodePort or LoadBalancer service,
  #  if needed. Will allocate a random port from allowed range if unset.
  #  (Default NodePort range for K8s is 30000-32767)
  # nodePort: 10000
  service:
    enabled:
    type: NodePort
    #externalTrafficPolicy: Cluster
    externalIPs: []
    ## Annotations to be added to the service (if LoadBalancer is used)
    ##
    annotations: {}

  breweryMuc: jvbbrewery

  livenessProbe:
    httpGet:
      path: /about/health
      port: 8080
  readinessProbe:
    httpGet:
      path: /about/health
      port: 8080

  podLabels: {}
  podAnnotations: {}
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  extraEnvs: {}

  metrics:
    enabled: false
    prometheusAnnotations: false
    image:
      repository: docker.io/systemli/prometheus-jitsi-meet-exporter
      tag: 1.2.3
      pullPolicy: IfNotPresent
    serviceMonitor:
      enabled: true
      selector:
        release: prometheus-operator
      interval: 10s
      # honorLabels: false
    resources:
      requests:
        cpu: 10m
        memory: 16Mi
      limits:
        cpu: 20m
        memory: 32Mi

octo:
  enabled: false


jibri:
  ## Enabling Jibri will allow users to record
  ## and/or stream their meetings (e.g. to YouTube).
  enabled: false

  ## Use external Jibri installation.
  ## This setting skips the creation of Jibri Deployment altogether,
  ## instead creating just the config secret
  ## and enabling recording/streaming services.
  ## Defaults to disabled (use bundled Jibri).
  useExternalJibri: false

  ## Enable single-use mode for Jibri.
  ## With this setting enabled, every Jibri instance
  ## will become "expired" after being used once (successfully or not)
  ## and cleaned up (restarted) by Kubernetes.
  ##
  ## Note that detecting expired Jibri, restarting and registering it
  ## takes some time, so you'll have to make sure you have enough
  ## instances at your disposal.
  ## You might also want to make LivenessProbe fail faster.
  singleUseMode: false

  ## Enable recording service.
  ## Set this to true/false to enable/disable local recordings.
  ## Defaults to enabled (allow local recordings).
  recording: true

  ## Enable livestreaming service.
  ## Set this to true/false to enable/disable live streams.
  ## Defaults to disabled (livestreaming is forbidden).
  livestreaming: false

  ## Enable multiple Jibri instances.
  ## If enabled (i.e. set to 2 or more), each Jibri instance
  ## will get an ID assigned to it, based on pod name.
  ## Multiple replicas are recommended for single-use mode.
  replicaCount: 1

  ## Enable persistent storage for local recordings.
  ## If disabled, jibri pod will use a transient
  ## emptyDir-backed storage instead.
  persistence:
    enabled: false
    size: 4Gi
    ## Set this to existing PVC name if you have one.
    existingClaim:
    storageClassName:

  shm:
    ## Set to true to enable "/dev/shm" mount.
    ## May be required by built-in Chromium.
    enabled: false
    ## If "true", will use host's shared memory dir,
    ## and if "false" — an emptyDir mount.
    # useHost: false
    # size: 256Mi

  ## Configure the update strategy for Jibri deployment.
  ## This may be useful depending on your persistence settings,
  ## e.g. when you use ReadWriteOnce PVCs.
  ## Default strategy is "RollingUpdate", which keeps
  ## the old instances up until the new ones are ready.
  # strategy:
  #   type: RollingUpdate

  image:
    repository: jitsi/jibri

  podLabels: {}
  podAnnotations: {}

  breweryMuc: jibribrewery
  timeout: 90

  ## jibri XMPP user credentials:
  xmpp:
    user: jibri
    password:

  ## recorder XMPP user credentials:
  recorder:
    user: recorder
    password:

  livenessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 2
    exec:
      command:
        - /bin/bash
        - "-c"
        - >-
          curl -sq localhost:2222/jibri/api/v1.0/health
          | jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
          | grep -qP 'HEALTHY (IDLE|BUSY)'

  readinessProbe:
    initialDelaySeconds: 5
    periodSeconds: 5
    failureThreshold: 2
    exec:
      command:
        - /bin/bash
        - "-c"
        - >-
          curl -sq localhost:2222/jibri/api/v1.0/health
          | jq '"\(.status.health.healthStatus) \(.status.busyStatus)"'
          | grep -qP 'HEALTHY (IDLE|BUSY)'

  extraEnvs: {}

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

xmpp:
  domain: jitsi.xunshi.com
  authDomain:
  mucDomain: muc.jitsi.xunshi.com
  internalMucDomain:
  guestDomain:

extraCommonEnvs: {}

prosody:
  enabled: true
  server:
  extraEnvFrom:
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jicofo'
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jvb'
  - configMapRef:
      name: '{{ include "prosody.fullname" . }}-common'
  ## Uncomment this if you want to use jibri:
  # - secretRef:
  #     name: '{{ include "prosody.fullname" . }}-jibri'
  image:
    repository: jitsi/prosody
    tag: 'stable-9111'

That's my values.yaml. I checked it again, but with my limited knowledge I couldn't find any mistake other than the one @thiagosestini mentioned. T_T I did install ingress-nginx, because I followed a Chinese blog that used your chart together with ingress-nginx, but the blog is a bit dated and the author has forgotten some of the details, so he can't help me any further. And yes, my "myjitsi" release is installed in the "ingress-nginx" namespace.

@spijet
Collaborator

spijet commented Jun 13, 2024

OK, so two things first:

  1. Use a separate namespace for Jitsi, it'll be easier for you to manage it in the long run;
  2. Use a separate values file that would hold your custom values only. This way there's lower risk of accidentally rewriting default values when you don't want to do so, and it'll be easier to read a smaller file.

Here's an example of what I have on one of my test installations of Jitsi Meet (namespace jitsi-test, Helm release name jitsi-test):

# jitsi-test.values.yaml
# install me with "helm -n jitsi-test install -f jitsi-test.values.yaml jitsi-test jitsi/jitsi-meet"

######
## You can ignore these, they're optional
extraCommonEnvs:
  DESKTOP_SHARING_FRAMERATE_MAX: "20"
  DESKTOP_SHARING_FRAMERATE_MIN: "5"
  ENABLE_CODEC_AV1: "true"
  ENABLE_CODEC_OPUS_RED: "true"
  ENABLE_CODEC_VP9: "true"
  ENABLE_E2EPING: "true"
  ENABLE_IPV6: "false"
  ENABLE_OPUS_RED: "true"
  ENABLE_REQUIRE_DISPLAY_NAME: "true"
  ENABLE_STEREO: "false"
  P2P_PREFERRED_CODEC: VP9
  TESTING_AV1_SUPPORT: "true"
  VIDEOQUALITY_PREFERRED_CODEC: VP9
######

jibri:
  enabled: true
  livestreaming: true
  persistence:
    enabled: true
    existingClaim: jitsi-test-record-storage
  replicaCount: 2
  shm:
    enabled: true
    size: 2Gi
  singleUseMode: true
jvb:
  UDPPort: 32768
  metrics:
    enabled: true
  resources:
    limits:
      cpu: 2000m
      memory: 2048Mi
    requests:
      cpu: 1000m
      memory: 1536Mi
  stunServers: meet-jit-si-turnrelay.jitsi.net:443,stun1.l.google.com:19302,stun2.l.google.com:19302,stun3.l.google.com:19302,stun4.l.google.com:19302
  useHostPort: true
  useNodeIP: true
prosody:
  extraEnvFrom:
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jicofo'
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jvb'
  - configMapRef:
      name: '{{ include "prosody.fullname" . }}-common'
  - secretRef:
      name: '{{ include "prosody.fullname" . }}-jibri'
web:
  custom:
    configs:
      _custom_config_js: |
        config.pcStatsInterval = 500;
        // Make sure there's an empty line
  ingress:
    annotations:
      ######
      ## These are used by CertManager, remove them if you don't have it:
      acme.cert-manager.io/http01-ingress-class: nginx-public
      cert-manager.io/cluster-issuer: letsencrypt-prod
      ######
      kubernetes.io/ingress.class: nginx-public
    enabled: true
    hosts:
    - host: staging.meet.example.tld
      paths:
      - /
    tls:
    - hosts:
      - staging.meet.example.tld
      secretName: tls-jitsi-meet-test

@HuXinjing
Author

HuXinjing commented Jun 13, 2024


I tried your values.yaml in my environment without any modification, but:

----------------------------------------------------------------------------------------------------------------
(base) [myname@localhost ~]# helm -n jitsi-test install -f jitsi-test.values.yaml jitsi-test jitsi/jitsi-meet
----------------------------------------------------------------------------------------------------------------
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "jitsi-test-jitsi-meet-jvb" namespace: "" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
ensure CRDs are installed first

It seems "ServiceMonitor" has a wrong repo? But in the values.yaml you shared it is omitted, and in default values.yaml, "ServiceMonitor" is a part of metrics which is not enabled by default.

@spijet
Collaborator

spijet commented Jun 13, 2024

Looks like you don't have any monitoring stuff installed in your cluster. It's not a big deal, you can safely remove the metrics: sections in this case.
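In other words, either drop the whole metrics: block from your copy of the values, or set it explicitly back to the default, e.g.:

jvb:
  metrics:
    enabled: false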

@HuXinjing
Author


It works. The pods are running, but the problem is back to where it started: jvb restarts endlessly and still can't reach the DNS service. :(
jvb.log
coredns.log

@thiagosestini

thiagosestini commented Jun 15, 2024

Is this coredns log current? Because it still lists the double namespace:

[INFO] 10.244.0.8:55843 - 11744 "A IN jitsi-test-prosody.jitsi-test.svc.cluster.local.jitsi-test.svc.cluster.local. udp 94 false 512" NXDOMAIN qr,aa,rd 187 0.000296106s

NXDOMAIN means it can't find a DNS record for that name, which makes sense considering that's not the service's DNS name.

It seems there's still something making the services look up the wrong address. My suggestion would be to delete the entire namespace, then look for PVCs and delete them too if there are any. Naturally, if there's anything other than Jitsi in the namespace, you might want to delete things individually instead. I've had issues in the past where I thought removing the Helm release and reinstalling would give me a clean start, but sometimes PVCs remain and the configurations stored in them persist between installs.
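A rough sequence for that clean start (release and namespace names assumed; double-check before deleting anything):

helm -n jitsi-test uninstall jitsi-test
kubectl -n jitsi-test get pvc                  # see what's left behind
kubectl -n jitsi-test delete pvc <pvc-name>    # only for PVCs that belong to Jitsi
# or, if nothing else lives in that namespace:
kubectl delete namespace jitsi-test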

Another thing I noticed with this chart is that, because there's no dependency chain, JVB and Jicofo will start before Prosody is ready. Then, because the healthchecks are too tight, those services will fail before Prosody finishes its startup processes. So in my release I've relaxed the healthchecks with:

Jicofo

  livenessProbe:
    initialDelaySeconds: 30
    tcpSocket:
      port: 8888

  readinessProbe:
    initialDelaySeconds: 30
    tcpSocket:
      port: 8888

JVB

  livenessProbe:
    initialDelaySeconds: 30
    httpGet:
      path: /about/health
      port: 8080
  readinessProbe:
    initialDelaySeconds: 30
    httpGet:
      path: /about/health
      port: 8080

This gives it a little more time before healthchecks start evaluating JVB and Jicofo.

@HuXinjing
Author

HuXinjing commented Jun 19, 2024 via email
