Jenkins web UI unresponsive, possibly due to Java running out of memory #9

prakashsurya opened this issue Jan 4, 2017 · 0 comments

The Jenkins master process appears to have run out of memory; when accessing the master via HTTP, it hangs:

$ curl -v http://10.100.64.187:8080
* Rebuilt URL to: http://10.100.64.187:8080/
*   Trying 10.100.64.187...
* Connected to 10.100.64.187 (10.100.64.187) port 8080 (#0)
> GET / HTTP/1.1
> Host: 10.100.64.187:8080
> User-Agent: curl/7.50.1
> Accept: */*
>
^C

and there are lots of messages in the container's log:

$ docker logs jenkins-master 2>&1 | tail -n 50
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at jenkins.model.lazy.LazyBuildMixIn.loadBuild(LazyBuildMixIn.java:165)
        at jenkins.model.lazy.LazyBuildMixIn$1.create(LazyBuildMixIn.java:142)
        at hudson.model.RunMap.retrieve(RunMap.java:223)
        at hudson.model.RunMap.retrieve(RunMap.java:56)
        at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:500)
        at jenkins.model.lazy.AbstractLazyLoadRunMap.load(AbstractLazyLoadRunMap.java:482)
        at jenkins.model.lazy.AbstractLazyLoadRunMap.getByNumber(AbstractLazyLoadRunMap.java:380)
        at jenkins.model.lazy.LazyBuildMixIn.getBuildByNumber(LazyBuildMixIn.java:231)
        at org.jenkinsci.plugins.workflow.job.WorkflowJob.getBuildByNumber(WorkflowJob.java:218)
        at org.jenkinsci.plugins.workflow.job.WorkflowJob.getBuildByNumber(WorkflowJob.java:103)
        at jenkins.model.PeepholePermalink.resolve(PeepholePermalink.java:95)
        at hudson.model.Job.getLastSuccessfulBuild(Job.java:912)
        at org.jenkinsci.plugins.workflow.job.WorkflowJob.poll(WorkflowJob.java:556)
        at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:563)
        at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:609)
        at hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

Jan 04, 2017 8:22:38 PM hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor monitor
WARNING: Failed to monitor master for Architecture
java.util.concurrent.TimeoutException
        at java.util.concurrent.FutureTask.get(FutureTask.java:205)
        at hudson.remoting.LocalChannel$2.get(LocalChannel.java:81)
        at hudson.node_monitors.AbstractAsyncNodeMonitorDescriptor.monitor(AbstractAsyncNodeMonitorDescriptor.java:96)
        at hudson.node_monitors.AbstractNodeMonitorDescriptor$Record.run(AbstractNodeMonitorDescriptor.java:305)

Jan 04, 2017 8:21:37 PM hudson.triggers.SafeTimerTask run
SEVERE: Timer task hudson.node_monitors.AbstractNodeMonitorDescriptor$1@5d4d2f02 failed
java.lang.OutOfMemoryError: Java heap space

Jan 04, 2017 8:18:24 PM hudson.triggers.SafeTimerTask run
SEVERE: Timer task hudson.node_monitors.AbstractNodeMonitorDescriptor$1@50b7736d failed
java.lang.OutOfMemoryError: Java heap space

Jan 04, 2017 8:09:10 PM hudson.triggers.SafeTimerTask run
SEVERE: Timer task hudson.node_monitors.AbstractNodeMonitorDescriptor$1@3d401ac9 failed
java.lang.OutOfMemoryError: Java heap space

Jan 04, 2017 8:00:24 PM hudson.triggers.SafeTimerTask run
SEVERE: Timer task hudson.model.Queue$MaintainTask@5e869286 failed
java.lang.OutOfMemoryError: Java heap space

Jan 04, 2017 7:59:46 PM org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SandboxResolvingClassLoader$4$1 load
WARNING: took 347,687ms to load/not load java.lang.com.cloudbees.groovy.cps.env$BUILD_NUMBER from classLoader hudson.PluginManager$UberClassLoader

As a result, I've restarted the container to get the service operational again:

$ docker restart jenkins-master

I'm sure this will occur again, so we still need a more permanent fix.
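Before settling on a fix, it would help to know what heap limit the master is actually running with. As a rough check (assuming java is on the container's PATH; note this only reports the JVM's default MaxHeapSize on this host, not any -Xmx flag the running master may have been started with):

$ docker exec jenkins-master java -XX:+PrintFlagsFinal -version | grep -i maxheapsize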

It's possible that we simply need to bump the amount of RAM allotted to the Java heap, and/or move the services to a system with more available RAM, but until a more detailed root cause analysis is performed, this is only speculation.
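If bumping the heap turns out to be the right call, a minimal sketch of what that might look like (assuming the container is based on the official jenkins image, which passes JAVA_OPTS through to the JVM; the port, volume, and image arguments below are placeholders and would need to match however the container is actually created):

$ docker stop jenkins-master
$ docker rm jenkins-master
$ docker run -d --name jenkins-master \
    -p 8080:8080 -p 50000:50000 \
    -v jenkins_home:/var/jenkins_home \
    -e JAVA_OPTS="-Xmx4g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/jenkins_home" \
    jenkins

Adding -XX:+HeapDumpOnOutOfMemoryError would also leave a heap dump behind the next time this happens, which should make the root cause analysis easier.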
