-
After some Googling, I realised this is a Golang-specific error rather than a RoadRunner one. I've been having a string of similar errors while trying to get up and running, which makes me wonder whether the server's error reporting could be improved. Yesterday, the server was running well but my code was throwing an error because I hadn't connected the cache adapter for production. Even though the stack trace was truncated, at least the server produced output indicating something was wrong. After writing and connecting the adapter, the server runs briefly before shutting down with the error "context deadline exceeded". When I set `http.pool.debug: true`, the connection is sustained, although no request is ever served: every request causes it to throw that error, but the server never shuts down. For this error, I suspect no workers/job is set up. Please see https://gist.github.com/nmeri17/082d6ca1f82b048c13b25902d250584d#file-moduleworkeraccessor-php-L72. My understanding is that for each `suphle-test-worker`, a `ModuleWorkerAccessor` is spun up and the env is updated with the job mode. Unfortunately, I can't debug whether or not that conditional ever runs, since all output is trapped. If I run `suphle-test-worker` alone: no errors, no output. I'm trying to have workers for both queue and http. Kindly confirm that's the correct logic for it.

Secondly, I tried to run the underlying code standalone, i.e. without the server. The code ought to interact with cache and queues. All the while I used my in-memory doubles, everything was fine. Now, when I try to connect to the RoadRunner queue, it fails with the message "no global boltdb configuration". I've checked the configuration reference, and boltdb has no global entry; it always lives under something else. As you can see at https://gist.github.com/nmeri17/082d6ca1f82b048c13b25902d250584d#file-rr-yaml-L88, the jobs section is commented out. I'm just struggling to get the adapters running, i.e. even if queues fail, cache should succeed, right? RoadRunner says "not so fast!".

After commenting that out, the page /app/request fails with "socket_connect: no connection could be made because the target machine actively refused it". The RPC that works for booting up the server is suddenly actively refused. This happens on this line: https://github.com/nmeri17/Tilwa/blob/23ee2d6377e821c770d6487baeb874e26fc69ec6/nmeri/tilwa/src/Adapters/Cache/BoltDbCache.php#L53. That's where I'm lost. I really hope one of the maintainers can help resolve these three issues as soon as possible.
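For context, the shape of config I'm aiming for looks roughly like this. This is a trimmed sketch with placeholder names, not my actual file (that's in the gist above); the keys follow the v2 sample `.rr.yaml`, so exact spellings may differ between releases:

```yaml
rpc:
  listen: tcp://127.0.0.1:6001

server:
  command: "php suphle-test-worker.php"   # placeholder for my actual worker entry script

http:
  address: 0.0.0.0:8080
  pool:
    num_workers: 4
    # debug: true    # sustains the connection, but no request is ever served

jobs:
  pool:
    num_workers: 10
  pipelines:
    app-queue:              # illustrative pipeline name
      driver: boltdb
      config:
        file: "queue.db"
        priority: 10
  consume: ["app-queue"]
```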
-
Hey @nmeri17 👋🏻 , nice to meet you.
BoltDB has a global configuration -> https://github.com/roadrunner-server/roadrunner/blob/master/.rr.yaml#L1027. Remember that you can't have more than one connection to BoltDB; all other connections will hit the deadline, due to the nature of such databases (SQLite, BoltDB, etc.). Also, there is no point in using both `debug` and `reload` (https://gist.github.com/nmeri17/082d6ca1f82b048c13b25902d250584d#file-rr-yaml-L20), because they solve the same problem with different approaches.
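To illustrate, a minimal sketch (the storage name here is invented; the authoritative keys are in the sample `.rr.yaml` linked above):

```yaml
# Top-level (global) BoltDB plugin configuration
boltdb:
  permissions: 0777

# A KV storage using the boltdb driver; "app-cache" is just an example name
kv:
  app-cache:
    driver: boltdb
    config:
      file: "cache.db"
      permissions: 0777
```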
-
Global configuration describes some general parameters for the storage.
Yes, correct. This is a file-database limitation: only one connection per file. That is why you're getting the `context deadline exceeded` error.
Remember that you should not use the `debug` option together with `reload`.
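For example (a sketch with invented storage/pipeline names): give each plugin its own database file, so they don't contend for a single BoltDB handle:

```yaml
kv:
  app-cache:
    driver: boltdb
    config:
      file: "cache.db"    # opened only by the kv plugin

jobs:
  pipelines:
    app-queue:
      driver: boltdb
      config:
        file: "queue.db"  # a separate file, so the jobs plugin gets its own connection
  consume: ["app-queue"]
```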
-
My pleasure 👍🏻
You may also use Xdebug: https://roadrunner.dev/docs/php-debugging/2.x/en
-
I ran the Endure container in debug mode in hopes of seeing its last action before shutdown. The only thing that seems to stand out from the rest of the output is:
Does this mean it's unable to handle a jobs plugin setup? If yes, why has it always said `plugin: http.Plugin` on the last line?
-
> I removed the http part of rr.yaml to get around the two minutes it takes for job worker allocation to take place.
This is only true when a solitary worker type/mode is being run (http or jobs alone). When both are combined, http spins up 4 workers instantly, while the jobs worker waits exactly 63 seconds before signaling the creation of 10 workers. I indicated this four days ago. However, there's another caveat.
Spot on. For some reason, Composer was unable to include one of my interfaces while booting the modules, until I emptied the interface's contents and ran again. After it started working, I pasted the contents back and the server stopped crashing mysteriously while trying to hydrate the jobs worker handler. What's interesting is that I spent considerable time pursuing a bug that doesn't exist. It's not mentioned anywhere that this setup timeout, "context deadline exceeded", is possible if a debugger breakpoint is set anywhere before any of the while loops, in my case
Following that error led me to this call stack:
So I kept wondering why that line was executing when I was yet to dispatch any tasks. This means I'm prohibited from var_dump-ing anything or setting breakpoints anywhere before the while loops, which in turn means you have to make this command more visible
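Separately, for anyone else who hits the deadline-with-breakpoints issue: a workaround I would expect to help (my assumption, not something the docs state) is raising the pool's `allocate_timeout` while debugging, so a worker paused at a breakpoint isn't killed mid-step:

```yaml
http:
  pool:
    num_workers: 1          # a single worker is easier to step through
    allocate_timeout: 300s  # generous, so a breakpoint before the worker loop doesn't trip the deadline
    destroy_timeout: 300s
```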
As already mentioned here #1212 (reply in thread), the local `allocate_timeout` has no effect. Now that my Composer issue is resolved and everything related to the jobs plugin is sorted, the workers no longer take 2 minutes to kick in; they start instantly. Using the global(?) one under the "activities" key does nothing. Everything happens instantly.

Off the top of my head, I feel a JSON version of this sample YAML could help with issues such as nested config. I can see indentation before "activities" that traces back to `temporal`, correct? Do I need that for worker allocation? I doubt it.

A second suggestion I feel could help is breaking the config into different files, just like gitignores, so one can tell what config they're looking at (while comparing against the master list), what overrides what, etc.

Lastly, I planned to test this using the following in my test case:

Many thanks for your patience and assistance
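For the nesting question above, this is where I understand the two keys to live, sketched from the sample `.rr.yaml` (my reading, so take it with a grain of salt): `activities` belongs to the Temporal plugin and shouldn't affect http/jobs worker allocation at all:

```yaml
jobs:
  pool:
    allocate_timeout: 60s   # the "local" allocate_timeout for jobs workers

http:
  pool:
    allocate_timeout: 60s   # http has its own, independent pool timeout

temporal:
  address: "127.0.0.1:7233"
  activities:
    num_workers: 10         # only relevant when actually running Temporal workflows
```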