Memory leak in VM #1225

Open
ainglessi opened this issue Feb 12, 2024 · 3 comments

@ainglessi

After the VM went offline and required a reset, I found that the Sample Server's memory use grows over time. Here's the overall VM memory consumption:
[graph: overall VM memory consumption over time]

And this is the memory (in MiB) used by the Sample Server container after a fresh restart:
[graph: Sample Server container memory use (MiB) after a fresh restart]

The container also uses 25-30% of the VM CPU.

I could not reproduce this locally: memory use of the Linux binary increased by less than 2% in an hour, and a local container behaves similarly. Could this be related to the number of client requests?

Image used: latest develop, sha256:fe03be93684d641e2abe22342f7b25c7ca03fd8b2ead490ca3f1935f6b063ef7
Compose config:

  server-cpp:
    container_name: server-cpp
    image: ghcr.io/umati/sample-server:develop
    ports:
      - 4840:4840
    volumes:
      - ./config/server_config.json:/configuration.json
      - ./config/server_cert.der:/server_cert.der
      - ./config/server_key.der:/server_key.der
    restart: always
    healthcheck:
      test: netstat -ltn | grep -c 4840

@ccvca, @Kantiran91 any ideas?

@ccvca (Member) commented Feb 13, 2024

If the growth is larger on the VM, it is probably related to the number of client requests/subscriptions. It might also be caused by the copy mechanism for values, which is called more often when more data is subscribed.

The cause can be tracked down with Valgrind (Unix) or Dr. Memory (Windows) on a debug build, so that the full stack is visible.
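
For reference, a typical invocation against a debug build might look like this (the binary path is only an assumption taken from the stack traces; adjust it to your build):

    valgrind --leak-check=full --show-leak-kinds=definite --log-file=valgrind.log ./install/bin/SampleServer-1.1.1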

@ainglessi (Author)

Here's a Valgrind log after a few minutes of running the server with UaExpert connected: valgrind-2024-02-13.log.

@ccvca (Member) commented Feb 13, 2024

==1349== 2,912 bytes in 208 blocks are definitely lost in loss record 1,043 of 1,051
==1349==    at 0x484DA83: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==1349==    by 0x347D31: UA_Array_copy (ua_types.c:1962)
==1349==    by 0x342B79: String_copy (ua_types.c:172)
==1349==    by 0x342C1A: QualifiedName_copy (ua_types.c:189)
==1349==    by 0x345CD6: copyStructure (ua_types.c:1256)
==1349==    by 0x346030: UA_copy (ua_types.c:1373)
==1349==    by 0xCD9969: UA_RelativePathElement_copy (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==    by 0xCD9A98: open62541Cpp::UA_RelativPathElement::operator=(open62541Cpp::UA_RelativPathElement const&) (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==    by 0x18C1CE: void setAddrSpaceLocation<machineTool::Production_FiniteTransitionVariable_t>(BindableMember<machineTool::Production_FiniteTransitionVariable_t>&, open62541Cpp::UA_NodeId const&, open62541Cpp::UA_RelativPathElement) (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==    by 0x188556: auto UmatiServerLib::Bind::MembersRefl<machineTool::ProductionStateMachine_t>(machineTool::ProductionStateMachine_t&, UA_Server*, open62541Cpp::UA_NodeId, NodesMaster&)::{lambda(auto:1)#2}::operator()<refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul> >(refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul>) const (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==    by 0x18C34B: decltype ({parm#1}((forward<refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul> >)({parm#2}))) refl::util::detail::invoke_optional_index<UmatiServerLib::Bind::MembersRefl<machineTool::ProductionStateMachine_t>(machineTool::ProductionStateMachine_t&, UA_Server*, open62541Cpp::UA_NodeId, NodesMaster&)::{lambda(auto:1)#2}&, refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul> >(machineTool::ProductionStateMachine_t&&, refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul>&&, unsigned long, ...) (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==    by 0x1887FC: void refl::util::detail::eval_in_order<UmatiServerLib::Bind::MembersRefl<machineTool::ProductionStateMachine_t>(machineTool::ProductionStateMachine_t&, UA_Server*, open62541Cpp::UA_NodeId, NodesMaster&)::{lambda(auto:1)#2}, refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul>, , 1ul>(refl::util::type_list<refl::descriptor::field_descriptor<machineTool::ProductionStateMachine_t, 1ul>>, std::integer_sequence<unsigned long, 1ul>, machineTool::ProductionStateMachine_t&&) (in /home/user/projects/Sample-Server/install/bin/SampleServer-1.1.1)
==1349==

This looks like a recurring allocation pattern and is definitely a leak.
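
The trace points at open62541Cpp::UA_RelativPathElement::operator= as the allocation site whose copies are never freed. As a rough illustration only (this is not the actual open62541Cpp source; the by-value member m_element and the class layout are assumptions), a copy-assignment operator that clears the previously held members before copying, plus a matching destructor, would avoid exactly this kind of lost QualifiedName string:

#include <open62541/types.h>
#include <open62541/types_generated_handling.h>

namespace open62541Cpp {

// Hypothetical sketch of a wrapper that owns a UA_RelativePathElement by value.
class UA_RelativPathElement {
 public:
  UA_RelativPathElement() { UA_RelativePathElement_init(&m_element); }

  UA_RelativPathElement(const UA_RelativPathElement &other) {
    UA_RelativePathElement_init(&m_element);
    UA_RelativePathElement_copy(&other.m_element, &m_element);
  }

  UA_RelativPathElement &operator=(const UA_RelativPathElement &other) {
    if (this != &other) {
      // Without this clear, the QualifiedName string allocated by an earlier
      // copy is orphaned on every reassignment -- matching the "definitely
      // lost" blocks above. (_clear requires open62541 >= 1.1; older
      // versions use UA_RelativePathElement_deleteMembers.)
      UA_RelativePathElement_clear(&m_element);
      UA_RelativePathElement_copy(&other.m_element, &m_element);
    }
    return *this;
  }

  ~UA_RelativPathElement() { UA_RelativePathElement_clear(&m_element); }

 private:
  UA_RelativePathElement m_element;  // open62541 C struct, owned by value here
};

}  // namespace open62541Cpp

Whether the missing clear actually sits in operator=, in the destructor, or in the code that overwrites the stored path in setAddrSpaceLocation would need to be checked against the real open62541Cpp and Sample-Server sources.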
