SmartPtr: Support detect memory leak by valgrind. #4102

Open
wants to merge 4 commits into base: develop
Conversation

Member
@winlinvip winlinvip commented Jun 21, 2024

  1. Support detecting memory leaks with valgrind.
  2. Free the HTTP handler entry.
  3. Free the ST stack.

@winlinvip winlinvip added the EnglishNative This issue is conveyed exclusively in English. label Jun 21, 2024
@winlinvip winlinvip marked this pull request as ready for review June 27, 2024 11:09
@@ -191,6 +191,9 @@ endif
#
# make EXTRA_CFLAGS=-DDEBUG_STATS
#
# or cache the stack and reuse it:
# make EXTRA_CFLAGS=-DMD_CACHE_STACK
Contributor

Why not add a configure argument that passes MD_CACHE_STACK through to st-srs, like MD_VALGRIND?
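
A hypothetical shape for that suggestion (the `--cache-stack` flag name is an assumption, mirroring how `--valgrind=on` already injects MD_VALGRIND into the st-srs build):

```
./configure --cache-stack=on && make    # hypothetical flag that would add -DMD_CACHE_STACK for st-srs
```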

Contributor

suzp1984 commented Jun 27, 2024

Here is my environment for testing and verifying valgrind memory leaks; there is a lot of output to analyze, so I record my test environment here.

Because I didn't find an image with valgrind in https://hub.docker.com/r/ossrs/srs/tags, I made my own based on ossrs/srs:ubuntu20: start that image, run apt update -y && apt install -y valgrind, then tag the container as an image. That gives me a docker image for the testing.

I would suggest tagging an image with all the testing tools and pushing it to the SRS public Docker Hub.

Then build srs: (ossrs/srs:valgrind is the docker image in my local env)

docker run -it --rm -v $(pwd):/srs -w /srs ossrs/srs:valgrind bash -c "make clean_st"
docker run -it --rm -v $(pwd):/srs -w /srs ossrs/srs:valgrind bash -c "./configure --valgrind=on && make"

and run the srs with valgrind:

docker run -p 1935:1935 -p 1985:1985 -p 8080:8080 -p 8085:8085 --env CANDIDATE=$(ifconfig en0 inet| grep 'inet '|awk '{print $2}') -p 8000:8000/udp -it --rm -v $(pwd):/srs -w /srs ossrs/srs:valgrind valgrind --leak-check=full ./objs/srs -c conf/rtmp2rtc.conf

Publish and play live streams, close SRS with Ctrl+\ or Ctrl+C, and check the output.

@@ -72,14 +74,34 @@ _st_stack_t *_st_stack_new(int stack_size)
_st_num_free_stacks--;
ts->links.next = NULL;
ts->links.prev = NULL;
return ts;
returnt ts;
Contributor

Is this a mistake? returnt -> return.

Contributor

It's a mistake.
To reproduce, I need to add -DMD_CACHE_STACK to auto/depends.sh, run make clean_st, then ./configure && make.

srs/trunk/auto/depends.sh

Lines 283 to 297 in ea7e2c2

# Patched ST from https://github.com/ossrs/state-threads/tree/srs
if [[ -f ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st/libst.a ]]; then
rm -rf ${SRS_OBJS}/st && cp -rf ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st ${SRS_OBJS}/ &&
echo "The state-threads is ok."
else
echo "Building state-threads." &&
rm -rf ${SRS_OBJS}/${SRS_PLATFORM}/st-srs ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st ${SRS_OBJS}/st &&
cp -rf ${SRS_WORKDIR}/3rdparty/st-srs ${SRS_OBJS}/${SRS_PLATFORM}/ &&
env EXTRA_CFLAGS="${_ST_EXTRA_CFLAGS}" make -C ${SRS_OBJS}/${SRS_PLATFORM}/st-srs ${_ST_MAKE_ARGS} &&
mkdir -p ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st &&
cp -f ${SRS_OBJS}/${SRS_PLATFORM}/st-srs/${_ST_OBJ}/st.h ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st/ &&
cp -f ${SRS_OBJS}/${SRS_PLATFORM}/st-srs/${_ST_OBJ}/libst.a ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st/ &&
cp -rf ${SRS_OBJS}/${SRS_PLATFORM}/3rdparty/st ${SRS_OBJS}/ &&
echo "The state-threads is ok."
fi

st-srs never takes much time to build; maybe we should consider whether it's a good idea to build st-srs in this inconvenient way.

@suzp1984
Contributor

This way, I found another "definitely lost" memory leak:

==1== 48 bytes in 1 blocks are definitely lost in loss record 48 of 145
==1==    at 0x484A3C4: operator new(unsigned long) (in /usr/lib/aarch64-linux-gnu/valgrind/vgpreload_memcheck-arm64-linux.so)
==1==    by 0x5BB90B: SrsPithyPrint::create_ingester() (srs_app_pithy_print.cpp:261)
==1==    by 0x5D842B: SrsIngester::SrsIngester() (srs_app_ingest.cpp:97)
==1==    by 0x517BE3: SrsServer::SrsServer() (srs_app_server.cpp:361)
==1==    by 0x51E4DB: SrsServerAdapter::SrsServerAdapter() (srs_app_server.cpp:1343)
==1==    by 0x6B842F: run_hybrid_server(void*) (srs_main_server.cpp:494)
==1==    by 0x6333F7: SrsThreadPool::start(void*) (srs_app_threads.cpp:939)
==1==    by 0x4881623: start_thread (pthread_create.c:477)
==1==    by 0x4C3049B: thread_start (clone.S:78)

SrsIngester::SrsIngester()
{
    _srs_config->subscribe(this);
    expired = false;
    disposed = false;
    trd = new SrsDummyCoroutine();
    pprint = SrsPithyPrint::create_ingester();
}

SrsIngester::~SrsIngester()
{
    _srs_config->unsubscribe(this);
    srs_freep(trd);
    clear_engines();
}

Add srs_freep(pprint) in the destructor. Maybe also rename pprint to pprint_, if there is a private-member naming convention like that.
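
A minimal, self-contained sketch of the suggested fix (the `my_freep` macro, the `live_objects` counter, and the simplified types are stand-ins for SRS's `srs_freep` and real classes, added only to make the leak observable in isolation):

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for SRS's srs_freep: delete the object and null the pointer.
#define my_freep(p) do { delete (p); (p) = NULL; } while (0)

static int live_objects = 0;  // counts allocations, so a leak is observable

struct PithyPrint {
    PithyPrint() { ++live_objects; }
    ~PithyPrint() { --live_objects; }
};

struct Ingester {
    PithyPrint* pprint;
    Ingester() : pprint(new PithyPrint()) {}
    // The fix: the destructor frees what the constructor allocated.
    ~Ingester() { my_freep(pprint); }
};
```

Without the `my_freep(pprint)` line the `PithyPrint` object would outlive its owner, exactly the "definitely lost" block valgrind reports above.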

_st_delete_stk_segment(ts->vaddr, ts->vaddr_size);
free(ts);
}
#endif
Contributor

Only a small part of the st-srs code is shown here; add comments to denote which #if each #endif matches.

Suggested change
#endif
#endif /* MD_CACHE_STACK */

I would suggest adding comments in this way.

@@ -153,7 +175,6 @@ static char *_st_new_stk_segment(int size)


/* Not used */
#if 0
void _st_delete_stk_segment(char *vaddr, int size)
Contributor

  1. Remove the /* Not used */ comment, which is no longer correct.
  2. _st_delete_stk_segment is only used under #ifndef MD_CACHE_STACK; when MD_CACHE_STACK is defined, is the stack segment ever released? Where would _st_delete_stk_segment be called?

@@ -37,6 +37,7 @@ using namespace std;
#include <srs_protocol_log.hpp>
#include <srs_app_latest_version.hpp>
#include <srs_app_conn.hpp>
#include <srs_app_threads.hpp>
Contributor

This include seems to be unused.

#endif

extra = _st_randomize_stacks ? _ST_PAGE_SIZE : 0;
#ifndef MD_CACHE_STACK
Contributor

If the stack isn't cached, why not free the _st_stack_t directly in _st_stack_free instead of storing it in _st_free_stacks?
The stack is still not freed while it sits in _st_free_stacks; it is only released once a new coroutine starts and requests a new stack here.

Member Author

@winlinvip winlinvip Jun 28, 2024

A crash appears to occur if the stack is freed directly within the _st_stack_free function, because the stack is still in use at that point. Instead it is added to the _st_free_stacks structure, and the previous stack is released before the next coroutine starts, so this approach should not result in a significant memory leak.

TRANS_BY_GPT4
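
The deferred-free pattern the author describes can be sketched as follows (a self-contained illustration, not the actual st-srs code: `stack_sim_t`, `free_stacks`, and the function names are invented stand-ins for `_st_stack_t`, `_st_free_stacks`, `_st_stack_free`, and `_st_stack_new`):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Illustrative stand-in for _st_stack_t.
struct stack_sim_t {
    void* vaddr;
    int vaddr_size;
};

// Plays the role of _st_free_stacks.
static std::vector<stack_sim_t*> free_stacks;

// Called while the dying coroutine may still be running on this stack,
// so it must NOT be released here; park it on the free list instead.
void stack_free_deferred(stack_sim_t* ts) {
    free_stacks.push_back(ts);
}

// Called later, from a different stack, when a new coroutine needs one;
// by then it is safe to actually release the parked stacks.
stack_sim_t* stack_new(int size) {
    while (!free_stacks.empty()) {
        stack_sim_t* old_ts = free_stacks.back();
        free_stacks.pop_back();
        std::free(old_ts->vaddr);
        std::free(old_ts);
    }
    stack_sim_t* ts = (stack_sim_t*)std::malloc(sizeof(stack_sim_t));
    ts->vaddr = std::malloc(size);
    ts->vaddr_size = size;
    return ts;
}
```

The design choice here is the one debated above: freeing immediately would pull the stack out from under the exiting coroutine, while parking it and releasing it at the next allocation bounds the unfreed memory to one stack at a time.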
