
Reply offload #1457

Open
wants to merge 2 commits into base: unstable

Conversation

@alexander-shabanov commented Dec 18, 2024

Overview

This PR introduces the ability to offload replies to I/O threads as described in #1353.

Key Changes

  • Added a capability to reply construction that allows interleaving regular replies with offloaded replies in client reply buffers
  • Extended write-to-client handlers to support offloaded replies
  • Added offloading of bulk replies when reply offload is enabled
  • Minor changes in cluster slot stats to support network-bytes-out accounting for offloaded replies

Note: When reply offload is disabled, the content and handling of client reply buffers remain the same as before this PR.

Implementation Details

Reply construction:

  1. The original _addReplyToBuffer and _addReplyProtoToList have been renamed to _addReplyPayloadToBuffer and _addReplyPayloadToList and extended to support different payload types: regular replies and offloaded replies.
  2. The new _addReplyToBuffer and _addReplyProtoToList now call _addReplyPayloadToBuffer and _addReplyPayloadToList and are used to add regular replies to client reply buffers.
  3. The newly introduced _addBulkOffloadToBuffer and _addBulkOffloadToList are used to add offloaded replies to client reply buffers (see the sketch below).
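
For illustration, a minimal sketch of the interleaved buffer layout (apart from payloadHeader, CLIENT_REPLY_PAYLOAD_BULK_OFFLOAD, slot and actual_len, the field names and the bounds-free logic are assumptions, not the exact implementation): every chunk appended to a client reply buffer is prefixed by a small header describing what follows, which is what lets regular and offloaded replies coexist in the same buffer.

typedef struct payloadHeader {
    uint8_t type;      /* plain reply bytes, or CLIENT_REPLY_PAYLOAD_BULK_OFFLOAD */
    int slot;          /* slot for cluster-slot-stats accounting, or -1 */
    size_t len;        /* payload bytes that follow this header */
    size_t actual_len; /* network bytes the payload produces when written */
} payloadHeader;

/* Appending an offloaded bulk reply: store the robj pointer instead of copying
 * the string, and keep the object alive until the reply is written out.
 * Bounds checks are omitted in this sketch. */
static void addBulkOffloadSketch(client *c, robj *obj) {
    payloadHeader *h = (payloadHeader *)(c->buf + c->bufpos);
    h->type = CLIENT_REPLY_PAYLOAD_BULK_OFFLOAD;
    h->slot = c->slot;
    h->len = sizeof(robj *);
    c->bufpos += sizeof(payloadHeader);
    memcpy(c->buf + c->bufpos, &obj, sizeof(robj *));
    c->bufpos += sizeof(robj *);
    incrRefCount(obj); /* released only after the write completes */
}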

Write-to-client infrastructure:

The writevToClient and _postWriteToClient handlers have been significantly changed to support the reply offload capability.
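
A minimal sketch of the resume logic (signature and bookkeeping simplified; it assumes the io_last_written_* fields described in the review discussion below): when a previous writev stopped in the middle of a buffer, the next invocation adds only that buffer's unwritten suffix to the iovec array.

/* already_written is c->io_last_written_data_len when buf is the last buffer
 * written in the previous event, and 0 otherwise. struct iovec comes from <sys/uio.h>. */
static void addPlainBufferToReplyIOVSketch(char *buf, size_t len, size_t already_written,
                                           struct iovec *iov, int *iovcnt) {
    if (already_written >= len) return;            /* fully written in a previous event */
    iov[*iovcnt].iov_base = buf + already_written; /* resume from the unwritten suffix */
    iov[*iovcnt].iov_len = len - already_written;
    (*iovcnt)++;
}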

Testing

  1. Existing unit and integration tests passed
  2. Added unit tests for reply offload functionality

Performance tests

Note: an io-threads 1 config means the main thread only with no additional I/O threads, io-threads 2 means the main thread plus 1 I/O thread, and io-threads 9 means the main thread plus 8 I/O threads.

No Reply Offload

Data Size | io-threads config | Number of keys | Number of clients | TPS       | Instance type
512 byte  | 1                 | 3,000,000      | 1000              | 200,000   | memory optimized
512 byte  | 2                 | 3,000,000      | 1000              | 245,000   | memory optimized
512 byte  | 4                 | 3,000,000      | 1000              | 680,000   | memory optimized
512 byte  | 5                 | 3,000,000      | 1000              | 830,000   | memory optimized
512 byte  | 6                 | 3,000,000      | 1000              | 1,047,000 | memory optimized
512 byte  | 7                 | 3,000,000      | 1000              | 1,120,000 | memory optimized
512 byte  | 8                 | 3,000,000      | 1000              | 1,130,000 | memory optimized
512 byte  | 9                 | 3,000,000      | 1000              | 1,090,000 | memory optimized
512 byte  | 10                | 3,000,000      | 1000              | 1,100,000 | memory optimized
64 KB     | 1                 | 500,000        | 500               | 28,800    | network optimized
64 KB     | 5                 | 500,000        | 500               | 68,000    | network optimized
64 KB     | 9                 | 500,000        | 500               | 68,000    | network optimized
1 MB      | 1                 | 10,000         | 100               | 2,200     | network optimized
1 MB      | 2                 | 10,000         | 100               | 4,200     | network optimized
1 MB      | 4                 | 10,000         | 100               | 4,200     | network optimized
1 MB      | 9                 | 10,000         | 100               | 4,200     | network optimized

Reply Offload Enabled

Data Size | io-threads config | Number of keys | Number of clients | TPS       | Instance type
512 byte  | 1                 | 3,000,000      | 1000              | 195,000   | memory optimized
512 byte  | 2                 | 3,000,000      | 1000              | 220,000   | memory optimized
512 byte  | 4                 | 3,000,000      | 1000              | 620,000   | memory optimized
512 byte  | 5                 | 3,000,000      | 1000              | 795,000   | memory optimized
512 byte  | 6                 | 3,000,000      | 1000              | 927,000   | memory optimized
512 byte  | 7                 | 3,000,000      | 1000              | 1,070,000 | memory optimized
512 byte  | 8                 | 3,000,000      | 1000              | 1,120,000 | memory optimized
512 byte  | 9                 | 3,000,000      | 1000              | 1,320,000 | memory optimized
512 byte  | 10                | 3,000,000      | 1000              | 1,380,000 | memory optimized
512 byte  | 11                | 3,000,000      | 1000              | 1,400,000 | memory optimized
64 KB     | 1                 | 500,000        | 500               | 47,000    | network optimized
64 KB     | 5                 | 500,000        | 500               | 190,000   | network optimized
64 KB     | 9                 | 500,000        | 500               | 370,000   | network optimized
1 MB      | 1                 | 10,000         | 100               | 4,800     | network optimized
1 MB      | 2                 | 10,000         | 100               | 4,800     | network optimized
1 MB      | 4                 | 10,000         | 100               | 14,200    | network optimized
1 MB      | 9                 | 10,000         | 100               | 23,360    | network optimized

Comment on lines +1442 to +1446
# For use cases where command replies include Bulk strings (e.g. GET, MGET)
# reply offload can be enabled to eliminate expensive memory access
# and redundant data copies performed by the main thread
#
# reply-offload yes
Member

Do we expect there to be cases where tuning this variable makes sense? Generally we want to avoid configuration in Valkey to make it simple to operate. Can we make real-time decisions about offloading?

Contributor

I'd prefer to avoid the config too. It's better to start with no config and, if it turns out we need it later, then we can add. The reverse is not possible because removing a config is a breaking change.

Author

Please see the results of the performance tests in the PR description. Reply offload benefits performance if either the data size is large (e.g. 64 KB) or the number of I/O threads is big enough for small data sizes (e.g. starting from the 9 io-threads config for 512 bytes).

As eliminating the expensive memory access to obj->ptr by the main thread is the major component of the reply offload optimization, it is very challenging to provide a dynamic solution. Note: access to obj->ptr is required to know the object size. Besides this, even assuming the object size is available somehow, it would be relatively challenging to calibrate IsReplyOffloadBeneficial(data_size, io_threads_num) to make it generic for any OS/CPU architecture/etc.

It looks like we should provide a config parameter in this case, and customers need to test their workloads with and without reply offload and decide whether to activate it.

Please share your opinions.

Member

We have other places where we dynamically choose to take an action, like with lazy-free effort and freeing objects in background threads. For lazy-free, we try to guess how much work it will be to free, and if the effort is small we do it in the main thread anyways. It seems like we can have a similar heuristic here, if the object is large or we have enough I/O threads, we offload the items to be freed. That way end users don't have to tune it and it works well by default.
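
For illustration only, a hypothetical heuristic in that spirit (the function name, threshold and conditions below are made up for this discussion and are not part of the PR):

static int shouldOffloadReplySketch(robj *obj) {
    /* Caveat raised elsewhere in this thread: checking the string length itself
     * touches obj->ptr, which is part of the cost offloading tries to avoid. */
    if (obj->encoding != OBJ_ENCODING_RAW) return 0; /* embstr/int values stay inline */
    if (server.io_threads_num >= 8) return 1;        /* enough I/O threads to absorb the work */
    return stringObjectLen(obj) >= 64 * 1024;        /* single-threaded: only large values win */
}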

@@ -3206,6 +3206,7 @@ standardConfig static_configs[] = {
createBoolConfig("cluster-slot-stats-enabled", NULL, MODIFIABLE_CONFIG, server.cluster_slot_stats_enabled, 0, NULL, NULL),
createBoolConfig("hide-user-data-from-log", NULL, MODIFIABLE_CONFIG, server.hide_user_data_from_log, 1, NULL, NULL),
createBoolConfig("import-mode", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.import_mode, 0, NULL, NULL),
createBoolConfig("reply-offload", NULL, MODIFIABLE_CONFIG, server.reply_offload_enabled, 0, NULL, NULL),
Member

I guess also, why is the default off? IO threading is off by default, so it seems fine to allow this to be on by default.

Author

Please see the comments above regarding avoiding the reply-offload config altogether. The answer to the question 'why is reply-offload off by default' is that it does not benefit performance in 100% of use cases.

@madolson (Member) left a comment

Not a super comprehensive review. Mostly just some comments to improve the clarity, since the code is complex but seems mostly reasonable.

The TPS with reply offload enabled and without I/O threads slightly decreased from 200,000 to 190,000. So, reply offload is not recommended without I/O threads until decrease in cob size is highly important for some customers.

I didn't follow the second half of this sentence. Do you mean "unless decrease in cob size is important"? I find that unlikely to be the case. I would also still like to understand better why it degrades performance.

src/networking.c Outdated
Comment on lines 2368 to 2392
* INTERNALS
* The writevToClient strives to write all client reply buffers to the client connection.
* However, it may encounter NET_MAX_WRITES_PER_EVENT or IOV_MAX or socket limit. In such case,
* some client reply buffers will be written completely and some partially.
* In next invocation writevToClient should resume from the exact position where it stopped.
* Also writevToClient should communicate to _postWriteToClient which buffers were written completely
* and can be released. It is intricate in case of reply offloading as length of reply buffer does not match
* to network bytes out.
*
* For this purpose, writevToClient uses 3 data members on the client struct as input/output parameters:
* io_last_written_buf - Last buffer that has been written to the client connection
* io_last_written_bufpos - The buffer has been written until this position
* io_last_written_data_len - The actual length of the data written from this buffer
* This length differs from written bufpos in case of reply offload
*
* The writevToClient uses addBufferToReplyIOV, addCompoundBufferToReplyIOV, addOffloadedBulkToReplyIOV, addPlainBufferToReplyIOV
* to build reply iovec array. These functions know to skip io_last_written_data_len, specifically addPlainBufferToReplyIOV
*
* At the end of execution writevToClient calls saveLastWrittenBuf to calculate the "last written" buf/pos/data_len
* and store them on the client. While building the reply iov, writevToClient gathers auxiliary bufWriteMetadata that
* helps in this calculation. In some cases, it may take several (> 2) invocations for writevToClient to write the reply
* from a single buffer, but saveLastWrittenBuf knows to calculate the "last written" buf/pos/data_len properly
*
* The _postWriteToClient uses io_last_written_buf and io_last_written_bufpos in order to detect completely written buffers
* and release them
Member

Generally internal comments should be near the code they are describing. Can we move this into the function near the relevant sections?

Author

Moved (spread) comments to relevant functions

src/networking.c Outdated (resolved)
src/networking.c Outdated (resolved)
src/networking.c Outdated
Comment on lines 2539 to 2545
payloadHeader *header = (payloadHeader *)ptr;
ptr += sizeof(payloadHeader);

if (header->type == CLIENT_REPLY_PAYLOAD_BULK_OFFLOAD) {
clusterSlotStatsAddNetworkBytesOutForSlot(header->slot, header->actual_len);

robj** obj_ptr = (robj**)ptr;
Member

Code like this would benefit a lot from some helper methods, instead of constantly moving and recasting values. Something like:

robj *getValkeyObjectFromHeader(payloadHeader *header) {
    /* The object pointer is stored immediately after the header. */
    char *ptr = (char *)header;
    ptr += sizeof(payloadHeader);
    return *(robj **)ptr;
}

Author

The suggested helper function does not address all the needs. The buffer can contain content like
header1 | ptr1 ptr2 ptr3 | header2 | plain_reply | header3 | ptr4 ptr5, so it is more convenient to advance ptr and objv accordingly.

Member

I didn't understand your comment, it doesn't look like it rendered correctly?

it is more convenient to move

I agree, but it is much harder to read.
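
To make that layout concrete, a sketch of walking such an interleaved buffer (the len field and the loop structure are assumptions based on the snippets quoted in this review):

static void releaseBufOffloadsSketch(char *buf, size_t bufpos) {
    char *ptr = buf;
    while (ptr < buf + bufpos) {
        payloadHeader *header = (payloadHeader *)ptr;
        ptr += sizeof(payloadHeader);
        if (header->type == CLIENT_REPLY_PAYLOAD_BULK_OFFLOAD) {
            robj **objv = (robj **)ptr;                 /* one or more object pointers follow */
            size_t count = header->len / sizeof(robj *);
            for (size_t i = 0; i < count; i++) decrRefCount(objv[i]);
        }
        ptr += header->len;                             /* skip payload: pointers or plain reply bytes */
    }
}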

@zuiderkwast (Contributor)

I just looked briefly, mostly at the discussions.

I don't think we should call this feature "reply offload". The design is not strictly limited to offloading to threads. It's rather about avoiding copying.

The TPS for GET commands with data size 512 byte increased from 1.09 million to 1.33 million requests per second in test with 1000 clients and 8 I/O threads.

The TPS with reply offload enabled and without I/O threads slightly decreased from 200,000 to 190,000. So, reply offload is not recommended without I/O threads until decrease in cob size is highly important for some customers.

So there appears to be some overhead with this approach? It could be that cob memory is already in CPU cache, but when the cob is written to the client, the values are not in CPU cache anymore, so we get more cold memory accesses.

Anyhow, you tested it only with 512 byte values? I would guess this feature is highly dependent on the value size. With a value size of 100MB, I would be surprised if we don't see an improvement also in single-threaded mode.

Is there any size threshold for when we embed object pointers in the cob? Is it as simple as: if the value is stored as OBJ_ENCODING_RAW, the string is stored in this way? In that case, the threshold is basically around 64 bytes in practice, because smaller strings are stored as EMBSTR.

I think we should benchmark this feature with several different value sizes and find the reasonable size threshold where we benefit from this. Probably there will be a different (higher) threshold for single-threaded and a lower one for IO-threaded. Could it even depend on the number of threads?
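
As a hedged illustration of the check being discussed (not necessarily the PR's actual condition): only raw-encoded string objects have a separately allocated sds buffer to point at, so values short enough to be stored as embstr (or as an int) would be serialized inline as before.

static int isOffloadCandidateSketch(const robj *obj) {
    return obj->type == OBJ_STRING && obj->encoding == OBJ_ENCODING_RAW;
}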

Signed-off-by: Alexander Shabanov <[email protected]>
@alexander-shabanov (Author)

https://github.com/valkey-io/valkey/actions/runs/12395947567/job/34606854911?pr=1457

Means you are leaking some memory.

Fixed

@alexander-shabanov force-pushed the reply_offload branch 2 times, most recently from ac7e1f5 to a40e72e on December 19, 2024 14:03
@alexander-shabanov (Author) commented Dec 20, 2024

Not a super comprehensive review. Mostly just some comments to improve the clarity, since the code is complex but seems mostly reasonable.

The TPS with reply offload enabled and without I/O threads slightly decreased from 200,000 to 190,000. So, reply offload is not recommended without I/O threads until decrease in cob size is highly important for some customers.

I didn't follow the second half of this sentence. Do you mean "unless decrease in cob size is important"? I find that unlikely to be the case. I would also still like to understand better why it degrades performance.

From the tests and perf profiling it appears that the main cause of the performance improvement from this feature is eliminating the expensive memory access to obj->ptr by the main thread, and much less the elimination of copies into reply buffers. Without I/O threads, the main thread still needs to access obj->ptr, and the writev flow is a bit slower (it requires additional preparation work) than the plain write flow. I will publish results of various tests with and without I/O threads and with different data sizes next week.

@alexander-shabanov (Author) commented Dec 20, 2024

I just looked briefly, mostly at the discussions.

I don't think we should call this feature "reply offload". The design is not strictly limited to offloading to threads. It's rather about avoiding copying.

The TPS for GET commands with data size 512 byte increased from 1.09 million to 1.33 million requests per second in test with 1000 clients and 8 I/O threads.
The TPS with reply offload enabled and without I/O threads slightly decreased from 200,000 to 190,000. So, reply offload is not recommended without I/O threads until decrease in cob size is highly important for some customers.

So there appears to be some overhead with this approach? It could be that cob memory is already in CPU cache, but when the cob is written to the client, the values are not in CPU cache anymore, so we get more cold memory accesses.

Anyhow, you tested it only with 512 byte values? I would guess this feature is highly dependent on the value size. With a value size of 100MB, I would be surprised if we don't see an improvement also in single-threaded mode.

Is there any size threshold for when we embed object pointers in the cob? Is it as simple as: if the value is stored as OBJ_ENCODING_RAW, the string is stored in this way? In that case, the threshold is basically around 64 bytes in practice, because smaller strings are stored as EMBSTR.

I think we should benchmark this feature with several different value sizes and find the reasonable size threshold where we benefit from this. Probably there will be a different (higher) threshold for single-threaded and a lower one for IO-threaded. Could it even depend on the number of threads?

Very good questions. I will publish results of various tests with and without I/O threads and with different data sizes next week. IMPORTANT NOTE: we can't switch reply offload on or off dynamically according to the object (string) size, because the main optimization is eliminating the expensive memory access to obj->ptr by the main thread (eliminating the copy is much less important).

@zuiderkwast (Contributor)

we can't switch on or off reply offload dynamically according to obj(string) size cause main optimization is to eliminate expensive memory access to obj->ptr by main thread

Got it. Thanks!

At least, when the feature is ON, it doesn't make sense to dynamically switch it OFF based on length.

But for single-threaded mode where this feature is normally OFF, we could consider switching it ON dynamically only for really huge strings, right? In this case we will have one expensive memory access, but we could avoid copying megabytes. Let's see the benchmark results if this makes sense.

I appreciate you're testing this with different sizes and with/without IO threading.

@alexander-shabanov (Author)

we can't switch on or off reply offload dynamically according to obj(string) size cause main optimization is to eliminate expensive memory access to obj->ptr by main thread

Got it. Thanks!

At least, when the feature is ON, it doesn't make sense to dynamically switch it OFF based on length.

But for single-threaded mode where this feature is normally OFF, we could consider switching it ON dynamically only for really huge strings, right? In this case we will have one expensive memory access, but we could avoid copying megabytes. Let's see the benchmark results if this makes sense.

I appreciate you're testing this with different sizes and with/without IO threading.

Published results of performance tests in the PR description

robj *obj2 = createObject(OBJ_STRING, sdscatfmt(sdsempty(), "test2"));
_addBulkOffloadToBufferOrList(c, obj2);

TEST_ASSERT(c->bufpos == sizeof(payloadHeader) + 2 * sizeof(void *));
Contributor

Can you add a comment explaining the + 2, or better, declare a meaningfully named variable with the value of 2?

@alexander-shabanov (Author), Dec 25, 2024

Added more comments in tests


int clusterSlotStatsEnabled(void) {
    return server.cluster_slot_stats_enabled && /* Config should be enabled. */
           server.cluster_enabled;              /* Cluster mode should be enabled. */
Contributor

These comments appear to be redundant.

@alexander-shabanov (Author), Dec 24, 2024

These comments were retained from the original refactored code. They are indeed redundant; I will remove them.

clientReplyBlock *block = (clientReplyBlock *)listNodeValue(c->io_last_reply_block);
c->io_last_bufpos = block->used;
/* If reply offload enabled force new header */
block->last_header = NULL;
Contributor

Why do we need a last_header per block if we already have a last_header field per client?

Author

A single last_header per client does not work well with the deferred-reply case. Besides this, a last_header per block is simpler and aligns well with the existing code and data fields.

@@ -234,6 +266,9 @@ client *createClient(connection *conn) {
c->commands_processed = 0;
c->io_last_reply_block = NULL;
c->io_last_bufpos = 0;
c->io_last_written_buf = NULL;
Contributor

Can't we instead use a bit flag in the payload header to indicate if we are done? Since the main thread needs to iterate over the headers inside the buffer anyway to get the actual written bytes.

@alexander-shabanov (Author), Dec 25, 2024

"the main thread needs to iterate over the headers inside the buffer anyway to get the actual written bytes" is not accurate. The main thread iterates over the headers inside the buffer ONLY ONCE when buffer should be released.

"use a bit flag in the payload header" approach will cause:

  1. main thread to iterate over the headers inside the buffer even when buffer can't be released fully yet
  2. writer thread to iterate over the headers inside the buffer in the end of write each time to detect/mark headers

The implemented io_last_written_buf/io_last_written_bufpos/io_last_written_data_len eliminates redundant iterations over headers inside buffers and much simpler.

Signed-off-by: Alexander Shabanov <[email protected]>
@zuiderkwast (Contributor)

Thanks for the benchmarks! This is very interesting. For large values (64KB and above), it is a great improvement also for single-threaded. For 512 bytes values, it's faster only with 9 threads or more.

This confirms my guess that it's not only about offloading work to IO threads, but also about less copying for large values.

We should have some threshold to use it also for single threaded. I suggest we use 64KB as the threshold, or benchmark more sizes to find a better threshold.

@alexander-shabanov (Author)

Thanks for the benchmarks! This is very interesting. For large values (64KB and above), it is a great improvement also for single-threaded. For 512 bytes values, it's faster only with 9 threads or more.

This confirms my guess that it's not only about offloading work to IO threads, but also about less copying for large values.

We should have some threshold to use it also for single threaded. I suggest we use 64KB as the threshold, or benchmark more sizes to find a better threshold.

Pay attention: 9 threads means the main thread + 8 I/O threads. Why do we need to find a threshold? I still think it should be a config param, and customers should test their workloads and decide whether to activate it.

@ranshid (Member) commented Jan 2, 2025

Thanks for the benchmarks! This is very interesting. For large values (64KB and above), it is a great improvement also for single-threaded. For 512 bytes values, it's faster only with 9 threads or more.
This confirms my guess that it's not only about offloading work to IO threads, but also about less copying for large values.
We should have some threshold to use it also for single threaded. I suggest we use 64KB as the threshold, or benchmark more sizes to find a better threshold.

Pay attention 9 threads means main thread + 8 I/O threads. Why do we need to find out threshold? I still think it should be config param and customers should test their workloads and activate or not accordingly.

@alexander-shabanov I do not think adding a configuration parameter is the preferred option in this case. Users almost never tune their caches at this level, and it is also very problematic to tell users to learn their workload and formulate their own rules for when to enable this config. I also think there is some risk in introducing this degradation, so we should work to understand what other alternatives we have.
I can think of some:

  1. Enable this feature only when the number of active I/O threads is 8
  2. Track the CPU consumption on the engine thread and dynamically enable the feature when the main engine is utilizing high CPU
  3. Maybe we can find a way to tag that the reply object should be offloaded so that we will not get the memory-access penalty

And I am sure there are more.

/* Test 1: Add bulk offloads to the reply list */

/* Fill c->buf almost completely */
size_t reply_len = c->buf_usable_size - 2 * sizeof(payloadHeader) - 4;
Contributor

Could you add a comment explaining why -4 ?

freeReplyOffloadClient(c);

return 0;
}
Contributor

Can we test releaseBufOffloads instead of directly calling decrRefCount

@@ -174,24 +179,14 @@ void clusterSlotStatsDecrNetworkBytesOutForReplication(long long len) {
* This type is not aggregated, to stay consistent with server.stat_net_output_bytes aggregation.
* This function covers the internal propagation component. */
void clusterSlotStatsAddNetworkBytesOutForShardedPubSubInternalPropagation(client *c, int slot) {
/* For a blocked client, c->slot could be pre-filled.
Contributor

Not sure I understand why it was required before this PR.

static int canAddNetworkBytesOut(client *c) {
return server.cluster_slot_stats_enabled && server.cluster_enabled && c->slot != -1;
static int canAddNetworkBytesOut(int slot) {
return clusterSlotStatsEnabled() && slot != -1;
Contributor

Consider renaming it to canAddSlotStats and using it in both canAddCpuDuration and canAddNetworkBytesIn

c->io_last_bufpos = ((clientReplyBlock *)listNodeValue(c->io_last_reply_block))->used;
clientReplyBlock *block = (clientReplyBlock *)listNodeValue(c->io_last_reply_block);
c->io_last_bufpos = block->used;
/* If reply offload enabled force new header */
Contributor

Maybe we should indeed check if reply offload is enabled before writing NULL to the header? Checking a global is cheaper than write access to the block

@@ -374,6 +425,49 @@ void deleteCachedResponseClient(client *recording_client) {
/* -----------------------------------------------------------------------------
* Low level functions to add more data to output buffers.
* -------------------------------------------------------------------------- */
static inline void insertPayloadHeader(char *buf, size_t *bufpos, uint8_t type, size_t len, int slot, payloadHeader **last_header) {
/* Save the latest header */
*last_header = (payloadHeader *)(buf + *bufpos);
Contributor

Maybe new_header is more clear?

c->bufpos += reply_len;
/* We update the buffer peak after appending the reply to the buffer */
if (c->buf_peak < (size_t)c->bufpos) c->buf_peak = (size_t)c->bufpos;
return reply_len;
}

size_t _addReplyToBuffer(client *c, const char *s, size_t len) {
Contributor

Can be static as well. Same for _addBulkOffloadToBuffer.

struct iovec *iov = iov_arr;
ssize_t bufpos, iov_bytes_len = 0;
listNode *lastblock;
char prefixes[iovmax / 3 + 1][LONG_STR_SIZE + 3];
Contributor

iovmax / 3 - use a constant instead of a magic number

Contributor

Please add comments for this line to clarify what the prefixes are and why these dimensions were chosen

Contributor

It would be better to find a way to avoid allocating this array on the stack in the common case where reply offload is disabled

char prefixes[iovmax / 3 + 1][LONG_STR_SIZE + 3];
char crlf[2] = {'\r', '\n'};
int bufcnt = 0;
bufWriteMetadata metadata[listLength(c->reply) + 1];
Contributor

Add a comment explaining that the +1 is for the static buffer

}

static inline int updatePayloadHeader(payloadHeader *last_header, uint8_t type, size_t len, int slot) {
if (last_header->type == type && last_header->slot == slot) {
Contributor

When using, for example, MGET with small values that are not offloaded and come from different slots, wouldn't this cause multiple plain headers, thus requiring writev instead of a simple write? We should investigate whether this causes any performance degradation.
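
For context, a sketch of the coalescing this condition enables (the extend branch is an assumption; only the quoted condition is from the PR): consecutive replies with the same type and slot extend the previous header, so a new header, and with it an extra iovec entry, is only needed when either changes.

static inline int updatePayloadHeaderSketch(payloadHeader *last_header, uint8_t type, size_t len, int slot) {
    if (last_header && last_header->type == type && last_header->slot == slot) {
        last_header->len += len; /* coalesce into the existing chunk */
        return 1;
    }
    return 0;                    /* caller inserts a new header, e.g. via insertPayloadHeader */
}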
