
Refactor/online repacking #10446

Merged
merged 9 commits into ggerganov:master on Dec 7, 2024

Conversation

@Djip007 (Contributor) commented Nov 21, 2024

Goal: consolidate the CPU backend in preparation for reintegrating the AMX backend.

  • Remove Q4_0_N_M from the ggml file tensor types; only the CPU backend knows about this type, and it performs dynamic repacking for it only on "ggml_backend_cpu_aarch64_buffer_type" (see the sketch after this list).
  • "Extract" the extra_buffer_type part (aarch64/hbm) and move most of it into dedicated .cpp/.h files (migrate aarch64 to C++).
  • Introduce a more general structure for "extra_op".
  • Get GGML_OP_MUL_MAT_ID working for Q4_0_N_M with dynamic repacking (aarch64_buffer).
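For readers unfamiliar with the mechanism, here is a minimal sketch of the "repack on load" idea behind the aarch64 buffer type. Both functions are hypothetical simplifications, not the actual ggml-backend interface:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical repack: for every group of 8 consecutive rows, store the
// same-index blocks of those rows contiguously. This is the general shape
// of a Q4_0 -> Q4_0x8 conversion.
void repack_8_rows(uint8_t * dst, const uint8_t * src,
                   size_t n_rows, size_t row_bytes, size_t block_bytes) {
    const size_t blocks_per_row = row_bytes / block_bytes;
    for (size_t r0 = 0; r0 + 8 <= n_rows; r0 += 8) {
        for (size_t b = 0; b < blocks_per_row; ++b) {
            for (size_t r = 0; r < 8; ++r) {
                std::memcpy(dst, src + (r0 + r)*row_bytes + b*block_bytes, block_bytes);
                dst += block_bytes;
            }
        }
    }
}

// Hypothetical set_tensor hook of the dedicated buffer type: weights are
// converted as they are loaded, so the model file only ever contains Q4_0
// and no repacked type needs to exist at the file level.
void buffer_set_tensor(bool want_repack, uint8_t * buf, const uint8_t * data,
                       size_t n_rows, size_t row_bytes, size_t block_bytes) {
    if (want_repack) {
        repack_8_rows(buf, data, n_rows, row_bytes, block_bytes);
    } else {
        std::memcpy(buf, data, n_rows * row_bytes);
    }
}
```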

@github-actions github-actions bot added the ggml label Nov 21, 2024
Djip007 (Contributor, Author):

Not sure it is correct; I cannot test it. And it may not work/build on the master branch either.

Collaborator:

It could probably be removed; the normal CPU buffer type calls ggml_aligned_malloc, which already uses HBM, so at the moment this buffer type serves no purpose.

```diff
-int64_t const matmul_num_cols = type_traits_cpu[type].ncols;
-ggml_gemv_t const gemv        = type_traits_cpu[type].gemv;
+//int64_t const matmul_num_cols = type_traits_cpu[type].ncols;
+//ggml_gemv_t const gemv        = type_traits_cpu[type].gemv;
```
Djip007 (Contributor, Author):

It looks to me like this was not written for dynamic repacking, only for "native" Q4_0_N_M packing. I left it commented out; it needs some work to be usable with dynamic repacking.

Collaborator:

We should try to fix this to keep support for the aarch64 types in models with experts.

Review thread on ggml/src/ggml-cpu/ggml-cpu.c (outdated, resolved).
@Djip007 Djip007 force-pushed the refactor/online_repacking branch from 36a0406 to 655a3fb on November 21, 2024 21:24
@slaren (Collaborator) commented Nov 22, 2024

Overall looks good. I am not sure about removing support for current Q4_0_x_x models, but I guess if we are going to do it, it is better to do it sooner than later.

@Djip007 (Contributor, Author) commented Nov 22, 2024

> I am not sure about removing support for current Q4_0_x_x models, but I guess if we are going to do it, it is better to do it sooner than later.

Yes, that will be the main/difficult choice:

  • Allow weight repacking only at load time, which reduces the benefit of mmap...
  • Allow adding new "block" types... and be prepared to have lots of new types (AVX512 will want blocks of 16xN, AVX512BF16 of 2x16xN, AVX2 of 8xN, RDNA3 of 16x16, ...); see the layout sketch after this list.
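To make the second option concrete, here is a hypothetical sketch of what such hardware-specific block layouts look like, loosely modeled on ggml's Q4_0 block; names and exact fields are illustrative:

```cpp
#include <cstdint>

#define QK4_0 32                 // elements per Q4_0 block (as in ggml)

struct block_q4_0 {              // base block: one 32-element row segment
    uint16_t d;                  // fp16 scale (stand-in for ggml_half)
    uint8_t  qs[QK4_0 / 2];      // 32 x 4-bit quants, two per byte
};

// Hypothetical interleaved variant for an 8-wide kernel ("8xN" style):
// the same-index blocks of 8 consecutive rows are stored contiguously,
// so one SIMD load feeds 8 output columns at once.
struct block_q4_0x8 {
    uint16_t d[8];               // 8 scales, one per interleaved row
    uint8_t  qs[8 * QK4_0 / 2];  // quants of 8 rows, lane-interleaved
};
```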

@Djip007 (Contributor, Author) commented Nov 24, 2024

@slaren I still need your expertise so as not to make too many mistakes.

I was looking for where params->wdata was created.

```c
char * wdata = params->wdata;
```

It looks to me like it is this function:

```c
struct ggml_cplan ggml_graph_plan(
    const struct ggml_cgraph * cgraph,
    int                        n_threads,
    struct ggml_threadpool   * threadpool) {
```

Am I right?

If so, it looks to me like the size is not calculated correctly for llamafile and Q4_0 repacking:

  • llamafile: we may compute a size for src[1] that may not be used.
  • Q4_0_M_N: it may be computed with the wrong 'vec_dot_type'.

```c
case GGML_OP_MUL_MAT:
    {
        const enum ggml_type vec_dot_type = type_traits_cpu[node->src[0]->type].vec_dot_type;

        if (node->src[1]->type != vec_dot_type) {
            cur = ggml_row_size(vec_dot_type, ggml_nelements(node->src[1]));
        }
    } break;
```

Note: I'm trying to make this more generic to make it easier to reintegrate the AMX backend, so maybe it is not useful to fix it for now.

@ggerganov (Owner) commented:

> llamafile: we may compute a size for src[1] that may not be used.

It's OK if we over-allocate a bit of memory for wdata even if it ends up not being needed. It would be best to add asserts in the different branches that validate wdata is big enough.
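A minimal sketch of the kind of assert being suggested, written as a fragment inside a compute branch that consumes wdata (vec_dot_type and src1 come from the surrounding code, and the nbytes_needed computation is illustrative, not the PR's actual code):

```c
// Each branch validates that the planned work buffer is big enough for
// what it is about to write there.
const size_t nbytes_needed = ggml_row_size(vec_dot_type, ggml_nelements(src1));

GGML_ASSERT(params->wsize >= nbytes_needed); // wdata must fit quantized src1

char * wdata = params->wdata; // safe to use up to nbytes_needed bytes
```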

> Q4_0_M_N: it may be computed with the wrong 'vec_dot_type'.

Isn't vec_dot_type always GGML_TYPE_Q8_0 for the Q4_0_M_N?

@Djip007 (Contributor, Author) commented Nov 24, 2024

> > Q4_0_M_N: it may be computed with the wrong 'vec_dot_type'.
>
> Isn't vec_dot_type always GGML_TYPE_Q8_0 for the Q4_0_M_N?

Yes, that is the case for Q4_0_M_N, so it is not critical for now, even if internally it is more of a Q8_0xN:

```c
block_q8_0x4 * restrict y = (block_q8_0x4 *) vy;
```

But it may not hold for other/future cases.
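For context, the interleaved Q8_0 block referenced above has roughly this shape (reconstructed from my reading of the ggml source; treat the exact fields as a sketch):

```c
#include <stdint.h>

#define QK8_0 32                 // elements per Q8_0 block (as in ggml)

// Four Q8_0 row segments stored together: 4 scales, then the quants of
// all 4 rows. This is why a repacked Q4_0xN matmul wants its activations
// quantized into this layout rather than plain Q8_0.
typedef struct {
    uint16_t d[4];               // 4 fp16 deltas (stand-in for ggml_half)
    int8_t   qs[QK8_0 * 4];      // quants of 4 rows, interleaved
} block_q8_0x4;
```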

@slaren (Collaborator) commented Nov 24, 2024

If we remove the old API and make the CPU backend accessible only through ggml-backend, then there will be a context that can be used to store the work buffer. Then the work buffer could simply be a std::vector in the context, and each operation that uses it only needs to resize it to the amount of memory it needs. Then we can remove ggml_cplan and related functions. However at this point this would break a lot of code.
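A minimal sketch of that idea, assuming a hypothetical cpu_context struct rather than the current ggml API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// The backend context owns the work buffer; each op resizes it on demand,
// removing the need for a pre-computed ggml_cplan work size.
struct cpu_context {
    std::vector<uint8_t> work; // reused across ops, grows as needed
};

uint8_t * get_work_buffer(cpu_context & ctx, size_t nbytes) {
    if (ctx.work.size() < nbytes) {
        ctx.work.resize(nbytes); // capacity persists across calls
    }
    return ctx.work.data();
}
```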

@Djip007 (Contributor, Author) commented Nov 24, 2024

> If we remove the old API and make the CPU backend accessible only through ggml-backend, then there will be a context that can be used to store the work buffer. Then the work buffer could simply be a std::vector in the context, and each operation that uses it only needs to resize it to the amount of memory it needs. Then we can remove ggml_cplan and related functions. However at this point this would break a lot of code.

So you confirm that, for now, this is where the size is calculated?

@slaren (Collaborator) commented Nov 24, 2024

Yes, the size is calculated in the function ggml_graph_plan.

@Djip007 Djip007 force-pushed the refactor/online_repacking branch from 655a3fb to e772df4 on November 29, 2024 00:51
@github-actions github-actions bot added the Nvidia GPU and SYCL labels Nov 29, 2024
@Djip007 Djip007 force-pushed the refactor/online_repacking branch 3 times, most recently from a411d95 to fd768e0 on November 29, 2024 01:38
@Djip007 Djip007 force-pushed the refactor/online_repacking branch from fd768e0 to 16154eb on November 29, 2024 05:18
@Djip007 (Contributor, Author) commented Nov 29, 2024

I can't find how to enable C++17 for macOS-latest-swift / Xcode...

@Djip007 Djip007 force-pushed the refactor/online_repacking branch from 16154eb to dc8adeb on November 29, 2024 05:41
@Djip007 (Contributor, Author) commented Nov 30, 2024

OK, now that #10570 is merged and C++17 is the default, I need to do some more work.

@Djip007 Djip007 force-pushed the refactor/online_repacking branch from dc8adeb to 1b29245 on December 1, 2024 15:35
@Djip007 Djip007 mentioned this pull request Dec 1, 2024
@Djip007 Djip007 force-pushed the refactor/online_repacking branch from 1b29245 to 733f891 on December 1, 2024 16:54
@Djip007 (Contributor, Author) commented Dec 1, 2024

@slaren @ggerganov, what do you think of this refactor?

I tried to make adding a "cpu-extra-buffer" simpler and more general. 🤞

@Djip007 Djip007 marked this pull request as ready for review December 1, 2024 20:18
@slaren (Collaborator) left a comment:

The design looks good, this is a good improvement. Just reformat the code according to the .clang-format file and remove outdated comments.

Review threads (outdated, resolved) on: ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp (7), ggml/src/ggml-cpu/ggml-cpu-hbm.h (2), ggml/src/ggml-cpu/ggml-cpu.cpp (1).
- clean Q4_0_N_M and IQ4_0_N_M
  - remove from "file" tensor type
  - allow only with dynamic repack

- extract cpu extra bufts and convert to C++
  - hbm
  - "aarch64"

- more generic use of extra buffer
  - generalise extra_supports_op
  - new API for "cpu-accel":
     - amx
     - aarch64
Enable restrict on C++
@slaren (Collaborator) left a comment:

Nice job, this is a good improvement.

@slaren slaren requested a review from ggerganov December 6, 2024 02:14
Review thread on ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp (outdated, resolved).
@Djip007 Djip007 force-pushed the refactor/online_repacking branch from 8e5bd04 to b14b471 on December 6, 2024 19:57
@Djip007 (Contributor, Author) commented Dec 6, 2024

I updated the size checks; it should be better like this. 🤞

Two review threads on ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp (outdated, resolved).
@ggerganov (Owner) commented:

We should add debug logs about what repacks are applied to what tensors, so that when running with --log-verbose 1 we would be able to understand the conversions. Currently, it is quite difficult to trace.

@Djip007 Djip007 force-pushed the refactor/online_repacking branch from be3c64b to 1221d13 on December 7, 2024 12:14
@Djip007 (Contributor, Author) commented Dec 7, 2024

> We should add debug logs about what repacks are applied to what tensors, so that when running with --log-verbose 1 we would be able to understand the conversions. Currently, it is quite difficult to trace.

@ggerganov: I added two logs; is that what you were thinking?
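For readers following along, the added logging is along these lines; the message wording here is a sketch, not a verbatim copy of the merged code:

```c
// Emitted when a tensor is converted to a repacked layout at load time,
// visible when running with verbose logging enabled.
GGML_LOG_DEBUG("%s: repacking tensor %s (type %s) for the aarch64 buffer\n",
               __func__, tensor->name, ggml_type_name(tensor->type));
```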

@ggerganov (Owner) commented:

Yes, perfect. Should we merge this or is there anything else you are planning to do?

@Djip007 (Contributor, Author) commented Dec 7, 2024

> Yes, perfect. Should we merge this or is there anything else you are planning to do?

For me, we can merge it (if the CI succeeds 🤞).

@ggerganov ggerganov merged commit 19d8762 into ggerganov:master Dec 7, 2024
46 checks passed
@Djip007 (Contributor, Author) commented Dec 7, 2024

@slaren @ggerganov, thanks for all your reviews and time.

@bartowski1182 (Contributor) commented:
Just saw the opened bug. Is the implication of this change that I should no longer be making Q4_0_N_M quants? They seem to be fully removed?

@slaren (Collaborator) commented Dec 10, 2024

Yes, support for Q4_0_N_M model files has been removed, and they cannot be made anymore.

arthw pushed a commit to arthw/llama.cpp that referenced this pull request Dec 20, 2024
* rename ggml-cpu-aarch64.c to .cpp

* reformat extra cpu backend.

- clean Q4_0_N_M and IQ4_0_N_M
  - remove from "file" tensor type
  - allow only with dynamic repack

- extract cpu extra bufts and convert to C++
  - hbm
  - "aarch64"

- more generic use of extra buffer
  - generalise extra_supports_op
  - new API for "cpu-accel":
     - amx
     - aarch64

* clang-format

* Clean Q4_0_N_M ref

Enable restrict on C++

* add op GGML_OP_MUL_MAT_ID for Q4_0_N_M with runtime repack

* added/corrected control on tensor size for Q4 repacking.

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <[email protected]>

* Update ggml/src/ggml-cpu/ggml-cpu-aarch64.cpp

Co-authored-by: Georgi Gerganov <[email protected]>

* add debug logs on repacks.

---------

Co-authored-by: Georgi Gerganov <[email protected]>