Hyperparameter tuning, standardization & extended searchspace construction #278

Draft
wants to merge 43 commits into base: master
43 commits
b13f972
Added pyATF as searchspace builder
fjwillemsen Jun 5, 2024
3ac3f15
Merge branch 'master' of https://github.com/KernelTuner/kernel_tuner
fjwillemsen Jun 6, 2024
5e32707
Implemented basic PyATF searchspace builder
fjwillemsen Jun 6, 2024
2214394
Minor changes
fjwillemsen Jul 25, 2024
0fa30a0
Setup structure for cross-searchspace hyperparameter tuning using met…
fjwillemsen Jul 30, 2024
2636245
Further integration of Hypertuner device functions
fjwillemsen Jul 30, 2024
af88572
Finalized new cross-searchspace hyperparameter tuning in Kernel Tuner
fjwillemsen Jul 30, 2024
4f0a4aa
Added T1 schema and command line interface for T1 input files
fjwillemsen Oct 5, 2024
1e7e54d
Implemented T1 input in interface
fjwillemsen Oct 7, 2024
fa11cfd
Merge with master
fjwillemsen Oct 8, 2024
a83fcd3
Fixed an issue that caused the T4 schema to point to a non-existing a…
fjwillemsen Oct 8, 2024
5f191da
Hypertuner tests are skipped without Methodology dependency installed
fjwillemsen Oct 8, 2024
0c0fbed
Added error message to Nox development environment check in case of f…
fjwillemsen Oct 8, 2024
a081772
Added type hints
fjwillemsen Oct 8, 2024
34d0755
Added T1 input file for testing
fjwillemsen Oct 8, 2024
61405fb
Added T1 input schema for validation
fjwillemsen Oct 8, 2024
bbdbc89
Setup tests for T1 input file
fjwillemsen Oct 8, 2024
1887fcd
Merge branch 'searchspace_experiments' of https://github.com/benvanwe…
fjwillemsen Oct 8, 2024
07bc8b0
Fixed an error reading the T1 format
fjwillemsen Oct 8, 2024
8242403
Fixed an error reading the T1 format
fjwillemsen Oct 8, 2024
0c4c76f
Interface accepts Path-type as kernel_source argument to resolve to file
fjwillemsen Oct 9, 2024
6c2476b
Added type hints
fjwillemsen Oct 9, 2024
f773dff
Changed test to Kernel Tuner example
fjwillemsen Oct 9, 2024
3cbf19a
Added functionality to have dynamic vector sizes based on tunable par…
fjwillemsen Oct 9, 2024
3398320
Added conversion of grid divisions and constant arguments
fjwillemsen Oct 9, 2024
6711968
Added support for cachefiles in T1 interface
fjwillemsen Oct 9, 2024
814b155
Expose T1 interface function in package
fjwillemsen Oct 11, 2024
6ca0f69
Extended T1 tuning interface, formatting
fjwillemsen Oct 11, 2024
cb1a755
Convert single problem size to array, improved error messages
fjwillemsen Oct 11, 2024
a15ac3b
Split T4 output function to have either Python value or file returned
fjwillemsen Oct 11, 2024
74d5bba
Implemented return of T4 output, additional arguments
fjwillemsen Oct 11, 2024
3ea8c75
Further integration of experiments file generation and hyperparamete…
fjwillemsen Oct 16, 2024
f17fcbe
Generate an experiments file using provided values and defaults from …
fjwillemsen Oct 16, 2024
1c9db4a
Fixed a function name error
fjwillemsen Oct 16, 2024
94995b4
Fixed an interface error in test
fjwillemsen Oct 16, 2024
b3e0f1d
Implemented sonarlint suggestions
fjwillemsen Oct 16, 2024
e7fd95a
Implemented benchmarkobserver for reporting hyperparameter tuning res…
fjwillemsen Oct 16, 2024
fd823aa
Limited the max_feval size to at most the size of the searchspace
fjwillemsen Oct 17, 2024
e04906e
Successfully implemented hyperparameter tuning
fjwillemsen Oct 17, 2024
6062e50
Avoid duplicate execution, resolved an issue which sometimes led to h…
fjwillemsen Oct 22, 2024
11a86eb
Added time unit metadata to T4 output
fjwillemsen Oct 23, 2024
abaf90d
Set up hyperparamtuning cache, made cache names hyperparam-unique
fjwillemsen Oct 23, 2024
3be5685
Changed hyperparameter tuning defaults
fjwillemsen Oct 23, 2024
3 changes: 3 additions & 0 deletions .gitignore
@@ -2,6 +2,7 @@
 poetry.lock
 noxenv.txt
 noxsettings.toml
+hyperparamtuning/
 
 ### Python ###
 *.pyc
@@ -16,6 +17,8 @@ push_to_pypi.sh
 .nfs*
 *.log
 *.json
+!kernel_tuner/schema/T1/1.0.0/input-schema.json
+!test/test_T1_input.json
 *.csv
 .cache
 *.ipynb_checkpoints
2 changes: 1 addition & 1 deletion kernel_tuner/__init__.py
@@ -1,5 +1,5 @@
 from kernel_tuner.integration import store_results, create_device_targets
-from kernel_tuner.interface import tune_kernel, run_kernel
+from kernel_tuner.interface import tune_kernel, tune_kernel_T1, run_kernel
 
 from importlib.metadata import version
 
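The new `tune_kernel_T1` entry point is exported next to `tune_kernel`. A minimal sketch of how it might be called, assuming it accepts the path to a T1 input file (the exact signature is defined in `kernel_tuner/interface.py`; the input file below is the test file added by this PR):

```python
from pathlib import Path

from kernel_tuner import tune_kernel_T1

# Hypothetical invocation: the T1 JSON file describes the kernel source,
# tunable parameters, and constraints, and is validated against
# kernel_tuner/schema/T1/1.0.0/input-schema.json.
# The return value is assumed to mirror tune_kernel's (results, env) pair.
results, env = tune_kernel_T1(Path("test/test_T1_input.json"))
```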
10 changes: 5 additions & 5 deletions kernel_tuner/backends/backend.py
@@ -1,16 +1,16 @@
-"""This module contains the interface of all kernel_tuner backends"""
+"""This module contains the interface of all kernel_tuner backends."""
 from __future__ import print_function
 
 from abc import ABC, abstractmethod
 
 
 class Backend(ABC):
-    """Base class for kernel_tuner backends"""
+    """Base class for kernel_tuner backends."""
 
     @abstractmethod
     def ready_argument_list(self, arguments):
         """This method must implement the allocation of the arguments on device memory."""
-        pass
+        return arguments
 
     @abstractmethod
     def compile(self, kernel_instance):
@@ -59,7 +59,7 @@ def memcpy_htod(self, dest, src):
 
 
 class GPUBackend(Backend):
-    """Base class for GPU backends"""
+    """Base class for GPU backends."""
 
     @abstractmethod
     def __init__(self, device, iterations, compiler_options, observers):
@@ -82,7 +82,7 @@ def copy_texture_memory_args(self, texmem_args):
 
 
 class CompilerBackend(Backend):
-    """Base class for compiler backends"""
+    """Base class for compiler backends."""
 
     @abstractmethod
     def __init__(self, iterations, compiler_options, compiler):
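The notable change here is that `ready_argument_list` now returns its `arguments` instead of returning None via `pass`, so backends that have no device memory to manage can simply defer to the base class, as the hypertuner backend below does. A minimal sketch of the pattern (illustrative only; a real subclass must also implement the remaining abstract methods):

```python
from kernel_tuner.backends.backend import Backend


class HostOnlyBackend(Backend):  # hypothetical name, for illustration
    def ready_argument_list(self, arguments):
        # The base implementation now returns the arguments unchanged,
        # which is the right default when nothing is allocated on a device.
        return super().ready_argument_list(arguments)
```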
131 changes: 131 additions & 0 deletions kernel_tuner/backends/hypertuner.py
@@ -0,0 +1,131 @@
"""This module contains a 'device' for hyperparameter tuning using the autotuning methodology."""

import platform
from pathlib import Path

from numpy import mean

from kernel_tuner.backends.backend import Backend
from kernel_tuner.observers.observer import BenchmarkObserver

try:
methodology_available = True
from autotuning_methodology.experiments import generate_experiment_file
from autotuning_methodology.report_experiments import get_strategy_scores
except ImportError:
methodology_available = False


class ScoreObserver(BenchmarkObserver):
def __init__(self, dev):
self.dev = dev
self.scores = []

def after_finish(self):
self.scores.append(self.dev.last_score)

def get_results(self):
results = {'score': mean(self.scores), 'scores': self.scores.copy()}
self.scores = []
return results

class HypertunerFunctions(Backend):
"""Class for executing hyperparameter tuning."""
units = {}

def __init__(self, iterations):
self.iterations = iterations
self.observers = [ScoreObserver(self)]
self.name = platform.processor()
self.max_threads = 1024
self.last_score = None

# set the environment options
env = dict()
env["iterations"] = self.iterations
self.env = env

# check for the methodology package
if methodology_available is not True:
raise ImportError("Unable to import the autotuning methodology, run `pip install autotuning_methodology`.")

def ready_argument_list(self, arguments):
arglist = super().ready_argument_list(arguments)
if arglist is None:
arglist = []
return arglist

def compile(self, kernel_instance):
super().compile(kernel_instance)
path = Path(__file__).parent.parent.parent / "hyperparamtuning"
path.mkdir(exist_ok=True)

# TODO get applications & GPUs args from benchmark
gpus = ["RTX_3090", "RTX_2080_Ti"]
applications = None
# applications = [
# {
# "name": "convolution",
# "folder": "./cached_data_used/kernels",
# "input_file": "convolution.json"
# },
# {
# "name": "pnpoly",
# "folder": "./cached_data_used/kernels",
# "input_file": "pnpoly.json"
# }
# ]

# strategy settings
strategy: str = kernel_instance.arguments[0]
hyperparams = [{'name': k, 'value': v} for k, v in kernel_instance.params.items()]
hyperparams_string = "_".join(f"{k}={str(v)}" for k, v in kernel_instance.params.items())
searchspace_strategies = [{
"autotuner": "KernelTuner",
"name": f"{strategy.lower()}_{hyperparams_string}",
"display_name": strategy.replace('_', ' ').capitalize(),
"search_method": strategy.lower(),
'search_method_hyperparameters': hyperparams
}]

# any additional settings
override = {
"experimental_groups_defaults": {
"samples": self.iterations
}
}

name = kernel_instance.name if len(kernel_instance.name) > 0 else kernel_instance.kernel_source.kernel_name
experiments_filepath = generate_experiment_file(name, path, searchspace_strategies, applications, gpus,
override=override, overwrite_existing_file=True)
return str(experiments_filepath)

def start_event(self):
return super().start_event()

def stop_event(self):
return super().stop_event()

def kernel_finished(self):
super().kernel_finished()
return True

def synchronize(self):
return super().synchronize()

def run_kernel(self, func, gpu_args=None, threads=None, grid=None, stream=None):
# generate the experiments file
experiments_filepath = Path(func)

# run the methodology to get a fitness score for this configuration
scores = get_strategy_scores(str(experiments_filepath))
self.last_score = scores[list(scores.keys())[0]]['score']

def memset(self, allocation, value, size):
return super().memset(allocation, value, size)

def memcpy_dtoh(self, dest, src):
return super().memcpy_dtoh(dest, src)

def memcpy_htod(self, dest, src):
return super().memcpy_htod(dest, src)
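To make the control flow above concrete, here is a minimal sketch of how the backend is driven. It assumes a `kernel_instance` object as normally constructed by Kernel Tuner's core, with the strategy name in `arguments[0]` and the hyperparameter configuration in `params`:

```python
from kernel_tuner.backends.hypertuner import HypertunerFunctions

dev = HypertunerFunctions(iterations=3)

# `compile` writes an experiments file for this hyperparameter configuration
# and returns its path; `run_kernel` has the methodology evaluate it and
# stores the resulting fitness score on the device.
experiments_file = dev.compile(kernel_instance)  # kernel_instance: assumed to exist
dev.run_kernel(experiments_file)
print(f"fitness score: {dev.last_score}")
```

In normal operation Kernel Tuner invokes the `ScoreObserver` after each run, so the hyperparameter tuner optimizes the mean score over `iterations` evaluations rather than a single sample.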