
Exp cristian #29

Open
wants to merge 62 commits into base: main
e440931
normalize and dynamic graph
JoeNatan30 Aug 30, 2022
dc815f4
folders
JoeNatan30 Aug 30, 2022
31aff52
no raw in new folders
JoeNatan30 Aug 30, 2022
79b5540
keep folders
JoeNatan30 Aug 30, 2022
5b0d2a6
get data from connecting points
JoeNatan30 Aug 30, 2022
7223bbb
wandb functions
JoeNatan30 Aug 30, 2022
c9bed67
fix path of split in connecting points
JoeNatan30 Aug 30, 2022
d4c7890
path connections fixes
JoeNatan30 Aug 30, 2022
6c5ceaa
to run different models for 71 points
JoeNatan30 Aug 30, 2022
9505bf6
work with 29 points
JoeNatan30 Aug 31, 2022
4edaffd
Connecting points repository
JoeNatan30 Aug 31, 2022
b260e97
note in readme
JoeNatan30 Aug 31, 2022
f73a75c
file output fixes and readme
JoeNatan30 Sep 6, 2022
a6038b8
automation v1 stable
Sep 7, 2022
680ea71
automation
CristianLazoQuispe Sep 7, 2022
d6091f0
hyperparameter tuning 71 v1
CristianLazoQuispe Sep 7, 2022
1c672a1
running tuning 71 and 29 keypoints
CristianLazoQuispe Sep 7, 2022
177863e
Merge pull request #1 from JoeNatan30/exp-cristian
CristianLazoQuispe Sep 8, 2022
1b463f5
Update readme.md
CristianLazoQuispe Sep 18, 2022
cf9d4b5
more val accuracy and top5
Sep 18, 2022
a3c2c18
Merge branch 'exp-cristian' into main
CristianLazoQuispe Sep 18, 2022
d6d4534
seed
CristianLazoQuispe Sep 18, 2022
0c11191
rationale 1
CristianLazoQuispe Sep 19, 2022
099bbff
run experiment
CristianLazoQuispe Sep 20, 2022
2b7cad1
simulation rationale 3
CristianLazoQuispe Sep 23, 2022
5ca7c4a
rationale v3
Sep 28, 2022
a1e4901
Update readme.md
CristianLazoQuispe Oct 6, 2022
216c8e4
Update readme.md
CristianLazoQuispe Oct 6, 2022
0d40608
NeurIPS changes
CristianLazoQuispe Oct 7, 2022
54b17c1
NeurIPS changes
CristianLazoQuispe Oct 7, 2022
c5d0fff
# parameters
CristianLazoQuispe Oct 7, 2022
0b40057
# parameters
CristianLazoQuispe Oct 7, 2022
48420ca
# parameters
CristianLazoQuispe Oct 7, 2022
4d9fd24
# parameters
CristianLazoQuispe Oct 7, 2022
4b32e71
report submission and word list in csv
Oct 16, 2022
9c77036
51 points
CristianLazoQuispe Oct 23, 2022
8a1d10b
51 points works well
CristianLazoQuispe Oct 27, 2022
fec729a
51 points working
Oct 28, 2022
bdf7320
lower number of parameters in the model
Oct 28, 2022
559670f
readme s
Oct 30, 2022
b1af751
readme mod
Oct 30, 2022
1d84725
merge with main
CristianLazoQuispe Oct 30, 2022
aad01c8
reduce params v1
CristianLazoQuispe Oct 30, 2022
3dfd56e
exp v7
CristianLazoQuispe Oct 30, 2022
3e32024
experiment 9
CristianLazoQuispe Oct 30, 2022
85fe6c7
experiment 10
CristianLazoQuispe Oct 30, 2022
1590e39
analysis optimization model AEC 29 v1
CristianLazoQuispe Oct 31, 2022
e539a36
v2: 2 layers, 16 to 32
CristianLazoQuispe Oct 31, 2022
9ac0c6f
v2: 1 layer at 32
CristianLazoQuispe Oct 31, 2022
d159271
v2: 1 layer at 16
CristianLazoQuispe Nov 1, 2022
c141529
running full analysis, 1 layer at 16, for all datasets
CristianLazoQuispe Nov 1, 2022
41c7a2c
optimizacion_analysis_v3 PUCP 29
CristianLazoQuispe Nov 1, 2022
b1ce554
analysis v4, lighter model
Nov 6, 2022
89d3243
choosing the model_version is automated
CristianLazoQuispe Nov 6, 2022
19890dc
model param is not working
CristianLazoQuispe Nov 9, 2022
8251350
local changes
CristianLazoQuispe Nov 9, 2022
e3e6a3b
Merge branch 'exp-optimizacion' of https://github.com/JoeNatan30/CVPR…
CristianLazoQuispe Nov 9, 2022
42c11ff
models version works well
CristianLazoQuispe Nov 9, 2022
2e8a5f6
optimization: lighter models
CristianLazoQuispe Nov 9, 2022
cd8a20e
model optimization
CristianLazoQuispe Nov 10, 2022
db890c8
model optimization
CristianLazoQuispe Nov 10, 2022
6252c32
Merge pull request #2 from JoeNatan30/exp-optimizacion
CristianLazoQuispe Nov 16, 2022
13 changes: 13 additions & 0 deletions .gitignore
@@ -1,3 +1,16 @@
*.log
*.pt
*.env
data/
save_models/
work_dir/
wandb/

SL-GCN/data/
SL-GCN/save_models/
SL-GCN/work_dir/
SL-GCN/wandb/

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
1 change: 1 addition & 0 deletions SL-GCN/.~lock.points_29.csv#
@@ -0,0 +1 @@
,cristian,cristian,23.10.2022 13:09,file:///home/cristian/.config/libreoffice/4;
31 changes: 17 additions & 14 deletions SL-GCN/config/sign/train/train_joint.yaml
@@ -1,48 +1,51 @@
Experiment_name: sign_joint_final
#Experiment_name: sign_joint_final

# feeder
feeder: feeders.feeder.Feeder
train_feeder_args:
data_path: ./data/sign/27_2/train_data_joint.npy
label_path: ./data/sign/27_2/train_label.pkl
data_path: data/sign/1/train_data_joint.npy
label_path: data/sign/1/train_label.pkl
meaning_path: data/sign/1/meaning.pkl
debug: False
random_choose: True
window_size: 100
random_shift: True
normalization: True
normalization: True # can be set to False since the input data is already normalized
random_mirror: True
random_mirror_p: 0.5
is_vector: False

test_feeder_args:
data_path: ./data/sign/27_2/val_data_joint.npy
label_path: ./data/sign/27_2/val_gt.pkl
data_path: data/sign/1/val_data_joint.npy
label_path: data/sign/1/val_label.pkl
meaning_path: data/sign/1/meaning.pkl
random_mirror: False
normalization: True

# model

model: model.decouple_gcn_attn.Model
model_args:
num_class: 226
num_point: 27
num_person: 1
graph: graph.sign_27.Graph
groups: 16
block_size: 41
graph_args:
labeling_mode: 'spatial'
num_node: 29

#optim
weight_decay: 0.0001
base_lr: 0.1
step: [150, 200]
base_lr: 0.005
step: [] #[50, 100, 150, 200] # To modify the learning rate => lr * 0.1**Sum(x :-> epoch > step)

# training
device: [0,1,2,3]
device: [0, 1]
#device: [0]
keep_rate: 0.9
only_train_epoch: 1
batch_size: 64
test_batch_size: 64
#batch_size: 8
test_batch_size: 8
num_epoch: 250
nesterov: True
warm_up_epoch: 20
warm_up_epoch: 20
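The `step` comment in the config describes a multiplicative step decay: the learning rate is `base_lr` times 0.1 for every milestone the current epoch has passed. A minimal sketch of that rule (the function name and example milestones are illustrative, not from the repo):

```python
# Step-decay rule from the YAML comment:
# lr = base_lr * 0.1 ** (number of milestones already passed by `epoch`).
def decayed_lr(base_lr, step_milestones, epoch):
    """Return the learning rate after 0.1x decay per passed milestone."""
    return base_lr * 0.1 ** sum(epoch > s for s in step_milestones)

# With step: [50, 100] and base_lr 0.005, epoch 120 has passed both milestones.
print(decayed_lr(0.005, [50, 100], 120))  # 0.005 * 0.01 = 5e-05
```

With `step: []` as in this config, no milestone is ever passed, so the rate stays at `base_lr` for the whole run (apart from any warm-up handled elsewhere).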
14 changes: 0 additions & 14 deletions SL-GCN/data/sign/27_2/gen_train_val.py

This file was deleted.

147 changes: 147 additions & 0 deletions SL-GCN/data_gen/getConnectingPoint.py
@@ -0,0 +1,147 @@
import pickle
import sys
import numpy as np
import pandas as pd
import os
import h5py
sys.path.extend(['../'])

max_body_true = 1
max_frame = 150
num_channels = 2

# These three functions return the index minus 1 because the point positions in the CSV are 1-indexed
def get_mp_keys(points):
tar = np.array(points.mp_pos)-1
return list(tar)

def get_op_keys(points):
tar = np.array(points.op_pos)-1
return list(tar)

def get_wp_keys(points):
tar = np.array(points.wb_pos)-1
return list(tar)

def read_data(path, model_key_getter, config):
data = []
classes = []
videoName = []

if 'AEC' in path:
list_labels_banned = ["ya", "qué?", "qué", "bien", "dos", "ahí", "luego", "yo", "él", "tú","???","NNN"]

if 'PUCP' in path:
list_labels_banned = ["ya", "qué?", "qué", "bien", "dos", "ahí", "luego", "yo", "él", "tú","???","NNN"]
list_labels_banned += ["sí","ella","uno","ese","ah","dijo","llamar"]

if 'WLASL' in path:
list_labels_banned = ['apple','computer','fish','kiss','later','no','orange','pizza','purple','secretary','shirt','sunday','take','water','yellow']


with h5py.File(path, "r") as f:
for index in f.keys():
label = f[index]['label'][...].item().decode('utf-8')

if str(label) in list_labels_banned:
continue

classes.append(label)
videoName.append(f[index]['video_name'][...].item().decode('utf-8'))
data.append(f[index]["data"][...])

print('config : ',config)
points = pd.read_csv(f"points_{config}.csv")

tar = model_key_getter(points)
print('tar', tar)

data = [d[:,:,tar] for d in data]

meaning = {v:k for (k,v) in enumerate(sorted(set(classes)))}

retrive_meaning = {k:v for (k,v) in enumerate(sorted(set(classes)))}

labels = [meaning[label] for label in classes]

print('meaning',meaning)
print('retrive_meaning',retrive_meaning)

return labels, videoName, data, retrive_meaning


def gendata(data_path, out_path, model_key_getter, part='train', config=1):

data=[]
sample_names = []

labels, sample_names, data , retrive_meaning = read_data(data_path, model_key_getter,config)
fp = np.zeros((len(labels), max_frame, config, num_channels, max_body_true), dtype=np.float32)

for i, skel in enumerate(data):

skel = np.array(skel)
skel = np.moveaxis(skel,1,2)
skel = skel # *256

if skel.shape[0] < max_frame:
L = skel.shape[0]

fp[i,:L,:,:,0] = skel

rest = max_frame - L
num = int(np.ceil(rest / L))
pad = np.concatenate([skel for _ in range(num)], 0)[:rest]
fp[i,L:,:,:,0] = pad

else:
L = skel.shape[0]

fp[i,:,:,:,0] = skel[:max_frame,:,:]


with open('{}/{}_label.pkl'.format(out_path, part), 'wb') as f:
pickle.dump((sample_names, labels), f)

fp = np.transpose(fp, [0, 3, 1, 2, 4])
print(fp.shape)
np.save('{}/{}_data_joint.npy'.format(out_path, part), fp)

with open('{}/meaning.pkl'.format(out_path), 'wb') as f:
pickle.dump(retrive_meaning, f)




if __name__ == '__main__':

folderName = '1' # used only to create the folder data/sign/1/
out_folder='../data/sign/'
out_path = os.path.join(out_folder, folderName)

kp_model = 'wholepose' # openpose wholepose mediapipe
dataset = "WLASL" # WLASL PUCP_PSL_DGI156 AEC
numPoints = 29 # number of points used; must be 29 or 71

model_key_getter = {'mediapipe': get_mp_keys,
'openpose': get_op_keys,
'wholepose': get_wp_keys}

if not os.path.exists(out_path):
os.makedirs(out_path)


print('\n',kp_model, dataset,'\n')

part = "train"
print(out_path,'->', part)
data_path = f'../../../../joe/ConnectingPoints/split/{dataset}--{kp_model}-Train.hdf5'
gendata(data_path, out_path, model_key_getter[kp_model], part=part, config=numPoints)


part = "val"
print(out_path,'->', part)
data_path = f'../../../ConnectingPoints/split/{dataset}--{kp_model}-Val.hdf5'

gendata(data_path, out_path, model_key_getter[kp_model], part=part, config=numPoints)
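The padding loop in `gendata()` fills clips shorter than `max_frame` by tiling the clip from its start until the frame axis is full, and truncates longer clips. A self-contained sketch of that scheme (function name and shapes are illustrative, not the repo's exact ones):

```python
import numpy as np

# Sketch of gendata()'s padding: short clips are cycled to fill max_frame,
# long clips are truncated to the first max_frame frames.
def pad_or_truncate(skel, max_frame=150):
    L = skel.shape[0]
    if L >= max_frame:
        return skel[:max_frame]
    out = np.zeros((max_frame,) + skel.shape[1:], dtype=skel.dtype)
    out[:L] = skel
    rest = max_frame - L
    num = int(np.ceil(rest / L))          # how many repeats cover the gap
    out[L:] = np.concatenate([skel] * num, axis=0)[:rest]  # cycle the clip
    return out

clip = np.arange(8, dtype=np.float32).reshape(4, 2, 1)  # 4 frames, 2 points, 1 channel
padded = pad_or_truncate(clip, max_frame=10)
print(padded.shape)  # (10, 2, 1); frames 4-7 repeat frames 0-3
```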
49 changes: 33 additions & 16 deletions SL-GCN/feeders/feeder.py
@@ -7,14 +7,17 @@
sys.path.extend(['../'])
from feeders import tools

flip_index = np.concatenate(([0,2,1,4,3,6,5],[17,18,19,20,21,22,23,24,25,26],[7,8,9,10,11,12,13,14,15,16]), axis=0)
# flip_index for 71 and 29
flip_index = {71:np.concatenate(([0,2,1,4,3,6,5,8,7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],[31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50],[51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70]), axis=0),
51:np.concatenate(([0,2,1,4,3,6,5,8,7,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],[31,32,33,34,35,36,37,38,39,40],[41,42,43,44,45,46,47,48,49,50]), axis=0),
29:np.concatenate(([0,2,1,4,3,6,5,8,7],[9,10,11,12,13,14,15,16,17,18],[19,20,21,22,23,24,25,26,27,28]), axis=0)}

class Feeder(Dataset):
def __init__(self, data_path, label_path,
def __init__(self, data_path, label_path, meaning_path,
random_choose=False, random_shift=False, random_move=False,
window_size=-1, normalization=False, debug=False, use_mmap=True, random_mirror=False, random_mirror_p=0.5, is_vector=False):
"""

"""
:param data_path:
:param label_path:
:param random_choose: If true, randomly choose a portion of the input sequence
@@ -29,6 +32,7 @@ def __init__(self, data_path, label_path,
self.debug = debug
self.data_path = data_path
self.label_path = label_path
self.meaning_path = meaning_path
self.random_choose = random_choose
self.random_shift = random_shift
self.random_move = random_move
@@ -41,7 +45,6 @@ def __init__(self, data_path, label_path,
self.is_vector = is_vector
if normalization:
self.get_mean_map()
print(len(self.label))

def load_data(self):
# data: N C V T M
@@ -63,6 +66,14 @@ def load_data(self):
self.label = self.label[0:100]
self.data = self.data[0:100]
self.sample_name = self.sample_name[0:100]
try:
with open(self.meaning_path) as f:
self.meaning = pickle.load(f)
except:
# for pickle file from python2
with open(self.meaning_path, 'rb') as f:
self.meaning = pickle.load(f, encoding='latin1')


def get_mean_map(self):
data = self.data
@@ -79,38 +90,44 @@ def __iter__(self):
def __getitem__(self, index):
data_numpy = self.data[index]
label = self.label[index]
name = self.sample_name[index]
data_numpy = np.array(data_numpy)

if self.random_choose:
data_numpy = tools.random_choose(data_numpy, self.window_size)

if self.random_mirror:
if random.random() > self.random_mirror_p:
assert data_numpy.shape[2] == 27
data_numpy = data_numpy[:,:,flip_index,:]
#print("dabe before random mirror", data_numpy)
assert data_numpy.shape[2] == 71 or data_numpy.shape[2] == 29 or data_numpy.shape[2] == 51
data_numpy = data_numpy[:,:,flip_index[data_numpy.shape[2]],:]
if self.is_vector:
data_numpy[0,:,:,:] = - data_numpy[0,:,:,:]
data_numpy[0,:,:,:] = 1 - data_numpy[0,:,:,:]
else:
data_numpy[0,:,:,:] = 512 - data_numpy[0,:,:,:]
data_numpy[0,:,:,:] = 1 - data_numpy[0,:,:,:]
#print("dabe after random mirror", data_numpy)

if self.normalization:
# data_numpy = (data_numpy - self.mean_map) / self.std_map
assert data_numpy.shape[0] == 3
assert data_numpy.shape[0] == 2
#print("dabe before norm", data_numpy.shape)
if self.is_vector:
data_numpy[0,:,0,:] = data_numpy[0,:,0,:] - data_numpy[0,:,0,0].mean(axis=0)
data_numpy[1,:,0,:] = data_numpy[1,:,0,:] - data_numpy[1,:,0,0].mean(axis=0)
else:
data_numpy[0,:,:,:] = data_numpy[0,:,:,:] - data_numpy[0,:,0,0].mean(axis=0)
data_numpy[1,:,:,:] = data_numpy[1,:,:,:] - data_numpy[1,:,0,0].mean(axis=0)

#print("dabe after norm", data_numpy)
if self.random_shift:

#print("dabe before shift", data_numpy)
if self.is_vector:
data_numpy[0,:,0,:] += random.random() * 20 - 10.0
data_numpy[1,:,0,:] += random.random() * 20 - 10.0
data_numpy[0,:,0,:] += random.random()/25 # * 20 - 10.0
data_numpy[1,:,0,:] += random.random()/25 # * 20 - 10.0
else:
data_numpy[0,:,:,:] += random.random() * 20 - 10.0
data_numpy[1,:,:,:] += random.random() * 20 - 10.0

data_numpy[0,:,:,:] += random.random()/25 #random.random() * 20 - 10.0
data_numpy[1,:,:,:] += random.random()/25 #random.random() * 20 - 10.0
#print("dabe after shift", data_numpy)

# if self.random_shift:
# data_numpy = tools.random_shift(data_numpy)
@@ -120,7 +137,7 @@ def __getitem__(self, index):
if self.random_move:
data_numpy = tools.random_move(data_numpy)

return data_numpy, label, index
return data_numpy, label, index, name

def top_k(self, score, top_k):
rank = score.argsort()
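The mirror augmentation above reorders joints with `flip_index` (swapping each left/right pair) and reflects the x channel as `x -> 1 - x`, which assumes coordinates normalized to [0, 1]. A standalone sketch for the 29-point layout, with `flip_index_29` copied from the diff (the `mirror` helper itself is illustrative):

```python
import numpy as np

# 29-point flip table from the feeder diff: pairs like 1<->2, 3<->4 swap
# left/right joints; the hand blocks keep their internal order.
flip_index_29 = np.concatenate(
    ([0, 2, 1, 4, 3, 6, 5, 8, 7],
     [9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
     [19, 20, 21, 22, 23, 24, 25, 26, 27, 28]), axis=0)

def mirror(data):                        # data: (C=2, T, V=29, M)
    flipped = data[:, :, flip_index_29, :]   # fancy indexing returns a copy
    flipped[0] = 1 - flipped[0]              # reflect x; y (channel 1) unchanged
    return flipped

sample = np.random.rand(2, 100, 29, 1)
twice = mirror(mirror(sample))
print(np.allclose(twice, sample))  # True: mirroring twice is the identity
```

The swap table is an involution (applying it twice restores the joint order), and `1 - (1 - x) = x`, so double-mirroring reproduces the input exactly.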