paifu analyze demo: infer ranking based on score #22

Open
Ledenel opened this issue Nov 25, 2019 · 4 comments
Labels: algorithm (Fancy algorithm idea), help wanted (Extra attention is needed), plan (Project next step (Milestone))

Comments

@Ledenel (Owner) commented Nov 25, 2019

Something like:
Given the current score, the game round, whether the player is oya, and the seat order, output ranking probabilities like:

top: 6%
second: 30%
third: 60%
last: 4%

This means that, in the given situation, you will most probably finish third, are almost safe from finishing last, and have a substantial chance of beating the second-place player. A Tenhou pt estimation is also possible.
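A minimal sketch of what such an estimator's output could look like; the class and function names here are hypothetical illustrations, not part of the codebase:

```python
from dataclasses import dataclass

@dataclass
class RankEstimate:
    """Probability of finishing at each rank for one player."""
    top: float
    second: float
    third: float
    last: float

def describe(est: RankEstimate) -> str:
    # Render in the "top: 6%, second: 30%, ..." format shown above.
    return (f"top: {est.top:.0%}, second: {est.second:.0%}, "
            f"third: {est.third:.0%}, last: {est.last:.0%}")

example = RankEstimate(top=0.06, second=0.30, third=0.60, last=0.04)
print(describe(example))  # top: 6%, second: 30%, third: 60%, last: 4%
```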

@Ledenel added the help wanted, algorithm, and plan labels on Nov 25, 2019
@Ledenel (Owner, Author) commented Nov 25, 2019

A naive implementation:
count all games matching the above condition as filtered_game;
then count the games in this situation where the player actually finished at rank 1, 2, 3, or 4, as filtered_rank_1, ...

Then output filtered_rank_1 / filtered_game as the top rate; the remaining ranks are computed the same way.
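The naive counting estimator above can be sketched in plain Python; the record fields (`final_rank`, the filter keys) are assumptions about the data schema:

```python
from collections import Counter
from typing import Iterable

def rank_frequencies(games: Iterable[dict], **condition) -> dict:
    """Filter games by the given condition (score, round, is_oya, seat, ...),
    then count how often the player actually finished at each rank."""
    filtered = [g for g in games
                if all(g.get(k) == v for k, v in condition.items())]
    filtered_game = len(filtered)
    if filtered_game == 0:
        return {}
    ranks = Counter(g["final_rank"] for g in filtered)
    # filtered_rank_k / filtered_game for each rank k.
    return {k: ranks[k] / filtered_game for k in (1, 2, 3, 4)}

games = [
    {"is_oya": True, "final_rank": 1},
    {"is_oya": True, "final_rank": 3},
    {"is_oya": False, "final_rank": 2},
]
print(rank_frequencies(games, is_oya=True))  # {1: 0.5, 2: 0.0, 3: 0.5, 4: 0.0}
```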

@Ledenel (Owner, Author) commented Nov 26, 2019

To measure how confident we are in estimating the probability by the filtered_rank_k / filtered_game frequency, the total count filtered_game also needs to be shown.

When filtered_game is too small, the estimate is untrustworthy, and we should loosen the constraints so that more games are taken into account; that is, a fallback.

Here are some possible solutions:

  • Try deleting filters in this order: seat order, then is oya.
  • Add binning for the current score, e.g. Bayesian binning, or binning by possible score changes.
  • Use decision trees or random forests to find proper score/game-round cut points (given these situations, predicting player rank 1, 2, 3, or 4 is a 4-class classification problem).
  • Try other machine learning algorithms aimed at multi-class classification.

A data structure supporting fast fallback is also needed.
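The fallback idea above can be sketched as progressively dropping constraints until the filtered sample is large enough; the field names, relaxation order, and threshold are illustrative assumptions:

```python
def estimate_with_fallback(games, condition, min_games=1000):
    """Try the full filter first; if too few games match, progressively
    drop constraints (seat order first, then is_oya, ...) and retry."""
    # Order in which constraints are relaxed when the sample is too small.
    relax_order = ["seat_order", "is_oya", "game_round"]
    cond = dict(condition)
    while True:
        matched = [g for g in games
                   if all(g.get(k) == v for k, v in cond.items())]
        if len(matched) >= min_games or not cond:
            return matched, dict(cond)
        # Drop the next relaxable key; if none applies, drop any remaining key.
        for key in relax_order:
            if key in cond:
                del cond[key]
                break
        else:
            cond.popitem()

games = [{"seat_order": i % 4, "is_oya": i % 4 == 0} for i in range(40)]
matched, used = estimate_with_fallback(
    games, {"seat_order": 0, "is_oya": True}, min_games=15)
print(len(matched), used)  # 40 {} -- fell back to no constraints at all
```

A tree keyed by the relaxation order (seat order at the leaves, broader conditions toward the root) would let each fallback step reuse counts already aggregated at the parent node instead of rescanning all games.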

@canuse (Contributor) commented Dec 2, 2019

Using SVM as a baseline classifier.

As you well know, SVM (SVC) is a widely used classifier that is robust and interpretable, so it may serve as a baseline for this problem.

I've trained 8 models with different train_set_size values (50000, 100000), different multi-class strategies (OneVsOne and OneVsRest), and different kernels (rbf, linear), and tested them on five test sets, each containing 300000 records. All sets were picked randomly and without overlap from the Tenhou records, which total 57755880 items. 19 features are used: ['player_pos', 'game_round', 'game_sub_round', 'is_oya', 'self_score', 'score0', 'score1', 'score2', 'score3', 'your_rate', 'rate1', 'rate2', 'rate3', 'rate4', 'your_dan', 'dan1', 'dan2', 'dan3', 'dan4'], and the target is the final position (1, 2, 3 or 4).

Results are listed below: "accurate" means predict == truth, "delta" means abs(predict - truth) == 1, and "wrong" covers the remaining cases. The RBF kernel is both faster and more precise than the linear kernel, with a higher accurate_rate and a lower wrong_rate, though the accuracy of every configuration is below 45%. A larger train set slightly improves performance. Considering time consumption and performance, the RBF kernel with OvO and train_set_size = 100000 could be a reasonable baseline.

| train_set_size | kernel | multi_classifier | calculate_time | accurate_rate (avg) | delta_rate (avg) | wrong_rate (avg) |
|---|---|---|---|---|---|---|
| 50000 | linear | OneVsOne | ~1h | 43.702 | 38.152 | 18.2 |
| 50000 | linear | OneVsRest | ~1h | 43.674 | 37.768 | 18.556 |
| 50000 | rbf | OneVsOne | ~30min | 43.532 | 41.654 | 14.812 |
| 50000 | rbf | OneVsRest | ~20min | 43.532 | 41.654 | 14.812 |
| 100000 | linear | OneVsOne | ~2.5h | 44.024 | 37.82 | 18.156 |
| 100000 | linear | OneVsRest | ~2.5h | 44.024 | 37.82 | 18.156 |
| 100000 | rbf | OneVsOne | ~1h | 44.028 | 41.086 | 14.886 |
| 100000 | rbf | OneVsRest | ~1h | 44.028 | 41.086 | 14.886 |

And the raw data:

features used:
'player_pos', 'game_round', 'game_sub_round', 'is_oya','self_score', 'score0', 'score1', 'score2', 'score3', 'your_rate','rate1', 'rate2', 'rate3', 'rate4', 'your_dan', 'dan1', 'dan2', 'dan3','dan4'

ovr,linear,train_set_size=50000,test_set_size=30000
Train set: accurate 43.36%, delta=1 37.63%, wrong 19.01%
Test set1: accurate 43.64%, delta=1 37.80%, wrong 18.56%
Test set2: accurate 43.81%, delta=1 37.72%, wrong 18.47%
Test set3: accurate 43.66%, delta=1 37.84%, wrong 18.49%
Test set4: accurate 43.70%, delta=1 37.75%, wrong 18.55%
Test set5: accurate 43.56%, delta=1 37.73%, wrong 18.71%

ovr,rbf,train_set_size=50000,test_set_size=30000
Train set: accurate 45.22%, delta=1 40.85%, wrong 13.93%
Test set1: accurate 43.56%, delta=1 41.66%, wrong 14.78%
Test set2: accurate 43.57%, delta=1 41.66%, wrong 14.77%
Test set3: accurate 43.61%, delta=1 41.61%, wrong 14.78%
Test set4: accurate 43.50%, delta=1 41.64%, wrong 14.85%
Test set5: accurate 43.42%, delta=1 41.70%, wrong 14.88%

ovo,linear,train_set_size=50000,test_set_size=30000
Train set: accurate 45.36%, delta=1 37.45%, wrong 17.2%
Test set1: accurate 43.65%, delta=1 38.14%, wrong 18.21%
Test set2: accurate 43.76%, delta=1 38.09%, wrong 18.15%
Test set3: accurate 43.66%, delta=1 38.14%, wrong 18.21%
Test set4: accurate 43.78%, delta=1 38.04%, wrong 18.18%
Test set5: accurate 43.66%, delta=1 38.09%, wrong 18.25%

ovo,rbf,train_set_size=50000,test_set_size=30000
Train set: accurate 45.23%, delta=1 40.83%, wrong 13.94%
Test set1: accurate 43.56%, delta=1 41.66%, wrong 14.78%
Test set2: accurate 43.57%, delta=1 41.66%, wrong 14.77%
Test set3: accurate 43.60%, delta=1 41.61%, wrong 14.78%
Test set4: accurate 43.50%, delta=1 41.65%, wrong 14.85%
Test set5: accurate 43.43%, delta=1 41.69%, wrong 14.88%


100000_ovo_svm_linear
Test set1: accurate 43.61%, delta=1 38.02%, wrong 18.37%
Test set2: accurate 43.79%, delta=1 37.90%, wrong 18.31%
Test set3: accurate 43.64%, delta=1 38.01%, wrong 18.35%
Test set4: accurate 43.71%, delta=1 37.91%, wrong 18.38%
Test set5: accurate 43.66%, delta=1 37.87%, wrong 18.47%
Train set: accurate 45.32%, delta=1 37.41%, wrong 17.27%

100000_ovr_svm_linear
Test set1: accurate 43.61%, delta=1 38.02%, wrong 18.37%
Test set2: accurate 43.79%, delta=1 37.9%, wrong 18.31%
Test set3: accurate 43.64%, delta=1 38.01%, wrong 18.35%
Test set4: accurate 43.71%, delta=1 37.91%, wrong 18.38%
Test set5: accurate 43.66%, delta=1 37.87%, wrong 18.47%
Train set: accurate 45.32%, delta=1 37.41%, wrong 17.27%

100000_ovo_svm_rbf
Test set1: accurate 43.76%, delta=1 41.25%, wrong 14.99%
Test set2: accurate 43.7%, delta=1 41.33%, wrong 14.97%
Test set3: accurate 43.78%, delta=1 41.21%, wrong 15.01%
Test set4: accurate 43.67%, delta=1 41.25%, wrong 15.08%
Test set5: accurate 43.61%, delta=1 41.27%, wrong 15.12%
Train set: accurate 45.38%, delta=1 40.37%, wrong 14.25%

100000_ovr_svm_rbf
Test set1: accurate 43.76%, delta=1 41.25%, wrong 14.99%
Test set2: accurate 43.7%, delta=1 41.33%, wrong 14.97%
Test set3: accurate 43.78%, delta=1 41.21%, wrong 15.01%
Test set4: accurate 43.67%, delta=1 41.25%, wrong 15.08%
Test set5: accurate 43.61%, delta=1 41.27%, wrong 15.12%
Train set: accurate 45.38%, delta=1 40.37%, wrong 14.25%

Since SVM is not well suited to multi-class classification and cannot give probabilities directly, I then tried RVM, which uses Bayesian inference and can address both problems. However, the training time and resource requirements of RVM are much larger, so I was only able to train one model with a train_set_size of 5000, and its accuracy is only 25%.
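For reference, the RBF/OvO configuration discussed above roughly matches scikit-learn's `SVC` defaults (it uses one-vs-one internally for multi-class). A sketch on toy data; the synthetic features and labels here merely stand in for the real 19-feature records:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for the records: 4 continuous features per sample.
X = rng.normal(size=(400, 4))
# Synthetic target: rank 1-4 loosely tied to the first feature.
y = np.digitize(X[:, 0], [-1.0, 0.0, 1.0]) + 1

# SVC is one-vs-one for multi-class by default; probability=True adds
# Platt scaling so predict_proba becomes available (extra training cost).
clf = SVC(kernel="rbf", probability=True, random_state=0)
clf.fit(X, y)

proba = clf.predict_proba(X[:1])
print(proba.shape)  # one row, one probability per rank class
```

With `probability=True` the SVM can emit per-rank probabilities directly, at the price of an internal cross-validated calibration step during training.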

@Ledenel (Owner, Author) commented Dec 2, 2019

In my opinion, decision-tree-based algorithms may be the most reliable basic solution in this situation. They can give precise frequencies and confidence from counts on the leaves, and they support multi-class classification naturally.
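This leaf-count behavior is what, for example, scikit-learn's `DecisionTreeClassifier.predict_proba` exposes: the per-class frequency of the training samples that landed in the reached leaf. A sketch on toy (score, round) pairs, not the real records:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy situations: (self_score, game_round) -> final rank.
X = [[40000, 1], [38000, 2], [25000, 5], [24000, 6],
     [18000, 7], [17000, 8], [9000, 8], [8000, 8]]
y = [1, 1, 2, 2, 3, 3, 4, 4]

# A shallow tree keeps several samples per leaf; predict_proba then
# returns the per-leaf class frequencies (counts / leaf size).
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(tree.predict_proba([[39000, 1]])[0])
```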

Considering only the wrong rate, SVM-based algorithms are also reasonable. For probability inference from a trained model, we could work around the limitation by taking the raw output of the decision function (rather than the signed output), assuming the outputs form a Gaussian distribution (using the sample variance S as an approximation), normalizing to the standard Gaussian distribution, and then using the cumulative distribution function to obtain the probability of being in the negative class.
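The workaround sketched above (treat raw decision-function outputs as roughly Gaussian, standardize, read a probability off the normal CDF) in plain Python; this is a heuristic calibration under a Gaussian assumption, not an established method like Platt scaling:

```python
import math
from statistics import mean, stdev

def standard_normal_cdf(z: float) -> float:
    # Phi(z) expressed via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def decision_to_probability(raw: float, sample_outputs: list) -> float:
    """Assume decision-function outputs are ~Gaussian; standardize with
    the sample mean and sample standard deviation S, then read the CDF
    as the probability of the negative class."""
    mu = mean(sample_outputs)
    s = stdev(sample_outputs)  # sample standard deviation S
    z = (raw - mu) / s
    return standard_normal_cdf(z)

outputs = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(decision_to_probability(0.0, outputs))  # 0.5: output at the sample mean
```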

By the way, since player rank (1, 2, 3, 4) has a strong ordinal meaning, it is also possible to use a regression method, which may be more directly interpretable.
