Reputation from the microeconomic POV #45
Most of the work is done. There are a few TODOs and the final conclusions are missing.
All of the planned content is here. Some minor fixes will come.
Some general comments
- A systematic approach is important, good for you to take it on, keep up the good work!
- It is very important to provide good intuitions to accompany the formalism.
- The paper talks about honesty, but to be able to do that, one should state explicitly the assumptions about the protocol, as well as analyse which events have uniquely-attributable fault.
- The paper might benefit from a clearer purpose & scope statement. For example, it seems that efficiency issues are disregarded, which is a controversial choice given their importance from the requestor POV, especially in the light of the overprovisioning issues experienced on the network.
- The abstract promises to answer the question 'What do we exactly mean by the "reputation"?'; alas, the answer doesn't seem to be found in the paper, at least not in the Conclusions section.
\item The Expected Utility Hypothesis\footnote{\href{https://en.wikipedia.org/wiki/Expected\_utility\_hypothesis}{https://en.wikipedia.org/wiki/Expected\_utility\_hypothesis}}: every agent tries to maximize their expected utility.
\item Every agent can costlessly access and analyze all of the publicly available information about Golem. Thanks to this assumption we can remove $I$ from the equations - we no longer
care about "what agent knows", but only about "what really is". This is a major simplification that might not be a good approximation of the "real world" Golem market, but we
I am not convinced by this argument. The mere fact that there is no cost of accessing the information does not mean the information itself has no value. In fact asserting that would undermine the whole point of a reputation system.
Good point. What I meant was:
- The general information about Golem is free, e.g. things like "how efficient is yagna", "what are the bugs/vulnerabilities in the provided software", "how many providers there are and how much work they do", "what is the average price variability".
- The information regarding particular agents is not free (and not really available; increasing the availability is a part of the reputation system).
But that's what I meant, not what I wrote :)
I will fix this. Thx.
Fixed in ac044d4 (a footnote)
I still think this assumption is suspicious on fundamental grounds as well as counterproductive from the pragmatic POV. For example it seems to exclude powerful arguments along the lines of "Golem value with information is bigger than without it, thus the value of the information is such and such."
I don't understand.
Let's say the information is "what is the chance the given runtime will not start because of the yagna bug". What do we gain by analyzing the value of this information, or by considering an agent who has an incorrect estimation of it?
}. They equal $V_P(a)$/$V_R(a)$ if neither side breached the agreement $a$.
\item $V_{AL}(a)$ is the value lost by agent $A$ because of the other side breaching the agreement $a$.
\item $V_{AG}(a)$ is the value gained by agent $A$ when they break the agreement $a$.
\item $V_{PL}$, $V_{RL}$, $V_{PG}$, $V_{RG}$ are $V_{AL}$/$V_{AG}$ from the POV of provider/requestor.
In which cases is V_PL != V_RG? I realise you offer some examples below, but a more refined analysis might be worthwhile.
I will add some explanation/analysis.
Added a minor comment in ac044d4.
This might need some bigger refactoring (e.g. so that there won't be a forward reference to a footnote), but I'll wait with this until we have agreement about the contents.
I agree that it needs more love, especially that your whole reasoning crucially depends on V_L being positive. However, it might be worth considering what really happens if we drop this assumption and pursue the avenue I suggest in #45 (comment) (value of information) instead.
The avenue might be interesting, I'll think about this.
\end{equation}
An important note here is that both $\sum_{a \in all\_agreements}(V_{PL}(a) - V_{RG}(a))$ and $\sum_{a \in all\_agreements}(V_{RL}(a) - V_{PG}(a))$ are positive:
when someone breaks the agreement, the harm done to the victim is usually greater than the offender's gain\footnote{
This might be, but is estimating numerical values here feasible?
I'd say it is possible with additional far-reaching assumptions about utility functions.
You mention "additional far-reaching assumptions". Would it then be fair to conclude that it is hard to estimate V_L, and hence potential improvements to V_G from reputation?
Again, I don't understand, sorry :(
To estimate any numerical value we need assumptions about utility functions. This is true about V_G even without any reputation-related component. So I think:
- if we don't assume anything about the utility functions, there's no numerical value we can calculate
- if we make enough assumptions, they will let us estimate V_L using some reputation-related metrics (I'm not certain about this statement, but I believe it is true)
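The positivity claim under discussion (the victim's loss usually exceeds the offender's gain) can be illustrated with a toy calculation. All numbers below are invented for illustration only; they are not taken from the paper:

```python
# Toy illustration: across agreements, the victim's loss V_PL(a) usually
# exceeds the offender's gain V_RG(a), so the aggregate difference
# sum_a (V_PL(a) - V_RG(a)) comes out positive.
agreements = [
    # (V_PL, V_RG): provider's loss vs. requestor's gain when the
    # requestor breaches, e.g. unpaid work: the provider loses time and
    # electricity, the requestor gains only the unpaid amount.
    (4.0, 1.0),
    (2.5, 2.0),
    (3.0, 0.5),
]

total = sum(v_pl - v_rg for v_pl, v_rg in agreements)
print(total)  # 6.0, positive as the claim requires
```

The symmetric sum over $(V_{RL}(a) - V_{PG}(a))$ would be illustrated the same way.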
\begin{enumerate}
\item Calculate the expected value of the decision \textbf{not} to sign the agreement, $E_{A_1}(\neg a)$
\item Calculate the expected value of the decision to sign the agreement $E_{A_1}(a) = V_{A_1N}(a) - E(V_{A_1L}(a))$
\item Sign the agreement if $E_{A_1}(a) > E_{A_1}(\neg a)$
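The quoted decision rule can be sketched in a few lines. The function name and the scalar inputs are my own illustrative choices; in practice these quantities are expectations over utilities, not known constants:

```python
def should_sign(expected_nominal_value: float,
                expected_breach_loss: float,
                outside_option: float) -> bool:
    """Sign the agreement a iff
    E(a) = E[V_N(a)] - E[V_L(a)] > E(not a)."""
    expected_value_of_signing = expected_nominal_value - expected_breach_loss
    return expected_value_of_signing > outside_option

# Example: nominal value 10, expected loss from a possible breach 3,
# expected value of walking away 5 -> sign, because 10 - 3 = 7 > 5.
print(should_sign(10.0, 3.0, 5.0))  # True
```

As the discussion below points out, the hard part is obtaining these three numbers, not comparing them.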
The elephant in the room seems to be how to calculate these values.
For sure!
There are two directions from here:
- Design a specific reputation system with additional assumptions that make it possible to calculate the values
- Design a specific reputation system that can be proven to be correct without calculating specific values
I'm not sure if the second path is really possible, but I'd say it might be.
Do you think such discussion should be a part of this document?
I daresay, without it the practical applications of this paper seem limited.
But maybe I misunderstood the purpose of this exercise, and practical applications are not what you have in mind.
From my POV, the purpose of this paper is to:
- Create a shared definition of the "reputation system"
- Agree on a general path towards the reputation system (p. 3.4)
- Agree on the elements of the reputation system. E.g. if I understand correctly, you assume something like a "fault attribution mechanism" is necessary. I think it is not. We must agree on this to effectively continue work on reputation.
I expect we'll find a few more similar disagreements and hopefully we'll resolve them :)
For me, this is practical.
But for sure there is nothing we can go and implement/calculate using this document only.
From my POV, the purpose of this paper is to:
* Create a shared definition of the "reputation system"
* Agree on a general path towards the reputation system (p. 3.4)
* Agree on the elements of the reputation system. E.g. if I understand correctly, you assume something like a "fault attribution mechanism" is necessary. I think it is not. We must agree on this to effectively continue work on reputation.
Forgive me for saying this, but so far none of these goals of the paper seem to have been reached. But indeed it might be worthwhile to add this purpose statement at the start of the paper (cf my point 4 in #45 (review))
I expect we'll find few more similar disagreements and hopefully we'll resolve them :)
For me, this is practical. But for sure there is nothing we can go and implement/calculate using this document only.
I did not mean anything that you can directly implement, but rather some argument that it is at all feasible.
How about working through some examples? I think it would improve readability as well (cf my point 2 in #45 (review))
I disagree, or maybe I just don't understand. The only assumption about the protocol is: there are agreements, known and signed by both sides. With it, we can very well define honesty as "doing what I agreed to do" - and this is (I think?) enough to talk meaningfully about honesty. On the other hand, for sure this is not enough to calculate the honesty factor; calculating any values requires many more assumptions and analysis.
If the agreement says I sell access to hardware, and later the requestor can access only a part of it - because it is also used by someone else, e.g. another requestor - I am dishonest, because I'm not fulfilling my part of the agreement. So every statement about provider-side honesty is a statement about overprovisioning as well.
Conclusion 2.3 (p.7) is exactly an answer to this question. It's a very general answer, but the abstract also explicitly promises that no specific solution will be proposed :) Maybe we should meet & discuss this?
You are right from an ethical point of view, but this is not supposed to be an exercise in ethics, is it? At some point you even write yourself "agent's intentions don't matter here" (even though it is slightly at odds with the term "honesty"). But what we are dealing with is a distributed protocol: we have a sequence (or more often just a set) of events and messages, and the different parties may not even see the same set. And to quote Vitalik, "Penalizing uniquely attributable faults is easy. Penalizing non-uniquely-attributable faults is hard." Even worse, as I mentioned at the Tech Council meeting, it is not really clear what the agreements in Yagna really say. For example: a GolemLife task with some reactions to simulate with a given deadline is requested; a provider runs some computation but does not return any results before the deadline. Was the agreement fulfilled? Does it make a difference whether the provider really tried to compute, but failed, or just pretended to try? If so, can we observe the difference? Does it make a difference whether the requestor set the deadline in a reasonable or a predatory manner? Can we observe the difference? And this is just a small example.
I am not going to argue with this :)
What I meant is that some aspects may not be specified in the agreement, but be important from the requestor POV. For example, currently the agreement (or at least the offer) specifies the number of cores/threads, but not their speed. So overprovisioning in the sense of running two tasks on the same core might be breaking the agreement, but running the task on a Pentium Pro (which was a really good processor in its time, BTW) would not be. Still, the requestor might not be happy. My question is: are we taking this into account when talking about reputation?
Let us see
Does it answer the question 'What do we exactly mean by the "reputation"?'? If so, this raises several further questions:
Moreover: "...to make honest strategies more profitable than dishonest strategies". Which ones? All dishonest strategies? Some dishonest strategies? If the system fails to make (some) strategies less profitable than honest strategies, do you still consider it a reputation system?
Definitely!
\item Our new agents require honesty, so there is a reason to be more honest than the average.
\end{itemize}
The important thing to note here is that - as both sides change their strategies - the new balance should be preserved even after we remove the new agents from the market.\footnote{
I don't think this claim is sufficiently substantiated. In fact, the opposite may be the case: if there is plenty of high quality requestors, providers can afford to refuse to deal with lower-quality ones. OTOH when there are few high-quality requestors, insisting on high quality might starve them.
I need to think about this.
I have a feeling that this is sort-of-crucial for the reputation system, but I agree this claim requires stronger support.
The agent's core decision-making process is called a "strategy". The better an agent fulfills their part of the agreements, the more "honest" the strategy is.
So, in other words: when agent $A$ signs an agreement $a$ with an agent using a dishonest strategy, the expected value of $V_{AL}(a)$ is high.
When the other side is fully honest, $V_{AL}(a) = 0$.
Note that the agent's intentions don't matter here - there's no difference if an agent breaks an agreement on purpose or accidentally.
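One way to read the quoted definitions quantitatively (a toy sketch; collapsing a strategy's honesty into a single breach probability is my own simplifying assumption, not something the paper states):

```python
def expected_breach_loss(p_breach: float, loss_if_breach: float) -> float:
    """E[V_AL(a)] for agent A: a dishonest counterparty (high p_breach)
    means a high expected loss; a fully honest one (p_breach = 0) means
    V_AL(a) = 0. Intent does not appear anywhere: a flaky machine and a
    deliberate cheat with the same p_breach look identical here."""
    return p_breach * loss_if_breach

print(expected_breach_loss(0.0, 100.0))   # 0.0 - fully honest counterparty
print(expected_breach_loss(0.25, 100.0))  # 25.0
```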
I have some doubts here.
First of all, if an agent breaks an agreement accidentally (e.g. due to a hardware/software/network fault), this can hardly be considered their "strategy".
(By the way this also shows the weakness of your "strategy" to choose such loaded terms as "honesty/dishonesty", but I digress).
Secondly, I think there is a difference with regard to the "Pascal's mugging" you mention - a machine that randomly breaks with some probability is very different from a cunning crook there.
What is true though, is that an outside observer may often not be able to distinguish between the two by observing the agent behaviour.
First of all, if an agent breaks an agreement accidentally (e.g. due to a hardware/software/network fault), this can hardly be considered their "strategy".
I disagree.
Agent makes a decision about quality of the hardware/software/network etc they run the provider agent on. E.g.
- they might decide to run an expensive provider on high quality hardware in a rented high quality server room
- they might decide to run a cheap provider on an old laptop in their basement
To me, this is a part of the strategy.
(By the way this also shows the weakness of your "strategy" to choose such loaded terms as "honesty/dishonesty", but I digress).
Well, I know this is a digression, but still:
- If I know there is a certain risk my computer will fail
- Nevertheless, I rent it to agents who expect the reliability
then this is sort-of-dishonest? But I don't think we have an important disagreement here, I won't be defending the term "honesty". I'm pretty bad at naming things :)
Secondly, I think there is a difference with regard to the "Pascal's mugging" you mention - a machine that randomly breaks with some probability is very different from a cunning crook there.
Yes, that's true, but for me it is a quantitative difference: a machine that randomly breaks is a "low quality, slightly dishonest strategy", a cunning crook is a "(possibly) high quality, very dishonest strategy".
What is true though, is that an outside observer may often not be able to distinguish between the two by observing the agent behaviour.
Yes. That's exactly the reason why I want to keep them together, as a single concept.
(I could imagine corner cases when even the provider doesn't know if they are acting "fair", e.g. AFAIR we had providers who had a cron job that purged yagna directories and this sometimes harmed requestors, but the providers didn't know this).
So it seems we just have to agree to disagree ;)
I.e. $V_A(a) = X$ means "when agent $A$ takes part in the agreement $a$, their happiness changes as if they were given $X$ money".}, i.e.:
\begin{itemize}
\item For the provider it is the utility of the money received decreased by the utility cost of the hardware/electricity/etc.
\item For the requestor it is the utility of the resources obtained decreased by the utility cost of the money spent
This is the ex post value (i.e. evaluated after the agreement is completed), right? Perhaps it might be worthwhile to state it explicitly.
Yes, this is ex post, I will add an explicit statement.
Fixed in ac044d4
\item For the requestor it is the utility of the resources obtained decreased by the utility cost of the money spent
\end{itemize}
\item $V_P(a)$/$V_R(a)$ are agreement values from the POV of (respectively) provider/requestor.
\item $V_{PN}(a)$/$V_{RN}(a)$ are nominal (i.e. negotiated) values of the agreement\footnote{
Here it is even more unclear whether you mean ex ante or ex post values. If the former, then what is V_PN if it stipulates pay per use? If the latter, can the agent even estimate V_PN when deciding whether to sign the agreement?
Here it is even more unclear whether you mean ex ante or ex post values.
The latter, ex post.
can the agent even estimate V_PN when deciding whether to sign the agreement?
I don't see why not? E.g. they can have a history of agreements and estimate V_PN as "average V_PN for all agreements similar enough". Or just have some predictions/estimations of the relevant parameters (e.g. agreement length).
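The "average V_PN over similar enough agreements" idea from the comment above could look roughly like this. The similarity criterion and the `duration`/`v_pn` fields are illustrative assumptions of mine, not something the paper or Yagna specifies:

```python
from statistics import mean

def similar(past: dict, prospective: dict, tol: float = 0.25) -> bool:
    """Toy similarity: agreement durations within `tol` relative difference."""
    d1, d2 = past["duration"], prospective["duration"]
    return abs(d1 - d2) <= tol * max(d1, d2)

def estimate_v_pn(history: list, prospective: dict) -> float:
    """Estimate the nominal value V_PN of a prospective agreement as the
    average V_PN of past agreements that are similar enough."""
    values = [a["v_pn"] for a in history if similar(a, prospective)]
    if not values:
        raise ValueError("no similar agreements in history")
    return mean(values)

history = [
    {"duration": 10, "v_pn": 5.0},
    {"duration": 11, "v_pn": 6.0},
    {"duration": 50, "v_pn": 30.0},  # too different, ignored
]
print(estimate_v_pn(history, {"duration": 10}))  # 5.5
```

The choice of similarity measure is of course doing all the work here, which is arguably the reviewer's point.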
I've added a general note about all $V*$s being ex post in ac044d4
can the agent even estimate V_PN when deciding whether to sign the agreement?
I don't see why not? E.g. they can have a history of agreements and estimate V_PN as "average V_PN for all agreements similar enough". Or just have some predictions/estimations of the relevant parameters (e.g. agreement length).
It seems substantiating this claim may require a lot of work at the very least (and may fail in the end).
On the other hand your argument here seems to contradict your initial assumption "we can remove $I$ from the equations - we no longer care about what agent knows".
On the other hand your argument here seems to contradict your initial assumption "we can remove $I$ from the equations - we no longer care about what agent knows"
The assumption is about (quote, line 64) publicly available information only.
Removing all of the agent information from the equations is in obvious contradiction not only with this argument, but also with e.g. Conclusion 2.3 that reputation system is based on making new information available.
I don't think a good reputation system requires any deterministic "fault attribution". Consider a stupid example. I give a 2-star review of a restaurant on Google saying: "the food was late and cold". The restaurant owner replies "the food was late and cold because you gave us the wrong address". The truth might never be known in this particular case, but nevertheless the reputation system works: a restaurant with higher average ratings will more likely deliver food on time. This works only as long as agents can't profit from getting into disputes on purpose. A good reputation system should enforce this, and "fault attribution" is just one way to achieve this, but (I think) there are others as well.
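The restaurant example can be turned into a tiny simulation: no individual dispute is ever adjudicated, yet average ratings still separate reliable from unreliable restaurants. The probabilities, star values and counts below are invented for illustration:

```python
import random

def average_rating(p_on_time: float, n_orders: int, rng: random.Random) -> float:
    """No fault attribution: each order simply yields 5 stars when the food
    arrives on time and 2 stars when it does not."""
    ratings = [5 if rng.random() < p_on_time else 2 for _ in range(n_orders)]
    return sum(ratings) / len(ratings)

rng = random.Random(0)
reliable = average_rating(0.9, 1000, rng)    # on time 90% of the time
unreliable = average_rating(0.5, 1000, rng)  # on time 50% of the time
print(reliable > unreliable)  # True: the aggregate signal survives noisy reviews
```

Individual ratings are noisy and possibly unfair, but in aggregate they track the underlying reliability, which is the point of the comment above.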
As said above: I hope we won't have to deal with such matters. I'm not sure if my POV here is clear. An example of the reputation system that might work without any explicit fault attribution is described in #47.
This is a very good point. I am inclined to say it doesn't really matter if one side broke the "explicitly stated" part of the agreement, or only the "implicit assumption of the other side" - i.e. that from the reputation system POV it's all the same.
I think we have some major misunderstanding here. For me, anything that can be described in terms of Conclusion 2.3 is a reputation system. There could be a billion different possible reputation systems and Conclusion 2.3 doesn't specify any particular one. So e.g.
Any information, that will serve the purpose, i.e. will get us closer to the goal from Conclusion 2.1 by means described in Conclusion 2.3. Do you think I should rephrase this somehow, so that it will be obvious Conclusion 2.3 is about a general class of solutions, not about any particular one?
Reputation system should make some "reasonably" honest strategy better than the best dishonest strategy available.
So long as you don't mention "honesty".
I would not go as far as calling this example stupid, but I can agree this is not very wise ;)
So let's define a good reputation system then, shall we?
Alas, the reality is that these are the matters we have to deal with, whether we like it or not.
I don't understand this comment.
My "not very wise" example is about a reputation system that works and doesn't deal with anything similar. So OK, maybe I'm wrong, but as for now I can't see why we would "have to deal with" this.
but elsewhere you say
which to me undermines the usefulness of the notion of "honesty" here. Moreover, if the actual agreement content is outweighed by "implicit assumptions", we can disregard the agreements altogether.
Yep, that's true.
Saying someone is dishonest is attributing fault, don't you think?
Saying someone is dishonest (in the discussed reputation model) is saying "according to the available honesty metric, there were some cases where they were somehow faulty". But we neither need to know any particular cases, nor any specific fault type. This is very far from pointing to any specific situation and saying "here agent X did something wrong".
My main problem with treating Conclusion 2.3 as a definition of reputation (or perhaps more with your insisting that it can be treated as said definition) is that it is paradoxically both too broad and too specific. It is too broad, because it stipulates that "Reputation system is an[y] attempt to solve the problem [X]". It is too specific, because it insists on using the "dishonesty index", hence a system that does not use it cannot be called a "reputation system" in the view of this "definition". So I would suggest that we admit that this is not a definition of a reputation system (much less of "reputation") and offer a real definition. I am very well aware that formally defining reputation is not easy; in fact I have my doubts whether it is indeed necessary and expedient (if at all feasible).
Could you give an example that matches this definition and should (in your opinion) not be named a reputation system? Is it about some missing word like "reasonable"? Note that Conclusion 2.3 says that "Reputation system is an[y] attempt to solve the problem [X] in the following way", and this way is crucial here.
Could you give an example of a system that should (in your opinion) be named a "reputation system", but is excluded by this definition?
As for now, I'm definitely not ready to admit it - I've asked for some more details in the previous comment.
Maybe. Maybe such a document is not useful at all and we'll just decide to close this PR instead of improving it.
Additional implicit assumption
OK, @mbenke I think I understand what lies behind our disagreements (at least, behind an important part of them). I do, in fact, make an implicit assumption (in the discussed text and in many comments here) that the agreement contains all the details that might be important from the POV of either side. Without this assumption:
In other words, I assume that after the agreement the agent can either say "agreement was fulfilled, I got what I wanted, the other side was honest & I want to trade with them in the future" or "agreement was (partially?) broken". Without the assumption, there is also "agreement was fulfilled, but nevertheless this was quite worthless for me". Do you agree with this summary?
Where to go from here?
This is - at least from my POV - a quite interesting question: what exactly is the Golem agreement? Do we expect/assume it is detailed enough? My intuition is that - as we trade goods that are quite easy to define - we should go in this direction. Either way, I think the discussed document should be altered in one of two ways:
I have no strong opinion about which way is better.
Other topics
This has - from my POV - nothing to do with certain other disagreements, namely:
For a start, I would propose the following:
We should then go on to specify how reputation systems may be described and evaluated. For example, a reputation system can be described in terms of (cf [Hoffman+2009])
I don't know if adding this assumption fixes everything, but I do agree that things are different with and without this assumption.
One more thing to clear up is whether the statement "agreement was broken" is meant to be objective (it can be determined looking at the facts) or just subjective (the opinion of an agent). If it is the former, we enter the discussion about fault attribution. If it is the latter, do both sides really have the same agreement? (i.e. is the meaning of the agreement the same for both parties)
The latter. If an agent believes the agreement was broken by the other side, it doesn't really matter (from the POV of this document) if it was really broken. IOW, agent's utility is determined by their opinion.
I think I get your point, but why would this matter?
$V_{whatever}$ is a determined, known value. When trading on the Golem market, some values are known from the start (e.g. $V_{AN}(a)$),
and others only post factum (e.g. $V_{AL}(a)$, so also $V_A(a)$).
This statement is at odds with your previous statement that V_AN is ex post.
Imagine an honest agent $A_1$ who considers signing an agreement $a$ with agent $A_2$. The decision algorithm can be roughly summarized as:
\begin{enumerate}
\item Calculate the expected value of the decision \textbf{not} to sign the agreement, $E_{A_1}(\neg a)$
\item Calculate the expected value of the decision to sign the agreement $E_{A_1}(a) = V_{A_1N}(a) - E(V_{A_1L}(a))$
This cannot be calculated if V_A1N is not known a priori.
Then, in your own words - "it is hard to talk about honesty". It is not clear whether we can even call such an artifact an "agreement". But more importantly, the information that "A believes B has broken an agreement" is significantly less useful to other parties than the information "B has broken the agreement X with regard to Y". For example: B believes they should only pay when the work is done. The work is not done and consequently B does not pay. However, A believes they should be paid whether the work is done or not. So A believes the agreement was broken, and, in your words, "it doesn't really matter [...] if it was really broken". Hence it wouldn't matter what the agreement says. I don't think this is a direction we should be going.
The problem you describe in #45 (comment) may well be part of it, but I think, there are deeper reasons:
(edited, I forgot to add the following)
Where to go from here
First of all, we need to agree on what constitutes a definition, e.g. "means to give a precise meaning to a new term, by describing a condition which unambiguously qualifies what the term is and is not." Then, if you want to keep pursuing the game-theoretic approach, we need to
I am aware this is a lot of work. Alternatively, we may try a more modest approach, starting with examples and trying to generalise from them (this may be useful even if we still want to create the game-theoretic model described above).
I see your point here, but:
At today's Road Ahead we decided to close this PR and postpone work related to this. |
The microeconomic_reputation.pdf is an attempt to define a base for all the future work related to the topic of "reputation". As we should expect it to be referenced from other documents/GAPs in the future, high quality - at least regarding the main conclusions - is crucial. Please review scrupulously and don't hesitate to point out any part that is incorrect or not clear enough.
The .pdf is for reading, the corresponding .tex is the source; they should always match.