Replies: 1 comment 1 reply
@nlohmann could you share your thoughts about the topic?
Let's start with an example: godbolt
Code
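The godbolt link is not reproduced here; below is a minimal sketch of what it presumably demonstrates, assuming nlohmann/json 3.11+ compiled as C++20 so the scalar `<=>` overload participates. The `Severity` enum and its string mapping are illustrative assumptions.

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

enum class Severity { Low, Medium, High };

// Map enum values to string representations for (de)serialization.
NLOHMANN_JSON_SERIALIZE_ENUM(Severity, {
    {Severity::Low, "Low"},
    {Severity::Medium, "Medium"},
    {Severity::High, "High"},
})

int main() {
    std::cout << std::boolalpha;

    // Ordinal enum comparison: High is not less than Low.
    std::cout << (Severity::High < Severity::Low) << '\n';          // false

    // Compiles via json's scalar comparison overload: the enum is first
    // serialized to the json string "High", which is then compared
    // lexicographically with "Low" -- the opposite of the enum ordering.
    std::cout << (Severity::High < nlohmann::json("Low")) << '\n';  // true
}
```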
While using the library, it is common to have some enum with (de)serialization for it via the `NLOHMANN_JSON_SERIALIZE_ENUM` macro (or some other mechanism). Enums are also comparable by default, and in some cases that ordinal comparison is the intended one (e.g. `Severity::Low < Severity::High`), but it differs from the lexicographic comparison of the string representations (`"Low" > "High"`).

Assume one receives some JSON with a "severity" field and wants to compare the severity from the JSON with some value: `Severity::High < nlohmann::json("Low")`. This compiles (due to this overload of `<=>`) but gives the wrong result. Someone who does not know about that overload can expect it to work correctly by picking `operator<=>(Severity, Severity)` plus the implicit conversion from `json` to `Severity`.

I can understand why this `<=>` overload (with the current implementation) exists for `std::is_arithmetic` types (JSON (de)serialization for them is usually the natural one), but not for `std::is_enum` types.
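For contrast, a minimal sketch of the arithmetic case, where the scalar overload compares numerically and behaves as one would expect (same nlohmann/json setup as assumed above):

```cpp
#include <cassert>
#include <nlohmann/json.hpp>

int main() {
    nlohmann::json j = 5;

    // For arithmetic types the comparison against a scalar is the natural
    // numeric one, so the overload causes no surprises here.
    assert(j < 7);
    assert(3 < j);
}
```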
I suggest removing this overload for enums.

IIUC, in that case `Severity::High < nlohmann::json("Low")` will work correctly (in terms of `operator<=>(Severity, Severity)`) when `JSON_USE_IMPLICIT_CONVERSIONS=1`, and will not compile (which seems to be a good thing) when `JSON_USE_IMPLICIT_CONVERSIONS=0`.
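For reference, a sketch of the explicit-conversion variant that already gives the enum-ordering semantics today, independent of `JSON_USE_IMPLICIT_CONVERSIONS` (reusing the illustrative `Severity` mapping from the first sketch):

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

// Assumes the Severity enum and NLOHMANN_JSON_SERIALIZE_ENUM mapping
// from the first sketch are visible here.

int main() {
    nlohmann::json j = "Low";

    // Deserialize explicitly, then compare enum values: this resolves to
    // the built-in comparison of Severity and matches the enum ordering.
    std::cout << std::boolalpha
              << (Severity::High < j.get<Severity>()) << '\n';  // false
}
```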
Notes
`json::operator==` for enums is semantically OK with the current implementation (the hidden to_string conversion may not be the optimal way to do it, but that is a topic for another discussion).
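A small sketch of that equality case (again reusing the illustrative `Severity` mapping): the comparison still goes through the hidden serialization to a string, but for `==` that happens to give the expected answer.

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

// Assumes the Severity enum and mapping from the first sketch.

int main() {
    nlohmann::json j = "Low";

    // Severity::Low serializes to the json string "Low", so the hidden
    // string comparison agrees with equality on the enum values.
    std::cout << std::boolalpha
              << (j == Severity::Low) << ' '     // true
              << (j == Severity::High) << '\n';  // false
}
```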