[Proposal] Typeclass Traits #4153
Conversation
I like the idea but I'm not a fan of the surface syntax:

```scala
trait Text {
  //...
}

abstract object Text {
  def fromString(str: String): Instance
  def fromStrings(strs: String*): Instance =
    ("" :: strs.toList).map(fromString).reduceLeft(_.concat(_))
}

enum ConcText extends Text {
  //...
}

object ConcText { // Implicitly extends the abstract object Text
  def fromString(str: String): ConcText = ConcText.Str(str)
}
```
@smarter That's an interesting idea! There's one complication though: we sometimes need both an abstract and a concrete companion object for a trait. We'd have to establish the intuition that abstract objects are a third thing, different from objects. Another variant would be

```scala
trait Text {
  //...
}

common {
  def fromString(str: String): Instance
  def fromStrings(strs: String*): Instance =
    ("" :: strs.toList).map(fromString).reduceLeft(_.concat(_))
}
```

That avoids the scoping confusion and does not cause a name clash.
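For illustration, a sketch (in the proposed syntax, so it does not compile today; the members shown are made up) of how the anonymous `common` block could coexist with an ordinary, concrete companion object:

```scala
trait Text {
  def concat(that: Instance): Instance
}

common {
  def fromString(str: String): Instance  // abstract; each implementation provides it
}

object Text {
  // a regular, concrete companion object can still be defined: no name clash
  val defaultSeparator: String = ", "
}
```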
[UPDATE: `Instance` is not exposed anymore in the latest proposal] Thinking over it, I believe we should not expose `Instance`. The downside of dropping it is ...
What's the difference between `Instance` and `This`?

Yes, this does make it more difficult to find a good design, but is this really a hard requirement?

In the typeclass encoding ...

For the typeclass encoding, yes. Also, if the implementation is parameterized, the ...
tests/pos/typeclass-encoding3.scala
Outdated
```scala
  common def limit = 100
}

class CG2[T](xs: Array[Int]) extends CG1[T](xs) with HasBoundedLength {
```
It should be `xs: Array[T]` here.
tests/pos/typeclass-encoding3.scala
Outdated
```scala
def lengthOKX[T : HasBoundedLengthX](x: T) =
  x.length < HasBoundedLengthX.impl[T].limit

def longestLengthOK[T : HasBoundedLengthX](implicit tag: ClassTag[T]) = {
```
First argument list missing? (the `(x: T)`)
Here is my attempt to desugar the first part of the proposal. Note that there are no implicit conversions involved, only implicit parameters.
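Since the desugaring itself is not reproduced here, the following is only a minimal sketch of an encoding in that spirit: an evidence trait whose methods take the receiver explicitly and which is threaded around purely as an implicit parameter. The `HasLength_TC` name echoes one used later in this thread; the instance and the use site are illustrative.

```scala
// Sketch: a typeclass encoded as evidence, with no implicit conversions anywhere.
trait HasLength_TC[T] {
  def length($this: T): Int
}

object HasLength_TC {
  implicit val stringHasLength: HasLength_TC[String] =
    new HasLength_TC[String] {
      def length($this: String): Int = $this.length
    }
}

object Usage {
  // a use site threads the instance through an implicit parameter only
  def longest[T](xs: List[T])(implicit ev: HasLength_TC[T]): T =
    xs.maxBy(x => ev.length(x))
}
```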
I think the problem with this is that you need to define every normal trait twice: once as written and then again with a type parameter. I believe that's way too burdensome; we should not have to pay any overhead at all for normal traits.

Shouldn't that be just for typeclass traits? Not every trait makes sense as a type class, at least in Haskell, for a few different reasons (can elaborate if needed).
[UPDATE: This is now solved. See the section on Factored Instance Declarations.] Right now I am stumped to come up with a scheme to re-use extensions. Example:

```scala
extension IntSemiGroup for Int : SemiGroup {
  def add(that: Int) = this + that
}

extension IntMonoid for Int : Monoid {
  common def unit = 0
}
```

We'd like to be able to write extensions like the above and transparently re-use the `add` definition of `IntSemiGroup` when constructing `IntMonoid`.

One technique we should look at is to generate for every trait method a static method in the trait that takes the receiver as an explicit parameter, so that e.g. `trait A { def a() = "a" }` and `trait B extends A { def b() = "b" }` become

```scala
trait A {
  def a() = a$(this)
  static def a$($this: A) = "a"
}

trait B extends A {
  def b() = b$(this)
  static def b$($this: B) = "b"
}
```

If we have that, we might be able to replace inheritance by delegation, which would give us more flexibility.
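A rough sketch of the delegation idea in today's Scala, with companion-object methods standing in for the proposed `static` trait methods (the `SemiGroup`/`Monoid` signatures below are illustrative, not the proposal's):

```scala
trait SemiGroup[T] { def add(x: T, y: T): T }
trait Monoid[T] extends SemiGroup[T] { def unit: T }

object IntSemiGroup extends SemiGroup[Int] {
  def add(x: Int, y: Int): Int = x + y
}

object IntMonoid extends Monoid[Int] {
  // re-use IntSemiGroup's add by delegation instead of by inheriting from it
  def add(x: Int, y: Int): Int = IntSemiGroup.add(x, y)
  def unit: Int = 0
}
```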
Please elaborate, yes! I mean, discussing at lunch, we decided that only no-init traits qualify as type classes (i.e. no vars or vals allowed). Are there other restrictions?
I expected the translation should be opt-in. Here's a conceptual reason about what seems to work in practice in Haskell, but that's a bit fuzzy. Then there are a few examples where I'm not sure what a translation should do. For instance, given

```scala
trait Set[A] {
  def append(that: Set[A]): Set[A]
  // Beware: this isn't
  //def append(that: This): This
}
```

which allows appending different implementations of sets, do you expect to translate that to a typeclass implementation? How? With Olivier's translation I can maybe use:

```scala
trait Set_TC[T, A] {
  def append($this: T, that: Set[A]): Set[A]
}
```

but in Olivier's translation, adding a ... would give something like:

```scala
trait Set_TC[T, A] {
  def append[U]($this: T, that: U)(implicit $ev: Set_TC[U, A]): Set[A]
}
```

Also, what about replacing

```scala
implicit class HasLengthOps[T](self: T)(implicit ev: HasLength_TC[T]) {
  def length: Int = ev.length(self)
}
```

by

```scala
implicit class HasLengthOps[T](self: T)(implicit ev: HasLength_TC[T]) extends HasLength {
  def length: Int = ev.length(self)
}
```
Add `opaque` to syntax. Let it be parsed and stored/pickled as a flag.
An opaque type becomes an abstract type, with the alias stored in an OpaqueAlias annotation.
Maintain the link from a module class to its opaque type companion, using the same technique as for companion classes.
Higher-kinded comparisons did not account for the fact that a type constructor in a higher-kinded application could have a narrowed GADT bound.
The previous scheme, based on "magic" methods could not accommodate links from opaque types to their companion objects because there was no way to put a magic method on the opaque type. So we now use Annotations instead, which leads to some simplifications. Note: when comparing the old and new scheme I noted that the links from companion object of a value class to the value class itself broke after erasure, until they were restored in RestoreScopes. This looked unintended to me. The new scheme keeps the links unbroken throughout.
FirstTransform rewrites opaque type aliases to normal aliases, so no boxing is introduced in erasure.
It's overall simpler to just define them internally in Definitions
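For orientation, a sketch of the user-level feature these commits work toward (using the `opaque type` syntax as it eventually shipped in Scala 3; the `Logarithm` example is illustrative): the alias is visible only inside the defining scope, including the type's companion object, and the type erases to its underlying representation, so no boxing is introduced.

```scala
object Logarithms {
  opaque type Logarithm = Double

  object Logarithm {
    // inside the defining scope, the alias Logarithm = Double is visible
    def apply(d: Double): Logarithm = math.log(d)
    def exponent(l: Logarithm): Double = l
  }
}

// outside, Logarithms.Logarithm is an abstract type, distinct from Double
val l: Logarithms.Logarithm = Logarithms.Logarithm(2.0)
```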
I can't find the correct thread; could you provide a link?

@EECOLOR: Somebody has to start a thread. There is none yet.
@oxbowlakes I agree about @deriving. We are looking at how we can accommodate it in the language, since the macro systems we will support in the future look too weak for this. @OlivierBlanvillain is working on this. Otherwise I am not sure in what sense users of a typeclass would be affected. There might be small differences, i.e. it's currently ...
@odersky, you have asked for ...

For IO data types, the libraries typically provide ... When ... As for the functional dependencies: I skimmed Mark P. Jones' "Type classes with functional dependencies" paper, and while this matter is way over my head, my understanding is that ...
Needs kind polymorphism to work. Seems this is the first time we need kind polymorphism in the wild!
```diff
-def sum[T](xs: List[T])(implicit $ev: Monoid.Impl[T]) =
-  (Monoid.impl[T].unit /: xs)((x, y) => x `add` y)
+def sum[T](xs: List[T])(implicit $ev: Monoid.common[T]) =
+  (Monoid.common[T].unit /: xs)((x, y) => x `add` y)
```
I wonder if it could be (aliased to) `apply`, so it becomes `Monoid[T].unit`, which looks nicer and less bulky. Some of the alternative proposals had that short form.
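A sketch of what that alias could look like on the companion, assuming the `Common`/`common` names from the diff above (`intMonoid` and `Demo.zero` are just for illustration):

```scala
object Monoid {
  trait Common[T] { def unit: T }

  // the accessor used in the diff...
  def common[T](implicit ev: Common[T]): Common[T] = ev
  // ...plus an apply alias, so call sites can write Monoid[T].unit
  def apply[T](implicit ev: Common[T]): Common[T] = ev

  implicit val intMonoid: Common[Int] = new Common[Int] { def unit = 0 }
}

object Demo {
  def zero[T: Monoid.Common]: T = Monoid[T].unit
}
```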
Is this a typo? In `(implicit $ev: Monoid.common[T])`, shouldn't that be `Monoid.Common[T]` (a type and not a term)?
@rkrzewski I think the instances that @odersky was looking for are the ones for ... However, to me it still seems that the second type parameter of `MonadError` can be treated as an output determined by `F`, along these lines:

```scala
trait MonadError[F[_]] {
  type ErrorType
}

object MonadError extends LowPriority {
  type Aux[F[_], E] = MonadError[F] { type ErrorType = E }
  implicit def either[A] = new MonadError[Either[A, ?]] { type ErrorType = A }
  implicit def io = new MonadError[IO] { type ErrorType = Throwable }
  implicit def eithert1[F[_], L](implicit F0: Monad[F]) = new MonadError[EitherT[F, L, ?]] { type ErrorType = L }
}

trait LowPriority {
  implicit def eithert2[F[_], E, L](implicit FE0: MonadError.Aux[F, E]) = new MonadError[EitherT[F, L, ?]] { type ErrorType = E }
}

// here I care about the error type
def foo[F[_]](f: F[String])(implicit m: MonadError.Aux[F, Throwable]) = ???

// and here I don't
def bar[F[_]: MonadError](f: F[String]) = ???
```
@rkrzewski @Jasper-M Thanks for the answers. Would be good to get to the bottom of this!

What about things with a bidirectional functional dependency?

@SystemFw is ...
@rkrzewski No, I've actually just realised that there's a better example though: typeclasses with three type arguments where there's a dependency between the first two and the third. For example, typeclasses for type-safe dimensionality: multiplying a vector with a matrix, a scalar with a matrix, and so on.

```haskell
class Mult a b c | a b -> c where
  (*) :: a -> b -> c

instance Mult Matrix Matrix Matrix where
  {- ... -}

instance Mult Matrix Vector Vector where
  {- ... -}
```
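For comparison, one way this is often approximated in current Scala is to turn the determined parameter into a type member and recover the ternary view with an `Aux` alias, much like the `MonadError.Aux` example earlier in the thread (`Mult`, `Matrix`, `Vec` and the instances below are all illustrative):

```scala
class Matrix
class Vec

trait Mult[A, B] {
  type Out
  def times(a: A, b: B): Out
}

object Mult {
  // Aux[A, B, C] plays the role of the Haskell `Mult a b c | a b -> c`
  type Aux[A, B, C] = Mult[A, B] { type Out = C }

  implicit val matMat: Aux[Matrix, Matrix, Matrix] =
    new Mult[Matrix, Matrix] {
      type Out = Matrix
      def times(a: Matrix, b: Matrix): Matrix = new Matrix
    }

  implicit val matVec: Aux[Matrix, Vec, Vec] =
    new Mult[Matrix, Vec] {
      type Out = Vec
      def times(a: Matrix, b: Vec): Vec = new Vec
    }
}

object MultSyntax {
  def mult[A, B](a: A, b: B)(implicit m: Mult[A, B]): m.Out = m.times(a, b)
}
```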
Once you do that, it seems actually quite clear to write ...

Or where do you see room here for further improvement? I would say: maybe we omit the names of the objects ...
@odersky Honestly at this point I'm still in the exploration phase, trying to understand the benefits, drawbacks and limitations of the current proposal :)

Because of the above, the examples are really just meant to highlight possible limitations, not to argue that such limitations are a showstopper one way or another. The point about context bounds is interesting though, I'll keep that in mind. More generally, I personally prefer a typeclass solution that doesn't rely on ...
So, it's now `Monad.at[T]` instead of `Monad.impl[T]`, `Monad.common[T]`, or `Monad[T]`.
Understood :-) That's what we both are trying to do here.
I'm having a hard time keeping up with all the comments but here is one comment (by @odersky) that I would like to discuss:
I think that using a trait and using a typeclass are two fundamentally different design decisions, so having to refactor is a normal consequence because they are not the same thing: ...

In my mind this justifies having different constructs in the language, like ... I am sorry if this rambling comment is more of a vague intuition than a formal proposal, but maybe this can shed some light on the discussion?
That sentiment is shared by many people. But I don't think it needs to be true. Two counter-examples: ...
@odersky well, that's right. But did you use it heavily in practice? I mean in non-trivial projects. I have been practicing Rust heavily for more than a year now (I wrote several dozen thousand lines of code), and even if I totally agree it must be a good source of inspiration for your current work, I'm really not enthusiastic about trait objects. I did use them quite heavily at first, literally thinking "oh, maybe that's finally a nice fusion of OOP/FP here!", so I totally understand your enthusiasm here... but I got very disappointed in the long run. I had to spend hours of refactoring to "fix" code that was using trait objects.

As the JVM memory model is arguably pretty different, do you think it would be possible to avoid such limitations? I'm happy to provide concrete examples if there is interest; the reason for the boxing is tied to how methods are dispatched, and I'm pretty sure that would have an impact on the Scala design. Also, another important point: please consider that even if Rust allows ...

I hope this puts some emphasis on the motivation to look at both concepts separately, as @etorreborre suggests; in my opinion, if you want to fuse them, great advantages must be demonstrated... My goal here was to demonstrate that ...

Finally, on a more personal note, just a word to say how happy I am to see the Scala team looking to support typeclasses natively after all these years of intense (and interesting!) debates. It is also pretty awesome for me to hear coherence being discussed in a Scala context. Even though my interests have shifted after all this time (and as you say Martin, if people want a pure encoding on the JVM they can look into Eta-lang, and not hijack your design), I'm very glad to see that there is a true and serious attempt at providing a native encoding for typeclasses in Scala.
@odersky I think your approach is definitely worth exploring and it is interesting to contrast what you are proposing with Rust's ...

Indeed ...
@aloiscochard Thanks for your input! [I was busy with other things for a while so saw it only recently]. Very interesting to compare with your Rust experience. I would think that boxing on the JVM is purely a performance issue, so you should not see any compilation errors when switching between context bounds and value parameters. That would be a design goal, for sure. As to performance, it will depend a lot on what a compiler and the JVM runtime would do in each case.
I use Rust--not as heavily as you have; I'm still mostly writing Scala or C++--but I have had a better experience with trait objects than you. I tended to use them lightly to begin with (preferring static dispatch for performance), and I was wary of the restrictions needed to make a trait object-safe, so I haven't run into refactoring gotchas much. Being able to have an object at all is way better, in some cases, than not, so it's still a win from my perspective. But, anyway, all the annoyances in Rust are due to being explicit about memory and avoiding allocations. The JVM doesn't have to worry about them (aside from the standard friction around boxing primitives). A Rust-on-the-JVM would have trait objects just work, always. (Everything on the JVM can be thought of as ...) So I think as inspiration for Scala, the upsides of Rust are relevant and the drawbacks are not.
### Typeclass Implementation

An implementation of a typeclass trait is a class or object that extends the trait (possibly that implementation is generated from an extension clause).
This rule seems a bit fragile for ADTs. What if you want to have an implementation which is a trait? Motivation: consider

```scala
trait Filter[A] extends TypeClass { def filter(f: A => Boolean): This[A] }
trait Option[A] extends Filter[A]
class Some[A] extends Option[A] { def filter(...): ??? }
```

The return type would seem to have to be `Some[A]`, not `Option[A]`. Clearly, what we actually want is to have `type This = Option`. And yes, I realize this can be achieved by making `Option` an abstract class (but this isn't very orthogonal: what if I need to mix other classes into `Option`'s children) or by using extensions.

Incidentally, this puzzler would be avoided if typeclass traits were marked otherwise.

EDIT: credit for the question to @tpolecat on Gitter.
```scala
type This = C
```

unless `C` extends another implementation class `B` (in this case `B` has already defined `This`).
Back to the example above with `Option[A]` extending typeclass `Filter[A]`: if we want `Option` to implement typeclasses of different kinds, like `Filter` (or `Monad`) and `Semigroup`, we can't use the OOP typeclass-implementation style. Rust avoids this problem because it doesn't have higher kinds.
If you are looking for a concrete example, from cats:

```scala
trait SemigroupK[F[_]] {
  def combineK[A](fa: F[A], fb: F[A]): F[A]
}
```

vs

```scala
trait Semigroup[A] {
  def combine(a: A, b: A): A
}
```

`Option` has instances for both: `SemigroupK` encodes prioritised choice (`orElse`, basically), and `Semigroup` combines the `A: Semigroup` in `Option[A]`.
@odersky thanks, it's very good to hear that this should be purely a performance issue on the JVM (and that this will be enforced during design)! @Ichoran yeah indeed, interesting to hear about your experience. I wish I had been wary of this before writing so much code, but I guess it's a fair price to pay for me not RTFMing ;) ... Anyway, I'm still using them in a few places; not sure how long they will survive, and I just try to avoid them in new designs.
Support it! Different semantics should use different keywords; I think they should have their own keywords. It's also good for tools. Besides, to keep Scala's concepts unified, they could be implemented in terms of one underlying idea, depending on how DOT defines them (I don't really know how the Scala compiler works).
This is a proposal to support extension methods and typeclasses in a more direct and convenient way. It is based on #4114.
Status
This is a first draft proposal (consider it Pre-SIP stage). None of the features that go beyond #4114 are implemented yet.
Rationale
There are two dominant styles of structuring Scala programs: standard (object-oriented) class hierarchies, or typeclass hierarchies encoded with implicit parameters. Standard class hierarchies lead to simpler code and allow dispatching on the runtime type, which enables some optimizations that are hard to emulate with implicits. Typeclass hierarchies are more flexible: instances can be given independently of the implementing types and the implemented interfaces, and instances can be made conditional on other typeclass instances.
Unfortunately, typeclass-oriented programming has a high upfront cost. It starts with the definitions of "typeclasses" themselves. Say you want to implement a typeclass for a `map` method (often called a `Functor`). The usage you want to support is ordinary method syntax, something like `xs.map(f)`. However, you can't define `map` as a unary method like this, not if it is part of a typeclass. Instead you need to define a binary version of `map`, and then another implicit class to get `map` back as an infix operator (a sketch of the full encoding is given below). It does the job, but at a cost of lots of boilerplate! The required boilerplate is very technical, advanced, and, I believe, frightening for a newcomer. The complexity of typeclasses does not end with their definition, either. It continues to the use sites, which typically need one extra type parameter per typeclass argument. Scala is a language set out to eliminate boilerplate and promote the simplest possible style of expression. But it seems in this area it has utterly failed to do so.
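The code snippets referred to in the paragraph above are not reproduced on this page; the following is a sketch of the kind of encoding it describes (names such as `FunctorExample`, `listFunctor` and `doubleAll` are illustrative):

```scala
object FunctorExample {
  // The typeclass itself: map has to be written in "binary" form,
  // taking the mapped value as an explicit argument.
  trait Functor[F[_]] {
    def map[A, B](fa: F[A])(f: A => B): F[B]
  }

  object Functor {
    implicit val listFunctor: Functor[List] = new Functor[List] {
      def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
    }
  }

  // An implicit class to recover map as an infix method: xs.map(f)
  implicit class FunctorOps[F[_], A](val fa: F[A]) extends AnyVal {
    def map[B](f: A => B)(implicit F: Functor[F]): F[B] = F.map(fa)(f)
  }

  // Use sites need one extra (context-bound) type parameter per typeclass.
  def doubleAll[F[_]: Functor](xs: F[Int]): F[Int] = xs.map(_ * 2)
}
```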
https://github.com/mpilquist/simulacrum is a macro library that removes a lot of the boilerplate. But it relies on "macro paradise", which has never been officially supported and has lost its maintainer (https://contributors.scala-lang.org/t/stepping-down-as-the-maintainer-of-scalamacros-paradise/1703). The kind of annotation macros required by simulacrum will almost certainly only be supported in Dotty as code generation tools, requiring a separate build step.
The aim of this proposal is to ...
Details
The proposal is written up as a list of reference pages: ...

The proposal was heavily inspired by Rust's traits and implementations. The main differences to Rust are ...