Planned Features

Jake Chitel edited this page Nov 13, 2017 · 6 revisions

This will contain a list of features identified during the compiler translation process that should be added to the language:

(NOTE: once these features are assigned to a version, they will be moved to that version's page. See the sidebar for a list of versions.)

Language features

Invariant operation

This will be a syntactic feature that is injected between statements to enforce some constraint. It is effectively equivalent to an if-statement that throws an error in the false case and does nothing in the true case. This is a mockup of what it might look like:

invariant (<condition>) <expression>
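As a rough illustration, the desugaring could look like the following TypeScript sketch, where `invariant` is a hypothetical helper standing in for the syntactic form:

```typescript
// Sketch: an invariant is equivalent to an if-statement that throws in the
// false case and does nothing in the true case.
function invariant(condition: boolean, message: string): void {
  if (!condition) {
    throw new Error(`Invariant violated: ${message}`);
  }
}

function divide(a: number, b: number): number {
  invariant(b !== 0, "divisor must be non-zero"); // injected between statements
  return a / b;
}
```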

Let-in expression

This is a way to declare expressions that depend on intermediate values, based on the feature of the same name in Haskell:

let <assignable-expression> in <expression>

or

let { <assignable-expression>; <assignable-expression> ... } in <expression>

Patterns

Patterns are a syntactic construct that is used in several places:

  • Function parameters
  • Assignment expressions
  • Match-case expressions

Effectively, patterns appear anywhere a name is declared (not used, declared).

Patterns can be:

  • types (used in match-case expressions, case is chosen if value matches type)
  • expressions (used in match-case expressions, case is chosen if value equals value)
  • conditions (used in test-case expressions, case is chosen if condition is true for value)
  • structural patterns (destructuring)
    • struct patterns (used in assignments to destructure, match-case to match pattern)
    • array patterns (used in assignments to destructure, match-case to match pattern)
    • tuple patterns (used in assignments to destructure, match-case to match pattern)

Patterns can contain _ to skip over entries in arrays and tuples.
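TypeScript's destructuring gives a feel for the structural patterns, with array holes playing the role of `_`:

```typescript
// Sketch: array and struct patterns in assignments.
const [first, , third] = [1, 2, 3];          // hole skips the middle entry
const { x, ...rest } = { x: 1, y: 2, z: 3 }; // struct pattern with a rest part
```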

Match-case expressions

Match-case expressions are powerful alternatives to if-else expressions.

They allow you to specify a list of cases to match sequentially against a target expression. Cases can be types, expressions, or structural patterns. The default case is the final fallthrough; if no pattern matches, an error is thrown.

match (<expression>) { case <pattern> => <expression>; ...; default => <expression> }
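A sketch of the runtime semantics in TypeScript, with predicates standing in for type/expression/structural patterns (the `match` helper and its shape are assumptions, not the proposed syntax):

```typescript
// Sketch: cases are checked in order against the target value; the default
// is the final fallthrough, and a missing match throws.
type Case<T, R> = { when: (v: T) => boolean; then: (v: T) => R };

function match<T, R>(value: T, cases: Case<T, R>[], fallback?: (v: T) => R): R {
  for (const c of cases) {
    if (c.when(value)) return c.then(value);
  }
  if (fallback) return fallback(value);
  throw new Error("no case matched");
}

const describe = (n: number) =>
  match(n, [
    { when: v => v === 0, then: () => "zero" },
    { when: v => v < 0, then: () => "negative" },
  ], () => "positive");
```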

Test-case expressions

Test-case expressions are for testing target expressions by predicate cases:

test (<expression>) { case <predicate> => <expression>; ...; default => <expression> }

Predicates are functions that return a boolean value for an input parameter of the type of the target expression.

Shorthand functions

Shorthand functions are even more succinct function forms than lambdas. They are only applicable to non-complex (yet to be defined) expression-bodied functions that have one parameter. The one parameter is represented by a _ character.

Examples:

_ + 4
_ == 2
someOtherFunc(_, 9)

This will be interesting because the body will be parsed as an ordinary expression until the underscore is reached. Because this is so context-sensitive, it is probably better dealt with during type checking.
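The three examples above correspond to ordinary one-parameter arrow functions, roughly:

```typescript
// Sketch: each shorthand form expands to a lambda with one named parameter.
const addFour = (x: number) => x + 4;                      // _ + 4
const isTwo = (x: number) => x === 2;                      // _ == 2
const someOtherFunc = (a: number, b: number) => a * b;     // assumed helper
const withNine = (x: number) => someOtherFunc(x, 9);       // someOtherFunc(_, 9)
```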

Matches expressions

The matches keyword is used to simply test in place if a value matches a pattern, returning a boolean:

<expression> matches (<pattern>)

Spread operator

This is a simple way to copy entries from an array or struct into a new array or struct:

<assignable> = [<exp>, <exp>, ...<exp>]
<assignable> = { <ident>: <exp>, ...<exp> }
<assignable> = (<exp>, <exp>, ...<exp>)

In this case order matters, and the spread can be followed by more entries.
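TypeScript's spread has the same ordering behavior, so a quick demonstration:

```typescript
// Sketch: spread copies entries in place; for structs, later entries
// override earlier ones, which is why order matters.
const base = [2, 3];
const arr = [1, ...base, 4];                 // entries can follow the spread
const defaults = { color: "red", size: 1 };
const struct = { ...defaults, size: 2 };     // size overridden after the spread
```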

Rest/spread params

Rest params (also known as variadic arguments) allow you to specify a variable number of arguments for a function. This has an impact on the semantics of a function, for example when it is partially applied.

func int myFunc(int ...params) => <exp>

Spread params allow the items in arrays and tuples (and other iterables) to be spread into function calls. Tuples must match the correct types. For arrays and other iterables, whose lengths are known only at runtime, all function parameters must be assignable from the type of the iterable's elements. If the function doesn't have rest params, a runtime check is added to make sure the length of the iterable is correct.

myFunc(<exp>, ...<exp>)
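A TypeScript sketch of both halves, with the runtime length check (described above for functions without rest params) made explicit:

```typescript
// Rest params: collect a variable number of trailing arguments.
function sum(...params: number[]): number {
  return params.reduce((a, b) => a + b, 0);
}

// A fixed-arity function that an array can be spread into.
function pair(a: number, b: number): [number, number] {
  return [a, b];
}

// Spreading a runtime-length array into a fixed-arity call needs a length check.
function applyPair(args: number[]): [number, number] {
  if (args.length !== 2) throw new Error("expected exactly 2 arguments");
  return pair(args[0], args[1]); // spread equivalent: pair(...args)
}
```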

Truthy/Falsy

Booleans shouldn't be the only values usable in if-else and test-case expressions. Any value can be passed. By default, if a value represents an empty structure (empty array, tuple, or struct), it evaluates to false; otherwise it's true. But types can specify the bool truthy() method to override this behavior.
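A sketch of the dispatch rule, assuming a `truthy()` method takes precedence over the default emptiness check (the `isTruthy` helper is illustrative):

```typescript
// Sketch: a truthy() override wins; otherwise empty structures are false.
interface Truthy { truthy(): boolean; }

function isTruthy(value: unknown): boolean {
  if (typeof (value as Truthy)?.truthy === "function") {
    return (value as Truthy).truthy();                 // user-defined override
  }
  if (Array.isArray(value)) return value.length > 0;   // empty array => false
  if (value !== null && typeof value === "object") {
    return Object.keys(value).length > 0;              // empty struct => false
  }
  return Boolean(value);
}
```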

Default parameters

To make it easier to specify overrides, a sugar allows you to specify a default value for a parameter, so that if it is not provided, the default is used. Unlike in other languages, defaults can be specified in any order with non-defaults. Type checking intelligently infers which arguments match which parameters. When two adjacent parameters have the same type, whichever comes first is used.

func int myFunc(int a = 1, int b, int c = 4) => <exp>

Named parameters

All parameters have names, but if you specify your function with the keyword kwargs (up for discussion), you can specify that calls to the function must specify the parameter names. Parameters without default values must be provided. Additionally, structs can then be spread into function calls.

Template strings

A clean replacement for string concatenation, potentially allowing for formatting as well.

`some text${<exp>:<format-string>}some more text${<exp>}final text`

This will be interesting for the lexer because there will be string "segments" that will serve as single tokens.
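TypeScript template literals lack the `:<format-string>` part, so this sketch approximates it with a hypothetical `fmt()` helper supporting only a fixed-point spec:

```typescript
// Sketch: formatted interpolation. fmt() is an assumption standing in for
// the proposed ${<exp>:<format-string>} syntax.
function fmt(value: number, spec: string): string {
  const places = Number(spec.slice(1)); // spec like ".2" => 2 decimal places
  return value.toFixed(places);
}

const price = 3.14159;
const label = `price: ${fmt(price, ".2")} units`;
```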

Indexers

We should be able to index into things other than arrays, using keys other than integers.

<modifiers> <type> [<type> <name>] { get => <expression> set => <expression> }
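TypeScript has no user-defined index operators, so explicit get/set methods stand in for the indexer in this sketch of a non-integer key:

```typescript
// Sketch: an indexer keyed by an (x, y) coordinate instead of an integer,
// i.e. roughly `int [(int, int) key] { get => ... set => ... }`.
class Grid {
  private cells = new Map<string, number>();
  get(key: [number, number]): number {
    return this.cells.get(key.join(",")) ?? 0;   // default for missing cells
  }
  set(key: [number, number], value: number): void {
    this.cells.set(key.join(","), value);
  }
}
```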

Map/List literals

Map literals:

map = { [<key-exp>]: <value-exp>, ... }

List literals:

list = { <entry-exp>, ... }

Improved type inference

This feature has its own feature page

Multithreading

Threads

Multithreading functionality will be provided with the primitives:

  • tfork (takes a void function call and executes it in a new thread, returning a thread reference object)
  • tjoin (takes a thread reference object and waits for it to finish)
  • tkill (takes a thread reference object, sends a kill signal, and waits for it to finish)
  • tcurr (returns a reference to the current thread's reference object)
  • twait (puts the current thread to sleep until it is awoken, optionally an int argument can be provided)
  • twake (wakes a thread up)

Additionally synchronization is provided with a lock block, which uses a single named reference as a lock object. When the lock is active, the thread that activated the lock is the only one that can access the associated block. Any other thread that attempts to activate it is put into a sleep queue until the active thread leaves the block, deactivating the lock.

This locking mechanism is a further abstraction on top of three operations:

  • test-and-set (an atomic operation that sets a memory value to true (1), returning the previous value)
  • twait (see above)
  • set (set a memory value)

A thread calls test-and-set. If it returns true, it should sleep. If it returns false, it means that the code block has been opened for business, and it can enter. Once the block is done, the lock object should be set to false and the next thread should be awoken. TODO: I'm mixing things up here. perhaps all we need is an "atomic" modifier.
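A single-threaded demonstration of the test-and-set semantics (a real implementation would use an atomic compare-exchange on shared memory across threads; this only shows the return-the-previous-value contract):

```typescript
// Sketch: test-and-set sets the lock word to 1 and reports what was there
// before. false => the caller acquired the lock; true => it should sleep.
function testAndSet(lock: Int32Array, index: number): boolean {
  const previous = lock[index];
  lock[index] = 1;          // set to true (atomically, in the real primitive)
  return previous === 1;    // return the previous value
}

const lock = new Int32Array(1);            // 0 = unlocked
const firstAttempt = testAndSet(lock, 0);  // false: lock acquired, enter block
const secondAttempt = testAndSet(lock, 0); // true: lock held, go to sleep queue
lock[0] = 0;                               // set(false): release, wake next thread
```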

Async functions

Async functions are an abstraction over several multithreading operations to allow chaining of multiple asynchronous operations into a synchronous workflow. Any function (including methods) can be marked async. This does several things:

  • The function is implicitly transformed to one that returns a Task object, where the type parameter is the return type of the function.
  • Invoking the function calls it in a new thread and immediately returns a Task object that is bound to the operation.
  • Any 'await' operation inside the function puts the async thread to sleep while the inner async operation finishes.
  • Any return will make the returned value available to the task, and the task is marked resolved, triggering any dependent operations to wake up and start executing.
  • Any throw will do the same. If there is not at least one operation waiting for an error, an exception is raised in the thread, which will kill the whole process.
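The Task behavior described above closely mirrors TypeScript's Promise model, which makes a small sketch possible (Promise stands in for Task here; the threading model differs):

```typescript
// Sketch: invoking an async function immediately returns a handle; await
// suspends the caller until the inner operation resolves; return resolves
// the handle with the value.
async function fetchAnswer(): Promise<number> {
  return 42;                          // return -> resolves the Task
}

async function compute(): Promise<number> {
  const answer = await fetchAnswer(); // await -> sleep until resolved
  return answer + 1;
}
```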

Another type of async function is a task generator function, which can be used to create new tasks in a first-class way as opposed to the API way. Any async function that uses the keywords resolve or reject is parsed as a task generator. Await cannot be used in one of these functions. resolve and reject are available to any inner function within the async function. If there is no inner function, the operation does not need to be async.

Event queue

An execution environment backed by an event queue can also be created. An event-based environment runs in a single thread, starting with one function. That function may invoke operations that place event handlers onto the queue. No queued event handler will be invoked until the current function is finished executing. This makes it very easy to reason about asynchronous operations, and things like race conditions are not possible. Additionally, execution is always non-blocking, which makes it very useful for use in responsive systems such as UI.

Async functions invoked in the event queue will use the event queue model as opposed to a multi-threaded model. The event queue system can invoke synchronous code, but that will defeat the purpose of using the event system. Instead, a wrapper API asyncify will invoke the synchronous code in a separate thread, and when it finishes, will place a handler on the event queue. There are various mechanisms for interacting with an async environment from a synchronous one.

Generator functions

Generator functions provide a syntactic abstraction for iterators and generators.

Generators are simply normal functions that use a 'yield' keyword. The yielded value is output from the function before it returns, and the function's execution is paused until the next value is requested. Generators add additional behavior to iterators that allow the caller to pass data to the generator, so yield expressions may actually resolve to a value. Callers can also invoke a throw operation on the generator, which will cause an exception to be thrown in the generator.
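TypeScript generators already implement this two-way protocol, so the caller-passes-data behavior can be shown directly:

```typescript
// Sketch: each yield outputs the running total, pauses, and then resolves
// to the value the caller passed to next().
function* accumulator(): Generator<number, void, number> {
  let total = 0;
  while (true) {
    const received = yield total; // output, pause, resume with caller's value
    total += received;
  }
}

const gen = accumulator();
gen.next();            // start: runs to the first yield, outputs 0
const a = gen.next(5); // yield resolves to 5; outputs 5
const b = gen.next(3); // yield resolves to 3; outputs 8
```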

Async generators

TODO

Cast expressions

Sometimes, it's nice to be able to force an expression to have a specific type (assuming assignability is valid). Cast expressions are used during type checking to force-infer (not coerce) an expression to a type. It's more of a "type hint" than a coercion.

Certain casts also inject runtime behavior, for example with classes. Casts from a general class type to a more specific one will add a runtime type check. A class cast error will be thrown if the value's type is not assignable to the casted type.

<assignable> = (<type>)<expression>
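A sketch of a class downcast with the injected runtime check (the class names are illustrative):

```typescript
// Sketch: (Dog)animal — a type hint at compile time, plus a runtime check
// for class downcasts that throws a class cast error on mismatch.
class Animal { constructor(public name: string) {} }
class Dog extends Animal { bark(): string { return "woof"; } }

function castToDog(animal: Animal): Dog {
  if (!(animal instanceof Dog)) {
    throw new Error("class cast error");   // injected runtime type check
  }
  return animal;
}
```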

Script mode

Script mode allows logic on the top-level of a module.

When this is applied to the main module, the logic is treated in the same manner as the main function. The main function in this context is ignored unless it is explicitly invoked.

When this is applied to imported modules, the logic is executed when the module is first imported. In script mode, imports can be placed anywhere in the file because they are executed statements as opposed to declarations. You can also import modules without using any exported values. In addition, "global" values are possible via modules, because modules become runtime values that are mutable (only when const is not used).

In script mode, code is always executed by an interpreter as opposed to being compiled, because the extra overhead is not necessary.

To turn on script mode, you can add a "#script" to the module configuration or use a ".rensh" extension to the file.

Non-script files retain their extra optimizations even when used from script mode, because they are not executed as scripts. Script files cannot be compiled, because their structure is not compatible with the compiled code structure.

Macros

Macros are intended to be used to build DSLs (domain-specific languages) within Ren, NOT for traditional C-like purposes.

To build macros, you are effectively building parsing and execution logic for the DSL. Macros add abstractions for syntax and logic ONLY, they do NOT change the behavior of the type system or runtime.

Macros are collections of parser functions. There is always one top-level parser function which indicates if the macro is a "transformation" or "execution" macro. Transformation macros parse the DSL and emit valid Ren code that will be used to replace the DSL code during parsing. Execution macros parse the DSL into runtime values and contain Ren code that will be executed when the macro is encountered at runtime. In the case of execution macros, the macro definition is used to translate the DSL code into some boilerplate that will invoke the macro functions. Transformation macros are more performant than execution ones, but are more complex.

Modules that export macros are special because they need to be processed before any other code. Modules that use macros need to declare a macro directive in the module configuration that specifies the relative path to the macro module. When the parser encounters these directives during preprocessing, it will parse and process the macro module and add its logic to the parser runtime. For each module that uses the macro, the DSL implemented by the macro will be usable like any other code. Because Ren's parser is built using order of precedence, macros must declare parse functions that do not clash with Ren code.

The syntax of macros is yet to be determined.

Module configurations

Module configurations are a set of preprocessor directives that control how modules are parsed, processed, and compiled. Here is a running list of configurations:

  • script (turns script mode on)
  • noscript (turns script mode off)
  • macro (imports a macro definition)
  • macrodef (specifies the module as a macro definition)
  • index (turns the current module into an index module (aggregates modules in the directory), only usable on index.ren files)
  • exclude (specifies a glob path for files to exclude from the index)
  • include (specifies a glob path for files to include in the index)
  • recursive (applies current index module configuration to all subdirectories as well)
  • exportstyle <explicit|pascal|camel|underscore> <includepath?> (specifies how the aggregated exports should be named)
    • explicit (default): as much as possible, keep the names as-is. default exports of modules will take the module name, replacing all non-identifier characters with underscores
    • pascal: module names are translated to pascal-case, where the first letter is capitalized and all non-identifier characters are treated as word boundaries where the next character is capitalized
    • camel: module names are translated to camel-case, where the first letter is lowercase and all non-identifier characters are treated as word boundaries where the next character is capitalized
    • underscore: module names are translated to underscore style, where all letters are made lowercase and non-identifier characters are replaced with underscores
    • includepath (default false): an optional modifier that includes relative module paths in names, inserting "default" for default exports
  • defaultforward (specifies a module whose default value should be forwarded as the default of the index)
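A sketch of the three non-explicit exportstyle naming rules as I read them, with non-identifier characters treated as word boundaries (pascal/camel) or replaced with underscores:

```typescript
// Sketch of exportstyle name translation. Helper names are assumptions.
function toWords(moduleName: string): string[] {
  return moduleName.split(/[^A-Za-z0-9]+/).filter(w => w.length > 0);
}

const pascal = (name: string) =>
  toWords(name).map(w => w[0].toUpperCase() + w.slice(1).toLowerCase()).join("");

const camel = (name: string) => {
  const p = pascal(name);
  return p[0].toLowerCase() + p.slice(1);
};

const underscore = (name: string) =>
  toWords(name).map(w => w.toLowerCase()).join("_");
```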

"index" modules allow common configuration to be specified for several files in a directory, and potentially subdirectories as well. This is especially useful for the top-level index.ren, which can specify project-wide configurations. In recursive configurations, module configurations can override inherited configurations.

There will likely be many more options to come, containing things like compiler options and typechecker configurations.

Closures

As of now, functions only support operations on their parameters or module-scoped constants. We need to allow access to local variables as well. This is classically done via "closures", which are logical collections of references to local variables. While parameters are new memory allocations which will lose the original reference if reassigned, closure variables are direct references. If you have a variable x and an inner lambda function which sets x to some value, that will set the value of x in the outer function.

A necessary extension of this is that non-mutating references to outer function variables are possible. This is MUCH easier to implement, and as an optimization, any outer function references that are not modified can be copied rather than referenced. Closures are an example of something a linter can be configured to disallow, because they can lead to performance issues.

Function contexts

Function contexts are one of the language features that is completely new to Ren, not inspired by another language. Operations that come with necessary side effects, such as IO, are difficult to implement in a pure functional way. Languages like Haskell use Monads to get around this, but the IO Monad is just a way to offset the side-effecty operations until after the program is done executing. When you perform an IO operation, you are just specifying "actions" to be executed later.

Function contexts are a way to implement operations that mutate data or perform side effects without a bunch of code overhead. A function context is an implicit parameter that is passed to functions that require it. Functions that require a context must explicitly declare it as a context parameter. Any function that invokes a function with a context then has that context implicitly added to itself. This goes recursively upward to the main function. The main function receives a special context called "external" that represents resources external to the application. This context is required to access side-effecty operations from pure code. Any time an impure function is called from a pure function, the external context is implicitly passed to it and implicitly returned as a new value.

external is the only context received by the main function, so if a different context bubbles up to the main function, an error is thrown during compilation. These contexts must be created separately and passed to functions that use them. For example, an application might have a "state" context that maintains the state of the application. Functions that use it will implicitly or explicitly be given a "state" context parameter. But some function between the main function and those functions must initialize "state" and start it as a context for a call chain.

There are a few dependent features that are required for this to be possible:

Pure/impure functions

Functions should be labelled either pure or impure. There are two types of "pure" that are not really distinguished by Ren, but should be distinguishable by the developer. The first is "true pure", meaning every function is expression-bodied and not even local state is mutated. These functions do not perform side-effecty operations of any kind. The second is "essentially pure" or "quasi pure", where externally, functions don't violate referential transparency, but internally, they perform mutable operations such as setting local variables and changing their values. To label a function "pure", you can simply add the "pure" keyword to the function (or add the #pure directive to the module). "true pure" functions are "pure" functions that are expression-bodied. "quasi pure" functions are "pure" functions that are statement-bodied. Ren doesn't care; it treats them effectively the same.

"impure" functions are functions that can directly mutate state and perform side effects. They can be expression bodied, but are usually statement-bodied. To label these, use the "impure" keyword (or add the #impure directive to the module).

"quasi pure" functions and "impure" functions have different runtime semantics, primarily around mutation. When you declare a local variable, a new variable is added to the runtime state of the function initialized to the provided value. In an "impure" function, reassigning the variable behaves as expected. The memory location or register value is modified. In a "quasi-pure" function, reassigning the variable creates a new reference, and all future references to the variable will use the new reference. All past references to the variable maintain the old value. This is not necessary when previous references are not saved. This behavior maintains immutability rules for purity.

Additionally, mutating the internals of values, either local variables or parameters, will cause a shallow copy of the value to be created in "quasi-pure" functions. In "impure" functions, the value is actually mutated. When this happens to a parameter in a "quasi-pure" function, the new reference will be made available implicitly to the caller. If the caller (even an "impure" caller) uses the value after calling the function, it will be referencing the new value. In the case of methods, references to "this" behave the same way. If "this" is modified at all, a new "this" will be implicitly returned from the function. The same applies to contexts.

Classes and structs can also be labelled pure, which effectively makes their internals immutable. Impure code using these types will have the same restrictions as "quasi-pure" functions.
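The quasi-pure mutation rule amounts to copy-on-write: mutating internals produces a shallow copy under a new reference while old references keep the old value. A minimal sketch:

```typescript
// Sketch: quasi-pure "mutation" — a shallow copy and a new reference,
// leaving any previously saved references untouched.
type State = { count: number };

function increment(state: State): State {
  return { ...state, count: state.count + 1 }; // shallow copy, new reference
}

const before: State = { count: 0 };
const after = increment(before); // `before` still sees the old value
```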

Decorators

Decorators are higher-order functions that modify function definitions in-place. This is very useful for common logic around functions. They receive a function as a parameter (the types can be generic or specific) and must return a function of the same type. Ideally the new function will call the original function, but it technically doesn't have to.

This can be useful for many things:

  • logging
  • memoization
  • invariant checks on parameters
  • initialization of values required by the function

Additionally, decorators can be functions that return decorator functions. This can be used to configure the decorator for each target function.
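Both forms can be sketched as plain higher-order functions; memoization covers the basic decorator, and a logging factory covers the decorator-returning-decorator form (all names are illustrative):

```typescript
// Sketch: a decorator receives a function and returns one of the same type.
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A) => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}

// Sketch: a configurable decorator — a function returning a decorator.
function logged(prefix: string) {
  return <A, R>(fn: (arg: A) => R) => (arg: A) => {
    console.log(`${prefix}: called with`, arg);
    return fn(arg);
  };
}

let calls = 0;
const square = memoize((n: number) => { calls++; return n * n; });
```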

Reflection

Reflection allows runtime inspection of types and values. The built-in "reflect" module provides a single entry-point function, reflect(), which returns a reflection object for the provided value: a Value instance (an instance of a subclass of Value) that allows runtime access to the value's internals and types. For example, invoking reflect() on a struct will return a Struct instance, and one can enumerate the keys and values of the struct, get its runtime memory size, or serialize it.

To do this on types, the type must be converted to a type reference using the "typeof" keyword. A type reference isn't useful for much aside from comparing to other type references and getting the name as a string. To do reflection operations, you have to call reflect() on the type reference to get a Type instance. Type is a class that has several subclasses: StructType, ArrayType, TupleType, Class, Primitive, etc. The fields or contents of the type can be enumerated or invoked dynamically on a value.

Reflection also includes type decorators. Type decorators are functions that receive the reflected value of types and can operate on the type before it is created. During this stage, the type is technically mutable, but members cannot be added or removed, and signatures cannot change.

Optionals

Optionals are special values that can be either some value or nothing. Optional types look like this:

<type>?

Optional types are assignable from their base type, but not to them. Additionally there is a special value nothing that is assignable to all T?. To extract a value of the base type from an optional, you need the ?? operator, which has the following signature:

infix oper T ??<T>(T? value, T alternate)

If the value represents some T, then that value is returned. Otherwise, the alternate value is returned.
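The ?? operator behaves like TypeScript's nullish coalescing, with undefined standing in for nothing:

```typescript
// Sketch: value if it is some T, else the alternate.
function coalesce<T>(value: T | undefined, alternate: T): T {
  return value ?? alternate;
}
```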

Additionally, there is a little thing called optional chaining, whereby you can access an optional's value as if it were some T, but nothing will happen if it's nothing.

value = (SomeType?)nothing
value?.someField?.someFunc?() // nothing

Each member access returns an optional form of the expected type of the access. At runtime, if the base value or any intermediate value is nothing, the full expression will resolve to nothing.

There are several other APIs of the T? type that can all be called on nothing without an error, which is what makes these types so valuable.
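TypeScript's optional chaining has the same short-circuit semantics, so the example above can be reproduced almost literally (the types here are illustrative):

```typescript
// Sketch: each access yields an optional of the expected type; if any link
// is nothing (undefined), the whole expression resolves to nothing.
type Inner = { someFunc?: () => number };
type Outer = { someField?: Inner };

function run(value: Outer | undefined): number | undefined {
  return value?.someField?.someFunc?.();
}

const result = run(undefined);                            // nothing, no error
const result2 = run({ someField: { someFunc: () => 7 } }); // some 7
```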

With operator

For classes and structs, the with operator allows setting multiple fields in a single expression.

<assignable> with <expression>

The expression must be struct-assignable to the assignable value, and will return a new reference.
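A sketch of `with` as a non-mutating multi-field update, analogous to object spread (the helper name is an assumption):

```typescript
// Sketch: <assignable> with <expression> — apply a struct-assignable set of
// fields and return a new reference, leaving the original untouched.
type Point = { x: number; y: number; label: string };

function withFields<T extends object>(base: T, updates: Partial<T>): T {
  return { ...base, ...updates };
}

const p: Point = { x: 1, y: 2, label: "origin" };
const q = withFields(p, { x: 10, y: 20 }); // p with { x: 10, y: 20 }
```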

Use-site variance

Languages like Java and Kotlin allow you to specify use-site variance to override the variance of a generic type. For example, arrays are invariant because you can both get and set values. However, you can specify that a parameter with an array type is covariant to allow arrays of any of the type parameter's subtypes to be passed. This also means that the array's mutable operations cannot be used within the function.

Pointers

If Ren is to be a jack of all trades, it needs some form of pointers. We will experiment with several potential models for this to come to a clean decision. Rust will probably provide a lot of inspiration here.

Member-granular exports

Because we are using structural types (classes, extensions, structs) that have access modifiers, there are sort of two encapsulation mechanisms emerging: access modification and module exports. These two systems should be one.

We should be able to control it at export site and declaration site. The default behavior is as expected: all public/protected members are accessible if the class is exported. Additional properties apply only to public members. Private members are accessible only within the class, and protected members are accessible only to subclasses. You can do the following to public members:

  • declare members with access modifiers to hide them from other modules, but not the current module (hidden?).
  • declare the class un-exported, then have a separate export where you specifically choose members to include:
class MyClass {}
export MyClass { {...list of members} }
  • the above can also be done for forwarded exports:
export from "module": MyClass { {...} }
// or
export from "module": { MyClass { {...} } }

Once a member has been hidden, it cannot be "unhidden", so this basically means there are three levels of hiding:

  • hiding at declaration site (exposed only to current module)
  • hiding at export site (exposed only to current module and other modules if an additional export is used)
  • hiding at forward site (exposed to modules that directly import it, but not to ones that use the "proxy" module)

Along those lines, we should also have a mechanism of labelling certain modules as "internal" or private to a certain directory, so that way an index module can be the only "gateway" to exports from submodules.

Type Generation

Type inference is a very powerful tool because it allows the developer to not have to think about declaring a type, and the usage of language constructs instead infers the required types.

We can take this a step further and say that not only can type inference dynamically determine the types of things, it can also "generate" types.

This idea is already present for certain scenarios, for example, every expression with a clear type "generates" a type that will flow to usages of the expression. A function that returns a struct with specific keys will "generate" a struct type containing those keys.

But what if you could make it more generic than that? Take our parser logic for example. In TypeScript we are allowed to specify "key types" (keyof { ... }) because all objects are sets of key-value pairs. But in Ren this concept does not map directly to structs. Struct fields are not keys, they are just fields. So this kind of thing:

export function acceptFunctionDeclaration(parser: Parser) {
    return parser.accept([
        { name: 'funcToken', type: 'FUNC', definite: true },
        { name: 'returnType', parse: acceptType, mess: mess.INVALID_RETURN_TYPE },
        { name: 'functionNameToken', type: 'IDENT', mess: mess.INVALID_FUNCTION_NAME },
        { name: 'typeParamList', parse: acceptTypeParamList, optional: true },
        { name: 'paramsList', parse: acceptParameterList, mess: mess.INVALID_PARAMETER_LIST },
        { name: 'fatArrowToken', type: 'FAT_ARROW', mess: mess.INVALID_FAT_ARROW },
        { name: 'functionBody', parse: acceptFunctionBody },
    ], decls.STFunctionDeclaration);
}

where the specified names are checked against the fields of STFunctionDeclaration, does not have a direct counterpart in Ren.

But could it?

export func acceptFunctionDeclaration(Parser parser) => parser.accept([
    { field: .funcToken, type: 'FUNC', definite: true },
    { field: .returnType, parse: acceptType, mess: mess.INVALID_RETURN_TYPE },
    { field: .functionNameToken, type: 'IDENT', mess: mess.INVALID_FUNCTION_NAME },
    { field: .typeParamList, parse: acceptTypeParamList, optional: true },
    { field: .paramsList, parse: acceptParameterList, mess: mess.INVALID_PARAMETER_LIST },
    { field: .fatArrowToken, type: 'FAT_ARROW', mess: mess.INVALID_FAT_ARROW },
    { field: .functionBody, parse: acceptFunctionBody },
])

We have omitted the return type of this function, so we are telling the type checker to infer the type. Let's look at the type of Parser.accept():

T accept<T : {}>(SequentialExpansion<T>[] expansions)

interface SequentialExpansion<T : {}, V, F : Field<T>> where T.F : V {
    F field
    bool? definite
    ParserMessage? mess
    bool? optional
    (Parser => V)? parse 
}

There's a bit more to it than that, but this shows enough for our purposes. SequentialExpansion has three type parameters:

  • T : {}: the type of struct that this expansion will be adding a field to
  • V: the type of that field's value, which can be anything
  • F : Field<T>: the type of the field's name. The Field<T> type is a special type that resolves to the set of fields in T, where a "field" in this case is a reflected type from the type of the struct.

The little where clause after the type parameters is what ties it all together. This is a constraint specifying a relationship between type parameters. It says that the value of field F on type T is assignable to type V.

What all of this means is that when you call parser.accept(), the array of SequentialExpansion structs that you pass to it will "generate" a struct type based on the type parameters. The type parameters on SequentialExpansion will be inferred. Let's look at this expansion:

{ field: .functionBody, parse: acceptFunctionBody }

We need to infer the three type parameters. T must be a struct, F must be one of its fields, and the type of field F must be V, which can be anything. The field's name is specified by the field entry, which is .functionBody, and its type comes from the return type of parse, which we will say is called FunctionBody. So now we have fully inferred T as { FunctionBody functionBody }. This type was "generated" from a generic type with a constraint.

We can expand this to the other kinds of expansions:

  • expansions with type or image are looking for tokens, so we can say that the type of those fields must be Token.
  • expansions with optional are optional, so we can say that the type of those fields must be an optional of whatever the type is.
  • expansions that specify repetition must produce an array of whatever the field's type is.

These type constraints should be able to support all of these cases and more. Effectively what this boils down to is the ability to add constraints that specify relationships between type parameters, and type inference that takes these constraints into account. The existing type constraints that we have for type parameters technically fall under this umbrella, but they are a simpler form that can only apply to the one parameter, and are one-sided.

This feature applies particularly well to any kind of parsing or deserialization.
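TypeScript's mapped types can already "generate" a struct type from a list of field/producer pairs, which approximates the constraint-based inference sketched above (the `build` helper is illustrative, not Parser.accept):

```typescript
// Sketch: the shape of `specs` drives inference of T — each key contributes
// a field whose type comes from the producer's return type.
function build<T>(specs: { [K in keyof T]: () => T[K] }): T {
  const result = {} as T;
  for (const key of Object.keys(specs) as (keyof T)[]) {
    // `any` escape hatch for TypeScript's correlated-type limitation.
    (result as any)[key] = (specs as any)[key]();
  }
  return result;
}

const decl = build({
  functionBody: () => "body",
  paramCount: () => 2,
}); // inferred type: { functionBody: string; paramCount: number }
```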

It would also be nice if you could "get" types from structured types. For example, as in the above case, we don't specify a return type on the function because we want it to be inferred. But what if we want to get that return type? We should be able to say something along the lines of type (Parser => returnType) = typeof(<func>) to say "given the type of this function, get the return type". This is basically just type destructuring. Fantastic idea.
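TypeScript's conditional types already do this destructuring for function return types, which gives a feel for what the proposed form could mean:

```typescript
// Sketch: "getting" a type out of a structured type. ReturnType<typeof f>
// plays the role of `type (Parser => returnType) = typeof(<func>)`.
function makeUser() {
  return { name: "ada", id: 1 };
}

type User = ReturnType<typeof makeUser>; // { name: string; id: number }

const u: User = { name: "grace", id: 2 };
```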

Inferred and Simplified Type Constraints

Having done some pretty complex typing with TS, I can now say that it is exceedingly annoying to specify type constraints with heavily nested or extended types.

For one thing, every time you specify a type constraint on a type, everything that uses that type now has to specify the constraint. On the surface, these constraints likely end at a function, making it nice for the developer because function type arguments can be inferred. But this should also be a thing for type definitions. Not sure how that should look yet, but we need to be able to infer type arguments for type definitions so that the only place you need to explicitly specify them is at the actual definition of the type parameters.

Likewise, mapped types are awesome, and we should have something like them. But man, are they verbose, especially for the use case that I was using them on, which was constraining the type of a specific property. This should not be so. What should happen is that you specify the type parameters, and if the constraints are simple, you specify them in-place. But for complex constraints, we should have a where clause (similar to the above proposal) which can specify relationships between two or more type parameters. From how the constraint is written, it should be inferred what the types of the parameters are.

Tooling

Type evaluator

This can be an extension of the REPL and/or its own command, but in any case it would be great to have some operation that takes a module and an exported identifier and gets the type information for it. This would be very useful in cases where you've created a module that exports inferred types and you want to know what those types are inferred to. IDEs can also hook into this to get type information.
