Contents of SHACL-AF #234
Is this where some of the sections from SHACL-AF will go?
@afs please double-check the title of this issue; it doesn't look right.
Done! (Side effect of GitHub's "create new issue" on the original comment; lesson learnt.)
The two sections "Node Expressions" and "SHACL Rules" could be two documents, depending on how big the node expressions document is likely to become. The machinery for node expressions needs to be in core. The node expressions proposal suggests that "Eventually, the library of Node Expressions could cover all of SPARQL". That's not small, even if SHACL taps into SPARQL's "Functions and Operators" directly.
+1, the benefit then would certainly be that the "collection" of NExp could be extended and worked on independently of the core spec (a bit like ODRL's vocabulary, for example).
Yes, I like that suggestion, @afs. It would also protect the existence of Node Expressions in case someone raises significant objections against the graph-level rule inferencing, or the rules work takes much longer than the NE spec. For the core NE machinery, I looked at the SPARQL definition of variables and solution mapping sequences, which could be the foundation of the input and output of each expression. Basically, each expression would take a sequence of variables (usually including focusNode) as input and produce a new output variable sequence, including a dedicated default variable such as "outputNodes" that serves as input for other expressions that currently take sh:nodes as a parameter. This would allow chaining expressions together in a modular way, while potentially covering all of SPARQL in the future. The algorithm behind each expression would then just be a description of how the input sequences map to the output sequences; e.g. sh:count would return a single xsd:integer counting the input sequence. Does this make sense?
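A minimal Turtle sketch of the chaining idea (my illustration, not from the thread): sh:count is the hypothetical operator proposed above, not an existing SHACL-AF term, and ex:childCount is a made-up name; sh:path, sh:nodes, and sh:this follow the existing SHACL-AF node expression vocabulary.

```turtle
PREFIX sh: <http://www.w3.org/ns/shacl#>
PREFIX ex: <http://example.org/>

# Hypothetical chain: start at the focus node, follow ex:child,
# then count the resulting nodes. Each inner expression feeds its
# output nodes to the enclosing one via sh:nodes, mirroring the
# "outputNodes" default-variable idea above.
ex:childCount
    sh:count [                 # illustrative operator, returns one xsd:integer
        sh:path ex:child ;     # SHACL-AF path expression
        sh:nodes sh:this       # SHACL-AF focus node expression
    ] .
```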
I think I see what you're getting at. SPARQL functions themselves don't work on solution mappings, except when evaluating a variable. A "function" is something that takes already-evaluated arguments. There are a few things that look like functions but aren't (SPARQL's "functional forms", such as COALESCE, which do not evaluate all their arguments first).
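A hedged SPARQL illustration of that distinction (the query and names are mine, not from the thread): a true function receives already-evaluated arguments, while a functional form like COALESCE controls evaluation itself and can absorb the error of an unbound variable.

```sparql
PREFIX ex:   <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# COALESCE(?label, "unnamed") does not fail when ?label is unbound:
# as a functional form, it catches the evaluation error for ?label
# and moves on to the next argument, rather than requiring an
# already-evaluated value up front.
SELECT ?s (COALESCE(?label, "unnamed") AS ?name)
WHERE {
  ?s a ex:Thing .
  OPTIONAL { ?s rdfs:label ?label }   # ?label may be unbound
}
```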
#222 is probably the place for discussing the way evaluation happens. I put some examples in - more from other people would be great. Rules of more than one triple with NE conditions will need named variables in some form (warning: current thinking), c.f. CONSTRUCT templates, because a rule has to restrict values and then use them in the generated triples - one expression, two locations.
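For context, a sketch of why single-triple rules get by without named variables, adapted from the sh:TripleRule rectangle example in SHACL-AF; the closing comment about multi-triple rules is my reading of the point above, not spec text.

```turtle
PREFIX sh: <http://www.w3.org/ns/shacl#>
PREFIX ex: <http://example.org/>

ex:RectangleRulesShape
    a sh:NodeShape ;
    sh:targetClass ex:Rectangle ;
    sh:rule [
        a sh:TripleRule ;
        sh:subject sh:this ;      # the only shared term is the focus node
        sh:predicate ex:area ;
        sh:object [ ex:multiply ( [ sh:path ex:width ]
                                  [ sh:path ex:height ] ) ] ;
    ] .
# One expression yields one object for one triple. To emit several
# triples sharing a computed value ("one expression, two locations"),
# the rule language would need named variables, as CONSTRUCT has.
```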
Originally posted by @HolgerKnublauch in #167 (comment)