Releases: StyraInc/enterprise-opa
v1.15.4
v1.15.3
v1.15.2
v1.15.1
This is a bug fix release for an exception that occurred when using a
per-output mask or drop decision that included a print() statement.
It's only relevant to you if
- you are using the eopa_dl decision logs plugin,
- with a per-output mask_decision or drop_decision,
- and that decision includes a print() call.
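As a minimal sketch (rule and field names hypothetical), a decision like the
following combined both ingredients and could previously trigger the exception:

package system

import future.keywords

# Hypothetical per-output mask decision: erase a field from the logged
# input, and print a debug message while doing so.
my_mask contains "/input/password" {
  print("masking password before shipping the event")
}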
v1.15.0
This release updates the OPA version used in Enterprise OPA to v0.60.0,
and includes improvements for Decision Logging, sql.send, and the eopa eval
experience.
Contextual information on errors in eopa eval
When you evaluate a policy, eopa eval --format=pretty will include extra links
to docs pages explaining the errors, and how to overcome them.
For example, with a policy like
# policy.rego
package policy
allow := data[input.org].allow
$ eopa eval -fpretty -d policy.rego data.policy.allow
1 error occurred: policy.rego:3: rego_recursion_error: rule data.policy.allow is recursive: data.policy.allow -> data.policy.allow
For more information, see: https://docs.styra.com/opa/errors/rego-recursion-error/rule-name-is-recursive
Note that the output only appears on standard error, and only for output format
"pretty", so it should not interfere with any scripted usage of eopa eval you
may have.
Decision Logs: per-output mask and drop decisions
Enterprise OPA lets you configure multiple sinks for your decision logs.
With this release, you can also specify per-output mask_decision and
drop_decision settings, to accommodate different privacy and data restrictions.
For example, this configuration would apply a mask decision (data.system.s3_mask)
only for the S3 sink, and a drop decision (data.system.console_drop) for the
console output.
decision_logs:
  plugin: eopa_dl
plugins:
  eopa_dl:
    buffer:
      type: memory
    output:
      - type: console
        drop_decision: system/console_drop
      - type: s3
        mask_decision: system/s3_mask
        # more config
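The referenced decisions are ordinary Rego rules. As a sketch (the masked field
and the drop criterion are hypothetical):

package system

import future.keywords

# Mask decision for the S3 sink: erase the password from the logged input.
s3_mask contains "/input/password"

# Drop decision for the console sink: skip events whose result was true.
console_drop if input.result == true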
Also see
- Decision Logs Configuration
- Tutorial: Logging decisions to AWS S3
- Masking and dropping decision logs from the OPA docs.
sql.send supports MS SQL Server
sql.send now supports Microsoft SQL Server! To connect to it, use a
data_source_name of
sqlserver://USER:PASSWORD@HOST:PORT?database=DATABASE_NAME
For a complete description of the data_source_name options available, see:
https://github.com/microsoft/go-mssqldb#connection-parameters-and-dsn
It also comes with the usual Vault helpers, under system.eopa.utils.sqlserver.v1.vault.
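As a sketch (assuming the sqlserver driver name and a hypothetical user_roles
table), a lookup could then be written as:

roles := sql.send({
  "driver": "sqlserver",
  "data_source_name": "sqlserver://USER:PASSWORD@HOST:PORT?database=DATABASE_NAME",
  "query": "SELECT role FROM user_roles WHERE username = @p1",
  "args": [input.username],
}) # => {"rows": ...}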
See the sql.send documentation for all details.
Telemetry
Telemetry data sent to Styra's telemetry system now includes the License ID.
You can use eopa run --server --disable-telemetry to opt out.
v1.14.0
This release updates the OPA version used in Enterprise OPA to v0.59.0, and integrates some performance improvements and a few bug fixes.
CLI
- Fixed a panic when running eopa bundle convert on Delta Bundles.
Runtime
- The Set and Object types received a few small performance optimizations in this release, netting speedups of around 1-7% on some benchmarks.
- Set union operations are slightly faster now.
v1.13.0
This release contains a security fix for gRPC handlers used with OpenTelemetry, various performance
enhancements, bug fixes, third-party dependency updates, and a way to have Enterprise OPA fall back
to "OPA-mode" when there is no valid license.
OpenTelemetry CVE-2023-47108
This release updates the gRPC handlers used with OpenTelemetry to address a security vulnerability (CVE-2023-47108, GHSA-8pgv-569h-w5rw).
Fallback to OPA
When using eopa run and eopa exec without a valid license, Enterprise OPA will
now log a message, and continue executing as if it were an ordinary instance of OPA.
This is enabled by running the license check synchronously. The check is quick
when the license file or environment variables are missing.
If you don't want to fall back to OPA, because you expect your license to be
present and valid, you can pass --no-license-fallback to both eopa run and
eopa exec: the license validation will run asynchronously, and stop the process
on failures.
Bug Fixes
- The gRPC API's decision logs now include the input sent with the request.
- An issue with the mongodb.find and mongodb.find_one caching has been resolved.
v1.12.0
This release updates the OPA version used in Enterprise OPA to v0.58.0,
and integrates several performance improvements and a bug fix:
Function return value caching
Function calls in Rego now have their return value cached: when called with the same arguments,
subsequent evaluations will use the cached value.
Previously, the function body was re-evaluated on every call.
Currently, only simple argument types are subject to caching: numbers, bools,
and strings; collection arguments are exempt.
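For example (function name hypothetical), the second call below now reuses the
cached result instead of re-evaluating the body:

package example

double(x) := 2 * x

a := double(21) # evaluates the function body
b := double(21) # same argument: the cached value is used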
Library utils lazy loading
If your policy does not make use of any of the data.system.eopa.utils helpers
for Enterprise OPA's builtin functions, they are not loaded, avoiding
superfluous work in the compiler.
Topdown-specific compiler stages
When evaluating a policy, certain compiler stages in OPA are now skipped: namely, the Rego VM in
Enterprise OPA does not make use of OPA's rule and comprehension indices, so we no longer build them
in the compiler stages.
Numerous Rego VM improvements
The Rego VM now uses fewer allocations, improving overall performance.
Preview API
Fixes a bug with "Preview Selection".
v1.11.1
This is a bug fix release addressing the following security issue:
OpenTelemetry-Go Contrib security fix CVE-2023-45142:
Denial of service in otelhttp due to unbound cardinality metrics.
Note: GO-2023-2102 (a malicious HTTP/2 client which rapidly creates requests
and immediately resets them can cause excessive server resource consumption)
was fixed in v1.11.0.
v1.11.0
This release includes several bug fixes and a powerful new feature for data source integrations: Rego transform rules!
[New Feature] Data transformations are available for all data source integrations
Enterprise OPA now supports Rego transform rules for all data source plugins!
These transform rules allow you to reshape and modify data fetched by the data sources, before that data is stored in EOPA for use by policies.
This feature can be opted into for a data source by adding a rego_transform key
to its YAML configuration block.
Example transform rule with the HTTP data source
For this example, we will assume we have an HTTP endpoint that responds with the following JSON document:
[
  {"username": "alice", "roles": ["admin"]},
  {"username": "bob", "roles": []},
  {"username": "catherine", "roles": ["viewer"]}
]
Here's what the OPA configuration might look like for a fictitious HTTP data source:
plugins:
  data:
    http:
      type: http
      url: https://internal.example.com/api/users
      method: POST # default: GET
      body: '{"count": 1000}' # default: no body
      file: /some/file # alternatively, read request body from a file on disk (default: none)
      timeout: "10s" # default: no timeout
      polling_interval: "20s" # default: 30s, minimum: 10s
      follow_redirects: false # default: true
      headers:
        Authorization: Bearer XYZ
        other-header: # multiple values are supported
          - value 1
          - value 2
      rego_transform: data.e2e.transform
The rego_transform key at the end means that we will run the data.e2e.transform
Rego rule on the incoming data before that data is made available to policies
on this EOPA instance.
We then need to define our data.e2e.transform rule. rego_transform rules
generally take incoming messages as JSON via input.incoming and return the
transformed JSON for later use by other policies.
Below is an example of what a transform rule might look like for our HTTP data source:
package e2e

import future.keywords

transform[id] := d {
  some entry in input.incoming
  id := entry.username
  d := entry.roles
}
In the above example, the transform policy will populate the data.http.users
object with key-value pairs. Of note: the http key comes from the datasource
plugin configuration above.
Each key-value pair will be generated by iterating across the JSON list in
input.incoming, and for each JSON object, the key will be taken from the
username field, and the value from the roles field.
Given our earlier data source, the result stored in EOPA for data.http.users
will look like:
{
  "alice": ["admin"],
  "bob": [],
  "catherine": ["viewer"]
}
This general pattern applies to all the data source integrations in Enterprise OPA, including the Kafka data source (covered below).
In addition to input.incoming, which contains the incoming information
retrieved by the datasource, the value of input.previous can be used to refer
to all of the data currently stored in the plugin's subtree under data.
[Changed Behavior] Updates to the Kafka data source's Rego transform rules
The Kafka data source now supports the new rego_transform rule system, the same
as all of the other data source integrations. Concretely, it no longer expects
the output of the transform rule to be a JSON Patch object to be applied to the
existing data, but instead expects the output to be the full data object to be
persisted.
Because Kafka messages are often incremental updates, the input.previous
value should be used to refer to the rest of the data subtree.
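For instance, a transform merging each incoming batch into the previously
stored state could look like this (a sketch, assuming message values are
base64-encoded JSON documents with hypothetical id and roles fields):

package e2e

import future.keywords

# Key-value pairs parsed from the incoming batch of Kafka messages.
incoming[key] := roles {
  some msg in input.incoming
  payload := json.unmarshal(base64.decode(msg.value))
  key := payload.id
  roles := payload.roles
}

# The full object to persist: the previous state merged with the new
# entries, where new entries win.
transform := object.union(input.previous, incoming)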
See the Reference documentation for more details and examples of the new transform rules.
[Changed Behavior] Updates to the dynamodb series of builtins
In this release, dynamodb.send has been split apart into more specialized
variants embodying the same functionality: dynamodb.get and dynamodb.query.
dynamodb.get
For normal key-value lookups in DynamoDB, dynamodb.get provides a
straightforward solution.
Here is a brief usage example:
thread := dynamodb.get({
  "endpoint": "dynamodb.us-west-2.amazonaws.com",
  "region": "us-west-2",
  "table": "thread",
  "key": {
    "ForumName": {"S": "help"},
    "Subject": {"S": "How to write rego"}
  }
}) # => { "row": ...}
See the Reference documentation for more details.
dynamodb.query
For queries on DynamoDB, dynamodb.query allows control over the query
expression and other parameters.
Here is a brief usage example:
music := dynamodb.query({
  "region": "us-west-1",
  "table": "foo",
  "key_condition_expression": "#music = :name",
  "expression_attribute_values": {":name": {"S": "Acme Band"}},
  "expression_attribute_names": {"#music": "Artist"}
}) # => { "rows": ... }
See the Reference documentation for more details.
[Changed Behavior] Removal of MongoDB plugin keys
The keys configuration for the MongoDB datasource plugin is now deprecated.
Instead, MongoDB's native _id value will be used as the primary key for each
document.
Any restructuring or renormalization of the data should now be done via
rego_transform, as sketched below.
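For example, a transform along these lines could re-key documents by one of
their fields (a sketch; assumes the fetched documents arrive via
input.incoming and carry a hypothetical name field):

package e2e

import future.keywords

# Re-key each document by its "name" field and drop the native _id.
transform[key] := doc {
  some entry in input.incoming
  key := entry.name
  doc := object.remove(entry, ["_id"])
}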