Releases: StyraInc/enterprise-opa
v1.30.0
With this release, the built-in sql.send() can be used to talk to Oracle databases.
This release further includes various dependency bumps and updates the embedded Regal version to v0.29.0.
sql.send supports Oracle
sql.send now supports Oracle databases! To connect, use a data_source_name of the form

oracle://USER:PASSWORD@HOST:PORT/DATABASE
See the sql.send documentation for all details about the built-in.
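A minimal Rego sketch of what a query against Oracle could look like. The driver name, table, column names, bind-parameter style, and credentials below are assumptions for illustration; only the oracle:// DSN scheme comes from these release notes — consult the sql.send documentation for the exact parameters.

```
package example

import rego.v1

# Hypothetical lookup: fetch rows for the role given in the input document.
employee_rows := result.rows if {
	result := sql.send({
		"driver": "oracle",
		"data_source_name": "oracle://app_user:secret@db.internal:1521/ORCLPDB1",
		"query": "SELECT name, role FROM employees WHERE role = :1",
		"args": [input.role],
	})
}
```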
v1.29.1
v1.29.0
v1.28.0
This release includes various dependency bumps, as well as support for Google Cloud Storage as a sink for decision logs.
Google Cloud Storage as a Decision Log Sink
You can now configure Enterprise OPA to send decision logs to Google Cloud Storage.
This is done by configuring a new sink of type gcs in the decision log configuration:
```yaml
decision_logs:
  plugin: eopa_dl
plugins:
  eopa_dl:
    output:
      - type: gcs
        bucket: logs
```
For all configuration options, please see the reference documentation.
v1.27.1
v1.27.0
v1.26.0
This release contains various version bumps and an improvement to EKM ergonomics!
External Key Manager (EKM): Simplified configuration, support for plugin configs
Starting with this release, you no longer need to reference service and key replacements via JSON pointers; you can use direct lookups instead, like this:
```yaml
services:
  acmecorp:
    credentials:
      bearer:
        scheme: "bearer"
        token: "${vault(kv/data/acmecorp/bearer:data/token)}"
```
Furthermore, these lookups are also supported in plugin configurations, allowing you to retrieve secrets for those as well.
These replacements can also be done in substrings, like this:
```yaml
decision_logs:
  plugin: eopa_dl
plugins:
  eopa_dl:
    output:
      - type: http
        url: https://myservice.corp.com/v1/logs
        headers:
          Authorization: "bearer ${vault(kv/data/logs:data/token)}"
```
Replacements also happen on discovery bundles, if their config includes lookup calls of this sort.
See the documentation on Using Secrets from HashiCorp Vault for details.
v1.25.1
This release contains optimizations for the Batch Query API.
v1.25.0
v1.24.8
This release upgrades the common_input field of the Batch Query API to support recursive merges with each per-query input. This should further reduce request sizes when the majority of each query's input is shared data.
Here is an example of the recursive merging in action:
```json
{
  "inputs": {
    "A": {
      "user": {
        "name": "alice",
        "type": "admin"
      },
      "action": "write"
    },
    "B": {
      "user": {
        "name": "bob",
        "type": "employee"
      }
    },
    "C": {
      "user": {"name": "eve"}
    }
  },
  "common_input": {
    "user": {
      "company": "Acme Corp",
      "type": "user"
    },
    "action": "read",
    "object": "id1234"
  }
}
```
The above request using common_input is equivalent to sending this request:
```json
{
  "inputs": {
    "A": {
      "user": {
        "name": "alice",
        "company": "Acme Corp",
        "type": "admin"
      },
      "action": "write",
      "object": "id1234"
    },
    "B": {
      "user": {
        "name": "bob",
        "company": "Acme Corp",
        "type": "employee"
      },
      "action": "read",
      "object": "id1234"
    },
    "C": {
      "user": {
        "name": "eve",
        "company": "Acme Corp",
        "type": "user"
      },
      "action": "read",
      "object": "id1234"
    }
  }
}
```
If a key appears in both common_input and a per-query input object, the per-query input's value is used. This behavior is intentionally like that of object.union in Rego.
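The merge semantics can be reproduced directly with object.union, passing the per-query input as the second (winning) argument. This sketch reuses query "A" from the example above:

```
package example

import rego.v1

common_input := {
	"user": {"company": "Acme Corp", "type": "user"},
	"action": "read",
	"object": "id1234",
}

# Per-query input "A": its nested "user.type" and top-level "action"
# override the values from common_input; everything else is merged in.
merged := object.union(common_input, {
	"user": {"name": "alice", "type": "admin"},
	"action": "write",
})
# merged == {
#   "user": {"name": "alice", "company": "Acme Corp", "type": "admin"},
#   "action": "write",
#   "object": "id1234",
# }
```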