
Releases: StyraInc/enterprise-opa

v1.10.1

02 Oct 22:26
9ace37d

New data source integration: MongoDB

It is now possible to use a single MongoDB collection as a data source, with optional filtering/projection at retrieval time.

For example, suppose collection1 in a MongoDB instance contained the following documents:

[
  {"foo": "a", "bar": 0},
  {"foo": "b", "bar": 1},
  {"foo": "c", "bar": 0},
  {"foo": "d", "bar": 3}
]

You could then configure a MongoDB data source to use collection1:

plugins:
  data:
    mongodb.example:
      type: mongodb
      uri: <your_db_uri_here>
      auth: <your_login_info_here>
      database: database
      collection: collection1
      keys: ["foo"]
      filter: {"bar": 0}

The configuration shown above would filter this collection down to just:

[
  {"foo": "a", "bar": 0},
  {"foo": "c", "bar": 0}
]

The keys parameter in the configuration determines how the collection is transformed into a Rego object, mapping the unique key field(s) to the corresponding documents from the filtered collection:

{
  "a": {"foo": "a", "bar": 0},
  "c": {"foo": "c", "bar": 0}
}

You could then use this data source in a Rego policy just like any other aggregate data type. As a simple example:

package hello_mongodb

import future.keywords.if

filtered_documents := data.mongodb.example

allow if {
  count(filtered_documents) == 2 # Want just 2 items in the collection.
}

v1.10.0

29 Sep 19:55
9ace37d

This release updates the OPA version used in Enterprise OPA to v0.57.0, and integrates several bugfixes and new features.

v1.9.5

07 Sep 18:41
9ace37d

These releases contain release engineering fixes that sort out automated publishing of this changelog, the capabilities JSON files, and the gRPC protobuf definitions.

v1.9.4

07 Sep 17:24
c2af245

These releases contain release engineering fixes that sort out automated publishing of this changelog, the capabilities JSON files, and the gRPC protobuf definitions.

v1.9.3

07 Sep 16:12

These releases contain release engineering fixes that sort out automated publishing of this changelog, the capabilities JSON files, and the gRPC protobuf definitions.

v1.9.2

07 Sep 15:27

These releases contain release engineering fixes that sort out automated publishing of this changelog, the capabilities JSON files, and the gRPC protobuf definitions.

v1.9.0

01 Sep 22:05

This release updates the OPA version used in Enterprise OPA to v0.56.0, and integrates several bugfixes and new features.

mongodb.find, mongodb.find_one: query MongoDB databases during policy evaluation

Enterprise OPA now supports querying MongoDB databases!

Two new builtins are dedicated to this purpose: mongodb.find and mongodb.find_one. These correspond approximately to MongoDB's db.collection.find() and db.collection.findOne() operations, respectively, making it possible to integrate MongoDB databases efficiently into policies depending on whether a single-document or multi-document lookup is needed.

Find out more in the new Tutorial, or see the Reference documentation for more details.
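
As a rough illustration, a policy using mongodb.find might look like the sketch below. The request field names (uri, database, collection, filter) are assumptions rather than the documented schema; the Reference documentation has the exact request and response shapes.

package example_mongodb_builtin

import future.keywords.if

# Illustrative sketch only: the request fields below are assumptions;
# consult the Reference documentation for the exact schema of mongodb.find
# and the shape of its response.
response := mongodb.find({
  "uri": "mongodb://localhost:27017",
  "database": "database",
  "collection": "collection1",
  "filter": {"bar": 0},
})

allow if count(response) > 0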

dynamodb.send: query DynamoDB during policy evaluation

This builtin currently supports sending GetItem and Query requests to a DynamoDB endpoint, allowing direct integration of DynamoDB into policies.

Find out more in the new Tutorial, or see the Reference documentation for more details.
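
For illustration, a GetItem call via dynamodb.send might look roughly like the sketch below; the field names (operation, region, table, key) are assumptions, and the Reference documentation describes the actual request schema.

package example_dynamodb

import future.keywords.if

# Illustrative sketch only: request fields are assumptions, not the
# documented schema; see the Reference documentation for dynamodb.send.
item := dynamodb.send({
  "operation": "get_item",
  "region": "us-east-1",
  "table": "users",
  "key": {"id": {"S": "user-1234"}},
})

allow if item != {}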

vault.send: interact directly with Hashicorp Vault in policies

This new builtin provides support for more direct, request-oriented Hashicorp Vault integrations in policies than was previously possible through the EKM Plugin.

See the Reference documentation for more details.
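
As a rough sketch, reading a secret with vault.send might look like the example below; the address, token, and path fields are assumptions, so check the Reference documentation for the actual request schema.

package example_vault

import future.keywords.if

# Illustrative sketch only: request fields are assumptions; see the
# Reference documentation for the exact vault.send schema.
secret := vault.send({
  "address": "https://vault.example.com:8200",
  "token": opa.runtime().env.VAULT_TOKEN,
  "path": "secret/data/myapp/config",
})

allow if secret != {}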

gRPC plugin Decision Logs Support

The gRPC server plugin now integrates into Enterprise OPA's decision logging!
This means that gRPC requests are logged in a near-identical format to HTTP requests, allowing deeper insight into the usage and performance of Enterprise OPA deployments in production.

v1.8.0

01 Aug 22:40
046ef01

This release updates the OPA version used in Enterprise OPA to v0.55.0.

v1.7.0

10 Jul 19:07
6827af1

Envoy External Authorization Support

This release makes Styra Enterprise OPA a drop-in replacement for opa-envoy-plugin, to be used with the External Authorization feature of the popular Envoy API gateway, and Envoy-based service meshes such as Istio and Gloo Edge.

It works exactly like opa-envoy-plugin, i.e. the images known as openpolicyagent/opa:latest-envoy, but features all the Enterprise OPA enhancements.

See here for a general introduction to OPA and Envoy.
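
A policy written for opa-envoy-plugin works unchanged. The minimal sketch below assumes the standard Envoy External Authorization input document shape used by opa-envoy-plugin, where the HTTP request appears under input.attributes.request.http.

package envoy.authz

import future.keywords.if

default allow := false

# Minimal sketch assuming the usual opa-envoy-plugin input document.
allow if {
  input.attributes.request.http.method == "GET"
  input.attributes.request.http.path == "/healthz"
}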

Enhanced OpenTelemetry Support

Styra Enterprise OPA now supports OpenTelemetry Traces for the following operations:

  • Rego VM evaluations, with extra spans for http.send and sql.send
  • All decision log operations.
  • All of its gRPC handlers.

This improves observability, allowing you to pinpoint issues in your distributed authorization system more quickly.

v1.6.0

03 Jul 15:47
6827af1

This release updates the OPA version used in Styra Enterprise OPA to v0.54.0, along with gRPC plugin improvements and new gRPC streaming endpoints.

Support for large gRPC message sizes

Most gRPC implementations default to having a max receivable message size of 4 MB for both servers and clients. This helps avoid memory exhaustion from large messages sent by misconfigured or malicious actors on the other side of the connection.

This size limit presents a problem for Enterprise OPA though: a relatively simple rule query that returns a large amount of data can easily break past that 4 MB message size limit. Additionally, clients who need to provide more than 4 MB of data for a data update or rule query input can also run into the receivable message size limit. To work around this problem, we have to attack it from both the client and server sides.

On the client side, most gRPC implementations allow providing the "Max Receive Message Size" as a parameter for the gRPC call. (See the MaxCallRecvMsgSize CallOption in grpc-go, for example.) This means that clients who want to receive potentially massive responses from the Enterprise OPA server need to do a little more setup at call time, but don't necessarily need to change their Enterprise OPA configs.

For the server side of the problem, we changed Enterprise OPA to support a new configuration option for the gRPC plugin: grpc.max_recv_message_size

In the example configuration below, we start up the Enterprise OPA gRPC server on localhost:9090, and set it to receive messages up to 8 MB in size:

plugins:
  grpc:
    # 8 MB, in bytes:
    max_recv_message_size: 8388608
    addr: "localhost:9090"

This allows the server to receive larger gRPC messages from clients than before.

Fixing both sides of the large gRPC message size problem allows for high-throughput and data-heavy use cases over the gRPC API that were not possible before.

New streaming gRPC endpoints for the Data and Policy APIs

The Data and Policy gRPC services now provide bidirectional streaming endpoints! These endpoints work similarly to the experimental BulkRW endpoint that was explored in earlier versions of Enterprise OPA, and provide several speed and efficiency benefits over the REST API or existing unary gRPC endpoints.

They provide fixed-structure transactions that describe batches of CRUD operations, where all "write" operations (that would cause changes to the Data and Policy stores) are run as a single, sequential write transaction first, and then all "read" operations, such as rule queries, are evaluated in parallel. If any write operations fail, the entire request fails. Read operations have their failures reported as normal response messages with error-wrapping message types.

These batched operations can provide substantial throughput improvements over using the existing unary gRPC endpoints:

  • The connection cost is paid once at the start of the stream, instead of once per call.
  • Read and write operations enjoy greatly reduced contention for access to the Data and Policy stores.
  • Some operations can be parallelized.

For styles of API access where the successes and failures of write operations should be independent of one another, callers can send several smaller messages over the stream, and will receive back individual successful responses, or error messages for each failure that occurs.

See the Enterprise OPA gRPC docs on the Buf Schema Registry for more details.