The OIF Device provides an example of an OpenC2 Consumer that can readily interoperate with other entities and functions and then provide an OpenC2 Response.
The OIF Device supports JSON encoding of OpenC2 commands and responses, and message transfer over MQTT or HTTP.
Given OpenC2's goals and design philosophy, a Consumer implementer essentially needs to:
1. Interface to the message fabric to receive commands and send responses
2. Receive commands and validate them against the relevant AP schema
3. Parse the command and convert / translate / interpret it into the local syntax / API for the relevant function
4. Execute the commanded action and collect response information
5. Package the response information in OpenC2 format
6. Send the response back to the Producer (or other destination, per the environment)
The OIF Device provides a skeleton for steps 3 and 4, but only as a very basic implementation intended as a starting point.
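For illustration, a minimal OpenC2 "query features" command and a matching response (a generic example of the OpenC2 message format, not output captured from the OIF Device) look roughly like:

Command:
{
  "action": "query",
  "target": { "features": ["versions", "profiles"] }
}

Response:
{
  "status": 200,
  "results": { "versions": ["1.0"], "profiles": ["slpf"] }
}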
Basic support for two OpenC2 APs is provided; either can be enabled within the config.toml file via the following configuration fields:
- schema_file: Actuator Profile schema used for message validation
  - options: th_ap_vbeta.json or slpf_ap_v2.0.json
- HTTP / is_enabled: HTTP transport functionality. Enabled by default.
- MQTT / is_enabled: MQTT transport functionality. Enabled by default.
- KESTREL / is_enabled: Kestrel example queries. Disabled by default.
- SLPF: feature flag coming soon; explicitly on by default.
Various other configuration options are available in the config.toml file. The application must be restarted for a change to take effect.
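As a rough sketch (the section and key names below are illustrative; the config.toml shipped in the repository is authoritative), the settings described above look something like:

schema_file = "slpf_ap_v2.0.json"

[HTTP]
is_enabled = true

[MQTT]
is_enabled = true

[KESTREL]
is_enabled = false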
The examples provided with OIF Device are not intended for production use.
This GitHub public repository openc2-oif-device was created at the request of the OASIS OpenC2 Technical Committee as an OASIS TC Open Repository to support development of open source resources related to Technical Committee work.
While this TC Open Repository remains associated with the sponsor TC, its development priorities, leadership, intellectual property terms, participation rules, and other matters of governance are separate and distinct from the OASIS TC Process and related policies.
All contributions made to this TC Open Repository are subject to open source license terms expressed in Apache License v 2.0. That license was selected as the declared Applicable License when the TC voted to create this Open Repository.
As documented in Public Participation Invited, contributions to this TC Open Repository are invited from all parties, whether affiliated with OASIS or not. Participants must have a GitHub account, but no fees or OASIS membership obligations are required. Participation is expected to be consistent with the OASIS TC Open Repository Guidelines and Procedures, the open source LICENSE.md designated for this particular repository, and the requirement for an Individual Contributor License Agreement that governs intellectual property.
OpenC2 Integration Framework (OIF) is a project that will enable developers to create and test OpenC2 specifications and implementations without having to recreate an entire OpenC2 ecosystem.
OIF consists of two major parts: the "Orchestrator," which functions as an OpenC2 Producer, and the "Device," which functions as an OpenC2 Consumer.
This particular repository contains the code required to set up an OpenC2 Device. The Orchestrator repository can be found here. Due to port bindings it is recommended that the orchestrator and the device not be run on the same machine.
The OIF Device was created with the intent of being an easy-to-configure OpenC2 consumer that can be used in the creation of reference implementations. To that end it allows for the addition of multiple actuators, serializations, and transportation types.
- Clone from git
- Create a virtual environment
- Run:
pip install -r requirements.txt
- Run:
python ./main.py
- Go here to view the HTTP APIs (an example request is sketched after this list):
http://127.0.0.1:5000/docs
- See the config.toml file for MQTT Topics and to enable features
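As a purely hypothetical example (the exact endpoint path, headers, and payload are listed on the /docs page above and may differ), an OpenC2 command could be posted to the device's HTTP interface with curl:

curl -X POST http://127.0.0.1:5000/ \
  -H "Content-Type: application/openc2-cmd+json;version=1.0" \
  -d '{"action": "query", "target": {"features": ["versions"]}}'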
Note: See below to set up more advanced capabilities, such as Kestrel, STIX-shifter, and Elastic.
Clean your docker instances:
./scripts/cleanup.sh
Create the Docker network used by Elasticsearch and Kibana:
docker network create elastic
Start Elasticsearch with SSL/HTTPS (enabled by default in Elasticsearch 8.x) by creating and running the container:
docker run \
--name elasticsearch \
--net elastic \
-p 9200:9200 \
-e discovery.type=single-node \
-e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
-e ELASTIC_PASSWORD=elastic \
-it \
docker.elastic.co/elasticsearch/elasticsearch:8.2.2
In another terminal, copy Elasticsearch's CA certificate out of the container; it is needed to log in and make calls later:
docker cp elasticsearch:/usr/share/elasticsearch/config/certs/http_ca.crt .
To verify the CA certificate was obtained successfully, query the cluster:
curl --cacert http_ca.crt https://elastic:elastic@localhost:9200
The output should be similar to:
{
"name" : "b560471008eb",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "GRzNb8kdTXW54T1KLMQBFA",
"version" : {
"number" : "8.2.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "9876968ef3c745186b94fdabd4483e01499224ef",
"build_date" : "2022-05-25T15:47:06.259735307Z",
"build_snapshot" : false,
"lucene_version" : "9.1.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
Start the Kibana Docker instance:
docker run \
--name kibana \
--net elastic \
-p 5601:5601 \
docker.elastic.co/kibana/kibana:8.2.2
Kibana's output should include a link similar to the following (the code will differ). Open it in a web browser to log in with a security token:
http://0.0.0.0:5601/?code=038409
In a new terminal, generate the enrollment token and paste it into the Kibana prompt:
docker exec -it elasticsearch \
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token \
-s kibana
Log in to Kibana with:
- username: elastic
- password: elastic
To load data into Elasticsearch, install elasticdump (only needed once):
npm i elasticdump -g
Then, run:
./scripts/load-elastic-data.sh
Output:
Mon, 05 Jun 2023 13:46:55 GMT | dump complete
On the side navigation bar, go to Management > Stack Management > Index Management to view the indices. This shows the data stored in Elasticsearch.
- On the side navigation bar, go to Management > DevTools to do raw queries similar to HTTP GET requests or curl requests
- On the side navigation bar, go to Analytics > Dashboard and create a DataView to query the data using a UI
GET /linux-91-sysflow-bh22-20220727/_search?size=1
{
"query": {
"query_string": {
"query": "0.4.3",
"fields": [
"agent.version"
]
}
}
}
curl --cacert ./http_ca.crt "https://elastic:elastic@localhost:9200/linux-91-sysflow-bh22-20220727/_search?size=1"
- Add to your ~/.bashrc file:
alias pp='python -mjson.tool'
source ~/.bashrc
- Add | pp to the end of your command-line query to pretty-print the JSON output, for example:
curl --cacert ./http_ca.crt -XGET "https://elastic:elastic@localhost:9200/linux-91-sysflow-bh22-20220727/_search?size=1" -H "kbn-xsrf: reporting" -H "Content-Type: application/json" -d'
{
"query": {
"query_string": {
"query": "0.4.3",
"fields": [
"agent.version"
]
}
}
}' | pp
Install dependencies
pip install -r requirements.txt
To test, run:
kestrel ./hunts/huntflow/helloworld.hf
Output
name pid
firefox.exe 201
chrome.exe 205
[SUMMARY] block executed in 1 seconds
VARIABLE TYPE #(ENTITIES) #(RECORDS) process*
proclist process 4 4 0
browsers process 2 2 0
*Number of related records cached.
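The huntflow that produces this output is along the following lines (a sketch based on the standard Kestrel hello-world example; the helloworld.hf file shipped under hunts/huntflow/ is authoritative):

# create a few processes as a starting hunt variable
proclist = NEW process [ {"name": "cmd.exe", "pid": "123"}
                       , {"name": "explorer.exe", "pid": "99"}
                       , {"name": "firefox.exe", "pid": "201"}
                       , {"name": "chrome.exe", "pid": "205"}
                       ]

# match the processes that are browsers
browsers = GET process FROM proclist WHERE name = "firefox.exe" OR name = "chrome.exe"

# display the name and pid of the matched processes
DISP browsers ATTR name, pid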
Preloaded and mapped data
To set up the configuration file, see: STIX-shifter Data Source Interface
kestrel ./hunts/huntflow/query_data_via_stixshifter.hf
STIX formatted data
From an HTTPS GitHub file:
kestrel ./hunts/huntflow/helloworld.hf
From a local file (the file path is currently hardcoded; a script is needed to pass it in):
kestrel ./hunts/huntflow/query_local_stixdata.hf
kestrel ./hunts/huntflow/query_net_traffic_stixdata.hf