Python Scripts
Python Scripts
In the page on writing a BESS configuration script, we saw that the Python code does:

```
global somevar
somevar = Module(...)
```

whenever we write:

```
somevar::Module(...)
```
In this page we cover the gory details, and also the special cases that apply to Python test scripts. In particular, what exactly is (in) our namespace, and how exactly does this all hook together? The code in `bess/bessctl/sugar.py` has some of the details (for instance, `somevar::Module(...)` is equivalent to writing `__bess_module__('somevar', 'Module', ...)`), but a few key items are scattered elsewhere in the BESS CLI and related code.
All of the C++ modules are inserted into this namespace as global names. This includes both regular module classes and port driver classes, so all the C++ names for modules (see Writing Your Own Module) must be kept distinct from all the C++ names for ports (see Built In Modules and Ports). The C++ code already ensures that the module names are unique, and that the port names are unique, but if you accidentally duplicate a port name in a new module, scripts will no longer function.
Whenever the CLI runs any file, it creates a new namespace for that file. This is the logical equivalent of Python's action when running `import module`: the imported module has its own separate namespace, different from the namespace of any other imported module. Any global variable created within this namespace, whether via the `global` keyword or simply by assigning to a name while outside a function, is really just "module global". Local names, inside Python functions, are local, unless you use the `global` or (Python 3 only) `nonlocal` keyword to declare them.
The same is true for BESS scripts. Be aware, however, that `var::Module` always assigns the variable globally within the module. In fact, it's literally done through a call to `__bess_module__` (see the function list below). This function has a return value: it returns the module or port instance (or a list of such instances) created and bound to the variables. To create a tuple of instances it is called with a tuple of variable names (and a single class name), so that:

```
a,b::Module(arg1, arg2)
```

translates into:

```
__bess_module__(('a', 'b'), 'Module', arg1, arg2)
```
The two return values, created by two gRPC calls to the C++ `Module(arg1, arg2)` constructor, get assigned into `a` and `b` as if by the more typical Python code:

```
tmp = __bess_module__(...)
global a, b
a, b = tmp
```
To avoid this global effect, you can use ordinary assignment:

```
a = Module(arg1, arg2)
b = Module(arg1, arg2)
```

Most of the time this makes no real difference; if it matters to you whether your names are globally visible, use whichever form you prefer. Note that the magic syntax variant will refuse to overwrite any existing global names (but it fails to check for existing local names; this is a bug).
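The name-binding behavior described above can be sketched in plain Python. This is a hypothetical simplification for illustration only (`FakeModule` and `bess_module_sketch` are made-up names); the real `__bess_module__` creates its instances through gRPC calls into BESS:

```python
class FakeModule:
    """Stand-in for a BESS module class (illustration only)."""
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

def bess_module_sketch(names, mclass, *args, **kwargs):
    """Sketch of __bess_module__'s binding rules: one instance per
    name, each bound as a global, refusing to overwrite existing
    globals."""
    if isinstance(names, str):
        names = (names,)
    for name in names:
        if name in globals():
            raise NameError('%r is already bound' % name)
    instances = tuple(mclass(*args, **kwargs) for _ in names)
    globals().update(zip(names, instances))  # assignment is always global
    return instances if len(instances) > 1 else instances[0]

# 'a,b::FakeModule(1, 2)' behaves roughly like:
bess_module_sketch(('a', 'b'), FakeModule, 1, 2)
print(a.args, b.args)   # (1, 2) (1, 2)
```

Note that each name gets its own instance, constructed with the same arguments, and a second attempt to bind an already-bound global name raises an error.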
As noted above, all the C++ modules and ports are already in the namespace, using the names defined in the C++ and protobuf code. If you don't need one of these names it is safe to rebind it: for example, if you are not going to use the `Source` module, you can define your own class named `Source`.
The rest of the standard namespace consists of the following:
- `__builtins__`: The usual.
- `bess`: Bound to the Python instance that talks to BESS. Use `bess.add_worker()` to add workers, for instance, or `bess.pause_all()` and `bess.resume_all()` to pause and resume workers. All of the normal BESS controls are available here.
- `__bess_env__(key, default=None)`: The environment fetcher for the syntactic sugar `$ENV!default`.
- `__bess_module__(module_names, mclass_name, *args, **kwargs)`: The module and port instance builder described above. If given a tuple of names, it creates and binds one instance per name.
- `ConfError`: The type of a configuration error.
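The `$ENV!default` sugar resolves to a `__bess_env__` call that reads an environment variable with a fallback. A minimal sketch of that behavior, assuming a made-up name `bess_env_sketch` (see `bess/bessctl/sugar.py` for the real translation):

```python
import os

def bess_env_sketch(key, default=None):
    """Sketch of __bess_env__: return the environment variable named
    key, or the default if it is unset."""
    value = os.getenv(key)
    return value if value is not None else default

# '$BESS_DEMO!fallback' would behave roughly like:
os.environ['BESS_DEMO'] = 'eth0'
print(bess_env_sketch('BESS_DEMO', 'fallback'))     # eth0
print(bess_env_sketch('BESS_MISSING', 'fallback'))  # fallback
```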
Test scripts, which are found in `bessctl/conf/testing/module_tests/` and are run via `bessctl/conf/testing/run_module_tests.bess`, are a little bit special. They are run much like ordinary scripts, but their namespace is pre-filled with the following extra names:
- `scapy`: Bound to the result of `import scapy.all as scapy`.
- `socket`, `time`: Bound to the results of `import socket` and `import time`, respectively.
- `SOCKET_PATH`: Contains (as a string) the path in which sockets created by `gen_socket_and_port` live.
- `SCRIPT_STARTTIME`: Contains (as a string) a unique timestamp that is useful for making unique file or socket names.
- `gen_socket_and_port(sockname)`: Takes a string and returns a pair of items: a BESS `UnixSocketPort` instance that allows sending and receiving data over an `AF_UNIX SOCK_SEQPACKET` socket, and the Python socket instance that is connected to this BESS port. Thus you might write:

  ```
  p, s = gen_socket_and_port("name" + SCRIPT_STARTTIME)
  PortInc(port=p.name) -> ...
  ```

  You can now call `s.send(some_bytes)` and those bytes will be transmitted into BESS, where they will appear on the `PortInc` instance and be sent through the pipeline. To receive data you might write:

  ```
  ... -> PortOut(port=p.name)
  ```

  and then call `s.recv(2048)` to get the bytes.
- `gen_packet(proto, src_ip, dst_ip, ip_ttl=64, srcport=1001, dstport=1002)`: Creates a packet with the specified protocol (`scapy.TCP` or `scapy.UDP`) and the specified IP addresses.
- `pkt_str(pkt)`: Returns (as a string) a printable representation of a packet. If the packet is a `scapy.Packet` instance, this uses `pkt.summary()` as well as a hex encoding. You may pass in `None`, in which case this returns the string `"None"`.
- `aton(ip)`: Returns `socket.inet_aton(ip)`.
- `monitor_task(module, wid)`: Calls `module.attach_task(wid=wid)`.
- `CRASH_TEST_INPUTS`, `OUTPUT_TEST_INPUTS`, `CUSTOM_TEST_FUNCTIONS`: You modify these lists to get tests run automatically. Their use is a bit tricky, so read the next section.
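The kind of socket `gen_socket_and_port` hands back can be exercised in plain Python without a BESS pipeline: `socket.socketpair()` stands in here for the BESS `UnixSocketPort` end. This is an illustration of the socket type only, not the real helper, and `SOCK_SEQPACKET` socket pairs are Linux-specific:

```python
import socket

# An AF_UNIX SOCK_SEQPACKET connection: the same socket type that
# gen_socket_and_port pairs with a BESS UnixSocketPort.
sock_a, sock_b = socket.socketpair(socket.AF_UNIX, socket.SOCK_SEQPACKET)
sock_a.send(b'\x00' * 60)   # a minimal 60-byte dummy "packet"
frame = sock_b.recv(2048)   # SEQPACKET preserves message boundaries
print(len(frame))           # 60
sock_a.close()
sock_b.close()
```

Because `SOCK_SEQPACKET` preserves message boundaries, each `send()` arrives as one discrete packet on the other end, which is what makes it a natural fit for packet I/O in the tests.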
There are three kinds of tests: crash tests, I/O tests, and custom tests.
You define your crash tests by appending triples (or 3-element lists) to `CRASH_TEST_INPUTS`. The triple contains a module instance, the number of input gates, and the number of output gates. For instance, the ACL test creates an ACL instance, which always has one input gate and one output gate, so part of the ACL test reads:

```
# build a firewall instance
fw_instance_1 = ACL(rules=[{...}])
CRASH_TEST_INPUTS.append([fw_instance_1, 1, 1])
```
The test module runs these tests by connecting a packet source (generating packets of the test framework's choosing) to each input gate, and connecting each output gate to a `Sink()`. It then runs the BESS pipeline for a few seconds to make sure no crashes occur.
You may make as many module instances as you like; the test module will run them all. It's a good idea to use a different module instance for each test:

```
fw_instance_2 = ACL(...)
CRASH_TEST_INPUTS.append([fw_instance_2, 1, 1])
```
You can add all your tests at once, since none actually run until your entire module has loaded. Note that `extend` takes a single list of entries:

```
fw1 = ACL(...)
fw2 = ACL(...)
CRASH_TEST_INPUTS.extend([[fw1, 1, 1], [fw2, 1, 1]])
```
If no crashes occur, the tests pass.
I/O tests are the most complex that the test framework itself handles. You define these by appending 4-tuples to `OUTPUT_TEST_INPUTS`. (As before, lists instead of tuples are fine.) The first element is a module instance, and the second and third are the numbers of input and output gates, just as before. The fourth, however, is a list of dictionaries describing which packet to deliver to which input gate, and which packet to expect on which output gate:
```
send1 = gen_packet(scapy.TCP, '1.2.3.4', '22.22.22.22')      # firewall rule here is DROP
send2 = gen_packet(scapy.TCP, '96.22.22.22', '22.22.22.22')  # firewall rule here is PERMIT
expect1 = None   # we expect the firewall to drop send1 on the floor
expect2 = send2  # we expect the firewall to pass send2 on
dict1 = {'input_port': 0, 'input_packet': send1, 'output_port': 0, 'output_packet': expect1}
dict2 = {'input_port': 0, 'input_packet': send2, 'output_port': 0, 'output_packet': expect2}
OUTPUT_TEST_INPUTS.append([fw4, 1, 1, [dict1, dict2]])
```
Since there are two dictionaries here, the test framework will run two tests on this firewall instance. Each test connects all of the instance's input and output gates to Python sockets (one Python socket per port pair, one port pair per input and/or output gate). The framework then runs the tests in order: it sends `send1` on Python socket 0 (connected to input gate 0) and expects `expect1` (`None`, in this case) to be received, within a short timeout, on Python socket 0 (connected to output gate 0). If the test framework sees the expected result, the test passes. If it sees some other packet, the test fails. In either case, the framework moves on to the next dictionary of gates and packets.
(Once all the I/O tests have been run, the framework closes all the Python sockets it created.)
Custom tests are the simplest for the framework but the most difficult for the test writer (i.e., you).
Here, the framework simply calls `bess.pause_all()` and `bess.reset_all()` to pause and reset, then calls your custom test function. If no exception occurs, your test is considered to have passed; to have your test fail, you must raise an error.
To list the test functions you want called, add them to the `CUSTOM_TEST_FUNCTIONS` variable:

```
def custom_test():
    ...
    assert some_condition

CUSTOM_TEST_FUNCTIONS.append(custom_test)
```
Note that you must call `bess.resume_all()` in your test function.
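The pass/fail logic applied to `CUSTOM_TEST_FUNCTIONS` can be sketched in plain Python. This is a rough, hypothetical runner (in a real test script the list already exists, and the real runner also pauses and resets BESS around each call):

```python
# Sketch: each registered function is called in turn; an uncaught
# exception marks that test as failed, a clean return marks it passed.
CUSTOM_TEST_FUNCTIONS = []

def custom_test_ok():
    assert 1 + 1 == 2   # no exception raised: this test passes

def custom_test_bad():
    raise AssertionError('deliberate failure')

CUSTOM_TEST_FUNCTIONS.extend([custom_test_ok, custom_test_bad])

results = {}
for fn in CUSTOM_TEST_FUNCTIONS:
    try:
        fn()
        results[fn.__name__] = 'PASS'
    except Exception:
        results[fn.__name__] = 'FAIL'

print(results)   # {'custom_test_ok': 'PASS', 'custom_test_bad': 'FAIL'}
```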