diff --git a/source/basics/constant_generator.html.md b/source/basics/constant_generator.html.md
index f6dc0e7..d2450d8 100644
--- a/source/basics/constant_generator.html.md
+++ b/source/basics/constant_generator.html.md
@@ -44,7 +44,7 @@ the dataflow in the `ArmCartesianControlWdls` composition, we find:
The type name is `/base/samples/RigidBodyState`. Types like this one are
defined when implementing components, which is something we will see
-[later](../writing_components/types.html). You can have an overview of the
+[later](../type_system). You can have an overview of the
types already available in your Rock workspace by starting the
`rock-browse` tool.
@@ -121,7 +121,7 @@ end
**Note**: the position and orientation here are assumed to be respectively a
vector (of type Eigen::Vector3) and a quaternion (of type Eigen::Quaternion).
The underlying type system [is a subject for another
-part](../writing_components/types.html). For now, just accept it.
+part](../type_system). For now, just accept it.
{: .callout .callout-info}
This kind of "high-level argument shadowing low-level
diff --git a/source/basics/day_to_day.html.md b/source/basics/day_to_day.html.md
index 6578bc2..6a982d9 100644
--- a/source/basics/day_to_day.html.md
+++ b/source/basics/day_to_day.html.md
@@ -83,7 +83,7 @@ $ acd -p c/kdl
# Now in install/
~~~
-### Error Logs
+### Logs {#alog}
During build, and less often during updates, the tools autoproj calls will
error out. However, to keep the output of autoproj manageable, it redirects the
diff --git a/source/index.html.md b/source/index.html.md
index 36150a4..06f6d83 100644
--- a/source/index.html.md
+++ b/source/index.html.md
@@ -55,7 +55,8 @@ need to know at a certain point in time.
## Building Systems
3. [Workspace and Packages](workspace/index.html)
-4. [Writing components](writing_components/index.html)
+3. [The Type System](type_system/index.html)
+4. [Integrating Functionality](integrating_functionality/index.html)
5. Reusable Syskit modelling
7. Advanced data processing in Components
8. System coordination
diff --git a/source/integrating_functionality/components.html.md b/source/integrating_functionality/components.html.md
new file mode 100644
index 0000000..50b0b3d
--- /dev/null
+++ b/source/integrating_functionality/components.html.md
@@ -0,0 +1,109 @@
+---
+layout: documentation
+title: Components
+sort_info: 30
+---
+
+# Components
+{:.no_toc}
+
+- TOC
+{:toc}
+
+Within Rock, components are _implemented_ in C++. They are also _specified_ in a
+Ruby domain-specific language that is processed by a code generation tool,
+**oroGen**. This tool ensures that the component's interface matches its
+specification. It also removes most of the boilerplate otherwise needed to
+declare the component's interface in C++.
+
+From a package point of view, components are defined in an orogen package. The
+orogen packages are all placed in the `/orogen/` subdirectory of one of the
+[package categories](../workspace/conventions.html).
+
+**Important** an oroGen package and a library can share the same basename (e.g.
+`drivers/hokuyo` and `drivers/orogen/hokuyo`). This is even a recommended
+behavior when an orogen package is mainly tied to a certain library.
+{: .note}
+
+From this page on, the rest of this section will deal with the integration of
+the functionality from C++ libraries into Rock components by means of orogen.
+But let's first talk about how to create an orogen package.
+
+## Creating new oroGen packages {#create}
+
+Packages are created with the `rock-create-orogen` tool. Let's assume we want
+to create a `planning/orogen/sbpl` package. The workflow would be:
+
+~~~
+acd
+cd planning/orogen/
+rock-create-orogen sbpl
+cd sbpl
+# Edit sbpl.orogen
+rock-create-orogen
+# Fix potential mistakes and re-run rock-create-orogen until there are no errors
+# …
+~~~
+
+**What does `rock-create-orogen` do?** `orogen` does "private" code generation
+in a `.orogen` subfolder of the package, and creates a `templates/` folder.
+`rock-create-orogen` ensures that the initial repository commit does not
+contain any of these. If you don't want to use `git`, or if you're confident
+that you know which files and folders to commit and which to leave out, the second
+run is not needed.
+{: .note}
+
+Once this is done, [add the package to your build configuration](../workspace/add_packages.html#orogen).
+
+## Development Workflow
+
+Developing a component involves doing mainly three things:
+
+- [defining data types](../type_system/defining_types.html) for usage on
+ its interface. Types do not necessarily have to be defined in standalone
+ orogen packages as described in the Type System section, but can also
+ be directly imported in an orogen package that defines tasks. When this
+ is the case, the explicit `export_types` statement is not needed, as `orogen`
+ will export all types that are used on the component's interface.
+- defining the component(s) interface(s) in the orogen file
+- implementing the processing parts of the component in C++
+
+**Let's remember** we strongly recommend that you develop the bulk of your
+component's functionality in **libraries**, instead of doing so in the components
+themselves.
+{: .important}
+
+Each time data types or the orogen specification are modified, one must run
+orogen to re-generate code. After code generation, the package behaves like
+a CMake package.
+
+The best way to do the first code generation is to use
+[`amake`](../basics/day_to_day.html). After this, one can run `make regen` to
+do code generation and `make` to build from within the package's build
+directory (which is usually located in `build/`). This is usually the best way
+to integrate an orogen package in an IDE.
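+
+For instance, assuming the `planning/orogen/sbpl` package created above (the
+package name and build directory layout are only examples), a typical
+edit-regenerate-build cycle could look like:
+
+~~~
+# first generation and build, from anywhere in the workspace
+amake planning/orogen/sbpl
+
+# afterwards, work from within the package's build directory
+acd planning/orogen/sbpl
+cd build
+make regen   # re-run code generation after editing the .orogen file
+make         # build the generated and hand-written code
+~~~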
+
+## Runtime Workflow
+
+"Developing" a component in C++ within Rock is to write a C++ class that
+interacts with its inputs/outputs. This class does not specify when the
+processing is going to be called, and under which OS resource (threads,
+processes). It is said that the _component implementation_ is separated
+from the _system deployment_. The first one is really writing the C++ code that
+interacts with the component's interface. The second one is part of system
+integration.
+
+What it means in practice is that a component implementation is nothing more than a standalone
+C++ class. This C++ class can be instantiated multiple times in a single
+system, using different periods or triggering mechanisms, different threading
+policies, …
+
+When you define components in oroGen, you create a _task library_, which is a
+shared library in which the task context classes are defined. Then, you need to
+put these libraries in _deployments_ (which is also done by oroGen). Finally,
+you can start these deployments, connect the tasks together, and monitor them
+using Syskit.
+
+{: .fullwidth}
+
+**Next** Let's define the [component interface](interface.html)
diff --git a/source/integrating_functionality/cpp_libraries.html.md b/source/integrating_functionality/cpp_libraries.html.md
new file mode 100644
index 0000000..d45e64d
--- /dev/null
+++ b/source/integrating_functionality/cpp_libraries.html.md
@@ -0,0 +1,295 @@
+---
+layout: documentation
+title: C++ Library Packages
+sort_info: 10
+---
+
+# C++ Library Packages
+{:.no_toc}
+
+- TOC
+{:toc}
+
+While Rock is designed to let you keep functionality separate from the
+framework, it does provide a C++ library template if you feel so inclined.
+This template solves some of the common problems of setting up a C++ library
+(build system, ...) and integrates as-is with the rest of a Rock
+system.
+
+**However** there is nothing that forces you to use the Rock library template.
+Autoproj generically integrates with autotools and cmake packages. One can also
+set up a custom package for more exotic systems (such as some old packages that
+use "plain make")
+
+The only constraint when the aim is to create a library that will be integrated
+in a Rock component is to provide a pkg-config file for it. This is how orogen
+resolves its dependencies.
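+
+As an illustration, a minimal pkg-config file for a hypothetical `my_library`
+package could look like the sketch below. The Rock library template generates
+such a file for you; the paths and names here are only examples.
+
+~~~
+prefix=/path/to/install/prefix
+libdir=${prefix}/lib
+includedir=${prefix}/include
+
+Name: my_library
+Description: short description of the library
+Version: 0.1
+Libs: -L${libdir} -lmy_library
+Cflags: -I${includedir}
+~~~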
+
+## Integrating 3rd-party libraries
+
+Rock packages - even those using the Rock CMake macros - are **not** dependent
+on autoproj. Autoproj is an external helper tool, but in no way does it interact
+with the package's build process. It is therefore perfectly feasible to build and
+use 3rd party libraries in a Rock system.
+
+When doing so, try to follow these guidelines:
+
+- do not change the package unless strictly necessary. The one exception for
+ which there is currently no good solution is to provide a pkg-config file for
+ orogen integration. If that is needed, you will probably want to integrate
+ this change [as a patch](../workspace/add_packages.html#patch) into your build
+  configuration. Also, try to get this patch into the package's mainline; it will
+  make things easier down the line.
+- provide [a package manifest](../workspace/add_packages.html#manifest_xml) in the package set.
+
+## Creating a Package
+
+You first need to pick a category and a name. See [the workspace
+conventions](../workspace/conventions.html) for information about how to name
+your library package.
+
+Rock then provides a library template. One can create a new library with
+
+~~~
+rock-create-lib library_dir
+~~~
+
+e.g.
+
+~~~
+cd drivers
+rock-create-lib imu_advanced_navigation_anpp
+~~~
+
+The created package contains a dummy class and a dummy executable that uses
+this class. It is great at providing you with example CMake code for both the
+library and its tests.
+
+## Conventions for library design
+
+There's a small number of conventions that Rock libraries follow:
+
+- **Extensions** header files are `.hpp`, source `.cpp`
+- **Naming** classes should be `CamelCase`. The library must be defined under a
+ namespace that matches the package basename (e.g.
+ `imu_advanced_navigation_anpp` for `drivers/imu_advanced_navigation_anpp`). Each class
+  has its own file, named after the class (i.e. the `Driver` class is in
+ `src/Driver.hpp` and `src/Driver.cpp`)
+- **File Structure** source and header files _tests excluded_ are saved in
+ `src/`. Tests are in `test/`.
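+
+For a hypothetical `drivers/imu_advanced_navigation_anpp` package, these
+conventions would for instance translate into a layout like:
+
+~~~
+drivers/imu_advanced_navigation_anpp/
+  manifest.xml
+  CMakeLists.txt
+  src/
+    Driver.hpp    # class imu_advanced_navigation_anpp::Driver
+    Driver.cpp
+  test/
+    test_Driver.cpp
+~~~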
+
+If the ultimate goal of a data type is to be used as an interface type on a
+Rock component, you must have first read and understood the [type
+system](../type_system) description.
+
+## Tests
+
+This is 2017 (or later). Testing is now an integral part of any modern development
+process, and Rock provides support to integrate unit testing in the development
+workflow.
+
+All libraries that have a `test/` folder will be assumed to have a test suite.
+However, testing is disabled by default - since building all the tests from all
+the workspace's packages would be fairly expensive. One needs to enable a
+package's tests to build them:
+
+~~~
+acd package/name
+autoproj test enable .
+aup
+amake
+~~~
+
+The `aup` step is needed if the package has test-specific dependencies, as
+defined by the `test_depend` tag of its [manifest
+file](../workspace/add_packages.html#manifest_xml).
+
+Once the tests are built, run them manually if you want to see their results.
+Autoproj can also run them with `autoproj test [package]`, but it will redirect
+the test's output to a log file (that can be visualized later with
+[alog](../basics/day_to_day.html#alog)).
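+
+As a concrete sketch (the package name and test binary path are only examples,
+they depend on your own package and CMake code):
+
+~~~
+# run the test suite by hand, from the package's build directory
+acd drivers/imu_advanced_navigation_anpp
+cd build
+./test/test_suite
+
+# or let autoproj run it and redirect the output to the logs
+autoproj test drivers/imu_advanced_navigation_anpp
+alog drivers/imu_advanced_navigation_anpp
+~~~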
+
+## The Rock CMake macros
+
+To ease the use of CMake within a Rock system - i.e. in packages that follow
+Rock conventions - Rock provides CMake macros that are somewhat easier to use.
+The following describes them. The macros can be found in
+`base/cmake/modules/Rock.cmake` in a Rock installation. There is also specific
+support for other tools within the Rock system (such as
+[vizkit3d](todo_link_to_vizkit3d)), but these will be introduced when
+applicable.
+
+The end of this page details these macros. But unless you need them, you may
+want to go to the next topic: [the integration of Ruby packages](ruby_libraries.html)
+
+## Rock.cmake Reference Documentation
+
+### Executable Targets (`rock_executable`)
+
+~~~
+rock_executable(name
+ SOURCES source.cpp source1.cpp ...
+ [LANG_C]
+ [DEPS target1 target2 target3]
+ [DEPS_PKGCONFIG pkg1 pkg2 pkg3]
+ [DEPS_CMAKE pkg1 pkg2 pkg3]
+ [MOC qtsource1.hpp qtsource2.hpp])
+~~~
+
+Creates a C++ executable and (optionally) installs it.
+
+The following arguments are mandatory:
+
+**SOURCES**: list of the C++ sources that should be built into that executable
+
+The following optional arguments are available:
+
+**LANG_C**: build as a C rather than a C++ executable
+
+**DEPS**: lists the other targets from this CMake project against which the
+executable should be linked
+
+**DEPS_PKGCONFIG**: list of pkg-config packages that the executable depends upon. The
+necessary link and compilation flags are added
+
+**DEPS_CMAKE**: list of packages which can be found with CMake's find_package,
+that the executable depends upon. It is assumed that the Find*.cmake scripts
+follow the CMake accepted standard for variable naming
+
+**MOC**: if the executable is Qt-based, this is a list of either source or header
+files of classes that need to be passed through Qt's moc compiler. If headers
+are listed, these headers should be processed by moc, with the resulting
+implementation files built into the executable. If they are source files, they
+get added to the executable and the corresponding header file is passed to moc.
+
+### Library Targets (`rock_library`)
+
+~~~
+rock_library(name
+ [SOURCES source.cpp source1.cpp ...]
+ [HEADERS header1.hpp header2.hpp header3.hpp ...]
+ [LANG_C]
+ [DEPS target1 target2 target3]
+ [DEPS_PKGCONFIG pkg1 pkg2 pkg3]
+ [DEPS_CMAKE pkg1 pkg2 pkg3]
+ [MOC qtsource1.hpp qtsource2.hpp]
+ [NOINSTALL])
+~~~
+
+Creates and (optionally) installs a shared library.
+
+As with all Rock libraries, the target must have an accompanying pkg-config
+file, which gets generated and (optionally) installed by the macro. The pkg-config
+template needs to be in the same directory and called `package_name.pc.in`. See the
+template created by `rock-create-lib` for an example.
+
+The following arguments are mandatory:
+
+**SOURCES**: list of the C++ sources that should be built into that library. If
+absent, the library is assumed to be header-only (i.e. only the headers and
+pkg-config file will be installed). Note that even in this case the DEPS_* arguments
+can be provided as they are passed to the pkg-config file generation.
+
+**HEADERS**: list of the C++ headers that should be installed. Headers are installed
+in `include/<package_name>/`.
+
+The following optional arguments are available:
+
+**LANG_C**: build as a C rather than a C++ library
+
+**DEPS**: lists the other targets from this CMake project against which the
+library should be linked
+
+**DEPS_PKGCONFIG**: list of pkg-config packages that the library depends upon. The
+necessary link and compilation flags are added
+
+**DEPS_CMAKE**: list of packages which can be found with CMake's find_package,
+that the library depends upon. It is assumed that the Find*.cmake scripts
+follow the CMake accepted standard for variable naming
+
+**MOC**: if the library is Qt-based, this is a list of either source or header
+files of classes that need to be passed through Qt's moc compiler. If headers
+are listed, these headers should be processed by moc, with the resulting
+implementation files built into the library. If they are source files, they
+get added to the library and the corresponding header file is passed to moc.
+
+**NOINSTALL**: by default, the library gets installed on 'make install'. If this
+argument is given, this is turned off
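+
+To make this more concrete, here is a sketch of how a library package could use
+these macros in its `src/CMakeLists.txt`. The target names and dependencies are
+only examples:
+
+~~~
+rock_library(my_library
+    SOURCES Driver.cpp Protocol.cpp
+    HEADERS Driver.hpp Protocol.hpp
+    DEPS_PKGCONFIG base-types)
+
+rock_executable(my_library_bin
+    SOURCES Main.cpp
+    DEPS my_library)
+~~~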
+
+### Boost Test Suite Targets (`rock_testsuite`)
+
+~~~
+rock_testsuite(name
+ SOURCES source.cpp source1.cpp ...
+ [LANG_C]
+ [DEPS target1 target2 target3]
+ [DEPS_PKGCONFIG pkg1 pkg2 pkg3]
+ [DEPS_CMAKE pkg1 pkg2 pkg3]
+ [MOC qtsource1.hpp qtsource2.hpp])
+~~~
+
+Creates a C++ test suite that uses the Boost unit test framework.
+
+The following arguments are mandatory:
+
+**SOURCES**: list of the C++ sources that should be built into that test suite
+
+The following optional arguments are available:
+
+**LANG_C**: build as a C rather than a C++ library
+
+**DEPS**: lists the other targets from this CMake project against which the
+library should be linked
+
+**DEPS_PKGCONFIG**: list of pkg-config packages that the library depends upon. The
+necessary link and compilation flags are added
+
+**DEPS_CMAKE**: list of packages which can be found with CMake's find_package,
+that the library depends upon. It is assumed that the Find*.cmake scripts
+follow the CMake accepted standard for variable naming
+
+**MOC**: if the library is Qt-based, this is a list of either source or header
+files of classes that need to be passed through Qt's moc compiler. If headers
+are listed, these headers should be processed by moc, with the resulting
+implementation files built into the library. If they are source files, they
+get added to the library and the corresponding header file is passed to moc.
+
+### GTest Test Suite Targets (`rock_gtest`)
+
+~~~
+rock_gtest(name
+ SOURCES source.cpp source1.cpp ...
+ [LANG_C]
+ [DEPS target1 target2 target3]
+ [DEPS_PKGCONFIG pkg1 pkg2 pkg3]
+ [DEPS_CMAKE pkg1 pkg2 pkg3]
+ [MOC qtsource1.hpp qtsource2.hpp])
+~~~
+
+Creates a C++ test suite that uses the Google Test (GTest) framework.
+
+The following arguments are mandatory:
+
+**SOURCES**: list of the C++ sources that should be built into that test suite
+
+The following optional arguments are available:
+
+**LANG_C**: build as a C rather than a C++ library
+
+**DEPS**: lists the other targets from this CMake project against which the
+library should be linked
+
+**DEPS_PKGCONFIG**: list of pkg-config packages that the library depends upon. The
+necessary link and compilation flags are added
+
+**DEPS_CMAKE**: list of packages which can be found with CMake's find_package,
+that the library depends upon. It is assumed that the Find*.cmake scripts
+follow the CMake accepted standard for variable naming
+
+**MOC**: if the library is Qt-based, this is a list of either source or header
+files of classes that need to be passed through Qt's moc compiler. If headers
+are listed, these headers should be processed by moc, with the resulting
+implementation files built into the library. If they are source files, they
+get added to the library and the corresponding header file is passed to moc.
+
+**Next** let's look at the creation of [ruby packages](ruby_libraries.html)
diff --git a/source/integrating_functionality/deployment.html.md b/source/integrating_functionality/deployment.html.md
new file mode 100644
index 0000000..b83ce19
--- /dev/null
+++ b/source/integrating_functionality/deployment.html.md
@@ -0,0 +1,347 @@
+---
+layout: documentation
+title: Deployments
+sort_info: 50
+---
+
+# Deployments
+{:.no_toc}
+
+- TOC
+{:toc}
+
+In Rock, each deployment is a separate binary (UNIX process) in which a certain
+number of tasks have been _instantiated_. The role of the deployment is to:
+ - group threads in processes
+ - group tasks into threads (and specify the thread parameters)
+ - assign each task to a triggering mechanism, which defines under which conditions
+   the task's `updateHook` will be called.
+
+The combination of thread information and triggering mechanism is called an
+**activity**.
+
+## Default Deployments
+
+orogen creates a default deployment for each declared
+[non-abstract](interface.html#abstract) task. This default deployment puts each
+component in a single thread, in its own process. It uses a default triggering
+mechanism that is defined on the task context. This default activity should be
+considered a "sane default", but components should in general not rely on this
+activity being "the" component activity.
+
+In some cases, it is beneficial to deploy components differently than the
+default. This is done by defining explicit deployments.
+
+## Explicit Deployments
+
+Deployment blocks declare one binary, that is a set of components along with
+their activities that are grouped into a single process.
+
+~~~ ruby
+deployment "test" do
+
+ add_default_logger
+end
+~~~
+
+This statement generates a "test" binary which will be installed by CMake. If
+that is not desired, for instance if it is for testing purposes only, add the
+`do_not_install` statement in the block:
+
+~~~ ruby
+deployment "test" do
+ do_not_install
+
+end
+~~~
+
+The most basic thing that can be done in a deployment is listing the tasks that
+should be instantiated. It is done by using the `task` statement:
+
+~~~ ruby
+task 'TaskName', 'orogen_project::TaskClass'
+~~~
+
+It will create a task with the given name and class. By default, that task will
+have its own thread. Use `using_task_library` to import tasks from another project.
+
+## Use in Syskit
+
+To use a task's default deployment, one adds
+
+~~~ ruby
+Syskit.conf.use_deployment 'model::Task' => 'task_name'
+~~~
+
+This deploys a task called `task_name` using the component's default
+deployment. Explicit deployments can be used as-is:
+
+~~~ ruby
+Syskit.conf.use_deployment 'test'
+~~~
+
+however, this way, only one instance of the `test` deployment can be started at a
+given time. To start more than one, one must prefix the task names:
+
+~~~ ruby
+Syskit.conf.use_deployment 'test' => 'left:'
+Syskit.conf.use_deployment 'test' => 'right:'
+~~~
+
+This prefixes the task names with resp. `left:` and `right:`. If `test` had a
+task called `task`, the deployed tasks would be called `left:task` and
+`right:task`.
+
+## Triggers
+
+Trigger statements can be placed either in the component's `task_context`
+block, or as a refinement of an explicit deployment's `task` statement.
+
+The first case looks like
+
+~~~ ruby
+task_context "Task" do
+ periodic 0.1
+end
+~~~
+
+The second case looks like
+
+~~~ ruby
+deployment 'test' do
+ task('task', 'Task').
+ periodic(0.1)
+end
+~~~
+
+When a task is added in an explicit deployment, the component's default
+activity will be used (as defined in its `task_context` block). However, the
+`periodic` and `fd_driven` activity statements that are available within the
+`task_context` statement can also be used in a deployment's `task` statement
+to override this default. `port_driven` cannot.
+
+**Note** the dot at the end of the `task` statement. This is a fluid
+interface: don't forget that each modifier for the task definition is part of
+a chain of method calls, and requires the dots.
+{: .important}
+
+### Periodic Triggering (`periodic`)
+
+This is the simplest triggering method. When a task is declared periodic, its
+`updateHook` will be called with a fixed time period. The task is within its own
+thread.
+
+To use this triggering mechanism, simply add the `periodic(period)` statement
+to the task context:
+
+~~~ ruby
+task_context 'Task' do
+ ...
+ periodic 0.01
+end
+~~~
+
+The period is given in seconds. The periodic activity cannot be combined with
+other triggering mechanisms.
+
+### Port-Driven Triggering (`port_driven`)
+
+A port-driven task is a task that wants to perform computations whenever new
+data is available on its input ports. In general, data-processing tasks (as for
+instance image processing tasks) fall into that category: their goal is to take
+data from their input, process it, and push it to their outputs.
+
+A port-driven task is declared by using the `port_driven` statement.
+
+~~~ ruby
+task_context "Task" do
+ input_port 'image', '/Camera/Frame'
+ input_port 'parameters', '/SIFT/Parameters'
+  output_port 'features', '/SIFT/FeatureSet'
+
+ port_driven 'image'
+end
+~~~
+
+During runtime, the `updateHook` method will be called when new data arrives on
+the listed ports (in this case 'image'). Other input ports are ignored by the
+triggering mechanism. Obviously, the listed ports must be input ports. In
+addition, they must be declared _before_ the call to `port_driven`.
+
+Finally, if called without arguments, `port_driven` will activate the port
+triggering on all input ports declared before it is called. This means that, in
+
+~~~ ruby
+task_context "Task" do
+ input_port 'image', '/Camera/Frame'
+ input_port 'parameters', '/SIFT/Parameters'
+  output_port 'features', '/SIFT/FeatureSet'
+
+ port_driven
+end
+~~~
+
+both 'parameters' and 'image' are triggering. Now, in
+
+~~~ ruby
+task_context "Task" do
+ input_port 'image', '/Camera/Frame'
+ port_driven
+ input_port 'parameters', '/SIFT/Parameters'
+  output_port 'features', '/SIFT/FeatureSet'
+end
+~~~
+
+only 'image' is.
+
+### FD-Driven Triggering (`fd_driven`)
+
+In the IO triggering scheme, `updateHook` is called whenever new data is made
+available on a file descriptor. It makes it very easy to implement drivers
+that wait for new data on the driver's communication line(s). The task
+has its own thread.
+
+**Note** if you're writing a task that has to interact with I/O, consider using
+Rock's [iodrivers_base](https://github.com/rock-core/drivers-iodrivers_base)
+library and the corresponding [orogen
+integration](https://github.com/rock-core/drivers-orogen-iodrivers_base).
+
+To use the IO-driven mechanism, use the `fd_driven` statement. fd-driven and
+port-driven triggering can be combined.
+
+
+~~~ ruby
+task_context 'Task' do
+ ...
+ fd_driven
+end
+~~~
+
+To access more detailed information on the trigger reason, and to set up the
+trigger mechanism, one must access the underlying activity. Two parts are
+needed, one in `configureHook` to tell the activity which file descriptors to watch
+for, and one in `cleanupHook` to remove all the watches (**that last part is
+mandatory**)
+
+First of all, include the header in the task's cpp file:
+
+~~~ cpp
+#include <rtt/extras/FileDescriptorActivity.hpp>
+~~~
+
+Second, set up the watches in `configureHook`
+
+~~~ cpp
+bool MyTask::configureHook()
+{
+ // Here, "fd" is the file descriptor of the underlying device
+ // it is usually created in configureHook()
+ RTT::extras::FileDescriptorActivity* activity =
+        getActivity<RTT::extras::FileDescriptorActivity>();
+ // This is mandatory so that the task can be deployed
+ // with e.g. a port-driven or periodic activity
+ if (activity)
+ activity->watch(fd);
+ return true;
+}
+~~~
+
+It is possible to list multiple file descriptors by having multiple calls to
+watch().
+
+One can set a timeout in milliseconds, in which case `updateHook` will be
+called that many milliseconds after the last successful trigger.
+
+~~~ cpp
+activity->setTimeout(100);
+~~~
+
+Finally, you **must** clear all watches in `cleanupHook()`:
+
+~~~ cpp
+void MyTask::cleanupHook()
+{
+ RTT::extras::FileDescriptorActivity* activity =
+        getActivity<RTT::extras::FileDescriptorActivity>();
+ if (activity)
+ activity->clearAllWatches();
+}
+~~~
+
+The FileDescriptorActivity class offers a few ways to get more information
+related to the trigger reason (data availability, timeout, error on a file
+descriptor). These different conditions can be tested with:
+
+~~~ cpp
+RTT::extras::FileDescriptorActivity* fd_activity =
+    getActivity<RTT::extras::FileDescriptorActivity>();
+if (fd_activity)
+{
+ if (fd_activity->hasError())
+ {
+ }
+ else if (fd_activity->hasTimeout())
+ {
+ }
+ else
+ {
+ // If there is more than one FD, discriminate. Otherwise,
+ // we don't need to use isUpdated
+ if (fd_activity->isUpdated(device_fd))
+ {
+ }
+ else if (fd_activity->isUpdated(another_fd))
+ {
+ }
+ }
+}
+~~~
+
+### Threading
+
+When in an explicit deployment, one has the option to fine-tune the assignment
+of tasks to threads.
+
+The first option is to associate a task with a thread. When there is a trigger,
+the thread is woken up and the task will be asynchronously executed when the OS
+scheduler decides to do so. It is the safest option (and the default) as the
+different tasks are made independent from each other.
+
+The second option is to *not* associate the task with its own thread. Instead,
+the thread that triggers it will be used to run the task. This is really only
+useful for port-driven tasks: the task that wrote on the triggering port
+will also execute the triggered task's `updateHook`. The main advantage is that
+the OS scheduler is removed from the equation, which can reduce latency. The
+periodic and IO triggering mechanisms _require_ the task to be in its own
+thread.
+
+When using a separate thread, the underlying thread can be parametrized with a
+scheduling class (realtime/non-realtime) and a priority. By default, a task is
+non-realtime and with the lowest priority possible. Changing it is done with
+the following statements:
+
+~~~ ruby
+ task('TaskName', 'orogen_project::TaskClass').
+ realtime.
+    priority(priority_value)
+~~~
+
+Where `priority_value` is a number between 1 (lowest) and 99 (highest).
+
+**Note** the dot at the end of the `task` statement. This is a fluid
+interface: don't forget that each modifier for the task definition is part of
+a chain of method calls, and requires the dots.
+{: .important}
+
+The second option (no dedicated thread) is called a sequential activity and is declared with:
+
+~~~ ruby
+ task('TaskName', 'orogen_project::TaskClass').
+ sequential
+~~~
+
+**Next** this is mostly all. [The next page](plugins.html) describes how it is
+possible to extend the oroGen specification. You may want to simply remember
+that it exists on first read and come back to it later. And instead go to the
+[documentation's overview](../index.html#how_to_read).
+{: .next-page}
diff --git a/source/integrating_functionality/index.html.md b/source/integrating_functionality/index.html.md
new file mode 100644
index 0000000..ca420f6
--- /dev/null
+++ b/source/integrating_functionality/index.html.md
@@ -0,0 +1,57 @@
+---
+layout: documentation
+title: Introduction
+sort_info: 0
+directory_title: Integrating Functionality
+directory_sort_info: 40
+---
+
+# Integrating Functionality
+
+You should have at this stage read the Basics section of this documentation.
+Where Basics was all about dealing with system integration, we are going to
+discuss in this section how new functionality is presented in a form that
+can be used in a system.
+
+This part will assume that you've understood the notions of
+[dataflow](../basics/composition.html) and [execution
+lifecycle](../runtime_overview/event_loop.html).
+
+We **strongly recommend** that you develop most of your system's functionality
+in **libraries**, instead of doing so within the framework itself. For C++, this
+means creating C++ library packages that are then later integrated into Rock
+components to expose that functionality to the system. For Ruby, this means
+creating Ruby packages that are then used within the Ruby layers (e.g. Syskit).
+
+**Why?** Developing libraries is a matter of "general" software engineering
+best practices. Robotics is a small field, software engineering is not. By
+doing most of your work in a framework-independent manner, you ensure that you
+can benefit from the much bigger ecosystem. Moreover, we haven't seen the end
+of the robotics frameworks. By developing libraries that are
+framework-independent, you ensure that you can integrate them elsewhere if need
+be, cutting the time and effort by **a lot**.
+
+**How does Rock help the library/framework separation?** Supporting this
+separation during the development process is a main design driver for the
+tooling. For instance, Rock's build system - `autoproj` - is not assumed to be
+present by the rest of the packages. Second, `orogen` exposes C++ structures
+directly into the type system. The widespread approach - using IDLs - usually
+ends up pushing developers to integrate code-generated structures in their
+libraries, thus tying them to the framework itself.
+{: .note}
+
+While we do recommend a separation between framework and libraries, Rock does
+have some guidelines and best practices on how to develop C++ and Ruby
+libraries to ease their integration in a Rock system. The next pages of this
+section will first deal with [C++ libraries](cpp_libraries.html) and then [Ruby
+libraries](ruby_libraries.html).
+
+The rest of this section will then deal with the no small matter of integrating this
+functionality in a Rock system.
+If you feel so inclined, Rock provides a C++ library template. This template
+solves some of the common problems with setting up a C++ library (basic build
+system, ...) and integrates as-is with the rest of a Rock system.
+
+**Next**: let's talk about the development of [C++ libraries](cpp_libraries.html)
+{: .next-page}
+
diff --git a/source/integrating_functionality/interface.html.md b/source/integrating_functionality/interface.html.md
new file mode 100644
index 0000000..c3546fa
--- /dev/null
+++ b/source/integrating_functionality/interface.html.md
@@ -0,0 +1,205 @@
+---
+layout: documentation
+title: Interface
+sort_info: 35
+---
+
+# Component Interfaces
+{:.no_toc}
+
+- TOC
+{:toc}
+
+We'll cover in this page how to define your task's interface. All statements
+presented in this page are to be included in a component definition, i.e.
+between `do` and `end` in
+
+~~~ ruby
+task_context "ClassName" do
+ needs_configuration
+
+ ...
+end
+~~~
+
+The only constraint on `ClassName` is that it *must* be different from the
+project name. How one is meant to interact with these elements in the task's
+own code is dealt with [later](writing_the_hooks.html).
+
+The `needs_configuration` statement is historical and should always be present.
+
+## Abstract Tasks {#abstract}
+
+orogen supports subclassing a component class into another component class. Of
+course, in some cases, one would create a component class that is only meant to
+be subclassed. This is declared with the `abstract` statement, which ensures that
+orogen will not attempt (nor allow one) to create a component instance from this class.
+
+~~~ ruby
+task_context "ClassName" do
+ needs_configuration
+ abstract
+ ...
+end
+~~~
+
+## Interface Elements
+
+
+{: .fullwidth}
+
+ * **Ports** are used to transfer data between the components
+ * **Properties** are used to store and set configuration parameters
+ * Finally, **Operations** (not represented here) are used to do remote method
+ calls on the components
+
+As a general rule of thumb, the components should communicate with each other
+only through ports. The properties and operations (as well as the state machine
+covered in [the next page](state_machine.html)) are meant to be used by
+a coordination layer, namely Syskit in our case.
+
+### Ports
+Ports are defined with
+
+~~~ ruby
+# A documentation string
+input_port 'in', 'my_type'
+# Another documentation string
+output_port 'out', 'another_type'
+~~~
+
+### Properties
+
+Properties are defined with
+
+~~~ ruby
+# What this property is about
+property 'name', 'configuration_type'
+~~~
+
+Plain properties must be read by the component only before it is started. If
+one needs to be able to change the value at runtime, the property must be
+declared `dynamic`:
+
+~~~ ruby
+# What this property is about
+property('name', 'configuration_type').
+ dynamic
+~~~
+
+**Don't make everything dynamic**. Use dynamic properties only for things that
+(1) won't affect the component functionality when the property is changed and
+(2) for which the "dynamicity" is easy to implement. A counter example is for
+instance a device whose change in parameter would take a few seconds. This
+should definitely *not* be dynamic. A good example would be a simple scaling
+parameter, which is only injected in a numerical equation - that is something
+that won't require any internal reinitialization.
+{: .important}
+
+### Operations
+
+The operations offer a mechanism through which a task context can expose
+functionality through remote method calls. They are defined with:
+
+~~~ ruby
+# Documentation of the operation
+operation('commandName').
+ argument('arg0', '/arg/type').
+ argument('arg1', '/example/other_arg')
+~~~
+
+Additionally, a return type can be added with
+
+~~~ ruby
+operation('operationName').
+ returns('int').
+ argument('arg0', '/arg/type').
+ argument('arg1', '/example/other_arg')
+~~~
+
+Note the dot at the end of all but the last line. This dot is important and, if
+omitted, will lead to syntax errors. If no return type is provided, the
+operation returns nothing.
+
+**When to use an operation?** Well, don't. Mostly. Operations should very
+rarely be used, as they create hard synchronization between components. The one
+common case where an operation is actually useful is when something _really
+expensive_ rarely needs to be done in the middle of the component's processing,
+such as dumping an internal state that is really expensive to dump.
+{: .important}
+
+### Dynamic Ports {#dynamic_ports}
+
+Some components (e.g. the logger or the canbus components) may create new ports
+at runtime, based on their configuration. To integrate within Syskit, it is
+necessary to declare that such creation is possible. This is done with the
+`dynamic_input_port` and `dynamic_output_port` statements, possibly using a
+regular expression as name pattern and either a message type or nil for "type
+unknown".
+
+The following for instance declares, in the Rock
+[canbus::Task](https://github.com/rock-drivers/drivers-orogen-canbus), that
+ports with arbitrary names might be added to the task interface, and that these
+ports will have the /canbus/Message type.
+
+~~~ ruby
+dynamic_output_port /.*/, "/canbus/Message"
+~~~
+
+oroGen currently provides no support for dynamic ports at the C++ level.
+`dynamic_output_port` and `dynamic_input_port` are purely declarative; it is
+the job of the component implementer to handle their creation and destruction.
+This is detailed [later in this section](writing_the_hooks.html#dynamic_ports).
+
+Syskit expects dynamic ports to be created at configuration time and removed at
+cleanup time.
+
+## Inheritance
+
+It is possible to make the components inherit from each other, and have the
+other oroGen features play well.
+
+Given a `Task` base class, the subclass is defined with
+
+~~~ ruby
+task_context "SubTask", subclasses: "Task" do
+end
+~~~
+
+When one does so, the component's subclass inherits from the parent. This means
+that it has access to the methods defined on the parent class, and also that
+it inherits the parent's class interface.
+
+When inheriting between task contexts, the following constraints will apply:
+
+ * it is not possible to add a task interface object (port, property, ...) that
+   has the same name as one defined by the parent model.
+ * the child shares the parent's [state definitions](state_machine.html)
+
+Finally, "abstract task models", i.e. task models that are used as a base for
+others, but which it would be meaningless to deploy since they don't have any
+functionality can be marked as abstract with
+
+~~~ ruby
+task_context "SubTask" do
+ abstract
+end
+~~~
+
+One can also inherit from a task defined by another oroGen package. Import the
+package first at the top of the `.orogen` file with
+
+~~~ ruby
+using_task_library "base_package"
+~~~
+
+and subclass the task from `base_package` using its full name:
+
+~~~ ruby
+task_context 'Task', subclasses: "base_package::Task" do
+end
+~~~
+
+**Next** let's have a look at the component [lifecycle state machine](state_machine.html)
+{: .next-page}
+
diff --git a/source/integrating_functionality/media/deployment_process.svg b/source/integrating_functionality/media/deployment_process.svg
new file mode 100644
index 0000000..96a39f1
--- /dev/null
+++ b/source/integrating_functionality/media/deployment_process.svg
@@ -0,0 +1,913 @@
+
+
+
+
diff --git a/source/integrating_functionality/media/error_state_machine.svg b/source/integrating_functionality/media/error_state_machine.svg
new file mode 100644
index 0000000..bef3782
--- /dev/null
+++ b/source/integrating_functionality/media/error_state_machine.svg
@@ -0,0 +1,416 @@
+
+
+
+
diff --git a/source/integrating_functionality/media/orocos_component.svg b/source/integrating_functionality/media/orocos_component.svg
new file mode 100644
index 0000000..430ab78
--- /dev/null
+++ b/source/integrating_functionality/media/orocos_component.svg
@@ -0,0 +1,390 @@
+
+
+
+
diff --git a/source/integrating_functionality/media/state_machine.svg b/source/integrating_functionality/media/state_machine.svg
new file mode 100644
index 0000000..798c238
--- /dev/null
+++ b/source/integrating_functionality/media/state_machine.svg
@@ -0,0 +1,431 @@
+
+
+
+
diff --git a/source/integrating_functionality/orogen_cheat_sheet.pdf b/source/integrating_functionality/orogen_cheat_sheet.pdf
new file mode 100644
index 0000000..2805576
Binary files /dev/null and b/source/integrating_functionality/orogen_cheat_sheet.pdf differ
diff --git a/source/integrating_functionality/orogen_cheat_sheet.svg b/source/integrating_functionality/orogen_cheat_sheet.svg
new file mode 100644
index 0000000..bc7ceb8
--- /dev/null
+++ b/source/integrating_functionality/orogen_cheat_sheet.svg
@@ -0,0 +1,3401 @@
+
+
+
+
diff --git a/source/integrating_functionality/plugins.html.md b/source/integrating_functionality/plugins.html.md
new file mode 100644
index 0000000..6c1eeb7
--- /dev/null
+++ b/source/integrating_functionality/plugins.html.md
@@ -0,0 +1,207 @@
+---
+layout: documentation
+title: Extending oroGen
+sort_info: 520
+---
+
+# Extending oroGen
+{:.no_toc}
+
+- TOC
+{:toc}
+
+oroGen plugins offer ways to add new statements to the oroGen specification
+file, allowing one to integrate some libraries in the component development
+workflow. Two examples are the [stream aligner](TODO) and the
+[transformer](TODO).
+
+## General principle
+
+The general plugin design is based on the extensible nature of the Ruby
+language. How the plugin loading works is:
+
+ * the OROGEN_PLUGIN_PATH environment variable contains directories in which
+   plugin files (actually, ruby files) are located.
+ * these ruby files get loaded by oroGen at startup through the normal "require"
+ mechanism
+ * the code in the Ruby file should extend the task context's class
+ (`OroGen::Spec::TaskContext`) to add new functionality. Examples can be found
+ at the bottom of this page.
+
+Example of a very simple plugin:
+
+~~~ ruby
+class BoolPlugin < OroGen::Spec::TaskModelExtension
+ attr_accessor :varname
+
+ # implement extension for task
+ def pre_generation_hook(task)
+ task.add_base_member("simple_plugin", varname, "bool")
+ task.operations["getBool"].
+ base_body("return #{varname};")
+ end
+end
+
+class OroGen::Spec::TaskContext
+ def add_boolean_attribute(varname)
+ extension_name = "BoolPlugin"
+ # find previous instance of the extension
+ extension = find_extension(extension_name)
+ if !extension
+ # create new instance
+ boolp = BoolPlugin.new(extension_name)
+ boolp.varname = varname
+ # define interface for task
+ hidden_operation("getBool").
+ returns("bool")
+ register_extension(boolp)
+ else
+ raise OroGen::ConfigError, "Plugin '#{extension_name}' is already instantiated with base member '#{extension.varname}'. '#{varname}' will not be created."
+ end
+ end
+end
+~~~
+
+If this plugin is present, the `add_boolean_attribute` method becomes available on
+task context definitions. Then, the generated Base class for these tasks will
+include a boolean attribute with the name given as parameter. Below, it would be
+"test".
+
+~~~ ruby
+task_context "Task" do
+ add_boolean_attribute "test"
+end
+~~~
+
+Have a look at `.orogen/tasks/TaskBase.hpp` to see the new attribute.
+
+## Adding methods to task contexts
+
+The add_base_method and add_user_method methods allow adding virtual methods to
+(respectively) the Base class of a task context and the user-visible part of
+the task context.
+
+The syntax is the following (simply replace add_base_method by add_user_method
+to have the same behaviour on the user-part of the task class):
+
+~~~ ruby
+add_base_method(return_type, name, signature)
+~~~
+
+This method call declares a pure virtual method (no body has been specified
+yet). To give a body, one has to call #body on the value returned by
+add_base_method:
+
+~~~ ruby
+add_base_method(return_type, name, signature).
+ body(body_of_the_method)
+~~~
+
+Where the body is the method's implementation without the toplevel block markers.
+
+For instance, oroGen adds a getModelName method to every task context. It is defined
+by
+
+~~~ ruby
+add_base_method("std::string", "getModelName", "").
+ body("return \"#{task_model.name}\";")
+~~~
+
+## Adding attributes to the base class
+
+~~~ ruby
+add_base_member("kind", "attribute_name", "attribute_type")
+~~~
+
+Additionally, related code can be added to the initializer list, constructor
+and/or destructor:
+
+~~~ ruby
+add_base_member("kind", "attribute_name", "attribute_type").
+ initializer("code_in_initializer_list").
+ constructor("code_in_constructor").
+ destructor("code_in_destructor")
+~~~
+
+`kind` is a simple string that is only used for sorting the declarations. For
+instance the properties use "properties" as "kind" so that all properties get
+grouped together.
+
+## Adding code to the base class constructor and destructor
+
+It is simply done with:
+
+~~~ ruby
+add_base_construction(kind, name, body)
+add_base_destruction(kind, name, body)
+~~~
+
+`kind` and `name` have no meaning of their own. They are only used for sorting the
+code snippets, so that the generated code always stays the same (avoiding
+unnecessary recompilations).
+
+## Adding code to the hooks in the base class
+
+It is done with
+
+~~~ ruby
+in_base_hook(hook_name, code)
+~~~
+
+For instance, oroGen clears all input ports in the base `startHook` with
+
+~~~ ruby
+each_input_port do |port|
+ in_base_hook('start', "_#{port.name}.clear();")
+end
+~~~
+
+## Adding code at the toplevel
+
+Code can be added at the toplevel of the Base.hpp and Base.cpp files. Simply do
+
+~~~ ruby
+add_base_header_code(code, include_before)
+~~~
+
+Where include_before should be true if the code needs to be added before the
+Base class declaration and false if it should be after.
+
+and
+
+~~~ ruby
+add_base_implementation_code(code, include_before)
+~~~
+
+## Code Generation Hooks
+
+There are three different hooks available that plugins can use. The above
+listed `pre_generation_hook` makes it possible to add code to existing base methods
+or register new base methods for generation.
+
+The `generation_hook` is the normal use-case for plugins. In this generation
+step regular ports or methods can be created, but no new code can be added to
+existing base-members.
+
+The `post_generation_hook` is only intended to read the final generated
+task context. No modifications of the TaskContext or the generation should be
+done at this step. This hook is useful to extract information at
+compilation time.
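+
+As a sketch of where these hooks fit, a plugin could for instance use the
+`generation_hook` to add an extra output port to every task model that enables
+it. The extension, port name and type below are purely illustrative:
+
+~~~ ruby
+class StatisticsPlugin < OroGen::Spec::TaskModelExtension
+  # called during generation: at this point, regular interface objects
+  # such as ports can still be added to the task model
+  def generation_hook(task)
+    task.output_port "plugin_statistics", "/std/string"
+  end
+end
+~~~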
+
+## Adding user-defined classes to generation
+
+Plugins that want to register additional classes during compilation
+can register a subfolder within the CMake build system. These subfolders get
+added to the compilation unit. The plugin developer has to make sure that the folder
+and a CMakeLists.txt within it have been generated during one of the above listed hooks.
+If these classes should be installed or linked against other parts, regular CMake commands must
+be used. The subfolder names must be exported as an enumerable of strings by the
+`each_auto_gen_source_directory` method of the plugin, for example:
+
+~~~ ruby
+def each_auto_gen_source_directory(&block)
+ ['src1','src2'].each(&block)
+end
+~~~
+
+**Next** That's all, folks. Go back to [the list of topics](../index.html#how_to_read)
diff --git a/source/integrating_functionality/ruby_libraries.html.md b/source/integrating_functionality/ruby_libraries.html.md
new file mode 100644
index 0000000..81aea2f
--- /dev/null
+++ b/source/integrating_functionality/ruby_libraries.html.md
@@ -0,0 +1,46 @@
+---
+layout: documentation
+title: Ruby Library Packages
+sort_info: 15
+---
+
+# Ruby Library Packages
+{:.no_toc}
+
+- TOC
+{:toc}
+
+On the C++ side, we have already discussed the separation between the bulk of
+the implementation in C++ libraries and their framework integration in
+components.
+
+The same can be applied on the Ruby side of the system. A lot of the
+functionality needed to e.g. look at diagnostics data streams from the
+components can be made framework-independent (i.e. Syskit independent). It is
+somewhat weaker in the Ruby case though, as in the end most of the data
+processing is done in C++ in a Rock system.
+
+## Conventions
+
+The Ruby packages are expected to follow the de-facto standard Ruby package
+layout set forth by RubyGems. The best way to create a new ruby package is to
+use [`bundle gem`](https://bundler.io/v1.15/guides/creating_gem.html) and add
+Rock's [`manifest.xml`](../workspace/add_packages.html) to it.
+
+`autoproj` runs `rake` within the package during the build, which means that
+it runs the `Rakefile`'s `default` task.
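+
+A minimal `Rakefile` of this kind, modeled after what `bundle gem` generates,
+could look like:
+
+~~~ ruby
+require "bundler/gem_tasks"
+require "rake/testtask"
+
+Rake::TestTask.new(:test) do |t|
+  t.libs << "test"
+  t.test_files = FileList["test/**/*_test.rb"]
+end
+
+# autoproj runs the default task during the build
+task default: :test
+~~~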
+
+External gems can be installed by autoproj using [the osdeps
+mechanism](../workspace/os_dependencies.html). For now, autoproj does not know
+how to look at a package's gemspec (which defines the gem's dependencies), so
+you will have to duplicate dependencies between the gemspec and the
+`manifest.xml`.
+
+## Tests
+
+This is 2017 (or later). Testing is now an integral part of any modern development
+process, and Rock provides support to integrate unit testing in the development
+workflow.
+
+Ruby packages are expected to provide a `test` target in their `Rakefile` to run
+the tests. The `Rakefile` generated by `bundle gem` has one.
diff --git a/source/integrating_functionality/state_machine.html.md b/source/integrating_functionality/state_machine.html.md
new file mode 100644
index 0000000..589ac7b
--- /dev/null
+++ b/source/integrating_functionality/state_machine.html.md
@@ -0,0 +1,141 @@
+---
+layout: documentation
+title: State Machine
+sort_info: 40
+---
+
+# Lifecycle State Machine
+{:.no_toc}
+
+- TOC
+{:toc}
+
+
+While the component interface tells how to _communicate_ with a component, the
+lifecycle state machine defines how a component can be controlled at runtime.
+All Rock components share the same state machine, which is what allows [the
+generic Syskit integration](../runtime_overview/event_loop.html).
+
+## The nominal RTT state machine
+
+What follows is the **nominal** state machine. On each state
+transition, the italic names are the transition names, and the non-italic names
+are the names of the methods that will be called on the component so that it does
+something, i.e. the ones that you -- the component developer -- must implement
+if something is needed for a particular transition.
+
+The `configureHook()` and `startHook()` methods may return false, in which case
+the transition is refused.
+
+{: .fullwidth}
+
+
+### Configure and Start
+
+As its name implies, the transition between PreOperational and Stopped is meant
+to encapsulate the need for complex and/or costly configuration. For instance,
+trying to open and configure a device (which can take very long). To give you
+another example, in hard realtime contexts, it is expected that `startHook()` is
+hard realtime while `configureHook()` does not need to be.
+
+Additionally, because of [assumptions within Syskit](../basics/recap.html), the
+`configureHook()` is the only place where dynamic ports can be created (and
+`cleanupHook()` the place where they must be destroyed).
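+
+As a sketch, assuming a hypothetical device driver (the device API and property
+names are purely illustrative), this separation typically looks like:
+
+~~~ cpp
+bool Task::configureHook()
+{
+    if (!TaskBase::configureHook())
+        return false;
+    // potentially slow, not realtime-safe: open and configure the device
+    mDriver.openDevice(_device_name.get());
+    return true;
+}
+
+bool Task::startHook()
+{
+    if (!TaskBase::startHook())
+        return false;
+    // cheap reset of the processing state before the first updateHook
+    mLastSample = base::Time();
+    return true;
+}
+~~~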
+
+**Note** the `needs_configuration` statement within the file generated by
+`rock-create-orogen` used to control whether the component requires a
+`configure` step or not. It is still here for historical reasons. All new
+components should have it.
+{: .important}
+
+## Error representation
+
+
+
+{: .pull-left}
+
+Errors are represented in the way depicted on the left. The exception state is
+used to represent errors that demand the component to stop, but can be recovered
+from by restarting it. The fatal error state, however, is a terminal state:
+there is no way to get out of it except by restarting the component's process.
+
+The components will automatically transition from any state to Exception if a
+C++ exception is "leaked" by one of the hooks (i.e. uncaught exception).
+Because of such a transition, the stopHook and cleanupHook will be called
+before getting into Exception. In addition, one may transition manually to
+the exception state by calling `exception()` from within the code. Note that
+`exception()` behaves as a normal function, i.e. will not interrupt the flow
+of the method it is in. Make sure you return after the exception:
+
+~~~ cpp
+void Task::updateHook()
+{
+ if (something_went_wrong)
+ return exception();
+
+ // Without the 'return', the execution would continue as if everything was
+ // alright
+}
+~~~
+
+If, while going into Exception, another C++ exception is caught, the component
+will go into Fatal. In general, there should be no reason to transition to
+fatal manually.
+
+
+## Extending the state machine {#extended_states}
+
+oroGen offers a way to have a more fine-grained reporting mechanism for
+components to their coordination layer. This mechanism is based
+on the definition of sub-states for each of the runtime and terminal states of
+the task context state machine: Running, Exception and Fatal.
+
+These sub-states are declared in the task_context block of the oroGen
+specification:
+
+~~~ ruby
+task_context "MotionTask" do
+ # Sub-states of Running (nominal operations)
+ runtime_states 'GOING_FORWARD', 'TURNING_LEFT'
+ # Sub-states of Exception (non-nominal end)
+ exception_states 'BLOCKED', 'SLIPPING'
+ # Sub-states of Fatal (not recoverable error)
+ fatal_states 'TOTALLY_BROKEN'
+end
+~~~
+
+On the C++ side, this mechanism is available through two things:
+
+* a States enumeration that defines all the states in a manner that is usable in
+ the code
+* the `state(States)`, `exception(States)` and `fatal(States)` methods that
+  allow declaring state changes in the C++ code.
+
+For instance, if the updateHook() detects that the system is blocked, it would
+do
+
+~~~ cpp
+void MotionTask::updateHook()
+{
+ // code
+ if (blocked)
+ {
+ exception(BLOCKED);
+ return;
+ }
+ // code
+}
+~~~
+
+All these state changes generate notifications that can be reacted on at the Syskit level to change
+the system's behavior. Because each of these calls generates a notification, it is
+good practice to avoid transitioning multiple times to the same runtime state. Calls
+to `state()` can be guarded to avoid this:
+
+~~~ cpp
+if (state() != GOING_FORWARD)
+ state(GOING_FORWARD);
+~~~
+
+**Next**: let's see how one should [write the `*Hook` methods](writing_the_hooks.html).
+{: .next-page}
diff --git a/source/integrating_functionality/writing_the_hooks.html.md b/source/integrating_functionality/writing_the_hooks.html.md
new file mode 100644
index 0000000..ae45a8a
--- /dev/null
+++ b/source/integrating_functionality/writing_the_hooks.html.md
@@ -0,0 +1,231 @@
+---
+layout: documentation
+title: Writing the Hooks
+sort_info: 40
+---
+
+# Writing the Hooks
+{:.no_toc}
+
+- TOC
+{:toc}
+
+This page describes the parts of the C++ API, and their usage patterns, that are
+relevant to the implementation of the hooks in a Rock component. You should
+already have familiarized yourself with [the component
+interface](interface.html) and its [lifecycle state
+machine](state_machine.html).
+
+## Interface Objects in C++
+
+There is one C++ object for each declared interface element. The name of the
+object is the name of the element with a leading underscore. For instance,
+
+~~~ruby
+# Another documentation string
+output_port 'out', 'another_type'
+~~~
+
+is mapped to a C++ attribute of type `RTT::OutputPort<another_type>` called
+`_out`.
+
+## Code Generation and Code Updates
+
+oroGen will not update a file that is already present on disk. Whenever an
+interface object requires the addition or removal of a method (operations and
+dynamic properties), one must manually modify the corresponding files in
+`tasks/`. To ease the process, oroGen does update template files in
+`templates/`.
+
+In order to achieve this, each component is implemented in two classes: the one
+you modify - which has the name declared in the orogen file - and a `Base`
+class that is directly modified by orogen. The latter is where the interface
+objects are defined. Have a look if you're interested in understanding more
+about the component's implementation. It's in the `.orogen/tasks/` directory
+{: .note}
+
+## Properties
+
+Plain properties are read only. They must be read either in the `configureHook()` or
+in the `startHook()`. Syskit will write them before calling `configure`.
+
+A property is read with `.get()`:
+
+~~~ cpp
+configuration_type config = _name.get();
+~~~
+
+Dynamic properties can be read at runtime. However, the property update method
+is called in-between two hooks, and therefore any delay due to the update will
+impact the component's update rate. Moreover, one must take into account that the
+state of the system does change in-between two `updateHook` calls. In other
+words, dynamic properties have a cost both on the component's implementation
+complexity and on its predictability. __Use them wisely__.
+
+There are two ways to handle dynamic properties. Either by reading the property
+object repeatedly, or by implementing a hook method that is called when the
+property is written at runtime. This hook method is called `setPropertyName`
+for a `property_name` property. If in doubt, check the template files in
+`templates/`.
+
+If you do reimplement this method, always call the method from the base class
+(as the generated template instructs you to do).
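+
+For illustration, here is a minimal sketch of such a hook, assuming a dynamic
+property called `period` of type `double` on a task named `MotionTask`. The
+generated template in `templates/` is the reference for the exact signature:
+
+~~~ cpp
+bool MotionTask::setPeriod(double const& value)
+{
+    // validate and apply the new value to the task's internal state
+    if (value <= 0)
+        return false;
+
+    // always forward to the base class implementation
+    return MotionTaskBase::setPeriod(value);
+}
+~~~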
+
+## Ports
+
+The ports map to C++ attributes on the component class, with the name prefixed
+by an underscore (i.e. `_in` and `_out` here). The most common operations are to
+read an input port and write an output port:
+
+~~~ cpp
+my_type in_sample;
+RTT::FlowStatus status = _in.read(in_sample);
+another_type out_sample;
+_out.write(out_sample);
+~~~
+
+The `status` return value indicates whether there was nothing to read
+(`RTT::NoData`), a new, never-read sample was read (`RTT::NewData`) or an
+already-read sample was read (`RTT::OldData`). Let's now look at the common
+port-reading patterns.
+
+All input ports are cleared when the component starts, i.e. just after
+`startHook` the read status will often be `NoData`. This is done so that the
+component does not read stale data from its previous execution.
+
+Input ports can be used in the C++ code in two ways; which one you want to use
+depends on what you actually want to do.
+
+* if you want to read all new samples that are on the input (since an input port
+ can be connected to multiple output ports)
+
+  ~~~ cpp
+  // my_type is the declared type of the port
+  my_type sample;
+  while (_in.read(sample, false) == RTT::NewData)
+  {
+      // got a new sample, do something with it
+      // the 'false' argument avoids copying already-read (old) samples,
+      // which is a small optimization
+  }
+  ~~~
+
+* if you are just interested in having some data
+
+ ~~~ cpp
+ // my_type is the declared type of the port
+ my_type sample;
+ if (_in.read(sample) != RTT::NoData)
+ {
+ // got a sample, do something with it
+ }
+ ~~~
+
+Finally, to write on an output port, you use `write()`:
+
+~~~ cpp
+// another_type is the declared type of the port
+another_type data = calculateData();
+_out.write(data);
+~~~
+
+Another operation of interest is the `connected()` predicate. It tests whether
+there is a data provider that will send data to an input port
+(`_in.connected()`), or whether there is a listener component that will get the
+samples written on an output port (`_out.connected()`).
+
+For instance,
+
+~~~ cpp
+if (_out.connected())
+{
+    // generate the data for _out only if somebody may be interested in it.
+    // This is useful if generating the data is costly
+ another_type data = calculateData();
+ _out.write(data);
+}
+~~~
+
+## Dynamic Ports {#dynamic_ports}
+
+Components that have a dynamic port mechanism must create these ports in
+`configureHook`. They will usually do so based on information from their
+properties.
+
+For the purpose of example, let's assume that we're implementing a time source,
+and need different output ports emitting time at different periods. A valid
+configuration type would be
+
+~~~ cpp
+struct PortConfiguration
+{
+ std::string port_name;
+ base::Time period;
+};
+~~~
+
+To hold the list of created ports, the task would need an attribute
+
+~~~ cpp
+typedef RTT::OutputPort<base::Time> TimeOutputPort;
+std::vector<TimeOutputPort*> mCreatedPorts;
+~~~
+
+The task's `configureHook` would create the ports (after checking for e.g. name
+collisions)
+
+~~~ cpp
+for (auto const& conf : _port_configurations.get())
+{
+    TimeOutputPort* port = new TimeOutputPort(conf.port_name);
+    ports()->addPort(*port);
+    mCreatedPorts.push_back(port);
+}
+~~~
+
+and `cleanupHook` would remove and delete them
+
+~~~ cpp
+while (!mCreatedPorts.empty())
+{
+    TimeOutputPort* port = mCreatedPorts.back();
+    mCreatedPorts.pop_back();
+    ports()->removePort(port->getName());
+    delete port;
+}
+~~~
+
+## Operations
+
+Each operation maps to a C++ method. For instance, for the declaration
+
+~~~ruby
+operation('operationName').
+ returns('int').
+ argument('arg0', '/arg/type').
+ argument('arg1', '/example/other_arg')
+~~~
+
+oroGen will generate a method with the signature
+
+~~~ cpp
+int operationName(arg::type const& arg0, example::other_arg const& arg1);
+~~~
+
+By default, operations are run in the thread of the callee, i.e. the thread of
+the component on which the operation is defined. This is easier from a thread-safety
+point of view, as it guarantees that there won't be concurrent access to the task's
+internal state. However, it also means that the operation will be executed only when all
+the task's hooks have returned (which can mean a potentially long wait).
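+
+For illustration, a minimal sketch of the corresponding implementation in
+`tasks/`, reusing the `MotionTask` class from the previous page (the argument
+types and the return value are placeholders):
+
+~~~ cpp
+int MotionTask::operationName(arg::type const& arg0, example::other_arg const& arg1)
+{
+    // by default this runs in the component's own thread, in-between hooks,
+    // so it may safely access the task's internal state
+    return 0;
+}
+~~~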
+
+If it is desirable, one can design the operation's C++ method to be thread-safe
+and declare it as being executed in the caller thread instead of the callee
+thread. This is done with
+
+~~~ ruby
+operation('operationName').
+ returns('int').
+ argument('arg0', '/arg/type').
+ argument('arg1', '/example/other_arg').
+ runs_in_caller_thread
+~~~
+
diff --git a/source/runtime_overview/recap.html.md b/source/runtime_overview/recap.html.md
index 8677f06..9974647 100644
--- a/source/runtime_overview/recap.html.md
+++ b/source/runtime_overview/recap.html.md
@@ -13,7 +13,7 @@ sort_info: 50
structure](task_structure.html).
- Syskit has assumptions about how components should be implemented. We'll
recollect those when we get to [how to implement
- components](../writing_components/index.html)
+ components](../integrating_functionality/components.html)
- Components are configured and started "when possible" by [the
scheduler](event_loop.html#scheduling)
- Components are transparently [reconfigured](event_loop.html#reconfiguration)
diff --git a/source/type_system/defining_types.html.md b/source/type_system/defining_types.html.md
new file mode 100644
index 0000000..7f2ae94
--- /dev/null
+++ b/source/type_system/defining_types.html.md
@@ -0,0 +1,297 @@
+---
+layout: documentation
+title: Defining Types
+sort_info: 10
+---
+
+# Defining Types
+{:.no_toc}
+
+- TOC
+{:toc}
+
+
+Types are described using C++. However, not all C++ types can be used in the data
+flow. There are limitations to which types are acceptable, and ways to work
+around these limitations.
+
+They are then injected in Rock's type system through Rock's code generation
+tool, `orogen`. We will see [later](../integrating_functionality/components.html) that this tool is
+also the one used to create components.
+
+## Creating an orogen package for type definition
+
+Packages are created with the `rock-create-orogen` tool. Let's assume we want
+to create a `planning/orogen/sbpl` package, the workflow would be to:
+
+~~~
+acd
+cd planning/orogen/
+rock-create-orogen sbpl
+cd sbpl
+# Edit sbpl.orogen
+rock-create-orogen
+# Fix potential mistakes and re-run rock-create-orogen until there are no errors
+# …
+~~~
+
+If you are going to use this package only for type definitions, you will have to
+delete all the `task_context` definitions. The orogen file will end up looking
+like this:
+
+~~~ ruby
+name "name_of_package"
+version "0.1"
+
+# Import types from other orogen projects as well as
+# C++ headers within the orogen package itself
+import_types_from "..."
+import_types_from "..."
+import_types_from "..."
+import_types_from "..."
+
+# Import types from libraries
+using_library "other_lib"
+import_types_from "other_lib/Header.hpp"
+
+# Choose which types are going to be usable on
+# component interfaces
+typekit.export_types "/name/of/type",
+ "/name/of/another/type"
+~~~
+
+**What does `rock-create-orogen` do?** `orogen` does "private" code generation
+in a `.orogen` subfolder of the package, and creates a `templates/` folder.
+`rock-create-orogen` ensures that the initial repository commit does not
+contain any of these. If you don't want to use `git`, or if you're confident
+that you know which files and folders to commit and which to leave out, the second
+run is not needed.
+{: .note}
+
+Once this is done, [add the package to your build
+configuration](../workspace/add_packages.html#orogen)
+
+## Type Declarations {#type_declarations}
+
+Not all C++ types can be used by Rock's type system. To be usable as-is, a type must:
+
+* be default constructible and copyable (i.e. have a constructor that takes no
+  arguments and can be copied),
+* have no private fields,
+* have only public ancestors that themselves fit this definition of "acceptable type",
+* not use pointers.
+
+In addition, Rock supports the `std::string` and `std::vector` standard
+classes, so you can use them freely. Moreover, for types that can't be directly
+managed by oroGen, the mechanism of [opaque types](#opaques) allows integrating
+them into the Rock workflow anyway.
+
+Example: defining a Time class
+
+~~~ cpp
+namespace base {
+ struct Time
+ {
+ uint64_t microseconds;
+ static Time fromMilliseconds(uint64_t ms);
+ Time operator +(Time const& other);
+ };
+}
+~~~
+
+## Type Names {#naming_scheme}
+
+The Rock type system does not use the same naming scheme as C++ for types.
+The parts of a type name are separated by a forward slash `/`. A well-formed
+type name is always absolute (it always starts with `/`).
+
+For instance, Rock's `base::Time` is `/base/Time` within the type system.
+
+Containers derived from `/std/vector` use the `<>` markers to name their element
+type, for instance `/std/vector</base/Time>`.
+
+## Importing Types {#import}
+
+In an oroGen project, one adds one or more `import_types_from` statements to
+include headers from within the oroGen package, headers from other packages or
+to import all types that have already been defined within another oroGen
+package. The template generated by `rock-create-orogen` has created a header
+file for this purpose:
+
+~~~ ruby
+import_types_from "myprojectTypes.hpp"
+~~~
+
+Such headers must be self-contained, that is, include all the headers they
+themselves require. Moreover, only the types that are _directly_ defined in
+the imported header (and the types they themselves use) will be exported in the
+typekit. Finally, one can directly use types defined in a library, provided
+that this library provides a pkg-config file for dependency discovery.
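+
+As an illustration, a self-contained `myprojectTypes.hpp` could look like the
+following minimal sketch (the `myproject::Sample` type is made up for the
+example):
+
+~~~ cpp
+#ifndef MYPROJECT_TYPES_HPP
+#define MYPROJECT_TYPES_HPP
+
+// a self-contained header includes everything its own definitions need
+#include <string>
+#include <vector>
+#include <stdint.h>
+
+namespace myproject {
+    struct Sample
+    {
+        uint64_t timestamp;
+        std::string name;
+        std::vector<double> values;
+    };
+}
+
+#endif
+~~~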
+
+Let's consider a `drivers/hokuyo` package that would define a
+`hokuyo::Statistics` structure. Assuming that this package (1) installs a
+`hokuyo.pc` file - all Rock packages do by default - and (2) installs the
+relevant header as `hokuyo/Statistics.hpp`, one can import the type with
+
+~~~ ruby
+using_library "hokuyo"
+import_types_from "hokuyo/Statistics.hpp"
+~~~
+
+**Note** the pkg-config name of a Rock library package is the package's basename
+(i.e. `hokuyo` for `drivers/hokuyo`).
+{: .note}
+
+Finally, if the types you are interested in are already imported by another
+oroGen package, it is recommended to reuse the code already generated there
+(if only to reduce compilation times).
+
+To import types from another project, one does:
+
+~~~ ruby
+import_types_from "project_name"
+~~~
+
+**Note** the name of an oroGen package as used in `import_types_from` is the
+package's basename (i.e. `hokuyo` for `drivers/orogen/hokuyo`). An oroGen
+package and a library can share the same basename (e.g. `drivers/hokuyo` and
+`drivers/orogen/hokuyo`). This is even a recommended behavior when an orogen
+package is mainly tied to a certain library.
+{: .note}
+
+**Important** The `using_library "library_name"` and `import_types_from "project_name"`
+implicitly create a dependency between the oroGen package you're working on and
+other packages. These dependencies **must** be made explicit by adding them to
+the oroGen package's [`manifest.xml`](../workspace/add_packages.html#manifest_xml).
+{: .important}
+
+The following two sections on [C++ templates](#templates) and [opaque
+types](#opaques) can be skipped on a first reading: go straight to
+[how types will be seen from Ruby](types_in_ruby.html).
+{: .next-page}
+
+## Handling of C++ templates {#templates}
+
+Templates are not directly understood by oroGen. However, explicit
+instantiations of them can be used.
+
+Unfortunately, typedef'ing the type that you need is not enough. You have to
+use the instantiated template directly in a structure. To work around this, you
+can define a structure whose name contains the `orogen_workaround` string to
+get the template instantiated, and then define the typedefs that you will
+actually use in your typekits and oroGen task interfaces.
+
+For instance, with
+
+~~~ cpp
+template <typename Scalar, int DIM>
+struct Vector {
+    Scalar values[DIM];
+};
+
+struct __orogen_workaround {
+    Vector<double, 3> vector3;
+    Vector<double, 4> vector4;
+};
+~~~
+
+One can then use `Vector<double, 3>` in the oroGen interface, and in other
+structures. The `__orogen_workaround` structure itself will be ignored by
+oroGen to avoid polluting the type system.
+
+
+## Opaque Types {#opaques}
+
+Opaque types are a way to enable oroGen to handle types that it cannot handle
+completely automatically. The general idea is that you provide oroGen with a
+"marshalling structure" that (1) [it can understand](#type_declarations) and
+(2) can hold all the data that the "real type" holds. Then, you have to
+implement two conversion functions: one that converts from the real type to the
+marshalling type, and one that converts back.
+
+So, it involves doing one copy. What is the gain?
+
+Opaque types provide you with the advantage that other types that use opaque
+types (i.e. structures with opaque fields, `std::vector`s of opaques, arrays of
+opaques) will be automatically handled by oroGen. That is, you write the
+conversion functions for the types that oroGen can't handle and let it do the
+rest of the work.
+
+Moreover, oroGen will be able to generate typekits for all the transports it
+can handle.
+
+Finally, the conversion to and from the marshalling type is only done in
+inter-process transports. When communicating across threads, the data structure
+is copied as-is.
+
+To use opaque types, you first have to create a wrapper type (a.k.a.
+"intermediate type") for the opaque. In the case of `Eigen::Vector3d`, a
+suitable wrapper would be
+
+~~~ cpp
+namespace wrappers
+{
+ struct Vector3d
+ {
+ double x, y, z;
+ };
+}
+~~~
+
+The wrapper is usually defined within the oroGen package itself, in a
+`wrappers/` subdirectory placed at the root of the package. It then needs to be
+[imported with `import_types_from`](#import). Finally, one can use
+`opaque_type` to declare the opaque.
+
+~~~ ruby
+import_types_from "wrappers/Vector3d.hpp"
+opaque_type "/Eigen/Vector3d", "/wrappers/Vector3d"
+~~~
+
+where `wrappers::Vector3d` is the marshalling structure defined in
+`wrappers/Vector3d.hpp`. Moreover, if getting the definition of the opaque type
+requires new include directories that are not yet added to the typekit through
+the [using_library mechanism](#import), you will have to detect them in the
+Ruby code and add them with the `include:` option
+
+~~~ ruby
+import_types_from "wrappers/Vector3d.hpp"
+opaque_type "/Eigen/Vector3d", "/wrappers/Vector3d", include: eigen_prefix
+~~~
+
+Once you have re-generated the project, a `typekit/` directory is created with
+two files, `Opaques.cpp` and `Opaques.hpp`, in it. These files hold the
+`toIntermediate` and `fromIntermediate` conversion functions that oroGen uses
+to convert the opaque to the wrapper and the wrapper back to the opaque. Note
+that any function will do: you may change the plain functions to e.g. templates
+if you need to define opaques for many types (as `base/orogen/types` does [for
+the Eigen
+types](https://github.com/rock-core/base-orogen-types/blob/master/typekit/Opaques.hpp)).
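+
+To give an idea, here is a minimal sketch of what these functions could look
+like for the `wrappers::Vector3d` example above. The exact signatures and
+namespaces are generated by oroGen, so always start from the files under
+`templates/typekit/`:
+
+~~~ cpp
+void orogen_typekits::toIntermediate(
+    wrappers::Vector3d& intermediate, Eigen::Vector3d const& real_type)
+{
+    // copy the data from the real type into the marshalling structure
+    intermediate.x = real_type.x();
+    intermediate.y = real_type.y();
+    intermediate.z = real_type.z();
+}
+
+void orogen_typekits::fromIntermediate(
+    Eigen::Vector3d& real_type, wrappers::Vector3d const& intermediate)
+{
+    // copy the data from the marshalling structure back into the real type
+    real_type = Eigen::Vector3d(intermediate.x, intermediate.y, intermediate.z);
+}
+~~~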
+
+**Updates to Opaques.hpp/Opaques.cpp** If you add new opaques to an orogen
+project that already has some, you will need to copy the corresponding
+`toIntermediate`/`fromIntermediate` conversion functions manually from
+`templates/typekit/Opaques.cpp`. Note that this is a general behavior: oroGen
+will always refuse to modify a file that already exists, but will keep a "fresh"
+template up to date within `templates/`.
+{: .important}
+
+As explained, once you have defined an opaque type, oroGen will take care of
+other types that _use_ this opaque. For instance
+
+~~~ cpp
+struct Position
+{
+ base::Time time;
+ Eigen::Vector3d position;
+};
+~~~
+
+can be used in your task interfaces without any modifications. This works for
+structures, `std::vector` and static-size arrays. Before you can do this, however,
+you need to `import_types_from` the orogen package that declared the opaque
+in the first place.
+
+**Next** Now that you know all about defining data types, let's look at how
+they are seen [from within Ruby](types_in_ruby.html).
+{: .next-page}
+
diff --git a/source/type_system/index.html.md b/source/type_system/index.html.md
new file mode 100644
index 0000000..573887c
--- /dev/null
+++ b/source/type_system/index.html.md
@@ -0,0 +1,30 @@
+---
+layout: documentation
+title: Introduction
+sort_info: 0
+directory_title: The Type System
+directory_sort_info: 35
+---
+
+# Type System
+{:.no_toc}
+
+One of the first things that a system designer has to think about is defining
+the data structures that will be used to exchange data between the system's
+parts (in our case, between the components and Syskit).
+
+These types are used for a few different things:
+
+* in the communication between components and Syskit (ports)
+* in the configuration of the component (properties)
+* in the control of the component (operations)
+* to assess the component's state (diagnostics)
+
+In Rock, the types are defined in C++ in the components themselves. They are
+then exported into Rock's type system to allow for their **transport**
+(communication between processes), but also for their manipulation in Syskit.
+
+This section will detail [how types are defined](defining_types.html), how they are
+[mapped into the Ruby layers](types_in_ruby.html), and how to discover which
+types are available.
+
diff --git a/source/type_system/types_in_ruby.html.md b/source/type_system/types_in_ruby.html.md
new file mode 100644
index 0000000..29b19af
--- /dev/null
+++ b/source/type_system/types_in_ruby.html.md
@@ -0,0 +1,201 @@
+---
+layout: documentation
+title: Types in Ruby
+sort_info: 20
+---
+
+# Types in Ruby
+{:.no_toc}
+
+- TOC
+{:toc}
+
+You will be using Ruby to interact with a running system (_via_ Syskit) and
+post-process log files. It's important to understand how the types that are
+defined and exchanged from the C++ side end up being manipulated in Ruby.
+
+## Basics
+
+The mapping from C++ to Ruby is mostly as one would expect: one can create, read
+or modify a struct by setting or reading its fields. One can access an array or
+`std::vector` as one would access a Ruby array and so on.
+
+For instance, consider the Time type we already
+[used as an example](defining_types.html#type_declarations):
+
+~~~ cpp
+namespace base {
+ struct Time
+ {
+ uint64_t microseconds;
+ static Time fromMilliseconds(uint64_t ms);
+ Time operator +(Time const& other);
+ };
+}
+~~~
+
+would be accessed with:
+
+~~~ ruby
+obj.microseconds # current value of 'microseconds' as a Ruby integer
+obj.microseconds = 20 # set the value of 'microseconds' to 20
+~~~
+
+A more complex struct such as:
+
+~~~ cpp
+namespace base {
+    struct Timestamps {
+        std::vector<Time> stamps;
+    };
+}
+~~~
+
+would be accessed with
+
+~~~ ruby
+obj.stamps[0].microseconds
+obj.stamps << new_time
+~~~
+
+Enums are represented as Ruby symbols, so
+
+~~~ cpp
+namespace base {
+ enum Result {
+ OK, FAILED
+ };
+ struct S {
+ Result status;
+ };
+}
+~~~
+
+would be manipulated with
+
+~~~ ruby
+obj.status # => :OK
+obj.status = :FAILED
+~~~
+
+## Loading and Accessing Types
+
+To get access to registered types, one needs to initialize the orocos.rb library and load the corresponding typekits:
+
+~~~ ruby
+require 'orocos'
+Orocos.initialize
+Orocos.load_typekit 'base'
+~~~
+
+From there on, all the types that the `base` typekit defines are made available
+under the `Types` object. For instance, the `base::Time` type is available
+as `Types.base.Time`.
+
+New objects can thus be created with `Types.base.Time.new`. New objects - except
+enums - are left uninitialized. Enums are initialized to the first valid value
+in their definition. Call `#zero!` to zero-initialize an object.
+
+The fields of a struct can be initialized on construction: `Types.base.Time.new(microseconds: 0)`.
+
+## Converting between the C++ definitions and more Ruby-ish types
+
+As it is, the Rock type system is optimized for C++. The types can have a
+proper API, accessors, initialization … These parts of the types are available
+in C++ but are "lost in translation" when passed to Ruby.
+
+However, Ruby also has a rich ecosystem of built-in types and external
+libraries that sometimes match what the C++ types provide. For instance,
+Rock's existing `base::Time` type has an equivalent in the Ruby `Time` class.
+To ease the use of Rock on the Ruby side, the framework provides a way to
+convert to and from pure-Ruby types. Rock's own `base/types` package defines
+such conversions. The main (but not only) conversions are used to handle Eigen
+types in Ruby (using built-in Eigen bindings), and the `Time` conversion that
+we just described.
+
+If one defines a conversion to Ruby with:
+
+~~~ ruby
+Typelib.convert_to_ruby '/base/Time', Time do |value|
+ microseconds = value.microseconds
+ seconds = microseconds / 1_000_000
+ Time.at(seconds, microseconds % 1_000_000)
+end
+~~~
+
+Then the framework will automatically convert `/base/Time` values into Ruby's
+Time using the given block. Note that the Ruby type is optional in this case
+(whatever's returned by the block will be considered "the converted type").
+
+**Important** the conversions must be defined **before** the type is loaded.
+{: .important}
+
+**Where to define these?** One-shot conversions can be defined straight into
+your system (Ruby script or Syskit app). For conversions that are too widespread
+for that, consider installing a `typelib_plugin.rb` file under a folder that is resolved
+by `RUBYLIB` (e.g. `mylib/typelib_plugin.rb`). This would either be a plain Ruby package
+or a file installed by a C++ package within the Ruby search path. Both methods are
+described in more detail in the [Integrating Functionality](../integrating_functionality/ruby_libraries.html) section.
+
+The inverse conversion may also be provided:
+
+~~~ ruby
+Typelib.convert_from_ruby Time, '/base/Time' do |value, type|
+ type.new(
+ microseconds: value.tv_sec * 1_000_000 + value.tv_usec)
+end
+~~~
+
+**Reminder** if you don't understand the `/base/Time` syntax, we've covered that
+when we talked about the type system's [naming scheme](defining_types.html#naming_scheme).
+{: .note}
+
+## Extending the Rock Types
+
+An alternative to the conversions mechanism is to extend the types with new
+methods, and/or initializers.
+
+To define methods on the type class itself, one uses
+`Typelib.specialize_model`. The following would for instance allow creating a
+`/base/Angle` initialized with NaN by calling `Types.base.Angle.Invalid`:
+
+~~~ ruby
+Typelib.specialize_model '/base/Angle' do
+  def Invalid
+    new(rad: Float::NAN)
+  end
+end
+~~~
+
+To define methods on the values themselves, one uses `Typelib.specialize`.
+
+~~~ ruby
+Typelib.specialize '/base/Angle' do
+ def to_degrees
+ rad * 180 / Math::PI
+ end
+end
+~~~
+
+It is possible to define an initializer this way:
+
+~~~ ruby
+Typelib.specialize '/base/Angle' do
+ def initialize
+ self.rad = Float::NAN
+ end
+end
+~~~
+
+**Important** the specializations must be defined **before** the type is loaded.
+{: .important}
+
+**Where to define these?** As with conversions, one-shot specializations can be
+defined straight into your system (Ruby script or Syskit app). Specializations
+that are too widespread for that belong in a `typelib_plugin.rb` file under a folder
+that is resolved by `RUBYLIB` (e.g. `mylib/typelib_plugin.rb`), either as a plain Ruby
+package or as a file installed by a C++ package within the Ruby search path. Both
+methods are described in more detail in the [Integrating Functionality](../integrating_functionality/ruby_libraries.html) section.
+
+**Next** that's all about the type system. Go back to [the documentation
+overview](../index.html#how_to_read) for more.
+{: .next-page}
+
diff --git a/source/workspace/add_packages.html.md b/source/workspace/add_packages.html.md
index f1ba6d1..3f1b017 100644
--- a/source/workspace/add_packages.html.md
+++ b/source/workspace/add_packages.html.md
@@ -46,9 +46,9 @@ The first step in creating a package is [to pick a name](conventions.html#naming
If you create a package from scratch, Rock provides a set of command-line tools
to generate package scaffolds for you:
-- `rock-create-lib` for a C++ library
-- `rock-create-rubylib` for Ruby libraries
-- `rock-create-orogen` for [an oroGen component](../writing_components/index.html#create)
+- `rock-create-lib` for a [C++ library](../integrating_functionality/cpp_libraries.html)
+- `rock-create-rubylib` for [Ruby libraries](../integrating_functionality/ruby_libraries.html)
+- `rock-create-orogen` for [an oroGen component](../integrating_functionality/components.html)
- `rock-create-bundle` for [a bundle](../basics/getting_started.html)
If you are integrating a package that already exists, it should be easy enough
@@ -66,8 +66,12 @@ dependency may be another package declared in autoproj or a package provided by
the underlying operating system through [the osdep system we will see
later](os_dependencies.html).
-All this information is stored in a package's `manifest.xml` file. This file has
-the following format:
+All this information is stored in an XML file whose format follows. If the
+package has been created for Rock specifically, it is saved as a `manifest.xml`
+file directly at the root of the package. For packages that already exist but
+are being integrated in Rock, the file should be saved in the package set under
+`manifests/package/name.xml` (e.g. `simulation/gazebo`'s manifest is saved in
+`manifests/simulation/gazebo.xml`).
+{: #manifest_xml}
~~~xml
@@ -97,13 +101,6 @@ The `` tag is used for dependencies that are specific to [the
package's test suite](../basics/day_to_day.html#test). The `` allows to [avoid building some dependencies within some builds](managing.html).
-If the package has been created for Rock specifically, the common practice is
-to put the `manifest.xml` file directly at the root of the package. For
-packages that already exist but are being integrated in Rock, the
-`manifest.xml` file should be saved in the package set under
-`manifests/package/name.xml` (e.g. `simulation/gazebo`'s manifest is saved in
-`manifests/simulation/gazebo.xml`). This is how dependencies to non-Rock
-packages should be declared as well.
## Declaring a package {#autobuild}
@@ -199,7 +196,7 @@ ruby_package "package_name" do |pkg|
end
~~~
-### oroGen packages
+### oroGen packages {#orogen}
~~~ ruby
orogen_package "package_name"
@@ -410,7 +407,7 @@ package_name:
module: modulename
~~~
-### Patching after checkout or update
+### Patching after checkout or update {#patch}
It is possible to apply patches after a given package (imported by any of the
importer types) has been checked out/updated. To do so, simply add the option