@@ -72,8 +72,8 @@ perhaps something like:

[
{ "router_uuid": "<UUID>" },
- { "joined_networks",
- "<network1-UUID>", "<network2-UUID>",.... },
+ { "joined_networks": [
+ "<network1-UUID>", "<network2-UUID>",.... ] },
]
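
(NOTE: purely as illustration, here is a short Python sketch that assembles
the payload shape above. The field names are taken from this example only
and are not a settled API schema.)

    # Illustrative only: build the router-object shape sketched above.
    import json
    import uuid

    router_object = [
        {"router_uuid": str(uuid.uuid4())},
        {"joined_networks": [str(uuid.uuid4()), str(uuid.uuid4())]},
    ]

    print(json.dumps(router_object, indent=2))
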
Until IP prefix collisions are solved by Router Object processing, the Router
@@ -91,8 +91,15 @@ Networks can be located either on the same Triton Data Center, or on distinct
ones. Other factors (see below) may come into play as well, but
inter-data-center traffic will require additional protections. Networks on
the same DC may not need additional protections, but could benefit from them
- if defense-in-depth is a concern. (NOTE: I'm assuming intra-DC can still
- span distinct compute-nodes.)
+ if defense-in-depth is a concern.
+
+ A single intra-DC network can still span multiple compute nodes (CNs). A
+ router object may need to instantiate router zones on multiple CNs as well.
+ A naive implementation could just instantiate one IP per network per CN. This
+ does not scale well with IPv4 networks (there are DC deployments with over
+ 300 CNs today, for example).
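+
+ As a rough, illustrative calculation (in Python; the only input taken from
+ the text above is the ~300-CN figure, and the /24 is just an example
+ network size):
+
+     # Back-of-the-envelope cost of the naive one-IP-per-network-per-CN scheme.
+     cns = 300                      # compute nodes in the DC
+     usable_ipv4_in_slash24 = 254   # usable host addresses in a /24
+
+     ips_per_network = cns * 1      # one router-zone IP per network per CN
+     print("router IPs needed per joined network:", ips_per_network)
+     print("fits in a /24?", ips_per_network <= usable_ipv4_in_slash24)
+
+ On a /24, the router zones alone would not even fit, let alone leave room
+ for instances.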
+
+ Inter-DC networks will likely need a different sort of router zone <TBD >.

#### Same Ownership vs. Different Ownership

@@ -120,28 +127,6 @@ single-remote-node (not unlike a secure remote-access client). It could also
be a generic OS installation that requires some amount of configuration,
which could be provided by Triton.

-
- ===================== (Cut up to and including here.) =====================
- Behind the scenes, there are these connectivity possibilities with which to
- contend:
-
- ### Intra-Data-Center routing.
-
- This should be relatively easy, and may not even involve any tunneling.
-
- ### Inter-Data-Center routing.
-
- This will require more thought, including the requirement for tunneling (so
- one does not need dedicated inter-DC links). A goal should be to reduce the
- number of public IP addresses required to construct these tunnels. Clever
- use of NAT, clever use of IP-in-IP tunnels, or both, should help.
-
- ### On-Premise Triton to DC routing.
-
- Once the inter-DC problems are solved, most, if not all, of them can be
- generalized to on-premise Triton to a JPC location.
- ===================== (Cut up to and including here.) =====================
-
### Triton to other-cloud routing.

Both JPC customers and Triton on-premise customers may wish to bring other
@@ -171,6 +156,53 @@ customer or disappoint them.
measured, and optimized where possible.


+ ## Implementation issues
+
+ ### Forwarding entities implementation strategies
+
+ The most straightforward approach is to construct a single zone, not unlike
+ the NAT zone used for Fabric Networks, and have it forward packets between
+ the networks. Problems with this naive approach center around a single point
+ of failure.
+
+ An incremental upgrade of the straightforward approach is to construct
+ multiple zones, each with distinct IP addresses for each network joined,
+ scattered across a number of compute nodes in the DC. This approach
+ introduces a tradeoff between IP addresses consumed and redundancy.
+ Furthermore, instances either have to run a routing protocol, or need more
+ complex configuration management to select an appropriate router zone.
+
+ Within a fabric network that spans compute nodes, it is possible to
+ instantiate a single IP address in each compute node. Changes to multiple
+ APIs (at least NAPI and VMAPI) would be needed to cleanly instantiate
+ shared-IP router zones across multiple CNs.
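+
+ As a rough comparison (a Python sketch with made-up example values; only
+ the general shape of each strategy comes from the text above):
+
+     # Addresses consumed from each joined network vs. how many forwarding
+     # entities are available, for the strategies described above.
+     cns = 300          # compute nodes in the DC (order of magnitude only)
+     router_zones = 4   # hypothetical count for the multi-zone approach
+
+     strategies = [
+         ("single router zone",         1,            1),
+         ("multiple scattered zones",   router_zones, router_zones),
+         ("naive one IP per CN",        cns,          cns),
+         ("shared-IP zones across CNs", 1,            cns),
+     ]
+
+     for name, ips, entities in strategies:
+         print(f"{name:30} IPs per network: {ips:>3}   entities: {entities}")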
+
+ ### Instance issues
+
+ Regardless of how forwarding entities get implemented, the instances attached
+ to these networks, which now will have greater connectivity, will still need
+ to select an appropriate next-hop. Use of routing protocols adds complexity
+ to both configuration and to each instance. Multiple next-hop entries using
+ ECMP will provide degraded service if one or more forwarding entities fail.
+ Selective configuration, where for example the next-hop is selected based on
+ compute node locality or by some other means, adds complexity to
+ configuration and has the drawback of pushing the single point of failure to
+ a set of instances.
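+
+ For illustration, the selective-configuration option might reduce to logic
+ like the following Python sketch (the RouterZone record and health check are
+ hypothetical stand-ins, not an existing Triton or NAPI interface):
+
+     from dataclasses import dataclass
+     from typing import Optional
+
+     @dataclass
+     class RouterZone:
+         ip: str        # the zone's address on the instance's network
+         cn_uuid: str   # compute node hosting the zone
+         healthy: bool  # result of some out-of-band health check
+
+     def pick_next_hop(instance_cn: str,
+                       candidates: list[RouterZone]) -> Optional[str]:
+         """Prefer a healthy router zone on the same CN, else any healthy one."""
+         healthy = [rz for rz in candidates if rz.healthy]
+         local = [rz for rz in healthy if rz.cn_uuid == instance_cn]
+         chosen = (local or healthy or [None])[0]
+         return chosen.ip if chosen else None
+
+     zones = [
+         RouterZone("10.1.0.2", "cn-a", healthy=False),
+         RouterZone("10.1.0.3", "cn-b", healthy=True),
+     ]
+     print(pick_next_hop("cn-a", zones))   # falls back to the zone on cn-b
+
+ Re-running such a selection whenever a router zone fails is exactly the
+ configuration-management burden described above.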
+
+ As Router Objects are created, destroyed, and modified, there is as yet no
+ clear strategy for updating running instances about those changes. A routing
+ daemon on every instance would solve that, at the cost of the complexity
+ mentioned previously. Otherwise, there is no way to propagate NAPI or VMAPI
+ updates about reachable networks into instances. RFD 28
+ (https://github.com/joyent/rfd/tree/master/rfd/0028 ) documents potential
+ solutions to update propagation, and may become a dependency for Router
+ Objects.
+
+ ### Inter-DC issues.
+
+ <TBD >
+
+
## Future and Even Fringe Ideas.

### NAT to resolve same-prefix conflicts