docs/2020.md
#### Data Storage section
- The main part of this section is the **events** provider/ESB, so that we can implement some variation of an event-driven architecture. Options: Kafka (e.g. Instaclustr), PubNub. Consider starting with PubNub.
- Auth/IAM (AWS Cognito | Backendless) also belongs in the data section.
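A minimal sketch of the event-driven idea, using a hypothetical in-process event bus as a stand-in for the real events provider (Kafka, PubNub); the topic name and handlers are illustrative only:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Tiny in-process stand-in for an events provider (Kafka, PubNub, ...)."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Services react to published events; no service calls another directly.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log: list[dict] = []

# Two independent consumers of the same event stream.
bus.subscribe("user.signed_up", lambda e: audit_log.append(e))
bus.subscribe("user.signed_up", lambda e: print(f"welcome email to {e['email']}"))

bus.publish("user.signed_up", {"email": "alice@example.com"})
```

The point of the pattern: producers don't know who consumes an event, so new services can be added without touching existing ones.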
</p>
One example of an FE app UI is voice and chat applets. Don't forget those in your architecture: just because a computer is not seen does not mean it's not there.
- Data plane: Edge / GeoDNS / CDN (w/ HTTPS), which, through the proxy, accesses the load-balanced stateless services (services include serving static assets like HTML, CSS, PNG, and streams such as video). A proxy would be e.g. Envoy Proxy | Istio. FE|Client apps talk to the CDN PoP closest to them. The CDN provides HTTPS offloading and early TLS termination, and it hides the IPs of the 'REST' services, making them harder to attack. CDNs also provide HTTP/3 (UDP-based) and handle the TLS certs. Another nice thing CDNs provide is caching: we set the cache lifetime in the header of each response, which significantly simplifies development and reduces operating costs. For example, set the cache on a REST API to 2 seconds. Don't run HTTP servers (e.g. NGINX) anymore; use the CDN and other edge features (e.g. CDN77). The CDN is one of the big wins of migrating to the cloud: it is cheaper and faster. So GeoDNS (which also enables Blue/Green deployment) talks to the CDN, the CDN talks to the proxy to help shape the traffic, and the proxy talks to the service container.
- HMI micro services in containers (Docker), with a K8s manager (OpenShift). The client APIs and ViewModel talk to these services. These service containers are geo-distributed, at minimum in EU and Americas data centers, but possibly more granular. The service containers are HMI - high-memory instances, for example 512 GB of RAM or more. Cloud instances tend to have slow and variable I/O, and you want to avoid it by using RAM as disk. Section #4 in this write-up touches on this.
They would use local in-memory Redis or local in-memory SQLite for things like queuing, and to keep locally the things the service needs to operate. We try to avoid the network latency of remote services by having the data in the application's RAM. You of course 'warm up' the container by accessing your data store and analytics data, and run a background daemon thread that keeps the local data fresh.
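The warm-up plus background-refresh pattern can be sketched as follows; a plain dict stands in for the local Redis/SQLite store, and `fetch_from_datastore` is a hypothetical placeholder for the real data-store call:

```python
import threading

local_cache: dict[str, str] = {}

def fetch_from_datastore() -> dict[str, str]:
    # Hypothetical: in a real service this would hit the data store / analytics.
    return {"feature_flags": "v42", "price_table": "2020-01"}

def warm_up() -> None:
    # Populate the in-RAM cache before the container takes any traffic.
    local_cache.update(fetch_from_datastore())

def refresher(interval_s: float, stop: threading.Event) -> None:
    # Background daemon thread that keeps the local data fresh.
    while not stop.wait(interval_s):
        local_cache.update(fetch_from_datastore())

warm_up()
stop = threading.Event()
threading.Thread(target=refresher, args=(0.1, stop), daemon=True).start()

# Request handlers now read from RAM, avoiding per-request network latency.
print(local_cache["feature_flags"])
stop.set()
```

`Event.wait(interval)` doubles as the sleep and the shutdown signal, so the refresher exits promptly when the container stops.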
<br />
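The "set the cache in the header" idea from the CDN bullet above can be sketched as an origin handler that emits a short `Cache-Control` lifetime, which the CDN edge honors. The 2-second value mirrors the example in the text; the endpoint itself is hypothetical:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        # The CDN edge caches this response for 2 seconds, so most
        # requests are served from the PoP and never reach the origin.
        self.send_header("Cache-Control", "public, max-age=2")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args) -> None:  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ApiHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_address[1]}/")
print(resp.headers["Cache-Control"])  # public, max-age=2
server.shutdown()
```

The origin stays dumb; the cache policy travels with each response, so changing it is a one-line deploy rather than a CDN reconfiguration.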
IMPORTANT: Micro services application architecture is hard. Micro services are used to allow independent scaling: we add containers to increase scale. This obviously does not work if different micro services are bottlenecked by the back-pressure of the same downstream service (e.g. an S3 bucket). It's not a micro service unless someone engineers a design that allows independent scaling, along with a way to plan and manage capacity. You could still have a monolithic application architecture spread across different containers! You know you are doing it wrong if your design took only days to come up with and did not include stress testing to verify the design hypothesis of the capacity plan.
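The shared-bottleneck point can be made concrete with a back-of-the-envelope capacity check (all numbers are made up for illustration): adding containers stops raising throughput once the common downstream service saturates.

```python
def effective_throughput(containers: int, per_container_rps: float,
                         downstream_limit_rps: float) -> float:
    # Each container adds capacity, but every request also hits the same
    # downstream service (e.g. one S3 bucket), which caps the whole system.
    return min(containers * per_container_rps, downstream_limit_rps)

# Illustrative numbers: 200 rps per container, shared downstream caps at 500 rps.
for n in (1, 2, 3, 4, 8):
    print(n, effective_throughput(n, 200, 500))
# The curve goes flat from 3 containers on: not independently scalable.
```

A stress test is the empirical version of this calculation; if the measured curve flattens like this, the "micro services" share a bottleneck and the capacity plan needs rework.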