Commit 6f145fe
1 parent f3df763
3 files changed: +3 −4 lines changed
docs/2020.md

+3-4
@@ -33,7 +33,7 @@ NOTE: I list vendors only to illustrate and clarify, I'm sure your favorite vend
 
 #### Data Storage section
 
-- The main part is an events provider, so that we can implement some variation of an event-driven architecture. Kafka (e.g. Instaclustr), ActiveMQ, ZeroMQ, PubNub. Consider starting with PubNub.
+- The main part in this section is the **events** provider/ESB, so that we can implement some variation of an event-driven architecture. Kafka (e.g. Instaclustr), PubNub. Consider starting with PubNub.
 
 - Also Auth/IAM (AWS Cognito | Backendless) is in the data section.
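The event-driven pattern the hunk above describes can be sketched with a minimal, vendor-agnostic in-process bus (a real deployment would use Kafka or PubNub in place of the hypothetical `EventBus` below):

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process stand-in for an events provider (Kafka, PubNub, ...)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Services register interest in a topic instead of calling each other directly.
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(event)

# Usage: a signup service emits an event; an email service reacts to it.
bus = EventBus()
bus.subscribe("user.signup", lambda e: print("send welcome email to", e["email"]))
bus.publish("user.signup", {"email": "alice@example.com"})
```

The point of the pattern is the decoupling: the publisher never learns who consumes the event, so consumers can be added or scaled independently.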

@@ -96,10 +96,9 @@ Also E2E testing, auth, etc. And since SEO needs static data, maybe both back en
 </p>
 One example of FE app UI is: voice and chat applets. Don't forget that in your architecture. Just because you have a computer that is not seen, does not mean it's not there.
 
-- CDN that the FE|Client apps talk to. The CDN provides HTTPS offloading and early TLS termination. And it hides the IP of the 'REST' services, making them harder to attack. CDNs also provide HTTP3, which is UDP-based. CDNs handle the TLS certs. And a nice thing CDNs provide is caching: we set the cache in the header of each response. This significantly simplifies development and reduces operating costs. For example, set the cache on a REST API to 2 seconds. Don't use HTTP servers anymore (e.g. NGINX); use the CDN and other edge features (e.g. CDN77). The CDN is one of the big wins of migrating to the cloud: it is cheaper and faster.
+- Data plane: Edge / GeoDNS / Data Plane / CDN (w/ HTTPS), which, through the proxy, accesses the load-balanced stateless services (services include serving static assets like HTML, CSS, PNG, and streams such as video). A proxy would be e.g. EnvoyProxy | Istio. FE|Client apps talk to the CDN PoP closest to them. The CDN provides HTTPS offloading and early TLS termination. And it hides the IP of the 'REST' services, making them harder to attack. CDNs also provide HTTP3, which is UDP-based. CDNs handle the TLS certs. And a nice thing CDNs provide is caching: we set the cache in the header of each response. This significantly simplifies development and reduces operating costs. For example, set the cache on a REST API to 2 seconds. Don't use HTTP servers (e.g. NGINX); use the CDN and other edge features (e.g. CDN77). The CDN is one of the big wins of migrating to the cloud: it is cheaper and faster. So GeoDNS (for Blue/Green deployment) talks to the CDN, which talks to the proxy to help shape the traffic, and the proxy talks to the service container.
 
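The short-TTL caching idea above (e.g. 2 seconds on a busy REST endpoint) is just a `Cache-Control` response header; a framework-neutral sketch with a hypothetical `cached_response` helper:

```python
import json

def cached_response(payload, max_age_seconds=2):
    """Build an HTTP response whose Cache-Control header lets the CDN
    cache it for a short time (e.g. 2 s on a hot REST endpoint)."""
    body = json.dumps(payload)
    headers = {
        "Content-Type": "application/json",
        # 'public' lets the CDN (a shared cache) store it; max-age is the TTL.
        "Cache-Control": f"public, max-age={max_age_seconds}",
    }
    return 200, headers, body

status, headers, body = cached_response({"users_online": 1234})
print(headers["Cache-Control"])  # prints: public, max-age=2
```

Even a 2-second TTL collapses a burst of identical requests into one origin hit, which is where the cost and latency savings come from.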

-- HMI Micro Service in containers (Docker). The CDN, through the firewall, accesses the load-balanced stateless services (services include serving static assets like HTML, CSS, PNG, video; it is a file server behind a CDN). The client APIs and ViewModel talk to these services. These service containers are geo-distributed, at minimum EU and Americas data centers, but possibly more granular. The service containers are HMI - high memory, for example 512G of RAM or more. Cloud instances tend to have slow and variable I/O and you want to avoid it via RAM as disk. Section #4 in this write-up touches on it.
+- HMI Micro Service in containers (Docker), with a K8s manager (OpenShift). The client APIs and ViewModel talk to these services. These service containers are geo-distributed, at minimum EU and Americas data centers, but possibly more granular. The service containers are HMI - high memory, for example 512G of RAM or more. Cloud instances tend to have slow and variable I/O and you want to avoid it via RAM as disk. Section #4 in this write-up touches on it.
 They would use local Redis memory or local SQLite memory for things like queuing, and to have locally the things that are needed to operate the service. We try to avoid the network latency of remote services by having the data in the application RAM. You of course 'warm up' the container by accessing your data store and analytics data, and have a background daemon thread that keeps the local data fresh.
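The warm-up / keep-fresh pattern above can be sketched with an in-memory SQLite cache; `fetch_from_datastore` is a hypothetical stand-in for the real remote data store call:

```python
import sqlite3
import threading
import time

def fetch_from_datastore():
    """Hypothetical stand-in for the real (remote) data store query."""
    return [("feature_flags", "v2"), ("price_tier", "gold")]

class LocalCache:
    """In-RAM SQLite cache: warmed at startup, refreshed by a daemon thread."""
    def __init__(self, refresh_seconds=30):
        self.db = sqlite3.connect(":memory:", check_same_thread=False)
        self.lock = threading.Lock()
        self.db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
        self.refresh()  # warm up before serving any traffic
        t = threading.Thread(target=self._refresher,
                             args=(refresh_seconds,), daemon=True)
        t.start()

    def refresh(self):
        rows = fetch_from_datastore()
        with self.lock:
            self.db.execute("DELETE FROM kv")
            self.db.executemany("INSERT INTO kv VALUES (?, ?)", rows)

    def _refresher(self, interval):
        # Background daemon: keeps the local copy fresh.
        while True:
            time.sleep(interval)
            self.refresh()

    def get(self, key):
        # Reads are served from RAM; no network round trip on the hot path.
        with self.lock:
            row = self.db.execute("SELECT v FROM kv WHERE k = ?",
                                  (key,)).fetchone()
        return row[0] if row else None

cache = LocalCache()
print(cache.get("feature_flags"))  # prints: v2
```

Redis in a sidecar would play the same role; the design choice is that the hot path never leaves the instance, and staleness is bounded by the refresh interval.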
<br />
 IMPORTANT: A micro-services application architecture is hard. Micro services are used to allow independent scaling: we add containers to increase scale. This obviously does not work if different micro services are bottlenecked by the back-pressure of the same downstream service (e.g. an S3 bucket). It's not a micro service unless someone engineers a design that allows independent scaling, and a way to plan and manage capacity. You could still have a monolithic application architecture in different containers! You know you are doing it wrong if your design took only days to come up with and did not include stress testing to verify the design hypothesis of the capacity plan.

docs/diag.png (2 KB)

docs/diag0.png (-24.1 KB)

0 commit comments