This might be a bit jargon-y ... well, so be it -- maybe our language has been polluted by marketing speak -- that's just the way it is ... I apologize for that ... maybe I'll be able to edit, revise, and clarify later ... but GitHub Issues are just about getting the thought down so we can start thinking about developing something...
The general point is that the network is the computer ... or the LARGER entire network [of smart networks of networks of networks of smart sensors] really, actually, truly IS The Computer ... it's not just marketing speak -- that's the paradigm we need to use [temporarily, at least].
It's not just running machines in the cloud ... or running multiple services in a continuously-optimized multi-cloud environment ... picking and choosing among the best options that AWS, Azure, GCP, OracleCloud, Paperspace Core, Lambda or some other data center or HPC compute center offer ... we envision an intelligent space in which larger, deeper, hybrid intelligence is a function of the information delivered -- that information fabric includes thousands of edge-computing networks of smart sensors ... the entire network of multiple smart networks is the computer ... so it really comes down to picking, choosing, selecting and taking in all of the information that we can actually hope to USE somewhat intelligently.
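To make the "continuously-optimized multi-cloud" idea concrete, here is a minimal sketch of re-selecting a provider from observed metrics. The provider names are real, but every number, field name, and the scoring formula are invented for illustration -- a real selector would pull live spot prices and measured latencies from the monitoring database.

```python
# Hypothetical sketch: periodically re-pick a cloud provider from
# observed metrics. Prices and latencies below are made-up numbers.
from dataclasses import dataclass


@dataclass
class ProviderSnapshot:
    name: str
    usd_per_gpu_hour: float   # observed spot price (illustrative)
    p99_latency_ms: float     # observed latency to our edge sensors


def score(p: ProviderSnapshot, cost_weight: float = 0.7) -> float:
    # Lower is better: blend normalized cost and latency.
    return cost_weight * p.usd_per_gpu_hour + (1 - cost_weight) * (p.p99_latency_ms / 100)


def pick_provider(snapshots: list[ProviderSnapshot]) -> ProviderSnapshot:
    return min(snapshots, key=score)


snapshots = [
    ProviderSnapshot("AWS", 3.06, 42.0),
    ProviderSnapshot("GCP", 2.48, 55.0),
    ProviderSnapshot("Paperspace", 2.24, 80.0),
]
print(pick_provider(snapshots).name)
```

The point of the sketch is only that the choice is data-driven and re-runnable: as the monitoring I/O updates the snapshots, the selection can change.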
Without any doubt, profiling and process monitoring are gigantically important to information technology ... designing the observability into the code is huge ... developing your own databases and data structures is essential for creating the tools and dashboards that allow one to discern a level of situational awareness within 60 seconds.
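As a sketch of what "designing the observability into the code" might look like in miniature: a decorator that records call counts and timings into a SQLite table a dashboard could poll. All function and table names here are illustrative, not part of any existing CloudKernel code.

```python
# Minimal observability sketch: instrument functions so every call
# writes a timing row into a metrics table (in-memory SQLite here;
# a real system would use its own database and data structures).
import functools
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metrics (name TEXT, duration_s REAL, ts REAL)")


def observed(fn):
    """Record wall-clock duration of each call into the metrics table."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            db.execute("INSERT INTO metrics VALUES (?, ?, ?)",
                       (fn.__name__, time.perf_counter() - start, time.time()))
    return wrapper


@observed
def ingest_sensor_reading(value: float) -> float:
    return value * 2  # stand-in for real work


for v in range(5):
    ingest_sensor_reading(v)

# The kind of query a "60-second situational awareness" dashboard would run:
count, avg_s = db.execute(
    "SELECT COUNT(*), AVG(duration_s) FROM metrics WHERE name = ?",
    ("ingest_sensor_reading",)).fetchone()
print(count)
```

The design choice worth noting: the metric write happens in `finally`, so even failing calls leave a trace -- failures are exactly what the dashboard needs to surface quickly.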
As with the realm of something like open source development and the whole competitive world of Linux distros and container images ... the entire macro network is the [quantum] computer, but it is unusable without observability engineering, monitoring and profiling to continuously manage, tune, optimize and reconfigure that macro network [of sensor networks] as necessary.
Big Monitorability, in the larger sense of the entire macro network of smart sensor networks used by hybrid human-machine, people-driven intelligent systems, defines the core of CloudKernel.
The USABLE part of the cloud is defined by the kernel ... there will be quadrillions or quintillions or more threads of activity everywhere which could be used, but what is actually usable is all about observability-engineered monitoring I/O, which populates the database feeding the dashboard to facilitate optimization ... this business is really about how people use their own clouds to deliver value to people.
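The used-versus-usable distinction above can be sketched as a filter: of all the activity the kernel sees, only threads emitting fresh monitoring I/O count as usable. The record fields and the freshness threshold are invented for illustration.

```python
# Sketch: only monitored activity with fresh telemetry is "usable".
# Field names and the 60-second freshness limit are illustrative.
activity = [
    {"thread_id": 1, "monitored": True,  "last_sample_age_s": 2.0},
    {"thread_id": 2, "monitored": False, "last_sample_age_s": None},
    {"thread_id": 3, "monitored": True,  "last_sample_age_s": 600.0},
]

FRESHNESS_LIMIT_S = 60.0  # stale telemetry is treated as no telemetry


def usable(threads):
    """Keep only threads whose monitoring I/O is present and fresh."""
    return [t for t in threads
            if t["monitored"]
            and t["last_sample_age_s"] is not None
            and t["last_sample_age_s"] <= FRESHNESS_LIMIT_S]


print([t["thread_id"] for t in usable(activity)])  # only thread 1 qualifies
```

However vast the pool of threads gets, the kernel's usable slice is whatever passes a filter like this -- which is why the monitoring I/O, not the raw capacity, defines the cloud.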