High-speed reactive microservices abstract
This talk endeavors to explain high-speed reactive microservices architecture. High-speed reactive microservices are a set of patterns for building services that can readily back mobile and web applications at scale. The architecture favors a scale-up-and-out model over a pure scale-out model, to do more with less hardware. A scale-up model uses in-memory operational data, efficient queue hand-off with micro-batch streaming, and async calls to handle more requests on a single node.
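To make the queue hand-off and micro-batch streaming idea concrete, here is a minimal, hypothetical sketch of a timed/size micro-batcher in plain Java: calls accumulate in a batch that is handed off downstream either when it is full or when the flush interval elapses. The class and method names are illustrative, not from any particular library.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical micro-batcher: hands batches to a downstream consumer when the
// batch is full (size trigger) or when the flush interval elapses (time trigger).
class MicroBatcher<T> {
    private final int maxBatchSize;
    private final List<T> batch = new ArrayList<>();
    private final Consumer<List<T>> downstream;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    MicroBatcher(int maxBatchSize, long flushMillis, Consumer<List<T>> downstream) {
        this.maxBatchSize = maxBatchSize;
        this.downstream = downstream;
        // Time-based flush: periodically drain whatever has accumulated.
        timer.scheduleAtFixedRate(this::flush, flushMillis, flushMillis,
                TimeUnit.MILLISECONDS);
    }

    synchronized void add(T item) {
        batch.add(item);
        if (batch.size() >= maxBatchSize) {
            flush();   // size-based flush: hand off a full micro-batch
        }
    }

    synchronized void flush() {
        if (!batch.isEmpty()) {
            downstream.accept(new ArrayList<>(batch));
            batch.clear();
        }
    }

    void shutdown() {
        timer.shutdown();
        flush();   // drain any remaining items
    }
}
```

Batching amortizes the cost of the queue hand-off (or of a network/store round trip) across many calls, which is a large part of how a single node handles more requests.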
High-speed microservices endeavor to get back to OOP roots, where data and logic live together in a cohesive, understandable representation of the problem domain, and away from the separation of data and logic. Because the data lives with the service logic that operates on it, less time is spent dealing with cache-coherency issues: the services own, or lease, their data. The code to do the same things also tends to be smaller. In our experience it is not uncommon to need 3 to 20 servers for a service where a traditional system might need hundreds or even thousands. This is not just supposition but actual observation.
In the speakers' experience, you generally write less code, and the code runs faster. There are many Java frameworks and libraries for reactive microservices that you can use to build a high-speed microservice system, namely:
- Vert.x
- Akka
- Kafka
- Netty
- QBit Java Microservice Lib
The above are all great tools and technology stacks for building these types of services. This talk is not about a particular technology stack, but about how to implement this pattern on the JVM.
Attributes of high-speed services
High speed services have the following attributes:
- High speed services are in-memory services
- High speed services do not block
- High speed services own their data
- Scale out involves sharding services
- Reliability is achieved by replicating service stores
An in-memory service is a service that runs in-memory and is non-blocking. An in-memory service can load its data from a central data store, or even data-fault-load it: the service loads the data it owns asynchronously, without blocking, and continues to service other requests while the data is loading. It streams data in from its service store whenever leased data is not found, i.e., it faults the data into the service in streams.
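A minimal sketch of non-blocking data faulting, using `CompletableFuture` from the JDK: the hot path serves owned data straight from memory, and a miss faults the data in from the service store asynchronously, invoking the caller's callback when it arrives. `UserStore`, `UserService`, and their methods are hypothetical names for illustration, not a real API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

class User {
    final String id;
    User(String id) { this.id = id; }
}

// Stands in for the service store (e.g., a Cassandra-backed store);
// simulates an async read that never blocks the calling thread.
class UserStore {
    CompletableFuture<User> loadUser(String id) {
        return CompletableFuture.supplyAsync(() -> new User(id));
    }
}

class UserService {
    private final Map<String, User> inMemory = new ConcurrentHashMap<>();
    private final UserStore store = new UserStore();

    // Non-blocking read: serve from memory if the data is already owned,
    // otherwise fault it in from the store and call back when loaded.
    void getUser(String id, Consumer<User> callback) {
        User user = inMemory.get(id);
        if (user != null) {
            callback.accept(user);            // hot path: in-memory data
        } else {
            store.loadUser(id).thenAccept(loaded -> {
                inMemory.put(id, loaded);     // fault the data into the service
                callback.accept(loaded);
            });
        }
    }
}
```

The key point is that the caller's thread is never parked waiting on the store; the service keeps draining its request queue while faulted data streams in.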
Single-writer rule: only one service at any point in time can edit a particular set of service data. In-memory services either own their data outright or own it for a period of time. Owning the data for a period of time is the lease model.
Avoid the following:
- Caching (use sparingly for data from other services)
- Blocking
- Transactions
- Databases for operational data
Embrace the following:
- In-memory service data and data faulting
- Sharding
- Async callbacks and streams
- Replication / Batching / Remediations
- Service Stores for operational data
High speed services employ the following:
- Timed/Size Batching
- Callbacks
- Call interception to enable data faulting from the service store
- Data faulting for elasticity
- Data leasing
Why not just own the data outright? You can, if the service data is small enough. Leasing data provides a level of elasticity, which allows you to spin up more nodes. If you optimize and tune the data load from the service store to the service, then loading a user's data becomes trivial and very performant. The faster and more trivial the data-fault loading, the shorter you can lease the data and the more elastic your services are. Likewise, services can save data periodically to the service store, or even keep data in a fault-tolerant local store (store and forward) and update the service store in streaming micro-batches to accommodate speed and throughput. Leasing data is not the same as getting data from a cache: in the lease model only one node can edit the data at any given time.
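A hypothetical in-memory sketch of the lease model, showing how it preserves the single-writer rule while letting ownership move between nodes: a node may only write a shard of service data while it holds an unexpired lease on it. `LeaseRegistry` and its method names are illustrative; a real system would coordinate leases through something like Consul, etcd, or ZooKeeper rather than a single in-process registry.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical lease registry enforcing the single-writer rule:
// at most one node holds an unexpired lease on a given data key.
class LeaseRegistry {
    private static final class Lease {
        final String nodeId;
        final long expiresAtMillis;
        Lease(String nodeId, long expiresAtMillis) {
            this.nodeId = nodeId;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Lease> leases = new HashMap<>();

    // Grant the lease if it is free or expired; renew it if this node holds it.
    synchronized boolean acquire(String dataKey, String nodeId, long ttlMillis) {
        long now = System.currentTimeMillis();
        Lease current = leases.get(dataKey);
        if (current == null || current.expiresAtMillis <= now
                || current.nodeId.equals(nodeId)) {
            leases.put(dataKey, new Lease(nodeId, now + ttlMillis));
            return true;    // this node is now the single writer for dataKey
        }
        return false;       // another node holds an unexpired lease
    }

    synchronized boolean canWrite(String dataKey, String nodeId) {
        Lease current = leases.get(dataKey);
        return current != null && current.nodeId.equals(nodeId)
                && current.expiresAtMillis > System.currentTimeMillis();
    }
}
```

A shorter TTL makes the system more elastic (ownership can migrate sooner) at the cost of more frequent data-fault loads, which is exactly the trade-off described above.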
More will be covered in the talk:
- Service Sharding / Service Routing and discovery
- Fault tolerance / replication
- Service Store - definition and usage
- Active Objects / Service Actors or Event Bus loops
- Service discovery and health checks
The Active Object pattern consists of six elements:
- A client proxy to provide an interface for clients. The client proxy can be local (local client proxy) or remote (remote client proxy).
- An interface which defines the method requests on an active object.
- A queue of pending method requests from clients.
- A scheduler, which decides which request to execute next.
- A servant, whose methods are invoked by the scheduler to carry out the requests.
- Callbacks or futures through which clients receive results.
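A minimal, hypothetical Active Object in plain Java, collapsing the elements above into one class for brevity: the public methods act as the proxy, turning calls into method-request tasks on a queue, and a single scheduler thread (which here also plays the servant role) executes them one at a time, so the servant's state is only ever touched by one thread. All names are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical minimal Active Object: one queue, one scheduler thread,
// no locks on the servant state.
class CounterActiveObject {
    private long count = 0;   // servant state, touched only by the scheduler thread
    private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    CounterActiveObject() {
        Thread scheduler = new Thread(() -> {
            while (running || !pending.isEmpty()) {
                try {
                    Runnable request = pending.poll(10, TimeUnit.MILLISECONDS);
                    if (request != null) request.run();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        scheduler.start();
    }

    // Proxy method: enqueue a method request instead of mutating state directly.
    void increment() {
        pending.add(() -> count++);
    }

    // Proxy method returning a future, so callers never block on the servant.
    CompletableFuture<Long> get() {
        CompletableFuture<Long> future = new CompletableFuture<>();
        pending.add(() -> future.complete(count));
        return future;
    }

    void stop() {
        running = false;
    }
}
```

This is the same shape as a QBit ServiceQueue or an event-bus loop: callers hand method requests over a queue, and a single thread owns the state, which is why no locking of the service data is needed.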
QBit Website
What is Microservices Architecture?
QBit Java Microservices lib tutorials
The Java microservice lib. QBit is a reactive programming lib for building microservices: JSON, HTTP, WebSocket, and REST. QBit uses reactive programming to build elastic REST- and WebSocket-based, cloud-friendly web services. SOA evolved for mobile and cloud. ServiceDiscovery, Health, reactive StatService, events, and Java-idiomatic reactive programming for microservices.
Reactive Programming, Java Microservices, Rick Hightower
Java Microservices Architecture
[Microservice Service Discovery with Consul](http://www.mammatustech.com/Microservice-Service-Discovery-with-Consul)
Microservices Service Discovery Tutorial with Consul
[Reactive Microservices](http://www.mammatustech.com/reactive-microservices)
[High Speed Microservices](http://www.mammatustech.com/high-speed-microservices)
Reactive Microservices Tutorial, using the Reactor
QBit is mentioned in the Restlet blog
All code is written using JetBrains IntelliJ IDEA - the best IDE ever!
Kafka training, Kafka consulting, Cassandra training, Cassandra consulting, Spark training, Spark consulting
Tutorials
- QBit tutorials
- Microservices Intro
- Microservice KPI Monitoring
- Microservice Batteries Included
- RESTful APIs
- QBit and Reakt Promises
- Resourceful REST
- Microservices Reactor
- Working with JSON maps and lists
Docs
Getting Started
- First REST Microservice
- REST Microservice Part 2
- ServiceQueue
- ServiceBundle
- ServiceEndpointServer
- REST with URI Params
- Simple Single Page App
Basics
- What is QBit?
- Detailed Overview of QBit
- High level overview
- Low-level HTTP and WebSocket
- Low level WebSocket
- HttpClient
- HTTP Request filter
- HTTP Proxy
- Queues and flushing
- Local Proxies
- ServiceQueue remote and local
- ManagedServiceBuilder, consul, StatsD, Swagger support
- Working with Service Pools
- Callback Builders
- Error Handling
- Health System
- Stats System
- Reactor callback coordination
- Early Service Examples
Concepts
REST
Callbacks and Reactor
Event Bus
Advanced
Integration
- Using QBit in Vert.x
- Reactor-Integrating with Cassandra
- Using QBit with Spring Boot
- SolrJ and service pools
- Swagger support
- MDC Support
- Reactive Streams
- Mesos, Docker, Heroku
- DNS SRV
QBit case studies
QBit 2 Roadmap
Related Projects
- QBit Reactive Microservices
- Reakt Reactive Java
- Reakt Guava Bridge
- QBit Extensions
- Reactive Microservices