- Atomically replace one service with another + distributed
- Proxy service
- Routing: support version
- Rollback / switch version button
- Complex deps between components
- unique endpoint id
- get config
- update config
- subscribe for updates with a companion service
- global version for a huge collection of web endpoints, or a version per endpoint? hard to track all version changes
- multiple versions live at the same time
- how to indicate that a service is missing a concrete version of another service
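One way to surface a missing concrete version: each endpoint advertises the versions it serves, and discovery fails fast when the caller's requirement is absent. A minimal sketch; `require_version` and the version-string format are hypothetical, not part of any framework named here:

```python
def require_version(advertised, wanted):
    """Return `wanted` if some endpoint advertises it, otherwise raise,
    so the caller sees 'service lacks version X' instead of a vague error."""
    if wanted in advertised:
        return wanted
    raise LookupError(
        f'no endpoint advertises version {wanted}; have {sorted(advertised)}')
```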
- simple sharding param
- complex sharding: /store/{shard1}/item/{shard2}/
- versioning?
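For the complex sharding case, the shard keys can be pulled straight out of the route template. A sketch; the template syntax mirrors the `/store/{shard1}/item/{shard2}/` note above, and `extract_shards` is a hypothetical helper:

```python
import re

def extract_shards(template, path):
    """Turn '/store/{shard1}/item/{shard2}/' into a regex and pull the
    shard parameters out of a concrete request path."""
    pattern = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', template)
    m = re.fullmatch(pattern, path)
    if m is None:
        raise ValueError(f'{path!r} does not match {template!r}')
    return m.groupdict()
```

The resulting dict (e.g. `{'shard1': ..., 'shard2': ...}`) is what a router would hash to pick a shard.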
- swarm orchestration
- requires an orchestration master -> need a config before running the orchestration tool
- separate scheduler with deps support
- passing tracing.span into internal calls
- logging with span: how to handle logging in libraries without span support? global logger redefinition
- don't pay for invocation / easy flow
- don't pay for dynamic features (version updates)
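The "global logger redefinition" idea above can be done without redefining anything: a logging filter reads the current span from a context variable, so records from libraries with no tracing support get the span attached for free. A sketch, assuming a span id is just a string:

```python
import contextvars
import logging

current_span = contextvars.ContextVar('current_span', default='-')

class SpanFilter(logging.Filter):
    """Attach the active span id to every record passing through a handler."""
    def filter(self, record):
        record.span = current_span.get()
        return True

# Install once, globally: any handler formatting '%(span)s' now sees the
# span, including records from libraries that know nothing about tracing.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(span)s %(name)s %(message)s'))
handler.addFilter(SpanFilter())
logging.getLogger().addHandler(handler)

current_span.set('span-123')
logging.getLogger('some.library').warning('no tracing support here')
```

This also keeps the "don't pay for invocation" goal: the filter is a dict lookup per record, nothing more.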
- Zk watch - enables cheap watching, but is not consistent until the vnode registers as a service user: endpoint/{state}/ and endpoint/{state}/subscribers/ -> store the vnode with user info; unique discovery id?
- Zk sequence - as unique id generator
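With kazoo as the client, sequential znodes give the unique ids: `client.create('/ids/id-', b'', sequence=True)` returns the created path with a zero-padded counter appended (this follows kazoo's documented `sequence=True` flag; the `/ids/id-` prefix is an assumption). Extracting the numeric id is then just string parsing:

```python
def seq_id_from_path(path, prefix='id-'):
    """Parse the monotonically increasing counter ZooKeeper appended to a
    sequential znode, e.g. '/ids/id-0000000042' -> 42."""
    name = path.rsplit('/', 1)[-1]
    if not name.startswith(prefix):
        raise ValueError(f'unexpected znode name: {name!r}')
    return int(name[len(prefix):])
```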
- tracing headers
- message headers (ttl)
- payload
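The three-part message shape above (tracing headers, message headers with ttl, payload) could be a plain envelope with an expiry check on the receiving side. A sketch; the field names are assumptions:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Envelope:
    payload: bytes
    tracing: dict = field(default_factory=dict)   # e.g. trace_id, span_id
    ttl: float = 30.0                             # seconds
    sent_at: float = field(default_factory=time.time)

    def expired(self, now=None):
        """Receivers drop envelopes whose ttl elapsed in transit."""
        now = time.time() if now is None else now
        return now - self.sent_at > self.ttl
```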
Usage:
- rpc (timeout)
- push
Services:
- register
- subscribe / unsubscribe (queue-like)
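The service operations above (register/subscribe/unsubscribe with queue-like delivery, plus push) can be sketched as an in-memory broker. Queue-like here means each pushed message goes to exactly one subscriber, round-robin, rather than fanning out; all names are illustrative, not a real API:

```python
from collections import defaultdict

class Broker:
    """Queue-like delivery: each push reaches exactly ONE subscriber,
    chosen round-robin, unlike fan-out pub/sub."""
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> list of handlers
        self._next = defaultdict(int)    # topic -> round-robin cursor

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def unsubscribe(self, topic, handler):
        self._subs[topic].remove(handler)

    def push(self, topic, message):
        handlers = self._subs[topic]
        if not handlers:
            raise LookupError(f'no subscribers for {topic!r}')
        i = self._next[topic] % len(handlers)
        self._next[topic] += 1
        return handlers[i](message)
```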
- Application (global process)
- Service
- Transport
- Tracing
- Metrics
- Circuit breaker
- Configs
- find service
- register
- config get / subscribe
- config update
- name
- type (rpc, queue)
- route type: http, amqp, redis, database, local?
- route persistence: persistent, dynamic
- sharding
- unique
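The route attributes listed above map naturally onto a small descriptor. A sketch; the allowed values mirror the notes (rpc/queue, http/amqp/redis/database/local, persistent/dynamic), while the class itself is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Route:
    name: str
    type: str            # 'rpc' | 'queue'
    transport: str       # 'http' | 'amqp' | 'redis' | 'database' | 'local'
    persistence: str     # 'persistent' | 'dynamic'
    sharding: Optional[str] = None   # sharding parameter, if any
    unique: bool = False             # unique endpoint id required?

    def __post_init__(self):
        # fail early on typos instead of at routing time
        if self.type not in ('rpc', 'queue'):
            raise ValueError(f'unknown route type: {self.type!r}')
```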
ProxyService
Encapsulates discovery and configs. We can create multiple applications for testing purposes.
Usually there will be one application per process.
ProxyEndpoint <-transport-> LocalEndpoint : Service
- http
- amqp
- redis
- IPC/local
- tcurl analogue
- mock for logging, tracer, metrics, discovery, config
- http<->amqp relay (relay/host/vhost/)
- logging
- metrics
- tracing
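Mocks for the tracer/metrics/logger are what make the multiple-applications-for-testing idea workable: each test Application gets recording stand-ins instead of live backends. A sketch of one such recording tracer; the `span` context-manager API is an assumption, not a fixed interface:

```python
from contextlib import contextmanager

class MockTracer:
    """Records (name, tags) for every finished span instead of exporting
    it, so tests can assert on what would have been traced."""
    def __init__(self):
        self.finished = []

    @contextmanager
    def span(self, name, **tags):
        try:
            yield tags
        finally:
            self.finished.append((name, tags))
```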
- tipsi pypi
- dependency management + versions
- Supports DB configuration via the Django admin site
- Separate docker container
- Separate codebase
class AppRoutes(kit.Routes):
    deployment = '{TIPSI_CONFIG}'
    routes = {
        'lightspeed.config_update': {'type': 'redis'},
        # alternative: resolve via discovery, with a static fallback
        # 'lightspeed.config_update': ConfigDiscovery['lightspeed_config'] or DefaultConfig,
    }
class LightSpeedSync(Service):
    name = 'lightspeed'
    depend = {
        'hard': ['integration_api'],  # we cannot work without this service
        'soft': ['lightspeed'],       # we notify lightspeed, but can work without it
    }

    def serviceStart(self): pass

    def serviceStop(self): pass

    @subscribe('config_update')
    def config_update(self):
        new_config = {}
        # push creates a forward span
        # usage:
        self.rpc.lightspeed.config_update.push(new_config)
        self.rpc[LIGHTSPEED].config_update.push(new_config)  # equivalent, name via constant

    @rpc('ping')
    def ping(self):
        # after return
        return 'pong'

    @rpc('barcode_sync_clear')
    def run_sync(self, barcodes):
        # discover integration api
        # new span -> 'integration_api'
        # check circuit breaker / open span / perform call / get response / close span
        # return result
        return self.rpc.integration_api.barcode_sync_clear(barcodes)
class BarcodeSyncClear(GenericAPIView, DRFService):
    name = ['integration_api', 'barcode_sync_clear']

    @rpc()  # can be called as just barcode_sync_clear()
    def patch(self, request, *args, **kwargs):
        pass
class PublicWineViewset(ModelViewSet, DRFService):
    '''
    DRF APIs should pass:
    * user_id

    Call shape: retail.store[store_id].public_wines.get(inventory_id)
    TODO: url can store params for HTTP api_version, store_id, inventory_id
    '''
    name = 'public_wines'

    @rpc('create')  # item parameters
    def create(self, request, *a, **k): pass

    @rpc('list')  # filters
    def list(self, request, *a, **k): pass

    @rpc('get')  # id
    def retrieve(self, request, *a, **k): pass

    @rpc('update')  # id
    def update(self, request, *a, **k): pass

    @rpc('delete')  # id
    def destroy(self, request, *a, **k): pass