
Generate lookup tables #4

Open
stan-dot opened this issue Jul 2, 2024 · 24 comments · May be fixed by #14

@stan-dot
Collaborator

stan-dot commented Jul 2, 2024

there is a current script that does this for sections of the spectra:
it finds a Bragg angle and then finds the maximum, for 10-15 Bragg angles

IDGAP lookup table

@stan-dot
Collaborator Author

stan-dot commented Jul 2, 2024

@callumforrester where is it best to cache values like lookup tables that change a couple of times during a run, or for a different experiment? Ideally this would happen automatically during commissioning. There are 15 different files for different harmonics here.

@stan-dot
Collaborator Author

stan-dot commented Jul 2, 2024

for each energy there is an optimization of the undulator gap (SR). The Bragg angle is derived from the crystal cut; the crystal spacing is what we need - Ge or Si for this. Then move to the Bragg angle and scan the gap.

then we measure at one of the diodes, maybe D7 - BL18I-DI-PHDGN-07 - instant RBV

we get a curve with a peak, and we take the peak

columns: energy of the monochromator, gap of the undulator
then we fit a quadratic curve. we need to cache a python callable.
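
A rough sketch of that last step (an assumption about the shape, not existing beamline code; the numbers in the usage comment are made up), where the cached callable is just a numpy quadratic fit over the measured (energy, gap) columns:

import numpy as np

def make_energy_to_gap(energies_kev, gaps_mm):
    """Fit a quadratic to measured (energy, gap) points and return a callable."""
    coefficients = np.polyfit(energies_kev, gaps_mm, deg=2)  # quadratic fit
    return np.poly1d(coefficients)  # callable mapping energy -> gap

# hypothetical usage with made-up numbers:
# energy_to_gap = make_energy_to_gap([2.0, 2.5, 3.0], [6.1, 6.4, 6.9])
# gap_at_2_7_kev = energy_to_gap(2.7)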

@callumforrester
Collaborator

I believe MX have been using redis for this, @dperl-dls may know more...

@dperl-dls

We'd like to be using redis for things like this; currently we're just using it for live feature-flag lookup. This kind of thing is definitely on the roadmap though.

@aragaod does use redis to cache some expensive parameters, but that's his personal deployment for one beamline and hasn't had any input from us. It's enough to say that it works well for this use case, but we'll want something properly supported if we're going to be relying on it seriously, or at least have very good fallback options

@aragaod

aragaod commented Jul 4, 2024

I have some python code that does lookup tables for me.

Basically

Changes energy, waits for feedback to recover, gathers values, stores them in redis.

Then at the end a piece of python3 outside GDA gathers the data from redis and, in one case, uses a jinja2 template to generate the GDA-formatted lookup-style file, and in another case calculates the coefficients of a polynomial based on the redis data. Then it pushes those coefficients to the respective holder EPICS PVs.

Because I could not install redis in jython I wrote a urllib-based jython implementation and have an HTTP-to-redis API that jython talks to. All working for 5 years. Our redis server runs on i04-control and in 5 years has never failed us.

We also use redis for a variety of other purposes, from caching to temporary storage of data in JSON format for later use outside GDA. Most redis keys are set up with an expiry, so we treat redis as volatile data, not long-term storage.

Would be great to have a redis server side by side with every beamline and consider it part of the infrastructure. It is incredibly useful.
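
A minimal sketch of that store-then-fit pattern with redis-py (the key name, units and expiry are assumptions, not the real i04 schema):

import json
import numpy as np
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def record_point(energy_kev, gap_mm):
    # append one measured (energy, gap) pair; expire after a day so the data stays volatile
    r.rpush("lut:idgap:points", json.dumps({"energy": energy_kev, "gap": gap_mm}))
    r.expire("lut:idgap:points", 24 * 3600)

def fit_coefficients(degree=2):
    points = [json.loads(p) for p in r.lrange("lut:idgap:points", 0, -1)]
    xs = [p["energy"] for p in points]
    ys = [p["gap"] for p in points]
    return list(np.polyfit(xs, ys, degree))  # coefficients to push to the holder PVs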

@stan-dot
Collaborator Author

stan-dot commented Jul 5, 2024

there is a separate issue to explore this already, so it looks like we're on the right track

DiamondLightSource/blueapi#537

@stan-dot
Collaborator Author

stan-dot commented Jul 5, 2024

@aragaod could you link me the code please?

@stan-dot
Collaborator Author

stan-dot commented Jul 9, 2024

dodal DCM

#Bragg angle against roll( absolute number)
#reloadLookupTables()

last update 2023/01/19 NP

Units Deg mrad
26.4095 -0.2799
6.3075 -0.2799

It's in dodal/devices/unit_tests/test_daq_configuration/lookup/BeamLineEnergy_DCM_Roll_converter.txt
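
For reference, a rough sketch of reading a two-column file in that format and interpolating between rows (illustrative only, not the existing dodal converter code):

import numpy as np

def load_lookup(path):
    # keep only the two-column numeric rows, skipping comments, the update note and the units header
    rows = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue
            try:
                rows.append((float(parts[0]), float(parts[1])))
            except ValueError:
                continue  # any remaining non-numeric line
    rows.sort()  # np.interp needs increasing x values
    bragg_deg, roll_mrad = zip(*rows)
    return lambda angle_deg: float(np.interp(angle_deg, bragg_deg, roll_mrad))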

@stan-dot stan-dot self-assigned this Jul 9, 2024
@stan-dot
Collaborator Author

$ helm install my-redis bitnami/redis --version 19.6.3

Error: INSTALLATION FAILED: 2 errors occurred:
        * admission webhook "validate.kyverno.svc-fail" denied the request: 

policy StatefulSet/i18-beamline/my-redis-replicas for resource violations: 

custom-drop-capabilities:
  autogen-validate-capabilities-hostNetwork-not-set: 'validation failure: The list
    spec.securityContext.drop must include NET_RAW'
  autogen-validate-capabilities-hostNetwork-true: preconditions not met
kp-validate-custom-validate-user-and-group-ids:
  autogen-validate fsgroup ids: preconditions not met
  autogen-validate group id: preconditions not met
  autogen-validate supplemental group ids: preconditions not met
  autogen-validate user id: preconditions not met
kp-validate-hostpath-restrictions-custom:
  autogen-enforce absolute paths: preconditions not met
  autogen-restrict directories: preconditions not met

        * admission webhook "validate.kyverno.svc-fail" denied the request: 

policy StatefulSet/i18-beamline/my-redis-master for resource violations: 

custom-drop-capabilities:
  autogen-validate-capabilities-hostNetwork-not-set: 'validation failure: The list
    spec.securityContext.drop must include NET_RAW'
  autogen-validate-capabilities-hostNetwork-true: preconditions not met
kp-validate-custom-validate-user-and-group-ids:
  autogen-validate fsgroup ids: preconditions not met
  autogen-validate group id: preconditions not met
  autogen-validate supplemental group ids: preconditions not met
  autogen-validate user id: preconditions not met
kp-validate-hostpath-restrictions-custom:
  autogen-enforce absolute paths: preconditions not met
  autogen-restrict directories: preconditions not met

@DiamondJoseph
Collaborator

I see that Sci-Comp resolved the issue for the i18 cluster (it had previously been seen on another beamline cluster, but the fix was not applied to the other clusters), but it raises the question (or I raise it, with this issue as justification): central redis or per-beamline redis?

@stan-dot
Collaborator Author

  1. tested-on-beamline redis is the first step down both paths anyway
  2. redis is a fast cache and per-beamline would be the fastest. We could experiment with a plan calling either the central or the local instance and measuring the response time, or just measure the delay in pinging either server as a low-effort version (sketched below).
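
A low-effort version of that check (hostnames are placeholders; the per-beamline one matches the helm deployment below, the central one is hypothetical):

import time
import redis

def median_ping_ms(host, port=6379, samples=50):
    client = redis.Redis(host=host, port=port)
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        client.ping()
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[samples // 2]

# median_ping_ms("blue-redis-master.i18-beamline.svc.cluster.local")  # per-beamline
# median_ping_ms("central-redis.diamond.ac.uk")                       # hypothetical central instance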

@stan-dot
Collaborator Author

stan-dot commented Aug 1, 2024

redis deployment notes:

$ helm install blue-redis oci://registry-1.docker.io/bitnamicharts/redis
Pulled: registry-1.docker.io/bitnamicharts/redis:19.6.4
Digest: sha256:42bd28a5365ce7da88e2fafec59cd4eb41827db26b2e42050dc6e6607baaaa51
NAME: blue-redis
LAST DEPLOYED: Mon Jul 29 16:51:04 2024
NAMESPACE: i18-beamline
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 19.6.4
APP VERSION: 7.2.5

** Please be patient while the chart is being deployed **

Redis® can be accessed on the following DNS names from within your cluster:

    blue-redis-master.i18-beamline.svc.cluster.local for read/write operations (port 6379)
    blue-redis-replicas.i18-beamline.svc.cluster.local for read-only operations (port 6379)



To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace i18-beamline blue-redis -o jsonpath="{.data.redis-password}" | base64 -d)

To connect to your Redis® server:

1. Run a Redis® pod that you can use as a client:

   kubectl run --namespace i18-beamline redis-client --restart='Never'  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image docker.io/bitnami/redis:7.2.5-debian-12-r4 --command -- sleep infinity

   Use the following command to attach to the pod:

   kubectl exec --tty -i redis-client \
   --namespace i18-beamline -- bash

2. Connect using the Redis® CLI:
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h blue-redis-master
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h blue-redis-replicas

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace i18-beamline svc/blue-redis-master 6379:6379 &
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379

WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - replica.resources
  - master.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

@stan-dot
Collaborator Author

stan-dot commented Aug 7, 2024

should make an ophyd-async device fwiw

@DiamondJoseph
Collaborator

should make an ophyd-async device fwiw

For communicating with Redis? Disagree, it should be a plan stub.

@stan-dot
Collaborator Author

stan-dot commented Aug 7, 2024

@callumforrester said otherwise, that the RunEngine should never do I/O on its own, so it'd be like what we have with the PandA HDF writing being defined in the ophyd-async panda folder

@stan-dot
Collaborator Author

stan-dot commented Aug 7, 2024

after conversation, this could be implemented in the following ways:

  • ophyd-async device - this is similar to a GDA scannable, but should work for now
  • stub - but this would block the RunEngine with I/O
  • a callback - but it's less handy for the optimization loop

for now we'll do an ophyd-async device (a rough sketch of the shape is below)
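
A rough sketch of what that device shape could look like (the class name, PV suffix and injected conversion callable are all assumptions, and the import paths follow a recent ophyd-async layout, so they may need adjusting for other versions):

from ophyd_async.core import AsyncStatus, StandardReadable
from ophyd_async.epics.core import epics_signal_rw

class LookupUndulatorGap(StandardReadable):
    """Acts as a Movable: converts a requested energy to a gap via a cached lookup callable."""

    def __init__(self, prefix, energy_to_gap, name=""):
        self._energy_to_gap = energy_to_gap  # e.g. the cached quadratic fit from earlier
        with self.add_children_as_readables():
            self.gap = epics_signal_rw(float, prefix + "GAP")  # hypothetical PV suffix
        super().__init__(name=name)

    @AsyncStatus.wrap
    async def set(self, energy_kev: float):
        # look the gap up from the cached fit and drive the gap signal
        await self.gap.set(self._energy_to_gap(energy_kev))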

@callumforrester
Collaborator

@callumforrester said otherwise, that the RunEngine should never do I/O on its own

This is not quite true, I said plans should never do I/O on their own, so you shouldn't have a plan that says

def my_plan():
    yield from mv(motor, 1)
    redis.put({"foo": "bar"})  # Does I/O!
    yield from mv(motor, 2)

The plan should be runnable offline, so you should have a yield from and get the RunEngine to do the I/O, either directly by defining a custom verb or via a device, as @DominicOram et al. have been doing.
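
For what it's worth, a sketch of the custom-verb route under those constraints (the verb name, payload and Redis call are made up; the point is only that the plan yields a message and the RunEngine owns the I/O):

import redis
from bluesky import RunEngine
from bluesky.plan_stubs import mv
from bluesky.utils import Msg

RE = RunEngine()
_redis = redis.Redis(host="localhost", port=6379)

async def _store_lookup(msg: Msg):
    # runs inside the RunEngine, so the plan itself never touches Redis
    key, value = msg.args
    _redis.set(key, value)

RE.register_command("store_lookup", _store_lookup)

def my_plan(motor):
    yield from mv(motor, 1)
    yield Msg("store_lookup", None, "foo", "bar")  # I/O delegated to the RunEngine
    yield from mv(motor, 2)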

@stan-dot
Collaborator Author

stan-dot commented Aug 8, 2024

custom verb - that's the first time this has been mentioned; it might have the shape of the thing @DiamondJoseph was postulating, i.e. plan stubs.

still, it's a more coupled solution, going against the modular approach.

@DominicOram

I think I would do this using a callback but given it's a commissioning script I think it needs to be:

  • As simple as possible - callbacks spread the logic around so they are not easy to reason about
  • It doesn't need to be perfect code quality - it will usually be run supervised and infrequently

So I wouldn't actually be too against the plan just doing the I/O itself
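
For comparison, a minimal shape for the callback approach (the field names and output format are placeholders, not the real signal names):

from bluesky.callbacks import CallbackBase

class LookupTableWriter(CallbackBase):
    """Collect (energy, gap) pairs from emitted events and write them out at the end of the run."""

    def __init__(self, path):
        self._path = path
        self._rows = []

    def event(self, doc):
        data = doc["data"]
        if "energy" in data and "idgap" in data:  # hypothetical data keys
            self._rows.append((data["energy"], data["idgap"]))

    def stop(self, doc):
        with open(self._path, "w") as f:
            f.write("Units keV mm\n")
            for energy, gap in sorted(self._rows):
                f.write(f"{energy:.4f} {gap:.4f}\n")

# RE(commissioning_plan(), LookupTableWriter("idgap_lookup.txt"))  # hypothetical usage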

@stan-dot
Collaborator Author

@callumforrester mentioned quite strongly that plans running I/O themselves cause a lack of determinism in the RunEngine.

@callumforrester
Collaborator

I agree that a callback is the best solution for writing to the lookup table. I assume we're also figuring out how best to read from the lookup table?

@DominicOram

There is already code to do the reading inside the ophyd device, which seems sensible to me

@stan-dot
Collaborator Author

stan-dot commented Aug 19, 2024

todo: need to check how often those are run

IDGAP - the only alignment-type scan that uses the lookup

pinhole scan - to get the beam back through it

gold wire scan - without a lookup table - to check the size of the focal spot - run if the KB mirrors change
