Generate lookup tables #4
@callumforrester where is best to cache values like lookup tables that change a couple of times during a run, or between experiments? Ideally updated automatically during commissioning. There are 15 different files for different harmonics here.
For each energy there is an optimization of the undulator gap. We move the Bragg angle given a crystal cut; the crystal spacing is what we need (Ge or Si for this). Then, at that Bragg angle, we scan the gap and measure at one of the diodes, maybe D7 (BL18I-DI-PHDGN-07, instant RBV). We get a curve with a peak. The lookup table columns are: energy of the monochromator, gap of the undulator.
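A minimal sketch of the peak extraction described above: for one monochromator energy, the gap scan yields a curve of diode readings, and the gap at the maximum becomes one lookup-table row. The function name is hypothetical, and a real implementation would likely fit the peak rather than take a raw argmax.

```python
import numpy as np

def peak_gap(gaps, intensities):
    """Return the undulator gap at the diode-intensity peak.

    Sketch only: takes the raw argmax; a production version
    might fit a Gaussian to the curve instead.
    """
    gaps = np.asarray(gaps)
    intensities = np.asarray(intensities)
    return float(gaps[np.argmax(intensities)])

# Each (monochromator energy, peak_gap(...)) pair is one row
# of the lookup table described above.
```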
I believe MX have been using redis for this; @dperl-dls may know more...
We'd like to be using redis for things like this; currently we're just using it for live feature-flag lookup. This kind of thing is definitely on the roadmap though. @aragaod does use redis to cache some expensive parameters, but that's his personal deployment for one beamline and hasn't had any input from us. It's enough to say that it works well for this use case, but we'll want something properly supported if we're going to rely on it seriously, or at least have very good fallback options.
I have some Python code that does lookup tables for me. Basically it changes energy, waits for feedback to recover, gathers values, and stores them in redis. Then, at the end, a piece of Python 3 outside GDA gathers the data from redis and in one case uses a jinja2 template to generate the GDA-formatted lookup-style file, and in the other case calculates the coefficients for a polynomial based on the redis data, then pushes those coefficients to the respective holder EPICS PVs. Because I could not install redis in Jython, I wrote a urllib-based Jython implementation with an HTTP-to-redis API that Jython talks to. All working for 5 years. Our redis server runs on i04-control and in 5 years has never failed us. We also use redis for a variety of other purposes, from caching to temporary storage of data in JSON format for later use outside GDA. Most redis keys are set up with an expiry, so we see redis as volatile data, not long-term storage. Would be great to have a redis server side by side with every beamline and consider it part of the infrastructure. It is incredibly useful.
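The polynomial case described above can be sketched as follows. This assumes the cached data comes back from redis as (energy, gap) pairs; the variable names and example values are hypothetical, and `numpy.polyfit` stands in for whatever fitting the real script uses.

```python
import numpy as np

# Hypothetical cached data, as it might come back from redis:
# keys are monochromator energies (keV), values are optimised gaps (mm).
cached = {5.0: 6.1, 7.5: 7.3, 10.0: 8.6, 12.5: 9.8}

energies = np.array(sorted(cached))
gaps = np.array([cached[e] for e in energies])

# Least-squares polynomial coefficients, analogous to the
# "calculate the coefs for a polynomial" case above; these would
# then be pushed to the holder EPICS PVs.
coefs = np.polyfit(energies, gaps, deg=2)
```

The degree of the polynomial is a free choice here; the real script may use a different order or a different fitting routine entirely.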
there is a separate issue to explore this already, so it looks like we're on the right track
@aragaod could you link me the code please?
dodal DCM lookup table: last update 2023/01/19, header NPUnits Deg mrad - it's in
$ helm install my-redis bitnami/redis --version 19.6.3
|
I see that Sci-Comp resolved the issue for the i18 cluster (it had previously been seen on another beamline cluster, but the fix was not applied to the other clusters), but it begs the question (or I beg the question, with the issue as justification): central redis or per-beamline redis?
|
redis deployment notes:
|
should make an ophyd-async device fwiw
For communicating with Redis? Disagree, it should be a plan stub.
@callumforrester said otherwise: the RunEngine should never do I/O on its own, so it'd be like what we have with the panda HDF writing being defined there in the ophyd-async panda folder
After conversation, this could be implemented in a few ways; for now we'll do an ophyd-async device.
This is not quite true: I said plans should never do I/O on their own, so you shouldn't have a plan that says

```python
def my_plan():
    yield from mv(motor, 1)
    redis.put({"foo": "bar"})  # Does I/O!
    yield from mv(motor, 2)
```

The plan should be runnable offline, so you should have a
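The principle above can be illustrated with a toy sketch: the plan only *yields* messages, and the side effects happen in whatever executes those messages. This is not the real bluesky RunEngine or message format, just a hypothetical stand-in to show why the plan itself stays runnable offline.

```python
def save_to_store(store, payload):
    """A plan stub: yields a single message instead of doing I/O itself."""
    yield ("save", store, payload)

def my_plan(store):
    # The plan describes intent; it never touches the store directly.
    yield ("mv", "motor", 1)
    yield from save_to_store(store, {"foo": "bar"})
    yield ("mv", "motor", 2)

def run(plan):
    """Toy executor: the only place where side effects actually happen."""
    for msg in plan:
        if msg[0] == "save":
            _, store, payload = msg
            store.update(payload)

store = {}
run(my_plan(store))
# Iterating my_plan() without an executor performs no I/O at all,
# which is what makes the plan testable offline.
```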
Still, it's a more coupled solution, which goes against the modular approach.
I think I would do this using a callback, but given it's a commissioning script I think it needs to be:
So I wouldn't actually be too against the plan just doing the
@callumforrester mentioned quite strongly that plans running
I agree that a callback is the best solution for writing to the lookup table. I assume we're also figuring out how best to read from the lookup table?
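The callback approach suggested above could look something like this. It follows the bluesky `(name, document)` callback convention; the `"energy"` and `"gap"` field names and the class name are hypothetical, and the write step is only stubbed out.

```python
class LookupTableCallback:
    """Collects (energy, gap) pairs from event documents and writes
    the lookup table when the run stops.

    Sketch only: field names are assumptions, and write() would
    really render a GDA-style file (e.g. via a jinja2 template)
    or push coefficients to EPICS PVs.
    """

    def __init__(self):
        self.rows = []
        self.written = None

    def __call__(self, name, doc):
        if name == "event":
            data = doc["data"]
            self.rows.append((data["energy"], data["gap"]))
        elif name == "stop":
            self.write()

    def write(self):
        # Stub: record what would be written out.
        self.written = list(self.rows)
```

With a real RunEngine this would be attached via `RE.subscribe(LookupTableCallback())`, keeping all the I/O out of the plan itself.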
There is already code to do the reading inside the
ok I have this and will work with this
TODO: need to check how often these are run:
- IDGAP scan - the only alignment kind that uses the lookup table
- pinhole scan - to get the beam back through it
- gold wire scan - without lookup table - to check the size of the focal spot; run if the KB mirrors change
There is some current script that does this for sections of the spectrum: it finds a Bragg angle and then finds the maximum, for 10-15 Bragg angles, producing the IDGAP lookup table.