
Ekf positioning #40

Open

aircad wants to merge 18 commits into dev from ekf_positioning

Conversation


@aircad aircad commented Mar 19, 2020

This update adds world positioning. It is currently only set up for a single object, as a proof of concept before moving forward.

@aircad aircad requested a review from Wesley-Fisher March 19, 2020 14:21
@aircad aircad self-assigned this Mar 19, 2020

@Wesley-Fisher Wesley-Fisher left a comment

Left comments


def state_to_measurement_h(self, x, u):
    # currently a 3-element point
    z = np.array([x[13], x[14], x[15]])  # from substate + 3 points
Collaborator (@Wesley-Fisher):
Here is where you could calculate what the relative position of the object should be, given some estimated position of the sub and of the object from the x vector.

Collaborator, Author (@aircad):

I'm not sure if this is OK. Currently state_to_measurement_h just returns the camera position of the object, and x just contains the raw camera position of each object. I'm doing the processing from camera position to absolute position in the construction of the world position message. I'm trying to get x to hold relative object positions instead of raw camera positions, but if using the raw camera positions is fine, then I'll just leave it.

Collaborator (@Wesley-Fisher):

I think making x have global positions of objects is the best bet. If x stores the relative location, we would need to update it for each object whenever the sub moves, and I think that would add too much noise.

Something like:

- x: global positions of all objects
- incoming message: has the relative location of the object as seen by the camera
- z function: returns the relative measurement
- state_to_measurement_h: calculates what the relative measurement should be, given the two absolute positions and the sub's orientation
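The proposed measurement function could be sketched roughly like this. This is an illustrative assumption, not the repo's actual code: the state indices (sub position at 0:3, object position at 13:16) and the separate quaternion argument are placeholders for whatever layout the EKF actually uses.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def state_to_measurement_h(x, u, sub_quat):
    """Predict the relative (camera-frame) measurement from absolute state.

    Assumed layout (illustrative only): x[0:3] is the sub's global
    position, x[13:16] the object's global position, and sub_quat the
    sub's orientation quaternion in scipy's (x, y, z, w) order.
    """
    sub_pos = np.asarray(x[0:3])
    obj_pos = np.asarray(x[13:16])
    # Vector from sub to object, expressed in the world frame
    delta_world = obj_pos - sub_pos
    # Rotate into the sub's body frame: apply the inverse of the
    # sub's orientation to the world-frame offset
    r = Rotation.from_quat(sub_quat)
    return r.inv().apply(delta_world)
```

With an identity orientation this reduces to the plain difference of the two global positions, which matches the intent of the list above: the predicted measurement is the object's position as the sub would see it.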

Collaborator, Author (@aircad):

While I was working on that, I was considering how to deal with the variance of the object positions, because they're still coming out of the get_R function as relative. They're supposed to be fairly small though, so would it be OK if they just stayed relative until the data is published as a message? Otherwise, I could try changing the get_R function to also take x and u so I can calculate the absolute values.

Collaborator (@Wesley-Fisher):

The get_R function should be returning the uncertainty of the measurements (relative position), not the state (absolute position).
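In that reading, get_R stays simple: it describes the noise on the relative camera measurement only, so it never needs x or u. A minimal sketch, where the 0.05 m standard deviation is a made-up placeholder rather than a tuned value from this repo:

```python
import numpy as np


def get_R():
    # Measurement noise covariance for the relative (camera-frame)
    # position measurement z = [dx, dy, dz].
    # sigma is a placeholder standard deviation, not a tuned value.
    sigma = 0.05  # metres
    return np.eye(3) * sigma**2
```

The uncertainty of the absolute object positions then lives in the state covariance P, which the EKF update maintains on its own; nothing relative needs converting before publishing.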

    message.z = vec[2]
    return message

def quat_to_euler(q):
Collaborator (@Wesley-Fisher):

Check out scipy.spatial.transform.Rotation.
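With that suggestion, quat_to_euler collapses to a one-liner. A sketch, assuming the quaternion arrives in scipy's scalar-last (x, y, z, w) order:

```python
from scipy.spatial.transform import Rotation


def quat_to_euler(q):
    # q in (x, y, z, w) order, which is scipy's from_quat convention.
    # Returns intrinsic-free 'xyz' Euler angles (roll, pitch, yaw) in radians.
    return Rotation.from_quat(q).as_euler('xyz')
```

If the messages carry quaternions in (w, x, y, z) order instead, the components would need reordering before the from_quat call.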



3 participants