GRIP SmartDashboard Extension
Installation
Windows: Put `sdb-grip.jar` in `C:\Users\YourName\SmartDashboard\extensions\`.

Linux/OS X: Put `sdb-grip.jar` in `~/SmartDashboard/extensions/`.
Usage
In GRIP, use a "Publish Video" operation and at least one of "Publish Contours", "Publish Blobs", or "Publish Lines".
In SmartDashboard, select View > Add > GRIP Output Viewer.
If you're running GRIP locally on the same computer as SmartDashboard, you should be good to go at this point.
If you're running GRIP on another machine (a RoboRIO or other vision coprocessor), you must set the address of the machine it's running on.
- Press Ctrl+E to enter edit mode if you aren't in it already.
- Right-click the GRIP Viewer component and choose "Properties..."
- Set the "GRIP Address" item to the IP address or hostname of the machine running GRIP. For example, this could be `roborio-190-frc.local` if you're running GRIP on a RoboRIO and your team number is 190.
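The data the viewer overlays is just NetworkTables values, so robot or desktop code can read the same report that a "Publish Contours" operation writes. Here's a minimal sketch in Java using the 2016-era NetworkTables API; the table name `GRIP/myContoursReport` and the address `roborio-190-frc.local` are placeholders for whatever your pipeline and robot actually use:

```java
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class GripContourReader {
    public static void main(String[] args) throws InterruptedException {
        // Connect as a NetworkTables client to the machine hosting the
        // server (normally the roboRIO). Replace the address with your own.
        NetworkTable.setClientMode();
        NetworkTable.setIPAddress("roborio-190-frc.local");

        // Each "Publish Contours" operation writes a subtable under GRIP/;
        // "myContoursReport" is an assumed name -- use the one you chose.
        NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

        double[] none = new double[0];
        while (true) {
            // Parallel arrays: entry i describes the i-th contour found.
            double[] centerX = table.getNumberArray("centerX", none);
            double[] centerY = table.getNumberArray("centerY", none);
            double[] area    = table.getNumberArray("area", none);
            for (int i = 0; i < centerX.length; i++) {
                System.out.println("contour " + i + ": center=(" + centerX[i]
                        + ", " + centerY[i] + "), area=" + area[i]);
            }
            Thread.sleep(1000); // poll once per second
        }
    }
}
```

If this prints contours, the GRIP Output Viewer (which reads the same tables) should be able to draw them as well.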
Why?
GRIP can run on the driver station laptop, the roboRIO, or an additional on-board vision processor like the Kangaroo PC or Raspberry Pi. When running GRIP on the driver station PC, your drivers can easily visualize targeting data during a match, but you also have to reduce the video quality to fit within the field's bandwidth restrictions, and accept the lag caused by sending video data back and forth over the WiFi network. Both of these can make computer vision less effective, so from a control-system point of view it's often better to do the vision processing on the robot.
With this extension, you can run GRIP on a processor directly on the robot (either a RoboRIO or a coprocessor) and send a lower-quality image back to the human driver for feedback. You can also view simple pieces of information published to NetworkTables, such as the bounding boxes of any contours found, without actually doing that processing on the driver station laptop. The driver will end up seeing a video feed with somewhat higher compression and maybe some lag issues, but the algorithms on the robot will get the full-quality, perfectly-synced video feed.
Changes
- Fixed "Invalid Stream (wrong magic numbers)" (GRIP#407).