(postgre)SQL export support / TimeScaleDB #2814
Comments
@oliv3r most of the time, Glances exports data to NoSQL databases (InfluxDB, ELK, MongoDB...). Before inserting data into TimescaleDB/PostgreSQL, Glances needs to create a database and a relational table. I am a little bit confused about this data model. I think the best way is to create one table per plugin. The documentation also talks about hypertables, which are perhaps better suited to the Glances data? Can you give me a simple example of a CREATE TABLE command for the following plugin?
I have no idea what's best here; I always thought one table per plugin too. But a) I noticed that their own 'stock-exchange' example actually uses a single table, though that data is somewhat correlated: while there are different stock symbols for different companies, the shape of the data is the same. Managing things also becomes different, because when a new symbol is added/removed you have to create/drop a table. IMO one table per plugin still makes logical sense. But then I know that home-assistant also puts all its sensor data into a single table. This is still puzzling to me, because there the data is not correlated at all; I would expect each sensor to have its own table. And that extends to here as well: each plugin/sensor should have its own table. There surely must be a performance benefit here. Asking AI, it also states performance should be better with multiple tables, with the downside that it's more work to manage, but the upside that related queries on a single timestamp might be faster if you store each plugin in its own column. But I can't figure out why the single-table option would be better. Regardless, while you can store JSON directly in postgres, that's probably not what you have in mind ;)
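The CREATE TABLE snippet that followed in the original comment did not survive extraction. A minimal sketch of what a per-plugin `cpu` table might look like, with column names and types assumed for illustration (they are not Glances' actual data model):

```python
# Sketch: generate a per-plugin CREATE TABLE statement for CPU stats.
# Column names/types below are assumptions, not the real Glances schema.

CPU_COLUMNS = {
    "time": "TIMESTAMPTZ NOT NULL",   # a real timestamp, not time_since_update
    "cpu_core": "SMALLINT NOT NULL",  # per-core rows; a sentinel could mean "total"
    "user": "REAL",
    "system": "REAL",
    "idle": "REAL",
    "iowait": "REAL",
}

def create_table_sql(table: str, columns: dict) -> str:
    """Build a quoted CREATE TABLE statement from a column->type mapping."""
    cols = ",\n  ".join(f'"{name}" {sqltype}' for name, sqltype in columns.items())
    return f'CREATE TABLE IF NOT EXISTS "{table}" (\n  {cols}\n);'

print(create_table_sql("cpu", CPU_COLUMNS))
```

Quoting the identifiers matters here because `user` is a reserved word in postgres.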
This creates a regular postgres table. For the low integer value types a smaller type should do; some ints might need 'BIGINT', but I'm not sure of the range from your example ;) I think for timescale to work effectively you'd have to use actual timestamps instead of 'time_since_update', but I'm not a timescale expert, which is why I added the plain table first. Then, if the timescale extension is available,
we can convert it (and enable compression) to add a significant performance and storage benefit.
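The conversion step described here could be sketched as follows. The statements use TimescaleDB's documented `create_hypertable`, compression table options, and `add_compression_policy`; the table/column names (`cpu`, `time`, `cpu_core`) and the 7-day policy are assumptions carried over from the discussion:

```python
# Sketch: upgrade a plain postgres table to a compressed hypertable,
# degrading gracefully (warn and keep the plain table) if TimescaleDB
# is missing or the license/image does not allow compression.
import logging

def timescale_upgrade_sql(table: str, time_col: str, segment_col: str) -> list:
    """SQL to convert a plain table into a compressed hypertable."""
    return [
        f"SELECT create_hypertable('{table}', '{time_col}', if_not_exists => TRUE);",
        f"ALTER TABLE {table} SET (timescaledb.compress, "
        f"timescaledb.compress_segmentby = '{segment_col}');",
        f"SELECT add_compression_policy('{table}', INTERVAL '7 days');",
    ]

def try_upgrade(cursor, table: str) -> None:
    """Run the upgrade, warning instead of failing when unsupported."""
    for stmt in timescale_upgrade_sql(table, "time", "cpu_core"):
        try:
            cursor.execute(stmt)
        except Exception as exc:  # e.g. apache-licensed image: no compression
            logging.warning("continuing without TimescaleDB feature: %s", exc)
            return
```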
If the ALTER fails, the wrong license/container image was chosen, so this could just warn that we are continuing without compression. It still needs to be figured out what segment/index fits best here (or whether anything does at all): cpu_core needs to be a column that makes sense to index on, and if there is none, it might not be worthwhile to compress/have an index. For now I assumed the stats are unique per CPU, but I know this is also not true... (load is system-wide). From what I understood from this example, https://docs.timescale.com/use-timescale/latest/compression/about-compression/#segment-by-columns, it could be that in some cases you want to keep different data in similar tables. For diskio, we'd end up with
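The diskio example was cut off above. Under the same assumptions (hypothetical column names), segmenting by the disk name might look like:

```python
# Sketch: a diskio table where each row is one disk's counters at one
# instant, compressed segment-by disk_name. Names are assumptions.
DISKIO_SQL = """
CREATE TABLE IF NOT EXISTS diskio (
  time        TIMESTAMPTZ NOT NULL,
  disk_name   TEXT NOT NULL,
  read_bytes  BIGINT,
  write_bytes BIGINT
);
"""

DISKIO_COMPRESS_SQL = (
    "ALTER TABLE diskio SET (timescaledb.compress, "
    "timescaledb.compress_segmentby = 'disk_name');"
)
```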
One other timescale-specific feature would be to create automated 'aggregated views', where postgres/timescaledb automatically takes averages etc. over longer periods of time and stores them efficiently for quick access, e.g. for the zoomed-out view. https://blog.timescale.com/blog/achieving-the-best-of-both-worlds-ensuring-up-to-date-results-with-real-time-aggregation/
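Those 'aggregated views' are TimescaleDB continuous aggregates. A sketch of an hourly CPU rollup, reusing the assumed table/column names from earlier:

```python
# Sketch: a continuous aggregate giving hourly averages for zoomed-out
# views. Table and column names follow the earlier assumptions.
CAGG_SQL = """
CREATE MATERIALIZED VIEW cpu_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       cpu_core,
       avg("user")  AS user_avg,
       avg(system)  AS system_avg
FROM cpu
GROUP BY bucket, cpu_core;
"""
```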
If we do plan on creating the tables from the Glances end and go with one table per plugin, using something like SQLAlchemy to abstract out the DB connector implementation would probably be better.
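A stdlib stand-in for that idea, driving all per-plugin tables from one schema registry (SQLAlchemy's `Table`/`MetaData` would do the same with dialect-aware DDL); the plugin schemas here are hypothetical, and sqlite3 stands in for the real connector:

```python
# Sketch: one table per plugin, created from a single schema registry.
# sqlite3 is used only so the sketch runs anywhere; a real connector
# abstraction (e.g. SQLAlchemy) would target postgres the same way.
import sqlite3

PLUGIN_SCHEMAS = {  # hypothetical subset of Glances plugins
    "cpu": {"time": "TEXT", "cpu_core": "INTEGER", "user": "REAL"},
    "mem": {"time": "TEXT", "used": "INTEGER", "free": "INTEGER"},
}

def create_plugin_tables(conn) -> None:
    """Create one table per registered plugin if it does not exist yet."""
    for plugin, cols in PLUGIN_SCHEMAS.items():
        body = ", ".join(f'"{c}" {t}' for c, t in cols.items())
        conn.execute(f'CREATE TABLE IF NOT EXISTS "{plugin}" ({body})')

conn = sqlite3.connect(":memory:")
create_plugin_tables(conn)
```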
@oliv3r thanks for the implementation proposal. For the moment, export plugins do not have any feature to create tables with the needed information (variables are not typed in a standard way). For example, the CPU plugin fields description is the following: https://github.com/nicolargo/glances/blob/v4.0.8/glances/plugins/cpu/__init__.py#L24
It would increase code complexity/maintenance for the PostgreSQL export only. Another approach would be to init tables outside of Glances, but as a consequence we would have to maintain the script/documentation used to create tables matching the Glances data model.
This issue is available for anyone to work on. Make sure to reference this issue in your pull request. ✨ Thank you for your contribution! ✨
Hi there, I am inclined towards contributing to this feature if it's still available to work on.
@siddheshtv Feel free to take a stab at this. Though Glances still lacks typing at many layers, so defining the tables for all the plugins might be a bit hard.
Alright, I'll give it a go.
This issue is stale because it has been open for 3 months with no activity.
Don't be such a downer bot |
Is your feature request related to a problem? Please describe.
I've already got a postgres database running. Having to learn, set up, and maintain an InfluxDB just for Glances seems a bit frustrating.
Describe the solution you'd like
Since there are generic Python libraries that handle a multitude of SQL backends, this would be something nice to have. As for 'timeseries data doesn't fit traditional databases': that is quite true, but there is also 'timescaledb', an extension to postgres that should be able to handle this just fine. So a bit of extra effort might be needed to support timescaledb rather than plain postgres.
Describe alternatives you've considered
Setting up influxdb :(