Integrating a DB (like SQLite) into my Legato App

I want the data that I receive to be saved on an SD card. I want to steer away from using plain files so that I don't have to worry about synchronization issues, since other apps will be accessing this data for read/write operations; that's why I have decided to go with a database approach. My questions are:

  • What is the best way to achieve this with the minimal amount of work for the database approach? Does Legato have any DB abilities built in? If not, how can I import something like SQLite?
  • If a DB is not a good approach, what is a better way to achieve this?

Note that my database will be saving data in hex format, along with another column for a timestamp and maybe an extra one to indicate whether a row has been read by another app.

For storage I have a couple of GB on the SD card, but the database won't grow anywhere near that large since we are constantly adding to and removing from it; if it does reach a certain size (say 20-30 MB), we should purge it and start over.

Thanks

Hi @mg_bg,

AFAIK Legato doesn’t have any built-in DB capability.
There is the configTree, which can be used to store/retrieve configuration, but that's about it.
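If it helps, a configTree write/read from a component looks roughly like this (a minimal sketch; the tree path and node names are invented, and the app needs the le_cfg API bound in its definition files):

```c
#include "legato.h"
#include "interfaces.h"

COMPONENT_INIT
{
    // Write a small setting (path and node names are hypothetical).
    le_cfg_IteratorRef_t wr = le_cfg_CreateWriteTxn("/myApp/settings");
    le_cfg_SetString(wr, "lastUpload", "2018-06-20T12:00:00Z");
    le_cfg_CommitTxn(wr);

    // Read it back, with "" as the default if the node doesn't exist.
    char value[64];
    le_cfg_IteratorRef_t rd = le_cfg_CreateReadTxn("/myApp/settings");
    le_cfg_GetString(rd, "lastUpload", value, sizeof(value), "");
    le_cfg_CancelTxn(rd);
    LE_INFO("lastUpload = %s", value);
}
```

It's meant for small configuration values though, not for a stream of data rows.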

So using SQLite seems like an interesting approach. You might be able to package it as an app with externalBuild: Component Definition .cdef - Legato Docs
See legato-af/components/3rdParty/lighttpd at master · legatoproject/legato-af · GitHub for a sample usage with lighttpd, for instance; a hypothetical sketch follows below.
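For illustration, the externalBuild section of a Component.cdef might look something like this (a rough sketch only; the configure flags, variables, and paths are guesses, so check the lighttpd sample above for a working invocation):

```
// Component.cdef (hypothetical sketch for cross-building SQLite)
externalBuild:
{
    "${CURDIR}/../sqlite-src/configure --host=${LEGATO_TARGET} --prefix=${CURDIR}/staging"
    "make"
    "make install"
}
```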

Note that if what you're storing is time series data, SQLite might not be the best fit.
You could alternatively push it to AirVantage through the Time Series API: AirVantage Data API - Legato Docs

You could also use something like InfluxDB, an open-source time series database: GitHub - influxdata/influxdb: Scalable datastore for metrics, events, and real-time analytics
It's very good at storing time series, running operations on them, and handling retention (if you only want to keep 2 weeks of data, for instance, it will automatically remove data older than 2 weeks every day).
I also think it should be buildable as an app through externalBuild, even though it's written in Go.

@CoRfr Thanks for your response.

Regarding the configTree, I'm not 100% sure of its uses: can I use it to store values (format: "timestamp | hex value")? If so, this would make my life easier, as I wouldn't have to bring in any new libraries.

Regarding SQLite, I found out that although it comes as C source, it is not as straightforward as I had hoped; I can't just call functions to create databases, populate them, retrieve info, and so on. This first needs to be done at the terminal using the sqlite3 command to create the database, and then its own CLI is used to populate it and so forth. It also still uses SQL statements like "CREATE TABLE…". So I would have to install it at the command-line level (maybe in /usr/bin or even /etc/init.d) in order to use it.

Is there no other library for this purpose that doesn't involve such complication and can be used via a simple function API that I can just link my build against, or even include as source files in my project? C/C++ ones are fine, but definitely not SQL-based ones.

I'm quite surprised that SQLite would need any CLI calls.
AFAIK, the only thing the sqlite3 command does is call the C/C++ API.

Did you have a look at An Introduction To The SQLite C/C++ Interface?


@CoRfr Nope, I was totally mistaken. SQLite can be driven either via CLI commands or via the C/C++ API; I was using the former until I found out about the latter. It works perfectly fine using the C/C++ API, and it's pretty simple so far :)
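Here is a minimal sketch of the kind of usage I mean, matching the schema from my first post (the SD card path, table, and column names are just placeholders):

```c
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;

    // Open (or create) the database file; the path is a placeholder.
    if (sqlite3_open("/mnt/sd/data.sqlite", &db) != SQLITE_OK)
    {
        fprintf(stderr, "Can't open DB: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    // Timestamp, hex payload, and a flag telling whether another app
    // has already read the row.
    const char *ddl =
        "CREATE TABLE IF NOT EXISTS samples ("
        "  ts      INTEGER NOT NULL,"
        "  payload TEXT    NOT NULL,"
        "  is_read INTEGER NOT NULL DEFAULT 0);";
    char *err = NULL;
    if (sqlite3_exec(db, ddl, NULL, NULL, &err) != SQLITE_OK)
    {
        fprintf(stderr, "DDL failed: %s\n", err);
        sqlite3_free(err);
    }

    // Insert one row using bound parameters instead of string building.
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db,
        "INSERT INTO samples (ts, payload) VALUES (?1, ?2);",
        -1, &stmt, NULL);
    sqlite3_bind_int64(stmt, 1, (sqlite3_int64)1529496000);
    sqlite3_bind_text(stmt, 2, "DEADBEEF", -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) != SQLITE_DONE)
    {
        fprintf(stderr, "Insert failed: %s\n", sqlite3_errmsg(db));
    }
    sqlite3_finalize(stmt);

    sqlite3_close(db);
    return 0;
}
```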

One quick question though: how can I control the size of my database? I don't want it to go over a certain size (like 1 GB). I know I can use COUNT(*) to get the number of rows in a table, but what about the whole DB file?

Since there is no support for disk quotas yet, the framework is not going to help you much on this side.

One possibility would be to have a thread that monitors the size of the database file and, when needed, launches a clean-up sequence that removes the oldest entries.
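As a sketch of the check itself, SQLite can report its own file size through the page_count and page_size pragmas (the helper name is mine; keep in mind that DELETE only frees pages internally, so the file doesn't actually shrink until a VACUUM):

```c
#include <sqlite3.h>

// Returns the database size in bytes, or -1 on error.
// For a healthy database, page_count * page_size equals the file size.
static long long dbSizeBytes(sqlite3 *db)
{
    long long pages = -1, pageSize = -1;
    sqlite3_stmt *stmt;

    if (sqlite3_prepare_v2(db, "PRAGMA page_count;", -1, &stmt, NULL) == SQLITE_OK)
    {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            pages = sqlite3_column_int64(stmt, 0);
        sqlite3_finalize(stmt);
    }
    if (sqlite3_prepare_v2(db, "PRAGMA page_size;", -1, &stmt, NULL) == SQLITE_OK)
    {
        if (sqlite3_step(stmt) == SQLITE_ROW)
            pageSize = sqlite3_column_int64(stmt, 0);
        sqlite3_finalize(stmt);
    }
    return (pages < 0 || pageSize < 0) ? -1 : pages * pageSize;
}
```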

Another possibility is to implement some kind of 'sharding'. For instance, every day you create a new database named after the date (data-20180620.sqlite, data-20180621.sqlite…), and when the size of the directory gets too big (>= 1 GB), you remove the whole file holding the oldest entries (see the sketch below).
This is significantly more performant and reduces the number of flash writes, as you can drop a whole batch of data at a time instead of removing entries one by one (you can DELETE over a big range, but I think it would still end up rewriting a big chunk of the file, and it would be hard to predict how much you're actually removing).
It does make queries harder, though.
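A minimal sketch of that clean-up logic (the directory, file pattern, and 1 GB cap are assumptions; date-based names sort lexicographically, so the smallest name is the oldest shard):

```c
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <dirent.h>
#include <sys/stat.h>
#include <unistd.h>

#define DB_DIR    "/mnt/sd/db"           // hypothetical data directory
#define MAX_BYTES (1024LL * 1024 * 1024) // 1 GB cap from the question above

// Build today's shard name, e.g. "data-20180620.sqlite", for sqlite3_open().
static void todaysShard(char *buf, size_t len)
{
    time_t now = time(NULL);
    struct tm tm;
    gmtime_r(&now, &tm);
    strftime(buf, len, "data-%Y%m%d.sqlite", &tm);
}

// Sum shard sizes and unlink the oldest shard until under the cap.
static void enforceCap(void)
{
    for (;;)
    {
        DIR *dir = opendir(DB_DIR);
        if (dir == NULL)
            return;

        long long total = 0;
        char oldest[256] = "";
        struct dirent *e;
        while ((e = readdir(dir)) != NULL)
        {
            if (strncmp(e->d_name, "data-", 5) != 0)
                continue;

            char path[512];
            snprintf(path, sizeof(path), "%s/%s", DB_DIR, e->d_name);
            struct stat st;
            if (stat(path, &st) == 0)
                total += st.st_size;

            if (oldest[0] == '\0' || strcmp(e->d_name, oldest) < 0)
                snprintf(oldest, sizeof(oldest), "%s", e->d_name);
        }
        closedir(dir);

        if (total <= MAX_BYTES || oldest[0] == '\0')
            return;

        // Dropping a whole file is one cheap call, versus row-by-row DELETEs.
        char victim[512];
        snprintf(victim, sizeof(victim), "%s/%s", DB_DIR, oldest);
        unlink(victim);
    }
}
```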