Two megatrends are converging:

- **[Edge Computing](https://objectbox.io/dev-how-to/edge-computing-state-2025)**: Processing data where it is created (on the device, locally, at the edge of the network) is called "Edge Computing", and it is growing.
- **AI**: AI capabilities and use are expanding rapidly, without need for further explanation.

<img src="/dev-how-to/img/edge-ai/edge-ai.png" alt="Edge AI: Where Edge Computing and AI intersect" />

--> Where these two trends overlap (at the intersection), we speak of Edge AI (also called local AI or on-device AI; the mobile subset is "Mobile AI").

The shift to Edge AI is driven by use cases that:

* are not economically viable when using the cloud / a cloud AI
* need to be sustainable

<img src="/dev-how-to/img/edge-ai/edge-ai-benefits.png" alt="Edge AI drivers (benefits)" />

If you're interested in the sustainability aspect, see also: [Why Edge Computing matters for a sustainable future](https://objectbox.io/why-do-we-need-edge-computing-for-a-sustainable-future/)

## Why it's not Edge AI vs. Cloud AI - the reality is hybrid AI

Of course, while we see a market shift towards Edge Computing, it is not Edge Computing vs. Cloud Computing - the two complement each other. The question is mainly: How much edge does your use case need?

<img src="/dev-how-to/img/edge-ai/cloud-to-edge-continuum.png" alt="The cloud-to-edge continuum" />

Every shift in computing is empowered by core technologies:

<img src="/dev-how-to/img/edge-ai/computing-shifts-empowered-by-core-tech.png" alt="Every shift in computing is empowered by core technologies" />

## What are the core technologies empowering Edge AI?

Typically, Mobile AI apps need **three core components**:

1. A local **AI model** (e.g. a Small Language Model)
2. A [**vector database**](https://objectbox.io/vector-database/)
3. **Data sync** for hybrid architectures ([Data Sync Alternatives](https://objectbox.io/data-sync-alternatives-offline-vs-online-solutions/))
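
To make the interplay concrete, here is a minimal, hypothetical sketch of the first two components in plain Python: an embedding function (a stand-in for a real on-device embedding model) feeding an in-memory vector store. All names and the toy embedding are invented for illustration; a real app would use an actual model and an on-device vector database.

```python
import math

# Toy embedding function: a stand-in for a real on-device embedding model.
# It only produces deterministic pseudo-vectors for illustration.
def embed(text: str, dim: int = 8) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class TinyVectorStore:
    """In-memory stand-in for an on-device vector database."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Rank stored items by dot product with the query embedding
        # (vectors are normalized, so this is cosine similarity).
        q = embed(query)
        ranked = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [text for text, _ in ranked[:k]]

store = TinyVectorStore()
for caption in ["boarding pass screenshot", "wifi password note", "pasta recipe"]:
    store.add(caption)

print(store.search("flight ticket"))
```

The third component, data sync, would sit around such a store, reconciling local data with a backend when connectivity allows.
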

Large foundation models (LLMs) remain costly and centralized. In contrast, **Small Language Models (SLMs)** bring similar capabilities in a lightweight, resource-efficient way.

<img src="/dev-how-to/img/edge-ai/slm-quality-cost.png" alt="SLM quality and cost comparison" />

- Up to **100x cheaper** to run
- Faster, with lower energy consumption
- Near-Large-Model quality in some cases

This makes them ideal for **local AI** scenarios: assistants, semantic search, or multimodal apps running directly on-device. However...

### Frontier AI Models are still getting bigger and costs are skyrocketing

<img src="/dev-how-to/img/edge-ai/llm-costs-still-skyrocketing.png" alt="LLM costs are still skyrocketing" />

### Why this matters for developers: Monetary and hidden costs of using Cloud AI

Running cloud AI comes at a cost:

- **Dependency**: A few tech giants hold the major AI models, the data, and the know-how, and they make the rules (e.g. thin AI layers on top of huge cloud AI models will fade away due to vertical integration)
- **Data privacy & compliance**: Sending data around adds risk, and so does sharing it (what are you agreeing to?)
- **Sustainability**: Large models consume way more energy, and transmitting data unnecessarily consumes way more energy too (think of it as buying apples from New Zealand in Germany) ([Sustainable Future with Edge Computing](https://objectbox.io/why-do-we-need-edge-computing-for-a-sustainable-future/)).

<img src="/dev-how-to/img/edge-ai/why-llm-costs-and-energy-consumption-impacts-developers.png" alt="Why LLM costs and energy consumption impact developers" />

### What about Open Source AI Models?

Yes, they are an option, but be mindful of potential risks and caveats: with commercial models, part of what you pay for is being free of liability risks.

<img src="/dev-how-to/img/edge-ai/opensource-ai-models.png" alt="Open-source AI models: risks and caveats" />

### While SLMs are all the rage, Edge AI is really about specialized AI models (at this moment...)

<img src="/dev-how-to/img/edge-ai/for-mobile-it-is-specialized-models-not-SLM.png" alt="For mobile, it is specialized models, not SLMs" />

## On-device Vector Databases are the second essential piece of the Edge AI Tech Stack

On-device (or Edge) vector databases have a small footprint (a couple of MB, not GBs).

(Note: Edge Vector databases, or on-device vector databases, are still rare. ObjectBox was the first on-device vector database available on the market. Some server- and cloud-oriented vector databases have recently begun positioning themselves for edge use. However, their relatively large footprint often makes them more suitable for laptops than for truly resource-constrained embedded devices. More importantly, solutions designed by scaling down from larger systems are generally not optimized for restricted environments, resulting in higher computational demands and increased battery consumption.)
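
Why footprint matters can be seen with back-of-envelope arithmetic (the item count and embedding size below are assumptions, not measurements):

```python
# Back-of-envelope memory estimate for on-device embeddings.
# Assumed numbers (hypothetical): 10,000 stored items with
# 384-dimensional float32 embeddings, a common size for small models.
n_items = 10_000
dim = 384
bytes_per_float = 4

raw_bytes = n_items * dim * bytes_per_float
print(f"{raw_bytes / (1024 * 1024):.1f} MiB")  # prints "14.6 MiB"
```

Raw vectors alone stay manageable; it is the database engine and index structures around them that decide whether the total footprint fits a resource-constrained device.
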
## Developer Story: On-device AI Screenshot Searcher Example App

To test the waters, I built a **Screenshot Searcher** app with ObjectBox Vector Search.

This was easy and took less than a day. However, I learned more from the things I tried that weren't easy... ;)

### What I learned about text classification (and what hopefully helps you)

<img src="/dev-how-to/img/edge-ai/on-device-text-classification.png" alt="On-device Text Classification Learnings" />

--> See the finetuning section below: without finetuning, no model, and no text classification.
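
As a rough illustration of the embedding-based route (a generic technique, not the exact approach from the slide above, and with a toy embedding invented for this sketch), texts can be classified by comparing their embedding against per-class centroids:

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy stand-in for a real on-device embedding model.
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def centroid(vectors: list[list[float]]) -> list[float]:
    # Average the example embeddings of one class.
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def classify(text: str, centroids: dict[str, list[float]]) -> str:
    # Pick the class whose centroid has the highest dot product
    # with the query embedding.
    q = embed(text)
    return max(
        centroids,
        key=lambda label: sum(a * b for a, b in zip(q, centroids[label])),
    )

examples = {
    "receipt": ["invoice total amount due", "payment receipt store"],
    "travel": ["boarding pass gate seat", "train ticket departure"],
}
cents = {label: centroid([embed(t) for t in texts]) for label, texts in examples.items()}
print(classify("flight boarding gate b12", cents))
```

With a real embedding model, this nearest-centroid approach can be a lightweight alternative when finetuning a dedicated classifier is not feasible.
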
### What I learned about finetuning (and what hopefully helps you)

<img src="/dev-how-to/img/edge-ai/finetuning-text-model-learnings.png" alt="Finetuning Learnings (exemplary, based on finetuning DBPedia)" />

--> Finetuning failed --> I will try again ;)

### What I learned about integrating an SLM (Google's Gemma)

Integrating Gemma was super straightforward; it worked on-device in less than an hour. Just don't use the Android Emulator (AVD): running Gemma on it is not recommended, and it also did not work for me.

<img src="/dev-how-to/img/edge-ai/using-gemma-on-android.png" alt="Using Gemma on Android" />

In this example app, we are using Gemma to enhance the screenshot search with an additional AI layer.
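
Conceptually, that extra layer is a two-stage flow: vector search narrows down candidates, then a language model reasons over them. In the sketch below, `generate()` is a hypothetical stand-in for the actual on-device Gemma call; it applies a trivial heuristic so the example runs anywhere.

```python
def generate(prompt: str) -> str:
    # Fake "model": return the first listed candidate.
    # A real app would send the prompt to an on-device SLM instead.
    lines = [line for line in prompt.splitlines() if line.startswith("- ")]
    return lines[0][2:] if lines else ""

def answer(query: str, candidates: list[str]) -> str:
    # Stage 2: build a prompt from the retrieved candidates (stage 1,
    # the vector search, is assumed to have produced `candidates`).
    prompt = (
        f"Question: {query}\n"
        "Pick the best matching screenshot caption:\n"
        + "\n".join(f"- {c}" for c in candidates)
    )
    return generate(prompt)

candidates = ["boarding pass, gate B12", "hotel wifi password"]
print(answer("when does my flight board?", candidates))  # prints "boarding pass, gate B12"
```

Keeping retrieval and generation separate like this means the model only ever sees a handful of locally retrieved snippets, which is what makes the pattern feasible on-device.
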

It's already fairly easy - and vibe coding an Edge AI app is very doable.