
Commit 8ac59b6

Updated links, images, and image paths

1 parent d7dceb9 commit 8ac59b6

File tree

2 files changed: +17 -17 lines changed

docs/on-device-ai-goes-mainstream.mdx

Lines changed: 17 additions & 17 deletions
@@ -25,7 +25,7 @@ Two megatrends are converging:

 - **[Edge Computing](https://objectbox.io/dev-how-to/edge-computing-state-2025)** - Processing data where it is created (on the device, locally, at the edge of the network) is called "Edge Computing", and it is growing
 - **AI** - AI capabilities and use are expanding rapidly and need no further explanation
-<img src="/img/edge-ai/edge-ai.png" alt="Edge AI: Where Edge Computing and AI intersect" />
+<img src="/dev-how-to/img/edge-ai/edge-ai.png" alt="Edge AI: Where Edge Computing and AI intersect" />

 --> Where these two trends overlap (at the intersection), we speak of Edge AI (also called local AI or on-device AI; its mobile subset is "Mobile AI")

@@ -37,18 +37,18 @@ The shift to Edge AI is driven by use cases that:
 * are not economically viable when using the cloud / a cloud AI
 * want to be sustainable

-<img src="/img/edge-ai/edge-ai-benefits.png" alt="Edge AI drivers (benefits)" />
+<img src="/dev-how-to/img/edge-ai/edge-ai-benefits.png" alt="Edge AI drivers (benefits)" />

 If you're interested in the sustainability aspect, see also: [Why Edge Computing matters for a sustainable future](https://objectbox.io/why-do-we-need-edge-computing-for-a-sustainable-future/)

 ## Why it's not Edge AI vs. Cloud AI - the reality is hybrid AI

 Of course, while we see a market shift towards Edge Computing, there is no Edge Computing vs. Cloud Computing - the two complement each other, and the question is mainly: How much edge does your use case need?

-<img src="/img/edge-ai/cloud-to-edge-continuum.png" alt="Edge AI drivers (benefits)" />
+<img src="/dev-how-to/img/edge-ai/cloud-to-edge-continuum.png" alt="The cloud-to-edge continuum" />
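A sketch can make the "how much edge does your use case need?" question concrete. The following Python function is purely illustrative: the function name, parameters, and thresholds are invented for this example and are not recommendations.

```python
# Illustrative sketch of a hybrid edge/cloud routing decision.
# All names and thresholds below are hypothetical.
def choose_runtime(payload_kb: int, needs_privacy: bool, online: bool,
                   fits_on_device: bool) -> str:
    """Decide where a request should run on the cloud-to-edge continuum."""
    if needs_privacy or not online:
        return "edge"   # keep data local / stay available offline
    if fits_on_device and payload_kb < 512:
        return "edge"   # cheaper and faster than a network round trip
    return "cloud"      # fall back to the big cloud model

print(choose_runtime(payload_kb=64, needs_privacy=True,
                     online=True, fits_on_device=True))  # edge
```

The point of the sketch is that "edge vs. cloud" is a per-request decision, not a one-time architectural choice.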
4949

5050
Every shift in computing is empowered by core technologies
51-
<img src="/img/edge-ai/computing-shifts-empowered-by-core-tech.png" alt="Every shift in computing is empowered by core technologies" />
51+
<img src="/dev-how-to/img/edge-ai/computing-shifts-empowered-by-core-tech.png" alt="Every shift in computing is empowered by core technologies" />
5252

5353
## What are the core technologies empowering Edge AI?
5454

@@ -59,7 +59,7 @@ Typically, Mobile AI apps need **three core components**:
 2. A [**vector database**](https://objectbox.io/vector-database/)
 3. **Data sync** for hybrid architectures ([Data Sync Alternatives](https://objectbox.io/data-sync-alternatives-offline-vs-online-solutions/))

-<img src="/img/edge-ai/core-tech-enabling-edge-ai.png" alt="The core technologies empoewring Edge AI" />
+<img src="/dev-how-to/img/edge-ai/core-tech-enabling-edge-ai.png" alt="The core technologies empowering Edge AI" />
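The three components can be illustrated end-to-end. The following is a minimal, self-contained Python sketch, not a real Edge AI stack: the toy `embed` function stands in for an on-device embedding model, an in-memory list stands in for the vector database, and a simple change log stands in for data sync.

```python
import math

# 1) AI model: a toy stand-in for an on-device embedding model.
def embed(text: str, dim: int = 8) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# 2) Vector database: an in-memory list of (id, vector) pairs.
store: list[tuple[str, list[float]]] = []
# 3) Data sync: ids queued for upload when the device is online.
changelog: list[str] = []

def put(doc_id: str, text: str) -> None:
    store.append((doc_id, embed(text)))
    changelog.append(doc_id)  # queue the change for hybrid sync

def nearest(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    # Rank by dot product (vectors are normalized, so this is cosine).
    scored = sorted(store, key=lambda e: -sum(a * b for a, b in zip(q, e[1])))
    return [doc_id for doc_id, _ in scored[:k]]

put("s1", "screenshot of a boarding pass")
put("s2", "screenshot of a cooking recipe")
print(nearest("flight ticket"))
```

In a production app, each stand-in would be a real component: an embedding model, an embedded vector database, and a sync client.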

## A look at AI models
@@ -68,15 +68,15 @@ Typically, Mobile AI apps need **three core components**:

 Large foundation models (LLMs) remain costly and centralized. In contrast, **Small Language Models (SLMs)** bring similar capabilities in a lightweight, resource-efficient way.

-<img src="/img/edge-ai/slm-quality-cost.png" alt="SLM quality and cost comparison" />
+<img src="/dev-how-to/img/edge-ai/slm-quality-cost.png" alt="SLM quality and cost comparison" />
 - Up to **100x cheaper** to run
 - Faster, with lower energy consumption
 - Near-large-model quality in some cases

 This makes them ideal for **local AI** scenarios: assistants, semantic search, or multimodal apps running directly on-device. However...

 ### Frontier AI models are still getting bigger, and costs are skyrocketing
-<img src="/img/edge-ai/llm-costs-still-skyrocketing.png" alt="SLM quality and cost comparison" />
+<img src="/dev-how-to/img/edge-ai/llm-costs-still-skyrocketing.png" alt="LLM costs are still skyrocketing" />

 ### Why this matters for developers: Monetary and hidden costs of using Cloud AI

@@ -87,15 +87,15 @@ Running cloud AI comes at a cost:
 - **Dependency**: A few tech giants hold all the major AI models, the data, and the know-how, and they make the rules (e.g. thin AI layers on top of huge cloud AI models will fade away due to vertical integration)
 - **Data privacy & compliance**: Sending data around adds risk, and so does sharing it (what are you agreeing to?)
 - **Sustainability**: Large models consume way more energy, and transmitting data unnecessarily consumes way more energy too (think of it as buying apples from New Zealand in Germany) ([Sustainable Future with Edge Computing](https://objectbox.io/why-do-we-need-edge-computing-for-a-sustainable-future/)).
-<img src="/img/edge-ai/why-llm-costs-and-energy-consumption-impacts-developers.png" alt="SLM quality and cost comparison" />
+<img src="/dev-how-to/img/edge-ai/why-llm-costs-and-energy-consumption-impacts-developers.png" alt="Why LLM costs and energy consumption impact developers" />

 ### What about Open Source AI Models?

 Yes, they are an option, but be mindful of potential risks and caveats. Be aware that you also pay to be free of liability risks.
-<img src="/img/edge-ai/opensource-ai-models.png" alt="SLM quality and cost comparison" />
+<img src="/dev-how-to/img/edge-ai/opensource-ai-models.png" alt="Open-source AI models" />

 ### While SLMs are all the rage, it's really about specialized AI models in Edge AI (at this moment...)
-<img src="/img/edge-ai/for-mobile-it-is-specialized-models-not-SLM.png" alt="SLM quality and cost comparison" />
+<img src="/dev-how-to/img/edge-ai/for-mobile-it-is-specialized-models-not-SLM.png" alt="For mobile, it is specialized models, not SLMs" />

 ## On-device Vector Databases are the second essential piece of the Edge AI Tech Stack
@@ -110,7 +110,7 @@ On-device (or Edge) vector databases have a small footprint (a couple of MB, not

 (Note: Edge vector databases, or on-device vector databases, are still rare. ObjectBox was the first on-device vector database available on the market. Some server- and cloud-oriented vector databases have recently begun positioning themselves for edge use. However, their relatively large footprint often makes them more suitable for laptops than for truly resource-constrained embedded devices. More importantly, solutions designed by scaling down from larger systems are generally not optimized for restricted environments, resulting in higher computational demands and increased battery consumption.)

-<img src="/img/edge-ai/vector-database.png" alt="Vector Databases" />
+<img src="/dev-how-to/img/edge-ai/vector-database.png" alt="Vector Databases" />
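At its core, the search a vector database performs boils down to ranking stored embeddings by similarity to a query vector. A hedged Python sketch of an exact cosine-similarity scan follows; real on-device vector databases avoid this O(n·d) full scan by using approximate indexes such as HNSW, which is part of what keeps latency and battery use low on constrained devices.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn(query: list[float], vectors: dict[str, list[float]], k: int = 2) -> list[str]:
    """Exact k-nearest-neighbor scan: O(n * d) per query."""
    scored = sorted(vectors.items(),
                    key=lambda kv: cosine_similarity(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
vectors = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.0],
    "car": [0.0, 0.1, 0.9],
}
print(knn([1.0, 0.0, 0.0], vectors, k=2))  # the two animal vectors rank first
```

An HNSW index answers the same query approximately while visiting only a small fraction of the stored vectors, trading a little recall for a large drop in compute.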

## Developer Story: On-device AI Screenshot Searcher Example App
@@ -125,19 +125,19 @@ To test the waters, I built a [**Screenshot Searcher** app with ObjectBox Vector
 This was easy and took less than a day. However, I learned more from the things I tried that weren't easy... ;)

 ### What I learned about text classification (and what will hopefully help you)
-<img src="/img/edge-ai/on-device-text-classification.png" alt="On-device Text Classification Learnings" />
+<img src="/dev-how-to/img/edge-ai/on-device-text-classification.png" alt="On-device Text Classification Learnings" />

 --> See finetuning... without finetuning there is no model, and no text classification.

 ### What I learned about finetuning (and what will hopefully help you)
-<img src="/img/edge-ai/finetuning-text-model-learnings.png" alt="Finetuning Learnings (exemplary, based on finetuning DBPedia)" />
+<img src="/dev-how-to/img/edge-ai/finetuning-text-model-learnings.png" alt="Finetuning Learnings (exemplary, based on finetuning DBPedia)" />

 --> Finetuning failed --> I will try again ;)

 ### What I learned about integrating an SLM (Google's Gemma)

 Integrating Gemma was super straightforward; it worked on-device in less than an hour (just don't try to run Gemma on the Android emulator (AVD) - it isn't recommended, and it also did not work for me).
-<img src="/img/edge-ai/using-gemma-on-android.png" alt="Using Gemma on Android" />
+<img src="/dev-how-to/img/edge-ai/using-gemma-on-android.png" alt="Using Gemma on Android" />
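One common shape for such an extra AI layer is to ground the SLM in the top vector-search hits before asking it a question (a tiny, local RAG loop). A minimal sketch follows; `run_slm` is a hypothetical stand-in for the actual on-device inference call, and the code does not invoke Gemma itself.

```python
def run_slm(prompt: str) -> str:
    # Hypothetical stand-in: a real app would call the local SLM here
    # (e.g. Gemma running on-device) instead of echoing the prompt.
    return "Summary of: " + prompt.splitlines()[-1]

def answer_about_screenshots(question: str, search_hits: list[str]) -> str:
    """Ground the SLM in the top vector-search hits before answering."""
    context = "\n".join(f"- {hit}" for hit in search_hits)
    prompt = (
        "Using only these screenshot descriptions:\n"
        f"{context}\n"
        f"Answer: {question}"
    )
    return run_slm(prompt)

print(answer_about_screenshots("When is my flight?",
                               ["boarding pass, 14 Jan, 09:40"]))
```

The structure (retrieve locally, then prompt the local model with the hits) is what keeps the whole loop on-device.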

In this example app, we are using Gemma to enhance the screenshot search with an additional AI layer:
@@ -153,7 +153,7 @@ It's already fairly easy - and vibe coding an Edge AI app is very doable. While of

-<img src="/img/edge-ai/final-tech-stack.png" alt="Final Tech Stack" />
+<img src="/dev-how-to/img/edge-ai/final-tech-stack.png" alt="Final Tech Stack" />
@@ -188,8 +188,8 @@ We’re at an inflection point: AI is moving from centralized, cloud-based servi
 - Cost-efficient
 - Sustainable

-The future of AI is not just big — its also **small, local, and synced**.
+The future of AI is not just big — it's also **small, local, and synced**.

-<img src="/img/edge-ai/ai-anytime-anywhere.png" alt="AI Anytime Anywhere Future" />
+<img src="/dev-how-to/img/edge-ai/ai-anytime-anywhere.png" alt="AI Anytime Anywhere Future" />

 ---