Mathematics-I: migrated to the electrostatics-sandbox #21

Merged: merged 1 commit on Jun 20, 2024
15 changes: 13 additions & 2 deletions docs.md
@@ -2,11 +2,22 @@

The following page lists the documentation manuals provided by the _Electrostatic Sandbox Project_ in the context of embedded systems testing, simulation, and design.

## Computing Guidelines:

- [ACM Guidelines for Curricula](https://electrostat-lab.github.io/Electrostatic-Sandbox/embedded-system-design/acm-guidelines): Generalized guidelines for studying computer science utilized in approaching the self-taught route of embedded software/hardware co-design and engineering.


## Embedded Systems:

- [The AVR-Sandbox Project](https://electrostat-lab.github.io/Electrostatic-Sandbox/embedded-system-design/avr-sandbox/index): Includes documentation on basic C programming, digital electronics, AVR architecture, and embedded systems interfacing together with some circuit design.


- [Embedded Systems Design](https://electrostat-lab.github.io/Electrostatic-Sandbox/embedded-system-design/): WIP.

- [IEEE-1516 HLA Specification](https://electrostat-lab.github.io/Electrostatic-Sandbox/embedded-system-design/ieee-1516): Provides a documentation manual for the _High-level Architecture_ IEEE specification.

## Mathematics:

- [Mathematics-I](https://electrostat-lab.github.io/Electrostatic-Sandbox/embedded-system-design/mathematics-i/index): Houses useful reusable equations and formulas in calculus, discrete mathematics, and linear algebra.


28 changes: 28 additions & 0 deletions embedded-system-design/mathematics-i/LICENSE
@@ -0,0 +1,28 @@
BSD 3-Clause License

Copyright (c) 2024, Software~Hardware Co-design

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
22 changes: 22 additions & 0 deletions embedded-system-design/mathematics-i/README.md
@@ -0,0 +1,22 @@
# Mathematics-I
> Created by [Pavl_G](https://github.com/Scrappers-glitch)
>> [Contribution Guidelines](https://github.com/Electrostat-Lab/.github/blob/main/CONTRIBUTING.md)

This repository houses useful reusable formulas and equations for discrete mathematics, linear algebra, and calculus, together with appendices that include proofs, analysis tools, and other practical applications.

> Utilizes Resources from:

| _Discrete & Finite Mathematics_ | _Pure Mathematics_ |
|----------|---------|
| <img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/c100afd6-459c-48f2-ae1d-5aa387b3eff5"/> | <img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/8006dfe9-80fa-4668-be6f-93726a708ea0"/> |
| <img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/f3ce09d7-223a-46b1-a849-84271b15acc8"/> | <a href="https://link.springer.com/book/10.1007/978-3-319-91041-3"><img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/04300248-da55-48f9-b9f8-23315d29535c"/></a> |
| <img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/4207ea21-1cc3-4c11-a64a-042947122a21"/> | <a href="https://link.springer.com/book/10.1007/978-3-540-72122-2"><img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/bee1c1a4-7d55-4697-a340-456a5e7df950"/></a> |

| _Practical Applications_ |
|--------------------------|
| <img width=160 height=200 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/e589fc74-c6d4-428d-8941-c3857cee2d21"/> |

> Powered by:

<a href="https://jekyllrb.com/"><img width=170 height=100 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/e8f4ae7f-8dcf-498b-8856-661278875347"/></a> <a href="https://www.mathjax.org/"> <img width=220 height=70 src="https://github.com/Electrostat-Lab/Mathematics-I/assets/60224159/a3489889-5669-4a5b-ab94-a9a1155d85f5"/> </a>

137 changes: 137 additions & 0 deletions embedded-system-design/mathematics-i/calculus/appendix-e.md
@@ -0,0 +1,137 @@
# Calculus I: Appendix-E: Derivation of the dot product and the projection vectors

<div align=center><img src="https://electrostat-lab.github.io/Mathematics-I/calculus/archive/the-dot-product.jpg" width=550 height=850/></div>

## 1) The dot product:

- Recall two intersecting vectors $u$ and $v$ together with the resultant vector $w$; such that, $$w = u - v$$

- From the "Law of Cosines":

$$Since,\ ||w||^2 = ||u||^2 + ||v||^2 - 2 * ||u|| * ||v|| * cos(\theta)$$

$$Then,\ cos(\theta) = (||w||^2 - ||u||^2 - ||v||^2) / (-2 * ||u|| * ||v||)\ \ (Lemma.01)$$

$$ ------- $$

$$Since, ||w|| = \sqrt{(u_x - v_x)^2 + (u_y - v_y)^2 + (u_z - v_z)^2}$$

$$||v|| = \sqrt{{v_x}^2 + {v_y}^2 + {v_z}^2}$$

$$||u|| = \sqrt{{u_x}^2 + {u_y}^2 + {u_z}^2}$$

$$||w|| = ||u-v|| = \sqrt{{(u_x - v_x)}^2 + {(u_y - v_y)}^2 + {(u_z - v_z)}^2}$$

$$Then,\ Lemma.02:$$

$$1)\ ||v||^2 = {v_x}^2 + {v_y}^2 + {v_z}^2$$

$$2)\ ||u||^2 = {u_x}^2 + {u_y}^2 + {u_z}^2$$

$$3)\ ||w||^2 = {(u_x - v_x)}^2 + {(u_y - v_y)}^2 + {(u_z - v_z)}^2$$

$$= ({u_x}^2 -2{u_x}{v_x} + {v_x}^2) + ({u_y}^2 -2{u_y}{v_y} + {v_y}^2) + ({u_z}^2 -2{u_z}{v_z} + {v_z}^2)$$

$$= ||u||^2 + ||v||^2 -2({u_x}{v_x} + {u_y}{v_y} + {u_z}{v_z})$$

$$ ------- $$

- By back-substitution in $Lemma.01$:

$$cos(\theta) = (||w||^2 - ||u||^2 - ||v||^2) / (-2 * ||u|| * ||v||) = {(R.H.S)}_1 / {(R.H.S)}_2$$

$${(R.H.S)}_1 = (||w||^2 - ||u||^2 - ||v||^2)$$

$$= -2({u_x}{v_x} + {u_y}{v_y} + {u_z}{v_z})$$

$${(R.H.S)}_2 = -2 * ||u|| * ||v||$$

$$Hence,\ cos(\theta) = {(R.H.S)}_1 / {(R.H.S)}_2$$

$$= -2({u_x}{v_x} + {u_y}{v_y} + {u_z}{v_z}) / (-2 * ||u|| * ||v||)$$

$$= ({u_x}{v_x} + {u_y}{v_y} + {u_z}{v_z}) / (||u|| * ||v||)$$

- And, by definition $({u_x}{v_x} + {u_y}{v_y} + {u_z}{v_z})$ yields $u.v$, the dot product or the inner product.

$$Hence,\ cos(\theta) = u.v / (||u|| * ||v||)$$

$$And, u.v = ||u|| * ||v|| * cos(\theta)$$
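
To make the result concrete, here is a minimal numerical sketch of the identity above: it computes $u.v$ componentwise and compares it against $||u|| * ||v|| * cos(\theta)$, with $cos(\theta)$ obtained from the Law of Cosines. The class and method names (`DotProductCheck`, `dot`, `norm`) and the sample vectors are illustrative assumptions, not part of the original text.

```java
// A minimal numerical check of the derivation above (illustrative names and values only).
public class DotProductCheck {

    static double dot(double[] u, double[] v) {
        // Componentwise definition: u.v = u_x*v_x + u_y*v_y + u_z*v_z
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }

    static double norm(double[] u) {
        return Math.sqrt(dot(u, u));
    }

    public static void main(String[] args) {
        double[] u = {3, -2, 1};
        double[] v = {1, 4, 2};
        double[] w = {u[0] - v[0], u[1] - v[1], u[2] - v[2]}; // w = u - v

        // Law of Cosines: ||w||^2 = ||u||^2 + ||v||^2 - 2*||u||*||v||*cos(theta)
        double cosTheta = (norm(w) * norm(w) - norm(u) * norm(u) - norm(v) * norm(v))
                / (-2 * norm(u) * norm(v));

        // Both sides of u.v = ||u|| * ||v|| * cos(theta) should agree.
        System.out.println("u.v (componentwise)    = " + dot(u, v));
        System.out.println("||u||*||v||*cos(theta) = " + norm(u) * norm(v) * cosTheta);
    }
}
```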


## 2) Projection vectors:

* Definition: The projection vector $\vec{proj_{\vec{v}}} \vec{u}$ of a vector $\vec{u}$ is the vector component of $\vec{u}$ that is coincident to the projectile vector $\vec{v}$; its length is the scalar component of $\vec{u}$ along $\vec{v}$ (i.e., $||\vec{u}||*cos(\theta)$), and its direction is the unit vector of the projectile vector.

* For the proof, let vector $\vec{u}$ be our target vector, the one whose vector component in the direction of another contiguous vector we would like to find, and let vector $\vec{v}$ be the projectile (aka. base) vector, the one whose direction we utilize to find the projection vector.

1) Finding the norm of the vector component $||\vec{u_x}||$ that is coincident to the projectile vector $\vec{v}$:

$$Since,cos(\theta)=||\vec{u_x}||/||\vec{u}||$$

$$Then,||\vec{u_x}||=||\vec{u}||*cos(\theta)$$

2) Finding the vector norm $||\vec{v}||$ of the projectile vector $\vec{v}$:

$$||\vec{v}||=\sqrt{{v_x}^2+{v_y}^2+{v_z}^2}$$

3) Finding the normalization ratio using the scalar division property $\vec{v_{unit}}=\vec{v}/||\vec{v}||$ of the projectile vector (base vector).

4) Using the scalar multiplication property $||\vec{u_x}||*\vec{v_{unit}}$:

> $Lemma.01$

$$||\vec{u_x}||*\vec{v_{unit}}=(||\vec{u}||*cos(\theta))*\vec{v_{unit}}$$

$$Since,\ \vec{u}.\vec{v}=||\vec{u}||*||\vec{v}||*cos(\theta)$$

> $Lemma.02$

$$Then,||\vec{u}||*cos(\theta)=(\vec{u}.\vec{v})/||\vec{v}||$$

5) Then, from $Lemma.01$ and $Lemma.02$, we can deduce the projection vector formula in terms of the dot product between 2 vectors as follows:

$$\vec{proj_{\vec{v}}}\vec{u}=[({\vec{u}}.{\vec{v}})*{\vec{v_{unit}}}]/||\vec{v}||$$

6) Another formula, obtained when $\vec{v_{unit}}$ is expanded into $\vec{v}/||\vec{v}||$:

$$\vec{proj_{\vec{v}}}\vec{u}=[(\vec{u}.\vec{v})*\vec{v}]/||\vec{v}||^2$$

> Note:
> * This could be applied to the other components of vector $\vec{u}$, the $\vec{u_y}$ and the $\vec{u_z}$, and any vector could be utilized as the base or the projectile vector.
> * The projection vector of vector $\vec{u}$ onto itself, $\vec{proj_{\vec{u}}} \vec{u}$, is the vector $\vec{u}$ itself.
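
The following is a minimal sketch of the expanded projection formula from step 6, $\vec{proj_{\vec{v}}}\vec{u}=[(\vec{u}.\vec{v})*\vec{v}]/||\vec{v}||^2$; the class name and the sample vectors are illustrative assumptions.

```java
// A short sketch of the projection formula derived above (illustrative names and values only).
public class ProjectionCheck {

    static double dot(double[] u, double[] v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }

    static double[] project(double[] u, double[] v) {
        // proj_v(u) = [(u.v) / ||v||^2] * v  -- the expanded form from step 6
        double scale = dot(u, v) / dot(v, v);
        return new double[]{scale * v[0], scale * v[1], scale * v[2]};
    }

    public static void main(String[] args) {
        double[] u = {6, 2, 0};
        double[] v = {3, 0, 0};
        double[] p = project(u, v); // expected: (6, 0, 0), the component of u along v
        System.out.println(p[0] + ", " + p[1] + ", " + p[2]);
    }
}
```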

## 3) Usages Review:
1) Finding the work done by a force vector $(F)$ to move an object through a displacement $(D)$ with an inscribed angle $(a)$ between them (Physics):

$$W = F.D = ||F|| * ||D|| * cos(a) = \sum_{i=0}^{n} F_i D_i = F_0 D_0 + F_1 D_1 + F_2 D_2 +...+ F_{n-1} D_{n-1} + F_n D_n$$

2) Finding the inscribed angle $\angle a$ between 2 intersecting vectors, formula (see the sketch after this list):

$$m(a) = acos(u.v/(||u|| * ||v||))$$

> where u.v can be evaluated using the Riemann's sum formula (Trigo./Physics).

3) Finding whether 2 intersecting vectors are orthogonal, formula: $u.v = ||u|| * ||v|| * cos(\pi/2) = 0$ (Geometry).

4) Finding projection vectors, "the vector projection of $u$ onto $v$", formula:

$$proj_{v}^{u} = (||u|| * cos(a)) * (v/||v||) = (u.v / ||v||^2) * v$$

> where $(||u|| * cos(a))$ is the length of the triangle base, and $(v/||v||)$ is the unit vector form (normalized) of $v$.

5) Finding the total electromotive force (EMF) in a closed circuit loop, formula (aka. Ohm's Law):

$$V = I * R * cos(0)$$

6) Finding the driving arterial blood pressure in a closed arterial circuitry, formula (Hemodynamics):

$$BP = CO * SVR * cos(0)$$
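
Below is a small sketch covering usages (2) and (3): the inscribed angle and the orthogonality test. The class name, helper methods, and sample vectors are illustrative assumptions.

```java
// A small sketch of usages (2) and (3): inscribed angle and orthogonality test.
public class AngleCheck {

    static double dot(double[] u, double[] v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    }

    static double norm(double[] u) {
        return Math.sqrt(dot(u, u));
    }

    public static void main(String[] args) {
        double[] u = {1, 0, 0};
        double[] v = {0, 5, 0};

        // m(a) = acos(u.v / (||u|| * ||v||))
        double angle = Math.acos(dot(u, v) / (norm(u) * norm(v)));
        System.out.println("angle (radians) = " + angle); // PI/2 for these vectors

        // Orthogonality test: u.v = 0 exactly when the inscribed angle is PI/2.
        System.out.println("orthogonal? " + (Math.abs(dot(u, v)) < 1e-9));
    }
}
```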

> References:
> * Thomas' Calculus $14^{th}e$: Ch.12 Vectors & Geometry of Space: Section.12.3. (The Dot Product).
> * Guyton and Hall Textbook of Medical Physiology $13^{th}e$
> * Applied Linear Algebra $2^{nd}e$ Springer

@@ -0,0 +1,3 @@
# Calculus I: Archive

This folder archives supportive assets other than the main text and materials utilized for the website deployment.
23 changes: 23 additions & 0 deletions embedded-system-design/mathematics-i/discrete-maths/appendix-a.md
@@ -0,0 +1,23 @@
# Discrete Mathematics I: Appendix-A: Symbolic Designation for mathematical analysis

<div align=center><img src="https://electrostat-lab.github.io/Mathematics-I/discrete-maths/archive/algorithm-analysis-using-machines.jpg" width=550 height=850/></div>

## The following is the symbolic designation legend to aid in the subsequent mathematical analysis:
- $N_c$: the number of executions of the closure $c$ imposed by the enclosing executing environment $E_e$, or the superclosure.
- $E_{e}$: the superclosure of the current closure that is of interest; the superclosure defines the executing environment that imposes the syntactical interpretation of iterations and machine transitions.
- $C_c$: the clock-complexity function; defines the number of cycles needed by the CPU to execute a set of machines inside an environment.
- ${\tau}$: the transition-complexity function; defines the approximate clock-complexity taken by some machinery transitions among a set of specified machinery states (e.g., $M_{\alpha} = [{\mu}_n, {\mu}_{n+1}, ..., {\mu}_{N-1}, {\mu}_{N}]$), and it follows that the transition-complexity function formula is the same as the clock-complexity function formula (i.e., ${\tau}'_n = {C''}_c$).
- $t_c$: the physical time-complexity function; defines the approximate time, in seconds, taken to execute the specified runnable set of machines.
- ${\epsilon}$: the error-rate complexity function; defines the error rate introduced when approximating the exact physical time taken to execute some machines.

## The following is the generalized formula:

$$Since, N_c = \prod_{i=1}^I E_{e_i}$$

$$C_c = N_c * \sum_{n=1}^N {\tau}_n$$

- Such that ${\tau}_n = C'_c$, ${\tau}'_n = {C''}_c$, and so on; as it represents the transition between machinery states, this is a recursive formula re-evaluated on the innermost closures.

$$Then,\ C_c = \prod_{i=1}^I E_{e_i} * \sum_{n=1}^N {\tau}_n = (E_{e_1} * E_{e_2} * ... * E_{e_{I-1}} * E_{e_{I}}) * ({\tau}_{1} + {\tau}_{2} + ... + {\tau}_{(N-1)} + {\tau}_{(N)})$$

$$And, t_c = (C_c/F_{CPU}) + {\epsilon}$$
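
As a minimal sketch of the generalized formula, the snippet below multiplies assumed nesting counts $E_{e_i}$, sums assumed transition complexities ${\tau}_n$, and converts the resulting clock-complexity to physical time with an assumed CPU frequency; all names and values are illustrative, not measured.

```java
// A minimal sketch of the generalized formula, with illustrative values only.
public class ClockComplexity {
    public static void main(String[] args) {
        long[] enclosingCounts = {10, 100};    // E_e_i: iteration counts imposed by superclosures
        long[] transitionCosts = {2, 3, 4};    // tau_n: cycles per machinery transition
        double cpuFrequencyHz = 16_000_000.0;  // F_CPU, e.g. a 16 MHz clock (assumed)
        double epsilonSeconds = 0.0;           // error-rate term, unknown here

        long nC = 1;
        for (long e : enclosingCounts) nC *= e;         // N_c = product of E_e_i

        long tauSum = 0;
        for (long tau : transitionCosts) tauSum += tau; // sum of tau_n

        long cC = nC * tauSum;                            // C_c = N_c * sum(tau_n)
        double tC = cC / cpuFrequencyHz + epsilonSeconds; // t_c = (C_c / F_CPU) + epsilon

        System.out.println("C_c = " + cC + " cycles, t_c = " + tC + " s");
    }
}
```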
92 changes: 92 additions & 0 deletions embedded-system-design/mathematics-i/discrete-maths/appendix-b.md
@@ -0,0 +1,92 @@
# Discrete Mathematics I - Appendix-B: Algorithm Analysis

## Manual Analysis of Algorithms (Closure Analysis):
### 1) N-order looping algorithms:
- Recall, _Closure A_:
```java
A {
FOR I = 1 TO N:
command()
END
}
```
- Then, it follows that the dummy `command()`, with clock-complexity $N_c$, will be executed $N$ times, so $$f(N) = N * (N_c)$$
- Hence, the complexity of the closure execution can be represented as a finite sum by Riemann's sum formula:
$$f(N) = \sum_{n=1}^{N} N_{c_n} = {(N_c)}_1 + {(N_c)}_2 + {(N_c)}_3 +...+ {(N_c)}_{N-2} + {(N_c)}_{N-1} + {(N_c)}_{N} = N(N_c)$$
- As with multi-variable equations, the total clock-complexity of the execution depends not only on the number of iterations but also on what is inside the loop closure, in other words on the `command()` complexity. If the `command()`'s clock-complexity can be evaluated to $N_c=1$, as a matter of a simple command-execution opcode, then the total clock-complexity for this loop closure is $$f(N) = \sum_{n=1}^{N} 1 = N(1) = N$$ (a counting sketch follows this list).
- Since Riemann's sums can be applied over a finite set _S_ of closure executions: $$f(I) = \sum_{i=1}^{I} f(N_{i}) = f(N_{i}) + f(N_{i+1}) + f(N_{i+2}) + ... + f(N_{I-2}) + f(N_{I-1}) + f(N_{I})$$
- Then, a specific notation of Riemann's sums can be applied for a finite set $S_L$ of loop-closure executions: $$f(I) = \sum_{i=1}^{I} f(L_{i}) = f(L_{i}) + f(L_{i+1}) + f(L_{i+2}) + ... + f(L_{I-2}) + f(L_{I-1}) + f(L_{I})$$
$$= L_i + L_{i+1} + L_{i+2} + ... + L_{I-2} + L_{I-1} + L_{I}$$ ;where $I$ is the total number of closures, and $i$ represents the index of the finite item in the set.
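
A small counting sketch of _Closure A_, assuming the dummy `command()` contributes $N_c = 1$ per call; the class name and the counter are illustrative, and the counter simply demonstrates that $f(N) = N$.

```java
// A counting sketch of Closure A, assuming N_c = 1 for the dummy command().
public class LoopClosure {
    static long clockComplexity = 0;

    static void command() {
        clockComplexity += 1; // each call contributes N_c = 1
    }

    public static void main(String[] args) {
        int n = 1000;
        for (int i = 1; i <= n; i++) { // Closure A: FOR I = 1 TO N
            command();
        }
        System.out.println("f(N) = " + clockComplexity); // prints 1000, i.e. f(N) = N
    }
}
```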

### 2) Conditional closures algorithms:
- Recall, _Closure B_:
```java
B {
IF ({C_i} = {VALUE}) THEN
command()
END
}
```
- Where $C_i$ is the condition tag, and _i_ is the number of conditions; in this case, it is 1.
- Then, it follows that the dummy `command()` will be executed once, so $$f(n) = 1 * N_c$$ ;where $N_c$ represents the clock-complexity of the involved `command()` executed by this closure.
- Since Riemann's sums can be applied over a finite set _S_ of closure executions: $$f(I) = \sum_{i=1}^{I} f(N_{i}) = f(N_{i}) + f(N_{i+1}) + f(N_{i+2}) + ... + f(N_{I-2}) + f(N_{I-1}) + f(N_{I})$$
- Then, a specific notation of Riemann's sums can be applied for a finite set $S_C$ of conditional-closure executions, so $$f(I) = \sum_{i=1}^{I} f(C_{i}) = f(C_{i}) + f(C_{i+1}) + f(C_{i+2}) + ... + f(C_{I-2}) + f(C_{I-1}) + f(C_{I})$$
$$={(N_c)}_1 + {(N_c)}_2 + {(N_c)}_3 +...+ {(N_c)}_{I-2} + {(N_c)}_{I-1} + {(N_c)}_{I} $$ ;where $I$ is the total number of closures, $i$ is the index of the finite item in the set, and $f(C_{i})$ is the complexity of executing a conditional command `command()` (notice how abstract this function is, as the `command()` could be another algorithm of another complexity; see the `compound complexities` section). A counting sketch follows this list.
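
A small counting sketch of _Closure B_, assuming $N_c = 1$ for the dummy `command()`; the condition tag and its value are illustrative assumptions. The conditional body contributes at most one $N_c$, so $f(n) = 1 * N_c$.

```java
// A counting sketch of Closure B: the conditional body contributes at most one N_c.
public class ConditionalClosure {
    static long clockComplexity = 0;

    static void command() {
        clockComplexity += 1; // N_c = 1 for the dummy command()
    }

    public static void main(String[] args) {
        int c = 42;              // C_i: the condition tag (illustrative value)
        final int VALUE = 42;

        if (c == VALUE) {        // Closure B: IF (C_i = VALUE) THEN
            command();
        }
        System.out.println("f(n) = " + clockComplexity); // 1 * N_c = 1
    }
}
```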

### 3) The scientific basis behind compositing closures (defining a transcendental formula for closures):

- Closures can be designated as special types of _Sets_, where operations, a specific sort of relations, are monitored in an execution environment; hence, all types of closures yield $$f(N) = C(N) * \sum_{c=1}^{C} N_c = C(N) * (N_1 + N_2 + N_3 + ... + N_{C-2} + N_{C-1} + N_C)$$ ;where $f(N)$ is the total clock-complexity of the closure execution (execution of the commands inside the closure), $C(N)$ is the clock-complexity of the closure itself by its class (e.g., first-order loops use $C(N)=N$), and $N_c$ represents the clock-complexity of a single command, such that their Riemann's sum yields the total clock-complexity of executing the enclosed commands.

- It follows that this could also be represented using the _integral function_, aka. _Leibniz's notation_; the integral function integrates the time complexities of the function's stack in the form $f(x).dx=C(N).N_c$:
$$Since, f(x) = F(x) = \int_a^x{f'(x).dx}$$
$$Hence, F(x_1) - F(x_0) = \int_a^{x_1}{f'(x).dx} - \int_a^{x_0}{f'(x).dx} = \int_{x_0}^{x_1}{f'(x).dx} = f(x_1) - f(x_0)$$
$$Then, F(x) = f(N) = C(N) * \sum_{c=1}^{C} N_c = \sum_{c=1}^{C} N_c * C(N) = \int_1^C{f'(x).dx} = f(C) - f(1)$$

- Almost all properties of _Sets_ can be applied to closures; hence, if the super-closure (superset) has a simple complexity of constant functional execution (i.e., $C(N) = 1$), then the generalized Riemann's sum narrows down to: $$f(N) = C(N) * \sum_{c=1}^{C} N_c = (1) * \sum_{c=1}^{C} N_c$$

- Whereas, if the super-closure (superset) has a loop complexity of transcendental functional execution (i.e., $C(N) = c*N^e$), then the generalized Riemann's sum can be obtained as follows: $$f(N) = C(N) * \sum_{c=1}^{C} N_c = c * N^e * \sum_{c=1}^{C} N_c$$ ;where $c$ is a constant coefficient, and $e$ is the exponent representing the nested loop closures (a minimal sketch follows this list).
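
A minimal sketch of the composite formula $f(N) = C(N) * \sum N_c$, with illustrative command complexities and an assumed first-order loop class $C(N) = N$; the class name and values are not from the original text.

```java
// A minimal sketch of the composite formula f(N) = C(N) * sum(N_c), illustrative values only.
public class CompositeClosure {
    public static void main(String[] args) {
        long[] commandComplexities = {1, 3, 2};   // N_c of each command enclosed by the closure
        long closureClass = 100;                  // C(N): e.g. a first-order loop with N = 100

        long sum = 0;
        for (long nc : commandComplexities) sum += nc; // Riemann's sum of the enclosed commands

        long fN = closureClass * sum;             // f(N) = C(N) * sum(N_c)
        System.out.println("f(N) = " + fN);       // 100 * 6 = 600
    }
}
```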

### 4) Compound (or Nested) closures algorithms:
- Recall, a super-closure $S_c$; such that:
```java
S_c: {
command()
}
```
- Then, it follows that this closure executes the `command()` $N$ times, each with the clock-complexity of the command $N_c$; hence $$f(N) = N*N_c$$
- Hence, if the `command()` holds the following closure as its stack:
```java
command(): {
FOR I = 1 TO N:
command1();
END
}
```
- Then, the total clock-complexity of execution will be: $$f(N) = N*N_c$$
- However, if the `command()` holds a simple conditional closure as follows:
```java
command(): {
IF ({C_i} = {VALUE}) THEN
command1()
END
}
```
- Then, it follows that the total clock-complexity of execution will be: $$f(N) = N*N_c = (1) * N_c = N_c$$
- Now, if the `command()` holds a nested loop closure as follows:
```java
command(): {
FOR I = 1 TO N:
FOR J = 1 TO N:
command1();
END
END
}
```
- Then, it follows that the total clock-complexity can be evaluated to: $$f(N) = N*N_c = N * (N * N_c') = N^2 * N_c'$$ ;which means that `command1()`, with clock-complexity $N_c'$, will be executed $N^2$ times, in a product-set fashion (a counting sketch follows below).

- Now, $N_c$ can represent any type of clock-complexity, ranging from a simple complexity to a compound complexity involving finite sets; the general formula utilizes Riemann's sum and can also be represented as an integral function using _Leibniz's notation_.
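
A counting sketch of the nested-loop case, assuming $N_c' = 1$ for `command1()`; the class name and the value of $N$ are illustrative. The counter demonstrates the $N^2$ product-set behavior.

```java
// A counting sketch of the nested-loop case: command1() runs N * N = N^2 times.
public class NestedClosure {
    static long executions = 0;

    static void command1() {
        executions += 1; // N_c' = 1 for the innermost command
    }

    public static void main(String[] args) {
        int n = 50;
        for (int i = 1; i <= n; i++) {       // outer closure: FOR I = 1 TO N
            for (int j = 1; j <= n; j++) {   // inner closure: FOR J = 1 TO N
                command1();
            }
        }
        System.out.println("f(N) = " + executions); // prints 2500 = N^2
    }
}
```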




