1 change: 1 addition & 0 deletions .checkpatch.ignore
Original file line number Diff line number Diff line change
@@ -1,3 +1,4 @@
# Ignore directories containing third-party files
chapters/compute/synchronization/drills/tasks/threadsafe-data-struct/support
content/assignments/async-web-server/src/http-parser
content/assignments/elf-loader/tests
24 changes: 14 additions & 10 deletions .github/workflows/lab-archive.yml
@@ -37,6 +37,13 @@ jobs:
exit 0
fi

git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"

ls -A *.zip > zip-list
git add *.zip
git stash

# Create or switch to lab-archives branch
if git ls-remote --exit-code origin lab-archives; then
git fetch origin lab-archives
Expand All @@ -46,15 +53,12 @@ jobs:
git rm -rf .
fi

# Remove old archives (tolerate archives that are new in this run)
for f in $(cat zip-list); do rm -f "$f"; git rm --ignore-unmatch --quiet "$f"; done
git commit -m "Remove outdated lab archives for commit $GITHUB_SHA" || true

# Copy new zips into branch root
git stash pop
git add *.zip

# Only commit if there are changes
if ! git diff --cached --quiet; then
git config user.name "github-actions[bot]"
git config user.email "github-actions[bot]@users.noreply.github.com"
git commit -m "Update lab archives for commit $GITHUB_SHA"
git push origin lab-archives
else
echo "No changes to commit."
fi
git commit -m "Update lab archives for commit $GITHUB_SHA"
git push origin lab-archives
1 change: 0 additions & 1 deletion .gitignore
@@ -15,7 +15,6 @@
*.war
*.nar
*.ear
*.zip
*.tar.gz
*.rar

1 change: 1 addition & 0 deletions chapters/compute/overview/reading/lab6.md
@@ -0,0 +1 @@
The contents of the lab are located in the [lab archive](https://github.com/cs-pub-ro/operating-systems/raw/refs/heads/lab-archives/Lab_6_Multiprocess_and_Multithread.zip) and in the [GitHub repository](https://github.com/cs-pub-ro/operating-systems).
1 change: 1 addition & 0 deletions chapters/compute/overview/reading/lab8.md
@@ -0,0 +1 @@
The contents of the lab are located in the [lab archive](https://github.com/cs-pub-ro/operating-systems/raw/refs/heads/lab-archives/Lab_8_Synchronization.zip) and in the [GitHub repository](https://github.com/cs-pub-ro/operating-systems).
@@ -1,11 +1,12 @@
# Create Process

Enter the `chapters/compute/processes/drills/tasks/create-process/` directory, run `make skels`, open the `support/src` folder and go through the practice items below.
Enter the `create-process/` directory (or `chapters/compute/processes/drills/tasks/create-process/` if you are working directly in the repository).
Run `make`, then enter the `support/` folder and go through the practice items below.

Use the `tests/checker.sh` script to check your solutions.

```bash
./checker.sh
./tests/checker.sh
exit_code22 ...................... passed ... 50
second_fork ...................... passed ... 50
100 / 100
5 changes: 3 additions & 2 deletions chapters/compute/processes/drills/tasks/sleepy/README.md
@@ -2,12 +2,13 @@

## Higher level - Python

Enter the `chapters/compute/processes/drills/tasks/sleepy` directory, run `make skels`, open the `support/src` folder and go through the practice items below.
Enter the `sleepy/` directory (or `chapters/compute/processes/drills/tasks/sleepy` if you are working directly in the repository).
Run `make`, then enter the `support/` folder and go through the practice items below.

Use the `tests/checker.sh` script to check your solutions.

```bash
./checker.sh
./tests/checker.sh
sleepy_creator ...................... passed ... 30
sleepy_creator_wait ................. passed ... 30
sleepy_creator_c .................... passed ... 40
@@ -1,6 +1,7 @@
# Wait for Me

Enter the `chapters/compute/processes/drills/tasks/wait-for-me-processes/` directory, run `make skels`, open the `support/src` folder and go through the practice items below.
Enter the `wait-for-me/` directory (or `chapters/compute/processes/drills/tasks/wait-for-me-processes/` if you are working directly in the repository).
Run `make`, then enter the `support/` folder and go through the practice items below.

Use the `tests/checker.sh` script to check your solutions.

@@ -10,7 +11,7 @@ wait_for_me_processes ...................... passed ... 100
```

1. Run the code in `wait_for_me_processes.py` (e.g: `python3 wait_for_me_processes.py`).
The parent process creates one child that writes and message to the given file.
The parent process creates one child that writes a message to the given file.
Then the parent reads that message.
Simple enough, right?
But running the code raises a `FileNotFoundError`.
2 changes: 1 addition & 1 deletion chapters/compute/processes/reading/processes.md
@@ -14,7 +14,7 @@ student@os:~$ file /usr/bin/ls
```

When you run it, the `ls` binary stored **on the disk** at `/usr/bin/ls` is read by another application called the **loader**.
The loader spawns a **process** by copying some of the contents `/usr/bin/ls` in memory (such as the `.text`, `.rodata` and `.data` sections).
The loader spawns a **process** by copying some contents of `/usr/bin/ls` into memory (for example the `.text`, `.rodata` and `.data` sections).
Using `strace`, we can see the [`execve`](https://man7.org/linux/man-pages/man2/execve.2.html) system call:

```console
@@ -22,8 +22,8 @@ There are a few rules though, such as:
- The consumer must not retrieve data if the buffer is empty.
- The producer and the consumer can't access the shared buffer at the same time.

Now enter `chapters/compute/synchronization/drills/tasks/apache2-simulator-condition/` and run `make skels`.
Look at the code in `chapters/compute/synchronization/drills/tasks/apache2-simulator/support/src/producer_consumer.py`.
Now enter the `apache2-simulator-condition/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/apache2-simulator-condition/` if you are working directly in the repository) and run `make skels`.
Look at the code in `support/src/producer_consumer.py`.
We have one producer and one consumer for simplicity.
Observe that the producer calls `notify()` once there is data available, and the consumer calls `notify()` once the data has been read.
Notice that each of these calls is preceded by an `acquire()` call and followed by a `release()` call.
@@ -58,7 +58,7 @@ Neat!
So now we have both synchronization **and** signalling.
This is what conditions are for, ultimately.

Open `chapters/compute/synchronization/drills/tasks/apache2-simulator/support/src/apache2_simulator_condition.py` and follow the TODOs.
Open the `apache2-simulator-condition/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/apache2-simulator-condition/` if you are working directly in the repository), then go to `support/src/apache2_simulator_condition.py` and follow the TODOs.
The code is similar to `apache2_simulator_semaphore.py`, but this time we use condition variables as shown in `producer_consumer.py`.

[Quiz](../../../drills/questions/notify-only-with-mutex.md)
@@ -11,14 +11,14 @@ It is not an instruction with its own separate opcode, but a prefix that slightl
For example, we cannot place it before a `mov` instruction, as the action of a `mov` is simply `read` or `write`.
Instead, we can place it in front of an `inc` instruction if its operand is memory.

Go in `chapters/compute/synchronization/drills/tasks/atomic-assembly/` and run:
Go to the `atomic-assembly/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/atomic-assembly/` if you are working directly in the repository) and run:

```bash
make skels
```

Look at the code in `chapters/compute/synchronization/drills/tasks/atomic-assembly/support/src/race_condition_lock.asm`.
It's an Assembly equivalent of the code you've already seen many times so far (such as `chapters/compute/synchronization/drills/tasks/race-condition/support/c/race_condition.c`).
Look at the code in `support/src/race_condition_lock.asm`.
It's an Assembly equivalent of the code you've already seen many times so far (such as `race-condition/support/c/race_condition.c`).
The two assembly functions (**increment_var** and **decrement_var**) are called by `race_condition_lock_checker.c`.

Now add the `lock` prefix before `dec`.
@@ -13,7 +13,7 @@ Modern processors are capable of _atomically_ accessing data, either for reads o
An atomic action is an indivisible sequence of operations that a thread runs without interference from others.
Concretely, before initiating an atomic transfer on one of its data buses, the CPU first makes sure all other transfers have ended, then **locks** the data bus by stalling all cores attempting to transfer data on it.
This way, one thread obtains **exclusive** access to the data bus while accessing data.
As a side note, the critical sections in `chapters/compute/synchronization/drills/tasks/race-condition/support/c/race_condition_mutex.c` are also atomic once they are wrapped between calls to `pthread_mutex_lock()` and `pthread_mutex_unlock()`.
As a side note, the critical sections in `race-condition/support/c/race_condition_mutex.c` are also atomic once they are wrapped between calls to `pthread_mutex_lock()` and `pthread_mutex_unlock()`.

As with every hardware feature, the `x86` ISA exposes an instruction for atomic operations.
In particular, this instruction is a **prefix**, called `lock`.
@@ -27,19 +27,19 @@ Compilers provide support for such hardware-level atomic operations.
GCC exposes [built-ins](https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html) such as `__atomic_load()`, `__atomic_store()`, `__atomic_compare_exchange()` and many others.
All of them rely on the mechanism described above.

Go to `chapters/compute/synchronization/drills/tasks/race-condition-atomic/` and run:
Go to the `race-condition-atomic/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/race-condition-atomic/` if you are working directly in the repository) and run:

```bash
make skels
```

Now enter `chapters/compute/synchronization/drills/tasks/race-condition-atomic/support/src/race_condition_atomic.c` and complete the function `decrement_var()`.
Now enter `support/src/race_condition_atomic.c` and complete the function `decrement_var()`.
Compile and run the code.
Its running time should be somewhere between `race_condition` and `race_condition_mutex`.

The C standard library also provides atomic data types.
Access to these variables can be done only by one thread at a time.
Go to `chapters/compute/synchronization/drills/tasks/race-condition-atomic/support/race_condition_atomic2.c`, compile and run the code.
Go to `support/src/race_condition_atomic2.c`, compile and run the code.

After both tasks are done, go in the checker folder and run it using the following commands:

@@ -1,6 +1,6 @@
# C: Race Conditions

Go to `chapters/compute/synchronization/drills/tasks/race-condition/support/c/race_condition_mutex.c` and notice the differences between this code and the buggy one.
Go to the `race-condition/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/race-condition/` if you are working directly in the repository), then open `support/c/race_condition_mutex.c` and notice the differences between this code and the buggy one.
We now use a `pthread_mutex_t` variable, which we `lock` at the beginning of a critical section, and we `unlock` at the end.
Generally speaking, `lock`-ing a mutex makes a thread enter a critical section, while calling `pthread_mutex_unlock()` makes the thread leave said critical section.
Therefore, as we said previously, the critical sections in our code are `var--` and `var++`.
@@ -1,7 +1,7 @@
# Synchronization - Thread-Safe Data Structure

Now it's time for a fully practical exercise.
Go to `chapters/compute/synchronization/drills/tasks/threadsafe-data-struct/support/`.
Go to the `threadsafe-data-struct/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/threadsafe-data-struct/` if you are working directly in the repository), then open the `support/` folder.
In the file `clist.c` you'll find a simple implementation of an array list.
Although correct, it is not (yet) thread-safe.

@@ -4,7 +4,7 @@ The perspective of C towards TLS is the following: everything is shared by defau
This makes multithreading easier and more lightweight to implement than in other languages, like D, because synchronization is left entirely up to the developer, at the cost of potential unsafety.

Of course, we can specify that some data belongs to the TLS, by preceding the declaration of a variable with `__thread` keyword.
Enter `chapters/compute/synchronization/drills/tasks/tls-on-demand/` and run `make skels`.
Enter the `tls-on-demand/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/tls-on-demand/` if you are working directly in the repository) and run `make skels`.
Now enter `support/src` and follow the TODOs.

1. Create the declaration of `var` and add the `__thread` keyword to place the variable in the TLS of each thread.
@@ -36,7 +36,7 @@ void *decrement_var(void *arg)
var--;

/**
* Print the value of `var` after it's incremented. Also print
* Print the value of `var` after it's decremented. Also print
* the ID of the thread. Use `pthread_self()` to get it.
*/
/* TODO 1: */
@@ -1,6 +1,6 @@
# Wrap the Whole `for` Statements in Critical Sections

Navigate to the `chapters/compute/synchronization/drills/tasks/wrap-the-for/` directory, run `make skels` and open the `support/src` directory.
Navigate to the `wrap-the-for/` directory of the extracted archive (or `chapters/compute/synchronization/drills/tasks/wrap-the-for/` if you are working directly in the repository), run `make skels`, and open the `support/src` directory.

Here you will find two source files:

7 changes: 4 additions & 3 deletions chapters/compute/threads/drills/tasks/multithreaded/README.md
@@ -1,8 +1,9 @@
# Multithreaded

Enter the `chapters/compute/threads/drills/tasks/multithreaded/` folder, run `make skels`, and go through the practice items below in the `support/` directory.
Enter the `multithreaded/` directory (or `chapters/compute/threads/drills/tasks/multithreaded/` if you are working directly in the repository).
Run `make`, then enter the `support/` folder and go through the practice items below.

1. Use the Makefile to compile `multithread.c`, run it and follow the instructions.
1. Use the Makefile to compile `multithreaded.c`, run it and follow the instructions.

The aim of this task is to familiarize you with the `pthreads` library.
In order to use it, you have to add `#include <pthread.h>` in `multithreaded.c` and `-lpthread` in the compiler options.
@@ -15,7 +16,7 @@ Enter the `chapters/compute/threads/drills/tasks/multithreaded/` folder, run `ma

Create a new function `sleep_wrapper2()` identical to `sleep_wrapper()` to organize your work.
So far, the `data` argument is unused (mind the `__unused` attribute), so that is your starting point.
You cannot change `sleep_wrapper2()` definition, since `pthreads_create()` expects a pointer to a function that receives a `void *` argument.
You must keep `sleep_wrapper2()`'s signature unchanged because `pthread_create()` requires a function of type `void *(*)(void *)`.
What you can and should do is to pass a pointer to a `int` as argument, and then cast `data` to `int *` inside `sleep_wrapper2()`.

**Note:** Do not simply pass `&i` as argument to the function.
@@ -1,7 +1,7 @@
# Wait for It

The process that spawns all the others and subsequently calls `waitpid` to wait for them to finish can also get their return codes.
Update the code in `chapters/compute/threads/drills/tasks/sum-array-bugs/support/seg-fault/sum_array_processes.c` and modify the call to `waitpid` to obtain and investigate this return code.
Update the code in `sum-array-bugs/support/seg-fault/sum_array_processes.c` (or `chapters/compute/threads/drills/tasks/sum-array-bugs/support/seg-fault/sum_array_processes.c` if you are working directly in the repository) and modify the call to `waitpid` to obtain and investigate this return code.
Display an appropriate message if one of the child processes returns an error.

Remember to use the appropriate [macros](https://linux.die.net/man/2/waitpid) for handling the `status` variable that is modified by `waitpid()`, as it is a bit-field.
@@ -19,7 +19,7 @@ Thus, an application that uses processes can be more robust to errors than if it
## Memory Corruption

Because they share the same address space, threads run the risk of corrupting each other's data.
Take a look at the code in `sum-array-bugs/support/memory-corruption/python/`.
Take a look at the code in `sum-array-bugs/support/memory-corruption/python/` (or `chapters/compute/threads/drills/tasks/sum-array-bugs/support/memory-corruption/python/` if you are working directly in the repository).
The two programs only differ in how they spread their workload.
One uses threads while the other uses processes.

20 changes: 9 additions & 11 deletions chapters/compute/threads/drills/tasks/sum-array/README.md
@@ -1,6 +1,7 @@
# Libraries for Parallel Processing

In `chapters/compute/threads/drills/tasks/sum-array/support/c/sum_array_threads.c` we spawned threads "manually" by using the `pthread_create()` function.
Enter the `sum-array/` directory (or `chapters/compute/threads/drills/tasks/sum-array/` if you are working directly in the repository).
In `./support/c/sum_array_threads.c` we spawned threads "manually" by using the `pthread_create()` function.
This is **not** a syscall, but a wrapper over the common syscall used by both `fork()` (which is also not a syscall) and `pthread_create()`.

Still, `pthread_create()` is not yet a syscall.
@@ -10,15 +11,12 @@ Most programming languages provide a more advanced API for handling parallel com

## Array Sum in Python

Let's first probe this by implementing two parallel versions of the code in `sum-array/support/python/sum_array_sequential.py`.
One version should use threads and the other should use processes.
Run each of them using 1, 2, 4, and 8 threads / processes respectively and compare the running times.
Notice that the running times of the multithreaded implementation do not decrease.
This is because the GIL makes it so that those threads that you create essentially run sequentially.
First, let's navigate to the `sum-array/` directory (or `chapters/compute/threads/drills/tasks/sum-array/` if you are working directly in the repository).
Let's explore this by implementing two parallel versions of the sequential script located at `./support/python/sum_array_sequential.py`.
Create one version that uses threads and another that uses processes.

The GIL also makes it so that individual Python instructions are atomic.
Run the code in `chapters/compute/synchronization/drills/tasks/race-condition/support/python/race_condition.py`.
Every time, `var` will be 0 because the GIL doesn't allow the two threads to run in parallel and reach the critical section at the same time.
This means that the instructions `var += 1` and `var -= 1` become atomic.
After implementing them, run each version using 1, 2, 4, and 8 workers for both threads and processes and compare their execution times.

If you're having difficulties solving this exercise, go through [this](../../../guides/sum-array-threads.md) reading material.
You will likely notice that the running time of the multi-threaded implementation does not decrease as you add more threads.
This is due to CPython's Global Interpreter Lock (GIL), which prevents multiple native threads from executing Python bytecode at the same time.
For this reason, CPU-bound tasks in Python do not typically see a performance increase from multi-threading.