7 changes: 5 additions & 2 deletions 02_activities/assignments/DC_Cohort/Assignment2.md
@@ -54,7 +54,7 @@ The store wants to keep customer addresses. Propose two architectures for the CUSTOMER_ADDRESS…
**HINT:** search type 1 vs type 2 slowly changing dimensions.

```
There are two types of slowly changing dimensions. Type 1 completely overwrites the old data and keeps no history; this results in only one row per customer in the CUSTOMER_ADDRESS table. Type 2 “shelves” the old data (i.e., marks it as obsolete, usually with an end date) and then adds a new row with the updated data; this results in multiple rows per customer in the CUSTOMER_ADDRESS table over time, but only one active row per customer while the rest are inactive.
```
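
For illustration, here is a minimal sketch of the two approaches in SQL. It assumes a hypothetical CUSTOMER_ADDRESS layout (`customer_id`, `address`, plus `effective_date`, `end_date`, and `is_active` for Type 2); the column names are illustrative, not the store's actual schema.

```
-- Type 1: overwrite in place; no history survives.
UPDATE customer_address
SET address = '123 New St'
WHERE customer_id = 42;

-- Type 2: expire ("shelve") the current row, then insert the new one.
UPDATE customer_address
SET end_date = CURRENT_DATE, is_active = 0
WHERE customer_id = 42 AND is_active = 1;

INSERT INTO customer_address (customer_id, address, effective_date, end_date, is_active)
VALUES (42, '123 New St', CURRENT_DATE, NULL, 1);
```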

***
@@ -183,5 +183,8 @@ Consider, for example, concepts of labour, bias, LLM proliferation, moderating c…


```
LLMs, a form of generative AI, have become invaluable across many settings, from large industrial processes to day-to-day life. However, they raise quite a few ethical considerations. For example, the initial datasets used to train many models were often built with cheap labour. While there were checks in place, we cannot be sure that the training data was given adequate time and resources, raising questions about the accuracy of the building blocks of training data.
One major source of ethical concern is the various types of data bias. Most LLMs will reflect the inherent biases (e.g., historical or societal) or lack of diversity present in their training data, and since they learn via repetition and pattern recognition, there is a chance those biases will amplify over time. These include potential gender biases (e.g., assuming gender based on topic or profession), racial biases (e.g., if the training data comes from a less tolerant, more prejudiced historical era), cultural biases (most training datasets in use are based on Western-centric data sources), socioeconomic status biases (this could go either way: if outsourced, potentially cheap, labour is used there is a risk of biasing towards lower SES, while if trained on university/college students and staff there is a risk of biasing towards higher SES), political biases (right-leaning or left-leaning depending on the source of the training data), and disability biases (e.g., lumping disabilities together in terms of their capabilities), to name a few. If LLMs are used for screening purposes (e.g., hiring, loans, legal systems, medical fields), these biases could cause unfair outcomes for some demographics.
Another ethical issue revolves around accountability and transparency. Due to the multi-level complexity of LLMs, it can be difficult to assign responsibility when an LLM makes a mistake, since the error can come from any level (e.g., training data, algorithm, developers, users). Because of these complex inner workings, it can also be hard to understand how the LLM arrived at a given output, impeding error identification and correction. There is also the issue of misusing LLMs for malicious purposes, such as creating deepfakes and manipulated media, poisoning training data, or generating media with minimal (or no) human input and passing it off as someone’s original work. Additionally, LLMs are trained on large datasets, often scraped broadly from the internet, which can contain a great deal of sensitive material (personal information, copyrighted work, etc.). It is not always clear whether informed consent was present for all the datasets LLMs learn from, or whether this sensitive information is being accessed by unauthorized parties or misused. Other ethical issues include model architecture (it could prioritize some patterns over others, leading to inductive biases), environmental impact (running LLMs requires significant energy consumption, which is at odds with climate sustainability), and potential job loss (via AI “automation”, which disproportionately advantages people who have the skill set and/or access to training to leverage AI tools).
We need to combine ethical and technical strategies to mitigate these issues. Potential solutions include establishing accountability and transparency from the beginning, increasing training-data diversity, ongoing monitoring and measurement of biases and outputs, defining legal frameworks for data usage in training LLMs, investing in AI literacy so users can critically evaluate outputs, exploring ways to soften economic disruptions (e.g., layoffs), and developing energy-efficient algorithms.
```
149 changes: 146 additions & 3 deletions 02_activities/assignments/DC_Cohort/assignment2.sql
@@ -13,13 +13,17 @@ FROM product

But wait! The product table has some bad data (a few NULL values).
Find the NULLs and then, using COALESCE, replace the NULL with a
blank for the first column with NULLs, and 'unit' for the second column with NULLs.

HINT: keep the syntax the same, but edit the correct components of the string.
The `||` operator concatenates the columns into strings.
Edit the appropriate columns -- you're making two edits -- and the NULL rows will be fixed.
All the other rows will remain the same. */

SELECT
product_name || ', ' || COALESCE(product_size, '') || ' (' || COALESCE(product_qty_type, 'unit') || ')' AS concise_product_info
FROM product;



-- Windowed Functions
@@ -32,17 +36,45 @@ each new market date for each customer, or select only the unique market dates p…
(without purchase details) and number those visits.
HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK(). */

-- DENSE_RANK() assigns the same visit number to every purchase made on the
-- same market_date, so repeat purchases within one visit are not double-counted.
SELECT
customer_id,
market_date,
DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date ASC) AS number_of_visits
FROM customer_purchases;
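
/* A sketch of the other approach the hint mentions: collapse to the unique
   (customer_id, market_date) pairs first, then let ROW_NUMBER() number each
   customer's distinct visits. Assumes customer_purchases can hold several
   rows per visit. */
SELECT
	customer_id,
	market_date,
	ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY market_date ASC) AS visit_number
FROM (
	SELECT DISTINCT customer_id, market_date
	FROM customer_purchases
) AS visits;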



/* 2. Reverse the numbering of the query from the previous part so each customer’s most recent visit is labeled 1,
then write another query that uses this one as a subquery (or temp table) and filters the results to
only the customer’s most recent visit. */
SELECT
customer_id,
market_date,
DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date DESC) AS number_of_visits2
FROM customer_purchases;

SELECT x.*
FROM (
SELECT
customer_id,
market_date,
DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date DESC) AS number_of_visits2
FROM customer_purchases
) AS x
WHERE x.number_of_visits2 = 1;



/* 3. Using a COUNT() window function, include a value along with each row of the
customer_purchases table that indicates how many different times that customer has purchased that product_id. */

SELECT
customer_id,
product_id,
vendor_id,
COUNT(*) OVER (PARTITION BY customer_id, product_id) AS number_of_purchases
FROM customer_purchases;



-- String manipulations
@@ -57,10 +89,28 @@ Remove any trailing or leading whitespaces. Don't just use a case statement for…

Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */

SELECT
product_id,
product_name,
CASE
WHEN INSTR(product_name, '-') > 0
THEN TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
ELSE
NULL
END AS description
FROM product;



/* 2. Filter the query to show any product_size value that contains a number, using REGEXP. */

SELECT
product_id,
product_name,
product_size
FROM product
WHERE product_size REGEXP '\d';
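
/* Note: stock SQLite ships no regexp() function, so REGEXP only works when an
   extension or user-defined function supplies one. If it is unavailable, the
   built-in GLOB operator can express the same digit check: */
SELECT
	product_id,
	product_name,
	product_size
FROM product
WHERE product_size GLOB '*[0-9]*';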



-- UNION
@@ -73,6 +123,40 @@ HINT: There are possibly a few ways to do this query, but if you're struggling…
3) Query the second temp table twice, once for the best day, once for the worst day,
with a UNION binding them. */

DROP TABLE IF EXISTS temp.total_sales1;
CREATE TEMP TABLE temp.total_sales1 AS
SELECT
market_date,
SUM(quantity * cost_to_customer_per_qty) AS total_cost
FROM customer_purchases
GROUP BY market_date;
SELECT * FROM temp.total_sales1;

DROP TABLE IF EXISTS temp.total_sales2;
CREATE TEMP TABLE temp.total_sales2 AS
SELECT
market_date,
total_cost,
DENSE_RANK() OVER (ORDER BY total_cost ASC) AS min_total_sales,
DENSE_RANK() OVER (ORDER BY total_cost DESC) AS max_total_sales
FROM temp.total_sales1;
SELECT * FROM temp.total_sales2;

SELECT
market_date,
total_cost,
min_total_sales,
'min price' AS type
FROM temp.total_sales2
WHERE min_total_sales = 1
UNION
SELECT
market_date,
total_cost,
max_total_sales,
'max price' AS type
FROM temp.total_sales2
WHERE max_total_sales = 1;
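
/* A CTE-based sketch of the same answer, without temp tables -- one of the
   "few ways" the hint alludes to. It should return the same two rows as the
   UNION above. */
WITH total_sales AS (
	SELECT
		market_date,
		SUM(quantity * cost_to_customer_per_qty) AS total_cost
	FROM customer_purchases
	GROUP BY market_date
),
ranked AS (
	SELECT
		market_date,
		total_cost,
		DENSE_RANK() OVER (ORDER BY total_cost ASC) AS min_total_sales,
		DENSE_RANK() OVER (ORDER BY total_cost DESC) AS max_total_sales
	FROM total_sales
)
SELECT market_date, total_cost, 'min price' AS type
FROM ranked
WHERE min_total_sales = 1
UNION
SELECT market_date, total_cost, 'max price' AS type
FROM ranked
WHERE max_total_sales = 1;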



@@ -89,6 +173,25 @@ Think a bit about the row counts: how many distinct vendors, product names are there (x)?
How many customers are there (y)?
Before your final group by you should have the product of those two queries (x*y). */

SELECT
v.vendor_name,
p.product_name,
SUM(crossed.original_price * 5) AS max_sales
FROM (
SELECT DISTINCT
vendor_id,
product_id,
original_price,
customer_id
FROM vendor_inventory
CROSS JOIN customer
) AS crossed
LEFT JOIN product AS p
ON crossed.product_id = p.product_id
LEFT JOIN vendor AS v
ON crossed.vendor_id = v.vendor_id
GROUP BY v.vendor_id, p.product_id;
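
/* Sanity check on the row counts the prompt describes: x distinct
   vendor/product/price rows times y customers should equal the size of the
   cross join before the final GROUP BY. */
SELECT
	(SELECT COUNT(*)
	 FROM (SELECT DISTINCT vendor_id, product_id, original_price
	       FROM vendor_inventory)) AS x,
	(SELECT COUNT(*) FROM customer) AS y,
	(SELECT COUNT(*)
	 FROM (SELECT DISTINCT vendor_id, product_id, original_price, customer_id
	       FROM vendor_inventory CROSS JOIN customer)) AS x_times_y;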



-- INSERT
Expand All @@ -97,18 +200,45 @@ This table will contain only products where the `product_qty_type = 'unit'`.
It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`.
Name the timestamp column `snapshot_timestamp`. */

--DROP TABLE IF EXISTS product_units;
CREATE TABLE product_units AS
SELECT
product_id,
product_name,
product_size,
product_category_id,
product_qty_type,
CURRENT_TIMESTAMP AS snapshot_timestamp
FROM product
WHERE product_qty_type = 'unit';
SELECT * FROM product_units;



/*2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp).
This can be any product you desire (e.g. add another record for Apple Pie). */

INSERT INTO product_units (product_id, product_name, product_size, product_category_id, product_qty_type, snapshot_timestamp)
VALUES (8, 'Cherry Pie', '8"', 3, 'unit', CURRENT_TIMESTAMP);
SELECT * FROM product_units;



-- DELETE
/* 1. Delete the older record for the whatever product you added.

HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/

-- (To preview the rows first, run the commented-out SELECT in place of the DELETE line.)
DELETE FROM product_units
--SELECT * FROM product_units
WHERE product_name = 'Cherry Pie'
AND snapshot_timestamp < (
SELECT max(snapshot_timestamp)
FROM product_units
WHERE product_name = 'Cherry Pie'
);
SELECT * FROM product_units;



-- UPDATE
@@ -128,6 +258,19 @@ Finally, make sure you have a WHERE statement to update the right row,
you'll need to use product_units.product_id to refer to the correct row within the product_units table.
When you have all of these components, you can run the update statement. */

ALTER TABLE product_units
ADD current_quantity INT;


-- COALESCE wraps the whole subquery so products with no vendor_inventory record
-- get 0 instead of NULL, and the innermost WHERE uses an unqualified product_id
-- so the MAX is taken over that product's own rows, not the entire table.
UPDATE product_units AS pu
SET current_quantity = COALESCE(
	(
		SELECT vi.quantity
		FROM vendor_inventory AS vi
		WHERE vi.product_id = pu.product_id
		AND vi.market_date = (
			SELECT max(market_date)
			FROM vendor_inventory
			WHERE product_id = pu.product_id
		)
	), 0
);
SELECT * FROM product_units;