Week 3 Ally Szema
In a machine learning image classification dataset, images and labels are paired in order to classify, or categorize, the images. An image by itself may already carry its own connotative meaning, even without a label. In René Magritte's painting of a pipe, the caption reads "Ceci n'est pas une pipe," or "this is not a pipe": it is merely an image of a pipe. Looking at the painting alone, a viewer probably would not question the pipe at all, but once Magritte's label is attached, one begins to question what the label means in relation to the image. When an image is given a label, another layer of meaning is brought to it.

Applied to machine learning, the relationship between images and words is not always accurate. After reading this week's article, and based on my previous experience with machine learning technology, the labels attached to recognized images often have little to do with the images themselves, or they may carry a negative connotation or even be a slur. But who has the power to label these images? Based on the reading, I think anyone with access to the internet has the power to label images in a dataset; I created the labels for this week's project in my own dataset. Media companies and journalists have the power to link images to words and publish them online. However, I think the tougher question is not who has the power to label images, but what social responsibility people have when creating a dataset, or even when putting a label on a single image.

Labeling can be a useful tool for categorization, as in Google Images. However, a dataset can also train a model to attach negative connotations to images, which can be incredibly harmful, especially in classifications of gender, race, sexual orientation, and so on. A few years ago I learned that searching "three black teenagers" on Google Images returned a wall of mugshots, while searching "three white teenagers" returned stock photos of smiling teens. The social responsibility that comes with labeling images in a dataset is to be mindful of the connotations your labels give your images, and to fix issues like this one in real-world datasets.
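For this week's project I trained a sound classification model with Teachable Machine and loaded it into a p5.js sketch using ml5.js. The sketch below continuously listens to the microphone and draws whichever of my labels the model recognizes onto the canvas: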
let classifier;

// Label
let label = 'start talkin';

// Teachable Machine model URL:
let soundModel = 'https://teachablemachine.withgoogle.com/models/FHUIdoyxI/';

function preload() {
  // Load the model
  classifier = ml5.soundClassifier(soundModel + 'model.json');
}

function setup() {
  createCanvas(500, 500);
  // Start classifying
  // The sound model will continuously listen to the microphone
  classifier.classify(gotResult);
}

function draw() {
  background(255, 0, 0);
  // Draw the label in the canvas
  textSize(50);
  stroke(233);
  fill(255, 200);
  textAlign(CENTER, CENTER);
  text(label, width / 2, height / 2);
}

// The model recognizing a sound will trigger this event
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // The results are in an array ordered by confidence.
  // console.log(results[0]);
  label = results[0].label;
}
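One possible refinement, not part of the original sketch: since the results array is ordered by confidence, the callback could ignore weak predictions instead of always taking the top label. This sketch swaps in a variant of gotResult that does so; the 0.75 threshold is an assumed value chosen for illustration.

// Variant of gotResult that only updates the label when the model
// is reasonably confident. The 0.75 threshold is an assumption,
// not part of the original sketch.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results[0] is the highest-confidence prediction
  if (results[0].confidence > 0.75) {
    label = results[0].label;
  }
}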