This Processing applet was made to explore how machine learning can be used to cluster chairs into categories. The applet clusters images from a database I made of chairs designed by Verner Panton, Charles and Ray Eames, Le Corbusier, Pierre Jeanneret, Charlotte Perriand, Harry Bertoia, and Eero Saarinen. When the applet is run, it clusters all of the chairs into categories.
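The write-up doesn't name the clustering algorithm, so as an illustration only, here is a minimal k-means-style step over image feature vectors (the data and dimensions are hypothetical):

```java
// A sketch of one round of k-means (Lloyd's algorithm) over feature vectors,
// as one way images of chairs could be grouped into categories.
class ChairClusters {
    static double dist2(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    // Index of the centroid closest to point p.
    static int nearest(double[] p, double[][] centroids) {
        int best = 0;
        for (int c = 1; c < centroids.length; c++)
            if (dist2(p, centroids[c]) < dist2(p, centroids[best])) best = c;
        return best;
    }

    // One iteration: assign every point to its nearest centroid,
    // then move each centroid to the mean of its assigned points.
    static double[][] step(double[][] points, double[][] centroids) {
        int k = centroids.length, dim = points[0].length;
        double[][] sums = new double[k][dim];
        int[] counts = new int[k];
        for (double[] p : points) {
            int c = nearest(p, centroids);
            counts[c]++;
            for (int i = 0; i < dim; i++) sums[c][i] += p[i];
        }
        for (int c = 0; c < k; c++)
            if (counts[c] > 0)
                for (int i = 0; i < dim; i++) sums[c][i] /= counts[c];
            else sums[c] = centroids[c]; // keep empty clusters where they are
        return sums;
    }
}
```

Repeating `step` until the centroids stop moving yields the final categories.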
This Processing applet was made to explore how machine learning can be used to classify chairs. The applet classifies images from a database I made of chairs designed by Verner Panton, Charles and Ray Eames, Le Corbusier, Pierre Jeanneret, Charlotte Perriand, Harry Bertoia, and Eero Saarinen. When the applet is run, it chooses nine chair images and tries to classify them by their designers. The number at the top left of each image represents the actual designer of the chair, and the number at the top right represents the applet’s guess for the designer. The applet learns from a database of chairs and their designers before choosing the nine to guess; a guess is displayed in red if it is incorrect and in green if it is correct. The designers that the numbers represent are as follows:
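The text doesn't specify the classifier, so as a hedged sketch, a nearest-class-mean predictor with the red/green correctness display could look like this (all names and data are illustrative):

```java
// Sketch of a nearest-class-mean classifier: guess the designer whose
// average training features are closest to the test image's features.
class DesignerGuess {
    static double dist2(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return d;
    }

    // Predict the designer index whose training mean is closest.
    static int predict(double[] features, double[][] designerMeans) {
        int best = 0;
        for (int d = 1; d < designerMeans.length; d++)
            if (dist2(features, designerMeans[d]) < dist2(features, designerMeans[best]))
                best = d;
        return best;
    }

    // The applet draws the guess in green when correct, red otherwise.
    static String guessColor(int actual, int guess) {
        return actual == guess ? "green" : "red";
    }
}
```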
This Processing applet was made to explore how machine learning can be used to sort logos by similarity. The applet looks at a collection of logos and sorts them based on how similar they are to a user-chosen logo from the collection. The logos all begin at full brightness; when a logo is chosen, the other logos fade toward black, sorted so that the most similar logo is the brightest and the least similar is the darkest.
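As an illustrative sketch (the actual similarity measure isn't specified here), sorting by distance to the chosen logo's feature vector and fading brightness by rank might look like this:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: order logos by similarity to a chosen one and fade by rank.
class LogoSort {
    static double dist(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(d);
    }

    // Logo indices ordered from most to least similar to the chosen logo.
    static Integer[] order(double[][] logos, int chosen) {
        Integer[] idx = new Integer[logos.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> dist(logos[i], logos[chosen])));
        return idx;
    }

    // Fade by rank: the most similar logo stays at full brightness (255),
    // the least similar fades all the way to black (0).
    static int brightness(int rank, int total) {
        return 255 - 255 * rank / (total - 1);
    }
}
```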
This interactive driving applet consists of a Processing applet that simulates a road and a physical interface that controls it. The physical interface was created by rewiring an optical mouse to switches and mounting them in a foam core enclosure that I made to resemble a car interior.
This interactive applet features colored tiles that can be “played” to a song of the user’s choice, in this case, Disney’s Main Street Electrical Parade.
This project is about using interesting inputs to control an interactive application. I coded a simple moving kaleidoscope that can respond to input:
– V to reshuffle the shapes,
– B to cycle through the value space for the shapes’ color,
– N to make the shapes spin,
– , (comma) to change the direction of the animation (zoom in vs. zoom out).
Then, in order to control the kaleidoscope, I built a stick reader that has four connectors placed on the four edges, as in the following picture:
Then I used sticks with electrical contacts on only two consecutive edges at a time; as I slide a stick through the reader, it activates different keypresses, so by designing different sticks the whole system acts like a barrel organ.
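The exact wiring isn't described, but the decoding idea can be sketched: each pair of consecutive edges in contact selects one of the kaleidoscope's keys (the pair-to-key assignment below is an assumption for illustration):

```java
// Sketch: decode which two consecutive edges a stick touches into a keypress.
class StickReader {
    // Keys the kaleidoscope already listens for (V, B, N, comma).
    static final char[] KEYS = {'v', 'b', 'n', ','};

    // edges[e] is true when connector e (0..3) makes contact. A stick touches
    // exactly two consecutive edges; which pair it is selects the key to send.
    // (This particular pair-to-key mapping is assumed, not taken from the build.)
    static char keyFor(boolean[] edges) {
        for (int e = 0; e < 4; e++)
            if (edges[e] && edges[(e + 1) % 4]) return KEYS[e];
        return 0; // no recognizable contact pattern
    }
}
```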
In this exercise, I used a clustering algorithm to categorize pictures of women’s faces in different emotional states as described by Ekman (anger, disgust, fear, joy, sadness, surprise, and contempt). My idea was to see whether a machine can actually pick up the differences between these emotions the way a human can. As it turned out, the resulting clusters grouped images by person rather than by expression, meaning it didn’t work that well. One way to improve this could be to subtract from each picture the mean of all pictures of that person.
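The per-person mean subtraction suggested above can be sketched as follows (treating each image as a flat feature vector; the data is illustrative):

```java
// Sketch: remove identity from a person's images by subtracting that
// person's mean image, so clustering sees expression rather than identity.
class FaceNormalize {
    // images: all pictures of one person, each flattened to a feature vector.
    static double[][] subtractPersonMean(double[][] images) {
        int n = images.length, dim = images[0].length;
        double[] mean = new double[dim];
        for (double[] img : images)
            for (int i = 0; i < dim; i++) mean[i] += img[i] / n;
        double[][] out = new double[n][dim];
        for (int j = 0; j < n; j++)
            for (int i = 0; i < dim; i++) out[j][i] = images[j][i] - mean[i];
        return out;
    }
}
```

After this normalization, what remains in each vector is how that picture deviates from the person's average face, which is closer to the expression itself.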
The idea was to study image classification through the notion of social norms and normality: for instance, what can be considered ugly? And what are the implications of subconscious physiognomy? This program is meant to show the limits of this approach by having the computer decide for us who is good, bad, or ugly.
My idea was to first train the classifier with images from various sources: Supreme Court Justices as Good people, the FBI’s Most Wanted as Bad people, and face pictures of people considered Ugly (from a website I found on the internet; their work is morally questionable, so I won’t display these images). Then I used a database of face images as testing images: for each one, the classifier outputs a triplet of probabilities of belonging to each of these categories. Each image is then mapped into a triangle.
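The triangle mapping is a standard barycentric interpolation: the probability triplet weights the three corner positions. A minimal sketch (the corner coordinates here are arbitrary screen positions, not the applet's actual ones):

```java
// Sketch: place a face in the triangle by weighting the three corners
// with its (good, bad, ugly) probability triplet.
class GoodBadUgly {
    // Triangle corner positions for the Good, Bad, and Ugly categories
    // (illustrative coordinates).
    static final double[][] CORNERS = {{0, 0}, {200, 0}, {100, 170}};

    // Barycentric mapping: probabilities sum to 1, so the point always
    // lands inside the triangle.
    static double[] place(double pGood, double pBad, double pUgly) {
        return new double[]{
            pGood * CORNERS[0][0] + pBad * CORNERS[1][0] + pUgly * CORNERS[2][0],
            pGood * CORNERS[0][1] + pBad * CORNERS[1][1] + pUgly * CORNERS[2][1]
        };
    }
}
```

A probability of 1 for one category puts the image exactly on that corner; mixed probabilities land it proportionally between them.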
The result is not that great, mainly because the features were not that relevant (raw pixels of the images) and the three categories are not well balanced (a two-dimensional grid would have been more appropriate, with beautiful/ugly and good/bad as axes), but that is not so important, since the purpose here is to show the limits of classification.
In this exercise, we study how similarity can be used to display interesting information.
I took 17 ebooks from the Internet (from the open-source Project Gutenberg: http://www.gutenberg.org), selected by download popularity at the time (filtering for books that were at least somewhat popular, in English, and of usable size).
Then I ran the TF-IDF algorithm on that corpus, kept the top 50 words per document, and rendered each book as a chromosome, with each word as a gene. For each word, the mapping is the following:
– the IDF factor, since it is the same across the entire corpus, is rendered as the size (height) of the gene: if the word is important, then its presence has a high impact on the overall properties of the book.
– the TF sets the intensity of the gene: if the word/gene is more present in this book, then it expresses itself more than other genes/words, and appears whiter.
– the size of a book depends on its number of words.
In addition, we can cycle through the books using the W and X keys: the current book is then highlighted in green, and all of its top words that appear at least once in other books are displayed in green across the whole corpus, with the count per book. It is then easy to see which books are similar, and how they are similar.
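The TF-IDF scores behind the gene heights and intensities can be sketched as follows (documents here are word arrays; a real run would tokenize the ebook text first):

```java
// Sketch of the TF-IDF computation used to pick and weight each book's genes.
class TfIdf {
    // Term frequency: how often the word occurs in this document,
    // normalized by document length (this sets the gene's intensity).
    static double tf(String word, String[] doc) {
        int count = 0;
        for (String w : doc) if (w.equals(word)) count++;
        return (double) count / doc.length;
    }

    // Inverse document frequency: rarer words across the corpus score
    // higher (this is the same for every book, and sets the gene's height).
    static double idf(String word, String[][] corpus) {
        int docsWith = 0;
        for (String[] doc : corpus)
            for (String w : doc)
                if (w.equals(word)) { docsWith++; break; }
        return Math.log((double) corpus.length / docsWith);
    }

    static double tfidf(String word, String[] doc, String[][] corpus) {
        return tf(word, doc) * idf(word, corpus);
    }
}
```

Keeping the 50 words with the highest `tfidf` per book gives that book's genes; a word present in every book gets an IDF of zero and drops out.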
This exercise was about syncing an output with music. For the scope of this project, the Processing code doesn’t listen to the music; instead, the interactivity comes from playing with the keyboard and the mouse:
– pressing W five times in succession, in time with the music, registers the tempo, and the program then displays psychedelic bubbles on each beat. Pressing Q unregisters the tempo.
– C throws random numbers that move like objects on a pond.
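The tap-tempo registration can be sketched as follows (averaging the gaps between the five W presses; the tolerance window is an assumed parameter):

```java
// Sketch: estimate the beat period from five W presses, then decide
// whether the current frame falls on a beat (when bubbles should appear).
class TapTempo {
    // Average gap (ms) between consecutive tap timestamps.
    static double beatPeriod(long[] tapMillis) {
        double sum = 0;
        for (int i = 1; i < tapMillis.length; i++)
            sum += tapMillis[i] - tapMillis[i - 1];
        return sum / (tapMillis.length - 1);
    }

    // True when `now` is within tolMs of a beat, counting from the first tap.
    static boolean onBeat(long now, long firstTap, double period, double tolMs) {
        double phase = (now - firstTap) % period;
        return phase < tolMs || period - phase < tolMs;
    }
}
```

In a Processing sketch, `draw()` would call something like `onBeat(millis(), …)` each frame and spawn a bubble when it returns true.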
Each created object stays on stage and moves toward the audience (each object is actually a particle evolving in a 3D space), but the user can change the display in the following ways:
– V launches the Time Shift, where all the particles are moved back in time (i.e., into the background) very quickly. It is meant to match big moments in the music.
– P and M change the field of vision of the pseudo-3D engine.
– While the right mouse button is held, the image is not redrawn on each call to draw, resulting in a perceived accelerated motion.
– Moving the mouse changes the relative position of the camera in the 3D space.
The Particle3D class also provides an abstraction for developing all kinds of 3D objects that can evolve in this space, but I haven’t explored this possibility further.
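A minimal sketch of what such a particle abstraction could look like, assuming a simple pinhole-style projection (the actual class's fields and methods aren't documented here, so this is illustrative):

```java
// Illustrative sketch of a Particle3D-like abstraction: a point that drifts
// toward the audience and projects onto the 2D screen.
class Particle3D {
    double x, y, z; // z grows as the particle moves toward the audience

    Particle3D(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }

    // Per-frame motion toward the viewer.
    void update(double speed) { z += speed; }

    // The V "Time Shift": push the particle far back into the background at once.
    void timeShift(double depth) { z -= depth; }

    // Pseudo-3D perspective projection onto the screen plane;
    // fov plays the role of the P/M field-of-vision control.
    double[] project(double fov, double cameraZ) {
        double scale = fov / (cameraZ - z); // cameraZ - z = distance to camera
        return new double[]{x * scale, y * scale};
    }
}
```

Closer particles (larger `z`) project with a larger scale, which is what makes them appear to rush toward the audience.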
The music I used was “Anyway You Choose to Give It” by The Black Ghosts, remixed by Boy-8-Bit.