
Monthly Archives: May 2009

The Curator

Polo Chau

François Chu

Sue Ann Hong

Patrick Gage Kelley

Art That Learns 2009.

Introduction

This installation explores the act of acceptance and rejection of the artist, and the mutual influence between artists and institutions such as museums and galleries.

Here, a computer automatically decides whether to add simple drawings submitted by the audience to its collection. While many criteria shape the judgment and reception of art in our society, “The Curator” simulates just one: originality.

It uses a machine learning algorithm based on anomaly detection to decide which pieces to accept, rejecting those similar to ones it has already seen. (To simulate the concept of revival, older pieces are eventually forgotten by the algorithm.)

Hence the computer adapts to the artistic ideas offered by the audience, while, we hope, its decisions shape the submissions as the audience tries to please the machine and avoid brutal rejection.

The Process

Our design was iterative from the beginning. The idea, a cousin of an early concept presented to the class as “Lack,” involved children by letting them create their own artworks, which the machine would then appraise.

Early on we decided on two end states: acceptance, and rejection, in which the work would be destroyed. The physical incarnations changed, however, from clear acrylic boxes to the idea of the shreds simply tumbling to the floor.

The best example of our attempt to build simplicity into the design comes from the submission module. We knew there needed to be a way for children to enter their work into the machine, but the specifics of the mechanism changed frequently: from a fed document strip scanner, to a vertical slot, a horizontal slot, and a deconstructed flatbed scanner, before we settled on a single slot cut into a piece of acrylic.

Our physical construction involved laser cutting, wood work, bending acrylic, priming, painting, the deconstruction of a mouse, the deconstruction of a shredder, and finally electrical work.

The final circuit was simple: an Arduino powered two servos, one for acceptance and one for rejection, plus a relay, driven through a transistor and powered by a 9V battery, that controlled the timing of the shredder (whose automatic mechanism and safety switch were removed).

The shredder itself was deconstructed and rebuilt inside of a clear acrylic box.

For the learning algorithm we use simple anomaly detection. We keep the last n drawings (we use n = 30) in the form of an orthonormal basis: each time we receive a drawing, we compare its feature vector of pixels, v, to the existing basis vectors (except the oldest, which is being replaced) and store the component of v orthogonal to them. Before creating the new basis vector, we test the drawing for acceptance by projecting it onto the space spanned by the basis and computing the reconstruction error. If the reconstruction error is greater than our acceptance threshold, the drawing is accepted. Intuitively, if the current drawing can be described well by the last 30 drawings (and is hence not “original”), we do not accept it. Note that since the feature space (300×200 = 60,000 pixels) is much larger than the basis, the basis cannot span the full space of drawings.
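The installation itself ran on different code, but as an illustrative sketch, the basis-and-reconstruction-error scheme described above might look like this (the class name, the normalization step, and the parameter defaults are assumptions, not the original implementation):

```python
import numpy as np

class Curator:
    """Sketch of the anomaly detector: a drawing is accepted only if the
    recently seen drawings cannot reconstruct it well."""

    def __init__(self, n=30, threshold=0.5):
        self.n = n                  # number of drawings remembered
        self.threshold = threshold  # reconstruction-error cutoff
        self.basis = []             # orthonormal basis vectors, oldest first

    def judge(self, v):
        """Return (accepted, reconstruction_error) for feature vector v."""
        v = np.asarray(v, dtype=float)
        norm = np.linalg.norm(v)
        if norm > 0:
            v = v / norm                      # normalize the pixel vector
        residual = v.copy()
        for b in self.basis:                  # project onto the current basis
            residual = residual - np.dot(residual, b) * b
        error = float(np.linalg.norm(residual))  # what the basis cannot explain
        accepted = error > self.threshold        # original enough => accepted
        if len(self.basis) >= self.n:            # "revival": forget the oldest
            self.basis.pop(0)
        if error > 1e-9:                         # store the orthogonal component
            self.basis.append(residual / error)
        return accepted, error
```

Resubmitting a drawing identical to one already in the basis yields a reconstruction error near zero, so it is rejected; a drawing orthogonal to everything seen so far scores the maximum error.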

We also learn the acceptance threshold over time to maintain a specified acceptance rate (we used 40%): after deciding the fate of each drawing, we set the threshold to the value that would have accepted 40% of the last 30 drawings.
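A minimal sketch of that threshold update (assuming the reconstruction errors of the last 30 drawings are kept in a list; the function name is illustrative):

```python
def update_threshold(recent_errors, rate=0.4):
    """Choose the threshold that would have accepted `rate` of the recent
    drawings. A drawing is accepted when its reconstruction error EXCEEDS
    the threshold, so the cutoff is the largest error among the drawings
    that would have been rejected."""
    errors = sorted(recent_errors)
    k = round(len(errors) * (1 - rate))  # number of drawings to reject
    return errors[k - 1] if k > 0 else 0.0
```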

The Installation.

The installation was reasonably straightforward. Late in the process we found out about the light-up wall that would be the backdrop of our work; we added frames to compensate for the space, which increased the overall impact of the artwork, and added a second stage of “acceptance” (albeit a human stage, not one controlled by learning).

We made a few changes during the process, including changing the timing of the button (adding a delay), adjusting the acceptance threshold, and fixing one of the servos, which was drawing too much power from the Arduino.

Observations.

One success is that we were able to capture children’s attention and keep it, possibly for too long; many parents had to take their children away from the exhibit. Children were pleased with both acceptance and rejection: the clear box was a sign of winning, but the shredder was fun, loud, and visceral. Since everything was a reward, they wanted to keep drawing, sometimes to the point of mass-producing scribbles… artwork.

Conclusion

We are going to call this a success: children liked it, no one cried, no one got hurt, it was allowed to stay in the museum, the learning worked (well enough), and it photographed well. The biggest failure is that it likely did not truly accomplish its intended reflection on rejection in society, but with six-year-olds, this may be a near impossible task.

Photos & Videos


Sole Identity by Jesse Chorng


My piece attempts to answer the question “What do our shoes say about us?” by using machine learning to analyze and classify the bottom of users’ shoes.

Sole Identity captures one’s outsole and tries to determine the purpose the shoe was designed for. Much like a thumbprint, one’s shoe print has unique characteristics that can identify an individual. In the shoe design process, special attention is paid to the sole because of its importance in ensuring durability and comfort. Once a person’s shoe is identified as a casual, skate, or athletic type, an image of the shoe is placed into an environment where it is animated and interacts with other pairs according to characteristics specific to its classification.

The goal of the project is to demonstrate to users how their shoes were meant to behave. By giving life to one’s footwear, a person can visually understand the purpose of their shoes and decide whether their shoe choices do indeed reflect who they are.

INPUT: the bottom of people’s shoes
BLACK BOX: classification of soles as Casual, Athletic, or Skateboarding
OUTPUT: animated environment of captured soles
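The project’s actual black box isn’t reproduced here; as an illustrative sketch only, one very simple form such a three-way sole classifier could take is a nearest-centroid model over pixel features (the names, feature choice, and data below are assumptions):

```python
import numpy as np

CLASSES = ["casual", "athletic", "skate"]  # the three sole categories

def train_centroids(examples):
    """examples maps a class name to a list of feature vectors
    (e.g. flattened grayscale sole images); returns one mean vector per class."""
    return {label: np.mean(np.asarray(vecs, dtype=float), axis=0)
            for label, vecs in examples.items()}

def classify(centroids, v):
    """Assign v to the class whose centroid is nearest in Euclidean distance."""
    v = np.asarray(v, dtype=float)
    return min(centroids, key=lambda label: np.linalg.norm(v - centroids[label]))
```

A new sole image is flattened into a vector and assigned to whichever class’s average sole it most resembles.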

Download Code
Download Powerpoint


Above are some images of “Brighten Your Day!” at the Children’s Museum of Pittsburgh.

==============================================================

Description:

Have you ever wondered what it would be like if household products could see you and react to you? By using a machine learning library that detects people, their faces, and other qualities, this lamp is able to turn on and look at you when you look at it. However, the lamp will only respond to people who believe that it can come to life. Are you a believer?

==============================================================

Materials:

The mechanical components and wires were secured in a support structure made of MDF with a vacuum-formed plastic cover that I made for the installation.

Processing 1.0.1

OpenCV Library

Webcam (Apple iSight from a MacBook Pro for the installation)

Breadboard (now Perfboard)

Servo

5 Volt Relay

Diode

22-Gauge Solid Wire

12 Volt Power Adapter

Desk Lamp

Extension Cord

USB Cable

Solder

==============================================================

Processing Code:

import processing.serial.*;
import hypermedia.video.*;

OpenCV opencv;
int contrast_value   = 0;
int brightness_value = 0;
float A = 0;
Serial myPort;

void setup() {
  size(1920, 1200);
  background(0);
  opencv = new OpenCV(this);
  opencv.capture(180, 135);
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // frontal-face Haar cascade
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);  // serial link to the Arduino
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  A += 15;
  opencv.read();
  opencv.convert(GRAY);
  opencv.contrast(contrast_value);
  opencv.brightness(brightness_value);
  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);
  image(opencv.image(), 0, 0);
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    int faceX = (int) faces[i].x;
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    println(faceX);
    if (faces[i].y > 25) {
      // Send the face's x position to the Arduino, clamped to the servo's range.
      if (faceX >= 60 && faceX <= 100) {
        myPort.write(faceX);
      }
      else if (faceX < 50) {
        myPort.write(50);
      }
      else if (faceX > 100) {
        myPort.write(100);
      }
    }
  }
  background(0);
  noStroke();
  smooth();
  fill(random(50, 255));
  ellipse(width/2, height/2, 15, 15);  // flickering dot at the center
  noFill();
  noSmooth();
}

void mouseDragged() {
  contrast_value   = (int) map(mouseX, 0, width, -128, 128);
  brightness_value = (int) map(mouseY, 0, width, -128, 128);
}


==============================================================

Arduino Code:

#include <Servo.h>

int incomingByte = 0;
Servo myservo;
int pos = 0;
int ledPin = 11;
int value = LOW;
long previousMillis = 0;
long interval = 1000;

void setup()
{
  pinMode(ledPin, OUTPUT);
  myservo.attach(9);
  Serial.begin(9600);
}

void loop()
{
  if (Serial.available() > 0) {
    incomingByte = Serial.read();  // face x position sent from Processing
    if (incomingByte > 50 && incomingByte < 100) {
      value = LOW;
      digitalWrite(ledPin, value);
    }
    else {
      value = HIGH;
      digitalWrite(ledPin, value);
    }
  }
  Serial.println(incomingByte);
  myservo.write(incomingByte * .75);  // turn the lamp toward the face
}


Above is an image of my Exercise 5.

==============================================================

Description:

This Processing applet was made to explore how machine learning can be used to cluster chairs into categories. The applet clusters images from a database I made of chairs designed by Verner Panton, Charles and Ray Eames, Le Corbusier, Pierre Jeanneret, Charlotte Perriand, Harry Bertoia, and Eero Saarinen. When the applet is run, it groups all of the chairs into clusters.

==============================================================

Code:

import wekaizing.*;
import java.io.File;
import java.lang.Integer;

// Holds one chair image and its pixel feature vector.
class digitImage {
  int number;
  PImage digit_image;
  int[] pixeldata;
  public digitImage(int image_size) {
    pixeldata = new int[image_size * image_size];
  }
}

WekaData digits_data;
WekaClusterer clusterer;
digitImage[] digits;
int NUM_DIGITS = 100;
int TRAIN_IMAGE_SIZE = 20;
int NUM_CLUSTERS = 10;
int[] clusters;
PFont courier_font;

void setup() {
  background(0);
  size(1800, 1200);
  courier_font = loadFont("CourierNew-12.vlw");
  textFont(courier_font, 15);
  digits_data = new WekaData();
  // One attribute per pixel of the resized image.
  for (int i = 0; i < TRAIN_IMAGE_SIZE * TRAIN_IMAGE_SIZE; i++) {
    digits_data.AddAttribute(Integer.toString(i));
  }
  loadDigits("digits");
  clusterer = new WekaClusterer(WekaClusterer.EM);  // expectation-maximization clustering
  clusters = clusterer.clusterData(digits_data, NUM_CLUSTERS);
  print("Training done");
  drawResults();
}

void loadDigits(String digitfolder) {
  File digitfiles = new File(sketchPath, "data/" + digitfolder);
  String[] files = digitfiles.list(filter);
  if (files.length < NUM_DIGITS)
    NUM_DIGITS = files.length;
  digits = new digitImage[NUM_DIGITS];
  String numbers[] = loadStrings(digitfolder + "/digits.txt");
  for (int i = 0; i < NUM_DIGITS; i++) {
    println("Loading image " + files[i]);
    digits[i] = new digitImage(TRAIN_IMAGE_SIZE);
    digits[i].digit_image = loadImage("data/" + digitfolder + "/" + files[i]);
    digits[i].number = Integer.valueOf(numbers[i]);
    // Use a downsized copy of the image as the feature vector.
    PImage resizedImg = loadImage("data/" + digitfolder + "/" + files[i]);
    resizedImg.resize(TRAIN_IMAGE_SIZE, TRAIN_IMAGE_SIZE);
    resizedImg.loadPixels();
    for (int j = 0; j < TRAIN_IMAGE_SIZE * TRAIN_IMAGE_SIZE; j++) {
      digits[i].pixeldata[j] = resizedImg.pixels[j];
    }
    digits_data.InsertData(digits[i].pixeldata);
  }
}

void drawResults() {
  // Draw each cluster as a run of images, separated by green lines.
  int imgx = 0, imgy = 0;
  for (int j = 0; j < NUM_CLUSTERS; j++) {
    for (int i = 0; i < digits.length; i++) {
      if (clusters[i] == j) {
        image(digits[i].digit_image, imgx, imgy);
        imgx += digits[0].digit_image.width;
        if (imgx > width - digits[0].digit_image.width) {
          imgx = 0;
          imgy += digits[0].digit_image.height;
        }
      }
    }
    imgx = 0;
    imgy += digits[0].digit_image.height * 1.25;
    stroke(0, 255, 0);
    line(0, imgy, width, imgy);
    imgy += digits[0].digit_image.height / 4;
  }
}

FilenameFilter filter = new FilenameFilter() {
  public boolean accept(File dir, String name) {
    String n = name.toLowerCase();
    return n.endsWith(".png") || n.endsWith(".jpg") || n.endsWith(".gif");
  }
};



Above is an image of my Exercise 4.

==============================================================

Description:

This Processing applet was made to explore how machine learning can be used to classify chairs. The applet classifies images from a database I made of chairs designed by Verner Panton, Charles and Ray Eames, Le Corbusier, Pierre Jeanneret, Charlotte Perriand, Harry Bertoia, and Eero Saarinen. When the applet is run, it chooses nine chair images and tries to classify them by designer. The number at the top left of each image is the chair's actual designer, and the number at the top right is the applet's guess. The applet learns from a database of chairs and their designers before choosing the nine to guess, and its guess is displayed in red if incorrect and in green if correct. The numbers represent the following designers:

1:  Bertoia

2:  Eames

3:  Panton

4:  Saarinen

5:  Le Corbusier, Jeanneret, and Perriand

==============================================================

Code:

import wekaizing.*;
import java.io.File;
import java.lang.Integer;

// Holds one chair image, its designer label, and its pixel feature vector.
class digitImage {
  int number;
  PImage digit;
  int[] pixeldata;
  public digitImage() {
    pixeldata = new int[101];  // 100 pixels + 1 class label
  }
}

WekaData digitsTrain;
WekaData digitsTest;
WekaClassifier classifier;
digitImage[] digits;
int[] results;
PFont HNL_font;

void setup() {
  background(0);
  size(680, 680);
  HNL_font = loadFont("HelveticaNeue-Light-100.vlw");
  textFont(HNL_font, 15);
  digitsTrain = new WekaData();
  digitsTest = new WekaData();
  for (int i = 0; i < 100; i++) {
    digitsTrain.AddAttribute(Integer.toString(i));
    digitsTest.AddAttribute(Integer.toString(i));
  }
  Object[] digitarray = new Object[] {0,1,2,3,4,5,6,7,8,9};
  digitsTrain.AddAttribute("digit", digitarray);
  digitsTest.AddAttribute("digit", digitarray);
  loadDigits("digits");
  digitsTrain.setClassIndex(100);   // the label is the last attribute
  digitsTest.setClassIndex(100);
  classifier = new WekaClassifier(WekaClassifier.LOGISTIC);  // logistic regression
  classifier.Build(digitsTrain);
  print("Training done");
  results = classifier.Classify(digitsTest);
  print("Classification done");
  drawResults();
}

void loadDigits(String digitfolder) {
  File digitfiles = new File(sketchPath, "data/" + digitfolder);
  String[] files = digitfiles.list(filter);
  digits = new digitImage[files.length];
  String numbers[] = loadStrings(digitfolder + "/digits.txt");
  for (int i = 0; i < files.length; i++) {
    println("Loading image " + files[i]);
    digits[i] = new digitImage();
    digits[i].digit = loadImage("data/" + digitfolder + "/" + files[i]);
    digits[i].number = Integer.valueOf(numbers[i]);
    // Use a 10x10 downsized copy of the image as the feature vector.
    PImage resizedImg = loadImage("data/" + digitfolder + "/" + files[i]);
    resizedImg.resize(10, 10);
    resizedImg.loadPixels();
    for (int j = 0; j < 100; j++) {
      digits[i].pixeldata[j] = resizedImg.pixels[j];
    }
    digits[i].pixeldata[100] = digits[i].number;
    if (i < 40) {
      digitsTest.InsertData(digits[i].pixeldata);   // first 40 images: test set
    } else {
      digitsTrain.InsertData(digits[i].pixeldata);  // the rest: training set
    }
  }
}

void drawResults() {
  float num_correct = 0.0, total = 0.0;
  int imgx, imgy;
  for (int i = 0; i < 12; i++) {
    imgx = (i % 3) * 220 + 20;
    imgy = (i / 3) * 220 + 20;
    image(digits[i].digit, imgx, imgy);
  }
  for (int i = 0; i < 9; i++) {
    imgx = (i % 3) * 220 + 25;
    imgy = (i / 3) * 220 + 35;
    fill(0);
    text(digits[i].number, imgx, imgy);   // actual designer (top left)
    if (digits[i].number == results[i]) {
      fill(0, 255, 0);   // green: correct guess
    } else {
      fill(255, 0, 0);   // red: incorrect guess
    }
    text(results[i], imgx + 180, imgy);   // classifier's guess (top right)
    total += 1.0;
    if (digits[i].number == results[i])
      num_correct += 1.0;
  }
  println("\n" + "Accuracy = " + num_correct / total * 100 + "%");
}

FilenameFilter filter = new FilenameFilter() {
  public boolean accept(File dir, String name) {
    String n = name.toLowerCase();
    return n.endsWith(".png") || n.endsWith(".jpg") || n.endsWith(".gif");
  }
};



Above is an image of my Exercise 3.

==============================================================

Description:

This Processing applet was made to explore how machine learning can be used to sort logos by similarity. The applet looks at a collection of logos and sorts them based on how similar they are to a user-chosen logo from the collection. The logos all begin at full brightness, and when a logo is chosen, the other logos fade to black until they are sorted with the most similar logo as the brightest and the least similar logo as the darkest.

==============================================================

Code:

import java.io.File;
import java.io.FilenameFilter;
import imagelib.*;
import similarity.*;

ArrayList histograms;
PImage[] images;
int picIndex;
float px = 0;
float py = 0;
float pz = 0;
float mx = 0;
float my = 0;
float circleX = 0;
float circleY = 0;
float circleDiameter = 0;
float colorVariable = 0;
float opacity;
float vOpacity;
float vBoxOpacity;
int colorStart;
int frame;
color c;
float boxOpacity = 17;

void setup() {
  picIndex = -1;
  size(1280, 720);
  File datafolder = new File(sketchPath, "data");
  String[] files = datafolder.list(filter);
  if (files == null || files.length < 1) {
    println("You must add images in jpg format to the data subdirectory of your sketch.");
    exit();
  }
  images = new PImage[files.length];
  int num_bins = 6;
  histograms = new ArrayList(files.length);
  double[] histogram;
  // Build a hue histogram for each logo; these are the similarity features.
  for (int i = 0; i < files.length; i++) {
    println(files[i]);
    images[i] = loadImage(files[i]);
    histogram = ImageParsing.histHue(this, images[i], num_bins);
    histograms.add(i, histogram);
  }
  resetImages();
  fill(0, 220);
  rect(0, 0, 1280, 720);
  noFill();
}

void resetImages() {
  // Lay the logos out in a 4-wide grid at full brightness.
  int picy = 0;
  int picx = 0;
  background(255);
  for (int i = 0; i < images.length; i++) {
    if (picx >= width) {
      picx = 0;
      picy += 240;
    }
    image(images[i], picx, picy, 320, 240);
    picx += 320;
  }
}

void draw() {
  smooth();
  noStroke();
  float A = 0.90;
  float B = 1.0 - A;
  mx = A * mx + B * mouseX;
  my = A * my + B * mouseY;
  circleDiameter = 40;
  vOpacity = -10;
  opacity = vOpacity + opacity;
  if (opacity < 25) {
    opacity = 25;
  }
  if (picIndex > -1) {
    markSimilars(picIndex);
  }
}

void mousePressed() {
  boxOpacity = 0;
  opacity = 255;
  c = color(random(0, 255), random(150, 255), random(0, 255));
  if (mouseButton == LEFT) {
    picIndex = (mouseY / 240) * 4 + mouseX / 320;  // which logo was clicked
    if (picIndex < images.length) {
      markSimilars(picIndex);
    }
  }
}

void markSimilars(int index) {
  int rectx, recty;
  resetImages();
  // Rank all logos by similarity to the chosen one.
  int[] similars = Similarity.similarVectors(histograms, index);
  PFont Serif_48;
  Serif_48 = loadFont("Serif-48.vlw");
  textFont(Serif_48, 48);
  for (int i = 0; i < similars.length; i++) {
    println(i);
    rectx = (similars[i] % 4) * 320;
    recty = (similars[i] / 4) * 240;
    // Fade each logo toward black: the less similar, the darker.
    float idealBoxOpacity = (i + 1) * 20;
    println(idealBoxOpacity);
    if (boxOpacity > idealBoxOpacity) {
      vBoxOpacity = -2;
      boxOpacity = vBoxOpacity + boxOpacity;
    }
    else if (boxOpacity < idealBoxOpacity) {
      vBoxOpacity = 2;
      boxOpacity = vBoxOpacity + boxOpacity;
    }
    else {
      boxOpacity = idealBoxOpacity;
    }
    fill(0, boxOpacity);
    noStroke();
    rect(rectx, recty, 320, 240);
  }
}

FilenameFilter filter = new FilenameFilter() {
  public boolean accept(File dir, String name) {
    return name.toLowerCase().endsWith(".jpg");
  }
};



Above is a short video of my Exercise 2 and the physical interface that I made to control it.

==============================================================

Description:

This interactive driving applet consists of a Processing applet that simulates a road and a physical interface that controls it.  The physical interface was created by rewiring an optical mouse to switches and mounting them in a foam core enclosure that I made to resemble a car interior.

==============================================================

Controls:

Steering Wheel/Mouse Wheel: Turn left or right.

Gas Pedal/Left Click: Accelerate.

Shifter/Right Click: Reverse On/Off.

Brake Pedal/Middle Click: Brake.

==============================================================

Code:

PImage road;
float y = -1200;
boolean inDrive;
boolean inReverse;
boolean brake;
boolean turnLeft;
boolean turnRight;

void setup() {
  // Processing 1.x has no built-in mouseWheel() callback, so register an AWT listener.
  addMouseWheelListener(new java.awt.event.MouseWheelListener() {
    public void mouseWheelMoved(java.awt.event.MouseWheelEvent evt) {
      mouseWheel(evt.getWheelRotation());
    }
  });
  road = loadImage("Road.jpg");
  size(800, 600);
  background(0);
}

void mousePressed() {
  // Middle click: brake pedal.
  if ((mouseEvent.getModifiers() & InputEvent.BUTTON2_MASK) != 0) {
    inDrive = false;
    inReverse = false;
    brake = true;
  }
  else if (mouseButton == LEFT) {   // gas pedal
    inReverse = false;
    brake = false;
    inDrive = true;
  }
  else if (mouseButton == RIGHT) {  // shifter: reverse
    inDrive = false;
    brake = false;
    inReverse = true;
  }
  else if (mouseButton != RIGHT) {
    inReverse = false;
  }
}

void draw() {
  // Wrap the road image so it scrolls endlessly.
  if (y > 0) {
    y = -1600;
  }
  if (y < -1600) {
    y = 0;
  }
  if (inDrive) {
    y += 20;
  }
  if (inReverse) {
    y -= 10;
  }
  if (turnLeft) {
    rotate(PI / 10);
  }
  if (turnRight) {
    rotate(-PI / 10);
  }
  image(road, -200, y);
}

void mouseWheel(int delta) {
  println(delta);
  if (delta < 0) {
    turnLeft = false;
    turnRight = true;
  }
  if (delta > 0) {
    turnRight = false;
    turnLeft = true;
  }
}


Above is a short video of my Exercise 1 along with the song for which it was made.

==============================================================

Description:

This interactive applet features colored tiles that can be “played” to a song of the user’s choice, in this case, Disney’s Main Street Electrical Parade.

==============================================================

Controls:

‘1’: Toggle the song between play and pause.

Left and Right Mouse Click: Large and small white tiles that vary slightly in size and appear where the mouse is clicked.

‘V’,’B’,’N’, and ‘M’: The middle four colored tiles of the applet.

‘A’,’S’,’D’,’F’,’J’,’K’,’L’,’;’: The 16 little colored tiles on the top and bottom of the applet.

==============================================================

Code:

import ddf.minim.*;

Minim minim;
AudioPlayer player;
boolean sound;      // true while the song should be playing
int frame;          // frameCount at the most recent tile press

float shapeSize;
float BBwidth;      // width of the big middle tiles
float SBwidth;      // width of the small edge tiles

color bg  = color(50,45,45);
color c   = color(255);
color c2  = color(255);
color c3  = color(225,75,75);
color c4  = color(200,150,250);
color c5  = color(250,225,100);
color c6  = color(100,250,200);
color c7  = color(100,250,200);
color c8  = color(100,250,200);
color c9  = color(250,225,100);
color c10 = color(250,225,100);
color c11 = color(200,150,250);
color c12 = color(200,150,250);
color c13 = color(225,75,75);
color c14 = color(225,75,75);

void setup() {
  size(800,600);
  background(bg);
  rectMode(RADIUS);
  noStroke();
  minim = new Minim(this);
  player = minim.loadFile("ElectricalParade2.mp3", 2048);
}

void draw() {
  // For the 60 frames after the last tile was played, repeatedly wash a
  // translucent background over the canvas so the tiles fade away.
  int age = frameCount - frame;
  if (age >= 1 && age <= 60) {
    fill(bg, 25);
    rect(0,0,800,600);
  }
  if (sound) {
    player.play();
  } else {
    player.pause();
  }
}

void mousePressed() {
  rectMode(RADIUS);
  if (mouseButton == LEFT) {
    fill(c);
    shapeSize = random(100,150);  // large white tile
    rect(mouseX, mouseY, shapeSize, shapeSize);
  }
  if (mouseButton == RIGHT) {
    fill(c2);
    shapeSize = random(15,40);    // small white tile
    rect(mouseX, mouseY, shapeSize, shapeSize);
  }
  frame = frameCount;
}

void keyPressed() {
  BBwidth = width/4;
  SBwidth = BBwidth/2;
  rectMode(CORNER);
  noStroke();
  switch(key) {
  case 'v':                       // middle row of big tiles, left to right
    fill(c3);
    rect(0,100,BBwidth,400);
    frame = frameCount;
    break;
  case 'b':
    fill(c4);
    rect(200,100,BBwidth,400);
    frame = frameCount;
    break;
  case 'n':
    fill(c5);
    rect(400,100,BBwidth,400);
    frame = frameCount;
    break;
  case 'm':
    fill(c6);
    rect(600,100,BBwidth,400);
    frame = frameCount;
    break;
  case 'a':                       // top and bottom strips, left to right
    fill(c7);
    rect(0,0,SBwidth,100);
    rect(0,500,SBwidth,100);
    frame = frameCount;
    break;
  case 's':
    fill(c8);
    rect(SBwidth,0,SBwidth,100);
    rect(SBwidth,500,SBwidth,100);
    frame = frameCount;
    break;
  case 'd':
    fill(c9);
    rect(SBwidth*2,0,SBwidth,100);
    rect(SBwidth*2,500,SBwidth,100);
    frame = frameCount;
    break;
  case 'f':
    fill(c10);
    rect(SBwidth*3,0,SBwidth,100);
    rect(SBwidth*3,500,SBwidth,100);
    frame = frameCount;
    break;
  case 'j':
    fill(c11);
    rect(SBwidth*4,0,SBwidth,100);
    rect(SBwidth*4,500,SBwidth,100);
    frame = frameCount;
    break;
  case 'k':
    fill(c12);
    rect(SBwidth*5,0,SBwidth,100);
    rect(SBwidth*5,500,SBwidth,100);
    frame = frameCount;
    break;
  case 'l':
    fill(c13);
    rect(SBwidth*6,0,SBwidth,100);
    rect(SBwidth*6,500,SBwidth,100);
    frame = frameCount;
    break;
  case ';':
    fill(c14);
    rect(SBwidth*7,0,SBwidth,100);
    rect(SBwidth*7,500,SBwidth,100);
    frame = frameCount;
    break;
  case '1':
    sound = !sound;               // toggle play/pause
    break;
  }
}


Here is a website with our documentation. I also uploaded the Word doc with all the photos and everything.

SYNCHRONIZATION

JET, Joana Ricou, Paul Shen

This piece explores how self-organization emerges in the search for synchronization, a principle that appears to be ubiquitous and ancient, holding together all living things.

The installation consists of 144 entities that sync with their closest neighbors until the whole population is in sync. The presence of visitors disrupts the synchronization.

INPUT: presence of visitors

BLACK BOX: consensus algorithm to synchronize pulsing.

OUTPUT: synchronized blinking 
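The consensus step in the black box can be sketched as a toy model: each entity keeps a blink phase and repeatedly nudges it toward the average phase of its grid neighbors, so the whole population drifts into step. The class name, 12×12 grid, coupling constant, and update rule below are illustrative assumptions; the actual firmware (linked below) may work differently.

```java
import java.util.Random;

// Toy nearest-neighbor consensus: 144 entities on a 12x12 grid, each with a
// blink phase in [0,1), pulled toward the mean phase of its neighbors.
public class FireflySync {
    static final int SIZE = 12;          // 12 x 12 = 144 entities
    static final double COUPLING = 0.3;  // how strongly neighbors pull

    double[][] phase = new double[SIZE][SIZE];

    FireflySync(long seed) {
        Random rng = new Random(seed);
        for (int r = 0; r < SIZE; r++)
            for (int c = 0; c < SIZE; c++)
                phase[r][c] = rng.nextDouble();  // random initial phases
    }

    // One step: move each phase part-way toward its neighbors' average.
    void step() {
        double[][] next = new double[SIZE][SIZE];
        int[][] offsets = {{-1,0},{1,0},{0,-1},{0,1}};
        for (int r = 0; r < SIZE; r++) {
            for (int c = 0; c < SIZE; c++) {
                double sum = 0;
                int n = 0;
                for (int[] off : offsets) {
                    int rr = r + off[0], cc = c + off[1];
                    if (rr >= 0 && rr < SIZE && cc >= 0 && cc < SIZE) {
                        sum += phase[rr][cc];
                        n++;
                    }
                }
                next[r][c] = phase[r][c] + COUPLING * (sum / n - phase[r][c]);
            }
        }
        phase = next;
    }

    // Spread of phases (max - min); 0 means the population is fully synced.
    double spread() {
        double lo = 1, hi = 0;
        for (double[] row : phase)
            for (double p : row) {
                lo = Math.min(lo, p);
                hi = Math.max(hi, p);
            }
        return hi - lo;
    }

    public static void main(String[] args) {
        FireflySync sim = new FireflySync(42);
        System.out.println("spread before: " + sim.spread());
        for (int i = 0; i < 1000; i++) sim.step();
        System.out.println("spread after:  " + sim.spread());
    }
}
```

Because local averaging on a connected grid is a standard consensus process, the spread shrinks toward zero over time; a visitor disturbance would correspond to resetting some phases, after which the population re-converges.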

Problem to be solved:

  • The Arduino stops lighting the LEDs while it receives data, so an LED controller will be added

Code: http://storage.paulshen.name/Fireflies.tar.gz

Powerpoint:http://joanaricou.com/transferfiles/Synch.pptx