
Category Archives: Assignment

Sole Identity by Jesse Chorng


My piece attempts to answer the question “What do our shoes say about us?” by using machine learning to analyze and classify the bottom of users’ shoes.

Sole Identity captures a person’s outsole and tries to determine the purpose the shoe was designed for. Much like a thumbprint, a shoe print has unique characteristics that can identify an individual. In the shoe design process, special attention is paid to the sole because of its importance to durability and comfort. Once a person’s shoe is identified as a casual, skate, or athletic type, an image of the shoe is placed into an environment where it is animated and interacts with other pairs according to characteristics specific to its classification.

The goal of the project is to show the user how their shoes were meant to behave. By giving life to one’s footwear, a person can visually understand the purpose of their shoes and decide whether their shoe choices do indeed reflect who they are.

INPUT: the bottom of people’s shoes
BLACK BOX: classification of soles as Casual, Athletic, or Skateboarding
OUTPUT: animated environment of captured soles
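The black box itself isn’t shown here, but a minimal sketch of one way such a classifier could work is below: binarize the captured sole image and pick the nearest of one stored template per category by counting disagreeing pixels. The template file names, the capture file, and the threshold are illustrative assumptions, not the project’s actual code.

// Hypothetical nearest-template sole classifier (Processing).
// File names and the binarize threshold are assumptions.
String[] labels = { "casual", "skate", "athletic" };
PImage[] templates = new PImage[3];

void setup() {
  size(180, 135);
  for (int i = 0; i < 3; i++) {
    templates[i] = loadImage(labels[i] + ".png");  // one reference sole per class
    templates[i].resize(width, height);
    templates[i].filter(THRESHOLD, 0.5);           // binarize tread vs. background
    templates[i].loadPixels();
  }
  PImage sole = loadImage("capture.png");          // frame grabbed from the camera
  sole.resize(width, height);
  sole.filter(THRESHOLD, 0.5);
  sole.loadPixels();
  println("classified as: " + labels[classify(sole)]);
}

int classify(PImage sole) {
  int best = 0;
  float bestDiff = Float.MAX_VALUE;
  for (int c = 0; c < 3; c++) {
    float diff = 0;
    for (int p = 0; p < sole.pixels.length; p++) {
      // count pixels where the tread pattern disagrees with the template
      if (sole.pixels[p] != templates[c].pixels[p]) diff++;
    }
    if (diff < bestDiff) { bestDiff = diff; best = c; }
  }
  return best;
}

In the installation, the capture would of course come live from the camera under the plexiglass rather than from a file.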

Download Code
Download Powerpoint


[Photos of “Brighten Your Day!” at the Children’s Museum of Pittsburgh]

==============================================================

Description:

Have you ever wondered what it would be like if household products could see you and react to you? Using a machine learning library that detects people, their faces, and other features, this lamp is able to turn on and look at you when you look at it. However, the lamp will only respond to people who believe that it can come to life. Are you a believer?

==============================================================

Materials:

The mechanical components and wires were secured in a support structure made of MDF with a vacuum-formed plastic cover that I made for the installation.

  • Processing 1.0.1
  • OpenCV library for Processing (hypermedia.video)
  • Webcam (Apple iSight from a MacBook Pro for the installation)
  • Breadboard (now perfboard)
  • Servo
  • 5 V relay
  • Diode
  • 22-gauge solid wire
  • 12 V power adapter
  • Desk lamp
  • Extension cord
  • USB cable
  • Solder

==============================================================

Processing Code:

import processing.serial.*;
import hypermedia.video.*;
import java.awt.Rectangle;  // opencv.detect() returns java.awt.Rectangle objects

OpenCV opencv;
Serial myPort;
int contrast_value   = 0;
int brightness_value = 0;

void setup() {
  size(1920, 1200);
  background(0);
  opencv = new OpenCV(this);
  opencv.capture(180, 135);                        // small capture size keeps detection fast
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);  // frontal-face Haar cascade
  println(Serial.list());
  myPort = new Serial(this, Serial.list()[0], 9600);  // Arduino on the first serial port
}

public void stop() {
  opencv.stop();
  super.stop();
}

void draw() {
  opencv.read();
  opencv.convert(GRAY);
  opencv.contrast(contrast_value);
  opencv.brightness(brightness_value);

  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);
  image(opencv.image(), 0, 0);
  noFill();
  stroke(255, 0, 0);

  for (int i = 0; i < faces.length; i++) {
    int faceX = faces[i].x;
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
    println(faceX);
    // Send the face's x position to the Arduino, clamped to 50-100
    // (the Arduino scales this by 0.75 to get the servo angle).
    if (faces[i].y > 25) {
      if (faceX >= 60 && faceX <= 100) {
        myPort.write(faceX);
      } else if (faceX < 60) {  // originally faceX < 50, which left 50-59 unsent
        myPort.write(50);
      } else {                  // faceX > 100
        myPort.write(100);
      }
    }
  }

  // Hide the camera view: clear the screen and draw only a flickering dot.
  background(0);
  noStroke();
  smooth();
  fill(random(50, 255));
  ellipse(width/2, height/2, 15, 15);
  noFill();
  noSmooth();
}

// Dragging the mouse tunes detection contrast (x axis) and brightness (y axis).
void mouseDragged() {
  contrast_value   = (int) map(mouseX, 0, width, -128, 128);
  brightness_value = (int) map(mouseY, 0, height, -128, 128);
}


==============================================================

Arduino Code:

#include <Servo.h>

Servo myservo;         // turns the lamp head toward the viewer
int incomingByte = 0;  // face x position received from Processing (50-100)
int ledPin = 11;       // drives the relay that switches the lamp bulb
int value = LOW;

void setup()
{
  pinMode(ledPin, OUTPUT);
  myservo.attach(9);
  Serial.begin(9600);
}

void loop()
{
  if (Serial.available() > 0) {
    incomingByte = Serial.read();
    // A face near the center of the frame (50-100) switches the relay
    // one way, anything else the other; polarity depends on the wiring.
    if (incomingByte > 50 && incomingByte < 100) {
      value = LOW;
    }
    else {
      value = HIGH;
    }
    digitalWrite(ledPin, value);
  }
  Serial.println(incomingByte);
  // Map the 50-100 face position to roughly 37-75 degrees of servo travel.
  myservo.write(incomingByte * .75);
}

Click here for a website with our documentation. I also uploaded the Word doc with all the photos.

SYNCHRONIZATION

JET, Joana Ricou, Paul Shen

This piece explores how self-organization emerges from the search for synchronization, a ubiquitous and ancient principle that appears to hold all living things together.

The installation consists of 144 entities that sync with their closest neighbors until the whole population is in sync. The presence of visitors disrupts the synchronization.

INPUT: presence of visitors

BLACK BOX: consensus algorithm to synchronize pulsing.

OUTPUT: synchronized blinking 
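The consensus algorithm itself isn’t written out in the post, but a minimal sketch of one way pulse-coupled entities can reach synchronized blinking, in the firefly (Mirollo-Strogatz) style, might look like the following. The grid layout, coupling constant, and the mouse press standing in for a visitor are illustrative assumptions, not the installation’s code.

// Hypothetical consensus-by-pulse-coupling sketch (Processing):
// each entity advances its own blink phase; seeing a neighbor flash
// nudges it closer to firing, so the grid gradually locks together.
int N = 144;
int COLS = 12;
float[] phase = new float[N];  // each entity's position in its blink cycle, 0..1
float speed = 0.01;            // phase advance per frame
float nudge = 0.05;            // coupling strength toward flashing neighbors

void setup() {
  size(480, 480);
  for (int i = 0; i < N; i++) phase[i] = random(1);  // start out of sync
}

void draw() {
  background(0);
  boolean[] flashed = new boolean[N];
  for (int i = 0; i < N; i++) {
    phase[i] += speed;
    if (phase[i] >= 1) { phase[i] = 0; flashed[i] = true; }  // fire
  }
  // each entity jumps part of the way toward firing when a 4-connected
  // grid neighbor has just flashed
  for (int i = 0; i < N; i++) {
    int cx = i % COLS, cy = i / COLS;
    for (int j = 0; j < N; j++) {
      if (!flashed[j] || j == i) continue;
      int jx = j % COLS, jy = j / COLS;
      if (abs(cx - jx) + abs(cy - jy) == 1) {
        phase[i] += nudge * (1 - phase[i]);
      }
    }
  }
  for (int i = 0; i < N; i++) {
    float px = (i % COLS) * 40 + 20, py = (i / COLS) * 40 + 20;
    // a mouse press stands in for a visitor: scramble nearby entities
    if (mousePressed && dist(mouseX, mouseY, px, py) < 60) phase[i] = random(1);
    fill(flashed[i] ? 255 : 40 + 100 * phase[i]);
    noStroke();
    ellipse(px, py, 14, 14);
  }
}

Because every flash pulls its neighbors a fraction of the way toward firing, local agreement spreads outward until the whole population pulses together, and a disruption only desynchronizes things temporarily.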

Problem to be solved:

  • The Arduino stops lighting the LEDs while it receives serial data, so an LED controller will be added

Code: http://storage.paulshen.name/Fireflies.tar.gz

Powerpoint: http://joanaricou.com/transferfiles/Synch.pptx

Paul Shen, Joana Ricou, Jet Townsend

Installation: The installation is in a darkened area where 50-100 LEDs appear to float, hanging from a suspended square PVC frame. When visitors observe from afar, the installation pulses in synchrony. If a visitor approaches, they disturb nearby LEDs. The lights eventually synchronize again.

Still needed:

  • dimensions

 

Input: Presence of visitors.

Blackbox: The presence of visitors will disturb the pulsing pattern of nearby LEDs. Individual LEDs use consensus to synchronize their blinking. 

Output: an increasingly synchronized pulsing.

 


Delivered:

  • prototype of 13 LEDs
  • deliver sketch of algorithm
  • decide on appearance
  • decide on sensors: presence 
  • material list

 

Materials:

  • PVC pipe + paint: Lowe’s/Home Depot
  • conductive thread or wire: already have
  • 1 or more Arduinos: already have
  • LED drivers / shift registers / multiplexers: already have
  • LEDs
  • PIR sensor (http://www.futurlec.com/PIR_Sensors.shtml)

 

Questions:

  • how many LEDs?
  • size of the external structure of the LEDs?
  • what will it do when no one is close? (Can it reset itself?)
  • does it respond to motion or presence? I.e., will it re-sync if people stand close but still?
  • if someone stands close to an LED cluster, could those LEDs stay on?


This installation consists of a modified task chair that turns to face children in the museum, but doesn’t turn to face adults. The idea behind the installation is to create a playful experience for the children as if they are in a world where normally inanimate objects “come alive” like in many children’s movies.
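The post doesn’t say how the chair tells children from adults. A minimal sketch of one crude approach, reusing the same hypermedia OpenCV face detection as the lamp project above and treating faces that sit low in the camera frame as children, could look like this; the 0.4 cutoff and the level, adult-head-height camera placement are assumptions:

// Hypothetical child/adult test: with the camera mounted level at adult
// head height, a detected face low in the frame is assumed to be a child.
import hypermedia.video.*;
import java.awt.Rectangle;

OpenCV opencv;

void setup() {
  size(320, 240);
  opencv = new OpenCV(this);
  opencv.capture(320, 240);
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);
}

void draw() {
  opencv.read();
  image(opencv.image(), 0, 0);
  Rectangle[] faces = opencv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 40, 40);
  for (int i = 0; i < faces.length; i++) {
    // a face whose top edge sits low in the frame is taken to be a child
    boolean child = faces[i].y > height * 0.4;
    if (child) {
      // here the chair would turn toward faces[i].x, e.g. via serial to an Arduino
      println("child at x=" + faces[i].x + " -> turn chair");
    }
  }
}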

[Sue Ann, Patrick, Polo]

* Phase 1

Input (camera)/output (using screen output) code. A basic classifier.

A timeline:

* Phase 2 (by Tue 4/14):

– central mechanism built and mounted on the backboard with the Arduino

– program the Arduino for servo and shredder control

– a more solid version of the base classifier

* Phase 3 (by the installation start date, Tue 4/21):

– adaptive code; display code (five best paintings)

– build the accepted box and a stand for the shredder, mount the camera, cover the central mechanism

– a “SUBMIT” button

Shoe Prints

For my final proposal, I hope to create a piece that scans a user’s shoe print, classifies it as a certain type of footwear, and then animates it on screen according to its characteristics. A person will step onto a platform with a clear plexiglass top and a camera underneath. The piece will take a picture of the sole of the shoe and try to classify it into one of the following categories:
– Basketball/Athletic
– Running
– Skateboarding
– Casual

Based on its classification, the shoe will then be animated to interact in a projected space according to certain characteristics. For example, a shoe print classified as a running shoe will start running around the perimeter of the screen, constantly in motion. A skateboard shoe print will be placed onto a board shape and glide around, weaving between other groups and generally disrupting the space. Basketball prints will be competitive, challenging other similarly classified prints. And so on.

This piece is strongly influenced by Braitenberg vehicles, which produce seemingly complex interactions from objects following simple rules.

For phase 1, I will create circles in Processing which exemplify each category and start to get an idea of how the interactions will take place. I will also start building the platform and calibrating the camera capture.
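As a rough sketch of what those Phase 1 circles could look like (the speeds, colors, and weaving behavior below are placeholders, not the project’s final rules):

// Hypothetical Phase 1 sketch: colored circles with per-category
// behaviors. Runners are fastest and constantly in motion, skaters
// weave between the others, casual shoes drift slowly.
int N = 9;
float[] x = new float[N], y = new float[N], vx = new float[N], vy = new float[N];
int[] kind = new int[N];  // 0 = running, 1 = skate, 2 = casual

void setup() {
  size(600, 600);
  for (int i = 0; i < N; i++) {
    kind[i] = i % 3;
    x[i] = random(width);
    y[i] = random(height);
    float speed = (kind[i] == 0) ? 4 : (kind[i] == 1) ? 2.5 : 0.8;
    float a = random(TWO_PI);
    vx[i] = cos(a) * speed;
    vy[i] = sin(a) * speed;
  }
}

void draw() {
  background(0);
  for (int i = 0; i < N; i++) {
    if (kind[i] == 1) {
      // skate: weave side to side, cutting between the others
      vx[i] += 0.3 * sin(frameCount * 0.05 + i);
      vx[i] = constrain(vx[i], -4, 4);
    }
    x[i] += vx[i];
    y[i] += vy[i];
    // everyone rebounds off the walls; fast runners effectively hug the perimeter
    if (x[i] < 20 || x[i] > width - 20)  vx[i] = -vx[i];
    if (y[i] < 20 || y[i] > height - 20) vy[i] = -vy[i];
    fill(kind[i] == 0 ? color(255, 80, 80) : kind[i] == 1 ? color(80, 255, 80) : color(80, 80, 255));
    noStroke();
    ellipse(x[i], y[i], 30, 30);
  }
}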

Phase I: Tuesday, April 7th
– prototype of 13 LEDs
– deliver sketch of algorithm
– decide on appearance
– decide on sensors/interactivity; possibilities: presence (IR lamps + camera) or heartbeat
– get materials (LED drivers)
– determine dimensions
Phase II: Tuesday, April 14th
– build structure
– implement sensors
Final Installation: Saturday, April 25th
Concept:
All of life and many inanimate processes rely on synchronization. Synchronization helps the cells of the heart beat together, neurons communicate, and fireflies attract mates. The emergent self-organization that arises from seeking synchronization appears to be a ubiquitous and ancient principle that holds living things together.
This piece will explore the drive to synchronize. Several entities will have the ability to learn to pulse at the same time, and this progression can be interrupted by the presence of people. The piece will include sound that communicates how synchronized the entities are. If left to match up, the whole population will pulse together in a slow but increasing crescendo. It will be interesting to see the effect of a living, glowing, soothing, synchronized environment on visitors: will they sync up their breath or pace? Will visitors behave or feel differently when the system is out of sync or in sync? Will people be more attracted to the “searching” entities or the “synched” entities?
Some constraint could exist, symbolizing a resource such as food or energy, which would limit how long the synchronization can be kept. After that, the synchronization falls apart and chaos breaks out again. At this point the entities can be reset randomly or re-seed each other.
Suggested location: dark hallway next to Text Rain
Physical description: a light support structure hangs from the ceiling. About 100 LEDs hang from this structure, pulsing rhythmically.
Questions:
– how does a visitor disrupt one of the MOBs? It should be possible to disrupt locally: use a camera; have a discrete number of zones monitored by range finders; or monitor only when people come into the space
– can people also “help” the process? I.e., if you hold a MOB it lights up, resetting its learning, but if you let go at the right time, it will learn that that is a good time to be on
– should the system be able to maintain its synchronicity, or have a built-in constraint?
– how does chaos break out? This step has the possibility of being very beautiful.

The idea is similar to Patrick’s box (“what they left behind” from Assignment 1). Here, instead of objects, people will submit drawings. A drawing is fed into a rolling scanner that sends the image to the computer, which then decides whether to accept or reject it. There are two large transparent boxes, one for accepted drawings and another for rejected ones. However, when a drawing is rejected, it’ll be sent through a shredder(!) before being dumped in the reject bin… On the wall behind this contraption, we’ll project the five best drawings so far, just as a curator would display them at a museum.


We hope to see that people try to “learn” what the curator likes and does not like, and try to get their art accepted.

There are a couple of machine learning components. We want the curator to have a set of criteria for judging the drawings. First, it will have a pre-trained classifier that defines the curator’s “taste”. But as people submit drawings, it will also redefine its taste, for a couple of reasons: 1. it gets sick of “things” it has seen too many of, e.g. stick figures, and starts to hate them; 2. it starts to like elements from the more recent drawings (following the “trend”) if there seem to be too few acceptances. The latter would also help keep the accepted and rejected bins balanced.
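As a hedged sketch of how that taste drift could be implemented (the feature vector, the boredom penalty, the trend threshold, and the random stand-in for the pre-trained weights are all illustrative assumptions, not our actual classifier):

// Hypothetical "taste drift" scorer: each drawing is reduced to a
// feature vector; over-seen features score worse (boredom), and a
// shortage of acceptances drifts the taste toward recent drawings.
int F = 16;                    // number of features per drawing
float[] taste = new float[F];  // the curator's taste weights
float[] seen  = new float[F];  // how much of each feature has been seen
int accepted = 0, submitted = 0;

void setup() {
  // stand-in for the pre-trained classifier: random taste weights
  for (int i = 0; i < F; i++) taste[i] = random(-1, 1);
  // judge a few random "drawings" as a demo
  for (int d = 0; d < 20; d++) {
    float[] drawing = new float[F];
    for (int i = 0; i < F; i++) drawing[i] = random(1);
    println("drawing " + d + ": " + (judge(drawing) ? "accepted" : "shredded"));
  }
}

boolean judge(float[] drawing) {
  submitted++;
  float score = 0;
  for (int i = 0; i < F; i++) {
    // boredom: features seen too often count against the drawing
    score += drawing[i] * (taste[i] - 0.1 * seen[i]);
    seen[i] += drawing[i];
  }
  boolean accept = score > 0;
  if (accept) accepted++;
  // trend-following: too few acceptances drifts taste toward recent drawings
  if (!accept && accepted < submitted / 4) {
    for (int i = 0; i < F; i++) taste[i] += 0.05 * drawing[i];
  }
  return accept;
}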

We have not decided exactly what the pre-trained classifier would be trained on. A couple of possibilities: 1. a puzzle, e.g. the drawing has to have …; 2. whatever we like.

Another idea is to have colored papers and/or some pre-drawn figures on the paper to promote certain kinds of drawings. The colored papers would make the installation more visually appealing.

Things to build: two boxes (with doors to empty the bins if needed) and a mechanical “switch” (probably a Teflon cookie sheet driven by a servo) connected to the computer (probably through an Arduino)

Things to program/learn: the predefined classifier, how to change strategies over time (with no particular goal, or just to balance the bins)

Other work: getting a scanner to activate the computer… another option is to use a camera, but the same problem persists.