Monday, April 12, 2010

Procedural Mapping




The purpose of this project is to collect qualitative characteristics of a site. This is an alternative mapping technique that will hopefully begin to describe a site in terms other than Cartesian coordinates. The method of data collection is to triangulate the site, taking sequential photos from three points to document the use of the site, mainly occupancy. I would like to synthesize this collection of imagery in such a way that each photo can be cross-referenced with its two counterparts, then further compared to the sequential photos. These comparisons will reveal the fixed objects within the site and the changing ones, and the difference between the two will yield descriptive qualitative data. I do not yet know how this will be done, perhaps via a series of Photoshop or Processing filters, or through an arrayed point grid that can then be further expressed in Grasshopper.
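
As one possible starting point for the comparison step, here is a minimal Processing sketch. It assumes two hypothetical sequential photos, point1_t1.jpg and point1_t2.jpg, taken from the same point, and simply blacks out the pixels that stay roughly the same between them so that the changing objects (occupancy) stand out. This is only a sketch of the kind of filter the synthesis might use, not the finished method.

PImage t1, t2;

void setup() {
  size(750, 587); // assumes both photos share this resolution
  t1 = loadImage("point1_t1.jpg"); // hypothetical sequential photos taken
  t2 = loadImage("point1_t2.jpg"); // from one of the three triangulation points
  t1.loadPixels();
  t2.loadPixels();
  loadPixels();
  for (int i = 0; i < width * height; i++) {
    float diff = abs(brightness(t1.pixels[i]) - brightness(t2.pixels[i]));
    if (diff < 20) { // 20 is an arbitrary threshold for "unchanged"
      pixels[i] = color(0); // fixed objects fade to black
    } else {
      pixels[i] = t2.pixels[i]; // changing objects (occupancy) remain visible
    }
  }
  updatePixels();
}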

As an overall idea, this process should be deployable and repeatable. An entire city or neighborhood could be scanned, then each of the mappings can be combined. This is an experiment in creating a method of qualitative data gathering.

Sunday, April 4, 2010


Not Processing, Magic

I did not use a script to make this image, I used magic. But, luckily, I did some research and there happens to be code that exists to manipulate the image in a very similar way. It uses crude and undeveloped technology, not as refined as the Dark Arts that have been passed down from generation to generation since the Middle Ages, but it works nonetheless, pretty well.

Let’s start with a quote from the book to frame how the code is proposed to be used:

“Using the pixels[] array rather than the image() function to draw the image to the display window provides more control and leaves room for variation in displaying the image. Small calculations modifying the for structure and the pixels[] array reveal some of the potential of this technique.”

With that frame in mind, this is an image manipulation based on a grayscale representation of the image in conjunction with a vertical "blocking" of information. The spacing of the vertical lines is controlled through the code, which allows my specific analysis to take place: when the background color of the manipulated image is rendered, it blends with the corresponding grayscaled color. At these moments the image "blurs" out, notating the specific information that I am looking to map via a digital technique. The code is as follows:

size(750, 587); // size of the image

PImage arch3 = loadImage("arch3.jpg"); // note: arch3 is the name of the image that is loaded;
                                       // anywhere it is mentioned, use the name of your image instead

int count = arch3.width * arch3.height; // the number of pixels the script reads, structured by
                                        // the width and height of the image

arch3.loadPixels();
loadPixels(); // call to load the pixels to be manipulated

for (int i = 0; i < count; i += 5) { // copies every fifth pixel of the image
                                     // until the pixel count runs out
  pixels[i] = arch3.pixels[i];
}

updatePixels(); // calls to update the pixels with the values copied above
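
If you want to experiment with the spacing mentioned above, the same sketch can be written with the step pulled out as a variable (a minimal variation assuming the same arch3.jpg image): 5 reproduces the result above, while larger values leave wider bands of the background color showing through.

size(750, 587);
int spacing = 5; // try 2, 10, 20, etc. to change the density of the vertical "blocking"
PImage arch3 = loadImage("arch3.jpg");
int count = arch3.width * arch3.height;
arch3.loadPixels();
loadPixels();
for (int i = 0; i < count; i += spacing) {
  pixels[i] = arch3.pixels[i]; // the pixels skipped in between keep the background color
}
updatePixels();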

The image below is the "blind" outcome of running this script.

As noted before, this can also be done, with relative ease, using magic. But make sure you have a beginner Warlock / Witch Kit and proper safety equipment. Wizarding is a highly dangerous practice and must be respected and regarded as such.

Tuesday, March 30, 2010

NAVIGATION

As you saw during the presentation, this process is rooted in the analysis of light quality in a series of spaces depicted in a video found on YouTube.com. The first tool that I found incredibly helpful, for those of you using stills from a video now or in the future, is the program ImageGrab.

http://paul.glagla.free.fr/imagegrab_en.htm

It can be run without installing and has a fairly easy interface. It allows stills to be taken from any video loaded into it. What makes it even more useful is that it lets you set a time interval at which it will take successive stills, 1 sec, 2 sec, etc. This is done using the "intervalometer," which is located along the toolbar (it looks like an alarm clock). I am sure Processing will do this for you, but for those who don't have the time to figure that out, this program worked very well, allowing me to save 132 JPEGs in a few seconds.
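
For anyone who would rather stay inside Processing, a minimal sketch along the following lines should do the same job with the video library (the filename clip.mov, the window size, and the one-second interval are placeholders, not part of my process):

import processing.video.*;

Movie clip;
int lastSave = 0;

void setup() {
  size(640, 360); // match this roughly to your video's resolution
  clip = new Movie(this, "clip.mov"); // placeholder filename; use your own video
  clip.play();
}

void draw() {
  if (clip.available()) {
    clip.read();
    image(clip, 0, 0, width, height);
    if (millis() - lastSave >= 1000) { // save roughly one still per second
      saveFrame("still-####.jpg"); // numbered JPEGs are written to the sketch folder
      lastSave = millis();
    }
  }
}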

The code used in Processing to create the image above loads an image and, along a line that you select (the green line in the image), takes the color of every pixel on that line and uses its value to color a line drawn along the bottom of the image. This procedure basically takes a section cut through the image at the defined point. Within this code is also a command called POSTERIZE, which lets you select the number of color levels the image is made from. Note: if, for example, you have a color image and choose 3 levels, the program will use three levels of each color, meaning three blues, three reds, three yellows, etc. With my specific project I reduced the screenshots to black and white before loading them.

CODE:

PImage test;
int y = 0;

void setup() {
  size(500, 600); // size of the sketch window *be sure to include space
                  // for your image and for the lines to be drawn below it
  test = loadImage("Test2.jpg"); // "insert the name of your image"
}

void draw() {
  image(test, 0, 0);
  filter(POSTERIZE, 3); // POSTERIZE: insert the number of colors to make the
                        // image out of; this example shows 3
  y = constrain(mouseY, 0, 499); // constrains the variable y to the height of the image
                                 // *mouseY lets the sampling line move dynamically with the
                                 // mouse's placement on the image in the y-direction
  for (int i = 0; i < width; i++) { // moves over one pixel at a time until the
                                    // edge of the image is reached
    color c = get(i, y); // the get() function tells Processing to "get" the color
                         // of each pixel at the coordinates set above
    stroke(c); // sets the stroke color to the pixel color received from above
    line(i, 500, i, 600); // tells where to draw the lines using the stroke color
  }
  stroke(57, 221, 44); // gives the stroke color of the marking line on the image,
                       // showing where the pixels are being selected; in this case a bright green
  strokeWeight(2);
  line(0, y, 499, y);
}

Following is a screenshot with sections taken at 1, 20, 40, and 60 pixels.
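
For reference, a variation like the one below could take the fixed sections instead of following the mouse (a sketch only: the row values match the ones listed above, but the 25-pixel band height and the single draw pass are my own assumptions).

PImage test;
int[] rows = {1, 20, 40, 60}; // the fixed rows to section

void setup() {
  size(500, 600); // 500x500 image area plus a 25-pixel band for each section
  test = loadImage("Test2.jpg");
  noLoop(); // the sections are fixed, so one draw pass is enough
}

void draw() {
  image(test, 0, 0);
  filter(POSTERIZE, 3);
  for (int r = 0; r < rows.length; r++) {
    strokeWeight(1);
    for (int i = 0; i < 500; i++) {
      stroke(get(i, rows[r])); // sample the pixel color along the fixed row
      line(i, 500 + r * 25, i, 500 + (r + 1) * 25); // stack one band per section
    }
    stroke(57, 221, 44); // mark the sampled row in bright green
    strokeWeight(2);
    line(0, rows[r], 499, rows[r]);
  }
}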




Wednesday, March 24, 2010

Cinema Redux

Vertigo (1958)



Created in January 2004 by Brendan Dawes, Cinema Redux explores the idea of distilling a whole film down to one single image. The script lays out a film as a series of stills captured at one frame per second. The result is a matrix of images resembling a DNA print of the film.

After some internet sleuthing I was able to find a working script that allowed me to do the same thing. The original script created by Brendan Dawes is copyrighted and no longer works in newer editions of Processing.

-------------------------------------------------------------------------------------

import processing.video.*;

Movie myMovie;
int xpos = 0;
int ypos = 0;
int VWIDTH = 11; // width of each capture
int VHEIGHT = 6; // height of each capture
int MOVIEWIDTH = VWIDTH * 60; // one row of the stage is equivalent to 1 minute of film time
int MOVIEHEIGHT;
int MAXWIDTH = MOVIEWIDTH - VWIDTH;
float MOVIEDURATION;

void setup() {
  myMovie = new Movie(this, "Tetsuo2.mov"); // change Tetsuo2.mov to the filename of your QuickTime movie
  MOVIEDURATION = myMovie.duration(); // gets the duration of the movie in seconds
  MOVIEHEIGHT = VHEIGHT * int(MOVIEDURATION / 60) + VHEIGHT; // the height of the stage is based on the length of your film
  // note that the last frame of the film will repeat until it reaches the end of the current line
  size(MOVIEWIDTH, MOVIEHEIGHT);
  background(0); // sets the background of the stage to black
  frameRate(1); // forces the video to play at one frame per second
  myMovie.play();
}

void draw() {
  if (myMovie.available()) { // checks to see if the next frame is ready for processing
    myMovie.read();
    image(myMovie, xpos, ypos, VWIDTH, VHEIGHT);
    xpos += VWIDTH;
    if (xpos > MAXWIDTH) {
      xpos = 0;
      ypos += VHEIGHT;
    }
    if (ypos > MOVIEHEIGHT) {
      saveFrame("my_movie_dna.tif"); // saves a tiff image to the folder of the current sketch when the end of the movie is reached
      delay(2000); // pauses two seconds to save the file
      noLoop(); // exits the draw loop so that the process ends
    }
    delay(100); // waits one tenth of a second before repeating the draw function
  }
}

-------------------------------------------------------------------------------------

some notes:
  • To get this script to work, the video needs to have the file extension .MOV, which is Apple's QuickTime format. There are a number of programs that can convert to this format, such as Oxelon Media Converter. Once you have a .MOV file, you can add it to the sketch in Processing by simply dragging it into the script window.
  • VWIDTH and VHEIGHT are measured in pixels; they should probably be changed to match the aspect ratio of the movie's screen size.
  • It is also best to shrink down the size of the film in an external program (e.g., if the film's screen resolution is 800x600, make it 80x60); this makes the computer do a lot less work and create the image faster.

Here are two I did.

Tetsuo: The Iron Man (1989)


2001: A Space Odyssey (1968)