social hacking computer vision

our final assignment is to use computer vision techniques. i have two ideas:

  1.  an IRL game where we use our “meat eyes” (trevor paglen’s evocative words) to see like a computer
  2.  something about the history of photography, which has to do with the history of the mugshot, phrenology, and the practice of reading criminality on the body. some of the face tracking diagrams remind me of cesare lombroso’s face measurements:

[image: lombroso plate of face measurements, alongside a face-tracking expressions diagram]

unfinished thoughts:

i’ve been thinking about trevor paglen on meat eyes. in a recent talk, he said: “most of the images made in the world are made by machines, for other machines, and human eyes are not in the loop.” what is an image if we can’t see it? is there a way for our meat eyes to see the output of computer vision processes in a way that doesn’t interfere?

are we externalizing our brain when we try to make a computer see like we see? or, rather, to make a computer translate what it sees into something we can see? if our visual systems, in our brains, evolved from needs in our environments, needs that computers don’t share, and we are now building systems to replicate those visual systems with inorganic parts, isn’t that interesting?

like construction machines that mirror body parts. the way diggers look like arms with scooping hands at the ends. what does it do to have a bunch of body parts with no brain, only their own affordances, roaming around the city? to have these disembodied arms everywhere, digging stuff up and making holes for condos to go in? what about these disembodied computer eyes? to whom do they belong? where is their brain?

last but not least, john berger in ways of seeing: “we only see what we look at… every image embodies a way of seeing.”

need to come back to these:
the brain’s visual processing system
john berger’s ways of seeing
kyle mcdonald’s notes on computer vision

fixing a bug in p5 textAlign()

lauren flagged this textAlign() bug as beginner-level, so i started digging into it. the problem was that you couldn’t center more than one line of text vertically within a bounding box. the textAlign() function would only center the block of text in relation to the first line—so the first line would be centered vertically, but the rest would be off. you can see what i mean in the video below.

riveting silent film of working on a p5.js bug

solution

to find the problem, i had to dig around in different parts of the p5.js file, ctrl + f-ing “renderText,” “finalMaxHeight,” “Renderer2D,” and so on, to see how all the functions and constants were connected to each other. eventually, i narrowed it down to a little else{} section of the textAlign() function. previously, this.renderText didn’t account for the offset you need when there’s more than one line of text. so lauren helped me think through the math and we created a variable, offset.

[screenshot: the modified else{} section of textAlign(), with the new offset variable]

the code says:

the offset amount equals number of lines of text (that’s what cars.length gives us),

divided by two (that’s how you center something)

minus half a line of text (because when we use the CENTER argument, half the height of one line is baked in to that constant),

all multiplied by the text leading (or, line height).

for all text you render, put the text itself, spread across however many lines, starting at location x, at location y minus the offset.

then, add line height for each line.
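the walkthrough above can be sketched like this. (a minimal, hypothetical version of the math, not the actual p5.js internals — cars follows the post’s variable name for the array of lines, and drawLine stands in for the renderer’s actual line-drawing call:)

```javascript
// cars: array of lines of text; leading: line height;
// drawLine: whatever function actually renders one line at (x, y)
function drawCenteredLines(cars, x, y, leading, drawLine) {
  // number of lines divided by two finds the block's midpoint,
  // minus half a line because CENTER already bakes in half a line's height,
  // all multiplied by the leading
  const offset = (cars.length / 2 - 0.5) * leading;
  for (let i = 0; i < cars.length; i++) {
    // start at y minus the offset, then add one leading per line
    drawLine(cars[i], x, y - offset + i * leading);
  }
}
```

for a single line, offset works out to zero, so the old behavior is unchanged; for two lines, each line lands half a leading above and below y, which is exactly the vertical centering the bug was missing.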

i also found the documentation in the reference to be really counter-intuitive so i added some more explanation. from this:

“Sets the current alignment for drawing text. The parameters LEFT, CENTER, and RIGHT set the alignment of text in relation to the values for the x and y parameters of the text() function.”

to this:

“Sets the current alignment for drawing text. Accepts two arguments: horizAlign (LEFT, CENTER, or RIGHT) and vertAlign (TOP, BOTTOM, CENTER, or BASELINE).

The horizAlign parameter is in reference to the x value of the text() function, while the vertAlign parameter is in reference to the y value.

So if you write textAlign(LEFT), you are aligning the left edge of your text to the x value you give in text(). If you write textAlign(RIGHT, TOP), you are aligning the right edge of your text to the x value and the top edge of the text to the y value.”

hopefully that’ll be helpful.
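to see the two-argument form in action, here’s a tiny hypothetical p5.js sketch (assuming the fix is in place, so both lines center vertically on y, not just the first one):

```javascript
function setup() {
  createCanvas(400, 400);
  textAlign(CENTER, CENTER);
  // the whole two-line block is centered on the point (200, 200):
  // horizontally via the first CENTER, vertically via the second
  text("line one\nline two", 200, 200);
}
```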

instructions for creating the pull request

p5 development

preparing a p5 pull request