WIP: who are the guantanamo detainees?

do you know? i don’t know. until a few days ago, i’d never taken the time to look through the files that wikileaks published a few years ago. wikileaks has these inmate profiles indexed by ISN (internment serial number) or name.

part of my interest in this is about learning how someone ends up in the atrocious place that is guantanamo without due process. how bad are these bad guys? what kind of bad are they? are they like us? did we have a role in making them? in starting to read these profiles, i’m learning about these men and am particularly interested in the “prior history” section of these documents.

another part of my interest in this data set is something about chelsea manning not being in solitary in vain. if she risked her life to leak this information, we sure as hell better do something good with it, right?

this is a WIP, so i’ll continue to update this as i go…


wikileaks.org interface



making a javascript array from ISN and “prior history”
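once the “prior history” sections are scraped, i imagine the array looking something like this. (the ISNs and history text below are placeholders i made up for illustration, not data from the actual files.)

```javascript
// placeholder data, made up for illustration; real entries would come from
// the wikileaks profiles, keyed by each detainee's ISN
const detainees = [
  { isn: 'US9XX-000001DP', priorHistory: 'placeholder prior history text' },
  { isn: 'US9XX-000002DP', priorHistory: 'placeholder prior history text' }
];

// in this shape, the "prior history" sections are easy to search and filter
const withHistory = detainees.filter(d => d.priorHistory.length > 0);
```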



first draft

3 next steps

so far, i’ve met with cassie and surya and spent a handful of hours working with each of them.

it seems like my work with the editor this semester will be fixing bugs to help me learn the code base. ultimately, maybe i can think about features to add.

my work with surya will be research to support a first webisode around his concept. what’s the [political] context in which we’re teaching technical skills? i’ll work with him on thursdays and pore over docs from the eff, the intercept, tom’s understanding networks class, and other sources as necessary.

i see my project within these projects as thinking about what it means to build stuff with people; this project is about collaborative projects as a medium, as process. an important parallel for me is grassroots organizing. the important things they share are: (theoretically) low barrier to entry, no formal education prereqs, shared values, working towards a tangible thing. i’m interested in my process of working on these projects as one leg of a larger framework/argument about what tech can be for.

3 next steps:

  • sketch out [in words] the main argument of this leg. something something scalable collaboration.
  • pick 3 writers, thinkers, or artists: ivan illich, [x], [x]
  • make [x] wordless gifs or sketches around concepts in the books

project statement/how i want to spend the semester:

my brain dump/storm raised a bunch of formats, sources, and projects i’m interested in. i’m really interested in the role of the web in producing, revising, distributing ideology and history. that’s too much for one semester. the ideas i want to back-burner for later are:

  • click strike [books: ]
  • site specific thing at national borders [books: wendy brown, harsha walia, ]

this semester, i’d like to articulate, in words and pictures, something about certain kinds of tech as a medium for collaborative process.


a funny thing happened on the way to the nsa(.gov)


we talked about traceroute last week in understanding networks. this led me down a thousand rabbit holes, including this instructive powerpoint presentation, featuring:

Random Traceroute Factoid

“The default starting port in UNIX traceroute is 33434. This comes from 32768 (2^15 or the max value of a signed 16-bit integer) + 666 (the mark of Satan).”
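the arithmetic checks out:

```javascript
// checking the slide's arithmetic. (small nitpick: 2^15 = 32768 is one more
// than the max value of a signed 16-bit integer, which is 32767.)
const base = Math.pow(2, 15); // 32768
const port = base + 666;      // 33434, traceroute's default starting port
```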


but also more applicable things like address naming conventions and how to notice different possible relationships between network types (p21).


i wanted to see if anything interesting happened when i ran traceroute nsa.gov.

answer: not really. i ran whois on some of these, but they’re just regular ol’ cloud companies in new york and massachusetts and colorado. i guess this makes sense because the nsa probably doesn’t store all of our stuff on the same server that hosts the nsa.gov website.

from there, i tried to find an isis website, thinking that that might be a better way to find an nsa server along a traceroute. it was surprisingly difficult for me to find one via (english) google or twitter. i did learn about a quarterly isis magazine, but it had no web presence to speak of. #printnotdead

googling “nsa traceroute” pulled up a wired article from 2006 which lists the address of the folsom street web carrier hotel in san fran where the nsa was mirroring everyone’s communication. the article said to look for the string tbr2-p012201.sffca.ip.att.net in your traceroute or, really, any att.net string. still no dice. but, per the powerpoint presentation above, i was able to tell that the “sf” in there probably stands for “san francisco”.

it felt like i’d hit a dead end, so i read the manual page for traceroute to see if there were any arguments i could add to my traceroute command to give me more information. -D looked promising:

“When an ICMP response to our probe datagram is received, print the differences between the transmitted packet and the packet quoted by the ICMP response. A key showing the location of fields within the transmitted packet is printed, followed by the original packet in hex, followed by the quoted packet in hex. Bytes that are unchanged in the quoted packet are shown as underscores. Note, the IP checksum and the TTL of the quoted packet are not expected to match. By default, only one probe per hop is sent with this option.”

i dunno, maybe looking at the packet contents could be helpful? here’s a sampling:

from the definition in the manual page, we know the structure of this blob of letters and numbers is

[human-readable(ish) header]

[outbound packet contents]

[inbound packet contents with existing stuff as underscore and new stuff denoted]

so, a few interesting things, although not really what i was looking for:

  • the tl field changes with every probe sent out, and the quoted copy always comes back as 1. this must be the “time to live”: traceroute raises it by one per hop, and the router that answers is the one the packet reached just as its ttl ran out.
  • the bytes under sum always leave as 0 and come back as something slightly different. this looks like the IP header checksum: it covers the whole header, ttl included, so every router that decrements the ttl has to recompute it.
  • the ts always leaves as “00” and comes back as “08”. maybe 00 means outgoing and 08 means incoming? idk, tbh.
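the sum guess can be sanity-checked with a toy version of the internet checksum (RFC 1071): the IP header checksum covers the ttl byte, so any router that decrements the ttl has to recompute it. the header words below are made up for illustration.

```javascript
// toy internet checksum (RFC 1071): ones'-complement sum of 16-bit words,
// folded back into 16 bits, then complemented
function checksum(words) {
  let sum = words.reduce((acc, w) => acc + w, 0);
  while (sum > 0xffff) sum = (sum & 0xffff) + (sum >> 16);
  return ~sum & 0xffff;
}

// ten 16-bit words of a fake IPv4 header; word 4 is [ttl byte, protocol byte],
// word 5 is the checksum field itself (zeroed while computing)
const header = [0x4500, 0x0054, 0x1c46, 0x4000, 0x4001,
                0x0000, 0xc0a8, 0x0001, 0xc0a8, 0x00c7];
const sumBefore = checksum(header);

header[4] = 0x3f01; // a router decrements the ttl: 0x40 -> 0x3f
const sumAfter = checksum(header);
// sumBefore !== sumAfter: the checksum comes back different at every hop
```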

eventually, i somehow ended up at what i think is the traceroute spec, which sort of verified pieces of this. hopefully, i can get some more insight during class this week.

see, i told you! rabbit holes…

stupid network & mother earth mother board

the dawn of the stupid network

the difference between smart networks and stupid networks: in smart networks, scarcity of infrastructure & bandwidth mandated maximizing the efficiency of bits and creating services in the network, expansion was expen$ive, and endpoints (telco terminals, telephones) were *just* endpoints. in stupid networks, bandwidth becomes abundant and cheap, bits go in one end and out the other, and processing happens at the endpoints.

design assumptions of telephone networks: “Theoretically, a local exchange can serve up to 10,000 telephones, e.g., with numbers 762-0000 through 762-9999. The design assumption, though, is that only a certain percentage of these lines, maybe one in 10, are active at any one time.” when more people use their phones at once, or when the internet happens, this assumption breaks the system.
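the design assumption, as arithmetic (the 1-in-5 internet-era figure below is my own made-up illustration, not from the article):

```javascript
// the design assumption above: 10,000 lines, with roughly 1 in 10 assumed
// active at any one time
const lines = 10000;
const provisionedCalls = lines / 10; // 1000 concurrent calls provisioned for

// long-lived dial-up internet sessions break the assumption: if 1 in 5
// subscribers stays connected, demand is double what the exchange was built for
const internetEraCalls = lines / 5;  // 2000
```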

interesting to note the revenue-generating/value-adding things these companies came up with:

  • call routing
  • caller options (press 1 for…)
  • database lookup based on number you call from

“Stupid Networks have three basic advantages over Intelligent Networks – abundant infrastructure; underspecification; and a universal way of dealing with underlying network details, thanks to IP (Internet Protocol)”

“repertoire of different data handling techniques” makes it possible to handle lots of different kinds of traffic on the same infrastructure.

mother earth mother board

jesus christ, neal stephenson is obnoxious. but once you get past that:

“The cyberspace-warping power of wires, therefore, changes the geometry of the world of commerce and politics and ideas that we live in. The financial districts of New York, London, and Tokyo, linked by thousands of wires, are much closer to each other than, say, the Bronx is to Manhattan.”

“wires have never been perfectly transparent carriers of data; they have always degraded the information put into them.”

“(the distinction between countries and companies is hazy in the telco world)”

“Without rubber and another kind of tree resin called gutta-percha, it would not have been possible to wire the world.”

“Virtually all communications between countries take place through a very small number of bottlenecks, and the available bandwidth simply isn’t that great.”

as opposed to cable over land, where air does not interfere because it’s a bad conductor, cable underwater has this technical challenge: “the ocean serves as the ground wire.”

“Daily and Wall preside over this [FLAG] operation, which is Western at the top and pure Thai at the ground level”

“Nynex and AT&T have their offices a short distance from each other in Manhattan, but the war between them is being fought in trenches in Thailand, glass office towers in Tokyo, and dusty government ministries in Egypt.”

“Cables have always been financed and built by telecoms, which until very recently have always been government-backed monopolies.” privatization of infrastructure was a game-changer.

“In deep water, where the majority of FLAG is located, the work is done by cable ships and has more in common with space exploration than with any terrestrial activity.”

everything goes in a Big Room Full of Expensive Stuff. “Early cable technicians were sometimes startled to see their cables suddenly jerk loose from their moorings inside the station – yanking the guts out of expensive pieces of equipment – and disappear in the direction of the ocean, where a passing ship had snagged them.”

“The first cables carried telegraphy, which is as purely digital as anything that goes on inside your computer. The cables were designed that way because the hackers of a century and a half ago understood perfectly well why digital was better. A single bit of code passing down a wire from Porthcurno to the Azores was apt to be in sorry shape by the time it arrived, but precisely because it was a bit, it could easily be abstracted from the noise, then recognized, regenerated, and transmitted anew.”

cue mr. shannon’s gorgeous drawring:


initial ramblings

this semester, i’m taking a project development studio. i’d like to use the time and structure of this class to think and write about some projects i’ll be working on:

  • the p5-web-editor with cassie
  • a project about networks with surya

these are both very different projects, but i’m interested in thinking about what draws me to them and what they might have in common.

let’s start with the p5 web editor. this is an open-source tool for learning how to code with the creative coding language, p5.js.

the project with surya is different. this is a teaching and advocacy tool meant to make the process of learning about networks engaging and fun. i’ll be working on web episodes and thinking a lot about audience and tbd stuff.

the things that excite me most about any project are 1. the ideas behind the project and 2. who i get to work with. i am totally thrilled to work with both of the people leading these projects because i think they’re thoughtful and creative and kind and really smart.

switching gears. the backdrop of everything always for me is kafka and judith butler and hannah arendt. since i read eichmann in jerusalem a million years ago, i have never stopped being haunted by the idea that what makes people do evil things is a lack of imagination, an inability to think. what leads to this state of affairs? what is the role of bureaucracy here? and ideology? where can i possibly intervene?  what special opportunities does the internet present? demand? what about code? collaborative projects?

both of these projects are open-source or have elements of open-source thinking. i want to use the time in this studio to get specific about the difference between “free” and “collaborative” and “open-source.” they are not all the same. further: i think part of my excitement about open-source comes from a belief in the mcluhan thing that the medium is the message. if we are collaborating, if we are thinking and teaching each other along the way, we cannot be doing harm. of course, this is not always true. i wanna think about when it is true and when it’s not true.

something else these projects share is that they’re teaching tools. i believe in thinking, which is different from education, as an antidote to violence and as a way toward healthier relationships. healthy relationships feel fractal to me. a healthy public is made up of people who have healthy relationships with their partners and peers. people who have healthy interpersonal (and inter-things/beings that aren’t people) relationships have healthier communities, publics.

i want to think about the role of teaching tools within all this. i’m also interested in thinking about a reparative vs. paranoid reading of teaching: https://nonoedipal.files.wordpress.com/2009/09/paranoid-reading-and-reparative-reading.pdf

and i’ve been dying to read raymond williams on the structure of feeling which may or may not actually be applicable here.

barabasi open-source fail

over the course of completing our first series of class readings, i did an open-source fail: i forgot that the whole world is not a github repo and i shared a pdf of a section of a copyrighted book with our class. tom asked that i remove the pdf from our class email group out of respect for copyrights. i was surprised that i’d broken a rule and i contacted the nyu library, where i’d gotten the ebook originally, to find out more. here’s the response i got from the library’s legal specialist:


a few link hops away from the one she posted, i found a bunch of stuff about copyright and fair use. there’s my “mass e-mail to your class” right next to the high risk stop light:


this is all very curious to me. what’s the point of having the ability to easily download a portion or an entire book to pdf if it’s not to share the pdf? i know the correct answer is “to read or print out exactly one copy for yourself and yourself only!” but the reality is that sharing digitally is an affordance of having a digital file. counter arguments just don’t make sense.

what makes more sense is admitting that we have a bunch of real mismatches here: between the affordances of digital information and the needs of knowledge producers and distributors to be compensated for their work. surely, there are precedents re: how to deal with this problem. the camera and the printing press are also technologies for copying and sharing the work of a single person. i’d be curious to learn more about the histories of those and this question of copyrights in their wake.

another interesting thing i found:

“Copyright law provides a classroom exception in section 110(1) that allows instructors to display or show entire copyrighted works during the course of a face-to-face classroom session.”

i love a good loophole. would it count as “a face-to-face classroom session” to share a digital copy of a book as long as each page also contained a photo or video of the professor’s face? i’m only partly joking. my point is: how can a rule like that possibly hold up in this era of MOOCs? who is it there for anyway?

anyway. barabasi is brilliant. i’m excited for the next few chapters, which you can rest assured i will never share with anyone ever. these parts of the reading about the google outage were also interesting:

“Google previously suffered a similar outage when Pakistan was allegedly trying to censor a video on YouTube and the National ISP of Pakistan null routed the service’s IP addresses. Unfortunately, they leaked the null route externally. Pakistan Telecom’s upstream provider, PCCW, trusted what Pakistan Telecom’s was sending them and the routes spread across the Internet. The effect was YouTube was knocked offline for around 2 hours.”

“When I figured out the problem, I contacted a colleague at Moratel to let him know what was going on. He was able to fix the problem at around 2:50 UTC / 6:50pm PST. Around 3 minutes later, routing returned to normal and Google’s services came back online.”

update: i sent a snarky question back to the email librarian, to which she graciously responded.

“It’s why we fight so hard for open access. I encourage you and your classmates to make your scholarship OA and to encourage your professors to do the same. In the meantime, we provide access to what we can given the contractual restraints publishers put on us.

Welcome to the world (business) of scholarly publishing. Glad you’re fired up. Join us in fighting to (legally) change it.”

to-do lists for big problems => small pieces

one of the most useful skills i’m learning this summer is the ability to take seemingly seamless big problems and chisel them into smaller chunks.

the most recent example of this was adding support for min-vid from google’s main page. as i’ll write about in more depth shortly, min-vid uses the urlcontext and selectorcontext to parse a link to an mp4 to send to the min-vid player. but google LIES about its hrefs! for shame! this means that if you “inspect element” on a google search result, you’ll see a bunch of crap that is not a direct link to the resource you want. so i had to spend some time looking through all the gunk to find the link i wanted.

Screen Shot 2016-08-15 at 9.31.20 PM

when i looked at the actual href in its entirety, i noticed something interesting:


do you see it? the youtube link is in there, surrounded by a bunch of %2F‘s and %3D‘s. initially, i thought this was some kind of weird google cipher and that i needed to write a bunch of vars to convert these strange strings to the punctuation marks my link-parsing function expected. i wrote a regular expression to get rid of everything before https, then started the converting. it looked something like this:

at this point, i made myself a little to-do list. to-do lists make my life easier because i have a short attention span and am highly prone to rabbit holes, but am also really impatient and like to feel like i’m actually accomplishing things. the ability to cross tiny things off my list keeps me engaged and makes it much more likely that i’ll actually finish a thing i start. so. the list:

Screen Shot 2016-08-15 at 9.50.13 PM

thankfully, after cursing my fate at having to deconstruct and reconstruct such a ridiculous string, i found out about a thing called URI encoding. those weird symbols are not google-specific, and there are special functions to deal with them. decodeURIComponent took care of my first two to-do items. indexOf took care of my third. adding forward slashes to all my other selectorcontexts, to distinguish between the encoded hrefs on google and the un-encoded hrefs on other sites, took care of my last to-do.
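what the fix boils down to, with a made-up href in the shape of a google redirect link (the real ones carry a lot more parameters):

```javascript
// those %2F's and %3D's are standard URI percent-encoding, not a google cipher
const href = '/url?q=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DdQw4w9WgXcQ&sa=t';

// decode the percent-encoded pieces back into ordinary punctuation
const decoded = decodeURIComponent(href);

// find where the real link starts and where the next query parameter begins
const start = decoded.indexOf('https');
const end = decoded.indexOf('&', start);
const videoUrl = end === -1 ? decoded.slice(start) : decoded.slice(start, end);
// videoUrl is now the direct youtube link
```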


i am 1000% positive i would not have completed this task without a to-do list. thanks to mentor jared for teaching me how!

all the links that’s fit to save for later

jared has ~a million links for me to review in response to every 1 question i ask. i ask a lot of questions.

needless to say, we have been filing links away on an imaginary “to read later” list for several weeks now.

i’m starting an actual “to read later” list here, with the hope that i’ll make it back around to some of these:

to be continued…

draggable min-vid, part 1

since merging the css PR that john and i worked on, i’ve been digging into min-vid again. lots has changed! dave rewrote min-vid in react.js to make it easier for contributors to plug in.

why react.js? because we won’t have to write a thousand different platform checks anymore. for example, we’d have to trigger one set of behaviors if the platform was youtube.com and another set of behaviors if the platform was vimeo.com. this wasn’t scalable and it wasn’t very contributor-friendly. now, to add support for additional video-streaming platforms, contributors will just have to construct the URL to access the platform’s video files (hopefully via a well-documented API) and add the new URL-constructing code to min-vid’s /lib folder in a file called get-[platform]-url.js.
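a hypothetical get-vimeo-url.js, to sketch the contributor pattern described above. the module name, the regex, and the player URL are my guesses for illustration, not min-vid’s actual code.

```javascript
// one platform module = one function: take the page link the user clicked,
// return a URL the player can actually load
function getVimeoUrl(pageUrl) {
  // pull the numeric id out of a link like https://vimeo.com/76979871
  const match = /vimeo\.com\/(\d+)/.exec(pageUrl);
  if (!match) throw new Error('not a vimeo link: ' + pageUrl);
  // construct a playable URL from the id (endpoint is a guess)
  return 'https://player.vimeo.com/video/' + match[1];
}
// in the real add-on, the file would export this:
// module.exports = getVimeoUrl;
```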

so that’s awesome!

right now, i’m working on how to make the video panel draggable within the browser window so you’re not just limited to watching yr vids in the lower left-hand corner:

Screen Shot 2016-07-20 at 12.23.26 PM

john came up with a hacky idea for draggability where, on mouseDown, we’ll:

  1. create an invisible container the size of the entire browser window
  2. as long as mouseDown is true, drag the panel wherever we want within the invisible container
  3. onMouseUp, snap the container to be the size of the panel again.

the idea is to make dragging less glitchy by changing our dragging process so we’re no longer sending data back and forth between react, the add-on, and the window.

how to get started? jared broke down the task into smaller pieces for me. here’s the first piece:

Screen Shot 2016-07-20 at 12.25.41 PM

the function for setting up the panel size is in the index.js file. we determine how and when to panel.show() and panel.hide() based on the block of code below. the code tells the panel to listen for

  1. a message being emitted and
  2. the content of that message, in this case from the controls.js file:

then, do different stuff based on what the message said.

i added another little chunk in there which says: if the title is drag, hide the panel and then show it again with these new dimensions. the whole new block of code looks like this:

so we have some new instructions for the panel. but how do we trigger them? we trigger the instructions by creating the drag function within the PlayerView component and then rendering it. this code says: whenever our new custom event fires, send a message. the content of the message is an object with the format {detail: obj}, in this case {action: 'drag'}. then, render the trigger in a <div> in an <a> tag.
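here’s a stand-in sketch of that message flow. the names and the 640×360 “drag mode” size are my guesses, not min-vid’s actual code; the real thing goes through the add-on SDK’s panel and port objects.

```javascript
// a minimal stand-in for the add-on's message channel
const listeners = [];
const panelPort = {
  on(fn) { listeners.push(fn); },
  emit(msg) { listeners.forEach(fn => fn(msg)); }
};

// the add-on side: pick the panel size based on the message's action
let panelState = { visible: true, width: 320, height: 180 };
panelPort.on(function (msg) {
  if (msg.action === 'drag') {
    // "hide the panel and then show it again with these new dimensions"
    panelState = { visible: true, width: 640, height: 360 };
  }
});

// the react side: clicking the trigger sends {action: 'drag'}
panelPort.emit({ action: 'drag' });
```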

and we style the class in our css file:

so we get something like this, before clicking the red square:

Screen Shot 2016-07-20 at 1.19.37 PM

and after clicking the red square:

Screen Shot 2016-07-20 at 1.19.45 PM

next, i have to see if i can make the panel fill the page, then only drag the video element inside the panel, then snap the panel to its position on the window and put it back to its original size, 320 x 180.

playing with react.js

i taught an intro to p5.js workshop earlier this summer, and a big part of what made it possible was jess klein and atul varma’s widget, which lets people play with p5.js sketches and see their changes in the browser without having to refresh the page.

i think it’s an amazing teaching tool, but it currently doesn’t support p5 libraries like p5.dom. this means we can’t incorporate video capture into widget views. ultimately, i’d love to contribute to the widget project and use p5.dom support to create some interactive documentation for kyle mcdonald’s computer vision examples. it’d also be great for making computer vision a little more accessible to beginners.

to contribute to the widget, i need to know some react, so i’m playing around with atul’s react tutorial for p5 programmers. react seems really powerful in ways that i don’t totally understand yet. at this point, i’m trying to figure out how everything is wired up in a react program—syntax, which APIs are being referenced where and how, etc.

this code & video is just a slight change from the tutorial. from what i can tell, circle and rect come from SVG, which is its own spec with its own set of shape elements that react can render. the onClick attribute is react’s version of a DOM event listener; i changed it from what was in the example, onMouseMove. things like clientY and clientX are part of another API, the DOM’s MouseEvent, so they’re built in to the browser and have specific parameters/syntax rules that you have to follow. none of this is required for react programs, but this exercise is helpful for starting to learn structure and what references i might look to when i get stuck.

and here’s the example from the tutorial. the index.html file is the same, but there are some slight differences in the react-sketch.js file.