The theme for this year's City of Lights was Kings and Queens.

Some shots of the making process: Level 2 worked in the studio as well as with local schools.

Level 2 have recently finished a collaborative project with Fashion students.

Students from both courses worked from the same predictive inspirational themes to create body adornment and swimwear.

We are hoping to hold a fashion show with the Fashion students at the Gylly Beach Cafe in the spring; in the meantime, here's a preview.

Suits you, sir

Who you looking at?



Flapper



 

Digital Killed the Video Candidate

Davide Panagia
Trent University

In no uncertain terms, this month’s American presidential election signals Barack Obama as the first New Media President. I will quickly concede that this is an odd christening. New Media technology was around in 2008, and indeed prior to that. And we have read and heard a lot about the potential for political mobilization thanks to the influence of social media technology (think here of Tahrir Square). But what the immediate aftermath of the 2012 presidential election secured for our cultural-political zeitgeist, and what contributed to our collective schadenfreude, is the conviction that the numbers were always on Obama’s side, despite what Karl Rove’s gut told him. The question that the Republicans are now asking themselves is how they could have been so wrong in their convictions.

The answer lies in the role of “big data” – something that many American voters had never heard of prior to this election. Big data is the resource that Nate Silver had at his disposal to make such accurate predictions as he did; it is also the basis for recent developments in humanities research collectively called the “digital humanities” (see the Mapping the Republic of Letters Project here); and it is, as best as I can tell, what will make such companies as Facebook even more astronomically rich than they already are, once Wall Street realizes that what Facebook is selling is not advertising space but massive amounts of information on consumer tendencies generated from our aggregated “hits.”


Obama’s re-election campaign ran a Navy SEAL-like digital operation: surgical, tactical, precise, and sophisticated beyond anything James Bond’s Q could imagine. It’s almost as if Obama took a page out of Althusser’s handbook: forgo ideology and go straight to science, so to speak.

Here is how Mike Lynch, founder and former CEO of the UK software company Autonomy, describes the scenario: “Obama’s was not an election won with a clever advertising campaign -- that is too 90’s -- and actually, that is what the Republicans did. This campaign was masterminded by data analysts who left nothing to chance. They revived the virtual campaign centre called mybarackobama.com from the ‘08 election (and thus highlighting the benefits of 'owning' your data), and encouraged supporters to volunteer their personal information, comments, post photos and videos, and donate funds. But this was only the starting point. In a multi-pronged engagement strategy, webmasters used supporters’ content to galvanize others and drive traffic to other campaign sites such as Obama’s Facebook page (33 million “likes”) and YouTube channel (240,000 subscribers and 246 million page views).”

And here is a report from ProPublica.org’s Lois Beckett on recently released information about Obama’s big data tactics.

Big data, data mining, and predictive analytics are the tools of network politics – and they run on data we all volunteer willingly, though usually not knowingly, whenever we click a link or like a post. In doing so, we make what amounts to a charitable donation, in the form of micro-data points, to massive organizations and global conglomerates who hold property rights to these data-hits; in exactly the way, for instance, that Facebook owns property rights to the pictures of my children when I “share” them on my Facebook page. In fact, it’s not the images they own, but the ones and zeroes that compose the data file which software converts into an image.
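
To make the point about ones and zeroes concrete, here is a minimal sketch (the file name is purely illustrative) of what a shared photo is from the software’s point of view: a sequence of bytes that a decoder turns into pixels.

```python
# Minimal sketch: a "photo" is just bytes until software interprets it.
# "kids_photo.jpg" is a hypothetical, illustrative file name.
with open("kids_photo.jpg", "rb") as f:
    data = f.read()

print(len(data))        # the whole picture is simply N bytes of data
print(data[:8].hex())   # JPEG files typically begin with ff d8 ff ...;
                        # a decoder reads such markers to turn the bytes into an image
```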


All of this to say what we all already know: that we live in another age of “information overload”, to invoke the Harvard historian Ann Blair’s helpful phrase. There is just “too much to know”, and so, as in other epochs of information overload, we devise strategies and structures for handling information, for organizing it, assessing it, and rendering it valid. The Dewey decimal system was one such invention, as was the transistor radio that converted noise into sound. Writing is, of course, another such technology; as are note-taking, highlighting, statistics, and practices of compiling. The number of such activities is infinite, and they usually end up multiplying the volume of information rather than rendering it manageable. If I were Malthusian, I would say that information grows geometrically; and if I were Kantian, I would say that information is the best picture of the mathematical sublime.

More to the point, networked humans are information-generating creatures. Every click on a keyboard or a mouse, for instance, produces new variations on ones and zeroes that, in turn, generate new data. As I am not a coder, I have no way of knowing how much information my composing, emailing, and posting of this TCC entry has generated. Suffice it to say that the minimum measure for digital information today is the kilobyte (10³ bytes) and that most new computers currently store terabytes (10¹² bytes) on their hard drives. Such technological developments, I would suggest, mark a revolution in artificial memory storage that has dramatic consequences for our contemporary networked condition. I will single out only a couple of such consequences.
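
For a rough sense of the jump in scale those units imply, here is a back-of-the-envelope sketch; the entry and drive sizes are assumptions chosen only for illustration.

```python
# Decimal units, as in the text: 1 kilobyte = 10**3 bytes, 1 terabyte = 10**12 bytes.
KILOBYTE = 10**3
TERABYTE = 10**12

blog_entry = 20 * KILOBYTE   # assumed size of a short text entry like this one
hard_drive = 2 * TERABYTE    # assumed size of a current consumer hard drive

print(hard_drive // blog_entry)   # 100000000: a hundred million such entries fit on one drive
```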

The first is that code is king. More specifically, source code is king – and code ‘sourcerers’ will inherit the earth. Our world is a world of software: a world of commands and orders relaying signals to go here and there, to execute this task and fulfill that algorithm. As the new media scholar Wendy Chun argues in her book Programmed Visions, “Code is executable because it embodies the power of the executive, the power to enforce that has traditionally – even within neoliberal logic – been the provenance of government” (27). This means that, in the very logic of the network, code is both sovereign and disciplinary master through the algorithmic production of self-regulating rules. (For those interested in reading more on these issues and developments, I highly recommend Tiziana Terranova’s Network Culture: Politics for the Information Age (PDF) and Jodi Dean’s Blog Theory (PDF).)
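
A toy example may make Chun’s point less abstract. The sketch below is hypothetical (the rate limit and the function are invented for illustration), but it shows the sense in which code enforces: once a rule is written as an executable statement, it applies itself to every case without further deliberation.

```python
# Hypothetical self-regulating rule: at most 5 posts per user per hour.
RATE_LIMIT = 5

def allow_post(posts_this_hour: int) -> bool:
    # The rule does not argue or persuade; it simply executes.
    return posts_this_hour < RATE_LIMIT

print(allow_post(3))   # True: the post goes through
print(allow_post(7))   # False: the post is refused, automatically
```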

Two: Code is indifferent to content. Nate Silver’s algorithmic acumen, which took the accuracy of electoral prediction to alchemical heights, was first honed on a forecasting system for Major League Baseball called PECOTA (an acronym for Player Empirical Comparison and Optimization Test Algorithm). It could just as easily still be called PECOTA, an acronym for Presidential Empirical Comparison and Optimization Test Algorithm.
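
The indifference is easy to see in miniature. The sketch below is not the actual PECOTA method; it is a hypothetical weighted-average forecaster with made-up numbers, meant only to show that the same code path serves batting averages and polling shares alike.

```python
def forecast(history, weights):
    """Weighted average of past observations; later entries count more."""
    return sum(h * w for h, w in zip(history, weights)) / sum(weights)

batting_averages = [0.281, 0.297, 0.305]   # made-up baseball numbers
poll_shares      = [0.490, 0.510, 0.520]   # made-up polling numbers
recency_weights  = [1, 2, 4]

print(forecast(batting_averages, recency_weights))   # same algorithm...
print(forecast(poll_shares, recency_weights))        # ...indifferent to what the numbers mean
```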


What this means is that traditional content-based politics – and our content-based habits of analysis – are being put under substantial pressure by our networked condition. It is precisely the largesse of big data that makes content almost impossible to handle. Hence the switch, as the new media scholar Lev Manovich notes, from the old new media language of “documents”, “works”, and “recordings” (signifying static, content-specific objects) to the big data language of “dynamic software performances”, referring to interfaces with real-time dynamic outputs, as in the case of video games or mobile apps. When we engage these virtual entities, we are not engaging static documents but interactive programs generating endlessly new instances of data. Manovich describes this shift best when he says that “in contrast to paintings, literary works, music scores, films, industrial designs, or buildings, a critic can’t simply consult a single ‘file’ containing all of work’s content.”
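
A small, hypothetical sketch of what Manovich means by a software performance: the invented feed below has no single file a critic could consult, because each interaction generates a fresh instance of output.

```python
import random

def feed(clicked_items):
    """Toy recommendation 'performance': output is generated anew on every call."""
    pool = list(clicked_items) + [f"suggested-{n}" for n in range(5)]
    random.shuffle(pool)
    return pool[:3]

print(feed(["photo", "status-update"]))   # one instance of the 'work'
print(feed(["photo", "status-update"]))   # very likely a different instance the next time
```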

This is how digital killed the video candidate. Mitt Romney lost, apparently against all Republican odds, because Romney was the video candidate: everyone, myself included, was distraught and surprised by Obama’s lackluster performance at the first debate. He just didn’t come out looking good; he didn’t project “good content” ... and Romney did. But Obama’s big data pundits must have persuaded him that he didn’t need to look that good: that the content of his appearance didn’t matter as much as everyone thought it ought to. What did matter was the mosaic generated by big data, which offers much more than any single exit poll, or intuitive hunch, can.


And it’s not that this mosaic is devoid of content. It is filled with content. But that content is mobile, interactive, terabyte content that the video candidate’s more traditional, kilobyte content analysis finds impossible to process.

No doubt, there will be much to celebrate and bemoan in this cultural-political shift. At the very least, I think this election should put pressure on political critics to rethink the criteria of political judgment and assessment on which they (we) have leaned for so long, because one thing is certain: the networked body politic is a much different beast than our conventional understandings of the Leviathan. But then again, rethinking the famous mosaic frontispiece of that equally famous book, maybe not.