Cyberpunk & Representation

Cyberpunk is a sub-genre of science fiction, typically set in the future. It combines predictions of an overwhelming tech culture with some form of radical change or breakdown in the social order. The term was coined by Bruce Bethke in his 1983 short story "Cyberpunk" and popularised by William Gibson's 1984 novel Neuromancer. Science fiction author Lawrence Person describes the classic cyberpunk character as marginalised and reclusive: living on the outskirts of a dystopian society, they encounter rapid technological change, including modification of the human body and an omnipresent datasphere of computerised information. In the late 1980s, "cyberpunk" was also a label given to malicious hackers who illegally access computer networks.

In Gibson’s short story Johnny Mnemonic (1981), characters are mutated in ways that suggest dystopian visions of the reconstruction of social identities. David Thomas uses the term “technophilic body” to describe functional and aesthetic transformations that reconstitute the organic and sensorial architecture of the human body. Visions of future systems have been explored through cyberculture since the 1960s, in literature and on screen. Many aspects of pop culture harness the ideologies presented in cyberpunk realms, generating significant representations of mainstream Internet culture. An early example would be The Jetsons (1962), a family who live in a futuristic utopia. More recently, Matt Groening and David X. Cohen have adapted and explored cyberpunk themes in the TV series (and later video game) Futurama (1999).

Set on Earth in the year 3000, Futurama is a classic cyberpunk parody with prominent libertarian, consumerist and anti-corporatist elements. Low socio-economic groups are segregated from the rest of society, a vast majority of characters have body modifications, and androids, robot rebellions, layered cities, cybercriminals, evil megacorps and a cyberspace all feature, with some episodes depicting societies that are completely controlled by computers.


The Public Sphere of Imagination

When I was younger, recess and lunch in the school playground were a crucial space, where discussions of canteen chicken nuggets, annoying teachers and the latest toys occurred. Communication is part of human nature. It is instinctual; we want to talk openly about our interests and concerns, and it’s a good feeling when we share common views with someone (Hewett, 2011, p. 3). As we progress through life, we are drawn to spaces where we can discuss collective social interests; this has come to be known as the modern-day Public Sphere. In the 18th century, middle-class men thrashed out social and political squabbles face-to-face in local salons and coffee houses. Philosopher Jürgen Habermas described the Public Sphere as ‘a network for communicating information and points of view’ (1996). In the 21st century, when we see the words “network” and “communication” we don’t tend to grab our newspapers and head down to the local café for lengthy debates with our neighbours. The modern-day Public Sphere(s) is tucked into our pockets, or sitting on our desks. We can now digitally access what seems like an infinite number of platforms to receive, respond to and circulate information.

Every day we consume mediated images and expectations of life through the popular media discourses presented in entertainment programs. This epitomises the notion of a Cultural Public Sphere, where aesthetic and emotional modes of communication are articulated affectively (McGuigan, 2005, p. 435). While traditional journalistic institutions distract audiences from “serious” cultural concerns, viewers respond to and reflect on personal, public and political interests through social networking platforms. TV shows such as the ABC’s Q&A become agents of critical intervention in the mainstream media by providing a forum for public debate. The program allows viewers to hashtag and tweet in real time, highlighting the value of independent criticism of widespread dissent and ideologies in the Public Sphere (McGuigan, 2005, p. 440).




#qanda tweets acting as commentary about issues being raised.

With anybody able to participate in the Public Sphere, modern anxieties arise over the deterioration of balanced discourse about public affairs. Some critics argue that mass media has expanded the Public Sphere, while others state that it has transformed the nature of publicness altogether (Munday & Chandler, 2011).


Habermas, J 1996, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy, MIT Press, Cambridge, MA

Hewett, D 2011, The Nature of Human Communication, Sage Publications, pp.3-8

McGuigan, J 2005, ‘The Cultural Public Sphere’, European Journal of Cultural Studies, vol. 8, no. 4, pp. 427–443

Munday, R & Chandler, D 2011, ‘Public and Private Spheres’, Dictionary of Media and Communication, Oxford University Press

Visionaries and Notions of Cyberspace

Cyberspace: ‘A term introduced by the novelist William Gibson in 1984 to describe an abstract virtual space created in part by networks of interconnecting computers and in part by the human imagination.’ (Chandler & Munday, Dictionary of Media and Communication, 2011)

Cyberspace and virtual reality, which William Gibson famously regarded as a consensual hallucination, remain contemporary concerns. We have categorically labelled demographics, aiming to represent and stereotype behaviours associated with technology and the Internet. However, this assumption of generational difference in media consumption is somewhat inaccurate. Generation X and Generation Y are simply measured by the way they have adapted through mass transitions in technology and media forms. Generation Z (“the Google Generation”) may only ever have been surrounded by a digital environment, but this does not necessarily mean they are more or less dependent on technology, or that their media consumption is higher.

The way I see it, all demographics are in the same boat, just with a different view over the edge. Generation X has had the privilege of growing up through decades of technological convergence, and whether they choose to keep up to date and participate ultimately comes down to personal choice. Vannevar Bush examined past inventions in his 1945 article “As We May Think”, reflecting that even then scientific developments had benefited humanity in ways never thought possible. The stereotype that Gen X is inadequate with new technologies perhaps evolved from the majority being comfortable in their already non-technologically-dependent lifestyles: they lived for so long without tech-savvy gadgets that they may not see the need or the convenience. Gen Y were familiarised with new technologies at a formative stage of mental growth, presumably binding them more dependently to new forms of media, in particular social networking. The emergence of selfies is an example of Gen Y’s obsession with self-representation and the need for constant validation, and Gen Y’s behaviour on networking platforms typically revolves around maintaining attention-worthy online and offline identities.


This paves the way for Generation Z. The concern here is that, without knowledge of life with no online profile, social identities are being constructed entirely on social media platforms. Furthermore, educational concerns are at an all-time high, as the Google Generation’s general attitude toward online content is that there are infinite ‘facts at their fingertips’. The immense amount of information being scanned through immobilises creative and independent thought. Gen Z treats search engines such as Google as Internet brands in their own right; 2014 was the fourth year in a row that Google topped the most-trusted Internet brand list.

Research libraries have no option but to adjust to the enormous transformation in the way scholarly information is sought and used electronically. Social media platforms have conditioned the young to expect dynamic and personalised content experiences, which research libraries are struggling to compete with. The shift from the library as a physical space to a virtual environment has immeasurable implications: with high demand for around-the-clock accessibility and immediate answers, librarians are anxious about having to match the services provided by Google. The rise of social media is also altering the nature and fabric of the World Wide Web. We have strayed from an Internet constructed by certain authorities to one where content is generated by millions. This is of specific interest to librarians and publishers, as users’ ability to create and share their own content blurs the line between information producers and information consumers.

‘In a real sense, we are all Google generation now: the demographics of Internet and media consumption are rapidly eroding this presumed generational difference.’ (Information Behaviour of the Researcher of the Future: A CIBER Briefing Paper, 2008, p. 21)

Informed Audiences RE: WikiLeaks

‘You can’t publish a paper on physics without the full experimental data and results; that should be the standard in journalism.’ (Julian Assange, 2010)

Journalists are expected to abide by a code of ethics in the distribution of content, particularly the notion of objectivity. While an absolutely neutral transmission of objective reality is impossible, in a journalistic sense objectivity is a method used to test interpretations for bias or inaccuracy, much as scientists test their hypotheses about phenomena. Julian Assange, self-described “information activist”, believes that media convergence has blurred the traditional philosophies of accurate and factual reporting, stating that the truth should always be presented unvarnished and verifiable. Assange launched WikiLeaks, a non-profit website, in 2006, aiming to disseminate sensitive material for the public to use as a tool for making intelligent and informed decisions. For audiences, it is just as important to be objective when receiving information. Traditional and digital forms of journalism each have pros and cons with regard to ethical reporting; both vary in the delivery of material, yet the significance lies with you as the reader. The advantages of receiving information online are that you can re-read, follow links and images, and cross-check information at your own pace (click-click-click). This idea was one of the driving forces behind WikiLeaks, and when Assange released the Collateral Murder video in its pure and unpolished state, it was confronting for many viewers. It helped open the public’s eyes to the reality of war, while causing a moral panic among government and media agencies.

Nonetheless, consuming content today is a double-edged sword. Citizen journalism means that anyone can post freely online (yay), but the information you’re reading may not be accurate: an online author often has no academic credentials and may not have used reliable sources (nay). This element of the digital economy is what raises ethical concerns about journalistic integrity. Audiences must understand the significance of factual and balanced news reporting while remaining objective and critical themselves.


We are all constantly consuming information, sometimes subconsciously. When you’re watching the news or reading the paper, it is worth keeping in mind that the Fourth Estate enforces rigorous editing and presentation practices. This process may skew the representation of whole stories, individuals and groups of people, ultimately influencing the attitudes of consumers.

If You Can Think of It – It’s About to Happen.

In the not-so-distant future, everyone and everything in the world will be connected to the Internet. This phenomenon is already in its early stages, and is known as the “Internet of Things” (Kevin Ashton, 2009). Approximately two billion people use the Internet right now, yet our ability to produce information has far exceeded our ability to control it. We know the Internet has extreme potential; now it’s just a matter of developing an effective way to harness it. Technologist John Barrett states that “every major global government, and every major economic block, is investing heavily in the IoT”.

Since the emergence of the Internet, we have recognised a unique sense of harmony across the dimensions of life. Now, by accessing real-time data about the way systems interact, we can better understand global dynamics and thus make more intelligent decisions. From space, the world is visible as a neural network with cities as nodes, a literal image of the fact that we are a system of systems. We can see it, hear it and capture it: the world has virtually developed a central nervous system, and although it is early days, the planet is speaking to us. Ongoing accessibility and innovation make for a very efficient society, and with the matrixing of services we will generate more resilient systems.


The Internet of Things cannot be simply explained, so I recommend watching this lecture by Dr John Barrett. Barrett describes the Internet as a digital cloud or universe, 4,000 exabytes in size (whoa). All of our lives are about to change as the physical world merges with the Internet. We will be able to control and communicate with everything from anywhere: goods, objects, machines, appliances, buildings, vehicles, animals, plants, soil and even humans will become part of the IoT (we kind of already are). The possibilities are restricted only by our imagination… so buckle your seat belts, hold your horses, and put down your smartphones. Actually, pick them back up, because soon you will be able to point your device at anything or anyone and learn as much as you can about it through embedded circuits. Barrett predicts, “Facebook will look like a minor event”. So if you were concerned about privacy issues on social media… think again. One major concern regarding the IoT is the devalued notion of privacy: Google has the potential to become a real-life search engine, as everything will be tagged and locatable, and can give us information about itself and its surrounding environment (via RFID, Radio Frequency Identification).
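To make the "everything tagged and locatable" idea concrete, here is a minimal sketch of the kind of lookup an RFID-style system enables: a reader reports a tag ID, and a registry maps that ID to information about the tagged object. The tag IDs, registry entries and function names below are invented purely for illustration; real RFID deployments involve readers, middleware and standardised tag-data formats far beyond this.

```python
# Toy model of an IoT lookup: a scanned RFID tag ID is resolved to
# information about the tagged object. All IDs and entries are invented.

RFID_REGISTRY = {
    "E200-3412-0001": {"object": "office chair", "location": "Level 3, Room 12"},
    "E200-3412-0002": {"object": "delivery van", "location": "Depot A"},
}

def describe_tag(tag_id: str) -> str:
    """Return a human-readable description of a scanned tag, or a fallback."""
    entry = RFID_REGISTRY.get(tag_id)
    if entry is None:
        return f"Unknown tag {tag_id}"
    return f"{entry['object']} @ {entry['location']}"

print(describe_tag("E200-3412-0001"))  # office chair @ Level 3, Room 12
```

The point of the sketch is the architecture, not the code: once physical objects carry unique identifiers, "searching the real world" reduces to exactly this kind of ID-to-metadata lookup, at planetary scale.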


Another major concern is that if everything in the world is connected, issues of terrorism and hacking will be magnified. The IoT will be extremely vulnerable, creating immense opportunities for the security software industry. This may seem frightening and preposterous, but it is a reality. Pre-schoolers are now learning on iPads; young children brought into this technologically dependent world will embrace the IoT effortlessly. However, I think it will take us (Gen X and Y) some time to get used to.

Apple & Android: Different Ideas, Great Success.

The two hottest smartphones on the market are the Android and the iPhone, and with them comes the battle of locked versus generative appliances. Both are successful in appealing to different tastes, but there has been much debate over whether one is better than the other. Ultimately, I feel it comes down to personal choice. If you’re a tech-wiz, or simply enjoy being able to fiddle with every minute feature of your phone, an Android is the appropriate choice: it allows you to take control of, and responsibility for, the usage choices you make (via rooting), whereas the iPhone is a ‘sterile’ or closed, locked device.


The Internet revolution challenged copyright laws, with users freely downloading music, applications, images and software (pretty much anything) in ways that proved impossible to manage. Apple attempts to prevent illegal activity on the iPhone by controlling it as a locked appliance, so a newly purchased iPhone comes tethered to Apple’s desires. To ensure this, Apple has created complementary programs to use in conjunction with the iPhone, such as the iTunes Store and a walled garden of applications (the App Store). This means Apple has complete control over the platform, user and content. It’s also a bonus for Apple, which takes a 30 per cent cut of everything sold in its App Store of almost one million applications. Some would argue that denying the user complete control is a negative, but the set features Apple provides seem to please the bulk of the smartphone market; not everybody cares about the fiddly elements of their phone, and many prefer the simple layout Apple offers. The Android, by contrast, is an example of a generative and free platform, with an open garden of applications. The two ideologies are completely opposite: Apple states that locking down options is for the audience’s own good, whereas the Android market believes users take responsibility for their free choices. Nevertheless, it is possible for iPhone users to “jailbreak” the device and gain ‘root’ access to the code, which allows complete control over the hardware and software.
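To see what the 30 per cent commission means in practice, here is a small worked example. The 30 per cent figure comes from the discussion above; the price and sales numbers are invented for illustration.

```python
# Worked example of the App Store revenue split: Apple keeps 30%,
# the developer keeps 70%. Prices and unit counts are invented.
APPLE_COMMISSION = 0.30

def developer_revenue(price: float, units_sold: int) -> float:
    """Developer's share of gross sales after Apple's 30% commission."""
    gross = price * units_sold
    return round(gross * (1 - APPLE_COMMISSION), 2)

# A $0.99 app selling 10,000 copies grosses $9,900;
# the developer keeps 70% of that.
print(developer_revenue(0.99, 10_000))  # 6930.0
```

Multiplied across almost a million applications, this per-sale cut is why control of the walled garden is itself a major revenue stream for Apple, not just a curation policy.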

Social Media – A Revolutionary Tool

The Arab Spring is a term for the revolutionary movements that began in the Arab region in 2010. What made the Arab Spring so unique was the use of social media to establish and promote uprising agendas, as these were the first collective movements in the Middle East since the Internet and social media revolutions. A journal article by Richard Lindsey explores the significance of social media during the Arab Spring, which allowed individuals to influence public opinion and gain international support through the global distribution of news. Lindsey argues that these social media techniques and procedures will shape future revolutionary tactics in globalised societies, though to what degree remains questionable.

Sharing mass amounts of uncensored and accurate information through social networking significantly prompted the rise of Arab Spring activists. Not only did they gain the power to overthrow powerful dictatorships, but Arab civilians also became conscious of underground communities they could connect with. This may not have been possible without the significant role social media played: “We use Facebook to schedule the protests… Twitter to coordinate, and YouTube to tell the world,” said one Arab Spring activist from Egypt. Stories of shared grievances and hopelessness flowed across these networks, and it was this digital storytelling through social media that drew people into the streets to protest.



A blog post on PolicyMic describes social networks as helping to remove the psychological barrier of fear for Arab civilians by connecting them and sharing information. The consistent flow of news provided reassurance that they were not alone, and that others were experiencing hardship, prejudice and similar accounts of brutality. Hussein Amin, a professor of mass communications in Cairo, stated that social networks “for the first time provided activists with an opportunity to quickly disseminate information while bypassing government restrictions”. It is worth noting that the new social networking platforms were not the cause of the Arab Spring, but they will serve future revolutions as tools of communication.

Hacking for the Greater Good

The term “hacker” typically sparks negative connotations of selfish, invasive actions for personal gain. However, there are individuals who hack for a global purpose, known as “hacktivists”. An essay by Joel Nixon defines hacktivism as the “use of technology to promote political ends, such as freedom of speech and the right to information.” While hacktivists may violate the protection of data, Nixon states that the law should recognise they are seeking to benefit society through the distribution of documents exposing government corruption. Nevertheless, the law should remain stringent on citizens who seek personal gain or profit by attacking global sites and companies.

At present, hacktivists are using their abilities not only to benefit citizens but also to expose corporate nepotism and corruption. One key objective for hacktivists is to make academic sources held by universities and online libraries publicly available.

An example is the case of Aaron Swartz (1986–2013), whose aim was to benefit others rather than himself, which Nixon describes as ethical hacking. Swartz downloaded four million publicly funded JSTOR articles with the intention of distributing them freely among MIT students, resulting in fraud charges.


One could argue that hacktivists such as Swartz, who use their skills for a constructive purpose, should not be convicted. On the other hand, some abuse their skills for unethical purposes. Barrett Brown, often labelled a spokesperson for the hacking group Anonymous, sought political advantage by distributing the credit card information of Stratfor operatives and threatening a federal officer. As a result, Brown was charged not with committing the hack itself, but with obstructing justice and transmitting stolen credit card information. Evidently, existing laws don’t specifically punish the act of hacking, but regulate the ownership and dissemination of illegally obtained content. It is therefore important to note the disparity between ethical cases of hacktivism and online fraud.

Hacktivism is a significant issue because it is closely associated with two main rights within a democratic country: freedom of speech and the right to information. People such as Aaron Swartz, who intended to expose corruption through the dissemination of data, can be considered activists who aim to benefit not themselves but others. A blog post in The Washington Post states that Swartz also aimed to produce an understanding of the powerful influence the Internet can have in shaping popular culture. Swartz stood out from other hacktivists because he was identifiable; he existed inside and outside the system, striving to advance societal change.

The Significance of Social Media

Interaction is a significant aspect of human culture. An article by Mike Laurie investigates the different ways social media has changed us. Over time, many forms of communication have evolved, from inconvenient, labour-intensive technologies such as Morse code and carrier pigeons to instantaneous connections through wireless devices. Rather than posting a letter or buying a newspaper, we can now share, produce and circulate endless amounts of information in simple and effective ways.

Sceptics consider social networking to be straining society with regard to social etiquette and identity. However, I would deem these to be issues within the media as a whole, not just social media. Consider a teenage girl reading a magazine: the collaboration of articles and images produces something to the effect of “Wear this. Wear that. Act like this around boys. If you’re thin and pretty you will be happy and popular.” In this sense, the consumer has only one option: to consume. And while these same messages may be sprawled across the Internet, we are no longer lazy consumers of passive messages; we are active participants. Social media is about being connected, engaging with old friends and creating new experiences. Instead of being limited to the information in a 25-page magazine, we can now explore what feels like infinite amounts of content. Laurie describes the time before the Internet as one when learning was limited by poor literacy and lack of access to books. If “knowledge is power” and you have access to continuous information distribution, knowledge is legitimately at your fingertips.



An article by David Wallace outlines statistics on the influence social media has had beyond socialising: employment, news, law enforcement, education, political participation, the economy, the music industry and marketing systems have all been prompted and enhanced through social media. A report by Pew suggests that social networks have encouraged younger generations to be more involved in political issues, a fine example of society being more interested in and informed about the world around us.

Through citizen journalism comes the rise of “gatewatchers”, where user-generated content flows freely among platforms. Axel Bruns (2003) states that social networks fabricate participant communities through various understandings and interpretations, and argues that blogging should be recognised as a significant form of journalism. Online gatewatchers may actually complement the mainstream journalism industry through the diversity of discussion and debate, no longer limited by the “gatekeeper”.