Saturday 14 April 2012

The Digital Age



 
The Digital Age is sometimes referred to as the Computer Age or the Information Age. It is characterized by the free transfer of information among individuals and the ability to instantly find information that would have been impossible, or at least difficult, to obtain in previous generations. The Digital Age capitalized on advances in computer microminiaturization that occurred between the introduction of the personal computer in the late seventies and the computer reaching critical mass in the early nineties. It is the Digital Age that has allowed global communications and networking to boom and to shape modern society.


The Digital Age has had a great impact on the workplace. Individuals who once performed jobs that could be easily automated were forced out of their work and had to find employment in areas that weren't as easily automated. The Digital Age also meant that workers in non-automated sectors had to adjust to a global job market with increased competition. When computers are viewed as doing a job better and faster than humans, it becomes much more difficult to find and keep a good job. This is especially true in societies focused on industry. The Digital Age has therefore created a major dilemma for many people in the middle class. They once enjoyed stable employment, but during the Digital Age many lost their jobs and had to choose between moving up to higher-skilled work or down to lower-skilled, lower-paying jobs.

http://www.alternet.org/media/140982/%22more_better_faster%21%22%3A_how_our_spastic_digital_culture_scrambles_our_brains/

E-learning


 
E-learning is a new tool that allows nearly anyone to take advantage of a college education. E-learning provides individuals with the chance to learn anywhere and at any time, provided they have a working computer and an Internet connection. E-learning is available in several formats, including CD-ROM, network, intranet and Internet delivery. It can incorporate a variety of styles, including text, video, audio, animation and virtual environments. For some, E-learning has become a unique learning style that can go beyond the teaching found in a crowded classroom. It allows people to learn at their own pace and take a personal approach to their learning.


E-learning has many clear benefits that you should take advantage of. Perhaps the clearest are flexibility and cost effectiveness. However, there are also other, less obvious benefits you can enjoy when you choose E-learning over traditional classroom learning. E-learning allows you to set up a self-paced system: you learn at your own pace rather than at a pace established by the classroom teacher. It also allows you to learn faster; many E-learning courses are about fifty percent faster than a traditional classroom course. E-learning helps maintain a consistent message because you won't have multiple instructors teaching you the same subject in different ways. Lastly, E-learning can increase retention and give students a stronger grasp of the subject, because it can draw on multiple teaching methods to reinforce the message and make it easier to remember.

GPS or Global Positioning System


 
GPS, or the Global Positioning System, is a space-based satellite network that provides location and time information anywhere on Earth, in any weather, to anyone with an unobstructed line of sight to four or more of its satellites. The satellite network is maintained by the United States government and is free to use for anyone with access to a GPS receiver. GPS is a truly remarkable system that provides critical capabilities to the military, commercial and private sectors that can't be found in any other technological product.
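
To make the positioning idea a little more concrete, here is a small Python sketch of the underlying principle, not taken from any GPS library and using made-up beacon positions and distances: if you know where several reference points are and how far you are from each of them, you can solve for your own position. Real GPS does this in three dimensions using signal travel times, and also solves for the receiver's clock error, which is why it needs a line of sight to four or more satellites.

    # A minimal 2-D sketch of the positioning principle, not real GPS code.
    # The beacon positions and measured distances below are invented values.
    import math

    beacons = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # known reference points
    ranges = [50.0, 80.62, 67.08]                        # measured distances

    def locate(beacons, ranges, guess=(10.0, 10.0), iterations=20):
        """Refine a position estimate with a small Gauss-Newton iteration."""
        x, y = guess
        for _ in range(iterations):
            # Accumulate the 2x2 normal equations from each beacon.
            a11 = a12 = a22 = b1 = b2 = 0.0
            for (bx, by), r in zip(beacons, ranges):
                dx, dy = x - bx, y - by
                dist = math.hypot(dx, dy)
                ux, uy = dx / dist, dy / dist  # unit vector from beacon to estimate
                res = r - dist                 # measured minus predicted distance
                a11 += ux * ux
                a12 += ux * uy
                a22 += uy * uy
                b1 += ux * res
                b2 += uy * res
            det = a11 * a22 - a12 * a12
            x += (b1 * a22 - b2 * a12) / det
            y += (a11 * b2 - a12 * b1) / det
        return x, y

    print(locate(beacons, ranges))  # converges to roughly (30.0, 40.0)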

While GPS was at one time an entirely military project, it has become a dual-use system over the years, offering both military and civilian applications. GPS is today used for a wide variety of purposes ranging from commerce to scientific research. GPS also offers an accurate timekeeping system that regulates a number of everyday activities such as banking, cell phone operation and even the synchronization of power grids. So how can a system with so many benefits have problems?


The biggest issue with GPS is regulation of the radio spectrum. Within the United States, GPS receivers are regulated by the FCC, or Federal Communications Commission. All GPS-enabled devices within the United States are listed as Part 15 devices in their manuals, which means a GPS device is obligated to accept any interference it receives, including interference that causes undesired operation. So despite the numerous benefits a GPS receiver can offer, like all other forms of technology it can still suffer interference that causes undesired operation.

Social networking


Social networking is the act of grouping people into specific groups. While social networking can happen anywhere, including schools and workplaces, it is most commonly found online in the form of social websites or specific group websites that bring together people with shared interests. Social networking websites act as online communities where people can get together and share their hobbies or personal views. While there are many benefits to social networking, there are also many misconceptions about online social networking.

The biggest misconception about social networking is the idea that "social" is just another term for "people". Many feel that social networking websites are nothing more than lists of names that people can randomly interact with because they want to have fun or have nothing else to do. In reality, social networking websites can only be successful if they bring together a specific group of people around a common subject that everyone finds interesting.

However, the biggest drawback to online social networking websites is the question of the value of the relationships. While it is quite possible to have a large number of friends with interests similar to yours on a social networking website, you need to consider how many of these relationships have true value. If you connect with people you can meet both online and offline, and work with them towards a common and specific goal, then you can say you have a true relationship. The goal of social networking is to focus not on the number of people you have befriended, but on the link you have in common with them and the quality of your relationship with them.

http://www.guardian.co.uk/media/2011/jan/22/social-networking-cyber-scepticism-twitter

Cyberwarfare


 
Cyberwarfare is defined as any action by an individual or a nation-state to penetrate another individual's or nation's computers or networks for the purpose of causing disruption or damage. Cyberwarfare can include espionage, sabotage and vandalism aimed at accessing critical information housed in computer systems. There are several different forms of cyber attack, and attacks typically build upon one another to achieve a specific goal. While cyberwarfare is a relatively new concept, it is starting to be taken very seriously by both corporations and entire countries throughout the world, and many security concerns stem from the threat of it. Individuals, too, need to be concerned about cyberwarfare and take it seriously.

In today's heavily technological society, you rely on computers for everything. Think about your daily routine: how many times a day do you use or rely on a computer? What would you do if you woke up one morning and had no electricity to run them? Many people aren't aware of how much they rely on computers until they are affected. Most people store their entire lives and information on computers and on networks online. You need to take steps to protect your own computer and your Internet connection. Cyberwarfare is no different from standard warfare: you need to be prepared and protect yourself against the worst, not only so you can keep functioning if a computer is lost, but also so you can protect all the sensitive information you keep on it.

http://www.cfr.org/technology-and-foreign-policy/confronting-cyber-threat/p15577

The Digital Divide


The Digital Divide is a term used to describe the gap between individuals who have regular access to technology, in the form of computers and the Internet, and those who don't. The term was first used in the 1990s and was widely discussed during the Clinton Administration as part of efforts to close the gap. There are many ways to look at the Digital Divide, but it is largely a separation of the haves and have-nots within the United States.

While technology and Internet access have definitely increased in the United States in recent years, the digital divide is still clearly visible in the population. Poorer individuals can't afford to keep up with technological change, and schools with less funding are less likely to offer students regular Internet access. In comparison, middle- and upper-class families have the income to keep up with technological change both at home and at school. This gives the middle and upper classes a significant advantage over the poorer class, which doesn't have the same technology at home or in its schools. Since there is so much benefit in knowing how to use computers and Internet resources, it is easy to see how the Digital Divide keeps certain social groups poor and uninformed while giving others a clear advantage and more opportunities to advance in society.


http://www.edutopia.org/blog/digital-divide-technology-internet-access-mary-beth-hertz

Wednesday 11 April 2012

Digital Games and Players


 
The introduction of modern digital video games has created a unique range of players and challenges, from the casual gamer up to the professional who might be considered a video game artist. The challenge lies in how gamers respond to a video game and how they choose to play it. Are they artists for the way they change the game, or are they nothing more than hacks who ruin the enjoyment of the game for others? A video game is designed to provide enjoyable social interaction between people, but this becomes difficult for those who play alongside others who don't adhere to the basic rules of the game. Among game players there are a number of so-called "cheaters", including hackers, team-killers and "griefers". Players who knowingly engage in activities such as offensive messaging can destroy the balance of social interaction among players.

However, when it comes to individuals who kill members of their own team to sabotage the game, post their team's position to other players or find unique ways of overcoming the basics of the game, there is a fine line between those who are considered "cheaters" and those who are simply video game artists improving the play and providing a more challenging experience. Hacking of this kind is often associated with a trend called "emergent gameplay", a term used to describe players who find strategies that weren't originally foreseen or intended by the designers. Cheating is looked down upon because it is considered taking something that doesn't belong to you. Yet glitches in video games are considered normal, because digital games aren't a real space and a glitch can be used to provide certain advantages to players. So where does the difference lie between a player who cheats and one who is simply exploiting the glitches found in a video game?





http://www.igi-global.com/article/international-journal-gaming-computer-mediated/3957
http://arstechnica.com/gaming/news/2006/02/player-driven.ars

Video Game Research

When you play a video game you are entering a unique time and place. This is the basic idea behind all video game research that focuses on providing a unique and different game experience. However, is it really this theory that makes a game fun to play, or is there something else involved? There are many factors to consider, but first you have to understand the different theories behind video game research. When you play a video game you enter a specific arena that incorporates the area around you and the people you are playing with. Even when done online in a multiplayer setting, a video game provides a sense of socialization, so video game research has to focus on game play that allows players to interact together.

Video game research looks at two aspects: first, the narration, meaning the focus of the story and the characters in a game; and second, the actual act of game play itself. The basic pattern of game play is that you get stuck and then have to find a solution. Video game research examines how best this can happen through three forms of analysis: the qualitative, which deals with the puzzle itself; the sociological, which deals with the point at which people become stuck in a game; and the psychological, which looks at how the player is rewarded for finding the solution and how gamers respond to this. However, this information only allows video game researchers to determine the best hooks to keep a player intrigued, such as action, resources, strategies or time. What it doesn't teach a video game researcher is how to make a game enjoyable. There are many things that make a game enjoyable, including the interaction among players and the story itself. Video game research can go a long way towards creating a fascinating and enjoyable game, but it can't really determine whether a game will be fun for people to play. That is something that can't ever be learned through research alone.




Tuesday 27 March 2012

The Debate of Digital Art and Digital Literature


Much debate still remains between traditional artists and internet-based artists who focus on digital art and literature. The two biggest arguments against digital art and digital literature are that there is no original and that the work doesn't have the same heart and soul as something done by hand with traditional methods. For those who work on modern computers, after all their work there is nothing left except digital copies or recordings of it. The truth of the matter is that whether digital or traditional, they are all works of art, and neither has more value. Both digital and traditional forms of art and literature deserve their own recognition because of the producer's hard-earned efforts that went into creating the work. It therefore comes down to a personal decision whether you prefer digital over traditional and what value you place on each.

There are many benefits to digital literature and art. Digital works make many materials more accessible to the masses, allow older texts to survive much longer without the need for careful preservation, and are often free so that everyone can enjoy them. Digital art and literature are also helpful for the visually impaired and can be easily searched to find exactly what you are looking for. For these reasons many are embracing the world of digital art and literature. However, there are still many people who view digital art and literature as a setback. Some think digital literature takes away from the experience of holding a paper copy in your hands and eliminates certain senses involved in the reading experience. Others view digital art as less personal than traditional art because it lacks certain characteristics of hand-drawn work. The one thing you can be sure of is that digital art and literature are opening many more avenues than ever before.



Metadata and the Importance of Tagging


Metadata is an important subject when it comes to software and the internet. Metadata was at one time added to an HTML page in order to increase its searchability: tags were added to a page so that search engines could be told what the page was about when people searched for specific information. With the introduction of Web 2.0, metadata has become a powerful tool for all internet users. It is often referred to as tagging, and it is commonly done on blogs, photo-sharing sites, networking sites and social bookmarking sites. Tagging has actually turned out to be a key component of the technologies that make up Web 2.0. Metadata and tagging have become a social way of organizing the ever-growing volume of information that is constantly flooding the internet today, an information method that classifies things and makes them more collaborative.
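
As a small illustration of the older meta-tag approach mentioned above, here is a Python sketch that builds the kind of description and keyword elements a page author would place in an HTML head for search engines; the description and keywords used here are invented examples.

    # Build <meta> elements describing a page, of the kind placed in an
    # HTML <head> so search engines can read a description and keywords.
    from html import escape

    def meta_tags(description, keywords):
        """Return <meta> description and keyword elements for a page."""
        return "\n".join([
            f'<meta name="description" content="{escape(description)}">',
            f'<meta name="keywords" content="{escape(", ".join(keywords))}">',
        ])

    print(meta_tags(
        "A blog post about metadata and tagging in Web 2.0",
        ["metadata", "tagging", "web 2.0", "folksonomy"],
    ))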

When it comes to metadata and tagging there are two things to keep in mind: content description and administrative aspects. Related to this, there are three different types of metadata: quality, legal and technical. Metadata and tagging can be very important for getting your message and website viewed by millions and reaching the top of the search engine results. However, there are also many issues with metadata and tagging that still have to be addressed. Metadata doesn't take into account things like user preference or time: you can search for a subject on the internet and get results that aren't relevant to your needs, or information that is outdated, simply because they contain the tags related to what you searched for. On the other hand, if you are choosing tags for your own website and want to reach everyone looking for your information, you need to consider factors such as alternate spellings, otherwise you may never reach all the individuals who search for your specific content.
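
To illustrate the tagging and alternate-spelling point above, here is a small Python sketch of a toy tag index; the tags, synonym table and item names are invented. It folds alternate spellings onto one canonical form so that a search for either spelling finds the same items.

    # A toy tag index: map normalized tags to items, folding a hand-made
    # table of alternate spellings onto one canonical form.
    from collections import defaultdict

    ALTERNATES = {"colour": "color", "photos": "photo", "web2.0": "web 2.0"}

    def normalise(tag):
        """Lower-case a tag and fold known alternate spellings together."""
        tag = tag.strip().lower()
        return ALTERNATES.get(tag, tag)

    class TagIndex:
        def __init__(self):
            self._index = defaultdict(set)

        def add(self, item, tags):
            for tag in tags:
                self._index[normalise(tag)].add(item)

        def search(self, tag):
            return sorted(self._index.get(normalise(tag), set()))

    index = TagIndex()
    index.add("holiday-photo.jpg", ["Photos", "Colour", "beach"])
    index.add("blog-post.html", ["web2.0", "tagging", "color"])

    print(index.search("color"))   # ['blog-post.html', 'holiday-photo.jpg']
    print(index.search("photos"))  # ['holiday-photo.jpg']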


Thursday 8 March 2012

Open Source versus Free Software


In 1998, the term "open source software" began to be heard in place of "free software". Soon, open source was being used to describe a different approach, philosophy and set of values, and even different criteria for which licenses are deemed acceptable. This caused the open source and free software movements to head down different paths with different goals and views. The biggest difference between the two paths is the values involved: for open source, the question of whether software should be offered as open source is considered a practical one, not an ethical one. The official definition of open source software is nearly the same as that of free software, but it is a little looser and accepts some licenses that are considered restrictive to users. You might assume that open source software is free software, but in some cases this isn't true. Before using any software you need to carefully consider the usage terms and make sure it is truly free software, not just labelled open source.
While the Open Source movement is enjoying increasing success, there are still many who find Open Source software inaccessible, and this presents a major flaw in its development. There are five primary problems with the development of Open Source software: user interface design, documentation, feature-centric development, programming for the self, and religious blindness. With a lack of user interface design, many people prefer to use software with a more intuitive interface. Open Source software also lacks documentation that is accessible and complete enough to retain users. Most Open Source projects focus on features rather than making sure they have a solid core. Those who program Open Source software often view themselves as the intended audience rather than taking the general public into consideration. Lastly, many Open Source programmers refuse to learn from lessons taught by proprietary software. For Open Source programming to become widely popular and used by the general public, all five of these issues need to be dealt with.


The Concept of EveryWare


EveryWare is also known as ubiquitous computing. It is a post-desktop model of human-computer interaction in which everyday activities and objects are completely integrated into information processing. The core concept behind all models of EveryWare is a shared vision that small, inexpensive and robust networked processing devices can be distributed at all scales of everyday life and turned to commonplace ends. EveryWare has created several challenges for the computer science field in both systems design and engineering. Three basic forms of EveryWare have been proposed: tabs (wearable devices), pads (hand-held devices) and boards (interactive display devices). These forms are macro-sized and use visual output displays. As the concept has expanded, three additional forms have been proposed: dust (miniaturized devices without visual displays), skin (fabric-based materials) and clay (three-dimensional shapes). The concept of EveryWare has been around since 1988 and is still a growing field today.
Everyware is ubiquitous computing made almost imperceptible, and it is quickly becoming a reality. So how can we shape the emerging Everyware systems, and how are they going to impact us as individuals? These are two very important questions to consider, because Everyware will affect us greatly but is also very hard to understand. Everyware is quite literally everywhere, from smart technology to the RFID tags in our credit cards. Ubiquitous computing will reshape our lives and transform the communities we live in, so we need to take the time to understand it and embrace it. Because of its nature, Everyware is hard to see, and this presents a problem in understanding it. Ubiquitous computing is a vital part of our lives because we need our computer systems to be intelligent and accountable in order to change and better serve us.

Robots, AI and the Issue of Intelligence and Identity



When it comes to the field of robotics, one of the most exciting advancements is artificial intelligence, or AI. It is also a controversial area when it comes to intelligence and identity. While a robot can be part of an assembly line, there is much debate as to whether that robot can actually be intelligent. Artificial intelligence is very difficult to define. The ultimate in artificial intelligence would be a robot with our intellectual abilities, or the ability to recreate the human thought process. This would give a robot the chance to learn anything, the intelligence to reason, and the identity to formulate its own original ideas. Robots are nowhere near this level of artificial intelligence yet, but robotics has progressed beyond the limited stage, and there are some artificial intelligence machines today that can carry out specific intellectual tasks.

When it comes to creating intelligent robots, there has been much debate about the social impact they will create. Since many things once deemed science fiction have come true, it isn't far-fetched to assume that we will soon have robots with artificial intelligence. However, there are three sides to the issue of its moral and ethical implications. The most obvious argument is that with unemployment on the rise, there is very little reason to take away even more jobs by turning them over to intelligent robots. Then there are individuals who believe it is impossible for society to advance without the help of intelligent machines. Lastly, there are those who simply don't care either way. In today's technological age, however, you need to take a side and express your views. Robots are essential to society, but their intelligence needs to be carefully considered and monitored.

Intellectual Property: Choosing Between Creative Commons and DRM


There is a place for DRM alongside Creative Commons when it comes to intellectual property rights. Take a look at the Creative Commons licensing scheme and you will see that you have plenty of choices to make when it comes to licenses. While Digital Rights Management isn't mentioned as a specific tool, you can control how a digital work is used through the various licenses you choose, which is an important part of DRM. When determining how a work will be used, a producer has several things to consider, including how the work itself is used, how others can use it, and under what circumstances. By considering these aspects of intellectual property rights, a producer can choose appropriate Creative Commons licenses that also incorporate proper Digital Rights Management.

There are many clear laws regarding intellectual property rights and digital rights management, but most of them are ignored today. Ask any teenager where they got their music and they are likely to reply that they downloaded it from the internet. The question, however, is how much of this downloaded content is obtained legally. While there are many sites that offer authorized downloads that comply with intellectual property rights and digital rights management, there are just as many, if not more, file-sharing sites that offer unauthorized downloads that don't follow the law. This has divided people into two camps: those who treat intellectual property as public domain provided they don't get caught, and those who still view intellectual property as a person's private property that should not be trespassed upon. The issue of intellectual property rights and digital rights management has grown beyond a simple issue of piracy and is now becoming an issue of public policy. It is therefore a good idea to start moving in a new copyright direction, towards a law in which not all users of intellectual property are viewed as criminals.



Sunday 29 January 2012

Web 2.0

Web 2.0 is a second version of the World Wide Web in which static HTML pages are transformed into something more active and organized, with Web 2.0 concentrating on serving Web applications to people. In other words, it gives users ample space and opportunity to cooperate and share information online, and they can also upload their own information to share. Over time Web 2.0 has been used more as a marketing term than a computer-science term. Blogs, wikis and Web services are all seen as components of Web 2.0.

Web 2.0 was previously used as a synonym for Semantic Web, but while the two are similar, they do not share precisely the same meaning.


Monday 16 January 2012

Digital Culture - Technology

Technology creates digital culture by allowing people to cooperate with new people at a comfortable and controllable distance (for example, Facebook and Twitter). Technology has made life easier. The aim of technology and digital culture is to build a bridge to the real world. This bridge enables your online efforts to truly benefit your offline life, mostly by connecting you with like-minded individuals. This is one way in which technology creates digital culture.


Technology also creates digital culture by providing dynamic publishing opportunities and 24/7/365 broadcast channels that include news, movies, cartoons and more. This 'digital culture' is a human extension of our original relationship with tools; think of the first time a stick or rock was used by a human as a tool to fulfil another purpose. The internet, social media and computers are extensions of our humanity, and by that very definition they enable and create digital culture.

Analog And Digital Computers

In the mid-20th century there were two quite different and competing approaches to designing computing machines: digital and analog. Analog computers used continuous variations of voltage or even mechanical movement to calculate answers to equations, whereas digital computers operated one command at a time using discrete values and logical addresses. In the beginning, digital computers faced many design and cost problems that organizations felt would never be overcome. By comparison, analog systems of the 1930s and 1940s, such as Vannevar Bush's differential analyser, were quite fast and powerful. However, once the general-purpose design, usually attributed to von Neumann, was widely accepted, digital systems began their extraordinary path to ubiquity, and analog computers were increasingly relegated to narrower and narrower applications.
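
To make the continuous-versus-discrete contrast concrete, here is a small Python sketch using a toy equation (dy/dt = -y, chosen only for illustration): a digital machine approximates the answer one small step at a time, whereas an analog differential analyser tracked the continuously varying quantity directly.

    # Digital, step-by-step approximation of a problem an analog machine
    # would solve continuously: dy/dt = -y with y(0) = 1 (a toy example).
    import math

    def euler_decay(y0=1.0, rate=-1.0, step=0.001, steps=1000):
        """Step-by-step (Euler) approximation of dy/dt = rate * y."""
        y = y0
        for _ in range(steps):
            y += step * rate * y  # one discrete update per step
        return y

    approx = euler_decay()  # digital, discrete-step answer at t = 1
    exact = math.exp(-1.0)  # the continuous solution at t = 1
    print(approx, exact)    # roughly 0.3677 versus 0.3679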

Online Gaming And Digital World


In this world of digital revolution, one can play a variety of online games available all over the internet. Moreover, many of these games are free to play and at your fingertips; click whenever you wish to have some fun.

But these games have a darker side as well. According to an Iowa State University study, around one in ten gamers becomes addicted to these games. For kids, this addiction may cause issues such as anxiety and depression.

In addition, Nottingham Trent University professor Mark Griffiths states that online gaming can prove beneficial for those undergoing painful cancer treatments.

As far as online gaming is concerned, I personally feel that if time is managed properly, you can definitely take a break from your routine and enjoy playing with people from all around the world.

Sunday 15 January 2012

Digital Culture, Web 2.0: Twitter, the place where everyone tweets


According to a report on the BBC, a Twitter user recently attempted to unveil a few celebrities who had obtained injunctions to restrict publication of details of their personal lives.

Twitter allows you to connect with anyone and everyone all over the world, which has made it popular for all the right and wrong reasons.

A few years back you would have had to wait for the other person to get home and call; Twitter has thoroughly changed that equation. Just one tweet from your mobile and your loved ones know what you are up to. I believe Twitter has impacted the lives of people of all ages in a positive way.

Virtuality, Immersion, Simulacra and Avatars

Virtuality

Virtuality is a term that applies to computer-simulated environments that can simulate physical presence in real-world places as well as in imaginary worlds. For example, it can be used in regard to online content, video games and films.

A very common virtual device is the computer itself, an extremely broad virtual device that allows you to do almost anything, whether that is networking, watching films or playing games. Some simulations include additional sensory information, such as sound through speakers or headphones. There is also "virtual reality", which describes a wide variety of applications commonly associated with immersive, highly visual 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves and miniaturization helps an individual feel as if they are inside the film, with its elements revolving around them.

Immersion:

Immersion is the state where you cease to be aware of your physical self. It is often accompanied by intense focus, a distorted sense of time and effortless action. It feels like "being in the zone". Immersion does not require realism or a 3D environment; reading a novel, for example, can be immersive too.
Tactical immersion
Tactical immersion is experienced when performing tactile operations that involve skill. It relates to quick actions within video games, often fast-paced action games. Players feel "in the zone" while perfecting actions that result in success.
Strategic immersion
Strategic immersion is more cerebral and is associated with mental challenge. It is more effective within video games than on other platforms. It involves seeking different paths and techniques in order to conquer a game. Chess players, for example, experience strategic immersion when choosing a correct solution among a broad array of possibilities.
Narrative immersion
Narrative immersion occurs when players become invested in a story, and is similar to what is experienced while reading a book or watching a movie. The more an individual engages with the story, the more intrigued they become about discovering how it finishes or what happens next.
Spatial immersion
Spatial immersion occurs when a player feels the simulated world is perceptually convincing. The player feels that he or she is really "there" and that a simulated world looks and feels "real".



Simulacra:
The concept of simulacra is known from discussions of images and signs and how they relate to our contemporary society, in which we have replaced reality and meaning with symbols and signs; what we know as reality is actually a simulation of reality. The simulacra are the signs of culture and the communications media that create the reality we perceive: a world saturated with imagery, infused with communications media, sound and commercial advertising.
Avatar:
An avatar is a representation of yourself in a digital environment, a way of rediscovering and presenting who you are and aligning your online identity with how you want to be seen. For example, avatars are used as display pictures on social networking sites, within various video games or even in cartoons.



Saturday 14 January 2012

Thinking About Media Change



The 21st century has brought essential global revolutions in science, health, education and communications. The gradual development of modern tools has set our lives on new paths. Remediation in the communication process is neither a new phenomenon nor a modern term: Marshall McLuhan raised the concept as early as 1964 in his book "Understanding Media: The Extensions of Man". However, technological inventions and contemporary communication gadgets have created an incessant debate about remediation. Old (or passive) and new (or active) media, immediacy and hypermediacy are common themes for media scholars.

‘What is new about new media comes from the particular ways in which they refashion older media and the ways in which older media refashion themselves to answer the challenges of new media.’ (Bolter & Grusin, 1999: 15)
New media (also called digital media, multimedia, networked and mobile media) encompasses all kinds of digital, computerized or networked information and communication. Old media operated in an atmosphere free of competitive threat, without fear of cross-dialogue in the form of consumers' commentary demanding correction and transparency. Newspapers, radio, TV and the telephone worked as independent media which have now converged on the single platform of the internet (the web).

New media supports the micro, which in turn enables the macro media. Balance, credibility, research and decency of information, together with freedom of choice and instant availability, are the basic characteristics of new media. Remediation has brought large-scale convergence at the technical, institutional, professional and cultural levels, with the additional features of interactivity, participation and customization.

Bolter and Grusin described the phenomenon thus: ‘both new and old media are invoking the twin logics of immediacy and hypermediacy in their efforts to remake themselves and each other’ (1999: 5). The outcome of the process of remediation, in their account, is a dialectical interplay between ‘immediacy’, which seeks ‘to erase all traces of mediation’, and ‘hypermediacy’, a ‘style of visual representation whose goal is to remind the viewer of the medium’ (1999: 272).

The simulation mechanism is spreading its wings. Beyond the contention between old and new media, we recognize these realities and practise remediated media in the contemporary world. This is McLuhan's message: we shaped our tools first, and now our tools are shaping us.