Volume 9 / Issue 4

DOI:   10.3217/jucs-009-04-0300

The Future of PCs and Implications on Society¹

Hermann Maurer
(Graz University of Technology, Graz, Austria)

Ron Oliver
(Edith Cowan University, Perth, Australia)

Abstract: In this paper we argue that in about ten years' time, PCs as we now know them will no longer exist. Their functionality will be totally integrated into mobile telephony devices, or, putting it differently, in ten years' time mobile phones will incorporate all the functions one would expect from a powerful PC. These new devices, let us call them eAssistants, will be with us all the time and will change our lives enormously. In this paper we take a first look at both the technological and applied aspects of this prediction.

Keywords: wearable PC, display technology, future computers, societal implications

Categories: H.4, J.0, K.4, K.8

1 Introduction

Laptops are becoming lighter and lighter, hand-helds are getting more sophisticated, and the mobile phone is gaining increasingly powerful computing resources. Yet we believe this is very much the beginning of an era that will start around 2010. At that point, we believe, mobile phones will have turned into veritable computing powerhouses. Let us call them eAssistants. They will be not much bigger than a credit card, with a fast processor, gigabytes of internal memory, and a combination of mobile phone, computer, camera (still and video), global positioning system and a variety of sensors, in continuous connection with huge non-volatile local storage and the then-existing equivalent of the internet: continuous, since there will be no charge for connect time, just for data transfer.

Most importantly, the PC of 2010 will NOT have: (i) a hard disk: this fragile, energy-consuming device with rotating parts will be replaced by a version of the memory sticks we now use in digital cameras, but with possibly hundreds of gigabytes of capacity; (ii) a screen or keyboard as we now have them. The much reduced energy required by this device will be provided by tiny fuel cells.

¹ This paper was written during the first author's visit to Edith Cowan University in Perth. The first author would like to thank the university for the support obtained during his visit.

Page 300

Of all of the above, we believe that most readers might be startled by only two things: the promised absence of screen and keyboard.

Let us first address the issue of the screen. Presently, there are half a dozen technologies competing to replace the screen as we now have it. They include flexible screens that can be attached to your sleeves ("wearable screens"), projectors that create images wherever you want (even on uneven surfaces of any colour), and specialized eyeglasses that replace the screen, just to mention three alternatives.

Which of those technologies will "win" we do not know, nor does it matter: what matters is that wherever you go, you will have a more or less zero-weight, high-quality display at your disposal, connected to the small computer proper by a modernized version of Bluetooth, and via the computer to a huge archive of information locally and to all the servers on the internet.

Of all possible technologies we particularly fancy a certain version of eyeglasses: the electronics in the eyeglasses are in contact with the computer via Bluetooth. The computer delivers sound (in stereo, if wanted) to the sides of the glasses, which transmit it directly to the ear bones (thus, only the wearer can hear the signals); the computer transmits pictures (moving or still, and in 3D if wanted) via little mirrors through the pupils of the eyes directly onto the retinas; and a tiny camera in the middle of the glasses provides the computer with what the user sees, e.g. for gesture recognition.

Indeed, the eAssistant may also have additional sensors and I/O devices and is supported by powerful software (for sophisticated image processing of the pictures obtained by the camera) as was already mentioned in [1]. This will be discussed in more detail in the following sections.

Let us now turn to the keyboard. First, alternative input techniques are already starting to emerge. Speech input is one of them, and is particularly attractive if "speech that is not heard" is used (i.e. utterances made with closed mouth), e.g. using microphones near the larynx. Second, techniques that capture the movement of the fingers, the head, or the body using tiny sensors are becoming realistic. Third, using the glasses with an integrated camera described above, a "virtual keyboard" can be made visible to the user, and the finger movements on that keyboard can be analysed by software that does image processing of what the camera delivers.

We are not trying to suggest in this paper that any one of the technologies described above will take precedence over the others, but rather that screens, hard disks and keyboards as we know them today will be obsolete within ten years, give or take a few years.

2 A potential model

In light of what we have discussed above, it is possible to predict that the eAssistant might look similar to what is shown in Figure 1 below.

We want to emphasize once more that we do not necessarily believe in the "eyeglass" version that is shown, but it is a helpful metaphor to convey the functionality we believe will be available. The computer proper (1) is not much larger than a credit card and has all the functionality described earlier. It is connected wirelessly to the internet and to the eyeglasses plus necklace. The computer delivers sound (2), in stereo if wanted, at the sides of the eyeglasses; it delivers via tiny mirrors (5) into the eyeglasses visual multimedia material of whatever kind (in 3D, if desired), such as text, pictures, animation, 3D models, movies, 3D movies, etc.


This may be technically accomplished by projecting images through the pupils directly onto the retinas of the eyes, or by creating a virtual image in front of the eyes. (4) represents a camera that has multiple uses: (i) the user can look through it (thus having infrared vision during the night, or macro vision or zoom when this is useful); (ii) the user can transmit what is being seen to others (i.e. we have video telephony, of course); (iii) the pictures taken by the (still and movie) camera can be analysed by powerful image processing software.

Figure 1: The eAssistant and associated devices

The camera also has a built-in compass; hence the eAssistant is not only aware of where the user is (because of the GPS system), but also of the direction in which the user is looking. (6) is a larynx microphone that can pick up what is spoken by the user (even with closed mouth: this takes a bit of practice on the part of the user), and it also has a loudspeaker so that a conversation or audio output can be shared with others, even if those others do not happen to have an eAssistant at that point in time (if they did, the audio information could be sent directly to their ears using the devices (2) mentioned earlier). (3) symbolizes a device that can detect different states of brain activity.
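The GPS-plus-compass combination just described can be sketched in a few lines: given the user's position and compass heading, pick the point of interest whose bearing best matches the heading. The POI names, coordinates, and the flat-earth approximation below are illustrative assumptions, not part of the original design.

```python
import math

# Hypothetical points of interest; a planar approximation is adequate
# at city scale, so no great-circle formula is used here.
POIS = {
    "Town Hall":   (47.0707, 15.4395),
    "Clock Tower": (47.0735, 15.4378),
}

def bearing(lat1, lon1, lat2, lon2) -> float:
    """Compass bearing in degrees from point 1 to point 2 (planar approx.)."""
    dlat = lat2 - lat1
    dlon = (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(dlon, dlat)) % 360

def looked_at(lat, lon, heading) -> str:
    """POI whose bearing from the user is closest to the compass heading."""
    def angular_diff(name):
        b = bearing(lat, lon, *POIS[name])
        return min(abs(b - heading), 360 - abs(b - heading))
    return min(POIS, key=angular_diff)
```

A real eAssistant would of course query a map database instead of a hard-coded dictionary, but the matching step would look much the same.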


At this point in time only a very limited number of states can be detected (typically, the intention to move the left arm can be distinguished from the intention to move the right arm) [5], but it is foreseeable that a dozen or more states will be achievable, making it possible to create input for the eAssistant by thinking alone. (3) will also have further integrated sensors, typically for the detection of head position and head movement or the user's speed of movement, but potentially also to measure physiological parameters of the user such as body temperature, pulse, skin conductivity, etc., or even environmental parameters such as temperature, humidity, air quality and air pressure. If it is not already self-evident, the next section should convince readers that an eAssistant as described will indeed revolutionize our world.
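How a handful of detectable brain states might drive the eAssistant can be sketched as follows: a classifier (stubbed here as a dictionary of scores) yields a probability per state, and the most probable state is mapped to a command only when confidence is high enough. The state names, commands and threshold are our own illustrative assumptions.

```python
# Hypothetical mapping from detectable brain states to eAssistant commands.
COMMANDS = {
    "imagine_left_hand":  "previous item",
    "imagine_right_hand": "next item",
}

def decode(scores: dict, threshold: float = 0.7):
    """Map the most probable brain state to a command, or None if unsure.

    `scores` stands in for the output of a real BCI classifier such as
    those cited in [5], which distinguish only a few states.
    """
    state, p = max(scores.items(), key=lambda kv: kv[1])
    return COMMANDS.get(state) if p >= threshold else None
```

Rejecting low-confidence classifications is essential here: an accidental "mouse click" triggered by noisy brain signals would be far more annoying than having to think the command again.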

Note that each of the features and sensors described above has already been implemented in some form or another. The "only" thing that is missing is the integration of all of them into one small unit. However, the assumption that this will happen is basic to some of the research we are seeing today, e.g. in [3] or [7].

3 Applications

3.1 The eAssistant is going to change how we use computers

Input of information using the keyboard or the mouse will be replaced to a large extent by other means, such as speech recognition, gesture recognition, employing a "virtual keyboard" and other methods that still sound unorthodox today.

The "virtual keyboard" is a good example of how the various components of the eAssistant interact with each other. By the spoken command "Create keyboard", the eAssistant creates the image of a keyboard for the eyes of the user. In our eyeglass model the user will now see a keyboard floating in mid-air, and can type on it. Image processing based on what the camera delivers determines which keys have been touched: the eAssistant may deliver audio feedback (producing a click for each key that is hit) and video feedback by showing the text being typed, floating on a virtual screen above the keyboard.
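The core of such a virtual keyboard, once image processing has located a fingertip tap in camera coordinates, is a simple lookup from pixel position to key. The layout, key size and the idea of a fingertip detector are all assumptions for this sketch.

```python
# Sketch: map a fingertip position (as would be reported by hypothetical
# camera image processing) onto a key of a projected "virtual keyboard".
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_W, KEY_H = 40, 40  # assumed key size in camera pixels

def key_at(x: float, y: float):
    """Return the key under pixel (x, y), or None if outside the keyboard."""
    row = int(y // KEY_H)
    col = int(x // KEY_W)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None

print(key_at(90, 50))  # tap in row 1, column 2: prints 'd'
```

The hard part, naturally, is the fingertip detection itself; the lookup above only shows how cheap the final mapping step is once that is solved.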

Using gesture recognition, movements of the fingers or hands replace the mouse, and nodding the head can be interpreted as a mouse click; or, if two alternatives (yes/no) are offered or a two-button mouse is to be simulated, nodding or shaking the head could select one of the options. Alternatively, a simple gesture with a finger might be used.

As mentioned, simple inputs are also possible using the measurement of brain activity or other sensors: there is no limit to what one might imagine, and it will be one of the interesting tasks to experiment with combinations of the various techniques.

3.2 The eAssistant is going to change how we communicate and work with other people and with information

Clearly, one of the functions of the eAssistant is that of a mobile phone. However, it is not necessary to press a phone against an ear: rather, one can either use the loudspeaker mentioned earlier, or feed the audio signal to the sides of the eyeglasses and thus directly (via the ear bone) to the inner ear, i.e. without other persons hearing or noticing anything.


Since one can talk with closed mouth (after some practice), or spell out a message by invoking a sequence of brain states by thinking of designated actions (much as we spell out a message when sending an SMS), two persons can communicate over arbitrary distances in a way that other persons on the sender's or the receiver's end do not notice: thus, we have basically implemented telepathy in a technological manner, a fact much used in e.g. the XPERTEN novel series [12]. Note that while two or more persons are communicating this way, they can also share arbitrary information, from what they currently see to information from local storage or a server, accessed via the net. It is also conceivable that the whole conversation might be recorded and stored for later perusal.

Even communication between persons speaking different languages is quite conceivable: each person talks in the language of their choice, and a speech-recognition, translation and speech-synthesis program translates what is said into whatever other languages are desired. Note that such "machine" translations will not be perfect in the foreseeable future: "To understand a language is to understand the world" is a famous statement indicating clearly that for even near-perfect language translation one would need computers at least as "intelligent" as humans, and aware of all facets of the world and human life: and whether this can ever be achieved is still a topic of much discussion. However, good machine translation programs certainly do a better job than the average person who has studied a language in school for a few years! Further, misunderstandings due to the translation process can be fairly easily avoided by feedback techniques: the translated material is translated back; if this now differs from the intent of the speaker, appropriate actions can be taken. By the way, communication between persons with different languages may be made much easier by using dynamic symbolic languages, a main aim of the project MIRACLE [3] and its forerunner MUSLI [7].
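The feedback technique just described, translate, translate back, and flag the utterance if the round trip drifts from the original, can be sketched with toy word-for-word "translators" standing in for real machine translation. The vocabularies below are fabricated; the point is only the round-trip check itself.

```python
# Toy stand-ins for real translation engines. Note that "dog" and
# "hound" both map to "hund", so the round trip loses information.
EN_DE = {"the": "der", "dog": "hund", "hound": "hund", "sleeps": "schlaeft"}
DE_EN = {"der": "the", "hund": "dog", "schlaeft": "sleeps"}

def translate(text: str, table: dict) -> str:
    """Word-for-word lookup; unknown words pass through unchanged."""
    return " ".join(table.get(w, w) for w in text.split())

def round_trip_ok(text: str) -> bool:
    """True if translating to German and back reproduces the utterance."""
    back = translate(translate(text, EN_DE), DE_EN)
    return back == text
```

Here "the dog sleeps" survives the round trip, while "the hound sleeps" comes back as "the dog sleeps" and would be flagged, prompting the speaker to rephrase, which is exactly the "appropriate action" the text envisages.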

The eAssistant also changes how we discuss things: while someone is telling us something, we have the possibility to check whether the information provided is correct, by accessing background libraries [6] on local storage or on the internet. Conversely, we can use information from such background sources in our own statements. This of course assumes that access to the desired information is easier and more selective than what we could achieve today using e.g. internet search engines. This is where techniques of knowledge management come in [4], [10]. That techniques such as similarity recognition and active documents [8] can make quite a difference is shown in [4]; that semantic nets and metadata [11] allow much better search results is shown by the success of the knowledge networks in [9], and is the basis of one of the currently leading knowledge management systems, Hyperwave [2], [13]. The new technologies will create many more uses and applications for the digital libraries and repositories currently being researched and developed [15].
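One of the simplest forms the "similarity recognition" mentioned above can take is ranking documents against a query by cosine similarity of term-count vectors. The tiny corpus below is made up, and real systems such as those cited in [4] use far richer representations; this is only the skeleton of the idea.

```python
import math

# Made-up two-document corpus as term-count vectors.
DOCS = {
    "doc1": {"mobile": 2, "phone": 3},
    "doc2": {"display": 2, "glasses": 1},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity of two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query: dict) -> str:
    """Document most similar to the query."""
    return max(DOCS, key=lambda d: cosine(query, DOCS[d]))
```

Semantic nets and metadata, as argued in the text, improve on this by matching meaning rather than mere word overlap, but the ranking-by-similarity structure stays the same.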

Clearly, the access to and more productive use of information is not restricted to discussions with other persons, but applies universally to all situations in which information is of critical importance. This is why knowledge tools as outlined in [4] are of such importance.

3.3 The eAssistant will change our lives

It does not require much imagination to see how the eAssistant is going to change our lives. It may suffice to provide just a few examples:


When we meet a person for the first time, information is usually exchanged by passing business cards and talking a bit about mutual backgrounds. Of course, the information on the business cards, together with all that is available on the internet about this person, plus the pictures taken by the camera in the eyeglasses, is recorded for later use. When we see the person the next time, image processing software identifies who it is (even if we have forgotten), and supplies us with a wealth of information about this person.
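The "who is this again?" lookup amounts to comparing a face representation from the camera against representations stored at earlier meetings and returning the closest match, if any is close enough. The embedding vectors, names and threshold below are fabricated; real face recognition produces such vectors from images.

```python
import math

# Hypothetical face embeddings stored at earlier meetings.
KNOWN = {
    "A. Jones": [0.9, 0.1, 0.3],
    "B. Smith": [0.2, 0.8, 0.5],
}

def dist(a, b) -> float:
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(embedding, max_dist: float = 0.5):
    """Name of the closest stored person, or None if nobody is close enough."""
    name = min(KNOWN, key=lambda n: dist(KNOWN[n], embedding))
    return name if dist(KNOWN[name], embedding) <= max_dist else None
```

The rejection threshold matters: without it, every stranger would be confidently "recognized" as whichever acquaintance happens to look least unlike them.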

The eAssistant is a perfect guide. Of course it can guide us when we drive a car, something already quite common for persons who drive top-of-the-line car models with built-in navigation systems. But the eAssistant is equally helpful when we are walking, and not just for routing: when we look at a building, the speech command "Explain building" will be enough for the eAssistant to give us ample information: after all, it knows (by GPS) where we are and (because of the compass) in which direction we are looking, so retrieving what we want to know from a guide book or the like is easy. Clearly, this is not restricted to buildings, rivers, lakes and mountains, but applies equally well to plants or animals. If we look at a plant, the speech command "Explain flower" activates the camera and image processing, identifies what we are looking at, and gives us information on the flower, the berry, the mushroom, etc.

We will be paying with the eAssistant rather than with credit cards or the like. The eAssistant will be our driving licence and passport. It will automatically open those doors that we are authorized to enter. It will allow us, with one command, to turn on the light, the water, or what have you. And this does not just apply to things near us: while we drive home we can turn on the air conditioning, or the heating in our skiing cabin.

This list can be continued indefinitely, and we intend to prepare a more detailed study at a later point: it is our belief that an extensive list of what the eAssistant is good for will be rather mind-boggling, and will be a strong incentive for the fast, widespread deployment of eAssistants. So let us conclude this subsection with one more example, from the realm of medicine:

Suppose we have a sore throat. We call our doctor. He asks us to show our tongue. We use the macro mode of the camera in our eyeglasses (the camera can be taken out of its casing for such purposes) to send the picture to the doctor. While viewing the picture, the doctor is supported by a computerized diagnostic system that uses image processing to find out what kind of infection we might be suffering from. Having decided what it is, the doctor makes sure that we can pick up the required medication from a pharmacy near us; after all, our current position is known (if we permit it) to the system thanks to GPS. Note that sensors that supervise some of our physiological data (like body temperature and pulse) and monitor environmental data (like air temperature and air quality) might alert us to take action, or even initiate actions such as sending an ambulance to help us! There is even more to the widespread use of sensors and the constant monitoring of sensory data. Suppose a person dies of some rare disease: comparing the data that has been monitored concerning this person over a long period with data from persons in similar circumstances who did not suffer from this disease might well uncover the real cause of the disease at issue. Maybe this is how we will finally be able to combat mysterious illnesses such as SIDS (sudden infant death syndrome) in children, or the high rate of some types of cancer in certain groups of the population.
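The escalation logic behind "alert us to take action, or even initiate actions such as sending an ambulance" can be sketched as readings checked against normal ranges, with the response escalating as more readings fall outside them. The ranges and actions below are illustrative assumptions, not medical guidance.

```python
# Assumed normal ranges for two monitored physiological parameters.
NORMAL = {
    "body_temp_c": (35.5, 37.8),
    "pulse_bpm":   (40, 120),
}

def assess(readings: dict) -> str:
    """Choose an escalating action based on how many readings are abnormal."""
    out_of_range = [k for k, (lo, hi) in NORMAL.items()
                    if not lo <= readings[k] <= hi]
    if not out_of_range:
        return "ok"
    if len(out_of_range) == 1:
        return "alert user"
    return "call ambulance"
```

A deployed system would of course weigh how far readings deviate and for how long, rather than merely counting violations; the sketch only shows the threshold-and-escalate shape of the idea.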


3.4 The eAssistant is going to change how we learn, how we work, and humanity as such

Sophisticated learning programs that allow communication with others, with experts, or with "Interactive Knowledge Centers" [13] or "Active Documents" [8] will allow us to pick up knowledge as we require it. "In-time" learning will be the natural thing to do, rather than learning just because certain things might be needed at some future time. The fact that we will have continuous access to information in all areas will mean that the learning of facts, today still an important component in myriad areas from geography to history, from law to medicine, will become significantly less important. Learning in the workplace will finally become a favoured form of learning for many, with help from the eAssistant and its supporting technologies and software [14]. Even activities like handwriting might become unimportant (why use handwriting when our eAssistant offers rather more convenient ways of data input?); and why learn a foreign language for simple communication when we have automatic language translation, as explained earlier? Of course we recognize that to understand a culture, we have to understand the language of that culture at a deep level: but for mere travel or business, the automatic translation devices will do. Thus, one of the main issues that will have to be investigated more than has happened so far is not HOW we learn in the future, but WHAT we have to learn, as the eAssistant and the internet increasingly turn into an extension of our brain.

Indeed, this fact will have a deep influence on all of humanity. We live today in a totally Tayloristic society as far as material goods are concerned, i.e. we are completely dependent on thousands of other professions and hundreds of thousands of other people for our daily life (from food, to housing, to clothing, to entertainment, etc.), and we have accepted this: we have accepted that we can hardly survive as autonomous individuals any more, but only as part of a large group of diverse humans. The point is that what has happened in the area of material products is now about to happen also in the area of the non-material products information and knowledge. We will become, for better or for worse, much more dependent on hundreds of thousands of other humans for what we have to know to function properly. The eAssistant is a big step in this direction: we will profit from a powerful network for sharing knowledge with others (the positive aspect), and we will become more and more dependent on this network (the negative aspect). Unless world wars or revolutions stop the development of IT as it is now happening, the eAssistant and its implications will be unavoidable.

4 Conclusion

In this paper we have tried to argue that the merging of computer technology with communication technology, particularly with more and more powerful mobile phones, will bring powerful "eAssistants" that will change the lives of all of us.

This will happen much faster than one might assume due to the very high and increasing penetration of mobile phones. We have emphasized the positive aspects of this development in this paper.


We are aware that it is also necessary to examine the development for negative aspects: what we have described may negatively influence our power of thinking and our memory, as we more and more rely on the eAssistant. It will make us more dependent on the eAssistant and computer networks than we already are. It is not clear what the development will mean for those countries or groups of people who cannot afford to or who do not want to use eAssistants. It is clear that many of the applications of eAssistants will be quite invasive as far as privacy is concerned.

In this sense, our paper will hopefully provoke a discussion of the issues involved. We feel that the (rich) world is drifting in an interesting yet also dangerous direction, but nobody seems to seriously consider the positive or negative consequences.

If this paper initiates a discussion that is indeed necessary we will have achieved what we wanted to achieve.


References

[1] Maurer, H.: Necessary Aspects of Quality eLearning Systems; Proc. Conference on Quality Education @ a Distance, Deakin University, Geelong, Australia (2003), to appear

[2] Maurer, H.: Hyperwave - the Next Generation Web Solution; Addison Wesley Longman (1996)

[3] Maurer, H., Stubenrauch, R., Camhy, D.: Foundations of MIRACLE - Multimedia Information Repository: A Computer-based Language Effort; J.UCS 9, 4 (2003), 309-348

[4] Maurer, H., Tochtermann, K.: On a New Powerful Model for Knowledge Management; J.UCS 8, 1 (2002), 85-96

[5] Pfurtscheller, G., Neuper, C., Guger, C., Harkam, W., Ramoser, H., Schlögl, A., Obermaier, B., Pregenzer, M.: Current Trends in Graz Brain-Computer Interface Research; IEEE Transactions Rehab. Engineering 8, 2 (2000), 216-219.

[6] Maurer, H.: Beyond Classical Digital Libraries; Global Digital Library Development in the New Millennium, Proceedings NIT Conference, Beijing, Tsinghua University Press (2001), 165-173

[7] Lennon, J., Maurer, H.: MUSLI: A hypermedia interface for dynamic, interactive and symbolic communication, J.NCA 24, 4 (2001), 273-292

[8] Heinrich, E., Maurer, H.: Active Documents: Concept, Implementation and Applications, J.UCS 6, 12 (2000), 1197-1202

[9] Der Brockhaus Multimedial 2002 Premium; DVD, Brockhaus Verlag, Mannheim (2001).


[10] Ives, W., Torrey, B., Gordon, C.: Knowledge Management: An emerging discipline with a long history. Journal of Knowledge Management 1, 4 (1998), 269-274.

[11] Meersman, R., Tari, Z., Stevens, S. (Eds.): Database Semantics; Kluwer Academic Publishers, USA (1999).

[12] www.iicm.edu/XPERTEN - A series of novels (in German), to appear in English early 2004.

[13] www.hyperwave.com

[14] Oliver, R.: Seeking best practice in online learning: Flexible Learning Toolboxes in the Australian VET sector. Australian Journal of Educational Technology, 17, 2 (2001), 204-222.

[15] Oliver, R., Wirski, R., Omari, A., Hingston, P. & Brownfield, G.: Exploring the reusability of Web-based learning resources. Proc. of ED-MEDIA 2003, to appear.
