Engelbart in History & in the Future. A perspective:
re-igniting the revolution

 

Computers & Philosophy presentation by Frode Hegland
Oregon State University January 24th 2002

http://osu.orst.edu/groups/cap/

 

 

 

Hi, my name is Frode Hegland. I've been working with Doug for a couple of years now and have grown ever more fascinated with him, his work and his vision.

I have done some work with Doug, including an audio interview series on the web, a live demo of the HyperScope, as we'll talk about a little bit later, and other smaller items, all available on www.liquid.org.

I was born in Norway, lived for a while in Singapore, studied advertising in Syracuse NY and I'm now in London, England, which is a nice enough place, but it ain't the Bay Area. :-) I'm with The Liquid Information company, which deals with email at this point.

Archimedes said: Give me somewhere to stand, and I will move the earth.
Archimedes (c. 287-212 BC), Greek scientist and mathematician. Quoted in Mathematical Collection, book VIII, proposition 10, section 11, Pappus of Alexandria (date unknown); translated into Latin (1588).

Doug Engelbart moved the earth: he invented most of the interfaces we use today (the mouse, hypertext - though not the name, windows and all the other good stuff). And all of this pretty much in the sixties and early seventies.

Doug moved the earth because he had somewhere to stand: a vision, and his research group at SRI, the Augmentation Research Center (ARC), to implement that vision with.

But as we all know, not much has been done since the sixties. What happened? What is the difference between then and now that made the development stop?

The difference is that there is still the vision, but there is no Augmentation Research Center.

This presentation concerns:
 

 

 
First, the vision...

 

 

The Vision

 

 
Crusade Hunting

 

Doug was driving to work the Monday morning after getting engaged in December 1950. He had a good job and was about to get married, and he thought: is this it? What am I going to do with my life? He then calculated the number of professional minutes he would have for his career.

Assume he would work till he was 65. He was then 25, and assuming an average work year of 2,000 hours, that makes 65 - 25 = 40 years × 2,000 hours a year = 80,000 hours of professional work, or 4,800,000 minutes. And he kept thinking.
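Doug's back-of-the-envelope sum can be checked in a few lines (a sketch of the talk's own numbers, nothing more):

```python
# Doug's Monday-morning career arithmetic, as described above.
retirement_age = 65
current_age = 25
hours_per_year = 2_000  # assumed average professional work year

years = retirement_age - current_age   # 40 years
hours = years * hours_per_year         # 80,000 hours
minutes = hours * 60                   # 4,800,000 minutes

print(f"{years} years -> {hours:,} hours -> {minutes:,} minutes")
# -> 40 years -> 80,000 hours -> 4,800,000 minutes
```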

The first issue and question that Monday morning was this view of the empty hallway of his career: there was no plan, which was embarrassing.

So what kind of plan, objectives and goals should he have?

Money? Enough for raising a family yes, but he didn't find that in itself really interesting.

At some point that morning he had a thought: I am investing a career; what kind of return would I like? What if I could maximize the value my career contributes to mankind? This started orienting him.

So he spent a couple of months crusade hunting.

By February/March he had spent enough time thinking about crusades. Real crusades, not just 'let's clean up this neighborhood'. Well, one thing is to think about great contributions, but how have individuals changed history? Genghis Khan and Adolf Hitler come to mind. Not laudatory examples, but interesting. This didn't lead him very far. He did read a lot about Khan that week, though.

Other crusades, like health in the third world, came to mind. He read about someone who wanted to drain swamps where natives were living and suffering from malaria. So the swamps got drained, the mosquitoes went away, and the population went up. However... the bigger population ruined their environment, and a couple of generations later they were back where they started.

 

 

 
Realization

 

One Saturday it dawned on him: Boy, the world is complex; jeez, the problems are getting more complex and urgent, and they have to be dealt with collectively - we have to deal with them collectively.

So here came the crusade: maximizing the improvements we could make to mankind's ability to deal with complex, urgent problems.

In the next half hour or so he really got the picture of computers and interactive displays. This was 1951.

You see, he had read a book about computers (Giant Brains, or Machines That Think by Edmund C. Berkeley, Consultant in Modern Technology, 1949) and he had been a radar technician in the Second World War. He also had an electrical engineering degree, so the engineer in him could generalize what the circuits could do.

The thought went like this: the radar could draw stuff on the screen for the operator, but in a limited way. Having seen the internal electronics which produced the display for the operator, he knew that if a computer could print on a line printer electronically, it would be able to produce anything you wanted on a CRT!

The radar could watch the operator and do things. Jeez, the computer could watch the operator and do whatever you want on the screen:

The computer could interact with the display in all sorts of flexible portrayals. It could do fast retrieval and it could do jobs for you:

It could allow you to type - what we now call word processing.

It could retrieve for you, submit to someone else at a distance. Distance work! Large numbers of people could be interacting with the knowledge. What a revolutionary thought- a real, feasible way to allow people from afar to work together.

One could think of endless explorable options for what the computer could provide that your typewriter cannot.

The picture came easily, within half an hour, once the right question had been formulated and digested.

 

The concept of interactive computing was born in his mind. This basic picture never changed.

 

 

 
The vision was born. Time to create an Augmentation Research Center to implement it.

 

History of the First ARC

 

Implementing it all presented lots of practical problems. It took 11-14 years to get a chance to tie computers to displays and start doing things with them.

 

 
Berkeley - Credibility

Doug assumed he had to learn about computers. He had been out of college for three years and was due to be married.

To do this kind of research he probably needed a PhD. He applied to Stanford and Berkeley. Berkeley had a research project to build a computer called CALDIC (California Digital Computer), so that made him decide. However, it never worked while he was there - it was not finished before he got his degree and left.

They had labs and courses on digital circuit design: making adders, multipliers and arithmetic controls, watching registers. They wrote programs in machine language. By hand. And they exchanged their designs with other students to debug each other's work. There was talk of research projects to make assemblers and compilers, but that was not quite a reality yet. Pretty geeky days!

The idea of individuals using interactive computers was ludicrous at the time.

Masters

For his master's thesis he got an idea: when a digital drum rotated, it would pass successive cells along a track, so you addressed it by track and cell; the speed of the computer was thereby tied to the speed of the drum's rotation (it spins at a constant speed and counts until the right cell comes around). He realized he could improve on this with a mix of hardware and software.

He got his master's in 1952, which was actually called an Engineer's Degree.

This might have given him a better feeling for how programmers have to do things, but the time could have been better spent, I think.

PhD & Patents

So for his PhD thesis he did something acceptable. In 1955 he got his PhD in Electrical Engineering (with specialty in computers) through work on bi-stable gaseous plasma digital devices at UC Berkeley.

At Berkeley he was biding time, learning about basic electromagnetic wave propagation, solid state physics and symbolic logic. Doug puts it in a very nice way: I was basically getting my journeyman's card. I also got a bunch of patents - 13-14 from the PhD thesis. Doubt they were useful in the world...

Teaching, kids & BBQ

Then he was an acting assistant professor at Berkeley, teaching basic electrical engineering. One singular event happened: he and his wife had three children. His wife had this great theory that if you have your first two close together there would be less sibling rivalry - but the unplanned number three came an hour later!

It became a matter of teaching and bringing up the kids, with 2-3 hours a day of great focus and concentration. So no more evening time for the crusade.

He made some friends among the other faculty, though. There was a BBQ at an economics professor's house. Doug helped clean up afterwards and they got talking. The economics professor wanted to know what kind of research he was planning to get started - what research he did would be important for his career, and so on. Doug told him about computers and augmentation, and there came a point when the professor didn't look very interested. He looked at Doug and said: Do you know how promotions are done at a university? Doug remembers the moment well: My jaw dropped - I guess I don't. It's about peer review: if you don't get papers published you won't advance, and papers get published by peer review. Talk like this and they won't get reviewed. So much for blindly pursuing an academic career!

 

 
HP - Quick Detour

Doug knocked at HP's door; they were in the instrument business at the time. They were nice and offered him a job, as they liked the patents. Both Mr. Hewlett and Mr. Packard interviewed him. He asked if they planned to get into computers. The head of research said: gee Doug, not a chance.

 

 
So on to SRI.

 

SRI - Gaining A Foothold

He settled on a research position at Stanford Research Institute, now SRI International, in 1957. 

SRI had had a project with Bank of America to build a computer to process cheques - it was called ERMA. All vacuum tubes. So he knew they had been doing this sort of thing for a while, and he interviewed with them. He got hired, but maybe only because the guy who interviewed him, a Dane by the name of Torben Meisling, had been a couple of years ahead of Doug at Berkeley. He was hired on the basis of his patents. Torben warned him not to talk about the computer stuff.

It was time to bide time and build a position.

Doug got involved with Hewitt Crane, who had a project going that he had invented - according to Doug it was ingenious: all-magnetic logic, little ferrite devices with multiple apertures (MADs). It was a fair amount of work, and various parties were interested. Doug invented new things and got more patents.

But all the time he kept thinking ahead about how he could do what he wanted to do.

 

By 1959 he had enough standing to get approval for pursuing his own research. He spent the next two years formulating a conceptual framework for a new discipline that became the guiding force for his 1962 seminal work, "Augmenting Human Intellect: A Conceptual Framework," under contract prepared for the Director of Information Sciences of the U.S. Air Force Office of Scientific Research.

A couple of projects came along which were closer to his intended direction.

One of them was a project which was very, very important to him, and it came about by accident. Doug talked to an Air Force research manager about components getting smaller - transistors and so on. Doug suggested that he carry out a study to see what would happen if you made the components smaller, knowing that there would be more demand for computer power and that the only practical way to get faster and more powerful computers is to make the components smaller.

So how could you make them smaller? Would there be any problems? What happens?

Let's just consider the scaling issue for a while.

 
The Business Of Scaling environment & tools

The underlying thing is this: if you change the physical scale of some device, making it 1/100th the size it was, you cannot assume it will still work. A lot of factors change, so you have to sit down and re-design the whole thing.

It is the same with scaling an aircraft up from a wind-tunnel model: it definitely won't work. The aeronautical guys learnt this way back in school and told him about it. They told him about something called dimensionless numbers. Every measurement generally has dimensions - kg, miles, etc. - and there's this amazing thing: if you take all the numbers which are significant, you can arrange them so that all the dimensions cancel and you get a dimensionless number. If this works, you can then depend on the numbers. Very mysterious.

At the Solid State Conference in 1959, Doug was going to talk to them about the effects of scaling electronic components. He said that if you change the scale you get surprises. He was met with looks of disbelief. They were engineers and physicists - how could he possibly lecture them?

Scaling people.

So he asked: would you notice if everything and everyone here increased by a factor of 10 in each dimension? What would happen?

Many said they wouldn't notice a thing, as the angles would be the same: looking at someone bigger would look the same if you yourself were bigger. But what about weight? And strength?

If you make something 10 times bigger you get 1,000 times the volume (10 times in each of the three dimensions) and therefore 1,000 times the weight.

But the strength? In most materials, strength depends on the cross-sectional area of the material. How much stronger does it become? Only 100 times as strong (as you are only expanding the cross-section in two dimensions).

Let me go into that a bit more. Look at strength as how much force you can exert by, let's say, stretching a cylinder of a given material before it breaks - a fair measurement? Then you will notice that you have the same pressure at every point pulling the cylinder apart. Or you can think of it as a rope, or anything you can picture stretching. At every point where the cylinder is stretched, a force is applied in the direction of the pull, so only the two cross-sectional dimensions contribute to strength. A little weird, but is everyone happy with that?

So scale a person - you: you'd be 1,000 times heavier but only 100 times stronger. There is a factor-of-ten difference between weight and strength.

That is the same as if you were 10 times heavier right now without any increase in size (a normal human is about 70 kg, so imagine 700 kg) with the same muscular and skeletal strength. You might not even be able to sit on a chair. You could fall and break bones.
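The square-cube reasoning above can be sketched numerically; this is a minimal illustration of the argument, not anything from Doug's work:

```python
# Square-cube law: scale a body by a factor s in each linear dimension.
def scaled_properties(s: float) -> tuple[float, float]:
    weight_factor = s ** 3    # weight tracks volume: three dimensions
    strength_factor = s ** 2  # strength tracks cross-sectional area: two
    return weight_factor, strength_factor

weight, strength = scaled_properties(10)
print(f"10x linear scale: {weight:.0f}x weight, {strength:.0f}x strength")
# -> 10x linear scale: 1000x weight, 100x strength

# The mismatch the talk describes: the load on the structure grows tenfold.
print(f"weight-to-strength penalty: {weight / strength:.0f}x")
# -> weight-to-strength penalty: 10x
```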

So the Solid State people started to listen to the significance of scale changes.

In the world of electronics there'd be issues too, just like there would be if we were scaled. You could have trouble if you expected the device to work the same at this scale - the temperature and so on is also affected.

The scale of change of the tools out there... wow, the impact. It will start changes...

 
For example:

Scale and travel.
I was thinking about my father on the way here, flying in from London. My father is a businessman; he works in Singapore and London, and he also travels very frequently in Asia on business. He calculated that during a recent year he was in the air for what amounted to 2 months. I have thought about this. From my rough calculations, he has, in his lifetime, most likely traveled further than all his ancestors - for as long as we have been Homo sapiens - combined. And that ain't unusual these days.

Scale and voice communication.
Most people in the 'rich world' these days, including kids, have mobile phones. Everyone can conceivably connect to anyone else at any time at low cost. The Star Trek idea - you say someone's name, the computer interprets it as you wanting to talk to that person, and then automatically, immediately and transparently connects you to their 'mobile phone'/communicator - just ain't that far off.

The cost of a long distance call has plummeted to the point where we don't often refer to long distance calls as 'long distance'. You just call.

Movies.
Marshall McLuhan points out: speed up a series of pictures and what do you get? Movies - from a simple change of speed scale.

Scale of the information explosion.
"More information has been produced in the last 30 years than in the previous 5,000. About 1,000 books are published internationally every day, and the total of all printed knowledge doubles every eight years", according to Peter Large in Information Anxiety. There is now 2.5 times more information stored online than on paper but none of the Internet search engines have even cataloged 1/6th of the total information available and Internet traffic is doubling every 100 days (Interactive Week) "Everyone spoke of an information overload, but what there was in fact was a non-information overload" (Richard Saul Wurman, What-If, Could be).

The average US office worker is bombarded by 52 phone calls, 36 email messages, 23 voice mails, 18 letters, 18 interoffice letters, 14 faxes, 13 Post-Its, 8 pager messages, 4 mobile phone calls and 3 express mail deliveries every day (American Demographics - Intertec Publishing). That's a lot of communication, and it's no surprise that it's becoming more and more digital. By the end of last year there were 569 million e-mail accounts worldwide, 333 million of them in the U.S. (Messaging Online). At least 40 percent of Americans use e-mail, but only about 5 percent of the global population had an electronic mail account in 1999. Dealing with all this is taking its toll: stress costs US industry $200-300 billion annually (Aaron Fischer, "Is your career killing you?", Data Communications, February 1998). The National Mental Health Association (US) reports that 75%-90% of all visits to physicians are stress related. Will the solution simply be a great new technology? The workers don't seem to think so: 40% want training to deal with the messages, but only 35% receive training in the UK (Mitel).

Scale of communication - email.
Now that you can communicate with millions as easily as with one, is that utopia? Well, we do live in it now, and the result is SPAM.

Scale of computation.
A modern computer, like the laptop I wrote this on, an Apple Macintosh PowerBook G4, is capable of completing a calculation faster than light takes to travel from its monitor to your eyeballs. An average home hard drive holds 10 gigabytes of information (some 80 billion bits). Remember when digital desk calculators seemed impressive?
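The light-travel claim is easy to check; the 50 cm viewing distance and the 1 GHz clock are my illustrative assumptions, not figures from the talk:

```python
# Compare one CPU clock cycle against light's screen-to-eye travel time.
SPEED_OF_LIGHT = 299_792_458  # metres per second
viewing_distance_m = 0.5      # assumed screen-to-eye distance
clock_hz = 1e9                # assumed ~1 GHz laptop clock

light_time_s = viewing_distance_m / SPEED_OF_LIGHT  # about 1.7 ns
cycle_time_s = 1 / clock_hz                         # 1.0 ns

# One clock cycle finishes before the light reaches your eye.
print(f"light: {light_time_s * 1e9:.2f} ns, one cycle: {cycle_time_s * 1e9:.2f} ns")
```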

When you change the scale, you also, sometimes, transition from one state to another.

But how about this:

We are seeing the beginning of a snowball effect based on a technology which becomes 68 billion times more powerful in a single human lifetime - the microchip - and then doubles again only a year and a half later, according to Moore's law; powering an Internet which is due to become more extensive than the telephone network this year, if it hasn't already, doubling every 100 days (Interactive Week) and adding users quicker than the world's population is growing (7 new users a second, whereas the world's population increases by 3 people a second, according to The Herald Tribune). Takes your breath away.
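The 68-billion figure is consistent with Moore's-law doubling every 18 months sustained over about 54 years, since 2^36 ≈ 68.7 billion; the 54-year span is my reconstruction of the claim, not a number from the talk:

```python
# 68-billion-fold growth is ~36 doublings; at one doubling every 18
# months that spans 54 years, i.e. the bulk of a working lifetime.
doubling_months = 18
lifetime_years = 54

doublings = lifetime_years * 12 / doubling_months  # 36.0
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> {growth:,.0f}x more powerful")
# -> 36 doublings -> 68,719,476,736x more powerful
```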

Not convinced about the fundamental change a change of scale brings about? Take four letters and let them connect together in just two ways; maybe you could make a fun toy for an infant. Take enough of them, however, turn them into molecules, label them C, T, A & G and voila - you get the human genetic code.

 

Doug is quick to point out: The scale of the rate of change is also a scaling factor. If it becomes too fast we will not be able to integrate it into society.

Nothing new about this. We are just at a point in history when more people notice.

Vannevar Bush raised the alarm in The Atlantic Monthly way back in 1945: "Thus far we seem to be worse off than ever before - for we can enormously extend the record, yet even in its present bulk we can hardly consult it."

Marshall McLuhan saw the same problem and commented on The Best of Ideas, CBC Radio in 1967: "One of the effects of living with electric information is that we live habitually in a state of information overload. There's always more than you can cope with."

Doug puts it this way: The only thing we can protect ourselves with is getting collectively smarter. It's not just interesting; it's a matter of the survival of humanity.

 

 
We are at a point in history when the scaling of computers and computer networks has become, well, overwhelming - and so have many other factors, including how we treat our environment and so on.

Doug realized this early on and it helped clarify his thoughts, which he expressed in his famous 62 paper:

 

Framing the vision - The 62 Paper

In 1962 Doug wrote the paper which would come to frame the vision and orient the future work. It was titled "Augmenting Human Intellect: A Conceptual Framework".

It simply and clearly states the goal of his work, to augment human intellect: By "augmenting human intellect" we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.

His work was to be developing means to augment the human intellect. These "means" can include many things--all of which appear to be but extensions of means developed and used in the past to help man apply his native sensory, mental, and motor capabilities-- and we consider the whole system of a human and his augmentation means as a proper field of search for practical possibilities. It is a very important system to our society, and like most systems its performance can best be improved by considering the whole as a set of interacting components rather than by considering the components in isolation.

So the job becomes a matter of finding the factors that limit the effectiveness of the individual's basic information-handling capabilities in meeting the various needs of society for problem solving in its most general sense, and developing new techniques, procedures, and systems that will better match these basic capabilities to the needs, problems, and progress of society.

This is important, simply because man's problem-solving capability represents possibly the most important resource possessed by a society. The other contenders for first importance are all critically dependent for their development and use upon this resource.

This is where he stands. He wrote his mission statement in 1962. It's a great paper; only some of it is dated. You can read it at www.liquid.org in the Engelbart Resources section, where there are a lot of other Doug goodies, BTW.

 

 

 
Licklider & Funding

Doug discovered Lick's paper Man-Computer Symbiosis from 1960. It was thrilling; on the surface there was so much parallel to his own. Doug learnt that Lick had been lured into the Department of Defense's Advanced Research Projects Agency (ARPA) to start up and head a new division, IPTO - the Information Processing Techniques Office.

Doug sent a proposal to Lick's ARPA office and had pretty quick success. Doug didn't know it then, but Lick's colleagues thought it was a big-risk project. A year earlier the National Institutes of Health had turned them down: they interviewed them but sent a letter saying, in effect, interesting - but you are way out there in Palo Alto where there are no computer programmers!

The first 2 years were flops.

The first year, SRI managers were really concerned about the publication of the 62 report, "Augmenting Human Intellect: A Conceptual Framework". It seemed arty-farty blue-sky stuff with no reality to it.

Still, money came from ARPA for this, and SRI put a 'more experienced guy' in charge. Doug got a promotion to Senior Research Engineer, but the job would be done by the project leader; Doug was out of the loop, the other guy was in control. Doug protested, helpless and frustrated. Doug says: I wasn't cogent enough to call Lick, but he came out for a project review. He asked me what it was all about. He said 'god damn it, this stuff is so bad that if my boss found out he'd fire me!' So I explained, and he called them; he said more money would be sent, but don't do that again.

The second year, Lick wanted Doug to go ahead with an idea for an augmentation system at SRI: Doug would program a client program on a small computer with a little CRT display. The display could show letters and so on, but it was supposed to work through a modem with a time-sharing computer in LA which Lick was already supporting. Doug's group was amongst the first groups to be time-shared, but the connection only lasted about 3-5 minutes before crashing. So the second year didn't mature much either.

But Lick kept going.

In the third year Doug said they'd like to have their own computer, big enough to run their own real-time system. A CDC 3100 arrived. They set it up in a room; it had about six platters (1 1/2' dia.). It was a real boon to have a machine.

They needed a display for it, so they built their own. It cost $80-90,000. In those days there was no way you could have enough high-speed memory to store the bit map of the display - you couldn't store it and have it operate fast enough - so they had to build the electronics for it as well.

It had to move the beam into position, turn it on, and move it around to draw the characters. It could only do upper-case characters; BTW, upper case was indicated by a bar over the character. They used this for 4 years or so.

As Doug says: This was the best we could do for a ton of money, you can then see how people said it would be crazy to spend this on individuals, but we said it wouldn't stay that way for long...

Then Bill English came to work with Doug at the beginning of 1964. He had gotten his M.S. in engineering at Stanford in 62. A very energetic and competent engineer; very bright, very active. He complemented Doug and provided things Doug wasn't good at. Doug had his right-hand man, his doer.

Doug tells of how the mouse came about: One thing we did was get a special research grant from NASA. We needed screen selection, so we got NASA to set up the research money to test ways of getting screen selection.

Which way would be best? There were lots of ideas, so I said let's experiment. Bill set up and ran the project, testing various devices, and we published a paper on it. One of the first things we did was run a lot of tests; we got some secretaries who all knew how to type. I looked at all these devices and thought 'gee, is that all?', so I remembered a sketch I had in a little notebook and gave it to Bill to build, and he did. I couldn't have done it without Bill, but the patent attorney didn't agree with me in wanting Bill to share the patent.

Are you all familiar with the 1968 conference?

The 1968 Fall Joint Computer Conference in San Francisco, CA, that December has since become known as "The Mother of All Demos." That was where Doug first showed NLS to the world. In a remarkable 90-minute multimedia presentation, Doug used NLS to outline and illustrate his points, while others of his staff linked in from his lab at SRI to demonstrate key features of the system.

This was the world debut of the mouse, hypermedia, and on-screen video teleconferencing.

 

 
A Few Things on SRI ARC & Precursor

ARC got its name after the 68 conference. They basically needed a name.

How did he orient the ARC team to share the vision? I asked him: Some of it was easy, some was difficult, and some was something I didn't know how to do. There were people there who didn't agree; some stayed and shut up, some left. Not the most helpful answer.

However, what is interesting is this: he wrote proposals, and then they had to build it. Doug: Sometimes I had to back off. Sometimes 25% of what I hoped for would be the best I could do. I kept talking.

What kind of philosophy/attitude/culture was instilled? The whole idea of augmentation was not something which could be dictated. Figuring out what and how to do things would have to evolve. Augmentation research (which is going on in a world of the rapid change we discussed earlier) would have to be dealt with through continuous, facilitated evolution.

Doug says, as an example of the evolution: I tried to get something like a journal - one guy worked for a year and got it working, so we had a journal to help the dialog. We could publish to it, the document would always stay there, and we could citation-link. There are about 8 of them now. We lost some of the records during some transitions, but a lot is still there - a quarter million entries in the Augment journal, for example.

Did he use any specific methodologies? No, says Doug. I wish I did. I was always naive - a not very well organized guy who didn't understand management.

As for the critical factors in getting ARC started and keeping it going, there was of course money. They developed more and more organized sources: ARPA, NASA, and a bit of work done for the CIA and the Air Force. It got organized so that all the organizations would funnel money through an Air Force lab.

A big issue with the original ARC was computer time. I had to fight for my computer time Doug says with frustration.

And the magic bullet for the ultimate interface? A nonsense question: there are lots of possibilities, so the important thing is to find ways to evolve.

AND, you need to evolve more than tools (techies tend to think they have the one big solution)

 

 

 
A word on Augment

 

NLS/Augment

NLS (later re-named Augment) was where the mouse, 2-dimensional display editing (using a screen/monitor with a computer), in-file object addressing, linking & hypermedia (Hypertext, but not by that name), outline processing, flexible view control, windows, integrated hypermedia email, hypermedia publishing, document version control, teleconferencing, computer-aided meetings, context-sensitive help, distributed client-server architecture, uniform command syntax, universal "user interface" front-end module, multi-tool integration (use any tool on any document, not simplistic document-application system which is prevalent today) and grammar-driven command language interpreter first saw the light of day.

 

Augment was the system where the ARC successes originated:

 

 
ARC Successes

In short, these are the specific inventions which came out of ARC. The further innovations and societal impact can only be guessed at:

 

 

 

 

 

Today there is the Bootstrap Institute, a springboard to tomorrow.

 


Bootstrap

Doug is funny: he invented most of the interfaces we use today, but he doesn't think he can invent all of what we need.

He's a pretty humble fellow. So he is really working for the organized, collective effort to drive augmentation forward in an evolutionary environment.

 

 

 

 

 
A few basic requirements for an ARC for the 21st Century.

 

 

Basic Requirements for ARC

 

 

 

 
From the First ARC at SRI

Let me just read to you the way the first ARC (called AHI to begin with) described itself at the time: In the Augmented Human Intellect (AHI) Research Center at Stanford Research Institute a group of researchers is developing an experimental laboratory around an interactive, multi-console computer-display system, and is working to learn the principles by which interactive computer aids can augment their intellectual capability.

The research objective is to develop principles and techniques for designing an "augmentation system."

This includes concern not only for the technology of providing interactive computer service, but also for changes both in ways of conceptualizing, visualizing, and organizing working material, and in procedures and methods for working individually and cooperatively.

The research approach is strongly empirical. At the workplace of each member of the subject group we aim to provide nearly full-time availability of a CRT work station, and then to work continuously to improve both the service available at the stations and the aggregate value derived therefrom by the group over the entire range of its roles and activities.

Thus the research group is also the subject group in the experiment.

Among the special activities of the group are the evolutionary development of a complex hardware-software system, the design of new task procedures for the system's users, and careful documentation of the evolving system designs and user procedures.

The group also has the usual activities of managing its activities, keeping up with outside developments, publishing reports, etc.

Hence, the particulars of the augmentation system evolving here will reflect the nature of these tasks -- i.e., the system is aimed at augmenting a system-development project team. Though the primary research goal is to develop principles of analysis and design so as to understand how to augment human capability, choosing the researchers themselves as subjects yields a valuable secondary benefit: a system tailored to help develop complex computer-based systems.

This "bootstrap" group has the interesting (recursive) assignment of developing tools and techniques to make it more effective at carrying out its assignment.

Its tangible product is a developing augmentation system to provide increased capability for developing and studying augmentation systems.

This system can hopefully be transferred, as a whole or by pieces of concept, principle and technique, to help others develop augmentation systems for aiding many other disciplines and activities.

And here, a bit that is funny for our age: In other words we are concentrating fully upon reaching the point where we can do all of our work on line--placing in computer store all of our specifications, plans, designs, programs, documentation, reports, memos, bibliography and reference notes, etc., and doing all of our scratch work, planning, designing, debugging, etc., and a good deal of our intercommunication, via the consoles.

We are trying to maximize the coverage of our documentation, using it as a dynamic and plastic structure that we continually develop and alter to represent the current state of our evolving goals, plans, progress, knowledge, designs, procedures, and data.

 

THE USER SYSTEM

Basic Facility
As "seen" by the user, the basic facility has the following characteristics: 12 CRT consoles, of which 10 are normally located in offices of AHI research staff. The consoles are served by an SDS 940 time-sharing computer dedicated to full-time service for this staff, and each console may operate entirely independently of the others. Each individual has private file space, and the group has community space, on a high-speed disc with a capacity of 96 million characters.

The system is not intended to serve a general community of time-sharing users, but is being shaped in its entire design toward the special needs of the "bootstrapping" experiment.

Work Stations
As noted above, each work station is equipped with a display, an alphanumeric keyboard, a mouse, and a five-key handset. The display at each of the workstations is provided on a high-resolution, closed-circuit television monitor.

The alphanumeric keyboard is similar to a Teletype keyboard. It has 96 normal characters in two cases. A third-case shift key provides for future expansion, and two special keys are used for system control.

The mouse produces two analog voltages as its two wheels rotate, each changing in proportion to the X or Y movement over the table top. Three buttons on top of the mouse are used for special control. A set of experiments, comparing (within our techniques of interaction) the relative speed and accuracy obtained with this and other selection devices, showed the mouse to be better than a light pen or a joystick.

The chorded keyset. The five-key handset has 31 chords or unique key-stroke combinations, in five "cases." The first four cases contain lower and upper-case letters and punctuation, digits, and special characters. (The chords for the letters correspond to the binary numbers from 1 to 26.) The fifth case is "control case." A particular chord (the same chord in each case) will always transfer subsequent input-chord interpretations to control case. In control case, one can "backspace" through recent input, specify underlining for subsequent input, transfer to another case, visit another case for one character or one word, etc. One-handed typing with the handset is slower than two-handed typing with the standard keyboard. However, when the user works with one hand on the handset and one on the mouse, the coordinated interspersion of control characters and short literal strings from one hand with mouse control actions from the other yields considerable advantage in speed and smoothness of operation. For literal strings longer than about ten characters, one tends to transfer from the handset to the normal keyboard. Both from general experience and from specific experiment, it seems that enough handset skill to make its use worthwhile can generally be achieved with about five hours of practice. Beyond this, skill grows with usage.
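The letter chords can be sketched in a few lines. This is a minimal illustration, assuming the letter chords simply equal the binary numbers 1 to 26 as the text states; the real Augment case-shifting rules were more elaborate.

```python
# Decode a five-key chord, reading the five keys as a 5-bit number.
# Assumption: chord values 1-26 map directly to the letters a-z,
# per the quoted description; chords 27-31 belong to other cases.

def decode_chord(keys):
    """keys: tuple of five 0/1 values, one per finger."""
    value = 0
    for bit in keys:
        value = (value << 1) | bit
    if 1 <= value <= 26:
        return chr(ord('a') + value - 1)
    return None  # digits, punctuation and control live in other cases

print(decode_chord((0, 0, 0, 0, 1)))  # chord 1  -> 'a'
print(decode_chord((1, 1, 0, 1, 0)))  # chord 26 -> 'z'
```

With 31 usable chords per case and five cases, the handset covers the full character set while leaving the other hand free for the mouse.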

 

SERVICE-SYSTEM SOFTWARE

In the Augmented Human Intellect (AHI) Research Center at Stanford Research Institute a group of researchers developed an experimental laboratory around an interactive, multi-console computer-display system, and worked to learn the principles by which interactive computer aids can augment their intellectual capability. The research objective was to develop principles and techniques for designing an "augmentation system."

This document builds upon their findings in order to further the work of augmenting human intellect, for individuals and groups.




 
21st Century ARC:

 

Here are some basic requirements for re-igniting the revolution. A list of some of what will be done and some reflections on how to go about it. There are huge empty areas here. This is just a start.

Note, none of this is funded, none of this is official. But I aim to change that.



 
Scope of ARC:

 

The scope of the ARC will be to:

Support high performance teams

in researching & developing human-computer-augmentation (through basic research and through making existing research more accessible) where

the tools used by the HPT's will also be co-developed by the HPTs (just like with the original ARC) and

the continuing point will be: to make the results (whether research findings or new technologies {hardware and software}) available to industry.

As critical as the technological components are to the ARC, we must not lose sight of the changing technological landscape: any solution is likely to be superseded later, so it is critical to facilitate the evolution and development of continually better augmentation systems.

Terminology note: The system developed at ARC21 will be called OHS, for Open HyperDocument System. Simply because that is the term Doug has used when developing the concepts of advanced augmentation systems.

 

 

 

 
Methodologies - how to go about it

 
Smooth Upgrade Path - compatibility & accessibility

If we are going to change the world we can forget about a clean break with the current way of doing things. The world has become too enmeshed in Windows, the Web and the like.

Any work at ARC must, if it is to be really useful, be applicable as a smooth upgrade to what exists in industry.

There needs to be a different way to look into your current information environment. So a Web-based intermediary called the HyperScope is proposed. HyperScope will make your Web browser scream. It turns it from a TV into a swimming pool. And it gradually becomes more powerful and delivers more on the promise of augmentation systems. By then there will have been no major upheaval. You'll just wonder how you ever got along without it.

 

The HyperScope

The HyperScope will be a lightly modified web browser supported by an "Intermediary Processor" (IP) which operates between the browser and the files or databases holding the existing working knowledge of a collaborative community. The HyperScope is not an editor. A HyperScope user will be able to follow links into and between "legacy" files in a manner similar to using a browser with web-based HTML files. And there will be more: numerous new capabilities and features which will give a HyperScope user considerably more flexibility and working power than users limited to standard browsers and "legacy" editors.

 

Other smoothies

 
FLEXIBLE IMPROVEMENT. A flexible future upgrade path is built in. Unlike conventional software projects, OHS is at its core an evolving design which will not require dramatic re-writes to add substantial functionality.
 
MODULAR UPGRADE PATH. Features can be added as small additional plug-in projects rather than having to be built from scratch, giving organizations and even individuals the power to shape their information environment.
This is due to the explicit segmentation of the environment. Features can be slotted in along the input path, view generation, manipulation, background timed processes, output path or in any combination.
Depending on the magnitude of the feature and its impact on other parts of the environment, it will be seamlessly slotted in to the major free versions or it will have to be added to the customers own server, where the feature only becomes a part of a catalog of optional installations for others in the future.
Anybody can now add features. With traditional OpenSource projects you could only contribute by being a programmer or by having the budget and technical know-how. With the OHS project, however, there is an independent, non-exclusive option of using a centrally coordinated programming mediator, at very low cost.

Smooth upgrade of THE SERVER/DATABASE. The OHS servers will initially be focused on adding value to information stored on other servers and databases, through server-based intermediaries which will intercept user requests going out to external databases and add OHS functionality as the results are served to the user. Initially there is no database to maintain except the link database.

 
Smooth upgrade of THE USER FRONT END. Individuals and organizations can start employing OHS with considerable advantage alongside their current tools and environments.
Initial functionality will be provided by server-based intermediaries with no special requirements on the Web browser, which will be the user interface. Additional functionality not directly supported by the browser will become available through browser-launched Java applets and, later, full Java applications; other technologies will be employed as well.
This way there will be no "outside the OHS" nor a special "inside the OHS". Every time anyone accesses a link to something stored in a high-performance OHS database the result will be processed through the OHS system to deliver functionality through any current simple browser.
Ultimately a full OHS environment will evolve with further functionality, but always accessible through standard web interfaces.
EXAMPLE: To get to an OHS-enhanced document, use http://www.ohs... before the normal URL, or set up the browser to use the HyperScope.
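The prefix convention can be sketched as a tiny helper. This is an illustration only: the text elides the real intermediary address ("http://www.ohs..."), so the prefix and query-parameter form used below are hypothetical.

```python
# Route a normal URL through a HyperScope intermediary processor by
# prefixing it. The prefix "http://www.ohs.example/view?url=" is a
# made-up placeholder, not the real (elided) OHS address.
from urllib.parse import quote

HYPERSCOPE_PREFIX = "http://www.ohs.example/view?url="  # hypothetical

def hyperscope_url(target_url):
    """Wrap target_url so the intermediary fetches and enhances it."""
    return HYPERSCOPE_PREFIX + quote(target_url, safe="")

print(hyperscope_url("http://example.com/report.html"))
```

The point of the convention is that nothing is required of the browser itself: any standard browser that can fetch the prefixed URL gets the OHS-enhanced view.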

The OHS Interface Architecture will be set up explicitly to provide for multiple UIS options, with a common, full-feature Application Program Interface (API). To support extensive capability evolution, it will be necessary to provide for a range of UIS options, varying in complexity, potential competency level, difficulty to learn, types of interface devices and modalities, etc.

EXAMPLE: Being able to effectively support web-connected mobile phones is one example.

But a VERY IMPORTANT purpose here is to enable individuals, or special-role support teams, to experiment with interface equipment, functionality, and control options, together with optional special attributes of the standard Intermediary File, to pursue especially high performance at important parts of their knowledge processes.

This kind of exploration will be necessary in any event. Doing it with special extensions to the widely used OHS will be very important in enabling feasible migration of these tools and skills out into the rest of the communities. Moreover, doing this exploratory high-performance activity over the SAME WORKING domains amplifies that benefit immensely; motivated individuals can optionally acquire special interface equipment, take some special training, and move up to a "new class of user proficiency" (e.g. becoming a certified Class-4B Knowledge Integrator).

There are support roles anticipated in developing and maintaining a community's Dynamic Knowledge Repository (DKR) which could very well be taken on by specially trained High-Performance Support Teams. Such a team could for instance be fielded in a university (as a research project into High-Performance Collective Knowledge Work), and take on the "Knowledge Integrator" role for a professional society's DKR. And competitive exercises could be conducted among teams from different universities -- or companies, or agencies, or countries -- as part of an explicit process to facilitate improvement in "Collective IQ."

 

 
Legacy Support - compatibility & accessibility

In principle, this manner of HyperScope access can be implemented for any standard type of file or data base. The Project will establish the basic implementation conventions, and proceed to develop the translation and special I-File properties appropriate for a selected sequence of file/DB types -- planning tentatively for those to be used by:

A. the OHS-dev community (including open-source participants);

B. the Software Productivity Consortium's member community;

C. communities selected with NIH (and possibly cooperatively with DARPA) for strategic progression of co-evolving tool- and community-development processes.

Note: Here again, it is planned to facilitate Open-Source development so that many individuals and application communities can pursue specialty application needs and possibilities. (Facilitating this evolution is planned.)

 

 
Open Source - compatibility & accessibility

Implementation of the HyperScope and all of the later stages of the OHS are committed to being done as OpenSource development. There are clear and compelling reasons for this, stemming from the scale and rate of evolution which needs to be accommodated, and from the number of collaborative communities which need to be involved, PRO-ACTIVELY.

 

 
Built In Support For Competition:

By having APIs open and explicitly documented, competition is encouraged. It would not be of much value to create another Microsoft, even if such a thing were feasible.

A word on Microsoft and competition, from a legal and practical perspective: there is one way to make Microsoft into a fair and productive member of the industry: make all formats and protocols by which end-user data is moved open and accessible to developers.

This includes email protocols (Hotmail does not use standard POP, so a third party cannot build an email system which is compatible for Hotmail users) and document formats such as Word (a third party cannot maintain a professional word processor when Microsoft changes the Word format secretly, making reliable reading and writing of Word documents very difficult to maintain).

Let Microsoft do what they want to, but let other developers compete in a realistic environment where access is guaranteed whenever the end user moves his or her information.

It is not the tying together of the operating system, Windows, and the applications, Word, Excel, Explorer and so on, that is the problem; it is the monopoly of access to the user's data which creates the problem. An information environment where users may work on their documents on Windows using Microsoft programs and then continue on another operating system and/or another application would foster healthier competition and bring innovation into the computer business.

This approach would be actively and aggressively pursued by ARC.

 

 
Built In Support For Evolution:

Evolution of the Intermediary File format will be given careful attention since it is destined to become the format for the full Open Hyperdocument System (which will continue its evolution).

An OHS "User Interface System" (UIS) will be developed to provide a basic range of functions for moving, viewing and editing.

Provision for archiving, version control, etc. will be developed so that it becomes possible to develop and maintain an evolving knowledge base solely within an OHS environment -- with integrated flexibility and power accumulated from the best that was accomplished via HyperScope usage among the legacy files.

Now the VERY important feature of this approach to OHS development comes into play: task by task, or person by person, in almost any order and rate, users can start to keep their files entirely within the OHS environment. All the working material is still interlinkable, whether in OHS or legacy files.

And the critical community-development processes will become VERY important here -- to start the active "co-evolution" of the "Human System" and the OHS "Tool System" (as discussed at length in the "Bootstrap Publications").

For the scale of utilization that will be necessary, in number of inter-operating groups, in the diversity of inter-operable knowledge domains, and in the continuing changes in tools and skills, processes, etc. --

It will be absolutely critical that

A. the Tool System be as open to continuing evolution as can be managed, and

B. the application communities be specifically organized to participate pro-actively in the Human-Tool co-evolution.

It is sincerely hoped that organizations investing in the Stage-1 HyperScope development and use will do so with clear intent to be simultaneously readying their targeted application communities for becoming pro-active, "evolutionary participants."

 

 
Adherence To Open Standards - In support of evolution & competition

A firm adherence to open standards (with particular emphasis on transmission standards) is necessary to facilitate legacy compatibility, competition and evolution.

 

 

 
Explicit Documentation & High Level Of Encapsulation of Code - In support of evolution & competition

The code generated by the ARC team and by anyone outside must be very clearly documented, to the extent that a technically literate but non specialist can understand it. The documentation must be in-line with the code. Furthermore, the code must be highly encapsulated, with very clear APIs documenting what the code does, how to send information to the code and how to expect the information in return.

 

 

 
Basic Technologies To Develop

 

These are also the real, tangible initial benefits ARC will deliver to industry

 

What is required to be developed before real augmentation can take place? What basics are missing from today's augmentation/information environments?

Most of these are Augment technologies which got lost along the way. Some are new, but thanks to the emphasis on continuous evolution, this is just a taster. I will rush through these; there are many. If you have a question, please just interrupt me.

 

 
More Powerful & Flexible Linking:

 

High Resolution Addressability

Ideally, every object in a file should be targetable by a link. Not just a whole page.

 

Back-Link Management

Provision will be made to capture information about links into a specified collection of files, to establish a "Back-Link Data Base" (BLDB). For each such link, the information to be captured would include:

A. Explicit target object being cited;
B. The "foreign" location of the link;
C. The author of that other-file citation link.
D. The "Type" of link citation, as per the vocabulary of "link typing" adopted by the usage community, and provided for inclusion in "link syntax" by appropriate standardization processes.

NOTE: Link Typing has been advocated and discussed for many years. With the above HyperScope-facilitated BLDB, link-type utilization, within appropriately developed community conventions and practices, would offer very important enhanced capability for collective knowledge development. In a larger sense, it would enable a practical way to improve on the established academic convention of only publishing after appropriate peer review (with attendant time delays in the cycle of knowledge evolution). Here a promising alternative is offered: publish now, and let peer review and "evolving attribution" take place after. I.e., much more than just counting citations can here provide effectively attributed peer evaluation: explicit back-link assessment of trails can operate in many complex knowledge-evolution environments to isolate the key contributions (and also the key misleading entries).
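The four captured fields (A-D) suggest a very simple data model. The sketch below is illustrative only: the class and field names are invented here, not Augment's or the OHS project's actual schema.

```python
# A minimal Back-Link Data Base (BLDB): each captured link records the
# cited target (A), the "foreign" location of the citing link (B), its
# author (C), and its community-defined type (D). Names are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Link:
    target: str    # A. explicit target object being cited
    source: str    # B. foreign location of the citing link
    author: str    # C. author of the citing file
    type: str      # D. link type, per the community's vocabulary

class BackLinkDB:
    def __init__(self):
        self._by_target = defaultdict(list)

    def capture(self, link):
        self._by_target[link.target].append(link)

    def back_links(self, target, type=None):
        """All links citing `target`, optionally filtered by link type."""
        return [l for l in self._by_target[target]
                if type is None or l.type == type]

db = BackLinkDB()
db.capture(Link("paper.txt#sec2", "review.txt#p4", "alice", "critique"))
db.capture(Link("paper.txt#sec2", "memo.txt#p1", "bob", "support"))
print(len(db.back_links("paper.txt#sec2")))                    # 2
print(db.back_links("paper.txt#sec2", "critique")[0].author)   # alice
```

Filtering back-links by type is exactly what the "publish now, review after" idea needs: the critiques, corrections and endorsements pointing at a document become queryable data.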

 

Extended addressing conventions to improve linking power:

A. Relative Addressing: A conventional URL with a "#label" extension can position the HyperScope (initially) at a given object in the target file. Extended conventions will enable the link to point to subordinate objects. EXAMPLE: To a word in a paragraph, to an expression in an equation, ...

B. Indirect Linking: A very powerful extension to the relative addressing is a convention which directs the HyperScope to go to a specific location and then follow the link at that position -- and perhaps at the link's destination to do further relative positioning and "link following." This indirect linking provides very powerful functionality when users learn to harness it.

C. Implicit Linking: EXAMPLE: -- every word is implicitly linked to its definition in a dictionary; every special term is implicitly linked to its definition in that discipline's glossary; every instance of an object's name in a source-code file is implicitly linked to its implementation code; ...; every pronoun is implicitly linked to its antecedent. Special "jump" commands can be provided which can operate as though the term in question is explicitly linked to the "implicitly linked" object. (Jump to Definition, ...)
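Indirect linking (B) is the least familiar of the three conventions, so here is a toy sketch of the resolution rule: go to an address, follow the link found there, and repeat for the requested number of hops. The flat dictionary standing in for files-with-links is an assumption for illustration.

```python
# Indirect-link resolution: each address may carry a link onward.
# LINKS_AT is a stand-in for looking up the link at a file location.
LINKS_AT = {
    "notes.txt#p3": "spec.txt#p7",   # paragraph 3 of notes links to spec p7
    "spec.txt#p7": "code.txt#fn1",   # which itself links onward
    "code.txt#fn1": None,            # plain content, chain ends
}

def resolve(address, hops):
    """Follow up to `hops` levels of indirection from `address`."""
    for _ in range(hops):
        nxt = LINKS_AT.get(address)
        if nxt is None:
            break                    # nothing more to follow
        address = nxt
    return address

print(resolve("notes.txt#p3", 1))  # spec.txt#p7
print(resolve("notes.txt#p3", 2))  # code.txt#fn1
```

A real implementation would also apply further relative positioning at each destination, as the text describes; the hop loop is the core of the idea.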

 

Same file in multiple windows

No real limit there -- simultaneously allowing different positioning and different viewing portrayals of a given file (such as a dynamically generated outline in a separate window to serve as a clickable table of contents). Later, when editing of the Intermediary File (iFile, the HyperScope file format) is offered, any legal edit operation executed in one window will be reflected accurately and immediately in all other of that file's portrayal windows. This flexibility in utilizing multiple windows has surprising value when users learn to make effective use of it.

 

Non-Link Jumps

EXAMPLE: A click in a given paragraph, not on an embedded link, would hoist that paragraph to the top of the window.
EXAMPLE: Click-select a given paragraph, then Jump Next, Last, First, Origin, ...

 

Permanent Publishing - The Journal

The Journal is a persistent repository of information. Once a document has been submitted to a Journal you cannot change it, and neither can anyone else. Any change will take the form of an entirely new document with a new version number.
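The permanent-publishing rule amounts to an append-only store. A minimal sketch, with invented names (the real Journal in Augment/NLS had its own cataloguing scheme):

```python
# Append-only Journal: submissions are immutable; a "change" is a new
# document version. Entries are never overwritten or deleted.
class Journal:
    def __init__(self):
        self._store = {}   # (name, version) -> text, write-once
        self._latest = {}  # name -> latest version number

    def submit(self, name, text):
        version = self._latest.get(name, 0) + 1
        self._store[(name, version)] = text
        self._latest[name] = version
        return version

    def read(self, name, version=None):
        v = version if version is not None else self._latest[name]
        return self._store[(name, v)]

j = Journal()
j.submit("plan", "first draft")
j.submit("plan", "revised draft")  # new version; version 1 untouched
print(j.read("plan", 1))  # first draft
print(j.read("plan"))     # revised draft
```

Because old versions never change, links into Journal documents stay valid forever, which is what makes the Journal safe to cite.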

 

Double-click Jumps

First click indicates what jump is desired; second click can be in any other window, indicating where the jump-result view is to be portrayed. Whatever viewing spec already established in the target window will also prevail when the jumped-to file/location is portrayed there. Also, in the interval between window clicks, icon or menu clicks, or character input, can indicate the new viewing spec if the user desires something different from what is currently set for the target window.

We assume that the above capabilities would be useful to almost any collaborative community, essentially as soon as adequate HyperScope-application support services could be provided.

 

Copying-Pasting HyperScope Links

When viewing a legacy file via the HyperScope, a user will easily be able to install a HyperScope link (HS-Link) in any legacy file, targeting an explicit location in the file being viewed on his HyperScope. Clicking on the desired target object in a HyperScope "Copy mode," he can subsequently turn to the "legacy editor" and "Paste" the appropriate link into the legacy file. Later execution of that link will take any subsequent HyperScope user to the desired, specific location and with the specified view.

 

 

 
More Powerful & Flexible Command Interaction:


Instead of entering single, discrete commands to our computer as we do today, such as 'Save', 'Open' and so forth, we will have the flexibility and power of communicating our commands in whole sentences.

Full sentence interaction comes through the simplicity and power of three things working together:

objects,
commands
and
modifiers.

You feel the power of full-sentence interaction any time you turn away from your computer in your office to talk to a person. OHS is as flexible as talking to a person. You command your information and your tools as richly as you command language, and the result is true end-user programming. Today we live in a world of programmers and end users. OHS will change that by giving end users more flexibility to build their own tools and environments.

Instead of the prevailing simplistic icon based interface, which is easy to learn but inherently limited (imagine speaking by clicking on words), OHS builds on a noun (object) and verb (command) language (imagine speaking by typing on a keyboard alphabet. Never mind, you don't need to imagine that...).

The system of nouns (objects) and verbs (commands) (with high definition dynamic linking) can be combined, given simple logic and even timed.

The objects can be located/specified through both explicit navigation, such as URLs, and contextual navigation, saying for example 'go up one level in the folder hierarchy'. The nouns can be of any granularity (fine or coarse) as required.

The commands are any programmable function the computer can perform on a noun (object).

The logic is simply the ability to add decision-making to commands, based on the If/Then structure. For example: if the data returned from a Web page includes such-and-such text, then do this; otherwise do something else.

You will be able to issue commands like 'transpose these two elements', not just copy and paste.
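The noun/verb/logic scheme can be sketched as a toy interpreter. Everything here is illustrative: the command names and the in-memory "document" are invented for the sketch, not OHS's actual command language.

```python
# Toy noun (object) / verb (command) sentence with an If/Then step.
doc = ["intro", "methods", "results"]

def transpose(objects):
    """Verb: swap the two addressed objects in the document."""
    i, j = objects
    doc[i], doc[j] = doc[j], doc[i]

def run(sentence):
    """sentence = (verb, objects, optional condition on the doc)."""
    verb, objects, condition = sentence
    if condition is None or condition(doc):  # the If/Then decision
        verb(objects)

# "Transpose these two elements, if the document mentions 'results'."
run((transpose, (0, 2), lambda d: "results" in d))
print(doc)  # ['results', 'methods', 'intro']
```

The key property is that verbs, nouns and conditions compose freely, like words in a sentence, rather than being frozen into fixed menu commands.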

You can insert sections from another document and the sections will be pasted with a link to where they came from. Not just which page: exactly where, down to the paragraph or sentence.

You can lay your work out in any way you want. You can place snippets of information on the side if you want, ready for insertion. This is not a linear document anymore.

The power of OHS extends to every corner of your digital information environment. Notes written on your PDA are no longer thrown into the information black hole, they become as easily and browse-ably available as any of your other text.

Presenting your information becomes more than text on a page. Clip art will be in three dimensions with systems and time-lines embodied, with scriptable access:

For example: Want to show the effects of a drug on the brain? Just highlight the areas, flows and relationships on your intelligent brain model. No problem.

You are not restricted to what kinds of 3D models you can use: Want to show migration of people around the world? Drag arrows around the globe and assign width by number of people. Should take you about a minute. You want to add a time-line to allow the reader to see changing migration patterns? Another couple of minutes. Specialist 3D programmer not required.

Adding references from email discussions and newsgroup or mailing list discussions will provide access to the full thread.

Daily nonsense like dealing with large numbers of repetitive emails evaporates. So do maintenance hassles. You have more time for your work.

 

Smart Words and SmartGrammar.

You can effortlessly request a list of effects of "global warming" and expect the document to be scoured for adjectives and verbs which relate grammatically to "global warming". For instance, if there is a sentence in the document which says "Global warming will raise sea levels", then the result of a search for "global warming" will list "...will raise sea levels".
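A crude approximation of that search can be written with a regular expression. This is only a sketch of the idea: real SmartGrammar would need actual grammatical parsing, while this naive version only handles the simple case where the term opens the sentence as its subject.

```python
# Naive "Smart Words" search: report what a term "does" by returning
# the remainder of any sentence that begins with the term.
import re

def effects_of(term, text):
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        m = re.match(re.escape(term) + r"\s+(.*?)[.!?]?$", sentence, re.I)
        if m:
            results.append("..." + m.group(1))
    return results

text = "Global warming will raise sea levels. Rainfall shifts too."
print(effects_of("global warming", text))  # ['...will raise sea levels']
```

Even this crude version shows why the result is more useful than a plain keyword hit list: it returns predicates, not pages.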

Maybe the document is very technical and includes a glossary with definitions of words, terms and acronyms. These become accessible through a pop-up menu on the words and terms themselves on the page. You can't rely on everyone diligently including a glossary however. Therefore you can refer to an external, relevant glossary. If no relevant glossary exists, you can quickly jump to the first occurrence of a word, which will often have a definition attached as it is introduced. For example, if you were reading this document in OHS and wanted to see what 'OHS' means, you could just look at the first occurrence of OHS in this document.

Maybe the document has a cast of characters you need to understand the relationship between. You can easily generate a list of all names and have the list attach descriptors and relationships as well as links and excerpts to where they appear in the text. Useful whether the characters are people or widgets or technical terms.

You may have to find external links which relate to a specific term, but you cannot rely on the term being included in the link. No problem. Just specify that you'd like all sentences which include the term (or all paragraphs if you think that's better) and any Hypertext link.

 

 
Collaboration Support - recorded and accessible dialogue:

Chances are you don't work alone. You will need to engage in dialogue, but unlike in the non-augmented world, the dialogue will be recorded and accessible.

Coffee shop and meeting room discussions will be recorded and transcribed, with the different voices recognized and tagged. All references, such as documents on the table which are already in the system, as well as hand-drawn illustrations generated during the meeting, will become linked; all you need to do is read out the document's title. Hand-drawn notes can easily be scanned and referenced later.

The recorded dialogue will be stored in searchable text and audio formats, or even video, should you have recorded it with a camera: whatever you prefer. You could be sitting with a laptop, have everyone new quickly introduce themselves, and know that what they say will be filed under their name later. Any brilliant ideas will be properly attributed.

The agenda drawn up during the meeting will be available to all, and it will be trivial to see how it was agreed upon. Contact lists will automatically be generated: you can easily send an email to "lunch meeting last Friday".

Remote collaboration will be supported by audio and video conferencing, again, all recorded and available later.

Remote text-based collaboration, in other words email, newsgroups and mailing lists, is no longer a river of text flowing aimlessly away from the group; it has become accessible in an efficient and usable manner. Consider this detail: when you read a document (or discussion group article) and want to comment, you will be presented with a button right there on the document. Not just a 'reply' or 'email the author' button, but a 'comment', 'critique' or 'correction' button. The very act of clicking to reply adds useful, searchable data.
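To see why the typed reply button matters, consider what the data might look like. A minimal sketch (the `Reply` record, its field names, and the sample thread are my own illustration of the idea, not an OHS data format):

```python
from dataclasses import dataclass

@dataclass
class Reply:
    """Typed-reply sketch: the reply's kind ('comment', 'critique',
    'correction') is captured as searchable metadata at the moment
    of clicking, instead of being lost in free-form text."""
    author: str
    kind: str    # 'comment' | 'critique' | 'correction'
    target: str  # address of the statement being replied to
    text: str

thread = [
    Reply("alice", "critique", "doc1#p3", "The estimate seems high."),
    Reply("bob", "correction", "doc1#p3", "Should be 1968, not 1969."),
]

# Later, the community can ask structured questions of the dialogue:
print([r.author for r in thread if r.kind == "correction"])
# → ['bob']
```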

It's the whole 'later' episode which makes the recording interesting. Want to know who was present at every meeting (including mailing list discussions and newsgroups) where a certain issue was discussed? Want to know who was positive and who was negative? Smart Words are available here as well. So who has been contributing to building a community and actively engaging in peer review? Who is clearly the expert on a topic?

You've recorded the dialogue. It is all accessible dynamically.

 

 
More Powerful & Flexible Views:

The user should always have control of how he or she wants information presented.

 

View-Specifications:

The HyperScope will offer a set of "transcoded viewing options" which a user can selectively employ to examine that file.
EXAMPLE: just show me the first line of each paragraph.

From past experience it is expected that users will invent many variations of the ways they would like to view portions of their files, under different circumstances, often shifting rapidly between views just as one might rotate a physical object, or shift its distance, to get a better understanding of what is there.
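The "first line of each paragraph" example above is simple enough to sketch directly. A minimal Python illustration (the function name and the assumption that paragraphs are separated by blank lines are mine):

```python
def first_line_view(document):
    """Viewspec sketch: 'just show me the first line of each paragraph'.
    Paragraphs are assumed to be separated by blank lines."""
    paragraphs = [p for p in document.split("\n\n") if p.strip()]
    return "\n".join(p.splitlines()[0] for p in paragraphs)

doc = ("First line of para one.\nMore detail here.\n\n"
       "First line of para two.\nEven more detail.")
print(first_line_view(doc))
# → First line of para one.
#   First line of para two.
```

The point of a Viewspec is that this is not a one-off transformation but one of a set of named views the reader can flip between as quickly as rotating an object.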

It is planned to enable the option of incorporating a "view specification" (Viewspec) into a link, so that a subsequent user will not only have execution of that link take him to the desired specific file location, but will also see the contents there with the specified view.

Considerable evolution is expected to take place here. In the "open-source" mode, many groups would be experimenting and tuning, contributing to the evolution.

User-Specified Content Views/Filters

A simple content-analysis language may be used in a "Set Content Pattern" command, which compiles a little content-checking program. One of the view-specification options will cause the system to display only those statements which satisfy both the structure and level conditions imposed by other Viewspecs, and which also pass the content-analysis test applied by this program. Where desired, very sophisticated content-analysis programs may be written, using a full-blown programming language, and placed on call for any user.

With OHS, documents are no longer static; they are much more dynamic than what we are used to from word processors and even the World Wide Web. You can expand and collapse the document as you see fit and see it in as many windows as you like. Outlines will be generated dynamically as you specify how deep a level each window is to show: one window could show chapter headings, another the first line of every paragraph to allow you to skim through the chapter, expanding and contracting, formatting and annotating as you go along.
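Collapsing an outline to a chosen depth is the core of this. A minimal sketch, assuming each statement carries an outline depth (the representation as `(depth, text)` pairs and the sample outline are my own illustration):

```python
def outline_view(statements, max_depth):
    """Outline-viewspec sketch: show only statements at or above a
    given depth. Depth 1 = chapter headings, deeper = detail."""
    return [text for depth, text in statements if depth <= max_depth]

doc = [
    (1, "Chapter 1: The Vision"),
    (2, "Augmenting human intellect means..."),
    (3, "A note on terminology..."),
    (1, "Chapter 2: The Tools"),
]
print(outline_view(doc, 1))
# → ['Chapter 1: The Vision', 'Chapter 2: The Tools']
```

Changing `max_depth` per window gives exactly the behavior described: one window skimming headings, another showing full detail, all from the same underlying document.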

You can quickly and easily choose to only see paragraphs which include a certain word.

Sentences can be color coded as you see fit. Maybe you'd like all verbs in blue. Maybe not. Or all names in green. Or links in purple.

You might want to see a list of effects of a certain word, because OHS uses Smart Words(TM) and understands grammar, as described above.

You see the power and flexibility here?

 

 

 
Multiple Levels Of User Interface

A "look-and-feel interface" software module will be located between the CLI and the window system. Providing optional modules for selected look-and-feel characteristics will serve an important practical as well as evolutionary need. One basic constraint will be necessary here: when working interactively, no matter what particular look-and-feel style is being used, a user has a particular mental model in mind for the significance of every menu item, icon, typed command, or hot command-key combination employed.
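The architectural point is that different look-and-feel modules map onto one shared command vocabulary. A minimal sketch of that layering (the class, the command names, and the two bindings are my own hypothetical illustration, not the actual OHS module design):

```python
class CommandVocabulary:
    """Sketch of the shared layer: every look-and-feel module
    (menus, typed commands, hot keys) maps its own gestures onto
    the same underlying commands."""
    def __init__(self):
        self._commands = {}

    def register(self, name, fn):
        self._commands[name] = fn

    def invoke(self, name, *args):
        return self._commands[name](*args)

vocab = CommandVocabulary()
vocab.register("show-first-lines", lambda text: text.splitlines()[0])

# A menu-driven interface and a terse typed interface both resolve
# to the same command, so the user's mental model carries over:
menu_binding = {"View > First Lines": "show-first-lines"}
typed_binding = {"vf": "show-first-lines"}
print(vocab.invoke(menu_binding["View > First Lines"], "line one\nline two"))
print(vocab.invoke(typed_binding["vf"], "line one\nline two"))
# → line one (twice)
```

This is why a user can graduate from the spelled-out graphical interface to a terser expert one without relearning anything fundamental: only the binding changes, not the vocabulary.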

Users will automatically learn about their tools and materials, intuitively coming to understand the underlying common-vocabulary terms no matter what form of user interface they employ, and may move from more graphically pretty interface modules which spell out potential options at every juncture to simpler interfaces which rely more on knowledge of the system.

Besides relaxing the troublesome need to make people conform to a standard look and feel, this approach has a very positive potential outcome. So far, the evolution of popular graphical user interfaces has been heavily affected by the "easy to use" dictum. This has served well to facilitate wide acceptance, but it is quite unlikely that the road to truly high performance can effectively be traveled by people who are stuck with vehicular controls designed to be easy to use by a past generation, built around click-on-the-icon over-simplicity.

As important classes of users develop larger and larger workshop vocabularies, and exercise greater process skill in employing them, they will undoubtedly begin to benefit from significant changes in look and feel. The above approach will provide open opportunity for that important aspect of our evolution toward truly high performance.

The bottom line is that OHS is easy to get into and easy to evolve in.

 

 
Organizational:

 

 

 

 

 
Resources

Not Available. Yet.

 

 

 

 

 
Funding

Not Available. Yet.

 

 

 

 

 

 
Development Schedule:

 

 

A clear development schedule is crucial to get funding and to deliver the augmentation 'deliverables' on time. As Steve Jobs said to the original Macintosh team: "Real artists ship".

 

 

 

 
Conclusion:

 

 

So there you have it. Doug changed the world. He thought about which problems are important, wondered how we could best deal with them, and decided what he could do about it.

He worked hard for many years against the odds and got a team together and realized the vision. Pretty simple. In retrospect.

At the time, though, people thought he was, well, a little mad. Now, thirty years later, the same damn people are heaping accolades on him.

But you know, he hasn't been asleep for these last thirty years. Though it's still the same basic, powerful vision, he has continually refined and expanded it. It's still the same vision of "augmenting human intellect", by which he means increasing the capability of someone to approach a complex problem situation, to gain comprehension to suit his or her particular needs, and to derive solutions to problems.

HOWEVER, in this day when 'ease of use' is king, there is little time and effort left for augmentation. Who wants to make racing bikes when all that is selling is 'easy to use' tricycles? I guess it's about as crazy as trying to sell word processing in the age of mainframes.

A lot of people think Doug did the mouse, and that's great, and thank-you very much.

But hold on a second. The history of human-computer interfaces we are taught today is basically wrong. We hear how it was all this awful DOS thing where you had to type in hard-to-remember commands, and then 'poof', there was the Mac and you could click your way around. Wrong. There was Augment, with the full sentence interaction we discussed, and you could communicate with your information-

- as richly as you can with language.

And you don't interface with computers anyway; you interface with your information, with other people, and with your own thoughts, through the computer and computer-augmented information environments.

And then there was the beautiful and important but linguistically dumb Mac (which, incidentally, I do love, but it's more a pet than an augmentation system...). Click. Do this. Click. Do that. Not really rich interaction.

You know, when I explain the whole full sentence interaction thing to my friends - or anyone who will listen, they feel cheated.

And full language interaction is not the only feature of Augment we don't have today. There's high-resolution linking: link to any part of a document, not just the document itself.

There is the problem of broken URLs on the net, and of verifying who published what. For that there was the Journal: permanent publishing in a record which could not change.

I can go on. And I will, to repeat some of what I have said earlier :-) We look at the Web the way specific Web site designers want us to. It's very much a billboard, with very, very little flexibility in how you view your information.

And there I'll stop, and just say this: Doug still has the vision, and it's sharper, clearer, and more relevant and important by the day. It's time he got an Augmentation Research Center for the 21st century to continue the revolution.

Thank you for listening.

 

 


 

 

 

Useful Terms

NLS (oN-Line System, later renamed Augment) was first demonstrated to the public at the 1968 Fall Joint Computer Conference in a remarkable 90-minute multimedia presentation, in which Engelbart used NLS to outline and illustrate his points, while others of his staff linked in from his lab at SRI to demonstrate key features of the system. This demo of NLS was the world debut of the mouse, hypermedia, and on-screen video teleconferencing.

The OHS Project is developing open source environments and tools for collaborative knowledge management, building on XML and other open standards as well as NLS. OHS is being designed to manage and create knowledge across the Internet, allowing users to package and share information for collaborative work. OHS is truly scalable and evolutionary.

The HyperScope will be the first component of the OHS. The HyperScope is a browser-like tool that allows the user to access the features of the OHS. Its relationship to the OHS is the same as a normal browser's relationship to the Web -- but the HyperScope will allow the user to navigate, access, and distribute information in new and powerful ways. Its first incarnation will be that of a server-based intermediary with no modification of the user's Web browser. This will be followed by browser plug-ins, and it will potentially become a full browser in its own right.

 

 

 

 

 

Useful Links

Doug Engelbart official page.

Bootstrap Institute (Doug Engelbart's Research Institute): http://www.bootstrap.org/

Bootstrap OHS Resources
Bootstrap OHS Resources (Technical)
Bootstrap OHS Resources (Glossary)

Google Search for OHS and HyperScope

www.liquid.org, which contains the Engelbart resources listed here:

Historical Background & Conceptual Framework

Engelbart Audio Interviews
Easily digestible audio snippets from Doug.

Augment/NLS Tutorial
Worth reading to get an idea of the power and simplicity of use of the system.

Augmenting Human Intellect, A Conceptual Framework by Doug Engelbart.
The document which set the stage back in 1962 for how to proceed in the pursuit of augmenting man's intellect. "By 'augmenting human intellect' we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems."

Open Hyperdocument System (OHS)

OHS Overview Document by Doug Engelbart.
The idea behind the Open Hyperdocument System, straight from the man himself

OHS in use : Thought Processing vs. Word Processing.
Currently in Draft stage. The last sections are very rough.

HyperScope

HyperScope LIVE Concept Demo
The HyperScope is the first start towards an Open Hyperdocument System. It will be a lightly modified web browser (through JavaScript, Java and plug-ins, not a completely separate application) supported by an "Intermediary Processor" (which operates between the browser and the files or databases holding the existing working knowledge of a collaborative community). The HyperScope is not an editor. This demo basically just adds a control frame to the bottom of browsed Bootstrap documents, allowing the user to see the document in outline form, to highlight any keyword, and to address each paragraph separately.