
Web 8.0: Are my predictions coming true?

Way back in 2006, when “Web 2.0” was all the rage (yes, it was that long ago), I wrote a blog post and created a fake conference about Web 8.0. Here are my seven-year-old predictions. How am I doing? Do any of them still seem plausible? Did we skip over Web 4.0, or have we just not gotten there yet?


Chris Minnick’s Plan for Web 3.0 to Web 8.0

Web 1.0: Top-down, authoritative Web (finished)

Web 2.0: Bottom-up, community-driven Web (finished)

Web 3.0: Anonymity-seeking Web. Everyone’s sick of posting to their blogs. Search engines and social networking sites misuse and abuse customer data, and anonymity services that will remove you from the Web and mask your identity while you surf become popular. (early adopters entered Web 3.0 in September 2006)

Web 4.0: Proof of Identity Web. People realize that there are times when they want to be identified online, but this has become difficult as a result of the fake identities they created during Web 3.0. Encryption and digital signatures become widely used.

Web 5.0: Face-to-Face Web. Digital signatures and encryption are too difficult for most people to use correctly. Online video meetings become popular. The Virtual Hang-Out Machine, a $100 piece of consumer hardware that enables video conferencing between multiple people, is invented and becomes as widely used as social networking web sites were during Web 2.0.

Web 6.0: Protecting the Children / Censorship Web. A few cases of the Virtual Hang-Out Machine being used by pedophiles lead to government hearings and federal legislation to protect the children.

Web 7.0: Total breakdown of the Web. Web 6.0 laws have the effect of making the Internet much more difficult to use. People stop using it, and the number of Web sites starts going down.

Web 8.0: Resurgence of the Web. A new and improved Web emerges and it looks almost exactly like the one that was predicted at the 1st Annual Web 8.0 Conference, which was held in 2007.


Nano-Uh Oh

So, it doesn’t look like I’m going to be able to finish my 50,000-word novel this month. I don’t quite have 10,000 words yet, and it’s November 18. Darn. The good news is that this is the most fiction I’ve written since college, and I’m having a lot of fun with it when I actually do manage to find/make the time.

On another writing note, my first post on InternetEvolution.com went up last week. Check it out.

Sweatshops of the future

When the renowned computer scientist Jim Gray went missing at sea last week, the Internet community rose up to help with the search in any way it could. One of the more ingenious methods employed, although unfortunately not a successful one, was to upload thousands of satellite photos of the area where he disappeared to the Web and ask volunteers to examine them for objects that might be his sailboat.

This massive online manhunt was enabled by Amazon.com’s Mechanical Turk service.

Mechanical Turk, named after the chess-playing mannequin illusion of the 18th century, is a web application that enables “artificial artificial intelligence” by dividing up large jobs such as photographic analysis or data entry into chunks that can be performed by people over the Internet. The basic idea is that certain types of jobs, such as jobs involving pattern recognition or identification of objects in photographs, are more easily and accurately performed by humans than by computers. If these types of jobs also involve working with large amounts of data, splitting the work up among large numbers of humans can be a more efficient (read: cheaper) method of doing the work than writing or utilizing specialized artificial intelligence software.

Services such as Mechanical Turk are being touted by some as an important development in harnessing the “collective intelligence.” Some have gone so far as to suggest that such human-powered Web applications represent a new paradigm, possibly even a new version number of the Web and a new way of doing business on it. Frankly, I love the idea of being able to enlist volunteers to search photographs for a missing person, but I’m disturbed by the idea of using random Web surfers to boost your corporate profits.

Mechanical Turk allows anyone to create a job that they’d like other people to complete on the Web. The creator of the job can specify the amount they’ll pay for each HIT (Human Intelligence Task). Typical tasks involve looking at photos and identifying certain objects in them, or transcribing podcasts. Typical payments range from 1 cent to $1 per HIT. You don’t get paid for your work unless it’s accepted by the creator of the task.
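Just to make this concrete: here’s a rough sketch of what posting a HIT looks like from the requester’s side, using Amazon’s current Python SDK (boto3) against the requester sandbox. The task URL, reward, and other values are placeholders I made up, not a recipe I’ve actually run.

```python
import boto3

# Point at the requester sandbox so experiments don't spend real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion just points workers at a page you host;
# the URL below is a placeholder.
question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/transcribe?clip=42</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Transcribe a 9-minute podcast",
    Description="Listen to a short audio clip and type what you hear.",
    Keywords="transcription, audio, writing",
    Reward="0.96",                     # dollars, passed as a string
    MaxAssignments=1,                  # how many workers may complete it
    AssignmentDurationInSeconds=3600,  # time allowed per worker
    LifetimeInSeconds=86400,           # how long the HIT stays listed
    Question=question,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```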

At first glance, these rates and types of tasks looked to me like the digital equivalent of sweatshop labor. Companies are farming out jobs that are too boring, or too costly to have someone on payroll do, to the ‘collective intelligence’ for very low wages. I decided to try out a few tasks to get a better feel for whether it’s possible to actually make a living off this type of work, or whether it’s really just exploitation and the first step toward “The Matrix.”

At 10:00 AM, I sat down with my laptop and a cup of coffee and started in on my first workday at Mechanical Turk.

I began by going to the list of available HITs and sorting them highest paying first. I’m not cheap. The first HIT said it paid 96 cents, with a bonus of up to twice that for accuracy, and there was a qualification test. The ‘test’ turned out to be just clicking a checkbox and hitting a submit button. So far so good.

The task was to transcribe a 9-minute podcast. I’m a fairly fast typist, but I guessed that transcribing a podcast would take at least three times the length of the podcast, and there was a style guide to follow, too. In theory, if I were to complete the task perfectly, I could earn $2.88 [I actually ended up making $1.92].

In the interest of making the best possible use of my time, I started listening to the podcast – something about making money on eBay – while looking over the style guide.

At 10:15, I started my work.

At 11:20, after typing 1,570 words, I finished my first job. I had grossly underestimated the amount of work it would take to transcribe 9 minutes of audio. So far, the best-case scenario was that I was making slightly under $3/hour. I decided to move on to my next HIT.
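For anyone who wants to check my math, here’s the arithmetic as a quick sketch (the times are rounded from my notes above):

```python
# Effective hourly wage for the transcription HIT (values from above).
best_case_pay = 0.96 * 3   # 96 cents plus up to a 2x bonus = $2.88
actual_pay = 1.92          # what the HIT finally paid out
minutes_typing = 65        # 10:15 to 11:20

for label, pay in [("best case", best_case_pay), ("actual", actual_pay)]:
    print(f"{label}: ${pay / (minutes_typing / 60):.2f}/hour")
# best case: $2.66/hour
# actual: $1.77/hour -- and that's before the 15 minutes of prep
```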

The next task I found was a simple Google bombing job. I was to search for a certain phrase on Google and then click on a certain company’s link. The idea (not spelled out in the HIT description, of course) is that the cumulative effect of hundreds of people doing this same search will improve that company’s rank in Google. I refused this one on ethical grounds.

Several other jobs involved registering on different sites, various search engine scams, Google AdWords fraud, and a lot of people looking for creative people to give them ideas or write content for their sites for 25 cents. At 11:30, repulsed by the nature of the majority of the jobs and by the piddly amounts people were willing to pay for my writing talents, I decided to quit my job as a human CPU.

Yelling “Take this job and shove it!” to my laptop wasn’t very satisfying, but it was actually the first time I had ever spoken those words when quitting a job, so that was one positive aspect of the experience.

Jim Gray is known for his work with databases, and with several very large databases in particular, including, ironically, Microsoft’s database of satellite images, TerraServer. He was also a recipient of the Turing Award, which is named after Alan Turing, who famously proposed the Turing Test of artificial intelligence.

I never met him, but I suspect that Dr. Gray would agree with me that the future of the Internet should not be one in which human intelligence is devalued simply because we now have the technology to give 10 cents to anyone who’s willing to do our most menial tasks for that amount.

Tragically, as of this writing, it seems very unlikely that Jim Gray will return. The amazing volunteer effort to find him stands as a testament to the power of the Internet community and of the Web to bring people together. My experience today with the for-profit uses of the same technology, however, reminds me that the collective intelligence aspect of Web 2.0 also enables less noble endeavors.

NINJAM – One more reason not to leave the house!

As I mentioned earlier, the Gangster Fun reunion show completely re-ignited my interest in performing music. When I got back home, however, all of my musician friends were too busy with jobs, kids, commuting, and the rest of their lives to want to jam on a regular basis.

I started looking for a technological remedy for my need to rock.

The first solution I looked at was email. I’ve worked on several projects where files are emailed between two or more musicians, each of whom does his or her own thing and sends it back to a person who assembles it and mixes it. This technique works pretty well, but it lacks spontaneity. I wanted to jam.

After doing a little more research into online music, I discovered an open source project called NINJAM (which stands for Novel Intervallic Network Jamming Architecture for Music). NINJAM allows people to play music together over the Internet. Because transmitting audio over the Internet introduces latency, the biggest technological obstacle to real-time music collaboration is keeping everyone synchronized. For a good example of the problem, try calling someone in the same room as you on your mobile phone and telling them to sing along with the voice on the phone, not your live voice. To an observer watching you both sing, but not listening on the phone, it will appear as if you’re singing out of sync, because of the time it takes your voice to travel through the mobile phone network. This is the same problem you run into when trying to jam over the Internet.

Instead of trying to minimize the latency, NINJAM takes a very novel approach–it makes the latency much longer. So, imagine that instead of your friend hearing your voice 1 second later, you could tell the phone to play your voice on the other end exactly 16 beats after you sing it! Now, all you have to do is sing a pattern that repeats every 16 beats (like the song “Row, Row, Row Your Boat”) and it will appear to the observer that you and your friend are singing together, even though your friend is actually 16 beats behind you. NINJAM can manage multiple connections at the same time using this scheme. I’ve seen jams involving 8 to 10 different people from all over the world.
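Here’s a minimal sketch of the delay trick in Python, assuming a fixed tempo and a 16-beat interval. (Real NINJAM also negotiates the interval with the server and compresses the audio, so treat this as the core idea rather than the actual implementation.)

```python
from collections import deque

SAMPLE_RATE = 44100   # audio samples per second
BPM = 120             # the jam's agreed tempo
INTERVAL_BEATS = 16   # NINJAM-style interval length

# One full interval's worth of samples: the fixed delay NINJAM
# imposes instead of trying to minimize network latency.
delay_samples = int(INTERVAL_BEATS * (60 / BPM) * SAMPLE_RATE)

# Prime the buffer with silence so the first interval plays nothing.
buffer = deque([0.0] * delay_samples, maxlen=delay_samples)

def process(incoming: float) -> float:
    """Take one incoming sample and return the sample from exactly one
    interval ago. Everyone hears everyone else one interval late, so
    repeating 16-beat patterns still line up on the bar."""
    delayed = buffer[0]       # the oldest sample, one interval back
    buffer.append(incoming)   # at maxlen, this drops buffer[0]
    return delayed
```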

After reading up on exactly how NINJAM works and listening to some recordings on the Web site, I installed the NINJAM client on my computer, plugged in a microphone, and connected to a server. The first thing that happened was that I felt unworthy to be playing along with the group I dropped in on. I quickly bowed out and started a new jam session with just me. For kicks, I slowed the tempo down and changed the beat to something I thought would be interesting — or at least funny. After a while, a guitar player joined (apparently, there are a LOT of guitarists on NINJAM). It was a mess. I didn’t know what I was doing, and dealing with the latency is enough trouble without also having to worry about unusual beats. I’m sure the guy I was playing with was good, but we sounded horrible, and he gave up after 10 minutes. Undeterred, I kept on jamming by myself…I’d be doing that anyway, I figured. A little later, a bass player showed up. This time it went a little better. We weren’t exactly rockin’, but it was passable–in a hanging-out-in-the-basement-making-music sort of way.

I’m still very much a NINJAM newbie, but I’m very excited about the possibilities. The fact that you’re not actually playing “live” with the other musicians is limiting in some ways, but the experience of going online and playing in a band with a bunch of good musicians from all over the world, any time you want, is mind-blowing.

THIS is what the Internet is good for. No more drummers who are late to practice. No more drinking too much at open mic night to get up the courage, only to make a fool of myself because I’ve had too much to drink. Best of all: there are no more excuses for not playing live music regularly. I’m getting ready to call my “real life” friends and tell them that if they ever want to play music, they should wait until the kids are asleep and get on the computer.

Where is Web 3.0 going? To Monaco, of course

While searching the Web recently to find out what other pundits believe Web 3.0 will come to mean, I found a blog post by Stephen Baker on BusinessWeek.com in which he says that his “assignment in Monaco was to lead a panel in defining Web 3.0.” After summarizing the ideas that his panel came up with, he ends his post by asking readers what they think. My favorite comment on this post (from ‘bob’) simply says “I think you all wasted your time.”

While it’s very doubtful that a trip to Monaco could be considered a waste of time (especially if it’s paid for by someone else), I certainly agree that serious discussions of questions as meaningless as “What features will the next version of the Web include?” are largely a waste of time concocted by marketers and conference-planners. The people who will build what will come to be called ‘Web 3.0’ don’t have these sorts of discussions. So, here I go again.

The fact of the matter is that most people define Web 3.0 in terms of what they’d like to see happen. Some say it will be defined by the widespread adoption of SVG, some say the key concept is “software as a service,” some say Web 3.0 will be when we fix the bugs in Web 2.0.

Personally, I believe that one of the biggest unsolved issues faced by the Web right now involves trust. Wikipedia, Google, Yahoo! Answers, and hundreds of other sites that are heralded as models of Web 2.0-ness all rely on user-contributed content. The theory goes that a crowd of people is smarter than the old-style “gurus.” Whether or not that’s true is a topic for another article (and maybe an experiment). The relevant issue that I think Web 3.0 will deal with is “Who do you sue when the Web 2.0 community gives you bad information?”

Traditional media companies have rules governing things like fact-checking, use of anonymous sources, printing rumors, and separating advertising from editorial. At some point in the past, they even followed these rules.

Today, this isn’t the case, and the media likes to blame it on the free-wheeling ways of the Internet. The logic goes like this: “Someone published this irresponsible or wrong information on the Internet, and so we in the mainstream media can report on the fact that someone reported this information on the Internet. If this information later turns out to be false, don’t blame us, blame the Web 2.0 bloggers.”

However, because many of these bloggers are anonymous or just repeating things they heard on some other blog, there’s no one in particular for mainstream media outlets to point a finger at when they get in trouble for reporting something they read on the Internet. They want this situation remedied pronto!

Clarifying who should be blamed will be the driving force behind Web 3.0. Just as Web 2.0 has its signature technologies (AJAX, RSS, mash-ups), Web 3.0 will have its darling protocols and acronyms. The hot technologies in Web 3.0 will be RFID, biometric identification, and digital certificates. Logging into a Web site using your fingerprint will be marketed as a handy way to avoid having to remember passwords, but it will also provide solid proof that you were the person who posted that damaging information about that multinational corporation.

Web 3.0 identification technologies will also be used to reduce or eliminate spam. By blocking all email that isn’t signed with a digital signature, you could eliminate 100% of the spam you get. Unfortunately, it would also block all of your legitimate mail, because almost no one uses digital signatures today.
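To illustrate how blunt that filter would be, here’s a toy sketch that keeps only mail wrapped in a multipart/signed MIME structure (the standard envelope for S/MIME and PGP/MIME signatures). The mailbox path is made up, and this only checks for the wrapper; a real filter would verify the signature against a trusted key.

```python
import mailbox

def looks_signed(msg) -> bool:
    # RFC 1847: signed mail is delivered as multipart/signed.
    # This checks for the wrapper only; it verifies nothing.
    return msg.get_content_type() == "multipart/signed"

inbox = mailbox.mbox("Inbox.mbox")  # hypothetical local mailbox
kept = [msg for msg in inbox if looks_signed(msg)]
print(f"Kept {len(kept)} of {len(inbox)} messages. The rest -- spam and "
      "unsigned legitimate mail alike -- would be thrown away.")
```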

In Web 3.0, more people will start to use digital signatures, and eventually everyone will adopt them for fear that, if they don’t, their emails to and from their old high school sweethearts will get blocked.

Real Web ID will enable new forms of e-commerce, eliminate certain types of e-crime and piracy of electronic media, and reduce the number of fake MySpace profiles. Online privacy advocates will redouble their efforts in response.

Several years into the Web 3.0 revolution, the general population will begin to look for something else…some sort of improvement to Web 3.0. Right at about the point when Web 3.0 has outlived its usefulness, the conference planners, pundits, and marketers will get together somewhere beautiful and start to think of a name for what will come after Web 3.0. I’ll reveal my theories on what this thing might be called and what it might look like in a future column.

Looking for safe work…like boxing, for instance

Alternative computer interfaces (i.e., anything other than a keyboard and mouse) have become a bit of an obsession of mine lately. The primary reason for this new obsession is the recurring tendonitis in my right (mouse) wrist.

As far as my wrist health is concerned, the last two years have looked like this: months of pain, followed by a decision to finally go to the doctor, followed by months of unsuccessful treatment with anti-inflammatory drugs, followed by a visit to the rheumatologist for a cortisone injection. The injection completely knocks out the inflammation for several months, after which the pain returns and I go back to trying not to use my right arm and hoping that the tendonitis just goes away by itself.

Some might say that I should just go to the doctor when the pain returns and demand another injection. But, even though no one has told me as much, I have a sneaking suspicion that anything that works that well can’t really be good for me. So, I’ve started looking for ways to do my job differently.

Talking instead of typing is the obvious first choice. I recently checked out the latest speech recognition software. It’s impressive, but I’m not yet convinced that it’s a viable replacement for the keyboard. Even with the wrist pain, I can still type much faster and more accurately than the computer can take dictation. Also, I run a small company with a small office. My co-workers would go crazy if they had to listen to me whispering sweet business and programmer-speak to my computer all day long.

Other alternative interfaces, such as tablets, touch screen interfaces, pen-like devices, and trackballs are fine–and if my job involved drawing or moving pictures around, I’d have plenty of choices. Unfortunately, there aren’t a lot of good ways to input words or code into a computer without using your fingers or your voice. Brain-computer interfaces aren’t nearly advanced enough, and my toes are just not long enough to press ctrl-alt-delete to log in, much less to type with.

So, when my wrist finally gives up the ghost, I’m considering taking up a safer profession–like boxing. Let me explain.

A friend of mine was one of the lucky (or persistent) first people to get their hands on a Nintendo Wii video game console. The main attraction of the Wii is that it uses wireless controllers that can detect motion in three dimensions. The golf game for the Wii, for example, is played by actually swinging the controller as if it were a golf club.

Last week, as part of my research, we spent an afternoon playing various games on the Wii. One of the games that comes with the Wii is a boxing game in which players stand side-by-side and punch towards the screen. The screen is split down the middle—each player sees a character representing himself facing and punching the character representing his opponent. Note: to the players, this is all very cool. But, as my friend’s wife pointed out, it looks very dorky to someone else in the room watching two people duke it out by punching perpendicularly to each other.

The next day, while I was watching Rocky Balboa (aka Rocky VI), it occurred to me that I am really not that unlike Rocky. Besides the obvious–both of us are incredibly muscular–there’s also the unfortunate fact that we’re both suffering from ailments which make it more difficult for us to do our jobs.

The Wii could be the first version of the ultimate alternative interface for people whose jobs impose stresses their bodies can no longer handle. Anyone who’s seen the latest Rocky movie knows that virtual reality plays a key role in instigating the fight at the movie’s climax. I suspect that if there’s another Rocky movie, 80-year-old Rocky will use something like the Wii to crush his opponents while avoiding further head trauma.

This brings us back to my planned post-retirement career as a professional boxer. In the future, jobs that require long hours of typing–like computer programming or writing–will be left to the young. Careers involving competition, strategy, and the “eye of the tiger”–like boxing and hockey–will be left to those of us who have plenty of life experience and the will to succeed, but who are no longer fit to use a keyboard.

The Minnick Test for the Future

Happy New Year. As I’ve done every year since 2000 I hereby declare that I am hopeful that the future is officially started. I would also like to announce a new test for whether it is actually true. I’ve humbly named this test the Minnick Test of the Future. The test is quite simple: when an article about speech recognition software that aims to be funny or make a point about how speech recognition software doesn’t work correctly yet isn’t funny then the future has officially begun. The idea is that if the raiders attempt to score some shuttles distorted by O. well the software works then the technology for talking directly with their computers rather than using keyboards is here and the future along with the.

With the release of windows vista upon us and it’s integrated speech recognition capabilities I was hopeful that this might just be the year for that start of the future that I was looking forward to as the child.

As the future rather? I suppose that. I mean that I suppose knocked. Both Reagan. I’m going to try one more time. The future has not yet arrived. Of the understood that Simpson’s just five.

I’m using Microsoft’s speech recognition software and I suspect that it would work much better if I didn’t write or call so smoothly and was frequently in middle of sentences.

One of the problems with replacing keyboards with speech recognition software is the many writers are notoriously careful and will cause for several minutes in the middle of sentences while they are dictating. Computers have no way of knowing this and speech recognition software often uses context two were improved accuracy. This results in some sentences that were spoken quickly being perfectly readable but other sentence is that may have taken longer to construct being completely garbage.

Another problem the speech recognition is that other people are listening and is not so easy to just dictate something stupid as to see if it works. Speech recognition seems to be in direct conflict with the latest thinking about open offices and may actually stifle creativity.

However speech recognition may also result in more polite comments on blocks and fewer flame wars. In the same way that the anonymity of the Internet makes people feel comfortable with saying things that they normally wouldn’t say, the public nature of speech recognition could have the effect of artists is talk to Peter Davidson, like like the proposition as it is the topic is writing down everything and people are wrong you’re listening and every call for her for whatever gets written down right now it’s even writing down to write down everything them saying his home and the eighties microphone by two. Also, I tend to mumble.

Despite his perfect this 5/8 product got damages but he’s really talking perfect. Certain frequently used words in blog comments don’t seem to show up very well and speech recognition text for instance: that should or you got him of the Fokker.

So what does it take for us to achieve this glorious future of reduced carpal tunnel syndrome and improved human computer interfaces? Apparently, the cost of a keyboard less computing environment is a new computer capable of running windows vista high quality microphone a private office and membership in your local toastmasters.

Long Live The King

A good place to start looking for clues into what the future of the Web will look like is in the historical record and memories of the time before Web 2.0.

I call anything pre-97 the “Long Public Beta” phase of the Web. This stretches back to Tim Berners-Lee’s first demo version in 1990. Berners-Lee’s original vision for the World Wide Web was as a Semantic Web, in which all of the content on the Web was descriptively tagged and computer-processable. We’ve come a long way since then—both towards and away from the Semantic Web. It was during this phase that HTML, HTML extensions, CGI, JavaScript, and most of the Web-specific technologies still in use today were created. (Note that I’m excluding Internet protocols such as TCP/IP, which were invented long before 1990.)

I personally define Web 1.0 as the time between 1997 and late 2000. These were the years during which I had all of my stereotypical “dot-com” experiences (except without the stock options, IPO, and insane wealth). My wife and I were running our small Web development and programming firm in the San Francisco Bay Area and later Austin, TX, and we did a lot of work for a lot of soon-to-be failed start-ups. Here’s an actual email I received in early 2000 (it was a joke, but it’s an important artifact nonetheless):

I am working to integrate a B2B strategy that moves away from “bricks-n-clicks” and towards a homogenization of broadband interconnectivity. The site design is in beta stages and I need to redo the look and feel – I want to present allot of low lying fruit and allow people to drill deep for content. As a member of the new economy, the digital economy, generation E, etc…I am sure that you will agree with me – Content IS King.

During the period between 2000 and 2003, interesting things were happening, but unfortunately the only catchy marketing term being used to describe the Web at the time was a negative one–“dot-com bust.”

The term “Web 2.0” was popularized by Tim O’Reilly around 2004. Today, it’s not uncommon to hear people talk about how great Web 2.0 is, and about the great things that are now possible with Web 2.0.

If you’ve been working with this stuff since the mid-90s, you know that the exact same protocols and languages are being used now as were being used during the dot-com era. The most significant recent events leading up to what we now call Web 2.0 actually had nothing to do with Google. They were JavaScript (1995), XML (1998), and the gradually increasing familiarity of Web developers with these and several other technologies. However, if you write a long manifesto packed with jargon and you have enough clout, suddenly the world is speaking your language.

Besides the core technologies, the single thing that’s remained the same throughout my entire Web experience is this: the marketing people always win. This is frustrating for fundamentally technical people like myself, because we so rarely understand what all the fuss is about.

This is my problem with Web 2.0. I have nothing against rich user-interfaces, community-created content, syndication, or large databases. I do have a problem with buzzwords being used as substitutes for substance and comprehension.

When someone asks me if it’s possible for me to build them a wiki, a podcast, and a blog, it takes me back to the low-lying fruit and information superhighway days of yore. I tell them “of course,” but I follow it with a reminder that creativity, passion, knowledge, and dedication to doing quality work haven’t been superseded in this version of the Web–and content is still king.