
Sep 12, 2012

What a Difference 30 Years Makes: Megatrends Redux

It’s been thirty years since John Naisbitt published his landmark book, Megatrends, which explored ten major changes in society likely to impact the way we live, work and govern ourselves. Looking back across those three decades, it’s clear that Naisbitt got a lot right, and a few things wrong, about the information society whose arrival he recognized.

What may be useful to us now, however, are the trends he saw in the early 1980s, the variables he expected to shape and balance the resulting changes, and the reasons some of those variables didn’t behave as he predicted.

Industrial to Information

First, to his credit, Naisbitt recognized the rise of an information society in place of the industrial society we had been living with since the end of WWII, and he devoted his first chapter to it.

Although the personal computer had only just appeared in the late 1970s, and while many commentators were still grappling with the end of the industrial age and seeing nothing coherent on the horizon, Naisbitt recognized that information would become the critical resource and source of wealth in a new “information society.” He reasoned that society would need a new knowledge theory of value to replace earlier labor-based theories because, as he also reckoned, we would “mass produce information” the way we used to mass produce hard goods.

He was right, of course, and we are mass producing information, or at least data, in volumes that even he couldn’t have conceived. However, the fact that so many information utilities (social media and search engines most notable among them) still rely on advertising for their revenue suggests that we haven’t yet solidified that new theory of information value.

Searching Isn’t Always Finding

Use a modern search engine and you will see that the creation side of the information equation is charging ahead at full steam. But try finding a complex or subtle item of information on the Internet and you will also see that the location side is still struggling. As the glut of content mounts, so does the need for a comprehensive and universally understood way of identifying information so that it is readily available rather than buried in a gazillion search engine hits.

The brick-and-mortar library world addressed this problem as early as 1876, when Melvil Dewey came up with his decimal classification system, and in 1908, when the Library of Congress adapted Cutter’s dictionary cataloging scheme. The process of assigning classifications to content was further organized with the 1967 publication of the Anglo-American Cataloging Rules, or AACR.
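
To make the value of such schemes concrete, here is a minimal sketch in Python of the kind of structured record a cataloger produces, and of the difference between browsing by classification and scanning everything for keywords. The titles, class numbers and subject headings are illustrative stand-ins, not entries from any real catalog.

    # Illustrative sketch only: the records below are hypothetical,
    # not drawn from any real catalog.
    from dataclasses import dataclass

    @dataclass
    class CatalogRecord:
        title: str
        dewey: str           # Dewey Decimal class number (e.g. "303" = social processes)
        subjects: list[str]  # controlled subject headings assigned by a cataloger

    CATALOG = [
        CatalogRecord("Megatrends", "303.49", ["Social prediction", "Information society"]),
        CatalogRecord("Cataloging Basics", "025.3", ["Cataloging"]),
        CatalogRecord("A History of Steel", "669.1", ["Industrial history"]),
    ]

    def browse_by_class(prefix: str) -> list[CatalogRecord]:
        """Classification provides a pre-built path: go straight to a shelf range."""
        return [r for r in CATALOG if r.dewey.startswith(prefix)]

    def keyword_search(term: str) -> list[CatalogRecord]:
        """Full-text-style search: scan everything and hope the wording matches."""
        term = term.lower()
        return [r for r in CATALOG
                if term in r.title.lower()
                or any(term in s.lower() for s in r.subjects)]

    # Browsing the 303s finds Megatrends without guessing any keywords;
    # the keyword search succeeds only if the searcher's wording happens to match.
    print([r.title for r in browse_by_class("303")])      # ['Megatrends']
    print([r.title for r in keyword_search("forecast")])  # []
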

But with the rise of automation in the 1960s, the library world panicked at the thought of those new computer people invading its sandbox, and much of that momentum and progress was lost, sending the content world essentially back to square one in figuring out what an effective cataloging scheme should look like. Working for Xerox Education Division back then, I witnessed some of this happen, and it wasn’t pretty.

Complicating the process were multiple schemes, developed by different players with different perspectives and different funding, even in different parts of the world, all convinced that they were right and none much interested in consensus.

The search engine world hasn’t helped either. After all, effective content cataloging reduces the need for its systems by providing pre-configured paths to information, and if you don’t need to search the entire Internet to find something, you likely won’t see or respond to the ads that provide the lion’s share of Google’s (and its peers’) revenue. So we spend most of our time searching through everything for tidbits of information that often exist in only a few places.

 

Source: cmswire.com
