AGM-86 Air Launched Cruise Missile

Starting in the mid-1960s, the USAF rapidly gained extensive experience in operating reconnaissance drones over Southeast Asia, and the diminutive “bugs”, principally AQM-34 versions of the Firebee target drone, proved to be quite survivable against anti-aircraft artillery and SA-2 missiles. This pointed the way toward a new generation of air-launched cruise missiles that would give strategic bombers a standoff capability against increasingly effective Soviet air defenses. The AQM-34 was around the size of later ALCMs, but a powerplant more efficient than its turbojet engine would be needed to give such small aircraft a useful strategic range. Happily, work was underway on miniature turbofans, and by the early 1970s compact units rated at around 500-600 lb of thrust were feasible.

The ALCM actually stems directly from the Subsonic Cruise Armed Decoy (SCAD) program of the early 1970s, which was aimed at providing SAC with small bomber-launched decoy missiles that would flood Soviet radar screens with false targets. For a decade, this mission had been handled by the McDonnell Douglas GAM-72/ADM-20 Quail, a small turbojet drone. The antithesis of what would later be called “stealth” technology, Quail was fitted with features that greatly magnified its radar cross section, in the hopes that Soviet radar operators would read the enhanced returns as coming from the bombers themselves, greatly complicating attempts at intercepting the real threats. SCAD was to take over the decoy role, taking advantage of improvements in ECM technology to further compound the woes of enemy air defense personnel. SCAD itself would be a threat as well, being able to accommodate a small nuclear warhead. SCAD was to be carried by both the B-52 and the B-1A.

(Quail and SCAD were hardly the first programs aimed at providing SAC bombers with decoy and defense suppression missiles. Also known as MX-2013, the Radioplane B-67/GAM-67 Crossbow was a 1950s attempt at a strategic anti-radar missile that would be fired against Soviet installations up to 300 miles away, under the power of a J69 turbojet. The B-50 Superfortress could carry a pair of Crossbows, while the B-47 Stratojet could accommodate four. Another canceled design was the XGAM-71 Buck Duck, which was to be carried by the B-36 Peacemaker. And finally, the SM-73 Bull Goose was a Fairchild program for a ground-launched delta-wing decoy missile (which could be armed) that would fly from US launch sites into the USSR, cruise propulsion being provided by a Fairchild J83 turbojet. The Goose program was dropped in December 1958, with the engine being canceled a month later.)

By July 1972, Boeing had been selected as the SCAD airframe contractor, with Philco-Ford charged with developing the ECM suite and Litton supplying the guidance. Earlier, Teledyne CAE and Williams Research had been contracted to develop competitive engine prototypes; Williams won production orders with its F107 design. SCAD’s design resembled a small aircraft: a fuselage with a roughly triangular cross-section was mated to wings swept at 35 degrees, these being extended after the missile was launched. The engine had a dorsal inlet just ahead of the small vertical tail. The entire package was sized to fit the standard SRAM launcher.

Despite the contract awards, the SCAD program ran only until July 1973, when it was put on hiatus to allow the rationale and requirements of the system to be re-examined. By 1974, SCAD had given way to the Air Launched Cruise Missile (ALCM) program, which drew heavily on the original AGM-86 design but was optimized purely for the strike role. The AGM-86A, or ALCM-A, would have a range of around 750 miles, carrying a SRAM-type W69 warhead. Test flights using missiles with live engines were underway by the spring of 1976, and in September of that year “full-up” vehicles began trials. By the spring of 1977, Boeing had been directed to begin work on a long-range version, designated AGM-86B. This had an airframe stretched to accommodate a larger fuel tank, helping to boost range to 1,500 miles. The wings did not have as much sweep, the contours of the nose and tail were changed, and the W80 warhead from the Navy’s BGM-109 Tomahawk was substituted for the ALCM-A’s W69.

This did not mean an immediate end to the AGM-86A, as some planners wanted to buy a mixture of A- and B-models, using externally carried AGM-86Bs for missions that demanded extra range while reserving the original model for less difficult targets. Additionally, ALCM range might be limited under future arms control agreements, which lent credence to the idea of making the AGM-86A convertible to the B-model configuration, giving the US the capability to rapidly break out of treaty limitations to match future Soviet developments if necessary. Fielding a larger ALCM presented some problems, mainly compatibility concerns with the B-52. A longer missile meant that a new rotary launcher would be necessary for internal carriage, as the existing SRAM unit could not be used, and a longer launcher would interfere with bomb carriage. Ultimately, it was decided that the B-52’s capability to carry the heavy B28 gravity bomb would be sacrificed to allow for longer ALCMs.

The ALCM’s small size made many aircraft potential launch platforms for the system, and proposals were made during the late 1970s and early 1980s to adapt both new and older designs to the role. Large transport types in particular were examined by several companies, including Boeing, whose 747 could carry dozens of missiles internally, the weapons being ejected through a fuselage port. Lockheed’s C-5 Galaxy was also a contender, and demonstration hardware was actually built, although air launch tests were not carried out. Other large aircraft considered were the Lockheed C-141, L-1011, and Boeing C-135 and 707. While capable of carrying heavy missile loads, the transport-derived aircraft would have had little or no capability to penetrate protected airspace. Rockwell, still hoping to salvage some of its B-1A work, proposed a derivative aircraft with fixed wings that could carry an expanded load of ALCMs, while General Dynamics suggested several rebuild programs for the F-111 and FB-111 fleets that would have included adding ALCM capability. Ultimately, it was decided to limit initial ALCM deployment to the converted B-52s.

Although the Air Force and Navy cruise missile programs had been made as similar as possible, there was still pressure to buy a single common missile for both missions, and Congress dictated that a competitive fly-off between the ALCM and Tomahawk be conducted. The AGM-86B would be the baseline Boeing missile, while GD would enter the AGM-109 version of the Tomahawk. Like the Boeing entry, the AGM-109 would not fit on an unmodified SRAM launcher, and although a shortened version of the missile had earlier been considered, that model would have had a dramatically shortened range. Ironically, just such a version, albeit conventionally armed and dubbed Airhawk, was proposed in the late 1990s to both the USAF and RAF.

To conduct the flyoff, a trio of B-52s were fitted as launch aircraft, while four Phantoms were earmarked as chase planes. To portray a typical wartime mission that would begin over water, long-range test launches were conducted off the California coast, with the missiles flying to a range in Utah. The flyoff began on July 17, 1979 when an AGM-109 was launched. The Boeing missile first flew on August 3, but crashed in Utah. Despite this inauspicious beginning, Boeing was later named the winner of the evaluation, and on March 25, 1980 the company was formally awarded the production contract.

Even before the flyoff had been completed, the USAF had designated the 416th Bomb Wing’s B-52s at Griffiss AFB as the first aircraft to carry the winning ALCM design operationally. Aside from the structural and avionics changes necessary, ALCM-modified B-52Gs were also fitted with strakelets on the wing leading edges; these were large enough to be seen by Soviet reconnaissance satellites, allowing ALCM carriers to be counted for arms control purposes. Deliveries of operational ALCMs to Griffiss began in the spring of 1981, and by December of the following year the B-52G/AGM-86B combination was in service. The G-model Stratofortresses could only carry ALCM externally on wing pylons, but the later H-model conversions were fitted for internal carriage as well, using the Common Strategic Rotary Launcher. The B-1B Lancer was basically compatible with the ALCM system, but was not operationally configured for using the missile, being used primarily as a penetration bomber before switching over to the conventional role.

At one point, the USAF wanted to buy over 3,400 AGM-86Bs, but ironically, given the amount of controversy, time, and money involved in getting the missile into production, this projected buy would be radically cut. Fears that advanced Soviet “look down/shoot down” interceptors such as the MiG-31 Foxhound and new SAMs such as the SA-10 and SA-12 would be able to find and destroy ALCMs spurred the drive to put low-observable features on a new design, the AGM-129 Advanced Cruise Missile, and to free up budgetary resources the AGM-86B program was scaled back. A total of 1,715 ALCMs were delivered, with the last being turned over in early October 1986.

The Pulse of Technology – Keeping Pace With Continuous Change – November, 1998

Gordon Moore, the co-founder of Intel Corporation, first postulated the now-famous Moore’s law in 1965, refining it in the mid-seventies. Moore’s law states that the processing, or computational, power of silicon chips will double every twenty-four months, while pricing for these chips will halve in the same period. This law has held relatively constant for over twenty years. We are now approaching a time when this seemingly immutable law is becoming outdated. In fact, new silicon chips are doubling in power every twelve to eighteen months, while pricing is being halved in even less time. What has happened to the underlying technology that drives these silicon chips, and what market forces have dictated rapidly declining prices?
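The difference between these doubling periods compounds dramatically over time. A back-of-the-envelope sketch (the function name and the ten-year horizon are illustrative assumptions, not figures from the text):

```python
# Back-of-the-envelope sketch of Moore's-law-style compounding.
# Assumption: processing power doubles every `doubling_months`, and price
# per unit of power halves over the same period.

def growth_factor(months: int, doubling_months: int) -> float:
    """Relative processing power after `months`, doubling every `doubling_months`."""
    return 2 ** (months / doubling_months)

# Classic 24-month doubling over a decade: 2**5 = 32x.
classic = growth_factor(120, 24)
# The faster ~18-month cycle the article describes: roughly 100x.
faster = growth_factor(120, 18)

print(f"24-month doubling, 10 years: {classic:.0f}x the power")
print(f"18-month doubling, 10 years: {faster:.0f}x the power")
# Under the same assumption, price per unit of power falls by the inverse factor.
```

Shaving six months off the doubling period thus more than triples the gain over a decade, which is why a shift from 24-month to 12-18-month cycles feels like a qualitative break rather than a minor acceleration.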

There are several factors that lead to the inexorable increase in processing power, just as these same factors exert a downward pressure on prices. Let’s look at several of these factors in the context of hardware developments, software developments and the rise of the Internet as the ubiquitous network that many people predicted as being necessary to make computers universally acceptable in daily life.
Hardware Development

When Intel was founded by ex-Fairchild developers, the mid-range computer, as personified by the DEC PDP series, Data General machines, the IBM System/32 and System/34, and the first HP boxes, was the emerging standard in the computer industry. Machines of this period were often viewed as departmental machines, required to perform quick, hands-on computing tasks free from the centralized I.T. staffs (i.e., the mainframe computing environment) of the time.

The idea of a small, nimble machine that could be programmed and developed by local departments was extremely appealing at the time. Because of the diversity of manufacturers and proprietary operating systems, standards were largely undeveloped, causing competing platforms to jockey for position. Migration from one machine to another was largely unheard-of due to the high costs of switching data and applications programs; not to mention the high training costs required for I.T. staff.

The acceptance of UNIX as an open standard marks a watershed in the history of computing. For the first time, applications programs could be developed that were cross-platform – that is, capable of running on alternate hardware platforms. This newfound freedom allowed software programmers to write a single application that could be run on multiple machines. The importance to hardware developers was simple – they could spend more time on the refinement of the underlying silicon, and less time developing proprietary hardware systems. It is this process of refinement that has marked the decrease in cost of silicon that we know today.

The advent of the personal computer in the late nineteen-seventies and early nineteen-eighties marked another watershed in the development of hardware. Where mid-range computers allowed entire departments to break free of the constraints of mainframe computing, the PC brought computing to the thousands of business users who wanted the ability to perform analysis and data gathering at their convenience, not that of the I.T. department. For the first time, individuals could analyze, store and retrieve large amounts of data without having to master a computer language, and they could perform these tasks at their own pace. This device literally transformed the business world, making computations possible for everyday users that were once performed by large mainframe computers. This break-through spirit was best embodied by Apple Computer, and symbolized in its famous “1984” Big Brother commercial. Aside from its edgy attitude, Apple also pioneered consumer usage of the floppy drive, mouse, and graphical user interface that made computing more accessible to everyday users. The ergonomics of computer use drove hardware device design and manufacture in a way previously unknown. Heretofore, ergonomics had been largely ignored in computer design and manufacture; Apple changed all that with the introduction of the Macintosh line of PCs.

For all its innovation and edge, Apple made a mistake similar to that made by competing mid-range computer makers in the mid-seventies: its OS (operating system) and architecture were proprietary. Fearing that licensing would erode its technological leadership, Apple kept its systems and hardware proprietary and opened the door for a technically inferior product to gain a foothold that it has not yet relinquished.

In 1981, IBM introduced the first IBM PC. This device was, by most standards, technically inferior to the Apple. It possessed a slower processor, was bulky, and used a text-based approach to computing. Yet, despite these shortcomings, it and its brethren, the so-called IBM compatible machines, have dwarfed the Apple offerings over the past two decades. Why? Unlike Apple, the IBM compatible machines were based on an open architecture. The specifications for these machines were designed so that third-party vendors could develop hardware and software for them. In a sense, the best ideas from the best manufacturers get adopted and become the de facto standard for that particular piece of hardware.

The final piece of the hardware development puzzle was to emerge in 1985 or 1986 in a somewhat unheralded manner: the adoption of PC networking. Initial reactions to the PC network concept were, for the most part, negative. Individual users feared that networked computers would once again lead to I.T. control of what were, until then, personal computers. Once PCs were networked, control would be wrested from users back to the large mainframe computing departments of the sixties and seventies.

As it turns out, the PC network actually allowed individual users to communicate effectively once the infrastructure was in place for wired offices. Instead of wresting control away from users, the PC network allowed sharing and collaboration at previously unheard-of levels. A new concept developed as a result, known as the “network effect”: the more people share information in a group, the more powerful the group becomes. Users gain more utility as more people, data and ideas are shared, while those left out of the network see their productivity and connectivity suffer. Being connected now matters, and users face the prospect of being stranded if they are not part of the larger network. In this respect the “network effect” resembles a large public library or database, which becomes more useful as more information is stored there.
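One common way to quantify this effect (a standard model, not one the article itself cites) is Metcalfe's law, which counts the possible pairwise connections among n users. A minimal sketch:

```python
# Metcalfe's law: a network of n users supports n*(n-1)/2 possible pairwise
# connections, so potential value grows roughly with the square of network size.
# This is one common formalization of the "network effect", offered here as
# an illustration rather than a claim from the original article.

def pairwise_links(n: int) -> int:
    """Number of distinct user-to-user connections in a network of n users."""
    return n * (n - 1) // 2

for users in (10, 100, 1000):
    print(users, "users ->", pairwise_links(users), "possible links")
# 10 users -> 45 links; 100 -> 4,950; 1,000 -> 499,500.
```

A tenfold increase in users yields roughly a hundredfold increase in possible links, which is why the cost of being left off the network rises so steeply as the network grows.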

To summarize, several trends can be seen in hardware development that drive the pace of change in silicon. First, the trend away from mainframe systems to mid-range systems supporting open standards. Next, the development of personal computers that encourage users to take control of data manipulation, storage and retrieval. Then the development of an open architecture and OS that allows standards to be set based on the merits of the product, not a proprietary system. Finally, the development of a networked office where the power of the network is enhanced as more users are added.

These trends will continue, and likely accelerate, as users demand more functionality in a smaller and smaller footprint. The acceptance of PDAs (personal digital assistants), cell phones and pagers will fuel consumer demand for devices that are easier to use and always connected. The convergence of data and voice transmission over the same carrier network will lead to increasing features and lower price points for machines that offer multiple uses – telephone, pager, PC, Internet access – at the same time.

Software Development
Early software languages were developed to instruct computers in binary machine code. These early assembler languages were very basic in function and instructed computers to perform what we would now consider routine run-time and maintenance tasks. Tedious to write and compile, they had none of the conveniences that we take for granted today, such as debugging and editing tools that make the programmer’s job easier. These languages have become known as first generation computing languages.

As engineers struggled to make the interaction between computer and user more intuitive, a new series of languages was developed, such as Fortran and Cobol; the first was designed primarily as a scientific programming language, the second as a business programming language. These languages added editing and debugging features and were written in something resembling English-language commands.

The development of Cobol coincided with the widespread commercial use of mainframe and, later, mid-range computers. Other languages such as PL/I and RPG II were also adopted on mid-range computers and could arguably be called the first examples of third generation computing languages. These newer languages incorporated more English-like commands and syntax, and built new debugging and editing features directly into the language. As the basic language structure evolved, so too did the applications programs being developed. Early in the development of computer languages, a schism formed between the class of software that performed routine maintenance and run-time chores, which came to be known as the operating system (or OS), and a second class of software that performed specific tasks such as running a payroll or updating inventory, which became known as application software.

The widespread use and adoption of second and third generation programming languages corresponded with the growing use of mid-range computer systems. So too, the proliferation of application programs led to a growing acceptance of these departmental computer systems. In fact, the use of departmental computers was tied to efficiently designed and executed single-purpose programs – such as inventory control or payroll processing – that were often run within a self-contained business unit.

As computer hardware evolved from mainframe to mid-range systems, the need for a computing system that allowed multiple users to access a machine and perform independent tasks increased greatly. A group of Bell Labs scientists created such a system in the late nineteen-sixties, one that supported multiple users and performed multiple tasks at the same time. This operating system, known as UNIX, seemed ideally suited to the new computing environment and caught on quickly. The combination of departmental computers and UNIX led to the development of distributed computing.

The advent of the personal computer accelerated the trends that were beginning to emerge in the distributed computing model. This model allowed computing power to be located in the hands of those people who required immediate use and manipulation of stored data, while at the same time providing network connectivity.

The original PC operating system, or DOS (disk operating system), was hardly the blueprint for distributed processing. The introduction of the IBM PC in the early eighties married an under-powered processor, memory configuration and hard disk drive (when available) to an anemic OS. Yet, this original machine would morph in rapid succession to a robust group of machines with network capabilities.

The catalyst for this change came in the form of early network cards that allowed PC users to connect to midrange machines or other PCs. These early adopters were driven primarily by a desire to share files or hardware devices (such as printers or larger hard drives) among work groups. Within a short period of time a specialized version of OS was developed to handle these chores more efficiently, with Novell being the most recognized provider of network operating systems. As the capabilities of these network operating systems expanded, new hardware devices were developed to take advantage of the specialized nature of network computing. In short order file servers, print servers and application servers (PCs developed to host application programs in one location) became commonplace.

At about the same time as the development of the network-computing model, a sea change occurred in the way users interacted with their machines. Until then, most application programs were relatively unchanged from their mainframe and midrange counterparts. These programs were, for the most part, text-based, with some graphical elements thrown together in a jumbled, clumsy way. Once again Apple led the change in the form of the Macintosh graphical interface, which was intuitive to use. Instead of invoking arcane command-line instructions, users could point and click at an object on the screen and launch a file, program or document with ease. The basis for the Apple graphical user interface, along with the point-and-click device (the mouse), was conceived, but not commercialized, at the Xerox PARC facility in Palo Alto in the nineteen-seventies. Microsoft developed its own version of the graphical user interface with its Windows “operating environment.” The first two versions of this environment literally ran on top of its famous DOS system in a somewhat ungainly manner. Microsoft finally got the user interface right in Windows 3.0. In similar fashion, Microsoft incorporated many of the benefits of the network operating system into Windows version 3.11, and later improved both the operating system and network features with Windows 95.

The stage was now set for the next “big thing” in computing. Once again, this next wave had its origins in the nineteen-sixties, only to appear as a full-blown implementation in the nineteen-nineties.

The Rise of the Internet
The Internet was conceived of in the nineteen sixties as a way to link the computing resources of several west-coast universities together. At the time, computational power was expensive, and shared resources were a way to defray the costs of deploying large systems. At the same time, the U.S. government realized that if it could build a similar networked structure, it would be difficult to totally disrupt computer operations and data in the event of a national disaster. Over the next two decades, more universities and governmental agencies were added to this patchwork quilt of networked machines.

In order to link disparate machines running on different operating systems, a common file transfer procedure was required. The FTP (file transfer protocol) scheme was developed for this purpose, allowing different machines to communicate effectively. Similarly, a method of routing these files and messages across different locations was also required. Out of this requirement came the TCP/IP protocols, which determine how a file is routed through the system. These two developments supplied the backbone of what was to become the Internet.

Throughout the eighties, the Internet remained the domain of people in the scientific and academic communities. Visionaries imagined that this network could be used to connect people easily across great distances and multiple computing platforms. This vision awaited the development of some sort of device that allowed files to be easily viewed across multiple platforms. A group of computer scientists at the University of Illinois came up with the idea of a web browser, a program that allowed people to view files in a graphical manner. The first widely used web browser, known as Mosaic, was launched in 1993. The development of this browser allowed people to easily locate and view files on the Internet, and led to the phenomenon known as the WWW (World Wide Web).

Essentially, the development of the WWW has allowed users to find files and communicate in a worldwide network. The use of the web has transformed an arcane file messaging system into a new medium, one that is growing faster than any other medium in history. The growth of the web is based on several of the trends noted earlier.

1. The open nature of the web allows contributions from multiple sources and computing platforms.

2. Contributions are not limited to programming professionals. People with very little computer training can contribute files, articles, and information on the web.

3. The web is suited to the dynamic nature of business and personal life. It no longer requires weeks, months, or even years to develop applications – these tasks can now be performed easily and in a short period of time.

4. As more people become accustomed to the web, and as the cost of access drops, PC purchases increase, causing further downward pressure on hardware prices. The costs of hardware and web access have been declining by 15% to 20% per year for the past several years.

5. The web is the ultimate “network effect.” The more people participate, the more information is available, and the more critical it becomes to be included in the network.

6. The web has developed a new concept of speed. “Internet time” is a recently coined term for development cycles that run roughly seven times faster than “real” time. This notion of speed has spilled over into Internet business life, where all aspects of running an Internet business – sales, procurement, deal making – occur at warp-speed rates.

7. The economics of web space seem to defy business logic and gravity. People have developed a notion, rightly or wrongly, that information and services provided on the web are free. This has led web companies to develop unusual approaches to raising revenue in this new medium. At the same time, the stock prices of web-based companies have achieved phenomenal valuations, seemingly unsupported by any need to have revenues or make earnings. This seeming dichotomy between a lack of tangible earnings and high stock valuations will continue for a while: positioning on the web is a market-share grab in what could become the largest medium invented to date. In addition to sheer size, the web promises the Holy Grail of media – the ability to interact directly with a consumer to influence purchasing behavior.

8. The notion of competitive advantage – the idea that a company can gain a foothold over competitors through focus on a series of core values or competencies, such as Wal*Mart has built with logistics and deployment, or GE with developing management talent – is being dismantled by the web. The web is the ultimate leveling force. A site can be developed and released on the web and, in a matter of a few months, spawn dozens of competitors, many with improved features or benefits. In such an environment, sustainable competitive advantage has no real meaning unless it is managed in weeks or months, not years or decades.
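The 15% to 20% annual cost decline cited above compounds substantially over even a few years. A quick sketch (the $2,000 starting price is an arbitrary assumption for illustration):

```python
# Compounding the article's cited 15%-20% annual decline in hardware and
# access costs. The $2,000 starting price is a hypothetical figure chosen
# for illustration, not a number from the text.

def price_after(start: float, annual_decline: float, years: int) -> float:
    """Price after compounding a fixed annual percentage decline."""
    return start * (1 - annual_decline) ** years

start = 2000.0
for rate in (0.15, 0.20):
    p = price_after(start, rate, 5)
    print(f"{rate:.0%} annual decline: ${p:,.0f} after 5 years")
# At 15%/yr the price falls to about $887; at 20%/yr, to about $655 --
# a drop of 56% to 67% in five years.
```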

A Vision of the Future

Given the developments that occurred in computing during the past thirty years, how will we be affected by technology in the future? What are the trends that will affect us in the next several years, and how can we prepare for what many believe is a tumultuous, if exciting future?
The most important trend we face is the pace of change that will be occurring in hardware, software and bandwidth on the Internet. Eighteen-month development cycles are a thing of the past. Hardware and software manufacturers and developers are now operating on six to nine month development cycles. This cycle is from concept formation through to manufacture and distribution. The rise of the web has accelerated development times, and will continue to do so for the foreseeable future. To adapt to this trend, developers and manufacturers will have to plan for multiple critical paths and have the ability to react quickly to changes in business trends as they are planning, developing and implementing projects. People currently have the ability to plan in this manner on an intellectual level; in many industries this has been the accepted norm for the past few decades. On an emotional level, the cost of large-scale disruption, change and constant redeployment can and will be unsettling.

The second trend that will come to dominate our lives is the constant downward pressure on hardware and software prices coupled with the ever increasing demand for hardware and software to work easily. Again, there is a seeming contradiction between lower price and ease of use. As computer hardware and software become more mainstream, the need for simplicity and power will dominate every other consideration. Despite dire predictions that the end of the PC era is near, nothing could be further from the truth. PCs will remain with us for a long time to come. But their usage patterns will change. They will become file repositories, akin to vast research libraries. Users will gravitate toward more specialty devices to communicate (combination phones, electronic address books, web skimmers, message boards); process information (voice activated pads, storage devices, intelligent dictation systems); and be entertained (3D game players, downloadable video and music players, web-enabled real-time games connected to anywhere in the world, personalized concerts viewed from wearable stereo receivers).

The third trend that will come to dominate our thinking and beliefs in technology is the notion of ownership of intellectual property. When Netscape made the unique and courageous decision to give away its commercial browser technology, it essentially validated the concept of open computing – but it also set the notion of intellectual property rights on its ear. The foundation of intellectual property rights – that an author or inventor owns the writing or invention – has been the cornerstone of trademark and patent protection for the last 400 years. To give away this right – to make intellectual property free to be distributed, modified and shared – is a sea change in the way we view human capital. If knowledge is power, the free distribution of knowledge will enable a new level of empowerment and use of human talent. Make no mistake, we will struggle mightily with how to value, reward and allocate resources to the developers and users of knowledge. Throughout history, this tension – struggle, if you will – has led to heightened levels of creativity and knowledge.

The fourth trend is more disturbing in its implications. There have always been classes in human social structures. These classes have developed along economic lines, with variations in the methods used to acquire greater economic resources (knowledge, brute power, ruthlessness, etc.). Over the next several decades we have the potential to develop a new social class, one that distinguishes between the connected and the not connected. As the “network effect” of the Internet expands, those who are not connected stand to lose out on many of the benefits of the connected. Training, education, development, and entertainment will all be provided by the Internet. For those not connected, the lost opportunities will be tremendous. We must ensure that this class distinction does not in fact occur, and that everyone shares the “network effect” equally.

The fifth trend that will occur is the death of market economies as we know them. Market economies were developed to efficiently bring together groups of willing buyers and sellers in sufficient numbers to conduct business transactions easily. Over time, the emphasis on a single “market” shifted to that of specialized markets based on transactional need. As examples, consumer goods markets developed for retail selling; money markets evolved into banking and financial institutions; specialized financial institutions, such as stock and futures markets, developed; and over time, business-to-business markets have evolved. All of these markets, of whatever form, have developed around centralized physical locations. With the rise of the Internet, markets no longer require physical presence. Witness the success of e-commerce: auction sites, computer and software purchases over the web, and so on. This trend was actually postulated by Faith Popcorn several years ago when she noticed a trend toward “cocooning.” She theorized that people wanted more privacy and less social interaction, or at least social interaction when they chose. The Internet allows people to cocoon, while at the same time interacting when and how they choose.

The final trend that will affect our lives will be the commercial expansion of the Internet. The web has touched our lives in many ways, and is literally growing up before our eyes. How will we resolve Internet privacy issues? How will companies make money on the web? Are Internet stock valuations realistic, and sustainable? What information should be free, and what information should be paid for? How will we compensate people for their intellectual capital, if that capital is freely given away? What role should government play in determining Internet policy? How should Internet sales be taxed, and how do tax laws that are based on the notion of nexus (the physical location of a place of business) apply to an essentially location-less entity? These rhetorical questions are being asked by countless industry, think-tank and governmental institutions on a daily basis, and over time, they will be resolved.

In Shakespeare’s Tempest, Miranda, upon seeing the shipwrecked members of the royal party for the first time, declares “O brave new world, that hath such people in’t.” Shakespeare was profoundly aware of the effect that the discovery of the New World had upon his audience. It was a time of intense excitement – “O brave new world” – but it was an excitement mixed with fear and uncertainty – “that hath such people in’t.” The party in the Tempest was driven aground by a violent storm; the storm is a symbol of the change that was sweeping through Europe in the 1500s. The Tempest is Shakespeare’s attempt to explain the forces that were at work in creating a New World – the forces of discovery, uncertainty, doubt and ultimately hope in creating a better world.

We are poised on the brink of a New World. For the first time in several hundred years, we have the ability to make major changes in the way we view the world, human capital and the sharing of knowledge. O brave new world, that hath such people in’t.

Fashion Is an Instinctual Art – Straight From the Jungle

John Fairchild, founder and owner of the fashion publishing empire Fairchild Communications, is considered the arbiter of 20th-century style and taste in the fashion, beauty and design world. His many publications provided the last word on the history, trends and forward-looking direction that these notoriously fickle categories would follow. Mr. Fairchild was a visionary in a world of creative geniuses and risk takers.

In his 1989 book, Chic Savages, Mr. Fairchild observed the following: “Fashion is a sub-art and is not intellectual. Fashion is a business and operates best when born out of instincts. Fashion appeals to the senses and comes from gut feeling… true fashion comes straight out of the jungle.”

This quote, which summarizes John Fairchild’s observations on the creative instincts essential to becoming a successful cosmetics, fashion or design entrepreneur, is applicable to any form of entrepreneurial endeavor. Any widget invented to fill a market void requires a certain cunning instinct on the part of the creator: not only to visualize a product, but to create the thing in reality, to sell and market the piece, and to fully commercialize a unique creative drive.

The world of high fashion is built on product exclusivity. Most people would love to own a Ferrari, a Balenciaga gown or a Cartier, even though it is not realistic given their personal financial circumstances. We aspire to these luxuries. We know of these, and many other limited-distribution, high-end fashionable brands.

Recently I read a history of the rise and fall of the iconic Ungaro house of fashion. Ungaro, in the 1970s and 1980s, was one of the leading haute couture brands in the world. Ungaro fashion ensembles were extremely successful in this ultra-competitive, stratospherically priced space. Ungaro himself came to exemplify the ideal of the uber-creative European fashion genius. The family was ultra-successful in licensing the brand name to dozens of products, including cosmetics, fragrance, bags, jewelry and household goods.

There was a consistent look and feel to all goods that carried the Ungaro label. They were of the highest quality, sold only in a few of the best stores, exuded artisan craftsmanship and offered the gorgeous Ungaro look that emanated from the fashion house’s couture lines. Ungaro was the ultimate aspirational brand. The more expensive and exclusive the distribution, the more consumers sought and desired Ungaro products.

In the mid-1990s, at the height of the bubble for luxury acquisitions, Ungaro was sold. The family gave up creative control, was paid handsomely and believed that the investment bankers who had bought the firm would continue the traditions the family had employed to make the brand a worldwide phenomenon. They were soon to be proven wrong on every count.

The new owners brought in a new design team and began to apply modern finance and cost-control measures to production and overheads. To support the debt service incurred in the purchase, greater sales volumes needed to be achieved. The result was a classic push-pull between the creative side of the business and the operations side. The need for more sales meant the need for more distribution, which began to diminish the exclusivity that had been so important in building Ungaro.

These business pressures resulted in constant churn in the creative design team, and ultimately a lack of direction and a loss of the styling edginess that made a garment an Ungaro. Retailers, and more importantly consumers, started to notice these changes and walked away from new Ungaro collections. The family looked on in dismay as sales plummeted from hundreds of millions of dollars in the 1980s to only a few million dollars at the turn of the 21st century.

Ultimately the House of Ungaro was bought by a Silicon Valley tech-industry multimillionaire with no, nada, zero fashion-industry experience. The brand was purchased for 85 million dollars, and the descent has only accelerated. The outlook, especially in the current economy, is grim for Ungaro.

Investment bankers and technology barons are great at asset utilization, reading balance sheets and designing software to make life easier. But entering the fashion jungle, as described by John Fairchild, is a completely different universe requiring a completely different set of creative skills.

The instinct for design and fashion greatness possessed by Ungaro, Valentino, Ralph Lauren, Charles Revson, Pininfarina, Enzo Ferrari or Harry Winston is not transferable like the ability to read a blueprint or follow a schematic in manufacturing. It comes from the gut, and appeals to the senses in ways that are not easily described. The rise and fall of the House of Ungaro is a cautionary tale that all entrepreneurs can and should learn from. Vision is a rare and beautiful thing that cannot be readily manufactured.