The Fairchild-Dornier Do-328JET

1. Turbine Triumph:

The power of engines, as historically demonstrated, extends beyond the thrust they produce to move airplanes. They also move passengers, drawing them toward a particular aircraft when it is powered by the type that attracts them.

When the first long-range, pure-jet airliners appeared at the end of the 1950s in the form of the de Havilland DH.106 Comet, the Boeing 707, and the Douglas DC-8, it was concluded that this technology would be restricted to those sectors, since its speed could not be adequately exploited over shorter ones, leaving them the domain of piston aircraft, such as the Convair CV-440 Metropolitan and the Martin 4-0-4.

What was underestimated was the power the pure-turbine had to draw passengers to such airplanes, causing them to demand and ultimately expect this engine type on all route types. And manufacturers responded.

By the early 1990s, history repeated itself. The turbine, it was thought, could never be economically viable on regional-range routes, once again leaving piston and, later, turboprop airliners with capacities of between 19 and 50 seats to serve them. But when Canadair sparked the regional jet revolution with its 50-passenger CRJ-100, and Embraer closely followed suit with its own ERJ-145, there seemed no market for which the turbofan was not suitable, except, perhaps, for the very thin one, supporting no more than 30 seats.

Passengers again responded. And consensus was once again proven wrong.

2. Regional Jet Revolution:

Although powerplants usually precede designs, in the case of the regional market, designs preceded powerplants and provided the crossroads between larger airliners, business jets, and turboprop aircraft. Regional jets could thus originate from four potential sources.

The first, as previously mentioned, traces its roots to business jets, in this case to the Canadair CL-600/-601 Challenger, which bred the stretched-fuselage regional airliners that followed it. In the second case, Embraer adapted the twin-turboprop EMB-120 Brasilia into a pure-jet counterpart, the ERJ-145. In the third, an existing airliner, intended for longer-range sectors, was scaled down to produce a lower-capacity derivative, as occurred with the MD-95/717, a shrink of the MD-90, and the A-318, a shorter-fuselage version of the A-319. Finally, regional jets originated as all-new designs, such as the Vereinigte Flugtechnische Werke VFW-614, the western world's first 44-seat regional jet; the Fokker F.28 Fellowship, which was succeeded by the modernized F.70 and F.100; and the British Aerospace BAe-146, which itself begot the re-engined Avro International RJ70 to RJ100 family.

All of these types fueled the regional jet revolution, which created a fundamental change in the market, mirroring the impact the pure-jet engine had first on long-, then medium-, and finally short-range routes, and blurring the line between major and regional carriers. It also became the most rapidly growing segment of the industry.

According to the Department of Transportation (DOT) report entitled “Regional Jets and their Emerging Roles in the US Aviation Market,” seven US carriers operated 99 regional jets between 126 city pairs and served 103 markets from ten hubs at the beginning of 1998. The domestic regional jet fleet at the time was expected to double, to 200 aircraft, by January of the following year.

And these figures only escalated, like the clockwise rotations of analog altimeters installed in climbing aircraft. Indeed, in order to remain competitive and retain market share, airlines were forced to order regional jets. Almost 80 percent of the 570 regional airliners ordered in 1998 were pure-jets, and jet deliveries eclipsed turboprop deliveries for the first time the following year, 217 to 120. By 2000, 726 regional jet sales were recorded, a 42-percent increase over the year-earlier period, constituting more than 90 percent of all regional airliners ordered. The diminishing popularity of turboprop types, whose sales fell to a 28-year low, saw the sunset of once-ubiquitous models, such as the British Aerospace Jetstream 41 and the Saab 340 and 2000.

These sales figures, however, reflected more than passenger popularity. Compared to heavier twins, such as the early BAC-111s and DC-9s, which had not been designed for regional routes but were artificially suited to some of them because of then significantly lower fuel prices, aircraft intended for this purpose from inception offered two advantages: their lower structural weights reduced both fuel burn and landing fees, and their lower thrust optimized them for reduced cruise speeds, since climb and descent phases make up a greater portion of regional flight sectors than of longer ones.

Barry Eccleston, Executive Vice President of Fairchild-Dornier Aerospace, predicted that the market for regional jets accommodating a maximum of 110 passengers would be worth some $205 billion, amounting to 9,000 aircraft, over the first two decades of the 21st century, or more than two-thirds the $280 billion worth of ultra-large-capacity airliners, such as the Boeing 747-8 and the Airbus A-380, except that the regional segment of the industry represented seven times the number of airplanes. He also identified four phases of the regional jet revolution.

The first, entailing the initial breed of 50-seat Canadair CRJ-100s and -200s and Embraer ERJ-145s, served to prove the concept, attract passengers, and demonstrate its economic feasibility, its roots planted by Comair in the US and Lufthansa CityLine in Europe. The former initially provided feed to major-carrier hubs, while the latter bypassed them and instead served short and/or thin sectors between secondary city pairs.

By demonstrating the overwhelming passenger acceptance of these aircraft, the 50-seat regional jet planted the seed for the second phase, which established seamless service interchange between mainline aircraft and microjets and created demand for pure-jet service on routes too thin even for the 50-seaters. Scaled down to seat between 30 and 40 passengers, these types could altogether replace comparably sized turboprops, especially since a design such as the ERJ-135, although a smaller derivative of the original -145, was itself a development of the Brasilia turboprop.

Like a rolling snowball, once the concept gained momentum, it was unstoppable and increased in size. So, too, did the aircraft representing the third phase, which offered capacities not unlike the traditional short- to medium-range twins, but at decidedly lower seat-mile costs. Examples of these were the Fokker F.28 Fellowship, the British Aerospace BAe-146, the Fokker F.70 and F.100, the Avro International RJ70 to -100, the Bombardier CRJ-700 to -1000, the Embraer ERJ-170 to -195, the Antonov An-148 and -158, the Sukhoi Superjet 100, and the Bombardier CS-100.

Regional jets accommodating 100 passengers, but flown by major-carrier crews because pilot scope clauses prohibited their operation by regional airlines, characterized the fourth phase.

Closing the gap between major and regional airline profiles, this type of operation entailed the replacement of first-generation twins, such as DC-9s and 737s, with their advanced, higher-capacity regional counterparts, yet offered comparable levels of comfort, service, and speed on thinner, point-to-point, hub-bypassing sectors, in the process reducing airport congestion.

Integral to this quad-phase regional jet revolution, and particularly to its second phase, was, of course, the 37-seat Embraer ERJ-135. But before it even flew, it had competition across the Atlantic, in Europe, in the form of another turboprop-turned-turbofan, the even smaller Fairchild-Dornier Do-328JET.

3. From Turboprop to Turbofan:

Founded as Dornier-Metallbauten in 1922 by Professor Claude Dornier, the company was known for its massive, 12-engined Do-X flying boat. It became Dornier GmbH in 1972 and part of Daimler-Benz Aerospace 15 years later, when Daimler-Benz acquired a majority shareholding, the parent ultimately becoming DaimlerChrysler.

Its high-wing, twin-turboprop commuter aircraft, offered in 15-passenger Do-228-100 and 19-passenger Do-228-200 versions, accounted for 270 sales and led to a 34-seat successor.

Seeking to divest itself of what had intermittently become a loss-making subsidiary, Daimler-Benz sold a majority stake of Dornier Luftfahrt, located in Oberpfaffenhofen, near Munich, Germany, to San Antonio, Texas-based Fairchild Aerospace in 1996. Fairchild itself built the venerable 19-passenger Metro commuter turboprop, of which more than 600 were sold during a 35-year production run, and was initially an international partner in the 34-seat Saab-Fairchild SF-340, which accounted for 456 sales.

The Do-328, in the eyes of new owner Fairchild, had potential, and its strength, literally, lay in its robust, over-engineered German design. Already the second-fastest turboprop regional airliner after the 50-passenger Saab 2000, it lent itself to a minimal-modification retrofit with pure-jet engines, although former design owner Daimler-Benz had consistently failed to see the feasibility of the project.

But additional impetus came from several less-than-positive circumstances. The turboprop version, already weighing 2,200 pounds more than targeted and subject to high production costs, suffered from fierce competition with similar types, such as the Fokker F.50, itself the product of DASA's former Dutch subsidiary, and sales were sluggish.

Based upon Fairchild-Dornier's survey of 50 worldwide airlines, conducted between October of 1996 and January of 1997, passengers preferred turbofans regardless of route type and length, and a turboprop-to-turbofan transition was not only logical but left little choice, provided the jet could offer comparable performance and economics.

Powerplant popularity, however, was not the only factor behind airlines’ orders. One of the latest attractions was the ability of an aircraft manufacturer to offer a family of regional jets, as was beginning to occur with Bombardier and Embraer, so that derivative-associated design similarities and common pilot type ratings would offer the cost-effective flexibility to match capacity to route type and departure time.

Although Embraer's own scaled-down regional jet was now on the horizon, the economics of such 30-seaters had yet to be proven. If they could be, this type of design was foreseen as fulfilling two purposes: (1) it could replace comparably sized turboprops on existing routes, and (2) it could create an entirely new market, one too long for a turboprop's speed, yet too thin for the higher capacity of the increasingly common 50-passenger regional jets, thus heralding a new class of aircraft.

If successful, it could potentially replace some 1,200 aircraft in US service alone. With the ERJ-135 about to become the second member of Embraer's regional jet family, and the Do-328 notching up less-than-stellar sales, Fairchild-Dornier had little choice but to combine its existing airframe with turbofan engines or concede the race, already as a distant third, to the other two contenders.

4. Do-328JET:

Modifications required to turn the turboprop into its turbofan counterpart, and, in many ways, successor, were few.

Because the fuselage frames were milled from solid aluminum alloy, the pure-jet version simply retained more material at frames 24 and 26, which corresponded to the wing and undercarriage attachment areas. The upper-fuselage fairing, which served as the blending point for the wing, was also retained, as were the two aft ventral strakes previously required by the turboprop's air flow; although the powerplant change had rendered them superfluous, they were not removed, in order to avoid recertification costs.

The newly designated Do-328JET featured a 68-foot, 7 3/4-inch fuselage length and a 69-foot, 9 3/4-inch overall length.

Utilizing the same TNT (Tragfluegel neuer Technologie) supercritical wing as its Do-328 predecessor, which was originally designed for the smaller Do-228, and equally employing solid-milled skins to minimize the amount of riveting, the regional jet sported a unique planform. Aside from its high-wing mounting, it featured highly swept leading edges near the wing tips, parallel edges inboard of the engines, and a trapezoidal shape outboard of them.

Combined with the turbofans' thrust, its wings, which retained the turboprop's inflatable leading edge boot deicing system, facilitated short-field performance yet brisk climbs (14.2 minutes to 31,000 feet), offering block times comparable to those of the ERJ-135, with which the aircraft would eventually compete.
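The quoted time-to-climb implies an average rate that can be sanity-checked with simple arithmetic. A sea-level start is assumed; this is illustrative, not flight-manual data:

```python
# Average climb rate implied by the quoted 14.2 minutes to 31,000 feet.
# Assumes a sea-level start; illustrative arithmetic, not flight-manual data.
altitude_ft = 31_000
time_min = 14.2
avg_climb_fpm = altitude_ft / time_min
print(round(avg_climb_fpm))  # about 2,183 fpm on average
```

The average is, of course, well below the initial climb rate, since sustained rates decay markedly with altitude.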

High-lift devices encompassed single-slotted trailing edge Fowler flaps.

Internally, the Do-328JET’s wings incorporated a 200-liter fuel capacity increase, dual fuel pumps, and 30-percent larger-diameter fuel lines.

Sporting a 68-foot, 10-inch span and 430.6-square-foot area, they introduced a 100-mm trailing edge flap extension, and thus an increase in chord, rendering an 11.0 aspect ratio, for an ultimately targeted 400-knot cruise speed.
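The span and area figures can be cross-checked against the stated aspect ratio using the standard relation AR = span²/area; reading the quoted span as 68 feet 10 inches is an assumption about the notation:

```python
# Aspect ratio from the quoted wing geometry: AR = span^2 / area.
# The span is read here as 68 ft 10 in (an assumption about the notation).
span_ft = 68 + 10 / 12        # 68 ft 10 in = 68.83 ft
area_sqft = 430.6
aspect_ratio = span_ft ** 2 / area_sqft
print(round(aspect_ratio, 1))  # 11.0, matching the stated value
```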

Like the turboprop -328, the regional jet retained the T-tail, but introduced a larger rudder trim tab to counteract the engines' greater thrust.

The pylon-mounted, 6,050 thrust-pound Pratt and Whitney Canada PW306B engines, which lacked thrust reversers and replaced the nacelle-shrouded turboprops, were originally developed as -306As for the Galaxy business jet and incorporated an 840-mm, 22-bladed, wide-chord fan; a five-stage high-pressure compressor (four axial stages and a single centrifugal one); a two-stage high-pressure turbine; and a three-stage low-pressure turbine. Compared to the corporate version, the commercial powerplant offered a 30-percent increase in core flow and higher-temperature-resistant materials in the high-pressure turbine.

In order to cater to the Do-328JET’s increased weights, the twin-wheeled, hydraulically-actuated, tricycle undercarriage featured a Dunlop dual-braking system, with carbon disc brakes; a reinforced trailing link; and an anti-skid system to compensate for the lack of engine thrust reversers. Its nose wheel retracted forward, while its two main units were stored in fuselage-side fairings.

An AlliedSignal GTCP36-150 auxiliary power unit (APU) provided power for cabin lighting and air conditioning and engine starts.

Aircraft access was attained by means of a forward, left, out- and downward-opening, airstair- and handrail-equipped Type I crew and passenger door; a Type III emergency exit opposite it, on the forward, right side; a second Type III emergency exit on the aft, left side; and a Type II galley servicing door on the aft, right side.

Standard cabin configuration entailed 32 to 34 three-abreast, one-two-arranged seats at a 30- to 31-inch pitch, with an aft galley and lavatory. Because of the 4,000-foot increase in the Do-328JET's service ceiling, to 35,000 feet, cabin pressurization was equally increased, from 7.0 to 7.4 psi, yielding an 8,000-foot cabin altitude. Internal dimensions were 33 feet, 10 3/4 inches in length and six feet, 2.5 inches in height.
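The pressurization numbers hang together: applying the standard-atmosphere barometric formula to a 7.4-psi differential at the 35,000-foot ceiling gives roughly the stated 8,000-foot cabin altitude. ISA conditions are assumed; this is an illustrative check only:

```python
# ISA barometric formula (troposphere): p = P0 * (1 - K*h)^N, with h in feet.
# Checks the quoted cabin-pressurization figures; illustrative arithmetic only.
P0 = 14.696            # sea-level pressure, psi
K = 6.87535e-6         # per-foot lapse constant
N = 5.2559             # barometric exponent

def isa_pressure(alt_ft):
    """Ambient pressure (psi) at a given altitude in the ISA troposphere."""
    return P0 * (1 - K * alt_ft) ** N

def isa_altitude(p_psi):
    """Altitude (ft) at which the ISA troposphere reaches a given pressure."""
    return (1 - (p_psi / P0) ** (1 / N)) / K

cabin_p = isa_pressure(35_000) + 7.4   # ambient at the ceiling plus the differential
print(round(isa_altitude(cabin_p)))     # roughly 8,100 ft, close to the quoted 8,000
```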

Baggage, cargo, and mail were stored in the main deck compartment located between the aft cabin wall and the rear pressure bulkhead and accessed via a port door.

5. Flight Test Program:

Unlike clean-sheet design flight test programs, the Do-328JET's entailed considerable comparison between the handling and performance of what had been a 365-knot turboprop and those of an aircraft penetrating the 400-knot realm with pure-jet engines. The transition from one to the other was even less of a leap than initially imagined, since the first -328JET prototype was nothing more than the turboprop's second prototype and even retained several of its features.

That prototype, registered D-BJET and rolled out for public viewing on December 6, 1997, made its maiden flight from the 7,800-foot runway at Fairchild-Dornier's Oberpfaffenhofen, Germany, complex at 11:16 on January 20 of the following year, piloted by Meinhardt Feuersenger, Chief Test Pilot of the Do-328 turboprop program, and Peter Weger, who, in 1994, had first flown the Eurofighter EF2000.

Maintaining a southerly course over the Bavarian Alps, the aircraft, slated to gauge performance and test envelope expansion, attained a 220-knot speed and 25,000-foot altitude during its almost two-hour sortie.

Evaluating the prototype's performance in comparison to the turboprop foundation upon which it was based, Feuersenger noted the absence of propeller wash and the smooth, over-wing air flow, no longer needing to continually retrim the aircraft after power setting changes. Performance either approximated or exceeded computer calculations.

Assessing the regional jet after landing, Feuersenger said it performed "flawlessly" and that "pilots will love this aircraft."

Three other prototypes took part in the 18-month, 950-flight, 1,560-hour flight test program, which was delayed by four months because of the need to redesign the Dunlop braking system and Messier-Dowty shock absorbers to cater to the aircraft’s deceleration without propeller braking effects. Aircraft D-BWAL, first flying on May 20, was involved in performance certification testing. Avionics integration, the realm of the third prototype (D-BEIR), commenced with its July 10 first flight, and function and reliability testing began three months later, on October 15, when the fourth prototype first took to the air.

The first production-standard aircraft, featuring a five-foot wingspan increase and 8,160-pound fuel capacity, entered its intended aerial realm after the four prototypes.

6. Test Flight:

Initial Do-328JET performance could be gauged by the test flights its prototypes undertook.

The aircraft's two-person cockpit, with a Honeywell Primus 2000 integrated avionics system, featured five eight-by-eight-inch CRT displays, the primary flight (PFD) and multifunction (MFD) displays duplicated before each pilot and the engine instrument and crew advisory system (EICAS) located in the center.

The reclinable seats, with five-point harnesses, were equipped with storable armrests and were adjustable forward and aft.

Engine starts, using bleed air from the auxiliary power unit, were automatic, their parameters registered by the full authority digital engine control (FADEC).

After the flight plan had been entered into the flight management system (FMS) and the windshield panels had been electrically heated to harden them against bird strikes or other foreign object impacts, the twin-jet was steerable by means of its rudder pedals, provided the turn was no more than ten degrees to either side; sharper turns required the nose wheel steering tiller.

The aircraft was offered with two gross weights. The lower, designated the Do-328-300, could carry a 7,200-pound payload, had a maximum take off and landing weight, respectively, of 33,510 and 31,063 pounds, and a 740-nautical mile range with this payload and reserves at a 31,000-foot altitude. The higher, designated the Do-328-310, could carry an 8,104-pound payload, had a 34,524-pound take off weight and a 31,724-pound landing weight, and a 900-nautical mile range.

A corporate version, the Envoy 3, typically accommodated between 12 and 19 in layouts specified by the operator, but which usually included easy chairs, tables, work stations, divans, sofas, wardrobes, galleys, and lavatories. Additional fuel tankage increased its range to 2,000 nautical miles.

Calculated and entered take off reference speeds varied, of course, according to gross weight and atmospheric conditions. A 27,488-pound ramp weight, for example-including 5,000 pounds of fuel-resulted in V1, VR, and V2 speeds, respectively, of 103, 110, and 117 knots in prototype D-BJET.

Flap settings included 12 degrees for take off, 20 for approach, and 32 for landing.

With the altitude, airspeed, attitude, vertical speed, and cleared altitude visible on the PFD, and the departure track on the MFD, the aircraft, cleared for take off and brake-released, initiated its acceleration run, its throttles advanced and its PW306B turbofans under FADEC control.

A 15-degree pitch angle ensured a best rate-of-climb of a little over 5,000-fpm.

Cruising at its 35,000-foot service ceiling, it assumed a Mach 0.69 speed with a 97.6-percent N1 fan, resulting in a 1,797-pound-per-hour fuel burn. Maximum cruise speed, at 25,000 feet, was 405 knots.
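Under ISA assumptions, the quoted cruise Mach number and fuel flow imply a true airspeed and specific range; this is back-of-envelope arithmetic only, since the actual figures depend on temperature and weight:

```python
import math

# True airspeed and specific range implied by the quoted cruise figures.
# ISA atmosphere assumed; back-of-envelope only, not performance-manual data.
T0, lapse = 288.15, 0.0065          # sea-level temperature (K), lapse rate (K/m)
alt_m = 35_000 * 0.3048             # 35,000 ft converted to meters
T = T0 - lapse * alt_m              # ISA temperature at altitude
a_kt = 661.47 * math.sqrt(T / T0)   # local speed of sound in knots
tas_kt = 0.69 * a_kt                # true airspeed at Mach 0.69
spec_range = tas_kt / 1797          # nautical miles per pound at 1,797 pph
print(round(tas_kt), round(spec_range, 3))
```

The result, roughly 398 knots, is consistent with the 400-knot cruise target mentioned earlier in the text.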

A 4,000-fpm descent rate, to 20,000 feet, was accomplished with a flight-idle power setting and Mach 0.61 airspeed.

The elimination of the previous version's propellers necessitated a 20-knot increase in approach speed, and ground spoilers deployed automatically after touchdown.

7. Sales and Service:

Sales, as with any other aircraft, depended upon quality, price, and the ability to fulfill its design goals. In the case of the Do-328JET, however, the aircraft actually created, and needed to create, its own market niche, and therein lay the first obstacle to its orders: was there a requirement for a 30-seat regional jet, with in-house competition from its own turboprop and from the likes of the British Aerospace Jetstream 41, the Embraer EMB-120, and the Saab 340, and could it fulfill its mission as economically as these types?

Not all carriers were likely to follow the 30-passenger pure-jet trend, especially those that saw little benefit in operating a type which was not part of a family, a strong competitive advantage Bombardier and Embraer both enjoyed over Fairchild-Dornier.

So similar, in fact, were its turboprop and turbofan siblings that they shared the same production line and airlines were able to wait until six months before scheduled delivery to choose a powerplant type.

Several factors, however, seemed to indicate its need.

Analyses of 300- to 1,000-mile route sectors revealed that they were either too infrequently served or were done so with inappropriately sized equipment, resulting in low load factors.

Seeking to exploit the former case, in which demand often exceeded capacity, Fairchild-Dornier foresaw initial, and ideal, deployment on traditional 19-seat turboprop routes, on which the type was envisioned as stimulating demand because of its cabin-class comfort, in-flight service, and pure-jet speed, the same way the 50-seat regional jets had "recreated" the 30-seat turboprop market.

Finally, because of restrictions inherent in US pilot scope clauses, which dictated the number of regional jets that could be operated by major-aligned, code-share partner carriers, orders for turbofan aircraft accommodating 50 passengers or more were limited. Falling below this threshold with its 32 to 34 seats, the Do-328JET was exempt from these restrictions. At the same time, it gave carriers the opportunity to close the lower-end service gap between traditional turboprop capacity and that of the new breed of regional jets, enabling them to substitute mainline flights with increased, business-traveler-attracting frequencies, particularly during off-peak, service-scarce or service-devoid times, such as midday.

Orders, as with any aircraft, increased as the program progressed. Launched at the 1997 Paris Air Show, the program attracted initial orders for six aircraft from Proteus Airlines, based in Dijon, France, and for four from Aspen Mountain Air of the US. At the time of its first flight, there were 17 firm and 15 optioned orders, and by July of 1998 these figures had respectively increased to 51 and 28, of which 11 were for Envoy 3 business versions. Continuing to mount, they reached 75 and 101 by February of 2000 and 141 and 91 by early 2002.

Skyway Airlines, “the Midwest Express Connection” established in 1993 by Midwest Express itself to serve short-range routes and provide feed to mainline flights at its Milwaukee hub with a fleet of 15 19-passenger B1900Ds, took delivery of the first Do-328JET on August 4, 1999, employing it on route-proving sectors before inaugurating it into scheduled service two months later, on October 6.

Although the B1900Ds were suited to certain routes, they left a capacity gap below mainline Midwest's fleet, whose aircraft featured four-abreast leather seats and premium, all-business-class service; Skyway's Beech aircraft offered little more than standup headroom.

Because 75 percent of Skyway’s traffic was origin-and-destination in nature, and these passengers seldom experienced its parent’s full-service product, its reputation was less than it should have been.

What was needed was an airplane that could accommodate half the capacity of Midwest's DC-9s, but offer comparable speed, comfort, and service. The 50-seat CRJ-100/-200 and ERJ-145, considered too close in capacity to them, were quickly discounted.

The solution lay in Fairchild-Dornier's microjet, of which five were ordered, with another ten on option. They were seen as serving four purposes.

1). Increase capacity on existing Skyway routes.

2). Inaugurate service between city pairs too dense for its 19-seat B1900Ds, yet too thin for Midwest Express’s own 60-seat DC-9-14s.

3). Replace these DC-9s on short, low-density sectors.

4). Add frequency to existing Midwest Express routes during off-peak times.

Featuring the same leather seats, carpets, and sidewall patterns as its parent’s DC-9s, it was able to offer identical service, with cocktails, hot towels, hot snacks, and freshly baked cookies from the aircraft’s dual-oven equipped galley.

Inaugural Do-328JET routes from Milwaukee included Grand Rapids, Pittsburgh, Nashville, and Toronto, with nine daily per-aircraft sectors, like those of its B1900Ds, except that the replacement type considerably reduced block times, from two hours to one hour, 20 minutes in the case of Nashville. Its only "inconvenience" was its very speed: although, at about Mach 0.66, it was higher than that of the turboprops, it was far lower than the Mach 0.8 of, say, the mainline 737s plying the same airways between VORs, forcing it to accept lower flight levels to avoid traffic conflicts.

Gandalf Airlines, of Bergamo, Italy, became the first European operator of the type, inaugurating service with the first two of 12 ordered aircraft in September of 1999 with three daily round-trips between Milan/Bergamo and Paris.

Atlantic Coast Airlines, like Skyway, was a regional operator aligned with a major US carrier through branding and code-sharing agreements, in this case United, under whose United Express banner it flew.

Operating 19-passenger Jetstream 31s and 29-passenger Jetstream 41s, mostly to United's Washington-Dulles hub, it was able to substitute its 25 Do-328JETs according to demand, frequency, and time of departure.

8. Do-428JET:

Seeking to offer the crucially needed second member of its regional jet family, yet avoid the already-crowded 50-seat market, Fairchild-Dornier launched a stretched version on May 19, 1998 at the Berlin International Air Show, partially in response to often-requested capacity increases.

Having already experienced neck-and-neck competition with the ERJ-135, Fairchild-Dornier anticipated similar conflict with Embraer’s also recently launched, 40-passenger ERJ-140, which shared a 96-percent commonality rate with its smaller predecessor. Both the ERJ-140 and the Do-328JET’s larger brother, the Do-428JET, were aimed at operators that needed a step-up of about ten seats over the smaller-capacity model upon which they were based.

Although the Do-428JET was initially envisioned as a simple-stretch derivative, it quickly became apparent that such an approach would have sacrificed short-field performance, since the stretch entailed higher structural and gross weights, a deficiency only a higher-thrust engine could remedy.

According to Stanley Deal, Fairchild-Dornier’s Vice President for the Do-228, -328, -328JET, and -428JET regional airliners, “Our strategy is to add a member to the -328JET family, offering 44 seats… and giving us enough differential between the (-328JET).”

Incorporating forward and aft section insertions, the aircraft, with a new 83.4-foot overall length, introduced a repositioned Type III emergency exit and a second, aft Type I door, accommodating between 42 and 44 passengers at a 31-inch seat pitch in a “new look” cabin, which was 44.7 feet in length. The enlarged baggage compartment behind it had a 336-square-foot area.

A modified wing, with a 71.5-foot span and 516.7-square-foot area, introduced a 1.7-foot greater chord and rounded wingtips, while enlarged, inboard sections facilitated the installation of wider, 33.2-inch-diameter engines. Bleed air replaced its predecessor’s boot deicing system.

The engines themselves, 7,400 thrust-pound Pratt and Whitney Canada PW308Bs designed for the Hawker Horizon business jet, represented a 25-percent power increase over the PW306Bs of the -328JET and introduced thrust reversers.

With a 44,533-pound maximum take off weight, the type had a 425-knot cruise speed and a 900-nautical mile range, now provisioned with a 1,510-US gallon fuel capacity.

Production entailed wings built in and shipped from San Antonio, Texas; fuselage sections assembled by Aermacchi in Italy; final assembly by Israel Aircraft Industries (IAI) in Israel; and external painting and cabin fitting in Oberpfaffenhofen.

With cockpit commonality between the -328JET and -428JET, and common pilot type ratings, Fairchild-Dornier marketed them as the ideal pair of entry-level regional jets, envisioning them as 19- and 30-seat turboprop replacements, respectively, because of the market growth their pure-jet appeal was expected to create.

Launch customer Atlantic Coast, with an order for 30, foresaw considerable flexibility in operating both types, able to tailor capacity to demand.

Fairchild-Dornier's own strategy, however, soon proved less than successful. A weaker-than-expected sales foundation created by the original Do-328JET and a dramatic increase in nonrecurring development costs, by some $100 million for its larger-capacity successor, began to cast doubt on whether the latter would ever be realized, with unanticipated design changes, including a 4.7-inch rearward wing repositioning, the addition of an aerodynamic fairing, the relocation of the undercarriage, and a reduction in weight, causing first deliveries to Atlantic Coast to be rescheduled from the last quarter of 2002 to the first of 2003.

Although a vitally needed cash infusion from investment firms Clayton, Dubilier, and Rice and Allianz Capital Partners ultimately kept the company afloat, its much-needed pairing sank: changing market conditions and a paltry number of orders rendered the stretched version unfeasible and forced its cancellation. Orders and options, totaling 113 from Atlantic Coast, Skyway Airlines, and Air Alps, were worth $1 billion at the time.

With amended US pilot scope clauses now permitting an increasing number of 50-seat regional jet operations, and with the consistent, and costly, redesign required relative to the smaller baseline version, the Do-428JET had become less attractive, and the decision to cease its development came down to the lesser of two evils: leave a hole in Fairchild-Dornier's product line or one in its profits.

Profit won out, but only until the cash ran out, and on April 2, 2002, now mired in $670 million of debt, the company was forced to declare bankruptcy, ceasing to exist.

9. AvCraft Aviation:

Like DASA's former Fokker subsidiary before it, the company lay in wait for a financial lifeline to resurrect it, and one was cast from Leesburg, Virginia-based AvCraft Aviation, itself founded in 1999 by pilot and CEO Ben Bartel as an aircraft completion center then located in Akron, Ohio.

Having already been an approved maintenance facility for both the turboprop and turbofan versions of the Do-328, AvCraft took the logical step, on December 20, 2002, of purchasing these and the Do-428JET programs, along with five aircraft still on the production line and 18 completed but unsold ones; the name, type, and production certificates; and the tooling, spares, and parts.

Although it intended to restart the production line after selling these 18 aircraft, and actually succeeded in placing a few of them with Hainan Airlines of China, it never realized its goal of targeting the type more at the corporate than the airline market. Following in Fairchild-Dornier’s footsteps, it declared its own bankruptcy in early 2005, ending a program full of promise but short on profits.

AGM-86 Air Launched Cruise Missile

Starting in the mid-1960s, the USAF rapidly gained extensive experience operating reconnaissance drones over Southeast Asia, and the diminutive “bugs,” principally AQM-34 versions of the Firebee target drone, proved to be quite survivable against anti-aircraft artillery and SA-2 missiles. This pointed the way towards a new generation of air-launched cruise missiles that would give strategic bombers a standoff capability against increasingly effective Soviet air defenses. The AQM-34 was around the size of later ALCMs, but a powerplant more efficient than its turbojet engine would be needed to give such a small aircraft a useful strategic range. Happily, work was underway on miniature turbofans, and by the early 1970s compact units rated at around 500-600 lb of thrust were feasible.

The ALCM actually stems directly from the Subsonic Cruise Armed Decoy (SCAD) program of the early 1970s, which was aimed at providing SAC with small bomber-launched decoy missiles that would flood Soviet radar screens with false targets. For a decade, this mission had been handled by the McDonnell Douglas GAM-72/ADM-20 Quail, a small turbojet drone. The antithesis of what would later be called “stealth” technology, Quail was fitted with features that greatly magnified its radar cross section, in the hopes that Soviet radar operators would read the enhanced returns as coming from the bombers themselves, greatly complicating attempts at intercepting the real threats. SCAD was to take over the decoy role, taking advantage of improvements in ECM technology to further compound the woes of enemy air defense personnel. SCAD itself would be a threat as well, being able to accommodate a small nuclear warhead. SCAD was to be carried by both the B-52 and the B-1A.

(Quail and SCAD were hardly the first programs aimed at providing SAC bombers with decoy and defense suppression missiles. Also known as MX-2013, the Radioplane B-67/GAM-67 Crossbow was a 1950s attempt at a strategic anti-radar missile that would be fired against Soviet installations up to 300 miles away, under the power of a J69 turbojet. The B-50 Superfortress could carry a pair of Crossbows, while the B-47 Stratojet could accommodate four. Another canceled design was the XGAM-71 Buck Duck, which was to be carried by the B-36 Peacemaker. And finally, the SM-73 Bull Goose was a Fairchild program for a ground-launched delta-wing decoy missile (which could be armed) that would fly from US launch sites into the USSR, cruise propulsion being provided by a Fairchild J83 turbojet. The Goose program was dropped in December 1958, with the engine being canceled a month later.)

By July 1972, Boeing had been selected as the SCAD airframe contractor, with Philco-Ford charged with developing the ECM suite and Litton supplying the guidance. Earlier, Teledyne CAE and Williams Research had been contracted to develop competitive engine prototypes; Williams won production orders with its F107 design. SCAD’s design resembled a small aircraft: a fuselage with a basically triangular cross-section was mated to wings swept at 35 degrees, these being extended after the missile was launched. The engine would have a dorsal inlet just ahead of the small vertical tail. The entire package was sized to fit the standard SRAM launcher.

Despite the contract awards, the SCAD program would run only to July 1973, when it was put on hiatus to allow the rationale and requirements of the system to be re-examined. By 1974, SCAD had given way to the Air Launched Cruise Missile (ALCM) program, which drew heavily on the original AGM-86 SCAD design but was optimized purely for the strike role. The AGM-86A, or ALCM-A, would have a range of around 750 miles, carrying a SRAM-type W69 warhead. Test flights of missiles with live engines were underway by the spring of 1976, and in September of that year “full-up” vehicles began trials. By the spring of 1977, Boeing had been directed to begin work on the long-range version, designated AGM-86B. This had an airframe stretched to permit a larger fuel tank, helping to boost range to 1,500 miles. The wings did not have as much sweep, the contours of the nose and tail were changed, and the W80 warhead from the Navy’s BGM-109 was substituted for the ALCM-A’s W69.

This did not mean an immediate end to the AGM-86A, as some planners wanted to buy a mixture of A and B-models, using externally-carried AGM-86Bs for missions that demanded extra range, while using the original models for less difficult targets. Additionally, limitation of ALCM range as part of arms control agreements was a possibility, and this gave credence to the idea of making the AGM-86A convertible to B-model configuration, allowing the US, if necessary, the capability to rapidly break out of treaty limitations to match future Soviet developments. Fielding a larger ALCM presented some problems, mainly compatibility concerns with the B-52. A longer missile meant that a new rotary launcher would be necessary for internal carriage, as the existing SRAM unit could not be used, and a longer launcher would interfere with bomb carriage. Ultimately, it was decided that the B-52’s capability to carry the heavy B28 gravity bomb would be abandoned to allow for longer ALCMs.

The ALCM’s small size made many aircraft potential launch platforms for the system, and proposals were made during the late 1970s and early 1980s to adapt both new and older designs to the role. Large transport types in particular were examined by several companies, including Boeing, whose 747 could carry dozens of missiles internally, the weapons being ejected through a fuselage port. Lockheed’s C-5 Galaxy was also a contender, and demonstration hardware was actually built, although air launch tests were not carried out. Other large aircraft considered were the Lockheed C-141, L-1011, and Boeing C-135 and 707. While capable of carrying heavy missile loads, the transport-derived aircraft would have had little or no capability to penetrate protected airspace. Rockwell, still hoping to salvage some of its B-1A work, proposed a derivative aircraft with fixed wings that could carry an expanded load of ALCMs, while General Dynamics suggested several rebuild programs for the F-111 and FB-111 fleets that would have included adding ALCM capability. Ultimately, it was decided to limit ALCM deployment initially to the converted B-52s.

Despite both Air Force and Navy cruise missile programs having been made as similar as possible, there was still pressure to buy a single common missile for both missions, and Congress dictated that a competitive fly-off between the ALCM and the Tomahawk be conducted. The AGM-86B would be the baseline Boeing missile, while GD would enter the AGM-109 version of the Tomahawk. Like the Boeing entry, the AGM-109 would not fit on an unmodified SRAM launcher, and although a shortened version of the missile had earlier been considered, that model would have had a dramatically shortened range. Ironically, just such a version, albeit conventionally armed and dubbed Airhawk, was proposed in the late 1990s to both the USAF and RAF.

To conduct the flyoff, a trio of B-52s were fitted as launch aircraft, while four Phantoms were earmarked as chase planes. To portray a typical wartime mission that would begin over water, long-range test launches were conducted off the California coast, with the missiles flying to a range in Utah. The flyoff began on July 17, 1979, when an AGM-109 was launched. The Boeing missile first flew on August 3, but crashed in Utah. Despite this inauspicious beginning, Boeing was later named the winner of the evaluation, and on March 25, 1980, the company was formally awarded the production contract.

Even before the flyoff had been completed, the USAF had designated the 416th Bomb Wing’s B-52s at Griffiss AFB as the first aircraft to carry the winning ALCM design operationally. Aside from the structural and avionics changes necessary, ALCM-modified B-52Gs were also fitted with strakelets on the wing leading edges; these were large enough to be seen by Soviet reconnaissance satellites, allowing ALCM carriers to be counted for arms control purposes. Deliveries of operational ALCMs to Griffiss began in the spring of 1981, and by December of the following year the B-52G/AGM-86B combination was in service. The G-model Stratofortresses could only carry ALCM externally on wing pylons, but the later H-model conversions were fitted for internal carriage as well, using the Common Strategic Rotary Launcher. The B-1B Lancer was basically compatible with the ALCM system, but was not operationally configured for using the missile, being used primarily as a penetration bomber before switching over to the conventional role.

At one point, the USAF wanted to buy over 3,400 AGM-86Bs, but ironically, given the amount of controversy, time, and money involved in getting the missile into production, this projected buy would be radically cut. Fears that advanced Soviet “look down/shoot down” interceptors such as the MiG-31 Foxhound and new SAMs such as the SA-10 and SA-12 would be able to find and destroy ALCMs spurred the drive to put low-observable features on a new design, the AGM-129 Advanced Cruise Missile, and to free up budgetary resources the AGM-86B program was scaled back. A total of 1,715 ALCMs were delivered, with the last being turned over in early October 1986.

The Pulse of Technology – Keeping Pace With Continuous Change – November, 1998

Gordon Moore, the co-founder of Intel Corporation, first postulated the now-famous Moore’s Law in the nineteen-seventies. Moore’s Law states that the processing, or computational, power of silicon chips will double every twenty-four months, while the price of those chips will halve over the same period. This law has held remarkably well for over twenty years. We are now approaching a time when this seemingly immutable law is becoming outdated. In fact, new silicon chips are doubling in power every twelve to eighteen months, while pricing is being halved in even less time. What has happened to the underlying technology that drives these silicon chips, and what market forces have dictated rapidly declining prices?
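The compounding the author describes is easy to underestimate; a back-of-the-envelope sketch (the function name and the ten-year horizon are illustrative assumptions, not figures from the text) shows how much difference the shift from a 24-month to an 18-month doubling cycle makes:

```python
def transistor_growth(years, doubling_months=24):
    """Relative processing power after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

# Classic 24-month doubling: 2x in two years, 32x in ten.
print(transistor_growth(10))                 # 32.0
# The faster 18-month cycle the author observes: roughly 100x in ten years.
print(round(transistor_growth(10, 18), 1))   # 101.6
```

The same exponential, run with the shorter cycle, more than triples the decade's gain, which is why a seemingly small change in the doubling period matters so much.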

There are several factors that lead to this inexorable increase in processing power, just as these same factors exert downward pressure on prices. Let’s look at several of them in the context of hardware developments, software developments, and the rise of the Internet as the ubiquitous network that many people predicted would be necessary to make computers universally accepted in daily life.

Hardware Development

When Intel was founded by ex-Fairchild developers, the mid-range computer, exemplified by the DEC PDP series, Data General machines, the IBM System/32 and /34 series, and the first HP boxes, was the emerging standard in the computer industry. Machines of this period were often viewed as departmental machines, required to perform quick, hands-on computing applications free from the centralized (i.e., mainframe) I.T. staffs of the time.

The idea of a small, nimble machine that could be programmed and developed by local departments was extremely appealing at the time. Because of the diversity of manufacturers and proprietary operating systems, standards were largely undeveloped, causing competing platforms to jockey for position. Migration from one machine to another was largely unheard of, due to the high costs of converting data and application programs, not to mention the high training costs for I.T. staff.

The acceptance of UNIX as an open standard marks a watershed in the history of computing. For the first time, applications programs could be developed that were cross-platform – that is, capable of running on alternate hardware platforms. This newfound freedom allowed software programmers to write a single application that could be run on multiple machines. The importance to hardware developers was simple – they could spend more time on the refinement of the underlying silicon, and less time developing proprietary hardware systems. It is this process of refinement that has marked the decrease in cost of silicon that we know today.

The advent of the personal computer in the late nineteen-seventies and early nineteen-eighties marked another watershed in the development of hardware. Where mid-range computers allowed entire departments to break free of the constraints of mainframe computing, the PC brought computing to the thousands of business users who wanted the ability to perform analysis and data gathering at their own convenience, not that of the I.T. department. For the first time, individuals could analyze, store, and retrieve large amounts of data without having to master a computer language, and they could perform these tasks at their own pace. This device literally transformed the business world, putting computations once performed by large mainframe computers into the hands of everyday users. This breakthrough spirit was best embodied by Apple Computer, and symbolized in its “Big Brother” ad campaign in 1984. Aside from its edgy attitude, Apple also pioneered consumer usage of the floppy drive, mouse, and graphical user interface, all of which made computing more accessible to everyday users. The ergonomics of computer use drove hardware design and manufacture in a way previously unknown; heretofore, ergonomics had been largely ignored in computer design and manufacture, and Apple changed all that with the introduction of the Macintosh line of PCs.

For all its innovation and edge, Apple made a mistake similar to that made by the competing mid-range computer makers of the mid-seventies: its OS (operating system) and architecture were proprietary. Fearing that licensing would erode its technological leadership, Apple kept its systems and hardware closed and opened the door for a technically inferior product to gain a foothold that it has not yet relinquished.

In 1981, IBM introduced the first IBM PC. This device was, by most standards, technically inferior to the Apple. It possessed a slower processor, was bulky, and used a text-based approach to computing. Yet, despite these shortcomings, it and its brethren, the so-called IBM-compatible machines, have dwarfed the Apple offerings over the past two decades. Why? Unlike Apple, the IBM-compatible machines were based on an open architecture. The specifications for these machines were published so that third-party vendors could develop hardware and software for them. In a sense, the best ideas from the best manufacturers get adopted and become the de facto standard for that particular piece of hardware.

The final piece of the hardware development puzzle emerged in 1985 or 1986 in a somewhat unheralded manner: the adoption of PC networking. Initial reactions to the PC network concept were, for the most part, negative. Individual users feared that networked computers would once again lead to I.T. control of what were, until now, personal computers. Once PCs were networked, control would be wrested from users back to the large mainframe computing departments of the sixties and seventies.

As it turns out, the PC network actually allowed individual users to communicate effectively once the infrastructure was in place to allow for wired offices. Instead of wresting control away from users, the PC network allowed sharing and collaboration at previously unheard-of levels. A new concept, known as the “network effect,” developed as a result: the more people who share information in a group, the more powerful the group becomes. Users gain more utility as more people, data, and ideas are shared, while those left out of the network see their productivity and connectivity suffer. Being connected becomes important in itself, and users face the prospect of being stranded if they are not part of the larger network. In this respect the “network effect” resembles a large public library or database that becomes more useful as more information is stored there.
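One common way to quantify this effect, not named in the original, is Metcalfe's law, which values a network by its number of potential pairwise connections. A minimal sketch:

```python
def potential_links(n):
    """Metcalfe-style count of pairwise connections among n users: n*(n-1)/2."""
    return n * (n - 1) // 2

# The number of possible connections grows far faster than the user count:
for users in (10, 100, 1000):
    print(users, potential_links(users))
# 10 -> 45, 100 -> 4950, 1000 -> 499500
```

A hundredfold growth in users yields roughly a ten-thousandfold growth in connections, which is why being outside the network becomes increasingly costly as it grows.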

To summarize, several trends can be seen in hardware development that drive the pace of change in silicon: first, the move away from mainframe systems to mid-range systems supporting open standards; next, the development of personal computers that encouraged users to take control of data manipulation, storage, and retrieval; then, the emergence of an open architecture and OS that allowed standards to be set on the merits of a product rather than by a proprietary system; and finally, the development of the networked office, where the power of the network is enhanced as more users are added.

These trends will continue, and likely accelerate, as users demand more functionality in a smaller and smaller footprint. The acceptance of PDAs (personal digital assistants), cell phones, and pagers will fuel consumer demand for devices that are easier to use and always connected. The convergence of data and voice transmission over the same carrier network will lead to increasing features and lower price points for machines that offer multiple uses (telephone, pager, PC, Internet access) at the same time.

Software Development
Early software languages were developed to instruct computers at the level of binary machine code. These assembler languages were very basic in function and instructed computers to perform what we would now consider routine run-time and maintenance tasks. Tedious to write and compile, these early languages had none of the programmer conveniences that we take for granted today, such as debugging and writing tools that make the programmer’s job easier. They have become known as first-generation computing languages.

As engineers struggled to make the interaction between computer and user more intuitive, new languages such as Fortran and Cobol were developed, the first designed primarily as a scientific programming language and the second as a business programming language. These languages added editing and debugging features and were written in something resembling English-language commands.

The development of Cobol coincided with the widespread commercial use of mainframe and, later, mid-range computers. Other languages, such as PL/I and RPG II, were also adopted on mid-range computers and could arguably be called the first examples of third-generation computing languages. These newer languages incorporated more English-like commands and syntax, and built new debugging and editing features directly into the language. As basic language structure evolved, so too did the application programs being developed. Early in the development of computer languages, a schism formed between the class of software that performed routine maintenance and run-time chores, which came to be known as the operating system (or OS), and a second class that performed specific tasks, such as running a payroll or updating inventory, which became known as application software.

The widespread adoption of second- and third-generation programming languages corresponded with the growing use of mid-range computer systems. So too, the proliferation of application programs led to a growing acceptance of these departmental computer systems. In fact, the use of departmental computers was tied to efficiently designed and executed single-purpose programs, such as inventory control or payroll processing, that were often run within a self-contained business unit.

As computer hardware evolved from mainframe to mid-range systems, the need for a computing system that allowed multiple users to access a machine and perform independent tasks increased greatly. A group of Bell Labs scientists created such a system in the late nineteen-sixties, one that supported multiple users running multiple tasks at the same time. This operating system, known as UNIX, seemed ideally suited to the new computing environment and caught on quickly. The combination of departmental computers and UNIX led to the development of distributed computing.

The advent of the personal computer accelerated the trends that were beginning to emerge in the distributed computing model. This model allowed computing power to be located in the hands of those people who required immediate use and manipulation of stored data, while at the same time providing network connectivity.

The original PC operating system, DOS (disk operating system), was hardly a blueprint for distributed processing. The introduction of the IBM PC in the early eighties married an under-powered processor, a meager memory configuration, and a hard disk drive (when available) to an anemic OS. Yet this original machine would morph in rapid succession into a robust group of machines with network capabilities.

The catalyst for this change came in the form of early network cards that allowed PC users to connect to midrange machines or other PCs. These early adopters were driven primarily by a desire to share files or hardware devices (such as printers or larger hard drives) among work groups. Within a short period of time a specialized version of OS was developed to handle these chores more efficiently, with Novell being the most recognized provider of network operating systems. As the capabilities of these network operating systems expanded, new hardware devices were developed to take advantage of the specialized nature of network computing. In short order file servers, print servers and application servers (PCs developed to host application programs in one location) became commonplace.

At about the same time as the development of the network-computing model, a sea change occurred in the way users interacted with their machines. Until then, most application programs were relatively unchanged from their mainframe and midrange counterparts: for the most part text-based, with some graphical elements thrown together in a jumbled, clumsy way. Once again Apple led the change, in the form of the Macintosh graphical interface, which was intuitive to use. Instead of invoking arcane command-line instructions, users could point and click at an object on the screen and launch a file, program, or document with ease. The basis for the Apple graphical user interface, along with the point-and-click device (the mouse), was conceived, but not commercialized, at the Xerox PARC facility in Palo Alto in the seventies. Microsoft developed its own version of the graphical user interface with its Windows “operating environment.” The first two versions of this environment ran on top of its famous DOS system in a somewhat ungainly manner. Microsoft finally got the user interface right in Windows 3.0. In similar fashion, Microsoft incorporated many of the benefits of the network operating system into Windows version 3.11, and later improved both the operating system and network features with Windows 95.

The stage was now set for the next “big thing” in computing. Once again, this next wave had its origins in the nineteen-sixties, only to appear as a full-blown implementation in the nineteen-nineties.

The Rise of the Internet
The Internet was conceived in the nineteen-sixties as a way to link the computing resources of several west-coast universities. At the time, computational power was expensive, and shared resources were a way to defray the costs of deploying large systems. At the same time, the U.S. government realized that if it could build a similar networked structure, it would be difficult to totally disrupt computer operations and data in the event of a national disaster. Over the next two decades, more universities and governmental agencies were added to this patchwork quilt of networked machines.

In order to link disparate machines running different operating systems, a common file transfer procedure was required. The FTP (file transfer protocol) standard was developed for this purpose, allowing different machines to communicate effectively. Similarly, a method of routing files and messages across different locations was required; out of this requirement came the TCP/IP protocols, which determine how data is routed through the system. These two developments supplied the backbone of what was to become the Internet.

Throughout the eighties, the Internet remained the domain of the scientific and academic communities. Visionaries imagined that this network could be used to connect people easily across great distances and multiple computing platforms, but that vision awaited a tool that allowed files to be viewed easily across platforms. A group of computer scientists at the University of Illinois came up with such a tool: the web browser, a program that allowed people to view files graphically. Their browser, known as Mosaic, was released in 1993. Its arrival allowed people to easily locate and view files on the Internet, and fueled the explosive growth of the phenomenon known as the WWW (World Wide Web).

Essentially, the development of the WWW has allowed users to find files and communicate across a worldwide network. The web has transformed an arcane file-messaging system into a new medium, one growing faster than any other medium in history. The growth of the web is based on several of the trends noted earlier:

1. The open nature of the web allows contributions from multiple sources and computing platforms.

2. Contributions are not limited to programming professionals. People with very little computer training can contribute files, articles, and information on the web.

3. The web is suited to the dynamic nature of business and personal life. It no longer requires weeks, months, or even years to develop applications; these tasks can now be performed easily and in a short period of time.

4. As more people become accustomed to the web, and as prices drop, PC purchases increase, causing further downward pressure on hardware prices. The costs of hardware and web access have been declining by 15% to 20% per year for the past several years.

5. The web is the ultimate “network effect.” The more people participate, the more information is available, and the more critical it becomes to be included in the network.

6. The web has developed a new concept of speed. “Internet time” is a recently coined term for development cycles that run roughly seven times faster than “real” time. This notion of speed has spilled over into Internet business life, where all aspects of running an Internet business (sales, procurement, deal making) occur at warp-speed rates.

7. The economics of web space seem to defy business logic and gravity. People have developed the notion, rightly or wrongly, that information and services provided on the web are free. This has led web companies to develop unusual approaches to raising revenue in this new medium. At the same time, the stock prices of web-based companies have achieved phenomenal valuations, seemingly unsupported by any need to have revenues or earnings. This seeming dichotomy between a lack of tangible earnings and high stock valuations will continue for a while. The jockeying for position on the web is a market-share grab in what could become the largest medium invented to date. In addition to sheer size, the web promises the Holy Grail of media: the ability to interact directly with a consumer to influence purchasing behavior.

8. The notion of competitive advantage, the idea that a company can gain a foothold over competitors through focus on a series of core values or competencies, as Wal*Mart has built with logistics and deployment, or GE with developing management talent, is being dismantled by the web. The web is the ultimate leveling force. A site can be developed and released on the web and, in a matter of a few months, spawn dozens of competitors, many with improved features or benefits. In such an environment, sustainable competitive advantage has no real meaning unless it is managed in weeks or months, not years or decades.
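The 15% to 20% annual price decline cited above compounds quickly. A small sketch (the $2,000 starting price and five-year horizon are illustrative assumptions, not figures from the text):

```python
def price_after(initial, annual_decline, years):
    """Remaining price after compounding an annual fractional decline."""
    return initial * (1 - annual_decline) ** years

# A $2,000 PC declining 15% and 20% per year over five years:
print(round(price_after(2000, 0.15, 5), 2))  # 887.41
print(round(price_after(2000, 0.20, 5), 2))  # 655.36
```

Even at the lower rate, the price falls by more than half within five years, which is consistent with the faster-than-Moore price erosion described at the start of the article.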

A Vision of the Future

Given the developments that occurred in computing during the past thirty years, how will we be affected by technology in the future? What are the trends that will affect us in the next several years, and how can we prepare for what many believe is a tumultuous, if exciting future?

The most important trend we face is the pace of change that will be occurring in hardware, software, and bandwidth on the Internet. Eighteen-month development cycles are a thing of the past. Hardware and software manufacturers and developers are now operating on six- to nine-month development cycles, from concept formation through to manufacture and distribution. The rise of the web has accelerated development times, and will continue to do so for the foreseeable future. To adapt to this trend, developers and manufacturers will have to plan for multiple critical paths and have the ability to react quickly to changes in business trends as they are planning, developing, and implementing projects. People currently have the ability to plan in this manner on an intellectual level; in many industries this has been the accepted norm for the past few decades. On an emotional level, the cost of large-scale disruption, change, and constant redeployment can and will be unsettling.

The second trend that will come to dominate our lives is the constant downward pressure on hardware and software prices, coupled with ever-increasing demand for hardware and software that work easily. Again, there is a seeming contradiction between lower price and ease of use. As computer hardware and software become more mainstream, the need for simplicity and power will dominate every other consideration. Despite dire predictions that the end of the PC era is near, nothing could be further from the truth. PCs will remain with us for a long time to come, but their usage patterns will change. They will become file repositories, akin to vast research libraries. Users will gravitate toward more specialized devices to communicate (combination phones, electronic address books, web skimmers, message boards); process information (voice-activated pads, storage devices, intelligent dictation systems); and be entertained (3D game players, downloadable video and music players, web-enabled real-time games connected to anywhere in the world, personalized concerts viewed from wearable stereo receivers).

The third trend that will come to dominate our thinking and beliefs about technology is the notion of ownership of intellectual property. When Netscape made the unique and courageous decision to give away its commercial browser technology, it essentially validated the concept of open computing, but it also set the notion of intellectual property rights on its ear. The foundation of intellectual property rights, that an author or inventor owns the writing or invention, has been the cornerstone of trademark and patent protection for the last 400 years. To give away this right, to make intellectual property free to be distributed, modified, and shared, is a sea change in the way we view human capital. If knowledge is power, the free distribution of knowledge will enable a new level of empowerment and use of human talent. Make no mistake, we will struggle mightily with how to value, reward, and allocate resources to the developers and users of knowledge. Throughout history, this tension, this struggle if you will, has led to heightened levels of creativity and knowledge.

The fourth trend is more disturbing in its implications. There have always been classes in human social structures. These classes have developed along economic lines, with variations in the methods used to acquire greater economic resources (knowledge, brute power, ruthlessness, etc.). Over the next several decades we have the potential to develop a new social class distinction: that between the connected and the unconnected. As the “network effect” of the Internet expands, those who are not connected stand to lose out on many of the benefits enjoyed by the connected. Training, education, development, and entertainment will all be provided over the Internet. For those not connected, the lost opportunities will be tremendous. We must ensure that this class distinction does not in fact occur, and that everyone shares in the “network effect” equally.

The fifth trend that will occur is the death of market economies as we know them. Market economies developed to bring together groups of willing buyers and sellers in sufficient numbers to conduct business transactions easily and efficiently. Over time, the emphasis on a single “market” shifted to specialized markets based on transactional need: consumer goods markets developed for retail selling; money markets evolved into banking and financial institutions; specialized financial institutions, such as stock and futures exchanges, emerged; and, more recently, business-to-business markets have evolved. All of these markets, of whatever form, developed around centralized physical locations. With the rise of the Internet, markets no longer require physical presence. Witness the success of e-commerce: auction sites, computer and software purchases over the web, and so on. This trend was actually postulated by Faith Popcorn several years ago, when she noticed a tendency toward “cocooning.” She theorized that people wanted more privacy and less social interaction, or at least social interaction only when they chose it. The Internet allows people to cocoon while still interacting when and how they choose.

The final trend that will affect our lives will be the commercial expansion of the Internet. The web has touched our lives in many ways, and it is literally growing up before our eyes. How will we resolve Internet privacy issues? How will companies make money on the web? Are Internet stock valuations realistic and sustainable? What information should be free, and what information should be paid for? How will we compensate people for their intellectual capital if that capital is freely given away? What role should government play in determining Internet policy? How should Internet sales be taxed, and how do tax laws based on the notion of nexus (the physical location of a place of business or agency) apply to an essentially location-less entity? These questions are being asked by countless industry, think-tank, and governmental institutions on a daily basis, and over time they will be resolved.

In Shakespeare’s The Tempest, Miranda, upon seeing the shipwrecked members of the royal party for the first time, declares to her father and the others, “O brave new world, that hath such people in’t.” Shakespeare was profoundly aware of the effect that the discovery of the New World had upon his audience. It was a time of intense excitement – “O brave new world” – but an excitement mixed with fear and uncertainty – “that hath such people in’t.” The landing party in The Tempest was driven aground by a violent storm, and the storm is a symbol of the change that was sweeping through Europe in the 1500s. The Tempest is Shakespeare’s attempt to explain the forces that were at work in creating a New World – the forces of discovery, uncertainty, doubt, and ultimately hope in creating a better world.

We are poised on the brink of a New World. For the first time in several hundred years, we have the ability to make major changes in the way we view the world, human capital, and the sharing of knowledge. O brave new world, that hath such people in’t.