Memristor-Driven Analog Compute Engine Would Use Chaos to Compute Efficiently

With Mott memristors, a system could solve intractable problems using little power

When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and that’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they increase in size.

The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and can scale up to form a big system.

“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.

He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at a rate of 10 trillion operations per second per watt.

(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)
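The arithmetic behind that comparison is straightforward; a quick sanity check in Python, using only the figures quoted above:

```python
# Efficiency figures from the article (approximate, for comparison only).
hopfield_ops_per_joule = 10e12        # 10 trillion operations per second per watt

supercomputer_flops = 93_015e12       # 93,015 trillion FLOPS (June 2017 leader)
supercomputer_watts = 15e6            # 15 megawatts of power draw
supercomputer_ops_per_joule = supercomputer_flops / supercomputer_watts

print(f"supercomputer: {supercomputer_ops_per_joule:.1e} ops/J")   # ~6.2e9
print(f"ratio: {hopfield_ops_per_joule / supercomputer_ops_per_joule:.0f}x")
```

As the article notes, this is not an apples-to-apples comparison, but the gap works out to roughly three orders of magnitude in favor of the simulated memristor network.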

The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on who’s asking). Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance.

The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer-wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.

What’s basically happening is that by controlling voltage and current, the device can be put into a state where tiny, random thermal fluctuations in the few nanometers of NbO2 are amplified enough to alter the way the memristor reacts. Williams and his colleagues note that these fluctuations are only big enough to affect things in memristors of this scale. They never saw it in larger devices.

Once they’d characterized what the memristor was doing and how it was doing it, they simulated it in a circuit to see what it could do. In the simulation, they integrated an array of Mott memristors with another, more common type made of titanium oxide to form a Hopfield network. These networks are particularly good at solving optimization problems. That is, problems where you’re trying to discover the best solution from a number of possibilities.

(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities, without going through any of them twice. It’s a difficult problem because it becomes exponentially more difficult to solve with each city you add.)

You can imagine the solutions to these problems as valleys in a landscape. The best solution is the lowest point in the landscape, and a computer’s efforts to find it are like a ball rolling down the hills. The problem is that the ball can get stuck in a valley that is low (a solution) but not the lowest one (the optimal solution). The advantage of the Mott memristor network is that the chaotic behavior is enough to basically bump the ball out of the less-than-optimal solutions so it can find the best solution.
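The chaos-assisted search is close in spirit to simulated annealing, where injected randomness plays the same bump-the-ball-out-of-the-valley role. A minimal Python sketch on a toy traveling-salesman instance (the city layout, move rule, and cooling schedule here are illustrative inventions, not the paper’s method):

```python
import math
import random

random.seed(0)
# Toy instance: 12 random city coordinates in the unit square.
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
best = tour_length(order)
temp = 1.0
for step in range(20000):
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt move
    delta = tour_length(candidate) - tour_length(order)
    # Noise lets the search hop out of shallow valleys: occasionally accept
    # a *worse* tour, with a probability that shrinks as the system cools.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = candidate
        best = min(best, tour_length(order))
    temp *= 0.9995
```

Early on, when the noise is strong, the search accepts many uphill moves and roams the landscape; as it cools, it settles into a deep valley rather than the first one it happens to fall into.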

“In our case, we’re using chaotic noise to hop out of these barriers,” says HP Labs research scientist Strachan.

Williams envisions these “analog compute engines” one day embedded in systems-on-a-chip to accelerate optimization problems. But there are plenty of steps before that. Among the first is to build the system and investigate how well it scales. They’ll also need to properly benchmark its performance against the best algorithms and hardware.

For Williams, there’s a bigger lesson in the development of these memristors. “Everyone’s trying to reinvent the transistor using a new material,” he notes. “Even if you made a perfect transistor—whatever that is—you’d still not beat scaled CMOS.” Instead scientists and engineers should be looking for new types of computing from these new materials. “It’s important to ask what the material system is doing that’s different than what a transistor does… Rather than make a bad transistor, see if it makes something that would take 100 or 1,000 transistors to replicate.” Williams and his team are hoping their memristor system does just that.


Why Solar Microgrids May Fall Short in Replacing the Caribbean’s Devastated Power Systems

After the destruction inflicted across the Caribbean by hurricanes Harvey, Irma, and Maria, renewable energy advocates are calling for a rethink of the region’s devastated power systems. Rather than simply rebuilding grids that delivered mostly diesel generation via damage-prone overhead power lines, renewables advocates argue that the island grids should leapfrog into the future by interconnecting hundreds or thousands of self-sufficient solar microgrids.

“Puerto Rico will lead the way for the new generation of clean energy infrastructure. The world will follow,” asserted John Berger, CEO of Houston-based solar developer Sunnova Energy, in a tweet before meeting in San Juan with Puerto Rico Governor Ricardo Rosselló this week. Rosselló appears to be on board, inviting Elon Musk via tweet to use Puerto Rico as a “flagship project” to “show the world the power and scalability” of Tesla’s technologies, which include photovoltaic (PV) rooftops and Powerwall battery systems.

Some power system experts, however, say the solar-plus-batteries vision may be oversold. They say that the pressing need to restore power, plus equipment costs and other practical considerations, call for sustained reliance on centralized grids and fossil fuels in the Caribbean. “They need to recover from the storm. Unfortunately I think the quickest way to do that is to go back to how things were before,” says Brad Rockwell, power supply manager for the Kauaʻi Island Utility Cooperative that operates one of the most renewable-heavy grids in the U.S.

Now is a tough time for a debate, given the ongoing power and communications blackouts afflicting many Caribbean islands, including Puerto Rico, the U.S. and British Virgin Islands, Dominica, and St. Martin. As of Thursday 12 October—more than three weeks after Maria’s cyclonic wrecking ball crossed the region—over four-fifths of customers in Puerto Rico and the U.S. Virgin Islands remained without power, according to U.S. Department of Energy status reports.

Puerto Rico lost major transmission lines that dispatched electricity generated at oil, coal, and natural gas-fired power plants on its lightly populated south shore to all corners of the territory. Its outage level actually worsened, from 88.3 to 89.4 percent, earlier this week after a tie line went down near San Juan, but it recovered slightly, to an estimated 83 percent, by yesterday.

What is clear is that several firms are moving fast even as the debate continues, equipping rooftop solar systems with battery storage that enables consumers to operate independently of stricken grids. For example:

  • German storage system manufacturer sonnen launched a PV-plus-battery collaboration with local Aguadilla-based solar developer Pura Energía early this month;
  • Sunnova is crafting storage options for roughly 10,000 customers in Puerto Rico that it has already equipped with PV systems;
  • Tesla says it is sending “hundreds” of its Powerwall battery systems to Puerto Rico and, after reports of price gouging by independent installers, plans to dispatch installers from the mainland to expand its local teams.

Peter Asmus, a microgrids analyst with Navigant Research, says that such solar microgrids will deliver power to solar system owners far faster than grid restoration, which is still months away for many customers. He says microgrids will also make the island systems more resilient in the long run.

Asmus sees the situation as reminiscent of post-war Europe, when devastated European grids left a vacuum that enabled something better. “They built a more advanced grid than we have in the U.S.,” says Asmus. He says the Caribbean has a similar opportunity today: “The infrastructure was devastated so severely. They can start over with a cleaner slate.”

Some suppliers see microgrids actually supplanting some of the region’s largest transmission lines. “The grid in Puerto Rico will never be built back the way it used to be,” wrote John Merritt, applications engineering director for Austin, Texas-based Ideal Power in an email to IEEE Spectrum. Ideal Power’s multi-port power converters enable microgrids to efficiently swap power between their alternating current and direct current components, including PV systems, generators, and storage batteries.

Giving up big transmission lines sounds optimistic to Rockwell at the Kauaʻi Island Utility Cooperative (KIUC). It would, he says, represent a major system overhaul and thus lost time that Puerto Rico’s residents and economy can ill afford. “The people of Puerto Rico are not going to want to withstand any more delays than they have to while people figure out how to rebuild in a different way,” he says.

Rockwell adds that batteries are still a rather costly way to balance variable renewable generation. He speaks from experience. KIUC’s grid is over four-fifths solar-powered during some midday hours. Several utility-scale storage systems help integrate such a high degree of variable power by quickly covering for lost PV generation when clouds pass overhead or by absorbing surplus midday generation and discharging it after the sun sets. But Rockwell says high battery costs mean KIUC still relies heavily on its diesel power plants.

Merritt at Ideal Power acknowledges that the same is true for microgrids. Integrating solar can cut an island microgrid’s fuel consumption by 60 to 70 percent, slashing operating costs and pollution, but he says diesel generators remain “important” assets. “Moving a site from 24/7 diesel-powered microgrid to a 24/7 solar + storage microgrid would be cost prohibitive in most cases,” says Merritt.

There are also questions about PVs’ hardiness. Harvey, Irma, and Maria left many PV systems in shambles. Merritt says that a microgrid for a commercial facility on Saint Croix that Ideal Power participated in assembling before the storms is operating without its six 33-kilowatt solar arrays. While they are out of commission for the next few months, the microgrid is relying solely on its diesel generators, battery, and converters.

Some utility-scale solar plants also took a beating, especially Puerto Rico’s solar array at Humacao. PV panels shattered and flew out of their frames when Maria’s Category-4 winds ripped over the Humacao solar plant, where its French owner Reden Energie was in the process of doubling capacity from 26 to 52 megawatts.

Houston-based microgrid developer Enchanted Rock advocates rugged microgrids supported by natural gas, which is cheaper and cleaner than diesel and may be more reliable than both diesel and solar during heavy weather. “You can build community-type microgrids that have some combination of natural gas generation, solar and storage,” says Enchanted Rock CEO Thomas McAndrew.

Enchanted Rock made a name for itself during Hurricane Harvey when its natural gas-powered microgrids at Houston-area grocery stores and a truck stop turned into hubs for first responders and weary residents. Diesel deliveries were hard to come by for 4-5 days, says McAndrew, but natural gas kept flowing underground throughout the storm.

At present few Caribbean islands have access to natural gas, and even Puerto Rico’s gas infrastructure is limited to one liquefied natural gas (LNG) import terminal that pipes the fuel to two power plants. Before Irma and Maria struck, Rosselló had been working to expand LNG imports so more of the territory’s oil-fired power plants could burn gas.

Enchanted Rock’s McAndrew favors a network to distribute the gas more widely, which he says would be much cheaper than putting power lines underground to protect them from weather. He acknowledges that his proposal is ambitious, but says the outside investors that Puerto Rico will need to attract to support its revival can insist on infrastructure that will survive future storms. As McAndrew puts it: “Whether it’s private or government money, there’s got to be some sense that we might want to do this differently so we don’t just end up rebuilding it every couple of years.”

Three Advances Make Magnetic Tape More Than a Memory

In the age of flash memory and DNA-based data storage, magnetic tape sounds like an anachronism. But the workhorse storage technology is racing along. Scientists at IBM Research say they can now store 201 gigabits per square inch on a special “sputtered” tape made by Sony Storage Media Solutions.

The palm-size cartridge, into which IBM scientists squeezed a kilometer-long ribbon of tape, could hold 330 terabytes of data, or roughly 330 million books’ worth. By comparison, the largest solid-state drive, made by Seagate, is twice as big and can store 60 TB, while the largest hard disk can store only 12 TB. IBM’s best commercial tape cartridge, which began shipping this year, holds 15 TB.

IBM’s first tape drive, introduced in 1952, had an areal density of 1,400 bits per square inch and a capacity of approximately 2.3 megabytes.

IBM sees a growing business opportunity in tape storage, particularly for storing data in the cloud, a niche known as cold storage. Hard disks are reaching the end of their capacity scaling. And though flash might be much zippier, tape is by far the cheapest and most energy-efficient medium for storing large amounts of data you don’t need to access much. Think backups, archives, and recovery, says IBM Research scientist Mark Lantz. “I’m not aware of anything commercial or on the time horizon of the next few years that’s at all competitive with tape,” he says. “Tape has huge potential to keep scaling areal density.”

To store data on tape, an electromagnet called a write transducer magnetizes tiny regions (small crystals called grains) of the tape so that the magnetization field of each region points left or right, to encode bits 1 or 0. Heretofore, IBM has increased tape drive density by shrinking those magnetic grains, as well as the read/write transducers and the distance between the transducers and the tape. “The marginal costs of manufacturing remain about the same, so we reduce cost per gigabyte,” Lantz says.

The staggering new leap in density, however, required the IBM-Sony team to bring together several novel technologies. Here are three key advances that led to the prototype tape system reported in the IEEE Transactions on Magnetics in July.

New Tape Schematics

The surface of conventional tape is painted with a magnetic material. Sony instead used a “sputtering” method to coat the tape with a multilayer magnetic metal film. The sputtered film is thinner and has narrower grains, with magnetization that points up or down relative to the surface. This allows more bits in the same tape area.

Think of each bit as a rectangular magnetic region. On IBM’s latest commercially available tape, each bit measures 1,347 by 50 nanometers. (Hard disk bits are 47 by 13 nm.) In the new demo system, the researchers shrank the data bits to 103 by 31 nm. The drastically narrower bits allow more than 20 times as many data tracks to fit in the same width of tape.
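Those bit dimensions are consistent with the headline density figure; a quick check that 103-by-31-nanometer bits work out to roughly 201 gigabits per square inch:

```python
# Check: do 103 nm x 31 nm bits give ~201 gigabits per square inch?
NM_PER_INCH = 2.54e7                      # nanometers in one inch

bit_area_nm2 = 103 * 31                   # demo-system bit footprint
sq_inch_in_nm2 = NM_PER_INCH ** 2
bits_per_sq_inch = sq_inch_in_nm2 / bit_area_nm2

print(f"{bits_per_sq_inch / 1e9:.0f} Gb/in^2")   # ~202, matching the reported 201
```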

To accommodate such tiny data elements, the IBM team decreased the width of the tape reader to 48 nm and added a thin layer of a highly magnetized material inside the writer, yielding a stronger, sharper magnetic field. Sony also added an ultrathin lubricant layer on the tape surface because the thinner tape comes in closer contact with the read/write heads, causing more friction.

More Precise Servo Control

Much as on magnetic disks, every tape has long, continuous servo tracks running down its length. These special magnetization patterns, which look like tire tracks, are recorded on the tape during the manufacturing process. Servo tracks help read/write heads maintain precise positioning relative to the tape.

The IBM team made the servo pattern shorter, narrower, and more angled in order to match the smaller magnetic grains of the tape media. They also equipped the system with two new signal-processing algorithms. One compares the signal from the servo pattern with a reference pattern to more accurately measure position. The other measures the difference between the desired track position and the actual position of the read/write head, and then controls an actuator to fix that error.

Together, these advances allow the read/write head to follow a data track to within 6.5-nm accuracy. This happens even as the tape flies by at speeds as high as 4 meters per second (a feat akin to flying an airplane precisely along a yellow line in the road).
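Conceptually, the second algorithm is classic feedback control: measure the position error, then command the actuator to cancel it. A toy simulation in Python (the PI gains and the lateral-disturbance model are invented for illustration, not IBM’s actual servo design):

```python
import math

# Toy track-following loop: the controller reads a position error signal
# (PES) and drives the head actuator to cancel it. Gains and disturbance
# are invented for illustration.
def track_follow(disturbance, kp=0.4, ki=0.1, steps=500):
    head_pos, integral = 0.0, 0.0
    errors = []
    for t in range(steps):
        target = disturbance(t)                   # lateral tape motion to follow
        error = target - head_pos                 # the PES
        integral += error
        head_pos += kp * error + ki * integral    # PI actuator command
        errors.append(abs(error))
    return errors

# Slow 50-nm lateral wander of the tape as it streams past the head.
errors = track_follow(lambda t: 50 * math.sin(t / 40))
print(f"worst late-run |PES|: {max(errors[-100:]):.2f} nm")
```

With integral action in the loop, the simulated head locks onto the slowly wandering track and the residual error settles to a small fraction of the disturbance amplitude.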

Advanced Noise Detection and Error Correction

As the bits get smaller, reading errors go up. “If we squeeze bits closer, the magnetic fields of neighboring bits start to interfere with the ones we’re trying to read,” Lantz says. So the difference between a lower-value 0 signal and a higher-value 1 might be harder to make out. To make up for this, magnetic storage technologies use algorithms that, instead of reading a single bit, take into account signals from a series of bits and decide on the most likely pattern of data that would create the signal. The IBM researchers came up with a new and improved maximum-likelihood sequence-detection algorithm.
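The standard tool for this is the Viterbi algorithm: instead of thresholding each sample in isolation, it scores whole candidate bit sequences against a model of how neighboring bits blur together. A toy Python detector for an invented two-tap channel (the channel model and noise level are illustrative, not the actual drive’s):

```python
import random

random.seed(1)

# Invented read channel: each sample mixes the current bit with half of
# the previous bit (intersymbol interference), plus Gaussian noise.
def read_signal(bits, noise=0.2):
    prev, out = 0, []
    for b in bits:
        out.append(b + 0.5 * prev + random.gauss(0, noise))
        prev = b
    return out

def viterbi_detect(samples):
    """Find the bit sequence most likely to have produced the samples."""
    # State = previous bit; path metric = accumulated squared error.
    metrics = {0: 0.0, 1: float("inf")}   # channel starts with prev = 0
    paths = {0: [], 1: []}
    for y in samples:
        new_metrics, new_paths = {}, {}
        for b in (0, 1):                  # hypothesized current bit
            cost, pbest = min(
                (metrics[p] + (y - (b + 0.5 * p)) ** 2, p) for p in (0, 1))
            new_metrics[b], new_paths[b] = cost, paths[pbest] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[min(metrics, key=metrics.get)]

bits = [random.randint(0, 1) for _ in range(200)]
detected = viterbi_detect(read_signal(bits))
bit_errors = sum(b != d for b, d in zip(bits, detected))
```

Because the detector weighs each sample together with its neighbors, it recovers the data even when the individual 0 and 1 signal levels overlap.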

They also improved upon the storage technology’s error-correction coding, which is used to slash bit-reading error rates. In the new system, the raw data goes through two decoders. The first looks for errors along rows, while the stronger second one checks columns. The data is run through these decoders twice.
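This row-and-column arrangement is a product code. A toy version built from simple parity bits shows the geometry: a single flipped bit trips exactly one row check and one column check, and their intersection pinpoints it (real drives use far stronger codes than parity; this only illustrates the structure):

```python
# Toy row/column (product-code style) error correction using parity bits.
def add_parity(rows):
    """Append a parity bit to each row, then a parity row over columns."""
    with_row = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row)]
    return with_row + [parity_row]

def correct_single_error(grid):
    """Fix one flipped bit at the intersection of the failing checks."""
    bad_rows = [i for i, r in enumerate(grid) if sum(r) % 2 != 0]
    bad_cols = [j for j, c in enumerate(zip(*grid)) if sum(c) % 2 != 0]
    if bad_rows and bad_cols:
        i, j = bad_rows[0], bad_cols[0]
        grid[i][j] ^= 1                   # flip the located bit back
    return grid

data = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
coded = add_parity([row[:] for row in data])
coded[1][2] ^= 1                          # simulate a single read error
repaired = correct_single_error(coded)
restored = [row[:-1] for row in repaired[:-1]]
assert restored == data
```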

Microsoft just ended support for Office 2007 and Outlook 2007 | ZDNet

Microsoft is urging customers still on Outlook 2007 and Office 2007 to upgrade as each of the products ran out of extended support on Tuesday.

That means no more security updates, feature updates, support or technical notes for the products, which Microsoft has supported for the past decade.

Microsoft wants customers on Office 2007 to plan to migrate to Office 365 in the cloud or to upgrade to Office 2016.

Office 2007 introduced Microsoft’s “ribbon” interface that brought a series of tabbed toolbars with each ribbon containing related buttons.

For customers already using Office 365 who still use Outlook 2007, it will be important to upgrade by the end of October, after which the product won’t allow users to access Exchange Online mailboxes through the Office 365 portal.

“Customers who use Office 365 will have noted that there is a change to the supported client connectivity methods. Outlook Anywhere is being replaced with MAPI/HTTP. Outlook 2007 does not support MAPI/HTTP, and as such will be unable to connect,” Microsoft highlights in a send-off note for the email client.

Come October 31, Microsoft will drop support for the RPC over HTTP protocol, also known as Outlook Anywhere, for accessing mail data from Exchange Online. The new protocol, MAPI over HTTP, is sturdier and supports multi-factor authentication for Office 365, according to Microsoft. Microsoft didn’t backport the new protocol to Outlook 2007 because the client would already be past its extended-support date by the time Outlook Anywhere was cut off.

Microsoft has published a full list of Office 2007 products and their exact cutoff dates, along with a separate list for Outlook 2007.

Unlike in previous years, Microsoft is not offering enterprise customers extended support for Office 2007 through its custom support contracts. The same goes for its other products, including Exchange Server, Office Suites, SharePoint Server, Office Communications Server, Lync Server, Skype for Business Server, Project Server, and Visio.

Microsoft said demand for custom support has declined with greater adoption of Office 365.

USA and Japan’s giant robot battle was a slow, brilliant mess

The oft-delayed giant robot fight has finally taken place. On Tuesday, Team USA’s mechs slugged it out with Japan’s Kuratas in an abandoned steel mill for the world to watch. There could only be one victor, and it proved to be the red, white, and blue. Yes, the MegaBots team representing America came out on top, but not before three grueling rounds of robopocalypse.

Those who tuned into Twitch to view the action saw Team USA’s Iron Glory get knocked down by Japan’s Kuratas bot straight out of the gate. Its paintball cannon was clearly no match for its 13-foot rival’s half-ton fist. In the second round, the MegaBots pilots came back with the newer Eagle Prime machine, itself decked out with a mechanical claw and Gatling gun. But they still struggled to land a deadly blow, instead getting stuck to their foe—with Kuratas’ drone sidekick making life that much harder. Then, in the final round, things got grisly. Eagle Prime whipped out a chainsaw to dismember Suidobashi Heavy Industry’s juggernaut and end the carnage.

Okay, so Team USA had the unfair advantage of using two bots, and the entire event may have been as choreographed as a WWE match, but it was strangely watchable regardless.

With a win under its belt, the MegaBots team now wants to start a full-blown giant robots sports league. And, there’s at least one contender waiting in the wings.

Carnegie Mellon Solves 12-Year-Old DARPA Grand Challenge Mystery

Carnegie Mellon’s Red Team went into the 2005 DARPA Grand Challenge as the favorite to win. They’d led the pack in the 2004 event, and had been successfully running their two heavily modified autonomous Humvees, H1ghlander and Sandstorm, on mock races across the desert for weeks without any problems. When H1ghlander set out on the 212 km (132 mi) off-road course at dawn on 8 October 2005, it led the pack, and gradually pulled away from Stanford’s robot, Stanley.

About two hours into the race, however, H1ghlander’s engine began to falter, causing it to struggle in climbs and never reach its top speed. Nobody could tell what the issue was, but it slowed the vehicle down enough to cost it more than 40 minutes of race time. Stanley passed H1ghlander and went on to win the race by just 11 minutes. Even after the event, CMU wasn’t able to figure out exactly what happened. But last weekend, at an event celebrating the 10th anniversary of the DARPA Urban Challenge (which CMU won handily with their autonomous Chevy Tahoe BOSS), they accidentally stumbled onto what went wrong.

The point in the race where H1ghlander started to falter was captured in a fantastic NOVA documentary on the DARPA Grand Challenge, which is worth watching in its entirety if you have time.

Even the DARPA Grand Challenge winner, Stanford University, seemed a bit surprised by how things turned out. “It was a complete act of randomness that Stanley actually won,” Stanford team lead Sebastian Thrun later said. “It was really a failure of Carnegie Mellon’s engine that made us win, no more and no less than that.”

Here are some excerpts from Red Team’s race logs recorded immediately following the Grand Challenge:

October 11: The root cause that capped H1ghlander’s speed and crippled its climbs is not yet known. Requested speeds above 20 mph were under-achieved, even on the long, straight, level roads. H1ghlander didn’t even reach intended speeds going downhill. H1ghlander apparently stopped, rolled backwards, then re-climbed a few times. Weak climbing and stopping are not great practices for winning races. The capped speeds and weak climbs cost H1ghlander over 40 minutes of schedule time. The root cause is still a mystery.

October 12: H1ghlander’s engine was observed to be shaky immediately following the race. The first indication of possible engine trouble was observed when driving H1ghlander from the finish line to the inspection area with a human at the wheel. The engine was running very rough and almost died repeatedly in just that 50 yards of driving with a human foot on the accelerator pedal.

This engine problem is unlike any one that we have seen in the past, as engine performance is severely degraded at and anywhere near idle. Data indicated no limp home mode, no safety mode, and no low-torque mode. Detailed fuel, oil and transmission samples will be analyzed. We do not yet know the root cause that slowed H1ghlander’s driving on race day.

It turned out that the fuel was okay. The oil and transmission fluid were also okay. The electrical system was fine too. With the DARPA Urban Challenge up next and a completely new vehicle under development for that, the CMU team moved on.

Last week, CMU celebrated the 10th anniversary of BOSS’ DARPA Urban Challenge win in 2007. BOSS, Sandstorm, and H1ghlander were all pulled out of storage at CMU and tidied up a bit to be put on display. As H1ghlander’s engine compartment was being cleaned with the engine running, Spencer Spiker (CMU’s operations team leader for the DARPA challenges) leaned against the engine with his knee, and it started to die. The little box he was leaning against turned out to be the culprit, as CMU Red Team leader William “Red” Whittaker later showed Clint Kelly (who directed DARPA’s research programs in robotics and autonomous systems in the 1980s).

The box is a filter that sits between the engine control module and the fuel injectors, one of only two electronic parts in the engine of a 1986 Hummer. Spencer discovered that just touching the filter would cause the engine to lose power, and if you actually pushed on it, the engine died completely. But from a cold start, if the filter wasn’t being touched, the engine would run fine. There was nothing wrong with H1ghlander’s sensors or software: this filter cost H1ghlander 40 minutes of race time, and the win. “How about that, buddy!” Red said to Chris Urmson (who worked on perception for Red Team during the DARPA challenges, and ran Google’s self-driving car program for seven years before starting his own autonomous vehicle company) at the CMU event, showing him the filter. “You’re off the hook!”

As to what may have caused this hardware failure in the first place, many team members at the CMU event suggested that it may have happened just a few weeks before the Grand Challenge, on September 19, when H1ghlander got into a bit of an accident after a 140-mile autonomous test:

Here’s an excerpt from a blog post by Vanessa Hodge, who worked on vehicle navigation and was following H1ghlander in a chase car that night:

H1ghlander was driving autonomously back to the entrance road so we could drive it back to the shop to pamper it before the race. We came to a part of the trail where there was a swamp on the left and a boulder-ridden mountain side on the right, with a road width a little bit larger than the vehicle. H1ghlander kicked up some thick dust and I slowed down to a stop to let the dust settle before catching up. The team in the chase car watched our vehicle display which monitors its actions while the dust settled. Problems appeared in the display, and one team member immediately hit the emergency pause button, but it was too late. In the second we lost visual, H1ghlander tracked off to the right of the path up the slope, slid on its side and flipped entirely to the other side.

By September 22, H1ghlander was back up and running, “hot and strong.” But, that filter may have taken some damage that was difficult or impossible to diagnose, and it ended up failing at the worst possible time.

While it’s impossible to know how a DARPA Grand Challenge win by H1ghlander might have changed autonomous car history, the people I spoke with at CMU generally seemed to feel like everything worked out for the best. BOSS had a convincing win at the DARPA Urban Challenge in 2007, and Stanley’s performance at the Grand Challenge helped to solidify Stanford’s place in the field. Roboticists from both CMU and Stanford helped to form the core of Google’s self-driving car program in California, and today, Pittsburgh is one of the places where both established companies and startups come to do self-driving research, development, and testing. There are few lingering hard feelings about what happened, and everyone involved has long since moved on to bigger and better things. But all the same, it’s nice that at last, this final mystery has been solved.

Algorithms Have Already Gone Rogue | WIRED

For more than two decades, Tim O’Reilly has been the conscience of the tech industry. Originally a publisher of technical manuals, he was among the first to perceive both the societal and commercial value of the internet—and as he transformed his business, he drew upon his education in the classics to apply a moral yardstick to what was happening in tech. He has been a champion of open-source, open-government, and, well, just about everything else that begins with “open.”

His new book WTF: What’s the Future and Why It’s Up to Us seizes on this singular moment in history, in which just about everything makes us say “WTF?”, invoking a word that isn’t “future.” Ever the optimist, O’Reilly celebrates technology’s ability to create magic—but he doesn’t shy away from its dangerous consequences. I got to know Tim when writing a profile of him in 2005, and have never been bored by a conversation. This one touches on the effects of Uber’s behavior and misbehavior, why capitalism is like a rogue AI, and whether Jeff Bezos might be worth voting for in the next election.

Steven Levy: Your book appears at a time when many people who once had good feelings towards technology are now questioning it. Would you defend it?

Tim O’Reilly: I like the title WTF because it can be an expression of amazement and delight or an expression of amazement and dismay. Tech is bringing us both. It has enhanced productivity and made us all richer. I don’t think I would like to roll back the clock.

Not that rolling it back is an option.

No, but it’s important for us to realize that technology is not just about efficiency. It’s about taking these new capabilities that we have and doing more with them. When you do that, you actually increase employment. As people came off the farm, we didn’t end up with a vast leisure class while two percent of people were feeding slop to animals. We ended up creating new kinds of employment, and we used that productivity actually to enhance the quality and the quantity of food. Why should it be different in this era of cognitive enhancement? Uber and Lyft are telling us that things we used to think of as being in the purely digital realm, in the realm of media, whatever, are coming to the real world. So that’s the first wake-up call for society. Secondly, we’re seeing a new kind of interaction between people and algorithmic systems. Third, they represent a new kind of marketplace based on platforms [in this case, they exist because of the platform of smartphones—and then they can become platforms of their own, as new services, like food delivery, are added in addition to transit]. This marketplace works because people are being augmented with new cognitive superpowers. For example, because of GPS and mapping apps, Uber and Lyft drivers don’t need a lot of training.

Agreed. But when the curtain rolls back we see that those superpowers have consequences: Those algorithms have bias built in.

That’s absolutely right. But I’m optimistic because we’re having a conversation about biased algorithms. We had plenty of bias before but we couldn’t see it. We can’t see, for example, that the algorithms that manage the workers at McDonald’s or The Gap are optimized toward not giving people full-time work so they don’t have to pay benefits. All that was invisible. It wasn’t until we really started seeing the tech-infused algorithms that people started being critical.

In WTF you talk about a specific out-of-control algorithm: the capitalist impulse to maximize profits regardless of societal consequences. The way you describe it reminds me of Nick Bostrom’s scenario of an AI machine devoted to making paper clips—because that’s its sole mission, it winds up eating up all the materials in the world and even killing those who would turn it off. Corporations whose sole justification is shareholder value seem to be working on a similarly destructive algorithm.

Yes, financial markets are the first rogue AI.

How do you roll back that particular AI?

I try to show [earlier cases of] how humans tried to manage their algorithms, by talking about [how Google improved] search quality. Google had some pretty bad patches where the spammers really had the upper hand, and they addressed it.

And that can be done to fix capitalism’s rogue AI?

Somebody planted the idea that shareholder value was the right algorithm, the right thing to be optimizing for. But this wasn’t the way companies acted before. We can plant a different idea. That’s what this political process is about.

Speaking of politics, it seems like another runaway algorithm has led us to a government controlled by people who don’t represent majority views.

I look at it through the long arc of history. You look at the long slow decline of the Roman Empire and see so many analogies—the turning away from literacy and science, the outsourcing of core functions of government to mercenaries effectively. We could go through some real shit before we turn around. We might not turn around at all. But I take hope from something that Tim Urban in Wait But Why calls “the human colossus.” He has this fabulous description of how Elon Musk moves this human colossus in a new direction—to show that it’s possible to go into space, to show that it’s possible to build a brain-machine interface—and then everybody else will follow along. The human colossus I’m most heartened by is the post-World War II period. We learned a lesson from the incredible convulsions after World War I where there was vast dislocation, as we punished the losers of the war. So after World War II they rebuilt Europe, and they invested in the returning veterans with the GI Bill.
As we learn from tech, though, algorithms need continual improvement. You don’t just set them in motion and leave them forever. The strategies put in place after World War II that worked for this period of 30 years have stopped working so well, so we came up with something else [which happened to create income inequality]. There’s a sense that things going wrong will lead to new strategies. And now that Trump has broken the Overton Window—

What’s that?

It’s this idea [named for the late think tank leader Joseph Overton] that there’s a certain set of things that are considered acceptable in public policy debate, and you just can’t go outside that window. And Trump has just done everything unthinkable. Because all bets are off, we are not necessarily going back to the old, tired solutions. I think it’s possible that we’ll shrug off this madness, and we will come back to saying we really have to invest in people, we really have to build a better economy for everyone. In China, they’re already doing that. China has recognized that its vast population is a possible powder keg and it has to take care of its people. That’s something we have not done. We’ve just been pushing down the ordinary people. China is also being more aggressive than any other country in rising to the challenge of climate change. So there are two possibilities—we’re going to wake up and start acting the same way, or China will lead the world.

Reading your book I think I know who you’d like for our next president: Jeff Bezos. The book is full of Bezos love.

Well. Jeff and Elon [Musk] are probably the two entrepreneurs I admire most.

You can think of the book as an apology to Jeff. As a publisher, I originally bought the usual story, that Amazon would go the way of Wal-Mart—the more dominant it got, the more it would extract value for itself, squeezing down its suppliers. Jeff is a ruthless competitor, no question, but while Amazon has done a chunk of that, it has spent so much time trying to do more. I’m not sure that Jeff would make a great president, but he might.

You’d vote for him, wouldn’t you?

It would depend who he was running against but, yeah, I probably would.

You also praise Uber in your book. Do you think it’s possible to distinguish between the value of its service and the ethics of the company?

Uber is a good metaphor for what’s right and wrong in tech. Here we have this amazing new technology, which is transforming an industry and putting more people to work than worked in that industry before, creating great consumer surplus, and yet it has ridden roughshod over cities, and exploited drivers. It’s interesting that Lyft, which has been both more cooperative in general and better to drivers, is gaining share. That indicates there’s a competitive advantage in doing it right, and you can only go so far being an ass.

Let’s finish by talking about AI. You seem a firm believer that it will be a boon.

AI itself will certainly not take away jobs. I recently saw a wonderful slide from Joanna Bryson, a professor from the University of Bath. It referred to human prehistory and the text said, “12 thousand years of AI,” because everything in technology is artificial intelligence. What we now call AI is just the next stage of us weaving our intelligence together into a greater whole. If you think about the internet as weaving all of us together, transmitting ideas, in some sense an AI might be the equivalent of a multi-cellular being and we’re its microbiome, as opposed to the idea that an AI will be like the golem or the Frankenstein. If that’s the case, the systems we are building today, like Google and Facebook and financial markets, are really more important than the fake ethics of worrying about some far future AI. We tend to be afraid of new technology and we tend to demonize it, but to me, you have to use it as an opportunity for introspection. Our fears ultimately should be of ourselves and other people.

The Coming Software Apocalypse – The Atlantic


There were six hours during the night of April 10, 2014, when the entire population of Washington State had no 911 service. People who called for help got a busy signal. One Seattle woman dialed 911 at least 37 times while a stranger was trying to break into her house. When he finally crawled into her living room through a window, she picked up a kitchen knife. The man fled.

The 911 outage, at the time the largest ever reported, was traced to software running on a server in Englewood, Colorado. Operated by a systems provider named Intrado, the server kept a running counter of how many calls it had routed to 911 dispatchers around the country. Intrado programmers had set a threshold for how high the counter could go. They picked a number in the millions.

Shortly before midnight on April 10, the counter exceeded that number, resulting in chaos. Because the counter was used to generate a unique identifier for each call, new calls were rejected. And because the programmers hadn’t anticipated the problem, they hadn’t created alarms to call attention to it. Nobody knew what was happening. Dispatch centers in Washington, California, Florida, the Carolinas, and Minnesota, serving 11 million Americans, struggled to make sense of reports that callers were getting busy signals. It took until morning to realize that Intrado’s software in Englewood was responsible, and that the fix was to change a single number.
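The failure mode is easy to reconstruct in miniature. The sketch below is a hypothetical illustration—the class, names, and threshold value are invented, not Intrado’s actual code—but it shows the shape of the bug: a counter that doubles as a call-ID generator hits a hard-coded ceiling, and with no alarm wired to that condition, every subsequent call silently fails.

```python
# Hypothetical reconstruction of an Intrado-style failure (all names and
# values invented): a shared counter hands out unique call IDs until a
# hard-coded threshold is exceeded, after which new calls are rejected.

THRESHOLD = 40_000_000  # illustrative cap; the real one was "in the millions"

class CallRouter:
    def __init__(self, start=0):
        self.counter = start

    def route_call(self):
        """Return a unique call ID, or None if the counter is exhausted."""
        if self.counter >= THRESHOLD:
            # No alarm, no log, no failover -- the call just fails.
            return None
        self.counter += 1
        return self.counter

# Normal operation: every call gets an ID.
router = CallRouter(start=THRESHOLD - 2)
assert router.route_call() is not None
assert router.route_call() is not None
# The ceiling is crossed: from here on, all callers get a busy signal.
assert router.route_call() is None
```

The code does exactly what it was told to do, which is the article’s point: the defect lives in the requirement (a finite cap with no alerting), not in any line that misbehaves.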

Not long ago, emergency calls were handled locally. Outages were small and easily diagnosed and fixed. The rise of cellphones and the promise of new capabilities—what if you could text 911? or send videos to the dispatcher?—drove the development of a more complex system that relied on the internet. For the first time, there could be such a thing as a national 911 outage. There have now been four in as many years.

It’s been said that software is “eating the world.” More and more, critical systems that were once controlled mechanically, or by people, are coming to depend on code. This was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down again, this time because a different router failed. The simultaneous failure of so many software systems smelled at first of a coordinated cyberattack. Almost more frightening was the realization, late in the day, that it was just a coincidence.

“When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

Software is different. Just by editing the text in a file somewhere, the same hunk of silicon can become an autopilot or an inventory-control system. This flexibility is software’s miracle, and its curse. Because it can be changed cheaply, software is constantly changed; and because it’s unmoored from anything physical—a program that is a thousand times more complex than another takes up the same actual space—it tends to grow without bound. “The problem,” Leveson wrote in a book, “is that we are attempting to build systems that are beyond our ability to intellectually manage.”

Our standard framework for thinking about engineering failures—reflected, for instance, in regulations for medical devices—was developed shortly after World War II, before the advent of software, for electromechanical systems. The idea was that you make something reliable by making its parts reliable (say, you build your engine to withstand 40,000 takeoff-and-landing cycles) and by planning for the breakdown of those parts (you have two engines). But software doesn’t break. Intrado’s faulty threshold is not like the faulty rivet that leads to the crash of an airliner. The software did exactly what it was told to do. In fact it did it perfectly. The reason it failed is that it was told to do the wrong thing. Software failures are failures of understanding, and of imagination. Intrado actually had a backup router, which, had it been switched to automatically, would have restored 911 service almost immediately. But, as described in a report to the FCC, “the situation occurred at a point in the application logic that was not designed to perform any automated corrective actions.”

This is the trouble with making things out of code, as opposed to something physical. “The complexity,” as Leveson puts it, “is invisible to the eye.”

The attempts now underway to change how we make software all seem to start with the same premise: Code is too hard to think about. Before trying to understand the attempts themselves, then, it’s worth understanding why this might be: what it is about code that makes it so foreign to the mind, and so unlike anything that came before it.

Technological progress used to change the way the world looked—you could watch the roads getting paved; you could see the skylines rise. Today you can hardly tell when something is remade, because so often it is remade by code. When you press your foot down on your car’s accelerator, for instance, you’re no longer controlling anything directly; there’s no mechanical link from the pedal to the throttle. Instead, you’re issuing a command to a piece of software that decides how much air to give the engine. The car is a computer you can sit inside of. The steering wheel and pedals might as well be keyboard keys.

Like everything else, the car has been computerized to enable new features. When a program is in charge of the throttle and brakes, it can slow you down when you’re too close to another car, or precisely control the fuel injection to help you save on gas. When it controls the steering, it can keep you in your lane as you start to drift, or guide you into a parking space. You couldn’t build these features without code. If you tried, a car might weigh 40,000 pounds, an immovable mass of clockwork.

Software has enabled us to make the most intricate machines that have ever existed. And yet we have hardly noticed, because all of that complexity is packed into tiny silicon chips as millions and millions of lines of code. But just because we can’t see the complexity doesn’t mean that it has gone away.

The programmer, the renowned Dutch computer scientist Edsger Dijkstra wrote in 1988, “has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before.” Dijkstra meant this as a warning. As programmers eagerly poured software into critical systems, they became, more and more, the linchpins of the built world—and Dijkstra thought they had perhaps overestimated themselves.

What made programming so difficult was that it required you to think like a computer. The strangeness of it was in some sense more vivid in the early days of computing, when code took the form of literal ones and zeros. Anyone looking over a programmer’s shoulder as they pored over line after line like “100001010011” and “000010011110” would have seen just how alienated the programmer was from the actual problems they were trying to solve; it would have been impossible to tell whether they were trying to calculate artillery trajectories or simulate a game of tic-tac-toe. The introduction of programming languages like Fortran and C, which resemble English, and tools, known as “integrated development environments,” or IDEs, that help correct simple mistakes (like Microsoft Word’s grammar checker but for code), obscured, though did little to actually change, this basic alienation—the fact that the programmer didn’t work on a problem directly, but rather spent their days writing out instructions for a machine.

“The problem is that software engineers don’t understand the problem they’re trying to solve, and don’t care to,” says Leveson, the MIT software-safety expert. The reason is that they’re too wrapped up in getting their code to work. “Software engineers like to provide all kinds of tools and stuff for coding errors,” she says, referring to IDEs. “The serious problems that have happened with software have to do with requirements, not coding errors.” When you’re writing code that controls a car’s throttle, for instance, what’s important is the rules about when and how and by how much to open it. But these systems have become so complicated that hardly anyone can keep them straight in their head. “There’s 100 million lines of code in cars now,” Leveson says. “You just cannot anticipate all these things.”

In September 2007, Jean Bookout was driving on the highway with her best friend in a Toyota Camry when the accelerator seemed to get stuck. When she took her foot off the pedal, the car didn’t slow down. She tried the brakes but they seemed to have lost their power. As she swerved toward an off-ramp going 50 miles per hour, she pulled the emergency brake. The car left a skid mark 150 feet long before running into an embankment by the side of the road. The passenger was killed. Bookout woke up in a hospital a month later.

The incident was one of many in a nearly decade-long investigation into claims of so-called unintended acceleration in Toyota cars. Toyota blamed the incidents on poorly designed floor mats, “sticky” pedals, and driver error, but outsiders suspected that faulty software might be responsible. The National Highway Traffic Safety Administration enlisted software experts from NASA to perform an intensive review of Toyota’s code. After nearly 10 months, the NASA team hadn’t found evidence that software was the cause—but said they couldn’t prove it wasn’t.

It was during litigation of the Bookout accident that someone finally found a convincing connection. Michael Barr, an expert witness for the plaintiff, had a team of software experts spend 18 months with the Toyota code, picking up where NASA left off. Barr described what they found as “spaghetti code,” programmer lingo for software that has become a tangled mess. Code turns to spaghetti when it accretes over many years, with feature after feature piling on top of, and being woven around, what’s already there; eventually the code becomes impossible to follow, let alone to test exhaustively for flaws.

Using the same model as the Camry involved in the accident, Barr’s team demonstrated that there were actually more than 10 million ways for the onboard computer to cause unintended acceleration. They showed that as little as a single bit flip—a one in the computer’s memory becoming a zero or vice versa—could make a car run out of control. The fail-safe code that Toyota had put in place wasn’t enough to stop it. “You have software watching the software,” Barr testified. “If the software malfunctions and the same program or same app that is crashed is supposed to save the day, it can’t save the day because it is not working.”
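Barr’s bit-flip finding can be illustrated with a toy example. Everything below is invented for illustration (Toyota’s actual code is not public in this form): flipping a single bit of an integer is one XOR operation, and applied to a value that encodes a command, it can change that command drastically.

```python
# Toy illustration of a single-event bit flip (names and values are mine,
# not Toyota's): one flipped bit in a stored command changes it wildly.

def flip_bit(value: int, bit: int) -> int:
    """Flip one bit of an integer, as a memory fault might."""
    return value ^ (1 << bit)

commanded = 0b0000_1010          # a throttle command of 10 units
corrupted = flip_bit(commanded, 6)  # one bit in memory flips...
assert corrupted == 74           # ...and the command is now 74 units
```

The deeper problem Barr described follows from this: if the process that holds the corrupted state is also the process running the fail-safe, the watchdog can die along with the task it was supposed to watch.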


Barr’s testimony made the case for the plaintiff, resulting in $3 million in damages for Bookout and her friend’s family. According to The New York Times, it was the first of many similar cases against Toyota to bring to trial problems with the electronic throttle-control system, and the first time Toyota was found responsible by a jury for an accident involving unintended acceleration. The parties decided to settle the case before punitive damages could be awarded. In all, Toyota recalled more than 9 million cars, and paid nearly $3 billion in settlements and fines related to unintended acceleration.

There will be more bad days for software. It’s important that we get better at making it, because if we don’t, and as software becomes more sophisticated and connected—as it takes control of more critical functions—those days could get worse.

The problem is that programmers are having a hard time keeping up with their own creations. Since the 1980s, the way programmers work and the tools they use have changed remarkably little. There is a small but growing chorus that worries the status quo is unsustainable. “Even very good programmers are struggling to make sense of the systems that they are working with,” says Chris Granger, a software developer who worked as a lead at Microsoft on Visual Studio, an IDE that costs $1,199 a year and is used by nearly a third of all professional programmers. He told me that while he was at Microsoft, he arranged an end-to-end study of Visual Studio, the only one that had ever been done. For a month and a half, he watched behind a one-way mirror as people wrote code. “How do they use tools? How do they think?” he said. “How do they sit at the computer, do they touch the mouse, do they not touch the mouse? All these things that we have dogma around that we haven’t actually tested empirically.”

The findings surprised him. “Visual Studio is one of the single largest pieces of software in the world,” he said. “It’s over 55 million lines of code. And one of the things that I found out in this study is more than 98 percent of it is completely irrelevant. All this work had been put into this thing, but it missed the fundamental problems that people faced. And the biggest one that I took away from it was that basically people are playing computer inside their head.” Programmers were like chess players trying to play with a blindfold on—so much of their mental energy is spent just trying to picture where the pieces are that there’s hardly any left over to think about the game itself.

John Resig had been noticing the same thing among his students. Resig is a celebrated programmer of JavaScript—software he wrote powers over half of all websites—and a tech lead at the online-education site Khan Academy. In early 2012, he had been struggling with the site’s computer-science curriculum. Why was it so hard to learn to program? The essential problem seemed to be that code was so abstract. Writing software was not like making a bridge out of popsicle sticks, where you could see the sticks and touch the glue. To “make” a program, you typed words. When you wanted to change the behavior of the program, be it a game, or a website, or a simulation of physics, what you actually changed was text. So the students who did well—in fact the only ones who survived at all—were those who could step through that text one instruction at a time in their head, thinking the way a computer would, trying to keep track of every intermediate calculation. Resig, like Granger, started to wonder if it had to be that way. Computers had doubled in power every 18 months for the last 40 years. Why hadn’t programming changed?


The fact that the two of them were thinking about the same problem in the same terms, at the same time, was not a coincidence. They had both just seen the same remarkable talk, given to a group of software-engineering students in a Montreal hotel by a computer researcher named Bret Victor. The talk, which went viral when it was posted online in February 2012, seemed to be making two bold claims. The first was that the way we make software is fundamentally broken. The second was that Victor knew how to fix it.

Bret Victor does not like to write code. “It sounds weird,” he says. “When I want to make a thing, especially when I want to create something in software, there’s this initial layer of disgust that I have to push through, where I’m not manipulating the thing that I want to make, I’m writing a bunch of text into a text editor.”

“There’s a pretty strong conviction that that’s the wrong way of doing things.”

Victor has the mien of David Foster Wallace, with a lightning intelligence that lingers beneath a patina of aw-shucks shyness. He is 40 years old, with traces of gray and a thin, undeliberate beard. His voice is gentle, mournful almost, but he wants to share what’s in his head, and when he gets on a roll he’ll seem to skip syllables, as though outrunning his own vocal machinery.

Though he runs a lab that studies the future of computing, he seems less interested in technology per se than in the minds of the people who use it. Like any good toolmaker, he has a way of looking at the world that is equal parts technical and humane. He graduated top of his class at the California Institute of Technology for electrical engineering, and then went on, after grad school at the University of California, Berkeley, to work at a company that develops music synthesizers. It was a problem perfectly matched to his dual personality: He could spend as much time thinking about the way a performer makes music with a keyboard—the way it becomes an extension of their hands—as he could thinking about the mathematics of digital signal processing.

By the time he gave the talk that made his name, the one that Resig and Granger saw in early 2012, Victor had finally landed upon the principle that seemed to thread through all of his work. (He actually called the talk “Inventing on Principle.”) The principle was this: “Creators need an immediate connection to what they’re creating.” The problem with programming was that it violated the principle. That’s why software systems were so hard to think about, and so rife with bugs: The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.

“Our current conception of what a computer program is,” he said, is “derived straight from Fortran and ALGOL in the late ’50s. Those languages were designed for punch cards.” That code now takes the form of letters on a screen in a language like C or Java (derivatives of Fortran and ALGOL), instead of a stack of cards with holes in it, doesn’t make it any less dead, any less indirect.

There is an analogy to word processing. It used to be that all you could see in a program for writing documents was the text itself, and to change the layout or font or margins, you had to write special “control codes,” or commands that would tell the computer that, for instance, “this part of the text should be in italics.” The trouble was that you couldn’t see the effect of those codes until you printed the document. It was hard to predict what you were going to get. You had to imagine how the codes were going to be interpreted by the computer—that is, you had to play computer in your head.

Then WYSIWYG (pronounced “wizzywig”) came along. It stood for “What You See Is What You Get.” When you marked a passage as being in italics, the letters tilted right there on the screen. If you wanted to change the margin, you could drag a ruler at the top of the screen—and see the effect of that change. The document thereby came to feel like something real, something you could poke and prod at. Just by looking you could tell if you’d done something wrong. Control of a sophisticated system—the document’s layout and formatting engine—was made accessible to anyone who could click around on a page.

Victor’s point was that programming itself should be like that. For him, the idea that people were doing important work, like designing adaptive cruise-control systems or trying to understand cancer, by staring at a text editor, was appalling. And it was the proper job of programmers to ensure that someday they wouldn’t have to.

There was precedent enough to suggest that this wasn’t a crazy idea. Photoshop, for instance, puts powerful image-processing algorithms in the hands of people who might not even know what an algorithm is. It’s a complicated piece of software, but complicated in the way a good synth is complicated, with knobs and buttons and sliders that the user learns to play like an instrument. Squarespace, a company that is perhaps best known for advertising aggressively on podcasts, makes a tool that lets users build websites by pointing and clicking, instead of by writing code in HTML and CSS. It is powerful enough to do work that once would have been done by a professional web designer.

But those were just a handful of examples. The overwhelming reality was that when someone wanted to do something interesting with a computer, they had to write code. Victor, who is something of an idealist, saw this not so much as an opportunity but as a moral failing of programmers at large. His talk was a call to arms.

At the heart of it was a series of demos that tried to show just how primitive the available tools were for various problems—circuit design, computer animation, debugging algorithms—and what better ones might look like. His demos were virtuosic. The one that captured everyone’s imagination was, ironically enough, the one that on its face was the most trivial. It showed a split screen with a game that looked like Mario on one side and the code that controlled it on the other. As Victor changed the code, things in the game world changed: He decreased one number, the strength of gravity, and the Mario character floated; he increased another, the player’s speed, and Mario raced across the screen.

Suppose you wanted to design a level where Mario, jumping and bouncing off of a turtle, would just make it into a small passageway. Game programmers were used to solving this kind of problem in two stages: First, you stared at your code—the code controlling how high Mario jumped, how fast he ran, how bouncy the turtle’s back was—and made some changes to it in your text editor, using your imagination to predict what effect they’d have. Then, you’d replay the game to see what actually happened.

Victor wanted something more immediate. “If you have a process in time,” he said, referring to Mario’s path through the level, “and you want to see changes immediately, you have to map time to space.” He hit a button that showed not just where Mario was right now, but where he would be at every moment in the future: a curve of shadow Marios stretching off into the far distance. What’s more, this projected path was reactive: When Victor changed the game’s parameters, now controlled by a quick drag of the mouse, the path’s shape changed. It was like having a god’s-eye view of the game. The whole problem had been reduced to playing with different parameters, as if adjusting levels on a stereo receiver, until you got Mario to thread the needle. With the right interface, it was almost as if you weren’t working with code at all; you were manipulating the game’s behavior directly.
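Victor’s “map time to space” move can be sketched in a few lines. The function below is my own minimal illustration, not Victor’s code: the character’s entire future trajectory is computed as plain data, so changing a parameter like gravity immediately yields a new curve to draw, rather than requiring a replay of the game.

```python
# Minimal sketch of "mapping time to space" (parameter names are invented):
# compute every future position up front, so editing a parameter redraws
# the whole path instantly instead of forcing a replay.

def projected_path(x0, y0, vx, vy, gravity, steps):
    """Simulate a simple jump and return all future (x, y) positions."""
    path = []
    x, y = x0, y0
    for _ in range(steps):
        x += vx
        y += vy
        vy -= gravity          # gravity pulls vertical velocity down each step
        path.append((x, y))
    return path

# Two gravity settings produce two complete trajectories immediately --
# the shadow-Mario curve is just this list, redrawn on every parameter change.
floaty = projected_path(0, 0, vx=2, vy=5, gravity=0.2, steps=10)
normal = projected_path(0, 0, vx=2, vy=5, gravity=1.0, steps=10)
assert floaty[-1][1] > normal[-1][1]   # weaker gravity leaves the jumper higher
```

The design point is that the trajectory becomes a value you can inspect and compare, which is what lets an interface react to a mouse drag with a reshaped curve.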


When the audience first saw this in action, they literally gasped. They knew they weren’t looking at a kid’s game, but rather the future of their industry. Most software involved behavior that unfolded, in complex ways, over time, and Victor had shown that if you were imaginative enough, you could develop ways to see that behavior and change it, as if playing with it in your hands. One programmer who saw the talk wrote later: “Suddenly all of my tools feel obsolete.”

When John Resig saw the “Inventing on Principle” talk, he scrapped his plans for the Khan Academy programming curriculum. He wanted the site’s programming exercises to work just like Victor’s demos. On the left-hand side you’d have the code, and on the right, the running program: a picture or game or simulation. If you changed the code, it’d instantly change the picture. “In an environment that is truly responsive,” Resig wrote about the approach, “you can completely change the model of how a student learns … [They] can now immediately see the result and intuit how underlying systems inherently work without ever following an explicit explanation.” Khan Academy has become perhaps the largest computer-programming class in the world, with a million students, on average, actively using the program each month.

Chris Granger, who had worked at Microsoft on Visual Studio, was likewise inspired. Within days of seeing a video of Victor’s talk, in January of 2012, he built a prototype of a new programming environment. Its key capability was that it would give you instant feedback on your program’s behavior. You’d see what your system was doing right next to the code that controlled it. It was like taking off a blindfold. Granger called the project “Light Table.”

In April of 2012, he sought funding for Light Table on Kickstarter. In programming circles, it was a sensation. Within a month, the project raised more than $200,000. The ideas spread. The notion of liveness, of being able to see data flowing through your program instantly, made its way into flagship programming tools offered by Google and Apple. The default language for making new iPhone and Mac apps, called Swift, was developed by Apple from the ground up to support an environment, called Playgrounds, that was directly inspired by Light Table.

But seeing the impact that his talk ended up having, Bret Victor was disillusioned. “A lot of those things seemed like misinterpretations of what I was saying,” he said later. He knew something was wrong when people began to invite him to conferences to talk about programming tools. “Everyone thought I was interested in programming environments,” he said. Really he was interested in how people see and understand systems—as he puts it, in the “visual representation of dynamic behavior.” Although code had increasingly become the tool of choice for creating dynamic behavior, it remained one of the worst tools for understanding it. The point of “Inventing on Principle” was to show that you could mitigate that problem by making the connection between a system’s behavior and its code immediate.

In a pair of later talks, “Stop Drawing Dead Fish” and “Drawing Dynamic Visualizations,” Victor went a step further. He demoed two programs he’d built—the first for animators, the second for scientists trying to visualize their data—each of which took a process that used to involve writing lots of custom code and reduced it to playing around in a WYSIWYG interface. Victor suggested that the same trick could be pulled for nearly every problem where code was being written today. “I’m not sure that programming has to exist at all,” he told me. “Or at least software developers.” In his mind, a software developer’s proper role was to create tools that removed the need for software developers. Only then would people with the most urgent computational problems be able to grasp those problems directly, without the intermediate muck of code.


Of course, to do that, you’d have to get programmers themselves on board. In a recent essay, Victor implored professional software developers to stop pouring their talent into tools for building apps like Snapchat and Uber. “The inconveniences of daily life are not the significant problems,” he wrote. Instead, they should focus on scientists and engineers—as he put it to me, “these people that are doing work that actually matters, and critically matters, and using really, really bad tools.” Exciting work of this sort, in particular a class of tools for “model-based design,” was already underway, he wrote, and had been for years, but most programmers knew nothing about it.

“If you really look hard at all the industrial goods that you’ve got out there, that you’re using, that companies are using, the only non-industrial stuff that you have inside this is the code.” Eric Bantégnie is the founder of Esterel Technologies (now owned by ANSYS), a French company that makes tools for building safety-critical software. Like Victor, Bantégnie doesn’t think engineers should develop large systems by typing millions of lines of code into an IDE. “Nobody would build a car by hand,” he says. “Code is still, in many places, handicraft. When you’re crafting manually 10,000 lines of code, that’s okay. But you have systems that have 30 million lines of code, like an Airbus, or 100 million lines of code, like your Tesla or high-end cars—that’s becoming very, very complicated.”

Bantégnie’s company is one of the pioneers in the industrial use of model-based design, in which you no longer write code directly. Instead, you create a kind of flowchart that describes the rules your program should follow (the “model”), and the computer generates code for you based on those rules. If you were making the control system for an elevator, for instance, one rule might be that when the door is open, and someone presses the button for the lobby, you should close the door and start moving the car. In a model-based design tool, you’d represent this rule with a small diagram, as though drawing the logic out on a whiteboard, made of boxes that represent different states—like “door open,” “moving,” and “door closed”—and lines that define how you can get from one state to the other. The diagrams make the system’s rules obvious: Just by looking, you can see that the only way to get the elevator moving is to close the door, or that the only way to get the door open is to stop.
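The elevator rules described above can be sketched as an explicit transition table, which is essentially what a model-based design diagram encodes. This is a minimal illustration in Python, not output from any real tool; the state and event names are made up for the example.

```python
# A hypothetical sketch of the elevator model: each diagram box is a state,
# each arrow is an entry in this table mapping (state, event) -> next state.
TRANSITIONS = {
    ("door_open", "lobby_button"): "door_closed",   # close before moving
    ("door_closed", "lobby_button"): "moving",
    ("moving", "arrived"): "door_closed",           # stop before opening
    ("door_closed", "open_button"): "door_open",
}

def step(state, event):
    """Apply one event; anything not in the table is forbidden by the model."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")

# The rule "the only way to get the elevator moving is to close the door"
# is visible by inspection: no entry maps door_open directly to moving.
assert all(cur != "door_open" or nxt != "moving"
           for (cur, _event), nxt in TRANSITIONS.items())
```

Because the rules live in one table rather than being scattered through control flow, properties like “the door must be closed before the car moves” can be checked just by looking at the data, which is the point the diagrams make visually.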

It’s not quite Photoshop. The beauty of Photoshop, of course, is that the picture you’re manipulating on the screen is the final product. In model-based design, by contrast, the picture on your screen is more like a blueprint. Still, making software this way is qualitatively different than traditional programming. In traditional programming, your task is to take complex rules and translate them into code; most of your energy is spent doing the translating, rather than thinking about the rules themselves. In the model-based approach, all you have is the rules. So that’s what you spend your time thinking about. It’s a way of focusing less on the machine and more on the problem you’re trying to get it to solve.


“Typically the main problem with software coding—and I’m a coder myself,” Bantégnie says, “is not the skills of the coders. The people know how to code. The problem is what to code. Because most of the requirements are kind of natural language, ambiguous, and a requirement is never extremely precise, it’s often understood differently by the guy who’s supposed to code.”

On this view, software becomes unruly because the media for describing what software should do—conversations, prose descriptions, drawings on a sheet of paper—are too different from the media describing what software does do, namely, code itself. Too much is lost going from one to the other. The idea behind model-based design is to close the gap. The very same model is used both by system designers to express what they want and by the computer to automatically generate code.

Of course, for this approach to succeed, much of the work has to be done well before the project even begins. Someone first has to build a tool for developing models that are natural for people—that feel just like the notes and drawings they’d make on their own—while still being unambiguous enough for a computer to understand. They have to make a program that turns these models into real code. And finally they have to prove that the generated code will always do what it’s supposed to. “We have benefited from fortunately 20 years of initial background work,” Bantégnie says.

Esterel Technologies, which was acquired by ANSYS in 2012, grew out of research begun in the 1980s by the French nuclear and aerospace industries, who worried that as safety-critical code ballooned in complexity, it was getting harder and harder to keep it free of bugs. “I started in 1988,” says Emmanuel Ledinot, the Head of Scientific Studies for Dassault Aviation, a French manufacturer of fighter jets and business aircraft. “At the time, I was working on military avionics systems. And the people in charge of integrating the systems, and debugging them, had noticed that the number of bugs was increasing.” The 80s had seen a surge in the number of onboard computers on planes. Instead of a single flight computer, there were now dozens, each responsible for highly specialized tasks related to control, navigation, and communications. Coordinating these systems to fly the plane as data poured in from sensors and as pilots entered commands required a symphony of perfectly timed reactions. “The handling of these hundreds and even thousands of possible events in the right order, at the right time,” Ledinot says, “was diagnosed as the main cause of the bug inflation.”

Ledinot decided that writing such convoluted code by hand was no longer sustainable. It was too hard to understand what it was doing, and almost impossible to verify that it would work correctly. He went looking for something new. “You must understand that to change tools is extremely expensive in a process like this,” he said in a talk. “You don’t take this type of decision unless your back is against the wall.”

He began collaborating with Gerard Berry, a computer scientist at INRIA, the French computing-research center, on a tool called Esterel—a portmanteau of the French for “real-time.” The idea behind Esterel was that while traditional programming languages might be good for describing simple procedures that happened in a predetermined order—like a recipe—if you tried to use them in systems where lots of events could happen at nearly any time, in nearly any order—like in the cockpit of a plane—you inevitably got a mess. And a mess in control software was dangerous. In a paper, Berry went as far as to predict that “low-level programming techniques will not remain acceptable for large safety-critical programs, since they make behavior understanding and analysis almost impracticable.”


Esterel was designed to make the computer handle this complexity for you. That was the promise of the model-based approach: Instead of writing normal programming code, you created a model of the system’s behavior—in this case, a model focused on how individual events should be handled, how to prioritize events, which events depended on which others, and so on. The model becomes the detailed blueprint that the computer would use to do the actual programming.

Ledinot and Berry worked for nearly 10 years to get Esterel to the point where it could be used in production. “It was in 2002 that we had the first operational software-modeling environment with automatic code generation,” Ledinot told me, “and the first embedded module in Rafale, the combat aircraft.” Today, the ANSYS SCADE product family (for “safety-critical application development environment”) is used to generate code by companies in the aerospace and defense industries, in nuclear power plants, transit systems, heavy industry, and medical devices. “My initial dream was to have SCADE-generated code in every plane in the world,” Bantégnie, the founder of Esterel Technologies, says, “and we’re not very far off from that objective.” Nearly all safety-critical code on the Airbus A380, including the system controlling the plane’s flight surfaces, was generated with ANSYS SCADE products.

Part of the draw for customers, especially in aviation, is that while it is possible to build highly reliable software by hand, it can be a Herculean effort. Ravi Shivappa, the VP of group software engineering at Meggitt PLC, an ANSYS customer which builds components for airplanes, like pneumatic fire detectors for engines, explains that traditional projects begin with a massive requirements document in English, which specifies everything the software should do. (A requirement might be something like, “When the pressure in this section rises above a threshold, open the safety valve, unless the manual-override switch is turned on.”) The problem with describing the requirements this way is that when you implement them in code, you have to painstakingly check that each one is satisfied. And when the customer changes the requirements, the code has to be changed, too, and tested extensively to make sure that nothing else was broken in the process.

The cost is compounded by exacting regulatory standards. The FAA is fanatical about software safety. The agency mandates that every requirement for a piece of safety-critical software be traceable to the lines of code that implement it, and vice versa. So every time a line of code changes, it must be retraced to the corresponding requirement in the design document, and you must be able to demonstrate that the code actually satisfies the requirement. The idea is that if something goes wrong, you’re able to figure out why; the practice brings order and accountability to large codebases. But, Shivappa says, “it’s a very labor-intensive process.” He estimates that before they used model-based design, on a two-year-long project only two to three months was spent writing code—the rest was spent working on the documentation.

As Bantégnie explains, the beauty of having a computer turn your requirements into code, rather than a human, is that you can be sure—in fact you can mathematically prove—that the generated code actually satisfies those requirements. Much of the benefit of the model-based approach comes from being able to add requirements on the fly while still ensuring that existing ones are met; with every change, the computer can verify that your program still works. You’re free to tweak your blueprint without fear of introducing new bugs. Your code is, in FAA parlance, “correct by construction.”

Still, most software, even in the safety-obsessed world of aviation, is made the old-fashioned way, with engineers writing their requirements in prose and programmers coding them up in a programming language like C. As Bret Victor made clear in his essay, model-based design is relatively unusual. “A lot of people in the FAA think code generation is magic, and hence call for greater scrutiny,” Shivappa told me.

Most programmers feel the same way. They like code. At least they understand it. Tools that write your code for you and verify its correctness using the mathematics of “finite-state machines” and “recurrent systems” sound esoteric and hard to use, if not just too good to be true.

It is a pattern that has played itself out before. Whenever programming has taken a step away from the writing of literal ones and zeros, the loudest objections have come from programmers. Margaret Hamilton, a celebrated software engineer on the Apollo missions—in fact the coiner of the phrase “software engineering”—told me that during her first year at the Draper lab at MIT, in 1964, she remembers a meeting where one faction was fighting the other about transitioning away from “some very low machine language,” as close to ones and zeros as you could get, to “assembly language.” “The people at the lowest level were fighting to keep it. And the arguments were so similar: ‘Well how do we know assembly language is going to do it right?’”

“Guys on one side, their faces got red, and they started screaming,” she said. She said she was “amazed how emotional they got.”

Emmanuel Ledinot, of Dassault Aviation, pointed out that when assembly language was itself phased out in favor of the programming languages still popular today, like C, it was the assembly programmers who were skeptical this time. No wonder, he said, that “people are not so easily transitioning to model-based software development: They perceive it as another opportunity to lose control, even more than they have already.”

The bias against model-based design, sometimes known as model-driven engineering, or MDE, is in fact so ingrained that according to a recent paper, “Some even argue that there is a stronger need to investigate people’s perception of MDE than to research new MDE technologies.”

Which sounds almost like a joke, but for proponents of the model-based approach, it’s an important point: We already know how to make complex software reliable, but in so many places, we’re choosing not to. Why?

In 2011, Chris Newcombe had been working at Amazon for almost seven years, and had risen to be a principal engineer. He had worked on some of the company’s most critical systems, including the retail-product catalog and the infrastructure that managed every Kindle device in the world. He was a leader on the highly prized Amazon Web Services team, which maintains cloud servers for some of the web’s biggest properties, like Netflix, Pinterest, and Reddit. Before Amazon, he’d helped build the backbone of Steam, the world’s largest online-gaming service. He is one of those engineers whose work quietly keeps the internet running. The products he’d worked on were considered massive successes. But all he could think about was that buried deep in the designs of those systems were disasters waiting to happen.

“Human intuition is poor at estimating the true probability of supposedly ‘extremely rare’ combinations of events in systems operating at a scale of millions of requests per second,” he wrote in a paper. “That human fallibility means that some of the more subtle, dangerous bugs turn out to be errors in design; the code faithfully implements the intended design, but the design fails to correctly handle a particular ‘rare’ scenario.”

Newcombe was convinced that the algorithms behind truly critical systems—systems storing a significant portion of the web’s data, for instance—ought to be not just good, but perfect. A single subtle bug could be catastrophic. But he knew how hard bugs were to find, especially as an algorithm grew more complex. You could do all the testing you wanted and you’d never find them all.

This is why he was so intrigued when, in the appendix of a paper he’d been reading, he came across a strange mixture of math and code—or what looked like code—that described an algorithm in something called “TLA+.” The surprising part was that this description was said to be mathematically precise: An algorithm written in TLA+ could in principle be proven correct. In practice, it allowed you to create a realistic model of your problem and test it not just thoroughly, but exhaustively. This was exactly what he’d been looking for: a language for writing perfect algorithms.

TLA+, which stands for “Temporal Logic of Actions,” is similar in spirit to model-based design: It’s a language for writing down the requirements—TLA+ calls them “specifications”—of computer programs. These specifications can then be completely verified by a computer. That is, before you write any code, you write a concise outline of your program’s logic, along with the constraints you need it to satisfy (say, if you were programming an ATM, a constraint might be that you can never withdraw the same money twice from your checking account). TLA+ then exhaustively checks that your logic does, in fact, satisfy those constraints. If not, it will show you exactly how they could be violated.
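The ATM constraint above hints at what “exhaustive checking” means in practice. Here is a toy illustration in Python, emphatically not TLA+ itself: a hypothetical model of a checking account receiving two withdrawal requests for the same money, where the checker tries every ordering of events and tests the invariant (balance never negative) in each outcome.

```python
from itertools import permutations

START_BALANCE = 100
REQUESTS = (60, 60)  # two requests that together would overdraw the account

def naive_atm(order):
    """Dispense first, never check: the kind of subtle design flaw at issue."""
    balance = START_BALANCE
    for amount in order:
        balance -= amount            # bug: no check before dispensing
    return balance

def checked_atm(order):
    """Refuse any withdrawal that would overdraw the account."""
    balance = START_BALANCE
    for amount in order:
        if balance >= amount:
            balance -= amount
    return balance

def violates_invariant(atm):
    """Try every ordering of the requests; the invariant is balance >= 0."""
    return any(atm(order) < 0 for order in permutations(REQUESTS))

assert violates_invariant(naive_atm)        # the checker exposes the flaw
assert not violates_invariant(checked_atm)  # the revised design passes
```

A real TLA+ model checker does the same thing at a vastly larger scale, enumerating every reachable state of the specification rather than a handful of orderings, and reporting the exact sequence of events that leads to any violation.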

The language was invented by Leslie Lamport, a Turing Award–winning computer scientist. With a big white beard and scruffy white hair, and kind eyes behind large glasses, Lamport looks like he might be one of the friendlier professors at the American Hogwarts. Now at Microsoft Research, he is known as one of the pioneers of the theory of “distributed systems,” which describes any computer system made of multiple parts that communicate with each other. Lamport’s work laid the foundation for many of the systems that power the modern web.

For Lamport, a major reason today’s software is so full of bugs is that programmers jump straight into writing code. “Architects draw detailed plans before a brick is laid or a nail is hammered,” he wrote in an article. “But few programmers write even a rough sketch of what their programs will do before they start coding.” Programmers are drawn to the nitty-gritty of coding because code is what makes programs go; spending time on anything else can seem like a distraction. And there is a patient joy, a meditative kind of satisfaction, to be had from puzzling out the micro-mechanics of code. But code, Lamport argues, was never meant to be a medium for thought. “It really does constrain your ability to think when you’re thinking in terms of a programming language,” he says. Code makes you miss the forest for the trees: It draws your attention to the working of individual pieces, rather than to the bigger picture of how your program fits together, or what it’s supposed to do—and whether it actually does what you think. This is why Lamport created TLA+. As with model-based design, TLA+ draws your focus to the high-level structure of a system, its essential logic, rather than to the code that implements it.

Newcombe and his colleagues at Amazon would go on to use TLA+ to find subtle, critical bugs in major systems, including bugs in the core algorithms behind S3, regarded as perhaps the most reliable storage engine in the world. It is now used widely at the company. In the tiny universe of people who had ever used TLA+, their success was not so unusual. An intern at Microsoft used TLA+ to catch a bug that could have caused every Xbox in the world to crash after four hours of use. Engineers at the European Space Agency used it to rewrite, with 10 times less code, the operating system of a probe that was the first to ever land softly on a comet. Intel uses it regularly to verify its chips.

But TLA+ occupies just a small, far corner of the mainstream, if it can be said to take up any space there at all. Even to a seasoned engineer like Newcombe, the language read at first as bizarre and esoteric—a zoo of symbols. For Lamport, this is a failure of education. Though programming was born in mathematics, it has since largely been divorced from it. Most programmers aren’t very fluent in the kind of math—logic and set theory, mostly—that you need to work with TLA+. “Very few programmers—and including very few teachers of programming—understand the very basic concepts and how they’re applied in practice. And they seem to think that all they need is code,” Lamport says. “The idea that there’s some higher level than the code in which you need to be able to think precisely, and that mathematics actually allows you to think precisely about it, is just completely foreign. Because they never learned it.”

Lamport sees this failure to think mathematically about what they’re doing as the problem of modern software development in a nutshell: The stakes keep rising, but programmers aren’t stepping up—they haven’t developed the chops required to handle increasingly complex problems. “In the 15th century,” he said, “people used to build cathedrals without knowing calculus, and nowadays I don’t think you’d allow anyone to build a cathedral without knowing calculus. And I would hope that after some suitably long period of time, people won’t be allowed to write programs if they don’t understand these simple things.”

Newcombe isn’t so sure that it’s the programmer who is to blame. “I’ve heard from Leslie that he thinks programmers are afraid of math. I’ve found that programmers aren’t aware—or don’t believe—that math can help them handle complexity. Complexity is the biggest challenge for programmers.” The real problem in getting people to use TLA+, he said, was convincing them it wouldn’t be a waste of their time. Programmers, as a species, are relentlessly pragmatic. Tools like TLA+ reek of the ivory tower. When programmers encounter “formal methods” (so called because they involve mathematical, “formally” precise descriptions of programs), their deep-seated instinct is to recoil.

Most programmers who took computer science in college have briefly encountered formal methods. Usually they’re demonstrated on something trivial, like a program that counts up from zero; the student’s job is to mathematically prove that the program does, in fact, count up from zero.
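That classroom exercise can be sketched as a runtime check rather than a pencil-and-paper proof: a counter that starts at zero, with the property the proof would establish (the loop invariant) asserted on every step. This is an illustrative sketch, not any particular course’s example.

```python
def count_up(n):
    """Count from 0 to n, asserting the invariant the formal proof would show."""
    i = 0
    while i < n:
        assert 0 <= i < n   # loop invariant: i always stays within [0, n)
        i += 1
    assert i == n           # postcondition: we counted exactly to n
    return i

assert count_up(10) == 10
```

The formal-methods version replaces these runtime assertions with a mathematical argument that they can never fail, for every possible `n`, which is exactly the step students tend to find tedious on so trivial a program.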

“I needed to change people’s perceptions on what formal methods were,” Newcombe told me. Even Lamport himself didn’t seem to fully grasp this point: Formal methods had an image problem. And the way to fix it wasn’t to implore programmers to change—it was to change yourself. Newcombe realized that to bring tools like TLA+ to the programming mainstream, you had to start speaking their language.

For one thing, he said that when he was introducing colleagues at Amazon to TLA+ he would avoid telling them what it stood for, because he was afraid the name made it seem unnecessarily forbidding: “Temporal Logic of Actions” has exactly the kind of highfalutin ring to it that plays well in academia, but puts off most practicing programmers. He tried also not to use the terms “formal,” “verification,” or “proof,” which reminded programmers of tedious classroom exercises. Instead, he presented TLA+ as a new kind of “pseudocode,” a stepping-stone to real code that allowed you to exhaustively test your algorithms—and that got you thinking precisely early on in the design process. “Engineers think in terms of debugging rather than ‘verification,’” he wrote, so he titled his internal talk on the subject to fellow Amazon engineers “Debugging Designs.” Rather than bemoan the fact that programmers see the world in code, Newcombe embraced it. He knew he’d lose them otherwise. “I’ve had a bunch of people say, ‘Now I get it,’” Newcombe says.

He has since left Amazon for Oracle, where he’s been able to convince his new colleagues to give TLA+ a try. For him, using these tools is now a matter of responsibility. “We need to get better at this,” he said.

“I’m self-taught, been coding since I was nine, so my instincts were to start coding. That was my only—that was my way of thinking: You’d sketch something, try something, you’d organically evolve it.” In his view, this is what many programmers today still do. “They google, and they look on Stack Overflow” (a popular website where programmers answer each other’s technical questions) “and they get snippets of code to solve their tactical concern in this little function, and they glue it together, and iterate.”

“And that’s completely fine until you run smack into a real problem.”

In the summer of 2015, a pair of American security researchers, Charlie Miller and Chris Valasek, convinced that car manufacturers weren’t taking software flaws seriously enough, demonstrated that a 2014 Jeep Cherokee could be remotely controlled by hackers. They took advantage of the fact that the car’s entertainment system, which has a cellular connection (so that, for instance, you can start your car with your iPhone), was connected to more central systems, like the one that controls the windshield wipers, steering, acceleration, and brakes (so that, for instance, you can see guidelines on the rearview screen that respond as you turn the wheel). As proof of their attack, which they developed on nights and weekends, they hacked into Miller’s car while a journalist was driving it on the highway, and made it go haywire; the journalist, who knew what was coming, panicked when they cut the engine, forcing him to a slow crawl on a stretch of road with no shoulder to escape to.

Although they didn’t actually create one, they showed that it was possible to write a clever piece of software, a “vehicle worm,” that would use the onboard computer of a hacked Jeep Cherokee to scan for and hack others; had they wanted to, they could have had simultaneous access to a nationwide fleet of vulnerable cars and SUVs. (There were at least five Fiat Chrysler models affected, including the Jeep Cherokee.) One day they could have told them all to, say, suddenly veer left or cut the engines at high speed.

“We need to think about software differently,” Valasek told me. Car companies have long assembled their final product from parts made by hundreds of different suppliers. But where those parts were once purely mechanical, they now, as often as not, come with millions of lines of code. And while some of this code—for adaptive cruise control, for auto braking and lane assist—has indeed made cars safer (“The safety features on my Jeep have already saved me countless times,” says Miller), it has also created a level of complexity that is entirely new. And it has made possible a new kind of failure.

“There are lots of bugs in cars,” Gerard Berry, the French researcher behind Esterel, said in a talk. “It’s not like avionics—in avionics it’s taken very seriously. And it’s admitted that software is different from mechanics.” The automotive industry is perhaps among those that haven’t yet realized they are actually in the software business.

“We don’t in the automaker industry have a regulator for software safety that knows what it’s doing,” says Michael Barr, the software expert who testified in the Toyota case. NHTSA, he says, “has only limited software expertise. They’ve come at this from a mechanical history.” The same regulatory pressures that have made model-based design and code generation attractive to the aviation industry have been slower to come to car manufacturing. Emmanuel Ledinot, of Dassault Aviation, speculates that there might be economic reasons for the difference, too. Automakers simply can’t afford to increase the price of a component by even a few cents, since it is multiplied so many millionfold; the computers embedded in cars therefore have to be slimmed down to the bare minimum, with little room to run code that hasn’t been hand-tuned to be as lean as possible. “Introducing model-based software development was, I think, for the last decade, too costly for them.”

One suspects the incentives are changing. “I think the autonomous car might push them,” Ledinot told me—“ISO 26262 and the autonomous car might slowly push them to adopt this kind of approach on critical parts.” (ISO 26262 is a safety standard for cars published in 2011.) Barr said much the same thing: In the world of the self-driving car, software can’t be an afterthought. It can’t be built like today’s airline-reservation systems or 911 systems or stock-trading systems. Code will be put in charge of hundreds of millions of lives on the road and it has to work. That is no small task.

“Computing is fundamentally invisible,” Gerard Berry said in his talk. “When your tires are flat, you look at your tires, they are flat. When your software is broken, you look at your software, you see nothing.”

“So that’s a big problem.”

Cheaper, Lighter, Quieter: The Electrification of Flight Is at Hand – IEEE Spectrum

When you first sit in the cockpit of an electric-powered airplane, you see nothing out of the ordinary. However, touch the Start button and it strikes you immediately: an eerie silence. There is no roar, no engine vibration, just the hum of electricity and the soft whoosh of the propeller. You can converse easily with the person in the next seat, without headphones. The silence is a boon to both those in the cockpit and those on the ground below.

You rev the motor not with a throttle but with a rheostat, and its high torque, available over a magnificently wide band of motor speeds, is conveyed to the propeller directly, with no power-sapping transmission. At 20 kilograms (45 pounds), the motor can be held in two hands, and it measures only 10 centimeters deep and 30 cm in diameter. An equivalent internal-combustion engine weighs about seven times as much and occupies some 120 by 90 by 90 cm. In part because of the motor’s wonderful efficiency—it turns 95 percent of its electrical energy directly into work—an hour’s flight in this electric plane consumes just US $3 worth of electricity, versus $40 worth of gasoline in a single-engine airplane. With only one moving part in the electric motor, e-planes also cost less to maintain and, in the two-seater category, less to buy in the first place.

It’s the cost advantage, even more than the silent operation, that is most striking to a professional pilot. Flying is an expensive business. And, as technologists have shown time and again, if you bring down the cost of a product dramatically, you effectively create an entirely new product. Look no further than the $300 supercomputer in your pocket.

At my company, Bye Aerospace, in Englewood, Colo., we have designed and built a two-seat aircraft called the Sun Flyer that runs on electricity alone. We expect to fly the plane, with the specs described above, later this year. We designed the aircraft for the niche application of pilot training, where the inability to carry a heavy payload or fly for more than 3 hours straight is not a problem and where cost is a major factor. But we believe that pilot training will be just the beginning of electric aviation. As batteries advance and as engineers begin designing hybrid propulsion systems pairing motors with engines, larger aircraft will make the transition to electricity. Such planes will eventually take over most short-hop, hub-and-spoke commuter flights, creating an affordable and quiet air service that will eventually reach right into urban areas, thereby giving rise to an entirely new category of convenient, low-cost aviation.

I will never forget my first experience with electric propulsion, during the early days of Tesla Motors, in the mid-2000s. I was a guest, visiting Tesla’s research warehouse in the San Francisco Bay Area, and there I rode along with a test driver in the prototype of the company’s first Roadster. Looking over the electric components then available—the motor was large and heavy, and the gearbox, inverter, and batteries were all relatively crude—I found it hard to imagine why anyone would take an electric car over a gasoline-powered one. But then the driver’s foot hit the accelerator, the car lunged forward like a rocket, and I was a believer.

Electric flight has advanced on the backs of such efforts, themselves the beneficiaries of the cellphone industry’s work on battery technology and power-management software. I founded Bye Aerospace in 2007 to build electric planes and capitalize on three advances in particular. The first one is improved lithium-ion batteries. The second is efficient and lightweight electric motors and controllers. And the third is aerodynamic design—specifically a long, low-drag fuselage with efficient long-wing aerodynamics, constructed with a very lightweight and strong carbon composite.

Our first project was the Silent Falcon, a 14-kg (30-lb.) solar-electric fixed-wing drone. We optimized the power system for long-duration flight by including only enough lithium-ion batteries to supply peak power for climbing. We designed and built a pneumatic rail launcher so that the plane does not have to take off under its own power. When it reaches the desired altitude, it can cruise for 5 to 7 hours, supplementing a trickle of battery power with electricity from solar panels spanning the 4.2-meter (14-foot) wings. The solar panels turn sunlight into electricity with 11 percent efficiency, effectively doubling the flight time that the batteries alone could provide. Nowadays, the best solar cells are rated at 26 percent efficiency, and they will allow the plane to stay up for 10 to 12 hours.
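The endurance math behind that solar assist is simple: if the panels supply a fraction of cruise power, the batteries drain proportionally more slowly. A minimal sketch, assuming illustrative numbers (the 3-hour battery-only endurance and 50 percent solar share below are placeholders, not Silent Falcon specifications):

```python
# Back-of-the-envelope endurance model: panels supplying a fraction f of
# cruise power slow the battery drain to (1 - f) of its normal rate, so
# endurance scales by 1 / (1 - f).

def solar_endurance_hours(battery_only_hours, solar_fraction):
    """Endurance when solar covers `solar_fraction` of cruise power."""
    return battery_only_hours / (1.0 - solar_fraction)

battery_only = 3.0  # assumed battery-only cruise endurance, in hours
print(solar_endurance_hours(battery_only, 0.5))  # -> 6.0: half of cruise power from the sun doubles flight time
```

This is the sense in which modest panels can "effectively double" flight time: covering half of cruise power halves the net battery drain.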

The Silent Falcon can carry various payloads, including conventional and infrared cameras and sensors, making it useful for surveilling border areas, inspecting power lines, gathering information on forest fires, and much else. It flies a completely autonomous flight plan: you give it a general order—where you want it to go, how high, and over what location—and then hit the Send button. The Silent Falcon entered production in 2015, becoming the world’s first commercial solar-electric unmanned aerial vehicle, or UAV.

Our next project was to develop, with the help of subcontractors around the world, an electric propulsion system for use in an existing full-size airplane: the Cessna 172 four-seater, the most popular airplane in the world. After flying the converted Cessna for a few dozen short hops, we followed up with a purpose-built, single-seat electric airplane. We’ve taken each of these test planes on 20-odd test flights.

Our first problem was finding a suitably light, efficient motor. Years ago, in the early days of electric flight, we encountered aviators who considered dropping (or actually did drop) a conventional electric motor into an airplane. But it weighed too much because of the heavy motor casings, the elaborate liquid-cooling systems, and the complex gearboxes. Our approach has been to work with such companies as Enstroj, Geiger, Siemens, and UQM, which have designed electric motors specifically for aerospace applications.

These aviation-optimized motors differ in several respects from the conventional sort. They can weigh less because they don’t need as much starting power at low revolutions per minute. An airplane has far less inertia to overcome while slowly accelerating along a runway than a car does as it kicks off from a stoplight. Aviation motors can dispense with the heavy motor casing because they don’t need to be as rugged as auto motors, which are frequently jostled by ruts and potholes and stressed by vibration and high torque.

In a Tesla, the power might peak at around 7,000 rpm, and that is fine for driving a car. But when you’re turning a propeller, you need the power curve to peak much sooner, at roughly one-third the revs, or about 2,000 rpm. It would be a shame to achieve the shape of that power curve at that lower speed by adding the deadweight of a complex gearbox; instead, our supplier furnishes us with motors that have the appropriate windings and a motor controller programmed to deliver such a power curve. At 2,000 rpm, the motor can thus directly drive the propeller. As a result, we’ve been able to progress from power plants that developed just 1 to 2 kilowatts per kilogram to models generating more than 5 kW/kg.
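Why the windings matter comes down to the relation P = τω: delivering the same power at lower rpm demands proportionally more torque. A hedged sketch with illustrative numbers (the 80 kW figure echoes the takeoff power mentioned later in the article; none of this is a vendor specification):

```python
import math

def torque_nm(power_w, rpm):
    """Torque (N*m) needed to deliver power_w at a given rpm, from P = tau * omega."""
    omega_rad_s = 2 * math.pi * rpm / 60.0  # convert rpm to rad/s
    return power_w / omega_rad_s

power = 80_000.0                   # illustrative 80 kW peak
t_car = torque_nm(power, 7000)     # car-style motor peaking near 7,000 rpm
t_prop = torque_nm(power, 2000)    # direct-drive propeller speed
print(round(t_prop / t_car, 2))    # -> 3.5: same power at 2,000 rpm needs 3.5x the torque
```

That 3.5x torque requirement is exactly what a gearbox would otherwise supply; here the motor windings and the controller's programmed power curve absorb it instead.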

Even more important was the lithium-ion battery technology, the steady improvement of which over the past 15 years was key to making our project possible. Bye Aerospace has worked with Panasonic and Dow Kokam; currently we use a battery pack composed of LG Chem’s 18650 lithium-ion batteries, so called because they’re 18 millimeters in diameter and 65 mm long, or a little larger than a standard AA battery. LG Chem’s cell has a record-breaking energy density of 260 watt-hours per kilogram, about 2.5 times as great as the batteries we had when we began working on electric aviation. Each cell also has a robust discharge capability, up to about 10 amperes. Our 330-kg battery pack easily allows normal flight, putting out a steady 18 to 25 kW and up to 80 kW during takeoff. The total energy storage capacity of the battery pack is 83 kWh.
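Those pack figures can be cross-checked with a few lines of arithmetic. A sketch using only the numbers quoted above (attributing the gap between cell-level energy and the stated pack capacity to packaging, wiring, and safety margin is our assumption):

```python
pack_mass_kg = 330
cell_density_wh_per_kg = 260   # LG Chem 18650 cell-level figure from the text
pack_energy_kwh = 83           # stated total pack capacity

# Cell-level energy slightly exceeds the stated pack capacity; the
# difference plausibly goes to packaging, wiring, and safety margin.
gross_kwh = pack_mass_kg * cell_density_wh_per_kg / 1000
print(round(gross_kwh, 1))     # -> 85.8

# Endurance at the steady cruise draw quoted in the text:
for cruise_kw in (18, 25):
    print(round(pack_energy_kwh / cruise_kw, 1))  # -> 4.6, then 3.3 hours
```

The resulting 3.3-to-4.6-hour range is consistent with the earlier statement that the trainer need not fly for more than about 3 hours straight.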

That peak power rating matters most toward the end of a flight: as the state of charge drops, cell voltage sags, and delivering the same power at a lower voltage demands more current. Just as important, the battery can charge quite rapidly; all we need is the kind of supercharging outlets now available for electric cars.

To use lithium-ion batteries in an airplane, you must take safety precautions beyond those required for a car. For example, we use a packaging system to contain heat and at the same time allow the venting of any vapors that may be created. An electronic safety system monitors each cell during operations, avoiding both under- and overcharges. Our battery-management system monitors all these elements and feeds the corresponding data to the overall information-management system in the cockpit.

Should something go wrong with the batteries in midflight, an alarm light flashes in the cockpit and the pilot can disconnect the batteries, either electronically or mechanically. If this happens, the pilot can then glide back to the airfield, which the plane will always be near, given that it is serving as a trainer.

A key precaution, pioneered in the original Tesla Roadster, is to separate the individual cells with an air gap, so that if one cell overheats, the problem can’t easily propagate to its neighbors. Air cooling is sufficient for the batteries, but we use liquid cooling for the motor and controller, which throw off a lot of heat in certain situations (such as a full-power takeoff and climb-out from a Phoenix airport).

The airframe takes advantage of advanced composites, which allowed us to produce a wing and fuselage that are both lightweight and strong. We used modern aerodynamic design tools to shape the fuselage and the wing airfoils for very low drag without compromising easy handling.

Much of the aerodynamic payoff of our electric propulsion system is centered in the cowling area in the airplane’s nose. The motor sits in this space, between the propeller and the cockpit, and it is so small that we could squeeze the cowling down to an elegant taper, smoothing airflow along the entire fuselage. This allows us to reduce air resistance by 15 percent, as compared with what a conventional plane such as a single-engine Cessna would offer. Also, because the electric motor throws off a lot less heat than a gasoline engine, you need less air cooling and thus can manage with smaller air inlets. The result is less parasitic cooling drag and a nicer appearance (if we do say so ourselves).

The sleek airplane nose also increases propeller efficiency. On a conventional airplane, much of the inner span of the propeller is blocked because of the large motor behind it. In a properly designed electric airplane, the entire propeller blade is in open air, producing considerably more thrust. A bonus: The airplane can regenerate energy during braking, just as electric cars do. When the pilot slows down or descends, the propeller becomes a windmill, running the motor as a generator to recharge the batteries. In the sort of airport traffic pattern typical for general aviation and student-pilot training, this energy savings comes to about 13 percent. In other words, if a plane lands having apparently used 8.7 kWh during the flight, it has actually used 10 kWh: the propeller-recoup system put back roughly 1.3 kWh while flying in the traffic pattern.
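The regeneration bookkeeping in that last sentence works out as follows (this is a restatement of the article's own numbers, not new data):

```python
gross_draw_kwh = 10.0     # energy actually drawn from the pack in flight
regen_fraction = 0.13     # ~13% recouped in a typical training traffic pattern

recouped_kwh = round(gross_draw_kwh * regen_fraction, 2)   # energy the windmilling prop puts back
net_metered_kwh = round(gross_draw_kwh - recouped_kwh, 1)  # what the flight "apparently" used
print(recouped_kwh, net_metered_kwh)   # -> 1.3 8.7
```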

The commercial rationale for training aircraft like the Sun Flyer is the projected crisis in the supply of qualified airline pilots. Last year, Boeing made a staggering projection: The world will need an additional 617,000 commercial pilots by the year 2035. To put that in perspective, the total number of commercial pilots in the world today has been estimated at 130,000.

The growing scarcity of pilots has various causes. Fewer trained pilots are coming out of the world’s large militaries. At the same time, it’s increasingly expensive to obtain a commercial airline pilot’s license from civilian pilot schools, as more hours of flight time are now required, some 1,500 flight hours in total. On top of that, the age of the typical training aircraft—in the United States, it’s probably a Cessna or a Piper—now averages 50 years, according to the General Aviation Manufacturers Association.

The Sun Flyer, manufactured by our Aero Electric Aircraft Corp. (AEAC) division, is currently one of a kind, but it won’t be much longer. NASA has announced a project to develop an experimental electric airplane, the X-57 Electric Research Plane, which would be the first new experimental aircraft the agency has designed in five years. (Because NASA is a government agency, its plane would not be a commercial competitor of the Sun Flyer.) Airbus has flown a small experimental electric aircraft several times over the last few years, but it now focuses on hybrid-electric commercial transport (which I’ll discuss in a moment). Pipistrel, a Slovenian maker of gliders and light-sport aircraft (LSA), has flown experimental electric prototypes for several years. However, the future of such craft is unclear because the U.S. Federal Aviation Administration and the European equivalent, EASA, do not now allow any LSAs, electric or otherwise, to be used as commercial trainers.

For now, we are sticking to our niche in training. AEAC is working with Redbird Flight Simulations, in Austin, Texas, to offer a comprehensive training system. The Spartan College of Aeronautics and Technology, in Tulsa, Okla., has placed a deposit toward the purchase of 25 of our Sun Flyers, and it has also signed a training-related agreement that will help us to develop a complete training system. Other flight schools and individual pilots have made deposits and options to buy, bringing the total to more than 100 Sun Flyer deposits and options; another 100 deposits are in various stages of negotiation.

The Sun Flyer aircraft will be FAA certified in the United States according to standard-category, day-night visual flight rules with a target gross weight of less than 864 kg (1,900 lb.). And the airplane will not compromise on performance: We are aiming for a climb rate of 430 meters (1,450 feet) per minute; for comparison, a Cessna 172 climbs at about 210 meters (700 feet) per minute.

Why aren’t we pursuing a larger commercial electric airplane? The main reason is the punishing relationship between speed and energy. The bigger and faster an electric airplane gets, the more battery capacity it needs and the greater the share of its weight those batteries constitute. The underlying problem is the same for any moving object: Drag goes up as the square of speed, so if you double speed, you quadruple drag. In a relatively slow airplane, like a flight trainer, electric aviation is a serious contender, but it will take years before batteries have enough energy density to power airplanes that are substantially faster and heavier than our models.
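The square-law cited here is the standard drag equation, D = ½ρv²C_dA, and the energy bill is even worse than it suggests: the power needed to overcome drag is drag times speed, so it climbs with the cube of speed. A sketch (air density, drag coefficient, and reference area below are generic placeholders, not Sun Flyer data):

```python
def drag_newtons(v_ms, rho=1.225, cd=0.03, area_m2=10.0):
    """Aerodynamic drag D = 0.5 * rho * v^2 * Cd * A (placeholder coefficients)."""
    return 0.5 * rho * v_ms**2 * cd * area_m2

v = 50.0  # m/s
print(drag_newtons(2 * v) / drag_newtons(v))                   # -> 4.0: double the speed, 4x the drag
print((drag_newtons(2 * v) * 2 * v) / (drag_newtons(v) * v))   # -> 8.0: and 8x the power to push through it
```

The cube-law on power is why a slow trainer is a natural fit for batteries while a fast transport is not: modest speed reductions buy disproportionate savings in stored energy.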

While we wait for pure-electric technology to mature, we can use hybrid-electric systems, which operate in planes on the same principle as they do in cars. Takeoff demands roughly four times as much power as cruising, and the electric motor can supply that burst by running briefly at its peak rating; motors tolerate this because they stay efficient across a wide operating band. A small internal-combustion engine running at its optimal rpm can then recharge the battery and sustain cruising speed. As a side benefit, relying on the electric motor for takeoff spares the neighbors a lot of noise.

We are in the midst of the monumental task of making the two-seat Sun Flyer 2 and the four-seat Sun Flyer 4 a viable, commercial reality. Some still say it can’t be done. I counter that nothing of any fundamental and lasting value can be accomplished without trying things that have never been done before. Thanks to visionaries and pioneers, electric airplanes are not just an intriguing possibility. They are a reality.

Sun set: Oracle closes down last Sun product lines | ZDNet

Officially, Oracle hasn’t said a thing. Unofficially, if you count the cars at Oracle’s Santa Clara office, you’ll find hundreds of parking spots that were occupied last week now empty. As many as 2,500 Oracle employees, most of them former Sun staff, have been laid off. Goodbye, SPARC. Goodbye, Solaris. Your day is done.

None of this is a real surprise. Oracle had already cut about a thousand former Sun engineers and developers in January. And in Oracle’s most recent SPARC/Solaris roadmap, the next-generation Solaris 12 had been replaced by “Solaris next” and “SPARC next”: merely incremental upgrades.

Former Sun executive Bryan Cantrill reported, based on his conversations with current Solaris team members, that Oracle’s latest layoffs were, “So deep as to be fatal: The core Solaris engineering organization lost on the order of 90 percent of its people, including essentially all management.” James Gosling, Java’s creator, summed it up: “Solaris … got a bullet in the head from Oracle on Friday.”

Stick a fork in Solaris, a once popular Unix operating system. It’s done.

Oracle’s 2009 acquisition of Sun, which gave the company Solaris and SPARC, was a terrible move from day one. The rise of commodity Linux x86-based servers ensured that Oracle’s purchase of Sun would go down as an all-time awful technology merger and acquisition.

Indeed, you’d be hard-pressed to find anything that went right with Oracle’s $7.2 billion purchase of Sun. Simon Phipps, former Sun open-source officer and managing director of Meshed Insights, gave a long, painful list of the many once-popular Sun programs that Oracle squandered. Among them:

  • Java was described as the “crown jewels,” but the real reason for buying Java SE—the attempt to extract $8 billion from Google through litigation—has failed twice in court.
  • Ellison said Java’s role in middleware was the key to success, but Java Enterprise Edition (EE) is now headed to a Foundation.
  • Bureaucracy over MySQL security fixes led to a decent portion of the user community going over to Monty’s MariaDB [MySQL] fork, enough to start a company around.
  • Ellison said he would rebuild Sun’s hardware business, but its boss quit a month ago and the team behind it was part of the lay-off.
  • Despite Scott McNealy’s (former Sun CEO) understanding that Solaris had to be open to win in the market, Oracle hyped it up and closed it down. The result was this week’s layoffs, foreshadowed extensively in January.
  • Oracle renamed StarOffice (OpenOffice) and announced a cloud version, but it couldn’t make it fly. Sensing the impending EOL of the project and alienated by heavy-handed treatment, the community jumped ship to LibreOffice.

When all is said and done — and now all has been said and done — Oracle buying Sun was a waste of money for Oracle and a waste of once valuable Sun technologies. Great moves, Ellison. Let’s see if you can continue your good work with taking Oracle to the cloud.