Intel ends its dreams of replacing the x86 chip in your PC

When Intel launched its first Itanium processor in 2001, it had very high hopes: the 64-bit chip was supposed to do nothing less than kill off the x86 architecture that had dominated PCs for over two decades. Things didn’t quite pan out that way, however, and Intel is officially calling it quits. The company tells PCWorld that its just-shipping Itanium 9700-series processors will be the last models in the family. HPE, the enterprise company resulting from the split of Itanium co-creator HP, will be the last major customer — its extra-reliable Integrity i6 servers are getting the upgraded hardware, but you won’t hear much from anyone else.

The news marks the quiet end to a tumultuous saga. Itanium was supposed to represent a clean break from x86 that put Intel firmly into the 64-bit era. It was first intended for high-end servers and workstations, but it was eventually supposed to find its way into home PCs. Needless to say, that’s not how it worked out. Early Itanium chips were power hogs, and AMD threw a monkey wrench into Intel’s plans by launching 64-bit x86 processors ahead of Intel. Why buy Itanium when you can get many of the benefits of 64-bit technology without tossing your existing software? Intel responded with 64-bit x86 chips of its own, and those quickly took over as Itanium remained the niche option.

That shift effectively killed any hopes of the broad support Itanium needed to survive. Microsoft dropped support in Windows after 2010, and HP went so far as to sue Oracle for ditching software development in 2011. Not that Intel necessarily minded by that point. It poured most of its energy into many-core Xeon processors that were often up to the job. And it’ll be a while before Itanium disappears forever. HPE says that it’ll offer Linux-based “containers” that let you run Itanium software on x86 servers, so it’ll be relatively easy for companies to jump ship at their own pace.

The cancellation also shows just how much Intel has changed in the past 16 years. Where the chip giant was once obsessed with ruling the high-performance computing world, it’s now trying to move beyond the PC. Why pour resources into exotic server CPUs when the future revolves more around drones, wearables and the Internet of Things? Although server chips aren’t about to disappear any time soon, Intel clearly has better ways to spend its money.

COBOL Is Everywhere. Who Will Maintain It? – The New Stack

Think COBOL is dead? About 95 percent of ATM swipes use COBOL code, Reuters reported in April, and the 58-year-old language even powers 80 percent of in-person transactions. In fact, Reuters calculates that there are still 220 billion lines of COBOL code being used in production today, and that every day, COBOL systems handle $3 trillion in commerce. Back in 2014, the prevalence of COBOL drew some concern from the trade newspaper American Banker.

“The mainframe was supposed to have been replaced by farms of smaller commodity servers and cloud computing by now, but it still endures at many banks,” the trade pub reported.

But should we be concerned that so much of our financial infrastructure runs on such ancient technology? American Banker found 92 of the top 100 banks were still using mainframe computers — and so were 71 percent of the companies in the Fortune 500. As recently as five years ago, the IT group at the Bank of New York Mellon had to tend to 112,500 different COBOL programs — 343 million lines of code, according to a 2012 article in Computerworld. And a quick Google search today shows the Bank of New York Mellon is still hiring COBOL developers.

COBOL was originally developed in the 1950s as a stop-gap by the Department of Defense, but then computer manufacturers began supporting it, “resulting in widespread adoption,” according to Wikipedia. Now the Eisenhower-era programming language — based on design work by Grace Hopper — is everywhere. And because it’s so entrenched, it can be difficult to transition to a new language. Reuters reported in April that when Commonwealth Bank of Australia replaced its core COBOL platform in 2012, it took five years — and cost $749.9 million.

One COBOL programmer described to the Tampa Bay Times his experience with a company transitioning from COBOL to Java. “It’s taken them four years, and they’re still not done.”

There are now some concerns about where the next generation of COBOL programmers will come from. In 2014, American Banker reported banks are “having trouble finding talented young techies who want to work in a bank and a shortage of people with mainframe and COBOL skills.” The CIO at the $38 billion-asset First Niagara Financial Group in Buffalo said they can’t compete with Google and Facebook when it comes to offering young techies a “cool” workplace for their resume.

And then there’s the language itself. “COBOL isn’t as sexy as working with Elixir, or Golang,” argued The Next Web. COBOL historically hasn’t been the most attractive option for a hip young programmer, admitted Stuart McGill, chief technology officer at development tools vendor Micro Focus. Back in 2009, he told Computerworld, “If you’ve been trained on Windows using Visual Studio, the last thing you want to do is go back to the mainframe.”

In a March thread on Hacker News, someone described learning COBOL as “like swallowing a barbed cube-shaped pill,” lamenting the decades-old legacy code “swamped with technical debt…modified, extended, updated, moved to new hardware over and over… Documentation, if any, is hopelessly out of date.”

Another commenter complained that “You will most likely spend the rest of your career doing maintenance work rather than any greenfield development. There is nothing wrong with that but not everybody likes the fact they can’t create something new.”

And a May 2016 study published by Congress’ Government Accountability Office criticized the U.S. Department of Justice and the Treasury Department for their legacy COBOL systems. It found many agencies were using COBOL — including the Department of Homeland Security (which uses COBOL and other languages to track hiring for immigration and customs enforcement agents on a 2008 IBM z10 mainframe). Veterans’ benefits claims were also tracked with a COBOL system, and the Social Security Administration was using COBOL to calculate retirement benefits. (In fact, the SSA had to re-hire some retired employees just to maintain its existing COBOL systems, according to the report.) Even the Department of Justice’s information about the inmate population passes through a hybrid COBOL/Java system.

There have been reports that some institutions are still clinging to elderly COBOL programmers — suggesting they’re having trouble finding qualified replacements. In 2014, Bob Olson, a vice president at Unisys, even told American Banker about a government client with an IT worker “who’s on oxygen. He’s 70 years old, he knows the keys to the kingdom, he knows where everything is, it’s all sitting in his head. They send out a police car to pick him up every morning and bring him into work in a vault-like room.”

Of course, this has also created some opportunities. Bill Hinshaw, a 75-year-old former COBOL programmer, has even founded a company in northern Texas named COBOL Cowboys. (And yes, their client list includes at least five banks.) The company’s slogan? “Not our first rodeo.”

“Some of the software I wrote for banks in the 1970s is still being used,” Hinshaw told Reuters. “After researching many published articles (both positive and negative) on the future life of COBOL, we came away with renewed confidence in its continued life in the coming years,” explained the company’s web page. It cites IBM enhancements which allow COBOL and Java to run together on mainframes.

Reuters reported that Hinshaw divides his time between 32 children and grandchildren “and helping U.S. companies avert crippling computer meltdowns.” When he started programming, instructions were coded into punch cards which were fed into mainframes. But decades later, when he finally reached retirement age, “calls from former clients just kept coming.”

They’re willing to pay almost anything, he told Reuters, and “You better believe they are nice since they have a problem only you can fix.” Some companies even offered him a full-time position.

The company boasts some retirement age coders on its roster, as well as some “youngsters” who are in their 40s and early 50s.

There are strong reactions to a recent article arguing banks should let COBOL die. “The idea that large corporations are simply going to move on from COBOL is out of touch with reality,” one commenter wrote on Hacker News. “It really can’t be overstated how deeply old COBOL programs are embedded into these corporations. I worked for one that had been using them since the language itself was created, and while they all could see the writing on the wall, the money to do the change simply wasn’t there.”

But they also believed it would be possible to find new programmers. “They just need to maintain and occasionally update some ancient program that’s been rock-solid for longer than they’ve been alive.”

Computerworld also reported that there were 75 schools in the U.S. that were still teaching COBOL, “thanks in part to efforts by companies like IBM.” American Banker found they were mostly community colleges and technical schools, though it adds that 68,000 students entered IBM’s “Master the Mainframe” contest between 2012 and 2014. Last month IBM told Reuters that over the last 12 years they’ve trained more than 180,000 developers through fellowships and other training programs — which averages out to 15,000 a year. One IBM fellow insisted that “Just because a language is 50 years old, doesn’t mean that it isn’t good.” So there are at least some channels in place to create new COBOL programmers.

Leon Kappelman, a professor of information systems at the University of North Texas, says he’s been hearing dire predictions about COBOL’s future for the last 30 years. Last year he told CIO magazine that undergrads who take the school’s two classes in mainframe COBOL “tend to earn about $10,000 per year more starting out than those that don’t.” He also believes it’s a secure career because large organizations rarely have a compelling business case for replacing their COBOL code with something new.

“The potential for career advancement could be limited, so you get a lot of job security – but it could get boring.”

Some commenters on Hacker News see the issue pragmatically. “What you have to remember is that when the COBOL code was written, it replaced hundreds, maybe thousands of people doing manual data entry and manipulation, maybe even pen-on-paper,” one commenter wrote in April. “That gives you a fantastic return on investment. After that’s been done, replacing one computer system with a newer one is completely different, a spectacular case of diminishing returns.”

So business remains strong for the COBOL Cowboys. Recent press coverage (including the Reuters article) brought visitors from 125 countries to their website — and over 300 requests to join their group. I contacted CEO Hinshaw to ask him about the language’s future, and Hinshaw says he feels there’s a renewed interest in COBOL that “may help to bring the younger generation of programmers into COBOL if they can overcome the negative press on COBOL and concentrate on a career of backroom business solutions written in COBOL.” He points out that the billions of lines of code obviously represent “60+ years of proven business rules.”

Even if companies transitioned to Java, the problem could recur later. “Will a future generation of young programmers want to transition away from Java to a newer language — and companies will have to once again go through another expensive and time-consuming transition?”

“Only time will tell if COBOL programmers are a dying breed, or a new breed embracing COBOL comes riding onto the scene….”

AlarmShock wristband zaps serial snoozers out of bed

It’s hard to resist the temptation of the snooze button some mornings, but a masochistic new alarm clock may help you kick the habit. When the AlarmShock goes off, you have two minutes to unlock a wrist-worn bracelet and dock it on the device – or face an electric shock.

For the chronic oversleeper, there are no shortage of creative ways to get you moving in the morning. Clocky runs away and hides, the Puzzle Alarm Clock makes you piece together a puzzle before it pipes down, and Ruggie won’t rest until you plant both feet on the floor.

Joining the fray is AlarmShock, which is part alarm clock, part torture device. Users strap the wristband on before they go to bed, locking it into place. When the alarm goes off, sounds and vibrations will wake the wearer, and the countdown begins. The magnetically-locked wristband can only be pried open on the dock, and if you haven’t done so in two minutes, the device punishes you with a short, sharp jolt of electricity.

According to the company, it’s a low enough voltage that it won’t hurt too much, but it should be annoying enough to make you think twice about rolling over and ignoring it – especially since it will zap you up to five times if it’s not deactivated.

The dock unit also has a charging port for the armband, and a microUSB port to charge mobile devices.

The AlarmShock is currently being funded on Kickstarter, and so far, it’s raised almost £8,000 (US$10,370) of its £40,000 (US$52,000) target, with 29 days remaining on the campaign. Early bird pledges are available for £77 (US$100), before the price jumps up to £90 (US$117). If all goes to plan, the AlarmShock will be shipped out in October.

Vendors approve of NIST password draft | CSO Online

A recently released draft of the National Institute of Standards and Technology’s (NIST’s) digital identity guidelines has met with approval from vendors. The draft guidelines revise password security recommendations, altering many of the standards and best practices security professionals use when forming password policies for their companies.

The new framework recommends, among other things:

Remove periodic password change requirements
There have been multiple studies showing that requiring frequent password changes is actually counterproductive to good password security, said Mike Wilson, founder of PasswordPing. NIST said this guideline was suggested because passwords should be changed when a user wants to change them or if there is an indication of a breach.

Drop the algorithmic complexity song and dance
No more arbitrary password complexity requirements demanding mixtures of upper case letters, symbols and numbers. Like frequent password changes, it’s been shown repeatedly that these types of restrictions often result in worse passwords, Wilson adds. NIST said if a user wants a password that is just emojis, they should be allowed. It’s also important to note the storage requirements: salting and hashing (with a keyed hash, or MAC), such that if a password file is obtained by an adversary, an offline attack is very difficult to complete.
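That storage requirement boils down to a unique salt plus a deliberately slow hash. A minimal sketch of the pattern using Python’s standard library (PBKDF2 is used here for illustration; a real deployment would pick whichever approved derivation function its stack supports):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 100_000) -> dict:
    """Store a salted, slow hash instead of the password itself."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return {"salt": salt, "iterations": iterations, "digest": digest}

def verify_password(password: str, record: dict) -> bool:
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), record["salt"], record["iterations"]
    )
    return hmac.compare_digest(candidate, record["digest"])
```

Because each record carries its own salt, an adversary who steals the password file cannot use precomputed tables and must attack each entry separately.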

Require screening of new passwords against lists of commonly used or compromised passwords
One of the best ways to ratchet up the strength of users’ passwords is to screen them against lists of dictionary passwords and known compromised passwords, he said. NIST adds that dictionary words, user names, and repetitive or sequential patterns should all be rejected.
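A screening step along those lines is cheap to implement. A hypothetical sketch in Python (the blocklist here is a tiny stand-in for a real compromised-credential feed):

```python
# Tiny stand-in for a real compromised-password feed (hypothetical list).
COMPROMISED = {"password", "password1", "123456", "qwerty", "letmein"}

def is_acceptable(password: str, username: str) -> bool:
    """Reject compromised, user-derived, repetitive or sequential picks."""
    p = password.lower()
    if p in COMPROMISED:
        return False                          # known-compromised password
    if username.lower() in p:
        return False                          # contains the user name
    if len(set(p)) == 1:
        return False                          # repetitive, e.g. "aaaaaaaa"
    if p.isdigit() and p in "01234567890123456789":
        return False                          # sequential digits, e.g. "345678"
    return True
```

In practice the blocklist would be a large, regularly refreshed set backed by breach-intelligence feeds rather than a handful of literals.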

“All three of these recommendations are things we have been advising for some time now and there are now password strength meters that screen for compromised credentials, not just commonly used passwords,” Wilson said. “While it wasn’t explicitly mentioned in the new NIST framework, we contend that another important security practice is periodically checking your user credentials against a list of known compromised credentials.”

NIST’s Paul Grassi, one of the authors of the report, noted that many of the above guidelines are now only strong suggestions and are not mandatory yet. The public comment period closed on May 1 and now the draft goes through an internal review process. It is expected to be completed by early to mid summer.

“We look forward to a day in the near future when technology, culture, and user preference allows these requirements to be more broadly accepted. That said, we reviewed a lot of research in the space and determined that composition and expiration did little for security, while absolutely harming user experience. And bad user experience is a vulnerability in our minds,” he said. “We need technology to support this (not all password stores do), so we didn’t want to create requirements that agencies had no chance of meeting due to tech limitations.”

Users usually find a way around restrictions like composition rules by substituting special characters for alphas. Because the bad guys already know all of the tricks, this adds very little, if anything, to the true entropy of a password, he said. “Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren’t fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good.”
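Because those substitutions are mechanical, a screener can simply undo them before checking the blocklist, which is exactly why they add so little entropy. A hypothetical sketch (the mapping and stripping rules are illustrative, not taken from the NIST draft):

```python
# Undo well-known substitutions before screening, so "P@ssw0rd1!" is
# recognised as "password". Mapping and stripping rules are illustrative.
LEET = str.maketrans(
    {"@": "a", "0": "o", "1": "i", "!": "i", "$": "s", "5": "s", "3": "e", "4": "a"}
)
COMMON = {"password", "letmein", "monkey"}  # stand-in for a real dictionary

def is_trivial(password: str) -> bool:
    p = password.lower().rstrip("0123456789!?.")  # trailing digits/punctuation
    return p.translate(LEET) in COMMON
```

With normalization in place, “password”, “Password1” and “P@ssw0rd!” all collapse to the same rejected dictionary entry.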

In terms of new requirements for passwords, he said NIST is excited to introduce password storage requirements, which make an offline attack much harder. He said fundamentally the new revision does a better job of recognizing that the password has a valid role to play, if done right. “Yet we provided a slew of new options that gives agencies the ability to leverage the tools that users may already have, like a smartphone, or an authentication app, or a security key. This allows agencies to save money by not having to issue a physical device, but increase their security posture by accepting the strong authenticators users already have.”

Phil Dunkelberger, CEO of Nok Nok Labs, said the username and password paradigm is well past its expiration date. Increasing password complexity requirements and requiring frequent resets adds only marginal security while dramatically decreasing usability.

“Most security professionals will acknowledge that while such policies look good on paper, they put a cognitive load on end users who respond by repeating passwords across sites and other measures to cope that dramatically weaken overall security. We are glad to see national organizations like NIST recommend an update and change to a paradigm that no longer works,” he said.

User reaction

Ran Shulkind, co-founder and chief product officer at SecuredTouch, said the new password guidelines make a lot of sense. “The volume of passwords people had to manage and the ‘special characters’ ended up making things less secure than they should have been. However, passwords are actually becoming much less important than they used to be. Threats are continuing to increase, and users are getting tired of entering usernames, passwords, and additional identifying codes – no matter the structure.”

Multifactor authentication (MFA) is becoming mandated in some industries and is voluntarily being adopted in others. It adds further layers of security: something you know (password), something you have (token or SMS), or something you are (fingerprint or behavior), Shulkind said.

“Ultimately, it’s all about balancing security and the user experience. While MFA does enhance security, it can discourage the user from using the app or performing the transaction. That’s why organizations are looking for more user-friendly components, like behavioral biometrics to reduce friction, allowing for smoother device interactions and higher risk transactions,” he said.

Mike Kail, co-founder and CIO at Cybric, said behavioral biometrics, which analyzes and authenticates based on users’ physical interactions with their devices (finger pressure, typing speed, finger size) will eventually phase out the need for passwords completely.

“I feel that the updates in the new framework are a step in the right, tactical direction, especially the change to password rotation requirements,” he said.

He would like to see more strategic approaches, such as requiring a cloud IdP/SSO provider and monitoring for anomalous activity. He also mentioned providing users with a password management tool.

Barry Shteiman, director of threat research at Exabeam, said this is a very positive change in the NIST standard. “Credential stuffing (using compromised credential DBs and replaying them against authentication mechanisms) has become very common, especially with breach information being sold or sometimes published online.”

Richard Henderson, global security strategist at Absolute, believes this change also makes dictionary and rainbow-table attacks less useful for testing credentials. “Sadly, we’ve lived through many years of more and more confusing and contradictory advice when it comes to creating and using passwords, and that has led to a hodge-podge of implementations and confusion among regular internet users.”

“When you add to this the simple notion that there are still a lot of sites out there with terrible password policies or even worse, still storing passwords in plaintext, are we really surprised that people’s habits lead to widespread password reuse or weak passwords?” Henderson pondered.

He said the most important piece of advice is the continual scanning and intake of known vulnerable and stolen password lists to compare against. “Beyond the idea of potentially minimizing the risk of password reuse and creating weaker passwords, it can alert companies to the potential of a breach of one of their users. If a password like 247KangarooKiwi! shows up on a compromised list somewhere, and that’s a password one of your users uses, it’s an awful large red flag to take a look at their corporate or work endpoint devices and look for evidence of compromise.”

NIST’s recommendation to allow the full ASCII and Unicode keyspaces is also good, as it increases the keyspace attackers must cover in brute-force attempts, he said.

Troy Gill, manager of security research at AppRiver, remembers hearing frequently that passwords were dead. “New authentication technologies have come a long way in the past decade. However, the massive surge in online services, with the majority of those services implementing passwords, is leading to a bit of a password critical mass,” he said.

He noted that these recommendations also are largely in sync with guidelines laid out last year by the UK’s NCSC.

“In a perfect world, it would be a great idea to require passwords to be changed every few months. But as humans we have inherent limitations with our ‘wetware’ that can prevent most of us from doing what we know is most secure. Instead, we substitute something that meets the minimum requirements and can be managed with the most ease,” he said. “Let’s face it, there are a staggering number of unique passwords that people are required to remember today, with most requiring frequent changes that also have to be memorized.”

He said this constant churn inevitably leads to users implementing common, predictable passwords, recording them in unsecured locations, reusing passwords on multiple online accounts, and using only slight variations of prior passwords. He agreed that 30/60/90 day password changes are counterproductive.

He would like to see a more “event driven” approach to when password resets are required, as opposed to a routine schedule. For example, if an organization is at all suspicious of a breach, then requiring password changes across the board would be appropriate. Other events warranting a password change would include a particular user logging in from an unrecognized device or an unexpected location. “Investment in the ability to detect these types of events more easily can build a stronger security posture,” he said.

Gill said it’s true that the attempt to require more algorithmic complexity most often has very predictable results, like the example NIST uses in its guidelines of the password “password” morphing into “password1” and later “password1!”.

“While the last iteration may be technically more complex it is essentially just as weak as the original as it is both commonly used and computationally predictable. I would also like to see the term ‘password’ replaced with ‘passphrase’ as lengthy passphrases can be both easier to remember and more difficult to crack in a brute force attack,” he said.

He said using lists of both common passwords and compromised passwords can be quite simple to implement and can make a marked improvement. Organizations should also focus some efforts on monitoring web locations, where breached passwords are likely to appear, for lists containing any of their users/customers.

Eric Avigdor, director of product management at Gemalto, noted that passwords have always been a weak security tool, and conventional wisdom has been that consumers should create complex passwords that they update frequently.

“The reality is that passwords are weak no matter how often they are changed or how difficult they are, and people usually have only a variant of one or two passwords. Man in the middle or man in the browser hacks can take your password even if it is extremely lengthy and complicated – IT administrators can see your passwords, your bank can see your passwords,” he said.

He said the guidelines recognize that the way to solve the password problem is to accept that passwords are weak and add on other complementary factors of authentication, whether mobile or hardware OTP tokens as well as PKI based USB tokens or smart cards.

Avigdor mentioned more reliance on the usage of PKI tokens with a smart card. This involves entering a PIN, which is never revealed to anyone except the owner of the smart card.

DRAFT NIST Special Publication 800-63B
Digital Identity Guidelines

Nvidia Opens Up The “Black Box” of Its Robocar’s Deep Neural Network – IEEE Spectrum

A deep neural network’s ability to teach itself is a strength, because the machine gets better with experience, and a weakness, because it’s got no code that an engineer can tweak. It’s a black box.

That’s why the creators of Google DeepMind’s AlphaGo couldn’t explain how it played the game of Go. All they could do was watch their brainchild rise from beginner status to defeat one of the best players in the world.

Such opacity’s okay in a game-playing machine but not in a self-driving car. If a robocar makes a mistake, engineers must be able to look under the hood, find the flaw and fix it so that the car never makes the same mistake again. One way to do this is through simulations that first show the AI one feature and then show it another, thus discovering which things affect decision making.

Nvidia, a supplier of an automotive AI chipset, now says it has found a simpler way of instilling transparency. “While the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions,” wrote Danny Shapiro, Nvidia’s head of automotive, in a blog post.

And, because the work is done right inside the layers of processing arrays that make up a neural network, results can be displayed in real time, as a “visualization mask” superimposed on the image coming straight from the car’s forward-looking camera. So far, the results involve how the machine turns the steering wheel to keep the car within its lane.

The method works by taking the analytical output from a high layer in the network — one that has already extracted important features from the image fed in by a camera. It then superimposes that output onto lower layers, averages it, and superimposes it on still lower layers until it gets all the way to the original camera image.

The result is a camera image on which the AI’s opinion of what’s significant is highlighted. And, in fact, those parts turn out to be just what a human driver would consider significant—lane markings, road edges, parked vehicles, hedges alongside the route, and so forth. But, just to make sure that these features really were key to decision making, the researchers classified all the pixels into two classes—Class 1 contains “salient” features that clearly have to do with driving decisions, and Class 2, which contains non-salient features, typically in the background. The researchers manipulated the two classes digitally and found that only salient features mattered.
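Conceptually, the procedure averages each layer’s feature maps, upscales the coarse result, and multiplies it into the next layer down until it reaches image resolution. A NumPy sketch of that idea (the shapes and nearest-neighbour upscaling are assumptions, not Nvidia’s exact implementation):

```python
import numpy as np

def visualization_mask(layer_activations):
    """Propagate saliency from the deepest layer back to image resolution.

    layer_activations: feature-map stacks ordered deepest-first, each of
    shape (channels, height, width), spatial size growing toward the input.
    """
    mask = layer_activations[0].mean(axis=0)      # average the top layer's maps
    for act in layer_activations[1:]:
        avg = act.mean(axis=0)                    # average this layer's maps
        ry = avg.shape[0] // mask.shape[0]        # nearest-neighbour upscale
        rx = avg.shape[1] // mask.shape[1]
        mask = np.kron(mask, np.ones((ry, rx))) * avg  # superimpose on lower layer
    return mask / (mask.max() + 1e-8)             # normalise for display
```

High values in the returned mask mark the input regions the deeper layers responded to most, which is what gets overlaid on the camera feed.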

“Shifting the salient objects results in a linear change in steering angle that is nearly as large as that which occurs when we shift the entire image,” the researchers write in a white paper. “Shifting just the background pixels has a much smaller effect on the steering angle.”

True, engineers can’t reach into the system to fix a “bug,” because deep neural nets, lacking code, can’t properly be said to have a bug. What they have are features. And now we can visualize them, at least up to a point.

MP3 is dead, long live AAC

MP3, the format that has revolutionized the way we consume (and steal) music since the ’90s, has been officially retired — in a manner of speaking. The German research institution that created the format, The Fraunhofer Institute for Integrated Circuits, announced that it had terminated licensing for certain MP3-related patents…in other words, they didn’t want to keep it on life support, because there are better ways to store music in the year 2017. Rest now forever, MP3.

In its place, the director of the Fraunhofer Institute told NPR, the Advanced Audio Coding (AAC) format has become the “de facto standard for music download and videos on mobile phones.” It’s simply more efficient and has greater functionality, as streaming TV and radio broadcasting use the format to deliver higher-quality audio at lower bitrates than MP3.

Basic research in audio encoding began at the Friedrich-Alexander University of Erlangen-Nuremberg in the late 1980s. Researchers there and from the Fraunhofer Institute joined forces, and their result was the humble MP3 standard. The format takes up 10 percent of the storage space of the original file, a monumental reduction at the time. According to Stephen Witt’s book How Music Got Free, corporate sabotage and other failures almost stonewalled the MP3 into irrelevancy. Finally, Fraunhofer just started giving away software consumers could use to rip songs from compact discs to MP3 files on their home computer, after which the format took off.
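That roughly-one-tenth figure falls out of simple bitrate arithmetic, assuming a typical 128 kbps MP3 against uncompressed CD audio:

```python
# Uncompressed CD audio: 44,100 samples/s x 16 bits x 2 channels.
cd_bitrate = 44_100 * 16 * 2           # 1,411,200 bits per second
mp3_bitrate = 128_000                  # a typical MP3 bitrate of the era
ratio = mp3_bitrate / cd_bitrate       # ~0.09, i.e. roughly a tenth

# Size of a four-minute track, in megabytes:
cd_mb = cd_bitrate * 240 / 8 / 1e6     # ~42.3 MB
mp3_mb = mp3_bitrate * 240 / 8 / 1e6   # ~3.8 MB
```

A four-minute track shrinking from about 42 MB to under 4 MB is what made moving music over dial-up-era connections feasible at all.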

By the end of the 90s, however, those tiny files were zipping around the nascent internet, spawning a gold rush of digital piracy. It ruled illegal sharing for years as sites like Napster and Kazaa hosted popular peer-to-peer services allowing folks to download songs with a click. Of course, the format also enabled development on the legal side of the aisle as online vendors scrambled to lawfully meet the connected public’s need for digitally-acquired music.

Apple’s iTunes store dominated that market, which funneled music into its answer to the MP3 player market, the iPod. Apple gave users the option of using AAC almost from the start, and that format has proven to be the eventual successor. But MP3 deserves its place in history for enabling casual users to experience for the first time the internet’s true (if dubiously legal) potential for exchanging data.

NYU Accidentally Exposed Military Code-breaking Computer Project to Entire Internet

In early December 2016, Adam was doing what he’s always doing, somewhere between hobby and profession: looking for things that are on the internet that shouldn’t be. That week, he came across a server inside New York University’s famed Institute for Mathematics and Advanced Supercomputing, headed by the brilliant Chudnovsky brothers, David and Gregory. The server appeared to be an internet-connected backup drive. But instead of being filled with family photos and spreadsheets, this drive held confidential information on an advanced code-breaking machine that had never before been described in public. Dozens of documents spanning hundreds of pages detailed the project, a joint supercomputing initiative administered by NYU, the Department of Defense, and IBM. And they were available for the entire world to download.

The supercomputer described in the trove, “WindsorGreen,” was a system designed to excel at the sort of complex mathematics that underlies encryption, the technology that keeps data private, and almost certainly intended for use by the Defense Department’s signals intelligence wing, the National Security Agency. WindsorGreen was the successor to another password-cracking machine used by the NSA, “WindsorBlue,” which was also documented in the material leaked from NYU and which had been previously described in the Norwegian press thanks to a document provided by National Security Agency whistleblower Edward Snowden. Both systems were intended for use by the Pentagon and a select few other Western governments, including Canada and Norway.

Adam, an American digital security researcher, requested that his real name not be published out of fear of losing his day job. Although he deals constantly with digital carelessness, Adam was nonetheless stunned by what NYU had made available to the world. “The fact that this software, these spec sheets, and all the manuals to go with it were sitting out in the open for anyone to copy is just simply mind blowing,” he said.

He described to The Intercept how easy it would have been for someone to obtain the material, which was marked with warnings like “DISTRIBUTION LIMITED TO U.S. GOVERNMENT AGENCIES ONLY,” “REQUESTS FOR THIS DOCUMENT MUST BE REFERRED TO AND APPROVED BY THE DOD,” and “IBM Confidential.” At the time of his discovery, Adam wrote to me in an email:

All of this leaky data is courtesy of what I can only assume are misconfigurations in the IMAS (Institute for Mathematics and Advanced Supercomputing) department at NYU. Not even a single username or password separates these files from the public internet right now. It’s absolute insanity.

The files were taken down after Adam notified NYU.

Intelligence agencies like the NSA hide code-breaking advances like WindsorGreen because their disclosure might accelerate what has become a cryptographic arms race. Encrypting information on a computer used to be a dark art shared between militaries and mathematicians. But advances in cryptography, and rapidly swelling interest in privacy in the wake of Snowden, have helped make encryption tech an effortless, everyday commodity for consumers. Web connections are increasingly shielded using the HTTPS protocol, end-to-end encryption has come to popular chat platforms like WhatsApp, and secure phone calls can now be enabled simply by downloading some software to your device. The average person viewing their checking account online or chatting on iMessage might not realize the mathematical complexity that’s gone into making eavesdropping impractical.

The spread of encryption is a good thing — unless you’re the one trying to eavesdrop. Spy shops like the NSA can sometimes thwart encryption by going around it, finding flaws in the way programmers build their apps or taking advantage of improperly configured devices. When that fails, they may try to deduce encryption keys through extraordinarily complex math or repeated guessing. This is where specialized systems like WindsorGreen can give the NSA an edge, particularly when the agency’s targets aren’t aware of just how much code-breaking computing power they’re up against.

Adam declined to comment on the specifics of any conversations he might have had with the Department of Defense or IBM. He added that NYU, at the very least, expressed its gratitude to him for notifying it of the leak by mailing him a poster.

While he was trying to figure out who exactly the Windsor files belonged to and just how they’d wound up on a completely naked folder on the internet, Adam called David Chudnovsky, the world-renowned mathematician and IMAS co-director at NYU. Reaching Chudnovsky was a cinch, because his entire email outbox, including correspondence with active members of the U.S. military, was for some reason stored on the NYU drive and made publicly available alongside the Windsor documents. According to Adam, Chudnovsky confirmed his knowledge of and the university’s involvement in the supercomputing project; The Intercept was unable to reach Chudnovsky directly to confirm this. The school’s association is also strongly indicated by the fact that David’s brother Gregory, himself an eminent mathematician and professor at NYU, is listed as an author of a 164-page document from the cache describing the capabilities of WindsorGreen in great detail. Although the brothers clearly have ties to WindsorGreen, there is no indication they were responsible for the leak. Indeed, the identity of the person or persons responsible for putting a box filled with military secrets on the public internet remains utterly unclear.

An NYU spokesperson would not comment on the university’s relationship with the Department of Defense, IBM, or the Windsor programs in general. When The Intercept initially asked about WindsorGreen the spokesperson seemed unfamiliar with the project, saying they were “unable to find anything that meets your description.” This same spokesperson later added that “no NYU or NYU Tandon system was breached,” referring to the Tandon School of Engineering, which houses the IMAS. This statement is something of a non sequitur, since, according to Adam, the files leaked simply by being exposed to the open internet — none of the material was protected by a username, password, or firewall of any kind, so no “breach” would have been necessary. You can’t kick down a wide open door.

The documents, replete with intricate processor diagrams, lengthy mathematical proofs, and other exhaustive technical schematics, are dated from 2005 to 2012, when WindsorGreen appears to have been in development. Some documents are clearly marked as drafts, with notes that they were to be reviewed again in 2013. Project progress estimates suggest the computer wouldn’t have been ready for use until 2014 at the earliest. All of the documents appear to be proprietary to IBM and not classified by any government agency, although some are stamped with the aforementioned warnings restricting distribution to within the U.S. government. According to one WindsorGreen document, work on the project was restricted to American citizens, with some positions requiring a top-secret security clearance — which as Adam explains, makes the NYU hard drive an even greater blunder:

Let’s, just for hypotheticals, say that China found the same exposed NYU lab server that I did and downloaded all the stuff I downloaded. That simple act alone, to a large degree, negates a humongous competitive advantage we thought the U.S. had over other countries when it comes to supercomputing.

The only tool Adam used to find the NYU trove was Shodan, a website that’s roughly equivalent to Google for internet-connected, and typically unsecured, computers and appliances around the world, famous for turning up everything from baby monitors to farming equipment. Shodan has plenty of constructive technical uses but also serves as a constant reminder that we really ought to stop plugging things into the internet that have no business being there.

The WindsorGreen documents are mostly inscrutable to anyone without a Ph.D. in a related field, but they make clear that the computer is the successor to WindsorBlue: the next generation of specialized IBM hardware built to excel at cracking encryption, whose known customers are the U.S. government and its partners.

Experts who reviewed the IBM documents said WindsorGreen possesses substantially greater computing power than WindsorBlue, making it particularly adept at compromising encryption and passwords. In an overview of WindsorGreen, the computer is described as a “redesign” centered around an improved version of its processor, known as an “application-specific integrated circuit,” or ASIC: a type of chip built to do one task, like mining bitcoin, extremely well, as opposed to being relatively good at the wide range of tasks that, say, a typical MacBook handles. One of the upgrades was a move to smaller transistors, allowing more circuitry to be crammed into the same area, a change quantified by the reduction in nanometers (nm) between certain chip features. The overview states:

The WindsorGreen ASIC is a second-generation redesign of the WindsorBlue ASIC that moves from 90 nm to 32 nm ASIC technology and incorporates performance enhancements based on our experience with WindsorBlue. We expect to achieve at least twice the performance of the WindsorBlue ASIC with half the area, reduced cost, and an objective of half the power. We also expect our system development cost to be only a small fraction of the WindsorBlue development cost because we carry forward intact much of the WindsorBlue infrastructure.
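
The stated targets are easy to sanity-check with rough scaling arithmetic. This sketch uses idealized geometric scaling only; real ASIC gains depend on many design factors the documents don't spell out:

```python
# Rough sanity check of the WindsorGreen vs. WindsorBlue claims.
# Idealized geometric scaling only; actual ASIC gains depend on
# process, design, and power constraints not captured here.

old_nm, new_nm = 90, 32

# Shrinking the feature size scales area by the square of the
# linear ratio, so a 90 nm -> 32 nm move leaves lots of headroom:
area_scale = (new_nm / old_nm) ** 2
print(f"Ideal area per transistor: {area_scale:.2f}x")

# IBM's stated targets: 2x the performance in half the area,
# i.e. a 4x improvement in performance per unit of silicon.
perf, area = 2.0, 0.5
print(f"Performance per unit area: {perf / area:.0f}x")
```

An ideal shrink would cut per-transistor area to about 13 percent of the original, so IBM's "twice the performance with half the area" goal is conservative relative to the raw process jump.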

Çetin Kaya Koç is the director of the Koç Lab at the University of California, Santa Barbara, which conducts cryptographic research. Koç reviewed the Windsor documents and told The Intercept that he has “not seen anything like [WindsorGreen],” and that “it is beyond what is commercially or academically available.” He added that outside of computational biology applications like complex gene sequencing (which it’s probably safe to say the NSA is not involved in), the only other purpose for such a machine would be code-breaking: “Probably no other problem deserves this much attention to design an expensive computer like this.”

Andrew “Bunnie” Huang, a hacker and computer hardware researcher who reviewed the documents at The Intercept’s request, said that WindsorGreen would surpass many of the most powerful code-breaking systems in the world: “My guess is this thing, compared to the TOP500 supercomputers at the time (and probably even today) pretty much wipes the floor with them for anything crypto-related.” Conducting a “cursory inspection of power and performance metrics,” according to Huang, puts WindsorGreen “heads and shoulders above any publicly disclosed capability” on the TOP500, a global ranking of supercomputers. Like all computers that use specialized processors, or ASICs, WindsorGreen appears to be a niche computer that excels at one kind of task but performs miserably at anything else. Still, when it comes to crypto-breaking, Huang believes WindsorGreen would be “many orders of magnitude … ahead of the fastest machines I previously knew of.”

But even with expert analysis, no one beyond those who built the thing can be entirely certain of how exactly an agency like the NSA might use WindsorGreen. To get a better sense of why a spy agency would do business with IBM, and how WindsorGreen might evolve into WindsorOrange (or whatever the next generation may be called), it helps to look at documents provided by Snowden that show how WindsorBlue was viewed in the intelligence community. Internal memos from Government Communications Headquarters, the NSA’s British counterpart, show that the agency was interested in purchasing WindsorBlue as part of its High Performance Computing initiative, which sought to help with a major problem: People around the world were getting too good at keeping unwanted eyes out of their data.

Under the header “what is it, and why,” one 2012 HPC document explains, “Over the past 18 months, the Password Recovery Service has seen rapidly increasing volumes of encrypted traffic … the use of much greater range of encryption techniques by our targets, and improved sophistication of both the techniques themselves and the passwords targets are using (due to improved OPSec awareness).” Accordingly, GCHQ had begun to “investigate the acquisition of WINDSORBLUE … and, subject to project board approval, the procurement of the infrastructure required to host the a [sic] WINDSORBLUE system at Benhall,” where the organization is headquartered.

Among the Windsor documents on the NYU hard drive was an illustration of an IBM computer codenamed “Cyclops” (above), which appears to be a WindsorBlue/WindsorGreen predecessor. A GCHQ document provided by Snowden (below) describes Cyclops as an “NSA/IBM joint development.”

In April 2014, Norway’s Dagbladet newspaper reported that the Norwegian Intelligence Service had purchased a cryptographic computer system code-named STEELWINTER, based on WindsorBlue, as part of a $100 million overhaul of the agency’s intelligence-processing capabilities. The report was based on a document provided by Snowden:

The document does not say when the computer will be delivered, but in addition to the actual purchase, NIS has entered into a partnership with NSA to develop software for decryption. Some of the most interesting data NIS collects are encrypted, and the extensive processes for decryption require huge amounts of computing power.

Widespread modern encryption methods like RSA, named for the initials of the cryptographers who developed it, rely on hugely complex numbers derived from multiplying prime numbers together. Speaking very roughly, so long as those original prime numbers remain secret, the integrity of the encoded data will remain safe. But were someone able to factor the hugely complex number — a process akin to the sort of math exercise children are taught to do on a chalkboard, but on a massive scale — they would be able to decode the data on their own. Luckily for those using encryption, the numbers in question are so long that they can only be factored down to their prime numbers with an extremely large amount of computing power. Unluckily for those using encryption, government agencies in the U.S., Norway, and around the globe are keenly interested in computers designed to excel at exactly this purpose.
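
A toy illustration of the idea, with deliberately tiny numbers: real RSA moduli are hundreds of digits long, which is exactly what puts factoring them out of reach of ordinary hardware and makes machines like WindsorGreen interesting.

```python
# Toy illustration of why factoring breaks RSA. The primes here are
# tiny; real keys use primes hundreds of digits long, putting this
# trial-division approach hopelessly out of reach.

def factor(n):
    """Trial division: instant for toy n, hopeless for real RSA."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

p, q = 61, 53            # the "secret" primes
n = p * q                # the public modulus: 3233
e = 17                   # the public exponent

# An attacker who factors n can rebuild the private key:
p2, q2 = factor(n)
phi = (p2 - 1) * (q2 - 1)
d = pow(e, -1, phi)      # private exponent via modular inverse

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key
assert pow(cipher, d, n) == msg  # decrypt with the recovered key
print("recovered plaintext:", pow(cipher, d, n))
```

The whole attack here is the `factor` call; everything after it is routine arithmetic. Scale `n` up to 2048 bits and that one call becomes the hardest part, which is what purpose-built hardware targets.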

Given the billions of signals intelligence records collected by Western intelligence agencies every day, enormous computing power is required to sift through this data and crack what can be broken so that it can be further analyzed, whether through the factoring method mentioned above or via what’s known as a “brute force” attack, wherein a computer essentially guesses possible keys at a tremendous rate until one works. The NIS commented only to Dagbladet that the agency “handles large amounts of data and needs a relatively high computing power.” Details about how exactly such “high computing power” is achieved are typically held very close — finding hundreds of pages of documentation on a U.S. military code-breaking box, completely unguarded, is virtually unheard of.
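
The brute-force approach mentioned above is conceptually trivial; what hardware like WindsorGreen supplies is sheer speed. A minimal sketch with a toy lowercase keyspace (real attacks run billions of guesses per second against captured password hashes):

```python
# Minimal brute-force sketch: try every candidate password until one
# reproduces a known hash. Toy-sized keyspace for illustration only;
# specialized hardware makes the same loop run billions of times faster.
import hashlib
from itertools import product

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def crack(target_hash, length):
    """Exhaustively guess lowercase passwords of the given length."""
    for guess in product(ALPHABET, repeat=length):
        candidate = "".join(guess)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

secret = "code"
target = hashlib.sha256(secret.encode()).hexdigest()
print(crack(target, 4))   # at most 26**4 = 456,976 guesses
```

Four lowercase letters fall in under half a million guesses; every added character multiplies the work by the alphabet size, which is why both key length and guessing speed matter so much.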

A very important question remains: What exactly could WindsorBlue, and then WindsorGreen, crack? Are modern privacy mainstays like PGP, used to encrypt email, or the ciphers behind encrypted chat apps like Signal under threat? The experts who spoke to The Intercept don’t think there’s any reason to assume the worst.

“As long as you use long keys and recent-generation hashes, you should be OK,” said Huang. “Even if [WindsorGreen] gave a 100x advantage in cracking strength, it’s a pittance compared to the additional strength conferred by going from say, 1024-bit RSA to 4096-bit RSA or going from SHA-1 to SHA-256.”

Translation: Older encryption methods based on shorter strings of numbers, which are easier to factor, would be more vulnerable, but anyone using the strongest contemporary encryption software (which uses much longer numbers) should still be safe and confident in their privacy.
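
Huang's point can be made concrete with rough security-level estimates. The figures below are NIST-style approximations of symmetric-equivalent strength (RSA-1024 ≈ 80 bits of work, RSA-4096 ≈ 150 bits), not exact values:

```python
# Rough illustration of Huang's point: a 100x cracking speedup is
# tiny next to the gap between key sizes. Security levels below are
# NIST-style approximations (symmetric-equivalent bits of work).
import math

rsa_1024_bits = 80     # ~2^80 operations to break RSA-1024
rsa_4096_bits = 150    # ~2^150 operations to break RSA-4096 (approx.)

speedup = 100          # hypothetical WindsorGreen advantage
speedup_bits = math.log2(speedup)           # ~6.6 bits erased
gap_bits = rsa_4096_bits - rsa_1024_bits    # 70 bits gained

print(f"100x speedup erases ~{speedup_bits:.1f} bits of security")
print(f"moving from 1024- to 4096-bit RSA adds ~{gap_bits} bits")
assert 2 ** gap_bits > speedup   # the key upgrade dwarfs the speedup
```

In other words, a hundredfold hardware advantage buys fewer than 7 "bits" of cracking power, while the key-size upgrade adds roughly 70, which is the "pittance" Huang describes.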

Still, “there are certainly classes of algorithms that got, wildly guessing, about 100x weaker from a brute force standpoint,” according to Huang, so “this computer’s greatest operational benefit would have come from a combination of algorithmic weakness and brute force. For example, SHA-1, which today is well-known to be too weak, but around the time of 2013 when this computer might have come online, it would have been pretty valuable to be able to ‘routinely’ collide SHA-1 as SHA-1 was still very popular and widely used.”

A third expert in computer architecture and security, who requested anonymity due to the sensitivity of the documents and a concern for their future livelihood, told The Intercept that “most likely, the system is intended for brute-forcing password-protected data,” and that it “might also have applications for things like … breaking older/weaker (1024 bit) RSA keys.” Although there’s no explicit reference to a particular agency in the documents, this expert added, “I’m assuming NSA judging by the obvious use of the system.”

Huang and Koç both speculated that aside from breaking encryption, WindsorGreen could be used to fake the cryptographic signature used to mark software updates as authentic, so that a targeted computer could be tricked into believing a malicious software update was the real thing. For the NSA, getting a target to install software they shouldn’t be installing is about as great as intelligence-gathering gifts come.

The true silver bullet against encryption, a technology that doesn’t just threaten weaker forms of data protection but all available forms, will not be a computer like WindsorGreen, but something that doesn’t exist yet: a quantum computer. In 2014, the Washington Post reported on a Snowden document that revealed the NSA’s ongoing efforts to build a “quantum” computer processor that’s not confined to just ones and zeroes but can exist in multiple states at once, allowing for computing power incomparable to anything that exists today. Luckily for the privacy-conscious, the world is still far from seeing a functional quantum computer. Luckily for the NSA and its partners, IBM is working hard on one right now.

Repeated requests for comment sent to over a dozen members of the IBM media relations team were not returned, nor was a request for comment sent to a Department of Defense spokesperson. The NSA declined to comment. GCHQ declined to comment beyond its standard response that all its work “is carried out in accordance with a strict legal and policy framework, which ensures that our activities are authorised, necessary and proportionate, and that there is rigorous oversight.”

Nasa runs competition to help make old Fortran code faster – BBC News

Nasa is running a competition that will share $55,000 (£42,000) between the top two people who can make its FUN3D software run up to 10,000 times faster.

The FUN3D code is used to model how air flows around simulated aircraft in a supercomputer.

The software was developed in the 1980s and is written in an older computer programming language called Fortran.

“This is the ultimate ‘geek’ dream assignment,” said Doug Rohn, head of Nasa’s transformative aeronautics concepts program that makes heavy use of the FUN3D code.

In a statement, Mr Rohn said the software is used on the agency’s Pleiades supercomputer to test early designs of futuristic aircraft.

The software suite tests them using computational fluid dynamics, which make heavy use of complicated mathematical formulae and data structures to see how well the designs work.

Once designs are proved on the supercomputer, scale models are tested in wind tunnels and then finally experimental craft undergo real world testing.

Significant improvements could be gained just by simplifying a heavily used sub-routine so it runs a few milliseconds faster, said Nasa on the webpage describing the competition. If the routine is called millions of times during a simulation, this could “significantly” trim testing times, it added.
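
The arithmetic behind that claim is simple: per-call savings multiply by call count. A sketch with made-up numbers (the call counts and timings here are illustrative, not Nasa's):

```python
# Why shaving milliseconds off a hot subroutine matters: tiny
# per-call savings multiply by the call count. Numbers below are
# hypothetical, chosen only to illustrate the scale.

calls_per_run = 5_000_000      # hypothetical calls per simulation
saved_ms_per_call = 2          # hypothetical per-call saving

saved_hours = calls_per_run * saved_ms_per_call / 1000 / 3600
print(f"~{saved_hours:.1f} hours saved per simulation run")
```

Two milliseconds saved on five million calls works out to nearly three hours per run, which is why profiling hot paths is the obvious first move for competitors.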

Nasa said it would provide copies of the code to anyone taking part so they can analyse it, find bottlenecks and suggest modifications that could speed it up. Nasa is looking for the code to run at least 10 times faster but would like it quickened by thousands of times, if possible.

Any changes to FUN3D must not make it less accurate, said Nasa.

The sensitive nature of the code means the competition is only open to US citizens who are over 18.

Verizon to Sell Cloud and Managed Hosting Business to IBM | Data Center Knowledge

Verizon Communications has agreed to sell its cloud and managed hosting business to IBM, announcing the deal the same week it completed the $3.6 billion sale of a massive data center portfolio to Equinix.

The telco began pulling back from being a cloud service provider last year, when it shut down its public cloud but held on to its virtual private cloud business. Verizon and other telcos (such as CenturyLink and AT&T) have been divesting costly infrastructure assets that support their enterprise IT services, switching to less capital-intensive models for some services and pulling out completely from others, namely public cloud.

Besides its bread-and-butter network services, Verizon’s enterprise division will now focus on managed services around partner offerings. “Our goal is to become one of the world’s leading managed services providers enabled by an ecosystem of best-in-class technology solutions from Verizon and a network of other leading providers,” George Fischer, senior VP and group president of Verizon Enterprise Solutions, said in a statement.

He also mentioned an agreement with IBM “on a number of strategic initiatives involving networking and cloud services” but did not elaborate. Terms of the transaction were not disclosed.

The company’s enterprise solutions revenue declined 4.3 percent in the first quarter. Company-wide first-quarter revenue was $29.8 billion, down 7.3 percent year over year.

Verizon saw a steep drop in wireless subscribers in the first quarter, losing more than 300,000 customers. According to analysts, its biggest business would have lost even more customers had it not introduced unlimited data plans halfway through the quarter.

Verizon is not the first major telco to sell IBM its managed hosting business. In 2015, IBM took over the managed hosting business of AT&T, acquiring equipment and control of the data centers that supported it.

While its public cloud business lags far behind Amazon Web Services and Microsoft Azure, IBM is currently leading in hosted private cloud.

Artificial intelligence prevails at predicting Supreme Court decisions | Science | AAAS

“See you in the Supreme Court!” President Donald Trump tweeted last week, responding to lower court holds on his national security policies. But is taking cases all the way to the highest court in the land a good idea? Artificial intelligence may soon have the answer. A new study shows that computers can do a better job than legal scholars at predicting Supreme Court decisions, even with less information.

Several other studies have guessed at justices’ behavior with algorithms. A 2011 project, for example, used the votes of any eight justices from 1953 to 2004 to predict the vote of the ninth in those same cases, with 83% accuracy. A 2004 paper tried seeing into the future by using decisions from the nine justices who’d been on the court since 1994 to predict the outcomes of cases in the 2002 term. That method had an accuracy of 75%.

The new study draws on a much richer set of data to predict the behavior of any set of justices at any time. Researchers used the Supreme Court Database, which contains information on cases dating back to 1791, to build a general algorithm for predicting any justice’s vote at any time. They drew on 16 features of each vote, including the justice, the term, the issue, and the court of origin. Researchers also added other factors, such as whether oral arguments were heard.

For each year from 1816 to 2015, the team created a machine-learning statistical model called a random forest. It looked at all prior years and found associations between case features and decision outcomes. Decision outcomes included whether the court reversed a lower court’s decision and how each justice voted. The model then looked at the features of each case for that year and predicted decision outcomes. Finally, the algorithm was fed information about the outcomes, which allowed it to update its strategy and move on to the next year.
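
The year-by-year scheme described above is a standard "walk-forward" evaluation: train only on the past, predict the next year, then fold the revealed outcomes back into the training pool. A stripped-down sketch of that protocol, in which a trivial majority-vote predictor stands in for the study's actual random forest and the case data is invented:

```python
# Walk-forward evaluation sketch: for each year, fit on all prior
# years, predict that year's outcomes, then add the revealed truth
# to the training pool. A majority-class predictor stands in for the
# paper's random forest; the toy data below is invented.
from collections import Counter

# (year, outcome) pairs; True = "reverse", False = "affirm"
cases = [
    (2001, True), (2001, False), (2002, True), (2002, True),
    (2003, False), (2003, True), (2004, True), (2004, False),
]

history, correct, total = [], 0, 0
for year in sorted({y for y, _ in cases}):
    this_year = [o for y, o in cases if y == year]
    if history:                                # need some past to train on
        prediction = Counter(history).most_common(1)[0][0]
        correct += sum(o == prediction for o in this_year)
        total += len(this_year)
    history.extend(this_year)                  # reveal outcomes, move on

print(f"walk-forward accuracy: {correct}/{total}")
```

The point of the protocol is that no future information ever leaks into a prediction, which is what lets the study's 70.2% figure be compared fairly against the "always guess reverse" baseline.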

From 1816 until 2015, the algorithm correctly predicted 70.2% of the court’s 28,000 decisions and 71.9% of the justices’ 240,000 votes, the authors report in PLOS ONE. That bests the popular betting strategy of “always guess reverse,” since the court has reversed the lower court in 63% of Supreme Court cases over the last 35 terms. It’s also better than another strategy that uses rulings from the previous 10 years to automatically go with a “reverse” or an “affirm” prediction. Even knowledgeable legal experts are only about 66% accurate at predicting cases, the 2004 study found. “Every time we’ve kept score, it hasn’t been a terribly pretty picture for humans,” says the study’s lead author, Daniel Katz, a law professor at Illinois Institute of Technology in Chicago.

Roger Guimerà, a physicist at Rovira i Virgili University in Tarragona, Spain, and lead author of the 2011 study, says the new algorithm “is rigorous and well done.” Andrew Martin, a political scientist at the University of Michigan in Ann Arbor and an author of the 2004 study, commends the new team for producing an algorithm that works well over 2 centuries. “They’re curating really large data sets and using state-of-the-art methods,” he says. “That’s scientifically really important.”

Outside the lab, bankers and lawyers might put the new algorithm to practical use. Investors could bet on companies that might benefit from a likely ruling. And appellants could decide whether to take a case to the Supreme Court based on their chances of winning. “The lawyers who typically argue these cases are not exactly bargain basement priced,” Katz says.

Attorneys might also plug different variables into the model to forge their best path to a Supreme Court victory, including which lower court circuits are likely to rule in their favor, or the best type of plaintiff for a case. Michael Bommarito, a researcher at Chicago-Kent College of Law and study co-author, offers a real example in National Federation of Independent Business v. Sebelius, in which the Affordable Care Act was on the line: “One of the things that made that really interesting was: Was it about free speech, was it about taxation, was it about some kind of health rights issues?” The algorithm might have helped the plaintiffs decide which issue to highlight.

Future extensions of the algorithm could include the full text of oral arguments or even expert predictions. According to Katz: “We believe the blend of experts, crowds, and algorithms is the secret sauce for the whole thing.”