Intel ends its dreams of replacing the x86 chip in your PC

When Intel launched its first Itanium processor in 2001, it had very high hopes: the 64-bit chip was supposed to do nothing less than kill off the x86 architecture that had dominated PCs for over two decades. Things didn’t quite pan out that way, however, and Intel is officially calling it quits. The company tells PCWorld that its just-shipping Itanium 9700-series processors will be the last models in the family. HPE, the enterprise company resulting from the split of Itanium co-creator HP, will be the last major customer — its extra-reliable Integrity i6 servers are getting the upgraded hardware, but you won’t hear much from anyone else.

The news marks the quiet end to a tumultuous saga. Itanium was supposed to represent a clean break from x86 that put Intel firmly into the 64-bit era. It was first intended for high-end servers and workstations, but it was eventually supposed to find its way into home PCs. Needless to say, that’s not how it worked out. Early Itanium chips were power hogs, and AMD threw a monkey wrench into Intel’s plans by launching 64-bit x86 processors ahead of Intel. Why buy Itanium when you can get many of the benefits of 64-bit technology without tossing your existing software? Intel responded with 64-bit x86 chips of its own, and those quickly took over as Itanium remained the niche option.

That shift effectively killed any hopes of the broad support Itanium needed to survive. Microsoft dropped support in Windows after 2010, and HP went so far as to sue Oracle for ditching software development in 2011. Not that Intel necessarily minded by that point. It poured most of its energy into many-core Xeon processors that were often up to the job. And it’ll be a while before Itanium disappears forever. HPE says that it’ll offer Linux-based “containers” that let you run Itanium software on x86 servers, so it’ll be relatively easy for companies to jump ship at their own pace.

The cancellation also shows just how much Intel has changed in the past 16 years. Where the chip giant was once obsessed with ruling the high-performance computing world, it’s now trying to move beyond the PC. Why pour resources into exotic server CPUs when the future revolves more around drones, wearables and the Internet of Things? Although server chips aren’t about to disappear any time soon, Intel clearly has better ways to spend its money.

https://www.engadget.com/2017/05/13/intel-ships-last-itanium-chips/

COBOL Is Everywhere. Who Will Maintain It? – The New Stack

Think COBOL is dead? About 95 percent of ATM swipes use COBOL code, Reuters reported in April, and the 58-year-old language even powers 80 percent of in-person transactions. In fact, Reuters calculates that there are still 220 billion lines of COBOL code in production today, and that every day, COBOL systems handle $3 trillion in commerce. Back in 2014, the prevalence of COBOL drew some concern from the trade newspaper American Banker.

“The mainframe was supposed to have been replaced by farms of smaller commodity servers and cloud computing by now, but it still endures at many banks,” the trade pub reported.

But should we be concerned that so much of our financial infrastructure runs on such ancient technology? American Banker found 92 of the top 100 banks were still using mainframe computers — and so were 71 percent of the companies in the Fortune 500. As recently as five years ago, the IT group at the Bank of New York Mellon had to tend to 112,500 different COBOL programs — 343 million lines of code, according to a 2012 article in Computerworld. And a quick Google search today shows the Bank of New York Mellon is still hiring COBOL developers.

COBOL was originally developed in the 1950s as a stop-gap by the Department of Defense, but then computer manufacturers began supporting it, “resulting in widespread adoption,” according to Wikipedia. Now the Eisenhower-era programming language — based on design work by Grace Hopper — is everywhere. And because it’s so entrenched it can be difficult to transition to a new language. Reuters reported in April that when Commonwealth Bank of Australia replaced its core COBOL platform in 2012, it took five years — and cost $749.9 million.

One COBOL programmer told the Tampa Bay Times about his experience with a company transitioning from COBOL to Java. “It’s taken them four years, and they’re still not done.”

There are now some concerns about where the next generation of COBOL programmers will come from. In 2014, American Banker reported banks are “having trouble finding talented young techies who want to work in a bank and a shortage of people with mainframe and COBOL skills.” The CIO at the $38 billion-asset First Niagara Financial Group in Buffalo said they can’t compete with Google and Facebook when it comes to offering young techies a “cool” workplace for their resume.

And then there’s the language itself. “COBOL isn’t as sexy as working with Elixir, or Golang,” argued The Next Web. COBOL historically hasn’t been the most attractive option for a hip young programmer, admitted Stuart McGill, chief technology officer at development tools vendor Micro Focus. Back in 2009, he told Computerworld, “If you’ve been trained on Windows using Visual Studio the last thing you want to do is go back to the mainframe.”

In a March thread on Hacker News, someone described learning COBOL as “like swallowing a barbed cube-shaped pill,” lamenting the decades-old legacy code “swamped with technical debt…modified, extended, updated, moved to new hardware over and over… Documentation, if any, is hopelessly out of date.”

Another commenter complained that “You will most likely spend the rest of your career doing maintenance work rather than any greenfield development. There is nothing wrong with that but not everybody likes the fact they can’t create something new.”

And a May 2016 study published by Congress’ Government Accountability Office criticized the U.S. Department of Justice and the Treasury Department for their legacy COBOL systems. It found many agencies were using COBOL — including the Department of Homeland Security (which uses COBOL and other languages to track hiring for immigration and customs enforcement agents on a 2008 IBM z10 mainframe). Veterans’ benefits claims were also tracked with a COBOL system, and the Social Security Administration was using COBOL to calculate retirement benefits. (In fact, the SSA had to re-hire some retired employees just to maintain its existing COBOL systems, according to the report.) Even the Department of Justice’s information about the inmate population passes through a hybrid COBOL/Java system.

There have been reports that some institutions are still clinging to elderly COBOL programmers — suggesting they’re having trouble finding qualified replacements. In 2014, Bob Olson, a vice president at Unisys, even told American Banker about a government client with an IT worker “who’s on oxygen. He’s 70 years old, he knows the keys to the kingdom, he knows where everything is, it’s all sitting in his head. They send out a police car to pick him up every morning and bring him into work in a vault-like room.”

Of course, this has also created some opportunities. Bill Hinshaw, a 75-year-old former COBOL programmer, has even founded a company in northern Texas named COBOL Cowboys. (And yes, their client list includes at least five banks.) The company’s slogan? “Not our first rodeo.”

“Some of the software I wrote for banks in the 1970s is still being used,” Hinshaw told Reuters. “After researching many published articles (both positive and negative) on the future life of COBOL, we came away with renewed confidence in its continued life in the coming years,” explained the company’s web page. It cites IBM enhancements that allow COBOL and Java to run together on mainframes.

Reuters reported that Hinshaw divides his time between 32 children and grandchildren “and helping U.S. companies avert crippling computer meltdowns.” When he started programming, instructions were coded into punch cards which were fed into mainframes. But decades later, when he finally reached retirement age, “calls from former clients just kept coming.”

They’re willing to pay almost anything, he told Reuters, and “You better believe they are nice since they have a problem only you can fix.” Some companies even offered him a full-time position.

The company boasts some retirement-age coders on its roster, as well as some “youngsters” who are in their 40s and early 50s.

There are strong reactions to a recent article arguing banks should let COBOL die. “The idea that large corporations are simply going to move on from COBOL is out of touch with reality,” one commenter wrote on Hacker News. “It really can’t be overstated how deeply old COBOL programs are embedded into these corporations. I worked for one that had been using them since the language itself was created, and while they all could see the writing on the wall, the money to do the change simply wasn’t there.”

But they also believed it would be possible to find new programmers. “They just need to maintain and occasionally update some ancient program that’s been rock-solid for longer than they’ve been alive.”

Computerworld also reported that there were 75 schools in the U.S. that were still teaching COBOL, “thanks in part to efforts by companies like IBM.” American Banker found they were mostly community colleges and technical schools, though it adds that 68,000 students entered IBM’s “Master the Mainframe” contest between 2012 and 2014. Last month IBM told Reuters that over the last 12 years they’ve trained more than 180,000 developers through fellowships and other training programs — which averages out to 15,000 a year. One IBM fellow insisted that “Just because a language is 50 years old, doesn’t mean that it isn’t good.” So there are at least some channels in place to create new COBOL programmers.

Leon Kappelman, a professor of information systems at the University of North Texas, says he’s been hearing dire predictions about COBOL’s future for the last 30 years. Last year he told CIO magazine that undergrads who take the school’s two classes in mainframe COBOL “tend to earn about $10,000 per year more starting out than those that don’t.” He also believes it’s a secure career because large organizations rarely have a compelling business case for replacing their COBOL code with something new.

“The potential for career advancement could be limited, so you get a lot of job security – but it could get boring.”

Some commenters on Hacker News see the issue pragmatically. “What you have to remember is that when the COBOL code was written, it replaced hundreds, maybe thousands of people doing manual data entry and manipulation, maybe even pen-on-paper,” one commenter wrote in April. “That gives you a fantastic return on investment. After that’s been done, replacing one computer system with a newer one is completely different, a spectacular case of diminishing returns.”

So business remains strong for the COBOL Cowboys. Recent press coverage (including the Reuters article) brought visitors from 125 countries to their website — and over 300 requests to join their group. I contacted CEO Hinshaw to ask him about the language’s future, and he says he feels there’s a renewed interest in COBOL that “may help to bring the younger generation of programmers into COBOL if they can overcome the negative press on COBOL and concentrate on a career of backroom business solutions written in COBOL.” He points out that the billions of lines of code obviously represent “60+ years of proven business rules.”

Even if companies transitioned to Java, the problem could recur later. “Will a future generation of young programmers want to transition away from Java to a newer language — and will companies have to once again go through another expensive and time-consuming transition?”

“Only time will tell if COBOL programmers are a dying breed, or a new breed embracing COBOL comes riding onto the scene….”

Source: COBOL Is Everywhere. Who Will Maintain It? – The New Stack

200+ Free Art Books Are Now Available to Download from the Guggenheim – Creators

A veritable art history degree’s worth of books digitized by the Solomon R. Guggenheim Museum are now available for free.

There’s Wassily Kandinsky’s 1946 treatise, On the Spiritual in Art; books about movements from the Italian Metamorphosis to Russian Constructivism; thousands of years of Aztec and Chinese art; and catalogs of work by the many greats to pass through the Guggenheim’s Frank Lloyd Wright-designed halls. Formerly locked in paper prisons (a.k.a. hard-copy books), analysis of work by Pablo Picasso, Roy Lichtenstein, Dan Flavin, Robert Rauschenberg, Gustav Klimt, Mark Rothko, and more is now free to roam the web as PDFs and ePubs.

The initiative to publish certain entries from the Guggenheim’s vast library began with 65 catalogs published in 2012 and has now grown to 205 titles. They join 43 titles available in the Los Angeles County Museum of Art’s Online Reading Room, 281 from Getty Publications’ Virtual Library, and a whopping 1,611 books from the Metropolitan Museum of Art’s MetPublications, all downloadable for free. That’s in addition to the 375,000+ high-resolution images of the artworks themselves the Met dumped into the public domain earlier this year.

https://creators.vice.com/en_us/article/200-free-art-books-download-guggenheim

https://archive.org/details/guggenheimmuseum

How the Military Helmet Evolved From a Hazard to a Bullet Shield | At the Smithsonian | Smithsonian

Schwarzkopf’s helmet, a PASGT, represents “how technology and innovation work together in the field of ground-forces protection,” says Frank Blazich, Jr., the Smithsonian’s curator of modern military forces. (David Miller, Division of Armed Forces History, NMAH)

The object itself is impressive. A Kevlar casque, covered in a sheath of pale-brown desert camouflage cloth, it has a neoprene olive-drab band around the helmet’s lower rim, with the soldier’s name embroidered on it in black. But this helmet also bears four black stars on its front, just above the visor and “name band.” The stars are there because this particular helmet once belonged to General Norman Schwarzkopf, Jr., the commanding American general in Operation Desert Storm, which began in January 1991.

“What’s most amazing to me about General Schwarzkopf’s helmet,” says Frank Blazich, Jr., curator of modern military forces at the Smithsonian’s National Museum of American History in Washington, D.C., “is that it represents how technology and innovation work together in the field of ground-forces protection.”

Known as PASGT (for Personal Armor System Ground Troops), the helmet was introduced to the U.S. ground forces in the years following the Vietnam conflict—and was initially employed in limited numbers during actions in Grenada and Haiti in the 1980s. It was in wide use by American ground forces by the time Operation Desert Storm was initiated in 1991, when U.S. forces led a coalition of 34 nations to liberate Kuwait after its occupation by Iraq in August of 1990.

On May 20, with Gen. Norman Schwarzkopf’s Operation Desert Storm helmet as a centerpiece, the Smithsonian’s Lemelson Center for the Study of Invention and Innovation will host Military Invention Day, an exploration of how objects developed for the battlefield have been adapted into endless aspects of American culture.

Along with General Schwarzkopf’s helmet will be examples of the entire line of American military helmets from the past century, alongside a thorough timeline of other implements of modern warfare. In each example, the program will showcase how advancing military technologies have changed the face of battle and force protection since World War I, and how those technologies then migrated into other areas of American life.

Still, no area of personal military technology might be more indicative of how change has come to war than the American military helmet. “In 1917,” Blazich says, “when America entered World War I, we used a variation of the British helmet of the time, called the Brodie Helmet, or Mark 1 helmet.” The American helmet was called the M1917.

Effectively an overturned metal dish weighing about 1.3 pounds, with a basic liner to keep a soldier’s scalp from chafing against the helmet’s manganese-steel alloy shell, plus a solid chinstrap that cinched tight, it was a primitive tool at best. As a protective device, Blazich says, it didn’t do much more than keep explosion-driven rocks off the tops of soldiers’ heads while they were in the trenches of France. “Though it could also be protective against shrapnel, which was also a big concern in that war,” Blazich adds.

Yet with no real face and side-skull coverage, it left troops wide open to facial and cranial injury, and lasting disfigurement from shell fragmentation was an enormous problem in World War I.

The Brodie Helmet also had other inherent dangers. The chinstrap, once tightened down, was hard to release, so if a Doughboy’s helmet got trapped or lodged between objects, the situation could prove fatal: the soldier would have a difficult time getting the helmet off and could be left trapped and immobile on the field of battle.

Still, despite the M1917’s liabilities, innovation remained slow. In 1936, a slightly more protective version was rolled out, called the M1917A1, or “Kelly” helmet. It had a more comfortable helmet liner and an improved canvas chinstrap. The intent of these changes was to improve the helmet’s overall balance and performance. But it still didn’t provide the kind of protection from side assault that the War Department desired.

So in 1941, in the run-up to World War II, the Army and several of its research partners rolled out the M1 helmet, which had a slight brim on its front to keep precipitation off a soldier’s face and a slightly lipped rim all the way around. The helmet’s sides also trailed down to cover half of each ear before dropping lower to shield the back of the skull. It employed a manganese-steel outer shell that weighed just 2.85 pounds and an inner molded fiber-plastic liner. Later in the war, it was upgraded with an improved canvas chinstrap, “which would break away under pressure,” Blazich says.

“The M1 helmet liner was a big improvement,” says Blazich, “as it allowed for a much closer, more-custom fit. Somewhat remarkably, they originally took the idea for the liner from the liner of Riddell football helmets of the age.”

Blazich says the liner used a network of adjustable webbing connected together, which could be tightened or loosened like the fitting inside today’s construction hard hats, allowing the helmet to more-precisely conform to each soldier’s individual skull features. “It was an enormous development.”

The helmet’s steel still couldn’t stop some close-range bullets or shrapnel, but it offered far better coverage and protection for the skull, appreciably saving American lives. That said, it was somewhat heavy, and was often referred to by troops as the “Steel Pot.” But despite its weight liability, the helmet proved so successful and effective in combat operations that, aside from a few design improvements in the liner and exterior flared edging, its use continued through the conflicts in Korea in the 1950s and Vietnam in the 1960s and 70s.

Then, in 1965, DuPont chemist Stephanie Kwolek invented Kevlar. “That was a game-changer,” says Blazich. In the 1970s, several Army agencies—led by the Army Natick Development Center at the Watertown Arsenal in Massachusetts—began working with layers of tough, puncture-resistant Kevlar 29, a synthetic ballistic fiber bonded with a synthetic polymer resin, to create a skull-protecting helmet capable of stopping most bullets, as well as shrapnel and shell fragments, that weighed between 3.1 pounds (for the small model) and 4.2 pounds (for the extra-large size).

Because of the malleability and plasticity of Kevlar in the design process, the Army and its agencies were able to make a far more efficient helmet design, creating the PASGT, similar to the one General Schwarzkopf donated to the Smithsonian in 2007. Its design also allowed for coverage of the ears and the back of the skull all the way to the nape of the neck.

Some American troops referred to it as the “K Pot,” after its outer Kevlar material; others called it “the Fritz” for its resemblance to the scallop-edged “Stahlhelm” worn by German soldiers in both World Wars. But despite the disparaging nickname, the PASGT’s Kevlar exterior proved a vast improvement over the M1. While still not perfect at stopping close-range bullets, shrapnel, and shell fragments, the helmet’s protection was recognized as a quantum leap forward.

First used in combat in Operation Urgent Fury in Grenada in 1983, the PASGT was welcomed as standard equipment by the time Operation Desert Storm came around in 1991, and remained in service until it, too, was replaced by a new model in 2003.

That year, the flexibility of layered Kevlar fiber, coupled with another evolution in advanced industrial design, allowed the Army to roll out the Advanced Combat Helmet (or ACH). Constructed of advanced Kevlar 129 and chemically similar Twaron-brand ballistic fibers, the ACH is a masterpiece of contemporary military design. Lighter—at 2.4 pounds—and narrower in silhouette, it has better coverage of the ears and the back of the neck, and offers even better, harder-sided protection from ballistic projectiles, from bullets to shrapnel and shell fragments. It also has an even more sophisticated shock-absorbing liner, which better protects against traumatic brain injury, especially from roadside bombs and improvised explosive devices.

Beyond that, the ACH has a front opening that can accommodate either sunglasses or goggles, which deflect sandstorms in desert fighting, or heavy rains and winds. Because of its lightness, protective qualities, and flexibility across different configurations, troops embraced it instantly. Add to that an optional black-steel fitting clip above the front visor, which can be used to attach devices from night-vision goggles to video cameras, and the Army had a state-of-the-art protective tool at its disposal.

Today, Kevlar’s use has migrated into commercial products for everything from athletic shoes to conveyor belts for hard-rock mining; from athletic cross-training clothing to cut-resistant work gloves and firemen’s outerwear, to auto and bicycle tire antipuncture underliners, to sail and spinnaker lines for recreational and racing sailboats—not to mention cords for parachutes. Light, tough and reliable, Kevlar material has endless applications, and is a prime example of how material developed and first used in military applications has migrated into endless other areas of American life and culture.

Of the display of Army helmets shown on May 20 at Military Invention Day, with General Schwarzkopf’s as part of the exhibit’s centerpiece, Blazich seems pleased by the example the array represents. “It’s just interesting,” he says. “In those examples, you can see an evolutionary change. Really, I think visitors to Military Invention Day will find it all quite enlightening.”

http://www.smithsonianmag.com/smithsonian-institution/how-military-helmet-evolved-hazard-bullet-shield-180963319/

NASA decodes source of strange flashes seen on Earth from space

Since June 2015 the Deep Space Climate Observatory (DSCOVR) satellite has been floating about a million miles away between the Earth and the Sun. On that satellite, which was developed by the National Oceanic and Atmospheric Administration in the US, NASA’s Earth Polychromatic Imaging Camera (EPIC) instrument has been snapping pictures of our planet about once every hour. In some of those shots, strange flashes have been appearing all over the planet. Researchers now think they know what they are.

While radiation from secret labs, glints of gold from lost cities or flashes from freak weather events would certainly make for a more dramatic story, the NASA researchers have come to a different conclusion about the flashes, 866 of which were found between DSCOVR’s launch and August 2016. They are ice crystals, likely floating horizontally as high as five miles in the air.

Initially, researchers thought the flashes could simply have been sunlight bouncing off bodies of water, above which the flashes were first spotted. Upon closer investigation, however, they found the flashes over land as well – and the size of the flash was too big to be explained by the presence of a lake or other body of water. Ice was the next logical guess.

To test their theory, the researchers reasoned that if the flashes were caused by sunlight bouncing off ice particles, then they would have to occur only when DSCOVR was positioned so that sunlight striking the crystals would reflect directly toward it. Sure enough, the data matched, which meant the flashes were definitely reflections, not some type of weather phenomenon.

They then plotted the angles in more detail and came to the conclusion that the particles would have to be floating nearly horizontally to reflect light in the way they were.
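
To make that geometry concrete, here is a minimal Python sketch of the specular-reflection test for a horizontally floating crystal; the function name, tolerance, and local-frame convention are illustrative assumptions, not NASA's actual code.

```python
import numpy as np

def is_specular_glint(sun_dir, sat_dir, normal=(0.0, 0.0, 1.0), tol_deg=1.0):
    """Check whether sunlight mirrored off a horizontal facet heads
    toward the satellite. All vectors are unit vectors in a local
    frame where `normal` points straight up (hypothetical example)."""
    sun = np.asarray(sun_dir, dtype=float)
    sat = np.asarray(sat_dir, dtype=float)
    n = np.asarray(normal, dtype=float)

    incoming = -sun                                    # direction the light travels
    reflected = incoming - 2.0 * incoming.dot(n) * n   # mirror about the facet normal

    # A glint requires the reflected ray to line up with the satellite.
    cos_angle = np.clip(reflected.dot(sat), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) < tol_deg

# Sun 45 degrees off vertical; satellite at the mirror-image position:
sun_at_45 = (np.sin(np.radians(45)), 0.0, np.cos(np.radians(45)))
sat_mirror = (-np.sin(np.radians(45)), 0.0, np.cos(np.radians(45)))
print(is_specular_glint(sun_at_45, sat_mirror))  # True for a horizontal facet
```

Tilting the facet normal away from vertical breaks the alignment, which is why the matching angles pointed to near-horizontal crystals.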

Next, the researchers measured the height of the particles using two channels on the EPIC instrument that can measure the height of clouds. Wherever there were flashes, there were also cirrus clouds three to five miles (5-8 km) high.

Interestingly, famed astronomer Carl Sagan also saw the flashes in the atmosphere in 1993 when he was analyzing images from the Galileo spacecraft, which examined the Earth during a gravity-assist swing-by before it headed out to Jupiter. “Large expanses of blue ocean and apparent coastlines are present, and close examination of the images shows a region of [mirror-like] reflection in ocean but not on land,” Sagan and his colleagues wrote of the find in a paper in the journal Nature.

Now that present-day astronomers know about the ice crystals, their next step will be to classify just how common they really are and determine if they could actually be impacting Earth by blocking some of the sunlight that bathes our planet. Alexander Marshak, DSCOVR deputy project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, said that learning more about the ice particles here could also help us in our study of planets beyond our solar system.

http://newatlas.com/nasa-decodes-strange-flashes/49527/

AlarmShock wristband zaps serial snoozers out of bed

It’s hard to resist the temptation of the snooze button some mornings, but a masochistic new alarm clock may help you kick the habit. When the AlarmShock goes off, you have two minutes to unlock a wrist-worn bracelet and dock it on the device – or face an electric shock.

For the chronic oversleeper, there is no shortage of creative ways to get you moving in the morning. Clocky runs away and hides, the Puzzle Alarm Clock makes you piece together a puzzle before it pipes down, and Ruggie won’t rest until you plant both feet on the floor.

Joining the fray is AlarmShock, which is part alarm clock, part torture device. Users strap the wristband on before they go to bed, locking it into place. When the alarm goes off, sounds and vibrations wake the wearer, and the countdown begins. The magnetically locked wristband can only be pried open on the dock, and if you haven’t done so within two minutes, the device punishes you with a short, sharp jolt of electricity.

According to the company, it’s a low enough voltage that it won’t hurt too much, but it should be annoying enough to make you think twice about rolling over and ignoring it – especially since it will zap you up to five times if it’s not deactivated.

The dock unit also has a charging port for the armband, and a microUSB port to charge mobile devices.

The AlarmShock is currently being funded on Kickstarter, and so far, it’s raised almost £8,000 (US$10,370) of its £40,000 (US$52,000) target, with 29 days remaining on the campaign. Early bird pledges are available for £77 (US$100), before the price jumps up to £90 (US$117). If all goes to plan, the AlarmShock will be shipped out in October.

http://newatlas.com/alarmshock-zaps-snoozers-alarm-clock/49461/

Vendors approve of NIST password draft | CSO Online

A recently released draft of the National Institute of Standards and Technology’s (NIST’s) digital identity guidelines has met with approval from vendors. The draft revises password security recommendations, altering many of the standards and best practices security professionals use when forming password policies for their companies.

The new framework recommends, among other things:

Remove periodic password change requirements
Multiple studies have shown that requiring frequent password changes is actually counterproductive to good password security, said Mike Wilson, founder of PasswordPing. NIST said this guideline was suggested because passwords should only be changed when a user wants to change them or if there is an indication of a breach.

Drop the algorithmic complexity song and dance
No more arbitrary password complexity requirements demanding mixtures of upper-case letters, symbols and numbers. Like frequent password changes, it’s been shown repeatedly that these types of restrictions often result in worse passwords, Wilson adds. NIST said that if a user wants a password that is just emojis, they should be allowed. It’s important to note the storage requirements, though: passwords should be salted and hashed, with a MAC, such that if a password file is obtained by an adversary, an offline attack is very difficult to complete.
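
As a rough illustration of what that storage guidance means in practice, here is a minimal Python sketch using salted, iterated PBKDF2; the function names, iteration count, and salt size are illustrative choices, not parameters taken from the NIST draft.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Derive a slow, salted hash for storage (illustrative parameters)."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest  # store all three with the user record

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The per-user salt and the deliberately slow derivation are what make an offline attack on a stolen password file expensive.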

Require screening of new passwords against lists of commonly used or compromised passwords
One of the best ways to ratchet up the strength of users’ passwords is to screen them against lists of dictionary passwords and known compromised passwords, he said. NIST adds that dictionary words, user names, and repetitive or sequential patterns should all be rejected.
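
A minimal sketch of such a screening step might look like the following Python; the blocklist contents, checks, and function name are hypothetical examples rather than logic prescribed by the draft.

```python
def acceptable_password(candidate: str, username: str, blocklist: set) -> bool:
    """Reject blocklisted, user-derived, repetitive, or sequential passwords."""
    lowered = candidate.lower()
    if lowered in blocklist:                      # known-compromised or dictionary word
        return False
    if username and username.lower() in lowered:  # contains the user's own name
        return False
    if len(set(lowered)) == 1:                    # repetitive, e.g. "aaaaaaaa"
        return False
    # Sequential runs such as "abcdefgh" or "12345678" step by a constant.
    steps = [ord(b) - ord(a) for a, b in zip(lowered, lowered[1:])]
    if steps and all(s == steps[0] for s in steps):
        return False
    return True

toy_blocklist = {"password", "123456", "qwerty", "letmein"}
print(acceptable_password("qwerty", "alice", toy_blocklist))                 # False
print(acceptable_password("correct horse battery", "alice", toy_blocklist))  # True
```

In production the blocklist would be a large, regularly refreshed corpus of breached credentials rather than a toy set.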

“All three of these recommendations are things we have been advising for some time now and there are now password strength meters that screen for compromised credentials, not just commonly used passwords,” Wilson said. “While it wasn’t explicitly mentioned in the new NIST framework, we contend that another important security practice is periodically checking your user credentials against a list of known compromised credentials.”

NIST’s Paul Grassi, one of the authors of the report, noted that many of the above guidelines are now only strong suggestions and are not mandatory yet. The public comment period closed on May 1 and now the draft goes through an internal review process. It is expected to be completed by early to mid summer.

“We look forward to a day in the near future when technology, culture, and user preference allows these requirements to be more broadly accepted. That said, we reviewed a lot of research in the space and determined that composition and expiration did little for security, while absolutely harming user experience. And bad user experience is a vulnerability in our minds,” he said. “We need technology to support this (not all password stores do), so we didn’t want to create requirements that agencies had no chance of meeting due to tech limitations.”

Users usually find a way around restrictions like composition rules by substituting special characters for alphas. Because the bad guys already know all of the tricks, this adds very little, if nothing, to the true entropy of a password, he said. “Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren’t fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good.”
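
A tiny Python sketch makes the point: enumerating every “leet” variant of a base word is trivial for cracking tools, so the substitutions add almost no real entropy. The substitution table here is an illustrative assumption, not taken from any particular tool.

```python
from itertools import product

# Common character substitutions that word-mangling rules already cover.
SUBS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def mangled_variants(word: str):
    """Yield every leet-style variant of a base word."""
    pools = [SUBS.get(ch.lower(), ch) for ch in word]
    for combo in product(*pools):
        yield "".join(combo)

# "password" expands to just 54 variants (3 * 3 * 3 * 2 choices) --
# a rounding error for an attacker testing billions of guesses.
print(sum(1 for _ in mangled_variants("password")))  # 54
```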

In terms of new requirements for passwords, he said NIST is excited to introduce password storage requirements, which makes an offline attack much harder. He said fundamentally the new revision does a better job recognizing the password has a valid role to play, if done right. “Yet we provided a slew of new options that gives agencies the ability to leverage the tools that users may already have, like a smartphone, or an authentication app, or a security key. This allows agencies to save money by not having to issue a physical device, but increase their security posture by accepting the strong authenticators users already have.”

Phil Dunkelberger, CEO of Nok Nok Labs, said the username and password paradigm is well past its expiration date. Increasing password complexity requirements and requiring frequent resets adds only marginal security while dramatically decreasing usability.

“Most security professionals will acknowledge that while such policies look good on paper, they put a cognitive load on end users who respond by repeating passwords across sites and other measures to cope that dramatically weaken overall security. We are glad to see national organizations like NIST recommend an update and change to a paradigm that no longer works,” he said.

User reaction

Ran Shulkind, co-founder and chief product officer at SecuredTouch, said the new password guidelines make a lot of sense. “The volume of passwords people had to manage and the ‘special characters’ ended up making things less secure than they should have been. However, passwords are actually becoming much less important than they used to be. Threats are continuing to increase, and users are getting tired of entering usernames, passwords, and additional identifying codes – no matter the structure.”

Multifactor authentication (MFA) is becoming mandated in some industries and is voluntarily being adopted in others. It adds another layer of security to include something you know (password), something you have (token or SMS), or something you are (fingerprint or behavior), Shulkind said.

“Ultimately, it’s all about balancing security and the user experience. While MFA does enhance security, it can discourage the user from using the app or performing the transaction. That’s why organizations are looking for more user-friendly components, like behavioral biometrics to reduce friction, allowing for smoother device interactions and higher risk transactions,” he said.

Mike Kail, co-founder and CIO at Cybric, said behavioral biometrics, which analyzes and authenticates based on users’ physical interactions with their devices (finger pressure, typing speed, finger size) will eventually phase out the need for passwords completely.

“I feel that the updates in the new framework are a step in the right, tactical direction, especially the password rotation change requirements,” he said.

He would like to see more strategic approaches, such as requiring a cloud IdP/SSO provider and monitoring for anomalous activity. He also mentioned providing users with a password management tool.

Barry Shteiman, director of threat research at Exabeam, said this is a very positive change in the NIST standard. “Credential stuffing (taking compromised credential DBs and replaying them against authentication mechanisms) has become very common, especially with breach information being sold or sometimes published online.”

Richard Henderson, global security strategist at Absolute, believes this change also makes dictionary and rainbow-table attacks less useful for testing credentials. “Sadly, we’ve lived through many years of more and more confusing and contradictory advice when it comes to creating and using passwords, and that has led to a hodge-podge of implementations and confusion among regular internet users.”

“When you add to this the simple notion that there are still a lot of sites out there with terrible password policies or, even worse, still storing passwords in plaintext, are we really surprised that people’s habits lead to widespread password reuse or weak passwords?” Henderson pondered.

He said the most important piece of advice is the continual scanning and intake of known-vulnerable and stolen password lists to compare against. “Beyond the idea of potentially minimizing the risk of password reuse and creating weaker passwords, it can alert companies to the potential of a breach of one of their users. If a password like 247KangarooKiwi! shows up on a compromised list somewhere, and that’s a password one of your users uses, it’s an awful large red flag to take a look at their corporate or work endpoint devices and look for evidence of compromise.”

NIST’s recommendation to allow the full ASCII and Unicode keyspaces is also good, he said, as it enlarges the space attackers must cover in brute-force attempts.

Troy Gill, manager of security research at AppRiver, remembers hearing frequently that passwords were dead. “New authentication technologies have come a long way in the past decade. However, the massive surge in online services, with the majority of those services implementing passwords, is leading to a bit of a password critical mass,” he said.

He noted that these recommendations also are largely in sync with guidelines laid out last year by the UK’s NCSC.

“In a perfect world, it would be a great idea to require passwords to be changed every few months. But as humans we have inherent limitations with our ‘wetware’ that can prevent most of us from doing what we know is most secure. Instead, we substitute something that meets the minimum requirements and can be managed with the most ease,” he said. “Let’s face it, there are a staggering number of unique passwords that people are required to remember today, with most requiring frequent changes that also have to be memorized.”

He said this constant churn inevitably leads to users implementing common, predictable passwords, recording them in unsecured locations, reusing passwords on multiple online accounts, and using only slight variations of prior passwords. He agreed that 30/60/90 day password changes are counterproductive.

He would like to see a more “event-driven” approach to when password resets are required, as opposed to a routine schedule. For example, if an organization is at all suspicious of a breach, then requiring password changes across the board would be appropriate. Other events warranting a password change would include a particular user logging in from an unrecognized device or an unexpected location. “Investment in the ability to detect these types of events more easily can build a stronger security posture,” he said.

Gill said it’s true that the attempt to require more algorithmic complexity most often has very predictable results, like the example NIST uses in its guidelines of the password “password” morphing into “password1” and later “password1!”.

“While the last iteration may be technically more complex it is essentially just as weak as the original as it is both commonly used and computationally predictable. I would also like to see the term ‘password’ replaced with ‘passphrase’ as lengthy passphrases can be both easier to remember and more difficult to crack in a brute force attack,” he said.

He said using lists of both common passwords and compromised passwords can be quite simple to implement and can make a marked improvement. Organizations should also focus some effort on monitoring web locations where breached passwords are likely to appear for lists containing any of their users or customers.

Eric Avigdor, director of product management at Gemalto, noted that passwords have always been a weak security tool, and conventional wisdom has been that consumers should create complex passwords that they update frequently.

“The reality is that passwords are weak no matter how often they are changed or how difficult they are, and people usually have only a variant of one or two passwords. Man in the middle or man in the browser hacks can take your password even if it is extremely lengthy and complicated – IT administrators can see your passwords, your bank can see your passwords,” he said.

He said the guidelines recognize that the way to solve the password problem is to accept that passwords are weak and add on other complementary factors of authentication, whether mobile or hardware OTP tokens as well as PKI based USB tokens or smart cards.

Avigdor mentioned more reliance on the usage of PKI tokens with a smart card. This involves entering a PIN which is never revealed to anyone, except the owner of the smart card.

http://www.csoonline.com/article/3195181/data-protection/vendors-approve-of-nist-password-draft.html

DRAFT NIST Special Publication 800-63B
Digital Identity Guidelines

https://pages.nist.gov/800-63-3/sp800-63b.html

Nvidia Opens Up The “Black Box” of Its Robocar’s Deep Neural Network – IEEE Spectrum

A deep neural network’s ability to teach itself is a strength, because the machine gets better with experience, and a weakness, because it’s got no code that an engineer can tweak. It’s a black box.

That’s why the creators of Google DeepMind’s AlphaGo couldn’t explain how it played the game of Go. All they could do was watch their brainchild rise from beginner status to defeat one of the best players in the world.

Such opacity’s okay in a game-playing machine but not in a self-driving car. If a robocar makes a mistake, engineers must be able to look under the hood, find the flaw and fix it so that the car never makes the same mistake again. One way to do this is through simulations that first show the AI one feature and then show it another, thus discovering which things affect decision making.

Nvidia, a supplier of an automotive AI chipset, now says it has found a simpler way of instilling transparency. “While the technology lets us build systems that learn to do things we can’t manually program, we can still explain how the systems make decisions,” wrote Danny Shapiro, Nvidia’s head of automotive, in a blog post.

And, because the work is done right inside the layers of processing arrays that make up a neural network, results can be displayed in real time, as a “visualization mask” that’s superimposed on the image coming straight from the car’s forward-looking camera. So far, the results involve the machine’s turning of the steering wheel to keep the car within its lane.

The method works by taking the analytical output from a high layer in the network—one that has already extracted important features from the image fed in by a camera. It then superimposes that output onto lower layers, averages it, and superimposes it on still lower layers, until it gets all the way to the original camera image.
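
As one rough reading of that description, a minimal numpy sketch of the averaging-and-superimposing scheme might look like this; the nearest-neighbour upsampling, layer ordering, and function name are assumptions, not Nvidia's actual implementation.

```python
import numpy as np

def visualization_mask(layer_activations):
    """Collapse each layer's feature maps to a mean map, then fold the
    deepest map back down through the shallower layers (illustrative).

    `layer_activations`: list of arrays shaped (channels, height, width),
    ordered from the layer nearest the camera image to the deepest layer,
    with spatial sizes assumed to divide evenly between adjacent layers.
    """
    means = [a.mean(axis=0) for a in layer_activations]  # average over channels

    mask = means[-1]                      # start from the deepest layer
    for lower in reversed(means[:-1]):
        fy = lower.shape[0] // mask.shape[0]
        fx = lower.shape[1] // mask.shape[1]
        mask = np.kron(mask, np.ones((fy, fx)))  # nearest-neighbour upsample
        mask = mask * lower                       # superimpose on the lower layer

    return mask / (mask.max() + 1e-8)    # normalize before overlaying on the image
```

The returned array is what would be resized and overlaid on the camera frame as the “visualization mask.”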

The result is a camera image on which the AI’s opinion of what’s significant is highlighted. And, in fact, those parts turn out to be just what a human driver would consider significant—lane markings, road edges, parked vehicles, hedges alongside the route, and so forth. But, just to make sure these features really were key to decision making, the researchers classified all the pixels into two classes: Class 1 contains “salient” features that clearly have to do with driving decisions, and Class 2 contains non-salient features, typically in the background. The researchers manipulated the two classes digitally and found that only the salient features mattered.

“Shifting the salient objects results in a linear change in steering angle that is nearly as large as that which occurs when we shift the entire image,” the researchers write in a white paper. “Shifting just the background pixels has a much smaller effect on the steering angle.”

True, engineers can’t reach into the system to fix a “bug,” because deep neural nets, lacking code, can’t properly be said to have a bug. What they have are features. And now we can visualize them, at least up to a point.

http://spectrum.ieee.org/cars-that-think/transportation/self-driving/nvidia-looks-inside-its-deeplearning-systems-black-box

Voters could easily reach a bipartisan deficit deal, study finds | TheHill

Politicians struggle to reach budget deals every year, but a new study suggests everyday people could get one.

When presented with a set of decisions to make on government spending and taxes, majorities of voters from both parties were able to agree on steps to reduce the nation’s deficit by $86 billion, according to a study by the University of Maryland’s Program for Public Consultation.

When independents were thrown into the mix, the majority of voters agreed to $211 billion in deficit reduction.

“They seem more committed to reducing the deficit than Congress is. They make a lot of hard decisions and there’s a lot of bipartisan consensus,” said Dr. Steven Kull, who led the study.

The study surveyed 1,817 registered voters, reached through mail and telephone, asking them to state how much they agreed or disagreed with various approaches to taxes, spending, and deficits.

It then asked them to go through and make decisions on specific government spending and tax options, choosing whether to increase, decrease, or leave them stable. The results were weighted to match the latest census figures, and had a 2.3 percent margin of error.
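
For what it’s worth, the stated margin of error is consistent with the standard 95 percent confidence formula for a sample of this size; this back-of-the-envelope check assumes a simple random sample, which the article doesn’t confirm.

```python
import math

n, p = 1817, 0.5  # sample size; p = 0.5 gives the worst-case variance
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{moe:.3f}")  # 0.023, i.e. about 2.3 percent
```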

In the end, bipartisan majorities agreed on steps that would lead to $86 billion worth of deficit reduction: $17 billion in spending cuts and $69 billion in new taxes and fees.

But some of the most interesting results came from the partisan breakdowns.

“You do see Republicans raising revenues and Democrats cutting spending,” said Kull.

Far from shying away from tax increases, a majority of Republicans (and Democrats) in the survey supported a 5 percent tax increase for incomes over $200,000.

Bipartisan majorities also supported imposing fees on uninsured debt, taxing ‘carried interest’ compensation and increasing alcohol taxes.

Half of Republicans and a two-thirds majority of overall voters also supported increasing capital gains taxes, even as President Trump called for lowering them. With the support of four in ten Republicans, overall majorities of voters agreed to a carbon tax and an increased corporate rate, while most rejected Trump’s proposal on pass-through entities.

When it came to spending, Democrats were happy to cut but concentrated on defense and related areas.

Republicans were also willing to cut defense, but by significantly less, spreading smaller cuts to a variety of programs to achieve their deficit reductions.

Democrats backed $96 billion in spending cuts compared to $65 billion in cuts backed by Republicans.

Trump’s early budget calls for cutting $54 billion in non-defense discretionary spending and putting that money into defense.

Non-Democrat majorities of those surveyed supported cuts for the State Department, development assistance and energy, albeit on smaller scales than Trump proposed.

While reducing the nation’s deficit by $86 billion would be a good start, it would only be a fraction of the $559 billion deficit the Congressional Budget Office projected for 2017.

Moreover, a report released last week by the Government Accountability Office found that the major deficit drivers are healthcare and retirement spending, neither of which is part of the discretionary budget.

But in previous studies, Kull found areas of agreement in those areas as well. A study on Social Security found large, bipartisan agreement on decreasing shortfalls, while the results of an in-depth study on Medicare are forthcoming.

“People are not highly ideological” when it comes to taking concrete decisions, he said.

“The discourse around the choice of candidates, the choice of parties, is very polarized. But we don’t label these options as Republican and Democratic agendas. If you did, you’d probably get a more partisan response,” he added.

http://thehill.com/policy/finance/332602-voters-could-easily-reach-a-bipartisan-deficit-deal-study-finds

Pensacola Graffiti Bridge – Pensacola, Florida – Atlas Obscura

The 17th Avenue railroad trestle bridge was built in 1888. A local Pensacola landmark, the bridge has been painted by generations of residents, and the paintings change daily.

Anything and everything gets painted on the graffiti bridge: tributes, professions of love, invitations to prom, well-wishes to sports teams and graduates, anniversaries, holidays, birthdays, profanity, hopeful messages, celebrations, drawings, artwork, community event announcements, and the list goes on. People get engaged here, and some have even gotten married here. The bridge attracts painters, photographers, and tourists who want a sense of what Pensacolans are all about.

Graffiti coats every square inch of the little bridge, so much so that the art has spread outward into the nearby parking lot and onto the railroad ties above. As with most graffiti, it’s not strictly legal, but the Pensacola community seems to have accepted the bridge and the art on it as a local landmark. Police largely look the other way as they drive past the nightly taggers.

The artists know that nothing is permanent here—their works of art might be painted over in a matter of hours—but if they put something really good on the bridge, it might just stick around for a while.

[The Fence at Carnegie Mellon University in Pittsburgh, PA was the most-painted object in the world (per the Guinness Book of World Records) until it fell over from the weight of the paint in 1993. A new fence was built, and the university is working on setting a new record.]

http://www.atlasobscura.com/places/pensacola-graffiti-bridge-17th-ave-railroad-trestle

http://www.amusingplanet.com/2014/09/the-fence-of-carnegie-mellon-university.html