Microsoft changes tune on forced Windows 10 updates | CIO Dive

Dive Brief:

  • Microsoft announced it will no longer force Windows upgrades on customers, according to Beta News.
  • The company made the announcement following an ongoing legal battle with Germany’s Baden-Württemberg consumer rights center, according to the report. The organization claims Microsoft forcibly downloaded gigabytes of files to upgrade users from Windows 7 and 8 to Windows 10 without consent.
  • The way Microsoft enacts updates changed with the Creators Update. Windows 10 users can now schedule times when they would like to install updates. They can also choose to “snooze” an update for three days if necessary. Along with the Creators Update changes, “active hours” for devices were tweaked to make sure Windows 10 does not update while the device is in use.

Dive Insight:

When Windows 10 was released in 2015, Microsoft was aggressive about getting devices to upgrade. In some instances, it pushed several gigabytes of upgrade files to customers without their knowledge or consent. Consumer backlash caused the company to stop the practice, but two years later, Microsoft is still facing legal repercussions. While the company claims the tactics were in customers’ best interests, the forced upgrades turned into a PR mess.

Last year, three Florida men also filed a lawsuit against Microsoft, stating that the company “coerced” them into the upgrade, which resulted in damaged PCs as well as lost time and money. Some customers even complained that the red X in the Windows 10 update box actually initiated the upgrade, rather than dismissing it as users would commonly expect.

Microsoft has since backed down from some of its more aggressive tactics, and now offers customers options, including “upgrade now, schedule a time to upgrade, or decline the free offer for the new OS.”


Get a Close Look at the World’s Best Mechanical Calculator

Mechanical calculators are an engineering marvel unknown to most people. The most advanced mechanical calculator ever built, the Curta calculator, fits into the palm of your hand and uses dozens of tiny gears and an intricate mechanism to perform addition, subtraction, multiplication, and division.

Each Curta is an incredible piece of art, but so few exist that one in reasonable condition can cost thousands of dollars. Instead, one man decided to make his own: using 3D-printed parts, Marcus Wu built a fully functional, 3x-scale model of the Curta.

If you want a 3D-printed Curta calculator of your own, you can head over to Thingiverse to find all the instructions to print the parts. Just be aware that the calculator involves more than 300 parts, so assembly is not going to be easy.

Los Angeles is dressing up violators’ cars with ‘smart’ parking boots | Smart Cities Dive

Outdated parking enforcement tech is getting the boot in Los Angeles.

Earlier this summer, the Los Angeles Department of Transportation (LADOT) launched a one-year “smart” parking boot pilot to increase enforcement efficiency, after years of towing cars instead of booting them. Parking violators may recognize the familiar yellow device attached to one of their car’s wheels, but the SmartBoot also sports an electronic keypad on its face.

The ticket left on the car includes a number the violator can call to pay the ticket over the phone, after which they are given a code to enter on the boot’s keypad, allowing it to come off on the spot. Before the code is released, drivers have to confirm they can lift the 16-pound boot. If they can’t, PayLock sends an officer to get the boot off for them.

“People have responded a lot more positively to them because they can be on their way,” said Oliver Hou, an engineering associate at LADOT. “Instead of three hours, it’s three minutes.”

The boot will only be put on cars that have five or more unpaid parking tickets, all of which will have to be paid at once to unlock the device. Violators are told to return the boot within 24 hours to one of four locations throughout the city. If it is not returned, they can be fined $25 per day, up to the $500 cost of the boot, a fee that can add to the already-large pool of funding from parking enforcement. Unpaid parking tickets in Los Angeles added up to $21 million over the last five years. According to NBC Los Angeles, collecting on those tickets could pay to fill 1 million potholes or hire 300 new city firefighters.
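The fee structure above is simple enough to sketch (a minimal illustration of the quoted figures; the function name is ours, not LADOT's): $25 per day, capped at the $500 cost of the boot.

```python
def unreturned_boot_fine(days_late: int, daily_fine: int = 25, cap: int = 500) -> int:
    """Fine for an unreturned SmartBoot: $25 per day, capped at the $500 boot cost."""
    return min(days_late * daily_fine, cap)

# The cap is reached after 20 days (20 * $25 = $500).
print(unreturned_boot_fine(3))   # 75
print(unreturned_boot_fine(30))  # 500
```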

.@LADOTofficial says new self-release boots take 5-10 mins to take off from the time parking violators call to pay their tickets. @KNX1070

— Cooper Rummell (@KNXCooper) July 26, 2017

The SmartBoot, built by PayLock, got its first program in Hoboken in 2003. Since then, the boot has been adopted by 20 municipalities, counties and colleges and used over 400,000 times in the past 10 years. Besides manufacturing the device, PayLock runs a help center people can call with their ticket number for payment assistance. The company can conference friends and family into the call to help complete payment across multiple electronic methods.

Matt Silverman, executive vice president at PayLock, said that the SmartBoot allows cities to find middle ground between being aggressive about going after unpaid tickets and not going after violators at all.

“We are changing the status quo of how cities can treat people,” he said.

Within days of the launch, NBC Los Angeles reported at least one SmartBoot was found on a wheel discarded at a tow truck yard. “Nothing is foolproof,” Hou said. “No boot is going to fit 100% of vehicles.”

However, according to Silverman, PayLock has found that their boots are less likely to be vandalized than a traditional boot.

“People have an option that they didn’t have it before,” Silverman said. “You change the decision making model in people’s mind.”

Getting rid of a boot, smart or not, is a time-honored tradition. YouTube is full of videos, and the web full of tutorials, showing how to remove boots from cars.

“People always find a way,” Hou said.

PayLock gave Los Angeles 500 SmartBoots for free, but it makes revenue by receiving part of the $150 boot fee that Los Angeles charges violators.

Even though parking tickets and boots usually aggravate drivers, PayLock still cares what its users think.

“You can call us crazy, but we send out a customer satisfaction survey,” Silverman said. “We get some interesting comments.”

For Los Angeles, even with a discarded boot or two, the program seems to be working. Since launch, LADOT has performed over 973 boot applications and 305 tows, and has brought in over $281,000. The program has collected on over 1,863 parking tickets, and LADOT will evaluate next summer whether it would like to continue using the technology.

[I cannot imagine getting any useful product information from the customer satisfaction surveys.]

Small retailers switching to chip cards but still worried by lack of PIN | National Retail Federation

A new survey conducted for NRF shows small retailers have nearly caught up with large merchants in making the switch to chip-and-signature credit cards — even though nearly half say the cards would be more secure if easy-to-forge signatures were replaced with a secret personal identification number.

The survey found that 60 percent of small brick-and-mortar retailers had installed chip card readers by this spring and another 10 percent expected to have done so by July, bringing the total so far to 70 percent. The number is expected to reach 81 percent by the end of the year. (Online retailers aren’t affected because the chip doesn’t work unless the card is physically present.)

That compares with 86 percent of mid-size and large retailers surveyed last year who said they would have chip readers in place by the end of 2016, with 99 percent planning to do so by the end of this year.

With each chip reader averaging $2,000 when installation and other costs are factored in, small retailers have generally lagged behind larger retail companies with deeper pockets in the changeover from traditional magnetic stripe cards.

Small retailers have made the switch despite concerns the new cards don’t provide all the security they are capable of: Of the 750 surveyed for NRF by research firm GfK, 49 percent said their businesses would be more secure if credit cards required a PIN, which is standard in most parts of the world where chip cards are used. Only 16 percent disagreed, with the remainder neutral.

Nonetheless, 63 percent said their businesses could not afford to risk increased liability for fraudulent transactions, which retailers have faced since a change in card industry rules took effect in October 2015. In the past, banks paid fraud costs when a card turned out to be counterfeit; the cost has now been shifted to retailers if the card has a chip but the retailer doesn’t have a chip reader.

Not all affected small retailers are making the move: The survey found 19 percent have no plans to adopt chip cards, with 55 percent of them saying it is because their businesses are not at high risk for credit card fraud.

The survey results are not surprising. NRF has said for years that chip-and-signature cards are far less secure than chip-and-PIN. The chip makes it more difficult to create a counterfeit card, but counterfeits are still possible and the chip does nothing to prevent lost or stolen cards from being used. As we’ve often said, a chip without a PIN is like locking the front door but leaving the back door wide open. A PIN alone could stop most credit card fraud without the need for a chip — or the expensive new equipment needed to read a chip.

Virtually all U.S. banks have refused to include PINs on their credit cards, choosing to keep transactions on lucrative signature processing networks run by Visa and Mastercard rather than open them up to the dozen or more competing networks that can process PIN transactions.

Beyond the PIN issue, chip cards do nothing to keep card data from being stolen from computer systems. The chip transmits an encrypted code that confirms that the card is not counterfeit, but the actual account number and other card data are still transmitted in the clear.
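That gap can be illustrated with a toy model (purely illustrative, not the actual EMV protocol; the key and function here are hypothetical): a one-time code computed from a secret inside the chip proves the card is genuine, yet the account number itself still travels in the clear, where a compromised system can capture it.

```python
# Toy model of the point being made (NOT real EMV): the chip produces a
# one-time authentication code, but the account number (PAN) itself still
# travels in the clear.
import hmac, hashlib

CARD_SECRET = b"key-embedded-in-chip"  # hypothetical; never leaves the chip

def chip_transaction(pan: str, amount: int, counter: int) -> dict:
    """Build a transaction record with a per-transaction auth code."""
    msg = f"{pan}|{amount}|{counter}".encode()
    code = hmac.new(CARD_SECRET, msg, hashlib.sha256).hexdigest()
    # The PAN is sent as-is: anyone who breaches the system sees it.
    return {"pan": pan, "amount": amount, "counter": counter, "auth_code": code}

txn = chip_transaction("4111111111111111", 2500, 7)
# A thief who copies this record gets a usable PAN, but cannot produce a
# fresh auth_code for a new transaction without CARD_SECRET, so making a
# counterfeit *chip* card fails. Stolen data remains useful anywhere a
# chip isn't required.
```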

Despite those shortcomings, the change in fraud liability rules effectively coerces many retailers into adopting chip cards: A coffee shop can afford to lose the cost of a doughnut if a customer uses a counterfeit card, but a jeweler selling rings that cost thousands of dollars can’t take the chance.

Overall, U.S. businesses are being forced to spend $30 billion to switch to chip cards that fall far short of the advances in security that are needed. That’s money that could be better spent on encryption, tokenization and other technologies that actually keep card data from being stolen in the first place. If the card data can be made secure, the physical cards become much less of an issue.

Retailers have been demanding truly secure credit cards for years. It’s time for banks to deliver.

Oracle Wants to Give Java EE to the Open-Source Community

Oracle said this week it plans to transfer management of the Java EE project to an open-source foundation, such as Apache or Eclipse.

The announcement came ahead of Java EE 8’s release this fall, when Oracle seems poised to announce to whom Java EE development will be transferred.

The Java EE (Enterprise Edition) project is a collection of APIs for the Java platform that were specifically built to help developers create enterprise-scale applications.

Oracle to withdraw from a leadership role

The project, along with Java SE (Standard Edition), was already managed in a semi-open-source fashion.

Up until now, Oracle has welcomed the open-source community’s participation, with suggestions and plans on how to develop the Java SE and EE platforms, but it has always kept a leading role over the platforms’ future, with the final say in all matters.

According to a statement from David Delabassee, Java Evangelist at Oracle, the company plans to withdraw from its leadership role for the Java EE platform.

“We believe that moving Java EE technologies including reference implementations and test compatibility kit to an open source foundation may be the right next step, in order to adopt more agile processes, implement more flexible licensing, and change the governance process,” said Delabassee.

“We plan on exploring this possibility with the community, our licensees and several candidate foundations to see if we can move Java EE forward in this direction,” he added.

Apache and Eclipse foundations are main favorites

The Apache Foundation and the Eclipse Foundation are the primary candidates for taking over Java EE. Both manage a slew of Java-based projects, and Oracle has previously off-loaded other tools into their laps.

For example, Oracle dumped the NetBeans IDE and the OpenOffice app suite to the Apache Foundation, and the Hudson server to Eclipse.

    If Java EE were to be moved to a foundation outside Oracle, which one would you prefer?

    — Reza Rahman (@reza_rahman) August 11, 2017

Oracle said it will continue to provide feedback for Java EE development, but not from a leadership role. The company did not reveal a similar plan for Java SE.

Oracle has been leaving Java to die

Oracle has been moving away from Java to cloud-based solutions in recent years. In September 2015, the company fired most of its top Java evangelists.

The Java community felt that Oracle was starting to ignore Java development and in 2016 created the Java EE Guardians project to force Oracle to focus more resources on Java EE.

In January 2016, Oracle announced that it would deprecate the Java browser plugin starting in JDK 9, with the plugin ultimately being removed altogether in future versions of the Java runtime environment.

Java EE 8 is set to be released this fall, while Java EE 9 is scheduled for next year.

[Claiming Oracle is abandoning Java for cloud computing is nonsense. They are two different things. The problem Oracle has with Java is that it never found a way to make money off of it. Oracle got Java when it bought Sun Microsystems.]

Intel packs a neural network into a USB stick

They may be modeled on the human brain, but neural networks are far better than we are at sorting through huge amounts of data and identifying patterns. Now, to make these powerful AI systems more accessible to smaller-scale developers and businesses, Intel acquisition Movidius is launching the Neural Compute Stick, which packs deep learning algorithms into a standard USB thumb drive.

Over the years, the brain-power of neural networks has been set loose on cancer screening, mapping the human genome, and creating trippy works of art. But most of these endeavors have come out of big organizations like Google.

With the Movidius Neural Compute Stick, Intel says it’s “democratizing” the technology, so we might see creative applications from small-scale developers, such as rigging up an AI system to stop cats pooping on the lawn. The brain of the Stick is a Myriad 2 visual processing unit (VPU), which is specifically designed for mobile and wearable devices. That means it’s fast and fully-programmable, yet has an ultra-low power consumption and a small physical footprint.

“The Myriad 2 VPU housed inside the Movidius Neural Compute Stick provides powerful, yet efficient performance – more than 100 gigaflops of performance within a 1W power envelope – to run real-time deep neural networks directly from the device,” says Remi El-Ouazzane, vice president and general manager of Movidius. “This enables a wide range of AI applications to be deployed offline.”

The device can be tuned to run both industry standard and custom-designed neural networks, and can also be used as an accelerator, boosting the brain power of an existing computer.

The Movidius Neural Compute Stick is available now for US$79.

Researchers shut down AI that invented its own language

An artificial intelligence system being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.

The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI “agents.”

Negotiating in a new language

As Fast Co. Design reports, Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.

In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.

While it appears to be nonsense, the repetition of phrases like “i” and “to me” reflect how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted like this, the phrases appear more logical than comparable English phrases like “I’ll have three and you have everything else.”

English lacks a “reward”

The AI apparently realized that the rich expression of English phrases wasn’t required for the scenario. Modern AIs operate on a “reward” principle, where they expect that following a given course of action will bring them a “benefit.” In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.

“Agents will drift off from understandable language and invent code-words for themselves,” Fast Co. Design reports Facebook AI researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
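Batra’s “say ‘the’ five times” shorthand can be sketched as a toy codec (illustrative only; this is not the protocol Facebook’s bots actually converged on), where repeating a token encodes a quantity:

```python
# Toy sketch of the shorthand Batra describes: repeating a token encodes
# how many of that item the speaker wants.
def encode(wants: dict) -> str:
    """{'ball': 3} -> 'ball ball ball'"""
    return " ".join(" ".join([item] * count) for item, count in wants.items())

def decode(message: str) -> dict:
    """Count token repetitions to recover the quantities."""
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "hat": 1})
print(msg)          # ball ball ball hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

The point of the example: the "language" looks like gibberish yet is perfectly decodable by an agent that knows the convention, which is roughly what the researchers observed.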

AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.

AI language translates human ones

In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn’t been explicitly taught. The success rate of the network surprised Google’s team. Its researchers found the AI had silently written its own language that’s tailored specifically to the task of translating sentences.

If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.

They do make AI development more difficult, though, as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google Translate’s indicate they actually represent the most efficient solution to major problems.

This AI traffic system in Pittsburgh has reduced travel time by 25% | Smart Cities Dive

Pittsburgh drivers add 81 extra hours to their commutes each year because of traffic, according to a TomTom survey. While there are other U.S. cities that have it worse, Pittsburgh is known for its difficult driving conditions, with hills, bridges and bikers — all on a gridless city where many intersections have “no right on red” signs. But drivers in Pittsburgh could soon get relief.

Varied road conditions make for tough traffic, but they are also a reason companies like Uber are coming to Pittsburgh to test autonomous vehicles: if traffic technology can work in Pittsburgh, it can work almost anywhere. And along with AVs, that traffic technology includes Surtrac, an AI system that allows traffic lights to adapt to traffic conditions instead of relying on pre-programmed cycles.

At the lights where Surtrac is installed, the team behind the system estimates that it has reduced travel time by 25%, braking by 30% and idling by more than 40%. It costs about $20,000 to wire up and install Surtrac at an intersection.

Surtrac works by detecting traffic and building predictive models. First, hardware, including a computer and a camera or radar device, is installed at the intersection, letting Surtrac see cars approaching the intersection from all directions. The computer runs a predictive model and uses it to generate a signal timing plan in real time. Through communication with downstream models, the processing builds a local plan from multiple data sources.

Each intersection controls its own traffic, but by communicating projected outflows to neighboring intersections, those intersections can better prepare for incoming traffic.
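That decentralized scheme can be sketched roughly as follows (a heavy simplification for illustration; the real Surtrac solves a scheduling optimization at each intersection, and the function here is hypothetical):

```python
# Rough sketch of the decentralized idea: each intersection plans from its
# own detected traffic plus the outflows upstream neighbors project at it.
def plan_signal(detected: dict, projected_inflow: dict) -> str:
    """Give green to the approach with the most expected vehicles."""
    approaches = set(detected) | set(projected_inflow)
    expected = {a: detected.get(a, 0) + projected_inflow.get(a, 0)
                for a in approaches}
    return max(expected, key=expected.get)

# Intersection sees 4 cars waiting northbound and 2 eastbound; an upstream
# neighbor projects 5 more arriving eastbound soon.
green = plan_signal({"north": 4, "east": 2}, {"east": 5})
print(green)  # east (7 expected eastbound vs. 4 northbound)
```

The key property matches the article’s description: no central controller is needed, because each node only consumes its local sensors plus its neighbors’ projections.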

Surtrac, which started as a project at Carnegie Mellon, piloted at 12 high-volume intersections in 2012. It’s now at 50 intersections with another 150 on the way, paid for with a grant from the Federal Highway Administration. In 2015, the project spun out from Carnegie Mellon as a company called Rapid Flow Technologies.

After the pilot, Steve Smith, a robotics professor at Carnegie Mellon and the head of Rapid Flow Technologies, said they noticed a significant difference in traffic flow. But they were quickly informed that they had forgotten about non-motorized traffic.

“We immediately got a lot of feedback from pedestrians, who were feeling left out of the picture,” Smith said.

Tweaks to the system made it so there was a maximum wait time for pedestrians at lights. Researchers and students at Carnegie Mellon are working on a side project to make a mobile phone app to communicate with the lights for people with disabilities who need more time to cross the street.

The system is fully automated, but it can be monitored in real time from a central location if desired or necessary. Smith said they don’t really expect people to intervene manually, however.

“In theory it’s one of the best,” said Aleksander Stevanovic, an associate professor in the Department of Civil, Environmental and Geomatics Engineering at Florida Atlantic University and director of the Lab for Adaptive Traffic Operations & Management (LATOM).

Stevanovic said it’s still a “theory,” as it needs more testing, namely at a minimum of a half-dozen more sites with different traffic patterns, like longer blocks with faster-moving traffic. But he commends Surtrac for looking at previous technology and collecting as much information as possible.

“There is nothing wrong with needing improvements, these are complex systems,” Stevanovic said. “It’s been said that solving traffic in urban settings can be harder than sending a rocket to the moon.”

Surtrac is expanding beyond Pittsburgh — even beyond Pennsylvania — this year. It’s going to 25 intersections in Atlanta and 15 in Beverly Hills. Kane County, outside of Chicago, is also in line for a Surtrac deployment.

Eventually, Surtrac will work with autonomous vehicles. Smith said they have spent the last few years working on traffic signal control for connected cars, noting he wanted the system to be prepared “for that eventuality.” A recent study found that having AVs on the road is another traffic-improving factor.

Traffic control could get even better when information is passing back and forth between the infrastructure and cars. In a simulation, Smith was able to show that if a vehicle is willing to share its route with the intersection, via dedicated short-range communications (DSRC) radios or a navigation device, equipped vehicles move through the network 20% faster without affecting non-equipped vehicles.

“It sounds like magic,” Smith said. “But once the world gets connected, we will know where cars are continuously.”

Smith said they are exploring whether Surtrac could one day detect traffic accidents and other real time events, so they can start to use the information to offer rerouting advice to vehicles. They are also exploring different machine learning algorithms to reduce some uncertainty from sensor data.

Even though Surtrac will be at 200 intersections in the near future, there are over 600 intersections in Pittsburgh. Smith said they haven’t noticed a plateau in improvements as they’ve expanded — so traffic could one day be a thing of the past in Pittsburgh.

“I do feel like the more of the network that you can encompass, the smoother you’ll get to travel,” he said.

Stanford’s search and rescue snake robot grows into its role

If you ever find yourself stuck in a disaster zone, your rescuer could take on some unexpected forms, like a drone or a cyborg cockroach – and now we can add a soft robotic snake to the mix. A Stanford team has developed a flexible robot that grows like a vine, squeezing through rubble to find trapped survivors and even delivering water to them.

Rather than a rigid robot rummaging through the rubble, the Stanford snake starts life as a rolled-up, inside-out tube made of soft material, with a pump at one end and a camera attached to the other. When it’s fired up, the robot inflates and grows in the direction of the camera end, while the other end stays put. It’s a mobility method closer to that of plants than animals (or robots, for that matter), and the team wanted to explore how this technique could be used.

“Essentially, we’re trying to understand the fundamentals of this new approach to getting mobility or movement out of a mechanism,” says Allison Okamura, senior author of the paper. “It’s very, very different from the way that animals or people get around the world.”

The robot is able to turn corners by inflating one side more than the other, and it decides where to go from the camera and algorithms that interpret what it’s seeing. That allows it to follow complex paths of its own choosing to reach a designated goal.
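The steering idea can be sketched as a simple proportional rule (an assumed control scheme for illustration, not the Stanford team’s implementation): the farther the goal sits from the camera’s center, the more the opposite side inflates, bending the growing tip toward the goal.

```python
# Assumed proportional steering sketch for a vine robot that turns by
# inflating one side more than the other.
def differential_inflation(target_offset: float, gain: float = 0.5):
    """target_offset in [-1, 1]: negative means the goal is left of center.
    Returns (left, right) inflation levels around a baseline of 1.0."""
    delta = max(-1.0, min(1.0, gain * target_offset))
    # Goal to the right -> inflate the LEFT side more, bending the tip right.
    return 1.0 + max(0.0, delta), 1.0 + max(0.0, -delta)

left, right = differential_inflation(0.8)  # goal well right of center
print(left > right)  # True: left side inflates more, tip turns right
```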

To test their creation, the Stanford team ran the robot through a series of obstacle courses, and it successfully navigated its way through flypaper, glue and nails, before climbing up an ice wall. It didn’t get through unscathed, but being punctured by the nails didn’t stop it, thanks to its unique method of movement. Since the puncture site doesn’t move, the nail keeps the hole plugged while the tip of the robot continues to extend.

“The body lengthens as the material extends from the end but the rest of the body doesn’t move,” says Elliot Hawkes, lead author of the paper. “The body can be stuck to the environment or jammed between rocks, but that doesn’t stop the robot because the tip can continue to progress as new material is added to the end.”

The growing robot’s list of other abilities is as long as its own body. It was able to inflate itself to lift a 100-kg (220-lb) crate off the ground, squeeze through a gap just a tenth of its own diameter, spiral around on itself to build a free-standing structure, and pull a cable through its body. All of these functions could make it a useful partner in disaster relief, or just day-to-day building maintenance.

The current prototype is made of cheap plastic, but with the concept proven the researchers plan to try making future versions out of tougher materials like Kevlar. They could also grow using pressurized liquid instead of air, letting them deliver water to trapped people or to put out fires, and eventually be scaled down to a size that could see them moving through the human body less invasively.

The research was published in the journal Science Robotics, and the growing robot can be seen in action in the video below.

The 2017 Top Programming Languages – IEEE Spectrum

It’s summertime here at IEEE Spectrum, and that means it’s time for our fourth interactive ranking of the top programming languages. As with all attempts to rank the usage of different languages, we have to rely on various proxies for popularity. In our case, this means having data journalist Nick Diakopoulos mine and combine 12 metrics from 10 carefully chosen online sources to rank 48 languages. But where we really differ from other rankings is that our interactive allows you to choose how those metrics are weighted when they are combined, letting you personalize the rankings to your needs.

We have a few preset weightings—a default setting that’s designed with the typical Spectrum reader in mind, as well as settings that emphasize emerging languages, what employers are looking for, and what’s hot in open source. You can also filter out industry sectors that don’t interest you or create a completely customized ranking and make a comparison with a previous year.
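The weighting idea is straightforward to sketch (illustrative only; Spectrum’s actual metric normalization and data sources differ, and the scores below are made up): each language gets a score on each metric, and the user’s weights determine how those scores combine into a single ranking.

```python
# Sketch of a user-weighted language ranking over normalized metric scores.
def rank_languages(scores: dict, weights: dict) -> list:
    """scores: {language: {metric: value in [0, 1]}};
    weights: {metric: weight}. Returns languages, best first."""
    combined = {
        lang: sum(weights.get(m, 0) * v for m, v in metrics.items())
        for lang, metrics in scores.items()
    }
    return sorted(combined, key=combined.get, reverse=True)

scores = {  # invented numbers, purely for illustration
    "Python": {"github": 0.9, "jobs": 0.8},
    "C":      {"github": 0.7, "jobs": 0.9},
}
print(rank_languages(scores, {"github": 1.0, "jobs": 1.0}))  # Python first
print(rank_languages(scores, {"github": 0.2, "jobs": 1.0}))  # C edges ahead
```

Changing the weights flips the winner, which is exactly why the presets (default, emerging, jobs, open source) produce different Top Tens.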

So what are the Top Ten Languages for the typical Spectrum reader?

Python has continued its upward trajectory from last year and jumped two places to the No. 1 slot, though the top four—Python, C, Java, and C++—all remain very close in popularity. Indeed, in Diakopoulos’s analysis of what the underlying metrics have to say about the languages currently in demand by recruiting companies, C comes out ahead of Python by a good margin.

C# has reentered the top five, taking back the place it lost to R last year. Ruby has fallen all the way down to 12th position, but in doing so it has given Apple’s Swift the chance to join Google’s Go in the Top Ten. This is impressive, as Swift debuted on the rankings just two years ago. (Outside the Top Ten, Apple’s Objective-C mirrors the ascent of Swift, dropping down to 26th place.)

However, for the second year in a row, no new languages have entered the rankings. We seem to have entered a period of consolidation in coding as programmers digest the tools created to cater to the explosion of cloud, mobile, and big data applications.

Speaking of stabilized programming tools and languages, it’s worth noting Fortran’s continued presence right in the middle of the rankings (sitting still in 28th place), along with Lisp in 35th place and Cobol hanging in at 40th: Clearly even languages that are decades old can still have sustained levels of interest. (And although it just barely clears the threshold for inclusion in our rankings, I’m pleased to see that my personal favorite veteran language—Forth—is still there in 47th place).

Looking at the preset weighting option for open source projects, where we might expect a bias toward newer projects versus decades-old legacy systems, we see that HTML has entered the Top Ten there, rising from 11th place to 8th. (This is a great moment for us to reiterate our response to the complaint of some in years past of “HTML isn’t a programming language, it’s just markup.” At Spectrum, we have a very pragmatic view about what is, and isn’t, a recognizable programming language. HTML is used by coders to instruct computers to do things, so we include it. We don’t insist on, for example, Turing completeness as a threshold for inclusion—and to get really nitpicky, as user Jonny Lin pointed out last year, HTML has grown so complex that when combined with CSS, it is now Turing complete, albeit with a little prodding and requiring an appreciation of cellular automata.)

Finally, one last technical detail: We’ve made some tweaks under the hood to improve the robustness of the results, especially for less popular languages where the signals in the metrics are weaker and so more prone to statistical noise. So that users who look at historical data can make consistent comparisons, we’ve recalculated the previous year’s rankings with the new system. This could lead to some discrepancies between a language’s ranking in a given year as currently shown, versus the ranking that was shown in the original year of publication, but such differences should be relatively small and not affect the more popular languages in any case.