Security of Things – Schneier of Things

The takeaways are many. For the projection room they point to more vigilance – faster turnaround on software updates and a sharp eye for additional equipment tied into the network.

Read the whole thing and subscribe to his very readable newsletter at: 

           CRYPTO-GRAM

        February 15, 2017

        by Bruce Schneier
      CTO, Resilient Systems, Inc.
      [email protected]
     https://www.schneier.com


A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/0215.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/blog>, along with a lively and intelligent comment section. An RSS feed is available.


** *** ***** ******* *********** *************

In this issue:
    Security and the Internet of Things
    News
    Schneier News
    Security and Privacy Guidelines for the Internet of Things


** *** ***** ******* *********** *************

    Security and the Internet of Things



Last year, on October 21, your digital video recorder — or at least a DVR like yours — knocked Twitter off the Internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the Internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. “Regulation” might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the Internet. And where the risks include harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the Internet that has given us so much.

         -----     -----

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices — and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, and to deploy smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and — like everything else we’ve just listed — it’s on the Internet.

The Internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.
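To make those three parts concrete, here is a toy sketch in Python (not modeled on any real product) of a thermostat's sense-think-act loop:

    import random

    class Thermostat:
        def __init__(self, target_temp_c):
            self.target = target_temp_c
            self.furnace_on = False               # actuator state

        def read_sensor(self):
            # Stand-in for a real temperature sensor.
            return random.uniform(15.0, 25.0)

        def step(self):
            temp = self.read_sensor()             # sense
            self.furnace_on = temp < self.target  # think (trivially, here)
            return temp, self.furnace_on          # act: switch the furnace

    stat = Thermostat(target_temp_c=20.0)
    for _ in range(3):
        print(stat.step())

Swap the random number for a real sensor and the boolean for a relay, and this small loop already senses, thinks, and acts.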

You can think of the sensors as the eyes and ears of the Internet. You can think of the actuators as the hands and feet of the Internet. And you can think of the stuff in the middle as the brain. We are building an Internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things — or, more precisely, cyber-physical systems — autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

         -----     -----

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original Internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems — and solutions — of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it — the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the Internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

         -----     -----

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the Internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

         -----     -----

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the Internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and Internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the Internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability — one unsecured avenue for attack — and gets to choose how and when to attack. It’s simply not a fair battle.
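A back-of-the-envelope calculation shows how the attack surface compounds against the defender. Assume, purely for illustration, that each component of a system is independently free of exploitable flaws with probability 0.999:

    p_component = 0.999    # assumed per-component odds of having no flaw

    for n in (10, 100, 1_000, 10_000):
        p_system = p_component ** n     # the defender must win everywhere
        print(f"{n:>6} components: P(no hole anywhere) = {p_system:.4f}")

At 1,000 components the defender's odds are down to about 0.37, roughly a coin flip; at 10,000 they are effectively zero. The attacker needs only the one miss.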

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.
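Scaled up, that density gets alarming fast. A rough sketch, where the line counts and the exploitable fraction are illustrative guesses rather than measurements:

    bugs_per_kloc = 1       # "as many as one per 1,000 lines of code"
    vuln_fraction = 0.05    # assumption: 1 bug in 20 is exploitable

    for name, loc in [("home router firmware", 2_000_000),
                      ("modern car software", 100_000_000)]:
        bugs = loc // 1_000 * bugs_per_kloc
        vulns = int(bugs * vuln_fraction)
        print(f"{name}: ~{bugs:,} bugs, ~{vulns:,} potential vulnerabilities")

Even if those guesses are off by an order of magnitude, the conclusion survives: large codebases ship with vulnerabilities.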

Truism No. 3: Connecting everything to each other via the Internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical Internet function for many major Internet sites. So when Dyn went down, so did all those popular websites.
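The mechanics of that chain reaction are worth spelling out: the downed sites themselves were never attacked; they merely shared one upstream dependency. A minimal sketch, with hypothetical data:

    # Hypothetical mapping of sites to the DNS provider they depend on.
    sites = {
        "social.example": "dyn",
        "forum.example":  "dyn",
        "video.example":  "dyn",
        "other.example":  "another-dns",
    }

    down = "dyn"    # the one provider the botnet knocked over
    dark = [s for s, provider in sites.items() if provider == down]
    print(f"{len(dark)} of {len(sites)} sites unreachable:", dark)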

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the Internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the Internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the Internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The Internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the results.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

         -----     -----

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars — literally — in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

         -----     -----

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for Internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the Internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the Internet can affect the world in a direct and physical manner.

         -----     -----

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to Internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the Internet. The Internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate — although that’s often a big part of it — and more that governments recognize the importance of the new technologies.

The Internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that any regulatory agency can keep the best of this approach, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing in people from outside of government, and both have learned how to work closely with existing agencies. Any Internet regulatory agency will similarly need to engage in a high level of collaborative regulation — both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And Internet-enabled devices should retain some minimal functionality if disconnected from the Internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analogue for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Government has the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the Internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the Internet forever. We need to change the fabric of the Internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the Internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

         -----     -----

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in Internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting Internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. We need to ensure that even though people in this field won’t make as much as they would in a high-tech start-up, they will still have viable careers. The security of our computerized and networked future — meaning the security of ourselves, families, homes, businesses, and communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

         -----     -----

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the Internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in “New York Magazine.”
http://nymag.com/selectall/2017/01/the-internet-of-things-dangerous-future-bruce-schneier.html


** *** ***** ******* *********** *************

    News



Interesting post on Cloudflare’s experience with receiving a National Security Letter.
https://blog.cloudflare.com/cloudflares-transparency-report-for-second-half-2016-and-an-additional-disclosure-for-2013-2/
News article.
https://techcrunch.com/2017/01/11/cloudflare-explains-how-fbi-gag-order-impacted-business/

Complicated reporting on a WhatsApp security vulnerability, which is more of a design decision than an actual vulnerability.
https://www.schneier.com/blog/archives/2017/01/whatsapp_securi.html
Be sure to read Zeynep Tufekci’s letter to the Guardian, which I also signed.
http://technosociology.org/?page_id=1687

Brian Krebs uncovers the Mirai botnet author.
https://krebsonsecurity.com/2017/01/who-is-anna-senpai-the-mirai-worm-author/#more-37412

There’s research in using a heartbeat as a biometric password. No details in the article. My guess is that there isn’t nearly enough entropy in the reproducible biometric, but I might be surprised. The article’s suggestion to use it as a password for health records seems especially problematic. “I’m sorry, but we can’t access the patient’s health records because he’s having a heart attack.”
https://www.ecnmag.com/news/2017/01/heartbeat-could-be-used-password-access-electronic-health-records
I wrote about this before here.
https://www.schneier.com/blog/archives/2015/08/heartbeat_as_a_.html
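To see why entropy is the sticking point, compare guess-space sizes. The biometric figure below is an assumption chosen for scale, not a measurement:

    import math

    password_space = 64 ** 8     # 8 random characters, 64-symbol alphabet
    biometric_states = 10_000    # assumed distinguishable states of a matcher

    print(f"password:  {math.log2(password_space):.0f} bits")    # ~48 bits
    print(f"heartbeat: {math.log2(biometric_states):.1f} bits")  # ~13 bits

Thirteen-odd bits can be brute-forced instantly, and a biometric that must match reliably through day-to-day physiological noise cannot afford many more.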

In early January, the Obama White House released a report on privacy: “Privacy in our Digital Lives: Protecting Individuals and Promoting Innovation.” The report summarizes things the administration has done, and lists future challenges. It’s worth reading; I especially like the framing of privacy as a right in President Obama’s introduction. The document was originally on the whitehouse.gov website, but was deleted in the Trump transition.
https://www.schneier.com/blog/archives/2017/01/new_white_house.html
https://www.schneier.com/blog/files/Privacy_in_Our_Digital_Lives.pdf

NextGov has a nice article summarizing President Obama’s accomplishments in Internet security: what he did, what he didn’t do, and how it turned out.
http://www.nextgov.com/cybersecurity/2017/01/obamas-cyber-legacy-he-did-almost-everything-right-and-it-still-turned-out-wrong/134612/

Good article that crunches the data and shows that the press’s coverage of terrorism is disproportionate to its comparative risk.
https://priceonomics.com/our-fixation-on-terrorism
This isn’t new. I’ve written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic. We tend to infer the probability of something by how easy it is to bring examples of the thing to mind. So if we can think of a lot of tiger attacks in our community, we infer that the risk is high. If we can’t think of many lion attacks, we infer that the risk is low. But while this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It’s not the media’s fault. By definition, news is “something that hardly ever happens.” But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.
https://www.schneier.com/blog/archives/2007/05/rare_risk_and_o_1.html
https://www.schneier.com/blog/archives/2009/03/fear_and_the_av.html
https://www.schneier.com/blog/archives/2007/05/rare_risk_and_o_1.html
https://www.schneier.com/essays/archives/2008/01/the_psychology_of_se.html

Interesting research on cracking the Android pattern-lock authentication system with a computer vision algorithm that tracks fingertip movements.
http://www.lancaster.ac.uk/staff/wangz3/publications/ndss_17.pdf
https://phys.org/news/2017-01-android-device-pattern.html

Reports are that President Trump is still using his old Android phone. There are security risks here, but they are not the obvious ones. I’m not concerned about the data. Anything he reads on that screen is coming from the insecure network that we all use, and any e-mails, texts, Tweets, and whatever are going out to that same network. But this is a consumer device, and it’s going to have security vulnerabilities. He’s at risk from everybody, ranging from lone hackers to the better-funded intelligence agencies of the world. And while the risk of a forged e-mail is real — it could easily move the stock market — the bigger risk is eavesdropping. That Android has a microphone, which means that it can be turned into a room bug without anyone’s knowledge. That’s my real fear.
https://arstechnica.com/tech-policy/2017/01/post-inauguration-president-trump-still-uses-his-old-android-phone/
https://www.nytimes.com/2017/01/25/us/politics/president-trump-white-house.html
https://www.wired.com/2017/01/trump-android-phone-security-threat/
http://www.politico.com/tipsheets/morning-cybersecurity/2017/01/the-changing-face-of-cyber-espionage-218420
https://www.lawfareblog.com/president-trumps-insecure-android

Mike Specter has an interesting idea on how to make biometric access-control systems more secure: add a duress code. For example, you might configure your iPhone so that either thumb or forefinger unlocks the device, but your left middle finger disables the fingerprint mechanism (useful in the US where being compelled to divulge your password is a 5th Amendment violation but being forced to place your finger on the fingerprint reader is not) and the right middle finger permanently wipes the phone (useful in other countries where coercion techniques are much more severe).
http://www.mit.edu/~specter/articles/17/deniability1.html
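As a sketch of what such a policy might look like (everything here is hypothetical; no phone exposes an API like this today):

    from enum import Enum, auto

    class Action(Enum):
        UNLOCK = auto()
        DISABLE_BIOMETRICS = auto()    # force fallback to the passcode
        WIPE = auto()                  # destroy keys under severe coercion

    FINGER_POLICY = {
        "right_thumb":      Action.UNLOCK,
        "right_forefinger": Action.UNLOCK,
        "left_middle":      Action.DISABLE_BIOMETRICS,
        "right_middle":     Action.WIPE,
    }

    def on_fingerprint(finger):
        # Unrecognized fingers fail closed rather than unlocking.
        return FINGER_POLICY.get(finger, Action.DISABLE_BIOMETRICS)

    print(on_fingerprint("left_middle"))    # Action.DISABLE_BIOMETRICS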

Research into Twitter bots. It turns out that there are a lot of them.
http://www.bbc.com/news/technology-38724082
In a world where the number of fans, friends, followers, and likers is social currency — and where the number of reposts is a measure of popularity — this kind of gaming the system is inevitable.

In late January, President Trump signed an executive order affecting the privacy rights of non-US citizens with respect to data residing in the US. Here’s the relevant text: “Privacy Act.  Agencies shall, to the extent consistent with  applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.”
https://www.whitehouse.gov/the-press-office/2017/01/25/presidential-executive-order-enhancing-public-safety-interior-united
At issue is the EU-US Privacy Shield, which is the voluntary agreement among the US government, US companies, and the EU that makes it possible for US companies to store Europeans’ data without having to follow all EU privacy requirements. Interpretations of what this means are all over the place: from extremely serious, to more measured, to don’t worry and we still have PPD-28.
https://www.theregister.co.uk/2017/01/26/trump_blows_up_transatlantic_privacy_shield/
https://techcrunch.com/2017/01/26/trump-order-strips-privacy-rights-from-non-u-s-citizens-could-nix-eu-us-data-flows/
https://epic.org/2017/01/trump-administration-limits-sc-1.html
https://www.lawfareblog.com/interior-security-executive-order-privacy-act-and-privacy-shield
This is clearly still in flux. And, like pretty much everything so far in the Trump administration, we have no idea where this is headed.

Attackers held an Austrian hotel network for ransom, demanding $1,800 in bitcoin to unlock the network. Among other things, the locked network wouldn’t allow any of the guests to open their hotel room doors (although this is being disputed). I expect IoT ransomware to become a major area of crime in the next few years. How long before we see this tactic used against cars? Against home thermostats? Within the year is my guess. And as long as the ransom price isn’t too onerous, people will pay.
https://www.nytimes.com/2017/01/30/world/europe/hotel-austria-bitcoin-ransom.html
http://www.thelocal.at/20170128/hotel-ransomed-by-hackers-as-guests-locked-in-rooms

Here’s a story about data from a pacemaker being used as evidence in an arson conviction.
http://www.networkworld.com/article/3162740/security/cops-use-pacemaker-data-as-evidence-to-charge-homeowner-with-arson-insurance-fraud.html
https://boingboing.net/2017/02/01/suspecting-arson-cops-subpoen.html
https://www.washingtonpost.com/news/to-your-health/wp/2017/02/08/a-man-detailed-his-escape-from-a-burning-house-his-pacemaker-told-police-a-different-story/

Here’s an article about the US Secret Service and their Cell Phone Forensics Facility in Tulsa.
http://www.csmonitor.com/World/Passcode/2017/0202/Hunting-for-evidence-Secret-Service-unlocks-phone-data-with-force-or-finesse
I said it before and I’ll say it again: the FBI needs technical expertise, not back doors.

In January we learned that a hacker broke into Cellebrite’s network and stole 900GB of data. Now the hacker has dumped some of Cellebrite’s phone-hacking tools on the Internet.
https://www.schneier.com/blog/archives/2017/02/hacker_leaks_ce.html

The Linux encryption app Cryptkeeper has a rather stunning security bug: the single-character decryption key “p” decrypts everything.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852751
https://www.theregister.co.uk/2017/01/31/cryptkeeper_cooked/
In 2013, I wrote an essay about how an organization might go about designing a perfect backdoor. This one seems much more like a bad mistake than deliberate action. It’s just too dumb, and too obvious. If anyone actually used Cryptkeeper, it would have been discovered long ago.
https://www.schneier.com/essays/archives/2013/10/how_to_design_and_de.html
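According to the bug report, the apparent cause is that Cryptkeeper drove encfs’s interactive setup with a canned script: send “p” to select paranoia mode, then send the password. A newer encfs dropped a prompt, so the scripted “p” was consumed as the password itself. A simplified simulation of that failure mode:

    def setup_old(answers):
        mode = answers.pop(0)    # expects "p" to pick paranoia mode
        return answers.pop(0)    # then reads the real password

    def setup_new(answers):
        return answers.pop(0)    # mode prompt gone: first answer IS the password

    scripted = ["p", "correct horse battery staple"]
    print(setup_old(list(scripted)))    # -> the real password
    print(setup_new(list(scripted)))    # -> "p": every volume keyed to "p"

Driving another program's prompts with a fixed script fails silently the moment those prompts change.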

Here’s a nice profile of Citizen Lab and its director, Ron Deibert.
https://motherboard.vice.com/en_us/article/ron-deiberts-lab-is-the-robin-hood-of-cyber-security
Citizen Lab is a jewel. There should be more of them.

Wired is reporting on a new slot machine hack. A Russian group has reverse-engineered a particular brand of slot machine — from Austrian company Novomatic — and can simulate and predict the pseudo-random number generator.
https://www.wired.com/2017/02/russians-engineer-brilliant-slot-machine-cheat-casinos-no-fix/
The easy solution is to use a random-number generator that accepts local entropy, like Fortuna. But there’s probably no way to easily reprogram those old machines.
https://www.schneier.com/academic/fortuna/
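For flavor, here is a toy sketch of the Fortuna idea in Python: accumulate hard-to-predict local events into a pool and reseed the generator from it. This is an illustration only; a real system needs a vetted CSPRNG:

    import hashlib, os, time

    pool = hashlib.sha256()    # one entropy pool (real Fortuna keeps 32)

    def add_event(data):
        pool.update(data)      # mix a local, hard-to-predict event in

    def reseed_and_generate(n):
        seed = pool.digest()
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    add_event(os.urandom(16))                        # stand-in hardware source
    add_event(str(time.perf_counter_ns()).encode())  # timing jitter
    print(reseed_and_generate(16).hex())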

This online safety guide was written for people concerned about being tracked and stalked online. It’s a good resource.
http://chayn.co/safety/

Interesting research: “De-anonymizing Web Browsing Data with Social Networks”:
http://randomwalker.info/publications/browsing-history-deanonymization.pdf

The Center for Strategic and International Studies (CSIS) published “From Awareness to Action: A Cybersecurity Agenda for the 45th President.” There’s a lot I agree with — and some things I don’t.
https://csis-prod.s3.amazonaws.com/s3fs-public/publication/170110_Lewis_CyberRecommendationsNextAdministration_Web.pdf
https://www.csis.org/news/cybersecurity-agenda-45th-president

There’s a really interesting paper from George Washington University on hacking back: “Into the Gray Zone: The Private Sector and Active Defense against Cyber Threats.” I’ve never been a fan of hacking back. There’s a reason we no longer issue letters of marque or allow private entities to commit crimes, and hacking back is a form of vigilante justice. But the paper makes a lot of good points.
https://cchs.gwu.edu/sites/cchs.gwu.edu/files/downloads/CCHS-ActiveDefenseReportFINAL.pdf
Here are three older papers on the topic.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2270673
http://ethics.calpoly.edu/hackingback.pdf
http://jolt.law.harvard.edu/articles/pdf/v25/25HarvJLTech429.pdf

Pew Research just published their latest research data on Americans and their views on cybersecurity:
http://www.pewinternet.org/2017/01/26/americans-and-cybersecurity/

Interesting article in “Science” discussing field research on how people are radicalized to become terrorists.
http://science.sciencemag.org/content/355/6323/352.full


** *** ***** ******* *********** *************

    Schneier News



I spoke at the 2016 Blockchain Workshop in Nairobi. Here’s a video:
https://www.youtube.com/watch?v=FAskMLNwRPY


** *** ***** ******* *********** *************

    Security and Privacy Guidelines for the Internet of Things



Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

* “Internet of Things (IoT) Security and Privacy Recommendations,” Broadband Internet Technical Advisory Group, Nov 2016.
http://www.bitag.org/documents/BITAG_Report_-_Internet_of_Things_(IoT)_Security_and_Privacy_Recommendations.pdf

* “IoT Security Guidance,” Open Web Application Security Project (OWASP), May 2016.
https://www.owasp.org/index.php/IoT_Security_Guidance

* “Strategic Principles for Securing the Internet of Things (IoT),” US Department of Homeland Security, Nov 2016.
https://www.dhs.gov/sites/default/files/publications/Strategic_Principles_for_Securing_the_Internet_of_Things-2016-1115-FINAL_v2-dg11.pdf

* “Security,” OneM2M Technical Specification, Aug 2016.
http://www.onem2m.org/images/files/deliverables/Release2/TR-0008-Security-V2_0_0.pdf

* “Security Solutions,” OneM2M Technical Specification, Aug 2016.
http://onem2m.org/images/files/deliverables/Release2/TS-0003_Security_Solutions-v2_4_1.pdf

* “IoT Security Guidelines Overview Document,” GSM Alliance, Feb 2016.
http://www.gsma.com/connectedliving/wp-content/uploads/2016/02/CLP.11-v1.1.pdf

* “IoT Security Guidelines For Service Ecosystems,” GSM Alliance, Feb 2016.
http://www.gsma.com/connectedliving/wp-content/uploads/2016/02/CLP.12-v1.0.pdf

* “IoT Security Guidelines for Endpoint Ecosystems,” GSM Alliance, Feb 2016.
http://www.gsma.com/connectedliving/wp-content/uploads/2016/02/CLP.13-v1.0.pdf

* “IoT Security Guidelines for Network Operators,” GSM Alliance, Feb 2016.
http://www.gsma.com/connectedliving/wp-content/uploads/2016/02/CLP.14-v1.0.pdf

* “Establishing Principles for Internet of Things Security,” IoT Security Foundation, undated.
https://iotsecurityfoundation.org/wp-content/uploads/2015/09/IoTSF-Establishing-Principles-for-IoT-Security-Download.pdf

* “IoT Design Manifesto,” www.iotmanifesto.com, May 2015.
https://www.iotmanifesto.com/wp-content/themes/Manifesto/Manifesto.pdf

* “NYC Guidelines for the Internet of Things,” City of New York, undated.
https://iot.cityofnewyork.us/

* “IoT Security Compliance Framework,” IoT Security Foundation, 2016.
https://iotsecurityfoundation.org/wp-content/uploads/2016/12/IoT-Security-Compliance-Framework.pdf

* “Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development,” IoTIAP, Nov 2016.
http://www.iotiap.com/principles-2016_12_02.html

* “IoT Trust Framework,” Online Trust Alliance, Jan 2017.
http://otalliance.actonsoftware.com/acton/attachment/6361/f-008d/1/-/-/-/-/IoT%20Trust%20Framework.pdf

* “Five Star Automotive Cyber Safety Framework,” I am the Cavalry, Feb 2015.
https://www.iamthecavalry.org/wp-content/uploads/2014/08/Five-Star-Automotive-Cyber-Safety-February-2015.pdf

* “Hippocratic Oath for Connected Medical Devices,” I am the Cavalry, Jan 2016.
https://www.iamthecavalry.org/wp-content/uploads/2016/01/I-Am-The-Cavalry-Hippocratic-Oath-for-Connected-Medical-Devices.pdf

* “Industrial Internet of Things Volume G4: Security Framework,” Industrial Internet Consortium, 2016.
http://www.iiconsortium.org/pdf/IIC_PUB_G4_V1.00_PB-3.pdf

* “Future-proofing the Connected World: 13 Steps to Developing Secure IoT Products,” Cloud Security Alliance, 2016.
https://downloads.cloudsecurityalliance.org/assets/research/Internet-of-things/future-proofing-the-connected-world.pdf

Other, related, items:

* “We All Live in the Computer Now,” The Netgain Partnership, Oct 2016.
https://drive.google.com/file/d/0B9qOTaXg3UmRZlhWQk5LOUo5Ykk/view

* “Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things,” Electronic Privacy Information Center, Jun 2013.
https://epic.org/privacy/ftc/EPIC-FTC-IoT-Cmts.pdf

* “Internet of Things Software Update Workshop (IoTSU),” Internet Architecture Board, Jun 2016.
https://www.iab.org/activities/workshops/iotsu/

* “Multistakeholder Process; Internet of Things (IoT) Security Upgradability and Patching,” National Telecommunications & Information Administration, Jan 2017.
https://www.ntia.doc.gov/other-publication/2016/multistakeholder-process-iot-security

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.
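As a thumbnail of what those shared recommendations amount to in practice, here is a hypothetical set of shipping defaults for a device; the keys are invented for illustration and drawn from none of the documents above:

    DEVICE_DEFAULTS = {
        "admin_password": None,          # no shared default; forced per-device setup
        "telnet_enabled": False,         # known-weak legacy service stays off
        "remote_admin_enabled": False,   # no open ports out of the box
        "auto_update": True,             # patchable, and patched by default
        "update_channel": "https://updates.vendor.example/stable",  # signed updates
        "data_collection": "minimal",    # privacy-preserving default
    }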

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in email.



** *** ***** ******* *********** *************

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books — including “Liars and Outliers: Enabling the Trust Society Needs to Survive” — as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Resilient Systems, Inc.

Copyright (c) 2017 by Bruce Schneier.

Backgrounds of Sounds – Oscars

The scripts for this year's Oscar-contending movies have been available for a while (Oscar's Scripts and Cameras), and now in its 2nd year is The Dolby Institute Podcast Series featuring Conversations with Sound Artists: 2017 Oscar Edition.

These are a set of conversations with the sound artists who've been nominated for Academy Awards for Best Achievement in Sound Editing and Best Achievement in Sound Mixing. They are made in association with the Sound Works Collection, which is also a not-to-be-missed set of recordings. Now, if you're finished with bothering me, I'm listening to The Little Prince.

EU Cinema-Going Report

In Finland, for example, positions 2 and 4 are American studio tentpole movies (Angry Birds and The Secret Life of Pets!!! Wait, this just in…Angry Birds is a "National Qualified Production" of Finland), while in Poland – with a roughly 17% increase in box office and admissions – it was Rogue One and Ice Age in positions 3 and 5. Germany had a 12% fall from the previous excellent year, mostly because the local movies didn't do as well, and there 5 of the top 5 were tentpoles. The Top 5 Films per Territory list is marked provisional, but it makes for fun, interesting reading…and it is attached to this article.

We have a question in to UNIC as to whether the long-term picture is generally the same as in the US, which is: pick any set of ten years since the '60s and the trend is always rising, with logical ups and downs within that time.

Another question: This year has been notable in that Chinese and Korean movies are showing up at the top of the international box office figures, and showing up – along with more movies from India – in the local US multiplexes. Are movies from those countries showing up regularly in the EU?

The attachments follow this press release:

UNIC: EUROPEAN CINEMA INDUSTRY SEES FURTHER GROWTH IN 2016

Brussels: 9 February 2017 – The International Union of Cinemas (UNIC), the body representing European cinema trade associations and key operators, has today released its provisional update on admissions and box office revenues across Europe for 2016.

While some data remains to be collated and figures for certain territories are based only on initial estimates, the overview provided by UNIC represents the first wide-ranging assessment of the performance of the European cinema sector last year. More detailed final data on the performance of each territory will be released in Spring 2017.

European cinema-going in 2016

2016 has been a positive year for cinema operators in most European territories. Total admissions for EU Member States (where data was available) increased by 1.6 per cent compared to 2015, while total admissions for all UNIC territories* increased by 2.6 per cent, totalling more than 1.26 billion visits to the cinema.

While the increase was also the result of a wide range of highly successful local films across Europe, box office was again dominated by strong international titles, including, but not limited to, Rogue One: A Star Wars Story, Zootopia, Fantastic Beasts and Where to Find Them, The Secret Life of Pets and Ice Age: Collision Course.

Once final box office figures for all UNIC territories are available, total box office revenues will be shared.

Increase in France, Russia and Southern Europe; stable results in UK and Turkey

France saw admissions increase by 3.6 per cent compared to 2015 and achieved its second-best performance of the past 50 years. Similarly, Russia enjoyed very positive results (box office +9.6 per cent / admissions +10.1 per cent), asserting itself as the second biggest UNIC territory with over 190 million admissions.

The Spanish cinema industry reached the symbolic mark of 100 million admissions, bolstered by popular local co-production A Monster Calls and despite a continuing high VAT rate on cinema tickets. In Italy, the local films Quo Vado? and Perfetti Sconosciuti helped the industry reach positive results in 2016 (box office +3.9 per cent / admissions +6.1 per cent). Following a highly successful 2015, Portugal again enjoyed a further increase in results (box office +2.2 per cent / admissions +2.2 per cent).

While the UK box office increased by 0.5 per cent in 2016 – beating a record set in 2015 – admissions slightly decreased by 2.1 per cent. This was primarily due to the unprecedented success of SPECTRE and Star Wars: The Force Awakens in the previous year. A similar trend was observed in Turkey (box office +2.2 per cent / admissions -3.0 per cent), where the box office was once again dominated by local productions.

Decrease in Germany; varying fortunes in Scandinavia

The German cinema sector suffered a 12.4 per cent decrease in box office and a 13 per cent decrease in admissions in 2016, as primarily local films found it hard to reproduce the record-breaking performances of 2015. A similar trend could be observed in Austria (box office -2.4 per cent / admissions -5.2 per cent) and Switzerland (box office -9.4 per cent / admissions -7.2 per cent).

Box office and admissions in Scandinavian countries were bolstered by strong local titles, such as En man som heter Ove in Sweden (box office +6.3 per cent / admissions +4.2 per cent) and Kongens nei in Norway (box office +11.7 per cent / admissions +9.0 per cent). Following record performances in 2015, and despite local productions leading the box office in 2016, Denmark (box office -6.0 per cent / admissions -5.1 per cent) and Finland (box office -0.8 per cent / admissions -1.8 per cent) did not share the same fortune.

Significant growth in Central and Eastern Europe

Reaching over 50 million admissions, the Polish sector recorded its best year ever (box office +17.6 per cent / admissions +16.5 per cent), bolstered by three local films ranked in the box office top five. Similarly, Slovakia (box office +23.5 per cent / admissions +23.8 per cent) and the Czech Republic (box office +20.5 per cent / admissions +20.6 per cent) enjoyed the most significant growth across UNIC territories in 2016. Several other Central and Eastern European countries experienced similarly positive developments in 2016, notably Bulgaria (box office +5.5 per cent / admissions +3.7 per cent), Hungary (box office +13.1 per cent / admissions +12.1 per cent) and Romania (box office +10.2 per cent / admissions +7.5 per cent). Positive results could also be observed in Estonia (box office +13.5 per cent / admissions +6.1 per cent), Latvia (box office +10.7 per cent / admissions +5.5 per cent) and Lithuania (box office +14.9 per cent / admissions +9.8 per cent).

Admissions per capita, European film share, outlook for 2017

Admissions per capita for all UNIC territories (where data was available) came in at 1.6 visits per year, a slight 0.1 point increase from 2015. France and Ireland (both at 3.3) experienced the highest rates of cinema-going.

Due to incomplete figures for several countries, it is too early to assess the total market share for European films in 2016.

The industry looks forward to a busy and exciting release schedule in 2017, one full of promising European as well as international titles.

Attachments

Table with tentative market performance indicators for 2016 (where available). Chart of top 5 films for selected territories.

Notes for editors

UNIC is the European trade grouping representing cinema exhibitors and their national trade associations across 36 European territories. More information available on unic-cinemas.org.

* Including Albania, Bosnia and Herzegovina, Israel, Macedonia, Montenegro, Norway, Russia, Serbia, Switzerland and Turkey.

Accessibility Technology Requirements for Cinema

A White Paper in which Harold Hallikainen of USL/QSC gives the critical information about the most recent United States Department of Justice rules for accessibility equipment in the cinema auditorium.

The deadlines (from last page of report) are:

  • Assistive Listening Systems are required to be operational now (rules requiring them are more than 20 years old). 
  • Staff Requirements are effective January 17, 2017, if a theater is providing closed captioning or audio description.
  • Advertising Requirements are effective January 17, 2017, if a theater is providing closed captioning or audio description.
  • Closed Captioning and Audio Description equipment is to be operational by June 2, 2018. However, if a theater converts from film to digital after December 2, 2016, closed captioning and audio description equipment must be installed within 6 months of the conversion or December 2, 2018, whichever is later.

Europe Report and Conference from UNIC

UNIC – Union Internationale des Cinémas – has presented a new report titled Innovation and the Big Screen. With as many useful graphics as words, and with concise summaries of the many elements made possible by the rollout of digital cinema, the presentation offers an overview of the potential for, and needs of, cinema(s) in the future.

After stating that digital technology has established unparalleled and diverse film availability to consumers at more than 38,000 member cinema screens, and that digital technology has also been key to the strength of VOD [among other distractions for the cinema audience], the report points out:

The role of cinemas in raising awareness around and providing access to a diverse European film offer is therefore ever more important to maintain competitiveness and diversity in European cinema. UNIC data for a number of territories shows that the level of local and European films enjoyed in cinemas has continuously increased over the past years if one takes a longer-term perspective. In this context, support networks such as Europa Cinemas help maintain audience demand for non-national European titles and are the best way to promote a pan-European market for local films.

There are other reports available for more in-depth detail on the many interesting segments of cinema's place in the social and financial fabric. UNIC has a few, and MEDIA Salles has kept an ongoing record of the industry.

On the other hand, this report is meant to give a full overview of the current and future well-being of cinema now that the roll-out of digital is complete. It highlights many of the different commercial opportunities that have been, or shall be, made available to exhibition by means of digital cinema. Many have been successful, but never well integrated or well scaled. This is true throughout the world, and the reasons range from corporate miscalculation to technology and standards not being quite ready…and just plain bad luck.

The report seems to be a highlight piece for an upcoming European Parliament conference on 8 February titled "INNOVATION AND THE BIG SCREEN – The Future of Cinema in Digital Europe". This 3-hour panel in Brussels will review many of these topics, keying on growth and strategies for fostering innovation in cinema. Innovation and the Big Screen conference at the European Parliament on 8 February 2017 | UNIC

It is probably no coincidence that this report is also well timed for the Event Cinema Association event that begins tomorrow in London – ECACon 2017. The growing strengths of that organization will perhaps bring momentum through to CinemaCon and throughout the world.

Celluloid Junkie has an interview article with the principals of UNIC at: CJ + UNIC Cinema Innovation – Interview with VP European Commission Andrus Ansip and UKCA/UNIC's Phil Clapp – Celluloid Junkie

Cinema Stats 2017

DCP Tools: Windows for Virtual Developers

And another: https://az792536.vo.msecnd.net/vms/release_notes_license_terms_8_1_15.pdf

Free means not activating the installation. There may be downsides to that, but most people in the ‘every once in a while for testing’ world won’t notice. It has long been the practice of Microsoft to allow users to test the OS before buying, but sometimes you just don’t know where to look through the forest of spam and trojan traps to find the One Safe One.

Good luck to us all.

DCP Training Tutorials

Or how about a Festival Runner's guide? Well, at least that gets you off to a good start, with autoDCP's Festival Runner's Guide to DCPs – AutoDCP Easy automated tool to make a DCP.

There are many other excellently written lessons for the movie or doc maker, among them:

Common mistakes when making a DCP, a must read. – AutoDCP Easy automated tool to make a DCP, or 

How to get your trailer DCP to pass Deluxe QC – AutoDCP Easy automated tool to make a DCP, and many others.

DCPs are easy to make now. It takes a few nights figuring out the mistakes and options, then sure enough, it is on your disk. Then a few nights figuring out how to make a drive that can be checked at the theater, and a couple of nights figuring out how to straighten out the problems.

And now, the option of uploading your .mov file and getting back a folder full of DCP.

CinemaCon/NAB Split

For a number of years one could land in Vegas and within a two week period see CinemaCon and NAB …sometimes one was immediately after, sometimes the other.

The best years were when there was enough time that the European Digital Cinema Forum – the EDCF – could put together a bus tour with members and friends to various manufacturers, post and cinema facilities and…studios. Great people talking on a bus. Alas, the schedule in 2016 was too close. One fears that the schedule in 2017 might be too far apart. More as it happens.

This year it is: 

CinemaCon – 27-30 March

NAB/SMPTE Weekend of Cinema – 22-23 April

NAB – 24-27 April

BKSTS Now IMIS – Int'l Moving Image Society

It is planned to stream our events internationally so that those near and far can participate. It will be possible to watch them live or on demand.

We have been working diligently on crafting you a new website with great and intuitive functionality. It can be found at: https://www.societyinmotion.com  Through the website you will be able to keep track of your membership, book your place at upcoming events, and see our latest articles and content as we move forward. Stay tuned on how to set up your membership!

We have begun to form a team of content developers who are eagerly scouring the world to investigate the newest trends, offer tips on craft skills, and give advice on what to do. If you are interested in participating – either as a developer, or by offering yourself, a colleague, or a suggested topic to cover – please email this information to: [email protected]

Lastly, we are beginning a new initiative to recruit members and are offering free Associate membership for the remainder of the 2016 year. If you know anyone who would like to be a part of the future of the industry, please invite them to join HERE.

We hope you're excited; we are.

Loudness in Cinema – IBC 2016 Presentation

A Complete Facility Inventory tool with RESTful hooks for an FLM interface is basically working. A Manufacturer’s Product Line Input Tool is in the works.

Loudness Intro Inventory System

The Audio Maintenance and Set-up sheets from the upcoming SMPTE Modern Calibration Procedures are laid out and working.

Loudness Intro SMPTE Audio Survey and Maintenance System

Daily/Weekly/Monthly Checklists are working, but need some detail added.

Loudness Intro Checklist System

These are all available for testing at the site: DCinemaCompliance.com

The Projectionist Training site DCinemaTraining.com needs 2 more chapters and a QA pass.

Loudness Intro Projectionist Training

Digital Test Tools, the hardware company, has a developed monitoring product waiting for production financing.


Today’s topic is that major tangent of Quality Assurance, Loudness in Cinemas.

Loudness In Cinema Intro MainSlide

We're not dealing here today with Fletcher-Munson-curve loudness. We're dealing with what it is when the audience member says "It's too loud."

Loudness In Cinema Definition 0

We'll start with the reminder that in a quiet room, the mosquito, which generates 20 millionths of a pascal (20 × 10⁻⁶ Pa – the nominal threshold of hearing), is too loud.
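For scale, sound pressure level in decibels is computed against that same 20 µPa reference. A minimal sketch of the conversion (Python; the two pressure values are just illustrative):

    import math

    P_REF = 20e-6  # reference pressure: 20 micropascals, roughly the threshold of hearing

    def spl_db(pressure_pa):
        # Convert an RMS sound pressure in pascals to a sound pressure level in dB
        return 20 * math.log10(pressure_pa / P_REF)

    print(spl_db(20e-6))  # 0 dB SPL – quiet-room mosquito territory
    print(spl_db(2.0))    # 100 dB SPL – loud cinema-peak territory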

The attempt to create a clever Venn Diagram and a Loudness Matrix turned out to be a ridiculous proposition.

Loudness In Cinema DCPs - Definitions Venn

Too many tangents.

Every time I interviewed someone else, it was obvious: It’s tangents all the way around.

Loudness In Cinema Tangents of Goals and Purposes

One thing was clear throughout: The word of the year is “Annoyance”.

We’ll take up the tangents by Stakeholder segment, attempting to include a

  • What Can Be Done or
  • What Should Be Studied

for each stakeholder.

Loudness In Cinema Stakeholder Goals and Purposes

We start with this poignant quote from Hans Zimmer, who has taken a lot of abuse in the last couple of years, along with Christopher Nolan.

Loudness In Cinema Hans Zimmer Quote

Creative Intent

Not much can be said about this – it is why we chose this field, this technology. For people in sound post production, it means that after years of getting a movie made and locked, music, dialog and effects are largely forced to be created and assembled in a few weeks.

A few items on the To Be Studied List are:

  • whether mixes are actually louder, and
  • why mixes are louder, and
  • how to communicate better that turning down a mix at the audio processor actually makes the critical dialog less intelligible, and
  • separating fact from anecdote in the stories that mixers in the EU are messing with the master gain to match what is happening in auditoriums.
    (From several interviews at major stages and mix rooms in LA, this isn't happening there, and most engineers sneer at the idea of it.)

Mixers go to a great deal of trouble to get the mix right, then check it at other auditoriums – not just premiere rooms but the other auditoriums where we common people go. But they bring in their own projectors and tune the room, so it isn't exactly what we common people see and hear…note to self – create studies to find:

  • whether any producer or director sits for 15 minutes in dim-ish light with dim-ish music until their hearing sensitizes to quiet, then gets blasted by TASA-compliant but fully compressed, loud trailers (not their own);
  • whether it is the loudness of the trailers that most people respond to as too loud, and not the movies.

Insert anecdote about how this is what my wife now trusts when I explain it to her while in the theater.

  • how many of the public are complainers
  • how many of the public would choose to hear the movies as the Creatives intended.

A side study would be to

  • find out the levels at which people play these movies (or their music) when they listen with earbuds on their phones and tablets.

David Monk makes the observation that once he knows what is being said – on his normal TV setup, for example – the words are obvious. But sometimes he only finds these obvious words after replaying with subtitles on. Several people interviewed subsequently tell of doing and finding the same thing. David suggests that producers and mixers don't know what we don't know: words that are obvious to them, after hearing them so many times during production and mixing, seem as if they would be obvious in any circumstance.

For another view of the topic – one which verifies the theory of a prominent studio mixer/exec – I flew in with a woman who trained at a major film university and has subsequently mixed and directed several movies. She wasn't trained with, and has never mixed with, a VU meter. Instead of building a mix around dialog sitting between -14 and -20 on a VU meter, the mix is done "at a comfortable level with the peak meter never hitting red." She had never considered how this could create a big difference in movie loudness.

Tangents and Edge Cases

One horrible stat is that 60% of recent US war veterans have permanent hearing loss or chronic tinnitus (ringing in the ears). That's 600,000 of the former and 850,000 of the latter in the US alone. Add to that: 15% of baby boomers have significant hearing problems, as do 7.5% of 29-to-40-year-olds. UK military and civilian stats are similar in percentage and degree.

Loudness In Cinema Audience as a Stakeholder

The vets' problems are orders of magnitude worse than imaginable; it isn't rare to see 25 dB loss in one ear and 16 dB in the other, depending on how they held their rifle or what they typically sat next to. They were instructed to wear ear plugs under their helmets, but the military's own studies show that finding a target in that condition doesn't work.

Everyone you ask about cinemas has an opinion on loudness. Not all bad; e.g., one of the scientists interviewed for this segment said that:

The ability to control sound level while watching movies at home is the main reason people like me (no longer teen age) avoid the cinema (movie theaters) altogether.

His idea is to be able to bring his own Bluetooth headphones and listen through them so he can regulate the volume. Interesting concept.

I spoke to a friend about another friend whose hearing loss is suspiciously at the tone that his wife uses when she is upset. The second friend says that he had the same issue – his wife insisted on tests, and his doctor showed the frequency band on his chart where this occurred.

Their loss areas are one thing, but the edge frequencies leading into them are often 'annoying'. Yet because speakers' vertical frequency dispersion is nowhere near as smooth as their horizontal dispersion, we commonly place people with sub-prime hearing in sub-prime auditorium seats.

In the practical world, tangents are the all too common edge conditions. Later, we’ll look at some of the impacts that edge conditions might cause in our efforts at building a great room of sound.

Loudness In Cinema Zebrafish Do It

Loudness In Cinema Zebra Don't

Here's the deal. All vertebrates can regenerate the damaged hair cells that allow hearing and other sensing (such as the microscopic hairs on a fish that sense variations in water currents).

All vertebrates, except mammals.

Loudness In Cinema DCPs for Non-Except for Mammals

This is one view of the sets of hairs that are inside the Organ of Corti, which is part of the cochlea of the inner ear of mammals. We see the longer hairs and other views will show shorter adjacent hairs.

Chevron of Inner Ear Hairs

There are about 18,000–20,000 interconnected hairs. They all carry even finer stereocilia that do the delicate touching on various parts of adjacent hairs, and which then help convert the stimulus into electrical signals using a transfer of potassium ions from the tip to the base of each hair.

Loudness In Cinema DCPs - Hair to Hair Transfer

This is an electron microscope view of a frog's hairs, which work on the same principles but, when damaged by trauma, will regenerate. It is in the power of the nerve and its adjacent helper cells to recreate them using a gene factor called ATOH1. There is something in mammals blocking this function.

Loudness In Cinema – Frogs Inner Ear Hair

Another view of good working hairs. You’ll have noticed several in a chevron shape. These are tonotopically organized from high to low frequency.
If you start thinking of ⅓ octave EQ sets, you won’t be too far wrong.

Loudness In Cinema Chevron Hairs
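Sticking with that EQ analogy for a moment: ⅓-octave bands are spaced logarithmically, each center frequency 2^(1/3) above the last. A quick sketch (Python, using a 1 kHz reference and pure base-2 spacing; the standards bodies publish slightly different nominal frequencies):

    # 1/3-octave band centers: each is 2**(1/3) times the previous, referenced to 1 kHz
    centers = [1000 * 2 ** (n / 3) for n in range(-17, 14)]
    print([round(c) for c in centers])  # roughly 20 Hz up to 20 kHz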

Shown here (at the asterisk) are missing and damaged hairs. After damage, the hairs and helper cells seem to maintain some viability for 10 days. Noise-deafened guinea pigs – given 60-70 dB hearing loss by simulated gunfire – can get substantial improvement if the Atoh1-based gene therapy is applied during that time period. Suffice it to say, that's a bumper-sticker statement for a complex decade of study.

Loudness In Cinema Missing Hairs

Again, good on the right and damaged in the center and left. We've all probably heard of phantom limb pain? There's a working theory that these stragglers, or the missing hairs themselves, initiate a phantom limb-style effect, which causes tinnitus.

Loudness In Cinema Good and Bad Hairs

Yehoash Raphael, the scientist who wants to use his Bluetooth headphones in the cinema, makes the unequivocal statement:

There is no viable biological treatment for hearing loss yet.

Dr. Raphael also mentions that current science indicates an important negative outcome of acoustic trauma named Synaptopathy, that is, hidden hearing transmission issues at the nerve itself.

There are hundreds of researchers and many grants funding the laborious process of finding what works and what doesn’t, many using a friendly virus as a carrier for a gene factor.

Loudness In Cinema Gene Therapy ATOH-1

That’s as far as we’ll follow this tangent. Supporting documents will be put in the package of this presentation on the EDCF website. (Please acknowledge their copyrights if you republish this!)

So, what is To Be Studied, or done, in the audience stakeholder's domain? Education? Discovery?

What percentage actually complain – and this time, what is their history, and what exactly are they complaining of? Myself, if I get wax build-up, I will hear crackling sounds at loud piano recitals.

Instead of damaging the audio balance and the intelligibility of the dialog by turning down the dial, could we map the auditorium for the audience? Would they understand if the cinema manager said that the sound won't be turned down, but that they would be guided to a seat that is less loud, or less loud at various frequencies?

Another tangent: the two reasons that the broadcast world's loudness science doesn't apply in cinema are that the audience doesn't have a remote control, and that LUFS technology needs modifying for the length of movies. Thus the argument that goes: if I come to the theater and it is too cold, I put on a sweater. If it is always too loud, I put in my ear plugs (or, in some pluperfect future, I shall have put on my Bluetooth headphones).

Loudness In Cinema Exhibition as Stakeholder

Exhibition is very wary of long, drawn-out studies becoming a red flag for sensationalists. They are the ones who stand to be most impacted by hyperbole and the dissemination of partial truths. Recently, such hyperbolic betrayal came from within the technical community.

Here is what they are afraid of.

Loudness In Cinema Exhibitor House

Loudness In Cinema Focus on External Monitoring

Adding to the already complex structure of a cinema facility, the region of Barcelona passed a law that requires a back channel to the mayor's office giving loudness data and the logs of the limiters as they kicked in during overages! Only because they were convinced by a certain company that the equipment is not available – such as a 64-channel limiter for an Atmos system, or even an 8-channel limiter for a 7.1 system – did the enforcement get dropped.

Just as there are no SMPTE or ISO police, there are no NATO or UNIC enforcers. The exhibition community's response reverts to the basic premise that there are many commercial decisions that can't be enforced by fiat. There are benefits and drawbacks to that. As an extreme example, as late as 2007 I installed digital cinema servers into rooms that were just converting from mono.

Loudness In Cinema Stakeholder 4 Technology

On the other hand, France, the largest EU market by many industry metrics, does have an enforcement arm that monitors exhibition facilities. The CNC normalized the ISO/SMPTE documents, and made them the law of the land. Alain Besse of the CST has begun taking his research project into Loudness all the way back to distortions created in production – microphone choices and placement among other things. He is planning a December symposium to study these and other matters.

There is one other important thing that Alain points out to those who grouse about loud sports and other entertainment venues. Communities are investigating sound not because of cinemas per se. There is fear of an epidemic of destroyed hearing from loud sound – especially low frequency sound – in public venues, and cinemas are just another public space on the list.

On the To Be Studied List for exhibition is whether short-term exposure to 85, 90 and even 100 dB bursts of sound destroys ears. There is generalized info but no rigorous data. Mothers complain about children who come out with ringing ears, but are those kids also wearing earbuds listening to constant barrages of even louder sounds?

It should be clear that this is not an excuse for badly implemented sound in the auditorium, but what is the reality?

Loudness In Cinema Stakeholder Five – Standards Groups

Time, of course.

…and biting off more than can be handled.

The SMPTE group that developed the new digital pink noise standard started nearly 3 years ago. The documents were released many months ago. But for something so fundamental, there is little public knowledge and very little implementation. The SMPTE store still doesn't have a standard tone package available for download. A pink noise standard was Building Block Number One on a list of things that need to be done before Loudness can be tackled. Not pointing fingers, but rather pointing out that there is only so much that a volunteer group can do with their spare time. The long-term arc is great, but short-term progress is slow, and expecting engineers to be good at socializing is probably a good source for an oxymoron joke.
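For readers who haven't met it, "pink noise" just means equal energy per octave. The SMPTE documents specify a particular band-limited digital generator; the sketch below (Python) is not that generator, just the textbook Voss-McCartney approximation, to illustrate the kind of signal being standardized:

    import random

    def voss_pink(n_samples, n_rows=16):
        # Voss-McCartney pink noise: sum several random rows, re-rolling row k
        # every 2**k samples, which yields an approximately 1/f power density.
        rows = [random.uniform(-1, 1) for _ in range(n_rows)]
        out = []
        for i in range(n_samples):
            k = ((i + 1) & -(i + 1)).bit_length() - 1  # index of lowest set bit
            if k < n_rows:
                rows[k] = random.uniform(-1, 1)
            out.append(sum(rows) / n_rows)
        return out

    noise = voss_pink(48000)  # one second's worth at the 48 kHz cinema rate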

People new to the field always ask, "How about transcribing the old standards for hearing loss in the workplace?" Well, it turns out that workplace laws – OSHA and the like – were draconian, not in the sense of being onerous for the facility owner, but onerous in the sense of only caring whether the worker still had enough bandwidth available to hear a conversation at the end of their work day and at retirement. A worker was considered to have a material hearing impairment when his or her average hearing threshold levels for both ears exceeded 25 dB at 1, 2 and 3 kHz.
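As a worked instance of that criterion (Python; the audiogram numbers are invented for illustration, and "for both ears" is read here as each ear exceeding the average):

    # Material impairment per the workplace rule: average threshold across
    # 1, 2 and 3 kHz exceeds 25 dB in both ears.
    def materially_impaired(left_ear, right_ear):
        avg = lambda ear: sum(ear[f] for f in (1000, 2000, 3000)) / 3
        return avg(left_ear) > 25 and avg(right_ear) > 25

    left = {1000: 20, 2000: 30, 3000: 40}    # averages 30 dB
    right = {1000: 25, 2000: 25, 3000: 30}   # averages about 26.7 dB
    print(materially_impaired(left, right))  # True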

And finally, no group wants to walk the path that gets near the briar patch of liability. Societies become litigious for a reason, and despite the extremes used to mock the rules on either swing of the pendulum – with real or imagined anecdotes – we as practitioners of the technological arts can't allow a vacuum to pull in a problem without being ready to correct it with real science, in the face of legislators who hire an over-ambitious engineering group to "Save The Children". If we do allow our heads to be put in the sand, we'll get laws like the one in Flanders mandating that children's programs be played at essentially 4 on the audio processor dial, and adult fare at 5.

Our new contribution to this Quality Assurance situation is a website that offers free DCPs and a comprehensive checklist for non-technical managers.

This Manager’s Walk Through Series gives the facility manager some method and knowledge against the impossible task of judging their auditoriums and communicating with their tech staff.

Each DCP is different, but each has high and low tones played in sequence around the room, with distorted and muted tones for comparison.

Loudness In Cinema DCPs for Non-Technical Manager w/Checksheet

There are graphics included to get the managers used to sensing the problems and the quality potential of their rooms. One DCP uses faces as the empirical standard to judge colors by. Another uses a cool educational graphic from the xkcd.com website, and there are more to come with lessons that fill them in on what they should expect as they build their talents. There’s also a nice dose of the new SMPTE pink noise for sweeping a room and a 2Pop DCP that puts a sync pop into different speakers every 2 seconds.
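For the curious, here is roughly what that last test signal looks like before it is wrapped into a DCP – a minimal sketch (Python standard library only; the 1 kHz tone and the one-frame pop length are assumptions based on the traditional 2-pop, and the speaker-to-speaker routing is left out):

    import math, struct, wave

    RATE = 48000           # DCI audio sample rate
    POP_HZ = 1000          # traditional 2-pop tone frequency
    POP_LEN = RATE // 24   # one 24fps frame (about 42 ms) of tone
    PERIOD = 2 * RATE      # one pop every 2 seconds

    samples = []
    for i in range(10 * RATE):  # 10 seconds of signal
        in_pop = (i % PERIOD) < POP_LEN
        s = 0.5 * math.sin(2 * math.pi * POP_HZ * i / RATE) if in_pop else 0.0
        samples.append(int(s * 32767))

    with wave.open("two_pop_test.wav", "wb") as w:
        w.setnchannels(1)   # mono sketch; the real DCP walks the pop around the room
        w.setsampwidth(2)   # 16-bit here; DCP audio is 24-bit PCM
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))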

Download these from cinematesttools.com, password: QA_b4_QC

Loudness In Cinema DCPs for Non-Technical Manager

We look forward to helping you advance the trend of quality assurance in the cinemas. Thank you.

What there was no time to say in a 15 minute presentation:

A mythos has been created that there is a trend toward loudness laws because two, maybe three, county-level (not country-level) groups have created laws that regulate audio levels in cinemas. In fact, the Flanders section of Belgium and the Municipality of Barcelona in Spain are the highlights and the limits of that trend, and they acted 3 years ago (and only one of the laws is, or can be, implemented).

Not to say that there isn't a good purpose in studying audio levels, but there is no need for the science- and fact-based groups to use hyperbole, any more than there is for the "gee, we must do something to save the children" groups or the sensationalist press. Likewise, there is no need to demean the low-knowledge groups as was just done, since many are, in fact, properly working in difficult areas – clubs, concerts, sporting arenas, auto racing – where there is a need to regulate entertainment that does deliver long and repeating exposure to +110 dB levels. That cinemas, which might use brief periods of +100 dB levels as part of the storytelling experience, get lumped into the same category is all the more reason that the area needs to be examined with the talents we have, and not rely upon hope and name-calling.

If Annoyance is the buzzword, it is Distortion that is the hidden hole. In the field of projection we know that there was a long trend of installers specifying projectors down to the level where they just barely made the luminance levels for the size of the screen. This turned around to haunt the industry when exceedingly low light level 3D became the norm, a norm that could only deliver a set of distortions, from horrible contrast to minimal stereoscopic separation. Those human visual system distortions produced horrible pictures that generated headaches and complaints and the eventual collapse of a technology that should have improved but couldn't, due to under-performing equipment.

Likewise, under-spec'd (and old) audio equipment delivers distortions of its own, and amplifies distortions that are inherent but might go unheard in better systems that correctly play to the sensitivities of the human auditory system. From inexpensive first-generation converters to speakers aimed above the heads of the audience, there are numerous potential points of failure that need to be put into a matrix and studied alongside the numerous potential points of failure in the hearing systems of the varied audience members. The study that is required is one of grand scale, and it gets even grander if there is any attempt to quantify subtle factors: dialog intelligibility; loudness and room size (and image size?); their relation to annoyance, accommodation, and audience engagement; differences between pre-show material and the movie itself; or (shudder to even type it) the actual limits for safe listening given the variations of human structure and past listening habits.

It will require a huge conclave of the various sciences. There are many existing groups with different pieces to match the needed scope of the problem, many of which are encumbered by the same time and access problems that SMPTE has, plus the political expedient of self-regulation that is demonstrably incapable of reliably playing back movies at the level of artistic intent. Perhaps just creating a group that can generate a public venue to even outline this kind of shared open project would be a good first step.

Finally, and more at hand, there are methods that use current tools in the cinema field, and variations of new tools developed for the broadcast field, as a beginning for the study and development of a valuable metric, algorithm and technique for use instead of the silly and quite arbitrary Flanders-based rules. Mr. Allen has developed a moving time window technique with Leq, and Mr. Leem has put forth LUFS-based ideas. These studies, and others as they present themselves, should be open-sourced so that peer review can be done in a more modern and expeditious manner. The first step might be to describe these procedures well on GitHub.
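To make the open-sourcing idea concrete: a moving time window Leq is essentially a running equivalent continuous level. A minimal sketch (Python; the 3-second window and half-second hop are illustrative assumptions, not Mr. Allen's published parameters), assuming calibrated sample values in pascals:

    import math

    def moving_leq(samples, rate, window_s=3.0, hop_s=0.5, p_ref=20e-6):
        # Sliding-window Leq: 10*log10 of the mean-square pressure over each
        # window, relative to the 20 µPa hearing-threshold reference.
        win, hop = int(window_s * rate), int(hop_s * rate)
        levels = []
        for start in range(0, len(samples) - win + 1, hop):
            block = samples[start:start + win]
            mean_sq = sum(x * x for x in block) / win
            levels.append(10 * math.log10(mean_sq / (p_ref ** 2)))
        return levels

Publishing even this much – with real parameters, test files and expected outputs – would give reviewers something to run rather than something to argue about.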

Good luck to us all.

Filmmaker Ang Lee Becomes a Humble Technology Ambassador at IBC

(This Post-IBC article by CJ Flynn and Patrick von Sychowski was also published at Celluloid Junkie)

The event was produced by Julian Pinn, as was the entire Big Screen Experience Conference track.

Celluloid Junkie’s Editor Patrick von Sychowski and regular contributor Charles ‘CJ’ Flynn were both present at IBC and discuss the impact of the screening and how the technology being demonstrated may impact the future of cinema.

CJ Flynn: One can’t consider IBC 2016 without starting and ending with reflections on the Ang Lee presentation and his thoughts and clips of “Billy Lynn’s Long Halftime Walk”.

We were given a triptych that included an on-stage interview detailing Mr. Lee’s long walk to and through the production’s desire to deliver a non-obtrusive 3D presentation, and how this drove constant unforeseen advances requiring “beyond-incremental” technology leaps.

We then saw a significant movie clip and some new technology test clips. The finish was a panel discussion titled "Realising an Auteur's Vision: a technical deep-dive into Mr. Ang Lee's 'Billy Lynn's Long Halftime Walk'" that featured the film's editor Tim Squyres, its technical supervisor Ben Gervais and Sony Pictures Entertainment Head of Production Technology Scott Barbour, who, along with Mr. Lee, moved us through dozens of aspects of the ever-expanding sphere of newly exposed needs as the rough edges of the vision technology became evident.

Simply stated, needing more light showed that a 360° shutter was needed, which exposed more flicker, which in turn required the highest frame rate, which in turn mandated less (to the point of none) of both "standard" lighting and makeup. Unnatural shadows and even powder on the skin showed up as false and distracting in these conditions, as did any emotion that wasn't differently (and exceptionally) directed and portrayed by the actors.

In theory, we saw nearly the same footage and discussion at the SMPTE/NAB presentation in April. But in reality, that first presentation was in a small makeshift conference room with a screen that was too high for comfort and projectors just over our shoulders. At the time, we heard Mr. Lee's presentation before experiencing the newness of it all (or afterwards, while still digesting the new and raw scenes and emotions). While this was all very striking, it was somehow inartistic…un-capturable, perhaps. I mean, I caught that it was exceptional in scope, but I caught only the slightest bits of the detail that made it exceptional.

In the case of IBC, from the exceptionally natural tête-à-tête flow of Julian Pinn’s introduction and questions, to the darkened auditorium with full Dolby Atmos and ultra hot-rodded Christie Mirage projectors – it was as if the 15 years of the EDCF’s IBC Digital Cinema Presentations at the RAI were all just practice for those 2 hours.

Patrick von Sychowski: Before I say anything, I should declare that as a member of the IBC Big Screen committee I could be expected to be biased in saying that this was an amazing event – but it really was an amazing event, based on my distinctly un-scientific poll of talking to people afterwards. Ninety-nine percent of people seem to have come away impressed, most of whom did not see it screened at NAB earlier this year, and the sole detractors seemed to just want to make a point about what they thought a "video look" was.

"Visceral" is a word that came up more than once in discussing the Ang Lee footage because, as you said, it felt very raw (and I'm not talking about the file format or an unfinished edit) and direct in terms of stirring emotions. Talking to EDCF President David Hancock afterwards, he said that he had a similar experience with a particularly good virtual reality "film", so it is encouraging that cinema can find a way on the big screen to deliver an emotional impact just as hard-hitting as the latest cropping up in VR technology.

And you're right CJ, it was a genuinely Big Screen experience at IBC, compared to the small demo room that we were ushered into – after patiently waiting in groups – at the SMPTE/NAB demo earlier this year. I was impressed that time, but this time it seemed more like an extraordinary cinematic event.

CJ Flynn: I didn't get it before…at the SMPTE/NAB presentation. It seemed too wild and uncontrolled; the technology was interesting but wasted on me. There aren't that many cones in the eye and they're fixed on 7° of our vision, so Mr. Lee is almost giving us too much information, I thought, and he's not telling us where to look, and it's too bright. I couldn't put the tangents together – exploded ordnance in the mind without the expansive humanity of "Crouching Tiger" or "Life of Pi".

What I also didn't understand, but what was well transmitted at IBC, was that I was experiencing as an audience member what Mr. Lee was experiencing as a director – everything is coming up raw. The look in the eye as a window to the soul of the character is more broadly and granularly exposed by every ingredient of the capturing technology, every 120th of a second. So every director's task and every cinematographer's task and every lighting tech and gaffer's task and every actor's moment from prep to take – it was all raw, and before those challenges were encountered as the milestones flew by, they didn't get it any more than I did.

Then every challenge of the camera, the 7.5 terabytes of daily data, and post-production with and without the Christie Mirage projectors, and choosing each post technology not only for its usefulness but for its potential to expand to unknown needs – each of those flying milestones became a millstone until solved by a team of people who had to figure out and then describe the problem, and a team of people who had to grok it and solve it. That was a great story well told, in and of itself.

It was therefore a mitigating feeling for me to hear Mr. Lee explain how he was so constantly being led on the journey that he had not only to locate and ride with the humility of the travel himself, but to keep the crew on tempo by explaining that none of them were practiced in the specifics of this evolving craft; each of them, including himself, was "not good enough", he would say, while he tried to put into words what he often described as "humbling".

It is a journey for us as an audience as well, and he specifically asked us to be patient as he and others in the craft walked the steps with these new tools toward better storytelling, in all their aspects. He was especially gracious to other directors and other ingredients of the technology that we have been presented with in the course of advancing the art. For example, his compelling argument that what people complained about as video-like in the high frame rate versions of "The Hobbit" was due to their having no other category to label what their eyes were seeing. And as someone pointed out, it is a false category, since videotape never looked that good.

Patrick von Sychowski: That leads me to the conclusion that Ang Lee is both the best and the worst ambassador possible for this new cinema technology milestone.

I've seen and heard James Cameron talk about new technology (primarily stereoscopic 3D) on the very same stage in Amsterdam, and boy, are the two filmmakers polar opposites. You can say a lot of things about fellow Oscar-winner Cameron, but "humble" is not a term that springs to mind. Ang Lee, on the other hand, doesn't do a hard sell, and sees this new technology and filmmaking methodology as a journey of discovery.

Lee is also very likely aware of the backlash against stereoscopic 3D, even though he directed what is arguably one of the greatest 3D films ever made with “Life of Pi”, not to mention the rough ride Peter Jackson faced with the 48fps HFR version of “The Hobbit”. So he is not here to tell us that “this is the future” and that we should all get onboard or find ourselves confined to the celluloid dustbin of history.

In fact, Lee was quoted in the Hollywood Reporter when he received IBC's Honour of Excellence Award as saying, "With this movie I'm getting a new world. The use of high frame rate and high dynamic range will provide, I hope, a unique opportunity to feel the realities of war and peace through the protagonist's eyes." That's about as much of a hard sell on HFR and HDR as you are ever going to get from Lee.

But if you were listening closely to Lee, particularly in the discussion with his editor, tech supervisor and head of production technology, you realise that there have been tremendous technical learnings that will benefit filmmakers and cinemas for a long time. For one thing, there is something happening to the human visual system when you go from 60fps to 120fps display, where suddenly the veil, window, whatever you want to call it, is removed, and we see reality up on the screen. In this regard I would have loved to watch the whole clip in 2D 120fps, which is one of the versions that will be going out to cinemas.

Secondly, the various clips they showed afterwards demonstrated clearly that even if you can't show 120fps 3D 4K in any cinema today, shooting at 120fps means that you have better images at 60fps, 48fps and even 24fps, just as we already knew that capturing 4K images and displaying them in 2K can look better than capturing and projecting in 2K. I will bet you anything that Cameron has changed his plans for "Avatar" 2, 3, 4, 5… to be 120fps acquisition. I just hope that the rest of the motion picture industry follows the lead into 120fps for future-proofing purposes, even though this raises the question of getting rid of makeup and lighting.

And this is even before we get into issues such as variable frame rate and its potential, such as when they discussed switching to 60fps for a pan of the cheerleaders, but staying with 120fps for the lead cheerleader to make her stand out. Will actors demand that “For Your Consideration” screenings for members of AMPAS, SAG, BAFTA, etc. be held in higher frame rate so that their non-acting acting skills can be better appreciated? Post digital conversion, is there a new technology arms race brewing in the cinema industry, with everything from 4D seating to immersive audio? There certainly seemed to be a lot of other things happening at IBC, other than just Ang Lee and “Billy Lynn”.

CJ Flynn: Well, I'm a fanboy of the technology being highlighted at IBC. I'm a Dolby Atmos fanboy – am I the only one who has thought that it is every teenage audio technologist's wet dream? I'm a laser fanboy for several reasons that range through the environmental to achieving more light for 3D and, certainly, much better contrast. I'm a training fanboy, for audiences and for the kids who are responsible for the popcorn while also responsible for the artistic vision during those last few meters from the lens to the screen and from the speakers to our body's sensors. So it is such a joy not only to hear and see Ang Lee travel the steps through the technology, but I was also very impressed with the dreams and openness of the technologists during the EDCF DCinema Wrap-up.

John Hurst (CTO of CineCert and midwife of digital cinema) presented a cool idea that still requires several little modules on top of the obvious but long-in-coming FLM/CDN technology, which, if nudged into place, could make movie theaters as immediately compelling in their programming selection as on-demand TV.

Cinecert Proposes OnDemand at Cinemas

Andy Maltz of AMPAS brought to light some HDR benefits that have arrived naturally with the now juggernaut-level uptake of ACES. And Barco's Tom Bert swept away some of the false-equivalency dust that had been spread by marketing and fairness – not only was his "Demystifying laser projection for cinema: 5 frequently asked questions" on target, but where the show floor leaves one confused with numbers, his slide stated that there are 125 flagship RGB laser units in cinemas, plus over a thousand of their retrofit and blue phosphor laser units in the field…this in the year following real introduction…wow.

Patrick von Sychowski: If anything the EDCF session demonstrated that if digital cinema was originally a solution in search of a problem, having been implemented, it now leaves us with even more new problems than we ever expected.

There were very honest and candid presentations about the many new technical challenges facing the industry, ranging from affordable laser projection to SMPTE DCPs, so hats off to the EDCF under its new President David Hancock (who does this in addition to his fully consuming day job at IHS) for making it an open and honest forum that matters.

Without wishing to just accentuate the negative, the EDCF day also pointed to some of the exciting developments. In addition to the ones you have already mentioned I would single out EclairColor, which I first saw demonstrated at CineEurope this summer and which is getting a major push this autumn on both sides of the Atlantic. Whatever the relative merits of EclairColor, Dolby Cinema and any other flavour of HDR, it is good to have competition and choice in imaging technologies, because that is ultimately what the cinemas are asking for.

CJ Flynn: I feel like I still don't have the placement of the Ymagis/EclairColor technology in the big picture of things, but I did get the joy of the technology from Cedric Lejeune (Vice President of Technology, Eclair), whom I have long respected for his photographic and colorist work. This concept of getting HDR onto the screen without requiring HDR to be defined only as the deservedly much-vaunted million-to-one Dolby Vision, while not allowing it to be marketed down to milquetoast, is going to be important. Customers going to premium large format rooms deserve a real definition, and we need to find an absolute and communicable baseline.

An informational slide detailing the features of EclairColor as shown during IBC 2016.

In the same but opposite manner, the angst of creating and promoting and pushing a technology through an embryonic stage was tangible as Chris Witham (Director, Emerging Technology, The Walt Disney Studios) told of the last year of SMPTE DCP transition steps and issues at the only major studio grabbing the reins and delivering features in SMPTE DCP. That was followed by Tony Glover (VP, Mastering Technology and Development, Deluxe Technicolor Digital Cinema UK) detailing the two live tests of the EDCF SMPTE DCP testing. Great data from a plan that I look forward to following as it is rolled out into larger and larger spheres.

Also provocative were the similar presentations of Julian Pinn (CEO – Julian Pinn Ltd) and Rich Welsh (CEO – Sundog Media Toolkit) who both spoke about “watch-this-space” developments that they are now productizing to handle important nuances left behind in the rollout of digital cinema. As a developer myself, I wondered if anyone else had a problem with transmitting the emotion of a heartfelt belief when there are only words to do it with. But there they were, people with exceptional histories touching us with their passions.

Oh, and there was that guy who showed a picture of frog's hair and thought that, just because the DCPs for non-technical managers are free and include a Manager's Walk Through Series report checklist, he could get away with pushing his new website at www.CinemaTestTools.com. That was myself, and yes, that is a shameless plug.

IBC2016_CinemaLoudness_Hearing Loss Implications

And leaving the first for last, I agree with you Patrick, it was great to see what the new EDCF President, David Hancock, can do with the numbers. For so long there was the driving focus of one metric – the march toward 100% saturation. Now it is an interesting group of metrics on diversification, the very things that were supposed to make cinemas more viable in an age when studios were closing distribution windows and a new audience had more choices for their time and money. The transition itself balled everything up for a long while. It will be nice to see how these numbers progress.

A slide with digital cinema figures as presented by David Hancock of IHS during IBC 2016.

Patrick von Sychowski: I was hoping you would get to the regenerating hair in the frog's ear, which enabled it to restore damaged hearing, something we humans can't do. Your talk was definitely the talk of the talk at the drinks reception right after the EDCF session – and not just for the frog ear hairs. There were also people coming up and asking about the availability of the DCP test tools from your website, so let's return to those in a future post.

I have also come up with the perfect analogy for the proliferation of DCP versions that currently bedevils our industry (any guesses whether "Billy Lynn" will go out in fewer or more than 400 versions?). It is like an EDCF drinks party where you get a choice of red wine, white wine, water or juice from a tray, as opposed to everyone standing in line to get their custom-designed cocktails made to order. Waiting in line for 10-15 minutes to get to the alcohol when you've just sat through three-hours-plus of heavy-duty tech talk was not ideal. Let's go back to just the four options next year, because there are always plenty of other things going on at IBC.

David Hancock EDCF IBC2016

CJ Flynn: It should also be an embarrassment for IBC that – with a portion of each convention area for every technology type and a sponsor everywhere, and having been the first and best with video presentations of the convention events – they aren't finding an IT and delivery sponsor and putting the Big Screen Experience presentations live on the Internet, complete with audience interaction. Otherwise, too many of the presentations become commodities speaking to an uncompelled crowd now that the equipment has matured. There are important product differences, but the presenters have to be very polite, even with the technology, lest the fellow panelists or a future client in the audience get miffed.

Patrick von Sychowski: I will half-agree with you on this one. Obviously showing “Billy Lynn” 120fps 3D 4K streamed over the internet can’t be done, even if Sony Pictures were to allow it. The same goes for other sessions where Hollywood studio material is shown on the big screen. But it would be good if the sessions that don’t use sensitive material were captured and shared, if not live, then at least TED-style at some later point. Because all of the sessions, even Ang Lee, deserved an even bigger audience than they got.

That takes me to the perennial point that the largest challenge facing the IBC Big Screen Experience is going beyond preaching to the choir and attracting more cinema people. Apart from the two representatives of Vue and Cinemax who were speaking on panels, I only met one exhibitor in the audience. This was their chance to see "Billy Lynn" in a format that will shape the future of cinema, and yet despite the fact that attendance is free, they did not make the journey to Amsterdam. Nor did representatives of any cinema trade association.

I wish I knew of a good way to persuade cinema people to come to Amsterdam for IBC other than telling them that the Ang Lee presentation was an amazing eye opener. We really did watch cinema history being made in front of our eyes.

CJ Flynn: The Ang Lee keynote and the final presentation, which took an entirely different tack, were terrific and provocative. When that exceptionally talented moderator – oh, wait, that was you, Patrick – when you put out the idea of having a panel of judges rate different cinema technologies like "Strictly Come Dancing" ("Dancing With the Stars" in the US), I thought: cute, but uhm…OK. But then it turned out to be the best sequence of the presentations.

Four different people from different slices of the business – a manufacturer, an installer, a cinema technical chief and an analyst – all got to lay out reasoning that I couldn't have imagined for justifying, or nay-saying, different portions of the technology spectrum as having potential return on investment or not. Brilliant idea, well done, and I hope you take full credit for it, since you'll be remembered for the presentation. If only we in the audience could have had control of a laugh-and-groan track and a dynamic rating bar.

Patrick von Sychowski: I will do an Ang Lee and say that I feel humble, not because it wasn't a brilliant last-minute panic move to steal the format from reality TV, but because I would not have expected that, of the dozen technology categories that were judged, blue phosphor laser would come out on top while VR languished at the bottom. Let's see at next year's IBC what the future holds. Please feel free to bring a cinema friend or two.

In the meantime I’d like to thank all the technology companies that worked with Mr. Pinn and IBC to make the Big Screen Experience day possible. I know that there is a risk in trying to list everyone as you inevitably forget someone, but Phil White’s team who coordinated everything on the tech side deserve a major shout-out, because they had to install so much equipment in the balcony of the Rai auditorium that at one point they thought it would need reinforcing. Don’t forget that there were not only the Christie twin Mirage projectors for “Billy Lynn”, but also the dual Christie/Dolby laser projector (and that’s a lot of cooling required) set-up for the screenings of Disney’s “The Junglebook” and 20th Century Fox’s “The Revenant” in Dolby Vision and with Atmos immersive audio.

I didn’t stay for the latter but I’m sure the bear mauling is even more visceral in HDR and surround-growls, while “Jungle Book” was a bright 3D delight. So Christie, Dolby, Harkness Screens, the projector and audio technicians, the studios who let IBC see the films and footage and everyone who flew to Amsterdam to share their insights. I’m sure I’ve neglected to mention plenty of others: QSC? EDCF? Ang Lee’s entourage?

CJ Flynn: Mustn’t forget Terry Nelson and partner Sean O’Dea for making the talent sound good every year while handling the live audio mixing console and other physical aspects of setting up the presentations. And thank you Patrick for this conversation. À la prochaine fois.

Filmmaker Ang Lee Becomes a Humble Technology Ambassador at IBC

(This Post-IBC article by CJ Flynn and Patrick von Sychowski was also published at Celluloid Junkie)

The event was produced by Julian Pinn, as was the entire Big Screen Experience Conference track.

Celluloid Junkie’s Editor Patrick von Sychowski and regular contributor Charles ‘CJ’ Flynn were both present at IBC, and here they discuss the impact of the screening and how the technology being demonstrated may shape the future of cinema.

CJ Flynn: One can’t consider IBC 2016 without starting and ending with reflections on the Ang Lee presentation and his thoughts and clips of “Billy Lynn’s Long Halftime Walk”.

We were given a triptych that began with an on-stage interview detailing Mr. Lee’s long walk to and through the production’s goal of delivering a non-obtrusive 3D presentation, and how that goal drove constant, unforeseen advances requiring “beyond-incremental” technology leaps.

We then saw a significant movie clip and some new technology test clips. The finish was a panel discussion titled “Realising an Auteur’s Vision: a technical deep-dive into Mr. Ang Lee’s ‘Billy Lynn’s Long Halftime Walk’”, featuring the film’s editor Tim Squyres, its technical supervisor Ben Gervais, and Sony Pictures Entertainment Head of Production Technology Scott Barbour, who, along with Mr. Lee, moved us through dozens of aspects of the ever-expanding sphere of newly exposed needs as the rough edges of the vision technology became evident.

Simply stated, the need for more light showed that a 360° shutter was needed; the open shutter exposed more flicker, which in turn required the highest frame rate, which in turn mandated less (to the point of none) of both “standard” lighting and makeup. Unnatural shadows, and even powder on the skin, showed up as false and distracting in these conditions – as did any emotion that wasn’t differently (and exceptionally) directed and portrayed by the actors.
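
A back-of-the-envelope relation makes that pressure for light concrete (our own arithmetic, not a figure from the panel). The per-frame exposure time is

$$ t_{\text{exposure}} = \frac{\theta_{\text{shutter}}}{360^\circ} \times \frac{1}{\text{frame rate}} $$

so a conventional 24fps frame with a 180° shutter integrates light for 1/48 s (about 21 ms), while a 120fps frame – even with the shutter wide open at 360° – gets only 1/120 s (about 8 ms), well under half the light. Hence the hunger for brighter sets, and hence the knock-on effects on lighting and makeup described above.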

In theory, we saw nearly the same footage and discussion at the SMPTE/NAB presentation in April. But in reality, that first presentation was in a small makeshift conference room with a screen that was too high for comfort and projectors just over our shoulders. At the time, we heard Mr. Lee’s presentation before experiencing the newness of it all (or afterwards, while still digesting the new and raw scenes and emotions). While this was all very striking, it was somehow inartistic…un-capturable, perhaps. I mean, I caught that it was exceptional in scope, but I caught only the slightest bits of the detail that made it exceptional.

In the case of IBC, from the exceptionally natural tête-à-tête flow of Julian Pinn’s introduction and questions, to the darkened auditorium with full Dolby Atmos and ultra hot-rodded Christie Mirage projectors – it was as if the 15 years of the EDCF’s IBC Digital Cinema Presentations at the RAI were all just practice for those 2 hours.

Patrick von Sychowski: Before I say anything, I should declare that as a member of the IBC Big Screen committee I could be expected to be biased in saying that this was an amazing event – but it really was an amazing event, based on my distinctly un-scientific poll of talking to people afterwards. Ninety-nine percent of people seem to have come away impressed – most of whom had not seen it screened at NAB earlier this year – and the sole detractors seemed to just want to make a point about what they thought a “video look” was.

“Visceral” is a word that came up more than once in discussing the Ang Lee footage, because as you said, it felt very raw (and I’m not talking about the file format or unfinished edit) and direct in terms of stirring emotions. Talking to EDCF President David Hancock afterwards, he said that he had had a similar experience with a particularly good virtual reality “film”, so it is encouraging that cinema can find a way on the big screen to deliver just as hard-hitting an emotional impact as the latest VR technology.

And you’re right CJ, it was a genuinely Big Screen experience at IBC, compared to the small demo room we were ushered into, in patiently waiting groups, at the SMPTE/NAB demo earlier this year. I was impressed that time, but this time it seemed more like an extraordinary cinematic event.

CJ Flynn: I didn’t get it before…at the SMPTE/NAB presentation. It seemed too wild and uncontrolled; the technology was interesting but wasted on me. There aren’t that many cones in the eye, and they’re fixed on 7° of our vision, so Mr. Lee is almost giving us too much information, I thought, and he’s not telling us where to look; it’s too bright. I couldn’t put the tangents together – exploded ordnance in the mind without the expansive humanity of “Crouching Tiger” or “Life of Pi”.

What I also didn’t understand, but what was well transmitted at IBC, was that I was experiencing as an audience member what Mr. Lee was experiencing as a director – everything is coming up raw. The look in the eye as a window to the soul of the character is more broadly and granularly exposed by every ingredient of the capturing technology, every 120th of a second. So every director’s task, every cinematographer’s task, every lighting tech and gaffer’s task, and every actor’s moment from prep to take – it was all raw, and until those challenges were encountered as milestones flying by, the crew didn’t get it any more than I did.

Then came every challenge of the camera, the 7.5 terabytes of daily data, and post-production with and without the Christie Mirage projectors – choosing each post technology not only for its usefulness but for its potential to expand to unknown needs. Each of those flying milestones became a millstone until solved by the team of people who had to figure out and then describe the problem, and the team of people who had to grok it and solve it. That was a great story well told, in and of itself.
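
To put that 7.5 terabytes in perspective, here is a purely illustrative sketch – the frame size, bit depth and uncompressed pipeline are our assumptions, not production figures:

```python
# Illustrative only: rough data rate of uncompressed 4K stereo 120fps capture.
WIDTH, HEIGHT = 4096, 2160     # assumed 4K frame dimensions
BYTES_PER_PIXEL = 3            # assumed ~24 bits per pixel
FPS, EYES = 120, 2             # 120 frames per second, stereo capture

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * EYES
daily_total = 7.5e12           # the 7.5 TB/day figure quoted on stage

print(f"~{bytes_per_second / 1e9:.1f} GB/s while the camera rolls")           # ~6.4 GB/s
print(f"~{daily_total / bytes_per_second / 60:.0f} min of rolling fills it")  # ~20 min
```

By that rough math, barely twenty minutes of actual rolling per day fills the quoted budget – a hint of why every tool in the chain had to be chosen for its headroom.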

It was therefore a mitigating feeling for me to hear Mr. Lee explain how constantly he was being led on the journey: he had not only to locate and ride with the humility of the travel himself, but to keep the crew on tempo by explaining that none of them were practiced in the specifics of this evolving craft. Each of them, including himself, was “not good enough”, he would say, while he tried to put into words what he often described as “humbling”.

It is a journey for us as an audience as well, and he specifically asked us to be patient as he and others in the craft walked the steps with these new tools toward better storytelling, in all their aspects. He was especially gracious to other directors and to the other ingredients of the technology that we have been presented with in the course of advancing the art – for example, his compelling argument that what people complained about as video-like in the high frame rate versions of “The Hobbit” was due to their having no other category to label what their eyes were seeing. And as someone pointed out, it is a false category, since videotape never looked that good.

Patrick von Sychowski: That leads me to the conclusion that Ang Lee is both the best and the worst ambassador possible for this new cinema technology milestone.

I’ve seen and heard James Cameron talk about new technology (primarily stereoscopic 3D) on the very same stage in Amsterdam, and boy, are the two filmmakers polar opposites. You can say a lot of things about fellow Oscar-winner Cameron, but “humble” is not a term that springs to mind. Ang Lee, on the other hand, doesn’t do a hard sell, and sees this new technology and filmmaking methodology as a journey of discovery.

Lee is also very likely aware of the backlash against stereoscopic 3D, even though he directed what is arguably one of the greatest 3D films ever made with “Life of Pi”, not to mention the rough ride Peter Jackson faced with the 48fps HFR version of “The Hobbit”. So he is not here to tell us that “this is the future” and that we should all get onboard or find ourselves confined to the celluloid dustbin of history.

In fact, Lee was quoted in the Hollywood Reporter when he received IBC’s Honour of Excellence Award as saying, “With this movie I’m getting a new world. The use of high frame rate and high dynamic range will provide, I hope, a unique opportunity to feel the realities of war and peace through the protagonist’s eyes.” That’s about as much of a hard sell on HFR and HDR as you are ever going to get from Lee.

But if you were listening closely to Lee, particularly in the discussion with his editor, tech supervisor and head of production, you realise that there have been tremendous technical learnings that will benefit filmmakers and cinemas for a long time. For one thing, there is something happening to the human visual system when you go from 60fps to 120fps display, where suddenly the veil, window, whatever you want to call it, is removed as we see reality up on the screen. In this regard I would have loved to watch the whole clip in 2D 120fps, which is one of the versions that will be going out to cinemas.

Secondly, the various clips they showed afterwards demonstrated clearly that even if you can’t show 120fps 3D 4K in any cinema today, shooting at 120fps means that you have better images at 60fps, 48fps and even 24fps, just as we knew already that capturing 4K images and displaying them in 2K can look better than images captured and projected at 2K. I will bet you anything that Cameron has changed his plans for “Avatar” 2, 3, 4, 5… to 120fps acquisition. I just hope that the rest of the motion picture industry follows the lead into 120fps for future-proofing purposes, even though this raises the question of getting rid of makeup and lighting.
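
A minimal sketch of why 120fps makes such a convenient master rate (our illustration, not anyone’s actual pipeline): 120 divides evenly by 60, 30 and 24, so those deliverables can in principle be made by simply dropping frames, while 48fps is the odd one out and needs resampling:

```python
def decimate(frames, master_fps=120, target_fps=24):
    """Derive a lower frame rate by keeping every Nth frame --
    only possible when the master rate divides evenly."""
    step, remainder = divmod(master_fps, target_fps)
    if remainder:
        raise ValueError(f"{target_fps}fps needs blending/resampling, not decimation")
    return frames[::step]

one_second = list(range(120))                    # stand-in for 120 frames
assert len(decimate(one_second, 120, 60)) == 60  # keep every 2nd frame
assert len(decimate(one_second, 120, 24)) == 24  # keep every 5th frame
# 120 / 48 = 2.5 -- no whole-number step, so a 48fps version must blend
# or resample frames rather than simply drop them.
```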

And this is even before we get into issues such as variable frame rate and its potential, such as when they discussed switching to 60fps for a pan of the cheerleaders, but staying with 120fps for the lead cheerleader to make her stand out. Will actors demand that “For Your Consideration” screenings for members of AMPAS, SAG, BAFTA, etc. be held in higher frame rate so that their non-acting acting skills can be better appreciated? Post digital conversion, is there a new technology arms race brewing in the cinema industry, with everything from 4D seating to immersive audio? There certainly seemed to be a lot of other things happening at IBC, other than just Ang Lee and “Billy Lynn”.

CJ Flynn: Well, I’m a fanboy of the technology being highlighted at IBC. I’m a Dolby Atmos fanboy – am I the only one who has thought that it is every teenage audio technologist’s wet dream? I’m a laser fanboy for several reasons, ranging from the environmental to achieving more light for 3D and, certainly, much better contrast. I’m a training fanboy, for audiences and for the kids who are responsible for the popcorn while also being responsible for the artistic vision during those last few meters from the lens to the screen, and from the speakers to our body’s sensors. So it is such a joy not only to hear and see Ang Lee travel the steps through the technology; I was also very impressed with the dreams and openness of the technologists during the EDCF DCinema Wrap-up.

John Hurst (CTO, Cinecert, and midwife of digital cinema) presented a cool idea that still requires several little modules on top of the obvious but long-in-coming FLM/CDN technology which, if nudged into place, could make movie theaters as immediately compelling in their programming selection as OnDemand TV.

Cinecert Proposes OnDemand at Cinemas

Andy Maltz of AMPAS brought to light some HDR benefits that have arrived naturally with the now juggernaut-level uptake of ACES. And Barco’s Tom Bert swept away some of the false-equivalency dust that got spread by marketing and fairness – not only was his “Demystifying laser projection for cinema: 5 frequently asked questions” on target, but the show leaves one confused with numbers, so his slide stating that there are 125 Flagship RGB lasers in cinemas, plus over a thousand of their retrofit and blue phosphor laser units in the field…this in the year following real introduction…wow.

Patrick von Sychowski: If anything the EDCF session demonstrated that if digital cinema was originally a solution in search of a problem, having been implemented, it now leaves us with even more new problems than we ever expected.

There were very honest and candid presentations about the many new technical challenges facing the industry, ranging from affordable laser projection to SMPTE DCPs, so hats off to the EDCF under its new President David Hancock (who does this in addition to his fully consuming day job at IHS) for making it an open and honest forum that matters.

Without wishing to just accentuate the negative, the EDCF day also pointed to some of the exciting developments. In addition to the ones you have already mentioned I would single out EclairColor, which I first saw demonstrated at CineEurope this summer and which is getting a major push this autumn on both sides of the Atlantic. Whatever the relative merits of EclairColor, Dolby Cinema and any other flavour of HDR, it is good to have competition and choice in imaging technologies, because that is ultimately what the cinemas are asking for.

CJ Flynn: I feel like I still don’t have the placement of the Ymagis/EclairColor technology in the big picture of things, but I did get the joy of the technology from Cedric Lejeune (Vice President of Technology – Eclair), whom I have long respected for his photographic and colorist work. This concept of getting HDR onto the screen – without requiring HDR to be defined only as the deservedly much-vaunted million-to-one Dolby Vision, while also not allowing it to be marketed down to milquetoast – is going to be important. Customers going to premium large format rooms deserve a real definition, and we need to find an absolute and communicable baseline.
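
One communicable way to state such a baseline would be in stops, i.e. doublings of contrast (our suggestion, not Eclair’s):

$$ \text{stops} = \log_2(\text{contrast ratio}), \qquad \log_2\!\left(10^6\right) \approx 19.9, \qquad \log_2(2000) \approx 11 $$

so Dolby Vision’s million-to-one claim amounts to roughly twenty stops, against about eleven for the 2000:1 sequential contrast often quoted for conventional digital cinema projectors – a gap that can actually be explained to a premium-paying audience.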

An informational slide detailing the features of EclairColor as shown during IBC 2016.

In the same but opposite manner, the angst of creating, promoting and pushing a technology through an embryonic stage was tangible as Chris Witham (Director, Emerging Technology, The Walt Disney Studios) told of the last year of SMPTE DCP transition steps and issues as the only major studio grabbing the reins and delivering features in SMPTE DCP. That was followed by Tony Glover (VP, Mastering Technology and Development, Deluxe Technicolor Digital Cinema UK) detailing the two live tests of the EDCF SMPTE DCP testing. Great data on a plan that I look forward to following as it is rolled out into larger and larger spheres.

Also provocative were the similar presentations of Julian Pinn (CEO – Julian Pinn Ltd) and Rich Welsh (CEO – Sundog Media Toolkit) who both spoke about “watch-this-space” developments that they are now productizing to handle important nuances left behind in the rollout of digital cinema. As a developer myself, I wondered if anyone else had a problem with transmitting the emotion of a heartfelt belief when there are only words to do it with. But there they were, people with exceptional histories touching us with their passions.

Oh, and there was that guy who showed a picture of frog’s hair and thought that just because the DCPs for non-technical managers were free and included a Manager’s Walk Through Series Report checklist, he could get away with pushing his new website at www.CinemaTestTools.com. That was me, and yes, that is a shameless plug.

IBC2016_CinemaLoudness_Hearing Loss Implications

And leaving the first for last, I agree with you Patrick, it was great to see what the new EDCF President David Hancock can do with the numbers. For so long there was the driving focus of one metric – the march toward 100% saturation. Now it is an interesting group of metrics on diversification, the very things the digital transition was supposed to provide to make cinemas more viable in an age when studios were closing distribution windows and a new audience had more choices for their time and money. The transition itself balled everything up for a long while. It will be nice to see how these numbers progress.

A slide with digital cinema figures as presented by David Hancock of IHS during IBC 2016.

Patrick von Sychowski: I was hoping you would get to the regenerating hair in the frog’s ear, which enabled it to restore damaged hearing, something we humans can’t do. Your talk was definitely the talk of the talk at the drinks reception right after the EDCF session – and not just for the frog ear hairs. There were also people coming up and asking about the availability of the DCP test tools from your website, so let’s return to those in a future post.

I have also come up with the perfect analogy for the proliferation of DCP versions that currently bedevils our industry (any guesses whether “Billy Lynn” will go out in fewer or more than 400 versions?). It is like an EDCF drinks party where you get a choice of red wine, white wine, water or juice from a tray, as opposed to everyone standing in line to get their custom-designed cocktails made to order. Waiting in line for 10-15 minutes to get to alcohol when you’ve just sat through three hours-plus of heavy-duty tech talk was not ideal. Let’s go back to just the four options next year, because there are always plenty of other things going on at IBC.

David Hancock EDCF IBC2016

CJ Flynn: It should also be an embarrassment for IBC that – with a portion of each convention area for every technology type, a sponsor everywhere, and having been the first and best with video presentations of convention events – they aren’t finding an IT and delivery sponsor and putting the Big Screen Experience presentations live on the Internet, complete with audience interaction. Otherwise, too many of the presentations become commodities speaking to an uncompelled crowd now that the equipment has matured. There are important product differences, but the presenters have to be ever so polite, even about the technology, lest a fellow panelist or future client in the audience get miffed.

Patrick von Sychowski: I will half-agree with you on this one. Obviously, “Billy Lynn” in 120fps 3D 4K can’t be streamed over the internet, even if Sony Pictures were to allow it. The same goes for other sessions where Hollywood studio material is shown on the big screen. But it would be good if the sessions that don’t use sensitive material were captured and shared, if not live, then at least TED-style at some later point. Because all of the sessions, even Ang Lee’s, deserved an even bigger audience than they got.

That takes me to the perennial point that the largest challenge facing the IBC Big Screen Experience is going beyond preaching to the choir and attracting more cinema people. Apart from the two representatives of Vue and Cinemax who were speaking on panels, I only met one exhibitor in the audience. This was their chance to see “Billy Lynn” in a format that will shape the future of cinema, and yet, despite the fact that attendance is free, they did not make the journey to Amsterdam. Nor did representatives of any cinema trade association.

I wish I knew of a good way to persuade cinema people to come to Amsterdam for IBC other than telling them that the Ang Lee presentation was an amazing eye opener. We really did watch cinema history being made in front of our eyes.

CJ Flynn: The Ang Lee keynote and the final presentation, which took an entirely different tack, were terrific and provocative. When that exceptionally talented moderator – oh, wait, that was you, Patrick – when you put out the idea of having a panel of judges rate different cinema technologies like “Strictly Come Dancing” (“Dancing With the Stars” in the US), I thought: cute, but uhm…OK. But then it turned out to be the best segment of the presentations.

Four different people from different slices of the business – a manufacturer, an installer, a cinema technical chief and an analyst – each got to point out reasoning that I couldn’t have imagined for justifying, or nay-saying, different portions of the technology spectrum as having return-on-investment potential or not. Brilliant idea, well done, and I hope you take full credit for it, since you’ll be remembered for the presentation. If only we in the audience could have had control of a laugh and groan track, and a dynamic rating bar.

Patrick von Sychowski: I will do an Ang Lee and say that I feel humble – not because it wasn’t a brilliant last-minute panic move to steal the format from reality TV, but because I would not have expected that, of the dozen technology categories judged, blue phosphor laser would come out on top while VR languished at the bottom. Let’s see at next year’s IBC what the future holds. Please feel free to bring a cinema friend or two.

In the meantime I’d like to thank all the technology companies that worked with Mr. Pinn and IBC to make the Big Screen Experience day possible. I know that there is a risk in trying to list everyone, as you inevitably forget someone, but Phil White’s team, who coordinated everything on the tech side, deserve a major shout-out, because they had to install so much equipment in the balcony of the RAI auditorium that at one point they thought it would need reinforcing. Don’t forget that there were not only the twin Christie Mirage projectors for “Billy Lynn”, but also the dual Christie/Dolby laser projector set-up (and that’s a lot of cooling required) for the screenings of Disney’s “The Jungle Book” and 20th Century Fox’s “The Revenant” in Dolby Vision and with Atmos immersive audio.

I didn’t stay for the latter, but I’m sure the bear mauling is even more visceral in HDR and surround-growls, while “The Jungle Book” was a bright 3D delight. So thank you Christie, Dolby, Harkness Screens, the projector and audio technicians, the studios that let IBC screen the films and footage, and everyone who flew to Amsterdam to share their insights. I’m sure I’ve neglected to mention plenty of others: QSC? EDCF? Ang Lee’s entourage?

CJ Flynn: Mustn’t forget Terry Nelson and partner Sean O’Dea for making the talent sound good every year while handling the live audio mixing console and other physical aspects of setting up the presentations. And thank you Patrick for this conversation. À la prochaine fois.

…Like Tangents In Rain