Take a Video Tour of Facebook’s 20-Person US Election Security War Room

Beneath an American flag, the 20 people packed tight into a beige conference room are Facebook’s, and by extension the internet’s, first line of defense for democracy. This is Facebook’s election security war room. Screens visualize influxes of foreign political content and voter suppression attempts as high-ranking team members from divisions across Facebook, Instagram, and WhatsApp coordinate rapid responses. The hope is that through face-to-face, real-time collaboration in the war room, Facebook can speed up decision-making and minimize how much misinformation influences how users vote.

In this video, TechCrunch takes you inside the war room at Facebook’s Menlo Park headquarters. Bustling with action beneath the glow of the threat dashboards, you see what should have existed two years ago. During the U.S. presidential election, Russian government trolls and profit-driven fake news outlets polluted the social network with polarizing propaganda. Now Facebook hopes to avoid a repeat in the upcoming US midterms as well as elections across the globe. And to win the hearts, minds, and trust of the public, it’s being more transparent about its strategy.

 

 

“It’s not something you can scale to solve with just humans. And it’s not something you can solve with just technology either,” says Facebook’s head of cybersecurity Nathaniel Gleicher. “I think artificial intelligence is a critical component of a solution and humans are a critical component of a solution.” The two approaches combine in the war room.

Who’s In The War Room And How They Fight Back

  • Engineers – Facebook’s coders develop the dashboards that monitor political content, hate speech, user reports of potential false news, voter suppression content, and more. They build in alarms that warn the team of anomalies and spikes in the data, triggering investigation by…

  • Data Scientists – Once a threat is detected and visualized on the threat boards, these team members dig into who’s behind an attack, and the web of accounts executing the misinformation campaign.
  • Operations Specialists – They determine if and how the attacks violate Facebook’s community standards. If a violation is confirmed, they take down the appropriate accounts and content wherever they appear on the network.
  • Threat Intelligence Researchers and Investigators – These seasoned cybersecurity professionals have deep experience deciphering the sophisticated tactics used by Facebook’s most powerful adversaries, including state actors. They also help Facebook run war games and drills to practice defense against last-minute election-day attacks.
  • Instagram and WhatsApp Leaders – Facebook’s acquisitions must also be protected, so representatives from those teams join the war room to coordinate monitoring and takedowns across the company’s family of apps. Together with Facebook’s high-ups, they dispense info about election protection to Facebook’s 20,000 security staffers.
  • Local Experts – Facebook now starts working to defend an election 1.5 to 2 years ahead of time. To provide maximum context for decisions, local experts from countries with upcoming elections join to bring knowledge of cultural norms and idiosyncrasies.
  • Policy Makers – To keep Facebook’s rules about what’s allowed up to date to bar the latest election interference tactics, legal and policy team members join to turn responses into process.

 

 

Beyond fellow Facebook employees, the team works with external government, security, and tech industry partners. Facebook routinely cooperates with other social networks to pass along information and synchronize takedowns. Facebook has to get used to this. Following the midterms, it will evaluate whether it needs to operate a war room permanently. But after being caught by surprise in 2016, Facebook accepts that it can never turn a blind eye again.

Facebook’s director of global politics and government outreach, Katie Harbath, concludes: “This is our new normal.”

 

 

from: https://techcrunch.com/2018/10/18/facebook-election-war-room/


Noisy Intermediate Scale Quantum (NISQ) Technology


Algorithms for NISQ machines: two prominent examples, applicable to gate-based quantum computers, are the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimisation Algorithm (QAOA).

 

Quantum computing: near- and far-term opportunities

“Quantum computing sounds wonderful, but isn’t it still 10 years away?” A simple question with a subtle answer. And it’s a question that is surely being asked more commonly these days, given the current excitement surrounding quantum computing (QC). Of course, the transition from a pre-QC world to a post-QC world will not be marked by any single moment or breakthrough. Rather, it will involve a continued process of scientific and engineering advances, with interim opportunities for practical applications arising along the way to the perceived endpoint of ‘full-scale, universal’ quantum computing.

In a keynote speech given in late 2017, the physicist John Preskill coined the term Noisy Intermediate Scale Quantum (NISQ) technology for the kinds of quantum computers that will be available in the next few years. Here, ‘noisy’ refers to the fact that the devices will be disturbed by what is happening in their environment. For instance, small changes in temperature, or stray electric or magnetic fields, can cause the quantum information in the computer to be degraded — a process known as decoherence. To overcome this, we need to be able to perform error correction — essentially looking at the system to determine which disturbances have occurred, then reversing them.

‘Intermediate Scale’ refers to the fact that the number of qubits will most likely be limited to a few hundred, or perhaps a few thousand. Of course, the word ‘intermediate’ conveys the sense of there being a grander goal. Ultimately, we are aiming for much larger devices, with several million qubits and error correction. Often referred to as the fault-tolerant regime, this is perhaps what the question posed at the beginning of this article would consider to be quantum computing. The goal of this article is to explore the differences between the NISQ and fault-tolerant regimes. This will set the stage for future articles, where we will discuss the potential real-world applications of NISQ technology.

 

Landscape of quantum computing from an error correction perspective. Inspired by a figure by Daniel Gottesman.

 

Computing in the NISQ era

The term ‘NISQ’ is often used interchangeably with ‘near term’ when speaking of quantum computing, because the devices that will be available in the next few years will be small, and will lack error correction. This is true regardless of the physical platform being used, be it superconducting qubits, or continuous variable photonic quantum computers. As you might expect, the fact that these near-term devices will be ‘small’ and ‘noisy’ restricts what we will be able to do with them. Let’s explore why this is the case.

When we say a quantum computer is ‘small’, we are of course not referring to the physical size of the device and all of its supporting apparatus. Rather, we have in mind the number of basic information processing building blocks it contains  —  typically, the number of qubits. Everyone knows how gargantuan the mainframe computers of the 1950s were compared to today’s laptops. Computationally, however, your laptop is far more powerful, because the number of transistors it contains far exceeds the number in a vintage mainframe  —  by many orders of magnitude, in fact. You can simply work with much more information, and this allows it to run far more complex operations.

Similarly, the greater the number of qubits a quantum computer contains, the more computational power it can potentially deliver. As will become apparent in the remainder of this article, and subsequent articles, the number of qubits is by no means the only metric that determines the power of a quantum computer. Nevertheless, it does provide important limits to the problem sizes that can be tackled. For example, to compute molecular electronic configurations in chemistry, each electron orbital might be represented in the quantum computer by one qubit. If the number of qubits available is limited, then the complexity of the molecules that can be simulated is correspondingly restricted.

 

The number of qubits is by no means the only metric that determines the power of a quantum computer.

 

Turning to the ‘N’ in NISQ, it might seem obvious that noise should be a degrading and limiting factor to the performance of any device, quantum or classical. Indeed, errors can occur in classical computers too, and methods for dealing with them had to be developed. A common scheme involves encoding a single, ‘logical’ bit of information in multiple ‘physical’ bits. Used together with a simple ‘majority voting’ rule for interpreting the logical state from the physical state, classical error correction can be performed.

For example, we might encode a logical 0 in three physical bits as 000, and a logical 1 as 111. Suppose at some point we then observe the state 010 — what is its corresponding logical state? Since there are two 0s and only a single 1, we would democratically interpret this as a logical 0, and the error could be corrected by flipping the 1 in the middle back to a 0. Of course, this scheme only works if the probability of there being two errors is entirely negligible. Otherwise, we might end up in the state 011, and using the majority voting rule we’d incorrectly conclude the logical bit value to be 1.

If the probability of a single error occurring is p, then the probability of two (independent) errors is p². A natural condition for the error correction scheme to work is therefore that the error probability p should be much smaller than one, so that p² is very close to zero. Alternatively, if you can’t make p small enough (perhaps due to technical limitations with the hardware), you might sacrifice more physical bits to encode a single logical bit. For example, if you chose ten physical bits to encode one logical bit, from the initial state 0000000000 there would need to be at least five errors (with total probability of order p⁵) before interpreting the logical state became ambiguous.
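To make the majority-voting argument concrete, here is a minimal sketch (plain Python, no external libraries) of the 3-bit repetition code described above, assuming independent bit-flip errors with probability p on each physical bit. The function names are illustrative only; running it shows the estimated logical error rate scaling roughly as p², in line with the argument above.

```python
# A minimal sketch of the 3-bit repetition code with majority-vote decoding,
# assuming independent bit-flip errors of probability p on each physical bit.
import random

def encode(logical_bit):
    """Encode one logical bit into three identical physical bits."""
    return [logical_bit] * 3

def apply_noise(physical_bits, p):
    """Flip each physical bit independently with probability p."""
    return [b ^ 1 if random.random() < p else b for b in physical_bits]

def decode(physical_bits):
    """Majority vote: the logical value is whichever bit appears most often."""
    return 1 if sum(physical_bits) >= 2 else 0

def logical_error_rate(p, trials=100_000):
    """Estimate how often majority voting returns the wrong logical bit."""
    errors = 0
    for _ in range(trials):
        noisy = apply_noise(encode(0), p)
        if decode(noisy) != 0:
            errors += 1
    return errors / trials

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.1):
        # Analytically, the logical error rate is 3p^2(1-p) + p^3, i.e. of order p^2.
        print(f"p = {p:.2f}  estimated logical error rate = {logical_error_rate(p):.5f}")
```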

Much of the intuition of the previous paragraphs carries over to quantum error correction schemes, with one particularly important caveat. Observing qubits means that we collapse their quantum state — we can’t simply look at them and ask what the logical state of three qubits is without actually changing their state, and thus the information they encode! To address this, more sophisticated techniques have had to be developed for quantum error correction  —  a story we shall tell in a future article.

The bottom line is that even some of the leading quantum error correction schemes require a significant resource overhead, perhaps around 1000 physical qubits to encode a single logical qubit. In the near term, where the number of qubits is limited to perhaps at most a few thousand, it follows that error-corrected quantum computation at scale will not be possible.

Crucially, however, this does not imply that NISQ technologies will not be useful for practical purposes. The rate at which errors occur in the system essentially determines how long you can use a quantum computer  —  or alternatively, how many quantum gates you can perform  —  before you can be certain that an error actually does occur. A key focus is now to determine which kinds of computations can be done with relatively few operations (so-called shallow quantum circuits), keeping the probability of errors low, but which can nonetheless deliver an advantage over classical computation.

In particular, new classes of algorithms have been developed specifically for near-term quantum computers.

Two prominent examples, applicable to gate-based quantum computers, are the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimisation Algorithm (QAOA). These techniques leverage both classical and quantum hardware, as illustrated in the figure below. A typical problem might consist of finding the lowest energy of a molecule, or minimising a function during training and/or inference in machine learning. The classical computer feeds a set of parameters to the quantum computer, which determine a specific sequence of gate operations to be performed — for example, by microwave pulses on superconducting qubits. This set of operations produces an output quantum state that we measure to determine some property — for instance, the total energy. The outcome of the measurement is communicated back to the classical computer, completing one loop of the computational procedure.

The classical computer then suggests a new set of parameters that it thinks will lead the quantum computer to produce a lower energy output state. With the qubits reset to the state |0>, the new circuit is then executed, and the energy is again measured. The process is repeated until the energy (or cost function value) converges to its apparent minimum.
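As an illustration of this hybrid loop, here is a minimal sketch in Python. The “quantum computer” is replaced by a classical stand-in: a single parameterised rotation whose measured energy is ⟨Z⟩ = cos θ, estimated from a finite number of shots to mimic measurement statistics. The classical outer loop then updates θ by gradient descent using the parameter-shift rule. Function names, the shot count, and the learning rate are illustrative assumptions, not taken from any particular framework or from the article.

```python
# A minimal sketch of the hybrid quantum-classical optimisation loop.
# `measure_energy` stands in for running a parameterised circuit on a QPU.
import numpy as np

rng = np.random.default_rng(seed=0)

def measure_energy(theta, shots=1000):
    """Stand-in for running the circuit and measuring <Z> = cos(theta)."""
    p_up = (1 + np.cos(theta)) / 2            # probability of measuring +1
    ups = rng.binomial(shots, p_up)           # finite-shot sampling noise
    return (2 * ups - shots) / shots          # estimated expectation value

def optimise(theta=2.0, lr=0.2, steps=50):
    """Classical outer loop: parameter-shift gradient descent on theta."""
    for _ in range(steps):
        # Parameter-shift rule: the gradient of <Z> from two circuit evaluations.
        grad = 0.5 * (measure_energy(theta + np.pi / 2) -
                      measure_energy(theta - np.pi / 2))
        theta -= lr * grad                    # classical update suggestion
    return theta, measure_energy(theta)

if __name__ == "__main__":
    theta, energy = optimise()
    print(f"theta = {theta:.3f}, estimated energy = {energy:.3f} "
          "(minimum is -1 at theta = pi)")
```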

 

Typical flow of a hybrid quantum-classical algorithm. Image credit: Rigetti Computing

 

There are several points to make about this quantum-classical hybrid model of computation. Observe that the classical computer is performing an optimisation loop: it is searching through a sea of parameters, and making update suggestions based on the energy reported from measuring the quantum state. If the error rates in the quantum computer were sufficiently low, it may not need such external guidance in searching for the lowest energy configuration. In the NISQ era, however, the QPU will generally rely on instructions from the CPU.

Secondly, the quantum chip is used to output a number that would be computationally demanding — perhaps even impossible — for a classical computer to determine. Even with shallow circuits and noisy qubits, simulating the behaviour of quantum systems on a classical computer can be intractable; the only option is to use a real quantum computer!

This idea lies at the heart of the push to demonstrate quantum advantage (or quantum supremacy), where a quantum computer demonstrably performs a calculation that would be impossible for even the most powerful classical computers, in any reasonable amount of time. We’ll pick up the story of quantum advantage in our next article.

Lastly, these quantum-classical hybrid algorithms exhibit some degree of robustness to errors. Suppose, for example, that the classical computer keeps suggesting circuit parameters that involve operating on a particularly noisy qubit. If errors that occur in this qubit result in significant energy penalties (or higher cost functions), the algorithm will learn not to use that qubit in subsequent computations.

Now that we understand the basic working procedure and limitations of NISQ computing, an obvious question arises: for which kinds of problems can we use the technology? We will explore this question in detail as we continue our focus on applications in the coming months; however, the prime candidates are thought to include quantum chemistry, quantum machine learning, and quantum optimisation.

Fault-tolerant quantum computing

Let’s move away from the land of imperfect quantum computation — where limitations on system size and noise place significant restrictions on what we can do — and familiarise ourselves with what is over the horizon, in the land of fault tolerance.

Let’s first think about the road to get there. The figure below — produced by John Martinis of Google — maps out the qubit quality vs. quantity relationship. The y-axis shows the value of the largest (and thus limiting) error rate in a given quantum computer. The dashed line at 10^-2 denotes what is known as the error correction threshold: if the limiting error rate in the system exceeds this number, it becomes futile to keep up with the accumulation of errors, and to attempt to correct for them. In this light, it is perfectly clear why simply increasing the number of qubits is not helpful in itself — as indicated in the figure by “quantity hype”.

 

Illustration of the qubit quality vs quantity relationship. Image credit: John Martinis, Google.

 

As we drop below the error threshold, and reach beyond around 50 or 60 qubits, it quickly becomes impossible to simulate quantum circuits on a classical computer. The blue shaded region corresponds to the regime of NISQ technology, where applications in the next few years will be explored.

 

Pushing towards limiting error rates of around one in a thousand, and with enough physical qubits to encode several tens of logical qubits, we reach the realm of useful error-corrected quantum computing. It is here that we start to reach the holy grail — the promised land of ideal quantum computing at scale.

 

What do we expect to find in this promised land? A large amount of theoretical work over the years allows us to say quite a bit about the rewards in store. In particular, this is where we will be able to run Shor’s factoring and Grover’s search algorithms*, perhaps the two canonical textbook examples of the power of quantum computing. Arguably, a more tantalising prospect is that of digital quantum simulation, which will allow us to use a quantum computer to replicate the behaviour of any other quantum system. This is expected to facilitate the design of new materials, and even of more efficient chemical reactions.

We also find a plethora of techniques based on the quantum linear systems algorithm* — also known as the HHL algorithm, after its founders — which provides an exponentially faster way of solving certain classes of linear equations. This may bring dramatic boosts to many real-world engineering and mathematics problems, including machine learning on large classical data sets, however thus far there has been little research on practical applications of the algorithm.

A very natural application for large-scale quantum computers is to the analysis of quantum data. At present, there are very few instances where we work with natively quantum data, since we neither produce it nor detect it. However, this will change as the use of quantum technologies becomes more widespread. The output of a quantum computer or simulator is a quantum state, while quantum sensors may eventually allow us to collect quantum data from other sources. Processing and analysing this information will necessarily require large, fault-tolerant quantum computers. One can envision the emergence of an entirely new ‘quantum data’ economy.

Looking to the near term

Having glanced into the far future, let’s bring ourselves back to where we are today. Currently, quantum computers are too small and noisy to be able to do anything a laptop can’t do. However, in the next few years, NISQ technologies may offer a path to quantum advantage that can be deployed for important applications such as chemistry and machine learning.

 

While we focused above on algorithms for gate-based quantum computers, such as VQE and QAOA, there are potentially other means of achieving quantum advantage in the near term. This involves the use of specialised hardware designed to perform only a single type of problem — quantum application-specific chips, if you will. Two prominent examples of such devices include quantum annealers and boson samplers.

 

Quantum annealers will be the focus of an upcoming article, however our next post will describe the quest to demonstrate quantum advantage — a feat that some believe to be imminent.

QWA is helping to bridge the gap between quantum and business. The go-to for providing suggestions, feedback and questions is our email address info@quantumwa.org.

* This is another story for another day, but algorithms such as Shor, Grover, and HHL actually require more than just a fault-tolerant and sufficiently large quantum computer. To maintain the speedups these algorithms offer, we also need an efficient way of loading classical data into a quantum superposition state. There are proposals for a device known as quantum random access memory (qRAM) to achieve this, however the question of how to actually build a practical qRAM remains very much open.

 

from: https://medium.com/@quantum_wa/quantum-computing-near-and-far-term-opportunities-f8ffa83cc0c9

 

 

Published on Dec 20, 2017

John Preskill, Richard P. Feynman Professor of Theoretical Physics at Caltech. Professor Preskill’s talk kicked off the conference by presenting the consensus among the community of technical experts concerning what we can expect from quantum computing technology in the near term. Preskill is already known for having coined the phrase “quantum supremacy”, and during the conference he added to this list by coining the acronym NISQ, standing for Noisy Intermediate-Scale Quantum Computers, which later speakers at the conference gladly adopted. Preskill pointed out that while there is very little (if any) doubt that NISQ-era QPUs will be capable of outperforming the world’s most powerful classical supercomputers for certain computational tasks, experimentation will be key in telling us more about which useful tasks they will be capable of addressing better than current computing resources.


“Quantum Computing and the Entanglement Frontier” John Preskill, CalTech

Published on Mar 7, 2017

The quantum laws governing atoms and other tiny objects seem to defy common sense, and information encoded in quantum systems has weird properties that baffle our feeble human minds. John Preskill will explain why he loves quantum entanglement, the elusive feature making quantum information fundamentally different from information in the macroscopic world. By exploiting quantum entanglement, quantum computers should be able to solve otherwise intractable problems, with far-reaching applications to cryptology, materials, and fundamental physical science. Preskill is less weird than a quantum computer, and easier to understand.


Boson sampling

Boson sampling constitutes a restricted model of non-universal quantum computation introduced by S. Aaronson and A. Arkhipov. It consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered as the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to implement a classically hard task using far fewer physical resources than a full linear-optical quantum computing setup. This makes it an outstanding candidate for demonstrating the power of quantum computation in the near term.

[…]

The Task

Consider a multimode linear-optical circuit of N modes that is injected with M indistinguishable single photons (N>M). Then, the photonic implementation of the boson sampling task consists of generating a sample from the probability distribution of single-photon measurements at the output of the circuit. Specifically, this requires reliable sources of single photons (currently the most widely used ones are parametric down-conversion crystals), as well as a linear interferometer. The latter can be fabricated, e.g., with fused-fiber beam splitters, through silica-on-silicon or laser-written integrated interferometers, or electrically and optically interfaced optical chips. Finally, the scheme also necessitates high-efficiency single-photon-counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit. Therefore, based on these three ingredients, the boson sampling setup does not require any ancillas, adaptive measurements or entangling operations, unlike, e.g., the universal optical scheme by Knill, Laflamme and Milburn (the KLM scheme). This makes it a non-universal model of quantum computation, and reduces the amount of physical resources needed for its practical realization.
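As a concrete illustration of the sampling task just described, here is a small sketch assuming collision-free outputs (at most one photon per mode): the probability of detecting the M input photons in a given set of output modes is |Perm(A)|², where A is the M×M submatrix of the interferometer’s N×N unitary formed by the occupied output rows and input columns. The random unitary below merely stands in for a real interferometer; the mode numbers and sizes are illustrative choices, not values from the article.

```python
# Boson sampling output probabilities for collision-free outcomes:
# P(output modes S) = |Perm(U[S, T])|^2, with T the occupied input modes.
import itertools
import numpy as np

def permanent(a):
    """Naive permanent via the permutation sum (fine for small M)."""
    n = a.shape[0]
    return sum(np.prod([a[i, perm[i]] for i in range(n)])
               for perm in itertools.permutations(range(n)))

def random_unitary(n, seed=0):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

if __name__ == "__main__":
    N, input_modes = 6, (0, 1, 2)          # N modes, M = 3 single photons injected
    U = random_unitary(N)
    probs = {}
    for output_modes in itertools.combinations(range(N), len(input_modes)):
        A = U[np.ix_(output_modes, input_modes)]
        probs[output_modes] = abs(permanent(A)) ** 2
    total = sum(probs.values())             # collision-free outcomes only
    for modes, p in sorted(probs.items(), key=lambda kv: -kv[1])[:5]:
        print(f"output modes {modes}: probability {p:.4f} "
              f"(collision-free total {total:.4f})")
```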

 

more: https://en.wikipedia.org/wiki/Boson_sampling


$194 Million was Moved Using Bitcoin With $0.1 Fee: The True Potential of Crypto (Banks: $10,000+ Fee)

On October 16, a Bitcoin user moved 29,999 BTC worth $194 million with a $0.1 fee, a transaction which with banks would cost tens of thousands of dollars.

An often pushed narrative against cryptocurrencies like Bitcoin and Ethereum is that it is expensive to clear transactions due to fees sent to miners. However, the $194 million payment on the Bitcoin blockchain demonstrates the potential of consensus currencies to optimize cross-border payments significantly.

$1 Million Through a Bank Costs $10,000+

Transferwise is a UK-based multi-billion dollar firm that eliminates hidden fees in bank transfers. On the platform, users can send small to large payments through bank accounts with substantially lower fees.

However, even on a platform like Transferwise, to send over $1 million, it costs over $7,500 in transaction fees. That means, through wire transfers and conventional banking methods, tens of thousands of dollars are required to clear a transaction that is larger than $1 million.

Percentage-wise, $7,500 is less than 1 percent of $1 million, and in that sense, a $7,500 fee is cheap. But on the Bitcoin network, which is supposedly highly inefficient at processing payments, it costs less than $0.10 to clear a $194 million transaction.
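For a quick sanity check on the percentages quoted above, the comparison can be written out explicitly. The figures below are the ones cited in the article (the bank-wire number is the article’s rough estimate), not live data:

```python
# Fee as a percentage of the amount moved, using the article's quoted figures.
transfers = {
    "Transferwise, $1M international transfer": (1_000_000, 7_500.00),
    "Bank wire, >$1M (article's rough figure)": (1_000_000, 10_000.00),
    "Bitcoin, $194M transaction (Oct 16)":      (194_000_000, 0.10),
}

for label, (amount, fee) in transfers.items():
    print(f"{label}: fee ${fee:,.2f} = {fee / amount:.8%} of the amount moved")
```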

 

 

On October 14, prominent cryptocurrency critic Nouriel Roubini, an economist and professor at NYU’s Stern School of Business, falsely claimed that it costs $60 to process a Bitcoin transaction, and that as such it would cost $63 to purchase a $3 Starbucks latte using Bitcoin.

“So the cost per transaction of bitcoin is literally $60. So if I were to buy a $3 latte at Starbucks I would have to pay $63 to get it! So the myth of a ‘Brilliant new technology that reduces the vast fees of legacy financial systems!’ turns out to be a Big Fat Lie!” Nouriel claimed.

In response, respected cryptocurrency investor and Blocktower co-founder Ari Paul stated that the transaction fee of Bitcoin, which is less than $0.1, is publicly verifiable on the blockchain.

“BTC fees are less than $0.10, easily verifiable. If you value truth, you’d provide a public correction. If your goal is to mislead people with simply false statements, carry on. There’s nothing to research. Fees are publicly viewable from many sources (googling it works). I find it better not to provide a specific source because then regardless of source, the source gets attacked,” Paul noted.

Crypto Could Crack Offshore Banking Market First

As scalability of public blockchain networks improves with the integration of both on-chain and second-layer scaling solutions, cryptocurrencies will be able to handle small payments with higher efficiency.

But, in the mid-term, given the ability of the blockchain to process large-scale payments at the same cost of a small transaction, it is highly likely that cryptocurrencies will gain wide acceptance by investors and firms in the offshore banking market, a $30 trillion industry that relies on financial institutions to clear large transactions.

Spending $0.10 to $1 in fees on a $5 to $10 transaction could be inefficient and impractical. However, spending the same fee to process multi-million-dollar transactions gives cryptocurrencies a clear edge over legacy systems.

 

from: https://www.ccn.com/194-million-was-moved-using-bitcoin-with-0-1-fee-true-potential-of-crypto/

 

This is the transaction & fee:

https://www.blockchain.com/en/btc/tx/bace354d53088d92740485ade3211309d80b427355b931a790575b6646970202


D-Wave Annealing Quantum Computers Tackle Big Data With A ML Quantum Boltzmann Machine (Artificial Neural Network)

Every two seconds, sensors measuring the United States’ electrical grid collect 3 petabytes of data – the equivalent of 3 million gigabytes. Data analysis on that scale is a challenge when crucial information is stored in an inaccessible database.

But researchers at Purdue University are working on a solution, combining quantum algorithms with classical computing on small-scale quantum computers to speed up database accessibility. They are using data from the U.S. Department of Energy National Labs’ sensors, called phasor measurement units, that collect information on the electrical power grid about voltages, currents and power generation. Because these values can vary, keeping the power grid stable involves continuously monitoring the sensors.

Sabre Kais, a professor of chemical physics and principal investigator, will lead the effort to develop new quantum algorithms for computing the extensive data generated by the electrical grid.

“Non-quantum algorithms that are used to analyze the data can predict the state of the grid, but as more and more phasor measurement units are deployed in the electrical network, we need faster algorithms,” said Alex Pothen, professor of computer science and co-investigator on the project. “Quantum algorithms for data analysis have the potential to speed up the computations substantially in a theoretical sense, but great challenges remain in achieving quantum computers that can process such large amounts of data.”

The research team’s method has potential for a number of practical applications, such as helping industries optimize their supply-chain and logistics management. It could also lead to new chemical and material discovery using an artificial neural network known as a quantum Boltzmann machine. This kind of neural network is used for machine learning and data analysis.

“We have already developed a hybrid quantum algorithm employing a quantum Boltzmann machine to obtain accurate electronic structure calculations,” Kais said. “We have proof of concept showing results for small molecular systems, which will allow us to screen molecules and accelerate the discovery of new materials.”

A paper outlining these results was published Wednesday in the journal Nature Communications.

Machine learning algorithms have been used to calculate the approximate electronic properties of millions of small molecules, but navigating these molecular systems is challenging for chemical physicists. Kais and co-investigator Yong Chen, director of the Purdue Quantum Center and professor of physics and astronomy and of electrical and computer engineering, are confident that their quantum machine learning algorithm could address this.

Their algorithms could also be used for optimizing solar farms. The lifetime of a solar farm varies depending on the climate as solar cells degrade each year from weather, according to Muhammad Alam, professor of electrical and computer engineering and a co-investigator of the project. Using quantum algorithms would make it easier to determine the lifetime of solar farms and other sustainable energy technologies for a given geographical location and could help make solar technologies more efficient.

Additionally, the team hopes to launch an externally-funded industry-university collaborative research center (IUCRC) to promote further research in quantum machine learning for data analytics and optimization. Benefits of an IUCRC include leveraging academic-corporate partnerships, expanding material science research, and acting on market incentive. Further research in quantum machine learning for data analysis is necessary before it can be of use to industries for practical application, Chen said, and an IUCRC would make tangible progress.

“We are close to developing the classical algorithms for this data analysis, and we expect them to be widely used,” Pothen said. “Quantum algorithms are high-risk, high-reward research, and it is difficult to predict in what time frame these algorithms will find practical use.”

The team’s research project was one of eight selected by Purdue’s Integrative Data Science Initiative to be funded for a two-year period. The initiative will encourage interdisciplinary collaboration and build on Purdue’s strengths to position the university as a leader in data science research, with each project focusing on one of four areas:

  • health care
  • defense
  • ethics, society and policy
  • fundamentals, methods, and algorithms.

The research thrusts of the Integrative Data Science Initiative are hosted by Purdue’s Discovery Park.

“This is an exciting time to combine machine learning with quantum computing,” Kais said. “Impressive progress has been made recently in building quantum computers, and quantum machine learning techniques will become powerful tools for finding new patterns in big data.”

from: https://phys.org/news/2018-10-quantum-tackle-big-machine.html

Quantum machine learning

Quantum machine learning is an emerging interdisciplinary research area at the intersection of quantum physics and machine learning. The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer. This includes hybrid methods that involve both classical and quantum processing, where computationally expensive subroutines are outsourced to a quantum device. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data. Beyond quantum computing, the term “quantum machine learning” is often associated with machine learning methods applied to data generated from quantum experiments, such as learning quantum phase transitions or creating new quantum experiments. Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics carry over to classical deep learning and vice versa. Finally, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as “quantum learning theory”.

Quantum sampling techniques

Sampling from high-dimensional probability distributions is at the core of a wide spectrum of computational techniques with important applications across science, engineering, and society. Examples include deep learning, probabilistic programming, and other machine learning and artificial intelligence applications.

A computationally hard problem, which is key for some relevant machine learning tasks, is the estimation of averages over probabilistic models defined in terms of a Boltzmann distribution. Sampling from generic probabilistic models is hard: algorithms relying heavily on sampling are expected to remain intractable no matter how large and powerful classical computing resources become. Even though quantum annealers, like those produced by D-Wave Systems, were designed for challenging combinatorial optimization problems, they have recently been recognized as potential candidates to speed up computations that rely on sampling by exploiting quantum effects.

Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. The standard approach to training Boltzmann machines relies on the computation of certain averages that can be estimated by standard sampling techniques, such as Markov chain Monte Carlo algorithms. Another possibility is to rely on a physical process, like quantum annealing, that naturally generates samples from a Boltzmann distribution. The objective is to find the optimal control parameters that best represent the empirical distribution of a given dataset.
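As a sketch of the kind of sampling step being referred to, the snippet below runs block Gibbs sampling in a tiny restricted Boltzmann machine to estimate the model average ⟨v_i h_j⟩ that appears in the weight gradient of standard (classical) Boltzmann machine training. Sizes, random weights, and the number of sampling steps are illustrative assumptions; in the approach described above, a quantum annealer would replace this Markov chain as the source of samples.

```python
# Block Gibbs sampling in a small RBM to estimate the model average <v h^T>,
# the quantity usually approximated by MCMC during Boltzmann machine training.
import numpy as np

rng = np.random.default_rng(1)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.5, size=(n_visible, n_hidden))   # illustrative weights
b = np.zeros(n_visible)                                  # visible biases
c = np.zeros(n_hidden)                                   # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    return (rng.random(p.shape) < p).astype(float)

def gibbs_estimate(steps=5000, burn_in=500):
    """Estimate <v h^T> under the model by alternating block Gibbs updates."""
    v = sample_bernoulli(np.full(n_visible, 0.5))
    acc = np.zeros_like(W)
    kept = 0
    for t in range(steps):
        h = sample_bernoulli(sigmoid(v @ W + c))    # sample hidden given visible
        v = sample_bernoulli(sigmoid(h @ W.T + b))  # sample visible given hidden
        if t >= burn_in:
            acc += np.outer(v, h)
            kept += 1
    return acc / kept

if __name__ == "__main__":
    print("Estimated model averages <v_i h_j>:")
    print(np.round(gibbs_estimate(), 3))
```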

The D-Wave 2X system hosted at NASA Ames Research Center has been recently used for the learning of a special class of restricted Boltzmann machines that can serve as a building block for deep learning architectures. Complementary work that appeared roughly simultaneously showed that quantum annealing can be used for supervised learning in classification tasks. The same device was later used to train a fully connected Boltzmann machine to generate, reconstruct, and classify down-scaled, low-resolution handwritten digits, among other synthetic datasets. In both cases, the models trained by quantum annealing had a similar or better performance in terms of quality. The ultimate question that drives this endeavour is whether there is quantum speedup in sampling applications. Experience with the use of quantum annealers for combinatorial optimization suggests the answer is not straightforward.

Inspired by the success of Boltzmann machines based on classical Boltzmann distribution, a new machine learning approach based on quantum Boltzmann distribution of a transverse-field Ising Hamiltonian was recently proposed. Due to the non-commutative nature of quantum mechanics, the training process of the quantum Boltzmann machine can become nontrivial. This problem was, to some extent, circumvented by introducing bounds on the quantum probabilities, allowing the authors to train the model efficiently by sampling. It is possible that a specific type of quantum Boltzmann machine has been trained in the D-Wave 2X by using a learning rule analogous to that of classical Boltzmann machines.

Quantum annealing is not the only technology for sampling. In a prepare-and-measure scenario, a universal quantum computer prepares a thermal state, which is then sampled by measurements. This can reduce the time required to train a deep restricted Boltzmann machine, and provide a richer and more comprehensive framework for deep learning than classical computing. The same quantum methods also permit efficient training of full Boltzmann machines and multi-layer, fully connected models and do not have well-known classical counterparts. Relying on an efficient thermal state preparation protocol starting from an arbitrary state, quantum-enhanced Markov logic networks exploit the symmetries and the locality structure of the probabilistic graphical model generated by a first-order logic template. This provides an exponential reduction in computational complexity in probabilistic inference, and, while the protocol relies on a universal quantum computer, under mild assumptions it can be embedded on contemporary quantum annealing hardware.

Quantum neural networks

Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models.

Boltzmann Distribution

In statistical mechanics and mathematics, a Boltzmann distribution (also called a Gibbs distribution) is a probability distribution, probability measure, or frequency distribution of particles in a system over various possible states. The distribution is expressed in the form

F(state) ∝ e^(−E/(kT))

where E is the state energy (which varies from state to state), and kT (a constant of the distribution) is the product of Boltzmann’s constant and the thermodynamic temperature.

In statistical mechanics, the Boltzmann distribution is a probability distribution that gives the probability that a system will be in a certain state as a function of that state’s energy and the temperature of the system. It is given as

p_i = e^(−ε_i/(kT)) / Σ_{j=1}^{M} e^(−ε_j/(kT))

where p_i is the probability of state i, ε_i the energy of state i, k the Boltzmann constant, T the temperature of the system, and M the number of states accessible to the system. The sum is over all states accessible to the system of interest. The term system here has a very wide meaning; it can range from a single atom to a macroscopic system such as a natural gas storage tank. Because of this, the Boltzmann distribution can be used to solve a very wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied than states with higher energy.
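As a small numeric illustration of the formula above, the snippet below computes occupation probabilities for three states with made-up energies at two temperatures, showing that lower-energy states are always more probable and that the distribution sharpens as the temperature drops. The energies are arbitrary illustrative values, not data from any experiment.

```python
# Boltzmann probabilities p_i = exp(-e_i / kT) / sum_j exp(-e_j / kT)
import numpy as np

k = 1.380649e-23                                 # Boltzmann constant, J/K
energies = np.array([0.0, 1.0, 2.0]) * 1e-21     # hypothetical state energies, J

def boltzmann_probabilities(energies, T):
    weights = np.exp(-energies / (k * T))
    return weights / weights.sum()               # normalise over the M states

for T in (100.0, 300.0):
    probs = boltzmann_probabilities(energies, T)
    print(f"T = {T:>5.0f} K:", np.round(probs, 4))
```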

 

Reinforcement Learning Using Quantum Boltzmann Machines

By Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi, & Pooya Ronagh

We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse-field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the set of visible nodes representing the states and actions of an optimal policy are the first and last layers of the deep network. In the absence of a transverse field, our simulations show that DBMs train more effectively than restricted Boltzmann machines (RBM) with the same number of weights. Since sampling from Boltzmann distributions of a DBM is not classically feasible, this is evidence of the advantage of a non-Turing sampling oracle. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This further improves the reinforcement learning method using DBMs.

 

Video:

Quantum Boltzmann Machine using a Quantum Annealer

Recording Details

PIRSA Number: 16080009

Abstract

Machine learning is a rapidly growing field in computer science with applications in computer vision, voice recognition, medical diagnosis, spam filtering, search engines, etc. In this presentation, I will introduce a new machine learning approach based on quantum Boltzmann distribution of a transverse-field Ising Model. Due to the non-commutative nature of quantum mechanics, the training process of the Quantum Boltzmann Machine (QBM) can become nontrivial.  I will show how to circumvent this problem by introducing bounds on the quantum probabilities. This allows training the QBM efficiently by sampling. I will then show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training. Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the possibility of using such processors for QBM training and application.

see the video (40 mins) of the presentation:
https://perimeterinstitute.ca/videos/quantum-boltzmann-machine-using-quantum-annealer


How Hacked Water Heaters Could Trigger Mass Blackouts

A new study found that just 42,000 of those hacked home devices could be enough to leave a country of 38 million people in the dark.

 

When the cybersecurity industry warns about the nightmare of hackers causing blackouts, the scenario they describe typically entails an elite team of hackers breaking into the inner sanctum of a power utility to start flipping switches. But one group of researchers has imagined how an entire power grid could be taken down by hacking a less centralized and protected class of targets: home air conditioners and water heaters. Lots of them.

At the Usenix Security conference this week, a group of Princeton University security researchers will present a study that considers a little-examined question in power grid cybersecurity: What if hackers attacked not the supply side of the power grid, but the demand side? In a series of simulations, the researchers imagined what might happen if hackers controlled a botnet composed of thousands of silently hacked consumer internet of things devices, particularly power-hungry ones like air conditioners, water heaters, and space heaters. Then they ran a series of software simulations to see how many of those devices an attacker would need to simultaneously hijack to disrupt the stability of the power grid.

Their answers point to a disturbing, if not quite yet practical scenario: In a power network large enough to serve an area of 38 million people — a population roughly equal to Canada or California — the researchers estimate that just a one percent bump in demand might be enough to take down the majority of the grid. That demand increase could be created by a botnet as small as a few tens of thousands of hacked electric water heaters or a couple hundred thousand air conditioners.

“Power grids are stable as long as supply is equal to demand,” says Saleh Soltan, a researcher in Princeton’s Department of Electrical Engineering, who led the study. “If you have a very large botnet of IoT devices, you can really manipulate the demand, changing it abruptly, any time you want.”

Just a one percent bump in demand might be enough to take down the majority of the grid.

The result of that botnet-induced imbalance, Soltan says, could be cascading blackouts. When demand in one part of the grid rapidly increases, it can overload the current on certain power lines, damaging them or more likely triggering devices called protective relays, which turn off the power when they sense dangerous conditions. Switching off those lines puts more load on the remaining ones, potentially leading to a chain reaction.

“Fewer lines need to carry the same flows and they get overloaded, so then the next one will be disconnected and the next one,” says Soltan. “In the worst case, most or all of them are disconnected, and you have a blackout in most of your grid.”

Power utility engineers, of course, expertly forecast fluctuations in electric demand on a daily basis. They plan for everything from heat waves that predictably cause spikes in air conditioner usage to the moment at the end of British soap opera episodes when hundreds of thousands of viewers all switch on their tea kettles. But the Princeton researchers’ study suggests that hackers could make those demand spikes not only unpredictable, but maliciously timed.

The researchers don’t actually point to any vulnerabilities in specific household devices, or suggest how exactly they might be hacked. Instead, they start from the premise that a large number of those devices could somehow be compromised and silently controlled by a hacker. That’s arguably a realistic assumption, given the myriad vulnerabilities other security researchers and hackers have found in the internet of things. One talk at the Kaspersky Analyst Summit in 2016 described security flaws in air conditioners that could be used to pull off the sort of grid disturbance that the Princeton researchers describe. And real-world malicious hackers have compromised everything from refrigerators to fish tanks.

Given that assumption, the researchers ran simulations in the power grid modeling software MATPOWER and PowerWorld to determine what sort of botnet could disrupt what size of grid. They ran most of their simulations on models of the Polish power grid from 2004 and 2008, a rare country-sized electrical system whose architecture is described in publicly available records. They found they could cause a cascading blackout of 86 percent of the power lines in the 2008 Poland grid model with just a one percent increase in demand. That would require the equivalent of 210,000 hacked air conditioners, or 42,000 electric water heaters.
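A rough back-of-the-envelope calculation shows how device counts of this scale translate into a one percent demand bump. The system load and per-device power draws below are assumed round numbers chosen for illustration (they happen to reproduce the article’s device counts), not figures taken from the Princeton study itself:

```python
# How many simultaneously switched-on devices produce a 1% demand increase,
# under assumed (illustrative) grid load and per-device power draws.
GRID_LOAD_MW = 21_000          # assumed total system load, MW (illustrative)
BUMP_FRACTION = 0.01           # the one percent demand increase from the article

device_draw_kw = {
    "air conditioner": 1.0,          # assumed average draw per hacked unit, kW
    "electric water heater": 5.0,    # assumed average draw per hacked unit, kW
}

extra_demand_kw = GRID_LOAD_MW * 1_000 * BUMP_FRACTION
for device, draw in device_draw_kw.items():
    count = extra_demand_kw / draw
    print(f"{device}: about {count:,.0f} devices for a {BUMP_FRACTION:.0%} bump")
```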

The notion of an internet of things botnet large enough to pull off one of those attacks isn’t entirely farfetched. The Princeton researchers point to the Mirai botnet of 600,000 hacked IoT devices, including security cameras and home routers. That zombie horde hit DNS provider Dyn with an unprecedented denial of service attack in late 2016, taking down a broad collection of websites.

Building a botnet of the same size out of more power-hungry IoT devices is probably impossible today, says Ben Miller, a former cybersecurity engineer at electric utility Constellation Energy and now the director of the threat operations center at industrial security firm Dragos. There simply aren’t enough high-power smart devices in homes, he says, especially since the entire botnet would have to be within the geographic area of the target electrical grid, not distributed across the world like the Mirai botnet.

‘If you have a very large botnet of IoT devices, you can really manipulate the demand, changing it abruptly, any time you want.’
Saleh Soltan, Princeton University

But as internet-connected air conditioners, heaters, and the smart thermostats that control them increasingly show up in homes for convenience and efficiency, a demand-based attack like the one the Princeton researchers describe could become more practical than one that targets grid operators. “It’s as simple as running a botnet. When a botnet is successful, it can scale by itself. That makes the attack easier,” Miller says. “It’s really hard to attack all the generation sites on a grid all at once. But with a botnet you could attack all these end user devices at once and have some sort of impact.”

The Princeton researchers modeled more devious techniques their imaginary IoT botnet might use to mess with power grids, too. They found it was possible to increase demand in one area while decreasing it in another, so that the total load on a system’s generators remains constant while the attack overloads certain lines. That could make it even harder for utility operators to figure out the source of the disruption.

If a botnet did succeed in taking down a grid, the researchers’ models showed it would be even easier to keep it down as operators attempted to bring it back online, triggering smaller scale versions of their attack in the sections or “islands” of the grid that recover first. And smaller scale attacks could force utility operators to pay for expensive backup power supplies, even if they fall short of causing actual blackouts. And the researchers point out that since the source of the demand spikes would be largely hidden from utilities, attackers could simply try them again and again, experimenting until they had the desired effect.

The owners of the actual air conditioners and water heaters might notice that their equipment was suddenly behaving strangely. But that still wouldn’t immediately be apparent to the target energy utility. “Where do the consumers report it?” asks Princeton’s Soltan. “They don’t report it to Con Edison, they report it to the manufacturer of the smart device. But the real impact is on the power system that doesn’t have any of this data.”

That disconnect represents the root of the security vulnerability that utility operators need to fix, Soltan argues. Just as utilities carefully model heat waves and British tea times and keep a stock of energy in reserve to cover those demands, they now need to account for the number of potentially hackable high-powered devices on their grids, too. As high-power smart-home gadgets multiply, the consequences of IoT insecurity could someday be more than just a haywire thermostat, but entire portions of a country going dark.

 

from: https://www.wired.com/story/water-heaters-power-grid-hack-blackout/

 

 

The European Blockchain Partnership Finds Europe Getting Serious About Distributed Ledger Technology

Here’s why the European Blockchain Partnership is a big step towards widespread blockchain adoption: it aims for the European Blockchain Services Infrastructure (EBSI) to become an international “gold standard” for large-scale DLTs.

 

On April 10, 2018, 21 EU member states and Norway signed up to create the European Blockchain Partnership. Including the UK, France, Germany, Sweden, the Netherlands and Ireland, they committed themselves to “cooperate in the establishment of a European Blockchain Services Infrastructure (EBSI) that will support the delivery of cross-border digital public services, with the highest standards of security and privacy.”

Since April, a further five nations have joined the Partnership, with Italy becoming the latest to do so after it signed the Partnership’s Declaration in September. As a member, it has committed itself to helping to identify, by the end of 2018, “an initial set of cross-border digital public sector services that could be deployed through the European Blockchain Services Infrastructure.”

By bringing distributed ledger technology (DLT) to European infrastructure, the Partnership hopes to make cross-border services – such as those related to logistics and regulatory reporting – safer and more efficient. However, progress towards this goal has so far been slow and piecemeal, with the Partnership’s members having had only three meetings since April. Nonetheless, it retains ambitious aims, with the European Commission telling Cointelegraph that it wants the European Blockchain Services Infrastructure (EBSI) to become an international “gold standard” for large-scale DLTs.

 

Still deciding

So far, the Partnership’s mission is vaguely defined. While there was already agreement in April that it would work towards developing cross-border, blockchain-based public services, there is still no actual agreement on what particular services to hone in on and develop. The European Commission’s head of Digital Innovation and Blockchain, Pēteris Zilgalvis explains:

“The Partnership’s mission is defined in the Joint Declaration and it is on that mandate that we have to deliver before the end of the year. In the Joint Declaration the signatories committed to working together and with the European Commission in order to develop an EBSI that can support the delivery of cross-border digital public services in Europe. So the description of what this services’ infrastructure [EBSI] could look like is what we are currently working on.”

In other words, the Partnership’s membership is currently at the very early stage of negotiating just what kind of blockchain-based public services to develop. However, as Zilgalvis explained to Cointelegraph, it expects to have agreed on all the fundamental details by the end of the year, so that these can be used as the basis for actually building and rolling out distributed cross-border technologies.

“As stated in the Joint Declaration, by end of 2018 the Partnership must provide a set of use cases of cross-border digital public services that could be deployed through the EBSI, a set of functional and technical specifications for the EBSI and finally, a governance model describing how the EBSI will be managed.”

A global reference for blockchain

The Partnership and its members will therefore be busy for the rest of 2018, although it has only three more meetings left to hammer out the all-important details, having already had three meetings so far. According to Finland‘s representative to the Partnership, Kimmo Mäkinen, a senior advisor at the Department of Public Sector Digitalization, the most recent meeting took place on September 17. “This was the third meeting,” he tells Cointelegraph. “The main topic was to discuss about the most prominent cross-border blockchain use-cases that had been proposed by member states and by the commission.”

As for whether the Partnership will successfully decide on all the necessary parameters before the start of 2019, Mäkinen doesn’t offer confirmation. “We will have three monthly meetings by the end of this year during which we will have to agree not only on use-cases but also technical/functional requirements and governance model for European blockchain infrastructure,” he says, his use of “not only” implying that the Partnership has a more-than sizeable workload to get through before Christmas.

Still, even though three meetings and no particular end-product hardly counts as an impressive achievement, these meetings were positive for the Partnership. More importantly, they’ve revealed a strong commitment among its members towards developing blockchain technologies, as explained by Pēteris Zilgalvis:

“At these meetings we found that the Partners were extremely supportive of collective efforts to establish strong EU leadership in distributed ledger technology, drawing on the Digital Single Market framework, and that EBSI could play a very important role in achieving this objective.”

Indeed, it would appear that the European Blockchain Partnership is being used by the European Commission as a vehicle for the EU becoming a global leader on DLT.

“In the longer term, we would like EBSI to become a global reference when it comes to trusted blockchain infrastructures,” admits Zilgalvis, “a ‘gold standard’ infrastructure that is governed through a transparent multi-stakeholder organisation, meets the most advanced cybersecurity and energy efficiency standards, is scalable to accommodate different use cases, is highly-performant in terms of speed and throughput, ensures the continuity of services on the long term, integrates eIDAS (electronic IDentification, Authentication and trust Services) and supports full compliance with the EU requirements on data protection (General Data Protection Regulation) and network information security.”

So even if the Partnership hasn’t really achieved anything concrete yet, its significance lies in the fact that it represents a massive vote of confidence in blockchain technology. By committing to it, and by aiming to build “highly-performant” blockchain tech, the Partnership’s 27 member nations have effectively declared that they believe DLT is here to stay and that it has genuine applicability to a range of areas.

Separately, each member country has its own reasons for being interested in blockchain tech, further testifying to blockchain’s growing status as a promising solution to a range of problems. “Finland is interested and curious of new possibilities that are to be presented by blockchain technology,” acknowledges Kimmo Mäkinen, “in order to boost cross-border services for example in matters related to document authenticity, data exchange and identity management.”

Implementation mode in 2019?

Of course, while there’s little doubt that the Partnership’s signatories are completely serious about DLT, there still remains the unavoidable question of when, exactly, it will produce and begin introducing the platforms it was set up to build. Well, despite there not being anything absolutely definite on this front, Pēteris Zilgalvis states that we may begin seeing actual output as early as next year:

“These deliverables [functional and technical specifications, governance model] will be addressed to the political representatives who signed the Declaration, and if approved, the Partnership could move into implementation mode in 2019.”

Once again, this time frame is ambitious. But even if certain differences of opinion may need to be ironed out between members before implementation can begin, the target of 2019 shows just how confident the European Commission is that the Partnership’s member states are on the same page with regards to blockchain, which is further indicated by them signing its Declaration in the first place. If the Partnership does indeed follow through with its plans and implements blockchain-based cross-border infrastructure, this will only have positive ramifications and knock-on effects for wider blockchain adoption elsewhere. All of which means that the future of blockchain adoption in Europe looks increasingly bright.

 

from: https://cointelegraph.com/news/the-european-blockchain-partnership-finds-europe-getting-serious-about-distributed-ledger-technology

 

 

 

Pentagon Hacked Again: Cyber Breach of Travel Records

The Pentagon on Friday said there has been a cyber breach of Defense Department travel records that compromised the personal information and credit card data of U.S. military and civilian personnel.

According to a U.S. official familiar with the matter, the breach could have affected as many as 30,000 workers, but that number may grow as the investigation continues. The breach could have happened some months ago but was only recently discovered.

The official, who spoke on condition of anonymity because the breach is under investigation, said that no classified information was compromised.

According to a Pentagon statement, a department cyber team informed leaders about the breach on Oct. 4.

Lt. Col. Joseph Buccino, a Pentagon spokesman, said the department is still gathering information on the size and scope of the hack and who did it.

“It’s important to understand that this was a breach of a single commercial vendor that provided service to a very small percentage of the total population” of Defense Department personnel, said Lt. Col. Buccino.

The vendor was not identified and additional details about the breach were not available.

“The department is continuing to assess the risk of harm and will ensure notifications are made to affected personnel,” said the statement, adding that affected individuals will be informed in the coming days and fraud protection services will be provided to them.

Buccino said that due to security reasons, the department is not identifying the vendor. He said the vendor is still under contract, but the department “has taken steps to have the vendor cease performance under its contracts.”

Disclosure of the breach comes on the heels of a federal report released Tuesday that concluded that military weapons programs are vulnerable to cyberattacks and the Pentagon has been slow to protect the systems.

And it mirrors a number of other breaches that have hit federal government agencies in recent years, exposing health data, personal information, and social security numbers.

The U.S. Government Accountability Office in its Tuesday report said the Pentagon has worked to ensure its networks are secure, but only recently began to focus more on its weapons systems security. The audit, conducted between September 2017 and October 2018, found that there are “mounting challenges in protecting its weapons systems from increasingly sophisticated cyber threats.”

In 2015, a massive hack of the federal Office of Personnel Management, widely blamed on China’s government, compromised personal information of more than 21 million current, former and prospective federal employees, including those in the Pentagon. It also likely occurred months before it was discovered and made public, and it eventually led to the resignation of the OPM director.

Also that year, hackers broke into the email system used by the Joint Chiefs of Staff, affecting several thousand military and civilian workers.

The Defense Department has consistently said that its networks and systems are probed and attacked thousands of times a day.

 

from: https://www.securityweek.com/pentagon-reveals-cyber-breach-travel-records

 

 

 

 

 

Bitcoin’s Time Locks

Bitcoin, having no discernible faults, comes equipped with several different time locks. These tools allow you to specify time-based conditions under which transactions are valid. Using time locks you can make a transaction now that pays someone next week, add a mandatory waiting period for coin movements, set up complex smart contracts that flow across several transactions, or accidentally lock up your coins for centuries.

Most of these time locks were added to Bitcoin quite recently. They’re built into the structure of transactions, and have more than a few idiosyncrasies left over from buggy code written by our favorite anonymous cypherpunk, Mr. Nakamoto. The corresponding Bitcoin Improvement Proposals (BIPs) are wonderful and detailed, but assume a lot of background knowledge. This is an attempt to compile all the information I can find on the time locks, and explain it in depth.

Classifying Time Locks

Before we dive into the tools themselves, let’s figure out how to describe their operation. Time locks have three important attributes: location, targeting, and metric.

Location: Transaction vs. Script

Time is the longest distance between two places.
— Tennessee Williams

Time locks can be found in the transaction itself and/or in its Pay to Script Hash (P2SH) inputs’ associated scripts. Every transaction has multiple time lock fields (they’re present even when not used). Scripts, on the other hand, can have zero or many time locks. In terms of functionality, transaction-level and script-level locks are superficially similar, but perform very different roles. Transaction-level time locks cause a transaction to be invalid until a certain time, regardless of the validity of the signatures and scripts. Script-level time locks will cause script evaluation to fail unless the transaction is also locked. A failed script evaluation makes the transaction invalid. In a nutshell, transaction-level locks determine when a transaction may be confirmed, while script-level locks determine whether a given scriptsig is valid.

The major difference between them is what exactly they lock. A transaction-level lock constrains only a specific transaction. Think of transaction-level locks as future-dating a check: I can write you a check that becomes valid later, but the date applies only to that check, and I could spend the money other ways you don’t know about. Script-level locks set conditions on all transactions spending an output. In other words, transaction-level locks affect what you can do with a transaction after it’s constructed, but script-level locks determine what transactions can be made in the first place.

Transaction locks aren’t as useful as you might think. They don’t control coins, only spends. This is why all the fun stuff required OP_CLTV and OP_CSV. Using script-level locks and conditional logic (OP_IF) we can make complex scripts that can, for example, allow multisig spends at any time, or single-signature spends after a certain amount of time has passed. This provides a lot of versatility to P2SH transactions.

Script-level time locks require that a corresponding transaction-level time lock is also included. Script-level locks rely on transaction-level locks for enforcement. Rather than checking the time from within the script, script-level locks check the transaction’s lock. This is elegant and economical, if a bit un-intuitive. The script checks that the transaction is locked at least as long as the script path. It treats the transaction lock as a guarantee that time has passed.

Targeting: Absolute vs. Relative

Time is an illusion, lunchtime doubly so.
— Douglas Adams

Well, really, they’re both relative. You get to choose arbitrary-origin-point-relative or previous-output-relative. But that’s the kind of meaningless pedantry I love.

When we time lock coins, we set a target for their release. Absolute locks define this target in terms of a set time. They pick an exact moment when the lock expires. Relative time locks define it as an amount of time elapsed since the previous output confirmed. It’s the difference between “meet me at 15:00” and “meet me in 4 hours.”

Transactions that are locked to an absolute time are invalid until that time has passed. This means that I can make a transaction years in advance, sign it, share it, and even post it publicly with a guarantee that it won’t be confirmed before its lock expires. I might use an absolute timestamp to send money to my children or create a savings account that you can deposit to, but can’t withdraw from until next year.

Relative locks, on the other hand, mark a transaction invalid until a certain amount of time has passed since the transaction’s previous outputs were confirmed. This is a subtle and powerful feature. The real beauty of relative lock-times is setting locks relative to un-broadcast, un-confirmed transactions. Once a transaction is confirmed, we can always set an absolute lock-time in its future. But to do that, you have to wait for it to confirm, and learn its confirmation time. Or set up a long lock in advance, which becomes an expiration time for your entire smart contract. Relative locks can be set on un-confirmed transactions, meaning that you can create and sign an entire multi-step smart contract in advance, and be guaranteed that its transactions will confirm in the order you expect, no matter when you execute it.

Metric: Blocks vs. Seconds

Then’s the time to time the time flies –
Like time flies like an arrow.
— Edison B. Schroeder

In Bitcoin, time is a consensual hallucination and lunchtime never comes. Transactions can’t exactly look at a clock on the wall, so we need to decide what “time” is. Bitcoin has two ways of measuring “time”: block number, and block timestamp. These were implemented as modes of operation for each time lock, instead of full separate lock mechanisms. You can specify a number of blocks for the lock, or a number of seconds. Both of these have their own complications. In practice, both of these metrics are accurate enough for real-world uses. But it’s important to understand their idiosyncrasies.

We often say that blocks follow a Poisson distribution: they’re expected to come every 10 minutes. But this isn’t quite right. When hashpower is increasing, blocks come faster than expected. When hashpower goes offline or is pointed at other chains, blocks come slower. Difficulty adjusts every 2016 blocks (about every 2 weeks) to target 10 minutes, but blocks can slip a significant amount from where you’d expect them to be due to network conditions, or just random chance.

Timestamps are just as finicky. You see, in Bitcoin, time doesn’t always go forward. Due to consensus rules for block timestamps, time can sometimes reverse itself for a block or two. Or just stop for a minute. There are reasons for this, I promise. It pretty much always stays within a couple hours of “real” time. To make timestamp-based locks reliable in spite of this, they measure time using the ‘median time past’ (MTP) method described in BIP 113. Rather than using the current block’s timestamp, timestamp-based locks use the median timestamp of the previous 11 blocks. This smooths out time’s advance, and ensures that time never goes backwards.
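
As an aside, the MTP rule is easy to sketch in code. Here is a minimal Python illustration (not consensus code; the timestamps are invented) of how taking the median of the previous 11 block timestamps smooths out a block whose timestamp goes backwards:

# Minimal sketch of BIP 113 median time past (MTP). Illustrative only.
def median_time_past(last_11_timestamps):
    # MTP is the middle value of the previous 11 block timestamps.
    assert len(last_11_timestamps) == 11
    return sorted(last_11_timestamps)[5]

# The newest block's timestamp (1005) is earlier than an older one (1030),
# but the MTP still never moves backwards.
timestamps = [940, 950, 960, 970, 980, 990, 1000, 1020, 1010, 1030, 1005]
print(median_time_past(timestamps))  # 990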

 

 

The Locks

Now that we understand what we’re talking about, let’s talk about the tools themselves. There are four time lock options right now: nLocktime, nSequence, OP_CHECKLOCKTIMEVERIFY (OP_CLTV), and OP_CHECKSEQUENCEVERIFY (OP_CSV). Two of them are script-level, two are transaction-level.

nLocktime

nLocktime is the transaction-level, absolute time lock. It’s also the only time lock that was part of Satoshi’s Original Vision (SOV).

A transaction is a simple data structure that contains fields for version, inputs, outputs, and a few other things. nLocktime has its own special field, lock_time. It specifies a block number or timestamp. The transaction is not valid until that time has passed. Transactions made by Bitcoin Core have the lock_time field set to the current block by default to prevent fee sniping. Times are expressed as an unsigned 32 bit integer. If lock_time is 0, it’s ignored. If it is 500,000,000 or above, it’s treated as a unix timestamp. So nLocktime can lock transactions for roughly 9,500 years using block numbers, or until 2106ish using timestamps.

Curiously, the lock_time field is ignored entirely if all inputs have a sequence number of 0xFFFFFFFF (the max for a 32 bit integer). Opt-in Replace-By-Fee (RBF) signals similarly as described in BIP 125. Using sequence_no to signal is an artifact from Satoshi’s half-baked time lock implementation. And at this point we’d have to hard fork to change that. nLocktime and input sequence numbers were originally supposed to create a simple transaction update mechanism. The idea was that you could create a transaction with a lock-time, and then replace it by sending a new version with at least one higher sequence number.

Miners were supposed to drop transactions with lower sequence numbers from the mempool. If all inputs had the maximum sequence number, it meant there could be no more updates, and the transaction could clear regardless of the time lock. This was never fully implemented, and later abandoned. In a curious slip-up, Satoshi seems to have assumed good behavior, which is not a reasonable assumption in Bitcoin. It’s impossible to guarantee that miners will see updated transactions, or drop older versions if they do see newer ones. Miners will mine the most profitable version of a transaction, not the latest.
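
Before the article’s own examples, here is a minimal Python sketch of how a node decides whether an nLocktime lock applies and whether it is satisfied. The function names are illustrative (this is not Bitcoin Core’s API), and the exact off-by-one behaviour around equality is glossed over:

# Minimal sketch of nLocktime interpretation. Illustrative only.
LOCKTIME_THRESHOLD = 500_000_000  # below: block height; at or above: unix timestamp

def nlocktime_active(lock_time, input_sequence_numbers):
    # The lock is ignored if it is 0, or if every input's sequence_no is 0xFFFFFFFF.
    if lock_time == 0:
        return False
    return any(seq != 0xFFFFFFFF for seq in input_sequence_numbers)

def nlocktime_satisfied(lock_time, current_height, current_mtp):
    if lock_time < LOCKTIME_THRESHOLD:
        return current_height >= lock_time  # treated as a block number
    return current_mtp >= lock_time         # treated as a unix timestamp, compared against MTP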

nLocktime examples:

# Most of the transaction is omitted. Using decimal for human readability.
# Using hex for sequence numbers due to the presence of flags.
# Transaction is invalid until block 499999999 (this is a Bad Idea)
tx_1:
  lock_time: 499999999
# Transaction is invalid until the MTP is 1514764800 (1/1/2018 0:00:00 GMT)
tx_2:
  lock_time: 1514764800
# No lock time. Transaction is valid immediately.
tx_3:
  lock_time: 0
# nLocktime lock is not in effect, because all sequence numbers are set to 0xFFFFFFFF
tx_4:
  lock_time: 3928420
  input_1:
    sequence_no: 0xFFFFFFFF

nSequence

nSequence is the transaction-level relative time lock (technically, nSequence is actually input-level, more later). It repurposes the old sequence_no field of each input to invalidate transactions based on the time elapsed since the previous outputs’ confirmations. nSequence locks were introduced in BIP 68 and activated by soft fork in mid-2016. Satoshi gave us lemons, and we made nSequence time locks.

Sequence numbers have been around since the beginning. But because transaction replacement was never implemented (and wouldn’t have worked in the long run anyway), they became cruft. For years, the only thing they could do was disable nLocktime. Now, sequence numbers are used to enforce relative time locks at the transaction level as described in BIP 68. nSequence locks are set on each input, and measured against the output that each input is consuming. This means that several different time lock conditions can be specified on the same transaction. In order for a transaction to be valid, all conditions must be satisfied. If even a single sequence lock is not met, the entire transaction will be rejected.

Bitcoin developers are amazing at upcycling, but sometimes you end up with a few knots in the wood. Because nSequence time locks re-purpose the existing sequence_no field, it has a few idiosyncrasies. The sequence field is 32 bits, but we can’t use all of them, as it would interfere with nLocktime and RBF signaling. In addition, sequence_no is one of the few places where we have leeway to make future changes. To balance these demands, nSequence was built to use only 18 of the 32 bits. This conserves 14 bits for any future uses we can come up with.

Two bits are flags that tell the node how to interpret the sequence_no field. The most significant bit is the disable flag. If the disable flag is set, nSequence locks are disabled. If the disable flag is not set, the rest of the sequence_no field is interpreted as a relative lock-time. Bit 22 (the 23rd least significant bit) is the type flag. If the type flag is set, the lock is specified in seconds. If the type flag is not set, the lock is specified in blocks.

The least significant 16 bits of the sequence_no are used to encode the target time. Unlike nLocktime, nSequence uses only 16 bits to encode the lock-time. This means nSequence time locks are limited to 65535 units. This allows for locks up to about 455 days when using blocks, but would only allow about 18 hours in seconds. To mitigate this, nSequence does not measure in seconds. Instead it uses chunks of 512 seconds. If the type flag is set, and the lock-time is set to 16 units, the input will be locked until 16 * 512 seconds have elapsed.

Transactions made by Bitcoin Core, by default, have the sequence_no of each input set to 0xFFFFFFFE. This enables nLocktime to discourage fee sniping as described above, and disables Replace-By-Fee. Replace-By-Fee transactions typically have the sequence_no of each input set to 0xFFFFFFFD. It’s worth noting at this point that RBF is not a protocol change, only a change in default mining policy. However, because nSequence locks require that the sequence_no field be set lower than 0xFFFFFFFD to be meaningful, all nSequence locked transactions are opting into RBF.
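
The bit-twiddling above is easier to see in code. Here is a minimal Python sketch of the BIP 68 layout (helper names are illustrative, not a real library API):

# Minimal sketch of the BIP 68 sequence_no layout. Illustrative only.
DISABLE_FLAG = 1 << 31   # most significant bit: relative lock disabled
TYPE_FLAG    = 1 << 22   # bit 22: measure in 512-second chunks instead of blocks
VALUE_MASK   = 0xFFFF    # least significant 16 bits: lock duration

def encode_relative_lock(value, in_seconds=False):
    # 'value' is a number of blocks, or a number of 512-second chunks.
    assert 0 <= value <= VALUE_MASK
    return (TYPE_FLAG if in_seconds else 0) | value

def decode_sequence(seq):
    if seq & DISABLE_FLAG:
        return "relative lock disabled"
    units = seq & VALUE_MASK
    if seq & TYPE_FLAG:
        return "locked for %d * 512 = %d seconds" % (units, units * 512)
    return "locked for %d blocks" % units

print(hex(encode_relative_lock(8, in_seconds=True)))  # 0x400008, as in tx_1 below
print(decode_sequence(0x00400008))                    # locked for 8 * 512 = 4096 seconds
print(decode_sequence(0xFFFFFFFE))                    # relative lock disabled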

nSequence examples:

# Most of the transaction is omitted. Using decimal for human readability.
# Using hex for sequence numbers due to the presence of flags.
# This transaction is locked for 4096 seconds. Just over 1 hour.
tx_1:
  input_1:
    sequence_no: 0x00400008
    # Disable flag is not set, type flag is set. Input locked for 8 * 512 seconds.
# This transaction is not nSequence locked, but may be nLocktime locked, and allows RBF.
tx_2:
  input_1:
    sequence_no: 0xFEDC3210
    # Disable flag is set. nSequence locking disabled.
# This transaction is invalid until 16 blocks have elapsed since input_1's prevout confirms.
tx_3:
  input_1:
    sequence_no: 0x00000010  
    # Disable flag is not set, type flag not set. This input locked for 16 blocks.
  input_2:
    sequence_no: 0xFFFFFFFF  
    # Disable flag is set.
# This transaction is not time locked, but has opted to allow Replace-By-Fee.
tx_4:
  lock_time: 0
  input_1:
    sequence_no: 0xFFFFFFFE  
    # nSequence is disabled, nLocktime is enabled, RBF is not signaled.
  input_2:
    sequence_no: 0xFFFFFFFD  
    # nSequence is disabled, nLocktime is enabled, RBF is signaled.
# This transaction is not valid until block 506221
# It is also not valid until 87040 seconds have passed since the confirmation of input_1's previous output
tx_5:
  lock_time: 506221
  input_1:
    sequence_no: 0x004000AA

 

 

CLTV

OP_CHECKLOCKTIMEVERIFY (OP_CLTV) is the script-level absolute time lock. It was detailed in BIP 65 and softforked into mainnet in late 2015. OP_CLTV enabled hashed timelocked contracts and as such was a hard requirement for the first version of Lightning channels.

Its source is simple and elegant, comprising less than 20 lines of clean, superbly-commented code. Put simply: OP_CLTV compares the top item of the stack to the transaction’s nLocktime. It checks that the top item of the stack is a valid time in seconds or blocks, and that the transaction itself is locked for at least that long via an appropriate lock_time. In this way, OP_CLTV checks that the transaction can’t be confirmed before a certain time.

OP_CHECKLOCKTIMEVERIFY causes script evaluation to fail immediately in the following five situations:

  1. The stack is empty (i.e. there’s no target time specified for OP_CLTV to check).
  2. The top stack item is less than 0 (negative time locks don’t make sense).
  3. The nLocktime is measured in blocks, and the top stack item uses seconds, or vice versa (apples and oranges).
  4. The top stack item is greater than the transaction’s lock_time (not enough time has passed).
  5. The nSequence field of this input is set to 0xFFFFFFFF (timelock might be disabled).

OP_CLTV replaces OP_NOP2, which (as you might expect) did nothing. Designing OP_CLTV to replace OP_NOP2 as a softfork provided an interesting constraint: OP_CLTV must leave the stack exactly as it found it. Because of this OP_CLTV reads a stack item, but does not consume a stack item. It checks the time lock, but then leaves the target time on the stack. As such, it is almost always followed by OP_DROP, which drops the top stack item.

Comparing the lock time specified in the script to the lock time of the transaction is a wonderfully clever implementation because the time is checked only indirectly. It passes enforcement to the nLocktime consensus rules while still allowing scripts to specify multiple different time-locked conditions. It allows scriptsig validity to be checked at any time and cached. The downside is that if OP_CLTV is used in the script, lock_time must be specified in the spending transaction, and a sequence_no less than 0xFFFFFFFF must be present in the input. This can be counterintuitive for new developers, so keep this in mind.
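
To make the indirection concrete, here is a minimal Python sketch of the five failure conditions listed above (illustrative only, not the actual interpreter; the “same metric” check reuses the 500,000,000 threshold from nLocktime):

# Minimal sketch of the OP_CLTV checks. Illustrative only.
LOCKTIME_THRESHOLD = 500_000_000

def op_checklocktimeverify(stack, tx_lock_time, input_sequence_no):
    if not stack:
        return False                      # 1. empty stack
    target = stack[-1]                    # read the top item, but do not pop it
    if target < 0:
        return False                      # 2. negative lock
    if (target < LOCKTIME_THRESHOLD) != (tx_lock_time < LOCKTIME_THRESHOLD):
        return False                      # 3. blocks vs. seconds mismatch
    if target > tx_lock_time:
        return False                      # 4. the transaction isn't locked long enough
    if input_sequence_no == 0xFFFFFFFF:
        return False                      # 5. nLocktime may be disabled
    return True                           # stack left untouched, as the soft fork requires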

OP_CLTV examples:

# Most of the transaction is omitted. Using decimal for human readability.
# Using hex for sequence numbers due to the presence of flags.
# Anyone can spend, at or after block 506391
tx_1:
  lock_time: 506391
  input_1:
    sequence_no: 0xFFFFFFFE
    script:
      506391 OP_CHECKLOCKTIMEVERIFY OP_DROP
# This transaction is invalid:
# The lock_time is in blocks, and the CLTV is in seconds
# The sequence_no is 0xFFFFFFFF
tx_2:
  lock_time: 506391
  input_1:
    sequence_no: 0xFFFFFFFF
    script:
      1514764800 OP_CHECKLOCKTIMEVERIFY OP_DROP
# This transaction is invalid
# The top stack item is greater than the lock_time
tx_3:
  lock_time: 506391
  input_1:
    sequence_no: 0xFFFFFFFE
    script:
      600000 OP_CHECKLOCKTIMEVERIFY OP_DROP
# This transaction is valid at block 512462, but only if at least 32 * 512 seconds have passed since its previous output confirmed.
# A separate transaction could be constructed to spend the coins between 506391 and 512462
tx_4:
  lock_time: 512462
  input_1:
    sequence_no: 0x00400020
    script:
      506391 OP_CHECKLOCKTIMEVERIFY OP_DROP
# This transaction becomes valid at block 506321
# The script allows an alternate execution path using 2-of-2 multisig.
# A separate transaction can be created that will not be time locked.
tx_5:
  lock_time: 506321
  input_1:
    sequence_no: 0xFFFFFFFE
    scriptsig:
      OP_TRUE
    script:
      OP_IF
        506321 OP_CHECKLOCKTIMEVERIFY OP_DROP
      OP_ELSE
        OP_2 <pubkey_1> <pubkey_2> OP_2 OP_CHECKMULTISIG
      OP_ENDIF
# This is a variation of an HTLC.
# This transaction is valid at block 507381 assuming:
# 1. The secret for input_2's script matches the expected secret hash
# 2. Valid signatures and pubkeys are provided for input_2
# 3. input_2's nSequence time-lock is respected.
tx_6:
  lock_time: 507381
  input_1:
    sequence_no: 0xFFFFFFFE
    script:
      507381 OP_CHECKLOCKTIMEVERIFY OP_DROP
  input_2:
    sequence_no: 0x000000A0
    scriptsig:
      <signature> <pubkey> <secret>
    script:
      OP_HASH160 <secret hash> OP_EQUALVERIFY
      OP_DUP OP_HASH160 <pubkey hash> OP_EQUALVERIFY OP_CHECKSIG

OP_CSV

OP_CHECKSEQUENCEVERIFY (OP_CSV) is the script-level relative time lock. It was described in BIP 112 and softforked in along with nSequence and MTP measurement in mid-2016.

Functionally, OP_CSV is extremely similar to OP_CLTV. Rather than checking the time, it compares the top stack item to the input’s sequence_no field. OP_CSV parses stack items the same way nSequence interprets lock-times. It respects nSequence’s disable flag and type flag, and reads 16-bit lock duration specifications from the last 16 bits of the stack item. OP_CSV errors if:

  1. The stack is empty (there’s no lock time specified).
  2. The top stack item is less than 0 (negative time is silly).
  3. The top stack item’s disable flag is not set and at least one of the following is true:
  • The transaction version is less than 2 (transaction does not signal OP_CSV compatibility).
  • The input’s sequence_no disable flag is set (relative locktime is disabled).
  • The input’s sequence_no and top stack item’s type flags are not the same (not using the same metric).
  • The top stack item’s 16-bit duration is longer than the duration found in the input’s sequence_no field (not enough time has elapsed).

OP_CSV replaces OP_NOP3, and (like OP_CLTV) must leave the stack unmodified when it executes to maintain compatibility with older clients. It reads the top stack item, but does not consume it. So again it is often paired with OP_DROP. If the disable flag of the top stack item is set, OP_CSV behaves as OP_NOP3.
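
Again, a minimal Python sketch may help (illustrative only, not the actual interpreter; the flag constants mirror the nSequence layout described earlier):

# Minimal sketch of the OP_CSV checks. Illustrative only.
DISABLE_FLAG = 1 << 31
TYPE_FLAG    = 1 << 22
VALUE_MASK   = 0xFFFF

def op_checksequenceverify(stack, tx_version, input_sequence_no):
    if not stack:
        return False          # 1. empty stack
    target = stack[-1]        # read the top item, but do not pop it
    if target < 0:
        return False          # 2. negative lock
    if target & DISABLE_FLAG:
        return True           # disable flag set: behave like OP_NOP3
    if tx_version < 2:
        return False          # 3a. transaction does not signal OP_CSV compatibility
    if input_sequence_no & DISABLE_FLAG:
        return False          # 3b. the input's relative lock is disabled
    if (target & TYPE_FLAG) != (input_sequence_no & TYPE_FLAG):
        return False          # 3c. blocks vs. seconds mismatch
    if (target & VALUE_MASK) > (input_sequence_no & VALUE_MASK):
        return False          # 3d. not enough time has elapsed
    return True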

As described earlier when discussing relative lock-times, OP_CSV is an amazing tool for stringing together chains of transactions. If we used OP_CLTV instead, the entire transaction chain would have an absolute expiration date. OP_CSV allows us to set an expiration date relative to the first broadcast transaction. So a chain of transactions can be made and stored indefinitely while maintaining time lock guarantees.

Transactions, once confirmed, cannot be revoked without a chain re-org. But chaining transactions via OP_CSV relative lock-times allows us to create script evaluation paths that almost provide that feature by creating mutually-exclusive future paths. Using OP_IF, we can construct multiple transactions spending the same previous output (which may itself be from an un-confirmed transaction), and ensure that one has a relative time-lock. Then, if the locked version is broadcast during its timelock, the unlocked version will confirm first and spend the coins. This means that we can give certain transactions priority over others, and control the execution of complex smart contracts. The Lightning network makes extensive use of this.

OP_CSV examples

# Most of the transaction is omitted. Using decimal for human readability.
# Using hex for sequence numbers due to the presence of flags.
# Anyone can spend, 255 blocks after the previous output confirms.
tx_1:
  lock_time: 0
  input_1:
    sequence_no: 0x000000FF
    script:
      0x000000FF OP_CHECKSEQUENCEVERIFY OP_DROP
# Anyone can spend, so long as both of the following are true:
# a) 16,384 seconds have passed since input_1's previous output was confirmed
# b) 255 blocks have passed since input_2's previous output was confirmed
tx_2:
  lock_time: 0
  input_1:
    sequence_no: 0x00400020
    script:
      0x00400020 OP_CHECKSEQUENCEVERIFY OP_DROP
  input_2:
    sequence_no: 0x000000FF
    script:
      0x000000FF OP_CHECKSEQUENCEVERIFY OP_DROP
# Anyone can spend, so long as 256 blocks have passed since input_1's previous output.
# Note that a separate transaction can be created to spend these coins.
# The alternate path would specify a lock_time of at least 506321.
# The script allows either an absolute or relative time lock, whichever is shorter.
tx_3:
  lock_time: 0
  input_1:
    sequence_no: 0x00000100
    scriptsig:
      OP_FALSE
    script:
      OP_IF
        506321 OP_CHECKLOCKTIMEVERIFY
      OP_ELSE
        0x00000100 OP_CHECKSEQUENCEVERIFY
      OP_ENDIF
      OP_DROP
# This transaction is invalid until 1/1/2020,
# AND until 31457280 seconds after the previous output confirmed.
# It also specifies a single approved spender by their pubkey.
tx_4:
  lock_time: 1577836800
  input_1:
    sequence_no: 0x0040F000  # type flag is set
    scriptsig:
      <signature> <pubkey>
    script:
      1577836800 OP_CHECKLOCKTIMEVERIFY OP_DROP
      0x0040F000 OP_CHECKSEQUENCEVERIFY OP_DROP
      OP_DUP OP_HASH160 <pubkey hash> OP_EQUALVERIFY
      OP_CHECKSIGVERIFY
# This transaction is invalid 3 ways:
# 1) input_1's script fails because the stack item's 16-bit lock duration is greater than specified in the sequence_no.
# 2) input_2's script fails because the sequence_no's type flag is not set, while the stack item's type flag is set.
# 3) input_3's script fails because the stack is not empty at the end.
tx_5:
  lock_time: 0
  input_1:
    sequence_no: 0x0040F000
    script:
      0x0040FFFF OP_CHECKSEQUENCEVERIFY OP_DROP
  input_2:
    sequence_no: 0x0000FFFF
    script:
      0x0040FFFF OP_CHECKSEQUENCEVERIFY OP_DROP
  input_3:
    sequence_no: 0x00000001
    script:
      0x00000001 OP_CHECKSEQUENCEVERIFY

Review

Bitcoin’s time locks are powerful tools, but can be confusing. Here’s a quick list of important things to remember:

  • OPs go in scripts.
  • “Locktime” means absolute.
  • “Sequence” means relative.
  • All time locks can do blocks or seconds, but they have different ways of signalling.
  • Don’t accidentally lock things for centuries.
  • Script-level time locks need a transaction-level lock of the same type in the spending tx.

Further Reading

 

from: https://medium.com/summa-technology/bitcoins-time-locks-27e0c362d7a1

 

 

 

[UPDATED] Making Amends: Microsoft adds 60,000 patents to the Open Invention Network

With their long-term strategy of walking away from the old, cumbersome, many-times-patched Windows,
switching to open source and its vast resources and people makes perfect sense.
A decade after claiming that Linux and open-source software infringes on more than 200 of its patents,
Microsoft is now pledging its patents to the Open Invention Network in support of Linux.

Microsoft announced today that it’s joined open-source patent group Open Invention Network in an effort to help shield Linux and other open-source software from patent-related suits.

As part of the deal, the software giant is opening a library of 60,000 patents to OIN members. Access to the massive portfolio is unlimited and royalty free.

It is, as ZDNET notes, a shift away from the aggressively litigious corporation of years past. Among other suits, the company had previously gone after a number of different companies in the Android ecosystem. Microsoft acknowledges as much in its announcement, adding that the news should be taken as a sign that it’s turning over a new leaf.

“We know Microsoft’s decision to join OIN may be viewed as surprising to some,” EVP Erich Andersen writes in a blog post, “it is no secret that there has been friction in the past between Microsoft and the open source community over the issue of patents. For others who have followed our evolution, we hope this announcement will be viewed as the next logical step for a company that is listening to customers and developers and is firmly committed to Linux and other open source programs.”

The news also finds the company looking to blur the lines between Windows and Linux development, encouraging devs to create programs for both operating systems, along with .NET and Java.

Last week, Microsoft followed the lead of companies like Google, Facebook and Amazon by joining anti-patent trolling group the LOT Network.

 

from: https://techcrunch.com/2018/10/10/microsoft-adds-60000-patents-to-the-open-invention-network/

 

 

Microsoft Pledges to Protect Linux and Open Source With Its Patents

 

NEWS ANALYSIS: A decade after claiming that Linux and open-source software infringes on more than 200 of its patents, Microsoft is now pledging its patents to the Open Invention Network in support of Linux.

In a move that would have seemed unfathomable a decade ago, Microsoft announced on Oct. 10 that it is joining the Open Invention Network and contributing 60,000 patents.

With the patent pledge, Microsoft is making its patents available to OIN members in a bid to help protect Linux against patent claims. Microsoft is positioning its move as part of its multiyear plan to support Linux and embrace the open-source ecosystem.

“We believe the protection OIN offers the open source community helps increase global contributions to and adoption of open source technologies,” Erich Andersen, corporate vice president and chief IP counsel at Microsoft, wrote in a statement. “We are honored to stand with OIN as an active participant in its program to protect against patent aggression in core Linux and other important OSS technologies.”

The OIN was founded in 2005 as an effort to share patents in a royalty-free way to any company, institution or individual that agrees not to assert its patents against the Linux operating system or certain Linux-related applications.

Microsoft for its part was once one of the most voracious patent claimants against Linux and open-source technologies. In 2007, Microsoft alleged that open-source technologies infringed on at least 235 Microsoft patents. The same year, then Microsoft CEO Steve Ballmer said that Linux vendors, including Red Hat, had an obligation to “pay up” for infringing on Microsoft’s patents.

Things have changed a lot since 2007. Ballmer is no longer the CEO of Microsoft, and the official position of Microsoft on Linux is no longer one of rivalry. In 2016, Microsoft became a platinum member of the Linux Foundation, taking a large step away from its past.

“We were thrilled to welcome Microsoft as a platinum member of the Linux Foundation in 2016 and we are absolutely delighted to see their continuing evolution into a full-fledged supporter of the entire Linux ecosystem and open-source community,” Jim Zemlin, executive director of the Linux Foundation, wrote in a statement sent to eWEEK.

Yet even after joining the Linux Foundation and becoming an active contributor to multiple open-source efforts, the issue of the 235 patents has remained. Despite repeatedly saying that it “loves Linux,” Microsoft had never formally renounced its patent claims. The patent move to the OIN appears to be a step in that direction.

When asked by eWEEK if the OIN patent agreement involved the 235 patents that Microsoft alleges that open-source software infringes on, Microsoft provided a nuanced statement.

“We’re licensing all patents we own that read on the ‘Linux system’ for free to other OIN licensees,” a Microsoft spokesperson wrote in an email to eWEEK.

In a Twitter message to eWEEK, Nat Friedman, vice president of developer services at Microsoft, provided some additional clarification.

“It includes all patents, but OIN coverage is limited to the Linux System. So only patents reading on the Linux System are relevant to OIN,” Friedman wrote. “This isn’t a fixed list and we didn’t exclude any patents.”

Even with Microsoft’s patent move to the OIN, there is still likely going to be some skepticism in the broader open-source community about the company’s intentions. Erasing nearly two decades of animosity is not an easy task.

That said, a decade after telling Linux vendors to “pay up” over patents, it sure does look like Microsoft has now come full circle, as an active contributor and supporter of open-source technologies.

 

from: http://www.eweek.com/enterprise-apps/microsoft-pledges-to-protect-linux-and-open-source-with-its-patents

 

 

 

 

Over Nine Million Cameras And DVRs Open To APTs, Botnet Herders, And Voyeurs

Conceptual diagram of a PDoS attack:
1) Attack sponsor hires botnet herder.
2) Botnet herder uses server to manage recruitment.
3) Malware scans for vulnerable IoT devices and begins cascading infection.
4) Botnet herder uses devices (e.g., HVAC controllers) to deplete bandwidth of a cyber-physical service (e.g., electrical power). 

APT: Advanced Persistent Threats (advanced hacking, such as Russia’s APT28 group)


 

Re-branded IP cameras and DVRs sold by over 100 companies can be easily hacked, researchers say.

Millions of security cameras, DVRs, and NVRs contain vulnerabilities that can allow a remote attacker to take over devices with little effort, security researchers have revealed today.

All vulnerable devices have been manufactured by Hangzhou Xiongmai Technology Co., Ltd. (Xiongmai hereinafter), a Chinese company based in the city of Hangzhou.

But end users won’t be able to tell that they’re using a hackable device, because the company doesn’t sell any products under its own name; instead, it ships all equipment as white label products to which other companies add their own logos.

Security researchers from EU-based SEC Consult say they’ve identified over 100 companies that buy and re-brand Xiongmai devices as their own.

All of these devices are vulnerable to easy hacks, researchers say. The source of all the vulnerabilities is a feature found in every device, named the “XMEye P2P Cloud.”

The XMEye P2P Cloud works by creating a tunnel between a customer’s device and an XMEye cloud account. Device owners can access this account via their browser or via a mobile app to view device video feeds in real time.

SEC Consult researchers say that these XMEye cloud accounts have not been sufficiently protected. For starters, an attacker can guess account IDs because they’ve been based on devices’ sequential physical addresses (MACs).

Second, all new XMEye accounts use a default admin username of “admin” with no password.

Third, users aren’t prompted to change this default password during the account setup process.

Fourth, even if the user has changed the XMEye admin account password, there is also a second hidden account with the username and password combo of default/tluafed.

Fifth, access to this account allows an attacker to trigger a firmware update. Researchers say Xiongmai devices firmware updates are not signed, and an attacker can easily impersonate the XMEye cloud and deliver a malicious firmware version that contains malware.

Researchers argue the vulnerabilities they found can be easily used by voyeurs to take over camera feeds and watch victims in their homes. In some cases, some cameras have a two-way audio intercom, so it’s even possible that an attacker may be able to interact with victims as well.

 

Furthermore, all these devices can be hacked by cyber-espionage groups and be used as entry points inside the networks of targeted organizations, or to relay traffic as part of a technique known as UPnProxy. Cyber-espionage groups, also known as advanced persistent threats (APTs), have been increasingly leveraging routers for their attacks, with the most recent being the VPNFilter botnet, set up by Russia’s APT28 group.

Last but not least, all these Xiongmai devices are also the perfect cannon fodder for IoT botnet herders, who can now mass-scan the XMEye P2P Cloud for accounts with default creds and hijack devices with malicious firmware.

Xiongmai devices have been abused in the past by IoT botnets, and especially by botnets built with the Mirai malware. For example, half of the devices that were part of the massive Mirai-based DDoS attack on managed DNS provider Dyn, which took out around a quarter of the Internet, were Xiongmai devices.

At the time, Xiongmai came under heavy criticism and promised to recall all vulnerable devices.

But SEC Consult claim in a report published today that the Chinese company hasn’t invested in security since patching the vulnerabilities exploited by the Mirai malware in late 2016.

Ever since then, at least four vulnerabilities [1, 2, 3, 4], some at least one year old, were left unpatched, researchers said.

SEC Consult didn’t have much luck when it reported the flaws it found. The company says that despite engaging both the US and China CERT teams to alert Xiongmai, the vendor did not patch the flaws reported back in March this year.

“The conversation with them over the past months has shown that security is just not a priority to them at all,” SEC Consult researchers said.

Based on scans performed by researchers, there are at least nine million Xiongmai-based devices sitting around on the Internet.

Because none of these devices feature the Xiongmai name or logo, device owners who’d like to take this equipment offline will have a hard time determining if they use one of these vulnerable devices.

SEC Consult says the easiest way to identify a Xiongmai-manufactured (and later rebranded) device is by the equipment’s admin panel login page, which looks like the image below.

 

(image: Xiongmai admin panel login page)

 

In cases where the vendor that bought the Xiongmai devices used a different design for the login page, users can access the device’s error page at http://[device_IP]/err.htm for a second clue. If the error page looks like the image below, then it’s a Xiongmai device.

 

(image: Xiongmai error page)

 

Furthermore, users can find a last clue in their product’s description inside the device’s printed manual, or on Amazon, Home Depot, or Walmart listings. If the product description mentions anything about “XMEye,” then despite the logo on the front of the device, the equipment was made by Xiongmai.
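
If you would rather script the check than click around, here is a minimal Python sketch that simply fetches a device’s error page for manual comparison with the screenshot above (it assumes the third-party requests library; the IP address is a placeholder, and since the article doesn’t publish an exact fingerprint string, the output still has to be eyeballed):

# Minimal sketch: fetch a device's /err.htm page for manual inspection. Illustrative only.
import requests

def fetch_err_page(device_ip, timeout=5):
    url = "http://%s/err.htm" % device_ip
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        return "%s: unreachable (%s)" % (url, exc)
    # Show the status and the first few hundred characters so you can compare
    # the page against the Xiongmai error page shown in the article.
    return "%s: HTTP %d\n%s" % (url, resp.status_code, resp.text[:400])

print(fetch_err_page("192.168.1.108"))  # placeholder LAN address; replace with your device's IP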

SEC Consult says it was able to track down more than a hundred other vendors that bought Xiongmai white-label devices and put their logo on top. The list includes names such as:

9Trading, Abowone, AHWVSE, ANRAN, ASECAM, Autoeye, AZISHN, A-ZONE, BESDER/BESDERSEC, BESSKY, Bestmo, BFMore, BOAVISION, BULWARK, CANAVIS, CWH, DAGRO, datocctv, DEFEWAY, digoo, DiySecurityCameraWorld, DONPHIA, ENKLOV, ESAMACT, ESCAM, EVTEVISION, Fayele, FLOUREON , Funi, GADINAN, GARUNK, HAMROL, HAMROLTE, Highfly, Hiseeu, HISVISION, HMQC, IHOMEGUARD, ISSEUSEE, iTooner, JENNOV, Jooan, Jshida, JUESENWDM, JUFENG, JZTEK, KERUI, KKMOON, KONLEN, Kopda, Lenyes, LESHP, LEVCOECAM, LINGSEE, LOOSAFE, MIEBUL, MISECU, Nextrend, OEM, OLOEY, OUERTECH, QNTSQ, SACAM, SANNCE, SANSCO, SecTec, Shell film, Sifvision / sifsecurityvision, smar, SMTSEC, SSICON, SUNBA, Sunivision, Susikum, TECBOX, Techage, Techege, TianAnXun, TMEZON, TVPSii, Unique Vision, unitoptek, USAFEQLO, VOLDRELI, Westmile, Westshine, Wistino, Witrue, WNK Security Technology, WOFEA, WOSHIJIA, WUSONLUSAN, XIAO MA, XinAnX, xloongx, YiiSPO, YUCHENG, YUNSYE, zclever, zilnk, ZJUXIN, zmodo, and ZRHUNTER.

 

from: https://www.zdnet.com/article/over-nine-million-cameras-and-dvrs-open-to-apts-botnet-herders-and-voyeurs/

see also: the ZMap project, which can scan the entire (!) Internet: https://zmap.io/

 

 

Research: $20 Billion Raised Through ICOs Since 2017

Initial Coin Offerings (ICOs) have raised $20 billion since the start of 2017, which is $18 billion more than the previous year, according to a recent study by financial research firm Autonomous Research. The study dubbed “Crypto Utopia” explores the cryptocurrency industry over the past year, focusing on ICOs and the regulation to which they are exposed.

Per the study, $12 billion has been raised through ICOs in the course of 2018, while last year they raised $7 billion. The ICOs of blockchain protocol EOS and messaging app Telegram are responsible for almost half of all ICO funds in 2018 at $4.2 billion and $1.7 billion, respectively.

Though over 300 crypto funds have been launched to invest in crypto assets, a vast majority of funds are concentrated within a small minority of organizations, according to Autonomous.

The research notes that ICOs are often exposed to fraud and scams, which form 20 percent of project white papers, while phishing and hacking are responsible for stealing 15 percent of all crypto assets by market capitalization. More than 50 percent of ICOs have failed to raise funds and subsequently have closed.

2017 saw over $7 billion of investment flow into ICOs, which is fourfold greater than equity investment in crypto companies. Many ICOs were purportedly launched to take advantage of the “goldrush,” subsequently resulting in quality and regulatory concerns regarding tokens.

Price performance for the top 200 liquid coins during the last 1.5 years has reportedly demonstrated an unprecedented surge, from 10 to 1 million percent. The authors of the study suggest that such a performance shows exponential software-like growth for digital currencies.

The study states that  venture and trading funds are “the most numerous and hold the most assets under management.”

Another study by Autonomous Research published last month stated that funding in ICOs has seen its hardest slump in 16 months, stating that in August startups raised $326 million, which is the smallest amount since May 2017.

In August, ICORating published a study showing that the ICO market more than doubled in a year. ICOs in Q1–2 2018 had already raised over $11 billion in investments, a figure which it purports is ten times larger than the sum of investments from ICOs in Q1–2 2017.

from: https://cointelegraph.com/news/research-20-billion-raised-through-icos-since-2017

MOBI Blockchain Grand Mobility Challenge: From October To Showcase Demo In February 2019

The MOBI Grand Challenge intends to develop “the first viable” blockchain-powered network of vehicles and system to coordinate machines, provide data sharing, as well as to improve the level of mobility in urban conditions.

 

The Mobility Open Blockchain Initiative (MOBI), and the Trusted IoT Alliance (TIoTA) have launched a tournament for blockchain applications in vehicles, according to an official press release published Oct. 10.

The new tournament entitled MOBI Grand Challenge reportedly intends to develop “the first viable” blockchain-powered network of vehicles and system to coordinate machines, provide data sharing, as well as to improve the level of mobility in urban conditions.

The three-year blockchain challenge, which plans to award winners with over $1 million worth of tokens, will cover a number of events, and invites entrants to participate online globally.

The MOBI Grand Challenge will begin Oct. 12 with the first four-month challenge to showcase “potential uses of blockchain in coordinating vehicle movement and improving transportation in urban environments.”

Selected technologies from the first challenge will be demonstrated at an event hosted by MOBI community member BMW Group in Munich, Germany in February 2019.

The outcomes found in the first series of the MOBI Grand Challenge will be used as a basis to create the next challenges of the three-year tournament.

According to the press release, the winners of the first challenge will be granted $350,000 worth of awards in a number of categories, including $250,000 worth of tokens by Beyond Protocol, and $100,000 worth of tokens by Ocean Protocol.

Ocean Protocol is a blockchain-based data exchange protocol that has committed a prize of $1 million in tokens to the MOBI Grand Challenge. Beyond Protocol is a Silicon Valley-based firm that is applying distributed ledger technology (DLT) to secure Internet of Things (IoT) devices. The firm has committed $250,000 worth of tokens to be used on its protocol network.

Zaki Manian, Executive Director of the Trusted IoT Alliance and a member of MOBI’s Board of Advisors, stated that mobility is a “breakout” IoT industry direction for blockchain. According to Manian, just a “small percentage of companies have completed end-to-end proof of concepts in this area,” so the new tournament intends to “fill this gap.”

In May 2018, four leading global automakers – BMW, GM, Ford, and Renault – launched a joint blockchain platform aiming to “change transportation.” The joint effort aims to address mobility issues, making transportation “safer, greener, and more affordable” by using blockchain technology.

In March, Cointelegraph reported that major American car manufacturer Ford patented a system for vehicle-to-vehicle communication methods via exchange of crypto tokens in order to facilitate traffic flow.

 

 

SETI neural networks spot dozens of new mysterious signals emanating from distant galaxy

The Borg are coming – they are only 3 billion light-years away :-)

 

The perennial optimists at the Search for Extraterrestrial Intelligence, or SETI, have joined the rest of the world in deploying AI to help manage huge data sets — and their efforts almost instantly bore fruit. Seventy-two new “fast radio bursts” from a mysteriously noisy galaxy 3 billion light-years away were discovered in previously analyzed data by using a custom machine learning model.

To be clear, this isn’t Morse code or encrypted instructions to build a teleporter, à la Contact, or at least not that we know of. But these fast radio bursts, or FRBs, are poorly understood and may very well represent, at the very least, some hitherto unobserved cosmic phenomenon. FRB 121102 is the only stellar object known to give off the signals regularly, and so is the target of continued observation.

The data comes from the Green Bank Telescope in West Virginia (above), which was pointed toward this source of fast and bright (hence the name) bursts for five hours in August of 2017. Believe it or not, that five-hour session yielded 400 terabytes of transmission data.

Initial “standard” algorithms identified 21 FRBs, all happening in one hour’s worth of the observations. But Gerry Zhang, a graduate student at UC Berkeley and part of the Breakthrough Listen project, created a convolutional neural network system that would theoretically scour the data set more effectively. Sure enough, the machine learning model picked out 72 more FRBs in the same period.

 

A Berkeley GIF visualizing the data of a series of bursts.

 

That’s quite an improvement, though it’s worth noting that without manual and traditional methods to find an initial set of interesting data, we would have little with which to train such neural networks. They’re complementary tools; one is not necessarily succeeding the other.

The paper on the discoveries, co-authored by Cal postdoc Vishal Gajjar, is due to be published in the Astrophysical Journal. Breakthrough Listen is one of the initiatives funded by billionaires Yuri and Julia Milner, of mail.ru and DST fame. The organization posted its own press release for the work.

The new data suggests that the signals are not being received in any kind of pattern we can determine, at least no pattern longer than 10 milliseconds. That may sound discouraging, but it’s just as important to rule things out as it is to find something new.

“Gerry’s work is exciting not just because it helps us understand the dynamic behavior of FRBs in more detail, but also because of the promise it shows for using machine learning to detect signals missed by classical algorithms,” explained Berkeley’s Andrew Siemion, who leads the SETI research center there and is principal investigator for Breakthrough Listen.

And if we’re being imaginative, there’s no reason some hyper-advanced civilization couldn’t cram a bunch of interesting info into such short bursts, or use a pattern we haven’t yet grokked. We don’t know what we don’t know, after all.

Whatever the case, SETI and Breakthrough will continue to keep their antennas fastened on FRB 121102. Even if they don’t turn out to be alien SOS signals, it’s good solid science. You can keep up with the Berkeley SETI center’s work right here.

 

from: https://techcrunch.com/2018/09/10/seti-neural-networks-spot-dozens-of-new-mysterious-signals-emanating-from-distant-galaxy/

 

 

 

 

Free D-Wave Quantum Computer Access: D-Wave Launches Leap, the First Real-Time Quantum Application Environment

Application developers and researchers get immediate, free access to a D-Wave 2000Q™ quantum computer,
comprehensive software tools, demos, live code, documentation, and community forums.

 

BURNABY, BC – (October 4, 2018) — D-Wave Systems Inc., the leader in quantum computing systems and software, today announced the immediate availability of free, real-time access to the D‑Wave Leap™ Quantum Application Environment (QAE). Leap is the first cloud-based QAE providing real-time access to a live quantum computer. In addition to access, Leap provides open-source development tools, interactive demos and coding examples, educational resources, and knowledge base articles. Designed for developers, researchers, and forward-thinking enterprises, Leap enables collaboration through its online community, helping Leap users write and run quantum applications to accelerate the development of real-world applications.

Leap QAE provides:

  • Free access: free, real-time access to a D-Wave 2000Q quantum computer to submit and run applications, receiving solutions in seconds
  • Familiar software: the open-source Ocean software development kit (SDK), available on GitHub and in Leap, has built-in templates for algorithms, as well as the ability to develop new code with the familiar programming language Python
  • Hands-on coding: interactive examples in the form of Jupyter notebooks with live code, equations, visualizations, and narrative text to jumpstart quantum application development
  • Learning resources: comprehensive live demos and educational resources to help developers get up to speed quickly on how to write applications for a quantum computer
  • Community support: community and technical forums to enable easy developer collaboration

Leap builds on D-Wave’s continuing work to drive real-world quantum application development. To‑date, D‑Wave customers have developed 100 early applications for problems spanning airline scheduling, election modeling, quantum chemistry simulation, automotive design, preventative healthcare, logistics, and more. Many have also developed software tools that make it easier to develop new applications. These existing applications and tools, along with access to a growing community, give developers a wealth of examples to learn from and build upon.
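
For a sense of what working with the Ocean SDK looks like in practice, here is a minimal sketch that submits a toy problem to a D-Wave sampler (it assumes the dwave-ocean-sdk package is installed and a Leap API token has been configured; the QUBO itself is illustrative, not one of the applications mentioned above):

# Minimal sketch: submit a toy QUBO through the Ocean SDK. Illustrative only.
# Assumes `pip install dwave-ocean-sdk` and a configured Leap API token.
from dwave.system import DWaveSampler, EmbeddingComposite

# A tiny QUBO: reward x0 and x1 individually, penalize them being 1 together.
Q = {("x0", "x0"): -1, ("x1", "x1"): -1, ("x0", "x1"): 2}

sampler = EmbeddingComposite(DWaveSampler())       # embeds the problem onto the QPU
sampleset = sampler.sample_qubo(Q, num_reads=100)  # run 100 anneals

print(sampleset.first.sample, sampleset.first.energy)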

“Our job is to sift through the sands of data to find the gold—information that will help our manufacturing customers increase equipment efficiency and reduce defects. With D‑Wave Leap, we are showing we can solve computationally difficult problems today, while also learning and preparing for new approaches to AI and machine learning that quantum computing will allow,” said Abhi Rampal, CEO of Solid State AI. “We started with quantum computing on D-Wave because we knew we wanted to be where the market was going, and we continue because we want to be a leader in finding commercial applications for the technology. With Leap, D‑Wave is making systems, software, and support available to help developers and innovators commercialize quantum applications.“

“We are developing innovative new materials to solve large-scale industrial problems using our proprietary Materials Discovery Platform. Part of our platform relies on first-principles materials simulations, requiring exceptional amounts of computational processing power. I firmly believe that advancements in quantum computing will accelerate our business growth by accelerating our platform. By providing access to a live quantum computer, D‑Wave Leap provides a robust environment for developers to learn, code, and teach, furthering the quantum ecosystem,” said Michael Helander, CEO of OTI Lumionics. “Today, we are able to use the D‑Wave 2000Q as a powerful optimizer to help calculate the electronic structure of molecules of industrially relevant size, a first for a quantum computer. As the community grows, shares, and innovates, the possibilities for materials discovery are endless. I expect D‑Wave to continue to innovate with us, enabling the discovery of countless new materials using quantum computing.”

“Entrepreneurs are recognizing that quantum computing will help them unlock new technologies, solutions, and business. At the Creative Destruction Lab (CDL), we have more than 20 companies as part of our Quantum Machine Learning Incubator Stream, with growing interest from prospective ventures,” said Khalid Kurji, Senior Venture Manager at the CDL. “D‑Wave’s Quantum Application Environment is central to developers helping developers, and will play an important role not just in the growth of ideas we have today, but in the fostering of innovations for tomorrow.”

“Every technology ecosystem begins by giving smart developers access, tools, and training. Leap eliminates the barrier to entry for quantum application development and deployment by providing live developer access and extensive tools and resources,” said Alan Baratz, D‑Wave EVP R&D and chief product officer. “Leap can enable hundreds of thousands of developers to write and run quantum applications, without having to learn the complex physics that underpins quantum computers. Any one of these developers could write the first killer quantum application, solving complex global problems with quantum computing.”

“The next frontier of quantum computing is quantum application development. While we continue to advance our industry-leading quantum technology, our goal with Leap is to ignite a new generation of developers who will explore, experiment, and ultimately build our quantum application future,” said Vern Brownell, D‑Wave CEO. “Since day one, D‑Wave has been focused on fueling real-world quantum application development. We believe that the Leap Quantum Application Environment is one of the most important steps toward realizing our vision of practical quantum computing to-date.”

Leap offers both free and paid plans designed for individual developers, commercial enterprises, and for government, research, and education sectors. To find out more and get started using Leap, visit the D‑Wave website at www.dwavesys.com.

Leap Developer Feedback:

“Leap is the only Quantum Application Environment that gives developers access to a real quantum computer. Today, you can’t get that from any other provider of quantum hardware at the scale needed to solve real problems,” said Thomas Phillips, CTO of Ridgeback Network Defense. “It’s incredibly exciting to be able to have access to something that before now most developers couldn’t access, and see the quantum computer in action. Because the programming is intuitive, the D‑Wave approach allows me to map familiar algorithms for very hard problems directly onto the system, which is nearly impossible to do on other quantum systems. And most importantly, I can now tackle exceptionally difficult cybersecurity problems I’ve only imagined solving before now.”

“As a long-time software developer, I leapt at the chance to be part of the beta program and get access to a real quantum computer for the first time,” said Scott Davis, independent software consultant. “The online demos, Jupyter notebooks, and documentation gave me a jump-start. Soon I was writing Python programs using the Ocean software development kit and running experiments on D-Wave’s quantum computer with more than 2000 qubits. In just four weeks I was able to implement a basic proof of concept I had been thinking about for 17 years.”

“QC Ware works with enterprises to build quantum software applications. QC Ware’s customers in aerospace, automotive, and financial services typically gravitate towards D-Wave because of the large problem sizes the D‑Wave 2000Q can support,” said Juan Adame, quantum software engineer, QC Ware. “With Leap, the user experience and new Ocean software tools will help early developers. And for developers who have quantum experience, the Leap Quantum Application Environment expands the lower level embedding functionality for very finely grained control of how their problem gets mapped on specific physical qubits.”

 

from: https://www.dwavesys.com/press-releases/d-wave-launches-leap-first-real-time-quantum-application-environment

 

 

 

AI Camera Can Spot Guns And Alert Law Enforcement: 99% Accurate

The company says it’s accurate 99 percent of the time.

 

Athena Security has developed a camera system that uses artificial intelligence and cloud technology to spot guns and alert authorities. The company says that because the system can recognize weapons and notify police quickly, casualties may be prevented in places where the system is implemented, such as schools or businesses. It has already been installed in Archbishop Wood High School in Warminster, Pennsylvania.

Once the system spots a gun in the vicinity, it uses the cloud to send that information to those who need it, whether that be a business owner or law enforcement. It can also stream footage of the event in question through an app, giving police a real-time look at what’s going on and where. Users also have the option of connecting the camera to other third-party security systems, allowing it to lock doors or stop elevators.
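
The flow described above (on-camera detection, a cloud alert, then optional third-party integrations) can be sketched roughly as follows. This is an illustrative outline only, not Athena Security's software: the detector is a stub standing in for a trained model, and ALERT_URL is a hypothetical endpoint.

    # Illustrative sketch of a detect-then-alert loop; not Athena Security's actual code.
    import time
    import requests

    ALERT_URL = "https://example.com/security/alerts"  # hypothetical webhook endpoint

    def gun_confidence(frame) -> float:
        """Stub standing in for a trained object-detection model; returns a score in 0..1."""
        return 0.0  # replace with a real model's inference call

    def monitor(camera_id, frames):
        for frame in frames:
            score = gun_confidence(frame)
            # A high threshold keeps false alarms (and needless police callouts) rare,
            # at the cost of possibly missing borderline detections.
            if score >= 0.99:
                requests.post(ALERT_URL, json={
                    "camera": camera_id,
                    "confidence": score,
                    "timestamp": time.time(),
                })  # downstream systems could also lock doors or stop elevators here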

False positives can be a problem for computer vision systems, and that would be particularly troublesome for a security camera that has the ability to alert police directly. But the company claims its system’s gun detection is 99 percent accurate. “We’ve basically perfected that,” co-founder Lisa Falzone told Fortune, “and so we’re already starting to work on fights, knives and other crimes. We expect fights to be done in the next couple months, at least the first version of it.”

 

 

Back in 2015, a company called NanoWatt Design tried to crowdfund its GunDetect camera, which used computer vision to detect firearms and send text alerts to its users. It now says its security systems can be customized to detect specific threats, including guns.

Athena offers multiple systems at a range of prices, but the one that includes gun detection, lock and elevator integration and real-time access costs $100 per camera per month.

 

from: https://www.engadget.com/2018/09/27/ai-camera-detect-guns-alert-law-enforcement/

 

 

Air France-KLM Wants Blockchain To Cut Out Middlemen

It is just an Ethereum derivative rather than purpose-built blockchain technology –
but it does indicate the broader shift in business from middlemen to a different consensus model.

 

On Wednesday, October 3, one of the world’s largest airlines, Air France-KLM, announced its partnership with Winding Tree, a “blockchain-powered decentralized travel ecosystem.” Through this agreement, the airline aims to provide customers with “more advantageous travel offer[s],” such as a wide range of flight and hotel options, as well as travel solutions to better suit customers’ needs.

Air France-KLM asserts that travel suppliers would profit from blockchain technology because fewer intermediaries, such as travel agencies and tourism package distributors, would be required.

Sonia Barrière, executive vice president of strategy and innovation at Air France-KLM, expressed her enthusiasm for the partnership:

“Air France-KLM is constantly creating the future of travel and devising solutions to make the travel experience easier and more personalized. With blockchain technology, we aim to revolutionize exchanges within the travel industry for our customers, companies and start-ups.”

Although the company has not identified specific projects in the pipeline, Barrière said that it is one of the first airlines to work with Winding Tree on blockchain-based travel solutions. Air France-KLM will also test Winding Tree’s technological developments and provide the organization with feedback.

Speaking more broadly, the travel industry is no stranger to blockchain technology. In July, Singapore Airlines launched a blockchain-based digital wallet as part of the company’s loyalty program. With this wallet, customers can accrue air miles to use at partner merchants across Singapore.

In August, Russia’s Siberian Airlines announced it had developed a blockchain-based system in partnership with Gazpromneft-Aero, an arm of the energy company Gazprom. This system reportedly improves the speed and efficiency of the aviation refueling process.

 

from: https://www.ethnews.com/major-european-airline-wants-blockchain-to-cut-out-middlemen

 

 

 

US DoJ Charges 7 Russian Intelligence Officers With Crypto-Funded Hacking Attacks

Using the same IP for Bitcoin mining and transactions AND as an identifiable source of hacks DOES frequently allow investigators to point the finger at the correct individuals (and their employers, obviously).

The U.S. Department of Justice (DoJ) has charged seven officers from Russia’s Main Intelligence Directorate (GRU) with cryptocurrency-funded global hacking and related disinformation operations. The indictment was filed by the grand jury at the Western District of Pennsylvania October 3.

The defendants, all of whom are alleged to work for the GRU — a military intelligence agency of the General Staff of the Armed Forces of the Russian Federation — have been charged on multiple counts for alleged “computer hacking, wire fraud, identity theft, and money laundering,” according to a DoJ press release published October 4.

The group is said to belong to a hack team known as “Fancy Bear,” and the indictment contains charges dating back as early as 2014.

According to the indictment, in order to “facilitate the purchase of infrastructure used in their hacking activity […] [the defendants] conspired to launder money through a web of transactions structured to capitalize on the perceived anonymity of cryptocurrencies such as bitcoin.”

The document alleges that the use of Bitcoin (BTC) “allow[ed] the conspirators to avoid direct relationships with traditional financial institutions,” enabling them to further dissimulate their identities and sources of funds.

The defendants are further alleged to have created “hundreds of different email accounts” in order to “avoid creating a centralized paper trail of all their purchases.” Several of these accounts are said to have been dedicated to tracking Bitcoin transaction information and facilitating Bitcoin payments to vendors.

The indictment also charged the defendants with funding their activities through Bitcoin mining:

“The pool of bitcoin generated from the GRU’s mining activity was used, for example, to pay a United States-based company to register the [phishing] domain wada-arna.org through a payment processing company located in the United States. The conspirators used the same funding structure—and in some cases, the very same pool of funds—to purchase key accounts, servers, and domains used in their anti-doping related hacking activity.”

This latter reference to anti-doping related hacking activity refers to the DoJ’s charge that Fancy Bear conspired to steal data from 250 international athletes, as well as anti-doping agencies across the world. These attacks are alleged to have been in retaliation for the banning of Russian athletes from the 2018 Olympics, following suspicions of a state-sponsored doping program.

Although these specific charges are not part of the Robert Mueller investigation into alleged Russian interference in the 2016 U.S. elections, it is notable that three of the seven officers named by the DoJ in this indictment have also been named in the Mueller investigation.

As previously reported, this July the DoJ charged twelve individuals from two units of the GRU with using crypto – allegedly either mined or obtained by “other means” – to fuel efforts to hack into computer networks associated with the Democratic Party, Hillary Clinton’s presidential campaign, and U.S. elections-related state boards and technology firms.

 

from: https://cointelegraph.com/news/us-doj-charges-7-russian-intelligence-officers-with-crypto-funded-hacking-attacks

 

 

 

A Multi-Million Dollar Bet Ethereum’s Proof-of-Stake Isn’t Coming Soon

“I don’t know if [ethereum] will or will not switch to proof-of-stake. Proof-of-stake has a lot of problems.”
Several mining companies have invested millions in building specialized mining chips for ethereum,
machinery that will only function as long as the network pays out new cryptocurrency
to those who dedicate computing hardware to the effort.

 

What if ethereum never switches its core consensus algorithm?

It’s an idea that may sound blasphemous to developers building the world’s second-largest blockchain, where plans have long been laid for a transition away from bitcoin’s proof-of-work model to a more egalitarian alternative. Yet, entrepreneurs appear to be betting that between now and that bright future, a small fortune might be waiting.

Already, several mining companies have invested millions in building specialized mining chips for ethereum, machinery that will only function as long as the network pays out new cryptocurrency to those who dedicate computing hardware to the effort.
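
For readers unfamiliar with what that hardware actually does, here is a toy proof-of-work loop. It is purely illustrative and uses SHA-256 hashing; ethereum's real algorithm, Ethash, is memory-hard and considerably more involved, but the brute-force nonce search below is the kind of work ASICs are built to accelerate.

    # Toy proof-of-work loop (illustrative only; not Ethash).
    from hashlib import sha256

    def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
        target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
        nonce = 0
        while True:
            digest = sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce  # finding a valid nonce is what earns the block reward
            nonce += 1

    print(mine(b"example block header"))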

One such investor is Chen Min, CEO and founder of Linzhi, a Shenzhen-based startup that has spent $4 million in pursuit of designing the fastest specialized mining chip, or ASIC, for ethereum. An industry veteran, Chen was previously the lead ASIC designer at Canaan Creative, one of three (largely bitcoin-focused) mining firms that have dominated the production of crypto hardware over the last decade.

However, she’s since departed to try her hand at making machinery for ethereum, already investing amply in the goal.

“The cost to get to first silicon and sample machines is roughly $4 million. Additionally, we have our ongoing cost of operations, salaries, office, which are all modest, lean and efficient,” Chen said.

Announced in September, Linzhi’s ASIC promises to overtake previous ethereum ASIC designs, featuring substantial improvements in energy efficiency and computing power. Still, the mining chip will only function on ethereum if the blockchain keeps its current codebase.

But Chen isn’t too concerned.

“I don’t know if [ethereum] will or will not switch to proof-of-stake,” she told CoinDesk. “Proof-of-stake has a lot of problems.”

Evidence exists that Linzhi isn’t alone in this position. As detailed in CoinDesk, mining giant Bitmain released its ethereum miner, the Antminer E3, back in March, while Innosilicon announced three ethereum miners in July.

While Chen recognizes the inherent risk of introducing an ASIC in such an unpredictable environment, she told CoinDesk:

“The information is open, we are not hiding that risk. Our customer can decide to buy or not.”

High-risk climate

Also backing Chen’s conviction is the idea that proof-of-work is simply a better system for managing the distribution of cryptocurrency rewards. In this way, Chen described a possible proof-of-stake switch as “not a smart thing.”

“There are so many people, so many users, developers and hardware invested in that coin. If they ignore the work that has been done and switch to proof-of-stake, maybe later they can also ignore your stake and switch to proof-of-some other idea,” Chen said.

But there’s other risks facing ASIC mining on ethereum as well.

At a core developer call last week, the engineers behind ProgPoW, a proposal that would change the code to allow only GPU miners as an alternative to ASICs, were in attendance. Though still in the proposal stage, if executed, ProgPoW would effectively disable ASICs from mining on ethereum – and momentum is building toward the implementation.

Chen, however, argued that such ideas are little more than knee-jerk reactions, ones that don’t actually provide solutions to some of the concerns about how ether rewards are distributed in the community at large.

“ProgPoW is being pushed by large farms that have not disclosed their real intentions,” Chen said, adding:

“The fear of Bitmain is driving the [ethereum] community into the arms of some very powerful well-funded farms that they don’t even know about.”

Kristy-Leigh Minehan, a leading developer behind the ProgPoW switch, pushed back against this claim, arguing that “large-scale GPU farms don’t really exist.” In a sense, Minehan is making the case that GPUs can promote a larger number of participants in securing ethereum, something she argues ASICs, due to their cost and operational requirements, cannot.

Benefits to hardware

More broadly, the push for ProgPoW is typical of what has been termed crypto’s “war on miners,” in which several cryptocurrencies have moved to remove ASIC hardware manufacturers from their respective networks.

Yet according to Chen, much of the conversation about removing ASICs from ethereum lacks an awareness of the kind of advantages specialized hardware can bring to a cryptocurrency project.

“Our chip is optimized, specialized for ethereum, not only for mining, but also for verification and node operation, so I’m very curious about why people think it is wrong,” Chen told CoinDesk.

Chen added that specialized hardware is often condemned on moral, not rational, scientific grounds.

Pointing to scaling challenges faced by ethereum, Chen theorized that advancements in mining hardware could even help ethereum overcome its current concerns about scaling to more people and more transactions.

“[Ethereum] is still so far away from the traditional banking system. I think hardware can contribute,” she said.

In her mind, because ASICs will be able to mine ethereum faster and more efficiently, they will be able to process more transactions at a faster pace. “If we have a fast enough physical layer,” the community won’t have to rely on complex software scaling solutions, such as sharding, she argued.

Chen depicted Linzhi as deeply interested in participating in and assisting with the improvement of the ethereum protocol.

Indeed, pointing to a recent proposal by ethereum founder Vitalik Buterin that offers a scaling method based on hardware running zk-snarks, Chen said that Linzhi would be capable of producing such hardware in the future, although it’s not on their roadmap.

Last resort

All in all, it’s the latest sign that a larger argument is being had about how ethereum will secure its $22 billion blockchain. However, that argument may not break from the original roadmap anytime soon.

Speaking to CoinDesk, Hudson Jameson, a communications officer for the Ethereum Foundation, said he was unaware of any ASIC advocates in the ethereum developer community who might protest the plan to switch to proof-of-stake.

Much of the movement stems from the idea that the presence of ASICs optimized to run only one particular algorithm could interfere with a smooth transition to proof-of-stake, now dubbed “Shasper” due to its fusion with the sharding scaling method.

“That’s the entire reason ProgPoW was created: to ensure [ethereum] could transition safely over to [proof-of-stake] without larger parties like Bitmain manipulating the coin and the price,” Minehan told CoinDesk.

Still, Chen didn’t express too much concern in this regard, emphasizing that such efforts are still very much within “the proposal state.”

Regardless, Chen said that in the event of ProgPoW or a switch to proof-of-stake, Linzhi will move to mining ethereum classic, a rival ethereum platform that split away from the blockchain in 2016 and that has traditionally been more friendly to ASIC hardware.

She told CoinDesk:

“We would like to reduce the power consumed to secure [ethereum], but if they want to stick with wasteful GPUs run by two companies and powerful secret farming concerns, then we will just press on with [ethereum classic].”

 

from: https://www.coindesk.com/momentum-is-building-to-block-ethereum-asics/

 

 

 

Thomas Melle in Munich: The Man-Machine

(Theater)

 

We thought we already knew everything about, and from, Thomas Melle. Like hardly any other writer, the German author has laid bare his many-layered inner life and his bipolar disorder, recounting his manic-depressive suffering down to the most embarrassing detail, shuddering at himself as he did so. We still have before our eyes his vividly described "slack posture, the drooping shoulders, the visible apathy": that is how Melle met us across 350 pages.

And now this: Thomas Melle, a man already split within himself, now exists at least twice over. At the Münchner Kammerspiele he sits in the flesh in the audience, and, confusingly alike, once more on stage. The battered appearance belongs to the past; this man here – these men! – wears a proper sweater, the hair is neatly parted, the posture relaxed. If we set the Melle in the auditorium aside until the final applause and concentrate on the Melle in the spotlight, only one thing is disconcerting: the figure sitting comfortably in the armchair is wired up at the back of its head. Otherwise it stays casual, legs crossed, rocking its head, moving its hands sparingly, speaking clearly with only minimal movement of the lips. Only the body is rigid.

 

Photo: Münchner Kammerspiele, "Unheimliches Tal"

 

This Melle, one suspects, is not real: an avatar, a machine, a humanoid robot. And he is the main character of the piece "Unheimliches Tal / Uncanny Valley", conceived and realized by Stefan Kaegi of Rimini Protokoll together with Thomas Melle himself – a piece about a Thomas Melle who merely plays Thomas Melle, or is it the other way around? A personality show laced with traps of mistaken identity?

In fact, it is above all about a person watching his own artificial doubling in astonishment. That the dubiousness of such possibilities is also negotiated along the way, that the drawbacks of original and copy are sounded out, that this short evening of long wonder somehow revolves around outsourced biographies, loss of control, inconstancy and "cracks in the fabric" of what is truthful and comprehensible – of course, all of that comes up too.

Melle's thoughts from the abyss

But let's be honest: what is really interesting and striking is this perfect puppet up front, which now, to general amusement, briefly rotates its left foot through 360 degrees. No one in the audience can do that – an audience to whom this circus act is perhaps meant to be sold as a bad dream of a terrible new world, but which instead seems to take a kind of boundless science-fiction delight in the program. After the restrained ballet of limbs, the stage is cordoned off and the marvel of engineering can be inspected from all sides, like a Formula One car.

If you read Melle's "Die Welt im Rücken" attentively, you stumble over several passages that already point toward this evening. He writes there that, with his uncontrollable mental illness, during and between the manic-depressive episodes he belongs to the "class of inanimate objects", that he is "made of plastic, my veins are cables", a "boneless parasite": "Something sits there. I am nothing. Something sits there and is no more." And indeed, "Unheimliches Tal" is at bottom nothing more than these abyssal thoughts made flesh in silicone and electronics – though here they appear rather harmless, nice and playful, in love with the art of the feasible.

 

Photo: Münchner Kammerspiele, "Unheimliches Tal"

 

What does any of this have to do with theater?

True, Melle as human (text and voice are his) and as machine muses on the weighty questions of responsibility in the age of endless technical reproducibility; on the metaphorical death that may or may not befall such a "fairground attraction" (eternal life through a hard-drive swap!); on the computer as its creator's slave and on technology as the true nature of the human being. Worth pondering, too, is the aside about whether one hesitates when a computer asks whether one is a robot. There is a biographical excursion to a man who can switch off his hearing, and into the life of the British computer scientist Alan Turing, a pioneer of computing who was, admittedly, also a fellow sufferer of Melle's: by means of medication he was turned into a different person than he himself would have wished.

But all of this is theoretical garnish delivered from a rubber lip in a sonorous tone, underlaid with syrupy music through which the motors can occasionally be heard; and the question of why any of it has to take place in a theater, and what it has to do with theater at all, is answered only feebly and shakily by a prancing spotlight. Instead, ever more often, short films confront us with the measuring of Thomas Melle: we see how the arms and hands are copied, suffer with the author as he endures a menacing death-mask procedure, or shudder a little at the artificial skeleton that, under remote control, conjures the facial features into the deceptively lifelike mask. With the much-invoked "transformation" that ought to be the business of the theater and of the actor, such mimicry has, of course, nothing to do.

Staged overwhelming instead of conviction

A highly elaborate gimmick that could just as well take place in the Deutsches Museum, and once again the old/new problem: the theater chases after the time to come, rehearses technological progress and copies its successes and its failures. In its zeal for feasibility, however, it forgets the content: instead of conviction, only staged overwhelming. And with a rather feeble moral on top: like a reassuring TV uncle, Thomas Melle – but which one now, the real one or the golem? – calms those spectators who may briefly have felt uneasy about the interchangeable ghosts that could be coming their way.

In fact, over this hour one would have shared feelings after all – "as in a very old, very human program." Stay human, says the machine! A harmless piece of Münchner Puppenkiste.

"Unheimliches Tal / Uncanny Valley", October 7, 15, 17 and 30, November 13 to 15, at the Münchner Kammerspiele.

 

from: http://www.spiegel.de/kultur/gesellschaft/thomas-melle-muenchen-unheimliches-tal-bleibense-mensch-sagt-die-maschine-a-1231689.html

 

 

 

 

[UPDATED] China Snuck A Tiny Microchip Inside US Top Secret Servers Used By The DoD And CIA — NSA Struggles To Assess Risk

[UPDATED SEVERAL TIMES FROM DIFFERENT SOURCES – SCROLL DOWN.
You will find all relevant reporting on the matter on this page,
including the implant built into a server's Ethernet connector at a US telecom company.]

 

The Chinese military surreptitiously inserted tiny microchips no larger than single grains of rice into servers on local assembly lines in order to gain access to data networks run by U.S. government agencies ranging from the Department of Defense to the Central Intelligence Agency, according to an explosive investigation from Bloomberg.

  • A three-year investigation by U.S. government officials found that servers assembled for startup Elemental Technologies by San Jose-based company Supermicro reportedly contained tiny microchips “inserted at factories run by manufacturing subcontractors in China,” Bloomberg reported.
  • The chips, independently discovered by engineers at Amazon and Apple in 2015, purportedly allowed hackers to “create a stealth doorway into any network that included the altered machines,” per Bloomberg, a Trojan horse that gave hackers a direct line into any sensitive network.
  • Elemental servers assembled by Supermicro are “found in Department of Defense data centers, the CIA’s drone operations, and the onboard networks of Navy warships,” per Bloomberg, and the revelation prompted DoD officials at the time to request a small group of technologists “to think about creating commercial products that could detect hardware implants.”
  • “Public documents, including the company’s own promotional materials, show that the servers have been used inside Department of Defense data centers to process drone and surveillance-camera footage, on Navy warships to transmit feeds of airborne missions, and inside government buildings to enable secure videoconferencing,” Bloomberg reports. “NASA, both houses of Congress, and the Department of Homeland Security have also been customers.”
  • News of the years-long infiltration of secure networks through the lowest levels of the global industrial supply chain  — China still manufactures the majority of the raw tech behind the world’s mobile phones and personal computers — reflects not just a coup for the Chinese intelligence community, but an alarming vulnerability of the U.S. industrial base.
  • Technologist Joe Grand put it best in an interview with Bloomberg: “Having a well-done, nation-state-level hardware implant surface would be like witnessing a unicorn jumping over a rainbow … Hardware is just so far off the radar, it’s almost treated like black magic.”

August Cole, a coauthor of the novel “Ghost Fleet,” which features an eerily similar scenario involving Chinese chips hidden inside an F-35 that ruin its stealth capabilities, wrote on Twitter, “Hey Siri, what is my #ghostfleet moment of the day?”

from: https://taskandpurpose.com/china-hacking-microchips-dod-cia/

 

The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies

The attack by Chinese spies reached almost 30 U.S. companies, including Amazon and Apple, by compromising America’s technology supply chain, according to extensive interviews with government and corporate sources.

 

 

In 2015, Amazon.com Inc. began quietly evaluating a startup called Elemental Technologies, a potential acquisition to help with a major expansion of its streaming video service, known today as Amazon Prime Video. Based in Portland, Ore., Elemental made software for compressing massive video files and formatting them for different devices. Its technology had helped stream the Olympic Games online, communicate with the International Space Station, and funnel drone footage to the Central Intelligence Agency. Elemental’s national security contracts weren’t the main reason for the proposed acquisition, but they fit nicely with Amazon’s government businesses, such as the highly secure cloud that Amazon Web Services (AWS) was building for the CIA.

To help with due diligence, AWS, which was overseeing the prospective acquisition, hired a third-party company to scrutinize Elemental’s security, according to one person familiar with the process. The first pass uncovered troubling issues, prompting AWS to take a closer look at Elemental’s main product: the expensive servers that customers installed in their networks to handle the video compression. These servers were assembled for Elemental by Super Micro Computer Inc., a San Jose-based company (commonly known as Supermicro) that’s also one of the world’s biggest suppliers of server motherboards, the fiberglass-mounted clusters of chips and capacitors that act as the neurons of data centers large and small. In late spring of 2015, Elemental’s staff boxed up several servers and sent them to Ontario, Canada, for the third-party security company to test, the person says.

Nested on the servers’ motherboards, the testers found a tiny microchip, not much bigger than a grain of rice, that wasn’t part of the boards’ original design. Amazon reported the discovery to U.S. authorities, sending a shudder through the intelligence community. Elemental’s servers could be found in Department of Defense data centers, the CIA’s drone operations, and the onboard networks of Navy warships. And Elemental was just one of hundreds of Supermicro customers.

During the ensuing top-secret probe, which remains open more than three years later, investigators determined that the chips allowed the attackers to create a stealth doorway into any network that included the altered machines. Multiple people familiar with the matter say investigators found that the chips had been inserted at factories run by manufacturing subcontractors in China.

This attack was something graver than the software-based incidents the world has grown accustomed to seeing. Hardware hacks are more difficult to pull off and potentially more devastating, promising the kind of long-term, stealth access that spy agencies are willing to invest millions of dollars and many years to get.

There are two ways for spies to alter the guts of computer equipment. One, known as interdiction, consists of manipulating devices as they’re in transit from manufacturer to customer. This approach is favored by U.S. spy agencies, according to documents leaked by former National Security Agency contractor Edward Snowden. The other method involves seeding changes from the very beginning.

One country in particular has an advantage executing this kind of attack: China, which by some estimates makes 75 percent of the world’s mobile phones and 90 percent of its PCs. Still, to actually accomplish a seeding attack would mean developing a deep understanding of a product’s design, manipulating components at the factory, and ensuring that the doctored devices made it through the global logistics chain to the desired location—a feat akin to throwing a stick in the Yangtze River upstream from Shanghai and ensuring that it washes ashore in Seattle. “Having a well-done, nation-state-level hardware implant surface would be like witnessing a unicorn jumping over a rainbow,” says Joe Grand, a hardware hacker and the founder of Grand Idea Studio Inc. “Hardware is just so far off the radar, it’s almost treated like black magic.”

But that’s just what U.S. investigators found: The chips had been inserted during the manufacturing process, two officials say, by operatives from a unit of the People’s Liberation Army. In Supermicro, China’s spies appear to have found a perfect conduit for what U.S. officials now describe as the most significant supply chain attack known to have been carried out against American companies.

One official says investigators found that it eventually affected almost 30 companies, including a major bank, government contractors, and the world’s most valuable company, Apple Inc. Apple was an important Supermicro customer and had planned to order more than 30,000 of its servers in two years for a new global network of data centers. Three senior insiders at Apple say that in the summer of 2015, it, too, found malicious chips on Supermicro motherboards. Apple severed ties with Supermicro the following year, for what it described as unrelated reasons.

In emailed statements, Amazon (which announced its acquisition of Elemental in September 2015), Apple, and Supermicro disputed summaries of Bloomberg Businessweek’s reporting. “It’s untrue that AWS knew about a supply chain compromise, an issue with malicious chips, or hardware modifications when acquiring Elemental,” Amazon wrote. “On this we can be very clear: Apple has never found malicious chips, ‘hardware manipulations’ or vulnerabilities purposely planted in any server,” Apple wrote. “We remain unaware of any such investigation,” wrote a spokesman for Supermicro, Perry Hayes. The Chinese government didn’t directly address questions about manipulation of Supermicro servers, issuing a statement that read, in part, “Supply chain safety in cyberspace is an issue of common concern, and China is also a victim.” The FBI and the Office of the Director of National Intelligence, representing the CIA and NSA, declined to comment.

The companies’ denials are countered by six current and former senior national security officials, who—in conversations that began during the Obama administration and continued under the Trump administration—detailed the discovery of the chips and the government’s investigation. One of those officials and two people inside AWS provided extensive information on how the attack played out at Elemental and Amazon; the official and one of the insiders also described Amazon’s cooperation with the government investigation. In addition to the three Apple insiders, four of the six U.S. officials confirmed that Apple was a victim. In all, 17 people confirmed the manipulation of Supermicro’s hardware and other elements of the attacks. The sources were granted anonymity because of the sensitive, and in some cases classified, nature of the information.

One government official says China’s goal was long-term access to high-value corporate secrets and sensitive government networks. No consumer data is known to have been stolen.

The ramifications of the attack continue to play out. The Trump administration has made computer and networking hardware, including motherboards, a focus of its latest round of trade sanctions against China, and White House officials have made it clear they think companies will begin shifting their supply chains to other countries as a result. Such a shift might assuage officials who have been warning for years about the security of the supply chain—even though they’ve never disclosed a major reason for their concerns.

How the Hack Worked, According to U.S. Officials

 

Back in 2006, three engineers in Oregon had a clever idea. Demand for mobile video was about to explode, and they predicted that broadcasters would be desperate to transform programs designed to fit TV screens into the various formats needed for viewing on smartphones, laptops, and other devices. To meet the anticipated demand, the engineers started Elemental Technologies, assembling what one former adviser to the company calls a genius team to write code that would adapt the superfast graphics chips being produced for high-end video-gaming machines. The resulting software dramatically reduced the time it took to process large video files. Elemental then loaded the software onto custom-built servers emblazoned with its leprechaun-green logos.

Elemental servers sold for as much as $100,000 each, at profit margins of as high as 70 percent, according to a former adviser to the company. Two of Elemental’s biggest early clients were the Mormon church, which used the technology to beam sermons to congregations around the world, and the adult film industry, which did not.

Elemental also started working with American spy agencies. In 2009 the company announced a development partnership with In-Q-Tel Inc., the CIA’s investment arm, a deal that paved the way for Elemental servers to be used in national security missions across the U.S. government. Public documents, including the company’s own promotional materials, show that the servers have been used inside Department of Defense data centers to process drone and surveillance-camera footage, on Navy warships to transmit feeds of airborne missions, and inside government buildings to enable secure videoconferencing. NASA, both houses of Congress, and the Department of Homeland Security have also been customers. This portfolio made Elemental a target for foreign adversaries.

Supermicro had been an obvious choice to build Elemental’s servers. Headquartered north of San Jose’s airport, up a smoggy stretch of Interstate 880, the company was founded by Charles Liang, a Taiwanese engineer who attended graduate school in Texas and then moved west to start Supermicro with his wife in 1993. Silicon Valley was then embracing outsourcing, forging a pathway from Taiwanese, and later Chinese, factories to American consumers, and Liang added a comforting advantage: Supermicro’s motherboards would be engineered mostly in San Jose, close to the company’s biggest clients, even if the products were manufactured overseas.

Today, Supermicro sells more server motherboards than almost anyone else. It also dominates the $1 billion market for boards used in special-purpose computers, from MRI machines to weapons systems. Its motherboards can be found in made-to-order server setups at banks, hedge funds, cloud computing providers, and web-hosting services, among other places. Supermicro has assembly facilities in California, the Netherlands, and Taiwan, but its motherboards—its core product—are nearly all manufactured by contractors in China.

The company’s pitch to customers hinges on unmatched customization, made possible by hundreds of full-time engineers and a catalog encompassing more than 600 designs. The majority of its workforce in San Jose is Taiwanese or Chinese, and Mandarin is the preferred language, with hanzi filling the whiteboards, according to six former employees. Chinese pastries are delivered every week, and many routine calls are done twice, once for English-only workers and again in Mandarin. The latter are more productive, according to people who’ve been on both. These overseas ties, especially the widespread use of Mandarin, would have made it easier for China to gain an understanding of Supermicro’s operations and potentially to infiltrate the company. (A U.S. official says the government’s probe is still examining whether spies were planted inside Supermicro or other American companies to aid the attack.)

With more than 900 customers in 100 countries by 2015, Supermicro offered inroads to a bountiful collection of sensitive targets. “Think of Supermicro as the Microsoft of the hardware world,” says a former U.S. intelligence official who’s studied Supermicro and its business model. “Attacking Supermicro motherboards is like attacking Windows. It’s like attacking the whole world.”

Well before evidence of the attack surfaced inside the networks of U.S. companies, American intelligence sources were reporting that China’s spies had plans to introduce malicious microchips into the supply chain. The sources weren’t specific, according to a person familiar with the information they provided, and millions of motherboards are shipped into the U.S. annually. But in the first half of 2014, a different person briefed on high-level discussions says, intelligence officials went to the White House with something more concrete: China’s military was preparing to insert the chips into Supermicro motherboards bound for U.S. companies.

The specificity of the information was remarkable, but so were the challenges it posed. Issuing a broad warning to Supermicro’s customers could have crippled the company, a major American hardware maker, and it wasn’t clear from the intelligence whom the operation was targeting or what its ultimate aims were. Plus, without confirmation that anyone had been attacked, the FBI was limited in how it could respond. The White House requested periodic updates as information came in, the person familiar with the discussions says.

Apple made its discovery of suspicious chips inside Supermicro servers around May 2015, after detecting odd network activity and firmware problems, according to a person familiar with the timeline. Two of the senior Apple insiders say the company reported the incident to the FBI but kept details about what it had detected tightly held, even internally. Government investigators were still chasing clues on their own when Amazon made its discovery and gave them access to sabotaged hardware, according to one U.S. official. This created an invaluable opportunity for intelligence agencies and the FBI—by then running a full investigation led by its cyber- and counterintelligence teams—to see what the chips looked like and how they worked.

The chips on Elemental servers were designed to be as inconspicuous as possible, according to one person who saw a detailed report prepared for Amazon by its third-party security contractor, as well as a second person who saw digital photos and X-ray images of the chips incorporated into a later report prepared by Amazon’s security team. Gray or off-white in color, they looked more like signal conditioning couplers, another common motherboard component, than microchips, and so they were unlikely to be detectable without specialized equipment. Depending on the board model, the chips varied slightly in size, suggesting that the attackers had supplied different factories with different batches.

Officials familiar with the investigation say the primary role of implants such as these is to open doors that other attackers can go through. “Hardware attacks are about access,” as one former senior official puts it. In simplified terms, the implants on Supermicro hardware manipulated the core operating instructions that tell the server what to do as data move across a motherboard, two people familiar with the chips’ operation say. This happened at a crucial moment, as small bits of the operating system were being stored in the board’s temporary memory en route to the server’s central processor, the CPU. The implant was placed on the board in a way that allowed it to effectively edit this information queue, injecting its own code or altering the order of the instructions the CPU was meant to follow. Deviously small changes could create disastrous effects.

Since the implants were small, the amount of code they contained was small as well. But they were capable of doing two very important things: telling the device to communicate with one of several anonymous computers elsewhere on the internet that were loaded with more complex code; and preparing the device’s operating system to accept this new code. The illicit chips could do all this because they were connected to the baseboard management controller, a kind of superchip that administrators use to remotely log in to problematic servers, giving them access to the most sensitive code even on machines that have crashed or are turned off.

This system could let the attackers alter how the device functioned, line by line, however they wanted, leaving no one the wiser. To understand the power that would give them, take this hypothetical example: Somewhere in the Linux operating system, which runs in many servers, is code that authorizes a user by verifying a typed password against a stored encrypted one. An implanted chip can alter part of that code so the server won’t check for a password—and presto! A secure machine is open to any and all users. A chip can also steal encryption keys for secure communications, block security updates that would neutralize the attack, and open up new pathways to the internet. Should some anomaly be noticed, it would likely be cast as an unexplained oddity. “The hardware opens whatever door it wants,” says Joe FitzPatrick, founder of Hardware Security Resources LLC, a company that trains cybersecurity professionals in hardware hacking techniques.
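
To make that hypothetical concrete, here is a minimal sketch of the idea in Python rather than kernel code: a routine that verifies a password against a stored hash, next to the effect of a tampered code path that skips the check. It illustrates the consequence described above, not the implant's actual mechanism.

    # Minimal sketch of the authentication-bypass idea described above;
    # it shows the effect in miniature, not how the implant worked.
    import hmac
    from hashlib import pbkdf2_hmac

    SALT = b"example-salt"
    STORED_HASH = pbkdf2_hmac("sha256", b"correct-password", SALT, 100_000)

    def check_password(typed: str) -> bool:
        candidate = pbkdf2_hmac("sha256", typed.encode(), SALT, 100_000)
        return hmac.compare_digest(candidate, STORED_HASH)

    def check_password_tampered(typed: str) -> bool:
        # What a doctored code path amounts to: the comparison never happens,
        # so any password is accepted.
        return True

    print(check_password("guess"), check_password_tampered("guess"))  # False True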

U.S. officials had caught China experimenting with hardware tampering before, but they’d never seen anything of this scale and ambition. The security of the global technology supply chain had been compromised, even if consumers and most companies didn’t know it yet. What remained for investigators to learn was how the attackers had so thoroughly infiltrated Supermicro’s production process—and how many doors they’d opened into American targets.

Unlike software-based hacks, hardware manipulation creates a real-world trail. Components leave a wake of shipping manifests and invoices. Boards have serial numbers that trace to specific factories. To track the corrupted chips to their source, U.S. intelligence agencies began following Supermicro’s serpentine supply chain in reverse, a person briefed on evidence gathered during the probe says.

As recently as 2016, according to DigiTimes, a news site specializing in supply chain research, Supermicro had three primary manufacturers constructing its motherboards, two headquartered in Taiwan and one in Shanghai. When such suppliers are choked with big orders, they sometimes parcel out work to subcontractors. In order to get further down the trail, U.S. spy agencies drew on the prodigious tools at their disposal. They sifted through communications intercepts, tapped informants in Taiwan and China, even tracked key individuals through their phones, according to the person briefed on evidence gathered during the probe. Eventually, that person says, they traced the malicious chips to four subcontracting factories that had been building Supermicro motherboards for at least two years.

As the agents monitored interactions among Chinese officials, motherboard manufacturers, and middlemen, they glimpsed how the seeding process worked. In some cases, plant managers were approached by people who claimed to represent Supermicro or who held positions suggesting a connection to the government. The middlemen would request changes to the motherboards’ original designs, initially offering bribes in conjunction with their unusual requests. If that didn’t work, they threatened factory managers with inspections that could shut down their plants. Once arrangements were in place, the middlemen would organize delivery of the chips to the factories.

The investigators concluded that this intricate scheme was the work of a People’s Liberation Army unit specializing in hardware attacks, according to two people briefed on its activities. The existence of this group has never been revealed before, but one official says, “We’ve been tracking these guys for longer than we’d like to admit.” The unit is believed to focus on high-priority targets, including advanced commercial technology and the computers of rival militaries. In past attacks, it targeted the designs for high-performance computer chips and computing systems of large U.S. internet providers.

Provided details of Businessweek’s reporting, China’s Ministry of Foreign Affairs sent a statement that said “China is a resolute defender of cybersecurity.” The ministry added that in 2011, China proposed international guarantees on hardware security along with other members of the Shanghai Cooperation Organization, a regional security body. The statement concluded, “We hope parties make less gratuitous accusations and suspicions but conduct more constructive talk and collaboration so that we can work together in building a peaceful, safe, open, cooperative and orderly cyberspace.”

The Supermicro attack was on another order entirely from earlier episodes attributed to the PLA. It threatened to have reached a dizzying array of end users, with some vital ones in the mix. Apple, for its part, has used Supermicro hardware in its data centers sporadically for years, but the relationship intensified after 2013, when Apple acquired a startup called Topsy Labs, which created superfast technology for indexing and searching vast troves of internet content. By 2014, the startup was put to work building small data centers in or near major global cities. This project, known internally as Ledbelly, was designed to make the search function for Apple’s voice assistant, Siri, faster, according to the three senior Apple insiders.

Documents seen by Businessweek show that in 2014, Apple planned to order more than 6,000 Supermicro servers for installation in 17 locations, including Amsterdam, Chicago, Hong Kong, Los Angeles, New York, San Jose, Singapore, and Tokyo, plus 4,000 servers for its existing North Carolina and Oregon data centers. Those orders were supposed to double, to 20,000, by 2015. Ledbelly made Apple an important Supermicro customer at the exact same time the PLA was found to be manipulating the vendor’s hardware.

Project delays and early performance problems meant that around 7,000 Supermicro servers were humming in Apple’s network by the time the company’s security team found the added chips. Because Apple didn’t, according to a U.S. official, provide government investigators with access to its facilities or the tampered hardware, the extent of the attack there remained outside their view.

 

Microchips found on altered motherboards in some cases looked like signal conditioning couplers.
Photographer: Victor Prado for Bloomberg Businessweek

 

American investigators eventually figured out who else had been hit. Since the implanted chips were designed to ping anonymous computers on the internet for further instructions, operatives could hack those computers to identify others who’d been affected. Although the investigators couldn’t be sure they’d found every victim, a person familiar with the U.S. probe says they ultimately concluded that the number was almost 30 companies.

That left the question of whom to notify and how. U.S. officials had been warning for years that hardware made by two Chinese telecommunications giants, Huawei Corp. and ZTE Corp., was subject to Chinese government manipulation. (Both Huawei and ZTE have said no such tampering has occurred.) But a similar public alert regarding a U.S. company was out of the question. Instead, officials reached out to a small number of important Supermicro customers. One executive of a large web-hosting company says the message he took away from the exchange was clear: Supermicro’s hardware couldn’t be trusted. “That’s been the nudge to everyone—get that crap out,” the person says.

Amazon, for its part, began acquisition talks with an Elemental competitor, but according to one person familiar with Amazon’s deliberations, it reversed course in the summer of 2015 after learning that Elemental’s board was nearing a deal with another buyer. Amazon announced its acquisition of Elemental in September 2015, in a transaction whose value one person familiar with the deal places at $350 million. Multiple sources say that Amazon intended to move Elemental’s software to AWS’s cloud, whose chips, motherboards, and servers are typically designed in-house and built by factories that Amazon contracts from directly.

A notable exception was AWS’s data centers inside China, which were filled with Supermicro-built servers, according to two people with knowledge of AWS’s operations there. Mindful of the Elemental findings, Amazon’s security team conducted its own investigation into AWS’s Beijing facilities and found altered motherboards there as well, including more sophisticated designs than they’d previously encountered. In one case, the malicious chips were thin enough that they’d been embedded between the layers of fiberglass onto which the other components were attached, according to one person who saw pictures of the chips. That generation of chips was smaller than a sharpened pencil tip, the person says. (Amazon denies that AWS knew of servers found in China containing malicious chips.)

China has long been known to monitor banks, manufacturers, and ordinary citizens on its own soil, and the main customers of AWS’s China cloud were domestic companies or foreign entities with operations there. Still, the fact that the country appeared to be conducting those operations inside Amazon’s cloud presented the company with a Gordian knot. Its security team determined that it would be difficult to quietly remove the equipment and that, even if they could devise a way, doing so would alert the attackers that the chips had been found, according to a person familiar with the company’s probe. Instead, the team developed a method of monitoring the chips. In the ensuing months, they detected brief check-in communications between the attackers and the sabotaged servers but didn’t see any attempts to remove data. That likely meant either that the attackers were saving the chips for a later operation or that they’d infiltrated other parts of the network before the monitoring began. Neither possibility was reassuring.

When in 2016 the Chinese government was about to pass a new cybersecurity law—seen by many outside the country as a pretext to give authorities wider access to sensitive data—Amazon decided to act, the person familiar with the company’s probe says. In August it transferred operational control of its Beijing data center to its local partner, Beijing Sinnet, a move the companies said was needed to comply with the incoming law. The following November, Amazon sold the entire infrastructure to Beijing Sinnet for about $300 million. The person familiar with Amazon’s probe casts the sale as a choice to “hack off the diseased limb.”

As for Apple, one of the three senior insiders says that in the summer of 2015, a few weeks after it identified the malicious chips, the company started removing all Supermicro servers from its data centers, a process Apple referred to internally as “going to zero.” Every Supermicro server, all 7,000 or so, was replaced in a matter of weeks, the senior insider says. (Apple denies that any servers were removed.) In 2016, Apple informed Supermicro that it was severing their relationship entirely—a decision a spokesman for Apple ascribed in response to Businessweek’s questions to an unrelated and relatively minor security incident.

That August, Supermicro’s CEO, Liang, revealed that the company had lost two major customers. Although he didn’t name them, one was later identified in news reports as Apple. He blamed competition, but his explanation was vague. “When customers asked for lower price, our people did not respond quickly enough,” he said on a conference call with analysts. Hayes, the Supermicro spokesman, says the company has never been notified of the existence of malicious chips on its motherboards by either customers or U.S. law enforcement.

Concurrent with the illicit chips’ discovery in 2015 and the unfolding investigation, Supermicro has been plagued by an accounting problem, which the company characterizes as an issue related to the timing of certain revenue recognition. After missing two deadlines to file quarterly and annual reports required by regulators, Supermicro was delisted from the Nasdaq on Aug. 23 of this year. It marked an extraordinary stumble for a company whose annual revenue had risen sharply in the previous four years, from a reported $1.5 billion in 2014 to a projected $3.2 billion this year.

One Friday in late September 2015, President Barack Obama and Chinese President Xi Jinping appeared together at the White House for an hourlong press conference headlined by a landmark deal on cybersecurity. After months of negotiations, the U.S. had extracted from China a grand promise: It would no longer support the theft by hackers of U.S. intellectual property to benefit Chinese companies. Left out of those pronouncements, according to a person familiar with discussions among senior officials across the U.S. government, was the White House’s deep concern that China was willing to offer this concession because it was already developing far more advanced and surreptitious forms of hacking founded on its near monopoly of the technology supply chain.

In the weeks after the agreement was announced, the U.S. government quietly raised the alarm with several dozen tech executives and investors at a small, invite-only meeting in McLean, Va., organized by the Pentagon. According to someone who was present, Defense Department officials briefed the technologists on a recent attack and asked them to think about creating commercial products that could detect hardware implants. Attendees weren’t told the name of the hardware maker involved, but it was clear to at least some in the room that it was Supermicro, the person says.

The problem under discussion wasn’t just technological. It spoke to decisions made decades ago to send advanced production work to Southeast Asia. In the intervening years, low-cost Chinese manufacturing had come to underpin the business models of many of America’s largest technology companies. Early on, Apple, for instance, made many of its most sophisticated electronics domestically. Then in 1992, it closed a state-of-the-art plant for motherboard and computer assembly in Fremont, Calif., and sent much of that work overseas.

Over the decades, the security of the supply chain became an article of faith despite repeated warnings by Western officials. A belief formed that China was unlikely to jeopardize its position as workshop to the world by letting its spies meddle in its factories. That left the decision about where to build commercial systems resting largely on where capacity was greatest and cheapest. “You end up with a classic Satan’s bargain,” one former U.S. official says. “You can have less supply than you want and guarantee it’s secure, or you can have the supply you need, but there will be risk. Every organization has accepted the second proposition.”

In the three years since the briefing in McLean, no commercially viable way to detect attacks like the one on Supermicro’s motherboards has emerged—or has looked likely to emerge. Few companies have the resources of Apple and Amazon, and it took some luck even for them to spot the problem. “This stuff is at the cutting edge of the cutting edge, and there is no easy technological solution,” one of the people present in McLean says. “You have to invest in things that the world wants. You cannot invest in things that the world is not ready to accept yet.”

 

 

from: https://www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-america-s-top-companies

 

 

‘We Have No Way Of Addressing This’: Ex-NSA Scientist Reacts To China Sneaking Microchips Into DoD Servers

 

After an explosive Bloomberg report revealed that China was surreptitiously inserting small microchips into servers that later ended up being used by the Department of Defense, CIA, and many large American companies, an ex-NSA scientist warned there was “no way of addressing this risk” from a strategic standpoint.

“We can find a couple of them, but we’re not gonna find the next generation version,” said Dave Aitel, a former computer scientist for the National Security Agency now working as the Chief Security Technical Officer for Cyxtera. “That makes it very hard to trust computers in general.”

U.S. government investigators found that servers assembled by American companies contained motherboards — made by Chinese subcontractors — with tiny microchips that could allow hackers to “create a stealth doorway into any network that included the altered machines,” according to Bloomberg.

“They are literally in between the layers of the board,” Aitel said, adding that in order to see it, “you would have to take a board, strip it down, and X-ray it” to find the suspect chip.

“That’s just not a thing we should expect corporations to be able to do, even the biggest organizations.”

The machines were found inside DoD data centers, on Navy warships, and at the CIA, the site reported.

The Pentagon declined to comment on whether the suspect chips were found on DoD networks, citing operational security reasons. Still, Department spokeswoman Heather Babb told Task & Purpose that the U.S. military “has policies in place to address software assurance and supply chain risk management, as well as established security standards to ensure all procured commercial products and services are rigorously inspected for security vulnerabilities. As threats within the cyberspace domain change, DOD looks for solutions that provide more capability.”

“The protection of the National Security Innovation Base is a priority for the Department. Working closely with Congress and private industry, DOD is already advancing to elevate security within the supply chain,” she added.

China isn’t the only nation-state working to infiltrate hardware as a means to hack its enemies. The U.S. does much the same thing — intercepting network hardware and secretly installing beacons that call back to NSA — except it doesn’t seem to get, and cannot legally force, the cooperation of the factory making the product.

China doesn’t seem to have that problem.

“The question becomes can we move to a trusted supply chain or not?” Aitel asked. He added that “tin foil hat” thinking, the idea that foreign-made hardware should be treated as suspect, isn’t so conspiratorial after all.

Still, he did offer some more positive news: “The good news is we caught it, and we’re on it,” Aitel said. “That’s actually phenomenally good news. That does send a message of deterrence. That does send a message that you can’t get away with it.”

President Barack Obama and Chinese President Xi Jinping agreed in 2015 that neither government would “conduct or knowingly support cyber-enabled theft of intellectual property” and said they would work together on other cybersecurity issues.

This latest disclosure of cyber-espionage adds to the case that China has violated the agreement, something the Trump administration accused Beijing of doing earlier this year.

Aitel said it was more than likely that DoD and other governmental organizations were pulling the suspect servers if they haven’t done so already. Still, the risk will likely remain as long as the hardware is not manufactured in the U.S.

 

from: https://taskandpurpose.com/china-hacking-microchips-nsa-reaction/

 

China reportedly infiltrated Apple and other US companies using ‘spy’ chips on servers

 

Ready for information about what may be one of the largest corporate espionage programs from a nation-state? The Chinese government managed to gain access to the servers of more than 30 U.S. companies, including Apple, according to an explosive report from Bloomberg published today.

Bloomberg reports that U.S-based server motherboard specialist Supermicro was compromised in China where government-affiliated groups are alleged to have infiltrated its supply chain to attach tiny chips, some merely the size of a pencil tip, to motherboards which ended up in servers deployed in the U.S.

The goal, Bloomberg said, was to gain an entry point within company systems to potentially grab IP or confidential information. While the microchips themselves were limited in terms of direct capabilities, they represented a “stealth doorway” that could allow China-based operatives to remotely alter how a device functioned to potentially access information.

Once aware of the program, the U.S. government spied on the spies behind the chips but, according to Bloomberg, no consumer data is known to have been stolen through the attacks. Even so, this episode represents one of the most striking espionage programs from the Chinese government to date.

The story reports that the chips were discovered and reported to the FBI by Amazon, which found them during due diligence ahead of its 2015 acquisition of Elemental Systems, a company that held a range of U.S. government contracts, and Apple, which is said to have deployed up to 7,000 Supermicro servers at peak. Bloomberg reported that Apple removed them all within a matter of weeks. Apple did indeed cut ties with Supermicro back in 2016, but it denied a claim from The Information, which reported at the time that the decision was based on a security issue.

Amazon, meanwhile, completed the deal for Elemental Systems — reportedly worth $500 million — after it switched its software to the AWS cloud. Supermicro was suspended from trading on the Nasdaq in August after failing to submit quarterly reports on time. The company is likely to be delisted once the timeframe for an appeal is over.

Amazon, Apple, Supermicro and China’s Ministry of Foreign Affairs all denied Bloomberg’s findings with strong and lengthy statements — a full list of rebuttals is here. The publication claims that it sourced its information using no fewer than 17 individuals with knowledge of developments, including six U.S. officials and four Apple “insiders.”

 

from: https://techcrunch.com/2018/10/04/china-reportedly-infiltrated-apple-and-other-us-companies-using-spy-chips-on-servers/

 

 

Chinese Espionage

Apple and Amazon are said to have found spy chips in servers

Dozens of U.S. companies and government institutions have used servers from the same manufacturer: Supermicro. But according to a media report, the manufacturer’s boards were tampered with in China.

 


 

Thursday, Oct. 4, 2018, 2:40 p.m.

 

A unit of the Chinese military is said to have arranged for tiny spy chips to be built into thousands of servers destined for major companies such as Amazon and Apple. These chips, some as small as the tip of a pencil, allegedly made it possible to covertly establish a connection to the attackers and to load additional code without being noticed. That is what “Bloomberg Businessweek” reports, citing a total of 17 anonymous sources from corporate and U.S. government circles. Set against this are sharp denials from Amazon, Apple, the server manufacturer, and the Chinese government.

According to the report, the chips were found both by Amazon and by Apple itself, on the motherboards of servers that the California-based company Supermicro assembles or has assembled by contract manufacturers. In Supermicro’s case these contract manufacturers, the first link in the supply chain, are located in China. There, military officers are said to have bribed or threatened plant managers until they agreed to smuggle the components into the boards’ designs and install them.

Supermicro’s affected U.S. customers included “almost 30 companies,” among them, besides Apple and Amazon, a major bank and several contractors to the U.S. government.

Apple: “We have never found malicious chips”

Amazon’s cloud division, however, told “Bloomberg” upon request that it was untrue that the company had known about the compromised supply chain. Apple wrote: “We have never found malicious chips, manipulated hardware, or vulnerabilities hidden in servers.” In their full denials, both companies lay this out at greater length and in unmistakable terms.

The journalists’ 17 sources contradict this account. They were able to explain to the magazine in detail how Amazon and Apple each found the chips independently of one another in 2015.

In Amazon’s case, this allegedly happened during an external review of servers made by the company Elemental, which had developed specialized software for compressing and formatting video and sold it together with matching Supermicro servers, to customers including the U.S. Department of Defense, the CIA, and the U.S. Navy. Elemental was at the time considered a possible acquisition target for Amazon, which had commissioned the security review for that reason.

Apple ended its business relationship with Supermicro in 2016

Amazon did later acquire the start-up, but compromised Supermicro servers are said to have been used only in Amazon’s Chinese cloud centers, until that operation’s inventory was sold off to a local Chinese company. At other locations, Amazon intended to use only Elemental’s software, on its own machines.

Apple, for its part, is said to have replaced around 7,000 Supermicro servers within a few weeks of its own discovery. The company disputes that as well, although it concedes that it ended all business relations with Supermicro in 2016, albeit for a different reason, one said to involve a “comparatively minor security problem.”

The Chinese goal, a source in government circles told “Bloomberg,” was lasting access to corporate and government networks as well as to trade secrets. The corresponding investigations are said to be continuing to this day.

 

from: http://www.spiegel.de/netzwelt/web/apple-und-amazon-laut-medienbericht-spionagechips-in-servern-gefunden-a-1231543.html

 

 

Bloomberg’s spy chip story reveals the murky world of national security reporting


Today’s bombshell Bloomberg story has the internet split: either the story is right, and reporters have uncovered one of the largest and most jarring breaches of the U.S. tech industry by a foreign adversary… or it’s not, and a lot of people screwed up.

To recap, Chinese spies reportedly infiltrated the supply chain and installed tiny chips the size of a pencil tip on the motherboards built by Supermicro, which are used in data center servers across the U.S. tech industry — from Apple to Amazon. That chip can compromise data on the server, allowing China to spy on some of the world’s most wealthy and powerful companies.

Apple, Amazon and Supermicro — and the Chinese government — strenuously denied the allegations. Apple also released its own standalone statement later in the day, as did Supermicro. You don’t see that very often unless they think they have nothing to hide. You can — and should — read the statements for yourself.

Welcome to the murky world of national security reporting.

I’ve covered cybersecurity and national security for about five years, most recently at CBS, where I reported exclusively on several stories — including the U.S. government’s covert efforts to force tech companies to hand over their source code in an effort to find vulnerabilities and conduct surveillance. And last year I revealed that the National Security Agency had its fifth data breach in as many years, and classified documents showed that a government data collection program was far wider than first thought and was collecting data on U.S. citizens.

Even with this story, my gut is mixed.

While reporters on any topic and beat try to seek the truth, tapping information from the intelligence community is near impossible. For spies and diplomats, sharing classified information with anyone is illegal and can be — and is — punished with time in prison.

As a security reporter, you’re either incredibly well sourced or downright lucky. More often than not it’s the latter.

Naturally, people are skeptical of this “spy chip” story. On one side you have Bloomberg’s decades-long stellar reputation and reporting acumen, a thoroughly researched story citing more than a dozen sources — some inside the government and out — and presenting enough evidence to make a convincing case.

On the other, the sources are anonymous — likely because the information they shared wasn’t theirs to share or it was classified, putting sources at risk of legal jeopardy. But that makes accountability difficult. No reporter wants to say “a source familiar with the matter” because it weakens the story. It’s the reason reporters will attach names to spokespeople or officials, so that those in power are held accountable for their words. And the denials from the companies themselves — though transparently published in full by Bloomberg — are not bulletproof in outright rejection of the story’s claims. These statements go through legal counsel and are subject to government regulation. They become a counterbalance — turning the story from an evidence-based report into a “he said, she said” situation.

That puts the onus on the reader to judge Bloomberg’s reporting. Reporters can publish the truth all they want, but ultimately it’s down to the reader to believe it or not.

In fairness to Bloomberg, chief among Apple’s complaints is a claim that Bloomberg’s reporters were vague in their questioning. Given the magnitude of the story, you don’t want to reveal all of your cards — but still want to seek answers and clarifications without having the subject tip off another news agency — a trick sometimes employed by the government in the hope of lighter coverage.

Yet Apple — and Amazon and the other companies implicated by the report — might themselves be in the dark. Assuming there was an active espionage investigation into the alleged actions of a foreign government, you can bet that only a handful of people at these companies will be even cursorily aware of the situation. U.S. surveillance and counter-espionage laws restrict who can be told about classified information or investigations. Only those who need to be in the know are kept in a very tight loop — typically a company’s chief counsel. Often their bosses, the chief executive or president, are not told, so as to avoid making false or misleading statements to shareholders.

It’s worth casting your mind back to 2013, days after the first Edward Snowden documents were published.

In the aftermath of the disclosure of PRISM, the NSA’s data pulling program that implicated several tech companies — including Apple, but not Amazon — the companies came out fighting, vehemently denying any involvement or connection. Was it a failure of reporting? Partially, yes. But the companies also had plausible deniability by cherry picking what they rebuffed. Despite a claim by the government that PRISM had “direct access” to tech companies’ servers, the companies responded that this wasn’t true. They didn’t, however, refute indirect access — which the companies wouldn’t be allowed to say in any case.

Critics of Bloomberg’s story have argued for more information — such as more technical data on the chip, its design and its functionality. Rightfully so — it’s entirely reasonable to want to know more. Jake Williams, a former NSA hacker turned founder of Rendition Infosec, told me that the story is “credible,” but “even if it turns out to be untrue, the capability exists and you need to architect your networks to detect this.”

I was hesitant to cover this at first given the complexity of the allegations and how explosive the claims are without also seeking confirmation. That’s not easy to do in an hour when Bloomberg’s reporters have been working for the best part of a year. Assuming Bloomberg did everything right — a cover story on its magazine, no less, which would have gone through endless editing and fact-checking before going to print — the reporters likely hit a wall and had nothing more to report, and went to print.

But Bloomberg’s delivery could have been better. Just as The New York Times does — even as recently as its coverage of President Trump’s tax affairs — Bloomberg missed an opportunity to be more open and transparent in how it came to the conclusions that it did. Journalism isn’t proprietary. It should be open to as many people as possible. If you’re not transparent in how you report things, you lose readers’ trust.

That’s where the story rests on shaky ground. Admittedly, as detailed and as well-sourced as the story is, you — and I — have to put a lot of trust and faith in Bloomberg and its reporters.

And in this day and age where “fake news” is splashed around wrongly and unfairly, for the sake of journalism, my only hope is they’re not wrong.

 

from: https://techcrunch.com/2018/10/04/bloomberg-spy-chip-murky-world-national-security-reporting/

 

Supply Chain Security is the Whole Enchilada, But Who’s Willing to Pay for It?

 

From time to time, there emerge cybersecurity stories of such potential impact that they have the effect of making all other security concerns seem minuscule and trifling by comparison. Yesterday was one of those times. Bloomberg Businessweek on Thursday published a bombshell investigation alleging that Chinese cyber spies had used a U.S.-based tech firm to secretly embed tiny computer chips into electronic devices purchased and used by almost 30 different companies. There aren’t any corroborating accounts of this scoop so far, but it is both fascinating and terrifying to look at why threats to the global technology supply chain can be so difficult to detect, verify and counter.

In the context of computer and Internet security, supply chain security refers to the challenge of validating that a given piece of electronics — and by extension the software that powers those computing parts — does not include any extraneous or fraudulent components beyond what was specified by the company that paid for the production of said item.

In a nutshell, the Bloomberg story claims that San Jose, Calif. based tech giant Supermicro was somehow caught up in a plan to quietly insert a rice-sized computer chip on the circuit boards that get put into a variety of servers and electronic components purchased by major vendors, allegedly including Amazon and Apple. The chips were alleged to have spied on users of the devices and sent unspecified data back to the Chinese military.

It’s critical to note up top that Amazon, Apple and Supermicro have categorically denied most of the claims in the Bloomberg piece. That is, their positions refuting core components of the story would appear to leave little wiggle room for future backtracking on those statements. Amazon also penned a blog post that more emphatically stated their objections to the Bloomberg piece.

Nevertheless, Bloomberg reporters write that “the companies’ denials are countered by six current and former senior national security officials, who—in conversations that began during the Obama administration and continued under the Trump administration—detailed the discovery of the chips and the government’s investigation.”

The story continues:

Today, Supermicro sells more server motherboards than almost anyone else. It also dominates the $1 billion market for boards used in special-purpose computers, from MRI machines to weapons systems. Its motherboards can be found in made-to-order server setups at banks, hedge funds, cloud computing providers, and web-hosting services, among other places. Supermicro has assembly facilities in California, the Netherlands, and Taiwan, but its motherboards—its core product—are nearly all manufactured by contractors in China.

Many readers have asked for my take on this piece. I heard similar allegations earlier this year about Supermicro and tried mightily to verify them but could not. That in itself should be zero gauge of the story’s potential merit. After all, I am just one guy, whereas this is the type of scoop that usually takes entire portions of a newsroom to research, report and vet. By Bloomberg’s own account, the story took more than a year to report and write, and cites 17 anonymous sources as confirming the activity.

Most of what I have to share here is based on conversations with some clueful people over the years who would probably find themselves confined to a tiny, windowless room for an extended period if their names or quotes ever showed up in a story like this, so I will tread carefully around this subject.

The U.S. Government isn’t eager to admit it, but there has long been an unofficial inventory of tech components and vendors that are forbidden to buy from if you’re in charge of procuring products or services on behalf of the U.S. Government. Call it the “brown list,” “black list,” “entity list” or what have you, but it’s basically an indelible index of companies that are on the permanent Shit List of Uncle Sam for having been caught pulling some kind of supply chain shenanigans.

More than a decade ago when I was a reporter with The Washington Post, I heard from an extremely well-placed source that one Chinese tech company had made it onto Uncle Sam’s entity list because they sold a custom hardware component for many Internet-enabled printers that secretly made a copy of every document or image sent to the printer and forwarded that to a server allegedly controlled by hackers aligned with the Chinese government.

That example gives a whole new meaning to the term “supply chain,” doesn’t it? If Bloomberg’s reporting is accurate, that’s more or less what we’re dealing with here in Supermicro as well.

But here’s the thing: Even if you identify which technology vendors are guilty of supply-chain hacks, it can be difficult to enforce their banishment from the procurement chain. One reason is that it is often tough to tell from the brand name of a given gizmo who actually makes all the multifarious components that go into any one electronic device sold today.

Take, for instance, the problem right now with insecure Internet of Things (IoT) devices — cheapo security cameras, Internet routers and digital video recorders — sold at places like Amazon and Walmart. Many of these IoT devices have become a major security problem because they are massively insecure by default and difficult if not also impractical to secure after they are sold and put into use.

For every company in China that produces these IoT devices, there are dozens of “white label” firms that market and/or sell the core electronic components as their own. So while security researchers might identify a set of security holes in IoT products made by one company whose products are white labeled by others, actually informing consumers about which third-party products include those vulnerabilities can be extremely challenging. In some cases, a technology vendor responsible for some part of this mess may simply go out of business or close its doors and re-emerge under different names and managers.

Mind you, there is no indication anyone is purposefully engineering so many of these IoT products to be insecure; a more likely explanation is that building in more security tends to make devices considerably more expensive and slower to market. In many cases, their insecurity stems from a combination of factors: They ship with every imaginable feature turned on by default; they bundle outdated software and firmware components; and their default settings are difficult or impossible for users to change.

We don’t often hear about intentional efforts to subvert the security of the technology supply chain simply because these incidents tend to get quickly classified by the military when they are discovered. But the U.S. Congress has held multiple hearings about supply chain security challenges, and the U.S. government has taken steps on several occasions to block Chinese tech companies from doing business with the federal government and/or U.S.-based firms.

Most recently, the Pentagon banned the sale of Chinese-made ZTE and Huawei phones on military bases, according to a Defense Department directive that cites security risks posed by the devices. The U.S. Department of Commerce also has instituted a seven-year export restriction for ZTE, resulting in a ban on U.S. component makers selling to ZTE.

Still, the issue here isn’t that we can’t trust technology products made in China. Indeed there are numerous examples of other countries — including the United States and its allies — slipping their own “backdoors” into hardware and software products.

Like it or not, the vast majority of electronics are made in China, and this is unlikely to change anytime soon. The central issue is that we don’t have any other choice right now. The reason is that by nearly all accounts it would be punishingly expensive to replicate that manufacturing process here in the United States.

Even if the U.S. government and Silicon Valley somehow mustered the funding and political will to do that, insisting that products sold to U.S. consumers or the U.S. government be made only with components made here in the U.S.A. would massively drive up the cost of all forms of technology. Consumers would almost certainly balk at buying these way more expensive devices. Years of experience has shown that consumers aren’t interested in paying a huge premium for security when a comparable product with the features they want is available much more cheaply.

Indeed, noted security expert Bruce Schneier calls supply-chain security “an insurmountably hard problem.”

“Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product,” Schneier wrote in an opinion piece published earlier this year in The Washington Post. “No one wants to even think about a US-only anything; prices would multiply many times over. We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.”

The Bloomberg piece also addresses this elephant in the room:

“The problem under discussion wasn’t just technological. It spoke to decisions made decades ago to send advanced production work to Southeast Asia. In the intervening years, low-cost Chinese manufacturing had come to underpin the business models of many of America’s largest technology companies. Early on, Apple, for instance, made many of its most sophisticated electronics domestically. Then in 1992, it closed a state-of-the-art plant for motherboard and computer assembly in Fremont, Calif., and sent much of that work overseas.

Over the decades, the security of the supply chain became an article of faith despite repeated warnings by Western officials. A belief formed that China was unlikely to jeopardize its position as workshop to the world by letting its spies meddle in its factories. That left the decision about where to build commercial systems resting largely on where capacity was greatest and cheapest. “You end up with a classic Satan’s bargain,” one former U.S. official says. “You can have less supply than you want and guarantee it’s secure, or you can have the supply you need, but there will be risk. Every organization has accepted the second proposition.”

Another huge challenge of securing the technology supply chain is that it’s quite time consuming and expensive to detect when products may have been intentionally compromised during some part of the manufacturing process. Your typical motherboard of the kind produced by a company like Supermicro can include hundreds of chips, but it only takes one hinky chip to subvert the security of the entire product.

Also, most of the U.S. government’s efforts to police the global technology supply chain seem to be focused on preventing counterfeits — not finding secretly added spying components.

Finally, it’s not clear that private industry is up to the job, either. At least not yet.

“In the three years since the briefing in McLean, no commercially viable way to detect attacks like the one on Supermicro’s motherboards has emerged—or has looked likely to emerge,” the Bloomberg story concludes. “Few companies have the resources of Apple and Amazon, and it took some luck even for them to spot the problem. ‘This stuff is at the cutting edge of the cutting edge, and there is no easy technological solution,’ one of the people present in McLean says. ‘You have to invest in things that the world wants. You cannot invest in things that the world is not ready to accept yet.’”

For my part, I try not to spin my wheels worrying about things I can’t change, and the supply chain challenges definitely fit into that category. I’ll have some more thoughts on the supply chain problem and what we can do about it in an interview to be published next week.

But for the time being, there are some things worth thinking about that can help mitigate the threat from stealthy supply chain hacks. Writing for this week’s newsletter put out by the SANS Institute, a security training company based in Bethesda, Md., editorial board member William Hugh Murray has a few provocative thoughts:

  1. Abandon the password for all but trivial applications. Steve Jobs and the ubiquitous mobile computer have lowered the cost and improved the convenience of strong authentication enough to overcome all arguments against it.
  2. Abandon the flat network. Secure and trusted communication now trump ease of any-to-any communication.
  3. Move traffic monitoring from encouraged to essential.
  4. Establish and maintain end-to-end encryption for all applications. Think TLS, VPNs, VLANs and physically segmented networks. Software Defined Networks put this within the budget of most enterprises.
  5. Abandon the convenient but dangerously permissive default access control rule of “read/write/execute” in favor of restrictive “read/execute-only” or even better, “Least privilege.” Least privilege is expensive to administer but it is effective. Our current strategy of “ship low-quality early/patch late” is proving to be ineffective and more expensive in maintenance and breaches than we could ever have imagined.
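
Murray’s last recommendation lends itself to a concrete illustration. Below is a minimal, hypothetical sketch (not from the SANS newsletter) of what a “read/execute-only by default” deployment step can look like: strip the write bits from an installed tree and start the service under an unprivileged account. The install path and account name are invented for the example.

    # least_privilege_deploy.py -- illustrative sketch only
    import os
    import stat
    import subprocess

    APP_DIR = "/opt/example-service"   # hypothetical install path
    SERVICE_USER = "svc-example"       # hypothetical unprivileged account

    def lock_down(path):
        """Strip all write bits so installed files are read/execute-only."""
        for root, dirs, files in os.walk(path):
            for name in dirs + files:
                full = os.path.join(root, name)
                mode = os.stat(full).st_mode
                os.chmod(full, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

    def run_as_unprivileged(cmd):
        """Start the service under the non-root account instead of root."""
        subprocess.run(["sudo", "-u", SERVICE_USER] + cmd, check=True)

    if __name__ == "__main__":
        lock_down(APP_DIR)
        run_as_unprivileged([os.path.join(APP_DIR, "bin", "service"), "--serve"])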

 

from: https://krebsonsecurity.com/2018/10/supply-chain-security-is-the-whole-enchilada-but-whos-willing-to-pay-for-it/

 

 

04 OCT 2018

Decoding the Chinese Super Micro super spy-chip super-scandal: What do we know – and who is telling the truth?

 

Analysis Chinese government agents sneaked spy chips into Super Micro servers used by Amazon, Apple, the US government, and about 30 other organizations, giving Beijing’s snoops access to highly sensitive data, according to a bombshell Bloomberg report today.

The story, which has been a year in the making and covers events it says happened three years ago, had a huge impact on the markets: the company at the center of the story, San Jose-based Super Micro, saw its share price drop by nearly 50 per cent; likewise Apple’s share price dropped by just under two per cent, and Amazon’s dropped by more than two per cent.

But the article has been strongly denied by the three main companies involved: Apple, Amazon, and Super Micro. Each has issued strong and seemingly unambiguous statements denying the existence and discovery of such chips or any investigation by the US intelligence services into the surveillance implants.

These statements will have gone through layers of lawyers to make sure they do not open these publicly traded corporations to lawsuits and securities fraud claims down the line. Similarly, Bloomberg employs veteran reporters and layers of editors, who check and refine stories, and has a zero tolerance for inaccuracies.

So which is true: did the Chinese government succeed in infiltrating the hardware supply chain and install spy chips in highly sensitive US systems; or did Bloomberg’s journalists go too far in their assertions? We’ll dig in.

The report

First up, the key details of the exclusive. According to the report, tiny microchips that were made to look like signal conditioning couplers were added to Super Micro data center server motherboards manufactured by sub-contractors based in China.

Those spy chips were not on the original board designs, and were secretly added after factory bosses were pressured or bribed into altering the blueprints, it is claimed. The surveillance chips, we’re told, contained enough memory and processing power to effectively backdoor the host systems so that outside agents could, say, meddle with the servers and exfiltrate information.

The Bloomberg article is not particularly technical, so a lot of us are having to guesstimate how the hack worked. From what we can tell, the spy chip was designed to look like an innocuous component on the motherboard with a few connector pins – just enough for power and a serial interface, perhaps. One version was sandwiched between the fiberglass layers of the PCB, it is claimed.

The spy chip could have been placed electrically between the baseboard management controller (BMC) and its SPI flash or serial EEPROM storage containing the BMC’s firmware. Thus, when the BMC fetched and executed its code from this memory, the spy chip would intercept the signals and modify the bitstream to inject malicious code into the BMC processor, allowing its masters to control the BMC.

The BMC is a crucial component on a server motherboard. It allows administrators to remotely monitor and repair machines, typically over a network, without having to find the box in a data center, physically pull it out of the rack, fix it, and re-rack it. The BMC and its firmware can be told to power-cycle the server, reinstall or modify the host operating system, mount additional storage containing malicious code and data, access a virtual keyboard and terminal connected to the computer, and so on. If you can reach the BMC and its software, you have total control over the box.

With the BMC compromised, it is possible the alleged spies modified the controller’s firmware and/or the host operating system and software to allow attackers to connect in or allow data to flow out. We’ve been covering BMC security issues for a while.
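
If an implant really sits between the BMC and its firmware storage, one obvious if limited check is to dump the SPI flash or serial EEPROM (with vendor utilities or a hardware programmer) and compare it against a known-good image from the manufacturer. The sketch below is a generic hash comparison, not vendor tooling, and the file names are placeholders. Note the caveat: an interposer that rewrites data only in transit would leave the stored image pristine, so a matching hash does not prove a board is clean.

    # compare_bmc_dump.py -- generic sketch, not vendor tooling
    import hashlib
    import sys

    def sha256_of(path):
        """Stream the file so a large flash dump need not fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # usage: python compare_bmc_dump.py dumped_flash.bin vendor_image.bin
        dumped, reference = sha256_of(sys.argv[1]), sha256_of(sys.argv[2])
        if dumped == reference:
            print("MATCH: dump is bit-identical to the reference image")
        else:
            print("MISMATCH: dump differs from the reference image")
            print("  dump:      " + dumped)
            print("  reference: " + reference)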

Here is Bloomberg’s layman explanation for how that snoop-chip worked: the component “manipulated the core operating instructions that tell the server what to do as data move across a motherboard… this happened at a crucial moment, as small bits of the operating system were being stored in the board’s temporary memory en route to the server’s central processor, the CPU. The implant was placed on the board in a way that allowed it to effectively edit this information queue, injecting its own code or altering the order of the instructions the CPU was meant to follow.”

There are a few things to bear in mind: one is that it should be possible to detect weird network traffic coming from the compromised machine, and another is that modifying BMC firmware on the fly to compromise the host system is non-trivial but also not impossible. Various methods of doing so have been described.
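
As a rough illustration of the first point, a server whose legitimate outbound destinations are known in advance can be checked for connections to anything else. This hypothetical sketch uses the third-party psutil package and an allow-list invented for the example; in practice this kind of monitoring is better done from a network tap or flow logs than from the possibly compromised host itself.

    # flag_unknown_destinations.py -- illustrative sketch (requires psutil)
    import psutil

    # Invented allow-list for the example; in reality it would come from the
    # server's documented dependencies (package mirrors, update servers, etc.).
    ALLOWED_REMOTE_IPS = {"10.0.0.5", "10.0.0.6"}

    def unexpected_connections():
        """Return established TCP connections to addresses not on the allow-list."""
        flagged = []
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
                if conn.raddr.ip not in ALLOWED_REMOTE_IPS:
                    flagged.append((conn.pid, conn.raddr.ip, conn.raddr.port))
        return flagged

    if __name__ == "__main__":
        for pid, ip, port in unexpected_connections():
            print("unexpected outbound connection: pid=%s -> %s:%s" % (pid, ip, port))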

“It is technically plausible,” said infosec expert and US military veteran Jake Williams in a hastily organized web conference on Thursday morning. “If I wanted to do this, this is how I’d do it.”

The BMC would be a “great place to put it,” said Williams, because the controller has access to the server’s main memory, allowing it to inject backdoor code into the host operating system kernel. From there, it could pull down second-stage spyware and execute it, assuming this doesn’t set off any firewall rules.

A third thing to consider is this: if true, a lot of effort went into this surveillance operation. It’s not the sort of thing that would be added to any Super Micro server shipping to any old company – it would be highly targeted to minimize its discovery. If you’ve bought Super Micro kit, it’s very unlikely it has a spy chip in it, we reckon, if the report is correct. Other than Apple and Amazon, the other 30 or so organizations that used allegedly compromised Super Micro boxes included a major bank and government contractors.

A fourth thing is this: why go to the bother of smuggling another chip on the board, when a chip already due to be placed in the circuitry could be tampered with during manufacture, using bribes and pressure? Why not switch the SPI flash chip with a backdoored one – one that looks identical to a legit one? Perhaps the disguised signal coupler was the best way to go.

And a fifth thing: the chip allegedly fits on a pencil tip. That it can intercept and rewrite data on the fly from SPI flash or a serial EEPROM is not impossible. However, it has to contain enough data to replace the fetched BMC firmware code, which then alters the running operating system or otherwise implements a viable backdoor. Either the chip pictured in Bloomberg’s article is incorrect and just an illustration, and the actual device is larger, or there is state-of-the-art custom semiconductor fabrication involved here.

 

One final point: you would expect corporations like Apple and Amazon to have in place systems that detect not only unexpected network traffic, but also unexpected operating system states. Alterations to the kernel and the stack of software above it should set off alarms during or after boot.
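
One simple form of that kind of check is to baseline some aspect of the operating system’s state and alert when it drifts. The toy sketch below records the set of loaded Linux kernel modules and later reports anything not in the baseline; real deployments rely on measured boot and remote attestation, and the baseline path here is just an assumption for the example.

    # module_baseline.py -- toy illustration of baselining OS state (Linux)
    import json
    import os
    import sys

    BASELINE_PATH = "/var/lib/example/module_baseline.json"  # hypothetical path

    def loaded_modules():
        """Return the set of currently loaded kernel module names."""
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    def save_baseline(modules):
        os.makedirs(os.path.dirname(BASELINE_PATH), exist_ok=True)
        with open(BASELINE_PATH, "w") as f:
            json.dump(sorted(modules), f)

    if __name__ == "__main__":
        current = loaded_modules()
        if sys.argv[1:] == ["record"]:
            save_baseline(current)
            print("baseline recorded: %d modules" % len(current))
        else:
            with open(BASELINE_PATH) as f:
                baseline = set(json.load(f))
            for name in sorted(current - baseline):
                print("module not in baseline: %s" % name)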

Bloomberg claims the chip was first noticed in 2015 in a third-party security audit of Super Micro servers that was carried out when Amazon was doing due diligence into a company called Elemental Technologies that it was thinking of acquiring. Elemental used Super Micro’s servers to do super-fast video processing.

Big problem

Amazon reported what it found to the authorities and, according to Bloomberg, that “sent a shudder” through the intelligence community because similar motherboards were in use “in Department of Defense data centers, the CIA’s drone operations, and the onboard networks of Navy warships.”

Around the same time, Apple also found the tiny chips, according to the report, “after detecting odd network activity and firmware problems.” Apple contacted the FBI and gave the agency access to the actual hardware. US intelligence agencies then tracked the hardware components backwards through the supply chain, and used their various spying programs to sift through intercepted communications, eventually ending up with a focus on four sub-contracting factories in China.

According to Bloomberg, the US intelligence agencies were then able to uncover how the seeding process worked: “Plant managers were approached by people who claimed to represent Super Micro or who held positions suggesting a connection to the government. The middlemen would request changes to the motherboards’ original designs, initially offering bribes in conjunction with their unusual requests. If that didn’t work, they threatened factory managers with inspections that could shut down their plants. Once arrangements were in place, the middlemen would organize delivery of the chips to the factories.”

This explanation seemingly passes the sniff test: it fits what we know of US intelligence agencies’ investigative approaches, their spy programs, and how the Chinese government works when interacting with private businesses.

The report then provides various forms of circumstantial evidence that adds weight to the idea that this all happened by pointing to subsequent actions of both Apple and Amazon. Apple ditched Super Micro entirely as a supplier, over the course of just a few weeks, despite planning to put in a massive order for thousands of motherboards. And Amazon sold off its Beijing data center to its local partner, Beijing Sinnet, for $300m.

 

from: https://www.theregister.co.uk/2018/10/04/supermicro_bloomberg/

 

 

07 SEP 2018

Supermicro wraps crypto-blanket around server firmware to hide it from malware injectors


 

Researchers claim to have discovered an exploitable flaw in the baseboard management controller (BMC) hardware used by Supermicro servers.

Security biz Eclypsium today said a weakness in the mechanism for updating a BMC’s firmware could be abused by an attacker to install and run malicious code that would be extremely difficult to remove.

A BMC is typically installed directly onto the motherboard of a server where it is able to directly control and manage the various hardware components of the server independent of the host and guest operating systems. It can also repair, alter, or reinstall the system software, and is remotely controlled over a network or dedicated channel by an administrator. It allows IT staff to manage, configure, and power cycle boxes from afar, which is handy for people looking after warehouses of machines.

Because BMCs operate at such a low level, they are also valuable targets for hackers.

In this case, Eclypsium says the firmware update code in Supermicro’s BMCs doesn’t bother to cryptographically verify whether or not the downloaded upgrade was issued by the manufacturer, leaving the controllers vulnerable to tampering. The bug could be exploited to execute code that would then be able to withstand OS-level antivirus tools and reinstalls.

To do this, an attacker already on the data center network, or otherwise able to access the controllers, would need to intercept the firmware download, meddle with it, and pass it on to the hardware that will then blindly install it. Alternatively, a miscreant able to eavesdrop on and fiddle with internet traffic feeding into an organization could tamper with the IT team’s BMC firmware downloads, which again would be accepted by the controller.

“We found that the BMC code responsible for processing and applying firmware updates does not perform cryptographic signature verification on the provided firmware image before accepting the update and committing it to non-volatile storage,” says Eclypsium.

“This effectively allows the attacker to load modified code onto the BMC.”

In addition to running malware code beneath the OS level, the researchers said the flaw could also be used to permanently brick the BMC or even the entire server. Even worse, a potential attack wouldn’t even necessarily require physical access to the server itself.

“Because IPMI communications can be performed over the BMC LAN interface, this update mechanism could also be exploited remotely if the attacker has been able to capture the admin password for the BMC,” Eclypsium warned.

“This requires access to the systems management network, which should be isolated and protected from the production network. However, the implicit trust of management networks and interfaces may generate a false sense of security, leading to otherwise-diligent administrators practicing password reuse for convenience.”

Fortunately, Eclypsium says it has already reported the bug to Supermicro, who responded by adding signature verification to the firmware update tool, effectively plugging this vulnerability. Admins are being advised to get in touch with their Supermicro security contacts to get the fix in place.
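
Supermicro has not published the details of its fix, but the general shape of “verify before you flash” is simple enough to sketch. The example below uses the third-party cryptography package and assumes an Ed25519 signing key invented for the illustration: the updater refuses to stage a firmware image unless a detached signature verifies against the manufacturer’s public key.

    # verify_firmware_update.py -- illustrative sketch (requires the cryptography package)
    import sys
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    def verify_image(pubkey_pem_path, image_path, signature_path):
        """Return True only if the detached signature over the image verifies."""
        with open(pubkey_pem_path, "rb") as f:
            public_key = load_pem_public_key(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, image)  # Ed25519 keys take no padding/hash args
            return True
        except InvalidSignature:
            return False

    if __name__ == "__main__":
        # usage: python verify_firmware_update.py vendor_pub.pem bmc_fw.bin bmc_fw.sig
        if verify_image(*sys.argv[1:4]):
            print("signature OK: image may be staged for flashing")
        else:
            print("signature INVALID: refusing to apply update")
            sys.exit(1)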

 

from: https://www.theregister.co.uk/2018/09/07/supermicro_bmcs_hole/

 

 

18 OCT 2018

Supply Chain Security 101: An Expert’s View

 

Earlier this month I spoke at a cybersecurity conference in Albany, N.Y. alongside Tony Sager, senior vice president and chief evangelist at the Center for Internet Security and a former bug hunter at the U.S. National Security Agency. We talked at length about many issues, including supply chain security, and I asked Sager whether he’d heard anything about rumors that Supermicro — a high tech firm in San Jose, Calif. — had allegedly inserted hardware backdoors in technology sold to a number of American companies.

The event Sager and I spoke at was prior to the publication of Bloomberg Businessweek‘s controversial story alleging that Supermicro had duped almost 30 companies into buying backdoored hardware. Sager said he hadn’t heard anything about Supermicro specifically, but we chatted at length about the challenges of policing the technology supply chain.

 


 

Below are some excerpts from our conversation. I learned quite a bit, and I hope you will, too.

Brian Krebs (BK): Do you think Uncle Sam spends enough time focusing on the supply chain security problem? It seems like a pretty big threat, but also one that is really hard to counter.

Tony Sager (TS): The federal government has been worrying about this kind of problem for decades. In the 70s and 80s, the government was more dominant in the technology industry and didn’t have this massive internationalization of the technology supply chain.

But even then there were people who saw where this was all going, and there were some pretty big government programs to look into it.

BK: Right, the Trusted Foundry program I guess is a good example.

TS: Exactly. That was an attempt to help support a U.S.-based technology industry so that we had an indigenous place to work with, and where we have only cleared people and total control over the processes and parts.

BK: Why do you think more companies aren’t insisting on producing stuff through code and hardware foundries here in the U.S.?

TS: Like a lot of things in security, the economics always win. And eventually the cost differential for offshoring parts and labor overwhelmed attempts at managing that challenge.

BK: But certainly there are some areas of computer hardware and network design where you absolutely must have far greater integrity assurance?

TS: Right, and this is how they approach things at Sandia National Laboratories [one of three national nuclear security research and development laboratories]. One of the things they’ve looked at is this whole business of whether someone might sneak something into the design of a nuclear weapon.

The basic design principle has been to assume that one person in the process may have been subverted somehow, and the whole design philosophy is built around making sure that no one person gets to sign off on what goes into a particular process, and that there is never unobserved control over any one aspect of the system. So, there are a lot of technical and procedural controls there.

But the bottom line is that doing this is really much harder [for non-nuclear electronic components] because of all the offshoring now of electronic parts, as well as the software that runs on top of that hardware.

BK: So is the government basically only interested in supply chain security so long as it affects stuff they want to buy and use?

TS: The government still has regular meetings on supply chain risk management, but there are no easy answers to this problem. The technical ability to detect something wrong has been outpaced by the ability to do something about it.

BK: Wait…what?

TS: Suppose a nation state dominates a piece of technology and in theory could plant something inside of it. The attacker in this case has a risk model, too. Yes, he could put something in the circuitry or design, but his risk of exposure also goes up.

Could I as an attacker control components that go into certain designs or products? Sure, but it’s often not very clear what the target is for that product, or how you will guarantee it gets used by your target. And there are still a limited set of bad guys who can pull that stuff off. In the past, it’s been much more lucrative for the attacker to attack the supply chain on the distribution side, to go after targeted machines in targeted markets to lessen the exposure of this activity.

BK: So targeting your attack becomes problematic if you’re not really limiting the scope of targets that get hit with compromised hardware.

TS: Yes, you can put something into everything, but all of a sudden you have this massive big data collection problem on the back end where you as the attacker have created a different kind of analysis problem. Of course, some nations have more capability than others to sift through huge amounts of data they’re collecting.

BK: Can you talk about some of the things the government has typically done to figure out whether a given technology supplier might be trying to slip in a few compromised devices among an order of many?

TS: There’s this concept of the “blind buy,” where if you think the threat vector is someone gets into my supply chain and subverts the security of individual machines or groups of machines, the government figures out a way to purchase specific systems so that no one can target them. In other words, the seller doesn’t know it’s the government who’s buying it. This is a pretty standard technique to get past this, but it’s an ongoing cat and mouse game to be sure.

BK: I know you said before this interview that you weren’t prepared to comment on the specific claims in the recent Bloomberg article, but it does seem that supply chain attacks targeting cloud providers could be very attractive for an attacker. Can you talk about how the big cloud providers could mitigate the threat of incorporating factory-compromised hardware into their operations?

TS: It’s certainly a natural place to attack, but it’s also a complicated place to attack — particularly the very nature of the cloud, which is many tenants on one machine. If you’re attacking a target with on-premise technology, that’s pretty simple. But the purpose of the cloud is to abstract machines and make more efficient use of the same resources, so that there could be many users on a given machine. So how do you target that in a supply chain attack?

BK: Is there anything about the way these cloud-based companies operate….maybe just sheer scale…that makes them perhaps uniquely more resilient to supply chain attacks vis-a-vis companies in other industries?

TS: That’s a great question. The countervailing, positive trend is that in order to get the kind of speed and scale that the Googles and Amazons and Microsofts of the world want and need, these companies are far less inclined now to just take off-the-shelf hardware and they’re actually now more inclined to build their own.

BK: Can you give some examples?

TS: There’s a fair amount of discussion among these cloud providers about commonalities — what parts of design could they cooperate on so there’s a marketplace for all of them to draw upon. And so we’re starting to see a real shift from off-the-shelf components to things that the service provider is either designing or pretty closely involved in the design, and so they can also build in security controls for that hardware. Now, if you’re counting on people to exactly implement designs, you have a different problem. But these are really complex technologies, so it’s non-trivial to insert backdoors. It gets harder and harder to hide those kinds of things.

BK: That’s interesting, given how much each of us have tied up in various cloud platforms. Are there other examples of how the cloud providers can make it harder for attackers who might seek to subvert their services through supply chain shenanigans?

TS: One factor is they’re rolling this technology out fairly regularly, and on top of that the shelf life of technology for these cloud providers is now a very small number of years. They all want faster, more efficient, powerful hardware, and a dynamic environment is much harder to attack. This actually turns out to be a very expensive problem for the attacker because it might have taken them a year to get that foothold, but in a lot of cases the short shelf life of this technology [with the cloud providers] is really raising the costs for the attackers.

When I looked at what Amazon and Google and Microsoft are pushing for it’s really a lot of horsepower going into the architecture and designs that support that service model, including the building in of more and more security right up front. Yes, they’re still making lots of use of non-U.S. made parts, but they’re really aware of that when they do. That doesn’t mean these kinds of supply chain attacks are impossible to pull off, but by the same token they don’t get easier with time.

BK: It seems to me that the majority of the government’s efforts to help secure the tech supply chain come in the form of looking for counterfeit products that might somehow wind up in tanks and ships and planes and cause problems there — as opposed to using that microscope to look at commercial technology. Do you think that’s accurate?

TS: I think that’s a fair characterization. It’s a logistical issue, and counterfeits are a related problem. Transparency is one general design philosophy. Another is accountability and traceability back to a source. There’s a buzzphrase that if you can’t build in security, then build in accountability. Basically, the notion is that you often can’t build in the best or perfect security, but if you can build in accountability and traceability, that’s a pretty powerful deterrent as well as a necessary aid.

BK: For example…?

TS: Well, there’s this emphasis on high-quality and unchangeable logging. If you can build strong accountability, so that when something goes wrong I can trace it back far enough to find who caused it, you make the problem more technically difficult for the attacker. Once I know I can trace the construction of a computer board back to a certain place, you’ve built a different kind of security challenge for the attacker. So the notion is that while you may not be able to prevent every attack, this causes the attacker different kinds of difficulties, which is good news for the defense.
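
As a minimal sketch of the “high quality and unchangeable logging” idea described above (not any specific government or vendor system), each log entry below carries a hash of the previous entry, so rewriting history is detectable and every step can be traced back to a named actor. The entry fields and example events are illustrative only.

```python
import hashlib
import json
import time

def append_entry(log, event, actor):
    """Append a log entry chained to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),   # when the event happened
        "actor": actor,      # who performed the action (traceability)
        "event": event,      # what happened
        "prev": prev_hash,   # link to the previous entry
    }
    # The hash covers the entry's contents plus the previous hash, so
    # altering any earlier entry breaks every hash that comes after it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; tampering with past entries is detected."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "board 7f3a received from supplier X", actor="receiving-dock-2")
append_entry(log, "board 7f3a installed in rack B14", actor="tech-114")
assert verify_chain(log)
```

This is the deterrent Sager is pointing at: the attacker no longer only has to avoid detection, they also have to defeat a record that points back to whoever touched the hardware.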

BK: So is supply chain security more of a physical security or cybersecurity problem?

TS: We like to think we’re fighting in cyber all the time, but often that’s not true. If you can force attackers to subvert your supply chain, then first off you take away the mid-level criminal elements, and you force the attackers to do things that are outside the cyber domain, such as setting up front companies and bribing humans. And in those domains — particularly the human dimension — we have other mechanisms for detecting that activity.

BK: What role does network monitoring play here? I’m hearing a lot right now from tech experts who say organizations should be able to detect supply chain compromises because, at some point, they should see truckloads of data leaving their networks if they’re doing network monitoring right. What do you think about the role of effective network monitoring in fighting potential supply chain attacks?

TS: I’m not so optimistic about that. It’s too easy to hide. Monitoring is about finding anomalies, either in the volume or the type of traffic you’d expect to see, and it’s a hard problem category. For the US government, with perimeter monitoring there’s always a trade-off between the ability to monitor traffic and the natural movement of the entire Internet toward encryption by default. So there are a lot of things we don’t get to touch because of tunneling and encryption, and the Department of Defense in particular has really struggled with this.

Now obviously what you can do is man-in-the-middle traffic with proxies and inspect everything there, and the perimeter of the network is ideally where you’d like to do that, but the speed and volume of the traffic is often just too great.
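
Sager is skeptical that perimeter monitoring will catch a careful attacker, but the basic mechanism he describes — flagging anomalies in traffic volume — is easy to illustrate. The sketch below assumes per-host daily outbound byte counts are already being collected (which is the hard part in practice); the host names, baseline window, and threshold are all made up for illustration.

```python
from statistics import mean, stdev

def volume_anomalies(history, today, z_threshold=3.0):
    """Flag hosts whose outbound volume today is far above their own baseline.

    history: {host: [daily outbound bytes for the last N days]}
    today:   {host: outbound bytes observed so far today}
    """
    alerts = []
    for host, past in history.items():
        if len(past) < 2:
            continue  # not enough baseline to judge this host
        mu, sigma = mean(past), stdev(past)
        observed = today.get(host, 0)
        if sigma:
            z = (observed - mu) / sigma
        else:
            # Perfectly flat history: any increase at all counts as anomalous.
            z = float("inf") if observed > mu else 0.0
        if z > z_threshold:
            alerts.append((host, observed, mu, round(z, 1)))
    return alerts

history = {"db-server-3":    [2.1e9, 1.9e9, 2.0e9, 2.2e9],
           "web-frontend-1": [9.0e9, 8.5e9, 9.4e9, 9.1e9]}
today = {"db-server-3": 9.8e9, "web-frontend-1": 9.2e9}
print(volume_anomalies(history, today))   # db-server-3 stands out
```

As Sager notes, this only catches loud exfiltration; encrypted tunnels and low-and-slow transfers sit comfortably inside normal variance.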

BK: Isn’t the government already doing this with the “trusted internet connections” or Einstein program, where they consolidate all this traffic at the gateways and try to inspect what’s going in and out?

TS: Yes, so they’re creating a highest-volume, highest-speed problem. To monitor that without interrupting traffic you need bleeding-edge technology, and then you have to handle a ton of it that is already encrypted. If you’re going to try to proxy that, break it out, do the inspection and then re-encrypt the data, a lot of the time that’s hard to keep up with technically and speed-wise.

BK: Does that mean it’s a waste of time to do this monitoring at the perimeter?

TS: No. The initial foothold by the attacker could easily have been via a legitimate tunnel, where someone took over an account inside the enterprise. The real meaning of a particular stream of packets coming through the perimeter may not be knowable until that thing gets through and executes. So you can’t solve every problem at the perimeter. Some things only become obvious, and only make sense to catch, when they open up at the desktop.

BK: Do you see any parallels between the challenges of securing the supply chain and the challenges of getting companies to secure Internet of Things (IoT) devices so that they don’t continue to become a national security threat for just about any critical infrastructure, such as with DDoS attacks like we’ve seen over the past few years?

TS: Absolutely, and again the economics of security are so compelling. With IoT we have the cheapest possible parts and devices with a relatively short life span, and it’s interesting to hear people talking about regulation around IoT. But a lot of the discussion I’ve heard recently does not revolve around top-down solutions; it’s more about what we can learn from places like the Food and Drug Administration and its certification of medical devices. In other words, are there known characteristics that we would like to see these devices put through before they become, in some generic sense, safe?

BK: How much of addressing the IoT and supply chain problems is about being able to look at the code that powers the hardware and finding the vulnerabilities there? Where does accountability come in?

TS: I used to look at other peoples’ software for a living and find zero-day bugs. What I realized was that our ability to find things as human beings with limited technology was never going to solve the problem. The deterrent effect that people believed someone was inspecting their software usually got more positive results than the actual looking. If they were going to make a mistake – deliberately or otherwise — they would have to work hard at it and if there was some method of transparency, us finding the one or two and making a big deal of it when we did was often enough of a deterrent.

BK: Sounds like an approach that would work well to help us feel better about the security and code inside of these election machines that have become the subject of so much intense scrutiny of late.

TS: We’re definitely going through this now in thinking about the election devices. We’re kind of going through this classic argument where hackers are carrying the noble flag of truth and vendors are hunkering down on liability. Some of the vendors seem willing to do something different, but at the same time they’re kind of trapped now by the good intentions of the open vulnerability community.

The question is, how do we bring some level of transparency to the process while probably stopping short of vendors exposing their trade secrets and code to the world? What can they demonstrate about the cost effectiveness of development practices that scrub out some of the problems before they get out there? This is important, because elections need one outcome: public confidence in the outcome. And of course, one way to get that is through greater transparency.

BK: What, if anything, are the takeaways for the average user here? With the proliferation of IoT devices in consumer homes, is there any hope that we’ll see more tools that help people gain more control over how these systems are behaving on the local network?

TS: Most of [the supply chain problem] is outside the individual’s ability to do anything about, and beyond the ability of small businesses to grapple with. In fact, it’s outside the capacity of the average company to figure out on its own. We do need more national focus on the problem.

It’s now almost impossible for consumers to buy electronics that aren’t Internet-connected. The chipsets are so cheap, and the ability for every device to have its own Wi-Fi chip built in means that [manufacturers] are adding them whether it makes sense to or not. I think we’ll see more security coming into the marketplace to manage devices. So, for example, you might define rules that say appliances can talk to the manufacturer only.

We’re going to see more easy-to-use tools available to consumers to help manage all these devices. We’re already starting to see the fight for dominance in this space at the home gateway and network management level. As these devices get more numerous and complicated, there will be more consumer-oriented ways to manage them. Some of the broadband providers already offer services that will tell you what devices are operating in your home and let you control when those devices are allowed to talk to the Internet.
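
As a rough illustration of the “appliances can talk to the manufacturer only” rule Sager imagines, here is a sketch that turns a per-device allowlist into iptables rules a Linux-based home gateway could apply. The device addresses, manufacturer hostnames, and chain name are all hypothetical; commercial gateways and broadband-provider services expose this kind of control through their own interfaces rather than raw firewall rules.

```python
# Sketch: generate iptables FORWARD rules so each IoT device can reach only
# its manufacturer's endpoints. All addresses and hostnames are made up.
DEVICES = {
    # LAN IP of the device -> hostnames it is allowed to reach
    "192.168.1.40": ["updates.example-thermostat.com"],
    "192.168.1.41": ["cloud.example-camera.net", "ntp.example-camera.net"],
}

def allowlist_rules(devices, chain="IOT_ALLOWLIST"):
    rules = [f"iptables -N {chain}"]
    for device_ip, endpoints in devices.items():
        for host in endpoints:
            # iptables resolves a hostname once, when the rule is added, so a
            # real gateway would refresh these rules as DNS answers change.
            rules.append(f"iptables -A {chain} -s {device_ip} -d {host} -j ACCEPT")
        # Anything else this device tries to reach is dropped.
        rules.append(f"iptables -A {chain} -s {device_ip} -j DROP")
    # Route LAN-to-WAN traffic through the allowlist chain.
    rules.append(f"iptables -I FORWARD -j {chain}")
    return rules

for rule in allowlist_rules(DEVICES):
    print(rule)
```

Devices not in the allowlist are unaffected; only the listed appliances get the per-device ACCEPT-then-DROP treatment.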

 

Since Bloomberg’s story broke, the U.S. Department of Homeland Security and the National Cyber Security Centre, a unit of Britain’s eavesdropping agency GCHQ, have both come out with statements saying they have no reason to doubt the vehement denials by Amazon and Apple that they were affected by any incidents involving Supermicro’s supply chain security. Apple also penned a strongly worded letter to lawmakers denying the claims in the story.

Meanwhile, Bloomberg reporters published a follow-up story citing new, on-the-record evidence to back up claims made in their original story.

 

from: https://krebsonsecurity.com/2018/10/supply-chain-security-101-an-experts-view/

 

 

09 OCT 2018 – Bloomberg Follow-Up

New Evidence of Hacked Supermicro Hardware Found in U.S. Telecom

The discovery shows that China continues to sabotage critical technology components bound for America.

[Photo: an implant built into the server’s Ethernet connector]

 

A major U.S. telecommunications company discovered manipulated hardware from Super Micro Computer Inc. in its network and removed it in August, fresh evidence of tampering in China of critical technology components bound for the U.S., according to a security expert working for the telecom company.

 


The security expert, Yossi Appleboum, provided documents, analysis and other evidence of the discovery following the publication of an investigative report in Bloomberg Businessweek that detailed how China’s intelligence services had ordered subcontractors to plant malicious chips in Supermicro server motherboards over a two-year period ending in 2015.

Appleboum previously worked in the technology unit of the Israeli Army Intelligence Corps and is now co-chief executive officer of Sepio Systems in Gaithersburg, Maryland. His firm specializes in hardware security and was hired to scan several large data centers belonging to the telecommunications company. Bloomberg is not identifying the company due to Appleboum’s nondisclosure agreement with the client. Unusual communications from a Supermicro server and a subsequent physical inspection revealed an implant built into the server’s Ethernet connector, a component that’s used to attach network cables to the computer, Appleboum said.

The executive said he has seen similar manipulations of different vendors’ computer hardware made by contractors in China, not just products from Supermicro. “Supermicro is a victim — so is everyone else,” he said. Appleboum said his concern is that there are countless points in the supply chain in China where manipulations can be introduced, and deducing them can in many cases be impossible. “That’s the problem with the Chinese supply chain,” he said.

Supermicro, based in San Jose, California, gave this statement: “The security of our customers and the integrity of our products are core to our business and our company values. We take care to secure the integrity of our products throughout the manufacturing process, and supply chain security is an important topic of discussion for our industry. We still have no knowledge of any unauthorized components and have not been informed by any customer that such components have been found. We are dismayed that Bloomberg would give us only limited information, no documentation, and half a day to respond to these new allegations.”

Bloomberg News first contacted Supermicro for comment on this story on Monday at 9:23 a.m. Eastern time and gave the company 24 hours to respond.

Supermicro said after the earlier story that it “strongly refutes” reports that servers it sold to customers contained malicious microchips. China’s embassy in Washington did not return a request for comment Monday. In response to the earlier Bloomberg Businessweek investigation, China’s Ministry of Foreign Affairs didn’t directly address questions about the manipulation of Supermicro servers but said supply chain security is “an issue of common concern, and China is also a victim.”

Supermicro shares plunged 41 percent last Thursday, the most since it became a public company in 2007, following the Bloomberg Businessweek revelations about the hacked servers. They fell as much as 27 percent on Tuesday after the latest story.

The more recent manipulation is different from the one described in the Bloomberg Businessweek report last week, but it shares key characteristics: They’re both designed to give attackers invisible access to data on a computer network in which the server is installed; and the alterations were found to have been made at the factory as the motherboard was being produced by a Supermicro subcontractor in China.

Based on his inspection of the device, Appleboum determined that the telecom company’s server was modified at the factory where it was manufactured. He said that he was told by Western intelligence contacts that the device was made at a Supermicro subcontractor factory in Guangzhou, a port city in southeastern China. Guangzhou is 90 miles upstream from Shenzhen, dubbed the “Silicon Valley of Hardware,” and home to giants such as Tencent Holdings Ltd. and Huawei Technologies Co. Ltd.

The tampered hardware was found in a facility that had large numbers of Supermicro servers, and the telecommunication company’s technicians couldn’t answer what kind of data was pulsing through the infected one, said Appleboum, who accompanied them for a visual inspection of the machine. It’s not clear if the telecommunications company contacted the FBI about the discovery. An FBI spokeswoman declined to comment on whether it was aware of the finding.

AT&T Inc. spokesman Fletcher Cook said, “These devices are not part of our network, and we are not affected.” A Verizon Communications Inc. spokesman said “we’re not affected.”

“Sprint does not have Supermicro equipment deployed in our network,” said Lisa Belot, a Sprint spokeswoman. T-Mobile U.S. Inc. didn’t respond to requests for comment.

Sepio Systems’ board includes Chairman Tamir Pardo, former director of the Israeli Mossad, the national defense agency of Israel, and its advisory board includes Robert Bigman, former chief information security officer of the U.S. Central Intelligence Agency.

U.S. communications networks are an important target of foreign intelligence agencies, because data from millions of mobile phones, computers, and other devices pass through their systems. Hardware implants are key tools used to create covert openings into those networks, perform reconnaissance and hunt for corporate intellectual property or government secrets.

The manipulation of the Ethernet connector appeared to be similar to a method also used by the U.S. National Security Agency, details of which were leaked in 2013. In e-mails, Appleboum and his team refer to the implant as their “old friend,” because he said they had previously seen several variations in investigations of hardware made by other companies manufacturing in China.

In Bloomberg Businessweek’s report, one official said investigators found that the Chinese infiltration through Supermicro reached almost 30 companies, including Amazon.com Inc. and Apple Inc. Both Amazon and Apple also disputed the findings. The U.S. Department of Homeland Security said it has “no reason to doubt” the companies’ denials of Bloomberg Businessweek’s reporting.

People familiar with the federal investigation into the 2014-2015 attacks say that it is being led by the FBI’s cyber and counterintelligence teams, and that DHS may not have been involved. Counterintelligence investigations are among the FBI’s most closely held and few officials and agencies outside of those units are briefed on the existence of those investigations.

Appleboum said that he’s consulted with intelligence agencies outside the U.S. that have told him they’ve been tracking the manipulation of Supermicro hardware, and the hardware of other companies, for some time. 

In response to the Bloomberg Businessweek story, the Norwegian National Security Authority said last week that it had been “aware of an issue” connected to Supermicro products since June.  Trond Ovstedal, a spokesman for the agency, later added to that statement, saying the agency was alerted to the concerns by someone who had heard of them via Bloomberg’s news gathering efforts. In its initial statement, the authority couldn’t confirm the details of Bloomberg’s reporting, but said that it has recently been in dialogue with partners over the issue.

Hardware manipulation is extremely difficult to detect, which is why intelligence agencies invest billions of dollars in such sabotage. The U.S. is known to have extensive programs to seed technology heading to foreign countries with spy implants, based on revelations from former CIA employee Edward Snowden. But China appears to be aggressively deploying its own versions, which take advantage of the grip the country has over global technology manufacturing.

Three security experts who have analyzed foreign hardware implants for the U.S. Department of Defense confirmed that the way Sepio’s software detected the implant is sound. One of the few ways to identify suspicious hardware is by looking at the lowest levels of network traffic. Those include not only normal network transmissions, but also analog signals — such as power consumption — that can indicate the presence of a covert piece of hardware.

In the case of the telecommunications company, Sepio’s technology detected that the tampered Supermicro server actually appeared on the network as two devices in one. The legitimate server was communicating one way, and the implant another, but all the traffic appeared to be coming from the same trusted server, which allowed it to pass through security filters.  
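
Bloomberg doesn’t detail how Sepio’s software made this determination beyond the “two devices in one” description. As a loose analog only, and not Sepio’s actual technique, the sketch below passively watches ARP traffic and flags any IP address that shows up with more than one MAC address. It assumes the scapy library and an interface name of "eth0", requires root privileges, and will also fire on legitimate events such as DHCP address reuse or router failover.

```python
# Sketch: flag IP addresses seen with more than one MAC address on the local
# segment, a crude stand-in for a server that answers as "two devices in one".
from collections import defaultdict
from scapy.all import ARP, sniff

seen = defaultdict(set)  # ip address -> set of MAC addresses observed

def check(pkt):
    if ARP in pkt:
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        seen[ip].add(mac)
        if len(seen[ip]) > 1:
            print(f"ALERT: {ip} seen with multiple MACs: {sorted(seen[ip])}")

# Passively sniff ARP traffic; no packets are sent onto the network.
sniff(iface="eth0", filter="arp", prn=check, store=0)
```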

Appleboum said one key sign of the implant is that the manipulated Ethernet connector has metal sides instead of the usual plastic ones. The metal is necessary to diffuse heat from the chip hidden inside, which acts like a mini computer. “The module looks really innocent, high quality and ‘original’ but it was added as part of a supply chain attack,” he said.

The goal of hardware implants is to establish a covert staging area within sensitive networks, and that’s what Appleboum and his team concluded in this case. They decided it represented a serious security breach, along with multiple rogue electronics also detected on the network, and alerted the client’s security team in August, which then removed them for analysis. Once the implant was identified and the server removed, Sepio’s team was not able to perform further analysis on the chip.

The threat from hardware implants “is very real,” said Sean Kanuck, who until 2016 was the top cyber official inside the Office of the Director of National Intelligence. He’s now director of future conflict and cyber security for the International Institute for Strategic Studies in Washington. Hardware implants can give attackers power that software attacks don’t.

“Manufacturers that overlook this concern are ignoring a potentially serious problem,” Kanuck said. “Capable cyber actors — like the Chinese intelligence and security services — can access the IT supply chain at multiple points to create advanced and persistent subversions.”

One of the keys to any successful hardware attack is altering components that have an ample power supply to them, a daunting challenge the deeper into a motherboard you go. That’s why peripherals such as keyboards and mice are also perennial favorites for intelligence agencies to target, Appleboum said.

In the wake of Bloomberg’s reporting on the attack against Supermicro products, security experts say that teams around the world, from large banks and cloud computing providers to small research labs and startups, are analyzing their servers and other hardware for modifications, a stark change from normal practices. Their findings won’t necessarily be made public, since hardware manipulation is typically designed to access government and corporate secrets, rather than consumer data.

National security experts say a key problem is that, in a cybersecurity industry approaching $100 billion in revenue annually, very little of that has been spent on inspecting hardware for tampering. That’s allowed intelligence agencies around the world to work relatively unimpeded, with China holding a key advantage.

“For China, these efforts are all-encompassing,” said Tony Lawrence, CEO of VOR Technology, a Columbia, Maryland-based contractor to the intelligence community. “There is no way for us to identify the gravity or the size of these exploits — we don’t know until we find some. It could be all over the place — it could be anything coming out of China. The unknown is what gets you and that’s where we are now. We don’t know the level of exploits within our own systems.”

 

from: https://www.bloomberg.com/news/articles/2018-10-09/new-evidence-of-hacked-supermicro-hardware-found-in-u-s-telecom

 

 

 

 
