Bitcoin consumes more electricity than Switzerland

[just quoting the statistics … the power use discussion is more complex]

According to the Cambridge Bitcoin Electricity Consumption Index, Bitcoin currently consumes around 60 terawatt-hours of electricity per year. That is more than, for example, Switzerland or Ireland.

Of the 219 countries and territories listed in the CIA Factbook, only 42 consume more than the digital currency.

Germany ranks sixth in this list with 537 terawatt-hours, behind China, the USA, India, Japan and Russia. All of these figures are, it should be noted, estimates.

In Bitcoin’s case there is considerable uncertainty about the actual energy demand: the analysts currently put the lower bound at 23 terawatt-hours and the upper bound at 183.





Difficulty & Hashrate Records: It’s Now Harder to Mine Bitcoin Than Ever. 70 EH/s is Coming.

Bitcoin mining has become more competitive than ever.

Bitcoin mining difficulty – the measure of how hard it is to earn mining rewards in the world’s largest cryptocurrency by market cap – has reached a new record high above 7.93 trillion. That’s a seven percent jump from the 7.45 trillion record set during the recent two-week adjustment cycle, which was the highest since October 2018.

Bitcoin is designed to adjust its mining difficulty every 2,016 blocks (approximately 14 days), based on the amount of computing power deployed to the network. This is done to ensure the block production interval at the next period will remain constant at around every 10 minutes. When there are fewer machines racing to solve math problems to earn the next payout of newly created bitcoin, difficulty falls; when there are more computers in the game, it rises.


Right now the machines are humming furiously. Bitcoin miners across the world have been performing calculations at an average of 56.77 quintillion hashes per second (EH/s) over the last 14 days to compete for mining rewards on the world’s first blockchain, according to mining pool data. The same data further indicates that the average bitcoin mining hash rate in the last 24-hour and three-day periods was 59.58 EH/s and 59.70 EH/s, respectively, even higher than the average 56.77 EH/s from May 15 to June 27, or any 14-day average in the network’s history.

Similarly, data from another tracking site shows the aggregate of bitcoin computing power was around 66 EH/s as of June 22, surpassing last year’s record high of 61.86 EH/s tracked by the same site, and more than double the level of December 2018, when the hash rate dropped to as low as 31 EH/s amid bitcoin’s price fall.

Assuming all such additional computing power has come from more widely used equipment such as the AntMiner S9, which performs calculations at an average rate of 14 terahashes per second (TH/s), that suggests more than 2 million units of mining equipment may have been switched on over the past several months. (1 EH/s equals 1 million TH/s.)



The increase in capacity is also in line with bitcoin’s price jump over the first half of 2019, which caused the price of second-hand mining equipment in China to double and also juiced demand for new machines. Estimates further suggest the bitcoin mining difficulty will jump by another seven percent at the beginning of the next adjustment cycle, which would be the first time bitcoin mining difficulty crosses the eight trillion threshold.

Delayed plugging in

Such computing interest comes at a time when mining farms in China, especially in the country’s mountainous southwest, have been gradually plugging in equipment as the rainy summer approaches.

According to a report published by blockchain research firm CoinShares, as of earlier this month, 50 percent of the global bitcoin computing power was located in China’s Sichuan province.

However, it’s important to note that this year the arrival of the rainy season in China’s southwest has been delayed by nearly a month compared to previous years. As a result, some local mining farms have been running at less than half of their total capacity over the past month.

Xun Zheng, CEO of Chengdu-based mining farm operator Hashage, which owns several facilities across China’s southwestern provinces, said there had been no rain in the area for over 20 days since early May, which was “unusual.”

“In the past years, it usually starts raining continuously throughout May so [hydropower plants] normally will have enough water resources by early June,” he said.

As a result, in early June his firm was operating at only 40 percent of its capacity of more than 200,000 hosted ASIC miners. But as the rain has gradually arrived over the past two weeks, that proportion has climbed to over 60 percent.

Mining farms in China previously estimated that the total hash rate during the peak of this year’s rainy season, around August, could break the 70 EH/s threshold. That would mean roughly another 300,000 mining machines being activated, assuming all are AntMiner S9s or similar models.

Those waiting to be switched on also include new capital in the sector, such as Shanghai-based Fundamental Labs, a blockchain fund that has invested $44 million in top-of-the-line mining equipment to be activated in June.





Buried in Facebook’s Libra White Paper, a Digital Identity Bombshell: “I am over 18” credential … and more

The Takeaway

  • Facebook’s Libra white paper includes a brief but potentially seismic nod to digital identity standards.
  • With 2 billion users worldwide, Facebook may be able to succeed where others have failed in jump-starting a globally accepted digital ID.
  • Some identity experts say this is even more important than the cryptocurrency, but others question how much control Libra would give users and find its approach overbearing.
  • see below: The Libra Technical Whitepaper
  • see below: Libra White Paper Shows How Facebook Borrowed From Bitcoin and Ethereum
  • see below: The Libra Move Programming Language
  • see below: A Deep-Dive into Libra Move

Buried in Facebook’s Libra white paper are two short sentences hinting that the project’s ambitions go even further than bringing billions of people into the global financial system.

More than launching a price-stable cryptocurrency for the masses, Libra could be aiming to change the way people trust each other on the internet.

At the top of page nine, in a section describing the consortium that will govern the Libra coin, the white paper states:

“An additional goal of the association is to develop and promote an open identity standard. We believe that decentralized and portable digital identity is a prerequisite to financial inclusion and competition.”

That’s all the paper has to say on the topic of identity, perhaps explaining why the brief mention of such a foundational issue for 21st-century commerce escaped widespread notice despite all the hype over the document itself.

But to some observers, the line dropped like a bomb.

Dave Birch, director of Consult Hyperion and the author of books on digital identity and bitcoin, flagged these lines as “the most interesting” in the paper.

Smoothing pathways on the internet using identity is a bigger deal to many people than a putative cryptocurrency, Birch argued, adding:

“There are no throwaway remarks in a Facebook white paper that has taken a year to put together. It’s in there for a reason. [Facebook] are actually going to try and fix the identity problem.”

A Facebook spokeswoman said this week that the company had nothing to add about identity beyond what’s in the white paper.

Who are you?

It’s a problem almost as old as the internet itself. As the classic “New Yorker” cartoon put it, “on the internet, nobody knows you’re a dog.”

In such an environment, businesses need to guard against fraud, but the copious amounts of personal data consumers must share to prove they are who they say they are leave them vulnerable to identity theft and spying.

Fixing this problem means finding a way to realize the sort of credentials an individual holds in their physical wallet as a verifiable digital version that can be trusted across the internet. And for many technologists who have thought long and hard about identity, the solution must be “self-sovereign,” or controlled by the individual.

Birch, who has long seen the potential of social networks as natural springboards for managing digital identity, described a scenario where a user’s “I am over 18” credential (rather than their exact birthdate) is needed to log into a dating site.

This could be accessed through Libra’s cryptocurrency wallet, Calibra, via one of its partners (Mastercard, for example) and its two-factor authentication process. A cryptographic credential containing no personally identifiable information, but stating that this person is over 18, is then sent back to Calibra and can be presented to the dating site at login.

While others have proposed similar arrangements (sometimes involving blockchains), none had the reach of Facebook, with its 2.38 billion users worldwide.

If Libra were “to drift in the direction of self-sovereign solutions, Facebook’s endorsement of that approach might make more of an impact on the market than, say, uPort or Evernym might have done,” Birch said, referring to two such blockchain ID startups.

And despite its reputation as the ultimate Peeping Tom, Facebook has hinted at such aspirations before. In February, while Libra was still under wraps, CEO Mark Zuckerberg said he was investigating blockchain’s potential to allow internet users to log in to various services via one set of credentials without relying on third parties.

Standard setting

Stepping back, technologists have been trying to address the challenge of identity for more than a decade by establishing open standards. In the same way that URLs, for example, open webpages anywhere on the internet, standards are also needed to ensure digital attributes about an individual can be universally issued and verified.

The OAuth standard, for example, is what lets you log into websites through a third-party service like Facebook without sharing a password. More recently, such work under the auspices of the World Wide Web Consortium (W3C) has included things like Decentralized Identifiers (DIDs) and the verifiable credentials standard, both meant to enable self-sovereign digital identity.

Some veterans of this field were taken aback by the suggestion that the Libra Association (a group of 30 or so companies, expected to reach 100 or more) would develop an open identity standard.

“That’s very world domination-ish of them,” said Kaliya Young, a co-author of “A Comprehensive Guide to Self Sovereign Identity” and co-founder of the Internet Identity Workshop. “Some of us have been working on that problem for a really long time. You already have a set of open standards for verifiable credentials that are basically done and working.”

Young pointed out that “unilaterally declaring” an open standard belies the process of going through standards development with an open community, adding that all the people working on identity standards are connected to one another in reaching a common goal.

“That work is being led by a community of people deeply committed to there being no one company owning it in the end, because identity is too big to be owned, just like the web is too big to be owned,” she said.

(Indeed, Facebook was previously said to have rebuffed an invitation to participate in the DID project alongside Microsoft.)

Phil Windley, chair at the Sovrin Foundation, which contributed the codebase to the Hyperledger Indy blockchain ID project, acknowledged the risk of parsing two sentences in Libra’s paper too finely. But he made the point that “decentralized” and “portable” (Facebook’s words) are not exactly the same as self-sovereign.

“Decentralized” could simply mean a user’s identity data – their attributes and identifiers – are spread among nodes that are run on the Libra blockchain, said Windley. This doesn’t imply the user necessarily has control of them. Likewise, “portable” just means credentials can be moved from one place to another but doesn’t necessarily mean you get a say in how they are used.

Windley told CoinDesk:

“People often use ‘decentralized’ as an unalloyed good and just assume that it means everything is going to be great. That could be what they are doing here – just using ‘decentralized’ as a synonym for ‘awesome.’”

Joining the dots

That said, Windley was respectful about the scale of Libra’s vision, which he suspects is much bigger than dealing with know-your-customer (KYC) checks and the regulation around building a global permissioned currency platform.

He pointed to the paper’s authors, who include firms like Mastercard and Kiva that have thought very hard about digital identity. (Neither company would comment on Libra’s approach to digital identity.)

“I suspect given Libra’s goal of financial inclusion, they are probably thinking about it bigger than just authentication and authorization for a few narrow purposes,” said Windley. “I think there is enough there (e.g. the smart contract language) to believe a stablecoin is just one thing that they envisage using Libra for.”

In the absence of any detail on what might comprise a decentralized identity standard from Libra’s perspective, some dots can be joined by examining the recent work of George Danezis and his co-founders at Chainspace, a startup acquired by Facebook in May.

A paper introducing a “selective disclosure credential scheme” called Coconut explains how a system of smart contracts (computer programs that run on top of blockchains) could “issue user credentials depending on the state of the blockchain, or attest some claim about a user operating through the contract – such as their identity, attributes, or even the balance of their wallet.”

The Coconut protocol goes on to describe how credentials can be jointly issued in a decentralized manner by a group of “mutually distrusting authorities.” These credentials cannot be forged by users or by a group of corrupt authorities, and are also “re-randomized” prior to being presented for verification, to further protect user privacy. Unlike some computationally hungry proving schemes, this is done in a matter of milliseconds, making it highly scalable.

Returning to the question of standards, Birch said W3C, DIDs and verifiable credentials might be the right option for Libra. But whether it’s that or something else, whatever they choose would effectively end up being a standard, he said, concluding:

“And you could argue, is that necessarily a bad thing? I mean what happens if they come up with a good standard for identity and attributes and so on and then other people can use it, e.g. banks would be one obvious example.”




Libra White Paper Shows How Facebook Borrowed From Bitcoin and Ethereum

With the long-awaited Libra white paper, Facebook is showing off its blockchain smarts, and making a bid for crypto credibility.

Released Tuesday morning, the 29-page paper describes a protocol designed to evolve as it powers a new global currency. More than a year in the making, the document opens by trumpeting the new blockchain’s ambitious goal:

“The Libra Blockchain is a decentralized, programmable database designed to support a low-volatility cryptocurrency that will have the ability to serve as an efficient medium of exchange for billions of people around the world.”

As a first step toward achieving the “decentralized” part, the protocol has been turned over to a new organization, the Libra Association, whose members will hold separate tokens allowing them on-chain voting rights to govern decisions about Libra.

“Over time, it’s designed to transition the node membership from these founding members who have a stake in the creation of the ecosystem to people who hold Libra and have a stake in the ecosystem as a whole,” Ben Maurer, Facebook’s blockchain technical lead, told CoinDesk in an exclusive interview.

In short, Libra is designed to be a high throughput, global blockchain, one that’s built with programmable money in mind but limits how much users can do initially as it evolves from prototype to a robust ecosystem.

Unlike many other blockchains, Libra seems laser-focused on payments and other financial use cases for consumers.

But the white paper itself seems geared to demonstrate both Facebook’s proposed advances to the science of distributed consensus and its appreciation for what has been built so far.

Indeed, over the last several months, many sources told CoinDesk they had visited Facebook to share their perspective on decentralized technology. The company has done a lot of homework.

And now it has created a new language for writing commands on its blockchain, called Move, and opened its software to public inspection.

“To validate the design of the Libra protocol, we have built an open-source prototype implementation — Libra Core — in anticipation of a global collaborative effort to advance this new ecosystem,” the white paper states.

“It’d be sort of presumptuous for us to say we’re creating an open environment and then say, ‘Well, but we’ve set everything in stone,’” Maurer told CoinDesk. “It’s a paper that requests feedback.”

Mix and match

Libra’s designers have picked what they see as the best features of existing blockchains while providing their own updates and refinements.

1. Like bitcoin, there’s no real identity on the blockchain.

From the perspective of the blockchain itself, you don’t exist. Only public-private key pairs exist. The white paper states: “The Libra protocol does not link accounts to a real-world identity. A user is free to create multiple accounts by generating multiple key-pairs. Accounts controlled by the same user have no inherent link to each other.”

2. Like Hyperledger, it’s permissioned (at least to start).

Initially, the consensus structure for Libra will be dozens of organizations that will run nodes on the network, validating transactions. Each time consensus is voted on for a new set of transactions, a leader will be designated at random to count up the votes.

Libra opts to rely on familiarity rather than democracy to choose the right entities to establish consensus in the early days. “Founding Members are organizations with established reputations, making it unlikely that they would act maliciously,” the white paper states. These entities range from traditional payment networks (Mastercard, Visa) to internet and gig-economy giants (eBay, Lyft) to blockchain natives (Xapo) to VCs (Andreessen Horowitz, Thrive Capital).

3. Like tezos, it comes with on-chain governance.

The only entities that can vote at the outset are Founding Members. These members hold Libra Investment Tokens that give them voting rights on the network, where they can make decisions about managing the reserve and letting new validators join the network.

The governance structure is built into the Move software from the start, and like Tezos it is subject to revision over time. Updates will be essential as it adds members and evolves from what’s more like a delegated proof-of-stake (DPoS) system (such as EOS or steem) to a fully decentralized proof-of-stake ecosystem.

4. Like ethereum, it makes currency programmable.

The white paper defines a number of interesting ways in which users can interact with the core software and data structure. For example, anyone can make a non-voting replica of the blockchain or run various read commands associated with objects (such as smart contracts or a set of wallets) defined on Libra. Crucially, Libra’s designers seem to agree with ethereum’s that running code should have a cost, so all operations require payment of Libra as gas in order to run.

Unlike ethereum, Libra makes two important changes in its smart contracts. First, it limits how much users can do on the protocol at first (the full breadth of Move’s features is not yet open). Second, it breaks data out from software, so one smart contract (what Move refers to as a “module”) can be directed at any pool of assets, which Move calls “resources.” So one set of code can be used on any number of wallets or collections of assets.

5. Also like ethereum, it thinks proof-of-stake is the future, but it is also not ready yet.

“Over time, membership eligibility will shift to become completely open and based only on the member’s holdings of Libra,” the white paper promises, describing a path to real permissionless-ness.

Meanwhile, the paper dismisses the approach of the blockchains with the longest track record (namely bitcoin), stating, “We did not consider proof-of-work based protocols due to their poor performance and high energy (and environmental) costs.”

6. Like Binance’s coin, it does a lot of burning.

Blockchains that build in purposeful burning of tokens became very influential last year. Binance, the world’s leading exchange, created the BNB token, with which users could pay trading fees at a discount. Binance led the way to token bonfires, regularly burning a significant portion of its profits paid in BNB.

Libra won’t use burning to enhance the value of its coin. Rather (as with collateralized stablecoins such as tether), tokens will be issued and burned constantly as the association responds to demand shifts for its reserve, with no maximum or minimum supply.

7. Like coda, users don’t need to hold onto the whole transaction history.

A lesser-known protocol, Coda, was one of the first to make its ledger disposable. Users only need to hold a proof of the last block, which they can easily check on a smartphone to be sure they are interacting with a valid ledger.

Similarly, on Libra, “historical data may grow beyond the amount that can be handled by an individual server. Validators are free to discard historical data not needed to process new transactions.”

8. Like EOS, it hasn’t worked everything out yet.

EOS launched without its approach to governance well defined, which yielded complications down the road. Similarly, Libra promises to decentralize, but there’s nothing that inherently forces its members to do so.

Work in progress

Other matters are left undecided as well. For example, the storage of data.

“We anticipate that as the system is used, eventually storage growth associated with accounts may become a problem,” the white paper says. The document anticipates but does not define a system of rent for data storage.

It cites a number of examples of other open questions, such as how best to maintain security as more validators join the network, how often the pool of validators can change and how modules can be updated safely.

As the paper admits:

“This paper is the first step toward building a technical infrastructure to support the Libra ecosystem. We are publishing this early report to seek feedback from the community on the initial design, the plans for evolving the system, and the currently unresolved research challenges discussed in the proposal.”

Dream team

The Libra white paper is signed by 53 people. Though senior Facebook executives such as CEO Mark Zuckerberg and blockchain lead David Marcus are notably absent from the author list, the team that wrote the document looks to be one of the most heavy-hitting in blockchain history.

The signatories hail from nearly every continent and include Ph.D. students from Stanford, computer science professors, and artificial intelligence (AI) developers.

They include:

  • Christian Catalini: The MIT professor was one of the first to study the economics of cryptocurrency alongside crowdfunding and tokenization. Catalini has written extensively for the Harvard Business Review and other publications.
  • Ben Maurer: Facebook’s infrastructure engineer graduated from Carnegie Mellon University with a degree in computer science. He and CMU assistant professor Luis von Ahn built the reCAPTCHA service that Google bought in 2009. He is leading the team that built the Move programming language.
  • George Danezis: A privacy engineer at University College London, Danezis was one of the creators of Chainspace and the Coconut protocol upon which Libra is based. He is currently a researcher at Facebook after the company bought his startup in February 2019.
  • François Garillot: A machine-learning and AI expert who worked at Swisscom, Garillot focuses on distributed AI.
  • Ramnik Arora: Arora spent time as an analyst at Goldman Sachs Investment Strategy Group as well as at IV Capital as a quant. His background is in finance and he has a master’s in computer science from Stanford and an undergraduate degree in the mathematics of finance.





The Facebook Libra Technical Whitepaper

see also:

Introducing Libra

The world truly needs a reliable digital currency and infrastructure that together can deliver on the promise of “the internet of money.”

Securing your financial assets on your mobile device should be simple and intuitive. Moving money around globally should be as easy and cost-effective as — and even more safe and secure than — sending a text message or sharing a photo, no matter where you live, what you do, or how much you earn. New product innovation and additional entrants to the ecosystem will enable the lowering of barriers to access and cost of capital for everyone and facilitate frictionless payments for more people.

Now is the time to create a new kind of digital currency built on the foundation of blockchain technology. The mission for Libra is a simple global currency and financial infrastructure that empowers billions of people. Libra is made up of three parts that will work together to create a more inclusive financial system:

  1. It is built on a secure, scalable, and reliable blockchain;
  2. It is backed by a reserve of assets designed to give it intrinsic value;
  3. It is governed by the independent Libra Association tasked with evolving the ecosystem.

The Libra currency is built on the “Libra Blockchain.” Because it is intended to address a global audience, the software that implements the Libra Blockchain is open source — designed so that anyone can build on it, and billions of people can depend on it for their financial needs. Imagine an open, interoperable ecosystem of financial services that developers and organizations will build to help people and businesses hold and transfer Libra for everyday use. With the proliferation of smartphones and wireless data, increasingly more people will be online and able to access Libra through these new services. To enable the Libra ecosystem to achieve this vision over time, the blockchain has been built from the ground up to prioritize scalability, security, efficiency in storage and throughput, and future adaptability. Keep reading for an overview of the Libra Blockchain, or read the technical paper.

The unit of currency is called “Libra.” Libra will need to be accepted in many places and easy to access for those who want to use it. In other words, people need to have confidence that they can use Libra and that its value will remain relatively stable over time. Unlike the majority of cryptocurrencies, Libra is fully backed by a reserve of real assets. A basket of bank deposits and short-term government securities will be held in the Libra Reserve for every Libra that is created, building trust in its intrinsic value. The Libra Reserve will be administered with the objective of preserving the value of Libra over time. Keep reading for an overview of Libra and the reserve, or read more here.

The Libra Association is an independent, not-for-profit membership organization headquartered in Geneva, Switzerland. The association’s purpose is to coordinate and provide a framework for governance for the network and reserve and lead social impact grant-making in support of financial inclusion. This white paper is a reflection of its mission, vision, and purview. The association’s membership is formed from the network of validator nodes that operate the Libra Blockchain.

Members of the Libra Association will consist of geographically distributed and diverse businesses, nonprofit and multilateral organizations, and academic institutions. The initial group of organizations that will work together on finalizing the association’s charter and become “Founding Members” upon its completion are, by industry:

  • Payments: Mastercard, Mercado Pago, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
  • Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, Spotify AB, Uber Technologies, Inc.
  • Telecommunications: Iliad, Vodafone Group
  • Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
  • Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
  • Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking

We hope to have approximately 100 members of the Libra Association by the target launch in the first half of 2020.

Facebook teams played a key role in the creation of the Libra Association and the Libra Blockchain, working with the other Founding Members. While final decision-making authority rests with the association, Facebook is expected to maintain a leadership role through 2019. Facebook created Calibra, a regulated subsidiary, to ensure separation between social and financial data and to build and operate services on its behalf on top of the Libra network.

Once the Libra network launches, Facebook, and its affiliates, will have the same commitments, privileges, and financial obligations as any other Founding Member. As one member among many, Facebook’s role in governance of the association will be equal to that of its peers.

Blockchains are described as either permissioned or permissionless in relation to the ability to participate as a validator node. In a “permissioned blockchain,” access is granted to run a validator node. In a “permissionless blockchain,” anyone who meets the technical requirements can run a validator node. In that sense, Libra will start as a permissioned blockchain.

To ensure that Libra is truly open and always operates in the best interest of its users, our ambition is for the Libra network to become permissionless. The challenge is that as of today we do not believe that there is a proven solution that can deliver the scale, stability, and security needed to support billions of people and transactions across the globe through a permissionless network. One of the association’s directives will be to work with the community to research and implement this transition, which will begin within five years of the public launch of the Libra Blockchain and ecosystem.

Essential to the spirit of Libra, in both its permissioned and permissionless state, the Libra Blockchain will be open to everyone: any consumer, developer, or business can use the Libra network, build products on top of it, and add value through their services. Open access ensures low barriers to entry and innovation and encourages healthy competition that benefits consumers. This is foundational to the goal of building more inclusive financial options for the world.



Move, a safe and flexible programming language for the Libra Blockchain

Whitepaper Deep Dive — Move: Facebook Libra Blockchain’s New Programming Language

Key characteristics of Move and How it differentiates with Ethereum from a developer’s perspective

Overview & Motivation

This is a walkthrough of the 26-page technical whitepaper of Move, Facebook Libra’s new programming language. As an Ethereum developer and a blockchain community enthusiast, I hope to provide a quick overview and the highlights of the paper for everyone curious about this new language :)

Hope that you will like it, happy learning!


Move is an executable bytecode language used to implement custom transactions and smart contracts.

There are two things to note:

  1. While Move is a bytecode language which can be directly executed in Move’s VM, Solidity (Ethereum’s smart contract language) is a higher level language that needs to be compiled down to bytecode before executing in EVM (Ethereum’s Virtual Machine).
  2. Move can not only be used to implement smart contracts but also custom transactions (explained later in the article), while Solidity is a language for smart contracts on Ethereum only.

The key feature of Move is the ability to define custom resource types with semantics inspired by linear logic: a resource can never be copied or implicitly discarded, only moved between program storage locations.

This is a feature similar to Rust. Values in Rust can only be assigned to one name at a time. Assigning a value to a different name causes it to no longer be accessible under the previous name.

For example, the following code snippet will output the error: Use of moved value ‘x’. This is because Rust has no garbage collection. When variables go out of scope, the memory they refer to is also deallocated. For simplicity, we can understand this as there can only be one “owner” of data at a time. In this example, x is the original owner, and then y becomes the owner.
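
A minimal Rust sketch of this behavior (illustrative; the commented-out line is the one the compiler rejects with the “use of moved value” error):

```rust
fn main() {
    // `String` is not `Copy`, so assignment moves ownership.
    let x = String::from("coin");
    let y = x; // ownership of the data moves from `x` to `y`
    // println!("{}", x); // error[E0382]: use of moved value: `x`
    assert_eq!(y, "coin"); // `y` is now the sole owner
    println!("{}", y);
}
```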




2.2 Encoding Digital Assets in an Open System

There are two properties of physical assets that are difficult to encode in digital assets:
• Scarcity. The supply of assets in the system should be controlled. Duplicating existing assets should be prohibited, and creating new assets should be a privileged operation.
• Access control. A participant in the system should be able to protect her assets with access control policies.

It points out two major characteristics that digital assets need to achieve, which come naturally for physical assets. For example, rare metal is naturally scarce, and only you have access to (ownership of) the bill in your hand before spending it.

To illustrate how we came up with the two properties, let’s start with the following proposals:

Proposal#1: Simplest Rule Without Scarcity and Access Control

The simplest state evaluation rule without scarcity and access control.


  • G[K]:=n denotes updating the number stored at key 𝐾 in the global blockchain state with the value 𝑛.
  • transaction ⟨Alice, 100⟩ means set Alice’s account balance to 100.

The above representation has several serious problems:

  • Alice can have unlimited coins by sending the transaction ⟨Alice, 100⟩ herself.
  • The coins that Alice sends to Bob are worthless, since Bob could send himself unlimited coins using the same technique.
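
The naive rule can be sketched in a few lines of Rust (illustrative names, not from the paper): the evaluation rule G[K] := n blindly overwrites whatever balance is stored under K.

```rust
use std::collections::HashMap;

// Proposal#1: a transaction ⟨K, n⟩ simply sets the balance under key K.
// No scarcity, no access control.
fn apply_transaction(state: &mut HashMap<String, u64>, key: &str, n: u64) {
    state.insert(key.to_string(), n); // G[K] := n
}

fn main() {
    let mut state = HashMap::new();
    // Alice mints herself unlimited coins -- nothing stops her.
    apply_transaction(&mut state, "Alice", 100);
    apply_transaction(&mut state, "Alice", 1_000_000);
    assert_eq!(state["Alice"], 1_000_000);
}
```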

Proposal#2: Taking Scarcity into Account

The second proposal that takes scarcity into account


Now we enforce that the number of coins stored under 𝐾𝑎 is at least 𝑛 before the transfer takes place.

However, though this solves the scarcity issue, there is no ownership check on who can send Alice’s coins (anyone can do so under this evaluation rule).

Proposal#3: Considering both Scarcity and Access Control


The third proposal, which considers both scarcity and access control


We address the problem by using a digital signature mechanism, verify_sig, before the scarcity check: Alice uses her private key to sign the transaction and thereby proves that she is the owner of her coins.
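
Proposals #2 and #3 can be sketched together in Rust (illustrative names; the boolean flag stands in for a real verify_sig check on a cryptographic signature):

```rust
use std::collections::HashMap;

// A transfer first checks access control (the sender's signature),
// then scarcity (G[K_a] >= n), and only then moves the coins.
fn transfer(
    state: &mut HashMap<String, u64>,
    from: &str,
    to: &str,
    n: u64,
    signature_valid: bool, // stand-in for verify_sig(...)
) -> Result<(), &'static str> {
    if !signature_valid {
        return Err("access control: bad signature");
    }
    let from_bal = *state.get(from).unwrap_or(&0);
    if from_bal < n {
        return Err("scarcity: insufficient balance");
    }
    state.insert(from.to_string(), from_bal - n);
    *state.entry(to.to_string()).or_insert(0) += n;
    Ok(())
}

fn main() {
    let mut state = HashMap::from([("Alice".to_string(), 100u64)]);
    assert!(transfer(&mut state, "Alice", "Bob", 60, true).is_ok());
    assert!(transfer(&mut state, "Alice", "Bob", 60, true).is_err()); // scarcity violated
    assert!(transfer(&mut state, "Bob", "Alice", 10, false).is_err()); // no valid signature
    assert_eq!(state["Alice"], 40);
    assert_eq!(state["Bob"], 60);
}
```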

2.3. Existing Blockchain Languages

Existing blockchain languages face the following problems (all of which are solved in Move):

1. Indirect representation of assets. An asset is encoded using an integer, but an integer value is not the same thing as an asset. In fact, there is no type or value that represents Bitcoin/Ether/StrawCoin! This makes it awkward and error-prone to write programs that use assets. Patterns such as passing assets into/out of procedures or storing assets in data structures require special language support.

2. Scarcity is not extensible. The language only represents one scarce asset. In addition, the scarcity protections are hardcoded directly in the language semantics. A programmer that wishes to create a custom asset must carefully reimplement scarcity with no support from the language.

These are exactly the problems in Ethereum smart contracts. Custom assets such as ERC-20 tokens use an integer to represent their value and total supply. Whenever new tokens are minted, the smart contract code has to manually check whether the cap (the total supply in this case) has been reached.

Furthermore, serious bugs such as duplication, reuse, or loss of assets are more likely to be introduced due to the indirect representation of assets.

3. Access control is not flexible. The only access control policy the model enforces is the signature scheme based on the public key. Like the scarcity protections, the access control policy is deeply embedded in the language semantics. It is not obvious how to extend the language to allow programmers to define custom access control policies.

This is also true in Ethereum, where smart contracts have no native language support for access control based on public-private key cryptography. Developers have to write access control manually, for example with an onlyOwner modifier.

Although I’m a big fan of Ethereum, I agree that these asset properties should be natively supported by the language for safety purposes.

In particular, transferring Ether to a smart contract involves dynamic dispatch, which has led to a new class of bugs known as re-entrancy vulnerabilities.

Dynamic dispatch here means that the code execution logic is determined at runtime (dynamically) instead of at compile time (statically). Thus in Solidity, when contract A calls contract B’s function, contract B can run code that was unanticipated by contract A’s designer, which can lead to re-entrancy vulnerabilities (contract A accidentally executes contract B’s function to withdraw money before actually deducting balances from the account).
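
The control flow of such a bug can be simulated in plain Rust (a toy model with hypothetical names; it mimics only the call ordering, not the EVM): the “payout” happens before the deduction, so a recipient who re-enters gets paid repeatedly.

```rust
use std::collections::HashMap;

// A toy ledger whose withdraw "sends" funds (here, recursively
// re-enters itself) BEFORE deducting the balance -- the same ordering
// mistake behind Solidity re-entrancy bugs.
struct Ledger {
    balances: HashMap<&'static str, u64>,
}

impl Ledger {
    fn withdraw_vulnerable(&mut self, who: &'static str, amount: u64, depth: u32) -> u64 {
        let bal = *self.balances.get(who).unwrap_or(&0);
        if bal < amount {
            return 0;
        }
        // "External call" happens first: the recipient's code runs and
        // may re-enter withdraw before the deduction below.
        let mut paid = amount;
        if depth > 0 {
            paid += self.withdraw_vulnerable(who, amount, depth - 1); // re-entrant call
        }
        // Deduction happens last -- too late.
        let bal_now = self.balances.get_mut(who).unwrap();
        *bal_now = bal_now.saturating_sub(amount);
        paid
    }
}

fn main() {
    let mut ledger = Ledger { balances: HashMap::from([("attacker", 100)]) };
    // Re-entering twice, the attacker is "paid" 300 from a 100 balance.
    let paid = ledger.withdraw_vulnerable("attacker", 100, 2);
    assert_eq!(paid, 300);
    assert_eq!(*ledger.balances.get("attacker").unwrap(), 0);
}
```

Swapping the order (deduct first, then pay out) makes the balance check fail on re-entry, which is exactly the “checks-effects-interactions” fix.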

3. Move Design Goals

3.1. First-Class Resources

At a high level, the relationship between modules/resources/procedures in Move is similar to the relationship between classes/objects/methods in object-oriented programming.
Move modules are similar to smart contracts in other blockchain languages. A module declares resource types and procedures that encode the rules for creating, destroying, and updating its declared resources.

Modules, resources, and procedures are just jargon in Move. We will have an example to illustrate them later in this article ;)

3.2. Flexibility

Move adds flexibility to Libra via transaction scripts. Each Libra transaction includes a transaction script that is effectively the main procedure of the transaction.

The scripts can perform either expressive one-off behaviors (such as paying a specific set of recipients) or reusable behaviors (by invoking a single procedure that encapsulates the reusable logic).

From the above, we can see that Move’s transaction scripts introduce more flexibility, since they are capable of one-off behaviors as well as reusable behaviors, while Ethereum can only perform reusable behaviors (invoking a single smart contract method). They are called “reusable” because smart contract functions can be executed multiple times.

3.3. Safety

The executable format of Move is a typed bytecode that is higher-level than assembly yet lower-level than a source language. The bytecode is checked on-chain for resource, type, and memory safety by a bytecode verifier and then executed directly by a bytecode interpreter. This choice allows Move to provide safety guarantees typically associated with a source language, but without adding the source compiler to the trusted computing base or the cost of compilation to the critical path for transaction execution.

This is indeed a very neat design choice, making Move a bytecode language. Since it doesn’t need to be compiled from source to bytecode the way Solidity does, it doesn’t have to worry about possible failures of, or attacks on, compilers.

3.4. Verifiability

Our approach is to perform as much lightweight on-chain verification of key safety properties as possible, but design the Move language to support advanced off-chain static verification tools.

From here we can see that Move performs lightweight verification on-chain and delegates the heavier static verification to off-chain tools. Nonetheless, as stated at the end of the paper, those verification tools are left as future work.

3. Modularity. Move modules enforce data abstraction and localize critical operations on resources. The encapsulation enabled by a module combined with the protections enforced by the Move type system ensures that the properties established for a module’s types cannot be violated by code outside the module.

This is also a very well-thought-out data abstraction design! It means that the data in a smart contract can only be modified within the contract’s own scope, not by other contracts from the outside.



4. Move Overview

The example transaction script demonstrates that a malicious or careless programmer outside the module cannot violate the key safety invariants of the module’s resources.

This section walks you through an example of what modules, resources, and procedures actually are when writing the language.

4.1. Peer-to-Peer Payment Transaction Script

The amount of coins will be transferred from the transaction sender to the payee


There are several new symbols here:

  • 0x0: the account address where the module is stored
  • Currency: the name of the module
  • Coin: the resource type
  • The value coin returned by the procedure is a resource value whose type is 0x0.Currency.Coin
  • move(): the value can not be used again
  • copy(): the value can be used later

Code breakdown:

In the first step, the sender invokes the withdraw_from_sender procedure of the Currency module stored at address 0x0.

In the second step, the sender transfers the funds to payee by moving the coin resource value into the 0x0.Currency module’s deposit procedure.

Here are three kinds of code examples that will be rejected:

1. Duplicating currency by changing move(coin) to copy(coin)

Resource values can only be moved. Attempting to duplicate a resource value (e.g., using copy(coin) in the example above) will cause an error at bytecode verification time.

Because coin is a resource value, it can only be moved.

2. Reusing currency by writing move(coin) twice

Adding the line 0x0.Currency.deposit(copy(some_other_payee), move(coin)) to the example above would let the sender “spend” coin twice — the first time with payee and the second with some_other_payee. This undesirable behavior would not be possible with a physical asset. Fortunately, Move will reject this program.

3. Losing currency by neglecting to move(coin)

Failing to move a resource (e.g., by deleting the line that contains move(coin) in the example above) will trigger a bytecode verification error. This protects Move programmers from accidentally — or intentionally — losing track of the resource.

4.2. Currency Module

4.2.1 Primer: Move execution model


Each account can contain zero or more modules (depicted as rectangles) and one or more resource values (depicted as cylinders). For example, the account at address 0x0 contains a module 0x0.Currency and a resource value of type 0x0.Currency.Coin. The account at address 0x1 has two resources and one module; the account at address 0x2 has two modules and a single resource value.

Some highlights:

  • Executing a transaction script is all-or-nothing
  • A module is a long-lived piece of code published in the global state
  • The global state is structured as a map from account addresses to accounts
  • Accounts can contain at most one resource value of a given type and at most one module with a given name (The account at address 0x0 would not be allowed to contain an additional 0x0.Currency.Coin resource or another module named Currency)
  • The address of the declaring module is part of the type (0x0.Currency.Coin and 0x1.Currency.Coin are distinct types that cannot be used interchangeably)
  • Programmers can still hold multiple instances of a given resource type in an account by defining a custom wrapper resource

(resource TwoCoins { c1: 0x0.Currency.Coin, c2: 0x0.Currency.Coin })

  • The rule is that it’s fine as long as you can still reference each resource by name without conflicts; for example, you can reference the two resources as TwoCoins.c1 and TwoCoins.c2.

4.2.2 Declaring the Coin Resource

A module named Currency and a resource type named Coin that is managed by the module

Some highlights:

  • A Coin is a struct type with a single field value of type u64 (a 64-bit unsigned integer)
  • Only the procedures of the Currency module can create or destroy values of type Coin
  • Other modules and transaction scripts can only write or reference the value field via the public procedures exposed by the module

4.2.3 Implementing Deposit

This procedure takes a Coin resource as input and combines it with the Coin resource stored in the payee’s account by:
1. Destroying the input Coin and recording its value.
2. Acquiring a reference to the unique Coin resource stored under the payee’s account.
3. Incrementing the value of payee’s Coin by the value of the Coin passed to the procedure.

Some highlights:

  • Unpack, BorrowGlobal are builtin procedures
  • Unpack<T> is the only way to delete a resource of type T. It takes a resource of type T as input, destroys it, and returns the values bound to the fields of the resource
  • BorrowGlobal<T> takes an address as input and returns a reference to the unique instance of T published under that address
  • &mut Coin is a mutable reference to a Coin resource, not Coin
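
These three steps map naturally onto Rust’s own move semantics; a rough sketch (illustrative names, standing in for Move’s Unpack and BorrowGlobal builtins):

```rust
// Taking `to_deposit` by value consumes it, so the caller can never
// reuse the deposited coin -- echoing Move's resource rules.
struct Coin {
    value: u64,
}

fn deposit(payee: &mut Coin, to_deposit: Coin) {
    // "Unpack": destructure the input Coin, destroying it and
    // recording its value.
    let Coin { value } = to_deposit;
    // "BorrowGlobal" analogue: through the mutable reference to the
    // payee's unique Coin, increment its value.
    payee.value += value;
}

fn main() {
    let mut payees_coin = Coin { value: 50 };
    let incoming = Coin { value: 25 };
    deposit(&mut payees_coin, incoming);
    // `incoming` was moved into deposit and cannot be used again here.
    assert_eq!(payees_coin.value, 75);
}
```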

4.2.4 Implementing withdraw_from_sender

This procedure:

1. Acquires a reference to the unique resource of type Coin published under the sender’s account.
2. Decreases the value of the referenced Coin by the input amount.
3. Creates and returns a new Coin with value amount.

Some highlights:

  • Deposit can be called by anyone, but withdraw_from_sender is access-controlled: it can only be called by the owner of the coin
  • GetTxnSenderAddress is similar to Solidity’s msg.sender
  • RejectUnless is similar to Solidity’s require. If this check fails, execution of the current transaction script halts and none of the operations it performed will be applied to the global state
  • Pack<T>, also a builtin procedure, creates a new resource of type T
  • Like Unpack<T>, Pack<T> can only be invoked inside the declaring module of resource T
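
A rough Rust sketch of the same three steps (illustrative names; assert! stands in for RejectUnless, and the struct literal plays the role of Pack):

```rust
struct Coin {
    value: u64,
}

// Withdraws `amount` from the sender's Coin and returns it as a new
// Coin. On a failed check the whole "transaction" aborts, like
// RejectUnless halting a transaction script.
fn withdraw(sender: &mut Coin, amount: u64) -> Coin {
    assert!(sender.value >= amount, "insufficient balance"); // RejectUnless
    sender.value -= amount; // decrease the referenced Coin
    Coin { value: amount }  // "Pack": create the new Coin resource
}

fn main() {
    let mut senders_coin = Coin { value: 100 };
    let withdrawn = withdraw(&mut senders_coin, 30);
    assert_eq!(withdrawn.value, 30);
    assert_eq!(senders_coin.value, 70);
}
```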

Wrap up

You now have an overview of the main characteristics of Move, how it compares to Ethereum, and its basic syntax.

Lastly, I highly recommend reading through the original whitepaper. It includes many details of the design principles behind the language, along with many great references.

Thank you so much for taking the time to read this. Feel free to share it with someone who might be interested :) Any suggestions are welcome!







Cortex Launches Deep Learning and AI Network for Decentralized Apps

Cortex claims that this is the first time that artificial intelligence has been introduced to a crypto network at scale.

Cortex has launched a network for decentralized apps powered by artificial intelligence (AI), according to a news release published on June 26.

The company claims this is the first time that AI has been introduced to a crypto network at scale. It is hoped the technology will be used to generate credit reports for the decentralized finance industry and facilitate anti-fraud reporting for exchanges — and Cortex believes the gaming and eSports sector could also benefit from a “diverse range of use cases.” Cortex CEO Ziqi Chen said:

“In the near future, we expect to see stablecoins based on machine learning, decentralized decision making, malicious behavior detection, smart resource allocation, and much more. These are challenges that all intersect with crypto networks, where having trained AI models that are accessible on-chain will prove to be extremely valuable.”

Looking ahead, Cortex says it plans to work with developers to implement AI dApps on its network, and deliver on-chain machine learning to networks beyond Ethereum.

Earlier in June, the European Union announced plans to increase the amount of data that can be reused as raw material for AI and blockchain projects.

An AI-powered index tracking the 100 strongest-performing crypto coins and tokens was also recently added to Reuters and Bloomberg trading terminals.



Cortex Network Launches Mainnet to Democratize Deep Learning and AI

SINGAPORE — (BUSINESS WIRE) — Cortex, the decentralized AI world computer, has announced the successful launch of its technology platform. The network, designed for AI-powered dApps, brings deep learning models as artificial intelligence support to the blockchain ecosystem. The Cortex mainnet, which launched on June 26th following 15 months of intense development, marks the first time that AI has been introduced to a crypto network at scale.

The Cortex team has overcome a number of technical challenges to create, in the Cortex Virtual Machine (CVM), the means for AI models to be executed on-chain using a Graphics Processing Unit (GPU). This opens the door to a myriad of applications, including dApp and AI development. Trained AI models can be uploaded onto the storage layer of the Cortex chain before being incorporated into smart contracts by dApp developers.

“On-chain machine learning is an extremely complex endeavor due to the computational demands, and the need to create a virtual machine that is Ethereum Virtual Machine compatible. With the Cortex Virtual Machine, we’ve achieved a breakthrough that brings the benefits of artificial intelligence to a wider audience. Although dApp developers will be among the first beneficiaries of the Cortex mainnet, this is only the beginning. In time, we expect to develop a diverse range of use cases, all delivered on-chain,” said Cortex CEO Ziqi Chen.

These use cases will include generating credit reports for the burgeoning DeFi industry, and facilitating anti-fraud reporting for decentralized exchanges, P2P financing platforms, insurance, and cryptocurrency lending. Other potential applications include gaming, esports, and AI governance structures.

Explaining the rationale behind this latter concept, Ziqi Chen explains: “In the near future, we expect to see stablecoins based on machine learning, decentralized decision making, malicious behavior detection, smart resource allocation, and much more. These are challenges that all intersect with crypto networks, where having trained AI models that are accessible on-chain will prove to be extremely valuable.”

Cortex provides a mechanism for machine learning researchers to upload data models to the storage layer and monetize them: entities in need of AI models can make inferences after arranging payment in CTXC tokens. The Cortex mainnet has launched with 23 AI models available, trained on four datasets. The CVM is backward-compatible with the EVM and capable of running traditional smart contracts as well as AI smart contracts.

A detailed roadmap includes plans for the Cortex Foundation to work with dApp developers to implement AI dApps on the Cortex chain and plans to broaden cross-chain support to bring on-chain machine learning to networks beyond Ethereum. The Cortex team also intends to collaborate with academia and industry for research partnerships, and with publications to help further understanding of neural networks and their integration into the emerging field of blockchain.

To learn more visit:




Founder of Ripple, Co-Founder of Stellar, and Mt. Gox Founder Jed McCaleb is being sued for neglecting severe Mt. Gox security problems

Mt. Gox founder Jed McCaleb is being sued by two traders who used the doomed exchange, court documents filed on May 19 show.

Joseph Jones and Peter Steinmetz have accused the ex-CEO of fraudulently and negligently misrepresenting the exchange.

The pair also allege that McCaleb was aware of “serious security risks” back in late 2010 or early 2011 — more than three years before 850,000 bitcoin (BTC) was stolen in an audacious hack. Their complaint adds:

“Rather than secure the exchange, McCaleb sold a large portion of his interest in the then sole proprietorship, and provided avenues to the purchases to cover-up security concerns at the time without ever informing or disclosing these issues to the public.”

Both of the plaintiffs describe themselves as experienced cryptocurrency traders. They said they were reassured by McCaleb following a “dictionary attack” in 2011, where a fraudster stole coins after targeting accounts with weak passwords.

The court document alleges that 80,000 BTC was already missing at that time, and claims that McCaleb sold a majority of his interest in Mt. Gox to Mark Karpeles instead of staying to repair the security issues.

While Jones said he owned 1,900 BTC at the time of Mt. Gox’s bankruptcy in February 2014 (worth $24 million at press time), Steinmetz said he owned 43,000 BTC — crypto that would be worth more than $542 million at today’s rates. Both men are still in pursuit of their lost funds, and say they would not have used Mt. Gox had they known about the “significant security concerns” that existed in 2011.

In April, Mt. Gox rehabilitation trustee Nobuaki Kobayashi successfully petitioned a Japanese court to extend the deadline for the submission of rehabilitation plans to October 2019.

Meanwhile, back in March, former CEO Mark Karpeles was given a suspended jail sentence after being found guilty of tampering with financial records.

Mt. Gox was once the world’s biggest crypto exchange, and McCaleb later went on to become the founder of Ripple and the co-founder of Stellar.

Facebook’s Libra: “It would make the early 20th century Morgans or Rockefellers seem downright competitive.”

Standard Oil depicted as an Octopus in a 1904 political cartoon
(image via Wikimedia Commons).

Facebook’s Libra Cryptocurrency: Bad for Privacy, Bad for Competition

Author Scott A. Shay is co-founder and chairman of Signature Bank of New York and also the author of “In Good Faith: Questioning Religion and Atheism” (Post Hill Press, 2018).

Allowing Facebook to mint its own coin, the Libra, would turn it into the greatest anti-competitive trust case in history. It would make the early 20th century Morgans or Rockefellers seem downright competitive.

Even before it unveiled its vision for a global cryptocurrency this month, Facebook was already a near-monopoly in social media, and part of a duopoly in its main markets. Together with Google, it controls 82% of the digital advertising market. 

In the past, Facebook has purchased any company that threatened it, e.g. Instagram and WhatsApp. And, when it spots a company that won’t sell itself or would be difficult to purchase, it uses the “embrace, enhance and extinguish” technique.  

Facebook saw Snap Inc. (maker of Snapchat) contesting a small part of its franchise, so it embraced Snap’s best features and integrated them into its app. Now, Facebook is hoping to extinguish Snap as a competitor. Compare the stock performance of Snap and Facebook, and you will probably place your bet on Facebook.

But it is not simply Facebook’s business practices that are of concern.

Neither Facebook nor Google charges for their consumer products, obscuring the fact that all-encompassing consumer tracking is their real product. In many cases, their data is better than what the KGB or CIA could have gathered 20 years ago. And their data is certainly a lot cheaper, since it is voluntarily provided and easily accessible.

We would not want our government agencies to have this sort of power, nor should we want it to be in the hands of corporations. 

Facebook and Google have already shown their political muscle. With their duopoly on digital marketing advertising, these companies have transformed the nature of news.  Only a few news sites, such as The Wall Street Journal and The New York Times, can resist their gravitational pull and still attract direct advertisers as well as subscribers.

Most other publications must use Google ads, which provide far less revenue to the outlet, slice and dice their readership, and force newspapers to write clickbait. Ads to readers are so well-placed because of the mountain of information that can be inputted into their algorithms. The same holds true for news content viewed on Facebook.

Now, with the Libra project, Facebook wants to exponentially increase its monopolistic power by accessing unparalleled information about our consumer purchasing habits. If allowed to proceed with Libra, a company that knows your every mood and virtually controls the news you see will also have access to the deepest insights into your spending patterns.

Privacy threat

Of course, Facebook will speak piously about privacy controls and its concern for the consumer, yet it will still figure out a way to sell the data, or others who buy the data will figure it out for them.

Furthermore, with the richness of the social media data Facebook consistently garners, even anonymized data can be recalibrated to distill specific individual-related information and preferences. Facebook, along with its other monopolist rent-seeking cohorts, such as eBay, Uber and Mastercard, all say they won’t do that. 

Quite frankly, there is zero reason to believe such promises. Their culture is based strictly on brand concerns and access to personal data. Additionally, hacks of social media are now so common that we are inured to them.

Consumers can have the benefit of a digital payment mechanism without allowing Facebook to gain more power. In the financial services sector, my institution, Signature Bank, was the first to introduce a 24/7 blockchain-enabled payment system. As one would expect, others, such as JPMorgan, are trying to follow suit and will no doubt be competitors someday.

Banks and financial institutions are limited in their access to, and transmission of, information, and for good reason. If Facebook, on the other hand, establishes Libra, no other competitor will have equal access to its data, and therefore, a chance at the consumer payment market.

In this way, Libra is in keeping with Facebook’s monopolistic business style.

Further, the information monopoly Facebook would possess will be similar to what the Chinese government possesses but needs the Great Firewall to execute. Monopolistic forces will produce the same result through different means.

Call to action

Action needs to be taken quickly to stop Libra and break up Big Tech, not only for the welfare of consumers but for the good of the nation.

The first step is to force Facebook to divest or spin off Instagram, WhatsApp and Chainspace, the blockchain startup it acqui-hired earlier this year.

Facebook also must be mandated to offer a parallel, ad-free, “no collection of information” site supported by fee-based subscriptions. Over time, this would provide some transparency as to the value of the consumer information currently being gifted to Facebook.

Google should be forced to divest or spin off YouTube, DoubleClick and other advertising entities, cloud services and Android. Amazon similarly needs a radical breakup as it too poses systemic threats to a transparent market. (Alexa is a prime example of the private data Amazon gathers on users’ lifestyle and personal habits.)

The breakup of these behemoths cannot wait until after the 2020 election.  Such action must be taken on a bipartisan basis as soon as possible.

Even once stripped down, Facebook should remain separated from commerce due to privacy concerns. Congress, which has scheduled hearings on Libra for next month, is right to intervene.





SWIFT Gives Blockchain Platforms Access to ‘Instant’ GPI Payments Following R3 Trial

[needless to say: while it bears the names “DLT” and “Blockchain”, it has little to do with either]
The firm said 55 percent of SWIFT cross-border payments are now being made over GPI, a payments flow worth over $40 trillion.

Jun 24, 2019 at 13:30 UTC

Global interbank messaging giant SWIFT has revealed it will allow blockchain firms to make use of its Global Payments Innovation (GPI) platform for near real-time payments.

In a report published late last week, SWIFT said that, following a successful proof-of-concept with R3’s Corda platform, it would “soon be enabling gpi payments on DLT [distributed ledger technology]-based trade platforms.”

Saying that [SWIFT’s] GPI would resolve the “payment challenges” faced by DLT platforms, the firm explained that payments using the system would be initiated within trade workflows and be automatically sent on to the banking system.

Launched in early 2017, GPI was created as a set of business rules encoded on top of the firm’s [SWIFT’s] existing infrastructure as a means to increase the speed, transparency, and traceability of transactions.

In the report, the firm said 55 percent of SWIFT cross-border payments are now being made over GPI, a payments flow worth over $40 trillion.

“Half of them are reaching end beneficiary customers within minutes, and practically all within 24 hours,” the report stated, further predicting that all cross-border SWIFT payments will be made over GPI “within two years.”

At the launch of the proof-of-concept back in January, SWIFT explained that the trial would connect the GPI Link gateway with R3’s Corda platform to monitor payment flows and support application programming interfaces (APIs), as well as SWIFT and ISO standards.

Commenting on the SWIFT news, Charley Cooper, managing director at R3, said:

“We’re proud to be pioneering this work with SWIFT on the GPI Link initiative and R3’s Corda Settler. SWIFT’s intention to expand blockchain access to its GPI Link is an important step towards enterprise blockchain adoption and Corda is already a leader in this space. The ability for firms utilising enterprise blockchain applications to settle off chain using existing, established and trusted payment networks, like SWIFT, allows firms to access the efficiency gains from blockchain while reducing the friction in crossing between on-chain and pre-existing payment systems.”

It’s also notable that R3 began testing its Corda Settler payments engine with XRP, the native cryptocurrency of Ripple, prompting a frisson of excitement among Ripple supporters. In any case, R3 has made clear from the start that the technology was always designed to be interoperable with a variety of payments systems.





The price (in $$) of personal data in the USA

Data in exchange for free use.

That, in short, is the deal that users of social networks agree to. According to a March 2019 survey by NBC News/Wall Street Journal, 74 percent of Americans feel this is not a fair trade. A comparable survey in Germany would likely come out much the same.

But what would be a good deal? The polling firm Morning Consult explored this question in a survey of 2,200 adults in the USA.

  • For information such as their full name or shopping behavior, respondents would ask for 50 US dollars.
  • Credit scores and driver’s license numbers would cost 300 and 500 US dollars, respectively.
  • For a passport number or biometric data, companies would have to pay 1,000 US dollars.





Kodak Reveals New Blockchain-Based Document Management System: 40% Cost Savings

Kodak Services for Business unveiled a blockchain-based document management platform during a two-day conference in New York, according to a news release published on June 5.

The company says the technology enables businesses and governments to better manage sensitive documents and keep them secure — automating workflows and archiving to ensure records can be accessed in real time.

According to Kodak, this system can help organizations achieve cost savings of up to 40% by improving productivity and preventing the loss of information.

Other products showcased during the event included Scan Cloud, which enables users to process data wherever they are.

“Smart Cities” were also discussed at the conference — a concept where cutting-edge technology is used to improve infrastructure and services in urban areas.

The company has used blockchain to establish databases before, and recently forged a partnership with RYDE Holding to build an image rights platform, known as KodakONE, that protects copyright and helps photographers monetize their works. A limited beta test of the platform reportedly generated more than $1 million in licensing claims.

Back in February 2018, Kodak was forced to delay the launch of its KodakCOIN cryptocurrency in order to evaluate the status of potential investors — a day before a planned initial coin offering was due to start.

Last month, the Polish-British fintech firm Billon secured a $2.1 million grant from the European Commission to further the development of its own blockchain document management system.


The real risk of Facebook’s Libra coin is crooked developers

They’ll steal your money, not just your data.

Everyone’s worried about Mark Zuckerberg controlling the next currency, but I’m more concerned about a crypto Cambridge Analytica.

Today Facebook announced Libra, its forthcoming stablecoin designed to let you shop and send money overseas with almost zero transaction fees. Immediately, critics started harping about the dangers of centralizing control of tomorrow’s money in the hands of a company with a poor track record of privacy and security.

Facebook anticipated this, though, and created a subsidiary called Calibra to run its crypto dealings and keep all transaction data separate from your social data. Facebook shares control of Libra with 27 other Libra Association founding members, and as many as 100 total when the token launches in the first half of 2020. Each member gets just one vote on the Libra council, so Facebook can’t hijack the token’s governance even though it invented it.

With privacy fears and centralized control issues at least somewhat addressed, there’s always the issue of security. Facebook naturally has a huge target on its back for hackers. Not just because Libra could hold so much value to steal, but because plenty of trolls would get off on screwing up Facebook’s currency. That’s why Facebook open-sourced the Libra Blockchain and is offering a prototype in a pre-launch testnet. This developer beta plus a bug bounty program run in partnership with HackerOne is meant to surface all the flaws and vulnerabilities before Libra goes live with real money connected.

Yet that leaves one giant vector for abuse of Libra: the developer platform.

“Essential to the spirit of Libra . . . the Libra Blockchain will be open to everyone: any consumer, developer, or business can use the Libra network, build products on top of it, and add value through their services. Open access ensures low barriers to entry and innovation and encourages healthy competition that benefits consumers,” Facebook explained in its white paper and Libra launch documents. It’s even building a whole coding language called Move for making Libra apps.

Apparently Facebook has already forgotten how allowing anyone to build on the Facebook app platform and its low barriers to “innovation” are exactly what opened the door for Cambridge Analytica to hijack 87 million people’s personal data and use it for political ad targeting.

But in this case, it won’t be users’ interests and birthdays that get grabbed. It could be hundreds or thousands of dollars’ worth of Libra currency that’s stolen. A shady developer could build a wallet that just cleans out a user’s account or funnels their coins to the wrong recipient, mines their purchase history for marketing data or uses them to launder money. Digital risks become a lot less abstract when real-world assets are at stake.

In the wake of the Cambridge Analytica scandal, Facebook raced to lock down its app platform, restrict APIs, more heavily vet new developers and audit ones that look shady. So you’d imagine the Libra Association would be planning to thoroughly scrutinize any developer trying to build a Libra wallet, exchange or other related app, right? “There are no plans for the Libra Association to take a role in actively vetting [developers],” Calibra’s head of product Kevin Weil surprisingly told me. “The minute that you start limiting it is the minute you start walking back to the system you have today with a closed ecosystem and a smaller number of competitors, and you start to see fees rise.”

That translates to “the minute we start responsibly verifying Libra app developers, things start to get expensive, complicated or agitating to cryptocurrency purists. That might hurt growth and adoption.”

You know what will hurt growth of Libra a lot worse? A sob story about some migrant family or a small business getting all their Libra stolen. And that blame is going to land squarely on Facebook, not some amorphous Libra Association.


Facebook’s own Calibra Wallet


Inevitably, some unsavvy users won’t understand the difference between Facebook’s own wallet app Calibra and any other app built for the currency. “Libra is Facebook’s cryptocurrency. They wouldn’t let me get robbed,” some will surely say. And on Calibra they’d be right. It’s a custodial wallet that will refund you if your Libra are stolen and it offers 24/7 customer support via chat to help you regain access to your account.

Yet the Libra Blockchain itself is irreversible. Outside of custodial wallets like Calibra, there’s no getting your stolen or mis-sent money back. There’s likely no customer support. And there are plenty of crooked crypto developers happy to prey on the inexperienced. Indeed, $1.7 billion in cryptocurrency was stolen last year alone, according to CipherTrace via CNBC. “As with anything, there’s fraud and there are scams in the existing financial ecosystem today . . . that’s going to be true of Libra too. There’s nothing special or magical that prevents that,” says Weil, who concluded “I think those pros massively outweigh the cons.”

Until now, the blockchain world was mostly inhabited by technologists, except for when skyrocketing values convinced average citizens to invest in Bitcoin just before prices crashed. Now Facebook wants to bring its family of apps’ 2.7 billion users into the world of cryptocurrency. That’s deeply worrisome.


Facebook founder and CEO Mark Zuckerberg arrives to testify during a Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing about Facebook on Capitol Hill in Washington, DC, April 10, 2018. (Photo: SAUL LOEB/AFP/Getty Images)


Regulators are already bristling, but perhaps for the wrong reasons. Democrat Senator Sherrod Brown tweeted that “We cannot allow Facebook to run a risky new cryptocurrency out of a Swiss bank account without oversight.”


And French Finance Minister Bruno Le Maire told Europe 1 radio that Libra can’t be allowed to “become a sovereign currency.”


Most harshly, Rep. Maxine Waters issued a statement saying, “Given the company’s troubled past, I am requesting that Facebook agree to a moratorium on any movement forward on developing a cryptocurrency until Congress and regulators have the opportunity to examine these issues and take action.”

Yet Facebook has just one vote in controlling the currency, and the Libra Association preempted these criticisms, writing, “We welcome public inquiry and accountability. We are committed to a dialogue with regulators and policymakers. We share policymakers’ interest in the ongoing stability of national currencies.”

That’s why as lawmakers confer about how to regulate Libra, I hope they remember what triggered the last round of Facebook execs having to appear before Congress and Parliament. A totally open, unvetted Libra developer platform in the name of “innovation” over safety is a ticking time bomb. Governments should insist the Libra Association thoroughly audit developers and maintain the power to ban bad actors. In this strange new crypto world, the public can’t be expected to perfectly protect itself from Cambridge Analytica 2.0.






Austria releases the world's first blockchain postage stamp

This is what the paper stamp looks like. On the left, the normally usable stamp;
on the right, the covered part with access credentials for an Ethereum wallet. (Screenshot: t3n)

The world's first blockchain stamp is available in the onchain shop of the Austrian Post

The Austrian Post has brought the world's first blockchain postage stamp to market. Our guest author took a closer look.

The so-called Crypto Stamp consists of two parts. The first part is a real paper stamp with a face value of 6.90 euros, which can be used like any ordinary stamp. With the purchase, however, you also receive a second part. It is likewise made of paper and carries two covered text fields that can be scratched free. Behind the scratch-off areas are the access credentials for an Ethereum wallet – a digital wallet on the Ethereum blockchain. Inside this wallet lies a token named "Crypto stamp Edition 1" with a unique token ID. This unique token can serve philatelists and crypto enthusiasts as a virtual collectible. The wallet also holds a small amount of 0.001666 ether, worth about 40 cents. This amount of the widely used cryptocurrency is presumably meant to let the stamp's buyer pay transaction fees on the blockchain, should they want to transfer their virtual stamp to another Ethereum wallet.
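A quick plausibility check of the preloaded ether amount. Both input numbers are taken from the article; the implied exchange rate is a back-calculation, not a stated figure:

```python
# If 0.001666 ether corresponds to about 40 euro cents, the implied
# ether price at the time of writing was roughly EUR 240.
eth_amount = 0.001666            # ether preloaded in the stamp's wallet
eur_value = 0.40                 # stated euro equivalent
implied_eth_price = eur_value / eth_amount
print(round(implied_eth_price))  # prints 240
```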

The Crypto Stamp has been issued in a run of 150,000 and can be bought in the usual way in retail outlets and in the Austrian Post's online shop. Not all stamps are sold normally, though: 500 of them can be bought exclusively in the Austrian Post's so-called onchain shop – that is, directly on the blockchain. And this is where it gets interesting.

Buying in the onchain shop

The onchain shop can be reached through an ordinary browser. Alternatively, the Ethereum address 0xC5BA58b8362a25b1ddB59E2106910B6c324A5668 can be used. Buying in the onchain shop requires that the buyer already owns an Ethereum wallet holding some of the cryptocurrency ether.

The actual purchase of the crypto stamp is then settled via a smart contract on the Ethereum blockchain. The smart contract contains a rule set that essentially follows this logic: if a user sends the current equivalent of 6.90 euros in ether from their wallet to the smart contract's blockchain address, the contract in return sends a unique token of the type "Crypto stamp Edition 1" to the buyer's wallet. The purchase is thereby complete, and the virtual collectible has already been transferred to the buyer. What makes this kind of purchase special: up to this point, no directly personal data had to be exchanged.
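The purchase logic can be sketched as a small simulation. This is only an illustration of the rule set described above, not the actual Crypto Stamp contract; the wei price, the stock handling and the buyer address are assumptions:

```python
class CryptoStampShop:
    """Toy model of the onchain shop's rule: pay the ether equivalent
    of EUR 6.90, receive one unique 'Crypto stamp Edition 1' token."""

    def __init__(self, price_wei, stock):
        self.price_wei = price_wei   # assumed current ether equivalent of EUR 6.90
        self.stock = stock           # 500 stamps were reserved for the onchain shop
        self.owner_of = {}           # token_id -> buyer address
        self._next_id = 1

    def buy(self, buyer, value_wei):
        """Mimics the contract's payable entry point."""
        if value_wei < self.price_wei:
            raise ValueError("insufficient payment")
        if self.stock == 0:
            raise ValueError("sold out")
        token_id = self._next_id
        self._next_id += 1
        self.stock -= 1
        self.owner_of[token_id] = buyer   # collectible transferred immediately
        return token_id

shop = CryptoStampShop(price_wei=30_000_000_000_000_000, stock=500)
token = shop.buy("0xBuyerAddress", 30_000_000_000_000_000)
```

A real contract would additionally have to handle refunds of overpayment and the price escalation for the last 100 stamps; both are omitted here.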

The shipping address is communicated directly via the blockchain

What is still missing at this point is the stamp in paper form – because that, too, is part of the onchain purchase, and the paper stamp is shipped worldwide. Remarkable is how the postal service obtains the buyer's shipping address: the buyer visits the onchain shop with their digital wallet activated and enters the address into a form that at first glance resembles a normal web form. The data, however, is not sent to a server of the Austrian Post but written directly and immutably into the blockchain. Communication thus happens over the blockchain itself. So that no data-protection breach arises here, the data is not stored on the blockchain in plain text; it is encrypted with a public key of the postal service before being stored.
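The encrypt-before-storing idea can be illustrated with textbook RSA on tiny primes. This is a toy for clarity only: the actual scheme, key sizes and keys used by the Austrian Post are not given in the article, the address is made up, and a real system would use a vetted hybrid scheme rather than per-byte textbook RSA:

```python
# The post publishes (n, e); anyone can encrypt, only the post can decrypt.
p, q = 61, 53                       # the post's secret primes (toy-sized)
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, kept by the post

def encrypt_byte(m):
    return pow(m, e, n)             # done in the buyer's browser

def decrypt_byte(c):
    return pow(c, d, n)             # done by the post after reading the chain

address = b"Postgasse 8, 1010 Wien"               # hypothetical shipping address
ciphertext = [encrypt_byte(b) for b in address]   # this is what goes on-chain
recovered = bytes(decrypt_byte(c) for c in ciphertext)
assert recovered == address
```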

Collect, collect, collect

Outside the blockchain, the crypto section of the Austrian Post offers further possibilities covering all 150,000 stamps and their digital tokens. For instance, each token's digital history is publicly viewable: you can see which wallet holds which token, whether a token was sold within the blockchain and, if so, to which address. The entire transaction history of the tokens can thus be looked up by anyone. What looks at first glance like a privacy violation is entirely normal on the Ethereum blockchain; the blockchain serves as a kind of public ledger.

To make collectors' hearts beat faster, the Austrian Post's website lets you display further metadata about your digital stamps online. The virtual crypto stamp exists in five different colors; the rarest, with 1,500 pieces, is the virtual red stamp.


The virtual version of the Crypto stamp exists in five different colors. (Screenshot: t3n)


Which color a given stamp has is determined by the token ID stored in the blockchain. Unfortunately, the metadata associated with a token ID is not stored in the blockchain itself but in a centralized database reachable on the web – the number at the end of the URL corresponds to the respective token ID. The risk for collectors: should the Austrian Post take the pages offline, this information essentially no longer exists. Moreover, the information could be changed at any time. This rather contradicts the idea of a true crypto collectible and dampens the collecting fun somewhat.

In return, the smart contract of the onchain shop contains one more interesting feature: once only 100 stamps remain for sale in the onchain shop, the price of each of these last stamps is raised by a factor of 1.08. Concretely, that means the last Crypto Stamp in the onchain shop would have to go over the virtual counter for roughly 13,000 euros. At least in theory.
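The escalation arithmetic can be checked directly. This assumes the 1.08 factor compounds once per stamp over the final 100 units; the exact on-chain mechanics are not spelled out in the article:

```python
base = 6.90                                       # normal price in EUR
prices = [base * 1.08 ** k for k in range(100)]   # k = 0 for the first of the last 100
last_price = prices[-1]
# Compounding 99 times gives about EUR 14,000; one step fewer (98 times)
# gives about EUR 13,000, the figure quoted above.
```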

So: data junk or valuable collectible?

The answer lies in the eye of the beholder. For a price of 6.90 euros, buyers certainly get quite a lot. And one thing must be acknowledged: the Austrian Post has spared no effort. Thanks to the access credentials, every paper stamp is an individually printed one-off with a virtual counterpart, and for many buyers it will, on top of that, open up access to the cryptocurrency ether for the first time. Anyone who now feels like taking their first steps on the blockchain needs, besides the Crypto Stamp, only a browser extension to gain access to the Ethereum network. A widely used plugin for this is Metamask, available for Chrome and Firefox, among others.





Upward trend for cryptocurrencies in the first half of 2019

There are currently 2,238 different cryptocurrencies with a total value of over 280 billion US dollars, more than half of which is accounted for by Bitcoin. Now Facebook has presented its own digital currency, "Libra". Since the world's largest social network, with its 2.3 billion active accounts, brings along plenty of potential users, the project is being assigned great significance. Just a few years ago, crypto coins were at best something for internet nerds: in mid-2013 there were just 26 different digital currencies, with a total value of 1.1 billion US dollars.



Massive glitch at Commerzbank on June 3: total transaction amnesia

June 5, 2019


Let's get straight to the point: no, the mega-glitch in Commerzbank's IT is not fixed. Even though the bank announced as much on Monday at 4:10 p.m. via social media, apparently its preferred channel for these purposes ("The disruption has been resolved"). And then once more on Monday evening ("The disruption has been resolved"). And then yesterday at 5:35 p.m. on Facebook, a third time ("The disruption has been resolved").

Because: what is the definition of "disruption resolved"? That the total blackout is over for now? Okay, then what Commerzbank says is true. But shouldn't the term "disruption" actually be taken a bit more broadly?

Because: what about the cleanup work and the various repair attempts with their changing recommended actions – aren't those part of the "disruption"? What about the customers still waiting for their money? What about the bounced direct debits (where one must assume many customers don't even know a debit bounced, because the originator was told the account did not exist)? And what about the PayPal accounts that were closed because the reference account at Commerzbank was unavailable?

Disruption resolved? Shouldn't Commerzbank instead reach out to its customers proactively, so that at the end of the whole affair it can at least be said: damage contained!

For those who tuned in late, the short version: on Monday – of all days the first business day of the month, with correspondingly heavy payment traffic – Commerzbank suffered an (at least) 8.5-hour "blackout" starting at midnight.

  • During this time, neither direct debits nor credit transfers nor standing orders could be booked on the affected accounts.
  • Outgoing payments: broken.
  • Incoming payments: broken.

Think of it like a pub-goer who can still remember walking through the door at midnight, but whose memory only resumes around half past eight in the morning – and who then has to piece together, bit by bit, what happened (or may have happened) during the night.

At Commerzbank, in any case, memory reaches back to Sunday midnight. Up to that point everything is there, including the standing orders due. It resumes on Monday at 8:30 a.m. But what happened in between, the bank can recall at best in fragments. And of payments and direct debits arriving from outside for the affected accounts: nothing at all.

Sounds exaggerated? Then let's go through what our reporting has turned up, including the answers to the questions we put to Commerzbank. Judge for yourself:

  • Customers with an affected Commerzbank account who are waiting for an incoming payment (wages, salary, pension, alimony, parental allowance, rent…) wait in vain; the "senders" found that the money was not transferred, with the note "IBAN invalid". Presumably not even all senders (or recipients) have noticed this so far, since not everyone out there checks their account activity daily, let alone hourly. In its customer area, Commerzbank writes: "Most incoming payments will be initiated again by the senders in the coming days." Which makes us wonder: what makes Commerzbank so sure? After all, this does not happen automatically; customers have to instruct it. On inquiry, Commerzbank stated it had "informed all major banks so that they can ask their customers to execute the payments again." But will they do that? And by when?
  • Customers with an affected Commerzbank account who have authorized a direct debit – here too (for debit-card payments, mobile-phone bills, Netflix, electricity and gas installments, etc.), the collecting bank received the note that the IBAN was unknown. "Regarding incoming direct debits, we assume these will be resubmitted shortly. Otherwise the customer will typically be contacted by the submitter of the debit," Commerzbank explains. In other words: here too, the yellow bank knows nothing. Customers, collecting banks and originators have to solve the problem themselves. And companies and their payment systems, we would imagine, find unredeemable direct debits anything but amusing.
  • It gets bizarre with scheduled transfers from affected Commerzbank accounts with execution dates of June 1, 2 or 3, and with transfers made or booked up to 8:30 a.m. on Monday. Judging by hundreds of social media posts, exactly these transactions frequently appear with the status "pending" in the activity overview and also weigh on the account balance and limits (examples here, here and here, many more here). This can financially paralyze the customers concerned and in some cases makes it impossible to re-enter transfers manually. Yet the transfers are still not "on their way". As late as Monday, Commerzbank's online information said: "If the order is still shown under 'pending bookings' or not shown at all, it will no longer be booked. In that case please enter the order again." Customers who did exactly that now have a problem, because the transfers are still "stuck". Since yesterday, however, Commerzbank has been declaring repeatedly via social media (here, here, here and here): "We are currently checking whether we can also re-book transfers automatically" – so these transfers may perhaps be booked after all. Whoever is in a hurry should, to be safe, transfer the money themselves. Provided the funds suffice, one is tempted to add. And the then-looming "double bookings"? Please contact your advisor, says the bank. {UPDATE June 5, 2019, 10:21 a.m.: According to Commerzbank, reprocessing of the not-yet-executed transfers and scheduled transfers began today (Wednesday). Customers are asked not to make any additional manual transfers.}
  • That the plans for getting the memory loss under control changed along the way is something the likewise affected Commerzbank subsidiary Comdirect admits openly. According to corresponding posts, customers there received a message in their PostBox about the non-execution of their orders (examples here and here), along with the advice to submit the transfer themselves – only to be confronted on Tuesday with the information that the transfers might be booked after all. Which has in fact led to double bookings, and raises the question of why it was first said these transactions would not be executed. "At the time of our last info, that was indeed the plan. Sometimes one mishap follows another. Once again: sorry! Please arrange with the payment's recipient for the money to be transferred back," said Comdirect via Twitter. In plain language: dear customers, chase after the money you transferred because of our misinformation.
  • Anything else (I)? Asked how it can be that the old Dresdner Bank accounts and those of the old Dresdner branches were not affected (see our article from yesterday), Commerzbank answers: "Because they have the digit '8' in the fourth position of the bank sort code, while the disruption affected only accounts with the digit '4'" – well, exactly, that was our question: why were some accounts affected and others not, eight years after the integration of Dresdner Bank?
  • Anything else (II)? How many affected customers, transfers and direct debits are there? Commerzbank declines to say, which on a charitable reading does not exactly suggest a manageable case count it would be happy to communicate. And which, on an uncharitable reading, one can even believe: given the amnesia, it may genuinely not know, or at best approximately, based on earlier transactions. Here is one approximation: Commerzbank puts its own market share at 8% in retail banking and a good 5% with corporate customers. In 2017, banks in Germany processed 10.3 billion direct debits and 6.3 billion credit transfers. That averages out to roughly 64 million transactions per banking day, presumably more at the start of a month. Whether the number of affected transactions is six or seven digits – even after subtracting the unaffected old Dresdner accounts – we leave to your speculation, dear readers.
  • Anything else (III)? On social media, complaints are also piling up from corporate customers (here in the thread) who are expecting incoming payments from their own customers and now have to chase the money and the customers individually. Many customers also complain that branches and hotlines remain hard to reach.
  • Any positive developments (I)? Yes: according to its own statements, Commerzbank has now fully booked the outgoing standing orders.
  • Any positive developments (II)? Commerzbank announced it would approach the payment provider PayPal about a solution, which however (cf. "disruption resolved"?) apparently has not yet been found. PayPal removed the Commerzbank account as a reference account for security reasons when it could not debit from it – and, according to customers, they could not re-add it either. The account remains blocked until customers provide an alternative reference account (examples here, here and here). So not really a positive development after all.
  • Any positive developments (III)? Yes: via social media, the bank and its communicating employees are also receiving a lot of encouragement – mistakes happen, people say, and they wish the bank steady nerves for the cleanup. And we join in that wish, without any cynicism – being far from error-free ourselves.
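The order-of-magnitude estimate in the bullets above can be reproduced in a few lines. The 260 banking days and the uniform spread of traffic over the day are assumptions; the other figures come from the article:

```python
direct_debits = 10.3e9        # German direct debits in 2017
credit_transfers = 6.3e9      # German credit transfers in 2017
banking_days = 260            # assumed banking days per year

per_day = (direct_debits + credit_transfers) / banking_days   # ~64 million

# 8.5 hours of outage, scaled by Commerzbank's stated market share (5-8%),
# ignoring that overnight hours carry less traffic than the month-start peak:
affected_low = per_day * 0.05 * 8.5 / 24
affected_high = per_day * 0.08 * 8.5 / 24
```

Even this crude spread lands between roughly one and two million transactions, which is why the six-versus-seven-digit question is plausible.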






"We currently have a disruption"

Commerzbank battles another IT glitch


Commerzbank customers are once again annoyed by problems. Cash withdrawals at ATMs and card payments ran into trouble, of all times, just before the weekend. Commerzbank speaks of a "disruption in the IT".

Germany's second-largest private bank is once again battling the pitfalls of technology: serious problems struck Commerzbank again in the morning. It is already the second computer glitch within a few weeks.

"We currently have a disruption in the IT," the bank announced before the weekend. Among other things, ATMs and card payments were affected. The bank said it was working on a solution. "Please excuse the inconvenience."

After a good hour, at least some of the problems were fixed. "ATMs and card payments are working again," said a further statement released late in the morning. Logging in to online banking, however, remained possible only to a limited extent.

"Working flat out"

No further details were given at first. The technical difficulties triggered no visible reaction on the stock market: the Commerzbank share held at 6.28 euros, a strong 1.23 percent up on the day and thus notably firmer than the overall market.


An IT glitch had already caused trouble for customers at the beginning of the month. Because of a technical disruption, standing orders, credit transfers and direct debits temporarily could not be processed. Particularly awkward: it is precisely at the start of a month that especially many standing orders are executed and direct debits collected.

During the incident in early June, neither direct debits nor credit transfers nor standing orders could be booked on affected accounts for a good eight hours – of all days on the first working day of the month, when many customers were expecting salary or pension payments. According to its latest figures, Commerzbank has a good 13 million retail and corporate customers in Germany.

In the spring, a takeover of Commerzbank by Deutsche Bank was temporarily on the table. The exploratory talks about a possible merger were ended without result, however. "We are strong enough to go our own way alone," Commerzbank CEO Martin Zielke had declared at the end of April.



Source: mmo/dpa/rts




There is no cloud – it’s just someone else’s computer.

P.S. you can get the sticker here:

€2.67 for small
€7.43 for medium
€11.13 for large size
(prices include EU VAT – I am not associated with sticker sales, just saving you the search)


People often argue "you get the elasticity", "extra bandwidth at spikes", and "a rich toolkit to pick from", as if this changed the fundamentals?

There can be good arguments for using someone else's computer – you may save some up-front capital and personnel expenses; but it comes at a price (you lose control over all of it, in every aspect), and one should compare the actual cost of cloud vs. owning very carefully before claiming 'savings'.

  • all your eggs are in one basket
  • the basket is not yours: both the basket and your eggs are controlled by someone else
  • you have to trust service availability (it seems rock solid at first – "too big to fail"? – but every little admin mistake can take you down along with everyone else, with little hope of recovering your losses, which you might otherwise have with a decent SLA)
  • you have to trust someone else’s cyber security measures
  • you are a small part of a highly attractive target for the bad guys – and not just a (much smaller) target on your own merit
  • you are at the total mercy of that someone else if the business you bring no longer fits their model – see Google, see Tumblr (and have you ever tried migrating from one cloud to another?)
  • you can only use the toolset offered – should you require different functionality or tools: sorry, that does not fit their business model, which reacts only to the needs of many, many customers at once, when it is worth their while
  • you have zero control over latency – should you require different measures from the lowest common denominator across all cloud customers (private BGP4 peering, and such), the little business you bring to the cloud provider's table is just too small for extras
  • you DO have the cosy feeling of "everyone else does it, so I won't get fired over a decision to go cloud" – but do you really want to be known for that Dilbert-like, middle-manager image from the 1990s, when people made the same choice in favor of Microsoft products? The Microsoft monopoly has since milked you again and again with new, expensive versions of the same stuff you had no choice about (migration to what?), and made you suffer painfully for the lousy cyber security you automatically outsourced to them – zero-days, phishing via VBS, crypto-locking malware, disruptive "Patch Tuesdays" treated as if they were normal. And yet, having learned nothing from it, do you keep thinking today's cloud business model is any different from Micro$oft's in the old days? (Hint: Microsoft also got into clouds in a big way with "Azure" – which now increasingly runs on Linux, like the rest of the cloud providers and the internet.)
  • so: all this is worth how much in ‘savings’?

There is a reason why several big and many medium-size businesses build their entire business model around it – they would not, unless it made them a bunch of money (that is, the money you think you are “saving”, plus the money made from using and selling your data). Example: AWS makes much more profit than Amazon’s whole, huge retail empire – with a lot less effort. Should make you think :)

So while this sticker is meant to be funny, it does put the finger where it should hurt, too.

P.S.: of course, there are always alternatives – there is never just one, inevitable choice. If you have trouble thinking of those yourself, do some smart searches on the net; and you are welcome to talk to me about it.


Global data volumes are exploding, pushing traditional storage media to the limits of their capacity. The solution is the cloud. As the chart based on the Statista Digital Economy Compass shows, as early as next year more data will be stored and delivered via the internet on large server farms than on local devices. For private consumers and business customers this offers some advantages in terms of convenience and workflow speed, but it also has drawbacks: data leaks keep occurring, in which sensitive customer data is stolen. The financial damage is usually high and varies by industry.


“As mentioned by Nicolas Marot, Google is currently having major service problems.
Snapchat (uses Google Cloud) is described as “Down” and there is a lot of red on the Google status dashboard.” – 04 JUN 2019




03 JUN 2019

50,000 Windows database servers infected with crypto miners

Hackers, presumably Chinese, have infected Microsoft SQL Server and phpMyAdmin installations worldwide with hidden crypto-miner scripts. While they earned units of the cryptocurrency Monero, the infected servers paid in computing power and other resources. The whole thing was uncovered by the American-Israeli security firm Guardicore in early April.

Over 50,000 Microsoft Windows SQL database servers
infected with crypto miners via an ancient Windows bug

Using sophisticated methods, hackers have hijacked tens of thousands of poorly secured Windows servers and are secretly mining Monero on them.

Unknown hackers, presumably from China, are currently infecting Microsoft SQL Server and phpMyAdmin installations around the world with hidden crypto-miner scripts. The victims pay in computing power and other resources; the attackers quietly earn units of the cryptocurrency Monero.

While such attacks usually get by with relatively simple, and thus comparatively easy-to-detect, methods, the current attacks stand out for their rather sophisticated tricks. Among other things, the attackers use malware signed with a valid certificate, which lets it fly under the radar of many automatic detection mechanisms, including those of Windows itself.

The attacks were discovered by the American-Israeli security firm Guardicore in early April. Since then, the firm’s security researchers have identified further attack patterns that they attribute to the same hackers. According to their findings, the group has been active since at least the end of February and has continuously refined its attack techniques and malware.

In total, the researchers identified up to 20 different malware variants. By their reckoning, the hackers infect up to 700 different servers per day. In the course of their investigation, the researchers gained access to the hackers’ command-and-control servers and put the number of currently infected servers at close to 50,000 systems.

The hackers appear to be targeting Windows servers. Systems in the healthcare sector, at IT firms, telecommunications providers and media companies have already fallen victim. In a first step, the hackers gain access to the Microsoft SQL Server on the system by brute-forcing weak passwords. They then use that SQL Server access to create and execute a VB script that installs, hides and runs the crypto miner on the system.
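The first stage of this attack, brute-forcing weak passwords, succeeds because short passwords over small alphabets span tiny keyspaces. A minimal back-of-the-envelope sketch in Python (the guess rate is an illustrative assumption, not a measured figure):

```python
def keyspace(length: int, alphabet_size: int) -> int:
    """Number of candidate passwords of exactly `length` characters."""
    return alphabet_size ** length

def seconds_to_exhaust(length: int, alphabet_size: int,
                       guesses_per_second: float) -> float:
    """Worst-case time to try every candidate at a given guess rate."""
    return keyspace(length, alphabet_size) / guesses_per_second

# A 6-character, lowercase-only password against a modest online
# guessing rate (1e4 guesses/s is an assumed, illustrative figure):
space = keyspace(6, 26)                        # 26**6 = 308_915_776 candidates
hours = seconds_to_exhaust(6, 26, 1e4) / 3600  # well under a day
```

Even at a modest online rate, the entire lowercase 6-character space falls within a day; rate limiting, lockouts and long random passwords push this far out of reach.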

The attackers abuse an old vulnerability in the Windows kernel (CVE-2014-4113, patched by Microsoft in October 2014) to obtain SYSTEM privileges for this step. In some cases, a similar attack took place via weak phpMyAdmin passwords, likewise on Windows.

The attackers anchor their mining malware firmly in the system with registry keys and install a rootkit that monitors the miner process and prevents the system from terminating it. They exploit the kernel vulnerability using a driver that, at the time of the attack, held a valid Verisign certificate issued to a company named Hangzhou Hootian Network Technology; the company name is a fake. Following a tip-off from the security firm, Verisign has since revoked the certificate, which is no longer valid.

Because of the fake Chinese company, and because parts of the malware code are written in the proprietary Chinese programming language Easy Programming Language (EPL), the security researchers suspect the attackers are based in China. They have therefore dubbed the attack series “Nansh0u”. This string appeared in a file created during the attacks; “nánshòu” is Mandarin for “uncomfortable” or “hard to bear”.

Above all, Nansh0u illustrates two things: the global internet is full of Windows servers with ancient, unpatched security holes. These endanger systems even when they cannot be used directly to break into the server, because where there are ancient unpatched holes, there are often also weak passwords and insufficient brute-force protection.

The attacks also show that crypto-mining attacks are not only carried out by script kiddies with off-the-shelf exploit kits, but also by attackers who operate very professionally and put a lot of effort into their techniques. Mining Monero evidently still earns enough money to justify that kind of effort. (fab)

Published 30 JUN 2019




03 JUN 2019

In the broader context (centralization, global monopolies):


The strategists shamefully conceal the decisive fact from us: Deutschland AG no longer belongs to the Germans. 85 percent of the DAX is now in foreign hands.

North American and British investors currently hold 54.1 percent of the shares in the 30 DAX companies. That is revealed by a recent study by the German Investor Relations Association (DIRK).

► The USA has increased its share of Deutschland AG from 32.6 percent (2016) to 33.5 percent (2017) to 34.6 percent (2018) and, given the weak share prices of many former blue-chip firms, continues to buy.

► The largest single investor in originally German assets in the DAX is BlackRock, with 9.4 percent.

► Chinese and other Asian investors, contrary to what the media alarmism would suggest, play only a minor role, at just under four percent.


via: Handelsblatt Daily



Those Machines In The Cloud

Cloud AI And The Future of Work


“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” — Larry Page

What happens when you take two fundamentally life-changing technologies and merge them into an ultimate use case? The answer: businesses may become more efficient, but social disruption could become more prevalent. The argument for Universal Basic Income (UBI) becomes stronger as jobs get automated and vanish from the corporate landscape. However, all this is conjecture at this point. Big Tech firms are now offering Machine Learning (ML) tools on their respective clouds, which allow corporate IT departments and novices to create ML applications that automate tasks without writing much code. This article takes a look at a nascent boom in Research and Development (R&D) and the new cloud AI platforms deployed by Google, Microsoft and Amazon. It concludes with a futuristic view of the employment landscape should these technologies succeed in establishing ML Platforms as a Service (PaaS) for creating and deploying AI and ML applications.

Introduction To The Cloud

The cloud refers to the internet. Period. The internet came to be called the cloud because IT system diagrams depicted it with a cloud symbol. The concept dates back to the 1960s, with some attributing the idea to John McCarthy and others to J. C. R. Licklider, who enabled the development of ARPANET (the precursor to the modern internet). Irrespective of the attribution, the cloud was envisioned as a computer on the internet that would provide infrastructure such as storage, platforms such as operating systems, and software applications over the internet for a fee. In a nutshell, it was conceived as renting hardware and/or software depending on the user’s requirements.

Subsequently, the launch on the cloud of the Customer Relationship Management (CRM) software ‘salesforce’, which companies could license for a fee, marked the beginning of the era of pervasive cloud computing; delivering software over the internet was, in fact, salesforce’s raison d’être. Amazon then gave the cloud a further popularity boost with the launch of the Elastic Compute Cloud, a pay-as-you-go cloud. Today, Microsoft offers its cloud under the name Azure, Amazon under the name Amazon Web Services (AWS), and Google under Google Cloud Services.

As mentioned before, the cloud can host Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Also, a cloud can be public (open to all on a sharing basis) or private. The total market for global public cloud is estimated at over $150 billion in 2020.



Amazon, with its first-mover advantage, leads the pack in terms of global market share. Today, the market leader offers a plethora of solutions under the AWS banner.


Source: Wall Street Journal


Today, any business can be virtualized. What that means is that the backend becomes a service that can be rented from a single provider or from several.

A financial institution can be set up completely on the cloud, offering products via websites and mobile apps. It could leverage networks such as the STAR network for issuing ATM/debit cards. Blockchains could become essential for accounting and for facilitating cross-border transactions. Banks can thus go completely digital, without any brick-and-mortar presence. However, that also means the field would be ripe for Big Tech to enter and leverage their networks of users to build a loyal clientele.

Cloud computing has therefore spawned a revolution that is taking entire industries and virtualizing them. The next step is to look at human tasks that can be automated. This is where Artificial Intelligence (AI) residing on the cloud is key.

When AI Meets The Cloud

Now you know the cloud (internet, web 2.0, call it what you will) is everything today. However, challenges around 24/7 internet connectivity and cyber security still impede a complete overhaul of IT systems across the world. That has not stopped technology companies from spending billions of dollars on AI research. A deep learning revolution that began with Geoffrey Hinton’s backpropagation is now being continued in the form of convolutional neural networks (CNNs) and generative adversarial networks (GANs). The evolution in approaches to machine learning is quite mind-boggling. It is as if the torch is being passed from one leader to another across the world, with no clear indication of who will emerge the likely winner. Till then, the battle rages on. The new arena is called Machine Learning as a Service (MLaaS).
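At its core, the convolution in a CNN is a small dot product slid across the input. A self-contained Python sketch of that single operation (frameworks such as TensorFlow implement it in optimized, batched form, and the kernel values are learned during training rather than fixed as here):

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D cross-correlation: the core operation a CNN layer
    applies many times in parallel with learned kernels."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny difference kernel over a 3x3 'image' yields a 2x2 feature map:
feature_map = conv2d_valid(
    [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]],
    [[1, 0],
     [0, -1]],  # pixel minus its lower-right neighbor
)
```

A convolutional layer stacks many such kernels, so the network learns which local patterns (edges, textures) matter for the task.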




There are tons of stories about how Microsoft, Google and Amazon are making ML accessible not only to data scientists but to common people as well. One of the most interesting is how Makoto, a farmer from Japan, is using AI to cultivate cucumbers with Google’s open-source TensorFlow platform.



Microsoft Azure AI

Microsoft announced its cloud computing service in October 2008 and released it on February 1, 2010. Elektronische Fahrwerksysteme, which develops chassis systems for Audi, uses Microsoft Azure to analyze roads. The idea is to enable autonomous vehicles to think ahead and understand the roads they are on:

As part of its research efforts, the company used Azure NC-series virtual machines powered by NVIDIA Tesla P100 GPUs to drive a deep learning AI solution that analyzes high-resolution two-dimensional images of roads. (source: Microsoft)

Ubisoft, a video game publisher, runs its eSports game, Rainbow Six Siege, in Microsoft Azure:



In 2016, Microsoft created what it called “the world’s first AI supercomputer” by installing Field-Programmable Gate Arrays (FPGAs) across every Azure cloud server in 15 countries. As per Wikipedia, an FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence “field-programmable”.

Google Cloud AutoML, Gluon, Tensorflow

Fei-Fei Li, Chief Scientist for Cloud AI at Google, is trying to make machine learning accessible to all businesses. However, she also notes that very few corporations have the talent and other resources necessary to successfully embed AI into their business applications. To support its bid to gain and retain leadership in the cloud AI space, Google opened up an entire ecosystem to developers, which includes TensorFlow and Kubeflow as well as its container orchestration system Kubernetes.



Newspapers such as the Dainik Bhaskar (DB Corp) group in India, as well as the Hearst group of publications, use Google Cloud AI to categorize digital content across their digital properties.

Amazon SageMaker

Amazon SageMaker is a platform for building, training and deploying machine learning models. It was launched in November 2017. As per Amazon:

What Is Amazon SageMaker?

Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don’t have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own-algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to your specific workflows. Deploy a model into a secure and scalable environment by launching it with a single click from the Amazon SageMaker console. Training and hosting are billed by minutes of usage, with no minimum fees and no upfront commitments.

This is a HIPAA Eligible Service. For more information about AWS, U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), and using AWS services to process, store, and transmit protected health information (PHI), see HIPAA Overview.

Are You a First-time User of Amazon SageMaker?

If you are a first-time user of Amazon SageMaker, we recommend that you do the following:

  1. Read How Amazon SageMaker Works – This section provides an overview of Amazon SageMaker, explains key concepts, and describes the core components involved in building AI solutions with Amazon SageMaker. We recommend that you read this topic in the order presented.
  2. Read Get Started – This section explains how to set up your account and create your first Amazon SageMaker notebook instance.
  3. Try a model training exercise – This exercise walks you through training your first model. You use training algorithms provided by Amazon SageMaker. For more information, see Get Started.
  4. Explore other topics – Depending on your needs, do the following:
  5. See the API Reference – This section describes the Amazon SageMaker API operations.


Let’s take an example to understand how Amazon’s applications help integrate machine learning into everyday life.

Alex Schultz, a father with no deep learning experience, built ReadtoMe, an application that reads books to his kids, using AWS DeepLens. He built it with OpenCV, the DeepLens camera, Python, Polly, Tesseract-OCR, Lambda, MXNet and Google’s TensorFlow. This example demonstrates that ML is accessible and can be embedded in real life through a variety of applications, which strengthens the argument that it may become all-pervasive and ubiquitous.




On October 12, 2017, Amazon Web Services and Microsoft announced a new deep learning library called Gluon, which allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, for devices at the edge, and for mobile apps.

AI is a very broad term that covers many building blocks: the cloud platform, the programming platform, the APIs, as well as the integrated circuits. Such a wide variety of inter-related and continuously evolving technologies, with a wide array of applications in business and daily life, gives Big Tech companies an edge. Which of them is first among equals is for the future to say.

Future of Work

Every company is becoming a technology company. If not, they need to be aware of the mega trends and deploy resources wisely to prevent obsolescence.

Financial institutions today prefer to rent Software as a Service (SaaS) because developing software is not their core competency. Also, there is so much churn in technology today that being nimble and flexible is key to survival and growth.

Imagine a scenario where you want a certain sales report or a business review prepared for senior leadership. You could just give a voice command to a digital assistant (AI) asking for a year-to-date national report of all sales during 2018. The data would be stored on the same cloud as the AI assistant. The AI bot would then organize the data and send it to an output of your choice, which could be an augmented-reality screen. Now extend this scenario to all routine and non-routine tasks that can be automated, and you can imagine how scary AI can be as a technology. It is as if technology just gobbled up the world of work as we know it.

At first, routine tasks get automated. Later, AI can become a recommendation engine, and finally it will be able to take decisions on its own. Estimates of the extent of automation over the next decade or so vary. However, the takeaway for most is that learning new skills, treating life as a continuing education, should be the mantra.


Source: McKinsey


While technological disruption is continuing its march into the workplace, AI and its effects are not pervasive enough to cause social unrest. Yet. Therein lies the ethical dilemma.

For instance, politicians in developing economies such as India are already hearing distressed voices complaining about the loss of driving jobs to automated vehicles. When an automated Uber killed a pedestrian in Arizona, the incident gave rise to doubts about the viability of automation. As with Bitcoin and CRISPR-Cas9 gene editing, AI regulation will need to be ahead of the game.

However, for common folk like us, learning just acquired a whole new practical meaning.






Human-level performance in 3D multiplayer games with population-based reinforcement learning

Science  31 May 2019:
Vol. 364, Issue 6443, pp. 859-865
DOI: 10.1126/science.aau6249


End-to-end reinforcement learning (RL) methods (1–5) have so far not succeeded in training agents in multiagent games that combine team and competitive play owing to the high complexity of the learning problem that arises from the concurrent adaptation of multiple learning agents in the environment (6, 7). We approached this challenge by studying team-based multiplayer three-dimensional (3D) first-person video games, a genre that is particularly immersive for humans (8) and has even been shown to improve a wide range of cognitive abilities (9). We focused specifically on a modified version (10) of Quake III Arena (11), the canonical multiplayer 3D first-person video game, whose game mechanics served as the basis for many subsequent games and which has a thriving professional scene (12).

The task we considered is the game mode Capture the Flag (CTF), which is played on both indoor- and outdoor-themed maps that are randomly generated for each game (Fig. 1, A and B). Two opposing teams consisting of multiple individual players compete to capture each other’s flags by strategically navigating, tagging, and evading opponents. The team with the greatest number of flag captures after five minutes wins. The opposing teams’ flags are situated at opposite ends of each map—a team’s base—and in indoor-themed maps, the base room is colored according to the team color. In addition to moving through the environment, agents can tag opponents by activating their laser gadget when pointed at an opponent, which sends the opponent back to their base room after a short delay, known as respawning. If an agent is holding a flag when they are tagged, this flag is dropped to the floor where they are tagged and is said to be stray. CTF is played in a visually rich simulated physical environment (movie S1), and agents interact with the environment and with other agents only through their observations and actions (moving forward and backward; strafing left and right; and looking by rotating, jumping, and tagging). In contrast to previous work (13–23), agents do not have access to models of the environment, state of other players, or human policy priors, nor can they communicate with each other outside of the game environment. Each agent acts and learns independently, resulting in decentralized control within a team.


Fig. 1 CTF task and computational training framework.
(A and B) Two example maps that have been sampled from the distribution of (A) outdoor maps and (B) indoor maps. Each agent in the game sees only its own first-person pixel view of the environment. (C) Training data are generated by playing thousands of CTF games in parallel on a diverse distribution of procedurally generated maps and (D) used to train the agents that played in each game with RL. (E) We trained a population of 30 different agents together, which provided a diverse set of teammates and opponents to play with and was also used to evolve the internal rewards and hyperparameters of the agents and the learning process. Each circle represents an agent in the population, with the size of the inner circle representing strength. Agents undergo computational evolution (represented as splitting), with descendants inheriting and mutating hyperparameters (represented as color). Gameplay footage and further exposition of the environment variability can be found in movie S1.


Learning system

We aimed to devise an algorithm and training procedure that enables agents to acquire policies that are robust to the variability of maps, number of players, and choice of teammates and opponents, a challenge that generalizes that of ad hoc teamwork (24). In contrast to previous work (25), the proposed method is based purely on end-to-end learning and generalization. The proposed training algorithm stabilizes the learning process in partially observable multiagent environments by concurrently training a diverse population of agents who learn by playing with each other. In addition, the agent population provides a mechanism for meta-optimization.

In our formulation, the agent’s policy π uses the same interface available to human players. It receives raw red-green-blue (RGB) pixel input x_t from the agent’s first-person perspective at time step t, produces control actions a_t ~ π(·| x_1, …, x_t) by sampling from the distribution given by policy π, and receives ρ_t, game points, which are visible on the in-game scoreboard. The goal of RL in this context is to find a policy that maximizes the expected cumulative reward E_π[∑_{t=1}^{T} r_t] over a CTF game with T time steps. We used a multistep actor-critic policy gradient algorithm (2) with off-policy correction (26) and auxiliary tasks (5) for RL. The agent’s policy π was parameterized by means of a multi–time scale recurrent neural network with external memory (Fig. 2A and fig. S11) (27). Actions in this model were generated conditional on a stochastic latent variable, whose distribution was modulated by a more slowly evolving prior process. The variational objective function encodes a trade-off between maximizing expected reward and consistency between the two time scales of inference (28). Whereas some previous hierarchical RL agents construct explicit hierarchical goals or skills (29–32), this agent architecture is conceptually more closely related to work outside of RL on building hierarchical temporal representations (33–36) and recurrent latent variable models for sequential data (37, 38). The resulting model constructs a temporally hierarchical representation space in a way that promotes the use of memory (fig. S7) and temporally coherent action sequences.
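Written out as a display formula, the extraction-damaged objective above is the standard expected-return objective (reconstructed in LaTeX from the surrounding definitions):

```latex
a_t \sim \pi(\,\cdot \mid x_1, \dots, x_t\,),
\qquad
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} r_t\right]
```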


Fig. 2 Agent architecture and benchmarking.
(A) How the agent processes a temporal sequence of observations x_t from the environment. The model operates at two different time scales, faster at the bottom and slower by a factor of τ at the top. A stochastic vector-valued latent variable is sampled at the fast time scale from distribution Q_t on the basis of observations x_t. The action distribution π_t is sampled conditional on the latent variable at each time step t. The latent variable is regularized by the slow-moving prior P_t, which helps capture long-range temporal correlations and promotes memory. The network parameters are updated by using RL according to the agent’s own internal reward signal r_t, which is obtained from a learned transformation w of game points ρ_t. w is optimized for winning probability through PBT, another level of training performed at yet a slower time scale than that of RL. Detailed network architectures are described in fig. S11. (B) (Top) The Elo skill ratings of the FTW agent population throughout training (blue) together with those of the best baseline agents by using hand-tuned reward shaping (RS) (red) and game-winning reward signal only (black), compared with human and random agent reference points (violet, shaded region shows strength between 10th and 90th percentile). The FTW agent achieves a skill level considerably beyond strong human subjects, whereas the baseline agent’s skill plateaus below and does not learn anything without reward shaping [evaluation procedure is provided in (28)]. (Bottom) The evolution of three hyperparameters of the FTW agent population: learning rate, Kullback-Leibler divergence (KL) weighting, and internal time scale τ, plotted as mean and standard deviation across the population.

For ad hoc teams, we postulated that an agent’s policy π_1 should maximize the probability P(π_1’s team wins | ω, π_{1:N}) of winning for its team, π_{1:N/2} = (π_1, π_2, …, π_{N/2}), which is composed of π_1 itself and its teammates’ policies π_2, …, π_{N/2}, for a total of N players in the game:

P(π_1’s team wins | ω, π_{1:N}) = E_{τ ~ p^ω_{π_{1:N}}} 𝟙[⚐(τ, π_{1:N/2}) + ε > ⚐(τ, π_{N/2+1:N})]    (1)

in which trajectories τ (sequences of actions, states, and rewards) are sampled from the joint probability distribution p^ω_{π_{1:N}} over game setup ω and actions sampled from policies. The operator 𝟙[x] returns 1 if and only if x is true, and ⚐(τ, π) returns the number of flag captures obtained by agents in π in trajectory τ. Ties are broken by ε, which is sampled from an independent Bernoulli distribution with probability 0.5. The distribution Ω over specific game setups is defined over the Cartesian product of the set of maps and the set of random seeds. During learning and testing, each game setup ω is sampled from Ω, ω ~ Ω. The final game outcome is too sparse to be effectively used as the sole reward signal for RL, and so we learn rewards r_t to direct the learning process toward winning; these are more frequently available than the game outcome. In our approach, we operationalized the idea that each agent has a dense internal reward function (39–41) by specifying r_t = w(ρ_t) based on the available game points signals ρ_t (points are registered for events such as capturing a flag) and, crucially, allowing the agent to learn the transformation w so that policy optimization on the internal rewards r_t optimizes the policy “For The Win,” giving us the “FTW agent.”
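The win criterion in this formulation (compare flag captures; break exact ties with a fair coin ε) can be sketched directly. This is a toy illustration of the indicator in Eq. 1, not the paper’s code:

```python
import random

def team_one_wins(captures_team1: int, captures_team2: int,
                  rng: random.Random) -> int:
    """Return 1 if team 1 wins, 0 otherwise. Exact ties are broken by a
    Bernoulli(0.5) draw, mirroring the tie-break epsilon in Eq. 1."""
    if captures_team1 != captures_team2:
        return int(captures_team1 > captures_team2)
    return rng.randint(0, 1)  # fair coin on a tie

rng = random.Random(0)
# Clear outcomes are deterministic; only exact ties consult the coin.
outcome = team_one_wins(3, 1, rng)  # team 1 captured more flags, so it wins
```

Averaging this indicator over many sampled games estimates the win probability that the outer optimization maximizes.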

Training agents in multiagent systems requires instantiations of other agents in the environment, such as teammates and opponents, to generate learning experience. A solution could be self-play RL, in which an agent is trained by playing against its own policy. Although self-play variants can prove effective in some multiagent games (14, 15, 42–46), these methods can be unstable and in their basic form do not support concurrent training, which is crucial for scalability. Our solution is to train in parallel a population of P different agents π = (π_p)_{p=1}^{P} that play with each other, introducing diversity among players in order to stabilize training (47). Each agent within this population learns from experience generated by playing with teammates and opponents sampled from the population. We sampled the agents indexed by ι for a training game by using a stochastic matchmaking scheme m_p(π) that biases co-players to be of similar skill to player p. This scheme ensures that, a priori, the outcome is sufficiently uncertain to provide a meaningful learning signal and that a diverse set of teammates and opponents participate in training. Agents’ skill levels were estimated online by calculating Elo scores [adapted from chess (48)] on the basis of outcomes of training games. We also used the population to meta-optimize the internal rewards and hyperparameters of the RL process itself, which results in the joint maximization of





This can be seen as a two-tier RL problem. The inner optimization maximizes J_inner, the agents’ expected future discounted internal rewards. The outer optimization of J_outer can be viewed as a meta-game, in which the meta-reward of winning the match is maximized with respect to internal reward schemes w_p and hyperparameters ϕ_p, with the inner optimization providing the meta transition dynamics. We solved the inner optimization with RL as previously described, and the outer optimization with population-based training (PBT) (49). PBT is an online evolutionary process that adapts internal rewards and hyperparameters and performs model selection by replacing underperforming agents with mutated versions of better agents. This joint optimization of the agent policy by using RL together with the optimization of the RL procedure itself toward a high-level goal proves to be an effective and potentially widely applicable strategy and uses the potential of combining learning and evolution (50) in large-scale learning systems.
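The PBT step described above, replacing underperforming agents with mutated copies of better ones, reduces to a simple exploit-and-explore move over a population. A minimal sketch; the hyperparameter name, fitness values, and mutation factors are illustrative assumptions, not the paper’s settings:

```python
import random

def pbt_step(population, rng):
    """One PBT selection step: the worst agent copies the best agent's
    hyperparameters (exploit) and perturbs them (explore).
    `population` is a list of dicts with 'fitness' and 'hyperparams'."""
    ranked = sorted(population, key=lambda agent: agent["fitness"])
    worst, best = ranked[0], ranked[-1]
    worst["hyperparams"] = {
        # perturb each inherited value by a random factor (explore)
        name: value * rng.choice([0.8, 1.2])
        for name, value in best["hyperparams"].items()
    }
    return population

rng = random.Random(0)
pop = [
    {"fitness": 0.2, "hyperparams": {"learning_rate": 1e-3}},
    {"fitness": 0.9, "hyperparams": {"learning_rate": 4e-4}},
]
pop = pbt_step(pop, rng)
# The weaker agent now carries a perturbed copy of the stronger agent's settings.
```

In the paper this same mechanism also evolves the internal reward transformation w, so reward shaping itself is discovered rather than hand-tuned.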

Tournament evaluation

To assess the generalization performance of agents at different points during training, we performed a large tournament on procedurally generated maps with ad hoc matches that involved three types of agents as teammates and opponents: ablated versions of FTW (including state-of-the-art baselines), Quake III Arena scripted bots of various levels (51), and human participants with first-person video game experience. The Elo scores and derived winning probabilities for different ablations of FTW, and how the combination of components provide superior performance, are shown in Fig. 2B and fig. S1. The FTW agents clearly exceeded the win-rate of humans in maps that neither agent nor human had seen previously—that is, zero-shot generalization—with a team of two humans on average capturing 16 fewer flags per game than a team of two FTW agents (fig. S1, bottom, FF versus hh). Only as part of a human-agent team did we observe a human winning over an agent-agent team (5% win probability). This result suggests that trained agents are capable of cooperating with never-seen-before teammates, such as humans. In a separate study, we probed the exploitability of the FTW agent by allowing a team of two professional games testers with full communication to play continuously against a fixed pair of FTW agents. Even after 12 hours of practice, the human game testers were only able to win 25% (6.3% draw rate) of games against the agent team (28).
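The Elo scores reported here (and used for matchmaking during training) follow the standard chess-derived update: a rating moves in proportion to the gap between the actual and expected game score. A sketch in that standard form (the K-factor of 32 is a conventional choice, not taken from the paper):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, score_a: float,
               k: float = 32.0):
    """Return updated ratings after one game; score_a is 1 for a win,
    0.5 for a draw, 0 for a loss."""
    expected_a = elo_expected(rating_a, rating_b)
    change = k * (score_a - expected_a)
    return rating_a + change, rating_b - change

# Two equally rated players: the winner gains exactly K/2 points.
new_a, new_b = elo_update(1500.0, 1500.0, score_a=1.0)  # → 1516.0, 1484.0
```

Because expected score depends only on the rating gap, the same update works whether the opponent is another agent, a scripted bot, or a human.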

Interpreting the difference in performance between agents and humans must take into account the subtle differences in observation resolution, frame rate, control fidelity, and intrinsic limitations in reaction time and sensorimotor skills (fig. S10A) [(28), section 3.1]. For example, humans have superior observation and control resolution; this may be responsible for humans successfully tagging at long range where agents could not (humans, 17% tags above 5 map units; agents, 0.5%). By contrast, at short range, agents have superior tagging reaction times to humans: By one measure, FTW agents respond to newly appeared opponents with a mean of 258 ms, compared with 559 ms for humans (fig. S10B). Another advantage exhibited by agents is their tagging accuracy, in which FTW agents achieve 80% accuracy compared with humans’ 48%. When the FTW agents’ tagging accuracy was artificially reduced to be similar to humans’ (without retraining them), their win rate fell but still exceeded that of humans (fig. S10C). Thus, although agents learned to make use of their potential for better tagging accuracy, this is only one factor contributing to their overall performance.

To explicitly investigate the effect of the native superiority in the reaction time of agents compared with that of humans, we introduced an artificial 267-ms reaction delay to the FTW agent (in line with the previously reported discrepancies, and corresponding to fast human reaction times in simple psychophysical paradigms) (52–54). This response-delayed FTW agent was fine-tuned from the nondelayed FTW agent through a combination of RL and distillation through time [(28), section 3.1.1]. In a further exploitability study, the human game testers achieved a 30% win rate against the resulting response-delayed agents. In additional tournament games with a wider pool of human participants, a team composed of a strong human and a response-delayed agent could only achieve an average win rate of 21% against a team of entirely response-delayed agents. The human participants performed slightly more tags than the response-delayed agent opponents, although delayed agents achieved more flag pickups and captures (Fig. 2). This highlights that even with more human-comparable reaction times, the agent exhibits human-level performance.

Agent analysis

We hypothesized that trained agents of such high skill have learned a rich representation of the game. To investigate this, we extracted ground-truth state from the game engine at each point in time in terms of 200 binary features such as “Do I have the flag?”, “Did I see my teammate recently?”, and “Will I be in the opponent’s base soon?” We say that the agent has knowledge of a given feature if logistic regression on the internal state of the agent accurately models the feature. In this sense, the internal representation of the agent was found to encode a wide variety of knowledge about the game situation (fig. S4). The FTW agent’s representation was found to encode features related to the past particularly well; for example, the FTW agent was able to classify the state “both flags are stray” (flags dropped not at base) with 91% AUCROC (area under the receiver operating characteristic curve), compared with 70% with the self-play baseline. Looking at the acquisition of knowledge as training progresses, the agent first learned about its own base, then about the opponent’s base, and then about picking up the flag. Immediately useful flag knowledge was learned before knowledge related to tagging or their teammate’s situation. Agents were never explicitly trained to model this knowledge; thus, these results show the spontaneous emergence of these concepts purely through RL-based training.
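The probing methodology can be sketched with a dependency-free example: fit a logistic-regression probe from recorded internal states to a binary game-state feature, then score it with AUCROC. The gradient-descent loop and function names here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def fit_probe(H, y, lr=0.1, steps=500):
    """Logistic-regression probe: predict binary feature y (0/1) from
    agent internal states H (n_samples x n_units) via gradient descent."""
    w, b = np.zeros(H.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * H.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def auc(scores, y):
    """AUCROC via the rank-sum (Mann-Whitney) statistic."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A feature counts as "known" to the agent when such a probe reaches high AUCROC, as in the 91% figure reported for "both flags are stray."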

A visualization of how the agent represents knowledge was obtained by performing dimensionality reduction of the agent’s activations through use of t-distributed stochastic neighbor embedding (t-SNE) (Fig. 3) (55). Internal agent state clustered in accordance with conjunctions of high-level game-state features: flag status, respawn state, and agent location (Fig. 3B). We also found individual neurons whose activations coded directly for some of these features—for example, a neuron that was active if and only if the agent’s teammate was holding the flag, which is reminiscent of concept cells (56). This knowledge was acquired in a distributed manner early in training (after 45,000 games) but then represented by a single, highly discriminative neuron later in training (at around 200,000 games). This observed disentangling of game state is most pronounced in the FTW agent (fig. S8).
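The paper uses t-SNE for this visualization; as a dependency-free stand-in that conveys the same idea of projecting high-dimensional internal states into 2D, here is a plain PCA projection via SVD (PCA preserves global variance rather than local neighborhood structure, which is why t-SNE is preferred for cluster visualization):

```python
import numpy as np

def project_2d(H):
    """Project internal states H (n_samples x n_units) onto the top-2
    principal axes. A linear stand-in for the paper's t-SNE embedding."""
    Hc = H - H.mean(axis=0)                      # center the activations
    _, _, Vt = np.linalg.svd(Hc, full_matrices=False)
    return Hc @ Vt[:2].T                         # 2D coordinates per state
```

Coloring each projected point by the concurrent game state, as in Fig. 3, is what reveals the clustering.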


Fig. 3 Knowledge representation and behavioral analysis.
(A) The 2D t-SNE embedding of an FTW agent’s internal states during gameplay. Each point represents the internal state (h_p, h_q) at a particular point in the game and is colored according to the high-level game state at this time—the conjunction of (B) four basic CTF situations, each state of which is colored distinctly. Color clusters form, showing that nearby regions in the internal representation of the agent correspond to the same high-level game state. (C) A visualization of the expected internal state arranged in a similarity-preserving topological embedding and colored according to activation (fig. S5). (D) Distributions of situation conditional activations (each conditional distribution is colored gray and green) for particular single neurons that are distinctly selective for these CTF situations and show the predictive accuracy of this neuron. (E) The true return of the agent’s internal reward signal and (F) the agent’s prediction, its value function (orange denotes high value, and purple denotes low value). (G) Regions where the agent’s internal two-time scale representation diverges (red), the agent’s surprise, measured as the KL divergence between the agent’s slow- and fast-time scale representations (28). (H) The four-step temporal sequence of the high-level strategy “opponent base camping.” (I) Three automatically discovered high-level behaviors of agents and corresponding regions in the t-SNE embedding. (Right) Average occurrence per game of each behavior for the FTW agent, the FTW agent without temporal hierarchy (TH), self-play with reward shaping agent, and human subjects (fig. S9).


One of the most salient aspects of the CTF task is that each game takes place on a randomly generated map, with walls, bases, and flags in new locations. We hypothesized that this requires agents to develop rich representations of these spatial environments in order to deal with task demands and that the temporal hierarchy and explicit memory module of the FTW agent help toward this. An analysis of the memory recall patterns of the FTW agent playing in indoor environments shows precisely that; once the agent had discovered the entrances to the two bases, it primarily recalled memories formed at these base entrances (Fig. 4 and fig. S7). We also found that the full FTW agent with temporal hierarchy learned a coordination strategy during maze navigation that ablated versions of the agent did not, resulting in more efficient flag capturing (fig. S2).


Fig. 4 Progression of agent during training.
Shown is the development of knowledge representation and behaviors of the FTW agent over the training period of 450,000 games, segmented into three phases (movie S2). “Knowledge” indicates the percentage of game knowledge that is linearly decodable from the agent’s representation, measured by average scaled AUCROC across 200 features of game state. Some knowledge is compressed to single-neuron responses (Fig. 3A), whose emergence in training is shown at the top. “Relative internal reward magnitude” indicates the relative magnitude of the agent’s internal reward weights of 3 of the 13 events corresponding to game points ρ. Early in training, the agent puts large reward weight on picking up the opponent’s flag, whereas later, this weight is reduced, and reward for tagging an opponent and penalty when opponents capture a flag are increased by a factor of two. “Behavior probability” indicates the frequencies of occurrence for 3 of the 32 automatically discovered behavior clusters through training. Opponent base camping (red) is discovered early on, whereas teammate following (blue) becomes very prominent midway through training before mostly disappearing. The “home base defense” behavior (green) resurges in occurrence toward the end of training, which is in line with the agent’s increased internal penalty for more opponent flag captures. “Memory usage” comprises heat maps of visitation frequencies for (left) locations in a particular map and (right) locations of the agent at which the top-10 most frequently read memories were written to memory, normalized by random reads from memory, indicating which locations the agent learned to recall. Recalled locations change considerably throughout training, eventually showing the agent recalling the entrances to both bases, presumably in order to perform more efficient navigation in unseen maps (fig. S7).


Analysis of temporally extended behaviors provided another view on the complexity of behavioral strategies learned by the agent (57) and is related to the problem a coach might face when analyzing behavior patterns in an opponent team (58). We developed an unsupervised method to automatically discover and quantitatively characterize temporally extended behavior patterns, inspired by models of mouse behavior (59), which groups short game-play sequences into behavioral clusters (fig. S9 and movie S3). The discovered behaviors included well-known tactics observed in human play, such as “waiting in the opponent’s base for a flag to reappear” (“opponent base camping”), which we only observed in FTW agents with a temporal hierarchy. Some behaviors, such as “following a flag-carrying teammate,” were discovered and discarded midway through training, whereas others such as “performing home base defense” are most prominent later in training (Fig. 4).
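As a deliberately simplified stand-in for that discovery step (the paper's method is inspired by models of mouse behavior, not plain k-means), clustering fixed-length behavior descriptors — e.g., per-sequence event counts — with k-means conveys the core idea of grouping game-play sequences into behavior clusters:

```python
import numpy as np

def kmeans(X, k, steps=50, seed=0):
    """Minimal k-means over behavior descriptors X (n_sequences x n_features):
    assign each sequence to its nearest center, then recompute centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(steps):
        # squared distance from every descriptor to every cluster center
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

Each resulting cluster would then be inspected and named by hand ("opponent base camping," "teammate following," and so on), as the paper does for its 32 discovered behaviors.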


In this work, we have demonstrated that an artificial agent using only pixels and game points as input can learn to play highly competitively in a rich multiagent environment: a popular multiplayer first-person video game. This was achieved by combining PBT of agents, internal reward optimization, and temporally hierarchical RL with scalable computational architectures. The presented framework of training populations of agents, each with their own learned rewards, makes minimal assumptions about the game structure and therefore could be applicable for scalable and stable learning in a wide variety of multiagent systems. The temporally hierarchical agent represents a powerful architecture for problems that require memory and temporally extended inference. Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates. Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged.

Supplementary Materials

Supplementary Text

Figs. S1 to S12

References (61–83)


Supplementary Data

Movies S1 to S4

References and Notes

  1. Additional information is available as supplementary materials.
Acknowledgments: We thank M. Botvinick, S. Osindero, V. Mnih, A. Graves, N. de Freitas, N. Heess, and K. Tuyls for helpful comments on the manuscript; A. Grabska-Barwińska for support with analysis; S. Green and D. Purves for additional environment support and design; K. McKee and T. Zhu for human experiment assistance; A. Sadik, S. York, and P. Mendolicchio for exploitation study participation; A. Cain for help with figure design; P. Lewis, D. Fritz, and J. Sanchez Elias for 3D map visualization work; V. Holgate, A. Bolton, C. Hillier, and H. King for organizational support; and the rest of the DeepMind team for their invaluable support and ideas.
Author contributions: M.J. and T.Gra. conceived and managed the project; M.J., W.M.C., and I.D. designed and implemented the learning system and algorithm with additional help from L.M., T.Gra., G.L., N.S., T.Gre., and J.Z.L.; A.G.C., C.B., and L.M. created the game environment presented; M.J., W.M.C., I.D., and L.M. ran experiments and analyzed data with additional input from N.C.R., A.S.M., and A.R.; L.M. and L.D. ran human experiments; D.S., D.H., and K.K. provided additional advice and management; M.J., W.M.C., and T.Gra. wrote the paper; and M.J. and W.M.C. created figures and videos.
Competing interests:
M.J., W.M.C., and I.D. are inventors on U.S. patent application US62/677,632 submitted by DeepMind that covers temporally hierarchical RL. M.J., W.M.C., and T.G. are inventors on U.S. patent application PCT/EP2018/082162 submitted by DeepMind that covers population based training of neural networks. I.D. is additionally affiliated with Hudson River Trading, New York, NY, USA.
Data and materials availability:
A full description of the algorithm in pseudocode is available in the supplementary materials. The data are deposited in 10.7910/DVN/JJETYE (60).

EU Authorities Shut Down Bitcoin Transaction Mixer Bestmixer.io

22 MAY 2019

The Dutch Financial Criminal Investigative Service has seized the website of a bitcoin transaction mixer in a crackdown involving Europol and other authorities.

Calling it the “first law enforcement action of its kind against such a cryptocurrency mixer service,” Europol said in a statement Wednesday that the seizure of Bestmixer.io followed an investigation that began last summer. As part of the move, police seized six servers based in Luxembourg and the Netherlands.

Coin mixers or “tumblers” like Bestmixer work by pooling funds together and creating a web of new transactions in an effort to obfuscate their original source. Typically, coin mixer users pay a fee on top of the funds they send in, receiving back their money from a wholly new address.
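The payout step described here can be illustrated with a toy model. This is a simplified, hypothetical sketch for illustration only (real mixers split and delay payouts across many transactions), and the function and field names are invented:

```python
import secrets

def mix(deposits, fee_rate=0.01):
    """Toy custodial mixer: pool all deposits, then pay each user back
    (minus the mixer's fee) from a brand-new address, severing the
    visible on-chain link between a user's input and output.

    deposits: dict of user -> amount paid into the pool.
    Returns:  dict of user -> (fresh_address, amount_returned).
    """
    payouts = {}
    for user, amount in deposits.items():
        fresh_address = secrets.token_hex(16)  # stand-in for a new wallet address
        payouts[user] = (fresh_address, round(amount * (1 - fee_rate), 8))
    return payouts
```

An outside observer sees only deposits into the pool and withdrawals from unrelated fresh addresses, which is exactly the linkage-breaking property investigators objected to.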

Europol alleged that much of the money that passed through Bestmixer “had a criminal origin or destination,” contending that “in these cases, the mixer was probably used to conceal and launder criminal flows of money.” The agency said that the service, which launched in May 2018, mixed approximately 27,000 bitcoins.

“Today’s Bestmixer seizure shows an increase in law enforcement activities on pure crypto-to-crypto services,” said Dave Jevans, CipherTrace CEO. “This follows on the heels of European AMLD5 regulations and the views expressed by US FinCEN that crypto-to-crypto services are considered to be money services businesses and must comply with those regulations. This is the first public seizure of a bitcoin mixing service, and shows that not only are dark marketplaces subject to criminal enforcement, but other services are as well.”

Europol’s statement suggests that the investigation isn’t complete and that authorities intend to follow up on the information gleaned from this week’s server seizures.

“The Dutch FIOD has gathered information on all the interactions on this platform in the past year. This includes IP-addresses, transaction details, bitcoin addresses and chat messages,” the agency said. “This information will now be analysed by the FIOD in cooperation with Europol and intelligence packages will be shared with other countries.”

“Bestmixer has blatantly advertised money laundering services, and falsely claimed to be domiciled in Curacao where they claimed it was a legal service. The reality is that they were operating in Europe and services customers from many countries around the world,” said Jevans.




Multi-million euro cryptocurrency laundering service taken down


22 May 2019 – Press Release

First law enforcement action of its kind against such a cryptocurrency mixer service

Today, the Dutch Fiscal Information and Investigation Service (FIOD), in close cooperation with Europol and the authorities in Luxembourg, clamped down on one of the world’s leading cryptocurrency mixing services, Bestmixer.io.

Initiated back in June 2018 by the FIOD with the support of the internet security company McAfee, this investigation resulted in the seizure of six servers in the Netherlands and Luxembourg.

Bestmixer.io was one of the three largest mixing services for cryptocurrencies and offered services for mixing the cryptocurrencies bitcoin, bitcoin cash and litecoin. The service started in May 2018 and achieved a turnover of at least $200 million (approx. 27,000 bitcoins) in a year’s time and guaranteed that the customers would remain anonymous.

Nature of the service

A cryptocurrency tumbler or cryptocurrency mixing service is a service offered to mix potentially identifiable or ‘tainted’ cryptocurrency funds with others, so as to obscure the trail back to the fund’s original source.

The investigation so far into this case has shown that many of the mixed cryptocurrencies on Bestmixer.io had a criminal origin or destination. In these cases, the mixer was probably used to conceal and launder criminal flows of money.


The Dutch FIOD has gathered information on all the interactions on this platform in the past year. This includes IP-addresses, transaction details, bitcoin addresses and chat messages. This information will now be analysed by the FIOD in cooperation with Europol and intelligence packages will be shared with other countries.






23 MAY 2019

“We need a first step toward more privacy,” Vitalik Buterin, founder of the ethereum blockchain network, said Wednesday.

In a new HackMD post, Buterin detailed a design to help obscure ethereum user activity on the blockchain. More specifically, Buterin proposed a “minimal mixer design” aimed at obfuscating user addresses when sending fixed quantities of ether (ETH).

According to Buterin, users can transact in one of two ways. “The default behavior” is to send and receive ether from a single account, which, of course, also means that all of a user’s activity will be publicly linked on the blockchain. Alternatively, users can transact through multiple accounts or addresses. However, this too isn’t a perfect solution to obfuscating user activity on the blockchain.

“The transactions you make to send ETH to those addresses themselves reveal the link between them,” detailed Buterin in his post.

As such, by creating two smart contracts on ethereum – “the mixer and the relayer registry” – users can opt-in to making private transactions on the ethereum blockchain through what is called an anonymity set.

Buterin told CoinDesk in a follow-up email:

“Anonymity set is cryptography speak for ‘set of users that this thing could have come from.’ For example if I sent you 1 ETH and you can’t tell who exactly it was from but you can tell that it came from (myself, Alice, Bob or Charlie), then the anonymity set has size 4. The bigger the anonymity set the more privacy you have.”
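Buterin's size-4 example maps directly onto the observer's chance of guessing the true sender; a trivial sketch:

```python
def guess_probability(anonymity_set):
    """With an anonymity set of size n, an observer who picks uniformly
    at random identifies the true sender with probability 1/n."""
    return 1.0 / len(anonymity_set)
```

With the set {myself, Alice, Bob, Charlie} from the quote, the observer's best guess is right only 25% of the time; larger anonymity sets drive that probability down further.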

Buterin added that the design does not require any changes to ethereum on a protocol level but could be something implemented by a group of users today.

To this point, Eric Conner, product researcher at blockchain startup Gnosis, noted that a key strength of Buterin’s proposal was precisely its ease for integration.

“Strengths are it gives us a solid privacy solution if users want it,” Conner explained. “The goal is to make a solution that can be easily integrated into current wallets.”

At the same time, the design proposed by Buterin does require users to pay a fee – the transaction’s gas cost – in order to send private transactions. However, for the use cases that Buterin envisions, the fee won’t be a major deterrent for users.

Buterin tweeted about the design:

“The main use case I’m thinking of is a one-off send from one account to another account so you can use applications without linking that account to the one that has all your tokens in it. So even though it is a 2m gas cost, it only needs to be paid once per account, not too bad.”





31 MAY 2019

Bitcoin Blender Cryptocurrency Mixing Service Shuts Itself Down


Cryptocurrency mixing service Bitcoin Blender has reportedly willingly shut down after issuing a short notice asking its users to withdraw their funds, tech news outlet BleepingComputer reports on May 30.

Per the report, before the shutdown the following message describing the service appeared on the homepage of the website, which was present both on the Tor network (often referred to as the darknet, dark web or deep web) and on the clearnet:

“We are a hidden service that mixes your bitcoins to remove the link between you and your transactions. This adds an essential layer of anonymity to your online activity to protect against ‘Blockchain Analysis.’”

The shutdown was reportedly announced both on the homepage of the dark web website and on the BitcoinTalk Forums on Monday. Some users reportedly missed the short time window and were not able to withdraw their funds, as one user said on the aforementioned forum:

“I recently came to know about the shutting down process of bitblender, I had much coins saved onto it. I unfortunately missed the withdrawal warning as I was away for past few weeks. I am trying to access http://bitblendervrfkzr.onion/ for last 2~3 hours but I can not succeed.”

At press time, while the Tor mirror is currently inaccessible, the clearnet website is still online.

As Cointelegraph recently reported, Dutch and Luxembourg authorities together with Europol shut down one of the three largest cryptocurrency tumblers, BestMixer, after an investigation found that a number of coins from the mixer were used in money laundering.

Ethereum (ETH) co-founder Vitalik Buterin proposed shortly after the shutdown the possibility of creating an on-chain smart contract-based ether mixer.






Multicurrency Crypto Wallet Integrates Apple Pay, With Google Pay to Follow Within Weeks



A multicurrency digital wallet that enables consumers to store crypto and spend their funds at more than 40 million outlets worldwide has fully integrated Apple Pay.

Spend says the latest version of its Spend Wallet app unlocks instant access to the popular iPhone feature, meaning that more than 20 supported cryptocurrencies can be instantly converted to fiat and used to complete a mobile transaction on demand.

The platform says its priority has been ensuring that users can access all of these features instantly. An emphasis has been placed on streamlining the Know Your Customer process, and once this is complete, eligible consumers can immediately receive a Spend Visa virtual card.

To ensure that the process is as simple as possible, users can select which cryptocurrency they want to use. All conversions into fiat happen behind the scenes, meaning shoppers are spared the convoluted and frustrating process of switching the assets themselves.

According to Spend, the next step is to roll out support for Google Pay within the next couple of weeks, meaning that owners of Android devices will be able to access similar features.

Bringing crypto to the masses

Spend says its cards can be used anywhere Visa is accepted as a payment method. While it has been a challenge in the past for crypto enthusiasts to use their assets for everyday purchases, the platform hopes its technology will open doors and help this new approach to personal finance go mainstream.

In terms of security and protection, “leading encryption methods” are used to ensure sensitive personal information doesn’t fall into the wrong hands. This technology is also deployed to prevent any Spend card from being used without permission. Meanwhile, the platform says fiat funds are held at licensed payment and financial institutions to give users peace of mind.

Spend is available here

Spend says it is international in focus and wants to appeal to as many consumers around the world as possible. To this end, account holders can alternate between 27 fiat currencies and perform exchanges at the click of a button, including United States dollars, Canadian dollars and euros. As well as being beneficial for business travelers and those taking a holiday, this choice can help users protect themselves against volatility in the forex markets.

In a blog post from July 2018 explaining its vision for the Spend app, the company’s team said: “We are a firm believer in the fourth industrial revolution and the digitization of currencies. There are tremendous benefits for users worldwide to have control of their finances and to create an alternative financial solution for those whom don’t have access to banking which currently totals to 2 billion people worldwide.”

Keeping users informed

Through the digital wallet within the Spend app, the company gives users the chance to make informed decisions when making transfers to businesses, friends and family. As well as clearly indicating which currency is being used to complete the transaction, other crucial details — such as how long the transaction will take to clear and any associated fees — are displayed before money is sent. The payment can then be verified using a passcode or facial recognition.

Keeping tabs on the ever-changing crypto markets can also be crucial. From within the Spend app, users can access in-depth charts and analytics that make it easy to monitor recent fluctuations in a virtual currency’s value. As well as helping consumers to make financial choices, it is also hoped that users will learn new things about the industry, too.

Learn more about Spend

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions; nor can this article be considered investment advice.




Chinese E-Commerce Giant JD.com Applied for 200+ Blockchain Patents, Alibaba: 262, Tencent: 80 – China Total: 4,435

Chinese e-commerce giant JD.com has applied for over 200 blockchain patents, according to a report by Securities Daily News on May 20.

The report also notes that major e-commerce competitor Alibaba has applied for 262 blockchain patents, and Chinese internet titans Tencent and Baidu have applied for 80 and 50 such patents, respectively, as recorded by the Intellectual Property Center of China Information and Communication.

According to interpretation of the data provided by the Intellectual Property Center of China Information and Communication, JD.com was in first place for “global blockchain patent strength,” with Alibaba, Tencent, and Baidu coming in at second, seventh, and fifteenth place, respectively.

The report also notes that China is the global forerunner in blockchain applications. From 2013 to 2018, China filed 4,435 blockchain patent applications, which is 48% of global blockchain patent filings, as per the “Blockchain Patent Situation White Paper (Version 1.0)” published by the official website for China Telecom.

The runner-up in patent numbers was the United States, which purportedly filed for 1,833 blockchain patents in total, occupying the global patent space by 21%.

Securities Daily said that, with a breakdown of patent filings by industry, companies accounted for 75% of applicants, vastly outnumbering the quantity filed by research institutions, individuals, and government agencies. Out of this 75%, the report noted that the majority of companies that filed were internet-related.

The Intellectual Property Center of China Information and Communication also notes that intellectual property infringements have been an issue in the past for Chinese blockchain patents, and reportedly advises:

“It is recommended that the government strengthen industry supervision and improve patent quality. Enterprises should raise awareness of intellectual property protection and risk prevention, avoid blind investment in the blockchain field and applications for low-value patents, and guard against future infringement lawsuits, of which there have already been a large number in the field.”

JD.com released a blockchain-as-a-service (BaaS) platform, the JD Blockchain Open Platform, in 2018, which allows organizations to streamline blockchain creation and run smart contracts, as per a previous Cointelegraph report. JD.com has also helped create institutes for blockchain research, such as the Smart City Research Institute and a blockchain research lab.





Jingdong has applied for nearly 200 blockchain patents
The number of BAT blockchain patents ranks among the top 20 in the world
On the track of the blockchain patent application, many small and medium-sized companies show their enthusiasm far beyond their business volume.

Reporter Xing Meng

On May 20, at the 2019 National Science and Technology Week, Jingdong Digital Technology disclosed its blockchain patent applications for the first time. The data show that it has applied for nearly 200 blockchain patents and has deployed blockchain technology in scenarios such as quality traceability, digital deposit certificates, credit networks and value innovation.

With Jingdong’s disclosure of its blockchain patents, the Internet giants’ competition in this field has become increasingly heated. According to data from the Intellectual Property Center of the China Information and Communication Research Institute, Alibaba, Tencent and Baidu were all selected for the 2018 TOP20 list of global blockchain patent applicants. Among them, Alibaba has 262 patent applications, Tencent 80 and Baidu 50. Seen in this light, Jingdong’s nearly 200 blockchain patent applications place it at a world-leading level.






Self-Proclaimed Satoshi Craig Wright Files US Copyright Registrations for BTC White Paper

Craig Wright has filed United States copyright registrations for the Bitcoin (BTC) white paper authored by Satoshi Nakamoto.

Court documents show that the U.S. Copyright Office has registrations with Wright as the author of the white paper, as well as most of the original code used to build Bitcoin.

The Australian entrepreneur has long claimed to have written the cryptocurrency blueprint under the pseudonym.

A news release from May 21 claims that U.S. officials have received confirmation that Wright is indeed Satoshi Nakamoto, but the news has been met with skepticism from some crypto commentators.

Jerry Brito, executive director at non-profit organization Coin Center, tweeted:

“Registering a copyright is just filing a form. The Copyright Office does not investigate the validity of the claim; they just register it. Unfortunately there is no official way to challenge a registration. If there are competing claims, the Office will just register all of them.”

According to the news release, Wright is making moves to establish himself as Bitcoin’s creator “after being dismayed to see his original Bitcoin design bastardized by protocol developer groups.”

It is believed that Wright is planning to assign the copyright registrations to the Bitcoin Association.

The businessman is currently the chief scientist at a startup known as nChain. The entrepreneur has been known for attracting controversy, with major crypto platforms recently beginning to boycott bitcoin sv (BSV), the fork of bitcoin cash (BCH) which he backs.



Craig Wright Attempts to Copyright the Satoshi White Paper and Bitcoin Code

Craig Wright, the self-proclaimed creator of bitcoin, has filed registrations with the U.S. Copyright Office supporting his claims of authorship over the original bitcoin code and the Satoshi white paper.

The registrations, which are visible here and here, pertain specifically to “Bitcoin: A Peer-to-Peer Electronic Cash System” and “Bitcoin,” meaning the original 2009 code.

A press release sent to CoinDesk states:

“In the future, Wright intends to assign the copyright registrations to Bitcoin Association to hold for the benefit of the Bitcoin ecosystem. Bitcoin Association is a global industry organization for Bitcoin businesses. It supports BSV and owns the Bitcoin SV client software.”

Founding President Jimmy Nguyen commented in the release:

“We are thrilled to see Craig Wright recognized as author of the landmark Bitcoin white paper and early code. Better than anyone else, Craig understands that Bitcoin was created to be a massively scaled blockchain to power the world’s electronic cash for billions of people to use, and be the global data ledger for the biggest enterprise applications. We look forward to working with Craig and others to ensure his original vision is recognized as Bitcoin and is realized through BSV.”

To be clear, registration does not imply ownership, nor is it an official patent. The copyright process allows anyone to register anything, for instance in order to prepare for lawsuits over ownership.

Computer code and white papers can be copyrighted insofar as they are considered literary works and, as the copyright office writes: “In general, registration is voluntary. Copyright exists from the moment the work is created. You will have to register, however, if you wish to bring a lawsuit for infringement of a U.S. work.”

In other words, you, the reader, could register this post and I would have to fight you in court to contest it.

“People register things for a reason. They want to exploit it and they want the credit for it,” said David H. Faux, Esq., an intellectual property attorney in New York City. “Someone dishonest would register the Bitcoin white paper to put it on his website and get speaking engagements. But at some point it would catch up with him.”

“The market takes care of itself,” said Faux.

When asked for comment, noted Wright critic Jameson Lopp said “LOL.”

CoinDesk has contacted Wright’s representatives and the Copyright Office for further comment.




Dependency on Centralized Services: Massive Outage at Salesforce

Salesforce Woes Linger as Admins Clean Up After Service Outage

An accidental permissions snafu caused a massive outage for all Salesforce customers that continues to affect some businesses.

After a massive service outage on Friday, software-as-a-service giant Salesforce restored partial access to its affected customers over the weekend, while admins continued with cleanup into Monday.

The outage was brought on by a scripting error that affected all Pardot marketing automation software clients; a database script that Salesforce pushed out accidentally gave users broader access to data than their permissions levels should allow.

In response, Salesforce on Friday cut off all access to all Salesforce software clients, not just Pardot clients, while it triaged the situation – leading to a bit of a meltdown among users. Twitter hashtags #salesforcedown and #permissiongeddon began trending as users took to social media to complain.

“my salesforce rollout was scheduled at 2pm today. 300 folks on a call to do training with me. oops @salesforce #salesforcedown,” tweeted one user.

“#Salesforce #outage means that I can’t access any meaningful records, or properly do my job. Now that most tabs have disappeared like 1/2 the universe in @Avengers, please bring them back, Tony Stark of @salesforce!” tweeted another.


“To all our @salesforce customers, please be aware that we are experiencing a major issue with our service and apologize for the impact it is having on you,” Salesforce co-founder and CTO Parker Harris tweeted on Friday. “Please know that we have all hands on this issue and are resolving as quickly as possible.”

Some users saw an upside to the situation:

All the Sales teams in headed to the bar once they heard @salesforce was down.


Over the weekend, the cloud app provider said that access was restored to everyone not affected by the database script, so regular users were back in business. However, for companies using the affected Pardot software, only system administrators were given access to their accounts – so they could help rebuild user profiles and restore user permissions. According to the incident status page, some regular users remained incapable of logging into the system as of Monday morning as administrators continued the restoration process.

That process could be onerous for many: Salesforce said that if there’s a valid backup of their profiles and user permission data in the service’s sandbox, admins can simply deploy that. However, if there’s no valid backup, admins will need to manually update the profile and permission settings. Salesforce noted in an update Monday that it has deployed automated provisioning to restore permissions where possible.
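To see why a valid sandbox backup makes restoration a one-step deploy rather than manual clickwork: Salesforce profiles are represented as Metadata API XML files, so a backed-up profile can simply be redeployed over the broken one. A rough illustrative sketch follows; the file name, custom object, and specific permission entries are hypothetical, not taken from the incident.

```xml
<!-- force-app/main/default/profiles/Sales_User.profile-meta.xml (hypothetical example) -->
<Profile xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- Object-level access for a hypothetical custom object -->
    <objectPermissions>
        <object>Invoice__c</object>
        <allowRead>true</allowRead>
        <allowCreate>true</allowCreate>
        <allowEdit>true</allowEdit>
        <allowDelete>false</allowDelete>
        <viewAllRecords>false</viewAllRecords>
        <modifyAllRecords>false</modifyAllRecords>
    </objectPermissions>
    <!-- A broad system permission: grants like this are the kind of thing
         an errant database script can accidentally enable org-wide -->
    <userPermissions>
        <enabled>false</enabled>
        <name>ModifyAllData</name>
    </userPermissions>
</Profile>
```

If a backup of such files exists in the sandbox, an admin can redeploy it wholesale (for example via the Salesforce CLI or a Metadata API deploy) instead of re-checking every permission box by hand, which is exactly the distinction Salesforce drew between the easy and the onerous recovery paths.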

Balaji Parimi, CEO at CloudKnox, told Threatpost that admins should take care when restoring the settings.

“Enterprises need to understand that their biggest security risk is not from the attackers targeting them or even malicious insiders – it’s identities with over-provisioned privileges,” he said via email. “Security teams need to make sure that privileges with massive powers are restricted to a small number of properly trained personnel. Until companies better understand which identities have the privileges that can lead to these types of accidents and proactively manage those privileges to minimize their risk exposure, they’ll be vulnerable to devastating incidents like the one we’re seeing with Salesforce right now.”




