Mobile apps for Apple are more lucrative

Android dominates the smartphone market and is, beyond smartphones as well, by far the most widely used mobile operating system.

Nevertheless, Apple remains by far the more lucrative address for app publishers, as a recent report by Sensor Tower shows. According to it, the 100 largest iOS app publishers generated an average of 84 million US dollars in revenue in the first quarter of 2019, compared with 51 million US dollars for the most successful Android app publishers.

In total, Apple has so far paid out more than 120 billion US dollars to developers (as of January 2019) – 60 billion of that in the last two years.

from: https://de.statista.com/infografik/18480/durchschnittlicher-bruttoumsatz-der-top-100-app-publisher/

***

Apple: Spotify is arguing with false figures
In mid-March, Spotify founder Daniel Ek prompted the EU Commission to open an investigation into Apple. The reason: Apple charges providers a 30 percent fee on purchases made through the App Store, for example when a Spotify customer upgrades a free subscription to Premium. Apple now accuses the streaming provider of knowingly arguing with false figures. The 30 percent does not affect all users, but only those who took out their subscription between 2014 and 2016 – around 680,000 customers. Moreover, their fee was only 15 percent. spiegel.de

 

 

[Archive photo: headphones in front of an Apple iPhone 5s showing the logo of the music streaming service Spotify. Photo: Daniel Bockwoldt/dpa]

Dispute over subscription fees: Apple defends itself against Spotify's accusations

Apple strikes back at Spotify: the company says it does not collect excessive commissions from the streaming service's customers, as Spotify CEO Daniel Ek claims. Ek is using false figures, according to an internal document.

The accusations Daniel Ek raised were serious. Apple is a competitor of his company, wrote the founder and CEO of the music streaming service Spotify, and that is a good thing. "But Apple still gives itself an advantage at every opportunity," Ek complained in mid-March. That is why Spotify had filed a complaint with the EU Commission.

To justify it, Ek claimed that Apple levies a "tax" of 30 percent on purchases made through Apple's payment system – for example, when Spotify users switch from a free to a paid Premium account. This, he said, forces Spotify to "artificially inflate" its prices, well above what Apple charges for its own streaming service, Apple Music.

 

[Photo: Daniel Ek, CEO of Swedish music streaming service Spotify, at a press conference in Tokyo on September 29, 2016, the day Spotify launched its service in Japan. AFP / Toru Yamanaka]

 

Apple is now defending itself against these accusations – and accuses Spotify of knowingly operating with misleading figures. Spotify, it says, creates the impression that the 30 percent levy is due for all users of Apple devices. In fact, only 680,000 users are affected, according to Apple's statement to the EU Commission which, according to information obtained by SPIEGEL, arrived in Brussels at the end of May.

Apple: Spotify is operating with misleading figures

The 30 percent commission, Apple says, was charged only to those Spotify customers who upgraded their subscription from free to Premium via Apple's in-app purchase function. That function, however, was only active in the Spotify app from 2014 to 2016 – and during that time only 680,000 customers made use of it. For all other subscription upgrades, before and after, Apple says it did not collect a single cent.

According to its latest quarterly report, Spotify had around 100 million paying users worldwide at the end of the first quarter of 2019 – an increase of 32 percent over the previous year. Apple Music currently has a good 50 million customers, but has recently been growing faster than its Swedish competitor. Europe is Spotify's most important region. In the USA, Apple has apparently recently overtaken Spotify.

Spotify leaves inquiries unanswered

Even for the 680,000 affected Spotify subscriptions, Apple apparently charges – contrary to what Ek writes in his blog – not 30 percent but only half of that. Some time ago the company lowered its commission for subscription customers: after one year of membership it drops from 30 to 15 percent. Since those 680,000 Spotify users took out their subscriptions three to five years ago, Spotify, according to Apple, now has to pay only 15 percent for them.

Why Ek nevertheless claims that Apple still charges a commission today, and that it amounts to 30 percent, is unclear. Spotify did not respond to several inquiries from SPIEGEL.

In his blog post, Ek conceded that Spotify could avoid the fee to Apple by not using Apple's own payment function. In that case, however, Apple allegedly makes communication between Spotify and its customers more difficult, blocks app updates, or keeps Spotify away from products such as the Siri assistant software, the HomePod connected speaker and the Apple Watch. Apple rejects these claims as untrue.

Vestager recalls the billion-euro fines against Google and Microsoft

The decisive question in the EU Commission's review is now whether Apple's App Store is a dominant platform that could influence the entire music streaming market, and whether Apple favors its own streaming service. "We have a platform that directs customers to various providers, and then the platform starts doing that kind of business itself, that is, it becomes a provider itself," EU Competition Commissioner Margrethe Vestager said about the App Store in March. That, she said, is a pattern "we already know." It was an allusion to the billion-euro fines imposed on Google and Microsoft.

Apple considers the comparison wrong, if only because the iPhone holds just a 25 percent share of the smartphone market in the EU. Almost all of the rest goes to phones running Google's Android operating system. Moreover, Apple Music is not dominant in the music streaming market either.

Asked about the duration and status of the review initiated by Spotify, the EU Commission declined to comment.

from: https://www.spiegel.de/netzwelt/netzpolitik/spotify-beschwerde-bei-eu-kommission-apple-wehrt-sich-a-1273755.html

 

***

Any transaction that Apple processes for you will be subject to the 30% transaction fee – any direct "in-app purchase."
Essentially, anything "digital content" that can be delivered via the app is taxable.

If you have a basic eCommerce app where you sell physical products but you yourself process the payments, Apple will not take 30%.

from: https://www.startups.com/community/questions/381/does-the-30-apple-transaction-fee-apply-to-physical-goods-purchased-on-an-app

***

According to Apple’s official guidelines:

If you want to unlock features or functionality within your app (by way of example: subscriptions, in-game currencies, game levels, access to premium content, or unlocking a full version), you must use in-app purchase. Apps may use in-app purchase currencies to enable customers to “tip” digital content providers in the app. Apps and their metadata may not include buttons, external links, or other calls to action that direct customers to purchasing mechanisms other than in-app purchase.

You must use in-app purchases and Apple's official APIs if you are not selling a physical item.

Otherwise your app will be rejected.

from: https://stackoverflow.com/questions/48058415/is-there-a-way-to-avoid-in-app-30-fee-for-any-purchases-in-ios

***

In-App Purchases. What you need to know before developing a Mobile App

So you’re building an iOS app. Great! Let’s get to the brass tacks; how are you going to make money on it? Will there be some kind of purchasing ability within the app?

If your app is going to be anything like the majority of the 1.6 Million apps in the App Store, whose in-app purchases account for nearly $24 Billion annually – you need to know how purchasing works on iOS.

In-App Purchases vs. Apple Pay:

Apple has built two ways to pay for things directly into iOS: Apple Pay and In-App Purchase (IAP). Apple Pay is similar to a credit card transaction: it takes a small percentage of the transaction plus a flat fee. IAPs use the iTunes Store purchasing system and therefore take a 30% cut of all purchases, whether they are one-time purchases or subscriptions.

Based on the fee structure alone, it sounds like you’d be a fool to not go with Apple Pay. Well, the plain truth is you can’t use Apple Pay everywhere. In fact, unless you fall into a few specific use cases, you can’t use Apple Pay, or any other payment processor, at all.

Taking a Deeper Look at IAP – It’s important!

Regardless of whether you’re a CEO or a developer, do yourself a favor and read up on the purchasing guidelines for IAP and the ones for Apple Pay too. It’s important to understand the specific rules, so you don’t find yourself crashing into a brick wall later.

The gist is: any time you offer new or renewing content that users pay for in-app (like news articles), those purchases must be processed via IAP. Similarly, if you want to unlock restricted functionality, such as "Pro" features, that must be IAP. Finally, if you want to sell tokens/credits/gold coins/gems or whatever as consumables in a game or other service, those also must go through IAP.

One of the toughest decisions to make is whether or not to process subscription sign-ups through your app – or somewhere else like your website (more on that later). If you do decide to allow purchases within the app, then those must be through IAP too.

Given the fact that every In-App Purchase gives Apple a 30% cut, it can throw a really big wrench in your business plan if you aren’t expecting it.
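To see what that cut does to the numbers, here is a small worked example in Python. The $9.99 price, the 10,000 subscribers and the 24-month horizon are invented for illustration, and the 15% rate simply reflects the reduced commission Apple charges after the first year of a subscription:

# Hypothetical illustration of Apple's IAP commission on a subscription app.
# The price, subscriber count and retention are invented for this example.

PRICE = 9.99            # monthly subscription price charged to the user
FIRST_YEAR_CUT = 0.30   # Apple's commission during the first subscription year
LATER_CUT = 0.15        # reduced commission after one year of membership

def developer_proceeds(subscribers: int, months: int) -> float:
    """Developer revenue after Apple's cut: 30% for months 1-12, 15% after."""
    total = 0.0
    for month in range(1, months + 1):
        cut = FIRST_YEAR_CUT if month <= 12 else LATER_CUT
        total += subscribers * PRICE * (1 - cut)
    return total

if __name__ == "__main__":
    gross = 10_000 * PRICE * 24              # 10,000 subscribers kept for 24 months
    net = developer_proceeds(10_000, 24)
    print(f"Gross billings : {gross:>12,.2f} USD")
    print(f"Developer keeps: {net:>12,.2f} USD")
    print(f"Apple's share  : {gross - net:>12,.2f} USD")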

What Doesn’t Fall Under Apple’s In-App Purchase Policy:

Ok, when can you avoid using IAP? The simple answer is, when you’re selling physical goods and services.

My favorite example is Uber or Lyft. They can have their own credit card processing system (or Apple Pay) because the customer is paying for an actual ride from one place in the real world to another. When the customer purchases something from Amazon, they are buying a physical product, so Amazon can use its own payment system as well. However, you will notice that you cannot buy books in the Amazon Kindle app. You can download samples and add titles to a wish list, but money does not change hands in the Kindle app.

Curiously, you can buy a Kindle book in the regular Amazon store app, using Amazon's own payment processing. I don't know whether Amazon worked out a special deal with Apple or just snuck it in there. When you're Amazon's size, you can get a little bit of leeway.

The trouble is 1) there aren’t a lot of businesses that offer these types of products or services that transact on mobile applications, and 2) it may not be immediately obvious that your pricing model falls under the IAP umbrella. If there is any doubt, submit your application for review as early as possible to validate this. It is far easier to adjust a business model months before launch than hours.

What about Software-as-a-Service businesses?

If you sell a SaaS subscription within an iOS app, it’s just another subscription in Apple’s eyes; you have to use an IAP subscription and give Apple 30%.

You can, however, sell the subscription on your website and still have a companion iOS app. Take a look at the Basecamp iOS app:

https://apps.apple.com/us/app/basecamp-3/id1015603248

This is the first screen you see when you launch the app. There are no IAPs for their subscription model. Instead, Basecamp has you sign up and pay for their subscription service on their website, not in their mobile app.

https://basecamp.com/via

So you’re saying there is a loophole?

Yes. Well, maybe… You can certainly avoid paying the 30% fee for IAP, but there is a Shaq-sized catch: you CANNOT advertise anywhere in an app that you are selling something outside of iOS.

This is a pretty tough decision to make, as it has implications not only for product development, but also user acquisition, engagement and retention.

Originally, Netflix did not allow you to sign up for its $10/month plan inside its iOS app, instead forcing every user to activate their account online. It held steady on that for a long time, accepting a clunkier user experience in order to avoid the IAP fees. That lasted until Netflix determined that the extra signups gained from the convenience of activating subscriptions right there in the app outweighed the 30% hit on revenue.

I cannot stress enough that you will be rejected if you link to a website that displays a payment form. Even if you link to your homepage, and that links to a payment page, you’ll be rejected. Notice that there is no link to basecamp.com in that screenshot above. (Bonus tip: if you have a link to an Android version on that homepage, you’ll also get rejected. Life is fun sometimes.)

What this means for your business. And your app.

All too often we have a tendency to rush into things. Apps, and software development, are notoriously hard to estimate regardless. There is always an unknown wrench that will be thrown into your plans, but your business model and how you scale revenue should always be in your hands.

Apple's IAP policy might seem a little imperious, and it is. Apple has $528 billion reasons it can get away with it, though, and that won't be changing anytime soon. With a little bit of foreknowledge, your business and your development plan can adapt.

from: https://blog.tallwave.com/2016/04/13/in-app-purchases-what-you-need-to-know-before-developing-a-mobile-app

 

***

[Google is no different: it also takes a 30% cut in its Google Play Store]

Opinion: Google’s 30% cut of Play Store app sales is nothing short of highway robbery

Congratulations: You've finally developed your million-dollar app. You took a great idea, implemented it, built it into a polished UI, and tested it until you tracked down every last bug. Now it's ready for public release, so you can sit back, relax and … earn just 70% of what users pay for your software? That doesn't sound right. Yet it's a position that mobile app developers everywhere find themselves in, one that's perched somewhere at the intersection of wildly unfair and mild extortion.

As you're probably aware, Google takes a 30% cut of all software sales going through the Play Store — that goes both for the initial sale of apps and for any supplementary in-app purchases. In the context of the industry, this practice doesn't seem too outlandish; Apple does the same thing with iOS software distribution through its App Store, and we see similar arrangements in the PC sphere on platforms like Steam.

But just because it’s commonplace, does that mean it’s fair, or even right? How did we get to this place where paying a developer 70 cents on the dollar for their hard work seems OK?

Back before the days when software distribution was primarily online, developers had it a lot worse. First you had to find a publisher, who was going to want their cut. Then you had the cost of physical media to consider, as well as designing and manufacturing some attractive packaging. You had to pay to ship your software to stores, and to even get it on shelves meant giving retailers their slice of the pie. And of course, with all these parties involved and them wanting to ensure as high sales as possible, you’d probably also be paying for an expensive advertising campaign.

In the end, the developer would be very lucky to end up with even 20% of the ultimate sale price (and forget about that if we’re talking console games, with royalties to the console manufacturer knocking things under 10% easily).

But that’s not the world we live in today, and so many of those costs have either seriously diminished or become irrelevant altogether. There’s no need to fight for retailer shelf space, no unsold merchandise taking up space in warehouses, and no need to pay so many middlemen along the way — heck, why even bother with a publisher when you can be a one-man app studio yourself?

from: https://www.androidpolice.com/2018/09/22/opinion-googles-30-cut-play-store-app-sales-nothing-short-highway-robbery/

 

Buying software used to mean a trip to the mall, with retailers and distributors taking a big cut. Now with digital sales, is Google’s 30% take still fair? (Image: Mike Mozart)

 

 

 

Facebook’s Libra: “It would make the early 20th century Morgans or Rockefellers seem downright competitive.”

Standard Oil depicted as an Octopus in a 1904 political cartoon
(image via Wikimedia Commons).

Facebook’s Libra Cryptocurrency: Bad for Privacy, Bad for Competition

Author Scott A. Shay is co-founder and chairman of Signature Bank of New York and also the author of “In Good Faith: Questioning Religion and Atheism” (Post Hill Press, 2018).

Allowing Facebook to mint its own coin, the Libra, would turn it into the greatest anti-competitive trust case in history. It would make the early 20th century Morgans or Rockefellers seem downright competitive.

Even before it unveiled its vision for a global cryptocurrency this month, Facebook was already a near-monopoly in social media, and part of a duopoly in its main markets. Together with Google, it controls 82% of the digital advertising market. 

In the past, Facebook has purchased any company that threatened it, e.g. Instagram and WhatsApp. And, when it spots a company that won’t sell itself or would be difficult to purchase, it uses the “embrace, enhance and extinguish” technique.  

Facebook saw Snap Inc. (maker of Snapchat) contesting a small part of its franchise, so it embraced Snap’s best features and integrated them into its app. Now, Facebook is hoping to extinguish Snap as a competitor. Compare the stock performance of Snap and Facebook, and you will probably place your bet on Facebook.

But it is not simply Facebook’s business practices that are of concern.

Neither Facebook nor Google charges for their consumer products, obscuring the fact that all-encompassing consumer tracking is their real product. In many cases, their data is better than what the KGB or CIA could have gathered 20 years ago. And their data is certainly a lot cheaper, since it is voluntarily provided and easily accessible.

We would not want our government agencies to have this sort of power, nor should we want it to be in the hands of corporations. 

Facebook and Google have already shown their political muscle. With their duopoly in digital advertising, these companies have transformed the nature of news. Only a few news sites, such as The Wall Street Journal and The New York Times, can resist their gravitational pull and still attract direct advertisers as well as subscribers.

Most other publications must use Google ads, which provide far less revenue to the outlet, slice and dice their readership, and force newspapers to write clickbait. The ads are so well placed because of the mountain of information that can be fed into the companies' algorithms. The same holds true for news content viewed on Facebook.

Now, with the Libra project, Facebook wants to exponentially increase its monopolistic power by accessing unparalleled information about our consumer purchasing habits. If allowed to proceed with Libra, a company that knows your every mood and virtually controls the news you see will also have access to the deepest insights into your spending patterns.

Privacy threat

Of course, Facebook will speak piously about privacy controls and its concern for the consumer, yet it will still figure out a way to sell the data – or others who buy the data will figure it out for them.

Furthermore, given the richness of the social media data Facebook consistently garners, even anonymized data can be re-identified to distill information and preferences relating to specific individuals. Facebook and its monopolist rent-seeking cohorts, such as eBay, Uber and Mastercard, all say they won't do that.

Quite frankly, there is zero reason to believe such promises. Their culture is based strictly on brand concerns and access to personal data. Additionally, hacks of social media are now so common that we are inured to them.

Consumers can have the benefit of a digital payment mechanism without allowing Facebook to gain more power. In the financial services sector, my institution, Signature Bank, was the first to introduce a 24/7 blockchain-enabled payment system. As one would expect, others, such as JPMorgan, are trying to follow suit and will no doubt be competitors someday.

Banks and financial institutions are limited in their access to, and transmission of, information, and for good reason. If Facebook, on the other hand, establishes Libra, no other competitor will have equal access to its data, and therefore, a chance at the consumer payment market.

In this way, Libra is in keeping with Facebook’s monopolistic business style.

Further, the information monopoly Facebook would possess will be similar to what the Chinese government possesses but needs the Great Firewall to execute. Monopolistic forces will produce the same result through different means.

Call to action

Action needs to be taken quickly to stop Libra and break up Big Tech, not only for the welfare of consumers but for the good of the nation.

The first step is to force Facebook to divest or spin off Instagram, WhatsApp, and Chainspace, the blockchain startup it acqui-hired earlier this year.

Facebook also must be mandated to offer a parallel, ad-free, “no collection of information” site supported by fee-based subscriptions. Over time, this would provide some transparency as to the value of the consumer information currently being gifted to Facebook.

Google should be forced to divest or spin off YouTube, DoubleClick and its other advertising entities, its cloud services and Android. Amazon similarly needs a radical breakup, as it too poses systemic threats to a transparent market. (Alexa is a prime example of the private data Amazon gathers on users' lifestyles and personal habits.)

The breakup of these behemoths cannot wait until after the 2020 election.  Such action must be taken on a bipartisan basis as soon as possible.

Even once stripped down, Facebook should remain separated from commerce due to privacy concerns. Congress, which has scheduled hearings on Libra for next month, is right to intervene.

 

from: https://www.coindesk.com/facebooks-libra-cryptocurrency-bad-for-privacy-bad-for-competition

 

 

The price (in $$) of personal data in the USA

Data in exchange for free use.

That, in a nutshell, is the deal that users of social networks accept. According to an NBC News/Wall Street Journal survey from March 2019, 74 percent of Americans think this is not a fair trade. A corresponding survey in Germany would probably come out much the same.

But what would be a good deal? The market research company Morning Consult explored this question in a survey of 2,200 adults in the USA.

  • For information such as their full name or their shopping behavior, respondents would ask for 50 US dollars.
  • For credit scores and driver's license numbers, 300 and 500 US dollars respectively would be due.
  • For a passport number or biometric data, companies would have to pay 1,000 US dollars.

 

from: https://de.statista.com/infografik/18449/umfrage-zum-preis-fuer-personenbezogene-daten-in-den-usa/

 

 

No Patch: Hackers Can Bypass Windows Lockscreen on Remote Desktop Sessions

The Network Level Authentication (NLA) feature of Windows Remote Desktop Services (RDS) can allow a hacker to bypass the lockscreen on remote sessions, and there is no patch from Microsoft, the CERT Coordination Center at Carnegie Mellon University warned on Tuesday.

NLA provides better protection for Remote Desktop (RD) sessions by requiring the user to authenticate to the RD Session Host server before a session is created. Microsoft recently recommended NLA as a workaround for a critical RDS vulnerability tracked as BlueKeep and CVE-2019-0708.

When a user connects to a remote system over RDS, they can lock the session similar to how sessions can be locked locally in Windows. If the session is locked, the user is presented with a lockscreen where they have to authenticate in order to continue using the session.

Joe Tammariello of the Software Engineering Institute at Carnegie Mellon University discovered a vulnerability that can be exploited to bypass the lockscreen on an RDS session. The flaw, tracked as CVE-2019-9510 and assigned a CVSS score of 4.6 (medium severity), affects versions of Windows starting with Windows 10 1803 and Server 2019.

“If a network anomaly triggers a temporary RDP disconnect, upon automatic reconnection the RDP session will be restored to an unlocked state, regardless of how the remote system was left,” CERT/CC explained in an advisory.

The organization has described the following attack scenario: the targeted user connects to a Windows 10 or Server 2019 system via RDS, they lock the remote session, and leave the client device unattended. At this point, an attacker who has access to the client device can interrupt its network connectivity, and they can then gain access to the remote system without needing any credentials.

“Two-factor authentication systems that integrate with the Windows login screen, such as Duo Security MFA, are also bypassed using this mechanism. Any login banners enforced by an organization will also be bypassed,” CERT/CC said.

Tammariello reported his findings to Microsoft, but the tech giant apparently does not plan on patching the vulnerability anytime soon.

“After investigating this scenario, we have determined that this behavior does not meet the Microsoft Security Servicing Criteria for Windows,” Microsoft said, according to CERT/CC vulnerability analyst Will Dormann.

Users can protect themselves against potential attacks via two methods: locking the local system instead of the remote system, and disconnecting the RDS session instead of locking it.
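If you want to script the second workaround, the sketch below is one possible approach (my own assumption, not something from the advisory): it calls the built-in Windows tsdiscon.exe utility from Python to disconnect the current RDP session rather than lock it.

# Minimal sketch: disconnect the current Remote Desktop session instead of
# locking it, per the CERT/CC workaround. Assumes it runs inside the RDP
# session on Windows, where the built-in tsdiscon.exe utility is available.
import subprocess
import sys

def disconnect_current_session() -> int:
    # tsdiscon with no arguments disconnects the session it is run from.
    result = subprocess.run(["tsdiscon"], capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stderr or "tsdiscon failed", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(disconnect_current_session())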

 

 

from: https://www.securityweek.com/hackers-can-bypass-windows-lockscreen-remote-desktop-sessions

 

 

There is no cloud – it’s just someone else’s computer.

P.S. you can get the sticker here:

https://www.redbubble.com/de/people/tamagothings/works/28066602-there-is-no-cloud?p=sticker

2,67 € for small
7,43 € for medium
11,13 € for large size
(prices include Euro VAT — I am not associated with sticker sales, just saving you the search)

 

People often argue "you get the elasticity", the "extra bandwidth at spikes", and a "rich toolkit to pick from" – as if this changed the fundamentals.

There can be good arguments for using someone else's computer – you may save some up-front capital and personnel expenses; but it comes at a price (you lose control over it all, in every aspect), and one should very carefully compare the actual cost of cloud vs. your own infrastructure when calculating 'savings'.

  • all your eggs are in one basket
  • the basket is not yours: both the basket and your eggs are controlled by someone else
  • you have to trust service availability (it seems rock solid at first – "too big to fail"? – but every little admin mistake can take you down along with everyone else, with little hope of recovering your losses, which you might otherwise have with a decent SLA)
  • you have to trust someone else's cyber security measures
  • you become a small part of a highly attractive target for the bad guys, rather than just a (much smaller) target on your own merit
  • you are at the total mercy of that someone else if the business you bring no longer fits their model – see Google, see tumblr (and have you ever tried migrating from one cloud to another?)
  • you can only use the toolset on offer – should you require different functionality or tools: sorry, that does not fit their business model, which only reacts to the needs of many, many customers when it is worth their while
  • you have zero control over latency – should you require measures beyond the lowest common denominator across all cloud customers (private BGP4 peering and the like), the little business you bring to the provider's table is just too small for such extras
  • you DO get the cosy feeling of "everyone else does it, so I won't get fired over a decision to go cloud" – but do you really want to be known for that Dilbert-like, middle-manager image from the 1990s, when people made the same choice in favor of Microsoft products? That monopoly has since milked you again and again with new, expensive versions of the same stuff you had no real choice about (migration to …?) and made you suffer painfully for the lousy cyber security you automatically outsourced to them – zero-days, phishing via VBS, crypto-locking malware, "patch Tuesdays" disrupting your IT as if that were normal. And yet many have learned nothing from it and keep assuming today's cloud business model is any different from Micro$oft's back then (hint: Microsoft also got big time into clouds with "Azure", which now increasingly runs on Linux, like the rest of the cloud providers and the Internet)
  • so: all this is worth how much in ‘savings’?

There is a reason why several big and many medium-sized businesses build their entire business model around it – they would not unless it made them a bunch of money (and that is the money you think you are "saving", plus the money made from using and selling your data). Example: AWS makes much more profit than Amazon's whole, huge retail empire – with a lot less effort. That should make you think :)

So while this sticker is meant to be funny, it also puts the finger exactly where it should hurt.

P.S.: of course, there are always alternatives – there is never just one, inevitable choice. If you have trouble thinking of those yourself, do some smart searches on the net; and you are welcome to talk to me about it.

 

The worldwide volume of data is exploding and is pushing traditional storage media to the limits of their capacity. The solution is the cloud. As the chart based on the Statista Digital Economy Compass shows, as early as next year more data will be stored and delivered via the internet on large server farms than on local devices. For private consumers and business customers this offers some advantages in terms of convenience and the speed of workflows, but it also has drawbacks: time and again there are data leaks in which sensitive customer data is stolen. The financial damage is usually high and varies by industry.
https://de.statista.com/infografik/18231/cloud-vs-lokaler-speicher/

 

“As mentioned by Nicolas Marot, Google is currently having major service problems.
Snapchat (uses Google Cloud) is described as “Down” and there is a lot of red on the Google status dashboard.”- 04 JUN 2019

https://downdetector.com/

 

https://www.google.com/appsstatus#hl=en&v=issue&sid=1&iid=74a7efa47b9de665d02699bbe9dd11fc

 

https://downdetector.com/status/aws-amazon-web-services

 

https://downdetector.com/status/google-cloud

 

https://downdetector.com/status/windows-azure

***

 

***

03 JUN 2019

50,000 Windows database servers infected with crypto miners

Hackers, presumably Chinese, have infected Microsoft SQL Server and phpMyAdmin installations worldwide with hidden crypto-mining scripts. While the attackers earned units of the cryptocurrency Monero, the infected servers paid with computing power and other resources. The whole operation was uncovered by the American-Israeli security firm Guardicore at the beginning of April.

More than 50,000 Microsoft Windows SQL database servers
infected with crypto miners via an ancient Windows bug

Using sophisticated methods, hackers have hijacked tens of thousands of poorly secured Windows servers and are secretly mining Monero on them.

Unknown hackers, presumably from China, are currently infecting Microsoft SQL Server and phpMyAdmin installations all over the world with hidden crypto-mining scripts. The victims pay with computing power and other resources, while the attackers quietly earn units of the cryptocurrency Monero.

While such attacks usually get by with relatively simple, and therefore comparatively easy-to-detect, methods, the current attacks stand out for their rather sophisticated tricks. Among other things, the attackers use malware signed with a valid certificate, which lets it fly under the radar of many automatic detection techniques, including those of Windows itself.

The attacks were discovered by the American-Israeli security firm Guardicore at the beginning of April. Since then, the company's security researchers have identified further attack patterns that they attribute to the same hackers. According to their findings, the group has been active since at least the end of February and has been continuously refining its attack techniques and malware ever since.

In total, the researchers identified up to 20 different malware variants. According to their findings, the hackers infect up to 700 different servers per day. In the course of their investigation the researchers gained access to the hackers' command-and-control servers and put the number of currently infected servers at almost 50,000 systems.

The hackers appear to be targeting Windows servers. Systems in the healthcare sector, at IT firms, telecommunications providers and media companies have already fallen victim to the attacks. In a first step, the hackers gain access to the Microsoft SQL Server on the system by brute-forcing weak passwords. They then use that SQL Server access to create and execute a VB script that installs, hides and runs the crypto miner on the system.

The attackers abuse an old vulnerability in the Windows kernel (CVE-2014-4113, patched by Microsoft in October 2014) to obtain SYSTEM privileges for this step. In some cases a similar attack took place via weak phpMyAdmin passwords, likewise on Windows.

The attackers anchor their mining malware firmly in the system using registry keys and install a rootkit that monitors the miner process and prevents the system from terminating it. They exploit the kernel vulnerability with a driver that, at the time of the attack, carried a valid Verisign certificate issued to a company named Hangzhou Hootian Network Technology – the company name is a fake. Following a tip-off from the security firm, the certificate has since been revoked by Verisign and is no longer valid.

Because of the fake Chinese company and because parts of the malware code are written in the proprietary Chinese programming language Easy Programming Language (EPL), the security researchers suspect that the attackers come from China. They have therefore named the attack campaign "Nansh0u" – this string appeared in a file created during the attacks, and "nánshòu" is Mandarin for "uncomfortable" or "hard to bear".

Nansh0u illustrates two things above all: there are a great many Windows servers on the internet with ancient, unpatched security holes. These endanger systems even when they cannot be abused directly to break into the server, because where there are ancient unpatched holes, there are usually also weak passwords and inadequate brute-force protection.

The attacks also show that crypto-mining campaigns are carried out not only by script kiddies with ready-made exploit kits, but also by attackers who operate very professionally and put a lot of effort into their techniques. Mining Monero evidently still earns enough money to justify that kind of effort. (fab)

Published 30 JUN 2019

from: https://www.heise.de/newsticker/meldung/Ueber-50-000-Datenbank-Sever-ueber-Uralt-Windows-Bug-mit-Krypto-Minern-infiziert-4435622.html
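As a small, defensive illustration of the brute-force entry vector described in the article above, the following sketch checks whether an SQL Server login still accepts a password from a short list of known-weak candidates. It assumes the pymssql driver, a hypothetical host name, and that you are authorized to test the server; the password list is purely illustrative.

# Minimal sketch: test an SQL Server login against a short list of weak
# passwords, to illustrate the brute-force entry vector described above.
# Assumes `pip install pymssql` and that you are authorized to test the host.
from typing import Optional
import pymssql

WEAK_PASSWORDS = ["", "sa", "password", "123456", "admin"]  # illustrative only

def find_weak_password(host: str, user: str = "sa") -> Optional[str]:
    """Return the first weak password that works, or None if none do."""
    for candidate in WEAK_PASSWORDS:
        try:
            conn = pymssql.connect(server=host, user=user,
                                   password=candidate, login_timeout=5)
            conn.close()
            return candidate
        except pymssql.OperationalError:
            continue  # login rejected (or host unreachable); try the next one
    return None

if __name__ == "__main__":
    hit = find_weak_password("sql.example.internal")  # hypothetical host name
    print(f"Weak password accepted: {hit!r}" if hit else "No weak password in list accepted")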

 

***

03 JUN 2019

In a broader context (centralization, global monopolies):

 

The strategists shamefully withhold the decisive fact from us: Deutschland AG no longer belongs to the Germans. 85 percent of the DAX is now in foreign hands.

North American and British investors currently hold 54.1 percent of the shares in the 30 DAX companies. This is revealed by a recent study by the German Investor Relations Association (DIRK).

► The USA has increased its stake in Deutschland AG from 32.6 percent (2016) to 33.5 percent (2017) to 34.6 percent (2018) and – not least given the weak share prices of many former blue-chip companies – continues to buy.

► The largest single investor in originally German assets in the DAX is BlackRock, with 9.4 percent.

► Chinese and other Asian investors, contrary to what the media alarmism suggests, play only a minor role, at just under four percent.

 

via: Handelsblatt Daily

 

***

Those Machines In The Cloud

Cloud AI And The Future of Work

https://timesofcloud.com

 

"Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on." — Larry Page

What happens when you take two fundamentally life-changing technologies and merge them into an ultimate use case? The answer: businesses may become more efficient, but social disruption could become more prevalent. The argument for Universal Basic Income (UBI) becomes stronger as jobs get automated and vanish from the corporate landscape. However, all of this is conjecture at this point. Big Tech firms are now offering Machine Learning (ML) tools on their respective clouds, which allow corporate IT departments and novices to create ML applications that automate tasks without writing much code. This article takes a look at a nascent boom in Research and Development (R&D) and the new cloud AI platforms deployed by Google, Microsoft and Amazon. It concludes with a futuristic view of the employment landscape should these technologies succeed in creating ML Platforms as a Service (PaaS) for creating and deploying AI and ML applications.

Introduction To The Cloud

The cloud refers to the internet. Period. The internet began to be referred to as a cloud because IT system diagrams used a cloud as the symbol for the internet. The concept is as old as the 1960s, with some attributing the idea to John McCarthy and others to J.C.R. Licklider, who enabled the development of ARPANET (the precursor to the modern internet). Irrespective of the attribution, the cloud was envisioned as a computer on the internet that would provide infrastructure such as storage, platforms such as operating systems, and software applications over the internet for a fee. In a nutshell, it was conceived as renting hardware and/or software depending on the user's requirements.

Subsequently, the launch of a Customer Relationship Management (CRM) product called Salesforce, which companies could license for a fee over the cloud, marked the beginning of the era of pervasive cloud computing; delivering software over the internet was, in fact, Salesforce's raison d'être. Amazon's launch of the Elastic Compute Cloud, a pay-as-you-go cloud, gave a further boost to the popularization of the cloud. Today, Microsoft offers its cloud under the name Azure, Amazon under the name Amazon Web Services (AWS) and Google under Google Cloud Platform.

As mentioned before, the cloud can host Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). A cloud can also be public (open to all on a shared basis) or private. The total market for global public cloud is estimated to exceed $150 billion in 2020.

 

 

Amazon, with its first-mover advantage, leads the pack in terms of global market share and today offers a plethora of solutions under the AWS banner.

 

Source: Wall Street Journal

 

Today, any business can be virtualized. What that means is that the backend becomes a service that can be rented from a single provider or from multiple providers.

A financial institution can be set up completely in the cloud and offer its products via website and mobile apps. It could leverage networks such as the STAR network for issuing ATM/debit cards. The blockchain will become essential for accounting and for facilitating cross-border transactions. Banks can become completely digital, without any brick-and-mortar presence. However, that also means the field is ripe for big tech to enter and to leverage its network of users to build a loyal clientele.

Cloud computing has therefore spawned a revolution that is taking entire industries and virtualizing them. The next step is to look at human tasks that can be automated. This is where Artificial Intelligence (AI) residing on the cloud is key.

When AI Meets The Cloud

Now you know that the cloud (internet, Web 2.0, call it what you will) is everything today. However, challenges around 24/7 connectivity to the internet and cyber security still impede a complete overhaul of IT systems across the world. That has not stopped technology companies from spending billions of dollars on AI research. A deep learning revolution that began with Geoffrey Hinton's backpropagation is now being continued in the form of convolutional neural networks (CNNs) and generative adversarial networks (GANs). The evolution in approaches to machine learning is quite mind-boggling. It is as if the torch is being passed from one leader to another across the world, without any clear indication of who will emerge as the likely winner. Till then, the battle rages on. The new arena is called Machine Learning as a Service (MLaaS).

 

source: https://www.altexsoft.com/blog/datascience/comparing-machine-learning-as-a-service-amazon-microsoft-azure-google-cloud-ai/

 

There are tons of stories about how Microsoft, Google and Amazon are making ML accessible not only to data scientists but to ordinary people as well. One of the most interesting is how Makoto, a farmer from Japan, used Google's open-source TensorFlow framework to sort cucumbers.
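For a sense of what an entry-level TensorFlow model looks like, here is a minimal Keras image-classifier sketch. The 32x32 input size and the three output classes are placeholders, not details of Makoto's actual cucumber-sorting setup.

# Minimal sketch of a small image classifier with TensorFlow/Keras.
# Input size and the three quality classes are placeholders, not the
# actual configuration of the cucumber-sorting project mentioned above.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # e.g. three quality grades of cucumber (placeholder)

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),          # small RGB images
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=10)  # supply your own data
model.summary()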

 

 

Microsoft Azure AI

Microsoft announced its cloud computing service in October 2008 and released it on February 1, 2010. Elektronische Fahrwerksysteme, which develops chassis systems for Audi, uses Microsoft Azure to analyze roads. The idea is to enable autonomous vehicles to think ahead and understand the roads they are on:

As part of its research efforts, the company used Azure NC-series virtual machines powered by NVIDIA Tesla P100 GPUs to drive a deep learning AI solution that analyzes high-resolution two-dimensional images of roads (source: Microsoft)

Ubisoft, a video game publisher, runs its eSports game, Rainbow Six Siege, in Microsoft Azure:

 

 

In 2016, Microsoft created what it called "the world's first AI supercomputer" by installing Field-Programmable Gate Arrays (FPGAs) across Azure cloud servers in 15 countries. As per Wikipedia, an FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing — hence "field-programmable".

Google Cloud AutoML, Gluon, Tensorflow

Fei-Fei Li, Chief Scientist for Cloud AI at Google, is trying to make machine learning accessible to all businesses. However, she also notes that very few corporations have the talent and other resources necessary to successfully embed AI into their business applications. To support its bid to gain and retain leadership in the cloud AI space, Google has opened up an entire ecosystem to developers, which includes TensorFlow and Kubeflow as well as its container-orchestration system, Kubernetes.

 

 

Publishers such as the Dainik Bhaskar (DB Corp) group in India as well as the Hearst group of publications use Google Cloud AI to categorize digital content across their digital properties.

Amazon SageMaker

Amazon SageMaker is a platform for developing and deploying machine learning applications. It was launched in November 2017. As per Amazon:

What Is Amazon SageMaker?

Amazon SageMaker is a fully managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment. It provides an integrated Jupyter authoring notebook instance for easy access to your data sources for exploration and analysis, so you don’t have to manage servers. It also provides common machine learning algorithms that are optimized to run efficiently against extremely large data in a distributed environment. With native support for bring-your-own-algorithms and frameworks, Amazon SageMaker offers flexible distributed training options that adjust to your specific workflows. Deploy a model into a secure and scalable environment by launching it with a single click from the Amazon SageMaker console. Training and hosting are billed by minutes of usage, with no minimum fees and no upfront commitments.

This is a HIPAA Eligible Service. For more information about AWS, U.S. Health Insurance Portability and Accountability Act of 1996 (HIPAA), and using AWS services to process, store, and transmit protected health information (PHI), see HIPAA Overview.

Are You a First-time User of Amazon SageMaker?

If you are a first-time user of Amazon SageMaker, we recommend that you do the following:

  1. Read How Amazon SageMaker Works – This section provides an overview of Amazon SageMaker, explains key concepts, and describes the core components involved in building AI solutions with Amazon SageMaker. We recommend that you read this topic in the order presented.
  2. Read Get Started – This section explains how to set up your account and create your first Amazon SageMaker notebook instance.
  3. Try a model training exercise – This exercise walks you through training your first model. You use training algorithms provided by Amazon SageMaker. For more information, see Get Started.
  4. Explore other topics – Depending on your needs, do the following:
  5. See the API Reference – This section describes the Amazon SageMaker API operations.

more: https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html
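To make the build-train-deploy workflow described above concrete, here is a minimal sketch using the SageMaker Python SDK. The IAM role, S3 paths and training image are placeholders, and the argument names assume a v2-era version of the SDK.

# Minimal sketch of the SageMaker build-train-deploy workflow using the
# SageMaker Python SDK (v2-style argument names). The role ARN, S3 paths
# and training image are placeholders you would replace with your own.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder IAM role

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",
    sagemaker_session=session,
)

# Train on data already uploaded to S3 (one named input channel here).
estimator.fit({"train": "s3://my-bucket/data/train/"})

# One call stands up a managed HTTPS endpoint for real-time inference.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.large")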

 

Let’s take an example to understand how Amazon’s applications help integrate machine learning into everyday life.

Alex Schultz, a father with no deep learning experience, built an application called ReadToMe that reads books to his kids using AWS DeepLens. Alex built the application using OpenCV, the DeepLens camera, Python, Polly, Tesseract OCR, Lambda, MXNet and Google's TensorFlow. This example demonstrates that ML is accessible and can be embedded in real life through a variety of applications, which strengthens the argument that it may become all-pervasive and ubiquitous.

 

 

Co-Opetition

On October 12, 2017, Amazon Web Services and Microsoft announced a new deep learning library, called Gluon, which allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps.
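For a flavor of Gluon's imperative API, here is a minimal sketch using the Gluon module that ships with Apache MXNet; the layer sizes and dummy input are arbitrary placeholders.

# Minimal sketch of defining and initializing a small network with Gluon,
# the imperative API that ships with Apache MXNet. Layer sizes are arbitrary.
import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation="relu"),
        nn.Dense(10))                 # e.g. 10 output classes (placeholder)

net.initialize(mx.init.Xavier())      # initialize the parameters

x = nd.random.uniform(shape=(4, 20))  # dummy batch: 4 samples, 20 features
print(net(x).shape)                   # -> (4, 10)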

AI is a very broad term that covers many building blocks such as the cloud platform, the programming platform, the APIs, as well as the integrated circuits. Such a wide variety of interrelated and continuously evolving technologies, with a wide array of applications in business and daily life, gives big tech companies an edge. Who among them is first among equals is for the future to say.

Future of Work

Every company is becoming a technology company. If not, they need to be aware of the mega trends and deploy resources wisely to prevent obsolescence.

Financial institutions today prefer to rent Software as a Service (SaaS) because developing software is not their core competency. Also, there is so much churn in technology today that being nimble and flexible is key to survival and growth.

Imagine a scenario where you want a certain sales report or a business review prepared for senior leadership. You could just give a voice command to a digital assistant (AI) asking for a year-to-date national report of all sales during 2018. The data would be stored in the same cloud as the AI assistant. The AI bot would then organize the data and send it to an output of your choice, which could be an augmented reality screen. Now extend this scenario to all routine and non-routine tasks that can be automated, and you can imagine how scary AI as a technology can be. It is as if technology had just gobbled up the world of work as we know it.

At first, routine tasks can be automated. Later, AI can become a recommendation engine and finally it will be able to take decisions on its own. There are varying estimates on the extent of automation in the next decade or so. However, the takeaway for most is that learning new skills or treating life as a continuing education should be the mantra.

 

Source: McKinsey

 

While technological disruption is continuing its march into the workplace, AI and its effects are not pervasive enough to cause social unrest. Yet. Therein lies the ethical dilemma.

For instance, politicians in developing economies such as India are already hearing distressed voices complaining about the loss of driving jobs due to automated vehicles. When an automated Uber killed a pedestrian in Arizona, the incident gave rise to suspicions about the viability of automation. As with Bitcoin and CRISPR-Cas9 gene editing, AI regulation will need to be ahead of the game.

However, for common folk like us, learning just acquired a whole new practical meaning.

 

from: https://hackernoon.com/those-machines-in-the-cloud-c988f36b6bef

 

***

 

Human-level performance in 3D multiplayer games with population-based reinforcement learning

Science  31 May 2019:
Vol. 364, Issue 6443, pp. 859-865
DOI: 10.1126/science.aau6249

 

End-to-end reinforcement learning (RL) methods (1–5) have so far not succeeded in training agents in multiagent games that combine team and competitive play owing to the high complexity of the learning problem that arises from the concurrent adaptation of multiple learning agents in the environment (6, 7). We approached this challenge by studying team-based multiplayer three-dimensional (3D) first-person video games, a genre that is particularly immersive for humans (8) and has even been shown to improve a wide range of cognitive abilities (9). We focused specifically on a modified version (10) of Quake III Arena (11), the canonical multiplayer 3D first-person video game, whose game mechanics served as the basis for many subsequent games and which has a thriving professional scene (12).

The task we considered is the game mode Capture the Flag (CTF), which is played on both indoor- and outdoor-themed maps that are randomly generated for each game (Fig. 1, A and B). Two opposing teams consisting of multiple individual players compete to capture each other's flags by strategically navigating, tagging, and evading opponents. The team with the greatest number of flag captures after five minutes wins. The opposing teams' flags are situated at opposite ends of each map—a team's base—and in indoor-themed maps, the base room is colored according to the team color. In addition to moving through the environment, agents can tag opponents by activating their laser gadget when pointed at an opponent, which sends the opponent back to their base room after a short delay, known as respawning. If an agent is holding a flag when they are tagged, this flag is dropped to the floor where they are tagged and is said to be stray. CTF is played in a visually rich simulated physical environment (movie S1), and agents interact with the environment and with other agents only through their observations and actions (moving forward and backward; strafing left and right; and looking by rotating, jumping, and tagging). In contrast to previous work (13–23), agents do not have access to models of the environment, state of other players, or human policy priors, nor can they communicate with each other outside of the game environment. Each agent acts and learns independently, resulting in decentralized control within a team.

 

Fig. 1 CTF task and computational training framework.
(A and B) Two example maps that have been sampled from the distribution of (A) outdoor maps and (B) indoor maps. Each agent in the game sees only its own first-person pixel view of the environment. (C) Training data are generated by playing thousands of CTF games in parallel on a diverse distribution of procedurally generated maps and (D) used to train the agents that played in each game with RL. (E) We trained a population of 30 different agents together, which provided a diverse set of teammates and opponents to play with and was also used to evolve the internal rewards and hyperparameters of agents and learning process. Each circle represents an agent in the population, with the size of the inner circle representing strength. Agents undergo computational evolution (represented as splitting) with descendents inheriting and mutating hyperparameters (represented as color). Gameplay footage and further exposition of the environment variability can be found in movie S1.

 

Learning system

We aimed to devise an algorithm and training procedure that enables agents to acquire policies that are robust to the variability of maps, number of players, and choice of teammates and opponents, a challenge that generalizes that of ad hoc teamwork (24). In contrast to previous work (25), the proposed method is based purely on end-to-end learning and generalization. The proposed training algorithm stabilizes the learning process in partially observable multiagent environments by concurrently training a diverse population of agents who learn by playing with each other. In addition, the agent population provides a mechanism for meta-optimization.

In our formulation, the agent's policy π uses the same interface available to human players. It receives raw red-green-blue (RGB) pixel input $x_t$ from the agent's first-person perspective at time step t, produces control actions $a_t \sim \pi(\cdot \mid x_1, \ldots, x_t)$ by sampling from the distribution given by policy π, and receives $\rho_t$, game points, which are visible on the in-game scoreboard. The goal of RL in this context is to find a policy that maximizes the expected cumulative reward $\mathbb{E}_\pi\!\left[\sum_{t=1}^{T} r_t\right]$ over a CTF game with T time steps. We used a multistep actor-critic policy gradient algorithm (2) with off-policy correction (26) and auxiliary tasks (5) for RL. The agent's policy π was parameterized by means of a multi-time scale recurrent neural network with external memory (Fig. 2A and fig. S11) (27). Actions in this model were generated conditional on a stochastic latent variable, whose distribution was modulated by a more slowly evolving prior process. The variational objective function encodes a trade-off between maximizing expected reward and consistency between the two time scales of inference (28). Whereas some previous hierarchical RL agents construct explicit hierarchical goals or skills (29–32), this agent architecture is conceptually more closely related to work outside of RL on building hierarchical temporal representations (33–36) and recurrent latent variable models for sequential data (37, 38). The resulting model constructs a temporally hierarchical representation space in a way that promotes the use of memory (fig. S7) and temporally coherent action sequences.

 

Fig. 2 Agent architecture and benchmarking.
(A) How the agent processes a temporal sequence of observations $x_t$ from the environment. The model operates at two different time scales, faster at the bottom and slower by a factor of τ at the top. A stochastic vector-valued latent variable is sampled at the fast time scale from distribution $Q_t$ on the basis of observations $x_t$. The action distribution $\pi_t$ is sampled conditional on the latent variable at each time step t. The latent variable is regularized by the slow-moving prior $P_t$, which helps capture long-range temporal correlations and promotes memory. The network parameters are updated by using RL according to the agent's own internal reward signal $r_t$, which is obtained from a learned transformation w of game points $\rho_t$. w is optimized for winning probability through PBT, another level of training performed at yet a slower time scale than that of RL. Detailed network architectures are described in fig. S11. (B) (Top) The Elo skill ratings of the FTW agent population throughout training (blue) together with those of the best baseline agents using hand-tuned reward shaping (RS) (red) and the game-winning reward signal only (black), compared with human and random-agent reference points (violet; shaded region shows strength between the 10th and 90th percentile). The FTW agent achieves a skill level considerably beyond strong human subjects, whereas the baseline agent's skill plateaus below and does not learn anything without reward shaping [evaluation procedure is provided in (28)]. (Bottom) The evolution of three hyperparameters of the FTW agent population: learning rate, Kullback-Leibler divergence (KL) weighting, and internal time scale τ, plotted as mean and standard deviation across the population.

For ad hoc teams, we postulated that an agent’s policy π_1 should maximize the probability P(π_1’s team wins | ω, π_{1:N}) of winning for its team, π_{1:N/2} = (π_1, π_2, …, π_{N/2}), which is composed of π_1 itself and its teammates’ policies π_2, …, π_{N/2}, for a total of N players in the game

$$P(\pi_1\text{'s team wins} \mid \omega, \pi_{1:N}) = \mathbb{E}_{\tau \sim p^{\omega}_{\pi_{1:N}},\ \epsilon \sim B(0.5)} \Big[ \mathbb{1}\big( ⚐(\tau, \pi_{1:N/2}) > ⚐(\tau, \pi_{N/2+1:N}) + \epsilon - 0.5 \big) \Big] \qquad (1)$$

in which trajectories τ (sequences of actions, states, and rewards) are sampled from the joint probability distribution $p^{\omega}_{\pi_{1:N}}$ over game setup ω and actions sampled from policies. The operator 𝟙[x] returns 1 if and only if x is true, and ⚐(τ, π) returns the number of flag captures obtained by agents in π in trajectory τ. Ties are broken by ϵ, which is sampled from an independent Bernoulli distribution with probability 0.5. The distribution Ω over specific game setups is defined over the Cartesian product of the set of maps and the set of random seeds. During learning and testing, each game setup ω is sampled from Ω, ω ~ Ω. The final game outcome is too sparse to be effectively used as the sole reward signal for RL, and so we learn rewards r_t to direct the learning process toward winning; these are more frequently available than the game outcome. In our approach, we operationalized the idea that each agent has a dense internal reward function (39–41) by specifying r_t = w(ρ_t) based on the available game points signals ρ_t (points are registered for events such as capturing a flag) and, crucially, allowing the agent to learn the transformation w so that policy optimization on the internal rewards r_t optimizes the policy “For The Win,” giving us the “FTW agent.”
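As a toy illustration (not the paper’s code) of the outcome indicator in Eq. 1 and of an internal reward r_t = w(ρ_t) built from game-point events, with event names and weights invented for the example:

import numpy as np

rng = np.random.default_rng(1)

def team_wins(captures_own, captures_opp, rng):
    """Return True if our team wins; ties are broken by a Bernoulli(0.5) draw."""
    eps = rng.integers(0, 2)          # epsilon ~ B(0.5)
    return captures_own > captures_opp + eps - 0.5

# A stand-in game-points vector rho_t: one entry per scoring event type.
EVENTS = ["i_captured_flag", "i_tagged_opponent", "opponent_captured_flag"]
w = np.array([1.0, 0.5, -1.0])        # hand-picked stand-in for the learned transformation w

rho_t = np.array([1, 0, 0])           # e.g. the agent just captured a flag
r_t = float(w @ rho_t)                # internal reward used for RL
print("internal reward r_t =", r_t)
print("team wins:", team_wins(captures_own=3, captures_opp=3, rng=rng))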

Training agents in multiagent systems requires instantiations of other agents in the environment, such as teammates and opponents, to generate learning experience. A solution could be self-play RL, in which an agent is trained by playing against its own policy. Although self-play variants can prove effective in some multiagent games (14, 15, 42–46), these methods can be unstable and in their basic form do not support concurrent training, which is crucial for scalability. Our solution is to train in parallel a population of P different agents $\boldsymbol{\pi} = (\pi_p)_{p=1}^{P}$ that play with each other, introducing diversity among players in order to stabilize training (47). Each agent within this population learns from experience generated by playing with teammates and opponents sampled from the population. We sampled the agents, indexed by ι, for a training game by using a stochastic matchmaking scheme m_p(π) that biases co-players to be of similar skill to player p. This scheme ensures that, a priori, the outcome is sufficiently uncertain to provide a meaningful learning signal and that a diverse set of teammates and opponents participate in training. Agents’ skill levels were estimated online by calculating Elo scores [adapted from chess (48)] on the basis of outcomes of training games. We also used the population to meta-optimize the internal rewards and hyperparameters of the RL process itself, which results in the joint maximization of

$$J_{\text{inner}}(\pi_p \mid w_p) = \mathbb{E}_{\iota \sim m_p(\boldsymbol{\pi}),\, \omega \sim \Omega}\; \mathbb{E}_{\tau \sim p^{\omega}_{\pi_\iota}} \left[ \sum_{t=1}^{T} \gamma^{t-1} w_p(\rho_{p,t}) \right] \quad \forall\, \pi_p \in \boldsymbol{\pi}$$

$$J_{\text{outer}}(w_p, \phi_p \mid \boldsymbol{\pi}) = \mathbb{E}_{\iota \sim m_p(\boldsymbol{\pi}),\, \omega \sim \Omega}\; P\!\left(\pi^{w,\phi}_p\text{'s team wins} \mid \omega, \pi^{w,\phi}_\iota\right) \qquad (2)$$

$$\text{where } \pi^{w,\phi}_p = \operatorname{optimize}_{\pi_p}(J_{\text{inner}}, w, \phi)$$

This can be seen as a two-tier RL problem. The inner optimization maximizes Jinner, the agents’ expected future discounted internal rewards. The outer optimization of Jouter can be viewed as a meta-game, in which the meta-reward of winning the match is maximized with respect to internal reward schemes wp and hyperparameters ϕp, with the inner optimization providing the meta transition dynamics. We solved the inner optimization with RL as previously described, and the outer optimization with population-based training (PBT) (49). PBT is an online evolutionary process that adapts internal rewards and hyperparameters and performs model selection by replacing underperforming agents with mutated versions of better agents. This joint optimization of the agent policy by using RL together with the optimization of the RL procedure itself toward a high-level goal proves to be an effective and potentially widely applicable strategy and uses the potential of combining learning and evolution (50) in large-scale learning systems.
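As a rough sketch of the outer PBT loop described here (the field names, fitness values, mutation rule, and population size below are illustrative stand-ins, not the paper’s settings): underperforming agents copy the internal reward weights of better agents and explore by mutating the inherited hyperparameters.

import copy
import random

random.seed(0)

def mutate(hparams, scale=1.2):
    """Randomly perturb each hyperparameter up or down."""
    return {k: v * (scale if random.random() < 0.5 else 1.0 / scale)
            for k, v in hparams.items()}

population = [
    {"name": f"agent_{p}",
     "hparams": {"learning_rate": 1e-4, "kl_weight": 0.1, "tau": 10.0},
     "reward_weights": [1.0, 0.5, -1.0],   # stand-in for the learned w, evolved by PBT
     "fitness": random.random()}           # placeholder for an Elo- or win-rate-based fitness
    for p in range(8)
]

def pbt_step(population, exploit_fraction=0.25):
    ranked = sorted(population, key=lambda a: a["fitness"], reverse=True)
    n = max(1, int(len(ranked) * exploit_fraction))
    top, bottom = ranked[:n], ranked[-n:]
    for loser in bottom:
        winner = random.choice(top)
        # Exploit: inherit the better agent's reward scheme ...
        loser["reward_weights"] = copy.deepcopy(winner["reward_weights"])
        # ... and explore: mutate the inherited hyperparameters.
        loser["hparams"] = mutate(winner["hparams"])
    return population

population = pbt_step(population)
print(population[0]["hparams"])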

Tournament evaluation

To assess the generalization performance of agents at different points during training, we performed a large tournament on procedurally generated maps with ad hoc matches that involved three types of agents as teammates and opponents: ablated versions of FTW (including state-of-the-art baselines), Quake III Arena scripted bots of various levels (51), and human participants with first-person video game experience. The Elo scores and derived winning probabilities for different ablations of FTW, and how the combination of components provides superior performance, are shown in Fig. 2B and fig. S1. The FTW agents clearly exceeded the win rate of humans in maps that neither agent nor human had seen previously (that is, zero-shot generalization), with a team of two humans on average capturing 16 fewer flags per game than a team of two FTW agents (fig. S1, bottom, FF versus hh). Only as part of a human-agent team did we observe a human winning over an agent-agent team (5% win probability). This result suggests that trained agents are capable of cooperating with never-seen-before teammates, such as humans. In a separate study, we probed the exploitability of the FTW agent by allowing a team of two professional games testers with full communication to play continuously against a fixed pair of FTW agents. Even after 12 hours of practice, the human game testers were only able to win 25% (6.3% draw rate) of games against the agent team (28).
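For reference, the Elo bookkeeping used for these ratings (and for the skill-based matchmaking described earlier) is the standard chess-style update; the ratings and K-factor in this sketch are illustrative only, not values from the paper.

def elo_expected(r_a, r_b):
    """Expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """Return updated ratings after a game; score_a is 1, 0.5 or 0."""
    e_a = elo_expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

ftw, human = 1600.0, 1300.0                       # illustrative ratings
ftw, human = elo_update(ftw, human, score_a=1.0)  # FTW team wins one game
print(round(ftw), round(human))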

Interpreting the difference in performance between agents and humans must take into account the subtle differences in observation resolution, frame rate, control fidelity, and intrinsic limitations in reaction time and sensorimotor skills (fig. S10A) [(28), section 3.1]. For example, humans have superior observation and control resolution; this may be responsible for humans successfully tagging at long range where agents could not (humans, 17% tags above 5 map units; agents, 0.5%). By contrast, at short range, agents have superior tagging reaction times to humans: By one measure, FTW agents respond to newly appeared opponents with a mean of 258 ms, compared with 559 ms for humans (fig. S10B). Another advantage exhibited by agents is their tagging accuracy: FTW agents achieve 80% accuracy compared with humans’ 48%. When the FTW agents’ tagging accuracy was artificially reduced to a level similar to humans’ (without retraining them), their win rate dropped but still exceeded that of humans (fig. S10C). Thus, although agents learned to make use of their potential for better tagging accuracy, this is only one factor contributing to their overall performance.

To explicitly investigate the effect of the native superiority in the reaction time of agents compared with that of humans, we introduced an artificial 267-ms reaction delay to the FTW agent (in line with the previously reported discrepancies, and corresponding to fast human reaction times in simple psychophysical paradigms) (52–54). This response-delayed FTW agent was fine-tuned from the nondelayed FTW agent through a combination of RL and distillation through time [(28), section 3.1.1]. In a further exploitability study, the human game testers achieved a 30% win rate against the resulting response-delayed agents. In additional tournament games with a wider pool of human participants, a team composed of a strong human and a response-delayed agent could only achieve an average win rate of 21% against a team of entirely response-delayed agents. The human participants performed slightly more tags than the response-delayed agent opponents, although delayed agents achieved more flag pickups and captures (Fig. 2). This highlights that even with human-comparable reaction times, the agent exhibits human-level performance.

Agent analysis

We hypothesized that trained agents of such high skill have learned a rich representation of the game. To investigate this, we extracted ground-truth state from the game engine at each point in time in terms of 200 binary features such as “Do I have the flag?”, “Did I see my teammate recently?”, and “Will I be in the opponent’s base soon?” We say that the agent has knowledge of a given feature if logistic regression on the internal state of the agent accurately models the feature. In this sense, the internal representation of the agent was found to encode a wide variety of knowledge about the game situation (fig. S4). The FTW agent’s representation was found to encode features related to the past particularly well; for example, the FTW agent was able to classify the state “both flags are stray” (flags dropped not at base) with 91% AUCROC (area under the receiver operating characteristic curve), compared with 70% with the self-play baseline. Looking at the acquisition of knowledge as training progresses, the agent first learned about its own base, then about the opponent’s base, and then about picking up the flag. Immediately useful flag knowledge was learned before knowledge related to tagging or their teammate’s situation. Agents were never explicitly trained to model this knowledge; thus, these results show the spontaneous emergence of these concepts purely through RL-based training.
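As a rough sketch of this kind of decoding probe, on synthetic data rather than the agent’s real activations (the feature construction below is invented for the example), one could fit a logistic regression on the internal state and score it with AUCROC:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, state_dim = 2000, 64

# Synthetic "internal states" and a binary game-state feature that is
# partially decodable from them (e.g. "both flags are stray").
internal_state = rng.normal(size=(n_samples, state_dim))
true_direction = rng.normal(size=state_dim)
logits = internal_state @ true_direction * 0.5 + rng.normal(size=n_samples)
feature = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    internal_state, feature, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"decoding AUCROC: {auc:.2f}")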

A visualization of how the agent represents knowledge was obtained by performing dimensionality reduction of the agent’s activations through use of t-distributed stochastic neighbor embedding (t-SNE) (Fig. 3) (55). Internal agent state clustered in accordance with conjunctions of high-level game-state features: flag status, respawn state, and agent location (Fig. 3B). We also found individual neurons whose activations coded directly for some of these features—for example, a neuron that was active if and only if the agent’s teammate was holding the flag, which is reminiscent of concept cells (56). This knowledge was acquired in a distributed manner early in training (after 45,000 games) but then represented by a single, highly discriminative neuron later in training (at around 200,000 games). This observed disentangling of game state is most pronounced in the FTW agent (fig. S8).
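A minimal sketch of this kind of visualization, again on synthetic stand-in activations rather than the agent’s real internal states (cluster structure and dimensions are invented), would embed the states in 2D with t-SNE and color the points by game state:

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_points, state_dim, n_game_states = 600, 64, 4

# Synthetic "activations" drawn from one cluster per high-level game state.
game_state = rng.integers(0, n_game_states, size=n_points)
centers = rng.normal(scale=3.0, size=(n_game_states, state_dim))
activations = centers[game_state] + rng.normal(size=(n_points, state_dim))

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(activations)
print(embedding.shape)  # (600, 2); plot and color by game_state to reproduce the idea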

 

Fig. 3 Knowledge representation and behavioral analysis.
(A) The 2D t-SNE embedding of an FTW agent’s internal states during gameplay. Each point represents the internal state (hp, hq) at a particular point in the game and is colored according to the high-level game state at this time—the conjunction of (B) four basic CTF situations, each state of which is colored distinctly. Color clusters form, showing that nearby regions in the internal representation of the agent correspond to the same high-level game state. (C) A visualization of the expected internal state arranged in a similarity-preserving topological embedding and colored according to activation (fig. S5). (D) Distributions of situation conditional activations (each conditional distribution is colored gray and green) for particular single neurons that are distinctly selective for these CTF situations and show the predictive accuracy of this neuron. (E) The true return of the agent’s internal reward signal and (F) the agent’s prediction, its value function (orange denotes high value, and purple denotes low value). (G) Regions where the agent’s internal two–time scale representation diverges (red), the agent’s surprise, measured as the KL between the agent’s slow– and fast–time scale representations (28). (H) The four-step temporal sequence of the high-level strategy “opponent base camping.” (I) Three automatically discovered high-level behaviors of agents and corresponding regions in the t-SNE embedding. (Right) Average occurrence per game of each behavior for the FTW agent, the FTW agent without temporal hierarchy (TH), self-play with reward shaping agent, and human subjects (fig. S9).

 

One of the most salient aspects of the CTF task is that each game takes place on a randomly generated map, with walls, bases, and flags in new locations. We hypothesized that this requires agents to develop rich representations of these spatial environments in order to deal with task demands and that the temporal hierarchy and explicit memory module of the FTW agent help toward this. An analysis of the memory recall patterns of the FTW agent playing in indoor environments shows precisely that; once the agent had discovered the entrances to the two bases, it primarily recalled memories formed at these base entrances (Fig. 4 and fig. S7). We also found that the full FTW agent with temporal hierarchy learned a coordination strategy during maze navigation that ablated versions of the agent did not, resulting in more efficient flag capturing (fig. S2).

 

Fig. 4 Progression of agent during training.
Shown is the development of knowledge representation and behaviors of the FTW agent over the training period of 450,000 games, segmented into three phases (movie S2). “Knowledge” indicates the percentage of game knowledge that is linearly decodable from the agent’s representation, measured by average scaled AUCROC across 200 features of game state. Some knowledge is compressed to single-neuron responses (Fig. 3A), whose emergence in training is shown at the top. “Relative internal reward magnitude” indicates the relative magnitude of the agent’s internal reward weights of 3 of the 13 events corresponding to game points ρ. Early in training, the agent puts large reward weight on picking up the opponent’s flag, whereas later, this weight is reduced, and reward for tagging an opponent and penalty when opponents capture a flag are increased by a factor of two. “Behavior probability” indicates the frequencies of occurrence for 3 of the 32 automatically discovered behavior clusters through training. Opponent base camping (red) is discovered early on, whereas teammate following (blue) becomes very prominent midway through training before mostly disappearing. The “home base defense” behavior (green) resurges in occurrence toward the end of training, which is in line with the agent’s increased internal penalty for more opponent flag captures. “Memory usage” comprises heat maps of visitation frequencies for (left) locations in a particular map and (right) locations of the agent at which the top-10 most frequently read memories were written to memory, normalized by random reads from memory, indicating which locations the agent learned to recall. Recalled locations change considerably throughout training, eventually showing the agent recalling the entrances to both bases, presumably in order to perform more efficient navigation in unseen maps (fig. S7).

 

Analysis of temporally extended behaviors provided another view on the complexity of behavioral strategies learned by the agent (57) and is related to the problem a coach might face when analyzing behavior patterns in an opponent team (58). We developed an unsupervised method to automatically discover and quantitatively characterize temporally extended behavior patterns, inspired by models of mouse behavior (59), which groups short game-play sequences into behavioral clusters (fig. S9 and movie S3). The discovered behaviors included well-known tactics observed in human play, such as “waiting in the opponent’s base for a flag to reappear” (“opponent base camping”), which we only observed in FTW agents with a temporal hierarchy. Some behaviors, such as “following a flag-carrying teammate,” were discovered and discarded midway through training, whereas others such as “performing home base defense” are most prominent later in training (Fig. 4).
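As a rough, simplified sketch of the clustering idea (the per-window features and the use of k-means below are our stand-ins; the paper’s actual method is more involved), short windows of play are summarized as feature vectors and grouped into behavior prototypes:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_windows, window_feature_dim, n_behaviors = 500, 6, 8

# Each row summarizes one short window of play (e.g. distances to bases,
# flag possession, movement statistics); here it is random stand-in data.
window_features = rng.normal(size=(n_windows, window_feature_dim))

clusters = KMeans(n_clusters=n_behaviors, n_init=10, random_state=0).fit_predict(window_features)

# Frequency of each discovered "behavior" across the dataset.
counts = np.bincount(clusters, minlength=n_behaviors)
print(dict(enumerate(counts)))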

Conclusions

In this work, we have demonstrated that an artificial agent using only pixels and game points as input can learn to play highly competitively in a rich multiagent environment: a popular multiplayer first-person video game. This was achieved by combining PBT of agents, internal reward optimization, and temporally hierarchical RL with scalable computational architectures. The presented framework of training populations of agents, each with their own learned rewards, makes minimal assumptions about the game structure and therefore could be applicable for scalable and stable learning in a wide variety of multiagent systems. The temporally hierarchical agent represents a powerful architecture for problems that require memory and temporally extended inference. Limitations of the current framework, which should be addressed in future work, include the difficulty of maintaining diversity in agent populations, the greedy nature of the meta-optimization performed by PBT, and the variance from temporal credit assignment in the proposed RL updates. Our work combines techniques to train agents that can achieve human-level performance at previously insurmountable tasks. When trained in a sufficiently rich multiagent world, complex and surprising high-level intelligent artificial behavior emerged.

Supplementary Materials

science.sciencemag.org/content/364/6443/859/suppl/DC1

Supplementary Text

Figs. S1 to S12

References (61–83)

Pseudocode

Supplementary Data

Movies S1 to S4

References and Notes

  1. Additional information is available as supplementary materials.
Acknowledgments: We thank M. Botvinick, S. Osindero, V. Mnih, A. Graves, N. de Freitas, N. Heess, and K. Tuyls for helpful comments on the manuscript; A. Grabska-Barwińska for support with analysis; S. Green and D. Purves for additional environment support and design; K. McKee and T. Zhu for human experiment assistance; A. Sadik, S. York, and P. Mendolicchio for exploitation study participation; A. Cain for help with figure design; P. Lewis, D. Fritz, and J. Sanchez Elias for 3D map visualization work; V. Holgate, A. Bolton, C. Hillier, and H. King for organizational support; and the rest of the DeepMind team for their invaluable support and ideas.
Author contributions: M.J. and T.Gra. conceived and managed the project; M.J., W.M.C., and I.D. designed and implemented the learning system and algorithm with additional help from L.M., T.Gra., G.L., N.S., T.Gre., and J.Z.L.; A.G.C., C.B., and L.M. created the game environment presented; M.J., W.M.C., I.D., and L.M. ran experiments and analyzed data with additional input from N.C.R., A.S.M., and A.R.; L.M. and L.D. ran human experiments; D.S., D.H., and K.K. provided additional advice and management; M.J., W.M.C., and T.Gra. wrote the paper; and M.J. and W.M.C. created figures and videos.
Competing interests:
M.J., W.M.C., and I.D. are inventors on U.S. patent application US62/677,632 submitted by DeepMind that covers temporally hierarchical RL. M.J., W.M.C., and T.G. are inventors on U.S. patent application PCT/EP2018/082162 submitted by DeepMind that covers population based training of neural networks. I.D. is additionally affiliated with Hudson River Trading, New York, NY, USA.
Data and materials availability:
A full description of the algorithm in pseudocode is available in the supplementary materials. The data are deposited in 10.7910/DVN/JJETYE (60).

ExoWarfare done lousy: Facebook deletes billions of fake accounts

(if you can identify them – it’s done badly)

 

24 MAY 2019

Facebook reported 2.4 billion monthly active users in its earnings report for the first quarter of 2019. In the same period, the company deleted 2.2 billion fake accounts, according to the Community Standards Enforcement Report published yesterday. Even so, Facebook is far from catching every profile that violates its rules, as another Statista chart shows. According to Mark Zuckerberg, the enormous numbers are partly due to fraudsters who try to set up thousands of accounts automatically at once.

 

from: https://de.statista.com/infografik/18152/anzahl-der-von-facebook-geloeschten-fake-accounts/

 

 

“Bestmixer.io” – EU Authorities Shut Down Bitcoin Transaction Mixer

22 MAY 2019

The Dutch Financial Criminal Investigative Service has seized the website of a bitcoin transaction mixer in a crackdown involving Europol and other authorities.

Calling it the “first law enforcement action of its kind against such a cryptocurrency mixer service,” Europol said in a statement Wednesday that the seizure of Bestmixer.io followed an investigation that began last summer. As part of the move, police seized six servers based in Luxembourg and the Netherlands.

Coin mixers or “tumblers” like Bestmixer.io work by pooling funds together and creating a web of new transactions in an effort to obfuscate their original source. Typically, coin mixer users pay a fee on top of the funds they send in, receiving back their money from a wholly new address.

Europol alleged that much of the money that passed through Bestmixer.io “had a criminal origin or destination,” contending that “in these cases, the mixer was probably used to conceal and launder criminal flows of money.” The agency said that the service, which launched in May 2018, mixed approximately 27,000 bitcoins.

“Today’s Bestmixer seizure shows an increase in law enforcement activities on pure crypto-to-crypto services,” said Dave Jevans, CipherTrace CEO. “This follows on the heels of European AMLD5 regulations and the views expressed by US FinCEN that crypto-to-crypto services are considered to be money services businesses and must comply with those regulations. This is the first public seizure of a bitcoin mixing service, and shows that not only are dark marketplaces subject to criminal enforcement, but other services are as well.”

Europol’s statement suggests that the investigation isn’t complete and that authorities intend to follow up on the information gleaned from this week’s server seizures.

“The Dutch FIOD has gathered information on all the interactions on this platform in the past year. This includes IP-addresses, transaction details, bitcoin addresses and chat messages,” the agency said. “This information will now be analysed by the FIOD in cooperation with Europol and intelligence packages will be shared with other countries.”

“Bestmixer has blatantly advertised money laundering services, and falsely claimed to be domiciled in Curacao where they claimed it was a legal service. The reality is that they were operating in Europe and servicing customers from many countries around the world,” said Jevans.

from: https://www.coindesk.com/eu-authorities-crack-down-on-bitcoin-transaction-mixer

 

***

EUROPOL:
Multi-million euro cryptocurrency laundering service Bestmixer.io taken down

 

22 May 2019 – Press Release

First law enforcement action of its kind against such a cryptocurrency mixer service

Today, the Dutch Fiscal Information and Investigation Service (FIOD), in close cooperation with Europol and the authorities in Luxembourg, clamped down on one of the world’s leading cryptocurrency mixing services, Bestmixer.io.

Initiated back in June 2018 by the FIOD with the support of the internet security company McAfee, this investigation resulted in the seizure of six servers in the Netherlands and Luxembourg.

One of the largest mixing services

Bestmixer.io was one of the three largest mixing services for cryptocurrencies and offered services for mixing the cryptocurrencies bitcoins, bitcoin cash and litecoins. The service started in May 2018 and achieved a turnover of at least $200 million (approx. 27,000 bitcoins) in a year’s time and guaranteed that the customers would remain anonymous.

Nature of the service

A cryptocurrency tumbler or cryptocurrency mixing service is a service offered to mix potentially identifiable or ‘tainted’ cryptocurrency funds with others, so as to obscure the trail back to the fund’s original source.

The investigation so far into this case has shown that many of the mixed cryptocurrencies on Bestmixer.io had a criminal origin or destination. In these cases, the mixer was probably used to conceal and launder criminal flows of money.

Follow-up

The Dutch FIOD has gathered information on all the interactions on this platform in the past year. This includes IP-addresses, transaction details, bitcoin addresses and chat messages. This information will now be analysed by the FIOD in cooperation with Europol and intelligence packages will be shared with other countries.

 

from: https://www.europol.europa.eu/newsroom/news/multi-million-euro-cryptocurrency-laundering-service-bestmixerio-taken-down

 

***

 

23 MAY 2019

“We need a first step toward more privacy,” Vitalik Buterin, founder of the ethereum blockchain network, said Wednesday.

In a new HackMD post, Buterin detailed a design to help obscure ethereum user activity on the blockchain. More specifically, Buterin proposed a “minimal mixer design” aimed at obfuscating user addresses when sending fixed quantities of ether (ETH).

According to Buterin, users can transact in one of two ways. “The default behavior” is to send and receive ether from a single account, which, of course, also means that all of a user’s activity will be publicly linked on the blockchain. Alternatively, users can transact through multiple accounts or addresses. However, this too isn’t a perfect solution to obfuscating user activity on the blockchain.

“The transactions you make to send ETH to those addresses themselves reveal the link between them,” detailed Buterin in his post.

As such, by creating two smart contracts on ethereum – “the mixer and the relayer registry” – users can opt in to making private transactions on the ethereum blockchain through what is called an anonymity set.

Buterin told CoinDesk in a follow-up email:

“Anonymity set is cryptography speak for ‘set of users that this thing could have come from.’ For example if I sent you 1 ETH and you can’t tell who exactly it was from but you can tell that it came from (myself, Alice, Bob or Charlie), then the anonymity set has size 4. The bigger the anonymity set the more privacy you have.”
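For illustration only, here is a toy, non-cryptographic Python sketch of the anonymity-set idea from the quote above: every participant deposits the same fixed amount, so a later withdrawal to a fresh address could have come from any of them. The names, amounts, and the withdraw helper are invented for this example and have nothing to do with Buterin’s actual contract design.

FIXED_DENOMINATION_ETH = 1.0

deposits = ["me", "Alice", "Bob", "Charlie"]      # everyone deposits exactly 1 ETH
pool_balance = FIXED_DENOMINATION_ETH * len(deposits)

def withdraw(fresh_address, deposits, pool_balance):
    """A withdrawal to a fresh address is unlinkable to a specific depositor;
    an observer only learns it came from one of the current depositors."""
    anonymity_set_size = len(deposits)
    return pool_balance - FIXED_DENOMINATION_ETH, anonymity_set_size

pool_balance, k = withdraw("0xFreshAddress", deposits, pool_balance)
print(f"withdrawal observed; anonymity set size = {k}")   # 4, matching the quote above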

Buterin added that the design does not require any changes to ethereum on a protocol level but could be something implemented by a group of users today.

To this point, Eric Conner, product researcher at blockchain startup Gnosis, noted that a key strength of Buterin’s proposal was precisely its ease of integration.

“Strengths are it gives us a solid privacy solution if users want it,” Conner explained. “The goal is to make a solution that can be easily integrated into current wallets.”

At the same time, the design proposed by Buterin does require users to pay a fee – the gas cost – in order to send private transactions. However, for the use cases Buterin envisions, the fee won’t be a major deterrent for users.

Buterin tweeted about the design:

“The main use case I’m thinking of is a one-off send from one account to another account so you can use applications without linking that account to the one that has all your tokens in it. So even though it is a 2m gas cost, it only needs to be paid once per account, not too bad.”

 

from: https://www.coindesk.com/vitalik-proposes-mixer-to-anonymize-one-off-transactions-on-ethereum

 

***

31 MAY 2019

Bitcoin Blender Cryptocurrency Mixing Service Shuts Itself Down

 

Cryptocurrency mixing service Bitcoin Blender has reportedly willingly shut down after issuing a short notice asking its users to withdraw their funds, tech news outlet BleepingComputer reports on May 30.

Per the report, the message describing the service, which appeared before the shutdown on the homepage of the website (present both on the Tor network, often referred to as the darknet, dark web or deep web, and on the clearnet), was the following:

“We are a hidden service that mixes your bitcoins to remove the link between you and your transactions. This adds an essential layer of anonymity to your online activity to protect against ‘Blockchain Analysis.’”

The shutdown was reportedly announced both on the homepage of the dark web website and on the BitcoinTalk Forums on Monday. Some users reportedly missed the short time window and were not able to withdraw their funds, as one user said on the aforementioned forum:

“I recently came to know about the shutting down process of bitblender, I had much coins saved onto it. I unfortunately missed the withdrawal warning as I was away for past few weeks. I am trying to access http://bitblendervrfkzr.onion/ for last 2~3 hours but I can not succeed.”

At press time, while the Tor mirror is currently inaccessible, the clearnet website is still online.

As Cointelegraph recently reported, Dutch and Luxembourg authorities and Europol shut down one of the three largest cryptocurrency tumblers, BestMixer, after an investigation found that a number of coins from the mixer were used in money laundering.

Ethereum (ETH) co-founder Vitalik Buterin proposed shortly after the shutdown the possibility of creating an on-chain smart contract-based ether mixer.

 

from: https://cointelegraph.com/news/bitcoin-blender-cryptocurrency-mixing-service-shuts-itself-down

 

 

 

USMC: Marines want their phones and tablets to handle classified data

The Marine Common Handheld program will provide secure mobile computing at the tactical edge. (Lance Cpl. Harrison C. Rakhshani/Marine Corps)

 

The Marine Corps has selected several companies to bid on task orders that will allow warfighters to transmit secure on-the-move command-and-control and situational awareness data, including sending classified information through commercial smartphones and tablets.

The infantry community has long wanted to give dismounted Marines wireless commercial devices for reference and tactical sharing. The Marine Common Handheld program will provide the Marine Air Ground Task Force with secure mobile computing at the tactical edge, enabling tactical combat, combat support and combat service support commanders, leaders and key command-and-control nodes to operate by using digital communications.

At least two companies have announced they are eligible for task orders under the indefinite delivery-indefinite quantity contract: PacStar and iGov Technologies. The total value of the contract is $48 million.

iGov was awarded $4.4 million in the first delivery for the program.

In a May 21 announcement, PacStar said its portion of the award consists of components from the company’s Secure Wireless Command Post to be used for network infrastructure, encryption and cybersecurity. Specifically, PacStar’s system will provide secure, encrypted access to classified networks for smart mobile devices in the tactical network.

The Marines requested a modular, man-portable equipment suite allowing units to quickly acquire targets in day, night and near all-weather visibility conditions, as well as control close air support and artillery.

 

from: https://www.c4isrnet.com/c2-comms/2019/05/22/marines-want-their-phones-and-tablets-to-handle-classified-data/

 

 

Reality Mining: How Mass Surveillance Works in Xinjiang, China

(Articles below updated last: 05 JUN 2019)

‘Reverse Engineering’ Police App Reveals Profiling and Monitoring Strategies

(Synopsis; full report further below)

New York, May 2, 2019

Chinese authorities are using a mobile app to carry out illegal mass surveillance and arbitrary detention of Muslims in China’s western Xinjiang region.

The Human Rights Watch report, “China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App,” presents new evidence about the surveillance state in Xinjiang, where the government has subjected 13 million Turkic Muslims to heightened repression as part of its “Strike Hard Campaign against Violent Terrorism.” Between January 2018 and February 2019, Human Rights Watch was able to reverse engineer the mobile app that officials use to connect to the Integrated Joint Operations Platform (IJOP), the Xinjiang policing program that aggregates data about people and flags those deemed potentially threatening. By examining the design of the app, which at the time was publicly available, Human Rights Watch revealed specifically the kinds of behaviors and people this mass surveillance system targets.

“Our research shows, for the first time, that Xinjiang police are using illegally gathered information about people’s completely lawful behavior – and using it against them,” said Maya Wang, senior China researcher at Human Rights Watch. “The Chinese government is monitoring every aspect of people’s lives in Xinjiang, picking out those it mistrusts, and subjecting them to extra scrutiny.”

Human Rights Watch published screenshots from the IJOP app, in the original Chinese and translated into English.

The app prompts government officials to collect a wide array of information from ordinary people in Xinjiang.

From a drop-down menu, officials are prompted to choose the circumstances under which information is being collected.

The information it gathers ranges from people’s blood type to their height, from their “religious atmosphere” to their political affiliation.

The app’s source code also reveals that the police platform targets 36 types of people for data collection. Those include people who have stopped using smart phones, those who fail to “socialize with neighbors,” and those who “collected money or materials for mosques with enthusiasm.”

The IJOP platform tracks everyone in Xinjiang. It monitors people’s movements by tracing their phones, vehicles, and ID cards. It keeps track of people’s use of electricity and gas stations.

Human Rights Watch found that the system and some of the region’s checkpoints work together to form a series of invisible or virtual fences. People’s freedom of movement is restricted to varying degrees depending on the level of threat authorities perceive they pose, determined by factors programmed into the system.

A former Xinjiang resident told Human Rights Watch a week after he was released from arbitrary detention: “I was entering a mall, and an orange alarm went off.” The police came and took him to a police station. “I said to them, ‘I was in a detention center and you guys released me because I was innocent.’… The police told me, ‘Just don’t go to any public places.’… I said, ‘What do I do now? Just stay home?’ He said, ‘Yes, that’s better than this, right?’”

The authorities have programmed the IJOP so that it treats many ordinary and lawful activities as indicators of suspicious behavior. For example:

Officials are prompted to investigate those determined to have used an “unusual” amount of electricity, and can select from a list of reasons for the unusual consumption, such as “purchased new electronics for domestic use” or “doing renovations.”

The system detects when the registered owner of a car is not the same as the person buying gasoline. The app’s source code suggests that nearby officials are required to investigate by logging the reasons for the mismatch and deciding whether the case seems suspicious and requires further police investigation.

The app alerts officials to people who took trips abroad that it considers excessively long, then prompts officials to interrogate the “overdue” person or their relatives and other acquaintances, asking them for details about the travel.

The system alerts officials if it has lost track of someone’s phone, so that they can determine whether the owner’s actions are suspicious and require investigation.

Some of the investigations involve checking people’s phones for any one of the 51 internet tools that are considered suspicious, including WhatsApp, Viber, Telegram, and Virtual Private Networks (VPNs), Human Rights Watch found. The IJOP system also monitors people’s relationships, identifying as suspicious travelling with anyone on a police watch list, for example, or anyone related to someone who has recently obtained a new phone number.

Based on these broad and dubious criteria, the system generates lists of people to be evaluated by officials for detention. Official documents state individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize detentions for people found to be “untrustworthy.” Those people are then interrogated without basic protections. They have no right to legal counsel, and some are tortured or otherwise mistreated, for which they have no effective redress.

The IJOP system was developed by China Electronics Technology Group Corporation (CETC), a major state-owned military contractor in China. The IJOP app was developed by Hebei Far East Communication System Engineering Company (HBFEC), a company that, at the time of the app’s development, was fully owned by CETC.

Under the Strike Hard Campaign, Xinjiang authorities have also collected biometrics, including DNA samples, fingerprints, iris scans, and blood types of all residents in the region ages 12 to 65. The authorities require residents to give voice samples when they apply for passports. All of this data is being entered into centralized, searchable government databases. While Xinjiang’s systems are particularly intrusive, their basic designs are similar to those the police are planning and implementing throughout China.

The Chinese government should immediately shut down the IJOP platform and delete all the data that it has collected from individuals in Xinjiang, Human Rights Watch said. Concerned foreign governments should impose targeted sanctions, such as under the US Global Magnitsky Act, including visa bans and asset freezes, against the Xinjiang Party Secretary, Chen Quanguo, and other senior officials linked to abuses in the Strike Hard Campaign. They should also impose appropriate export control mechanisms to prevent the Chinese government from obtaining technologies used to violate basic rights. United Nations member countries should push for an international fact-finding mission to assess the situation in Xinjiang and report to the UN Human Rights Council.

 

from: https://www.hrw.org/video-photos/interactive/2019/05/02/china-how-mass-surveillance-works-xinjiang

 

***

 

read the full report:

China’s Algorithms of Repression

Reverse Engineering a Xinjiang Police Mass Surveillance App

A Xinjiang Police College webpage shows police officers collecting information from villagers in Kargilik (or Yecheng) County in Kashgar Prefecture, Xinjiang. Source: Xinjiang Police College website

 

Since late 2016, the Chinese government has subjected the 13 million ethnic Uyghurs and other Turkic Muslims in Xinjiang to mass arbitrary detention, forced political indoctrination, restrictions on movement, and religious oppression. Credible estimates indicate that under this heightened repression, up to one million people are being held in “political education” camps. The government’s “Strike Hard Campaign against Violent Terrorism” (Strike Hard Campaign, 严厉打击暴力恐怖活动专项行动) has turned Xinjiang into one of China’s major centers for using innovative technologies for social control.

This report provides a detailed description and analysis of a mobile app that police and other officials use to communicate with the Integrated Joint Operations Platform (IJOP, 一体化联合作战平台), one of the main systems Chinese authorities use for mass surveillance in Xinjiang. Human Rights Watch first reported on the IJOP in February 2018, noting the policing program aggregates data about people and flags to officials those it deems potentially threatening; some of those targeted are detained and sent to political education camps and other facilities. But by “reverse engineering” this mobile app, we now know specifically the kinds of behaviors and people this mass surveillance system targets.

The findings have broader significance, providing an unprecedented window into how mass surveillance actually works in Xinjiang, because the IJOP system is central to a larger ecosystem of social monitoring and control in the region. They also shed light on how mass surveillance functions in China. While Xinjiang’s systems are particularly intrusive, their basic designs are similar to those the police are planning and implementing throughout China.

Many—perhaps all—of the mass surveillance practices described in this report appear to be contrary to Chinese law. They violate the internationally guaranteed rights to privacy, to be presumed innocent until proven guilty, and to freedom of association and movement. Their impact on other rights, such as freedom of expression and religion, is profound.

 

 

Human Rights Watch finds that officials use the IJOP app to fulfill three broad functions: collecting personal information, reporting on activities or circumstances deemed suspicious, and prompting investigations of people the system flags as problematic.

Analysis of the IJOP app reveals that authorities are collecting massive amounts of personal information—from the color of a person’s car to their height down to the precise centimeter—and feeding it into the IJOP central system, linking that data to the person’s national identification card number. Our analysis also shows that Xinjiang authorities consider many forms of lawful, everyday, non-violent behavior—such as “not socializing with neighbors, often avoiding using the front door”—as suspicious. The app also labels the use of 51 network tools as suspicious, including many Virtual Private Networks (VPNs) and encrypted communication tools, such as WhatsApp and Viber.

The IJOP app demonstrates that Chinese authorities consider certain peaceful religious activities as suspicious, such as donating to mosques or preaching the Quran without authorization. But most of the other behaviors the app considers problematic are ethnic- and religion-neutral. Our findings suggest the IJOP system surveils and collects data on everyone in Xinjiang. The system is tracking the movement of people by monitoring the “trajectory” and location data of their phones, ID cards, and vehicles; it is also monitoring the use of electricity and gas stations of everybody in the region. This is consistent with Xinjiang local government statements that emphasize officials must collect data for the IJOP system in a “comprehensive manner” from “everyone in every household.”

When the IJOP system detects irregularities or deviations from what it considers normal, such as when people are using a phone that is not registered to them, when they use more electricity than “normal,” or when they leave the area in which they are registered to live without police permission, the system flags these “micro-clues” to the authorities as suspicious and prompts an investigation.

Another key element of the IJOP system is the monitoring of personal relationships. Authorities seem to consider some of these relationships inherently suspicious. For example, the IJOP app instructs officers to investigate people who are related to people who have obtained a new phone number or who have foreign links.

The authorities have sought to justify mass surveillance in Xinjiang as a means to fight terrorism. While the app instructs officials to check for “terrorism” and “violent audio-visual content” when conducting phone and software checks, these terms are broadly defined under Chinese laws. It also instructs officials to watch out for “adherents of Wahhabism,” a term suggesting an ultra-conservative form of Islamic belief, and “families of those…who detonated [devices] and killed themselves.” But many—if not most—behaviors the IJOP system pays special attention to have no clear relationship to terrorism or extremism. Our analysis of the IJOP system suggests that gathering information to counter genuine terrorism or extremist violence is not a central goal of the system.

The app also scores government officials on their performance in fulfilling tasks and is a tool for higher-level supervisors to assign tasks to, and keep tabs on the performance of, lower-level officials. The IJOP app, in part, aims to control government officials to ensure that they are efficiently carrying out the government’s repressive orders.

In creating the IJOP system, the Chinese government has benefitted from Chinese companies who provide them with technologies. While the Chinese government has primary responsibility for the human rights violations taking place in Xinjiang, these companies also have a responsibility under international law to respect human rights, avoid complicity in abuses, and adequately remedy them when they occur.

As detailed below, the IJOP system and some of the region’s checkpoints work together to form a series of invisible or virtual fences. Authorities describe them as a series of “filters” or “sieves” throughout the region, sifting out undesirable elements. Depending on the level of threat authorities perceive—determined by factors programmed into the IJOP system—, individuals’ freedom of movement is restricted to different degrees. Some are held captive in Xinjiang’s prisons and political education camps; others are subjected to house arrest, not allowed to leave their registered locales, not allowed to enter public places, or not allowed to leave China.

Government control over movement in Xinjiang today bears similarities to the Mao Zedong era (1949-1976), when people were restricted to where they were registered to live and police could detain anyone for venturing outside their locales. After economic liberalization was launched in 1979, most of these controls had become largely obsolete. However, Xinjiang’s modern police state—which uses a combination of technological systems and administrative controls—empowers the authorities to reimpose a Mao-era degree of control, but in a graded manner that also meets the economy’s demands for largely free movement of labor.

The intrusive, massive collection of personal information through the IJOP app helps explain reports by Turkic Muslims in Xinjiang that government officials have asked them or their family members a bewildering array of personal questions. When government agents conduct intrusive visits to Muslims’ homes and offices, for example, they typically ask whether the residents own exercise equipment and how they communicate with families who live abroad; it appears that such officials are fulfilling requirements sent to them through apps such as the IJOP app. The IJOP app does not require government officials to inform the people whose daily lives are pored over and logged the purpose of such intrusive data collection or how their information is being used or stored, much less obtain consent for such data collection.

 

A checkpoint in Turpan, Xinjiang. Some of Xinjiang’s checkpoints are equipped with special machines that, in addition to recognizing people through their ID cards or facial recognition, are also vacuuming up people’s identifying information from their electronic devices. © 2018 Darren Byler

 

The Strike Hard Campaign has shown complete disregard for the rights of Turkic Muslims to be presumed innocent until proven guilty. In Xinjiang, authorities have created a system that considers individuals suspicious based on broad and dubious criteria, and then generates lists of people to be evaluated by officials for detention. Official documents state that individuals “who ought to be taken, should be taken,” suggesting the goal is to maximize the number of people they find “untrustworthy” in detention. Such people are then subjected to police interrogation without basic procedural protections. They have no right to legal counsel, and some are subjected to torture and mistreatment, for which they have no effective redress, as we have documented in our September 2018 report. The result is Chinese authorities, bolstered by technology, arbitrarily and indefinitely detaining Turkic Muslims in Xinjiang en masse for actions and behavior that are not crimes under Chinese law.

And yet Chinese authorities continue to make wildly inaccurate claims that their “sophisticated” systems are keeping Xinjiang safe by “targeting” terrorists “with precision.” In China, the lack of an independent judiciary and free press, coupled with fierce government hostility to independent civil society organizations, means there is no way to hold the government or participating businesses accountable for their actions, including for the devastating consequences these systems inflict on people’s lives.

The Chinese government should immediately shut down the IJOP and delete all the data it has collected from individuals in Xinjiang. It should cease the Strike Hard Campaign, including all compulsory programs aimed at surveilling and controlling Turkic Muslims. All those held in political education camps should be unconditionally released and the camps shut down. The government should also investigate Party Secretary Chen Quanguo and other senior officials implicated in human rights abuses, including violating privacy rights, and grant access to Xinjiang, as requested by the Office of the United Nations High Commissioner for Human Rights and UN human rights experts.

Concerned foreign governments should impose targeted sanctions, such as the US Global Magnitsky Act, including visa bans and asset freezes, against Party Secretary Chen and other senior officials linked to abuses in the Strike Hard Campaign. They should also impose appropriate export control mechanisms to prevent the Chinese government from obtaining technologies used to violate basic rights.

 

from: https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass-surveillance

 

***

Uighurs pray in Xinjiang in 2015. UN chief Antonio Guterres was under pressure from rights groups to publicly confront Beijing over the mass detention of the Muslim minority (AFP Photo/Greg Baker)

China data leak exposes mass surveillance in Xinjiang

Beijing (AFP) – A Chinese technology firm has compiled a range of personal information on 2.6 million people in Xinjiang — from their ethnicity to locations — according to a data leak highlighting the wide extent of surveillance in the restive region.

Xinjiang is where most of China’s Uighur ethnic minority lives, and the region has been under heavy police surveillance in recent years following violent inter-ethnic tensions.

Nearly one million Uighurs and other Turkic language-speaking minorities in Xinjiang are reportedly held in re-education camps, according to a UN panel of experts.

The leak was discovered last week by security researcher Victor Gevers, who found that Chinese tech company SenseNets had stored the records of individuals in an open database “fully accessible to anyone”.

The records included information such as their Chinese ID number, birthday, address, ethnicity, and employer.

The exposed data also linked individuals to GPS coordinates — labelled with descriptions such as “mosque” — captured by tracking devices around the region.

Within a 24-hour period, more than six million locations were saved by SenseNets’ tracking devices, according to Gevers, who works at Dutch online security non-profit GDI Foundation and posted his findings on Twitter.

“You can clearly see they have absolutely no clue about network security,” he told AFP, describing SenseNets’ IT skills as belonging “to the early 90s”.

“Who in their right mind runs a database which is completely open and gives any visitors full administrative rights so then those database records can be manipulated by anyone with an internet connection?” he said.

“It simply does not compute.”

The database had been exposed since last July but was closed last Thursday, after Gevers reported the leak to SenseNets, he said.

SenseNets told AFP it was not accepting media interviews. The Xinjiang government did not immediately respond to AFP’s request for comment.

– Blacklisted –

The demand for high-tech surveillance in the Xinjiang region has led to the placing of surveillance cameras inside mosques, restaurants and other public places, while police checkpoints have been set up across the region.

It has also created lucrative business opportunities for artificial intelligence companies like SenseNets, which specialises in facial recognition.

On its website, the Shenzhen-based firm showcases different applications, from detecting “blacklisted” individuals in a crowd to tracing a suspect’s whereabouts.

The technology firm partners with public security bureaus around the country, as well as US tech firms such as Microsoft and semiconductor company AMD.

In 2016, for instance, it helped local police in southern Guangdong province identify individuals involved in organising an “illegal gathering” — a term that often refers to protests in China.

SenseNets is majority-owned by NetPosa, a public company listed on the Shenzhen stock exchange. On its website, the Beijing-based firm calls itself a “leading manufacturer of video surveillance platforms” and boasted coverage of over 1.5 million roads in China at the end of 2017.

 

from: https://news.yahoo.com/china-data-leak-exposes-mass-surveillance-xinjiang-103612161.html

 

***

 

China’s Xinjiang Region A Surveillance State Unlike Any the World Has Ever Seen

In western China, Beijing is using the most modern means available to control its Uighur minority. Tens of thousands have disappeared into re-education camps. A journey to an eerily quiet region.

Police patrol a night food market near the Id Kah Mosque in Kashgar in China’s Xinjiang Uighur Autonomous Region on June 25, 2017, a day before the Eid al-Fitr holiday. The increasingly strict curbs imposed on the mostly Muslim Uighur population have stifled life in the tense Xinjiang region, where beards are partially banned and no one is allowed to pray in public. Beijing says the restrictions and heavy police presence seek to control the spread of Islamic extremism and separatist movements, but analysts warn that Xinjiang is becoming an open-air prison. (Johannes Eisele/AFP/Getty Images)

 

July 26, 2018

These days, the city of Kashgar in westernmost China feels a bit like Baghdad after the war. The sound of wailing sirens fills the air, armed trucks patrol the streets and fighter jets roar above the city. The few hotels that still host a smattering of tourists are surrounded by high concrete walls. Police in protective vests and helmets direct the traffic with sweeping, bossy gestures, sometimes yelling at those who don’t comply.

But now and then, a ghostly calm descends on the city. Just after noon, when it’s time for Friday prayers, the square in front of the huge Id Kah Mosque lies empty. There’s no muezzin piercing the air, just a gentle buzz on the rare occasion that someone passes through the metal detector at the entrance to the mosque. Dozens of surveillance cameras overlook the square. Security forces, some in uniform and others in plain-clothes, do the rounds of the Old Town with such stealth it’s as if they were trying to read people’s minds.

Journalists are not immune to their attentions. No sooner have we arrived than two police officers insist on sitting down with us for a “talk.” The next day in our hotel, one of them emerges from a room on our floor. When we take a walk through the city in the morning, we’re followed by several plain-clothes officers. Eventually, we’re being tailed by some eight people and three cars, including a black Honda with a covered license plate — apparently the secret police. Occasionally, our minders seem to be leaving us alone, but already awaiting us at the next intersection are the surveillance cameras that reach into every last corner of Kashgar’s inner city. The minute we strike up conversation with anyone, officials appear and start interrogating them.

Before too long, they’ll detain us too. More on that later. But while the authorities in Xinjiang keep close tabs on foreign reporters, their vigilance is nothing compared to their persecution of the Uighur population.

Nowhere in the world, not even in North Korea, is the population monitored as strictly as it is in the Xinjiang Uighur Autonomous Region, an area that is four times the size of Germany and shares borders with eight countries, including Pakistan, Afghanistan, Tajikistan and Kazakhstan.

Oppression has been in place for years, but has worsened massively in recent months. It is targeted primarily at the Uighur minority, a Turkic ethnic group of some 10 million Sunni Muslims considered by Beijing to be a hindrance to the development of a “harmonious society.” A spate of attacks involving Uighur militants has only consolidated this belief.

The Uighurs see themselves as a minority facing cultural, religious and economic discrimination. When Xinjiang was incorporated into the People’s Republic of China in 1949, they comprised roughly 80 percent of the region’s population. Controlled migration to Xinjiang of Han Chinese has reduced this share to 45 percent, and it is mainly these migrants who benefit from the economic boom in the region, which has plentiful supplies of oil, gas and coal.

With the Uighurs protesting, Beijing has tightened its grip and turned Xinjiang into a security state that is extreme even by the standards of China, itself a police state. According to Adrian Zenz, a German expert on Xinjiang, the provincial government has recruited over 90,000 police officers in the last two years alone — twice as many as it recruited in the previous seven years. With around 500 police officers for every 100,000 inhabitants, the police presence will soon be almost as tight as it is in neighboring Tibet.

At the same time, Beijing is equipping the far-western region with state-of-the-art surveillance technology, with cameras illuminating every street all over the region, from the capital Urumqi to the most remote mountain village. Iris scanners and WiFi sniffers are in use in stations, airports and at the ubiquitous checkpoints — tools and programs that allow data traffic from wireless networks to be monitored.

The data is then collated by an “integrated joint operations platform” that also stores further data on the populace — from consumer habits to banking activity, health status and indeed the DNA profile of every single inhabitant of Xinjiang.

 

[Photo: Uighurs, a group of mostly Sunni Muslims, in Kashgar in the far western Chinese region of Xinjiang, Dec. 7, 2015. The Chinese government, dominated by the Han ethnic group, has tightened control and confiscated passports in areas with Uighurs. Adam Dean/The New York Times]

 

Anyone with a potentially suspicious data trail can be detained. The government has built up a grid of hundreds of re-education camps. Tens of thousands of people have disappeared into them in recent months. Zenz estimates the number to be closer to hundreds of thousands. More precise figures are difficult to obtain. Censorship in Xinjiang is the strictest in China and its authorities the most inscrutable.

But a distinct impression forms after a trip through the territory and numerous conversations with its inhabitants, who all want to remain anonymous. Xinjiang, one of the most remote and backward regions in booming China, has become a real-life dystopia. It provides a glimpse of what an authoritarian regime armed with 21st century technology is capable of.

Urumqi: Police, Block Leaders and Snitches

With its ultra-modern skyline, the capital of Xinjiang is home to a population of some 3.5 million, 75 percent of which are Han Chinese. The Uighurs make up the largest minority. Kazakhs, Mongolians and Chinese-speaking Muslim Hui people also live here. “All ethnic groups belong together like the seeds of a pomegranate,” reads a banner overlooking Urumqi’s multilane ring road.

“The truth is, you can’t trust the Uighurs,” says a Han Chinese who used to work for the military. “They act like they’re your friend but they only really stick together.”

Mistrust between these two ethnic groups has been growing for decades. In 2009, tensions erupted in Xinjiang and claimed nearly 200 lives. Most of the dead were Han Chinese. In 2014, knife-wielding Uighur militants killed 31 people in Kunming. Just months later, two cars sped into a busy street market in Urumqi, killing dozens. There have been fewer major attacks since, but rumors abound among the Han Chinese that serious incidents frequently occur in the south of Xinjiang but go unreported.

In a bid to see calm return to the region, Beijing brought in hardliner Chen Quanguo, party boss in Tibet, and put him in charge in Xinjiang. Within two years, he implemented the same policy he enacted in Tibet and installed police stations across the region. These bunker-like, barricaded and heavily guarded buildings now litter every crossroads of the major cities.

Chen also introduced a block leader system not unlike the old German “Blockwarts,” with members of the local Communist Party committee given powers to inspect family homes and interrogate residents about their lives: Who lives here? Who visited? What did you talk about? Even the controllers are getting controlled: Many apartments have bar-code labels on the inside of the front door which the official must scan to prove that he or she carried out the visit.

To optimize social control, neighbors are now also instructed to turn each other in. “They came to me at the start of the year,” says a businessman from Urumqi. “They said: You and your neighbor are now responsible for each other. If either of you does anything unusual, the other will be held responsible.” The businessman says he loves his country. “But I refuse to spy on my neighbor.”

Chen’s predecessor pinned his hopes on an economic upswing in Xinjiang, says a driver who also lives in Urumqi, gesturing at the downtown skyscrapers. He hoped that the more economically comfortable the population would become, the safer the region would be. “No one believes that anymore. The economy continues to grow, but the first priority now is repression.”

Turpan: A Duty To Ramp Up Security

A two-hour drive south of Urumqi is the city-oasis of Turpan, historically located directly on the Silk Road. Over the centuries, temples and mosques were built here by Chinese, Persians, Uighurs, Buddhists, Manichees and Muslims. It’s also a wine-growing region and a place suited to prayer and contemplation. Beyond the oasis are two ancient city ruins. A grand modern museum in the city center charts their history. But anyone who enters must show an ID and there’s a barbed wire fence outside. A dozen surveillance cameras watch the surrounding park, complete with pond and playground.

The museum’s security guards wear helmets and flak jackets. Next to the baggage scanners at the entrance are protective shields used by police for crowd control. It “can all be purchased,” says an assistant in the museum shop. “On the other side of the street.”

Indeed, there is a store selling security equipment just opposite the museum: Helmets and bayonets, surveillance electronics, 12-packs of batons and, above all, protective vests. “300 yuan each,” says a salesperson. That’s about 40 euros. “But they only help against stab wounds. We’ve got bulletproof vests too, but they’re much more expensive. Do you have the paperwork?”

 

Map of the Xinjiang region

 

All this gear is intended for use by security personnel protecting stores, restaurants, museums, hospitals and hotels. Their operators are obligated to ramp up security measures. “There’s just been a new directive,” says one hotel manager in Turpan, holding up a stamped piece of paper. Guests must show IDs when they check in and every time they re-enter — however often they leave and return. More security staff also have to be employed. In Xinjiang, these tightened security measures are designed not only to make the region a safer place but also to create jobs.

“There are 30 men in each bunker,” says a Uighur with suppressed anger as he passes one of the new police stations. “Thirty men, 30 breakfasts, 30 lunches and dinners. Every day. What for? Who’s paying for everything?”

Hotan: ‘Sent To School’

Hotan, a city of 300,000 people, is an oasis in the southwestern fringe of the Taklamakan desert. Attacks have been common there and surveillance is therefore especially prevalent.

When DER SPIEGEL visited Hotan in 2014, it was still possible to meet with a man who told us about the Chinese government’s harsh measures in the surrounding towns. Such a meeting would be out of the question today, the man now informs us through a messaging app. It’s not even possible to drive from one town to another without written permission, much less meet with a foreigner. “Maybe in a few years,” he writes, adding: “Delete this conversation from your phone immediately. Delete everything that could be suspicious.”

There is a modern shopping center at the edge of the city, though barely one in five stores is still open. Most of the others were closed recently due to “security and stability measures,” according to the official seals adhered to the doors. “Everyone was sent to school,” one passerby says quietly while looking around.

“Qu xuexi,” meaning to go or be sent to study, is one of the most common expressions in Xinjiang these days. It is a euphemism for having been taken away and not having been seen or heard from since. The “schools” are re-education centers in which the detainees are being forced to take courses in Chinese and patriotism, without any indictment, due process or a fair hearing.

More than half the people we met along the way during our journey spoke of family members or acquaintances who were “sent to school.” One driver in Hotan talked about his 72-year-old grandfather. A person in Urumqi told the story of his daughter’s professor. An airplane passenger spoke of his best friend.

The stories differ, yet they all contain important parallels. Most of the people affected are men. The arrests usually occur at night or in the early morning. The reasons cited include contacts abroad, too many visits to a mosque or possessing forbidden content on a mobile phone or computer. Relatives of those who are apprehended often don’t hear from them for months. And when they do manage to see them again, it’s never in person but rather via video stream from the prison visitor area.

During a conversation with a rug salesman at the market in Hotan, a woman in a short dress shows up and joins the chat. She says she works for an office nearby, and that she has taken the day off. She offers to translate the conversation with the salesman from Uighur into Chinese. No, she will later say as she walks across the nearly empty market, the store closures have nothing to do with re-education camps. “The employees were sent away for technical training,” she says. Then she politely says goodbye.

A few hours later, we arrive at the train station for the 500-kilometer (311 mile) ride to Kashgar. The station is guarded like a military base. Travelers must pass through three checkpoints and dozens of surveillance cameras to get to the platform.

“Ah,” the ticket inspector says to her colleague as we inquire about our seats. “This is the foreign journalist.” The train is nearly full, with hundreds of passengers aboard. A few compartments away, I later notice the woman in the short dress who offered her services as a translator at the market.

Kashgar: ‘Allergic Images’

The train to Kashgar takes six hours and passes by more oasis towns and settlements, the names of which are synonymous with the Uighur resistance in China: Moyu, Pishan, Shache, Shule. All the train stations are surrounded by checkpoints and barbed wire fences. When the train stops at a platform, the train dispatcher is often accompanied by a police officer with either a billy club or a gun.

Kashgar is more than 2,000 years old. It was one of the most important stations along the old Silk Road. Visitors could once gaze upon one of the best preserved Islamic old cities in central Asia, made almost entirely of mud houses. But the government demolished most of the old buildings and erected a picturesque tourist quarter in its stead.

Unlike in Urumqi and Turpan, most taxis in Kashgar are outfitted with two cameras. One is aimed at the passenger up front while the other points at those in the backseat. “That was imposed over a year ago,” one driver says. “The cameras are directly connected to Public Security. They turn them on and off whenever they want. We have no influence.”

Normal journalistic research in Kashgar is inconceivable. No one wants to talk. A Uighur human rights activist who met up with us four years ago didn’t respond to a single one of our text messages. His phone number is no longer listed. As we later learned, he disappeared months ago. But whether he was thrown into a re-education camp or prison is unknown.

And then the police officers from the beginning of this story show up again and don’t let us out of their sight.

There’s a bit of drama as we buy apricots from a fruit shop. We speak to a woman who’s sitting and reading a book. It’s a language book — the woman is learning to speak Chinese. In the south of Xinjiang, very few Uighurs over the age of 20 speak Chinese well.

We only exchange a few words with the woman, but as we leave the store, three of our minders, including a woman in a red jacket, walk inside and confront her. I go back and begin to film the scene with my phone. Surprised, the government officials stop the conversation, pretend to be shopping and hide their faces.

An hour later, a police officer flanked by several government officials approaches us. The woman in the red coat is with them. She’s a tourist, the officials claim, and she just learned that she was filmed without her permission. According to Chinese law, the footage must now be deleted. The officer escorts us to a police station, where he confiscates the phone and not only deletes the clip from the fruit stand, but also other clips in which our government minders are recognizable. One of the officials warns us against taking any more such “allergic images.” We are then allowed to go.

The surveillance infrastructure in Kashgar is state of the art, but the Chinese government is already working on the next level of control. It wants to introduce a “social credit system” that rates the “trustworthiness” of each citizen, to reward loyalty and punish bad behavior. While the rollout of this system in the densely populated east has been sluggish and spotty, the Uighurs are evidently already subjected to a similar point-based system. This system primarily involves details that could be interesting to the police.

Every family begins with 100 points, one person affected by the system tells us. But anyone with contacts or relatives abroad, especially in Islamic countries like Turkey, Egypt or Malaysia, is punished by losing points. A person with fewer than 60 points is in danger. One wrong word, a prayer or one telephone call too many and they could be sent to “school” in no time.

 

from: https://www.spiegel.de/international/world/china-s-xinjiang-province-a-surveillance-state-unlike-any-the-world-has-ever-seen-a-1220174.html

 

***

First published: 02 August 2018

Funding information
National Natural Science Foundation of China, Grant Number: 61562093, 61772575; China Education & Research Network Innovation Project, Grant Numbers: NGII20170419, NGII20170631

 

Abstract

The discovery of salient facial features is one of the important research tasks in ethnical group face recognition. In this paper, we first construct an ethnical group face dataset including Chinese Uyghur, Tibetan, and Korean subjects. Then, we show that the sparse sensing approach, though effective for general face recognition, no longer works for ethnical group facial recognition if features based on the whole face image are used. This is partly due to the fact that each ethnical group may have its own characteristics manifesting only in specific face regions. Therefore, we analyze the particularities of the three ethnical groups and aim to find common characterizations in some local regions. For this purpose, we first use the facial landmark detector STASM to find important landmarks in a face image; then we use the well-known data mining technique, the mRMR algorithm, to select salient geometric length features from all possible lines connecting any two landmarks. Second, based on these selected salient features, we construct three “T” regions in a face image for ethnical feature representation and show them to be effective areas for ethnicity recognition. Finally, extensive experiments are conducted, and the results reveal that the proposed “T” regions with the extracted features are quite effective for ethnical group facial recognition when the L2-norm is adopted in the sparse sensing approach. In comparison to face recognition, the proposed three “T” regions are also evaluated on the Olivetti Research Laboratory face dataset, and the results show that the “T” regions constructed for ethnicity recognition are not suitable for general face recognition.

This article is categorized under:

  • Algorithmic Development > Structure Discovery
  • Algorithmic Development > Biological Data Mining
  • Fundamental Concepts of Data and Knowledge > Knowledge Representation
  • Technologies > Classification

 

1 INTRODUCTION

The analysis of race, nation, and ethnical groups based on facial images has recently become a popular topic in the face recognition community (Fu, He, & Hou, 2014). With the rapid advance of globalization, face recognition has great application potential in border control, customs checks, and public security. Meanwhile, it is also an important research branch in physical anthropology. Usually, facial features are influenced by genes, environment, society, and other factors in combination. However, the genes of one ethnical group are hardly unique and may include various gene fragments from other ethnical groups, which can lead to similarities of facial features among several ethnicities (Jianwen, Lihua, Lilongguang, & Shourong, 2010). Therefore, it is meaningful to analyze facial attributes for different ethnicities computationally. This work is also helpful to research in anthropology, as it may indicate how facial features evolve (Cunrui, Qingling, Xiaodong, Yuangang, & Zedong, 2018).

This paper focuses on the analysis of some Chinese ethnical groups. First, it is necessary to differentiate three definitions, namely race, nation, and ethnicity (Wade, 2007). Race is a concept formed on the basis of differences in physical structures such as skin, hair, and so on, while nation is a socially oriented concept which refers to a community based on the economics, language, and culture of a given area. Ethnicity describes a group of people who have similar genes, culture, and language in geographically close regions. One can find that race and ethnicity are closely related though they have differences. For example, the Chinese include ethnical groups such as Han, Korean, Jing, Mongolian, Tibetan, Qiang, Miao, Turkic, Jurchen, and so on (Shiyuan, 2002). Based on homologous genes, ethnicities are stable groups and their facial features are regular and exhibit certain patterns. Although race and ethnicity are closely related, the analysis of facial features among ethnicities is more difficult than that among races, as the discrimination of facial features from different ethnicities is harder than that from different races (Fu et al., 2014).

Also, in the cognitive processing of a human face, the brain receives ethnicity or race information prior to age, gender, and expression. As shown in Figure 1, the information of ethnicity or race is processed within 80–120 ms, and the remaining features, such as age and gender, are perceived gradually later (Ito & Bartholow, 2009). This implies that race or ethnicity information is very important in face recognition.

 

Figure 1 — The order of attribute identification in face recognition

 

In recent years, the sparse representation (SR) has found broad application in face recognition, expression recognition, and age estimation (Ortiz, Wright, & Shah, 2013; Ptucha, Tsagkatakis, & Savakis, 2011; Sun, Wang, & Tang, 2015; Wagner et al., 2012), but it is rarely used in ethnical group facial analysis (Fu et al., 2014). Although the SR is highly effective for general face recognition, it is not effective for ethnicity recognition with features from the whole face image, as demonstrated in this paper, especially when the sample size of each ethnicity is small. We believe this phenomenon is due to the fact that the significant facial features of each ethnicity are located only in some typical regions of a face image, and features from other regions reduce the discriminative capability for ethnical group recognition. Thus, we need to find the salient regions for these corresponding features and discover the effective facial features for ethnicity recognition.

 

2 PRELIMINARIES

The past decade has witnessed the increasing popularity of facial ethnical recognition. Much research has been conducted on extracting ethnical facial features using various approaches such as geometrical features, holistic features, local features, and fused features. Chan and Bledsoe (1965) analyzed the facial features of White subjects by using the distances and ratios of facial geometrical features. According to the geometrical relationships of eyes, mouth, and underjaw, Kanade (1977) matched face images in a dataset he constructed himself. Brunelli and Poggio (1993) measured face similarity using facial geometrical features, including nose length, mouth width, and underjaw shape, and the results indicated that geometrical features could be used to identify ethnical groups quite well. Brooks and Gwinn (2010) analyzed the differences between White and Black faces using skin color. According to their proposed skin color model, Gwinn extracted the facial features of Asian and European subjects. Akbari and Mozaffari (2012) explored the relations of facial skin color using South Indian, Australian, and African subjects. Anzures, Pascalis, Quinn, Slater, and Lee (2011) confirmed that skin color is very sensitive to illumination, so skin color is usually fused into combined features for a preliminary classification of people. Since Turk and Pentland (1991) successfully applied principal component analysis (PCA) to facial feature analysis including eyes, nose, and mouth, PCA has been a popular method in face recognition. Based on PCA, Levine (1996) conducted facial feature extraction between Burman and non-Burman subjects. Awwad, Ahmad, and Salameh (2013) performed facial feature analysis for Arabian, Asian, and Caucasian subjects. Considering scale, illumination, and pose, Yan and Zhang (2009) used PCA to analyze facial features on the CMU and UCSD databases. Recently, many deep neural network methods have also been used for face analysis and recognition (Chen, Zhang, Dong, Le, & Rao, 2017; Luan et al., 2018; Trigeorgis, Snape, Kokkinos, & Zafeiriou, 2017; Zhang, Song, & Qi, 2017). Srinivas et al. (2017) focused on predicting ethnicity using a convolutional neural network (CNN) with the Wild East Asian Face Dataset.

Local features can reduce the influence of illumination and occlusion, and they usually perform better than holistic features. For example, wavelets and the local binary pattern (LBP) have shown their effectiveness on the FERET database (Kumar, Berg, Belhumeur, & Nayar, 2011; Salah, Du, & Al-Jawad, 2013). In addition, Fu, Yang, and Hou (2011) analyzed facial expression using embedded topographic independent component analysis (TICA), and the results showed the advantages of local features. In practice, however, combined features, which usually include skin color features, local wavelet features, and holistic features, are used instead of a single type of facial feature. Ding, Huang, Wang, and Chen (2013) described face representations using texture and geometrical shape. Previously, we also combined several different geometric features to represent ethnical groups, such as length, angular, and proportion features (Li et al., 2017). Semantic descriptions of ethnical groups were also constructed based on Axiomatic Fuzzy Set (AFS) theory, and the manifolds of ethnical groups were learned in our recent studies (Duan, Li, Wang, Zhang, & Liu, 2016; Wang, Duan, Liu, Wang, & Li, 2016).

SR has been intensively used in the fields of face recognition, expression analysis, age estimation, and facial image super-resolution (Dian, Fang, & Li, 2017). Wright, Yang, Ganesh, Sastry, and Ma (2009) proposed the sparse representation-based classification (ESRC) approach and brought SR into face recognition. It assumes that a face image can be viewed as a sparse linear representation of other face images of the same person. Aharon, Elad, and Bruckstein (2006) applied the ESRC approach directly to occluded facial expression recognition. The performance was not as good as expected because the identity information of a human face is more prominent than that of expression, which implies that identity features severely affect facial expression recognition. Recently, the SR has been extended to recognition tasks with small sample sizes. Mairal, Leordeanu, Bach, Hebert, and Ponce (2008) proposed an extended ESRC approach, which refines SR by adding general learning in the framework of ESRC. This method improved the performance on the small-sample-size and single-sample face recognition problems by utilizing information extracted from other datasets. Yang, Zhang, Yang and Zhang (2010) proposed the sparse variation dictionary learning (SVDL) approach, in which a projection matrix is obtained from a training set. The SVDL was then embedded in ESRC to conduct face recognition. However, SVDL needs plenty of training data containing all types of images for each class to learn an effective dictionary. Yang, Zhang, Yang, and Niu (2007) proposed the sparse illumination learning and transfer (SILT) approach, which can match a few targets to obtain information from face images with different illuminations. The methods mentioned above improve face recognition performance to different extents and have also achieved significant results on the small-sample-size problem in face recognition. In this paper, we aim to use the SR approach to solve ethnical group recognition with regional features extracted via data mining.

 

3 THE MULTIETHNIC GROUP DATABASE

In order to investigate ethnicity description and recognition, we collected a dataset of facial images of students from different ethnical groups on the campus of Dalian Minzu University, whose ages range from 18 to 22. The database includes three ethnicities, namely Korean, Tibetan, and Uyghur. The students of the three ethnicities come from the regions inhabited by the corresponding ethnical groups, as shown in Figure 2. For each ethnicity, 100 students were selected and their facial images were captured. The capture environment and setup are illustrated in Figure 3: three cameras and three lights, with one person sitting in the center. The images of several participants are shown in Figure 4. Note that only the frontal images are used in this paper, though images with different poses and expressions were also collected.

 

Figure 2 — The living area distribution of three ethnicities

 

Figure 3 — Data capture environment

 

Figure 4 — A part of face dataset

 

4 FACIAL IMAGE PREPROCESSING

Due to the variations in pose, illumination, and camera parameters, it is necessary to align the images before further processing. This mainly involves face alignment and illumination normalization. The aim of face alignment is to correct the face pose and resize the face resolution. The details are as follows (a code sketch is given after the list):

  • The coordinates of the eyes are obtained automatically by an eye detector, and the coordinates of the two eye corners are denoted by E_l and E_r.
  • The face image is rotated so that the line segment connecting E_l and E_r becomes horizontal.
  • The facial area is cropped out according to the ratio of the eye separation to the rest of the face.
  • The cropped face images are resized to a given resolution.
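
The alignment steps above can be sketched in Python with OpenCV and NumPy. This is only a minimal sketch under assumptions: the eye-detection step is taken as given, and the crop ratio, the relative eye height, and the output resolution are illustrative values, not the parameters used in the paper.

```python
# Minimal face-alignment sketch (illustrative; crop ratio, eye placement and
# output size are assumptions, not the paper's exact parameters).
import cv2
import numpy as np

def align_face(image, eye_left, eye_right, out_size=(128, 128), eye_row=0.35):
    """Rotate so the eye line is horizontal, then crop and resize.

    eye_left / eye_right: (x, y) pixel coordinates of the two eye corners.
    """
    (xl, yl), (xr, yr) = eye_left, eye_right
    # Angle of the line connecting the eyes (image coordinates, y pointing down).
    angle = np.degrees(np.arctan2(yr - yl, xr - xl))
    center = ((xl + xr) / 2.0, (yl + yr) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))

    # Crop a square whose size is proportional to the eye separation,
    # with the eyes placed at a fixed relative height (eye_row).
    eye_dist = np.hypot(xr - xl, yr - yl)
    half_w = int(1.2 * eye_dist)                 # assumed crop ratio
    cx, cy = int(center[0]), int(center[1])
    top = max(0, int(cy - eye_row * 2 * half_w))
    left = max(0, cx - half_w)
    crop = rotated[top:top + 2 * half_w, left:left + 2 * half_w]

    return cv2.resize(crop, out_size)
```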

As the skin color and texture of different ethnicities vary considerably, due to genetic and environmental influences, we perform illumination normalization in the face image preprocessing stage. However, illumination normalization affects the skin color. According to the literature (Brooks & Gwinn, 2010), skin color has a poor correlation with facial ethnic attributes. Hence, illumination normalization is applied and the change in skin color is ignored in our study.

Many methods have been proposed to deal with illumination variations (Biglari, Mirzaei, & Ebrahimpour-Komeh, 2013), such as single scale retinex (SSR), multiscale retinex (MSR), and homomorphic filtering (HOMO). In this paper, the SSR is used to normalize the illumination variations for simplicity. Assuming that the light is smoothly distributed over space, the brightness of an object depends on the lighting of the environment and the reflection of the object's surface, as shown in formula (1),

S(x, y) = R(x, y) · L(x, y)  (1)

where S(x, y) is the facial image captured by the camera, L(x, y) indicates the lighting component, and R(x, y) represents the reflection component of the object. In order to separate the reflection and lighting components, a logarithm operation is applied as follows:

log R(x, y) = log S(x, y) − log L(x, y)  (2)

where R(x, y) corresponds to the high-frequency components of the image and L(x, y) represents the low-frequency components. In order to obtain R(x, y), a Gaussian filter (Hyvärinen, Hoyer, & Oja, 1999) is applied to estimate L(x, y) as follows.

L(x, y) = S(x, y) * G(x, y)  (3)

where * denotes convolution, G(x, y) = K exp(−(x^2 + y^2)/c^2) is the Gaussian function, c is the scale of the Gaussian function, and K is a constant chosen so that ∬G(x, y)dxdy = 1.

As shown in Figure 5, the experimental results demonstrate that SSR not only performs well for illumination normalization but is also computationally fast.
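
A minimal sketch of the SSR normalization in formulas (1)–(3), using NumPy and SciPy; the Gaussian scale (sigma) and the final rescaling to 0–255 are illustrative assumptions rather than the paper's exact settings.

```python
# Single-scale retinex (SSR) sketch: R = log S - log(S * G), where * is
# Gaussian smoothing. The scale (sigma) is an illustrative assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=15.0):
    """image: 2-D grayscale array; returns the reflectance estimate R."""
    S = image.astype(np.float64) + 1.0          # avoid log(0)
    L = gaussian_filter(S, sigma=sigma)         # estimate of the lighting L(x, y)
    R = np.log(S) - np.log(L)                   # formula (2): log R = log S - log L
    # Stretch R back to the 0-255 range for display / feature extraction.
    R = (R - R.min()) / (R.max() - R.min() + 1e-12)
    return (R * 255).astype(np.uint8)
```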

 

Figure 5 — The results of face image using single scale retinex

 

5 THE ETHNICITY RECOGNITION USING SPARSE SENSING

5.1 The kNN‐based fast sparse sensing for ethnicity recognition

The SR for facial ethnicity recognition consists of two steps. First, the K-nearest neighbors of a sample are selected from the whole training set for each group. Second, the sample is described and categorized by the selected K-nearest neighbors via SR. The testing sample is described as a linear combination of its K-nearest neighbors (Waqas, Yi, & Zhang, 2013).

The proposed fast SR algorithm consists of three steps: K-nearest neighbor identification, linear representation, and classification. In K-nearest neighbor identification, the K-nearest neighbors are identified and the corresponding labels are recorded. If a training sample belongs to the jth (j = 1, 2, ⋯, L) class, j is taken as its label. Suppose {x_1, ⋯, x_K} are the K-nearest neighbors of a testing sample y; their labels form a new set C = {c_1, c_2, ⋯, c_d}. The number of elements d in this set is at most min(L, K). That is to say, C is a subset of {1, 2, ⋯, L}.

The testing sample y could be represented as a linear combination of the K‐nearest neighbors

y = a_1 x_1 + a_2 x_2 + ⋯ + a_K x_K  (4)

where a_i (i = 1, 2, ⋯, K) are the coefficients. Formula (4) can be rewritten as follows:

y = XA  (5)

where A = [a_1, ⋯, a_K]^T and X = [x_1, ⋯, x_K]. Our aim is to minimize the error between XA and y, subject to the norm of A also being minimal. This optimization problem can be described by a Lagrangian function,

L(A) = ||y − XA||^2 + μ||A||^2  (6)

where μ is a positive constant. According to the Lagrangian method, A should satisfy ∂L(A)/∂A = 0. Therefore, the optimal solution can be obtained as follows:

A = (X^T X + μI)^(−1) X^T y  (7)

where I is an identity matrix. The class label of a testing sample is estimated according to the weight contributions of its K-nearest neighbors in the SR (Wang et al., 2016). Specifically, if the subset {x_s, ⋯, x_t} of the K-nearest neighbors of a testing sample belongs to the rth (r ∈ C) class, the contribution of the rth class is described as follows:

g_r = a_s x_s + ⋯ + a_t x_t  (8)

The error between g_r and the testing sample is given in formula (9).

e_r = ||y − g_r||^2  (9)

The smaller the value of e_r = ||y − g_r||^2, the greater the influence of the rth class. The testing sample y is then classified into the class which has the greatest contribution. In addition, if none of the K-nearest neighbors is from the rth class, then r does not belong to C, and hence the SR will not classify the sample y into the rth class. Two kinds of similarity measures are usually used in the SR: one is the Euclidean distance (Aharon et al., 2006), the other is the cosine measure,

s(y, x_i) = (y^T x_i) / (||y|| ||x_i||)  (10)

where s(y, x_i) represents the similarity between y and the ith neighbor.
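
The two-step scheme above, K-nearest neighbor selection followed by the regularized linear representation of formulas (4)–(9), can be sketched as follows. The variable names, the Euclidean neighbor search, and the value of μ are illustrative assumptions, not taken from the paper.

```python
# Sketch of the kNN-based fast sparse sensing classifier (formulas (4)-(9)).
# Rows of X_train are feature vectors; labels is a 1-D array of class labels.
import numpy as np

def knn_sparse_classify(y, X_train, labels, K=90, mu=0.01):
    """Return the predicted class of the test vector y."""
    # Step 1: pick the K nearest neighbors (Euclidean distance).
    dists = np.linalg.norm(X_train - y, axis=1)
    idx = np.argsort(dists)[:K]
    X = X_train[idx].T                       # columns are the neighbors x_1..x_K
    nb_labels = labels[idx]

    # Step 2: regularized representation, A = (X^T X + mu I)^-1 X^T y   (7)
    A = np.linalg.solve(X.T @ X + mu * np.eye(K), X.T @ y)

    # Step 3: per-class contribution g_r and residual e_r = ||y - g_r||  (8)-(9)
    best_class, best_err = None, np.inf
    for r in np.unique(nb_labels):
        mask = (nb_labels == r)
        g_r = X[:, mask] @ A[mask]
        e_r = np.linalg.norm(y - g_r)
        if e_r < best_err:
            best_class, best_err = r, e_r
    return best_class
```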

5.2 Holistic ethnical facial features based on SR

In this paper, the “O” region represents the whole face. We first implement SR based on holistic facial features, which means that the whole image of a testing sample y is approximated by a linear combination of all the training images. The class label of the testing sample y is then assigned based on the difference between y and the weighted combination of the samples from each class. Let A = (A_1, A_2, ⋯, A_n) denote n training samples; the testing sample y can be approximated as a linear combination of all training samples

y = β_1 A_1 + β_2 A_2 + ⋯ + β_n A_n  (11)

Without loss of generality, the formula is expressed as follows:

y = Aβ

where β = (β_1, β_2, ⋯, β_n)^T and A = (A_1, A_2, ⋯, A_n).

If A^T A is nonsingular, the coefficient vector β can be obtained by β = (A^T A)^(−1) A^T y. Otherwise, if A^T A is singular, β can be calculated by β = (A^T A + γI)^(−1) A^T y, where γ is a small positive number and I is an identity matrix.

It can be seen from formula (11) that each training sample contributes to the representation of the testing sample, and the contribution of the ith training sample is β_i A_i. Suppose the training samples from the kth class are A_s, ⋯, A_t; the total contribution of these samples to the testing sample y is denoted by g_k = β_s A_s + ⋯ + β_t A_t, and the error of the SR can be calculated as e_k = ||y − g_k||^2. A smaller error value implies a greater contribution from the samples of the kth class.
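
A short sketch of the holistic representation of formula (11), including the singular/nonsingular distinction described above; the value of γ and the rank test used here are illustrative assumptions.

```python
# Sketch of the holistic representation (formula (11)): every column of A is
# one flattened training image, y is the flattened test image.
import numpy as np

def holistic_coefficients(A, y, gamma=1e-3):
    """Return beta = (A^T A)^-1 A^T y, falling back to the regularized
    form (A^T A + gamma I)^-1 A^T y when A^T A is singular."""
    G = A.T @ A
    if np.linalg.matrix_rank(G) < G.shape[0]:   # A^T A singular
        G = G + gamma * np.eye(G.shape[0])
    return np.linalg.solve(G, A.T @ y)
```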

We now conduct a simple experiment for ethnicity recognition based on the holistic facial features of the dataset captured in this paper. The experimental results are shown in Table 1: the accuracy of ethnicity recognition is only 45% with 90% of the data used for training and 10% for testing in 10-fold experiments. It can be seen that the ethnicity recognition accuracy based on holistic facial features is quite low. In fact, an ethnic face is represented by a sparse combination of various faces. However, one particular problem of ethnicity recognition is that the ethnic attributes come from various individuals, and the facial attributes of individuals from different ethnicities may also contribute significantly. In addition, holistic face features may contain features that are insensitive to ethnicity classification, since the facial ethnic differences are mainly conveyed by local features. Hence, it is important to figure out the local facial regions that are related to ethnic differences, and to investigate whether the sparsity of such local features is useful for facial feature representation in terms of ethnicity recognition. We investigate this local feature extraction issue via a data mining approach in the next section.

Table 1. The ethnicity recognition based on holistic features

5.3 Salient ethnic facial region extraction

In this section, salient ethnic facial regions are investigated for the three ethnicities. Since geometric features are often used in anthropometry, this work also analyzes salient ethnic facial regions according to the geometrical relationships of key points on the facial components. Here, we use the facial landmark detector STASM (Milborrow & Nicolls, 2014) to extract 77 landmarks, as shown in Figure 6.

 

Figure 6 — Landmarks obtained using STASM

 

Based on these 77 landmarks, we can construct 2,926 facial features by connecting any two landmarks. Considering the redundancy and relevance of the obtained line features, the well-known data mining technique of minimal-redundancy-maximal-relevance (mRMR) feature selection (Ding & Peng, 2005; Peng, Long, & Ding, 2005) is applied to select the most salient features. Based on mutual information, mRMR aims to select the significant features with minimal redundancy and maximal relevance, using Equations 12 and 13,

max D(F, c),  D = (1/|F|) Σ_{f_i ∈ F} I(f_i; c)  (12)
min R(F),  R = (1/|F|^2) Σ_{f_i, f_j ∈ F} I(f_i; f_j)   (13)

where F is the facial geometrical feature subset, c is the ethnicity class label, and f_i is the ith feature of F. I(f_i; c) is the mutual information between feature f_i and class c, and I(f_i; f_j) is the mutual information between f_i and f_j. The mutual information is calculated by Equation 14, and the mRMR selection criterion is given by Equation 15.

I(x; y) = ∬ p(x, y) log [ p(x, y) / (p(x) p(y)) ] dx dy  (14)
max Φ(D, R),  Φ = D − R   (15)

Based on these 2,926 facial features, 195 salient length features are then selected to represent the ethnic attributes of the three ethnicities using the mRMR approach. These features are divided into four parts and compared with anthropological features (Farkas, 1994). As shown in Figure 7, these four parts of the features are plotted on facial images; Figures 7a, b, c, and d show 19, 37, 63, and 65 length features, respectively. One can see that the most heavily weighted features concentrate on the nose, eyes, and eyebrows, and these feature regions together form a “T” region, which can be seen clearly in Figure 7a and c. As the weights decrease, the important region gradually extends to the mouth area. This observation shows that this “T” region is ethnically salient, as demonstrated in the next section.
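
The construction of the 2,926 pairwise length features and an mRMR-style selection can be sketched as follows. The greedy search, the 10-bin discretization, and the use of scikit-learn's mutual_info_score are simplifying assumptions rather than the authors' exact implementation.

```python
# Sketch: build 2,926 pairwise length features from 77 landmarks, then select
# salient ones with a greedy relevance-minus-redundancy (mRMR-style) criterion.
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

def length_features(landmarks):
    """landmarks: (77, 2) array -> (2926,) vector of pairwise distances."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def greedy_mrmr(F, c, n_select=195, bins=10):
    """F: (n_samples, n_features) length features, c: ethnicity labels."""
    # Discretize each feature so mutual_info_score can be applied directly.
    Fd = np.stack([np.digitize(F[:, j], np.histogram_bin_edges(F[:, j], bins))
                   for j in range(F.shape[1])], axis=1)
    relevance = np.array([mutual_info_score(c, Fd[:, j])
                          for j in range(Fd.shape[1])])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best_j, best_score = None, -np.inf
        for j in range(Fd.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Fd[:, j], Fd[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy        # Phi = D - R  (15)
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```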

Figure 7 — The various weights of the length facial features

5.4 Local facial feature based ethnicity SR

From the analysis in the last section, the ethnically salient “T” regions are first identified according to the analysis of facial geometrical features, and these “T” regions are then used to recognize ethnicity. To deal with the various situations described below, we propose three types of “T” regions with different shapes, denoted “T1,” “T2,” and “T3,” which contain different facial components. As shown in Figure 8, “T1” includes the eyes and nose, “T2” contains the eyebrows, eyes, and nose, and “T3” contains the eyebrows, eyes, nose, and mouth. Furthermore, the images of the “T” regions are encoded according to a zigzag rule for feature extraction in ethnicity recognition (a sketch of this scan follows below). The “O” region represents the whole face image, as illustrated in Figure 9.
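
The zigzag encoding mentioned above can be sketched as follows, assuming the familiar JPEG-style traversal of anti-diagonals; the paper's exact scan order may differ.

```python
# Sketch of a zigzag scan: flatten a 2-D "T"-region patch into a 1-D feature
# vector by walking the anti-diagonals, alternating direction (JPEG-style).
import numpy as np

def zigzag_encode(patch):
    """patch: 2-D array -> 1-D vector in zigzag order."""
    h, w = patch.shape
    out = []
    for s in range(h + w - 1):                   # s indexes the anti-diagonal i + j = s
        diag = [(i, s - i) for i in range(h) if 0 <= s - i < w]
        if s % 2 == 0:
            diag = diag[::-1]                    # reverse every other diagonal
        out.extend(patch[i, j] for i, j in diag)
    return np.array(out)
```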

 

Figure 8 — Facial feature region of various weights
Figure 9 — The image coding of “T” region

In the following analysis, the feature vector of a “T” region is extracted to represent the ethnic attributes. The K-nearest neighbors of a testing sample are selected based on the features from the corresponding “T” region. The SR approach described in the previous section is then applied to describe the ethnicity attributes located in these “T” regions. The detailed algorithm can be described as follows:

  • The “T” regions are identified based on landmarks obtained by STASM.
  • The facial images are divided into a training set X = [x_1, x_2, ⋯, x_m] and a testing set Y = [y_1, y_2, ⋯, y_n], where x_i and y_i are the feature vectors extracted from the corresponding “T” regions.
  • The K‐nearest neighbors of each testing sample are selected, and the training labels are recorded.
  • The testing sample yY is represented by a linear combination
min_A ||y − XA||^2 + μ||A||^2, where X collects the K-nearest neighbors of y  (16)
  • According to Lagrange optimization, problem (16) can be solved; the optimal solution is given by:
A = (X^T X + μI)^(−1) X^T y   (17)
  • The contribution of every class could be calculated:
g_r = a_s x_s + ⋯ + a_t x_t  (18)
  • The error between y and g_r is computed, and the class label of y is then identified according to the error e_r.
e_r = ||y − g_r||  (19)

where we can select different norms in (19).

In summary, in order to represent each ethnical group effectively, we use the STASM facial landmark detector to extract 77 landmarks in each facial image and then construct 2,926 geometrical facial features. As the number of these features is too large, we use the data mining approach mRMR to select salient geometrical features for the three ethnical groups, and 195 salient features are selected. One can find that these salient features are mainly located in a “T” region, and three types of “T” regions are then constructed. We believe the features in these “T” regions are more important for ethnical group recognition. In the next section, we demonstrate the effectiveness of the proposed framework.

 

6 EXPERIMENTAL RESULTS

In this section, we conduct several experiments on the face images of Uyghur, Tibetan, and Korean subjects. Four types of regions, that is, “O,” “T1,” “T2,” and “T3,” are established to extract ethnically salient features using the data mining technique mRMR. The captured face images are first preprocessed, in which the faces are aligned and the illumination is normalized. The effectiveness of the extracted features is then verified using several different norms.

The performance of the ethnicity recognition models on the different “T” regions is evaluated by several criteria, which include the true positive rate (TPR), false positive rate (FPR), Precision, Recall, and F-measure defined in (Anselmo, 1991; Bouckaert et al., 2010; Han, Pei, & Kamber, 2011); a brief computational sketch of these criteria follows. We then conduct experiments on the different “T” regions and validate the effectiveness of the proposed approach.
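
For reference, these criteria can be computed per class (one-vs-rest) from a confusion matrix, as in this illustrative sketch; it is not the evaluation code used by the authors.

```python
# Sketch: TPR, FPR, Precision, Recall, F-measure for one class treated as
# "positive" (one-vs-rest). Purely illustrative.
import numpy as np

def class_metrics(y_true, y_pred, positive):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    tn = np.sum((y_pred != positive) & (y_true != positive))
    tpr = recall = tp / (tp + fn)            # true positive rate = recall
    fpr = fp / (fp + tn)                     # false positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"TPR": tpr, "FPR": fpr, "Precision": precision,
            "Recall": recall, "F-measure": f_measure}
```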

6.1 The effectiveness of the three “T” regions

In this section, the neighbor number K is set to 90, and the L2 norm is applied in the SR. Table 2 lists the results obtained for the different types of “T” regions. It can be seen that the result for the “T3” region is the best among all regions, which indicates that the “T” region covering the eyes, eyebrows, nose, and mouth is more effective for ethnicity recognition than the other “T” regions. Meanwhile, the results show that the “O” region is the worst of all regions, because the holistic facial images contain too much identity information rather than ethnic group features. Therefore, the SR based on local features is an effective approach to facial ethnic recognition. We believe the “T” regions identified via data mining play the key role.

Table 2. The results based on various “T” regions
FPR, false positive rate; TPR, true positive rate.

 

6.2 Parameter selection

In order to study the influence of norms and K‐neighbors, a series of different norms and K‐neighbors are selected to identify ethnicities, and the recognition performances are compared based on the features from different facial regions. Specifically, the norms of L0, L1, and L2 are adopted to evaluate recognition performance and the accuracy curves are plotted in Figure 10, Figure 11, and Figure 12, respectively.

 

Figure 10 — The accuracy based on L0 norm

 

Figure 11 — The accuracy based on L1 norm

 

Figure 12 — The accuracy based on L2 norm

 

Figure 10 shows the ethnicity recognition accuracy when the performance is evaluated using the L0 norm. It can be seen that the best accuracy is achieved when the number of neighbors K equals 77 and the features are extracted from the “T3” region.

The results obtained based on the L1 norm are shown in Figure 11. The recognition rate reaches its peak with the features from the “T3” region when the neighbor number is 50. As the neighbor number increases from 15 to 50, the accuracy achieved with the “T3” region increases gradually, with fluctuations. This suggests that the “T3” region is the salient region for ethnic feature extraction and ethnicity recognition.

Figure 12 presents the recognition results using the L2 norm. It can be seen that the best recognition performance is achieved with the features extracted from the “T3” region when the neighbor number K is 80. Compared with the L0 and L1 norms shown in Figures 10 and 11, the highest accuracy is achieved using the L2 norm, which reveals that the L2 norm is more appropriate than the other two norms for facial ethnicity recognition. In addition, the experimental results show that the performance obtained with the features from the “T3” region is better than that of “T1” and “T2,” which means that identifying the ethnically salient region can improve the recognition rate significantly.

In summary, one can see that the proposed “T3” region, in combination with the L2 norm in the SR, is the most effective region for ethnicity recognition. Next, we present a software platform for the visualization of ethnic facial feature description.

6.3 Facial ethnic feature description

Based on previous analysis, this work attempts to describe the ethnic attributes according to the contribution of testing samples. As shown in Figure 13, a facial ethnicity evaluation system is constructed based on the SR coefficients. The k‐nearest neighbors of a testing image on the left are determined based on the feature vector extracted from the “T” region. The SR coefficient (coe), the distance from the testing image to its k‐nearest neighbors (dis), and the ethnicity identity of the testing image (type) are then obtained accordingly.

 

Figure 13 — The software for face ethnic analysis

 

The error distance (err) from the testing sample to its k-nearest neighbors can serve as an important reference for facial ethnic description. As illustrated in Figure 14, the error distance err of a testing sample to the Uyghur male class is 0.01992, which means the most likely ethnic category of this sample is Uyghur. As can be seen from Figure 13, the k-nearest neighbors of this testing sample belong to several different ethnicities; its ethnicity can nevertheless be estimated precisely based on the error distance err. Therefore, the ethnicity recognition depends on the ethnic features in the constructed “T” region. Note that the error distance in the software platform is normalized for ease of use.

 

Figure 14 — The results of classifiers

 

6.4 The investigation of the “T” region for face recognition

In the previous sections, the facial “T” region has shown its effectiveness in ethnicity recognition. Thus, it is natural to ask whether it is also useful for face recognition. In order to answer this question, some face recognition experiments are conducted on the Olivetti Research Laboratory (ORL) database, shown in Figure 15 (Samaria & Harter, 1994). The ORL database includes 400 face images of 40 persons with minor pose variations, and it has been used to evaluate face recognition algorithms for decades. Since it lacks pattern variation, the recognition rates of many face recognition systems on it have exceeded 90%.

 

Figure 15 — Olivetti research laboratory face dataset

 

The facial images of the ORL database are divided into training and testing sets, and feature vectors are extracted from the “T3” and “O” regions separately. The fast sparse classification based on k-nearest neighbors is again used to perform face recognition, and the results are shown in Figure 16 and Table 3. It can be seen clearly that the recognition rate obtained with the holistic face (“O” region) is much better than that obtained with the local region (“T3” region). When k = 90, the recognition rate based on the “T3” region is only 63%, while the accuracy based on the holistic face reaches 90%. In fact, the performance achieved with the “T3” region never exceeds 70%, no matter how many neighbors are taken into consideration.

 

Figure 16 — Use of the Olivetti Research Laboratory database for testing

 

Table 3. Different T-zone recognition results for the Olivetti Research Laboratory dataset

 

FPR, false positive rate; TPR, true positive rate.

 

The experimental results indicate that the constructed “T” region is only suitable for ethnicity identification and not for face recognition. This is mainly caused by the differences in the samples referenced in the SR. The referenced samples for ethnicity recognition consist of different individuals, while the referenced samples for face recognition come from one individual with different poses and expressions. Moreover, the ethnically salient information concentrates in the “T” regions, but the information enclosed in the “T” regions is not enough for general face recognition. The facial features extracted from the “T” regions are thus more suitable for ethnicity recognition, since unrelated information has been filtered out.

 

7 CONCLUSIONS

This paper aims to extract salient features via data mining for ethnicity recognition. First, the features extracted from holistic facial images are used for ethnicity recognition, and the recognition rate is quite low. This is because facial ethnic features differ from the features extracted for face recognition. Consequently, this work continues by extracting salient regions for ethnicity recognition. For this purpose, 77 facial landmarks are detected to construct features for ethnicity representation according to anthropometry. The distance between each pair of landmarks is used to form a feature set, 2,926 length features are produced for ethnical group description, and 199 features are selected after mRMR feature selection. Second, based on the features selected using the data mining technique mRMR, three “T” regions including the most salient ethnic features are constructed. Experiments are conducted with the features extracted from the holistic face and the “T1,” “T2,” and “T3” regions, and the results show that the features from the “T3” region achieve the best performance when the L2 norm is adopted. Third, in order to verify the suitability of the “T” region for face recognition, facial features are extracted from the “T” region of the ORL dataset and the fast sparse classification approach based on k-nearest neighbors is used to conduct face recognition; the results suggest that the proposed “T” region is not suitable for face recognition.

The contributions of this paper are as follows: (a) The holistic facial features are shown to be ineffective for ethnicity analysis and recognition based on sparse sensing recognition. (b) The ethnically salient “T” region is proposed for ethnic attribute description via a data mining technique. (c) The effectiveness of the “T” region for ethnicity classification is verified. (d) The application of the “T” region is investigated; it is suitable for ethnicity recognition but not for face recognition. In addition, this paper proposes a new approach for extracting facial ethnic features based on sparse description via data mining. The testing samples are sparsely represented and then assigned their ethnic category accurately even under small-sample-size circumstances. Meanwhile, a framework for facial feature analysis is proposed, that is, a framework for salient area search based on data-driven feature selection, which can improve the effectiveness of attribute discrimination using SR.

In the future, we will use approaches other than the SR to investigate this ethnicity recognition problem, which, as shown in this paper, is different from general face recognition. One possible direction is to extract the geometric features in the identified “T” region and use deep learning (Pathirage, Li, & Liu, 2017) or stochastic configuration neural networks for classification (Wang & Li, 2017).

 

ACKNOWLEDGMENTS

We thank the two reviewers and the Associate Editor for their constructive comments; the quality of this paper has been significantly improved after careful revision based on their comments. This work is supported by the National Natural Science Foundation of China under grant numbers 61562093 and 61772575 and the China Education & Research Network Innovation projects under grant numbers NGII20170419 and NGII20170631.

 

from: https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1278

 

 

 

Dependency on Centralized Services: Massive Outage at SalesForce.com

Salesforce Woes Linger as Admins Clean Up After Service Outage

An accidental permissions snafu caused a massive outage for all Salesforce customers that continues to affect some businesses.

After a massive service outage on Friday, software-as-a-service giant Salesforce restored partial access to its affected customers over the weekend, while admins continued with cleanup into Monday.

The outage was brought on by a scripting error that affected all Pardot marketing automation software clients; a database script that Salesforce pushed out accidentally gave users broader access to data than their permissions levels should allow.

In response, Salesforce on Friday cut off all access to all Salesforce software clients, not just Pardot clients, while it triaged the situation – leading to a bit of a meltdown among users. Twitter hashtags #salesforcedown and #permissiongeddon began trending as users took to social media to complain.

“my salesforce rollout was scheduled at 2pm today. 300 folks on a call to do training with me. oops @salesforce #salesforcedown,” tweeted one user.

“#Salesforce #outage means that I can’t access any meaningful records, or properly do my job. Now that most tabs have disappeared like 1/2 the universe in @Avengers, please bring them back, Tony Stark of @salesforce!” tweeted another.

 

https://twitter.com/JarinChu/status/1129391714911772672

 

“To all our @salesforce customers, please be aware that we are experiencing a major issue with our service and apologize for the impact it is having on you,” Salesforce co-founder and CTO Parker Harris tweeted on Friday. “Please know that we have all hands on this issue and are resolving as quickly as possible.”

Some users saw an upside to the situation:

All the Sales teams in headed to the bar once they heard @salesforce was down.

 

https://twitter.com/huskysize/status/1129471433749479426

 

Over the weekend, the cloud app provider said that access was restored to everyone not affected by the database script, so regular Salesforce.com users were back in business. However, for companies using the affected Pardot software, only system administrators were given access to their accounts – so they could help rebuild user profiles and restore user permissions. According to the incident status page, some regular users remained incapable of logging into the system as of Monday morning as administrators continued the restoration process.

That process could be onerous for many: Salesforce said that if there’s a valid backup of their profiles and user permission data in the service’s sandbox, admins can simply deploy that. However, if there’s no valid backup, admins will need to manually update the profile and permission settings. Salesforce noted in an update Monday that it has deployed automated provisioning to restore permissions where possible.

Balaji Parimi, CEO at CloudKnox, told Threatpost that admins should take care when restoring the settings.

“Enterprises need to understand that their biggest security risk is not from the attackers targeting them or even malicious insiders – it’s identities with over-provisioned privileges,” he said via email. “Security teams need to make sure that privileges with massive powers are restricted to a small number of properly trained personnel. Until companies better understand which identities have the privileges that can lead to these types of accidents and proactively manage those privileges to minimize their risk exposure, they’ll be vulnerable to devastating incidents like the one we’re seeing with Salesforce right now.”

 

from: https://threatpost.com/salesforce-clean-up-service-outage/144891/

 

 

PowerPoint presentations should be forbidden at meetings – tell a story instead

ethos, logos, and pathos
the three key elements to persuade

Jeff Bezos has banned PowerPoint presentations from his meetings, considering them a waste of time. The alternative he has put in their place, however, is remarkably useful and effective. Do you want to know what it is?

In his annual letter to employees, Jeff Bezos, the CEO of Amazon, recalled that PowerPoint is prohibited in any meeting. That does not mean, however, that no form of presentation can be used in company meetings.

In fact, the founder of the most powerful ecommerce company in the world offers an alternative so that ideas and strategies are understood more clearly by attendees: narrative memos or essays of at most six pages.

“Instead of wasting time listening to one person while the rest of the audience is silent, it is more efficient to spend 30 minutes reading a six-page essay that explains everything you want to say at the meeting. A narrative structure is easier for human beings to understand than general ideas summarized in bullet points,” explains the CEO.

But why? Inc has compiled the three reasons why Bezos’s idea of replacing PowerPoint decks with essays is brilliant.

1. Our brains are designed to understand stories

The problem with PowerPoint slides is that, in general, they do not tell a story, and our brain is designed to understand narratives. “When our ancestors discovered fire, they gathered around it to cook and to tell stories. The narrative served to pass on anecdotes or warn of dangers that could threaten the tribe,” explains Carmine Gallo, author of Five Stars: The Communication Secrets to Get from Good to Great.

According to anthropologists, for us the world “is a story”, and that matters especially in leadership roles: recounting events as a narrative is essential because people remember them better in that form.

2. Persuasive stories

Aristotle is the father of persuasion, and more than 2000 years ago he revealed the three key elements to persuade: ethos, logos and pathos.

  • The first one refers to character and credibility;
  • the second appeals to logic (an argument must have a reason);
  • while the last one has to do with emotion.

Therefore, the first two have no meaning without the last one.

In fact, the great orators of history combined rational and emotional elements in their speeches (think of Martin Luther King’s famous “I Have a Dream”).

In addition, according to a series of studies by neurologists, emotion is the best way to create synapses between our neurons. In other words, if you want to communicate an idea, it is best to tell a story. “I love telling anecdotes at meetings. It’s very effective,” says Bezos.

3. Bullet points do not work

Bullet points are not useful for anyone; in fact, companies like Google, Virgin and Tesla do not use them.

The brain is not prepared to retain information in the form of lists. Instead, a story, a photo or an idea is easier to retain.

 

from: https://www.ticbeat.com/empresa-b2b/jeff-bezos-prohibe-usar-power-point-en-sus-reuniones-y-su-alternativa-es-brillante/

 

 

Intel MDS Vulnerabilities: ZombieLoad, RIDL (Rogue In-Flight Data Load), Fallout, and Store-to-Leak Forwarding – affect almost every Intel chip since 2011

Tech giants have published security advisories and blog posts in response to the Microarchitectural Data Sampling (MDS) vulnerabilities affecting most Intel processors made in the last decade.

The remedy: microcode updates, which, like previous patches, will have an impact on processor performance.

The vulnerabilities are related to speculative execution and they can be exploited for side-channel attacks. Researchers started reporting the flaws to Intel in June 2018, but the chip maker said its own researchers found them first. Nevertheless, in addition to its own employees, Intel has credited researchers from several universities and companies for the security holes.

Researchers have named the new attack methods:

  • ZombieLoad
  • RIDL (Rogue In-Flight Data Load)
  • Fallout
  • Store-to-Leak Forwarding.

Intel has assigned them the following names and CVEs:

  • Microarchitectural Fill Buffer Data Sampling (MFBDS, CVE-2018-12130)
  • Microarchitectural Store Buffer Data Sampling (MSBDS, CVE-2018-12126)
  • Microarchitectural Load Port Data Sampling (MLPDS, CVE-2018-12127)
  • Microarchitectural Data Sampling Uncacheable Memory (MDSUM, CVE-2018-11091)

The attack methods pose a threat to both PCs and cloud environments, and they allow hackers to get applications, the operating system, virtual machines and trusted execution environments to leak information, including passwords, website content, disk encryption keys and browser history. Attacks can be launched both by a piece of malware present on the targeted system and from the internet.

However, Intel says exploitation in a real-world attack is not an easy task and the attacker may not be able to obtain valuable information even if the exploit is successful.

The products of several major tech companies are impacted by the flaws and most of them have already published blog posts and advisories providing information on their impact and the availability of patches and mitigations.

Intel

Intel says its newer products, such as some 8th and 9th generation Core processors and 2nd generation Xeon Scalable processors, address these vulnerabilities at hardware level. Some of the other affected products have received or will receive microcode updates that should mitigate the flaws. The company has published a technical deep dive and a list that users can check to see if their processors will receive microcode updates.

Intel says the mitigations should have minimal performance impact for a majority of PCs, but performance may be impacted in the case of data center workloads.

Disabling hyper-threading on vulnerable CPUs should prevent exploitation of the vulnerabilities.

Apple

Apple informed customers that macOS Mojave 10.14.5 and Security Update 2019-003 for Sierra and High Sierra include the option to enable full mitigation for the MDS attacks. Mojave 10.14.5 also includes a Safari update that should prevent exploitation from the internet.

Microsoft

Microsoft has started releasing software updates for Windows and deployed server-side fixes to its cloud services to mitigate the vulnerabilities. The company has pointed out that in addition to software updates, firmware updates are also required for full protection against attacks.

Microsoft has also released a PowerShell script that users can run on their systems to check the status of speculative execution mitigations.

Google

Google has made available a page where users are informed about the actions they need to take depending on the products they have. The internet giant says its infrastructure, G Suite, and Google Cloud Platform products and services are protected against attacks, but some cloud users may need to take action.

The company says a vast majority of Android devices are not impacted. In the case of Chrome OS devices, Google has disabled hyper-threading by default starting with version 74 and additional mitigations will be available in Chrome OS 75.

VMware

VMware told users that the vulnerabilities impact its vCenter Server, vSphere ESXi, Workstation, Fusion, vCloud Usage Meter, Identity Manager, vSphere Data Protection, vSphere Integrated Containers, and vRealize Automation products.

The company provides hypervisor-specific mitigations and hypervisor-assisted guest mitigations for the impacted products. These mitigations involve software updates and patches from VMware.

VMware pointed out that exploitation of the flaws requires local access to the targeted virtual machine and the ability to execute code.

IBM

IBM says it’s rolling out the microcode updates from Intel and mitigations to its cloud services. The company told users that its POWER processors are not impacted by the MDS vulnerabilities.

Citrix

Citrix says full mitigation of the Intel chip vulnerabilities involves updates to the Citrix hypervisor and updates to the CPU microcode. The company has released a hotfix for XenServer 7.1, which includes both hypervisor and CPU microcode updates, and it plans on releasing similar hotfixes for other affected products.

Oracle

A blog post from Oracle describes the impact of the flaws on the company’s hardware, operating systems, and cloud services. X86-based systems need to be assessed by their administrators and Oracle Engineered Systems customers will receive specific guidance from the company.

Oracle SPARC servers and Solaris on SPARC are not impacted, but Solaris on x86 systems is affected. Patches have been released by Oracle for Oracle Linux and VM Server products.

AWS

Amazon Web Services (AWS) said on Tuesday that it had deployed protections for MDS attacks to all its infrastructure and no action is required from users. The company has released updated kernels and microcode packages for Amazon Linux AMI 2018.3 and Amazon Linux 2.

Xen Project

The Xen Project says systems running all versions of Xen are affected by the vulnerabilities if they use x86 Intel processors.

Linux distributions

Advisories for the MDS vulnerabilities in Intel processors have been published by Linux kernel developers, Red Hat, Debian, Ubuntu and SUSE. Linux distributions have already started rolling out updates that should mitigate the flaws.
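
On patched Linux kernels, the effect of these updates can be verified directly: the kernel reports the state of each mitigation through sysfs. A minimal sketch (assuming a kernel recent enough to expose /sys/devices/system/cpu/vulnerabilities, which the updated MDS-aware kernels provide):

```python
# Minimal sketch: report the kernel's view of speculative-execution mitigations on Linux.
# Assumes a kernel that exposes /sys/devices/system/cpu/vulnerabilities (present in
# the updated kernels that ship the MDS fixes).
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")
SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

def mitigation_status() -> dict:
    """Map each reported vulnerability (e.g. 'mds', 'spectre_v2') to its status string."""
    if not VULN_DIR.is_dir():
        return {}  # older kernel without the sysfs interface
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, state in mitigation_status().items():
        print(f"{name}: {state}")
    if SMT_CONTROL.exists():
        # Hyper-threading can be disabled here ("off"/"forceoff") on affected CPUs
        print("SMT (hyper-threading):", SMT_CONTROL.read_text().strip())
```

An mds entry reading “Mitigation: Clear CPU buffers; SMT vulnerable”, for example, means the microcode/kernel mitigation is active but hyper-threading is still enabled; a value starting with “Vulnerable” means the updates have not yet been applied.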

Hardware manufacturers

Many hardware manufacturers whose products use Intel processors are likely affected by the ZombieLoad and RIDL vulnerabilities. However, so far, only Lenovo and HP appear to have started releasing firmware patches for their devices.

from: https://www.securityweek.com/intel-mds-vulnerabilities-what-you-need-know

 

***

New secret-spilling flaw affects almost every Intel chip since 2011

Security researchers have found a new class of vulnerabilities in Intel chips which, if exploited, can be used to steal sensitive information directly from the processor.

The bugs are reminiscent of Meltdown and Spectre, which exploited a weakness in speculative execution, an important part of how modern processors work. Speculative execution helps processors predict, to a degree, what an application or operating system might need next, making the app run faster and more efficiently. The processor keeps the results of its predictions if they turn out to be needed, or discards them if they are not.

Both Meltdown and Spectre leaked sensitive data stored briefly in the processor, including secrets — such as passwords, secret keys and account tokens, and private messages.

Now some of the same researchers are back with an entirely new round of data-leaking bugs.

“ZombieLoad,” as it’s called, is a side-channel attack targeting Intel chips, allowing hackers to effectively exploit design flaws rather than injecting malicious code. Intel said ZombieLoad is made up of four bugs, which the researchers reported to the chip maker just a month ago.

Almost every computer with an Intel chip dating back to 2011 is affected by the vulnerabilities. Unlike with some earlier side-channel attacks, AMD and ARM chips are not said to be vulnerable.

ZombieLoad takes its name from a “zombie load,” an amount of data that the processor can’t understand or properly process, forcing it to ask the microcode for help to prevent a crash. Apps are usually only able to see their own data, but this bug allows that data to bleed across those boundary walls. ZombieLoad will leak any data currently loaded by the processor’s core, the researchers said. Intel said patches to the microcode will help clear the processor’s buffers, preventing data from being read.

In a proof-of-concept video, the researchers showed that the flaws could be exploited to see which websites a person is visiting in real time, but the technique could easily be repurposed to grab passwords or access tokens used to log into a victim’s online accounts.

Like Meltdown and Spectre, it’s not just PCs and laptops affected by ZombieLoad — the cloud is also vulnerable. ZombieLoad can be triggered in virtual machines, which are meant to be isolated from other virtual systems and their host device.

Daniel Gruss, one of the researchers who discovered the latest round of chip flaws, said it works “just like” it does on PCs and can read data off the processor. That’s potentially a major problem in cloud environments where different customers’ virtual machines run on the same server hardware.

Although no attacks have been publicly reported, the researchers couldn’t rule them out nor would any attack necessarily leave a trace, they said.

What does this mean for the average user? There’s no need to panic, for one.

These are far from drive-by exploits where an attacker can take over your computer in an instant. Gruss said it was “easier than Spectre” but “more difficult than Meltdown” to exploit — and both required a specific set of skills and effort to use in an attack.

But if exploit code was compiled in an app or delivered as malware, “we can run an attack,” he said.

There are far easier ways to hack into a computer and steal data. But research into speculative execution and side-channel attacks is still in its infancy. As more findings come to light, these data-stealing attacks have the potential to become easier to exploit and more streamlined.

But as with any vulnerability where patches are available, install them.

Intel has released microcode to patch vulnerable processors, including Intel Xeon, Intel Broadwell, Sandy Bridge, Skylake and Haswell chips. Intel Kaby Lake, Coffee Lake, Whiskey Lake and Cascade Lake chips are also affected, as well as all Atom and Knights processors.

But other tech giants, like consumer PC and device manufacturers, are also issuing patches as a first line of defense against possible attacks.

Device and software makers Apple and Microsoft, as well as browser maker Google, have released patches, with other companies expected to follow.

In a call with TechCrunch, Intel said the microcode updates, like previous patches, would have an impact on processor performance. An Intel spokesperson told TechCrunch that most patched consumer devices could take a 3 percent performance hit at worst, and as much as 9 percent in a datacenter environment. But, the spokesperson said, it was unlikely to be noticeable in most scenarios.

And neither Intel nor Gruss and his team have released exploit code, so there’s no direct and immediate threat to the average user.

But with patches rolling out today, there’s no reason to pass on a chance to prevent such an attack in any eventuality.

from: https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/

 

 

North Korea: Bitten by the Bitcoin Bug – A New Dimension of Lazarus Hacking

A report from Proofpoint by Darien Huss

 

Executive Summary

With activity dating at least to 2009, the Lazarus Group has consistently ranked among the most disruptive, successful, and far-reaching state-sponsored actors.

Law enforcement agencies suspect that the group has amassed nearly $100 million worth of cryptocurrencies based on their value today.

  • The March 20, 2013 attack in South Korea,
  • the Sony Pictures hack in 2014,
  • the successful SWIFT theft of $81 million from the Bangladesh Bank in 2016,
  • and perhaps most famously the 2017 WannaCry ransomware attack and its global impact have all been attributed to the group.

The Lazarus Group is widely accepted as being a North Korean state-sponsored threat actor by numerous organizations in the information security industry, law enforcement agencies, and intelligence agencies around the world. The Lazarus Group’s arsenal of tools, implants, and exploits is extensive and under constant development. Previously, they have employed DDoS botnets, wiper malware to temporarily incapacitate a company, and a sophisticated set of malware targeting the SWIFT banking system to steal millions of dollars. In this report we describe and analyze a new, currently undocumented subset of the Lazarus Group’s toolset that has been widely targeting individuals, companies, and organizations with interests in cryptocurrency.

Threat vectors for this new toolset, dubbed PowerRatankba, include highly targeted spearphishing campaigns using links and attachments as well as massive email phishing campaigns targeting both personal and corporate accounts of individuals with interests in cryptocurrency. We also share our discovery of what may be the first publicly documented instance of a state targeting a point-of-sale related framework for the theft of credit card data, again using a variant of malware that is closely related to PowerRatankba.

 

Conclusion

This report has introduced several new additions to Lazarus Group’s ever-growing arsenal, including a variety of different attack vectors, a new PowerShell implant and Gh0st RAT variant, as well as an emerging point-of-sale threat targeting South Korean devices. In addition to insight into Lazarus’ emerging toolset, there are two key takeaways from this research:

  • Analyzing a financially motivated arm of a state actor highlights an often overlooked or underestimated aspect of state-sponsored attacks; in this case, we were able to differentiate the actions of the financially motivated team within Lazarus from those of their espionage and disruption teams that have recently grabbed headlines.
  • This group now appears to be targeting individuals rather than just organizations: individuals are softer targets, often lacking resources and knowledge to defend themselves and providing new avenues of monetization for a state-sponsored threat actor’s toolkit.
  • Moreover, both the explosive growth in cryptocurrency values and the emergence of new point-of-sale malware near the peak holiday shopping season provide an interesting example of how one state-sponsored actor is following the money, adding direct theft from individuals and organizations to the more “traditional” approach of targeting financial institutions for espionage that we often observe with other APT actors.

Download the entire report here or read this local copy:

 

 

 

Large-Scale “BOLD5000” MRI Dataset Bridges Human Vision And Machine Learning


 

Summary: BOLD5000, a new, large scale data set of brain scans of people viewing images, is helping researchers to better understand how the brain processes images. The data set is a big step towards using computer visual models to study biological vision.

Source: Carnegie Mellon University

Abstract: BOLD5000, a public fMRI dataset while viewing 5000 visual images

Vision science, particularly machine vision, has been revolutionized by introducing large-scale image datasets and statistical learning approaches. Yet, human neuroimaging studies of visual perception still rely on small numbers of images (around 100) due to time-constrained experimental procedures. To apply statistical learning approaches that include neuroscience, the number of images used in neuroimaging must be significantly increased. We present BOLD5000, a human functional MRI (fMRI) study that includes almost 5,000 distinct images depicting real-world scenes. Beyond dramatically increasing image dataset size relative to prior fMRI studies, BOLD5000 also accounts for image diversity, overlapping with standard computer vision datasets by incorporating images from the Scene UNderstanding (SUN), Common Objects in Context (COCO), and ImageNet datasets. The scale and diversity of these image datasets, combined with a slow event-related fMRI design, enables fine-grained exploration into the neural representation of a wide range of visual features, categories, and semantics. Concurrently, BOLD5000 brings us closer to realizing Marr’s dream of a singular vision science–the intertwined study of biological and computer vision.

Neuroscientists and computer vision scientists say a new dataset of unprecedented size — comprising brain scans of four volunteers who each viewed 5,000 images — will help researchers better understand how the brain processes images.

Researchers at Carnegie Mellon University and Fordham University, reporting today in the journal Scientific Data, said acquiring functional magnetic resonance imaging (fMRI) scans at this scale presented unique challenges.

Each volunteer participated in 20 or more hours of MRI scanning, challenging both their perseverance and the experimenters’ ability to coordinate across scanning sessions. The extreme design decision to run the same individuals over so many sessions was necessary for disentangling the neural responses associated with individual images.

The resulting dataset, dubbed BOLD5000, allows cognitive neuroscientists to better leverage the deep learning models that have dramatically improved artificial vision systems. Originally inspired by the architecture of the human visual system, deep learning may be further improved by pursuing new insights into how human vision works and by having studies of human vision better reflect modern computer vision methods. To that end, BOLD5000 measured neural activity arising from viewing images taken from two popular computer vision datasets: ImageNet and COCO.

“The intertwining of brain science and computer science means that scientific discoveries can flow in both directions,” said co-author Michael J. Tarr, the Kavčič-Moura Professor of Cognitive and Brain Science and head of CMU’s Department of Psychology. “Future studies of vision that employ the BOLD5000 dataset should help neuroscientists better understand the organization of knowledge in the human brain. As we learn more about the neural basis of visual recognition, we will also be better positioned to contribute to advances in artificial vision.”

Lead author Nadine Chang, a Ph.D. student in CMU’s Robotics Institute who specializes in computer vision, suggested that computer vision scientists are looking to neuroscience to help innovate in the rapidly advancing area of artificial vision — reinforcing the two-way nature of this research.

“Computer-vision scientists and visual neuroscientists essentially have the same end goal: to understand how to process and interpret visual information,” Chang said.

Improving computer vision was an important part of the BOLD5000 project from its onset. Senior author Elissa Aminoff, then a post-doctoral fellow in CMU’s Psychology Department and now an assistant professor of psychology at Fordham, initiated this research direction with co-author Abhinav Gupta, an associate professor in the Robotics Institute.

Among the challenges in connecting biological and computer vision is that the majority of human neuroimaging studies include very few stimulus images – often 100 or fewer – which typically are simplified to depict only single objects against a neutral background. In contrast, BOLD5000 includes more than 5,000 real-world, complex images of scenes, single objects and interacting objects.
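
For readers who want to explore data of this kind, the released scans are standard fMRI volumes that can be opened with common neuroimaging tools. A minimal sketch (assuming the nibabel Python package and a locally downloaded run from the dataset; the file name below is illustrative, not a guaranteed path):

```python
# Minimal sketch: load one fMRI run and inspect its dimensions.
# Assumes the nibabel package and a locally downloaded NIfTI file; the
# file name is illustrative only.
import nibabel as nib
import numpy as np

RUN_FILE = "sub-CSI1_ses-01_task-5000scenes_run-01_bold.nii.gz"  # hypothetical path

img = nib.load(RUN_FILE)
data = img.get_fdata()                 # 4-D array: x, y, z, time
print("Volume shape:", data.shape)
print("Voxel sizes / TR:", img.header.get_zooms())

# Crude sanity check: mean BOLD signal per voxel, averaged over the run
mean_volume = data.mean(axis=-1)
print("Whole-brain mean signal:", float(np.nanmean(mean_volume)))
```

Pairing each volume with the image shown at that time point (from the accompanying stimulus and event listings) is what enables the image-by-image analyses described above.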

The group views BOLD5000 as only the first step toward leveraging modern computer vision models to study biological vision.

“Frankly, the BOLD5000 dataset is still way too small,” Tarr said, suggesting that a reasonable fMRI dataset would require at least 50,000 stimulus images and many more volunteers to make headway, given that the class of deep neural nets used to analyze visual imagery is trained on millions of images. To this end, the research team hopes their ability to generate a dataset of 5,000 brain scans will pave the way for larger collaborative efforts between human vision and computer vision scientists.

So far, the field’s response has been positive. The publicly available BOLD5000 dataset has already been downloaded more than 2,500 times.

In addition to Chang, Tarr, Gupta, and Aminoff, the research team included John A. Pyles, senior research scientist and scientific operations director of the CMU-Pitt BRIDGE Center, and Austin Marcus, a research assistant in Tarr’s lab.

Funding: The National Science Foundation, U.S. Office of Naval Research, the Alfred P. Sloan Foundation and the Okawa Foundation for Information and Telecommunications sponsored this research.

Source: Carnegie Mellon University
Media Contacts: Byron Spice – Carnegie Mellon University
Image Source: The image is in the public domain.

Original Research: Open access
“BOLD5000, a public fMRI dataset while viewing 5000 visual images”. Nadine Chang, John A. Pyles, Austin Marcus, Abhinav Gupta, Michael J. Tarr & Elissa M. Aminoff.
Scientific Data. doi:10.1038/s41597-019-0052-3

 

from: https://neurosciencenews.com/machine-learning-vision-data-set-13034/

 

 

Alibaba-backed, Chinese Gov-supporting facial recognition AI startup Megvii raises $750 million

One of China’s most ambitious artificial intelligence startups, Megvii, more commonly known for its facial recognition brand Face++, announced Wednesday that it has raised $750 million in a Series E funding round.

Founded by three graduates from the prestigious Tsinghua University in China, the eight-year-old company specializes in applying its computer vision solutions to a range of use cases such as public security and mobile payment. It competes with its fast-growing Chinese peers, including the world’s most valuable AI startup, SenseTime — also funded by Alibaba — and Sequoia-backed Yitu.

Bloomberg reported in January that Megvii was mulling raising up to $1 billion through an initial public offering in Hong Kong. The new capital injection lifts the company’s valuation to just north of $4 billion as it gears up for its IPO later this year, sources told Reuters.

China is on track to overtake the United States in AI on various fronts. Buoyed by a handful of mega-rounds, Chinese AI startups accounted for 48 percent of all AI funding in 2017, surpassing their U.S. counterparts for the first time, according to data collected by CB Insights. An analysis released in March by the Allen Institute for Artificial Intelligence found that China is rapidly closing in on the U.S. in the number of AI research papers published and their influence.

A critical caveat to China’s flourishing AI landscape is, as The New York Times and other publications have pointed out, the government’s use of the technology. While facial recognition has helped the police trace missing children and capture suspects, there have been concerns around its use as a surveillance tool.

Megvii’s new funding round arrives just days after a Human Rights Watch report listed it as a technology provider to the Integrated Joint Operations Platform, a police app allegedly used to collect detailed data from a largely Muslim minority group in China’s far west province of Xinjiang. Megvii denied any links to the IJOP database per a Bloomberg report.

Kai-Fu Lee, a world-renowned AI expert and investor who was Google’s former China head, warned that any country in the world has the capacity to abuse AI, adding that China also uses the technology to transform retail, education and urban traffic among other sectors.

Megvii has attracted a raft of big-name investors in and outside China to date. Participants in its Series E include Bank of China Group Investment Limited, a wholly owned investment subsidiary of the state-owned Bank of China, and ICBC Asset Management (Global), the offshore investment subsidiary of the Industrial and Commercial Bank of China.

Foreign backers in the round include a wholly owned subsidiary of the Abu Dhabi Investment Authority, one of the world’s largest sovereign wealth funds, and Australian investment bank Macquarie Group.

Megvii says its fresh proceeds will go toward the commercialization of its AI services, recruitment and global expansion.

China has been exporting its advanced AI technologies to countries around the world. Megvii, according to a report by the South China Morning Post from last June, was in talks to bring its software to Thailand and Malaysia. Last year, Yitu opened its first overseas office in Singapore to deploy its intelligence solutions to partners in Southeast Asia. In a similar fashion, SenseTime landed in Japan by opening an autonomous driving test park this January.

“Megvii is a global AI technology leader and innovator with cutting-edge technologies, a scalable business model and a proven track record of monetization,” read a statement from Andrew Downe, Asia regional head of commodities and global markets at Macquarie Group. “We believe the commercialization of artificial intelligence is a long-term focus and is of great importance.”

 

from: https://techcrunch.com/2019/05/08/megvii-750-million/

see also: https://www.bgp4.com/2019/05/06/security-lapse-exposed-a-chinese-smart-city-surveillance-system/

 

 

Russia-linked Threat Group Turla Uses Sophisticated Backdoor ‘LightNeuron’ to Hijack Exchange Mail Servers

The Russia-linked threat group known as Turla has been using a sophisticated backdoor to hijack Microsoft Exchange mail servers, ESET reported on Tuesday.

The malware, dubbed LightNeuron, allows the attackers to read and modify any email passing through the compromised mail server, create and send new emails, and block emails to prevent the intended recipients from receiving them.

According to ESET, LightNeuron has been used by Turla — the group is also known as Waterbug, KRYPTON and Venomous Bear — since at least 2014 to target Microsoft Exchange servers. The cybersecurity firm has analyzed a Windows version of the malware, but evidence suggests a Linux version exists as well.

ESET has identified three organizations targeted with LightNeuron, including a Ministry of Foreign Affairs in an Eastern European country, a regional diplomatic organization in the Middle East, and an entity in Brazil. ESET became aware of the Brazilian victim based on a sample uploaded to VirusTotal, but it has not been able to determine what type of organization has been targeted.

The company’s researchers have determined that LightNeuron leverages a persistence technique not seen in any other piece of malware: a malicious Microsoft Exchange transport agent. Transport agents are designed to allow organizations to run custom mail-processing software on Exchange servers.

The malware runs with the same level of trust as spam filters and other security products, ESET said.

As for command and control (C&C), the malware is controlled by attackers using emails containing specially crafted PDF documents or JPG images. The malware can recognize these emails and extract the commands from the PDF or JPG files.

The commands supported by LightNeuron allow attackers to take complete control of a server, including writing and executing files, deleting files, exfiltrating files, executing processes and commands, and disabling the backdoor for a specified number of minutes.
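
As a rough illustration of how command data can travel inside an otherwise ordinary-looking image (a generic heuristic, not LightNeuron's actual, undisclosed encoding), the sketch below flags JPEG attachments that carry extra bytes after the end-of-image marker:

```python
# Conceptual sketch: flag JPEGs carrying extra data after the end-of-image marker.
# This is a generic heuristic for smuggled payloads, not LightNeuron's actual format.
from pathlib import Path

JPEG_EOI = b"\xff\xd9"  # JPEG end-of-image marker

def trailing_bytes(path: str) -> int:
    """Return the number of bytes after the last JPEG end-of-image marker (-1 if not a JPEG)."""
    blob = Path(path).read_bytes()
    eoi = blob.rfind(JPEG_EOI)
    if eoi == -1:
        return -1
    return len(blob) - (eoi + len(JPEG_EOI))

if __name__ == "__main__":
    extra = trailing_bytes("attachment.jpg")  # hypothetical file name
    if extra > 0:
        print(f"Suspicious: {extra} bytes appended after the JPEG image data")
```

Real steganographic channels are usually far subtler than simply appended data, which is one reason such mail-borne command channels are hard to spot with content inspection alone.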

Last year, ESET detailed a backdoor used by Turla to target Microsoft Outlook. That piece of malware had also used PDF files attached to emails for command and control purposes.

ESET has linked LightNeuron to Turla based on several pieces of evidence, including the presence of known Turla malware on compromised Exchange servers, the use of file names similar to ones known to be used by the group, and the use of a packer exclusively utilized by the threat actor.

In an APT trends report published last year by Kaspersky Lab, the Russian cybersecurity firm also mentioned LightNeuron and attributed it with medium confidence to Turla. Kaspersky had spotted victims in the Middle East and Central Asia.

ESET also noticed that the compromised Exchange servers received commands mostly during work hours in UTC+3, the Moscow time zone. Furthermore, the attackers apparently took a break between December 28, 2018, and January 14, 2019, when many Russians take time off to celebrate the New Year and Christmas.

 

from: https://www.securityweek.com/turla-uses-sophisticated-backdoor-hijack-exchange-mail-servers

 

 

Chinese Hackers Used NSA Tool a Year Before Shadow Brokers Leak

A Chinese threat actor was spotted using a tool attributed to the NSA-linked Equation Group more than one year prior to it being leaked by the mysterious Shadow Brokers, Symantec revealed on Monday.

The Chinese cyber espionage group is tracked as Buckeye, APT3, UPS Team, Gothic Panda, and TG-0110, and it has been linked by researchers to the Chinese Ministry of State Security. The threat actor had been active since at least 2009 before it apparently ceased operations in mid-2017.

In late November 2017, the US government announced charges against three Chinese nationals for attacks launched by the hacker group against Siemens, Trimble, and Moody’s Analytics.

Buckeye’s attacks involved several pieces of malware, including a backdoor implant known as DoublePulsar and an exploit tool, dubbed Bemstour, that had been used to deliver the backdoor.

DoublePulsar became widely known in April 2017, when it was leaked by the Shadow Brokers group.

 

 

The Shadow Brokers announced in August 2016 that it had hacked the Equation Group, a threat actor widely believed to be sponsored by the U.S. National Security Agency (NSA). Over the coming months, the Shadow Brokers leaked many tools obtained from Equation Group and apparently attempted to make a profit by selling and auctioning the stolen data.

However, Symantec now says it has found evidence that Buckeye used a variant of DoublePulsar as early as March 2016 in an attack aimed at Hong Kong — that is more than one year before DoublePulsar was leaked by Shadow Brokers.

Buckeye’s DoublePulsar appeared to be newer than the one leaked by Shadow Brokers as it was designed to target newer versions of Windows, including Windows 8.1 and Server 2012 R2.

“Based on the timing of the attacks and the features of the tools and how they are constructed, one possibility is that Buckeye may have engineered its own version of the tools from artefacts found in captured network traffic, possibly from observing an Equation Group attack. Other less supported scenarios, given the technical evidence available, include Buckeye obtaining the tools by gaining access to an unsecured or poorly secured Equation Group server, or that a rogue Equation group member or associate leaked the tools to Buckeye,” Symantec said in a blog post.

Interestingly, although Buckeye has apparently been inactive since mid-2017, its DoublePulsar variant was still spotted in September 2018. Furthermore, threat actors apparently continued to improve Bemstour, with the latest sample found by Symantec dated March 23, 2019.

“It may suggest that Buckeye retooled following its exposure in 2017, abandoning all tools publicly associated with the group. However, aside from the continued use of the tools, Symantec has found no other evidence suggesting Buckeye has retooled. Another possibility is that Buckeye passed on some of its tools to an associated group,” Symantec explained.

Buckeye had been known to use zero-day vulnerabilities in its attacks. According to Symantec, Bemstour uses two Windows vulnerabilities for remote kernel code execution: CVE-2017-0143, a Windows SMB code execution flaw patched by Microsoft in March 2017, and CVE-2019-0703, a Windows SMB information disclosure bug that Microsoft addressed with its March 2019 Patch Tuesday updates. Buckeye had exploited both of these flaws before fixes were released.

Symantec reported CVE-2019-0703 to Microsoft in September 2018. It’s worth noting, however, that Microsoft’s advisory for CVE-2019-0703 indicates that the company has no evidence of exploitation.

UPDATE: Microsoft has confirmed to SecurityWeek that CVE-2019-0703 has been exploited in attacks. The company blamed a clerical error and has updated its advisory.

 

 

from: https://www.securityweek.com/chinese-hackers-used-nsa-tool-year-shadow-brokers-leak

 

 

New report explains how China thinks about information warfare

The Chinese military has established a Network Systems Department, responsible for information warfare.

 

The Department of Defense’s annual report on China’s military and security developments provides new details about how China’s military organizes its information warfare enterprise, an area that has been of particular interest to U.S. military leaders.

In 2015, the People’s Liberation Army created the Strategic Support Force, which centralizes space, cyber, electronic warfare and psychological warfare missions under a single organization. The Chinese have taken the view, according to the DoD and other outside national security experts, that information dominance is key to winning conflicts. This could be done by denying or disrupting the use of communications equipment of its competitors.

The 2019 edition of the report, released May 2, expands on last year’s version and outlines the Chinese Network Systems Department, one of two deputy theater command level departments within the Strategic Support Force responsible for information operations.

“The SSF Network Systems Department is responsible for information warfare with a mission set that includes cyberwarfare, technical reconnaissance, electronic warfare, and psychological warfare,” the report read. “By placing these missions under the same organizational umbrella, China seeks to remedy the operational coordination challenges that hindered information sharing under the pre-reform organizational structure.”

As described in previous Pentagon assessments, Chinese military leaders hope to use these so-called non-kinetic weapons in concert with kinetic weapons to push adversaries farther away from its shores and assets.

“In addition to strike, air and missile defense, anti-surface, and anti-submarine capabilities improvements, China is focusing on information, cyber, and space and counterspace operations,” the report said of China’s anti-access/area denial efforts. This concept aims to keep enemies at bay by extending defenses through long range missiles and advanced detection measures, which in turn make it difficult for enemies to penetrate territorial zones.

Cyber theft and collective strategic importance

This year’s report includes two subtle changes from last year’s edition regarding China’s cyber activities directed at the Department of Defense.

While last year’s report documents China’s continued targeting of U.S. diplomatic, economic, academic, and defense industrial base sectors to support intelligence collection, the latest edition points out that China’s exfiltration of sensitive military information from the defense industrial base could allow it to gain a military advantage.

In recent years, China has been accused of leading major hacks on defense contractors and the U.S. Navy, leading an internal review by the Navy to assert that both groups are “under cyber siege,” according to the Wall Street Journal.

Additionally, this year’s report points out that, taken together, these cyber-enabled campaigns threaten to erode U.S. military advantages, a concern frequently raised by top U.S. leaders.

New strategies and approaches from the U.S. military seek to be more assertive in the defense of U.S. interests from such cyber probes.

 

The DoD Report from 02 MAY 2019

PDF (local copy) opens in a new tab

 

from: https://www.c4isrnet.com/c2-comms/2019/05/03/new-report-explains-how-china-thinks-about-information-warfare/

 

 

 
