Investment Backlog in Germany

Germany is currently debating whether the state invests enough in infrastructure and education. As the Statista chart based on an analysis by Handelsblatt shows, the volume of subsidies that have not been drawn down is considerable. Of the two funds intended to support particularly cash-strapped municipalities, around 44 percent and 92 percent of the money, respectively, had still not been drawn down by the end of last year. The two so-called Kommunalinvestitionsförderungsfonds (municipal investment promotion funds) are meant to finance, among other things, the renovation of hospitals and roads.

The situation is similar for the Digitalfonds, of which only a fraction of the total volume has been disbursed; the fund has, however, only existed since last year. It is intended to finance broadband expansion and the digitalization of schools. The Kita-Ausbaufonds (daycare expansion fund) is meant to create 100,000 additional childcare places for three-year-old children. That investment program was launched in 2017 and is due to be completed in 2020, yet so far only 0.25 of 1.1 billion euros have been drawn down. The money from the Aufbauhilfefonds Hochwasser (flood reconstruction aid fund) has also not been fully disbursed; it is meant to repair the damage caused by the flood disaster of 2013, which affected 11 of the 16 federal states.

This raises the question of whether Germany's investment backlog can really be solved with more money. A 2017 study by IW Köln (PDF download) points to an important reason for the unsatisfactory situation: in the infrastructure sector, many federal states lack construction-ready projects, that is, projects for which immediate building permission exists. The funds then frequently flow to those states that do have shovel-ready projects; in the area of transport infrastructure, Bavaria in particular is said to have benefited from this. According to the study, the shortage of construction-ready projects is mainly due to capacity bottlenecks in building authorities brought about by staff cuts.

 

from: https://de.statista.com/infografik/20577/nicht-abgerufene-mittel-aus-sondervermoegen-des-bundes/

 

 

World’s Most Valuable Tech Companies

Silicon Valley's digital economy has further extended its global dominance. Platform companies such as Apple, Microsoft, Amazon, Facebook and Google, whose parent Alphabet became the fourth US corporation to reach a stock market valuation of more than one trillion dollars, dominate the Western world and meet peers only in Asia. Europe is out of the running.

This Linux smartphone is now shipping for $150

Shipping at only $149.99, Brave Heart is a fully open-source smartphone running Linux.

Pine64’s open source PinePhone runs Linux and is designed for developers and early-adopters.

Computer and developer-board maker Pine64 has started shipping the first edition of its much-anticipated – at least in the open-source community – PinePhone, after pre-orders sold out. Dubbed “Brave Heart”, the device is indeed designed only for the keener hobbyists.

Shipping at only $149.99, Brave Heart is a fully open-source smartphone running Linux, which the company claims was developed “with the community for the community” – that is, with and for developers and early adopters, and in this case preferably those who have extensive Linux experience.

In a departure from Android and iOS, Pine’s new project provides a platform for customers to develop Linux-on-phone projects. It does not come with a pre-installed OS, but supports all major Linux phone projects such as Ubuntu Touch, Sailfish OS and Plasma Mobile.

Although buyers get to choose their OS, it will be up to them to load it onto the PinePhone – meaning the device is not designed for the average Joe.

“The ‘BraveHeart’ Edition PinePhone does not come with default OS build installed, user needs to install their own favorite build. Most of the OS builds are still in beta stage,” it notes: “Only intend for these units to find their way into the hands of users with extensive Linux experience and an interest in Linux-on-phone.”
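
Since the BraveHeart edition ships without an operating system, getting started typically means writing one of the community OS images to a microSD card and booting the phone from it. Below is a minimal sketch of that step in Python, assuming a downloaded image file and a placeholder device path; in practice most people would reach for dd or a graphical flashing tool instead.

    # Minimal sketch: write a downloaded OS image to a microSD card for the PinePhone.
    # Both paths are placeholders; double-check the target device, as it will be overwritten.
    import shutil

    IMAGE = "ubuntu-touch-pinephone.img"   # hypothetical image filename
    TARGET = "/dev/sdX"                    # replace with the actual microSD card device

    with open(IMAGE, "rb") as src, open(TARGET, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # copy in 4 MiB chunks
        dst.flush()
    print("Image written; insert the card into the PinePhone and boot.")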

The company has been selling single-board computers and notebook computers, initially to compete with Raspberry Pi, since 2016. The devices are designed for developers who are interested in free and open-source software (FOSS) to work on applications. “Regardless of if you want to sequence DNA, build a robot or kill space invaders, we’ve got you covered,” says Pine64 on its website.

Powered by the same signature quad-core ARM64 found in Pine’s A64 single-board computers, the new phone’s specs are promising. Brave Heart has 2GB of RAM, 16GB of storage, a 5MP rear camera and a 2MP front one. There is also a headphone jack, a USB-C port and a Micro-SD slot.

Keeping in line with the company’s objectives, Pine64 also includes strong privacy settings in the new device. Under the removable back, for example, are six dip switches that let users kill the modem, GPS, WiFi, Bluetooth, microphone and cameras. 

The device sets itself against Purism’s Librem 5 smartphone, which started shipping last year, albeit at the much higher price of $749. Unlike Pine64’s device, the Librem 5 comes with Pure OS and Ubuntu Touch, but it includes similar security features such as hardware kill switches for the camera, mic, WiFi, Bluetooth and modem.

Pine64 has called the Brave Heart device a “milestone” for the company, and the phone has certainly generated a lot of enthusiasm among developers. Although the early version of the PinePhone is only shipping to a select few, the company says a consumer-ready version will be available from spring 2020.

The manufacturer is also working on an open-source Linux tablet with a detachable keyboard, as well as on a smartwatch, so watch this space for more.

from: https://www.zdnet.com/article/this-linux-smartphone-is-now-shipping-for-150/

 

https://www.pine64.org/pinephone/

 

PINEPHONE – “BraveHeart” Limited Edition Linux SmartPhone for early adopters

$149.99

**********************  Disclaimer ***********************

  • The “BraveHeart” Limited Edition PinePhones are aimed solely at developers and early adopters. More specifically, these units are intended only to find their way into the hands of users with extensive Linux experience and an interest in Linux-on-phone.
  • The “BraveHeart” Edition PinePhone does not come with a default OS build installed; users need to install their own favorite build. Most of the OS builds are still in beta stage.
  • Estimated dispatch in mid-January 2020

BODY:

  • Dimensions: 160.5mm x 76.6mm x 9.2mm
  • Weight: 185 grams
  • Build: Plastic
  • Colour: Black
  • SIM: Micro-SIM

DISPLAY:

  • Type: HD IPS capacitive touchscreen, 16M colors
  • Size: 5.95 inches
  • Resolution: 1440×720 pixels, 18:9 ratio

PLATFORM:

  • OS: Various open source mainline Linux or BSD mobile OSes
  • Chipset: Allwinner A64
  • CPU: 64-bit quad-core 1.2 GHz ARM Cortex-A53
  • GPU: MALI-400

MEMORY:

  • Internal Flash Memory: 16GB eMMC
  • System Memory: 2GB LPDDR3 SDRAM
  • Expansion: microSD card slot, supports SDHC and SDXC up to 2TB

CAMERA:

  • Main Camera: Single 5MP, 1/4″, LED Flash
  • Selfie Camera: Single 2MP, f/2.8, 1/5″

SOUND:

  • Loudspeaker: Yes, mono
  • 3.5mm jack with mic: Yes, stereo

COMMUNICATION:

  • Worldwide, Global LTE bands
  • LTE-FDD: B1/ B2/ B3/ B4/ B5/ B7/ B8/ B12/ B13/ B18/ B19/ B20/ B25/ B26/ B28
  • LTE-TDD: B38/ B39/ B40/ B41
  • WCDMA: B1/ B2/ B4/ B5/ B6/ B8/ B19
  • GSM: 850/900/1800/1900MHz
  • WLAN: Wi-Fi 802.11 b/g/n, single-band, hotspot
  • Bluetooth: 4.0, A2DP
  • GPS: Yes, with A-GPS, GLONASS

FEATURES:

  • USB: type C (SlimPort), USB Host, DisplayPort Alternate Mode output
  • Sensors: Accelerometer, gyro, proximity, ambient light, magnetometer(compass)
  • Actuator: Vibrator
  • Privacy Switches: LTE (including GPS), WiFi/BT, Mic, and Camera

BATTERY:

  • Removable Li-Po 2750-3000 mAh battery
  • Charging: USB type-C, 15W – 5V 3A Quick Charge, follows USB PD specification

PACKAGE:

  • PinePhone
  • USB-A to USB-C charging cable

Warranty: 30 days

Note:
  • The “BraveHeart” Limited Edition PinePhones are aimed solely at developers and early adopters. More specifically, these units are intended only to find their way into the hands of users with extensive Linux experience and an interest in Linux-on-phone.
  • Due to the lithium-ion battery in the PinePhone, shipment of PinePhone orders is handled differently from other Pine64 products; that is why a PinePhone order cannot be combined with other Pine64 products. Sorry for any inconvenience caused.
  • Small numbers (1-3) of stuck or dead pixels are a characteristic of LCD screens. These are normal and should not be considered a defect.
  • When fulfilling the purchase, please bear in mind that we are offering the PinePhone at this price as a community service to the PINE64, Linux and BSD communities. If you think that a minor dissatisfaction, such as a dead pixel, will prompt you to file a PayPal dispute, then please do not purchase the PinePhone. Thank you.

Out of stock

SKU: PPHONE-BH

from: https://store.pine64.org/?product=pinephone-braveheart-limited-edition-linux-smartphone-for-early-adaptor

 

Desktop Operating Systems as of DEC 2019

Tomorrow, Microsoft ends support for Windows 7.

According to NetMarketShare, the operating system currently ranks second behind Windows 10 with a desktop market share of 26.6 percent. That means that from Tuesday onward, tens of millions of people worldwide will no longer receive updates for their operating system, turning the Windows version released in 2009 into a security risk for its users.

This applies above all to private users; companies and public authorities can pay for three more years of support. Anyone who nonetheless keeps using Windows 7 should be aware of the risks. As the German Federal Office for Information Security (BSI) puts it: “Since publicly known vulnerabilities are no longer being closed, continued use of Windows 7 entails high risks for IT security.”

 

from: https://de.statista.com/infografik/20466/desktop-marktanteile-von-betriebssystemen-weltweit/

 

 

2019 – Banner Year For Data Exposures: Top 10 Breaches and Leaky Server Screw Ups

[Motivation finally enough to walk away from ‘black box systems’
and secure everything with the original Blockchain? — TJACK]

Top 10 Breaches and Leaky Server Screw Ups of 2019

From massive credential spills on the Dark Web and hacked data to card-skimming and rich profiles exposed by way of cloud misconfigurations, 2019 was a notable year for data breaches. Big names like Capital One, Macy’s and Sprint were impacted, as was the entire country of Ecuador and supply-chain companies like the American Medical Collection Agency. Here are our Top 10 data leak moments of the year.

Collections 1-4 Spill Millions of Credentials on the Dark Web

The year started out with a bang when a huge trove of data – containing 773 million unique email addresses and passwords – was discovered on a popular underground hacking forum. The credential spill was dubbed “Collection #1” and totaled 87 GB of data, with records culled from breaches that occurred as far back as 2010, including the well-known compromise of Yahoo. It was one of the largest jackpots ever seen when it comes to account-compromise efforts. Collections 2-4 soon followed, and ultimately more than 840 million account records from 38 companies appeared for sale on the Dark Web in February.

AMCA Supply-Chain Breach Impacts 20.1 Million

A hack of the American Medical Collection Agency (AMCA), a third-party bill collection vendor, impacted 20.1 million patients over the summer, exposing personally identifiable information such as names, addresses and dates of birth, and also payment data. Three clinical laboratories offering blood tests and the like that relied on AMCA to process a portion of their consumer billing were hit: 12 million patients from Quest Diagnostics, another 7.7 million patients from LabCorp and 400,000 victims from OPKO Health.

Capital One: Another Year, Another Major FinServ Breach

In July, a massive breach of Capital One customer data hit more than 100 million people in the U.S. and 6 million in Canada. Thanks to a cloud misconfiguration, a hacker was able to access credit applications, Social Security numbers and bank account numbers in one of the biggest data breaches to ever hit a financial services company — putting it in the same league in terms of size as the Equifax incident of 2017. The FBI arrested a suspect in the case, former Amazon Web Services (AWS) engineer Paige Thompson, after she boasted about the data theft on GitHub. Researchers said that Capital One victims are going to be phished for years to come – long after their 12 months of credit monitoring is done.

Facebook’s Year of Breach Problems

Facebook had a bad year for breaches, including the December emergence of a hacked database containing the names, phone numbers and Facebook user IDs of 267 million platform users. The data may have been stolen from Facebook’s developer API before the company restricted API access to phone numbers and other data in 2018. And in September, an open server was discovered leaking hundreds of millions of Facebook user phone numbers. In April, researchers found two separate datasets, held by two app developers (Cultura Colectiva and At the Pool). The actual data source for the records (like account names and personal data) in these databases was Facebook.

Deep Profiles for the Entire Population of Ecuador Are Exposed

In September it came to light that the entire population of Ecuador (as well as Julian Assange) had been impacted by an open database with rich, detailed life information collected from public-sector sources by a marketing analytics company. The trove of data offered any attacker the ability to cross-reference and combine the data into a highly personal, richly detailed view of a person’s life. The records, for 20 million individuals, were gleaned from Ecuadorian government registries, an automotive association called Aeade, and the Ecuadorian national bank. Ecuador has about 16.5 million citizens in total (some of the entries were for deceased persons).

1.2B Rich Profiles Exposed By Data Brokers

In a similar incident to the Ecuador debacle, an open Elasticsearch server emerged in December that exposed the rich profiles of more than 1.2 billion people. The database consisted of scraped information from social media sources like Facebook and LinkedIn, combined with names, personal and work email addresses, phone numbers, Twitter and GitHub URLs and other data. Taken together, the profiles provide a 360-degree view of individuals, including their employment and education histories. All of the information was unprotected, with no login needed to access it. The data was linked to People Data Labs (PDL) and OxyData.io.

Security Specialist Imperva Smarts from Cloud Misconfiguration

In an ironic turn of events, cybersecurity company Imperva allowed hackers to steal and use an administrative Amazon Web Services (AWS) API key in one of Imperva’s production AWS accounts, thanks to a cloud misconfiguration. Hackers used Imperva’s Cloud Web Application Firewall (WAF) product to access a database snapshot containing emails, hashed and salted passwords, and some customers’ API keys and TLS keys. Because the database was accessed as a snapshot, the hackers made off with only old Incapsula records that go up to Sept. 15, 2017. However, the theft of API keys and TLS keys would allow an attacker to break companies’ encryption and access corporate applications directly.

Sprint Contractor Lays Open Phone Bills for 260K Subscribers

A cloud misconfig was also behind hundreds of thousands of mobile phone bills for AT&T, Verizon and T-Mobile subscribers being exposed to the open internet in December, thanks to the oversight of a contractor working with Sprint. More than 261,300 documents were stored – mainly cell phone bills from Sprint customers who switched from other carriers. Cell phone bills are a treasure trove of data, and include names, addresses and phone numbers along with spending histories and in many cases, call and text message records.

Magecart Siphons Off Millions of Payment Card Details

Magecart, the digital card-skimming collective encompassing several different affiliates all using the same modus operandi, is now so ubiquitous that its infrastructure is flooding the internet, researchers said earlier this year. Magecart attacks, which involve inserting virtual credit-card skimmers into e-commerce check-out pages, affected a range of companies throughout 2019; these included bedding retailers MyPillow and Amerisleep, the subscription website for the Forbes print magazine, at least 80 reputable brands in the motorsports industry and luxury apparel segments, popular skin care brand First Aid Beauty, Macy’s and streaming video and podcast content company Rooster Teeth.

Equifax Settlement Rankles Consumers

Equifax made notable news this year when it agreed to pay as much as $700 million to settle federal and state investigations on the heels of its infamous 2017 breach, which exposed the data of almost 150 million customers. That includes $300 million to cover free credit monitoring services for impacted consumers, $175 million to 48 states in the U.S., and $100 million in civil penalties. Some consumers are furious over what they view as an unfair settlement, though, with 200,000 of them signing a petition against the deal. The petition argues that very little of that cash will trickle down to those who actually suffered because of the breach.

 

from: https://threatpost.com/top-10-breaches-leaky-server-2019/151386/

 

 

The Great .ORG Heist: Internet Registry is Snatched Up By Private Equity Firm Ethos Capital for $1.1bn, Provoking Uproar

see also the previous article: https://www.bgp4.com/2019/11/26/internet-world-despairs-as-non-profit-org-tld-sold-by-isoc-for-to-private-equity-firm/

The old dream of an internet run in the public interest has long dissipated under pressure from huge corporations seeking to profit from what has become a worldwide information utility.

But one corner of the web seemed to maintain its character as a preserve for public service — the .org domain, which since its creation has been reserved for nonprofit organizations and has become something of a badge of honor of noncommercial activity.

The world’s first web page, in 1992. Things have changed since then.

That’s why many in the nonprofit world were startled by the announcement on Nov. 13 that the .org registry had been sold to a private equity firm, Ethos Capital. The seller was the Internet Society, a nonprofit that plays an important role in creating and maintaining internet engineering standards, but has been mostly the guardian of the .org domain. The price, as was revealed more than two weeks later, was a stunning $1.135 billion.


In the original announcement, Internet Society Chief Executive Andrew Sullivan called the sale “an important and exciting development” and described Ethos as “a strong strategic partner that understands the intricacies of the domain industry.”

Others are not so sure. Ethos didn’t even exist until earlier this year, and currently appears to have only two employees, including Erik Brooks, its founder.

Brooks listed his investment principles for me as “intellectual honesty, humility and respect and believing that prosperity can be built together.” But a week after the sale announcement, it emerged that the financial backers of Ethos included several firms with more conventional investment approaches, including funds associated with the families of H. Ross Perot, Mitt Romney and the Johnsons, owners of Fidelity Investments.

Brooks says Ethos is committed to running the .org registry in accordance with principles followed by the Internet Society, but hasn’t made that commitment in writing.

At stake are internet addresses ending in “.org” used by some 10 million organizations. The .org designation, or domain, is one of the oldest on the internet, along with .com (for commercial businesses), .edu (educational institutions), .gov (government agencies) and a handful of others.

It’s traditionally reserved for nonprofit organizations devoted to the public interest, such as the Red Cross, the Girl Scouts, and the United Way.

Not every dot-org meets the public service standard, since applicants aren’t screened. Websites for political fronts, such as the Koch network’s Americans for Prosperity, carry the .org label. So do sites for neo-Nazi hate groups.

But for the most part, organizations genuinely aimed at doing good tend to choose .org addresses. And, for that matter, so do Democratic and Republican party websites.

The domain holds a special place in the hearts of internet users; environmentalist and internet activist Jacob Malthouse calls .org a “digital Yosemite,” evoking the reverence naturalists such as John Muir felt for the real thing.

During a recent online discussion on the sale, Jon Nevett, chief executive of the Public Interest Registry, or PIR, the Internet Society unit that manages .org and is the entity being sold to Ethos, called it “the crown jewel of the domain name system, full stop.”

The sale, which is expected to close in the first quarter of next year, could be derailed only by two entities. One is the Internet Corp. for Assigned Names and Numbers, or ICANN, the web’s Playa Vista-based governing body, which could rule on the transfer any day now. The other is Pennsylvania Orphans Court, which has jurisdiction because PIR is a nonprofit incorporated in that state.

In the meantime, the deal has drawn brickbats from several internet luminaries.

They include Tim Berners-Lee, the inventor of the World Wide Web, who tweeted that “it would be a travesty” if the .org domain were no longer operated in the public interest. Also weighing in was Esther Dyson, the founding chairwoman of ICANN, who tweeted that she was “appalled” at what she called “the great .ORG heist.”

The parties involved in the sale have tried to tamp down the controversy, without notable success. On Nov. 29, Sullivan and Gonzalo Camarillo, the Internet Society chairman, held a conference call with users to defend the deal.

That was followed by a web discussion on Dec. 5 hosted by NTEN, an advocacy group for nonprofits, at which Sullivan was joined by Brooks and Nevett.

Brooks said he was committed to operating PIR in the dot-org community’s interest but was vague about the “mechanism” that would be established to do so. He said Ethos would not be making its financial data public, unlike the Internet Society, which issues an annual financial disclosure.

The dot-org community has two main concerns about the sale. One is that Ethos will jack up the registration fee for .org websites, which is currently about $10 per year and has been subject to a traditional limit on increases of 10% a year.

More important may be Ethos’ ability to facilitate more censorship of .org websites by allowing third parties more latitude to object to content on those sites and prompt their shutdown.

“The .org registry is a point of control on the internet,” says Mitch Stoltz, an attorney at the Electronic Frontier Foundation, which has launched a campaign protesting the deal. “A private equity firm has an incentive to sell censorship as a service.”

Already, registrars of other domains have cut agreements with corporate players, such as the Motion Picture Assn. of America, giving them the authority to order shutdowns of sites they claim are infringing on copyrights without affording site owners the opportunity to appeal.

Academic publishers such as Elsevier have won court rulings aimed at shutting down Sci-Hub, a web service that offers free access to copyrighted scientific research — but it’s up to registries to decide whether to comply with the court orders. And repressive governments such as Turkey and Saudi Arabia have worked through internet intermediaries to censor information on the web.

As the owner of the .org domain, Stoltz observes, Ethos could “enforce any limitations on nonprofits’ speech.” Since many nonprofit organizations “are engaged in speech that seeks to hold governments and industry to account, those powerful interests have every incentive to buy the cooperation of a well-placed intermediary, including an Ethos-owned PIR.”

Brooks said during the NTEN forum that Ethos would take steps to ensure that “.org is a domain that’s open and free and not curated or censored in any way, shape or form.” But he stopped short of agreeing to a legally binding undertaking.

Adding to misgivings about the sale is its chronology. Talks between Ethos and the Internet Society began only weeks after June 30, when ICANN removed price restrictions on the .org domain and made it easier for PIR to take down sites that were the subject of third-party complaints about content.

Brooks says the end of the price caps had nothing to do with the sale, which he would have pursued anyway. But the deal’s critics point out that nonprofits with .org addresses are a “captive audience” for the domain’s owner. Once an organization has begun operating as a dot-org, changing to a different domain would be horrifically costly. Followers would have to be notified of the internet name change, email addresses reconfigured, and so on.

That would give Ethos considerable latitude to raise prices, notwithstanding Brooks’ promise to limit increases to 10% a year.
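
For a sense of how quickly even a capped increase compounds, here is a quick back-of-the-envelope calculation starting from the roughly $10 annual fee mentioned above:

    # Back-of-the-envelope: a $10/year .org fee raised by the maximum 10% every year.
    fee = 10.0
    for year in range(1, 11):
        fee *= 1.10
        print(f"after {year:2d} years: ${fee:.2f}")
    # After 5 years the fee is about $16.11; after 10 years about $25.94.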

Sullivan and Camarillo said in their conference call that they had not been planning to put PIR up for sale, but Ethos’ bid was so large “we couldn’t just say no without considering” it.

Since the announcement, Ethos and the Internet Society have been stingy with details of the deal and its goals. Only on Nov. 20 — a week after the sale was announced — did Sullivan reveal, in an email to insiders, that the financial backers of Ethos included Perot Holdings, which is the investment arm of the late Ross Perot’s family; FMR LLC, which owns Fidelity Investments and is privately controlled by the Johnson family of Boston; and Solamere Capital, which was co-founded by Tagg Romney, son of Mitt Romney (who was himself a Solamere partner until he joined the U.S. Senate this year).

One open question is what Ethos expects to gain from its purchase. Domain registries such as PIR are responsible chiefly for maintaining a database of registrations and collecting annual fees. That makes the job “pretty much a license to print money,” Stoltz says.

Will Ethos and its private financial backers be satisfied with running a demure internet registry in the public interest, as opposed to squeezing their $1.135-billion investment for every penny?

Brooks told me by email that he expects PIR to invest in “growth initiatives” to “provide Ethos with a good return on its investment.” Yet there doesn’t seem to be much scope for turbocharging demand for the .org domain, which largely sells itself. That means the opportunity for generating more revenue could hinge on raising the annual fee, unless the firm has other new ideas.

As for the Internet Society, its interest seemed to be stabilizing its finances by replacing the revenue from .org fees — which reached $44.4 million last year, about 85% of its total revenue — with income from a professionally managed $1.135-billion endowment. “Responsibly invested and managed,” Sullivan told listeners on the Nov. 29 conference call, the society could replicate its annual take from .org fees “in perpetuity.”

Sullivan’s words point to what may really be roiling the dot-org community about the deal. That’s the transformation of what was one of the last vestiges of the web’s image as a public utility managed informally in the public interest, immune from commercial or government control, into just another asset to be monetized.

During the conference call and in other forums, Sullivan and Camarillo talked about the need to “diversify” the Internet Society’s revenue stream rather than relying for revenue on “one company in one industry,” which made them sound a bit like the CEO of a washing machine company pondering whether to branch out into refrigerators and cooktops.

Commerce has infiltrated virtually every corner of the web except, up to now, the nonprofit corner represented by dot-orgs. The implication of the .org sale is that no piece of the internet is, in fact, immune from the world of getting and spending — everything is for sale, the public interest be damned.

 

from: https://www.latimes.com/business/story/2019-12-12/dot-org-sale-outrage-internet-society-ethos-capital

 

 

What is a brain-computer interface? Everything you need to know about BCIs, neural interfaces and the future of mind-reading computers

Systems that allow humans to control or communicate with technology using only the electrical signals in their brains or muscles are fast becoming mainstream. Here’s what you need to know.

What is a brain-computer interface? It can’t be what it sounds like, surely?
Yep, brain-computer interfaces (BCIs) are precisely what they sound like — systems that connect up the human brain to external technology.

It all sounds a bit sci-fi. Brain-computer interfaces aren’t really something that people are using now, are they?
People are indeed using BCIs today — all around you. At their most simple, a brain-computer interface can be used as a neuroprosthesis — that is, a piece of hardware that can replace or augment nerves that aren’t working properly. The most commonly used neuroprostheses are cochlear implants, which help people with damage to parts of the ear’s internal anatomy to hear. Neuroprostheses to help replace damaged optic nerve function are less common, but a number of companies are developing them, and we’re likely to see widespread uptake of such devices in the coming years.

So why are brain-computer interfaces described as mind-reading technology?
That’s where this technology is heading. There are systems, currently being piloted, that can translate your brain activity — the electrical impulses — into signals that software can understand. That means your brain activity can be measured; real-life mind-reading. Or you can use your brain activity to control a remote device.

When we think, thoughts are transmitted within our brain and down into our body as a series of electrical impulses. Picking up such signals is nothing new: doctors already monitor the electrical activity in the brain using EEG (electroencephalography) and in the muscles using EMG (electromyography) as a way of detecting nerve problems. In medicine, EEG and EMG are used to find diseases and other nerve problems by looking for too much, too little or unexpected electrical activity in a patient’s nerves.

Now, however, researchers and companies are looking at whether those electrical impulses could be decoded to give an insight into a person’s thoughts.

Can BCIs read minds? Would they be able to tell what I’m thinking right now?
At present, no. BCIs can’t read your thoughts precisely enough to know what your thoughts are at any given moment. Currently, they’re more about picking up emotional states or which movements you intend to make. A BCI could pick up when someone is thinking ‘yes’ or ‘no’, but detecting more specific thoughts, like knowing you fancy a cheese sandwich right now or that your boss has been really annoying you, is beyond the scope of most brain-computer interfaces.

OK, so give me an example of how BCIs are used.
A lot of interest in BCIs is from medicine. BCIs could potentially offer a way for people with nerve damage to recover lost function. For example, in some spinal injuries, the electrical connection between the brain and the muscles in the limbs has been broken, leaving people unable to move their arms or legs. BCIs could potentially help in such injuries by either passing the electrical signals onto the muscles, bypassing the broken connection and allowing people to move again, or help patients use their thoughts to control robotics or prosthetic limbs that could make movements for them.

They could also help people with conditions such as locked-in syndrome, who can’t speak or move but don’t have any cognitive problems, to make their wants and needs known.

What about the military and BCIs?
Like many new technologies, BCIs have attracted interest from the military, and US military emerging technology agency DARPA is investing tens of millions of dollars in developing a brain-computer interface for use by soldiers.

More broadly, it’s easy to see the appeal of BCIs for the military: soldiers in the field could patch in teams back at HQ for extra intelligence, for example, and communicate with each other without making a sound. Equally, there are darker uses that the army could put BCIs to – like interrogation and espionage.

What about Facebook and BCIs?  
Facebook has been championing the use of BCIs and recently purchased a BCI company, CTRL-labs, for a reported $1bn. Facebook is looking at BCIs from two different perspectives. It’s working with researchers to translate thoughts to speech, and its CTRL-labs acquisition could help interpret what movements someone wants to make from their brain signals alone. The common thread between the two is developing the next hardware interface.

Facebook is already preparing for the way we interface with our devices to change. In the same way we’ve moved from keyboard to mouse to touchscreen and most recently to voice as a way of controlling technology around us, Facebook is betting that the next big interface will be our thoughts. Rather than type your next status update, you could think it; rather than touch a screen to toggle between windows, you could simply move your hands in the air.

I’m not sure I’m willing to have a chip put in my brain just to type a status update.
You may not need to: not all BCI systems require a direct interface to read your brain activity.

There are currently two approaches to BCIs: invasive and non-invasive. Invasive systems have hardware that’s in contact with the brain; non-invasive systems typically pick up the brain’s signals from the scalp, using head-worn sensors.

The two approaches have their own different benefits and disadvantages. With invasive BCI systems, because electrode arrays are touching the brain, they can gather much more fine-grained and accurate signals. However, as you can imagine, they involve brain surgery and the brain isn’t always too happy about having electrode arrays attached to it — the brain reacts with a process called glial scarring, which in turn can make it harder for the array to pick up signals. Due to the risks involved, invasive systems are usually reserved for medical applications.

Non-invasive systems, however, are more consumer friendly, as there’s no surgery required — such systems record electrical impulses coming from the skin either through sensor-equipped caps worn on the head or similar hardware worn on the wrist like bracelets. It’s likely to be that in-your-face (or on-your-head) nature of the hardware that holds back adoption: early adopters may be happy to sport large and obvious caps, but most consumers won’t be keen to wear an electrode-studded hat that reads their brain waves.

There are, however, efforts to build less intrusive non-invasive systems: DARPA, for example, is funding research into non-surgical BCIs and one day the necessary hardware could be small enough to be inhaled or injected.

Why are BCIs becoming a thing now?
Researchers have been interested in the potential of BCIs for decades, but the technology has come on at a far faster pace than many have predicted, thanks largely to better artificial intelligence and machine-learning software. As such systems have become more sophisticated, they’ve been able to better interpret the signals coming from the brain, separate the signals from the noise, and correlate the brain’s electrical impulses with actual thoughts.
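
To make “interpreting the signals” a little more concrete, here is a toy sketch of the kind of decoding pipeline used in non-invasive BCI research: band-pass filter the channels, reduce each trial to band-power features, and train an ordinary classifier. The data is synthetic and the feature choice deliberately simplistic; it illustrates the shape of the approach, not any particular product’s method.

    # Toy BCI decoding sketch on synthetic "EEG": band-pass filter each channel,
    # use log band power as features, and train a simple classifier.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    fs = 250                                   # sampling rate in Hz
    n_trials, n_channels, n_samples = 200, 8, fs * 2

    # Synthetic data: noise everywhere, plus extra 10 Hz power on a few channels for class 1.
    X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, size=n_trials)
    t = np.arange(n_samples) / fs
    X_raw[y == 1, :4, :] += 0.5 * np.sin(2 * np.pi * 10 * t)

    # Band-pass 8-30 Hz (mu/beta band), then log band power per channel as a feature.
    b, a = butter(4, [8, 30], btype="bandpass", fs=fs)
    X_filt = filtfilt(b, a, X_raw, axis=-1)
    features = np.log(np.mean(X_filt ** 2, axis=-1))

    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))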

Should I worry about people reading my thoughts without my permission? What about mind control?
On a practical level, most BCIs are only unidirectional — that is, they can read thoughts, but can’t put any ideas into users’ minds. That said, experimental work is already being undertaken around how people can communicate through BCIs: one recent project from the University of Washington allowed three people to collaborate on a Tetris-like game using BCIs.

The pace of technology development being what it is, bidirectional interfaces will be more common before too long. Especially if Elon Musk’s BCI outfit Neuralink has anything to do with it.

What is Neuralink? 
Elon Musk galvanised interest in BCIs when he launched Neuralink. As you’d expect from anything run by Musk, there’s an eye-watering level of both ambition and secrecy. The company’s website and Twitter feed revealed very little about what it was planning, although Musk occasionally shared hints, suggesting it was working on brain implants in the form of ‘neural lace’, a mesh of electrodes that would sit on the surface of the brain. The first serious information on Neuralink’s technology came with a presentation earlier this year, showing off a new array that can be implanted into the brain’s cortex by surgical robots.

Like a lot of BCIs, Neuralink’s was framed initially as a way to help people with neurological disorders, but Musk is looking further out, claiming that Neuralink could be used to allow humans a direct interface with artificial intelligence, so that humans are not eventually outpaced by AI. It might be that the only way to stop ourselves becoming outclassed by machines is to link up with them — if we can’t beat them, Musk’s thinking goes, we may have to join them.

 

from: https://www.zdnet.com/article/what-is-bci-everything-you-need-to-know-about-brain-computer-interfaces-and-the-future-of-mind-reading-computers/

see also: https://www.zdnet.com/article/musks-neuralink-uses-brain-threads-to-try-and-read-your-mind/

 

 

One Of The Largest Data Centers In The US – CyrusOne, Texas – Hit by Ransomware Attack

Texas-based data center provider CyrusOne has reportedly fallen victim to an attack from REvil (Sodinokibi) ransomware, business tech-focused publication ZDNet reported on Dec. 5.

One of the largest data centers in the United States, CyrusOne has reportedly been exposed to an attack by a variant of the REvil (Sodinokibi) ransomware, which previously hit a number of service providers, local governments and businesses in the country.

The scope of the attack

In an email to Cointelegraph, CyrusOne confirmed:

“Six of our managed service customers, located primarily in our New York data center, have experienced availability issues due to a ransomware program encrypting certain devices in their network.” 

The firm went on to give assurances that law enforcement was working on the matter and that its “data center colocation services, including IX and IP Network Services, are not involved in this incident.”

Just business

Per the ransom note obtained by ZDNet, the attackers targeted CyrusOne’s network, with the sole objective of receiving a ransom. Those behind the attack claimed in the note that they consider the attack nothing more than a business transaction, aimed exclusively at profiting.

In the event the company does not cooperate with the attackers, it will purportedly lose the affected data as the cybercriminals claim to have the private key.

To pay or not to pay?

This spring, Riviera Beach, Florida, was hit by a hacker attack, in which the hackers allegedly encrypted government records, blocking access to critical information and leaving the city without an ability to accept utility payments other than in person or by regular mail. The city council eventually agreed to pay nearly $600,000 worth of Bitcoin (BTC) to regain access to data encrypted in the attack.

In late October, hackers compromised the website of the city of Johannesburg, South Africa, and demanded ransom in Bitcoin. The breach affected several customer-facing systems — hardware or software customers interact with directly, such as user interfaces and help desks. The city authorities refused to pay the ransom.

Meanwhile, a number of Finnish cities and organizations are rehearsing how to respond when a group of hackers demands a ransom, as part of a series of simulated cyberattacks.

 

from: https://cointelegraph.com/news/texas-based-data-center-cyrusone-hit-by-ransomware-attack

 

 

Hilarious Phishing & Malware Attempts

Like everyone else (well, maybe more than everyone else), I regularly get these phishing messages (“we try to make you click on the attachment, which of course is riddled with mal/ransomware”).

It is hilarious to me when such a message is sent to an automatically harvested e-mail address that is 32 years old by now (and still works, obviously) and serves as a “honeytrap” address these days.

Usually I just click the “Junk” button, so the sender’s email address is fed into the global anti-spam and anti-phishing databases (the likes of Spamhaus, SORBS, SPEWS and such, which I helped survive massive DDoS attacks originating from Russian spammers between 2002 and 2005) and is thus “burned” … but in some cases, like this one, I am curious where they actually come from.

In this case, no effort is made to hide the origin in the SMTP headers.

Looking up that IP, three different geo-location services place it in St Petersburg, Russia (formerly known as ‘Leningrad’, now the second-largest city in the Russian Federation).
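
For readers who want to reproduce that kind of check on their own mail, here is a minimal sketch: it pulls the earliest Received header from a saved message and asks a public geo-IP service where the address is registered. The filename is a placeholder, and ip-api.com is just one of several free lookup services.

    # Sketch: extract the originating IP from a raw message's earliest Received header
    # and look it up in a public geo-IP service. The regex is deliberately simplistic.
    import re
    import json
    import urllib.request
    from email import message_from_string

    raw = open("suspicious_message.eml").read()          # the saved phishing e-mail
    msg = message_from_string(raw)

    received = msg.get_all("Received", [])
    earliest = received[-1] if received else ""          # last header = closest to the origin
    match = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", earliest)

    if match:
        ip = match.group(1)
        with urllib.request.urlopen(f"http://ip-api.com/json/{ip}") as resp:
            info = json.load(resp)
        print(ip, "->", info.get("country"), info.get("city"))
    else:
        print("No IPv4 address found in the earliest Received header.")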

That does not necessarily mean it is Russians behind it, but for such a lame phishing attempt it seems hardly worth running a proxy server in St Petersburg just to make it look like it comes from there.

So, to my friends over there behind the digital iron curtain: nice try! :wink:

Lesson for the esteemed reader: do not ever click on attachments you have the slightest doubt about; if the common-sense-check on a message fails, delete it.

If you are sure it is spam: “junk” it instead of deleting it – as outlined above, this burns the sender’s e-mail address in a very short time.

And if you actually think such a message could have any validity at all, go directly to your provider’s website (manually!)  and check on it there — let me repeat: do not ever click on any attachments.

Especially if you are of the faithful kind and run Microsoft Windows of any version …


Data on 1.2 Billion Users Found in Exposed AWS Elasticsearch Server

An exposed Elasticsearch server was found to contain data on more than 1.2 billion people, Data Viper security researchers report.

The server was accessible without authentication and it contained 4 billion user accounts, spanning more than 4 terabytes of data, security researchers Bob Diachenko and Vinny Troia discovered last month.
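
How is a finding like “accessible without authentication” typically confirmed? In essence, an unprotected Elasticsearch instance will answer index listings and document counts to anyone who can reach its REST port. The sketch below illustrates the idea, for example when auditing one’s own deployment; the host name is a placeholder and this is not a description of the researchers’ actual tooling.

    # Sketch: check whether an Elasticsearch instance answers without credentials.
    # HOST is a placeholder; point it at a deployment you are authorized to test.
    import json
    import urllib.request

    HOST = "http://example-cluster:9200"

    def get_json(path: str):
        with urllib.request.urlopen(HOST + path, timeout=10) as resp:
            return json.load(resp)

    try:
        indices = get_json("/_cat/indices?format=json")
    except Exception as exc:
        print("Not readable without credentials (good):", exc)
    else:
        print("Server answered without authentication; indices, counts and sizes:")
        for idx in indices:
            count = get_json(f"/{idx['index']}/_count")["count"]
            print(f"  {idx['index']}: {count} documents, {idx.get('store.size')}")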

Analysis of the data revealed that it pertained to over 1.2 billion unique individuals and that it included names, email addresses, phone numbers, and LinkedIn and Facebook profile information.

Further investigation led the researchers to the conclusion that the data came from two different data enrichment companies. Thus, the leak in fact represents data aggregated from various sources and kept up to date.

Most of the data was stored in 4 separate data indexes, labeled “PDL” and “OXY”, and the researchers discovered that the labels refer to two data aggregator and enrichment companies, namely People Data Labs and OxyData.

Analysis of the nearly 3 billion PDL user records found on the server revealed the presence of data on roughly 1.2 billion unique people, as well as 650 million unique email addresses.

Not only do these numbers fall in line with the statistics the company posted on their website, but the researchers were able to verify that the data on the server was nearly identical to the information returned by the People Data Labs API.

“The only difference being the data returned by the PDL also contained education histories. There was no education information in any of the data downloaded from the server. Everything else was exactly the same, including accounts with multiple email addresses and multiple phone numbers,” the researchers explain.

Vinny Troia also found in the leak information related to a landline phone number he was given roughly 10 years back as part of an AT&T TV bundle. Although the landline was never used, the information was present on the researcher’s profile, and was included in the data set PeopleDataLabs.com had on him.

The company told the researchers that the exposed server, which resided on Google Cloud, did not belong to it. The data, however, was clearly coming from People Data Labs.

Some of the information on the exposed Elasticsearch server, the researchers revealed, came from OxyData, although this company too denied being the owner of the server. After receiving a copy of the user record the company held on him, Troia confirmed that the leaked information came from there.

The researchers couldn’t establish who was responsible for leaving the server wide open to the Internet, but suggest that this is a customer of both People Data Labs and OxyData and that the data might have been misused rather than stolen.

“Due to the sheer amount of personal information included, combined with the complexities of identifying the data owner, this has the potential to raise questions on the effectiveness of our current privacy and breach notification laws,” the researchers conclude.

“From the perspective of the people whose information was part of this dump, this doesn’t qualify as a cut-and-dry data breach. The information ‘exposed,’ is already available on LinkedIn, Facebook, GitHub, etc. begging a larger discussion about how we feel about data aggregators who compile this information and sell it, because it’s a standard practice,” Dave Farrow, senior director of information security at Barracuda Networks, told SecurityWeek in an emailed comment.

Jason Kent, hacker at Cequence Security, also commented via email, saying, “Here we see a new and potentially dangerous correlation of data like never before. […] if an attacker has a rich set of data, they can formulate very targeted attacks. The sorts of attacks that can result in knowing password recovery information, financial data, communication patterns, social structures, this is how people in power can be targeted and eventually the attack can work.”

 

from: https://www.securityweek.com/data-12-billion-users-found-exposed-elasticsearch-server

 

 

Can hundreds of unrelated satellites create a GPS backup?

The Space Development Agency’s head says that position and timing data from low-Earth orbit satellites can be used to verify or replace GPS in denied or degraded environments. (DARPA)

The head of the Space Development Agency wants to use proliferated low-Earth orbit satellites for navigation when GPS is unavailable.

As adversaries develop tools that can jam or spoof Global Positioning System signals, the military has prioritized the development of alternative sources of positioning, navigation and timing data for the war fighter. Solutions range from using real-time drone imagery to chip-scale atomic clocks, but at the Association of the United States Army conference Oct. 16, Acting Director Derek Tournear threw out another idea: using the positioning and timing data of the hundreds of satellites his agency plans to put in orbit for navigation.

The SDA was established earlier this year to rapidly develop a number of capabilities in low-Earth orbit, and the agency’s current plan calls for hundreds of satellites operating in LEO serving a variety of missions, from hypersonic missile detection and tracking to finding and identifying objects in cislunar space. An important component of that architecture is a data transport layer providing a crosslink between satellites in orbit and then bringing that data down to the ground. According to Tournear, that transport layer could be used to transfer positioning and timing data to ground users from satellites without having another dedicated PNT satellite system in orbit.

“If you have this crosslink between satellites, you can do timing transfer. So, you have very good timing information at the satellite level. If you have open communication down to any system and you can see multiple satellites, that gives you another means to use your existing comms system to get navigation independent of any other user equipment,” explained Tournear.

Using the precise timing and positional information of those satellites in LEO, users could triangulate their position in GPS-denied or -degraded environments. It’s essentially the same way smartphones can use cell towers for navigation if they can’t get a GPS signal.

“If you turn off your GPS receiver on your phone, you will still get a navigation signal on your phone based on cellphone towers, because the cellphone towers know their position and they know exact timing, so they can triangulate your position,” said Tournear. “That is not a replacement for how GPS is used for worldwide PNT coverage, but it is another way to get assured PNT and another way to validate a GPS signal.”
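
The triangulation idea described above can be sketched numerically: with a handful of transmitters whose positions and clocks are known, measured signal travel times pin down a receiver’s position by least squares. The numbers below are invented and the model is deliberately simplified; a real PNT solver would also have to estimate the receiver’s clock bias.

    # Toy multilateration sketch: solve for a 2-D receiver position from travel times
    # to transmitters at known positions. All values are made up for illustration.
    import numpy as np
    from scipy.optimize import least_squares

    C = 299_792_458.0                                    # speed of light, m/s
    beacons = np.array([[0.0, 0.0],
                        [40_000.0, 5_000.0],
                        [10_000.0, 35_000.0]])           # transmitter positions in metres
    true_pos = np.array([15_000.0, 12_000.0])

    travel_times = np.linalg.norm(beacons - true_pos, axis=1) / C   # "measured" delays

    def residuals(pos):
        predicted = np.linalg.norm(beacons - pos, axis=1) / C
        return predicted - travel_times

    estimate = least_squares(residuals, x0=np.array([0.0, 0.0])).x
    print("estimated position (m):", estimate)           # converges to about (15000, 12000)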

 

from: https://www.c4isrnet.com/battlefield-tech/c2-comms/2019/11/29/can-hundreds-of-unrelated-satellites-create-a-gps-backup/

 

 

Persistent broadband connection: Intellian’s 1.5 meter antenna can switch between LEO and GEO

The US Navy recently live tested a new antenna that can switch between satellites in low earth orbit and geostationary orbit, fulfilling a key need for the military moving forward.

Using Intellian’s 1.5 meter antenna, the Navy was able to maintain a broadband connection while switching between Telesat’s satellites in low earth orbit and geostationary orbit. The demonstration shows how in a scenario where a satellite in geostationary orbit is attacked or denied, the antenna is able to switch to a LEO satellite to maintain a persistent broadband connection.

“Live testing over Telesat Ka-band satellites with Intellian’s 1.5m Ka convertible VSAT confirms that the antenna is an important innovation accessing space-based ‘layers’ of satellites in next-gen space architecture,” said Kurt Fiscko, technical director of PMW/A 170 at PEO C4I in a statement.

“One of the key elements that the government is looking for, particularly the military, is a path to more resilient, more flexible networking in space,” said Telesat’s Don Brown in an interview. “What Telesat is doing in this demonstration with Intellian is addressing one of the key proof points of future resiliency and flexibility … the ability to go between GEO satellite constellation and LEO constellations.”

According to Telesat’s Rich Pang, the antenna is perfectly sized for use on the Navy’s small ship variants.

Telesat is also a contractor working on DARPA’s Project Blackjack, an effort to demonstrate the military utility of a constellation of small LEO satellites. The Space Development Agency is building off of that effort to build the U.S. military’s next generation space architecture in LEO. Comprised of hundreds of small satellites in LEO, that architecture is meant to create resiliency through numbers and provide a backup to many capabilities that are currently provided through a few exquisite satellites in GEO.

“The real impetus for this demonstration is that the government has come out and said, ‘we don’t want to be locked into not only one particular provider, but we want to be able to operate in multiple regimes so we can be disaggregated and resilient,’” said Pang. “So if someone attacks the GEO belt and takes out those assets I can switch to LEO, or vice versa.”

 

from: https://www.c4isrnet.com/special-reports/space-missile-defense/2019/11/29/this-antenna-can-switch-between-leo-and-geo/

 

 

Cyborg warriors could be here by 2050, DoD study group says

A mockup of U.S. SOCOM’s TALOS suit — a bold project,
but one that ultimately brought less tech than initially hoped. (DoD)

Ear, eye, brain and muscular enhancement is “technically feasible by 2050 or earlier,” according to a study released this month by the U.S. Army’s Combat Capabilities Development Command.

The demand for cyborg-style capabilities will be driven in part by the civilian healthcare market, which will acclimate people to an industry fraught with ethical, legal and social challenges, according to Defense Department researchers.

Implementing the technology across the military, however, will likely run up against the dystopian narratives found in science fiction, among other issues, the researchers added.

The report — entitled Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DOD — is the result of a year-long assessment.

It was written by a study group from the DoD Biotechnologies for Health and Human Performance Council, which is tasked to look at the ripple effects of military biotechnology.

The team identified four capabilities as technically feasible by 2050:

  • ocular enhancements to imaging, sight and situational awareness;
  • restoration and programmed muscular control through an optogenetic bodysuit sensor web;
  • auditory enhancement for communication and protection; and
  • direct neural enhancement of the human brain for two-way data transfer.

The study group suggested that direct neural enhancements in particular could revolutionize combat.

“This technology is predicted to facilitate read/write capability between humans and machines and between humans through brain-to-brain interactions,” an executive summary reads. “These interactions would allow warfighters direct communication with unmanned and autonomous systems, as well as with other humans, to optimize command and control systems and operations.”

Cyborg technologies are likely to be used among civil society as well over the next 30 years, the researchers noted.

Development of these capabilities will probably “be driven by civilian demand” and “a robust bio-economy that is at its earliest stages of development in today’s global market,” the group wrote.

But it’s after the year 2050 that the implications of cyborg capabilities become concerning.

“Introduction of augmented human beings into the general population, DoD active-duty personnel, and near-peer competitors will accelerate in the years following 2050 and will lead to imbalances, inequalities, and inequities in established legal, security, and ethical frameworks,” the summary reads.

The study group proposed seven recommendations, listed in no particular order, for Pentagon leaders to consider:

  • The military should take a second look at the global and societal perception of human-machine augmentation. Americans typically imagine China or Russia developing runaway technologies because of a lack of ethical concerns, but “the attitudes of our adversaries toward these technologies have never been verified,” researchers wrote.
  • U.S. political leaders should use forums like NATO to discuss how cyborg advancements could impact interoperability between allied forces during operations.
  • The Pentagon should start investing in legal, security and ethical frameworks to anticipate emerging technologies and better prepare for their impact. Leaders should support policies that “protect individual privacy, sustain security, and manage personal and organizational risk, while maximizing defined benefits to the United States and its allies and assets,” the study group wrote.
  • Military leaders should also work to reverse the “negative cultural narratives of enhancement technologies.” It’s no secret that science fiction’s depiction of cyborg technologies revolves around dystopian futures. Transparency in how the military adopts this technology will help to alleviate concerns, while capitalizing on benefits, according to the study group.
  • The Pentagon should use wargames to gauge the impact of asymmetric biotechnologies on tactics, techniques and procedures. DoD personnel can support this through targeted intelligence assessments of the emerging field.
  • A whole-of-nation, not whole-of-government, approach to cyborg technologies is preferred. As it stands, “federal and commercial investments in these areas are uncoordinated and are being outpaced by Chinese research and development,” the study group wrote. If Chinese firms dominate the commercial sector, the U.S. defense sector will also be at a disadvantage.
  • Finally, the long-term safety concerns and the impact of these technologies on people should be monitored closely.

“The benefits afforded by human/machine fusions will be significant and will have positive quality-of-life impacts on humankind through the restoration of any functionality lost due to illness or injury,” the study group wrote.

But as these technologies evolve, “it is vital that the scientific and engineering communities move cautiously to maximize their potential and focus on the safety of our society,” the study group added.

 

from: https://www.armytimes.com/news/your-army/2019/11/27/cyborg-warriors-could-be-here-by-2050-dod-study-group-says/

 

 

Insecure Microsoft Azure Database Exposes Millions of Private SMS Messages

Insecure Microsoft Azure Database Exposes Millions of Private SMS Messages

Researchers discovered an unprotected TrueDialog database hosted by Microsoft Azure with diverse and business-related data from tens of millions of users.

Tens of millions of SMS messages have been found on an unprotected database, putting the private data of hundreds of millions of people in the United States at risk for theft or exposure and leaving a communications company open for potential intrusion, security researchers discovered.

Noam Rotem and Ran Locar from the research team of vpnMentor found the database, which they said belongs to TrueDialog, a U.S.-based communications company, according to a blog post. Based in Austin, Texas, TrueDialog provides bulk SMS services for small businesses, colleges and universities, which means that the majority of the messages were business-related, researchers said.

Moreover, the insecure database was linked to “many aspects” of TrueDialog’s business, potentially increasing unauthorized access to the data of millions of people as well as exposing an unusually diverse data set, they said.

“Hundreds of millions of people were potentially exposed in a number of ways,” according to the post. “It’s rare for one database to contain such a huge volume of information that’s also incredibly varied.”

Despite companies knowing the risks of leaving data unprotected online in this era of cloud-based storage, insecure databases are a persistent problem and remain one of the leading ways data breaches occur. These breaches not only leave customers and users of the companies who exposed the data at risk, but also leave the owners of the databases more susceptible to security threats as well.
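
The article does not say what kind of database TrueDialog left exposed, but the underlying problem is easy to illustrate. The following minimal Python sketch (placeholder host, purely hypothetical) shows how an unauthenticated, internet-facing Elasticsearch-style index can be read with a single HTTP request and no credentials:

    # Hypothetical illustration of an unauthenticated database exposure.
    # The host below is a placeholder, NOT TrueDialog's actual server.
    import json
    import urllib.request

    EXPOSED_HOST = "http://db.example.com:9200"  # placeholder address

    def dump_sample(host, size=5):
        # Elasticsearch's standard _search endpoint returns stored documents
        # as JSON; with no authentication in front of it, anyone who finds
        # the open port can read them.
        url = f"{host}/_search?size={size}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        for hit in dump_sample(EXPOSED_HOST).get("hits", {}).get("hits", []):
            print(hit.get("_source"))

If a database answers queries without credentials, its contents are effectively public to anyone who finds it.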

Researchers discovered the exposed TrueDialog database on Nov. 26 and contacted TrueDialog two days later, on the 28th. At last look, the database, hosted by Microsoft Azure and on the Oracle Marketing Cloud, held 604 gigabytes of data, including nearly a billion entries of “sensitive data,” according to researchers.

Types of data found unprotected included:

  • full names of message recipients, TrueDialog account holders and TrueDialog users;
  • message content;
  • email addresses;
  • phone numbers of both recipients and account users;
  • dates and times that messages were sent; and
  • message status indicators.

The account details of TrueDialog account holders also were exposed in the messages, researchers said.

The scope of the exposed data has broad implications for TrueDialog, its users and the recipients of the messages, researchers said.

For users and message-recipients whose data was exposed, their personal details could be sold to marketers and spammers and used for purposes that range from annoying to criminal.

TrueDialog may bear the brunt of the impact, however, researchers said. Not only does the unprotected data harm the company’s reputation, it also gives competitors an edge by providing insight into TrueDialog’s business model and practices, according to the post.

Bad actors also have an opportunity to find and exploit vulnerabilities within TrueDialog’s system by accessing the logs of internal system errors included in the exposed data, researchers added.

 

from: https://threatpost.com/insecure-database-exposes-millions-of-private-sms-messages/

 

 

France to Test Its Central Bank Digital Euro Currency in Q1/2020

France to Test Its Central Bank Digital Euro Currency in Q1/2020

The central bank of France plans to pilot a central bank digital currency (CBDC) for financial institutions in 2020. François Villeroy de Galhau, the governor of the Bank of France, announced that the bank will start testing the digital euro project by the end of the first quarter of 2020, French financial publication Les Echos reported Dec. 4.

The Bank of France confirmed the news on Twitter, noting that the announcement was made at a conference co-hosted by two major French financial regulators, the French Prudential Supervision and Resolution Authority and the Autorité des marchés financiers.

https://twitter.com/banquedefrance/status/1202217934560608256?s=20

Digital euro pilot won’t involve retail customers

According to the report, the digital euro pilot will only target private financial sector players and won’t involve retail payments made by individuals. Villeroy reportedly noted that a digital currency for retail customers would “be subject to special vigilance.”

As reported by Les Echos, the initiative intends to strengthen the efficiency of the French financial system, while ensuring trust in the currency.

Preventing Libra’s impact

Moreover, the project aims to assert France’s sovereignty over private digital currency initiatives like Facebook’s stablecoin Libra, Villeroy reportedly said.

Villeroy’s stance falls in line with previous statements by French finance minister Bruno Le Maire, who argued that regulators cannot allow the launch of Libra on European soil due to monetary sovereignty concerns.

According to some reports, France led the anti-Libra effort alongside Germany, Italy, Spain and the Netherlands.

Villeroy calls on France to become the first country in the world to issue a CBDC

According to a tweet by the Bank of France, its governor emphasized that France should become the first country in the world to issue a CBDC and provide an exemplary model to other jurisdictions. He stated:

“I see the interest in rapidly advancing the issuance of at least one central bank digital currency in order to be the leading issuer globally and get the benefits associated with providing an exemplary central bank digital currency.”

France has emerged as a major adopter of blockchain tech and Bitcoin

Meanwhile, France has appeared to be at the forefront of adopting crypto and blockchain technology as its government has initiated and encouraged a number of industry-related projects.

In late November 2019, the first deputy governor of the Bank of France called for a blockchain-based settlement and payments system in Europe. As reported by Cointelegraph on Nov. 20, the French Armies and Gendarmerie’s Information and Public Relations Center was at the time validating judicial expenses incurred during investigations on the Tezos (XTZ) blockchain.

Alongside developments in blockchain, France has also emerged as a major adopter of the biggest cryptocurrency, Bitcoin (BTC). In mid-October, French crypto startup Keplerk relaunched its service to accept Bitcoin payments in over 5,200 tobacco shops in France. Previously, Cointelegraph reported that at least 30 French retailers plan to launch Bitcoin payments support at over 25,000 sales points by early 2020.

 

from: https://cointelegraph.com/news/france-to-test-its-central-bank-digital-currency-in-q1-2020-official

 

 

$100M Funding: can industry help US Air Force Research Lab develop new Cyber and SIGINT tech?

$100M Funding: can industry help US Air Force Research Lab develop new Cyber and SIGINT tech?

A notice is asking for industry’s help in developing new and innovative cyber and signals intelligence technologies. (Greg Davis/U.S. Air Force/Getty Images)

The Air Force is asking for industry’s help developing advanced cyber and signals intelligence technologies. Specifically, the Air Force Research Lab wants technologies that can improve extraction, identification, analysis and reporting of tactical information to support intelligence, surveillance and reconnaissance; protect forces with digital systems; and support battlespace awareness.

In a notice posted online Dec. 4, the service is asking for white papers on an ongoing basis from now until 2021; from these, it will invite certain companies to submit formal proposals.

The total funding for the effort is $99.9 million, which will be spread across fiscal years 2019 to 2021.

The notice states that technology needs range from quick-reaction fixes for critical near-term shortfalls to proof-of-concept work.

The request is broken into two parts. The first portion covers ISR information for signals intelligence in order to discover new and innovative methods and processing techniques to provide decision-makers with near real-time ISR.

The second portion includes research to develop methods for the detection, identification, characterization and geolocation of emerging communications; advanced digital signal processing software for new and existing systems and waveforms; and new and innovative software and hardware architectures for standoff collection systems and software-defined radios on either airborne or ground-based platforms operating in dense signal environments, among other areas.

from: https://www.c4isrnet.com/battlefield-tech/2019/12/04/the-air-force-wants-help-with-these-technologies/

***

https://beta.sam.gov/opp/ed88173366ce46b7a25a3f39bf8ec42a/view?keywords=%22cyber%22&sort=-modifiedDate&index=opp&is_active=true&page=1

 

 

 

 

 

 

DHS wants to expand airport face recognition scans to include US citizens

DHS wants to expand airport face recognition scans to include US citizens

Homeland Security wants to expand facial recognition checks for travelers arriving to and departing from the U.S. to also include citizens, who had previously been exempt from the mandatory checks.

In a filing, the department has proposed that all travelers, and not just foreign nationals or visitors, will have to complete a facial recognition check not only before they are allowed to enter the U.S., but also before they leave the country.

Facial recognition for departing flights has increased in recent years as part of Homeland Security’s efforts to catch visitors and travelers who overstay their visas. The department, whose responsibility is to protect the border and control immigration, has a deadline of 2021 to roll out facial recognition scanners to the largest 20 airports in the United States, despite facing a rash of technical challenges.

But although there may not always be a clear way to opt out of facial recognition at the airport, U.S. citizens and lawful permanent residents — also known as green card holders — have been exempt from these checks, the existing rules say.

Now, the proposed rule change to include citizens has drawn ire from one of the largest civil liberties groups in the country.

“Time and again, the government told the public and members of Congress that U.S. citizens would not be required to submit to this intrusive surveillance technology as a condition of traveling,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union.

“This new notice suggests that the government is reneging on what was already an insufficient promise,” he said.

“Travelers, including U.S. citizens, should not have to submit to invasive biometric scans simply as a condition of exercising their constitutional right to travel. The government’s insistence on hurtling forward with a large-scale deployment of this powerful surveillance technology raises profound privacy concerns,” he said.

Citing a data breach of close to 100,000 license plate and traveler images in June, as well as concerns about a lack of sufficient safeguards to protect the data, Stanley said the government “cannot be trusted” with this technology and that lawmakers should intervene.

A spokesperson for Customs & Border Protection said the agency was “currently in the rulemaking process and will ensure that the public has the opportunity to comment prior to the implementation of any regulation,” and that it was “committed to its privacy obligations.”

 

from: https://techcrunch.com/2019/12/02/homeland-security-face-recognition-airport-citizens/

 

 

Ongoing Research Project Examines Application of AI to Cybersecurity

Ongoing Research Project Examines Application of AI to Cybersecurity

Project Blackfin: Multi-Year Research Project Aims to Unlock the Potential of Machine Intelligence in Cybersecurity

Project Blackfin is ongoing artificial intelligence (AI) research challenging the current automatic assumption that deep-learning neural network principles are the best way to teach a system to detect anomalous behavior or malicious activity on a network. Run by security firm F-Secure, the project is examining the alternative applicability of distributed swarm intelligence in decision making.

“People’s expectations that ‘advanced’ machine intelligence simply mimics human intelligence is limiting our understanding of what AI can and should do,” explains Matti Aksela, F-Secure’s VP of artificial intelligence. “Instead of building AI to function as though it were human, we can and should be exploring ways to unlock the unique potential of machine intelligence, and how that can augment what people do.” 

Project Blackfin is being run by F-Secure with collaboration between in-house engineers, researchers, data scientists and academic partners. “We created Project Blackfin,” continued Aksela, “to help us reach that next level of understanding about what AI can achieve.” Although it is a long-term project, some early principles are already being incorporated into F-Secure’s own products.

The primary problem with many current anomaly detection AI systems is well known: too many false positives or too many false negatives. This is difficult to solve simply because of how the systems work. Streams of data from endpoints and network traffic are centralized, analyzed on arrival, and then stored for later audit or forensic analysis. Because the data arrives from many different sources, correlating related events is difficult. And since attackers often build delays into their attacks, new events may also need to be related to historical events before possibly malicious activity can be put in context.

The result is that finding the right sensitivity setting for behavior detection is critical. Set the sensitivity high to ensure nothing is missed, and the security team must manually triage huge numbers of false positives. Set it low to cut down the false positives, and the potential for false negatives, genuine attacks that go undetected, increases.
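
To make that tradeoff concrete, here is a minimal sketch in Python (made-up anomaly scores and thresholds, not F-Secure’s implementation) of how a single global threshold trades false positives against false negatives in a centralized detector:

    # Toy illustration: one global sensitivity threshold applied to all events.
    benign_scores    = [0.10, 0.25, 0.40, 0.55, 0.60]  # scores of harmless events
    malicious_scores = [0.45, 0.70, 0.85, 0.90]        # scores of real attacks

    def alert_counts(threshold):
        false_positives = sum(s >= threshold for s in benign_scores)
        false_negatives = sum(s < threshold for s in malicious_scores)
        return false_positives, false_negatives

    for threshold in (0.3, 0.5, 0.8):
        fp, fn = alert_counts(threshold)
        print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")

    # A low threshold floods analysts with false positives; a high threshold
    # silently misses real attacks. No single global setting does both well.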

Blackfin is exploring distributing the AI as collaborative agents running on each endpoint and server of a network. Each agent becomes an expert in the acceptable use of its own host. The model is inspired by patterns of collective behavior found in nature, such as the swarm intelligence of ant colonies or schools of fish. “The project aims to develop these intelligent agents to run on individual hosts,” says F-Secure.

“Instead of receiving instructions from a single, centralized AI model, these agents would be intelligent and powerful enough to communicate and work together to achieve common goals.”

Consider the machine-learning predictive text capabilities of individual phones. They learn the typing habits of their owners very quickly and can rapidly offer probable word completions based on those habits. This is the type of distributed intelligence being explored by Blackfin, with the intelligence located on the device, but with the added ability for each agent to collaborate with the agents on adjacent devices. What may be merely suspicious activity in the context of one endpoint can be confirmed as malicious or benign in the context of its effect on adjacent endpoints, each of which has its own endpoint-specific intelligence.

This improves the correlation and contextualization of suspicious activity, since an event is immediately seen, in situ, in the context of both the source and destination hosts. In the phone example, it would be as if the text input intelligence on one phone could collaborate with the intelligence on the receiving phone and say, ‘Stop. You should not use that language with your grandmother.’

“Essentially,” said Aksela, “you’ll have a colony of fast local AIs adapting to their own environment while working together, instead of one big AI making decisions for everyone.”
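
As a rough illustration of that colony idea, the following sketch (hypothetical classes and scores, not Blackfin’s actual design) shows two host-local agents, each knowing only what is normal for its own machine, corroborating an event before deciding whether it deserves an alert:

    # Hypothetical sketch of host-local agents corroborating an observation.
    class HostAgent:
        def __init__(self, name, baseline_processes):
            self.name = name
            self.baseline = set(baseline_processes)  # processes normal on this host

        def local_score(self, process):
            # 0.0 = routine for this host, 1.0 = never seen here before
            return 0.0 if process in self.baseline else 1.0

        def corroborate(self, process, peer):
            # Blend the local view with the peer's view of the same event.
            return (self.local_score(process) + peer.local_score(process)) / 2

    workstation = HostAgent("workstation", {"outlook.exe", "chrome.exe"})
    file_server = HostAgent("file-server", {"smbd", "backup-agent"})

    # A remote-admin tool appears on the workstation and reaches the file
    # server: anomalous on both hosts, so the combined score is high.
    print(workstation.corroborate("psexec.exe", file_server))  # 1.0

    # A browser process touching the file server is odd for the server but
    # routine for the workstation, so the combined score stays lower.
    print(workstation.corroborate("chrome.exe", file_server))  # 0.5

Instead of one central model applying one threshold to everything, each agent contributes what it knows about its own host, and the decision is made from the combined context.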

F-Secure has published the first of what it expects to be regular papers on the progress of Blackfin (PDF). For now, it is exploring different anomaly detection models to detect specific phenomena. “By combining the outputs of multiple different models associated with each of the [different categories],” says the paper, “a contextual understanding of what is happening on a system can be derived, enabling downstream logic to more accurately predict whether a specific event or item is anomalous, and if it is, if it is worth alerting on. This approach enables generic methodologies for detecting attacker actions (or sequences of actions), without baking specific logic into the detection system itself.”

Research is ongoing and will continue for several years. Nevertheless, says F-Secure, through Blackfin, it has “identified a rich set of interactions between models running on endpoints, servers, and the network that have the potential to vastly improve breach detection mechanisms, forensic analysis capabilities, and response capabilities in future cyber security solutions… we expect to regularly report new results and findings as they present themselves.”

from: https://www.securityweek.com/ongoing-research-project-examines-application-ai-cybersecurity

***

https://www.f-secure.com/content/dam/f-secure/en/business/common/collaterals/f-secure-whitepaper-blackfin.pdf

https://www.bgp4.com/wp-content/uploads/2019/12/f-secure-whitepaper-blackfin.pdf

 

Can open source intelligence combat Russian disinformation in the Baltics?

Can open source intelligence combat Russian disinformation in the Baltics?

NATO will need to utilize social media and other publicly available information to combat Russian disinformation, says a new report from the Atlantic Council.

Utilizing open source intelligence will be essential to combating Russian disinformation in the Baltics, according to a new report published Nov. 14 by the Atlantic Council.

The report focuses on how NATO joint intelligence, surveillance and reconnaissance operations can help Estonia, Latvia and Lithuania — three ex-Soviet states that face the most direct threat from Russia of any NATO member. While there are limits to what military assistance the alliance can provide to the region without prompting a Russian response, the report notes that using the alliance’s networked system of sensors, collectors and analysts to provide situational awareness and early warning remains a low risk way to help out the embattled states.

“One of the things that our alliance can do with far less controversy than any of its other activities is gain intelligence — understand the situation as it exists at any one moment,” said retired Air Marshal Sir Christopher Harper, who co-chaired the task force that authored the report.

The alliance collects that intelligence through a number of means, from drones that can detect troop movements to radars and more. According to the report, NATO possesses impressive collection capabilities in all domains and is able to utilize the United States’ space-based systems, which are unmatched.

But the situation in the Baltics is complex for intelligence gathering. These three states are under a constant barrage of propaganda, subversion and disinformation originating from Russia, according to the report. In order to develop a common operating picture, NATO needs to know more than just troop locations and military capabilities — it needs to know how Russia is using disinformation tactics and what effect it has on the population.

“Understanding what is happening on the ground is no longer a case of tracking the movement of military units, of tracking military intent. This is all about understanding how information is being used to influence populations, influence behavior. And with states that are so close, that have that common border with the Russian Federation, that sort of intelligence and information gathering is all the more important,” said Harper.

The report goes even further, noting that open source intelligence can be used to see through Russian deception in the region.

“Classical indicators like troop movements or railway and airfield activity may be reduced or absent altogether. Instead, JISR may take the form of intensive monitoring and analysis of propaganda and social media, open sources, atypical commercial ship and airline movements, stepped-up diplomatic activity, unusual financial transactions, increased volumes of cyber intrusion and denial-of-service attacks, and unrest in ethnic Russian areas and populations,” reads the report.

The next stage of NATO JISR operations in the Baltics will need to be able to utilize open source intelligence — publicly available information from sources like social media that can provide intelligence on the population.

One step towards such a solution could be the establishment of a Baltic region JISR operations center that could train local people to rapidly process and fuse military intelligence and open source intelligence and disseminate it to the alliance, said Harper. An alliance-wide ISR academy would also be of benefit, he added.

And in the future, artificial intelligence and machine learning could provide the solution to focusing and fusing data from traditional sensors and collection methods and open source intelligence at scale before quickly disseminating it to the alliance. The report calls for NATO to prioritize bringing those sorts of technologies online in the near term.
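
As a toy example of what such fusion could look like (hypothetical indicators and weights, not a method described in the report), even a simple weighted combination can raise a warning when classical indicators such as troop movements are quiet but open source signals spike:

    # Hypothetical indicator fusion: each value is a normalized 0-1 signal.
    indicators = {
        "troop_movements_detected":        0.0,  # classical sensor indicator (quiet)
        "propaganda_volume_spike":         0.8,  # social media monitoring
        "atypical_ship_and_air_movements": 0.4,  # commercial tracking data
        "cyber_intrusion_and_ddos_volume": 0.7,  # network telemetry
    }
    weights = {
        "troop_movements_detected":        0.4,
        "propaganda_volume_spike":         0.2,
        "atypical_ship_and_air_movements": 0.2,
        "cyber_intrusion_and_ddos_volume": 0.2,
    }

    warning_score = sum(indicators[k] * weights[k] for k in indicators)
    print(f"composite warning score: {warning_score:.2f}")  # 0.38 despite quiet sensors

Machine learning would do this kind of weighting and correlation at far greater scale, but the principle is the same: open source signals carry real warning value even when classical indicators are absent.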

“The ability to understand the world around us in a very very complex information environment is going to be key to the future,” said Harper.

 

from: https://www.c4isrnet.com/intel-geoint/2019/11/16/can-open-source-intelligence-combat-russian-disinformation-in-the-baltics/

 

 

China’s Achilles’ heel when it comes to cyberspace

China’s Achilles’ heel when it comes to cyberspace

Despite being considered extremely vulnerable in cyberspace, the United States does possess some asymmetric advantages in the domain compared to authoritarian regimes. (Andy Wong/AP)

If “mutually assured cyber destruction” were to occur, one Marine Corps leader said, authoritarian nations such as China might have more to lose than the United States.

Top national security experts have warned that despite the United States’ cyber prowess, the country is vulnerable to cyberattacks because of how interconnected society is with essential services and the internet. But in the case of a cyber catastrophe, “we’ll still be America. We’ll be a little beaten up, a little dirty, but China won’t be China anymore because they will not maintain control,” said Lt. Gen. Eric Smith, head of the Marine Corps Combat Development Command and the deputy commandant for combat development and integration. Smith spoke at an AFCEA Northern Virginia chapter lunch Nov. 15.

Smith said if much of the country goes offline, places like Plano, Texas, will essentially be the same. While certain elements of daily life could get ugly, residents could still rely on local-, county-, state- and national-level law enforcement entities.

China, however, as an authoritarian state, must maintain central control, Smith said. This, in turn, becomes an Achilles’ heel.

“If I take all the cameras offline and all the mechanisms of control cease, Shanghai is not Shanghai anymore six months after that event,” he said. “Everything within China, which has one time zone, by the way … should have nine, but they have one … because they have to maintain central control.”

Smith added that the weakness within authoritarian regimes should be exploited more through offensive cyber means.

 

from: https://www.fifthdomain.com/international/2019/11/18/chinas-achilles-heel-when-it-comes-to-cyberspace/

 

 

 
