Researchers have managed to quantum teleport information between two computer chips for the first time
Scientists at the University of Bristol and the Technical University of Denmark have achieved quantum teleportation between two computer chips for the first time. The team managed to send information from one chip to another instantly without them being physically or electronically connected, in a feat that opens the door for quantum computers and quantum internet.
This kind of teleportation is made possible by a phenomenon called quantum entanglement, where two particles become so entwined with each other that they can “communicate” over long distances. Changing the properties of one particle will cause the other to instantly change too, no matter how much space separates the two of them. In essence, information is being teleported between them.
Hypothetically, there’s no limit to the distance over which quantum teleportation can operate – and that raises some strange implications that puzzled even Einstein himself. Our current understanding of physics says that nothing can travel faster than the speed of light, and yet, with quantum teleportation, information appears to break that speed limit. Einstein dubbed it “spooky action at a distance.”
Harnessing this phenomenon could clearly be beneficial, and the new study helps bring that closer to reality. The team generated pairs of entangled photons on the chips, and then made a quantum measurement of one. This observation changes the state of the photon, and those changes are then instantly applied to the partner photon in the other chip.
“We were able to demonstrate a high-quality entanglement link across two chips in the lab, where photons on either chip share a single quantum state,” says Dan Llewellyn, co-author of the study. “Each chip was then fully programmed to perform a range of demonstrations which utilize the entanglement. The flagship demonstration was a two-chip teleportation experiment, whereby the individual quantum state of a particle is transmitted across the two chips after a quantum measurement is performed. This measurement utilizes the strange behavior of quantum physics, which simultaneously collapses the entanglement link and transfers the particle state to another particle already on the receiver chip.”
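The textbook protocol behind this kind of experiment can be sketched in a few lines of linear algebra. The following is an illustrative statevector simulation of abstract qubits, not of the photonic chips used in the study; the gates, state and measurement branches are the standard teleportation circuit, assumed here for illustration:

```python
# Minimal statevector simulation of the textbook quantum teleportation
# protocol (illustrative sketch only -- the Bristol/DTU experiment used
# entangled photons on silicon chips, not this abstract circuit).
import numpy as np

# Single-qubit gates and projectors
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)  # project onto |0>
P1 = np.array([[0, 0], [0, 1]], dtype=complex)  # project onto |1>

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT with qubit 0 as control, qubit 1 as target (acting on 3 qubits)
CNOT01 = kron(P0, I, I) + kron(P1, X, I)

def teleport(psi):
    """Teleport the single-qubit state psi from qubit 0 to qubit 2."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)
    state = np.kron(psi, bell)       # qubit 0 = message, qubits 1,2 = Bell pair
    state = CNOT01 @ state
    state = kron(H, I, I) @ state
    # Check every measurement branch: after the classical correction,
    # qubit 2 always carries the original state.
    for m0, m1 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        proj = kron(P1 if m0 else P0, P1 if m1 else P0, I)
        branch = proj @ state
        branch /= np.linalg.norm(branch)
        # Apply corrections X^m1 then Z^m0 on qubit 2
        if m1:
            branch = kron(I, I, X) @ branch
        if m0:
            branch = kron(I, I, Z) @ branch
        # Qubit 2's state is the one nonzero 2-vector left after the
        # measured qubits are fixed; compare its fidelity with the input.
        rows = branch.reshape(4, 2)
        out = rows[np.argmax(np.linalg.norm(rows, axis=1))]
        fidelity = abs(np.vdot(psi, out)) ** 2
        assert np.isclose(fidelity, 1.0), fidelity
    return True

psi = np.array([0.6, 0.8j])  # an arbitrary normalized qubit state
print(teleport(psi))  # True: all four branches recover psi on qubit 2
```

Note that, exactly as Llewellyn describes, the state only arrives intact once the measurement outcome is applied as a correction on the receiving side.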
The team reported a teleportation success rate of 91 percent, and managed to perform some other functions that will be important for quantum computing. That includes entanglement swapping (where states can be passed, via a mediator, between particles that have never directly interacted), and entangling as many as four photons together.
Information has been teleported over much longer distances before – first across a room, then 25 km (15.5 mi), then 100 km (62 mi), and eventually over 1,200 km (746 mi) via satellite. It’s also been done between different parts of a single computer chip before, but teleporting between two different chips is a major breakthrough for quantum computing.
[remember: these are curated news, not an endorsement of the content]
The application of Google’s quantum computing technology could purportedly help improve the technology which underpins proof-of-stake (PoS) cryptocurrencies.
Quantum computing would create truly random numbers
PoS is a type of consensus algorithm where block creators are randomly chosen with probability proportional to their stake, while the algorithm of proof-of-work-based digital currencies uses mining.
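Stake-weighted selection can be sketched in a few lines; the validator names, stakes and selection loop below are invented for illustration and do not correspond to any particular blockchain's implementation:

```python
# Illustrative sketch of proof-of-stake block-creator selection:
# each validator is chosen with probability proportional to its stake.
# Validator names and stakes are invented; real PoS chains add many
# safeguards -- and the quality of the random seed is exactly where
# the integrity doubts discussed below arise.
import random

def pick_validator(stakes, rng):
    """Choose a validator with probability proportional to stake."""
    total = sum(stakes.values())
    ticket = rng.uniform(0, total)  # the random draw PoS skeptics worry about
    running = 0.0
    for validator, stake in stakes.items():
        running += stake
        if ticket <= running:
            return validator
    return validator  # guard against floating-point edge cases

stakes = {"alice": 50, "bob": 30, "carol": 20}
rng = random.Random(42)  # a predictable seed here would bias block creation
counts = {v: 0 for v in stakes}
for _ in range(10_000):
    counts[pick_validator(stakes, rng)] += 1
print(counts)  # selection frequencies roughly proportional to 50/30/20
```

If the seed feeding this draw can be predicted or manipulated, a validator can tilt the selection in its favor, which is the weakness certifiable quantum randomness is meant to close.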
However, the PoS variant has raised doubts regarding the integrity of random selections.
Scott Aaronson, a quantum theoretician at the University of Texas at Austin, told Fortune on Oct. 23 that quantum computing could assuage PoS skeptics' doubts, as a quantum supremacy experiment could generate certifiably random numbers. He previously wrote on his personal blog:
“A sampling-based quantum supremacy experiment could almost immediately be repurposed to generate bits that can be proven to be random to a skeptical third party (under computational assumptions). This, in turn, has possible applications to proof-of-stake cryptocurrencies and other cryptographic protocols. I’m hopeful that more such applications will be discovered in the near future.”
Google’s project challenges the Church-Turing thesis
On Oct. 23, Google published the results of its quantum supremacy experiment, which Aaronson peer-reviewed. In the experiment, “Sycamore” — a 54-qubit processor with quantum logic gates — took 200 seconds to sample one instance of a quantum circuit a million times. IBM’s supercomputer Summit, purportedly the most powerful computer to date, would need roughly 10,000 years for the same calculation.
Google states that its experiment is the first experimental challenge to the extended Church-Turing thesis — a strengthening of the classic computability thesis — which claims that traditional computers can efficiently simulate any “reasonable” model of computation. In a dedicated blog post, Google explained:
“We first ran random simplified circuits from 12 up to 53 qubits, keeping the circuit depth constant. We checked the performance of the quantum computer using classical simulations and compared with a theoretical model. Once we verified that the system was working, we ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible. […] With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.”
Is Bitcoin affected?
Previously, ex-Bitcoin Core developer Peter Todd poured cold water on fears that recent advances in quantum computing could endanger the security of Bitcoin (BTC), a proof-of-work-based cryptocurrency. Todd concluded that the sheer cost of mounting such an attack would, on its own, keep Bitcoin out of trouble for the foreseeable future.
The UNSW Sydney research team: Professor Andrew Dzurak, Mr Wister Huang, Dr Henry Yang. Credit: UNSW Sydney
13 MAY 2019
For the first time ever, researchers have measured the fidelity—that is, the accuracy—of two-qubit logic operations in silicon, with highly promising results that will enable scaling up to a full-scale quantum processor.
The research, carried out by Professor Andrew Dzurak’s team in UNSW Engineering, was published today in the world-renowned journal Nature.
The experiments were performed by Wister Huang, a final-year Ph.D. student in Electrical Engineering, and Dr. Henry Yang, a senior research fellow at UNSW.
“All quantum computations can be made up of one-qubit operations and two-qubit operations—they’re the central building blocks of quantum computing,” says Professor Dzurak.
“Once you’ve got those, you can perform any computation you want—but the accuracy of both operations needs to be very high.”
In 2015 Dzurak’s team was the first to build a quantum logic gate in silicon, making calculations between two qubits of information possible—and thereby clearing a crucial hurdle to making silicon quantum computers a reality.
A number of groups around the world have since demonstrated two-qubit gates in silicon—but until this landmark paper today, the true accuracy of such a two-qubit gate was unknown.
Accuracy crucial for quantum success
“Fidelity is a critical parameter which determines how viable a qubit technology is — you can only tap into the tremendous power of quantum computing if the qubit operations are near perfect, with only tiny errors allowed,” Dr. Yang says.
In this study, the team implemented and performed Clifford-based fidelity benchmarking — a technique that can assess qubit accuracy across all technology platforms — demonstrating an average two-qubit gate fidelity of 98%.
“We achieved such a high fidelity by characterising and mitigating primary error sources, thus improving gate fidelities to the point where randomised benchmarking sequences of significant length — more than 50 gate operations — could be performed on our two-qubit device,” says Mr Huang, the lead author on the paper.
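For readers unfamiliar with the technique, the decay-fitting step of randomized benchmarking can be sketched numerically. The curve, constants and depolarizing parameter below are synthetic assumptions for illustration; the real experiment measures the survival probabilities on hardware:

```python
# Sketch of Clifford randomized-benchmarking analysis: survival probability
# decays as F(m) = A * p**m + B with sequence length m, and the average
# gate fidelity is 1 - (1 - p) * (d - 1) / d, with d = 4 for two qubits.
# All data here are fabricated (noise-free) for illustration.
import numpy as np

d = 4                    # two-qubit Hilbert-space dimension
p_true = 0.96            # depolarizing parameter used to fabricate the data
lengths = np.arange(1, 51)             # up to ~50 gates, as in the paper
A, B = 0.5, 0.25                       # assumed state-prep/readout constants
survival = A * p_true ** lengths + B   # ideal decay curve

# Recover p with a log-linear least-squares fit of (survival - B)
y = np.log(survival - B)
slope, intercept = np.polyfit(lengths, y, 1)
p_fit = np.exp(slope)

avg_gate_fidelity = 1 - (1 - p_fit) * (d - 1) / d
print(round(avg_gate_fidelity, 3))  # 0.97 for p = 0.96
```

In a real analysis the constants A and B are fit alongside p and the data are noisy; the point of the sketch is only that fidelity is extracted from the decay rate, not from any single measurement.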
Quantum computers will have a wide range of important applications in the future thanks to their ability to perform far more complex calculations at much greater speeds, including solving problems that are simply beyond the ability of today’s computers.
“But for most of those important applications, millions of qubits will be needed, and you’re going to have to correct quantum errors, even when they’re small,” Professor Dzurak says.
“For error correction to be possible, the qubits themselves have to be very accurate in the first place — so it’s crucial to assess their fidelity.”
“The more accurate your qubits, the fewer you need — and therefore, the sooner we can ramp up the engineering and manufacturing to realise a full-scale quantum computer.”
Silicon confirmed as the way to go
The researchers say the study is further proof that silicon as a technology platform is ideal for scaling up to the large numbers of qubits needed for universal quantum computing. Given that silicon has been at the heart of the global computer industry for almost 60 years, its properties are already well understood and existing silicon chip production facilities can readily adapt to the technology.
“If our fidelity value had been too low, it would have meant serious problems for the future of silicon quantum computing. The fact that it is near 99% puts it in the ballpark we need, and there are excellent prospects for further improvement. Our results immediately show, as we predicted, that silicon is a viable platform for full-scale quantum computing,” Professor Dzurak says.
“We think that we’ll achieve significantly higher fidelities in the near future, opening the path to full-scale, fault-tolerant quantum computation. We’re now on the verge of a two-qubit accuracy that’s high enough for quantum error correction.”
In another paper — recently published in Nature Electronics and featured on its cover — on which Dr. Yang is lead author, the same team also achieved the record for the world’s most accurate 1-qubit gate in a silicon quantum dot, with a remarkable fidelity of 99.96%.
“Besides the natural advantages of silicon qubits, one key reason we’ve been able to achieve such impressive results is because of the fantastic team we have here at UNSW. My student Wister and Dr. Yang are both incredibly talented. They personally conceived the complex protocols required for this benchmarking experiment,” says Professor Dzurak.
Other authors on today’s Nature paper are UNSW researchers Tuomo Tanttu, Ross Leon, Fay Hudson, Andrea Morello and Arne Laucht, as well as former Dzurak team members Kok Wai Chan, Bas Hensen, Michael Fogarty and Jason Hwang, while Professor Kohei Itoh from Japan’s Keio University provided isotopically enriched silicon wafers for the project.
UNSW Dean of Engineering, Professor Mark Hoffman, says the breakthrough is yet another piece of proof that this world-leading team are in the process of taking quantum computing across the threshold from the theoretical to the real.
“Quantum computing is this century’s space race — and Sydney is leading the charge,” Professor Hoffman says.
“This milestone is another step towards realising a large-scale quantum computer — and it reinforces the fact that silicon is an extremely attractive approach that we believe will get UNSW there first.”
Spin qubits based on silicon CMOS technology — the specific method developed by Professor Dzurak’s group — hold great promise for quantum computing because of their long coherence times and the potential to leverage existing integrated circuit technology to manufacture the large numbers of qubits needed for practical applications.
Professor Dzurak leads a project to advance silicon CMOS qubit technology with Silicon Quantum Computing, Australia’s first quantum computing company.
“Our latest result brings us closer to commercialising this technology — my group is all about building a quantum chip that can be used for real-world applications,” Professor Dzurak says.
A full-scale quantum processor would have major applications in the finance, security and healthcare sectors. It would help identify and develop new medicines by greatly accelerating the computer-aided design of pharmaceutical compounds, contribute to developing new, lighter and stronger materials spanning consumer electronics to aircraft, and enable faster searching through large databases.
Scientia Professor Michelle Simmons with a scanning tunnelling microscope. Credit: UNSW
07 Mar 2018 by Deborah Smith
The unique Australian approach of creating quantum bits from precisely positioned individual atoms in silicon is reaping major rewards, with UNSW Sydney-led scientists showing for the first time that they can make two of these atom qubits “talk” to each other.
The team – led by UNSW Scientia Professor Michelle Simmons, Director of the Centre of Excellence for Quantum Computation and Communication Technology, or CQC2T – is the only group in the world that has the ability to see the exact position of their qubits in the solid state.
Simmons’ team create the atom qubits by precisely positioning and encapsulating individual phosphorus atoms within a silicon chip. Information is stored on the quantum spin of a single phosphorus electron.
The team’s latest advance – the first observation of controllable interactions between two of these qubits – is published in the journal Nature Communications. It follows two other recent breakthroughs using this unique approach to building a quantum computer.
By optimising their nano-manufacturing process, Simmons’ team has also recently created quantum circuitry with the lowest recorded electrical noise of any semiconductor device.
And they have created an electron spin qubit with the longest lifetime ever reported in a nanoelectronic device – 30 seconds.
“The combined results from these three research papers confirm the extremely promising prospects for building multi-qubit systems using our atom qubits,” says Simmons.
2018 Australian of the Year inspired by Richard Feynman
Simmons, who was named 2018 Australian of the Year in January for her pioneering quantum computing research, says her team’s ground-breaking work is inspired by the late physicist Richard Feynman.
“Feynman said: ‘What I cannot create, I do not understand’. We are enacting that strategy systematically, from the ground up, atom by atom,” says Simmons.
“In placing our phosphorus atoms in the silicon to make a qubit, we have demonstrated that we can use a scanning probe to directly measure the atom’s wave function, which tells us its exact physical location in the chip. We are the only group in the world who can actually see where our qubits are.
“Our competitive advantage is that we can put our high-quality qubit where we want it in the chip, see what we’ve made, and then measure how it behaves. We can add another qubit nearby and see how the two wave functions interact. And then we can start to generate replicas of the devices we have created,” she says.
For the new study, the team placed two qubits – one made of two phosphorus atoms and one made of a single phosphorus atom – 16 nanometres apart in a silicon chip.
“Using electrodes that were patterned onto the chip with similar precision techniques, we were able to control the interactions between these two neighbouring qubits, so the quantum spins of their electrons became correlated,” says study lead co-author, Dr Matthew Broome, formerly of UNSW and now at the University of Copenhagen.
“It was fascinating to watch. When the spin of one electron is pointing up, the other points down, and vice versa.
“This is a major milestone for the technology. These type of spin correlations are the precursor to the entangled states that are necessary for a quantum computer to function and carry out complex calculations,” he says.
Study lead co-author, UNSW’s Sam Gorman, says: “Theory had predicted the two qubits would need to be placed 20 nanometres apart to see this correlation effect. But we found it occurs at only 16 nanometres apart.
“In our quantum world, this is a very big difference,” he says. “It is also brilliant, as an experimentalist, to be challenging the theory.”
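The anti-correlation the researchers describe is the hallmark of exchange-coupled spins. As a toy illustration, an idealized Heisenberg interaction (assumed here; not a model of the actual device physics) has a ground state in which the two spins always point in opposite directions:

```python
# Toy model of two exchange-coupled spin qubits: the ground state of an
# antiferromagnetic Heisenberg interaction is the singlet, in which the
# spins are perfectly anti-correlated (one up, the other down).
# This idealization merely stands in for the engineered interaction
# between the phosphorus qubits in the chip.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

J = 1.0  # antiferromagnetic exchange strength (arbitrary units)
H = J * (np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)) / 4

energies, states = np.linalg.eigh(H)
ground = states[:, 0]  # lowest-energy eigenstate = the singlet

# <Z1 Z2> = -1 means the spins always point in opposite directions
zz = np.real(ground.conj() @ np.kron(Z, Z) @ ground)
print(round(zz, 6))  # -1.0
```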
UNSW Sydney-led scientists have shown for the first time that they can make two precisely placed phosphorus atom qubits “talk” to each other.
Leading the race to build a quantum computer in silicon
UNSW scientists and engineers at CQC2T are leading the world in the race to build a quantum computer in silicon. They are developing parallel patented approaches using single atom and quantum dot qubits.
“Our hope is that both approaches will work well. That would be terrific for Australia,” says Simmons.
The UNSW team have chosen to work in silicon because it is among the most stable and easily manufactured environments in which to host qubits, and its long history of use in the conventional computer industry means there is a vast body of knowledge about this material.
In 2012, Simmons’ team, who use scanning tunnelling microscopes to position the individual phosphorus atoms in silicon and then molecular beam epitaxy to encapsulate them, created the world’s narrowest conducting wires, just four phosphorus atoms across and one atom high.
In a recent paper published in the journal Nano Letters, they used similar atomic scale control techniques to produce circuitry about 2-10 nanometres wide and showed it had the lowest recorded electrical noise of any semiconductor circuitry. This work was undertaken jointly with Saquib Shamim and Arindam Ghosh of the Indian Institute of Science.
“It’s widely accepted that electrical noise from the circuitry that controls the qubits will be a critical factor in limiting their performance,” says Simmons.
“Our results confirm that silicon is an optimal choice, because its use avoids the problem most other devices face of having a mix of different materials, including dielectrics and surface metals, that can be the source of, and amplify, electrical noise.
“With our precision approach we’ve achieved what we believe is the lowest electrical noise level possible for an electronic nano-device in silicon – three orders of magnitude lower than even using carbon nanotubes,” she says.
In another recent paper in Science Advances, Simmons’ team showed their precision qubits in silicon could be engineered so the electron spin had a record lifetime of 30 seconds – up to 16 times longer than previously reported. The first author, Dr Thomas Watson, was at UNSW undertaking his PhD and is now at Delft University of Technology.
“This is a hot topic of research,” says Simmons. “The lifetime of the electron spin – before it starts to decay, for example, from spin up to spin down – is vital. The longer the lifetime, the longer we can store information in its quantum state.”
In the same paper, they showed that these long lifetimes allowed them to read out the electron spins of two qubits in sequence with an accuracy of 99.8 percent for each, which is the level required for practical error correction in a quantum processor.
Australia’s first quantum computing company
Instead of performing calculations one after another, like a conventional computer, a quantum computer would work in parallel and be able to look at all the possible outcomes at the same time. It would be able to solve problems in minutes that would otherwise take thousands of years.
Last year, Australia’s first quantum computing company – backed by a unique consortium of governments, industry and universities – was established to commercialise CQC2T’s world-leading research.
Operating out of new laboratories at UNSW, Silicon Quantum Computing Pty Ltd has the target of producing a 10-qubit demonstration device in silicon by 2022, as the forerunner to a silicon-based quantum computer.
The Australian government has invested $26 million in the $83 million venture through its National Innovation and Science Agenda, with an additional $25 million coming from UNSW, $14 million from the Commonwealth Bank of Australia, $10 million from Telstra and $8.7 million from the NSW Government.
It is estimated that industries comprising approximately 40% of Australia’s current economy could be significantly impacted by quantum computing. Possible applications include software design, machine learning, scheduling and logistical planning, financial analysis, stock market modelling, software and hardware verification, climate modelling, rapid drug design and testing, and early disease detection and prevention.
Ethos, logos, and pathos: the three key elements of persuasion
Jeff Bezos prohibits PowerPoint presentations at his meetings, as he considers them a waste of time. However, the alternative method with which he has replaced them is remarkably useful and effective. Do you want to know what it is?
In his annual letter to employees, Jeff Bezos, the CEO of Amazon, recalled that PowerPoint is prohibited in any meeting. However, this does not mean that no presentation method can be used in company meetings.
In fact, the founder of the world’s most powerful ecommerce company offers an alternative so that ideas and strategies are understood more clearly by attendees: narrative memos, or essays, of no more than six pages.
“Instead of wasting time listening to one person while the rest of the audience is silent, it is more efficient to spend 30 minutes reading a six-page essay explaining everything you want to say at the meeting. The narrative structure is easier for human beings to understand than general ideas summarized in bullet points,” explains the CEO.
But why? Inc has compiled the three reasons why Bezos’s idea of replacing PowerPoint with essays is brilliant.
1. Our brains are designed to understand stories
The problem with PowerPoint slides is that, in general, they do not tell a story, and our brain is designed to understand narratives. “When our ancestors discovered fire, they gathered around it to cook and tell stories. In this way, narrative served to pass on anecdotes or warn of dangers that could haunt the tribe,” explains Carmine Gallo, author of Five Stars: The Communication Secrets to Get from Good to Great.
According to anthropologists, the world is for us “a story,” especially in leadership roles. Telling events in narrated form is essential because people remember things better in that structure.
2. Persuasive stories
Aristotle is the father of persuasion, and more than 2000 years ago he revealed the three key elements to persuade: ethos, logos and pathos.
The first one refers to character and credibility;
the second appeals to logic (an argument must have a reason);
while the last one has to do with emotion.
Therefore, the first two have no meaning without the last one.
In fact, the great orators of history used rational and emotional elements in equal measure in their speeches (one need only think of Martin Luther King’s famous “I Have a Dream”).
In addition, according to a series of scientific studies by neurologists, emotion is the best way to create synapses between our neurons. In other words, if you want to communicate an idea, it is best to tell a story. “I love telling anecdotes at meetings. It’s very effective,” says Bezos.
3. Bullet points do not work
Bullet points are not useful for anyone. In fact, companies like Google, Virgin and Tesla do not use them.
The brain is not prepared to retain information in the form of lists. A story, a photo or an idea is far easier to retain.
In the team’s technique, embedding a problem onto the special graph of the D-Wave 2000Q is like solving a puzzle. Credit: Tohoku University
Tohoku University researchers have developed an algorithm that enhances the ability of a Canadian-designed quantum computer to more efficiently find the best solution for complicated problems, according to a study published in the journal Scientific Reports.
Quantum computing takes advantage of the ability of subatomic particles to exist in more than one state at the same time. It is expected to take modern-day computing to the next level by enabling the processing of more information in less time.
The D-Wave quantum annealer, developed by a Canadian company that claims it sells the world’s first commercially available quantum computers, employs the concepts of quantum physics to solve ‘combinatorial optimization problems.’ A typical example of this sort of problem asks the question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the original city?” Businesses and industries face a large range of similarly complex problems in which they want to find the optimal solution among many possible ones using the least amount of resources.
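For a sense of scale, the travelling-salesman question quoted above can be brute-forced for a handful of cities. The distances below are invented for illustration; the point is that the number of tours grows factorially, which is why exhaustive search quickly becomes hopeless:

```python
# Brute-force solution of a tiny travelling-salesman instance: try every
# ordering of cities and keep the shortest round trip. Distances are
# invented; for n cities there are (n-1)!/2 distinct tours, which is why
# annealers target exactly this kind of combinatorial problem.
from itertools import permutations

dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def d(a, b):
    """Symmetric lookup of the distance between two cities."""
    return dist.get((a, b)) or dist[(b, a)]

cities = ["A", "B", "C", "D"]
# Fix city A as the start and try every ordering of the rest
best_len, best_tour = min(
    (sum(d(t[i], t[(i + 1) % len(t)]) for i in range(len(t))), t)
    for t in ((cities[0],) + p for p in permutations(cities[1:]))
)
print(best_len, best_tour)  # 23 ('A', 'B', 'D', 'C')
```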
Ph.D. candidate Shuntaro Okada and information scientist Masayuki Ohzeki of Japan’s Tohoku University collaborated with global automotive components manufacturer Denso Corporation and other colleagues to develop an algorithm that improves the D-Wave quantum annealer’s ability to solve combinatorial optimization problems.
The algorithm works by partitioning an originally large problem into a group of subproblems. The D-Wave annealer then iteratively optimizes each subproblem to eventually solve the original larger one. The Tohoku University algorithm improves on another algorithm using the same concept by allowing the use of larger subproblems, ultimately leading to the arrival at more optimal solutions more efficiently.
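The partitioning idea can be illustrated classically. In this sketch, brute force over small blocks of variables stands in for the annealer, and the random QUBO (quadratic unconstrained binary optimization) instance is purely a toy, not the paper's method:

```python
# Sketch of the divide-and-conquer idea: a large binary optimization
# problem is split into small subproblems, each solved while the other
# variables stay fixed, iterating until no subproblem improves further.
# Brute force plays the role of the quantum annealer here, and the
# random QUBO matrix is an invented toy instance.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 12
Q = rng.normal(size=(n, n))
Q = (Q + Q.T) / 2                      # symmetric QUBO matrix

def energy(x):
    return x @ Q @ x

x = rng.integers(0, 2, size=n)         # random starting assignment
improved = True
while improved:
    improved = False
    for start in range(0, n, 4):       # subproblems of 4 variables each
        idx = range(start, min(start + 4, n))
        best = energy(x)
        for bits in itertools.product([0, 1], repeat=len(idx)):
            trial = x.copy()
            trial[list(idx)] = bits
            e = energy(trial)
            if e < best - 1e-12:       # strict improvement only
                best, x, improved = e, trial, True
print(energy(x))  # a locally optimal energy for this decomposition
```

The subproblem size is what the Tohoku algorithm enlarges: the bigger the block the annealer can handle at once, the better the solution the iteration converges to.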
“The proposed algorithm is also applicable to the future version of the D-Wave quantum annealer, which contains many more qubits,” says Ohzeki. Qubits, or quantum bits, form the basic unit in quantum computing. “As the number of qubits mounted in the D-Wave quantum annealer increases, we will be able to obtain even better solutions,” he says.
The team next aims to assess the utility of their algorithm for various optimization problems.
Scientists have created a Back to the Future device that can predict alternative realities.
In the time travel movie, starring Michael J Fox as Marty McFly, the future is shown not to be fixed and unmovable, but a range of possibilities. Only one timeline in which McFly’s parents fall in love allows him to be born. In another, he would cease to exist.
Now scientists have built a quantum computer that can generate simultaneous “futures”, as if looking into a series of different crystal balls. (The machine is not quite up to predicting potential national lottery results.)
Working on a subatomic scale, it can simulate at most 16 timelines, represented by photons — “packets” of light — occupying different locations. For excited scientists, it is another practical demonstration of the weirdness of quantum mechanics, the set of rules that govern the subatomic world.
Mile Gu, one of the scientists from Nanyang Technological University (NTU) in Singapore, said: “When we think about the future, we are confronted by a vast array of possibilities. These possibilities grow exponentially as we go deeper into the future. For instance, even if we have only two possibilities to choose from each minute, in less than half an hour there are 14 million possible futures. In less than a day, the number exceeds the number of atoms in the universe.”
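Gu's numbers hold up as back-of-the-envelope arithmetic (using the common order-of-magnitude estimate of 10^80 atoms in the observable universe):

```python
# Back-of-the-envelope check of the quoted numbers: with two choices per
# minute, the number of possible futures after m minutes is 2**m.
ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

futures_24_min = 2 ** 24
print(f"{futures_24_min:,}")  # 16,777,216 -- over 14 million in under half an hour

minutes = 0
while 2 ** minutes <= ATOMS_IN_UNIVERSE:
    minutes += 1
print(minutes, minutes < 24 * 60)  # 266 True: exceeded well within a day
```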
His research group realised that, at a much smaller scale, a quantum computer can examine all possible futures of a decision process. It does this by placing them in a quantum “superposition” — a kind of limbo in which different potential states occur simultaneously. Only when the system is observed or disturbed does it “collapse” into one state or another.
This fundamental pillar of quantum mechanics was illustrated by the famous “Schroedinger’s cat” thought experiment, in which a cat is neither dead nor alive but a superposition of both states.
‘Many worlds’ hypothesis
It also gave rise to the “many worlds” hypothesis — the idea that a myriad universes co-exist in parallel in which different fates are played out.
In the new study, potential future outcomes of a decision process were represented by the locations of photons. The scientists showed that a superposition of multiple potential futures was weighted by the probability of one or other occurring when the system collapsed.
The machine has already demonstrated one application — measuring how much our bias towards a specific choice in the present impacts the future.
Lead researcher Farzad Ghafari, a PhD student at Griffith University in Australia, said: “Our approach is to synthesise a quantum superposition of all possible futures for each bias. Many current artificial intelligence algorithms learn by seeing how small changes in their behaviour can lead to different future outcomes, so our techniques may enable quantum-enhanced AIs to learn the effect of their actions much more efficiently.”
The study, published in the journal Nature Communications, was inspired by the late Nobel Laureate theoretical physicist Richard Feynman. He was the first to realise that when a subatomic particle travels from point A to point B, it does not necessarily choose a single path. Instead, it simultaneously follows all the possible paths connecting the points.
Jayne Thompson, a member of the Singapore team, said: “Our work extends this phenomenon and harnesses it for modelling statistical futures.”
While the prototype device simulates no more than 16 futures, the underlying quantum algorithm could in principle “scale without bound”, said the researchers.
Geoff Pryde, who was in charge of the Griffith group, said: “This is what makes the field so exciting. It is very much reminiscent of classical computers in the 1960s. Just as few could imagine the many uses of classical computers in the 1960s, we are still very much in the dark about what quantum computers can do.”
D-Wave, the well-funded quantum computing company, today announced its next-gen quantum computing platform with 5,000 qubits, up from 2,000 in the company’s current system. The new platform will come to market in mid-2020.
The company’s new so-called Pegasus topology connects every qubit to 15 other qubits, up from six in its current topology. With this, developers can use the machine to solve larger problems with fewer physical qubits — or larger problems in general.
It’s worth noting that D-Wave’s qubits differ from those of competitors like Rigetti, IBM and Google: they have shorter coherence times, and the system mostly focuses on solving optimization problems. To do that, D-Wave produces lots of qubits, but in a relatively high-noise environment. That means D-Wave’s qubit count can’t be compared directly with those of its competitors, who are building universal quantum computers (though D-Wave claims its machine is superior for certain problems).
The company also says that it has brought down the noise in its new system, making it its lowest-noise system yet. That’s to be expected in an updated system, of course, but with lower noise comes longer coherence times, which allows for running more complex applications, too.
While there was plenty of controversy around D-Wave’s earliest efforts, and it took a while to prove that the company’s hardware actually exploits quantum effects, this is no longer really in question.
In 2020, D-Wave will make the new platform available through its Leap cloud-computing platform as it makes updates to its existing systems. Before that, though, developers can start writing applications for it using the company’s simulation and other developer tools.
“With the next-generation platform, we are making investments in things like connectivity and hybrid software and tools to allow customers to solve even more complex problems at greater scale, bringing new emerging quantum applications to life,” said Alan Baratz, D-Wave’s chief product officer, in today’s announcement. “Every decision we’ve made and every decision we’ll make will reflect an ongoing commitment to helping developers learn quantum systems and helping customers build the first commercial quantum killer applications.”
In my many years of advising startups, a number of points have crystallized that are often decision-relevant for professional investors, which is why I call them (potential) deal breakers. In the worst case, a single one of these points can be enough for an investor to decide against financing a startup. If several of them apply, it becomes very difficult for a startup to obtain financing from any professional investor.
If (potential) deal breakers apply to you, you should try to eliminate them. If that is not (yet) possible, you should at least work out solutions and lines of argument for how you will deal with these potential deal breakers before approaching a professional investor.
These deal breakers matter even more to institutional investors such as funds and venture capital firms than to business angels. Investors, however, are not a homogeneous group that all act alike, so the points listed may be weighted differently by different investors, or may not be deal breakers for some investors at all.
Below is a checklist of potential deal breakers. Every point on it is a reason I have actually encountered in my consulting practice, and each was decisive in making an investor walk away from an investment.
* The chemistry in the startup team is off, or the team is already fighting.
* Key employees leave the company during the financing process.
* One of the founders needed for business development does not hold a fair share in the startup.
* Key people are not sufficiently incentivized through shares, virtual shares, or salary.
* The startup consists of either a single founder or a great many founders. As Jeff Bezos, the founder of Amazon, put it: “If you can’t feed a team with two pizzas, it’s too large.”
* Unqualified friends or family members hold management positions in the startup.
* Family members or friends hold shares in the startup without contributing money or time to its success.
* The founders do not want to join the company full-time but to “do it on the side.” This can occasionally make sense (e.g., for a founder’s doctoral advisor), but not for all founders.
* The key people of the startup team are geographically separated beyond the initial phase, and the founders are unwilling to change that.
* No member of the core team presents the founding idea/the startup to the investors.
* Two of the founders are a couple.
* The core team pursues extensive side jobs.
* The core team holds stakes in competitors.
* The founders want to be founders because it is currently trendier than working in consulting, and they do not burn for their own product/business model/service.
* Personal assistants are hired in the early phase, e.g., before the Series A financing.
* A lawyer is a shareholder in the startup even though the startup offers no product in the field of legal services.
* Management is paid very highly.
* Heavy exaggeration when presenting existing experience, or an arrogant attitude in the founding team.
* The founders frequently take more than 48 hours to reply to the investor by email.
* Communication runs through assistants, and scheduling meetings is complicated.
* Dead equity in the startup exceeds 10 percent.
* Too many investors hold very small stakes in the startup.
* Many shareholders hold very small stakes that are not pooled.
* A software startup has no software developer on the founding team.
* The founders have already won more than five prizes at pitch competitions. (This may call into question their focus on developing the product and the business idea.)
* Perks atypical for a startup, e.g., company cars in the early phase, pension entitlements, very high travel expenses (because of business- or first-class flights).
* There are no mentors, supporters, etc.
* The founders gave away too many shares too early at too low a valuation.
* An incubator or company builder holds too many of the shares.
* There is no minimum viable product (MVP) yet.
* There is no customer feedback (it is difficult to develop a good product without customer feedback).
* There is no unique selling proposition.
* The pitch deck focuses exclusively on the product and largely ignores other aspects.
* Business-to-consumer products with less than a 25 percent margin and no concept for increasing the margin.
* The founders have not seriously invested in the startup even though they could.
* Although significant problems have been identified, they were neither resolved before the start of fundraising nor were solutions worked out.
* There is no knowledge of the startup’s own key performance indicators (KPIs, the metrics for the operational and strategic direction and management of the startup).
* The investment is meant to be used not for business development but (also) for legacy liabilities, e.g., accrued salary waivers, deferred managing-director compensation, shareholder or bank loans, and deferred interest.
* The founders did not adequately research the investor before the first meeting. As a rule, the investor’s portfolio, strategy, and industry focus should be known.
* The startup’s financing/liquidity will last less than two more months, or the startup is already over-indebted.
* The documents shown in the investor meeting are not provided digitally on request.
* After the first meeting with the investors, the financial figures are not provided.
* Investors who have already invested in the startup do not participate in a further financing round without a truly convincing reason.
* The valuation of the company is unrealistic.
* High burn rates/monthly expenses cannot be reasonably justified.
* Investors will usually accept “bold” but still justifiable figures for projected growth rates and market shares; however, if the figures imply that a German company will reach annual revenue in the billions within five years, you should think carefully before presenting them.
* No market figures are available.
* The focus is on markets that are too small, and the competence to tap larger markets is lacking (e.g., insufficient English skills).
* The startup claims to have no competitors.
* There is only a single supplier, and no others can be identified.
* Customers or cooperation partners are named in the documents even though no real relationship with them exists.
* There are competitors equipped with very large investments.
* Almost all revenue comes from a single customer.
* The startup has no sales concept.
* The founders do not know the jargon of the industry/markets they are targeting.
* Growth or user numbers are stagnating.
* The achievable exit is too small.
* There is no strong understanding of the startup’s own business strategy.
* There is no comprehensible marketing plan. (What does the brand/the startup stand for?)
* The startup is not organized as a corporation.
* (Deliberately) concealed problems surface during due diligence.
* The startup insists on the signing of a non-disclosure agreement before a pitch deck is sent to the investor.
* The business model/products are not protected by intellectual property rights.
* The product is not free of third-party intellectual property rights.
* The business idea disregards existing laws, e.g., data protection law.
* The startup has no tax advisor.
* Open-source software under an unsuitable license is included in the startup’s software.
* Ownership of the startup’s intellectual property is unresolved.
* There are ongoing or threatened court proceedings over important intellectual property used by or belonging to the startup.
About the author: Jan Schnedler is the author of the book Startup-Recht. In it, the attorney not only provides a comprehensive overview of the legal aspects surrounding startups of all kinds, he also writes it all up in a readable form. After reading it, founders should be able to make informed decisions and to avoid mistakes, or at least correct them. The topics covered range from legal forms of incorporation to finding a logo to investor contracts.
Ordinary fiber-optic lines can easily be tapped physically, and messages can be intercepted. Intruders can bend the cable with a small clamp, then use a specialized piece of hardware to split the beam of light that carries digital ones and zeros through the line. The people communicating have no way of knowing someone is eavesdropping, because they’re still getting their messages without any perceptible delay.
The power of quantum mechanics can be harnessed to protect critical banking data from such potential spies. Banks and governments are testing quantum key distribution technology (QKD) to guard their closest secrets.
QKD solves this problem by taking advantage of the quantum physics notion that light — normally thought of as a wave — can also behave like a particle. At each end of the fiber-optic line, QKD systems use lasers to fire data in weak pulses of light, each just a little bigger than a single photon. If any of the pulses’ paths are interrupted and they don’t arrive at the endpoint at the expected nanosecond, the sender and receiver know their communication has been compromised, according to bloomberg.com.
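The interception-detection idea can be illustrated with the best-known QKD scheme, the BB84 protocol. The toy simulation below is a sketch of the principle, not a model of any specific commercial system described here: an eavesdropper who measures photons in flight unavoidably disturbs them, and the disturbance shows up as errors when sender and receiver compare a sample of their key.

```python
import random

def measure(photon, basis, rng):
    """Measure a (bit, basis) photon; a mismatched basis gives a random
    outcome and collapses the photon into the measuring basis."""
    bit, pbasis = photon
    if basis != pbasis:
        bit = rng.randint(0, 1)  # wrong basis: outcome is 50/50
    return bit, (bit, basis)

def bb84(n, eavesdrop, seed=7):
    """Return the observed error rate on the sifted key."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]
    photons = list(zip(alice_bits, alice_bases))
    if eavesdrop:
        # Eve measures each photon in a random basis and re-sends it
        photons = [measure(p, rng.randint(0, 1), rng)[1] for p in photons]
    bob_bases = [rng.randint(0, 1) for _ in range(n)]
    bob_bits = [measure(p, b, rng)[0] for p, b in zip(photons, bob_bases)]
    # Sifting: keep positions where Alice's and Bob's bases agree,
    # then compare those bits to estimate the error rate
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    return sum(a != b for a, b in sifted) / len(sifted)

print(bb84(2000, eavesdrop=False))  # 0.0: quiet channel, keys agree
print(bb84(2000, eavesdrop=True))   # ~0.25: Eve's measurements betray her
```

Without an eavesdropper the sifted keys match exactly; with one, roughly a quarter of the compared bits disagree, which is the tell-tale the article describes.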
“Financial firms see this as a differentiator,” says John Prisco, chief executive officer of Quantum Xchange, the company that’s been operating the fiber-optic cable along the Holland Tunnel between Lower Manhattan and New Jersey. He said banks and asset management firms were considering using QKD to guard their most sensitive secrets. The company hopes to stretch its cables from Boston to Washington, D.C., and is also promoting them to U.S. government agencies.
The Chinese government has created a 1,240-mile QKD-protected link between Beijing and Shanghai. It’s also demonstrated the ability to use QKD to transmit and receive messages from a satellite.
Telecommunications giants including the U.K.’s BT Group and Japan’s NTT say they’re considering whether to build the protection into their network infrastructure.
Encryption is worthless if an attacker manages to get the digital keys used to encode and decode messages. Each key is usually extra-encrypted, but documents disclosed by former National Security Agency contractor Edward Snowden in 2013 showed that the U.S. government, which hoovers up most of the world’s internet traffic, can also break those tougher codes.
Quantum computers are another potential threat to conventional encryption. Like QKD systems, these machines use quantum physics principles to process information and may one day achieve processing power far beyond that of conventional computers. Such machines could then give almost any user the code-breaking powers of today’s NSA.
In 2016 the NSA warned companies that do business with the U.S. government that their next generation of encryption systems would have to be resistant to attacks by quantum computers.
At CES, IBM today announced its first commercial quantum computer for use outside of the lab. The 20-qubit system combines into a single package the quantum and classical computing parts it takes to use a machine like this for research and business applications. That package, the IBM Q system, is still huge, of course, but it includes everything a company would need to get started with its quantum computing experiments, including all the machinery necessary to cool the quantum computing hardware.
While IBM describes it as the first fully integrated universal quantum computing system designed for scientific and commercial use, it’s worth stressing that a 20-qubit machine is nowhere near powerful enough for most of the commercial applications that people envision for a quantum computer with more qubits — and qubits that are useful for more than 100 microseconds. It’s no surprise then, that IBM stresses that this is a first attempt and that the systems are “designed to one day tackle problems that are currently seen as too complex and exponential in nature for classical systems to handle.” Right now, we’re not quite there yet, but the company also notes that these systems are upgradable (and easy to maintain).
“The IBM Q System One is a major step forward in the commercialization of quantum computing,” said Arvind Krishna, senior vice president of Hybrid Cloud and director of IBM Research. “This new system is critical in expanding quantum computing beyond the walls of the research lab as we work to develop practical quantum applications for business and science.”
More than anything, though, IBM seems to be proud of the design of the Q systems. In a move that harkens back to Cray’s supercomputers with their expensive couches, IBM worked with design studios Map Project Office and Universal Design Studio, as well as Goppion, the company that has built, among other things, the display cases that house the U.K.’s crown jewels and the Mona Lisa. IBM clearly thinks of the Q system as a piece of art and, indeed, the final result is quite stunning. It’s a nine-foot-tall and nine-foot-wide airtight box, with the quantum computing chandelier hanging in the middle, with all of the parts neatly hidden away.
If you want to buy yourself a quantum computer, you’ll have to work with IBM, though. It won’t be available with free two-day shipping on Amazon anytime soon.
In related news, IBM also announced the IBM Q Network, a partnership with ExxonMobil and research labs like CERN and Fermilab that aims to build a community that brings together the business and research interests to explore use cases for quantum computing. The organizations that partner with IBM will get access to its quantum software and cloud-based quantum computing systems.
My recent story for Quanta explained a newly proved phenomenon that might seem surprising from a naive perspective: Virtually all polynomials of a certain type are “prime,” meaning they can’t be factored.
The proof has implications for many areas of pure mathematics. It’s also great news for a pillar of modern life: digital encryption.
The main technique we use to keep digital information secure is RSA encryption. It’s a souped-up version of the encryption scheme a seventh grader might devise to pass messages to a friend: Assign a number to every letter and multiply by some secretly agreed-upon key. To decode a message, just divide by the secret key.
RSA encryption works in a similar fashion. In simplified form, it goes like this: A user starts with a message and performs arithmetic on it that involves multiplication by a very large number (hundreds of digits long). The only way to decode the message is to find the prime factors of the resulting product. The security of RSA encryption rests on the fact that there’s no fast way to identify the prime factors of very large numbers. If you’re not the intended recipient of a message — and if you therefore lack the right key to decode it — you could search for a thousand years with the best computers and still not find the right prime factors.
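The full scheme can be made concrete with a toy key pair. The numbers below are deliberately tiny textbook-style values, not anything used in practice; real RSA moduli are hundreds of digits long, which is exactly what makes the factoring search infeasible.

```python
# Toy RSA with tiny primes; real moduli are hundreds of digits long.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n
print(recovered)                    # 65: the original message

# Anyone who can factor n back into p and q can recompute phi and d,
# which is why RSA's security rests on the hardness of factoring.
```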
But there is a back door, and it has to do with polynomial equations. Every number can be represented as a unique polynomial equation. While it’s hard to find the prime factors of a number, it’s easy to find the factors of a polynomial. And once you know the factors of a polynomial, you can use that information to find the prime factors of the number you started with.
Here’s how it works.
Step One: Pick a number whose prime factors you’d like to know. To take a simple example, let’s use the number 15.
Step Two: Convert 15 into binary notation: 1111.
Step Three: Turn that binary expression into a polynomial by treating the binary digits as coefficients of a polynomial:
x³ + x² + x + 1.
(Note that this polynomial equals 15 when x = 2, because 2 is the base of binary notation.)
Step Four: Factor the polynomial:
(x² + 1) × (x + 1).
Step Five: Plug x = 2 into each of those factors:
(2² + 1) = 5
(2 + 1) = 3.
Conclusion: 5 and 3 are the prime factors of 15.
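The five steps above can be run end to end in a few lines. The sketch below hard-codes the root x = −1, which is found by inspection for this example; a general polynomial-factoring algorithm would find factors automatically, so this is an illustration of the conversion, not a factoring tool.

```python
def to_coeffs(n):
    """Binary digits of n, highest power first, as polynomial coefficients."""
    return [int(b) for b in bin(n)[2:]]

def eval_poly(coeffs, x):
    """Horner evaluation of a coefficient list at x."""
    value = 0
    for c in coeffs:
        value = value * x + c
    return value

def divide_out_root(coeffs, r):
    """Synthetic division by (x - r); valid when r is a root."""
    quotient, carry = [], 0
    for c in coeffs[:-1]:
        carry = carry * r + c
        quotient.append(carry)
    return quotient

# Steps 1-3: 15 -> binary 1111 -> x^3 + x^2 + x + 1
coeffs = to_coeffs(15)               # [1, 1, 1, 1]
assert eval_poly(coeffs, 2) == 15    # plugging in x = 2 recovers the number

# Step 4: x = -1 is a root, so (x + 1) divides the polynomial
assert eval_poly(coeffs, -1) == 0
quotient = divide_out_root(coeffs, -1)   # coefficients of x^2 + 1

# Step 5: evaluate each factor at x = 2
print(eval_poly(quotient, 2), eval_poly([1, 1], 2))  # 5 3
```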
This seems like a complicated way to find the prime factors of a small number like 15, whose factors are easy to spot straightaway. But with very large numbers — numbers with hundreds of digits — this polynomial method gives you a remarkable advantage. There’s no fast algorithm for factoring large numbers. But there are fast algorithms for factoring large polynomials. So once you convert your large number to a large polynomial, you’re very close to finding the number’s prime factors.
Does this mean RSA encryption is in trouble? Actually, no. The reason for this has to do with the new proof about polynomials. The mathematicians Emmanuel Breuillard and Péter Varjú of the University of Cambridge proved that as polynomials with only 0 and 1 as coefficients get longer, they’re less and less likely to be factorable at all. And if a polynomial can’t be factored, it can’t be used to identify the prime factors of the number it’s based on.
Breuillard and Varjú’s proof effectively slams shut the polynomial back door for breaking RSA encryption. The very large numbers used in RSA encryption correspond to very long polynomials. Breuillard and Varjú proved that it’s nearly impossible to find polynomials of that length that can be factored. Mathematicians and cryptographers alike have long suspected this is the case. But when the cybersecurity of the entire world depends on some mathematical hack not working, it’s good to have proof that it doesn’t.
Linear computation: montage of a photo of the chip containing the trapped ions and an image of the ions in a 1D array (Courtesy: Christopher Monroe)
The first commercial quantum computer that uses trapped ions for quantum bits (qubits) has been launched by the US-based start-up IonQ. The device is unlike other commercial systems, which use qubits made from superconducting circuits. The company is now working with a small number of users to improve the technology.
Over the past few years, quantum computing has gone from an enticing promise of vastly superior computing power to real devices that can do increasingly useful calculations. A modest number of commercial quantum computers have already been made by small companies such as Rigetti as well as tech giants such as IBM. What these systems all have in common are qubits made from superconducting circuits.
But now University of Maryland spin-out IonQ is bucking this trend by using trapped-ion technology developed by Maryland physicist Christopher Monroe, who is the company’s cofounder and chief executive.
The IonQ device can host 160 ion qubits. The company has performed simple quantum operations on a string of 79 qubits and full quantum computations on 11 qubits. It was announced on 11 December at the “Quantum for Business” conference in Mountain View, California. The company claims that
“IonQ’s systems are the first in the market that store information on individual atoms. They are more accurate and can perform more complex calculations than any quantum computer built to date.”
“I think the announcement is significant, and shows they are making good progress”, says quantum-information specialist John Preskill at Caltech, who was not involved in the work. He believes that the ion-trap technology is competitive with superconducting qubits.
“Even at this early stage, the results show the ion trap design has all the advantages we expected and more” – Christopher Monroe
Ion-trap computers work by holding the ions in a geometrical array – IonQ uses a simple linear arrangement (see figure above). Laser beams encode information into individual ions, and read it out, by causing transitions between an ion’s electronic states. During a computation, the ions “feel” one another’s state via electrostatic interactions.
The IonQ device uses ytterbium ions, but unlike superconducting qubits, they don’t need to be cooled to within a fraction of a degree of absolute zero. Bulky cryogenic equipment is therefore not needed and the entire system occupies about one cubic metre.
Progression of power
The qubit tally of IonQ’s device exceeds the 50-qubit devices reported by IBM and Google, although Google is said to be preparing a 70-qubit machine. But the power of a quantum computer is not simply a question of how many qubits it has; it’s equally important how well each of them performs.
This is where Monroe and colleagues think ion-trap devices might show an advantage. Ensuring that every qubit is identical, for example, is easier with ions because superconducting circuits are much more complicated to make. What is more, the ions are less error-prone, showing an excellent “gate fidelity” of more than 99%.
Gate fidelity is the probability that the gate produces the quantum state it is supposed to, explains Monroe. “A fidelity of 99% roughly means that you can do about 100 operations before the quantum state becomes gibberish”, he says. This means that IonQ’s quantum computer might be able to handle “deeper”, more complex algorithms with more operations.
Preskill adds “I don’t think the other companies have reported two-qubit gate fidelities this good in their multi-qubit devices”.
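Monroe’s rule of thumb follows from a simple exponential-decay estimate: if each gate succeeds with probability 0.99, the chance that an entire circuit runs cleanly is roughly 0.99 raised to the circuit depth. A back-of-envelope check of this (an illustration, not IonQ’s own error model):

```python
import math

gate_fidelity = 0.99
# Depth at which the overall success probability 0.99**n decays to 1/e
depth = -1 / math.log(gate_fidelity)
print(depth)  # ~99.5: "about 100 operations" before the state degrades
```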
The IonQ device reportedly performs well for a standard benchmark test for quantum computing called the Bernstein-Vazirani algorithm. This encodes a secret number, represented by several bits, into a mathematical function; a quantum computer can extract the whole number with a single query to that function, whereas a classical computer needs one query per bit.
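To see why this benchmark is interesting, here is a small classical statevector simulation of the Bernstein-Vazirani circuit (a pedagogical sketch, not IonQ’s implementation). The oracle computes s·x mod 2 for a hidden integer s; the circuit puts all the final amplitude on the basis state |s⟩, so one oracle application reveals the whole secret.

```python
from math import sqrt

def bernstein_vazirani(secret, n):
    """Simulate the BV circuit for an n-bit integer secret; return the
    measured bit string as an integer."""
    dim = 1 << n
    # Hadamards on |0...0> give the uniform superposition
    state = [1 / sqrt(dim)] * dim
    # Phase oracle: multiply the amplitude of |x> by (-1)^(s.x)
    state = [amp * (-1 if bin(x & secret).count("1") % 2 else 1)
             for x, amp in enumerate(state)]
    # Second Hadamard layer, as in-place Walsh-Hadamard butterflies
    for q in range(n):
        step = 1 << q
        for i in range(0, dim, 2 * step):
            for j in range(i, i + step):
                a, b = state[j], state[j + step]
                state[j], state[j + step] = (a + b) / sqrt(2), (a - b) / sqrt(2)
    # All amplitude now sits on |secret>: one "query" reveals every bit
    return max(range(dim), key=lambda x: abs(state[x]))

print(bernstein_vazirani(0b1011, 4))  # 11, i.e. the secret 0b1011
```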
The IonQ device has also been used to calculate the binding energies of simple molecules. In principle, quantum computers can do these calculations exactly, rather than needing the approximations that must traditionally be used for any atom or molecule with more than one electron. The IonQ machine was used to do this calculation for a water molecule – a more complicated case than the lithium and beryllium hydrides studied using IBM’s quantum computer.
IonQ’s announcement “came as a pleasant surprise”, says Umesh Vazirani of the University of California at Berkeley, one of the creators of the benchmark algorithm. “They are much further along than I was expecting, and I am impressed with the performance they claim. IonQ’s ion traps are serious contenders with devices that use superconducting qubits.”
Rainer Blatt of the University of Innsbruck in Austria, who conducts experimental work on ion-trap quantum computation, says “the race is still on for which platform is the best, but ions are surely at the front.” Although ion-trap devices often receive less publicity, he says, they “often yield notably better performance.”
“Long road ahead”
But “there is a long road ahead and it is too early to declare winners”, Vazirani warns. “It is also quite possible that it may not end up being an either/or situation”, he says. “The two technologies have different strengths, and eventually a quantum computer might incorporate both for different functions.”
“We have a very long view on the business plan”, says Monroe. “Since it is very unlikely that quantum computers will be able to solve useful problems anytime in the next few years, we are starting to train our system on small problems and algorithms that are of the same form as those that are more difficult.”
The IonQ device is not, unlike the IBM Q quantum computer, yet available to all comers. Monroe explains that they are still a small company, with just 32 employees, and so “we will be partnering with a few users that can help us design and improve our current and future systems.” Ultimately, however, the company aims to make the computer more widely accessible via a cloud server.
Experiments suggest that exotic superconducting materials share a “strange metal” state characterized by a quantum speed limit that somehow acts as a fundamental organizing principle.
A ubiquitous quantum phenomenon has been detected in a large class of superconducting materials, fueling a growing belief among physicists that an unknown organizing principle governs the collective behavior of particles and determines how they spread energy and information. Understanding this organizing principle could be a key into “quantum strangeness at its deepest level,” said Subir Sachdev, a theorist at Harvard University who was not involved with the new experiments.
The findings, reported today in Nature Physics by a team working at the University of Sherbrooke in Canada and the National Laboratory for Intense Magnetic Fields (LNCMI) in France, indicate that electrons inside a variety of ceramic crystals called “cuprates” seem to dissipate energy as quickly as possible, apparently bumping up against a fundamental quantum speed limit. And past studies, especially a 2013 paper in Science, found that other exotic superconducting compounds — strontium ruthenates, pnictides, tetramethyltetrathiafulvalenes and more — also burn energy at what appears to be a maximum allowed rate.
Strikingly, this speed limit is linked to the numerical value of Planck’s constant, the fundamental quantity of quantum mechanics representing the smallest possible action that can be taken in nature.
“When you see that, you know you’re touching on something very, very deep and fundamental,” said Louis Taillefer, a condensed matter physicist at Sherbrooke, who conducted the new cuprate experiment with his graduate student Anaëlle Legros, Cyril Proust of LNCMI, and 13 collaborators.
Strange metals appear to dissipate energy as fast as the laws of quantum mechanics allow.
This energy-burning behavior occurs when the cuprates and other exotic compounds are in a “strange metal” phase, in which they resist the flow of electricity more than conventional metals. But when they’re cooled to a critical temperature, these strange metals transform into perfect, lossless conductors of electricity. Physicists have been struggling for 32 years to understand and control this powerful form of superconductivity, and the behavior of electrons in the preceding strange-metal phase is increasingly seen as a key part of the story.
Exactly what electrons, the carriers of electricity, are doing in strange metals isn’t known. But experts hypothesize that they may be organizing themselves into a “maximally scrambled” quantum state, in which the properties of each electron depend on those of every other. This state of maximum scrambling might allow the electrons to scatter off one another and spread energy as quickly as the laws of quantum mechanics permit.
This scrambled state is quantum strangeness in the extreme, Sachdev said. In the 1930s, Albert Einstein bristled at the idea of two particles becoming entangled, with properties that stay interdependent even after the particles have traveled far apart. “Here we have entanglement of millions of electrons leading to a whole state of matter,” Sachdev said, “so we are really exploring the frontier of entanglement.”
An organizing principle could be a way in.
“The experiments point to a tantalizing universality across materials, one that would involve a deep idea in quantum mechanics and statistical mechanics,” said Sean Hartnoll, a theoretical physicist at Stanford University. The effort to pinpoint that deep idea has turned up surprising connections to black holes, gravity and quantum information theory.
In 1986, when Georg Bednorz and Alex Müller of IBM Research Zurich synthesized the first cuprate and discovered what’s known as “high-temperature superconductivity,” they noticed something strange about their revolutionary new crystal. As the duo cooled down their cuprate — this one made of lanthanum, barium, copper and oxygen atoms — toward its critical temperature, they observed that the crystal’s electrical resistance decreased linearly with the falling temperature, so that when plotted it formed a downward-trending straight line. For conventional materials, this relationship forms a more complicated curve.
At the time, this observation was overshadowed by the more dramatic result. Bednorz and Müller’s discovery of superconductivity at a higher critical temperature than was previously thought possible quickly won them the physics Nobel Prize and set off a fevered search for similar materials. “It was a pretty mad time,” said Joseph Orenstein, a physicist who was then at Bell Labs in New Jersey. “The place went crazy.”
Other labs soon discovered cuprates and other compounds that superconducted at even higher temperatures. Since then, physicists have dreamed of finding or synthesizing materials that superconduct electricity all the way up to room temperature. Such materials could make human electrical infrastructure vastly more efficient and could power magnetically levitating vehicles, revolutionizing the way we live.
But to create higher-temperature superconductors, physicists had to strengthen the glue that binds electrons together, allowing the electrons to effortlessly convey electric charge. The problem was, the researchers first had to figure out what that glue is. Theories proliferated, but the striking complexity of cuprates and other high-temperature superconductors confounded every attempt.
Over time, one part of the fuzzy picture came into focus: The mysterious linear resistivity that Bednorz and Müller observed in their first cuprate kept showing up in other cuprates and materials before the onset of superconductivity. This behavior became associated with the strange-metal phase that seems to underlie superconductivity in some way. The phase not only transitions to superconductivity at a critical temperature, but persists at lower temperatures if magnetic fields are used to destroy the superconducting state. The superconducting and strange-metal phases appear to compete, with the critical temperature acting as the tipping point between them. To dial up the critical temperature, physicists need to understand both phases. “We probably won’t understand why the superconducting temperature in cuprates is high until we understand the strange-metal phase out of which the superconductivity emerges,” Hartnoll said.
The straight line indicated the existence of “a beautiful, simple, robust law,” said Taillefer. “There has to be a simple, deep theoretical explanation.”
Starting in 1990, researchers began finding evidence of a quantum nature to the linear resistivity. That year, Orenstein and his colleagues at Bell Labs studied a cuprate called yttrium barium copper oxide and found that, like Bednorz and Müller’s sample, its electrical resistance dropped linearly as it was cooled toward its critical temperature. By using an alternating current, they were able to measure the rate at which electrons in the material scatter off each other, which is the source of resistance. They discovered that the new straight line representing the electron scattering rate as a function of temperature had a slope strikingly close to the fundamental constant ħ (pronounced “h-bar”), called the reduced Planck’s constant. In quantum mechanics, ħ represents, among other things, the smallest possible action, which is an amount of energy multiplied by an amount of time.
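To put a number on that slope: if electrons scatter at the Planckian bound, the rate is fixed by temperature and fundamental constants alone, 1/τ = k_B·T/ħ. A minimal sketch in plain Python (illustrative only, not a model of any particular cuprate):

```python
# Planckian scattering rate: 1/tau = k_B * T / hbar.
# A rough illustration of the "quantum speed limit" scale discussed above.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K

def planckian_rate(temperature_kelvin):
    """Scattering rate (1/s) for electrons dissipating at the Planckian bound."""
    return K_B * temperature_kelvin / HBAR

for t in (10, 100, 300):
    rate = planckian_rate(t)
    print(f"T = {t:3d} K: rate ~ {rate:.2e} 1/s, tau ~ {1 / rate:.2e} s")
```

At room temperature this gives a scattering time of roughly 25 femtoseconds, and the rate grows strictly linearly with temperature, which is exactly the straight line the experiments keep finding.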
“At that time I thought it was interesting,” said Orenstein, who is now a professor at the University of California, Berkeley, and a senior scientist at Lawrence Berkeley National Laboratory, “but I didn’t realize that 30 years later it would still be a completely unexplained mystery that was being related to black holes and information theory.”
The 2013 Science paper and today’s Nature Physics findings show that the slope of the line relating electron scattering rate to temperature in strange metals is invariably the same: ħ.
The Quantum Speed Limit
In 2004, the Dutch theorist Jan Zaanen gave this curious phenomenon a name: Planckian dissipation. He argued in a Nature News & Views article that electrons in these materials, and in other exotic states of matter sometimes referred to as “quantum soup,” are all reaching a fundamental quantum speed limit on how fast they can dissipate energy.
“If you’re on a freeway and all the cars are going at the same speed, it’s not because their engines are identical; it’s just because there’s a speed limit,” Hartnoll said.
To understand why electrons in strange metals push up against the putative speed limit, theorists want to figure out where it comes from. The best argument traces the speed limit to the uncertainty principle, the famous formula introduced by Werner Heisenberg in 1927 that puts an upper limit on the amount of certainty that you can have about the world — or, equivalently, on the amount of definiteness the world itself possesses. This upper limit is determined by ħ.
Conceived and approximated by Max Planck in 1900 and later put in reduced form by Paul Dirac, ħ shows up all over quantum theory. Its extremely small value, now known with high precision, represents the quantum unit of action, but in addition, as Heisenberg showed, ħ is the quantum unit of uncertainty: an inescapable, base-level fuzziness in nature. The fuzziness appears when you try measuring two things at once: the position and momentum of a particle, for instance, or how much energy it possesses and for how long. In other words, position and momentum can’t both be defined to greater accuracy than ħ; nor can energy and time. The better you know one, the less certain the other.
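In symbols, the two trade-offs described above are the standard uncertainty relations (the energy-time version is a heuristic estimate rather than a strict operator inequality):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\Delta E \,\Delta t \;\gtrsim\; \hbar
```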
The hypothesis is that electrons in strange metals might be “dissipating as quickly as they can consistent with the uncertainty principle,” Hartnoll explained. The electrons possess an amount of energy that’s proportional to the temperature of the strange metal, and dissipation is a process that takes a certain amount of time. Time and energy can’t both be defined to arbitrary precision because of the uncertainty principle, Hartnoll said, so it’s possible that Planckian dissipation arises “when the dissipation time is as fast as it can be.”
It’s only a rough sketch, he admits. He and other theorists want to prove the quantum bound more rigorously, which might help clarify why hordes of electrons in materials like cuprates so naturally reach it.
For the last few years, Hartnoll, Sachdev and other theorists have been attacking the problem using a surprising “holographic duality” that mathematically connects systems of scrambled quantum particles, like those in strange metals, to imaginary black holes in one higher dimension. (The black hole pops out of the particle system like a hologram.) Remarkably, physicists find that black holes — incredibly dense, spherical objects whose gravity is so strong that not even light can escape — do the equivalent of Planckian dissipation, reaching a bound on how fast they can possibly scramble information that falls into them. In other words, black holes and strange metals go to extremes in some common way. The holographic duality is enabling the researchers to translate properties of black holes into dual properties of the scrambled-particle systems. They hope this will reveal what electrons are doing in strange metals, what happens in the competing superconducting phase, and potentially how to tip the balance between the two, extending superconductivity to higher temperatures.
As they study the behavior of scrambled electrons using the holographic duality and other methods, researchers are gaining a sense of progress and partial insight. Some feel that the field is on a cusp of a conceptual breakthrough. Hartnoll said of the Planckian dissipation phenomenon, “I think it may be understood soon.”
IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center
by William (“Whurley”) Hurley
The word “quantum” gained currency in the late 20th century as a descriptor for something so significant that it defied common adjectives. A “quantum leap,” for example, is a dramatic advancement (and also an early-1990s television series starring Scott Bakula).
At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.
Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.
Erwin Schrödinger’s famous 1935 thought experiment involving a cat that was both dead and alive at the same time was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today, quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., it has non-zero probabilities of being measured as a zero or as a one) until measured. The implications of qubits for handling massive amounts of data and achieving previously unattainable levels of computing efficiency are the tantalizing potential of quantum computing.
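Superposition and the Born rule can be sketched in a few lines of plain Python: a qubit is just a normalized pair of complex amplitudes, and the squared magnitudes give the measurement probabilities. A toy illustration, no quantum library required:

```python
import math

# A single qubit as a 2-component complex state vector (alpha, beta):
# |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# Measurement probabilities follow the Born rule.

alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)

p_zero = abs(alpha) ** 2   # probability of reading out 0
p_one = abs(beta) ** 2     # probability of reading out 1

assert math.isclose(p_zero + p_one, 1.0)  # probabilities must sum to 1
print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")  # equal superposition: 0.50 each
```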
While Schrödinger was thinking about zombie cats, Albert Einstein was pondering what he described as “spooky action at a distance”: particles that seemed to be communicating faster than the speed of light. What he was contemplating were entangled particles in action. Entanglement refers to the observation that the states of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system: measure one particle, and the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers (about 746 miles). Entanglement means that the whole quantum system is greater than the sum of its parts.
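A classical toy simulation can reproduce the measurement statistics of a maximally entangled pair. This sketch mimics only the outcomes of a Bell state, not the underlying physics, but it makes one point clear: the outcomes are perfectly correlated, yet each party sees only a random bit, so no usable signal travels between them.

```python
import random

# The Bell state (|00> + |11>)/sqrt(2): measuring one qubit fixes the
# other's outcome. Each joint outcome 00 or 11 occurs with probability 1/2.

def measure_bell_pair(rng=random):
    outcome = rng.choice([0, 1])  # shared random result
    return outcome, outcome       # (qubit A, qubit B) always agree

results = [measure_bell_pair() for _ in range(1000)]
assert all(a == b for a, b in results)  # perfect correlation, every run
```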
If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”
Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers grasp the difference between classical and quantum computing. Classical computers are binary: every bit can exist in only one of two states, either 0 or 1. Schrödinger’s cat illustrated that subatomic particles, by contrast, can exhibit innumerable states at the same time. If you envision a sphere, a binary state would put 0 at the “north pole,” say, and 1 at the south pole. A qubit’s state can lie anywhere on that sphere, and relating those states across qubits enables correlations that make quantum computing well suited to a variety of specific tasks that classical computing cannot accomplish. Creating qubits and maintaining their existence long enough to accomplish quantum computing tasks is an ongoing challenge.
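The sphere analogy above is the Bloch sphere. A minimal sketch (plain Python, no quantum libraries) of how a point on the sphere maps to qubit amplitudes:

```python
import cmath
import math

# Bloch-sphere parametrization of a qubit:
# |psi> = cos(theta/2)|0> + e^{i*phi} * sin(theta/2)|1>.
# theta = 0 is the "north pole" (|0>), theta = pi the "south pole" (|1>);
# every other point on the sphere is a distinct superposition.

def bloch_state(theta, phi):
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

north = bloch_state(0.0, 0.0)              # amplitudes (1, 0) -> classical bit 0
south = bloch_state(math.pi, 0.0)          # amplitudes (~0, 1) -> classical bit 1
equator = bloch_state(math.pi / 2, 0.0)    # equal-weight superposition
```

The two poles recover the two classical bit values; everything in between is a state a classical bit simply cannot represent.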
Humanizing quantum computing
These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.
Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.
Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare and aerospace, among others. Their capabilities will help us cure diseases, improve global financial markets, detangle traffic, combat climate change and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.
I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.
An uptick in investments, patents, startups and more
That’s my inner evangelist speaking. In factual terms, the latest verifiable, global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion USD, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.
At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36) and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.
The upshot is that nations, giant tech firms, universities and startups are exploring quantum computing and its range of potential applications. Some parties (e.g. nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain and serve other dark purposes.
I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.
If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and startups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.
In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen startups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Inc., Quantum Circuits, Inc., QC Ware and Zapata Computing, Inc. may become well-known, or they may be subsumed by bigger players; anything is possible in this nascent field.
Developing quantum computing standards
Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.
Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.
The need for additional standards will become clear over time.
D-Wave Systems processor with 2,000 qubits (Image: D-Wave Systems Inc.)
The supercomputers are expected to give machine learning a massive boost. Pioneer D-Wave wants to show what is possible with a first project.
By Wolfgang Stieler, Robert Thielicke – 07 NOV 2018
Quantum computers could remove a major obstacle facing today’s AI algorithms: until now, huge amounts of data have been needed to train them. If an AI system is to recognize automatically whether X-ray images show pathological changes, for example, trained physicians first have to annotate the images in painstaking detail. The same applies to many other applications, from detecting cyclists in autonomous cars to the automatic analysis of quality defects in industrial production.
Machine learning on quantum computers
To solve the problem, quantum computing pioneer D-Wave now wants to bring machine learning to its quantum computers. The promise: deep learning with far less data. “Quantum processors can compute with much smaller models,” says Sheir Yarkoni, data scientist at D-Wave.
The Canadian company manufactures special quantum chips for solving optimization problems, such as calculating optimal routes to avoid traffic jams. Now Quadrant, D-Wave’s newly founded business unit, has developed software that works with so-called generative models. Users can train such models with only very few labeled (that is, hand-annotated) samples plus additional unlabeled data. Such models can classify far more efficiently whether a data point is right or wrong, for example whether or not an image contains a cyclist. Classical AI algorithms do this via elaborate probability calculations. In quantum computers, by contrast, this calculation is in a sense built into the physics. “It happens naturally,” is how Yarkoni describes the process.
First place in an AI competition
A first demonstration of the principle’s superiority came in 2017, in collaboration with medical technology provider Siemens Healthineers. The team took part in the Cataract Medical Imaging Grand Challenge, in which AI algorithms competed to identify surgical instruments in a video of an eye operation to remove a cataract. The team took first place.
The application “is very theoretical,” Yarkoni admits. So far, the quantum part of the algorithm still runs on classical hardware. But as a proof of concept that quantum computers can significantly advance machine learning, he says, it is well suited. D-Wave plans to make the application available on its newest quantum chip soon.
We make Artificial Intelligence tangible for you. The German edition of MIT Technology Review has been reporting on new developments in this field for years. As early as 2013, we posed the question: “Computers do the work, so what do we do?” Now, at our “Innovators Summit AI”, we bring together relevant experts from business and science to explore the real potential of this new technology in in-depth lectures, real-world case studies and inspiring workshops.
We believe that AI will change almost every field of business, from the automotive industry to medicine to logistics. But we also know that its true potential is simultaneously overestimated and underestimated. Overestimated, because the technology is reaching its limits; underestimated, because existing AI systems already mean major changes for individual companies and whole industries. The key questions are: Where is the technology headed? What is the relevant business model? How far along is the competition?
Good answers seem rare. For this reason, the Technology Review editorial team has launched the “Innovators Summit AI”.
Top-class scientists present the current state of AI research, explain the key technologies in the field of artificial intelligence, including their advantages and disadvantages, and answer the questions: Which technology is suitable for which application? Which further developments are to be expected, and where?
Applications and business models
AlphaGo’s victory over the world’s best Go player made clear, once and for all, what potential artificial intelligence theoretically holds. It is much less clear, however, which applications will result from it. In concrete case studies, startups and established companies explain how to use AI profitably, what implementation means for an organization, and where the pitfalls and opportunities lie.
Cross-innovation and inspiration
Learn from other industries: with artificial intelligence in particular, the similarities between different markets are greater than the differences. The combination of lectures and workshops gives you excellent opportunities to network and exchange ideas.
Do you have any questions regarding the event? Svenja Goltz, Event-Manager // email@example.com
Paul Allen tours the ITER in Cadarache. Source: ITER
[Amazingly, the article fails to mention the stellarator fusion reactor, fails to differentiate between tokamak (ITER and 9 others) and stellarator concepts, and also the fact that all ten of the global fusion research reactors are located in Europe.]
Jeff Bezos, Bill Gates and Peter Thiel are funneling cash into nuclear fusion projects.
Not long before he died, tech visionary Paul Allen traveled to the south of France for a personal tour of a 35-country quest to replicate the workings of the Sun. The goal is to one day produce clean, almost limitless energy by fusing atoms together rather than splitting them apart.
The Microsoft Corp. co-founder said he wanted to view the early stages of the International Thermonuclear Experimental Reactor in Cadarache firsthand, to witness preparations “for the birth of a star on Earth.”
Allen wasn’t just a bystander in the hunt for the holy grail of nuclear power. He was among a growing number of ultra-rich clean-energy advocates pouring money into startups that are rushing to produce the first commercially viable fusion reactor long before the $23 billion ITER program’s mid-century forecast.
Jeff Bezos, Bill Gates and Peter Thiel are just three of the billionaires chasing what the late physicist Stephen Hawking called humankind’s most promising technology. Scientists have long known that fusion has the potential to revolutionize the energy industry, but development costs have been too high for all but a handful of governments and investors. Recent advances in exotic materials, 3D printing, machine learning and data processing are all changing that.
“It’s the SpaceX moment for fusion,” said Christofer Mowry, who runs the Bezos-backed General Fusion Inc. near Vancouver, Canada. He was referring to Elon Musk’s reusable-rocket maker. “If you care about climate change you have to care about the timescale and not just the ultimate solution. Governments aren’t working with the urgency needed.”
The company Allen supported, TAE Technologies, stood alone when it was incorporated as Tri-Alpha Energy two decades ago. Now it has at least two dozen rivals, many funded by investors with a track record of disruption. As a result, there’s been an explosion of discoveries that are driving the kind of competition needed for a transformational breakthrough, according to Mowry.
One of the clearest measures of progress in the field was on display last week in Gandhinagar, India, where the Vienna-based International Atomic Energy Agency held its biennial fusion forum. The conference highlighted a record 800 peer-reviewed research papers, 60 percent more than a decade ago.
Fusion itself isn’t the problem. The tricky part is generating more energy than is used in the process. Such reactors have to mimic conditions found only in deep space, a much more complex and costly endeavor than fission. Heating plasma to temperatures higher than stars and then containing the ensuing reactions inside cryogenic cooling vessels can require a million parts or more.
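The “more energy out than in” condition is usually expressed as the fusion gain Q, the ratio of fusion power produced to heating power supplied: Q = 1 is scientific breakeven, and ITER’s publicly stated design target is Q ≥ 10 (500 MW of fusion power from 50 MW of heating). A back-of-the-envelope sketch:

```python
# Fusion gain Q = fusion power out / heating power in.
# Q < 1: net energy consumer; Q = 1: scientific breakeven;
# Q >> 1 is needed for an economical power plant.

def fusion_gain(p_fusion_mw, p_heating_mw):
    return p_fusion_mw / p_heating_mw

iter_q = fusion_gain(500, 50)  # ITER design target: 500 MW out, 50 MW in
print(f"ITER target gain Q = {iter_q:.0f}")  # Q = 10
```

Note that Q only counts the heating power injected into the plasma; the electricity drawn by the whole facility is considerably larger, which is one reason a commercially viable reactor needs Q well above breakeven.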
Even if commercial fusion takes longer than expected to achieve, many of the innovations produced along the way will prove lucrative on their own, according to IP Group Plc, a London-based investor in intellectual property. Research firms are already minting patents to protect their creations, from software that simulates plasma burning at 150 million degrees Celsius (270 million Fahrenheit) to a new type of magnet that has applications in health care.
“There’ll still be significant residual value,” said Robert Trezona, who oversees IP Group’s investment in First Light Fusion Ltd., a company near Oxford University whose advisory board includes former U.S. Energy Secretary Steven Chu. “It would have been inconceivable for a small company like First Light to make advancements in fusion sciences 20 years ago.”
One of the most ambitious ventures is Commonwealth Fusion Systems, a company founded last year by six MIT professors. Backed by some of the biggest names in business, they’re confident they’ll be able to produce a prototype of a so-called net energy reactor by 2025.
The startup raised $50 million in March from a group led by Italy’s Eni SpA, one of several oil producers preparing for a carbon-neutral world. And last month it secured an unspecified sum from Breakthrough Energy Ventures, a fund seeded by Gates, Bezos and fellow tycoons including Richard Branson, Ray Dalio and Michael Bloomberg, the majority owner of Bloomberg LP, the parent company of Bloomberg News.
“The greater danger is not having anybody succeed than having everybody,” Commonwealth Fusion CEO Bob Mumgaard said by phone from Cambridge, Massachusetts. “We need more smart people driving very hard to crack this.”
Still, ITER remains the best bet in terms of breaking the code for producing cheap energy on a massive scale, according to Nawal Prinja, a nuclear engineer at Aberdeen-based John Wood Group Plc and a featured speaker at the forum in India.
“They’re coming up with all kinds of new ideas to make the industry more efficient, but turning ideas into a commercial station is a different story,” Prinja said. Only ITER, Latin for “the way,” has the resources needed to perfect the kind of reactor that can run entire cities, he said.
If so, many of today’s fusion investors may not live long enough to benefit from the rollout. It’s already taken ITER more than three decades just to lay the foundation of a machine designed to prove the viability of its concept. And it doesn’t expect to have a reactor capable of powering a couple million U.S. households until sometime around 2050.
Tim Luce, ITER’s chief scientist and Allen’s host at the sprawling research facility about 50 kilometers north of Marseille, dismissed criticism of the time horizon. What the international effort is trying to accomplish, he said, is simply too ambitious for any one actor in the private sector.
“These other competitors have a vision for doing something smaller, but I haven’t seen a compelling piece of physics that shows they can do it,” Luce said. “It’s the story of the tortoise and the hare and we’re the tortoise.”
And then there’s Musk, a serial innovator who thinks the whole fusion crowd is barking up the wrong tree. In a weed-and-whiskey podcast that went viral last month, the Tesla and Solar City co-founder said smart money like his is better spent on finding more efficient ways to capture the Sun’s energy than on trying to recreate it.
“We’ve got a giant thermonuclear reactor in the sky,” Musk said. “It shows up every day very reliably. If you can generate solar panels and store it with batteries, you can have it 24-hours a day.”
The Berlin Science Week takes place over ten days, from November 1 to November 10:
“BERLIN SCIENCE WEEK 1-10 NOV 2018 is an international gathering, bringing together people from the world’s most innovative scientific institutions in Berlin. It is dedicated to the dialogue between science and society to inspire a deeper understanding of our world. BERLIN SCIENCE WEEK fosters interdisciplinary exchange and encourages fellow participants to connect and learn from each other.”
During this conference, the Helmholtz Horizons Symposium also takes place, on November 6:
“The Helmholtz Horizons Symposium highlights scientific breakthroughs achieved by eminent Helmholtz researchers as well as the organization’s rising stars, from doctoral candidates to junior research group leaders.”
(I was asked how an “Entrepreneur-in-Residence” is defined – and struggling to find a good catch-all explanation, I looked it up)
3 Benefits of Hiring an ‘Entrepreneur-in-Residence’
For most companies, having a CEO who’s forced to constantly divide his or her attention is a surefire way to kill the business. Companies need to focus on successful growth, not developing new projects.
This is why so many big companies, like Google Ventures, Target and Dell, and venture capital funds like Accel Partners and Battery Ventures, are investing in entrepreneurs-in-residence. Harvard Business School and MIT have EIRs in place, as well, and colleges and universities have even started offering educational programming in entrepreneurship.
Hiring an EIR may sound counterintuitive to other entrepreneurs, but this move can help a small business or startup leverage its core competencies, allowing the CEO to focus on big-picture issues while a trusted “intrapreneur” works on new initiatives.
In my own case, as part of my efforts to create a successful marketing agency, I put everything into building my core business. But, as I found, reaching a certain level of success can be a Catch-22 at times, because a lot of opportunities presented themselves that I couldn’t possibly pursue on my own.
If you find yourself in a similar position, a symbiotic solution is to invest in an EIR as an internal partner. An EIR can give your company a strategic edge by jumping into exciting new ventures while honing his or her own leadership abilities. What about benefits for the companies involved? Here are several.
1. Intrapreneurship facilitates strategic growth.
In today’s business landscape, entrepreneurship has become a hot commodity. While the concept of the “intrapreneur” has been around for decades, with businesses of all sizes now hiring EIRs, intrapreneurship (entrepreneurship within a larger organization) is riding the crest of a sea change in how businesses scale and expand their offerings.
This new breed of worker is represented by the most forward-thinking minds. That’s why businesses that want to harness their potential for building special projects from the ground up should cultivate an “intrapreneurship mentality” within their organizations.
EIRs make sense for any company looking to develop new goods or services.
2. EIRs don’t mirror — they complement.
CEOs should select EIRs whose talents play off their own. For example, I handle celebrity relationships with our brand, so I didn’t need someone to connect us to more celebrities; rather, I needed an operator who could spin up manufacturing and fulfillment to enhance my own core competencies.
Other entrepreneurs and startup leaders can look for EIRs to give them a boost in three key ways:
1. Managing portfolio business. Entrepreneurs-in-residence can lead spinoff brands and equity deals, freeing up the CEO while the business gains new market shares and audiences.
When my company decided it made sense to use our staff’s diverse expertise to expand our offerings, we hired an EIR to manage our equity deals and launch new internal brands and celebrity products. Because I could devote only about 5 percent of my time to my company’s 12 equity deals, they couldn’t be organized until our EIR started, initiated weekly meetings with portfolio companies and compiled summaries of those businesses.
EIRs also help founders avoid the pitfalls inherent to becoming “parallel entrepreneurs,” or starting companies concurrently. As HubSpot co-founder and CTO Dharmesh Shah learned the hard way, running two startups essentially sets one up to fail because a CEO can’t devote himself or herself to two all-consuming projects. However, EIRs can lift some burdens so that both ventures succeed.
2. Developing key partnership strategies. CEOs should include EIRs in meetings that concern venture capital, influencers and agencies. This way, the entrepreneurs-in-residence can connect the dots about portfolio businesses and hit the ground running with spinoff brands.
For instance, when a celebrity wanted our assistance in starting a business, our EIR offered to develop the partnership. His experience as an entrepreneur and his ability to capitalize on connections provided counsel and knowledge that benefited the deal. EIRs must be given all the resources they need to succeed with portfolio companies.
In another situation, I had a particular project that required opening up specific resources. So, I involved my EIR by introducing him to my network of connections — PR execs, agents and relationship managers — to ensure he had at his disposal all the communication tools he needed.
3. Streamlining nitty-gritty operations. EIRs must be given enough autonomy to delve into the nuts and bolts of meeting clients’ needs. Google’s EIR Jewel Burks, for example, assists small businesses by optimizing their web efforts and giving advice to their teams. EIRs must dig in and work at this granular level, providing the specific services customers demand.
In another instance, one of our portfolio companies had fulfillment issues. So, our EIR helped the client’s team overhaul its entire process. He now sits down with the CEO every week to answer questions and solve operational problems.
EIRs can also pull in team members and bring them up to speed on projects as needed. This will allow companies to capitalize on the strengths and expertise of their staffs to solve specific issues without everyone needing to be on board from the outset.
Not only do small businesses with EIRs benefit from having dedicated intrapreneurs develop unique initiatives, but they can also keep their best people engaged long-term.
Meanwhile, entrepreneurs-in-residence complement their CEOs’ strengths, leverage their companies’ strategic advantages and boost their brands’ output. With both sides seeing such incredible reciprocal benefits, it’s a win-win.
Sep 9, 2014
I’ve held the title of entrepreneur-in-residence (EIR) at four different organizations, so I get asked this question frequently. There’s no unambiguous answer because the job has to fit the priorities of the institution. Still, I can shed some light.
[Disclosure: I have an ownership position in the companies mentioned in this article.]
Each time I’ve been EIR, it’s been a newly created position. I’ve never had a predecessor, and in each instance I wrote my own job description. Despite the name of the position being the same, the duties, responsibilities and compensation were different.
Traditionally, an EIR is a position at a venture capital firm. Usually the EIR is an accomplished executive whom the firm is willing to back financially. Often, an EIR is between stints running a company or is someone who just exited from one of the portfolio companies of the firm. An EIR typically gets office space, some administrative support, a business card and maybe even a stipend. It is not meant to be a high paying job nor a permanent position. The goal is for the EIR to create the next company that the VC firm will fund. Typically VC firms evaluate deals that come their way and then decide if the management team is strong enough to justify an investment. With an EIR it’s inverted—they already like the management. They just need to find the right investment vehicle.
An EIR may get asked to help out on due diligence and/or provide operational assistance to existing portfolio companies. The goal, however, remains: to develop a fundable concept that the VC firm can seed and which the EIR can run (or at least be co-founder). There are many variations on this theme, such as investing in an existing company, doing a roll-up or buying a distressed company.
Another way in which an EIR might help a VC firm is if they are a fundable executive but also have deep domain expertise in an area of thematic interest to the firm. For example, a VC firm might say to themselves, “We believe that distributed energy generation is going to be a megatrend, and we really need to understand the opportunity in the space and develop our investment strategy.” They might hire an EIR with the right skills to research the market, develop an investment thesis and then either find a deal or write a business plan for a new company. The best VC firms see these trends emerging long before most people—and are exiting their investments just as the masses are rushing in.
At the University of Illinois at Urbana-Champaign, I worked for their captive venture capital arm, Illinois Ventures. The mission of Illinois Ventures was to do seed stage investing in technologies coming out of research laboratories in Illinois. Funding a graduate student or professor as a part-time CEO has never been a good practice. When a professor or group of professors (or their students) developed an innovation that justified starting a company that Illinois Ventures wanted to fund (assuming there wasn’t already a credentialed leader in place), I became the business co-founder of the company.
The professors and their students were the technical leads, and I helped craft an investable thesis, handled all of the foundational issues (corporate form, licensing the technology, setting up an option pool, recruiting the team, negotiating the seed round, etc.) and figured out the business model. Because I typically was running two or three businesses at a time (each 1-2 days/week), the startups got the benefit of a seasoned executive without paying full freight.
We did this several times successfully at the University of Illinois and out of this work came SolarBridge Technologies and Semprius. [Disclosure: At Semprius I was an advisor and not the CEO].
With university-based innovations, one often isn’t sure at the outset which industry sectors the business is going to enter—nor is it certain what the business model is going to be. In this context the EIR is an all-around athlete who can wear many hats and add value in multiple ways. As the businesses progressed and we moved toward their Series A rounds, we were then able to recruit a team with the domain expertise needed to take the companies to the next level.
In some instances I then passed the baton to a new CEO, but in the case of Advanced Diamond Technologies, I stayed with the company and ran it for the next several years as full-time CEO. In another instance, I recommended to the investors that we shut down a company since the technology, in my opinion, would never be commercially viable.
At Northern Illinois University, where there aren’t many organic startups and the entrepreneurial culture is not yet ingrained, my job was to light a fire underneath the students and faculty to get them to begin thinking more entrepreneurially about their work and provide coaching and mentoring as needed. I also assisted with tech transfer, licensing and industrial partnerships.
At Argonne National Laboratory, where I was co-executive director of the entrepreneurship center (and de facto EIR), the goal was to mine the portfolio of technologies that Argonne had and identify those with the most commercial potential. From there, we would help form the companies in much the same way that an incubator might.
Just recently I became EIR at Energy Foundry, an energy and smart-grid focused venture capital firm in Chicago. At Energy Foundry, I’m helping to identify promising companies or technologies that are worthy of a deeper dive by the investment team. My colleagues and I also spend time giving guidance and advice to companies that aren’t quite right for the fund.
As you can see, there is no single definition nor a one-size-fits-all job description for an EIR. To understand the EIR’s motives, you need to look to see who’s paying them. If it’s a VC firm, you can bet that sooner or later, the rubber will need to meet the road in a fundable opportunity. If it’s at a university, there may be other payoffs, such as faculty mentoring or student experience.
Neil Kane (@neildkane) is the president of Illinois Partners which helps companies, universities and investors with innovation strategies and technology commercialization.
I write about leadership and turning innovations into businesses.
By day I’m the Director of Undergraduate Entrepreneurship at Michigan State University, but really I’m a repeat entrepreneur and technologist with a penchant for doing the hardest things possible—namely turning technological innovations from universities and federal laboratories into businesses. I founded and am the president of Illinois Partners, a firm that is a leading authority on technology commercialization and innovation, and I have the battle scars to prove it. I’m also the CEO of a toy/puzzle company that makes The X-Cube. I co-founded Advanced Diamond Technologies, among others, and have been the entrepreneur-in-residence at the University of Illinois and Northern Illinois University. I had a comparable role at Argonne National Laboratory. With an undergraduate degree in mechanical engineering, an MBA in finance, sales training at IBM, and improv instruction at The Second City, I’ve worn just about every hat there is. My first business was a barbecue sauce company. Earlier in my career I did time at IBM and Microsoft. I was named a Technology Pioneer by the World Economic Forum and was recognized for excellence in entrepreneurship by the National Science Foundation. Twice I was invited to testify in Congress on issues related to technology transfer. I had the pleasure of being a mentor in the first cohort of the National Science Foundation’s Innovation Corps program.
This Feb. 27, 2018, photo shows a seven-qubit quantum device at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. Describing the inner workings of a quantum computer isn’t easy, even for top scholars. That’s because the machines process information at the scale of elementary particles such as electrons and photons, where different laws of physics apply. (AP Photo/Seth Wenig)
For close to 40 years, quantum computing has been seen as a curious but exciting mix of science fiction and hard computer science somewhere in the distant future.
In the past 10 years, however, large organisations such as Microsoft, Google and IBM have invested more heavily in quantum computing, which uses quantum mechanics rather than binary digital transistors for its calculations. Tools that had previously been theoretical have materialised. Quantum computing is no longer “decades away”.
IBM started giving the public access to a basic quantum computer in the cloud in 2016 and, so far, more than 100,000 people have run more than 6.7m experiments on it. This year, the first quantum computers came into use. These noisy intermediate-scale quantum, or NISQ, computers are not error-corrected and therefore only able to accomplish part of what full quantum computers will be able to do, but people are now able to move beyond theory.
Not surprisingly, the main area of concern emerging for governments and large corporations is security and encryption. Quantum technologies dramatically move the boundaries of what can be computed. Calculations that would take advanced supercomputers thousands of years to perform will eventually be done in less than a minute by quantum computers.
At some point this will render whole swaths of current encryption technologies obsolete.
Governments are aware of the security threat. The US House of Representatives recently passed the Quantum Computing Research Act, which aims to establish a new federal programme to advance quantum technologies. Tellingly, the bill is being managed through the US Senate by the armed services committee.
The US National Institute of Standards and Technology, meanwhile, recently unveiled a consortium to support the quantum industry. A new cohort of companies such as Intel, Hitachi and Huawei are also entering the fray alongside more established players. It is critical that companies and governments adopt quantum-secure methods sooner rather than later to replace existing methods of encryption.
I often use the analogy of a child in a room to highlight the difference between “secure” encryption and “unhackable” methods of securing data. Imagine a three-month-old infant in a room with an unlocked door. The baby is incapable of escaping; it is secure. A toddler in that same room simply walks up to the door, turns the handle, and walks out. Secure suddenly becomes vulnerable.
The quantum threat comes just as the internet of things and 5G mobile connectivity are arriving, with higher security requirements than we have had before.
There are broadly two ways to create a quantum secure future.
One is to create new algorithms that, in the view of many mathematicians and computer scientists, not even the most powerful quantum computers will be able to breach. Organisations such as NIST have a list of such “quantum-proof” constructs with names such as “lattice-based” or “hash-based” protocols, and some organisations may decide this is the way to become quantum-proof.
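To give a flavor of what a “hash-based” construction looks like, here is a toy Lamport one-time signature in Python. This is an illustrative sketch only, not one of the NIST candidate schemes: it signs a single message by revealing, for each bit of the message digest, one of two pre-committed secrets whose hashes form the public key. Its security rests only on the preimage resistance of the hash function, which is the property believed to survive quantum attack.

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits_of(digest: bytes):
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret from each pair, chosen by the digest bit.
    return [sk[i][bit] for i, bit in enumerate(bits_of(H(msg)))]

def verify(pk, msg: bytes, sig) -> bool:
    # Each revealed secret must hash to the committed public-key value.
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits_of(H(msg))))

sk, pk = keygen()
sig = sign(sk, b"quantum-safe?")
assert verify(pk, b"quantum-safe?", sig)
assert not verify(pk, b"tampered", sig)
```

A key pair here can safely sign only one message; production hash-based schemes layer many such one-time keys into a tree to allow multiple signatures.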
However, being reliant on algorithms is risky. As man-made constructs, these will always have patterns that are capable of being unpicked. The history of the past 25 years shows just how quickly today’s secure algorithm is tomorrow’s vulnerability. AES-256 encryption, for example, which secures much of the internet backbone and which has been widely touted as “uncrackable”, was hacked by Dutch security researchers last year.
The second is building a physical solution based on quantum mechanics. Unlike algorithms, quantum computers can produce truly random numbers with no patterns. Only these can withstand hacking attempts by other quantum computers. The first commercially available quantum security solutions will start to be deployed in 2019.
The commercialisation of quantum technologies will continue as we move into 2019. According to Gartner, the US research group, within five years more than 20 per cent of all companies will be investing in quantum computing products, including quantum secure encryption, to ensure their safety from cyber attacks.
U.S. House of Representatives passed the $1.275B National Quantum Initiative Act (NQIA)
September 17, 2018
Last Thursday the U.S. House of Representatives passed the National Quantum Initiative Act (NQIA) intended to accelerate quantum computing research and development. Among other things it would establish a National Quantum Coordination Office within the White House Office of Science and Technology Policy to oversee a “whole-of-government” effort. A companion bill is under consideration in the U.S. Senate. Meanwhile another proposal, the Quantum Computing Research Act (QCRA), was introduced in the Senate in June and directs DoD to form a Defense Quantum Information Consortium.
Sorting through the various quantum computing efforts in Congress can be confusing. Bipartisan support from House Committee on Science, Space and Technology chairman Lamar Smith (R-TX) and ranking member Eddie Bernice Johnson (D-TX) helped speed the NQIA effort through the House. The bill has backers from industry and academia and comes at a time when the global race in quantum computing is heating up despite the admittedly nascent stage of the technology. In total, the House bill calls for $1.275 billion in funding for the National Quantum Initiative during its first five years but makes no provisions for funding the next five years.
Intel likened pursuit of leadership in quantum computing to a “modern day space race” and Jim Clarke, director of quantum hardware, Intel Labs, issued a statement in support: “This legislation will allocate funding for public research in the emerging area of Quantum Computing, which has the potential to help solve some of our nation’s greatest challenges through exponential increases in compute speed. [We] look forward to working with leaders in the Senate to help keep the U.S. at the cutting edge of quantum information science and maintain the economic advantages of this technological leadership.”
Among other things, the bill directs the program to:
“establish the goals, priorities, and metrics for a 10-year plan to accelerate development of quantum information science and technology applications in the United States;
“invest in fundamental Federal quantum information science and technology research, development, demonstration, and other activities to achieve the goals established in paragraph;
“invest in activities to develop a quantum information science and technology workforce pipeline;
“provide for interagency coordination of Federal quantum information science and technology research, development, demonstration, and other activities undertaken pursuant to the Program;
“partner with industry and academia to leverage knowledge and resources;
“leverage existing Federal investments efficiently to advance Program goals and objectives.
As spelled out in the bill, 1) National Institute of Standards and Technology (NIST) Activities and Workshops would receive $400 million (2019-2023 at $80 million per year); 2) National Science Foundation (NSF) Multidisciplinary Centers for Quantum Research and Education would receive $250 million (2019-2023, at $50 million per year); and 3) Department of Energy Research and National Quantum Information Science Research Centers would receive $625 million (2019-2023 at $125 million per year). These numbers will have to be reconciled with what the Senate passes, and then actually get appropriated.
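The three line items do add up to the bill’s headline figure, as a quick sanity check shows:

```python
# Five-year authorizations as spelled out in the bill (2019-2023).
nist = 80_000_000 * 5    # NIST activities and workshops: $400M
nsf = 50_000_000 * 5     # NSF multidisciplinary centers: $250M
doe = 125_000_000 * 5    # DOE research centers: $625M

total = nist + nsf + doe
print(total)  # 1275000000, i.e. the $1.275 billion headline figure
```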
“The Quantum Industry Coalition strongly supports the NQI Act and is working to help get it to the President’s desk. My understanding is that right now it’s a matter of negotiations between the Senate Commerce and Energy and Natural Resources Committees, with the House participating in the discussions as well; the Senate is aiming to pass a compromise that the House can then pass, with a goal of getting everything done before the end of the year,” said Paul Stimers, a partner in K&L Gates and an organizer of the quantum industry lobbying group.
The second major bill, the Quantum Computing Research Act of 2018, is focused on DoD. Its findings state that “alignment of effort within the United States government, academic, and private sectors [is] vital to ensuring the best technology is made available for the defense of the United States; and to the extent possible to protect national security, work performed under section 3 should be maintained at the lowest possible classification level to promote coordination between consortium partners and growth within the field of quantum information science.”
Introduced in June by Sen. Kamala Harris (D-CA), it directs the Secretary of Defense to establish the Consortium, composed of members from academia and industry selected by the chiefs of U.S. Naval and Army research and supported by a board that would include members of the National Quantum Initiative. From a first reading of the bill, it looks as though actual work would be supported by issuing grants.
YORKTOWN HEIGHTS, N.Y. (AP) — A race by U.S. tech companies to build a new generation of powerful “quantum computers” could get a $1.3 billion boost from Congress, fueled in part by lawmakers’ fear of growing competition from China.
Legislation passed earlier in September by the U.S. House of Representatives would create a 10-year federal program to accelerate research and development of the esoteric technology. As the bill moves to the Senate, where it also has bipartisan support, the White House showed its enthusiasm for the effort by holding a quantum summit Monday.
Scientists hope government backing will help attract a broader group of engineers and entrepreneurs to their nascent field. The goal is to be less like the cloistered Manhattan Project physicists who developed the first atomic bombs and more like the wave of tinkerers and programmers who built thriving industries around the personal computer, the internet and smartphone apps.
WHAT’S A QUANTUM COMPUTER?
Describing the inner workings of a quantum computer isn’t easy, even for top scholars. That’s because the machines process information at the scale of elementary particles such as electrons and photons, where different laws of physics apply.
“It’s never going to be intuitive,” said Seth Lloyd, a mechanical engineering professor at the Massachusetts Institute of Technology. “At this microscopic level, things are weird. An electron can be here and there at the same time, at two places at once.”
Conventional computers process information as a stream of bits, each of which can be either a zero or a one in the binary language of computing. But quantum bits, known as qubits, can register zero and one simultaneously.
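The distinction can be made concrete with a toy simulation. In the sketch below (a hypothetical illustration, not how real quantum hardware is programmed), a single qubit is just a pair of complex amplitudes over the outcomes 0 and 1; measuring it collapses the superposition to a definite bit with probability given by the squared amplitude:

```python
import math
import random

def measure(state):
    # state = (alpha, beta): amplitudes for |0> and |1>,
    # with |alpha|^2 + |beta|^2 = 1.
    alpha, _beta = state
    p0 = abs(alpha) ** 2
    # Measurement collapses the superposition to a definite 0 or 1.
    return 0 if random.random() < p0 else 1

# Equal superposition: the qubit registers 0 and 1 "simultaneously"
# until measured.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))

counts = [0, 0]
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # roughly [5000, 5000]: each outcome about half the time
```

Simulating n qubits classically requires tracking 2^n amplitudes, which is exactly why classical machines cannot keep up and why the hardware is built instead.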
WHAT CAN IT DO?
In theory, the special properties of qubits would allow a quantum computer to perform calculations at far higher speeds than current supercomputers. That makes them good tools for understanding what’s happening in the realms of chemistry, material science or particle physics.
That speed could aid in discovering new drugs, optimizing financial portfolios and finding better transportation routes or supply chains. It could also advance another fast-growing field, artificial intelligence, by accelerating a computer’s ability to find patterns in large troves of images and other data.
What worries intelligence agencies most about the technology’s potential — and one reason for the heightened U.S. interest — is that a quantum computer could in several decades be powerful enough to break the codes of today’s best cryptography.
Today’s early quantum computers, however, fall well short on that front.
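The reason code-breaking dominates the worry is Shor’s algorithm, which factors the large integers underpinning RSA by finding the period of modular exponentiation; only that period-finding step needs a quantum computer. For tiny numbers the period can be brute-forced classically, so the surrounding logic can be sketched as follows (an illustrative toy, with the quantum step replaced by a classical loop):

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a^r = 1 (mod n). This brute-force search is the
    # step a quantum computer performs exponentially faster.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a):
    r = order(a, n)
    if r % 2:
        return None  # odd order: retry with a different a
    y = pow(a, r // 2, n)
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    return (p, q) if 1 < p < n else None

print(shor_classical(15, 7))  # (3, 5): the factors of 15
```

For a 2048-bit RSA modulus the classical loop is hopeless, while the quantum version scales polynomially, which is what makes today’s public-key cryptography vulnerable in principle.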
WHERE CAN YOU FIND ONE?
While quantum computers don’t really exist yet in a useful form, you can find some loudly chugging prototypes in a windowless lab about 40 miles north of New York City.
Qubits made from superconducting materials sit in colder-than-outer-space refrigerators at IBM’s Thomas J. Watson Research Center. Take off the cylindrical casing from one of the machines and the inside looks like a chandelier of hanging gold cables — all of it designed to keep 20 fragile qubits in an isolated quantum state.
“You need to keep it very cold to make sure the quantum bits only entangle with each other the way you program it, and not with the rest of the universe,” said Scott Crowder, IBM’s vice president of quantum computing.
IBM is competing with Google and startups like Berkeley, California-based Rigetti Computing to get ever-more qubits onto their chips. Microsoft, Intel and a growing number of venture-backed startups are also making big investments. So are Chinese firms Baidu, Alibaba and Tencent, which have close ties to the Chinese government.
But qubits are temperamental, and early commercial claims mask the ongoing struggle to control them, either by bombarding them with microwave signals — as IBM and Google do — or with lasers.
“It only works as long as you isolate it and don’t look at it,” said Chris Monroe, a University of Maryland physicist. “It’s a grand engineering challenge.”
WHY DOES QUANTUM COMPUTING NEED FEDERAL SUPPORT?
Monroe is among quantum leaders from academia and industry who gathered in Washington on Monday with officials from the White House science office. Some federal agencies, including the departments of defense and energy, already have longstanding quantum research efforts, but advocates are pushing for more coordination among those agencies and greater collaboration with the private sector.
“The technology that underlies this area comes from some pretty weird stuff that we professors are used to at the university,” said Monroe, who is also the founder of quantum startup IonQ, which floats individual atoms in a vacuum chamber and points lasers to control them. But he said corporate investment can be risky because of the technical challenges and the long wait for a commercial payoff.
“The infrastructure required, the hardware, the personnel, is way too expensive for anyone to go in it alone,” said Prineha Narang, a Harvard University assistant professor of computational materials science.
By investing more in basic discovery and training — as the House-passed National Quantum Initiative Act would do — Narang said the U.S. could expand the ranks of scientists and engineers who build quantum computers and then find commercial applications for them.
WHAT ARE THE INTERNATIONAL IMPLICATIONS?
The potential economic benefits have won bipartisan support for the initiative, which is estimated to cost about $1.3 billion in its first five years. Also pushing action on Capitol Hill is a belief that if the U.S. doesn’t adopt a unified strategy, it could one day be overtaken by other countries.
“China has publicly stated a national goal of surpassing the U.S. during the next decade,” said Texas Republican Rep. Lamar Smith, chairman of the House science, space and technology committee, as he urged his colleagues on the House floor to support the bill to “preserve America’s dominance in the scientific world.”
Smith said he expects the Senate will pass a companion bill before the end of the year.