A U.K. startup that designs semiconductors used for artificial intelligence applications has raised $200 million from investors including BMW AG and Microsoft Corp.
Graphcore Ltd. is one of a number of companies trying to design a new class of chips better suited to crunching the vast amounts of data needed to make computers smarter. They argue that the processors that defined the PC age — made by Intel Corp. and Nvidia Corp. — are not suited to this task, and that more tailored designs are needed to achieve the required speedups.
The funding round was led by U.K. venture capital firm Atomico, and valued Graphcore at $1.7 billion, the company said Tuesday. Existing investors Dell Technologies Inc. and Robert Bosch Venture Capital also participated in the round.
The billion-dollar-plus valuation for the chip company comes at a time when the semiconductor industry is suffering from investor skepticism, even as AI draws increased attention. The U.K. company’s chips are specifically designed to accelerate machine learning — the process by which predictive computer algorithms improve by digesting large amounts of data. Graphcore designs chips for power-intensive tasks such as AI in cloud computing, enterprise services and automotive systems.
Graphcore, headquartered in Bristol, England, will also be opening up new offices in Beijing and in Hsinchu, Taiwan. With plans to eventually IPO, it is currently rolling out its first run of chips and is working on its next product. Initial clients include Dell and Samsung Electronics Co., said Graphcore co-founder Nigel Toon. The company is targeting $50 million in revenue in 2019.
Graphcore’s technology “is well-suited for a wide variety of applications from intelligent voice assistants to self-driving vehicles,” Tobias Jahn, principal at BMW i Ventures, said in a statement.
Last year the company raised $50 million from investors including U.S. venture capital firm Sequoia Capital. Previous investors include AI luminaries Hermann Hauser, co-founder of Arm Holdings Plc, and Demis Hassabis, co-founder of Google’s DeepMind.
Other companies working on AI-specific chips include Alphabet Inc.’s Google, which now designs its own tensor processing units, chips the company doesn’t sell but does deploy in its data centers.
The semiconductor industry is currently debating the sustainability of Moore’s law, the 1965 observation that the number of transistors on a chip—and thus its price performance—will double about every two years. Graphcore’s leaders are more concerned with a related principle, called Dennard scaling, which held that as transistors shrank, a chip’s power density would stay constant. That principle no longer applies, and adding more transistors now tends to make chips hotter and more energy-hungry. To mitigate this issue, some chipmakers design their products so they don’t use all their processing power at once—unused areas of the chip are called “dark silicon”—and instead run only the parts necessary to support an application.
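The trade-off behind the dark-silicon workaround can be sketched numerically. In the toy calculation below, the two-year doubling period comes from the article; the starting transistor count and power budget are purely illustrative assumptions, not real chip data:

```python
# Toy model (illustrative numbers, not real chip data): under Dennard
# scaling, per-transistor switching power fell as fast as transistor counts
# rose, so total chip power stayed roughly flat. Once that scaling broke
# down, more transistors started to mean more heat -- hence "dark silicon".

def transistors(start, years, doubling_period=2.0):
    """Transistor count after `years` of Moore's-law doubling."""
    return start * 2 ** (years / doubling_period)

BASE_TRANSISTORS = 1e9   # assumed 1-billion-transistor chip...
BASE_POWER_W = 100.0     # ...drawing an assumed 100 W today

for years in (0, 2, 4, 6):
    growth = transistors(BASE_TRANSISTORS, years) / BASE_TRANSISTORS
    dennard_w = BASE_POWER_W                # per-transistor power shrinks in step
    post_dennard_w = BASE_POWER_W * growth  # per-transistor power stays put
    print(f"year {years}: x{growth:.0f} transistors, "
          f"{dennard_w:.0f} W with Dennard scaling vs "
          f"{post_dennard_w:.0f} W without")
```

In six years the toy chip's transistor count grows eightfold; without Dennard scaling its power budget would too, which is why parts of the chip are left dark instead.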
Co-founder Simon Knowles and Toon say the heat problem in particular will stop phones and laptops from getting much faster in the years ahead unless circuits can be radically redesigned for efficiency. “I was given a blank sheet of paper to start, which never happens in chip design,” says Daniel Wilkinson, who works on Graphcore’s chip architecture. The founders challenged their team of a few dozen engineers, mostly alumni of their past startups, to design a chip that could harness all its processing horsepower at once while using less energy than a state-of-the-art GPU. One of the bigger energy costs in silicon is moving and retrieving data, because processors have historically been kept separate from memory. Transporting data back and forth between the two is “very energy expensive,” Knowles says. Graphcore set out to design what he calls a more “homogeneous structure” that “intermingles” a chip’s logic with memory, so the chip doesn’t have to expend as much power shuttling data to another piece of hardware.
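The data-movement cost Knowles describes can be made concrete with per-operation energy estimates commonly cited in the computer-architecture literature for roughly 45 nm silicon; these are ballpark orders of magnitude, not Graphcore measurements:

```python
# Ballpark per-operation energies (picojoules) often quoted for ~45 nm
# silicon -- illustrative orders of magnitude, not Graphcore figures.
ENERGY_PJ = {
    "fp32_multiply": 3.7,    # the arithmetic itself is cheap
    "sram_read_32b": 5.0,    # on-chip memory access
    "dram_read_32b": 640.0,  # off-chip DRAM access dominates
}

def workload_energy_pj(n_ops, fetch):
    """Energy for n multiplies, each fetching one 32-bit operand via `fetch`."""
    return n_ops * (ENERGY_PJ["fp32_multiply"] + ENERGY_PJ[fetch])

n = 1_000_000
off_chip = workload_energy_pj(n, "dram_read_32b")
on_chip = workload_energy_pj(n, "sram_read_32b")
print(f"off-chip / on-chip energy ratio: {off_chip / on_chip:.0f}x")
```

With these assumed figures, the same million multiplies cost roughly 74 times more energy when every operand comes from off-chip memory, which is the gap an "intermingled" logic-and-memory design tries to close.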
Over three years, they ran computer simulations of hundreds of chip layouts, eventually settling on a design with 1,216 processor cores, which Knowles describes as “lots of tiny islands of processors that split up energy resources.” The resulting IPU, first manufactured in 2018, is a sleek chip the size of a Wheat Thin with almost 24 billion transistors, able to access data at a fraction of the power a GPU requires. “Each of these chips runs at 120 watts”—about the same as a bright incandescent lightbulb—“so about 0.8 of a volt and about 150 amps,” Toon says, standing in a messy electronics lab at the Bristol headquarters, sliding his thumb over an IPU’s mirrorlike finish.
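Toon's voltage and current figures line up with the quoted power draw, since electrical power is voltage times current (P = V * I). A quick check, using only numbers from the article:

```python
# All figures are from the article: about 0.8 V at about 150 A should give
# the quoted 120 W, because electrical power P = V * I.
voltage_v = 0.8
current_a = 150.0
cores = 1216

power_w = voltage_v * current_a
print(f"chip power: {power_w:.0f} W")  # matches the quoted 120 W

# Average power budget per core across the 1,216 cores:
print(f"per-core budget: {power_w / cores * 1000:.1f} mW")
```

Spread over the 1,216 cores, the 120-watt budget works out to just under a tenth of a watt per core, the "tiny islands" splitting the energy resources.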
To test the prototype, the team fed it a standard training dataset of millions of labeled images of common objects (fruits, animals, cars). An engineer then queried the IPU with a photo of his own cat, Zeus, and within an hour the computer not only identified it as a cat but also correctly described Zeus’ coat. “The IPU was able to recognize it as a tabby,” Knowles says. Since that first test, the IPU has sped up and can now recognize more than 10,000 images per second. The goal is for the chip to digest and ascertain far more complex data models, to the point that the system would understand what a cat is on some more fundamental level. “We don’t tell the machine what to do; we just describe how it should learn and give it lots of examples and data—it doesn’t actually have to be supervised,” he says. “The machines are finding out what to do for themselves.”
Read more at: Bloomberg