Patterns #003: The War for Talent, LLMs are a Dead End, and Vacheron’s Louvre Display

On the leverage of talent, the cost of abandoning product development, why LLMs will not get us to AGI, and the audacity to do incredible things.

Patterns is a running series from Feel Eternity on the ideas and forces shaping design, business, and culture — not the headlines of the week, but the patterns emerging underneath them.

The individual usurps the firm as the leading actor in business

Illustration: Simon Bailly

The Economist:

From little-known programmers and hedge-fund managers to celebrated writers and singers, superstars’ influence and earnings are through the roof. Politicians have trained their gaze on the top 1%. But the most telling shift in the labour market is playing out higher in the stratosphere. Likewise, stockmarket analysts spend little time thinking about individuals. They should spend more. Technology and culture are conspiring to make the individual, rather than the firm, the animating force of commercial life.

Talent is winner-take-all because the returns to the firm are exponential, not linear. This tells me that finding valuable skills and improving one's skills is still the only moat. To command outsized returns at anything, you have to be damn good at what you do, and not just a little better than average but a lot (especially because the average is always improving). Maybe the real skill here is being intellectually honest about where you sit on this spectrum and keeping close track of how much you're improving, rather than riding only on things you've done in the past. Past work helps to some extent, but not much in domains that are new and growing rapidly.

Real talent can strike out on their own and make a lot of money that way, if that were all they wanted. But talent likes scale, distribution, and impact. That's what the firm can promise. Real talent, though, has the leverage.


Nike’s Trajectory Improves on Better-Than-Expected Sales

Photographer: Bing Guan/Bloomberg

Bloomberg:

The company is looking to end a prolonged sales slump after previous management pulled back too aggressively from longstanding wholesale partners and overemphasized casual footwear over performance products such as running shoes.

The comeback bid is pinned on refocusing product development and marketing on sports while rebuilding relationships with retailers.

I’m a Nike head (or should I say “used to be,” because I can no longer wear shoes with small toe boxes). But even as a sneaker fan, I know that shoes are a commodity. In commodity industries, you can’t sacrifice product development (i.e. innovation), marketing to your core audience, or the partnerships that provide distribution, because these are your real points of differentiation: how your audience perceives you, and how they can access your product.


Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up

Photo: Jason Henry for The New York Times

The New York Times:

At Periodic Labs, A.I. systems will learn from scientific literature, physical experimentation and repeated efforts to modify and improve these experiments. For instance, one of the company’s robots might run thousands of experiments in which it combines various powders and other materials in an effort to create a new kind of superconductor, which could be used to build all sorts of new electrical equipment.

Guided by the company’s staff, the robot might choose several promising powders based on existing scientific literature, mix them in a laboratory flask, heat them in a furnace, test the material and repeat the whole process with different powders. After analyzing enough of this scientific trial and error — pinpointing the patterns that lead to success — an A.I. system could, in theory, learn to automate and accelerate similar experiments.

This is pretty cool. I love and use LLMs multiple times a day, but I am wary of all the hype around superintelligence, especially claims that neural networks trained on existing human data can produce new scientific discoveries through pure reasoning. The approach Periodic Labs is taking, building a lab with physical robots, running real-world trials, and using A.I. to automate and accelerate the experiments, seems like a more rational, non-snake-oil path to discovery.

Which brings me to the next interesting story…


Richard Sutton, Father of RL, Thinks LLMs Are a Dead End

Fascinating conversation with the father of Reinforcement Learning and Turing Award winner Richard Sutton. In a nutshell, Sutton believes we need an entirely new kind of system, one that can learn from experience and have a goal of its own. He argues that LLMs don’t and can’t learn from experience; they only mimic the data they’re trained on. A truly intelligent system, by contrast, can learn in real time from what actually happens rather than just spit out what should happen, can be genuinely surprised by outcomes, and can predict what will happen in the physical world, like any other animal intelligence.

Because of this limitation, he does not think LLMs can scale to AGI no matter how much compute and data we give them, which feels groundbreaking coming from the author of “The Bitter Lesson,” perhaps the most influential essay in A.I.

From the Dwarkesh Patel Podcast:

What we want, to quote Alan Turing, is a machine that can learn from experience, where experience is the things that actually happen in your life. You do things, you see what happens, and that’s what you learn from. The large language models learn from something else. They learn from “here’s a situation, and here’s what a person did”. Implicitly, the suggestion is you should do what the person did. […]

In some ways it’s a classic case of the bitter lesson. The more human knowledge we put into the large language models, the better they can do. So it feels good. Yet, I expect there to be systems that can learn from experience. Which could perform much better and be much more scalable. In which case, it will be another instance of the bitter lesson, that the things that used human knowledge were eventually superseded by things that just trained from experience and computation. […] The scalable method is you learn from experience. You try things, you see what works. No one has to tell you. First of all, you have a goal. Without a goal, there’s no sense of right or wrong or better or worse. Large language models are trying to get by without having a goal or a sense of better or worse. That’s just exactly starting in the wrong place.

If he’s correct, it means that the current trajectory of A.I. development is fundamentally misguided: LLMs cannot and will not get us to AGI, and we will need to develop entirely new systems that can do reinforcement learning and continual experience-based learning to reach AGI.


Bloomberg: The Case for Mechanical Watches in a Digital Age

The La Quête du Temps by Vacheron Constantin. Source: Vacheron Constantin

Bloomberg:

In the center of the salon, before a looming painting of Louis XIV, sits a wide octagonal pyramid with mirrored sides about 20 feet across, its angles alien against its baroque surroundings. Atop is something equally out of this world: a 3½-foot-tall confection of crystal, delicate metal gears and glittering gold. Its base houses what appears to be the inner workings of a music box; in the middle, a clock — one that displays the hours, minutes, months, days, year, moon phase and time of sunrise and sunset. Somehow the seasons, solstices and stars in the sky are also discernible in this thing, which sits on a plate of lapis lazuli with mother-of-pearl inlay planets. On top of the clock there’s a golden man about 11 inches high, with zodiacal constellations etched into a glass dome around him. […]

Vacheron Constantin learned so much from engineering La Quête du Temps that the company applied for 15 new patents.⁠ ⁠Much like technology for a supercar or a Formula One track winner will eventually trickle down into mass automobiles, a project like La Quête du Temps will influence future Vacheron Constantin watches.

For Vacheron Constantin’s 270th anniversary, the company built an insanely complicated (no pun intended) watch display and put it in the middle of the Louvre. Its development is an absolute marvel of watchmaking, and what the company learned over the seven years it took to design, engineer, and build the display (producing 15 patents in the process) will flow into its own line of watches. Truly incredible. To me, this represents what true design and craftsmanship do: they move the entire industry forward.