The only way to understand this is by knowing that Meta already has two (!!) AI labs which are at existential odds with one another, and both are in the process of failing spectacularly.
One (FAIR) is led by Rob Fergus (who? exactly!) because the previous lead quit. Relatively little gossip on that one, other than that top AI labs have their pick of its outgoing talent.
The other (GenAI) is led by Ahmad Al-Dahle (who? exactly!) and mostly comprises director-level rats who jumped off the RL/metaverse ship when it was clear it was gonna sink, moving the centre of genAI gravity from Paris (where a lot of Llama 1 was developed) to MPK, where they could secure political and actual capital. They've since been caught with their pants down cheating on objective and subjective public evals, have cancelled the rest of Llama 4, and the org lead is in the process of being demoted.
Meta are paying absolute top dollar (exceeding OAI) trying to recruit superstars into GenAI, and they just can't. Basically no one is going to re-board the Titanic and report to Captain Alexandr Wang, of all people. It's somewhat telling that they tried to get Koray from GDM and Mira from OAI, and that this was their 3rd pick. Rumoured comp for the top positions is well into the tens of millions. The big names who are joining are likely to stay just long enough for stocks to vest and boomerang L+1 to an actual frontier lab.
I wouldn't categorize FAIR as failing. Their job is indeed fundamental research, and they are still a leading research lab, especially in perception and vision. See SAM2, DINOv2, V-JEPA-2, etc. The "fair" (hah) comparisons of FAIR are not to DeepMind/OAI/Anthropic, but to other publishing research labs like Google Research and NVIDIA Research, and they are doing great by that metric. It does seem that, for whatever reason, FAIR resisted productization, unlike DeepMind, which is not necessarily a bad thing if you care about open research culture (see [1]). GenAI was supposed to be the "product lab" but failed for many reasons, including the ones you mentioned. Anyways, Meta does have a reputation problem that they are struggling to solve with $$ alone, but it's somewhat of a category error to deem it FAIR's fault when FAIR is not a product LLM lab. Also, Rob Fergus is a legit researcher; he published regularly with people like Ilya and Pushmeet (VP of DeepMind Research), just didn't get famous :P.
This is exactly why Zuck feels he needs a Sam Altman type in charge. They have the labs, the researchers, the GPUs, and unlimited cash to burn. Yet it takes more than all that to drive outcomes. Llama 4 is fine but still a distant 6th or 7th in the AI race. Everyone is too busy playing corporate politics. They need an outsider to come shake things up.
The corporate politics at Meta is the result of Zuck's own decisions. Even in big tech, Meta is (along with Amazon) rather famous for its highly political and backstabby culture.
This is because these two companies have extremely performance-review-oriented cultures where results need to be proven every quarter, or you're grounds for a layoff.
Labs known for being innovative all share the same trait of allowing researchers to go YEARS without high impact results. But both Meta and Scale are known for being grind shops.
Can't upvote this enough. From what I saw at Meta, the idea of a high-performance culture (which I generally don't have an issue with) found its ultimate form and became performance-review culture. Almost every decision made was filtered through "but how will this help me during the next review". If you ever wonder about some of the moves you see at Meta, perf review optimization was probably at the root of it.
I may or may not have worked there for 4 years and may or may not be able to confirm that Meta is one of the most poorly run companies I've ever seen.
They are, at best, 25-33% efficient at taking talent+money and turning it into something. Their PSC process creates the wrong incentives; they either ignore or punish the type of behavior you actually want; and talented people either leave (especially after their cliff) or are turned into mediocre performers by Meta's awful culture.
Interesting that "high-impact" on the one hand, and innovative/successful in the marketplace on the other, should be at odds at Meta. Makes one wonder how they measure impact.
Beyond that, the leaders at Facebook are deeply unlikeable, well beyond the leaders at Google, which is not a low bar. I know more people who reflexively ignore Facebook recruiters than who ignore recruiters from any other company. With this announcement, they have found a way to make that problem even worse.
This is wrong. OpenAI has almost no upside now at these valuations and there is a >2 year effective cliff on any possibility of liquidity whereas Meta is paying 7-8 figures liquid.
Meta's problem is that everyone knows it's a dumpster fire, so you will only attract people who care primarily about comp, which is typically not the main motivation for the best people.
It means that you can keep those shares even if you leave. Otherwise the term vesting cliff would be meaningless at any startup where the shares are not liquid.
Except you can only sell a prescribed amount at an undetermined time. By the earliest possible sell date you have already made 8 figures liquid at Meta.
They are yours. That’s a huge difference between a real cliff and illiquid stock.
If you decide you don’t like it, you take what’s vested after the cliff and leave. Even if you have to wait another year and a half to sell, you still got the gain.
These people had better make a lot of money while they can, because for most of them their careers may be pretty short. The half-life of AI technologies is measured in months.
Anyone know what Scale does these days, beyond labeling tools, that would make them this interesting to Meta? Data labeling tools seem like a traditional software application without much to do with AI models themselves, and would be somewhat easily replicated; but I'm guessing my impression is out of date. Also, apparently their CEO is now leaving [1], so the idea that they were super impressed with him doesn't seem to be the explanation.
OpenAI and Anthropic rely on multiple data vendors for their models so that no outside company is aware of how they train their proprietary models. Forbes reported the other day that OpenAI had been winding down their usage of Scale data: https://www.forbes.com/sites/richardnieva/2025/06/12/scale-a...
Yeah, but they know how to get quality human-labeled data at scale better than anyone, and they know what Anthropic and OpenAI wanted: what made it quality.
But then huge revenue streams for Scale basically disappear immediately.
Is it worth Meta spending all that money just to stop competitors using Scale? There are competitors who I am sure would be very eager to get the money from Google, OpenAI, Anthropic, etc. that was previously going to Scale. So Meta spends all that money for basically nothing, because the competitors will just fill the gap if Scale is wound down.
I am guessing they are just buying stuff to try to be more "vertically integrated" or whatever (remember that Facebook recently got caught pirating books etc).
Yeah, also the industry could come up with its own Scale if they were forced to.
But probs it just makes sense on paper: Scale's revenue will pay for the deal by itself, and what they could do is give/keep the best training sets for Meta, for "free" now.
Zuck's not an idiot. The Instagram and WhatsApp acquisitions were phenomenal in hindsight.
> The metaverse will happen, IMO. The tech is just not there, yet.
This seems possible, and it just sounds so awful to me. Think about the changes to the human condition that arose from the smartphone.
People at concerts and other events scrolling phones, parents missing their children growing up while scrolling their phones. Me, "watching" a movie, scrolling my phone.
VR/AR makes all that sound like a walk in the park.
“We went outside this weekend. Terrible. I wasn’t hot anymore, the smog was everywhere. House was tiny. No AI to help with conversations and people were unfriendly. I’m staying plugged in, where we can fly amongst the stars on unicorns. Some say it’s fake but I say life has been fake for a while.”
Meta has done great work on the underlying technology of the metaverse, but what they really need is a killer app. And I don't think Meta, or really Silicon Valley types, have the proper institutional ability or cultural acumen to achieve it. Think back to Horizon Worlds, which looked more like an amateur weekend asset flip than the product of a billion-dollar conglomerate.
If it does come, it will likely come from the gaming industry, building upon the ideas of former MMORPGs and "social" games like Pokemon Go. But the recent string of AAA disasters should tell you that building a good game is often orthogonal to the amount of funding or technical engineering. It's creativity and artistic passion, and that's something that someone who spends their entire life optimizing their TC is going to find hard to understand.
The prevailing theory is that Meta did a 49% deal so it didn't set off antitrust alarm bells. In other words, the 49% doesn't give them ultimate power, but you'd best believe that when Meta tells them to jump, the board and the execs are going to ask "how high?".
Power struggles like this are weird to me. Is kicking the board likely to succeed at 49%? If so it feels like the control percentage isn't the primary factor in actual control.
At 49% I'm certain they would become the largest shareholder, by far. Then allying with another smaller shareholder to get majority - especially as you are Meta and can repay in various ways - is trivial. This is control, in all forms but name.
There's a lot of things shareholders can do to screw over other shareholders. Smaller shareholders are at least somewhat likely to follow along with the largest shareholder, just to avoid becoming their enemies and getting squeezed out.
It's a smart purchase, it's just that I don't see how these datasets factor into super-intelligence. I don't think you can create a super-intelligent AI with more human data, even if it's high-quality data from paid human contributors.
Unless we've watered down the definition of super-intelligent AI. To me, super-intelligence means an AI whose intelligence dwarfs anything theoretically possible from a human mind. Borderline God-like. I've noticed that some people have referred to super-intelligent AI as simply AI that's about as intelligent as Albert Einstein in effectively all domains. In the latter case, maybe you could get there with a lot of very, very good data, but it's also still a leap of imagination for me.
I think this is kind of a philosophical distinction to a lot of people: the assumption is that a computer that can reason like a smart person but still runs at the speed of a computer would appear superintelligent to us. Speed is already the way we distinguish supercomputers from normal ones.
I'd say superintelligence is more about producing deeper insight, making more abstract links across domains, and advancing the frontiers of knowledge than about doing stuff faster. Thinking speed correlates with intelligence to some extent, but at the higher end the distinction between speed and quality becomes clear.
If anything, "abstract links across domains" is the one area where even very low intelligence AI's will still have an edge, simply because any AI trained on general text has "learned" a whole lot of random knowledge about lots of different domains; more than any human could easily acquire. But again, this is true of AI's no matter how "smart" they are. Not related to any "super intelligence" specifically.
Similarly, "deeper insight" may be surfaced occasionally simply by making a low-intelligence AI 'think' for longer, but this is not something you can count on under any circumstances, which is what you may well expect from something that's claimed to be "super intelligent".
I don't think current models are capable of making abstract links across domains. They can latch onto superficial similarities, but I have yet to see an instance of a model making an unexpected and useful analogy. It's a high bar, but I think that's fair for declaring superintelligence.
In general, I agree that these models are in some sense extremely knowledgeable, which suggests they are ripe for producing productive analogies if only we can figure out what they're missing compared to human-style thinking. Part of what makes it difficult to evaluate the abilities of these models is that they are wildly superhuman in some ways and quite dumb in others.
> It's a high bar, but I think that's fair for declaring superintelligence.
I have to disagree, because the distinction between "superficial similarities" and genuinely "useful" analogies is pretty clearly one of degree. Spend enough time and effort asking even a low-intelligence AI about "dumb" similarities, and it'll eventually hit a new and perhaps "useful" analogy simply as a matter of luck. This becomes even easier if you can provide the AI with a lot of "context" input, which is something that models have been improving at. But either way it's not superintelligent or superhuman, just part of the general 'wild' weirdness of AIs as a whole.
I think you misunderstood what I meant about setting a high bar. First, passing the bar is a necessary but not sufficient condition for superintelligence. Secondly, by "fair for" I meant it's fair to set a high bar, not that this particular bar is the one fair bar for measuring intelligence. It's obvious that usefulness of an analogy generator is a matter of degree. Eg, a uniform random string generator is guaranteed to produce all possible insightful analogies, but would not be considered useful or intelligent.
I think you're basically agreeing with me. Ie, current models are not superintelligent. Even though they can "think" super fast, they don't pass a minimum bar of producing novel and useful connections between domains without significant human intervention. And, our evaluation of their abilities is clouded by the way in which their intelligence differs from our own.
Comparing the process of research to tending a garden or raising children is fairly common. This is an iteration on that theme. One thing I find interesting about this analogy is that there's a strong sense of the model's autoregressiveness here in that the model commits early to the gardening analogy and then finds a way to make it work (more or less).
The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics. Guided by this connection, they decided to train models to generate data by learning to invert a diffusion process that gradually transforms complexly structured data into a much simpler distribution -- in this case, a basic multidimensional Gaussian.
I feel like these sorts of technical analogies are harder to stumble on than more common "linguistic" analogies. The latter can be useful tools for thinking, but tend to require some post-hoc interpretation and hand waving before they produce any actionable insight. The former are more direct bridges between domains that allow direct transfer of knowledge about one class of problems to another.
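To make the diffusion example concrete, here's a minimal sketch of the forward (noising) half of that idea, in the standard DDPM style; the schedule values below are illustrative, not the ones from the original papers:

    import numpy as np

    def forward_diffusion(x0, betas):
        # Repeatedly mix the signal with Gaussian noise; after enough
        # steps the result is ~N(0, I) no matter what x0 was.
        x = x0
        for beta in betas:
            x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * np.random.randn(*x.shape)
        return x

    betas = np.linspace(1e-4, 0.02, 1000)  # illustrative linear schedule
    x_T = forward_diffusion(np.ones(16), betas)
    # A diffusion model is trained to undo these steps, mapping samples
    # from the simple Gaussian back toward the data distribution.

The thermodynamics analogy is what suggested that this gradual destruction of structure could be learned in reverse.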
> The sorts of useful analogies I was mostly talking about are those that appear in scientific research involving actionable technical details. Eg, diffusion models came about when folks with a background in statistical physics saw some connections between the math for variational autoencoders and the math for non-equilibrium thermodynamics.
These connections are all over the place, but they tend to be obscured and disguised by gratuitous divergences in language and terminology across different communities. I think it remains to be seen whether LLMs can be genuinely helpful here, even though you are restricting to a rather narrow domain (math-heavy hard sciences) and one where human practitioners may well have the advantage. It's perhaps more likely that, as formalization of math-heavy fields becomes more widespread, these analogies will be routinely brought out as a matter of refactoring.
I'll believe that AI is anywhere near as smart as Albert Einstein in any domain whatsoever (let alone science-heavy ones, where the tiniest details can be critical to any assessment) when it stops making stuff up with the slightest provocation. Current 'AI' is nothing more than a toy, and treating it as super smart or "super intelligent" may even be outright dangerous. I'm way more comfortable with the "stochastic parrot" framing, since we all know that parrots shouldn't always be taken seriously.
Earlier today in a conversation about how AI ads all look the same, I described them as 'clouds of usually' and 'a stale aftertaste of many various things that weren't special'.
If you have a cloud of usually, there may be perfectly valid things to do with it: study it, use it for low-value normal tasks, make a web page or follow a recipe. Mundane ordinary things not worth fussing over.
This is not a path to Einstein. It's more relevant to ask whether it will have deleterious effects on users to have a compliant slave at their disposal, one that is not too bright but savvy about many menial tasks. This might be bad for people to get used to, and in that light the concerns about ethical treatment of AIs are salient.
> It's a smart purchase, it's just that I don't see how these datasets factor into super-intelligence.
It's a smart purchase for the data, and it's a roadblock for the other AI hyperscalers. Meta gets Scale's leading datasets and gets to lock out the other players from purchasing it. It slows down OpenAI, Anthropic, et al.
These are just good chess moves. The "super-intelligence" bit is just hype/spin for the journalists and layperson investors.
Yes, probably. But it already is. There is also an assumption that Meta would turn it off. Not saying they will or will not, just that there's an assumption here.
It seems very short-sighted given how far Meta's latest model release was behind Qwen and DeepSeek, both of which relied heavily on automatically generated reasoning/math/coding data to achieve impressive results, not human annotated data. I.e. Scale's data is not going to help Meta build a decent reasoning model.
This is, by all indications, the world's most expensive acquihire of a single person. Reporting has been that Zuckerberg sees Wang as a confidant of sorts, and that Wang has proposed a vision of AI that's said to be non-consensus.
Wang didn't get $14b, he only owns about 15% of Scale. We also don't know how much he sold. He could have sold all of his stock (netting him around $4.5b), none, or something in the middle.
It looks like a security/surveillance play more than anything. Scale has strong relationships with the US MIC, the current administration (predating Zuck's rebranding), and gulf states.
Their Wikipedia history section lists accomplishments that align closely with DoD's vision for GenAI. The current admin, and the western political elite generally, are anxious about GenAI developments and social unrest, the pairing of Meta and Scale addresses their anxieties directly.
I doubt Scale is interesting by itself. This is all about Alexandr Wang. The guy is in his mid-20s and has somehow worked his way up in Silicon Valley to the same stature as CEOs of multi-trillion-dollar companies. Got a front-row seat at Trump's inauguration. Advises the DoD. Routinely rubs shoulders with world leaders. I can't say whether there's actual substance or not, but clearly Zuck sees something in him (probably a bit of himself).
It's a wild story for sure. Dropped out of MIT after freshman year and starts Scale to do data labeling. Three years later Scale has a $1B valuation and two years after that Wang is the world's youngest billionaire. Nine years after Scale's founding they're still doing less than $1B in annual revenue. Yet Meta is doing a $14B acquihire. There's definitely more than meets the eye. I suspect it involves multiple world governments including the US.
I didn't mean to imply he started it alone. Though his co-founder Lucy Guo is almost as bizarre of a story as Wang himself. I'm curious, what were they doing before data labeling?
> Though his co-founder Lucy Guo is almost as bizarre
Well, kind of. I went to school with Lucy, and she was a completely different person back then. Sure, she was among the more social of the CS majors, but the glitz and glamour and weirdness with Lucy came after she got her fame and fortune.
I suspect a similar thing happened with Wang. When you are in charge of a billion-dollar business, you tend to grow into the billion-dollar CEO.
> what were they doing before data labeling?
They were building an API for mechanical turks. Think: send an API call with the words "call up this pizza restaurant and ask if they are open", and that API call would cause a human to follow the instructions, physically call the restaurant, and type back a response that is sent back to your API call.
The pivot to data labelling, as money poured into self-driving cars, makes some amount of sense given their previous business idea. It's almost the same type of "API for humans" idea, except much more focused on one specific use case.
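In today's terms, the shape of it was something like this (hypothetical endpoint and field names, purely to illustrate the idea; this is not their actual API):

    import requests

    # Hypothetical "API for humans": the request describes a task,
    # a human worker performs it, and the result comes back as a
    # normal API response.
    resp = requests.post(
        "https://api.example.com/v1/human-tasks",
        json={
            "instruction": "Call this pizza restaurant and ask if they are open",
            "phone": "+1-555-0100",
        },
    )
    print(resp.json())  # e.g. {"result": "Open until 10pm"}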
I’m nowhere near fully confident in these rumors… so there’s nothing to spill. I don’t post specific accusations without some completely reliable basis.
I don't know Alex directly that well, but I believe his "freshman year" skipped all GIRs and was spent polishing off the most advanced graduate courses in CS theory (18.404), machine learning (6.867), algorithms (6.854), etc.
So basically he did MIT at the PhD level in one year.
As a classmate myself who did it in 3, at a high level too (and I think Varun - of Windsurf - completed his undergrad in 3 years also)...
Wang's path and trajectory, through MIT at least, is unmatched to my knowledge.
That courseload is completely unremarkable for a first-year with experience in competitive programming (like Wang had). I know a dozen people who did the same.
I know a dozen who come close but none who did the same, nor who had the entrepreneurial bent so early... curious, who are these people you have in mind?
Alexandr is just a dude, like you or me, with his own life and his own worries and his own problems. He’s more like the rest of us than you seem to think.
Not trying to diminish his academic accomplishments, but it isn't that uncommon for experienced freshman students to just jump straight into advanced topics. If you're the type that has been coding since you were 10, been active on olympiad teams, or whatever, you can probably do just fine in such courses.
If anything, you'd be bored with some undergrad courses.
Meta buys a non-controlling stake and says no customers will be affected, but the CEO and others are leaving Scale for Meta. Meta also says they won't have access to competitor data, but at 49% ownership they get major investor rights?
The host on this podcast[0] had a good point about the "investment". It was really a merger, but framed as an investment to sidestep regulators. Key attributes:
These types of "Aaackshually" business strategies are repulsive, and are evidence that these people who wield immense responsibility do not deserve it.
The stake of FB plus the people now employed at FB at the executive level is clearly over 50%; it seems very odd that they are convincing anyone this is a minority position.
> people now employed at FB at the executive level
I think it's reasonable that this is not counted, unless there's some possible condition on the stock ownership that I'm not aware of. If they ultimately disagree with the decisions from Facebook then they can, in theory, get help from the other stakeholders to override them.
That said, I would not be the least bit surprised if this turned out to be some scheme in which they use a series of technicalities that make the deal look like a merger but "be" an investment.
Perhaps, but it's worth hedging against a potential Trump-Zuck fallout. Also, the statute of limitations for commencing an antitrust investigation is 4 years (15 U.S.C. § 15b).
Reverse acquisition? I.e., similar to how Disney "bought" Pixar, but much of Pixar's IP overshadows Disney's IP; or how Apple bought NeXT, and the current macOS is basically NeXTSTEP under the hood.
It's a technique that companies do to avoid disruption: Buy early stage startups, and by the time they could "disrupt" the parent company, the parent company's management is ready to retire, and the former startup's management is ready to take their place.
In what way does Pixar's IP overshadow Disney's? Listing the highest-grossing media franchises [1], Mickey Mouse, Winnie the Pooh, Star Wars, and Disney Princesses are on #2-#5 respectively, while Pixar's top spot is #16 with Cars.
Because, in the 1990s, Pixar's IP was more popular than Disney's. The story (as I remember it from a Jobs biography) was that, in the Disney parks, there were longer lines for Pixar characters than Disney characters.
Someone in leadership (don't remember the name) basically swallowed pride and bought Pixar from Jobs. It was considered a "reverse acquisition" because Jobs had so much stock he technically controlled Disney afterwards.
This was certainly true for animated movies in the 2000s (where Pixar clearly dominated), although not the companies as a whole. Pixar shareholders (including Jobs) owned about 15% after the deal.
This isn't a reverse acquisition, it's just a normal acquisition. Company A (Disney) has many things but is missing one thing (an animation team that doesn't suck), so they buy a company that does have that thing.
It might reduce scrutiny, but not completely prevent it.
The Clayton Act says:
"No person engaged in commerce or in any activity affecting commerce shall acquire, directly or indirectly, the whole or any part of the stock or other share capital and no person subject to the jurisdiction of the Federal Trade Commission shall acquire the whole or any part of the assets of another person engaged also in commerce or in any activity affecting commerce, where in any line of commerce or in any activity affecting commerce in any section of the country, the effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly."
It does seem likely that the deal was at least partially structured to avoid antitrust scrutiny, but my impression is that under the DMA/DSA (I can never keep track of which is which) Meta is technically a gatekeeper in Europe and has different obligations, so I'll be curious to see what happens there.
If this is marketed as a strategic acquisition in the national interest of the US tech industry, to counteract the Chinese trying to catch up on AI, then nothing of the sort will happen.
Not weird at all if you assume the goal was never to integrate but to neutralize. 49 percent gives them just enough leverage to shape the roadmap, slow-roll access, and nudge governance. The public terms are clean because the real effects show up over time. It's a slow freeze.
It doesn't really affect the other frontier labs too much because OpenAI and Anthropic rely on multiple data vendors for their models so that no outside company is aware of how they train their proprietary models. Forbes reported the other day that OpenAI had been winding down their usage of Scale data: https://www.forbes.com/sites/richardnieva/2025/06/12/scale-a...
That's the MO for all these big players. They don't shell out merely for the marketplace advantage; there's always some meta (no pun intended) Gordon Gekko corpo-warfare schtick going on in the background.
To quote Peter Thiel, "competition is for losers".
Of all the tech companies, Meta is the most ruthless and shameless. You'd have to be a total fool to trust Zuck, especially the Zuck who put billions into AR for not much return and is now putting billions into AI to create lackluster, lagging models.
Off base when considering the likes of Palantir and many others.
Not a fan of the person or many of Meta's business practices. But Meta has given a lot back with Llama and PyTorch, among many other open source contributions. Which others in the space are not doing.
> Llama [...] among many other open source contributions
Llama is not Open Source. Don't buy Meta's marketing that's trying to dilute the term. Llama is only available under restrictive terms that favor Meta.
Fair point. I cannot amend my original comment but you are correct in that the weights have restrictions which go against the nature of Open Source software.
React made quite a mess of the web just so we couldn't browse with JavaScript disabled, thereby allowing Facebook to track us through those like buttons that popped up everywhere.
Are there hidden barbs in llama and pytorch too? I'm not close enough to them to know.
Not a fan of Facebook or React (though React Native is, IMHO, the one-eyed among the blind for cross-platform mobile development), but I think that's a bit far-fetched. I do think Facebook has (or had?) genuinely an engineering culture that wants to give something back.
This is true. I've been to several conferences where FB sent engineers to talk about their open source projects or how they used a particular language or framework.
I remember the conflicted feeling of strongly disliking their products and leadership but liking their contributions. Same energy but more intense in both directions many years later.
React is the de facto standard of web development for a reason. It's not the reason you can't browse the web without JS (it would be Angular or others if it wasn't React). And just because a site uses React doesn't mean Meta can track you.
Their point was that (i) React becoming the de facto standard played into the hands of Meta, who are interested in tracking people. (ii) Tracking is made easier by running arbitrary JavaScript in the browser. And (iii) before SPAs were big (pre-React), more people used to completely disable JS in their browser.
Not saying I buy this theory. Just trying to explain what I think they were alluding to, as I had the impression you missed it and went in a different direction.
First, the only model creators who have not "given back" in the way you mean are OpenAI and Anthropic; everyone else has at least some models in the open.
Second, I would argue that it's strange how we are discounting the contribution of OpenAI and Anthropic, because being the first to show that something valuable is possible actually counts for quite a lot in my book. Competition and open-source copies are nice, but the value-add attribution among AI labs feels really strange at times.
What Meta has given, so far, are decent copies, which mostly serve their own needs and are making it harder for the above companies (who actually have to generate revenue through AI efforts, because it's all they do) to exist. And that's fine and all, Meta can do what they want to the degree the law permits, but I have a hard time understanding them as the good guys in the AI space, unless I squint very heavily.
Yes they would? Mega corps can afford to commoditize various layers which prevents competition from accessing any profit. Meanwhile Meta et al can capture that profit in their own layers instead.
I don't think it's a big stretch to say that Meta has not only been more successful than Palantir at mass surveillance, but has also likely caused a greater magnitude of harm (a lot through negligence) when considering events like the genocide in Myanmar.
I'd trust Zuck if I had a signed, airtight agreement for a large amount of money he paid into an escrow account for something I owned or was transferring ownership of.
He's very close to peak homo economicus. (EDIT: this next point is wrong, the oral history I heard referred to Winklevoss pops, not Zuckerberg, and I misremembered) Which makes sense, given his father is deep in actuarial services.
I get that people hate Oracle for a variety of reasons, but this is just such a ridiculous assertion. They've been one of the largest tech companies for multiple decades. Do you honestly believe that a majority of their revenue came from legal settlements from suing their own customers, at any time in their history?
Do you have a citation for this claim? I mean if the company is as absurdly litigious as you're saying, it stands to reason that you wouldn't make unsubstantiated claims about them in a public forum, right?
I'd guess it's something to do with Oracle's licensing policies. My understanding is they'd audit businesses who used their software and bill them an additional fee for violations. Maybe it's not strictly legal settlements but it's plausible that they made more money from these fees than from their regular fees at some point and even ongoing today. (That also lines up with jokes I've heard about them hiring more lawyers than software developers given someone's gotta do the audits.)
No, I do not believe it is even remotely plausible that they have ever made a majority of their revenue from licensing violation fees, especially today, when their total annual revenue is over $57 billion.
Oracle is an enormous company. I'm in my 40s, and literally every non-startup I've worked for in my career has been an Oracle customer, across multiple product lines. They're a 48-year-old company with more than 150,000 employees.
To be absolutely clear, I'm not expressing an opinion here on Oracle or its licensing and auditing practices. I'm just responding to the wild claims about revenue from lawsuits or license violations. Oracle stock has been publicly-traded for nearly four decades, so there's plenty of data available from their earnings statements. If these claims were even remotely based in reality, it would be easy to cite a source.
Maybe a few years ago at <$megacorp> where I work, Oracle required, as part of their licensing, the ability to scan every machine owned by the company to make sure there was no unlicensed use of any of their software. If any offending installations were found, they would charge the company the cost of the license for every machine. So, thousands of users times $thousands per license.
Even if you had a license for a Java runtime for, say, your Oracle database instance, if that was found to be used for another purpose you'd get hit. Again, for every machine in the entire company, not just the offending one.
Needless to say, there was a huge firedrill to root out any rogue installs.
OK, but that anecdote is orthogonal to your original claim. No mention of a lawsuit or actually having to pay extra fees. And "rogue installs" essentially means "using copyrighted software in a quantity that exceeds what we actually paid for", i.e. theft.
My original assertion was just that Meta is unlikely to be 'the most ruthless and shameless [of all the tech companies].' There's so much competition out there for that title.
But also for a long time the best available open-weights models on the market - this investment has done a lot to kickstart open AI research, which I am grateful for no matter the reasons.
> especially Zuck who put billions into AR for not much return
While the current state of AR/VR is indisputable, Zuck faces a large existential risk from Microsoft/Apple/Google. If those companies want to revoke access to Meta's apps (ex. [1]), they can, and Zuck is in trouble. At one point Google was trying to compete with Facebook with Google+, and while that didn't work, it's still a large business risk.
Putting billions into trying to get a moat for your product seems like prudent business sense when you're raking in hundreds of billions.
This is a very interesting buy, because Scale AI has been spamming anyone and everyone on freelancer platforms, and so far they don't have a very good reputation online among people they have contracted with.
Just go look at what people say about them on Reddit. It’s rare to find anything positive, or even a single brand champion that had some sort of great experience with them.
Just like Uber, DoorDash & co. don't have a good reputation among their contract workers. The entire business model is based on exploitation of labor. That doesn't mean it isn't valuable (in a capitalist sense).
No, those were entirely different user experiences when the services you mentioned were gaining traction and finding product market fit.
UberCab and Palo Alto Delivery were both services that delivered great user experiences for everyone involved: drivers, riders, small businesses, people ordering food. These experiences created brand champions who went out and raved about these technological innovations nonstop.
I don't see any mentions of positive experiences with Scale AI here on HN or on Reddit... maybe that's the reason behind the acquisition?
People on HN aren't the ones driving Ubers, so I'm not sure what experiences you are expecting to hear about. Go talk to actual drivers and you'll find that things aren't exactly rosy.
There were plenty of people on HN who signed up for the app to drive people back home before and after work.
Being able to see your car move in real time in the Uber app, with >2s of lag between the car's GPS and the customer's phone, was magical in a way that's hard to describe today.
Lots of things were amazing when there was unlimited VC money flowing in and no expectation of profit. No point bringing it up 15 years later in a new reality.
I just don't get why Scale and/or Alexandr Wang are so important to Meta. Like sure, data is good and all, but does Scale really bring something so unique and valuable to the table? What vision or insight does Wang offer that's worth so much?
Until now I've actually been a believer in the amount of money that Zuck has poured into metaverse investments. I'm not a believer in the metaverse per se, but a believer that innovation takes unafraid capex. The last thing you want to be is scared money, like Microsoft, who chose to scuttle the HoloLens project over the thought of spending a couple extra billion dollars on it.
But this deal really has left me scratching my head. Scale is, to put it charitably, a glorified wrapper over workers in the Philippines. What Meta gets in this deal, in effect, is Alexandr Wang. This is the same Wang who has said enough in public for me to think, "huh?" He has said a lot of revealing stuff, like at Davos (don't have the pull quotes off the top of my head), that made me realize he's just kind of a faker. A very good salesman who ultimately gets his facts off the same Twitter feed we all do.
On top of that, what makes this baffling is that Meta has very publicly faced numerous issues and setbacks due to very poor data from Scale, which caused public fires at both companies. So you're bringing in a guy whose company has caused grief for your researchers and is neither research- nor product-oriented, and you expect to galvanize talent from both the inside and outside to move towards AGI? What is Mark thinking?
Zuckerberg seems to have had all the pieces to make this work but I'm a lot less confident if I'm a shareholder now than a week ago. This is a huge miss.
Sam Altman is a huge risk to META. He has similar morals to Zuck and a much better technical team. If OpenAI turns on the slop generator, they could hit Facebook and Instagram hard. Wang is probably smart enough to help navigate that risk.
Wang has, seemingly, spent as much time and energy over the last couple of years on PR stunts and publicity as on Scale itself. Between testifying to Congress about how China is an AI risk (duh) and how AI is important (obviously), putting out press releases about joining boards of large orgs, and getting himself invited to Trump's second inauguration. A multi-billion-dollar headline framed as "Meta pays $14 BILLION for this one guy" is more of the same.
It very much seems it's been an investment in getting himself to be more of a "household name in AI". That is exactly what Meta needs (or at least thinks it needs) now.
I very much believe that there is very little moat in AI (currently, and in the foreseeable future, short some underlying hardware/etc breakthrough), and that success (from a consumer perspective) will come down to which of the big-cos (Facebook v Amazon v Google v OpenAI v Anthropic/Claude) consumers trust more. Zuck is, to put it mildly, *not* a trustworthy name for Meta to associate with leading the product that they want consumers to trust and depend on for their entire lives.
Whether or not Wang has any more qualifications than 1) being somewhat of a recognized AI name, and 2) being okay at speaking confidently on topics someone briefed him about, doesn't really play much into this, I think. If he needs help/assistance/etc. with any of the Meta-scale politics/management/etc., Zuck can buy that for him.
What Zuck can't seem to buy (for himself) is some level of trust.
$14.3 billion seems excessive for it to be a pure acquihire play. There's undoubtedly some IP acquisition (or at least exclusive access to certain IP) involved.
It's about 0.85% of Meta's market cap - less than the 1% they paid for (granted, all of) Instagram. They also paid about 1% of market cap for Oculus ($2b into a ~$220b market cap)
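(Back of the envelope, assuming a market cap around $1.7T at the time: 14.3 / 1700 ≈ 0.0084, so roughly 0.85%, give or take the day's price.)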
Seems about par for Facebook when it comes to company-shifting acquisitions.
Little known secret is they paid $2.7B, not $2B. And Zuck and the FB head of M&A were talking shit about John Carmack’s crazy wife, who was doing his negotiation for him. On WhatsApp no less.
> the weird setup where they only buy non voting shares is to not trigger any regulatory review
Do regulators actually fall for these sort of things in the US? One would expect companies to be judged based on following the spirit of the law, rather than nitpicking and allowing wide holes like this.
>One would expect companies to be judged based on following the spirit of the law, rather than nitpicking and allowing wide holes like this.
The letter of the law is what people follow. The spirit, or intent, of the law is what they argue about in court cases.
If the regulation says 49% and a company follows it, who's to say they're exploiting a loophole? They're literally following the law. Until there is a court case and precedent is set.
The Clayton Act explicitly includes partial acquisition as still being covered. "No person engaged in commerce or in any activity affecting commerce shall acquire, directly or indirectly, the whole or any part of the stock... [where] the effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly."
There may be some other regulations that are avoided by a partial acquisition, but it doesn't bring it wholly outside of the relevant antitrust laws.
I guess "intent" is what matters really. If the intent is to avoid regulatory review and you could prove that intent, then they're trying to exploit it. That in itself should probably trigger a review regardless. If they've arrived at 49% for some other reason(s) than just to avoid regulatory review, then fair enough.
I agree. Slightly amused at the running from one side of the boat to the other. So we're done with the Metaverse now?
(Or maybe the metaverse needs AI bots running around … perhaps scalping tickets or something. In fact I get it though — they're looking for the Next Big Thing — as all big companies are. I even think they're on to it this time. The whole metaverse thing was just so obviously misguided, misspent capital.)
Spending $15B to hire a 28-year-old to build you some AI is certainly a move. You can call Zuck many things, but "afraid to take risks" isn't one of them.
He was spending $5b a year on whatever the metaverse was supposed to be and even renamed the company. Does anyone here use it? I think he weathers what I consider failure well. Or, conversely he can take big risks without too much blowback.
He would still be unbelievably rich even if Meta went bankrupt. He is in the unique position of having majority control of one of the world's largest tech companies and can pretty much use it to do whatever he wants. I doubt he cares much about Facebook past its ability to generate money at this point.
What is the big risk here? He has the cash to burn and he has full control of his position. Nothing will happen to him if he wastes a few billion. He is not some poor single mom who has to decide between fixing her car and paying for her kid's Christmas presents.
Cash is the least concern. I listened to a podcast about the history of Microsoft and there were times where Gates said "at least it was only money" or something to that effect.
Microsoft floundered for an entire decade on mobile and Windows Vista, losing out to Google, which was literally paying OEMs to use its software, and to Apple, which had a vertical stack and made money off hardware. It was a huge setback in terms of focus that took them a long time to recover from.
The main constraint is focus of talent to work on one thing. This is a huge move in terms of coordinated effort into this space that may or may not pay off.
I've been in the tech industry a quarter century, I've never heard this "pretty common saying" before, and most importantly I don't think it makes any sense. If anything, tech excels at disruption, where smaller competitors and new ideas are able to solve problems where "just throwing money at it" has failed.
I challenge you to name a tech unicorn whose biggest advantage wasn't the ability to hemorrhage money over a period of time that would've killed any normal business stone dead.
The risk here is that you've now put a person with no track record of success outside of being good at sales in charge of your AI efforts. You're betting that he's going to be the one to attract talent.
The right leadership might be able to get talent to work for a discount. The wrong one would lead to talent not coming at all.
If Meta is known for anything besides user privacy violations it's for taking risks that often pay off. They were laughed at for overpaying for Instagram and Whatsapp, yet both were instrumental to their current success. Their continued bet on VR is still highly criticized, yet they were the first Big Tech company in the space, they're the current market leaders, and I'm sure it will have huge ROI in the near future. So is this bet on Wang, as ludicrous as it may seem now.
> laughed at for overpaying for Instagram and Whatsapp
That's not how it went down. They were laughed at for screwing up so badly that these apps were drinking their milkshake, and then they panicked and paid way more than any fundamental analysis would price these apps at, because they weren't actually buying an app, they were paying a ransom on their monopoly.
> Their continued bet on VR is still highly criticized
Because Zuckerberg thinks people are going to go around wearing his face hugger.
> and I'm sure it will have huge ROI in the near future
The "VR play" is predicated upon VR somehow taking even more time away from its users than cellphones do. The only way it works is if people put it on when they wake up and take it off when they go to bed. Heck, maybe leave it on in some kind of REM-mode so zuck can put ads in our dreams.
Meta "succeeds" as you demonstrated, when they wait for someone else to outflank them, mostly by not being Meta because Meta is creepy and nobody likes it, and then they fire a money bomb at at. The way for VR could have succeeded is if Occulus stayed independent and focused on gaming where it shines for another decade, and then as people start to feel like it could be a building block for something more, snatch it out from under them. Instead Zuck bought it too early and smothered it with his empire of ick.
I've been all for it, look at my comment elsewhere in this thread. For me this is a huge mistake of a transaction that's neither accretive financially nor talent-wise.
This isn't the Instagram or Whatsapp transaction. Scale's been exclusively in the data labeling space.
Let's put this into perspective. OpenAI bought Jony Ive for about $5bln. Meta spent 3x that on Wang.
I'm not familiar with Scale, but data labeling can make or break a ML model. So if Scale is good at this, the $14B investment will pay itself off in no time.
There was quite loud reporting earlier this (or last?) year that Scale's data was so poor that it led to a great deal of setbacks, scrambling, and friction between the two companies. That reporting was at the top of my mind when I read this news, so I was even more baffled.
Imagine being the people at Meta who've had to deal with Scale now seeing Mark buy Scale's CEO for $14bln.
Sure, but this is a different value proposition. FB paid $1B for Instagram, which was trendy, growing fast, and already had 30 million users. FB paid $19B for Whatsapp, which was already established worldwide with ~400 million users. These acquisitions were very much in-line with FB's core product. The people saying it was risky were mostly just saying that it was a waste of money and that FB could have just beaten their competition instead of buying them.
And bringing up VR is probably not the best comparison to make. Sure, Meta is a leader here, and they are competitive with their AI team too. But "I'm sure it will have huge ROI in the near future" is just saying that it hasn't paid off and they don't have an obvious path to getting there. Shoving VR and the Metaverse into everyone's face hasn't paid off for several years, and the VR segment as a whole has remained niche despite being around for decades.
This acquisition is different. AI is not Meta's core product; it's just something hot right now, and CEOs are trying to figure out how to stuff it into their products, hoping they can figure out how to make money later. Plus, they paid a pretty big chunk of money for a company that does, what? Cleans data for LLM training? Meta's Llama team clearly has a good data group already. They paid for a few employees that are clearly popular amongst the executives in the tech industry, but I don't know how this will go in terms of attracting other talent. Unless Wang is bringing something secret along with him, I think this one is an overpayment: Meta will need to figure out how AI makes them money, and Wang will have to attract several billion dollars' worth of talent to the team. I'm skeptical that people will talk about this the same way they talk about Meta getting Yann LeCun to work for them for a lot less money.
Facebook couldn't beat them on merit because it's not very good at what it does. It does, however, have money because it was good at one thing one time, and therefore it could solve its inability to execute with M&A.
'Laughed at', but it's probably also true that they're now a prime target for antitrust action, especially over WhatsApp.
Acquisitions do happen, but it's telling when the people whose company you bought publicly disparage you (in other words, it wasn't a peaceful takeover).
What is convincing you that VR will have huge ROI in the near future? We are over 10 years in to the modern VR era and despite even Meta’s strong device sales it’s still very niche. Apple is trying their hardest and even they had to correct their sales expectations.
The thing is, I can imagine some futuristic version of AI that transforms humanity. But with VR, even in my wildest dreams with all the problems solved, it’s still just second best to smartphones and computers.
Imagine the perfect headset. Tiny, battery lasts forever, photo-realistic. I would still rather browse the Internet on my phone. I'd rather do my work on my laptop. I'd rather watch movies on my TV. What is the VR adding? Nothing but extra hoops to jump through to get things done.
The only use case that makes any sense is gaming. But only some games. It's just too niche.
> What is the VR adding? Nothing but extra hoops to jump through to get things done.
It gives you maximum immersion into a digital world. Rather than view it through a rectangular 2D window, it can encompass 360 degrees of your vision in full 3D. If you don't see how this would be appealing for consuming content, work, entertainment, etc., then I can't convince you otherwise.
VR adoption has always been held back by what is technically possible and how expensive it is. Nobody other than tech enthusiasts wants to wear a bulky headset for extended periods of time. Once we're able to produce that perfect headset that you mention, so that it's portable and comfortable like a pair of sunglasses, at an affordable price, the floodgates will open, and demand will skyrocket.
The same already happened with mobile phones, several times. The cellular phone was invented in the early 1980s. It was heavy, bulky, and expensive, and only business people and enthusiasts used them. It wasn't until the mid-to-late 90s that they got cheap and comfortable for the general public. Then the modern smartphone had several precursors that were also clunky and expensive. It wasn't until the iPhone and Android devices that the technology became useful and accessible to everyone. There's no reason to think that the current iteration is the ultimate design of a personal computer.
The same story is repeated for any new technology. VR itself has seen multiple resurgences in the last few decades. We're only now reaching a state where the vision is technically possible. There are several products on the market that come close. VR headsets are getting smaller, cheaper, and more comfortable, and AR glasses are getting cheaper and more powerful. I reckon we're a few generations away from someone launching a truly groundbreaking product. Thinking that all this momentum is just a risky bet on a niche platform would be a mistake.
> If you don't see how this would be appealing for consuming content, work, entertainment, etc., then I can't convince you otherwise.
I don't, legitimately I don't.
Okay, maximum immersion. And how does that help?
Like even just on the surface having a 360 degree view doesn't do anything. Because my eyes are on the front of my head, so I'm going to be looking forward. Stuff behind me doesn't matter much.
Same thing with 3D. Okay... but paper is two-dimensional, you know what I mean? Something being 3D by itself doesn't mean it's better or contains more information or is easier to use. I'd rather read and write on a two-dimensional surface. Reading and writing is the core of a lot of stuff, so there goes that.
The test for me really is imagining some use case and then imagining how it would be on super-advanced VR. If you try that, you'll find that 90% of use cases just fail compared to already existing technology. Imagine some perfect VR tech 5,000 years from now. Okay, now a use case: programming. I would rather program with a keyboard and mouse and a monitor. I don't want to talk to VR. I don't want a dumbass virtual keyboard; that's worse. The 3D stuff makes no difference because I'm reading text. So even with alien technology, my current computer right now would beat it.
With the phones you mention, when we envision some futuristic technology we can see how the phones would be useful. Same thing with TVs - I mean, people were envisioning wall-wide flat screens in the 60s. But when you do that with VR, the product still isn't very good. That's the difference, in my eyes.
What you're saying makes sense, and I do agree with some of it. But you're not seeing the experiences that a fully immersive digital world can deliver once the technology and UX improves.
Yes, we read and write on 2D surfaces, but a 3D environment allows you to have an infinite number of them. You could be writing on one, with a video playing in the background, while keeping an eye on a stock ticker, all inside a virtual beach, outer space, or whatever you want. You could be having a conversation with someone who looks and feels like they're physically in front of you. Have you seen the new visionOS avatars? They're incredibly realistic. Talking to a 2D video from a crappy webcam feels awfully primitive in comparison.
There are an infinite number of these experiences that we can't imagine yet, that a 2D display simply can't deliver.
As for HIDs, we'll figure it out. Voice and touch will become far more useful and user-friendly. Remember how touch keyboards on smartphones were unbearable to use initially, and tech geeks like myself strongly preferred a device with a physical keyboard? Well, text prediction and haptics got a lot better, and we invented swipe typing, so the experience improved considerably. I'm typing this on a swipe keyboard from my phone, and while I would prefer to use a keyboard on a real computer, I also like being able to type with one hand from bed. :)
So I'm sure we'll invent input devices that feel natural and friendly to use in VR as well. In the meantime you can also use a physical keyboard. There are many software developers who have adopted AR glasses, with a phone and keyboard as their mobile workstations. The tech is almost there.
If you think about it, a 2D board with keys you press to compose words is an archaic method of inputting data into a machine. It's a remnant of typewriters dating back to the 19th century, and far from an optimal method for the type of things we use computers for today.
Ultimately, we can have different opinions on whether we want these experiences or not, but the reality is that this future is inevitable, for better or worse. As we move towards transhumanism, fully immersing our visual sense in a digital world is an obvious first step. The current iteration of devices that we're used to is merely ~50 years old, and ~20 for mobile devices. It's far too early in our technological progress to consider these the best designs we can produce.
>If Meta is known for anything besides user privacy violations it's for taking risks that often pay off. [...] Their continued bet on VR is still highly criticized, yet they were the first Big Tech company in the space, they're the current market leaders, and I'm sure it will have huge ROI in the near future.
Are you using something that hasn't yet paid off as an example of how their big risks often pay off just because you are personally sure it will have huge ROI?
But I'm not actually sure I agree with the premise.
What risks is Meta known for taking? The Instagram and WhatsApp purchases were defensive moves; they were laughed at for the prices, not for the risk.
Here they are similarly being laughed at for the price.
Is there much risk beyond that?
If Instagram had petered out and people had stayed on Facebook proper, they would've been fine. Same with Whatsapp. It's not like they've been trying to push people away from their core Facebook product. More the opposite - they've used acquisitions to try to push Facebook accounts to more people.
Compare to Apple, letting Mac software flounder for a while while it focused on growing the iPhone and iPad business. Risky, worked out. Compare to Microsoft, going down years of dead ends trying to come up with a next-gen operating system - a big part of their core bread and butter - and then having to release the generally panned Vista because they bet too big on stuff they couldn't realize with Longhorn. Risky, failed. Compare to Snap, even - turning down Meta cash for independence. Risky, kinda meh results? But adding another social media app to a social media company's portfolio? Less so.
VR, on the other hand, does seem like the closest analog here. Buying their way into a non-core-competency space. There they bought the undisputed leader but it still hasn't paid off to date. Here? Eh....
What the commenter is saying is that this is no risk at all, because $14B is negligible money for Meta on the scale of a year. It can always be written off as an investment that didn't work. For a company with a $1.73T market cap, this is sometimes the loss you get in a single day of trading.
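To make that concrete, here's a quick back-of-the-envelope check in Python, using only the figures cited in this thread ($14B deal, $1.73T market cap; actual daily swings obviously vary):

    # Rough scale comparison; numbers are the ones cited above, not exact figures.
    investment = 14e9        # reported size of the Scale AI deal
    market_cap = 1.73e12     # Meta's market cap as cited in this thread

    print(f"Deal as share of market cap: {investment / market_cap:.2%}")   # ~0.81%

    # A 1% single-day stock move shifts more market value than the whole deal.
    daily_move = 0.01 * market_cap
    print(f"Value shifted by a 1% daily move: ${daily_move / 1e9:.1f}B")   # ~$17.3B

So the deal is under 1% of the company, and a perfectly ordinary 1% trading day moves more value than the entire investment.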
Yes, $14B can be considered negligible, but it's a signal, and my bet is that it's not a great one. Wang has been great at courting clients; time will tell how that translates to leadership.
Meta already had an AI division led by the venerable Yann LeCun - someone with actual AI bona fides. I'm not seeing any info about what this does to LeCun's position in Meta.
I don’t think either Demis or Ilya is available for $15bn. They’re already both comfortably billionaires. Demis seems a sensible candidate for heir to the top job at Google in the long term. Ilya is away focusing on superintelligence.
It’s not clear to me why either would take a subservient role in a company flailing incoherently around AI, rather than stick with the incredibly high-leverage opportunities they both have now.
Spending $15B is a risk. Yes, they have the money, but we're talking about a quarter of Meta's annual income. That's not nothing.
And money concerns aside, Meta needs to be a major player in AI. If they have made the wrong bet with Scale AI & Wang then the company will suffer in the long term.
Honest question: why do they so existentially need to be a major player in AI? It's a social network that connects people, serves UGC and some spam/ads, and sells advertising. Same but with photos for IG, same but with messages for WhatsApp. Which part of this will die without AI? If users are obsessed with chatting with an AI bot on WhatsApp, just plug in Grok or OpenAI like Telegram does. Is that the killer feature worth bazillions of dollars?
(everything else seems like a failed experiment, VR, Libra, Facebook phone, whatever, nobody even remembers half these things)
You're thinking in the wrong scale. Meta is a $1.7T company, for them a $15B investment is less than 1% of the company. In the span of a year, this is negligible money.
I genuinely struggle to understand how that will affect him in any way. Not even any meaningful way, I mean: how could that possibly change anything about his life, in any way?
I don't think people understand that once you're a billionaire with full ownership of your business and no risk of being ousted, making or losing billions is irrelevant.
Maybe, but it doesn’t seem to stop them from actually looking like dumbasses. But that wealth attracts fans who constantly proclaim that their latest idea is brilliant.
Maybe we should pity the poor billionaires, hopped up on T or ketamine and trapped in an echo chamber… but I think they'll be ok.
In light of these (accurate) critiques, I think I should've gone with my original wording, which referred to them still caring about their legacy regardless of how many billions they had. That's more what I was going for, but I dumbed it down, apparently too much.
The risk is the lost time from betting on the wrong horse. I don't know much about Wang, but the current phase of the AI race will likely shake out winners and losers in the tech industry that persist for many years to come.
Being first to achieve certain milestones matters a lot.
calling it a risk assumes the goal is success. sometimes you spend 15b not to win, but to make sure no one else does. it's board control. meta didn't buy potential. they bought positioning. everything else is narrative dressing.
I actually ran into the Zuck eating Thai (could be wrong) food in Palo Alto at night. He was having dinner with his wife at the restaurant's outdoor seating, which sure looked risky for someone as important as him.
OpenAI uses Scale via, e.g., data made by humans through Outlier. Part of this move could probably be seen as starving competitors of data as much as buying talent at scale.
I was looking for side angles also. The size of the deal likely gives substantial information rights, so there is probably good visibility into competitors' surface areas.
I don't understand, why would they make a super-intelligence group instead of something more ambitious like a super-super intelligence group or supreme-mega-intelligence group? Zuck is clearly not thinking this one through.
Because he put a big "AI" button on every single one of Meta's apps and surfaces. I'd bet that most of the usage is accidental. Great way to show inflated user counts in your earnings reports and get some directors promoted, sure, but not a long term strategy.
True, that AI button is shoved pretty violently into unexpected places, and surprises me in an unpleasant way, since I've never considered Meta itself as a place to search for information, let alone something trustworthy.
But this is actually interesting. Asking for medical information used to be the realm of Google Search; now it's a combo of Google/Gemini/ChatGPT/whatever. Could it be they are going to try to bite off a chunk of that market? They chose not to compete with Google Search in the 2010s, but are now taking another pass at it?
I don't really think Meta ever had a vision beyond "Facebook is a social network to connect people". Since then, their strategy has primarily been driven by their fear of being left behind, or of losing the next platform war. Instagram, WhatsApp, Threads, VR, AR, and now AI: none of them were driven by a vision so much as by the fear of someone else opening a door to a new market that renders Meta obsolete. They are good at executing and capturing the first wins, but not at innovating, redefining a market, or pushing the frontier forward, which is why they eventually get stuck, lose direction, and fall behind (TikTok, Apple Vision Pro, AI).
Yes, but they’ve definitely made a big contribution to AI / LLMs. I just don’t understand how they plan to monetize, apart from “better AI integration inside their own products”.
Are they planning to launch a ChatGPT competitor?
It seems like this acquisition is focused on technology, but what’s the product vision?
Mark wants to own platforms. He always has. That’s why they tried to make a phone, networks, VR headsets, Horizon Worlds, and now glasses.
AI is just their vessel to draw people in. It’s the flash that gets people on board. It’s the classic "commoditize your complement" move: they want to undercut the competition, and have the money to do so, as a means to pull people in rather than lose them to OpenAI or the like, who are also trying to build platforms.
Put another way: these companies want to be the next iOS or Android, and they are doing what they can to be as sticky and appealing as possible to make that happen.
Meta is a huge nation-state of a company, like Apple or Google, but its actual sources of income are arguably precarious: in the past decades, new social media platforms have cropped up and become really popular pretty frequently. AI and VR are mostly ways to find new sources of income: Meta has the means to out-invest smaller companies like Anthropic, but not, like Apple, the obligation to fit it into an existing product.
They sell ads. Their strategy is to use AI to take over more of the marketing process. They want to move spend from in-house marketing (creative, strategy, analytics) to Meta.
meta's not trying to win on assistant UX or consumer wow factor. their AI strategy is defensive infrastructure. they’re building open weights, subsidising inference, investing in supply chain choke points like scale not to monetise directly, but to force the market to move on their terms. the vision isn’t a product. it’s insulation. if AGI goes closed and centralised, meta’s out. so they’re betting on keeping the floor open long enough to stay in the game. everything else is noise.
Meta monetizes content. AI makes it easier and cheaper to create content. If everyone has access to high quality AI, high quality content may be more plentiful and that content can be posted on Meta's platforms, which would make it more money.
Still confused: I show up to "consume content" from people I follow, not from Meta. Sure, there's occasional trash and spam and ads, and "recommended", and "promoted", but mostly things that people post. If content creators still come to post on, say, Instagram, which provides distribution and monetization, why does it matter which tools they used? Should Meta then try to acquire, say, Adobe Photoshop too, to produce better IG content?
> What’s Meta’s strategy here? What’s their vision?
The metaverse was their strategy. Then AI hype took over Silicon Valley and the unloved, under-resourced AI team at Facebook became the stars of the show. Meta are now standing on the shoulders of those teams and the goodwill they generated from their foundational and open research efforts.
An AI-first strategy from Facebook would not have involved a rebrand or open-sourcing any research or models, and would probably have looked a lot like OpenAI or Grok.
When a company reaches a certain size they basically become a bank. Meta owns a bunch of social media properties and the long term prognosis for that industry is not great. It would make sense to get into other areas if they can do it.
> What’s Meta’s strategy here? What’s their vision?
I don't get it, either. Facebook/Instagram/WhatsApp are ways to communicate with people you know, and they have a monopoly on that. (Well, Instagram is also softcore porn and product placement...)
TikTok beat them as mindless entertainment, showing people videos they're likely to watch until they end, and Zuck freaked out. Sometimes people would rather just watch TV than hang out with their friends! OMG! TikTok's bottleneck is that humans have to create the videos, so if Zuck can generate videos to maximize watch time, he wins.
Paying billions of dollars for a data-labeling company, though... Well, I guess it's not easy to put together a bunch of digital sweatshops in Kenya and the Philippines, but is it worth that much?
We intentionally didn’t use them at all for Llama 2 and mostly avoided using them for Llama 3, but execs kept pushing Scale on us. Total mystery why until now, guess this explains it.
I guess Mark Zuckerberg likes Alexandr Wang a lot. They have a lot in common: they both value intensity and are willing to engage in morally dubious tactics. The culture of Scale sounded quite a bit like Meta these days (and not Meta in the mid-2010s), and may indeed be what Zuckerberg envisioned Meta's culture should be. So even if Scale's product isn't the best, it probably seemed like delegating to a copy of Zuckerberg himself, so it just "felt right".
However, a top research lab needs to be competitive yet still have an environment that fosters intellectual honesty. Meta Gen AI did not seem like that, and I don't think Scale's culture is like that either.
It was initially killed by a software filter. Those are tuned more strictly for (some) new accounts. It had nothing to do with the content of the comment, though admittedly that is less "interesting".
Meta buying their way into trends generally seems to pan out (WA, Insta, Oculus). Any reason to think this will be different? I’m not familiar with Scale but the particulars of this deal seem odd to me.
People are saying this is an acquihire, and the article states that Scale AI's CEO will join Meta in a "top leadership role" in their new Superintelligence lab. Does this mean LeCun is stepping down?
Superintelligence, really? I'm really looking forward to their "super intelligent" stochastic parrots — or stochastic llamas, as the case may be. The only thing "super" here is the ludicrous hype. It's totally out of control.
Sounds like they're getting paid based on his note to employees:
> "The proceeds from Meta's investment will be distributed to those of you who are shareholders and vested equity holders [...] The exceptional team here has been the key to our success, so I'm thrilled to be able to return the favor with this meaningful liquidity distribution."
Honestly if this acts as a liquidity event for a whole bunch of current employees, while at the same time giving off "Meta hand picked the CEO and whoever they felt were the best AI engineers and jumped ship" energy, I wouldn't be tooo surprised if current "scaliens" view this as the inflection point, and decide it's not worth staying for the other ~51% of their shares.
scale sits on the request layer for data, labeling, eval loops. even without direct coordination, owning part of that pipe lets meta infer which teams are scaling, which modalities are getting attention, how fast the frontier’s moving based on throughput and task complexity. it’s telemetry without consent. they don’t need your weights if they can see your ask patterns. that’s the real intel layer nobody’s talking about.
Reminds me of "Don't dig for the gold, sell the shovels".
Could also be read:
> Meta spends 10% of last year's revenue to acquire 49% of a top AI data company and poach their leadership, to ensure they are a key player in what could be a ~5-trillion dollar industry by 2033.
Meta has a history of this. Acquiring Oculus (and leaning in on VR), Ray-Ban partnership (and leaning in on AR)... etc.
These all just seem like decisions to ensure the company's survival (and participation) in whatever this AI revolution will eventually manifest into.
What top AI company? Certainly not Scale, right? You realise that frontier labs don't really use Scale anymore exactly because they can't be trusted not to sell their secret-sauce human data collection protocols. No-one in the industry takes this guy seriously.
It's always a warning sign when the only thing I know about a CEO is how many podcasts and media events they do each week, and nothing about their business.
Bringing up Oculus and VR is quite fitting, because I think it's the same problem. Meta is attempting to find their next business, but like with social media they don't really have a plan. It worked out well with Facebook and some of the purchases surrounding social media, but it was never a clear path to profit, so they slapped ads on it.
Why does Meta want VR to work? Create the Meta-verse? We're back at why, what problem does it solve? Same with AI, what's the goal here, besides being an AI company?
Just a thought, but VR/AR is constantly recording video and audio.
It wouldn't surprise me if at least some of that data is being piped back to Meta. Data that can later be used to train LLMs.
Even if this isn't enabled on consumer models, on the corporate side it can make sense. Say you're a risk adjuster for a factory. Walk around with your VR headset. In real time, MetaOshaHelper can identify issues, and you can tag them yourself.
Then send the video back to your on-prem LLM for data processing. New hires get a VR headset which can use this data to help with onboarding.
Or... Robots will use the data and replace human workers entirely.
I sort of doubt that most businesses would want that. Sorry to latch on to one specific thing in an interesting comment, but just imagine having AI tracking in the workplace, e.g. OSHA violations, violations of building code and workplace regulations in general. You'd have shitty manufacturers, builders, trucking companies, kitchens, warehouses and everything in between begging you to stop.
Occasionally you're going to ignore things that get flagged, but I would love an AI to say, "oh, by the way, that machine over there isn't latched on correctly and could fall over if not corrected."
This deal brings into focus whether the shovels are data or GPUs. The advantage of data, surprisingly, comes down to perishability: a GPU fleet remains cutting-edge for only one product cycle, while data keeps its value.
I mean, have you seen the interview he did with Theo Von? The guy is a straight-up alien in the way he talks and acts; all those memes about him being a lizard aren't exactly far off.
He was isolated from the real world starting at around age, what, 22?
I'm all for calling out his random flailing in this space for what it is, but it always strikes me as strange when people are surprised that he's weird and robotic. I'm betting he never learned how to actually interact with other professional humans.
He's lived in a golden tower surrounded by people who agree with him or want something from him since he was 21 or 22. Imagine what you would be like if you didn't have any struggles from such an early age. Imagine what your personality would be like if you didn't have substantive, non-transactional, human interactions since the age of 22.
I kind of feel bad for the guy. His wealth and fame have ensured that he would never be normal, or anything approaching normal. Think about it - how does he even know if he has a bad idea? Do you think there are a ton of people around him that want to call out whatever dumbass idea he has? I doubt it. B-b-b-illions of dollars tends to flavor conversations, I would imagine.
That being said, I don't feel that bad, because he can literally change the world and chooses not to.
Roblox, Rec Room, Epic/Fortnite are worth about 10-20% of Meta. There is a market there, Horizon just hasn't worked out and they don't have anything like the amount of technology behind Unreal Engine/Fortnite or even Roblox, closer to Rec Room but without the game design chops.
> Today's investment also allows us to give back in recognition of your hard work and dedication to Scale over the past several years. The proceeds from Meta's investment will be distributed to those of you who are shareholders and vested equity holders, while maintaining the opportunity to continue participating in our future growth as ongoing equity holders. The exceptional team here has been the key to our success, so I'm thrilled to be able to return the favor with this meaningful liquidity distribution.
Is this the same company that has their recruiters (or whatever the hell they're called) spamming me incessantly via all my emails, even the throwaway ones I've used literally one single time that they presumably got via some unscrupulous means, about "annotating" data for training AI? And their rate is something laughable like 20 euros an hour?
We truly live in a clown world when a casual $14B is being chucked at garbage like this, I hope those Iranian nukes turn out to be real this time and I get to be the first in line to be cleansed via nuclear fire, 'cause I'm tired boss.
Word on the street is scale.ai knows which datasets/data approaches Anthropic and OpenAI used to do their RL reasoning training. Meta is paying for that know-how/information.
Seems like desperation to convince the industry that they're still relevant in AI after the fiasco that was Llama 4. Scale doesn't create foundation models; they compile proprietary datasets that everyone has already licensed and trained on - not a lot of recurring value. Maybe this slows down competitors a bit, but I doubt it.
Why link to this copyright-maximalist site with a paywall? They argued against copyright in 2001 (NYT vs Tasini) when it was in their favor, and recently pro-copyright (NYT vs OpenAI) because it was in their ..favor. They want to put restrictions on our creativity, to extend copyright from expression to abstractions. I would not touch it with a 10-foot pole.
This is as bold as Instagram and WhatsApp were at the time, typical Zuck move, most likely a successful story, perhaps bigger than the other two.
See, when he paid $1 billion in 2012 for a company with 7 employees, everybody thought it was the biggest mistake he'd ever made.
When he paid $21.8 billion in 2014 for a messaging company with 55 employees, people said similar things, but both turned out to be great successes in market dominance.
Scale serves the top-tier AI companies, and Alexandr is a prodigy by all means, so hell ye.
Unlike many other cases where M&A simply killed the companies/products, here it is going to be a power multiplier: Meta's data flows to Scale and back, making Scale better and Meta's AI better.
I have been working at Scale for 2 years now. Their data is shit; the majority of the contributors live in the third world, and all of them use GPT and other big models.
I'm eagerly waiting for the AI hype to disappear to see all those corporates lose their money.
1. Mark no longer wants to run the company and he is picking Alexandr Wang.
2. Mark believes that AI is the top priority, his teams have failed (this is all clearly true so far), and he wants to completely change the org structure of his AI efforts (not recommendation systems, but everything else).
3. Mark wants to cut off the supply of information to other labs.
4. Mark thinks that full access to Scale AI's data could accelerate their research, and somehow they couldn't do this with a less expensive option.
(2) seems semi-reasonable (in that Meta has failed with near infinite resources) but acquiring a handful of execs for this price seems absurd.
(3) seems like a conspiracy theory and the technology is moving away from this path of data collection, although it is still important at this very moment.
(4) Maybe.
I guess some combination of all 4 is plausible. But the amount of money seems, frankly, absurd.
I wonder if it's really an adjacent benefit to your #4: Mark doesn't just get full access to Scale's data, they also get access to an army of >100,000 data labelers to use however they want.
It doesn’t seem at all wise to give the least ethical among us access to a super intelligence. Not trying to single out Zuck specifically, though his challenges with ethics do seem well documented.
Rather, I’m speaking about the entire industry. Humanity isn’t demanding this, only those at the top seem to want it, and they seem to want it so they can keep more share of the pie for themselves, and decrease the size of everyone else’s share.
I'm really not sure why they are acquihiring a digital sweatshop owner. It's not obvious that Wang has any special insight or expertise related to building a superintelligence other than data annotation...
Yeah, half of my AI skepticism isn't that the tools don't work. It's that I'm having a tough time figuring out how these things are ever going to produce ROI.
Like, at some point the end product needs to be a literal genie's lamp or fountain of youth.
that’s the trap though. assuming roi has to show up as product. the real return is upstream. market positioning. narrative control. gatekeeping the stack. the genie’s lamp isn’t the deliverable. it’s the excuse to reorganize power under the hood while everyone chases the magic trick
Scale AI has deep cooperation with military agencies and a fresh large contract with the Gulf state with the largest US military presence. It's likely they're building a new generation of combat command systems and the like, consolidation of surveillance and management tooling in the ongoing and future US wars isn't surprising.
[Okay so the third option that I thought of but decided not to put down was: literal genie's lamp, fountain of youth, or robot army. Because then who cares if you collapse the economy if AGI, you'll be safe with your robot army. Not particularly happy that this option is potentially not far from the mark.]
If "absurd" implies "too high": I always thought strong reactions to valuations a bit strange. Businesses are complicated and assuming that somebody who is willing to spend billions of dollars thought a bit harder about the value than what I can provide with my gut reaction seems reasonable.
So I started to treat it as more of an update, as in "Huh, my idea of what something is worth just really clashed with the market, curious."
Does not mean the market is right, of course. But most of the time, when digging into it and thinking a bit more about it, I would not be willing to take the short position and as a consequence moderate my reaction.
yeah they are. but the numbers aren’t the product. they’re the filter. you throw 14b at the space not to get ROI but to set the bar so high no one else can enter without bending the knee. absurd is the point.
These people believe they're on track to create life and displace the majority of labor in the world. Nothing else makes the level of investment make sense. It's a prisoner's dilemma where they all think they need to try because regardless of the likelihood of success, the expected value remains astronomical and the risk of not being the winner is extinction.
Meta bought Scale for $14.3 billion. This is the new way Google, MS, and Meta buy other companies and avoid any scrutiny: they invest huge amounts of cash, and then the CEO comes to work for them. Antitrust law is circumvented while the big companies gobble up any competition.
I haven't read the article yet (I'm about to) but I'm curious: what are Yann's views on this? He's been pretty vocal that LLMs won't lead to AGI.
Edit: read the article. No mention of Yann. What kind of journalists are these people, to not get viewpoints from different angles? They might as well just reproduce press releases and be done with it.
You can believe LLMs won't lead to AGI and still believe that spending billions to have a best in class model will allow you to make products that will recoup that investment.