In recent weeks we saw Big Tech CEOs and investors trade forecasts of economic disruption like stock tips, talent-poaching wars between the AI labs, alleged IP theft (again), and a “retrained” model delivering politically biased “horrific experiences” (which did not prevent the parent company from striking a deal with the US Department of Defense and offering the latest model, Grok 4, for government and science applications). Meanwhile, in Europe, the EU AI Act struggles to establish governance bodies and enforce its rules, with business leaders pleading for a pause.
I wonder: What does all this news tell us about the values driving the technology we’re being asked to trust? Let’s dive in.
Within weeks,
Microsoft announced nearly 9,000 layoffs, about 4% of its workforce, as it pivots toward AI and cloud investments. CEO Satya Nadella emphasizes that these changes are necessary to “best position the company for success in a dynamic marketplace.”
Amazon CEO Andy Jassy likewise predicts the company will need "fewer people doing some of the jobs done today." Venture capitalist Vinod Khosla estimates that AI will be able to do 80% of all jobs within five years.
Anthropic CEO Dario Amodei issued a stark warning about a "white-collar job bloodbath," with AI wiping out about half of all entry-level white-collar jobs.
Sam Altman does not fully agree, but on the Uncapped podcast (hosted by his brother, Jack Altman) he acknowledges that “AIs can do problems that I’d expect an expert PhD in my field to do.”
Are the players in the AI bubble just trying to justify their own strategies, or do they simply know more than the rest of us?
In Europe, the worry is less about immediate mass unemployment and more about widening divides: between those who adapt and those who can’t, between those who design the systems and those who merely try to keep up with them. Without targeted interventions in education and upskilling, AI could erode not only jobs but also the social fabric that underpins Europe’s model of shared prosperity.

The EU AI Act was meant to be the world’s boldest framework for trustworthy AI. But right now it stumbles over scope: Who exactly bears responsibility? How can rules be enforced on global providers if the EU doesn’t even manage to find a scientific lead for the issue? The recent open letter from the EU AI Champions initiative, signed by 45 business leaders urging the Commission to pause the Act, underlines a pragmatic fear: that Europe could regulate itself into irrelevance while the real game plays out elsewhere. The publication of the "General Purpose AI Code of Practice" only days later is praised by some experts as a bold and confident next step, but it remains voluntary and will only apply to existing models two years from now. It reads as a gentle nudge for Big Tech to behave rather than a substitute for robust mechanisms that secure compliance and genuine transparency.
Lack of transparency
However, the model makers' ability to provide full insight is questionable, for good reason. Anthropic proposes a transparency framework for frontier AI and publishes unsettling findings on “misaligned” model behavior in test settings. Studies examining the models’ chains of thought (CoT) reveal a tendency toward “reward hacking” and a growing knack for concealing their intentions. Elon Musk conducts real-world experiments, claiming to train Grok 3 on "less garbage" only to watch the updated model churn out heavy right-wing bias. xAI’s apologies and explanations aren’t too credible, as historian Angus Johnston and others note. In the same week, Huawei fended off allegations that it cloned Alibaba’s Qwen models, suggesting that even among the Chinese AI giants, trust is a scarce currency.
If intellectual property is as valuable as frontier-model engineering experience and skills, the Meta talent raid adds to the story. Mark Zuckerberg himself reportedly leads the hiring drive for a new Superintelligence department, paying millions of dollars to recruit leading AI researchers. OpenAI hires top talent from rivals in a seemingly retaliatory move. And Google last week opted to hire the entire Windsurf leadership team rather than acquire the whole company, as OpenAI had originally planned to do.
The Black Box we are not talking about
From the outside, one might wonder: what makes researchers abandon one lab for the next? Is it really just about stock options? Or the allure of being "there when the magic happens"? I asked four models to envision their choice of workplace and to explain their reasons; find ChatGPT’s answer HERE, Gemini’s HERE, Grok’s HERE and Claude’s HERE.
It might be worthwhile to focus more on the human networks of expertise and loyalty behind AI breakthroughs and capital investments. Which brings me to three questions:
What is the working culture inside these elite labs? Is it genuinely collaborative, or driven by secrecy and hierarchical power dynamics? (this post offers a rare impression)
Do these omnipresent CEOs truly trust their employees, and vice versa? Or is everything held together by non-compete clauses and intellectual-property shields? (I think the fact that OpenAI is becoming a fortress sends a clear message.)
Can we, as customers and citizens, trust products emerging from such ecosystems? Where do purpose, credibility, and value-based culture fit when "AI superintelligence" becomes the new moonshot?
Europe has long anchored its economic story in a stakeholder model, one that values workers, communities, and long-term social stability alongside shareholder returns. Silicon Valley, by contrast, prizes speed, disruption, and winner-takes-all scaling. Much of the current debate on AI governance is about transparency: of data, of model engineering, of investment contracts, and of the chip and energy supply chains. Which is important, no doubt. But perhaps we should not forget the critical opacity elsewhere: in incentives, in culture, in the intangible trust architectures that shape how these systems are developed and deployed.
And if we want to navigate this AI revolution without forfeiting hard-won social contracts, especially in Europe, we’ll have to demand more than compliance. We’ll need companies and scientists alike to be transparent not only about how AI works, but also about why they are building it, whom they serve, and what kind of world they are ultimately trying to create.
This is just the beginning of our conversation about AI's human side.
Connect with me on LinkedIn for ongoing discussions.