We are currently living through a period of “compounded uncertainty”, in which long-term plans matter far less than getting the next steps right. It is not merely that one sector of our global order is shifting; it is that the geopolitical, political, and technological tectonic plates are grinding against each other simultaneously.
In the Middle East, the ongoing conflict remains a volatile variable with no clear off-ramp yet, casting a shadow over global energy markets and weighing heavily on local economies. From the US, the “Trump Effect” adds a further, so far unknown layer of unpredictability. EBITDA, it seems, is now defined as earnings before interest and Trump’s daily actions.
However, perhaps the most intense source of today’s “jittery” atmosphere in most companies is the accelerating pace of Artificial Intelligence and the uncertainty about the implications that come with it.
Two recent publications have perfectly captured this cultural and economic anxiety:
- Matt Shumer’s “Something Big Is Happening”: This essay went viral by comparing our current AI moment to the early weeks of 1914 or February 2020: the “eerie window” before a paradigm shift becomes a lived reality. Shumer argues that the latest models (like GPT-5.3 Codex) have crossed a Rubicon from tools to autonomous workers, capable of self-debugging and of complex reasoning that makes previous versions look like calculators.
- The Citrini Report: Their speculative “2028 Global Intelligence Crisis” scenario—which envisioned AI agents destroying the margins of “friction-based” businesses like Visa, Uber, and Salesforce—was a primary driver behind the recent tech sell-off. While the authors labeled it a “thought exercise,” the market’s reaction proved that investors are no longer viewing AI disruption as a distant “maybe,” but as a structural threat to current valuations.
Both articles are good, entertaining reads, but arguably far more speculative than data-driven or grounded in real research. Still, they were “entertaining” enough to send certain markets down, which is itself an indicator of the uncertainty we live in.

Sorting AI Signal from Science Fiction Noise
The speed of the development, the weight of certain aspects of it, the “magic-like” examples we all experience daily (or at least should): all of this creates a uniquely complex environment for every company.
In our consultancy mandates, our role is mainly to filter the enormously important AI signal from the overwhelming, but often purely speculative, AI noise. And then to play the “dating app algorithm”: matching what is available and working today in terms of technology with use cases where it really makes sense. There are many where it does, and many where it doesn’t (yet).
Yes, for software development (or, to be more exact, for coding) AI has very obvious implications as a huge enabler. But no, beyond pilots and presentations, agentic commerce is not ready for primetime yet; it is still stuck in the messy complexity of reality. So forget it? Not at all: check out MCP, UCP (and whatever three-letter acronym is next around the corner), think about how it might disrupt value chains (disintermediate OTAs?), and prepare (system architecture, data quality, etc.).
Yes, the various generative tools are extremely impressive. But so, unfortunately, is their very foreseeable impact on fraud, identification, and KYC processes. Document forgery was yesterday; deep-faked live video calls are the frontier now. It might be a good idea to prepare and adopt alternatives based on cryptography, alternatives that will also be needed to identify agents in a future agentic-commerce environment.
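To make “alternatives based on cryptography” slightly more concrete: the core idea is that identity is proven by answering a fresh challenge with a secret only the legitimate party holds, something a deep-faked video stream cannot do. The sketch below is purely illustrative and uses a toy shared-secret HMAC with hypothetical function names; real identification systems would rely on asymmetric signatures (e.g. Ed25519) and standardized protocols instead.

```python
import hashlib
import hmac
import secrets

# Illustrative challenge-response check of a counterparty's identity.
# Assumption: a secret key was provisioned out of band during onboarding.
# (Production systems would use public-key signatures, not a shared secret.)

def issue_challenge() -> bytes:
    """Verifier creates a fresh random nonce, so responses cannot be replayed."""
    return secrets.token_bytes(32)

def respond(secret: bytes, challenge: bytes) -> str:
    """Claimant proves key possession by keying an HMAC over the challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Demo: the holder of the right secret passes, an impostor does not.
secret = secrets.token_bytes(32)
challenge = issue_challenge()
assert verify(secret, challenge, respond(secret, challenge))
assert not verify(secret, challenge, respond(secrets.token_bytes(32), challenge))
```

The same pattern generalizes to agent identification: an agent that can sign a merchant’s fresh challenge with a registered key is distinguishable from one that merely looks or sounds right.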

Yes, the advances in robotics are also big. But no, they will not solve the staffing crisis in many industries any time soon. Robots are already “employed” at scale where processes are specific and predictable, warehouse logistics being the perfect example; no wonder Amazon is the biggest “employer” in this field. Hotel housekeeping? Not so much.
Given the enormous speed of development, the hard part is sorting out what is relevant now; what is a very likely future development that should already be on the short-term radar but is too early to implement; what is still years away; and, last but not least, what is more or less entertaining science fiction.
Making these distinctions while designing and implementing a solid strategy is our core competency – besides staying calm.