Introduction: When Technological Frenzy Drowns Out Independent Thinking
We are in an unprecedented era of technological noise. Daily headlines proclaim that AI will destroy certain jobs, reshape all industries, or that a giant’s market value has skyrocketed by betting on a particular model. In this environment, the greatest temptation for CEOs and decision-makers is to mistake “market consensus” for “market truth.” However, the trajectory of industry development has never been driven by consensus, but by the minority who dare to stay calm amidst the cheers of the crowd and see the path when others doubt.
The root of the problem lies in the fact that evaluating a technology’s “potential” is far easier than assessing its “current practicality.” This leads to a peculiar industry phenomenon: everyone enthusiastically discusses what AI “can” do, but delves less into what it “should” do in specific business contexts and what users are “truly willing to pay for.” This disconnect is the breeding ground for groupthink. When everyone is running in the same direction, few stop to ask: does the end of this road really have what we want?
Why Is “Following the Mainstream” Fatal in AI Transformation?
The answer is simple: because AI is not a single product, but a collection of technologies with vastly different capabilities and fragmented application scenarios. Blindly following trends means betting resources on someone else’s race track while ignoring the potentially more fertile soil in your own backyard.
Looking back at the past two decades of tech history, the winners of each major transformation were often not those with the loudest initial voices. When smartphones rose, the market consensus was that BlackBerry’s physical keyboard and enterprise security were unshakable; in the early days of cloud computing, the mainstream view was that large enterprises would never entrust critical data to someone else’s servers. These consensuses sounded reasonable but overlooked users’ pursuit of better experiences and technology’s power to disrupt cost structures.
Applying this logic to AI, we can see several dangerous “consensus traps” forming:
- The “Must Build Our Own Foundation Model” Trap: As if without its own large model, a company loses its ticket to the future. But for 95% of enterprises, using APIs or fine-tuning open-source models offers the optimal cost-benefit solution.
- The “Full Automation Replaces Human Labor” Trap: Viewing AI merely as a cost-cutting tool, overlooking the new value dimensions and service depth created by human-machine collaboration.
- The “Chasing the Coolest Applications” Trap: Investing resources in generating marketing images or videos while neglecting AI’s role in the “unsexy but critical” areas—optimizing supply chain forecasting, improving customer-service resolution quality, or accelerating internal knowledge flow.
The table below compares AI strategies driven by “groupthink” versus “independent judgment”:
| Strategy Dimension | Groupthink-Driven AI Strategy | Independent Judgment-Driven AI Strategy |
|---|---|---|
| Starting Point | Fear of Missing Out (FOMO), market noise | Core business pain points, unique data assets |
| Investment Focus | Chasing star teams, hot technology trends | Internal process transformation, employee skill enhancement, data governance |
| Success Metrics | Launched AI features, media buzz | Improvement in key metrics (e.g., customer satisfaction, operational efficiency), ROI |
| Decision Basis | Analyst reports, competitor movements | Internal experimental data, frontline user feedback |
| Risk Appetite | Avoids deviating from industry “standard practices” | Allows controlled risks to test core assumptions |
More critically, AI’s development trajectory is not linear. According to Stanford University’s “2025 AI Index Report,” the training costs of top global AI models are growing at more than 30% per year, while the marginal gains on specific professional tasks are beginning to slow. For most enterprises, this means that indiscriminately pouring huge sums into “bigger and stronger” models yields diminishing returns. The real opportunity is shifting toward combining existing model capabilities more skillfully with domain-specific knowledge.
```mermaid
mindmap
  root(AI Decision Groupthink Traps)
    (Technical Myths)
      Must build own foundation model<br>Ignores API and fine-tuning benefits
      Pursues parameter scale<br>Ignores application scenario fit
      Believes in full automation<br>Undervalues human-machine collaboration design
    (Market Myths)
      Chases hot application scenarios<br>(e.g., generative marketing)
      Ignores mundane but critical<br>operational optimization scenarios
      Imitates competitor layouts<br>Lacks first-principles thinking
    (Organizational Myths)
      Establishes independent AI departments<br>Decoupled from core business
      Uses technical metrics as KPIs<br>Rather than business outcomes
      Fear-driven culture dominates<br>Lacks experimentation and error tolerance space
```

From CES to AI: Those Tech Turning Points Misjudged by Consensus
History is the best teacher. The fate of the Consumer Electronics Show (CES) is a classic case. Once online video and virtual exhibition technology matured, industry analysts almost unanimously declared the death of physical trade shows: “Why spend millions on travel to see products that can be browsed online?” The logic seemed airtight. Yet after the COVID-19 pandemic, people flocked back to Las Vegas, not because online tools were inferior, but because the serendipitous discoveries, trust-building, and inspiration sparked by physical interaction are difficult for any digital platform to replicate. The consensus overlooked a fundamental need of humans as social beings: face-to-face connection.
The same script has played out repeatedly in tech history:
- 3D TV: Once seen as the next big thing in home entertainment, hardware manufacturers and content providers formed a powerful alliance to push it. But consumers rejected clunky glasses, and there was a lack of compelling content, leading to rapid market decline.
- Metaverse Frenzy: Around 2022, it seemed every company needed a metaverse strategy. However, beyond gaming and specific social scenarios, most “enterprise metaverse” applications lay idle due to clunky experiences and the lack of a clear value proposition. According to Gartner tracking, by 2025 fewer than 15% of enterprise metaverse projects had achieved their stated business goals.
- Contrasting Case: Online Streaming Media: When Netflix early on shifted from DVD mail to streaming, industry consensus was that internet bandwidth was insufficient and licensing models were unfeasible. But the company, based on user data (viewing habits, buffering tolerance), made decisions contrary to the “common sense” of the time, ultimately reshaping the entire entertainment industry.
These cases reveal a pattern: When the driving force behind a technology comes mainly from supply-side consensus (vendors, investors, media) rather than spontaneous, sustained adoption and willingness to pay from the demand side (end-users), the risk of a bubble increases sharply. Certain application areas of AI are showing similar warning signs.
Regulation: Guardrail or Roadblock? A New Test for Leaders
Beyond technical judgment, leaders in the AI era must also navigate an increasingly complex regulatory environment. Here lies another “following” trap: passively waiting for regulations to become completely clear before acting. In an era of global competition and technology iteration measured in months, this conservative strategy equates to forfeiting market opportunities.
The core spirit of the recent U.S. government AI executive order and national framework is noteworthy: it attempts to balance “promoting innovation” with “managing risk,” emphasizing the importance of federal-level consistency to avoid businesses facing fragmented regulations across 50 states. This sends a key signal: Future winners will be organizations that can proactively participate in shaping rules and innovate to the extreme within compliant frameworks.
Taking the EU’s “Artificial Intelligence Act” as an example, its risk-based regulatory approach, while increasing compliance costs, also establishes market differentiation standards for “trustworthy AI.” Forward-looking companies won’t see it merely as a cost, but transform it into guiding principles for product design and a cornerstone of brand trust. According to McKinsey analysis, companies that proactively integrate ethical and compliant design (Ethical by Design) into their AI development processes reduce the risk of needing major post-launch modifications due to regulatory or public opinion issues by about 40%.
The table below outlines the AI regulatory orientations of major global regions and their potential impact on corporate strategy:
| Region | Core Regulatory Orientation | Potential Business Impact | Corporate Strategy Implication |
|---|---|---|---|
| United States | Innovation-first, risk-tiered management, emphasizes agency coordination and federal leadership. | High market flexibility, but may face state-level legal challenges; significant cooperation opportunities in defense and advanced tech. | Actively participate in standard-setting, build strong legal and compliance teams, leverage flexible environment for rapid iteration. |
| European Union | Rights-based, strict ex-ante (pre-market) regulation based on risk, high penalties. | High compliance threshold, slow market access, but good circulation within the single market post-compliance, easy to build trust brands. | Integrate compliance considerations from the design phase, make “trustworthy AI” a core product selling point, prioritize high-value, compliance-sensitive applications. |
| China | National security and social governance-oriented, emphasizes controllability and manageability, promotes independent technology systems. | Vast market but unique rules, many data cross-border restrictions, significant government collaboration project opportunities. | Must deeply localize, closely integrate with the local ecosystem, clearly understand and align with national-level AI development goals. |
The leader’s role is to understand the deep logic behind these regulatory trends—is it to protect citizen rights, safeguard national security, or ensure technological sovereignty?—and adjust global market entry strategies and resource allocation accordingly.
```mermaid
timeline
    title AI Regulation and Corporate Response Evolution Timeline
    section 2023-2024 Concept Formation Period
        EU AI Act reaches preliminary agreement : Companies begin forming<br>compliance and ethics teams
        U.S. issues AI Executive Order : Large tech companies<br>strengthen policy lobbying
        China's generative AI management measures take effect : Chinese companies quickly<br>adapt to filing system
    section 2025-2026 Framework Implementation Period
        Major jurisdictions'<br>laws formally enacted : Compliance becomes a key<br>path to product launch
        International standards bodies (ISO/IEC)<br>release key standards : Companies adjust development<br>processes based on standards
        Cross-border data flow rules<br>further clarified : Impacts global AI service<br>architecture deployment
    section 2027 Onwards Normalization and New Challenges
        Regulatory Technology (RegTech) rises : Use AI tools for<br>automated compliance detection
        New regulations discussed for<br>Autonomous Agents : Leaders need to anticipate<br>next round of regulatory focus
        Geopolitical influence intensifies : Technology standards and supply chains<br>become strategic tools
```

Cultivating Anti-Fragile AI Leadership: Where to Start?
So, in an environment filled with noise, how should leaders exercise the muscle of “independent judgment” to build an organization that can master AI transformation rather than be disrupted by it? This requires a systematic approach, not just personal intuition.
Step 1: Reshape the intelligence system to cut through data noise. Stop over-relying on second-hand analysis reports. Establish mechanisms to draw insights directly from the market front lines, from your own product usage data, from customer service conversations. For example, instead of focusing on “Top 10 AI Trends in Retail,” a retail company should deeply analyze its own customers’ search logs to see which needs are unmet by the current search engine—that might be the starting point where an AI assistant can create the most value. According to a survey of successful digital transformation companies, their leaders spend 35% more time analyzing internal first-hand data than the industry average.
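To make the search-log idea concrete, here is a minimal sketch of mining first-hand data for unmet needs. The field names `query` and `result_count` are illustrative assumptions (e.g., rows parsed from a search-log export), not a reference to any specific system:

```python
from collections import Counter
from typing import Iterable, List, Mapping, Tuple

def unmet_search_needs(log_rows: Iterable[Mapping[str, str]],
                       min_count: int = 20) -> List[Tuple[str, int]]:
    """Rank queries that returned zero results, a rough proxy for unmet demand.

    Each row is assumed to carry hypothetical fields "query" and
    "result_count" (e.g., produced by csv.DictReader over a search log).
    """
    zero_result: Counter = Counter()
    for row in log_rows:
        if int(row["result_count"]) == 0:
            # Normalize so "Vegan Shoes" and "vegan shoes" count together.
            zero_result[row["query"].strip().lower()] += 1
    # Only queries that miss repeatedly are worth acting on.
    return [(q, n) for q, n in zero_result.most_common() if n >= min_count]
```

Queries that surface at the top of this list are candidate starting points for an AI assistant: demand that customers keep expressing and the current product keeps failing to serve.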
Step 2: Establish a culture of rapid “hypothesis-experiment-learn” cycles. View AI projects as a series of business hypotheses to be validated, not as inevitably successful technology deployments. Set clear, small-scale controlled experiments. For example, before fully deploying an AI customer service agent, first randomly assign 5% of customer requests for testing, strictly comparing its resolution rate, customer satisfaction, and subsequent complaint rate with human agents. This effectively avoids being dazzled by the technology halo, ensuring every investment points to real business value.
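The controlled experiment described above can be sketched in a few lines. Both pieces are illustrative assumptions rather than a prescribed methodology: a deterministic router that sends a fixed share of tickets to the AI pilot, and a standard two-proportion z-statistic for comparing resolution rates:

```python
import math
import random

def assign_to_pilot(ticket_id: str, pilot_share: float = 0.05,
                    seed: int = 7) -> bool:
    """Deterministically route a fixed share of tickets to the AI agent.

    Seeding on (seed, ticket_id) keeps the assignment stable: the same
    ticket always lands in the same arm, which makes results auditable.
    """
    rng = random.Random(f"{seed}:{ticket_id}")
    return rng.random() < pilot_share

def resolution_gap_z(ai_resolved: int, ai_total: int,
                     human_resolved: int, human_total: int) -> float:
    """Two-proportion z-statistic: is the AI resolution rate really different?"""
    p_ai = ai_resolved / ai_total
    p_human = human_resolved / human_total
    p_pool = (ai_resolved + human_resolved) / (ai_total + human_total)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / ai_total + 1 / human_total))
    return (p_ai - p_human) / se
```

A |z| above roughly 1.96 suggests a difference unlikely to be noise at the usual 5% level; anything weaker argues for running the pilot longer before expanding it, exactly the discipline Step 2 calls for.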
Step 3: Consciously introduce “challenging perspectives.” In decision-making meetings, establish a “red team” or appoint a “devil’s advocate” whose sole responsibility is to challenge the foundational assumptions of mainstream proposals. Invite advisors from different industries, generations, and professional backgrounds to participate in AI strategy discussions. External perspectives often puncture blind spots that the industry overlooks. Historical data shows that companies that institutionalize challenge mechanisms in their decision-making processes reduce the incidence of major strategic errors by an average of about 25%.
Step 4: Treat regulation and ethics as part of core competitiveness. Do not position the compliance team as adversaries to the business team. Instead, involve them early in product design to jointly consider how to create the smoothest user experience within the compliance framework. Promote data privacy, algorithmic fairness, and system transparency as product features—in markets with high consumer awareness, this itself is a powerful differentiator.
Conclusion: Be the Anchor in the AI Frenzy
AI is undoubtedly one of the most powerful technological forces of this era, but it also acts like a mirror, reflecting an organization’s deepest decision-making culture and leadership quality. Technology becomes obsolete, models iterate; today’s most advanced architecture may be standard equipment in three years. However, an organizational capability to make independent judgments under incomplete information, dare to deviate from the crowd, and learn from rapid experiments is a durable advantage that any competitor finds difficult to replicate.
The future industrial landscape will be dominated by two types of enterprises: a few “technology sovereigns” with top-tier foundational research and computing resources; and a larger number of “smart application experts” spread across various fields. The success of the latter does not depend on whether they use the coolest models, but on whether they understand their customers, their processes, and their value chain better than anyone else, and can use AI tools to translate this understanding into better products, services, or experiences.
As a leader, your primary task is not to become an AI technology expert, but to be the chief architect of your organization’s “judgment.” When everyone is rushing toward the same hill, can you calmly ask: Should our company instead explore the unnoticed valley next to it? The answer to this question will determine whether your enterprise in the AI era is a surfer riding the wave or a sandcastle swallowed by the tide.
