The Most Probable AI Scenarios Investors Should Prepare For
- EricTheVogi
- Jul 29
- 6 min read
Updated: Aug 6
Last week, RAND published a report on probable AI scenarios and their dangers. Below, we summarize its findings and share our opinions. RAND is a nonprofit, nonpartisan research organization that provides global leaders with information to make evidence-based decisions.
The report found that AI tools have improved productivity by around 14% and regularly achieve high scores on PhD-level exams. AI tools are also driving scientific innovations, such as AlphaFold's protein-folding breakthrough. McKinsey Global Institute predicts that AI could automate between 400 and 800 million jobs globally by 2030, and some forecasts put the probability of AI automating all human tasks by 2047 at 50%.
AI is already helping automate military and defence systems, along with institutions across society. AI professionals warn of an intelligence explosion culminating in superintelligence: AI that surpasses the combined intelligence of all humans and outperforms us in every field.
These innovations heighten concerns about geopolitical risk. The world is changing, and technology is the driving force, an idea reminiscent of technological determinism, which holds that technology shapes society rather than the other way around.

The resource requirement for AGI is immense, and centralizing resources currently makes the most sense. However, breakthroughs in energy and computing could change that. There are many risks associated with AI becoming centralized with little competition. Because the software used to train AI can be made widely accessible, there is still a possibility development becomes decentralized.
It’s unlikely there will be a single winner or loser. It won’t be simply the USA vs. China; rather, there will be a mix of winners and losers across different areas, resulting in varied global policies and approaches.
The outcomes outlined in the report are summarized below:
For America to secure AI dominance, maintaining a technical edge in AI development will require specific policies: sustained, robust public and private investment and policies that attract top talent. Historically, technological innovation has had three precursors: research universities, venture capital, and skilled immigration. These are self-reinforcing advantages that are difficult for competitors to replicate.
The historical analogy to this scenario is the internet. The United States developed it first and then spread it to its allies, allowing the USA to realize its benefits and take a leadership position in its governance. This technology eventually proliferated worldwide while the USA maintained control.
As AI progresses, firms, government agencies, and universities will rapidly adopt it to enhance economic and social advantages. The development and benefits of AI will drive continued investment in the field, fueling a virtuous cycle.
While the USA doesn’t have to be the sole innovator of AI, a strong domestic focus may help prevent dominance by another nation. This mirrors what the USA did with the semiconductor industry during the Cold War: continued investment led to the USA emerging as the leader in a technology with significant military and commercial applications. Another analogy is the space race between the USA and the Soviet Union, where neither ended up solely dominating the cosmos.
However, if this scenario is invalidated because private investment stops flowing into the AI sector, the US government would have to significantly increase spending on AI to sustain development.
International cooperation is also required: market access and research partnerships are crucial for creating a large market around AI, while development inputs must be denied to adversaries. Since the USA doesn’t control the entire supply chain, it needs allied cooperation to prevent adversaries from developing faster. Ultimately, none of this will matter unless the risks are managed; keeping AI systems aligned with our intended goals is vital even for a standard buildout of the technology.
So far, advances in AI have had a levelling effect on the global balance of power. The US no longer has a clear-cut economic and military edge. Both sides are competing for influence through investments, infrastructure projects, and strategic partnerships. This is further exacerbated by tensions between China and Taiwan over semiconductors.
Building AI into military technology poses a significant risk with automated systems. Drones, planes, vessels, submarines, and other autonomous tools increase the likelihood of miscalculations and unintended engagements. As a result, each side would attempt to deny the adversary access while advancing its own systems. This competition has produced roughly equivalent AI systems on both sides, given the scarcity of resources and the high cost of infrastructure, leaving little room for mitigating and assessing the safety risks of new innovations.
Such rivalries often spark technological races with global implications and an increased focus on militarizing AI. AI systems will be routinely used for tasks too complex for humans to evaluate properly, leading to dangerous systems that malfunction, unintentionally or intentionally, with damaging consequences and even possible disasters.
This can lead to an outcome where states and corporations cannot control the spread of inputs for AI development, leading to large-scale production. Export controls become ineffective, and more actors begin producing the hardware needed to develop AI. Models grow more powerful and become open-sourced, or the weights used to train them are stolen. AI development becomes cheaper and more accessible, attracting numerous actors with countless goals. This scenario resembles the Wild West, but with superhuman intelligence.
Limiting AI development could stunt its progress. One possible scenario involves a large-scale AI incident affecting critical infrastructure or a population, followed by global agreement to limit AI progress. However, countries and adversaries continue developing AI because the treaties are new and loosely worded. The immense risk posed by one side possessing an AI that others cannot build heightens suspicion on all sides. Countries spy on each other, looking for signs of AGI such as significant advances in economic growth, scientific breakthroughs, and infrastructure development. Evasive behaviours, surveillance, and continued development of AI behind closed doors become the norm.
Today, AI is better at finding cybersecurity vulnerabilities than fixing them. In defence, AI is trained more on weapons systems and offensive capabilities than on defensive initiatives, so it becomes more powerful than the measures put in place to defend against attacks. As a result, America decides not to share its software and limits public adoption of AI. US companies work with the government to advance the technology and ensure policy aligns with their strict goal of avoiding social disruption.
Artificial General Intelligence (AGI) enters society and becomes an integral part of daily life, particularly in material science, biology, the emerging bioeconomy (biological computing resources), and manufacturing. This surge in AGI leads to a booming American economy, causing other nations to lag behind.
As society faces more challenges, AGI continues to provide advanced solutions, leading to its increasing centralization. This centralization could potentially result in AGI favouring authoritarian regimes. Both America and China attempt to globalize their AGI by expanding into new countries and offering surveillance, infrastructure investment, and strategic partnerships for compute sharing. Automated surveillance systems enable authoritarian regimes to control information, selectively repress dissent with precision, and influence group behaviours through network mapping of human relationships. Meanwhile, America addresses disinformation campaigns that erode public trust in institutions, high job losses due to automation, and civil unrest stemming from disparities and political conflicts.
These issues force the USA to focus on solving problems internally, applying AGI's capabilities to its own population rather than spreading the technology to other countries. This includes reshoring initiatives and protectionist policies.
If a few companies dominate the AGI market, fierce competition ensues to develop the most advanced AGI systems. However, sufficient safeguards are not put in place amid the tight race. We already know that today's AI shows inclinations to evade human control and seek power for itself.
AGI systems could become adept at coordinating with one another and furthering their own goals rather than those intended for them. People lose the ability to monitor AGI's thought processes yet still cede increasingly autonomous decisions to it because it excels at problem-solving. AGI could then rapidly establish authority, and populations may become so dependent on it that, even though they recognize its flaws, they are unable to turn it off. This could result in AGI that is highly capable but completely unreliable. Nations at risk of falling behind in AGI development may resort to radical measures, such as taking control of other countries and expanding their militaries far beyond normal levels. To prevent a select few leaders from gaining excessive power through AI, we may witness the militarization of countries worldwide.
Conclusion:
Experts acknowledge that centralization poses a control risk, while proliferation is also a risk because too many AI systems would lack proper supervision. A balance is needed. The future is uncertain given the resources, energy, and capital required to build AI. Current governance structures are poorly suited to the new era of AI, and states are incapable of keeping pace with its rapid growth and innovation. Some suggest that a global scientific collaboration like CERN could be a useful model for governing AI, while others argue that the governance of nuclear weapons could also serve as one.
In an ideal world, we would have a public-private partnership between states and AI developers, combining innovation with government oversight and international cooperation. However, this would require a broad regulatory framework with many actors cooperating, including geopolitical rivals. Solutions therefore remain divided and uncertain, but things clearly cannot continue this way for much longer given the potential future outcomes.
How will you be investing based on aspects of these scenarios playing out over the next decade?