AGI, Artificial General Intelligence – Superintelligence by 2030


About 24 hours have passed since a former OpenAI employee published "The coming decade" by Leopold Aschenbrenner – knowledge that no one expected. This honest account of where humanity stands today only confirmed to me that projects such as Q* are not the invention of a mad scientist, but technology that already exists in laboratories.

I have summarized the 165-page PDF by Leopold Aschenbrenner for you.

I. From GPT-4 to AGI: counting OOMs

Over the last four years, we have seen incredible progress in the field of artificial intelligence, especially in language models: from GPT-2, which could barely understand and generate simple text, to GPT-4, which can solve complex math problems, write complex code, and pass college-level exams. The qualitative leap between these models was enormous and suggests that we could achieve AGI (Artificial General Intelligence), i.e. artificial intelligence with abilities equal to or superior to those of humans, by 2027.

Increases in computing power and algorithmic efficiency:

AI progress is driven by three main factors: the growth of computing power, algorithmic efficiency, and so-called "unhobbling", i.e. removing limitations from models. The computing power used to train AI models is growing at a rate of approximately 0.5 orders of magnitude (OOMs) per year. Algorithmic efficiency is improving at a similar pace, which allows better results to be achieved with less computing power. Additionally, removing limitations from models, such as the introduction of chain-of-thought prompting, significantly increases their capabilities.

Given current trends, we can expect that by 2027, AI models will be capable of performing work at the level of AI engineers and researchers. This doesn't require belief in science fiction, just trust in straight lines on graphs. If these models can automate AI research, it could trigger a dramatic acceleration of technological progress.
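
To make this arithmetic concrete, below is a minimal sketch in Python (my own illustration, not code from the report) that adds up the assumed ~0.5 OOM/year from compute and ~0.5 OOM/year from algorithmic efficiency into a projected "effective compute" gain over GPT-4. The baseline year and growth rates are round assumptions.

```python
# Illustrative sketch of "counting the OOMs" (orders of magnitude).
# The growth rates below are the rough figures quoted in this summary
# (~0.5 OOM/year from compute, ~0.5 OOM/year from algorithmic efficiency);
# they are assumptions, not precise measurements.

COMPUTE_OOM_PER_YEAR = 0.5   # physical training compute
ALGO_OOM_PER_YEAR = 0.5      # algorithmic efficiency gains

def effective_compute_ooms(years: float) -> float:
    """Orders of magnitude of 'effective compute' gained over `years`,
    ignoring one-off 'unhobbling' gains such as chain-of-thought prompting."""
    return years * (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)

if __name__ == "__main__":
    for year in (2025, 2026, 2027):
        ooms = effective_compute_ooms(year - 2023)  # counting from GPT-4's release year
        print(f"By {year}: ~{ooms:.1f} OOMs over GPT-4, "
              f"i.e. a ~{10 ** ooms:,.0f}x gain in effective compute")
```

Under these assumptions, 2027 lands roughly four OOMs above GPT-4, which is the kind of gap that separated GPT-2 from GPT-4.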

What has already been achieved:

GPT-4 already outperforms most high-school and college students on the tests we give them. For example, it can solve complex mathematical problems and write complex computer programs. Its improvement over GPT-3, which performed at a level similar to that of an elementary school student, is significant.

Progress in the field of AI is not only impressive, but also rapid. Counting OOMs shows that we are on the way to achieving AGI in the near future. It is important that we are aware of these trends and prepared for the challenges they may bring.

II. From AGI to superintelligence: the intelligence explosion

Reaching AGI (Artificial General Intelligence) is just the first step towards even greater AI capabilities. Superintelligence, i.e. AI that significantly exceeds human intelligence, may become a reality sooner than we think. Once hundreds of millions of AGIs can automate AI research, we can expect an explosion of technological progress.

The main thesis of this part of the study is simple: AI progress will not stop at the human level. Automation of AI research by AGIs themselves could compress decades of algorithmic progress into a single year. This means we could go from human-level to vastly superintelligent AI systems in a short period of time.
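
A rough way to see the "decades into one year" claim is to treat algorithmic progress as accumulating at a fixed rate and ask how a research speedup compresses the calendar time. The sketch below is my own back-of-envelope illustration; the baseline rate, the target of 5 OOMs (roughly a decade's worth at 0.5 OOM/year), and the speedup factors are all hypothetical assumptions.

```python
# Back-of-envelope illustration of the compression argument.
# All numbers are hypothetical assumptions, chosen only to show the arithmetic.

BASELINE_OOM_PER_YEAR = 0.5  # assumed human-driven pace of algorithmic progress
TARGET_OOMS = 5.0            # an assumed "decade's worth" of progress (10 years * 0.5)

def years_needed(speedup: float) -> float:
    """Calendar years to accumulate TARGET_OOMS if automated researchers
    multiply the baseline pace of progress by `speedup`."""
    return TARGET_OOMS / (BASELINE_OOM_PER_YEAR * speedup)

if __name__ == "__main__":
    for speedup in (1, 10, 100):
        print(f"{speedup:>3}x research speedup -> "
              f"{years_needed(speedup):4.1f} years for {TARGET_OOMS:.0f} OOMs of progress")
```

With a tenfold research speedup, the assumed decade of progress fits into a single year; the argument is only as strong as the assumed speedup.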

While superintelligence offers enormous opportunities, it also carries serious risks. Such powerful systems can trigger dramatic changes in society and the economy. The introduction of super-intelligent AI systems will require strict control and management to avoid disastrous consequences.

The explosion of intelligence can lead to various scenarios. We may face dramatic technological and economic progress, but also new challenges in national security and global governance. In the worst-case scenario, the rapid development of superintelligence without proper control could lead to catastrophic consequences.

The transition from AGI to superintelligence is a key stage in the development of artificial intelligence. Although the opportunities are enormous, we must be aware of the threats and prepare for them appropriately. It is essential to develop strategies to manage and control superintelligent AI systems to ensure that their introduction brings benefits rather than harm.

see also: https://arxiv.org/abs/2303.12712

III. Challenges

IIIa. The race to a trillion-dollar cluster

The development of artificial intelligence requires enormous resources, both financial and technological. The race to build a trillion-dollar computing cluster is already on. As AI revenues grow, billions of dollars will be invested in GPUs, data centers, and energy infrastructure before the end of the decade.

Industrial mobilization in the field of AI will be intense. Increasing US electricity production by tens of percent will be necessary to meet the growing energy requirements of data centers and GPU clusters (see the rough calculation below). This will require massive investment and cooperation on a scale not seen in decades.

Building computing clusters on such a scale involves many challenges. Huge financial investments will be required, and technological innovations will be needed to meet growing computing and energy demands. Moreover, securing these resources against cyber threats will be crucial for their effective functioning.

The race to build a trillion-dollar computing cluster is not only a technological challenge, but also an economic and political one. Success in this area will require coordinated efforts on many fronts, including investment, technological innovation, and cybersecurity.
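
To see where a "tens of percent of US electricity" figure can come from, here is a back-of-envelope calculation. The annual US generation figure (~4,000 TWh) and the 10 GW and 100 GW cluster sizes are round assumptions of mine, used for orientation only; they are not numbers quoted from the summary above.

```python
# Back-of-envelope check of the "tens of percent of US electricity" claim.
# The figures below are round assumptions, not data from the report.

US_GENERATION_TWH_PER_YEAR = 4000  # assumed rough annual US electricity generation
HOURS_PER_YEAR = 8760

# Average generation expressed as continuous power: TWh/year -> GWh/year -> GW
avg_us_power_gw = US_GENERATION_TWH_PER_YEAR * 1000 / HOURS_PER_YEAR

for cluster_gw in (10, 100):  # hypothetical cluster power draws
    share = cluster_gw / avg_us_power_gw
    print(f"A {cluster_gw} GW cluster ~ {share:.0%} of average US generation "
          f"(~{avg_us_power_gw:.0f} GW)")
```

Under these assumptions, a cluster drawing around 100 GW of continuous power would consume on the order of 20% of current average US generation, which is why the report talks about expanding electricity production by tens of percent.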

IIIb. Securing laboratories: AGI security

Security at AI laboratories is a key element in the context of AGI development. Currently, leading AI labs treat security as a secondary concern, which poses serious risks. Unsecured AI labs may become targets of attacks by foreign states such as China, which could seize key AGI secrets. Securing this information against state-level threats will be a huge challenge, and we are not currently on track to solve it.

To secure AI labs effectively, new security strategies and technologies will have to be adopted. This will require both investment in security technologies and changes in the organizational culture of laboratories to make security a priority.

Security at AI labs is crucial to the continued development of AGI. Coordinated efforts are needed to protect critical information and technology from cyber threats. Without this, the risk of key information being intercepted by foreign states could threaten global security.

IIIc. Superalignment

Superalignment is a term describing the problem of controlling AI systems that are much smarter than humans. It is one of the most important challenges we face in the development of AGI and superintelligence.

Superalignment is still an unsolved technical problem. Controlling AI systems much smarter than us is extremely difficult and requires innovative technical solutions. During an intelligence explosion, serious problems can arise that could lead to catastrophic consequences.

Solving superalignment is crucial for the safe development of AGI. Investment in research and the development of new techniques will be necessary to control superintelligent AI systems. Otherwise, the uncontrolled development of AI may pose serious threats to humanity.

Superalignment is among the defining challenges of AGI development. A concerted effort is needed to develop techniques that allow superintelligent AI systems to be controlled. This is the only way to ensure that the development of AI brings benefits, not threats.

IIId. The free world must win

The development of superintelligence will provide a decisive economic and military advantage. In the context of global competition, especially with China, the survival of the free world is at stake.

Superintelligence could provide enormous economic and military benefits. Countries that are able to leverage superintelligent AI systems will have a significant advantage over others. That is why it is so important for the free world to maintain its lead over authoritarian powers.

Maintaining the AI advantage will require coordinated efforts on multiple fronts. Investment in research and development, international cooperation, and effective risk management will be necessary. Otherwise, authoritarian powers may gain the upper hand, which could have serious consequences for freedom and democracy around the world.

The free world must win the race to superintelligence. Coordinated efforts are needed to maintain technological and military superiority over authoritarian powers. Only in this way can we ensure that the development of superintelligence will serve the good of humanity, not its destruction.

IV. The Project

As the race to AGI intensifies, the state will begin to get involved in the development of artificial intelligence. Some form of government AGI project will exist by 2027/28. No startup can handle superintelligence on its own, so government involvement will be necessary.

Government AGI projects will be important for the further development of this technology. State involvement will provide the financial and technological resources necessary for the development of superintelligent AI systems. Collaboration between the public and private sectors will be key to success.

Superintelligence will be of great importance to national security. Government AGI projects will be key to ensuring that this technology is used safely and in a manner consistent with the national interest. The introduction of superintelligent AI systems will require strict control and management to avoid potential threats.

Government involvement in the development of AGI will be crucial for continued technological progress. Coordinated efforts are needed to develop superintelligent AI systems and ensure that they are used safely and in accordance with the national interest. This is the only way to ensure that the development of superintelligence brings benefits, not threats.

V. Conclusions

If our predictions about the development of AI are correct, we will face a dramatic future. AI has the potential to solve many problems, but it also carries serious risks.

Artificial intelligence has enormous potential to solve social, economic and technological problems. It can contribute to improving the quality of life, increasing efficiency in various fields and accelerating technological progress.

AI Risks:

The development of AI also carries serious threats. Superintelligence may lead to new challenges in national security, global governance and ethics. Uncontrolled development of AI can lead to disastrous consequences, which is why it is so important that we are aware of these threats and prepare for them appropriately.

The development of AI is the most important stage in human history. Coordinated efforts are needed to ensure that this technology is used safely and in the public interest. We must be aware of the challenges that the development of AI brings and prepare for them to ensure that it brings benefits rather than threats.

Each of these sections provides a lot of material for reflection and discussion about the future of artificial intelligence and its impact on our lives, society and global order. I encourage you to have a substantive discussion.

Source: "The coming decade" – Leopold Aschenbrenner


And at this point there should be a discussion about where we as Poland are. [curtain]

