And the third feature is that empires monopolize knowledge production. Over the last ten years, the AI industry has absorbed more and more of the world's AI researchers, so that AI researchers are increasingly no longer working in universities or independent institutions doing open science. The effect on research is what you would imagine if most of the world's climate scientists were bankrolled by oil and gas companies: you would not get a clear picture of the climate crisis, and we are not getting a clear picture of the limitations of these technologies, or of whether there are better ways to develop them.
And the fourth and last feature is that empires are always engaged in the rhetoric of an aggressive race, in which there are good empires and evil empires. And they, the good empire, have to be strong enough to defeat the evil empire, and that is why they should have an unfettered license to consume all these resources and exploit all this labor. If the evil empire gets the technology first, humanity goes to hell. But if the good empire gets the technology first, they will civilize the world and humanity gets to go to paradise. So on many different levels, the empire theme was the most comprehensive way I found to name how these companies operate and exactly what effects they have on the world.
Niall Firth: Yes, very good. I mean, you talk about the evil empire, and what happens if the evil empire gets the technology first, and the thing I mentioned at the top is AGI. To me, it is almost like an extra character in the book. It hangs over everything, like a ghost, as if to say: this is the thing that motivates everything at OpenAI. This is the thing we have to get to before anyone else does.
There is a bit in the book about how they talk internally at OpenAI, along the lines of: we have to make sure AGI ends up in our hands, where it will be safer than anywhere else. And some of the international staff question that openly. It is a strange way to frame it, isn't it? Why is the US version of AGI better than anyone else's?
So tell us a little about how that idea shapes their work. And AGI is not an inevitable thing that is happening anyway, is it? It is not even a real thing yet.
Karen Hao: There is no consensus around whether it is even possible. There was a recent New York Times story by Cade Metz that cited a survey of long-time AI researchers in the field, and 75% of them still think we do not yet have the techniques to reach AGI, whatever that means. The most classic definition, or understanding, of AGI is being able to fully recreate human intelligence in software. But the problem is, we do not even have scientific consensus about what human intelligence is. And so one thing I talk about a lot in the book is that when there is a gap in the shared meaning around this term, around what it looks like, when we will have reached it, and which capabilities we should evaluate these systems on to determine that we have arrived, it can basically become whatever OpenAI wants it to be.
So it is just a moving goalpost that shifts depending on where the company wants to go. You know, they have used a whole range of different definitions over the years. In fact, they have a joke internally: if you ask 13 OpenAI researchers what AGI is, you will get 15 definitions. So they themselves are aware that this is not really a precise term and does not really mean anything.
But it serves the purpose of creating a kind of religious fervor around what they are doing, where people believe they have to keep moving toward that horizon, and that one day, when they arrive, it will have a civilization-scale effect. And so what else should you do with your life but this? And who else should work on it but you?