Who wins, AI or Humans?
For my junior and senior years of undergraduate study I was a university-sponsored debater. This had three functional impacts on my life that I still feel today: my public speaking capabilities were refined through the refiner’s fire, my tuition was reduced, and I was given priority class selection alongside the university-sponsored athletes. As a rugby player (a non-sponsored, non-NCAA team), I knew the difference well.
In this culture, which over time grew increasingly Progressive and theatrical rather than neutral and policy-based, we deconstructed problematic false dichotomies and binaries. If Artificial Intelligence (AI) and humankind were to have an all-out war, who would win? “False dichotomy,” the debater would call out. And so do I, today.
This is not a zero-sum battle where one party must win and the other must lose. The win-win scenario is visible to those game theorists and practitioners willing to keep thinking critically. Luminaries Elon Musk, Joe Rogan, Marc Andreessen, Curtis Yarvin, Lex Fridman, and John Danaher have all pontificated on this subject.
Rogan (podcaster par excellence) and Musk (business magnate) are in the Singularity camp. They believe that AI will inevitably and inexorably gain sentience and wipe out the human race. Rogan arrives at this point of view from listening to Musk, Brian Greene, Sean Carroll, and Ray Kurzweil, from his sense that we underestimate evolution, and from copious amounts of cannabis. I’m not convinced Musk really believes this, but it is possible, and he definitely says he does. His market incentive is Neuralink: his if-you-can’t-beat-them-join-them plan to place an implant in your brain that grants you internet access through your thoughts, continuing the cyborgification most obvious in how glued our smartphones are to our hands.
Yarvin (computer programmer and writer) and Andreessen (software engineer and venture capital investor) don’t view AI as a current threat, and Yarvin goes further, saying AI will never be a threat. He has a brilliant analogy between computer code and law code, which the shared word makes straightforward. From Hammurabi’s Code to the U.S. Constitution, humankind has written code to self-regulate society, and it has failed every time. The missing component is the human element: a human is necessary to make a judgment, which is a decision with authority. I learned the same thing from my Ukrainian teachers in my Software Quality Assurance (QA) bootcamp. Many companies use automation technologies in Python and Java, and every year some technologists predict Manual QA will be replaced. And yet there is always a need for a human to check, double-check, and triple-check the work of other humans and the machines.
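The automation-plus-human pattern from QA can be sketched in a few lines of Python. This is a minimal illustration, not any company's real pipeline; the function name, tolerance, and price values are all hypothetical. The point is that the machine handles the clear-cut passes and failures, while the ambiguous middle ground is escalated to a human for judgment:

```python
# Hypothetical sketch: automated checks decide the clear cases,
# and ambiguous results are routed to a human for judgment.

def automated_price_check(expected: float, actual: float,
                          tolerance: float = 0.01) -> str:
    """Return 'pass', 'fail', or 'needs_human_review'."""
    diff = abs(expected - actual)
    if diff == 0:
        return "pass"
    if diff <= tolerance:
        # Within tolerance but not exact: the machine cannot tell
        # rounding noise from a real bug, so it escalates.
        return "needs_human_review"
    return "fail"

results = [
    automated_price_check(19.99, 19.99),   # exact match
    automated_price_check(19.99, 19.995),  # ambiguous rounding
    automated_price_check(19.99, 25.00),   # clear regression
]
print(results)  # ['pass', 'needs_human_review', 'fail']
```

The design choice mirrors the bootcamp lesson: automation narrows the search, but the judgment call, the decision with authority, stays with a person.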
Lex Fridman (AI computer scientist and podcaster) and John Danaher (Jiu-Jitsu and striking coach par excellence) talk about AI vs. humankind vs. humankind + AI, with the last option being the most powerful. This is in the context of the Lindy games Chess and Go, where the human assisted by AI fared better than the AI alone or the human alone. A few years before this conversation (the Japanese are often ahead of the game), the 2018 anime Megalobox explored this idea in an epic about boxers using mechanized arms, with some of the high-end human boxers using AI assistants that border on AI masters. Without any spoilers: reality is stranger than fiction, and Megalobox is fiction, with the protagonist having a machine arm but no AI. Do the math.
Andreessen tells Rogan about OpenAI’s and Google’s amazing new projects that churn out bizarre and spectacular images amalgamated from the text prompts you give the machine learning models. A couple of standouts for me: “sloth machine, a sloth with a slot machine body, digital art”, and “A marble statue of a Koala DJ in front of a marble statue of a turntable. The Koala has (sic) wearing large marble headphones.” He points out that the models cannot produce this art alone; they must be instructed by a human to do so. I love that.
I love the Afroasiatic futuristic art that Fanuel Leul did for Rophnan’s recent album SIDIST - VI. I love his Afroasiatic archeofuturistic art even more. The best of the future is best when blended with the best of the past. Ethiopia hosted Sophia, an AI that spoke Amharic with His Excellency the Prime Minister and encouraged the hundreds of thousands of enthusiasts who came to see it in Adees Abeba. Whether in science or in art, and especially at my favorite spot, the crossroads, we have a lot to think about and do regarding the creation and production of win-win scenarios that renew the Ethiopian Empire.
Ethiopia’s food security and its security security, the rule of law, are the two biggest issues we face. No nation is independent if it relies on incessant food aid from other nations; there is no such thing as no-strings-attached when it comes to such globalist aid. Our internal and external borders have been tested heavily in the past four years, and they are likely to be tested more, not less, going forward.
The most Machiavellian thing we can do is actually hear Machiavelli with ears that hear: invest in AI-assisted, human-centering technology that improves our military might and our independent food production capacity.