AlphaGo and why libertarians need to be better about AI

AlphaGo poster
It's been a long time since I've played Go; I think I was in middle school the last time. AlphaGo is a documentary that chronicles the AI of the same name as it played, and beat, world-renowned Go players.

Go is an incredibly simple game: the goal is to surround and capture as much territory on the board as possible. At the same time, those simple mechanics produce a vastly complex game whose possible directions are nearly limitless.
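
To get a rough sense of that complexity, here is a back-of-the-envelope sketch in Python. The branching factor (roughly 250 legal moves per turn) and typical game length (roughly 150 moves) are commonly cited approximations, not figures from the documentary:

```python
# Rough game-tree size estimate for Go vs. chess.
# Branching factor and game length figures are commonly cited approximations.
import math

def game_tree_log10(branching_factor: float, game_length: int) -> float:
    """Return log10 of the approximate number of possible game lines."""
    return game_length * math.log10(branching_factor)

go = game_tree_log10(branching_factor=250, game_length=150)
chess = game_tree_log10(branching_factor=35, game_length=80)

print(f"Go:    roughly 10^{go:.0f} possible games")     # ~10^360
print(f"Chess: roughly 10^{chess:.0f} possible games")  # ~10^124
```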

It's also been a long time since the events in this documentary occurred. While the documentary doesn't have much to do with politics in itself, it did remind me of Andrew Yang and libertarians' responses to Yang.

Now, I don't like Yang's policies much, though I thought he was one of maybe only two Democratic candidates who were actually thoughtful, intelligent, and good people I would absolutely love to grab a beer with (the other being Tulsi Gabbard). At the same time, I groaned at some of the libertarian responses to Yang's AI UBI proposal, whose premise was that AI would render the entire human workforce obsolete and that a UBI is therefore the only way forward.

I agree with libertarians on the ultimate policy conclusion against Yang, but some libertarians' responses have been really backward-looking.

One of the central arguments libertarians have posited about AI is that creative destruction has been around for a long time. This is absolutely true. Each and every major technological advance has come with complaints that it would destroy jobs. And each and every time, while the critics are somewhat correct that many legacy jobs get eliminated, the advance ends up expanding the economy: more jobs than ever before are created, goods and services get cheaper, and life gets better. There are no examples in history where overall life got worse following a technological advance. Even nuclear power, while it produced a weapon that could take millions of lives in one blow, has been used to generate a fantastic amount of energy that has vastly decreased poverty around the world. Some may say climate change is a consequence of the industrial revolution, but so far, life has not gotten worse because of it. Libertarians have been absolutely right on this.

But past returns don't guarantee future results. Many people say AI is different and a game-changer. Far too often, I see libertarians just roll their eyes at this, and I can sympathize: the argument has been used in the past, and we've probably gotten a bit desensitized to it. But we need to get over that. AI really is different. Some libertarians do think that AI is not what it's made out to be, but that's a different discussion I'm not qualified to weigh in on. The premise, which many top minds in AI agree with, is that AI absolutely has the potential to do any and every task humans can, but better. Some think AI cannot be creative, but AI has composed numerous songs. The creative fields are not as protected from AI as many previously thought. In the documentary, after World Go Champion Lee Sedol watched AlphaGo make an unusual and brilliant move, he exclaimed that AlphaGo must be creative.

If this is the case, AI is absolutely different from any example in the past. We're not talking about a machine that can sew more easily, or a machine that can help assemble gizmos, or even a car that can drive itself. We are literally talking about a machine that can replace every single function of a human being. The sooner libertarians come to grips with this, the sooner we can get in on the discussion, rather than talking past everyone else.

This isn't to say libertarians don't have anything to offer here, or that libertarianism can't work in the face of AI.

The first thing to recognize is that the AI revolution isn't going to happen overnight. It will happen faster and faster, to be sure, but humans will still have time to adapt. Industries getting phased out will see fewer entrants into their labor forces while older participants retire. Younger workers may begin moving to different industries, perhaps new ones created by AI. All of the things that have always happened with creative destruction will still happen, except that this time total human labor may begin to decline, whereas in the past human labor has always expanded into newer, more profitable industries.

At the same time, the progress made by AI would continue to decrease the cost of living as efficiencies are found, technological advances decrease the cost per unit of goods, and costly human labor is slowly replaced by cheaper AI. Even today, we see human ingenuity drive prices down to essentially free, most notably in software, where open-source programs can be found and downloaded at no cost. Office suites, once programs that cost hundreds of dollars, are now offered for free by projects like OpenOffice and by Google's cloud office suite. This would continue until, ultimately, AI achieves singularity, is able to replicate itself, and discovers a method for nearly limitless energy. Perhaps even a Dyson swarm someday. At that point, we may very well be living in a post-scarcity world where the laws of economics no longer apply, except in extremely limited, niche markets. The good news is that in a post-scarcity world, by definition, goods are so abundant that essentially everything carries a price tag of zero, provided by open-source AI and nonprofit foundations.

Of course, all of this happens only if governments do not interfere in free markets, which is a tall ask, particularly since one of the seemingly inalienable truths about governments is that they make things more expensive through regulation and monetary expansion, benefiting the wealthy few at the cost of the rest of us plebs. So, assuming we remain sufficiently libertarian for this to happen, it should be pretty clear that as AI makes things less and less costly, human labor is less and less required for survival. Today, full time is defined as a 40-hour workweek, and much government regulation fixes it as such (unless you live in, say, France, where it's fixed at 35 hours). If we keep the flexibility of arrangements like independent contract work, which the "wonderful" state of California tried and failed to essentially abolish before succumbing to the will of the people, people will generally decide to work less and less, concluding that free time is worth more than the additional money they would have earned working the hours of times past. This would gradually open up new positions for more workers as each worker works fewer hours. Ultimately, this whittles down to a post-scarcity world in which human labor is no longer needed to survive.

At that point, humans will focus more on hobbies than anything else. However, there may still be "work" to be had. I can imagine there would still be a market for genuinely human-made goods and services. Some people may still prefer human-generated artwork to AI-generated artwork, as indistinguishable as they may be. Some people may still enjoy human sporting events. These sectors would still be subject to scarcity, but the barrier to entry would be ridiculously low, and none of it would be necessary for survival, which, generally, is the entire point of a UBI.

To me, the biggest thing to fear about AI would be a government's use of it to wage war against another country or even its own people. Corporations may also have this power, but instances of corporations setting out to conduct mass-murder campaigns for petty reasons like land control or religion have been negligible compared to those of governments. Defense corporations come closest, but they still require the government to actually wage the war in order to sell their weapons. Maintaining libertarianism's antiwar stances and suppressing the state's ability to dole out subsidies to preferred industries therefore significantly reduces the chances of AI being used in this fashion.

Watching this documentary about AI and Go made me discover something about myself. I typically think of myself as a mostly logical thinker who has shed most of his tribalist tendencies except for family, friends, and sports teams. So when the best-of-five match between AlphaGo and Lee Sedol was announced, I was "cheering" for the machine, mostly because a win would represent a massive leap forward in the capabilities of AI. In the documentary, the vast majority of people were cheering for Lee, because people naturally want to cheer for people when the opponent is, well, not a person. I think nearly anyone watching would have this reaction, but I pride myself on not being like most people. Instead, I saw the ingenuity of AI as a human accomplishment in itself. The birth of AI is akin to mankind's offspring: the next step of evolution, engineered by humans.

However, as the matches went on, I began to find myself switching sides and starting to cheer for Lee Sedol. Perhaps I am human, after all.

[WARNING: spoilers ahead]

When AlphaGo was first becoming known, it was massively underestimated, perhaps not unlike libertarians' estimations of AI. The AI's first match against a human player was against Fan Hui, the European Champion, ranked 2p. Fan went into the match thinking it was just a regular Go program with a static logical algorithm. He ended up getting crushed, losing all five games.

After intense soul-searching, Fan proved graceful in defeat, agreeing with AlphaGo's makers, DeepMind, to return and help the team test the AI ahead of its match against Lee Sedol, a 9p-ranked Go player holding 18 international titles, the second-highest number in history. Fan mentions that after playing AlphaGo numerous times, he found a serious error: certain game conditions confused the AI. It appeared the development team would not be able to figure out how to fix it in time for the match, though the documentary did not specify exactly what the error was or whether it was ever fixed.

Continuing the underestimation of AlphaGo, internet trolls exploded after Fan's loss with accusations that he was simply not a good player. Beating the European Champion at Go is nothing to sneeze at, particularly for a computer program that had never before reached anywhere near that level. After the match with Lee Sedol was announced, prognosticators widely predicted a clean sweep by Lee; Lee Sedol himself predicted a sweep or, at worst, a single loss.

When the best-of-five match started, AlphaGo had progressed significantly from when Fan played it. Still, the match exposed a major weakness in humans that occurs across the board: overconfidence. Lee perhaps didn't take the first game as seriously as he should have, and he lost it. AI likely doesn't have this problem, an issue rooted in emotion rather than logic; the AI doesn't even consider the opponent in its game.

In the second game, Lee was having a tough time, and while he was out for a smoke break, AlphaGo played a move that was very unusual by human standards. Move 37 was one humans would rarely play, but upon inspection of the board, the strategy became apparent: the AI was building a network of territory with its stones in the center of the board, of all places. Lee ended up losing this game as well.

The third game exposed another human weakness: frustration. In sports, it's a fairly common saying that someone was "taken out of his game," indicating that a person was flustered and starting to try strategies outside his wheelhouse. This almost always ends in disaster, and that is what happened to Lee in the third game. It was somewhere within this game, sensing Lee's immense frustration and inability to defend himself against this overpowering machine, that my heart melted and I began to cheer for him. It was not to be in this game.

The "divine move" by Lee Sedol // image by Axl
Lee came out relaxed in the fourth game. The match was already lost, so a lot of the pressure of representing all of humankind was alleviated. Despite that release valve, I imagine there was still pressure to win at least one game. I was cheering for Lee full-on at this point. Deep in the game, as the center of the board started becoming networked with AlphaGo's stones, Lee played what is maybe now the most famous move in Go history, Move 78. It was a placement wedged between two of AlphaGo's stones, and it completely disrupted the network AlphaGo was trying to establish. AlphaGo went on tilt after the move (I suspect it triggered the bug Fan Hui mentioned) and ended up resigning the game, causing the room to erupt in cheers. Amazingly, AlphaGo had estimated the probability of a human playing that move at just 0.007%, about one in ten thousand, but Lee stated in a press conference that it was the only move he saw. That is what places champions above the rest.

In game five, AlphaGo seemingly came out weak, but as the game went on, it became apparent that AlphaGo's own assessment, a 91% chance of winning, was correct over all of the expert human analysis. One post-game analysis was that humans have used score as a proxy for winning, while AlphaGo proceeds through the game thinking it only needs to win by one point. While it looks like the AI is making unnecessary moves, it was likely just reinforcing its own territory to win by at least that one point. This transcendent play therefore looks odd to humans; it is so far advanced that the moves look dumb to us, until reality hits us in the face and we find we've lost the game. As Arthur C. Clarke's third law says, "Any sufficiently advanced technology is indistinguishable from magic."
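
A minimal sketch of that difference in objectives, purely illustrative: the Move type, the candidate moves, and their evaluations below are invented for the example, not taken from DeepMind's system. A player chasing the biggest expected margin picks the risky, point-greedy move, while a player maximizing win probability happily settles for the quiet move that makes the win more certain:

```python
# Illustrative sketch: two ways to pick a move, assuming each candidate has
# already been evaluated for expected point margin and probability of winning.
# The Move type, candidates, and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    expected_margin: float   # expected final score difference in points
    win_probability: float   # estimated chance of winning the game

candidates = [
    Move("aggressive invasion", expected_margin=12.0, win_probability=0.72),
    Move("quiet reinforcement", expected_margin=1.5, win_probability=0.91),
]

# Human-style proxy: chase the biggest score margin.
by_margin = max(candidates, key=lambda m: m.expected_margin)

# AlphaGo-style objective: a half-point win counts the same as a blowout.
by_win_prob = max(candidates, key=lambda m: m.win_probability)

print("Chasing points:  ", by_margin.name)    # aggressive invasion
print("Chasing the win: ", by_win_prob.name)  # quiet reinforcement
```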

It was interesting to see the reactions from people. Right from the beginning of the match, it was obvious how badly just about everyone outside of DeepMind wanted Lee to win. The female Korean game broadcaster appeared visibly upset when Lee was losing, and the male Korean broadcaster was just as upset but hid it with nervous laughter. Even one of the DeepMind members couldn't enjoy the win, seeing a fellow human get beaten down by AI. The elation following Lee's victory represented hope for humankind.

Since these events transpired, Go AI has improved dramatically. AlphaGo never lost another official game after that loss to Lee Sedol, including a match against Ke Jie, the top-ranked Go player. Newer versions of the AI were released, the latest named AlphaGo Zero, which was able to beat all previous versions. It also reached a level of play equal to the best humans in just a few days, compared to the several months the original AlphaGo needed to achieve the same level. Lee Sedol retired in 2019, saying that with the dominance of AI he could never be the top overall player. But no one can take away his one victory against AlphaGo, an amazing feat in itself.

Following his match, Lee said the AI proved that moves humans had thought were creative were actually conventional. This goes to show that it is probably just a matter of time before AI can perform literally any human task, but better. It will take on more and more complex tasks, and any new industries that emerge will likely provide new jobs, not for humans, but for more AI. This, poetically matched with Lee's reasons for retiring, could pose significant problems for humans, perhaps not in economics but in psychology. Without work being necessary, the human paradigm completely changes. We would lose the immense sense of purpose we get from work, discovery, and accomplishment. I think our economy would do fine in a libertarian AI future. It's the human psyche I would really be worried about.

Still, I am glad to see AI advance. There are so many practical applications of AI, and it could greatly advance human civilization. It could eradicate poverty, cancer, and other existential threats. It could greatly prolong human life and understanding. While many likely still fear the robot uprising, a war humans have about as much chance of winning as Lee Sedol has of beating an AI at Go today, I am more worried about humans ending humankind than about AI doing so.

Maybe I will be wrong about government being the biggest threat and we all die in a robot uprising, complete with glowing red eyes and Arnold Schwarzenegger lookalikes. But I think it's just as plausible that we humans simply begin to merge with AI and robotics, implementing machine learning algorithms directly into human brains and replacing human limbs with robotics, perhaps one day even through genetic modification. Or maybe we enter a transhumanist era, where our consciousnesses are uploaded directly into a cloud server or an AI-assisted robot. Maybe I am an eternal optimist, though it certainly doesn't feel that way looking around the political world today. It's a non-libertarian AI future I'm worried about, not a libertarian one.
