paulgorman.org


Thu May 17 09:15:49 EDT 2018

Slept from eleven to seven without waking. High of seventy and mostly sunny today. Stopped at Starbucks on my way to work.

Work:

- 10 AM business team meeting
  Done.
- Archive HZ container, close ticket
  Done.

Twenty-five-minute walk at lunch. Warmer than expected. Saw a turkey vulture.

Home:

- Finish reading Tsathoggua story
  Not finished, but I read and enjoyed more of it.

Short walk after I got home.

https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/

Kissinger's piece isn't entirely worthless, but I have to snipe a bit.

"...culmination may be a world relying on machines powered by data and algorithms and ungoverned by ethical or philosophical norms."

Which would make a change from a world powered by policy wonks ungoverned by ethical or philosophical norms?

"Users of the internet emphasize retrieving and manipulating information over contextualizing or conceptualizing its meaning. They rarely interrogate history or philosophy; as a rule, they demand information relevant to their immediate practical needs."

Kids today! In our information age, more people have access to more information, but I doubt there's much evidence that the data deluge has somehow reduced the number of serious thinkers prone to "interrogate history or philosophy" in absolute terms. Kissinger opines that a surplus of ill-considered opinions will prevent the formation of sound opinions, but he never puts forward a mechanism by which that might happen.

"All of these pressures weaken the fortitude required to develop and sustain convictions that can be implemented only by traveling a lonely road, which is the essence of creativity."

Spend an hour on Tumblr, and tell me again that only traveling a lonely road fosters creativity. Communication and the interplay of ideas have always been the fountain of creativity. Look at the Beats. Look at the Enlightenment thinkers.
One might make a case that highly available information/media leads to derivative creations, but there are plenty of original weirdos on the Internet too.

"The impact of internet technology on politics is particularly pronounced. The ability to target micro-groups has broken up the previous consensus on priorities by permitting a focus on specialized purposes or grievances. Political leaders, overwhelmed by niche pressures, are deprived of time to think or reflect on context, contracting the space available for them to develop vision."

This is so fucking disingenuous. Special interests were literally writing legislation decades before the advent of the Internet. Now, niche interests can have a megaphone without deep pockets. Kissinger is effectively bemoaning that politicians are being forced to listen to their constituents.

"The digital world’s emphasis on speed inhibits reflection..."

A tool is what you make of it. People made the same argument against the telegraph, but somehow we managed to scrape together a few quiet moments of contemplation since 1840.

"Perhaps most significant is the project of producing artificial intelligence—a technology capable of inventing and solving complex, seemingly abstract problems by processes that seem to replicate those of the human mind."

AI isn't one technology, and the most useful pieces of it tend to work very much _unlike_ the processes of the human mind.

"Automation deals with means; it achieves prescribed objectives by rationalizing or mechanizing instruments for reaching them. AI, by contrast, deals with ends; it establishes its own objectives."

False, and a badly reversed conclusion. AI is useful to the extent that it accomplishes our objectives; we select useful techniques and discard useless ones. Kissinger misses the most startling, even alarming, aspect of AI — when it reaches an objective using means inscrutable to its creators.
"The driverless car illustrates the difference between the actions of traditional human-controlled, software-powered computers and the universe AI seeks to navigate. Driving a car requires judgments in multiple situations impossible to anticipate and hence to program in advance. What would happen, to use a well-known hypothetical example, if such a car were obliged by circumstance to choose between killing a grandparent and killing a child? Whom would it choose? Why? Which factors among its options would it attempt to optimize? And could it explain its rationale? Challenged, its truthful answer would likely be, were it able to communicate: “I don’t know (because I am following mathematical, not human, principles),” or “You would not understand (because I have been trained to act in a certain way but not to explain it).” Yet driverless cars are likely to be prevalent on roads within a decade." Human ethicists have no consensus on the "trolley problem", which has been bandied about since at least the mid-twentieth century. Self-driving cars will not, each of them in the moment, solve the trolley problem for us. They will not make ethical judgments. In even the most autonomous of cases (and I'm not certain that actual implementations will be significantly more complex than "when in doubt, brake!"), the designers of the cars will predetermine the desired outcomes. Is it a problem if we can't determine exactly how and why the AI decided its particular actions during a particular event? Maybe. Maybe that's a problem. Or maybe we have to satisfy our selves with dramatic statistical reductions in motor vehicle deaths. "AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data." Has the ready availability of mass-produced hammers and axes diminished your competence in knapping flint tools? Most likely. Is the human condition therefore diminished? 
Did humans stop playing chess in 1996? Chess is not a glib example. People have not stopped playing chess. Furthermore, the advent of powerful chess computers triggered a massive leap in the sophistication of human play; we learned from the computers and got better.

"More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context. [...] To what extent is it possible to enable AI to comprehend the context that informs its instructions? [...] will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?"

Yes, missteps will happen, probably even some horrific ones. But we will learn over time, and to mark such failures as "inevitable" seems myopically dystopian. Interestingly, "comprehend the context that informs its instructions" seems well within the problem domain where some machine learning techniques excel.

"Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?"

Significant technology has always significantly changed human thinking. Strong chess engines have already changed the way humans play chess, and AlphaGo has already started teaching us lessons about Go. Will the reason for some AI moves remain inscrutable? Maybe? Some of Bobby Fischer's chess moves seemed to come out of nowhere, first striking observers as blunders before they were recognized as brilliancies. But Go is more complex than chess, and many ML technologies are more opaque than the most advanced (non-AI) chess engines. We'll learn some things and not understand others; I don't think that's a problem if our tools produce desirable outcomes.
"AI knows only one purpose: to win. [...] Does this single-minded insistence on prevailing characterize all AI?" I mean... yes? If we want it do. AI technologies don't operate in a vacuum; we select the tools for their efficiency in accomplishing our goals. "Do we want children to learn values through discourse with untethered algorithms?" Do we want children to learn about rabies through unsupervised play with strange animals? Does that mean we shouldn't keep dogs anymore? "It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?" No, Kissinger is making a rather insane strawman argument. No one has suggested that AI should become our ethical arbiter. "AI may reach intended goals, but be unable to explain the rationale for its conclusions." This aspect has always struck me as the most unsettling element of machine learning, but if a black box gives us good outcomes I'm not sure its opacity is a problem. "Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?" This is one of the exciting possibilities. Just as hyper-modern human chess play, heavily influenced by chess engines, might have seemed nonsensical to eighteenth-century players like Philidor, we may eventually come to understand the reasoning that drives our algorithms. 
The black boxes may point us to whole new domains of knowledge.

"Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition."

Kissinger writes nothing that impresses on me any sense of danger about how AI might strip me of my capacity for cognition. Furthermore, Kissinger dramatically overestimates how much we understand about human cognition. We can't even agree on what constitutes consciousness, or whether we're living in a computer simulation. This is well-trod yet uncertain territory. Stick Kissinger in a room with some neuroethicists for an afternoon, and watch his head explode.

Ultimately, what Kissinger bemoans here is that AI might be too good a tool, and devalue intellectual accomplishments by making thinking too easy. I see it as a lever, not a crutch.

"If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices?"

This is Kissinger's core idea — the most honest and interesting idea in the piece, if framed without dystopian alarm. If a desired conclusion is reached efficiently, what is the moral weight of the individual judgments that lead to the conclusion?
Is the answer different if we swap "conclusion" for "outcome" and "judgments" for "acts"? I say: inconsequential, and yes (but not in a way that tars responsibly used AI).

"Typically, these questions are left to technologists and to the intelligentsia of related scientific fields. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. [...] AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts."

Again, Kissinger is extremely naive here. Go talk to some of the people at CMU who have been thinking about this for decades. This is no clarion call. Kissinger simply grew alarmed when he stumbled into unfamiliar territory. (Or, more likely, this is a cynical lobbying piece for some group with anti-AI interests.)

Breakfast: cafe latte, sausage sandwich
Lunch: grape leaves sandwich, coffee
Dinner: ice cream
