By Laurie Weston
Part 2: The state of the art
It is difficult to assess the “state of the art” when it comes to artificial intelligence because in the time it has taken you to read this sentence, AI has advanced. Unlike any other scientific endeavour that humans have undertaken – which tend to progress relatively systematically with the occasional breakthrough – AI is in a period of mind-blowing, unpredictable change.
I recently asked an audience of about 50 people how many were excited about AI. Most people raised a hand. I then asked how many were concerned about AI, and everyone’s hand shot up. This is probably representative of the public in general – we are simultaneously excited and nervous about imminent, massive changes to our lives with the increasing encroachment and integration of artificial intelligence.
Regardless of our individual excitement-to-trepidation ratio, every one of us living in the modern western world has already welcomed AI into our lives willingly – eagerly even – knowingly or not. Indeed, we may already be on the path to utopia or dystopia, and rapidly approaching the fork in the road that will determine the outcome. As I write this, there is global dismay at recent developments in the pursuit of AGI (artificial general intelligence), of which ChatGPT is an example (more about this later), to the extent that more than 1,000 leading AI developers, including Elon Musk, have signed a letter recommending a six-month pause on AI development. This pause, the letter explains, would primarily be to allow regulators and regulations to catch up. Is this even possible?
Let’s step back from the brink and look at what led us to this point. In the first article in this series (AI: Where are we and where are we going? Part 1: the basics), I talked about data inputs, outputs, and processors (algorithms), each typically involving contributions from both humans and computers. These are the foundations of AI. There are also different categories of AI that I call gadgets, assistants, and apps. These categories are not necessarily separate entities; an “assistant” can also be an app and a gadget. However, they each use data and algorithms a little differently.
Gadgets have become essential in our modern lives – from the time our alarm wakes us in the morning (after recording how long and how well we slept), we rely on gadgets to get us through the day. They brew our coffee and count our steps; they control our thermostats and rock our babies to sleep. We remotely unlock our cars and drive (or let them drive) us to work or school, stopping and going at synchronized traffic light systems. These are just a few obvious examples of hundreds – no, thousands – of gadgets that we accept as normal in our lives. We dutifully do the software updates or automatic maintenance on all of them, without question or suspicion, satisfied that we are always up to date on the latest advancements.
Gadgets involve hardware, but that hardware often gathers and processes data, ostensibly to improve its service to us – whether it processes that data on the device itself, or sends it back to “base” for assimilation and analysis alongside contributions from its brothers in the field.
This is one of the reasons everything seems to need an internet connection, even if it is just a doorbell. “Smart” anything is synonymous with a two-way data stream.
“What do we have to fear from these gadgets?” you may be wondering. “They all make our lives so much easier.” That is certainly true, and the benign, face-value use of most of them is just that – an improvement to our everyday routines that relieves us of mundane or unpleasant tasks. There are, however, more secret, possibly even sinister, uses of gadgets. The data they gather is used for product or experience improvement for us, the consumers, but it can also be aggregated for AI to profile individuals and target them with ad campaigns for everything from consumer goods, to political aims, to societal agendas. Since these can be tailored to any level of personal detail, they are very effective.
Data and instructions can be sent to your gadgets, too, taking control of your thermostat, for example, which will obey, not you, but an unseen master.
Military drones, weapons, and robot soldiers also fall under my definition of “gadgets”, albeit gadgets that can behave viciously without conscience or remorse, controlled from a safe distance. I will leave the pros and cons of these attributes to your imagination and the military experts.
In the last 10 minutes, I have probably used AI assistants half a dozen times as Microsoft checked my spelling and grammar, correcting typos, suggesting commas, and highlighting phrasing it thinks is too wordy. With one click on the suggestion, evidence of my human fallibility was erased – very useful.
Maps, voice and face recognition, games such as chess and Go, video games, music and movie suggestions, internet search engines, and many more virtual aides that entertain or help us accomplish something fall under the “assistants” category.
In order for a computer algorithm to excel at chess, it only needs to be programmed with the allowable moves and the criteria for winning – chess is a game of “perfect” information, with nothing hidden from either player. On its turn, the computer searches ahead through possible future moves, choosing one that maximizes its chances of winning. Computers started to win against chess masters as early as 1997, but it took until 2016 to win the game of Go against the best player in the world. Go reportedly has 10¹⁷⁰ possible board positions. That is a 1 with 170 “0”s behind it.
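The look-ahead idea described above can be sketched in a few lines of code. What follows is a minimal, illustrative toy – the classic “minimax” search over a tiny hand-built game tree, where the helper functions `children` and `score` and the example tree are all invented for demonstration. It is emphatically not how AlphaGo works: Go’s tree is far too large to search exhaustively, which is why AlphaGo pairs neural networks with a more selective Monte Carlo tree search.

```python
# A toy minimax search over a game of perfect information.
# One player tries to maximize the final score, the other to minimize it;
# each assumes the opponent will also play optimally.

def children(state):
    """Hypothetical helper: internal tree nodes are lists, leaves are scores."""
    return state if isinstance(state, list) else []

def score(state):
    """Hypothetical helper: a leaf's value is simply the number stored there."""
    return state

def minimax(state, maximizing):
    """Return the best score reachable from `state` with optimal play."""
    kids = children(state)
    if not kids:                      # terminal position: return its value
        return score(state)
    values = [minimax(k, not maximizing) for k in kids]
    return max(values) if maximizing else min(values)

# Maximizer picks a branch, then the minimizer picks a leaf within it:
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))   # max(min(3, 5), min(2, 9)) = 3
```

In this toy, the maximizer avoids the tempting 9, because a rational opponent would never allow it – exactly the reasoning a chess engine applies, just repeated over vastly more positions.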
The world champion Go player, Lee Sedol, with years of dedicated practice and sacrifice to hone his skill, stamina, and intellectual ability, was beaten by DeepMind’s AlphaGo program in a tense five-game match in March 2016. For an enthralling documentary about the experience, watch AlphaGo on YouTube at: https://www.youtube.com/watch?v=WXuK6gekU1Y