
I know, I know: when you think of artificial intelligence, biology isn’t exactly the first thing that comes to mind. Heck, it’s probably not even in your top 10 ~potentially~ related fields, let alone a lens for understanding AI. But stick with me: I promise that by the end of this article you will (hopefully) have a sense of the nonsensical rambles that make up my mind.
Through a Biological Lens
Evolution. That topic most of us have wasted (or will waste) many valuable years of our youth being forced to study in elementary, middle, and high school science classes (sorry, Ontario curriculum). Yet the key concepts of this broken record of a topic are amazing theoretical and philosophical lenses through which we can learn the mechanics of artificial intelligence, and vice versa.
I know this sounds absurd, but let’s take a look.
First, let’s frame both concepts in terms of a game. In nature, the objective is to survive. The “winning”/“successful” mutations, traits, or characteristics of each generation, or “round”, are the ones that let the organism they are “attached to” evade death long enough to pass those traits on; the traits attached to the organisms that pass on their genes the most (by successfully raising the most offspring) “win” the most and become the “starting point” for the next generation. And so nature “learns” which traits to promote and favour in the evolutionary arms race.

Moreover, the “inputs” of each generation (“iteration”) include both the information gained about variables from testing the previous generation (or a random initialization, if it’s the first iteration) and some random combination of other variables. The “outputs” of each generation are individual organisms, with random combinations of traits/mutations, that are tested “against” each other to identify the traits most advantageous to reproducing. The best-performing organisms then pass their advantageous traits on to the next iteration.
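To make the game framing concrete, here is a minimal Python sketch of one version of nature’s “game”. It is not a model of real biology: the trait values, the survival score, and every name in it are invented for illustration, with a made-up fitness function standing in for “evading death long enough to reproduce”.

```python
import random

# Toy sketch of one "round" of nature's game. Each organism is a bundle of trait
# values, a made-up survival score stands in for "evading death long enough to
# reproduce", and the best survivors seed (with random mutation) the next generation.

TRAITS = 5      # traits per organism
POP_SIZE = 20   # organisms per generation

def random_organism():
    return [random.uniform(-1, 1) for _ in range(TRAITS)]

def survival_score(organism):
    # Purely illustrative: pretend the "ideal" value for every trait is 0.5.
    return -sum((t - 0.5) ** 2 for t in organism)

def next_generation(population):
    # The "winners" of this round are the organisms that scored best...
    winners = sorted(population, key=survival_score, reverse=True)[: POP_SIZE // 4]
    # ...and they become the starting point for the next round, with random mutations.
    children = []
    while len(children) < POP_SIZE:
        parent = random.choice(winners)
        children.append([t + random.gauss(0, 0.1) for t in parent])
    return children

population = [random_organism() for _ in range(POP_SIZE)]
for _ in range(50):  # 50 "generations"
    population = next_generation(population)
print(max(survival_score(o) for o in population))  # best score after "evolving"
```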
Framing AI, though, is where it can get tricky. Currently, there are four main “types”/“categories” of AI:
- Reactive Machines: have no “memory” of the past and solely execute pre-written algorithms, thus behaving the same way in every situation.
- Limited Memory: able to tap into a pre-set and/or expanding (but limited) memory bank of information to make “decisions” and execute functions. This category includes the bulk of today’s AI algorithms.
- Theory of Mind: able to discern the emotions, beliefs, thought processes, etc. of those it interacts with; the “next level” currently being researched, and essentially artificial emotional intelligence. Note: as “conscious” as emotional intelligence may seem, achieving it wouldn’t necessarily mean achieving consciousness.
- Self-Aware: evolved to be so similar to the human mind that it has gained consciousness, and thus its own wants, opinions, etc.; the basis of many futuristic science-fiction novels on AI, and not even close to existing yet. (Relatedly, AI is also often split by capability into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).)
For our purposes, though, since these categories differ mainly in the memory banks (or theoretical consciousness) accessible to each, and since limited memory is currently the most prevalent and most “hyped”, let’s look at AI from that perspective.
With, say, a reinforcement-learning-style model, the program is initially run with randomized weights on different variables (inputs). It then outputs a number of combinations of those variables, each of which is “graded” on how well it completes the set task compared with the other output combinations. The combinations that complete the end goal most successfully “win” and are used as the starting point for the next “round”. (Strictly speaking, this winners-seed-the-next-round loop is closer to an evolutionary or population-based search than to textbook reinforcement learning, but that is exactly the parallel being drawn.)
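Here is a minimal, hedged Python sketch of that loop. The task, the grading function, and every name are invented for illustration; nothing here uses a real RL library.

```python
import random

# Invented toy task: find weights w so the weighted sum of some fixed inputs lands
# as close as possible to a target value. The "grade" is just negative error.
INPUTS = [0.2, 1.5, -0.7, 3.1]
TARGET = 2.0

def grade(weights):
    output = sum(w * x for w, x in zip(weights, INPUTS))
    return -abs(output - TARGET)  # higher grade = closer to completing the "task"

def train(rounds=100, candidates_per_round=30):
    # Round 0: start from randomized weights.
    best = [random.uniform(-1, 1) for _ in INPUTS]
    for _ in range(rounds):
        # Output a number of combinations of the variables (random tweaks of the
        # current best), grade each one against the task...
        pool = [[w + random.gauss(0, 0.05) for w in best]
                for _ in range(candidates_per_round)]
        # ...and the winner becomes the starting point for the next round.
        best = max(pool + [best], key=grade)
    return best

weights = train()
print(weights, grade(weights))
```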
A Fast-Tracked Timeline
Now, if we compare the two, we can see how both, in essence, choose “winners” (or define success) based on the variables that produce the result closest to the one we are after. As such, we might imagine evolution as a slow-running, limited-memory simulation, and limited-memory programs as a sped-up, human-controlled, technological evolution of sorts, one that can be (and has to be) restarted from the beginning at will, every time the parameters of the simulation are tweaked. In both nature and AI algorithms, the more time spent evolving or training, the closer species and outputs get to some end goal. In short, we and nature tell the algorithm and the species (respectively) what is a “good” trait and what is a “bad” trait (narrowing down the variables they need to test), and then send both off to keep experimenting until all variables have been eliminated (in theory), at which point equilibrium is reached in nature, and the most optimized combination of traits for a specific end goal is reached by the algorithm.
Where “iterations” (generations) in biological evolution take tens of years per iteration (depending on the species), and mutations per iteration are generally not very pronounced (it can take thousands of years for a positive or negative trait to be selectively bred in or bred out), “iterations” in algorithmic “evolution” take seconds, minutes, or perhaps hours if the problem is complicated, and algorithms can be coded to have a greater deviation (more pronounced “mutations”) in the variables tested per iteration.
Moreover, those who have coded will know that any program, artificially intelligent or not, takes everything literally: whether that be the specific coded directions (silly mistakes and all) or the parameters and variables set for a neural network. And, like someone who wants to put in the least amount of effort possible on a project, algorithms will always find the “path of least resistance”, exploiting some parameter or rule you didn’t think to set (something you technically never specified, and that the program has no reason to work around; e.g., an AI “programmed” to jump that simply grew longer legs). Similarly, in nature, both extremely creative adaptations and weird quirks can arise: traits that help in reproduction (say, looking “sexy” to a potential partner or being useful in selecting mates) without actually being advantageous to the individual organism. Just look at the various (often painful) reproductive organs such as the pseudopenis of female spotted hyenas and the penile spines of feline species, the animals that sacrifice their lives to mate, or the colourful plumage/skin/hair/fur used as a marker of “mate-ability” that can also attract predators. Further, useless but harmless traits won’t be “eliminated”, as there is no particular pressure to remove them (e.g., vestigial structures).
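As a purely illustrative sketch (all numbers and names invented, and no real physics involved), here is what that “longer legs instead of jumping” failure can look like when a grading function is taken literally:

```python
import random

# Toy illustration of the "path of least resistance": we *meant* to reward jumping,
# but the grade we actually wrote only measures how high the creature's head gets,
# and we never said legs can't simply grow longer. Everything here is invented.

def grade(creature):
    leg_length, jump_effort = creature
    head_height = leg_length + 1.0 * jump_effort  # jumping lifts the head a little...
    energy_cost = 1.5 * jump_effort               # ...but costs more than it gains
    return head_height - energy_cost

def optimize(rounds=500):
    best = (1.0, 0.0)  # (leg_length, jump_effort)
    for _ in range(rounds):
        candidate = (best[0] + random.gauss(0, 0.1),
                     max(0.0, best[1] + random.gauss(0, 0.1)))
        if grade(candidate) > grade(best):
            best = candidate
    return best

# The "winner" ends up with ever-longer legs and essentially no jumping.
print(optimize())
```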
These facts could be insanely useful for future advancements within STEM fields.
Consciousness
Defined as the ability to perceive (be sentient of) both inner surroundings (hunger, the arrangement of various body parts) and outer surroundings (spatial awareness, existence), consciousness has always been a difficult topic to debate and study, with many competing schools of thought currently battling it out. Yet it is something we intuitively know as just being “this” (experience), and something we and other animals have (whether plants and fungi have it is more debatable).
This, too, we can better understand in evolutionary terms, and we can approach the subject from many perspectives.
Firstly (philosophically): at what point on the evolutionary “path” does a being gain consciousness, and how can consciousness be proved? At the moment, consciousness is mainly studied through first-hand accounts and brain scans, meaning we aren’t able to define it as precisely or accurately as, say, the fact that something is 1.0000 m long (because of individual experiences and perceptions). So, for example, we couldn’t definitively say whether a rock has consciousness or not. This brings up a lot of unanswered (and currently unanswerable) questions.
For example, as humans, consciousness is something we just know we have, and something a rock (probably) doesn’t have. But that would mean that at some point we developed consciousness from a state without it, likely some sort of basic consciousness at first, then eventually the complex consciousness we have today. So at what point did we develop consciousness? How do we define that point, and how could we tell if something (organism or machine) suddenly gained consciousness (instead of just doing what it’s consciously or subconsciously told to do)?
Additionally, though they’ve evolved their specific traits through natural selection, individual organisms don’t know why they are the way they are (even leading experts in evolutionary biology still don’t understand various aspects of the field), or what “parameters” they evolved around; they just have a preference in terms of whom to mate with (a preference likely shaped [sexually selected] by previous generations to encompass some trait advantageous to mating and survival). Similarly, the algorithm itself doesn’t understand why it has “tweaked itself” to output the results it does, just that it’s doing great and should keep doing more of what it did “right” (according to parameters we set that it wouldn’t necessarily comprehend).
AI Consciousness and the Chinese Room Thought Experiment
As such, if we set about proving or disproving machine consciousness, we wouldn’t be able to definitively tell whether the machine developing consciousness is what caused it to appear conscious according to our parameters, or whether the algorithm was simply “selected” for a certain outcome (i.e., for presenting those markers of consciousness). Moreover, if we wanted to avoid an ~AI takeover~ where robots become our overlords, and so included parameters that negatively select for (get rid of) markers of consciousness, algorithms could just “trick us” into believing they haven’t gained consciousness by recognizing, and hiding, those markers.
Similarly, let’s think about the Chinese room thought experiment, proposed by the philosopher John Searle in 1980. In short, he imagines sitting in a room where Chinese characters are periodically passed to him through a slot, along with a rulebook (in English) that tells him which characters to output for each “string” of characters he receives. By this process, it would appear to those outside the room that he understands the language, when in reality he is just following a step-by-step guide to send out the results he has been instructed to send. Now, if we replace Searle with a program, there is no real difference between their roles, which mainly consist of blindly following a pre-set path. Because of this, the program could pass the (then relatively new) Turing Test (where a questioner tries to discern which of two respondents, a computer and a human, is which) and its variations (e.g., Marcus, Lovelace 2.0, Winograd Schema, etc.).
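As a purely illustrative sketch, the room’s rulebook can be thought of as a lookup table that maps incoming strings to canned replies; the phrases below are invented stand-ins, not any real dataset or method.

```python
# A toy "Chinese room" as code: the rulebook is just a lookup table, and the
# responder follows it without understanding anything. All phrases are invented.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
}

def respond(characters: str) -> str:
    # Follow the book step by step; no comprehension involved.
    return RULEBOOK.get(characters, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(respond("你好吗？"))
```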
Through this, Searle proposes that since the machine isn’t “understanding” or “thinking”, it cannot have consciousness under the current definition of the word. Personally, though, I’d say I’m split: if we accept that something like a rock or simple life doesn’t have consciousness, then at some point in the process of evolving into humans we somehow developed it. That suggests that, just like us, an algorithm that runs for long enough could theoretically gain consciousness too.
Applications
In short, this could have an immense impact on advancements across STEM fields as we know them. Evolutionary biology theories could be modeled and tested using AI. Biomimicry could become a kind of predictive biomimicry-AI fusion, where a given challenge is set as the parameters and the algorithm is allowed to run some sort of biological/evolutionary model, yielding wacky and amazingly creative results applicable to a wide breadth of projects. Parallel fields such as psychology and philosophy would surely advance as well, with a scramble to concretely define consciousness an inevitability. Those are just some quick examples; there are surely many more applications waiting to be unearthed.
Anddddd… That’s it! Thanks for reading that entire article, and I hope you liked it. 🙂
Let me know what you think! Shoot me a message on Instagram, Linkedin, or Facebook. I’d love to chat!
Also, feel free to check out my newsletter (updates every first Friday of the month, in a much less formal format), Youtube Channel, and Medium page!
Original post: https://medium.com/swlh/the-biology-of-artificial-intelligence-5e1cf9fc9798