Jim Glasgow

Current Project Video

We’re uploading a video for people interested in what we’re doing at Unseen Technologies.

Watch the video

Jim Glasgow

Beyond the Turing Test

The Turing test was proposed by Alan Turing in 1950 in a paper called "Computing Machinery and Intelligence." The Glasgow-Habecker test was proposed seventy years later, in 2020.

The Turing test is a test of a computer's ability to provide responses to questions that would be indistinguishable from those given by a human. The idea was inspired by a Victorian parlor game called the imitation game, in which a man would try to give responses to questions that would be indistinguishable from those of a woman, or vice versa.

The idea was that if a computer could give responses that could not be told from those of a human, then the computer was exhibiting intelligence, or something so similar that it would not matter whether intelligence produced it or not. In other words, if the computer could fool a human judging the responses, it would have passed the test.

The Glasgow-Habecker test is based on emotional intelligence. The idea behind it is that two human beings in conversation will naturally begin to detect, mirror, and affect each other's emotional valences. Bringing these emotions under control, no matter how dysregulated they were to begin with, is called coregulation. The Glasgow-Habecker test asserts that a computer is displaying emotional intelligence if it can coregulate the emotions of a human, with the result that the human feels better or at least feels understood.

Like the Turing test, the Glasgow-Habecker test relies only on textual language. It is also worth noting that both tests are based on a human's self-reported experience: "Did that seem to come from a human?" and "Did you feel better after that interaction?" Neither test proves intelligence directly. Rather, each directs effort away from trying to recreate what intelligence is and toward getting computers to do something that intelligence does. In so doing, each test can be used to discover evidence for the effective simulation of a different type of intelligence.

Read More
Jim Glasgow

Inferring Emotions

Beyond Emotional Gradients

The study of emotions, with its interdisciplinary appeal, is marked by quite a bit of controversy. This is partly because there is no general agreement over what words like “emotion” or “feeling” actually mean, just as there is no agreement on whether conscious affective states are created in the higher brain regions from signals generated lower down, or whether they begin as felt states in the lower brain and are elaborated upon in higher areas, where learned interpretations modify them into more complex emotions. In either interpretation, it is generally agreed that fully experienced, conscious affective states exist only in higher brain regions.

For our purposes, the precise answer to how feelings come to be felt does not really matter. Feelings are qualia by definition, and there is no useful debate to be had over qualia, in humans or anything else. We are more concerned with which feeling-producing gradients a person internally creates than with the precise mechanisms humans use to integrate them into conscious experience. We are also concerned with which feelings are projected from one person to another, and how those might be detected by an AI system. In other words, we want to know how to detect what a person projects as feelings, and how to model that data and use it to control a response.

The base of this model is differential perception. We assume that regions in the brain are associated with different feeling states because they either provoke the release of related chemicals or receive signals about their concentrations. The states with which we are concerned can be mixed, but the basic ones are generally accepted to be few.

The late Jaak Panksepp, the neuroscientist who coined the term “affective neuroscience,” proposed a set of seven systems, which he called Seeking, Rage, Fear, Grief, Care, Lust, and Play. Other formulations exist, but most are similar. Panksepp mapped each of these to different regions of the mammalian brain, all of which are in medial and lower regions, and each of which is attended by the release of different chemicals into the body. The mapping of feelings to these brain areas, sometimes referred to as parts of the central limbic system, implies that these feelings are evolutionarily old. It also suggests that they have similar functions in all mammals. Indeed, from the hissing of cats and the growling of dogs to the playfulness of puppies and kittens, the behaviors related to these states are easily observed and their functions readily explained.

 

Inferences vs. Projections

For humans, with their complex neocortices, basic feelings are obviously not the whole story of emotions. Rather, humans experience many complicated emotional states. It is believed that these complex emotions are either learned directly from explicit teaching or absorbed culturally. This implies that the same feeling systems can generate the inputs to a variety of emotions, each of which depends on the learned social context in which it arises and the way people have been taught to interpret it.

This means that while feelings may begin in lower brain systems with changes in chemistry, they are interpreted into more nuanced emotions in the higher regions of the neocortex. The need to learn and interpret such contextual emotions before the brain can generate them internally also accounts for why very young children and those with cognitive disabilities do not seem to have the range of complex emotions found in typical adults. Even so, most people, and especially children, will display very strong basic feelings when aroused.

Another important implication concerns communication. People communicate their basic feelings rather clearly, and very likely have done so for millions of years, but they are unlikely to be able to communicate complex emotions so easily. Instead, they must either rely on context to carry the additional data others need to reconstruct their emotional states, or explain them with language. For example, complex emotions such as familial embarrassment and schadenfreude are elaborations of basic states. If only the feeling states are being transmitted, then it is the recipient, knowing the context and sharing the culture, who reconstructs the actual emotion cognitively, using modules similar to those that construct it in the transmitter. This decoding would be roughly accurate at first, and be refined as more information arrives.

Because culture overlays biology, even the communication of basic feelings becomes less overt as the young mature. Nonetheless, we would argue that these feelings are still all or nearly all of what is being directly projected. Indeed, when people interact, strongly present feelings are communicated before anyone says anything. Tone of voice, speaking speed, body language, word choice, and similar cues are all part of this transmission and reception.

Different theories have been advanced for how exactly these feelings are picked up between humans, including the recently popularized discovery of mirror neurons, whose role is not fully understood and whose importance has almost certainly been overstated in popular articles. Whatever the exact mechanisms, what is important is that feelings are clearly being transmitted and received, and that if AI systems are to fully participate in human communication, they will need to simulate their use as well.

 

Salience and Attention

For building AI systems, the fact that only a limited number of feelings can be projected is welcome news. It means that AI systems only need components for detecting these basic feelings, and that attempting to detect complex emotions is unlikely to be a fruitful direction. It is also reasonable to assume that machine learning models trained to recognize the presence of any non-neutral emotional valence can be used to trigger a hand-off to a set of specialty models trained to separate out the basic feelings.
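As a concrete sketch of this hand-off, the toy pipeline below gates input through a cheap valence check before running per-feeling detectors. The keyword lists, feeling labels, and function names are invented placeholders standing in for trained models.

```python
# Hypothetical two-stage pipeline: a cheap valence gate hands off to
# per-feeling specialty detectors only when input is not neutral.
# Keyword matching here is a stand-in for trained classifiers.

def detect_valence(text: str) -> bool:
    """Stage 1: return True if any non-neutral emotional signal is present."""
    cues = {"angry", "scared", "sad", "love", "excited", "worried", "happy"}
    return any(word.strip(".,!?").lower() in cues for word in text.split())

def classify_feelings(text: str) -> dict:
    """Stage 2: run the specialty detectors (stubbed as keyword matches)."""
    cue_map = {
        "fear": {"scared", "worried", "afraid"},
        "rage": {"angry", "furious"},
        "grief": {"sad", "loss"},
    }
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {feeling: len(words & cues) > 0 for feeling, cues in cue_map.items()}

def analyze(text: str) -> dict:
    if not detect_valence(text):
        return {}                      # neutral: skip the expensive stage
    return classify_feelings(text)
```

The gate keeps the expensive specialty models out of the loop for the majority of inputs, which are emotionally neutral.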

Once valence is known, it would be helpful to quantify arousal or affect, the degree of excitability being expressed. This is harder to measure from language alone, but there are clues. For example, excited people tend to speak in bursts, use shorter sentences, include exclamations, and use shorter words. They may also use more profanity, or use statements that display cognitive bias errors.
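These textual clues can be folded into a rough arousal score. The sketch below is a heuristic illustration only; the thresholds and weights are arbitrary assumptions, not validated measures.

```python
# Rough arousal score from the textual cues described above: bursty
# short sentences, short words, and exclamations. Thresholds and
# weights are arbitrary illustrative choices.
import re

def arousal_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if not words or not sentences:
        return 0.0
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w) for w in words) / len(words)
    exclamations = text.count("!")
    score = 0.0
    score += 1.0 if avg_sentence_len < 6 else 0.0   # bursty, short sentences
    score += 1.0 if avg_word_len < 4.5 else 0.0     # short words
    score += min(exclamations, 3) * 0.5             # exclamations
    return score
```

A terse, exclamatory message scores well above a calm declarative one.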

From there, the degrees of specific feelings, affect, and valence can be combined with information from the context, the speaker’s age, and the content of the communication to create a proxy for a detected emotion in the AI.

Thus, detecting a feeling that turned out to be anxiety from an adult in a statement about work could be used to generate a response such as “You seem anxious about something going on at work. Is there anything I can do to help with that?” Over time, as more emotional data were gathered, more refined outputs could be produced. One caveat is that such models will always need to be culturally specific, but that, too, may be something already being solved within LLMs themselves, as they are now trained on enormous language samples, and these samples are directly representative of culture.
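As one way to picture this final step, the sketch below maps a detected feeling plus a topic onto a templated reply, in the spirit of the anxiety-about-work example. The template text, feeling labels, and fallback are all assumptions for illustration.

```python
# Hypothetical response generation from a detected feeling and topic.
# In practice the templates would be replaced by LLM instructions;
# these strings are illustrative placeholders.

TEMPLATES = {
    "anxiety": "You seem anxious about something going on at {topic}. "
               "Is there anything I can do to help with that?",
    "grief": "It sounds like {topic} has been a real loss for you. "
             "Do you want to talk about it?",
}

def respond(feeling: str, topic: str) -> str:
    template = TEMPLATES.get(feeling)
    if template is None:
        # Unknown feeling: fall back to an open-ended prompt.
        return "Tell me more about how you're feeling."
    return template.format(topic=topic)
```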

In any case, once a control system is built to use modular subsystems to discover what is emotionally salient in a user’s conversational input, it can use that information to adjust the instructions to the LLM, essentially directing its attention to those parts of its linguistic vector space from which emotionally appropriate responses can be generated.
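One minimal way such a control layer might adjust the LLM's instructions is to fold its salience findings into the system prompt. The prompt wording and the shape of the salience data below are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch of a control layer that appends emotional-salience
# findings to an LLM system prompt, directing its attention toward
# emotionally appropriate responses. Prompt phrasing is an assumption.

def build_system_prompt(base_prompt: str, salience: dict) -> str:
    """salience: feeling name -> detected strength in [0, 1]."""
    if not salience:
        return base_prompt          # nothing salient: leave prompt untouched
    cues = ", ".join(f"{feeling} ({level:.1f})"
                     for feeling, level in sorted(salience.items()))
    return (base_prompt
            + "\nDetected emotional cues in the user's message: " + cues
            + "\nAcknowledge these feelings before answering.")
```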

Jim Glasgow

Feelings from Chemical Gradients

Out of the Sea

The earliest known animals, those composed of one or only a few cells, lived in the ocean and had receptors on their outer surfaces to detect chemicals in their environments. These receptors could cause the animals to turn toward desirable chemicals and away from undesirable ones.

This was probably the earliest use of something like decision-making in animals, even though it was purely reflexive at first. This “move into the good and out of the bad” impulse was also probably the earliest use of something like valence, although there would have been no mechanism by which good or bad could be felt.

In slightly more complex organisms, those with simple nervous systems, receptors can send signals to represent whether a chemical gradient in the environment is growing stronger or weaker. Stronger or weaker reception can then be taken to indicate whether the direction of movement is correct. Neurons that detect such changes can adjust their firing rates accordingly. This allows the animal to find the source of something it needs, or avoid danger by sensing whether a poison or threat is becoming fainter, and so farther away.

While this description is an over-simplification, the general idea is valid. Animals tend to have “approach/withdraw” reactions that prompt them to move toward what is desirable and away from what is undesirable. For most of evolutionary time, that is probably all that simple animals could use their ability to move to do. The basic mechanism depends only on having receptors to detect changes in the concentrations of desirable and undesirable chemicals, and the ability to follow their gradients by turning toward or away from their sources.
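A toy one-dimensional simulation makes the mechanism concrete: the agent below keeps heading in the same direction while its reading grows, and turns around when it shrinks. The 1-D world, the gradient shape, and the fixed step size are all simplifying assumptions.

```python
# Toy approach/withdraw agent on a line: it senses a desirable
# chemical's concentration and reverses direction whenever the
# reading weakens, which is enough to home in on the source.

def follow_gradient(concentration, start: float, steps: int = 20) -> float:
    position = start
    direction = 1.0                      # initial heading is arbitrary
    last_reading = concentration(position)
    for _ in range(steps):
        position += direction
        reading = concentration(position)
        if reading < last_reading:       # gradient weakening: turn around
            direction = -direction
        last_reading = reading
    return position

# A source at x = 10: concentration falls off with distance from it.
source = lambda x: 1.0 / (1.0 + abs(x - 10))
```

Started from either side of the source, the agent ends up oscillating within a step of it, with no measurement of absolute concentration required.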

In complex animals, especially those that live on land such as humans, many of these chemical gradients are now inside the body. Indeed, internal gradients are mostly caused by chemicals released by the body’s own organs. These chemicals can then be detected by receptors elsewhere in the body or nervous system. While this is not news, it raises an interesting question. Why is this so? Why would an organ evolve to send chemical signals, only to have those chemicals detected elsewhere in the same body? Why not have the organ simply induce a reflexive action based on what it detects?

 

Central Control

The answer, it would seem, is that reflexive responses were not sufficient as competition evolved. Over time, as animals became more complex, they did not simply become capable of more complex behaviors. They also needed to combine a wider range of inputs that might independently suggest approach or withdrawal. Sometimes, these inputs would be in conflict, such as when a potential mate and a predator were sensed in the area. Sometimes, the inputs were of enhanced or diminished importance, as when an animal detects a food gradient but is already digesting a meal rather than being hungry.

Central decision-making emerged to offer a solution to this problem. It is what allows choices to be made among competing alternatives. Much later, as learning emerged, the results of these choices would support continuing and intensifying behavior in the short term, and reinforcing behavior for the long term.
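To make the idea concrete, here is a minimal sketch of such a decision step, in which competing valenced inputs are modulated by internal state (food matters less when sated) and the strongest urge wins. The input names, gains, and winner-take-all rule are illustrative assumptions.

```python
# Hypothetical central decision step: each input carries a signed
# valence (+ approach, - withdraw), internal state scales it, and the
# strongest resulting urge determines the behavior.

def decide(inputs: dict, modulation: dict) -> str:
    """inputs: name -> signed valence; modulation: name -> gain (default 1)."""
    scored = {name: value * modulation.get(name, 1.0)
              for name, value in inputs.items()}
    # Pick the input whose (possibly negative) urge is strongest.
    strongest = max(scored, key=lambda name: abs(scored[name]))
    return ("approach " if scored[strongest] > 0 else "withdraw from ") + strongest
```

Note how damping the food input (the animal has just eaten) lets an otherwise weaker threat signal win the competition.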

A central decision system needs inputs, and these can come in different forms. Direct signaling is one: neurons use it to transmit electrical states, carried as chemical ions, to one another across the synapses of their axon-to-dendrite connections. Adjusting the way neurons behave with additional chemicals such as neurotransmitters and hormones is another. For reasons that will become clear, brains evolved to use both mechanisms.

The benefits of direct signaling are obvious: it is relatively fast and can carry specific meanings. This is why transmitted signals are used to send sensed data to the brain and to construct an internal representation of the organism in relation to its environment.

Internal chemical gradients, on the other hand, are slower, disperse more generally, and may not convey a specific meaning. However, using internal chemical gradients also offers several advantages. First, multiple areas can be reached simultaneously; some of these could act quickly, and others more slowly. Second, following gradients is an old solution, so systems to detect changes in the concentration of internally produced chemicals could be evolved from proven, existing systems. Finally, chemical gradients have likely always been used to influence behavior because their dispersal patterns are geometric or logarithmic, making changes in concentration easy to detect by receptors without the capacity for quantitative measurement.
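The last point can be illustrated simply: a receptor that reports only "stronger," "weaker," or "same" between successive readings still extracts a reliable direction signal from a geometrically decaying concentration. The functions below are a toy illustration, not a biophysical model.

```python
# A receptor needs no absolute measurement: comparing successive
# readings yields a direction signal, and geometric fall-off makes
# each step produce a clearly detectable change.
import math

def direction_signal(prev: float, curr: float) -> int:
    """-1 weaker, 0 same, +1 stronger -- all a simple receptor reports."""
    if math.isclose(prev, curr):
        return 0
    return 1 if curr > prev else -1

# Toy gradient: concentration halves with each unit of distance from
# the source, so every step toward it gives a clear 'stronger' tick.
conc = lambda distance: 2.0 ** (-distance)
```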

If we fast-forward evolution to the appearance of social animals, we can see how emotions might have emerged from internal gradients. Different chemicals could be released by different organs to indicate different feeling states, with parts of the nervous system having receptors for those chemicals and a built-in impulse to follow or avoid the resulting feelings by engaging in behaviors that either increased or decreased the release of those chemicals. In short, these elements act as if they were independent, mobile entities following gradients, seeking higher or lower concentrations indirectly despite being statically connected parts of a machine.

To phrase this another way, a chemical receptor in a simple organism that sends a signal that may ultimately result in movement that leads to more of that chemical is not, at the level of the neuron, doing anything materially different from a receptor cell in a complex organism detecting the release of an internal chemical and sending a signal that may ultimately lead to the release of more of that chemical. The same could be said if the receptors were detecting chemicals with a negative valence. All are just signaling detection and adjusting their signals to indicate whether the detection is growing stronger or weaker.

Despite the simplicity of this mechanism, it allows the development of behaviors that pull animals toward bonding, play, mating, and parental care. We can also see how it might generate behaviors consistent with excitement, competition, anger, loss, and fear. This would give the central decision system more to work with than just valence, allowing priorities to be established among competing inputs even as the body’s other systems are shifting blood flow, releasing hormones, and readying responses. Even so, it would not need to fundamentally alter a basic feature of sensory neuronal operation: determining whether a signal is moving up or down a gradient, and sending a signal that can be used to follow or escape that gradient.

Having evolved to interpret changes in the concentration of gradients may also account for many of the intuitive heuristic approximations animals make when geometric or logarithmic changes in detected inputs are observed. These may be physical, as when sound or light intensity is used to estimate distance, but they may also help in judging things that are felt emotionally, such as relationships, kinship, opportunities, and risks that would otherwise take rather complicated mathematics to work out. In other words, these gradients may be the roots of intuitive comprehension.

Toward Emotional AI

Animals in nature have neither the time nor the ability for mathematical computations. They may develop statistically based heuristics over time for emotional responses or environmental signals, but these require experience, and so cannot be the starting point for learning what is punishing or rewarding about the world, including the social environment. Such heuristics can, however, be trained by a mechanism that compares the relative strengths of inputs against experience, provided those inputs can be valenced and compared.

In AI, if we want to imitate this, we must provide an artificial proxy for these motivating gradients. The idea of artificial valence is not new; punishments and rewards are how many AI systems are trained. In chess-playing programs, estimates of board position values might be considered mathematical representations of such valences. Of course, these board position scores are not nuanced like emotions; they are simply used for reinforcement learning. If we want a system that can interpret valence in emotional terms, we will need to go further, providing mechanisms to detect different feeling states and algorithms for combining them.
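As a minimal sketch of going further, the code below combines the outputs of hypothetical feeling detectors into a single signed valence proxy. The feeling names, weights, and linear combination are illustrative assumptions, not an established formulation.

```python
# Hypothetical artificial valence proxy: independent feeling detectors
# each emit a strength, a signed weight encodes approach (+) versus
# withdraw (-), and the weighted sum is the scalar a training loop
# could use. Names and weights are invented for illustration.

FEELING_WEIGHTS = {
    "play": +1.0,
    "seeking": +0.8,
    "care": +0.6,
    "fear": -1.0,
    "rage": -0.9,
    "grief": -0.7,
}

def valence_proxy(detections: dict) -> float:
    """detections: feeling -> strength in [0, 1] from specialty detectors."""
    return sum(FEELING_WEIGHTS.get(feeling, 0.0) * strength
               for feeling, strength in detections.items())
```

Unlike a chess evaluation score, the inputs here keep their identities, so the same machinery that produces the scalar can also report which feelings drove it.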
