Extension of the extension

After spending a long time pondering the notion of language and its dual nature as both a self-recursive system and a reality actuator, my inner debate on meaning got heated.

I see many people on social media juggling the concept of meaning as the new je ne sais quoi, now that neural networks have deflated the mysticism behind words.

It is difficult to say whether this is an intellectual fad or a serious crisis we need to deal with, especially since it remains rather intangible for now.

Nonetheless, we could argue that, at least in symbolic terms, we are providing our tools with extra steps for symbolic and epistemic actuation, enabling the extension of the extension.

Previously, I wrote about extended cognition and I mentioned a triad we need to keep an eye on: meaning, interpretation and interpretability.

Whatever is useful/meaningful to us is brought to reality by social agreements. We speak about what needs to be spoken and such exchanges are subsequently instantiated in our dictionaries and linguistic heritage.

Collective decisions and reflections have led to terms, that is, linguistic tools that allow us to intervene in specific domains. Until now, this process has been carried out by the only agents in the informational arena embedded with linguistic capabilities: human beings.

Deliberating on the intrinsic agency of artificial intelligence wouldn’t necessarily add anything to this particular case. What we should be looking at is automation and how far it can go if given a range of exploratory freedom.

I am, by no means, a hype lord when it comes to technology. I don’t even know how to use Excel very well and I type every single word I publish. However, I am definitely aware of the agentic-like turn AI is taking.

For example, take Google DeepMind’s recent announcement. They are releasing SIMA 2, which acts as a general agent capable of understanding and reasoning about complex problems and instructions in a simulation. This prowess represents a huge step for machines and implies the possibility of outsourcing limitless trial and error in any experiment viable through digital environments.

It’s still very early to test that type of technology on language and communication issues, but we can already assume that this approach to testing and experimenting, carried out by artificial agents, leads us to question the ways in which new knowledge and findings are going to be coined.

Scientific responsibility would dictate that findings be verified. Nonetheless, what I am aiming at here is not the result but the process: our tools are becoming entire workflows for experiences, knowledge discovery, coinage and functional categorisation.

Scientific meaning within controlled digital environments is, henceforth, tokenisable and executable in binary code.

But the question remains: what are the consequences for meaning after the extension of the extension? How do we avoid a predictable concern about the value of our words and focus on what is really necessary in terms of language and communication between humans and machines?

These are some of my personal concerns, at least. If we are to co-exist in the future with other intelligences or ultra-sophisticated machines, I believe that learning how to trace meaning in a drastically different scientific and epistemic environment becomes essential.

Next time, I will expand on the idea in that last paragraph to make my statements more reasoned and grounded.

This post helped me make the idea of what we will have to deal with in the future a little clearer, and I hope it does the same for you.

See you soon,

Javier