July-December 2020: Note No. 2

An interesting take on the problem of self-awareness:

While a problem solver is interacting with the world, it should store the entire raw history of actions and sensory observations including reward signals. The data is ‘holy’ as it is the only basis of all that can be known about the world. If you can store the data, do not throw it away! Brains may have enough storage capacity to store 100 years of lifetime at reasonable resolution.
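A minimal sketch of the "store everything" idea in Python (the names and structure here are my own illustration, not from the quoted text): an append-only log of (action, observation, reward) triples that is never pruned.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class ExperienceLog:
    """Append-only raw history of the agent's interaction with the world."""
    history: List[Tuple[Any, Any, float]] = field(default_factory=list)

    def record(self, action: Any, observation: Any, reward: float) -> None:
        # Never throw data away: every step is kept at full resolution.
        self.history.append((action, observation, reward))

log = ExperienceLog()
log.record(action="move_left", observation=[0.1, 0.9], reward=0.0)
```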

As we interact with the world to achieve goals, we are constructing internal models of the world, predicting and thus partially compressing the data history we are observing. If the predictor/compressor is a biological or artificial recurrent neural network (RNN), it will automatically create feature hierarchies, lower level neurons corresponding to simple feature detectors similar to those found in human brains, higher layer neurons typically corresponding to more abstract features, but fine-grained where necessary. Like any good compressor, the RNN will learn to identify shared regularities among different already existing internal data structures, and generate prototype encodings (across neuron populations) or symbols for frequently occurring observation sub-sequences, to shrink the storage space needed for the whole (we see this in our artificial RNNs all the time). Self-symbols may be viewed as a by-product of this, since there is one thing that is involved in all actions and sensory inputs of the agent, namely, the agent itself.
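To make the predictor/compressor idea concrete, here is a minimal sketch in PyTorch (my own construction, not the author's actual model): an RNN trained to predict the next observation from the history so far. The better the predictions, the smaller the residuals a lossless coder would have to store, which is exactly the sense in which prediction is compression.

```python
import torch
import torch.nn as nn

class RNNPredictor(nn.Module):
    """Predicts the next observation from the observation history."""
    def __init__(self, obs_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, obs_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, time, obs_dim); output at step t predicts step t+1.
        out, _ = self.rnn(obs_seq)
        return self.head(out)

model = RNNPredictor(obs_dim=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

seq = torch.randn(1, 100, 8)     # stand-in for a sensory history
pred = model(seq[:, :-1])        # predictions for steps 1..99
residual = seq[:, 1:] - pred     # what a lossless coder would have to store
loss = residual.pow(2).mean()    # better prediction -> smaller residuals
opt.zero_grad()
loss.backward()
opt.step()
```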

To efficiently encode the entire data history through predictive coding, it will profit from creating some sort of internal prototype symbol or code (e. g. a neural activity pattern) representing itself. Whenever this representation becomes activated above a certain threshold, say, by activating the corresponding neurons through new incoming sensory inputs or an internal ‘search light’ or otherwise, the agent could be called self-aware. No need to see this as a mysterious process — it is just a natural by-product of partially compressing the observation history by efficiently encoding frequent observations.
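The threshold mechanism can be illustrated with a toy example (again my own sketch under the stated assumptions, not something from the text): a stored prototype activity pattern stands in for the self-symbol, and the self-symbol counts as "activated" whenever the current neural activity matches it above a threshold.

```python
import numpy as np

def self_symbol_active(activity: np.ndarray,
                       self_prototype: np.ndarray,
                       threshold: float = 0.8) -> bool:
    # Cosine similarity between current activity and the stored self code.
    sim = activity @ self_prototype / (
        np.linalg.norm(activity) * np.linalg.norm(self_prototype))
    return sim > threshold

rng = np.random.default_rng(0)
proto = rng.normal(size=32)                      # learned "self" prototype code
noisy_self = proto + 0.1 * rng.normal(size=32)   # input resembling the agent
print(self_symbol_active(noisy_self, proto))     # True: above the threshold
```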

In other words, to solve its problems (for example, the problem of survival), a creature should store the entire history of its actions, outcomes, and observations, so that it can make better decisions in the future. This creates the need for data compression (to build models and make predictions). Hence "symbols" emerge: things that describe recurring observations or patterns. Then symbols of symbols appear, and (down the rabbit hole we go) this eventually requires a "symbol" denoting the creature itself and its own thinking process. This leads to self-awareness.

An elegant and simple theory. (Though, of course, we don't yet know how true it is.)