
Neurosymbolic Research

Research interests: 

Intersection between DL & BN | Hierarchical Memory | Artificial Consciousness, Model of Self

A.I. agents need to have a hierarchical representation of their environment (including the self).

One of the main issues with current Deep Learning architectures is that the agents do not have any "real" connection with the environment they are trained to predict. One way to address this is to investigate paradigms that would allow an artificial agent to create the model that describes itself as part of its environment.

In the last few years there has been a major leap in Artificial Intelligence (e.g. GPT-3, DALL-E), with the main focus being Deep Learning models that solve mostly perception tasks. Such models seem to be getting closer and closer to solving sensory prediction and feature extraction, but it is not clear whether they will be able to solve cognition, or even to build a hierarchical reconstruction of their understanding of their environment, including the self.

This is an opportunity to investigate Neurosymbolic architectures whose two main pillars are Perception and Cognition. Perception can be explored through Deep Learning models solving for a) sensory inputs and b) feature extraction, while Cognition can be explored through symbolic frameworks like Bayesian Networks solving for a) memory, b) causality, and c) self-reference.
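
As a rough illustration of these two pillars, here is a minimal sketch in Python. It assumes the pgmpy library for the Bayesian side, and the perceive function is a hypothetical stand-in for a deep perception model; think of it as a sketch of the wiring under those assumptions, not a definitive implementation.

```python
# Minimal sketch of the two-pillar idea, assuming pgmpy for cognition.
# perceive() is a hypothetical stand-in for a deep perception model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

def perceive(raw_signal: float) -> int:
    """Perception pillar: map a raw sensory value to a discrete symbol."""
    return 1 if raw_signal > 0.5 else 0

# Cognition pillar: a toy causal structure over symbols.
model = BayesianNetwork([("Cause", "Observation")])
model.add_cpds(
    TabularCPD("Cause", 2, [[0.7], [0.3]]),
    TabularCPD("Observation", 2,
               [[0.9, 0.2],   # P(Obs=0 | Cause=0), P(Obs=0 | Cause=1)
                [0.1, 0.8]],  # P(Obs=1 | Cause=0), P(Obs=1 | Cause=1)
               evidence=["Cause"], evidence_card=[2]),
)
infer = VariableElimination(model)

# Perception grounds the symbol; cognition reasons causally over it.
symbol = perceive(0.8)
posterior = infer.query(["Cause"], evidence={"Observation": symbol})
print(posterior)
```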

First attempt to solve cognition

Tzager: The first computational symbolic framework built on Bayesian Networks.

As explained thoroughly in the post "We built the closest to a Bayesian Brain", at Intoolab (my previous company) we managed to build the first Bayesian Network framework able to computationally predict hierarchies of symbols, as a computational language, connecting millions of mechanisms and pathways within the same causal structure (in that case, the human body). You can watch the video showcasing Tzager's platform above.
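
To make the idea of a hierarchy of symbols inside one causal structure concrete, here is a toy sketch, again assuming pgmpy. The node names are illustrative placeholders, not Tzager's actual ontology: a molecular mechanism influences a pathway, which influences a phenotype, and inference can run both down and up the hierarchy.

```python
# Toy hierarchy of symbols in a single causal structure, assuming pgmpy.
# Node names are illustrative placeholders, not Tzager's real ontology.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Mechanism -> Pathway -> Phenotype: three levels of the hierarchy.
model = BayesianNetwork([("Mechanism", "Pathway"), ("Pathway", "Phenotype")])
model.add_cpds(
    TabularCPD("Mechanism", 2, [[0.6], [0.4]]),
    TabularCPD("Pathway", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["Mechanism"], evidence_card=[2]),
    TabularCPD("Phenotype", 2, [[0.9, 0.4], [0.1, 0.6]],
               evidence=["Pathway"], evidence_card=[2]),
)
infer = VariableElimination(model)

# Reason downward (prediction) and upward (explanation) across levels.
print(infer.query(["Phenotype"], evidence={"Mechanism": 1}))
print(infer.query(["Mechanism"], evidence={"Phenotype": 1}))
```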


One major aspect that Tzager was missing, of course, is a model of the self. The reason is that the dataset Tzager was trained on came from research papers and scientific knowledge, which means there was no environment in which Tzager itself could be an entity. But computational symbolic architectures like Tzager are able, if trained on the right dataset, to predict models of themselves, as long as parts of the self are present in the environment represented by the dataset.
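
One hedged way to picture this: if the training data itself contains variables describing the agent, then after fitting the network the self is just another node in the learned causal structure. In the sketch below (pgmpy again), the SelfAction column and all data values are invented purely for illustration.

```python
# Sketch: when the dataset contains variables describing the agent itself,
# the fitted network includes a self-model as ordinary nodes.
# All column names and data here are invented for illustration.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# An environment variable plus a variable describing the agent's own action.
data = pd.DataFrame({
    "SelfAction":  [0, 0, 1, 1, 0, 1, 1, 0],
    "Environment": [0, 1, 1, 1, 0, 1, 0, 0],
})

# The agent's action is modeled as a cause within its own environment.
model = BayesianNetwork([("SelfAction", "Environment")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# The agent can now query a (crude) model of itself.
infer = VariableElimination(model)
print(infer.query(["SelfAction"], evidence={"Environment": 1}))
```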

What's Next? Deep Learning <> Bayesian Networks 


While Bayesian Networks are great for cognition, they are not ideal for perception. The good news, of course, is that perception can be solved through Deep Learning frameworks like LLMs. This means we could actually have the most complete cognitive architecture ever made, if we are able to find a way for models like davinci-3 (GPT-3) to "talk" to models like Tzager, or other Bayesian-like architectures.
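
A hedged sketch of what "talking" could mean in practice: the language model's free-text output is parsed into discrete symbols that become evidence for the Bayesian side. Here query_llm is a hypothetical stand-in for a real GPT-3 call, and the symbol vocabulary and "Observation" variable are invented for illustration.

```python
# Sketch of a Deep Learning <> Bayesian Network bridge:
# LLM text out, Bayesian evidence in.
# query_llm is a hypothetical stand-in for a real GPT-3 / davinci call;
# the symbol vocabulary is invented for illustration.
from typing import Dict

# Checked in insertion order, most specific phrase first.
SYMBOLS = {"no inflammation": 0, "inflammation": 1}

def query_llm(prompt: str) -> str:
    """Stand-in for a completion call to a model like davinci-3."""
    return "There is inflammation in the sample."  # pretend LLM output

def text_to_evidence(text: str) -> Dict[str, int]:
    """Map the LLM's free text onto the Bayesian network's symbols."""
    for phrase, state in SYMBOLS.items():
        if phrase in text.lower():
            return {"Observation": state}
    return {}

evidence = text_to_evidence(query_llm("Does the tissue show inflammation?"))
print(evidence)  # -> {'Observation': 1}, ready to pass into a BN query
```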


Upcoming first test: Create a cognitive hierarchical model out of GPT-3's perception

With this test we are having GPT-3 turn its predictions into symbolic hierarchies, with the sole purpose of testing the capacity of Deep Learning models to turn their features into symbols, thus creating a secondary, emergent framework that learns from the already existing inference of davinci-3.
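
A hedged sketch of one way such a test could be run, using the legacy OpenAI completions client that the davinci-era models were served through. The prompt, the text-davinci-003 model name, and the JSON parsing are assumptions about one possible setup, not the actual experiment.

```python
# Sketch of the test: ask GPT-3 to emit its prediction as an explicit
# symbolic hierarchy (JSON), then parse it into a tree we can reason over.
# Prompt, model name, and parsing are assumptions, not the real experiment.
import json
import openai  # legacy completions client from the GPT-3 era

openai.api_key = "YOUR_API_KEY"

PROMPT = (
    "Describe the concept 'dog' as a JSON hierarchy of symbols, "
    'e.g. {"symbol": "animal", "children": [...]}. Output JSON only.'
)

response = openai.Completion.create(
    model="text-davinci-003",  # a davinci-3-era model
    prompt=PROMPT,
    max_tokens=256,
    temperature=0.0,
)

# The secondary, emergent framework starts here:
# the model's features rendered as an explicit symbol tree.
hierarchy = json.loads(response["choices"][0]["text"])

def walk(node, depth=0):
    """Print the symbolic hierarchy extracted from the model's inference."""
    print("  " * depth + node["symbol"])
    for child in node.get("children", []):
        walk(child, depth + 1)

walk(hierarchy)
```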
