18 results for "Ben Goertzel"
Search Results
2. Combinatorial Decision Dags: A Natural Computational Model for General Intelligence
- Author
-
Ben Goertzel
- Subjects
Theoretical computer science, Computer science, Decision tree, Entropy (information theory), Combinatory logic, Quantum computer
- Abstract
A novel computational model (CoDD) utilizing combinatory logic to create higher-order decision trees is presented. A theoretical analysis of general intelligence in terms of the formal theory of pattern recognition and pattern formation is outlined, and shown to take especially natural form in the case where patterns are expressed in CoDD language. Relationships between logical entropy and algorithmic information, and Shannon entropy and runtime complexity, are shown to be elucidated by this approach. Extension to the quantum computing case is also briefly discussed.
- Published
- 2020
3. Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities
- Author
-
Ben Goertzel, Gino Yu, and Andrés Suárez-Madrigal
- Subjects
Automated learning, Rule induction, Computer science, Grammar induction, Artificial intelligence, Language model, Cluster analysis, Natural language processing, Natural language, Transformer (machine learning model)
- Abstract
A novel approach to automated learning of syntactic rules governing natural languages is proposed, based on using probabilities assigned to sentences (and potentially longer word sequences) by transformer neural network language models to guide symbolic learning processes like clustering and rule induction. This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations; hence, the technique is readily adaptable to the continuous appearance of more powerful language models. We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods drawn from our prior research.
- Published
- 2020
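The approach in "Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities" can be illustrated with a minimal sketch. Here a toy scoring function stands in for a transformer language model's sentence probability (the corpus, scores, and all function names are illustrative, not the authors' implementation): words are grouped by how the "model" scores sentences in which they are substituted.

```python
# Toy stand-in for a transformer language model: assigns a high score to
# sentences from a tiny hand-written set of plausible word sequences.
# In the paper's setting this would be a transformer's sequence probability.
PLAUSIBLE = {
    "the cat sleeps", "the dog sleeps", "the cat runs", "the dog runs",
    "a cat sleeps", "a dog sleeps",
}

def lm_score(sentence: str) -> float:
    """Hypothetical sequence probability: 1.0 if attested, else 0.1."""
    return 1.0 if sentence in PLAUSIBLE else 0.1

def substitution_profile(word: str, templates: list) -> tuple:
    """Score the word slotted into each template sentence."""
    return tuple(lm_score(t.format(word)) for t in templates)

def cluster_by_profile(words: list, templates: list) -> list:
    """Group words whose substitution profiles match exactly --
    a crude proxy for 'belongs to the same syntactic category'."""
    groups = {}
    for w in words:
        groups.setdefault(substitution_profile(w, templates), set()).add(w)
    return list(groups.values())

templates = ["the {} sleeps", "the {} runs"]
clusters = cluster_by_profile(["cat", "dog", "sleeps"], templates)
# "cat" and "dog" share a substitution profile; "sleeps" does not.
```

The point mirrored from the abstract: only sentence-level probabilities are consulted, never the model's internal representations, so any stronger language model could be dropped in behind `lm_score`.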
4. Embedding Vector Differences Can Be Aligned with Uncertain Intensional Logic Differences
- Author
-
Mike Duncan, Man Hin Leung, Debbie Duong, Matthew Iklé, Hedra Seid, Abdulrahman Semrie, Ben Goertzel, and Nil Geisweiller
- Subjects
Hypergraph, Vector algebra, Theoretical computer science, Computer science, Intensional logic, Inference, Biological Ontologies, Context (language use), Embedding, Control (linguistics)
- Abstract
The DeepWalk algorithm is used to assign embedding vectors to nodes in the Atomspace weighted, labeled hypergraph that is used to represent knowledge in the OpenCog AGI system, in the context of an application to probabilistic inference regarding the causes of longevity based on data from biological ontologies and genomic analyses. It is shown that vector difference operations between embedding vectors are, in appropriate conditions, approximately alignable with “intensional difference” operations between the hypergraph nodes corresponding to the embedding vectors. This relationship hints at a broader functorial mapping between uncertain intensional logic and vector arithmetic, and opens the door for using embedding vector algebra to guide intensional inference control.
- Published
- 2020
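The alignment claimed in "Embedding Vector Differences Can Be Aligned with Uncertain Intensional Logic Differences" can be made concrete with a deliberately simple sketch (not the OpenCog/DeepWalk implementation): if nodes carry property sets and we embed each node as a property-indicator vector, the positive components of a vector difference recover exactly the intensional difference. All node and property names below are made up for illustration.

```python
# Illustrative properties and nodes, loosely themed on the longevity
# application mentioned in the abstract; purely hypothetical data.
PROPS = ["long_lived", "mammal", "caloric_restriction_responsive", "flies"]

NODES = {
    "naked_mole_rat": {"long_lived", "mammal"},
    "mouse": {"mammal", "caloric_restriction_responsive"},
    "bat": {"long_lived", "mammal", "flies"},
}

def embed(node: str) -> list:
    """Property-indicator embedding (a stand-in for a learned embedding)."""
    return [1.0 if p in NODES[node] else 0.0 for p in PROPS]

def vec_diff(a, b):
    return [x - y for x, y in zip(a, b)]

def intensional_diff(a: str, b: str) -> set:
    """Properties possessed by a but not by b."""
    return NODES[a] - NODES[b]

d = vec_diff(embed("bat"), embed("mouse"))
recovered = {p for p, x in zip(PROPS, d) if x > 0}
# recovered matches intensional_diff("bat", "mouse")
```

With learned embeddings such as DeepWalk's the alignment is only approximate, which is the paper's actual finding; the indicator construction just shows why the two operations correspond at all.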
5. What Kind of Programming Language Best Suits Integrative AGI?
- Author
-
Ben Goertzel
- Subjects
Programming language, Computer science, Gradual typing, Dependent type, Rotation formalisms in three dimensions, Inheritance (object-oriented programming), Component (UML), Scalability, Rewriting, Lambda calculus
- Abstract
What kind of programming language would be most appropriate to serve the needs of integrative, multi-paradigm, multi-software-system approaches to AGI? This question is broached via exploring the more particular question of how to create a more scalable and usable version of the "Atomese" programming language that forms a key component of the OpenCog AGI design (an "Atomese 2.0"). It is tentatively proposed that the core of Atomese 2.0 should be a very flexible framework of rewriting rules for rewriting a metagraph (where the rules themselves are represented within the same metagraph, and some of the intermediate data created and used during the rule-interpretation process may be represented in the same metagraph). This framework should, among other requirements:
- support concurrent rewriting of the metagraph according to rules that are labeled with various sorts of uncertainty-quantifications, and with various sorts of types associated with various type systems (a gradual typing approach should be used to enable mixture of rules and other metagraph nodes/links associated with various type systems, and untyped metagraph nodes/links not associated with any type system);
- allow reasonable efficiency and scalability, including in concurrent and distributed processing contexts, in the case where a large percentage of processing time is occupied with evaluating static pattern-matching queries on specific subgraphs of a large metagraph (including a rich variety of queries such as matches against nodes representing variables, matches against whole subgraphs, etc.);
- allow efficient and convenient invocation and manipulation of external libraries for carrying out processing that is not efficiently done in Atomese directly.
Among the formalisms we will very likely want to implement within this framework is probabilistic dependent-linear-typed lambda calculus or something similar, perhaps with a Pure IsoType approach to dependent type inheritance. Thus we want the general framework to support reasonably efficient/convenient operations within this particular formalism, as an example.
- Published
- 2020
6. Programmatic Link Grammar Induction for Unsupervised Language Learning
- Author
-
Anton Kolonin, Oleg Baskov, Alex Glushchenko, Andres Suarez, and Ben Goertzel
- Subjects
Grammar, Computer science, Link grammar, Language acquisition, Grammar induction, Formal grammar, Unsupervised learning, Artificial intelligence, Computational linguistics, Natural language processing, Natural language
- Abstract
Although natural (i.e. human) languages do not seem to follow a strictly formal grammar, the analysis and generation of their structure can be approximated by one. Having such a grammar is an important tool for programmatic language understanding. Due to the huge number of natural languages and their variations, processing tools that rely on human intervention are available only for the most popular ones. We explore the problem of inducing, without supervision, a formal grammar for any language in the Link Grammar paradigm, from unannotated parses also obtained without supervision from an input corpus. The details of our state-of-the-art grammar induction technology and its evaluation techniques are described, as well as preliminary results of its application to both synthetic and real-world text corpora.
- Published
- 2019
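One early step in the kind of unsupervised grammar induction described in "Programmatic Link Grammar Induction for Unsupervised Language Learning" is grouping words into categories by the contexts they share, a crude analogue of forming Link Grammar word classes. The corpus and the similarity measure below are made up for illustration:

```python
from collections import defaultdict

# Tiny unannotated "corpus"; the real pipeline works from corpus-derived
# parses, which this sketch approximates with immediate-neighbor contexts.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "a dog saw a mouse",
]

contexts = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts[w].add(("L", left))   # what appears to the word's left
        contexts[w].add(("R", right))  # what appears to its right

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of context sets: high overlap suggests the two
    words belong in the same induced grammatical category."""
    ca, cb = contexts[a], contexts[b]
    return len(ca & cb) / len(ca | cb)

# Nouns share determiner contexts, so they group together,
# while a noun and a verb do not.
```

A clustering pass over such similarities would then yield candidate word categories, from which link-grammar connectors can be induced.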
7. Evolving 3D Facial Expressions Using Interactive Genetic Algorithms
- Author
-
Meareg A. Hailemariam, Tesfa Yohannes, and Ben Goertzel
- Subjects
Facial expression, Fitness function, Facial bone, Computer science, Crossover, Population, Evolutionary algorithm, Pattern recognition, Chromosome (genetic algorithm), Genetic algorithm, Artificial intelligence
- Abstract
Interactive Genetic Algorithms (IGAs) are applied in optimization problems where the fitness function is fuzzy or subjective; their applications span several domains, including photography, fashion, gaming and graphics. This work introduces a novel implementation of an Interactive Genetic Algorithm (IGA) for evolving facial animations on a 3D face model. In this paper, an animation of a facial expression represents a chromosome, while a gene corresponds, depending on the crossover method applied, either to the keyframe point information (f-curve) of a single facial bone, or to the f-curves of a grouped sub-part such as the head, mouth or eyes. Uniform, cut-and-splice and blend crossover techniques, and their hybrids, were implemented, with the user playing the role of the fitness function. Moreover, in order to maximize user preference and minimize user fatigue during evolution, sub-part-based elitism was implemented. Subjective measurements of credibility and peculiarity were made, comparing artist-animated expressions against evolved ones. For the experimental results reported here, an average crossover percentage of 85%, a mutation level of 0.01, an initial population of 36, and 8 rounds of evolution were used. As detailed in the experiment section, the IGA-evolved facial expressions scored competitively against the artist-animated ones.
- Published
- 2019
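The IGA loop described in "Evolving 3D Facial Expressions Using Interactive Genetic Algorithms" can be sketched as follows. Genes are reduced to plain numbers standing in for f-curve keyframes, user ratings are stubbed out, and only uniform crossover is shown; the rates mirror the reported settings in spirit, not the authors' code:

```python
import random

CROSSOVER_RATE = 0.85  # average crossover percentage from the experiments
MUTATION_RATE = 0.01   # mutation level from the experiments

def uniform_crossover(parent_a, parent_b, rng):
    """Pick each gene from one of the two parents at random."""
    return [a if rng.random() < 0.5 else b for a, b in zip(parent_a, parent_b)]

def mutate(chromosome, rng):
    """Occasionally nudge a gene (a stand-in f-curve keyframe value)."""
    return [g + rng.gauss(0, 0.1) if rng.random() < MUTATION_RATE else g
            for g in chromosome]

def next_generation(population, user_ratings, rng):
    """Breed a new population, biased toward highly rated animations.
    The ratings stand in for the interactive user acting as fitness."""
    ranked = [c for _, c in sorted(zip(user_ratings, population), reverse=True)]
    elite = ranked[: len(ranked) // 2]
    children = []
    while len(children) < len(population):
        a, b = rng.sample(range(len(elite)), 2)
        if rng.random() < CROSSOVER_RATE:
            child = uniform_crossover(elite[a], elite[b], rng)
        else:
            child = list(elite[a])
        children.append(mutate(child, rng))
    return children

rng = random.Random(0)
population = [[rng.random() for _ in range(6)] for _ in range(4)]
ratings = [3, 9, 1, 7]  # hypothetical interactive user scores
new_pop = next_generation(population, ratings, rng)
```

The paper's sub-part elitism would refine the elite-selection step by preserving highly rated head, mouth, or eye sub-parts rather than whole chromosomes.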
8. Unsupervised Language Learning in OpenCog
- Author
-
Claudia Castillo, Ben Goertzel, Man Hin Leung, Anton Kolonin, Alex Glushchenko, Andres Suarez, and Oleg Baskov
- Subjects
Computer science, Cognition, Language acquisition, Pipeline (software), Grammar induction, Formal grammar, Categorization, Human–computer interaction, Unsupervised learning, Computational linguistics
- Abstract
We discuss technology capable of learning language without supervision. While the full goal may be too ambitious to achieve completely, we explore how far we can advance grammar learning. We present the current approach employed in the open source OpenCog Artificial Intelligence Platform, describe the cognitive pipeline being constructed, and present some intermediate results.
- Published
- 2018
9. A Formal Model of Cognitive Synergy
- Author
-
Ben Goertzel
- Subjects
Cognitive science, Cognitive systems, Computer science, Cognition, Cognitive architecture, Key (cryptography), Feature (machine learning), Control (linguistics), Category theory
- Abstract
"Cognitive synergy", a dynamic in which multiple cognitive processes, cooperating to control the same cognitive system, assist each other in overcoming bottlenecks encountered during their internal processing, has been posited as a key feature of real-world general intelligence, and has been used explicitly in the design of the OpenCog cognitive architecture. Here category theory and related concepts are used to give a formalization of the cognitive synergy concept. Cognitive synergy is proposed to correspond to a certain inequality regarding the relative costs of different paths through certain commutation diagrams. Applications of this notion of cognitive synergy to particular cognitive phenomena, and specific cognitive processes in the PrimeAGI design, are discussed.
- Published
- 2017
10. From Abstract Agents Models to Real-World AGI Architectures: Bridging the Gap
- Author
-
Ben Goertzel
- Subjects
Hypergraph, Intelligent agent, Theoretical computer science, Computer science, Probabilistic logic, Reinforcement learning, Hyperlink, Bridging (programming)
- Abstract
A series of formal models of intelligent agents is proposed, with increasing specificity and complexity: simple reinforcement learning agents; “cognit” agents with an abstract memory and processing model; hypergraph-based agents (in which “cognit” operations are carried out via hypergraphs); hypergraph agents with a rich language of nodes and hyperlinks (such as the OpenCog framework provides); “PGMC” agents whose rich hypergraphs are endowed with cognitive processes guided via Probabilistic Growth and Mining of Combinations; and finally variations of the PrimeAGI design, which is currently being built on top of the OpenCog framework.
- Published
- 2017
11. Intelligence Science I
- Author
-
Ben Goertzel, Jiali Feng, and Zhongzhi Shi
- Subjects
Cognitive science, Human intelligence, Intelligence amplification, Intelligence assessment, Psychology, Intelligence science
- Published
- 2017
12. Probabilistic Growth and Mining of Combinations: A Unifying Meta-Algorithm for Practical General Intelligence
- Author
-
Ben Goertzel
- Subjects
Cognitive systems, Computer science, Probabilistic logic, Logical inference, Artificial intelligence, Probabilistic inference, Solomonoff's theory of inductive inference, Algorithm
- Abstract
A new conceptual framing of the notion of general intelligence is outlined, in the form of a universal learning meta-algorithm called Probabilistic Growth and Mining of Combinations (PGMC). Incorporating ideas from logical inference systems, Solomonoff induction and probabilistic programming, PGMC is a probabilistic-inference-based framework which reflects processes broadly occurring in the natural world, is theoretically capable of arbitrarily powerful generally intelligent reasoning, and encompasses a variety of existing practical AI algorithms as special cases. Several ways of manifesting PGMC using the OpenCog AI framework are described. It is proposed that PGMC can be viewed as a core learning process serving as the central dynamic of real-world general intelligence; but that to achieve high levels of general intelligence using limited computational resources, it may be necessary for cognitive systems to incorporate multiple distinct structures and dynamics, each of which realizes this core PGMC process in a different way (optimized for some particular sort of sub-problem).
- Published
- 2016
13. Controlling Combinatorial Explosion in Inference via Synergy with Nonlinear-Dynamical Attention Allocation
- Author
-
Ben Goertzel, Misgana Bayetta Belachew, Matthew Iklé, and Gino Yu
- Subjects
Logical reasoning, Computer science, Inference, Context (language use), Cognition, Core (game theory), Simple (abstract algebra), Artificial intelligence, Control (linguistics), Combinatorial explosion
- Abstract
One of the core principles of the OpenCog AGI design, "cognitive synergy", is exemplified by the synergy between logical reasoning and attention allocation. This synergy centers on a feedback loop in which nonlinear-dynamical attention-spreading guides logical inference control, and inference directs attention to surprising new conclusions it has created. In this paper we report computational experiments in which this synergy is demonstrated in practice, in the context of a very simple logical inference problem.
- Published
- 2016
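The feedback loop described in "Controlling Combinatorial Explosion in Inference via Synergy with Nonlinear-Dynamical Attention Allocation" can be caricatured in a few lines. This is a toy, not the OpenCog ECAN/PLN code: each implication link carries an attention value, inference only combines the most attended links, and newly derived conclusions receive an attention boost so they are explored next.

```python
# Attention values over implication links, written "P->Q"; the numbers
# and the boost value are invented for illustration.
attention = {"A->B": 1.0, "B->C": 0.9, "X->Y": 0.1}

def implications(atoms):
    """Parse 'P->Q' strings into (premise, conclusion) pairs."""
    return [tuple(a.split("->")) for a in atoms]

def inference_step(attention, top_k=2):
    """Chain only the top-k attended implications (deduction: P->Q, Q->S
    yields P->S), and boost any novel conclusion's attention."""
    focus = sorted(attention, key=attention.get, reverse=True)[:top_k]
    for (p, q) in implications(focus):
        for (r, s) in implications(focus):
            if q == r and f"{p}->{s}" not in attention:
                attention[f"{p}->{s}"] = 1.5  # surprising conclusion: boost
    return attention

inference_step(attention)
# "A->C" is derived and lands at the top of the attentional focus;
# the low-attention "X->Y" link never enters inference.
```

Restricting inference to the attentional focus is what holds back the combinatorial explosion; the boost on new conclusions is the reverse arrow of the feedback loop.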
14. From Specialized Syntax to General Logic: The Case of Comparatives
- Author
-
Amen Belayneh, Gino Yu, Rodas Solomon, Ruiting Lian, Ben Goertzel, and Changle Zhou
- Subjects
Description logic, Computer science, Probabilistic logic, Artificial intelligence, Predicate (mathematical logic), Syntax, Natural language, Humanoid robot, Natural language processing
- Abstract
General-purpose reasoning based on knowledge encoded in natural language requires mapping this knowledge out of its syntax-dependent form into a more general representation that can be more flexibly applied and manipulated. We have created a system that accomplishes this in a variety of cases via mapping English syntactic expressions into predicate and term logic expressions, which can then be cognitively manipulated by tools such as a probabilistic logic engine, an information-theoretic pattern miner and others. Here we illustrate the functionality of this system in the particular case of comparative constructions.
- Published
- 2015
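The kind of mapping targeted in "From Specialized Syntax to General Logic: The Case of Comparatives" can be illustrated with a hypothetical sketch. The real system goes through full syntactic parsing rather than a regex, and the output representation below is invented for illustration, but the idea is the same: a surface comparative becomes a syntax-independent logical expression.

```python
import re

def comparative_to_logic(sentence: str):
    """Map 'X is ADJer than Y' to a term-logic-style tuple
    ('GreaterThan', (ADJ, X), (ADJ, Y)). Returns None on no match."""
    m = re.match(r"(\w+) is (\w+?)er than (\w+)", sentence)
    if not m:
        return None
    x, adj, y = m.groups()
    return ("GreaterThan", (adj, x), (adj, y))

print(comparative_to_logic("Amen is taller than Gino"))
# ('GreaterThan', ('tall', 'Amen'), ('tall', 'Gino'))
```

Once in this form, the expression is available to downstream reasoners (a probabilistic logic engine, a pattern miner) with no dependence on the English surface syntax.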
15. Are There Deep Reasons Underlying the Pathologies of Today’s Deep Learning Algorithms?
- Author
-
Ben Goertzel
- Subjects
Computer science, Deep learning, Artificial intelligence, Machine learning, Episodic memory, Convolutional neural network
- Abstract
Some currently popular and successful deep learning architectures display certain pathological behaviors, e.g. confidently classifying random data as belonging to a familiar category of nonrandom images, and misclassifying minuscule perturbations of correctly classified images. It is hypothesized that these behaviors are tied to limitations in the internal representations learned by these architectures, and that these same limitations would inhibit integration of these architectures into heterogeneous multi-component AGI architectures. It is suggested that these issues can be worked around by developing deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events.
- Published
- 2015
16. Speculative Scientific Inference via Synergetic Combination of Probabilistic Logic and Evolutionary Pattern Recognition
- Author
-
Amen Belayneh, Matthew Iklé, Gino Yu, Meseret Dastaw, Nil Geisweiller, Eddie Monroe, Ben Goertzel, Selamawit Yilma, Misgana Bayetta, and Mike Duncan
- Subjects
Descriptive knowledge, Computer science, Probabilistic logic, Cognition, Cognitive architecture, Ontology (information science), Machine learning, Procedural knowledge, Pattern recognition (psychology), Artificial intelligence, Inference engine
- Abstract
The OpenCogPrime cognitive architecture is founded on a principle of "cognitive synergy": judicious combination of different cognitive algorithms, acting on different types of memory, in a way that helps overcome the combinatorial explosions each of the algorithms would suffer if used on its own. Here one manifestation of the cognitive synergy principle is explored: the use of probabilistic logical reasoning based on declarative knowledge to generalize procedural knowledge gained by evolutionary program learning. The use of this synergy is illustrated via an example drawn from a practical application of the OpenCog system to the analysis of gene expression data, wherein the MOSES program learning algorithm is used to recognize data patterns and the PLN inference engine is used to generalize these patterns via cross-referencing them with a biological ontology. This is a case study of both automated scientific inference, and synergetic cognitive processing.
- Published
- 2015
17. Guiding Probabilistic Logical Inference with Nonlinear Dynamical Attention Allocation
- Author
-
Amen Belayneh, Matthew Iklé, Cosmo Harrigan, Ben Goertzel, and Gino Yu
- Subjects
Nonlinear system, Theoretical computer science, Markov chain, Computer science, Probabilistic logic, Cognition, Artificial intelligence, Markov logic network
- Abstract
In order to explore the practical manifestations of the "cognitive synergy" between the PLN (Probabilistic Logic Networks) and ECAN (Economic Attention Network) components of the OpenCog AGI architecture, we examine the behavior of PLN and ECAN operating together on two standard test problems commonly used with Markov Logic Networks (MLN). Our preliminary results suggest that, while PLN can address these problems adequately, ECAN offers little added value for the problems in their standard form. However, we outline modified versions of the problems that we hypothesize would demonstrate the value of ECAN more effectively, via inclusion of confounding information that needs to be heuristically sifted through.
- Published
- 2014
18. A Cognitive API and Its Application to AGI Intelligence Assessment
- Author
-
Ben Goertzel and Gino Yu
- Subjects
Software, Theoretical computer science, Application programming interface, Computer science, Intelligence assessment, Cognition, Bridging (programming)
- Abstract
An Application Programming Interface for human-level AGI systems is proposed, aimed at bridging the gap between proto-AGI R&D systems and practical AI application development. The API contains simply formalized queries corresponding to the various key aspects of human-like intelligence, organized so as to be independent of the algorithms used under the hood for query resolution and associated supporting cognitive processes. A novel, qualitative (and in principle quantifiable) measure of software general intelligence is proposed (the APIQ), measuring the degree to which a system succeeds at fulfilling the various API functions using a compact set of representations and algorithms.
- Published
- 2014
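The APIQ measure proposed in "A Cognitive API and Its Application to AGI Intelligence Assessment" is described qualitatively in the paper; one hypothetical way such a score might be computed is to credit coverage of the API's query types while discounting the number of distinct internal mechanisms used. All names and the formula below are illustrative assumptions, not the paper's definition:

```python
def apiq(fulfilled: dict, mechanisms_used: set) -> float:
    """Hypothetical APIQ-style score: fraction of API query types the
    system fulfills, divided by how many distinct internal mechanisms
    it needed (rewarding compact, general designs)."""
    if not mechanisms_used:
        return 0.0
    coverage = sum(fulfilled.values()) / len(fulfilled)
    return coverage / len(mechanisms_used)

# A system answering 3 of 4 query types via one shared mechanism scores
# higher than one answering all 4 with four specialized mechanisms,
# matching the measure's intent of rewarding representational compactness.
narrow = apiq({"recall": True, "infer": True, "plan": True, "perceive": False},
              {"unified-graph"})
broad = apiq({"recall": True, "infer": True, "plan": True, "perceive": True},
             {"db", "prover", "planner", "cnn"})
```

The query names ("recall", "infer", etc.) stand in for the paper's human-like-intelligence API functions, which are defined independently of the algorithms resolving them.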
Discovery Service for Jio Institute Digital Library