Volume 18, Issue 1, 2020


A Roadmap for Artificial General Intelligence: Intelligence, Knowledge, and Consciousness
Garrett Mindt, Tiny Blue Dot Foundation, Wisconsin Institute for Sleep and Consciousness, University of Wisconsin at Madison, USA, and Carlos Montemayor, Department of Philosophy, San Francisco State University, USA

AI research has received a significant increase in attention this past decade, in large part due to some of the impressive feats of machine learning, particularly deep learning. This has generated a degree of hype about the ability of AI to tackle various problems. The aim of the current essay is to examine two speculative questions: "what would it mean for a system to transition from merely intelligently executing a task to knowledgeably executing a task?" and "what is the relation between consciousness and intelligence, such that specific evaluations about conscious AI might be made?" We offer what we hope is a roadmap for navigating these questions and a route forward in understanding how AI goes from intelligence to knowledge and potentially to some types of consciousness.


Intelligence and Understanding - Limits of Artificial Intelligence
Markus Gabriel, Department of Philosophy, University of Bonn, Germany

Much contemporary artificial intelligence (AI) research neglects to investigate the nature of its key object of study: intelligence. This paper seeks to compensate for this neglect by offering an ontology of intelligence so as to determine whether an artificial system can truly be described as intelligent. AI research frequently operates with an "efficiency" sense of intelligence, which associates this property with a system's problem-solving capacity. By invoking the thesis of biological externalism, according to which our mentalistic vocabulary is essentially tied to picking out behaviors of living creatures, I argue that ascriptions of mental properties to non-living systems are categorially inappropriate, given that relating to a problem space at all necessarily has biological parameters. I explain AIs as thought-models, which need not therefore be understood as thinking models: it makes no sense to attribute intelligence to such models outside the context of our intelligent use of them for our own problem-solving ends. Finally, I maintain that thought itself is a sense modality, which is bound to inherently contextual forms of understanding. As yet, there is no reason to think that we can substitute any alternative for the "lifeworld" such understanding inhabits, let alone anything digital.


The Ontological Impossibility of Digital Consciousness
Riccardo Manzotti, Department of Philosophy, IULM University, Milan, Italy, and Gregory Owcarz, Department of Philosophy, Syracuse University Strasbourg, France

In the field of consciousness studies, a recurrent approach has consisted in explaining consciousness as an emergent property of information or as a special kind of information. The idea is that the central nervous system processes information and that, under the right circumstances, information is responsible for the emergence of phenomenal experience. Many consider information to be more akin to the mental than to its raw physical underpinnings. If information had such ontological status, it would be conceivable to realize consciousness in digital systems, either by creating artificial consciousness, or by uploading and preserving human consciousness, or both. Unfortunately, this is not a viable possibility, since information so construed simply does not exist and thus can neither be a case of consciousness nor be its underpinnings. In this paper we will show that information is only an epistemic shortcut for referring to joint probabilities between states of affairs among physical events. If information as an entity beyond those relations is not part of our ontology, then digital consciousness is impossible.


Artificial Selves
Andrew Bailey, Department of Philosophy, University of Guelph, Canada

Under what circumstances might AI systems have moral standing: when might they have rights or other morally relevant attributes that will constrain how we should treat them? Current approaches to this question assume either that AIs will have a special (dilute) form of moral standing that does not resemble human rights, or that they will acquire moral rights resembling those of human beings only after they pass an ill-defined and technically difficult watershed, such as the acquisition of phenomenal consciousness. This paper argues that there is another, more tractable, standard, according to which AI systems will arrive at moral standing, unambiguously and quite soon: this will happen when they satisfy the criteria for selfhood, as these criteria are applied to human beings. I consider the four main theories of personal identity - psychological continuity, bodily theories, narrative/hermeneutical theories, and non-identity theories - and show that plausibly any one of these will apply to near-future AIs. I further argue that constituting a self ipso facto bestows some form of moral standing, and propose a research program for understanding the consequences for how we morally should treat near-future AI systems.


Virtual Self and Digital Depersonalization: Between Existential Dasein and Digital Design
Elena Bezzubova, School of Medicine, University of California at Irvine, USA

This paper explores the phenomenon of virtuality and the digital shift in self and self-consciousness through the prism of clinical phenomenology. The phenomenon of virtuality is examined through the mirroring complementarity of the experiences of cyber-generated virtual reality and the experiences of brain-generated depersonalization. Two properties of virtuality - the "as-if" quality and uncanniness - are characterized. The notion of the Virtual Self is proposed as a framework for understanding the self in a digital world. The notion of digital depersonalization is introduced as a framework for understanding self-consciousness in a world that is extended by virtuality. The notion of digital ontological depersonalization is discussed from the perspectives of existential Dasein and digital design.

Last revision: 16 June 2020