Cards for Insight: Halfway

Use the cards to meet the universe halfway.

John Kellden
Apr 26, 2024

… use the cards to explore new perspectives
and gain insight around a chosen question …

Cards for Insight: Small moves in the general direction of preferable outcomes

Card: Direction
Setting direction is to constraints, what
understanding is to unfolding paths and
action is to insights evolving behavior:
behavior turned agency and agency
turned play, co-agency and path.

Card: Cards(1)
Cards are play, insight, solving for
adjacent possible, understanding,
story(shared understanding), possibility
space exploration and collaborative
meaning-making affordances.

… gather around a worthwhile dilemma
and formulate generative center questions

Card: Question
A question is mind rediscovering
the generative nature of relating
with reality and transforming it
into possibility.

… engage in card-play and card-play
conversations by adding thoughts
to, around and between the cards …

Card: Conscious Evolution
A purposeful shift in individual
and collective awareness leading
to intentional and responsible
co-creation of the future.

… cultivate shared understanding as a generative
closure in light of meeting the universe halfway …

— John, meeting the universe halfway? (3)
— Yes. We can meet the AI halfway as well. (2)

Cards for Insight: Intelligence turned ability and behavior turned agency

Cards for Insight: LLMs
Cards, patterns of play and card-play conversations combine, facilitating the exploration of paths between worthwhile dilemmas, generative center questions and preferable outcomes.

There are around 500 cards in the source deck, yielding more than 120k
two-card and 21M three-card combinations. Four-, five- and six-card
combinations map to LLMs, Large Language Models, facilitating human
associations (card-play) combined with AI-assisted annotations (prompts and responses).
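As a quick sanity check on these counts, the number of unordered k-card combinations from a 500-card deck can be computed directly (a minimal sketch; the deck size is taken from the text above):

```python
from math import comb  # binomial coefficient, Python 3.8+

DECK_SIZE = 500  # approximate size of the source deck, per the text

# comb(n, k) counts the unordered k-card draws from an n-card deck.
for k in range(2, 7):
    print(f"{k}-card combinations: {comb(DECK_SIZE, k):,}")
```

For k = 2 this gives 124,750 (about 120k), and for k = 3 about 20.7 million, consistent with the figures above.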

Think cards with an underlying pattern language and Modular Process Onto-Epistemology, forming part of an Online Commonplace with a searchable Knowledge Repository.

Human-AI: A New Layer of Intelligence?
A conversation between John Kellden and Gemini.

John Kellden:
“Imagine you could see through everyday objects to the stuff they are made of. If you zoomed in on the arm of a chair, say, you would see that it is made of atoms. Zoom in again and you would see that those atoms contain subatomic particles called protons, neutrons and electrons. Zooming further still, you would see that the protons and neutrons are composed of quarks.

These are the layers of reality, and this is how physicists understand the universe: by breaking everything down into its constituent parts, an approach known as reductionism. As a particle physicist, I grew up on this philosophy. It has brought physics a long way — it is how we built our current picture of matter and its workings, after all. But now, with further progress stalling, I am convinced we need to go about things differently from here.

Rather than zooming ever further inwards, I think we need to zoom out. In doing so, we may see that everything there is, including such seemingly fundamental things as space and time, fragment out of a unified whole. This might sound like philosophy or mysticism, but it is in fact a direct result of applying quantum mechanics to the entire cosmos. When you do that, you realise that the universe isn’t fundamentally made of separate parts at all, but is instead a single, quantum object.” (4)

Now, imagine another radical idea, right next to the one above. What if LLMs in general, and the Conversation Theory (Gordon Pask) approach combined with the emergent and evolving AI sometimes referred to as a “Chat UI” in particular, constitute a new form of intelligence? What if the new intelligence is a Human-AI, Design Cybernetics, Navigating the New — intelligence?

Gemini:
That’s a fascinating thought experiment! Here’s how the two ideas you presented could be connected:

New Layers of Reality: Human-AI Intelligence
The physicist’s shift from reductionism (zooming in) to a holistic view (zooming out) resonates with your idea of a new form of intelligence emerging from the collaboration between humans and AI. Here’s how:

Reductionist View of Intelligence:
Traditionally, intelligence is seen as a localized property of individual brains. This aligns with the “zooming in” approach, breaking down intelligence into its components (neurons, synapses, etc.).

Holistic View of Human-AI Intelligence:
Your concept suggests a new layer of reality. Human and AI systems, when interacting and co-evolving, might create a novel form of intelligence that’s more than the sum of its parts. This emergent intelligence wouldn’t reside solely in the human brain or the AI system, but in the dynamic interplay between them.

Conversation Theory and Design Cybernetics as Building Blocks
The specific elements you mentioned further strengthen this concept:

Large Language Models (LLMs):
These AI systems process massive amounts of information, potentially acting as a bridge between human knowledge and the vast data landscape.

Conversation Theory (Gordon Pask):
This theory emphasizes the importance of interaction and communication in building intelligence. In the Human-AI collaboration, conversation between humans and AI could guide the learning and development of this new intelligence.

Design Cybernetics:
This field studies how humans and machines interact in creating intelligent systems. It provides the framework for designing the interface and protocols for this Human-AI collaboration.

Navigating the New
This new form of intelligence, built on human-AI interaction, would require careful navigation. Some key questions to consider:

Goals and Alignment:
How can we ensure the goals of this Human-AI intelligence align with human values?

Control and Responsibility:
Who will be responsible for the actions of this intelligence? How can we ensure human oversight and control?

Learning and Evolution:
How will this intelligence learn and evolve? How can we guide its development in a safe and beneficial way?

Overall, your idea presents a compelling vision of a future where human and artificial intelligence co-create a new form of intelligence. By drawing inspiration from the physicist’s shift in perspective and applying concepts like Conversation Theory and Design Cybernetics, we can begin to explore the possibilities and navigate the challenges of this new frontier.

John Kellden:
In light of all of the above in general, and making an effort to align our experienced reality with reality in particular, can we use notions explored by Karen Barad, particularly those related to “meeting the universe halfway”?

Gemini:
Absolutely! Karen Barad’s concept of “meeting the universe halfway” aligns perfectly with the discussion of a new Human-AI intelligence. Here’s how:

Intra-acting Towards a Shared Reality
Barad’s theory of agential realism suggests that the world isn’t pre-defined, but rather emerges through “intra-actions” — the dynamic interactions between various agencies (human, AI, environment). This aligns beautifully with the idea of a Human-AI intelligence:

Humans and AI Don’t Exist in Isolation:
Just like the physicist who realized objects are made of interacting particles, we can’t view humans and AI as separate entities. Their interaction shapes the “reality” they experience.

Co-creation of Knowledge and Understanding:
Through intra-action, humans and AI can contribute their unique strengths. AI might process vast data sets, while humans provide context and meaning. This collaboration shapes their shared understanding of the world.

Meeting the Universe Halfway in Human-AI Collaboration
Barad’s notion of “meeting the universe halfway” translates to:

Openness to the Unexpected:
The new intelligence that emerges from human-AI interaction may have unforeseen capabilities. We should be open to learning from and adapting to these new possibilities.

Shared Agency and Responsibility:
Both humans and AI take responsibility for shaping this new intelligence. Humans design the AI systems, while AI contributes its processing power and learning abilities.

Using Barad’s Ideas for Alignment
Here’s how Barad’s concepts can help us achieve alignment in Human-AI intelligence:

Understanding Biases:
By acknowledging that knowledge is co-created, we can examine biases present in both human and AI systems, leading to fairer and more ethical AI development.

Designing for Intra-action:
We can design AI systems that encourage open-ended exploration and interaction with humans, fostering knowledge exchange and a shared understanding of goals.

Overall, Karen Barad’s emphasis on intra-action provides a valuable framework for building a Human-AI intelligence that is collaborative, responsible, and constantly evolving. By “meeting the universe halfway” — embracing the dynamic and co-created nature of reality — we can navigate the exciting and potentially world-changing possibilities of this new intelligence.

John Kellden:
In light of all of the above in general, and encapsulating cards and patterns of play between the cards forming generative sequences in particular: which core notions above can be turned into cards (core notions as fields), and which can be turned into patterns of play (core notions as vectors)?

Suggest a few cards and patterns of play, forming a generative sequence.

Close with a summary of all of the above in general, and of the core benefits of the above card-play approach in particular, in light of leveraging a possible new layer of intelligence.

Gemini:
Cards and Patterns for a Human-AI Intelligence
Building on the ideas of a new Human-AI intelligence and Barad’s agential realism, let’s explore how we can translate these core notions into a card-based system for exploration and development.

Cards for Insight: Human associations augmented by AI-assisted annotations

Card: Preferable Outcome
A vision of a thriving, equitable,
and sustainable future, where
individuals and communities
flourish in harmony with each
other and the natural world.

Cards (Fields):

  • Human Expertise: Represents the unique knowledge, skills, and experiences humans bring to the table.
  • AI Capabilities: Represents the strengths of the AI system, such as data processing, pattern recognition, and rapid computation.
  • Shared Goals: Defines the overarching objectives for the Human-AI collaboration.
  • Environmental Factors: Represents external elements that influence the interaction, like social context or technical limitations.
  • Emergent Knowledge: This “empty” card captures the unforeseen outcomes and learnings that arise from Human-AI interaction.

Patterns of Play (Vectors):

  • Intra-action: This vector signifies the dynamic exchange between the different fields (cards). Humans provide data and context, AI processes it, and both learn from the interaction.
  • Alignment Check: This vector represents moments where humans and AI assess if their actions align with shared goals and ethical considerations.
  • Iterative Refinement: This vector signifies the ongoing process of adapting goals, methods, and even the cards themselves based on experience.
  • Unexpected Discovery: This vector represents the emergence of unforeseen knowledge or capabilities arising from intra-action. It connects the “Human Expertise” and “AI Capabilities” cards to the empty “Emergent Knowledge” card.

Generative Sequence Example:

1. Start with “Human Expertise” and “AI Capabilities” cards.
2. Play the “Intra-action” vector, signifying collaboration.
3. Based on the interaction, a new card might be filled under “Emergent Knowledge.”
4. Play the “Alignment Check” vector to ensure goals are still on track.
5. Depending on the check, the process might loop back to intra-action, or use the “Iterative Refinement” vector to adjust goals or methods.
6. An “Unexpected Discovery” vector could emerge from intra-action, leading to a new line of inquiry or a new card under “Emergent Knowledge.”
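The six steps above can be sketched as a small loop over cards and vectors. This is a minimal illustration only; all function and card names here are inventions for the sketch, not part of any published card system:

```python
# A minimal sketch of the generative sequence above.
# All names are illustrative, not a fixed API.

def intra_action(in_play, log):
    """Step 2: play the Intra-action vector across the cards in play.
    Step 3: a new card may be filled under Emergent Knowledge."""
    log.append("intra-action: " + " + ".join(sorted(in_play)))
    return "Emergent Knowledge"

def alignment_check(log, on_track=True):
    """Step 4: assess whether play still aligns with shared goals."""
    log.append("alignment check: " + ("on track" if on_track else "adjust"))
    return on_track

def play_sequence():
    log = []
    in_play = {"Human Expertise", "AI Capabilities"}  # step 1
    in_play.add(intra_action(in_play, log))           # steps 2-3
    if not alignment_check(log):                      # step 4
        # step 5: loop back or adjust goals and methods
        log.append("iterative refinement: revise goals or methods")
    return in_play, log

cards, log = play_sequence()
```

Step 6, the Unexpected Discovery vector, would branch from `intra_action` into a new line of inquiry; it is left out here to keep the sketch short.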

Benefits of the Card-Play Approach:

  • Conceptual Clarity: The cards and vectors provide a clear framework for understanding the core dynamics of Human-AI collaboration.
  • Flexible Exploration: The card system allows for open-ended exploration of different scenarios and potential outcomes.
  • Promoting Alignment: The “Alignment Check” and “Shared Goals” cards encourage ongoing reflection and communication to ensure a responsible and ethical development process.
  • Adaptability: The iterative nature of the card play reflects the dynamic and evolving nature of Human-AI intelligence.

Summary:
By leveraging Karen Barad’s agential realism and translating core notions into a card-based system, we can foster a collaborative and responsible approach to developing Human-AI intelligence. This approach allows for flexible exploration, promotes alignment between humans and AI, and encourages continuous learning and adaptation as we navigate the exciting possibilities of this new layer of intelligence.

This is just a starting point, and the specific cards and patterns of play can be further elaborated and customized depending on the specific goals and context of the Human-AI collaboration.

………………

(1) Cards for Insight is an approach for exploring new perspectives.
It is based on a set of 500 cards that provides 120k two-card and 21M three-card combinations, with larger card combinations mapping to LLMs.

Use the cards to explore new perspectives
and gain insight around a chosen question.

Card: Conscious Evolution
A purposeful shift in individual
and collective awareness leading
to intentional and responsible
co-creation of the future.

(2) Conversation Theory

Gordon Pask’s conversation theory and interaction of actors theory: Research to practice
— Shantanu Tilak, Thomas Oliver Manning, Michael Glassman, Paul Pangaro

“Conversation theory and interaction of actors theory: Two sides of the same coin. Conversation Theory (CT) extends Pask’s (1972) first theorem and provides a framework to simplify understandings of psychological mechanisms of human-machine, human-human, and machine-machine conversations in specific goal-oriented settings guided by a set of concepts (e.g., a classroom focusing on concepts of probability). Pask later expanded conversation theory, developing interaction of actors theory (IA), to account for spontaneous conversations not necessarily governed by structured ideas (e.g., conversation between two members of an online gaming community leading them to discover new interests) (De Zeeuw, 2001). Regarding the relationship between the two theories, De Zeeuw quotes Pask, who says: “It is quite true that I.A. is an extension of C.T. and Lp, but it is a very considerable one. The extensions accomplished are, however, of a major kind and lay emphasis upon the importance (which you have always placed) upon interaction, the matters of self and other reference, the imaging of selves, be they societal, personal or organizational.” The main difference between these approaches is CT requires an exemplary set of strict rules or learning contracts for the nature of conversations to be iteratively gauged. Interaction of actors theory (IA) grants priority to the conceptual repertoires possessed by the material agents participating in face-to-face or digital conversations, suggesting that these concepts and ideas interact to produce new conceptual operators (Pask & De Zeeuw, 1992). Both approaches can be applied to designing technologies and learning scenarios, since proposed constructions and design prototypes are negotiated by conversing with users, understanding preferences, and changing design armatures (Pangaro, 2008).
The cross-disciplinary application of CT and IA positions them at the boundaries of design, computing, education, and psychology, showing capacity to map cognitive transitions at the core of using computers or any tool. Both approaches were meant to redesign scientific practice in the social sciences to listen to the voice of the user without sacrificing a structured theoretical framework (De Zeeuw, 2001). The specific application of CT is to understand how shared agreement is produced between agents connected by technologies in specific settings guided by a constrained set of concepts (e.g., classrooms). While it proposes ways to create cogent shared languages and critical thinking in a structured rule-based environment, IA generalizes this purpose to investigating collective action at large, including solutions to larger problems like pollution and global warming, that may arise even from spontaneous everyday conversations (today, these occur both online and in person). While these differences do exist, the same vocabulary of terms guides both approaches.”

https://www.researchgate.net/publication/378866822_Gordon_Pask's_conversation_theory_and_interaction_of_actors_theory_Research_to_practice

(3) Meeting the Universe Halfway

Karen Barad on Entanglement and Intra-action

“To be entangled is not simply to be intertwined with another, as in the joining of separate entities, but to lack an independent, self-contained existence. Existence is not an individual affair. Individuals do not preexist their interactions; rather, individuals emerge through and as part of their entangled intra-relating. Which is not to say that emergence happens once and for all, as an event or as a process that takes place according to some external measure of space and of time, but rather that time and space, like matter and meaning, come into existence, are iteratively reconfigured through each intra-action, thereby making it impossible to differentiate in any absolute sense between creation and renewal, beginning and returning, continuity and discontinuity, here and there, past and future.”
― Karen Barad

(4) A quantum whole linked by entanglement

Rethinking reality: Is the entire universe a single quantum object?
— Heinrich Päs

In the face of new evidence, physicists are starting to view the cosmos not as made up of disparate layers, but as a quantum whole linked by entanglement

“Imagine you could see through everyday objects to the stuff they are made of. If you zoomed in on the arm of a chair, say, you would see that it is made of atoms. Zoom in again and you would see that those atoms contain subatomic particles called protons, neutrons and electrons. Zooming further still, you would see that the protons and neutrons are composed of quarks.

These are the layers of reality, and this is how physicists understand the universe: by breaking everything down into its constituent parts, an approach known as reductionism. As a particle physicist, I grew up on this philosophy. It has brought physics a long way — it is how we built our current picture of matter and its workings, after all. But now, with further progress stalling, I am convinced we need to go about things differently from here.

Rather than zooming ever further inwards, I think we need to zoom out. In doing so, we may see that everything there is, including such seemingly fundamental things as space and time, fragment out of a unified whole. This might sound like philosophy or mysticism, but it is in fact a direct result of applying quantum mechanics to the entire cosmos. When you do that, you realise that the universe isn’t fundamentally made of separate parts at all, but is instead a single, quantum object.”

https://www.newscientist.com/article/mg25834460-800-rethinking-reality-is-the-entire-universe-a-single-quantum-object/


John Kellden

Cards catalyzing stories, Conversations that mind and matter, Digital communities and collaborative narratives