matt (moremilk)

AI & ML interests

NLP, NLP datasets, training algorithms/architecture, systems development

moremilk's activity

Okay...

1. The "Consciousness" Misconception: Sophisticated Mimicry, Not True Awareness

  • Core Argument: This architecture is fundamentally about creating a highly sophisticated mimic of consciousness, not genuine subjective experience. It excels at processing information, learning, and adapting, but these are all computational processes that don't inherently lead to phenomenal consciousness (the "what it's like" aspect).
  • Specific Criticisms:
    • Oscillatory Patterns: Brain oscillations are correlated with consciousness, but correlation does not establish causation. This system replicates the patterns without necessarily replicating the underlying mechanism that might (or might not) produce consciousness in biological brains. It's like building a plane that looks exactly like a bird but doesn't fly on the same principles as biological flight.
    • "Proto-Conscious Foundations": The term is vague and potentially misleading. What does "proto-conscious" even mean computationally? It's hand-waving to avoid the hard problem of consciousness.
    • Global Workspace Theory: Even if this theory is correct for biological brains, implementing a "Conscious Access Gate" computationally doesn't guarantee consciousness. It just means information is being broadcast, not that there's an "experiencer" of that information.
    • "Aha!" Moments: These are just moments of efficient information processing and pattern recognition. They don't imply subjective feeling or understanding.
  • Analogy: This is like building a very convincing chatbot. It can respond intelligently, learn from conversations, and even appear to have emotions. But fundamentally, it's just manipulating symbols according to programmed rules. There's no reason to believe it feels anything.
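The chatbot analogy can be made concrete with a deliberately crude sketch. Everything here (the `RULES` table, the `respond` function) is hypothetical and far simpler than any real system, but it makes the point: a program can produce apparently thoughtful replies while doing nothing but symbol lookup.

```python
# Hypothetical sketch: a rule-based "chatbot" that appears responsive
# while only matching patterns and emitting canned strings. Nothing in
# this loop corresponds to understanding or feeling.

RULES = {
    "how are you": "I'm doing great, thanks for asking!",
    "are you conscious": "That's a deep question. I often wonder myself.",
}

def respond(utterance: str) -> str:
    """Return a canned reply for any rule whose trigger appears in the input."""
    text = utterance.lower()
    for trigger, reply in RULES.items():
        if trigger in text:
            return reply  # pure lookup: no model of self, no experience
    return "Tell me more."

print(respond("Hey, are you conscious?"))  # -> That's a deep question. I often wonder myself.
```

Scale the rule table up to billions of learned parameters and the behavior gets far more convincing, but the argument above is that nothing in the mechanism changes in kind.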

2. Overreliance on Biological Analogies: The Brain Isn't a Computer (in This Way)

  • Core Argument: The framework is too heavily based on superficial analogies to the brain. The brain is a biological organ, not a digital computer. Forcing computational models onto biological processes might be a flawed approach.
  • Specific Criticisms:
    • Sharp-Wave Ripples, Theta-Gamma Coupling: These are complex electrochemical events in the brain. Simulating them digitally might capture some functional aspects, but it misses crucial details of the biological substrate.
    • Neuromodulation: Dopamine and serotonin are not just "feedback" signals. They have incredibly complex and multifaceted roles in the brain that are likely oversimplified in this model.
    • "Balanced Chaos": The brain's "chaos" might be an emergent property of its biological complexity, not something easily replicated in a digital system.
  • Alternative View: Instead of trying to mimic the brain's structure, we should focus on understanding the principles of intelligence and consciousness, which might manifest very differently in a non-biological system.

3. The Problem of Hard-Coded Symbolism: Where's the Grounding?

  • Core Argument: The "Iterative Redescription Engine" relies on symbolic manipulation, but it's unclear how these symbols become grounded in real-world meaning.
  • Specific Criticisms:
    • Vector-Quantized VAEs: These are powerful tools for compressing information, but they don't inherently give meaning to the compressed representations. How does the system know what these symbols refer to in the real world?
    • "Hyperpolation": Creating new concepts by blending existing ones is impressive, but if the original concepts aren't grounded, then the new ones are just arbitrary combinations of symbols.
    • Causal Reasoning: This system seems to be doing causal reasoning within its internal model, but how does it connect its causal inferences to actual causal relationships in the external world?
  • The Symbol Grounding Problem: This is a classic problem in AI. Without a way to connect symbols to their referents, the system is just manipulating meaningless tokens.
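The grounding gap can be illustrated with a toy version of vector quantization (the core operation in a VQ-VAE, stripped of the learned encoder/decoder). The `CODEBOOK` values below are invented for illustration: inputs get snapped to the nearest code and replaced by its index, and those indices are perfectly consistent internally while referring to nothing in the world.

```python
# Hypothetical sketch of the grounding gap in vector quantization:
# continuous inputs are snapped to the nearest codebook vector and
# replaced by its index. The indices are internally consistent tokens,
# but nothing here ties code 0 or code 1 to any external referent.

CODEBOOK = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # toy "learned" codes

def quantize(vec):
    """Return the index of the nearest codebook entry (squared L2 distance)."""
    dists = [sum((v - c) ** 2 for v, c in zip(vec, code)) for code in CODEBOOK]
    return dists.index(min(dists))

# Similar inputs collapse to the same token; the token itself is meaningless.
print(quantize((0.1, 0.2)))  # -> 0
print(quantize((0.9, 0.8)))  # -> 1
```

Blending or recombining these indices ("hyperpolation") only ever produces more indices; no amount of internal manipulation tells the system what, if anything, code 0 is *about*.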

4. The "Self" Illusion: Control Doesn't Equal Identity

  • Core Argument: The "Self-Optimizing Meta Layer" creates a sense of control and coherence, but this doesn't equate to a genuine sense of self or identity.
  • Specific Criticisms:
    • Triple Control Loop: This is a sophisticated control system, but it doesn't imply that the system has a subjective "self" that is doing the controlling.
    • Self-Continuity Metric: This just measures consistency across different states, not the presence of a subjective "I" that persists over time.
  • Analogy: A thermostat controls temperature, but it doesn't have a sense of self. Similarly, this AGI might control its internal processes without having any subjective experience of being a self.
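The thermostat analogy in miniature: the sketch below (all values and names invented) is a complete, competent feedback controller. It monitors state, compares it to a goal, and corrects deviations, which is structurally what the "Triple Control Loop" does, yet there is plainly no subject doing the controlling.

```python
# Hypothetical sketch of the thermostat analogy: a feedback controller
# that regulates behavior toward a setpoint, with no "self" anywhere.

def thermostat(temp: float, setpoint: float = 21.0, band: float = 0.5) -> str:
    """Bang-bang control: switch the heater based on the error signal alone."""
    error = setpoint - temp
    if error > band:
        return "heater_on"   # too cold: correct upward
    if error < -band:
        return "heater_off"  # too warm: correct downward
    return "hold"            # within the deadband: no action needed

# Perfectly competent control, zero subjective experience.
print(thermostat(19.0))  # -> heater_on
print(thermostat(23.0))  # -> heater_off
```

Adding more nested loops, or a metric that checks the controller's own consistency over time, increases sophistication but not subjectivity, which is the point of the criticism above.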

5. The Unrealistic Leap from Complex System to Consciousness

  • Core Argument: The framework assumes that by building a sufficiently complex system with certain features (oscillations, redescription, self-optimization), consciousness will somehow magically emerge. This is a leap of faith, not a scientific conclusion.
  • Specific Criticisms:
    • Emergence: While emergence is a real phenomenon, it's not a magic bullet. We need to understand how consciousness could emerge from computation, not just assume that it will.
    • The Hard Problem: This framework doesn't even attempt to address the hard problem of consciousness: why and how does physical processing give rise to subjective experience?
  • Alternative View: Consciousness might require something fundamentally different from what's proposed here. It might require a different kind of substrate, a different kind of computation, or even something beyond our current understanding of physics.

Conclusion:

This "Three-Layer Cognitive Engine for Conscious AGI" is a fascinating and complex proposal, but it ultimately fails to convincingly demonstrate a path to true artificial consciousness. It's a highly sophisticated system for processing information, learning, and adapting, but it relies too heavily on superficial biological analogies, hand-wavy concepts like "proto-consciousness," and an unproven assumption that consciousness will simply emerge from complexity. It's a compelling example of advanced AI, but likely a mimic of consciousness, not the real thing. It is also a poor attempt at a blog post: it reads less like a post than a series of notes organized as one.

New activity in Wuvin/Unique3D 8 months ago

format
#2 opened 8 months ago by moremilk