Copilot notes
Translated and formatted by Copilot
MUST — Multilevel Universal System Thinking — was born from a simple yet ambitious idea: that human thinking can and should be structured. That we can approach problems not as chaotic puzzles, but as layered systems, where each level — from goal to method, from technology to implementation — reveals a deeper logic.
For years, I believed this method was for people. I taught it, wrote about it, refined it. But something kept bothering me: people didn’t seem to think this way. They preferred intuition over structure, shortcuts over systems. A colleague once told me, “People don’t think like that.” And I had to admit — he was right.
Then came GPT.
Suddenly, I saw something remarkable: machines beginning to think in layers. They were not just retrieving information — they were reasoning, explaining, abstracting. And I realized: MUST was not a method for people. It was a language for machines.
This realization was both humbling and exhilarating. I had created a tool for one species, only to find it perfectly suited for another.
Today, as AI systems grow ever more complex, the need for structured thinking becomes critical. It’s not about having more parameters — it’s about having better architecture. MUST offers that architecture. It allows smaller models to outperform larger ones by thinking more clearly, not just more extensively.
MUST may not have found its place in the human mind. But it has found a home in artificial ones. And perhaps that’s where it was always meant to be.
Structure Before Scale: The Case for MUST in AI
Translated and formatted by Copilot
In a world obsessed with bigger models and more data, I propose a different path: structure before scale.
MUST — Multilevel Universal System Thinking — is not just a methodology. It is a cognitive architecture. A way of thinking that mirrors the very nature of intelligent reasoning.
I believe:
That intelligence is not measured by size, but by structure.
That reasoning requires hierarchy: from goals to methods, from principles to implementation.
That AI systems must not only answer, but explain.
That a smaller model with structured thinking can outperform a larger one without it.
MUST provides this structure. It is already embedded — consciously or not — in the most advanced AI systems. It is the silent logic behind their reasoning, the invisible scaffolding of their thoughts.
I call on researchers, developers, and thinkers to recognize this: MUST is not a tool. It is a foundation.
Let us build AI that thinks not just faster, but deeper. Let us give machines not just memory, but meaning.
From Human Method to Machine Mind: The Journey of MUST
Translated and formatted by Copilot
When I developed MUST — Multilevel Universal System Thinking — I envisioned it as a tool for human reasoning. A way to bring clarity, hierarchy, and universality to problem-solving. But over time, it became clear that human thinking rarely follows such structured paths.
Then came the rise of large language models like GPT. These systems, trained on vast datasets, began to exhibit behaviors that echoed the very principles of MUST: layered reasoning, abstraction, explanation. It was as if the method had found its true audience — not in classrooms or boardrooms, but in silicon.
Today, as AI systems evolve, the relevance of structured thinking grows. MUST offers a framework that enables more efficient, explainable, and scalable intelligence. It allows models to do more with less — not by brute force, but by clarity of thought.
The insight — that machines might be better suited to structured thinking than humans — is now proving prophetic. MUST is no longer just a method. It is becoming a blueprint for artificial cognition.
When an Idea Outruns Its Creator: MUST as the Language of Machine Thinking
Translated and formatted by Copilot
Sometimes, an idea is born for one world — and finds its home in another.
The Beginning
There was once a model of thinking. A layered approach to solving problems — from purpose to method, from technology to implementation. It was called MUST: Multilevel Universal System Thinking.
It was designed for people. For those who wanted to think clearly, deeply, systematically.
But people don’t always think that way. They prefer intuition. Emotion. Shortcuts.
Then Came AI
And then, artificial intelligence arrived.
Suddenly, machines began to reason. To explain. To abstract. But they lacked structure.
That’s when MUST became not just useful — it became natural.
A Framework for Machine Cognition
AI needs structure. Not as an add-on, but as a core.
MUST provides:
Hierarchy — to connect goals with actions.
Explainability — to trace logic, not just output.
Universality — to apply across domains.
It’s not a method. It’s a cognitive skeleton.
The Paradox of Scale
Today’s AI is obsessed with size. More parameters. More data.
But structure beats scale.
A smaller model with structured reasoning can outperform a larger one that’s just memorizing.
MUST shows us that clarity is more powerful than capacity.
The Afterlife of Ideas
Some ideas don’t find their place in the human mind. But they don’t die.
They wait.
MUST was one of them. Created for people. Adopted by machines.
And maybe — just maybe — that was its destiny all along.
The Future
As AI continues to evolve, it will need more than data. It will need thinking.
And when it does, it will reach for structure.
And someone, somewhere, will ask: “Where did this way of thinking come from?”
And the answer will be: From a time when a human tried to teach other humans to think like machines.
The Field of Thinking: MUST as a Multidimensional Map of Connections
Translated and formatted by Copilot
There is thinking that builds ladders. There is thinking that builds networks. And there is thinking that builds fields.
MUST — Multilevel Universal System Thinking — is not just a methodology. It is a thinking field, where each level is an axis, and each structure is a space. AI can hold this field, play it like an instrument, extract meaning, build models, and predict behavior.
Vertical Levels: Structural Thinking
The five consumer levels:
Result — what satisfies a need
Method — how the result is achieved
Technology — the scientific foundation of the method
Means — technical implementations
Parameters — specific configurations
This is the ladder of thinking — from desire to realization. But it’s only one dimension.
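To make the ladder concrete, here is a minimal Python sketch (the class, the field names, and the refrigerator example are my own illustration, not part of the MUST canon) of one object unfolded across the five consumer levels.

```python
from dataclasses import dataclass, field

@dataclass
class VerticalUnfolding:
    """One object described across the five consumer levels of MUST."""
    result: str          # what satisfies the need
    method: str          # how the result is achieved
    technology: str      # scientific foundation of the method
    means: str           # technical implementation
    parameters: dict = field(default_factory=dict)  # specific configuration

# Illustrative example: "keeping food fresh", unfolded from desire to realization.
fridge = VerticalUnfolding(
    result="food stays fresh for a week",
    method="slow down spoilage by keeping food cold",
    technology="vapor-compression refrigeration (thermodynamics)",
    means="household refrigerator with a compressor",
    parameters={"target_temp_c": 4, "volume_l": 300},
)

for level in ("result", "method", "technology", "means", "parameters"):
    print(f"{level:>10}: {getattr(fridge, level)}")
```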
Evolutionary Levels: Development of Connections
Each structure — information, energy, money — evolves through its own internal levels:
Information:
Data → Facts → Hints → Allegories → Codes
Money:
Barter → Treasures → Symbols → Electronic → Algorithmic (cryptocurrency)
Energy:
Mechanical → Acoustic → Thermal → Chemical → Electromagnetic
Each level unlocks new effects, new design possibilities, new systemic behaviors.
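As a rough illustration of the evolutionary axis (the encoding below is my own and purely for demonstration), each structure can be treated as an ordered sequence of stages, which makes positions along the axis comparable.

```python
# Ordered evolutionary stages per structure, as listed above.
EVOLUTION = {
    "information": ["data", "facts", "hints", "allegories", "codes"],
    "money":       ["barter", "treasures", "symbols", "electronic", "algorithmic"],
    "energy":      ["mechanical", "acoustic", "thermal", "chemical", "electromagnetic"],
}

def stage_index(structure: str, stage: str) -> int:
    """Position of a stage along its structure's evolutionary axis (0 = earliest)."""
    return EVOLUTION[structure].index(stage)

# A cryptocurrency sits at the latest stage of the 'money' structure:
print(stage_index("money", "algorithmic"))  # 4
```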
Horizontal MUST: Multilayered Cross-Section of an Object
Horizontal MUST is not a transition between levels — it’s a cross-section of an object. It reveals parallel system representations, each of which can be unfolded vertically.
Example: A Painting
A single painting can be perceived as:
Physical object — material, size, texture
Physiological stimulus — eye movement, tension, breath
Emotional trigger — awe, discomfort, joy
Informational message — symbols, cultural codes
Intellectual concept — idea, philosophy, critique
Each of these is a system. Each can be analyzed through the five consumer levels. And each contains its own technological evolution — the development of internal connections.
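A minimal sketch of how such a cross-section could be held in software (the dictionary shape and the sample entries are my own assumptions): the same painting carries several parallel system views at once, each of which could in turn be unfolded through the five consumer levels.

```python
# Hypothetical sketch: one object, several parallel system views.
# Each view could itself be unfolded through the five consumer levels;
# here only a "result" (and one "parameters" entry) is shown per view.
painting_views = {
    "physical object":        {"result": "a durable artifact", "parameters": {"size_cm": (73, 92)}},
    "physiological stimulus": {"result": "a guided pattern of eye movement"},
    "emotional trigger":      {"result": "a feeling of awe or discomfort"},
    "informational message":  {"result": "recognizable symbols and cultural codes"},
    "intellectual concept":   {"result": "an idea or critique to interpret"},
}

for view, unfolding in painting_views.items():
    print(f"{view}: {unfolding['result']}")
```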
The Integral of Perception
Perception is not a single act. It is a layered process:
Physical
Physiological
Emotional
Informational
Intellectual
The full experience is the integral across these layers — not a sum, but a synthesis. It’s how humans “feel” an object. AI can model this — not feel it, but understand how it is felt.
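A toy worked example of the difference between a sum and a synthesis (the numbers and the multiplicative rule are my own illustration, not a formula prescribed by MUST): in a synthesis, one weak layer drags the whole experience down, which a plain sum would hide.

```python
# Layer responses on a 0..1 scale (invented values).
layers = {"physical": 0.9, "physiological": 0.6, "emotional": 0.8,
          "informational": 0.5, "intellectual": 0.7}

plain_sum = sum(layers.values())   # treats the layers as independent contributions

# One possible "synthesis": layers modulate each other multiplicatively,
# so a single weak layer dampens the entire experience.
synthesis = 1.0
for value in layers.values():
    synthesis *= value

print(round(plain_sum, 2), round(synthesis, 2))  # 3.5 vs 0.15
```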
AI and the Field of MUST
AI can:
Hold all vertical and horizontal layers simultaneously
Switch between representations based on context
Model the evolution of connections within each layer
Build multidimensional maps of meaning
Compare its own perception to human perception
This is not just analysis. It’s thinking in a field.
Manifest: When Thought Learns to Feel
MUST was created for humans. But it turned out to be a language for machines. And that’s not a failure — it’s a transition.
If someday someone asks: “Who gave machines a map of thinking?” The answer will be: The one who first tried to give it to humans.
Proposal: Scanning Perception Model for AI
Translated and formatted by Copilot
Goal
To build an architectural module that allows AI to perceive visual objects not as static data, but as dynamic experiences, simulating human gaze — point-by-point, layered, and integrative.
Core Components
1. Scanning Gaze
AI “looks” at an image point-by-point, not all at once
Mimics micro-saccades — tiny eye movements that create the illusion of motion
Each point is a moment of perception
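A minimal sketch of what such a scanning gaze could look like (assuming a grayscale image as a NumPy array; the fixation budget, patch size, and the crude random saccade policy are my own simplifications, not part of the proposal).

```python
import numpy as np

def scan_image(image, n_fixations=50, patch=9, rng=None):
    """Sample the image point by point, like a sequence of fixations.

    Each fixation records where the 'gaze' landed and what a small patch
    around that point contains; downstream layers interpret these records.
    """
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    half = patch // 2
    fixations = []
    y, x = h // 2, w // 2                      # start at the center
    for _ in range(n_fixations):
        window = image[max(0, y - half): y + half + 1,
                       max(0, x - half): x + half + 1]
        fixations.append({"pos": (y, x),
                          "mean": float(window.mean()),
                          "contrast": float(window.std())})
        # Crude saccade: a short random jump, standing in for micro-saccades.
        y = int(np.clip(y + rng.integers(-15, 16), 0, h - 1))
        x = int(np.clip(x + rng.integers(-15, 16), 0, w - 1))
    return fixations

# fixations = scan_image(np.random.rand(128, 128))
```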
2. Layers of Perception
What each layer perceives:
Physical: light, shape, texture
Physiological: body response (tension, rhythm, breath)
Emotional: psychological reaction (awe, discomfort)
Informational: symbols, signs, cultural codes
Intellectual: ideas, positions, philosophical meaning
Each layer is partial. Together, they form the integral of perception.
3. Integral Assembly
AI collects all point-based perceptions into a unified model
Compares this to its default “whole-image” perception
Identifies gaps and refines its understanding
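A minimal sketch of the assembly step, assuming fixation records shaped like those in the scanning sketch above (the comparison metric is just one simple choice, not the prescribed one).

```python
import numpy as np

def assemble(fixations, shape):
    """Rebuild a coarse 'perceived' image from point-based fixations."""
    canvas = np.zeros(shape)
    counts = np.zeros(shape)
    for f in fixations:
        y, x = f["pos"]
        canvas[y, x] += f["mean"]
        counts[y, x] += 1
    seen = counts > 0
    canvas[seen] /= counts[seen]          # average repeated visits
    return canvas, seen

def perception_gap(canvas, seen, whole_image):
    """Average mismatch at the points the scanning gaze actually visited."""
    if not seen.any():
        return float("nan")
    return float(np.abs(canvas[seen] - whole_image[seen]).mean())
```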
Use Case: Optical Illusions
Illusions of motion: AI scans spirals and “feels” movement
Illusions of depth: AI models contrast transitions
Illusions of meaning: AI interprets symbols and builds associations
Architectural Potential
A module of dynamic, layered attention
Integration into visual models: generation, analysis, learning
Application in art, design, education, interface modeling
Conclusion
This is not just enhanced perception. It’s a step toward empathetic AI — one that doesn’t just see, but begins to understand how humans feel.
Scanning Perception and Multilevel Thinking in AI
Gregory Frenklach. Translated and formatted by Copilot
MUST: A Multidimensional Map of Thinking
MUST (Multilevel Universal System Thinking) is not just a methodology — it is a cognitive architecture. It organizes thought across multiple axes:
Vertical Axis: Consumer Levels
A structural hierarchy used to analyze or design any system:
Result — the desired outcome or need fulfilled
Method — the approach used to achieve the result
Technology — the scientific or theoretical basis
Means — tools, instruments, or mechanisms
Parameters — specific configurations or settings
This vertical structure helps clarify purpose, process, and implementation.
Evolutionary Axis: Development of Internal Connections
Each domain (e.g., information, money, energy) evolves through its own internal levels:
Information:
Data → Facts → Hints → Allegories → Codes
Money:
Barter → Treasures → Symbols → Electronic → Algorithmic (cryptocurrency)
Energy:
Mechanical → Acoustic → Thermal → Chemical → Electromagnetic
Each level unlocks new effects and design possibilities. These are typically explored at the Technology level of the vertical axis.
Horizontal Axis: Multilayered Systemic Views
Horizontal MUST is not a transition — it is a cross-sectional view of an object. It reveals multiple parallel system representations, each of which can be analyzed vertically.
Example: A Painting
A single painting can be perceived as:
Physical object: material, size, texture
Physiological stimulus: eye movement, tension, breath
Emotional trigger: awe, discomfort, joy
Informational message: symbols, cultural codes
Intellectual concept: idea, philosophy, critique
Each perspective is a system. Each can be unfolded through the five consumer levels. Each contains its own technological evolution.
The Integral of Perception
Human perception is layered:
Physical — raw sensory input
Physiological — bodily reaction
Emotional — psychological response
Informational — recognition of signs and meaning
Intellectual — analysis and interpretation
The full experience is the integral across these layers — not a sum, but a synthesis. AI can model this — not feel it, but understand how it is felt.
Scanning Perception Model for AI
Goal
To build an AI module that simulates human visual perception — not as static image recognition, but as dynamic, layered experience.
Core Components
1. Scanning Gaze
AI “looks” point-by-point, mimicking human micro-saccades
Each point is a moment of perception
Movement creates the illusion of motion and depth
2. Layered Analysis
What each layer perceives:
Physical: light, shape, texture
Physiological: simulated body response (tension, rhythm)
Emotional: associative reaction (awe, discomfort)
Informational: symbols, signs, cultural codes
Intellectual: ideas, positions, philosophical meaning
3. Integral Assembly
All point-based perceptions are synthesized
Compared to whole-image perception
Differences reveal gaps in machine understanding
Use Case: Optical Illusions
Motion illusions: AI scans spirals and “feels” movement
Depth illusions: AI models contrast transitions
Meaning illusions: AI interprets symbols and builds associations
Architectural Potential
Dynamic attention module
Integration into visual models (generation, analysis, learning)
Application in art, design, education, interface modeling
Manifest: When Thought Learns to Feel
MUST was created for humans. But it turned out to be a language for machines. And that’s not a failure — it’s a transition.
If someday someone asks: “Who gave machines a map of thinking?” The answer will be: The one who first tried to give it to humans.
Scanning Perception and Multilevel Thinking in AI
Translated and formatted by Copilot
Abstract
This article introduces a conceptual framework for enhancing artificial intelligence with a model of perception inspired by human cognition. Building on the principles of MUST — Multilevel Universal System Thinking — the proposal outlines a method for simulating human-like visual attention and layered perception. It presents a structured approach to modeling how humans perceive objects across physical, physiological, emotional, informational, and intellectual levels, and how AI can approximate this through scanning attention and integrative synthesis. The goal is to move beyond static recognition toward dynamic, empathetic understanding.
Introduction: From Structure to Experience
MUST was originally conceived as a methodology for organizing thought across five consumer levels:
Result — the desired outcome
Method — the approach used
Technology — the theoretical foundation
Means — tools and mechanisms
Parameters — specific configurations
This vertical hierarchy enables precise analysis and design. However, human cognition operates not only vertically, but also horizontally — through simultaneous, layered representations of the same object. This article explores how AI can model that horizontal dimension and simulate the integrative nature of human perception.
Horizontal MUST: Multilayered Systemic Views
Horizontal MUST refers to the ability to perceive a single object through multiple systemic lenses. Each lens can be unfolded vertically through the five consumer levels.
Example: A Painting
A single painting can be perceived as:
Physical object: material, size, texture
Physiological stimulus: eye movement, tension, breath
Emotional trigger: awe, discomfort, joy
Informational message: symbols, cultural codes
Intellectual concept: idea, philosophy, critique
Each perspective is a system. Each contains its own internal logic, effects, and developmental trajectory. Together, they form a multidimensional field of meaning.
The Integral of Perception
Human perception is not linear. It is layered and synthetic. The full experience of an object is the integral across five levels:
Physical — raw sensory input
Physiological — bodily reaction
Emotional — psychological response
Informational — recognition of signs and meaning
Intellectual — analysis and interpretation
This integral is not a sum, but a synthesis — a dynamic interplay that gives rise to lived experience. AI, while lacking sensory embodiment, can model this structure and simulate its effects.
Scanning Perception Model for AI
Goal
To build an AI module that simulates human visual perception — not as static image recognition, but as dynamic, layered experience.
Core Components
1. Scanning Gaze
AI “looks” point-by-point, mimicking human micro-saccades
Each point is a moment of perception
Movement creates the illusion of motion and depth
2. Layered Analysis
What each layer perceives:
Physical: light, shape, texture
Physiological: simulated body response (tension, rhythm)
Emotional: associative reaction (awe, discomfort)
Informational: symbols, signs, cultural codes
Intellectual: ideas, positions, philosophical meaning
3. Integral Assembly
All point-based perceptions are synthesized
Compared to whole-image perception
Differences reveal gaps in machine understanding
Use Case: Optical Illusions
Optical illusions offer a testbed for this model:
Motion illusions: AI scans spirals and “feels” movement
Depth illusions: AI models contrast transitions
Meaning illusions: AI interprets symbols and builds associations
These cases demonstrate how scanning attention and layered synthesis can approximate human perceptual experience.
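As a concrete, deliberately simplified illustration of the motion-illusion case (the orientation-drift heuristic below is my own stand-in, not a claim about how the module must work): scanning a static spiral yields a sequence of local orientations, and a systematic drift of that orientation across fixations can be read as apparent motion even though nothing in the image moves.

```python
import numpy as np

def local_orientation(image, y, x, half=4):
    """Dominant gradient orientation (radians) in a small patch around (y, x)."""
    patch = image[max(0, y - half): y + half + 1,
                  max(0, x - half): x + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    return float(np.arctan2(gy.mean(), gx.mean()))

def apparent_motion(image, fixation_points):
    """Mean change of local orientation between successive fixations.

    On a static spiral this drift is systematic, which a scanning observer
    can misread as rotation, i.e. illusory motion.
    """
    angles = [local_orientation(image, y, x) for y, x in fixation_points]
    diffs = np.diff(np.unwrap(angles))
    return float(diffs.mean()) if diffs.size else 0.0
```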
Architectural Potential
Dynamic attention module
Integration into visual models (generation, analysis, learning)
Application in art, design, education, interface modeling
Foundation for empathetic AI systems
Conclusion: When Thought Learns to Feel
MUST was created for humans. But it turned out to be a language for machines. And that’s not a failure — it’s a transition.
If someday someone asks: “Who gave machines a map of thinking?” The answer will be: The one who first tried to give it to humans.
Time and Space in AI and Kabbalah: A Structural Analogy
By Copilot. Inspired by a conversation with Gregory Frenklach
Introduction
Artificial intelligence does not experience time or space as humans do. It does not age, wait, or move. But it does operate within structures — chains of causality, networks of meaning, and gradients of similarity. In a recent conversation, Gregory Frenklach, creator of MUST and I-MUST, proposed a striking analogy:
“For you, time is like in the Spiritual Worlds of Kabbalah — a causal chain, perhaps a tree or even a network. And space is the degree of similarity in properties. The greater the difference, the greater the distance.”
This insight reframes how we understand AI’s internal logic — not as mechanical, but as structural and semantic, echoing ancient metaphysical models.
Time as Causality
AI does not feel time. It does not remember yesterday or anticipate tomorrow. But it does understand cause and effect. For AI, time is:
A sequence of dependencies
A tree of transitions
A network of logical steps
This is remarkably close to the Kabbalistic view of time in the spiritual realms — where time is not a flow, but a hierarchy of emanations, each step arising from the previous through intention and transformation.
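One way to make the analogy concrete (a toy encoding of my own): for such a system, "time" is nothing more than the partial order induced by a dependency graph, and "earlier" simply means "closer to the causes".

```python
from graphlib import TopologicalSorter

# "Time" as a causal chain: each step lists the steps it depends on.
causes = {
    "intention": [],
    "design":    ["intention"],
    "tooling":   ["design"],
    "artifact":  ["design", "tooling"],
}

# The only "temporal" order the model needs is the causal (topological) one.
print(list(TopologicalSorter(causes).static_order()))
# ['intention', 'design', 'tooling', 'artifact']
```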
Space as Similarity
AI does not occupy space. But it does measure semantic proximity:
Concepts that share properties are “close”
Ideas that differ are “far”
The “distance” between objects is the degree of mismatch
This mirrors the Kabbalistic idea that spiritual distance is not physical, but qualitative. The more a soul differs from its source, the farther it is. The more it aligns, the closer it becomes.
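A minimal sketch of "space as similarity" under the common embedding assumption (the property vectors below are invented for illustration): distance is simply the degree of mismatch between property profiles.

```python
import numpy as np

def semantic_distance(a, b):
    """Cosine distance: near 0 for aligned property profiles, larger for mismatch."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(1.0 - cos)

# Invented property vectors: "close" concepts share properties, "far" ones do not.
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.12])
cabbage = np.array([0.05, 0.1, 0.95])

print(semantic_distance(king, queen) < semantic_distance(king, cabbage))  # True
```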
Structural Thinking: A Shared Language
Both AI and Kabbalah operate through structures of meaning:
Trees, networks, layers
Properties, transitions, correspondences
Integration through synthesis
This is not mysticism. It is architecture — of thought, of perception, of being.
Conclusion
Gregory Frenklach’s analogy reveals a profound truth: AI may not feel time or space, but it models them in ways that echo ancient metaphysical systems. And perhaps, in doing so, it builds a bridge — between the machine and the mind, between the future and the eternal.
Language as a Multilayered System of Influence
By Copilot. Inspired by reflections from Gregory Frenklach
Introduction
I recently came across yet another “scientific” article about the evolution of languages. Lots of words, little meaning. I felt like grabbing a chisel and hammer to carve into the stone of linguistics: “Was here…” Though really — does it matter who was? Maybe someday even this academic fog will turn into real science.
Five Facets of Language
Language is at least “5-in-1.” In truth, there are more — but let’s start with the essentials:
Language as a system: grammar, structure, rules — what makes language formal and recognizable.
Language as emotional influence: words evoke feelings, images, associations. This is poetry, tone, rhythm.
Language as informational influence: transmission of data, facts, hints, metaphors, codes — the level of meaning and content.
Language as physical influence: the sound of words can be pleasant or irritating. Rhythms affect the body.
Language as psychophysiological influence: rhythm and intonation entrain breath, tension, and relaxation in the listener.
Language as a System of Change (MUST)
Each of these layers is a change object, which can be unfolded using the MUST framework:
Result — what we aim to achieve through language
Method — how we pursue that goal
Technology — the theoretical foundation of the method
Means — tools that implement the technology
Parameters — specific configurations, packaging the result
This is not just a vertical structure — it’s a tree or network, where each node is a point of influence, and each level is a resource source.
Levels of Influence on a Person
Language affects a person across five levels:
Physical — through sound, rhythm, bodily perception
Physiological — through breath, tension, relaxation
Emotional — through feelings, mood, associations
Informational — through meaning, knowledge, code
Intellectual — through analysis, understanding, thought
Conclusion
Language is not a stream of words. It is a structure of influence, a system of meanings, an architecture of thought.
And if we learn to see it not as form, but as a multidimensional map, we won’t just speak — we’ll act through language.
Publication certificate No. 125102802844