In The Matrix, the corporation Neo works for is called "Metacortex". The roots of this word are "meta", meaning "going beyond or higher, transcending", and "cortex", the outer layer of the brain. Thus, Metacortex means "transcending the boundaries of the brain", which is what Neo proceeds to do.
Vision for Metacortex Engineering
My vision of Metacortex engineering:
Build intelligent knowledge-based systems using System 1 AI (fast, intuitive reasoning) and System 2 AI (deliberate, high-level reasoning), and then present that knowledge through modern VR/AR visualisation.
Search, or rather "information exploration", should be spatial, preferably in VR (a memory palace; see the Theatre of Giulio Camillo) or AR. This is where the connection with Meta and the work of ex-Facebook engineers comes into play: we need to start expanding on Douglas Engelbart's vision, taking concepts beyond mimicking pen/paper/file/folder and into a new reality, where we can create user experiences like those in the movies "Johnny Mnemonic" and "Minority Report". We could have done this for the last ten years, yet none of the vendors was keen to innovate.
On another front, the boundaries of engineering disciplines (see INCOSE Enterprise Systems Engineering) are blurred, and humans, systems and sensors interact constantly. Such a complex environment requires a systematic approach to the whole stack, where actors/agents such as:
- cybernetic systems
- centralised cloud-based solutions
- decentralised (blockchain-based) solutions
- decentralised sensor-based systems with local decision-making capabilities and actuators
work collaboratively, driven by common goals, ethics and culture, effectively utilising the available resources at every layer of the systems stack.
Do you expect AI to be ethical? What about the ethics of human decision making?
Do you log and audit human decision making in a form understandable for a machine?
Do we log sensor/actuator decisions in a form humans can reason about?
Can we achieve consistent traceability of our decision-making, leveraging our own knowledge and the knowledge of our team, organisation and society?
What about the ethics of creativity? Do we apply the same ethical principles to creativity for AI as we do for humans?
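The questions above about logging and auditing decisions suggest a concrete data structure. Here is a minimal sketch of what a decision record, readable by both humans and machines, might look like; the `DecisionRecord` schema and all field names are invented for illustration, not an existing standard:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable decision, serialisable for machines, legible to humans."""
    actor: str        # the human, service, or sensor that decided
    action: str       # what was decided
    rationale: str    # free-text reasoning a person can review
    inputs: list = field(default_factory=list)  # evidence considered
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# Example: logging a procurement decision for a later ethics audit.
record = DecisionRecord(
    actor="jane.doe",
    action="approve-vendor-contract",
    rationale="Lowest bid that met the security requirements",
    inputs=["bid-A.pdf", "bid-B.pdf", "security-review-42"],
)
log_line = record.to_json()
```

Because every record carries its inputs and rationale, the same log can feed a machine-side consistency check and a human-side review of the reasoning.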
I can give requirements to an AI and build towards those requirements, and the AI will make ethical and culturally acceptable decisions. At the same time, human operators, part of the same (cyber-physical) organisational system, can be driven by greed, fear and anxiety and make consistently unethical decisions. Such decision-making should be prevented by organisational design. We need a common discipline to architect, design, codify and structure AI systems and technical systems, and to join them with [[enterprise engineering]]: people, processes and organisations.
I believe knowledge graphs (AI System 2), classical information retrieval techniques (AI System 1) and modern deep-learning-based techniques (AI System 1) will be the foundation, the [[Cortex]], of a new type of engineering.
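As a toy illustration of how the two systems might cooperate: below, a keyword-overlap retriever plays the fast System 1 role and a small in-memory knowledge graph plays the deliberate System 2 role. The graph contents, entities and scoring are invented for the example, not a proposed design.

```python
from collections import defaultdict

# System 2: a tiny knowledge graph as (subject, relation, object) triples.
triples = [
    ("metacortex", "combines", "knowledge graphs"),
    ("metacortex", "combines", "information retrieval"),
    ("knowledge graphs", "support", "reasoning"),
    ("information retrieval", "supports", "fast lookup"),
]
graph = defaultdict(list)
for s, r, o in triples:
    graph[s].append((r, o))

# System 1: fast, intuitive retrieval by simple keyword overlap.
def retrieve(query, docs):
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

# System 2: deliberate traversal of explicit relations, usable as a justification.
def facts_about(entity):
    return [f"{entity} {r} {o}" for r, o in graph[entity]]

docs = ["metacortex blends reasoning and retrieval",
        "vr interfaces for knowledge exploration"]
best = retrieve("reasoning and retrieval", docs)   # fast System 1 answer
explanation = facts_about("metacortex")            # explicit System 2 facts
```

The point of the sketch is the division of labour: the retriever answers quickly without explaining itself, while the graph can justify an answer with explicit, auditable relations.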
Welcome to MetaCortex.
Use cases for Metacortex Engineering:
We teach ethics to a privacy-preserving AI. It then teaches, or reminds, humans about the ethics of decision-making at opportune moments (for example, as you hover your mouse over the Buy button on Amazon, it asks: are you acting out of greed or fear of missing out?).
Our work on language: we can monitor the language used within an organisation and provide feedback to leaders on what they think (language games) versus what is actually happening.
We can derive a forward-looking language projection and use it as a KPI for digital transformation, measuring the culture of the organisation.
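One naive way to turn organisational language into a trackable KPI is to compare relative word frequencies between periods and flag terms whose usage is rising. The sketch below is a deliberately simple baseline; the corpora, the growth threshold and the `rising_terms` helper are all invented for the example:

```python
from collections import Counter

def rising_terms(old_texts, new_texts, min_growth=2.0):
    """Return words whose relative frequency grew at least min_growth-fold."""
    def freqs(texts):
        counts = Counter(w for t in texts for w in t.lower().split())
        total = sum(counts.values()) or 1
        return {w: c / total for w, c in counts.items()}
    old, new = freqs(old_texts), freqs(new_texts)
    return sorted(w for w in new if w in old and new[w] >= min_growth * old[w])

# Toy example: "platform" talk triples from one quarter to the next.
q1 = ["we ship projects on time", "one platform idea appeared"]
q2 = ["the platform team grows", "platform thinking and platform talk"]
trend = rising_terms(q1, q2)  # ['platform']
```

A real system would need stop-word filtering, lemmatisation and far larger corpora, but even this baseline shows how a cultural shift could be reduced to a number that can be tracked over time.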