For by Him were all things created, that are in heaven, and that are in earth, visible and invisible,...For the invisible things of Him from the creation of the world are clearly seen, being understood by the things that are made, ...so that THEY ARE WITHOUT EXCUSE: Col 1:16 / Rom. 1:20

Friday, November 7, 2025

Who Created the "geometry of human thought" & the "semantic web"?

Q: Who Created the "geometry of human thought" & the "semantic web" in the first place? [This level of sophistication could never arise from time + chance..... let alone interweave itself seamlessly together]
A: But now thus saith the LORD that created thee.... 
Isaiah 43:1

"In a dark MRI scanner outside Tokyo, a volunteer watches a video of someone hurling themselves off a waterfall. Nearby, a computer digests the brain activity pulsing across millions of neurons. A few moments later, the machine produces a sentence: “A person jumps over a deep water fall on a mountain ridge.”

No one typed those words. No one spoke them. They came directly from the volunteer’s brain activity.

That’s the startling premise of “mind captioning,” a new method developed by Tomoyasu Horikawa and colleagues at NTT Communication Science Laboratories in Japan. 
Published this week in Science Advances, the system uses a blend of brain imaging and artificial intelligence to generate textual descriptions of what people are seeing, or even visualizing with their mind’s eye, based only on their neural patterns.
As Nature journalist Max Kozlov put it, the technique “generates descriptive sentences of what a person is seeing or picturing in their mind using a read-out of their brain activity, with impressive accuracy.”

To build the system, Horikawa had to bridge two universes: the intricate geometry of human thought and the sprawling semantic web that language models use to understand words.
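
To make that bridge concrete, here is a minimal sketch of the general idea, not the authors' actual pipeline: assume brain activity can be linearly mapped into a text-embedding space, so a decoded pattern can be matched against candidate captions. All data below is random stand-in data, and the ridge decoder, the dimensions, and the retrieval step are illustrative assumptions.

```python
# A minimal sketch of the "bridging" idea, NOT the study's actual pipeline:
# learn a linear map from synthetic brain-activity patterns into a text-
# embedding space, then identify which candidate caption's embedding best
# matches the decoded vector. All sizes and data are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_voxels, n_embed = 200, 500, 64  # toy sizes, far smaller than real fMRI

# Pretend ground truth: brain activity is a noisy linear image of the
# semantic embedding of whatever the subject is watching.
true_map = rng.normal(size=(n_embed, n_voxels))
train_embeddings = rng.normal(size=(n_train, n_embed))
train_brain = train_embeddings @ true_map + 0.5 * rng.normal(size=(n_train, n_voxels))

# The "bridge": a ridge-regression decoder from voxel space to embedding space.
decoder = Ridge(alpha=1.0).fit(train_brain, train_embeddings)

# Decode a new brain pattern and retrieve the best of 100 candidate captions
# by cosine similarity (nearest-neighbour identification).
candidates = rng.normal(size=(100, n_embed))
true_idx = 42
test_brain = candidates[true_idx] @ true_map + 0.5 * rng.normal(size=n_voxels)
decoded = decoder.predict(test_brain[None, :])[0]

scores = candidates @ decoded / (
    np.linalg.norm(candidates, axis=1) * np.linalg.norm(decoded))
print("identified candidate:", int(np.argmax(scores)))  # 42 when decoding succeeds
```

The real system decodes into the feature space of a deep language model and generates sentences rather than merely retrieving them; the sketch only shows the linear "bridge" that makes a brain-to-language translation possible at all.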

The researchers also made a surprising discovery when they scrambled the word order of the generated captions. 
The quality and accuracy dropped sharply, showing that the AI wasn’t just picking up on keywords but grasping something deeper — perhaps the structure of meaning itself, the relationships between objects, actions, and context.
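
The logic of that scrambling control can be shown with a toy comparison (again an illustration, not the study's actual metric): a representation that ignores word order scores a scrambled caption as identical to the original, while an order-sensitive one sees the match collapse, which is the kind of gap the test probes.

```python
# A toy illustration of the word-order control, not the study's metric:
# compare a caption with a scrambled copy of itself under two text
# representations. Bag-of-words ignores order (similarity stays 1.0);
# bigrams are order-sensitive (similarity drops sharply).
import random

def bag_of_words(text):
    return set(text.lower().split())

def bigrams(text):
    words = text.lower().split()
    return set(zip(words, words[1:]))

def jaccard(a, b):
    return len(a & b) / len(a | b)

caption = "a person jumps over a deep water fall on a mountain ridge"
words = caption.split()
random.seed(1)
random.shuffle(words)
scrambled = " ".join(words)

print("scrambled:", scrambled)
print("bag-of-words similarity:", jaccard(bag_of_words(caption), bag_of_words(scrambled)))
print("bigram similarity:      ", jaccard(bigrams(caption), bigrams(scrambled)))
```

A sharp drop under the order-sensitive measure, with none under the order-blind one, is what indicates the captions carry relational structure rather than just keywords.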

Even when subjects were only imagining the videos, the AI generated accurate sentences describing them, sometimes identifying the right clip out of a hundred. That result hinted at a powerful idea: the brain uses similar representations for seeing and visual recall, and those representations can be translated into language without ever engaging the traditional “language areas” of the brain.

The ethical implications of this technology are hard to ignore. If machines can turn brain activity into words, even imperfectly, who controls that information?" 
ZME Science