
# ConceptGraphEnergyEfficiency

DavidFreely edited this page Nov 8, 2025 · 3 revisions

Your brain uses only about 20 watts of energy, while an AI server farm might use a million times more. How do we account for this enormous difference? And what can the human brain, with its graph-like information structure, teach us about slowing AI's voracious energy appetite?

Here's the important point regarding energy usage: both of these searches scale beautifully, because each one touches only a tiny portion of the graph. All the seed nodes (the initial search inputs) can be activated in parallel. Each node then fans out signals across all of its relationships simultaneously, so overall search time grows linearly with the number of hops and with nothing else. Let me re-emphasize that: it doesn't matter how big the graph is or how many attributes things have; all that matters is the number of hops. And because the graph's data is organized around hierarchy and exceptions, a small number of hops can be guaranteed.
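The hop-bounded fan-out described above can be sketched as a spreading-activation search. This is a minimal illustration, not the actual implementation; the graph, node names, and function are assumptions made for the example. Note that the work done depends only on the nodes actually reached within the hop limit, never on the total size of the graph.

```python
def spreading_activation(graph, seeds, max_hops):
    """Hop-limited fan-out search. Cost is proportional to the nodes
    actually touched, independent of how large the whole graph is."""
    activated = set(seeds)      # all seed nodes "fire" in parallel
    frontier = set(seeds)
    for _ in range(max_hops):   # total time is linear in the hop count
        next_frontier = set()
        for node in frontier:
            for neighbor in graph.get(node, ()):   # fan out along relationships
                if neighbor not in activated:
                    activated.add(neighbor)
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activated

# A tiny hypothetical hierarchy: "canary" is-a "bird" is-a "animal".
graph = {
    "canary": ["bird"],
    "bird": ["animal", "can_fly"],
    "animal": ["living_thing"],
}

# Two hops from "canary" reaches bird, animal, and can_fly; the rest of
# the graph, however large, is never visited.
print(spreading_activation(graph, {"canary"}, max_hops=2))
```

A million extra unrelated nodes in `graph` would not change the cost of this call at all, which is the scaling property the paragraph above describes.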

This stands in stark contrast to today's artificial neural networks (ANNs) and large language models (LLMs), which require vast amounts of computation for every search or inference. Their cost grows with the token count, the data-set size, the model size, and the length of the context window. Why? Consider a simple neural network: it performs the same number of matrix-multiplication operations regardless of how many of its inputs are active. Even though many of those multiplications are by zero, GPUs are so heavily optimized for dense multiplication that multiplying by zero costs just as much as multiplying by any other number. In essence, the ANN searches its entire knowledge base for every input pattern, no matter how simple or complex that pattern is. The graph, by contrast, searches only the significant non-zero inputs, so it does orders of magnitude less computation.
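The dense-versus-sparse cost gap can be made concrete with a small count of multiply operations. This is an illustrative sketch with assumed layer sizes, not a benchmark: a dense matrix-vector product pays for every weight, while an event-driven scheme that skips zero inputs pays only for the active ones.

```python
import numpy as np

n_inputs, n_outputs = 10_000, 1_000
W = np.random.default_rng(0).standard_normal((n_outputs, n_inputs))

x = np.zeros(n_inputs)
x[:5] = 1.0   # only 5 of 10,000 inputs are active

# Dense matmul: every weight is multiplied, zeros included.
dense_multiplies = n_inputs * n_outputs                # 10,000,000
# Event-driven: multiply only columns for non-zero inputs.
sparse_multiplies = np.count_nonzero(x) * n_outputs    # 5,000

print(dense_multiplies, sparse_multiplies)

# Both strategies produce the same output vector.
dense = W @ x
sparse = W[:, x != 0] @ x[x != 0]
assert np.allclose(dense, sparse)
```

Here the event-driven count is 2,000 times smaller, and the gap widens as the network grows while the set of active inputs stays small.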

  • Source: 2025-10-08 Killing AI’s Energy Hog.
