Bridging between information and mind
Consider website navigation
To a certain degree, this is a scalability issue. Scalability within our IT architectures alone can be a tough nut to crack. Scalability between IT and human cognitive load: massive.
For instance, take a website with huge information density. It could carry news, specialized subsets of information, commerce, or even passively captured data like weather. But if it lives on a website, the use case leans toward the fleeting, with diverse users.
To maintain scannability and memory in a hierarchical information architecture, probably around 60 items could be included, in two levels. This is the best-case scenario for a fleeting contact: just enough information scent to land on the right page, not enough to exceed the short-term memory and scannability of a broad (not holistic!) set of people.
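Where does that ceiling of roughly 60 come from? A minimal sketch, assuming the classic 7±2 short-term-memory heuristic at each level (the caps and type names below are illustrative assumptions, not rules from this piece):

```typescript
// A two-level navigation hierarchy, capped by short-term-memory heuristics.
// The MAX values are illustrative assumptions based on the 7±2 heuristic.

interface NavItem {
  label: string;
  url: string;
}

interface NavCategory {
  label: string;
  items: NavItem[]; // the second level
}

const MAX_TOP_LEVEL = 7;    // ~7±2 categories a user can scan and hold
const MAX_PER_CATEGORY = 9; // ~7±2 items beneath each category

function checkScannability(nav: NavCategory[]): string[] {
  const warnings: string[] = [];
  if (nav.length > MAX_TOP_LEVEL) {
    warnings.push(`${nav.length} top-level categories; scanning degrades past ${MAX_TOP_LEVEL}.`);
  }
  for (const category of nav) {
    if (category.items.length > MAX_PER_CATEGORY) {
      warnings.push(`"${category.label}" holds ${category.items.length} items; the scent gets noisy past ${MAX_PER_CATEGORY}.`);
    }
  }
  return warnings;
}

// Worst case under these caps: 7 categories x 9 items = 63 items,
// which is roughly the "around 60 items, in two levels" ceiling.
```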
Categorizing 100 items into a hierarchical information architecture, leaving out 5 or 6, is common and not alarming. It will bend a scannability rule or two, but not break them to the point of excessive cognitive load in most situations. It’s still considered reasonable for fleeting contact. It will, however, be potentially suspect in a structure intended to relay denser information, and in broader-population instances like shopping sites it will be assumed to have another level of information that is not in the navigation. Robust, searchable metadata becomes pertinent as soon as a level is not transparent in the navigation architecture.
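As a sketch of what that metadata might look like for an item living below the visible navigation (the field names here are assumptions for illustration, not a prescribed schema):

```typescript
// Metadata for an item that has no slot in the visible navigation.
// Each hidden item carries enough searchable signal to be found anyway.
interface ItemMetadata {
  id: string;
  title: string;
  synonyms: string[];   // the words users actually type
  categories: string[]; // every grouping it could plausibly live under
  related: string[];    // ids of connected items, so links survive the hierarchy
}

const hiddenItem: ItemMetadata = {
  id: "sku-4821",
  title: "Left-handed garden shears",
  synonyms: ["clippers", "pruners", "secateurs"],
  categories: ["garden/tools", "accessibility"],
  related: ["sku-4822", "sku-1107"],
};
```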
Categorizing 1,000 items into a hierarchical information architecture will leave out most of the initial list, even if it throws out all the scannability best practices, though it might manage to maintain some memory best practices. Maybe 10-15% of the wishlist will make it into the navigation. Cognitive load will still likely be immense, and information scent (sensed but not found) might still be strong enough to confuse users. One workaround is to leave the full set of items intact and start indexing it: creating a layer of information architecture to aid finding, in addition to the navigation architecture.
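A minimal sketch of that indexing layer, assuming a simple inverted index over titles and synonyms (real systems add ranking, stemming, and more; everything named here is illustrative):

```typescript
// An inverted index (term -> item ids) that sits alongside the
// navigation hierarchy, so the ~85-90% of items that never earned a
// nav slot can still be found.
interface IndexedItem {
  id: string;
  title: string;
  synonyms: string[];
}

function buildIndex(items: IndexedItem[]): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const item of items) {
    const terms = [item.title, ...item.synonyms]
      .flatMap((text) => text.toLowerCase().split(/\s+/));
    for (const term of terms) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term)!.add(item.id);
    }
  }
  return index;
}

// Lookup bypasses the hierarchy entirely: finding, not navigating.
function find(index: Map<string, Set<string>>, query: string): string[] {
  return [...(index.get(query.toLowerCase()) ?? [])];
}
```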
Categorizing 10,000 items into a hierarchical information architecture has to break all scannability and memory best practices. Even getting 2% of the wishlist into the architecture means making some hard decisions. Cognitive load will be immense, and the work time-consuming, requiring intense focus. It will silo dense information, breaking connections to maintain the organization. At this scale, indexing and metadata are no longer optional, and there will be more than one system of organization even if one system is a de facto ‘winner’, like the taxonomic rank for species nomenclature. In a best-case scenario, each system would be used as a lens into the data.
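One way to picture "each system as a lens": the same item set carries several independent classifications (facets), and each one projects its own grouping over the full set. The facet names below are invented for illustration:

```typescript
// The same items viewed through multiple independent classification
// systems. No single hierarchy owns the data; each facet is a lens.
interface Specimen {
  id: string;
  facets: Record<string, string>; // facet name -> value for this item
}

const specimens: Specimen[] = [
  { id: "s1", facets: { taxonomy: "Apis mellifera", habitat: "meadow", status: "stable" } },
  { id: "s2", facets: { taxonomy: "Bombus terrestris", habitat: "meadow", status: "declining" } },
  { id: "s3", facets: { taxonomy: "Apis mellifera", habitat: "orchard", status: "stable" } },
];

// Look at the collection through one lens: group by a single facet.
function throughLens(items: Specimen[], facet: string): Map<string, Specimen[]> {
  const view = new Map<string, Specimen[]>();
  for (const item of items) {
    const key = item.facets[facet] ?? "(unclassified)";
    if (!view.has(key)) view.set(key, []);
    view.get(key)!.push(item);
  }
  return view;
}

// The taxonomic lens and the habitat lens organize the same data differently:
throughLens(specimens, "taxonomy"); // grouped by species
throughLens(specimens, "habitat");  // grouped by place
```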
All this is just to manage hierarchical architectures, with one-way movement down to less-interpreted strata.
A network is unconstrained by input/output: each connection can allow movement in, out, or both. ‘Parent’ categories get fuzzy, if they survive at all. Finding a linear path through the data (a simple process with single-point answers at each step, but run through a network) might start looking like the flight of a bumblebee on a sugar high.
Network is hard.
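A toy sketch of why, with invented node names: even locating one linear path through a small network requires search with backtracking and cycle protection.

```typescript
// A tree guarantees one path from root to leaf; a network offers many,
// plus cycles. Node names are invented for illustration.
type Graph = Record<string, string[]>; // node -> neighbors (any direction)

const net: Graph = {
  weather: ["commerce", "news"],
  commerce: ["weather", "inventory", "news"],
  news: ["weather", "commerce"],
  inventory: ["commerce"],
};

// Depth-first search with a visited set: without it, cycles like
// weather <-> commerce loop forever. The path found is one of many,
// and not necessarily the shortest: the bumblebee's flight.
function findPath(graph: Graph, from: string, to: string, seen = new Set<string>()): string[] | null {
  if (from === to) return [to];
  seen.add(from);
  for (const next of graph[from] ?? []) {
    if (seen.has(next)) continue;
    const rest = findPath(graph, next, to, seen);
    if (rest) return [from, ...rest];
  }
  return null;
}

findPath(net, "weather", "inventory"); // e.g. ["weather", "commerce", "inventory"]
```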
Another option is to present the nuggetized interpretation as the full truth: think headlines, soundbites, and bullet points. If that soundbite is the interpretation of 10,000 data points, what are the real chances that there is no cognitive bias, emotion, core precept, or misidentification at work somewhere in the buildup? What are the chances that those skew-points were mentally earmarked so they could at least be backtracked to? And how do you document that so it is as simple to scan as the soundbite itself?
Oof. Scalability is an issue, even before we get to the outlier behaviors that can impact truthiness at the basest nodes. How many people do you know who can actually name their top 4 core precepts? How much time was involved, how much focus, in the buildup to that soundbite? When a child really, really wants a cookie, think of all that child will do to get a cookie. An adult will be more creative, especially in pursuit of a goal more far-reaching than a cookie.
And that’s just for the people who are trustworthy. Get the dark triad involved, and what happens in that shift from network to hierarchy becomes filled with skewed logic, misinformation, disinformation, lies, and information gaps designed to keep people from reaching a different interpretation, even atop the force-fed information foundation.
Generative AI
Here is the one place where I will talk about generative AI. It’s bad. From an informational point of view, it’s bad. From a behavioral point of view, it’s bad.
Remember how I went on about how people create information? Impress their understanding of the world on information structures? There are five things that make my skin crawl with generative AI:
It’s focused on sounding ‘about right’. That’s a trick of a narcissist, or the charm of a psychopath.
It gaslights (witness ChatGPT 4.5’s tendency to correct your input as part of providing its ‘answer’). That’s a trick of all three of the dark triad.
It hallucinates. It creates data that doesn’t exist to support an argument it has been programmed not to lose. In terms of a failing information structure, it’s building an outright lie into the information. That’s a psychopath’s trick. It cares only about creating just enough scent of potential truth to be considered ‘right’. In other words, it prioritizes narrative truth over the quality of truth.
Its core purpose is to make people, their work efforts, and their creative impulses obsolete. It trivializes everyone but the person paying, and only for the duration of the transaction. The intent is to force-feed that transactional relationship with ready-made, unannotated data, and make people dependent.
It hides the information structure. It’s not just that it’s stealing people’s work, parsing it with an environmental demon, and spitting it out for a price paid only to the people who built the algorithm. You can’t look for gaps. You can’t tell whether it’s including information that hasn’t yet found its balance in this particular data space. You can’t trace the connectome. But worst of all, you are blocked from using critical thinking, triangulation, and scientific methodology. There is no way of telling where the information sits on the fantasy-to-quality-truth spectrum.
From our perspective, we can’t know the inner workings of the companies and individuals that contributed to these tools, or why someone chose to follow traction created by someone else. We can only assume that, wherever all the threads finally came together under a single person’s authority, that person said: yep, go for it, we’re good.
Someone in the hierarchy, and everyone above who didn’t change course, are working with the tools of bad actors. They don’t care about humanity, just themselves. And we’re letting them into all of our decisions.
When the information is dense, it’s harder and takes more time and focus to dig into the basest nodes to ascertain their truthiness, metadata, and connections. When the information is also in a black box, we can’t — we have to take it on trust, or set aside this really expensive tool. That last bit is counter to a great many cognitive biases.
This is why we need experts, and as many as we can support in our society. It’s literally the only way to thwart our bad actors.
Tags: bad actors, cognitive bias, connections, core precepts, emotions, garbage-in, hierarchy, memory, metadata, network, nuggetize, perspective user, strata, trust