A Tiny iim for Idea-Emergence: The Fight Against LLMs

Ecclesiastes 1:9: “That which has been is that which shall be; and that which has been done is that which shall be done: and there is no new thing under the sun.”

With everyone releasing Large Language Models (LLMs) this week, I felt a bit … left behind. After some jealous appreciation of this historically significant moment, I wondered - could I apply the same principles that are expected to elevate AI to Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) status to idea-emergence in my own PKM?

If you can’t beat them, join them!

As a quick reminder, without nerding out: in essence, the value of an LLM revolves around how well it calculates the best next outputs through an iterative series of transformations and probability calculations.
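To make that "best next output" idea concrete, here is a minimal toy sketch of next-token sampling. The word pairs and probabilities are invented for illustration - a real LLM learns billions of such weights from text rather than reading them from a hand-written table:

```python
import random

# Toy "model": each word maps to candidate next words with probabilities.
# These numbers are made up purely for illustration.
bigram_probs = {
    "deep": {"work": 0.7, "focus": 0.3},
    "work": {"requires": 0.6, "sessions": 0.4},
}

def next_token(word):
    """Sample the next token in proportion to its probability."""
    candidates = bigram_probs[word]
    tokens = list(candidates)
    weights = [candidates[t] for t in tokens]
    return random.choices(tokens, weights=weights)[0]

print(next_token("deep"))  # usually "work", sometimes "focus"
```

The whole trick of an LLM is doing something like this, but with context windows of thousands of tokens and probabilities computed by transformer layers instead of a lookup table.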

In my case though, this needs to be applied to existing ideas or concepts, in the form of humble evergreen or atomic notes, and obviously directed at a far smaller scale than a Large Language Model process.

While I’m feeling cheekily combative, I just cannot compete with those spectacular L’s, so let me stick to i-i-m and call it the interconnected idea model.

Now, let’s say you have some atomic notes or ideas in your personal knowledge management (PKM) system that you are focusing on to further boost your productivity:

  1. Mindfulness
  2. Productivity
  3. Flow state
  4. Pomodoro Technique
  5. Deep Work

Here’s how you could apply the interconnected idea model to these existing notes:

  1. Tokenisation: Break down each note into its core components. For example, “Mindfulness” could be tokenised into “present moment,” “awareness,” “non-judgmental,” etc.
  2. Initial Embeddings: For each token, list out its key attributes, related concepts, or personal associations. For example, the token “present moment” could have initial embeddings like “focus,” “attention,” “here and now,” “meditation,” etc.
  3. Self-Attention: Consider each token in the context of the others. Are there any natural connections or relationships between them? For instance, you might notice that “focus” and “attention” (from the “present moment” token) are closely related to the concept of “Flow state.” Mark these connections.
  4. Transformer Layers: Repeat the self-attention process multiple times, each time considering the tokens from different angles or perspectives. For example, you could look at them through the lens of “Productivity” and ask yourself how each token might contribute to or hinder productivity. In other words, keep twisting and turning your ideas until you’ve examined them from every possible angle.
  5. Embedding Matrix: Create a visual representation or matrix of your tokens and the connections you’ve identified. Yes - this is your Obsidian graph, a mind map or even a physical board with sticky notes. Try to arrange the tokens in a way that highlights the strongest connections. Don’t lose focus, remember that the AI is going to do this in milliseconds!
  6. Idea Generation: Examine your embedding matrix and look for areas with many intersecting connections or areas that seem underdeveloped. For example, you might notice that “Deep Work” is strongly connected to “Flow state” and “Pomodoro Technique,” but lacks a clear connection to “Mindfulness.” This could be an opportunity to explore how mindfulness practices might facilitate deep work.
  7. Apply your Inner AI: Let’s say you decide to explore the connection between “Mindfulness” and “Deep Work.” You could generate new ideas by:
    • Considering analogies (e.g., “Just as a sailor needs to be aware of the wind and currents, a knowledge worker needs to be mindful of distractions and focus.”)
    • Introducing contrasting ideas (e.g., “While deep work requires prolonged focus, mindfulness emphasises moment-to-moment awareness. How can these seemingly opposing concepts complement each other?”)
    • Conducting thought experiments (e.g., “Imagine a scenario where you incorporate a brief mindfulness exercise before starting a deep work session. How might that affect your ability to concentrate and enter a flow state?”)
  8. Iterative Process: Incorporate any new ideas or connections you’ve generated into your existing pool of atomic notes or concepts, and repeat the process as needed.
  9. Relief: A sigh of relief, since this is more relevant to your life than the mass of half-truths feeding that Frankenstein LLM!
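The steps above can be sketched in a few lines of code. This is a minimal, illustrative model - the note names and their attribute sets are assumptions standing in for your real atomic notes, "self-attention" is reduced to counting shared attributes, and "idea generation" to flagging note pairs with no overlap at all:

```python
# Tokenisation + initial embeddings: each note reduced to a set of attributes.
# (Illustrative data only - substitute your own notes and tokens.)
notes = {
    "Mindfulness": {"present moment", "awareness", "non-judgmental"},
    "Flow state": {"focus", "awareness", "immersion"},
    "Deep Work": {"focus", "concentration", "distraction-free"},
    "Pomodoro Technique": {"focus", "time-boxing", "breaks"},
}

# "Self-attention": score every pair of notes by how many attributes they share.
def attention(a, b):
    return len(notes[a] & notes[b])

pairs = [(a, b) for a in notes for b in notes if a < b]
scores = {(a, b): attention(a, b) for a, b in pairs}

# "Idea generation": zero-overlap pairs are the underdeveloped connections
# worth exploring next (e.g. Mindfulness and Deep Work).
gaps = [pair for pair, score in scores.items() if score == 0]
for a, b in sorted(gaps):
    print(f"Unexplored connection: {a} <-> {b}")
```

Of course, the real work - the analogies, contrasts, and thought experiments of step 7 - still happens in your head; the code only points you at where to aim them.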

And there you have it: the interconnected idea model, a feeble attempt by a mere human to harness the power of AI for personal knowledge management. It may not lead to artificial superintelligence, but at least it might spark a few good ideas!


I think the overall idea is intriguing, yet I must disagree. LLMs need so much data and so many parameters to reach human-level writing ability because of the fundamental simplicity of the next-token prediction problem. I would focus on what LLMs are not good at, which for the time being includes backtracking, planning, exploration, and few-shot learning.