AI Brainstorming Assistant

Mergeflow’s AI Brainstorming Assistant is like a thinking partner for tech discovery and exploration. The goal is not to give you complete answers, but to help you ask better questions — or to help you write better queries in Mergeflow.

Basically, the assistant helps you iteratively brainstorm a topic and surface possibilities that you can then refine, extend, or refute, in a way that's orders of magnitude faster and more affordable than hiring consultants or running endless brainstorming sessions.

You can use the AI Brainstorming Assistant for “basic” things like suggesting synonyms for your query. But it can also help you make abstract topics more concrete, bringing them from “buzzword level” to “implementation level”. Think “Industry 5.0”, for example.

The AI Brainstorming Assistant may suggest things that you already know, or, of course, things that you know much better than our AI. If this happens, treat it like a checklist: pilots know how to fly, yet they use checklists to help them avoid potentially very dangerous mistakes (see also Atul Gawande's book, The Checklist Manifesto: How to Get Things Right).

The AI Brainstorming Assistant has five components:

  • Suggest synonyms for your query
  • Write a description of your query
  • Suggest key performance indicators that you can use to track the progress of a technology
  • List technologies that you may need to build what your query describes
  • Suggest reasons why a technology may not work (a Devil’s Advocate or “10th Man Rule” model)

Below are more details on each component.

Synonyms

This is a simple model that generates synonyms for your query. You can add them to your search via copy+paste, or add them all at once by clicking the “+” icon:

Description

A model that writes a short description of your query. You can copy the description by clicking the “copy” icon and then paste it wherever you need it (in a presentation on your topic, for example):

If the description contains a word or phrase that you want to search for in Mergeflow, simply select the text and then use the "Search in Mergeflow" widget:

Key performance indicators

How can you measure the progress of a technology? This is where the key performance indicators model comes in.

The idea for this comes from technology roadmapping (for details, please see the book Technology Roadmapping and Development, by Olivier De Weck). A technology roadmap basically shows you (1) the current status of a technology; (2) the future status you want to achieve; and (3) where you are on the way from current to desired future status. And this is where you need key performance indicators, or figures of merit.

Key performance indicators, or figures of merit, are basically what you need to improve in order to make a technology better.
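
To make the roadmap idea a bit more tangible, here is a minimal Python sketch. It is purely illustrative and not a Mergeflow feature; the names and numbers are made up. It simply expresses the three roadmap ingredients as data: where a figure of merit is today, where you want it to be, and how far along you are.

    from dataclasses import dataclass

    # Purely illustrative sketch; names and numbers are made up.
    @dataclass
    class RoadmapEntry:
        figure_of_merit: str    # e.g. "energy density"
        unit: str               # e.g. "Wh/kg"
        baseline_value: float   # where the technology started
        current_value: float    # (1) current status
        target_value: float     # (2) desired future status

        def progress(self) -> float:
            """(3) Where you are on the way from baseline to target, as a fraction."""
            return (self.current_value - self.baseline_value) / (
                self.target_value - self.baseline_value
            )

    entry = RoadmapEntry("energy density", "Wh/kg",
                         baseline_value=250, current_value=300, target_value=500)
    print(entry.progress())  # 0.2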

For a query you enter, the AI Brainstorming Assistant suggests key performance indicators you could consider. The goal here is not to be comprehensive, and some of the suggestions the assistant makes may not apply to your situation, or may even be factually wrong. But even if that’s the case, we think that the model can accelerate or expand your thinking process.

Here is an example of model-generated KPIs for “solid-state batteries”:

One way you could use these KPIs is to add them to your query. For example, searching for “solid-state batteries AND energy density”…

…would zoom in on findings that explicitly talk about energy density in the context of solid-state batteries. This helps you discover content that doesn’t just mention your topic in general, but specifically focuses on a KPI you have in mind.
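
If you want to work through several KPIs systematically, you could also script this step. Here is a small Python sketch; it is not part of Mergeflow, and apart from “energy density” the KPI list is just an assumed example. It builds one query per KPI, which you can then paste into Mergeflow.

    topic = "solid-state batteries"

    # Example KPI list; in practice you would copy these from the
    # AI Brainstorming Assistant's suggestions.
    kpis = ["energy density", "cycle life", "charging time"]

    # Build one boolean query per KPI, in the same style as above.
    queries = [f"{topic} AND {kpi}" for kpi in kpis]

    for query in queries:
        print(query)
    # solid-state batteries AND energy density
    # solid-state batteries AND cycle life
    # solid-state batteries AND charging time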

Technologies you may need to build this

The “technologies you may need…” model is probably the closest equivalent to a checklist. The idea is to help you see your topic in a broader context.

For example, if you’re interested in “hydrogen-powered aircraft”, the AI Brainstorming Assistant comes up with results like this:

If you want to explore these technologies one by one, you can copy+paste them somewhere first (a notepad, for example):

Reasons why it may not work — Mergeflow’s “10th Man Rule” model

The “reasons why it may not work” model is our version of Devil’s Advocate or 10th Man Rule. The idea behind the 10th Man Rule is that you should have someone on your team who acts as a loyal dissenter and challenges your assumptions.

As with all the other models, the output of the “10th Man Rule” model should be treated as suggestions. Not everything may apply to your specific situation, and some of it may even be incorrect. The idea is not to tell you what to think, but to trigger a thought process.

If, for example, you’re interested in “predictive maintenance”, the 10th Man Rule model might suggest the following:

You can use this response to inspire searches for content where these problems are addressed explicitly. The idea, or the hope, would be that this content doesn’t just describe these issues but also suggests solutions to them.

For instance, in our “predictive maintenance” example, you could search for “predictive maintenance AND maintenance schedule” if you wanted to explore the “time constraints” aspect:

In this case, your findings would include a research paper (https://arxiv.org/abs/2002.08224) that describes a system designed to address the scheduling problem.

Current limitations of the AI Brainstorming Assistant

AI continues to be an area of rapid research and development. This means that current AI systems have various limitations. These limitations can be due to biased training data, poor quality training data, system architecture issues, or other causes.

While we use purpose-built models as described above, the underlying language models (also called “foundation models”) are trained on content up until ca. 2021. This means that the system does not know about more recent events or developments and may output inaccurate or outdated results.

Furthermore, the AI Brainstorming Assistant is probabilistic, not deterministic. This means that for any one query, it can produce slightly different responses. But you can use this to your advantage: If you don’t like what the AI Brainstorming Assistant produces, simply run it again. It might produce different, hopefully more useful, content next time.

When using the AI Brainstorming Assistant, you should also keep in mind that the underlying language models are statistical models. They work by estimating the likelihood of certain combinations of words (or chunks of words, to be precise) given a specific context. But they do not “know” things and cannot plausibility-check themselves.
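
As a toy illustration of what “estimating the likelihood” means (and of why re-running the assistant can produce different responses), here is a small Python sketch. The candidate words and their scores are made up, and real language models do this over tens of thousands of tokens at every step.

    import math
    import random

    # A language model assigns scores ("logits") to possible next tokens,
    # turns them into probabilities, and then samples one token.
    context = "Solid-state batteries promise higher"
    logits = {"energy": 2.0, "safety": 1.2, "costs": 0.3}  # made-up scores

    # Softmax: convert the scores into a probability distribution.
    total = sum(math.exp(score) for score in logits.values())
    probabilities = {token: math.exp(score) / total for token, score in logits.items()}
    print(probabilities)  # roughly {'energy': 0.61, 'safety': 0.28, 'costs': 0.11}

    # Sampling is random, so repeated runs can pick different tokens.
    # This is one reason why the assistant's output can vary between runs.
    for _ in range(3):
        next_token = random.choices(list(probabilities), weights=list(probabilities.values()))[0]
        print(context, next_token)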

If you’d like to dive deeper into this, there is a paper by Murray Shanahan that discusses these and other issues related to large language models in a very accessible way:

Murray Shanahan (2023). Talking About Large Language Models. Available at https://arxiv.org/abs/2212.03551.

References

Olivier De Weck (2022). Technology Roadmapping and Development: A Quantitative Approach to the Management of Technology. See also https://roadmaps.mit.edu/, a collection of (manually created) roadmaps for various technologies.

Atul Gawande (2009). The Checklist Manifesto: How to Get Things Right.

Murray Shanahan (2023). Talking About Large Language Models. Available at https://arxiv.org/abs/2212.03551.
