AI in R&D: Reliable problem solving?

R&D leaders strive to mitigate the factors that limit effectiveness and productivity. Time and money are often wasted re-learning things because teams don't start new projects from the firm foundations of what has gone before. Organisational knowledge is easily lost through staff turnover and poor record keeping. Data systems record what works, but recording what didn't is more haphazard. And delivering low-risk new variants is often incentivised over genuine innovation.

Developing a bespoke AI model, trained on the company's own data and learnings, to provide a robust institutional memory and creativity tool sounds like a promising solution. However, unlocking this potential will involve a lot more than simply buying an enterprise LLM licence. It will be essential to train the model on the right data and to give people the training and context they need to get the best out of the new tools. Simply feeding in every scrap of information and hoping for the best is unlikely to yield good results.

The necessity of providing the right data and critically assessing the output was brought home to me recently, when I spent some time testing the ability of LLMs to solve cryptic crossword clues as a proxy for solving technical problems. It highlighted potential pitfalls in deploying AI, and the need to be vigilant about both what goes in and what comes out.

I found that the models were very good at some types of problem but struggled with others. Anagrams generally posed little difficulty, and these can be seen as a proxy for new variants. Even so, the models were not 100% accurate: on occasion they didn't recognise a valid combination of letters.

Success with clues that required more wordplay or lateral thinking was more varied. Alongside the successes, there were several types of error:

- Offering a completely made-up word as a solution

- Offering an incorrect solution with an explanation that only fits parts of the clue

- Fixating on an answer and arguing that there was a mistake in the clue when it didn't fit

Seeing how the models approached these problems was a reminder that an LLM cannot "see" the answer to a puzzle. It works with tokens and probabilities, not logic, and it can be persuasive in pushing incorrect but plausible answers.

It's well known that AI can make mistakes, and these examples highlighted the need to evaluate proposed solutions carefully. Nonetheless, being able to produce a list of plausible, context-aware solutions drawn from accumulated knowledge will speed up the R&D process and allow greater focus on selecting and further developing the best ideas.
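Some of the failure modes above also lend themselves to simple automated screening before a human ever reviews the output. As a minimal sketch in Python (the word-list location and the example clue are illustrative assumptions on my part, not part of the crossword test itself), a proposed anagram answer can be checked for two of the errors I saw: letters that don't match the clue, and words that don't exist.

```python
from collections import Counter

# Illustrative sketch only: screen an LLM-proposed anagram answer for obvious errors.
# The dictionary path and the example below are assumptions for demonstration purposes.

def load_word_list(path="/usr/share/dict/words"):
    """Load a reference dictionary; any trusted word list will do."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_valid_anagram(proposed, clue_letters, word_list):
    """Return True only if the proposed answer uses exactly the clue's letters
    and appears in the word list - catching made-up words and letter mismatches."""
    proposed = proposed.lower().replace(" ", "")
    clue_letters = clue_letters.lower().replace(" ", "")
    return Counter(proposed) == Counter(clue_letters) and proposed in word_list

if __name__ == "__main__":
    words = load_word_list()
    # Hypothetical example: candidates proposed as anagrams of "silent".
    for candidate in ["listen", "listens"]:
        print(candidate, "->", is_valid_anagram(candidate, "silent", words))
```

Cheap checks like this cannot judge whether an answer truly fits the definition part of a clue, but they filter out impossible suggestions so that human attention goes to the genuinely plausible ones.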

AI models have the capacity to help capture implicit knowledge and provide a smart, comprehensive memory of what the company knows and has tried before. The prerequisites are making sure the data is available in the right, structured form, and training teams to use these tools. Are you ready?

Next: Making the most of AI Pt 2.