
Generative AI megatrends: Generative AI for enterprise is proven vs generative AI for consumer is not – Part two

  • ajitjaokar 

In part one of this blog, we saw that there is an increasingly strong case for the enterprise chatbot use case.

In part two, we ask the question:

Could a consumer chatbot, i.e. a directly customer-facing chatbot, be a flawed use case for an LLM?

The consumer (customer-facing) chatbot is a familiar use case, and people are attached to it because they know it from personal experience.

Some companies also see the consumer AI use case as a way to replace staff.

Recently, the National Eating Disorders Association (NEDA) removed the chatbot from its help hotline over concerns that it was providing harmful advice about eating disorders. The chatbot, named Tessa, recommended weight loss, calorie counting, and body-fat measurement, all of which could exacerbate eating disorders. NEDA reportedly intended to use Tessa to replace six paid employees and a volunteer staff of about 200 people, who fielded nearly 70,000 calls last year.

Why does generative AI hallucinate?

Recently, Yann LeCun compared generative AI to the game of telephone: each person whispers a message to the next, and when they make small mistakes, those mistakes get amplified down the line, producing a completely different message at the end. From a technical perspective, this problem is hard to fix.
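The compounding effect in the telephone-game analogy is easy to see with a toy simulation. This is purely illustrative (the message, hop count, and error rate below are made up, not anything from LeCun's talk): even a small per-step error rate compounds quickly over many steps.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def telephone(message: str, hops: int, error_rate: float, rng: random.Random) -> str:
    """Pass a message down a chain of 'whisperers'. At each hop, every
    character may independently be corrupted (replaced with a random
    character, possibly the same one) with probability error_rate."""
    for _ in range(hops):
        message = "".join(
            rng.choice(ALPHABET) if rng.random() < error_rate else ch
            for ch in message
        )
    return message

def expected_survival(hops: int, error_rate: float) -> float:
    """Probability that a single character is never corrupted after all hops."""
    return (1 - error_rate) ** hops

rng = random.Random(42)
original = "the quick brown fox"
garbled = telephone(original, hops=10, error_rate=0.05, rng=rng)
print(original)
print(garbled)
print(f"per-character survival after 10 hops: {expected_survival(10, 0.05):.2f}")
```

Even at a 5% per-step error rate, a character has only about a 60% chance of surviving ten hops untouched, which is the intuition behind small mistakes getting amplified down the line.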

But are we addressing the right problem?

By this I mean that if we get rid of hallucination, we also get rid of the creativity and the potential for new ideas – making the agent useless.

Grounding knowledge is useful, but there will need to be a trade-off: if you ground too much, you lose any real reason to use generative AI in the first place – you may as well write a SQL query :). Hallucination is a loaded word. People who hallucinate also ideate; kill hallucination in AI and you kill all creativity with it. That is what makes the entity unique. More broadly, considering Donald Rumsfeld's 'known unknowns' idea, there is value in addressing the unknown-unknowns use case: https://lnkd.in/eHn_jA-2

To conclude, it is hard to see how present generative AI technology can be prevented from hallucinating. For a consumer-facing use case, even one mistake is enough to damage a reputation. In an enterprise, however, as an assistant to a human expert, the chatbot use case is already proven.

Image source: https://pixabay.com/photos/sisters-secret-whisper-kids-6274746/

Yann LeCun video https://www.youtube.com/watch?v=vyqXLJsmsrk